Blockchain Framework for Artificial Intelligence Computation

Jie You 1,2,*

1 Dasudian Technologies Ltd., Shenzhen, 518057, China

2 Institute of Computer Engineering, Heidelberg University, Heidelberg, 69117, Germany

* barco@

Abstract

Blockchain is essentially a distributed database recording all transactions or digital events among participating parties. Each transaction in the records is approved and verified by consensus of the participants in the system, which requires solving a hard mathematical puzzle, known as proof-of-work. To make the approved records immutable, the mathematical puzzle is not trivial to solve and therefore consumes substantial computing resources. However, it is energy-wasteful to have many computational nodes installed in the blockchain competing to approve the records by just solving a meaningless puzzle. Here, we pose proof-of-work as a reinforcement-learning problem by modeling the blockchain's growth as a Markov decision process, in which a learning agent makes an optimal decision over the environment's state, whereby a new block is added and verified. Specifically, we design the block verification and consensus mechanism as a deep reinforcement-learning iteration process. As a result, our method exploits the determinism of state transitions and the randomness of action selection in a Markov decision process, together with the computational complexity of a deep neural network, to make the blocks hard to recompute and to preserve the order of transactions, while the blockchain nodes are exploited to train the same deep neural network with different data samples (state-action pairs) in parallel, allowing the model to experience multiple episodes across computing nodes at one time.

Our method is used to design the next generation of public blockchain networks, which has the potential not only to spare computational resources for industrial applications but also to encourage data sharing and AI model design for common problems.

Introduction

Since the appearance of Bitcoin [1], blockchain technologies have brought about disruptions to traditional business processes [2-4], have been used for industrial advances [5-11], and have even triggered innovations in biotech and medical applications [12-16].

Blockchain seeks to minimize the role of trust in achieving consensus [2]. Different consensus mechanisms exist [17], of which the most well-known is proof-of-work, which requires completing a complicated computational process, such as finding hashes with specific patterns. This consensus algorithm disincentivizes misbehavior by making it costly for any agent to alter the state, so there is no need for trust in any particular central entity. Although there are other mechanisms for achieving consensus, proof-of-work is simultaneously self-sufficient and rent-free [18].

Proof-of-work systems have several major benefits. First, they are an excellent way to deter spammers. In addition, proof-of-work systems can be used to provide security to an entire network. If enough nodes (computers or dedicated mining machines) compete to find a specific solution, then the computational power needed to overpower and manipulate a network becomes unattainable for any single bad actor or even a single group of bad actors.

However, there is a primary disadvantage to proof-of-work systems. They consume a large amount of computing power and waste energy, as additional electricity is used for computers to perform extra computational work. This can add up to an extremely large amount of excess electricity consumption and environmental detriment [19-21].

Machine-learning technology has been powering many aspects of modern society, from web searches to content filtering on social networks to recommendations on e-commerce websites, and it is increasingly present in consumer products such as cameras and smartphones. Machine-learning systems are used to identify objects in images [22], transcribe speech into text [23], match news items, posts or products with users' interests, and select relevant search results. Particularly with the boom in digital data on the Internet, deep learning, as a representation-learning method, has shown great power in driving myriad intelligent applications and will have many more successes in the near future [24]. Because it requires very little engineering by hand, deep learning can easily take advantage of increases in the amount of available computation and data [24].

As one branch of machine-learning technology, reinforcement learning is the task of learning what actions to take, given a certain situation or environment, to maximize a reward signal. In contrast to deep learning, which is a supervised process, reinforcement learning uses the reward signal to determine whether the action (or input) that the agent takes is good or bad. Reinforcement learning has inspired research in both artificial and biological intelligence [25,26] and has been widely used in dynamic task scheduling [27] and planning and cognitive control [28], and more interesting topics are under active research [29].

To use machine learning in practical scenarios, plenty of computational power is generally required to support so-called artificial intelligence (AI) model training and execution at different scales, according to the complexity of the models and the amount of data to be processed. For instance, GPT-3 [30] and Switch Transformers [31] have shown that AI model performance scales as a power law of model size, dataset size and amount of computation. The cost of AI is increasing exponentially as larger model sizes and more data are needed to achieve the desired targets. In general, when AI models and their training datasets are large enough, the models need to be trained for more than a few epochs to learn fully from the data and generalize well; therefore, both the hardware cost and the time cost are high for well-performing AI applications.

On the one hand, blockchain systems waste a large amount of computational power solving meaningless puzzles for proof-of-work; on the other hand, many useful AI applications require substantial computing capacity to achieve high performance. To balance these two aspects, in this paper we present a blockchain model that combines the computation for proof-of-work and for artificial intelligence model learning into one process, achieving the consensus mechanism of a blockchain and artificial intelligence computation simultaneously and in an efficient way.

The blockchain model

In this paper, we model the blockchain system as an agent of reinforcement learning. As depicted in Fig. 1, every block represents a state of a Markov state machine, while the creation and linking of blocks is a Markov decision process (MDP) [29], with the following setup:

The environment is defined as an oracle in this blockchain system, which provides the data to the blockchain via its state transitions (S_t → S_{t+1}).

In the present state (S_t), the agent chooses an action (A_t) according to the current policy (π_t) and receives a reward (R_{t+1}) from the environment, while the state of the environment transforms from S_t to S_{t+1}. Afterwards, the nodes of the blockchain train the policy model and update it from π_t to π_{t+1}, which is stored in the memory of the computing nodes as the function for choosing the next action given the next state. The computation that occurs in this process is defined as the proof-of-work for the computing nodes, which compete to carry it out in the blockchain system.

Computing nodes of the system create a new block, recording the current state of the environment (S_{t+1}), the last chosen action (A_t), the reward (R_{t+1}) received from the environment, the data (D_{t+1}) to be written onto the blockchain for a transaction, and the hash value of the last block (h_{t+1} = Hash(S_t, A_{t-1}, R_t, D_t, h_t)), as shown in Fig. 2. When a node finishes the computation of proof-of-work and creates a new block, a mining process is said to be completed.

When a mining process completes, the newly created block is linked to the previous block by the hash value of the previous block (Fig. 2).

Figure 1  The blockchain model based on reinforcement learning

Figure 2  The mechanism for blocks to store data and be linked
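
To make this concrete, the following is a minimal Python sketch of the block structure and hash linking of Fig. 2. The field names, the JSON serialization and the choice of SHA-256 are illustrative assumptions, not details fixed by the paper.

```python
# A minimal sketch of a block holding (S, A, R, D) plus the previous block's
# hash, as in Fig. 2. SHA-256 over a JSON serialization is an illustrative
# choice; the paper does not specify a hash function or encoding.
import hashlib
import json
from dataclasses import dataclass

@dataclass
class Block:
    state: list      # S_{t+1}: environment state recorded in this block
    action: int      # A_t: last chosen action
    reward: float    # R_{t+1}: reward received from the environment
    data: dict       # D_{t+1}: transaction data written onto the chain
    prev_hash: str   # h_{t+1} = Hash(S_t, A_{t-1}, R_t, D_t, h_t)

    def hash(self) -> str:
        """Digest of this block's contents, stored by the next block."""
        payload = json.dumps(
            [self.state, self.action, self.reward, self.data, self.prev_hash],
            sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()
```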

In any block of the chain, the stored hash value of the previous block prevents the data from being falsified: if any data are changed, the block's hash value must differ, which in turn changes the data stored in the next block and invalidates the linkage of blocks within the chain. In addition, if the state of the environment (S_t) or the action (A_{t-1}) stored in one block is modified, the next state (S_{t+1}), next action (A_t) and reward (R_{t+1}) will probably differ from those actually stored in the next block when transformed by the policy, which further decreases the possibility of, and increases the difficulty of, tampering with the data.
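
Under the same assumptions as the block sketch above, this tamper-evidence argument can be checked mechanically; a short sketch:

```python
# A sketch of chain validation under the Block sketch above: any change to a
# block's state, action, reward or data breaks the stored hash linkage.
def chain_is_valid(chain: list) -> bool:
    for prev, curr in zip(chain, chain[1:]):
        if curr.prev_hash != prev.hash():  # linkage invalidated by tampering
            return False
    return True
```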

Proof-of-work

The proof-of-work algorithm is implemented as follows:

1. At the present state (S_t), choose an action (A_t) based on the current policy (π_t);

2. Exert A_t onto the environment, i.e., interact with the oracle, receiving a reward (R_{t+1}), while the state of the environment changes to S_{t+1};

3. Based on the state transition (S_t → S_{t+1}), the action selected (A_t) and the reward received (R_{t+1}), the nodes of the blockchain train the predefined action-value function of the reinforcement-learning model and update the policy to π_{t+1}.

In this paper, the proof-of-work includes the computing processes of selecting an action, generating the reward regulated by the current policy (π_t), and training the action-value function model and updating the policy. In many practical MDP problems, the state space is large or even unbounded, which requires large and complicated deep neural networks to achieve a well-performing approximator of the action-value function, so the computation of proof-of-work is highly resource-demanding. Therefore, any attempt to tamper with the data or hack the whole blockchain is almost unachievable due to the daunting cost in computing resources and time.
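
A compact sketch of one proof-of-work round as the three steps above; `oracle`, `policy.choose` and `policy.train` are hypothetical stand-ins for the interaction and training machinery, not names from the paper.

```python
# A sketch of one proof-of-work round following steps 1-3 above.
def proof_of_work_round(oracle, policy):
    s_t = oracle.state()
    a_t = policy.choose(s_t)                 # step 1: action from current policy pi_t
    r_next, s_next = oracle.step(a_t)        # step 2: interact with the oracle
    policy.train(s_t, a_t, r_next, s_next)   # step 3: update the policy to pi_{t+1}
    return s_t, a_t, r_next, s_next
```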

Consensus based on the rewarding of reinforcement learning

When a node working for the blockchain finishes the proof-of-work, that is, a mining process, it needs to synchronize the newly generated block to the other nodes in the network to guarantee the consistency of data within the whole network. However, because of network delays, errors and attacks, nodes may keep different versions of the blockchain information, resulting in inconsistency. Therefore, we design a consensus mechanism for nodes to achieve data consistency across the whole network, as follows:

1. First, prioritize the longest chain: if nodes keep chains of different lengths, then the longest chains should be chosen as the proven chains.

2. If, at step 1, more than one chain remains, there are two optional ways to determine the final chain (a minimal sketch follows below):

a. Compare the reward value (R) at the last block of each chain and choose the chain with the maximum reward as the final consented chain.

b. Compare the sum of rewards (∑ R_t) across all blocks of each chain and choose the one with the maximum summation as the final consented chain.
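
The following sketch implements this chain-selection rule, assuming each chain is a list of the Block objects sketched earlier; the flag switching between the two tie-breaking options is an illustrative device.

```python
# A sketch of the consensus rule above: prefer the longest chain; break ties
# by last-block reward (option a) or by the sum of rewards (option b).
def select_chain(chains: list, by_sum: bool = False) -> list:
    longest = max(len(c) for c in chains)
    candidates = [c for c in chains if len(c) == longest]
    if by_sum:
        return max(candidates, key=lambda c: sum(b.reward for b in c))
    return max(candidates, key=lambda c: c[-1].reward)
```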

Although different nodes share the same policy algorithm, they experience their own unique model training and policy updating processes and keep their own action-value function model and policy instances in memory, which are not synchronized with each other; hence, for the same state (S_t), different nodes will not necessarily select the same action or receive the same reward. This brings about two valuable aspects:

First, even if more than 51% of the nodes within the network are hacked in an attempt to falsify the data and regenerate a new chain, when they complete the proof-of-work, the maximum reward (R_max) is not necessarily received by them but possibly by the unhacked nodes, in which case the falsified blocks will not be consented. Thus, the consensus mechanism designed in this paper additionally enhances the safety of the blockchain system by reducing the possibility of it being tampered with.

Second, because every node keeps its own instances of the action-value function model and policy and competes to achieve the maximum reward (R_max) by implementing the proof-of-work, the reinforcement-learning algorithm learns along more than one path (the number of paths equals the number of working nodes within the network) on the same environment state at a single time point. It equivalently replaces time with space for AI model training, achieving multiple epochs of training in one round. In this way, while the blockchain is growing, the reinforcement-learning algorithm backing its proof-of-work and consensus mechanism learns diversified possibilities more fully and converges faster, thereby making more precise predictions (S_t → A_t) as quickly as possible, which is conducive to achieving the overall goal in a shorter term for the reinforcement-learning model. This is especially beneficial for online learning applications of AI.

In summary, the blockchain system presented in this paper is a distributed training system for reinforcement-learning algorithms, which accelerates the learning process of AI models while realizing blockchain properties.

Proof-of-work with deep Q-learning

Specifically, we use deep Q-learning [29,32,33] as the policy-updating algorithm for the agent to learn. The iteration of the action-value function in Q-learning is formulated as:

Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha \left[ R_{t+1} + \gamma \max_a Q(S_{t+1}, a) - Q(S_t, A_t) \right]    (1)

where Q is the action-value function to be learned for the optimal decision; a and R_{t+1} are the selected action and the received reward at state S_{t+1}, respectively; and α (0 < α < 1) and γ (0 < γ < 1) are the step-size parameter and the discount-rate parameter, respectively.

A deep neural network is used to represent the Q function, and every node of the blockchain acts as an agent that learns the Q function and iterates it according to equation (1), with the policy determining which state-action pairs are visited and updated.
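
For illustration, here is a minimal sketch of the update in equation (1) in tabular form, assuming a small discrete state and action space; the paper's method replaces the table with a deep network, as described next.

```python
# A minimal sketch of the Q-learning update in equation (1), assuming states
# and actions are small enough to index a table. The deep network version
# replaces this table with a function approximator (see the loss sketch below).
import numpy as np

def q_update(Q, s_t, a_t, r_next, s_next, alpha=0.1, gamma=0.9):
    """One iteration of Q(S_t, A_t) <- Q(S_t, A_t) + alpha * (TD error)."""
    td_target = r_next + gamma * np.max(Q[s_next])    # R_{t+1} + gamma * max_a Q(S_{t+1}, a)
    Q[s_t, a_t] += alpha * (td_target - Q[s_t, a_t])  # move toward the target
    return Q
```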

Figure 3  The blockchain model based on deep Q-learning

As shown in Fig. 3, at any time step t, the nodes of the blockchain calculate the optimal action A_t according to the current Q function and state S_t and then update the Q function according to formula (1) for the next state. Specifically, in this research, we represent the Q function as a deep neural network. As illustrated in Fig. 4, the section in red represents the target network, which has the same neural network architecture as the Q function approximator (section in green) but with frozen parameters. Every C iterations (a hyperparameter), the parameters of the prediction network are copied to the target network. A loss function is defined as the mean squared error of the target Q-value and the predicted Q-value:

Loss = \left( R + \gamma \max_a Q(S_{t+1}, a; \theta') - Q(S_t, A_t; \theta) \right)^2    (2)

where θ′ and θ represent the parameters of the target network and the prediction network, respectively. This is then essentially a regression problem, where the prediction network updates its parameters through gradients computed by backpropagation until convergence.
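
A minimal PyTorch sketch of this loss, assuming the prediction and target networks are ordinary modules mapping states to per-action Q-values; the batch layout is an illustrative assumption.

```python
# A sketch of the loss in equation (2): MSE between the target Q-value,
# computed with frozen parameters theta', and the predicted Q-value.
import torch
import torch.nn as nn

def dqn_loss(pred_net, target_net, batch, gamma=0.9):
    """Mean squared error between target and predicted Q-values."""
    s, a, r, s_next = batch  # tensors: states, actions, rewards, next states
    q_pred = pred_net(s).gather(1, a.unsqueeze(1)).squeeze(1)  # Q(S_t, A_t; theta)
    with torch.no_grad():  # theta' stays frozen between parameter copies
        q_next = target_net(s_next).max(dim=1).values          # max_a Q(S_{t+1}, a; theta')
    target = r + gamma * q_next
    return nn.functional.mse_loss(q_pred, target)
```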

Figure 4  Schematic diagram for Q function iteration and its neural network representations

The steps involved in the deep Q-learning procedure for every node of the blockchain are as follows:

1. At time step t, every node feeds state S_t into the prediction Q network, which returns the Q-values of all possible actions in that state.

2. Select an action using an epsilon-greedy policy: with probability ε (0 < ε < 1), select a random action, and with probability 1 − ε, select the action with the maximum Q-value, i.e., argmax_a Q(S_t, a; θ).

3. Perform this action A_t in state S_t and move to a new state S_{t+1}, receiving reward R_{t+1}. Write this transition information into a new block and store it in a replay buffer of the node as (S_t, A_t, R_{t+1}, S_{t+1}).

4. Next, sample some random batches of transitions from the replay buffer and calculate the loss defined by equation (2).

5. Perform gradient descent with respect to the prediction network parameters to minimize this loss. The node then finishes one proof-of-work computation and proves a newly generated block.

6. After every C iterations, copy the prediction Q network weights to the target network weights.

7. Repeat the above steps (a consolidated sketch of this per-node loop follows below).
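
A consolidated sketch of this per-node loop, assuming hypothetical `oracle`, `make_block` and `collate` helpers plus the `dqn_loss` sketch above; none of these names come from the paper, and the chain is assumed to already contain a genesis block.

```python
# A sketch of the per-node deep Q-learning mining loop (steps 1-7 above).
import random
from collections import deque

def mine(oracle, pred_net, target_net, optimizer, chain, *,
         epsilon=0.1, C=100, batch_size=32, steps=1000):
    replay = deque(maxlen=10_000)  # replay buffer of (s, a, r, s') transitions
    s = oracle.state()
    for step in range(steps):
        # Steps 1-2: epsilon-greedy action over predicted Q-values.
        q_values = pred_net(s)
        a = random.randrange(len(q_values)) if random.random() < epsilon \
            else int(q_values.argmax())
        # Step 3: interact with the oracle, write block, store transition.
        r, s_next, data = oracle.step(a)
        replay.append((s, a, r, s_next))
        chain.append(make_block(s_next, a, r, data, prev=chain[-1]))
        # Steps 4-5: sample a batch and descend on the equation (2) loss.
        if len(replay) >= batch_size:
            batch = random.sample(replay, batch_size)
            loss = dqn_loss(pred_net, target_net, collate(batch))
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        # Step 6: refresh the frozen target network every C iterations.
        if step % C == 0:
            target_net.load_state_dict(pred_net.state_dict())
        s = s_next  # step 7: repeat
```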

The awarding mechanism for mining

In this framework, the computations for the reinforcement-learning algorithm, and particularly for the training of deep neural networks, are assigned to the nodes (mining machines) of the blockchain, which compete for the proof-of-work. After nodes complete the proof-of-work, the nodes that are fastest to finish the computation and receive the maximum reward finally win the right to prove the blocks, which is the consensus mechanism of this blockchain. Thus, in our design, we stipulate the maximum reward R_max as the award to the node that finally wins the competition of proof-of-work and consensus, to encourage more computers with better capacity to join the blockchain network and contribute to artificial intelligence computations. This award value R_max is called the token of this blockchain.

Conclusion

In this paper, we present a blockchain framework that organically stitches together the computations for reinforcement learning and proof-of-work, as well as a consensus mechanism, achieving a versatile distributed computing system. On the one hand, taking advantage of the complexity and high computing cost of the reinforcement-learning process and deep neural network training increases the difficulty of hacking the blockchain network or falsifying the data. In particular, because the nodes keep self-owned instances of the policy and neural networks, they carry uncertainties of state transition (S_t → S_{t+1}) and action selection that may differ from node to node. These uncertainties additionally consolidate the stability of the chain linkages, which are difficult for hackers to mutate. The maximum-reward-wins consensus mechanism adds an additional barrier deterring hackers from tampering with the chain. On the other hand, utilizing the nodes within the blockchain network to fulfil the training and running of AI algorithms naturally contributes computing power to practical intelligent applications. Meanwhile, by distributing the AI model training to multiple nodes that simultaneously crunch the same data generated by the environment (the oracle in this blockchain system), the nodes keep their own instances of the AI model, so they experience different paths of learning, with different parameter values and hidden states of the AI model at every time step. This equivalently implements multiple epochs of training within only one round of the learning process, which improves training efficiency and accelerates the convergence of models.

Discussion

The blockchain framework presented in this paper paves an avenue for AI applications that require intensive computing power, a quicker generalization rate and a credible network for feeding data to AI models. Therefore, it provides a potential solution for facilitating the development of industrial intelligence, which has been developing slowly due to a lack of data, because enterprises in industrial verticals are not willing to share their assets. In addition, industry often lacks either sufficient professional AI talent or computing capacity for AI applications, so this blockchain framework could provide an open platform encouraging AI professionals to contribute their expertise, as well as computing resources, to support the advancement of industry. Furthermore, this framework is particularly pragmatic for nonepisodic reinforcement-learning problems with models continuously adapting to the environment, such as financial markets, IoT networks and factory operations.

Ultimately, it could be expected that, by combining blockchain and artificial intelligence into one computational framework, the two most important resources, data and computing power, can be utilized in a mutually supportive way over a creditable platform that encourages more innovation in artificial intelligence applications. Finally, we believe that this blockchain framework for AI computation could be a potential backbone of the industrial Internet.

References

1. Nakamoto, S. Bitcoin: A Peer-to-Peer Electronic Cash System. /bitcoin.pdf (2008).

2. Casino, F., Dasaklis, T. K. & Patsakis, C. A systematic literature review of blockchain-based applications: Current status, classification and open issues. Telematics and Informatics 36, 55-81 (2019).

3. Viriyasitavat, W. & Hoonsopon, D. Blockchain characteristics and consensus in modern business processes. Journal of Industrial Information Integration 13, 32-39 (2019).

4. Pal, A., Tiwari, C. K. & Haldar, N. Blockchain for business management: Applications, challenges and potentials. The Journal of High Technology Management Research 32, Issue 2 (2021).

5. Javaid, M. et al. Blockchain technology applications for Industry 4.0: A literature-based review. Blockchain: Research and Applications (2021).

6. Elghaish, F. et al. Blockchain and the 'Internet of Things' for the construction industry: research trends and opportunities. Automation in Construction 132 (2021).

7. Esmaeilian, B. et al. Blockchain for the future of sustainable supply chain management in Industry 4.0. Resources, Conservation and Recycling 163 (2020).

8. Liu, X. L. et al. Industrial blockchain based framework for product lifecycle management in industry 4.0. Robotics and Computer-Integrated Manufacturing 63 (2020).

9. Leng, J. et al. Blockchain-empowered sustainable manufacturing and product lifecycle management in industry 4.0: A survey. Renewable and Sustainable Energy Reviews 132 (2020).

10. Gupta, R. et al. Blockchain-based security attack resilience schemes for autonomous vehicles in industry 4.0: A systematic review. Computers & Electrical Engineering 86 (2020).

11. Mehta, D. et al. Blockchain-based royalty contract transactions scheme for Industry 4.0 supply-chain management. Information Processing & Management 58, Issue 4 (2021).

12. Wong, D. R., Bhattacharya, S. & Butte, A. J. Prototype of running clinical trials in an untrustworthy environment using blockchain. Nat Commun 10, 917 (2019).

13. Mamo, N. et al. Dwarna: a blockchain solution for dynamic consent in biobanking. Eur J Hum Genet 28, 609-626 (2020).

14. DeFrancesco, L. & Klevecz, A. Your DNA broker. Nat Biotechnol 37, 842-847 (2019).

15. Guo, X. et al. Smartphone-based DNA
