LANGUAGE MODELS

Djoerd Hiemstra
University of Twente
http://www.cs.utwente.nl/~hiemstra

SYNONYMS
Generative models

DEFINITION
A language model assigns a probability to a piece of unseen text, based on some training data. For example, a language model based on a big English newspaper archive is expected to assign a higher probability to "a bit of text" than to "aw pit tov tags", because the words in the former phrase (or word pairs or word triples if so-called N-GRAM MODELS are used) occur more frequently in the data than the words in the latter phrase. For information retrieval, typical usage is to build a language model for each document. At search time, the top ranked document is the one whose language model assigns the highest probability to the query.

HISTORICAL BACKGROUND
The term language models originates from probabilistic models of language generation developed for automatic speech recognition systems in the early 1980's [9]. Speech recognition systems use a language model to complement the results of the acoustic model, which models the relation between words (or parts of words called phonemes) and the acoustic signal. The history of language models, however, goes back to the beginning of the 20th century, when Andrei Markov used language models (Markov models) to model letter sequences in works of Russian literature [3]. Another famous application of language models are Claude Shannon's models of letter sequences and word sequences, which he used to illustrate the implications of coding and information theory [17]. In the 1990's, language models were applied as a general tool for several natural language processing applications, such as part-of-speech tagging, machine translation, and optical character recognition. Language models were applied to information retrieval by a number of research groups in the late 1990's [4, 7, 14, 15]. They rapidly became popular in information retrieval research. By 2001, the ACM SIGIR conference had two separate sessions on language models containing 5 papers in total [13]. In 2003, a group of leading information retrieval researchers published a research roadmap "Challenges in Information Retrieval and Language Modeling" [1], indicating that the future of information retrieval and the future of language modeling cannot be seen apart from each other.

SCIENTIFIC FUNDAMENTALS
Language models are generative models, i.e., models that define a probability mechanism for generating language. Such generative models might be explained by the following probability mechanism: Imagine picking a term T at random from this page by pointing at the page with closed eyes. This mechanism defines a probability P(T | D), which could be defined as the relative frequency of the occurrence of the event, i.e., by the number of occurrences of a word on the page divided by the total number of terms on the page. Suppose the process is repeated n times, picking one at a time the terms T_1, T_2, ..., T_n. Then, assuming independence between the successive events, the probability of the terms given the document D is defined as follows:

  P(T_1, T_2, ..., T_n | D) = \prod_{i=1}^{n} P(T_i | D)    (1)

A simple language modeling approach would compute Equation 1 for each document in the collection, and rank the documents accordingly. A potential problem might be the following: The equation will assign zero probability to a sequence of terms unless all terms occur in the document. So, a language modeling system that uses Equation 1 will not retrieve a document unless it contains all query terms. This might be reasonable for a web search engine that typically processes small queries to search a vast amount of data, but for many other information retrieval applications this behavior is a problem. A standard solution is to use linear interpolation smoothing (see PROBABILITY SMOOTHING) of the document model P(T | D) with a collection model P(T | C), which is defined as follows:

  P(T_1, T_2, ..., T_n | D) = \prod_{i=1}^{n} ( \lambda P(T_i | D) + (1 - \lambda) P(T_i | C) )    (2)

This way, a term that does not occur in the document will not be assigned zero probability, but instead a probability proportional to its number of occurrences in the entire collection C. Here, λ is an unknown probability that should be tuned to optimize retrieval effectiveness. Linear interpolation smoothing was used in several early language modeling approaches [7, 14].

Implementation
Although the language modeling equations above suggest the need to compute probabilities for all
documents in the collection, this is unnecessary in practice. In fact, most language modeling approaches can be implemented efficiently by the use of standard inverted index search systems. This can be seen from the equation below, which can be derived from Equation 2 by two basic transformations: first, dividing it by the probability of the collection background model; and second, taking the logarithm.

  P(T_1, T_2, ..., T_n | D) \propto \sum_{i=1}^{n} \log ( 1 + \frac{\lambda P(T_i | D)}{(1 - \lambda) P(T_i | C)} )    (3)

Equation 3 no longer produces probabilities, but it ranks the documents in the exact same order as Equation 2, because the collection model does not depend on the document, and the logarithm is a strictly monotonic function. Taking the logarithm prevents the implementation from running out of the precision of its (floating point) representation of probabilities, which can become very small because the probabilities are multiplied for every query term. Similar to, for instance, vector space models in information retrieval, ranking is defined by a simple sum of term weights, for which terms that do not match a document get a zero weight. Interestingly, the resulting "term weight" can be seen as a variant of tf.idf weights, which are often used in vector space models.

Document priors
The equations above define the probability of a query given a document, but obviously, the system should rank by the probability of the documents given the query. These two probabilities are related by Bayes' rule as follows:

  P(D | T_1, T_2, ..., T_n) = \frac{P(T_1, T_2, ..., T_n | D) \, P(D)}{P(T_1, T_2, ..., T_n)}    (4)

The left-hand side of Equation 4 cannot be used directly, because the independence assumption presented above assumes term independence given the document. So, in order to compute the probability of the document D given the query, Equation 2 needs to be multiplied by P(D) and divided by P(T_1, ..., T_n). Again, as stated in the previous paragraph, the probabilities themselves are of no interest, only the ranking of the documents by the probabilities is. And since P(T_1, ..., T_n) does not depend on the document, ranking the documents by the numerator of the right-hand side of Equation 4 will rank them by their probability given the query. This shows the importance of P(D): the marginal probability, or prior probability of the document, i.e., the probability that the document is relevant if the query is ignored. For instance, we might assume that long documents are more likely to be useful than short documents [7, 14]. In web search, such a so-called static ranking (a ranking that is independent of the query) is commonly used. For instance, documents with many links pointing to them are more likely to be relevant, or documents with short URLs are more likely to be relevant. The prior probability of a document is a powerful way to incorporate static ranking in the language modeling approach [11].

Document generation models
An implicit assumption of the language models presented until now is that there is more information available about the documents than about the query. In some applications, however, the situation is reversed. For instance in topic tracking, a system has the task to track a stream of chronologically ordered stories. For each story in the stream, the system has to decide if it is on topic, or not. Since the target topic is usually based on a number of example stories on a certain topic, there is more information available about the topic than about a single story. Unlike query generation models, document generation models need some form of normalization, because documents will have different lengths: The probability of generating a document tends to be smaller for long documents than for short documents. Therefore, several normalization techniques might be applied, such as normalization by document length and additional Gaussian normalization [10, 18].

Relevance feedback, i.e., the user marked some documents as relevant, is another situation in which there is more knowledge available about the query than about each single document. If some relevant documents are known, or if the top ranked documents are assumed to be relevant, then those documents might be used to generate a new, improved query [20]. As an example, consider the following so-called relevance models approach [12]:

  P(Q | T_1, ..., T_n) \propto \sum_{d} P(D = d) \, P(Q | D = d) \prod_{i=1}^{n} P(T_i = t_i | D = d)    (5)

Here, the formula defines the probability of a new word Q, given the original query T_1, ..., T_n, by marginalizing over all documents. In practice, only the top ranked documents for the query T_1, ..., T_n are used. Interestingly, the relevance model might be used to infer other information from the top ranked documents, for instance the person that is most often mentioned for a certain query, so-called expert search [2].

Translation models
Language models for information retrieval are generative models, and therefore easily combined with other generative models. To add a model of term translation, the following probability mechanism applies: Imagine picking an English term T at random from this page by pointing at the page with closed eyes (which defines a probability P(T | D)), and then translating the term T by picking at random a Dutch term S from the term's entry in an English-Dutch dictionary (with probability P(S | T)). The model might be used in a cross-language retrieval system to rank English documents given a Dutch query S_1, ..., S_n by the following probability [4, 6, 12, 19]:

  P(S_1, S_2, ..., S_n | D) = \prod_{i=1}^{n} \sum_{t} P(S_i = s_i | T_i = t) ( \lambda P(T_i = t | D) + (1 - \lambda) P(T_i = t | C) )    (6)

Here, Dutch is the source language and English the target language. The formula uses linear interpolation smoothing of the document model with the target language background model P(T | C) (English in the example) at the right-hand side of the formula. In some formulations, the translation model is smoothed with the source language background model P(S | C), which is estimated on auxiliary data. The two background models are related as follows: P(S | C) = \sum_{t} P(S | T = t) P(T = t | C). The translation probabilities are often estimated from parallel corpora, i.e., from texts in the target language and their translations in the source language [6, 19]. Translation models might also be used in a monolingual setting to account for synonyms and other related words [4].

Aspect models
In aspect models, also called probabilistic latent semantic indexing models, documents are modeled as mixtures of aspect language models. In terms of a generative model it can be defined in the following way [8]: 1) select a document D with probability P(D); 2) pick a latent aspect Z with probability P(Z | D); 3) generate a term T with probability P(T | Z), independent of the document; 4) repeat Step 2 and Step 3 until the desired number of terms is reached. This leads to Equation 7:

  P(T_1, T_2, ..., T_n | D) \propto \prod_{i=1}^{n} \sum_{z} P(T_i | Z = z) \, P(Z = z | D)    (7)

The aspects might correspond with the topics or categories of documents in the collection, such as "health", "family", "Hollywood", etc. The aspect Z is a hidden, unobserved variable, so probabilities concerning Z cannot be estimated from direct observations. Instead, the Expectation Maximization (EM) algorithm can be applied [9]. The algorithm starts out with a random initialization of the probabilities, and then iteratively re-estimates the probabilities to arrive at a local maximum of the likelihood function. It has been shown that the EM algorithm is sensitive to the initialization, and an unlucky initialization results in a non-optimal local maximum. As a solution, clustering of documents has been proposed to initialize the models [16]. Another alternative is latent Dirichlet allocation [5], which has fewer free parameters, and therefore is less sensitive to the initialization.

KEY APPLICATIONS
This entry focuses on the application of language models to information retrieval. The applications presented include newswire and newspaper search [4, 5, 15], web search [11], cross-language search [6, 19], topic detection and tracking [10, 18], and expert search [2]. However, language models have been used in virtually every application that needs processing of natural language texts, including automatic speech recognition, part-of-speech tagging, machine translation, and optical character recognition.

CROSS REFERENCE
PROBABILITY SMOOTHING, N-GRAM MODELS

RECOMMENDED READING
[1] James Allan (editor), Jay Aslam, Nicholas Belkin, Chris Buckley, Jamie Callan, Bruce Croft (editor), Sue Dumais, Norbert Fuhr, Donna Harman, David J. Harper, Djoerd Hiemstra, Thomas Hofmann, Eduard Hovy, Wessel Kraaij, John Lafferty, Victor Lavrenko, David Lewis, Liz Liddy, R. Manmatha, Andrew McCallum, Jay Ponte, John Prager, Dragomir Radev, Philip Resnik, Stephen Robertson, Roni Rosenfeld, Salim Roukos, Mark Sanderson, Richard Schwartz, Amit Singhal, Alan Smeaton, Howard Turtle, Ellen Voorhees, Ralph Weischedel, Jinxi Xu, and ChengXiang Zhai. Challenges in Information Retrieval and Language Modeling. SIGIR Forum 37(1), 2003.
[2] Kristian Balog, Leif Azzopardi, and Maarten de Rijke. Formal models for expert finding in enterprise corpora. In Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR'06), pages 43–50, 2006.
[3] Gely P. Basharin, Amy N. Langville, and Valeriy A. Naumov. The life and work of A.A. Markov. Linear Algebra and its Applications 386, pages 3–26, 2004.
[4] Adam Berger and John Lafferty. Information retrieval as statistical translation. In Proceedings of the 22nd ACM Conference on Research and Development in Information Retrieval (SIGIR'99), pages 222–229, 1999.
[5] David M. Blei, Andrew Y. Ng, and Michael I. Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research 3(5):993–1022, 2003.
[6] Djoerd Hiemstra and Franciska de Jong. Disambiguation strategies for cross-language information retrieval. Lecture Notes in Computer Science volume 1696: Proceedings of the European Conference on Digital Libraries, Springer-Verlag, pages 274–293, 1999.
[7] Djoerd Hiemstra and Wessel Kraaij. Twenty-One at TREC-7: Ad-hoc and cross-language track. In Proceedings of the Seventh Text Retrieval Conference (TREC-7), pages 227–238. NIST Special Publication 500-242, 1998.
[8] Thomas Hofmann. Probabilistic latent semantic indexing. In Proceedings of the 22nd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR'99), pages 50–57, 1999.
[9] Frederick Jelinek. Statistical Methods for Speech Recognition. MIT Press, 1997.
[10] Hubert Jin, Rich Schwartz, Sreenivasa Sista, and Frederick Walls. Topic tracking for radio, TV broadcast and newswire. In Proceedings of the DARPA Broadcast News Workshop, 1999.
[11] Wessel Kraaij, Thijs Westerveld, and Djoerd Hiemstra. The importance of prior probabilities for entry page search. In Proceedings of the 25th ACM Conference on Research and Development in Information Retrieval (SIGIR'02), pages 27–34, 2002.
[12] Victor Lavrenko and W. Bruce Croft. Relevance models in information retrieval. In W. Bruce Croft and John Lafferty (eds.), Language Modeling for Information Retrieval, Kluwer Academic Publishers, pages 11–56, 2003.
[13] Donald H. Kraft, W. Bruce Croft, David J. Harper, and Justin Zobel (eds.). Proceedings of the 24th ACM Conference on Research and Development in Information Retrieval (SIGIR'01). Association for Computing Machinery, 2001.
[14] David R.H. Miller, Timothy Leek, and Richard M. Schwartz. A hidden Markov model information retrieval system. In Proceedings of the 22nd ACM Conference on Research and Development in Information Retrieval (SIGIR'99), pages 214–221, 1999.
[15] Jay M. Ponte and W. Bruce Croft. A language modeling approach to information retrieval. In Proceedings of the 21st ACM Conference on Research and Development in Information Retrieval (SIGIR'98), pages 275–281, 1998.
[16] Richard M. Schwartz, Sreenivasa Sista, and Timothy Leek. Unsupervised topic discovery. In Proceedings of the Language Models for Information Retrieval Workshop (LMIR), 2001.
[17] Claude E. Shannon. A mathematical theory of communication. Bell System Technical Journal, 27:379–423, 623–656, 1948.
[18] Martijn Spitters and Wessel Kraaij. Language models for topic tracking. In W. Bruce Croft and John Lafferty (eds.), Language Modeling for Information Retrieval, Kluwer Academic Publishers, pages 95–124, 2003.
[19] Jinxi Xu and Ralph Weischedel. A probabilistic approach to term translation for cross-lingual retrieval. In W. Bruce Croft and John Lafferty (eds.), Language Modeling for Information Retrieval, Kluwer Academic Publishers, pages 125–140, 2003.
[20] Chengxiang Zhai and John Lafferty. Model-based feedback in the language modeling approach to information retrieval. In Proceedings of the Tenth ACM International Conference on Information and Knowledge Management (CIKM'01), pages 403–410, 2001.

PROBABILITY SMOOTHING

Djoerd Hiemstra
University of Twente
http://www.cs.utwente.nl/~hiemstra

DEFINITION
Probability smoothing is a language modeling technique that assigns some non-zero probability to events that were unseen in the training data. This has the effect that the probability mass is divided over more events, hence the probability distribution becomes more smooth.

MAIN TEXT
Smoothing overcomes the so-called sparse data problem, that is, many events that are plausible in reality are not found in the data used to estimate probabilities. When using maximum likelihood estimates, unseen events are assigned zero probability. In the case of information retrieval, most events are unseen in the data, even if simple unigram language models are used (see N-GRAM MODELS): Documents are relatively short (say on average several hundreds of words), whereas the vocabulary is typically big (maybe millions of words), so the vast majority of words does not occur in the document. A small document about "information retrieval" might not mention the word "search", but that does not mean it is not relevant to the query "text search". The sparse data problem is the reason that it is hard for information retrieval systems to obtain high recall values without degrading values for precision, and smoothing is a means to increase recall (possibly degrading precision in the process).

Many approaches to smoothing have been proposed in the field of automatic speech recognition [1]. A smoothing method may be as simple as so-called Laplace smoothing, which adds an extra count to every possible word. The following equations show, respectively, (8) the unsmoothed, or maximum likelihood estimate, (9) Laplace smoothing, (10) linear interpolation smoothing, and (11) Dirichlet smoothing [3]:

  P_{ML}(T = t | D = d) = tf(t, d) / \sum_{t'} tf(t', d)    (8)
  P_{LP}(T = t | D = d) = (tf(t, d) + 1) / \sum_{t'} (tf(t', d) + 1)    (9)
  P_{LI}(T = t | D = d) = \lambda P_{ML}(T = t | D = d) + (1 - \lambda) P_{ML}(T = t | C)    (10)
  P_{Di}(T = t | D = d) = (tf(t, d) + \mu P_{ML}(T = t | C)) / (\sum_{t'} tf(t', d) + \mu)    (11)

Here, tf(t, d) is the frequency of occurrence of the term t in the document d, and P_{ML}(T | C) is the probability of a term occurring in the entire collection C. Both linear interpolation smoothing (see also the entry LANGUAGE MODELS) and Dirichlet smoothing assign to unseen terms a probability proportional to the term's occurrence in the collection. Here, λ (0 < λ < 1) and μ (μ > 0) are unknown parameters that should be tuned to optimize retrieval effectiveness. Linear interpolation smoothing has the same effect on all documents, whereas Dirichlet smoothing has a relatively big effect on small documents, but a relatively small effect on bigger documents. Many smoothed estimators used for language models in information retrieval (including Laplace and Dirichlet smoothing) are approximations to the Bayesian predictive distribution [2].

CROSS REFERENCE
LANGUAGE MODELS, N-GRAM MODELS

RECOMMENDED READING
[1] Stanley F. Chen and Joshua Goodman. An empirical study of smoothing techniques for language modeling. Technical Report TR-10-98, Center for Research in Computing Technology, Harvard University, August 1998.
[2] Hugo Zaragoza, Djoerd Hiemstra, Michael Tipping, and Stephen Robertson. Bayesian extension to the language model for ad hoc information retrieval. In Proceedings of the 26th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 4–9, 2003.
[3] ChengXiang Zhai and John Lafferty. A study of smoothing methods for language models applied to information retrieval. ACM Transactions on Information Systems (TOIS) 22(2), pages 179–214, 2004.

N-GRAM MODELS

Djoerd Hiemstra
University of Twente
http://www.cs.utwente.nl/~hiemstra

DEFINITION
In language modeling, n-gram models are probabilistic models of text that use some limited amount of history, or word dependencies, where n refers to the number of words
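The ranking mechanism of Equations 2 and 3 can be sketched in a few lines of Python. This is a minimal illustration, not an implementation from the entry itself: the toy documents, whitespace tokenization, and the value λ = 0.5 are assumptions made for the example.

```python
import math
from collections import Counter

def rank(query, docs, lam=0.5):
    """Rank documents by the log-space score of Equation 3, which orders
    them identically to the smoothed query likelihood of Equation 2."""
    tfs = [Counter(d.split()) for d in docs]   # tf(t, d) per document
    coll = Counter()                           # term counts for collection C
    for tf in tfs:
        coll.update(tf)
    coll_len = sum(coll.values())
    scores = []
    for tf in tfs:
        dlen = sum(tf.values())
        score = 0.0
        for t in query.split():
            p_d = tf[t] / dlen        # P(T_i | D)
            p_c = coll[t] / coll_len  # P(T_i | C)
            if p_d > 0:               # non-matching terms get zero weight
                score += math.log(1 + (lam * p_d) / ((1 - lam) * p_c))
        scores.append(score)
    # Return document indices, best-scoring first.
    return sorted(range(len(docs)), key=lambda i: scores[i], reverse=True)
```

Because only terms with a non-zero document probability contribute to the sum, the score can be computed with a standard inverted index: only documents containing at least one query term need to be scored.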
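The four estimators of Equations 8–11 can likewise be sketched directly. This is an illustrative sketch, not code from the entries: the parameter values λ = 0.5 and μ = 100, and the use of the collection's vocabulary size in the Laplace denominator, are assumptions for the example.

```python
from collections import Counter

def smoothed_estimators(t, doc, collection, lam=0.5, mu=100.0):
    """Return the four estimates of P(T = t | D = d) from Equations 8-11:
    maximum likelihood, Laplace, linear interpolation, and Dirichlet."""
    tf = Counter(doc.split())
    dlen = sum(tf.values())                    # sum over t' of tf(t', d)
    ctf = Counter(collection.split())
    clen = sum(ctf.values())
    vocab = len(ctf)                           # assumed vocabulary: collection terms
    p_ml_d = tf[t] / dlen                      # (8) maximum likelihood
    p_lp = (tf[t] + 1) / (dlen + vocab)        # (9) Laplace: one extra count per word
    p_ml_c = ctf[t] / clen                     # collection model P_ML(t | C)
    p_li = lam * p_ml_d + (1 - lam) * p_ml_c   # (10) linear interpolation
    p_di = (tf[t] + mu * p_ml_c) / (dlen + mu) # (11) Dirichlet
    return p_ml_d, p_lp, p_li, p_di
```

For a term unseen in the document, the maximum likelihood estimate is zero, while the three smoothed estimates stay positive; and because the Dirichlet denominator contains the document length, its pull toward the collection model is strongest for short documents.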