AI Agents Beyond ChatGPT
Zhou (Jo) Yu
Columbia University & Arklex AI
Who supports AI Agents?
Slides adapted from Yu Su
What are AI Agents?
- Perception: multimodal inputs including text, image, audio, video, touch, etc.
- Planning (Inner Monologue): Chain-of-Thought reasoning over tokens, powered by LLMs
- Reflection: meta-reasoning at every step
- Actions: function/tool calling, embodied actions
AI Agent Deployment Considerations
Slide: Alex Wang @ Scale AI
Overview
1. Model self-improvement with LLMs (Yu et al., NAACL 2024, Outstanding Paper)
2. Eliciting stronger model ability via tree search (Yu et al., EMNLP 2023)
3. AI agent self-improvement via tree search (Yu et al., ICLR 2025)
Background: In-Context Self-Improvement
Input: Q: Calculate (4*1)-(2*3) = ?
Xiao Yu, Baolin Peng, Michel Galley, Jianfeng Gao, Zhou Yu. Teaching Language Models to Self-Improve through Interactive Demonstrations. NAACL 2024, Outstanding Paper.
Background: In-Context Self-Improvement
Input: Q: Calculate (4*1)-(2*3) = ?
Few-shot prompt:
Q: Calculate 1+2 = ?
Ans: 3
Q: Calculate …
Ans: …
Q: Calculate (4*1)-(2*3) = ?
Ans: -2
Chain-of-thought prompt:
Q: Calculate (4*-1)+(2*3) = ? Let's think step by step: …
Q: Calculate (4*1)-(2*3) = ?
Step 1: (4*1)-(2*3) = 4-6.
Step 2: 4-6 = -2
Ans: -2
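The few-shot chain-of-thought setup above can be sketched as a simple prompt builder. This is a minimal illustration: the helper name `build_cot_prompt` and the single worked demonstration are invented for the sketch, and the actual LLM call is omitted.

```python
# Sketch of few-shot chain-of-thought prompting for the arithmetic example.
# The demonstration list and helper name are illustrative only.

COT_DEMOS = [
    ("Calculate (4*-1)+(2*3) = ?",
     "Step 1: (4*-1)+(2*3) = -4+6.\nStep 2: -4+6 = 2\nAns: 2"),
]

def build_cot_prompt(question: str) -> str:
    """Prepend worked demonstrations, then ask the new question."""
    parts = []
    for q, solution in COT_DEMOS:
        parts.append(f"Q: {q}\nLet's think step by step:\n{solution}")
    parts.append(f"Q: {question}\nLet's think step by step:")
    return "\n\n".join(parts)

prompt = build_cot_prompt("Calculate (4*1)-(2*3) = ?")
print(prompt)
```

The returned string would then be sent to the model, which is expected to continue with "Step 1: …" in the style of the demonstrations.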
Background: In-Context Self-Improvement
Input: Q: Calculate (4*1)-(2*3) = ?
Self-Improvement Prompting (Madaan et al., 2023)
Step 1: (4*1)-(2*3) = 4-6
Step 2: 4-6 = -3
Ans: -3
Madaan, A. et al. (2023). 'Self-Refine: Iterative Refinement with Self-Feedback'.
Self-Improvement Prompting (Madaan et al., 2023), continued:
Attempt:
Step 1: (4*1)-(2*3) = 4-6
Step 2: 4-6 = -3
Ans: -3
Prompt for feedback:
In step 2 the part "4-6=-3" is incorrect. This is because…
Prompt for update:
Step 1: (4*1)-(2*3) = 4-6
Step 2: 4-6 = -2
Ans: -2
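The attempt → feedback → update loop above can be sketched as follows. This is a rough sketch of the Self-Refine control flow (Madaan et al., 2023), not the authors' implementation: `llm` is a scripted stand-in for a real model call, so the loop runs deterministically end-to-end.

```python
# Sketch of the Self-Refine loop: attempt -> feedback -> update, repeated
# until the feedback says the attempt is correct.

def llm(prompt: str) -> str:
    """Hypothetical LLM stub: one wrong attempt, then a correction."""
    if "Give feedback" in prompt:
        return "correct" if "Ans: -2" in prompt else 'In step 2 the part "4-6=-3" is incorrect.'
    if "Revise" in prompt:
        return "Step 1: (4*1)-(2*3) = 4-6\nStep 2: 4-6 = -2\nAns: -2"
    return "Step 1: (4*1)-(2*3) = 4-6\nStep 2: 4-6 = -3\nAns: -3"

def self_refine(question: str, max_rounds: int = 3) -> str:
    attempt = llm(f"Q: {question}\nLet's think step by step:")
    for _ in range(max_rounds):
        feedback = llm(f"Q: {question}\n{attempt}\nGive feedback:")
        if feedback.strip() == "correct":
            break
        attempt = llm(f"Q: {question}\n{attempt}\nFeedback: {feedback}\nRevise:")
    return attempt

answer = self_refine("Calculate (4*1)-(2*3) = ?")
```

With a real model, the same three prompts (solve, critique, revise) drive the loop; the slide's Problem 1 below is exactly that this loop breaks when the critique itself is wrong.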
Background: In-Context Self-Improvement
Problem 1: small LMs cannot self-improve via prompting!
Attempt:
Step 1: (4*1)-(2*3) = 4-6
Step 2: 4-6 = -3
Ans: -3
Wrong feedback: In step 1 the part "2*3=6" is incorrect. This is because…
…the error propagates!
Problem 2: small LMs cannot learn "self-improvement" from LLM demonstrations!
Q: Calculate 4-0*-1*8+6 = ?
Attempt 1:
= 4-(0*-1*-8)+6 = 4-(0+8)+6
= 4-8+6
= -2+6 = 4
Attempt 2:
= 4-(0*-1*-8)+6 = 4-(0)+6
= 4-(0+6) = 4-6
= -2
feedback: … → irrelevant demonstrations!
Motivation
Prior work shows that self-improvement (S.I.) is useful for task performance/generalization (Madaan et al., 2023). We find that prompt-based S.I. and simple distillation methods fail with small LMs.
1. Treat "self-improvement" as a task to learn: (attempt) -> (feedback, update)
2. But learn "self-improvement" online:
- consider LLMs / Python scripts as teacher and edit models to modify the small LM's attempts (e.g., Feedback: there is a mistake! Edit: maybe 2+2=4?)
- replay this interaction experience to train the small LM
TriPosT
1. Interactive trajectory editing
- uses LLMs / Python scripts as edit models
- gathers interaction records between the small LM and the LLM
Training sample:
Q: Calculate (4*1)-(2*3) = ?
Attempt:
Step 1: (4*1)-(2*3) = 4-6
Step 2: 4-6 = -3
Ans: -3
Small LM's feedback (rejected ✗): In step 1 the part "2*3=6" is incorrect. This is because…
Edited feedback: In step 2 the part "4-6=-3" is incorrect. This is because…
Update:
Step 1: (4*1)-(2*3) = 4-6
Step 2: 4-6 = -2
Ans: -2
TriPosT
2. Data post-processing
- reformats interaction data into (attempt, feedback, update) triplets
- data filtering and re-balancing
Raw record: Q: … / attempt 1: … / feedback 1: … / attempt 2: … / feedback 2: … / … / attempt N: … / feedback N: …
→ filter → Q: … / attempt: … / feedback: … / update: …
→ re-balance the dataset!
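The post-processing step above can be sketched as follows. This is a minimal illustration assuming a simple dict-based record format (the field names `q`, `attempts`, and `feedbacks` are invented for the sketch); the re-balancing step is omitted.

```python
# Flatten a raw interaction record (question + chain of attempts/feedback)
# into (attempt, feedback, update) triplets, and filter out records whose
# final attempt was never marked correct. Field names are illustrative.

def to_triplets(record):
    """record = {"q": str, "attempts": [...], "feedbacks": [...]}"""
    triplets = []
    attempts, feedbacks = record["attempts"], record["feedbacks"]
    for i, fb in enumerate(feedbacks):
        if i + 1 < len(attempts):  # an update exists for this feedback
            triplets.append({"q": record["q"], "attempt": attempts[i],
                             "feedback": fb, "update": attempts[i + 1]})
    return triplets

def filter_records(records):
    """Keep only records whose final feedback marks the attempt correct."""
    return [r for r in records if r["feedbacks"] and r["feedbacks"][-1] == "correct"]

record = {
    "q": "Calculate (4*1)-(2*3) = ?",
    "attempts": ["Ans: -3", "Ans: -2"],
    "feedbacks": ['"4-6=-3" is incorrect', "correct"],
}
triplets = to_triplets(record)
```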
TriPosT
3. Model training
- weighted SFT with more emphasis on the feedback and update tokens
- training on "on-policy" data → LLaMA-1 / LLaMA-2
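The weighted SFT objective can be sketched as a token-level negative log-likelihood in which feedback/update tokens receive a larger weight than attempt tokens. The weight value and segment labels below are illustrative; the paper's exact weighting scheme may differ.

```python
# Sketch of weighted SFT: up-weight the negative log-likelihood of
# feedback/update tokens relative to attempt tokens.
import math

def weighted_sft_loss(token_logprobs, segments, feedback_weight=2.0):
    """token_logprobs[i]: model log-prob of target token i.
    segments[i]: "attempt" or "feedback/update" for token i."""
    total, norm = 0.0, 0.0
    for lp, seg in zip(token_logprobs, segments):
        w = feedback_weight if seg == "feedback/update" else 1.0
        total += -w * lp   # weighted negative log-likelihood
        norm += w
    return total / norm   # weight-normalized average loss

logps = [math.log(0.5)] * 4
segs = ["attempt", "attempt", "feedback/update", "feedback/update"]
loss = weighted_sft_loss(logps, segs)
```

In a real training loop the same per-token weight mask would simply multiply the cross-entropy before reduction.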
Model self-improvement with LLMs
Main Idea:
Prior work shows that LLMs can be prompted to self-improve. We explicitly craft "self-improvement" data with LLMs to train/enhance this ability.
1. Let a weak LLM attempt self-improvement (attempt solution → feedback → revised solution)
2. Use a stronger LLM to perform "process supervision" (edited feedback, edited revised solution)
3. Train the LM with the improved data (improved "on-policy" data → LLaMA-1 / LLaMA-2)
Xiao Yu, et al. 2024. Teaching Language Models to Self-Improve through Interactive Demonstrations. NAACL 2024 Outstanding Paper.
Evaluation: BigBench Hard
- tasks where small LMs struggle
- split tasks into easy (seen) and harder (unseen) subtasks to measure generalization
Can TriPosT improve overall performance? (Evaluation: BigBench Hard)
Can TriPosT-trained models self-improve? (Evaluation: BigBench Hard)
Interactive ("on-policy") data is crucial
Ablation studies: simple SFT on gold answers
Model self-improvement with LLMs
Takeaway: improving model performance without human supervision is possible
Limitation: needs a strong editor LLM for supervision. Example — for the attempt
Step 1: (4*1)-(2*3) = 4-6
Step 2: 4-6 = -3
Ans: -3
the prompted editor gives the feedback: "All steps are correct. The final answer is also correct."
Overview
1. Model self-improvement with LLMs ✓
→ 2. Eliciting stronger model ability via tree search (Yu et al., EMNLP 2023)
3. AI agent self-improvement via tree search
LLM model performance improves with training compute
(OpenAI. "Scaling laws for neural language models." arXiv:2001.08361, 2020.)
Model performance improves with test-time compute (e.g. GPT-4 o1)
(Jones, Andy L. "Scaling scaling laws with board games." arXiv:2104.03113, 2021.)
(OpenAI. "Learning to Reason with LLMs." /index/learning-to-reason-with-llms/, 2024.)
Performance Improvement via Scaling
Central to these scaling laws is improving, without human supervision:
- eliciting stronger model behavior beyond CoT
- improving model performance with stronger data
Enhancing Model Capability via Tree Search
Main Idea:
Many dialogue tasks are essentially about decision making.
We can use look-ahead search, as in games like chess, to enhance this.
Example (persuasion dialogue):
[greet] Hello. How are you doing today?
Persuadee: I am good!
[task-related inquiry] Great. Have you ever donated to charities?
Persuadee: If I'm in the right place at the right time or am given an opportunity.
[what should I say here?]
Lookahead via tree search
[Figures: chess positions, white to move; looking several moves ahead reveals a line that is simply winning — Hikaru Nakamura, Grandmaster]
Enhancing Model Capability via Tree Search
chess: white to move → propose moves, simulate, evaluate
Dialogue decision making as tree search
Main Idea:
Many dialogue tasks are essentially about decision making.
We can use look-ahead search, as in games like chess, to enhance this.
Xiao Yu, Maximillian Chen, and Zhou Yu. 2023. Prompt-Based Monte-Carlo Tree Search for Goal-oriented Dialogue Policy Planning. EMNLP 2023.
Dialogue decision making as tree search
1. MCTS with zero training:
- search (potentially) promising actions = prompt an LLM to act as the policy π
- simulate action outcomes = prompt an LLM to act as the world model M (LLM as user simulator)
- evaluate action quality = prompt an LLM to act as the value function V (LLM as value function)
- update its estimate of each action's quality
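The four prompted components above can be wired into a standard MCTS loop. Below is a minimal sketch with a depth-one search and stubbed LLM calls — the action names and value table are invented, and the real GDP-Zero algorithm uses open-loop simulations and deeper rollouts.

```python
# Sketch of prompt-based MCTS: UCB selection over candidate actions,
# LLM-stub simulation/evaluation, and a running-average Q update.
import math

def policy(state):            # "prompt an LLM to act as pi"
    return ["emotional_appeal", "logical_appeal"]

def simulate(state, action):  # "prompt an LLM to act as M"
    return state + [action]

def value(state):             # "prompt an LLM to act as V"
    return {"emotional_appeal": 0.6, "logical_appeal": 0.4}[state[-1]]

def mcts(root, n_iters=50, c=1.0):
    N, Q = {}, {}  # visit counts and running-average values per (state, action)
    key = tuple(root)
    for _ in range(n_iters):
        actions = policy(root)
        total = sum(N.get((key, a), 0) for a in actions) + 1
        # selection: UCB over the root's candidate actions
        a = max(actions, key=lambda a: Q.get((key, a), 0.0)
                + c * math.sqrt(math.log(total) / (N.get((key, a), 0) + 1)))
        v = value(simulate(root, a))          # simulate + evaluate
        n = N.get((key, a), 0)                # update running estimate
        Q[(key, a)] = (Q.get((key, a), 0.0) * n + v) / (n + 1)
        N[(key, a)] = n + 1
    return max(policy(root), key=lambda a: N.get((key, a), 0))

best = mcts([])
```

The final action is chosen by visit count, the usual MCTS convention; swapping the stubs for real prompted calls gives the zero-training planner described above.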
Open-Loop MCTS for dialogues
2. Open-loop MCTS for dialogue:
- considers stochastic transitions from a dialogue state, unlike (traditional) closed-loop MCTS
GDP-Zero improves dialogue task success
Evaluation: Persuasion Task
- PersuasionForGood dataset: persuade a person to donate to a charity called Save the Children
- "what is a good policy?" is subjective -> very hard to train
- Can GDP-Zero produce a more persuasive policy than the base LLM itself?
(Offline evaluation) (Interactive evaluation)
GDP-Zero learns domain knowledge
How did GDP-Zero's planning help?
- avoids an eager "proposition of donation"
- balanced strategy with emotional and logical appeals
Enhancing Model Capability via Tree Search
Takeaway: tree search is an effective method to directly improve model behavior at test time
Limitations:
- Extension beyond dialogue tasks, such as to AI agents?
- Transfer this improved behavior back to the model via training?
Overview
1. Model self-improvement with LLMs ✓
2. Eliciting stronger model ability via tree search ✓
→ 3. AI agent self-improvement via tree search (Yu et al., ICLR 2025)
Background: VLM on Computer Tasks
VQA tasks (2020-2022): Q: What is he doing? — He is performing a skateboard trick…
Computer tasks (2023+): Can you help me clear my shopping cart? — click button [shopping cart]…
Challenge: extremely difficult, as interacting with a computer was not part of VLM (pre-)training
1. Scale test-time compute to improve agent performance
2. Transfer search knowledge back to the VLM via training
Introducing R-MCTS
R-MCTS = explore the decision space and self-improve on-the-fly
Introducing R-MCTS
R-MCTS = MCTS with contrastive self-reflection and a multi-agent-debate value function
[Figure: two candidate actions with Q = 0.07 and Q = 0.15 (N = 1 each); debaters argue "Good action, because…" and "Bad action, because…"; a judge aggregates the debate into revised value estimates, e.g. V = 0.38]
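The multi-agent-debate value function can be sketched as follows. All model calls are stubs — the debater prompts are invented and the judge simply returns the figure's 0.38 — so this shows only the control flow, not the actual R-MCTS implementation.

```python
# Sketch of a multi-agent-debate value function: debaters argue for and
# against an action; a judge aggregates their arguments into one value.

def debater(state, stance):
    """Hypothetical LLM call: argue for ('Good') or against ('Bad') the action."""
    return f"{stance} action, because ... (state={state})"

def judge(arguments):
    """Hypothetical LLM judge: read the debate, output a scalar in [0, 1].
    Stubbed with a fixed score here."""
    return 0.38

def debate_value(state, n_debaters=2):
    stances = ["Good" if i % 2 == 0 else "Bad" for i in range(n_debaters)]
    arguments = [debater(state, s) for s in stances]
    return judge(arguments)

v = debate_value("click [shopping cart]")
```

The design intuition is contrastive: forcing arguments on both sides before judging tends to produce better-calibrated values than a single direct score.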
Introducing R-MCTS
Within each task, R-MCTS performs a tree search to find the best trajectory.
After each task, R-MCTS performs contrastive self-reflection to improve its future execution.
R-MCTS Results
Benchmark: VisualWebArena and OSWorld
- realistic and reproducible
- tasks span multiple domains
R-MCTS outperforms other search algorithms (ToT, A*, or MCTS).
R-MCTS achieves new SOTA on VisualWebArena, and is highly competitive on OSWorld! (VisualWebArena leaderboard; OSWorld leaderboard)
1. Scale test-time compute to improve agent performance ✓
2. Transfer search knowledge back to the VLM via training
Introducing Exploratory Learning
Exploratory Learning = explore, evaluate, and backtrack by training on tree traversals!
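One way to picture training on tree traversals: linearize a search path — including an explicit backtrack away from a low-value branch — into a single sequence the model can imitate. The record format, threshold, and token names below are invented for illustration; the actual Exploratory Learning format may differ.

```python
# Sketch: turn a tree traversal (visit order of actions with their value
# estimates) into one training sequence with explicit backtrack markers.

def linearize_traversal(traversal, threshold=0.5):
    """traversal: list of (action, value) pairs in visit order; a node whose
    value falls below the threshold is followed by a 'backtrack' token."""
    tokens = []
    for action, value in traversal:
        tokens.append(f"action: {action}")
        tokens.append(f"value: {value:.2f}")
        if value < threshold:
            tokens.append("backtrack")
    return " | ".join(tokens)

seq = linearize_traversal([
    ("click [search]", 0.2),  # explored first, judged bad -> backtrack
    ("click [cart]", 0.9),    # explored next, judged good -> kept
])
```

Training a VLM on such sequences teaches it to propose an action, judge it, and recover from bad branches — the explore/evaluate/backtrack behavior the slide names.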