2026负责任AI进展报告 Responsible Al Progress Report-谷歌Google_第1页
2026负责任AI进展报告 Responsible Al Progress Report-谷歌Google_第2页
2026负责任AI进展报告 Responsible Al Progress Report-谷歌Google_第3页
2026负责任AI进展报告 Responsible Al Progress Report-谷歌Google_第4页
2026负责任AI进展报告 Responsible Al Progress Report-谷歌Google_第5页
已阅读5页,还剩26页未读 继续免费阅读

下载本文档

版权说明:本文档由用户提供并上传,收益归属内容提供方,若内容存在侵权,请进行举报或认领

文档简介

ProgressReport

PublishedinFebruary2026

Foreword:

Theopportunity

oftheAIera

If2024wasdefinedbybuildingoutthefoundationsforanAIfuture,2025markedAI’sshiftintoahelpful,proactivepartner,capableofreasoningand

navigatingtheworldwithusers.Asmodels

growevenmoresophisticated,weseeusersandbusinessesaroundtheglobetransitioningfromexplorationtointegration,findingnewwaystoputthesetoolstoworkintheirdailylives.Fromfoundationaladvancesin

scientificdiscovery

and

clinicalmilestonesinhealthcare

totheriseofagenticsystemsandnewtoolstosupportcreativitysuchas‘vibecoding’andgenerativemedia,thetransformationalpotentialofthesetoolsiscomingmoreclearlyintofocus.

Sincewestartedpublishingthesereports,

ourresponsibleAIdevelopmentapproachhas

continuedtomatureandisnowfullyembedded

withourproductdevelopmentandresearch

lifecycles.In2025,asmodelsbecamemorecapable,personalized,andmulti-modal,werelieduponrobustprocessesfortestingandmitigatingrisks,and

deepenedtherigoroussafeguardsbuiltintoour

products.TomeetthischallengeatthespeedandscaleofGoogle,wehavepaired25yearsofuser

trustinsightswithacomprehensivetestingstrategythatisdrivenbyhumanexpertiseandsupportedbyAI-enabledautomation.

Thisworkcontinuestobeguidedbyour

AIPrinciples

,whichweupdatedlastyeartorelectourlatest

understandingoftheopportunitiesandrisks

presentedbythisplaformshift.Today’sreport

detailsourmulti-layeredapproachtoresponsible

AIgovernance,andfocusesinparticularonagenticandfrontierrisksfromincreasinglysophisticated

models.Insuchadynamicenvironment,italso

showshowoursystemsarebuilttobeabletodetectandthenadapttoemergingrisks.Whetherwearehardeningagenticsystemsagainstadversarial

manipulationorembeddingprovenancesignals

intoeverysyntheticoutput,ourgoalremainsclear:

toensurethatweare“bold”and“responsible”

inbothourdevelopmentandimplementation.

Responsibilityisnotonlyaboutstoppingbad

outcomes.Itisalsoaboutenablingbroadaccesstothesetoolsforthemaximumbenefitofpeopleandsociety.BystrikingtherightbalancewecanensurethatAIisusedtotackleexistentialchallengesthatwerepreviouslyinsurmountable,from

forecasting

loodsfor700millionpeople

to

decodingthe

humangenome

and

helpingpreventblindness

.

Buildingtrustinthesetoolsrequiresdeep

partnershipwithgovernments,academics

andcivilsociety.Wewillcontinuetovigorouslycollaboratetosetstandardsforthisremarkableera.AsAIadvances,we’llcontinuetoiterate

andshareresearchandtoolswiththebroaderecosystem,withagoaltopromoteusesof

AIthatwillimproveliveseverywhere.

LaurieRichardson

VicePresident,Trust&Safety,Google

HelenKing

VicePresident,Responsibility,GoogleDeepMind

2

3

Howwe

ThefoundationsforAI-driveninnovationaresystemsthataredevelopedanddeployedresponsiblyfromthestart.WeareboldinourambitiontodelivertheeconomicandsocietalbenefitsoftheAIera—

benefitsthatcanunlockopportunityforcommunitiesandacceleratescientificdiscovery.Weachieveourgoalofbeingboldandresponsiblethrougha

comprehensiveapproachthatspanstheentireAI

lifecycle—frommodeldevelopmentanddeploymenttopost-launchmonitoringandremediation.

Amulti-layeredapproachtoresponsibleAIgovernance

Weemployamulti-layeredapproachtoAIgovernancethatcombineshumanexpertise,userfeedback,

andautomatedsystemsthathelpscaleourworktomanagerisk.

Research.Wetakearesearch-drivenapproachtoAIriskandgovernance.Thisincludesidentifyingcurrentandemergingrisksassociatedwithourmodelsandproductsacrossnewmodalitiesandformfactors

—suchas

robotics

and

agenticAI

.

PoliciesandFrameworks.Wedeveloprigorous

AIpoliciesandguidelines—suchasourcontentsafety

policies

and

ProhibitedUsePolicy

—thataredesignedtopreventpotentiallyharmfuloutputsandmisuseofourproducts.Developedwithinternalandexternal

experts,theseprotectionsguidemulti-modaloutputstomitigaterisksinkeyareasincluding:childsafety,dangerouscontent,sexualcontent,andmedical

information.Wealsodevelopframeworksfor

managingmorenascentrisksposedbyfrontierAImodels,asillustratedinourlatest

FrontierSafety

Framework

and

SecureAIFramework

.

Testing.Wetakeacomprehensiveapproachtostresstestoursystemsagainstourpoliciesandframeworks.Ourtestingincludesbothscaledevaluationsandredteamingofourmodelsand

products,includingourmostadvancedAIsystemsthatleverage

personalintelligence

and

agenticAI

.

Mitigation.Weproactivelymitigaterisksthroughbothsupervisedfine-tuningandreinforcementlearningtoensuremodelsarealignedwithourcontentsafetypolicies.Additionally,wedeployout-of-model

mitigations,suchassafetyfiltersandconditional

systeminstructions,toprovideadditionallayersofprotectionbyidentifying,filteringout,orsteering

modeloutputawayfromharmfulorinappropriate

content.WealsoleverageourSearchtoolstofactuallygroundresponsesthatrequirefreshorauthoritativeinformation.Tofurtherminimizerisk,wephaseglobalexpansionofmodelsandproductstoallowsufficienttimeandsafetyconsiderationsfordiferentlanguagesandregions.Weimplementaddedcareforsensitiveaudiences,especiallyourunder-18users,forwhomweenforceheightenedprotocolsandmitigations.

LaunchReviewandReporting.Beforelaunchingamodelorproduct,weevaluateawidearrayofriskstodeterminewhetheroursafetyguardrailsappropriatelymitigatethoserisksorifadditionalprotectionsare

needed.OurAIlaunchesundergoexpertreviewstoconfirmtheymeetrigorousresponsibilitystandards,guidedbyour

AIPrinciples

.Wealsopublish

modelcards

andother

reports

toprovideessentialinformation

regardingmodelcreation,function,andintendeduse.

MonitoringandEnforcement.Weuseacombinationofautomatedsystemsandhumanreviewstoengageincontinuouspost-launchmonitoringtoimproveourAImodelsandproducts,anddetectactivityand

behaviorthatsuggestsmisuseofourconsumer

products.Thisincludesactivelysolicitinguser

feedback,evaluatinglogsdatatoidentifyknownandemerginguseradoptionpatterns,andmonitoringthird-partysignalsviasocialmediaandtrusted

partners.Wecollatetheseinsightsandextract

opportunitiestoimproveourmodelsandproducts.

Research

Monitoring&Enforcement

Policies&Frameworks

Testing

LaunchReview&Reporting

Mitigation

GovernanceForums

GovernanceForums.Ourmulti-layeredprocess

includeslaunchreviewsforbothfrontiermodelsandapplicationsdevelopedusingthesemodels.OurmodellaunchesarereviewedatGoogleDeepMind’sLaunchReviewforum,whichapprovesmodelreleases,andourmanyapplicationlaunchesarereviewedsystematicallyvialaunchinfrastructureandcentralizedexpertriskreviews,aswellasviavariousapplication-focused

launchreviewforums.Theselaunch-specificforumsarecomplementedbyourArtificialGeneralIntelligence(AGI)

FuturesCouncil,whichconsistsofmembersofGoogle’sseniormanagementandAlphabet’sBoardofDirectors.Buildingonour

AIPrinciples

,theCouncilprovides

perspectivesandrecommendationstoourBoardandmanagementteamonlong-termopportunities,risks,andimpactsassociatedwiththedevelopmentofAGI.Counciltopicsincludepromotingwidespreadbenefits,addressingtechnicalsafetyandsecuritypriorities,

supportingscientificmoonshots,andprogressingalignmentonnationalandinternationalstandards.

4

Casestudy

ResponsibleAI

inaction

Ourmulti-layeredapproachtoresponsibleAIgovernanceis

designedtoadapttoeachuniqueinnovation.Ourmostrecent

launches,including

Gemini3

,ourlatest

FrontierSafetyFramework

,andourprogressinemergingAIfieldssuchas

agentic

AI,

personal

assistance

,andartificialgeneralintelligence(AGI)demonstratethisresponsibilityinaction.

Gemini3:ourmostsecuremodelyet

Weconductedrigoroustestingtoassess

modelalignmentwithourpoliciesand

frameworks.Weappliedtheseinsightsto

deploytargetedmitigationstofurthermodelalignment,whileourongoingmonitoringhelpsinformcontinuousmodelimprovement.

Gemini3

representsourmostsecuremodelyet,havingundergonethemostcomprehensivesetofsafetyevaluationsofanyGoogleAImodel

todate.Developedinclosepartnershipwith

internalsafetyandsecurityteams,Gemini3wassubjectedtorigoroustestingvia

redteamingand

safetyreviews

alignedwithour

AIPrinciples

and

Geminisafetypolicies

.OurevaluationsshowedthatGemini3achievedspecificgainsinreducingsycophancy,resistingpromptinjections,and

improvingprotectionagainstcybermisuse.

Ourupdated

FrontierSafetyFramework

,whichincorporateslessonsfrompreviousversionsandthelatestindustrybestpractices,wascentraltoourapproachfordeployingGemini3.The

frameworkcontainsasetofprotocolsdesignedtoidentifyandmitigatesevererisksfromfrontierAImodels,suchascyberattacks,CBRNrisks,andharmfulmanipulation.

TheFrameworkisbasedaroundasetof“CriticalCapabilityLevels”—thresholdswhereamodel’scapabilities,ifunmitigated,couldposesevererisks.ThisincludesanewresearchCritical

CapabilityLevel(CCL)onharmfulmanipulation.ThisCCLisfocusedonamodel’scapabilitytosystematicallyandsubstantiallymanipulate

usersindirectAI-humaninteractionsandwhichmaybemisusedtocauseharmataseverescale.Thisadditionbuildsonandoperationalizes

researchwe’vedonetoidentifyandevaluate

mechanismsthatdrivemanipulationfrom

generativeAI

.

ToaccompanythelaunchofGemini3,we

publisheda

report

documentinghowwe

evaluatedthemodelagainstthesethresholdsandwhyweultimatelydeemeditsafetodeploy.Inadditiontoourowntesting,wealso

partnered

withworld-leadingsubject-matterexperts,

providedearlyaccesstobodiessuchastheUK

AISecurityInstitute,andobtainedassessmentsfromindependentevaluatorssuchasApollo

Research,Vaultis,Dreadnode,andmore.

OurresponsibleapproachtoGemini3continues

throughourmonitoringandenforcement,informedbyourrobust

AIusagepolicies

,our

product-levelpolicies

,andfeedbackfrom

userreporting

.

5

Securingthenext

generationofbrowsing

Aswebeginto

introduceagentic

capabilities

toChrome—allowingGeminitoassistwithcomplex,multi-stepweb

tasks—wehavedesignedanovel

security

framework

tomitigaterisksandprotecttheuserexperience.

Useralignment

Wedeployedaspecialized,high-trustAImodelwecalltheUserAlignmentCriticthatreviews

proposedagentactions.TheAlignmentCritic

actsasanindependentreviewer,vetoingactionsthatdonotalignwiththeuser’sspecificintent.

Strictboundaries

WeintroducedAgentOriginSets,whichrestricttheagent’sreachtointeractonlywithdata

relatedtothetaskathand.

Mitigationofsocialengineering

Whiletheagentisactive,itcheckseverypageitseesforindirectpromptinjection.InadditiontoChrome’s

safetyfeatures

and

on-deviceAI

thathelpdetecttraditionalscams,this

prompt-injectionclassifierhelpspreventtheagentfromtakingactionsthatarenotalignedwiththeuser’sgoal.

Mandatoryhumanoversight

Sensitiveactions—includingpaymentsandpurchases,postingonsocialmedia,and

credentialuse—requirehumanconfirmationbeforeexecution,givinguserstransparencyandcontroloverthesetypesofinteractions.

Ongoingtesting,monitoring,andmitigation

Inadditiontoothersafeguards,webuilt

automatedred-teamingsystemsthattry

toderailtheagentinChrome.Westartwith

asetofdiverseattackscraftedbysecurity

researchers,anduseLLMstoexpandonthemfollowinga

technique

weadaptedforbrowseragents,prioritizingtestingagainstbroadandhigh-impactattacks.

orSearchAIMode,andtheycanalsochoosetoengageinconversationswithoutpersonalization,andsettheiractivitytoauto-delete.

Datasecurity

Ifusersoptin,weuseourbest-in-class

securityinfrastructuretoensurethatusers’dataissecurelyconnectedtotheGeminiApporSearchAIModethroughPersonal

Intelligence,ensuringthedataisprotectedevenasitpowersnew,personal

AIexperiences.

Knowledge

Weempoweruserswithknowledgeabout

PersonalIntelligence

,fromacknowledgingits

limitations

,toprovidinguserswithresourcessuchasthe

GeminiappHelpCenter

andthe

AIModeHelpCenter

tolearnmoreabouthowPersonalIntelligenceinteractswiththeirdata.

Launchingpersonalassistancewithcontrolsbuiltin

Aspartofthedevelopmentof

Personal

Intelligence

weidentifiedthespecific

mitigationsrequiredtohelpkeepusers

safewhilepushingtheboundariesofwhatAIcanachieve.

Usercontrol

Usershaveachoiceonwhetherornotto

connectnewdatasourcestotheGeminiApp

6

Casestudy

risksfrom

advancedAIsystems

AswepushforwardthefrontiersofwhatAIiscapableof,ourresearchteamscontinuetostudythepotentialrisksthatmayemergeandhowtobestevaluateandmitigatethem.

PreparingforAGI

InApril2025,ourresearcherspublisheda

proactive

approach

tobuildingartificialgeneralintelligence(AGI)safelyandresponsibly.TheresearchassumesthathighlycapableAIcouldbedevelopedby2030andanalyzesthepotentialrisks,fromthreatactorsmisusingAIcapabilitiestocarryoutcyberattacks

againstcriticalinfrastructure,toAIsystems

becomingmisalignedanddeceivinghumanusers.Theresearchalsoconsidersvariousmitigations,suchasblockingaccesstodangerouscapabilitiesbyusingfilterstopreventmisuse,orusingAI

assistancetohelpmaintainoversight.

Newcapabilitiesandformfactors

ThenatureofAIrisksdependsonthecapabilitiesoftheunderlyingmodels,butalsotheformfactorsusedtodeploythesecapabilitiesintotherealworld.In2025,ourteamcarriedoutresearchondiferentkindsofcapabilitiesandformfactors.

Robotics.Our

Geminiroboticsmodels

are

equippedwithcapabilitiessuchasadvancedspatialunderstanding,thatwillenablerobotstoperformawiderrangeofreal-worldtasks.Tomitigatesafetyrisks,wehavedevelopedan

approach

that

combinesmultiplelayersofsafeguards,buildingonourongoingsafetyresearchinthisspace.Forexample,inMarch2025wepublisheda

method

forgenerating“constitutions,”orrulesofbehavior,toguiderobots’actions.WealsopartneredwithPrincetonUniversityto

demonstrate

howtoidentify

andpredictrobotfailuresinreal-worldscenarioswithoutrequiringphysicalhardwaretesting.Ourindustry-leadingworkonsafetyhashelpedmakeourGeminiroboticsmodelsbestinclass.

Agents.AsnewelementsofAImodelsandsystems,AIagentscanactautonomouslyonbehalfoftheuser—performingtaskssuchasresearching,

planning,andusingtools.

InMay2025,wepublishedapaperoutliningsecurityprinciplesfor

SecureAIAgents

.

InSeptember2025,wepublished

research

examiningtheimpactsthatmayoccurasAIagentsbecome

morecapableandinterconnected,andbeginto

transactwitheachother,intheeconomyatscaleandspeedsbeyonddirecthumanoversight.The

authorsproposearangeofpotentialinterventions,fromidentifiersforagentstosandboxenvironments.

InDecember2025,

ourresearchersmapped

potentialrisksofahypotheticalfutureinwhichAGImaynotemergeasasinglepowerfulmodel,butratherasadistributednetworkofspecialized,sub-AGIagentsthatcancollectivelyperformcomplextasksthatnoindividualagentcoulddoalone.Inresponse,they

recommendthatsafetyinterventionsmovebeyondindividualmodelalignmenttowarda“defense-in-

depth”frameworkthatgovernstheentireecosystemthroughcontrolledagenticmarkets,systemiccircuitbreakers,androbustoversightofcollectivebehaviors.

Focusareas

AcrossGoogle,ourexpertsundertakeandsupportresearchonarangeofprioritytopics,from

relationshipsandhowtoprotectthementalwellbeingofAIusers,tochemical,biological,radiological,andnuclearrisks.Somerecentexamplesinclude:

Cybersecurity.InMarch2025,wepublisheda

framework

forevaluatingtheofensivecyber

capabilitiesofAIsystems.Thisevaluationcoverseveryphaseofthecyberattackchain,addressesawiderangeofthreattypes,andisgroundedinreal-worlddata.

InformationQuality.InNovember2025,wepublishedthe

FACTSLeaderboard

,asuiteof

methodstoevaluatetheaccuracyofLLMs.Itevaluatesmodelsontheirabilitytoaccuratelyanswerdiferentkindsofquestions,includingquestionsaboutimages,questionsthatrelyonusingsearchtools,“closed-book”questionsthat

modelsmustanswerwithoutexternaltools,andquestionsaboutlong-formdocuments.

Mentalhealth.InJuly2025,

weannouncedour

partnership

with

WellcomeTrust

,oneofthelargestcharitiesintheworld,onamulti-yearinvestmentinAIresearchfortreatinganxiety,depression,andpsychosis.Wealsoworkedwith

GrandChallenges

Canada

and

McKinseyHealthInstitute

tocreatea

practicalfieldguideformentalhealthorganizations

onhowtouseAIforscalingevidence-based

mentalhealthinterventions.

KidsandFamilies.InOctober2025,weannouncedthewinnersofthe

GoogleAcademicResearch

Awards

,throughwhichwehavesupported

research

exploringcriticaltopics,includingtheimpactofAIonteenagersandearlychildhood

development.Inadditiontothefundsattachedtotheseawards,awardeesarematchedtoaGoogleresearchsponsor,providingdirectconnectiontoourownresearchcommunity.

Casestudy

7

Stressoursystems

AsAIcapabilitiescontinuetodevelop,weareevolvingourrigoroustestingframeworksandspecializedteamstoaddressnewrisk

profiles.ByintegratinghumanexpertisewithAI-assistedautomation,weareensuringtheseadvancedsystemsscalesafelywhile

remaininghelpfulforeveryone.

Mappingunexpectedrisksthroughadversarialredteaming

Acoreaspectofourtestingstrategyisred

teaming—unstructured,adversarialtesting

designedtouncoverunexpectedriskvectorsthatstandardevaluationsmightmiss.Relyingonlateralthinkingandmethodicalexploration,ourteamssimulatehowmaliciousactors

mightattempttomisuseoursystems.These

specialistscoverabroadrangeofkeyrisk

areas,includingchildsafetyandcontentsafety.In2025alone,ourContentAdversarialRedTeam(CART)completedover350exercises.This

workspansallmajormodalities—includingtext,audio,images,andvideo—aswellascomplexcapabilitieslikeagenticAI,allowingustomapriskstostayaheadofarapidly

shiftingthreatlandscape.

OurCARTteamsareexpertsinconducting

human-drivenunstructuredtestsatscale.Tosupportthis,weadditionallydeploy

automated

redteaming

techniquestosystematically

exploreadversarialattackstoenableabroadassessmentofmodelvulnerabilities.

Addressingnovelandemergingrisks

NovelAIsystemscanmeanthereispotentialfornovelrisks.Toevaluateourmostadvancedfrontiersystems,ourNovelAITestingteamwasformedtospearheadevaluationsatscalefornewAIsystems,suchasadvancedagentsand

PersonalIntelligence

.Withinpersonalization

testing,theteamengineeredascaledapproachfordynamic,context-awareevaluations.

Managingsafetythroughcollaborativescrutiny

Ourinternalrigoriscomplementedbyexternalvalidationtoensureobjectiveassessments.WepartnerwithindependentevaluatorsincludingApollo,Vaultis,andDreadnode,andprovide

earlyaccesstoourmodelstobodiessuch

Approachtoagentictesting

asthe

UKAISecurityInstitute

.Thisexternal

Inthisagenticera—whereAIsystems

autonomouslyinteractwithservicesand

users—werequireanewtestingparadigmdesignedspecificallyfortheseinteraction-basedrisks.Toensureourtestingkeeps

pacewiththespeedofproductinnovation,weareevolvingourcapabilitiestobemoreauthentic,automated,andactionable.

Thesandbox

Wedevelopedanauthentic,interactive

sandboxenvironmentthatreplicates

complex,multi-turndigitaluserexperiencesandstate-of-the-artattacks.Thisplaform

lookstoaddresscriticalsafety,legal,and

scalabilitychallengesinherentinliveinternettestingofagenticproducts,andallowsusto

proactivelyidentifyandmitigatehighharmriskswithoutexposingthepublicwebtopotentialharm.

“BuddyAgents”

Wearecurrentlyimplementingautomatedmonitoringagentsthatloginteractionsandassesscomplianceinreal-timeoftheagentbeingtested.

Multi-turninteractions

Wearedevelopingthecapabilitytoprovideinsightsintohowagentsperformincomplex,multi-turninteractionsusingpersonalizeddata–allowingustoaccuratelyevaluatetheintersectionofmultiplenovelcapabilitiesastheyconverge.

scrutinyvalidatesthatourmodelsadheretothesafetypracticesoutlinedinourupdated

Frontier

SafetyFramework

,helpingustostresstestourmodelsindiferentriskareas,fromcyberto

harmfulmanipulation.

Ultimately,thiscomprehensivestrategy—

combininghuman-in-the-loopexpertisewithAI-assistedscale—enablesdata-drivensafetyandsecurityassessments,andensuresthat

weareabletoaddressnewandemerging

riskswhileenablingthenextgenerationofboldAIexperiences.

8

HowweapplyAIto

WebelievethatboldandinnovativeAI

requiresaresponsiblefoundationthatsafelyminimizespotentialrisks,whilemaximizingtheextraordinaryopportunitiesAIpresents—fromunlockingnewgrowthacrossthe

globaleconomy,toimprovingthetreatment

ofdiseases,toacceleratingscientific

timelinesfromyearstomonths.True

responsibilityintheAIeragoesbeyond

safeguards.ItdemandsthatweuseourscaleandinfrastructuretohelpAIaddresssociety’smostpressingchallenges.

Workingalongsideourpartners,wetakea

comprehensiveandresponsibleapproachtoAIdevelopmentthatenablesourmostadvancedmodelstobeappliedwheretheycanhavethegreatestpositiveimpact.

Acceleratingscientificprogress

WearefosteringanewgoldenageofdiscoverybyapplyingAItofundamentalsciences.This

includes

advancingnuclearfusionresearch

andutilizing

quantumcomputing

tosolveproblemsthatwerepreviouslyintractable.Alongsidethis,wearecreatingtoolslike

AIco-scientist

that

helpscientistsgeneratenovelhypothesestoacceleratethespeedofscientificdiscoveries.

Improvingglobalhealth

Wearedrivingprogressin

genomics

and

disease

detection

,

automatingadministrativeburden

forclinicians,andpartneringwithinstitutionslikeYaleUniversitytodiscovernewpotential

cancertherapypathways

.ThroughAItoolslike

AlphaFold

,whichpredictsproteinstructures,weareacceleratingourunderstandingofdisease—enablingdrugdiscoveryandopeningnew

frontiersindiagnosticsandtreatment.

Strengtheningresilience

Wearestrengtheningglobalresilience

byprovidingresponsibleagencieswith

experimentaltoolsthatgiveearlierwarningfor

loods

,

cyclones

,and

earthquakes

.Whenused,thesetoolscanhelpcommunities

prepareforandrespondtodisastersmore

efectively.Beyondcrisisresilience,we

areapplyingAItohigh-accuracy

weather

forecasting

andidentifyingtechniquesforsustainable

agriculture

,helpingsocietyadapttotherealitiesofachangingclimate.

Supportingeducation

Wearecommittedto

empoweringeverylearner

.AIistransformingeducationbysupporting

teachersandpersonalizinglearningexperiences.Throughinitiativesfocusedon

AIliteracy

and

learning

,andtechnologieslike

LearnLM

,wearehelpingtounlockhumanpotentialandmakeknowledgeuniversallyaccessible.

9

Casestudy

Acceleratingthrough

specializedAIagents

Ourcommitmentto

温馨提示

  • 1. 本站所有资源如无特殊说明,都需要本地电脑安装OFFICE2007和PDF阅读器。图纸软件为CAD,CAXA,PROE,UG,SolidWorks等.压缩文件请下载最新的WinRAR软件解压。
  • 2. 本站的文档不包含任何第三方提供的附件图纸等,如果需要附件,请联系上传者。文件的所有权益归上传用户所有。
  • 3. 本站RAR压缩包中若带图纸,网页内容里面会有图纸预览,若没有图纸预览就没有图纸。
  • 4. 未经权益所有人同意不得将文件中的内容挪作商业或盈利用途。
  • 5. 人人文库网仅提供信息存储空间,仅对用户上传内容的表现方式做保护处理,对用户上传分享的文档内容本身不做任何修改或编辑,并不能对任何下载内容负责。
  • 6. 下载文件中如有侵权或不适当内容,请与我们联系,我们立即纠正。
  • 7. 本站不保证下载资源的准确性、安全性和完整性, 同时也不承担用户因使用这些下载资源对自己和他人造成任何形式的伤害或损失。

评论

0/150

提交评论