11 November 2025
Guidance for Risk Management of Artificial Intelligence systems
Contents
Executive summary
1 Introduction
1.1 Objective
1.2 Scope
1.3 Audience
2 Risk Management methodology
3 Understanding the AI lifecycle
3.1 Definition of an AI system
3.2 Lifecycle of an AI system
3.3 Procuring an AI system
4 Interpretability and explainability as sine qua non
4.1.1 Risk 1: Uninterpretable or unexplainable AI system
5 Risks associated with main Data Protection Principles
5.1 Principle of fairness
5.1.1 Risk 1: Bias caused by the lack of data quality in training personal data
5.1.2 Risk 2: Bias in training personal data
5.1.3 Risk 3: Overfitting to the training personal data
5.1.4 Risk 4: Algorithmic bias
5.1.5 Risk 5: Interpretation bias
5.2 Principle of accuracy
5.2.1 Legal meaning of accuracy in the EUDPR
5.2.2 Statistical meaning of accuracy in AI development
5.2.3 Risk 1: Inaccurate personal data output
5.2.4 Specific example: Inaccurate output due to data drift and deterioration of input personal data quality
5.2.5 Risk 2: Unclear information from the AI system provider
5.3 Principle of data minimisation
5.3.1 Risk 1: Indiscriminate collection and storage of personal data
5.4 Principle of security
5.4.1 Risk 1: AI system output disclosure of training personal data
5.4.2 Risk 2: Personal data storage and personal data breaches
5.4.3 Risk 3: Personal data leakage through application programming interfaces
5.5 Data subject’s rights
5.5.1 Risk 1: Incomplete identification of the personal data processed
5.5.2 Risk 2: Incomplete rectification or erasure
6 Conclusion
Annex 1: Metrics
Annex 2: Overview of concerns and risks
Annex 3: Checklist per phase of the AI lifecycle
Developing an AI system
Procuring an AI system
Executive summary
The development, procurement and deployment of AI systems involving the processing of personal data by European Union Institutions, Bodies, Offices and Agencies (EUIs) raise significant risks to data subjects’ fundamental rights and freedoms, including but not limited to privacy and data protection. As the cornerstone of Regulation 2018/1725 (EUDPR),[1] the principle of accountability enshrined in Article 4(2) (for administrative personal data) and Article 71(4) (for operational personal data) requires EUIs to identify and mitigate these risks, as well as to demonstrate how they did so. This is all the more important for AI systems that are the product of intricate supply chains often involving multiple actors processing personal data in different capacities.
This Guidance aims to guide EUIs acting as data controllers in identifying and mitigating some of these risks. More specifically, it focuses on the risk of non-compliance with certain data protection principles elicited in the EUDPR for which the mitigation strategies that controllers must implement can be technical in nature – namely fairness, accuracy, data minimisation, security and data subjects’ rights. As such, the technical controls listed in this Guidance are by no means exhaustive, and do not exempt EUIs from conducting their own assessment of the risks raised by their specific processing activities. This Guidance also refrains from ranking the likelihood and severity of the risks it describes.
First, this document provides an overview of the risk management methodology according to ISO 31000:2018 (Section 2). Second, it outlines the typical development lifecycle of AI systems as well as the different steps involved in their procurement (Section 3). Third, it explores the notions of interpretability and explainability as cross-cutting concerns that condition compliance with all the provisions covered in this Guidance (Section 4). Lastly, it breaks down the four general principles listed above, namely fairness, accuracy, data minimisation and security, into specific risks, each of which is then described and paired with technical measures that controllers can implement to mitigate these risks (Section 5).
The EDPS issues this guidance in his role as a data protection supervisory authority and not in his role as market surveillance authority under the AI Act. This guidance is without prejudice to the Artificial Intelligence Act.

[1] Regulation (EU) 2018/1725 of the European Parliament and of the Council of 23 October 2018 on the protection of natural persons with regard to the processing of personal data by the Union institutions, bodies, offices and agencies and on the free movement of such data, and repealing Regulation (EC) No 45/2001 and Decision No 1247/2002/EC [2018] OJ L 295/39, https://data.europa.eu/eli/reg/2018/1725/oj.
1 Introduction
1.1 Objective
This document aims to guide EU Institutions, Bodies, Offices and Agencies (EUIs) acting as controllers within the meaning of Article 3(8) of Regulation (EU) 2018/1725 (EUDPR) in identifying and mitigating some of the risks for data subjects’ fundamental rights raised by the processing of personal data when developing, procuring and deploying AI systems.[2]
It is intended to complement Part II of the Accountability on the ground toolkit on Data Protection Impact Assessments & Prior Consultation.[3] It also complements the EDPS Orientations on the use of Generative AI by EUIs for ensuring data protection compliance when using Generative AI systems, issued in June 2024, which provide practical advice on how EUIs can ensure compliance with the EUDPR when developing or using generative AI systems.[4] The present document is both broader, as it encompasses all types of AI systems, but also narrower, as it focuses on technical rather than legal mitigation strategies (see Section 1.2).
This document provides an analytical framework for identifying and treating risks that may arise in AI systems, structured according to the data protection principles potentially affected. It does not constitute and shall not be relied upon as a set of compliance guidelines. The sole purpose of this document is to facilitate a systematic assessment of risks from a data protection perspective. In other words, it does not replace the necessary compliance assessment of each AI system to be done by the controller, who must ensure that the risks identified (with the support of this framework) are managed as necessary to meet all the obligations arising under the EUDPR.
The EDPS issues this guidance in his role as a data protection supervisory authority and not in his new role as market surveillance authority under the AI Act.
1.2 Scope
For the purpose of this Guidance, and building on the terminology used in ISO 31000:2018,[5] the notion of “risk” is expressed in terms of “risk source”, “event”, “consequence” and “control”, where the “risk source” refers to the processing of personal data in the context of the procurement, development or deployment of an AI system, the “event” refers to a situation in which that processing would impinge on data subjects’ fundamental rights and freedoms, the “consequence” refers to the material or non-material harm this might cause to data subjects,[6] and the “control” refers to the mitigation strategies that controllers can put in place to reduce the likelihood of that risk materialising and/or the impact it has on data subjects should it materialise.[7]

[2] Regulation (EU) 2018/1725 of the European Parliament and of the Council of 23 October 2018 on the protection of natural persons with regard to the processing of personal data by the Union institutions, bodies, offices and agencies and on the free movement of such data, and repealing Regulation (EC) No 45/2001 and Decision No 1247/2002/EC [2018] OJ L 295/39, https://data.europa.eu/eli/reg/2018/1725/oj.
[3] EDPS, Accountability on the ground Part II: Data Protection Impact Assessments & Prior Consultation, February 2018, https://www.edps.europa.eu/sites/default/files/publication/18-02-06_accountability_on_the_ground_part_2_en.pdf
[4] European Data Protection Supervisor, Generative AI and the EUDPR. Orientations for ensuring data protection compliance when using Generative AI systems (Version 2), 28 October 2025, https://www.edps.europa.eu/system/files/2025-10/25-10_28_revised_genai_orientations_en.pdf
[5] International Organization for Standardization, ISO 31000:2018 Risk management — Guidelines, Edition 2, 2018, /standard/65694.html.
Pursuant to Article 1(2) EUDPR, the objective of the Regulation is to protect the rights and freedoms of natural persons, including but not limited to privacy and data protection, in the context of the processing of their personal data. According to Articles 4(2), 26(1) and 27(1), EUIs are responsible for identifying and mitigating risks to these rights and freedoms raised by their processing activities, and for demonstrating how they did so. This is particularly important when it comes to the procurement, development and deployment of AI systems, all the adverse impacts of which have not yet been assessed. It is therefore crucial for controllers to properly identify and mitigate, for each of their processing activities, the risks these raise for all data subjects’ fundamental rights. Compliance with the provisions explicitly laid down in the EUDPR is a proxy to achieve that objective. This Guidance therefore sticks to a conceptualisation of risk in which the “event” is a situation of non-compliance with a provision explicitly laid down in the text.
More specifically, this Guidance focuses on the risk of non-compliance with a select few data protection principles for which the “controls” that controllers must implement can be technical in nature – namely fairness, accuracy, data minimisation, security and certain data subjects’ rights. The EDPS insists on the fact that the list of risks and countermeasures outlined in this Guidance is not exhaustive, but merely reflects some of the most pressing issues that controllers must address when procuring, developing and deploying AI systems.
1.3 Audience
The intended audience for this document is EUIs’ staff involved in the procurement, development and deployment of AI systems, including software developers, data scientists, IT engineers, IT project managers, Data Protection Officers and Data Protection Coordinators.

[6] Recital 46 EUDPR specifies that “[t]he risk to the rights and freedoms of natural persons may result from personal data processing which could lead to physical, material or non-material damage, in particular: where the processing may give rise to discrimination, identity theft or fraud, financial loss, damage to the reputation, loss of confidentiality of personal data protected by professional secrecy, unauthorised reversal of pseudonymisation, or any other significant economic or social disadvantage; where data subjects might be deprived of their rights and freedoms or prevented from exercising control over their personal data; where personal data are processed which reveal racial or ethnic origin, political opinions, religion or philosophical beliefs, trade union membership, and the processing of genetic data, data concerning health or data concerning sex life or criminal convictions and offences or related security measures; where personal aspects are evaluated, in particular analysing or predicting aspects concerning performance at work, economic situation, health, personal preferences or interests, reliability or behaviour, location or movements, in order to create or use personal profiles; where personal data of vulnerable natural persons, in particular of children, are processed; or where processing involves a large amount of personal data and affects a large number of data subjects”.
[7] ISO 31000:2018 (section 3.8) uses the term “control”, defined as a “measure that maintains and/or modifies risk”. The EUDPR uses the term “measure”. The remainder of this document will use the term measure.
2 Risk Management methodology
According to ISO 31000:2018, risk management is a process by which an organisation can control the risks (see Figure 1). The core of that activity is the risk assessment part, during which an organisation successively identifies, analyses and evaluates the risks.[8]

Figure 1: risk assessment[9]
Risk identification involves systematically recognising risks that could potentially affect the organisation’s objectives. This step focuses on identifying the sources of risk, the areas of impact, and the events or situations that might lead to uncertainty. The goal is to create a comprehensive risk register that will be further analysed in the following steps. As already hinted at in Section 1.2, this Guidance assumes that the objective pursued by EUIs is to ensure that the processing of personal data with regard to which they act as controllers does not impinge on data subjects’ fundamental rights.
Risk analysis is the next step, during which the organisation examines the risks identified to understand their nature, their sources, their likelihood, and their potential consequences. This step aims to determine the likelihood of each risk materialising, as well as its impact on data subjects should it happen. For qualitative risk analysis, the levels of likelihood and impact can be graded on a scale comprising “Very low”, “Low”, “Medium”, “High” and “Very high”.[10] Once each of these elements in the scale is defined,[11] the risk can be evaluated as the product of its likelihood and the severity of its impact (Risk = Likelihood x Impact). This is typically represented via a risk matrix (see Figure 2).

[8] Isabel Barberá, AI Possible Risks & Mitigations - Named Entity Recognition, September 2023, https://www.edpb.europa.eu/system/files/2024-07/ai-risks_d1named-entity-recognition_edpb-spe-programme_en.pdf; Isabel Barberá, AI Possible Risks & Mitigations - Optical Character Recognition, September 2023, https://www.edpb.europa.eu/system/files/2024-06/ai-risks_d2optical-character-recognition_edpb-spe-programme_en_2.pdf
[9] From ISO 31000:2018.
Likelihood \ Severity   Very limited   Limited   Significant   Very significant
Very high               Medium         High      Very high     Very high
High                    Low            High      Very high     Very high
Low                     Low            Medium    High          Very high
Unlikely                Low            Low       Medium        Very high

Figure 2: qualitative matrix for risk[12]
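The qualitative evaluation Risk = Likelihood x Impact amounts to a simple table lookup. The following sketch encodes the matrix of Figure 2 in Python; the level names follow the Guidance, while the function name and data layout are editorial assumptions, not part of ISO 31000:2018.

```python
# Illustrative encoding of the qualitative risk matrix (Figure 2).
# The level names follow the Guidance; the layout is an editorial sketch.

SEVERITY = ["Very limited", "Limited", "Significant", "Very significant"]

RISK_MATRIX = {
    "Very high": ["Medium", "High", "Very high", "Very high"],
    "High":      ["Low", "High", "Very high", "Very high"],
    "Low":       ["Low", "Medium", "High", "Very high"],
    "Unlikely":  ["Low", "Low", "Medium", "Very high"],
}

def risk_level(likelihood: str, severity: str) -> str:
    """Qualitative risk level for a given likelihood/severity pair."""
    return RISK_MATRIX[likelihood][SEVERITY.index(severity)]

print(risk_level("High", "Significant"))  # -> Very high
```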
Risk evaluation is the final step of the risk assessment, where the results of the risk analysis are compared against the organisation’s risk criteria (such as risk appetite and tolerance) to determine whether each risk is acceptable or requires treatment. The outcome of this evaluation helps the organisation decide whether to avoid, mitigate, transfer, or accept the risks, depending on their severity and organisational goals.
After the risk assessment comes the risk treatment, the purpose of which is to select and implement measures to mitigate these risks effectively. It is an iterative process that involves several key steps. First, risk treatment options are formulated and selected. Then, a plan is developed and implemented to address the identified risks. After implementation, the effectiveness of the measures is assessed to determine whether the risk has been mitigated sufficiently. If the remaining risk is deemed acceptable, no further action is necessary. However, if the risk is still unacceptable, additional measures are taken to further reduce it.
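The iterative treatment loop just described can be sketched in a few lines: select measures, implement them, reassess, and stop once the residual risk is acceptable. The numeric risk scores and the example measures below are invented purely for illustration; real risk treatment typically works with qualitative levels, not numbers.

```python
# Editorial sketch of the iterative risk-treatment loop: apply measures
# in order, reassess after each, and stop when the residual risk is
# acceptable. Scores and measure names are invented for illustration.

def treat_risk(initial_risk: int, measures, acceptable: int):
    """Apply measures in order until the residual risk is acceptable."""
    residual = initial_risk
    applied = []
    for name, reduction in measures:
        if residual <= acceptable:      # remaining risk deemed acceptable
            break
        residual -= reduction           # implement the measure, reassess
        applied.append(name)
    return residual, applied

residual, applied = treat_risk(
    10, [("pseudonymisation", 4), ("access control", 3)], acceptable=4
)
print(residual, applied)  # -> 3 ['pseudonymisation', 'access control']
```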
This Guidance focuses on two specific aspects of the risk management process, namely risk identification and risk treatment; the risk analysis and the risk evaluation aspects are too dependent on the specific processing context and their assessment is better left to each organisation in line with their own risk criteria. This means that EUIs should perform a thorough analysis for each AI system they plan to use in order to also evaluate the likelihood and impact of the risks, and decide on the mitigating measures to address them, as well as on the residual risks.[13] This analysis could even lead to the conclusion that the EUI is unable to mitigate by reasonable means the risks posed by the planned AI system and thus a different solution to the organisation’s needs has to be found. In that case, the EUI would have to consult the EDPS prior to the processing pursuant to Article 40(1) EUDPR.

[10] In contrast to quantitative risk assessments, which require measurements that are often difficult to collect.
[11] Isabel Barberá, AI Possible Risks & Mitigations - Named Entity Recognition, September 2023, https://www.edpb.europa.eu/system/files/2024-07/ai-risks_d1named-entity-recognition_edpb-spe-programme_en.pdf
[12] ibid.
3 Understanding the AI lifecycle
3.1 Definition of an AI system
For the purposes of this Guidance, an AI system is understood within the meaning of Article 3(1) of Regulation 2024/1689 (AI Act) as “a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”.[14]
The AI Act, however, does not contain a definition of an “AI model”.[15] The terms AI system and AI model are often used as if they were synonyms, when they are not.
AI models are mathematical representations that capture, in a set of parameters, the patterns underlying their training personal data.[16] Although AI models are essential components of AI systems, they do not constitute AI systems on their own, as they will always require other software components to be able to function and interact with users and the virtual or physical environment. In fact, an AI system can be composed of more than one AI model. For example, a voice translator AI system could be composed of a first model transcribing voice data into text, a second model translating the text from one language to another and a third model producing as output voice data from the translated text.
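The distinction between a system and its models can be made concrete with a minimal sketch of the voice-translator example: three models chained by surrounding software. All class names, methods and the tiny "translation" table below are hypothetical placeholders, not a real library API.

```python
# Editorial sketch: an AI *system* composed of three AI *models*, mirroring
# the voice-translator example above. Classes, methods and the one-entry
# "translation" table are hypothetical placeholders, not a real API.

class TranscriptionModel:
    def run(self, audio: bytes) -> str:
        return "bonjour"                                 # placeholder speech-to-text

class TranslationModel:
    def run(self, text: str) -> str:
        return {"bonjour": "hello"}.get(text, text)      # placeholder FR -> EN

class SpeechSynthesisModel:
    def run(self, text: str) -> bytes:
        return text.encode()                             # placeholder text-to-speech

class VoiceTranslatorSystem:
    """The surrounding software that chains the models into one AI system."""
    def __init__(self):
        self.models = [TranscriptionModel(),
                       TranslationModel(),
                       SpeechSynthesisModel()]

    def translate(self, audio: bytes) -> bytes:
        out = audio
        for model in self.models:                        # each model feeds the next
            out = model.run(out)
        return out

print(VoiceTranslatorSystem().translate(b"<voice data>"))  # -> b'hello'
```

None of the three models is an AI system by itself; only the `VoiceTranslatorSystem` wrapper, which handles input, output and chaining, plays that role.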
3.2 Lifecycle of an AI system
Risks can appear in different parts of the development lifecycle of an AI system. Thus, it is necessary to understand the specificities of the development lifecycle of an AI system compared to a traditional development lifecycle (for non-AI systems).[17] Different risks may appear in the different phases of the development lifecycle (see Sections 4 and 5). The AI development lifecycle typically comprises the steps detailed in Figure 3.[18]
[14] Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) [2024] OJ L 2024/1689, http://data.europa.eu/eli/reg/2024/1689/oj.
[15] Although it defines a general-purpose AI model as “an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications” (Article 3(63) AI Act).
[16] ISO/IEC 22989:2022 defines an AI model as a “physical, mathematical or otherwise logical representation of a system, entity, phenomenon, process or data”.
[17] See ISO/IEC 15288, ISO/IEC 12207 and ISO 24748:2024.
[18] Details are provided in ISO 22989:2022.
Figure 3: AI development lifecycle (Inception/Analysis → Data acquisition and preparation → Development → Verification and validation → Deployment → Operation and monitoring → Continuous validation → Re-evaluation → Retirement)

1. Inception/Analysis: This initial stage involves clearly defining the problem the AI system is intended to solve and selecting the AI model architecture.
2. Data acquisition and preparation: The required training personal data depends on the objectives of the AI system. For example, an AI system meant to process images will expect images as training personal data. These images can come from various sources (internet, private databases, etc.). The training personal data to be fed to a specific AI system needs to be formatted, checked against applicable quality and legal requirements, and normalised before they can be used.
3. Development: AI systems can be programmed and trained to fulfil specific pre-defined limited functions. This step includes selecting appropriate algorithms, training the AI system on the prepared data, testing (to check if the AI system works and is free of bugs) and tuning its hyperparameters (e.g. learning rate) to improve its performance. Building the AI system might be done with a combination of “libraries” (that could be procured), acquired pre-trained models and internal development.
4. Verification and validation: After the development phase, the AI system is rigorously verified (“Are we building the product correctly?”) and validated (“Are we building the right product?”) to ensure it meets the functional and non-functional requirements set in the inception stage. This involves checking the AI system’s statistical accuracy, robustness, and generalisability using test and validation datasets. Issues related to the AI model during this phase are addressed by retraining the AI system.
5. Deployment: The AI system can be deployed in its final environment (which could be end-user devices, servers, cars, etc.).
6. Operation and monitoring: Once deployed, the AI system is then operated by its users and needs continuous monitoring to ensure it operates as expected. This includes tracking performance, updating the AI system to meet new requirements, and refining it based on feedback.
7. Continuous validation: When an AI system utilises continuous learning,[19] the operation and monitoring phase is extended into an additional phase of continuous validation. In this phase, training is conducted continuously while the system is live in production. The system’s performance is regularly assessed using test data to ensure proper functioning. Additionally, the test data may need to be updated periodically to better reflect the current production data, ensuring a more accurate evaluation of the AI system’s capabilities.
8. Re-evaluation: After the operation and monitoring phase and the possible continuous validation, it may become necessary to reassess the AI system based on its performance results. The operational results of the system should be thoroughly analysed and compared against the identified risks associated with the AI system to verify if the identified risks have been suitably mitigated. It is possible that, during this phase, risks that have not been previously identified appear. These would need to be treated in the next cycle of the risk management process presented in Section 2.
9. Retirement: An AI system should be responsibly and efficiently decommissioned when it is no longer needed or is replaced by a more advanced solution.
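The continuous-validation idea in step 7 can be sketched as a single validation round: while the system is live, score it on a (periodically refreshed) test set and flag it for re-evaluation when performance drops. The `predict` callable, the toy data and the 0.9 threshold below are illustrative assumptions.

```python
# Editorial sketch of one continuous-validation round (step 7): score the
# live system on refreshed test data and flag it for re-evaluation when
# performance falls below an acceptance threshold (assumed here: 0.9).

def continuous_validation(predict, test_inputs, test_labels, threshold=0.9):
    """Return (accuracy, needs_reevaluation) for one validation round."""
    correct = sum(1 for x, y in zip(test_inputs, test_labels) if predict(x) == y)
    accuracy = correct / len(test_labels)
    return accuracy, accuracy < threshold

# Toy model: predicts the parity of its input.
accuracy, flag = continuous_validation(lambda x: x % 2, [1, 2, 3, 4], [1, 0, 1, 0])
print(accuracy, flag)  # -> 1.0 False
```

In a real deployment the test set itself would be refreshed to track the current production data, as the step above notes, so that the accuracy figure stays meaningful under data drift.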
3.3 Procuring an AI system
On top of that, in many cases, building an AI system requires external expertise or the acquisition of a commercial product covering part of the functionality, the data, the security, etc. In those cases it is also important to carry out a risk evaluation already in the procurement cycle, before any budget is actually committed to a solution that would bring undesirable risks to the organisation.
In these cases, one or several phases are allocated to the external AI provider (who will take care of the provision of the AI system in whole or in part). The EUIs’ staff involved in the procurement of AI systems will need to coordinate efforts with the EUIs’ staff involved in the deployment of these AI systems (e.g. IT engineers, IT project managers, Data Protection Officers or Data Protection Coordinators) in order to draft the technical part of the call for tender, for the selection of the right product and the execution of the implementation of the AI system.
Several approaches can be taken to integrate a procured AI system into an existing infrastructure. As per Regulation 2024/2509,[20] the process followed by EUIs is the following:

1. Publication and transparency (Article 163): Ensure the principles of sound financial management, transparency and equal treatment.
2. Call for tenders (Articles 167-169): Launch an open call for tenders that outlines all necessary specifications. The tender specifications should contain requirements as to the capability of the prospective tenderer to procure the planned AI system and all relevant technical and procedural quality guarantees. Among others, information to be required is described in Section 5.3.
3. Selection and award criteria (Article 170): Evaluate offers based on predefined criteria (e.g. price, quality, sustainability).[21]
4. Execution (Article 175): Monitor implementation and ensure compliance.

[19] Ability to adapt and improve its performance over time by learning from new data without needing to be retrained from scratch.
[20] Regulation (EU, Euratom) 2024/2509 of the European Parliament and of the Council of 23 September 2024 on the financial rules applicable to the general budget of the Union (recast), https://eur-lex.europa.eu/legal-content/en/TXT/?uri=CELEX%3A32024R2509
In this case, the “execution phase” will be comprised of phases similar to some of the phases presented in Section 3.2, namely:

5.1 Verification and validation
5.2 Deployment
5.3 Operation and monitoring
5.4 Continuous validation
5.5 Re-evaluation
5.6 Retirement
Similarly to what happens in the development of AI systems, different risks may appear in the different phases of the procurement lifecycle (see Sections 4 and 5).