
OECD Publishing

USING AI IN THE WORKPLACE

OPPORTUNITIES, RISKS AND POLICY RESPONSES

OECD ARTIFICIAL INTELLIGENCE PAPERS

March 2024, No. 11


This paper is published under the responsibility of the Secretary-General of the OECD. The opinions expressed and the arguments employed herein do not necessarily reflect the official views of OECD member countries.

This document, as well as any data and map included herein, are without prejudice to the status of or sovereignty over any territory, to the delimitation of international frontiers and boundaries and to the name of any territory, city or area.

Cover image: © Kjpargeter/S

© OECD 2024

The use of this work, whether digital or print, is governed by the Terms and Conditions to be found at /termsandconditions.


Using AI in the workplace: Opportunities, risks and policy responses

Introduction and purpose

Policy makers across the globe are grappling with the rapid developments in artificial intelligence (AI) technologies and their adoption in the workplace. Even before the advent of generative AI, impressive progress had been made in a range of domains, including computer vision, reasoning, problem solving, as well as reading comprehension and learning. Employers are beginning to use AI applications to sift through CVs, interact with customers, allocate, direct, and evaluate work, and to identify and provide training. Workers are using AI in an increasing number of tasks. The advent of generative AI has resulted in a shift and acceleration in the use and impact of AI, which is now a general-purpose technology that is likely to affect every occupation and sector of the economy.

AI can bring significant benefits to the workplace. In the OECD AI surveys, four in five workers said that AI had improved their performance at work and three in five said it had increased their enjoyment of work (Lane, Williams and Broecke, 2023[1]). Workers were also positive about the impact of AI on their physical and mental health, as well as its usefulness in decision making (Lane, Williams and Broecke, 2023[1]). Not investing in AI and not adopting it in the workplace would be a missed opportunity to boost productivity and improve job quality, amongst other benefits. Unequal access to and use of AI in the workplace could lead to increased disparities between firms and workers as well as across countries.

To realise these opportunities, it is however necessary to address the risks raised by AI for the labour market. The OECD AI surveys show that 3 in 5 workers are worried about losing their job to AI in the next 10 years, and 2 in 5 expect AI to reduce wages in their sector. Workers also express concerns around increased work intensity and the collection and use of data, amongst others (Lane, Williams and Broecke, 2023[1]). Other risks include bias and discrimination, unequal impact on workers, lack of human oversight, and lack of transparency, explainability and accountability.

Box 1. The OECD AI surveys

Wishing to capture workers' and employers' own perceptions of the current and future impact of AI on their workplaces, the OECD surveyed a total of 5 334 workers and 2 053 firms in the manufacturing and financial sectors in Austria, Canada, France, Germany, Ireland, the United Kingdom and the United States. The surveys examine how and why AI is being implemented in the workplace; its impact on management, working conditions and skill needs; its impact on worker productivity, wages and employment; what measures are being put in place to manage transitions; and concerns and attitudes surrounding AI. The most frequently reported uses of AI include data analytics and fraud detection in the finance sector, and production processes and maintenance tasks in manufacturing.


The survey reveals that both workers and employers are generally very positive about the impact of AI on worker productivity and working conditions. Around 80% of AI users said that AI had improved their performance at work, and AI users were more than four times as likely to say that AI had improved working conditions as to say that AI had worsened them.

However, there are also concerns, including about job loss – an issue that should be closely monitored. The surveys also indicate that, while many workers trust their employers when it comes to the implementation of AI in the workplace, more can be done to improve trust. In particular, the surveys show that both training and worker consultation are associated with better outcomes for workers.

Source: Lane, M., M. Williams and S. Broecke (2023[1]), "The impact of AI on the workplace: Main findings from the OECD AI surveys of employers and workers", https://doi.org/10.1787/ea0a0fe1-en.

A risk-based approach has been common in thinking about the policy and regulatory response to AI. In December 2023, the European Parliament and Council reached a provisional agreement on the Artificial Intelligence Act, which will establish rules for AI based on its potential risks and level of impact, with some applications being banned and obligations imposed for applications that are deemed to be high risk – such as many uses in the workplace. In the United States, the Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence issued in October 2023 directs "the most sweeping actions ever taken to protect Americans from the potential risks of AI systems", including, for example, developing principles and best practices to mitigate the harms and maximise the benefits of AI for workers. The Bletchley Declaration by countries that attended the AI Safety Summit at Bletchley Park (United Kingdom) in November 2023 focused on identifying AI safety risks and building risk-based policies. In many cases, AI does not operate in a regulatory vacuum and there are already laws that regulate its use and impact. However, there are gaps in the existing regulatory and policy frameworks, and urgent policy action is needed.

As policy makers implement these measures, there is a need for specific guidance on risks and measures linked to the use of AI in the workplace. This note uses the OECD Principles on trustworthy AI and draws on the substantial body of work done by the OECD in this field (OECD, 2023[2]) to identify key risks posed by the use of AI in the workplace and the main policy gaps, and to offer possible policy avenues specific to labour markets. The note presents the risks and the associated policy responses individually, but these risks interact with each other and measures to address one risk will often contribute to addressing others as well.

Risks, policy gaps and policy avenues

Automation and job displacement

Risks: AI is an automating technology that differs from previous technologies in at least three important aspects. First, AI extends the types of tasks that can be automated to many non-routine cognitive tasks, and therefore exposes workers who were previously relatively protected from automation (e.g. the high-skilled) to the risks of displacement. Second, all occupations and sectors are likely to be affected by AI (as opposed to, for example, robots, which primarily impacted the manufacturing sector). Third, the speed of AI development and adoption in the labour market leaves little time for adjustment and could raise frictional unemployment. So far, there is little evidence of a net negative impact of AI on the number of jobs, but the risk of automation remains substantial: the OECD estimates that occupations at the highest risk of automation account for about 27% of total employment. It will be important to help workers move from declining sectors and occupations into new and growing ones.


Figure 1. Percentage of employment in highly automatable jobs, 2019

[Chart; vertical axis shows the share of employment, from 0 to 40 (per cent).]

Source: OECD (2023[2]), OECD Employment Outlook 2023, https://doi.org/10.1787/08785bba-en.

Policy gaps: Most countries recognise the importance of skills and training to adapt to AI-related automation, but few have proposed concrete action plans, and few are prepared for the quantum leap in training that will be required. Existing programmes tend to focus on digital or AI skills, but few recognise the importance of complementary skills (e.g. communication, creativity, or working with others), and only a minority have developed an integrated approach for AI skills development. Social dialogue will also be important in managing these transitions, but faces its own challenges (see section on social dialogue below).

Possible policy directions that countries may consider:

• Monitoring the impact of AI on the labour market to identify jobs most at risk of automation.

• Anticipating future skill needs related to AI adoption in the workplace.

• Skills development programmes at all levels of education, to develop skills needed to work with and develop AI.

• Training for workers and managers to support the adoption and use of trustworthy AI.

• Employment support measures, including targeted training programmes and career guidance, for workers at direct risk of automation by AI.

• Adequate social protection for workers displaced by AI.

• Supporting social dialogue (see below).

Rising inequality

Risks: Workers face different risks of automation, for example depending on their skills, occupation or firm size. They also have different exposure to risks of bias and discrimination, privacy breaches, and threats to health and safety. On the other hand, workers who do not have access to AI in the workplace cannot benefit from the opportunities it offers, for example to be more productive, to overcome obstacles linked to disability, or to access new jobs created by AI. Emerging evidence shows that AI can also increase the productivity of low-skilled workers in certain occupations, reducing productivity gaps with higher-skilled workers. There is therefore a concrete risk that the adoption of AI in the workplace leads to increased inequality in the labour market.


Figure 2. Percentage of employers who think AI helps/harms groups of workers, finance and manufacturing

[Chart showing, for older workers, low-skilled workers, female workers and workers with disabilities, the share of employers responding "Help them" versus "Harm them"; axis from 0% to 60%.]

Source: OECD (2023[2]), OECD Employment Outlook 2023, https://doi.org/10.1787/08785bba-en.

Policy gaps: While some countries already have policies in place such as training or subsidies for AI adoption, they may be poorly targeted, and there is a need to better understand which groups face the highest risk so that public resources are used efficiently. Where AI offers opportunities for reducing inequalities, governments can do more to foster the development and adoption of such tools, especially among smaller firms, which have fewer means to access good-quality AI tools. For example, even though many AI solutions exist to help people with disabilities overcome labour market barriers, there are challenges with funding, certification and quality standards for such tools, as well as a lack of accessibility training among developers. Policies to address the other risks discussed in the rest of this brief will also help address inequalities.

Possible policy directions that countries may consider:

• Identifying the groups most exposed to AI-related risks in the labour market.

• Training and support targeted to disadvantaged workers prior to and during AI adoption.

• Targeted grants or subsidies for SMEs to facilitate their adoption of trustworthy AI.

• Tackling risks in AI systems related to bias and discrimination and to autonomy (see below).

• Involving vulnerable and underrepresented groups in the development and adoption of AI systems for the workplace.

Risks to occupational health and safety

Risks: AI systems can be used to improve workers' health and safety at work, for example by automating dangerous tasks, detecting hazards, or monitoring worker fatigue. The OECD AI surveys show, for example, that the adoption of AI at work increased enjoyment at work for 3 in 5 workers (Lane, Williams and Broecke, 2023[1]). At the same time, the use of AI creates new risks from an Occupational Safety and Health (OSH) perspective. For instance, some AI-powered monitoring systems may increase time and performance pressure to the extent that they cause stress and/or create incentives for workers to ignore safety standards. Stress may also result from decisions that are unfair, lack transparency and explainability, and where there is no easy opportunity for redress. The disappearance of routine tasks


through AI may deprive the worker of the respite provided by these tasks, leading to more mentally taxing shifts and possibly increasing the risk of physical injury. Increased use of AI in the workplace may also decrease human contact, to the detriment of mental health.

Figure 3. Number of incidents causing physical or psychological harm to workers, 2023

[Chart of monthly values, January to December 2023; vertical axis from 0 to 40.]

Source: OECD AI Incidents Monitor (AIM), https://oecd.ai/en/incidents.

Gaps: Most countries have regulations that set out employers' obligations towards employees concerning their occupational safety and health. While the details vary from country to country, employers usually have to assess risks, eliminate or reduce them with preventative and protective measures, and inform workers about the risks and train them. While in theory such regulations should also cover AI, there may be gaps, particularly regarding mental health. Also, while most countries have product liability regulations, these will likely need to be adapted to the use of AI systems. Finally, labour inspectorates may lack the knowledge and/or capacity to address new risks posed by AI.

Possible policy directions that countries may consider:

• Reviewing and, if necessary, updating labour laws and OSH regulations to address the use of AI in the workplace.

• Health and safety risk assessments, audits and certifications for AI systems to ensure workers' health and safety from the design stage.

• Strengthening labour inspectorates' capacities to inspect and enforce compliance with the law.

• Involving managers, workers, and their representatives in the design and adoption of AI systems in the workplace.

• Informing employers, workers and their representatives about the possible OSH risks of AI systems used in the workplace.

Privacy breaches

Risks: The increased use of AI in the workplace will likely result in the greater collection and analysis of data on workers and job candidates to train and use these systems. The data may or may not be personal, and could include information such as worker movements, biometric data like heart rates and blood pressure, as well as digital activities. Workers may feel that this is an invasion of their privacy, in particular



if they gave no consent to the collection and use of the data. Workers might also worry that the data are used for purposes other than those for which they were intended. Moreover, data collection may result in increased monitoring and surveillance, which could lead to stress.

Gaps: The protection of workers against privacy risks varies considerably across OECD countries but, even in those with the strongest protections, gaps remain. For example, in EU countries, the General Data Protection Regulation (GDPR) strengthens individuals' control and rights over their personal information, but there are significant enforcement gaps. The GDPR also leaves data protection in the employment context to be addressed at the Member State level, so these rules are still far from being harmonised across countries, consistent and comprehensive. Protections are even weaker in other OECD countries. For example, in most US states, there are very limited protections when it comes to the collection and use of data on workers by employers.

Figure 4. Percentage of workers who are worried about their privacy, manufacturing and finance employers who use AI

[Chart of responses to the statement "I worry about my privacy when my data is collected": strongly agree, somewhat agree, neither agree nor disagree, somewhat disagree, strongly disagree; the shares shown are 11%, 14%, 19%, 24% and 32%.]

Note: Workers who report that their employers' use of AI involved the collection of data on workers or their work were asked: "To what extent do you agree or disagree with the following statements? I worry about my privacy when my data is collected".

Source: Lane, M., M. Williams and S. Broecke (2023[1]), "The impact of AI on the workplace: Main findings from the OECD AI surveys of employers and workers", https://doi.org/10.1787/ea0a0fe1-en.

Possible policy directions that countries may consider:

• Impact assessments and quality labels to evaluate the privacy and security of personal information in AI systems.

• Restricting the collection, use, inference, and disclosure of workers' personal information.

• Requirements to safeguard workers' personal information and ensure appropriate handling of data (a minimal illustration follows after this list).

• Providing information to workers about data collected by employers and the purpose of use (see also Transparency).

• Rights for workers to correct, delete, opt out of, or limit the use of sensitive personal information, including through workers' representatives.

• Quality labels and certifications for AI systems with good data protection.
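To make "appropriate handling of data" more concrete, the following is a minimal, hypothetical sketch (not drawn from the OECD paper) of two common safeguards: pseudonymising worker identifiers before analysis, and only reporting metrics for groups large enough to avoid singling out individuals. The record layout, the salt value and the group-size threshold are illustrative assumptions.

# Minimal sketch, assuming hypothetical (worker_id, team, hours_active) records.
import hashlib
from collections import defaultdict

SALT = "store-separately-and-rotate"   # illustrative placeholder, not a real secret
MIN_GROUP_SIZE = 5                     # suppress results for smaller groups

def pseudonymise(worker_id: str) -> str:
    # Replace the raw identifier with a salted hash before any analysis.
    return hashlib.sha256((SALT + worker_id).encode()).hexdigest()[:12]

def team_totals(records):
    # Aggregate hours per team; report only teams with enough distinct workers.
    hours, members = defaultdict(float), defaultdict(set)
    for worker_id, team, hours_active in records:
        hours[team] += hours_active
        members[team].add(pseudonymise(worker_id))
    return {team: hours[team] for team in hours
            if len(members[team]) >= MIN_GROUP_SIZE}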


Bias and discrimination

Risks: Trustworthy AI can help identify and reduce human discrimination and bias in the workplace by supporting decisions with quantitative evidence. However, if not well designed and/or trained on biased or non-representative data, AI systems can replicate and systematise human biases that have historically existed in the labour market, leading to bias and discrimination in who can see job postings, who is shortlisted for job openings, who is assigned which tasks at work, who receives training, and how performance is assessed, among others.

Gaps: In theory, existing anti-discrimination legislation is applicable to AI use in the workplace. There may, however, be gaps and loopholes in this legislation. Relevant case law is still limited and will show where legislation may need to be reviewed. Lack of transparency and explainability of AI systems (see Transparency and Explainability) poses further challenges in countries that rely heavily on individual action for seeking redress, making it difficult to contest AI(-based) workplace decisions using only existing anti-discrimination laws.

Figure 5. Percentage of AI-using organisations that do not take steps to reduce unintended bias in the system

[Chart; vertical axis from 0 to 80. Categories: not reducing unintended bias; not making sure they can explain AI-powered decisions; not developing ethical AI policies; not guarding against adversarial threats and potential incursions to keep systems healthy; not safeguarding data privacy through the entire lifecycle.]

Source: IBM Watson (2022[3]), IBM Global AI Adoption Index 2022, /downloads/casGVAGA3JP?mkt_tok=NjczLVBISy05NDgAAAGH0tcnDiI.

Possible policy directions that countries may consider:

• Reviewing and, where necessary, adapting existing anti-discrimination legislation to the use of AI in the workplace.

• Impact assessments to assess risks of bias prior to implementation, and regular audits after implementation (see the sketch after this list).

• Quality labels and certifications against bias.

• Involving social partners and representatives of vulnerable and underrepresented workers in the design and deployment of AI systems in the workplace.
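As an illustration of what a pre-implementation impact assessment or a recurring audit might compute, the short sketch below calculates per-group selection rates and the ratio of the lowest to the highest rate, a statistic sometimes compared against the "four-fifths" benchmark used in some employment-selection guidance. It is a hypothetical example rather than a method prescribed by this paper; the record format and group labels are assumptions.

from collections import defaultdict

def selection_rates(records):
    # records: iterable of (group, selected) pairs from an AI screening tool.
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in records:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    # Lowest group selection rate divided by the highest (1.0 means parity).
    return min(rates.values()) / max(rates.values())

# Hypothetical shortlisting outcomes, not real data.
outcomes = [("group_a", True), ("group_a", False), ("group_a", True),
            ("group_b", False), ("group_b", False), ("group_b", True)]
rates = selection_rates(outcomes)
print(rates)                          # {'group_a': 0.666..., 'group_b': 0.333...}
print(disparate_impact_ratio(rates))  # 0.5 here; low values warrant closer review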


Lack of autonomy, agency, and dignity

Risks: Firms frequently introduce AI systems to streamline production processes, boost efficiency and increase productivity. These systems can give workers real-time and continuous feedback on their performance, direct work and provide behavioural nudges. This "algorithmic management" can unduly limit workers' autonomy, reduce human contact and the ability of workers to discuss their work with managers or contest decisions that seem unsafe, unfair, or discriminatory. These practices could undermine workers' sense of professional identity and meaningfulness, and present risks for physical and mental health and safety at work.

Gaps: Some countries have introduced regulation on workplace monitoring (e.g. the Electronic Communications Privacy Act in the United States, the GDPR in the European Union and the United Kingdom, or the Personal Information Protection and Electronic Documents Act in Canada) and automated decision-making (the Algorithmic Accountability Act in the United States and the GDPR). A comprehensive approach to regulating algorithmic management is still lacking in most jurisdictions, however. The EU Platform Work Directive is one of the first pieces of legislation to do so, but it only applies to a very small subset of the workforce (platform workers).

Figure 6. Percentage of workers whose sense of autonomy decreased, manufacturing and finance employers who use AI

[Chart comparing workers managed by AI with other AI users; axis from 0 to 16 (per cent).]

Source: OECD (2023[2]), OECD Employment Outlook 2023, https://doi.org/10.1787/08785bba-en.

Possible policy directions that countries may consider:

• Defining clear boundaries for the use of AI systems, e.g. on the permissible extent of monitoring and automated decision-making.

• Requiring human oversight of decisions that affect workers' safety, rights, and opportunities (a minimal sketch follows after this list).

• Consultations and involvement of workers and/or their representatives in the adoption of AI systems (see Challenges to social dialogue).
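A minimal sketch of how "human oversight of decisions that affect workers' safety, rights, and opportunities" could be operationalised is given below. The decision categories, the confidence threshold and the routing rule are illustrative assumptions, not requirements stated in this paper.

# Route high-impact or low-confidence automated decisions to a human reviewer.
HIGH_IMPACT = {"dismissal", "disciplinary_action", "safety_override"}   # hypothetical
CONFIDENCE_THRESHOLD = 0.85                                             # illustrative

def route_decision(category: str, model_score: float) -> str:
    # Return "human_review" or "automated" for a proposed workplace decision.
    if category in HIGH_IMPACT or model_score < CONFIDENCE_THRESHOLD:
        return "human_review"
    return "automated"

print(route_decision("shift_scheduling", 0.62))  # human_review (low confidence)
print(route_decision("dismissal", 0.97))         # human_review (high-impact category)
print(route_decision("shift_scheduling", 0.93))  # automated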


Lack of transparency

Risks: The ability of workers to exercise specific rights (e.g. the right not to be subject to automated decision-making), detect risks, and/or effectively question outcomes, hinges on their awareness of their interactions with AI systems and how that system reaches its outcomes (see also Insufficient explainability). However, AI use can be difficult to detect without explicit disclosure. For instance, Harris, B. et al. (2023[4]) find that only 17% of adults in the United Kingdom can often or always tell when they are using AI. Even if individuals are aware of their interactions with AI, gaining insight into its decision-making process can be difficult, for instance due to developers' reluctance to disclose information, or to the complexity of the system.

Gaps: Most AI principles underscore the importance of transparency of AI and its use, but translating these concepts into practice may be complex. For instance, several states in the United States have introduced laws requiring employers to notify applicants and/or employees about their interactions with AI, but often these regulations do not encompass all conceivable AI applications, and focus on the use of AI for recruitment or electronic monitoring. In the EU, the Platform Work Directive provides individuals with some rights to information on the logic of algorithms where automated decision-making is used; however, it only applies to platform workers. In addition, there may be barriers to transparency due to intellectual property rights (trade secrets) and privacy laws, both of which limit how much information can be disclosed.

Possible policy directions that countries may consider:

• Requirements to disclose the use of AI systems in the workplace and in hiring processes, for both employers and workers.

• Reviewing and, if necessary, updating privacy and intellectual property laws to address potential ambiguities and balance the rights they protect against the need for transparent AI (use).

Insufficient explainability

Risks: AI systems, particularly those using complex technologies like deep neural networks, yield outcomes that can be difficult or even impossible to explain. A lack of explainability can undermine the trust and confidence that people place in AI systems and the decisions that are informed by them. It also makes it difficult for individuals to provide informed consent to the use of such systems, or to identify and seek redress for adverse effects caused by AI systems in the workplace. A lack of trust and confidence, in turn, can cause worker resistance and hence hinder the adoption of AI systems in the workplace.

Gaps: Policy makers in various countries have touted explainability as a desirable property of AI systems; however, there is still no broad agreement on what explainability entails. The GDPR, for example, requires data subjects to be provided with "meaningful information about the logic involved" in automated decision-making processes, which often starts by providing information about what the AI system has been "optimised" to do. Explanatory tools, such as a simple algorithm that approximates the behaviour of the AI system, can provide approximate explanations. For some AI systems (and depending upon the definition used), explainability may be difficult if not impossible to achieve, or it may be in conflict with other desirable objectives such as accuracy or privacy. Neither the EU AI Act nor the US Presidential Executive Order mentions explainability.
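To illustrate the kind of explanatory tool mentioned above (a simple algorithm that approximates the behaviour of a more complex AI system), the sketch below fits a shallow decision tree to the predictions of a black-box model, sometimes called a global "surrogate model". The data, feature names and models are synthetic assumptions, and the sketch presumes scikit-learn is available; it is not a method specified in this paper.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
feature_names = ["tenure_years", "training_hours", "error_rate"]   # hypothetical
X = rng.random((500, 3))
y = (0.6 * X[:, 0] + 0.3 * X[:, 1] - 0.4 * X[:, 2] > 0.25).astype(int)

# A complex "black-box" model stands in for the AI system being explained.
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Fit a depth-3 tree to the black box's own predictions and print its rules.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, black_box.predict(X))
print("Fidelity to black box:", surrogate.score(X, black_box.predict(X)))
print(export_text(surrogate, feature_names=feature_names))

The surrogate does not reveal how the underlying system actually works; it only summarises its observed behaviour, which is why fidelity to the black box should be reported alongside the rules.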

Possible policy directions that countries may consider:

• Requiring developers to provide documentation, instructions of use, and explanatory tools to accompany AI systems used in the workplace.

• Requiring employers and workers to disclose the use of AI systems in the workplace and in hiring processes, and to provide results of explanatory tools upon the request of workers or their representatives.


Lack of accountability

Risks: Establishing clear lines of accountability is fundamental for a trustworthy use of AI and the enforcement of regulations. It is not always clear, however, which ac
