A Holistic Guide to Approaching AI Fairness Education in Organizations

WHITE PAPER
SEPTEMBER 2021

Cover: Getty Images/metamorworks
Inside: Getty Images/metamorworks; Getty Images/Drazen; Getty Images/Andrey Popov; Getty Images/pixelfit; Getty Images/alvarez; Getty Images/anandaBGD; Getty Images/Igor Kutyaev

Contents

Foreword
Executive summary
1 Introduction
  1.1 A brief introduction to AI fairness
  1.2 Why fairness?
  1.3 Approaching fairness education in a corporation
2 Senior leadership
3 Chief AI ethics officers
4 Managers
5 Build teams
6 Business teams
7 Policy teams
8 Conclusion
Case study – AI and youth
Contributors
Endnotes

© 2021 World Economic Forum. All rights reserved. No part of this publication may be reproduced or transmitted in any form or by any means, including photocopying and recording, or by any information storage and retrieval system.

Disclaimer: The Network of Global Future Councils is an invitation-only community that serves as a brain trust for the World Economic Forum and the world at large. This paper has been written by the World Economic Forum Global Future Council on AI for Humanity. The findings, interpretations and conclusions expressed herein are a result of a collaborative process facilitated and endorsed by the World Economic Forum, but whose results do not necessarily represent the views of the Forum, nor the entirety of its Members, Partners or other stakeholders, nor the individual Global Future Council Members listed as contributors, or their organizations.

Foreword

Kay Firth-Butterfield, Head of AI and Machine Learning, Member of the Executive Committee, World Economic Forum
Emily Ratté, Project Specialist, AI and Machine Learning; Manager, Global Future Council on AI for Humanity, World Economic Forum

Over the past few years, many technology companies – and, frankly, many organizations that had never thought of themselves as inherently technological – have begun to recognize the importance of ethical and responsible development, design, deployment and use of artificial intelligence (AI). This can be attributed to the increased use of AI in many commonplace organizational functions, such as marketing platforms, talent management tools and search engines; and, by extension, the increasingly frequent ethical dilemmas emerging from the way in which these AI applications have been created or used. Many organizations have developed principles regarding AI ethics, such as transparency, explainability, privacy, robustness and fairness.1 Not every organization will have the resources necessary to hire a dedicated team of AI ethicists, let alone experts focused on fairness specifically. Developing curricular materials to educate employees on the potential implications of biased or unfair AI, as well as methodologies, tools and practices to address these implications, will require even greater financial commitment and more resources.

The Global Future Council (GFC) on AI for Humanity was convened under the mandate of finding solutions to critical issues of AI fairness. Some readers may wonder why this council of experts is primarily focused on fairness when there are many other issues within the burgeoning field of "AI ethics" worthy of being further addressed. With this council convening in 2020, at a time marked by deep economic and social unrest and injustice, the World Economic Forum hoped to shine a spotlight on fairness as an essential part of any future in which AI continues to be developed, deployed and used at scale.

The GFC comprises 24 experts from around the world, who are making advancements in this space. Representing many professional and cultural backgrounds, sectors and industries, the group recognized, and early on came to a consensus on, the multidimensional nature of AI fairness – which precludes any one definition of what "fair" AI looks like. For this reason, the following report outlines a holistic approach to addressing AI fairness education in an organization, which can be adapted to different sector and industry contexts as necessary. The report draws on several collective values, including access, equality, equity and transparency.

With this holistic look at the role and impact of different members of a business in addressing AI fairness, we hope to provide options and a North Star for any organization open to improving its practices or creating products with a positive and equitable impact on the larger population of the world.

Executive summary

As organizations automate or augment their decision-making with AI, there is a high risk that the resultant decisions either create or reinforce unfair bias. The negative impact of bias and unfairness in AI does not affect individual victims alone. Organizations that design, develop and deploy AI can face serious repercussions such as brand/reputational damage, negative sentiment among employees, potential lawsuits or regulatory penalties, and loss of trust from all stakeholders, including customers and the general public.

This report aims to address one important part of organizations' approach to AI fairness: educating different teams about the role they play in advancing AI fairness. Holistic learning and education on AI fairness across an organization can drive employees to understand the important role they play in contributing to better, more equitable and more ethical use of AI.

This paper outlines six functions within an organization that have a particular role in operationalizing AI fairness: senior leadership, chief AI ethics officers, managers, build teams (data scientists, developers, product designers, engineering teams etc.), business teams (customer-facing teams such as sales, marketing and consulting) and policy teams. Each section of the report delineates the responsibilities of the team in contributing to AI fairness outcomes, as well as the competencies and training that should be measured and provided to the team to enable them to carry out their responsibilities.

The report also includes recommendations on further efforts needed to improve AI fairness outcomes beyond education, including defining an organization's fairness objectives, creating a supportive corporate culture and hiring diverse teams at all levels and parts of an organization. It ends with a case study on a child-centred approach to AI fairness, to help readers contextualize the information presented throughout.

1 Introduction

The Fourth Industrial Revolution is blurring the boundaries between the physical, digital and biological worlds. AI is driving this revolution.

Artificial intelligence (AI) has been in existence since the mid-1950s, when John McCarthy, computer and cognitive scientist, coined the term. AI refers to technologies that employ data and algorithms to perform complex tasks that would otherwise require human decision-making. AI is considered colloquially to be the simulation of human intelligence in machines. However, while AI systems today may perform with high accuracy on a given task or dataset, they do not have "general intelligence", or the ability to autonomously comprehend and respond to any decision, particularly in social contexts, in the same way that humans do as they constantly make decisions throughout their day.

1.1 A brief introduction to AI fairness

As a technology that can be used for a number of ends, including increasing efficiency and automating rote tasks, AI is being deployed to some degree in most large-scale organizations. The extent of AI implementation has become a differentiator for many businesses. According to KPMG, 84% of financial services companies state that AI adoption accelerated during the COVID-19 pandemic.2 However, as AI picks up momentum in business applications across the globe, the question of AI fairness looms large.

Because an AI system touches many teams within an organization before it is used by a customer or stakeholder – design, data science, developers, marketing, sales etc. – AI fairness cannot be made the responsibility of one team alone. For example, design teams, in their excitement to build a natural language processing model that functions in many linguistic contexts, may overlook issues of access for users with hearing impairments. Similarly, sales teams focused on AI's power to create value might neglect the ethical implications of a sale to an authoritarian government.

AI fairness is a pillar of the larger field of AI ethics, which aims to maximize the positive impact of AI while mitigating its risks, to benefit humans and the environment. AI ethics studies the design, development and deployment of AI systems in accordance with agreed-upon values and principles such as data responsibility, privacy, inclusion, transparency, accountability, security, robustness and fairness. While these principles and values may appear in an organization's code of conduct, AI ethics and fairness should be approached holistically, beyond compliance, as a continuous process to improve products and services to better serve both customers and broader society, ensuring AI's life cycle protects human rights and well-being.3

Decisions made by AI systems are said to be fair if they are objective with regard to protected indicators such as gender, ethnicity, sexual orientation or disability, and do not discriminate among various people or groups of people. For example, an AI-based hiring system may recommend candidates who are more outgoing or extroverted because many extroverted candidates were hired in the past. However, this decision does not take into account whether introverted mannerisms could be a result of cultural differences. This could be an unfair outcome of a technically accurate AI system.

This report aims to address one important part of organizations' approach to AI fairness: educating different teams about the role they play in advancing AI fairness. Holistic learning and education regarding AI fairness in an organization's ecosystem can drive employees to understand the important role they play in contributing to better, more equitable and more ethical use of AI.

1.2 Why fairness?

As organizations automate or augment decision-making with AI, there is a high risk that decisions either create or reinforce unfair bias. The problem of bias is not unique to AI. Creating fair and equitable systems is still a work in progress for all societies. This stems from core social issues unrelated to technology, such as structures of power and economy, and the lack of inclusion of heterogeneous perspectives in decision-making. Just as we take steps to address discrimination through education, public policy and regulations, we need to take steps to mitigate unintended and inappropriate discrimination embedded in AI systems.4 This is especially important since AI systems, largely designed by homogeneous groups – only 26% of positions in data and AI are held by women,5 and around 67% of AI professors are white6 – may amplify the biases of developers, thereby harming exponentially more people.

The negative impact of bias and unfairness in AI does not affect individual victims alone. Organizations that design, develop and deploy AI can face serious repercussions including brand/reputational damage, negative sentiment among employees, potential lawsuits or regulatory penalties, and loss of trust from all stakeholders, including customers and the general public.

Just a few years ago, discussions about AI fairness were still mostly conducted in academic and research circles. In recent years, as greater attention has been paid to real-life scenarios and use cases in which individuals have been harmed by algorithmic decision-making, many organizations are working to apply AI fairness research to improve their workforce development and business management processes. This may include using ethical and inclusive design practices in the initial stages of product design, or implementing gatekeeping processes so that AI products are not deployed before they have been checked for fairness.

According to a recent PwC market study, 47% of organizations test for bias in data, models and human use of algorithms.7 But, according to a 2021 BCG study, executives are broadly overestimating how responsible they are actually being, and are not appropriately measuring their use of AI against practical guidance frameworks.8 According to BCG's assessment of organizations' responsible AI (RAI), of the companies it surveyed, 14% were lagging, 34% were developing, 31% were advanced and 21% were leading. BCG also found that organizations are seriously overestimating their RAI progress. When BCG asked executives how they would define their organization's progress on its RAI journey, results indicated: no progress (2% of respondents), had defined RAI principles (11%), had partially implemented RAI (52%) or had fully implemented RAI (35%).9 Steven Mills, Managing Partner and Chief AI Ethics Officer at BCG, stated: "The results were surprising in that so many organizations are overly optimistic about the maturity of their responsible AI implementation. While many organizations are making progress, it's clear the depth and breadth of most efforts fall behind what is needed to truly ensure responsible AI implementation."10

It is against this backdrop that we deliver concerted attention to AI fairness within the larger realm of AI ethics. We identify four main reasons why AI fairness requires explicit attention from organizations today.

First, harms caused by biased results of AI systems are not trivial, and can affect large groups of people in substantial ways. One type of harm is when a biased AI system allocates or withholds an opportunity or resource from certain groups. If a biased decision was made for consequential decisions for large populations (e.g. access to educational institutions or government grants), the damage will be large. Another type of harm could result if a biased AI system does not work as well for certain groups. A typical example of "quality-of-service harms"11 is the varying accuracy in face recognition for different ethnic groups, which has a wide-ranging impact. A biased AI system could also reinforce discrimination against certain groups by perpetuating stereotypes.12 A noteworthy example of this is the use of the COMPAS algorithm, which disproportionately rated black defendants at higher risk of recidivism when compared with their white counterparts.13

Second, it is notable that the extent of the impact of unethical behaviour by human beings tends to be limited by an individual's relationships or position in a community or organization. The effect of biased or unfair AI systems, however, can be far greater, as software created in one corner of the world can easily be shared, sold and scaled around the world.

Third, AI fairness issues could go undetected unless attention is paid to them. As humans are inherently biased in many ways, based on factors such as upbringing, education, environment and more, they may unconsciously bake biases into the AI systems they build. Without awareness by the teams building AI models of the importance of AI fairness, if the AI system is deployed, the harms caused may go undetected. Take, for example, the use of AI in resource allocation – unless two individuals from different groups compare what they received, the discriminated individual will not realize that they received less than the other individual. Quality-of-service harms could similarly go undetected for a long time, especially if a discriminated individual subconsciously accepts a lower standard of service because it aligns with other similarly discriminatory experiences. Harms from AI systems that perpetuate common stereotypes are even more insidious, as they may go unrecognized in non-diverse companies and communities. Fundamentally, all types of harm caused by AI bias require a deliberate choice by an organization or stakeholder to investigate and mitigate them.

Lastly, and most critically, AI fairness is a socio-technical challenge, and there is no right answer. Fairness is a social construct and has many different definitions in different cultures and for different contexts. Encouraging a diverse and inclusive AI ecosystem is thus all the more crucial to ensure that one definition of fairness does not contradict another, and that the process of defining fairness itself is fair, with under-represented groups at the table leading the conversation. While there are tools that can help to assess fairness through various metrics, AI fairness is not a problem that can be solved simply through technical means. Beyond defining fairness in line with non-discriminatory values, human and child rights, an organization's risk appetite, business objectives and customer expectations, organizations will need to engage with all stakeholders and deliberate carefully on what fairness means for their specific AI use case.14

Managing reputation risk is a huge business challenge. Managing AI fairness will reduce risks that companies face down the line which, without proper attention, could severely damage the company's reputation and affect revenue, on top of the harms to stakeholders. Educating company employees on AI fairness provides a critical defence, and has tangible financial value in minimizing the cost of future scandals, regulatory penalties, litigation costs and customer loss.

1.3 Approaching fairness education in a corporation

Given that AI fairness is a socio-technical challenge, it is not only the responsibility of those in technical roles to address it. All personnel involved in the development, deployment and use of AI systems have a role and responsibility to operationalize AI fairness, and should be educated accordingly.

General education on AI fairness

Organizations define what AI fairness means to them, based on their organizational core values and higher societal values.15 Drawing on this pattern, we propose that the first step in approaching AI fairness is to develop an AI ethics charter with a strong chapter on AI fairness. In doing so, an organization defines what AI fairness means to it, based on its organizational core values. This charter can guide the overarching strategy and decisions in relation to AI fairness, as well as help employees at all levels distinguish between right and wrong decisions or actions.

Importantly, the process by which the charter is written matters. Rather than being created in a top-down fashion by leadership teams, organizations should start this step by enabling conversations, creating structures for enhanced cross-team relationships and communication, and building trust among stakeholders. Employees, contractors, users and customers will be best positioned to raise concerns and offer suggestions to improve a product or system, and deserve to be consulted on – if not to drive – this process. Developing a fair process for defining values is a good place to start for organizations looking to operationalize their commitment to AI fairness.

After due time has been allocated to developing the charter, organizations should dedicate resources to promulgating the principles and guidelines developed in it, e.g. by developing a glossary and e-learning module. Resources must be allocated to ensure that people throughout the organization understand AI fairness and the role they play in encouraging it.

Personnel will not be able to carry out their responsibilities meaningfully unless supported by a corporate culture that shares a common conviction, understanding and awareness about AI fairness. A culture of openness to raise ethical issues, discuss their trade-offs and implications, and act on the outcomes is essential to ensure that AI fairness issues are not swept under the carpet in order to meet deadlines.

Role-specific education on AI fairness

To turn principles into practice, organizations will also need to assign roles and responsibilities. Education and training will need to be provided so that personnel in different roles are able to carry out their responsibilities and recognize the ways in which they have an impact on AI fairness. To do this, organizations should consider role- and responsibility-specific education on AI fairness. In this paper, we have defined six key functions within an organization that have a particular role in operationalizing AI fairness. These are senior leadership, chief AI ethics officers, managers, build teams (data scientists, developers, product designers, engineering teams etc.), business teams (customer-facing teams such as sales, marketing, consulting) and policy teams. In these teams, multiple competencies are needed to address AI fairness, including but not limited to:

– Mobilize employees on AI fairness. Is leadership engaged in and committed to AI fairness? Are teams embracing AI fairness values in spirit and action? Companies must identify where there is support for these initiatives, and which parts of the organization may be less interested in AI fairness. Only with this understanding can companies substantively change their culture to increase awareness of, and attention to, key issues.

– Identify possible harms to different stakeholders. Based on the AI use case, teams should consider potential harms that could affect different stakeholders, including direct and indirect users. The potential harm could arise from an error made by the AI system, as well as when the AI system works as intended. In this process, the team identifies the sensitive features and their privileged and unprivileged groups. This kind of education involves guiding employees to empathize and envision how the AI system could cause harm. Teams could be trained to use tools similar to Microsoft's Judgment Call, which draws on value-sensitive design and design fiction to unearth ethical concerns by having employees write fictional product reviews based on different personas, such as a young digital native and an elderly person.16 Effective tools get teams to discover unexpected potential harms by identifying and relating to different stakeholders.

– Define fairness objectives that are relevant to the use case. There are many different definitions of AI fairness, depending on different use cases or cultural contexts. Employees need an in-depth understanding of the various possible AI fairness objectives, such as equal opportunity, demographic parity, equalized odds etc. Teams must learn to apply their AI ethics charter, and evaluate the fairness implications depending on the use case they are facing.17 Notably, fairness objectives can include technical metrics (e.g. based on probabilistic distributions) as well as social objectives, which might vary depending on local context.

– Detect and evaluate bias. Tests to detect bias in an AI system, such as counterfactual fairness assessments, should be required and integrated, based on the fairness objectives defined earlier. All teams should be equipped either with toolkits to detect bias, or the resources to help conduct such tests. It may not be possible to eliminate all bias, and personnel evaluating the AI system will need to assess if the level of bias present is acceptable. Such personnel should be comfortable with technical fairness metrics, as well as with the qualitative impacts of biased systems. Since there will be trade-offs to consider, personnel involved in evaluation should also have training in risk management.

– Mitigate bias. Teams and relevant leadership should be involved in identifying mitigating measures to address detected bias. This could include collecting more data to ensure that the data used is representative of the population who make up the end users of the AI model, adjusting data samples of underprivileged groups, or even revisiting the design and purpose of the algorithm itself.

– Monitor whether the level of bias changes when deployed. Teams will need to set up processes to monitor changes in fairness metrics throughout the life cycle of the AI system. This could require investment in proprietary systems that introduce AI governance-by-design and provide visibility of metrics to enable monitoring of AI fairness metrics – and, by extension, training for teams to monitor and investigate irregularities in fairness metrics when the AI system is live. It could also involve educating consulting and account management teams to raise a red flag if they come across non-technical indicators of a fairness issue.

– Engage stakeholders on potentially biased output. Personnel involved in engaging stakeholders of the AI system should be trained to be aware of the benefits, risks and limitations of the AI system – particularly for under-represented groups, unexpected users and vulnerable communities – so they know when to alert the product team. They should also be trained to handle customer complaints with great sensitivity and empathy. This process will further help the organization identify if there are any excluded user groups for which the product should be revisited or redesigned.

In addition to these core capabilities, it is crucial that teams themselves are diverse and inclusive. Not only will non-diverse teams face many more obstacles in identifying and mitigating biased outcomes; unrepresentative groups have a far greater likelihood of embedding their own biases into the AI systems they design or develop, or even of perpetuating issues of access and digital literacy in the inequitable deployment of AI systems to customers and users. Diverse teams have also been proven to be more adept at pointing out different perspectives and cognitive biases in their team members.

Note that fairness intersects with other pillars of AI ethics such as transparency and explainability; while this report focuses on fairness, the other pillars are crucial and deserve further attention.
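To make the fairness objectives named in the competencies above concrete, the gaps that demographic parity and equal opportunity measure can be computed directly from a model's predictions, labels and a sensitive attribute. The sketch below is our own minimal illustration with toy data and function names of our choosing; it is not tooling from this report.

```python
# Minimal sketch of two group-fairness metrics on binary predictions.
# Data and names are illustrative, not from the report.

def rate(preds, mask):
    """Fraction of positive predictions among rows selected by mask."""
    sel = [p for p, m in zip(preds, mask) if m]
    return sum(sel) / len(sel) if sel else 0.0

def demographic_parity_diff(preds, group):
    """P(pred=1 | group=1) - P(pred=1 | group=0)."""
    return (rate(preds, [g == 1 for g in group])
            - rate(preds, [g == 0 for g in group]))

def equal_opportunity_diff(preds, labels, group):
    """True-positive-rate gap between groups (rows with label 1 only)."""
    pos1 = [g == 1 and y == 1 for g, y in zip(group, labels)]
    pos0 = [g == 0 and y == 1 for g, y in zip(group, labels)]
    return rate(preds, pos1) - rate(preds, pos0)

# Toy example in which group 1 receives positive predictions more often.
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
labels = [1, 1, 0, 0, 1, 1, 0, 0]
group  = [1, 1, 1, 1, 0, 0, 0, 0]

print(demographic_parity_diff(preds, group))         # 0.75 - 0.25 = 0.5
print(equal_opportunity_diff(preds, labels, group))  # 1.0 - 0.5 = 0.5
```

A gap of zero on either metric would indicate parity between the two groups on that criterion; the two criteria can and often do disagree, which is why the report stresses choosing objectives per use case.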
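The "detect and evaluate bias" competency mentions counterfactual fairness assessments. One minimal version of that idea is to flip only the sensitive attribute of each input and count how often the model's decision changes. The scorer, field names and threshold below are hypothetical, chosen purely to illustrate the check.

```python
# Sketch of a counterfactual bias check: change ONLY the sensitive
# attribute and see whether the decision flips. All names are illustrative.

def toy_model(features):
    # Deliberately biased toy scorer: it peeks at the sensitive
    # "group" field, which a fair model should not do.
    score = 0.6 * features["experience"] + 0.4 * (1 if features["group"] == "A" else 0)
    return 1 if score >= 0.5 else 0

def counterfactual_flip_rate(model, inputs, sensitive_key, swap):
    # Fraction of inputs whose decision changes when the sensitive
    # attribute alone is swapped to its alternative value.
    flips = sum(
        model(x) != model({**x, sensitive_key: swap(x[sensitive_key])})
        for x in inputs
    )
    return flips / len(inputs)

inputs = [{"experience": e, "group": g}
          for e in (0.2, 0.5, 0.9) for g in ("A", "B")]
flip_rate = counterfactual_flip_rate(
    toy_model, inputs, "group", lambda g: "B" if g == "A" else "A")
print(flip_rate)  # 4 of the 6 toy inputs flip
```

A perfectly counterfactually fair decision rule would have a flip rate of zero; a high rate is a red flag that the sensitive attribute (or a proxy for it) is driving decisions.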
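One of the mitigation options above is adjusting data samples of underprivileged groups. A simple, commonly used technique in that family, often called reweighing, assigns each training row a weight so that group membership and outcome look statistically independent before the model is retrained. The sketch below is one possible approach under that assumption, not a method prescribed by the report.

```python
from collections import Counter

# Sketch of reweighing: weight = P(group) * P(label) / P(group, label),
# so that weighted group/label frequencies become independent.
# Data and names are illustrative.

def reweighing_weights(groups, labels):
    n = len(labels)
    pg = Counter(groups)                 # marginal counts per group
    py = Counter(labels)                 # marginal counts per label
    pgy = Counter(zip(groups, labels))   # joint counts
    # Written as one exact ratio to avoid intermediate rounding.
    return [pg[g] * py[y] / (n * pgy[(g, y)])
            for g, y in zip(groups, labels)]

# Group A was historically favoured (2 of 3 positive outcomes).
groups = ["A", "A", "A", "B", "B", "B"]
labels = [1, 1, 0, 1, 0, 0]
print(reweighing_weights(groups, labels))
# [0.75, 0.75, 1.5, 1.5, 0.75, 0.75]
```

Favoured (group, label) combinations are down-weighted and under-represented ones up-weighted; most training APIs can consume such per-row weights directly (e.g. a `sample_weight`-style argument).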
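The monitoring competency above can be prototyped very simply: recompute an agreed fairness metric on each batch of live decisions and flag any batch where it breaches a threshold that the organization has set. The 0.2 threshold and the daily batches below are illustrative assumptions, not values from the report.

```python
# Sketch of post-deployment fairness monitoring on batches of
# (group, prediction) pairs. Threshold and data are illustrative.

def selection_rate_gap(batch):
    """Absolute gap in positive-prediction rates between groups A and B."""
    def rate(g):
        preds = [p for grp, p in batch if grp == g]
        return sum(preds) / len(preds) if preds else 0.0
    return abs(rate("A") - rate("B"))

def flag_batches(batches, threshold=0.2):
    """Indices of batches whose gap exceeds the agreed threshold."""
    return [i for i, b in enumerate(batches) if selection_rate_gap(b) > threshold]

day1 = [("A", 1), ("A", 1), ("B", 1), ("B", 0)]  # gap 0.5 -> flagged
day2 = [("A", 1), ("A", 0), ("B", 1), ("B", 0)]  # gap 0.0
print(flag_batches([day1, day2]))  # [0]
```

In practice a flagged batch would trigger the human investigation the report describes, including the non-technical red flags raised by consulting and account management teams.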
