Research on the Facial Expression Interaction Model for Emotional Companionship Robots

Abstract: With elderly people living for long periods without the companionship of their children, the demand for emotional companionship from service robots is growing; at the same time, national strategies such as the "14th Five-Year Plan for the Development of the Robotics Industry" explicitly support the research and development of intelligent elderly-care equipment. This paper addresses the difficult problem of how an emotional companionship robot can serve users better through natural human-robot interaction. It surveys the main types of emotional companionship robots currently on the market and the state of the art in expression interaction technology, and reveals the mapping mechanism between the robot and the user's expressions through two mapping layers. The first layer maps expressions to emotions: a Haar cascade classifier is used for face detection, and a lightweight convolutional neural network then learns facial expression features. The second layer maps facial action units to servo motors for robot expression generation: the robot's face is divided mainly into an eye module and a jaw module, both driven by servos. On this basis, the paper designs an emotional companionship robot interaction system based on facial expressions and simple voice feedback. Finally, experiments with image input, video input and a live camera verify the mapping relationship between the system and the user's expressions. This work lays a foundation for service robots in emotional companionship and expression interaction technology.

Keywords: human-computer interaction; facial expression recognition

Introduction

Background and significance of the study

With social and economic development and the decline in fertility, the aging of China's population has continued to deepen. By the end of 2023 the population aged 60 and above had reached 297 million, or 21.1% of the total, and China is currently the only country in the world with an elderly population of more than 100 million. China's aging is also progressing rapidly: according to the United Nations data centre's forecast, the share of China's population aged 60 and over will reach 30% by 2050. The acceleration of aging, superimposed on the trend towards fewer children, as shown in Figure 1.1, has led to a marked rise in the proportion of empty-nest and solitary elderly people. The long-term absence of children's companionship can easily lead to psychological disorders among the elderly: a 2023 survey shows that the incidence of psychological problems among elderly people living alone is as high as 60%, mainly manifested as anxiety, depression and loneliness, and the suicide rate among the elderly is also rising. Although traditional medication and psychotherapy are effective, their coverage is below 20% owing to the shortage of professional resources and high costs, so there is an urgent need to explore innovative solutions that can be applied at scale.

Figure 1.1 Map of ageing trends in China

In this context, emotional companionship robots have gradually become an important technological path for alleviating the mental-health crisis of the elderly. Their core value lies in the combined push of social demand, technical feasibility and policy support. At the social level, robots can partially fill the gap in family companionship and relieve the pressure of elderly care; at the technological level, advances in artificial intelligence make anthropomorphic interaction possible and provide a basis for continuous emotional support; at the policy level, national strategies such as the "14th Five-Year Plan for the Development of the Robotics Industry" explicitly support the research and development of intelligent elderly-care equipment, giving a clear direction for turning the technology into products.

The key to efficient human-robot interaction is to improve the system's ability to understand and respond to emotion. Research shows that more than half of the information in human emotional communication is carried by facial expressions, and because hearing declines with age, the elderly are more sensitive to expressions than to speech. Service robots therefore need to parse the user's emotional state in real time and generate adapted feedback. On this basis, a natural interaction mechanism centred on facial expressions, as shown in Figure 1.2, has become the key to bridging the emotional divide between humans and robots.

Figure 1.2 Humanoid facial robots

The aim of this paper is to reveal the mapping mechanism through which an emotional companionship robot interacts with the user's expressions, and to design an interaction system for the robot based on expressions and speech, laying a solid foundation for further research on service robots in emotional companionship and expression interaction technology.

Analysis of domestic and foreign research status and development trends

Research on the development of emotional companionship robot technology

The emotional companionship robots currently on the market fall into two main categories: "non-humanoid" robots modelled on animals or cartoon characters, and humanoid robots. The two types have their own characteristics in terms of technology routes and application scenarios, and both provide emotional support for groups with different needs.
At CES 2025, the International Consumer Electronics Show known as the "Spring Festival of science and technology", a number of non-humanoid emotional companionship robots attracted attention, as shown in Figure 1.3. The US robotics company Tombot launched Jennie, a battery-powered, AI-driven robotic pet designed to provide emotional support for elderly people with mild cognitive impairment or depression. Jennie, which resembles a small Labrador puppy lying down, is fitted with touch sensors that respond to stroking and to voice commands, answering with recorded puppy barks. The Chinese brand TCL launched AiMe, a modular AI companion robot that interacts visually through a high-definition screen and combines smart-home control with children's education functions, while the Tokyo-based brand Yukai Engineering brought Mirumi, a furry "purse goblin" robot that clings to bags and arms; it scans the people and objects around it through internal sensors, focuses its attention on whatever appeals to it, and also doubles as a therapeutic robot for depression and anxiety. Ropet, a smart pet from Moeo Intelligence, simulates a living creature with body temperature and hunger responses, and uses a camera on its nose to recognise different emotions and respond to them. Non-humanoid robots thus offer rich exterior designs together with portability and entertainment value.

a) Robot pet Jennie  b) AI companion robot AiMe  c) Purse goblin robot Mirumi  d) Smart pet Ropet
Figure 1.3 Non-humanoid emotional companionship robots at CES 2025

Humanoid robots are technically more complex but can deliver an interactive experience closer to that of a real person. In the field of humanoid head robots, the research team at Waseda University in Japan has made pioneering contributions. Since the university launched the WE series robot R&D project in 1996, it has gone through many generations of technical iteration and gradually refined its bionic head design system. Early models achieved a preliminary simulation of facial contours through basic motion mechanisms, while the WE-3RII robot released in 2001 brought a breakthrough improvement: a bionic eyebrow assembly and a jaw linkage were introduced into the mechanical structure, and with the help of a precision gear set the robot could combine seven basic expressive movements such as eyebrow raising and lip pursing [1]. The team marked a further technological leap with the WE-4R series in 2004, which adopted a modular design concept; by integrating a multimodal perception unit the robot gained a colour recognition sensor, significantly improving its ability to interact with the environment [2]. Its expression library was also expanded to six categories of standard expressions such as joy, sadness and anger, as shown in Figure 1.4, and coordinated motion control of the head and torso was realised for the first time.

Figure 1.4 WE-4R series robot

In follow-up research, the team's KOBIAN-R series robots made an important breakthrough. The system goes beyond the traditional symmetric mode of expression generation: by independently controlling 42 groups of micro-servo mechanisms on the left and right halves of the face, it can reproduce complex asymmetric expressions such as a wry smile or a single raised eyebrow. Combined with a whole-body posture coordination algorithm, the robot can synchronise expression changes with body language to form a multi-dimensional emotion output system, as shown in Figure 1.5.

Figure 1.5 KOBIAN-R robot's emotional body language
The Ameca robot developed by the British company Engineered Arts, shown in Figure 1.6, is one of the most realistic humanoid robots in the world; it is equipped with thousands of sensors over its body, and its facial "muscles" can produce delicate expressions such as raised eyebrows and pursed lips [3]. Its control system can be operated directly from a web page, and the company can remotely control the robot's face, head, neck, limbs and other components, allowing it to adapt to sudden changes in the environment and respond instantly so that human-robot interaction remains safe and enjoyable [4].

Figure 1.6 Patient simulation with the humanoid robot Ameca

Walt Disney Imagineering has been even more creative with its results. Robots have been a special theme at Disneyland since the 1960s, when Walt Disney introduced audio-animatronic figures: machine characters with carefully choreographed movements, such as the dolls singing in harmony in "It's a Small World", the marauding pirates chanting in "Pirates of the Caribbean", and the American president delivering the Gettysburg Address in "The Hall of Presidents". The Imagineers later developed stunt robots, as shown in Figure 1.7: the "Spider-Man" robot can somersault and launch "spider silk" like the film character, bringing visitors an immersive experience in the theme park, which is also a form of emotional accompaniment [5]. However, although the interactive effect of these robots is striking, they are expensive and require a professional team to maintain.

Figure 1.7 Imagineer Morgan Pope and his robots

In terms of technology trends, the two types of robots are converging in pursuit of emotional companionship. Non-humanoid devices are beginning to incorporate smart screens and voice assistants: TCL's AiMe, for example, can accompany children while they learn English and control smart lamps in the home, but it lacks realistic expression interaction and struggles to sustain deep emotional communication. Humanoid robots, for their part, are moving towards modular designs that allow users to assemble different expression modules and lower the threshold of use. With the aging of society and changes in family structure, these robots are gradually moving from the R&D stage to the market and becoming part of daily life [6]. Researchers are committed to enhancing the emotional intelligence of robots to meet users' needs for emotional communication and psychological support [7][8].

Research status of emotion recognition technology

Facial expression is the core carrier of human emotion, and its recognition technology has evolved from manual annotation to automated analysis. In the 1970s the American psychologists Ekman and Friesen, drawing on cross-cultural research, proposed a classification into six basic emotions (happiness, anger, surprise, fear, disgust and sadness), laying a scientific foundation for the correlation between expressions and emotions [9]. In real-life scenarios an additional "neutral" category is usually added to mark the absence of expression; the seven basic expressions are shown in Figure 1.8.

Figure 1.8 The seven basic expressions

On this basis Ekman and Friesen further developed the Facial Action Coding System (FACS), which quantifies the muscle movement patterns of expressions through anatomically defined facial Action Units (AUs). Compared with a neutral, expressionless face, each basic expression has a unique muscle movement pattern that can be represented quantitatively with FACS, which is of great significance for deeper research on facial expression models and on robot expression reproduction techniques [10]. The system not only provides an objective description of subtle expressions but also offers a structured standard for subsequent computer modelling. Some of the FACS action units are listed in Table 1.

Table 1 FACS action units
Code   FACS name               Code   FACS name
AU1    Inner Brow Raiser       AU10   Upper Lip Raiser
AU2    Outer Brow Raiser       AU12   Lip Corner Puller
AU4    Brow Lowerer            AU15   Lip Corner Depressor
AU5    Upper Lid Raiser        AU17   Chin Raiser
AU6    Cheek Raiser            AU20   Lip Stretcher
AU7    Lid Tightener           AU25   Lips Part
AU9    Nose Wrinkler           AU26   Jaw Drop
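To make the link between basic emotions and action units concrete, the sketch below lists commonly cited FACS prototypes for the basic expressions as a Python dictionary. Published sources differ on the exact AU combinations, so these values are illustrative assumptions rather than the coding adopted later in this thesis.

```python
# Commonly cited FACS prototypes for the basic expressions (illustrative only;
# published sources differ on the exact AU combinations).
EMOTION_AU_PROTOTYPES = {
    "happiness": [6, 12],               # cheek raiser + lip corner puller
    "sadness":   [1, 4, 15],            # inner brow raiser + brow lowerer + lip corner depressor
    "surprise":  [1, 2, 5, 26],         # brow raisers + upper lid raiser + jaw drop
    "fear":      [1, 2, 4, 5, 20, 26],  # raised and drawn brows, widened eyes, stretched lips, dropped jaw
    "anger":     [4, 5, 7, 23],         # lowered brows, raised upper lid, tightened lids and lips
    "disgust":   [9, 15, 17],           # nose wrinkler + lip corner depressor + chin raiser
    "neutral":   [],                    # no activated action units
}

def active_aus(emotion):
    """Return the prototype action units for a recognised emotion label."""
    return EMOTION_AU_PROTOTYPES.get(emotion, [])
```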
Current expression recognition technology revolves mainly around three types of method, each behaving differently in particular scenarios. Traditional image-processing methods are represented by Haar cascades and LBP features combined with SVM classifiers; they are computationally cheap and place low demands on hardware, so they are suitable for deployment on embedded devices such as smart watches [11]. Their defects are equally obvious, however: recognition errors may increase when lighting conditions change, which limits their use in complex environments.
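As an illustration of the traditional pipeline just described, the minimal OpenCV sketch below detects faces with a pretrained Haar cascade and crops them for a downstream classifier. The parameter values and the 48x48 crop size are common defaults, not the settings used in this thesis.

```python
# Minimal Haar-cascade face detection sketch (OpenCV); parameter values are
# common defaults rather than the configuration used in this thesis.
import cv2

# Load the frontal-face Haar cascade shipped with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_face_regions(frame):
    """Return grayscale face crops resized for an expression classifier."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(
        gray, scaleFactor=1.1, minNeighbors=5, minSize=(48, 48))
    crops = []
    for (x, y, w, h) in faces:
        roi = cv2.resize(gray[y:y + h, x:x + w], (48, 48))  # 48x48 matches FER2013-style inputs
        crops.append(roi)
    return crops

# Example usage with a single image file (the path is a placeholder):
# img = cv2.imread("face.jpg")
# print(len(detect_face_regions(img)), "face(s) detected")
```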
Breakthroughs in deep learning have significantly improved the robustness of expression recognition. Models based on network architectures such as Xception and ResNet recognise micro-expressions with high accuracy in laboratory environments, even in scenes with facial occlusion or uneven lighting [12]. However, such models usually require large amounts of labelled data for training and are generally large, which makes them inefficient on mobile devices.

Hybrid approaches, including lightweight models, attempt to strike a balance between speed and accuracy. Methods of this type support incremental training and can gradually adapt to the personalised expression features of different users, but the parameter tuning process is relatively complicated and their generalisation across racial and age groups still needs to be improved [12][13].
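The lightweight architectures referred to here, such as mini-Xception and its variants, are typically built from depthwise separable convolutions with residual connections. The Keras sketch below shows one such block and a tiny seven-class classifier over 48x48 grayscale face crops; the channel counts and input size are illustrative assumptions, not the model trained later in this thesis.

```python
# Sketch of a depthwise-separable residual block in the mini-Xception style;
# channel counts and input size are illustrative, not this thesis's final model.
import tensorflow as tf
from tensorflow.keras import layers

def separable_residual_block(x, filters):
    """Two separable convolutions with batch norm, plus a strided 1x1 shortcut."""
    shortcut = layers.Conv2D(filters, 1, strides=2, padding="same", use_bias=False)(x)
    shortcut = layers.BatchNormalization()(shortcut)

    x = layers.SeparableConv2D(filters, 3, padding="same", use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    x = layers.Activation("relu")(x)
    x = layers.SeparableConv2D(filters, 3, padding="same", use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    x = layers.MaxPooling2D(3, strides=2, padding="same")(x)
    return layers.Add()([x, shortcut])

# Tiny seven-class expression classifier over 48x48 grayscale face crops.
inputs = layers.Input(shape=(48, 48, 1))
x = layers.Conv2D(8, 3, padding="same", activation="relu")(inputs)
for filters in (16, 32, 64):
    x = separable_residual_block(x, filters)
x = layers.Conv2D(7, 3, padding="same")(x)
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Activation("softmax")(x)
model = tf.keras.Model(inputs, outputs)
```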
From the perspective of the technology's development path, facial expression recognition has evolved from manually designed features to automated learning, as shown in Figure 1.9. Early research relied on manually extracted Haar or LBP features, with emotion judged by classifiers such as SVM. As computing power grew, deep learning methods gradually became mainstream, using convolutional networks to learn facial texture and muscle movement patterns automatically. In recent years researchers have begun to explore combinations of lightweight networks with attention mechanisms and to incorporate multimodal data such as speech and gesture.

Figure 1.9 History of facial expression recognition technology

Typical research results reflect the diversity of applications. In the optimisation of traditional methods, Hanting Yang's team greatly improved the recognition rate in mask-wearing scenarios by improving the LBP feature operator [14]. In the field of deep learning, the mini-XceptionSE dual-channel model and the SqueezeNet-mini-Xception hybrid architecture optimised recognition performance in complex scenes on FER2013 and other datasets through multimodal feature extraction and lightweight design respectively [15][16]. Hanting Yang et al. pointed out that traditional expression recognition frameworks offer only limited performance gains in complex environments, whereas deep learning models are more robust to problems such as occlusion and lighting changes [14]. Zhao Yu (2022) proposed a lightweight expression recognition system for service robot interaction that improves accuracy and generalisation by optimising the training dataset and introducing a confidence learning method, which is crucial for natural interaction in emotional companionship robots [7]. In addition, Xiangyi Chen (2022) proposed a facial action unit detection method based on regional attention learning, which focuses attention on key facial regions to improve detection accuracy and is important for the development of emotion recognition technology [17]. Together these studies show that feature optimisation and innovation in model structure are the core paths to higher expression recognition accuracy.

For dataset selection, researchers have relied on several publicly available datasets such as FER2013 and RAF-DB, which provide rich samples for algorithm training and validation. Mayyadah R. Mahmood et al. validated the advantages of a Random Forest classifier combined with minimal feature selection on the CK+ dataset, reaching an accuracy of 94.23% [18]. Dharanya V. et al. used an AC-GAN fused with capsule networks to generate diverse expression data and alleviate the small-sample problem, achieving 93.4% accuracy on the ADFES-BIV dataset [19]. Nevertheless, the complexity and variability of facial expressions mean that accurate emotion recognition remains challenging. Xia Kai and Wang Qing emphasised that the diversity of emotions and individual differences make it difficult for robots to recognise user emotions in real time, which to some extent limits the application scenarios of emotional companionship robots [20]. Guo Xinwei et al. proposed an efficient self-repairing network (ESCN), which demonstrated the potential of expression recognition in complex interaction scenarios and further improved recognition accuracy [21]. By building a deep-learning-based emotion recognition system, Xu Guizhi et al. explored how robots can interact naturally with users through facial expression recognition, showing that emotion recognition not only enhances the naturalness of human-robot interaction but can also provide more personalised services in emotional companionship scenarios [22]. Improving the accuracy of emotion recognition may also require emotional analysis of speech and voice interaction; progress in human-computer dialogue technology allows machines to understand and express emotions, which promotes the formation of interactive relationships between humans and machines [23].

Expression recognition technology is also expanding to cross-domain applications: research is gradually extending to diversified scenarios such as security, medical care and human-computer interaction, combining real-time detection, security monitoring and interaction needs and driving technological integration and scenario-based innovation. For example, the Haar cascade and YOLOv3 algorithms have been used for mask detection [24][25] to meet epidemic prevention and control needs, and the mini-Xception model has been applied to real-time monitoring of stress levels with an accuracy of 95.3% [26]. Researchers also continue to explore cross-modal fusion, such as the InstantSpeech model combined with speech synthesis to optimise interaction feedback [27]. These advances not only confirm the versatility of expression recognition but also provide methodological support for building intelligent social systems. In summary, facial expression recognition has great potential in emotional companionship robots, but the limitations of existing research still need to be explored in depth to promote the further development of the field. Future research will pay more attention to the construction of emotional intelligence and its realisation in practical applications.

Summary and analysis

Research on emotional companionship robots and expression recognition has made significant progress, and the design of such robots should take users' emotional needs into account in order to improve user experience and satisfaction [28]. At the level of technical realisation, non-humanoid robots are rapidly gaining popularity in home scenarios thanks to their low cost and ease of use, but their interaction dimensions are relatively homogeneous and may not satisfy the elderly's need for deep emotional communication [29]. Humanoid robots can simulate complex expressions more flexibly, but high hardware costs, such as Ameca's per-unit cost of more than a million dollars, together with high energy consumption seriously constrain large-scale deployment. The convergence of the two product types has broadened the application scenarios, but no good solution has yet emerged for cross-platform data interoperability or long-term interaction adaptability. At the same time, deep learning models can be excessively large; this paper therefore designs a lightweight expression recognition model to address these issues, one that can also support the expression feedback of a humanoid robot and enable deeper emotional communication.

Research content and organisational structure of this paper

Research content framework

The research framework is shown below:

Figure 1.10 Research content framework

Organisational structure

The paper is divided into four chapters; the remaining chapters are arranged as follows.

Chapter 2 studies the mapping mechanism through which the emotional companionship robot interacts with user expressions. It introduces two layers of emotion mapping. The first layer is the mapping from user expression input to emotion recognition, including an analysis of the operating principle of the Haar cascade classifier and the principle of facial feature extraction and learning based on mini-Xception. The second layer is physical feedback: the robot's facial expression is simulated through mechanical control to interact with the user. The chapter focuses on generating the physical expressions that realise the robot's emotional feedback; the mechanical actuator unit is established with reference to the Facial Action Coding System, with servo motors simulating the movement patterns of human facial muscles.

Chapter 3 designs the expression- and speech-based interaction system of the emotional companionship robot. On the basis of the mapping mechanism studied in Chapter 2, it first trains the model used in the study, then builds the expression recognition system on that model, implements text-to-speech voice interaction, and finally designs the interaction interface and carries out experimental validation to demonstrate the usability of the system.

Chapter 4 gives a systematic conclusion to the paper.
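As a simple preview of the second mapping layer described for Chapter 2, the sketch below turns a set of active action units into target angles for an eye module and a jaw module. The servo channel names and angle values are hypothetical placeholders, not the calibration of the robot actually built in this work.

```python
# Illustrative sketch of the action-unit-to-servo mapping layer; channel names
# and angles are hypothetical placeholders, not the robot's real calibration.
AU_TO_SERVO_TARGETS = {
    1:  {"brow_left": 120, "brow_right": 120},  # inner brow raiser -> raise eyebrow servos
    2:  {"brow_left": 130, "brow_right": 130},  # outer brow raiser -> raise eyebrows further
    4:  {"brow_left": 60,  "brow_right": 60},   # brow lowerer -> lower eyebrow servos
    5:  {"eyelid_upper": 150},                  # upper lid raiser -> open the eyelids wider
    7:  {"eyelid_upper": 100},                  # lid tightener -> narrow the eyes
    26: {"jaw": 140},                           # jaw drop -> open the jaw servo
}
NEUTRAL_POSE = {"brow_left": 90, "brow_right": 90, "eyelid_upper": 120, "jaw": 90}

def servo_pose(active_aus):
    """Merge the neutral pose with the targets of the active action units."""
    pose = dict(NEUTRAL_POSE)
    for au in active_aus:
        pose.update(AU_TO_SERVO_TARGETS.get(au, {}))
    return pose

# Example: a surprise-like prototype (AU1 + AU2 + AU5 + AU26)
# print(servo_pose([1, 2, 5, 26]))
```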

Mapping Mechanism Study of Emotional Companionship Robot and User
