Resource directory
Preview of the documents inside the archive:
ID: 209455933
Type: shared resource
Size: 5.88 MB
Format: ZIP
Uploaded: 2022-04-25
Uploader: 机械设计Q****6154...
- Keywords: connecting rod, component, machining, process, design, CNC, simulation, fixture
- Resource description:
- Machining process design, CNC machining simulation, and fixture design for a connecting-rod component.
- Content summary:
Full length article

Design based on fuzzy signal detection theory for a semi-autonomous assisting robot in children autism therapy

Pedro Ponce*, Arturo Molina, Dimitra Grammatikou
Tecnologico de Monterrey, Campus Ciudad de Mexico, Calle del Puente 222, Ejidos de Huipulco, Tlalpan, Mexico
* Corresponding author. E-mail addresses: pedro.ponce@itesm.mx (P. Ponce), armolina@itesm.mx (A. Molina), dgrammatikou (D. Grammatikou).

Computers in Human Behavior 55 (2016) 28-42

Article history: received 11 May 2015; received in revised form 8 August 2015; accepted 24 August 2015; available online 13 September 2015.
Keywords: robot; signal detection theory; social skills; fuzzy logic.

Abstract. There are different kinds of robots that are used to assist autistic children during therapy; however, there is no prior evaluation in place to decide whether the robot can detect and send social-interaction cues to the child in a correct manner. Since signal detection theory and fuzzy signal detection theory are well-known techniques in human psychology for detecting signal and noise relationships, this work proposes those techniques as the main tool to identify how effectively stimuli are detected by social robots. Unlike traditional psychophysical approaches, which treat observers as sensors, signal detection theory recognizes that observers are both sensors and decision makers, and that these are distinct processes that can be measured using separate indices: sensitivity and response criterion. Hence, the robot can be defined as an observer using signal detection theory. This proposal makes it possible to evaluate social robots with human psychology tools in order to improve the human-robot interaction. Thus, the robots accomplish specific social responses that can be a better approach during autism therapy. Furthermore, fuzzy signal detection theory (FSDT) applied to social skills can be an enhanced procedure for designing social robots. A semi-autonomous social robot was designed to validate the proposal. © 2015 Elsevier Ltd. All rights reserved.

1. Introduction

Since assistive robots have been included in autism therapies, the number of technology tools has increased. However, there are few evaluations that can be used to assess social skills in robots. Thus, Fuzzy Signal Detection Theory (FSDT) can be incorporated into the validation process for assistive robots (Robins, Dautenhahn, & Boekhorst, 2005), just as it has been used in psychological evaluations of humans. Children with Autism Spectrum Disorder (ASD) (Rivière, 2002; Rogers, 2000) exhibit significant difficulties while interacting with their parents as well as socially (DSM-5, Diagnostic and Statistical Manual of Mental Disorders, May 27, 2013). They avoid eye contact, seem indifferent or even resistant to hugs or physical contact, and they seem withdrawn, isolated, and not able to adapt to their environment. This does not mean that children with ASD cannot feel or are not attached to their parents, caregivers, teachers, or, later, their peers (Werry, Dautenhahn, Ogden, & Harwin, 2001). Children with ASD have difficulty identifying emotions (Cohen et al., March 2014), both in themselves and in others, and therefore in expressing this attachment in a way that is recognizable or interpretable. They also have difficulty interpreting what others may be thinking or feeling. They cannot interpret the social meaning of a smile (Xu & Tanaka, 2014), facial expressions or body language. When some social abilities are not developed, children exhibit confusion and anxiety in social relationships.
Children with ASD are slower at learning than their typically developing peers; however, this does not mean that they cannot advance in acquiring the necessary skills in order to adapt to their environment. Technology, taking into consideration the needs of children with ASD, offers exciting possibilities for intervention innovation aimed at the acquisition of social skills. The clinical use of robots to aid children with ASD seems to be promising, and much of it is concentrated on eliciting a specific behaviour from the child (Giullian et al., 2010; Ricks & Colton, 2010). The hypothesis is that individuals with ASD are drawn to technology because of its predictability, and so robots may be useful for eliciting target behaviours, particularly pro-social ones (Diehl, Schmitt, Villano, & Crowell, 2012), since the main deficit is in social interaction. Robots have been used to provide interesting visual displays and respond to a child's behaviour during an intervention aimed at eliciting joint attention or shared enjoyment in interaction (McConnell, October 2002), behaviours that are difficult for children with ASD. In several papers, the robot has served as the object of joint attention. However, Dautenhahn (2003) thinks that it could be used as a catalyst that could eventually aid the child in interacting with another individual (Feil-Seifer & Mataric, 2009). Hence, the robot designed in this paper serves as an interactive object with the child.

Probably one of the most renowned autism-related robots is Keepon (Kozima, Michalowski, & Nakagawa, 2008). Developed by teams at NIICT in Japan and Carnegie Mellon University, Keepon is described as having a yellow, snowman-like body and is only 120 mm tall. Its eyes are colour cameras with a wide-angle lens. Keepon has also been used to direct joint attention to an object outside the robot-child dyad (Hideki & Marek, January 2009). Robins et al. (2005), Robins, Dautenhahn, et al. (2004), Robins, Dickerson, et al. (2004) and Ruffman, Garnham, & Rideout (2001) measured eye gaze, touch, imitation and proximity of four children to the robot. In addition, De Silva, Tadano, Saito, Lambacher, & Higashi (2009) studied five children with encouraging results as far as joint attention was concerned and demonstrated the robot's capability to track the object that the child was looking at at a defined moment. Costa (2014) and Costa, Lehmann, Robins, Dautenhahn, & Soares (2013) had a robot that taught children how to play a game with a ball and then reported that the children continued playing with each other without the participation of the robot. Robins, Dautenhahn, & Dubowski (2006), Robins & Dautenhahn (2006), Robins, Dautenhahn, et al. (2004) and Robins, Dickerson, et al. (2004) also found that children included a third participant in a conversation with a robot. Feil-Seifer & Mataric (2009) had two children, one with ASD and one without, play a game similar to Bubble Play from the Autism Diagnostic Observation Schedule (ADOS; Lord) and reported that the social behaviours towards the robot and the adult increased when the robot blew bubbles contingently rather than randomly.
Wainer, Ferrari, Dautenhahn, & Robins (2010) used Lego robot kits with children with higher-functioning ASD and found that enjoyment in class and collaboration increased, and the children were able to continue interacting with each other after the class was over. As a result, therapy can be improved when technology is used (de Urturi, Zorrilla, & Zapirain, 2012). Several advances have been made in the use of robots in autism therapy (Cabibihan et al., November 2013; Scasselati, 2007), and the development of detailed requirements has the potential to help improve the effectiveness of using clinical robots in the treatment of children with autism (Blow, Dautenhahn, Appleby, Nehaniv, & Lee, 2006; Dautenhahn, 2003; Diehl et al., 2012). As mentioned above, a large number of robots have been created with great variations in shape, size, and style. The evaluation of their effectiveness is primarily based on the judgment and experience of expert clinicians and engineers (Giullian et al., 2010). Furthermore, it has been suggested that a robot must be robust, easily reprogrammable, affordable (Robins et al., 2006; Robins & Dautenhahn, 2006), and appealing to children with autism in order to be useful in therapy (Bandura, 1987; Benedet, 2002; Grofer-Klinger & Renner, 2000). Other requirements that have been proposed for the creation of a robot include having aspects familiar to the child, providing choices, having a modular design that can easily be customized, and being simple to use. Hence, the robot proposed in this paper is an excellent alternative because it can deal with these requirements.

Signal Detection Theory (SDT) is used to analyse data coming from psychological experiments where the task is to categorize ambiguous stimuli which can be generated either by a known process (signal) or be obtained by chance (noise). For example, a radar operator must decide whether what he sees on the radar screen indicates the presence of a plane (the signal) or the presence of parasites (the noise). This type of application was the original framework of SDT. Signal detection theory assumes a division of objective truths, or "states of the world", into the non-overlapping categories of signal and noise. The definition of a signal in many real settings varies with context and over time. In the terminology of fuzzy logic, a signal has a value that falls within a range between unequivocal presence and unequivocal absence. The definition of a response can also be non-binary. Accordingly, the methods of fuzzy logic can be combined with SDT, yielding fuzzy SDT. A social-skill survey can be used to evaluate the social skills of the robot using fuzzy SDT. Fuzzy SDT can considerably extend the range and utility of SDT by handling the contextual and temporal variability of most signals. This paper gives an insight into the possibility of using an evaluation tool that was mainly developed for human psychology in assistive robots. Although the robot is not able to answer the survey by itself, the survey is answered from the responses provided by the robot when it is used in autism therapy for children.
2. Signal detection theory

Since signal detection theory was developed by Green & Swets (1966), it has been used in different areas in order to evaluate the response to different input-signal conditions. SDT is a theoretical form of detection between signal (stimulus) plus noise and noise (distractors) only, in which the response is classified into binary categories (Paredes-Olay, Moreno-Fernández, Rosas, & Ramos-Álvarez, 2005). This concept is based on normal distributions for both the noise and the signal, as shown in Fig. 1 (e.g., raw score = 0.989, Z-score = 0.989, P_right = 0.839, P_left = 0.161). SDT does not require the presence of a noisy environment or signal per se, but it does assume that each response is strictly classified into one of two categories (1/0).

Fig. 1. Normal distribution used for noise and signal.

The possible responses according to the input signal are defined by the SDT matrix representation, Table 1, in which the possible responses are classified into four categories: Correct rejection (the signal was absent and the response correctly reflected this), Miss (the signal was presented and it was not detected), Hit (the signal was presented and it was correctly detected) and False alarm (the signal was absent and the response was not correctly classified). The information shown in Table 1 comes from Fig. 2, which presents the normal distributions of noise and of signal plus noise (Concepción, 2005 (Paredes-Olay et al., 2005) and Anthony, 2010). The criterion line delimits the boundaries by which each zone is divided. In addition, Fig. 2 illustrates the sensitivity index, named d', and the likelihood ratio, named β (Green & Swets, 1966).

Table 1. Signal detection theory (truth table).
            Response = 0        Response = 1
Signal = 0  Correct rejection   False alarm
Signal = 1  Miss                Hit

Fig. 2. SDT based on noise and signal.
Fig. 3. Sensitivity index (d') (a) and ROC curve performance (b).

The index d' is the standard distance between the normal distribution curve approximating the signal and the noise distribution; hence the result is defined as the horizontal distance, in standard-deviation units, between those curves. The Hit Rate (HR) of the observer on a normal distribution minus the False Alarm Rate (FAR) on a normal distribution is the distance between noise and signal plus noise. The sensitivity d' is basically the distance between the means of the probability distributions associated with the signal and the noise. It is calculated from the z-scores associated with the hit (HR) and false-alarm (FAR) rates (Macmillan & Creelman, May 1990). This distance represents the observer's sensitivity and is represented by Receiver Operating Characteristic (ROC) curves. Fig. 3 shows the relationship between those components. The ROC curve allows for the visualization of the trade-off between the observer's sensory performance and the observer's decision biases.

HR and FAR are calculated by Equations (1) and (2), in which HR is the proportion of signal-1 trials on which the observer responded 1 and FAR is the proportion of signal-0 trials on which the observer responded 1, according to the representation in Table 1. HR can be seen as the number of hits S(H) divided by the total number of possible signals.

HR = P(response = 1 | signal = 1)    (1)
FAR = P(response = 1 | signal = 0)    (2)

Thus, the sensitivity index is calculated by Equation (3):

d' = Z(HR) - Z(FAR)    (3)

The likelihood ratio (LR) is derived using Equation (4):

β = e^(ln β) = e^(d'·c) = e^([Z(FAR)^2 - Z(HR)^2]/2)    (4)

where c = -[Z(HR) + Z(FAR)]/2.

The LR value can be compared with an optimal criterion based on the ratio of the probability of noise to the probability of receiving a viable signal, β_opt = P(N)/P(S). If β < β_opt, the social robot is liberal. Normally, the desired value is nearly equal to the optimal criterion β_opt. The criterion location (c) is a measurement of response bias. If evaluated relative to the point at which the two distributions cross, c = -[Z(HR) + Z(FAR)]/2 can be used to find its value.
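As a concrete illustration of Equations (1)-(4), the short Python sketch below computes d', the criterion location c and β from a hit rate and a false-alarm rate. It is only an illustrative reimplementation (the paper's own evaluation runs in LabVIEW), and the equal prior probabilities assumed for the β_opt comparison are not given in the paper; with the SDT rates later reported in Table 9 it reproduces d' ≈ 0.908 and β ≈ 0.662.

import math
from scipy.stats import norm

def sdt_indices(hit_rate, false_alarm_rate):
    # z-transforms of the hit and false-alarm rates
    z_hr = norm.ppf(hit_rate)
    z_far = norm.ppf(false_alarm_rate)
    d_prime = z_hr - z_far              # Eq. (3): d' = Z(HR) - Z(FAR)
    c = -0.5 * (z_hr + z_far)           # criterion location
    beta = math.exp(d_prime * c)        # Eq. (4): beta = e^(d'*c)
    return d_prime, c, beta

# SDT rates reported in Table 9 of the paper
d_prime, c, beta = sdt_indices(0.81812, 0.5)

# beta_opt = P(N)/P(S); equal priors are assumed here for illustration only
beta_opt = 0.5 / 0.5
print(d_prime, c, beta, "liberal" if beta < beta_opt else "conservative")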
2.1. Fuzzy signal detection theory (FDT)

FDT allows degree values in the range [0, 1] to be used for both the signal and the response (Masalonis & Parasuraman, 10 Nov 2010). Those values describe the perception of the social robot more precisely. For a complete description of each step of the FDT algorithm, see Parasuraman, Masalonis, & Hancock (2000). A general view of the algorithm is shown in Fig. 4.

Fig. 4. Flowchart of the FDT algorithm.

The basic steps of the FDT algorithm are described below:

(1) The mapping functions are defined according to the data and the needs of the analysis. The mapping functions for the signal and the response describe the states as membership values in the range [0, 1].

(2) Once the mapping is performed, an analysis of the signal and response is carried out to calculate the membership values for Hit (H), Miss (M), False Alarm (FA) and Correct Rejection (CR). Those values are obtained by implication functions. In Equation (5), the fuzzy set membership for the four possible outcomes is defined:

H = min(s, r)
M = max(s - r, 0)
FA = max(r - s, 0)
CR = min(1 - s, 1 - r)    (5)

(3) After n observations, the Hit Rate (HR), False Alarm Rate (FAR), Miss Rate (MR) and Correct Rejection Rate (CRR) are calculated by Equation (6):

HR = Σ_i H_i / Σ_i s_i
FAR = Σ_i FA_i / Σ_i (1 - s_i)
MR = Σ_i M_i / Σ_i s_i
CRR = Σ_i CR_i / Σ_i (1 - s_i)    (6)

(4) The fuzzy sensitivity (d') and the likelihood ratio, also called the criterion (β), are calculated using the fuzzy hit and false-alarm values. The fuzzy sensitivity and criterion have the same meaning as in SDT. In Equations (7) and (8) the fuzzy sensitivity and criterion are determined:

d' = Z(HR) - Z(FAR)    (7)
β = Y(HR) / Y(FAR)    (8)

The ordinate of the normal distribution at Z(HR) is represented by Y(HR), and the ordinate of the normal distribution at Z(FAR) is represented by Y(FAR). Those values are obtained using Equation (9):

Y(HR) = (1/√(2π)) exp(-Z(HR)^2 / 2)
Y(FAR) = (1/√(2π)) exp(-Z(FAR)^2 / 2)    (9)
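The four algorithm steps above translate almost directly into code. The following Python sketch is an illustrative implementation of Equations (5)-(9), not the authors' LabVIEW program; feeding it the fuzzy rates reported later in Table 9 (HR = 0.76, FAR = 0.1417) returns d' ≈ 1.78 and β ≈ 1.39, matching the published values.

from scipy.stats import norm

def fuzzy_outcomes(s, r):
    # Eq. (5): fuzzy memberships of one observation in the four outcome sets,
    # where s and r are the signal and response values in [0, 1]
    return {"H": min(s, r),
            "M": max(s - r, 0.0),
            "FA": max(r - s, 0.0),
            "CR": min(1.0 - s, 1.0 - r)}

def fuzzy_rates(signals, responses):
    # Eq. (6): aggregate the outcome memberships over n observations
    out = [fuzzy_outcomes(s, r) for s, r in zip(signals, responses)]
    total_s = sum(signals)
    total_ns = sum(1.0 - s for s in signals)
    hr = sum(o["H"] for o in out) / total_s
    far = sum(o["FA"] for o in out) / total_ns
    mr = sum(o["M"] for o in out) / total_s
    crr = sum(o["CR"] for o in out) / total_ns
    return hr, far, mr, crr

def fuzzy_sensitivity_and_criterion(hr, far):
    # Eqs. (7)-(9): fuzzy d' and beta from the fuzzy hit and false-alarm rates
    z_hr, z_far = norm.ppf(hr), norm.ppf(far)
    d_prime = z_hr - z_far
    beta = norm.pdf(z_hr) / norm.pdf(z_far)   # Y(HR)/Y(FAR), using the Eq. (9) ordinates
    return d_prime, beta

# Check against the FDT rates reported in Table 9
print(fuzzy_sensitivity_and_criterion(0.76, 0.1417))   # approximately (1.78, 1.39)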
3. Robot (TEC-O)

TEC-O performs different tasks such as facial expressions (Cohen et al., March 2014), voice commands, image processing and the detection of facial expressions. This section describes the design of TEC-O. The robot requires a built-in Personal Computer (PC), a video camera, a robust body, two arms, pressure sensors for the body and face, and a gestural face, which make TEC-O a complete built-in robot that can be used in Human-Robot Interaction (HRI).

Table 2. TEC-O's servomotors.
Part              Servomotors
Left arm          3 x DGServo S05NF STD, 1 x Feetech RC FS5106
Right arm         3 x DGServo S05NF STD, 1 x Feetech RC FS5106
Neck              2 x Feetech RC FS5106
Facial gestures   4 x PowerHD HD-1900A

Although there are different robots on the market which could be used in autism therapies, those robots are not normally designed with a specific purpose and have limitations. As a result, TEC-O was specifically designed for the treatment of autism so that it can be applied to the complete spectrum of therapies described by Argall & Billard (2010). TEC-O is designed mainly to attract the child's attention in autonomous mode; when the child does not pay attention, the therapist can use the manual mode to capture the child's attention again using different social resources.

Several works suggest that the use of robots in the therapies of children with autism helps increase their social abilities (Scasselati, 2007) whenever their degree of autism is not severe. Some works claim that a humanoid shape is the best choice for this kind of therapy (Aldebaran Robotics, 2014; Blow et al., 2006; Robins et al., 2006); moreover, there are works that demonstrate realistic robots with human-like features, such as the robots presented by Robins et al. (2006). Generally, if the robot is realistic, the human interaction has a better chance of being improved. However, the decision regarding the use of a realistic robot will depend on the therapist, because the preference for a realistic or non-realistic robot is different for each child (Diehl et al., 2012).

Some of the basic characteristics that were taken into consideration for designing TEC-O are presented below.

1. TEC-O can be directly connected to a conventional electric supply (120 V, 60 Hz); this source is electrically and mechanically isolated for the protection of the child.
2. TEC-O can be either programmed in LabVIEW or used without previous programming knowledge using basic blocks (see Fig. 11), so in the therapies the robot can be moved in manual mode.
3. TEC-O can be controlled by the therapist on-line (using the front panel) or off-line (using the pre-loaded program).
4. TEC-O can generate five basic facial expressions: happy, angry, serious, surprised and sad.
5. An audio system is included to play TEC-O's voice, which can be generated by the Microsoft Speech Synthesizer or by pre-recorded phrases integrated into the program.
6. Its arms can move like human arms. Although it does not include functional hands, it can make basic movements. For instance, TEC-O can say hello by moving its arm and it can give a hug. It can also produce a complete set of gestures.
7. Its video camera provides the ability to detect the child's face and track it, so the robot can smile when it finds the child's face. If the child remains in the same position for a short period of time (2 s), the robot will smile.
8. Its tactile sensors provide the ability to sense the pressure and the frequency of the touch events.

Fig. 5. TEC-O's dimensions and servomotors.

Table 3. Mechanical movement range for each servomotor.
Servo   Range (degrees)   Default value
R0      0-180             0
R1      0-180             180
R2      0-180             90
R3      90-180            180
L0      180-0             180
L1      180-0             0
L2      180-0             90
L3      0-90              0
N0      0-180             110
N1      89-134            89

Fig. 6. TEC-O's face with the four servomotors used for facial expressions (h0-h3) and the pressure sensor located in the nasal region.

For the child's safety, TEC-O was designed to resist normal mechanical impacts. Because it is made of Nylamid SL/60, a strong, resistant and durable plastic, children are able to use TEC-O as aggressively as they wish without incident. Small robots are not always attractive to children, so TEC-O is a medium-sized robot. An Arduino Mega 2560 board, which is a microcontroller board based on the ATmega2560, generates Pulse Width Modulation (PWM) outputs in order to control the servomotors.
A data acquisition card is used for getting the information from the pressure sensors. TEC-O has 14 mechanical degrees of freedom (DOF) distributed as follows:

- 4 DOF for the left arm
- 4 DOF for the right arm
- 2 DOF for the neck
- 4 DOF for the facial expressions

Each DOF is operated by controlling a specific servomotor according to Table 2. Since the servomotors are controlled by voltage signals, the motor current can be regulated to the minimum value needed for moving the robot from the initial point to the final one. When the child blocks the movement of the robot, the motor current increases and the robot immediately stops; this current system acts as a protection system for the children. The threshold for the motor current can be adjusted for different tasks, such as giving a hug.

A graphical representation of TEC-O is presented in Fig. 5. The positions of the servomotors (left arm L, right arm R and neck N) and the pressure sensors (c, n and l) are shown. The range of movement for each servomotor is defined in Table 3, and the complete information about the ranges is presented in Fig. 6.

Gesticulating is an important part of TEC-O which helps it interact with the child during the therapy (Vielma & Salas, 2000; Vigostky, 1962). For generating facial expressions, servomotors were placed into the facial structure. Fig. 6 shows the position of each servomotor inside the head of TEC-O (h0-h3). In addition, pressure sensors can be fitted in the face, as is shown in the nasal region.

TEC-O can also track the child's face and detect some basic facial gestures such as smiling, which is an input stimulus. TEC-O can also generate facial expressions, voice messages and body movements, which are the outputs. This control system is based on Type-2 Fuzzy Logic (T2FLS) (Mendel, 2007; Mendel & John, 2002), because fuzziness is the essence of human development and existence, which understandably is a necessary condition for human learning, growth and survival (Karwowski, 1992).

3.1. TEC-O inputs and outputs (interface)

The inputs for the robot are: facial expressions, the distance from TEC-O to the child, and pressure signals. TEC-O's outputs are: body movements, facial expressions and voice signals. Using those inputs and outputs it is possible to have an effective interaction with children (Boccanfuso & O'Kane, November 2011; Ruffman et al., 2001). The vision system is implemented using the NI IMAQ module of LabVIEW (IMAQ). The face detection is performed using basic image operations. The colour of the skin is found by Equation (1):

S = R - Gr    (1)

where R is the red channel of the RGB matrix, Gr is the grey-scale version of the original image, and S is the matrix that contains the skin colour.

Fig. 7 depicts the face-detection process. TEC-O is able to record the number of gestures made and it generates a classification. This is useful for analysing progress during the intervention.

Fig. 7. The face detection process.
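The skin rule in Equation (1) of Section 3.1 can be sketched in a few lines of NumPy. This is only an illustrative reimplementation of the idea, not the NI IMAQ/LabVIEW vision code running on TEC-O; the grey-scale conversion and the binarisation threshold used below are assumptions, since the paper only defines S itself.

import numpy as np

def skin_mask(rgb, threshold=20.0):
    # S = R - Gr: red channel minus a grey-scale version of the image (Eq. (1), Section 3.1).
    # The simple channel-average grey conversion and the threshold are illustrative assumptions.
    rgb = rgb.astype(np.float32)
    red = rgb[..., 0]
    grey = rgb.mean(axis=2)
    s = red - grey
    return s > threshold          # boolean mask of candidate skin pixels

# Example on a random frame standing in for a camera image
frame = np.random.randint(0, 256, (120, 160, 3), dtype=np.uint8)
print(skin_mask(frame).mean())    # fraction of pixels flagged as skin-coloured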
Touching TEC-O during therapy can play an important role in the development of several social skills. Pressure sensors are used to determine the correct output values for facial expressions, verbal messages and body movements. The relationship between the inputs and outputs of the robot is nonlinear; thus, a T2FLS is an excellent proposed solution because it can handle uncertainties. For instance, a pressure signal could be misinterpreted by a person if the force applied to the hand is not regulated when a handshake is performed. Common descriptions may have different meanings to different people (Mendel, 1999, 2007). Therefore, a survey of 30 people was carried out to measure how they perceived the pressure concepts "Low, Moderate and High". Table 4 shows the survey results, which are used to define the T2FLS for each variable.

Table 4. Fuzzy set definition by voltage representation (linguistic label of strength when the child touches).
Set symbol   Description         Mean (volts)   Variance (volts)
L            Low strength        0.4413         0.4498
M            Moderate strength   2.5399         0.6178
H            High strength       4.6676         0.2365

Due to the differences in how people define touching, every triggered facial gesture is also generalized by a T2FLS, which produces a general response according to each programmed gesture, no matter how people perceive the concept of touching. The topology of the T2FLS is illustrated in Fig. 8, in which the three main stages are defined for each T2FLS.

Fig. 8. MIMO T2FLS expressed as four MISO T2FLS.

The decision system is based on the T2FLS described in Fig. 9 (Karnik, Mendel, & Liang, 1999; Mendel & John, 2002); the input variables are t, n, c, l, r ∈ X and the output variables are h0, h1, h2, h3 ∈ Y. Table 5 shows the inputs and outputs used. In addition, the T2FLS has five inputs and four outputs, and each output has its own rule set. According to Lee (1990), a MIMO system can be expressed as several MISO systems, so this T2FLS (Karnik et al., 1999; Mendel, 1999, 2007) is defined as four T2FLS with five inputs each. Figs. 9 and 10 depict the input and output variables implemented in the T2FLS.

Fig. 9. T2FS description for the inputs.
Fig. 10. T2FS description for the outputs.

Table 5. Input and output definitions according to Fig. 9.
Input    Range         Description
t        0-10 s        Face-detected elapsed average time
n        0-5 volts     Nose sensor voltage
c        0-5 volts     Chest sensor voltage
l        0-5 volts     Left-hand sensor voltage
r        0-5 volts     Right-hand sensor voltage
Output   Range/Set (degrees)      Description
h0       89, 114, 138 (C, N, O)   Eyelids servo
h1       22, 56, 89 (O, N, C)     Mouth servo
h2       32, 61, 89 (O, N, C)     Eyebrows servo
h3       89, 120, 150 (C, N, O)   Left arm servo
h4       89, 120, 150 (C, N, O)   Right arm servo

The variables c, l, r and n measure the pressure that the child applies to the robot's body when TEC-O is touched (Robins & Dautenhahn, 2014). This pressure value is measured and transformed into voltage signals. Variable n has its sets contracted because the sensitivity in TEC-O's face is greater than the sensitivity in the arms and chest. More sensors can be added in order to detect additional information, but the defined sensors cover the main parts of TEC-O. For several people, touching the face, nose, or ears can result in an uncomfortable sensation, so the membership functions were adjusted in order to send the correct response to the child who produces the sensation.

For h0, the "closed" linguistic term means that the eyelids are closed; for h1 it means that the mouth is completely closed; for h2 it means that the eyebrows are lowered; and for h3 it means that the left and right arms are in a relaxed position. The set of linguistic rules for each output is defined in Table 6. Each premise set is organized in the following form: for t, from left to right, Not Present (NP) and Present (P); for the variables c, l, r, n, from left to right, Low (L), Moderate (M) and High (H). Each consequent set is organized, from left to right, as Closed (C), Neutral (N) and Opened (O).

Table 6. Linguistic rules for the T2FLS.
1. t: P, l: L, r: L, c: L  ->  h0: O, h1: C, h2: N, h3: C
2. t: P, l: M, r: M, c: M  ->  h0: O, h1: C, h2: N, h3: N
3. t: P, l: H, r: H, c: H  ->  h0: O, h1: C, h2: C, h3: O
4. t: P, n: L              ->  h0: O, h1: C, h2: N, h3: C
5. t: P, n: M              ->  h0: N, h1: C, h2: N, h3: N
6. t: P, n: H              ->  h0: C, h1: C, h2: C, h3: O
7. t: NP, n: L, l: L, r: L, c: L  ->  h0: N, h1: C, h3: C
8. t: NP, n: M, l: M, r: M, c: M  ->  h0: N, h1: C, h3: N
9. t: NP, n: H, l: H, r: H, c: H  ->  h0: N, h1: O, h2: O, h3: O

It is important to mention that the expression of sadness is not included in the T2FLS, since the intervention therapy does not include this condition. If this emotion is needed, it could be programmed into TEC-O.
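Table 4 reports only the mean and variance of the 30 survey answers for each touch-strength label. One simple way to turn those statistics into membership functions is a Gaussian curve per label, as sketched below. This is a type-1 simplification for illustration only: the robot actually uses an interval type-2 fuzzy system, and building Gaussian memberships directly from the Table 4 statistics is an assumption rather than the authors' stated design.

import math

# Mean and variance (in volts) of the touch-strength labels from Table 4
TOUCH_SETS = {
    "Low":      (0.4413, 0.4498),
    "Moderate": (2.5399, 0.6178),
    "High":     (4.6676, 0.2365),
}

def gaussian_membership(x, mean, variance):
    # Degree (0-1) to which voltage x belongs to a label with the given statistics
    return math.exp(-((x - mean) ** 2) / (2.0 * variance))

def fuzzify_touch(voltage):
    # Membership of one 0-5 V pressure reading in each linguistic label
    return {label: round(gaussian_membership(voltage, m, v), 3)
            for label, (m, v) in TOUCH_SETS.items()}

print(fuzzify_touch(2.1))   # mostly "Moderate", with a small "Low" component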
TEC-O can show a wide variety of social expressions during the therapy. Some conditions are preprogrammed in order to help the therapist manipulate the robot. However, the therapist can modify the expressions online in order to improve the therapy; hence, the initial condition of TEC-O was defined with a serious expression (initial expression). If the robot detects the child's face using the vision system, then the robot can behave in several ways according to the information provided by the pressure sensors. If the child's face is not detected and the child touches the robot, TEC-O will become surprised. If the robot detects the child's face and one or several tactile sensors are touched by the child, the robot will show an expression of unhappiness (anger) and produce a voice response. Conversely, if the child decides to touch the robot's nose, the robot will show an angry expression, but it will close its eyes and will turn its head to one side or the other in order to avoid the child's face. The validation test of the facial expressions based on the T2FLS is presented in Table 7. More alternatives can be included in the therapy, but they have to be selected according to the child's response so that the therapist can select the correct social expression. In a nutshell, Table 7 shows the facial-expression responses (TEC-O's gestures) that are generated when the tactile sensors are touched. The proposed type-2 fuzzy logic system (T2FLS result) calculates these facial responses. As a result, TEC-O can generate several facial expressions according to the child's needs. Furthermore, the fuzzy logic system can be adjusted on-line in order to give a stronger facial expression that can be recognized by the child when the facial expressions are not well detected.

Table 7. Some of TEC-O's gestures according to the linguistic set of rules (gesture, T2FLS result and TEC-O's gesture): Serious, Surprised, Happy, Angry, Angry (variation) and Sad. N/A: binary selection (non-fuzzy).

The user interface, which has to be friendly, is related to online adaptability and is the most important connection between TEC-O and the therapist during the intervention. With this interface, the therapist can intuitively learn how to control TEC-O. In addition, LabVIEW, a graphical programming environment, shows the digital code in a block diagram, which is easier to modify. Fig. 11 shows part of the front panel used to control the robot during the therapy.

One of the main impairments that children with ASD present is related to language (Ekman & Friesen, 1969; Goldberg et al., 2003), so the robot contains a sound system for sending voice messages that allows the therapist to engage the child in a verbal interaction (Sampath, Indurkhya, & Sivaswamy, 2012). The possibility of making a sound could be the first step in establishing communication, which is necessary for social interaction.
Fig. 11. Part of the front panel for controlling the robot.
Fig. 12. Microsoft Speech Synthesis VI for the .NET Framework of Windows.

Furthermore, the voice command is designed to be used by the therapist as a reinforcement method for completing the activities when the child is distracted. TEC-O produces speech by using Microsoft Speech for the Microsoft .NET Framework. Depending on the therapy activities, TEC-O can be programmed to speak when the therapy requires verbal communication. The Microsoft Speech module allows for the selection of the best voice from several types of preloaded voices. The age and gender of the speaker can also be modified. Another alternative for providing TEC-O with speech is recording a specific voice that could be familiar to the child; for example, the voice of the child's mother or father could be used to send verbal messages. These voice alternatives are ultimately selected by the therapist. This voice system is presented in Fig. 12. In the first step of the therapy, TEC-O uses short, simple sentences like "I am happy" to send a clear message.
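TEC-O drives the Microsoft Speech Synthesizer from LabVIEW. As a rough, hedged analogue of that behaviour (not the authors' toolchain), the sketch below uses the pyttsx3 package, which wraps the Windows speech API, to list the preloaded voices, pick one, and speak one of the short therapy phrases; the choice of package, voice index and speaking rate are assumptions.

import pyttsx3

# Illustrative stand-in for TEC-O's voice output (the robot itself calls the
# Microsoft Speech Synthesizer from LabVIEW, not this Python package).
engine = pyttsx3.init()
voices = engine.getProperty("voices")       # preloaded system voices
engine.setProperty("voice", voices[0].id)   # voice selection is an assumption
engine.setProperty("rate", 140)             # slower speaking rate for clarity (assumed)
engine.say("I am happy")                    # short, simple sentence used in the first therapy step
engine.runAndWait()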
Fig. 13. Flow diagram for adjusting the robot criterion and social detection based on signal detection theory (therapist -> social situations -> preselected social situations -> signal detection theory -> robot adjusted -> on-line therapy -> child).

Table 8. Evaluation survey by signal detection theory (stimulus generated by the autistic child; question and observed response; FDT and SDT scores).
1. A strong smile is presented on the face (input stimulus value of 0.9 in FDT and 1 in SDT). Can the robot read human expressions (smile)? The robot detects the smile, but it depends on the distance from the child to the robot. FDT 7, SDT 1.
2. The child looks at the robot's eyes for a few seconds (input stimulus value of 0.5 in FDT and 1 in SDT). Can the robot determine human expressions (eye contact)? The robot can detect the stimulus. FDT 4, SDT 1.
3. The child produces different tones of voice (input stimulus value of 0.8 in FDT and 1 in SDT). Can the robot classify human expressions (tone of voice)? The robot does not group the tones of voice in a correct manner; only partial tones are classified. FDT 3, SDT 0.
4. The child does not show a complete expression of fear (input stimulus value of 0.4 in FDT and 0 in SDT). Can the robot classify human expressions (fear/resistance)? The strong stimulus and the null expression are detected. FDT 2, SDT 0.
5. The child touches the robot in a strong manner and looks directly into its eyes (input stimulus value of 0.9 in FDT and 1 in SDT). Is the response time of the robot satisfactory? The robot response is adequate. FDT 7, SDT 1.
6. The child does not show joint attention (input stimulus value of 0 in FDT and 0 in SDT). Does the robot detect joint attention? The robot can detect joint attention if the child follows the directions in the therapy. FDT 2, SDT 0.
7. The child approaches when the robot produces sounds (input stimulus value of 1 in FDT and 1 in SDT). Does the robot determine whether the patient responds appropriately to audio cues? The robot partially detects the audio cues. FDT 5, SDT 1.
8. The child does not interact in a correct manner (input stimulus value of 0.7 in FDT and 1 in SDT). Does the robot show a deficit in its ability to initiate basic social interaction? The robot begins the social interaction and is able to send different social stimuli. FDT 6, SDT 1.
9. The child and the therapist are sending stimuli, and the child is generating more than 90% of the stimuli (input stimulus value of 0.3 in FDT and 0 in SDT). Can the robot deal with various forms of triadic interactions, like a pivot? The robot is not able to be the pivot in the therapy; the therapist is always the pivot, so the robot generates incorrect responses when it plays the role of pivot. FDT 1, SDT 1.
10. The child is motivated by this programmed therapy (input stimulus value of 1 in FDT and 1 in SDT). Is the robot proactive during the therapy, motivating the patient? The robot is proactive during the therapy. FDT 5, SDT 1.
11. The child waits for the robot response for more than 3 s (input stimulus value of 0.9 in FDT and 1 in SDT). Does the robot encourage the child to wait for the robot's responses? The response time is adequate. FDT 6, SDT 1.

Fig. 14. List of choices for each question (values from 1 to 7).

4. Robot intervention

The intervention with the robot is divided into two modalities. The first is based on decisions made by the robot in an autonomous manner (preselected social situations). The second offers the therapist the possibility to control the intervention with the robot, creating new social situations within the therapy session, depending on the needs of the child and on the spontaneous opportunities that arise in the session. Signal detection theory can be seen as a method that facilitates the evaluation of the performance of the robot and its sensitivity in detecting social signals. Fig. 13 illustrates the intervention scheme during the first and second modality. The second modality can be regulated by the therapist in a direct way, so that the therapist can make decisions about new social situations according to the needs of the child. In this case, the robot functions as a tool that the therapist uses according to his or her criterion, as much as he or she is normally required to do in any traditional therapy. However, when the robot makes autonomous decisions, a previous evaluation is required, since the therapist's criterion has to be substituted by the criterion of the robot. Signal detection provides the information needed to change the design conditions of the robot.

1. Robot intervention based on Signal Detection Theory (SDT)

In the first modality, SDT is employed in order for the robot to make decisions based on the reactions of the child. The preselected social situations were designed to start basic social contact using greetings, facial expressions and body movements generated by the robot. Those situations enable the child to develop initial social skills in the following scenarios:

1) Facial expressions are used for associating the emotion with the facial expression assumed on the robot's face.
2) Body movements are coordinated with facial expressions.
3) A library of audible messages is offered to complement the therapy.

The robot's facial expressions, body movements, and audible messages confront the child with several social situations within a particular context. The robot then has to detect the child's facial expression and select the correct social response. When the robot chooses the preselected social situation, SDT is applied to adapt the therapy to the child's needs. It is convenient for SDT to be used with a liberal criterion when the therapy is offered to a child with higher-functioning ASD. For a child with lower-functioning ASD, a conservative criterion in the robot is more appropriate, since the child's tolerance to stimuli is lower. Children with high-functioning ASD can participate in social interaction using different perception channels at the same time, so different stimuli can be implemented by the robot.
If the robot does not detect the expected percentage of the social signals generated by the child, the criterion of the robot can be adjusted using SDT. In addition, the therapist's experience can be used to obtain information about the response of the robot and, thus, improve the performance of the robot.

4.1. Manual robot intervention

In order to offer the therapist maximum flexibility in the use of the robot, a manual intervention mode is available. In this mode, the therapist can adjust the robot to send particular signals to the child independently of the child's response. This mode can be implemented while the criterion of the robot is being adjusted. It can also be used in specific spontaneous situations that arise during therapy in which the preselected social situations need to be complemented.

5. Fuzzy signal detection theory robot evaluation

The evaluation is based on a survey that was applied while the robot was operating during a therapy session with an autistic child. However, the set of questions could be modified according to the specific social skills that need to be evaluated in the robot. The questions were selected according to basic autism-therapy needs for a high-functioning autistic child. Each question is designed to provide information connected with the social skills that have to be included in the autism therapy. If the therapist chooses, the survey could be extended to include more social aspects. Additionally, the analysis done with fuzzy signal detection theory allows the therapy to be adjusted based on the social responses of the robot. As a result, the interaction between the therapist, the child and the robot is improved. The evaluation is carried out by a LabVIEW program with a user-friendly graphical interface, so the results are obtained automatically. The survey was applied while the robot was operating; hence, the stimuli are generated during the human-robot interaction. When fuzzy signal detection theory is applied, the stimulus can be adjusted according to different situations (e.g., if the robot is stimulated with a happy facial expression, the value of the input is close to one, but when the robot is stimulated with non-visual contact, the stimulus is close to zero). The designed survey is shown in Table 8. The scale for answering the survey goes from 1 to 7 in fuzzy detection theory (see Fig. 14). In conventional detection theory, the possibilities range from 0 to 1. Conversely, the input for each question is defined from 0 to 1 in FDT, while SDT defines only 0 and 1.
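The paper does not spell out how the 1-7 answer scale is mapped onto fuzzy response values, so the sketch below assumes a simple linear rescaling to [0, 1]. Applying the Section 2.1 formulas to the stimulus values and FDT answers of Table 8 under that assumption approximately reproduces the FDT rates reported later in Table 9, which suggests the mapping is close to what the LabVIEW program does; the code itself is illustrative and is not the authors' implementation.

from scipy.stats import norm

# Stimulus values (s) and 1-7 answers taken from the FDT column of Table 8
s = [0.9, 0.5, 0.8, 0.4, 0.9, 0.0, 1.0, 0.7, 0.3, 1.0, 0.9]
answers = [7, 4, 3, 2, 7, 2, 5, 6, 1, 5, 6]
r = [(a - 1) / 6 for a in answers]            # assumed linear mapping of the 1-7 scale to [0, 1]

# Fuzzy outcomes (Eq. (5)) and rates (Eq. (6))
hits = [min(si, ri) for si, ri in zip(s, r)]
false_alarms = [max(ri - si, 0.0) for si, ri in zip(s, r)]
HR = sum(hits) / sum(s)
FAR = sum(false_alarms) / sum(1.0 - si for si in s)

# Fuzzy sensitivity and criterion (Eqs. (7)-(9))
z_hr, z_far = norm.ppf(HR), norm.ppf(FAR)
d_prime = z_hr - z_far
beta = norm.pdf(z_hr) / norm.pdf(z_far)
print(round(HR, 3), round(FAR, 3), round(d_prime, 2), round(beta, 2))
# roughly 0.766 0.139 1.81 1.39, close to the FDT column of Table 9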
The evaluation is done automatically by a LabVIEW program. The complete program was developed using LabVIEW and consists of an interface which originates in the front panel shown in Fig. 15. The program needs the file path of the survey file, which must be in CSV format and contain at most 14 questions. The user can then choose between two testing methods: Signal Detection Theory (SDT) or Fuzzy Signal Detection Theory (FDT). The start button is used to load the questions and the indicators for signals and responses. The return button resets the entire program, loads another file, and/or changes the modality. After the questions are displayed, the signal indicators appear on the left side, where the designer can assign the input values for each question and then hide the values with the show button. In SDT mode (Fig. 16a), signals and responses have values of 0 or 1, and the program shows the results for β, d', HR and FAR. In FDT mode (Fig. 16b), the values for the signals are any number between 0 and 1, while the values for the responses are numbers between 1 and 7. The results are β, d', HR, FAR, Y(HR) and Y(FAR). When all the values for each signal and response have been assigned, the calculate button performs the necessary calculations to produce the result.

Fig. 15. LabVIEW front panel for SDT and FDT.
Fig. 16. Front panel for SDT (a) and FSDT (b) in LabVIEW.

6. Preliminary results

According to the results (see Table 9), the robot correctly classifies the child's social response, so the vision system is able to detect how the facial expression looks when the child smiles. TEC-O has a medium-liberal criterion that gives the robot the opportunity to interact with the child. When low-functioning autistic children are using TEC-O, the criterion is set to a conservative level, because the child cannot tolerate some social stimuli. These additional social inputs include several body movements for attracting the child's attention. The results presented in Table 9 are used to adjust the robot in the therapy session. The results can determine the criterion the robot will set. Since the information is monitored during the therapy session, it is possible for the robot to set its criterion in a liberal or conservative manner. However, the robot usually sets the beta value to maximize, or nearly maximize, the outcome of the social response.

Table 9. SDT and FDT results.
                               SDT        FDT
Hit Rate (HR)                  0.81812    0.76
Miss Rate (MR)                 0.181818   0.24
False Alarm Rate (FAR)         0.5        0.1417
Correct Rejection Rate (CRR)   0.5        0.8583
Sensitivity (d')               0.908458   1.77916
Likelihood Ratio (β)           0.661895   1.38553

Fig. 17. ROC curve for FDT.
Fig. 18. ROC curve for SDT.

The robot has to maximize the number of hits during the therapy, thereby sending the correct positive social reinforcement to the child. Thus, both d' and the criterion can be changed according to the autistic child's functioning level, so that the robot increases the number of hits. The robot can be manipulated to adjust how much a hit or a correct rejection is valued and how much a false alarm or a miss costs in terms of social skills. Figs. 17 and 18 show the ROC curves for FDT and SDT.

Some of the parameters of the robot that can be adjusted to improve the autistic child's social skills and interaction during therapy are presented below. These parameters also change the criterion and the conditions of d' (β).

- Waiting time response: the waiting time is the period that begins when the child touches the robot and finishes when the robot produces a facial expression (see Tables 5 and 6).
- Interaction distance: the social distance between the robot and the child is recorded by the camera installed in the robot. This distance is regulated by sending verbal messages from the robot to the child. During the therapy session, the robot maintains a social distance at a normal level with the child (see Fig. 7).
- Motion speed: the motion-speed variable controls the speed of the body movements (see Fig. 11) and facial expressions. Generally, this parameter is set to the same value, but it can be adjusted according to the child's needs.

As shown, it is possible to use fuzzy signal detection theory to adjust the parameters of the semi-autonomous robot. The main limitations of this methodology are presented below.

- The evaluation survey is designed to find specific social conditions in the robot. Hence, a complete set of evaluation surveys is required when the robot is used in complex social environments.
- The evaluation survey does not run an intelligent system to create new questions for improving the evaluation survey. Moreover, an optimization algorithm could be included in the LabVIEW program which runs the survey.
- Although fuzzy signal detection gives an excellent approach for adjusting a semi-autonomous robot, there are social parameters that the therapist must validate. Thus, autonomous robots are not included in this work.

7. Conclusions

Several robots have been used for autism therapy; however, there are not enough evaluation methods for adjusting the robot to obtain the correct social response. The form of the response of the robot determines the robot's criterion. This criterion can be liberal, normal or conservative. Since there are different levels of autism, the criterion of the robot has to be changed according to the child's needs. For example, high-functioning autistic children need a liberal criterion, but low-functioning autistic children need a conservative criterion. To determine the robot's capacity for detecting social stimuli, and its criterion, FDT is applied to the robot. Moreover, the concept test shows that this evaluation is a powerful tool for validating the correct performance of the robot. This paper proposes the use of psychological tools in order to evaluate robots that interact with children with autism. If these tools are exploited for robotic systems, the digital programs inside the robot can be seen as a programmed psychological system. Moreover, the personality of the robot is based mainly on the criterion established by the digital program. A theoretical evaluation using FSD is a fundamental tool in the robot's design process for learning more about the robot's social response. This paper presents a first approach to evaluating the criterion of the robot using FDT, so it can be extended to several human-robot interactions.

References

Aldebaran Robotics, NAO. Online. Available: / (visited 2014).
Argall, B. D., & Billard, A. G. (2010). A survey of tactile human-robot interactions. Robotics and Autonomous Systems, 58, 1159-1176.
Bandura, A. (1987). Pensamiento y acción: Fundamentos sociales. Barcelona, España: Martínez Roca.
Benedet, M. J. (2002). Neuropsicología Cognitiva. Aplicaciones a la clínica y a la investigación. Fundamento teórico y metodológico de la Neuropsicología Cognitiva. Ministerio de Trabajo y Asuntos Sociales.
Blow, M., Dautenhahn, K., Appleby, A., Nehaniv, C. L., & Lee, D. (2006). The art of designing robot faces - dimensions for human-robot interaction. In Human-robot interaction, Salt Lake City, USA.
Boccanfuso, L., & O'Kane, J. M. (November 2011). CHARLIE: an adaptive robot design with hand and face tracking for use in autism therapy. International Journal of Social Robotics, 3(4), 337-347.
Cabibihan, J.-J., Javed, H., Ang, M., Jr., & Aljunied, S. M. (November 2013). Why robots? A survey on the roles and benefits of social robots in the therapy of children with autism. International Journal of Social Robotics, 5(4), 593-618.
Cohen, I., Looije, R., & Neerincx, M. A. (2014). Child's perception of robot's emotions: effects of platform, context and experience. International Journal of Social Robotics, 6(4), 507-518.
Costa, S. (2014). Robots as tools to help children with ASD to identify emotions. Autism Open Access, doi:10.4172/2165-7890.1000e120.
Costa, S., Lehmann, H., Robins, B., Dautenhahn, K., & Soares, F. (2013). "Where is your nose?" - Developing body awareness skills among children with autism using a humanoid robot. In The Sixth International Conference on Advances in Computer-Human Interactions.
Dautenhahn, K. (2003). Roles and functions of robots in human society: implications from research in autism therapy. Robotica, 21, 443-452.
De Silva, P. R. S., Tadano, K., Saito, A., Lambacher, S. G., & Higashi, M. (2009). Therapeutic assisted robot for children with autism. In IEEE/RSJ International Conference on Intelligent Robots and Systems (pp. 3561-3567). New York, NY: ACM Press.
DSM-5. Diagnostic and statistical manual of mental disorders (5th ed.) (May 27, 2013). American Psychiatric Association. ISBN-13: 978-0890425558.
Diehl, J. J., Schmitt, L. M., Villano, M., & Crowell, C. R. (2012). The clinical use of robots for individuals with autism spectrum disorder: a critical review. Research in Autism Spectrum Disorders, 6, 255.
Ekman, P., & Friesen, W. V. (1969). The repertoire of nonverbal behavior: categories, origins, usage, and encoding. Semiotica, 1, 49-98.
Feil-Seifer, D., & Mataric, M. J. (2009). Towards socially assistive robots for augmenting interventions for children with autism spectrum disorders. Experimental Robotics, 54, 201-210.
Giullian, N., Ricks, D., Atherton, A., Colton, M., Goodrich, M., & Brinton, B. (2010). Detailed requirements for robots in autism therapy. In 2010 IEEE International Conference on Systems, Man and Cybernetics (SMC) (pp. 2595-2602). doi:10.1109/ICSMC.2010.5641908.
Goldberg, W. A., Osann, K., Filipek, P. A., Laulhere, T., Jarvis, K., Modahl, C., et al. (2003). Language and other regression: assessment and timing. Journal of Autism and Developmental Disorders, 33, 607-616.
Green, D. M., & Swets, J. A. (1966). Signal detection theory and psychophysics. New York: John Wiley & Sons.
Grofer-Klinger, L., & Renner, P. (2000). Performance-based measures in autism: implications for diagnosis, early detection, and identification of cognitive profiles. Journal of Clinical Child & Adolescent Psychology, 29, 479-492.
Hideki, K., Marek, P. M., & Cocoro, N. (January 2009). Keepon. International Journal of Social Robotics, 1(1), 3-18.
IMAQ Vision for LabVIEW User Manual. National Instruments. Online: /pdf/manuals/.
Karnik, N. N., Mendel, J. M., & Liang, Q. (1999). Type-2 fuzzy logic systems. IEEE Transactions on Fuzzy Systems, 7, 643-658.
Karwowski, W. (1992). The human world of fuzziness, human entropy, and the need for general fuzzy systems theory. Journal of Japan Society for Fuzzy Theory and Systems, 4(5), 825-841.
Kozima, M., Michalowski, P., & Nakagawa, C. (2008). Keepon: a playful robot for research, therapy, and entertainment. International Journal of Social Robotics, 1, 3-18.
Lee, C. C. (1990). Fuzzy logic in control systems: fuzz