Inference of user intention in remote robot wheelchair assistance using multimodal interfaces

Vinicius Schettino and Yiannis Demiris

Abstract: Shared control methodologies have the potential of enabling wheelchair-bound users with limited motor abilities to perform tasks that would usually be beyond their capabilities. Deriving such methodologies in advance is challenging, since they are frequently heavily dependent on unique characteristics of users. Learning Assistance by Demonstration paradigms allow the derivation of customized policies by recording how remote human assistants assist particular users. However, for accurate determination of the optimal policies for each user and context, the remote assistant needs to infer the intention of the driver, which is frequently obscured by noisy signals dependent on the user's motor impairment. In this paper we propose a multimodal teleoperation interface incorporating map information, haptic feedback and user eye gaze data, and examine which of these factors are most important for allowing accurate determination of user intention in a simulated tremor experiment. Our study indicates that for expert assistants, the presence of additional haptic and gaze information increases their ability to accurately infer the user's intention, providing supporting evidence for the utility of multimodal interfaces in remote assistance scenarios for Learning Assistance by Demonstration. Our study also reveals strong individual preferences for the different modalities, with large variations in performance occurring depending on whether supplemental eye gaze or haptic information was given.

I. INTRODUCTION

Research on the topic of navigational assistance for drivers of powered wheelchairs has been active for nearly three decades. These mobility devices, with added sensors and a computing unit capable of providing shared control or autonomous navigation, have been termed Smart Wheelchairs (SW), and some studies indicate that up to 60% of powered wheelchair users could benefit from some sort of assistance provided by this technology [1]. Research in the area of navigational assistance is traditionally concerned with the utility of different assistance paradigms, the integration of varied sensor and control technologies into the system, as well as different forms of feedback, among others.

Interestingly, a much smaller portion of the literature is focused on the people that, besides being unable to walk, also have moderate to severe motor difficulties in controlling a joystick. This includes, for example, people suffering from Parkinson's disease, cerebral palsy, amyotrophic lateral sclerosis, multiple sclerosis, etc., where the range of symptoms affecting the upper limbs includes tremors, muscle weakness and muscle spasms. Yet such patients would be among those that need such assistive technology the most.

The authors are with the Personal Robotics Laboratory, Department of Electrical and Electronic Engineering, Imperial College London, United Kingdom. v.schettino17@imperial.ac.uk, y.demiris@imperial.ac.uk

One of the main challenges in addressing these patients is that deriving an assistive policy that could simultaneously compensate for all these symptoms would be extremely difficult. On the other hand, targeting only one of them significantly reduces the applicability of the technology. Furthermore, even the same symptoms can affect individuals in very different manners, and compensating for this would require a level of personalisation in the design of assistive policies that is beyond current approaches.
A potential way of tackling this impasse is the application of Learning Assistance by Demonstration (LAD) [2], a subset of Learning by Demonstration [3]. In this scenario, a professional familiarised with the user's condition, such as a physiotherapist, a nurse or a caretaker, observes the wheelchair driver's performance and provides remote demonstrations of how to best assist that driver. Then, a machine learning algorithm is used to understand the association between the given demonstrations and the environmental and user inputs collected by the smart wheelchair. Upon successful learning and generalization, the system should be able to autonomously replicate the customized assistance that was being offered by the human professional, eliminating the need for deriving individualized hard-coded assistive policies.

However, for this to be effective, demonstrations of high quality are needed, which means that the assistant should be able to infer as accurately and as fast as possible the intention of the remote driver. In turn, this requires the assistant to be provided with the appropriate tools for the job, which in this case are interfaces capable of representing the states of the driver and the environment. Nevertheless, we are not aware of any study investigating how effective different modalities of interfaces are in communicating the driver's intention to the remote assistant. This information is of high value for continuing the development of LAD techniques, as the performance of learned policies is upper bounded by the performance of the demonstrations provided.

In this paper we discuss a novel study done to explore the impact that three different modalities of interfaces have on remote inference of intention, namely map and localization, haptic pairing, and eye gaze. We developed a platform where all three interfaces are available to a remote assistant, as shown in Figure 1, and conducted an experiment where subjects tried to infer the intention of a driver suffering from simulated intense tremors. During the experiment, subjects had the competing goals of trying to be as accurate and as fast as possible in their guesses, while being exposed to one of four combinations of interfaces: map only; map + haptic pairing; map + eye gaze; map + haptic pairing + eye gaze.

Fig. 1. System overview. A remote operator provides assistance to a wheelchair user through the use of a multimodal interface including map, haptic and eye gaze information.

II. RELATED RESEARCH

A recent review [4] on the topic of smart wheelchairs shows that the field is still very much active, with nearly 40 prototypes developed in a decade by institutions all over the world. However, this raises the question of why none, or almost none, of this research has made it into a commercial product, especially considering how big the potential market is [5], [6]. This topic has been explored many times in the recent literature [7], [5], [8], [4], [9], and researchers seem to have reached a common answer: current systems still lack customization capabilities. Each user is unique in his disabilities and preferences, and therefore there is no one-size-fits-all assistive solution that could suit the whole population. While designing hard-coded assistive policies to attend to users individually is impractical, LAD posits that customization might be attained by leveraging the tacit knowledge of rehabilitation and care-taking professionals, as they can naturally provide assistance demonstrations that are personally fit for users' disabilities, preferences and needs [2].

A few works have previously explored Learning by Demonstration in the context of smart wheelchairs. For example, in both [10] and [11], demonstrations from expert drivers are used to extract short primitive driving paths. Then, based on the sensory information of the smart wheelchair, estimations of the driver's intention are made and the most closely related path is used for autonomous navigation. A more direct approach is used in the case of LAD, as was done in [2], [12], [13], [14]. Here, a remote assistant directly provides demonstrations of when and how to help the driver. This information is fed to a machine learning algorithm, which attempts to learn the association between the smart wheelchair's sensory data and the demonstrations.

However, a constraint of the LAD approach is that the learned assistive policies can only be as good as the demonstrations provided. Hence, a successful application of this technique requires that the remote assistant be able to accurately infer the intention of the driver. In the previously mentioned works where LAD was used, this was facilitated by the assistant having either a direct view or a video feed of the scene. However, this leads to difficulties in the learning process, due to the demonstrator having access to information that is not available to the machine [3]. Therefore, in an improved scenario, the assistant should be able to make intention inferences based solely on the data that is also available to the machine.

A review of concepts and techniques available for robot detection of intention is presented in [15], but the LAD context presents a different problem: in LAD, a human has to make inferences about the intention of another, remote, human, albeit through the eyes of a robot. The challenge in this process is how to convey raw sensor data to humans such that they can make use of this information in an intuitive way. Although different interfaces are available for executing this task, we are not aware of any prior study exploring how efficient they are.

In previous works using LAD, a map and localization interface was available to represent information about the environment. Additionally, a joystick equipped with programmable force feedback was used [2], [14] to convey to the remote assistant information about the driver's intention. The force feedback allows the implementation of haptic pairing between both human agents, such that the assistant can feel as if holding the same joystick as the remote driver, and vice versa. This is efficient in the sense that it allows the remote assistant to perceive the intended velocity commands of the driver in an intuitive way. However, in the case of people with difficulties in upper limb motor control, the intention information might not be entirely present (muscle weakness), or present but corrupted with extraneous signals (tremors or muscle spasms).

An alternative source of information not previously explored in this context is eye gaze estimation [16], [17]. The premise here is that drivers would normally look at their destinations before arriving there, and thus this information should be valuable for inference of intention. By extracting a person's gaze direction from a video feed, the dimensionality of the data is significantly reduced, which leads to gains in both teleoperation (small bandwidth needed) and machine learning (reduced overfitting).

III. METHODOLOGY

To explore the impact that different modalities of interfaces (map and localisation, haptic pairing and gaze estimation) have on remote inference of intention, we developed a system that can connect a wheelchair driver to a remote assistant in real time, while simultaneously providing information from all three interfaces in an intuitive way. We also devised an experiment to test how subjects react to different combinations of these interfaces.

A. System overview

An overview of the developed system is illustrated in Figure 1. The base for this system is the ARTA (Assistive Robot for the Transport of Adults) smart wheelchair [2]. Laser scanners (two in the front and one in the back) are the sensing units used to give the robot information about its surroundings. The gathered data is fed to a central computer, which makes use of ROS [18] for both implementing and connecting the different system components.

The first interface explored in this study is based on the movement and localisation of the wheelchair in relation to adjacent obstacles and potential goals. This can be used in two different scenarios: localisation or SLAM (Simultaneous Localization and Mapping). For localisation, where the robot is moving in a previously built map, the Adaptive Monte Carlo Localization approach, as implemented by the amcl ROS package, is used. Since ARTA does not possess wheel encoders, odometry information is built using the hector_mapping [19] ROS package, which leverages the high update rate of the scanners to detect movements in the location of detected spatial features. If the wheelchair is moving in a previously unknown environment, then hector_mapping can be used on its own as the SLAM solution. In either case, this results in a 2D map and a transform for the wheelchair's position on this map. We represent this information to the remote assistant by displaying the map and a model of the wheelchair at the given position, as shown in Figure 3.
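In ROS terms, the localisation output described above reduces to a standard pattern: look up the map-to-wheelchair transform and render the wheelchair model at that pose. The minimal rospy sketch below illustrates the idea; it is our own illustration rather than code from the paper, and the output topic name and the choice of base_link as the wheelchair frame are assumptions.

```python
#!/usr/bin/env python
# Sketch: read the wheelchair's pose in the map frame (as produced by amcl
# or hector_mapping) and republish it for the assistant's map display.
import rospy
import tf
from geometry_msgs.msg import PoseStamped

def main():
    rospy.init_node('wheelchair_pose_relay')
    listener = tf.TransformListener()
    pub = rospy.Publisher('/assistant/wheelchair_pose', PoseStamped, queue_size=1)
    rate = rospy.Rate(10)  # the map display only needs a modest update rate
    while not rospy.is_shutdown():
        try:
            # map -> base_link: the wheelchair's 2D pose on the map
            trans, rot = listener.lookupTransform('map', 'base_link', rospy.Time(0))
        except (tf.LookupException, tf.ConnectivityException, tf.ExtrapolationException):
            rate.sleep()
            continue
        pose = PoseStamped()
        pose.header.stamp = rospy.Time.now()
        pose.header.frame_id = 'map'
        pose.pose.position.x, pose.pose.position.y, pose.pose.position.z = trans
        (pose.pose.orientation.x, pose.pose.orientation.y,
         pose.pose.orientation.z, pose.pose.orientation.w) = rot
        pub.publish(pose)
        rate.sleep()

if __name__ == '__main__':
    main()
```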
The second interface uses eye gaze estimation. For this, a webcam facing the driver is used to capture a video feed of the driver. The images are processed online by the RT-GENE framework (Real-Time eye Gaze Estimation in Natural Environments) [16]. RT-GENE first extracts image patches of the face and eyes of the subject. Based on these image patches, the head pose and eye gaze angles are estimated using a set of deep neural networks. These angles are made available to the remote assistant by rotating the head of a simulated human model sitting on the wheelchair accordingly. Since only a 2D map is used, we opted to map only the yaw angle of the head plus gaze direction. RT-GENE is capable of running in real time with low latency on the wheelchair's internal computer. Hence, the processing is performed on the wheelchair's side, which reduces the dimensionality of the data being transmitted from a full image (640x480 pixels) to a single degree of freedom.
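Conveying this single degree of freedom to the assistant's side could then be as simple as the relay sketched below. This is a hypothetical illustration rather than RT-GENE's actual ROS interface: the topic names and the use of plain Float32 messages are our assumptions.

```python
#!/usr/bin/env python
# Sketch: combine head yaw and eye-in-head yaw into the single angle used to
# rotate the head of the simulated human model on the assistant's display.
import rospy
from std_msgs.msg import Float32

class GazeYawRelay:
    def __init__(self):
        self.head_yaw = 0.0
        self.pub = rospy.Publisher('/assistant/gaze_yaw', Float32, queue_size=1)
        rospy.Subscriber('/gaze/head_yaw', Float32, self.on_head)  # assumed topic
        rospy.Subscriber('/gaze/eye_yaw', Float32, self.on_eye)    # assumed topic

    def on_head(self, msg):
        self.head_yaw = msg.data  # head orientation relative to the camera

    def on_eye(self, msg):
        # Only yaw is kept since the map is 2D: the transmitted angle is the
        # head yaw plus the gaze direction within the head (radians).
        self.pub.publish(Float32(self.head_yaw + msg.data))

if __name__ == '__main__':
    rospy.init_node('gaze_yaw_relay')
    GazeYawRelay()
    rospy.spin()
```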
The third interface considered is haptic pairing. For this, a joystick which features programmable force feedback is needed. As in previous studies done in our laboratory, we chose the Novint Falcon joystick, due to the relatively strong force that it can exert and its high update rate, which is needed for a real-time haptic feeling. To better integrate this device into the ROS ecosystem, a customized software solution was built on top of the libnifalcon driver. This allows one to use a PID controller to have the joystick follow a target 3D position; furthermore, a position and/or force offset can be applied. In our system, this interface is used by constantly reading the position of the wheelchair driver's joystick and using it as a target for the assistant's joystick, which then feels as if holding the same joystick as the remote driver.
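The position-following behaviour can be illustrated with the short PID sketch below. The gains, the control period and the set_force call at the end are illustrative assumptions, not details of the paper's libnifalcon-based implementation.

```python
import numpy as np

class PositionPID:
    """PID controller producing a force that pulls the assistant's Falcon
    handle toward a target 3D position (here, the driver's joystick pose)."""

    def __init__(self, kp=80.0, ki=5.0, kd=2.0, dt=0.001):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = np.zeros(3)
        self.prev_err = np.zeros(3)

    def force(self, target, position):
        err = np.asarray(target, dtype=float) - np.asarray(position, dtype=float)
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# In the pairing loop (hypothetical device API), the target is the remote
# driver's joystick position, optionally shifted by a position/force offset:
#   f = pid.force(driver_pos, falcon_pos)
#   falcon.set_force(f)
```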
If the assistant detects that intervention is needed, they can press a button to take over control of the operation. In this case, the direction of the haptic pairing is reversed, and the assistant's movements are given as feedback to the driver, as a means of explaining the assistance being provided. The joystick position of the agent in control at any given time is used to drive the wheelchair, but this information is first passed through a safety layer, which is responsible for ensuring that the wheelchair will not collide with static or moving obstacles. A safety strategy can be chosen between collision or obstacle avoidance. In the former case, only the magnitude of the wheelchair's speed is capped when an imminent collision is detected, bringing it to a halt if needed. In the latter, the angular velocities are adjusted instead, to move the wheelchair around the obstacle without having to stop it.

B. Experimental setup

With this system in place, it was important to test how helpful each of the available interfaces actually was in improving remote inference of intention. For this, an experiment was devised in which users were asked to drive to different goal positions in a room. Subjects in a different room would try to guess, as accurately and as fast as possible, the intention of the user while using a combination of the available interfaces. In order to reduce the complexity of the experiment, it was divided into two stages, which allowed us to increase the number of subjects tested and also to eliminate random effects imposed by a driver-assistant pair independent variable.

Fig. 2. User driving in the obstacle course while data is recorded.

The first stage consisted of recording real data from users driving in an obstacle course, as illustrated in Figure 2. For this, we first had to build a map of the room using the SLAM method described above. Then, two drivers were asked to drive to 9 different goal positions on this map. The goal locations were chosen with the intention of resembling regular targets that a wheelchair user could have. While driving, all the relevant data was recorded: laser scanner readings, localization, joystick position and a video of their faces. The goal position was marked by an object on the floor, and drivers were instructed to always approach it at a right angle. The drivers reported not having any known motor disabilities, but had some prior experience with driving a powered wheelchair.

We are interested in the cases of people who might have extra difficulty in controlling a powered wheelchair, but asking our target population to use an untested system at this stage of the research would be exposing them to an unnecessary burden. Instead, we decided to work with able-bodied drivers and leverage the force feedback joystick to impose on them a simulated disability. The chosen disability was a tremor of the hand, similar in nature to what people suffering from Parkinson's disease may feel. This particular form of tremor, typically with a 4-8 Hz frequency, is more intense when the arms are in a resting position and less intense during the execution of a movement. This was simulated in our system by imposing a circular force offset on the driver's joystick.
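The circular force offset can be sketched as a force vector rotating in the joystick plane whose amplitude shrinks as the driver moves. The frequency below lies in the reported 4-8 Hz band, but the maximum force, the damping constant and the exact modulation law are our assumptions.

```python
import math

def tremor_force(t, joystick_speed, freq_hz=6.0, max_force=3.0, damping=4.0):
    """Circular force offset simulating a resting tremor.

    t: time in seconds; joystick_speed: magnitude of the driver's current
    joystick velocity. The tremor is strongest at rest and attenuated while
    the driver is actively moving, mimicking a Parkinsonian resting tremor.
    """
    amplitude = max_force / (1.0 + damping * joystick_speed)
    phase = 2.0 * math.pi * freq_hz * t
    # The force rotates in a circle in the joystick's x-y plane.
    return amplitude * math.cos(phase), amplitude * math.sin(phase)
```

In the control loop, this offset would simply be added to whatever force is already being rendered on the driver's joystick.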
The actual testing happened in the second stage. Subjects were presented with a map displayed on a screen, along with the associated goal positions, as shown in Figure 3.

Fig. 3. Map that was shown to subjects during the experiment.

When the recorded data was played back, the wheelchair model would start moving on the map, and subjects were asked to estimate, as accurately and as fast as possible, which was the goal position of the driver. For this task, they would have the aid of one of the following combinations of interfaces: 1) only map and localisation; 2) map and localisation + haptic pairing; 3) map and localisation + eye gaze; or 4) map and localisation + haptic pairing + eye gaze.
