Hierarchical Motion Segmentation through sEMG for Continuous Lower Limb Motions

Seongsik Park, Donghyeon Lee, Wan Kyun Chung, Fellow, IEEE, and Keehoon Kim, Member, IEEE

Abstract: Surface electromyograms (sEMG) are records of the electrical signals generated by muscles, and they have long been used to indicate the motions intended by humans so as to enable interaction between a robot and a human. To support not only the diverse movements of human daily living but also the growing range of human-robot interfaces and their applications, a new algorithm that can classify continuous lower-limb motion using sEMG signals is proposed herein. By simply constructing a motion hierarchy and a probability distribution of sEMG for each motion phase, obtained using only kinematic motion data and sEMG data, the algorithm achieves higher classification accuracy than state-of-the-art supervised learning methods that consume far more training time. Four different experiments were performed with five participants, and the algorithm was verified to successfully distinguish walking from running, and the sit-to-stand motion from jumping.

I. INTRODUCTION

Human-robot interaction requires human motion recognition, either to make a robot move as desired by recognizing the motion intended by a human (e.g., intuitive control of robots and robotic prostheses) or to help robots assist users by identifying human motion and behavior (e.g., exoskeleton and rehabilitation robots). Motion segmentation, which segments, discretizes, and classifies a continuous data stream of human motion, is part of the process of understanding and interpreting human motion as a kind of higher-level grammar. Most segmentation studies for continuous motion have been performed using vision-based or inertial sensors [1]-[6]. Continuous motion can be segmented into motion phases that constitute the functional units of a continuous motion.

Surface electromyogram (sEMG) signals from muscles provide information that is beneficial for understanding the motion a human intends, because they allow human intentions that cannot be captured by vision-based systems and inertial sensors to be decoded. However, previous studies on motion classification using sEMG have focused primarily on discrete gestures of the hands and/or wrists. Numerous machine-learning techniques and pattern recognition methods have achieved sufficient gesture classification accuracy for practical use. However, the muscular activation that can be detected by sEMG sensors for two such motions can be significantly different [13]. Furthermore, muscle activation signals can be used to maximize muscle elastic energy [14], [15], to adjust joint stiffness for stability and balance, and to provide impact absorption [13], [16]. This potential ability of sEMG to capture the dynamic capabilities of human motion has also contributed to recently developed elastic robots. Hence, the proposed algorithm is a novel hierarchical motion segmentation method for the lower limb using sEMG, and it reveals characteristics of motion that are not exhibited in conventional vision-based motion segmentation. In addition, compared to existing vision-based sensors, which are restricted to a valid workspace, sEMG can be used both indoors and outdoors once it has been learned, owing to its ease of use. This greatly enhances its practical applicability.
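As Fig. 2 below illustrates, the motion labels are obtained by unsupervised clustering in an augmented space combining joint angles and sEMG. The following Python sketch shows one minimal way such clustering could be set up; it is an illustration under assumed array names, preprocessing, and phase count, not the authors' implementation.

```python
# Minimal sketch (not the authors' implementation): unsupervised motion-phase
# labeling by Gaussian-mixture clustering in the augmented space of 3-D joint
# angles and 8-D sEMG envelopes. Array names, standardization, and the phase
# count are illustrative assumptions.
import numpy as np
from sklearn.mixture import GaussianMixture

def label_motion_phases(joint_angles, semg, n_phases=4, seed=0):
    """joint_angles: (T, 3) hip/knee/ankle angles; semg: (T, 8) sEMG envelopes."""
    X = np.hstack([joint_angles, semg])
    # Standardize each channel so angles and sEMG contribute on comparable scales.
    X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-9)
    gmm = GaussianMixture(n_components=n_phases, covariance_type="full",
                          random_state=seed).fit(X)
    return gmm.predict(X)  # one integer phase label per sample (arbitrary ids)

# Example on synthetic data shaped like one trial of T samples.
T = 2000
labels = label_motion_phases(np.random.randn(T, 3), np.abs(np.random.randn(T, 8)))
```

The arbitrary integer ids in this sketch mirror the paper's observation that phase indices carry no a priori physical meaning; their interpretation comes afterwards, from the measured kinematics.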
[Fig. 1: hip, knee, and ankle joints and the eight measured muscles. Gait events: HO: heel off; TO: toe off; SW: swing; HS: heel strike; MS: mid-stance. Muscles: RF: quadriceps rectus femoris; VL: quadriceps vastus lateralis; VM: quadriceps vastus medialis; PL: peroneus longus; TA: tibialis anterior; BF: biceps femoris; GL: gastrocnemius lateralis; GM: gastrocnemius medialis.]

Fig. 2. Automatically generated motion labels from unsupervised clustering. The clustering is performed in an augmented vector space combining the 3-D joint angle space and the 8-D sEMG signal space; it can therefore be inferred that motion phases are grouped considering not only the joint angles but also the muscle activation levels. The motion data shown in the motion data space are color-coded with the motion labels. Left: 3-by-3 grid of 2-D scatter plots of joint angular positions; the joint angle denoted on the diagonal gives the x and y axes of the scatter plots in its column and row, respectively. Right: 3-by-8 grid of scatter plots of the coupled space, with the 8-D sEMG and the 3-D joint angular positions on the x and y axes, respectively.

[Fig.: overview of the learning pipeline, involving the training dataset, augmentation, and HMM learning (Sec. II-B).]

Errors of the second kind, in which the motion phase is incorrect while the motion type is correct, are shown in red. Because we performed unsupervised clustering by specifying only the number of phases for each motion type, the proposed algorithm can assign only arbitrary integer values to the motion phases; it is noteworthy that the physical interpretation of each motion phase was analyzed separately from the proposed method. The results of S3 include the countermovement jump, as shown in Fig. 6. In this case, segmentation errors appear intermittently; however, the result remains close to the true motion labels, without any significant segmentation errors.

To quantify these errors for all participants, a box plot evaluating the segmentation accuracy for each participant was constructed, as shown in Fig. 7. Specifically, when a segmentation error was observed, we analyzed whether it occurred in the first layer, such that the motion type itself is incorrect, as in (19), or in the second layer, such that the motion type is correct but the motion phase is incorrect, as in (20). In addition, for each participant, the classification errors of the four different supervised learning algorithms are also compared. In all participant cases, the proposed HHMM showed the lowest, or close to the lowest, classification error; this was confirmed again in the classification error averaged over all participants.

Fig. 8. Segmentation results for all participant walking data (trials of S1-S4). Color index: 1: HO-TO; 2: TO-SW-HS; 3: HS-MS; 4: MS-HO. All segmentation results are color-coded by this index. Trials below the black horizontal line were used for training; trials above it were used for testing. The temporal alignment is t = 0 when the hip flexion angle reaches its minimum.
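Under one plausible reading of this two-layer error breakdown (the exact forms of Eqs. (19) and (20) are not reproduced in this excerpt), the two error rates can be computed with simple bookkeeping; the names below are illustrative.

```python
# Illustrative sketch of the two-layer error split: a sample is a first-layer
# error when the predicted motion type is wrong, and a second-layer error when
# the type is right but the predicted motion phase is wrong.
import numpy as np

def layer_error_rates(true_type, pred_type, true_phase, pred_phase):
    true_type, pred_type = np.asarray(true_type), np.asarray(pred_type)
    true_phase, pred_phase = np.asarray(true_phase), np.asarray(pred_phase)
    first = pred_type != true_type                    # motion type incorrect
    second = (~first) & (pred_phase != true_phase)    # type correct, phase not
    return first.mean(), second.mean()                # per-sample error rates

e1, e2 = layer_error_rates([0, 0, 1], [0, 1, 1], [2, 3, 1], [2, 3, 2])
print(f"first-layer {e1:.2f}, second-layer {e2:.2f}")  # first 0.33, second 0.33
```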
The bar graph on the right of Fig. 7 shows the average time consumed to train each algorithm, on a log scale. The HHMM, which can be configured without additional computation once the MoG learning of the motion labeling has been performed, requires little time. In contrast, given the motion labels, the other algorithms except HLSTM, whose supervised learning must be computed separately, require several minutes on average, and HLSTM, which is more complex than the other supervised learning methods, takes about an hour.

2) Physical analysis of the segmentation results: Figures 8 and 9 show the consistency of the motion labels produced for walking and running, respectively. Here, time is aligned so that t = 0 when the hip flexion angle reaches its minimum, and all segmentation results of the training and testing datasets are stacked in the vertical direction. Because the participants walked and/or ran at a constant speed, the more similar the cycles are to each other, the more consistent the segmentation results.

Fig. 9. Segmentation results for all participant running data (trials of S1-S5). Color index: 1: HO-TO-SW; 2: SW-HS; 3: HS-MS; 4: MS-HO. All segmentation results are color-coded by this index. Trials below the black horizontal line were used for training; trials above it were used for testing. The temporal alignment is t = 0 when the hip flexion angle reaches its minimum.

Fig. 10. Two selected trials of S1 walking data segmented by the HHMM. (Top) Positions of all the motion-capture markers in the x-z plane, representing the body posture at each motion phase transition, with the integer index of the destination phase inserted. (Middle) Joint angles and (bottom) sEMG of the right leg, with the transition points marked by black vertical lines.

Fig. 11. Sit-to-stand experimental results of S4 segmented by the HHMM assuming four motion phases. Color index for (a) & (b): 1: rest; 2: hip flex. & knee flex.; 3: stabilize; 4: complete; for (c) & (d): 1: rest & hip flex.; 2: hip ext. & knee ext.; 3: upright; 4: complete. (a) & (c) All segmentation results of the S4 sitting down and standing up motions, respectively. Temporal alignment is t = 0 when the hip flexion angle reaches its maximum. (b) & (d) One selected trial of the S4 sitting down and standing up experiments, respectively. The left panel shows the positions of all the motion-capture markers in the x-z plane, representing the body posture at each motion phase transition, with the integer index of the destination phase inserted; these transition points are also marked by black vertical lines in the joint angle and sEMG graphs.
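The temporal alignment used in Figs. 8, 9, and 11 (Fig. 12 instead aligns on the clavicle's peak vertical position) can be made concrete in a few lines. In the sketch below, the sampling rate and array names are assumptions; only the alignment rule is taken from the captions.

```python
# Minimal sketch of the caption's alignment rule: shift each trial's time axis
# so that t = 0 falls at the hip flexion extremum (minimum for walking and
# running, maximum for sit-to-stand). fs and array names are assumptions.
import numpy as np

def align_trial(hip_flexion, fs=1000.0, extremum="min"):
    """hip_flexion: (T,) hip flexion angle of one trial; fs: sampling rate in Hz."""
    t = np.arange(len(hip_flexion)) / fs
    idx = np.argmin(hip_flexion) if extremum == "min" else np.argmax(hip_flexion)
    return t - t[idx]  # time axis with the extremum at t = 0

# Example: a synthetic trial whose hip flexion minimum falls mid-way through.
t_aligned = align_trial(np.cos(np.linspace(0.0, 2.0 * np.pi, 2000)))
```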
In particular, while walking and running comprise the same number of motion phases, in walking, heel off and toe off fall into one phase, followed by a phase containing toe off, swing, and heel strike; in running, heel off, toe off, and swing fall into one phase, whereas swing and heel strike are separated into another. It is noteworthy that these phases were constructed autonomously by the unsupervised clustering, without any human intervention. In other words, unlike in walking, the swing and heel strike are more clearly characterized in the muscle signals in running. The motion phases can thus be either the same or different, depending on the motion or the participant. This interpretation is supported by the following motion analysis of the segmentation results.

Subsequently, the trained HHMM is used to physically analyze the motion phases inferred from the sEMG by comparing the results with the true joint positions or joint angles obtained from the motion capture data. This confirms that the motion phases inferred from the sEMG segment the continuous motion into physically meaningful phases.

Figure 10 shows the results for walking. At the top, the actual 3-D position of each marker, measured by the motion capture system, is represented. The right leg, which carried the sEMG electrodes, is colored according to the body segment, while the opposite leg is gray. The postures are shown at the times when a motion phase transition occurred in the segmentation result; these transition moments are marked by vertical black lines along the time axes of the joint angle and sEMG graphs. These results reveal the physical features of each motion phase: the segmented motion phase inferred from the sEMG accurately predicts the timing of the phase changes of the motion, at which the joint angles or motion capture positions change significantly.

Fig. 12. Countermovement jump experimental results of S1 and S3 segmented by the HHMM assuming five motion phases. Color index for (a) & (c): 1: rest1; 2: rest2; 3: countermovement; 4: propulsion; 5: land; for (b): 1: rest1; 2: rest2; 3: countermovement; 4: idle & land; 5: propulsion. (a) & (b) All segmentation results of the S1 and S3 countermovement jump motions, respectively. Temporal alignment is t = 0 when the vertical position of the clavicle is maximized. (c) One selected trial of the S1 countermovement jump experiment. The first, second, and third graphs show the vertical position of the clavicle, the three joint positions, and the sEMG, respectively; the vertical black lines denote the motion phase transitions, with the integer index of the destination phase inserted in the first and second graphs.
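The hierarchical inference used throughout these analyses decodes a motion type and a motion phase at once. As a minimal sketch of that idea, and explicitly not the authors' formulation, the snippet below flattens per-type phase transition matrices into a single Markov chain with a small type-switching probability and runs a standard Viterbi pass; the per-state log-likelihoods would come from the per-phase emission models (e.g., the MoG fitted during motion labeling). All probabilities and shapes are assumptions.

```python
# Minimal sketch, not the authors' implementation: two-layer HHMM decoding by
# flattening (motion type, motion phase) pairs into one Markov chain.
import numpy as np

def flatten_hhmm(phase_trans, switch_p=1e-3):
    """phase_trans: list of per-type phase transition matrices (row-stochastic)."""
    sizes = [A.shape[0] for A in phase_trans]
    S = sum(sizes)
    T = np.full((S, S), switch_p / S)            # small cross-type switching mass
    ofs = np.cumsum([0] + sizes)
    for k, A in enumerate(phase_trans):
        T[ofs[k]:ofs[k + 1], ofs[k]:ofs[k + 1]] = (1.0 - switch_p) * A
    return np.log(T / T.sum(axis=1, keepdims=True))  # renormalize rows, log space

def viterbi(log_emit, log_trans):
    """log_emit: (n, S) per-sample state log-likelihoods; returns best state path."""
    n, S = log_emit.shape
    delta = np.full(S, -np.log(S)) + log_emit[0]     # uniform initial distribution
    back = np.zeros((n, S), dtype=int)
    for t in range(1, n):
        scores = delta[:, None] + log_trans          # (prev, next) path scores
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_emit[t]
    path = [int(delta.argmax())]
    for t in range(n - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]                                # flattened (type, phase) ids

# Example: two motion types with 4 cyclic phases each, random emissions.
A = 0.9 * np.eye(4) + 0.1 * np.roll(np.eye(4), 1, axis=1)
path = viterbi(np.random.randn(200, 8), flatten_hhmm([A, A]))
```

Dividing each flattened id by the per-type phase count recovers the motion type (first layer), and the remainder recovers the phase (second layer), matching the two-layer error analysis above.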
Figure 11 summarizes the results of S4 for the sit-to-stand motion. The segmentation result at the top covers the whole dataset, presented similarly to Fig. 8, and the subsequent graphs show the motion capture, joint angle, and sEMG data, similarly to Fig. 10. As before, when the participants sit in a chair or stand up, the motion phases can be correlated with a physical meaning through the measured joint angles. In particular, in the sit-to-stand motion, a change in the hip angle can be observed in addition to the change in the knee angle, which characterizes how the upper body is used when sitting down and standing up. The segmentation into motion phases can be observed to follow this change in hip angle.

Finally, Fig. 12 shows the results of the countermovement jump for two participants. The individual motions are discussed more comprehensively in [20]. However, it is noteworthy that the results shown here are inferred not for one motion in isolation but for a variety of motion types simultaneously. In this case, the motion phases differ slightly between the two participants even though both were set to constitute this motion with the same five phases. This shows that the proposed method distinguishes the use of different muscles across participants, even when they perform the same motion. This difference may matter all the more here, because this motion requires more powerful muscle contraction forces than any of the other motions.

IV. CONCLUSIONS

A hierarchical segmentation algorithm was proposed that classifies the types and phases of motion simultaneously from sEMG data. Studies using other sensors have shown that lower-limb motions are continuous and that numerous human motions can be treated as sequences of motion phases. Hence, we proposed a method to characterize a continuous motion as a series of motion phases and to classify the type and phase of motion simultaneously using sEMG data. The proposed algorithm is based on the HHMM, and its learning and inference methods were explained. The experiment involved four types of motion (walking, running, jumping, and sit-to-stand) for five able-bodied participants. The experimental results verified, both qualitatively and quantitatively, that the motion type and phase could be distinguished successfully by the proposed hierarchical method. In addition, the proposed simple learning method for the HHMM achieved higher classification accuracy than the other supervised learning methods while consuming much less training time. We also examined whether the motion phases segmented from the sEMG data were physically consistent with the actual measured motion data. Hence, we confirmed that the proposed method achieved its objectives. It is also promising that the proposed algorithm revealed not only changes in the motions of the same participant but also differences between participants performing the same type of motion.

REFERENCES

[1] D. Kulic, W. Takano, and Y. Nakamura, "Incremental learning, clustering and hierarchy formation of whole body motion patterns using adaptive hidden Markov chains," The International Journal of Robotics Research, vol. 27, no. 7, pp. 761-784, 2008.
[2] M. A. Simão, P. Neto, and O. Gibaru, "Unsupervised gesture segmentation by motion detection of a real-time data stream," IEEE Transactions on Industrial Informatics, vol. 13, no. 2, pp. 473-481, 2017.
[3] D. Weinland, R. Ronfard, and E. Boyer, "A survey of vision-based methods for action representation, segmentation and recognition," Computer Vision and Image Understanding, vol. 115, no. 2, pp. 224-241, 2011.
[4] Q. V. Le, W. Y. Zou, S. Y. Yeung, and A. Y. Ng, "Learning hierarchical invariant spatio-temporal features for action recognition with independent subspace analysis," in Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2011, pp. 3361-3368.
