Real-time monitoring of human task advancement

Riccardo Maderna, Paolo Lanfredini, Andrea Maria Zanchettin and Paolo Rocco
Politecnico di Milano, Dipartimento di Elettronica, Informazione e Bioingegneria, Piazza L. Da Vinci 32, 20133, Milano, Italy (e-mail: name.surname@polimi.it)

Abstract: In collaborative robotics applications, human behaviour is a major source of uncertainty. Predicting the evolution of the current human activity might be beneficial to the effectiveness of task planning, as it enables a higher level of coordination of robot and human activities. This paper addresses the problem of monitoring the advancement of human tasks in real time, giving an estimate of their expected duration. The proposed method relies on dynamic time warping to align the current activity with a reference template. No training phase is required, as the prototypical execution is learnt online from previous instances of the same activity. The applicability and performance of the method within an industrial context have been verified on a realistic assembly task.

I. INTRODUCTION

In recent years, there has been an ever-increasing interest in human-robot collaboration, allowing robots and humans to share the same environment and work closely with each other [1]. In industry, the spread of this new paradigm is fostered by the need for quick and flexible adaptation to production changes. A number of challenges arise that have to be solved to achieve effective collaboration. In addition to safety issues, which have been extensively studied, as in [2]-[5], monitoring of the environment is required to correctly allocate tasks and rapidly react to changes. In collaborative robotics applications, in particular, the biggest source of uncertainty is human behaviour. Monitoring and predicting the evolution of human activities allows for better scheduling of the robot's future operations.

The problem can be addressed at different levels, such as human task classification, future sequence prediction and monitoring of the present activity. [6] proposed a complete framework for human activity recognition with the aim of providing a human-machine interface for the integration of human workers in intelligent manufacturing systems. Instead, a Hidden Markov Model was trained in [7] to infer the current human activity as perceived by a mobile robot. In [8] and [9], task classification is achieved by early prediction of the target of a human reaching motion. [8] relies on an offline training phase to construct motion libraries that are then used for Bayesian classification in real time. Differently, Zanchettin and Rocco [9] exploited a model-based approach to generate trajectories associated with each available target location. Each new measurement is compared to the model-based prediction to update the probability associated with each goal. The method does not require a training data set and allows for the possibility that the human changes target. The problem of predicting the most likely future task of the human is addressed, for instance, in [10], where the human activity pattern is modelled with higher-order Markov Chains. Previously, Li et al. [11] proposed a framework for activity prediction based on causality relations and semantics. In [12], the authors used gaze information to monitor the worker's activity and interpret his/her future intention. A further level of interest is to determine the progress of an ongoing task in real time.
In [13], a probabilistic method for parsing complex tasks is presented. Such activities are modelled using trained Bayesian networks that can also account for operation variants. Kaczmarek et al. [14] developed a framework to support the worker during assembly tasks, where the advancement and the correctness of operations are assessed based on gesture recognition and assembly parts detection. A method to predict the duration of a repetitive activity through observation is described in [15]. This is done by tracking selected features to learn motion primitives and model their effect on the progress of the overall task. [16] developed a method to estimate the progress of human movement, even in the presence of occlusions, with the aim of obtaining an early classification of the human action and generating a corresponding robot trajectory for faster interaction. Using a probabilistic representation of motion primitives learned from demonstration, the likelihood of models with different durations is computed to select the one that best aligns with the current observations.

This work focuses on the real-time monitoring of human task advancement. This is of primary importance to give an accurate estimate of the expected duration of long and complex activities, which are characterized by high variability in their execution. The evolution of the ongoing activity is described by tracking the motion of the human and is compared with a prototypical template. Progress is then estimated using a dynamic time warping-based algorithm that is able to take into account changes in the execution pace. Differently from other approaches, such as [13], [15] and [16], the proposed method does not require any offline training phase, but learns online from previous repetitions of the same activity. Moreover, like [16], this work handles the possible lack of information in human monitoring that may occur during activity execution due to occlusions or tracking errors.

The remainder of the paper is organized as follows. Section II defines the problem and introduces the proposed approach. Section III briefly discusses how human motion is tracked to gather information for activity monitoring. The dynamic time warping-based algorithm for the estimation of human task advancement is described in Section IV. Finally, Section V presents the results of the verification experiments.

Fig. 1: Effects of high pace variability: based on the previous execution (left) the ET algorithm would predict that the current activity (right) is about to end in 500 ms, while it is being executed at a reduced pace.

II. PROGRESS-BASED ESTIMATION OF TASK DURATION

Estimating the duration of an ongoing activity, that is a finite sequence of human motions and primitive actions, can be done using collected data. In fact, in most scenarios, especially in an industrial setting, the human is expected to repeat a given task multiple times. Let $T$ be the set of past durations and $T_e$ the elapsed time from the start of the present activity to the current time instant.
A basic approach to obtain an estimate $T_D$ of the duration of the ongoing activity is to compute the conditional expectation of the past durations that are longer than $T_e$:

$$T_D(T_e) = E[\,T \mid T > T_e\,] \qquad (1)$$

Then, the remaining time to completion $T_R$ can be simply derived as:

$$T_R(T_e) = T_D(T_e) - T_e$$

However, the elapsed time is representative of the actual activity advancement only under the assumption of low pace variability. Instead, human task execution speed may vary considerably, also due to distraction or fatigue. Figure 1 exemplifies the effects of high variability on the elapsed time-based (ET) algorithm: based on the previously available duration and an elapsed time of 500 ms, one might conclude that the present activity is about to finish in 500 ms, while it is clear that it is being executed at a reduced pace and the actual duration will likely be higher. This is due to the fact that the ET algorithm relies only on data related to past executions, without exploiting additional information that can be extracted from the current iteration of the activity.

Differently, if an online estimate of the percentage of advancement $adv_\%(T_e)$ is available, we can derive the average speed and extrapolate to obtain a better estimate of the duration and of the remaining time to completion:

$$T_D(T_e) = \frac{T_e}{adv_\%(T_e)} \qquad T_R(T_e) = \frac{1 - adv_\%(T_e)}{adv_\%(T_e)}\, T_e \qquad (2)$$

To do so, a prototypical action, as well as a way to compare it with the current execution, are needed; both are described in Section IV.
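As a concrete illustration of the two estimators above, the following minimal Python sketch (illustrative only, not the authors' code; the function names and the fallback used when no past duration exceeds the elapsed time are assumptions) computes the elapsed-time estimate of eq. (1) and the progress-based estimate of eq. (2), with the advancement expressed as a fraction in (0, 1]:

```python
import numpy as np

def et_estimate(past_durations, elapsed):
    """Elapsed-time (ET) estimator, eq. (1): conditional mean of the past
    durations that are longer than the elapsed time, and the remaining time."""
    longer = [t for t in past_durations if t > elapsed]
    # Assumed fallback: if every past execution was shorter, use the elapsed time itself.
    t_d = np.mean(longer) if longer else elapsed
    return t_d, t_d - elapsed

def progress_estimate(elapsed, progress):
    """Progress-based estimator, eq. (2): extrapolate the average speed
    from the observed advancement fraction in (0, 1]."""
    t_d = elapsed / progress
    t_r = (1.0 - progress) / progress * elapsed
    return t_d, t_r

# Past executions took 10-14 s; 6 s have elapsed but only 40% of the task is done.
durations = [10.2, 11.5, 12.1, 13.8]
print(et_estimate(durations, elapsed=6.0))          # ET predicts ~11.9 s in total
print(progress_estimate(elapsed=6.0, progress=0.4)) # extrapolates 15 s in total
```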
III. TRACKING OF HUMAN FEATURES

Real-time information from the workspace is required to monitor the evolution of the present activity: this can be done by tracking the human operator's movements. The tracked features must be selected to obtain measurements that are informative and robust within the range of possible activities. Considering the case of industrial assembly tasks, it is meaningful to focus on the motion of the operator's hands, as the rest of the body remains largely stationary. Several methods for tracking human hands have been proposed in the literature. However, most of them underperform when the two hands are interacting, as happens during assembly activities (see e.g. [17]-[19]), or require an excessive amount of computing time [20], thus not being suitable for real-time tracking.

Based on experiments on diverse human actions, it has been decided to track the Cartesian positions of the wrist and index finger of each of the operator's arms. Fingers provide more informative movements but are prone to occlusions, whereas wrists give more robust data. In the implementation, an RGB-D camera (Microsoft Kinect) was used. Wrist data are directly provided by the Kinect tracking software, while finger positions are extracted from images with the help of coloured markers placed on the operator's hands or gloves (see Figure 2).

Fig. 2: Operator's tracked features: wrists (blue) and index fingers (red).

The progression of an activity is thus described by a 12-dimensional time series built from the coordinates of the four features in Cartesian space. To compare different repetitions of the same activity, it would be beneficial to have normalized signals with the same bias and variance. However, the algorithm proposed in Section IV runs in real time, so that information on the signals as a whole is not available. One could express the motion of the features in terms of velocity, in order to remove the position bias. However, different execution speeds would result in different amplitudes. Moreover, a wavering behaviour of the operator would easily introduce undesired spikes in the velocity profile that would compromise time series comparison. On the other hand, in industrial settings the workspace is usually well structured, so that the human motion is constrained by the position of product parts and assembly areas. For this reason, a description of task evolution in terms of feature positions attains better performance.

IV. TASK ADVANCEMENT MONITORING

In Section II we stated that computing a real-time estimate of activity advancement would be beneficial for monitoring performance. This paper proposes a modified version of the Dynamic Time Warping (DTW) algorithm for this purpose. DTW is a widely used algorithm that measures the similarity between two temporal sequences (see [6], [21]-[23] for examples). In its basic form, it calculates the optimal match between two given sequences and is robust to nonlinear variations in the time dimension. This allows comparing two series regardless of changes in their speed. Modifications, called Open-Ended DTW [24], already exist that allow the comparison of an incomplete input time series with a complete reference, obtaining as output the fraction of the reference that is matched by the input up to the current time instant. The proposed approach builds on the existing algorithms to cope with the presence of occlusions and uninformative human movements during task execution, in order to obtain a more robust estimate of activity progress. Moreover, a method to update the reference activity template online is suggested.

A. The dynamic time warping-based algorithm

The underlying idea of the DTW algorithm is to locally deform the time axis of the input sequence in order to associate each of its points with one point of the reference sequence in a way that minimizes the cumulative distance between the aligned points. Let $X = (x_1, \dots, x_i, \dots, x_N)$ be the input sequence associated with the partial execution of the present activity and $Y = (y_1, \dots, y_j, \dots, y_M)$ the reference sequence that describes the activity template. The algorithm builds an N-by-M matrix where each element $(i,j)$ stores the cumulative distance $D(i,j)$ of the optimal warping for the subsequences $X^{(i)}$ and $Y^{(j)}$, which is computed as:

$$D(i,j) = d + s(i,j), \qquad d = \min\{D(i-1,j),\ D(i,j-1),\ D(i-1,j-1)\} \qquad (3)$$

where $s(i,j)$ stands for the Euclidean distance between $x_i$ and $y_j$, and the following constraints are considered to handle limit cases:

$$D(0,0) = 0, \qquad D(i,0) = \infty \ \ \forall i, \qquad D(0,j) = \infty \ \ \forall j$$

A global similarity measure between the two sequences is given by $DTW(X,Y) = D(N,M)$, which considers both the input and the reference series in their entirety. However, if the input sequence is only representative of a partial execution of the template, as happens in real-time activity monitoring, a correct comparison between the input and the best matching forepart of the reference must be found. The open-ended similarity is thus defined as:

$$OE(X,Y) = \min_{j=1,\dots,M} DTW(X, Y^{(j)}) = \min_{j=1,\dots,M} D(N,j)$$

where $Y^{(j)}$ is the reference sequence truncated at the j-th element and $D(N,j)$ denotes the elements of the last column of the DTW matrix.
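The open-ended comparison just described can be sketched as follows, assuming the partial input X and the reference Y are NumPy arrays of shape (N, d) and (M, d), with d the number of tracked coordinates (12 in this work); this is an illustrative implementation, not the authors' code:

```python
import numpy as np

def oe_dtw(X, Y):
    """Open-ended DTW: fill the cumulative distance matrix with the
    recurrence of eq. (3) and return it together with the open-ended
    similarity OE(X, Y) = min_j D(N, j)."""
    N, M = len(X), len(Y)
    D = np.full((N + 1, M + 1), np.inf)   # index 0 holds the boundary conditions
    D[0, 0] = 0.0
    for i in range(1, N + 1):
        for j in range(1, M + 1):
            s = np.linalg.norm(X[i - 1] - Y[j - 1])            # Euclidean distance s(i, j)
            D[i, j] = min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1]) + s
    return D, D[N, 1:].min()              # best match against a truncated reference Y^(j)
```

The entries $D(N,j)$ computed here are what the advancement extraction of eq. (4) below operates on.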
One can find the index $j^*_N$ of optimal truncation of the reference sequence, given an input of length N, and the associated percentage of advancement as:

$$j^*_N = \arg\min_{j=1,\dots,M} D(N,j), \qquad adv_\% = 100\,\frac{j^*_N}{M} \qquad (4)$$

Then, the warping path is a function $\varphi(k)$, $k = 0,\dots,N$, that associates each point of the input sequence with the index of optimal truncation of the reference sequence given the subsequence $X^{(k)}$. Since human activity can only progress forward, the warping path is constrained to be a non-decreasing function:

$$\varphi(k) = \max\{j^*_k,\ \varphi(k-1)\} \qquad (5)$$

Figure 3 shows an example of a DTW matrix and compares the optimal warping paths obtained in the case of incomplete input using global DTW and the open-ended version. The OE-DTW is always able to find the best available partial match, while global DTW forces meaningless matches by considering the complete reference.

Fig. 3: DTW matrix for an incomplete input sequence X with the OE-DTW optimal warping path highlighted (blue). Warping paths obtained using global DTW when 5 and 9 input samples are available are indicated in red for comparison. Global DTW always considers the complete reference, leading to a higher cumulative distance and producing meaningless associations, especially when short input sequences are available.

In order to achieve good performance, DTW algorithms require a consistent stream of measurements at runtime. However, loss of data during the monitoring of human activity may occur due to tracking errors or feature occlusions. Two cases are possible: either only some of the signals are missing, or all features are unavailable at the same time. Usually, tracking errors translate into isolated lost samples of all signals, while occlusions are related to a subset of features but last for an indefinite time interval. If a partial sample is available, the algorithm can continue to work in the standard way, provided that the Euclidean distance $s(i,j)$ in equation (3) is computed using only the available dimensions. Conversely, the presence of full occlusions is critical for the DTW algorithm, as shown in Figure 4a. Standard DTW updates the cumulative distance $D(i,j)$ using equation (3), searching for the minimum distance $d$ among the neighbours of the point $(i,j)$. When an occlusion occurs, input samples are discarded, so that the first points before and after the occlusion are considered to be consecutive, which leads to a wrong warping of the signals. An occlusion handling mechanism has therefore been designed that takes into account the occlusion length and is able to retrieve the correct alignment, as depicted in Figure 4b. The idea is to allow for many feasible matches during the occlusion and to retrieve the correct one when new measurements are available. In fact, it is possible to extend the exploration space according to the occlusion length $L_{occ}$ to take into account the possibility for points of the reference to be associated with occluded input points. The modified rule reads as follows:

$$d = \min\{D(i-1,j),\ D(i,j-1),\ D(i-k,j-1)\}, \qquad k = 1,\dots,L_{occ} \qquad (6)$$

In this way, all feasible alignments are taken into account as equally probable during the occlusion. The principle is illustrated in Figure 5, which shows an example of a DTW matrix and the optimal warping path in the presence of an occlusion.
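Building on the sketch above, the following hypothetical helpers show one way the partial-sample distance, the occlusion-extended step of eq. (6) and the advancement extraction of eqs. (4)-(5) could be implemented; missing feature dimensions are assumed to be marked with NaN, and locc is assumed to be supplied by the caller for the first sample that follows a full occlusion (0 otherwise):

```python
import numpy as np

def masked_distance(x, y):
    """Distance s(i, j) computed only over the currently available
    (non-NaN) feature dimensions, as done for partially occluded samples."""
    mask = ~np.isnan(x)
    return np.linalg.norm(x[mask] - y[mask]) if mask.any() else 0.0

def dtw_step(D, i, j, s, locc=0):
    """One update of eq. (3); when the current sample follows a full
    occlusion of locc samples, the candidate set is widened as in eq. (6)
    (k = 1 corresponds to the usual diagonal step)."""
    candidates = [D[i - 1, j], D[i, j - 1], D[i - 1, j - 1]]
    for k in range(2, locc + 1):
        if i - k >= 0:
            candidates.append(D[i - k, j - 1])
    D[i, j] = min(candidates) + s

def advancement(D, i, M, prev_adv=0.0):
    """Eqs. (4)-(5): optimal truncation index taken from the entries D(i, j)
    and the advancement percentage, kept non-decreasing as the input grows."""
    j_star = int(np.argmin(D[i, 1:])) + 1
    return max(100.0 * j_star / M, prev_adv)
```

Whether the occluded instants are skipped entirely or kept as zero-cost rows is not fully specified above; the sketch assumes they are skipped and that only the first step after the occlusion is widened.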
Since the aim of the proposed algorithm is to monitor the progress of human activity, the inputs to the DTW are time series describing the motion of the operator's tracked features, as described in Section III. However, movements of the operator's hands are not bound to be synchronous. Thus, the relative temporal alignment among signals is not maintained across repetitions of the same activity. For instance, if the operator has to reach for two different objects, he/she can grasp both objects simultaneously, in one order, or in the reverse order. This makes it difficult to describe the task with a single time series built from data coming from opposite sides of the body and to compare different executions with each other. The solution is to run two separate instances of the DTW algorithm, each of which receives as input the motion of the features related to one side of the body, that is, the positions of the wrist and the corresponding index finger. Then, the results are fused to obtain an estimate of the overall activity advancement using a weighted average.
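The weighting scheme is not specified in the text above, so the fusion below is only a placeholder sketch with equal weights; the remaining time then follows eq. (2) applied to the fused advancement:

```python
def fused_advancement(adv_left, adv_right, w_left=0.5, w_right=0.5):
    """Weighted average of the advancement estimates (in percent) produced by
    the two per-arm DTW monitors. Equal weights are a placeholder assumption."""
    return (w_left * adv_left + w_right * adv_right) / (w_left + w_right)

def remaining_time(elapsed, adv_percent):
    """Eq. (2) applied to the fused advancement, given in percent."""
    frac = adv_percent / 100.0
    return (1.0 - frac) / frac * elapsed

# Left-arm monitor reports 42%, right-arm 50%, with 30 s elapsed.
adv = fused_advancement(42.0, 50.0)
print(adv, remaining_time(30.0, adv))   # about 46% done, ~35 s remaining
```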