Combined Optimization of Gripper Finger Design and Pose Estimation Processes for Advanced Industrial Assembly

Frederik Hagelskjær¹, Aljaž Kramberger¹, Adam Wolniakowski¹,², Thiusius Rajeeth Savarimuthu¹ and Norbert Krüger¹

¹Maersk Mc-Kinney Moller Institute, University of Southern Denmark, Odense, Denmark. {frhag, alk, adwol, trs, norbert}@mmmi.sdu.dk
²Faculty of Mechanical Engineering, Bialystok University of Technology, Bialystok, Poland.

Abstract: Vision systems are often used jointly with robotic manipulators to perform automated tasks in industrial applications. Still, the correct set-up of such workcells is difficult and requires significant resources. One of the main challenges when implementing such systems in industrial use cases is the pose uncertainty introduced by the vision system, which has to be handled by grasping. In this paper, we present a framework for the design and analysis of optimal gripper finger designs and vision parameters. The proposed framework consists of two parallel methods which rely on vision and grasping simulation to provide an initial estimate of the uncertainty compensation capabilities of the designs. In case the compensation is not feasible with the initial design, an optimization process is introduced to select the optimal pose estimation parameters and finger designs for the presented task. The proposed framework was evaluated in dynamic simulation and implemented in a real industrial use case.

I. INTRODUCTION

The establishment of a robotic solution for a pick-and-place action frequently requires an amount of set-up time that makes robot solutions commercially unattractive. This holds in particular when vision is used to localize the object before grasping [1]. On the other hand, vision is attractive compared to hardware solutions used prior to grasping, such as bowl feeders or magazines, due to the high costs these involve. One of the problems in the use of vision is that pose uncertainties are introduced by factors such as sensor noise, calibration imprecision, as well as deviations in model fitting in the actual pose estimation process. The usual way of compensating for these errors is to design gripper fingers that are able to align the object even if it is grasped imprecisely. This is usually achieved by designing specific cut-outs in the fingers.

Industrial tasks, especially the assembly of precise production parts with low manufacturing tolerances, require high-precision and repeatable grasping of workpieces while they are handled by the robot. In this work, we tackle one of those challenges and solve the part handling task for the Kendrion workpiece shown in Fig. 1 (objects on the left) and Fig. 4. Normally such parts arrive in the workcell in a defined position, e.g., placed in fixtures. In our use case, the position of the part was semi-defined in the beginning (the object is placed in a known pose on the table, but can move and rotate freely on the table), and the vision system therefore needs to determine its location accurately.

Fig. 1. On the left: vision system parameters (the angle between the object templates used to establish the piece orientation) control the vision system uncertainty (red area) at the expense of on-line computation time (in the middle).
On the right: off-line gripper optimization finds finger design parameters such that the gripper alignment capability (blue area) can compensate for the vision uncertainty (red area).

In industrial set-ups, the purpose of a grasping action is often to precisely place the workpiece in a defined position, such as in a fixture or a production machine. In this type of action, the accuracy of the object detection is less important than the precise pose of the object grasped by the gripper fingers. In this paper, we use the term uncertainty to describe the noise of the pose detection performed on the object.

Given these conditions, the problem of finger design can be formulated as finding the right geometry that achieves the optimal robustness to compensate for an object's pose uncertainty. An ideal gripper would be able to grasp an imprecisely placed object such that it still ends up in the expected pose. We refer to the magnitude of the uncertainty that the gripper fingers can compensate for as their alignment capability.

An additional problem is to estimate the actual amount of uncertainty, introduced by vision, that the gripper design needs to compensate for. This often means that many trial-and-error tests are required until a suitable solution is found. In this paper, we formalize the imprecision of the vision solution as well as the alignment capabilities of the gripper in simulation. With the presented framework it is possible to ensure, prior to implementation in the real world, that both components fit together (see Fig. 1). The left part of the figure illustrates the influence of the vision system's parameters, and of the time invested in computing the precise pose, on the uncertainty level of the pose estimate. This computation is done on-line and, therefore, the runtime cost has to be paid every time the pose estimation and grasping actions are executed. The right part of the figure illustrates how the gripper finger design optimization process increases the uncertainty compensation for the object pose estimation. The finger design optimization is performed off-line, whereas for the vision, if more computation time is invested during the workcell design, a smaller on-line pose estimation runtime cost can be achieved during the execution.

The vision set-up parameters influence the magnitude of the pose estimation uncertainty. For template matching techniques, in particular, the quantization of the template matching determines the precision that can be achieved. In general, a higher resolution leads to a higher precision, however at the cost of an increased runtime of the algorithm, limiting the feasibility of the solution (see Fig. 1, the leftmost arrow). A higher uncertainty of the vision system can be compensated for by further optimizing the finger design as done in [2], which requires off-line computation. Thus, the selection of both appropriate vision and gripper design parameters needs to be balanced for optimal results, which usually requires a large number of experiments. Our approach allows us to ensure that the pose estimation setting and the alignment capabilities are in an appropriate relation by simulating both, thereby saving the time-expensive trial-and-error experiments that make robot solutions very costly today.
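To make the quantization trade-off described above concrete, consider a back-of-the-envelope illustration (ours, not a derivation from the paper): with a rotation quantization step Δθ, a template matcher must evaluate

    N = 360° / Δθ

templates per query, while the worst-case rotation error caused by quantization alone is bounded by Δθ/2. Halving Δθ therefore doubles the on-line matching workload but also halves this error bound; this is precisely the runtime-versus-uncertainty balance that the proposed framework trades off against the gripper's alignment capability.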
II. STATE OF THE ART

Typically, in industrial practice, the design of gripper fingers is the result of iterated testing and improvement of an original shape proposed by an experienced engineer. This requires substantial resources and causes problems for Small and Medium Enterprises (SMEs), which often deal with small-batch-size production relying on a rapid change of the production profile. Thanks to the improvements in Computer Aided Design (CAD) and Manufacturing, numerous tools are emerging that assist in the design of particular robotic workcell elements, including gripper fingers. One such tool is EGRIP [3], developed by Schunk GmbH.

Recently, a new trend is emerging in which the design of elements for robotic workcells is performed using simulation. In [4], a simulation framework is used to optimize the pick-and-place action parameters and the workcell configuration for handling soft-body materials. In [5], a framework is introduced which allows for fast and efficient finger-tip design using Computer Aided Design methods and kinematic simulation. This framework was later extended in [6] to support the design of multi-object grasping fingers. In our previous work [2], we presented a framework which uses dynamic simulation for the purpose of improving the gripper finger design to compensate for vision pose estimation uncertainties.

Setting up a new visual pose estimation system consists of several consecutive steps: sensor selection, algorithm selection, and parameter tuning [7]. These steps are usually performed manually and adjusted until performance is satisfactory. The process of parameter tuning is time-consuming and does not result in any analysis of the capabilities. To decrease the set-up time and provide an analytical measure of performance, the parameter tuning can be performed in simulation. Jørgensen et al. [8] and Iversen et al. [9] presented simulation-based pose estimation optimization processes; however, their work deals with camera placement and pose estimation based on 3D data, while in our work we optimize the parameters of a 2D template matching process. The work of Jørgensen et al. [8] performs pose estimation with an object placed on a table, and the camera position is optimized to return the highest number of detections. Iversen et al. [9] also optimize the position of the camera with respect to the object, but do not perform a full pose estimation; instead, the camera is positioned to optimize the precision achieved by the Iterative Closest Point (ICP) algorithm. In our previous work [1], the optimization of the internal parameters of vision algorithms has shown good results, although it was not done in simulation: Bayesian optimization is used to tune the algorithm parameters to increase detection. A significant difference between the previously mentioned approaches and ours is that we have an adjustable parameter which addresses both uncertainty and runtime.

In this paper, we present a novel framework for optimizing both gripper and vision for a complete pick-and-place action. To our knowledge, this is the first approach wherein both the alignment capabilities of the gripper and the precision of the vision system are optimized concurrently to accommodate the presented task requirements.

III. SYSTEM OVERVIEW

In this section, we describe the proposed framework for finger design in combination with pose estimation parameter design, depicted in Fig. 2. Both methods can be treated separately, with a common integration process when dealing with real use cases in production facilities.

Fig. 2. Graphical representation of the workflow for the finger design and pose estimation framework.

The input to the individual methods is represented as a combination of parameters describing the geometry and physical features of the objects for which the fingers and the vision system are to be designed. Furthermore, additional information on the environment, e.g., camera parameters, object placement in the workcell, etc., is introduced to the design procedure.
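As a reading aid, these shared inputs could be bundled as in the minimal Python sketch below; the field names are our assumptions for illustration and do not appear in the paper:

    from dataclasses import dataclass

    @dataclass
    class TaskParameters:
        """Hypothetical container for the shared task inputs of Fig. 2."""
        object_model: str         # path to the CAD model used by both methods
        grasp_pose: tuple         # nominal grasp pose on the object
        camera_placement: tuple   # camera pose in the workcell
        camera_calibration: dict  # intrinsics/extrinsics of the selected camera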
Our method consists of three subparts:

1) Finger design: The initial requirement of the method is to determine the proper gripper base (actuator). There are several options to choose from, e.g., pneumatic, hydraulic, or servo-electric, whose specific properties must comply with the task requirements. In the second step, the base shape (length, width, and height) of the fingers is determined. Based on this shape, the specific features in relation to the object geometry are designed. A basic description of the design process using the imprint method [10] and the parameterized method [2] is given in Section IV-B. After the fingers are constructed, their performance is evaluated in several simulation-based grasping experiments. The output of the evaluation determines how good the alignment properties of the finger design are.

2) Object detection: In the first step, the appropriate camera must be selected to comply with the task requirements. In the second step, the position of the camera in the workspace of the robot must be determined. The installation position is crucial to get the best object detection performance and to cover the largest possible area of the robot's workspace. When the initial requirements are set, the object detection parameters, e.g., template matching, precision, and runtime, are determined and their performance is evaluated in simulation. A description of the detection procedure is given in Section IV-A. The output of the simulation-based evaluation gives the uncertainty level of the vision system, based on the design parameters. If the alignment capabilities of the finger design can compensate for the vision uncertainty, or vice versa, the combined solution is ready to be implemented in the real set-up. If this criterion is not met, both sets of parameters are optimized (dashed block in Fig. 2).

3) Parameter optimization: If the outputs of the design evaluation do not meet the task requirements, the parameters must be optimized to achieve better performance. In the finger design optimization process, we can optimize the finger design features to the extent that the new optimized design can compensate for the pose estimation uncertainties. On the other hand, if the initial design of the fingers satisfies the task requirements, the pose estimation is optimized in terms of higher precision versus runtime. This process is repeated until a good trade-off between the performance of both methods is achieved; a sketch of this outer loop is given below.
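The following minimal Python sketch summarizes the outer loop as we read it from Fig. 2; it assumes that the alignment capability and the vision uncertainty can be compared as scalar magnitudes (e.g., degrees of rotation), and all callables are hypothetical stand-ins for the simulation-based steps described above, not the authors' code:

    def combine_designs(finger_params, vision_params,
                        eval_alignment, eval_uncertainty,
                        optimize_fingers, refine_vision, max_iter=20):
        for _ in range(max_iter):
            alignment = eval_alignment(finger_params)      # grasping simulation
            uncertainty = eval_uncertainty(vision_params)  # vision simulation
            if alignment >= uncertainty:
                # The fingers can compensate for the vision uncertainty;
                # the combined solution is ready for the real set-up.
                return finger_params, vision_params
            # Otherwise tighten both sides of the trade-off: improve the
            # finger design off-line and/or spend more on-line runtime on
            # a finer rotation quantization to reduce the uncertainty.
            finger_params = optimize_fingers(finger_params, uncertainty)
            vision_params = refine_vision(vision_params)
        raise RuntimeError("no feasible finger/vision combination found")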
With this framework, we ensure that the shape of the fingers will compensate for the vision pose estimation uncertainties under any environmental condition of the proposed task.

IV. METHODS

A. Vision system

Object pose estimation is performed with our previously developed pose estimation system [11]. The system is designed for industrial pose estimation and returns very precise and robust object poses. This system needs a semi-defined pose (object placed on the table, but able to move and rotate freely) and thus has limitations. We decided to use this system, as opposed to 2D or 3D feature matching systems [12], [1] which provide full 6D poses, because it was incorporated and benchmarked by the SDU Robotics team at the World Robot Summit Assembly Challenge (WRC2018) [13] and produced accurate and reliable pose estimation results.

In order to achieve very high precision, the system generates templates online and matches them locally. The rotation is estimated by matching the templates corresponding to each candidate rotation. Increasing the number of generated templates will therefore increase the algorithm runtime while decreasing the rotation uncertainty, see Fig. 3. Both runtime and uncertainty should be kept at a minimum; because these two parameters are dependent, a balance must be found depending on the task specifications. Determining the system runtime as a function of the number of templates is a simple task, as the template generation can be performed independently of the task scene. Determining the uncertainty of the real-world system, however, would require an extensive dataset with precise labeling. In this paper, we build a simulation system to verify our pose estimation uncertainty automatically.

Fig. 3. Plot showing the resulting runtime (in seconds) and the standard deviation of the angle error for varying quantization of the rotation. As the runtime increases, the error decreases.
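As an illustration of the rotation-quantized template matching described above, the following minimal Python sketch (our own construction on top of OpenCV, not the authors' implementation; a real system would, e.g., mask the black borders that rotation introduces) shows how the quantization step directly sets the number of templates evaluated on-line:

    import numpy as np
    import cv2

    def estimate_rotation(scene_gray, template_gray, step_deg):
        """Return the best-matching rotation, quantized to step_deg degrees."""
        h, w = template_gray.shape
        center = (w / 2.0, h / 2.0)
        best_angle, best_score = 0.0, -np.inf
        # One template per quantization step: 360/step_deg templates in total.
        for angle in np.arange(0.0, 360.0, step_deg):
            rot = cv2.getRotationMatrix2D(center, angle, 1.0)
            rotated = cv2.warpAffine(template_gray, rot, (w, h))
            scores = cv2.matchTemplate(scene_gray, rotated, cv2.TM_CCOEFF_NORMED)
            if scores.max() > best_score:
                best_angle, best_score = angle, scores.max()
        return best_angle, best_score

With step_deg = 1.0 the loop evaluates 360 templates; with step_deg = 5.0 it evaluates only 72, trading rotational precision for runtime, which mirrors the behaviour shown in Fig. 3.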
1) Simulation of vision system: In our work, we use, but simplify, the simulation system presented by Jørgensen et al. [8] and Rossmann et al. [14]. Our synthetic images thus do not correspond completely with the real images. Instead, to ensure that the pose estimation performance in simulation matches that in the real world, a generalization term is added according to [15], where different types of Gaussian noise are added to the image to generalize methods trained on synthetic images. We found that our simulation system gives approximately the same pose estimation results without an exact rendering, thus requiring fewer parameters and a simpler set-up compared with [14].

Fig. 4. Comparison of a synthetic and a real image of the Kendrion object.

The three external simulation factors in the system are (1) the object, (2) the scene, and (3) the lighting. All three are difficult to obtain from the real world. Therefore, we perform simplifications as follows:

Object: As the pose estimation is based on CAD models, we use the same model as our object model. However, such objects are often metallic with specular surfaces, and the CAD model does not include color information. Thus, a single color intensity is chosen for the object surface.

Scene: Obtaining the full scene model with all its complexities is impossible, but as the pose estimation is in 2D, we can insert an image as the background. A standard textured background is chosen, i.e., the first instance of the Brodatz dataset. This enables us to determine the system's performance for unknown scenes.

Lighting: The last factor is the lighting condition. We include a single light behind the camera, as we want to include shadow errors in the model, but do not want to determine all possible light sources.

As the camera is calibrated, no distortion is present in the image. Finally, the position of the camera is obtained by placing a checkerboard on the table and
