IROS 2019 Conference Proceedings, Paper 0999
Control and Perception Framework for Deep Sea Mining Exploration

Carlotta Sartore 1,2, Ricard Campos 3, Josep Quintana 3, Enrico Simetti 1,2, Rafael Garcia 3, Giuseppe Casalino 1,2

Abstract: This paper presents the control and perception framework used in the ROBUST EU Horizon 2020 project, aimed at integrating different technologies to develop an autonomous underwater robotic system for exploring deep sea mining sites. The vehicle first collects data over the initial zone of interest and selects the sub-area most likely to contain a manganese nodule field; it then carries out a low-altitude survey. When a possible nodule is detected by the cameras, the vehicle lands on the seafloor, enabling the subsequent fixed-base manipulation designed to perform the nodule analysis. The work reports the implemented control and perception architecture and the preliminary pool experiment results.

I. INTRODUCTION

In the past few years the interest in deep sea mining has rapidly increased, and research on the employment of robotic platforms for the exploration of extreme environments, as in [1], has been carried out. In particular, the European Community is interested in having a new, sustainable and undistorted source of raw materials, as most providers are currently found outside Europe [2]. Two major issues affect a rapid and robust exploration and identification of deep sea mining sites: i) quickly mapping the seabed using high resolution sensors covering a large area, and ii) analyzing the element composition within the mining site. The state of the art requires that a Remotely Operated Vehicle (ROV) is deployed from a surface support vessel (SSV). The ROV is teleoperated to collect samples, which are later analyzed onshore. A recent experiment, although performed for an oil [...]

More details about the definition of the internal and external activation functions can be found in [6]. A control state machine, depicted in Fig. 2, has been implemented to sequence the actions described above.
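The sequencing logic of such a state machine can be sketched as a small table-driven automaton. The action names below (A_a, A_g, A_p, A_d, A_j, A_t) follow the paper, but the triggering events and transitions shown are simplified assumptions, not the exact machine of Fig. 2:

```python
# Hypothetical transition table: current action -> (next action, triggering event).
TRANSITIONS = {
    "idle": ("A_a", "go_to_command"),            # align toward the target
    "A_a":  ("A_g", "aligned"),                  # reach the goal, keep position
    "A_g":  ("A_p", "land_command"),             # landing preparation
    "A_p":  ("A_d", "tasks_within_thresholds"),  # descent movement
    "A_d":  ("A_j", "landed_and_arm_out"),       # reach the pre-inspection pose
    "A_j":  ("A_t", "pre_inspection_reached"),   # approach the nodule
}

class ControlStateMachine:
    """Sequences the control actions; A_t is terminal in this sketch."""

    def __init__(self):
        self.action = "idle"

    def step(self, events):
        # Advance only when the current action's triggering event is active.
        if self.action in TRANSITIONS:
            next_action, trigger = TRANSITIONS[self.action]
            if trigger in events:
                self.action = next_action
        return self.action

fsm = ControlStateMachine()
fsm.step({"go_to_command"})  # idle -> A_a
fsm.step({"aligned"})        # A_a -> A_g
```

In the real system the events would be derived from the task activation values discussed later, rather than passed in as symbolic strings.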
Concerning the dynamic control layer, both a PI controller and an adaptive controller [8] have been implemented, with the possibility to choose at run time which one to apply.

B. Nodule Detection

The first task to be carried out autonomously by the ROBUST UVMS is to detect the nodule position while navigating. For this task the robot navigates close to the sea floor, acquiring images and processing them in real time. The nodule detection is performed using artificial neural networks (ANN), a biologically inspired computational model able to exceed the performance of previous forms of artificial intelligence in common machine learning tasks. In short, Convolutional Neural Networks (CNN or ConvNet) are a specific kind of ANN, primarily used to solve difficult image-driven pattern recognition tasks [9]. In this work we adapted the Darknet YOLO algorithm [10] to detect nodules in underwater environments. The You Only Look Once (YOLO) method [10] predicts what objects are present in an image and where they are located. A single convolutional network simultaneously predicts multiple bounding boxes and the class probabilities for those boxes; YOLO trains on full images and directly optimizes detection performance. This unified model has several benefits over traditional object detection methods. First, YOLO is extremely fast: it frames detection as a regression problem, without needing a complex pipeline, simply running the neural network on a new image at test time to predict detections. Also, YOLO reasons globally about the image when making predictions. Unlike sliding window and region proposal based techniques, YOLO sees the entire image during training and test time, so it implicitly encodes contextual information about classes as well as their appearance. Finally, YOLO learns generalizable representations of objects: when trained on natural images and tested on artwork, YOLO outperforms top detection methods such as DPM and R-CNN [11] by a wide margin.
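Detectors of this kind return a set of candidate boxes with confidence scores in a single forward pass. The post-processing below (confidence filtering followed by greedy non-maximum suppression, with the highest-confidence survivor used as the landing target) is a generic sketch under illustrative thresholds, not the authors' exact pipeline:

```python
import numpy as np

def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def detect_nodules(boxes, scores, conf_thr=0.25, iou_thr=0.45):
    """Keep confident, non-overlapping detections, best first."""
    order = np.argsort(scores)[::-1]      # highest confidence first
    keep = []
    for i in order:
        if scores[i] < conf_thr:
            break                          # remaining scores are even lower
        if all(iou(boxes[i], boxes[j]) < iou_thr for j in keep):
            keep.append(i)
    return keep  # keep[0] is the highest-confidence nodule
```

With more than one surviving detection, choosing `keep[0]` mirrors the confidence-based choice of the landing nodule described in the text.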
Since YOLO is highly generalizable, it is less likely to break down when applied to new domains or unexpected inputs.

We used nodule images acquired in pool experiments, and we modified the Darknet YOLO classifier to work as a single-class detector. The algorithm performs data augmentation in scale, hue, saturation and exposure over 2000 training images. We decided to fine-tune a tiny YOLO network, starting the training from the pre-trained weights provided by the algorithm developers. This tiny network keeps the weights below 40 MB, which is quite convenient for real-time execution on an embedded PC such as the NVIDIA Jetson TX2 present on the UVMS. In case more than one nodule is detected in an image during the survey, the robot chooses which one to land on based on the confidence of the detection. A sample of the detection can be seen in Fig. 3.

Fig. 3. An image from the real-time nodule detection showing the detection of two nodules.

C. Laser scanning and nodule 3D reconstruction

After the landing task, a scan of the laser scanner is triggered. The output of the laser scanning system is a cloud of points describing the geometry under its field of view with a resolution in the order of mm (see Fig. 4(a)). From this point set, our aim is to extract a single point on the nodule, reachable by the arm, together with an approaching direction. We start by downsampling the point set to get a manageable number of points, allowing a sample point to be returned within reasonable time after the scan. We divide the bounding box of the point set using a regular grid of fixed size (0.5 cm in our experiments, see Fig. 4(a)) and compute a representative point for each cell as the centroid of the points that fall in it. Then we segment the part of the point set corresponding to the nodules from that of the ground, taking advantage of the known geometry of the place where the preliminary tests are performed.
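The grid-based downsampling step (one centroid per occupied 0.5 cm cell) can be sketched with plain NumPy; this is an illustrative sketch, not the implementation running on the vehicle:

```python
import numpy as np

def grid_downsample(points, cell=0.005):
    """points: (N, 3) array in metres -> one centroid per occupied grid cell."""
    idx = np.floor(points / cell).astype(np.int64)       # cell index of each point
    _, inverse = np.unique(idx, axis=0, return_inverse=True)
    n_cells = inverse.max() + 1
    sums = np.zeros((n_cells, 3))
    counts = np.zeros(n_cells)
    np.add.at(sums, inverse, points)                     # accumulate per cell
    np.add.at(counts, inverse, 1.0)
    return sums / counts[:, None]                        # (M, 3) cell centroids
```

On the point set of Fig. 4 this kind of binning reduces roughly 422,000 raw points to a few thousand representatives, which is what makes the subsequent segmentation tractable in near real time.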
In this setting, a nodule is defined as a cluster of points not lying on the planar ground, that is, not on the bottom of the pool. We detect the ground points using a typical RANSAC [12] fitting of a plane model, with a RANSAC threshold of 1 cm. The remaining points, not on the ground, are clustered into spatially close sets using a greedy clustering technique [13]. Since more than one nodule can fall within the field of view of the laser, we select the largest cluster for sampling. From this point cloud representing the nodule we need to select a single sample point and an approaching direction.

Obviously, the presented nodule segmentation procedure is only valid in the test environment devised in this paper. For future sea trials, we plan to merge the information from the cameras to obtain a point set segmentation driven by the cameras. Thus, we envision using the nodule detection method described in Section III-B to know where the nodule is in the frame of the cameras, and then using the known geometry between the cameras and the laser frame to select only the 3D points that back-project within the detection footprint in both cameras.

Finally, in order to compute the sample point, we start by computing the centroid of the point cloud representing the nodule. Then we construct a segment joining the center of the laser reference frame and the computed centroid, and we select the point falling closest to that segment as the sample point. Since the laser points orthogonally to the ground, the selected point will be in a position accessible from the vertical direction by the arm. Thus, the approaching direction should be roughly perpendicular to the ground plane; consequently, we use the normal of the ground plane model previously found by RANSAC as the direction of approach. Once the goal sample point on the surface of the nodule and the direction of approach have been computed, we trigger the arm to reach the nodule. The approach of the arm to the nodule starts at a pre-inspection position located along the direction of approach.
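The sample-point choice above (the nodule point closest to the segment joining the laser origin and the nodule centroid, with the RANSAC ground normal as approach direction) can be sketched as follows; for simplicity the sketch measures distance to the supporting line rather than the bounded segment, and is illustrative rather than the on-board code:

```python
import numpy as np

def sample_point(nodule_pts, laser_origin, ground_normal):
    """Return (goal point on the nodule, unit approach direction)."""
    centroid = nodule_pts.mean(axis=0)
    d = centroid - laser_origin
    d /= np.linalg.norm(d)                            # laser-to-centroid direction
    rel = nodule_pts - laser_origin
    # distance of each point to the laser-origin -> centroid line
    dist = np.linalg.norm(rel - np.outer(rel @ d, d), axis=1)
    goal = nodule_pts[np.argmin(dist)]
    approach = ground_normal / np.linalg.norm(ground_normal)
    return goal, approach
```

Since the laser points straight down, the point nearest this line sits near the top of the nodule, which is exactly the region reachable vertically by the arm.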
This pre-inspection position is separated from the nodule by a given distance (20 cm in our experiments, see Fig. 4(b)).

IV. PRELIMINARY POOL EXPERIMENTS

In this section the results of the preliminary pool experiments are presented. A poster reproducing a seabed image was laid on the bottom of the pool, and sample manganese nodules were deployed on top of it in order to mimic a realistic environment. The vehicle x and y positions w.r.t. an inertial frame are given by the camera, via registration w.r.t. the collected poster image. The AHRS gives the depth of the UVMS CoM with a quantization of 0.1 m. Due to the low depth of the pool, it was not possible to rely on the DVL linear velocity outputs; therefore, second order derivative filters have been applied to the x, y and depth measures in order to obtain the linear velocity feedback. The roll, pitch and yaw measures are given by the AHRS; their derivatives have been filtered and multiplied by the related Jacobian to obtain the system angular velocities. Finally, the joint position feedback was given by the Arm Low Level Control (ALLC) at 100 Hz. The kinematic control layer was running at 10 Hz, the dynamic one at 100 Hz, and the position and velocity feedback were published at 10 Hz. The PI dynamic controller has been used. All the layers have been developed within the Robot Operating System (ROS) framework. The LIBS and proximity sensors were not yet available at that point of the project, so they were replaced by a dummy tool box having the same dimensions as the real one. For this reason it was not possible to know when the tool was actually touching the manganese nodule; therefore, a safety distance was kept between the two. The nodule position for landing has been computed by means of the nodule detection method presented in Section III-B. Due to the lack of a proper tracking procedure at the moment of the experiment, the nodule position was computed w.r.t. the inertial frame and stored once. That is, our method can detect nodules, but not identify them.
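The derivative filters used above to recover velocity feedback from position measures can be sketched as a filtered differentiator. The paper's filters are second order; the minimal sketch below discretizes a first-order filtered differentiator s/(tau*s + 1) with Tustin's method, and the time constant and rate are illustrative, not the values used on the UVMS:

```python
class DerivativeFilter:
    """Tustin discretisation of s/(tau*s + 1): a noise-robust differentiator."""

    def __init__(self, tau, dt):
        self.a = (2 * tau - dt) / (2 * tau + dt)   # pole of the discrete filter
        self.b = 2.0 / (2 * tau + dt)              # gain on the position increment
        self.y = 0.0
        self.x_prev = None

    def update(self, x):
        if self.x_prev is None:
            self.x_prev = x                        # avoid a spike on the first sample
        self.y = self.a * self.y + self.b * (x - self.x_prev)
        self.x_prev = x
        return self.y

# Differentiating a position ramp of slope 0.3 m/s sampled at 10 Hz
# converges to the true velocity of 0.3 m/s:
f = DerivativeFilter(tau=0.2, dt=0.1)
v = 0.0
for k in range(100):
    v = f.update(0.3 * 0.1 * k)
```

Cascading a further first-order low-pass stage would give the second-order behavior mentioned in the text, at the cost of additional phase lag.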
A tracking procedure is therefore required in order to detect and identify the same nodule in different images. However, since in the experiment only a single nodule was present, by simply publishing the stored value, computed w.r.t. the camera frame, for each detection in the image, it was possible to test the correct behavior of both the kinematic control and perception layers.

The vehicle was remotely placed in an initial position; then, at t = 86.39 s, the go-to command was given with target coordinates (3.2, 4.3, 1.3). Thus the system transits from an empty action, corresponding to the idle state, to A_a, and it performs the alignment towards the target.

Fig. 4. The full resolution point cloud provided by the laser (422,302 points) is shown in (a). In (b) we show the results of the different procedures described in Section III-C: the point set was downsampled (5,865 points); the points corresponding to the ground plane are shown in blue and the cluster of points describing the nodule geometry in green. The red dot with an arrow represents the approaching point and direction, respectively, while the red asterisk marks the goal sample point on the nodule.

Once the action is finished, the action is switched to A_g and the vehicle reaches the goal and keeps the position. At t = 121.69 s the landing starts: the current action is switched to A_p. As can be seen in Fig. 5(a), the total activation function of the laser frame alignment task switches from zero to one at t = 121.69 s, since the task has been activated. It is worth noticing that this task is an equality one; therefore its activation value does not change over time, unless the system switches to a new action where the task is not included. Meanwhile, Fig. 5(a) also depicts the total activation function of the distance task. Since this task is an inequality one, its activation changes over time depending on whether the control variables are within the defined thresholds.
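An inequality-task activation function of this kind is typically zero where the control variable satisfies its threshold, one where it clearly violates it, and a smooth ramp in between; the cosine ramp and the numbers below are illustrative assumptions in the spirit of [6], not the exact definition used in the paper:

```python
import math

def activation(x, threshold, delta):
    """0 for x <= threshold, 1 for x >= threshold + delta, smooth ramp between."""
    if x <= threshold:
        return 0.0
    if x >= threshold + delta:
        return 1.0
    s = (x - threshold) / delta
    return 0.5 * (1.0 - math.cos(math.pi * s))   # C1-continuous ramp

# e.g. a maximum-distance-to-nodule task with threshold 0.5 m, buffer 0.1 m:
# fully inactive at 0.4 m, fully active beyond 0.6 m.
```

The smooth transition is what allows tasks to be inserted into and removed from the priority hierarchy without discontinuities in the commanded velocities.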
At t = 157.90 s the distance task activation function is less than one: the control variables meet the desired requirements, and the current action is switched to A_d in order to perform the descent movement. During such a movement the distance-to-nodule task is still active, and its total activation reaches the value of one (full activation), but then it decreases again, fulfilling the task control objective before the vehicle lies on the seabed (Fig. 5(a)). Once the UVMS is at 0.1 m from the seafloor, the kinematic control layer triggers the Vehicle Low Level Control (VLLC) to manoeuvre the vehicle for the last part of the way down. During this movement only the upper thrusters are used, in order to prevent raising too much sand, which could interfere with the laser and camera operations in the real operative scenario. Once the vehicle has landed, the kinematic control layer triggers the VLLC to push on with the front thrusters, in order to avoid the vehicle tilting when the manipulator exits the vehicle frame from the rear.

The UVMS CoM trajectory of the whole trial is shown in Fig. 5(d). It can be seen that the vehicle aligns towards the goal and reaches it; afterwards it lands while fulfilling the distance and alignment requirements. The changes of the UVMS CoM depth over time are depicted in Fig. 5(b): since the CoM depth is shifted by 0.348 m w.r.t. the bottom of the vehicle, the measure does not coincide with the floor position even when the UVMS lands. Fig. 5(b) also shows how the depth stays within the minimum altitude limits and never overcomes them until the vehicle lands. This can also be noticed in Fig. 5(a): indeed, the activation function of the minimum altitude task is always different from zero in the first phases of the experiment. The activation function has some spikes (e.g. at t = 142.5 s) where it is equal to one, due to the quantization of the depth sensor: the sensor hides changes to the task, not allowing it to react in time to avoid reaching full activation, even though it is able to recover the correct position in a few steps.
For instance, the spike at t = 142.5 s disappears at t = 143.6 s.

Once the vehicle has landed, the laser performs the scan of the nodule, following the approach described in Section III-C, which results in the final sample position, the pre-inspection position, and the direction of approach. Once the nodule sample position is computed, this information is communicated to the kinematic control layer, which automatically triggers the ALLC to move the arm out of the parked position. Once the exit is finished, the kinematic control layer switches to the A_j action to reach the a priori defined pre-inspection position. Once this position is reached, the kinematic control layer switches to A_t and reaches the approaching point computed by the laser, with the tool z axis aligned to the direction of approach to the nodule. Once this position is reached, the kinematic control layer is commanded to perform a step, hence approaching the nodule through a movement of 0.05 m along the direction of approach. The kinematic control layer executes action A_t again and reaches the new target position. This sequence is iterated 3 times. The tool sequence is depicted in Fig. 5(c).

V. CONCLUSIONS

This paper has presented the control and perception framework for the ROBUST H2020 project, as well as the hardware used and the preliminary pool experiment results. Details have been given about the task priority framework enabling the kinematic control of all the different phases, as well as about the dynamic control. In the pool experiments the whole architecture has been validated, proving the feasibility of the proposed approach: the UVMS was able to navigate in the pool and to land having the nodule in the workspace of both the laser scanning system and the manipulator.
Fig. 5. Pool test results. (a) The activation values of the main control tasks during the Going-to, Positioning and Landing phases. (b) UVMS depth, minimum altitude boundaries and seafloor depth; note that the depth is quantized at 0.1 m and the CoM is shifted by 0.348 m w.r.t. the vehicle bottom. (c) Tool-Nodule Approach phase: first the tool exits from the parked position, then it approaches the pre-inspection configuration; from such a configuration it moves to the approaching point and performs three steps of 0.05 m along the approaching direction before reaching the end position on top of the nodule (a safety distance is kept between the tool and the nodule due to the lack of a proximity sensor). (d) AUV CoM trajectory while reaching the goal (from t = 86.39 s to t = 121.69 s) and landing (from t = 121.69 s to the end), keeping the CoM and the laser frame within the defined thresholds; the UVMS is depicted as an ellipsoid and a triangle, where the latter mimics the tail of the vehicle; the UVMS shape is drawn in red during the movement and in blue when the final position is reached.

The tool was able to approach the nodule by exploiting the vision outputs while maintaining the required distance. Current work aims at preparing the whole system for the upcoming sea trials, scheduled for summer 2019. These tests are aimed at identifying the parameters of the system dynamic model [14] and at tuning the dynamic control parameters, with the final goal of demonstrating the whole ROBUST mission concept.

REFERENCES

[1] A. Wedler, M. Hellerer, B. Rebele, H. Gmeiner, B. Vodermayer, T. Bellmann, S. Barthelmes, R. Rosta, C. Lange, L. Witte, et al., "ROBEX - components and methods for the planetary exploration demonstration mission," in 13th Symposium on Advanced Space Technologies in Robot[...]
