Underwater Acoustic / Optical Imaging System Based on OMAP3530

Haisen LI, Jian XU, Tian ZHOU, Pingxuan DOU
Science and Technology on Underwater Acoustic Laboratory
Harbin Engineering University
Harbin, Heilongjiang Province, China

Abstract: This paper introduces an underwater acoustic / optical integrated imaging system built around the OMAP3530 processor. The DSP core of the OMAP3530 runs the multi-subarray amplitude-phase joint detection algorithm to estimate the depth of the seafloor, while the ARM core collects optical camera images of the focus area and receives the GPS and other parameters over the UART. The PowerVR core renders a three-dimensional acoustic image of the seafloor with the OpenGL ES library, and a human-machine interface designed with Qt/Embedded displays the acoustic and optical image information in real time; the data are also uploaded to the mother ship over the network. During the Songhua Lake experiment the whole system ran stably and performed well. The underwater acoustic / optical integrated 3D imaging system is suitable for underwater topography survey, marine salvage, search and rescue, undersea resource exploration and other fields, and has wide application prospects and a large market demand.

Keywords: underwater acoustic imaging; underwater optical imaging; integrated; OMAP3530; Qt embedded; underwater target identification

I. Introduction

Up to now there have been two main ways to detect underwater targets. The first is optical detection [1][2], in which optical image collection equipment directly captures the target's optical image at close range. The advantages of this approach are portable equipment and high imaging resolution, which make it suitable for direct observation with the naked eye. However, the limited detection range and the strong dependence on a light source make it unsuitable for long-range detection.
To meet the needs of both long-range detection and close-range fine imaging, and to reduce expense, TI's OMAP3530, a recent high-performance multicore DSP device, is adopted. Combined with the acoustic and optical detection equipment, it constitutes an integrated imaging system with an innovative fusion of acoustic and optical detection methods, acquiring both acoustic and optical image information: long-range acoustic detection together with close-range fine optical target detection. The overall system worked stably in the 2009 Songhua Lake experiment and successfully completed the job of surveying and charting the underwater topography.

II. System Solution

The main processor of the underwater acoustic / optical integrated imaging system is the OMAP3530, a high-performance chip introduced by TI with a three-core structure: it integrates a 600 MHz ARM Cortex-A8 core, a 430 MHz DSP core, and a PowerVR SGX530 processor used for 3D model construction [5]. The overall structure of the system is shown in Fig. 1. The DSP core realizes the multi-subarray amplitude-phase joint detection method; the ARM core completes the collection of optical images and their display on the human-machine interface designed with Qt/Embedded; the PowerVR core accomplishes the acoustic three-dimensional imaging.

Figure 1. System structure diagram

III. Hardware System

The hardware design is built around the OMAP3530. The system uses the DevKit8000 single-board computer for signal processing and for running the program functions. An LCD graphic display module is used for the acoustic three-dimensional imaging, the optical imaging and the interactive interface display; the GPS module and the horizontal pan/tilt detection module are respectively responsible for providing the GPS and roll/pitch information. The hardware structure of the system is shown in Fig. 2.

This work was supported by the National High Technology Research and Development Program of China (No.
2007AA09Z124), the Research Fund for the Doctoral Program of Higher Education of China (No. 20070217022), the National Natural Science Foundation of China (No. 60872107), and the Fundamental Research of Harbin Engineering University (No. HEUFT07017).

Figure 2. Structure diagram of hardware platform

IV. System Software Development

The software consists of two parts, the acoustic imaging system and the optical imaging system, as shown in Fig. 3.

The acoustic imaging system uses the DSP core to realize the multi-subarray amplitude-phase joint detection algorithm, the ARM core to receive the GPS and other information, and the PowerVR core to accelerate the acoustic three-dimensional imaging [6]. Through double-buffered data transfer, the system passes the acoustic three-dimensional imaging information to the embedded operating system and then displays it on the Qt/E LCD human-machine interface; these data can also be controlled, stored and replayed. The optical imaging system uses the ARM core to capture the optical camera data and then processes the data with a pixel interpolation algorithm. It likewise shows the images on the Qt/E interface and saves the information stream on the system.

Figure 3. Structure diagram of software modules

A. Subarray Amplitude-Phase Detection

Considering the higher SNR and the smaller footprint characteristic of the shallow-water environment, the system uses the subarray amplitude-phase joint detection algorithm on the DSP core to calculate the depth information [8]. When the DSP core has received the collected data from the network port, the algorithm performs a dynamic calibration of the Q value to complete the floating-point to fixed-point conversion. According to the pre-arranged geometry of the hydrophone array, the subarray amplitude-phase joint detection algorithm divides the hydrophone array into several sub-arrays, and beamforming is applied to each of them.
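The paper does not give the beamformer itself; a minimal delay-and-sum sketch for one sub-array, assuming pre-computed integer sample delays per element (the function and parameter names here are hypothetical, not from the original system), might look like:

```cpp
#include <cstddef>
#include <vector>

// Illustrative delay-and-sum beamformer for one sub-array: each element's
// signal is shifted by its pre-computed steering delay (in samples) and the
// shifted signals are summed to form the beam output.
std::vector<double> beamform(const std::vector<std::vector<double>>& elems,
                             const std::vector<std::size_t>& delays,
                             std::size_t outLen) {
    std::vector<double> beam(outLen, 0.0);
    for (std::size_t e = 0; e < elems.size(); ++e) {
        for (std::size_t n = 0; n < outLen; ++n) {
            const std::size_t idx = n + delays[e];  // apply steering delay
            if (idx < elems[e].size())
                beam[n] += elems[e][idx];           // coherent summation
        }
    }
    return beam;
}
```

In practice the delays would be derived from the element spacing, the steering angle and the sound speed; fractional delays are usually handled by interpolation or by phase shifts in the frequency domain.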
Afterwards the DSP core utilizes the phase information of the sub-array beams with the same sequence number to fit the sub-array phase function, as the following equation shows:

    φ_i = K·x_i + c    (1)

In equation (1), x_i stands for the location of the i-th sub-array; φ_i stands for the phase of the beam received by the i-th sub-array; K stands for the slope; and c stands for the initial phase of the signal received by the first element (a constant).

In practice, the algorithm uses an unwrapping method to process the phase parameter, applies least mean-square optimization to estimate K, applies a smoothing noise-reduction method to the depth data, and determines a global weighting factor from the output beams so that the amplitude weighting and the phase weighting are comparable, ultimately determining the depth data of the target region.

When the DSP core has completed the multi-subarray amplitude-phase joint detection algorithm, it obtains the depth distribution of the target area. These depth data are stored in the system flash, and the flash address is passed to the ARM core through a message queue.

B. OpenGL ES 3D Imaging

The OpenGL ES 3D imaging module converts the information contained in the acoustic data into a 3D image. The graphics core of the OMAP3530 is based on the PowerVR SGX530 graphics accelerator, and the API functions used throughout the system were tested to meet the design requirements.

We choose the DEM (digital elevation model) as the standard vertex model for the three-dimensional reconstruction, and pad missing data points by first-order linear interpolation [9]: the points adjacent along the longitude axis and the latitude axis are averaged, as shown in Fig. 4.

Figure 4. Schematic diagram of first-order linear interpolation

That is, the height at a missing sampling point is obtained by applying the first-order linear interpolation operator to its four closest data points.
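The four-neighbour interpolation of Fig. 4 can be sketched as follows; the grid representation and the function name are illustrative assumptions, not the paper's actual code:

```cpp
#include <vector>

// Sketch of the first-order linear interpolation of Fig. 4: a missing depth
// sample at grid cell (r, c) is filled with the average of its four
// axis-adjacent neighbours (assumed present and valid).
double fillPoint(const std::vector<std::vector<double>>& z, int r, int c) {
    return (z[r - 1][c] + z[r + 1][c]     // neighbours along the latitude axis
          + z[r][c - 1] + z[r][c + 1])    // neighbours along the longitude axis
          / 4.0;
}
```

A full implementation would also handle grid borders and the case where a neighbour is itself missing, for example by averaging only the valid neighbours.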
After the interpolation, the data are processed into a standard format in which each row of data points has the same longitude and each column of data points has the same latitude. Expressed as matrix elements, x stands for the longitude, y for the latitude, and z for the depth of the water. All data points are connected in GL_LINE_STRIP mode, as shown in Fig. 5: starting from the first point, the triangular grid is drawn point by point. The actual triangle mesh configuration is shown in Fig. 6.

Figure 5. GL_LINE_STRIP connection

Figure 6. Triangle mesh network

Illumination requires a normal vector at each echo data point. The normal vector of a single point is determined by the normal vectors of its adjacent patches [10]: the cross product of two non-parallel vectors yields the normal vector of the plane that the two vectors determine. Formula (2) shows the process concisely:

    n = a × b    (2)

Rotating counterclockwise from a starting vector, the two edge vectors of each adjacent patch are taken in turn and the patch normal is obtained from their cross product; the four patch normals around a vertex are then averaged and the result is normalized, which gives the vertex normal and completes the per-point normal computation.

Then a parameter matrix (the model-view matrix) is used to complete the view settings, and myglLightv() is used to set the light position, the diffuse intensity, the ambient intensity, the specular intensity and other lighting parameters. By establishing vertex buffer objects, a buffer memory with a dual-memory interaction mechanism is built up. With this, the construction of the three-dimensional graphics is complete; the PVRShell predefined enumeration value PVRShellKeyNameSELECT supplies imaging window messages to the outside, and according to these window messages a button can be used to choose the display mode.
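Formula (2) and the averaging step can be sketched as follows; this is a generic implementation of the standard technique, not the system's actual code:

```cpp
#include <array>
#include <cmath>

using Vec3 = std::array<double, 3>;

// Cross product of two edge vectors (formula (2)).
Vec3 cross(const Vec3& a, const Vec3& b) {
    return {a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]};
}

// Scale a vector to unit length.
Vec3 normalize(const Vec3& v) {
    const double len = std::sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
    return {v[0] / len, v[1] / len, v[2] / len};
}

// Unit normal of the patch spanned by two non-parallel edges a and b.
Vec3 patchNormal(const Vec3& a, const Vec3& b) {
    return normalize(cross(a, b));
}

// Vertex normal: average the normals of the four adjacent patches, then
// normalize, as described in the text.
Vec3 vertexNormal(const std::array<Vec3, 4>& patchNormals) {
    Vec3 sum{0.0, 0.0, 0.0};
    for (const Vec3& n : patchNormals)
        for (int i = 0; i < 3; ++i) sum[i] += n[i];
    return normalize(sum);
}
```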
C. Qt/E Human-Machine Interface Design

The Qt/E human-machine interface of the underwater acoustic / optical imaging system is supposed to display the real-time GPS coordinates, the speed and the roll/pitch parameters, as well as the optical images and the acoustic three-dimensional imaging information.

We use C++ to develop the Qt/E interface, and use the polymorphism and inheritance features of Qt/E to design the control system. The interface uses the Qt/E underlying graphics engine to draw directly on the Linux kernel frame buffer, achieving real-time display of the human-machine interface. Through inheritance, we add a third-party serial-port class and a V4L2 video class, so that the same machine can communicate over the serial port with the GPS receiver and the roll/pitch detector, and operate the optical image acquisition equipment.

The optical imaging display uses memory mapping based on the Linux V4L2 protocol to achieve the video data acquisition and the interface display. Compared with traditional direct interception of optical images, it consumes less system memory and achieves a higher video frame rate. The video data collected by the video device are mapped into user space by memory mapping, so that the user-space program can operate directly on the video data stream. The program uses the I/O control functions, the video-buffer command words and the buffer-queue command words provided by V4L2 to exchange data with the underlying Linux V4L2 driver.

After the video memory is allocated and queried, mmap is used to map the driver's device memory into user space. The interface then displays the video data stream as JPG-format images, using the Qt/E timing mechanism.
The program uses fwrite to write out and save the raw video data from the first address of the device memory mapped into user space; using the Qt/E timing mechanism it reads out the image information at 50 ms intervals without interruption and refreshes the display synchronously, so that the target-area images captured by the optical acquisition device can be viewed smoothly. Because the raw optical images collected by the acquisition device have poor brightness and low contrast, the system uses the V4L2 API command words to process the collected image information further, adjusting the distribution of the original pixel colors to complete the brightness and contrast adjustment in the Qt/E interface.

V. Experimental Data Processing

When the experiment ship sailed from north to south (or the reverse) across Songhua Lake in 2009, the system continuously collected, over the network, the 256 sampling points of each synchronous transmission cycle from the towed underwater hydrophone array. The DSP core calculated the depth of each echo through the multi-subarray amplitude-phase joint detection algorithm, and a median filter removed the outliers; the gaps left after removing outliers, and the unsampled locations in the acoustic sampling area, were padded by first-order linear interpolation. Using the height information of the acoustic echoes, the 3D terrain was reconstructed through the digital elevation model, and the graphics achieved a 25 FPS refresh rate. The system displays in real time the optical camera images of the focus area, and transmits the GPS and roll/pitch information through the serial port to the Qt/E human-machine interface, so that it can provide the real-time track and other information. System operation figures are given below.
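The median-filter outlier removal mentioned above can be sketched as a sliding-window filter; the window size (3) and the names are illustrative assumptions, since the paper does not specify them:

```cpp
#include <algorithm>
#include <array>
#include <cstddef>
#include <vector>

// Sliding-window median filter for removing outlier depth values: each
// interior sample is replaced by the median of itself and its two
// neighbours; the edge samples are kept unchanged.
std::vector<double> medianFilter3(const std::vector<double>& depth) {
    std::vector<double> out = depth;
    for (std::size_t i = 1; i + 1 < depth.size(); ++i) {
        std::array<double, 3> w = {depth[i - 1], depth[i], depth[i + 1]};
        std::sort(w.begin(), w.end());
        out[i] = w[1];  // the median replaces the centre sample
    }
    return out;
}
```

A single spurious depth value surrounded by consistent neighbours is thus discarded, while genuine terrain steps wider than the window are preserved.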
Figure 7. Display interface of the video information captured by the optical acquisition device

Figure 8. Local grid map filled with no color

Figure 9. The whole system

VI. Summary

The underwater acoustic / optical imaging system worked well in the laboratory tank experiment. The expanded UART communication channels, the software system and the kernel file system all work stably. The track of the underwater terrain can be displayed stably on the LCD, and the interface can display the three-dimensional acoustic imaging together with the optical camera information. The photographic images and the three-dimensional underwater topographic images can be displayed at 20.7 to 30 frames per second on average, and the viewing perspective can be adjusted based on OpenGL. However, the system still needs improvement: the ARM side can only process information after the DSP side has finished its processing, not synchronously, and because the cache timer is reused too often, the ARM-side software design leaves the system running with a certain degree of redundancy, which is hoped to be further improved.
