GSM Alarm Device Based on MATLAB Video [Specification, Thesis, Proposal, and Foreign-Literature Translation]
Translated title: The hierarchical structure of the image processing operations

Image processing is not a one-step process. We can distinguish several steps which must be performed one after the other until we can extract the data of interest from the observed scene. In this way a hierarchical processing scheme is built up, as sketched in Fig. 12-1; the figure gives an overview of the different phases of image processing. Image processing begins with the capture of an image with a suitable, not necessarily optical, acquisition system. In a technical or scientific application we may choose an appropriate imaging system. Furthermore, we can set up the illumination, choose the best wavelength range, and select other options to capture the object features of interest in the best way in an image. Once the image is sensed, it must be brought into a form that can be treated with digital computers. This process is called digitization.

As the problems of traffic become more and more serious, Intelligent Transport Systems (ITS) have emerged. Automatic license plate recognition is one of the most significant subjects to grow out of the combination of computer vision and pattern recognition. The image input to the computer is processed and analyzed in order to locate the license plate, recognize the characters on it, and output them as a text string. The license plate recognition system (LPRS) has important applications in ITS. In an LPRS the first step is to locate the license plate in the captured image, which is essential for character recognition: the recognition rate is governed by how accurately the plate is located. In this paper several image manipulation methods are compared and analyzed, and a solution for localizing the car plate is derived; experiments show that these methods give good results. Methods based on the edge map and on frequency analysis are used to localize the plate: the characteristics of the license plate are extracted from the edge-detected car image and then analyzed and processed until the probable plate area is found. Automated license plate location is a part of image processing and an important component of an intelligent traffic system; it is the key step in vehicle License Plate Recognition (LPR). A method for handling images with different backgrounds and different illuminations is proposed in the paper: the upper and lower borders are determined from the gray-level variation of the character distribution, and the left and right borders are determined from the black-white transitions of the pixels in every row.
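The projection-based plate localization just described can be illustrated with a short sketch. MATLAB is used because the surrounding project is MATLAB-based, but this is only a minimal illustration under stated assumptions, not the paper's implementation: the input file name, the 0.5 thresholds, and the reliance on the Image Processing Toolbox's edge function are illustrative choices.

```matlab
% Minimal sketch: coarse license plate localization from edge projections.
% Assumes MATLAB with the Image Processing Toolbox; 'car.jpg' and the 0.5
% thresholds are illustrative placeholders, not values from the paper.
img   = im2double(rgb2gray(imread('car.jpg')));  % gray-level car image
edges = double(edge(img, 'sobel'));              % binary edge map

% Upper and lower borders: character strokes make the plate rows rich in
% edges, so project the edge map onto the vertical axis.
rowProfile = sum(edges, 2);                      % edge pixels per row
rowMask    = rowProfile > 0.5 * max(rowProfile);
top    = find(rowMask, 1, 'first');
bottom = find(rowMask, 1, 'last');

% Left and right borders: count the black-white transitions along every
% row inside the candidate band and project them onto the horizontal axis.
band       = edges(top:bottom, :);
colProfile = sum(abs(diff(band, 1, 2)), 1);      % transitions per column
colMask    = colProfile > 0.5 * max(colProfile);
left  = find(colMask, 1, 'first');
right = find(colMask, 1, 'last') + 1;

plateCandidate = img(top:bottom, left:right);    % probable plate region
imshow(plateCandidate);
```

A real system would typically still verify the candidate region against prior knowledge of plate size and aspect ratio before handing it to character recognition.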
The first steps of digital processing may include a number of different operations and are known as image preprocessing. If the sensor has nonlinear characteristics, these need to be corrected. Likewise, the brightness and contrast of the image may require improvement. Commonly, too, coordinate transformations are needed to restore geometrical distortions introduced during image formation. Radiometric and geometric corrections are elementary pixel processing operations. It may also be necessary to correct known disturbances in the image, for instance those caused by defocused optics, motion blur, errors in the sensor, or errors in the transmission of image signals. We also deal with reconstruction techniques, which are required for the many indirect imaging techniques, such as tomography, that deliver no direct image.

A whole chain of processing steps is necessary to analyze and identify objects. First, adequate filtering procedures must be applied in order to distinguish the objects of interest from other objects and the background. Essentially, from an image (or several images) one or more feature images are extracted. The basic tools for this task are averaging, edge detection, the analysis of simple neighborhoods, and the analysis of the complex patterns known in image processing as texture. An important feature of an object is also its motion, so techniques to detect and determine motion are necessary. Then the object has to be separated from the background, which means that regions of constant features and discontinuities must be identified; this process leads to a label image. Once we know the exact geometrical shape of the object, we can extract further information such as the mean gray value, the area, the perimeter, and other parameters describing the form of the object. These parameters can be used to classify objects. This is an important step in many applications of image processing, as the following examples show. In a satellite image showing an agricultural area, we would like to distinguish fields with different fruits and obtain parameters to estimate their ripeness or to detect damage by parasites. There are many medical applications where the essential problem is to detect pathological changes; a classic example is the analysis of aberrations in chromosomes. Character recognition in printed and handwritten text is another example which has been studied since image processing began and still poses significant difficulties.
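The segmentation-and-measurement chain described above, separating objects from the background, building a label image, and then measuring the mean gray value, area and perimeter, can be sketched as follows. This is a minimal sketch assuming MATLAB's Image Processing Toolbox, an input in which objects are brighter than the background, and an Otsu threshold; none of these specific choices comes from the text.

```matlab
% Minimal sketch: from gray image to labelled regions and shape parameters.
% Assumes objects are brighter than the background; the Otsu threshold and
% the file name are illustrative choices, not prescribed by the text.
img    = im2double(rgb2gray(imread('scene.jpg')));
bw     = imbinarize(img, graythresh(img));    % separate objects from background
labels = bwlabel(bw);                         % label image: one id per region

% Measure the parameters named in the text for every labelled object.
stats = regionprops(labels, img, 'Area', 'Perimeter', 'MeanIntensity');
areas      = [stats.Area];                    % object areas in pixels
perimeters = [stats.Perimeter];               % boundary lengths
meanGray   = [stats.MeanIntensity];           % mean gray value per object
% These feature vectors can then be passed to a classifier.
```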
Beyond recognizing characters, you hopefully do more, namely try to understand the meaning of what you are reading. This is also the final step of image processing, where one aims to understand the observed scene. We perform this task more or less unconsciously whenever we use our visual system: we recognize people, we can easily distinguish the image of a scientific lab from that of a living room, and we watch the traffic to cross a street safely. We all do this without knowing how the visual system works.

For some time now, image processing and computer graphics have been treated as two different areas. Knowledge in both areas has increased considerably and more complex problems can now be treated. Computer graphics strives to achieve photorealistic computer-generated images of three-dimensional scenes, while image processing tries to reconstruct such a scene from an image actually taken with a camera. In this sense, image processing performs the inverse procedure to that of computer graphics. We start with knowledge of the shape and features of an object, at the bottom of the figure, and work upwards until we get a two-dimensional image. To handle image processing or computer graphics, we basically have to work from the same knowledge: we need to know the interaction between illumination and objects, how a three-dimensional scene is projected onto an image plane, and so on. There are still quite a few differences between an image processing workstation and a graphics workstation, but we can envisage that, when the similarities and interrelations between computer graphics and image processing are better understood and the proper hardware is developed, we will see some kind of general-purpose workstation in the future which can handle computer graphics as well as image processing tasks. The advent of multimedia, i.e. the integration of text, images, sound, and movies, will further accelerate the unification of computer graphics and image processing.

In January 1980 Scientific American published a remarkable image called Plume 2, the second of eight volcanic eruptions detected on the Jovian moon Io by the spacecraft Voyager 1 on 5 March 1979. The picture was a landmark image in interplanetary exploration, the first time an erupting volcano had been seen in space. It was also a triumph for image processing. Satellite imagery and images from interplanetary explorers have until fairly recently been the major users of image processing techniques, where a computer image is numerically manipulated to produce some desired effect, such as making a particular aspect or feature of the image more visible. Image processing has its roots in photo reconnaissance in the Second World War, where processing operations were optical and interpretation operations were performed by humans who undertook such tasks as quantifying the effect of bombing raids. With the advent of satellite imagery in the late 1960s, much computer-based work began, and the color composite satellite images, sometimes startlingly beautiful, have become part of our visual culture and of the perception of our planet. Like computer graphics, image processing was until recently confined to research laboratories which could afford the expensive computers needed to cope with the substantial processing overheads of large numbers of high-resolution images. With the advent of cheap powerful computers and image collection devices like digital cameras and scanners, we have seen a migration of image processing techniques into the public domain. Classical image processing techniques are routinely employed by graphic designers to manipulate photographic and generated imagery, either to correct defects, change color and so on, or creatively to transform the entire look of an image by subjecting it to some operation such as edge enhancement. A recent mainstream application of image processing is the compression of images, either for transmission across the Internet or for moving video images in video telephony and video conferencing. Video telephony is one of the current crossover areas that employ both computer graphics and classical image processing techniques to try to achieve very high compression rates. All this is part of an inexorable trend towards the digital representation of images. Indeed, that most powerful image form of the twentieth century, the TV image, is also about to be taken into the digital domain.

Image processing is characterized by a large number of algorithms that are specific solutions to specific problems. Some are mathematical or context-independent operations that are applied to each and every pixel; for example, we can use Fourier transforms to perform image filtering operations.
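As a concrete illustration of filtering via the Fourier transform, the sketch below multiplies the centred spectrum of a gray-level image by a Gaussian low-pass mask; the input file name and the cut-off (a sigma of 30 pixels) are illustrative assumptions.

```matlab
% Minimal sketch: low-pass filtering an image via its 2-D Fourier transform.
% The Gaussian cut-off (sigma = 30 pixels) is an arbitrary illustrative value.
img = im2double(rgb2gray(imread('scene.jpg')));
F   = fftshift(fft2(img));                     % centred 2-D spectrum
[rows, cols] = size(img);
[u, v] = meshgrid(1:cols, 1:rows);
d2 = (u - cols/2).^2 + (v - rows/2).^2;        % squared distance from DC term
H  = exp(-d2 / (2 * 30^2));                    % Gaussian low-pass mask
smoothed = real(ifft2(ifftshift(F .* H)));     % filtered image, spatial domain
imshow(smoothed);
```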
Others are algorithmic: we may use a complicated recursive strategy to find those pixels that constitute the edges in an image. Image processing operations often form part of a computer vision system: the input image may be filtered to highlight or reveal edges prior to shape detection, and such steps are usually known as low-level operations. In computer graphics, filtering operations are used extensively to avoid aliasing or sampling artifacts.
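The anti-aliasing role of filtering mentioned above can be sketched in the same spirit: smooth with a low-pass kernel before subsampling so that high spatial frequencies cannot fold back as artifacts. The kernel size, sigma and subsampling factor below are illustrative assumptions.

```matlab
% Minimal sketch: low-pass filter before subsampling to avoid aliasing.
% Kernel size, sigma and the factor of 4 are illustrative assumptions.
img     = im2double(rgb2gray(imread('scene.jpg')));
h       = fspecial('gaussian', 9, 2);      % 9x9 Gaussian smoothing kernel
blurred = imfilter(img, h, 'replicate');   % suppress high spatial frequencies
small   = blurred(1:4:end, 1:4:end);       % then subsample by a factor of 4
```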