




Jingjiang College of Jiangsu University
Foreign Literature Translation

Student ID: 3081155033
Student name: Miao Chengpeng
Major and class: J Electronic Information Engineering 0802
Supervisor: Li Zhengming (Professor)
June 2012

A System for Remote Video Surveillance and Monitoring

The thrust of CMU research under the DARPA Video Surveillance and Monitoring (VSAM) project is cooperative multi-sensor surveillance to support battlefield awareness. Under our VSAM Integrated Feasibility Demonstration (IFD) contract, we have developed automated video understanding technology that enables a single human operator to monitor activities over a complex area using a distributed network of active video sensors. The goal is to automatically collect and disseminate real-time information from the battlefield to improve the situational awareness of commanders and staff. Other military and federal law enforcement applications include providing perimeter security for troops, monitoring peace treaties or refugee movements from unmanned air vehicles, providing security for embassies or airports, and staking out suspected drug or terrorist hide-outs by collecting time-stamped pictures of everyone entering and exiting the building.

Automated video surveillance is an important research area in the commercial sector as well. Technology has reached a stage where mounting cameras to capture video imagery is cheap, but finding available human resources to sit and watch that imagery is expensive. Surveillance cameras are already prevalent in commercial establishments, with camera output being recorded to tapes that are either rewritten periodically or stored in video archives. After a crime occurs (a store is robbed or a car is stolen), investigators can go back after the fact to see what happened, but of course by then it is too late. What is needed is continuous 24-hour monitoring and analysis of video surveillance data to alert security officers to a burglary in progress, or to a suspicious individual loitering in the parking lot, while options are still open for avoiding the crime.

Keeping track of people, vehicles, and their interactions in an urban or battlefield environment is a difficult task. The role of VSAM video understanding technology in achieving this goal is to automatically "parse" people and vehicles from raw video, determine their geolocations, and insert them into a dynamic scene visualization. We have developed robust routines for detecting and tracking moving objects. Detected objects are classified into semantic categories such as human, human group, car, and truck using shape and color analysis, and these labels are used to improve tracking using temporal consistency constraints. Further classification of human activity, such as walking and running, has also been achieved. Geolocations of labeled entities are determined from their image coordinates using either wide-baseline stereo from two or more overlapping camera views, or intersection of viewing rays with a terrain model from monocular views. These computed locations feed into a higher-level tracking module that tasks multiple sensors with variable pan, tilt, and zoom to cooperatively and continuously track an object through the scene. All resulting object hypotheses from all sensors are transmitted as symbolic data packets back to a central operator control unit, where they are displayed on a graphical user interface to give a broad overview of scene activities.
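As a rough illustration of the monocular geolocation step described above (a simplified sketch, not the VSAM implementation), the viewing ray through a detected object's image coordinates can be intersected with a terrain model. Here the terrain is reduced to a flat ground plane z = 0, and the function name pixel_to_ground as well as the camera parameters K, R, t are invented example values:

import numpy as np

def pixel_to_ground(u, v, K, R, t):
    # Camera center in world coordinates, given x_cam = R @ x_world + t.
    cam_center = -R.T @ t
    # Viewing ray through pixel (u, v), expressed in world coordinates.
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
    ray_world = R.T @ ray_cam
    # Intersect with the ground plane z = 0: cam_center_z + s * ray_z = 0.
    s = -cam_center[2] / ray_world[2]
    return cam_center + s * ray_world

# Entirely illustrative setup: a camera 10 m above the origin, looking
# straight down, with an 800-pixel focal length.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.diag([1.0, -1.0, -1.0])      # world-to-camera rotation (nadir view)
t = np.array([0.0, 0.0, 10.0])      # places the camera center at z = 10
print(pixel_to_ground(400.0, 300.0, K, R, t))   # ground point near (1.0, -0.75, 0.0)

In the actual system a full terrain model takes the place of the flat-plane simplification, and wide-baseline stereo is used instead wherever two or more camera views overlap.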
These technologies have been demonstrated through a series of yearly demos, using a testbed system developed on the urban campus of CMU.

Detection of moving objects in video streams is known to be a significant, and difficult, research problem. Aside from the intrinsic usefulness of being able to segment video streams into moving and background components, detecting moving blobs provides a focus of attention for recognition, classification, and activity analysis, making these later processes more efficient since only "moving" pixels need be considered.

There are three conventional approaches to moving object detection: temporal differencing, background subtraction, and optical flow. Temporal differencing is very adaptive to dynamic environments, but generally does a poor job of extracting all relevant feature pixels. Background subtraction provides the most complete feature data, but is extremely sensitive to dynamic scene changes due to lighting and extraneous events. Optical flow can be used to detect independently moving objects in the presence of camera motion; however, most optical flow computation methods are computationally complex, and cannot be applied to full-frame video streams in real time without specialized hardware.

Under the VSAM program, CMU has developed and implemented three methods for moving object detection on the VSAM testbed. The first is a combination of adaptive background subtraction and three-frame differencing. This hybrid algorithm is very fast and surprisingly effective; indeed, it is the primary algorithm used by the majority of the SPUs (sensor processing units) in the VSAM system. In addition, two new prototype algorithms have been developed to address shortcomings of this standard approach. First, a mechanism for maintaining temporal object layers has been developed to allow greater disambiguation of moving objects that stop for a while, are occluded by other objects, and then resume motion. One limitation that affects both this method and the standard algorithm is that they only work for static cameras, or in a "step-and-stare" mode for pan-tilt cameras. To overcome this limitation, a second extension has been developed to allow background subtraction from a continuously panning and tilting camera. Through clever accumulation of image evidence, this algorithm can be implemented in real time on a conventional PC platform. A fourth approach to moving object detection from a moving airborne platform has also been developed, under a subcontract to the Sarnoff Corporation. This approach is based on image stabilization using special video processing hardware.
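The following is a minimal sketch of the hybrid idea described above, combining three-frame differencing with an adaptive background model. It is illustrative only: the class name HybridDetector, the thresholds, and the update rate alpha are invented for the example, and the real algorithm additionally groups detected pixels into object regions.

import numpy as np

class HybridDetector:
    """Illustrative hybrid of three-frame differencing and adaptive
    background subtraction, operating on grayscale frames (2-D arrays)."""

    def __init__(self, first_frame, alpha=0.05, diff_thresh=15.0, bg_thresh=25.0):
        f = first_frame.astype(np.float64)
        self.background = f.copy()   # adaptive background model
        self.prev = f.copy()         # frame at time t-1
        self.prev2 = f.copy()        # frame at time t-2
        self.alpha = alpha           # background update rate
        self.diff_thresh = diff_thresh
        self.bg_thresh = bg_thresh

    def apply(self, frame):
        f = frame.astype(np.float64)
        # Three-frame differencing: a pixel counts as "moving" only if it
        # differs from both of the two previous frames.
        moving = (np.abs(f - self.prev) > self.diff_thresh) & \
                 (np.abs(f - self.prev2) > self.diff_thresh)
        # Background subtraction recovers the complete object silhouette,
        # including interior pixels that frame differencing misses.
        foreground = np.abs(f - self.background) > self.bg_thresh
        # Adapt the background only where no motion is detected.
        self.background = np.where(
            moving, self.background,
            (1.0 - self.alpha) * self.background + self.alpha * f)
        self.prev2, self.prev = self.prev, f
        return moving | foreground

Because the background adapts everywhere that no motion is detected, an object that stops for a while is gradually absorbed into the background model; the temporal object layers mentioned above are aimed at exactly this case.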
The current VSAM IFD testbed system and suite of video understanding technologies are the end result of a three-year, evolutionary process. Impetus for this evolution was provided by a series of yearly demonstrations. The following tables provide a succinct synopsis of the progress made during the last three years in the areas of video understanding technology, VSAM testbed architecture, sensor control algorithms, and degree of user interaction. Although the program is over now, the VSAM IFD testbed continues to provide a valuable resource for the development and testing of new video understanding capabilities. Future work will be directed towards achieving the following goals:

1. better understanding of human motion, including segmentation and tracking of articulated body parts;
2. improved data logging and retrieval mechanisms to support 24/7 system operations;
3. bootstrapping functional site models through passive observation of scene activities;
4. better detection and classification of multi-agent events and activities;
5. better camera control to enable smooth object tracking at high zoom; and
6. acquisition and selection of "best views" with the eventual goal of recognizing individuals in the scene.