
Xuzhou Institute of Technology Graduation Project

Foreign Literature Translation

Student: 虞黎亮
School: School of Mechanical and Electrical Engineering
Major: Mechanical Design, Manufacturing and Automation
Supervisor: 张建化

May 27, 2011

Intelligent Vehicle Road Recognition Based on the CMOS Camera

Chu Liu, College of Automotive Engineering, Tongji University, Shanghai, China. Email: fiercelc@126.com

Jie Chen, College of Automotive Engineering, Tongji University, Shanghai, China. Email: panggebiao@hotmail.com

Yifan Xu, College of Automotive Engineering, Tongji University, Shanghai, China. Email: freeskyflying@gmail.com

Feng Luo, College of Automotive Engineering, Tongji University, Shanghai, China. Email: luo_feng@mail.tongji.edu.cn

Abstract

In recent years, intelligent auxiliary driving and navigation have attracted more and more attention. This paper develops a road recognition system for the intelligent vehicle that uses a CMOS camera as its road sensor; it performs road recognition and navigation for the intelligent vehicle, and the installation and sampling process of the CMOS camera are explained.

A PC-based road monitoring system and a test program for the road recognition algorithm are developed, which guarantee the accuracy, rapidity and adaptability of road recognition. Once the algorithm passes the test, the program can be used directly in the embedded development environment without modification and run directly on the microcontroller of the intelligent vehicle.

A 3D road simulation system is also developed on the PC, which makes it easy to build all kinds of road tracks for the simulation and measurement of the road recognition system. In addition, various actual roads can be reproduced in the simulation system.

Experiments show that under such a test environment the road recognition algorithm is satisfactory in road recognition and path tracking. This work can enrich the research on road recognition algorithms for intelligent vehicles and also provides support for the development of vision navigation and driverless driving.

Keywords: CMOS camera; intelligent vehicle; road recognition; video sampling; road simulation; algorithm test

1. System Introduction

A. The Background of Camera-based Road Recognition

The driverless car concept embraces an emerging family of highly automated cognitive and control technologies, ultimately aimed at giving car users a full taxi-like experience without a human driver. Together with other developments, it is seen by many as the main technological advance in vehicles by 2020.

Road recognition is the premise of traffic perception and autonomous driving for the intelligent vehicle, and it is also studied in the fields of machine vision and intelligent navigation. Many systems can drive a vehicle autonomously using video cameras. THMR-V (Tsinghua Mobile Robot V) is a system that performs well on structured roads at speeds of up to 150 km/h.

However, many systems require actual road information or static road images to test their autonomous driving functions, and the on-line test process is expensive. Low-cost CMOS camera modules are therefore very suitable for the automotive industry, and our goal is to develop an extensible debugging platform for the research and development of machine vision and self-piloting systems based on the CMOS camera. With this system, real road footage can be used to test the road recognition function; both structured and unstructured roads can also be simulated, and developers can modify the sampling parameters of the virtual CMOS camera so as to debug and validate the key algorithms of the road recognition function.

Road recognition devices and a stable algorithm are the key to improving the stability of the intelligent vehicle. Here the Freescale 16-bit microcontroller "MC9S12DG128B" (abbreviated "S12") is used. The computing speed and memory capacity of the S12 are far lower than those of a PC, so a black-and-white CMOS camera with 640 x 480 resolution is adopted as the video sensor of the intelligent vehicle. Compared with other road sensors, the CMOS camera can acquire data quickly, which provides sufficient road information for the intelligent vehicle. The detailed parameters are listed in Table I. By collecting and processing this data, the vehicle can quickly determine the driving path. Figure 1 shows how the CMOS camera and its mounting frame are installed.

B. Hardware Design and Sampling Algorithm

The video sampling module on the intelligent vehicle consists of the CMOS camera, the LM1881 video sync separator and the ADC module of the S12.

Table I. CMOS sensor parameters
  Image sensor:           1/3-inch OmniVision CMOS
  Effective pixels:       resolution 640 x 480
  Horizontal definition:  32
  Viewing angle:          64 degrees
  Frequency:              50 Hz
  Power supply:           DC 9 V / 100 mA

Figure 1. Installation of the CMOS camera (callouts: CMOS camera, intelligent vehicle, field of view)

Since the CMOS camera requires a 9 V supply, which is higher than the vehicle battery voltage, the power converter MC34063 is used to make the camera work properly.

After the camera is installed, the video signal needs to be sampled. The LM1881 video sync separator is used here to extract timing information from the video signal for the intelligent vehicle's controller. When the timing signal from the LM1881 appears, the controller samples the video signal with its internal ADC module.

The sampled signal is processed by the road recognition algorithm inside the S12; the black line on the track surface is detected and analyzed for tracking, as shown in Fig. 2. The detection frequency of the CMOS camera is 50 Hz, and each field of the video signal is processed within 20 ms, which meets the requirements of high-speed running and real-time processing.

C. The Method of Simulation Debugging

Since the intelligent vehicle is a real-time system, the debugging methods available while it runs at high speed are limited, and potential problems are hard to locate. To solve this, a test and simulation system is built on the PC; with the intelligent vehicle's algorithms packaged in a dynamic library, the debugging process can be completed easily. The code is written in the C language, which has the following advantages:

1. C code can easily be adapted to many kinds of computers, which allows the software development of the microcontroller system to proceed in parallel with the hardware design.

2. Programs written in C can be ported directly even when the platform changes.

3. C code is easy to debug.

Thanks to the portability of C, once the algorithm passes the tests it can be used directly in the embedded development environment without modification, and the generated target code runs properly on the microcontroller. Figure 3 illustrates the simulation debugging method: the algorithm library is built with Microsoft Visual C++; after compilation, a dynamic link library is generated and then tested and validated in the simulation test environment. The stable algorithm can then be ported and finally runs on the microcontroller of the intelligent vehicle.

Figure 2. CMOS sampling flow chart (field-sync interrupt: initialize and reset the line counter; line-sync interrupt: increment the line counter and sample points along the line; when enough lines have been collected, one sampled field is complete)

D. The Implementation of the Simulation Test System

The simulation test system consists of two main functional modules: a real-time monitor module and an off-line 3D road simulation module. The two modules share the same algorithm library, so the algorithm can be tested both on-line and off-line.

1) Real-time Monitor Module

To examine the sampling results of the S12's internal analog-to-digital module and to debug the road recognition algorithm, a PC-based monitor program is built; it reads the video data from the COM port or a wireless module, converts it back into a two-dimensional gray-scale image and displays it on the screen, as shown in Fig. 4.

Figure 3. The method of simulation debugging (blocks: algorithm on the PC, dynamic link library, simulation test environment, embedded development environment, intelligent vehicle microcontroller)

Figure 4. Real-time monitoring and debugging module

The mechanism of the monitor module is as follows: when the communication configuration is completed, the program initializes and starts monitoring. When the intelligent vehicle powers up, it sends a setup packet to the monitor program on the PC through the wireless module.

Intelligent Vehicle Road Recognition Based on the CMOS Camera

Chu Liu*, Jie Chen**, Yifan Xu*** and Feng Luo****

* College of Automotive Engineering, Tongji University, Shanghai, China. Email: fiercelc@126.com

** College of Automotive Engineering, Tongji University, Shanghai, China. Email: panggebiao@hotmail.com

*** College of Automotive Engineering, Tongji University, Shanghai, China. Email: freeskyflying@gmail.com

**** College of Automotive Engineering, Tongji University, Shanghai, China. Email: luo_feng@mail.tongji.edu.cn

Abstract

Since intelligent auxiliary driving and navigation have received more and more attention in recent years, a Road Recognition System is developed for the Intelligent Vehicle with a CMOS camera as its road sensor, which provides solutions for the road recognition and automatic driving functions of the Intelligent Vehicle. The installation and sampling process of the CMOS camera is explained.

A PC-based monitor and test program for the Road Recognition Algorithm is built to guarantee the accuracy, rapidity and adaptability of the road recognition function. Once the algorithm passes the test, it can be compiled directly in the embedded development environment without modification and runs properly in the microcontroller of the Intelligent Vehicle.

A 3D road simulation system is also built on the PC, which easily creates all kinds of tracks for the emulation and measurement of the road recognition system; in addition, actual roads can also be emulated by the simulation system. Experiments prove that under such tests the Road Recognition Algorithm performs satisfactorily in road recognition and tracking, so the approach can advance research on the road recognition function of the Intelligent Vehicle and also provides support for the development of vision navigation and autonomous driving.

Keywords: CMOS Camera; Intelligent Vehicle; Road Recognition; Video Sampling; Road Simulation; Algorithm Test

I. SYSTEM INTRODUCTION

A. The Background of Road Recognition based on the Camera

The driverless car concept embraces an emerging family of highly automated cognitive and control technologies, ultimately aimed at a full taxi-like experience for car users without a human driver. Together with alternative propulsion, it is seen by some as the main technological advance in car technology by 2020.

Road Recognition is the premise of traffic perception and autonomous driving of the Intelligent Vehicle, and it is also studied in the fields of machine vision and intelligent navigation. Many systems have been developed that can drive autonomously using video cameras. THMR-V (Tsinghua Mobile Robot V) is a system that performs well at speeds of up to 150 km/h on structured roads [1].

However, many of these systems require actual road information or static road images to test their autonomous driving functions, and the on-line test process can be expensive. Since low-cost CMOS camera modules are ideal for many automotive applications, our goal was the development of an extendable debugging platform for the research and development of machine vision and self-piloting based on the CMOS camera. With this system, real road video can be used to test the road recognition function; both structured and unstructured roads can also be simulated [2], and developers are able to modify the sampling parameters of the virtual CMOS camera so as to debug and validate the important algorithms of the road recognition function.

Road Recognition devices and a stable algorithm are the key to improving the stability of the Intelligent Vehicle. Here the Freescale 16-bit microcontroller "MC9S12DG128B" (abbreviated "S12") is used. Since the computing speed and memory capacity of the S12 are much lower than those of a PC, a black-and-white CMOS camera with 640 x 480 resolution is taken as the video sensor for the Intelligent Vehicle. Compared with other road sensors, the CMOS camera possesses the ability of fast collecting and forward looking, which also provides enough road information for the Intelligent Vehicle. The detailed parameters are specified in Table I. With the data being collected and calculated, the vehicle is able to determine the track itself for fast driving. Fig. 1 shows the installation of the CMOS camera and the framework.

B. Hardware Design and Sampling Algorithm

The video sampling module of the intelligent vehicle is composed of the CMOS camera, the LM1881 video sync separator and the ADC module of the S12.

Since the CMOS camera requires a 9 V power supply, which is higher than the vehicle battery voltage, the DC-DC converter MC34063 is used to make the camera work properly.

TABLE I. CMOS SENSOR PARAMETERS
  Image sensor:           1/3-inch OmniVision CMOS
  Effective pixels:       resolution 640 x 480
  Horizontal definition:  32
  Viewing angle:          64 degrees
  Frequency:              50 Hz
  Power supply:           DC 9 V / 100 mA

After the installation of the camera, the sampling of the video signal is required. Here the LM1881 video sync separator is used, which extracts timing information from the video signal for the vehicle controller. Then the vehicle controller is able to sample the signal with its internal ADC module when the timing signal from LM1881 occurs.

The sampled signal is then processed by the road recognition algorithm inside the S12; the black line marker on the surface of the track is detected and analyzed, as shown in Fig. 2. The detection frequency of the CMOS camera is 50 Hz, and each field of the video signal is processed within 20 milliseconds, which satisfies the needs of high-speed running and real-time processing.
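To make the sampling flow concrete, the sketch below mirrors the field-sync / line-sync structure of Fig. 2 in plain C. The 40-line by 64-sample frame size and the adc_read() helper are assumptions standing in for the real S12 ADC access and LM1881 interrupt wiring, which the paper does not list.

```c
/* Minimal sketch of the sampling flow of Fig. 2; hardware access is stubbed. */
#include <stdio.h>

#define ROWS 40              /* lines kept per field (assumption)          */
#define COLS 64              /* samples taken along each line (assumption) */

static unsigned char frame[ROWS][COLS];  /* one sub-sampled video field   */
static int line_count;                   /* lines seen since field sync   */
static int frame_ready;                  /* set when a full field is stored */

/* Stand-in for one conversion of the S12 ADC on the video signal pin. */
static unsigned char adc_read(void)
{
    return 128; /* dummy gray level; real code reads the ADC result register */
}

/* Called on the vertical (field) sync pulse from the LM1881. */
static void on_field_sync(void)
{
    line_count = 0;
    frame_ready = 0;
}

/* Called on each horizontal (line) sync pulse from the LM1881. */
static void on_line_sync(void)
{
    int col;

    if (line_count >= ROWS)
        return;                       /* enough lines for this field */

    for (col = 0; col < COLS; ++col)  /* sample points along the line */
        frame[line_count][col] = adc_read();

    if (++line_count == ROWS)
        frame_ready = 1;              /* one field is complete */
}

int main(void)
{
    int line;

    on_field_sync();                  /* field sync starts a new frame */
    for (line = 0; line < 300; ++line)
        on_line_sync();               /* line syncs fill the buffer    */

    printf("frame ready: %d, first pixel: %u\n",
           frame_ready, (unsigned)frame[0][0]);
    return 0;
}
```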

Figure 1. Installation of CMOS Camera

C. The Method of Simulation Debugging

Since the Intelligent Vehicle is a real-time system, the debugging methods available while it runs at high speed are limited, and locating potential problems can be difficult. To solve this, a PC-based simulation test system is built; with all the algorithms of the Intelligent Vehicle implemented in its dynamic library, the debugging process can easily be carried out. The code is written in the C language, which has the following advantages:

(1) C code can easily be adapted to many kinds of computers, which allows the software development of the microcontroller system to proceed in parallel with the hardware design.

(2) Programs written in C can be ported directly when the platform changes.

(3) C code is easy to debug.

Thanks to the portability of C, once the algorithm passes the tests it can be compiled directly in the embedded development environment without modification, and the generated target code runs properly on the microcontroller. Fig. 3 explains the simulation debugging method: the algorithm library is implemented in Microsoft Visual C++ 6.0; after compilation, a dynamic link library is generated and referenced by the simulation test environment for test and validation. The stable algorithm can then be ported and finally runs on the microcontroller of the Intelligent Vehicle.
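The portability argument can be illustrated with a single algorithm source file that builds either as a Windows DLL for the PC test bench or as plain C for the vehicle target. The function name, the EXPORT macro and the placeholder body below are illustrative assumptions, not the paper's actual interface.

```c
/* One algorithm source file, two build targets (illustrative sketch). */
#ifdef _WIN32
  #define ALGO_API __declspec(dllexport)   /* built as a DLL for the PC test bench */
#else
  #define ALGO_API                          /* built as plain C for the S12 target  */
#endif

/* Recognize the track in one sampled field and return a steering command.
 * 'image' holds rows*cols gray levels in row-major order. */
ALGO_API int road_recognize(const unsigned char *image, int rows, int cols)
{
    int mid = cols / 2;
    (void)image; (void)rows;
    return mid;   /* placeholder: the real logic lives in the algorithm library */
}
```

On the PC such a file can be compiled into a DLL (for example with the /LD option of the Visual C++ compiler) and loaded by the simulation test environment; for the vehicle, the same file is compiled unchanged by the embedded toolchain for the S12.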

Figure 2. CMOS Sampling Flow Chart

D. The Implementation of the Simulation Test System

The Simulation Test System is made up of two main modules: a real-time monitor module and an off-line 3D road simulation module. These two modules share the same algorithm library, so the algorithm can be tested thoroughly both on-line and off-line.

1) Real-time Monitor Module

In order to know the sampling result from the S12 internal analog-to-digital module and to debug the Road Recognition Algorithm, a PC-based monitor program is built, which reads the video data from the serial COM port or a wireless module, converts it back into a two-dimensional gray-scale image, and displays it on the screen in real time, as shown in Fig. 4.
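A minimal sketch of this bytes-to-image step is given below; read_video_byte() stands in for the real COM-port or wireless read, the 40 x 64 frame size is an assumption, and the display step is replaced by writing a binary PGM file that any image viewer can open.

```c
/* Rebuild a gray-scale frame from a received byte stream and save it as PGM. */
#include <stdio.h>

#define ROWS 40              /* assumed frame size reported by the vehicle */
#define COLS 64

static unsigned char image[ROWS][COLS];

/* Stand-in for one byte received from the serial or wireless link. */
static int read_video_byte(void)
{
    static unsigned n;
    return (n++ * 7) & 0xFF;          /* dummy pattern instead of real data */
}

int main(void)
{
    FILE *f;
    int r, c;

    for (r = 0; r < ROWS; ++r)        /* rebuild the 2-D gray-scale image */
        for (c = 0; c < COLS; ++c)
            image[r][c] = (unsigned char)read_video_byte();

    f = fopen("frame.pgm", "wb");     /* P5 = binary gray-scale PGM */
    if (!f)
        return 1;
    fprintf(f, "P5\n%d %d\n255\n", COLS, ROWS);
    fwrite(image, 1, sizeof image, f);
    fclose(f);
    return 0;
}
```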

Figure 3. The Method of Simulation Debugging

The mechanism of the monitor module is as follows: when the communication configuration is completed, the program initializes and starts monitoring. The Intelligent Vehicle powers up and sends a setup packet via the wireless module to the monitor program on the PC, which includes the rows and columns of the video signal. The monitor program then sets its receive parameters to the reported values. Thus a communication channel is established between the Intelligent Vehicle and the PC. After that, the CMOS signal sent by the Intelligent Vehicle while running can be displayed on screen for monitoring.
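The handshake could look roughly like the sketch below; the packet layout (a marker byte followed by the row and column counts) is an assumption, since the paper does not describe the actual wire format.

```c
/* Illustrative parsing of the setup packet sent by the vehicle at power-up. */
#include <stdio.h>

struct setup_packet {
    unsigned char header;   /* assumed start-of-setup marker */
    unsigned char rows;     /* number of video lines per frame */
    unsigned char cols;     /* number of samples per line */
};

/* Apply the setup packet to the monitor's receive parameters. */
static int apply_setup(const struct setup_packet *p, int *rows, int *cols)
{
    if (p->header != 0x55)           /* assumed marker value */
        return -1;                   /* not a setup packet   */
    *rows = p->rows;
    *cols = p->cols;
    return 0;
}

int main(void)
{
    struct setup_packet pkt = { 0x55, 40, 64 };   /* example values */
    int rows = 0, cols = 0;

    if (apply_setup(&pkt, &rows, &cols) == 0)
        printf("receive parameters: %d rows x %d cols\n", rows, cols);
    return 0;
}
```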

The video data acquired can also be used to simulate the road signal and test the Intelligent Vehicle's Road Recognition Algorithm on the PC at the same time (see Fig. 4, Road Recognition Results), which improves development efficiency and guarantees the accuracy and adaptability of the algorithm. The time consumed by the algorithm is calculated and displayed accordingly, which guarantees that the algorithm is fast enough to process a large amount of data. Once the algorithm passes the tests, it can be compiled directly in the embedded development environment without modification and then runs properly in the S12. Experiments prove that the results generated by the Road Recognition Algorithm on the Intelligent Vehicle and the results from the PC simulation are identical, which also shows the advantages of simulation debugging of the Road Recognition Algorithm. Fig. 5 shows the flow chart of the monitoring and debugging module.
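Measuring the per-frame cost on the PC can be as simple as wrapping the library call with clock(), as in the sketch below; road_recognize() is a placeholder for the real library entry point, and the 20 ms budget follows from the 50 Hz video rate.

```c
/* Time one call of the recognition routine against the 20 ms field budget. */
#include <stdio.h>
#include <time.h>

#define ROWS 40
#define COLS 64

/* Placeholder for the algorithm library entry point. */
static int road_recognize(const unsigned char *img, int rows, int cols)
{
    (void)img; (void)rows; (void)cols;
    return cols / 2;                  /* dummy result */
}

int main(void)
{
    static unsigned char frame[ROWS * COLS];
    clock_t t0 = clock();
    int result = road_recognize(frame, ROWS, COLS);
    double ms = 1000.0 * (double)(clock() - t0) / CLOCKS_PER_SEC;

    printf("result=%d, time=%.3f ms (budget: 20 ms per field)\n", result, ms);
    return 0;
}
```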

Figure 4. Real-time Monitoring and Debugging Module

2) 3D Road Simulation Module

On-field testing is the best way to test the algorithm, but it involves two major overheads: first, the entire vehicle must be in workable condition, from the mechanical components to the electronics and from the sensors to the software; second, it requires multiple team members, logistics costs and a great deal of time [6]. Besides, in order to make the Intelligent Vehicle's Road Recognition Algorithm adapt to arbitrary tracks, an off-line 3D road simulation and measurement system is built on the PC. It uses the OpenGL graphics engine to generate 3D scenes; the tracks inside are virtually sampled and converted to simulate the video signal sampled by the S12 analog-to-digital module. The simulation flow chart is shown in Fig. 6.

When the simulation system starts, the simulated scene, including the Intelligent Vehicle, is created according to the pre-defined parameters. The system then enters a loop to display and position all scene targets. In this loop, the CMOS camera over the simulated Intelligent Vehicle is virtually sampled, and the sampled data is passed as parameters to the function call of the algorithm library.

Figure 5. Data Processing of Real-time Monitor Module

In the algorithm library, the path is recognized, and the steering gear and motor drive values are calculated. All of these results are returned by the library for the motion and display of the simulated Intelligent Vehicle.
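A skeleton of that loop is sketched below; the rendering and virtual sampling are stubbed out, and all function names are illustrative rather than the module's actual interfaces.

```c
/* Skeleton of the simulation loop: sample, call the library, move the vehicle. */
#include <stdio.h>

#define ROWS 40
#define COLS 64

struct control { int steering; int motor; };

/* Stub: in the real module this samples the virtual CMOS camera above the
 * simulated vehicle in the OpenGL scene. */
static void virtual_sample(unsigned char img[ROWS][COLS])
{
    (void)img;
}

/* Stub for the shared algorithm library: recognize the path and compute the
 * steering gear and motor drive values. */
static struct control algo_step(unsigned char img[ROWS][COLS])
{
    struct control c = { 0, 50 };
    (void)img;
    return c;
}

/* Stub: move and redraw the simulated vehicle using the returned values. */
static void update_scene(struct control c)
{
    printf("steering=%d motor=%d\n", c.steering, c.motor);
}

int main(void)
{
    unsigned char frame[ROWS][COLS];
    int step;

    for (step = 0; step < 3; ++step) {   /* a few iterations of the display loop */
        virtual_sample(frame);           /* virtual CMOS sampling                */
        update_scene(algo_step(frame));  /* library call, then vehicle motion    */
    }
    return 0;
}
```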

Road Recognition and other algorithms can then be tested with the virtually sampled signal. Any kind of race track, such as curves, crossings and slopes, can be simulated, and the virtual Intelligent Vehicle can be controlled on the software interface not only by the algorithm library but also by the keyboard. Various problems are thus detected and solved, which speeds up the testing of the Road Recognition Algorithm. Actual road surfaces can also be simulated by the system, which provides support for the development of vision navigation and autonomous driving. Fig. 7 shows the interface of the simulation test system.

The technique of 3D real-time simulation is now widely applied in the testing of Road Recognition Algorithms for the Intelligent Vehicle. Since the tracks in the simulated scene are easily customized, many algorithms, including the Road Recognition Algorithm, can be tested thoroughly and potential problems can be found even when actual roads are not available.

E. Road Recognition Algorithm

Figure 6. Data Processing of 3D Simulation Module

Road Recognition is the major task of autonomous vehicle guidance. Here an efficient road recognition algorithm for the intelligent vehicle, developed under the simulation test system, is explained. The track is made up of black feature lines on a white road surface, located in the middle of the road and parallel to the road boundaries. Many road conditions, such as straight roads, crossings and slopes, can be formed. The goal of Intelligent Vehicle road recognition is to detect the black feature lines and make the vehicle follow them at high speed.

We deal mainly with the two-dimensional gray-scale image sampled by the CMOS camera. A dynamic threshold is applied for the edge extraction, which avoids the impact of changes in road brightness. The gray-scale image is scanned from the bottom line to the top line; each line returns a black point, which represents the position of the track. The next black point lies within a range around the former point, so there is no need to compare every pixel in a line against the threshold. The flow chart is shown in Fig. 8.
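A minimal sketch of this line-scanning search is given below. The search window of +/-8 columns and the threshold rule (midpoint of the darkest and brightest pixels in the window) are assumptions, since the paper does not give exact values.

```c
/* Bottom-up scan for the black feature line with a windowed, dynamic threshold. */
#include <stdio.h>

#define ROWS 40
#define COLS 64
#define WINDOW 8            /* search half-width around the previous point */

/* Return the column of the darkest pixel on one row, searched only inside
 * [lo, hi]; a dynamic threshold rejects rows with no clear black marker. */
static int find_black_point(const unsigned char *row, int lo, int hi)
{
    int c, best = -1;
    unsigned char minv = 255, maxv = 0, thresh;

    for (c = lo; c <= hi; ++c) {
        if (row[c] < minv) { minv = row[c]; best = c; }
        if (row[c] > maxv) maxv = row[c];
    }
    thresh = (unsigned char)((minv + maxv) / 2);   /* assumed threshold rule */
    return (minv < thresh) ? best : -1;            /* -1: no line found      */
}

/* Scan the frame bottom-up; each row's result seeds the next row's window. */
static void recognize(unsigned char img[ROWS][COLS], int path[ROWS])
{
    int r, prev = COLS / 2;            /* start searching around the middle */

    for (r = ROWS - 1; r >= 0; --r) {
        int lo = prev - WINDOW, hi = prev + WINDOW;
        if (lo < 0) lo = 0;
        if (hi > COLS - 1) hi = COLS - 1;
        path[r] = find_black_point(img[r], lo, hi);
        if (path[r] >= 0)
            prev = path[r];            /* track the line upward */
    }
}

int main(void)
{
    unsigned char img[ROWS][COLS];
    int path[ROWS], r, c;

    for (r = 0; r < ROWS; ++r)          /* synthetic frame: white road ... */
        for (c = 0; c < COLS; ++c)
            img[r][c] = (c == COLS / 2 + r / 8) ? 20 : 200;  /* ... black line */

    recognize(img, path);
    printf("bottom row line position: %d\n", path[ROWS - 1]);
    return 0;
}
```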

The algorithm is tested both in the simulation test system and on the Intelligent Vehicle on a real road surface. There is no complex computation in the road recognition, yet the black feature lines are extracted rapidly and correctly.

II. RESULTS

A Road Recognition System is developed for the Intelligent Vehicle, which provides solutions for the road recognition and automatic driving functions of the vehicle. The PC-based road recognition monitor program, together with the 3D road simulation and test system, has been used in several cases to test the Road Recognition Algorithm. Experiments prove that under such tests the Road Recognition Algorithm performs satisfactorily in road recognition and tracking; the average speed of the Intelligent Vehicle reaches 2.3 m/s, as shown in Fig. 9.

III. PROSPECT

In 2002, the DARPA Grand Challenge competitions were announced. The competitions allowed international teams to compete in fully autonomous vehicle races over rough unpaved terrain and in a non-populated suburban setting. So far the competition has been successfully held for six years. In the competitions, besides radars and other sensors, video cameras are widely used. Although the final goal of safe door-to-door transportation in arbitrary environments has not yet been reached, fitting vehicles with video cameras for auxiliary driving is a clear trend.

Since our 3D simulation system can simulate all kinds of road surfaces and drive the virtual vehicle, the approach can advance research on the road recognition function of the Intelligent Vehicle and also provides support for the development of vision navigation and autonomous driving.

Figure 8. Road Recognition Algorithm for the Intelligent Vehicle

Figure 9. Intelligent Vehicle with CMOS Camera Installed
