
Researchers Working to Improve Autonomous Vehicle Driving Vision in the Rain

By John P. Desmond, AI Trends Editor

To help autonomous cars navigate safely in the rain and other inclement weather, researchers are looking into a new type of radar.

Self-driving vehicles can have trouble “seeing” in the rain or fog, with the car’s sensors potentially blocked by snow, ice or torrential downpours, and their ability to “read” road signs and road markings impaired.

Many autonomous vehicles rely on lidar technology, which works by bouncing laser beams off surrounding objects to give a high-resolution 3D picture on a clear day, but does not do so well in fog, dust, rain or snow, according to a recent report from abc10 of Sacramento, Calif.

“A lot of automatic vehicles these days are using lidar, and these are basically lasers that shoot out and keep rotating to create points for a particular object,” stated Kshitiz Bansal, a computer science and engineering Ph.D. student at the University of California San Diego, in an interview.

The university’s autonomous driving research team is working on a new way to improve the imaging capability of existing radar sensors, so they more accurately predict the shape and size of objects in an autonomous car’s view.

Dinesh Bharadia, professor of electrical and computer engineering, UC San Diego Jacobs School of Engineering

“It’s a lidar-like radar,” stated Dinesh Bharadia, a professor of electrical and computer engineering at the UC San Diego Jacobs School of Engineering, adding that it is an inexpensive approach. “Fusing lidar and radar can also be done with our techniques, but radars are cheap. This way, we don’t need to use expensive lidars.”

The team places two radar sensors on the hood of the car, enabling the system to see more space and detail than a single radar sensor. The team ran tests comparing the system’s performance against a lidar-based system on clear days and nights, and then in simulated fog. The result was that the radar plus lidar system performed better than the lidar-alone system.
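The report does not detail how the two sensors’ outputs are combined. As a minimal sketch of the idea, detections from each radar can be shifted into a common vehicle frame and merged, so the pair covers more space than either sensor alone; the mounting offsets and point format below are hypothetical, not the team’s actual pipeline:

```python
import numpy as np

# Hypothetical hood-mounting positions of the two radars in the
# vehicle frame (x forward, y left), in meters.
LEFT_RADAR_OFFSET = np.array([2.0, 0.6])
RIGHT_RADAR_OFFSET = np.array([2.0, -0.6])

def fuse_radar_point_clouds(left_pts, right_pts):
    """Merge 2D detections from two radars into one vehicle-frame cloud.

    left_pts, right_pts: (N, 2) arrays of (x, y) points, each in its
    own sensor's frame. Translating by the mounting offset puts both
    clouds in the vehicle frame; concatenating yields a denser view.
    """
    left_vehicle = left_pts + LEFT_RADAR_OFFSET
    right_vehicle = right_pts + RIGHT_RADAR_OFFSET
    return np.vstack([left_vehicle, right_vehicle])
```

A production system would also correct for each sensor’s orientation and for timing skew between the two units; the translation-only version is just the simplest illustration.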

“So, for example, a car that has lidar, if it’s going in an environment where there is a lot of fog, it won’t be able to see anything through that fog,” Bansal stated. “Our radar can pass through these bad weather conditions and can even see through fog or snow.”

The team uses millimeter-wave radar, a version of radar that uses short-wavelength electromagnetic waves to detect the range, velocity and angle of objects.
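The article stops at that description, but the arithmetic behind those measurements is straightforward for the frequency-modulated continuous-wave (FMCW) radars common in cars: range falls out of the beat frequency between the transmitted and received chirp, and velocity out of the phase shift between consecutive chirps. A back-of-the-envelope sketch, with assumed (not reported) chirp parameters:

```python
import math

C = 3.0e8             # speed of light, m/s
F_CARRIER = 77e9      # typical automotive radar carrier, Hz
CHIRP_SLOPE = 30e12   # assumed chirp slope, Hz/s (30 MHz per microsecond)
CHIRP_PERIOD = 60e-6  # assumed spacing between chirps, s

def range_from_beat(f_beat):
    # Round-trip delay is t = f_beat / slope, so range R = c * t / 2.
    return C * f_beat / (2 * CHIRP_SLOPE)

def velocity_from_phase(delta_phi):
    # Chirp-to-chirp phase change: delta_phi = 4 * pi * v * Tc / lambda.
    wavelength = C / F_CARRIER
    return wavelength * delta_phi / (4 * math.pi * CHIRP_PERIOD)

print(range_from_beat(1e6))        # 1 MHz beat -> 5.0 m
print(velocity_from_phase(0.1))    # 0.1 rad shift -> ~0.52 m/s
```

(Angle comes from comparing phase across multiple receive antennas, which needs more machinery than fits in a sketch.)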

20 Partners Working on AI-SEE in Europe to Apply AI to Vehicle Vision

Enhanced autonomous vehicle vision is also the goal of AI-SEE, a European project in which startup Algolux is cooperating with 20 partners over a period of three years to work toward Level 4 autonomy for mass-market vehicles. Founded in 2014, Algolux is headquartered in Montreal and has raised $31.8 million to date, according to Crunchbase.

The intent is to build a novel, robust sensor system, supported by AI-enhanced vehicle vision for low-visibility conditions, to enable safe travel in every relevant weather and lighting condition, such as snow, heavy rain or fog, according to a recent account from AutoMobilSport.

The Algolux technology employs a multisensor data-fusion approach, in which the acquired sensor data will be fused and simulated by means of sophisticated AI algorithms tailored to adverse-weather perception needs. Algolux plans to provide technology and domain expertise in deep learning AI algorithms, fusion of data from distinct sensor types, long-range stereo sensing, and radar signal processing.

“Algolux is one of the few companies in the world that is well versed in the end-to-end deep neural networks that are needed to decouple the underlying hardware from our application,” stated Dr. Werner Ritter, consortium lead at Mercedes-Benz AG. “This, along with the company’s in-depth knowledge of applying their networks for robust perception in bad weather, directly supports our application domain in AI-SEE.”

The project will be co-funded by the National Research Council of Canada Industrial Research Assistance Program (NRC IRAP), the Austrian Research Promotion Agency (FFG), Business Finland, and the German Federal Ministry of Education and Research (BMBF) under the PENTA EURIPIDES label endorsed by EUREKA.

Nvidia Researching Stationary Objects in its Driving Lab

The ability of an autonomous car to detect what is in motion around it is crucial, no matter the weather conditions, and its ability to know which items around it are stationary is also important, suggests a recent blog post in the Drive Lab series from Nvidia, an engineering look at individual autonomous vehicle challenges. Nvidia is a chipmaker best known for its graphics processing units, widely used to develop and deploy applications employing AI techniques.

The Nvidia lab is working on using AI to address the shortcomings of radar signal processing in distinguishing moving and stationary objects, with the aim of improving autonomous vehicle perception.

Neda Cvijetic, autonomous vehicles and computer vision research, Nvidia

“We trained a DNN [deep neural network] to detect moving and stationary objects, as well as accurately distinguish between different types of stationary obstacles, using data from radar sensors,” stated Neda Cvijetic, who works on autonomous vehicles and computer vision for Nvidia and wrote the blog post. She has held the position for about four years and previously worked as a systems architect for Tesla’s Autopilot software.

Ordinary radar processing bounces radar signals off objects in the environment and analyzes the strength and density of the reflections that come back. If a sufficiently strong and dense cluster of reflections comes back, classical radar processing can determine that it is likely some kind of large object. If that cluster also happens to be moving over time, then the object is probably a car, the post explains.

While this approach can work well for inferring a moving vehicle, the same may not be true for a stationary one, which produces a dense cluster of reflections that does not move. That cluster could be a railing, a broken-down car, a highway overpass or some other object: “The approach often has no way of distinguishing which,” the author states.
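To make the limitation concrete, here is a toy version of that classical pipeline (the clustering parameters and thresholds are invented for illustration). Doppler lets it call a dense, strong cluster a moving vehicle, but a stationary cluster gets only a generic, ambiguous label:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def classify_radar_clusters(points, strengths, dopplers,
                            min_points=8, min_strength=0.5,
                            moving_speed=1.0):
    """Toy classical classifier for one frame of radar reflections.

    points:    (N, 2) x/y positions of reflections, meters
    strengths: (N,) reflection strengths, arbitrary units
    dopplers:  (N,) radial velocities, m/s
    """
    labels = DBSCAN(eps=1.5, min_samples=3).fit_predict(points)
    results = []
    for cluster_id in set(labels) - {-1}:   # -1 marks DBSCAN noise
        mask = labels == cluster_id
        dense = mask.sum() >= min_points
        strong = strengths[mask].mean() >= min_strength
        if not (dense and strong):
            continue  # too weak or sparse to call a large object
        if abs(dopplers[mask].mean()) >= moving_speed:
            results.append("moving vehicle")
        else:
            # Railing? Broken-down car? Overpass? The heuristic has
            # no way to tell; this is the gap the DNN is meant to fill.
            results.append("stationary object (ambiguous)")
    return results
```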

A deep neural network is an artificial neural network with multiple layers between the input and output layers, according to Wikipedia. The Nvidia team trained their DNN to detect moving and stationary objects, as well as to distinguish between different types of stationary objects, using data from radar sensors.
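Nvidia’s post does not publish the network itself. As a hedged sketch of what such a model’s shape could be, a small convolutional network can map a bird’s-eye-view grid of accumulated radar reflections to scores over obstacle classes that separate moving from stationary types; the architecture and class list below are hypothetical:

```python
import torch
from torch import nn

# Hypothetical obstacle classes a radar DNN might separate.
CLASSES = ["moving vehicle", "stationary vehicle", "railing", "overpass"]

class RadarDNN(nn.Module):
    """Tiny CNN over a single-channel bird's-eye-view radar grid."""

    def __init__(self, num_classes=len(CLASSES)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # pool to one value per channel
            nn.Flatten(),
            nn.Linear(32, num_classes),
        )

    def forward(self, grid):
        # grid: (batch, 1, H, W) accumulated radar reflections
        return self.net(grid)

model = RadarDNN()
scores = model(torch.rand(1, 1, 64, 64))   # one random 64x64 radar patch
print(CLASSES[scores.argmax(dim=1).item()])
```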


Training the DNN first required overcoming radar data sparsity problems. Since radar reflections can be quite sparse, it’s practically infeasible for humans to visually identify and label vehicles from radar data alone. However, lidar data, which can create a 3D image of surrounding objects using laser pulses, can supplement the radar data. “In this way, the ability of a human labeler to visually identify and label cars from lidar data is effectively transferred into the radar domain,” the author states.
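In sketch form, that transfer amounts to propagating labels drawn on the dense lidar view onto whatever radar points fall inside them, so the sparse radar data inherits annotations a human could not have produced from radar alone. The box format and helper below are illustrative assumptions, not Nvidia’s tooling:

```python
import numpy as np

def transfer_labels(radar_pts, lidar_boxes):
    """Assign lidar-derived box labels to sparse radar points.

    radar_pts:   (N, 2) x/y radar detections in the vehicle frame
    lidar_boxes: list of (x_min, y_min, x_max, y_max, label) tuples,
                 drawn by a human on the dense lidar view
    Returns one label per radar point ('unlabeled' if in no box).
    """
    labels = np.array(["unlabeled"] * len(radar_pts), dtype=object)
    for x_min, y_min, x_max, y_max, label in lidar_boxes:
        inside = ((radar_pts[:, 0] >= x_min) & (radar_pts[:, 0] <= x_max) &
                  (radar_pts[:, 1] >= y_min) & (radar_pts[:, 1] <= y_max))
        labels[inside] = label
    return labels
```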

The approach leads to improved results. “With this additional information, the radar DNN is able to distinguish between different types of obstacles, even if they’re stationary, increase confidence of true positive detections, and reduce false positive detections,” the author stated.

Many stakeholders involved in fielding safe autonomous vehicles find themselves working on similar problems from their individual vantage points. Some of those efforts are likely to result in relevant software being released as open source, in service of a shared interest: continuously improving autonomous driving systems.

Read the source articles and information from abc10 of Sacramento, Calif., from AutoMobilSport and in a blog post in the Drive Lab series from Nvidia.
