
Safety Regulations for Self-Driving Cars in America

California has hosted public road testing for more than 180 self-driving vehicles from 27 companies. Carmakers Audi, BMW, Mercedes, Volvo, GM, Ford, Honda, Toyota, Fiat-Chrysler, and Tesla; technology companies Uber, Lyft, Google’s Waymo, and Apple; and software providers Mobileye (Intel), Nvidia, AutoX, and Drive.ai are all using the Golden State’s roadways to perfect their products. It’s getting crowded out there, and regulators have responded.

Safety is the priority. In a 2017 American Automobile Association (AAA) survey, 78% of respondents said they would not want to ride in a self-driving car due to safety concerns.

Last September the National Highway Traffic Safety Administration (NHTSA), an agency under the US Department of Transportation, introduced a framework called Automated Driving Systems 2.0: A Vision for Safety in an effort to make self-driving cars safe. The US House of Representatives had earlier passed the SELF DRIVE Act, which regulates autonomous cars from production to testing and distribution under NHTSA supervision.

The NHTSA also adopted the six-level driving automation classification scheme for self-driving vehicles proposed by the Society of Automotive Engineers (SAE) in 2016:

0 No Driving Automation
1 Driver Assistance
2 Partial Driving Automation
3 Conditional Driving Automation
4 High Driving Automation
5 Full Driving Automation

Level 1 and 2 self-driving cars use an Advanced Driver Assistance System (ADAS) to assist the human driver, while Level 3-5 vehicles rely on an Automated Driving System (ADS).
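For readers who think in code, the scheme boils down to a simple lookup. The snippet below is only an illustrative Python sketch of the level definitions and the ADAS/ADS split described above; the class and function names are invented for the example.

from enum import IntEnum

class SAELevel(IntEnum):
    """SAE J3016 driving automation levels, as adopted by the NHTSA."""
    NO_AUTOMATION = 0            # human driver does everything
    DRIVER_ASSISTANCE = 1        # one assist feature, e.g. adaptive cruise control
    PARTIAL_AUTOMATION = 2       # combined steering and speed assist, driver supervises
    CONDITIONAL_AUTOMATION = 3   # system drives, human must take over when asked
    HIGH_AUTOMATION = 4          # no human needed within a limited operating domain
    FULL_AUTOMATION = 5          # no human needed anywhere

def system_type(level: SAELevel) -> str:
    """Levels 1-2 rely on ADAS, levels 3-5 on an ADS; level 0 has neither."""
    if level == SAELevel.NO_AUTOMATION:
        return "none"
    return "ADAS" if level <= SAELevel.PARTIAL_AUTOMATION else "ADS"

print(system_type(SAELevel.HIGH_AUTOMATION))  # -> ADS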

Most autonomous driving systems on roads today are Level 1 or 2, while some self-driving test vehicles can reach Level 3. In its November 2017 road test in Arizona, Google’s Waymo made the leap to Level 4 by removing the human driver. (Keep in mind that a car capable of Level 4 self-driving on Arizona roads may not perform to even a Level 3 standard in, for example, India, where traffic is far more chaotic.)

The biggest challenges facing self-driving systems are sensing, perception, and control. Advances in SLAM (simultaneous localization and mapping) and computer vision are expected to address the first two in the near term, and control hardware is not the bottleneck either. The harder problem is decision-making around unpredictable human drivers: the data available today is still far from sufficient to cover every driving situation.
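As a toy illustration of the localization side of the sensing problem, here is a one-dimensional Kalman filter that fuses odometry with noisy position fixes. It is a minimal sketch, not production code, and every number in it is made up for the example.

# Toy 1-D Kalman filter: fuse noisy position fixes with odometry to localize a car.
# All values below are invented for illustration.

def kalman_step(x, p, motion, q, measurement, r):
    """One predict/update cycle.
    x, p           : current position estimate and its variance
    motion, q      : odometry displacement and its noise variance
    measurement, r : sensor position fix and its noise variance
    """
    # Predict: apply odometry, uncertainty grows.
    x, p = x + motion, p + q
    # Update: blend in the measurement, weighted by relative uncertainty.
    k = p / (p + r)                  # Kalman gain
    x = x + k * (measurement - x)
    p = (1.0 - k) * p
    return x, p

x, p = 0.0, 1.0                      # start at the origin with 1 m^2 uncertainty
for odo, fix in [(1.0, 1.2), (1.0, 1.9), (1.0, 3.1)]:
    x, p = kalman_step(x, p, motion=odo, q=0.05, measurement=fix, r=0.5)
    print(f"estimated position: {x:.2f} m (variance {p:.3f})")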

How much data is required to guarantee safety? The more the better.

Statistically speaking, the average American driver is involved in a fatal accident roughly once every one hundred million miles driven. So even though Waymo, for example, has logged more than 1.3 million miles of road testing, that is still nowhere near enough to produce a statistically meaningful estimate of how safe an autonomous car is.
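A rough back-of-the-envelope calculation shows why. The sketch below applies the statistical "rule of three" (after observing zero events in n trials, the 95% upper confidence bound on the event rate is roughly 3/n); the only figures taken from this article are the one-in-100-million-mile fatality rate and Waymo's 1.3 million logged miles.

# Back-of-the-envelope: how many fatality-free test miles are needed before we can
# claim, with 95% confidence, that an autonomous fleet is at least as safe as the
# roughly 1-fatality-per-100-million-miles human baseline?

human_fatality_rate = 1 / 100_000_000      # fatalities per mile (approximate US average)

# Rule of three: with zero observed fatalities over m miles, the 95% upper bound
# on the fleet's fatality rate is about 3 / m.
miles_needed = 3 / human_fatality_rate
print(f"Fatality-free miles needed: {miles_needed:,.0f}")        # ~300 million miles

logged_miles = 1_300_000                   # Waymo figure quoted in the article
print(f"Fraction covered so far: {logged_miles / miles_needed:.2%}")  # ~0.43%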

Other possibilities for improving self-driving vehicle safety include installing RFID tags on roadways or at critical intersections for vehicle-to-infrastructure communication, helping autonomous cars perceive their environment. Audi has also proposed using LED lights in the front windshield to notify pedestrians when a self-driving car “sees” them, so they know it is safe to cross the street.
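To make the vehicle-to-infrastructure idea concrete, here is a hypothetical sketch of how a perception stack might fold in hints broadcast by a roadside RFID tag. The tag fields, class names, and fusion step are all invented for illustration; no real V2I message format is implied.

# Hypothetical vehicle-to-infrastructure (V2I) flow: a roadside RFID tag at an
# intersection broadcasts static map hints that the car's perception stack can
# merge with its own sensor detections.

from dataclasses import dataclass

@dataclass
class IntersectionTag:
    tag_id: str
    lat: float
    lon: float
    lanes: int
    has_crosswalk: bool

def fuse_with_perception(detected_objects: list, tag: IntersectionTag) -> list:
    """Append infrastructure hints the onboard sensors may have missed."""
    hints = []
    if tag.has_crosswalk:
        hints.append({"type": "crosswalk", "source": "v2i", "position": (tag.lat, tag.lon)})
    return detected_objects + hints

tag = IntersectionTag("A-113", 37.39, -122.08, lanes=4, has_crosswalk=True)
print(fuse_with_perception([{"type": "vehicle", "source": "lidar"}], tag))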

Of course, an ideal safety solution would be for the many different autonomous driving research teams to simply share their data. Alas, the market is too competitive and the stakes too high to reasonably expect that to happen.


Contributing Analyst: Mos Zhang | Editor: Meghan Han, Michael Sarazen
