Zero to Sixty at the Speed of Thought – AI in Connected Vehicles
The SAE (Society of Automotive Engineers) defines six levels of driving automation, from 0 to 5. A Level 0 car has basic safety features – automatic emergency braking, blind-spot warnings and the like. At this level, control is never taken away from the driver, and assistance is provided only at critical moments. A Level 5 vehicle can drive itself in all conditions and never expects a “driver” to take over. There is, effectively, no human driver. The car may not even have pedals or a steering wheel.
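For readers who prefer code to prose, the taxonomy can be captured as a simple data structure. The sketch below is purely illustrative – the level names paraphrase SAE J3016, and the enum and helper function are inventions for this article, not any real API:

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """Paraphrased summary of the SAE J3016 driving automation levels."""
    NO_AUTOMATION = 0           # driver does everything; car may warn or brake briefly
    DRIVER_ASSISTANCE = 1       # steering OR speed support, e.g. adaptive cruise control
    PARTIAL_AUTOMATION = 2      # steering AND speed support; driver must supervise
    CONDITIONAL_AUTOMATION = 3  # car drives itself but a human must take over on request
    HIGH_AUTOMATION = 4         # no takeover needed, within a limited operating domain
    FULL_AUTOMATION = 5         # drives anywhere, in all conditions; no pedals needed

def human_driver_required(level: SAELevel) -> bool:
    # Levels 0-3 still depend on a human, at least as a fallback.
    return level <= SAELevel.CONDITIONAL_AUTOMATION

print(human_driver_required(SAELevel.FULL_AUTOMATION))  # False
```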
Estimates put the Level 5 automotive AI market at between USD 12 and 16 billion by 2025.
Connected and Autonomous – Cars that Think
To understand the relationship between the automotive industry and AI, we need to understand what it means for vehicles to be connected and autonomous. A connected vehicle can communicate with other vehicles, road infrastructure, the cloud, or even with pedestrians through their electronic devices. It constantly collects and analyses data to build a picture of the world around it and to make real-time decisions that pilot the car and avoid hazards.
Many vehicles are already connected in a variety of ways. For example, navigation systems receive traffic information via cellular networks and stream it to the driver; a sudden impact can trigger an automatic alert to the police; and many cars have ‘black boxes’ installed by insurers to upload data about driving routes, distances driven and driving style.
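To make the insurer’s ‘black box’ concrete, here is a rough sketch of the kind of trip record such a device might upload. Every field name here is hypothetical – real devices differ by vendor – but the categories (route, distance, driving style) match what is described above:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class TripRecord:
    """Hypothetical telemetry record; field names are illustrative only."""
    vehicle_id: str
    start_latlon: tuple[float, float]   # where the trip began
    end_latlon: tuple[float, float]     # where the trip ended
    distance_km: float
    max_speed_kmh: float
    harsh_braking_events: int           # a common proxy for driving style

trip = TripRecord("VIN-1234", (51.5074, -0.1278), (51.7520, -1.2577),
                  92.4, 118.0, harsh_braking_events=2)

# Serialise the record for upload to the insurer's servers.
payload = json.dumps(asdict(trip))
print(payload)
```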
Putting AI in the Driver’s Seat
Imagine having access to a car that will drop you off at work, then drive itself home or go to serve someone else. It could pick up the kids from school without you worrying about who is driving your children. Imagine never having to worry about where you parked, because the car will drive itself to you. You could save a fortune in parking. It wouldn’t be a stretch to imagine living in central London and parking your car out of town overnight, so long as you didn’t mind waiting for your car to come back to you the next day.
There are other benefits too, and not all of them involve taking control away from the driver. For example, an AI-based car interior monitoring system developed by Bosch promises to identify fatigued or distracted drivers and warn them. Volkswagen has teamed up with Microsoft to explore the use of AI in predictive maintenance. In fact, a whole subset of vehicular AI is dedicated to human-machine interaction using gestures, eye tracking and the like. Systems that identify the driver and automatically adjust seat position, mirror angle and music selection also fall within the realm of automotive AI.
However, there is no denying that it is the promise of self-driving, autonomously controlled cars that has the industry most excited. The clear market leader is Tesla, which has by far the most real-world miles recorded using computer vision, stored in the cloud and processed in deep learning models. But others believe they can still claim a piece of the market: Mercedes-Benz is working with NVIDIA to create a whole new AI infrastructure for vehicles – one that will be constantly improved with software updates and whose primary function is to automate driving on regular routes.
It’s Going to be a Bumpy Ride
For a car to drive itself, it must perform a staggeringly large number of calculations, but simply having the ability to calculate is not enough. It also needs deep experience to draw from. Only when it can match current data against past experience can an AI perform at or near human level. The problem is that, in order to do that, it needs time and the scope to make mistakes. Unfortunately, machines making mistakes is just not something we as a society are equipped to handle.
In March 2018, a self-driving vehicle from Uber fatally struck a 49-year-old woman in an accident that was a perfect storm of mistakes and lack of attention. This one accident was enough to shelve Uber’s entire self-driving program for nearly a year, with the company mulling over whether it should continue the program at all.
In a far less grim but equally relevant incident, a viral video showed a newly introduced AI-based feature on a Tesla mistaking a Burger King sign for a stop sign. While some have argued that this is part of the learning process, and that once the system had the data it would not make the same mistake, it only reinforces the question – what happens when machines mess up?
Machine Failure, or Human Error?
Of course, accidents happen every day on the road – some due to driver neglect, some due to malfunctioning cars, and some due to sheer bad luck. However, the fact that an ostensibly “thinking” machine killed a pedestrian – the vehicle’s cameras saw the victim but flagged her as a false positive – did nothing to quell the public perception of AI as unpredictable.
AI will make mistakes. And ironically, those mistakes will mostly happen due to human error rather than anything to do with the “artificial” part of its intelligence. For example, in the case of the Uber accident, there was a safety driver who was supposed to take over, but they weren’t paying attention at that moment. Accidents like these are horrible, and they lead to thorny questions of accountability which need to be answered. The fact remains that in most such cases it is a lack of human foresight and action that causes them. But there remains the nagging idea that a human mind behind the wheel will have a level of empathy that a machine simply will not.
A Closed System on an Open Road
One core problem we face when trying to create AI that can drive is that driving is essentially an open system. This means there is an infinite number of permutations and combinations that can occur on the road. However, the number of situations you can train an AI for is limited. Yes, that limit is incredibly high, higher than the number of atoms in the universe, but it is still a finite number. The chance that the road will present something the training data hasn’t prepared the software for can never be eliminated.
The Signal’s Yellow
One way to feed more data into the system without putting actual cars on the road is to train the AI through simulations. That’s exactly what Alphabet is doing with Waymo, which recently announced that its AI has driven 10 billion miles in simulation. That’s over four hundred thousand times around the earth. Unfortunately, a simulation is still not the real thing, and many experts feel that a simulation simply cannot replicate the fringe incidents that happen on actual roads. After all, driving along the same stretch of simulated road over and over again can rack up the miles, but it doesn’t teach the AI much.
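The arithmetic behind that comparison is easy to verify (a back-of-the-envelope sketch; the Earth’s equatorial circumference of roughly 24,901 miles is the only assumed figure):

```python
# Sanity-check the "four hundred thousand times around the earth" claim.
simulated_miles = 10_000_000_000       # Waymo's reported 10 billion simulated miles
earth_circumference_miles = 24_901     # approximate equatorial circumference

laps = simulated_miles / earth_circumference_miles
print(f"{laps:,.0f} laps of the earth")   # ~401,590 - over four hundred thousand
```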
The problem is that, much as the physical car needs fuel, AI needs data. What we don’t yet understand is exactly how much data is needed, and what the right mix is between the simulated and the real world. How much data does a computer need to accurately identify whether it is seeing a person or a picture of a person painted on the side of a wall? How much data does it need to understand that a mother with a baby is less likely to jump a red light than a teenager? We don’t know yet. What we do know is that the system will never be perfect; there will never be zero accidents. What we need is to get to a point where the rate of accidents caused by autonomous vehicles is significantly lower than the rate caused by human drivers. But that assumes we will ever be comfortable with AI causing some road deaths, and currently the answer seems to be a clear ‘no, we aren’t’.
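As a toy illustration of what ‘significantly lower’ might mean, consider the comparison below. The human baseline is roughly in line with published US figures of about 1.1 road deaths per 100 million vehicle miles; the autonomous figure is a hypothetical placeholder, since no validated equivalent exists yet:

```python
# Toy comparison of fatality rates per 100 million miles driven.
human_rate = 1.1        # approx. US figure, deaths per 100 million vehicle miles
autonomous_rate = 0.3   # HYPOTHETICAL placeholder - no validated figure exists

improvement = (1 - autonomous_rate / human_rate) * 100
print(f"Hypothetically {improvement:.0f}% fewer deaths per mile")  # ~73%

# Even a large statistical improvement like this would not, by itself,
# answer whether society accepts any machine-caused road deaths at all.
```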
About the Author:
Richard is an Artificial Intelligence (AI) Influencer, Interviewer and Podcast Host. Founder of NeuralPath.io AI Advisory Practice and MKAI Expert Forums. A graduate of the MIT AI Strategy Course and a Wiley-published author on the Future of AI. Formerly with Oracle, Richard is on Open University Advisory Boards for ethical AI projects and a visiting lecturer in AI for Cranfield School of Management. https://boundlesspodcast.co.uk/
Published in Telematics Wire