Before getting started, one thing should be made clear: there is no consensus yet on the best technology suite for making a vehicle autonomous. Different companies are taking approaches that overlap but have some fundamental differences. The technology will inevitably be optimized over time, but the debate is still in its infancy. This article provides a broad overview of the categories of technology innovation that will contribute to vehicle autonomy: vehicle hardware, vehicle software, and infrastructure.
In order for a vehicle to function autonomously, it must be able to robustly perform three functions: observe its environment, identify its location in the world, and follow directions. The average smartphone can already perform the last two fairly well. Sure, a driverless car will need to do these things more reliably than our phones do, but the basic technology exists and is fairly affordable. The more unique challenge is observing the driving environment, and this is where companies pursuing the technology are investing staggering sums of money.
Of course, observing an environment requires an array of sensors and a computer to process all of that data. The sensors used to see the environment include video cameras, lidar (light detection and ranging), radar, and ultrasonic sensors. All competitors in this arena use some combination of these sensors, though not necessarily all of them; perhaps the greatest distinction is that some companies (most notably Tesla) have opted to forgo lidar. Together, these sensors provide a layered and redundant view of the environment across a wide range of conditions.
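To make the "layered and redundant" idea concrete, here is a minimal sketch (not any company's actual pipeline) of fusing distance estimates for the same obstacle from several sensors. Each sensor reports a distance and a variance; inverse-variance weighting combines them, and redundancy means losing one sensor still leaves a usable reading. All numbers are illustrative.

```python
def fuse_estimates(readings):
    """readings: list of (distance_m, variance) from available sensors.
    Returns an inverse-variance-weighted combined distance estimate."""
    if not readings:
        raise ValueError("no sensor data available")
    weights = [1.0 / var for _, var in readings]
    fused = sum(d * w for (d, _), w in zip(readings, weights)) / sum(weights)
    return fused

# All three sensors roughly agree; radar (lowest variance) dominates.
all_sensors = [(50.2, 4.0),   # camera: noisy in low light
               (49.8, 0.25),  # radar: accurate range, works in fog
               (50.0, 0.5)]   # lidar: precise, degraded in heavy rain
print(round(fuse_estimates(all_sensors), 2))      # → 49.88

# Camera dropped out (e.g., blinded by glare): estimate barely changes.
print(round(fuse_estimates(all_sensors[1:]), 2))  # → 49.87
```

The point of the sketch is the graceful degradation: because the modalities overlap, no single sensor failure blinds the vehicle, which is exactly the argument made by companies that keep all three.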
All of these sensors produce reams of data. Not only must the data be processed quickly and accurately, but the device processing it must be capable of making decisions in the face of unprecedented situations (otherwise known as edge cases). All of this requires a supercomputer with immense processing horsepower and AI computing infrastructure. To provide a sense of scale, the Nvidia computer used by Tesla has been described as having processing power equivalent to 150 MacBook Pros.
Another very basic piece of hardware that may come later, but would be an enormous boon to the industry, is vehicle-to-vehicle communication.
Of equal importance, and greater complexity, is the software suite needed both to process the information coming from the vehicle’s sensors and to control the vehicle’s actions. It is impossible to predict or manually program every scenario a vehicle will encounter, so companies have turned to AI.
AI is playing a crucial role in advancing autonomous vehicle software to commercial viability. Not only is it essential to the operation of a driverless vehicle, but it learns from experience. The more miles an AI system sees, the more reliable and safer it becomes. What’s more, the lessons learned by one vehicle can be shared with others, so as more cars are deployed with autonomous vehicle technology, safety will improve at an accelerating rate.
There are three ways AI is being used to enhance the safety of autonomous vehicles. First and foremost, AI is being used to operate actual vehicles in real-world situations under close supervision. This is the most direct and effective way to improve AVs. The problem is that it is a very capital-intensive process: it requires buying or building vehicles and outfitting them with the necessary hardware, along with staff dedicated to operating the vehicles (when necessary) and studying their actions. This method is essential to developing and proving the safety of AVs, but it takes an enormous amount of time to accumulate the driving miles necessary to prove the technology.
As a result, companies have developed methods to supplement real-world driving experience. Second, companies like Tesla run AI in the background of vehicles operated by humans. In this case, the AI makes decisions as though it were driving the vehicle, without actually doing so; those decisions are then compared to the decisions made by the human driver. Third, virtually all companies are testing AI technology in virtual environments.
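The background-comparison approach (sometimes called "shadow mode") can be sketched in a few lines. This is a hypothetical illustration, not Tesla's actual system: the AI proposes a steering angle for each logged frame while the human actually drives, and frames where the two disagree beyond a threshold are flagged for engineers to review.

```python
def shadow_compare(frames, threshold_deg=5.0):
    """frames: list of dicts with 'ai_steer' and 'human_steer' in degrees.
    Returns (index, ai, human) tuples where the AI's proposed steering
    differed significantly from what the human actually did."""
    disagreements = []
    for i, f in enumerate(frames):
        if abs(f["ai_steer"] - f["human_steer"]) > threshold_deg:
            disagreements.append((i, f["ai_steer"], f["human_steer"]))
    return disagreements

log = [
    {"ai_steer": 0.5,  "human_steer": 0.0},   # straight road: agreement
    {"ai_steer": 2.0,  "human_steer": 14.0},  # human swerved around debris
    {"ai_steer": -3.0, "human_steer": -2.5},  # gentle curve: agreement
]
print(shadow_compare(log))  # → [(1, 2.0, 14.0)]
```

The flagged frames are the valuable ones: they identify situations the AI would have handled differently from a human, without the AI ever touching the controls.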
Driverless car infrastructure comes in two forms: car-to-car communication and car-to-infrastructure communication. Both will be driven by the federal government (versus the private sector for hardware and software innovations), but the former will come far sooner than the latter. Importantly, autonomous vehicles will not require these infrastructure innovations to function; they are being designed to operate in existing conditions. However, the infrastructure innovations that have been envisioned could substantially improve the safety, efficiency, and effectiveness of driverless cars.
The roll-out of car-to-car communication is not waiting for driverless cars. It has already been deployed for testing and could become standard in new cars within the next couple of years. It broadcasts a vehicle’s speed and position, along with actions like steering and braking, to nearby vehicles (within perhaps a hundred yards or so). Those vehicles can take that information and calculate the likelihood of a collision. In today’s vehicles, an impending collision would translate into a warning to the driver, or perhaps a partial override of steering and braking. In autonomous vehicles, that information would feed seamlessly into the overall operation of the vehicle.
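A back-of-envelope sketch of the collision calculation: given another vehicle's broadcast position and velocity, a receiver can compute the time of closest approach assuming both continue at constant velocity, and warn if the separation at that moment is dangerously small. The scenario and numbers below are invented for illustration.

```python
def closest_approach(p1, v1, p2, v2):
    """p*, v*: (x, y) position in meters and velocity in m/s.
    Returns (time_s, distance_m) at the moment of closest approach,
    with time clamped to >= 0 (the past is irrelevant)."""
    dp = (p2[0] - p1[0], p2[1] - p1[1])   # relative position
    dv = (v2[0] - v1[0], v2[1] - v1[1])   # relative velocity
    dv2 = dv[0] ** 2 + dv[1] ** 2
    # Minimize |dp + dv*t|: t = -(dp . dv) / |dv|^2, clamped at zero.
    t = 0.0 if dv2 == 0 else max(0.0, -(dp[0] * dv[0] + dp[1] * dv[1]) / dv2)
    dx, dy = dp[0] + dv[0] * t, dp[1] + dv[1] * t
    return t, (dx ** 2 + dy ** 2) ** 0.5

# Two cars approaching the same intersection at right angles, 40 m out.
t, d = closest_approach(p1=(0.0, -40.0), v1=(0.0, 15.0),   # northbound
                        p2=(-40.0, 0.0), v2=(15.0, 0.0))   # eastbound
print(round(t, 2), round(d, 2))  # → 2.67 0.0  (on course to collide)
```

Notice that neither car can see the other around a corner; the radio broadcast is what makes the computation possible, which is why V2V complements rather than duplicates the onboard sensors.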
Car-to-infrastructure technology may yet take many forms, but at its most basic level it involves the roadway communicating directly with the vehicle. This technology could tell a car exactly where it is on a road, where it is in the world, when to stop, when to go, how fast to go, and so on. Currently, driverless cars need to pick up these cues from their environment with expensive cameras and computers that have to see and interpret signs, striping, and signals. This infrastructure, in addition to the car-to-car communication mentioned above, would reduce a vehicle’s dependence on that hardware and software, and perhaps replace it altogether.
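As a final illustration, here is a hypothetical sketch of a car consuming an infrastructure broadcast instead of visually reading a traffic light. The message format, field names, and deceleration figure are all invented for this example (real deployments define message sets in standards such as SAE J2735); the point is only that a digital signal state removes the need to see and interpret the light.

```python
def advise(v2i_msg, speed_mps, distance_to_stop_line_m):
    """Decide how to respond to a broadcast traffic-signal state."""
    if v2i_msg["signal"] == "green":
        return "proceed"
    # Red or yellow: can we stop comfortably before the stop line?
    # Assume a comfortable deceleration of 3 m/s^2 (illustrative).
    stopping_distance = speed_mps ** 2 / (2 * 3.0)
    if stopping_distance <= distance_to_stop_line_m:
        return "brake"
    return "proceed_with_caution"

msg = {"signal": "red"}
print(advise(msg, speed_mps=12.0, distance_to_stop_line_m=40.0))  # → brake
```

No camera, no computer vision, no sign interpretation: the stop/go decision reduces to arithmetic on a received message, which is precisely the dependence-reducing effect described above.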