Self-driving cars are a dream for many people who face traffic on a morning commute. However, autonomous vehicles are still quite a way off. The most recent iteration of “AI-assisted” driving does little to inspire confidence in the future of motor vehicles. The American Automobile Association drove 4,000 miles (6,400 km) in vehicles with assisted-driving systems and encountered a problem roughly every eight miles. The report goes on to describe several issues from the driver’s perspective, including the car drifting too close to other vehicles and the guardrail, and automated steering that disengaged without adequately warning the driver.

While these are problems in the current generation of self-driving vehicles, manufacturers tend to write them off as teething problems. It’s possible to interpret that dismissal as disregard. Are they just minor problems? Or are these issues making the long road to self-driving cars even longer? In this article, we’ll delve into some of the existing problems with self-driving cars and see how they happen.

It Starts with AI

Artificial intelligence is at the heart of the self-driving vehicle equation, and consequently, the source of most of its problems. Fast Company notes that we never know exactly what AI is thinking. AI and machine learning algorithms take in information, process it, and produce an output. Whether that output is right or wrong shapes the next iteration of learning. It’s a more efficient way of teaching a monkey to do tricks, except the monkey is made out of bits and bytes. The trouble is that AI can’t possibly foresee every outcome that may happen in even a single second of driving. It needs to update its idea of the rules of the road continually.
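The learn-from-feedback loop described above can be sketched in a few lines. This is a deliberately tiny, hypothetical illustration (a simple perceptron-style learner with made-up “brake or not” data), not how any real driving system is built:

```python
# Minimal sketch of the feedback loop: make a decision, compare it to the
# correct answer, and adjust. All names and data here are illustrative.

def predict(weights, features):
    """Weighted sum thresholded into a yes/no decision (e.g. 'brake?')."""
    score = sum(w * f for w, f in zip(weights, features))
    return 1 if score > 0 else 0

def update(weights, features, correct_label, lr=0.1):
    """If the output was wrong, nudge each weight toward the correct answer."""
    error = correct_label - predict(weights, features)
    return [w + lr * error * f for w, f in zip(weights, features)]

# Each (features, label) pair stands in for one observed driving moment.
training_data = [
    ([1.0, 0.2], 1),   # obstacle close, low speed  -> should brake
    ([0.1, 0.9], 0),   # obstacle far,  high speed  -> no brake
]

weights = [0.0, 0.0]
for _ in range(20):                      # repeated passes = continued learning
    for features, label in training_data:
        weights = update(weights, features, label)
```

The point of the sketch is the limitation the article describes: the system only learns from situations it has already seen, so a scenario outside its training data gets whatever answer the existing weights happen to produce.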

Humans are adaptable and can make decisions in the blink of an eye. As fast as AI is, it still relies on precalculated decision paths reinforced by continued learning. It’s unlikely that it will be able to swerve away from a terrible decision by another driver fast enough to save the lives of its passengers and the other driver.

Semiconductors Are Another Issue

When the AI systems do work, they depend on input from external systems. Sensors built on semiconductors operate alongside the AI system to feed it information from the outside world, and it uses those inputs to make decisions. However, while the sensors work reliably in a test laboratory, they can struggle in real-world weather conditions. Additionally, testing the semiconductors before installing them into the sensors is a time-consuming task.
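One common defensive pattern for the sensor-to-AI pipeline described above is to sanity-check readings before the driving logic trusts them. The sketch below is a hypothetical example of that idea (the function names, ranges, and fallback policy are invented for illustration):

```python
# Illustrative sketch: discard implausible sensor readings and degrade
# safely when too few healthy sensors agree. Names/values are hypothetical.

SENSOR_RANGE_M = (0.0, 200.0)   # plausible distance readings, in meters

def fuse_distance(readings):
    """Keep only in-range readings; require at least two before acting."""
    lo, hi = SENSOR_RANGE_M
    valid = [r for r in readings if lo <= r <= hi]
    if len(valid) < 2:          # too few healthy sensors -> degrade safely
        return None             # caller should slow down / alert the driver
    return min(valid)           # be conservative: assume the nearest obstacle

print(fuse_distance([12.3, 11.9, -1.0]))  # bad reading discarded -> 11.9
print(fuse_distance([350.0, -1.0]))       # all implausible -> None
```

The design choice worth noting is the `None` branch: because each of these sensors is mission-critical, a system that cannot verify its inputs should fall back to a safe behavior rather than act on garbage data.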

Unlike other semiconductor-based systems, each of these sensors is mission-critical. A failure in a semiconductor could lead to accidents and possibly traffic fatalities. As the New York Times reported in 2018, a self-driving car struck and killed a pedestrian. Around other moving vehicles, a failed sensor could send the car off the road, with deadly results. Since regulation of self-driving cars is still lacking, the consequences would be disastrous both for society and for the cause of self-driving cars.

Long-Term Testing Is Necessary

With each new iteration of vehicles, we get better at dealing with these issues. However, as we resolve one of them, another shows up. Recently, manufacturers’ vehicles have started displaying anomalous behavior after long-term use. The solution would be to make long-term testing a crucial part of the vehicle’s maintenance cycle. Even so, it’s a troubling trend: the further we progress down the road to self-driving cars, the longer the road ahead seems to get.