Tesla is showing off its self-driving car technology to investors today, at the company’s headquarters in Palo Alto, California — amid skepticism that Tesla is ready for fully autonomous vehicles.
Bart Selman is a professor of computer science at Cornell University and an expert on artificial intelligence safety issues. He says it still isn't clear whether there will be safe, fully autonomous vehicles within three to five years, and that Tesla faces an extra challenge, given its reliance on computer vision.
“Self-driving car technology has made incredible advances over the last five years. Vastly improved vision technology, combined with inputs from other sensors, is getting us close to full autonomy. However, we don't yet know whether we can reach the level of safety of a human driver within the next three to five years.
“Alert human drivers are surprisingly good at interpreting unexpected events and generally can take the necessary preventive steps to avoid accidents. However, because current autonomous driving systems lack a broader understanding of their environment, it is difficult for those systems to take similar preventive measures. The question therefore is whether we can develop systems that gracefully reduce risks when faced with unexpected events.
“In Tesla's case, a significant reliance on computer vision introduces an extra level of difficulty. It is well-known that current computer vision systems can fail in quite unpredictable ways. Having multiple sensors, ideally including Lidar, is therefore critical. Overall, the challenge remains how to resolve possibly conflicting information from multiple sensors, as well as how to gracefully handle unexpected situations without needing human input.
“At a higher level, we may be faced with a situation where self-driving car technology will be statistically significantly safer overall than human drivers, in part because many current accidents are due to moments of human inattention. On the other hand, self-driving cars may lead to certain types of accidents that would not occur with a human driver in control. The question then becomes whether society will choose the greater overall safety level over the risk of having certain types of ‘non-human’ accidents occur with some regularity.”
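Selman's point about resolving conflicting sensor information can be made concrete with a toy example. The sketch below is purely illustrative and not Tesla's (or anyone's) actual pipeline: it fuses hypothetical distance estimates from a camera, radar, and Lidar using a confidence-weighted average, and flags a conflict, prompting a graceful fallback such as slowing down, when the sensors disagree by more than a chosen threshold. All sensor names, confidence weights, and the threshold are invented for illustration.

```python
# Illustrative confidence-weighted sensor fusion with a conflict check.
# All values and the fusion scheme are hypothetical, for illustration only.

def fuse_estimates(readings, conflict_threshold=2.0):
    """Fuse per-sensor distance estimates (meters) to an obstacle.

    `readings` maps sensor name -> (estimate, confidence in (0, 1]).
    Returns (fused_estimate, conflict_flag). A True conflict_flag means
    the sensors disagree too much to trust the fused value, and the
    system should degrade gracefully instead (e.g. reduce speed).
    """
    estimates = [est for est, _ in readings.values()]
    # Conflict: the spread between sensors exceeds the threshold.
    conflict = max(estimates) - min(estimates) > conflict_threshold

    # Confidence-weighted average of the individual estimates.
    total_conf = sum(conf for _, conf in readings.values())
    fused = sum(est * conf for est, conf in readings.values()) / total_conf
    return fused, conflict


readings = {
    "camera": (24.0, 0.6),  # vision can fail unpredictably -> lower weight
    "radar":  (25.0, 0.8),
    "lidar":  (25.5, 0.9),
}
fused, conflict = fuse_estimates(readings)
```

Here the three sensors roughly agree, so no conflict is flagged and the fused estimate sits near the high-confidence Lidar reading. If the camera instead reported, say, 10 meters, the spread would exceed the threshold and the conflict flag would tell the planner not to trust the average, which is the "gracefully reduce risks" behavior Selman describes.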