Software for cars has grown steadily more complex over the past few years. A modern vehicle can already carry a hundred million lines of code, and the new, fast-rising class of self-driving vehicles could require hundreds of millions more. These cars are not programmed as a fixed set of “if-then” rules; instead, they rely on machine learning and pattern recognition. Experts say it will take 10, maybe even 20 years before they can drive continuously without human assistance, and the end of human driving is not yet in view.
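To make that distinction concrete, the hypothetical sketch below contrasts a hand-written “if-then” braking rule with the same decision learned from labeled examples. The tiny perceptron, the feature choices and the thresholds are illustrative assumptions only, not how any production driving system is built.

```python
# Hard-coded rule: a fixed "if-then" decision written by a programmer.
def rule_based_brake(distance_m, speed_mps):
    # Brake if the time to impact is under 2 seconds (illustrative threshold).
    return distance_m / max(speed_mps, 0.1) < 2.0

# Learned rule: the same kind of decision, but fit from labeled examples
# instead of being written by hand. A toy perceptron stands in for the far
# larger pattern-recognition models real self-driving systems use.
def train_perceptron(examples, epochs=50, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), label in examples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Toy training data: (time-to-impact in seconds, closing speed in m/s) -> 1 means "brake".
examples = [((0.5, 30.0), 1), ((1.0, 20.0), 1), ((4.0, 10.0), 0), ((6.0, 5.0), 0)]
w, b = train_perceptron(examples)

print(rule_based_brake(distance_m=20.0, speed_mps=25.0))  # True: under 2 s to impact
print(w[0] * 1.5 + w[1] * 25.0 + b > 0)                   # learned decision for a 1.5 s gap
```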

Regulators’ top priority is to make sure these cars are safe to be on the roads, which could explain the stall in states like California, where some companies are already pushing for self-driving cars to hit the road by 2016. According to the state’s DMV, regulations for post-testing deployment of autonomous vehicles are still being developed, and the DMV wants to make sure self-driving vehicles are as safe as human drivers before the public gets access to them. Automobile fatalities have decreased nearly 25% since 2004, according to the National Highway Traffic Safety Administration (NHTSA). In 2013, 32,719 people died in car crashes, down from 33,782 in 2012; the NHTSA reports the 2013 figure as a historic low.

(Related: Even Apple is making a self-driving car)

Yet the goal is to build cars that drive themselves more safely than humans do, which is why the Department of Transportation and the NHTSA support these initiatives, according to a symposium statement by Mark R. Rosekind, administrator of the NHTSA.

He said that the potential to overcome human driving flaws (such as sleepiness, inattention or recklessness) makes the technology worth pursuing. Alongside driver safety, though, the NHTSA is also concerned with cybersecurity, which he said could threaten the technology.

Concerns about safety
Because the software in self-driving cars controls critical components like the steering wheel, gas pedal and brake pedal, the cars are at risk of being hacked, according to Raj Rajkumar, professor and co-director of the General Motors-Carnegie Mellon Autonomous Driving Collaborative Research Lab. There have already been incidents in which hackers reached a car’s software through its WiFi, 3G and 4G connections, such as when two hackers brought a Chrysler Jeep to a stop in the middle of a highway. Rajkumar said that companies like Tesla, GM and anyone else working on self-driving cars need to be “sensitive to security vulnerabilities.”
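One common mitigation for this kind of remote entry point is to authenticate control commands before the vehicle software acts on them. The sketch below is a minimal, hypothetical illustration using an HMAC tag on each command; the key handling, message format and function names are assumptions for illustration, not any manufacturer’s actual design.

```python
import hmac
import hashlib
from typing import Optional

# Hypothetical shared secret provisioned into the vehicle. In a real system this
# would live in a hardware security module, never in source code.
VEHICLE_KEY = b"example-key-not-a-real-secret"

def sign_command(command: bytes, key: bytes = VEHICLE_KEY) -> bytes:
    """Attach an HMAC-SHA256 tag so the receiver can verify the sender knew the key."""
    tag = hmac.new(key, command, hashlib.sha256).hexdigest().encode()
    return command + b"|" + tag

def verify_command(message: bytes, key: bytes = VEHICLE_KEY) -> Optional[bytes]:
    """Return the command if its tag checks out, otherwise None (drop the message)."""
    command, _, tag = message.rpartition(b"|")
    expected = hmac.new(key, command, hashlib.sha256).hexdigest().encode()
    # compare_digest avoids leaking timing information while checking the tag.
    return command if hmac.compare_digest(tag, expected) else None

if __name__ == "__main__":
    signed = sign_command(b"BRAKE:soft")
    print(verify_command(signed))                    # b'BRAKE:soft' -- accepted
    print(verify_command(b"BRAKE:hard|forged-tag"))  # None -- rejected
```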

GM is covering potential cyber threats with a layered approach to in-vehicle security: a series of overlapping defenses, possibly including intrusion-detection systems, malware scanners and other tools, so that a gap in one layer is covered by another. The company is also designing many vehicle systems so they can be updated with new security measures as threats evolve.
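As one example of what an intrusion-detection layer might look for, the sketch below flags a control message that suddenly arrives far faster than usual, since many in-vehicle bus attacks work by flooding forged frames. The thresholds, message names and class design are assumptions for illustration, not GM’s implementation.

```python
import time
from collections import deque

class RateAnomalyDetector:
    """Flag a message type that arrives much faster than its expected rate."""

    def __init__(self, max_per_second=20.0, window_seconds=1.0):
        self.max_per_second = max_per_second    # illustrative threshold
        self.window_seconds = window_seconds    # sliding-window length
        self.timestamps = {}                    # message id -> deque of arrival times

    def observe(self, message_id, now=None):
        """Record one message; return True if its recent rate looks anomalous."""
        now = time.monotonic() if now is None else now
        window = self.timestamps.setdefault(message_id, deque())
        window.append(now)
        # Drop timestamps that have fallen out of the sliding window.
        while window and now - window[0] > self.window_seconds:
            window.popleft()
        rate = len(window) / self.window_seconds
        return rate > self.max_per_second

detector = RateAnomalyDetector(max_per_second=20)
# Simulate a flood of forged brake commands arriving within half a second.
alerts = [detector.observe("BRAKE_CMD", now=0.01 * i) for i in range(50)]
print(any(alerts))  # True -- the flood exceeds the expected message rate
```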

“If there is a security loophole that becomes an entry point for outsiders to get into the car, remotely and wirelessly, then that poses a big problem,” said Rajkumar.