DanLevy.net

When AI Fails & the Crashing Robotic Cars

Robotic Cars: More or Fewer Crashes?

Google’s self-driving cars are reportedly in twice as many accidents as human drivers. If you think this is just buggy new tech, far too complex from the get-go - well, you’re only partially right. An important detail I should share: as of December 2015, virtually none of the accidents were the robots’ fault.

The accidents are caused by human drivers unfamiliar with robotic drivers. Furthermore, Google has programmed the cars to obey the law in absolute terms: they never speed, and they have difficulty merging in dense or fast freeway traffic. This opens up a host of legal and ethical questions (see, for example, the trolley problem).

I understand Google’s approach, especially as a way to minimize liability: always follow the rules. It follows logically that you can hardly be at fault if you always observe the law.

There would be massive liability if an accident happened because of intentionally designed ‘flexibility’ around the laws.

Don’t let the future escape us

The future will still arrive, even if the robots drive like octogenarians.

Perhaps a simple fix for now would be to use bright red flashing LEDs (think school buses) to warn human drivers they are about to rear-end an innocent robot.

I would be more comfortable with a car that had tiers of observance and rule adherence. To my mind, this would be much closer to how humans actually drive.

Imagine three tiers of system perception, as follows (decision and other layers omitted for simplicity):

  1. base: follows laws with annoying precision
  2. local: flexible adjustments based on current traffic - allowing a highway merge if, say, 10+ MPH over the limit is needed. Conversely, if traffic is simply moving too fast, the car should be smart enough to pull over to avoid being a nuisance to other drivers.
  3. 360: calculates ANY potential extreme collision risk and avoidance measures, such as driving on the shoulder.

This would likely require a smart balancing act: say tier 1 detects an imminent accident that cannot be avoided while following the law; the car would then shift all processing power into tier 3, hopefully finding a creative way to avoid harm.
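The tier-switching logic described above could be sketched as a simple policy function. This is only an illustration of the idea, not any real autonomous-driving API; the tier names, the `choose_tier` function, and the 10 MPH threshold are all assumptions made up for this example:

```python
from enum import Enum, auto

class Tier(Enum):
    BASE = auto()      # tier 1: strict legal compliance
    LOCAL = auto()     # tier 2: contextual flexibility (merging, pulling over)
    FULL_360 = auto()  # tier 3: extreme collision risk/avoidance

def choose_tier(collision_imminent: bool, traffic_speed_gap_mph: float) -> Tier:
    """Pick a perception/behavior tier (illustrative only).

    collision_imminent: sensors predict an accident that cannot be
        avoided under strictly legal driving.
    traffic_speed_gap_mph: how far surrounding traffic deviates from
        what strict law-following allows (e.g. +12 means traffic is
        moving 12 MPH faster than the base tier would drive).
    """
    if collision_imminent:
        # Shift all processing power to tier 3 to find a way out.
        return Tier.FULL_360
    if abs(traffic_speed_gap_mph) >= 10:
        # Flex to merge with faster traffic, or pull over entirely.
        return Tier.LOCAL
    # Default: follow the law with annoying precision.
    return Tier.BASE
```

The key design choice this sketch captures is that the stricter tiers are the default, and the car only escalates when the strict behavior itself becomes the hazard.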

Robotic cars are on the cusp of being technically smarter & faster than any human driver. Accept it. Welcome it.
