Google Crashes – Literally (More on the Driverless Car Dilemma)

Back in October, we blogged about Google’s new driverless car and used that new technology as a starting point to ask some pointed questions about how the law has failed, in general, to keep pace with the current speed of innovation.  Our specific question on that date was as follows:

As both the ABA Journal and The New York Times point out, the obvious question is this: Who is liable for an accident caused by a car that is driving itself – the person sitting in the driver’s seat of the car who isn’t actually driving, or the manufacturer of the driverless car itself?

At the time, there hadn’t yet been an accident caused by the software.  Well, now we have one.  As recently reported by friend of the blog Alan Crede of the Boston Personal Injury Lawyer Blog, Google’s brainchild caused an accident.  (For the record, Google’s position is that a human driver who overrode the software caused the accident.)  Just as we did in our prior post, Crede used the Google Car opportunity to pontificate about larger legal questions facing the advent of fast-moving technologies.  We were, however, pleasantly surprised to see that Crede does not take the typical plaintiff’s-attorney approach to the issue, but instead argues that companies should not shy away from developing such cutting-edge technologies for fear of liability.  Rather, Crede advocated for the imposition of liability on the owner or passive “driver” of the car itself, not the manufacturer.  In so doing, he argued:

Since, presumably, most accidents involving robot-driven vehicles will be due to some software error, perhaps the victims of robot car accidents will sue Google or other robot car manufacturers in product liability actions for selling defective products (defective software code). Such a system would ensure that accident victims are compensated, but it would also mean that robot car manufacturers — the Googles, Fords and Toyotas of the world — would become the insurer of every car accident. Could any car manufacturer afford such a burden? Likely not.

It seems what we need therefore — in order to ensure that the victims of robot-driven cars are compensated — is new legislation which would change the common law rules that govern car accidents. In particular, we need a system of compulsory auto insurance and a new legislatively-created rule that the owners of driverless cars are responsible for all accidents that they cause, regardless of whether they were piloting the car at the moment the accident occurred.

Such a change would replace our current negligence-based system of liability for car accidents with a strict liability regime that makes cars’ owners automatically liable for any damage caused by their cars, but it seems to be the only workable legal framework for a future of driverless cars.

Under the current legal regime, car manufacturers would have to insure every accident on their own, a burden that no company, even one as large as Google, can afford.

An interesting idea.  Thoughts?  Personally, I am not sure that this type of legislation is a good idea.  What happened to placing liability on the actual party at fault?  Ostensibly, the “driver” who is just sitting in the car isn’t at fault for the accident — maybe it was a software glitch that caused the accident.  Furthermore, who in their right mind would buy a car knowing that it would be their fault if the car caused an accident, even though they had no control over how it was designed?  Or am I sounding like a plaintiff’s attorney?  On second thought, don’t answer that last question.


  1. John David Galt says:

    I’m not a lawyer, but I would think the “last clear chance” rule should apply here: that is, whoever turned on the robot car (or had a chance to hit a STOP button or similar and didn’t) is the “driver”. This may or may not be the person in the car — how much control does s/he have?

  2. As commerce will be the biggest user of robotic vehicles, that arena should be consulted first regarding liability trends and laws.

  3. The real irony is that although software-controlled vehicles will probably have a lower overall accident rate than human-controlled vehicles, the fact that even the best software will sometimes end up in a position where an accident will occur — combined with the software companies’ deep pockets — means that plaintiff’s attorneys will be in court talking about how dangerous the software is compared to ‘human judgment’.

    • It will be hard to make this system reliable. While many mundane driving tasks can be handled by a laser system, a lot require artificial intelligence not available at this time. There are a million potential liability issues — consider just this one example:

      – A car is driving down a standard country road. Suddenly an oncoming driver swings into your lane. There is a bicycle rider just ahead on the right shoulder. The car’s computer instantly determines that someone is going to get hit, either the bicyclist or the oncoming car. What does it do — i.e. what does the Google software engineer tell it to do in this case?

      – Hit the bicyclist in order to avoid the head on crash?
      – Accept the head on crash and spare the bicyclist?

      Now for some more bizarre twists:

      – What if both cars are at low speed, and the likelihood of death in a side-swipe or even head-on is low — only injury will result — but swerving into the bicyclist will probably result in the death of the rider? You are Google’s engineer — what judgment do you make? How much injury are you willing to accept in order to decide that it’s better to kill the bicyclist? What if the bicycle rider is a little girl? What if it is a close personal friend? What if the oncoming car has 4 passengers and you have 4 passengers — are eight people getting injured worth sparing the life of one bicycle rider?

      – Here’s another, much more common example. A paper bag is blown across the street. What is its density? Can the computer distinguish it from a dog? Does it brake suddenly to avoid hitting it? What if someone is tailgating you and braking might result in the tailgater hitting you?

      I could go on and on — you get the idea. The technology will get there eventually, but the bottom line is that unless liability resides with the owner, it is hard to see how it ever gets off the ground, unless major tort reform occurs in the US!

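The dilemma sketched in comment 3 — which maneuver should the software choose when every option harms someone? — is, at bottom, an expected-harm comparison. Here is a toy sketch of that framing; every number, name, and harm weight below is hypothetical and chosen only for illustration, not a claim about how Google’s (or anyone’s) software actually works:

```python
# Toy expected-harm comparison for the country-road scenario above.
# Each maneuver lists (probability of harm, harm weight) per person affected.
# Harm weights are arbitrary: minor injury = 1, serious injury = 10, death = 100.

def expected_harm(maneuver):
    """Sum of probability-weighted harm over everyone the maneuver affects."""
    return sum(p * harm for p, harm in maneuver["outcomes"])

maneuvers = [
    {"name": "swerve_right_toward_cyclist",
     # one cyclist, high chance of a fatal hit
     "outcomes": [(0.9, 100)]},
    {"name": "brake_and_accept_head_on",
     # low-speed head-on: eight occupants, likely serious injury, death unlikely
     "outcomes": [(0.6, 10)] * 8},
]

best = min(maneuvers, key=expected_harm)
print(best["name"], expected_harm(best))  # brake_and_accept_head_on 48.0
```

The sketch makes the commenter’s point concrete: the hard part is not the arithmetic but who picks the weights — the engineer effectively encodes, in advance, how much injury to eight people is “worth” sparing one life.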