Bar News - January 21, 2015
Opinion: Writing the Rules of the Road for Autonomous Cars
By: Anthony Sculimbrene
The future is coming. Will lawyers and the law be ready for it?
In July 2013 a group called Vislab did something remarkable. They conducted one of the first real-world tests of a driverless car (known in the automotive industry as the less-scary-sounding “autonomous car”). BRAiVE, Vislab’s creation, wove in and out of mixed traffic without a single issue. The future is coming, and it will be delivered by an autonomous car.
The legal and ethical implications of an autonomous car are pretty far-reaching. In many ways, the autonomous car could be more important than recent, heralded technological innovations, such as the cellphone. Long distance travel, for example, could be done simply and easily – hop in your autonomous car just around bedtime, sleep in the reconfigured interior (why orient the seats the way they are now if no one is driving?), and wake up at your destination. It would be easier and safer than driving overnight and less of a hassle than taking a plane. Obviously, this future is a few years away, but it’s coming.
In fact, four states – California, Nevada, Florida, and Michigan (of course) – already have laws addressing autonomous cars. As the technology improves and more legislation is passed, real and interesting legal issues will emerge. Three stand out: 1) liability in an autonomous car; 2) DWIs and other motor vehicle crimes; and 3) ethical choices made by computers.
Imagine in 2024 you wake up, shower, and hop in your new Ford Autonomous. You grab your tablet (or newspaper; anachronisms are so funny) and say, “Take me to work.” The car processes the voice command, backs out of the driveway, and is off. You’re reading your newspaper as the Ford Autonomous merges onto Interstate 293 and accelerates to 90 MPH once it is in the autonomous car lane (because of their improved reaction time and lack of driving errors, autonomous cars can take advantage of our highways’ true speed capacities and drive much faster than we can).
After reading you the Red Sox game log, the car exits the highway, and just as it turns off the exit ramp, you are hit. The other driver doesn’t have an autonomous car. Who’s at fault? Suppose an analysis of your car’s computer shows that it had a programming error. Is that your responsibility? Is it the car company’s fault? The computer programmer’s? Unlike a defective brake, you could have assumed control of the car and prevented the accident (no autonomous car design thus far locks out the driver completely), so should you be required to intervene and prevent accidents like this? Interesting legal issues in a new frontier.
New Hampshire lawyer John Frank Weaver touched on these issues in his book Robots Are People Too. His position is that the robots driving these cars need to be treated like people in these instances, especially for purposes of insurance. Weaver’s point is an important one – without a decision about liability and insurance, the autonomous car will be a hard thing to get on the road. Liability is the a priori condition for the launch of the autonomous revolution, but it is just the start of how these new vehicles will reshape areas of the law.
What will the criminal law look like in a land of autonomous cars? Will DWIs go extinct? What about car-related homicides? How will the rules of the road have to change to adapt to automated cars? Many proponents of autonomous cars cite the reduction in DWIs as a major benefit, and that seems almost certain. Without the need for driver input, who cares if the passengers (everyone is a passenger now) are intoxicated? This change will also affect the business side of the law. Autonomous cars will undoubtedly force lawyers who depend on DWIs to change their focus. After all, no one specializes in horse-related moving violations.
The most interesting aspect of autonomous cars is how they will be programmed to react to certain situations. In very rare instances, accidents are unavoidable, even with the best drivers. How will the computer react? Suppose an accident is unavoidable and the choice is between crashing and killing the driver and crashing and killing a bus full of schoolchildren. How will the computer make this choice? What will its programming tell it to do? How will our laws change to account for this programming? If there were an unavoidable accident and the car’s programming told the car to protect the driver, but dozens died as a result, something would seem wrong. And on a meta-level, what do the car’s programmed ethics say about our own ethics? Finally, who will make the ethical decisions that get programmed into the car’s automated brain? The driver, the company, the law, or a panel of highly specialized ethicists?
The questions surrounding autonomous cars are fascinating, and those of us in the legal profession will probably have a hand in shaping the answers. Thinking about them ahead of time might make those answers better.
Anthony Sculimbrene is a public defender in the Nashua office of the New Hampshire Public Defender and a graduate of the NH Bar Association Leadership Academy Class of 2013.