Self-driving cars: Where does the real resistance come from?

Google’s recent announcement that they would soon be unveiling a self-driving vehicle prototype spawned a lot of the reactions you’d expect, mostly split between “cold, dead hands”-type comments and outrageous hazard-challenges that would give Mario Andretti pause.

I’d hate to chalk it all up to just plain technophobia, but we have seen this kind of denial about new technologies before—when the Horseless Carriage was first introduced, for instance—and the song seems to be the same, note for note, with the addition of a synthesizer quaver and a bit of traffic-sound sampling to remind us that it’s 2014.

But we’ve seen automated cars in movies like Minority Report.  I’ve written about them in my novels Sarcology and Chasing the Light.  And although they’re not depicted as death machines in popular media, they are still thought of that way by the public.  Why?

We know that many people don’t trust automation, because, well… it’s not perfect.  It makes mistakes.  And no one wants to think about mistakes being made in a car at 60 MPH.  So, better to let a person drive, right?

Except for the fact that humans aren’t perfect, either.  They make lots of mistakes behind the wheel, many of them caused by their own inability, or simple unwillingness, to focus on the job of piloting a two-ton guided missile down a road full of other two-ton guided missiles, all driven by humans no more attentive than they are.  Automobile driving remains one of the most lethal things an American citizen can do, and most of that lethality can be directly attributed to driver error.  Why aren’t people thinking about all those mistakes?

People also tend to think of “automation” as being some kind of human-analogous robot, with similar sensors and characteristics, that will be behind the wheel.  Blame television and movies for this, perhaps; we rarely think of the automated systems in elevators, computers, cellphones, and even our existing cars as robots, because they don’t have two arms, two legs and a bubble-head with flashing lights in it.  Yet robots—automation—are all around us, making our daily lives easier in often invisible ways.

A self-driving car won’t simply have a Nikon camera bolted to its nose and aluminum arms to turn a steering wheel; it will encompass multiple environmental recognition and evaluation systems, ranging from visible-light cameras to radar, GPS, saved map data and (in most cases) signals from roadways and even other vehicles.  It will parse all of those sources of data, evaluate its surroundings and possible hazards, and take appropriate action… and do all of that in milliseconds, versus the multiple seconds it would take a human to parse only visual data, come to a conclusion and act.
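
To make that more concrete, here is a rough sketch, in Python, of what a single sense-evaluate-act cycle might look like.  Every sensor name, method and threshold below is invented for illustration; no automaker has published its actual control code.

    # Hypothetical sketch of one sense-evaluate-act cycle; the sensors,
    # methods and thresholds are illustrative assumptions, not a real system.
    def drive_cycle(camera, radar, gps, road_map, v2x_radio, controls):
        # Sense: gather readings from every available source, not just "eyes".
        obstacles = camera.detect_objects() + radar.detect_objects()
        obstacles += v2x_radio.reported_hazards()      # e.g. a stalled car ahead
        position = gps.current_position()
        route = road_map.upcoming_geometry(position)

        # Evaluate: which obstacles actually cross the planned path?
        threats = [o for o in obstacles if route.intersects(o)]

        # Act: brake, steer around, or carry on... then do it all again a few
        # milliseconds from now.
        if any(o.distance < controls.stopping_distance() for o in threats):
            controls.brake(hard=True)
        elif threats:
            controls.steer_around(min(threats, key=lambda o: o.distance))
        else:
            controls.follow(route)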

A self-driving car will know its limitations better than a driver would: knowing how fast it can accelerate or (more importantly) brake… how hard it can corner before losing grip or flipping over… how much clearance it has to avoid an obstacle… and having precise enough control over its systems to make those starts, stops and turns accurately… will make it better than any human driver at avoiding hazards.
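
As a back-of-the-envelope illustration of why reaction time matters so much (the figures below are rough, generic assumptions, not measurements from any real vehicle): at 60 MPH a car covers about 88 feet every second, so shaving seconds off the perceive-and-react step translates directly into car lengths of stopping distance.

    # Rough arithmetic only; reaction times and deceleration are assumptions.
    speed_fps = 88.0           # 60 MPH is roughly 88 feet per second
    human_reaction = 1.5       # seconds: a typical perceive-decide-brake estimate
    machine_reaction = 0.05    # seconds: a hypothetical 50 ms sensor-to-brake loop
    decel = 25.0               # ft/s^2: hard braking on dry pavement

    braking_distance = speed_fps ** 2 / (2 * decel)    # distance covered while braking
    print("human:   %.0f ft" % (speed_fps * human_reaction + braking_distance))
    print("machine: %.0f ft" % (speed_fps * machine_reaction + braking_distance))
    # Roughly 287 ft versus 159 ft -- a difference of several car lengths.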

And finally, we have the technology to allow cars to actively network with each other, using radio signals to track each other’s movements, send reports of unexpected road conditions or traffic, broadcast collision alerts, or just synchronize with multiple vehicles to form autonomous convoys and safely maneuver through traffic.
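
To give a sense of how simple such a broadcast could be, here is a toy example; the message fields and format are invented for illustration and don’t correspond to any real vehicle-networking standard.

    # A toy vehicle-to-vehicle status message; fields are illustrative only.
    import json, time

    def make_v2v_message(vehicle_id, lat, lon, speed_mph, heading_deg, alert=None):
        return json.dumps({
            "vehicle": vehicle_id,
            "time": time.time(),
            "position": [lat, lon],
            "speed_mph": speed_mph,
            "heading_deg": heading_deg,
            "alert": alert,        # e.g. "hard_braking", "debris_ahead", or None
        })

    # Each car might broadcast something like this several times a second,
    # while listening for the same messages from its neighbors:
    msg = make_v2v_message("car-1138", 42.3601, -71.0589, 58.0, 270.0, alert="hard_braking")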

I’m not sure why people have a problem with the idea of automation being faster and more precise than they are at split-second decisions.  The modern automobile is already chock-full of robots controlling engine firing, cam timing, gear-shifting and even radio-station finding, accomplishing most of those very exacting tasks in microseconds.

People also tend to assume that these automation systems will simply fail, without warning, and (naturally) in the worst situation possible.  People spin scenarios of self-driving cars expecting turns where there are none and swerving into walls, or taking you over a cliff to avoid a sudden obstacle… the kind of stuff you’d expect to see in a Saturday Night Live sketch or a cheesy sitcom or movie.  Or they’ll just turn off, in the middle of a highway jaunt, leaving you screaming uncontrollably towards your doom.

Somehow, all of these people have forgotten about the concept of backup: emergency equipment designed to react to and recover from such situations.  The very idea that a self-driving car would not be built with a backup system, independent of the main system, whose only job would be to take control of the car when the main system malfunctions and bring the vehicle to a safe stop, is crazy.
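
In software terms, the heart of such a backup could be nothing more exotic than an independent watchdog that assumes the worst the moment the main computer stops checking in.  This is a hypothetical sketch, not anyone’s actual safety architecture:

    # Hypothetical watchdog running on its own hardware, separate from the
    # main driving computer; all names and timings are assumptions.
    import time

    HEARTBEAT_TIMEOUT = 0.2    # seconds of silence before we assume a failure

    def watchdog(main_system, backup_controls):
        while True:
            last_beat = main_system.last_heartbeat()   # timestamp of last "I'm alive"
            if time.time() - last_beat > HEARTBEAT_TIMEOUT:
                # Main computer has gone quiet: warn, slow down, pull over.
                backup_controls.hazard_lights(on=True)
                backup_controls.pull_to_shoulder_and_stop()
                break
            time.sleep(0.05)    # check roughly twenty times a second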

And the beginnings of just that are already in some new cars today: BMW, Mercedes and Hyundai models have systems that take control of the car’s brakes and slow it to avoid collisions, often reacting faster than a human driver can.  Many vehicles can also take over the steering to guide themselves into parking spaces.  Put those two existing systems together, and you have an emergency pull-over system.

The significant thing is that all of the technologies required to make a working autonomous vehicle are either available today, being perfected today, or already in use today.  There’s no missing framistat or dilithium crystal needed to put all the pieces together; it’s just a matter of assembling them and optimizing them to do this job.

We have countless examples of jobs we used to do ourselves now being safely done by automation, every day, every minute.  There’s no reason to believe that self-driving cars won’t soon be in that category.  So there must be something else convincing people that this is a bad idea.

There’s a reason why people describe our relationship with the car as “America’s love affair”: For so many Americans, being able to drive is considered a special privilege, a joy (though where to enjoy driving in this country, outside of a racetrack, is beyond me) and a source of pride.  It is tied into independence and adulthood, a socio-cultural milestone; it gives us full access to our automobile-dominated world.  Cars have been romanticized, idolized and personalized.  Some of us were born in them; probably more of us were conceived in them.  Cars hold a special place in American culture, and it’s hard for Americans to imagine a world in which they do not drive their own cars.

But at one time, it was hard for average people to imagine a world in which they did not grow their own food… in which they had ready access to doctors… knew what was on the other side of the planet… or could watch a man walk on the Moon from their living rooms.

And speaking of Moon landings: Less computing power than what is in your cellphone today landed NASA’s Apollo astronauts on the Moon.  Six times.  And brought one damaged capsule safely home when it suffered an explosion en route.  Still think we can’t make cars drive themselves?

3 thoughts on “Self-driving cars: Where does the real resistance come from?”

  1. Update:
    I’ve been kind of surprised by how many people have raised the question of liability in the event of an accident: Is the car manufacturer held responsible? Here’s how I see it:

    Though car manufacturers presently do become liable when a defect in their car (not caused by negligent owner maintenance) results in an accident or casualty, there are potentially other circumstances that could shift liability in other directions. For instance, if an object enters the roadway in too short a time for the vehicle (or a human-driven vehicle, for that matter) to avoid it, and an accident or casualty is caused, the owner of that object could be liable. Or if human error of any kind can be established to have initiated the events causing the accident, the manufacturer may be exonerated from blame—admittedly, this will be much less likely given the self-driving aspect of the car, but probably not impossible. (I dunno… maybe you fire a loaded weapon into a console, causing a short that disables the main system and causes a crash before the back-up system can take over.)

    But suppose the object in the road is a tree recently felled from wind or a lightning strike? Or a steering-system-disabling pothole that the car couldn’t detect? Insurance companies already have “acts of God” clauses that negate liability when no person or institution is directly responsible for some acts; I imagine there might even be a surge in “acts of God” claim conclusions, if the automakers have to get involved in such proceedings…

    But there’s even another consideration: Though the automaker built the car, they did not direct it to the spot where it was involved in an accident; the owner did. So, through some (admittedly heinous) twists and turns of law and letter, the owner may find themselves liable because they placed themselves at the scene.

    And finally, there’s the disclaimer gambit: Automakers may demand that potential owners sign a waiver of liability, exonerating the automaker from liability for any accident that isn’t directly caused by a failure of the car’s systems (given contractually-followed maintenance), before allowing the owner to buy.

    Like the many other “what if?” questions I’ve heard, liability would require further and extensive thought by all parties concerned. But that’s to be expected with such a ground-breaking technology shift.
