As the parent of two learner drivers, I am glad I do not have to teach Alexa and Siri to drive too. Because automatons lack one crucial ability that my teens have, however inexperienced: Alexa and Siri can’t read minds. They cannot intuit whether that student on the kerb is about to step into traffic, more afraid of losing her Snapchat streak than of getting run over; nor can they catch her eye to warn her not to. Self-driving cars do not do eye contact.
They won’t necessarily know that common practice in the university town where we live is for all cyclists to run all red lights, all the time. And when it comes to that most complicated of human interactions, the rush-hour motorway merge lane, how will a driverless car stare down the other guy and wordlessly warn him not to cut in? How will it generate rude hand gestures, that lingua franca of the road?
As more companies put self-driving cars on real-world roads for testing — with the help of the laissez-faire Trump administration, which last week promised to stay out of their way — developers are trying to figure out how best for the cars of the future to communicate with pedestrians, cyclists, other drivers and their own passengers.