Opinion: The motor industry

Redefine the back-seat driver role in driverless cars

As the parent of two learner drivers, I am glad I do not have to teach Alexa and Siri to drive too, because automatons lack one crucial ability that my teens have, however inexperienced they may be: Alexa and Siri can't read minds. They cannot intuit whether that student on the kerb is about to step into traffic, more afraid of losing her Snapchat streak than of getting run over; nor can they catch her eye to warn her not to. Self-driving cars do not do eye contact.

They won’t necessarily know that the common practice in the university town where we live is that all cyclists run all red lights, all the time. And when it comes to that most complicated of human interactions, the rush hour motorway merge lane, how will a driverless car stare down the other guy and wordlessly warn him not to cut in? How will it generate rude hand gestures, that lingua franca of the road?

As more companies put self-driving cars on real world roads for testing — with the help of the laissez-faire Trump administration, which last week promised to stay out of their way — developers are trying to figure out the best way for cars of the future to communicate with pedestrians, cyclists, other drivers, and their own passengers.

Copyright notice: The copyright of this article belongs to FT中文網. Without permission, no organisation or individual may reproduce, copy, or otherwise use all or part of this article; infringement will be pursued.