Artificial Intelligence

Computers are not the best judge of our emotions

Emotional AI has potential promise but there are reasons for caution

Imagine a job interview where you confront not just a panel of interviewers, but an electronic “eye” linked to a computer. Based on tiny movements in your face and eyes and variations in your tone of voice as you answer questions, the computer uses artificial intelligence to assess your emotional responses — and draw conclusions about you. Are you reliable? Do you really want the job? Does that micro-clenching of the jaw when asked about your biggest failure show you have something to hide?

Far from being a disconcerting vision of the future, this is already happening. Various companies are developing or marketing “emotion recognition” technology for recruitment; some businesses have deployed it. As the Financial Times highlighted this week, “emotional AI” is being used in sectors ranging from advertising to gaming to insurance, as well as law enforcement and security. It holds out the prospect of using facial clues to figure out what to sell people and how they respond to adverts; to check whether drivers or schoolchildren — or those working from home — are paying attention; or to spot people who are acting suspiciously.

History is littered with dark predictions about new technologies that proved overly alarmist. Yet as with facial recognition, to which emotional AI is closely related, there are reasons for particular caution. The science of matching an image of a face against a database through its physical characteristics is broadly sound. Even then, however, systems sometimes misidentify women or non-white faces — leading to risks of discrimination, for example, when used by law enforcement.

Copyright notice: This article is copyrighted by FT中文網. Without permission, no organisation or individual may reproduce, copy, or otherwise use this article in whole or in part; violations will be pursued.