Imagine a job interview where you confront not just a panel of interviewers, but an electronic “eye” linked to a computer. Based on tiny movements in your face and eyes and variations in your tone of voice as you answer questions, the computer uses artificial intelligence to assess your emotional responses — and draw conclusions about you. Are you reliable? Do you really want the job? Does that micro-clenching of the jaw when asked about your biggest failure show you have something to hide?
Far from being merely a disconcerting vision of the future, this is already happening. Various companies are developing or marketing “emotion recognition” technology for recruitment; some businesses have deployed it. As the Financial Times highlighted this week, “emotional AI” is being used in sectors ranging from advertising to gaming to insurance, as well as law enforcement and security. It holds out the prospect of using facial clues to figure out what to sell people and how they respond to adverts; to check whether drivers or schoolchildren — or those working from home — are paying attention; or to spot people who are acting suspiciously.
History is littered with dark predictions about new technologies that proved overly alarmist. Yet as with facial recognition, to which emotional AI is closely related, there are reasons for particular caution. The science of matching an image of a face against a database by its physical characteristics is broadly sound. Even so, systems sometimes misidentify women or non-white faces, creating risks of discrimination when they are used by law enforcement, for example.