Can a robot read your emotions? Apple, Google, Facebook and other technology companies seem to think so. They are collectively spending billions of dollars to build emotion-reading devices that can interact meaningfully (and profitably) with humans using artificial intelligence.
These companies are banking on a belief about emotions that has held sway for more than 100 years: smiles, scowls and other facial movements are worldwide expressions of certain emotions, built in from birth. But is that belief correct? Scientists have tested it across the world, showing people photographs of posed faces (pouts, smiles), each accompanied by a list of emotion words (sad, surprised, happy and so on), and asking them to pick the word that best matches each face. Sometimes they instead tell people a story about an emotion and ask them to choose among the posed faces.
Westerners choose the expected word about 85 per cent of the time. The rate is lower in Eastern cultures, but overall it is high enough to claim that widened eyes, wrinkled noses and other facial movements are universal expressions of emotion. The studies have been replicated so widely that universal emotions seem to be bulletproof scientific fact, like the law of gravity, which would be good news for robots and their creators.