“There are many ways of being smart that aren’t smart like us.” These are the words of Patrick Winston, a leading voice in the field of artificial intelligence. Although his idea is simple, its significance has been lost on most people thinking about the future of work. Yet this is the feature of AI that ought to preoccupy us the most.
From the 1950s to the 1980s, during the “first wave” of AI research, it was generally thought that the best way to build systems capable of performing tasks to the level of human experts or higher was to copy the way that experts worked. But there was a problem: human experts often struggled to articulate how they performed many tasks.
Chess-playing was a good example. When researchers sat down with grandmasters and asked them to explain how they played such fine chess, the answers were useless. Some players appealed to “intuition”, others to “experience”. Many said they did not really know at all. How could researchers build a chess-playing system to beat a grandmaster if the best players themselves could not explain what made them so good?