In 1946, my grandfather, writing as “Murray Leinster”, published a science fiction story called “A Logic Named Joe”. In it, everyone has a computer (a “logic”) connected to a global network that does everything from banking to newspapers and video calls. One day, one of these logics, Joe, starts giving helpful answers to any request anywhere on the network: invent an undetectable poison, say, or suggest the best way to rob a bank. Panic ensues — “Check your censorship circuits!” — until they work out what to unplug.
For as long as we’ve thought about computers, we’ve thought about making “artificial intelligence”, and wondered what that would mean. There’s an old joke that AI is whatever doesn’t work yet, because once it works, it’s just software. Calculators do superhuman maths, and databases have superhuman memory, but they can’t do anything else, and they don’t understand what they’re doing, any more than a dishwasher understands dishes. But people do have something different, and so, on some scale, do dogs, chimpanzees, octopuses and many other creatures. AI researchers call this “general intelligence”.
If we could make artificial general intelligence, or AGI, it should be obvious that this would be as important as computing, or electricity, or perhaps steam. Today we print microchips, but what if you could print digital brains, at or beyond the level of a human, and do it by the billion? At the very least, that would be a huge change in what we could automate, and as my grandfather and a thousand other science fiction writers have pointed out, it might mean a lot more: steam engines did not have opinions about people.