Is the day approaching when computers start to learn tricks they were never designed to perform? Or develop deceptive behaviours that are hard to see through? Or come to truly “understand” the information they’re working on, raising philosophical questions about the boundary between human and machine?
Serious AI researchers have long argued that questions such as these raise unrealistic expectations about their field and should stay in the realm of science fiction. Today’s AI systems, we are told, are boring number-crunchers, churning through massive data sets to draw their inferences.
So what happens when the researchers themselves suggest that these sci-fi storylines are no longer as far-fetched as they once sounded?