It was what many called an iPhone moment: the launch in late 2022 of OpenAI’s ChatGPT, an artificial intelligence tool with a humanlike ability to create content, answer personalised queries and even tell jokes. And it captured the public imagination. Suddenly, a foundation model — a machine learning model trained on massive data sets — thrust AI into the limelight.
But soon this latest chapter in AI’s story was generating something else: concerns about its ability to spread misinformation and “hallucinate” — confidently produce false information. In the hands of business, many critics said, AI technologies would precipitate everything from data breaches to hiring bias and widespread job losses.
“That breakthrough in the foundation model has got the attention,” says Alexandra Reeve Givens, chief executive of the Center for Democracy & Technology, a Washington- and Brussels-based digital rights advocacy group. “But we also have to focus on the wide range of use cases that businesses across the economy are grappling with.”