The EU’s guidelines on artificial intelligence have been praised for setting a global standard. They are also timely, given the revelation this week that Microsoft had worked with a Chinese military-affiliated university on AI research. The EU’s principles leave open the question of how to decide whether AI is of benefit to society. The answer will lie in nuanced regulation of individual applications, not a blanket approach to all AI.
Strategies for ethical AI are not new, but the European Commission’s document is noteworthy. It draws on a wide array of experts and is grounded in the idea that AI should both respect rights and be robust enough to avoid unintentional harm.
Just as with any other computer system, the adage “garbage in, garbage out” applies to AI. If systems are trained on historical data, they will reproduce historical biases. The best way to counter, or at the very least identify, this algorithmic bias is through greater transparency and robustness of AI systems. The French government’s approach of sharing the data behind many of its algorithms offers one example of those principles in action.
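The “garbage in, garbage out” point can be made concrete with a toy sketch. The records and the memorising “model” below are entirely invented for illustration: a system fitted to biased historical hiring decisions simply learns those decisions back, reproducing the gap between groups.

```python
# Hypothetical historical records: (group, qualified, hired).
# Group B applicants were hired less often, even when qualified.
history = [
    ("A", True, True), ("A", True, True), ("A", False, True),
    ("B", True, False), ("B", True, True), ("B", False, False),
]

def hire_rate(records, group):
    """Fraction of applicants from `group` who were hired historically."""
    rows = [r for r in records if r[0] == group]
    return sum(r[2] for r in rows) / len(rows)

# "Training": this naive model memorises each group's historical hire
# rate and uses it as the probability of a positive decision.
model = {g: hire_rate(history, g) for g in ("A", "B")}

print(model["A"])  # group A's historical rate
print(model["B"])  # group B's lower rate: the historical bias is reproduced
```

Real systems are far more complex, but the mechanism is the same: without transparency about training data and model behaviour, such inherited bias is easy to miss and hard to audit.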