On his return to the White House in January, Donald Trump swiftly dismantled the regulatory framework his predecessor Joe Biden had put in place to address artificial intelligence risks.
The US president’s actions included reversing a 2023 executive order that required AI developers to submit safety test results to federal authorities when systems posed a “serious risk” to the nation’s security, economy or public health and safety. Trump’s order characterised these guardrails as “barriers to American AI innovation”.
This back-and-forth on AI regulation reflects a tension between public safety and economic growth also seen in debates over regulation in areas such as workplace safety, financial sector stability and environmental protection. When regulations prioritise growth, should companies continue to align their governance with the public interest — and what are the pros and cons of doing so?