
Musk, experts urge pause on training AI systems

Artificial intelligence experts, industry players and Elon Musk want a temporary halt on advanced AI development over fears about the risks to humanity.

By Jyoti Narayan and Krystal Hu
March 30, 2023

In an open letter, Elon Musk and a group of artificial intelligence experts and industry executives are calling for a six-month pause in the training of systems more powerful than OpenAI’s newly launched model GPT-4, citing potential risks to society and humanity.

The letter, issued by the non-profit Future of Life Institute and signed by more than 1,000 people including Musk, Stability AI CEO Emad Mostaque, researchers at Alphabet-owned DeepMind, and AI heavyweights Yoshua Bengio and Stuart Russell, calls for a pause on advanced AI development until shared safety protocols for such designs are developed, implemented and audited by independent experts.

“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the letter says.

The letter also details potential risks to society and civilisation posed by human-competitive AI systems, in the form of economic and political disruptions, and calls on developers to work with policymakers on governance and regulatory authorities.

The letter comes as EU police force Europol on Monday joined a chorus of ethical and legal concerns over advanced AI such as ChatGPT, warning about the potential misuse of the system in phishing attempts, disinformation and cybercrime.

Musk, whose carmaker Tesla is using AI for an autopilot system, has been vocal about his concerns about AI.

Since its release last year, Microsoft-backed OpenAI’s ChatGPT has prompted rivals to accelerate developing similar large language models, and companies to integrate generative AI models into their products.

Sam Altman, chief executive at OpenAI, hasn’t signed the letter, a spokesperson at Future of Life told Reuters. OpenAI didn’t immediately respond to a request for comment.

“The letter isn’t perfect, but the spirit is right: we need to slow down until we better understand the ramifications,” said Gary Marcus, an emeritus professor at New York University who signed the letter.

“They can cause serious harm … the big players are becoming increasingly secretive about what they are doing, which makes it hard for society to defend against whatever harms may materialise.”
