AI ‘should be subject to nuclear-level regulation’

OpenAI bosses said “superintelligence” will be more powerful than other technologies humanity has had to contend with in the past
REUTERS

The creators of ChatGPT have suggested that artificial intelligence poses such a risk to humanity that it must be subject to regulation similar to that governing nuclear power.

OpenAI founder Sam Altman, president Greg Brockman and chief scientist Ilya Sutskever said it is possible that within a decade AI systems will exceed “expert skill level in most domains”.

Superintelligence, the scientists wrote in a blog post, “will be more powerful than other technologies humanity has had to contend with in the past”.

“Given the possibility of existential risk, we can’t just be reactive,” they said. “Nuclear energy is a commonly used historical example of a technology with this property; synthetic biology is another example.”

Brockman appeared at the AI Forward event in San