Superintelligence: Leading figures call for prohibition on research

Hundreds of leading figures have signed a statement calling for a prohibition on research into superintelligence.

Unlike a previous call in 2023 for a six-month ‘pause’ on training powerful AI systems, the letter goes further, urging a total prohibition on the development of superintelligence until there is (1) “broad scientific consensus that it would be done safely and controllably” and (2) “strong public buy-in”.

The difference in approach lies in the power of superintelligence and, according to the signatories, the accompanying risks it poses. They point to many leading companies being intent on building superintelligence in the coming decade “that can significantly outperform all humans on essentially all cognitive tasks. This has raised concerns, ranging from human economic obsolescence and disempowerment, losses of freedom, civil liberties, dignity, and control, to national security risks and even potential human extinction”. The letter also cites research suggesting that only five percent of U.S. adults support fast, unregulated development of AI, and that a significant majority want to see a pause on the development of advanced AI until it is proven safe.

Among the signatories are the two so-called ‘godfathers of AI’, Yoshua Bengio and Geoffrey Hinton, respectively the world’s most and second-most cited scientists, as well as Nobel Prize winners, politicians, and public figures including Steve Wozniak and Sir Richard Branson. Their position is neatly summarised by leading academic Stuart Russell, Professor of Computer Science at the University of California, Berkeley: “this is not a ban or even a moratorium in the usual sense. It’s simply a proposal to require adequate safety measures for a technology that, according to its developers, has a significant chance to cause human extinction. Is that too much to ask?”