Elon Musk and top AI researchers call for pause on ‘giant AI experiments’

Some of the top artificial intelligence (AI) researchers, together with tech figures such as Elon Musk, have initiated a move away from large-scale, high-impact AI projects. In an open letter to the scientific community, they propose a pause on such experiments.

The open letter, organised by the Future of Life Institute and signed by more than 2,400 researchers and luminaries, was released during the International Joint Conference on Artificial Intelligence in Stockholm, Sweden. It warns that running uncontrolled AI experiments could create ethical, legal, and security threats that would be difficult to contain.

The open letter suggests three general principles to be observed when developing AI: the technology must be safe, beneficial, and trustworthy. Safety means that AI should be robust to adversarial attacks, and that the risks associated with an experimental technology should be accurately estimated before it is deployed.

Benefit means that AI should be explainable and predictable, and aligned with human values. Trustworthiness requires that failures be analysed, covering both intentional and accidental harms, so that the resulting systems can be relied upon.

The open letter ends with a warning against an AI arms race and a call for a ban on autonomous weapons, urging that no major military power should start such a race.

The signatories admit that the technology may slip beyond the control of the researchers who develop it, and argue that this prospect warrants a pause in its development. The open letter reflects what Elon Musk has described as the risk of an “AI apocalypse”: a point at which AI could out-compete humans and cause irreparable harm.

The letter grew out of conversations Elon Musk had with other tech leaders at his San Francisco-based blockchain and AI event, The Summit of Innovation and Technology, which was intended as a platform for critical conversations on the implications and future of AI.

In conclusion, there is growing worry that we are entering a time in which AI development will outstrip humanity’s ability to control the technology. Accordingly, one of the primary initiatives of leading AI experts and Elon Musk is a call to pause, intended to slow the race to develop self-learning systems uncritically and without proper analysis.
