Elon Musk and a group of artificial intelligence experts and industry executives have called for a six-month pause in the training of AI systems more powerful than OpenAI’s recently launched model GPT-4.
They made the request in an open letter, citing potential societal and humanitarian risks.
The letter, issued by the non-profit Future of Life Institute and signed by more than 1,000 people, including Musk, Stability AI CEO Emad Mostaque, researchers at Alphabet-owned DeepMind, and AI heavyweights Yoshua Bengio and Stuart Russell, called for a pause on advanced AI development until shared safety protocols for such designs were developed, implemented, and audited by independent experts.
“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the letter said.
The letter also outlined potential threats to society and civilization from human-competitive AI systems in the form of economic and political disruption, and it urged developers to work with policymakers on governance and regulatory authorities.
The letter comes as the EU police force Europol joined a chorus of ethical and legal concerns about advanced AI like ChatGPT on Monday, warning about the system’s potential misuse in phishing attempts, disinformation, and cybercrime.
Musk, whose car company Tesla employs AI for an autopilot system, has been vocal about his reservations about AI.
Since its launch last year, Microsoft-backed OpenAI’s ChatGPT has prompted rivals to accelerate their own AI development and companies to incorporate generative AI models into their products.
However, Sam Altman, CEO of OpenAI, has not signed the letter, a spokesperson at Future of Life told Reuters.