The Open Letter Calling for a Pause on AI Development
In recent months, headlines have been dominated by advances in Artificial Intelligence (AI). The release of generative AI systems such as ChatGPT, alongside companies’ efforts to dominate the market, has raised concerns about their long-term impacts and the desirability of new regulation. These topics, among others, are discussed in an open letter published by the Future of Life Institute.
About the Open Letter
Over 1,000 AI experts and researchers have signed an open letter created by a think tank known as the Future of Life Institute (FLI). The signatories include engineers from Amazon, Google, DeepMind, Meta and Microsoft, as well as big names such as Elon Musk, a co-founder of OpenAI, and Steve Wozniak, the co-founder of Apple.
The letter calls on all AI labs to pause, for at least six months, the training of systems more powerful than GPT-4, the latest model underlying ChatGPT. This pause should be “public and verifiable and include all key actors”. If the response is not satisfactory, the letter calls on governments to step in. AI developers are also asked to focus their attention on improving existing systems to make them more reliable, accurate and safe.
Why Stop the Training of AI Systems?
After raising concerns about AI’s potential to replace jobs and spread misinformation, the letter states that “powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable”.
OpenAI has acknowledged in its recent statement that “at some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models”. The FLI finds that “that point is now”.
Criticism
The letter has received considerable criticism. Margaret Mitchell, a co-author of ‘On the Dangers of Stochastic Parrots’, argues that “by treating a lot of questionable ideas as a given, the letter asserts a set of priorities and a narrative on AI that benefits the supporters of FLI”. Notably, Mitchell’s paper was cited in the letter.
Others accuse the FLI of focusing on hypothetical apocalyptic scenarios instead of more immediate harms, such as systems reproducing racist or sexist views and generating misinformation.
A Need to Regulate AI?
Whatever the merits of the letter, the race to develop AI systems has exposed several regulatory grey areas. Concerns have been raised about liability for offensive and harmful content, as well as data protection.
Yet the UK Government’s recently published White Paper appears to favour a pro-innovation rather than a pro-regulation approach. Alongside opposition parties, the Ada Lovelace Institute, an independent research body, believes that this approach fails to address the more contentious issues.
Conclusion
The open letter is the latest sign of growing concern about the development of AI systems. Despite facing criticism, it is likely to prompt technology companies and governments to consider whether greater control over AI systems is desirable.
By Scott Hickman