The Future of Life Institute has come out with an Open Letter advocating a 6-month pause on training AI models “more powerful” than GPT-4. The letter has already been signed by over 1,000 people, including researchers, technology leaders and public figures.

The letter is correct in drawing attention to possible risks of AI technologies such as ChatGPT and GPT-4. Misinformation, labour displacement and security are important issues, but unfortunately, in each case, the letter frames them in speculative, futuristic terms while ignoring ways in which the technology could cause harm right now or in the near future.

The most speculative part is the claim that AI poses long-term catastrophic risks, resulting in “nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us” and that we “risk loss of control of our civilization”. The experiments showing unexpected capabilities of large language models (LLMs) are indeed interesting, but it is still magical thinking to claim that simply pumping LLMs with more data and compute can create fully autonomous agents surpassing humans in all domains. By contrast, there are very concrete risks when systems like ChatGPT are increasingly connected as plugins to all kinds of applications and to the internet: they can leak sensitive personal data, spread worms across the internet or even shut down critical infrastructure. The framing in terms of existential risk (“x-risk”) is a distraction from these real issues, as we argued a few years ago. Addressing the real and present risks requires open discussion and collaboration between academia, government authorities and civil society, but the hype of the letter might instead lead to corporations locking down the technology beyond scrutiny.


Demanding a 6-month moratorium on AI research is both naïve and misguided. The impact of AI on our lives is rapidly becoming all-pervasive, and adjusting our economy and society to it is a big, long-term project, not something that will have a magic fix in 6 months!

AI technologies are starting to have a transformative impact across a range of domains such as health care, energy and manufacturing. Thus it is heartening to note that the letter clearly advocates continued research and development of AI to make products that are “more accurate, safe, interpretable, transparent, robust, aligned, trustworthy”.

The benefits of the recent AI breakthroughs are just beginning to touch the everyday lives of people. For example, LLMs are now being optimized to run on mobile devices: a 4GB model based on Meta’s LLaMA can run on a 2020 MacBook Air. Compression and architecture search methods that make this possible are at the heart of Embedl’s technology. Our mission is to be at the forefront of this democratization of AI: making the most advanced AI technologies run efficiently on resource-constrained devices, e.g. affordable consumer hardware, and bringing the benefits to everyone. In the process, we are also addressing one of the most important issues with current AI development, one that is strangely absent from the letter: the carbon footprint of AI. It is simply not sustainable to continue the profligate use of energy and other resources that characterizes AI development today, as exemplified by the LLMs. AI technologies need to be developed sustainably, within planetary boundaries. Every industry and organization deploying AI technologies must have efficient AI at the heart of its products. Embedl’s central goal is to make this possible for all organizations, large or small.
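To make the compression idea concrete, here is a minimal sketch of post-training dynamic quantization in PyTorch, one family of techniques for shrinking models to fit consumer hardware. The toy layer sizes and the size_mb helper are illustrative assumptions, not Embedl’s actual pipeline; the point is simply that storing weights as 8-bit integers instead of 32-bit floats cuts their memory footprint by roughly 4x.

```python
# A minimal sketch of post-training dynamic quantization in PyTorch.
# This illustrates one compression technique in general terms; it is
# not Embedl's proprietary pipeline, and the layer sizes are toy values.
import os
import torch
import torch.nn as nn

# A stand-in for one feed-forward block of a transformer LLM.
model = nn.Sequential(
    nn.Linear(4096, 11008),
    nn.ReLU(),
    nn.Linear(11008, 4096),
)

# Replace the Linear layers' fp32 weights with int8 weights;
# activations are quantized on the fly at inference time.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

def size_mb(m: nn.Module) -> float:
    """Serialize a model and report its on-disk size in megabytes."""
    torch.save(m.state_dict(), "_tmp.pt")
    size = os.path.getsize("_tmp.pt") / 1e6
    os.remove("_tmp.pt")
    return size

print(f"fp32: {size_mb(model):.1f} MB")      # ~360 MB for this toy block
print(f"int8: {size_mb(quantized):.1f} MB")  # roughly 4x smaller
```

Going further, 4-bit weight quantization roughly halves the footprint again, which is in the ballpark of how a 7-billion-parameter LLaMA variant fits in about 4GB.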

The letter carefully clarifies that it is “not asking for a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models”: “Powerful AI systems should be developed only once we are confident that their effects will be positive, and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system's potential effects.” The letter is also correct to call for a much-needed update of regulatory frameworks to catch up with the new technologies, check monopoly power and unlock innovation for the common public good.


1 https://futureoflife.org/open-letter/pause-giant-ai-experiments/
2 D. Dubhashi and S. Lappin, “AI Dangers: Real and Imagined”, Comm. ACM, 60:2, pp. 43-45, Feb. 2017.
3 D. Dubhashi and S. Lappin, “Scared about AI? It’s BigTech that needs reining in”, The Guardian, Dec. 16, 2021.
