Azeem on AI: What Can the Copernican Revolution Teach Us about the Future of AI?
How might AI breakthroughs change the larger systems we operate within?
In his brief commentary, Azeem Azhar discusses the increasing complexity and capabilities of large language models (LLMs) and the transformative potential they hold. Just as the Copernican Revolution forced us to reassess our understanding of the universe and led to numerous societal and scientific changes, Azeem proposes that rapid advancements in AI could lead to a similar paradigm shift that challenges established norms and systems.
Further resources:
- Exponential LLMs and the Copernican Moment, Azeem Azhar, 2023
AZEEM AZHAR: Hi there, I’m Azeem Azhar. For the past decade, I’ve studied exponential technologies, their emergence, rapid uptake, and the opportunities they create. I wrote a book about this in 2021. It’s called The Exponential Age. Even with my expertise, I sometimes find it challenging to keep up with the fast pace of change in the field of artificial intelligence, and that’s why I’m excited to share a series of weekly insights with you where we can delve into some of the most intriguing questions about AI. In today’s reflection, I speak about AI’s Copernican moment, the idea that the development of advanced AI models and their potential impact on society is a significant paradigm shift that will have far-reaching consequences, similar to those brought about by Copernicus and Darwin in their respective fields. Let’s go.
Let’s start with a confession. I’m finding it really hard to keep up with the constant flow of experimentation, innovation, and research results that are springing up around these LLM technologies. There is so much going on right now. As I reflect on the pace at which this is moving, I mean, bloody exponentials, I think I’m allowed to say that. And given the challenge of keeping up with it, and the breadth and the depth of the experimentation, I really do think that this is quite a remarkable, very powerful technology. In a way, given our knowledge of history, it’s the sort of thing that should prompt a real rethink. We need to enter a space of philosophical analysis, of imagination, of trying to step outside the system we are currently in to think about how we might order things given this set of changes. I have been thinking, for example, about the Copernican moment, right?
That moment when we started to realize that the earth was not the center of the universe, that the earth revolved around the sun. It took a couple of hundred years for this to become common knowledge, and it’s not yet common knowledge everywhere. Or the Gutenberg moment, where we started to democratize access to, and then the creation of, knowledge. These were moments where, fundamentally, people living within a system were forced to change that system. Prior to Gutenberg in Western Europe, the priests were the elite who controlled knowledge. And they absolutely did control that knowledge, constraining what people could believe and how they could believe it in very many ways. Of course, Copernicus came along and further challenged that domain. And so I think we can look at LLMs today and start to see where there’s a sort of friction. They’re hitting copyright, they’re hitting privacy. And we can start to ask, at what point are we the priests, and at what point are we the scientists, from the perspective of the existing worldview?
If you are the Catholic Church and you’ve controlled the dissemination of knowledge through handwritten, hand-copied bibles, and someone comes along allowing lots of different versions of that to be created, and ultimately for other people to start producing lots and lots of material and driving up levels of literacy, that’s a risk, right? That’s a risk to the existing stability. It’s a risk to the existing structures. It’s a risk that challenges the benchmark for truth, and truth as a social construct. After Copernicus, it was no longer true, as it had been before Copernicus, that the sun revolved around the earth. Now, of course, listening to that, you’re going to say, “The sun never revolved around the earth. The earth revolved around the sun.” But that wasn’t what people believed was true back in the 15th and early 16th centuries, or in the periods before that.
And I think it’s important for us to ask those questions around this technology in particular, as we play with it and see what it is telling us about the world and about our own assumptions. So where are we with these technologies, these LLMs that will form the basis of more advanced systems? I think we have to admit that the cat is out of the bag, right? Now, it’s true that ever more powerful models are going to be harder and harder to build. GPT-5 will be harder to build. Whatever Anthropic or Stability is building now will be hard to build. It will be gated on technical capabilities: the ability to find the few hundred remarkably talented people to work on each project, the hundreds of millions of dollars needed for training, and even being able to get the chips from Nvidia. That is hard stuff.
But actually, over a 10-, 15-, or 20-year period, it’s going to happen, right? It may not happen in a year, but it’ll certainly happen. We don’t even have to think about the most sophisticated models, because lower-powered models of GPT-3 or GPT-3.5 capability, things like Llama, which I’ve written about, are capable of quite a lot of what we see the most advanced LLMs doing. And they’re running on desktop computers and mobile phones. The cost of running those is simply going to decline because of, on the one hand, the declining cost of computing, and on the other, algorithmic optimizations and improvements, which will have step-change impacts on the computational cost of training. On top of that, this is becoming the number one priority, or certainly a top priority, for most firms.
I can’t really disclose who I’ve spoken to over the last two or three weeks, but I’ve talked to a lot of people in industry, and a lot of people with a view across many industries and the firms within them. And the thing I’m hearing is that this is becoming a super priority. There’s a lot of clamoring within firms to start using these technologies for competitive advantage. There is a sort of technical, research, and commercial momentum. And I think one of the things we then need to do is start to really understand what the upsides of all of this might be, and how we paint that upside. Once we understand what that upside is, we can start to think fundamentally about what kind of institutional frames we want to develop around these technologies.
Ultimately, institutional frames, whether they are regulations or laws, are trade-offs, right? They are trade-offs between freedom, benefits that might be delivered (and delivered unevenly), and the costs that attach to them. So, for example, over the past couple of weeks, we’ve started to read more about how these LLMs have made use of copyrighted material, or may be using private material and breaching various privacy regulations, or breaching laws around slander, misinformation, or libel. The thing we need to do when we look at that, of course, is take that material seriously. But we should also start to ask whether we need to change the institutional frames and the laws around them in order to deal with the realities of this technology. If you think about copyright, it’s only a few hundred years old. It’s an economic settlement in the face of new opportunities. Virtually no one was making any money as an author before the printing press. And the printing press appeared in the blink of an eye.
One quite interesting example to look at is the use of DDT, the insecticide, the pesticide. DDT was developed 60 or 70 years ago and was a really effective pesticide. It turned out to have all sorts of problematic effects on the environment and on human health, so it was largely banned about 50 years ago. However, many countries, in particular India, continued to use DDT. So the question is, given all of its downsides, why did the Indian government continue to use DDT and allow its use? Well, there were reasons, right? It was really effective for malaria control. It was incredibly cost-effective. There weren’t other good alternatives. So fundamentally, there was a trade-off which said that the harms done by DDT were, in this case, worth bearing given the benefits it provided, certainly within India. And I think it’s really important for us to recognize that particular tension and what that exploration should look like.
This is a hugely powerful technology. The things that have been happening in days, not even weeks, are really amazing. They’re somewhat jaw-dropping to me. And as I reflect on how hard it is to keep up with what’s going on, I see, on the one hand, the myriad of new applications and experiments taking place, and on the other, the really legitimate concerns we might have around ethics, inclusion, and just outcomes from this technology, but also the way it is pushing very hard on our understanding of things like copyright and privacy. I think we have to hold all of those things in our heads. We don’t want an unthinking embrace, but we do want to start to look at this with deep analysis, some imagination, and some sense of humanity.
I do also think we should go back to Copernicus, for the sake of argument, who inverted worldviews and forced us to think really differently, and Darwin similarly, and ask to what extent the system in which we are operating may start to change because of the nature of these sorts of discoveries. Well, thanks for tuning in. If you want to truly grasp the ins and outs of AI, visit www.exponentialview.co where I share expert insights with hundreds of thousands of leaders each week.