AZEEM AZHAR: Hi there, I’m Azeem Azhar. For the past decade, I’ve studied exponential technologies, their emergence, rapid uptake, and the opportunities they create. I wrote a book about this in 2021, it’s called The Exponential Age. Even with my expertise, I sometimes find it challenging to keep up with the fast pace of change in the field of artificial intelligence, and that’s why I’m excited to share a series of weekly insights with you where we can delve into some of the most intriguing questions about AI. In these first reflections, I will speak about why large language models can be considered a general purpose technology, meaning the kinds of technologies that can fundamentally change our economy. I recorded this initially for members of my Exponential View newsletter. If you want to stay in touch and hear from me weekly, go to www.exponentialview.co. Now, let’s jump into a brief reflection on two questions.
One, can we consider large language models to be general purpose technologies? And if so, are the conditions right for a technological revolution? So, GPTs, or general purpose technologies, are these really powerful technologies that tend to fundamentally change the cost structure of an economy and have really, really long-term effects in restructuring and reorganizing it, with concomitant changes to the way in which we live our lives and the way that our politics works. So, some good examples of that: steam power, electricity. Now, I go into how GPTs tend to change society in quite simple detail in my book, Exponential if you are outside of the US or The Exponential Age if you are in the US. But there are other great people you can refer to. One of my favorites is the economist Carlota Perez, and I’m gonna use her framework today to look at this question about LLMs and their potential impact.
So, in Carlota’s framework, she says that technological revolutions get driven by three dynamics. The first is the arrival of a multipurpose technology that can support new industries. The second is the arrival of a cheap key input that changes the relative cost structure. And the third, as a consequence, is the delivery of one or more new infrastructures. So, let’s look at this in a historical context. Take the steam revolution, right? Steam-driven machinery and processes: those were the multipurpose technologies. The key input to that steam revolution was the arrival of cheap coal. It started to be mined and extracted in England in the 17th century, and it was really the foundational shift in the energetic balance of modern economies, and we lived with it for a few hundred years after that. The infrastructures that got built around that were railroads, and railroads were really important in reducing the cost of communication.
And they’re implicated not in the formation of nation states, but in the firming up of the notion of the nation state, as well as the expansion, of course, of market capitalism. Now let’s think about LLMs and how they might fit into this framework. So the first question is, is this a multipurpose technology? The evidence is early, but it suggests that it is likely to be a multipurpose technology. We can see uses of LLMs in every industry. We can see them used in many, many different ways. I’ve personally used LLMs to generate code. I’ve had my LLM write me a script on my Mac to clean messy CSV files. I’ve also used LLMs to help me grammar check and spell check documents. I’ve used LLMs to write summaries of documents for me. I’ve used them to help me in analytical research, and other people are demonstrating and experiencing the breadth of use for LLMs.
So, I think we can argue right now these are multipurpose technologies. Now, the second part of her analytic is, is there a cheap key input? And the cheap key input that we see, of course, is computation and digital data, which is closely connected to computation. Computation is much, much cheaper now than it ever has been. It’s also more expensive today than it ever will be. And we need to put some of these numbers in context. So we hear that the total CapEx that OpenAI has allocated to build this new supercomputer for training LLMs with Nvidia is in the order of $225 million. Now, if you’re an individual householder, that’s a lot of money. If you’re an economy, it’s nothing. If you are a large company, it’s nothing. $225 million is not a huge amount of money. That is cheap. It’s a cheap key input. So we have that cheap key input.
And then the question is, are we seeing the signs of one or more new infrastructures building up around these technologies? Now, infrastructures tend to take a little bit of time to build out around a technology, understandably, and it tends to be a little bit of an iterative process until you get to that stable environment. But we had existing infrastructures based on the internet and mobile devices and so on, on which we were able to quite rapidly bootstrap these LLMs. And I think that’s slightly different to having to wait for infrastructures to be built out. But I also think we’re starting to see a new configuration, a new infrastructural configuration, in terms of the relationship between how these LLMs will stabilize and the way in which we’ll interact with them. What I mean is that there are lots of systems and functions that exist already at an infrastructural layer that matter to us in these digital economies that might start to matter much less, either because they get completely abstracted away through automation or because we just bypass them by connecting to these new tools.
So, we’re starting to see what the shape of these new infrastructures might be. When you are testing whether you are going through a technological revolution, it’s quite hard to do so a priori, right? Because all of our models are historical models, based on the fact that there was evidence and we find some kind of model that roughly fits the facts from the things that we’ve seen. And that’s what Carlota’s model is quite good for. But if we look at that model and ask, well, what could its predictive value be? I think we start to see the outlines of what could be a pretty remarkable technological revolution. What does that actually mean? It means a fundamental shift in the ways in which economies work. In general, technological revolutions have resulted in larger economies, but they also have really, really significant political and social impact, because they enable new styles of political organization, like the nation state or the multinational corporation, or they challenge existing methods of behavior and organization.
So, my net net on all of this is that yes, LLMs are likely to be general purpose technologies, GPTs. There we go. They have the characteristics of the kind of technologies that will instigate a really significant transition, a paradigmatic transition, one that will affect the structure, the nature, the pace, the scale of our economies on the one hand, but will also naturally bring significant social change as well. Well, thanks for tuning in. If you want to truly grasp the ins and outs of AI, visit www.exponentialview.co where I share expert insights with hundreds of thousands of leaders each week.