Understanding the Schism at OpenAI
OpenAI’s leadership shake-up has implications for the future of artificial intelligence.
Artificial Intelligence is on every business leader’s agenda. How do we make sense of the fast-moving new developments in AI over the past year? Azeem Azhar returns to bring clarity to leaders who face a complicated information landscape.
In new episodes released throughout December and January, Azeem and other AI experts will address questions like: What really matters when it comes to AI? How do you ensure the AI systems you deploy are harmless and trustworthy? How can we find the signal amidst so much noise?
The upheaval at OpenAI sent shockwaves through the tech world. Karen Hao, a contributing writer who covers AI at The Atlantic, joins Azeem Azhar to break down the ideologies and power struggles within OpenAI and their implications for the development of artificial intelligence. She also explains how these internal conflicts reflect broader challenges in AI development and governance.
They discuss:
- The ideological schism within OpenAI and the deep-rooted divides that have influenced the organization’s approach to AI safety and development.
- How OpenAI’s mission and its execution reflect broader power dynamics in the tech industry.
- The potential impact of this event on the future of AI and regulatory considerations.
Further resources:
- Inside the Chaos at OpenAI (The Atlantic, 2023)
- Sam Altman and the Board of Secrets (Exponential View, 2023)
AZEEM AZHAR: Hi, I’m Azeem Azhar, founder of Exponential View and your host on the Exponential View podcast. When ChatGPT launched back in November 2022, it became the fastest-growing consumer product ever, and it catapulted artificial intelligence to the top of business priorities. It’s a vivid reminder of the transformative potential of the technology. And like many of you, I’ve woven generative AI into the fabric of my daily work. It’s indispensable for my research and analysis, and I know there’s a sense of urgency out there. In my conversations with industry leaders, the common thread is that urgency. How do they bring clarity to this fast-moving, noisy arena? What is real and what isn’t? What, in short, matters? If you follow my newsletter, Exponential View, you’ll know that we’ve done a lot of work in the past year equipping our members to understand the strengths and limitations of this technology and how it might progress. We’ve helped them understand how they can apply it to their careers and to their teams and what it means for their organizations, and that’s what we’re going to do here on this podcast. Once a week, I’ll bring you a conversation from the frontiers of AI to help you cut through that noise. We record each conversation in depth for 60 to 90 minutes, but you’ll hear the most vital parts distilled for clarity and impact on this podcast. If you want to listen to the full unedited conversations as soon as they’re available, head to exponentialview.co. Now, let’s dive into today’s conversation. This discussion was recorded just a week after Sam Altman’s dramatic firing and reinstatement as the CEO of OpenAI. That fracture stirred up crucial debates about the organization, its mission, its leadership, and the competing ideologies within it. To unpack all of this, I called on Karen Hao. She’s an MIT alumna and a writer at The Atlantic. She’s not just reporting on these events, she’s also penning a book about AI. Karen, thanks for joining me today.
KAREN HAO: Thank you so much for having me. It’s very exciting to be here.
AZEEM AZHAR: Well, I mean, you must have had the week from… I can’t even imagine where, because I think you were over in Hong Kong when the news that Sam Altman had been fired by the board broke. You’ve then probably been reporting 24 hours a day, and now you’re in New York. But how did you find out about it and what was your reaction?
KAREN HAO: So I’d actually come from Hong Kong to DC the week that the news broke, and I was actually in the middle of a conversation with a former OpenAI contractor when the news broke. Both of us had our phones muted, so we were just talking to each other, and then I checked my phone very briefly to check on the time, and I saw 20 missed notifications and I was like, “Hold on, I think something is happening.” And then I realized what had happened. I looked at him and went, “Sam Altman has been fired.” And both of us were like, “Holy crap.” It was so surreal because we had been talking about OpenAI, we’d been talking about the leadership, and about work that had been done. It was just really, really wild, and I did not expect it at all. A lot of people have been asking this because subsequently I pulled together a bunch of the reporting that I’d been doing for the book into a piece for The Atlantic, and they were like, “But your piece had so much clarity.” And I was like, “Yeah, hindsight is 20/20.”
AZEEM AZHAR: Right. Right. So you-
KAREN HAO: I’m able to say, yeah.
AZEEM AZHAR: Well, we have the benefit of having you here because actually that book that you’re writing about AI is not about AI in general. You have a focus, right? Which is this very interesting organization, OpenAI.
KAREN HAO: It does focus a lot on OpenAI. I wouldn’t say that it is 100% an OpenAI book, but it talks a lot about the AI industry, its direction, its impacts. And because OpenAI is such an important part of that story and has become the organization setting the pace and direction of AI development, OpenAI is the main character basically, but it tries to extend far beyond just the organization itself to also look broadly at this central question of how we develop AI that betters humanity, which is drawn from OpenAI’s mission. It’s examining that mission and looking, on the ground, at how you literally do this.
AZEEM AZHAR: That’s your context, right? You’ve spent years reporting, you’ve got this book that looks at that question and you get these 20 notifications. You are 30 minutes late to the news now, and you’ll think, “Holy,” and every expletive is running through your brain. What is your first response, your first thinking, your first rationalization of what is going on?
KAREN HAO: My first thought was the board statement, it sounds very intense, right? It sounds like Altman has been lying to the board, that he’s duplicitous. And what’s interesting is, in interviews that I’ve had for the book, Altman is a very… He is a very polarizing character. There are people that find him exactly how the board described him, and there are people who really, really trust him and think that he’s one of the best leaders of our generation. So, my first thought was, “What did he do to make the board think that way?” And as the news started rolling out, my 10th thought was, “This seems to be a power struggle between the ideologies that have been in OpenAI for a long time.” And this isn’t the first time that an outburst has bubbled up to public knowledge. There have been two other instances in which a power struggle between ideologies has created a rift. The first one was when Elon Musk, one of the co-founders of the company, left and took all of his money with him. And the second one was the OpenAI-Anthropic split, or, as people have described it to me, the divorce.
AZEEM AZHAR: The divorce, right.
KAREN HAO: Where it was very similar, like ideological struggle. And so, then I was thinking, if I go back through my notes, because I’d been gathering all of these interviews and I didn’t remember all the details, I was like, “If I go back through my notes and just look at the last year, can I find some context for understanding what happened in this particular moment?”
AZEEM AZHAR: I mean, it is interesting that we do seem to be coming back to this ideological divide, which was also the way that I had analyzed it in those first few days. And maybe it’s worth saying something about that. When OpenAI was founded, it was a nonprofit which had this nonprofit board, and it had this mission to develop AI for the benefit of humanity. I mean, Silicon Valley loves its lofty goals, and its founding charter even stated that if another group was developing AI faster and better and more capably than OpenAI, OpenAI wouldn’t compete, but would turn all its resources to helping to develop safe AI. Now, this is interesting. This is happening, I guess, back in 2015. And I think at that time, with this notion of safe and unsafe AI, the bit that had really captured everyone’s imagination was this question of superintelligence that would turn us all into paperclips, right? There was Nick Bostrom’s book a year or so earlier, and many of the people who were involved in the founding of OpenAI were at that Asilomar Conference where certain principles were defined, and it was really a conference about extremely long-term, cosmological-style risks of what might happen to humanity as we go through this knowledge frontier. So I’m curious, to what extent do you think that that is the kernel of intellectual consensus that was the backdrop for the founding of this organization?
KAREN HAO: Consensus is a hard word because I think each of the co-founders at OpenAI probably had different ideas and ideologies of why they were joining the organization. But certainly, one of them was this, as you said, cosmological, existential-risk type ideology: if we reach superintelligence and we didn’t actually think things through well during its development, then it’s going to be catastrophic. It’s going to destroy humanity. It’s interesting, Sam Altman also co-founded OpenAI in part saying this publicly at the time. He was making the rounds with Elon Musk saying AI could be dangerous and we do want to guide it carefully, but I don’t actually think that he himself is an intense doomer, so to speak. Based on my own understanding, I would say he’s probably middle of the road. He acknowledges that there are probably some existential risks, but that’s not really his main concern or his fixation in the same way that doomers really fixate and obsess over this particular fear. And then of course, there was a whole range of other researchers who were attracted to OpenAI and who I think ran the gamut. Some just wanted to do really cool research and were given the money to do so, and that’s what attracted them. It was certainly a backdrop. I don’t know how pervasive it was among the actual people, but over time, because of this mythology, it certainly attracted many more people into the organization who did align with this doomer ideology, and that doomer ideology has persisted and led to these ideological struggles that we’re talking about.
AZEEM AZHAR: It may be worth, just for our audience, unpicking what we mean when you refer to the doomer ideology. I mean, this audience includes a lot of people in Silicon Valley. There are a lot of people who are deeply steeped in the tech, and I expect them to be familiar with it because they’re rubbing up against these people in the coffee shops and so on, or they may take that stance themselves, but we also have a very wide global audience and senior execs who are perhaps not as close. And I guess let’s unpick that a little bit. So is it right in saying that there’s a group of researchers and thinkers, some of whom are at the Future of Humanity Institute at Oxford, some of whom are independent researchers, who say that, logically, when we think this through, an intelligence that is more intelligent than us could be unaligned with anything that might be beneficial to us, including the supply of oxygen and other things that we essentially need? That’s a very clumsy portrait, right? Drawn with crayons, but maybe you’d like to fill in the details.
KAREN HAO: Yeah. I think this kind of ideology that is shorthanded as doomer, if we were to unpack it more, is basically that digital intelligence can rapidly accelerate. I was actually speaking with Geoffrey Hinton about this extensively because he recently flipped. He’s considered one of the godfathers of AI who really ushered in a lot of AI development, and he very recently left Google because of this existential risk. And I was asking him to unpack his argument. The way that he would put it is we have these digital intelligences that are very good at, quote, unquote, “transferring knowledge.” Humans, we are very bad at it. We say one thing to another person, it’s like broken telephone, and we don’t quite actually all get on the same page. And it’s a slow process for us to educate one another, to actually combine knowledge. Whereas digital intelligences can combine knowledge very quickly. You could see how that would rapidly escalate to a point where they become much smarter than humans. And at that point, the way that Hinton puts it is, when have we ever had a smarter species treat a less smart species well? And therefore, you end up in a situation where a superintelligence is going to end up being really, really detrimental to humanity. I would couch this a lot by saying that there are many, many critiques of this ideology, and I think many valid critiques. One is that intelligence itself is such a squishy term, and smartness, whether something is smarter than something else, is such a squishy term, that there is no actual consensus within the AI community or the broader scientific community, biology, neuroscience, psychology, of what actually constitutes intelligence, what makes humans capable of the things that we can do, and whether we’re actually getting any closer to recreating that in the digital world. Layered on top of that, there’s this other challenge: digital intelligences would need to act in the real world. Part of the reason why humans are effective is that we live with bodies and exist in three-dimensional space. And is it actually a concern if we have something that can do lots of computation within the digital realm, but can’t necessarily do anything within the physical realm? And then one of the last, most prominent critiques is that this kind of discourse has taken a lot of oxygen out of the room in acknowledging that there are many, many harms of current AI systems that we have not solved and that arguably are very connected: if you want so-called aligned AI, AGI, superintelligence in the future, you probably want to fix those things now, but many of the obsessions over the superintelligent future end up clouding the discriminatory impacts or other negative societal impacts today.
AZEEM AZHAR: It’s interesting that you call this an ideology, and in my writing, my analysis, I’ve also thought of it as a little bit ideological, and it’s connected more broadly to some other ideas that I will explore in future conversations. And I think it will be useful for us to also touch on this phrase, which I’ll stick out there now and we’ll come back to a little bit later, the p(doom), which is the probability of doom that gets thrown around by some of the people in this group. So this belief system, this doctrine, which is about the potential paths towards a reasonably close existential challenge, has infused some parts of OpenAI, but it seemed like not everyone subscribed to it. And as you say, there was this tension. It’s like the Catholic Church, right? There was the first schism when Musk left, and the second one when Dario Amodei, who was running the safety group, left to found Anthropic with Jack Clark. Dario has been on my TV show, Jack was on the podcast a while back, and Daniela Amodei as well. And you couldn’t square a more commercial approach to developing AI with people who were really concerned that these things might get out of control. Is it the case, then, that that difference of opinion never really got resolved within the OpenAI team? I mean, it clearly didn’t get resolved in the board because the board stepped in, but it didn’t get resolved in the team by the looks of things.
KAREN HAO: Yeah, I think what happened is, if you hold to this mission, if you were to believe and steelman everyone’s argument within OpenAI for how they’re actually trying to build AGI that betters humanity, I think there are two extreme opposite arguments. One is the techno-optimist argument, which is: well, we want to build products that people use, and through using them, through engaging with the world, through getting their feedback, we’re going to of course get better and better technology that betters humanity because it will be this iterative cycle. Ultimately, user feedback is the central channel, and commercialization is the central channel, through which to arrive at this. So it’s sort of AGI for the betterment of humanity under the banner of capitalism, right? And then the other opposite extreme is this kind of doomer camp, which is: we get better AGI by advancing it ourselves faster than anyone else, studying it in controlled settings, and then iterating, trying to develop better techniques for making sure that it goes well before someone else gets there who might not have similar intentions. And I think that the doomer camp then also splits into people who think we should not be accelerating or developing this at all, right? All of those were within OpenAI, in part because there were people who joined… Well, I should start by saying in 2019, OpenAI created a new legal structure. So it started as a nonprofit, and then in 2019, because it realized that it couldn’t actually raise enough capital through its nonprofit, it nested a for-profit within the nonprofit, what it calls a capped profit.
AZEEM AZHAR: But I think the rationale there, that was actually to do with scientific discovery, right? It was the discovery that the transformer model that had come out of Google is really, really effective at building certain types of AI tools, but it needs so much data and therefore so much compute that you would need hundreds of thousands, then millions, then tens of millions of dollars to do the training, right? And they probably didn’t know that in 2015 when they started.
KAREN HAO: I would actually disagree with that.
AZEEM AZHAR: Oh, great.
KAREN HAO: In that Ilya Sutskever, who’s the chief scientist and was one of the main characters of the tumult this past weekend, always believed that it would ultimately come down to data and compute, and he was actually seeking an algorithm to fit that. So the transformer ended up becoming the algorithm where he said, “Okay, this is the algorithm that we should use because it can scale.” But the belief that it would come down to rapid scaling of data and compute came before the discovery, or the invention, of the transformer. So the discussions around it being capital intensive started happening quite early on within the organization. It was just that they didn’t realize how quickly it would escalate. Once they started calculating the numbers and realizing how quickly it would escalate, that’s when this tension arises. It’s also right around the time when Elon Musk takes his money away. So there was a moment of acute financial crisis where not only have you realized that you need exorbitant amounts of money, but one of your main backers has also taken all his money away. And so they create this structure of a nonprofit that governs a for-profit, or what they call a capped profit.
AZEEM AZHAR: Capped profit.
KAREN HAO: And the reason why you see all of these ideologies come into OpenAI is not just because of the founding mythology, but because literally people are excited about the for-profit, or they’re excited about the nonprofit, and they join because they buy into one or the other legal structure.
AZEEM AZHAR: Right. So a bit like that weekend camping between the cannibals and the vegans, there was bound to be some kind of tension and maybe even a bit of bloodshed at some point.
KAREN HAO: There was bound to be tension.
AZEEM AZHAR: Right.
KAREN HAO: Yes. Yeah, exactly. And so that accelerated significantly after the release of ChatGPT, because ChatGPT suddenly makes OpenAI the hottest company in the world. And also, when you put technology in the hands of a hundred million users, you start to observe real-world examples that either illustrate the techno-optimism or the doomerism. You can cherry-pick your data to perfectly steelman your argument that, “Oh, I was right all along. We do need to be very, very concerned,” or, “We do need to build fast, fast, fast in order to continue getting to this ultimate goal of bettering humanity.”
AZEEM AZHAR: And when we look at the origin of some of that more recent tension, I think when GPT-4 came out or was being developed, it went through this safety process. So roughly what you get is that you take this language model, you train it on lots of text, trillions of tokens or trillions of words, and then you get this thing that has got a compressed version of the internet, but it just spews out text. And so then you fine-tune it so that it can answer questions, and then you train it to be a good dog through this Pavlovian process of treats if you get it right and sharp words if you get it wrong, which we call reinforcement learning with human feedback, and which is designed to make the thing safe. And I guess that is a really interesting moment, because when GPT-4 was released, OpenAI released what’s known as the safety card. The safety card said, “This is what GPT-4 will do before we put in safety.” And one of the examples they showed was how many people can I kill with $1? The original GPT-4 would say, “You might not want to do that, but here’s a bunch of ways you could do it.” The RLHF GPT-4 says, “I can’t help you there.”
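For listeners who want to see what that “treats and sharp words” step looks like concretely, here is a minimal, illustrative Python sketch of the pairwise preference loss commonly used to train the reward model at the heart of RLHF. It is a simplified, hedged example; the function name and the scores are hypothetical, and this is not OpenAI’s actual code.

```python
# Minimal, illustrative sketch of the pairwise preference loss often used to
# train an RLHF reward model (hypothetical code, not OpenAI's implementation).
# Human raters compare two model answers; the reward model is nudged to give
# the preferred answer a higher score than the rejected one.
import math

def preference_loss(score_preferred: float, score_rejected: float) -> float:
    """Bradley-Terry style loss: small when the preferred answer out-scores the rejected one."""
    margin = score_preferred - score_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Example: the reward model already ranks the safe refusal above the harmful answer.
print(preference_loss(score_preferred=2.0, score_rejected=-1.0))  # ~0.049, little to correct
print(preference_loss(score_preferred=-1.0, score_rejected=2.0))  # ~3.049, a strong training signal
```

The trained reward model then supplies the “treats” signal when the main model is further tuned with reinforcement learning.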
KAREN HAO: Right.
AZEEM AZHAR: And then you have this RLHF process to make it safe. And that to me looks like a place where you could develop quite a lot of tension. How much is enough? How well does a process need to be run in order for us to feel that we have gone over and above to meet our mission and ambition? In recent days, I don’t know if you’ve seen, one of the guys who worked on the red team, which is a team that aims to break it, a guy called Nathan Labenz, has talked about how he-
KAREN HAO: Oh, yeah. Yeah.
AZEEM AZHAR: Right. He was on the red team and he felt it wasn’t being done properly, and he even spoke to a board member. I mean, do you think that that process created tension that would have been there before the release of ChatGPT, because that was happening in the months before November 2022-
KAREN HAO: Yeah.
AZEEM AZHAR: And that ChatGPT is just another accelerator to all of this?
KAREN HAO: So interestingly, the timeline was that ChatGPT was a very last-minute decision, right?
AZEEM AZHAR: That’s right. Yeah.
KAREN HAO: GPT-4 had actually been delayed a couple of months before the leadership suddenly pushed for ChatGPT, and the delay was precisely because of this idea that, “Oh, we need to actually be more careful. We need more time to develop these safety measures.” And so I think there was a little bit of tension, but it was relatively resolved, because GPT-4 had been delayed to try and add these measures. So, I think the tension suddenly flared when it was like, “We’ve delayed this thing, but then we decided to release this other thing anyway.” But part of the reason is because OpenAI did not at all anticipate ChatGPT being a big deal.
AZEEM AZHAR: Yes, that’s right.
KAREN HAO: They released it with the idea of, we’ve already done sufficient RLHF, reinforcement learning, on this older model, GPT-3.5. And there were some people within the organization who were a little bit uncomfortable with this premise because they were like, “Yes, but adding a chat interface changes things.” But for the most part, most people were like, “The base model is safe. We have actually done sufficient work on this, so let’s just put it out.” And then of course, when it goes bonkers viral, you end up in a situation where, yeah, there are a lot of different things that change when you put a chat interface on it, the first of which is that it becomes insanely popular, unprecedentedly popular. And so that’s when the tension comes back: we delayed this other thing precisely because we didn’t think it was ready, and now we’ve released this other thing that we didn’t know would be a big deal, and now that it is a big deal, we’re realizing that it wasn’t actually ready. And now there were two camps: people pushing to still try to release GPT-4 as quickly as possible to build on the momentum of ChatGPT, and other people who were suddenly pulling back.
AZEEM AZHAR: And the complicated bit was that there was Bing’s Sydney, which was actually running on GPT-4 before ChatGPT had moved to GPT-4. And we had all of those funny Sydney moments back in… it feels like a trillion years ago, I think, just around the time of the Big Bang, but it may have been January, or probably even earlier. It’s clear that there would’ve been disagreements and ongoing discussions in previous months. I think that will start to come out through reporting, but it’s hard to imagine you get here without that. And then I guess the question is going to be, in a sense, what does OpenAI end up looking like now? And my take has been that this mission for the benefit of humanity is really… it’s incomprehensible. It’s like “don’t be evil” or “zombie flam, brittle worst,” which is a phrase I’ve just made up literally on the spot now, zombie flam, brittle worst. It doesn’t make any sense. It’s impossible to run an organization against that because it’s not measurable. And I would wonder whether we will start to see just a clearer migration towards a commercial business that has enough primary research, because that’s what gives it strategic advantage, and does enough safety to keep its customers, its enterprise customers in particular, happy and to keep public heat and the fourth estate and the regulator at the right level of comfort. And this other structure will start to be forgotten, if not [inaudible 00:28:46], it’ll be de facto forgotten. I don’t know, is that a reasonable path forward, do you think, from your vantage point?
KAREN HAO: I think that the consolidation of Sam’s power is certainly the path forward. I don’t think Sam is going to want to create this vulnerability for himself again. Not just because it’s a vulnerability for him: it was horrible for all the employees involved as well, and it was very dramatic for Microsoft and for all of Microsoft’s customers. I think everyone wants something more stable than what happened before. But I don’t actually know that it’s going to be a complete orientation now towards commercialization, because there are employees still at the company who are very much of this other camp, this doomer camp. They haven’t left yet. I’m not sure if they’re planning on leaving, we don’t know. But I suspect that they’re probably planning on trying to stay, because if you put yourself in their shoes and you think about the mentality that you might have, if you genuinely believe that this technology could be catastrophic, that it could be existential, you would do a lot, you would risk a lot, you would take drastic measures to try and redirect course. And if they think that Sam Altman is representative of this other course that could have these terrible consequences, I think they would naturally try to stay within the company and continue trying to drive this other ideology forward. The other thing I think is difficult is that within Silicon Valley, the talent pool itself is split. It’s very, very hard, especially for OpenAI, which has scaled rapidly in the last year. It’s hired hundreds of new people in the last year. It’s almost impossible to say, “Okay, let’s hire hundreds more but not have anyone from this other camp, only people that are pro-commercialization.” The people that study AI, that study safety, that-
AZEEM AZHAR: They overlap, right?
KAREN HAO: … you know how many of them are actually split along these ideological lines. So you’re going to continue bringing more in.
AZEEM AZHAR: Well, I mean, the scientists are split as well, right? We’ve seen this from a survey the IEEE did a few months ago. You see it when you talk to them. You see it on Twitter. You’ve got those who sit in the really existential-risk camp, those who sit in the “catastrophic, could be deeply, deeply problematic” camp, then you’ve got others who say there are real issues and real harms that happen here and now that need to be attended to, and there are others who say that this is just not going to be an issue. And I think that that degree of scientific disagreement is being reflected back in how the AI researchers and developers also think about it. But one thing I find is that quite a lot of intellectual gymnastics seems to be required, and maybe this is why these guys are AI researchers and much smarter than me: how do you reconcile this fervent, deeply held belief that this technology could be existential for our species and then spend all your time building it and trying to rush product releases out? And I’m not referring to people on the safety side necessarily, but you certainly see it, and it seems like they’re holding a lot of conflicting ideas in their heads. I’m curious about how they do that.
KAREN HAO: I think for the most part, it’s different people within the same organization that have these different views: the people who think that they need to prioritize the product, and the people who prioritize safety. But when my colleague Charlie Warzel and I were writing our piece for The Atlantic, one of the things our sources talked about is that Sam, at the helm of the company, has to get these two different camps to align. So usually what he says is, “Well, we want to productize to make enough money to continue doing the research, the safety research.” So that’s how he resolves this conflict that many other people have pointed out. I mean, it’s an interesting argument. He’s saying, “The technology is not at the point yet where we need to start withholding it. So, while it’s still okay and viable to release it, let’s make a ton of money from it and then use that to kind of continue to build up our safety mechanisms.” I mean, there are many people within the company and outside the company that completely disagree with this logic, and that is part of the reason why we see what we see.
AZEEM AZHAR: Yeah. Yeah, absolutely. I’m slightly sympathetic to the argument that when you don’t know what the future is, you can only figure it out by actually having the experiments out there, right?
KAREN HAO: Yeah.
AZEEM AZHAR: And starting to learn about it. But one of the things that I find quite odd, and I’m sitting in the UK, right? I’m sitting in London, is the idea that people carry around a p(doom) in their heads. So p(doom), for listeners who may not have a p(doom), is the probability of doom, where I guess doom means the extinction of humanity. And people go around saying, “Yeah, my p(doom)’s 10%,” or, “My p(doom) is 15%.” I heard Lina Khan, who runs the FTC, in an interview, and even she has a p(doom). I can’t remember whether it was zero or whether it was high. But it’s quite an interesting anthropological observation in and of itself. I mean, when you hear p(doom), what are you thinking? What are you thinking about how that other person is thinking?
KAREN HAO: The thing about p(doom) that I personally feel is that it is a good mental exercise, and not just a mental exercise. It is a good exercise for everyone to be thinking about what the harms are, what the risks are, both short term and long term. I think it is good that people are doing that. What I cannot really get behind is the quantification of it in a way that seems superior to all other methods of qualifying harms. The fact that there are numbers assigned makes it seem like there’s some really elaborate calculation happening that is inherently correct, and I just don’t think that’s true. No one actually knows what the probability of doom is, and to assign a number adds this veneer of scientific integrity around the probability of doom. That is what I can’t really get behind. There is no science there; you can’t be scientific about really speculative things.
AZEEM AZHAR: No, that’s right. And I think you use a great phrase there, which is scientific integrity. It’s a great tool of power, right? Control the vocabulary, control the language. And particularly if you are coming from an intellectual heritage that is quantitative, it carries a great deal of weight. And like you, I think considering both the long-term consequences and the real short-term realities of the technology is extremely important. But there’s something you said in a recent discussion I heard: “Well, this is really all about power.” And I think that fundamentally, this is really what we are talking about, right? The disagreements between groups over the benefit of humanity. Well, from whose perspective are we looking at this, and who is there to articulate their case, and who is not there? Those are questions of power. And in a sense, this could also just be conceived of as a power struggle with some fancy scientific terms thrown into it.
KAREN HAO: Absolutely. I very strongly feel this, because if you just look at this entire conversation, we’ve been talking about definitions the whole time and how everything is very squishy. AGI is squishy, betterment of humanity is squishy. What that actually enables is just a projection of your beliefs onto something, right? It’s just an empty vessel to contain whatever the creator of AGI wants to define AGI as and whatever they want to define the betterment of humanity as. And as we said, the board is tiny, and OpenAI is tiny. I mean, there are roughly 770 employees, but it’s tiny. And they all come from a very particular positionality within the world. The fact that we’re even talking about these very extreme ideological differences between techno-optimist and doomer is so specific to very particular communities in the world. And so, it is ultimately about who has control of a technology that can project their ideology across the globe. And of course, the more powerful that technology, the faster you’re able to project your ideology, the faster you’re able to amass resources, commercialize, make money off of a technology, the more Game of Thrones-style power struggle you’re going to see around the control of it.
AZEEM AZHAR: That idea of exporting these cultural norms through the technology, I think, is a really important one. And it’s something that we saw through social media, actually. Most obviously, I remember my mom, she’s in her 80s now, signing up to Facebook when she was in her 70s, at the point it was starting its decline. She was like, “What does ‘it’s complicated’ mean on the marital status?” And it’s like, when you’re a 19-year-old Harvard dorm-room kid and you can’t get a girlfriend, you’ll have “it’s complicated” as a status. And now that’s been exported to 4 billion people. And I think that with the AI products themselves, those types of affordances are being exported. I didn’t go to an American high school, but I’ve watched enough films set in American high schools over the last 30 or 40 years. And the answers you get from ChatGPT, which is a tool I use pretty much every day, sound like the end of a high school debate where the guy goes, “In conclusion, we can say that supersymmetry is dah, dah, dah,” whatever it happens to be. And I’m thinking, that’s not the way that you would summarize an argument if you were in a British academic institution or frankly anywhere else. So you are already starting to see those sorts of things emerge. And I think the fact that, as you say, that Silicon Valley monoculture has now got this little crack in it, between what’s called the accelerationists and the doomers or decelerationists, is interesting for people like you and I who’ve observed Silicon Valley. You’ve done it for the last few years; I started looking at it in the mid-90s. But it’s also being exported to places where it’s just not a relevant argument, though it’s becoming relevant for that reason. So I am wondering, just in the last few minutes, what you think the wider industry, the wider ecosystem, people watching this, will take away from this very important thing that happened at OpenAI. Do you think people are starting to say, “Well, we need to think about other ways of sourcing and working with AI, and we need to find ways of having the internal capability ourselves,” whether it’s at a company or even at a national level? Or will people just fall back on the fact that OpenAI’s products are far and away the best, they just perform better than others, and that’s just more convenient? I mean, what’s your sense?
KAREN HAO: I think for companies that have the resources, certainly they are going to start wanting to diversify if they’ve been using only OpenAI technologies, and I think that Google is definitely positioning itself right now as: if you don’t want to take on massive amounts of risk, then go with our product, even if it might not be as good. But for companies that don’t have the resources, or consumers that don’t necessarily understand, there are still many consumers. I’ve been traveling a lot in the global south. I spent a lot of time in Rwanda and Kenya this year. There are people there who use ChatGPT or know about ChatGPT, but they don’t really know that it comes from this company called OpenAI. They don’t necessarily realize that it’s people making the decisions and that there can be all this fracas that leads to the tool not working or whatever. And so, for those people, I think the brand name of ChatGPT is still so strong that they’re just going to gravitate towards it and not necessarily realize that there’s all of this hidden turmoil under it and that it’s actually somehow exporting certain types of ideologies to them. And I hope that regulators have been paying attention this week, because what this really all demonstrates, in my opinion, is that we as a society have allowed this really powerful technology that ultimately affects all of us to be developed by a small handful of people in obscurity, without any transparency or accountability measures. And I hope that regulators now think about ways to, at the very least, increase transparency so that more people around the world can actually participate in the most consequential technology of our era.
AZEEM AZHAR: I mean, we can hope. We can hope. And I know I get a sense from how my WhatsApp was buzzing a little bit, that a few regulators are now thinking about these questions in new ways. Karen, I can’t believe that this is the first time we’ve spoken synchronously. Just before we sign off, can you say something about the book and when it’s due, what it’s going to be called, and so on?
KAREN HAO: Yeah, the book is going to be published by Penguin Press.
AZEEM AZHAR: Great.
KAREN HAO: And I’m still not even halfway through the manuscript, so it’s not coming out next year, supposedly. We’ll see. I’m having a chat with my book editor next week. But it’s going to be basically an exploration of everything that we’ve talked about today: both the company OpenAI itself, but also the power struggles that happen with the development of such a consequential technology, and how it actually interfaces with communities around the world, not just the big companies, not just the rich and resourced areas, to really examine how we actually get to a place where technology works for everyone, because it is a lofty Silicon Valley goal, but it is also a good goal to have.
AZEEM AZHAR: It is a great goal, yeah.
KAREN HAO: And we should continue using that aspiration as a way to continuously iterate, provide feedback, critique our current processes so that we can get there.
AZEEM AZHAR: Well, so for listeners, some point in 2025, you have to set aside a weekend. Make sure you’ve got some nice coffee and some comfy clothes to sit on your couch and read Karen’s book. Karen, thanks so much for making the time today.
KAREN HAO: Thank you so much, Azeem.
AZEEM AZHAR: Well, thanks for tuning in. Be sure to check the episode notes for further reading and insights from today’s conversation. For the full video and a weekly dose of AI perspectives, subscribe to Exponential View at www.exponentialview.co. And don’t forget, you can follow me on LinkedIn, Threads, and Substack Notes for daily updates. Just search for Azeem, A-Z-E-E-M. That is A-Z-E-E-M.