Claude 3: Is Nathan too bought into the hype?
**Tom** [00:00:01]: Hi, Nate.
**Nate** [00:00:03]: Hey, Tom.
**Tom** [00:00:05]: In the world of open source, there's a timeless koan. In the forest of code, a solitary programmer pondered a problem, lost in the labyrinth of their own thoughts. Seeking clarity, they reached out to the wider community.
**Tom** [00:00:24]: A fellow coder with no obligation but a shared love for creation offered guidance.
**Tom** [00:00:31]: Together, they danced in the digital ether, exchanging ideas like leaves in the wind. In the end, the problem remained unsolved, yet the programmer found peace in the journey. For in the embrace of open source, enlightenment isn't in the destination, but in the connections forged along the way. This is OpenAI's ChatGPT's manifestation of the spirit of open source.
**Nate** [00:01:05]: Yeah, rumor is they're going to change to closed AI because Elon said so. But that's really not the discussion we're going to have. We may be able to wrap up with that.
**Tom** [00:01:16]: I think it's mostly smoke and mirrors. The order of the rundown?
**Nate** [00:01:20]: Well, I'm looking at it. Tom, we've got to get you on this webcam. You're on like potato-quality video right now. But the order of the rundown is Claude 3, and the arms race that the doomers have created themselves after being so worried about it. And some more on understanding where openness in AI is going. I now get invited to bougie events about openness in AI, and they are interesting. They're not a waste of time. It feels in a way like it's the room where open source AI will be defined. Sort of like there was a much smaller and much less publicly interesting room where open source software was defined, which was much more of a niche thing at the time compared to what AI is now. Everyone and their grandma knows what AI is or has an opinion on AI. And that'll make coming to an agreement on a definition much harder. That's kind of the point with openness. But I think the Claude 3 stuff is more of our bread and butter. I think essentially the vibes on the Internet are that Claude 3 is better than GPT-4. I have to go on the record as not having actually played around with it much myself. I'm busy, but I don't think that changes much.
**Tom** [00:02:34]: What corroborates those vibes? Is it really just, like, threads?
**Nate** [00:02:38]: It's lots of long Twitter threads from people that I have exchanged ideas with for a while, doing their challenge prompts and trying their best to break Claude 3, and doing side-by-sides with Claude 3 and GPT-4 and Gemini and all of these to try to get a sense of it. I think there's obviously things that people are impressed by Claude 3 on relative to GPT-4. The thing is, GPT-4 was trained in 2022. That's a long time ago. I mean, the original GPT-4; they keep updating it. GPT-4 Turbo was not trained in 2022. So I think that argument is all dumb. I just got got by somebody else's kind of stupid argument. This is the stuff that happens. But the clear thing is that it's at least as good as where GPT-4 Turbo is today. It's comparable. So that raises a lot of interesting arguments about whether we're in an arms race, whether large language models are going to be commoditized, and all the like. I think we're kind of in it now. The spring is always busy with AI. Last year, we got GPT-4 and Llama 2 and all this stuff. The spring is when people are trying to put in the work. And then the summer is going to come and everyone's going to go on vacation again, and we're not going to get any as-big models. Although, wasn't Llama 2 in July? I don't remember when Llama 2 was.
**Tom** [00:03:59]: I wanted to say it was a bit later.
**Nate** [00:04:01]: Late summer, like August, September, things will be slow. But we're in it. We're in the fucking pressure cooker. Every week, you can't miss. You can't sleep in. There's just crazy stuff. Like Anthropic launching Claude 3 at fucking 5 a.m. Pacific time. The psychos didn't even give us time to get a coffee before this was out. But it's really good. And I'm happy for them, even though I think their philosophical stance is in a deeply challenged spot now, in that they just progressed the arms race, which is the real founding worry.
**Tom** [00:04:32]: I mean, I'm no expert in this, but I have training as a historian, and so there's a part of my brain that is always going back to historical parallels of moments like this. When you think about what an arms race is, historically, I think the arms matter a lot less than the race. And what I mean by that is that often the benchmarks, if you want to call them that, across time are a little bit arbitrary. They're some combination of a Schelling point on which there's some consensus of, like, oh, that's impressive, and, on the other side, some kind of perceived impending fear of conflict. Which is interesting. We talk about these companies like they're competitors, but really neither of them is having much trouble raising money.
**Nate** [00:05:24]: Well, most of the money that's raised is... it's interesting, because the M&A market is so bad for tech companies. Most of the big tech companies are investing in all of these kind of as a hedge, since they won't be able to do M&A on them. In the past, most of this investment was from VCs, so it's a really different dynamic. I think Elad Gil, who's a prominent investor, was writing about this in one of his thought pieces on AI, which I was like, that's cool. I know nothing about investing, but it matters. Everybody's in bed with everybody.
**Tom** [00:05:57]: It's kind of incestuous, really, when you think about it and take a step back. These are mostly the same, not just the same companies, but even the same people, you know. Which, yeah, we don't need to get into the whole Musk thing, but it's just obvious what that is.
**Nate** [00:06:14]: It's the same thing.
**Tom** [00:06:16]: It's an example, though, of this kind of thing. To keep going with the historical stuff, the example that came to mind as you were speaking was in the aughts, not the 2000s, but the 1900s. There was this arms race, the dreadnought arms race. I believe the British Navy came out with it. I just said "came out with," which is funny. They didn't release these things like models, but they deployed...
**Nate** [00:06:44]: Elizabeth the Fourth announces the second dreadnought class! Our latest dreadnought just...
**Tom** [00:06:53]: She's got her demo day. No, they deployed a ship called the HMS Dreadnought. And again, I'm neither an expert on battleships per se, but from what I remember (in a previous life I was reading about it for some reason), it was a combination of things that made that ship impressive. It had a larger armament. I believe it had more guns. I believe the guns had a wider degree of what they could shoot at; there's a more technical term, but there was a larger degree of freedom in terms of where you could point them. They were more accurate. They could shoot farther. They were larger; numbers like, oh, it's 12-inch guns or 16-inch guns. I think this is sort of when that shit is starting. The ship itself, I believe, had a more sophisticated profile or a much more robust engine, and so it was able to be deployed for longer periods of time.
**Nate** [00:07:53]: Go out farther from port.
**Tom** [00:07:55]: So basically there was no one thing that made this ship that much better, but the combination of features made other navies around the world take notice and tell themselves, like, oh, shit, we need to get moving on this. Partly, I think, because there was nothing about the ship that couldn't be mimicked. You could sort of immediately see what the British Navy had done to achieve this. And so there was this very, very quick turnaround where the United States, France, basically any other country that considered itself a naval power, or wanted to be considered a naval power, was immediately releasing their versions of what were then called post-dreadnought battleships. So it was almost an overnight thing: you either had a pre-dreadnought fleet or a post-dreadnought fleet in terms of what these things could do.
**Nate** [00:08:49]: Are we in the post-GPT-4 fleet?
**Tom** [00:08:52]: Well, that's interesting. I've been consistent on this in thinking that the real shift was ChatGPT. So it's not even about the model so much as it is the model paired with the interface that really put everybody on notice. It put Google on notice. It put Anthropic on notice. It put a lot of companies on notice of, like, we need to be competing on this just so that we're seen as important and part of the conversation. But then I think there's some interesting things about that history.
**Nate** [00:09:28]: One, with these battleships.
**Tom** [00:09:29]: To my knowledge, and if there are naval historians of the early 20th century who listen to our episodes, I encourage you to reach out and tell me if I'm wrong about this. But my understanding is that in actual fact, none of the dreadnought-class battleships in any major navy actually saw warfare. In other words, an enormous amount of capital, resources, materiel, a number of things, was poured into the creation, the renovation, and the release of these new types of ships, which were perceived to change the state of warfare. And then there was this major irony that even during World War I, none of that particular class of ship actually saw action. So there's that side of it. And then the second part, which is actually even more surprising...
**Nate** [00:10:23]: How do you think these language models are seeing action?
**Tom** [00:10:26]: Well, I think we'll get to that. We have to think about what action means here. I mean, people were steering the ships. They were leaving port. There were these famous grand tours that these ships would go on. Even Brazil had a dreadnought ship. And they were like, look at our dreadnought ship. And it went all around the world. People looked at it, took photos, and people wrote about it. They were like, oh shit, Brazil matters now. So there's one side of it, that it wasn't really clear exactly what these ships were doing to change the state of play. And then the second thing is that, arguably, in hindsight, when you zoom out, these ships were really showing that battleships ended up not mattering at all anymore. That they were really just meant to make certain things navies could do difficult for other navies to stop.
**Nate** [00:11:27]: I really don't know where you're taking this.
**Tom** [00:11:29]: Well, where I'm taking this is what really ended up mattering was...
**Nate** [00:11:31]: Aircraft carriers. Yeah, but what's the analogy? Which thing is the aircraft carrier, and which is the dreadnought that doesn't matter?
**Tom** [00:11:37]: Right, because that's actually what changed the nature of warfare: not immediately, but soon, there was this ability for ships to really just function as floating cities where you could deploy planes. You could deploy anything, right? So the specific dynamics of naval warfare became less important than just mastering the ability to deploy non-trivial parts of your armed forces anywhere in the world. And that's still the world we live in. It's actually kind of remarkable the degree to which naval warfare really has not changed since World War II.
**Nate** [00:12:14]: I would argue that drones, and cheap drones, are potentially going to change that, as asymmetric warfare, because the cost of engaging an aircraft carrier can become too high.
**Tom** [00:12:28]: So that's the point: the dreadnought ships felt transformative, but they actually were not asymmetric. Because what did Germany do when Germany realized, like, oh, we can't compete with this? They were like, let's just make underwater boats. Nobody's really playing in that market. That's just what they called them: undersea boats, Unterseeboote, U-boats for short. That's all submarines are. They're just a way of cheating. In other words, now there are these rumors that we're going to have space warfare happening, and that's the kind of 21st-century equivalent. So, like, I think for me it's...
**Nate** [00:13:03]: Can you tie this all together?
**Tom** [00:13:05]: Well, I think it's an open question for me, what we're seeing right now with this arms race over language models. Because frankly, it's still just deep learning as far as I'm concerned. There's an enormous degree of computation and data behind these things, and not many new ideas, some other innovations like RLHF notwithstanding. This is really just about scale and what scale makes possible. Scaling laws: that still seems to be the world that we're living in here. I'm kind of skeptical that what we're seeing is really an asymmetric mode of competitive industry capability. Even now, I don't really think anyone has mastered the art of monetizing these capabilities. People are using this stuff, but really, what is the business model here? No one really knows, which is why Google is scared shitless about its business model, which is search and hyperlinks. I don't know.
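(For reference, the scaling laws gestured at here are usually written as power-law fits. A minimal sketch of the Chinchilla-style form, where the constants and exponents come from empirical fits and are not claimed anywhere in this conversation:

$$
L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
$$

with $L$ the model's loss, $N$ the parameter count, $D$ the number of training tokens, $E$ the irreducible loss, and $A, B, \alpha, \beta$ fitted constants.)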
**Nate** [00:14:07]: I feel like this is kind of missing the point of the arms race. I think there's more emotional energy in it than just the business model. I do think the business model will follow. It's going to be like many past situations where there's a shakeout and many of the companies fold, but the business models will be found and a few of them will latch onto them. But it's more the energy: these people who were so worried about AI risk that they splintered off into their own company over the pace of progress are literally at the forefront of this pace increase. This pace is a thing that even people closely following the field experience. Those that do not follow the field have this kind of progress fatigue, because all they hear about is the progress and they're not part of it. So they get more fatigue than excitement, probably. It's like when freaking all the news cycles are about Trump: we stop feeling how bad he is, because it's just the next thing and it becomes normal. So people on the outside, I think, are almost made uneasy by AI progress, but internally there is this sort of awe at how fast things are going.
**Tom** [00:15:28]: I'm not quite sure what the parallel is there.
**Nate** [00:15:31]: I think we're just arguing different points. I'm mostly arguing that the point you're making is kind of irrelevant to the fact that Claude 3 is important. You're kind of taking the Gary Marcus angle, which makes me angry, which is: all of this doesn't matter because we're not going to make any money on it and it's all going to implode. And I'm trying to argue that this matters at an emotional level, and that if there is such a rate of progress, it's pretty likely that it's going to continue on some level. When the pace is really high, it's rare that the pace goes from high to zero overnight. It's like the early versions of the iPhone or whatever: the delta between versions was so big, while nowadays the differences between iPhones are really marginal. We're still in the early days, where there's so much room to make the technology better, and I see that as very meaningful.
**Tom** [00:16:23]: Yeah, that could be. I mean, I think my stance is actually much simpler than whatever Gary Marcus may or may not be arguing, which is that I think the passion over Claude 3 is mostly vibes, and so is its improvement, or lack of it, relative to GPT-4. I mean, it's like these companies are kind of...
**Nate** [00:16:45]: Well, if you look at the improvement over Claude 2, it's pretty large.
**Tom** [00:16:48]: Claude 2 is generally accepted as being bad. I think this is maybe connecting to the open source conversation, which is: what is the community that these models are actually catering to? Where is the actual hype and enthusiasm around this stuff? Well, there's a lot of hype, I guess. But where is the actual palpable enthusiasm for what you can do with these things? Clearly, people are feeling this.
**Nate** [00:17:12]: In enterprise and engineering. I use this stuff every day on a technical level. It's definitely better than ChatGPT, and I'll switch to it and become more productive.
**Tom** [00:17:24]: So my question is, how wide does that horizon go? What is the audience for these things? There is an audience, but is the audience actually as wide as we think it is, or as wide as they're acting like it is? I think the answer, at least so far, is unequivocally no.
**Nate** [00:17:42]: I think that in years to decades, it's going to be that almost all information that we consume is mediated in some way by these technologies, whether it was generated by these technologies or it's filtered by these technologies.
**Tom** [00:17:56]: That's an independent axis, isn't it? That's absolutely true; I don't think either of us would disagree with that. We're clearly on the road to almost all information that is generated or consumed being mediated.
**Nate** [00:18:09]: But the reason why it'll all be that way is that it offers value. Regenerating things in the way that you want is valuable. Transforming things into the modality that you want is valuable.
**Tom** [00:18:23]: It does offer value.
**Nate** [00:18:25]: I don't have as much time to read as I do to listen, so I can translate everything into audio and listen to it via these models. Those are all things that I either pay for already or will be paying for, and I think they'll just get wrapped into other tech products.
**Tom** [00:18:45]: I still think the question stands: who does this add value to? Because if the answer is everyone, I would contest that. Television replaced a lot of media in the mid-20th century, to the point where, basically, if you lived in this country, you were sort of expected to watch. I mean, the cultural zeitgeist was transplanted onto television. Television did create value and added value. It transformed markets. It was huge. Most people, though, consumed it passively. They were not content creators.
**Nate** [00:19:23]: But it's also like most of the economy is not driven by individual users. There are individual users, but most of the economy is a few companies that capture a lot of things, and then they turn it into every product that is used by all these Fortune 500 companies to make them more productive. It's not technically everyone, but it could be effectively everyone. It could be effectively most of the economy.
**Nate** [00:19:47]: Well, it's infrastructure.
**Tom** [00:19:50]: Right. I mean, that's what that means. I don't know how... So the question then would be... I think we are maybe talking past each other here, because my point is really more that the value these new capabilities are unlocking remains largely indeterminate, and very liable to be monopolized or siphoned off by a relatively small circle of actors. So as far as I'm concerned, my passion, my enthusiasm for open source, which is significant, comes from the fact that I would like the open source movement, or however open source ends up getting rebranded or redefined, to interface with that dynamic. That's my only point here. So I'm not making any kind of prediction that this is snake oil, or even a claim that it's snake oil. I don't think it's snake oil.
**Nate** [00:20:51]: What do you mean by interface? Like, to have it be independent?
**Tom** [00:20:55]: Well, I think I want open source to be more political. I want open source to be less about a vibes-based community of hobbyists who like playing with models, and more about a structural intervention on how the value that these models are unlocking is able to be accessed and leveraged by larger groups of people across the supply chain. Because right now it's really very small, specific parts of the supply chain where the value is actually being extracted. It's like oil: it's getting out of the ground, and, yeah, sure, you can put rigs on top of those spurts. But it's not interesting to me that a small number of companies is able to do that right now. I would like that to change. And until I can envision that changing, I'm more likely to keep discounting whenever a new model comes out, because then I'm like, sure, this matters in the sense that it's going to make some people more interested than they already are. But it's not...
**Nate** [00:22:05]: To me, that's incommensurate with what I would like open source to become. I mean, the fundamental question is whether or not the capital requirements allow doing this in the open to even work, because of how valuable the assets are.
**Tom** [00:22:21]: Or if just one model gets leaked, then it kind of becomes open. I think now you're naming it, though: it's so expensive, and the value is so stratified. Is value being created, or is value just being transferred and monopolized in new ways? I think it's some combination of the two.
**Nate** [00:22:50]: But if open source can't meaningfully have any kind of lever on that process, what is it?
**Tom** [00:22:55]: We shouldn't have a feedback loop where it's so capital-intensive that only a few companies can even compete to access it, and they're the ones who are creating value. You're not creating value if you're just monopolizing the value implicit in that structure.
**Nate** [00:23:14]: You need to show me, you need to show the public, you need to show users. You can still create value with concentration of power, but then you have all the issues of concentration of power. There's at least a triopoly of Google, Anthropic, and OpenAI, so the monopoly word definitely doesn't apply for now. I do think it's still an immense concentration of power. Those are just two different things.
**Tom** [00:23:40]: OPEC isn't technically a monopoly, but there aren't very many countries in OPEC. OPEC creates value. They literally release a lot of oil. But the prices they set on that oil are famously arbitrary, and really just reflective of what is, in fact, very specific.

**Nate** [00:23:58]: Yeah, the arbitrary price point right now is $20 a month for every AI service.
**Tom** [00:24:09]: Right. Part of the humor is that it wasn't even that long ago. We both remember when OpenAI started charging for monthly subscriptions to Turbo, or whatever it was initially called. And it was a joke, right? Like, what number do you set on having this pseudo-AGI multimodal thing be your receptionist, or be your in-house worker? We don't know. Even in a much better-defined market, like streaming, where there's at least a decades-long legacy of television and legacy forms of media that those things inherit (they're really just digitizing them and providing that service at scale), there's enormous turnover and uncertainty around subscription costs. I mean, fuck, every other month I get some update from Netflix or Hulu, like, oh yeah, we're updating our subscription tiers. Because there are so many variables at stake. There are the variables of, how do you scale? How do you retain users? How do you do that in a way that is also competitive against the other companies? Disney is all up in arms now because they're trying to consolidate what they accidentally made with Disney+: they've got Hulu, and they've got ESPN. And they're realizing, it seems, that that works right up until Netflix just keeps outpacing the ability of those platforms to retain users. And so really what they have to do is agglomerate them into a single service, which it seems is what they intend to do now. But that's even in a much better-defined... there are so many shows already there, so many consumer expectations for that. And I think I agree with you, that's still remarkably not true for AI, because gen AI is still so new. The market really has not been locked in yet in terms of services. In terms of companies, I think it has been locked in, unfortunately.
**Nate** [00:26:10]: It's interesting how different our framings coming out of this are, because I came in like, oh, it's exciting, and you're like, oh, it's more of the same. I think the question with Claude 3 matching these closed models is really whether it means it's more likely that somebody that's open-favoring can do the same. Resources aside, you could still have a rich billionaire or a government provide the same amount of resources that Anthropic or OpenAI has spent on one iteration of a model. There's a lot.
**Tom** [00:26:46]: It's like, what is the US government's budget?
**Nate** [00:26:48]: It's like $600 billion or something. Or no, $6 trillion. The number I have is that the Ukraine aid was requested at $60 billion, which was 1% of the federal budget. So for 0.1% of the federal budget, you could definitely recreate these technologies in the next few years. If they are so important that they need to be opened, the amount of capital involved is doable. I just think the government doesn't want to do that. It's a hard thing. The narrative examples around openness are just beginning to emerge. There's this paper from Stanford on the social impacts of open foundation models, and really, the interesting part is on the marginal risks of open models, where they debunk a lot of arguments against open models.
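(A quick back-of-envelope check of those figures, taking the rough numbers as stated here rather than exact budget data:

```python
# Back-of-envelope check of the figures mentioned above (rough, assumed values).
federal_budget = 6e12   # ~$6 trillion US federal budget, as stated
ukraine_aid = 60e9      # ~$60 billion aid request, as stated

print(ukraine_aid / federal_budget)  # 0.01 -> 1% of the budget, matching the claim
print(0.001 * federal_budget)        # 6e9 -> 0.1% of the budget is ~$6 billion,
                                     # the scale implied for recreating these models
```

)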
**Nate** [00:27:50]: And this is the first brick, I think. By the lead author, Sayash Kapoor at Princeton, the next step is on the marginal benefits of openness. It just takes this narrative forward. But the problem with openness is that it's so multi-stakeholder that agreement on a narrative is nearly impossible to get.
**Tom** [00:28:05]: So even if it could be commoditized, could it be done in the open?
**Nate** [00:28:07]: I don't know if there's ever going to be the same level of coordination to do it at the same scale that these aggregators, essentially, want to do it.
**Tom** [00:28:17]: I share that skepticism, honestly, just from what I know about the dynamics here. To your point about why couldn't a government hypothetically do this: I do think incentives are one reason. I also think, and this is a hunch, that the human capital and human resources matter a lot more in AI than most people seem to think they do. I think it was significant, in terms of the dynamics of what happened at OpenAI in November, that what really seemed to put pressure on things to play out the way they did was the open letter that was progressively signed by more and more people in real time.
**Nate** [00:29:05]: The open letter? The one-sentence one?
**Tom** [00:29:08]: Was it the one-sentence one? The AI safety one was like...
**Nate** [00:29:12]: ...existential risk is a problem that should be considered alongside climate change and...
**Tom** [00:29:16]: Oh, no, no, not that one. I mean, I signed that one.
**Tom** [00:29:20]: No, I mean the letter amongst the employees saying that if Altman is forced to leave, we'll leave.
**Nate** [00:29:26]: Oh, OK. Sorry, I just don't like this. Yeah.
**Tom** [00:29:29]: Yeah, those were different. Those were both... Yeah, 2023 was a busy year.
**Nate** [00:29:32]: God, the open letters. I recently signed one because it was new. But open letters are not a means for political change, in my mind. Maybe I'm just naive. Maybe open letters are something that get policymaker attention because they're so general, and I'm in the weeds as a technical researcher. This might be one of the flaws of being a technical person trying to do policy: you find open letters dumb almost by definition. But I do think maybe they matter more than I'm giving them credit for, because they keep coming up, and the people proposing them are not dumb. They're not trying to waste their time on these things.
**Tom** [00:30:08]: They clearly matter. I mean, it's not like just making any open letter makes a difference. But, yeah, it matters. I'm still seeing this. I think that we're...
**Nate** [00:30:17]: There was one launched yesterday on evaluation and red teaming, which I signed. And I was like, OK, I guess it's one of those things where it's hard to get the right people in the room, but you can get the right people on an open letter, and therefore it's pretty representative of a certain idea.
**Tom** [00:30:34]: I mean, my concern with it, and yes, I've also signed open letters. So it's a tool. It's a mechanism. It's an instrument for change. But it's not itself... you can't supplant governance with open letters, because that's really just a kind of public or open technocracy. You can't just have an oligarchy that's open about who decides things and call that politics. I mean, you can; it's just that that's your oligarchy. You're not any other kind of government. So it keeps deferring questions of accountability and authority. And if that's the dynamic we settle into, that's unfortunate, because it means that eventually it will need to be replaced with something more substantive, where it's not just a bunch of smart people who happen to agree with one one-sentence description. For example, I signed the one-sentence letter that you mentioned saying, yeah, existential risk, yada, yada. And the analogy at the end was that AI has a risk profile comparable to, I think it was something like a pandemic, or climate change.
**Nate** [00:31:41]: OK. And I remember being on the fence.
**Tom** [00:31:43]: Actually, I was invited to sign it, and I decided that I was going to sign it. And then the very next thing I did was publicly post that I did sign it, but that I also felt there was a much larger and richer class of analogies that were needed to even make it clear what that risk profile was. And then I just named what those things were. That was my way of threading the needle in that case. I had friends who chose not to sign it because they were so offended at the limitations behind the analogy in that one sentence.
**Tom** [00:32:24]: And I didn't disagree with them. I was like, right, so we're both navigating the same tension here. But here's the point I'm trying to make: I don't want that to be the axis along which politics around AI works. For the last year or so, that does seem to be a major axis along which politics around AI has worked. But it's ironic to me that open letters have this kind of cachet at the same time that it's also just become accepted that Chatham House rules dictate when and how most of these in-person, high-level events happen.
**Nate** [00:33:04]: The Chatham House rule is kind of hilarious, because it's such a loophole that it almost doesn't matter. You can say anything without attribution, but most of the things that anyone is going to reproduce can normally be narrowed down to one to three people, even if you don't attribute them.
**Tom** [00:33:19]: So it's really like there are still substantial trust mechanisms.
**Tom** [00:33:23]: We don't know, in AI, what accountability means. I think that's the common denominator. And that's maybe relevant for open source, to the extent that, yet again, we're kind of circling this ambiguity about the relationship between transparency and accountability. You can have open letters. You can have academic or civil convenings where issues are named at a high level and stakes are assigned to them. But that's not politics.
**Nate** [00:33:55]: Yeah, I have a proposal for another way the AI conversation emerges, which is through pop culture. We went through the whole Oppenheimer review right when we were starting this. And now there's Dune. I guess the Dune movie doesn't have as much of a commentary on technology as the book series does, but the implications of technology are an important theme of Dune.
**Tom** [00:34:18]: And concentration of power. Yeah. Let's take it to Dune. See, we didn't have an episode last week because Nate and I were actually in person, and we got up early and saw a showing of the new Dune movie Friday morning at 8:30 a.m., I believe, on the Upper West Side. And we were kind of processing that; we're still in the midst of that. So the universe of Dune is a good example where, for those who don't know, the overarching premise of Dune is that there is no AI. There are no robots. There is no data-driven optimization at scale. In the Dune universe, it's actually impressive the degree to which there is none; I can say more about this, but there is no scaling. Everything is feudal. You're either above somebody or below somebody. And it's all monopolies, and these monopolies have monopolized different kinds of power. There's the navigation monopoly. There's the kind of, for lack of a better word, psychic monopoly, which is the Bene Gesserit. There's the political monopoly, which is the literal feudal structure of the emperor and the different houses underneath him. So these lanes are very, very rigidly defined, and there is no AI, because the whole point of Dune is that previous to the events of the movies and the books there was this event called the Butlerian Jihad, where self-aware machines rose up in opposition to, eventually, all of humanity. There was a prolonged war between those two groups. Interestingly, humanity prevailed. That seems to be something we've lost hope in now, but in the Dune universe humanity wins, and the result of that, I think it's strongly implied, is these incredibly ossified lanes of power that cannot be changed. That's sort of the...
**Nate** [00:36:30]: The amount of memes you can make about OpenAI from this movie and what we're talking about is so high. I would like to take my one meme: there's an awesome sequence on Giedi Prime, which is the Harkonnen home planet, and I am now forever imagining an OpenAI all-hands as being like that sequence. So now if you see it, you'll know. But there are many, many more memes around concentration of power and AI within it.
**Tom** [00:37:02]: But still, the drama of the story is that these concentrations of power are incommensurate with each other. If you see the new movie, and I think it actually does a great job of this, you see how power is distributed in qualitatively different ways across these factions, across these different sectors. So in the Giedi Prime sequence you mentioned, I guess I don't want to spoil it exactly, but you see this very specific character, Feyd-Rautha, who's, I mean, psychotic, and he's great at fighting. He's at the pinnacle of his craft in a certain sphere. And then you see him interact with basically the representative of the psychic monopoly, which is the Bene Gesserit, and he's basically laid low, you could say, by the kind of power they represent. So everyone is sort of circling each other. It's like a game of rock-paper-scissors, but with all of humanity and all of civilization. And the central drama of the story is: when you have a character who, by luck, happenstance, or whatever, is able to embody all those different sources of power at once, in a way that cannot be controlled, that the old game can't contain anymore, then what happens? And the movie is great because what it shows is that what happens is tragedy. It's a very dark movie. It ends up being a very bleak movie, a bleak story, about the fact that, ironically, the main character, Paul, is the most constrained of any character in that movie, because he has prescience, but what that shows him is that everything is basically predetermined, and the best he can do is something that's actually deeply self-destructive or violent in some ways. But it is a great movie, and we enjoyed seeing it together. I was kind of blown away by it. It's an overwhelming experience. I was going to tell the story briefly of how Dune occupies a specific place in the psyche of AI, because, again, there is no AI in it. It was one of the first major science fiction sagas, franchises, whose central premise was that there is no AI, that there's no prospect of it. This is totally different from Arthur C. Clarke or Robert Heinlein. There's not even the prospect of it; the whole universe's logic depends on the impossibility of that event. And I've had interesting conversations with Stuart Russell, I've brought him up before on the podcast, about this. I think Stuart Russell had a formative experience with those novels when he was young, because we were just chatting once and he, I think accidentally, let slip that he's memorized the first ten mantras of the Orange Catholic Bible, which is the syncretic religion that defines the entire Dune mythos. The first of those is something like "thou shalt not make a machine in the image of the human mind," but then there are deeper cuts on how it works, and Stuart had committed that to memory. So yeah, it's still true that the reason people love science fiction is that it expresses the limits of human nature. There's something about pushing technology to its limit that pushes us to our limits, and you see our different predispositions come out in those settings. The movie does a great job of dramatizing that.
The fact that there's a fundamentalist religion as the central motive force of the story is very interesting, too, because, if we're being honest, we often treat AI as a religion much more than we do anything else. You either believe in it fundamentally or you don't believe in it fundamentally, and there's an enormous kind of pressure now around what the priesthood is that gets to define what stories the public is going to have to play with around this technology. That's the current landscape. I think that ideally there would be a way for that priesthood to open things up a little bit to the laity, in terms of how those stories get told, reproduced, or played with, or we try something else. But that's actually the state of things right now.
**Nate** [00:41:27]: That makes sense. I have a giveaway. I have a few hats that say "stochastic parrot" on them, from our podcast friends at Latent Space. I ended up with too many of them. I'm going to send one to the author of the paper, but if you're still listening, you can get in touch and I'll mail you one of these hats if you want one. They're kind of like the GPU-poor hats. They're very minimalistic, and you can watch...
**Tom** [00:41:47]: How many do you have?
**Nate** [00:41:49]: I have four, two of which are already accounted for.
**Tom** [00:41:55]: And it just says... yeah, you showed it briefly.
**Nate** [00:41:57]: It says stochastic parrot.
**Tom** [00:41:59]: That's all it says. There's no parrot on it.
**Nate** [00:42:01]: Yeah. So that's what we got. For loyal listeners, I'll pay postage, hopefully not international, but we'll figure it out. I need to reflect more on how different the Claude takes we've had are, but it is good grounding. In the future, probably not in the next couple of weeks, I want us to do more of a side-by-side comparison of all of these and just get a sense for what language models actually can and cannot do. I think that's a worthwhile exercise: just playing with language models and seeing what you get. Maybe I'll record it; it could be kind of entertaining for people. Just the process of doing this is very important to get a grounding on where we're at. But that's mostly what I've got. I think it's an interesting debate.
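(A minimal sketch of the kind of side-by-side Nate describes, assuming the OpenAI and Anthropic Python SDKs, API keys set in the environment, and model names current as of the Claude 3 launch; the specific prompt and model names are illustrative assumptions:

```python
# Hedged sketch: send one challenge prompt to GPT-4 Turbo and Claude 3 Opus
# and print the responses side by side. Both clients read their API keys
# from the environment (OPENAI_API_KEY, ANTHROPIC_API_KEY).
from openai import OpenAI
from anthropic import Anthropic

prompt = "In two sentences, explain why the dreadnought arms race mattered."

gpt_response = OpenAI().chat.completions.create(
    model="gpt-4-turbo-preview",
    messages=[{"role": "user", "content": prompt}],
)
claude_response = Anthropic().messages.create(
    model="claude-3-opus-20240229",
    max_tokens=512,
    messages=[{"role": "user", "content": prompt}],
)

print("GPT-4 Turbo:\n", gpt_response.choices[0].message.content)
print("\nClaude 3 Opus:\n", claude_response.content[0].text)
```

Running the same prompt set through each model and eyeballing the outputs is essentially the challenge-prompt exercise from the Twitter threads discussed earlier.)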
**Tom** [00:42:55]: Yeah. We'll keep the energy going. Sounds good. Everybody have a good week.
**Nate** [00:43:03]: We'll see you. Bye.