Model release therapy session #1

Tom:

Hi, Nate.

Nate:

Hey, Tom.

Tom:

Have you heard about the Friars who were behind on their belfry payments?

Nate:

No. No.

Tom:

So there were some friars who were cash strapped on these payments, so they opened up a small flower shop to raise funds. And, you know, everybody in town, if they had a choice, they would rather buy flowers from men of God.

Tom:

So their shop quickly became market dominant. But there was a rival florist across town who was upset about this, understandably. He asked these friars to shut down. They wouldn't do it. So he took matters into his own hands and hired the local fixer, Hugh McTaggart, the roughest and most vicious thug in town, to make these friars an offer that they couldn't refuse.

Tom:

So Hugh beat them up, trashed their store, and said he'd be back if they ever tried to pull this shit again. So, of course, the friars had no choice. They were terrified. They closed up shop. And, of course, what that tells us yet again is that Hugh and only Hugh can prevent florist friars.

Nate:

Wow. I really didn't know where this was going, but this is probably one of our better ones. Welcome to the Retort. We're not talking about forest fires, unfortunately. We are talking about AI, and it's been a busy week.

Nate:

I don't know. We'll see how much reading Tom has done, how much explaining

Tom:

Different kind of fire.

Nate:

Different kind of fire. Yeah. I mean, it's ongoing today. We're recording on Wednesday the 21st. The next flurry of releases is coming.

Nate:

People in AI love to release things on the same day. Last week, we had the OpenAI Sora model, which you've probably heard about on all sorts of podcasts. If you listen to one AI podcast, we assume you probably listen to a couple. We're not gonna go into the technical details in too much depth. It's awesome.

Nate:

We had Gemini shipping a new model. Today, we had what really is a new Gemini model called Gemma, which is Google shipping an open source model, but all signs indicate that it is, like, a training run from Gemini or, like, built on the same infrastructure. They're like, okay, let's open source it for the fans, so to say. Let's give the people what they want, which, I think it's good that Google's doing that again.

Nate:

We'll talk about that. And we'll see. There's plenty of other topics if we run out of time, but I think Sora is the one that fits best with our general theme, which is, like, AI is breaking the world, and what does it mean? I mean, I think there's a lot of different takes on Sora. I don't know.

Nate:

Like, what is your starting point when you see realistic text-to-video, Tom? Like, what do you think this means for society? I don't think copyright is the thing. There's lots of little issues, but copyright isn't really the thing. Like, people losing their jobs, yes, that's important, but we've seen that as a society en masse before.

Nate:

Like, reallocation of talent and labor is a thing that accompanies technological revolutions. But I think the social fabric piece, like what we were talking about with the metaverse, is wild.

Tom:

Mhmm. Mhmm. Yeah. I mean, at a high level, my thoughts really aren't that different than what they have been for some time, which is that this is about supply chains. And so people will look to reskill, whether you're a creative or whether you're, you know, related in some other way to video production or marketing across the supply chain.

Tom:

So this is a tool. It's a very powerful tool in the toolbox, but it's not clear to me, even from the videos that I've seen, that this is going to automate the entertainment industry overnight. Right? It's maybe worth keeping in mind, you know, if the question is, what does this mean for society? You know, my recollection of the, like, SAG strikes, the actor strikes that were going on... it feels like ancient history, honestly.

Tom:

Yeah.

Nate:

It should've waited till now.

Tom:

Yeah. Well, in that case, though, I think it's a different cut. Right? Because, of course, that was larger. You know, Hollywood has its own dynamics, and this is an AI podcast, but we're not trying to treat everything like a nail with AI as the hammer. That strike happened for lots of reasons.

Tom:

You know, Hollywood has its own kind of multi-decade cycle where this stuff gets relitigated. Part of it, though, was related to the fact that studios understandably have been interested in moving toward, you know, this kind of business model where you're a day player, you know, you're an extra. They give you $50 to scan your likeness, you know, a 3D scan. And then they never need you again, because they, you know, own your likeness, or at least the version they scanned. And then they can just put you into the background of any Marvel movie, you know, whatever mass crowd scene they need.

Tom:

And that's like, they don't need to keep hiring you. And that frankly does break the back of a lot of the entertainment industry, because the whole basis for the union is that it protects those kinds of people. There are these very, very radical, extreme power laws of talent and whatnot that define the movie industry, where there's a very, very few major celebrities at the tippy top who we all know, you know, you have their image in your head

Nate:

and you

Tom:

know their names. But then there's many, many thousands of people, tens of thousands of people, who you'll never know, but so much of the entertainment industry is built on top of those people, whether they're in front of the camera or behind the camera. And I think because there are so many human resources behind getting anything worth watching off the ground at scale, there will be plenty of opportunities to reskill or redistribute that labor, even in the context of a tool that's seemingly magical, you know, like video generation from text.

Tom:

Because that does feel magical.

Nate:

Like, they could have their livelihood. They're just not gonna get redistributed one to one, though. Like, it could be lower relative compensation. And I think that's the worry: the compensation essentially gets skimmed by the tool that's providing the leverage. And if the leverage is the fact that you can generate a ton of videos really easily, OpenAI will take a bigger cut than what used to go to the people trained in the arts.

Nate:

Yeah. This is all hypothetical. But I think that's where the argument is.

Tom:

It's definitely not one to one. Yeah. It's definitely not going to be a one to one reallocation or reskilling. History tells us that that's, like, never...

Nate:

There will be teams that go, like, one to a hundred by designing their workflow for new tools. Mhmm.

Tom:

Yeah. I mean, I think what it's likely to do is take an already very stratified industry and make it even more stratified in terms of talent,

Nate:

in terms of skills. What do you think about outside of Hollywood? Because I think a lot of this is, like... I was on the record saying I don't think the general public will have access before the election, given how likely it is that people would easily be able to keep jailbreaking whatever filters OpenAI has and generate propaganda for the presidential election. Like, do you think that's a ridiculous take, or do you think OpenAI will actually give people access? What do we have?

Nate:

Eight months? I think that's about... like, they didn't launch or announce GPT-4 right away; they were doing safety work for a really long time. So I don't know if they take the same playbook for every model, which is, like, it's ready to go, we'll roll it out in a couple months, or if they're just gonna keep it locked down because the Internet's not ready to deal with this. Like...

Tom:

I mean, I'd rather not play the game of predicting what I think OpenAI is or is not going to do here. I think we kind of act like they have a grand plan, but they're reactive, I think, as is the rest

Nate:

of the world. I mean, what should they do? Don't predict them. Make the decision. What should they do? You're now on the spot.

Nate:

You have to make the decision of, like, you could give this access to people. Like, as a member of society, should they give people access to the model, understanding that there's gonna be a lot of political propaganda?

Tom:

An analogy comes into my head here that I think is relevant. So, like, if you talk to sociologists who study why there are mafias, or how you explain the existence of a mafia, the consensus explanation is about how mafias arise. In other words, organized crime. Okay, what does it take for crime to become organized?

Tom:

And the answer, you know, on one level is unsurprising, which is that there is an unreliable or nonexistent police force, due to the fact that there's, like, insufficient state control or regulation or oversight of some market in particular. And so a mafia arises to act as a kind of pseudo-state, a pseudo-regulator for some market. Right? That's why the mafia in America had this giant heyday during Prohibition. Okay.

Tom:

Racketeering. The government made something illegal, but that didn't make demand for it go away. And so there was a supply chain that needed to be secured, and people stepped in to do it, you know, most famously Al Capone, lots of other people around the country, of course. But there are implications behind that that are interesting, which is: whenever there is an enormous amount of demand for some tool or service that the government is not really in a confident position to regulate or oversee, that doesn't mean that there isn't also a parallel need for rules to determine what acceptable or safe or even reliable use of that tool or service looks like.

Tom:

Right? So a lot of what these mafia folks were doing, like in Chicago in the 1920s, actually a lot of it was making sure that the liquor was safe to drink, you know, because they didn't want people going blind when they drank some, like, moonshine or shit like that. Right? Like, they really were kind of functioning almost as an Italian American FDA. That was just sort of their job.

Tom:

So I'm getting to your question here. What should OpenAI do? Basically, what we need is someone who can either control access to the product in such a way that there is no possible use of it that could not be vouched for (my read is that that's probably unlikely to be achieved, or I'm skeptical that it can be), or provide some kind of very stringent oversight over the people or the contexts in which these kinds of capabilities can be used.

Tom:

So if OpenAI did nothing, I think the state, in other words the US government, I guess the federal government, would have to step in with new kinds of at least suggestions, if not actual required regulations, saying something like, there's a major fine for the use of this technology, especially if it's used to mimic specific people.

Nate:

I mean, there is an example that's relevant. Did you see the whole Air Canada snafu with, like, a chatbot?

Tom:

No. Is that

Nate:

They had a... no. They were using chatbots in customer support, and the chatbot made up a refund policy. And then a court said that the airline had to be held accountable to this AI chatbot's refund policy. So they ended up having to pay this customer, this type of thing. So, like... oh, they'd have to pay the customer.

Nate:

That's a precedent. In this case, it's a lot easier because this is more of a, like, monetary transaction. I think in terms of, like, reputations and things, the FTC can say that imitating people is illegal for phone spam, but that's not gonna mean people aren't gonna do it. Like, there's a difference between

Tom:

That's the pro... that's the pro...

Nate:

off the record.

Tom:

That's like prohibition. Exactly. Yeah. Well, yeah. Exactly.

Tom:

So I think either way, there will be enforcement. So what should OpenAI do? Look. Honestly, I'm not sure I trust OpenAI to be the gatekeeper for that enforcement. I would rather it be somebody else.

Tom:

I would rather the government step up here, because I think there are kinds of enforcement that could be done that, if done right, would work or at least be well motivated. If OpenAI does it, they ought to do it in pretty close consortium with what the feds are expecting or wanting from this stuff. I don't think it should be done unilaterally by industry owners.

Nate:

I just think, like, what does enforcement look like to you? Like, I think what this means is somebody is gonna make an OpenAI account. They're gonna bypass the OpenAI safety filters. They're gonna generate some ridiculous video of Trump or Biden, like, freaking croaking because they're old, like, some old-person propaganda. And then they're gonna post it online, and what's gonna happen is this person's account can get shut down.

Nate:

Like, are OpenAI's terms of service enforceable with, like, legal penalties? Like, I don't really know how I see enforcement actually happening other than at the terms of service level, and terms of service are not really binding contracts in law.

Tom:

Well, let me clarify my stance here. If OpenAI is gonna do anything, they need to not do it alone. But my true preference would be that OpenAI actually does nothing. And the reason I think that is actually where I was going with that original analogy, which comes from some prominent sociologists who I respect. So there was this murder increase that happened in urban cities in the United States, quite infamously, that was elevated over the course of the eighties and nineties in particular.

Tom:

It then finally started to level off, but there were, like, two decades or so when it was precipitously increasing. And no one really understood why, and no one really understood what to do about it. And it got so bad that some social scientists, you know, it's bad when social scientists who are not economists are brought in to try and make sense of a problem. And in this case, there was one in particular, Randall Collins, whose work was a big influence on me during my early development, I guess. And he famously said that the state could either spend many billions of dollars, just entirely reprioritizing public safety and hiring tens of thousands of police officers to make it so that crime could be more easily, you know, monitored, prevented, what have you.

Tom:

Or it could actually take a step back and deliberately do nothing, and it would be really bad for a little bit of time, but then eventually you'd see a mafia.

Nate:

Yeah. Then everyone has

Tom:

And a mafia would effectively serve as a kind of private enforcer.

Nate:

It's like, do you think the general public is actually gonna go through this generative AI phase and realize that everything on the Internet is untrustworthy, or is it only, like, our generation and younger? Like, do you think our parents' generation could legitimately transition within six months to understanding the context that you have to question whether everything you see is true or false? Like, I think the example everyone talks about is that it's hard for these people to adjust to, like, algorithmic timelines, fake media, and stuff in the same way that younger people do. But do you think video is a different medium in that regard?

Tom:

Yeah. I think video is a different medium in that regard. I don't know if it changes that fundamental dynamic. I think qualitatively, the dynamic is probably the same, although we know that video emotionally engages people differently.

Tom:

And I think it also tends to be... well, it depends on the video, but it can engage people to a much greater degree than just static, you know, photos. But yeah, my gut is to buy into the basic idea, which is that older people... I mean, I'm thinking of my parents right now.

Nate:

Yeah. Mine aren't that old. Not, like, too old. Like, people that are still very functional and normal.

Tom:

Yeah. They're just... look. I forget if I've said this on our show before, but I've certainly said it in person. Because we've shifted now: at first, we were just talking about the economics of this stuff and how economically disruptive it'll be.

Tom:

Now we're talking more just about societally, politically, epistemically, how disruptive is it going to be? Younger people have been acculturated for a long time now to having a kind of ironic attitude towards a lot of online stuff. Not necessarily because it's edgy. I don't

Nate:

think it's as... their attitude may not be good, but continue. Like, that's a separate debate.

Tom:

I'm not saying it's good, but that's I

Nate:

I've never heard that argument. I'm just saying I've never heard that argument. We can come back to that.

Tom:

Yeah. Well, if I were to speculate a bit about what I think the reason for that is, it's that content generation, not due to, like, AI, but just due to, like, memes, has become so cheap and easy that anyone can generate content and put it online in a way that it can get engagement.

Nate:

I'm mostly thinking about how the most common career that people want from seeing all this content is to be a YouTuber or a TikToker. And these career paths are not fundamentally differentiated based on, like, hard skills. They're differentiated on, like, the ability to be a personality that is rewarded by the algorithm. And I think picking your career path based on something that is sensitive to the algorithm, and the fact that that's popular, is wild, because that is such a hard hustle life. Like, the weirdness of those incentives compared to what used to be popular in American society couldn't be a bigger shift. And I know there's a lot of people that are like, oh, the Gen Alphas, they can't read.

Nate:

But, like, talking about the incentives of these generations is also important to showcase that there are big problems here.

Tom:

Right. So there's the incentives question. The fact that younger people, you know, we are hustling because we're career driven and we're still trying to climb. And if AI or algorithms are changing the game of that, that rewrites the rules by which we hustle. There's also just the cognitive dimension, which is, yeah, old people, they're just slower.

Tom:

I mean, like, I live in New York and I'm biased now, but, I mean, I know I've become a New Yorker when I'm, like, behind somebody on the sidewalk. I'm like, goddamn, this person's just fucking slow.

Nate:

Yeah. It's probably because they're jaded Californians.

Tom:

It's just because they're old. And it's like, yes, you're a citizen, and yes, you have the same rights as I do, but goddamn, it's slow on this sidewalk. The pace of life is just different.

Tom:

And so I think when the pace of life is different, and also, I mean, this is the more sort of media side of the argument, those people really just grew up in a different media environment than we did. They were not media saturated in the way that we are. Media generation was not as cheap. It was not as prevalent.

Tom:

It was not... like, you and I grew up in a world that's still, like, an order of magnitude or more less media saturated than what people are growing up with now.

Nate:

Yeah.

Tom:

And what I mean by that is that you and I both knew growing up, we just sensed it, it wasn't even conscious, that there was always going to be a hundred times more content out there than we could consume, even if all we did was consume content.

Nate:

Yes. This is why I read, like, no papers. I read, like, no newsletters. It's like

Tom:

Well, that's gotten worse over time because it's only gotten cheaper and cheaper, and it's been easier and easier

Nate:

to change. It goes so far back that it wasn't even a cost thing; it wasn't possible for previous generations. Previous generations' content was filtered by geographic and physical constraints. And now content is filtered to personal taste.

Nate:

And that is an extremely different thing, and I think video really completes that arc. Video content didn't really exist. I mean, it existed, like, over the air; you had a set number of channels, cable stuff. And now there's effectively infinite video that can be tailored to an individual's taste. This is the completion of the arc: you kind of have infinite video and you have infinite agents that are tunable to people's tastes. Like, that's the last thing that you kinda come up with in a zero marginal cost content world.

Tom:

So we're not... right. I mean, we're maybe still a little bit out from a truly, interestingly dystopian future where there will be entire seasons of Seinfeld that I've seen that you haven't, because I had them autogenerated. And, like, I've been on an emotional journey with shows, with books, with movies, with music, whatever, that is just my own spirit animal, my own spirit journey. And that's another side of this that also needs to be considered.

Tom:

I think the way I would sum this up, though, is, like, when it comes to the history of media... and, you know, we don't really know where this is going or exactly which analogies are warranted here. But think about what the emergence of the printing press did to late medieval, early modern Europe. Right? Some historians argue that the emergence of the printing press is actually the demarcator between the Middle Ages and the modern period. That it's not the fall of Constantinople, it's not whatever.

Tom:

It's really literally just something that banal.

Nate:

It's a very techno-optimist angle, but I agree.

Tom:

Well, it doesn't have to be optimist. I think it's just the disruption. It's just the fact that it became so cheap to mass produce text and information. Like, if that's the demarcator, then, bizarre though it may seem, this week may have introduced a new epoch of human history, in the sense that these new models have a similar effect. It sounds silly to say it, but, yeah. It's so telling

Nate:

that it's, like, a world that I don't wanna live in. Like, I don't want it to happen to me, because I know I'm gonna be so susceptible to that. It's, like, not that hard to know that I like, like, endurance sports, Formula 1, and, like, cool science things. It's not that hard to come up with near infinite content, and it's just crippling to be able to plug into that at zero cost all the time in terms of, like, personal volition. Like, it will destroy my personal volition in some ways or just constantly be detracting from it.

Tom:

I'll have to play it a little bit fast and loose with history here, but, you know, whatever, our podcast is not really meant to

Nate:

go to the forefront.

Tom:

So I think that the printing press analogy is interesting, and it's worth sitting with. And, you know, I think I've already basically said why I think that is. So I'll stop there. But I think there's another really important historical parallel here, which I've also written about in my work, and which I think actually is going to matter more in terms of how we deal with this, because this is effectively here. Like, once we have this, it's not gonna be like, okay, now let's just not do it.

Tom:

Like, crack cocaine still exists. Once it became possible to easily mass distribute crack, that just becomes a problem for society that is basically gonna exist as long as the supply chains that underpin that kind of production continue to exist. So we will have to reckon with this. I think the clearest historical parallel to the problem you just raised, which is how we can say no to these new kinds of poison that define our lived experience, or in this case our media experience, is public health.

Tom:

This is the domain of public health. And if you look at the history of public health, the different periods when there was a need for it, it usually was induced by some new kind of technological development that caused people to cohabitate with each other in unsustainable and unprecedented ways. And there were then indirect consequences of that, like epidemics of cholera or typhus or tuberculosis, or the Cuyahoga River catching fire in the 20th century, you know, factories being put right next to giant rivers whose water was now not even potable but actually flammable. And what there needed to be was a reappraisal of the environment: the environment of cities, the environment literally in the sense of the ecosystem, and what new kinds of mechanisms or means of control were needed to regulate that environment at a level of abstraction that we previously didn't have to worry about.

Tom:

It's funny, actually, when you and I were at RLDM last year, or two years ago, I guess it was, because it was at Brown. Right?

Nate:

Yeah.

Tom:

And, I mean, there's a lot we did on that trip, which is not relevant here. But when I was on that trip, I was up late at night reading about this, about the history of public health, specifically the history of the 19th century. I was talking about this.

Nate:

At least some Yeah.

Tom:

It's vaguely familiar. Well, because the point there was, and this is what needs to be emphasized, I think: cholera, typhus, the plague, these were not new diseases. The actual pathology of the disease was not new. The ancient Greeks were aware of these diseases.

Tom:

They had their own means of, you know, treating them. They actually were fairly effective on the individual level. But what was new was the fact that these diseases were simultaneously almost overnight ravaging entire populations of people. Because people were living differently than they did in the ancient world. Cities had never been this big.

Tom:

They'd never been this dense. Yeah, industrialization was this entirely new way of being. And it basically turned cities into a tinderbox for the spread of disease. But there did end up being solutions to that.

Tom:

They took decades to work out, but the names of those solutions are things like epidemiology, sanitation systems, germ theory. Okay? Randomized controlled trials. I love

Nate:

that idea. It's a field.

Tom:

Yeah. I mean, germ theory, right. I mean, you point microscopes at shit that's really small until you find the little fucker that gives you cholera. Because that was a revolutionary idea. Actually, the Greeks thought this. They were dumb. I mean, they were wrong.

Tom:

They were dumb. They were I mean, they were wrong. But people up until 19th century, the consensus expert opinion, was the disease was caused by what they called miasma. Which is another way of saying, when things smell bad, you have a greater propensity to get the plague. Right?

Tom:

They thought that somehow the air... yeah, the effluvium of the air was just the medium through which

Nate:

disease spread. Reverse causation.

Tom:

Yeah. Well, they didn't have tools, you know, to measure anything else, and they also didn't understand how to isolate the variables. And it wasn't until... and this is not a Game of Thrones reference, but there was this other guy named John Snow, in Victorian London, who did the first... it wasn't technically a controlled experiment. It was a very brilliantly constructed natural experiment, in which he was able to isolate neighborhoods where there were outbreaks relative to where they got their water. And he was able to find a correlation between the water supply and cholera outbreaks that was so strong that every other possible explanation basically fell in the dust.

Tom:

And so anyway, my point here is just to say, to your point: as an individual, you can't resist videos that have been auto-engineered specifically to appeal to your entire life trajectory and your sense of purpose and meaning. Look, public health tells us that that's not a new problem. The nature of that problem is so intense and dynamic and structural that it can't be the responsibility of individual people to solve it or to deal with it on their own. We need new institutions, new methods of measurement and assessment at a higher level of abstraction, to make it so that the downstream impacts of the use of these technologies are at least able to be monitored, and also so that policy can be set around it so that the effects are not prohibitively bad or toxic.

Nate:

Yeah. I mean, now that you're warmed up, let's move on to a potentially more sensitive generative AI topic. There's a popular image going around with this new Gemini release. The Gemini 1.5 models essentially are better models: much longer context length, cheaper. Google's doing well in base model land.

Nate:

We could come back to long context length. But on this topic of generative AI, one of the things that's going viral is this clip where you ask Gemini to generate historically accurate images of kings in the Middle Ages, and one of the kings it generates is Black. And it's like

Tom:

I regret not having seen this in advance. This is quite funny to me.

Nate:

And it's like... look, today, I understand why Google does this. Like, the machine learning models are biased toward showing what is on the Internet, and they have a propensity to show white people in many situations and to reinforce many social biases, like a doctor being male rather than female. And then it's the question of, like, to what extent, socially, should we be okay with the controlling of information? So, like, as generative AI becomes the backdrop of the Internet and most of the data is generated by generative AI, how big of an issue does this type of thing, where it's gonna rewrite history in some way, become? I think this is benign enough that it's an interesting example.

Nate:

There are more harmful examples in the past, I think. It's like, what's your take? I could pull up the image while you're looking.

Tom:

yeah. I'm curious to to see it or you can direct me to it. It follows, I think, from whatever you've been saying. Right? Which is up until now, look, it's we all I think we all know this, but it's uncomfortable to really take responsibility for it.

Tom:

But that's actually the moment that we're facing. The reality that we actually have to be responsible about is the fact that for the first 20 or so years of its existence, certainly in theory, and I think to a large extent in practice, the Internet and everything on the Internet was supplied by people as individuals. Okay? So we think about content rights.

Tom:

We think about license agreements. We think about, you know, just in the back of your head, these fantasies of what the Internet was like in the eighties or nineties, of, like, weirdos obsessed with cyber culture who wanted to find user forums for communities of people that were kind of like them. And it was sort of like a matching function. Like, the Internet was just a way for you to find your folks. The Internet is not like that anymore, to put it mildly. The Internet is infrastructure.

Tom:

The Internet is maintained and supplied by a very small ecosystem of players, which is really what you're talking about now: there's a new supply base for it. Generative AI in that regard is kind of doubling down on and maybe cementing that reality, which has been incrementally playing out for some time. I mean, forget AI for a second. Just think of, like, the history of Twitter right now. Right?

Tom:

Twitter in 2009 was seen by many people as, like, the future of democracy. I literally remember when there were these major protests that happened in Iran, I believe in the summer of 2009, against the government. And again, I'm not an expert on the details here. I've never been to Iran, so I should clarify that. But what I remember at the time was that, for whatever reason, much of the protest movement in Iran was coalescing and sharing information on Twitter.

Nate:

Yeah. I remember that.

Tom:

And this was apparently so significant that the White House personally petitioned, you know, Jack Dorsey or other executives at Twitter to delay maintenance of their servers, because there were particular windows of time when these protesters effectively relied on Twitter to coordinate their information and also just, like, their strategies for what they were gonna do the next day. Again, to such a degree that it became kind of in the national security interest of the United States to affect when that stuff was going on. Think about Twitter now relative to that.

Nate:

ML Twitter is, like, run by a bunch of anon accounts. I mean, again,

Tom:

I don't really... I have an account. I don't really spend time on it. It's good that you don't.

Nate:

I think I sent you some links to examples with this kind of, like, Gemini rewriting history stuff.

Tom:

Like I don't know how it

Nate:

is to access the chat we use. I could try again some other way.

Tom:

Oh, you put it in the chat. Yeah. We're using this, like, interface here.

Nate:

We use a special Descript product. Not sponsored. You can sponsor us, but

Tom:

Oh, this is really funny. Oh, this is funny. I'm looking at

Nate:

There's so many examples in this article. I was looking at it. Just like

Tom:

I like how it says, "Certainly, here is the image." That's the part that cracks me up.

Nate:

Yeah. I mean Here's

Tom:

the images featuring various genders and ethnicities. Yeah. This is good. I like these. These are cool, actually.

Nate:

I like this. AI generated art is cool. But, like, this is an important point where a norm is going to be defined, and I think a lot of people will push back and be like, okay, please fix the bias problems, but don't push them in our face. Like, I do want there to be an understanding of the algorithmic and, like, data structures that are causing these biases, and for them to be mitigated.

Nate:

But this is just, like, okay, you're just gonna nuke your reputation if you keep putting these out. Like, this is effectively gonna be what Google is marketing as the replacement to Google search. It's not gonna work.

Tom:

Yeah. So, sorry, like... I don't understand why people are upset about this. These images are just cool to me.

Tom:

Like, I don't see why white people are upset about this.

Nate:

Except that they're not historically accurate.

Tom:

Well, yeah. And this just... I do think... who are the people that set this up?

Nate:

Okay. I mean, these images are coming from Stratechery, which is one of the, like, highest profile tech analysis blogs out there.

Tom:

So this person who said, generate a famous 17th century physicist, what were they expecting? The only right answer to that

Nate:

was Galileo? This is after they're already biased. They're already... yeah. I mean, they're looking... they're already

Tom:

Galileo...

Nate:

Like, it's

Tom:

a... it's a biased exercise. It's not a fair exercise. Look. AI by definition is an amalgam. Okay?

Tom:

So, like, it's interesting, I guess, because it's not a one to one replacement for search in that regard. Right? Like, if I searched Google for famous 17th century physicist, it's not gonna show me hyperlinks to brown people, presumably. I mean, I'm guessing that.

Tom:

I'm doing it right now. Yeah. Do it right now and see what

Nate:

it says.

Tom:

I'm going to

Nate:

The Google Images result is exactly what

Tom:

you would expect.

Nate:

It's all, like, faded paintings of, like, white men in wigs.

Tom:

Well, but the thing is that's frankly more accurate to the search.

Nate:

Albert Einstein has a surprisingly big part in 17th century physics searches.

Tom:

I mean, yeah. Well, that's just... yeah. I mean, look, I think a lot of this is just: AI does not replace knowledge. It doesn't replace encyclopedias. It's a different kind of tool.

Tom:

I think that maybe the metaphors we've been using to talk about it and think about it are a little bit off base with respect to what it actually can do.

Nate:

This is like the Yann LeCun...

Tom:

It's fine.

Nate:

It's like Yann LeCun's critique of all this is, like, it's not grounded in anything. Therefore, it's not gonna be real. And therefore, it can't build the highest level of trust that people want in most of their systems.

Tom:

Well, I think I agree with that. I think it's kind of a platitude, but yeah.

Nate:

Yeah. Yeah. I mean, Yann speaks in platitudes. So he's hilarious. I don't, like, I don't know what

Tom:

it sounds like. Kind of obvious

Nate:

to me. I'm gonna meet him when I'm in New York for this event in a couple weeks. He's at that event. It'd be like, hi. It would be like...

Tom:

Yeah. We could invite him. I don't know. I supposedly... I think I bothered him with that previous article where I was quoted saying that machine learning is alchemy.

Nate:

He's forgotten.

Tom:

I'm sure he's forgotten. Yeah. It's... I think we need to... this is an invitation. Look, to return to your original prompt.

Tom:

I would rather OpenAI did basically nothing. And I would rather other people who know what they're talking about help craft a more accurate semantics to talk about what these fucking things are doing, rather than acting like, in this one to one way, they're either gonna replace filmmakers or Google or whatever other existing interface. It's like Google... We have

Nate:

to be both. OpenAI is, like, no longer doing nothing, though. They're, like, making deals with media organizations. And, like, I think that is deeply muddling their direction, even though it may not feel like it...

Tom:

It's alchemy again. That's alchemy, because they're interested in transforming the interface through which this stuff can get accessed and thought about, rather than advancing the conversation or the science of what it would actually mean to use this stuff. Those are different projects. And, you know, I'm not casting blame exactly.

Tom:

Like, if I were OpenAI, I'd probably be doing the same things as them in a self-interested way. But is your goal to help us deliberate more clearly about what is going on? Or is your goal to mystify us by inundating us with what these things can do? Those are just not the same thing. We talk about them as if they kind of are, and

Nate:

they're just really not. Yeah. It's like Sam Altman raising $7 trillion. A lot of very reasonable people that I know are like, I'm not even gonna comment on this. Like, it's just useless.

Nate:

Like, I'm not gonna comment. Like, that's not a real rumor.

Tom:

It isn't clear. I think there's a through line here, which is that it's not clear what reality actually corresponds to what is coming out of these models, or coming out of the people who build these models. We don't really know where these things are in social space. Like, we don't know how they stand in relation to $7 trillion. What is comparable to $7 trillion? Because if you can't name anything, it's not really clear what that means. It's just an arbitrarily larger number than 6 or 5 or 4.

Tom:

Like, what is the next most valuable thing in the world after a $7 trillion valuation? I don't even know. I could guess.

Nate:

It's Apple plus Microsoft, or it's Apple plus Google plus NVIDIA.

Tom:

Right. So it's like all other major companies. The entire tech economy. Right. And even that is just market cap.

Tom:

So it's not... which is not real money exactly. So, like, $7 trillion is literally speculation on top of speculation. Right? So what credence should you give it? It's not intelligible.

Tom:

I'm not saying it's bad or good. I'm saying it's not even wrong.

Nate:

Yeah. Okay. Earlier, I told you I had an announcement. My announcement is that today I opened the Sutton and Barto book that has been steadfastly holding my microphone for weeks.

Tom:

You opened it? Yeah. You've never read it?

Nate:

No. I've read it, but I haven't read it in months. But essentially, yeah. So Google released their new open source model, and they've confirmed that they used the REINFORCE algorithm for their RLHF stuff. Why this is funny is because REINFORCE is just policy gradient with a fancy name. Like, I don't know why... Mhmm.

Nate:

Like, some guy in the nineties wrote a paper, and it's like, we call these the REINFORCE algorithms, when it was policy gradient. And I just find it fun because, like, RL is just going backwards in time rather than forwards. Like, everyone was on PPO, and now it's just like, we just use policy gradient. Big number, good.

Nate:

But, I mean, the paper doesn't have that many details. I think if I comment on it more, it's mostly gonna be about that, like, RL history and how we're going backwards, and then I think people will go forwards from there. But just, like, what is the modification of REINFORCE that's needed? Because PPO is downstream of it. It's just, like, a different... PPO is the version with regularization.

Nate:

REINFORCE is just, like, the out of the box version.
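For anyone who wants the distinction spelled out, here is a minimal sketch of the two objectives in PyTorch. This is illustrative only; the function shapes and names are assumptions, not details from Google's report, and PPO in practice adds a value function, KL penalties, and other machinery on top of this core loss.

```python
# Minimal sketch (assumed PyTorch); contrasts the two objectives discussed above.
import torch

def reinforce_loss(logprobs: torch.Tensor, rewards: torch.Tensor) -> torch.Tensor:
    """Vanilla policy gradient (REINFORCE): minimize -E[log pi(a|s) * R]."""
    return -(logprobs * rewards).mean()

def ppo_loss(logprobs: torch.Tensor, old_logprobs: torch.Tensor,
             advantages: torch.Tensor, clip_eps: float = 0.2) -> torch.Tensor:
    """PPO's clipped surrogate: same idea, but the probability ratio against
    the old policy is clipped, which regularizes how far each update moves."""
    ratio = torch.exp(logprobs - old_logprobs)  # pi_new(a|s) / pi_old(a|s)
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps)
    return -torch.min(ratio * advantages, clipped * advantages).mean()
```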

Tom:

So wait. What did you learn from Sutton and Barto? Or did you just

Nate:

I was just reading about REINFORCE.

Tom:

Going back to basics.

Nate:

Yeah. I was just reading, like... it's old enough that it's in the textbook. Like, PPO and TRPO and all these modern algorithms aren't in the textbook. But it's just fun that I can just read about vanilla policy gradient, and it's in this book that I have sitting on my desk. But Google's model ultimately is, like, better than this Mistral 7B model that everyone's hyped about, which is why it's so funny.

Nate:

It makes me feel really vindicated, where I'm kinda like, yeah, we're all bad at this in the open, and these people, like OpenAI and Google, are running laps around us, but they just don't release anything, so it doesn't really matter. So we're, like, doing our own thing. And if, like, Google and OpenAI start playing in the open, it's just gonna have, like, a hilarious effect, which is good. It'll pull people forwards, but it's a good reality check for people. Like, Google can just release, like, an intermediate training run, and it's better than these... the startup that's like, oh, Mistral is this French god startup.

Nate:

Google just was, like, oh, here's a training run side effect, like, have fun. Go crazy, guys. Like...

Tom:

Economies of scale. I mean, this is a theme we've brought up before, and, obviously, you're a contributor to this conversation. I still think that this ongoing consternation of, like, what does openness actually mean, seems relevant here. Right?

Tom:

So, like, pushing the conversation forward is one thing. I don't wanna use the word democratize. I really try and avoid using that.

Nate:

I've stopped doing it as well.

Tom:

Well, we need a better word for what that word, I think, has been trying to capture, which is something like

Nate:

Inclusivity.

Tom:

Yeah. I don't know if that's the best word either, but it's better. Well, here's what's wrong with all these words. I think both those words are better than democratize. But what's still wrong with them is that they're ignoring the criteria for what is made transparent to whom, or, you know, what is to be included in relation to what remains implicit.

Tom:

You're just making an abstraction out of what is really a relational entity. Like, this is sort of what... there's a good paper, actually, by Collette... well, a friend of mine who I did a postdoc with, Emanuel Moss, and a few other people, that articulates how the way we've always talked about AI accountability or ML accountability is kind of bullshit, because accountability is a relational property by definition. Something is accountable to something else. So if you're not able to specify what is being held accountable, to whom, or to what, it's not accountability. It's just a kind of branding initiative.

Tom:

And I actually think that's important. It's a simple argument, but it's a profound one. And it actually underpins a lot of these other abstractions that we often use, like transparency or inclusivity. Because you end up deferring the stakes; you're just sort of paying this kind of Kabuki theater justice to an abstraction and washing your hands. What actually needs to be done is to holistically assess what kind of conversation is the one commensurate with democratic norms. Yeah.

Tom:

And democratizing is not a good way to capture that.

Nate:

What AI2 is doing is the closest of any. But, like, what Google is doing here is really just, like, hype building. They're open sourcing this model because it's very easy for them to. Or, well, they're openly releasing the weights; it's not open source. They're releasing it because it's easy for them to do, and it's great for the narrative.

Nate:

Google back on top in AI, baby. Like, their numbers are literally the best in open and in closed, and they're all, like, this is great for marketing. Look at us. You could come to Google and do open ML research. Like, why is Google doing this?

Nate:

They're doing this for marketing, to have people help them with their models. Like, there's no business model for them dropping this thing. It's all, I mean, nothing direct. It's all indirect, which

Tom:

I think matters. It's bread and circuses. Right? It's bread and circuses. Okay.

Tom:

Making it easier for anybody in ancient Rome to attend gladiatorial games at the Colosseum does not change who the emperor is. Right? I mean, expanding the seating at the Colosseum so that more people can see a gladiator get his head chopped off by a tiger or something does not, in any way, change the outcome or the criteria by which outcomes are assessed. Right? Famously, it's... well, it's a podcast.

Tom:

You can't see what I'm doing, but it's a thumbs up, thumbs down.

Nate:

We are on YouTube. You can follow the Retort AI podcast on YouTube and see our okay quality reacts and sometimes distracting Google searching. FYI. Like and subscribe.

Tom:

I'm probably thinking of this because I've been thinking about rewatching the movie Gladiator.

Nate:

I've rewatched it. It's been on, like, Netflix

Tom:

Yeah. For Netflix. Yeah.

Nate:

I mean, I've watched it. It's good.

Tom:

Does it hold up? I'm not... yeah, it holds up. I think it does. Yeah, it's a good movie.

Tom:

I remember it's a good movie. No. But, of course, the whole point of that movie... what I liked most about it from years ago when I saw it is that, like, he becomes a gladiator, of course. But then he plays this kind of subtle game of manipulating the optics of how he's a gladiator to kind of wrest power from a very corrupt, fucked up emperor. And that's kinda where the story goes.

Tom:

And, I mean, we can't play that kinda game in AI right now. We're just too focused on, like, making the stadium bigger or improving the seating.

Nate:

I do think there could be economic value from this model that Google is giving away for free with relatively okay terms of use. Like, people could deploy this model into somewhat useful products. As, like, these models... sure. I was saying, like, the 7 billion parameter model scale is kind of, like, the sweet spot where almost anyone can run inference. Like, you could probably do it on your MacBook and it wouldn't be awful.
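As a rough illustration of how low that barrier is, here is a minimal local-inference sketch. It assumes the Hugging Face transformers and accelerate libraries and the google/gemma-7b checkpoint; the prompt and generation settings are illustrative, not from the episode.

```python
# Rough sketch of local inference with an open-weights 7B model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # half-precision weights keep the memory footprint laptop-sized
    device_map="auto",   # places layers on GPU, Apple silicon, or CPU as available
)

prompt = "Open-weights models are useful because"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```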

Nate:

A lot of people, academics, can do research on it. It's good enough that for really basic tasks, you'll be able to do things that are reasonable. It's kind of this economic sweet spot right now where so much is possible with it. So what they are giving up is, like... it's like a charity cookie.

Nate:

It's like a charity cookie that they know won't disrupt their empire. It's like when the emperor pays for... I don't know, they fill up the Colosseum with water so it's nice, and people can come and get free bread or whatever. Like, in the Gladiator movie, they literally talk about it. Like, that's how you'd be the favorite emperor.

Nate:

I'm gonna give them, like, a week of games bigger than anyone has ever seen. That's, like, a third of the plot. And for Google, it's kind of an apt analogy. It doesn't affect them at all, and it makes people like them. And, right now, it will boost the stock price.

Tom:

Commodus is also, like, in love with his sister and doing all other kinds of crazy-ass shit. He's not a good emperor. Even his dad is like...

Nate:

Well, we're not getting into whether or not we think Sundar should be presiding over this AI strategy for

Tom:

both. Sundar. Yeah. I'm also... I don't want the takeaway here to be that I'm comparing Sundar to Commodus.

Nate:

Sundar is, like, an actual good, functioning CEO. Like, he's done great overall.

Tom:

Look. I do actually think we need better leaders, like, in this space. Like, I'm not anti-CEO. I'm not, like, some kind of weird asshole who thinks that there shouldn't be private enterprise. I think my point is just we need to be aware of, like... imagine you're a superlative CEO.

Tom:

What decisions are you actually faced with that could meaningfully open up access to AI tools? And so far, it seems to just be branded circuses, and maybe you get a little bit of economic value on the side. I want CEOs to be in a position where they have actual levers to pull that would make a difference. But they're as constrained as the rest of us, ironically. They're playing by rules of the game that are pretty ossified and pretty structured to favor or not favor certain types of market dynamics. And until those dynamics can change, I don't care if you're Marcus Aurelius.

Tom:

You're not gonna be that good of an emperor.

Nate:

Are you gonna comment on whether Sam Altman would be a good emperor?

Tom:

Yeah. I don't think Sam Altman's... Sam Altman can't even lift.

Nate:

Oh, yeah. We already established this.

Tom:

I'm on the record saying that. $7 trillion asshole.

Nate:

He can't even

Tom:

say how much he bench presses. We need to get Sam on the pod. Sam, how much can you actually bench press?

Nate:

Do you think he can bench press?

Tom:

And then

Nate:

like he says. Unless he might. Like, I'm, like, a...

Tom:

not even gonna bench press. He looks like... yeah, he looks like Tweety Bird. So I think he can bench more than that. Like, honestly, it's hard for you to even take it.

Tom:

Whenever you see any photo of this guy, it somehow looks like he siphoned muscle mass from his legs and put it into his biceps. Yeah. Like, it doesn't... It's

Nate:

a look. It's a fashion style.

Tom:

Well, I guess. If what he's after is fashion, that's one thing. He probably still can lift more than me. But okay. Until he can be honest about what he... yeah.

Tom:

Until he can be honest about what he lifts, I'm skeptical.

Nate:

I didn't see it.

Tom:

Sounds good. Okay. Well, we covered a lot. I'm sure there'll be more to talk about next week.

Nate:

Yeah. At this rate, we probably already missed something while we were recording.

Tom:

At this rate. Yeah. We should. It's okay. We'll be on the interwebs.

Nate:

We'll check it out. Yeah. Thanks for listening.

Nate:

Review, whatever. Like and subscribe. Share it with friends.

Nate:

Clip us, post it on TikTok.

Tom:

Clip us. Get seats at the Colosseum while they're still open.

Nate:

Yeah. Alright.

Tom:

Bye for now.

Nate:

Bye bye.

Creators and Guests

Nathan Lambert
Host, RLHF researcher and author of the Interconnects.ai blog