Developing LLMs: Open, closed, or democratic?

Episode 4 transcript
Tom: [00:00:00] Hi, Nate.
Nathan: Hey, Tom.
Tom: I was playing with ChatGPT, like I sometimes do, and I was really, uh, wanting to get inside it, understand it better from within. And I was thinking, is there anybody I could hire to do some snooping, figure out this thing: where it's coming from, what makes it tick, what are its secrets? And I think I need a PI.
Nathan: What does that mean?
You have to explain before we begin.
Tom: I think I need a P.I.
Nathan: Oh, like a professor?
Tom: Getting closer? No.
Nathan: Principal investigator?
Tom: Like a private eye, but also [00:01:00] a P.I. It doesn't really...
Nathan: Oh, hell yeah. Fuck yeah. That was rough. Welcome to our podcast.
Do we roll with this? Yeah. Anyway, so today on The Retort: how to be a retort. Our puns are getting more convoluted. We're going to go all in on the axis of openness in large language models: discuss what open source actually means, a recent language model release, the relationship between being open and trying to have quote-unquote safe models, the PR of openness, smoke and mirrors, litigation, whatever we want to talk about.
I mean, Tom, you're less plugged in. Do you know who released the most recent model that caused a stir? [00:02:00]
Tom: Is that le Mistral?
Nathan: Yeah, you pronounced it right. I think everyone calls it Mistral, but I heard one French person call it Mistral. So: Mistral, Mistral, got to bring back my French. Um, they released a 7-billion-parameter model, literally as a torrent link on Twitter.
I don't know if they were trying to make fun of the fact that Meta's models were leaked on BitTorrent earlier, or if they were just trying to stir the pot with the world out there, kind of what that means in the space of AI. I think we'll get back to the fact that they just tweeted out a torrent link as we go through openness and all these themes.
And a lot of people ended up liking the model, and then they got a lot of criticism for not actually making the model safe. So I think these are the axes that we're going to get into. I think we should start with what open source actually means, [00:03:00] because through this conversation we're going to use the phrase open, like open companies versus open source.
And they are actually very different, and we're likely to mess it up, so covering it is important. I mean, I originally went down this rabbit hole, so I can kind of lay it out, and you can go from there. The thing that's weirdest about it is that open source is really a kind of self-defined community.
There's now, like, a foundation for open source, which permits certain licenses to count as open source. So, like, the Mistral model was released as Apache 2.0. And the biggest thing that this kind of open source credentialing agency does is maintain a list of permissible licenses that count as actually open source licenses that people can build on.
I'm sure there are other things that I'm missing, but essentially, when Llama 2 was released, all of these [00:04:00] open source people got frustrated because the model was not technically open source, for two reasons: one, because the license was slightly restrictive, and two, because Meta didn't release a lot of the data.
And I was hoping that would be a fulcrum point where people got more clear about open source as values, what it represents, and actually participating in it. But I think it was just kind of a blip, where the only people to get upset about it were the open source community members and not the broader machine learning community.
That's a trend within hacker and computer cultures, where people get upset and are really vocal within their community, but actually elicit no change outside of it. So we're at this point where there are a lot of deeper questions. And I guess the most important one, before we go down the rabbit hole, is: should language models be open sourced?
I guess that's a better question for you: do you think that language models should be open sourced, and how does the technology [00:05:00] compare to previous technological revolutions and innovations? And just to be clear, are we saying the model should be open sourced, or the parameters should be open? The parameters being open is the axis where any Joe Schmoe engineer on the internet could take those parameters, deploy them on a GPU, and trick people into talking to their language model.
Tom: Right. So for me, I'm inclined to say no, to the extent that I don't see what the stakes are, other than appeasing the open source community. And I imagine this does sound [00:06:00] provocative, I guess, for an ethicist to be saying, because a part of me feels that generally we want these things to be as open as we can. But often that's not grounded in very clear reasons why, practical reasons why. And my sense is that the markets within which any model worth training is being developed are ones that make the open source model questionable, if not unsustainable. And so I struggle to see, and this could be my limitation as somebody who's not as embroiled in this as you are, but I don't see what the concrete reason is why any particular model [00:07:00] is justifiably open source, other than as some kind of clarion call that maybe all models should be built to be open source in some way. But because they're not, and because the pace of things is not dictated by the open models, it's dictated by the closed ones, we're operating in an ecosystem that functionally just is closed.
And so I don't really see the political purchase in that approach.
Nathan: I'm a little surprised we've never actually talked about this. That's kind of my hunch hearing you say this; I feel like a lot of the topics we go into, we've at least touched on outside of this. The primary arguments for open sourcing language models, I think, are, one, around how fast progress can be made.
There's a whole community making fun of EA, [00:08:00] called the effective accelerationists. They're very pro open source because they want things to go faster, because they sensibly want to make money and probably do weird things by changing the world super fast. The other is around security and, um, safety, because when more people are experimenting with the model... especially, I think these companies disproportionately under-employ social scientists.
And a lot of them actually do have a lot of safety researchers, but you effectively have all researchers having access to the models when they're open sourced. Okay, well, I already messed up: when the model is available, or effectively free, people can access it, they can do something with it. When the model is open sourced, different types of people can engage with it.
So a model is only open sourced, uh, this is according to people more plugged into the community than me, when it can be totally reproduced. So you have access to the [00:09:00] exact data that was used, the exact training code, and the model architecture. If any of those are missing, then it is not open source.
It might be open compared to OpenAI or something like this. And the argument is that then you can look for things: you can find biases in the model, you can do attribution tests to figure out what types of data cause what types of issues. And that's very different than just being able to find an issue in an existing model.
And I mean, we've seen that kind of acceleration effect with Llama 2, where Meta got a lot of feedback on their model, both around harms and pretty much on how to fine-tune it, by releasing it, because everyone that has a GPU and an interest has been playing with these models. I don't think most of the variants are particularly useful, but getting people to understand how your models behave under different circumstances is useful as an economic factor.
Tom: That all makes sense. Yeah, my point of [00:10:00] view is informed by my suspicion about the nature of these models and their limits; this is partly rooted in a hunch, I guess, about the models. I'm skeptical that the current makeup of the open source community is, I guess, sociologically sufficient to really do the kind of open experimentation that would be sufficient to actually bootstrap up an understanding of these models. I mean that not as a critique of the people in that community, but more just that we would need probably a few orders of magnitude more researchers than there are to actually, [00:11:00] at any one moment in time, truly do all of the sifting.
Nathan: Well, is that better than what OpenAI or Google is gonna do?
Tom: Well, that seems to be the question, right? And that seems to be what's at stake in this semantic indeterminacy of what open source even means. You've kind of already addressed it, but we're living in a situation right now where, certainly if Twitter's anything to go on, for a model to be open source seems to just mean that it's marginally, emphasis on marginally, more transparent than whatever OpenAI is doing. So really, the definitions of those terms are entirely dependent on the effective monopoly power that OpenAI, and maybe Anthropic, enjoy at the present moment. It's a function of their position in this ecosystem. [00:12:00] And I think as long as that's true, it's hard for me to get that impressed by calls to make models more open source.
Because it seems that you're really only going to be playing catch-up with a market dynamic that is already conditioning our ability to even have the language to describe these capabilities, if we're already at the mercy of the market in that regard, or of two firms, maybe just one firm in particular.
I'm personally not that impressed by an incrementally more open model, especially because every three months there's a new one. So I personally don't really see what the purchase is. I can say more about what I think it would take for...
Nathan: Yeah, I think we could get to that. I think there are people that criticize the [00:13:00] open community and the leaders in it; like, HuggingFace is seen as a leader in it.
And there was this article by Dylan Patel, who writes a newsletter called SemiAnalysis, where he was writing about the GPU-rich and the GPU-poor, and he went on just smashing HuggingFace for totally misleading the open source community, because HuggingFace is the place that needs to do the work to help the open source community do the things that actually will matter.
And I don't think the argument totally lands; I talked to them about it and wrote about this. I think that's overselling it. But the PR forces at stake in a lot of companies are crafting the narrative in an annoying way, where people get frustrated when there's, like, the next open source model and it says no information about itself at all.
So there's kind of that type of dissonance, and that's, I think, the thing we're talking about now. And then later in the podcast, we'll get to the other type of thing, which is [00:14:00] how open source relates to safety, and whether language models should be released on, kind of, safety grounds. I don't think... you didn't comment on that.
Tom: No, I agree. That's a different conversation.
Nathan: So, yeah, let's get to what you think would need to be done to shift the needle in open source language models, and then we can get to safety and what language models represent as a technology, 'cause it does lead in differently.
Tom: Yeah, we can see how far we get. So, I think about large language models... also, to be clear, it's not even really precise what that means. What is a large language model?
Nathan: We can take a moment to be upset that foundation model caught on and now they're adding frontier model to it. It's like, I thought foundation model was so dumb at the time.
I mean, maybe it makes a bit of sense, but the way that they motivated it was actually totally incorrect.
Tom: It always bothered me. Yeah, it always [00:15:00] bothered me.
Nathan: It's gonna be correct for economic reasons rather than anything else. I don't think they thought that at the time, but maybe they had great foresight.
Tom: I think that's right. Which is, it's not clear what is the foundation that these models are speaking to, or uncovering, or if there is one; frankly, it's still not clear. And so in practice, I think what that means is: the amount of capital investment, the amount of doubling down, the amount of coalescing around particular models or the pipelines that produce particular models, that's the foundation of whatever.
The foundation is what is being built. So, foundation model is a bit of a misnomer. It's really just kind of apologetics: we're just gonna make the foundation, and the model is [00:16:00] just the flag that we're planting in that foundation. So it is a foundation model, but not for the reasons that, you know, any of the people who use that term typically refer to.
And you're right that frontier models has kind of, sort of, addressed that, but...
Nathan: Tagging frontier models as an AI safety critique, it's like a word that they're using to try to talk about the evaluations that they want to do but don't yet know how to do. It's very odd.
Tom: The way I think about this, to call back to my skepticism of open source in this regard, let me try and ground that in this context.
I increasingly think about whatever we want to call them, large language models, foundation models, frontier models, God knows what term will catch on in six months, it's probably going to be something else. I think of them as infrastructure in the making. The difference between infrastructure and a model is that infrastructure [00:17:00] is a kind of enactment of capabilities that makes whole new activities possible. Think of a road, which we've already discussed on a previous episode.
In a sense, right? Roads are infrastructure. Why? Because roads are a way for agents to do qualitatively different things, at different scales, than they previously could. I still feel like there's this irony that so much of, and this is also the AI discourse in what I'm about to say...
Nathan: Are you trying to get to the point where open source, like, doesn't work for infrastructure?
Tom: Yeah, well, it's not so much that it doesn't work; it's that I was going to make a point I think is more specific, which is that the purpose of infrastructure is what's at stake. [00:18:00] So, building good infrastructure doesn't just mean you allow a lot of really smart libertarians to, like, play with it and refine it or something like that. Making good infrastructure means that you know whom it is meant to serve, according to what standards, and that you can verify that it is brought to those standards. That's what makes good infrastructure.
Nathan: I think I need to push you on this, because this is the analogy that you use regularly, which is, like, how technology interfaces with people.
But I don't think all infrastructure is public. I will go on a little rant, because I think the word infrastructure is good for these large language models. I think this is based on the fact that even the word model is totally broken in our lexicon. And it emerged from, kind of, calculus.
I think I have this book I like on the philosophy of science for the word model; it's the nerdiest model-based [00:19:00] RL researcher thing ever. And it's really based in the origins of calculus, in being able to bound your accuracy on certain things based on assumptions and based on differential equations and all of this.
So the kind of statistical and improvised computing that we're using totally changes that. So that's one thing. And then I also agree that a large language model is, like, a large language computational infrastructure. It is a change in how computation is gonna be done, and I see it being more comparable to something like AWS than something like a road.
In which case, I don't know if Amazon benefits if their systems are all documented in the open, but I think there are arguments to be made that something like AWS could be made more efficient if certain things are open sourced. And, I mean, Mosaic might be seen to do that model; if you think of [00:20:00] something a little bit different, like Mosaic, acquired by Databricks, their company model is to provide really good training code that integrates with the cloud.
In a nice way, so they kind of abstract that away, but their training code is open source to make it so people trust their products and can potentially improve on what they're doing. If you think of this on the other side, not training language models but rather using language models: if your code for using language models is open source, you get benefit out of it.
The language model is still sitting there, but the integration can still create economic value.
Tom: I feel like we might be talking past each other because I agree with everything that you just said. So when I use the term infrastructure, I don't mean to conflate that with public infrastructure.
Public infrastructure is a particular subclass of infrastructure, and it's an important subclass. But of course, you can have private infrastructure. You can have domestic infrastructure, you can have personally owned infrastructure, [00:21:00] individual stakes that have nothing to do with the public in this broader social, political, or democratic sense.
It doesn't necessarily have anything to do with that. While you were speaking, um, I did look up to remind myself what the origin of the word infrastructure is. So the Latin term infra is, I think, what's at stake here, which can sort of be translated as either below or underneath; that's sort of what infra means. So you think of infrared. Infrared is just the part of the electromagnetic spectrum that you could interpret as literally below, but I think more accurately, like, beyond the red. Humans cannot perceive it; some animals can, apparently, but we are not sensitive to it. We can build telescopes and satellites that are... that's the way the James Webb Space Telescope works: it's deliberately shifted into the infrared, which [00:22:00] enables it to see farther back into the history of the cosmos.
But that's the point: things that are infra are the kind of basis for whatever the following word is. So, as I say that, it's sort of a subtle distinction, I guess, with a word like foundation, or a word like frontier. But I think infrastructure is more precise, because what infrastructure emphasizes, that foundation and frontier do not, is that it is built.
It is something that is fabricated and created, and implied in that is that it could have been built or fabricated differently. That's what's key. Foundation or frontier, to me, imply that there is something natural that is being uncovered by these models; in other words, that there was something that was somehow always there, metaphysically or physically, just sort of beyond the realm of where we could see before, and now with these models [00:23:00] we're just able to pierce that veil.
We're just able to tap into it. And so that's why, for me, those terms do smack a little bit of that old, like, alchemy way of thinking, whereas approaching them as infrastructure just means they were created by people to do particular things, for particular reasons, according to standards that remain indeterminate, that remain unresolved, that there is not consensus on.
Nathan: I think almost anyone, all these, like, smart people on the papers, will agree with you. I don't think that these names come out of any brilliant word analysis and origins of meanings. I think it's just an agreement that comes when you have 30 professors on a paper and they're taking a poll of which name it could be. And then they go with the presentable one, and then, off the record, they're like, it's kind of shitty.
And then they go with the presentable one and then off the record, they're like, it's kind of shitty. I think it doesn't really answer the question of like, why can this layer of infrastructure not benefit from having an open source component?
Tom: I might push back on that [00:24:00] slightly. I suspect that some of the reasons that certain words are chosen are more unconscious than we might be comfortable admitting.
I, I suspect that the reason certain words get used over time and obtain this kind of mystique, which I would argue foundation models or frontier models have definitely obtained a kind of mystique, is that there is that kind of smokiness, there is that hinting at something tectonic that is being excavated.
In some way by this structure, and I frankly just don't find that to be a constructive or even healthy way to approach what is being built. So I'm completely fine with some of these models remaining private; they can be private infrastructure, like AWS, as you mentioned, that is used towards private ends for private purposes. It should still be safe, and possibly open source could play a role there.
What I'm trying to [00:25:00] emphasize, though, is that if the goal is to build a good LLM, I don't see how making the models more open is any guarantee, or even, like, necessary towards that end. Because historically, whether we're looking at Ancient Rome, or the Renaissance, or the Industrial Revolution, or the New Deal, whenever there was a qualitatively new type of infrastructure that was built, with some exceptions...
The open source component, in my view, never really played a necessary role. Um, in some ways it was bad. Uh, and I could maybe give some, a few examples of that, that are maybe illuminating on this. But the point is, I think what needs to happen, at least in tandem, because I'm [00:26:00] not anti open source, I sort of just don't really see it as critical right now.
Because it's not clear to me, I haven't yet seen the case for why we need it as a way to either articulate or achieve a desirable goal for what LLMs should be to be successful.
Nathan: Fair. I think, when I think of the things that open source actually can do beyond all the marketing, it's very useful for, kind of, evaluations. It's like, OpenAI releases these system cards.
And I don't think... like, the system card would be way bigger if any random person was able to do it. And then there's the grounding in reality of the methods they have against what they actually can do. So, we've talked about RLHF a lot. I think why RLHF is interesting is because it is really hard to document, and there are powerful metaphors at [00:27:00] play.
And I think being transparent about which interpretation of the metaphor, and what way you may be using it, is important for dispelling some of the magic around it and making it harder for people to do malicious things with language models.
Tom: That's an interesting point. I am sympathetic to that idea.
Nathan: Like, I think there's a valid argument... I think you can make a valid argument that I might be fighting a losing battle by default. And it's like, is it a sword worth dying on because it makes things marginally better, or is it just useless?
Tom: I'm sympathetic to the idea that the open source commitment makes it harder for things to get bad, right? Which is what I heard in what you were saying. I'm less convinced that it's necessary or even [00:28:00] useful for making ones that are good, and I guess that's what I'm emphasizing in my remarks.
Nathan: I think, again, this leads into the next part of things.
I think in order to get open source as a pathway, you have to accept that most of the people that are interested in openness are not in it for doing good. But they're rather in it for kind of libertarian goals of, I want to run my own thing and not be subject to a company. And this is something that I wrote about months ago.
It's like, open source doesn't care about harms. And that's because most of the people driving the conversation on language models are effectively libertarians, where they're like, I have my GPU, I want to train a language model that I can talk to on my laptop. And then there's a whole dark web of people that want to have it for weird, bad purposes.
And so there's kind of the open source academic, scientific people like me, who are like, this is all bullshit and people are saying stupid things, versus the open source, don't-tread-on-me, my-computer-my-model [00:29:00] type of thing. And I think both of them are not really sustainable on their own.
Tom: Right. So I think that spirit was underlying some of my remarks as well. I agree, and this reflects back on the wider culture of AI, which is something else worth referencing here: when you read most of these papers, even most alignment papers, most roboticist papers, including self-driving car papers, it almost doesn't matter.
It's very hobbyist. It's very much like: I have a robot in my garage, and it's infinitely smart, or I'm concerned that it might be, and I'm wondering how I can still get it to do the things that I specifically want it to do, even though it's totally unconstrained in its capabilities. [00:30:00] And the thing is, those metaphors really just don't hold, even if you believe that these things can become arbitrarily capable.
I mean, for that reason, that metaphor of having these things in your lab or in your house, or that you're somehow the only other person. It's this Robinson Crusoe psychology, where you're alone, and you're the only thing, with your two hands and your one little prefrontal cortex and whatever, who happens to be really good at Python, and you're just able to somehow play this game with this thing. And so many of these papers are rooted in that mindset.
I think just because that is in fact the kind of life that many people in this community have, and that is their experience when they are accessing an API or writing lines of code or whatever it is, that's just your [00:31:00] relationship with the system. That's the interface by which you access it. But that's not the way a system like that is going to actually behave once it exists and is deployed.
It's a completely different ballgame, and we are stuck in this, whether you want to call it libertarian, whether you want to call it individualist, whether you want to call it this kind of weirdly asocial, and, I think inappropriately, what many people who work on this stuff think of as a pre-social mindset, which is: before this thing hits society, what needs to be said about it, or what kind of access does there need to be, so that I can verify that it works?
And actually, that's not pre-social, that's just technocratic. That's really just an anti-democratic approach to how these things can get evaluated or understood. So I guess that's why this conversation is interesting, and I also think the larger conversation is interesting: [00:32:00] because I think that, in fact, a lot of open source arguments are anti-democratic.
Because they're preventing the ability for a majoritarian consensus of any kind to be arrived at for what the evaluation criteria should be, beyond just the ability of individual, possibly libertarian, people to do evaluations on their own.
Nathan: I mean, is it not the case that the broader public will have an easier time engaging with language models if they're open source, because they will be available sooner to a broader range of people? Like, take nonprofit evaluation organizations: them having to negotiate with OpenAI, to then negotiate on a policy matter, is way slower than going to look at Llama 2, understanding how language models work, and building infrastructure there. I guess I just doubled down on the infrastructure word. [00:33:00]
Tom: Well, your question is about civil society. So...
Civil society is a kind of maybe somewhat fancy term that gets thrown around but is rarely defined. The way that I'm using it is: basically, you have the state, which, let's just for the sake of this conversation, call something like the federal government, on the one hand. On the other hand, you have the market, which in this case would be something like the biggest companies that are building the most capable models, okay, the ones that really set the agenda on this.
In between those two things is what's called civil society, which broadly means this network of, could be academics, could be hobbyists, could be the press, people who are concerned with basically talking about [00:34:00] what is being deployed and evaluating it as a means of informing public policy. It's that kind of web of organizations.
And there are thousands of these, right? I mean, we could rattle them off. Hugging Face is a great example of an org that is very much positioned in that kind of civil society type. So I'm not anti civil society. The question for me is: what is the role of that type of organization in informing how policy gets made?
I don't think that organizations like that are in a position to unilaterally decide what a good LLM is, or whether one should be deployed, or how it should be integrated in any particular way. The expertise probably is critical, and I'm glad that there is [00:35:00] the expertise that underlies the open source movement and what it wants to do. But that's really just a means to an end, and I haven't yet seen a constructive basis for arriving at the end, which is something like: why is it that the New York City school system has a blanket ban on ChatGPT and then does a complete 180?
And says, no, we need to completely revamp our curriculum in order to get ahead of this thing and do right by our students. It's a total oscillation. And many other organizations are faced with a similar kind of choice, because they're not able to exercise judgment on what is the appropriate integration of this new kind of infrastructure into their legacy mission.
The question for me is, again: what is the role of open source in facilitating that kind of judgment? And I'm sure there is a role that it can play, but I think that role is going to be different at [00:36:00] different moments in the development of these systems, and that's why I'm not committed to open source for the sake of open source.
That seems to me to be what the shape of a lot of the debate has been so far.
Nathan: I do think that a lot of the people that are saying open source for the sake of openness, like, the real intellectual leaders in the field, are saying things that are much more specific, around understanding how these models work at a deeper level, understanding the attribution of data, and understanding how to continue to use these in a broader economy.
I do think that you could be getting too caught up on the nonsense of PR that we're in, where every startup that raised 20 million right now is trying to get their feet under them by saying, we released an open language model. And that's very different than meaningfully trying to shift the needle on what state-of-the-art information is public about language models.
I think why that's the case is because there are clearly actors on the AI [00:37:00] safety side of things that want information to not trickle out and not trickle down, to the point where I've been asked at conferences, like: hey, do you know who at Hugging Face figured out this idea? It was about 18 months after someone at Google figured it out.
And I'm wondering if someone told it to them, or if that's how long it takes people to figure out how to do it. So I just find that much more malicious than having all the information in front of you.
Tom: So, as a counterpoint to the maliciousness: 100%, absolutely. Let's just call a spade a spade here. We all remember, maybe I shouldn't say we all, I remember, uh, GPT-2, as the screen gets hazy while I'm speaking about this, right?
This is a long time ago, this was several years ago, when GPT-2 was finished, right? And OpenAI said that they were concerned about, you know, widespread access to GPT-2, so they were limiting access and you had [00:38:00] to kind of ask for permission. A lot of people in the field, and also a sizable portion of the AI safety community, had... there was a spectrum: some people eye-rolled when OpenAI did that, and some other people were actually more kind of disturbed. Because, on the one hand, GPT-2 didn't really seem anywhere near AGI-level concerns in terms of, can this thing actually do unprecedented harms?
Are its capabilities really that difficult to make sense of? It was impressive, but it really wasn't anywhere near anything like what they were theorizing as AGI. And secondly, and I think more critically, it really wasn't that much better than what other labs were doing. It was [00:39:00] good, but it wasn't really that far ahead of the curve.
So it felt to a lot of people like OpenAI was playing a different kind of game, which is setting a precedent around deciding when, and for what reasons, certain models, in the name of safety, should not be open, should be closed. And that's a very dangerous game, and, I do feel, a fundamentally anti-democratic game.
Because there really was no accountability for a decision like that. There was no basis that was really robustly public about why then, and what it would take for access to be opened up. What are the actual stakes of this closure? It seemed like they were frankly seeing what they could get away with, as far as setting a [00:40:00] precedent around this goes.
And we're now living in the consequence of that, which is this dynamic where the biggest companies are basically allowed, not expected, to make their systems open, to make their models robustly accessible, in a way that prior to that was really much more normal and much more expected.
And we rely on this other type of community, this open source flag, to be the counterpoint to that. That whole dynamic didn't have to happen. That's what I think is unfortunate here. Those decisions were downstream of particular choices that particular companies and people made about what this infrastructure can be and who it can be for.
That's what concerns me, and it concerns me that that's been kind of forgotten or missing from this debate about whether we should [00:41:00] pursue open source or not. I think the answer obviously is, I mean, yes: we have no idea what we're building, we don't really know what the limits are, it should be as open as possible, just as a means to understanding.
But we're already in a dynamic where moats are kind of the name of the game, economically and institutionally. So if we're already building moats, and we're halfway to feudalism in that ecosystem, what does open source get you? I'm not sure I know the answer to that question. This is sort of why I'm of different minds about this.
Nathan: this.
I mean, what do you think about meta then? Meta can't spend as much as They can spend a similar amount to Google and OpenAI, a similar amount to OpenAI. Google could probably spend more. Google is bigger and Google has the advantage of in house hardware. So like most of [00:42:00] their spending is more efficient.
Like, Meta is taking the other side: they're not playing these accountability games. I mean, they're likely not releasing their pre-training datasets until some litigation wraps up around Llama 1 and books, which is probably kind of nonsense; it's a hopeless legal case.
Tom: Well, I feel like Llama is not open source in the full way that you were articulating.
Nathan: It is not technically open source, but, like, Meta and Zuckerberg are pushing the conversation around to release...
Tom: ...or not to release. Right. Well, I think, again...
Nathan: Do you think that is a good thing? It's a simple way to put this.
Tom: No, I don't. Um, and the reason is hopefully consistent with what I've been saying, [00:43:00] which is: we are recycling words and phrases like open source as if they apply to this new type of infrastructure, when in fact our ability to mobilize those terms, in any sense, is already delimited by the game that has been set up by a combination of these companies.
So what that means is... it's sort of like, I have a funny analogy in my head, which is something like: have you ever played the game Monopoly? But you've actually played it?
Nathan: Oh yeah, I've played Monopoly like a bunch of times.
Tom: So, in the early part of that game, you're just buying properties.
Most spaces on the board are not taken yet, so when you roll the dice, you're very likely to end up on a space that is not owned. And you have enough capital at the start of the game that you're very likely to just buy it, right? Regretfully, sure. The [00:44:00] game doesn't really start until most of the board is owned.
Because then you either start having players who are bidding for the remaining properties, or they start trading properties. The goal not just being to get the most properties, it's to get monopolies, right? Corners of the board that are yours. But the reason those monopolies matter is not just because of, like, you get more money or whatever it is when you land on them.
The reason is that the dice only go up to 12. That's key to the dynamic of the game. The dice only go up to 12, and so if you have two monopolies on the same side of the board, basically people are going to land on them, because the board side is only, what, 10 spaces long? It's prohibitively unlikely for people to repeatedly skip over what you're doing.
So, I'm not sure if I can stick the landing here, but this makes sense to me, and you can see if it makes sense to you. Basically, the open source mindset to me right now [00:45:00] increasingly feels like we're saying: we have these dice, why can't we just keep going around the board, playing the game, getting more properties and getting more money, because we get 200 dollars every time we pass Go?
Well, the reason is that by this point, the board has been carved up between these companies, so that there are even odds that we're just going to land on extremely costly properties, much more likely than that we're going to actually get back around for the 200 more dollars. When we land on Boardwalk, we pay two grand, right?
You know, these numbers are still in my head. So the point is, our semantics of open source only go up to 12, but the board has already been carved up in ways that make it so that, yeah, if I had to choose, I'd rather land on the pink properties, I guess that's Llama, whereas the blue properties would be, like, maybe GPT-4.
Nathan: And GPT-4 is, like, 10 times more expensive than all [00:46:00] the other models, right?
Tom: So, like, and it's only getting worse. The economics, the capital that you need to train the kinds of models that are remotely competitive with the leading, entirely closed, moated-off ones, is getting exponentially worse.
Nathan: I just can't... I just feel like you're getting towards a cop-out. Like, as someone who cares about open source, I literally have a blog post coming out this week, by the time this podcast is out.
It's literally going to be like: are all these open source companies just fucking about and just shooting themselves in the foot? And I think the question is, even in the context of people being pretty dumb, I still think transparency is more likely to help. So that's the battle that I'm fighting, because I understand where this technology is going, and it's just like, what can we do with the imperfect situation?
And I think that's the question that I'm trying to get to. But your answer might be like, it doesn't matter, there are more systematic issues at play.
Tom: Well, it's [00:47:00] not an either-or. I agree that we should be as transparent as possible. I also strongly feel that there are larger structural issues at play that transparency alone is not going to solve.
But transparency can be, again, I will return to my civil society metaphor. Transparency is critical as a means for building consensus on political solutions or political approaches, whether that's regulation, whether that's certification. So I'm not just trying to, like, bonk a stick on these companies.
Nathan: Are you trying to, like... do you see... so are you trying to build this alternative path, then? Like, who does this system-level change?
Tom: Well, yeah, I am. I mean, I'm not the only one. Um, yes, well, it's about creating the conditions [00:48:00] for the kind of politics that you want, rather than being stuck with the kind of politics that you don't. The positions that are possible to play right now kind of just suck.
If what you're interested in is building systems where you even know how they work, let alone are happy with how they work, there isn't a place right now where you're like, yeah, I'm in a position to work on that. That just doesn't exist. So instead, what you have to do... is plant seeds for an alternative...
Nathan: You're ready for the next AI spring. Tom's like, the winter's coming after this, I'm planting some seeds.
I'm planting some
Tom: seeds. There's a there's an element of that. I mean, when I, if I think in some ways, this conversation has ended up being about. How cynical should we be? You're the
Nathan: ...more cynical of the two.
Tom: Well, I wouldn't purport to say how cynical I think you are, but I actually don't [00:49:00] feel cynical myself.
I feel cynical about particular dynamics, and particular players, and particular situations. And there's lots of grounds for that right now, but I'm fundamentally hopeful about the future. And the reason for that is, I feel like new questions can be asked about, and with, these models that were not previously possible to ask, and frankly not even really possible to articulate or specify.
And that's why, for me, it's a fundamentally liberatory exercise to think about. Because, again, this is infrastructure. This is not just, oh, we're uncovering some fundamental layer of reality, and it sucks that it's also going to be very unequal for people. I don't think that's true. The way these models are built is a product of design choices.
The way that they are deployed is a product [00:50:00] of other choices, and so is the way in which they're, you know, made optimal and whatnot. These are all different layers of the stack, and they could be made differently. And I'm optimistic that we will get to the point where it will become more clear what infrastructure we are able to build now, and we will be able to make it work in ways that I believe will be fundamentally transformative for people, in ways that are good. But it's early days. And so what I want to do is figure out how we get to the place where there's at least even odds of that work being done. And if open source can play a role, let's do it, by all means, but I want to see that argument.
That's the argument that I want to see. I don't want to see this hobbyist, individualist, libertarian bullshit anymore.
Nathan: Oh, yeah. I mean, do you want to go for 10 more minutes? There's a whole nother angle on platforms; [00:51:00] essentially, the thing to do is talk about bridging the gap from your word, infrastructure, to the word that Silicon Valley uses, platform.
And kind of map between infrastructure and platforms and what language models are going to represent, and then the things that people are doing. So we could also save that for next time, or we could go into it.
Tom: We do have a hard stop, I think, in a few more minutes, but I can briefly say something there.
Nathan: I think this might just be a teaser to continue, but essentially, back to the Mistral model. When the Mistral model was released, a lot of people were criticizing it for being unsafe. They were saying, look, Meta released a safe model in addition to an unsafe base model. They did both. People don't want to talk about that, but essentially, Mistral went full YOLO. I think that's why they were upset about the safety.
Alwyn full YOLO. I think they were upset about the safety. But the important question, I think, is if you view language models as a platform or infrastructure, a platform being something that kind of aggregates demand and [00:52:00] internet and stuff like that, should the base model have any safety filters by default and, or should the base model be open, not necessarily fully open source based on these potential harms?
And, reluctantly, I think that it is okay to not do any filtering, but I'm curious what your answer is before I go into an explanation.
Tom: I'm entirely... not just sympathetic. I mean, I've made explicit in my writing that I see societal-scale digital platforms as infrastructure, that platforms are infrastructure, or easily have the capacity to become so. And I'm not the only one; I mean, Lina Khan basically famously has argued this about Amazon.
What do you think
Nathan: of her? Ridiculous FTC that's like the little lawsuit. She's bringing like are you gonna attach affiliation to her? Legal approach and [00:53:00] monopoly power, you
Tom: know, millennial solidarity is, is there. She's like, yeah, she's like our age. I mean, well, she's my age. You're actually younger than me a little bit.
She's older than me. Um, yeah, I think, look. We, this is, this is a whole other conversation about law and economics, but the way in which we, we don't yet, this is part, I think the common thread here is, we do not yet have a consistent sound language through which to understand the power that is implicit in what is being built.
That's what's at stake here. And my problem with open source is that it has, I think, within it, certain understandings of the way in which power can be made controllable, or, uh, can be harnessed, through what is really a [00:54:00] fundamentally individualist methodology of engagement.
Nathan: I'm not saying that's wrong.
I think this is a trend in a subset of open source, because part of open source is that more people can engage. I guess it might be kind of rosy-eyed, because in most popular open source projects, the company that maintains it makes all the hard decisions and makes all the substantial contributions. So I think it might be an idealistic thing rather than a practical one, but it's worth noting.
Tom: So I guess what I want is for that ideal to be updated for the present moment. I want it to be more fully articulated, more fully furnished, so that we can actually see if it is Pollyannaish or not. That's where I want to end up, because right now, again, we're living in this slippery moment where you can be open source if you're this libertarian hobbyist, or you can be open source if you're this, like, closet romantic and [00:55:00] you want this kind of pseudo-democratization by means of everyone just gradually becoming...
Nathan: Can we stop using... or, either can we establish that we hate the word democratizing AI, or stop using it? I think people have soured on it. No for-profit company is trying to aggregate mass rule in the direction of AI. I am the one who has fallen for using the word democratization without thinking about it.
But what they're trying to do is just make a space where more people have a path to use it. They're trying to enable broad adoption. Commercialize. They're not trying to democratize.
Tom: It's commercializing, not democratizing. Right. I think I've also probably been lazy in... I mean, just now I meant it pejoratively, so I actually was being precise there.
But yeah, I don't think you can democratize anything. I think that there's either democracy or there's not, because democracy is about decision, not about access or transparency. Democracy is a question of who decides, and in what ways. It's not a question of, [00:56:00] yeah, this kind of, yeah...
Nathan: Yeah, machine learning will never be democratized. Like, the capitalistic incentives will technically control the decisions more than any democratic process.
Tom: So, I think we should accept that and find a different basis for democracy. That's pretty much everything I've been saying in this conversation.
I'm not anti open source because I want secrecy or obfuscation. It's that I want to render unto the market what is the market, and render unto politics what is politics. But terms like democratization or open source, and frankly often transparency, can be slippery in the way in which they serve the interests of one in the name of the other.
Nathan: Yeah, that sounds good. I think I have more optimism in it, but that's probably strongly biased by my positions and focuses.
Tom: I want it to be a strategy. I want open source to be a [00:57:00] strategy towards desired ends, rather than its own end. Because, maybe that's the distinction here, I don't see it as its own end.
Nathan: Yeah, we'll see if I can bring you an answer on that. I'm sure there is one. It feels like there are threads being drawn out of the noise. The loud-ass monkeys with cymbals in the background, those are the companies that make the coherent argument for open source harder.
Tom: Amen to that. Well, this has been fun.
Nathan: Yeah. See you next time. Thanks for listening.
Tom: Bye for now.

Creators and Guests

Nathan Lambert
Host
RLHF researcher and author of the Interconnects.ai blog

Thomas Krendl Gilbert
Host
AI ethicist and co-host of The Retort