Transcript for In conversation with Sam Altman

SPEAKER_02

00:00 - 00:22

I first met our next guest, Sam Altman, almost 20 years ago when he was working on a local mobile app called Loopt. We were both backed by Sequoia Capital, and in fact, we were both in the first class of Sequoia Scouts. He did an investment in a little unknown FinTech company called Stripe. I did Uber. And in that tiny experiment... Uber? I've never heard of that before. Yeah, I think so.

SPEAKER_01

00:22 - 00:24

I've got it starting right up.

SPEAKER_04

00:24 - 00:26

You should write a book, J-Cal. Maybe.

SPEAKER_02

00:31 - 02:28

Rain Man David Sacks. We open sourced it to the fans and they've just gone crazy with it. Love you besties.

That tiny experimental fund that Sam and I were part of as scouts is Sequoia's highest multiple-returning fund: a couple of low single-digit millions turned into over $200 million, I'm told. And then he did... yeah, that's what I was talking about, Roelof. Yeah. And he did a standout stint at Y Combinator, where he was president from 2014 to 2019. In 2015, he co-founded OpenAI with the goal of ensuring that artificial general intelligence benefits all of humanity. In 2019, he left YC to join OpenAI full-time as CEO. Things got really interesting on November 30th of 2022: that's the day OpenAI launched ChatGPT. In January 2023, Microsoft invested $10 billion. In November 2023, over a crazy five-day span, Sam was fired from OpenAI, everybody was going to go work at Microsoft, a bunch of heart emojis went viral on X/Twitter, and people started speculating that the team had reached artificial general intelligence and the world was going to end. And suddenly, a couple of days later, he was back to being the CEO of OpenAI. In February, Sam was reportedly looking to raise $7 trillion for an AI chip project. This, after it was reported that Sam was looking to raise a billion from Masayoshi Son to create an iPhone killer with Jony Ive, the co-creator of the iPhone. All of this while ChatGPT has become better and better, and a household name. It's having a massive impact on how we work and how work is getting done, and it's reportedly the fastest product to hit 100 million users in history, in just two months. And check out OpenAI's insane revenue ramp: they reportedly hit $2 billion in ARR last year. Welcome to the All-In Podcast, Sam Altman. Thank you. Thank you, guys. Sacks, you want to kick this off here?

SPEAKER_00

02:28 - 02:44

Okay, sure. I mean, I think the whole industry is waiting with bated breath for the release of GPT-5. I guess it's been reported that it's launching sometime this summer, but that's a pretty big window. Can you narrow that down? I guess, where are you in the release of GPT-5?

SPEAKER_05

02:44 - 03:41

We take our time on releases of major new models, and I think it will be great when we do it, and I think we'll be thoughtful about how we do it. We may release it in a different way than we've released previous models. Also, I don't even know if we'll call it GPT-5. What I will say is, you know, a lot of people have noticed how much better GPT-4 has gotten since we released it, particularly over the last few months. I think that's a better hint of what the world looks like, where it's not 1, 2, 3, 4, 5, 6, 7, but where you just use an AI system and the whole thing gets better and better, fairly continuously. I think that's both a better technological direction and easier for society to adapt to. And I assume that's where we'll head.

SPEAKER_03

03:41 - 03:55

Does that mean that there aren't going to be long training cycles, and it's continuously retraining or training sub-models? Sam, maybe you could just speak to what might change architecturally going forward with respect to large models.

SPEAKER_05

03:55 - 04:04

Well, I mean, one thing that you could imagine is just that you keep training a model, right? That would seem like a reasonable thing to me.

SPEAKER_02

04:06 - 04:27

You talked about releasing it differently this time. Are you thinking maybe releasing it to the paid users first, or a slower rollout to get the red teams tight? Since now there's so much at stake: you have so many customers actually paying, and you've got everybody watching everything you do. Do you need to be more careful now?

SPEAKER_05

04:27 - 05:10

Yeah, it's still only available to the paid users. But one of the things that we really want to do is figure out how to make more advanced technology available to free users too. I think that's a super important part of our mission, this idea that we build AI tools and make them super widely available, free or, you know, not that expensive, whatever it is, so that people can use them to go kind of invent the future, rather than the magic AGI in the sky inventing the future and showering it down upon us. That seems like a much better path. It seems like a more inspiring path. I also think it's where things are actually heading. So it makes me sad that we have not figured out how to make GPT-4-level technology available to free users. It's something we really want to do.

SPEAKER_02

05:10 - 05:12

It's just very expensive.

SPEAKER_04

05:12 - 06:14

I think maybe the two big vectors that people always talk about are the underlying cost, and sort of the latency, that have kind of rate-limited a killer app, and then second, the long-term ability for people to build in an open-source world versus a closed-source world. And I think the crazy thing about this space is that the open-source community is rabid. So one example that I think is incredible: we had these guys do a pretty crazy demo of Devin, remember, maybe five or six weeks ago, that looked incredible. And then some kid just published it under an open MIT license, like OpenDevin, and it's incredibly good, almost as good as that other thing that was closed source. So maybe we can just start with that: tell me about the business decision to keep these models closed source, and where do you see things going in the next couple of years?

SPEAKER_05

06:15 - 07:55

So on the first part of your question, speed and cost: those are hugely important to us. And I don't want to give a timeline on when we can bring them down a lot, because research is hard, but I am confident we'll be able to. We want to cut the latency super dramatically. We want to cut the cost really, really dramatically. And I believe that will happen. We're still so early in the development of the science and in understanding how this works, plus we have all the engineering tailwinds. So I don't know when we get to intelligence too cheap to meter, and so fast that it feels instantaneous to us, and everything else, but I do believe we can get there for a pretty high level of intelligence. It's important to us, it's clearly important to users, and it'll unlock a lot of stuff. On the open source, closed source thing: I think there are great roles for both. You know, we've open sourced some stuff. We'll open source more stuff in the future. But really, our mission is to build toward AGI and to figure out how to broadly distribute its benefits. We have a strategy for that. It seems to be resonating with a lot of people. It obviously isn't for everyone, and there's a big ecosystem, and there will also be open-source models and people who build that way. One area that I'm personally particularly interested in open source for is: I want an open-source model that is as good as it can be that runs on my phone. The world doesn't quite have the technology for a good version of that yet, but that seems like a really important thing to go do at some point.

SPEAKER_02

07:55 - 07:56

Will you do that? Will you do that?

SPEAKER_05

07:56 - 08:01

I don't know if we will, or someone will. But someone will. Llama 3?

SPEAKER_00

08:01 - 08:14

Llama 3 running on a phone? Well, I guess maybe there's the 7-billion version. Yeah, I don't know if that will fit.

SPEAKER_05

08:14 - 08:20

I'm not sure about that one. I haven't played with it. I don't know if it's good enough to do the thing I'm thinking about here.

SPEAKER_00

08:20 - 08:49

So when Llama 3 got released, I think the big takeaway for a lot of people was, oh wow, they've caught up to GPT-4. I don't think it's equal in all dimensions, but it's pretty close, pretty much in the ballpark. I guess the question is, you guys released 4 a while ago, you're working on 5, or more upgrades to 4. To Chamath's point about Devin, how do you stay ahead of open source? That's just a very hard thing to do in general, right? How do you think about that?

SPEAKER_05

08:49 - 09:31

Well, what we're trying to do is not make the smartest set of weights that we can, but make this useful intelligence layer for people to use. A model is part of that. I think we will stay pretty far ahead of, I hope, the rest of the world on that. But there's a lot of other work around the whole system that's not just the model weights. And we'll have to build up enduring value the old-fashioned way, like any other business does: we'll have to figure out a great product, and reasons to stick with it, and deliver it at a great price.

SPEAKER_02

09:32 - 10:12

When you founded the organization, the stated goal, or part of what you discussed, was, hey, this is too important for any one company to own it, so it definitely needs to be open. Then there was the switch: it's too dangerous for anybody to be able to see it, and we need to lock this down, because you had some fear about that. Is that accurate? Because the cynical take is, well, this was a capitalistic move. I'm curious how the decision was made here, in terms of going from "open, the world needs to see this, it's really important" to "closed, only we can see it." How did you come to that conclusion in the discussion?

SPEAKER_05

10:12 - 11:38

The reason we released ChatGPT was that we want the world to see this, and we'd been trying to tell people that AI is really important. If you go back to October of 2022, not that many people thought AI was going to be that important, or that it was really happening. And a huge part of what we try to do is put the technology in the hands of people. Now, again, there are different ways to do that, and I think there really is an important role to just say, here are the weights, have at it. But the fact that we have so many people using a free version of ChatGPT that we don't run ads on, we don't try to make money on, we just put out there because we want people to have these tools... I think it's done a lot to provide a lot of value, and teach people how to fish, but also to get the world really thoughtful about what's happening here. Now, we still don't have all the answers, and we're fumbling our way through this like everybody else, and I assume we'll change strategy many more times as we learn new things. When we started OpenAI, we really had no idea how things were going to go: that we'd make a language model, that we'd ever make a product. I remember very clearly that first day, we were like, well, now we're all here. It was difficult to get this set up, but what happens now? Maybe we should write some papers, maybe we should stand around a whiteboard. And we've just been trying to put one foot in front of the other and figure out what's next, and what's next, and what's next. And I think we'll keep doing that.

SPEAKER_04

11:38 - 12:31

Can I just replay something and make sure I heard it right? I think what you were saying on the open source, closed source thing is, if I heard it right: all these models, independent of the business decisions you make, are going to become asymptotically accurate toward some amount of accuracy. Not all, but let's just say there are four or five that are well capitalized enough: you guys, Meta, Google, Microsoft, whomever, maybe one startup. Trained on the open web. And then quickly, the accuracy, or the value, of these models will probably shift to proprietary sources of training data that you could get that others can't, or that others can get that you can't. Is that how you see this thing evolving, where the open web gets everybody to a certain threshold and then it's just an arms race for data beyond that?

SPEAKER_05

12:32 - 13:48

So I definitely don't think it'll be an arms race for data, because when the models get smart enough, at some point it shouldn't be about more data, at least not for training. Data may matter for making it useful, though. Look, the one thing that I have learned most throughout all of this is that it's hard to make confident statements a couple of years into the future about where this is all going to go, so I don't want to try now. I will say that I expect lots of very capable models in the world. It feels to me like we just stumbled on a new fact of nature or science, or whatever you want to call it, which is... I don't believe this literally, it's more of a spiritual point... intelligence is just this emergent property of matter, and that's like a rule of physics or something. So people are going to figure that out. But there will be all these different ways to design the systems. People will make different choices, figure out new ideas. And I'm sure, like any other industry, I would expect there to be multiple approaches, and different people will like different ones. Some people like the iPhone, some people like an Android phone. I think there'll be some effect like that.

SPEAKER_04

13:48 - 14:20

Let's go back to that first section, on cost and speed. All of you guys are a little bit rate-limited, literally, on Nvidia's throughput, right? And I think that you, and most of everybody else, have effectively announced how much capacity you can get, just because it's as much as they can spin out. What needs to happen at the substrate level so that you can actually compute cheaper, compute faster, get access to more energy? How are you helping to frame out how the industry solves those problems?

SPEAKER_05

14:21 - 15:27

Well, we'll make huge algorithmic gains for sure, and I don't want to discount that. I'm very interested in chips and energy, but if we can make a same-quality model twice as efficient, that's like we had twice as much compute. And I think there's a gigantic amount of work to be done there, and I hope we'll start really seeing those results. Other than that, the whole supply chain is very complicated. There's logic fab capacity. There's how much HBM the world can make. There's how quickly you can get permits, pour the concrete, make the data centers, and then have people in there wiring them all up. There's, finally, energy, which is a huge bottleneck. But I think when there's this much value to people, the world will do its thing, and we'll try to help it happen faster. And there's probably, I don't know how to give it a number, but there's some percentage chance where there is, as you were saying, a huge substrate breakthrough, and we have a massively more efficient way to do computing. But I don't bank on that or spend too much time thinking about it.

SPEAKER_04

15:28 - 15:48

What about the device side? You mentioned models that can fit on a phone. Obviously, whether that's an LLM or some SLM or something, I'm sure you're thinking about that. But does the device itself change? I mean, does it need to be as expensive as an iPhone?

SPEAKER_05

15:48 - 16:23

I'm super interested in this. I love great new form factors of computing, and it feels like with every major technological advance, a new thing becomes possible. Phones are unbelievably good, so I think the threshold is very high here. I personally think the iPhone is the greatest piece of technology humanity has ever made. It's really a wonderful product. What comes after it? I don't know. I mean, that was what I was saying: it's so good that to get beyond it, I think the bar is quite high.

SPEAKER_02

16:24 - 16:26

You've been working with Jony Ive on something, right?

SPEAKER_04

16:26 - 16:36

We've been discussing ideas. But I don't... like, if I knew... Is it that it has to be more complicated, or actually just much, much cheaper and simpler?

SPEAKER_05

16:36 - 16:52

Well, almost everyone's willing to pay for a phone anyway. So if you could make a way cheaper device... the barrier to carrying a second thing or using a second thing is pretty high. So given that we're all willing to pay for phones, or most of us are, I don't think cheaper is the answer.

SPEAKER_02

16:54 - 16:56

Different is the answer, then?

SPEAKER_00

16:56 - 17:03

Would there be, like, a specialized chip that would run on the phone that was really good at powering a phone-sized AI model?

SPEAKER_05

17:03 - 17:20

Probably, but the phone manufacturers are going to do that for sure. That doesn't necessarily mean a new device. I think you'd have to find some really different interaction paradigm that the technology enables, and if I knew what it was, I would be excited to be working on it right now.

SPEAKER_02

17:20 - 17:31

But you have voice working right now in the app. In fact, I set my action button on my phone to go directly to ChatGPT's voice app. I use it with my kids and they love talking to it. It's got latency issues, though.

SPEAKER_05

17:31 - 17:46

Yeah, we'll get that better. And I think voice is a hint to whatever the next thing is. If you can get voice interaction to be really good, I think that feels like a different way to use a computer.

SPEAKER_02

17:46 - 18:03

But again, with that, by the way, why is it not responsive? It feels like a CB radio, you know, like, over... over. It's really annoying to use in that way. But it's also brilliant when it gives you the right answer. We are working on that.

SPEAKER_05

18:03 - 18:12

It's so clunky right now. It's slow. It kind of doesn't feel very smooth or authentic or organic. We'll get all of that to be much better.

SPEAKER_00

18:14 - 18:28

What about computer vision? I mean, you could have glasses, or maybe you could wear a pendant. You take the combination of visual or video data, combine it with voice, and now the AI knows everything that's happening around you.

SPEAKER_05

18:28 - 18:59

It's super powerful. The multimodality of saying, hey, ChatGPT, what am I looking at? Or, what kind of plant is this? I can't quite tell. That's obviously another hint, I think. But whether people want to wear glasses, or hold something up when they want that... there's a bunch of societal and interpersonal issues there that are all very complicated about wearing a computer on your face.

SPEAKER_02

18:59 - 19:05

We saw that with Google Glass. People got punched in the face in the Mission. It started a lot of fights.

SPEAKER_05

19:05 - 19:06

So I think it's like,

SPEAKER_00

19:08 - 19:22

What are the apps that could be unlocked if AI was sort of ubiquitous on people's phones? Do you have a sense of that or what would you want to see built?

SPEAKER_05

19:22 - 20:47

I think what I want is this always-on, super-low-friction thing, where, either by voice or by text, or ideally some other way, it just kind of knows what I want: this constant thing helping me throughout my day, that's got as much context as possible. It's like the world's greatest assistant, this thing working to make me better and better. When people talk about the AI future, they imagine there are sort of two different approaches, and they don't sound that different, but I think they're very different for how we'll design the system in practice. There's the "I want an extension of myself" version: I want a ghost or an alter ego, this thing that really is me, acting on my behalf, responding to emails, not even telling me about it. It sort of becomes more me, and is me. And then there's this other thing, which is, I want a great senior employee. It may get to know me very well. I may delegate to it. You know, it can have access to my email, and I'll tell it the constraints, but I think of it as this separate entity. I personally like the separate-entity approach better, and I think that's where we're going to head. And so in that sense,

SPEAKER_02

20:49 - 21:05

So the thing is not you, but it's an always-available, always-great, super-capable assistant... an executive agent, in a way, out there working on your behalf, that understands what you want and anticipates what you want. That's what I'm reading into what you're saying.

SPEAKER_05

21:05 - 21:45

I think there'd be agent-like behavior, but there's a difference between a senior employee and an agent. One of the things that I like about a senior employee is they'll push back on me. They will sometimes not do something I ask, or they'll sometimes say, I can do that thing if you want, but if I do it, here's what I think would happen, and then this, and then that, and are you really sure? I definitely want that kind of AI, not just this thing that I ask and it blindly does.

SPEAKER_02

21:45 - 21:47

It can reason. Yeah, and push back.

SPEAKER_05

21:47 - 21:56

It has the kind of relationship with me that I would expect out of a really competent person that I worked with, which is different from, like, a sycophant. Yeah.

SPEAKER_04

21:57 - 22:36

In that world, where you have this Jarvis-like thing that can reason, what do you think it does to products that you use today where the interface is very valuable? For example, if you look at an Instacart, or an Uber, or a DoorDash, these are not services that are meant to be pipes just providing a set of APIs to a smart set of agents that ubiquitously work on behalf of 8 billion people. What do you think has to change in how we think about how apps need to work, how this entire infrastructure of experiences needs to work, in a world where you're agentically interfacing with the world?

SPEAKER_05

22:36 - 23:38

I'm actually very interested in designing a world that is equally usable by humans and by AIs. I like the interpretability of that. I like the smoothness of the handoffs, the ability that we can provide feedback, or whatever. So, you know, DoorDash could just expose some API to my future AI assistant, and it could go put in the order, or whatever. Or I could be holding my phone and say, okay, AI assistant, you put in this order on DoorDash, please, and I could watch the app open and see the thing clicking around, and I could say, hey, no, not this one. There's something about designing a world that is usable equally well by humans and AIs that I think is an interesting concept. For the same reason, I'm more excited about humanoid robots than robots of very other shapes: the world is very much designed for humans, and I think we should absolutely keep it that way, and a shared interface is nice.

SPEAKER_02

23:39 - 23:50

So do you see voice chat, that modality, kind of getting rid of apps? You just ask it for sushi. It knows the sushi you liked before, it knows what you don't like, and it takes its best shot at ordering it.

SPEAKER_05

23:50 - 24:19

It's hard for me to imagine that we just go to a world where you totally say, hey, ChatGPT, order me sushi, and it says, okay, do you want it from this restaurant? What kind? What time? Whatever. I think visual user interfaces are super good for a lot of things. It's hard for me to imagine a world where you never look at a screen and just use voice mode only. But I can imagine that for a lot of things.

SPEAKER_02

24:20 - 24:29

I mean, Apple tried with Siri. Supposedly you can order an Uber automatically with Siri. I don't think anybody's ever done it, because why would you take the risk?

SPEAKER_04

24:29 - 24:45

Well, to your point, the quality is not good. But when the quality is good enough, you'll actually prefer it, just because it's lighter weight. You don't have to take your phone out. You don't have to search for your app and press it. Oh, it automatically logged you out. Oh, hold on, log back in. Oh, 2FA. It's a pain in the ass.

SPEAKER_05

24:45 - 25:12

You know, it's like setting a timer with Siri: I do it every time, because it works really well. But when it's great to have a lot more information, like ordering an Uber, I want to see the prices for a few different options. I want to see how far away it is. I want to see, maybe even, where they are on the map, because I might walk somewhere. I get a lot more information, I think in less time, by looking at that Uber ordering screen than I would if I had to do that all through the audio channel.

SPEAKER_02

25:12 - 25:15

I like the idea of watching it happen. That's kind of cool.

SPEAKER_05

25:15 - 25:22

I think there will just be, yeah, different interfaces we use for different tasks, and I think that'll keep going.

SPEAKER_04

25:22 - 25:38

Of all the developers that are building apps and experiences on OpenAI, are there a few that stand out for you, where you're like, okay, this is directionally going in a super interesting area, even if it's just a toy app? Are there things that you guys point to and say, this is really important?

SPEAKER_05

25:40 - 26:52

I met with a new company this morning, or barely even a company. It's, like, two people that are going to work on a summer project trying to actually, finally, make the AI tutor. I've always been interested in this space. A lot of people have done great stuff on our platform, but if someone can deliver the way that you actually... they used a phrase I love, which is, this is going to be like a Montessori-level reinvention for how people learn things. If you can find this new way to let people explore and learn in new ways on their own, I'm personally super excited about that. A lot of the coding-related stuff... you mentioned Devin earlier; I think that's a super cool vision of the future. Healthcare, I believe, should be pretty transformed by this. But the thing I'm personally most excited about is faster and better scientific discovery. GPT-4 is clearly not there in a big way, although maybe it accelerates things a little bit by making scientists more productive. But AlphaFold 3, yeah... that will be a triumph.

SPEAKER_03

26:52 - 27:17

Those models are trained and built differently than the language models, though. Obviously, there's a lot that's similar, but there's kind of a ground-up architecture to a lot of these models that are being applied to these specific problem sets, these specific applications, like chemistry interaction modeling, for example.

SPEAKER_05

27:19 - 27:31

You'll need some of that for sure, but the thing that I think we're missing, across the board for many of these things we've been talking about, is models that can do reasoning. And once you have reasoning, you can connect it to chemistry simulators.

SPEAKER_03

27:31 - 28:25

So I guess that's the important question I wanted to talk about today: this idea of networks of models. People talk a lot about agents as if there's this linear set of function calls that happen. But one of the things that arises in biology is networks of subsystems that have cross-interactions, where the aggregation of the system, the aggregation of the network, produces an output, rather than one thing calling another, and that thing calling another. Do we see an emergence, in this architecture, of either specialized models or networked models that work together to address bigger problem sets, using reasoning, with computational models that do things like chemistry or arithmetic, and other models that do other things, rather than one model to rule them all that's purely generalized?

SPEAKER_05

28:25 - 28:43

I don't know. I don't know how much reasoning is going to turn out to be a super generalizable thing. I suspect it will. But that's more just an intuition and a hope, and it would be nice if it worked out that way. I don't know if that's...

SPEAKER_03

28:46 - 29:11

But let's walk through the protein modeling example. There's a bunch of training data, images of proteins, and then sequence data, and they build a predictive model, and they have a set of processes and steps for doing that. Do you envision that there's this artificial general intelligence, or this great reasoning model, that then figures out how to build that sub-model, that figures out how to solve that problem by acquiring the necessary data?

SPEAKER_05

29:12 - 29:26

And then solving for it? There are so many ways where that could go. Like, maybe it trains a literal model for it, or maybe the one big model just knows what other training data it needs to go pick, can ask a question, and then update on that.

SPEAKER_03

29:28 - 29:43

I guess the real question is, are all these startups going to die? Because so many startups are working in that modality, which is: go get special data, then train a new model on that special data from the ground up, and then it only does that one sort of thing, and it works really well at that one thing, better than anything else.

SPEAKER_05

29:43 - 30:52

You know, there's a version of this I think you can already see. When you were talking about biology and these complicated networks of systems... the reason I sighed is that I got super sick recently. I'm mostly better now, but it was just like the body got beat up, one system at a time. You can really tell, okay, it's this cascading thing. And that reminded me of what you were saying about biology: you have no idea how much these systems interact with each other until things start going wrong. That was sort of interesting to see. But I was using ChatGPT to try to figure out what was happening, whatever, and it would say, well, I'm unsure of this one thing. And then I just posted a paper on it into the context, without even reading the paper. And it says, oh, that was the thing I was unsure of; now I think this instead. So that was a small version of what you're talking about, where the model can say, I don't know this thing, and you can put in more information, and you don't retrain the model: you just add it to the context. And that's what you're getting there.
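What Sam describes here is in-context learning: rather than retraining the model's weights, you append the new document to the conversation and the model revises its answer on the fly. A minimal sketch of that pattern, assuming the openai Python client and an OPENAI_API_KEY in the environment; the model name, file name, and messages are placeholder illustrations, not the actual exchange:

```python
# Sketch: update a model's answer by adding a paper to the context,
# instead of retraining. Assumes `pip install openai` and OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()

# Hypothetical paper the model said it was unsure about.
paper_text = open("paper.txt").read()

messages = [
    {"role": "user", "content": "What could cause these cascading symptoms?"},
    {"role": "assistant", "content": "I'm unsure about one mechanism here..."},
    # No gradient updates: the new information just goes into the context window.
    {"role": "user", "content": (
        "Here is a paper that may help:\n\n" + paper_text +
        "\n\nDoes this change your answer?"
    )},
]

response = client.chat.completions.create(model="gpt-4o", messages=messages)
print(response.choices[0].message.content)
```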

SPEAKER_03

30:52 - 31:28

So these models that are predicting protein structure, let's say... that's the whole basis, and now other molecules with AlphaFold 3. Is it basically a world where the best generalized model goes in, gets that training data, and then figures it out on its own? And maybe you could use an example for us: can you tell us about Sora, your video model that generates amazing moving images, moving video? What's different about the architecture there, whatever you're willing to share, and how is that different?

SPEAKER_05

31:28 - 32:23

Yeah. On the general thing first: you clearly will need specialized simulators, connectors, pieces of data, whatever. But my intuition, and again, I don't have this backed up with science, my intuition is that if we can figure out the core of generalized reasoning, connecting that to new problem domains, in the same way that humans are generalized reasoners, would, I think, be doable. A faster unlock? A faster unlock, I think so. But yeah, Sora does not start with a language model. It's a model that is customized to do video. So we're clearly not at that world yet.

SPEAKER_03

32:24 - 32:46

Right. So just as an example: for you guys to build a good video model, you built it from scratch using, I'm assuming, some different architecture and different data. But in the future, the generalized reasoning system, the AGI, whatever system, theoretically could get there by figuring out how to do it.

SPEAKER_05

32:46 - 32:58

Yeah. I mean, one example of this is, as far as I know, all the best text models in the world are still autoregressive models, and the best image and video models are diffusion models. And that's strange in some sense.
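For context on the distinction being drawn: autoregressive models generate output one token at a time, each step conditioned on everything produced so far, while diffusion models start from pure noise and iteratively refine the entire output at once. A toy sketch of the two sampling loops; text_model and denoiser are hypothetical stand-ins, not any real model's API:

```python
import torch

def sample_autoregressive(text_model, prompt_ids, n_new_tokens):
    """Text models: predict one token at a time, left to right."""
    ids = list(prompt_ids)
    for _ in range(n_new_tokens):
        logits = text_model(torch.tensor([ids]))[0, -1]  # next-token logits
        next_id = int(torch.argmax(logits))              # greedy decode
        ids.append(next_id)                              # condition on it next step
    return ids

def sample_diffusion(denoiser, shape, n_steps):
    """Image/video models: start from noise, refine the whole output at once."""
    x = torch.randn(shape)                               # e.g. (3, 64, 64) image
    for t in reversed(range(n_steps)):
        noise_estimate = denoiser(x, t)                  # predict noise at step t
        x = x - noise_estimate / n_steps                 # crude denoising update
    return x
```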

SPEAKER_02

32:59 - 34:11

Yeah. So there's a big debate about training data. You guys have been, I think, the most thoughtful of any company. You've got licensing deals now, the FT, et cetera. And we've got to be gentle here, because you're involved in the New York Times lawsuit; you weren't able to settle, I guess, an arrangement with them for training data. How do you think about fairness in fair use? We've had big debates here on the pod. Obviously, your actions speak volumes: you're trying to be fair by doing licensing deals. So what's your personal position on the rights of artists who create beautiful music, lyrics, books, and you taking that, making a derivative product out of it, and then monetizing it? What's fair here? And how do we get to a world where artists can make content and then decide what they want other people to do with it? I'm just curious what you think personally, because I know you to be a thoughtful person on this, and I know a lot of other people in our industry are not very thoughtful about how they think about content creators.

SPEAKER_05

34:11 - 36:56

So I think it's very different for different kinds of... I mean, on fair use, I think we have a very reasonable position under the current law. But I think AI is so different that, for things like art, we'll need to think about it in different ways. Say you go read a bunch of math on the internet and learn how to do math: that, I think, seems unobjectionable to most people. Then there's another set of people who might have a different opinion about... well, I'll get into that in a way that makes this answer too long. So I think there's one category, generalized human knowledge, which is kind of open domain or something: you go learn about the Pythagorean theorem. That's one end of the spectrum. And I think the other extreme end of the spectrum is art, and maybe even more specifically, a system generating art in the style or the likeness of another artist would be the furthest end of that. And then there are many, many cases on the spectrum in between. I think the conversation has been historically very caught up on training data, but it will increasingly become more about what happens at inference time, as training data becomes less valuable, and about what the system does accessing information in context, in real time. What happens at inference time, and what the new economic model is there, will become more debated. So if you say, create me a song in the style of Taylor Swift, even if the model were never trained on any Taylor Swift songs at all, you can still have a problem, which is it may have read about Taylor Swift, it may know about her themes: Taylor Swift means something. And then the question is, should that model, even if it were never trained on any Taylor Swift song whatsoever, be allowed to do that? And if so, how should Taylor get paid? So I think there's an opt-in, opt-out in that case, first of all, and then there's an economic model. Staying on the music example, there is something interesting to look at from the historical perspective here, which is sampling, and how the economics around that work. It's not quite the same thing, but it's an interesting place to start looking.

SPEAKER_03

36:56 - 37:44

Sam, let me just challenge that. What's the difference between the example you're giving, of the model learning about things like song structure, tempo, melody, harmony relationships, discovering all the underlying structure that makes music successful, and then building new music using training data, and what a human does who listens to lots of music, learns about it, and whose brain is processing and building all those same sorts of predictive models, those same sorts of discoveries or understandings? What's the difference here? And why are you making the case that perhaps artists should be uniquely paid? This is not a sampling situation. The AI is not outputting, and it's not storing in the model, the actual original song. It's learning structure, right?

SPEAKER_05

37:44 - 37:54

I wasn't trying to make that point, because I agree: in the same way that humans are inspired by other humans... I was saying, it's if you say, generate me a song in the style of Taylor Swift.

SPEAKER_03

37:54 - 37:58

I see, right. Okay. That's where the prompt leverages some artist.

SPEAKER_05

37:58 - 38:00

But I think personally, that's a different case.

SPEAKER_03

38:00 - 38:25

Would you be comfortable letting a music model be trained on the whole corpus of music that humans have created, without royalties being paid to the artists whose music is being fed in, and then you're not allowed to use artist-specific prompts? You could just say, hey, play me a really cool pop song that's fairly modern, about heartbreak.

SPEAKER_05

38:25 - 39:35

With a female voice... You know, we have currently made the decision not to do music, and partly because of exactly these questions of where you draw the lines. Even... I was meeting with several musicians I really admire recently, and I was just trying to talk about some of these edge cases. Even in the world where, let's say, we paid 10,000 musicians to create a bunch of music just to make a great training set, where the music model could learn everything about song structure and what makes a good, catchy beat and everything else, and we only trained on that... let's say we could still make a great music model, which maybe we could. I was kind of posing that as a thought experiment to musicians, and they were like, well, I can't object to that on any principled basis at that point, and yet there's still something I don't like about it. Now, that's not a reason not to do it, necessarily, but... Did you see that ad that Apple put out, maybe yesterday or something, of squishing all of human creativity down into one really thin iPad?

SPEAKER_02

39:35 - 39:42

What was your take on it? People got really emotional about it. Yeah, yeah.

SPEAKER_05

39:42 - 40:17

There's something... I'm obviously hugely positive on AI, but there is something that I think is beautiful about human creativity and human artistic expression. For an AI that just does better science: great, bring that on. But an AI that is going to do this deeply beautiful human creative expression? I think we should figure that out. It's going to happen. It's going to be a tool that will lead us to greater creative heights. But I think we should figure out how to do it in a way that preserves the spirit of what we all care about here.

SPEAKER_02

40:18 - 40:49

And I think your actions speak loudly. We were trying to do Star Wars characters in DALL-E, and if you ask for Darth Vader, it says, hey, we can't do that. So I guess you red-teamed it, or whatever you call it internally. We try. Yeah, you're not allowing people to use other people's IP, so you've taken that decision. Now, if you ask it to make a Jedi Bulldog or a Sith Lord Bulldog, which I did, it made my Bulldogs as Sith Bulldogs. So there's an interesting question about that vector, right?

SPEAKER_05

40:49 - 41:25

Yeah. You know, we put out this thing yesterday called the Spec, where we're trying to say, here is how our model is supposed to behave. It's very hard. It's a long document. It's very hard to specify exactly, in each case, where the limits should be. And I view this as a discussion that's going to need a lot more input. But these sorts of questions, about, okay, maybe it shouldn't generate Darth Vader, but the idea of a Sith Lord, or a Sith-style thing, or a Jedi, at this point, is part of the culture... these are all hard decisions.

SPEAKER_02

41:25 - 41:52

Yeah. And I think you're right. The music industry is going to consider this opportunity to make Taylor Swift-style songs its opportunity. Part of the four-part fair use test is, you know, who gets to capitalize on new innovations for existing art. And Disney has an argument that, hey, if you're going to make Sora versions of a show, or whatever, of Obi-Wan Kenobi, that's Disney's opportunity. And that's a great partnership for you to pursue.

SPEAKER_04

41:54 - 42:09

So, I think this section I would label as AI and the law. Let me ask maybe a higher-level question: what does it mean when people say "regulate AI"? Sam, what does that even mean?

SPEAKER_03

42:09 - 42:14

And comment on California's new proposed regulations as well, if you're up for it.

SPEAKER_05

42:15 - 43:30

I'm concerned. I mean, there are so many proposed regulations, but most of the ones I've seen on the California state level, I'm concerned about. I also have a general fear of the states all doing this themselves. When people say "regulate AI," I don't think they mean one thing. Some people would ban the whole thing. Some people would say, don't allow it to be open source, or, require it to be open source. The thing that I am personally most interested in is... look, I may be wrong about this. I will acknowledge that this is a forward-looking statement, and those are always dangerous to make. But I think there will come a time, in the not-super-distant future, like, you know, we're not talking decades and decades from now, where frontier AI systems are capable of causing significant global harm. And for those kinds of systems, in the same way we have global oversight of nuclear weapons, or synthetic bio, or things that can really have a very negative impact way beyond the realm of one country, I would like to see some sort of international agency that is looking at the most powerful systems and ensuring reasonable safety testing: that these things are not going to escape and recursively self-improve, or whatever.

SPEAKER_02

43:31 - 43:56

The criticism of this is that you have the resources to cozy up, to lobby, to be involved, and you've been very involved with politicians, while startups, which you're also passionate about and invest in, are not going to have the resources to deal with this, and that this is regulatory capture, as per our friend Bill Gurley, who did a great talk last year about it. Maybe you could address that head-on.

SPEAKER_05

43:56 - 44:12

You know, if the line is that we're only going to look at models that are trained on computers that cost more than $10 billion, or more than $100 billion, or whatever, I'd be fine with that. There'd be some line that'd be fine, and I don't think that puts any regulatory burden on startups.

SPEAKER_02

44:12 - 44:26

So it's like, if you have the raw material to make a nuclear bomb: there's a small subset of people who have that, therefore you use the analogy of nuclear inspectors in that situation. Yeah. That's interesting. Sacks, do you have a question?

SPEAKER_00

44:26 - 44:27

Well, Chamath, go ahead if you had a follow-up.

SPEAKER_05

44:29 - 45:07

Can I say one more thing about that? Of course. I'd be super nervous about regulatory overreach here. I think we can get this wrong by doing way too much, or even a little too much. I think we can get this wrong by doing not enough, too. And, I mean, we have seen regulatory overstepping, or capture, just get super bad in other areas. Also, you know, maybe nothing will happen. But I think it is part of our duty and our mission to talk about what we believe is likely to happen and what it takes to get that right.

SPEAKER_03

45:08 - 46:12

The challenge, Sam, is that we have statute that is meant to protect people, to protect society at large. What we're creating, however, is statute that gives the government rights to go in and audit code, to audit business trade secrets. We've never seen that to this degree before. Basically, the California legislation that's proposed, and some of the federal legislation that's been proposed, requires the government to audit a model, to audit software, to audit and review the parameters and the weightings of the model, and then you need their checkmark in order to deploy it for commercial or public use. And for me, it just feels like, out of fear, because folks have a hard time understanding this and are scared about its implications, they want to rein it in through government agencies, and the only way they see to control it is to say, give me a right to audit before you can release it.

SPEAKER_00

46:12 - 46:14

And there's a playlist.

SPEAKER_03

46:14 - 46:21

I mean, the way this stuff is written, you read it and you're going to pull your hair out, because, as you know better than anyone, in 12 months none of this stuff is going to make sense anyway.

SPEAKER_05

46:21 - 46:55

Totally, right. That's the reason I have pushed for an agency-based approach, for the big-picture stuff, rather than writing it in laws: in 12 months, it will all be written wrong. I don't think, even if these people were true world experts, that they could get it right looking out 12 or 24 months. And these policies, which say, we're going to audit all of your source code and look at all of your weights one by one... yeah, I think there are a lot of crazy proposals out there.

SPEAKER_03

46:55 - 46:59

By the way, especially if the models are being retrained all the time, if they become more dynamic.

SPEAKER_05

46:59 - 47:11

Again, this is why I think... yeah. But before an airplane gets certified, there's a set of safety tests we put the airplane through. Totally. And it's different than reading all of your code.

SPEAKER_03

47:11 - 47:17

That's reviewing the output of the model, not viewing the inside of the model.

SPEAKER_05

47:17 - 47:24

And so what I was going to say is, that is the kind of thing that I think makes sense as safety testing.

SPEAKER_03

47:24 - 47:51

How are we going to get that to happen, Sam? And I'm not asking you to speak just for OpenAI; speak for the industry, for humanity, because I am concerned that we drive ourselves into almost a dark-ages type of era by restricting the growth of these incredible technologies that humanity can prosper from so significantly. How do we change the sentiment and get that to happen? Because this is all moving so quickly at the government levels, and folks seem to be getting it wrong, and I'm just a bit concerned.

SPEAKER_04

47:51 - 48:11

Just to build on that, Sam: the architectural decision, for example, that Llama took is pretty interesting, in that it's like, we're going to let Llama grow and be as unfettered as possible, and we have this other kind of thing that we call Llama Guard that's meant to be these protective guardrails. Is that how you see the problem being solved correctly?

SPEAKER_05

48:11 - 49:54

At the current strength of the models? Definitely some things are going to go wrong, and I don't want to make light of those or not take those seriously. But I don't have any catastrophic-risk worries with a GPT-4-level model, and I think there are many safe ways to choose to deploy it. Maybe we'd find more common ground if we talked about the specific example of models that are capable, that are technically capable, even if they're not going to be used this way, of recursive self-improvement, or of autonomously designing and deploying a bioweapon, or something like that. Or designing a new model: that was the recursive point. We should have safety testing on the outputs, at an international level, for models that have a reasonable chance of posing a threat there. I don't think GPT-4, of course, poses... well, I don't want to say "any sort," because we don't know... I don't think GPT-4 poses a material threat on those kinds of things, and I think there are many safe ways to release a model like this. But when significant loss of human life is a serious possibility, like with airplanes, or any number of other examples, I think we're happy to have some sort of testing framework. I don't think about an airplane when I get on it; I just assume it's going to be safe.

SPEAKER_03

49:54 - 49:55

Right.

SPEAKER_02

49:55 - 50:04

There's a lot of hand-wringing right now, Sam, about jobs. And you did, I think, some sort of a study when you were at YC, about UBI.

SPEAKER_05

50:04 - 50:15

We have results on that coming out very soon. It was a five-year study that wrapped up, or, well, started five years ago. There was a beta study first, and then there was the long one that ran.

SPEAKER_03

50:15 - 50:19

When did you start it? Why did you start it? Maybe just explain UBI and why you started it.

SPEAKER_05

50:21 - 52:24

So we started thinking about this in 2016, kind of around the same time we started taking AI really seriously. And the theory was that the magnitude of the change that may come to society, and jobs, and the economy, and, in some deeper sense than that, to what the social contract looks like, meant that we should have many studies on many ideas about new ways to arrange that. I also think that, you know, I'm not a super fan of how the government has handled most policies designed to help poor people, and I kind of believe that if you could just give people money, they would make good decisions and the market would do its thing. I'm very much in favor of lifting up the floor and reducing, eliminating, poverty, but I'm interested in better ways to do that than what we have tried with the existing social safety net and the way things have been handled. And I think giving people money is not going to solve all problems; it's certainly not going to make people happy. But it might solve some problems, and it might give people a better horizon with which to help themselves. And I'm interested in that. Now that we see some of the ways that AI is developing, though... 2016 was a very long time ago... I wonder if there are better things to do than the traditional conceptualization of UBI. Like, I wonder if the future looks something more like universal basic compute than universal basic income: everybody gets a slice of GPT-7's compute, and they can use it, they can resell it, they can donate it to somebody to use for cancer research. What you get is not dollars but this productivity slice: you own part of the productivity.

SPEAKER_04

52:24 - 52:33

Right. I would like to shift to the gossip part of this. Okay. Awesome. What gossip? That was good. Let's go back to November. What? The firing.

SPEAKER_05

52:33 - 52:41

Yeah. Um, you know, if you have specific questions, I'm happy to.

SPEAKER_02

52:41 - 52:55

Maybe. You said you would talk about it at some point. So here's the moment. What happened? You were fired, you came back, and it was palace intrigue. Did somebody stab you in the back? Did you find AGI? What's going on?

SPEAKER_05

52:55 - 53:49

This is a safe space, yeah. I was fired. I talked about coming back. I kind of was a little bit unsure, in the moment, about what I wanted to do, because I was very upset. But I realized that I really loved OpenAI and the people, and that I would come back, and I knew it was going to be hard. It was even harder than I thought, but I kind of was like, all right, fine, I agreed to come back. The board took a while to figure things out. And then, you know, we were kind of trying to keep the team together, and keep doing things for our customers, and, you know, sort of started making other plans. Then the board decided to hire a different interim CEO, and then everybody... there were many people... oh, my gosh.

SPEAKER_04

53:49 - 53:55

What was that guy's name? He was there for, like, a Scaramucci, right?

SPEAKER_05

53:55 - 54:01

Emmett. And I have nothing but good things to say about Emmett.

SPEAKER_02

54:01 - 54:06

And where were you when you found out the news that you'd been fired? Like, where were you in that moment?

SPEAKER_05

54:06 - 54:09

I was in a hotel room in Vegas for F1 weekend.

SPEAKER_02

54:09 - 54:12

That's so unfortunate. You get texted, and they're like... what did it say?

SPEAKER_00

54:12 - 54:14

I think that's happened to you before, J-Cal.

SPEAKER_02

54:16 - 54:22

I'm trying to think if I ever got fired. I don't think I have. Yeah, exactly. No, it's just, what were you thinking? Like, it's a text from who?

SPEAKER_05

54:22 - 54:41

Actually, no, I got a text the night before. And then I got on a phone call with the board, and then that was that. And then, I mean, everything went crazy. My phone was unusable; it was just a nonstop vibrating thing with texts and calls.

SPEAKER_00

54:41 - 54:52

You got fired by tweet. That happened a few times during the Trump administration, to a few cabinet officials.

SPEAKER_05

54:52 - 55:51

And then, you know, I did a few hours of just this absolute fugue state in the hotel room, confused beyond belief, trying to figure out what to do. And then I flew home, got home maybe, I don't know, 3 PM or something like that. Still just crazy, nonstop, phone blowing up. Met up with some people in person. By that evening, I was like, okay, I'll just go do AGI research, and I was feeling pretty happy about the future. Yeah, you have options. And then the next morning I had this call with a couple of board members about coming back, and that led to a few more days of craziness. And then it kind of got resolved. Well, it was a lot of insanity in between.

SPEAKER_00

55:51 - 55:57

Well, what percent of it was because of these nonprofit board members?

SPEAKER_05

55:57 - 56:12

Well, we only have a nonprofit board, so it was all nonprofit board members. The board had gotten down to six people. They then removed Greg from the board, and then fired me.

SPEAKER_00

56:12 - 56:29

But was there a culture clash between the people on the board who had only nonprofit experience versus the people who had startup experience? And maybe you can share a little bit, if you're willing, about the motivation behind the action, anything you can.

SPEAKER_05

56:29 - 57:35

I think there have always been culture clashes. Look, obviously not all of those board members are my favorite people in the world, but I have serious respect for the gravity with which they treat AGI and the importance of getting AI safety right. And even if I stringently disagree with their decision-making and actions, which I do, I have never once doubted their integrity or commitment to the sort of shared mission of safe and beneficial AGI. Do I think they made good decisions in the process, or know how to balance all the things OpenAI has to get right? No. But I do think they were genuine about the magnitude of AGI and getting that right.

SPEAKER_00

57:36 - 58:08

Sam, let me ask you about that. So the mission of OpenAI is explicitly to create AGI, which I think is really interesting. A lot of people would say that if we create AGI, that would be an unintended consequence of something going horribly wrong, and they're very afraid of that outcome. But OpenAI makes that the actual mission. Does that create more fear about what you're doing? I mean, I understand it can create motivation too, but how do you reconcile that? I guess, why is that the mission?

SPEAKER_05

58:08 - 58:58

Well, I mean, first I'll say, I'll answer the first question, then the second one. I think it does create a great deal of fear. I think a lot of the world is, understandably, very afraid of AGI, or very afraid of even current AI, and very excited about it, and even more afraid and even more excited about where it's going. And we wrestle with that. But I think it is unavoidable that this is going to happen. I also think it's going to be tremendously beneficial. But we do have to navigate how to get there in a reasonable way. And a lot of stuff is going to change, and change is pretty uncomfortable for people. So there are a lot of pieces we have to get right. Can I ask a different question?

SPEAKER_04

58:58 - 59:19

You, you have created, I mean, it's the hottest company, and you are literally at the center of the center of the center. But then it's so unique in the sense that none of all this value accrues to you economically. Can you just, like, walk us through that?

SPEAKER_05

59:19 - 59:22

I wish I had taken equity, so I never had to answer this question.

SPEAKER_00

59:22 - 59:28

If I could go back in time... I want to give you a grant. Hell, why doesn't the board just give you a big option grant, like you deserve?

SPEAKER_04

59:28 - 59:33

Yeah, give you five points. What was the decision back then? Like, why was that such an important decision back then?

SPEAKER_05

59:33 - 59:53

The original reason was just the structure of our nonprofit. There was something about, yeah, okay, this is nice from a motivations perspective, but mostly it was that our board needed to be a majority of disinterested directors. And I was like, that's fine, I don't need equity right now.

SPEAKER_04

59:55 - 01:00:07

But, like, now that you're running a company, it creates these weird questions of, like, well, what's your real motivation?

SPEAKER_05

01:00:07 - 01:00:13

One thing I have noticed, it is so deeply unimaginable to people to say, I don't really need more money.

SPEAKER_04

01:00:14 - 01:00:20

And I think people think it's a little bit of an ulterior motive. Yeah, yeah, yeah.

SPEAKER_05

01:00:20 - 01:00:21

No, it's that it assumes...

SPEAKER_03

01:00:21 - 01:00:24

Like, what are you doing on the side to make money?

SPEAKER_05

01:00:24 - 01:00:34

If I were just trying to say, like, I'm going to try to make a trillion dollars with OpenAI, I think everybody would have an easier time, and it would save me a lot of conspiracy theories.

SPEAKER_02

01:00:34 - 01:01:25

This is totally the back channel. You are a great dealmaker. I've watched your whole career; I mean, you're just great at it. You've got all these connections, you're really good at raising money, you're fantastic at it. And you've got this Johnny Ive thing going, you're in Humane, you're investing in companies, you're raising $7 trillion to build fabs, all this stuff. I'm being a little facetious here; you know, obviously you're not raising $7 trillion, but maybe that's the market cap. So, putting all that aside, the tea was: you're doing all these deals, and they don't trust you, because, what's your motivation, are you end-running which opportunities belong inside of OpenAI and which should be Sam's, and this group of nonprofit people didn't trust you. Is that what happened?

SPEAKER_05

01:01:26 - 01:01:36

So the things like device companies, or if we were doing some chip fab company, those are not Sam projects. Those would be OpenAI's; OpenAI would get that equity.

SPEAKER_02

01:01:36 - 01:01:37

Oh, it would? Okay.

SPEAKER_05

01:01:37 - 01:02:13

That's not the public's perception. Well, it's not the perception of the people like you who have to commentate on this stuff all day, which is fair, because we haven't announced this stuff, because it's not done. I don't think most people in the world are thinking about this, but I agree it spins up a lot of conspiracy theories among tech commentators. And if I could go back, yeah, I would just say, let me take equity and make that super clear, and then I'd still be doing it, because I really care about AGI and think this is the most interesting work in the world, but at least everybody could check that box. What's the chip project?

SPEAKER_02

01:02:14 - 01:02:17

That's the seven-trillion-dollar one. Where did the seven trillion number even come from? It makes no sense.

SPEAKER_05

01:02:17 - 01:02:36

I don't know where that came from, actually. I genuinely don't. I think the world needs a lot more AI infrastructure, a lot more than it's currently planning to build, and with a different cost structure. The exact way for us to play there, we're still trying to figure out.

SPEAKER_04

01:02:36 - 01:02:52

What's your preferred model of organizing OpenAI? Is it sort of the move-fast-and-break-things, highly distributed small teams? Or is it more of an organized effort, where you need to plan because you want to prevent some of these edge cases?

SPEAKER_05

01:02:52 - 01:03:41

Oh, I have to go in a minute. It's not that we need to be more organized to prevent the edge cases; it's that these systems are so complicated, and concentrating bets is so important. You know, at the time, before it was obvious that this was the way to do it, you had DeepMind or whoever with all these different teams doing all these different things, spreading their bets out. And OpenAI said, we're going to basically have the whole company work together to make GPT-4. That was unimaginable as a way to run an AI research lab, but I think it works. At least, it works for us. So, not because we're trying to prevent edge cases, but because we want to concentrate resources and do these big, hard, complicated things, we do have a lot of coordination on what we work on.

SPEAKER_02

01:03:42 - 01:03:50

All right, Sam, I know you've got to go. You've been great with the hour. Come back anytime. Great talking to you guys. Yeah, thanks for being so open about it.

SPEAKER_03

01:03:50 - 01:03:54

We've been talking about it for, like, a year plus. I'm really happy it finally happened. Yeah, it's awesome.

SPEAKER_05

01:03:54 - 01:03:59

I would really love to come back on after our next, like, major launch. I'll be able to talk more directly about it.

SPEAKER_02

01:03:59 - 01:04:08

Yeah, well, you've got the Zoom link. Same Zoom every week. Same time, same Zoom. Just drop in, put it on your calendar. Come back to the game.

SPEAKER_04

01:04:08 - 01:04:09

Come back to the game.

SPEAKER_03

01:04:09 - 01:04:09

Yeah, come back to the game.

SPEAKER_05

01:04:11 - 01:04:15

I, you know, I would love to play poker. It has been forever. That would be a lot of fun.

SPEAKER_02

01:04:15 - 01:04:57

Yeah. That's a famous hand, Chamath, when Sam and I were heads up. You and I were heads up and you went all in. I had a set, but there was a straight and a flush possible on the board. And I'm in the tank trying to figure out if I want to lose this pot. We played pretty small stakes; it might have been like a 5K pot or something. And then Chamath can't stay out of the pot, and he starts prodding the two of us: you should call, you shouldn't call. He's needling me. And I'm trying to figure out if I make the call here. I make the call. And it was like, uh, you had a really good hand; I just happened to have a set. I think you had top pair, top kicker or something. But you made a great move, because the board was so textured I was almost like, oh, bottom set.

SPEAKER_04

01:04:57 - 01:05:02

Sam has a great style of playing, which I would call random gen. Totally. You've got to just call him.

SPEAKER_05

01:05:02 - 01:05:06

I don't really know if you, I don't, I don't know if you could say that about anybody.

SPEAKER_03

01:05:06 - 01:05:10

I don't, I'm not going to. You haven't seen J-Cal playing the last 18 months. It's a lot different.

SPEAKER_02

01:05:10 - 01:05:13

I kind of have a game now, much more solid. So much fun now.

SPEAKER_01

01:05:13 - 01:05:16

Oh, you'll find out. Did you find it really hard before?

SPEAKER_02

01:05:16 - 01:05:22

I don't know what that is. I don't know what that is. Okay, you'll like it. Yeah, it's big.

SPEAKER_04

01:05:22 - 01:05:26

Yeah, we can get a game going. And congrats on everything, honestly.

SPEAKER_00

01:05:26 - 01:05:32

Thank you. Thanks for coming on. We'll have you back after the next big launch. Please do.

SPEAKER_02

01:05:32 - 01:05:46

Bye. Gentlemen, some breaking news here: all those projects, he said, are part of OpenAI. That's something people didn't know before this, and there was a lot of confusion there. Chamath, what is your major takeaway from our hour with Sam?

SPEAKER_04

01:05:46 - 01:06:47

I think that these guys are going to be one of the four major companies that matter in this whole space. I think that's clear. I think what's still unclear is where the economics are going to be. He said something very subtle, but I thought it was important, which is, my interpretation is: these models will roughly all be the same, but there's going to be a lot of scaffolding around these models that actually allows you to build these apps. So in many ways, that is like the open source movement. Even if the model itself is never open source, it doesn't much matter, because you have to pay for the infrastructure, right? There's a lot of open source software that runs on Amazon; you still pay AWS something. So I think the right way to think about this now is: the models will basically all be really good, and then it's all this other stuff that you'll have to pay for. Whoever builds the interface and all this other stuff is going to be in a position to build a really good business.

SPEAKER_02

01:06:48 - 01:06:57

Freeberg, he talked a lot about reasoning. It seemed like he kept going to reasoning and away from the language model. Did you note that, and anything else, in our hour with him?

SPEAKER_03

01:06:57 - 01:07:15

Yeah, I mean, that's a longer conversation, because there is a lot of talk about language models eventually evolving to be so generalizable that they can resolve pretty much all intelligent function, and so the language model is the foundational model that underlies everything. But...

SPEAKER_04

01:07:15 - 01:08:11

I think there are a lot of people with different schools of thought on this. My other takeaway is that what he also seemed to indicate is: we're all so enraptured by LLMs, but there are so many things other than LLMs being baked and rolled out by him and by other groups, and I think we have to pay some amount of attention to all of those. Because that's probably where, and Freeberg, I think you tried to go there in your question, that's where reasoning will really come from: this mixture-of-experts approach. You're going to have to think multi-dimensionally to reason, right? We do that. Do I cross the street or not at this point in time? You reason based on all these multiple inputs. There are all these little systems that go into making that decision in your brain. And if you use that as a simple example, there's all this stuff that has to go into making something that can reason intelligently.
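To make the mixture-of-experts idea above concrete, here is a minimal toy sketch: a few small "expert" networks each process the same input, and a learned gate decides how much weight each expert's output gets in the blend. Every name and shape here is an illustrative assumption for this sketch, not a description of any lab's actual architecture.

```python
# A minimal mixture-of-experts sketch (illustrative only; not OpenAI's or
# anyone else's architecture): several tiny experts score an input, and a
# softmax gate decides how much each expert contributes to the output.
import numpy as np

rng = np.random.default_rng(0)

def make_expert(in_dim, out_dim):
    W = rng.normal(size=(in_dim, out_dim))
    return lambda x: np.tanh(x @ W)  # each expert is a tiny one-layer net

experts = [make_expert(4, 2) for _ in range(3)]
W_gate = rng.normal(size=(4, 3))     # gate maps input -> one logit per expert

def moe_forward(x):
    logits = x @ W_gate
    gate = np.exp(logits) / np.exp(logits).sum()   # softmax over experts
    outputs = np.stack([e(x) for e in experts])    # (n_experts, out_dim)
    return gate @ outputs                          # gate-weighted blend

print(moe_forward(rng.normal(size=4)))
```

The design point the sketch illustrates: each expert can specialize, and the gate routes, or blends, per input, which is one way a system can combine many small subsystems into a single decision.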

SPEAKER_02

01:08:11 - 01:08:33

Sacks, you went right there with the corporate structure, the board, and he gave us a lot more information here. What are your thoughts on the, hey, you know, the chip stuff and the other stuff I'm working on, that's all part of OpenAI, people just don't realize it, moment? And then, you know, your questions to him about equity. Your thoughts?

SPEAKER_00

01:08:33 - 01:09:21

Well, I was the one who asked that question. You know, we talked about the nonprofit, the difference between the nonprofit and startup worlds, and there was clearly some sort of culture clash on the board between the people who originated from the nonprofit world and the people who came from the startup world. We don't really know much more than that, but there was clearly some sort of culture clash. I thought a couple of the other areas he drew attention to were kind of interesting. He clearly thinks there's a big opportunity on mobile that goes beyond just having, you know, a ChatGPT app on your phone, or maybe even having a Siri on your phone. There's clearly something bigger there. He doesn't know exactly what it is, but it's going to require more inputs: a personal assistant that sees everything around you.

SPEAKER_02

01:09:21 - 01:09:30

I think that's a great insight, David, because he was talking about, hey, I'm looking for a senior team member who can push back and who understands all the context.

SPEAKER_00

01:09:30 - 01:09:44

I thought that was, like, a very interesting thing to think about: an executive assistant, or an assistant that has executive function, as opposed to being just an alter ego for you, or what he called a sycophant. That's kind of interesting.

SPEAKER_02

01:09:44 - 01:09:45

I thought that was interesting.

SPEAKER_00

01:09:45 - 01:09:50

Yeah. And clearly, he thinks there's a big opportunity in biology and scientific discovery.

SPEAKER_03

01:09:50 - 01:09:53

After the break, I think we should talk about AlphaFold 3. It was just announced.

SPEAKER_02

01:09:53 - 01:10:11

Yeah, let's do that. And we can talk about the Apple ad in depth. I just want to also make sure people understand: when people come on the pod, we don't show them questions, they don't edit the transcript, nothing is out of bounds. If you were wondering why I didn't ask, or we didn't ask, about the Elon lawsuit, he's just not going to be able to comment on that. So it would be a no comment.

SPEAKER_00

01:10:11 - 01:10:17

And that's why. Like, our time was limited, and there were a lot of questions we could have asked him that would have just been a waste of time.

SPEAKER_02

01:10:17 - 01:10:19

And they would have been. I just want to make sure people understand.

SPEAKER_00

01:10:19 - 01:10:24

Yeah, of course he's going to no-comment the lawsuit, and he's already been asked about that 500 times.

SPEAKER_03

01:10:24 - 01:10:27

Yeah, let's take a quick break before we come back.

SPEAKER_02

01:10:27 - 01:10:59

Yeah, take a bio break, and then we'll come back with some news for you and some more banter with your favorite besties on the number one podcast in the world, the All-In Podcast. All right, welcome back, everybody. Second half of the show. Great guest, Sam Altman, coming on the pod. We've got a bunch of news on the docket, so let's get started. Freeberg, you told me I could give some names of the guests that we booked for the All-In Summit. I did not. You did. You say that each week, every week.

SPEAKER_03

01:10:59 - 01:11:09

I did not. I appreciate your interest in the All-In Summit lineup, but we do not yet have enough critical mass to feel like we should go out there.

SPEAKER_02

01:11:09 - 01:11:51

Well, I am a loose cannon, so I will announce my two guests. I created the summit and you took it from me. So, great job. I will announce my guests; I don't care what your opinion is. I have booked two guests for the summit, and it's going to be sold out. Look at these two guests I booked. For the third time, coming back to the summit: Elon Musk will be there. Hopefully in person; if not, you know, from 40,000 feet on a Starlink connection, wherever he is in the world. And for the first time, our friend Mark Cuban will be coming. So, two great guests for you to look forward to. But Freeberg's got like a thousand guests coming; he'll tell you when it's like 48 hours before the conference. But yeah, two great guests coming.

SPEAKER_00

01:11:51 - 01:11:59

And the other billionaire who's coming, he's coming too? Yes. Coming. Yes. He's booked. So we have three billionaires. Three billionaires. Yes.

SPEAKER_03

01:11:59 - 01:12:01

Okay, he hasn't fully confirmed, so don't... Okay.

SPEAKER_02

01:12:01 - 01:12:07

Well, we're gonna say it anyway. He's penciled in. Don't say it. Don't say it. Yeah. Don't back out.

SPEAKER_04

01:12:07 - 01:12:10

This is going to be catnip for all these protest organizers.

SPEAKER_00

01:12:10 - 01:12:20

It's like, if you want to get us all in one place, there it is. Well, by the way, speaking of updates, what do you guys think of the bottle for the All-In tequila? Oh, beautiful.

SPEAKER_04

01:12:20 - 01:12:38

Honestly, honestly, I would just say, I think you're doing a marvelous job with that. I was shocked at the design, shocked meaning it is so unique and high quality. I think it's amazing. It would make me drink tequila.

SPEAKER_00

01:12:38 - 01:12:40

You're going to, you're going to want to.

SPEAKER_02

01:12:40 - 01:13:11

It is stunning. Congratulations. And yeah, when we went through the deck at the monthly meeting, it was like, oh, that's nice; oh, that's nice, as we went through the concept bottles. And then that bottle came up, and everybody went crazy. It was like Steph Curry hitting a half-court shot. It was like, oh my God. It was so clear that you've made an iconic bottle, and if we can produce it, oh Lord, it is going to be... We can.

SPEAKER_03

01:13:11 - 01:13:13

It can be amazing.

SPEAKER_02

01:13:13 - 01:13:15

I'm excited. I'm excited for it. You know, the only problem is...

SPEAKER_00

01:13:15 - 01:13:27

It's so complicated that we had to do a feasibility analysis on whether it was actually manufacturable, but it is. So at least the early reports are good. So hopefully we'll have some made in time for the All-In Summit.

SPEAKER_02

01:13:28 - 01:13:30

I mean, why not?

SPEAKER_04

01:13:30 - 01:13:35

So it's great: when we get barricaded in by all these protesters, we can drink the tequila.

SPEAKER_01

01:13:35 - 01:13:37

Did you want to see, did you see beer, too?

SPEAKER_04

01:13:37 - 01:13:41

Peter got barricaded in by these ding-dongs that came right up to the gate.

SPEAKER_02

01:13:41 - 01:13:49

Listen, people have the right to protest, and it's great people are protesting, but surrounding people and threatening them is a little bit over the top.

SPEAKER_00

01:13:49 - 01:13:51

I think you're exaggerating what happened.

SPEAKER_02

01:13:51 - 01:13:53

Well, I don't know exactly what happened. Obviously, there are these videos...

SPEAKER_00

01:13:54 - 01:14:09

Look, they're not threatening anybody. And I don't even think they tried to barricade him in. They were just outside the building. And because they were blocking the driveway, his car couldn't leave. But he wasn't physically locked in the building or something.

SPEAKER_02

01:14:09 - 01:14:13

Yeah, that's what the headlines say. But that could be fake news, fake social. Yeah.

SPEAKER_04

01:14:13 - 01:14:20

This was not on my bingo card. Pro-protester support by Sacks was not on the bingo card, I've got to say.

SPEAKER_00

01:14:21 - 01:15:34

The Constitution, the Constitution of the United States, in the First Amendment, provides for the right of assembly, which includes protests by citizens, as long as they're peaceful. Now, obviously, if they go too far and they vandalize or break into buildings or use violence, then that's not peaceful. However, expressing sentiments with which you disagree does not make it violent. And there are all these people out there now making the argument that if you hear something from a protester that you don't like, and you subjectively experience that as a threat to your safety, then somehow it should be treated as violence. Well, that's not what the Constitution says. And these people understood, just a few months ago, that that was basically snowflakery. You know what I'm saying? Now we have the rise of the woke right, where the right is buying into this idea of safetyism, which is that being exposed to ideas you don't like, or protests you don't like, is a threat to your safety. No, it's not. So now we have snowflakery on both sides.

SPEAKER_02

01:15:34 - 01:15:47

It's ridiculous. The only thing I will say, that I've seen, is this surrounding of individuals who you don't want there, locking them in a circle, and then moving them out of the area.

SPEAKER_00

01:15:47 - 01:16:26

That's not cool. Yeah, obviously you can't do that. But look, I think that most of the protests on most of the campuses have not crossed the line. They've just occupied the lawns of these campuses. And look, I've seen some troublemakers try to barge through the encampments and claim that, because they can't go through there, they're somehow being prevented from going to class. Look, you just walk around the lawn and you get to class, okay? And some of these videos are showing that these are effectively right-wing provocateurs who are engaging in left-wing tactics. And I don't support it either way.

SPEAKER_04

01:16:26 - 01:16:40

Some of these encampments are some of the funniest things you've ever seen. Like, there's one tent that's dedicated to being a reading room, and you go in there and there are these, like, flyers in the center. Oh my god, it's unbelievably hilarious.

SPEAKER_00

01:16:40 - 01:17:15

There's no question that, because the protests are originating on the left, there are some goofy views; like, you know you're dealing with a left-wing idea complex, right? And, you know, it's easy to make fun of them for doing different things. But the fact of the matter is that most of the protests on most of these campuses, even though they can be annoying because they're occupying part of the lawn, are not violent. And the way they're being cracked down on, sending the police in at 5 a.m. to break up these encampments with batons and riot gear, I find that part to be completely excessive.

SPEAKER_02

01:17:16 - 01:17:42

But it's also dangerous, because things can escalate when you have mobs of people, large groups of people. So I just want to make sure people understand that. With large groups of people, you have a diffusion of responsibility that occurs, especially when they're passionate about things. And people can get hurt; people have gotten killed at these things. So just keep it calm, everybody. I agree with you: what's the harm in these folks protesting on a lawn? It's not a big deal. When they break into buildings, of course, that's different.

SPEAKER_00

01:17:42 - 01:17:43

Yeah, that crosses the line, obviously.

SPEAKER_02

01:17:43 - 01:17:51

Yeah, but I mean, let them sit out there, and then they'll run out of food. Their food carts, I mean, I liked the food carts, and they're running out of waffles.

SPEAKER_04

01:17:51 - 01:18:00

Did you guys see the clip? I think it was on the University of Washington campus, where one kid told this antifa guy to shut up.

SPEAKER_02

01:18:00 - 01:18:01

Fantastic.

SPEAKER_04

01:18:01 - 01:18:05

I mean, it's some of the funniest stuff. Some amazing content is coming out.

SPEAKER_02

01:18:05 - 01:18:11

My favorite was the woman who came out and said that the Columbia students needed humanitarian aid.

SPEAKER_04

01:18:12 - 01:18:15

Oh my god, they opened up on her. Hilarious.

SPEAKER_01

01:18:15 - 01:18:24

I was like, uh, did you hear her? You know, it was like, we need our DoorDash right now. We need to double-dash some boba, and we can't get it through the police.

SPEAKER_02

01:18:24 - 01:18:40

We need our boba, low-sugar boba with the popping boba bubbles, and it wasn't getting in. But, you know, people have the right to protest and, uh, peaceably assemble. By the way, peaceable, there's a word I've never heard. Very good, Sacks. Peaceable: inclined to avoid argument or violent conflict. Very nice.

SPEAKER_00

01:18:41 - 01:18:43

Well, it's in the Constitution, it's in the first amendment.

SPEAKER_02

01:18:43 - 01:19:31

Is it real? Yeah, I hadn't heard the word peaceable before. I mean, you and I are simpatico on this. Like, we used to have the ACLU backing up the KKK marching down Main Street; that's what they were fighting for. And I have to say, the Overton window is opening back up, and I think it's great. All right, we've got some things on the docket. I don't know if you guys saw the new Apple iPad ad. It's getting a bunch of criticism. They used this giant hydraulic press to crush a bunch of creative tools, a turntable, a trumpet. You know, people really care about Apple's ads and what they represent. We talked about that Mother Earth little vignette they created here. What do you think, Freeberg? You saw the ad. What was your reaction to it?

SPEAKER_03

01:19:31 - 01:19:34

It made me sad, and it did not make me want to buy an iPad.

SPEAKER_04

01:19:34 - 01:19:44

So, it made you sad. That's actually eliciting an emotion. Meaning, with commercials, it's very rare that they can actually do that. Most people just zone out.

SPEAKER_03

01:19:44 - 01:19:53

Yeah, they took all this beautiful stuff and crushed it. It didn't feel good. I don't know, it just didn't seem like a good ad. I don't know why they did that. I don't get it.

SPEAKER_00

01:19:53 - 01:20:56

I think maybe what they're trying to do is, the selling point of this new iPad is that it's the thinnest one. I mean, there's no innovation left, so they're just making the devices thinner. So I think the idea was that they were going to use this hydraulic press to represent how ridiculously thin the new iPad is. Now, I don't know if the point was to smush all of that good stuff into the iPad; I don't know if that's what they were trying to convey. But I think that by destroying all those creative tools that Apple is supposed to represent, it definitely seemed very off-brand for them. And I think people were reacting to the fact that it was so different from what they would have done in the past. And of course everyone was saying, well, Steve would never have done this. I do think it landed wrong. I mean, I didn't care that much, but I was kind of asking the question: why are they destroying all these creator tools that they're renowned for creating, or for turning into the digital version?

SPEAKER_02

01:20:56 - 01:21:03

Yeah, it just didn't land. I mean, Chamath, how are you doing emotionally after seeing that? Are you okay, buddy?

SPEAKER_04

01:21:04 - 01:21:50

Yeah. I think this is, uh, did you guys see that at the Berkshire annual meeting last weekend, Tim Cook was in the audience, and Buffett was very laudatory: this is an incredible company. But he's so clever with words. He's like, you know, this is an incredible business that we will hold forever, most likely. And it turns out that he sold $20 billion worth of Apple shares. We're going to hold it forever? Which, by the way, if you guys remember, we put that little chart up which shows when he doesn't mention it in the annual letter; it's basically foreshadowing the fact that he is just pounding the sell button. And he sold $20 billion.

SPEAKER_00

01:21:50 - 01:21:58

Well, also holding it forever could mean one share. Yeah. We kind of need to know. Like how much are we talking about?

SPEAKER_04

01:21:59 - 01:22:06

I mean, it's an incredible business that has so much money and nothing to do with it. They're probably just going to buy back the stock. Just a total waste.

SPEAKER_02

01:22:06 - 01:22:17

They floated this rumor of buying Rivian. You know, after they shut down Project Titan, their internal project to make a car, it seems like a car is the only thing people can think of that would move the needle in terms of earnings.

SPEAKER_04

01:22:17 - 01:23:19

I think the problem is, J-Cal, you kind of become afraid of your own shadow. Meaning, the folks that are really good at M&A, like, you look at Benioff. The thing with Benioff's M&A strategy is that he's been doing it for 20 years. He cut his teeth on small acquisitions, and the market learned to give him trust, so that when he proposes something like the $27 billion Slack acquisition, he's allowed to do that. Another guy, you know, Nikesh Arora at PANW: these last five years, people were very skeptical that he could actually roll up security, because it was a super fragmented market. He's gotten permission. Then there are companies like Danaher that buy hundreds of companies. So all these folks are examples of: you start small and you earn the right to do more. Apple hasn't bought anything for more than $50 or $100 million. And so the idea that all of a sudden they come out of the blue and buy a $10 or $20 billion company, I think, just totally doesn't stand up to logic. It's just not possible for them, because they'll be so afraid of their own shadow. That's the big problem: it's themselves.

SPEAKER_00

01:23:19 - 01:23:43

Well, if you're running out of in-house innovation and you can't do M&A, then your options are kind of limited. I mean, I do think the fact that the big news out of Apple is the iPad getting thinner does represent kind of the end of the road in terms of innovation. It's kind of like when they added the third camera to the iPhone. Yeah, it reminds me of, um, remember when Gillette did the three-blade razor, and then...

SPEAKER_01

01:23:43 - 01:23:47

It was the best Onion thing; it was like, we're doing five blades.

SPEAKER_00

01:23:47 - 01:23:55

But then Gillette actually came out with the Mach 5. So yeah, the parody became the reality. What are they going to do, add two more cameras to the iPhone? Yeah, put five cameras on it.

SPEAKER_02

01:23:56 - 01:24:15

Yeah, it makes no sense. And I don't know if anybody wants to remember the Apple Vision Pro, which was going to be the big thing. Plus, why are they fat-shaming the iPads? That's a fair point, fair point. Actually, you know what, this didn't come out yet, but it turns out the iPad is on Ozempic. It's actually dropped a lot of weight.

SPEAKER_00

01:24:15 - 01:24:18

That would've been a funnier ad. Yeah, exactly.

SPEAKER_01

01:24:18 - 01:24:18

Oh, oh

SPEAKER_02

01:24:21 - 01:24:31

We could just workshop that right here. But there was another funny one, which was making the iPhone smaller and smaller and smaller, and the iPod smaller and smaller and smaller, to the point where it was, you know, like a thumb-sized iPhone.

SPEAKER_04

01:24:31 - 01:24:39

Like the Ben Stiller phone. In Zoolander. That was a great scene.

SPEAKER_02

01:24:39 - 01:24:51

Is there a category you can think of where you would love an Apple product? A product in your life that you would love to have Apple's version of?

SPEAKER_04

01:24:51 - 01:25:14

They killed it. I think a lot of people would be very open-minded to an Apple car. Okay? They just would. A car is a connected internet device, increasingly so. Yeah. And they managed to flub it. They had a chance to buy Tesla; they managed to flub it. Yeah. Right. There are just too many examples here where these guys have so much money and not enough ideas. It's a shame.

SPEAKER_02

01:25:15 - 01:25:32

It's a bummer, yeah. The one I always wanted to see them do, Sacks, was the TV. The one I always wanted to see them do was the TV, and they were supposedly working on it. Like the actual television, not the little Apple TV box in the back. That would have been extraordinary, to actually have a gorgeous, you know, big television.

SPEAKER_04

01:25:32 - 01:25:49

What about the gaming console? They could have done that. You know, there are just all these things that they could have done. It's not a lack of imagination, because these aren't exactly incredibly world-beating ideas; they're sitting right in front of your face. It's just the will to do it. If you think back on...

SPEAKER_03

01:25:54 - 01:26:52

Apple's product lineup over the years, where they've really created value is in how unique the products are; they almost create new categories. Sure, there may have been a quote-unquote tablet computer prior to the iPad, but the iPad really defined the tablet-computer era. Sure, there was a smartphone or two before the iPhone came along, but it really defined the smartphone. And sure, there was a computer before the Apple II, but then it came along and it defined the personal computer. In all these cases, I think Apple strives to define the category. So it's very hard to define a television, if you think about it, or a gaming console, in a way where you take a step up and say, this is a whole new platform. So I don't know, that's the lens I would look through if I'm Apple: can I redefine a car? We're all trying to fit them into an existing product bucket, but I think what they've always been so good at is identifying a consumer need and then creating an entirely new way of addressing that need, a real step-change function. Like the iPod: it was so different from any MP3 player ever.

SPEAKER_04

01:26:52 - 01:28:24

I think the reason why the car could have been completely reimagined by Apple is that they have a level of credibility and trust that I think probably no other company has, and absolutely no other tech company has. And we talked about this, but I think this was the third Steve Jobs story, the one I left out. In 2000, and I don't know, was it 2001? We launched a 99-cent download store at Winamp; I think I've told you this story. Steve Jobs just ran total circles around us, and the reason he was able to is he had all the credibility to go to the labels and get deals done for licensing music that nobody could get done before. I think that's an example of what Apple is able to do, which is to use their political capital to change the rules. So if the thing that we would all want is safer roads and autonomous vehicles, there are regions in every town and city that could be completely converted to level-five autonomous zones. If I had to pick one company that had the credibility to go and change those rules, it's them, because they could demonstrate that there was a methodical, safe approach to doing something. And so the point is that even in these categories that could be totally reimagined, it's not for a lack of imagination. Again, it just goes back to a complete lack of will. And I understand, because if you had $200 billion of capital on your balance sheet, it's probably pretty easy to get fat and lazy.

SPEAKER_02

01:28:24 - 01:29:10

Yeah, it is. And they want to have everything built there. People don't remember, but they actually built one of the first digital cameras. You must have owned this, right, Freeberg? Yeah, totally. It was beautiful. What did they call it? Was it the iCamera or something? QuickTake. QuickTake. QuickTake. Yeah. The thing I would like to see Apple build, and I'm surprised they didn't, was a smart home system, the way Google has Nest: a Dropcam, a door lock, you know, an AV system. Go after Crestron or whatever, and just have your whole home automated, the thermostat, all of that. That would be brilliant by Apple. And right now I'm an Apple family that has all of our home automation through Google, so it just kind of sucks. I would like that all to be integrated; that would be pretty amazing.

SPEAKER_00

01:29:10 - 01:29:16

Like, if they did a Crestron or something, because then when you just go to your Apple TV, all your cameras just work. You don't need anything else.

SPEAKER_02

01:29:16 - 01:29:22

Yes. That's it. I mean, everybody has a home, and everybody automates their home.

SPEAKER_00

01:29:22 - 01:29:27

And just think: everyone has an Apple TV at this point. So you just make the Apple TV the brain for the home system.

SPEAKER_02

01:29:28 - 01:29:30

Right. That would be your hub.

SPEAKER_00

01:29:30 - 01:29:34

And you can connect your phone to it. And then yes, that would be very nice.

SPEAKER_02

01:29:34 - 01:30:14

Yeah, like, can you imagine the Ring cameras, all that stuff being integrated? I don't know why they didn't go after that. That seems like the easy layup. Hey, you know, everybody's been talking, Freeberg, about this AlphaFold, this folding of proteins, and there's some new version out from Google. And also, Google reportedly, we talked about this before, is advancing talks to acquire HubSpot. So that rumor, for the $30 billion market cap HubSpot, is out there as well. Freeberg, you're our resident Sultan of Science, and a Google alumnus. Pick either story and let's go for it.

SPEAKER_03

01:30:14 - 01:35:18

Yeah, I mean, I'm not sure there's much more to add on the HubSpot acquisition rumors. They are still just rumors, and I think we covered the topic a couple of weeks ago. But I will say that AlphaFold 3, which was just announced today and demonstrated by Google, is a real, I would say, breathtaking moment for biology, for bioengineering, for human health, for medicine. And maybe I'll just take thirty seconds to kind of explain it. You remember when they introduced AlphaFold and AlphaFold 2, we talked about how DNA codes for proteins: every three letters of DNA codes for an amino acid, so a string of DNA codes for a string of amino acids, and that's called a gene, and it produces a protein. That protein is basically a long, think about beads, necklace: there are 20 different types of beads, 20 different amino acids, that can be strung together. And what happens is that bead necklace basically collapses on itself; all those beads stick together with each other in some complicated way that we can't deterministically model, and that creates a three-dimensional structure, which is the protein molecule. And that molecule does something interesting: it can break apart other molecules, it can bind molecules, it can move molecules around. So it's basically the machinery of chemistry, of biochemistry. So proteins are what is encoded in our DNA, and the proteins do all the work of living organisms. Google's AlphaFold project took three-dimensional images of proteins and the DNA sequences that code for those proteins, and built a predictive model that predicts the three-dimensional structure of a protein from the DNA that codes for it. That was a huge breakthrough years ago. What they just announced with AlphaFold 3 today is that they're now including all small molecules, all the other little molecules that go into chemistry and biology and drive the function of everything we see around us, and the way that all those molecules actually bind and fit together is part of the predictive model. Why is that important? Well, let's say you're designing a new drug, a protein-based biologic, which most new drugs are today. You could find a biologic that binds to a cancer cell, and then you'll spend ten years going through clinical trials, and billions of dollars later you find out that that protein accidentally binds to other stuff and hurts other things in the body, and that's an off-target effect, a side effect, and that drug is pulled from the clinical trials and never goes to market. Most drugs go through that process: they are tested in animals and then in humans, and we find all these side effects that arise because we don't know how those drugs are going to bind or interact with the rest of our biochemistry, and we only discover them after we put the drug in. But now we can actually model that with software. We can take that drug, create a three-dimensional representation of it in the software, and model how it might interact with all the other cells, all the other proteins, all the other small molecules in the body, find the off-target effects that may arise, and decide whether or not it presents a good drug candidate. That is one example of how this capability can be used, and there are many, many others, including creating new proteins that could be designed to bind molecules or stick molecules together, or new proteins that could be designed to rip molecules apart.
We can now predict the function of three-dimensional molecules using this capability, which opens up all of the software-based design of chemistry, of biology, of drugs, and it really is an incredible breakthrough moment. The interesting thing is that Alphabet has a subsidiary called Isomorphic Labs, a drug-development subsidiary, and they basically kept all the IP for AlphaFold 3 in Isomorphic. So Google is going to monetize the heck out of this capability. What they made available was not open-source code, but a web-based viewer that scientists, for, quote, non-commercial purposes, can use to do some fundamental research, run some experiments, try stuff out, and see how interactions might occur. But no one can use it for commercial use; only Google's Isomorphic Labs can. So, number one, it's an incredible demonstration of AI outside of LLMs, LLMs being kind of this consumer text-prediction capability we just talked about with Sam today. Outside of that, there are capabilities in things like chemistry, where new AI models can be trained and built to predict things like three-dimensional chemical interactions, and that is going to open up an entirely new era of human progress. And I think that's what's so exciting. I think the other side of this is that Google is hugely advantaged, and they just showed the world a little bit of the jewels they have in the treasure chest, like, look at what we've got. They're going to make all these drugs, and they've got partnerships with all these pharma companies through Isomorphic Labs, which they've talked about. And it's going to usher in a new era of drug development and design for human health. So, all in all, it's a pretty astounding day. A lot of people are going crazy over the capability they just demonstrated. And it begs all these really interesting questions around, you know, what Google is going to do with it and how much value is going to be created here. So anyway, I thought it was a great story, and I just rambled on for a couple of minutes, but I don't know.
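As a toy illustration of the "every three letters of DNA codes for an amino acid" point in the explanation above, here is a minimal sketch that translates a short DNA string into its amino-acid chain. The codon table is a six-entry excerpt of the real 64-entry table, so treat this as illustrative only.

```python
# Toy DNA-to-protein translation: walk the sequence three bases at a time
# and map each codon to its one-letter amino-acid code. The table below is
# a tiny excerpt of the real genetic code, for illustration.
CODON_TABLE = {
    "ATG": "M",  # methionine, the usual start codon
    "TTT": "F", "GGC": "G", "GCA": "A", "TGC": "C",
    "TAA": "*",  # stop codon
}

def translate(dna: str) -> str:
    protein = []
    for i in range(0, len(dna) - 2, 3):        # one codon = three bases
        aa = CODON_TABLE.get(dna[i:i + 3], "?")
        if aa == "*":                          # a stop codon ends the protein
            break
        protein.append(aa)
    return "".join(protein)

print(translate("ATGTTTGGCGCATGCTAA"))  # -> "MFGAC"
```

The string of letters this produces is the "bead necklace" in the explanation above; what AlphaFold-style models then predict is how that necklace folds into a three-dimensional shape.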

SPEAKER_02

01:35:18 - 01:35:24

It's pretty cool. Super interesting. Is this AI capable of making a science corner that David Sacks pays attention to?

SPEAKER_04

01:35:26 - 01:35:32

Well, it will predict the cure, I think, for the common cold and for herpes, so he should pay attention.

SPEAKER_02

01:35:32 - 01:35:42

So, Folding Cells is the app that, like a casual game, Sacks is going to download and play. How many chess moves did you make during that segment?

SPEAKER_03

01:35:42 - 01:36:52

Let me just say one more thing. Do you guys remember we talked about Yamanaka factors, and how challenging it is... basically, we can reverse aging if we can get the right proteins into cells to tune the expression of certain genes and make those cells youthful again. Right now, it's a shotgun approach: trying millions of compounds and combinations of compounds. There are a lot of companies actually trying to do this right now, to come up with a fountain-of-youth-type product. We can now simulate that. With this system, one of the things AlphaFold 3 can do is predict which molecules will bind to and promote certain sequences of DNA, which is exactly what we try to do with Yamanaka-factor-based expression systems, and find ones that won't trigger off-target expression. Meaning we can now go through the search space in software, looking for a combination of molecules that theoretically could unlock this fountain of youth, de-age all the cells in the body, and introduce an extraordinary kind of health benefit. And that's just, again, one example of the many things that are possible with this sort of platform. And I've got to be honest, I'm really just scratching the surface here of what this can do. The capabilities and the impact are going to be, I know I say this sort of stuff a lot, but it's going to be pretty profound.
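Here is a hedged sketch of the in-silico screening loop described above: score candidate molecules against a target and discard anything with predicted off-target binding. The `predict_affinity` callable is a stand-in assumption for a structure-prediction model; it is not a real AlphaFold 3 API, which Google has exposed only through a non-commercial web viewer, not as code.

```python
# A minimal in-silico screening loop, assuming a hypothetical scoring
# function `predict_affinity(molecule, target) -> float in [0, 1]`.
from typing import Callable, List

def screen(candidates: List[str],
           target: str,
           off_targets: List[str],
           predict_affinity: Callable[[str, str], float],
           bind_threshold: float = 0.8,
           off_target_threshold: float = 0.3) -> List[str]:
    hits = []
    for mol in candidates:
        # Keep molecules predicted to bind the intended target strongly...
        if predict_affinity(mol, target) < bind_threshold:
            continue
        # ...and predicted to bind none of the off-targets appreciably.
        if any(predict_affinity(mol, ot) >= off_target_threshold
               for ot in off_targets):
            continue
        hits.append(mol)
    return hits

if __name__ == "__main__":
    # Example with a stand-in scorer (purely hypothetical numbers).
    fake = lambda mol, tgt: 0.9 if (mol, tgt) == ("mol_A", "target") else 0.1
    print(screen(["mol_A", "mol_B"], "target", ["off_1"], fake))  # ['mol_A']
```

The point of the loop is the one Freeberg makes: filtering on predicted off-target binding in software, before anything reaches animal or human trials.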

SPEAKER_04

01:36:52 - 01:37:23

On the blog post, they have this incredible video they show of the coronavirus that causes a common cold, I think it's the 7PNM protein, and not only did they predict it accurately, they also predicted how it interacts with an antibody, with a sugar. So you could see a world where, I don't know, you just get a vaccine for the cold, and it's like you never have a cold again. I mean, it's seminal stuff, but so powerful.

SPEAKER_03

01:37:23 - 01:37:50

And you can filter out stuff that has off-target effects. So much of drug discovery, and all the side-effect stuff, can start to be solved for in silico. And you could think about running extraordinarily large simulations with a model like this, searching the space of chemistry to find stuff that does things in the body that can unlock all these benefits: to destroy cancer, to destroy viruses, to repair cells, to de-age cells.

SPEAKER_02

01:37:51 - 01:37:52

And this is a hundred billion dollar business.

SPEAKER_03

01:37:52 - 01:38:01

Oh my god. I mean, this alone... I feel like, I said this before, I think Google's got this, like, portfolio of quiet jewels.

SPEAKER_02

01:38:01 - 01:38:04

Yeah, I think the fact that they didn't open-source everything in this...

SPEAKER_03

01:38:08 - 01:38:10

says a lot about their intentions.

SPEAKER_02

01:38:10 - 01:38:15

Yeah. Yeah. Open source when you're behind; closed source and lock it up when you're ahead.

SPEAKER_01

01:38:15 - 01:38:16

Yamanaka.

SPEAKER_02

01:38:16 - 01:38:22

Interesting. Yamanaka, that's the Japanese whiskey that Sacks serves on his plane as well. It's delicious. I love that Hokkaido one.

SPEAKER_03

01:38:25 - 01:38:29

If you hadn't found your way to Silicon Valley, you could be, like, a Vegas lounge comedy guy.

SPEAKER_02

01:38:29 - 01:38:31

Absolutely. Yeah. For sure. Yeah.

SPEAKER_03

01:38:31 - 01:38:37

I was actually, yeah, somebody said I should do, like, those 1950s talk shows, where the guys would do, like, a stage monologue.

SPEAKER_02

01:38:37 - 01:39:20

Yeah. Somebody told me I should do, like, Spalding Gray, Eric Bogosian-style stuff. I don't know if you guys remember, like, the monologuists from the 80s in New York. I was like, oh, that's interesting, maybe. All right, everybody, thanks for tuning in to the world's number one podcast. Can you believe we did it, Chamath? Number one podcast in the world. And the All-In Summit, the TED killer. If you are going to TED, congratulations on all the genuflecting. If you want to talk about real issues, come to the All-In Summit. And if you're protesting at the All-In Summit, let us know what mock meat you would like to have; Freeberg is setting up mock-meat stations for all of our protesters. And what milk you would like. Yeah, I'll be...

SPEAKER_01

01:39:20 - 01:39:24

If you want, if you're oat milk, soy, or nut milk, for reference, just let us know when you come to protest.

SPEAKER_03

01:39:24 - 01:39:26

We have different kinds of xanthan gum.

SPEAKER_02

01:39:26 - 01:39:40

You can tell them right now all of the nut milks you want, and they'll be mindful of the allergies. Yes, on the south lawn, we'll have the goat yoga going on. So please check out the goat yoga, all of you.

SPEAKER_00

01:39:40 - 01:39:46

Very thoughtful of you, though, to make sure that our protesters are going to be well, well fed.

SPEAKER_02

01:39:46 - 01:40:05

Yes, but we're actually... Freeberg is working on the protester gift bags. The protester gift bags, they're, they're made of yucca. Yeah, they're good. Folding Produce... I think I saw them open for the Smashing Pumpkins in 2003. All right, enough.

SPEAKER_01

01:40:05 - 01:40:08

I'll be here for three more nights. Love you boys. Love you, besties.

SPEAKER_00

01:40:08 - 01:40:26

This is the All-In Podcast, if you're like, what's going on. We open sourced it to the fans, and they've just gone crazy with it.

SPEAKER_01

01:40:56 - 01:40:56

What?

SPEAKER_00

01:40:56 - 01:40:59

You're a B or a B. B. What? You're a B. What? You're a B.

SPEAKER_02

01:41:10 - 01:43:02

That's episode 178, and now the plugs. The All-In Summit is taking place in Los Angeles on September 8th through the 10th. You can apply for a ticket at summit.allinpodcast.co. Scholarships will be coming soon. If you want to see the four of us interview Sam Altman, you can actually see the video of this podcast on YouTube: youtube.com/@allin, or search "all-in podcast," and hit the alert bell and you'll get updates when we post. We're doing a live Q&A episode when the YouTube channel hits 500,000 subscribers, and my understanding is we're going to do a party in Vegas when we hit a million subscribers, so look for that as well. You can follow us on x.com/theallinpod. TikTok is all_in_tok, Instagram is theallinpod, and on LinkedIn, just search for the All-In Podcast. You can follow Chamath at x.com/chamath, and you can sign up for his Substack at chamath.substack.com; I do. Freeberg can be followed at x.com/friedberg, and Ohalo is hiring: click on the careers page at ohalogenetics.com. And you can follow Sacks at x.com/DavidSacks. Sacks recently spoke at the American Moment conference, and people are going crazy for it; it's pinned to his profile on X. I'm Jason Calacanis. I am x.com/jason, and if you want to see pictures of my bulldogs and the food I'm eating, go to instagram.com/jason, in the first-name club. You can listen to my other podcast, This Week in Startups; just search for it on YouTube or your favorite podcast player. We are hiring a researcher: apply to be a researcher doing primary research, working with me and producer Nick, working in data and science, being able to do great research, finance, etc., at allinpodcast.co/research. It's a full-time job working with us, the besties. We'll see you all next time on the All-In Podcast.