On with Kara Swisher
OpenAI CEO Sam Altman on GPT-4 & the A.I. Arms Race

Kara Swisher, Nayeema Raza, Sam Altman
Mar 23, 2023
0:00
Support for this show comes from E-Trade from Morgan Stanley. When the markets are a mess, you need a team that can help you invest with confidence. With E-Trade's easy-to-use platform, plus access to Morgan Stanley's in-depth research and guidance, you'll have the information you need to make informed investment decisions, no matter the market conditions. Get started today at etrade.com/vox. E-Trade Securities LLC, member SIPC, a subsidiary of Morgan Stanley.
0:28
Support for On with Kara Swisher comes from Polestar. At Polestar, every inch of every vehicle they design is thoughtfully made. They're made to transform performance, accelerating from zero to 60 in less than 4.2 seconds with fully electric all-wheel drive. They're made to elevate the driving experience with LED lights and a panoramic glass roof. And they're made to uphold a greater responsibility to the planet, using sustainable materials and energy-saving systems. The result is a car that combines the best of today with the technology of tomorrow.
0:57
Pure performance, pure design. Polestar. Design yours and book a test drive today at polestar.com.
1:11
Hi, everyone, from New York Magazine and the Vox Media Podcast Network. This is nonprofit OpenAI, which is now very much for-profit and 100% scarier. Just kidding. Actually, I'm not kidding. This is On with Kara Swisher, and I'm Kara Swisher. And I'm Nayeema Raza. It's amazing how an open-source nonprofit has moved to being a closed-source private company with a big deal with Microsoft. Are you shocked? No.
1:35
No, not even slightly. It's a huge opportunity. I'm in San Francisco now, and it's really jumping with AI. Crypto didn't quite work out, and all of those people moved to Miami, so it's very AI-oriented right now. Everybody's thinking about a startup in AI. You're more bullish on AI than Web3? Well, that's kind of a low bar. So, yeah. I've always been bullish on AI. I've talked about it a lot over the years, and, you know, this is just a version of it as it becomes more and more sophisticated and useful to people. So I've always thought it was important, and I think
2:05
the key technologists in Silicon Valley have always thought it was important. Agreed. I was talking to a VC yesterday, though, about how so many things that are not AI are being billed as AI tech companies now, and they're really not AI. They might have, like, a large language model, but they're not quite AI. Yeah. But last episode, we had Reid Hoffman on, talking about what was possible with AI. And now we have one of Reid's many mentees, Sam Altman. Sam is the CEO of OpenAI, and he leads the team that has given us ChatGPT and GPT-4.
2:35
He actually burst onto the scene as a young Stanford dropout, I think in 2005, with the startup Loopt, right? Is that when you met him? Yes, when he did Loopt. I visited him; it was a little startup and it didn't do very well. It was a location-based kind of thing, I don't even remember. A social network? Largely geo-social. Not... you know, it was not Facebook, let's just say. So he was one of these many, many startup people that were all over the Valley, very smart, but the company didn't quite work out. Yeah, it kind of went bust, I think, not many years later. But he became super important
3:05
in the Valley, especially in my generation. He's about my age. Because of Y Combinator. He led the startup accelerator that has incubated and launched Stripe, Airbnb, Coinbase. He got there later; it was working before he got there. But yeah, he really led it to new heights, I think, in a lot of ways. He took over in 2014. I remember when he took over, and he really invigorated it and was very involved in the startup scene. It was a great role for him. He was a great cheerleader, and, you know, he's good at eyeing good startups. Do you see him as kind of
3:35
one of the Elon Musks, Peter Thiels, Reid Hoffmans of his generation? Kind of, yeah. There's a lot of really smart people, but yeah, he's definitely special. And he really did, you know, have a bigger mentality, more like Reid than the others, although they had it initially. Not Peter Thiel. But he was thinking of big things with the startups, then AI. And I really like him. I've gotten to know him pretty well over the years, and I've always enjoyed talking to him. He's very thoughtful. He's got a lot of interesting takes on things, and this is a really big deal now that he's
4:05
sort of landed on taking OpenAI to these heights. Yeah, he has. He once, like you, entertained the notion of running for office in California. He thought about running for governor, something I think you've talked to him about. Yeah, we talked about it, but he went on to revolutionize AI. So do you think that's better or worse for humanity? I don't know. We'll see. You know, California is probably easier to fix than what we're going to do about AI once it gets fully deployed. Although, you know, the whole issue is, there's lots of great things and there's lots of bad things, and so we want to focus on both. Because, as I say, it's like when the internet started
4:35
and we didn't know what it was going to be. I think a lot of people are being very creative around what this could be and the problems it could solve, and, at the same time, the problems it could create. Do you think that the fear is overblown? Like, "our jobs are at risk, AI is going to..." you know, those stories? Yes, yes. It's like saying, what has, you know, the car done for us, or lights, or something like that. Things will change, as they always do. And so I've always thought most of the fears are overblown. But as I say in the book I'm working on right now, which is why I'm in San Francisco: everything that
5:05
can be digitized will be digitized. That's just inevitable, and that's where it's going. So this will soon be two bots talking to each other. No, no. But search is so antiquated, when you think about it; typing words into a box is really Neanderthal in many ways. And this is an upright Homo sapien. Well, it's been interesting, because critics kind of swarmed about ChatGPT early on, and Sam was coming back on Twitter saying, just wait for the next iteration, right? Which we now have in GPT-4. We couldn't put out the interview with him until GPT-4 was out, but the
5:35
model still has many issues, and he himself has noted this. He tweeted that it's still flawed, still limited, and it still seems more impressive on first use than it does after you spend more time with it. This was about GPT-4. Yep. I would agree, but that's a very interesting thing, because the fact that it's more impressive at first blush than it is after you use it is part of the problem. Because I've been using my ChatGPT Plus, and it pulls up all kinds of interesting... like, "write me a research paper," and it will look really good, and it will have a bunch of
6:05
false information in it. So does this then compound the misinformation problem, when something looks slick but isn't informed? Right. Data in, data out. Crap in, crap out. I mean, it's just the same. That's a very simplistic way of saying it, but, you know, it's like the early internet really sucked too, and now it kind of doesn't. It sort of does, and there's great things about it. But if you looked at early Yahoo, or Google (Google was much later), but early Yahoo and others, it was a lot of bubble gum and baling wire. All right, well, let's see what Sam Altman has to say, and if he feels
6:35
confident in the choice of having done OpenAI versus running for governor of California. We'll take a quick break and be back with the interview.
6:48
Vox Creative. This is advertiser content brought to you by Lenovo. Lenovo is sending people off to a desert island. Hi, I'm Mark Hearst. I'm a solutions manager for hybrid cloud for Lenovo workstations. I get to work with lots of awesome companies, everywhere from, like, Formula One to healthcare to media and entertainment. The goal for Mark and Lenovo workstations is to help their users
7:17
stay connected with their work from anywhere. We have to be able to support the people in the cities, but also the people in remote locations. So the question then becomes, how do you work from a desert island? First, identify the challenges. One is power, another one is cooling, and a big one is network connectivity. Right now, you've got this workstation; the one thing it lacks is flexibility.
7:47
The alternative to that is, if you were able to connect to that system just using a satellite, and have all your data somewhere else, it's going to be a lot easier. By using Lenovo's remote workstation solutions with TGX software, teams can connect from anywhere. The Lenovo P620 includes the AMD Ryzen Threadripper Pro processor. You've got this amazing power that can be accessed from anywhere and everywhere on the planet. To learn more, visit
8:17
lenovo.com, ThinkStation P620.
8:22
Support for On with Kara Swisher comes from Polestar. Polestar is an electric vehicle company driven by sustainable design. Every inch of their vehicles is built to maximize form, function, and our future. Designed to accelerate from 0 to 60 in less than 4.2 seconds with a fully electric all-wheel-drive system. Designed with a sleek exterior using frameless mirrors and a panoramic glass roof. And designed with a carefully crafted cabin utilizing completely sustainable materials. This is an electric vehicle unlike any other.
8:51
Pure performance, pure design. Polestar. Design yours and book a test drive today at polestar.com.
9:03
Sam, it's great to be in San Francisco, rainy San Francisco, to talk to you in person. We need the rain. I know, this atmospheric river is not kidding. I got somewhat soaked on the way here. I miss San Francisco. I'm here for a comeback. I'm trying to convince myself every moment here. I agree, it's time to come back. I love San Francisco. I've never really left in my heart. So, you started Loopt. That's where I met you. Explain what it was. It was a
9:27
location-based social app for mobile phones,
9:29
right?
9:30
What happened? The market wasn't there, I'd say, is the number one thing.
9:34
Yeah. Because well I think
9:36
like, you can't force a market. Like, you can have an idea about what something, or what people, are going to like. As a startup, part of your job is to be ahead of it, and sometimes you're right about that and sometimes you're not. You know, sometimes you make Loopt, sometimes you make OpenAI.
9:49
Yes. Right, right, exactly right. But you started OpenAI in 2015, after being at Y Combinator, and late last year you launched ChatGPT. Talk about that transition you had
10:00
when you reinvigorated Y Combinator in a lot of ways.
10:03
I was handed such an easy task with Y Combinator. I mean, like, I don't know if I reinvigorated it. It was sort of a super great thing by the time I took over.
10:12
It seems... I think it got more prominence; you changed things around. I didn't mean to say it was failing.
10:17
Yeah, not at all, not at all. I think I scaled it more, and we sort of took on longer-term, more ambitious projects. OpenAI, actually, sort of... that was, like, something I
10:30
helped start while at YC, and we founded other companies, some of which I'm very closely involved with, like Helion, the nuclear fusion company. They were going to take a long time, so I definitely added things that I was passionate about, and we did more of it. But I kind of just tried to, like, keep PG and Jessica's vision going there, because it's their
10:47
program. You had shifted to OpenAI. Why was that? When you're in this position, which is a high-profile position in Silicon Valley, sort of king of startups, essentially, why go off? Is it that you wanted to be an entrepreneur again? No, I didn't.
11:01
I mean, I am not a natural fit for CEO. Like, being an investor, really, I think, suits me very well. I got convinced that AGI was going to happen and be the most important thing I could ever work on. I think it is going to, like, transform our society in many ways. And, you know, I won't pretend that as soon as we started OpenAI I was sure it was going to work, but it became clear over the intervening years, and certainly by 2018, 2019, that we had a
11:30
chance here. What was it that made you think that?
11:32
A number of things. It's hard to point to just a single one. But by the time we made GPT-2, which was still weak in a lot of ways, you could look at the scaling laws and see what was going to happen. I was like, this can go very, very far, and I got super excited about it. I've never stopped being super excited.
11:49
Was there something you saw? That it just scaled, or what was it?
11:53
Yeah. It was, like, looking at the data of how predictably better we could make the system with more compute, with more
11:59
data.
12:00
And there had already been a lot of stuff going on at Google with DeepMind. They had bought that earlier, right around then.
12:06
Yeah, there had been a bunch of stuff, but somehow it wasn't quite the trajectory that has turned out to be the one that really works.
12:14
But in 2015, you wrote that superhuman machine intelligence is probably the greatest threat to the continued existence of humanity. Explain. I still think so. Okay. All right, we're going to get into that. Why did you write that then? And yet you also called it the greatest technology ever.
12:30
I still believe both of those things. Okay. I think at this point, more of the world would agree with that. At the time, it was considered an extremely, like, crazy position.
12:37
So explain, roll it out: what you wrote was, probably the greatest threat to the continued existence of humanity, and also one of the greatest technologies that could improve humanity. Flesh those two things
12:46
out. Well, I think we're finally seeing little previews of this with ChatGPT, and especially now that we've put GPT-4 out, and people can see this vision where, just to pick one example out of the thousands we could talk about, everyone in the
13:00
world can have an amazing AI tutor on their phone with them all the time, for anything they want to learn. That's really... We need that. I mean, that's wonderful. That will make the world a much better place. The creative enhancement that people are able to get from using these tools to do whatever their creative work is, that's fantastic. The economic empowerment, all of these things. And again, we're seeing this only in the most limited, primitive, larval way. But at some point it's like, well, now we can use these things to cure disease.
13:26
So what is the threat? Because when I try to explain it to regular people,
13:30
people who don't quite... I'm a regular person. No, you're not. You're not a regular person. I'm so offended. I'm not a regular person? But when the internet started, nobody knew what it was going to do. When you wrote that superhuman machine intelligence is the greatest threat, what did you mean by
13:41
that? I think there are levels of threats. So today, we can look at these systems and say, all right, no imagination required, we can see how this can contribute to
13:53
computer security exploits, or disinformation, or other things that can destabilize society. Certainly, there's going to be economic transition. And those are not in the future; those are things we can look at now.
14:09
In the medium term, I think we can imagine these systems getting much, much more powerful. Now, what happens if a really bad actor gets to use them and tries to, like, figure out how much havoc they can wreak on the world, or harm they can inflict? Yeah. And then we can go further, to all of the sort of traditional sci-fi, what-happens-with-the-kind-of-runaway-AGI scenarios, or anything like that. Now, the reason we're doing this work is because we want to minimize those downsides while still letting society get the big
14:38
upsides. And we think it's very possible to do that. But it requires, in our belief, this continual deployment in the world, where you let people gradually get used to this technology, where you give institutions, regulators, policymakers time to react to it, where you let people feel it and find the exploits. The creative energy of the world will come up with use cases we and all the red-teamers we could hire would never imagine. And so we want to see all of
15:08
the good and the bad, and figure out how to continually minimize the bad and improve the benefits. You can't do that in the lab. And this idea that we have, that we have an obligation, and society will be better off, for us to build in public, even if it means making some mistakes along the way, right, I think that's really
15:26
important. When people critiqued ChatGPT, you essentially said, wait for GPT-4. Now that it's out, has it met expectations?
15:34
A lot of people seem really happy with it. There are plenty of things... Are you pleased with it?
15:38
Yeah, I'm proud of it. Again, a very long way to go, but it's a step forward, and I'm proud of
15:42
it. So, you tweeted that at first glance GPT-4 seems, quote, "more impressive than it actually is." Why is that? Well, I
15:48
think that's been an issue with every version of these systems, not particularly GPT-4. You find these, like, flashes of brilliance before you find the problems. And so something someone used to say about GPT-3 that has really stuck with me is: it is the world's greatest demo creator. Because you can tolerate a lot of mistakes there. But if you
16:08
need a lot of reliability for a production system, it wasn't as good at that. Now, GPT-4 makes fewer mistakes. It's more reliable, more robust, but still a long way to
16:16
go. One of the issues is hallucinations, as they're called, which is just kind of a creepy word, I have to say.
16:21
What would you call it instead?
16:23
Mistakes. Mistakes, or something. Like, "hallucinations" feels like it's sentient.
16:27
It's interesting; "hallucinations," that word doesn't trigger for me as sentient. But I really try to make sure we're picking words that are in the tools camp, not the creatures camp, because I think it's tempting to anthropomorphize
16:38
this. That's correct. In a bad
16:39
way. That's correct. And as you know, there were a series of reporters wanting to date GPT-3. But anyway, sometimes the bot just makes things up out of thin air, and that's when hallucinations happen. Now, it'll cite research papers or news articles that don't exist. You said GPT-4 does this less than GPT-3. We shouldn't give them actual names, but it's
16:56
still... that would be anthropomorphizing. I think it's good that it's letters plus a
16:59
number, rather than, like, Barbara. Anyway, it still happens. Why is that? So
17:04
these systems are trained to do something, which is to predict
17:08
the next word in a sequence, right? And so it's trying to just complete a pattern, and given its training set, this is the most likely completion. That said, the decrease from 3 to 3.5 to 4, I think, is very promising. We track this internally, and every week we're able to get the number lower and lower and lower. I think it'll require combinations of model scale, new
17:30
ideas, a lot of user input. Is model scale just more data?
17:33
Not necessarily more data, but more compute thrown at the problem. Human feedback, people, like, flagging the errors
17:38
for us. Developing new techniques so the model can tell when it's about to kind of go off the
17:43
rails. Saying, this is a mistake. Yeah. One of the issues is that it obviously compounds a very serious misinformation
17:48
problem. Yes. So we pay experts to flag, to go through things for us. Not just bounties, but we employ people, we have contractors, we work with external firms. We say, we need experts in this area to help us go through and improve things. You don't want to rely totally on, you know, random users doing whatever, trying to troll you, or anything like that.
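The objective Altman describes, "trained to predict the next word in a sequence... trying to just complete a pattern," can be illustrated with a toy sketch. Everything here, the vocabulary, the probabilities, and the `complete` function, is hypothetical; it only shows why "most likely completion" is not the same thing as "true completion," which is where hallucinations come from.

```python
# Toy illustration of next-word prediction: pick the most likely
# continuation of a pattern, given statistics from a "training set."
# Vocabulary and probabilities are made up for illustration.

def complete(prompt_words, probs, n_words=3):
    """Greedily extend a prompt using per-word continuation probabilities."""
    words = list(prompt_words)
    for _ in range(n_words):
        last = words[-1]
        candidates = probs.get(last)
        if not candidates:
            break  # no known continuation; stop
        # Pick the single most likely next word (greedy decoding).
        words.append(max(candidates, key=candidates.get))
    return words

# Hypothetical training statistics: P(next word | current word).
probs = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "sat": {"down": 0.9, "up": 0.1},
}

print(complete(["the"], probs))  # → ['the', 'cat', 'sat', 'down']
```

Nothing in the procedure checks whether the completion is factual; it is only the statistically likely pattern, which is why a fluent-looking citation can be entirely invented.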
18:09
So, humans, more
18:10
compute. What else, to reduce the mistakes? Yeah, I think there is going to be a big new algorithmic idea, you know, a different way that we train or use or tweak these models, a different architecture, perhaps. So I think we'll find that at some
18:28
point. Meaning what, for the non-techie? A different architecture? Oh, it could be a lot
18:33
of things. You could say, like, a different algorithm, but just some different idea of the way that we create or use these models,
18:38
hmm, something that encourages, during training, or at inference time when you're using it, the models to really ground themselves in truth, and be able to cite sources. Microsoft has done some good work there. We're working on some things.
18:54
So talk about the next steps. How does this move
18:57
forward?
18:59
I think we're sort of on this very long-term exponential. And I don't mean that just for AI, although AI too; I mean that as, like, cumulative human technological progress.
19:13
And it's very hard to calibrate on that, and we keep adjusting our expectations. I think if we told you five years ago that we'd have GPT-4 today, you'd maybe be impressed. But if we told you four months ago, after you used ChatGPT, that we'd have GPT-4 today, probably not that impressed.
19:32
And yet it's the same continued exponential. So maybe where we get to a year from now, you're like, yeah, you know, it's better, the way the new iPhones are always a little better, right? But if you look at where we'll be in 10 years, then I think you'd be pretty
19:46
impressed. Right, right. Actually, the old iPhones were not as impressive as the new ones.
19:50
For sure. But it's been such a gradual process. Correct. Unless you hold that original one and this one back to
19:54
back. Right, right. I actually just saw mine the other day. Interestingly enough, that's a very good comparison. You're getting criticism for being secretive.
20:02
You said competition and safety require that you do that. Critics say that's a cop-out, that it's just about competition. What's your
20:09
response?
20:10
I mean, it's clearly not. We make no secret of the fact that, like, we would like to be a successful effort, and I think that's fine and good, and we try to be clear. But also, we have made many decisions over the years in the name of safety that were widely ridiculed at the time, that people later come to appreciate. Even in the early versions of GPT, when we talked about not releasing model weights, or releasing them gradually, because we wanted people to have time to
20:40
adapt, we got ridiculed for that, and I totally stand by that decision. Would you like us to, like, push a button and open-source GPT-4 and drop those weights into the world? Probably not. Probably
20:50
not. One of the excuses everyone uses is, "you don't understand it; we need to keep it in a black box." It's often about competition. Well, for
20:56
us, it's the opposite. I mean, we've said all along, and this is different from what most other AGI efforts have thought, which is that
21:04
everybody needs to know about this. Like, AGI should not go be built in a secret lab, with only the people who are, like, you know, privileged and smart enough to understand it. Part of the reason that we deploy this is, I think, we need the input of the world, and the world needs familiarity with what is in the process of happening, the ability to weigh in, to shape this together. Like, we want that, we need that input, and people deserve it. So I think we're, like, not the secretive company;
21:34
we're quite the opposite. Like, we put the most advanced AI in the world into an API that anybody can use. I don't think that if we hadn't started doing that a few years ago, Google or anybody else would be doing it now. They would just be using it secretly to
21:50
make certain products better. So you think you're forcing it out? Well... But you are in competition. And let me go back to someone who was one of your original funders, Elon Musk. He's been openly critical of OpenAI, especially as it's gone to profits. He
22:04
said, quote: "OpenAI was created as an open-source (which is why I named it OpenAI), nonprofit company to serve as a counterweight to Google, but now it has become a closed-source, maximum-profit company effectively controlled by Microsoft. Not what I intended at all." We're talking about open source versus closed, but what about his critique that you're too close to the big guys?
22:24
I mean, most of that is not true. Okay. Let's go through those. We're not controlled by Microsoft. Microsoft doesn't even have a board seat on us. We are an independent
22:34
company. We have an unusual structure where we can make very different decisions than what most companies do. I think a fair part of that is that we don't open-source everything anymore. We've been clear about why we think we were wrong there originally. We still do open-source a lot of stuff. You know, open-sourcing CLIP was something that kicked off this whole generative image world. We recently open-sourced Whisper. We open-source tools. We'll open-source more stuff in the future. But I don't think it would be good
23:04
right now for us to open-source GPT-4, for example. I think that would cause some degree of havoc in the world, or at least there's a chance of that; we can't be certain that it wouldn't. And by putting it out behind an API, we are able to get many, not all, but many of the benefits we want of broad access to this, of society being able to understand it, update, and think about it. And when we find some of the scarier downsides, we're able to then fix
23:32
them. How do you
23:34
respond to what he's saying, that you're a closed-source, maximum-profit company? I'll leave out the "controlled by Microsoft," but you are in a strong partnership with Microsoft, which goes against what he said. I remember years ago when he talked about this. This was something he talked about a lot: we don't want these big companies to run it; if they run it, we're doomed. You know, he was much more dramatic than most
23:53
people. So, we're a capped-profit company. Yeah, we invented this new thing where we
23:59
started as a nonprofit... Explain that. Explain what a capped profit is.
24:02
Our shareholders, which
24:04
is our employees and our investors, can make a certain return. Like, their shares have a certain price that they can get to. But if OpenAI goes and becomes a multi-trillion-dollar company, whatever, almost all of that flows to the nonprofit that controls us. It's not like people had a cap and then decided they didn't want the cap. It continues to vary as we have to raise more money, but it's much, much, much smaller, and will remain much smaller, than, like, any tech company. What's
24:32
the cap, in terms of, like, a number? I truly don't know.
24:33
But it's not insignificant. The nonprofit gets a significant chunk of the
24:37
revenue? Well, no, it gets everything over a certain amount. So if we're not very successful, the nonprofit might get a little bit along the way, but it won't get any appreciable amount. The goal of the capped profit is, in the world where we do succeed at making AGI, and now we have a significant lead, you know, that could become much more valuable, I think, than maybe any company out there today.
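The capped-profit arithmetic Altman sketches (investors can receive returns only up to a cap; everything above it flows to the controlling nonprofit) can be written out as a toy calculation. The 100x multiple below matches the figure OpenAI publicly described for its earliest investors, but the whole function is an illustration, not the actual legal structure, and the cap varies by funding round, as he notes.

```python
# Toy sketch of a "capped-profit" split: an investor receives returns
# only up to a fixed multiple of the amount invested; everything above
# the cap flows to the controlling nonprofit. Illustrative only.

def split_proceeds(invested, total_return, cap_multiple=100):
    """Return (investor_share, nonprofit_share) of total_return."""
    cap = invested * cap_multiple           # most the investor can ever receive
    investor_share = min(total_return, cap)
    nonprofit_share = total_return - investor_share
    return investor_share, nonprofit_share

# Modest outcome: everything stays under the cap; nonprofit gets nothing.
print(split_proceeds(10, 500))    # → (500, 0)

# Runaway success: returns above the cap go to the nonprofit.
print(split_proceeds(10, 5000))   # → (1000, 4000)
```

This is why the nonprofit's share is negligible in ordinary outcomes but dominant in the "multi-trillion-dollar" scenario he describes.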
25:01
That's when you want almost all of it to flow to a
25:03
nonprofit. Right. I want to get back to Elon. You know, I was talking about how he was very adamant at the time, and again, overly dramatic, that Google and Microsoft and Amazon were going to kill us. I think he used those kinds of words, that there needed to be an alternative. What changed, in your estimation? You did change from that
25:25
idea. It was very simple. Like, when we realized the level of capital we were going to need
25:31
to do this... scaling turned out to be far more important than we thought, and we even thought it was going to be important. Then we tried for a while to find a path to that level of capital as a nonprofit. There was no one that was willing to do it. We didn't want to become a fully for-profit company. We wanted to find something that would let us get access to the power of capitalism to finance what we needed to do, but still be able to
26:02
fulfill and be governed by the nonprofit mission. So having this nonprofit that governs this capped-profit LLC, given the playing field that we saw at the time, and I think we still see now, was the way to get to the best of all the worlds we could see. In a really well-functioning society, I think this would have been a government
26:22
project. That's correct. I was just going to make that point. The government would have been your
26:26
funder. We talked to them.
26:29
It wouldn't have just been that they would have been our funder; they would have started the project. We've done things like this before in this
26:38
country. Right? Sure.
26:39
But the answer is not to just say, oh, well, the government doesn't do stuff like this anymore, so we're just going to sit around and, you know, let other countries run right by us and get to AGI and do whatever they want to us. We're going to, like, look at what's possible on this playing field. Right. So, Elon used to be the co-chair, and you have a lot of respect for
26:59
him. He's thought deeply about his critiques. Have you spoken to him directly? Was there a break, or what? You two were very close. I was talking to him directly recently. Yeah. And what do you make of the critiques when you hear them from him? I mean, he can be quite in-your-face about
27:15
things. He's got his style. Yeah. Now say the positive thing about Elon. Yeah, like, he really does care about a good future with AGI. That is correct. And he's...
27:29
I mean, he's a jerk, whatever else you want to say about him. He has a style that is not a style that I'd want to have for myself. He has changed. But I think he
27:39
really does care, and he is feeling very stressed about what the future is going to look like for humanity. For humanity.
27:46
Yeah, he did apply that both to... When we did an interview at Tesla, he's like, if this doesn't work, we're all doomed, which was sort of centered on his cars. But nonetheless, he was correct. And the same thing with this. And this was something he talked about almost incessantly: the idea of AI either taking over and killing us, or maybe it doesn't really care. Then he decided it was like anthills. Do you remember that? I don't have an answer.
28:09
He said, we're like... you know how, when we're building a highway, anthills are there and we just go over them without thinking about it? So it doesn't really care. And then he said, we're like a cat, and maybe they'll feed us and bathe us, but they don't really care about us. It went on and on; it changed and iterated over time. But I think the critique of his that I would most agree with is that these big companies would control this and there couldn't be innovation in the space. Well, I wish there were
28:34
evidence against
28:35
that. Except Microsoft, and that's right there.
28:37
Like a big investor, but again,
28:38
Yeah, not even a board
28:39
member. So when you
28:40
think... We have truly full independence from
28:42
them. So you think you are a startup in comparison with a giant partner?
28:47
Yeah, I think we're a startup with a giant partner. I mean, we're a big startup at this point.
28:51
so and there was no way to be a nonprofit that would work.
28:54
I mean if you know, someone wants to give us tens of billions of dollars of nonprofit Capital. Yeah, coming down
28:58
where the government, which they're not. We tried now he and others are are working on different things. He hasn't Aunt I woke I play Greg Bachmann also
29:08
Said you guys made a mistake by creating a i with a left-leaning political bias, how do you, what do you think of the substance of those critiques?
29:17
Well, I think that
29:18
this was your co-founder. Yeah, I
29:20
think that the reinforcement learning from human feedback on the first version of ChatGPT was pretty left-biased, but that is now no longer true. It's just become an internet meme. There are some people who are intellectually honest about this. If you go look at
29:38
GPT-4 and test it honestly, it's relatively neutral. Not to say we don't have more work to do. The main thing, though, is I don't think you ever get two people to agree that any one system is unbiased on every topic. And so giving users more control, and also teaching people about how these systems work, that there is some randomness in responses, that the worst screenshot you see on Twitter is not representative of what these things do, I think is
30:02
important. So when you said it had a left-leaning bias, what did that mean to you? And of course they will run with that.
30:08
They'll run with that quite far.
30:10
People would give it these tests that score you on, you know, the political spectrum in America or whatever. And, like, one would be all the way on the right, ten would be all the way on the left, and it would get like a 10 on all of those tests, the first version. Why? For a number of reasons, but largely because of the reinforcement learning from human feedback stuff.
30:35
We'll be back in a
30:35
minute.
30:48
32:34
What do you think is the most viable threat to OpenAI? As I hear it, you're watching Claude very carefully; that's the bot from Anthropic, a company founded by former OpenAI folks and backed by Alphabet. Is that it? We're recording this on Tuesday; Bard launched today. I'm sure you've been discussing it internally. Talk about those two to
32:51
start. Honestly, I mean, I try to pay some attention to what's happening with all these other things. It's going to be an unbelievably competitive space. Like, this is the first new technological platform in a
33:04
long period of time. The thing I worry about the most is not any of those, because I think there's room for a lot of people, and also I think we'll just continue to offer the best product. The thing I worry about the most is that we're somehow missing a better approach. Everyone's chasing us right now on large language models, kind of trained in the same way. I don't worry about them. I worry about the person that has some very different idea about how to make a more useful system.
33:33
Like a
33:34
Facebook, probably, is your first worry, to be honest. Like a Facebook 2000. They're not like Facebook. Not Facebook. No, Facebook's not going to come up with anything unless Snapchat does, and then they'll copy it. I'm teasing, sort of. But you don't feel like these other efforts, they're sort of in your same lane, you're all competing, and you say it's not the one that is
33:51
not what I would worry about the most. Like, the people that are trying to do exactly what we're doing, but, you know,
33:58
scrambling and muscling in, like. But is there one that you're watching more carefully?
34:04
Not especially, really.
34:07
I kind of don't believe you but
34:08
Really? I mean, no. The things that I pay the most attention to are not, like, language model startup number 217. It's when I hear about someone, like, these are three smart people in a garage with some very different theory of how to build AGI. That's when I pay
34:26
attention, is there one that you're paying attention to now?
34:31
There is one. I don't want to
34:32
say okay you really don't want to throw it on. Okay. What's the plan for making money?
34:36
So we sort of, like, have a platform, which is this API that developers can use to call the model, and then we have, like, a consumer product on top of it, right? And the consumer product is 20 bucks a month for the sort of premium version, and for the API you just pay us per token, basically like a
34:50
meter. Businesses would do that, depending on what they're using, if they decide to deploy it at a hotel or
34:55
wherever. The more you use it, the more you
34:56
pay. The more you use it, the more you pay. One of the things that someone
34:59
said to me that I thought was very smart is, if the original internet had started on more of a pay-subscriber basis, rather than an advertising basis, it wouldn't be quite so
35:07
evil. I am excited to see if we can really do a mass-scale, subscription-funded, not ad-funded business here.
35:16
Do you see ads funding this? That, to me, is the original sin of the internet.
35:19
We've made the bet not to do that, right? I'm not opposed to it, maybe. I don't know, we haven't thought about it much; it's going great with our current model. We're happy
35:28
about it. You've also been
35:29
competing against Microsoft for clients; they're trying to sell your software through their Azure cloud business as an
35:34
add-on. Actually, that I don't, like, that's fine. I don't
35:37
care. It's fine. But you're also trying to sell directly, sometimes to the same clients. You don't care? I don't care, I don't care. How does it work? Does it affect your bottom line that
35:44
way?
35:47
Again, we're, like, an unusual company here. We don't need to squeeze out every
35:51
dollar. Former Googler Tristan Harris, who's become a critic of how tech is sloppily developed, presented to a group of regulators in D.C.; I was there. Among the points he made is that you've essentially kicked off an AI arms race. I think that's what struck me the most: Meta, Microsoft, Google, Baidu, all rushing to ship generative AI bots when the tech industry is shedding jobs. Microsoft recently laid off the ethics and society team within its AI org. That's not your
36:16
issue, but are you worried about a profit-driven arms
36:18
race? I do think we need regulation, and we need industry norms for this. I am disappointed to see, like, we spent many, many months, and actually really the years that it's taken us to get good at making these models, getting them ready before we put them out. You know, it obviously became somewhat of an open secret in Silicon Valley that we had GPT-4 done for a long time, and there were a lot of people who were like, you've got to release this now, you're holding this back from
36:46
society, you're closed AI, whatever. But we just wanted to take the time to get it right. There's a lot to learn here, and it's hard. In fact, we try to release things to help people get it right, even competitors. I am nervous about the shortcuts that other companies now seem like they want to take, such as just rushing out these models without all the safety features
37:09
built. Without the safety features. So this is an arms race: they want to get in here and get ahead of you, because you've had the front
37:15
seat.
37:16
Maybe they do, maybe they don't. They're certainly making some noise like they're going to.
37:21
So when you say worried, what can you do about it? Nothing.
37:25
Well, we can, and we do, try to talk to them and explain: hey, here are some pitfalls, and here are some things we think you need to get right. We can continue to push for regulation. We can try to set industry norms. We can release things that we think help other people get toward safer systems faster.
37:40
Can you prevent that, though? Let me read you this passage from the story about Stanford doing it. They made one of their own models.
37:46
Six hundred dollars, I think it cost them to train a model. For 600, yeah, they did. It's called Stanford Alpaca; just so you know, it's a cute name. I'll send you the story. But so, what's to stop basically anyone from creating their own pet AI now, for a hundred bucks or so, and training it however they choose? OpenAI's terms of service say you may not use output from the services to develop models that compete with OpenAI, and Meta says it's only letting academic researchers use LLaMA under a noncommercial license at this stage, although that's a moot point, since the entire LLaMA
38:16
model was leaked onto 4chan. Yeah. And this is a 600-dollar version of
38:20
yours.
38:22
One of the other reasons that we want to talk to the world about these things now is, this is coming. This is totally unstoppable. Yeah. And there are going to be a lot of very good open-source versions of this in coming years, and it's going to come with, you know, wonderful benefits and some problems. By getting people used to this now, by getting regulators to begin to take this seriously and think about it now, I think that's our best path
38:48
forward. All right, two things I want to talk about: societal impact and regulation. You've said,
38:52
I told you this, that this will be the greatest technology humanity has ever developed. In almost every interview you're asked about the dangers of releasing AI products, and you say it's better to test it gradually, quote, while the stakes are relatively low, end quote. Can you expand on that? Why are the stakes low now? Why aren't they high right now?
39:08
Relatively is the key
39:09
word. Okay, what happens to the stakes if it's not controlled now? Well, these
39:15
systems are now much more powerful than they were a few years ago, and we are much more cautious than we were a few years ago in terms
39:22
of how we deploy them. We've tried to learn what we can learn. We've made some improvements. We've found ways that people want to use this. You know, in this interview, and I totally get why, on many of these topics I think we're mostly talking about all of the downsides. But I'm going to ask you about the upside, okay? But we've also found ways to improve the upsides, by learning to mitigate downsides and maximize upsides. That sounds good. And it's not that the stakes are that low anymore. In fact, I think we're in a different world than we were
39:52
two years ago. I still think they are relatively low compared to where they'll be a few years from now. These systems still have classes of problems, but there are things that are totally out of reach now that we know they'll be capable of. And the learnings we have now, the feedback we get now, seeing the ways people hack, jailbreak, whatever, that's super valuable. But I'm curious how you think we're doing. I know you're,
40:16
I think you're saying the right things, you're
40:18
Not saying; how you think we're doing, as you look at it, Jackie
40:21
Gleason.
40:22
The reason people are so worried, and I think it's a legitimate worry, is because of the way the early internet rolled out. It was gee-whiz almost the whole time. Yeah. Almost up and to the right: gee whiz, look at these rich guys, isn't this great, doesn't this help you? And they missed every single consequence, never thought of them. I remember seeing Facebook Live, and I said, what about, you know, people who kill each other on it? What about, you know, murders? What about suicides? And they called me a bummer:
40:53
a bummer in this room. And I'm like, yeah, I'm a bummer. I'm like, I don't know, I just noticed that when people get ahold of tools, they tend... and, you know, this is the Brad Smith thing, is it a tool or a weapon? Weapons seem to come up a lot. And so, the same thing happened with the Google founders when they were trying to buy Yahoo many years ago. And I said, at least Microsoft knew they were thugs. And they called me, and they said, that's really hurtful, we're really nice. I said, I'm not worried about you, I'm worried about the next guy. Like, I don't know who runs your company in 20 years with all that information on
41:22
me. And so I think, you know, I am a bummer. And so if you don't know what it's going to be, well, you can think of all the amazing things it's going to do, and it'll probably be a net positive for society. But net positive isn't so great either sometimes, right? It's a net positive; the internet's a net positive, like electricity's a net positive. But, as the famous quote goes, when you invent electricity, you invent the electric chair; when you invent this, you invent that. And so that's what would be the thing here. What would be the greatest thing? Does it outweigh
41:52
some of the dangers?
41:53
I think that's going to be the fundamental tension that we face, that we have to wrestle with, that the field as a whole has to wrestle with, that society has to wrestle
41:59
with. Especially in this world we live in now, which I think we can all agree has not gone forward; it's spinning backwards a little bit, in terms of authoritarians using this. You
42:09
know, I am super nervous
42:10
about that. Yeah. What is the greatest thing you can think of? Now, you and I are not creative enough to think of all the things it's going to do. But from your perspective. And, you know, don't do term papers, don't do dad jokes.
42:22
So, what do you think is the greatest? I'm getting tired of the usual answers: I don't care that it can write a press release. I don't care. Fine, sounds fantastic. I hate press releases; I don't want them
42:32
anyway. What I'm personally most excited about is helping us greatly expand our scientific knowledge. Okay. I am a believer that a lot of our forward progress comes from increasing scientific discovery over a long period of time, in all the areas. I think that's just what's driven humanity forward, and if these
42:52
systems can help us, in many different ways, greatly increase the rate of scientific understanding, you know, curing diseases is an obvious example, there are so many other things we can do with faster knowledge and better
43:04
understanding. It's already moved in that area: folding proteins and things.
43:07
So that's the one that I'm personally most excited about, I think: science. Yeah. But there will be many other wonderful things too. You just asked me what my one was.
43:14
And is there one unusual thing that you think will be great, that you've seen already, that you're like, that's pretty cool?
43:20
Seeing some of these
43:22
new AI-tutor-like applications, I'm like, I wish I had this when I was growing up. I could have learned so much, so much better and faster. And when I think about what kids today will be like by the time they're finished with their formal education, and how much smarter and more capable and better educated they can be than us today, I'm excited.
43:42
About them using these tools, yes. Using these tools, I would say health information for people who can't afford it is probably the one I think is
43:49
most, that is going to be transformative. We've seen
43:52
people who can't afford it; this in some ways will just help them most. Yeah,
43:55
exactly. It's
43:56
a hundred percent that. And the work we're seeing there from a bunch of early companies on the platform, I think is
44:01
44:01
remarkable. So the last thing is regulation, because one of the things that's happened is the internet was never regulated by anybody, really, except maybe in Europe, but in this country, absolutely not. There's not a privacy bill, there's not an antitrust bill, et cetera; it goes on and on. They did nothing. But the EU is considering labeling ChatGPT high-risk. If it happens, it will lead to significant restrictions on its use, and Microsoft
44:22
and Google are lobbying against it. What do you think should happen,
44:25
with AI regulation in general, or with this
44:27
one, the high-risk designation?
44:30
I have followed the development of the EU's AI Act, but it has changed; it's, you know, obviously still in development. I don't know enough about the current version of it to say, if I think this way, like, this definition of what high-risk is, and this way of classifying it, and this is what you have to do, I don't know if I would say that's, like, good or bad.
44:50
I think, like, totally banning this stuff is not the right answer, and I think that not
44:54
They regulated TikTok. But go ahead.
44:56
And I think not regulating this stuff at all is not the right answer either. And so the question is, is that going to end in the right balance? Like, I think the EU saying, you know, no one in Europe gets to use ChatGPT is probably not what I would do. But the EU saying, here are the restrictions on ChatGPT and any service like it, there are plenty of versions of that I could imagine that are super sensible.
45:17
All right. So, after the Silicon Valley Bank non-bailout bailout,
45:20
you tweeted we need more regulation of banks, but what sort of regulation, I don't know. And then someone tweeted at you, now he's going to say we need a money AI, and you said, we need a money AI. But,
45:29
I mean, I do think that SVB was an unusually bad case, but also, if the regulators aren't catching that, what are they doing?
45:39
They did catch it, actually. They were giving warnings.
45:42
They were giving warnings, but, like, there's often an audit, you know, this thing is not quite right. That's
45:46
different from saying this is pretty significant, you need to do something. They just didn't do anything. Well, they
45:50
could have.
45:50
I mean, the regulators could have taken it over, like, yes,
45:53
months ago. So this is what happens a lot of the time, even in well-regulated areas, which banks are, compared to the internet. What sort of regulation does AI need in America? Lay it out. I know you've been meeting with regulators and lawmakers. I haven't done that many. Well, they call me when you do; they want to say they've seen you. You're like the guy now, so they like to say, I was with Sam Altman.
46:12
I did one.
46:13
I think it's nice, I'm going to tell you.
46:16
I did like a three-day trip to D.C. Yeah. Earlier this
46:18
year. So tell me what you think the
46:20
regulations should be. What are you telling them? And do you find them savvy as a group? I think they're savvier than people think. Some of
46:26
them are quite exceptional. Yeah, I think the thing that I would like to see happen immediately is just much more insight into what companies like ours are doing. You know, companies that are training above a certain level of capability, at a minimum. Like, a thing that I think could happen now is the government should just have insight into the capabilities of our latest stuff, released
46:50
or not; what our internal audit procedures and the external audits we use look like; how we collect our data; how we're red-teaming these systems; what we expect to happen, which we may be totally wrong about, we could get it wrong all the time; but, like, our internal road map documents when we start a big training run. I think there could be government insight into that. And then, if that can start now... I do think good regulation takes a long time to develop; it's a real process. They can figure out how they want to have oversight.
47:19
Reid Hoffman suggested
47:20
just a blue-ribbon panel, so they learn up on this stuff. Well, I mean, panels are
47:24
fine. We could do that too. But what I mean is, like, government auditors sitting in our
47:29
buildings. Congressman Ted Lieu said there needs to be an agency dedicated specifically to regulating AI. Is that a good idea?
47:37
I think there are two things you want to do. This is way out of my area of expertise, but you're asking, so I'll try. I think people like us, that are creating these very powerful systems that could become something properly
47:50
called AGI at some
47:51
point. Explain what that
47:53
is. Artificial general intelligence. But what people mean is just, like, above some threshold where it's really smart, right. Those efforts probably do need a new regulatory effort, and I think it needs to be a global regulatory body. And then people that are using AI, like we talked about with the medical adviser: I think the FDA can probably do very great medical regulation, but they'll have to update it for the inclusion of AI. But I would say, like, the
48:20
creation of the systems, and having something like an IAEA that regulates that, is one thing, and then having existing industry regulators still do their
48:31
regulations. So, people do react badly to that, because the proliferation of bureaus, that's always been a real problem in Washington. But not everyone is... who should head that agency in the US? I don't know. Okay, all right. So, one of the things that's going to happen, though, is the less intelligent ones, of which there are many, are going to
48:50
seize on things, like they've done with TikTok, possibly deservedly, but other things too. Snap released a chatbot powered by GPT that reportedly told a fifteen-year-old how to mask the smell of weed and alcohol, and a thirteen-year-old how to set the mood for sex with an adult. They're going to seize on this stuff, and the question is, who's liable, if this is true, when a teen uses those instructions? And Section 230 doesn't seem to cover generative AI. Is that a problem?
49:15
I think we will need a new law for use of this stuff, and I think the liability will
49:21
need to have a few different frameworks. If someone's tweaking the models themselves, I think it's going to have to be that the last person that touches it has the liability. And
49:29
that there be liability; it's not the full immunity that the
49:33
platforms got. I don't think we should have full immunity. Now, that said, I understand why you want limits on it, why you do want companies to be able to experiment with this, why you want users to be able to get the experience they want. But the idea of, like, no one having any limits for generative AI, for AI in general, that feels
49:49
super wrong. Last thing:
49:50
Trying to quantify the impact you personally will have on society, as one of the leading developers of this technology. Do you think about that? Do you think about your impact? Do you
50:00
mean me, OpenAI, or me, Sam? I mean, hopefully I'll have a positive impact. Like,
50:07
do you think about the impact on humanity, the level of power that also comes with
50:12
it?
50:14
Yeah. I think about what OpenAI is going to do a lot, and the impact OpenAI will
50:18
have. You think it's out of your
50:21
hands? No, no. It is very much, like, the responsibility is with me at some level, but it's very much a team
50:28
effort. And so, when you think about the impact, what is your greatest hope, and what is your greatest worry?
50:36
My greatest hope is that we, we create this thing, we are one of many people that are going to contribute to this movement, we'll create an AI, other people will create an AI, and that we will be a participant in this technological revolution that I believe will be far greater, in terms of impact and benefit, than any before. My view of the world is that it's this one big, long technological revolution, not a bunch of smaller ones. But we'll play our part; we will be one
51:06
of several in this moment. And this is going to be really wonderful. This is going to elevate humanity in ways we still can't fully envision. And our children, our children's children, are going to be far better off than, you know, the best of anyone from this time. And we're just going to be in a radically improved world: we will live healthier, more interesting, more fulfilling lives; we'll have material abundance for people; and, you know, we will be
51:35
a contributor, and, you know, we'll put in our part of that. You do
51:40
sound alarmingly like the people I met 25 years ago, I have to say. I don't know how old you are, but you were young then; you were probably very
51:48
young. 37, yeah.
51:50
So you were young. And they did talk like this; many of them did, and some of them continue to be that way. A lot of them didn't, unfortunately, and then the greed seeped in, the money seeped in, the power seeped in, and it got a little more complex. I would say not totally, and again,
52:05
net, it's better. But I want to focus on you for my last question. There seem to be two caricatures of you. One that I've seen in the press is a boyish genius who helped defeat Google and usher in utopia. The other is that you're an irresponsible, woke tech overlord, an Icarus who will lead us to our demise.
52:21
Do I have to pick one?
52:23
No, you don't. How
52:24
old do I have to be before I can, like, drop the boyish
52:27
qualifier? Oh, you can be boyish. Tom Hanks is still boyish.
52:30
Yeah. And what was the second one?
52:32
You know, Icarus, overlord, tech overlord, woke something.
52:35
Yeah, yeah.
52:36
Whatever. The Icarus part is the one I like. You're boys, aren't you?
52:40
I think we feel like adults.
52:41
No, you may be adults, but boys always gets put on you. I don't ever call you boys. I think you're
52:46
adults. Icarus, meaning, like, we are messing around with something that we don't fully understand yet. You are messing around with something you don't fully understand. Yeah, and we are trying to do our part in contributing to the responsible path through it. All right. But I don't think either of
53:01
those characterizes you either. I mean, describe yourself, then. Describe what you
53:04
are.
53:08
Technology brother. Oh wow, you're going to go for Zuck? You know, I just think that's such a funny meme. I don't know how to describe myself. I think that's what you would call me. No, I wouldn't. No, 100 percent not,
53:18
because it's an insult now. I'll call you technology sister.
53:23
I'll take that. And we'll leave it on that.
53:25
We'll leave it on that. All right, I do have one more quick question. Last time we talked, you were thinking of running for governor; I was thinking of running for mayor. I'm not going to be running for mayor. You can still run for governor. No,
53:34
no, I
53:35
think I am doing, like, the most amazing thing I can imagine. I really don't want to do anything else. It's tiring, but I love
53:41
it. Yeah. Okay, Sam Altman, thank you so much. Thank you.
53:51
You said he sounded a lot like a lot of founders a generation before him. Yes. What are the lessons you would impart to Sam, as someone who has so much impact on humanity? You know, I think what I said is that they were hopeful, and they had great ideas. And one of the things that I think people get wrong is that to be a tech critic means you love tech. Like, you know, you really love it, you do, of course, and you don't want it to fail. You want it to create betterment for humanity, and if that's your goal, when you see it being warped and misused, it's
54:21
really sad and disappointing. And I think one of the things is, early internet people had all these amazing ideas: the world talking to each other, we'll get along with Russia, we'll be able to communicate over vast distances. And again, just like I talked about with Reid Hoffman, it's a Star Trek vision of the universe, and that's what it was. And boy, the money and the power and the bad people that came in really significantly shifted it. Not completely, by any means; I love my Netflix, you know, I just do.
54:51
But the unintended or intended consequences, ultimately, are very hard to bear, even if it's a net positive. So it's just the money and the power that's corrupting, is what you're saying? It's inevitable? No, not inevitable, but often. Often, yeah. Not everybody, not a lot of people. But let's see who's standing the test of time, right. You're saying that about Reid Hoffman and Max Levchin versus, say, Peter Thiel and Elon Musk. Well, I think Peter was always like that, you know. I don't think he's changed one bit, and so,
55:21
not in my estimation. He's been very consistent in how he looks at the world, which is not a particularly positive light. I think that a lot of them do stay the same, and they do stay true to what they're like, and I don't know why that is for certain people, while others get sucked into it in a way that's really... I'm thinking about this a lot, because my book is about, yeah, of course, how people change and why, and whether that's a good thing or a bad thing. Because, you know, one of the things about tech is it's
55:51
constantly changing. One of the poems I'm using in the book is a poem by Maggie Smith called 'Good Bones,' and I'll just read you the last part: 'Life is short and the world is at least half terrible, and for every kind stranger, there is one who would break you, though I keep this from my children. I am trying to sell them the world. Any decent realtor, walking you through a real shithole, chirps on about good bones: This place could be beautiful, right? You could make this place beautiful.' And that's how I feel about this. They could make this place beautiful. And I think
56:21
Things that to, yeah, it's not just a lie. You tell your children right? Well, know this but it is, you can't tell that terrible things all the time. They would be like just lying on the ground. Yeah. But sometimes it's so idealistic. Like when he said Global regulatory body to regulate a I'm like, oh man, we're fucked. That's never gonna happen. Like, what was the last good? Global regulatory. Nobody would work. It could work. There has to be, this has to be Global. This has to be Global but how there's no infrastructure to set up a sustainable? I guess, Nobles in medicine. There is
56:51
Is what you think the World Health Organization has been effective. And I think there's stuff around, cloning around all kinds of stuff, it's never going to be perfect. But boy, there's a lot of people that that Hugh to those ethics. I mean, I think it depends how bought in state governments are including China but the regulation thing is particularly tricky because it can also become a moat, right? It's writer incumbents like Facebook's like regulate them. It's like, well, you can afford the regulation in a way that new competitors. Maybe can't, I think the government's can play a lot of roles here. They do it in nuclear non-proliferation. It's never
57:21
perfect, but we still haven't set one off. Though I think that's largely the deterrent power, and not because of any effective regulation. I'm a great believer in nuclear non-proliferation, and so... I think there are lots of examples of it working. And I think the most significant thing that he said here was about the government's role, the US government's role. It shouldn't give this all over to the private sector. It should have been the one to give them money and to fund them, and that is 100% right. We've talked to Mariana Mazzucato about that. Yeah. And many
57:51
other people. That, to me, is the big shame: the government abrogating its role in really important things, things that are important globally and important for the US. But even when the government has played that kind of, let's call it, kindling role for industry, whether it be the Elon Musk loan for Tesla or what DARPA was doing that became, you know, parts of Siri and Echo and whatnot, the government here is bad at retaining a windfall from that, one that would be reinvested for taxpayers. It used to just do it because it was the right thing to do.
58:21
We had research and investment by the government. You know, the highway system seems to have worked out pretty well, the telephone system seems good. I mean, we always tend to talk about what they do wrong, but there's so much stuff the government contributed to that matters today. It used to be cultural, too; people would want to go into government and civil service. My father was in that generation. And I think it's interesting to hear Sam say no, he won't run for governor. And yet you think sometimes, well, it would be so great if some of these bright minds, you know,
58:51
went into public service. He's more effective where he is. Why would he do that when he's more effective where he is? Arguably, the right regulator for this is a person who could have built it. Yeah, or conceived of building it, maybe. Did you find his answers to the moderation questions, and to this idea of hallucination and AI being overly impressive at first glance... did you find those satisfying? Yeah. I think one of the things I like about Sam is, if he doesn't have an answer, I don't think he's hiding it; I don't think he knows. And I think one of the strengths of certain entrepreneurs is saying, I don't really
59:21
know. And with a lot around AI right now, anyone that's going to give you certainty is lying to you. Well, they had experimented with using these low-wage workers in Africa, through Sama, an outsourcer. Well, it's not... I think what was exposed was that they were paying them less than two dollars an hour and training them to build, per what was reported, a content moderation AI layer, which is ironic when you think about it. So there were workers in Africa being paid less than two dollars an hour to train machines to replace them for that job. Well, have you been to an Amazon warehouse
59:51
lately? There's a lot of machines doing everything. That's the way it's going. That's like you're telling me something that happens in every other industry. Yeah, I know. And yet we're going to grow smarter due to AI, do you think that's true? I do. Everyone's going to be smarter, I do. I think we do a lot of rote, idiotic work that we shouldn't be doing, and we have to be more creative about what the greatest use of our time is. My great hope for AI is actually that it takes out the rote work, and all of a sudden creative industries flourish, because those are the parts that can't be replicated. Though I think, you know, the sad reality of technology in the last generation has been
1:00:21
that kids maybe don't read as well, or as much, or as fast, or as early as they used to, but they make video, right? What if they're spoken to smarter? Like, the idea of education on these things, or information, or healthcare, in an easy way... these phones are just getting started, and they will not just be phones; they will be wrapped around us. The more good information you get and the more communication you get, that's a good thing. They might just be getting started, but we are ending. Do you want to read us our credits today? Yes. Remember, you can make this place
1:00:51
beautiful. Or ugly. No, it's got good bones, got good bones. Today's show was produced by Nayeema Raza, Blakeney Schick, Cristian Castro Rossel, and Rafaela Siewert. Special thanks to Haley Milliken. Our engineers are Fernando Arruda and Rick Kwan. Our theme music is by Trackademicks. If you're already following the show, you get the red pill; if not, Rick Deckard is coming after you. Go wherever you listen to podcasts, search for On with Kara Swisher, and hit follow.
1:01:21
Whoa. Thanks for listening to On with Kara Swisher, from New York Magazine, the Vox Media Podcast Network, and us. We'll be back on Friday, that's tomorrow, with a special bonus episode.