a16z
GPT-3: What's Hype, What's Real on the Latest in AI

Sonal Chokshi, Frank Chen · 17 Clips · Jul 30, 2020
Episode Transcript
0:00
Hi everyone, welcome to this week's episode of 16 Minutes. I'm Sonal, your host, and this is our show where we talk about the headlines, what's in the news, and where we are on the long arc of tech trends. We're back from our holiday break, and so this week we're covering all the recent and ongoing buzz around the topic of GPT-3, the natural-language-processing-based text predictor from the San Francisco research and development company OpenAI. They actually released their paper on GPT-3 in late May, but only released their broader
0:30
commercial API a couple of weeks ago, so we're seeing a lot of excitement and activity around that in particular, although it's all being called GPT-3. So we're going to do one of our explainer episodes, a 2x explainer episode, going into what it really is, how it works, why it matters, and broader implications and questions, while teasing apart what's hype and what's real, as is the premise of the show. But before I introduce our expert, let me just quickly summarize some of the highlights. While GPT-3 is technically a text predictor, that actually undersells what's possible, because of course
1:00
words and software are simply the encoding of human thought, to borrow a phrase from Chris Dixon, which means a lot more things are possible. So we're seeing (and note, these are all cherry-picked examples) believable forum posts, comments, press releases, poetry, screenplays, articles. Someone even wrote an entire article headlined "OpenAI's GPT-3 may be the biggest thing since Bitcoin," and then revealed midway that he didn't actually write the article, but that GPT-3 did. We're also seeing strategy documents, like for business
1:30
CEOs, and advice written entirely in GPT-3. And not just words: we're seeing people use words to write code, for designing websites and other designs; someone even built a Figma plugin. Again, all of it showing the transmutability of thoughts to words to code to design and so on. And then someone made a search engine that can return answers and URLs in response to, quote, "ask me anything," which, as anyone who's been in the NLP space knows, I was at PARC when we spun off
2:00
Powerset back in the day, has always been sort of a holy grail of question answering, which you know all about too, having worked in this world, Frank. And now let me introduce our expert in this episode: Frank Chen has written a lot about AI, including a primer on AI, deep learning, and machine learning; a pulse check on AI, what's working and what's not; a microsite with resources for how to get started practically and do something with your own product and your own company; and then reflections on jobs and humanity and AI working together. You can find all of that on
2:29
our website. Frank, to start things off, what's your favorite example of GPT-3 so far? Mine is founding principles for a religion, written in GPT-3. I'd love to hear your favorite, and also your quick take on why the excitement, to start us off before we dig in a bit deeper. My
2:45
favorite out of the whole thing is that it's doing arithmetic. So if you ask it, what's 23 plus 67, just arbitrary two-digit arithmetic, it's doing it. This is a natural language processing
3:00
model, after all. And so basically it got trained by feeding it lots and lots of text, and out of that it's figuring out, we think, how to do arithmetic, which is very, very surprising, because you don't think that exists in text. The excitement, potentially, is promising signs of progress toward general artificial intelligence. So today, if you want to do very highly accurate natural language processing, you build a bespoke
3:29
model: you have your own custom architecture, you feed it a ton of data. What GPT-3 shows is that they trained this model once, and then they threw it at a whole bunch of natural language processing tasks, like fill-in-the-blank or inference or translation, and without retraining it at all, they're getting really good results compared to finely tuned
3:55
models. Before we even go into teasing apart what's hype and what's real, let's
4:00
talk about the "it." What is GPT-
4:01
3? So we have two things. Thing one, we have a machine learning model. GPT is actually an acronym; it stands for generative pre-trained transformer, and we'll go through all of those in a sec. So thing one is we have a pre-trained machine learning model that's optimized to do a wide variety of natural language processing tasks, like reading a Wikipedia article and answering questions from it, or guessing what the ending of a story should
4:29
be, and so on and so on. So we have a machine learning model. The thing that people are playing with is an API that allows developers to essentially ask questions of that model. So instead of giving you the model and you program it to do what you want, they're giving you selective access via the API. One of the reasons they're doing this is that most people don't have the compute infrastructure to even train the model. There have been estimates that
5:00
if you wanted to train the model from scratch, it would cost something like 5 to 10 million dollars of cloud compute time. That's a big, big model. And so they don't give out the model. And then, to the controversy around this thing when they released the first version: they were worried that if they just gave the raw model out, people would do nefarious things with it, like generate fake news articles that would just saturation-bomb the web. So they're like, look, we want to be responsible with this thing, and so we'll gate access
5:29
via the API. So then we know exactly who's using it, and the API can be a bit of a throttle on what it can and can't
5:36
do, right, while also helping them learn. And just as a reminder, APIs are application programming interfaces, as we've talked about a lot on the podcast; if people want to learn more, you can go to a16z.com/api to read all our resources and explainers. There's so much we have on this whole topic, but the key underlying idea, and this goes to your point about the cost of what it would take if you were trying to build this from scratch, is that APIs give developers and other
5:59
businesses superpowers, because they lower the barrier to entry, in this case for anyone being able to use AI who doesn't necessarily have a whole in-house research team, et cetera. So that's one of the really neat things about the API. But I do want to correct one misconception for the folks out there who aren't aware of it when it comes to GPT-3: what they're describing is GPT-3, but they're actually playing with OpenAI's API, which is not just GPT-3. Obviously some of the technical achievements of GPT-3 are in the API, of course,
6:29
but it's a combination of other things; it's a set of technologies that they've released, and it's their first commercial product, in fact. So that's just to give people a little context on what the "it" is and isn't. Let's go ahead and go a level deeper into explaining what it is. In their paper, they describe it simply as an autoregressive language model. Can you share what it is, and kind of the category this fits in?
6:52
Yeah. So the broad category of things it fits into: it is a neural network, or a deep neural network, and "architecture" is basically talking
6:59
about the shape of those networks. At the highest level, visualize it as: something comes in on the left, and then I want something to shoot out on the right side, and in between is a bunch of nodes that are connected to each other. The way in which those nodes are connected to each other, and then the connection weights: that's essentially the neural network. GPT-3 is one of those things. Technically it's called a transformer architecture. This is an architecture for neural networks that Google introduced a few years ago, and it's different from a
7:29
convolutional neural network, which is great for images; it's different from a recurrent neural network, which is good for simple language processing. The way the nodes are connected to each other results in it being able to do computations on large sentences filled with different words, and doing it concurrently instead of sequentially. So RNNs, which were the former state of the art in natural language processing, are very sequential; they'll kind of go through a
7:59
sentence a word at a time. Recurrent, right? Exactly. These transformer networks can basically consider the entire sentence in context while doing their computations. One of the things that you classically have to do with natural language processing is disambiguate words. "I went to the bank": that could mean I went to go withdraw some money, or it could mean I went right up to the edge of the river. Because we have ambiguity in these words, the natural language processing system needs to figure out, well, which sense of "bank" did you mean?
8:29
You need to know all the other words around that sentence in order to disambiguate it, and so these transformers consider large chunks of text all at once, instead of sequentially, in trying to make that decision. So that's what the transformer architecture does. And then what OpenAI has been doing is basically training this type of neural network, with the transformer architecture, on larger and larger datasets. Conceptually, think of it as: you have it read Wikipedia.
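As a toy illustration of that all-at-once computation, here is a minimal self-attention sketch in plain Python. Real transformers add learned query/key/value projections and many stacked layers, so treat this as a cartoon of the idea, not GPT-3's actual architecture:

```python
import math

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def self_attention(tokens):
    """tokens: list of word vectors for one sentence.
    Every position is scored against every other position at once,
    so each output vector mixes context from the whole sentence;
    an RNN, by contrast, walks through the words sequentially."""
    d = len(tokens[0])
    output = []
    for query in tokens:
        # similarity of this word to every word in the sentence
        scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
                  for key in tokens]
        weights = softmax(scores)  # attention over the full context
        mixed = [sum(w * vec[i] for w, vec in zip(weights, tokens))
                 for i in range(d)]
        output.append(mixed)
    return output

# toy three-word "sentence": each output vector now blends in its neighbors,
# which is how an ambiguous word like "bank" picks up its context
vectors = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
contextualized = self_attention(vectors)
```

Because every position attends to every other position in one pass, the whole sentence can be processed concurrently, which is the property Frank contrasts with the word-at-a-time RNN.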
9:00
And think of that as generation one. Generation two is: I'm going to have it read Wikipedia and all of the open-source textbooks that I can find. This generation, they trained on what's called Common Crawl. It's kind of the same thing that Google uses to search and index the internet; there's an open-source version of that. Think of it as: robots go on to every webpage, they gather the text, and now we're using that as the training set for GPT-
9:27
3. Yeah, something like half a trillion words, I believe.
9:30
Yeah, it's a crazy number of words. And then this thing has two orders of magnitude more than the previous attempts: something like a hundred seventy-five billion parameters, which, for the purposes of this conversation, is a way of measuring the complexity of a neural
9:44
network. Right, GPT-2 had 1.5
9:47
billion, and in between GPT-2 and GPT-3, Microsoft did one that was 17 billion. Right, so there is a bit of an arms race going on here, which is: how big are your neural
9:57
networks? What does it mean? Because the paper is
10:00
called "Language Models are Few-Shot Learners," and I remember this movement in one-shot learning, where you can learn from very few examples. But honestly, what you just described to me sounded like almost a trillion examples, when you think about what it's ingesting as input. So can you actually explain what few-shot even means in this
10:17
context? Yeah. So first they trained this model on the internet. Basically, what came in as input on the left side was reams and reams and reams of text, all the text they
10:30
could get their hands on, and they cleaned it a little. So this is very traditional deep learning; it is not itself a zero-shot or a few-shot approach. It's deep-learned, which means incredible amounts of input text. What they mean in the context of this paper around zero-shot and few-shot is that the model can perform a variety of natural language processing tasks. A good example of it is analogies: king is to queen as water is to what? In the context
11:00
of this system, what you can do is give it an example of that, and they call that one-shot, which is: I'm going to give you an example of an analogy that's completely filled out, and then I want you to fill out more analogies. Another task would be: pick the right ending of a story, and I will give you one example with the correct answer. So I'm just going to give it to you once. Now, typically what happens when you do traditional neural network learning is: you take an example, you give it to the system, and you tell the system the right answer. The
11:29
system uses that right answer to basically readjust the neural net; it's called backpropagation. And the theory is that as it adjusts the weights inside the neural network, it will get that answer more correct the next time it sees it. So everything up to this point has basically been: if you give me enough examples, I'm going to be able to tell whether that picture has a hot dog in it or not. I will be able to generalize the features of a hot dog, and I will basically deduce hot-dog-ness, if you just give me
11:59
enough pictures and you tell me hot dog or not. What's going on here is: they train this model once, and then they give it one example, and that example doesn't adjust the weights of the model. It really just primes the system, basically prepares it to answer this type of question. So you basically tell it, look, I want you to work on fill-in-the-blank, and I'm going to give you one or a few examples (few-shot) of this, and then we'll go from there. But those examples that you give it don't adjust the weights of
12:30
the model. It's one model to rule them all. And this is kind of how humans learn: they don't need to see a thousand, ten thousand, a hundred thousand examples of hot dogs before they can start reliably telling whether those are hot dogs or not.
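The distinction Frank draws, examples that prime the model versus examples that update weights, can be sketched concretely. The `few_shot_prompt` helper below is our own illustrative function, not part of OpenAI's API; the point is simply that the worked examples live inside the prompt text itself, with no gradient update anywhere:

```python
def few_shot_prompt(task_description, examples, query):
    """Build a few-shot prompt: worked examples go into the text itself.
    Nothing here touches model weights; the examples only prime the model."""
    lines = [task_description]
    for prompt_text, completion in examples:
        lines.append(f"{prompt_text} -> {completion}")
    # leave the final completion blank for the model to fill in
    lines.append(f"{query} ->")
    return "\n".join(lines)

prompt = few_shot_prompt(
    "Complete the analogy.",
    [("king is to queen as man is to", "woman"),
     ("big is to bigger as small is to", "smaller")],
    "water is to ice as lava is to",
)
# The resulting string is all the "programming" there is; it would be sent
# to a completion API as-is, with no retraining step.
```

Contrast this with the backpropagation loop described above, where each labeled example changes the weights: here the same frozen model serves every task, and only the prompt changes.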
12:44
It's like how children learn
12:45
language. Yeah, exactly. Babies, before they can say "cat" and "dog," can recognize the difference between cats and dogs. They didn't see a million of them, right? In fact, they can't say the words "dog" and "cat" yet. And so maybe something like this is going on in the brain,
12:59
which is: you have this sort of general processor, and then it instantly knows how to adapt itself to solve a lot of different problems, including problems it had never seen before. And so I'm going to go back to my favorite example of what GPT-3 was used for: how in the world did it deduce the rules for two-digit arithmetic by reading a lot of stuff? And so maybe this is the beginnings of a general intelligence that can rapidly adapt itself.
13:30
Look, I don't want to get ahead of myself. It falls apart on four-digit arithmetic, and so it's not generally smart yet. But the fact that it got all of the two-digit addition and subtraction problems right by reading text, that's crazy to me.
13:44
The general takeaway is that it does some complicated things really well and some really easy things really badly, and this is actually true of most AI. The researchers have a huge section on limitations, where, quote, "GPT-3 samples can lose coherence over sufficiently long passages, contradict
13:59
themselves, and occasionally contain non-sequitur sentences or paragraphs." Now, of course, as an editor, that made me laugh, because that's also true of human writing. So I was like, okay, this is also true of the writing I've seen and edited, so I don't know who's talking here. So help me tease apart where we really are in this long arc; I'm having a hard time knowing what's real and what's not. Help me kind of understand: what is this thing, really, at this moment in time?
14:22
So we have the most sophisticated natural language processing pre-trained model of its kind. The
14:29
natural language processing community has basically divided the problem of understanding language into dozens and dozens of subtasks, and task after task after task, GPT-3 goes up against the state of the art, the best performing system. Basically what the paper does is lay out: okay, here's where GPT-3 is approaching state of the art; here's where it's far away from state of the art. And that's basically all we know: compared to the state-of-the-art
15:00
techniques for solving that particular natural language processing task, how does it perform? We're really in the research domain, right? So if you were to ask me, can I build a startup on it? Can I build the world's best chatbot on it? Can I build the world's best customer support agent? That's
15:16
where I was going to ask you that. Yeah. I
15:18
think it's really too early to tell whether you can build any of those things. The hope is that you could, and long-term, really, the hope is that having built a model like this and exposed an API, you could take any
15:29
Silicon Valley startup that wants to solve a text problem (chatbots, or pre-sales support, or post-sales customer support, or building a mental health app that talks to you), and all of those things will get dramatically cheaper and faster and easier to build on top of this infrastructure. If this works, you have this generally smart system that's already been trained; then you show it a couple of examples of the problems that you want to solve, and it will just solve them with very high accuracy.
15:59
All you have to do as a startup or a programmer is to say, hey, look, I'm going to give you a couple of examples of the type of problem that I want solved, and then that priming is going to be enough for the system to get very accurate results, and in fact, sometimes better results than if you had built the model and fed it the dataset yourself. So that's the hope, but we just don't know
16:20
yet. That's a really good reminder, because they themselves are like: this is early days, it's research, there's a lot of work to be done. But it's also really exciting, as you're saying, because this is one of the most
16:29
advanced natural language models we've seen. So the question I have on the startup and building side: what would it take, what are the kinds of considerations, to make it more practical and scalable? For one thing, the size: you described how the transformer has this ability to sort of comprehend so much at once, without doing it in kind of this RNN model, but a trade-off of that is the speed, or being able to fit it on a GPU. So I'd love a quick take from you on what are the things that need to happen to make something like this
16:58
more usable, et cetera. I think what's going to need to happen is that the OpenAI
17:02
product team is going to have conversations with dozens and dozens of startups that are using their technology, and then they successively refine the API and improve the performance and set up the security rules and all of that, so that it becomes something as easy to use as a Stripe or Twilio. Stripe or Twilio were very straightforward: send a text message, or process a payment. This is a lot more amorphous, which is: hey, I can do SAT
17:28
analogies. How's that relevant for my startup? Well, there's a bit of a gap there, right? You have a startup that's like, hey, I need my documents summarized, or I need you to go through all of the complaints we've ever gotten and give me product insight for product managers. And so there's basically a divide there that they'll need to close over
17:45
time. Right. So what does this mean for the data world? Because one really interesting thing to me is, on one hand, APIs give you superpowers, kind of democratizing things; on the other hand, it kind of makes things a bit of a race to the bottom, because then you have to differentiate on kind of private, proprietary
17:58
data, these other elements. So do you have thoughts on what that means?
18:02
Yeah. I mean, the hope for something like a GPT-3 is that it's going to dramatically reduce the data gathering and cleaning process, and frankly, building the data model as well, your machine learning model. So let me try to put it in economic terms. Let's say we put 10 million dollars into a Series A company, and then five million dollars of it goes to getting data and cleaning it, and hiring your machine learning people, and then renting a
18:28
ton of GPUs in Amazon or Google or Microsoft, wherever you do your compute. The hope is that if you could stand on the shoulders of something like GPT-3, and it'll be a future version of it, you would reduce those costs from five million dollars to a hundred thousand dollars. You're basically making API calls, and the way you quote-unquote program this thing is you just show it a bunch of examples that are relevant to the problem that you're trying to solve. So you show it texts where you had a suicide risk, and you don't need to show it a
18:58
bunch, because it's pre-trained, and you show it a new text that it hasn't seen before, and you ask it: what is the risk of suicide in this text exchange? The hope is that we can dramatically reduce the costs of gathering that data and building the machine learning models, but it's really too early to tell whether that's going to be practical or not.
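A hedged sketch of that workflow: the prompt format, the high-risk/low-risk labels, and the `parse_label` helper below are all illustrative assumptions, not OpenAI's actual interface, and a real deployment would send the prompt to the hosted model rather than use the canned reply shown here:

```python
def classification_prompt(labeled_examples, new_text):
    """Prime a pretrained model for classification purely via examples.
    No dataset collection or model training: the labeled examples ARE
    the program, exactly the cost reduction being described."""
    lines = ["Label each message as high-risk or low-risk."]
    for text, label in labeled_examples:
        lines.append(f"Message: {text}\nLabel: {label}")
    lines.append(f"Message: {new_text}\nLabel:")
    return "\n".join(lines)

def parse_label(completion):
    # The model (hypothetically) continues the pattern; take the first word.
    return completion.strip().split()[0].lower()

prompt = classification_prompt(
    [("I'm excited about tomorrow", "low-risk"),
     ("I can't see a way forward anymore", "high-risk")],
    "Nothing feels worth it lately",
)
# In production, `prompt` would go to the API; here we just show that a
# canned continuation parses cleanly into a label.
label = parse_label(" high-risk\nMessage: ...")
```

The design point is that the few labeled examples replace the expensive pipeline of gathering a large dataset and training a bespoke model, which is exactly the five-million-to-a-hundred-thousand-dollar hope described above.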
19:17
So we know what it means for startups, but how do the incumbents respond? In that kind of a world, it seems almost inevitable that the big players, there might be an AWS potentially, right, could, you know, make this a given in their
19:28
services, this kind of bigger question around the business model of AI as a service. Yeah. So the
19:35
first thing I'll say is that this is OpenAI's first commercial product, which is interesting, right? Recall that OpenAI started as a research institution. So we'll sort of see what the pricing is. If this works, the scenario that I described earlier, which is dramatically reducing the time it takes to build machine learning inside a product, then all of the public cloud providers and
19:58
other startups will offer competing products, because they don't want to let OpenAI just take all of the sort of text-understanding ability of the internet, right? Google Cloud and Microsoft and Amazon and Baidu and Tencent, they're all going to say, hey, look, I can do that too; build your application on me. Now, I will say that because of the large costs of training the model, and I'd mentioned estimates ranging from five to ten million dollars to train this thing once, and obviously they didn't just
20:28
train it once to get to where they were, they trained it multiple times as they did the research process. And so this is not going to be for the faint of heart. It's going to come on the back of a lot of money, with very skilled scientists, using enormous infrastructure. But to the extent that this product works, then you're going to have very healthy competition among all of the incumbents. You might even have new players who figure out a different angle
20:53
on it. You know, it's really fascinating watching the people who have access, and basically the recurring
20:58
theme is that it's not plug-and-play; it's obviously not built and ready for that yet. The prompt and the sampling hyperparameters matter a lot; priming is an art, not a science. So I'm curious where you think the knowledge value is going to go in the future. What is the data scientist of the future going to look like, for people who have to work with something like this? Now, granted, the models are going to evolve, the API will evolve, the product will evolve, but what are the skills that people need to have in order to really do well in this world coming ahead?
21:27
It's really too early to tell,
21:28
but it is a fundamentally different art of programming, right? So if you think about programming to date, it's basically: I learn Python, and I learn to be efficient with memory, and I learn to write clever algorithms that can sort things fast. That's a well-understood art; there are thousands of classes, millions of people know how to do that. If this approach works, basically there is this massive pre-trained natural language model, and the programming technique is: I show you a couple of examples of the tasks that I want you to
21:58
perform. It'll be about: what examples do I show you, and in what form? Do I show you the outliers, or do I show you some normal ones? Right? And so if this approach works, it'll all be about how you prime the model to get the best accuracy for the real-world problems you actually want your product to solve. Programming becomes: what examples do I show you? As opposed to: how do I allocate memory and write efficient search
22:26
algorithms. It's a very different thing. Vitalik
22:28
Buterin, the inventor of Ethereum, described this when he was observing some of the buzz around GPT-3, quote: "I can easily see many jobs in the next 10 to 20 years changing their workflow to: human describes, AI builds, human debugs." There's a lot of speculation about how this might affect jobs; it could displace customer support, sales support, data scientists, legal assistants, and other jobs like that are at risk. But do you have thoughts on the labor and jobs side of this, like
22:58
just sort of the broader questions and concerns here?
23:01
The way I think about this is generally informed a lot by Erik Brynjolfsson and other people. So if you think about a job as a set of tasks: some tasks will get automated, and then some tasks will be stubbornly hard to automate, and then there will be new tasks. And so think of jobs as sort of an ever-changing bundle of tasks, some of which are performed by humans today, some of which will get automated, and then there are new tasks. And so what Vitalik
23:29
is saying is, if this AI stuff works, being able to prime the AI system with the right examples, and then being able to debug it at the end: those are two new tasks. No human on the planet gets paid to do that outside of AI researchers today, but that could be mainstream knowledge work in 10 years, which is: you pick good examples, and then you debug it at the end. So you have these brand-new tasks that are generating economic value, and people get paid for them, that didn't exist before.
23:55
I find it very fascinating what you said, by the way, because what it also means to me is that it
23:58
becomes more inclusive: more people can enter worlds that may have been previously closed off to a certain class or type of programmer, or people who have certain technical skills. Because, let's say you're very good at describing things (it is more of an art than a science), and you're very good at sort of fiddling with and hacking at things; you might be better at tuning something than someone who went through years and years of elite PhD education. I
24:23
think the machine learning algorithms will invite in more people who would otherwise be
24:28
pursuing careers they wouldn't have naturally risen to the top of. So I think you're
24:33
right. What do you make of the concern, the concerns that GPT-3's answers, the ones it gave, the ones it predicted, were rife with racism or stereotypes? What do you make of the data issues around that?
24:45
Okay: we're going to feed it every piece of text on the internet, and then we're going to ask it to make generalizations. What could possibly go wrong? A lot could possibly go wrong. The heart of this system is basically: I'm trying to guess the next
24:58
word, and the way I make my guess is I go look at all the documents that have ever been written, and I ask what words are most likely to have occurred in those documents. Right, you're going to end up with culturally offensive stereotypes. And so we need to figure out: how do we put up the safety rails? How do we architect the APIs? I'm glad the OpenAI researchers and the community around them are being very careful about this, because they obviously have to be. How do we basically teach it the social norms we want it to emit, as opposed to the norms
25:28
that it found by reading
25:29
text? Another whole philosophical sidebar, but really important: if you think about the internet as the sum total of human knowledge, then among other things it reflects many of the realities in the world, which are atrocious offline in many cases. The flip side of it is that it's a lot harder to change the real world and people and behavior and society and systems, but probably a hell of a lot easier to change a technical system and be able to do certain things. So to me, what's implicit in what you said is that there is actually a solution (you know me, I tend to be solutionistic), but
25:58
that's within the technology, that you don't necessarily get IRL, in real life.
26:04
Yeah, that's exactly right. And if we were in algorithm land, so to speak, where we are, right, and GPT-3 and its descendants (let's say GPT-17) gave you a text document, wrote a text document for you, you could take that document and put it through whatever filter you wanted, right, to filter out sexism or racism, and that layer could
26:28
be inspectable and tunable by everybody. You don't know how GPT-17 came up with its recommendations, but you have this safety net at the end, which is you can filter out things that you don't want. So you have this second step that you can actually put into your system; you don't have to depend just on the first thing, you can catch it at a subsequent
26:47
stage. Right, you can have sort of a system of checks and balances. So, a broad meta-question. One of my favorite posts was from Kevin Lacker, and he basically gave GPT-3 a Turing
26:58
test. He tested it on questions of common sense, obscure trivia, and logic, and one of the things he observed is that, quote, we need to ask it questions that no normal human would ever talk about. And so he said: if you're ever a judge in a Turing test, make sure you ask nonsense questions, and see if the interviewee responds the way a human would, because the system doesn't know how to say "I don't know." And that goes to this question of what a Turing test tells us, and there's been a lot of work over the years, as you know, about modernizing the Turing test. Like, in
27:28
2016, our friend Gary Marcus, with Francesca Rossi and Manuela Veloso, published an article, "Beyond the Turing Test," in AI Magazine. Barbara Grosz of Harvard wrote a piece called "What Question Would Turing Pose Today?" in AI Magazine in 2012, and she basically starts by saying that in 1950, when Turing proposed to replace the question "can machines think" with the question "are there computers which would do well in the imitation game," at the time computer science wasn't a field of study; Claude Shannon's theory of information was just
27:58
getting started; psychology was only starting to go beyond behaviorism. And so what would Turing ask today? He'd probably propose a very different test. And so the question I really wanted to ask you is: how do we know if the thing is measuring what it's supposed to measure, or answering what it's supposed to answer, or that it's getting smarter, I guess?
28:19
This is more a philosophical question than an engineering question, so why don't I say what we know, and then I'll wildly speculate on the other stuff.
28:28
Great, that's life in science. I'll go
28:30
quietly. So basically, if you read the paper, you'll see that it compares GPT-3's performance against various other state-of-the-art techniques on a wide variety of natural language processing tasks. So for instance, if you're asking it to translate from English to French, there's this thing called the BLEU score: the higher the BLEU score, the better your translation. And so every test has its measure, and so what we do know is we can compare GPT-3's performance versus other algorithms, other systems.
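For readers unfamiliar with the metric, here is a simplified sketch of the idea behind BLEU: clipped n-gram precision combined with a brevity penalty. Real BLEU implementations differ in details such as smoothing, higher-order n-grams, and corpus-level aggregation, so treat this as the shape of the metric, not the exact formula used in the paper:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def simple_bleu(reference, candidate, max_n=2):
    """Simplified BLEU: geometric mean of clipped n-gram precisions,
    scaled by a brevity penalty for candidates shorter than the reference."""
    precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(candidate, n))
        ref_counts = Counter(ngrams(reference, n))
        # clip each n-gram's count by how often it appears in the reference
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = sum(cand_counts.values())
        precisions.append(overlap / total if total else 0.0)
    if min(precisions) == 0.0:
        return 0.0
    brevity = min(1.0, math.exp(1 - len(reference) / len(candidate)))
    return brevity * math.exp(sum(math.log(p) for p in precisions) / max_n)

reference = "the cat sat on the mat".split()
perfect = simple_bleu(reference, "the cat sat on the mat".split())
partial = simple_bleu(reference, "the cat sat".split())
```

An exact match scores 1.0, and the truncated candidate scores lower because the brevity penalty punishes it, which is the "higher is better" behavior described above.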
28:59
What we don't know is how much it really understands. So what do we really take away from the fact that it aced two-digit arithmetic? Like, what does that mean? What does it understand of the world? Does it get math? Let's say you had a system that was a hundred percent accurate on every two-digit arithmetic problem you ever gave it. It's perfect at math, but it doesn't get it; it doesn't know that these numbers represent things in the real world. But what does it mean to claim that it doesn't get it? That's a philosophical question.
29:27
Right, it is philosophical, because
29:28
the question then becomes: does it even matter, when it comes to applying things practically? Because I think about this from the world of education. You know, there's a big focus on metacognition and awareness of knowing what you know and don't know, but at a certain point, if the kid is doing well on the test, and the test is applicable to the world, and they can basically survive and do well, does it even matter if they really understood what arithmetic means, as long as they can solve the problem when they go to the store, that if I give you a dollar, I get five cents change back? You know what I mean?
29:55
That's exactly right. If you generalize that out to other tasks
29:58
that humans solve in the real world: imagine it just got good at a hundred, and then a thousand, and then ten thousand of these tasks that it had never seen before. Let's say descendants of GPT-3 got that good at a wide variety of language tasks. What does it mean to insist that it doesn't get the world, it doesn't get language, right?
30:17
Fantastic. I'd love to get sort of your perspective on how we think about this broader arc of innovation that's playing out here. Daniel Gross called GPT-3 screenshots the
30:28
TikTok videos of nerds, and there's something to that; it kind of creates this inherent virality. So I'm curious what your take is on that: on the one hand, some of the most important technologies start out looking like a toy (Chris Dixon paraphrased a really important idea from Clayton Christensen about how disruptive innovation happens), but a lot of the researchers really emphasize: this is not a toy, this is a big deal.
30:49
There are a lot of TikTok-ish videos that are coming out of the Playground, which is basically a place where you can try out the
30:58
model, and on the one hand, people are saying it's a toy, because they're in the sandbox and they're basically having fun feeding it prompts. Some of those examples are actually really good, and some of them are comically bad, right? So it feels toy-like. The tantalizing prospect for this thing is that we have the beginnings of an approach to general intelligence that we haven't seen this much progress on before. Today, if you wanted to build a specific system
31:28
for a specific natural language processing task, you could do that: custom architecture, lots of training data, lots of hand-tuning, and lots of PhD time.
31:39
The tantalizing thing about
31:40
GPT-3 is that it didn't have an end use case in mind that it was going to be optimized for, but it turns out to be really good at a lot of them. Which is kind of how people are: you're not tuned to learn polka, or double-entry bookkeeping, or how to audio-edit a podcast. You didn't
31:58
come out of the womb with that, but your brain is a general-purpose computer that can figure out how to get very, very good at that, with enough practice and enough
32:07
intentionality. Well, it's really great that you use the word tantalizing, because if you remember the Greek myth behind it, Tantalus was destined to constantly have this tempting fruit dangling above him as punishment, and it was so close, yet so out of reach, at the same time. So bottom-line it for me, Frank: it's tantalizing, right?
32:28
Look, there's a limit to how big these models can get, and how effective the APIs will be once we sort of, you know, unleash them to regular programmers. But it is surprising that it is so good across a broad range of tasks, including ones that the original designers didn't contemplate. So maybe this is the path to artificial general intelligence. Now, look, it's way too early to tell, so I'm not saying that it is; I'm just saying it's very robust across a lot of
32:58
very different tasks, and that's surprising and kind of
33:02
exciting. Thank you so much for doing this episode of 16 Minutes, Frank.
33:05
Awesome. Thank you so much for having me.