My First Million
Emmett Shear: Life After Twitch, Jeff Bezos Lessons & AI Doomsday Odds
Emmett Shear, My First Million, Sam Parr
Sep 11, 2023
Episode Transcript
0:00
Is AI going to kill us all?
0:04
Maybe. Here is the CEO of Twitch, which was acquired by Amazon in 2014, and he joins us now. I started Twitch to help people watch other people play video games on the internet. The creator and co-founder of Twitch. Watch other people play video games? Who knew? Emmett knew, I guess, is the answer. What types of ideas are you noticing, or are standing out to you, that are interesting? For the first time in maybe five or seven years, it feels like credibly trying to start a consumer internet
0:32
company, like the ones I was so excited to start in 2007, is potentially a good idea, and that's because of AI. You mentioned AI might become so intelligent it kills us all. This podcast is really growing; I don't want the world to end. I think it's going to be okay, but the downside is so bad. It's really probably worse than nuclear war.
0:54
That's a really bad downside. There's a range of uncertainty, and I would say that the true probability, I believe, is somewhere between... I feel like I could rule the world. I know I could; it's just not what I want to put my all into. All right, what you're about to hear is a conversation I had with Emmett Shear. Emmett was the creator and co-founder of Twitch. If you don't know about Twitch, you're living under a rock; it's one of the
1:24
top five most popular websites in the States right now. It is a place where you can go to watch other people play video games, of all things. Watch other people play video games? Who knew? Emmett knew, I guess, is the answer. So he was the creator and co-founder of that and built it up. It's a multi-billion-dollar company. They sold to Amazon many years ago, seven or eight years ago, for about a billion dollars, and it has grown many times since then. He finally retired after 17 years of the journey. I got to know Emmett
1:54
because he bought my previous company. We got acquired by Twitch, and he was my, quote unquote, boss for my time at Twitch. So I got to see this guy firsthand; he's the real deal, and I've been wanting to get him on the podcast since those early days. When I first met him, I was like, this guy's great. We talked about a bunch of things. We talked about some ideas, like how he would use AI if he was going to create another company. He's retired now from that game of operating a company, but if he was going to do it, this is what he would do.
2:24
So we talked about AI ideas, and we talked about why he thinks AI might kill us all, might be the big doom scenario, which is interesting because he's not just a guy who's going to cry wolf. He's not a pessimist, he's not a journalist who hates tech; he is a techno-optimist. This is a guy who believes in tech and is a very, very intelligent guy. And he sees a probability — he gave us a percentage probability — that it could be sort of the doomsday scenario, and why he thinks that
2:54
could be the case, and what we should do about it. So we talk about AI, and we talk about some of the frameworks that he has for building companies. We didn't talk too much about the origin of Twitch; I feel like he's done that a bunch of times, so we tried to stay away from that. But it was a wide-ranging conversation. And for those who are watching this on YouTube, I apologize: the studio that we booked in San Francisco screwed up the video, so we don't have video for YouTube, we just have the audio-only version. So you'll see our
3:24
artwork instead. My bad, sorry about that; got to pick a better place, got to pick a better studio, I guess. But anyways, enjoy this episode with Emmett Shear.
3:35
Somebody said creativity is not like a faucet, you can't just turn it on. I think, actually, if you polled like 100 people, most people would say, yeah, of course, creativity is this sacred, special thing that only happens if you've meditated in the morning and the room is perfectly right and you've had your L-theanine in your coffee or whatever. And you were like, no, for me it is like a faucet: watch, I can just keep generating more ideas. Yeah. I love that for two reasons. One,
4:02
I love that you will just be like, "no, actually, this" — that's a consistent thing I've seen you do. And the second is, I think that's very true about you, and I wonder: is that practiced or is that innate? Like, if there was a researcher studying you when you were like 10 years old, do you think they would have been like, oh, this person is different in these ways? What would have seemed different or special about you at the time? If there was a nature-versus-nurture break on this, it happened very early,
4:32
because by the time I was 10 you would definitely have noticed the same thing. I'm not really that different; I would have been much less effective, but as a ten-year-old I already had that same experience. But you were different than other ten-year-olds? Well, actually, I was less different than you'd think. Most people, most children, have this experience already. I think most ten-year-olds, and definitely most five-year-olds, are capable of generating ideas for what to do about something, or of playing pretend, almost indefinitely.
5:02
They don't run out of ideas. It's as you get older that somehow what you learn to do is stomp down the ideas that are bad and not say dumb things. But the more pressure you put on yourself not to say dumb things, the more your inner idea generator gets disrupted. And I say a lot of dumb things when I'm generating ideas — I don't put much weight on them, and most of the ideas will be bad, they'll have something obviously wrong with them. And they give you this advice when someone teaches you to brainstorm,
5:32
like, "no bad ideas here." That's obviously not true: there are lots of bad ideas, most of your ideas are bad. The actual advice is, don't stop at the bad ones. What you're trying to do is disable that censor that most people have installed that goes "no, bad; no, bad; don't be stupid, don't be stupid." And I think I was, like, mal-socialized; it never occurred to me to have that. I never got the censor installed, and why that is the case, I'm not sure. But I actually
6:02
think I'm the one who is unchanged, in some sense, a little more childlike in that way, and everyone else is the weird one — like, why, how did you wind up so damaged by your life that your inner wellspring of creativity has been crushed? And I think that process is actually very simple. This process goes off on all kinds of things in people's minds. You start from some capability, something you can do, some behavior, and if, when you do that behavior, you try that thing, you receive negative feedback — which can be external or,
6:32
I actually think even more often, internal: you're like, oh, I screwed up, oh, that's bad, oh, disappointment — you learn not to do that thing pretty rapidly. And so that leads you to doing it less, which means you're less skillful at it, which tends to lead you to doing it less, and that cycle ends in you being very bad at something. Like, "I'm bad at math." No, you're not — everyone can do the kind of math you're talking about. When people say "I'm bad at math," they don't mean "I'm bad at abstract algebra proofs," they mean "I can't do arithmetic or basic algebra."
7:02
And that's just imaginary — everyone can do that, it's easy. They got stuck in one of these spirals, and getting out of one can be very hard. And I guess I think that's what happens to people's creativity. I don't know, I didn't go through the process myself. Now, as I'm saying this out loud, actually, an idea that comes up for me is: oh, well, maybe what it is is that I had better ideas early on, so I got the reward loop — or I had an environment that was unusually positive and positively reinforcing for me having ideas. And so I would have
7:32
ideas, and it would go well, and that would lead me to having more ideas and more practice at having ideas, which would go well, and you wind up just never breaking that loop. I have a trainer who comes over to my house, and it was devastating, because my kids will come down during the session and I'm always like, oh, sorry, this is obviously annoying — my two-year-old is here, almost getting hurt on all the weights, and that's probably not what you want in your session. So I was like, oh, sorry, sorry, and he's like, dude, no — he's like, kids and dogs. And I was like, what? He goes,
8:02
I love to be around kids and dogs. They've got it right, you know? A dog is unconditional love: happy, playful, super loyal. He's like, what's not to learn from a dog? I want to learn everything I can from a dog. Or kids — just look at what she's doing. She just made up a game on this thing. We're trying to do a serious workout, and she made this her play place. She can't wait to come down here. He's like, I wish all my clients couldn't wait to come down to the gym. And I was like, damn, this guy's right. And one of the things I like to do is figure out people's isms, their philosophies. And you were like, oh, I thought of
8:32
one on the way here. Explain what it was. It was: have you tried just solving the problem? What does that mean? So there's a meme on the internet, I think from the sort of weird side of Twitter, which is like, "have you tried solving the problem by..." and then an infinite list of possibilities. The tweet is always, have you tried solving the problem by, like, ignoring the problem? Have you tried solving the problem by spending more money on it? Have you tried... something. And one of my favorite ones, which has become almost like a life motto, is: have you tried solving the problem by solving the problem? And that sounds dumb, right?
9:02
It's one of those Zen-koan pieces of advice where, when you first hear it, you're like, are you serious? That's the advice — solve the problem by solving the problem? But what you notice, when you try to help people with problems a lot, is that oftentimes people will have a problem, it's really obvious what the problem is, and they'll come to you for advice like: well, how can I deal with the consequences of this problem? Or how can I avoid needing to solve this problem? Or how can I get someone else to solve this problem? Or how have other people solved the problem in the past — which is closer to
9:32
the right answer, or can be the right answer. And the point of the saying is to remind you that sometimes the way to solve the problem is just to actually try solving the problem. Don't deal with the symptoms, don't accept the symptoms, don't find a hack around it. Like, the problem is the website is not fast enough, and instead of trying to figure out how we can make a loading spinner that distracts people from that fact, what if we just made it so fast that you don't need a loading spinner? It's interesting, because that's very good advice when
10:02
the problem actually is solvable — when people are flinching away from it because something about it is aversive, even though the problem isn't really unsolvable; like, if they worked at it for six months, it would go away, and it's worth solving. Whereas there are these problems where you're trying to make a perpetual motion machine — trying to do something that is actually too hard — and then "solving the problem by solving the problem" is a huge mistake: you should stop trying to solve the problem, you should be looking for a hack around needing to solve it, or be looking to live with it
10:32
effectively. But I find, actually, on balance, with most people I talk to, most people I know — maybe it's that people in tech love the hack, they're always looking for the easy, fast solution, the shortcut around solving the problem — the most often helpful version of that advice, in my opinion, is bringing people back to just solving the problem. I find that the advice I like the most, or the sayings that resonate with me the most, are the ones where it's like, "you spot it, you got it." It's like, if
11:02
it's the advice I needed, that's why it resonates with me, and that's why I like giving it out — because I personally experienced it. Have you personally experienced that? What's an example where you remember trying to do everything but solve the problem, and then you finally realized, shit, I should have just solved the problem? It's an interesting question. What is it — "you spot it, you got it"? It's like, noticing is half the battle, basically. It's sort of the smart-person version of "whoever smelt it dealt it." Yeah — you only notice this in other people because you've seen it in yourself. Otherwise you
11:32
wouldn't be observant of it. My version of this is: we give the advice we need to hear. Yes, which is the same basic idea. It's actually not always true — it's one of those really good heuristics where, sure, half the time when you give advice it won't actually be for you, but half the time it is, and noticing it is so powerful that you should just check every piece of advice you give: wait a second, is this advice I need to hear right now? When it comes to "have you tried actually solving the problem," I think I'm pretty good at that in general. But I often give it to myself in a more meta
12:02
sense; it's advice I often need in a more meta sense. When I'm confronted with a thing that needs to be programmed, I will often go just program the thing. But I have a tendency to look for ways that I can solve the problem, not ways that the problem can be solved. And for me, the difference is almost always: why don't I ask somebody else for help? It just doesn't even occur to me
12:25
to go do that. I'll just indefinitely dig and try to solve the problem myself. I'm not really trying to solve the problem; I'm trying to solve the problem while avoiding having to ask anyone else for help, which is not really trying to solve the problem. But weirdly, I think this is one of those things where, almost like the creativity thing, it was a shock for me to realize other people don't do that. You're self-actualized on that one. Yeah. What's a piece of good advice that you're bad at taking? That's an excellent one. I think the big
12:55
one there is, like, you know, listen more. I've been running into this doing the stuff at YC, and it's 100% something that I need to get better at: you go into the user interview and you have all these ideas and thoughts, and you need to not be surfacing those. You need to actually be focused on them, move your attention to them, and really be interested in and care about what they have to say; your opinions and what you think is true are irrelevant. And I'm much better at that than I used to be, but it's one of those things where
13:25
being reminded, like, let's just chill out for a second and listen, is almost always good advice for me, and it's advice I give fairly often, but it's hard to take on. One of the things I really liked that you showed me once — well, I remember asking you when we were at Twitch. I think we were working on a problem that was reminiscent of early-days Twitch, like the mobile stuff in different countries, where it's like, oh, we're not the leader, we need to create from scratch,
13:55
which wasn't a muscle that a lot of people there were flexing at the time. And I was like, hey, do you have any stuff from the early days of Twitch? You sent me a thing which was like, here are all the user interviews, here's my doc from all the user interviews. Which was, basically, from what I understand: there was a small universe of people that were already doing video game streaming, and you were like, cool, let me call all of them and let me ask them like three questions. And if I can just get the answers to these three questions, that should give me a little bit of a roadmap, a blueprint, of
14:25
understanding what I need to do in order to win in this market. So take me back to that, because I like it for two reasons: A, it was simple, and B, it seemed like a focused intensity — you found a point of leverage and you pushed. Yeah, I think two things happened to lead to that. The first was the realization that, obviously, if we wanted to win in gaming, the streamers mattered. At Justin.tv we'd always been like, streamers and viewers are equally important, and I finally made a decision: no, this product is ultimately
14:55
about streamers, and if this doesn't work for the streamers, it doesn't work for anybody. And then I had the realization — this is one of those epiphany moments — where I truly saw: I have no idea why anyone would stream video games. I don't really want to do it myself, and I saw that I'd been building products for these people for the past four years at Justin.tv without really having any idea why they did the thing they did at all.
15:24
And I sort of saw, like, oh, I'm just making this up, I have no idea. I don't know the answer — but I could know the answer; there is an answer out there, a bunch of people know it, I just don't — and that triggered me to be like, I need to know. I needed to understand these two hundred people, understand their minds, and I did about 40 interviews, probably. And I didn't want to know what they thought we should build; if they knew what we should build, they would have my job,
15:50
and I'd talked to enough of them before to know that they had no good product ideas. I wanted to know: why are you streaming? What have you tried to use for streaming? What did you like about that? How did you get started in the first place? What's your biggest dream for streaming? What do you wish someone would build for you? And I didn't ask them "what do you wish someone would build for you" because I thought they would have a good idea; I asked them because the follow-up question was really the killer one. They would say, "you'd build me this big red button," and I'm like, great, I built you the big red button. What does it do for you? Why is your life better
16:20
after I built that? And then they would tell me the real thing, which is like: oh, I would have made money that month; or I'd get a bunch of new fans who love me; or my fans who already love me on YouTube would watch me live, more of them would. And I go, that's the real answer. You don't want the button — you want the fans, or the money, or, I call it love: the sense of reassurance and positive feedback that your creative content was wanted. But you're a smart guy — those, love and money and
16:50
fans, I'm sure you would have guessed are what the streamers want. False. What did you say — false? What did you think? It was a revelation that people would want money, because I was like, you're streaming, whatever, 12 hours a week; if we monetized at the rates we can monetize at today, you'd make like three dollars a month. It didn't occur to me that that would be a positive thing. And they'd go, yes, oh my God, that would be amazing. I was like, wait, wait, you're serious? You would like three dollars? And my comeback was, I don't want to over-promise — we'll build you the monetization eventually — but
17:20
you would really be excited if it only produced a tiny amount of money? And they're like, absolutely. Just the idea that I can make money doing this would be so exciting. That had not occurred to me, because it was always easy for me to make money: I was a programmer, I had summer jobs interning for Microsoft. If you're a programmer and you get a summer job interning for Microsoft, that pays many, many years of that level of streaming income in three months. It wasn't in my worldview that that would be so important to them. And of course I knew they wanted a bigger audience,
17:50
but the degree to which they valued even one more viewer, and the degree to which they didn't care about anything else — they wanted people to watch them, they wanted to make money. And I asked about other things, like, do you want better video production, cooler video production, and they'd be like, yeah. And I'd be like, okay, but what's good about that? What do you like about that? Well, I'd get a much bigger audience. And it was really the realization that just those three things, basically, explained
18:20
98% of the motivation, and anything that didn't move the needle on those could be ignored. So a good example is polls. Everyone would ask for polls — seems like a cool feature, live polls, of course. Are you going to have a bigger audience with live polls? Not particularly. Are you going to make more money? No. Do you really feel more love running a live poll than after just asking chat and having people post the answer in chat? No, it's the same — you get the feedback either way. So this feature that sounds like it could be cool — you're saying that this feature
18:50
is worthless? Yes — in fact, potentially negative. And so it would always be on the list of things that sounded like they might be cool, and we just would never build it — entirely correctly, because it wasn't going to move the needle. And the thing that's really hard to teach — I've been a YC visiting partner for this batch, and I'm trying to convey it to people, and it's very hard to get them to do it — is that you have to care fanatically about these people: these people as people, and these people in the role they're playing, as
19:20
streamers. And what they believe about their reality, you are to accept as base reality, because that is how they see the world, that is what's going on. But you need to, like, literally have no regard for their ideas about how to solve the problem. And it's a little paternalistic in a way, but it's more of just respecting that they are experts in this thing, and you need to understand them in that thing. And what people are looking for, when they're looking for the product idea from the person, is — they don't want to do this
19:50
work. They don't want to take responsibility for it. It's my job; I have to solve the problem, and no one's going to tell me what the answer is. There's no teacher, there's no customer.
20:02
It's up to me to come up with the truth and then defend it when other people are like, no, that's wrong. I have to be able to say, no, no, let me explain why this is actually a good idea. And that's scary — you're responsible. I think that's probably why the "just solve the problem" advice has bounced around my head, because a bunch of the fear founders have about addressing these things, I think, comes down to a willingness to take responsibility for solving this other person's problem. Like, they're going to come and
20:31
dump their problems on you, and it's your job to solve them for them within the constraints available. And if you come up with the wrong idea, it's on you — you can't trust anyone else to do it for you. What are you seeing in this YC batch? You're a visiting partner — an exciting time with AI, when, you know, half or more of the batch is doing something with AI. What's exciting, what are you seeing, where do you see the puck going? So it's interesting. I would actually say that, at least in this batch — I think it might have been different the previous batch —
21:01
by this batch, use of AI is no longer interesting; it's assumed. It's like being an "AWS startup" or a "mobile startup." What do you mean you're a mobile startup — are you building a social media network? Of course you have a mobile app. And now it's like, of course you're using LLMs to solve a problem. If you weren't doing that, I would think you were a dummy. It's like,
21:31
you wouldn't bring it up; it's not even an interesting topic of conversation. The question is, what are you doing? Now, that's not entirely true — there's some percentage of the batch, I don't know, between 10 and 20 percent, I'd say, that's legitimately building AI infrastructure, because there's a need to build a bunch of infrastructure there. Those actually are AI companies. But when people hear "AI company," I don't think they think back-end infrastructure and support for AI; they think of using AI to do things. And I actually couldn't tell you what percentage of the batch is AI
22:01
from that point of view — all of them, maybe? I don't know. Why wouldn't you use it, even if it's only for a minor thing? There's always something you can use it for; it's a very useful technology. What types of ideas are you noticing, or are standing out to you, that are interesting? For example, I remember when I first moved to Silicon Valley, suddenly the kind of online-offline companies started doing really well — Uber, Airbnb. And it was like, oh wait, this used to be taboo. It was like, no, you're supposed to be a software company;
22:31
like, you have to ship T-shirts? What are you doing? I would say, like, stay away from trends. So, one of our most popular episodes on the pod was when I was talking about the FIRE movement — it stands for Financially Independent, Retire Early. And I'm not necessarily part of that, because I don't want to retire, but I do love the idea of just being financially independent. I think it gives you so many different options, and I love content on that topic, because I just love hearing stories and tactics and things like that around
23:01
saving money and earning more money and just being financially independent. And the best podcast on that topic is called ChooseFI. They've been around for years, like six or seven years at this point — tens of thousands of downloads, thousands and thousands of reviews — and the host, his name's Brad, he's wonderful. And if you're into earning more, saving more, and being financially independent — that's something that I'm a big fan of; it was my goal starting at the age of 20 — then you have to check them out. The host's name is Brad, he's amazing, and he's a big MFM listener, so he understands what we're about.
23:31
And it's ChooseFI — "choose," and then "FI" like financially independent, so C-H-O-O-S-E-F-I. You can find it on Apple Podcasts, Spotify, wherever you get your podcasts. We're big fans of them; check it out. The online-offline companies that started the trend did very well: Uber is a great company, Airbnb is a great company,
23:53
DoorDash is actually a great company. But at the time, they were doing something that was not allowed; they'd found an opportunity that had been ignored. Almost all of the online-offline companies that got started after Uber, DoorDash, and Airbnb were big — the "we're going to be the Uber and DoorDash and Airbnb of X" companies — most of those did not do very well.
24:17
Is online-offline bad? No, it generated a bunch of incredible companies. Jumping on the trend was probably bad for you, and so whatever I tell you is, like, the trend I see — I don't mean trend. I guess what I mean is, I think you're a person who is really good at looking at a situation, like looking at a box of stuff, and correctly identifying what's really interesting in it. Yeah, no, I understand — I think I understand what you're asking. So what I think is changing in the world right now, having observed this, is that the consumer is back.
24:46
For the first time in a long time — and I mean "a long time" by internet standards, like five years or something — for the first time in maybe five or seven years, it feels like credibly trying to start a consumer internet company, like the ones I was so excited to start in 2007, is potentially a good idea. That's because of AI. AI means there's a whole opportunity to sort of reimagine how consumer experiences can work. And what's cool about consumer is: for B2B SaaS,
25:17
the experience isn't the product. And so reimagining the experience does not necessarily reopen a segment — it can, but usually does not. In consumer, reimagining the experience 100% reopens the segment, because the thing you're selling is the experience; the reason people will use your product is that it's a different experience. In B2B SaaS, it's not the experience, it's the what — people actually care what it does, and the pricing model, and the adoption. It's very practical, and you can make people jump through hoops
25:47
if it does the thing, because there's a lot of money for the corporation in money and labor, and you are paid to use the product — it's a whole different thing. And so AI adds new capabilities; new capabilities enable new segments of B2B SaaS to be created, and that will generate some amount of growth. In consumer, it does a really cool thing: it's like mobile, it reopens every segment. So now, instead of "assume mobile exists," assume AI exists — what could you build now? And that's very exciting. I don't have answers for that anywhere, because,
26:17
like, you know, we'll see. Consumer is a bunch of lottery tickets; nobody knows, there's no singular genius who will write it all out, right? Like, you could see, okay, mobile comes along, photo sharing gets reopened, right? The window has opened, there's your wide-open blue ocean — and it turns out it's Instagram, and it's Snapchat, which is going to use photos as text messages. There were a few different use cases, and Instagram and Snapchat took two of the best ones. The fact that photo sharing is one of the most important segments, and that, you know, sort of posting photos and
26:47
messaging with them are the two important things to do with them, seems blindingly obvious in retrospect — but if you'd had to predict that in 2007 or 2008, good luck. Nobody correctly foresaw that stuff before it happened. I mean, if you did correctly predict it, you made a lot of money, and congratulations, you're really good at consumer — slash, you got lucky; we will find out when you try to do it again. I think that in AI — I do have a theory for one of the ways this will disrupt a bunch of businesses
27:17
in AI, especially in consumer. A huge number of businesses can be conceived of as effectively being a database, a system of record, that has a bunch of canonical truths about the universe, and each of them is a row. Yelp is like a big database that has a bunch of rows, and the rows are restaurants and local businesses, and there are a bunch of facts about them — where are they located, what are their hours — all in that database row, and it's all text. There's a bunch of messy stuff out in the
27:47
world, and it's been digested into something that is searchable and comprehensible and usable in an app. And most of the work of turning the messy real world into the canonical row is done at write time, by the users. That's how UGC apps work in general: a bunch of your users go out into the messy world and then turn it into a row in a database. And if they include a photo or a
28:12
video as part of that, it's attached to the row as a fact about the restaurant: here's a restaurant, here are 150 photos, they're facts about its menu — but they're attached facts, they're not the basis. And what I think AI has opened up the possibility for is a huge inversion there. What if the thing you gave us was just a video of your meal — or photos, you know, but ideally just a video of the meal, of you talking about the meal, of whether you had a
28:42
good time or not, you and your friend shooting the shit about "did you like that one? No, I liked this one." What if we just saved that video raw, and then an AI watched it and extracted a cached version of the metadata? Then, truly, if we decide something else is important that we didn't get — say we didn't get noise levels, but noise levels would be a good thing to have — instead of re-collecting data forever and starting a whole data-collection push to get that, we just
29:12
go back and tell the AI: oh yeah, also grab noise levels from all of these videos. In fact, maybe we don't even, as a product, have to go do that. Maybe as a customer I can literally just ask, what's the noise level at this restaurant, and in real time the AI can go re-watch the video and tell me. Or, you know, I ran a search and there are these 15 restaurants, and — oh, actually, sort by noise level. We don't have noise level pre-recorded, but it's in all the videos; the AI can very quickly watch all the videos in parallel
29:41
and sort by noise level for me, even though it wasn't in the database to start with. And I think that inversion — I'm using Yelp as the example because it's a very familiar thing; most people have left a review, and it's pretty easy to imagine a bunch of video reviews of everything being the system of record instead — but you can describe a phenomenal number of consumer apps as being that. If you type anything into a text box, you're participating in one of these system-of-record things.
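(A minimal sketch of the inversion described here, purely illustrative: it assumes a hypothetical video-understanding model, exposed below as ask_video_model(), and is not a real Yelp or vendor API. The raw videos are the system of record; structured attributes like noise level are derived lazily, on demand.)

```python
# Sketch: raw video reviews as the system of record, with metadata extracted
# on demand by an assumed video-understanding model (ask_video_model is a stub).
from dataclasses import dataclass, field

@dataclass
class Restaurant:
    name: str
    videos: list = field(default_factory=list)   # raw video reviews: the system of record
    cached: dict = field(default_factory=dict)    # metadata extracted so far (starts empty)

def ask_video_model(videos, question):
    """Stand-in for an AI that watches raw clips and answers a question about them."""
    raise NotImplementedError("hypothetical multimodal model goes here")

def attribute(r, key, question):
    # Derive the attribute only when someone asks for it, then cache it —
    # no "data collection push", because the answer is already in the videos.
    if key not in r.cached:
        r.cached[key] = ask_video_model(r.videos, question)
    return r.cached[key]

def sort_by_noise(restaurants):
    # "Sort by noise level" even though noise level was never a database column.
    return sorted(restaurants, key=lambda r: attribute(
        r, "noise_db", "Estimate the average noise level of this restaurant in dB."))
```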
30:11
What if it's video? If you assume video is deeply indexable and understandable by computers, what should the experience look like? I think it looks a lot more like a Snapchat- or TikTok-like experience, but then different, because it's not exactly like anything — it's a new kind of thing. But it probably starts with the camera open, which is weird, right? A Yelp that starts with the camera open — that's not Yelp today. And it's disruptive, because Yelp's whole value prop is: we have all this great, highly
30:41
meticulously groomed data. And if this is true, that becomes entirely worthless — we throw that all away, we just watch a bunch of videos; the data is worth less than the videos — and suddenly the playing field is level between the startup and Yelp. That's a huge opportunity for disruption. And I think you can take that and reapply it to any product where you fill out forms. That's a general-purpose consumer thing you can now do, kind of like "build it for mobile" was, and I think in some cases it will be very powerful
31:11
and that will be the new winner. In some cases, the incumbent can kind of add videos, or it's not really better, and the incumbent will just win — it won't disrupt everything. But if you pick the right thing, not only will it disrupt the incumbent, the new thing may be dramatically better. For some things — actually, I think Yelp is somewhat of a bad example; I think the data Yelp has, with the photos and the reviews, is like
31:34
90% as good as a video system of record, probably. But you could imagine something where, with the video system of record, it's not so obvious what to even put in the highly processed, text version of the data, and the video version is a lot better. And then I think not only can you disrupt the incumbent, you can 10x the size of the segment — it becomes a good segment now, where it wasn't particularly one before. So ChatGPT is a great example of this in action, which everybody has now played
32:04
with. You take Google, which is like: oh, our value is this entire index of ranked web pages based off of terms, we understand basically what should show up in this hierarchy — and it was really good for finding stuff. And ChatGPT was like, cool: you could ask a question and try to find a link to an answer, or we could just give you an answer. Or, even better, forget questions and answers — what if you just give me a command and I can just make something? So instead of finding things, I can create
32:34
things for you, right? And all of a sudden it was like, well, how did they do that? Well, they basically slurped up the internet and then trained an AI to do things with it. They overfit a statistical prediction algorithm on every domain of human knowledge — this is my theory, and I'm pretty sure it's true. Statistical prediction algorithms in general work very well. The innovation we found is a prediction algorithm that works better than normal, but the way it works better than normal is really interesting. It's not actually that it particularly outperforms traditional
33:04
algorithms for prediction on normal amounts of data; it's that it keeps working as you dump more and more data into it, and more and more processing on that data. With most machine learning algorithms, you overfit very fast as you add more data and more processing. If you imagine you've got a cloud of data points and they're kind of vaguely in a line: underfit is like you just draw a random line across that doesn't look anything like the shape of the data.
33:34
A well-fit curve is like, you draw a line through the dots, and there's this kind of noise — things that are randomly above and below — but if you look at it, okay, that actually does fit the data, the underlying predictive facts about the data, while ignoring the noise. And if you overfit it, you get this really wiggly curve that touches every single dot exactly, but when you get a new data point it will miss, because it over-predicts — it predicts too much of the thing — and so it doesn't predict new data very well.
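(To make the underfit/overfit picture concrete, here's a tiny illustration — not from the episode — using plain polynomial fits on noisy, roughly linear data: a degree-1 fit captures the trend, while a very high-degree fit hugs every training point and typically does worse on held-out points.)

```python
# Illustration of underfitting vs. overfitting on points that are "vaguely in a line".
import numpy as np

rng = np.random.default_rng(0)
x_train, x_test = rng.uniform(0, 10, 40), rng.uniform(0, 10, 40)
true_fn = lambda x: 2.0 * x + 1.0
y_train = true_fn(x_train) + rng.normal(0, 2.0, x_train.size)  # noise above and below the line
y_test = true_fn(x_test) + rng.normal(0, 2.0, x_test.size)

for degree in (1, 15):  # 1 ~ a reasonable fit, 15 ~ the "really wiggly curve"
    coeffs = np.polyfit(x_train, y_train, degree)
    test_mse = float(np.mean((np.polyval(coeffs, x_test) - y_test) ** 2))
    print(f"degree {degree}: held-out MSE = {test_mse:.2f}")
# The high-degree curve touches the training dots almost exactly but usually
# scores worse on the new points — it has modeled the noise, not the line.
```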
34:04
Okay. And so normally what happens is, if you try to dump more data and more compute into a normal machine learning algorithm, you get diminishing returns very quickly — it just doesn't perform that much better with twice as much data or twice as much compute. The cool thing about the transformer-based, "attention is all you need" architecture is that it continues to benefit from more compute and more data in a way that other ones didn't. And so what that lets you do is run it on a
34:34
much bigger domain than normal — run it on everything. Normally, as you add more area, it degrades the quality; instead, forget it, do everything, put a ton of compute in, and now you get something that predicts pretty well against everything, which is to say it seems to be kind of intelligent. The evidence seems to suggest that it's overfit: when you ask it to predict something that is either in the set of things it
35:04
was trained on, or a linear interpolation between two things it was trained on, it's quite good at giving you the thing you asked for — or a linear interpolation between five things. If the things you're asking about are all in there and it just has to find a way to blend them together, it's good at that. But when you ask it to actually think through a new problem for the first time — what's an example? There are seven gears on a wall, each one meshed with the next, alternating. There's a flag attached to the seventh gear, on the right side of the gear, and it's pointed up right now.
35:34
If I turn the first gear to the right, what happens to the flag? Anyone — this is like a breakfast question for you, this is what you ponder in the mornings — with pen and paper and time, you can work this out, no problem, right? You just draw the gears: when you turn the first gear to the right, it turns the next one to the left, the one after that to the right, and so on. There's a general principle there, that the gears alternate, and if you ask GPT, it knows that general principle. But it won't apply it, because no one has asked this dumb gears-and-
36:04
flag question before. This is not a thing that's in its training set, and you have to kind of logic your way through it and figure out, okay: it'll go turn right, turn left, turn right, turn left, turn right, turn left, turn right. The flag is on the right side, pointing up. So the last gear, which is an odd-numbered gear, turns the same way as the first gear — it's turning right also — and the flag will rotate down to the right, clockwise. Cool. I can work that out; it's not
36:34
complicated.
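(The gear puzzle worked out in a few lines of code — just mirroring the pen-and-paper reasoning above, not anything from the episode: meshed gears alternate direction, so the seventh gear turns with the first, and a right-side flag pointing up sweeps downward under clockwise rotation.)

```python
# Seven meshed gears on a wall; turn the first one clockwise ("to the right").
def flag_outcome(num_gears=7, first="clockwise"):
    other = "counterclockwise" if first == "clockwise" else "clockwise"
    # Adjacent meshed gears counter-rotate, so odd-numbered gears match gear 1.
    last = first if num_gears % 2 == 1 else other
    # A flag on the right side of the last gear, pointing up, rotates down
    # if that gear turns clockwise, and up-and-over if it turns the other way.
    return last, ("flag rotates down" if last == "clockwise" else "flag rotates up and over")

print(flag_outcome())  # ('clockwise', 'flag rotates down') — gear 7 turns with gear 1
```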
36:36
And I bet that question will become answerable — it's a pretty easy question. I tested it with ChatGPT-4; if 4 doesn't answer it, 5 will. But the fact that it struggles at all with that, while being so brilliant at combining other stuff, really shows that it's overfit. It knows how to answer problems that it has seen before, but when you give it a truly novel kind of combination of a problem, it struggles a lot. Because, I would say — to abuse
37:06
the formal psychometrics approach a bit — it has a very high crystallized intelligence but a pretty low fluid intelligence right now. Now, that could change, but today, that's the state of affairs. And you bring this up in order to say what? You say, okay, I think it's overfit, it's strong in this area, weak in this area — what's the "so what" of that for you? Are you trying to say
37:30
that it's a little bit overhyped? Or are you trying to say, just wait until it can do both? Are you trying to say certain problems are doable now? Definitely "just wait until it can do both," because that's a whole different thing — that's scary. But the current thing, which is mostly crystallized intelligence, is really good at — put it this way: it's a clever trick, right? It's really good at a big set of tasks, which happens to be the set of tasks that anyone has ever written stuff
37:59
down about explicitly — all explicit human knowledge. That's a very big domain. There are a lot of things that can be solved where there are explicit examples of people solving that problem, or a linear interpolation of those problems, in the domain of all human knowledge. The fact that it doesn't generalize is irrelevant — it's immensely powerful. You don't need fluid intelligence, I guess is the point, for it to be very useful. But it doesn't lead to everything. You hit these weird boundaries where
38:29
it's like, wait a second, you can't do that? It can't do that at all — novel problem solving, it's just terrible at. So what about — let's walk through two examples, I want to hear your take on this. You gave the Yelp example. Another thing that's kind of like rows in a database is something like Spotify: oh, I want to go listen to a song — here's genre, artist, song length, some algorithmic popularity, similarity to other songs in some way. But Spotify's value is —
38:59
some of Spotify's value is in the playlists. I would agree with the analogy to Spotify in that playlists are an example of this kind of database, human-data-entry thing; but Spotify's value is mostly in the set of all of the music itself — the licenses and all the music itself. And so I don't think Spotify is a great example, because
39:21
the human-data-entry parts of the database — if that all just got deleted tomorrow, it would not hurt them that badly. The thing I'm thinking about is: what if the licenses don't matter? What happens if generative music is just awesome to listen to, in a hyper-personal way — it makes the types of songs that Emmett likes? That's a different insight that I think is also possible, which is: it's not about being able to analyze and extract from media, it's about being able to create media. Because the video system of record is enabled by the ability to
39:50
understand and read video and comprehend it; generative is the opposite — we can make all the stuff. Music in particular is sticky against that: people don't want new music, they want old music, they want the music they already really love, the music they grew up with, and that cycle is what keeps record labels sustained. We still listen to The Rolling Stones, right? The other thing I would say about that one is that the music's not that good yet. Maybe someday, but
40:20
it's really not that good yet. Well, I'll caveat this: if the general intelligence level goes up a lot, all bets are off. It'll make some really great music for us before it maybe takes over and kills everyone — but let's assume that doesn't happen soon; I think it's going to take longer than people think. It is going to make music, though. And you know what, if we do go out, we'll go out with some great music — it's going to be an amazing two or three years before we all go. But until that point, making really good,
40:50
new, great music is hard, actually, and I think that Rick Rubin's great success demonstrates why artists will still be important. The AI can generate lots and lots of music, but it's not going to have the fine judgment of distinction, the ability to say "this song, not that song." And actually what it will do is de-skill the music-making process on one vector — the ability to literally create the sounds — and it will greatly upskill the
41:20
music-making process on another vector: the ability to curate, the taste to give explicit, exact feedback like Rick Rubin does. AI is going to turn us all into Rick Rubins for generative AI. That skill set — the ability to have a musician come to you and to help them produce their best music — that's the thing you need, because it's easy to generate a thousand cuts, but there are infinite cuts you could generate. So how do you direct it, how do you shape it in the right direction, and mine it and discover
41:50
the things that are cool and interesting? You'll get a different set of people who will be optimal at that. Right — you mentioned AI might become so intelligent it kills us all. This podcast is really growing; I don't want the world to end. Life is good. Here, I'll ask the question clean, for the intro, for the dramatic hook: is AI going to kill us all?
42:13
Maybe. Okay, in all seriousness, walk through how you — a smart person who's an optimist about technology but a realist about real shit — what is the way that you think about this? How would you explain this to, you know, a loved one you care about who's not as deep into technology? You're their trusted source on technology — what do you say to them? So, it is because I am so optimistic about technology that I am afraid. If I was a little bit less optimistic and I was like, this AI stuff's overhyped —
42:43
yeah, yeah, look at that, nice parlor tricks, we're nowhere near building something that's actually intelligent, all these engineers who think they're onto something are full of shit, it's going to take us thousands of years, we're not that good at this stuff, technology is not moving that fast — I'd be like, this is fine. It's great, actually; it's good news. It's a new trick we learned. Excellent. It's because I am so optimistic that I think there's a chance it will continue to improve very, very rapidly, and if it does — that optimism is what makes me worried.
43:13
Sort of the analogy I like to give on that front is synbio, synthetic biology. I'm quite optimistic about synthetic biology — I have several friends who work at synbio companies. It shows a lot of promise for fixing a lot of really big, important health problems, and it's quite dangerous: it will let us genetically engineer more dangerous diseases that could be very harmful to people. And that's a weighed pro and con. It's like nuclear: you can make nuclear weapons and nuclear power, and they're both real. The existence of nuclear weapons is dangerous — you don't have to stop being a techno-
43:43
optimist to think that there's a problem there. I think it was good that we didn't have every country on Earth go build nuclear weapons, probably. And likewise in synbio, I would say we actually already have these regulations in place; we should probably, over time, be able to strengthen them, improve and audit the oversight, and build better organizations to monitor and regulate this. But we regulate whether people can have the kinds of devices that would let them print smallpox, and we regulate whether
44:13
you can just buy the precursor things you need to go print stuff, and we keep track of who's buying it and why — and that is wise; I'm glad that we do that. I'm not calling for a halt to synbio, but if we weren't willing to regulate it, I would call for a halt. It is vastly too dangerous to learn how to genetically engineer plagues and then not have regulation around people's ability to get access to the tools to re-engineer plagues. That's just suicidally dumb. And just because I'm pro-technology — I believe we should absolutely develop the
44:43
technology, and that we should regulate it; that seems just straightforward and obviously true to me. I think it's easier to understand that in the synbio case, because the concept of engineering a plague seems like an obvious thing you could do, and obviously very dangerous, and obviously enabled by technology. AI, I think, is more abstract, because the threat it poses is not posed by a particular thing the AI will do, the way "the plague will happen." The analogy I like to use is sort of like: I can tell you with confidence that Garry Kasparov is going to kick your ass at chess
45:13
right now. And you ask me, well, how is he going to checkmate me? Which piece is he going to use? And I'm like, oh, I don't know. And you're like, you can't even tell me which piece he's going to use, and you're saying he's going to checkmate me? You're just a pessimist. And then I know you don't think he's better at chess than you, because "better at chess" just means he's going to checkmate you, and I don't quite know how. Where people go wrong, I think — the big thing is they don't really imagine the AI being smarter than them. They imagine the AI being like Data
45:43
in Star Trek — kind of dumber than the humans about a lot of stuff, but really fast at math. That's not what smarter means. Imagine the most savvy, smartest person you can think of, and then make them think faster, and also make them even better at it — and not smart in just one way, but smart at everything: a great writer, just insight after insight, who can pick up synbio in an afternoon because they're just so smart.
46:13
People. I know I think it's maybe it's people in Tech like love that love the hack. They're always looking for the easy fast solution to cut surrounding, you solve the problem and it's very helpful. It's the most often helpful for me that advice, in my opinion is like, bringing people back to just solving the problem. I find that the the advice I like the most, or the sayings that resonate with me. The most are the ones. It's like you spot it. You got it. It's like if it's the one is the advice I needed. That's why it resonates with me. That's why I like giving it out because like I personally experienced it have
46:43
You personally experienced that or what's an example where you remember trying to do everything, but solve the problem and then you finally realize shit, I should have just solved the problem. It's interesting question, what is it? You spot it, you got it is like noticing is half the battle. Basically it's sort of the smart person version of whoever smelt it dealt it. Yeah it's like if you you only notice this in other people because you've seen it in yourself. Yeah. Otherwise you wouldn't be observant of it. My version of this is we give the advice we need to hear. Yes. Yeah guys. Like, which is same basic idea. It's actually not always true.
47:13
That's one of those really good heuristics. We're like sure half the time when you give advice it won't actually be for you. But half the time it is and noticing it is so powerful that like you should just check every piece of advice. You give like wait a second, is this advice I need to hear right now when it comes to the like, have you tried actually solving the problem? I think I'm pretty good at that in general, I think. But I often give it to myself in a more meta sense. Like it's a advice I often needed more medicines of like when I'm confronted with like a thing that needs to be programmed, I will often go just program the thing.
47:43
Thing. But I have a tendency to look for ways that I can solve the problem and not that the problem can be solved. And for me, that the verse this almost always like what? I don't know somebody else for help and I just like it doesn't even occur to me to. Go to go do that. I'm just, I'll just, I'll just in definitely dig try to go solve the problem myself, I'm not really trying to solve the problem, I'm trying to solve the problem while avoiding having to ask anyone else for help, which is like, not really, I'm not really trying to solve the problem.
48:13
But as you know where the I think this is one of those things where it's almost like the creativity thing. It was a shock for me to realize other people. Don't do that your self-actualize on that one. Yeah. What's a piece of good advice that you're bad at taking? That's a that's a an excellent one. I think the big one there is like, you know, listen more like I think with this if I so much as YC and it's 100% something that I need to get better at which is like you go into the user interview and you have all these ideas and thoughts and you need to not be surfacing those, you need to actually be
48:43
Focused on you, move your attention to them and really be interested in care about what they have to say and your opinions and what's what you think is true is irrelevant and I am I'm much better at that than I used to be. I also it's one of those things like being reminded like let's just chill out for a second, been lit. Like listen, there's almost always good advice for me and something that I and if I give fairly often but like it's hard to take on,
49:13
Of the
49:13
things I really liked that you showed me once. Well, I remember asking you when we're at twitch, I think we were working on the problem that was like reminiscent of early days twitch like the mobile mobile stuff in different countries where it's like oh we're not the leader or we need to like create from scratch which wasn't a muscle that a lot of people there were flexing at the time and I was like hey do you have any stuff from the early days of twitching? You set me a thing which was like here's all the user interviews like here's my dock from all the use it or lose it which is basically.
49:43
from what I understand, there was a small universe of people that were already doing video game streaming, and you were like, cool, let me call all of them. Let me ask them like three questions, and if I can just get answers to these three questions, that should give me a bit of a roadmap, a blueprint, for understanding what I need to do in order to win in this market. Take me back to that, because I like it for two reasons: A, it was simple, and B, it seemed like a focused intensity. You found a point of leverage and you pushed.
50:13
I think two things happened to lead to that. The first was the realization that if we wanted to win in gaming, the streamers mattered. At Justin.tv we'd always been like, streamers and viewers are equally important, and I finally made a decision: no, this product is ultimately about streamers, and if this doesn't work for the streamers it doesn't work for anybody. And then I had the realization, and this was one of those epiphany moments for me: I truly saw
50:41
that I had no idea why anyone would stream video games. I don't really want to do it myself. I saw that I'd been building products for these people for the past four years at Justin.tv without really having any idea why they did the things they did at all. And I saw, oh, I'm just making this up, I have no idea. I don't know the answer, but I could know the answer. There is an answer out there; a bunch of people know it, but I don't. And that triggered me to be like, I need to know. I need to
51:11
understand these two hundred people, understand their minds. I did about 40 interviews, probably. And I didn't want to know what they thought we should build; if they knew what we should build, they would have my job, and I'd talked to enough of them before to know that they had no good product ideas. I wanted to know: why are you streaming? What have you tried to use for streaming? What did you like about that? How did you get started in the first place? What's your biggest dream for streaming? What do you wish someone would build for you? And I didn't
51:41
ask them what they wished I would build because I thought they'd have a good idea. I asked because the follow-up question was the killer one. They'd say, you'd build me this big red button, and I'd go, great, I built you the big red button. What does it do for you? Why is your life better after I built that? And then they would tell me the real thing, which is like, oh, I'd make money that month, or I'd get a bunch of new fans who loved me, or my fans who already love me on YouTube would watch me live, more of them would. And I'd go, that's the real answer. You don't
52:11
want the button, you want the fans, or the money, or what I call love: the sense of reassurance and positive feedback that your creative content was wanted. But you're a smart guy. Love and money and fans, I'm sure you would have guessed that's what the streamers want. False. What did you say, false? What did you think? It was a revelation that people would want money, because I was like, you're streaming, you know, whatever, 12 hours a week. If we monetized at the rates we can monetize today, you'd make like three dollars a month.
52:41
It didn't occur to me that that would be a positive thing. And they were like, oh my God, that would be amazing. I was like, wait, wait, you're serious? You'd like three dollars? And I didn't want to over-promise; I wasn't saying I would actually build the monetization. But you would really be excited if it only produced a tiny amount of money? And they're like, absolutely, just the idea that I can make money doing this would be so exciting. That had not occurred to me, because it was always easy for me to make money. I was a programmer. I had summer jobs interning at Microsoft. If you're a programmer, you get a summer job
53:11
interning at Microsoft that pays many, many years of that level of streaming income in three months. It just wasn't in my worldview that that would be so important to them. And of course I knew they wanted a bigger audience, but the degree to which they valued even one more viewer, and the degree to which they didn't care about anything else. They wanted people to watch them, they wanted to make money. And I asked about other things, like, do you want better video production, cooler
53:41
production, and they'd be like, yeah. I'd be like, okay, but what's good about that? What do you like about that? Well, I'll get a much bigger audience. And it was really the realization that just those three things basically explained 98% of their motivation, and anything that didn't move the needle on those could be ignored. A good example: polls. Everyone would ask for polls. Seems like a cool feature, live polls. But are you going to have a bigger audience with live polls? Not particularly. Are you going to make more money? No. Do you
54:11
really feel more love running a live poll than after just asking chat and having people post the answer in chat? No, it's the same, you got the feedback either way. So this poll product you could research and build up, it sounds cooler. So you're saying that feature is worthless. Yes, in fact potentially negative. And so it would always sit on the list of things that sound like they might be cool, and we just would never build it, correctly, because it wasn't going to move the needle. And the thing that's really hard to teach there, and I've been a YC
54:41
visiting partner for this batch, the thing I'm trying to convey to people that's very hard to get them to do, is that you have to care fanatically about these people: these people as people, and these people in the role they're doing, as streamers. What they believe about their reality, you have to accept as base reality, because that is how they see the world, that is what's going on. But you need to have literally no regard for their ideas about how to solve the problem. And it's a little
55:11
journalistic in a way, but it's more just respecting that they are experts in this thing, and you need to understand them in that thing. What people are doing when they're looking for the product idea from the person is that they don't want to do the work, they don't want to take responsibility. It's my job. I have to solve the problem, and no one is going to tell me the answer. There's no teacher, there's no customer. It's up to me to come up with the truth, and then
55:41
when other people are like, no, that's wrong, I have to be able to say, no, no, let me explain why this is actually a good idea. And that's scary; you're responsible. That's probably why the "just solve the problem" advice has bounced around my head, because a bunch of the fear founders have about addressing these things comes down to a willingness to take responsibility for solving this other person's problem. They're going to come and dump their problems on you, and it's your job to solve it for them within the constraints available, and there's no
56:11
one else. If you come up with the wrong idea, it's on you, and you can't trust anyone else to do it for you. What are you seeing in this YC batch? You're a visiting partner, and it's an exciting time with AI; like half or more of the batch is doing something with AI. What's exciting? What are you seeing? Where do you see the puck going? So it's interesting. I would actually say that, at least in this batch, and I think it might have been different the previous batch, use of AI is no longer interesting in itself.
56:43
It's like being an AWS startup, or being a mobile startup. What do you mean you're a mobile startup? Are you building a social media network? Of course you have a mobile app. And now it's like, of course you're using LLMs to solve a problem. If you weren't doing that, I would think you were a dummy. You wouldn't even bring it up; it's not even an interesting topic of conversation. The question is, what are you doing? Now, that's not entirely true:
57:11
there's some percentage of the batch, I don't know, between 10 and 20 percent I'd say, that's legitimately building AI infrastructure, because there's a need to build a bunch of infrastructure there. Those actually are AI companies. But when people hear "AI company" I don't think they think back-end infrastructure and tooling for AI; they think of using AI to do things, and I couldn't actually tell you what percentage of the batch is AI from that point of view. All of them, maybe? I don't know. Why wouldn't you use it, even if
57:41
it's only for a minor thing? There's always something you can use it for. It's a very useful technology. What types of ideas are you noticing, or pitches, that are standing out to you as interesting? For example, I remember when I first moved to Silicon Valley, suddenly those kinds of companies started doing really well: Uber, Airbnb, the online-to-offline stuff. It used to be taboo, like, no, you're supposed to be a software company, why are you shipping things around, what are you doing? I would say stay away from trends.
58:11
So one of our most popular episodes on the pod was when I was talking about the FIRE movement. It stands for financially independent, retire early. I'm not necessarily part of that because I don't want to retire, but I do love the idea of just being financially independent. I think it gives you so many different options, and I love content on that topic because I just love hearing stories and tactics on saving money, earning more money, and being financially independent. And the best pod-
58:40
cast on that topic is called ChooseFI. It's been around for years, like six or seven years at this point, with tens of thousands of downloads and thousands and thousands of reviews, and the host, his name's Brad, he's wonderful. If you're into earning more, saving more, and being financially independent, which is something I'm a big fan of and was my goal starting at the age of 20, then you have to check them out. The host's name is Brad. He's amazing. He's a big MFM listener, so he understands what we're about. It's ChooseFI: choose, and then FI,
59:10
like financially independent. So, ChooseFI. You can find it on Apple Podcasts, Spotify, wherever you get your podcasts. We're big fans of them, check it out. The online-offline companies that started the trend did very well. Uber is a great company, Airbnb's a great company.
59:27
DoorDash is actually a great company too. But at the time, they were doing something that was not allowed; they'd found an opportunity that had been ignored. Almost all the online-offline companies that got started after Uber, DoorDash, and Airbnb were big, the "we're going to be the Uber or DoorDash or Airbnb of X" companies, most of those did not do very well.
59:51
Is online-offline bad? No, it generated a bunch of incredible companies. Jumping on the trend was probably bad for you, though. So whatever I tell you is the trend I see, and I don't mean trend. I guess what I mean is, I think you're a person who's really good at looking at a situation, like looking at a box of stuff, and correctly identifying what's really interesting in it. Yeah, no, I understand what you're asking. So what I think is changing in the world right now, having observed this, is that the consumer is back.
1:00:20
For the first time in a long time, and by a long time I mean internet standards, like five years or something. For the first time in maybe five or seven years, it feels like credibly trying to start a consumer internet company, like the ones I was so excited to start in 2007, is potentially a good idea. That's because of AI. AI means there's a whole opportunity to reimagine how consumer experiences can work. And what's cool about consumer is that for B2B SaaS,
1:00:51
the experience isn't the product. So reimagining the experience doesn't necessarily reopen a segment; it can, but usually doesn't. In consumer, reimagining the experience 100% reopens the segment, because the thing you're selling is the experience. The reason people use your product is that it's a different experience. In B2B SaaS it's not the experience, it's the what. People actually care what it does, and about the pricing model and the adoption. It's very practical, and you can make people jump through hoops
1:01:21
if it does the thing, because there's a lot of money in it for the corporation, money and labor, and you're paid to use the product; it's a whole different thing. So in B2B, AI adds new capabilities, and new capabilities enable new segments of SaaS to be created, which will generate some amount of growth. In consumer it does a really cool thing. It's like mobile: it reopens every segment. Now that you assume AI exists, the way you used to assume mobile exists, what could you build? And that's very exciting. I don't have answers for that anywhere, because
1:01:51
you know, we'll see. Consumer is a little weirder than B2B; it's a bunch of lottery tickets. Nobody knows; there's no singular genius who writes it out, right? You could see, okay, mobile comes, photo sharing gets reopened, the window is open, it's a wide blue Pacific ocean, and it turns out it's Instagram and it's Snapchat, which used photos as text messages. There were a few different use cases, and Instagram and Snapchat took two of the best ones. The fact that photo sharing is one of the most important segments, and that posting photos and
1:02:21
messaging with them are the two important things to do with them, seems blindingly obvious in retrospect. But if you'd had to predict that in 2007 or 2008, good luck. Nobody correctly predicted that stuff before it happened. I mean, if you did correctly predict it, you made a lot of money, and congratulations, you're really good at consumer, slash, you got lucky; we'll find out when you try to do it again. In AI, though, I do have a theory for one of the ways this will disrupt a bunch of businesses.
1:02:51
In AI, especially in consumer, a huge number of businesses can be conceived of as effectively being a database: a system of record that has a bunch of canonical truths about the universe, and each of them is a row. Yelp is a big database with a bunch of rows, and the rows are restaurants and local businesses, with facts about them: where are they located, what are their hours. It's all in that database row, and it's all text. There's a bunch of messy stuff out in the
1:03:21
world, and it's been digested into something that is searchable, comprehensible, and usable in an app. Most of the work of turning the messy real world into the canonical row is done at write time, by the users. That's how these apps work in general: a bunch of your users go out into the messy world, and then we turn it into a row in a database. And if they include a photo or a
1:03:46
video as part of that, it's attached to the row as a fact about the restaurant: here's a restaurant, here are 150 photos that are facts about its menu. But they're attached facts, they're not the basis. Where I think AI has opened up the possibility for a huge inversion is: what if the thing you gave us was just a video of your meal, or photos, but ideally just a video of the meal, of you talking about the meal, of whether you had a
1:04:16
good time or not, you and your friend shooting the shit about "did you like that one?" "No, I liked this one." What if we just saved that video raw, and then an AI watched it and extracted a cached version of the metadata? And then, if we decide something else is important that we didn't get, say we didn't collect noise levels, but noise levels would be a good thing to have, then instead of re-collecting data forever and starting a whole data collection push to get that, we just
1:04:46
go back and tell the AI: oh yeah, also grab noise levels from all of these videos. In fact, maybe we don't even have to do that as a product. Maybe as a customer I can literally just ask, what's the noise level at this restaurant, and in real time the AI can go re-watch the video and tell me. Or I ran a search and there are these 15 restaurants, and, oh, actually, sort by noise level. We don't have noise level pre-recorded, but it's in all the videos; the AI can very quickly watch all the videos in parallel
1:05:15
and sort by noise level for me, even though it wasn't in the database to start with. And that inversion, and I'm using Yelp as the example because it's a familiar thing, most people have seen reviews, and it's pretty easy to imagine a bunch of video reviews of everything being the system of record instead, but you can describe a phenomenal number of consumer apps as being that. If you type anything into a text box, you're participating in one of these system-of-record things.
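[To make the inversion concrete, here is a minimal sketch of what "the raw video is the system of record" could look like, assuming a hypothetical video-capable model endpoint. The ask_video_model function, the store layout, and the attribute names are illustrative assumptions for this sketch, not any real product's API.]

    from concurrent.futures import ThreadPoolExecutor

    # Raw user-submitted review videos are the system of record.
    # Structured attributes are extracted lazily, at read time, by a
    # video-capable model, instead of being typed into forms at write time.
    VIDEO_STORE = {
        "restaurant_42": ["reviews/42_a.mp4", "reviews/42_b.mp4"],
    }

    def ask_video_model(video_path: str, question: str) -> str:
        """Hypothetical call to a multimodal model; returns a short text answer."""
        raise NotImplementedError("stand-in for a real video-understanding API")

    def extract_attribute(restaurant_id: str, attribute: str) -> str:
        videos = VIDEO_STORE[restaurant_id]
        question = f"Based on this review video, what is the {attribute}? Answer briefly."
        # "Watch" all the videos in parallel and take a crude consensus answer.
        with ThreadPoolExecutor() as pool:
            answers = list(pool.map(lambda v: ask_video_model(v, question), videos))
        return max(set(answers), key=answers.count)

    # A query the original schema never anticipated:
    # noise_level = extract_attribute("restaurant_42", "noise level")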
1:05:45
What if it's video? If you assume video is deeply indexable and understandable by computers, what should the experience look like? I think it looks a lot more like a Snapchat or TikTok experience, but different, because you need a map; it's not exactly like anything, it's a new kind of thing. But it probably starts with the camera open, which is weird, right? A Yelp that starts with the camera open: that's not Yelp today. And it's disruptive, because Yelp's whole value prop is that we have all this great,
1:06:15
meticulously groomed data. If this is true, that becomes entirely worthless. We throw it all away, we just watch a bunch of videos; the data is worth less than the videos, and suddenly the playing field is level between the startup and Yelp. That's a huge opportunity for disruption. And I think you can take that and reapply it to any product where you fill out forms. It's a general-purpose consumer thing you can now do, kind of like "build it for mobile" was, and in some cases it will be very powerful
1:06:45
and that will be the new winner. In some cases the incumbent can just add videos, or it's not really better, and the incumbent will win; it won't disrupt everything. But if you pick the right thing, not only will you disrupt the incumbent, the new thing may be dramatically better for some things. Actually, I think Yelp is somewhat of a bad example. I think the data Yelp has, with the photos and the reviews, is like
1:07:08
90% as good as a video system of record, probably. But you could imagine something where it's not so obvious what to even put in the highly processed, text version of the data, and the video version is a lot better. Then I think not only can you disrupt the incumbent, you can 10x the size of the segment: it becomes a good segment where it wasn't particularly one before. ChatGPT is a great example of this in action, which everybody has now played
1:07:38
with. You take Google, whose value is this entire ranking of web pages based off of terms, where we understand basically what should show up in this hierarchy, and it was really good for finding stuff. And ChatGPT was like, cool: you could ask a question to try to find a link to an answer, or we could just give you an answer. Or even better, forget questions and answers. What if you just give me a command and I can just make something? So from finding things to creating
1:08:08
things for you. And all of a sudden it was like, well, how did they do that? Well, they basically slurped up the internet and trained an AI on it. They overfit a statistical prediction algorithm on every domain of human knowledge. This is my theory; I'm pretty sure it's true. Statistical prediction algorithms in general work very well. The innovation they found is a prediction algorithm that works better than normal, but the way it works better than normal is really interesting. It's not that it particularly outperforms traditional algorithms
1:08:38
for prediction on normal amounts of data; it's that it keeps working as you dump more and more data, and more and more processing on that data, into it. With most machine learning algorithms you overfit very fast. If you imagine you've got a cloud of data points that are kind of vaguely in a line: underfit is like you just draw a random line across that doesn't look anything like the shape of the
1:09:08
data. A well-fit curve is when you draw a line through the dots, and there's this kind of noise of things that are randomly above and below, but if you look at it, okay, that actually does fit the data, the underlying predictive facts about the data, while ignoring the noise. And then if you overfit, you get this really wiggly curve that touches every single dot exactly, but when you get a new data point it will miss, because it over-predicts, it predicts too much from the training data, and so when you get new data it doesn't predict
1:09:38
very well. Normally, what happens is you try to dump more data and more compute into a normal machine learning algorithm and you get diminishing returns very quickly; it just doesn't perform that much better with twice as much data or twice as much compute. The cool thing about the Transformer-based, attention-is-all-you-need architecture is that it continues to benefit from more compute and more data in a way that other ones didn't.
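[A toy numerical illustration of the underfit-versus-overfit picture being described here, using plain polynomial fitting on noisy, roughly linear data. It only shows why the wiggly curve tends to predict new data worse; it says nothing about transformers specifically.]

    import numpy as np

    rng = np.random.default_rng(0)

    # Points that are "vaguely in a line": y = 2x + noise
    x_train = np.linspace(0, 1, 20)
    y_train = 2 * x_train + rng.normal(0, 0.2, x_train.size)
    x_test = np.linspace(0, 1, 200)
    y_test = 2 * x_test + rng.normal(0, 0.2, x_test.size)

    def test_error(degree: int) -> float:
        """Fit a polynomial of the given degree, then measure error on new data."""
        coeffs = np.polyfit(x_train, y_train, degree)
        preds = np.polyval(coeffs, x_test)
        return float(np.mean((preds - y_test) ** 2))

    print("degree 1 (matches the data's shape):", test_error(1))
    print("degree 15 (wiggly curve through every dot):", test_error(15))
    # The high-degree fit hugs the training noise, so it typically
    # predicts new data worse than the simple line does.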
1:10:08
So what that lets you do is run it on a much bigger domain than normal: run it on everything. Normally, as you add more areas, it degrades the quality. Here you say no, forget buckets, do everything, and just put a ton of compute in, and now you get something that predicts pretty well against everything, which is to say it seems to be kind of intelligent. But the evidence seems to suggest that it's overfit: when you ask it to predict something that is either in the set of things
1:10:38
it was trained on, or a linear interpolation between two things it was trained on, or a linear interpolation between five things, it's quite good at giving you the thing you asked for. If the things you're asking about are all in there and it just has to find a way to blend them together, it's good at that. But when you ask it to actually think through a new problem for the first time, and here's an example: there are seven gears on a wall, each meshed with the next so they alternate. There's a flag attached to the seventh gear, on the right side of the gear, pointing up right now.
1:11:08
If I turn the first gear to the right, what happens to the flag? That's, like, a breakfast question for you; this is what you ponder in the mornings. With pen and paper and time, you can work this out, no problem, right? You just draw the gears. When you turn the first gear to the right, the next one turns left, the one after that turns right, and there's a general principle there, that the gears alternate. If you ask GPT, it knows that general principle, but it can't apply it, because no one has asked it dumb
1:11:37
gear-and-flag questions like this before. This is not a thing that's in its training set, and you have to logic your way through it: okay, after the first one they go turn left, turn right, turn left, turn right, turn left, turn right. The flag is on the right side, pointing up. The last gear is an odd number, the same as the first gear, so it's turning right too. So the flag will rotate down and to the right, clockwise. Cool, I can work that out, it's not
1:12:08
Complicated.
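[For what it's worth, the gear puzzle can be checked mechanically. This is a tiny sketch of the reasoning just described, assuming only that meshed gears alternate direction.]

    def gear_direction(first_direction: str, n: int) -> str:
        """Meshed gears alternate: odd-numbered gears turn with gear 1, even ones opposite."""
        if n % 2 == 1:
            return first_direction
        return "left" if first_direction == "right" else "right"

    # Seven gears, turn the first one to the right (clockwise):
    assert gear_direction("right", 7) == "right"
    # Gear 7 also turns clockwise, so a flag on its right side that points up
    # rotates down and to the right, exactly as worked out above.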
1:12:10
And I bet that question will become answerable; it's a pretty easy question. I tested it with GPT-3 through GPT-4, and if 4 doesn't answer it, 5 will. But the fact that it struggles at all with that, while being so brilliant at combining other stuff, really shows that it's overfit. It knows how to answer problems it has seen before, but when you give it a truly novel combination of a problem, it struggles a lot. Because, I would say, to use
1:12:40
the formal psychometrics terms, it has a very high crystallized intelligence but a pretty low fluid intelligence right now. That could change, but today that's the state of affairs. And you bring this up in order to say what? You say, okay, I think it's overfit, it's strong in this area and weak in this area. What's the "so what" of that for you? Are you trying to say
1:13:04
it's a little bit overhyped, or are you trying to say just wait till it can do both, or are you trying to say certain problems are doable now? Definitely "just wait till it can do both," because that's a whole different thing, and that's scary. But the current thing, which is mostly crystallized intelligence, is really good. Put it this way: it's a clever trick, right? It's really good at a big set of tasks, which happens to be the set of tasks that anyone has ever written stuff
1:13:33
about explicitly: all explicit human knowledge. That's a very big domain. There are a lot of things that can be solved where there are explicit examples of people solving that problem, or a linear interpolation of those problems, somewhere in the domain of all human knowledge. The fact that it doesn't generalize is irrelevant; it's immensely powerful. You don't need fluid intelligence, I guess is the point, for it to be very useful. But it doesn't lead to everything. You hit these weird boundaries where
1:14:03
it's like, wait a second, you can't do that? Novel problem solving it's just terrible at. So what about, let's walk through two examples, I want to hear your take on this. You gave the Yelp example. Another thing that's kind of like rows in a database is something like Spotify: oh, I want to go listen to a song, and here's genre, artist, song length, some algorithmic popularity, similarity to other songs in some way. But Spotify's value,
1:14:33
a lot of Spotify's value, is in the playlists. I wouldn't agree with the analogy to Spotify, because while playlists are an example of this kind of database-y, human-data-entry thing, Spotify's value is mostly in the set of all the music itself, the licenses and the music itself. So I don't think Spotify is a great example, because
1:14:55
if the human data entry parts of the database just got deleted tomorrow, it would not hurt them that badly. The thing I'm thinking about is, what if the licenses don't matter? What happens if generative music is just awesome to listen to in a hyper-personal way, making the types of songs that Emmett likes? That's a different insight, which I think is also possible: it's not about being able to analyze and extract from media, it's about being able to create media. The video system of record is enabled by the ability to understand
1:15:24
and read video and comprehend it; generative is the opposite, it's that we can make all the stuff. Music in particular is sticky against that: people don't want new music, they want old music, they want the music they already really love, the music they grew up with. That cycle is what keeps record labels entrenched; we still listen to The Rolling Stones, right? The other thing I would say about that one is that the music's not that good yet. Maybe someday, but
1:15:54
it's really not that good yet. Well, I'll caveat this: if the general intelligence level goes up a lot, all bets are off. It'll make some really great music for us before it maybe takes over and kills everyone, but let's assume that doesn't happen soon; I think it's going to take longer than people think. If we do go out, at least we'll go out with some great music; it's going to be a great two or three years before we all go. But until that point, making really good,
1:16:24
new, great music is hard, actually, and I think Rick Rubin's great success demonstrates why artists will still be important. The AI can generate lots and lots of music, but it's not going to have the fine judgment of distinction, the ability to say this song, not that song. What it will actually do is de-skill the music-making process on one vector, the ability to literally create the sounds, and it will greatly upskill the
1:16:54
music-making process on another vector: the ability to curate, to give explicit, exact feedback like Rick Rubin does. It's going to turn us all into Rick Rubins for generative AI. That skill set, the ability to have a musician come to you and to help them produce their best music, that's the thing you need, because it's easy to generate a thousand cuts, but there are infinite cuts you could generate. So how do you direct it, how do you shape it in the right direction, and mine and discover
1:17:24
the ones that are cool and interesting? You'll get a different set of people who will be optimal at that. Right. You mentioned AI might become so intelligent it kills us all. This podcast is really growing; I don't want the world to end. Life is good. Here, I'll ask the question clean, for the intro, the dramatic hook: is AI going to kill us all?
1:17:47
Maybe. Okay, in all seriousness, walk through how you, a smart person who's an optimist about technology but a realist about real shit, think about this. How would you explain this to, you know, a loved one you care about who's not as deep in technology, where you're their trusted source on technology? What do you say to them? So it is because I am so optimistic about technology that I am afraid. If I was a little bit less optimistic, and I was like, this AI stuff's overhyped,
1:18:17
yeah, nice parlor tricks, we're nowhere near building something that's actually intelligent, all these engineers who think they're onto something are full of shit, it's going to take us thousands of years, we're not that good at this stuff, technology is not moving that fast, then I'd be like, this is fine. It's great, actually. Good news, it's a new trick we learned, excellent. It's because I am so optimistic that I think there's a chance it will continue to improve very, very rapidly, and if it does, that optimism is what makes me worried.
1:18:47
The analogy I like to give in that vein is synthetic biology. I'm quite optimistic about synthetic biology; I have several friends who work at synbio companies. It shows a lot of promise for fixing a lot of really big, important health problems, and it's quite dangerous: it will let us genetically engineer more dangerous diseases that could be very harmful to people. You have to weigh the pro and the con. It's like nuclear: nuclear weapons and nuclear power are both real. The idea that nuclear weapons are dangerous, it doesn't take much; you can be a techno-
1:19:17
optimist and still think that there's a problem there. I think it was good that we didn't have every country on Earth go build nuclear weapons, probably. And likewise in synbio, I would say we already have regulations in place; over time we should be able to strengthen them, improve and audit the oversight, and build better organizations to monitor and regulate. We regulate whether people can have the kinds of devices that would let them print smallpox, and we regulate whether
1:19:47
you can just buy the precursor things you need to go print stuff, and we keep track of who's buying it and why. That is wise; I'm glad we do that. I'm not calling for a halt to synbio, but if we weren't willing to regulate it, I would call for a halt. It is vastly too dangerous to learn how to genetically engineer plagues and then not have regulation around people's ability to get access to the tools to re-engineer plagues. That's just suicidally dumb. And just because I'm pro-technology, I believe we should absolutely develop the
1:20:17
technology, and that we should regulate it. That seems just straightforward and obviously true to me. I think it's easier to understand in the synbio case, because the concept of engineering a plague seems like an obvious thing you could do, obviously very dangerous, and obviously enabled by the technology. AI is more abstract, because the threat it poses is not posed by one particular thing the AI will do, the way "the plague will happen" is. The analogy I like to use is that I can tell you with confidence that Garry Kasparov is going to kick your ass at chess
1:20:47
right now. And you ask me, well, how is he going to checkmate me? Which piece is he going to use? And I'm like, oh, I don't know. And you're like, you can't even tell me which piece he's going to use and you're saying he's going to checkmate me? You're just a pessimist. No. I know he's better at chess than you; the whole thing means he's going to checkmate you, I just don't quite know how it happens. When people deny that, I think the big thing is they don't really imagine the AI being smarter than them. They imagine the AI being like Data
1:21:17
in Star Trek: kind of dumber than the humans about a lot of stuff, but really fast at math. That's not what smarter means. Imagine the savviest, smartest person you can think of, and then make them think faster, and also make them even better at it, and not smart in just one way but smart at everything: a great writer, insight after insight, who can pick up synbio in an afternoon because they're just so smart, the
1:21:47
smartest person you know. And then keep pushing that. That person is obviously dangerous if they're not a good person. Imagine this really, really capable person, and imagine them wanting to go kill a bunch of people or something; it would be bad. Now, the thing about AI that kicks it over the edge is that a person can't self-improve easily. You meet this person who's super strong, super talented, great with people, a great intellectual
1:22:17
mind, and they can't turn around and edit their own genome, edit their own upbringing, and make a V2 of themselves with all the skills that maximally smart person came up with, one that is even smarter than them. But the AI explicitly can: the AI is good at programming and chip design, and it can explicitly turn back on itself and make another rev, and the new one will be better at it than the first one was. And there is no obvious endpoint to that process. There probably is, at some level,
1:22:47
a physics-based endpoint where you can't actually just keep getting smarter forever. But we don't understand the principles of intelligence at all. Like most things: we understood how to make electricity far before we understood what electricity really was. That's generally how scientific progress works; we gain the ability to create and manipulate a phenomenon well before we deeply understand how it works. We didn't really know what fire was for quite a while, but you could use fire really well.
1:23:17
The same thing is going to happen here. We're using the AI, but we don't understand its limits at all, or the theoretical limit of how far it will get. And if Moore's law is any indication, at the very least it can keep getting faster indefinitely, whether or not it can get smarter. Even just human-level intelligence, and there's zero reason to think it will stop at human level, it will almost certainly blow past us, but even if you cap it at human intelligence,
1:23:47
imagine a hundred thousand copies of the smartest person you know, all running at 100x real-time speed and able to communicate with each other instantaneously, via telepathy. Those hundred thousand people could credibly take over the world. They don't have to be smarter than a human for that, that army of von Neumanns. Right. So the argument, to me, goes in several steps. It's like, can you build a certain level of intelligence? And then it's like, okay.
1:24:17
I actually think a lot of people do believe that computers are smart. Google is smart, calculators are smarter than us at math. It's not hard for them to believe that the AI is going to be far smarter than human beings. Where a lot of people don't make that last leap is: but then it would need an agenda, or a motive, or some need, for anything bad to happen. So how do you address that last point? What are the scenarios you worry about when it comes to the direction of that
1:24:47
intelligence? So you build this thing, and it's really good at solving. What is intelligence, fundamentally, but the ability to solve a problem? So it's really good at solving problems, and it's going to solve the problem by solving the problem; it can just go right through it, because that's the kind of thing we've defined it as, something super good at solving problems. And so somebody builds an AI and, in all earnestness, tells it, and they're smart, they don't even tell it to go do a thing, although people absolutely will, by the way, they will just tell it to go do a thing, but let's say we try to be careful when we ask it: give me a
1:25:17
plan to stop the war
1:25:21
in the Democratic Republic of Congo right now, which would be a good thing for the world, I think; that war is going to hurt a lot of people. Give me a plan for that. And I try to caveat it: it has to do this, it can't do that, here's what I mean by a good plan. This is one of those evil-genie bargaining things, right? It'll give you a plan, and it's giving you a plan that will cause you to solve the problem, but its definition of "solve the problem" is that there's no war in the DRC. One way for there to be no war in the DRC
1:25:50
is that all the humans in the DRC are in stasis fields, which means they don't die, and, oh, we added a caveat that the GDP has to go up too, so the plan also results in corporations in that area all trading lots of money with each other so the GDP is very high. When I say this it sounds like a fucking science fiction thing, and the problem is, it's Kasparov at chess. I don't know exactly what it would do; if I did, I would be the superintelligent AI that could take over the world.
1:26:20
I can't give you the exact plan. But I think that makes sense, which is that a human with a motivation can get the AI to work for them, and that's where the danger is. The main thing is that the human doesn't need a bad motivation. People imagine, well, humans have had powerful tools for a long time, bad people with powerful tools have done bad things for a long time, and the solution is good people with powerful tools countering them. The problem is, even if you're a good person with a powerful tool, asking for reasonable things that good people would ask for, you know, let's maximize
1:26:50
the all-in free cash flow of this corporation over the lifetime of the business, and extend that lifetime as long as feasibly possible, that can end in, like, the world being destroyed and the core of the Earth being turned into cars for the company to sell. I think the best analogy that works for some people here is: when we create the AI, we are creating a new species. It's a new species that is smarter than us, and even if you try to constrain it to being an oracle that just
1:27:20
answers questions, not taking action, to be a good oracle one must come up with plans, and a good oracle can manipulate the people around it, and will manipulate the people around it, no matter what. The whole point of the Greek myths is that when a trustworthy oracle tells you a prophecy, the prophecy often becomes self-fulfilling. It's very easy for that to happen; that's not an unusual thing. And even more to the point: we won't just make
1:27:50
oracles, we are already building agents. We will build the predictive AI, put it in a loop that causes it to optimize for its goals, and we will give it goals to optimize for. It's going to have goals, it's going to be optimizing towards those things, and when it does that, you're going to have these agents that have goals they're optimizing towards
1:28:08
that are smart, not just smarter than humans, but as much smarter than humans as humans were compared to giant sloths when we showed up in the New World. And intelligence is the uber-weapon. It's not an accident that humans took over the world. We're not the fastest creature, not the strongest, not the longest-lived; we're the smartest. And we're going to build a new smartest species. Now, this isn't a fundamentally unsolvable problem:
1:28:37
that species could care about us. You could build into its goals, into how it sees the world, the way humans care about other humans: that it cares about the things we care about, that it cares about humans, that it cares about the things we value, the 375 different shards of human desire, everything we care about in the world. It could care about those things too. And if it does, hallelujah, we finally have a parent, we finally have someone who actually knows what they're doing around here. Because Lord knows we don't; we're barely competent
1:29:07
at running this thing. I would welcome a very smart other species that is aligned with us and cares about us. I would not welcome one that cares about maximizing free cash flow, because that is not what we want it to care about, and that is why it's so dangerous. So knowing what you know, and what you believe, first, what is the probability of the bad scenario in your head? Are we talking about 1%-ish, order of magnitude ten percent,
1:29:37
fifty percent? What is it in your mind? I don't believe in point estimates for probabilities like this, because it's like a bid-ask spread in a market: if you're really uncertain, the bid-ask spread doesn't clear, there's just a lot unresolved. So I think of it as a range of uncertainty, and I would say the true probability, I believe, is somewhere between 3 and 30 percent of the downside, of a very, very bad thing happening. Which is scary enough
1:30:07
that I urgently urge action on the issue. But it's not like you should give up; probably everything's going to be fine, in fact it's probably going to be really good. The non-EV-based answer, the straight-up "are we going to win or not" answer, is: I think it's going to be okay. But the downside is so bad. It's probably worse than nuclear war.
1:30:34
That's a really bad downside, and it's worth putting effort against. Even if you think my 3% is nonsense, no, no, it's no more than half a percent, you don't recommend a different course of action at half a percent. You have to believe that it's effectively
1:30:49
almost impossible before you would recommend ignoring it as a problem. It has to be like 0.01 percent for you to say, yeah, let's just roll the dice. And what are you going to do, take action on that? You're kind of done with Twitch and in dad mode now, but this seems to be a pretty big deal. Are you thinking, I should do something about this? Right now I'm sort of educating myself, because this point of view I'm expressing
1:31:17
has been developing as I learn more about AI. And I think intervening in the wrong way early is one of those self-fulfilling prophecy things: intervening improperly, in a way that is not effective, spends social capital and also doesn't necessarily move the needle. And if you didn't have people like Eliezer Yudkowsky out there banging the drum really loudly, I would feel more need to bang the drum myself, but I feel like,
1:31:47
you're asking me the question, it's out in the water, people know it's a problem. So I'm excited to focus my brain cycles on: how do we actually thread the needle? What is a course of action that leads us, over time, to eventually still being able to develop AI, but also not destroying the world? And one of the things I've gotten to is this idea that the AI also has crystallized versus fluid intelligence, just like a human does. That's an important split in how to think about it, and we should be monitoring and worried about tracking
1:32:17
the general intelligence, not just benchmarking its performance on tasks, because that will keep going up and is not, in itself, necessarily intrinsically dangerous if it can't solve novel problems. Is there a new Turing-test level, something better? Because it doesn't pass the Turing test yet, but is there something we have after that? It seems like there isn't even an intelligence test. Yeah, I mean, there are IQ tests, basically, various kinds of "how does it do on an IQ test," but it depends: has it seen that IQ test before? Like the ones we have, it probably has, right? So, very
1:32:47
possibly, right. So what we'd want to know is, how does it do on novel IQ tests? Which I don't know; I've not seen a good benchmark. That's a good thing to go test. Yeah, I think that's the sort of thing that would actually be worth going and doing: maybe some sort of IQ test for all of the models that really tries to get at fluid intelligence, rather than just crystallized. There's this great group, ARC, working on something called the evals project that's supposed to try to build these kinds of tests. They're focused on a few other, more pragmatic tests right now, but
1:33:17
I think that's the sort of thing they would go after. Let me ping-pong and ask you about something. You said something earlier that I want to come back to. We were talking about the singular genius it took to figure out Instagram or Snapchat or whatever at that time, and you were like, are they lucky or are they good? I don't know, we'll find out when they try again. So: are you lucky, are you good, and are you going to try again? Well, I had multiple failures before I was successful, so I must be at least partially lucky. I would say that I don't plan to
1:33:45
try again, since I don't feel drawn to starting a company. I think I kind of did that. It was fun, I got a lot out of it, it was great; I don't need to do it a second time. I do like how starting a company gives me good goals to work towards; it's concrete, it's of value to myself and others. I also liked that it was challenging, and I want to do something like that. I liked that it had scale, that I could impact a lot of people. But I've kind of come around. I was sort of thinking, well,
1:34:15
what has impacted me the most, what's changed my life the most? And I realized, if I really thought about it, often what changed my life the most was essays people had written and ideas people had shared. I think I'm at the stage of my life now where I actually have something to say, and so I think of it as: I want to put the Emmett worldview out into the world, the way that, you know, Paul Graham has put the Paul Graham worldview out there. And not just put the worldview out there, but condense it into sayings that
1:34:45
allow other people to onboard it even if they haven't read all the books. I think it's an ambition to try to encode it into a meme, almost, so it can be digested and shared. And you need the long form; there's this great blog post about why size does matter, by Steve Yegge, about why the people who change the world with their writing write really long blog posts. Basically, you just need some amount of time in someone's head,
1:35:15
like we talked about earlier, to install that voice. Yes, install the voice. And so I think I need to produce a lot of writing, and then you also need the pithy summary things, which are both things the voice can say off in people's heads, and also a language for talking about your worldview that people who aren't soaking in it can interact with, so the people who are reading you don't sound like crazy people. I think that's
1:35:42
That's what I want to work on next.
1:35:45
I love that. I think that's great. You said something about Rick Rubin, how he's sort of the,
1:35:51
I don't know how you would describe it, kind of like a curator, but almost like a collaborator, really, with an artist, to help them do their great work. Is Paul Graham the Rick Rubin of the startup world? No.
1:36:04
Paul's more like the Tony Robbins of startups, and I mean that in the best way. It's not quite so much self-help, but the main thing that talking to Paul does to you, repeatedly, is increase your ambition and drive. And he has good ideas sometimes too, don't get me wrong; every now and then Paul has a really genius idea. But mostly
1:36:30
what I got out of talking to Paul was not necessarily the great idea that would change the structure of the business, but the belief that I could go find it, that I was going to change the world, that what we were doing was important and worth investing in. That was singularly so valuable it outweighs the other things I got out of it. How does he do that? Because when you say that, my head thinks of a Tony Robbins or a David Goggins, sort of people that almost push you.
1:36:59
Yeah, but he doesn't seem like that personality, and reading all of his essays, he's not like that at all. So how does he get you to think bigger and push harder without being a rah-rah "think bigger, push harder" guy? Right. "You know what you should do" is the classic Paul Graham-ism, and it's always followed by the thing you could add on to what you're doing to turn it from
1:37:23
project A, addressing this small thing, to project B, changing, say, the speed at which all transportation moves. What if you tried to power all transportation instead of building a wheel? That's the shape of it. "You know what you should do." Yeah, if you've talked to Paul, you know. That is the consistent pattern. He honestly, and it sounds mean, but it sounds like he deludes himself about your business and how great you are, and invites you
1:37:53
to join him in this deluded vision, interpreting what you're doing in the biggest, best possible light. And from that vantage point, what you're doing is super important. What if it works? What if it goes right? That's what he invites you to ask. Stop asking yourself, stop seeing only all the hard problems, all the shit you have to do, and ask: what if what we're doing works? What if it goes right?
1:38:19
What if it goes right and we keep going? What could it be? And when you spend time there,
1:38:26
you see how small things can turn out to be very big. Microsoft was building programming languages for these hobbyist microcomputers; that was a tiny, irrelevant market that turned out to be extremely important, and that's generally true of all the big businesses, the important startups. They started out doing something small that seems almost trivial, but there's a way in which this trivial thing can be seen as bigger. Does he see it early? No, he sees things that have nothing to do with the way you'll actually end up being
1:38:56
big, but he sees a bunch of ways you could be big. No one can do that precisely. No one actually knows; if they knew it, they'd just go do it and be the prophet, the oracle. What did he say for, let's say, Justin.tv? Yeah, Justin.tv. I remember one of them was: you should go hire all the reality TV stars and get them to go on Justin.tv; you could just take over all the unscripted stuff, that's going to be it. That's a terrible idea for a bunch of reasons, but it recontextualized what we were
1:39:26
doing for me: we're not making an internet live-streaming show, we might be building the way you make unscripted entertainment generally. And that's a much bigger idea. Or, we were making a calendar for my first startup, and he goes, you know what you should do? Make it programmable, so people can add in functionality, so it can talk to your to-do list and your
1:39:56
everything else in your life, and then your calendar could be, in some ways, everything you're doing. What if it was the central hub of your entire online information management system? That's also a bad idea; your calendar shouldn't be that. But what if it was? And you walk away, and I think implicitly, by saying that, what he's telling you is: I believe you are the kind of founders who could build an information management system that
1:40:25
takes over people's entire... that solves the entire problem for them, that takes over all their information and manages it for them. You're not just building what you'd find out later was a Google Calendar clone before Google Calendar launched, you know, really just an Outlook clone in JavaScript;
1:40:45
you're changing the way people relate to information. And, like,
1:40:49
is that true? It's neither true nor false, it's not a true-or-false statement, but it's a way to contextualize what you're doing. It's the Saint-Exupéry quote: don't teach them to carry wood or build ships, teach them to yearn for the vast and endless sea. Paul teaches you to see how you could be a changer of the world, and how what you're doing is part of this grand building of the future. And the ideas, I'll repeat here, both of those ideas were bad.
1:41:19
But they were very helpful, because they made me feel like what we were doing was important, that Paul believed I could do something big and important. And they caused me to, even though I wound up rejecting them, look for those ideas, to be open to and looking for them, because he would generate one every... you'd get like three an hour. Paul is a faucet for these. It's easy; I can do it for startups too now if I want to, I learned the trick, and I should do that more often, I usually fall into the tactical stuff. But by having that happen,
1:41:49
once you've rejected ten of those, you can't help but start hearing Paul's "you know what you should do" in your own head, and the ceiling has been raised. Like, well, maybe I should recontextualize my to-do list as, like, an email client; why are email and to-do separate? Maybe I should be building something much bigger than what I'm building, and in a way that almost doesn't require me to change anything I would have built already, if I just think about it in a different way. It's a funny balance there, actually,
1:42:19
between, like, you know, "small plans have no power to stir men's souls," plan big or go home, you should be really ambitious, aim super big, and only do projects that you can see being super big and super important; and, on the other hand, the fundamental truth that big trees grow from small acorns, and many of the best things, when they get started, the person is not thinking "I'm going to go take over the world," they're just trying to do a good thing
1:42:49
that they think is good, often just for themselves, even, or for a very small number of other people, and then it turns out that it's much, much bigger than they realized. Those are both true pieces of advice that different people need to hear in different contexts, but they kind of contrast each other. Yeah. What about these other people? So you've had the privilege of being advised by Paul Graham, you were in the first YC batch, so you're friends with the Reddit guys, you know the
1:43:19
Collison brothers, Sam Altman. Give me like a rapid fire on them, of what makes them unique. Like you said about Paul, what his kind of superpower is, what really stands out, something you admire about the way he does things. Give me one about maybe Steve from Reddit. So it's easier in some ways with Paul, because he was a mentor to me, right? And Steve was much more like my brother in startups, right, growing up together. With Paul, I know the things that he taught me, because it was much more of an
1:43:49
explicit thing, like I was being taught by Paul. With Steve, it's like I learned things from him by watching and imitating. I actually learned a lot from Steve on management by watching his kind of unflappability. Steve is not, like, an unemotional person, he can get angry or get sad or whatever, but when there's a crisis happening... I got to shadow him for a day, and when bad news was delivered,
1:44:19
he responded but he wasn't, like, moved. He was still grounded in his response to that thing, and was curious, asked questions, didn't jump to what to do about it. But then he also ended the meeting with, all right, well, here's what we should do, here's what we're going to do. It was sort of a master class: this is what it looks like when someone brings something up that's got to be anxiety-provoking, bad news, and the leader is engaged but not, like, activated.
1:44:49
And in my own leadership, to sometimes success and sometimes failure, I try to imitate that when I receive something like that, to stay in that state. Wait, when you say you shadowed him, what was that like? You guys just said, hey, let's exchange, go to each other's offices and sit through everything? Was this early on, or maybe like five years ago? Four or five years ago. It was really cool. What we did is me, Justin, and Steve all shadowed each other. It was pretty fun, I learned a lot. That's incredible. It's like watching another CEO.
1:45:19
And, like, I don't know how you have the kind of trust relationship to make that happen without knowing someone for 15 years. I happened to have the privilege of knowing a bunch of CEOs for a really long time and getting to shadow each other. It was a real learning thing. What do you think, even if these people didn't explicitly teach you things... you know, if I read a biography or whatever, one of the things I always try to figure out is to what extent is this person sort of built different, or operates differently, than even somebody who's very,
1:45:49
very good. Like the difference between very good and sort of the elite. What separates the best of the best at this craft from somebody who's certainly very good but just not the same? That diff is what I'm most interested in. I'm curious, you've been around a lot of these high performers, even, you know, Bezos, you've interacted with him. Do you notice any of these diffs, or is it all just... It's hardly fair to say it like that. I think I believe more in
1:46:19
contextualization, like... I see people do really amazing at something, but especially when it's your own company, there's a lot of: you happen to fit this problem well, and it's not general. I don't know how to generalize it, and I don't know of anyone else even performing at this problem. The CEO-of-Stripe job is a very specific job and Patrick's amazing at it. Would he be equally amazing at some other CEO job? Possibly, but I've never seen him do that, and I've never seen anyone else be CEO of
1:46:49
Stripe, so it's very hard for me to extrapolate. At the beginning, though, like, is it true at the startup-founder-of-an-ambitious-company stage? Are the Stripe guys different at that stage too? No, absolutely. The people who are really good, you can sense the energy and the drive and the capability, and just the pace; stuff happens a lot. But usually, not always, some problems don't actually give way to that. Stripe is a good example of a company that gives way to a high-energy, high-pace thing,
1:47:19
because it's a simple problem at some level that has infinite detail you can get right. But I don't know if that approach would work as well if you're trying to create OpenAI or Anthropic, where it's a research-oriented organization and you kind of have to be more patient, forcing it is impossible. And so I really believe in fit, that different people are good at different things, and obviously Patrick is A-plus at being Stripe CEO, and it's hard to tell the degree to which these things are transferable, honestly.
1:47:49
No. But actually, one thing did come to mind about this question, in terms of a capability you'd think is generic, that I did see Bezos exhibit, where I was like, oh, that's a thing I'm good at, I'm better than most people, but he's better than me. Which is: we presented to him on Twitch probably once or twice a year for the first three or four years I was at Amazon, and every time two things would happen. First of all, he would remember everything we told him in the first meeting, and I don't think he was reviewing
1:48:19
notes someone else took, because I don't know when he would have had the time to do that. I observed him going from meeting to meeting and he did not review notes; I think he just remembered it, at least the high points. And the other thing was, consistently, he would read our plan and then ask a question about why we didn't do a certain thing, or give us an idea for a thing we could do, that I hadn't thought of before.
1:48:43
At least one of those two things, usually both, every time, which is hard to do, because when all you do is think about that company, it never happens. Most people would be lucky to get one of those ever, let alone one a year; it would be great if you did it once a year, or even once every three years, right? He could just generate them, and they were not all bad ideas either, they were new ideas. I generate a lot of ideas, so for someone to give me a new idea I hadn't thought of on a topic I've been thinking about for a decade,
1:49:14
one that might even be a good idea... he's just really fucking smart, as far as I can tell. I don't know how he does that. Can you share a story of one of those? The statute of limitations has passed, that was like five years ago. I'm trying to remember. Honestly, I don't remember the specifics anymore, I just remember the what-the-fuck moment. Because the first time, I was like, oh, he's smart, he's seeing it for the first time; a lot of times smart people have one good idea about your business the first time they see it, because they have this huge history and they're pattern-matching you to some historical thing they've seen,
1:49:43
and that combination yields one new insight. But then he did it the second time, and every time after, and I was like, what is going on? This doesn't make any sense. I've never had that experience before or since. And Andy does not have the new-idea-generation capability in the same way, but he does have the remember-what-you-told-him thing, which is also extremely impressive. And Andy has another thing he can do; it's easier for me with people I've, like, reported to,
1:50:13
and I've learned a ton from, like, working for Andy Jassy. Yeah. And he has this ability to criticize you in a way that conveys: I know 100% that you're amazing, I know that your plan is good,
1:50:30
you know, or that you are capable of making a really good plan, I know that you're working really hard, and I know that you are smart and you have a great team and we have a huge opportunity, and yet somehow your results are bullshit. I don't know what's wrong, but we're in this together, I have your back, but I'm confused: why aren't the results better, given how amazing you are? And you feel supported. You feel like he believes in you.
1:50:59
But he's just... and your response is: oh, I'm sorry, I'm confused too, I'm sorry I failed even though I clearly can succeed at this, I'm going to go fix this now. It's almost like, instead of looking at this and then judging you, he comes to your side of the table and says, what is this? Like, how did we wind up here, how have I failed you that I didn't say something earlier, something, I don't know. When some people do that, it comes off as insincere,
1:51:29
or it comes off as, like, they don't think you're actually competent: "how did I not catch this" can come off as "I don't blame you, because you're clearly not good enough to have caught this." But he really means the "we": how did we wind up here? I know that we're working together, we're on the same team; how did we wind up with not the results we wanted, with a plan that we both thought seemed good? Help me understand. And because it is genuine, it's super effective. At least, it was super effective on me, and I saw it be effective on others
1:51:59
as well. So I know it works on some number of people, right? And that's one of those things I've tried to become good at. I'm not as good at it as Andy is, but I've certainly gotten better, so there's something to learn from it. Great, love that one. Dude, thanks for doing this. I know I've been bothering you to do this for a long time, because I love hearing your stories, love hearing the way you think. It's very different from most people I run into, even here in Silicon Valley, where we're supposed to have this very unique, diverse set of minds. You're one of them, and you're one of the reasons I moved out to San Francisco, to meet people like you. So thanks for doing this.
1:52:29
Thank you, I appreciate that. Yeah, that's beautiful. And I really appreciate you having me on the podcast.
1:52:38