Jon Krohn: 00:00:00 Welcome to another episode of the SuperDataScience podcast. I’m your host, Jon Krohn. I’m joined today by an outstanding guest, Jeff Li. He’s a senior data scientist at Netflix, previously of Spotify and DoorDash. He goes into a tremendous amount of detail on what it’s like to work as a data scientist and a machine learning engineer at these top companies, operating at huge scale on problems like forecasting, with budgets of hundreds of millions or billions of dollars in play. It’s a really great episode. I think you’re going to enjoy it. This episode of Super Data Science is made possible by Airia, Dell, Intel, Fabi, and Anthropic.
00:00:39 Jeff, welcome to the SuperDataScience Podcast. It’s such a treat to have you here. How are you doing, man?
Jeff Li: 00:00:45 I’m doing good. Yeah, thanks for having me. I’ve heard about SuperDataScience since I literally started in data science. I saw it on Udemy like eight years ago. So yeah, glad to be here.
Jon Krohn: 00:00:55 Nice. Yeah. Shout-out to Kirill Eremenko, the founder of the Super Data Science company and this podcast. It sounds like you might have even, I think you’ve been aware of the podcast since back when he was hosting, many years ago.
Jeff Li: 00:01:05 Yeah, yeah, I heard about it. Well, I remember when I was first trying to get into data science, I was on Udemy and I was like browsing courses and I saw Super Data Science there and then, yeah, it’s always kind of peripherally. I’ve been aware of it, so yeah.
Jon Krohn: 00:01:20 What were you doing at that time that you were doing that exploration?
Jeff Li: 00:01:24 So this was before I even started. I was in consulting and I was trying to decide what to do next. I hated consulting.
Jon Krohn: 00:01:32 It’s pretty much why consultants ended up doing that too, right? You’re like, well, I don’t really exactly know what to do, so let’s work at a bunch of different companies on a bunch of different problems. Kick the can down the road.
Jeff Li: 00:01:41 Exactly, yeah. And then you realize that it’s not for you, and everyone kind of realizes that and then kind of have to pick something and then commit to it for a period of time. And then I was browsing a bunch of Udemy courses. I remember there was the Super Data Science, like A to Z.
Jon Krohn: 00:01:58 Machine learning A to Z.
Jeff Li: 00:01:59 Yeah, Machine Learning A to Z. And I remember I bought it for 10 bucks or something from some Udemy sale, and I kind of used that as one of the resources to help kickstart my data science journey.
Jon Krohn: 00:02:10 Nice. And now actually coming full circle, you and another famous content creator, Ken Jee, have been creating courses that take a little play on that name. Machine Learning Process A-Z is one of your courses, for example.
Jeff Li: 00:02:24 Yeah. Yeah, that’s right. So I mean, initially we started off with an algorithms course, and then I told Ken, I was like, hey, honestly, when I’m working day-to-day, the algorithms don’t really matter that much. It’s your end-to-end process of scoping the problem, getting the right data, making sure you’re actually solving the right problem. That end-to-end process, I find, is more important. And if you get that right, you can kind of plug and play many different algorithms.
Jon Krohn: 00:02:50 Thousands of people have bought that course. I think it’s on 365 Data Science. Yeah, 365 Data Science should be easy to find. We’ll be sure to include that in the show notes. But this has been a little bit of a tangent. I didn’t mean to get talking about the podcast and all things A to Z. I wanted to talk about where you are today. So you are a senior data scientist at Netflix, a former data science manager at Spotify, and previously a machine learning engineer at DoorDash as well. In those roles, you specialized in forecasting, experimentation, and causal modeling, and I think something that’s really interesting about what you’ve been doing is the scale at which you’ve been operating. Spotify, for example, has an over-a-hundred-million-dollar-a-year ad business. Netflix has billion-dollar supply and demand forecasting. So how is data science different at that kind of scale? What’s it like being a practitioner at that scale?
Jeff Li: 00:03:45 So I could probably actually contrast it with zero to one, because at Spotify we started from zero to one, jumpstarting the podcast business, and you kind of have to scale it from there. So I would say the mindset when you are, say, a startup going zero to one is you need to be quick, you need to be hacky, and you need to be able to move fast and be nimble to feedback. So in that kind of phase, you are building things quickly, you’re hacking things together, you’re not doing things that are very efficient. You’re not picking at the small details of how your model is versioned, how your model is named, what the schema it’s actually writing to is, how you are serving the models online. You’re just trying to move as fast as you can. Versus when you’re doing it at scale, you’re thinking less about being really hacky and hacking things together.
00:04:42 You’re thinking more about how can I do this once and reuse it multiple times, over and over again. So at that level, you start to need much more robust, say, naming conventions. You need much more robust repeatable frameworks, and you need, say, model ops. You need auditing or alerting for, say, your models. A lot of times, if you’re being hacky, you might build a thing once for one model and have to rebuild it each time. If you’re building for scale, you ideally want to build it once, and then you get the benefit across all your models. So that’s really the mindset shift from going zero to one to building for scale.
Jon Krohn: 00:05:22 I like that. I got to say, the Spotify ad thing that you built from scratch, that is, I can’t remember exactly, but about a year ago we were looking at, so we ran into a problem. We used to host this podcast on SoundCloud. So when Kirill founded the show 10 years ago, it was on SoundCloud, and that worked. There actually were some cool things about SoundCloud. So for example, it was public how many downloads every episode was getting, and that looked great. It made it so that if somebody looked up the Super Data Science podcast, you could see, wow, there’s all these downloads, it’s a pretty popular show, maybe I should accept the invitation to be on it, or whatever. And so one of the weird things about the podcast world, really, outside of SoundCloud, is that there’s very little visibility into downloads. On YouTube you see views, but how many times has somebody downloaded a podcast episode on Spotify or Apple Music? It’s kind of hard to tell. So actually, that’s about the only thing that we lost by moving away from SoundCloud. But we had to move away from SoundCloud because they only support 500 episodes. You can actually have more than that in SoundCloud, but they won’t push those to Apple Music or Spotify or whatever.
Jeff Li: 00:06:37 I see.
Jon Krohn: 00:06:37 And so now that we’re at almost a thousand episodes, that’s a problem when half our catalog isn’t available. So a year ago, the point that I’m getting to here is that we did a comprehensive search of where we should be hosting our show. What’s the best platform? And at the end of those months of analysis, it was an absolute no-brainer: Spotify’s Megaphone. Yeah, it’s a great platform. If anybody’s thinking about getting into podcasting, I definitely recommend starting with that platform for publishing. If you’re going to have advertising within your podcast, it’s by far the best tool out there. So nice work.
Jeff Li: 00:07:16 Yeah, when I was at Spotify, I remember we bought Megaphone, and it was a huge purchase because podcasts were a big play at the time.
Jon Krohn: 00:07:24 Oh, I see.
Jeff Li: 00:07:26 Spotify was, when I joined, there were the big deals with Joe Rogan, Call Her Daddy, and then Spotify wanted to own that value chain, from having the biggest hosts but also getting to smaller-scale creators. So the Megaphone and also the Anchor plays were to own that kind of entire value chain. So
Jon Krohn: 00:07:45 There you go. I didn’t know that it was another business that had been acquired.
Jeff Li: 00:07:48 Yeah, yeah, it was acquired.
Jon Krohn: 00:07:51 So the ad business was something that was created from scratch before the Megaphone acquisition?
Jeff Li: 00:08:00 To talk through the podcast business: basically, there are different segments of podcasts. So there’s the really big names like Joe Rogan, Call Her Daddy. Those were exclusive deals with Spotify. So it’s in the news, like hundred-million-dollar podcast deals with Rogan. Those are the big names, and the bet was to get users onto the platform because they’d want to listen exclusively to those shows. And then Megaphone was kind of this enterprise tier, medium-size podcasts that were pretty established. They had audiences, but they weren’t as big of a name. And then there’s also Anchor, which was kind of the smaller scale: hey, I’m an individual podcaster at home and I want to start creating a podcast, and you can do that immediately through the tool. So that was really the play, just to own that whole value chain. Yeah,
Jon Krohn: 00:08:55 Nice. Yeah, no, they’ve done a great job. It is a very cool platform. Anyway, let’s talk more about the data science sides of this. So in particular, let’s talk about forecasting. Sure. So yeah, tell us about what that means. I mean, presumably something, it’s about trying to predict something in the future, so it’s kind of like a time series analysis, I suppose.
Jeff Li: 00:09:18 Yeah, yeah. I guess at a really simple level, we’re just making a prediction about the future along the dimension of time. So I think for anybody who’s studying ML, all these core ML techniques apply, but we’re just using a different type of data. And naturally, because there’s autocorrelation within the data, you kind of require some slightly different models, slightly different approaches. But at its core, a lot of time series models are basically linear regression models, but with certain unique characteristics.
Jon Krohn: 00:09:57 So autocorrelation there, meaning that the data are correlated with themselves over time. So for example, if you think about stock prices over time, the best predictor of what the stock price is today is probably what the stock price was yesterday.
Jeff Li: 00:10:10 Exactly.
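The stock-price intuition above can be made concrete: lag-1 autocorrelation measures how strongly each value in a series tracks the value right before it. Here is a minimal pure-Python sketch; the price series is made up purely for illustration:

```python
# Lag-1 autocorrelation: covariance of the series with itself shifted
# by one step, normalized by the variance.
def lag1_autocorr(series):
    n = len(series)
    mean = sum(series) / n
    var = sum((x - mean) ** 2 for x in series)
    cov = sum((series[t] - mean) * (series[t - 1] - mean) for t in range(1, n))
    return cov / var

# A slowly drifting "price" series: today's value closely tracks yesterday's,
# so the lag-1 autocorrelation is strongly positive (about 0.66 here).
prices = [100, 101, 103, 102, 104, 106, 105, 107, 109, 108]
print(round(lag1_autocorr(prices), 2))
```

In practice you would reach for `pandas.Series.autocorr` or `statsmodels.tsa.stattools.acf` rather than hand-rolling this, but the idea is the same.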
Jon Krohn: 00:10:11 And so there’s that inbuilt correlation, that autocorrelation, self-correlation, in the data. What kinds of tools do you need to apply to handle that?
Jeff Li: 00:10:22 So to build these models, the common models you’ll learn in textbooks are ARIMA models, and you’ll use exponential smoothing models. A common popular one is Prophet. But I think AI is a big hot topic right now, and transformers and LSTMs, in theory you can use those kinds of models for time series data. Because for LLMs, you’re basically looking at, say, tokens and you’re trying to predict the next token. Time series is actually very similar: you take the current time unit and predict the next time unit. So there is kind of a proliferation of foundation models for time series, but they’re definitely not as hot as the language models right now.
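Jeff’s earlier point that many time series models are basically linear regression can be shown with the simplest autoregressive model, AR(1): regress each value on the one before it. This is a toy sketch on synthetic data, not production code from any of the companies discussed:

```python
# AR(1): y_t = a + b * y_{t-1}, fit by ordinary least squares on
# (yesterday, today) pairs -- literally a one-feature linear regression.
def fit_ar1(series):
    x, y = series[:-1], series[1:]       # predictor: the lagged value
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b

# Synthetic series generated by y_t = 2 + 0.8 * y_{t-1}; the fit
# should recover those coefficients almost exactly.
series = [0.0]
for _ in range(50):
    series.append(2 + 0.8 * series[-1])
a, b = fit_ar1(series)
print(round(a, 3), round(b, 3))
```

ARIMA generalizes this by adding more lags, differencing, and moving-average error terms, but the regression core is the same.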
Jon Krohn: 00:11:10 To break down some of those terms: ARIMA, that’s one that I’m familiar with from my trading days. It’s a very common statistical approach to handling this autocorrelation. You also mentioned there Prophet, and so that’s prophet like a guru, as opposed to profit margin. It’s a prophet being able to see the future. And so that’s a tool originally out of Facebook.
Jeff Li: 00:11:38 Yeah, it came out of Meta, Facebook. The crux of the model is it’s kind of an additive model where you break the time series into seasonality, so if there’s a weekly bump or a monthly bump or a yearly bump, and a trend, where trend is just the direction in which the time series is moving. You basically take the trend plus the seasonality. So it’s really just taking the time series and decomposing it into two or three pieces, which are trend, seasonality, and the error term. So yeah,
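The trend-plus-seasonality idea can be sketched by hand. This is not Prophet itself, just a toy additive decomposition using a centered moving average for the trend and per-position averages for the seasonality, on synthetic data:

```python
# Additive decomposition: series[t] ~= trend[t] + seasonal[t % period].
# Assumes an odd period so the centered moving average is symmetric.
def decompose_additive(series, period):
    n, half = len(series), period // 2
    # Trend: centered moving average (undefined at the edges).
    trend = [None] * n
    for t in range(half, n - half):
        trend[t] = sum(series[t - half:t + half + 1]) / period
    # Seasonality: average detrended value at each position in the cycle.
    buckets = [[] for _ in range(period)]
    for t in range(n):
        if trend[t] is not None:
            buckets[t % period].append(series[t] - trend[t])
    seasonal = [sum(b) / len(b) if b else 0.0 for b in buckets]
    residual = [series[t] - trend[t] - seasonal[t % period]
                if trend[t] is not None else None
                for t in range(n)]
    return trend, seasonal, residual

# Linear trend plus a made-up weekly shape; the decomposition should
# recover the weekly shape from the detrended series.
pattern = [2, -1, 0, 1, -2, 3, -3]
series = [0.5 * t + pattern[t % 7] for t in range(28)]
trend, seasonal, residual = decompose_additive(series, 7)
print([round(s, 2) for s in seasonal])
```

Prophet fits its trend and seasonal components very differently (piecewise trends, Fourier seasonality), but the output has this same trend + seasonality + error structure.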
Jon Krohn: 00:12:12 It made a big splash when it came out, and I think Sean Taylor was one of the key people behind developing it, and he’s been on the show. I’ll be sure to include that episode in the show notes for people who want more on that Prophet tool. All right. So when you’re doing forecasting, at the kind of scale that you’re doing forecasting at, or maybe just forecasting in general, what are the kinds of pitfalls that people run into? What are the tricky parts about getting forecasting right?
Jeff Li: 00:12:44 I can give two answers here, one from a technical perspective and one from a general approach perspective. So I think a common pitfall with forecasting is that with time series data, the order actually matters. A common rookie mistake is to actually split the data doing a random
Jon Krohn: 00:13:03 Split, like a train test split.
Jeff Li: 00:13:04 Yeah, yeah. It’s easy to miss that, but you definitely do not want to do that. That ruins the time-dependent aspect of it. But I think actually, at a higher level, from my experience at least, I’ve found that the biggest pitfall in business is that people want to use very complex techniques for time series problems. So it’s fun to use LSTMs, it’s fun to try to use transformers for your time series data. But what I’ve found is that a lot of stakeholders who use the forecast really want to know why we are making this prediction and what assumptions are going into that prediction. So that’s why univariate time series models and, say, simple additive models or GLMs, which are generalized linear models, are still going to be important, because the interpretability is always going to matter. With a lot of the very complex foundation models, it’ll make a prediction and we don’t really know why it made that prediction, so we can’t actually trust it and go to stakeholders with it. So I would say that’s probably a bigger pitfall I’ve seen, where more junior data scientists, including myself, get really excited about all these sexy techniques. And in reality, a lot of times you don’t need to actually use those.
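The rookie mistake Jeff describes, and the fix, fit in a few lines. With ordered data you pick a cutoff and train strictly on the past; the series values here are just stand-in day indices:

```python
import random

# 100 "days" of observations, oldest to newest (made-up stand-in data).
series = list(range(100))

# WRONG for time series: a shuffled split mixes future points into training,
# so the model gets to "see" the future it is supposed to predict.
random.seed(0)
shuffled = series[:]
random.shuffle(shuffled)
bad_train, bad_test = shuffled[:80], shuffled[80:]

# RIGHT: pick a cutoff; train strictly on the past, evaluate on the future.
cutoff = 80
train, test = series[:cutoff], series[cutoff:]
assert max(train) < min(test)  # no future leakage into the training set
```

With scikit-learn, passing `shuffle=False` to `train_test_split` gives the same chronological behavior.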
Jon Krohn: 00:14:27 It’s been a while since I’ve done time series analysis, it’s been a few years, but before the pandemic, so this is like 2019, that was kind of the end of me teaching in-person deep learning courses. So from 2016 to 2019, I was teaching these six-week deep learning courses at the New York City Data Science Academy.
00:14:52 And being in New York, we are actually recording today, which I didn’t mention at the outset, we’re recording in person today at a great studio. Amazing. And so being in New York, there’s a lot of people in finance, obviously. And so you get a lot of people, either individual traders or people working at financial institutions, where they’re trying to predict the future. They’re trying to forecast commodity prices or stock prices in the future. And all of those people would try to use deep learning approaches, like you mentioned, LSTMs, transformers, different kinds of deep learning approaches. And it could have just been the tooling that we had at that time, because this is now talking six years ago at the latest, but nobody ever got results anywhere close to what they could with simpler statistical approaches.
Jeff Li: 00:15:42 Right. Yeah, I do think that over the years, so there’s these forecasting competitions, they’re called the M competitions, M1, M2, and so on, held every five or ten years, and they have these competitions because they basically want to figure out what’s the best forecasting approach. And historically, when those competitions started, the univariate time series methods won quite a bit, but then over the years deep learning has gotten better. But what they found more recently is that typically the hybrid approaches have had the best performance, where you have an ensemble of both tree-based models and deep learning models, and you have univariate time series models that handle certain cases. So I think that right now, it’s not like one method will replace the other. It’s kind of like both will be used in tandem. They’re kind of like a team, and they’re both going to be used for different types of problems. But yeah, I found that in practice at work, it’s not needed as much. But maybe if you’re a quant trader and you need to edge out 1.1% accuracy, maybe it’s a little bit more useful.
Jon Krohn: 00:16:55 Yeah, those are great tips. So I’ll have a link in the show notes to these M forecasting competitions to give people a sense of where they can be looking for whatever the latest kind of modeling approach is, to get the absolute state of the art. And unsurprising to hear, as usual, that it’s an ensemble.
Jeff Li: 00:17:12 Yeah, yeah, it’s an ensemble right now as of the last paper I read.
Jon Krohn: 00:17:16 Yeah. And I suspect that in the future it’s just going to be bigger ensembles with more approaches in ’em. Going back a little bit, you mentioned that the other big problem that people run into with forecasting is doing their train test split wrong by just randomly splitting. If you think about a big table where every row is a time point, they’re just randomly taking some rows, putting them in the training set and the other rows, putting them in the test set. I think it’s pretty obvious, but I would just love to have you confirm for me and the audience that I have this correct, that the right way to be doing a train test split with forecasting is to train up until a certain time point, so use all the data points up until a certain time point and then use that as a cutoff. And you’re predicting after that cutoff.
Jeff Li: 00:18:02 Yeah, yeah, that’s right. And I would say with time series, we typically don’t want to do one split. There’s two approaches. There’s expanding window, so you kind of imagine that cutoff, you iterate through the different cutoff points.
Jon Krohn: 00:18:16 I see.
Jeff Li: 00:18:16 And then the training data expands, so it’s called expanding window, and there’s also sliding window where the training window stays static, and then it just kind of slides down the dataset. And that’s usually the common cross validation techniques that we’ll use to see if our model is performing better or not.
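The two cross-validation schemes Jeff names can be sketched as index generators. The parameter names `initial`, `width`, and `horizon` are illustrative, not from any specific library:

```python
# Expanding-window CV: the training window grows each fold; the test set
# is always the next `horizon` points after the training window.
def expanding_window(n, initial, horizon):
    end = initial
    while end + horizon <= n:
        yield list(range(0, end)), list(range(end, end + horizon))
        end += horizon

# Sliding-window CV: the training window keeps a fixed width and
# slides forward through the series.
def sliding_window(n, width, horizon):
    start = 0
    while start + width + horizon <= n:
        yield (list(range(start, start + width)),
               list(range(start + width, start + width + horizon)))
        start += horizon

# Each fold trains only on indices that precede its test indices.
for train_idx, test_idx in expanding_window(12, initial=6, horizon=3):
    print(len(train_idx), test_idx)
```

scikit-learn’s `TimeSeriesSplit` implements the expanding-window variant out of the box, and its `max_train_size` argument caps the training window to get the sliding behavior.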
Jon Krohn: 00:18:34 That again sounds similar to pre-training a language model, where you pass a window over your entire corpus of data, just trying to predict the next word.
Jeff Li: 00:18:44 Yeah, I find language models and time series to be pretty, not the same, but pretty adjacent. Similar techniques can be used for both. So
Jon Krohn: 00:18:54 Digging into this a little bit more, the similarities and differences between time series models and language models: you have said, I have a quote here, that forecasting isn’t as sexy as NLP or LLMs. You said that previously, I think on the Data Scientist Show.
00:19:14 And you said something similar to that already in today’s episode, but with now LLMs getting state-of-the-art results in ensemble with other approaches, like you mentioned, tree based approaches and these statistical approaches, does that change your interest a little bit in potentially using transformers or other kinds of language models in forecasting? Is it something you explore more and more?
Jeff Li: 00:19:41 Oh yeah. I mean, at work it’s like transformers, AI agents, LLMs, that’s the hot thing. So everyone wants to
Jon Krohn: 00:19:49 Figure out AI agents for forecasting.
Jeff Li: 00:19:50 Yeah. Yeah. I’m trying to figure out how we can do that. So it’s like the hot thing, so everyone’s excited, so you’re trying to figure out use cases for it. So I totally would. If there’s a good foundation model that can prove strong accuracy that’s significantly better than the models that I’m currently running, and I can make a reasonable business case for it, I totally would be open to it. But right now, we’ve done some tests where we tested some of the foundation models, I think it was like Chronos, TimesFM, and there’s one from Nixtla, I forgot the name of it, and they just didn’t beat our simple approaches internally. But they’re going to get better, and then in the future, maybe we could justify actually using them.
Jon Krohn: 00:20:42 And then I guess there still would be the explainability issue that you mentioned earlier.
Jeff Li: 00:20:45 Yeah, we’d still have the explainability issue, and if somebody is able to solve that and we can use these models, then yeah, I’d totally be down.
Jon Krohn: 00:20:55 There’s probably some situations where the explainability matters more than others. I’m just going to absolutely, completely conjecture on some kinds of scenarios where that might happen at Netflix, and you don’t need to give away anything proprietary at all. Just as I think about it, something like a model predicting what somebody might want to watch next and what to show them on the homepage: maybe you don’t really care about explainability so much there, because showing lots of different options is not like a mission-critical decision. Whereas some things, like predicting maybe where to allocate some financial resources, what market to put advertising money in or something, where it’s potentially large sums of money, you don’t want to be doing that without understanding why you’re doing it.
Jeff Li: 00:21:45 Yeah. So I think you got it right. And if I were to bucket the two things you said, there’s two common types of forecasting problems: you have operational use cases, and you have strategic use cases. So I actually learned this from a forecasting course I took from, his name’s Tim, I dunno if you’ve met him, but he basically bucketed this concept into operational forecasts, where you build these forecasts to help run your operations. Let’s say that you’re Amazon, right? You’re trying to predict where deliveries are going so you can effectively run fulfillment. In that scenario, you will probably want the most accurate model. You don’t care as much about explainability, but if it’s really, really accurate, you’ll get significantly more efficiency gains from the operations side of it. Versus there’s the other bucket called strategic forecasting, and that’s typically, say, FP&A, where they’re saying, hey, I think this is where the business is going to be, we need to make decisions off of this forecast. So typically, if your forecast is for a strategic purpose, you want high interpretability. If it’s for an operational purpose, you don’t care as much about it. So I’ve worked on both those problems, and with those operational approaches, you can actually use more sexy deep learning models if you can prove out the accuracy. But if you’re trying to make decisions off of it that are strategic, then you actually really want to know the assumptions going into the forecast.
Jon Krohn: 00:23:23 That was a really elegant, nice way of bifurcating the kind of hand wavy idea that I was describing.
Jeff Li: 00:23:29 Nice. No, you triggered that. So yeah. Yeah.
Jon Krohn: 00:23:32 Perfect. Really cool. Alright, so we’ve talked about some of the approaches that you’ve been applying at top companies like Spotify, Netflix, DoorDash, some of the most competitive companies to get into for any kind of role, whether it’s data science or software engineering or marketing, or even probably people working in legal or HR. Those are probably some of the companies that people most want to get into. What kinds of tricks do you have for our listeners to get hired by these top firms?
Jeff Li: 00:23:59 Yeah, I have so many. I’ll kind of talk through all the tricks that I used. So when I was trying to get into DoorDash, there’s a technique I learned back then called the briefcase technique, which I think actually still works, and I’ve talked about this on a number of podcasts. But basically, the idea is that the core principle of getting a job is not interviewing, it’s actually, can you add value and solve that company’s problems? An interview is just a way to test your skillset to see if you can actually solve their problems, but in reality, you can just circumvent this and show that you can solve their problems. So an example: at DoorDash, what I did was I figured out what their biggest pain points were, and then I put together a doc outlining, hey, these are your problems, this is how I would solve them. And it was pretty detailed. I spent at least a couple days on it. And then you send it to the hiring manager, and sometimes it’ll hit. It’s not always going to hit, but you have a reasonable shot for it to hit. And if it does hit, then they’ll actually be much more sold, saying, hey, okay, this person has a skillset that could solve my problem, I’ll bring them in to have a conversation. So I found that technique to work pretty well.
Jon Krohn: 00:25:21 Do you mind if I interrupt you for one quick sec on this one, on that first point there? Yeah. Something that’s interesting today is, so you started working at DoorDash more than five years ago, I think, if I remember. Yeah,
Jeff Li: 00:25:32 It was a while ago.
Jon Krohn: 00:25:33 And so back then, obviously, we didn’t have generative models that could create something like that. So if somebody gets an email from you that’s like, these are your pain points, and you clearly spent a lot of time, a couple of days, on it, this is quite an unusual email for a hiring manager to get, probably. And that’s part of what makes it such a great tool for getting hired. But I wonder if today a hiring manager might say, this is definitely gen AI. And it’s almost like the more effort you put into it, the more crisp and perfect it looks, maybe the more likely they are to say this is something that was just created by gen AI,
00:26:12 that this person has figured out some relatively simple agentic workflow to be spamming tons of hiring managers with what look like very complex reports. I get that for the podcast, for example, or for my consulting business for ki: I end up getting these huge PDFs, like 30-page PDFs with illustrations, tons of detail, but I get them once a week for each business from some different random person who’s just doing cold outreach, and I’m like, this is definitely some agentic thing going on.
Jeff Li: 00:26:44 Yeah, I see. I see. So yeah, I guess it’s an interesting point, because back then it definitely would’ve worked. I think today, I still don’t think people are doing it as much for these big companies. Definitely for you it’s different; you have an audience, so people are going to reach out.
Jon Krohn: 00:27:04 Well, and it’s a different kind of thing. I’m getting sales pitches.
Jeff Li: 00:27:08 Yeah, I see, I see.
Jon Krohn: 00:27:08 And so I think that it could be something very different. Maybe nobody is. I mean, because actually I haven’t been getting those at all. People reach out and say, are you doing any hiring, and it’s a relatively simple message. Nobody has been sending me these: here are your pain points, this is how my data science expertise and this relevant experience I have could be useful. So maybe you’re right. Maybe it would stand out anyway.
Jeff Li: 00:27:29 Yeah. So I do agree that with the gen AI piece, you can easily spam that, for sure. I think the hard part is still that you want it to hit, even if you use gen AI. Because a lot of times, even if I do this, if somebody does it to me and they didn’t actually understand my problems, then it’s not really going to hit, and I’m going to ignore it. But if they actually said, hey, I listened to all your podcasts and I figured it out, hey, your podcast could use this kind of software, I don’t know, you can improve this aspect of it, and it was very specific and it was clear that they understood what you needed, then it would hit a lot better.
Jon Krohn: 00:28:12 Here’s the magic button you press to 10 x your audience and revenue.
Jeff Li: 00:28:15 Exactly. Yeah. So I do think that was one trick that worked back then. I haven’t tried it recently. I don’t think I need to as much these days because I think as you get more experience, it’s easier to get your foot in the door.
00:28:32 But I would say my recent job at Netflix, the way I got it actually wasn’t through applying or using any kind of tactic. It was basically because I had been working in ads at Spotify, and I was working in forecasting, so I had this unique intersection of skills in ads and forecasting at Spotify. And then when Netflix decided to start doing ads, and forecasting is essential to running an ads business, they needed somebody with that exact experience and that exact overlap in skillset. So then I was basically the perfect candidate. And when I interviewed, I had been thinking about this stuff for the last few years, so it was pretty smooth. But I had been rejected multiple times before at Netflix.
Jon Krohn: 00:29:25 Been
Jeff Li: 00:29:25 I had been. So I think that the key thing is you have to just keep trying, even if you get rejected. And then also too, it’s like do you have a unique intersection of skills that is hard for the person to hire for? Because I think there’s many people that will do forecasting. There’s many people that do ads, but there’s a lot fewer people that do both. So I do think that as people develop their careers, it is good to build some sort of niche expertise and an overlap of say, industries and skillsets. And that’s actually what really differentiates yourself from the rest of the market.
Jon Krohn: 00:29:59 I love that. That’s a great soundbite. It sounds like that’s going to end up in a YouTube Short later on. Sweet. Because that’s perfect. Alright, so switching gears now from finding the perfect job to finding the perfect partner: you have a startup called YourMove.ai, and you’ve scaled that to over 10,000 users. And I’m reading a quote here: it promises to perfect your dating profile and put your texting on cruise control. So it helps users sound witty, flirty, or funny on demand, depending on what they’re looking for. And so yeah, it’s kind of blurring the line between human expression and machine creation. Tell us about how you got into this particular thing that you’re doing.
Jeff Li: 00:30:46 Yeah, so I’ll caveat, I don’t really work on it anymore. So it’s like an old startup, but I’m happy to talk about it.
Jon Krohn: 00:30:54 Does it still work?
Jeff Li: 00:30:55 Yeah, it still works. It’s still running. Yeah, it’s still going. So my co-founder at the time, Dmitri, basically, this is when GPT-2 came out, he wanted to build an app that helps you text better. And for me, in 2019, I had trained some deep learning model on photos that I swiped on, and I would have the deep learning model predict whether I’d like a profile or not, and then it would auto-swipe for me. So I thought my friend Dmitri’s project was a natural continuation of what I had built before, and I thought it had more legs, because I think the dating apps would definitely ban me for trying to auto-swipe on the app. So yeah, really the idea of YourMove was to help you write better messages, write better profiles. Also, there’s a photo service now, so you can actually use AI to help you craft better dating profile photos. And yeah, I was very interested in this space because I was trying to figure out dating, and I was like, okay, how can I apply my skillset to an area to make my life better? And that’s kind of where it came from.
Jon Krohn: 00:32:20 Perfect. And I mean this in a genuinely affectionate way: this is the nerdiest way to approach dating. I love it.
Jeff Li: 00:32:28 Well, no, everyone says if you’d spent all the energy you put into building this system on just doing it yourself, you could have gotten the same result. But as data science and engineering people, we like to build systems to do things rather than do them ourselves.
Jon Krohn: 00:32:48 And it sounds like it’s been effective for enough users that a lot of people have been using YourMove.ai. Was it useful for you? Did it end up having a real-world result for you?
Jeff Li: 00:33:00 At the time when I was using it more, it was helpful, but it definitely could not replace me; I couldn’t let it be fully automated. A lot of the message suggestions would give me some ideas, but I’d still craft it in my own voice. And I think a couple of years ago the photo tech was not good enough yet. Even today you can kind of tell it’s AI generated, and if people see that you have an AI-generated photo, it’s actually a negative, like a negative perception. So I still think the photos are not quite there yet, where it feels really genuine. So I would say it helped, but it could not be the sole thing that solved all dating problems.
Jon Krohn: 00:33:48 That makes a lot of sense. It is interesting: I was reading, actually just yesterday at the time of recording, that a lot of the big dating apps saw surges in their share price, in the US at least, during the pandemic, so kind of around 2020. Match Group owns a bunch of these dating apps; I think Hinge and Tinder are both owned by Match Group. Don’t quote me on this, I’m doing this from memory.
Jeff Li: 00:34:14 Yeah, yeah, no worries.
Jon Krohn: 00:34:14 But then there’s also Tinder, and Grindr; for some people that’s the perfect dating app. And with all of these apps, they’ve actually had a decrease in user activity post-pandemic, with people wanting to meet in person again, wanting to have that real connection. And so share prices for the publicly listed ones like Match Group have plummeted. And apparently all of these companies are betting on AI solutions to improve their apps, and I wonder if that’s going to work out. So basically these kinds of things you did with YourMove AI, it sounds like companies are integrating more and more. And what clicked me onto that is that, of course, relative to when you got started with YourMove AI in the GPT-2 era, there was a 50/50 chance you’d just get nonsense out of it.
00:35:08 Whereas today, obviously, I’m sure everybody who listens to this show uses conversational agents regularly and is kind of familiar with the state of the art; it’s mind-blowing how accurate and helpful these results can be. So integrating that into an app, I can see the idea, and it’s easy to imagine being in a boardroom: how are we going to get our share price back up? AI. That’s probably happening in a lot of boardrooms all over the world. But I wonder, if all of a sudden everyone is doing that in a dating app, getting help with what photos to select... maybe with photo selection and stuff that’s good, because why not show yourself in the best light? But if everybody’s having AI-generated messages and AI reading of messages, it feels like kind of a weird environment to me.
Jeff Li: 00:35:53 Yeah, I mean, when I was working on YourMove AI, that was the biggest concern: is this going to be a dystopian future where everyone’s AIs are talking to each other and setting up dates for each other? Personally, now that it’s been a couple of years since I worked on YourMove, I’ve noticed the trend shifting a little bit more towards in-real-life. Especially being here in New York, I have a bunch of single friends, and there’s a strong appetite to go to run clubs, go to in-person events, to try to meet people. And there’s a really strong negative perception towards dating apps, but people will still use them because that’s the easiest way to get dates. So the apps are still going, but the trend is starting to shift a little bit. I found for me, when I was dating, being social, doing hobbies that I loved, going out to events that aligned with my personal values tended to work better. But you still have to use the apps, so they’re still going to be a part of what you do.
Jon Krohn: 00:37:07 Yeah, I think the volume of potential people that you can get through a dating app is a double-edged sword, because while theoretically it allows you to see more options, the prospective match is also seeing more options. Whereas if it’s something like a run club, and you and the person you’re interested in are both regular runners, the number of single people regularly going to that run club at the same time is going to be a pretty small pond, and you’re going to feel like, hey, this is the person for me. I keep seeing this person, and they’re just the perfect fit. Whereas online you’re just like, well, I could see what else there is.
Jeff Li: 00:37:51 Yeah. So my girlfriend right now, I don’t think I would’ve matched with her on a dating app. We met through ski friends; we both love skiing and snowboarding. I’m 5'9", and she was filtering for six-feet-plus guys, so we would never have met on a dating app. That’s funny. So the medium kind of forces you to behave in certain ways as well. It’s an unfortunate part of the game. Yeah.
Jon Krohn: 00:38:19 Yeah, that is interesting. That’s a really solid data point on why real life can be better. So yeah, interesting things happening in the dating world because of algorithms and now AI, this automation of love; we’ll see how that goes. Alright, moving on from that dating conversation, let’s get back to some data and AI stuff more explicitly, which maybe some of our technical listeners are eager for. So something that you’ve argued before is that automation with AI fails if people don’t first have mastery. I’d love you to fill us in on what you mean by that. Is it that in an organization you can’t have success just through agents automatically doing things if there isn’t some level of human mastery in the system already?
Jeff Li: 00:39:18 Yeah, so I can give a concrete example. On the side, when AI was really popping off, I tried, I was like, hey, with the GPT image launch, I can create ads with this, and if I can promise, say, founders that I can grow their business by creating ads that they launch in Meta Ads Manager, then there’s a clear business opportunity here. And I still think it’s a good business opportunity. But what I realized was, I basically tried to build this AI workflow that would automatically create ads for me, and I was able to create ads. The problem was the ads were horrible; they were not good. I would show them to a person running a business and they’d say, these are not good. And I didn’t have an eye for what was good and what was bad.
00:40:14 I had never made ads before. I knew how to hit the OpenAI APIs, I knew how to write Python scripts, I knew how to generate the images, but I had never created ads before. And because I had never created ads and didn’t have an understanding of what was good or not, I wasn’t the right person to try to completely automate this workflow. Ideally it’s somebody who knows ads but also has the tech skills, or I partner with somebody who knows how to create ads. That’s the right setup: this person has an intuition for what’s good in the system, and then I can automate it; that will tend to work much better. So I learned from that project that it’s better to know how to do the workflow manually first, and know that you can get a good result, before you automate it.
Jon Krohn: 00:41:07 Yeah, it’s interesting how you can carve up any one of these topic areas into different parts. Because earlier in the episode you were talking about how your expertise with advertising was one of the key things that gave you such a specialized niche and got you hired at Netflix after trying multiple times. But despite all of your experience working in the ad industry and working on ads, you didn’t have experience with creative: with creating the copy, with creating the image that goes out.
Jeff Li: 00:41:36 Correct. Yeah, I thought that because I had the background in ads I could do it, but I didn’t have the expertise in creative specifically. I can create agent workflows doing forecasting and automate time series analysis; I feel very confident in that. But in creating high-performing creatives, I’m probably not the right person.
Jon Krohn: 00:42:02 So in my last question I mentioned agents, and we’ve talked about agents a little bit. You have a popular tweet where you condescendingly, I think that’s the right word, scathingly certainly is, said that agents are just Python scripts. What do you mean by that? Is it really that simple? What do you think about agents? Is it overhyped? Are we going to see a lot of value out of them? Anyway, you have mentioned them a few times in this episode already in ways that sound like you see some potential there.
Jeff Li: 00:42:34 Yeah, yeah. I think I was probably a little too condescending and scathing. But I’m realizing as I try to build: I’m very pro-agents, I want to build them, I’m trying to get more projects like this. It’s hot, it’s fun. But what I’m realizing is that when I want an agent to automate something, a lot of times, passing it to an LLM, the space in which it can answer is too wide. I want something more deterministic: hey, do this exact thing how I want you to do it. And when I really want to make it as deterministic as possible, it just becomes a Python script. But I’m very pro-agents; I’m pro trying to automate parts of my job. Yeah.
Jon Krohn: 00:43:25 Yeah, you’re hitting the nail on the head there with some of the bigger issues with agents. I talked five, ten minutes ago about how LLMs are so powerful these days, and obviously that’s just going to get better and better. But the flip side, the double-edged-sword-ness of that, is that it can mean a very wide variety of possible responses, moving further away from something deterministic, something predictable. There are different approaches people take: you can turn down the temperature on the model to try to have things be a bit more deterministic, but getting that right can definitely be tricky. And there are also frameworks out there for testing lots of possible different responses. But either way, I think what you’re highlighting here is that getting an agentic workflow to work effectively in a real-world commercial use case, especially where it’s going to be running at scale, is really hard work.
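For what it’s worth, the temperature knob Jon mentions is just a request parameter. A minimal sketch, assuming an OpenAI-style chat-completions payload (the model name here is illustrative, and the payload is only built, not sent):

```python
# Build a chat-completions request payload with temperature pinned to 0,
# which makes the model's sampling as close to deterministic as the API allows.
def build_request(prompt: str, model: str = "gpt-4o-mini") -> dict:
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0,  # greedy-ish decoding; still not a hard guarantee
    }

payload = build_request("Classify this ticket as BUG or FEATURE: app crashes on login")
```

In practice you would hand this payload to the provider’s client library. Even at temperature 0, outputs are not strictly guaranteed to be identical across runs, which is part of why Jeff falls back to plain scripts for the parts that must be exact.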
Jeff Li: 00:44:19 It’s hard. And a lot of times, especially at a really large scale, a small mistake could be really costly. You want it to work exactly, versus maybe there’s a 5% chance it might give a weird answer; we want to reduce that risk. So, very pro-agents. One of the things I learned recently is that you can have the work execution be much more deterministic, but the decision to trigger that execution should actually be made by an LLM. So you have an LLM make the decision on whether to trigger or not, and when it makes that decision, you just have a deterministic workflow run. I found that to be a good balance. But we’re still seeing how it plays out.
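The balance Jeff lands on, an LLM making the fuzzy judgment call while a deterministic script does the actual work, might look something like this sketch. The `llm_should_trigger` stub stands in for a real model call; everything else is illustrative:

```python
def llm_should_trigger(event: str) -> bool:
    """Stand-in for an LLM call that returns a yes/no routing decision.
    In a real system this would prompt a model and parse its answer."""
    return "anomaly" in event.lower()

def deterministic_workflow(event: str) -> str:
    """The actual work: exact, scripted steps with no model in the loop."""
    return f"ran fixed remediation steps for: {event}"

def handle(event: str) -> str:
    # The LLM decides WHETHER to act; the execution itself stays scripted,
    # so a weird model output can't corrupt the work that gets done.
    if llm_should_trigger(event):
        return deterministic_workflow(event)
    return "no action"

result = handle("Forecast anomaly detected in region EU")
```

The design choice is that the blast radius of a bad model answer shrinks to a wrong trigger decision, rather than a wrong execution.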
Jon Krohn: 00:45:09 On the topic of agentic AI, something that’s very trendy these days is context engineering, and you also have a tweet about how a monolithic code repo can act as a great automatic source of all the context that you need for a particular kind of problem. Do you know what I’m talking about?
Jeff Li: 00:45:30 Yeah, I know what you’re talking about. Yeah. So one of the things I’ve realized as I’ve gotten deeper into the AI space, and this is my personal workflow: if I have 10 different repos scattered around, it’s hard for me to, say, go into Cursor and have Cursor figure out all the files in these five to ten different repos, figure out what each is doing, and bring that context into what I’m trying to do. Versus if everything is in one place, it’s much easier for the Cursor or Claude Code or Codex agent to go and find the relevant files and start doing things with them. And when everything is in a monolithic repo, I can just say, hey, reference this folder, make a Python script similar to whatever is in this folder, and it’ll do it how I want it done. So that’s why I’m much more mono-repo now with AI, whereas before I was actually against monorepos.
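One way to picture the monorepo advantage: with everything in one tree, “reference this folder” becomes a trivial file walk. A hypothetical helper, not anything Jeff described building, that gathers a folder’s source files into a single context string for an agent prompt:

```python
from pathlib import Path

def gather_context(folder: str, suffixes=(".py",), max_chars=20_000) -> str:
    """Concatenate source files under one folder into a single prompt context.
    In a monorepo, one folder path is all the agent needs to be pointed at."""
    chunks = []
    for path in sorted(Path(folder).rglob("*")):
        if path.is_file() and path.suffix in suffixes:
            chunks.append(f"# --- {path.name} ---\n{path.read_text()}")
    # Truncate so the assembled context stays within a model's window.
    return "\n\n".join(chunks)[:max_chars]
```

With ten scattered repos, the same step means cloning, authenticating, and indexing each one before the agent can even start.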
Jon Krohn: 00:46:38 Yeah, it seems like the kind of thing where it was a software best practice to split out different functionalities or use cases into different repos, classic microservice architectures where everything’s separate, and by changing one repo you can feel confident your API isn’t going to be impacted. But it’s interesting how things change now, in this world of huge context windows for tools like Cursor. And actually that’s a great follow-up question, at the time of recording at least, since these things change quickly: you shared recently that your current AI tech stack, your personal tech stack, spans Claude, Whisper Flow, Gemini, and v0, and you mentioned Cursor just there. So let’s dig into those a bit; maybe elaborate on your stack and where these fit together. For example, Claude, probably a lot of people are aware, is one of the leading conversational agents out there, Gemini similarly, but Whisper Flow I haven’t even heard of, and v0 I’ve heard of, but I’m kicking myself that I don’t remember off the top of my head what it is.
Jeff Li: 00:47:51 Sure, yeah, I can talk through it. So Whisper Flow is basically voice dictation software where you can press a button, and it’s called Whisper Flow because you can whisper into the mic, in an office you don’t want to be talking super loud, and it’ll dictate pretty accurately. I’ve found that really useful to leverage with AI, because it’s much easier to say everything on my mind than to type it all out. So I can actually inject a lot more context when I’m prompting the AI.
Jon Krohn: 00:48:26 So you’ll be kind of sitting at your desk and you’ll just be whispering into the mic on your laptop.
Jeff Li: 00:48:31 Or if I’m working from home, my girlfriend will just hear me saying a bunch of stuff, and then she’ll be like, oh, he’s talking to the AI right now. Yeah.
Jon Krohn: 00:48:43 Nice. Our listeners probably just heard a beep on that first version of “stuff” that you used there.
Jeff Li: 00:48:48 I forgot.
Jon Krohn: 00:48:50 They can use their imaginations. No, it’s okay. You can swear. We’ll just bleep it.
Jeff Li: 00:48:53 Okay, cool. Cool.
Jon Krohn: 00:48:55 And by the way, listeners, I probably haven’t mentioned this in a very long time, but the reason why we bleep is that, while I actually swear pretty liberally in my day-to-day life, if you swear one time in any of your episodes, you have to tick this box across all podcasting platforms: you go from a suitable-for-everyone podcast to an adult podcast.
Jeff Li: 00:49:17 Oh, interesting. Yeah, that’s crazy. Okay.
Jon Krohn: 00:49:19 So yeah, we bleep, and that’s the way around it. It’s pretty funny: a few years ago our previous podcast manager, who’s Eastern European originally, sent me this list of swear words, like, which of these words do we need to bleep out and which ones can we leave in? That was a fun one. I’d go into it, but you’d just hear a bunch of bleeps on the show. So, Whisper Flow, that’s cool. I like the idea of that. Do you think that there’s a difference when you take the time to write an email? With my consulting business, one thing that I’m trying to make sure I do is convey things clearly to a client, and to make sure that their blood pressure is staying low across all the details of their project. And I think if I tried to dictate that, I might not do a good job, although even as I say that, it would be a great starting point for the email. I could stream-of-consciousness it, and even if that stream of consciousness isn’t very clear, I don’t need to go straight from my stream of consciousness to the final text: I can pass the transcript through an LLM on the way and say, this is going to be an email to a client, can you take this transcript and convert it into a nice structure? And then I could review that. So actually, I was going to ask, do you think there’s a downside to just dictating? But I’ve talked myself out of it.
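The dictate-then-restructure pipeline Jon talks himself into can be sketched in two steps. Both calls here are stubs so the sketch is self-contained; a real version would hit a speech-to-text service and then a chat model, neither of which is specified in the conversation:

```python
def transcribe(audio_ref: str) -> str:
    """Stand-in for a speech-to-text call (e.g., a Whisper-style API)."""
    return "uh so basically the client project is on track budget fine next milestone friday"

def restructure_as_email(transcript: str) -> str:
    """Stand-in for an LLM call that turns stream-of-consciousness dictation
    into a clear client email. A real version would send `prompt` to a model."""
    prompt = (
        "Rewrite the following dictated notes as a clear, calm client update email:\n"
        + transcript
    )
    # Stubbed 'model output' in place of an actual completion:
    return f"[email drafted from {len(transcript.split())} dictated words]"

draft = restructure_as_email(transcribe("meeting_notes.m4a"))
```

The point of the two-stage shape is exactly Jon’s: the rough transcript never reaches the client directly; the human reviews the restructured draft at the end.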
Jeff Li: 00:50:59 Yeah, I’d actually say for prompting I dictate a lot, but if I’m trying to write my own doc or spec, I just write it, because I find that writing helps me think through things. So in a lot of cases I don’t dictate; I’ll write it so I properly think it through. But if I don’t think I need to think it through, then yeah, I’ll probably dictate.
Jon Krohn: 00:51:24 Nice. Okay. So we do kind of agree that there are situations where dictating might not be the best place to start. And you’re saying it’s when you’re prompting that you’re often dictating: you’re talking to a conversational agent, you’re talking to Claude or Gemini, and
Jeff Li: 00:51:37 Yeah, if it’s not too disturbing to people around me, then yeah, I’ll try to
Jon Krohn: 00:51:42 I’ll have to try it. I’ve literally never done it.
Jeff Li: 00:51:44 Yeah, if you use Cursor, they’ve now added a dictation feature, so you don’t have to download anything. You can just try the dictation feature in Cursor and see how well it works for you.
Jon Krohn: 00:51:54 Nice. I like that. All right. So yeah, let’s talk about some more of these tools. Why do you use Claude, Gemini, and Cursor, and what is v0? Is that also of the same ilk?
Jeff Li: 00:52:03 Yeah. Okay. So I use Claude Code for the terminal agent. If I’m running an agent through the terminal, I’ve found Claude Code to work better for me. I haven’t tried Codex; I know that’s also a popular one that’s growing, but I’ve just found Claude Code to have the most features. They have some cool ones like Skills, and they have an agents feature where you can actually save your agents. I’ve liked those. But recently Cursor released their new model, Composer 1, which is crazy fast, and I’ve actually started to move over to using that model a lot more. So this stuff changes week to week. Gemini I’ll use more for image generation. I’ve found Nano Banana to be a lot better than the ChatGPT image generator, so if I want images, I’ll use Nano Banana. And I test this by trying different hairstyles on my face: ChatGPT will mess up my face, whereas Nano Banana will actually maintain my facial structure and then add a different hairstyle. So I use that to test how well the image generation keeps my face.
Jon Krohn: 00:53:18 Did you stumble upon this particular test, this particular validation mechanism, through a real-world use case?
Jeff Li: 00:53:26 Yeah. I was growing my hair out, so I kind of wanted to know what I’d look like with longer hair. I did it with ChatGPT and it was horrible; I was like, this doesn’t look anything like me. And then I tested it with Nano Banana and it actually looked pretty reasonable. So yeah, maintaining the facial structure and making it look like you as a person is still hard, and I found Nano Banana to just be much better at it.
Jon Krohn: 00:53:51 Yeah, I recently had the same experience. I had to update a social media card for a SuperDataScience Podcast episode I was posting, because I gave the artist who creates our thumbnails the wrong title for the episode; I had accidentally left a previous episode title in the file, and I didn’t notice the mistake until it was time to post. And so I went to ChatGPT first, just like you did, because I think we often think, and this was the case for at least a year or two across almost any kind of generative capability, maybe with exceptions like Midjourney for some image generation tasks, that the OpenAI tool is the place to start; you’re probably going to get the best results. So that’s where I started, kind of out of habit. And I couldn’t get the image to keep my face and the guest’s face and the whole layout exactly the same. And it kept doing the exact same misspelling of a word; it was the word “accelerate”, and it was missing one of the Es, and I just could not get it to work. Whereas in Gemini... oh, you know what? I think it was the other way around.
00:55:13 I mean, it just goes to show how quickly things change. Now that I’m saying this out loud, I think it was actually Gemini where I had the frustrating experience, where I just couldn’t get it to spell “accelerate” correctly. It’s amazing how you can get into these loops with the agents. I was like, just start from scratch and do it over, and it said, I can’t do this. I’m like, how can you not do that? You can definitely do that. And with ChatGPT, I think the reason I’m remembering this is because it gave me multiple outputs.
Jeff Li: 00:55:47 Oh, I see.
Jon Krohn: 00:55:47 And the first one I clicked on, I was like, ah, it doesn’t look good. The second one, ah, it doesn’t look good; it changed our faces. And then somehow the third one was just perfect.
Jeff Li: 00:55:55 Oh, nice.
Jon Krohn: 00:55:56 So yes, it goes to show the stochasticity. Kind of tying back to your point about determinism: obviously we want the image to stay the same in this kind of use case, and two times out of three it got it wrong. By luck, I guess, one time out of three it got it right.
Jeff Li: 00:56:10 Yeah. I remember when I was doing the ad creative thing, I think I spent $200 of API credits to figure out how to prompt it in a way that would get it exactly right. It took a lot of experimentation to figure that out, for me at least. Maybe it’s much better now.
Jon Krohn: 00:56:30 That can end up being IP for some people in some use cases: just getting that prompt right, spending a lot of time on getting things just right for that particular model. And then they do a model update and
Jeff Li: 00:56:41 Then it changes, so you always have to keep playing with it. Yeah. I could talk about the other tools if you want as well.
Jon Krohn: 00:56:49 Yeah, for sure. Yeah.
Jeff Li: 00:56:50 Yeah. So the last one is v0. I’ve found v0 to be really good for spinning up web apps. If you want a front end, I mean, there’s Lovable, there’s Bolt, but I like v0 the most because Vercel, the company behind it, was originally software for deploying infrastructure. So I’ve found they have pretty good front ends: I can screenshot something, paste it in there, and it’ll generate the front end. And I have deep trust in their backend because I’ve used Vercel for app deployment in the past. So yeah, that’s for spinning up easy web apps.
Jon Krohn: 00:57:28 That’s cool, I like that a lot. Yeah, I guess as a kind of underlying point across all of this, it seems like, just as for me, it’s worthwhile having Claude Code, Cursor, Gemini, and v0 and experimenting with them for different kinds of use cases, because every month, which one is the ideal choice for a given use case changes.
Jeff Li: 00:57:56 Yeah, it changes all the time. I think the core of it is really just to keep trying stuff: when new things get launched, just try them, see how you like them. And if one provider changes something that makes things a lot better, you just switch over to it.
Jon Krohn: 00:58:12 Alright. As a final topic area for you, before we get into the final questions that I ask everyone: you have spent a lot of time in your career, and I guess in your personal life as well, optimizing and developing mental models for how to make the best decisions. What kind of advice, what general takeaways from that experience, could you share with us?
Jeff Li: 00:58:46 Yeah, yeah. I use mental models and life principles kind of in conjunction with each other. The mental models concept was inspired by reading about Charlie Munger: the way he approached investing was to read across a wide variety of topics, to pull these essential principles out of life and apply them to his decision making and investing.
Jon Krohn: 00:59:19 Yeah, Charlie Munger there being the right-hand man at Berkshire Hathaway.
Jeff Li: 00:59:24 Exactly.
Jon Krohn: 00:59:24 And I think he lived to 99 and passed away just recently.
Jeff Li: 00:59:27 Yeah, he passed away recently. But yeah, he’s been a huge influence in the investing world, and for me, I found the approach really useful for guiding day-to-day life decisions: how I approach life, how to approach things when they come up. And I’m still iterating on it. One that comes to mind is this model of asymmetric upside: when things occur in your life that have asymmetric upside, you do them. An example is going to parties with interesting people. Sometimes I might be feeling a little introverted, a little anxious about not knowing anyone there, but I’ve always found there’s no real downside to going; there’s only upside. You could meet somebody really cool that you want to be good friends with, or a great business connection, versus the downside is maybe you didn’t meet anybody interesting.
01:00:29 You just go home and move on with your day. So I found that mental model to be really useful. Another one that comes to mind is activation energy. In chemistry, I don’t remember the exact terms, but you need to hit a certain level of activation energy to have a chemical reaction. I’ve found that when I’m feeling kind of sluggish and don’t really want to do stuff, as long as I get myself to take some action, I’ll hit that threshold for activation and get a lot more excited to do something. So that’s another one. But yeah, I’d say it’s just a useful way to guide your life and your decision making and how you approach things.
Jon Krohn: 01:01:16 Nice. I love that, thank you. So now, moving along to my standard ending questions, maybe the topic you just had there, your mental models, will flow nicely into your book choice for us. Do you have a book recommendation?
Jeff Li: 01:01:31 For me, the one that comes to mind is Unleash the Power Within by Tony Robbins. So I’m a big personal development guy. I found that book to be very impactful for me because it really helped me clarify what I valued in my life, what is important and what is maybe not working for me. It also helped me really learn how to instill a high level of belief in yourself, and I think you need that to kind of be confident and tackle any kind of goal that you want. So I found that if I were to recommend only one self-help book, I’d probably recommend that book. That was pretty impactful for me.
Jon Krohn: 01:02:14 Nice. Very cool. He is kind of a classic name, if not the most canonical name in self-help, so that is cool. Have you seen Tony Robbins Live? That’s supposed to be quite a show.
Jeff Li: 01:02:24 I did the Unleash the Power Within event a few years ago. It was great. I mean, it’s kind of like a rave slash motivational event at the same time, so it really brings your energy up. It gets pretty emotional: he does some interventions where some people come in with certain problems and he really tries to help them. I did it a while back, but I found it to be pretty useful.
Jon Krohn: 01:02:58 That’s really cool. In my mind, I kind of had this impression in my head that, well, that kind of experience, that kind of Tony Robbins thing, that’s not for me. I’m not the kind of person that does that. So it’s interesting to have someone like you who actually is a lot like me in a lot of ways, say that you got so much value from it. I guess it’s something I should be considering. Certainly. I mean, the book is a pretty safe place to start.
Jeff Li: 01:03:20 Yeah, I think you read the book, and if you like it, it helps. With the event, you’ll definitely feel something, versus the book, some people read through it and get kind of bored. So it hits people differently.
Jon Krohn: 01:03:39 Nice. Alright, well, hopefully an interesting rec, not a boring rec, for most of the listeners who check that out.
Jon Krohn: 01:03:45 Nice. And so Jeff, my final question for you is, where can people follow you and your thoughts? It’s been a really interesting episode. I’ve mentioned things like your Data 365 courses; I’ll be sure to have a link to those in the show notes. We’ve talked about some popular tweets that you’ve had. Where should people be following you?
Jeff Li: 01:04:03 Yeah, my personal website, jefflichronicles.com, and I’m lightly on Twitter; I kind of just post whenever I feel like it. My Twitter handle is jeffmli: Jeff, M as in Mary, Li. And yeah, that’s pretty much it; that’s where you can find me.
Jon Krohn: 01:04:25 Fantastic. We’ll be sure to have links to all of those in the show notes. Jeff, thank you so much for taking the time and yeah, hopefully we can check in again in a few years and see how your journey’s coming along.
Jeff Li: 01:04:34 Yeah, cool. Thanks for having me.
Jon Krohn: 01:04:38 What a knowledgeable guest. I hope you learned as much in today’s episode as I did. In it, Jeff Li covered strategies and tools for effective forecasting and time series analysis in general, his top tips for getting hired in technical roles at top firms like Netflix and Spotify, the promise and peril of ever-increasing automation in AI around finding your romantic partner, and the collection of AI tools that he uses as part of his daily workflow, including Whisper Flow, Claude Code, Gemini, v0, and Cursor. As always, you can get all the show notes, including the transcript for this episode, the video recording, any materials mentioned on the show, and the URLs for social media profiles, as well as my own, at superdatascience.com/947. Thanks of course to everyone on the SuperDataScience Podcast team: our podcast manager Sonja Brajovic, media editor Mario Pombo, partnerships manager Natalie Ziajski, researcher Serg Masís, writer Dr. Zara Karschay, and our founder Kirill Eremenko. Thanks to all of them for producing another super episode for us today. For enabling that super team to create this free podcast for you, we are deeply grateful to our sponsors. You can support this show by checking out our sponsors’ links, which are in the show notes, and if you’d ever like to sponsor the show yourself, you can find out how at jonkrohn.com/podcast. Otherwise, just keep on listening; that’s the most important thing to me. But share the episode with folks who might like to hear it, review the episode on your favorite podcasting app or on YouTube, and subscribe. Most importantly, I’m just so glad to have you listening, and I hope I can continue to make episodes you love for years and years to come. Until next time, keep on rocking it out there, and I’m looking forward to enjoying another round of the SuperDataScience Podcast with you very soon.