SDS 955: Nested Learning, Spatial Intelligence and the AI Trends of 2026, with Sadie St. Lawrence

Sadie St. Lawrence on Super Data Science Podcast

Podcast Guest: Sadie St. Lawrence

January 6, 2026

Subscribe on Apple Podcasts, Spotify, Stitcher Radio or TuneIn

Sadie St. Lawrence joins Jon Krohn to discuss what to expect from the AI industry in 2026. Sadie and Jon talk through what they think will be the five biggest trends in AI, hand out awards for the best moments, comebacks, and disappointments in AI in 2025, and review how their predictions for 2025 played out. Hear Sadie’s five exciting predictions for 2026, from emerging jobs in AI to an important return to the drawing board!


Thanks to our Sponsors:

Interested in sponsoring a Super Data Science Podcast episode? Email natalie@superdatascience.com for sponsorship information.


About Sadie

Sadie St. Lawrence is the Founder and CEO of the Human Machine Collaboration Institute (HMCI), driving innovative research and advising to optimize human-AI collaboration in the knowledge economy, now amplified by a new ecosystem partnership with NVIDIA. She also founded Women in Data™, a global non-profit spanning 55 countries and empowering 70,000+ data professionals; it earned Top 50 Non-Profit status and was named the premier Women in AI & Tech community in 2021.

Named among DataIQ’s Top 100 Most Influential People in Data & AI and Dataleum’s Top 30 Women in AI, Sadie has educated over 700,000 learners through courses with UC Davis, Coursera, and LinkedIn Learning. Her forthcoming book, Becoming an AI Orchestrator, guides readers to work creatively and confidently alongside intelligent machines.


Overview

Sadie St. Lawrence returns to the show to discuss what to expect from the AI industry in 2026. Sadie and host Jon Krohn talk through what they think will be the five biggest trends in AI, hand out awards for the best moments, comebacks, and disappointments in AI in 2025, and review how their predictions for 2025 played out.

Sadie and Jon precede their predictions for 2026 with a year in review, both recapping their expectations for 2025 and awarding their most – and least – favorite developments in AI. For her “Greatest Wow Moment”, Sadie selected Nano Banana for its ability to generate images, while Jon chose a simulated trading platform that he and Ed Donner created with software developer agents.

Of course, the main event was Sadie’s predictions for AI in 2026:

  • Increasingly specialized industry models, like AlphaFold and its ability to predict protein structures. Sadie wants to see models being built for specific domains, and for domain experts to be evaluating them.
  • Advancements in continual and nested learning, allowing models to keep updating and learning proactively, bridging the gap between AI and human intelligence.
  • Labs returning their focus to fundamental research, to figure out what we need from our models and break the cycle of only making incremental improvements to existing LLMs.
  • A new wind for robotics, with the hope that larger, real-world datasets will generate a hub of activity for testing robots’ spatial intelligence in 3D environments.
  • AIOps will emerge as a job category, where workers will be expected to manage GPU infrastructure, orchestrate models, and test agent reliability.

Listen to the episode to hear about Sadie St. Lawrence’s recent TEDx talk and her new business venture, the Human Machine Collaboration Institute (HMCI), as well as Sadie’s and Jon’s “comeback of the year”, “disappointment of the year”, and “overall 2025 winner”.


In this episode you will learn:

  • (11:36) Recapping Sadie and Jon’s predictions for 2025                     
  • (26:54) The SuperDataScience Awards in AI                                       
  • (49:05) Prediction #1 for AI in 2026                                            
  • (52:13) Prediction #2 for AI in 2026                        
  • (53:33) Prediction #3 for AI in 2026                        
  • (57:54) Prediction #4 for AI in 2026                        
  • (1:01:01) Prediction #5 for AI in 2026   


Items mentioned in this podcast:


Follow Sadie:


Follow Jon:


Episode Transcript:

Podcast Transcript

Jon Krohn: 00:00:00 Here’s something wild to consider. The same level of AI capability that cost you a hundred dollars a year ago now costs $1. Intelligence is becoming cheap, radically cheap, and that changes everything about how we should be thinking about the future. Welcome to episode number 955 of the SuperDataScience Podcast. I’m your host, Jon Krohn. Today, for the fifth year in a row, we’re kicking the year off by welcoming the inimitable Sadie St. Lawrence to the show to predict the five biggest trends in AI for 2026. We recap how she did on her predictions for 2025, and we bestow four awards for 2025: our biggest wow moment, comeback of the year, disappointment of the year, and overall winner. And then we’ll get a glimpse at the year ahead, in which intelligence will be vastly cheaper than ever before, transforming work and play for all of us. Look out. This episode of SuperDataScience is made possible by Dell, Intel, Fabi and MongoDB.

00:00:59 Sadie St. Lawrence, welcome back to the SuperDataScience Podcast. It’s your umpteenth time on the podcast. It’s too many to count. I can’t go back through and count them all. It’s been too many. How are you doing this time?

Sadie Lawrence: 00:01:11 I’m doing great. Yeah, we don’t want to count. We don’t want to age ourselves, but if you’ve been here from the beginning, thank you. This is how I know it’s coming to the end of the year and a new year is beginning is when I see Jon pop up in my texts.

Jon Krohn: 00:01:27 Yeah, I think this must be the fourth year in a row that we’re doing this predictions episode for the coming year. You are an esteemed futurist with your crystal ball. This past year you did a fantastic TEDx talk with some great thoughts about what the future of human machine collaboration should be like. And so I’ll have a link to that in the show notes, but I don’t know if there’s something, do you want to give us a high level overview of that TED Talk?

Sadie Lawrence: 00:01:57 Yeah, no, that was super fun to do. The main point of it is a spoiler alert, so plug your ears if you are going to watch it and don’t want to know. But the whole point is we should ask hard questions that we normally don’t ask, and our future is closer than we think. So a long time ago, I read the book The Singularity is Near, and I went through a thought experiment of like, okay, what actually happens if it is near? And we are getting closer to that and shouldn’t we be figuring out what that future may look like regardless of if it happens or doesn’t. We should start to ask the questions of what would happen if we merged our consciousness with ai, which leads to the overall question that everyone comes back to, which is what is consciousness? And so it’s more about asking the hard questions and diving into areas of science that we still have yet to uncover and figure out, which hopefully AI can help us do too.

Jon Krohn: 00:02:54 So you get into biological consciousness, you get into what it would be like to merge with AI and what that experience might be like, to have machine thoughts in your stream of consciousness, and what it could be like to take actions through a machine with your brain, initiated by your brain. And I just brought it up on my screen here. It is really popular. You’ve got over 300 comments, a thousand likes, over 30,000 views. That is very cool, Sadie. Nice work.

00:03:25 So yeah, generating a lot of conversation and consciousness, not totally a random topic for you to be talking about on the TED stage because consciousness is a big part of your life lately. You’ve started a new business. Tell us all about it.

Sadie Lawrence: 00:03:40 Yeah, so in 2023, I, as many people working with AI do, started to ask questions that I hadn’t asked in a while. Questions like: what is machine intelligence, and what really is emotion and consciousness? And I realized I wanted to get back to research, but I wanted to do research differently, and doing research differently meant something like an independent research firm. And so I started the Human Machine Collaboration Institute, HMCI, and our whole goal was just, let’s find some tough problems to solve and see what we can come up with, really focused around our end goal of figuring out how to create a unified theory of consciousness, but starting with smaller problems first. To get to that point, one of the things that we found, a problem that kept coming up for a lot of our team, was that it’s really hard for small to medium cities to not only adopt technology but adopt it in their ecosystems and in their economies.

00:04:43 And so we came up with a model to help with that. It’s an IR² model, an intelligent revenue reinvestment model, that takes funding from energy and data centers and reinvests it back into the community through upskilling, workforce research, and infrastructure. That got the attention of the likes of Nvidia, and they said, hey, we really like what you’re doing, we’d like to partner with you on a flagship model in California to build out one of these ecosystems. And so we were able to get that kicked off this year and are looking to scale it to two additional cities next year, along with our research inference clusters. So I had a secret kind of fantasy to have secret think tanks across the world, and while they may not be secret and they may not be in forests, they will be research inference clusters that will plug into these ecosystems as well and tie together to be able to do more research for local economies. So really exciting stuff.

Jon Krohn: 00:05:46 So the Human Machine Collaboration Institute, and you’re like a year old and you already have this big partnership with NVIDIA that is allowing you to have, these are physical centers where people can go in person.

Sadie Lawrence: 00:06:00 So physical and digital. One of the things that we’re building with the Rancho AI and robotics ecosystem is an actual digital twin. So while there is physical space that we have, where people can come and get training and do research, for all of us working in AI, we know most of the work that we do is in a digital space. So the physical space is the hardware that we use, and then the rest of the collaboration happens in a digital ecosystem.

Jon Krohn: 00:06:26 That is amazing. Sadie, you never stop. For people who aren’t aware, Sadie already created an organization called Women in Data, which you now sit on the board of I believe, but you for a long time ran that organization and grew to hundreds of thousands of members across dozens of chapters all over the world. You probably have the latest stats.

Sadie Lawrence: 00:06:46 So a little over 70,000 individuals, 55 chapters, and a hundred-plus countries in total in our membership representation. So it was another exciting year for Women in Data, because we started to partner with Women in Analytics and we collaborate on the DataConnect conferences. We really said, hey, let’s come together and combine our superpowers. And it’s been a great partnership with Women in Analytics. And as you know, Rehgan is just an awesome individual as well.

Jon Krohn: 00:07:17 So yeah, so Rehgan runs the Women in Analytics organization. It’s so cool that you two have partnered. If people want to get to know Rehgan Avon, they can check out episode number 698. She is a fantastic, brilliant person at bringing AI into the real world, driving a lot of commercial value and bringing, now with Women in Data, a lot of community as well. Really cool. Now, that’s not all that you’ve done in the past year. You’ve also been slaving away on your first book, which just came out. And so by the time this episode is out, people should be able to go to Barnes and Noble or Amazon or wherever you buy your books, and you should be able to get delivered, in presumably the 24-hour regular shipping cycle that we have these days, a copy of Sadie’s first book, which is called Becoming an AI Orchestrator: A Business Professional’s Guide to Leading, Creating, and Thriving in the Age of Intelligence.

00:08:18 I am the series editor for this book, so I’m intimately familiar with it. I read it, I loved it. It is such a great book. It brings in your own personal experiences with AI, but in a way that is highly practical and generalizable to broad audiences. It’s a very easy-to-read, easy-to-understand book. It doesn’t have code in it; it is a book that anyone can pick up to figure out how, with code or point-and-click interfaces, they can become an AI orchestrator, and have you as an individual or your organization be able to harness the power of AI agents and generative AI and completely transform your life.

Sadie Lawrence: 00:09:12 Yeah. Well first I have to give a big shout out to you, Jon, because I would not have written a book without you telling me to. I didn’t think I was ready to write a book, and you introduced me to Deborah and the team over at Pearson. So thank you for motivating me to get that done and doing a very thoughtful review of the book. But yeah, it was so fun to write this book, because immediately when I started using modern-day AI tools, I realized my workflow was changing, and just my whole mindset and how I work had to change. And that’s really the basis behind becoming an AI orchestrator: changing your role from being a musician in the orchestra to being that conductor, conducting with AIs and allowing yourself to do more. And we live and breathe that, because we are a team of three people at HMCI and we are all very much AI-empowered, and the only way we could have done that is through having multiple AI agents and copilots and help at our disposal continually. So hopefully the book shares some of those best practices that other people can take and use for their own and build their own whole AI orchestra.

Jon Krohn: 00:10:21 Yeah, it’s fantastic, an invaluable one. I can’t wait to share this with all kinds of people in and out of our industry. I have some people that I know that are just going to love getting their hands on this book. And the ones that I think of that I’m most excited to send this to are people who are outside of our industry, because people in our industry, I think a lot of us maybe have already tinkered around a lot with these tools, maybe even implemented solutions for enterprises or whatever that involve agentic AI. And so yeah, really cool for me to be able to send this to people for whom it’ll be kind of terra incognita, and they can really make a really big impact on their own lives with it. So yeah, so again, that book title is Becoming an AI Orchestrator, and at the time that this episode is released, it is available wherever you get your books. Very cool. Congratulations, Sadie. I can’t wait to see what your future books are all about. Alright, so this episode, as I alluded to earlier on, is an episode that recaps, well, the main point isn’t recapping. The main point is making predictions for 2026, but we will recap your predictions for 2025. So I believe you had five last year, if my notes are correct.

Sadie Lawrence: 00:11:47 Yes, that sounds about right. I do like numbers of threes and fives, so it sounds very, and it was 2025, so I feel like it’s very much a fitting number.

Jon Krohn: 00:11:56 Yeah. And so what I have down is, number one was that agentic AI will be the dominant trend in 2025, and I feel like it’s a no-brainer to say that is correct. Do you have any thoughts or any data to back you up on how your results unfolded on that one?

Sadie Lawrence: 00:12:13 Yeah, I think everybody doesn’t want to hear about agentic AI at a conference anymore. So I don’t know if it’s concrete data or more just the vibes of where people are at, which is: please don’t tell us about agentic AI at a conference, we’re ready for something new and fresh in 2026.

Jon Krohn: 00:12:30 Yeah, I mean I guess that’s a pretty good data point on how big it was: it was such a dominant trend in 2025 that nobody wants to hear about agentic AI in 2026. Although I will say I still think, I mean, it just becomes more and more useful. The length of a human task that an agentic AI system can replace a human on doubles every seven months.

00:12:58 Right now we’re at a point where you can get about 50% reliability, so about 50% of the time you’re going to get a result that you’re happy with, on tasks in software development or machine learning that would take a human software developer or a human machine learning expert several hours to do. And given this seven-month doubling, you can anticipate that by the end of 2026, it’s going to be more like eight hours, a full workday.
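
To make the arithmetic behind that projection concrete, here is a minimal Python sketch; the roughly two-hour starting horizon and the seven-month doubling time are the figures quoted above, used here purely as illustrative assumptions:

    # Minimal sketch of the task-horizon projection described above: if the
    # length of task an agentic system can complete at ~50% reliability doubles
    # every ~7 months, a ~2-hour horizon today extends past a full workday
    # within about a year. Starting horizon and doubling time are illustrative.

    DOUBLING_MONTHS = 7      # assumed doubling period for the 50%-reliability horizon
    START_HORIZON_HOURS = 2  # assumed task horizon today

    def projected_horizon(months_ahead: float) -> float:
        """Task length (hours) completable at ~50% reliability after `months_ahead` months."""
        return START_HORIZON_HOURS * 2 ** (months_ahead / DOUBLING_MONTHS)

    for months in (0, 7, 12, 14, 24):
        print(f"{months:>2} months out: ~{projected_horizon(months):.1f} hours")
    # At ~12-14 months out the projection crosses roughly 7-8 hours, i.e. a workday.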

Sadie Lawrence: 00:13:25 Which is crazy to think about. So that’s when everybody keys up the discussions around replacement and what this means for the workforce. And I know it’s difficult to imagine at this point, but when you really calculate out the curve and just calculate out the hours of what somebody can do, I mean, that’s a whole different scenario that we could talk about.

Jon Krohn: 00:13:47 Yeah, it is very hard when you’re standing on an exponential curve, you perceive it linearly, which is a really funny thing for a human brain to do. Our brains weren’t designed for exponential technological growth. There’s been no living species in the history of the known universe that has had to deal with exponential technological growth before. So we just, yeah, it’s really hard. Even though this is what I’m doing day in, day out, I struggle to kind of accept how much different life is going to be in the future.

Sadie Lawrence: 00:14:18 Well, and I think that’s one of the big problems we have, which is nobody really knows what it looks like. Even AGI, we don’t really have a grasp on, if we reach that, what would that look like? Or artificial superintelligence. And I think that’s probably one of the biggest issues: there’s not a clear picture for what that looks like yet.

Jon Krohn: 00:14:37 And I guess we may never really be able to wrap our heads around it at all, if it is kind of like the staircase-of-intelligence analogy, where we would never in a million years be able to explain to a chimpanzee how to do calculus, and a chimpanzee is almost as intelligent as us. And if this artificial superintelligence is vastly more intelligent than us, such that we are insects compared to it, then there’s no hope for us really understanding it. But I don’t know, there are all kinds of views; not everyone feels that way about intelligence. It might not be the case that intelligence works like that.

Sadie Lawrence: 00:15:12 And it may be the case that we actually don’t really even care. Maybe the whole point is that we never even know that we are the dumb ones because in our whole world, we are what exists in our world. So that’s how I see more humans being and behaving is typically we put ourselves at the top no matter what, whether that’s the case or not.

Jon Krohn: 00:15:36 Right, right, right. I get you there. Alright, I’m getting way too deep on your 2025 predictions and getting into looking ahead already, your second 2025 prediction was that AI integration into everyday devices will accelerate. And you gave some specific examples like augmented reality glasses, real-time translation. How did that one come along in 2025 relative to what you were expecting Sadie?

Sadie Lawrence: 00:16:04 Yeah, so we did see the AirPods come out with the real-time translation, which I think for an everyday device it’s probably one of the biggest tech everyday devices beyond our cell phones. Obviously we have AI in our cell phones. I still see Apple kind of being a disappointment in that space. I would love to see more of it as a highlight, but I do see a lot of AI and tools trying. I don’t know if any of them did it really well though, is what I would say. So I did see it in my toothbrush that I was purchasing at Costco and realize that’s not a place that I want AI in, but I did see a lot of everyday products trying to put it in. I went to, what’s the big conference in January in Las Vegas where all the new tech comes out?

Jon Krohn: 00:16:56 Is that Oh, the consumer.

Sadie Lawrence: 00:16:59 Yeah, something with the C

Jon Krohn: 00:17:01 Consumer electronics show. CES.

Sadie Lawrence: 00:17:03 Yes, yes, CES. So CES had it pretty much in every refrigerator, every oven, everywhere you went. And I realized this is not how I want my AI to be. So I did see it. Do I think it was a good decision? No.

Jon Krohn: 00:17:22 Right, right, right. All right, so whether for better or for worse, I think more integration is coming. Your number three prediction was AI driven scientific research will expand significantly. How’s that one coming in 2025? Sadie?

Sadie Lawrence: 00:17:37 So this one was interesting. There was a paper that came out showing that 39% more materials science discoveries had been made because of AI, all of this great growth. And that paper actually got retracted this year. So while it looked like good research, I guess it wasn’t that great of research, because the paper did get retracted. But we did see some cool things with DeepMind in terms of their GNoME project. They were able to compress about 800 years of materials discoveries into what could be done in a couple of hours and a few steps. And so again, a few things still happening from the DeepMind perspective, but otherwise the paper’s retracted, and I think it will take a little bit longer to see.

Jon Krohn: 00:18:25 Nice. That, yeah. So that sounds like still correct, but maybe not as emphatic as the first two. Number four is that enterprise AI monetization will be crucial. How did that one go?

Sadie Lawrence: 00:18:37 Yeah, so if you’ve seen the stats, you’re probably having deja vu from the big data era and the cloud era, where everybody says 80% of data and AI projects fail. And so we see the same thing now happening in the AI space, where a lot of organizations are throwing a lot of money at AI but not understanding how to implement it properly. And they have a lot of projects stuck at MVPs, but not a lot going to production. This is something that I see is just going to continue to be a trend, where people are going to start asking, what is the ROI on this? Versus, are we all rushing off a cliff just to stamp on that we’re doing gen AI?

Jon Krohn: 00:19:23 Yeah, there’s a lot of stamping happening, but I think it is really important, and I say this with a huge amount of bias as someone running a consulting company that helps enterprises get agentic AI and other kinds of solutions into their business. But talking about that curve, that exponential curve that we were talking about earlier: if today you can get 50% reliability on a task that would take a human two hours to do, and you’re looking at four hours midyear next year, eight hours by the end of the year, by the end of 2026, those differences start to make a huge difference in terms of the kinds of tasks that you can replace in an organization. And then another way of thinking about it is that behind that curve of the 50% success rate, there’ll be another one that’s at 90% and another one at 99%. And so maybe we’re a couple of years away from being able to have an eight-hour task, a complex eight-hour task, done in computer science or machine learning at a 99% reliability rate. And that is obviously a vastly different kind of paradigm. And all the while, the cost of compute is dropping like crazy.

00:20:35 If there’s one thing that you’ve got to get into your head, there’s all kinds of things that happen around you where you’re like, oh, it’s annoying that inflation is eating into my savings.

00:20:47 It’d be so great if my money became more valuable over time. Where is someplace I can bet, where I can be sure that this trend is going in the right direction? And there’s no bigger thing that I can think of than this: intelligence is becoming crazy cheap. Intelligence, which for all of living history was this incredibly scarce resource, is becoming very inexpensive, very cheap, and that changes everything. So think about how you can be leveraging it. If it’s not cheap enough today for some application that you can think of, it will be in the future, and it will be very quickly, because the trend that we’re on right now is that the same level of intelligence, the same level of capability, costs a hundred times less today than it did a year ago.
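
As a rough sketch of what that trend implies, here is a few lines of Python; the 100x-per-year ratio is the figure quoted above, and the $100 starting price is just a hypothetical for illustration:

    # Rough sketch of a ~100x per-year drop in the price of a fixed level of AI
    # capability. The 100x ratio is the figure quoted in the episode; the $100
    # starting price is a hypothetical.

    ANNUAL_PRICE_RATIO = 100   # same capability costs ~100x less each year (quoted trend)
    start_price = 100.0        # hypothetical cost today, in dollars

    for year in range(4):
        print(f"year {year}: ${start_price / ANNUAL_PRICE_RATIO**year:,.4f}")
    # If the trend held, a $100 workload today would cost about a cent in two years.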

Sadie Lawrence: 00:21:40 No, and I think that, going back to your point that it’s hard for us to understand exponential curves, we see things very linearly. How do you rethink your business? When intelligence is getting exponentially cheaper too, that causes you to rethink your complete business model. So I’m excited for the new businesses that are going to come out of it, because it is somewhat of a greenfield, and that’s what we’re seeing. I mean, looking just at businesses like Cursor, we’re seeing people get to millions if not billions of dollars in recurring revenue at rates that we’ve never seen before. And I think part of that has to do with this exponential growth of intelligence.

Jon Krohn: 00:22:24 Right, right. For sure. Cursor is a cool tool that I also started using this year myself, personally. I like it a lot. Okay. Fifth and final prediction that you had for 2025 was that the demand for AI engineering skills will surpass the demand for traditional data science skills. Did that bear out, Sadie?

Sadie Lawrence: 00:22:45 Yeah, so I looked at a quick kind of jobs report from LinkedIn, and they showed skills and keywords that were on the rise and ones that weren’t. And just overall, back to how we have a lot of CEOs who are just adding gen AI and stamping it on, we’re also seeing a lot of AI stamped onto job descriptions as well, and so much higher growth in what we’re seeing for AI prompting and AI engineering skills, particularly compared to data science skills. So check out the LinkedIn jobs report. It’s a great way, I think, just to kind of see what’s trending; think of it kind of like Google search keywords, but for what’s happening in the job world and on job descriptions.

Jon Krohn: 00:23:34 That is really interesting. Big implications for all of our listeners who are trying to have the most relevant skills of the moment. And actually, this is going back to the beginning of the year, but I did do episode number 856 talking about the fastest-growing job in most developed countries in the world, and that was AI engineer. So more in that episode on the kinds of skills that you can be developing, listener, in order to be super employable. And I don’t know how you feel about this, Sadie, but to me it’s kind of obvious that AI engineer is kind of like a sub-specialization of data science that has emerged and not really a completely distinct area. But yeah, that’s kind of how I perceive it.

Sadie Lawrence: 00:24:24 I personally think the data scientist role created a family of jobs. So when I got into the field in 2014, it was thought that you were like this unicorn, and everybody knew we were putting way too much on the data scientist role, because you were supposed to do everything from data engineering to data visualization to being an ML engineer and building out these models, and also taking it to production, so doing MLOps. Those jobs that I just described really didn’t exist when the data scientist role got created; the data scientist was all of that. And then we realized, oh, most people aren’t a full-stack data scientist and we can actually move faster if we split this into particular roles like MLOps, ML engineering, et cetera. What I’m seeing happen is not only did the data scientist role breed out a family of jobs into five different job descriptions, but now we’re seeing it change, just a rebrand from ML to AI. And what’s funny is I was reading the book Thinking Machines, highly recommend that book to anyone, and they do a whole history on AI, and they didn’t want to call it AI back in the 2010s because they didn’t think anybody would take it seriously. So to avoid that, they called it machine learning, and now we’re back to the term AI. So I found it humorous that they came up with the term machine learning as a way to get away from AI, but here we all are, back to AI to begin with.

Jon Krohn: 00:25:54 Wow, that looks like a cool book. So it’s a 2016 book by Luke Dormehl about the history of AI.

Sadie Lawrence: 00:26:01 Yes, I like that. A big portion of it is about Nvidia and really their leadership in it.

Jon Krohn: 00:26:09 That’s a different book. That’s a different book. Okay. Gotcha. Gotcha, gotcha. That’s, that’s by Stephen Witt.

Sadie Lawrence: 00:26:15 Yes.

Jon Krohn: 00:26:16 It turns out a bunch of people have named their book Thinking Machines, colon, something or other. Yeah. So this one is Jensen Huang, Nvidia, and the World’s Most Coveted Microchip by Stephen Witt. Cool. I’ll have that for folks in the show notes.

Sadie Lawrence: 00:26:31 Yeah, it’s surprisingly great if you really want a history of AI since the nineties. To me, the book is really a more detailed history of AI and the hardware it took to get us to this point. So a great read.

Jon Krohn: 00:26:46 Nice. Fantastic. All right, so that wraps up your predictions for 2025. We ended up going off on a lot of tangents and doing some forward-looking stuff. Before we get to your 2026 predictions, we’re going to do what we did for the first time last year, which I personally loved doing, and I think our audience loved it as well. Do reach out to us on LinkedIn and let us know how much you like or don’t like the next segment, so that we can evaluate how much we emphasize it or maybe expand it in the future, or maybe stop doing it. But we had a lot of fun. So last year we gave out awards. Well, you and I picked winners. We didn’t actually create awards to send to anyone, but we awarded four things for the past year. So our wow moment of the year is number one. Number two is the comeback of the year. Number three is our disappointment of the year, and number four is our overall winner in AI, I guess data science, over the past year. So let’s do wow moment first. Do you want to do yours first or do you want me to do mine?

Sadie Lawrence: 00:27:54 I’m ready. So let’s

Jon Krohn: 00:27:56 Go. Yeah, do it.

Sadie Lawrence: 00:27:57 So my wow moment for the year was Nano Banana, particularly because that’s where I felt the biggest change in image generation from last year to this year. And particularly for me, I’m remodeling our house. And so to take an image of a room and say, what would it look like with this tile floor sample, and throw in the sample, and how accurate it is, was really mind-blowing. And particularly, as you know how those models work, you don’t have a ton of control over them. And so the fact that it was able to replace the floor with my tile, and I think also because I got a lot of use out of it this year, it definitely has been my newest sidekick, but also a ton of help. What’s been your wow?

Jon Krohn: 00:28:45 Nano Banana Pro did pop into my head as my potential wow moment of the year, Sadie. It is very impressive, but I spent some time reflecting on this and trying to think over the whole year. And my big wow moment came in the spring when Ed Donner and I were developing a full-day talk, a full-day workshop, for the Open Data Science Conference, ODSC East, in Boston, and the demo that we came up with and Ed developed and delivered. So, over the course of the whole day, we taught people what we perceive as the key agentic AI frameworks that people need to know. And so we started off by teaching the OpenAI Agents SDK to show people how you can get an individual agent up and running with guardrails, doing the kinds of tasks that you want it to be doing.

00:29:43 Then we taught them MCP, the Model Context Protocol, for equipping their agent with tools that they can use. And then the third thing is we introduced them to CrewAI so that they can get a team of AI agents, now working with their tools thanks to MCP, all working together on some particular task. And then there’s what Ed did at the end. Over the course of the day, the sophistication of the hands-on demos that Ed delivered got more and more complex. By the end of the day, he had a team of software developer agents; there were four of them, if I’m remembering correctly. There was a front-end developer, a backend developer, a tester, and a project manager. And so each of those four agents had different context to work with, had different tools that they could use, different specializations, as things like the front-end/backend developer distinction would suggest, and he used that crew of agents to create a trading platform.

00:30:56 And this trading platform worked. So it was a trading platform where you could send buy orders, sell orders, it would simulate the stock market, and then you could buy or sell any given stock. Well, you weren’t trading on a real stock market, it was simulated, but all the stocks on, say, Nasdaq were available for you to trade, and you’re using the real-time price information in order to be able to trade these simulated portfolios. And then he created a crew of trading agents named after famous traders, and each of those agents had that famous trader’s kind of style. And then those traders used the software platform that the crew of agents had built. It blows my mind that that works. This was a robust, sophisticated trading platform, and the team of software developers was fake: these aren’t real people, these are agents. They created this thing over the lunch break. So he set it off running right before we took a 90-minute lunch break, and when we came back in the afternoon, the software platform worked. And that’s crazy to me.
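
For readers who want a feel for what a crew like that looks like in code, here is a minimal CrewAI-style sketch; the roles, goals, and task text are illustrative placeholders rather than Ed's actual demo, and an LLM API key is assumed to be configured in the environment as CrewAI requires:

    # Minimal CrewAI-style sketch of a software-developer crew like the one
    # described above. All roles, goals, and task text are placeholders.
    from crewai import Agent, Task, Crew

    frontend = Agent(role="Frontend developer",
                     goal="Build the trading platform's web UI",
                     backstory="An experienced web engineer.")
    backend = Agent(role="Backend developer",
                    goal="Implement order handling and the simulated market",
                    backstory="An experienced Python API engineer.")
    tester = Agent(role="Tester",
                   goal="Write and run tests against the platform",
                   backstory="A detail-oriented QA engineer.")
    manager = Agent(role="Project manager",
                    goal="Break the requirements down and coordinate the team",
                    backstory="A pragmatic technical project manager.")

    build_platform = Task(
        description=("Build a simulated trading platform where users submit buy and "
                     "sell orders against real-time prices and track a simulated portfolio."),
        expected_output="Working code for the platform plus passing tests.",
        agent=manager,
    )

    crew = Crew(agents=[frontend, backend, tester, manager], tasks=[build_platform])
    result = crew.kickoff()  # the agents collaborate on the task and return a result
    print(result)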

Sadie Lawrence: 00:32:20 Now, if I can add an honorable mention to a wow moment, that would definitely be Lovable. And what they’ve done with their platform is very similar to what you’re describing with the team of agents. It’s really that, but for frontend development. I redid our website for HMCI with really just one prompt, so you can go to our website and let me know if you like it or not, because it literally only took one prompt. But overall, it’s just incredible. I feel so old, Jon. I remember the day when I was editing code for my MySpace, or I was even using Squarespace and felt like it was so much further ahead in terms of drag-and-drop website development, and now it’s like one prompt with a team of agents and you have a whole website, a whole platform, and you’re good to go. So

Jon Krohn: 00:33:16 I’ve got it up here. The HMCI website looks pretty good, pretty good.

Sadie Lawrence: 00:33:22 Built by a bunch of agents. And thank you, Lovable.

Jon Krohn: 00:33:26 Wow. Yeah, there you go. Really cool. Alright, so yeah, so that’s our wow moment of the year. Congrats to the winners; we’re going to skip their speeches and move right on to the next category. So the second category for our 2025 awards, Sadie, is Comeback of the Year. And I think you and I actually let it slip to each other before we started recording. For most of these, we don’t know what the other is going to say, but for comeback of the year, it’s just so obvious. I don’t know how anyone could argue with us on this one. And we have the same one. Do you want to tell the audience what it is?

Sadie Lawrence: 00:34:08 Yeah, not only did we have the same one, but this was our same one from last year. So we have not changed at all, so there go any of your guesses, but we are both solid on Google. I mean, not only did they come back last year, but they’re coming back even stronger than ever this year. And it’s not only the suite of tools that they have; what I’ve been most impressed with was how they’ve integrated AI into search and integrated it into G Suite. I think they’ve done a really great job of integrating it into their existing products in a really thoughtful way that doesn’t distract from the old way of working and has honestly, in my opinion, been really seamless. I continue to find myself using the AI answers more and more and more, and that’s becoming my standard. And I think a lot of people last year were really curious: what is Google going to do? Are they going to eat their own business? How’s it all going to turn out? And so beyond just talking about Nano Banana and their new model developments, they’re doing a really great job of integrating it into their existing products, which, kudos to Google. No small feat.

Jon Krohn: 00:35:18 And something that I’ve mentioned on air, have I mentioned this on air? I’m not a hundred percent sure. I think I have, and I’ve definitely talked about this in my personal life and my professional life, is that the other thing about Google, where I think they have a big advantage relative to some of the other frontier labs like OpenAI and Anthropic, is that a lot of people already trust Google with their Google office suite, Google Drive, Gmail; you already have a lot of data in there. And so for me personally, I have the most cutting-edge models and subscriptions from all three of those providers, and I use them for different purposes. It doesn’t make sense to me to connect Anthropic or OpenAI to my Google Drive and give them all of that information and access when I already have a cutting-edge frontier model that has it. Why should I give myself that extra risk?

Sadie Lawrence: 00:36:21 Exactly. And I think, back to that, yes, they took their time, and I think it was really important for them to think about how it integrates with their existing suite. I will say my biggest regret in starting my new company: with Women in Data, we use G Suite, and with HMCI, I switched over to Microsoft. Don’t do it, don’t do it. If I can make any recommendation, I should have stayed with G Suite. So that’s my biggest regret in the new company, like, why did we switch over and start on Microsoft? So maybe it’s not too late.

Jon Krohn: 00:36:55 But are you doing teams meetings?

Sadie Lawrence: 00:36:57 We are doing, yes. It’s horrible. I know, I know. What was I thinking? I thought it’d be cool to try something new. It wasn’t smart. But speaking of Google, though, I would highly recommend, if you haven’t watched it, the Thinking Game documentary. I know it is on Prime, but it’s just about DeepMind, and it starts from the beginning days when they started the lab and then Google taking it over. So yeah, again, I must be into history lately, because if you want a history on DeepMind, it’s a great documentary.

Jon Krohn: 00:37:34 It looks like it is available in full on YouTube for free.

Sadie Lawrence: 00:37:40 Amazing.

Jon Krohn: 00:37:40 Which doesn’t surprise me because it seems like it’s a Google product, so they would have created this film. So yeah, it looks like it became available 10 days ago at the time of recording: an hour and 24 minutes long, 25 million views, published by Google DeepMind. I will have a link to that in the show notes so that anyone anywhere can watch it for free. Alright, great recommendation there. So yeah, comeback of the year: congratulations again, Google. Maybe next year it will be somebody else. How many times can you come back? All right, disappointment of the year. It’s interesting because earlier in this episode you used a company name and the word disappointment in the same sentence. So is that the company?

Sadie Lawrence: 00:38:34 This one may be controversial. I think you may be going to disagree with me on this, Jon, but I’ll explain myself. And so I’m just going to say, for me, actually, agents were disappointing, and I wrote a Substack this year called Agents of Disappointment.

Jon Krohn: 00:38:48 Take her off the air, take her off the air! Where’s the abort button?

Sadie Lawrence: 00:38:54 If all of a sudden I get muted, you’ll know why. Right?

Jon Krohn: 00:38:58 Alright, and that brings us to the end of the episode.

Sadie Lawrence: 00:38:59 Yes, exactly. No, so I wrote a post this year, my best-performing Substack, called Agents of Disappointment, but really what I was talking about was the divide between the hype of agents and where I saw this come into disappointment, which was the enterprise companies. Obviously Salesforce has been talking about its Agentforce for some time. You have SAP, you have all of these enterprise companies who have been talking about how to implement agents into your existing enterprise tools, and it just doesn’t work right. Or at least a lot of companies aren’t set up for it properly or, in my mind, don’t know how to think about how to structure them properly and what to have agents do. And so from that standpoint, from an enterprise agent standpoint, I think the divide between the hype and the practicality of the implementation was too great. And so that was my disappointment of 2025.

Jon Krohn: 00:40:00 I totally get it and I don’t disagree with you. It makes a lot of sense to me. There’s too much talk about agentic AI relative to the impact that it’s making, no question. I do think that a lot of it is related to people not having their data silos set up in a way, or the security set up in a way, where they’re comfortable with it. But yeah, a lot of tinkering with agents, not nearly as many enterprise deployments, though I do think it will come. That is not my disappointment of the year. My disappointment of the year is Apple.

Sadie Lawrence: 00:40:38 I think you were disappointed with Apple last year. We got to go back. Let’s look. I think, yeah, because Apple, did they announce Apple Intelligence last year or was that a this year thing? I don’t know. Time is weird in AI world.

Jon Krohn: 00:40:49 I think they did. No, I think you’re right. I think they announced Apple Intelligence in the autumn, northern hemisphere autumn of last year. And it was disappointing. But I mean I guess it’s kind of just Google. It’s like,

Sadie Lawrence: 00:41:01 And it’s still disappointing.

Jon Krohn: 00:41:02 It still is, because it’s like a year later, and what can I do additionally with AI in my phone that I actually use? Not much. I once made an emoji and then didn’t do it again, just used its built-in gen AI to send an emoji over iMessage. But I don’t know, I don’t really need to do that. It seems like things like just having Siri be able to understand what I’m saying to it, in the same way that OpenAI Whisper can, I mean...

Sadie Lawrence: 00:41:44 You know who’s actually even done better than that? I will use Grok in my Tesla. And so when I’m coming home from work and want to brainstorm something or learn a new subject, I just push a button and it works seamlessly, and the chat mode goes back and forth, and I’m like, okay, if I can do that in my car, why can’t I do it on my phone that seamlessly? So I hope that next year our comeback of the year is Apple, because they’ve been in the disappointment corner for far too long.

Jon Krohn: 00:42:14 And they have a lot of potential because of how ubiquitous their devices are. There’s a big opportunity for Apple if they can get it right. Now, I’ve also got to talk about Grok and xAI, because I never talk about them on air, or very rarely, not nearly as much as they deserve given that they have come to the frontier, I mean. Well, we haven’t talked about our overall winner yet, so maybe xAI is your overall winner, so maybe I should save it, but I just want to say, you mentioning Grok: I did recently do an episode on Google Gemini 3 Pro and how, at the time of the episode coming out at the end of November, it was the top model across most categories as well as overall on the LM Arena. And so in that episode I kind of make the case that it’s the best model, but in terms of Elo score, statistically, even though it had a higher Elo score overall than Grok on LM Arena, the difference in the Elo scores wasn’t enough for LM Arena to call that statistically significant. And so technically Gemini 3 Pro is actually in a tie with Grok, but it made my story drag out, I couldn’t make the episode tight, and it was about how Google had been losing to OpenAI and then came back, and it just became too complex if I threw xAI and Grok in the mix. But yeah, they’ve done a great job.
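
For listeners curious about the Elo arithmetic behind that "statistical tie", here is a minimal sketch; the ratings and gaps below are made-up illustrations, not LM Arena's actual numbers:

    # Sketch of why a small Elo gap on a leaderboard can still be a statistical
    # tie: the standard Elo formula maps a rating gap to an expected head-to-head
    # win rate, and a few points barely moves it. Example ratings are made up.

    def elo_win_prob(rating_a: float, rating_b: float) -> float:
        """Expected probability that model A beats model B under the Elo model."""
        return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

    for gap in (5, 10, 25, 100):
        print(f"Elo gap of {gap:>3}: A wins ~{elo_win_prob(1500 + gap, 1500):.1%} of matchups")
    # A 5-10 point gap implies only ~50.7-51.4% expected wins, which can easily
    # fall inside the leaderboard's confidence interval, hence "statistically a tie".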

Sadie Lawrence: 00:43:51 I think what’s impressive, I have a slide that I’ve been talking about for years, and I just go to, I think it’s rate my iq.org, and it rates all the AI models on a curve. And so I have screenshots from 2023 to 2025 to now, and it’s just so fun to see how few models there were and just how pretty much everybody has caught up to the top tier, not even just the paid private models but, I mean, this isn’t my overall winner, but again, what’s been happening in open source and what came out with DeepSeek, I mean it’s really incredible what access we now have, even for free.

Jon Krohn: 00:44:33 Yeah, it really does make you wonder: with all these crazy amounts of money that need to go in to create a frontier model at the next capability level, and then six months later you have an open-source model that can do the same thing, it does raise lots of questions about monetization, especially if you don’t have clear distribution channels at vast scales like Google does. Yes, yes, yes. Interesting indeed. But let’s not make the audience wait any longer. It’s time for our overall winner of 2025 to be crowned. Oh my God. The audience is losing their minds. Wow. They’re out of control. Oh wow. Okay. I don’t know. I have no idea what you’re going to say. I’m not even a hundred percent sure I have a clear winner. So maybe we can kind of work together to decide on an overall winner between us. Do you want me to say what I’m thinking as my overall winners, or do you want to go?

Sadie Lawrence: 00:45:29 Yeah, go ahead. I have one, but...

Jon Krohn: 00:45:31 So I have two firms that I have as my overall winner, and one of them that we’ve already talked about a lot is Google, but the other one, which you talked about not too long ago in your book recommendation, is the thing that is powering all these AI models, not all of ’em, I guess, technically, because Google does have TPUs, but Nvidia,

00:45:56 Yeah,

00:45:57 Nvidia is the big winner in AI. They’re the overall winner. I mean, look at their share price.

Sadie Lawrence: 00:46:04 Oh my gosh, it’s crazy.

Jon Krohn: 00:46:05 It’s bonkers.

Sadie Lawrence: 00:46:08 They truly are. I would say, I mean, in the AI industry, they truly are just the overall winner in AI, right? That is it, right? Well, that’s what we’re counting, isn’t it? All roads point back to Nvidia. So I would say they are the overall winner and will continue to be for some time. I guess my approach to who the overall winner was is I really saw this year as the year of vibes. To me, this was the year vibe coding got coined as a term, and where I do think agents were and are really useful is in coding. I think that’s the one place that we can see ’em just take off and do their thing. So for me, maybe it’s not the overall winner, I would agree with you on Nvidia, but maybe it’s my theme. I’m going to add in an extra category for 2026, we’re getting a new category here: what the theme of the year was. And to me, this was such a vibey year, it was just like everything was about vibes. We didn’t have new model breakthroughs; GPT-5 was disappointing. I don’t know of anything that people are anticipating coming out. For the previous years, people were like, wait till we get multimodal, and then, wait till we get agents. I don’t know what people are really anticipating next; it is just all about vibes. And so that’s my theme.

Jon Krohn: 00:47:33 I like that. It makes a lot of sense. Alright, our joint winners are Nvidia and vibes. Yeah, no, I totally get it. It makes a lot of sense. And also, to go into a little bit more detail, a lot of our listeners might already know this, and you definitely know this, Sadie, but the reason why coding is something that is so great to be doing with LLMs is that it’s something that’s really easy to train LLMs to be good at to a really high level, because if the code executes and it works, you kind of have the answer. So it’s easier to simulate multi-step, very long problems. And that’s why, earlier, when I was talking about the length of human task that AI systems can do doubling every seven months, that is only on these kinds of tasks, like computer science or machine learning tasks, where you have a very clear sense of whether the thing worked and where you can simulate data, simulate training data, that allow you to continuously expand the length of what these models are doing. For a lot of real-world tasks in most industries, to collect those data is just so crazy expensive, to have humans creating those data, that yes, it’s happening, but way, way, way more slowly than in those areas where we can simulate the data really effectively already.
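
A minimal sketch of that "code is easy to verify" point: the reward signal is simply whether generated code runs and passes its tests. The run_tests helper and the candidate/test strings below are hypothetical placeholders, not any lab's actual training setup, and a python executable is assumed to be on the PATH:

    # Minimal sketch of why code is such a convenient training domain: the reward
    # is just whether a candidate solution executes and passes its tests, so long
    # multi-step problems can be checked automatically.
    import subprocess
    import tempfile
    import textwrap


    def run_tests(candidate_code: str, test_code: str) -> float:
        """Return 1.0 if the candidate passes its tests, else 0.0 (a verifiable reward)."""
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(candidate_code + "\n\n" + test_code)
            path = f.name
        result = subprocess.run(["python", path], capture_output=True, timeout=30)
        return 1.0 if result.returncode == 0 else 0.0


    candidate = textwrap.dedent("""
        def add(a, b):
            return a + b
    """)
    tests = "assert add(2, 3) == 5"
    print(run_tests(candidate, tests))  # 1.0 -> a clean, automatic reward signal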

00:49:03 So those are our winners. It’s time for the predictions: what’s going to happen in 2026. Sadie, I believe that, like last year, you have five categories for us.

Sadie Lawrence: 00:49:13 I do. I have five themes, and it’s not 2025, but oh well, here we are. So number one, I think we’re going to see, and again, I know we’re all heading towards AGI and this is going to make us sound like we’re going back to narrow AI, but I think more specialized industry models, like AlphaFold. I think we’ve reached the limit for now, until we build out some of these hyperscaler clusters, but really the limit is on data for training these general-purpose models. And back to your comment on why we see agents and LLMs do so well in coding: it’s because we can actually measure it and see that it works. We need to have more specialized models for particular domains and then have domain experts evaluate those. And so I think we’re going to start to see more development of specialized models for particular domains.

00:50:12 And you could think of them as mini models, but the way I think of it is like we have human intelligence, and there’s a general intelligence that we all have that gets us through day-to-day life. And then we have people who go and get PhDs in areas or are just savants in a particular area. And while for some reason nature has selected that path for us, that we each have our own specialization, I think we’ll start to see that happen now in model development as well. And so I’m interested to see how this unfolds, but I think that’s how we’re going to make progress: getting more specialized. Kind of expect more AlphaFold moments in particular domains.

Jon Krohn: 00:50:54 Nice. I like that. And it is interesting that you bring up the AlphaFold example. It may not be coincidental, but that is an example of, while very narrow, artificial superintelligence, because humans, no matter how hard we try, can’t look at a sequence of amino acids and predict what the three-dimensional protein will look like. We just can’t do it. But AlphaFold can, not for every kind of protein, but for a very broad range of proteins with remarkable accuracy, and so it’s artificial superintelligence. So these kinds of very specialized industry models could become more and more prominent. We could see more and more examples like AlphaFold where, in these very narrow niches, a machine vastly outperforms human intelligence capabilities.

Sadie Lawrence: 00:51:43 But think about it, how do we get to an overall general super intelligence probably by combining these mini models of particular domains together to get there. But right now, one of the hardest things about some of these general intelligence models is really being able to test them fully in their subdomain. And so I think that it’s just splitting it up. Maybe half of it is the training and then half of it is the testing in a way that we can actually evaluate ’em more properly.

Jon Krohn: 00:52:11 Nice. Makes a lot of sense, Sadie. All right. What’s number two?

Sadie Lawrence: 00:52:15 Number two is continual and nested learning in models. So this is a new paper that just came out from Google. It is called Nested Learning, hence the prediction comes from it. Again, back to Google being hot this year and last year and coming back with some of their research. I’m really interested in this paper because one of the issues that we have today is we train a model and, other than it remembering what’s in our context window, it’s not continuing to update and learn for me. And so nested learning is a way to bypass that and get around it. And so I think we’re going to see a lot more progress in this space on how we get models to continue to update and continue to learn. Being able to learn and grow is what makes humans really great and what makes our intelligence so strong, so why wouldn’t we want that in a model as well?
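
As a toy illustration of the continual-learning idea Sadie describes, here is a generic online-learning sketch in Python; it is not an implementation of Google's Nested Learning paper, and all numbers are illustrative:

    # Toy illustration of continual learning: instead of freezing weights after
    # training, the model keeps taking small gradient steps on each new example
    # it sees, so it can track a world that drifts over time.
    import numpy as np

    rng = np.random.default_rng(0)
    w = np.zeros(3)   # model weights, updated forever rather than frozen
    lr = 0.05         # learning rate for the ongoing updates

    def predict(x):
        return x @ w

    # Simulated stream of new data arriving after "deployment"; the true
    # relationship slowly drifts, which a frozen model could not follow.
    true_w = np.array([1.0, -2.0, 0.5])
    for step in range(1, 2001):
        true_w += rng.normal(scale=0.001, size=3)   # slow drift in the world
        x = rng.normal(size=3)
        y = x @ true_w
        err = predict(x) - y
        w -= lr * err * x                           # one small online update per example
        if step % 500 == 0:
            print(f"step {step}: weight error {np.linalg.norm(w - true_w):.3f}")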

Jon Krohn: 00:53:12 Makes a lot of sense. Really big innovation there. Yeah, continuous learning is a big gap in most of, certainly all of, the big LLMs from the frontier labs: they don’t continuously learn, and it’s a huge exercise to update them. And continuous learning is a core function of biological intelligence, so pretty cool. I like that one a lot. What’s your number three prediction for 2026, Sadie?

Sadie Lawrence: 00:53:37 Number three, and here my bias is coming in, but we’re going to get back to research. So I don’t think that any lab right now, any frontier lab, has a clear path for what is going to be our next big breakthrough in AI. Again, back to my comment earlier: before, we kind of saw the roadmap, oh, wait until we get a model that’s multimodal, wait until GPT-5 comes out. I don’t hear anybody talking about GPT-6. I don’t hear anybody really talking about what’s the next thing they’re really waiting for from an AI model. And I think that we have to get back into labs to discover it, and I think we’re waiting to have a clear picture of what that may or may not look like. A lot of this comes from the recent interview with Ilya on the Dwarkesh podcast and just his estimates, too, that there are still a lot of labs that don’t know exactly what that path forward looks like. And I think that we’re going to run into some limitations with some of the hyperscalers, not necessarily from a compute standpoint, but from a data availability standpoint. And so it’s time to get creative and get back to research, and I think it’s really just an exciting time because it means that new ideas are welcome, and so it’s a good time to be in the space.

Jon Krohn: 00:55:09 Makes a lot of sense. Yeah, I mean, for a long time the de facto scale that we were scaling up on was the number of weights in a model: 10x-ing that, 100x-ing it, you were getting magical capability improvements, as notably happened from GPT-2 to 3 to 4. And then more recently in 2025, there was a lot of, well, in 2024 even more so, there was a lot of excitement about scaling inference time. So how long do these reasoning models reason for? And those reasoning times did expand a lot in 2025, and so we saw really impressive results on math and chemistry olympiad kinds of problems. And just in general, these kinds of supposedly very hard benchmarks, like Humanity’s Last Exam, have started to look tractable thanks to long inference times. But I think you’re right. I think that some kind of orthogonal breakthrough, beyond just scaling, has got to get us to the next level.

00:56:22 And I think continuous learning is potentially part of that. I don’t think this is going to happen in 2026, and if it does, that would be a huge breakthrough: something that allows algorithms to learn in a much more sample-efficient way. So humans, even infants, can learn from one example or even infer something from zero examples. Take a child who isn’t even old enough to speak but can kind of walk around: if there’s an adult carrying a bunch of heavy-looking objects and they’re kind of walking into a closet door, that child can’t even speak but will open the closet door for the adult. The child is able to infer the intentions of the adult in a scenario that the child has never seen before. And so that kind of zero-shot learning or one-shot learning or few-shot learning, we have those terms in AI, but those terms only apply when you’re kind of providing examples in a context after the LLM has learned from billions and billions of examples. And so yeah, I think some kind of breakthrough that allows for way more sample-efficient learning is critical.

Sadie Lawrence: 00:57:48 You must be peeking at the next prediction.

Jon Krohn: 00:57:51 Oh yeah. Well, I had no idea. Seriously, for our audience, I have no idea what Sadie’s going to say next. Alright, number four, Sadie, what is it?

Sadie Lawrence: 00:58:01 Yeah, so I know there’s been a lot of talk about robotics, and I keep saying this prediction that I think 2027...

Jon Krohn: 00:58:07 You bought a new robot, didn’t you?

Sadie Lawrence: 00:58:10 Yes. So it’s supposed to come in 2026. I don’t have a date, but I know it will be this year by the time this episode comes out, so I’m very excited to take my robot for a walk.

Jon Krohn: 00:58:22 Wild.

Sadie Lawrence: 00:58:23 But I don’t think that 2026 will be the year of the robot. I’m still standing by what I’ve said, which is that 2027 will be the year of the robot. But I do believe we will see more physical AI and spatial intelligence, and particularly spatial intelligence, because I think we need to branch off into new data sets. So things like World Labs and the simulation of environments, I think that’s going to be a new playground that we explore as a new data set, really looking at how we bridge the gap between real physical AI and getting a robot to work in space. Again, this is where continual learning will come into play with these models as well, because when you go into a 3D environment, you’re encountering a lot of things, as all of us know from driving or walking, that we have to continually update our model for. So I see physical AI and spatial intelligence being a space that will be really popular, but mainly from the standpoint of helping us collect new and harder data that we don’t have today, data that can finally help bring our AIs into the world and really start to expand the use of these models.
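For a concrete, if toy, picture of "simulation as a new data set", the sketch below rolls out random actions in a Gymnasium environment and logs the resulting transitions. A real spatial-intelligence pipeline would use a rich 3D world model rather than CartPole, and a learned rather than random policy; those choices are illustrative assumptions, not anything World Labs or the guests described:

```python
# Toy "collect experience from a simulator" loop using Gymnasium.
# CartPole stands in for a much richer 3D environment; the point is that
# the simulator becomes a data source for training spatially aware models.
# Requires: pip install gymnasium
import gymnasium as gym

env = gym.make("CartPole-v1")
dataset = []  # list of (observation, action, reward, next_observation, done)

obs, info = env.reset(seed=42)
for step in range(1_000):
    action = env.action_space.sample()  # random policy, purely illustrative
    next_obs, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
    dataset.append((obs, action, reward, next_obs, done))
    if done:
        obs, info = env.reset()
    else:
        obs = next_obs

env.close()
print(f"collected {len(dataset)} transitions from simulation")
```

The appeal of simulation is exactly what this loop hints at: you can generate as many of these transitions as you can afford compute for, instead of being limited by how much real-world robot data you can physically collect.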

Jon Krohn: 00:59:50 Nice, I love that. Actually, it was after we had recorded last year’s episode, but before we’d released it, that I was at NeurIPS in Vancouver, and Fei-Fei Li was one of the keynotes: thousands and thousands of people, a huge auditorium packed full, watched her talk about her company, World Labs, which you’re describing right now. It’s very expensive to collect these big real-world data sets, but absolutely essential for training the machines of the future. Because right now, all these frontier LLMs are capable of being helpful inside of a computer, but if you want some robotics application, some real-world spatial application, then the hard work of collecting all those data and getting the machines going is key to getting your Neo able to water all your plants, which, if I understand correctly, is a key application you’re looking for from your robot.

Sadie Lawrence: 01:00:51 Yes, sir. I have a plant wall at the office, well, actually two plant walls, so I do need somebody to take care of these plants, which will be great.

Jon Krohn: 01:01:00 Nice. Alright. And then fifth and final prediction for 2026. Sadie, what is it?

Sadie Lawrence: 01:01:07 Yes, drum roll please. I love to bring it to practical application for work, which is: what will be the new hot, trending job? I think we’re going to start to see AIOps become a thing. So, AI operations: think of this as somebody who manages the GPU infrastructure, model orchestration, and agent reliability. Think of it as the DevOps function of 2010 becoming the AIOps of the 2025-26 era. So I think this is a job description that we’ll start to see pop up. I don’t know if it will be what I’d call the most popular one, but I think it’s going to be a new trend that we’ll see emerge here in 2026.
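As a small, hypothetical taste of what day-to-day AIOps work might involve (the endpoint URL and alert thresholds below are invented for illustration, not a real service or anything described on the show), this sketch polls GPU utilization via nvidia-smi and times a request against a model-serving health endpoint:

```python
# Hypothetical AIOps-style health check: GPU utilization plus endpoint latency.
# The endpoint URL and alert thresholds are made up for illustration.
# Assumes an NVIDIA driver (and hence nvidia-smi) is installed on the host.
import subprocess
import time
import urllib.request

MODEL_ENDPOINT = "http://localhost:8000/health"  # hypothetical serving endpoint
GPU_UTIL_ALERT = 95      # percent
LATENCY_ALERT_S = 2.0    # seconds

def gpu_utilization() -> list[int]:
    """Return per-GPU utilization percentages reported by nvidia-smi."""
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=utilization.gpu", "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    )
    return [int(line.strip()) for line in out.stdout.splitlines() if line.strip()]

def endpoint_latency(url: str) -> float:
    """Time a simple GET against the model's health endpoint."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=5):
        pass
    return time.perf_counter() - start

if __name__ == "__main__":
    for i, util in enumerate(gpu_utilization()):
        status = "ALERT" if util >= GPU_UTIL_ALERT else "ok"
        print(f"GPU {i}: {util}% utilization [{status}]")

    latency = endpoint_latency(MODEL_ENDPOINT)
    status = "ALERT" if latency >= LATENCY_ALERT_S else "ok"
    print(f"endpoint latency: {latency:.2f}s [{status}]")
```

In practice this kind of check would feed a scheduler or alerting system rather than print to stdout, but it captures the flavor of the role Sadie describes: keeping GPUs, models, and agents observably healthy.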

Jon Krohn: 01:01:57 Nice, I like that one as well. AIOps, quite practical for all of our hands-on practitioners out there listening. So yeah, to recap: specialized industry models, continual and nested learning, research on the next big breakthrough, spatial intelligence, and AIOps. Sadie, thank you so much yet again for doing a predictions episode. I hope that I can wrangle you into doing it again next year; I always enjoy this episode so much.

Sadie Lawrence: Please, absolutely. That would be great.

Jon Krohn: Maybe we can even do that partway through the year, when you have your robot. Maybe I can be in California with you and, I don’t know, we can figure that out.

Sadie Lawrence: 01:02:40 Out. Yes, come out to our new studio. That will be once we get it set up, Neo will let you in the door.

Jon Krohn: 01:02:47 So cool. I can’t wait to see it; I’m so jealous. And so, as you know, I always ask for a book recommendation at the end of every episode. You already gave one in this episode, The Thinking Machine: Jensen Huang, Nvidia, and the World’s Most Coveted Microchip, which incidentally turned out to be my overall winner of 2025. Do you have any other book recommendation for us, or do you want to go with that one?

Sadie Lawrence: 01:03:07 I’m going to go with that one, and then my book, Becoming an AI Orchestrator, of course.

Jon Krohn: 01:03:12 Of course, yes.

Sadie Lawrence: 01:03:13 They just pair really, really well as a Christmas gift.

01:03:17 Those will be great to get together. And I’ll add a bonus one: The Innovator’s Dilemma, which I feel is a really good one right now. Tying us back to the beginning of the episode, where you talked about intelligence being on this exponential curve and we mentioned that we need to rethink business models, I think that book does a great job of highlighting how, once you are an established business, it is difficult to reinvent that business. So I think it’s a really relevant book for everybody right now, from the standpoint of how we rethink our businesses with AI.

Jon Krohn: 01:03:53 Nice, I like that recommendation as well. It seems like something I need to be reading. Gosh, I wish I could read all the books that people recommend on this show; that seems like a really useful one for me to sink my teeth into, because it is a dilemma I face all the time. Alright, and yeah, of course your book, I’ll mention it by its full title again: Becoming an AI Orchestrator: A Business Professional’s Guide to Leading, Creating, and Thriving in the Age of Intelligence, on bookshelves now. Check it out, people; get it. And other than your book, where else should people be following you or subscribing to you going forward, Sadie?

Sadie Lawrence: 01:04:29 I’m having a lot of fun on Substack this year, mainly because it goes direct to people’s inboxes, and I just started my YouTube channel, doing weekly videos that are deep dives into my favorite tools in AI. I have one coming out on Gemini Pro this next week. So I’m just having a lot of fun with those two platforms right now.

Jon Krohn: 01:04:52 Nice: Substack and YouTube. And I imagine people should be following you on LinkedIn as well?

Sadie Lawrence: 01:04:57 Of course. The good old streets of LinkedIn. Come say hello. Just don’t throw your bot on my comments.

Jon Krohn: 01:05:04 Yeah, I’ll block you. You do that to me, you’re blocked forever; see you. Alright, Sadie, thank you so much. I’m looking forward to an exciting 2026 in AI and robotics and consciousness research and book releases. So much fun sharing this time with you again, as always.

Sadie Lawrence: 01:05:26 It’s always a great time. So happy New Year, everyone, and here’s to another great year.

Jon Krohn: 01:05:33 Five years in a row now, and I continue to love these annual look-ahead predictions episodes with Sadie St. Lawrence. In today’s episode, we covered how her five predictions for 2025 largely panned out: agentic AI dominated, AI integrated into major everyday devices like AirPods, scientific research expanded with AI, enterprise monetization remained crucial, and AI engineering skills overtook traditional data science skills in demand. My wow moment of 2025 was watching a crew of AI agents build a functional stock-trading platform in 90 minutes; for Sadie, it was using Lovable to generate her entire HMCI website with a single prompt. We awarded Google Comeback of the Year again, for becoming the frontier AI lab to beat for the first time in years thanks to its Gemini 3 Pro and Nano Banana Pro models, while, in Sadie’s book, vibing was the overall winner of 2025. In terms of Sadie’s predictions for 2026:

01:06:30 Number one was that we’ll see more specialized industry models emerge, think AlphaFold-style breakthroughs in specific domains rather than big general-purpose models, partly because we’ve hit data limits for training massive generalist systems. Number two was that continual and nested learning will advance significantly, allowing models to keep updating and learning rather than remaining frozen after training, a key gap between current AI and human intelligence. Her third prediction was that labs will return their focus to fundamental research, because scaling, whether of model weights, inference time, et cetera, is no longer the clear provider of the next big AI breakthrough. Number four was that spatial intelligence and physical AI will gain momentum as researchers explore 3D environments and simulation as new data sources to bring AI into the real world. And fifth and finally, AIOps will emerge as a hot new job category, think DevOps but for managing GPU infrastructure, model orchestration, and agent reliability.

01:07:26 As always, you can get all the show notes, including the transcript for this episode, the video recording, any materials mentioned on the show, and the URLs for Sadie’s social media profiles, as well as my own, at superdatascience.com/955. Thanks to everyone on the SuperDataScience podcast team: podcast manager Sonja Brajovic, media editor Mario Pombo, partnerships manager Natalie Ziajski, researcher Serg Masís, writer Dr. Zara Karschay, and our founder Kirill Eremenko. Thanks to all of them for producing another fantastic episode for us today to kick off the year. For enabling that super team to create this free podcast for you, we’re deeply grateful to our sponsors; they and you are what allow this show to happen. So consider checking out our sponsors’ links in the show notes to support the show. If you ever want to support the show directly by sponsoring it yourself, you can get the details on how by making your way to jonkrohn.com/podcast. Otherwise, support us by sharing this podcast episode with folks who would love to hear about it, review it on your favorite podcasting app or on YouTube, subscribe, but most importantly, just keep on tuning in. I’m so grateful to have you listening, and I hope I can continue to make episodes you love for years and years to come. Till next time, keep on rocking it out there, and I’m looking forward to enjoying another round of the SuperDataScience Podcast with you very soon.
