Jon Krohn: 00:00 What if the employees secretly saving 70% of their time with AI don’t want anyone to know about it, and they have good reason to hide it? Welcome to the SuperDataScience Podcast. I’m your host, Jon Krohn. It is not hyperbole to say that today’s guest, Ethan Mollick, is one of the most sought-after AI experts in the world. Ethan is an associate professor at the University of Pennsylvania’s prestigious Wharton School, where he co-directs the Generative AI Lab. His book Co-Intelligence is among the most popular AI books of all time, and nearly 400,000 people subscribe to his One Useful Thing newsletter. Today’s episode on enterprise AI is short, yes, but be prepared: it’s so information-dense you very well may need to slow down your player’s playback speed to enjoy it. Alright, let’s go.
00:48 Ethan, Co-Intelligence, your bestselling book from the spring of last year, focuses on collaborating with AI. Has the rapid evolution of agentic frameworks since the book’s publication led to any major changes in your thinking? So, for example, if machines are so autonomous, do they eventually get to a point where we’re not really collaborating with them at all? Or is that autonomy a natural complement?
Ethan Mollick: 01:12 Well, so there are now a bunch of modes of coordinating with machines that are different. With co-intelligence, when I wrote the book initially, AI made a lot of mistakes. It wasn’t autonomous in any way, and you had to kind of do all the work yourself. That still is actually a very valuable way of working with machines. But now even that’s become a little complicated, because there are two ways of doing this. One is interactively: I talk to a chatbot and push back and forth. But with thinking models and agentic models taking more and more time to do work, increasingly I assign a task to the AI, I use my expertise to decide what that task is, I evaluate the results, I correct my approach, and if that still isn’t working, I do it on my own. And that is a really good form of co-intelligence now. And then, as you’re pointing out, we are starting to see some real autonomy from these machines for some tasks. So I think people will increasingly be picking among those sets of modes, or just doing things alone. So: things I do alone, things I do co-intelligently, things I assign to the AI and evaluate, and things that happen autonomously.
Jon Krohn: 02:06 That’s a great list, very concisely done. You clearly spent a lot of time thinking about this. You mentioned in your talk at ScaleUp AI today your recent Cybernetic Teammate paper, involving a field experiment with 776 professionals at Procter & Gamble, and it demonstrates that AI often functions like a teammate as opposed to being a tool. In light of this, how should executives, how should our listeners, rethink team structures and management?
Ethan Mollick: 02:35 We have no answers to that question right now. Part of what worries me a little bit about the state of AI is that the capabilities of the systems are quite high, people are not really using them, and everyone wants someone else to answer the question for them. If you look at why American firms in particular did so well throughout the 20th century, somewhere between 20 and 40% of the extra competitive value of American firms has come from better management. There was a willingness to experiment with management techniques and approaches that generally led to the dominance of American firms in the 20th century. I feel like that’s something a lot of companies have given up on. They either go to outside consultants to do the work or trust SaaS vendors to tell them how to run their company, and I think that the companies that start to experiment are going to be in the best possible shape. We’re already seeing some versions of breaking down traditional barriers, like: let’s pull senior IT people out of the IT department and have them sit next to a subject matter expert and vibe-code applications together. That starts to really change how things operate.
Jon Krohn: 03:34 Yeah. This actually ties into some of your contrarian stances, and you touched on it a little bit there in your last response. For example, you’ve argued against consultant-led, external-consultant-led AI implementations, as well as other popular enterprise AI approaches like retrieval-augmented generation, RAG. Given your contrarian stance, you might have unique ideas about where the best opportunities lie for AI in enterprises today.
Ethan Mollick: 04:01 So first, my qualifier: I actually think there is room for consultants and room for RAG. Consultants especially can help organizations make the change they want to make, map processes, things like that. But I think that the change does have to come from within, to some extent. The reason why I’ve been sort of pushing against RAG solutions is not RAG as a concept, but because the first and easiest thing to build is a talk-to-our-documents chatbot. I think there’s space for this, but the thing is that if you built a talk-to-our-documents chatbot when I was warning you not to do that a year and a half ago, you now have a mediocre talk-to-our-documents chatbot that cost you a lot of money to make and is now easily beaten by an off-the-shelf model, right? Similarly, if you had a large consultant-built application, things have shifted very quickly. So I think the point is not so much that there’s one good or bad application, but that you need to be at least doing some really ambitious work. And it’s important to do that work ambitiously internally and not just look at vendors. Vendors play a role, but you need to have some internal effort going on as well.
Jon Krohn: 04:58 It sounds like, based on your answer to the most recent question as well as the one before it, that a key idea here is that enterprise organizations need to be looking at the opportunities internally, figuring out how their IT specialists or AI specialists can be embedded with the people who will be impacted by the work. That sounds like a key part.
Ethan Mollick: 05:19 Sort of. I think you’re putting too much emphasis on it. So I say you need three things in your organization: leadership, lab, and crowd. Leadership are the C-level people in the organization who are actually deciding what incentives will encourage people to use AI, giving a vision of what AI is going to do, and deciding what processes need to be altered or changed as a result. If they hire consultants to help them, more power to them, but they need to be making those decisions. The crowd is everyone in your organization using these tools. You want them to have access to advanced chatbots, because it’s easy for people to figure out use cases in their own area of expertise; it’s very expensive to have an outside person figure that out for you. So you need them experimenting. And then what do they do when they come up with a result that works? That’s when they go to the lab.
05:55 So the lab is your group of people inside your organization who are not all from IT. In fact, most of them come from the fact that there’ll be a couple percent of people in your company who are just really good at AI; bring those into the lab. They’re going to be doing three things in the lab. They’ll be turning around those ideas right away: someone comes in with a prompt that works well, let’s test it, refine it, and ship it out to the world. They’ll be thinking about benchmarking, and they’ll be thinking about whether you buy or build advanced products.
Jon Krohn: 06:19 Yeah, this buy-versus-build scenario: you’ve talked about how, if you have these big, expensive external consultant engagements, then you’re very likely a year later to have spent that money poorly, because someone else comes along, a frontier lab comes up with a model that can do a lot of what that external consultancy had built anyway. So how do you advise on how we should be making our buy-versus-build considerations?
Ethan Mollick: 06:49 So I think it’s not just buy and build; there’s also rent as an option, which is buying temporarily. I think you need to be thinking about how core something is to your business and how much expertise you don’t have internally. So if you need a really good coach or mentor and you don’t have a mentoring program that’s very good, that might be a good thing to source externally. Then there are core functions of your business, CRM, other sets of stuff, where you might want to use AI capabilities from vendors. But I would ask my vendors to be much more transparent than they’ve been in the past, because your vendors generally don’t have their own AI models. They are absolutely using someone else’s models. What models are they using? What prompts are they using? Prompts, if you don’t have the ability to audit them, can be easily manipulated in a way that makes results seem good when they aren’t. So you need a lot more transparency in the buy decision than you used to have.
Jon Krohn: 07:34 Yeah, it’s interesting because you’re asking them to give away some of the secret sauce, but I guess there’s a balance that they need to strike if they want to get the deal with you anyway.
Ethan Mollick: 07:43 I think we should be pushing much harder for that secret sauce, because we know enough to know that small changes in prompting can vastly change the results and how people in your organization feel things operate. If they’re using older models and not updating them, why would you let the vendor pick models for you when some of those models are flawed? One way or another, if your secret sauce is “we are using a prompt and we are using Gemini,” that is not secret sauce. If that’s your competitive advantage, you are in big trouble.
Jon Krohn: 08:08 For sure. Alright, so we’ve talked about organizations. Now let’s zoom into the individual level. A couple of years ago you identified “secret cyborgs”: individuals in organizations who leverage AI for time savings of 20% to 70% on many tasks while maintaining or increasing the quality of their work product. Do you think that this 20 to 70% has continued to accelerate in the past two years since you originally started writing about secret cyborgs?
Ethan Mollick: 08:35 We have some evidence on this. Over 50% of Americans have said they’ve used AI at work, and probably more actually have. They self-report that on about a fifth of the tasks they use AI for, they are seeing a three-times performance improvement. That’s the self-report; whether that’s true or not, it’s hard to know. What’s slowing that down in organizations is that we don’t have the processes needed to make use of it. What do you do if you’re running agile development and someone gets all their code done right away? What’s the point of their standup? How do you work sprint planning around that? What do we do about that stuff? Or they’re just not telling you they’re using it, because they’re mis-incentivized.
Jon Krohn: 09:10 Yeah, yeah, yeah. So if these secret cyborgs are in your organization, what can we be doing to surface them and to take advantage of what they’re doing? Maybe have what they’re doing be less secret and be taught to other people, so you have a whole army of cyborgs.
Ethan Mollick: 09:27 Well, this is where the leadership and the lab come in. The secret cyborgs come out of your crowd, the people in your organization doing things. Your leadership needs to incentivize people to actually tell you this. If people think that they’re going to be fired or punished, or that other people will be fired for showing productivity gains, they’re just not going to show you. If they’re working 90% less, they’re not going to want to give that up for free. So leadership needs to think about the incentive plan that puts this into place. And then you need the lab, because you need somewhere for these people to go: either a way to actually work in the lab, or else to say, “Hey, I’ve got this prompt that kind of works and saves me five hours a day. Could you make it good and get it out to everybody?” So it’s not just a one-component piece; you need the other pieces.
Jon Krohn: 10:04 Alright, thank you. Final question for you. It’s always tough to look into your crystal ball and see into the future, but given your position in the market and all the research that you’ve done, you might be able to see into a crystal ball better than most of us. So what do you advise? What can we be doing today to prepare ourselves for the AI capabilities that will be available in the coming years?
Ethan Mollick: 10:28 So I think my advice tends to be very similar, which is: use these models a lot, use these frontier models, figure out what they do. I think you can bet pretty reasonably that they will get better and cheaper, and that means a bunch of thresholds get crossed. But you need to use advanced models. You need to use them for actual work stuff. You need to use them while trying to get them to work, rather than being skeptical about whether they work, and you’ll learn the shape of their frontier as you go. So I don’t think it’s so much about trying to keep up with the news; the models update on their own. Pick one of the big players, use their stuff a lot, and you’ll figure it out.
Jon Krohn: 11:00 Perfect. Professor Mollick, thank you so much for taking the time with us today.
Ethan Mollick: 11:02 Thanks for having me.
Jon Krohn: 11:06 What a terrific episode with the phenomenon that is Ethan Mollick. In it, Professor Mollick covered how American firms gained 20 to 40% of their extra competitive value from better management and why companies that experiment with AI management techniques will dominate the future. He talked about his leadership, lab, and crowd framework for successful AI adoption, where leadership sets incentives and vision, the crowd experiments with AI in their areas of expertise, and the lab refines promising ideas and benchmarks solutions. He talked about how over 50% of Americans report using AI at work, claiming a three-times performance improvement on about a fifth of their AI-assisted tasks. And he gave us his advice for preparing for AI’s future: use frontier models extensively for real work, assume they’ll get better and cheaper, and you’ll learn the shape of the frontier as you go. I hope you enjoyed this awesome conversation. To be sure not to miss any of our exciting upcoming episodes, subscribe to this podcast if you haven’t already. But most importantly, I hope you’ll just keep on listening. Until next time, keep on rocking out there, and I’m looking forward to enjoying another round of the SuperDataScience Podcast with you very soon.