Jon Krohn: 00:00:00 What if I told you there was a quantitative approach to precisely solving many of the most important problems a business faces, allowing profitability to be boosted using data alone, but that very few people even know what this approach is?
00:00:12 Welcome to the SuperDataScience podcast. I’m your host, Jon Krohn. Today we’ve got the brilliant quantitative mind of Jerry Yurchisin on the show. Jerry is a senior data science strategist at Gurobi, the company rapidly solving complex real-world problems for most of America’s largest enterprises, allowing them to automate decisions that optimize efficiency and profitability. What’s the trick? It’s mathematical optimization, an approach few data scientists know. In this episode, Jerry spills the beans on mathematical optimization, including free open-source resources to implement it, real-world use cases, and what the future holds for this powerful approach. Enjoy. This episode of SuperDataScience is made possible by Anthropic, Dell, Intel and the Open Data Science Conference.
00:01:00 Jerry, welcome back for your round three on the SuperDataScience Podcast. It’s great to have you here yet again. Jerry, where are you calling in from today?
Jerry Yurchisin: 00:01:12 Thanks. I am still in the greater Washington DC area, specifically Vienna, Virginia. Happy to be back. Third time’s the charm.
Jon Krohn: 00:01:25 It’s going to be a good one. I’m really excited for the topics that we’re covering in this episode. So, for people who will have listened to your two previous episodes, each of which came out about a year apart: your first appearance on the show was episode 723, and the second one was episode 813. And I bring these episode numbers up because you are an expert in what they call mathematical optimization. And we’ve described this in those preceding episodes as a great tool for data scientists and AI engineers to have in their tool belt alongside statistical approaches, alongside machine learning approaches. It provides another major category, another major way of solving problems, and mathematical optimization can solve kinds of problems that machine learning or statistics or AI can’t. And so Jerry, maybe so that people don’t have to necessarily go back to those previous episodes (although if they want a much more detailed intro to mathematical optimization, they should do that), maybe you could give us just a few minutes providing a general overview of what mathematical optimization is.
Jerry Yurchisin: 00:02:41 Yes, highly recommend going back to those previous episodes because I dive into a good amount of detail. But the high level, I guess elevator pitch, for mathematical optimization is it does precisely what you were describing, Jon: it solves just a different problem. If you think about what you’re trying to do with what we can now call sort of classical AI, with traditional machine learning, statistical approaches, those are all trying to take a ton of data and figure out what the future is going to look like. That’s by and large what those approaches are doing, and they do a great job at that. But it doesn’t tell you exactly what actions you should take, what choices you should make, what decisions you should make for your specific system or problem, your complex decision problem that you have. You could make slightly better decisions with that information, but by and large, with massive businesses or even small businesses,
00:03:53 the complexity of the decision problem can be astronomical. And while a little bit of a good prediction can help, there’s just too much decision space to make the right decision, let alone something that’s really optimal. So that’s what mathematical optimization does: it provides essentially a framework for you to take your business problem and distill it down into three main components. One being, what are the actual decisions I can make? What are the levers I can pull? What are the buttons I can push? So for instance, if you’re talking about a supply chain network design problem, what facilities should I open up? Where should I have distribution centers, things like that. I can have one in DC, I can have one in Atlanta, I can have one in Boston, I can have one in Houston. Those are all potential decisions. So you have to have a clear understanding of what all those potential decisions are, and then you need to understand how they can possibly be constrained. So what are your business rules? What are the things that are limiting factors? Obviously it would be great to open up all of these distribution centers to make sure that all of your customers get something immediately. That’s great. But obviously you’re going to be constrained by cost. You’re going to be constrained by people, you’re going to be constrained by maybe regulations or whatever it may be. There’s always business rules. There’s always things that are limiting your options.
00:05:38 But you want to take all of those into consideration with some objective in mind. So we have what we call an objective function. So I want to make my decisions that fit my constraints, but I want to do so while minimizing my costs, while maximizing my profit, while minimizing my carbon footprint, while doing all these sort of things that you want to do, you sort of want to push some objective as high up as you can or as far down as you can while making sure that you respect the rules that you have in place. And then the output is the values of those decisions. What should I be doing? How much of product A should I make at this manufacturing facility? How much should I ship of that to this distribution facility? How much of that should I ship to the individual customers? Things like that.
00:06:32 All of those are the types of decisions that businesses need to make, and that’s what mathematical optimization does. And you might be thinking, well, okay, that doesn’t sound all that complicated. But if you’re producing hundreds of products, let alone thousands or maybe tens of thousands of products, and you have all of these potential options of where they can be made, the source material can come from all these different places, and then there are all the ways that you can ship things, the amounts and quantities that you can ship and all this sort of stuff, you can sort of see how a couple of easy decisions, once you really factor in the true scale, become massive and become just impossible for a person to do, but also impossible for something that’s just made to predict a little bit of the future. It becomes impossible for those types of tools to do this well.
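To make the three ingredients Jerry describes concrete, here is a minimal gurobipy sketch of a toy distribution-center problem: binary "open this site" decisions, continuous shipping decisions, demand and capacity constraints, and a cost objective. The sites, costs, and demands are invented for illustration and are not from the episode.

```python
import gurobipy as gp
from gurobipy import GRB

# Toy data: every number here is made up for illustration.
sites = ["DC", "Atlanta", "Boston", "Houston"]        # candidate distribution centers
customers = ["c1", "c2", "c3"]
fixed_cost = {"DC": 120, "Atlanta": 100, "Boston": 110, "Houston": 90}
capacity = {"DC": 60, "Atlanta": 55, "Boston": 50, "Houston": 70}
demand = {"c1": 40, "c2": 35, "c3": 30}
ship_cost = {(s, c): 1.0 + 0.1 * (i + j)              # per-unit shipping cost
             for i, s in enumerate(sites) for j, c in enumerate(customers)}

m = gp.Model("toy_network_design")

# Decisions: open a site (yes/no) and how much to ship from each site to each customer.
open_site = m.addVars(sites, vtype=GRB.BINARY, name="open")
ship = m.addVars(sites, customers, lb=0.0, name="ship")

# Constraints (business rules): meet every customer's demand...
m.addConstrs((ship.sum("*", c) == demand[c] for c in customers), name="demand")
# ...and only ship from a site if it is open, up to its capacity.
m.addConstrs((ship.sum(s, "*") <= capacity[s] * open_site[s] for s in sites), name="capacity")

# Objective: minimize fixed opening costs plus shipping costs.
m.setObjective(
    gp.quicksum(fixed_cost[s] * open_site[s] for s in sites)
    + gp.quicksum(ship_cost[s, c] * ship[s, c] for s in sites for c in customers),
    GRB.MINIMIZE,
)

m.optimize()
if m.Status == GRB.OPTIMAL:
    print("open:", [s for s in sites if open_site[s].X > 0.5])
    print("total cost:", m.ObjVal)
```

The output is exactly what Jerry describes: the values of the decisions, that is, which sites to open and how much to ship where.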
Jon Krohn: 00:07:36 Yeah, thanks for that brief overview of what mathematical optimization is; it’s an invaluable tool for people to learn. Do you still have the burrito optimization game up and live?
Jerry Yurchisin: 00:07:49 It is up. It is live. It is a great way to understand the complexity of optimization. So I was talking about all of this supply chain, blah, blah blah, and how decisions can be very complex. So yeah, a couple years ago we made a game. So if you go to burritooptimizationgame.com or just sort of search burrito optimization, if you just
Jon Krohn: 00:08:17 Search
Jerry Yurchisin: 00:08:17 Burrito optimization,
Jon Krohn: 00:08:18 We’ll put a link in the show notes too for sure.
Jerry Yurchisin: 00:08:20 Perfect. Yeah, there’s a quick little game where you have essentially two types of decisions that you can make. You have a fleet of food trucks that all serve burritos, and you have two types of decisions: you need to decide the number of trucks you want to use to feed a city full of hungry people during lunch, and where do you place them? So two pretty basic decisions. I want 10 trucks, or I want three trucks, or I want just two trucks or one, and where to place them in these discrete locations, like at this intersection or this intersection, or on this street or in front of this building. So super basic, but essentially what the game does is it allows you to sort of test things out and move things around, and you sort of see, if I place this many trucks at this location, this is what my profit’s going to be.
00:09:12 If I mess with things a little bit, here’s how it’s going to change my profit. So you can tweak all of this stuff manually, and then you click, okay, I think this is my solution, I think this is optimal, can’t get any better than this. Boom, you click solve. And then what happens is, on the backend, Gurobi will solve the problem and determine the optimal number of trucks and the optimal locations and compare your solution. Because putting out infinite trucks may seem like a good idea, but each of those is going to have a fixed cost, so you can’t just put a truck everywhere; it costs lots of money. And then where to place that limited number of trucks is also a tough choice, because you don’t know exactly, okay, if I move it one spot over, what’s my increase? That is kind of tough to see manually, but when you have an algorithmic approach, then you sort of trust the output a little bit better.
00:10:16 So that’s the idea of the game: okay, let’s see if you can do better than optimization. And it goes day by day in terms of the number of rounds. So you start with a very, very super simple problem, and I would say maybe 20% of the people get the optimal solution. They can match optimal on the first super simple day. And then once you get past that, it’s impossible. So it just sort of shows that gut intuition, visual reasoning, all these sorts of things are not the best way to be making large scale decisions, particularly when, if I’m a massive company and I shave off 2% of my fuel costs, that’s a massive, massive savings. So it just shows the complexity of decisions, shows that the way that we’ve been approaching these decisions is probably not the best way. And again, it shows the difference between the predictive nature of machine learning and statistics and stuff and the prescriptive nature of optimization, which is, hey, okay, what am I supposed to do in order to get the best outcome, given I know what the future may look like?
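For flavor, here is a heavily simplified sketch in the spirit of the burrito game: binary "place a truck here" decisions, a fleet-size limit, and a profit objective. It is not the game's actual model (the real game couples locations through shared customer demand, which is a big part of what makes it hard), and every number below is made up.

```python
import gurobipy as gp
from gurobipy import GRB

spots = ["corner_A", "corner_B", "plaza", "office_park", "stadium"]
revenue = {"corner_A": 500, "corner_B": 420, "plaza": 800, "office_park": 650, "stadium": 900}
truck_cost = 450      # fixed cost of fielding a truck
max_trucks = 3        # fleet-size limit

m = gp.Model("burrito_trucks")

# Decision: put a truck at this spot, yes or no.
place = m.addVars(spots, vtype=GRB.BINARY, name="place")

# Constraint: you only have so many trucks.
m.addConstr(place.sum() <= max_trucks, name="fleet_size")

# Objective: revenue captured at the chosen spots minus the fixed cost of each truck.
m.setObjective(gp.quicksum((revenue[s] - truck_cost) * place[s] for s in spots),
               GRB.MAXIMIZE)

m.optimize()
print("place trucks at:", [s for s in spots if place[s].X > 0.5])
print("profit:", m.ObjVal)
```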
Jon Krohn: 00:11:36 Yeah, it’s a fun game to play. I’ve played it many times and it’s intuitive. It really gives you a sense of these kinds of optimization problems and how they differ from machine learning problems or statistical problems, and how it gives you all these levers for maximizing some objective, like you said at the beginning of the episode, like maximizing profits and/or minimizing climate change impact, all these kinds of things simultaneously. And even though the burrito optimization game is literally a toy example, it still shows how complicated, even in this very constrained environment, how difficult it becomes to make decisions as a human that even remotely approximate what the optimal solution would be. And then like you said, in real life you could have tens of thousands of products, and no human can look at a spreadsheet and just be like, cool, let’s build a store in Auckland, New Zealand. That’s what we need.
Jerry Yurchisin: 00:12:37 Exactly. Yeah. And on the note of games, definitely check out the burrito optimization game. I’ll give a little bit of a teaser of a new game, because the burrito optimization game was so awesome. We are currently in development of a new game, which is going to be called Gurobean, and it’s about coffee. It’s sort of like a queuing theory problem of what price should I set in order to get maximum profit? If your prices are way too low, you’re going to be overwhelmed with customers and you can’t serve them all in a nice fashion and things like that, but if you set your price too high, no one’s going to come. So you’ve got to find that right balance in order to keep people interested, but also make money and make sure that you’re actually meeting your costs and stuff like that. So that’s another game that’s in development, that’s a 2026 thing, but keep your eyes out for that as well.
Jon Krohn: 00:13:42 I expect you to be pinging me with the link to Gurobean as soon as it’s out, Jerry. I can’t wait to play. And speaking of Gurobi, we haven’t talked too much about your company yet. Gurobi, G-U-R-O-B-I, which a lot of people, I mean if people have been listening to this podcast for years, then obviously they heard you in the two preceding episodes that you were on and they might know a bit about Gurobi, and a lot of people in our industry might know Gurobi, probably more than if you walked up to people on the street and said, have you heard of Gurobi? But Gurobi is a really interesting company because it’s not like Nike or Lululemon or some kind of consumer product company like Apple where you’re selling direct to consumers, so everyone’s kind of aware of it when it’s a really big company. Gurobi is on the scale of those kinds of companies in terms of revenue and company size, but it’s a B2B company, and so some people haven’t heard of it, even though it is gigantic in our industry.
00:14:46 I think it’s fair to say, and I dunno if you can correct me, but I think Gurobi Optimization is the biggest mathematical optimization company in the world. And it is certainly, I know we’ve talked about in previous episodes how a crazy proportion, the vast majority, if you look at the top hundred or the top 10 companies in the United States by market capitalization, Gurobi is used by almost all of them, or a crazy proportion of them. So it’s a really important company. And interestingly, this year at NVIDIA GTC, which is NVIDIA’s big annual conference, the NVIDIA CEO Jensen Huang name-dropped Gurobi in his big keynote. Do you want to tell us a bit more about that, Jerry?
Jerry Yurchisin: 00:15:35 One of the reasons why, and if you go back to another episode, I probably talk about this a bit as well, Gurobi specifically is used by all of these top companies, all these massive companies, all of these sort of Fortune 500, whatever Fortune N: a significant proportion of them are using Gurobi, and if not Gurobi, they use optimization as well, but maybe they just don’t have the right problems for us. Part of the reason you don’t hear about that is people like to keep it a secret; we’re a differentiator. And you don’t want to tell your competitors exactly what you’re doing to save tons on costs or to build productivity or build efficiency or something like that. You don’t want to always talk about that. But specifically with NVIDIA, we’ve had a nice sort of partnership with them and we still keep in close contact.
00:16:44 Our executive team went out to their headquarters maybe a couple months ago to chat things up and see what the future can hold between us as a collaboration. But specifically with the keynote and everything, NVIDIA released an open source solver. A solver is the name we use for Gurobi and for products like us: a mathematical optimization solver. They released an open source solver, and we worked with them and they worked with us to have some learnings about that. It’s called cuOpt. And so it’s a library that actually solves similar problems to Gurobi, but it works specifically on GPUs. And that’s actually where we’re going a little bit into that space as well, to see how math optimization can work on that hardware. Because with the mathematics behind mathematical optimization, we sort of have this internal saying that GPUs hate pivoting.
00:17:54 And what I mean by that is there’s a specific mathematical operation that happens when you solve these types of problems, mathematical optimization problems, with traditional algorithms, called pivots, and it’s just not great for GPUs. It’s not highly parallelizable, it just doesn’t fit that mold very well. So we’re trying to see how we can leverage other algorithms which have been coming out, other techniques or other approaches, to really take advantage of the massive power that GPUs can provide computationally. Traditionally, CPUs were the best fit for the types of calculations, the types of mathematical operations that were happening, but now we’re seeing that there is a space for GPUs, and it’s specifically if you have what we call a super large linear program, a super, super large LP is what we say.
00:19:04 And a linear program is all the stuff that I was talking about at the beginning: your decision variables, your constraints, your objective, but all the relationships are linear and all of your decision variables are continuous, meaning they can take any value in a range, sort of between A and B. A lot of decision variables for supply chain and other types of problems are discrete decision variables, yes/no, on/off type things, but for these really large scale LPs, linear programs, we have been seeing that GPUs, and the algorithms that can run on them, are providing some significant benefits and really reducing solve times for those problems. So we’re actually starting to incorporate that with the Gurobi product. We’re asking our customers who have these problems to send them in so we can help them tune it, understand everything.
00:20:08 So it is a new space that we are diving into, that other companies like NVIDIA are diving into. We’re having partnerships with them; obviously we’ve released a couple of blog articles together highlighting the advantages and stuff like that. You can watch a really cool webinar where one of the founders of the company tested all this stuff out himself, so you can see the benefits of using GPUs to solve these optimization problems firsthand and get his take on it. So yeah, it’s a really exciting thing for us to be getting into. It’s new, it’s really challenging our R&D team. They’re super excited to keep diving into this. We’re excited about what the future holds and we’re hoping that we can maintain a partnership and just keep pushing the frontier of optimization. We don’t want to leave any stone unturned.
Jon Krohn: 00:21:10 Fantastic. Yeah, cool that you found some ways to integrate GPUs into real world applications of mathematical optimization. And it’s interesting you mentioned there how NVIDIA has this cuOpt; for people who are familiar with GPUs and NVIDIA, you’re already familiar with CUDA, and this has the same opening two letters and then “opt” for optimization. Yeah, so you talk about that open source project, but Gurobi also does open source packages. And in addition to that, something that I can’t believe I didn’t mention earlier is that you personally, Jerry, you’ve done tons of tutorials and notebooks of code that are available for free to folks. And so I’ll have a link to those in the show notes. But if people have been listening to this episode and thinking, oh, mathematical optimization, it’s something I should probably be learning, it might sound like Gurobi is this pay-to-play platform and I maybe can’t get started with mathematical optimization unless I have this big enterprise Gurobi license or something. But that isn’t right. They can actually just be going today, right now, to use free resources that you’ve created and get going on learning mathematical optimization, right?
Jerry Yurchisin: 00:22:28 Yeah, by the time I’m done with this sentence, you can open up a notebook and see optimization firsthand. That’s been my main focus since joining Gurobi almost four years ago. It seems like an eternity ago. My role is to help bring optimization to the data science and AI community. And part of that was, we need to make learning resources that data scientists can digest quickly and understand the benefits of easily and just take off and go with it: notebook examples, a lot of online training videos and things of that nature. That’s where I’ve been focusing a lot of my time. And essentially, I mean we could put the URL everywhere, but it’s easy to remember: gurobi.com/learn is our new place for anyone to go who wants to learn mathematical optimization with Gurobi.
00:23:40 And you can go to that website, you can see a bunch of videos of me and other people talking about the basics of optimization, intermediate level stuff. Anywhere you want to go, you can get what you’re looking for and get started. And essentially we provide a very small scale free license for online learning. So you just pip install gurobipy, which is our Python package and by far the most utilized API we have, just pip install gurobipy. You can use that for as long as you need, but there is a size limitation on the problems. You can’t go and solve the problems I was talking about before, where you have millions or billions or trillions of decisions to make. You can’t do that, but you can get started with a smaller scale problem that can very accurately reflect the actual business problem you’re trying to solve.
00:24:49 It just might not be at the right scale necessarily, but it is still kind of like a mathematical twin. People like to use the term digital twin; I like to call mathematical optimization models a mathematical twin. It is a mathematical representation of whatever problem or system you have. So you can take your business problem, represent it in a little bit of math, code up that math in Python, call the optimize function, and then boom, out comes the optimal answer. And yeah, you can do that right now. And there’s tons and tons of stuff that we put out there to help you along the way. I can’t even list them all, I’m trying to think of all of them. There’s too many.
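As a rough picture of that quickstart path Jerry describes, here is a minimal sketch: pip install gurobipy, write down a tiny model, call optimize. The model itself is a placeholder, and the size-limited free license that ships with the package is enough for problems of this size.

```python
# pip install gurobipy
import gurobipy as gp
from gurobipy import GRB

m = gp.Model("quickstart")
x = m.addVar(name="x")                     # continuous, >= 0 by default
y = m.addVar(vtype=GRB.INTEGER, name="y")  # a discrete decision

m.addConstr(2 * x + y <= 10, name="rule1")
m.addConstr(x + 3 * y <= 12, name="rule2")
m.setObjective(3 * x + 4 * y, GRB.MAXIMIZE)

m.optimize()                               # "boom, out comes the optimal answer"
print(x.X, y.X, m.ObjVal)
```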
00:25:51 But yeah, specifically, if you have access to Udemy courses, we put out a four-part course called Introduction to Optimization Through the Lens of Data Science. Big long title, which must mean it’s a really, really impressive and great course, which is very true. We teamed up with Georgia Tech and one of the great professors there, Dr. Joel Sokol, who runs the Master’s in Analytics program at Georgia Tech. So you can go through that and get tons of learning resources there. If you really, really like listening to me specifically, and I don’t annoy you after an hour or so, we have YouTube playlists, there are three of them now. There’s what we call Opti 101, Optimization for Data Scientists 101, and then we have a 201 and a 202. So we have three of those, and we’re going to be doing a 301 teaser at the end of November as well.
00:26:57 So we’ll have a four-part series of optimization modeling and all. And it’s not just, here’s a notebook, go have fun. We have a great partner, a consulting company named Decision Spot, and one of my best friends now in the optimization world, Asan; he’s a consultant, he does a lot of optimization modeling for customers. And so we bring in his perspective as well: not just, here’s a notebook, go have fun, but here are the actual real business problems that we are solving, here are the things you really need to consider. So you’re getting not just a Gurobi perspective, but also the perspective of another optimization expert who’s not under the Gurobi umbrella. So you can learn from those folks as well. We always partner with great people like that. So yeah, there’s always more people to chat with.
Jon Krohn: 00:28:01 Thanks for all those resources, Jerry, for our listeners, for them to be able to get into optimization right away. And I want to highlight something that you said there that I think is really interesting, which is you said that they can use say a Python method to call the Gurobi mathematical optimization solver and find the optimal solution. And that word is really interesting to me because mathematical optimization, unlike statistics or machine learning, is it correct to say that you can provably find the optimal solution, not approximate it, but actually find it?
Jerry Yurchisin: 00:28:39 That’s the big differentiator, and that’s why we call it mathematical optimization: because if you’ve lived in the world of mathematics for any bit of time, proofs are important. And that’s precisely what mathematical optimization provides. If you set up your decision variables, you set up your constraints, you set everything up like this, it’s a kind of specific but very flexible framework, so you can model pretty much any problem like that. If you do that and you apply these algorithms, then what will come out is the mathematically guaranteed optimal solution. And there are cases, and our customers have this, where their problems are super, super large and super complex and we can’t necessarily prove that a solution is optimal. What we also provide there is a worst case bound on how far off you can be, which is really, really interesting; no other approach can do that.
00:29:49 So we can say, okay, there’s something that we call a MIP gap. MIP stands for mixed integer programming, which is the typical problem type that people use, and there’s a gap. And the gap is, here’s your current solution, and here is the mathematical best case that we have. And essentially what happens is, over time, the solutions we find go like this, and this other theoretical value goes like this, and once they meet, that’s where we say, okay, this is provably the optimal solution, because we have an upper bound and we have the solution and they come together. But even if they haven’t met yet, we can still tell you, oh, you’re no more than 5% off of optimal, no more than 2%, no more than whatever. That in itself is super, super valuable, I think, just to know how far off you can be.
00:30:54 And again, that’s not something that any other approach can say. Even if you use other optimization algorithms, something like simulated annealing is an optimization process, but it does not have this gap, does not have this guarantee when it converges. It’s one of those processes where, okay, my solution hasn’t changed by this amount over my last thousand iterations, so I’m going to call it quits, we’re good. That is something that we sort of call local optimization. If you’re minimizing a function, you might get into a little bit of a valley, but there could be a better valley right over there that’s lower for your costs, and you can get stuck in those things. And those types of algorithms, while super useful and beneficial, and I’m not saying that they’re bad or anything, just say, okay, yeah, I got something that looks like a good solution, but I can’t prove it. What mathematical optimization can do is also take into consideration those other valleys and go from there and say, okay, this is the best valley.
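The MIP gap Jerry describes is exposed directly in gurobipy: you can tell the solver how small a gap you are willing to accept, and after solving you can read the incumbent, the best bound, and the remaining gap. The little knapsack-style model below is just a placeholder to have something to solve.

```python
import gurobipy as gp
from gurobipy import GRB

m = gp.Model("mip_gap_demo")
x = m.addVars(50, vtype=GRB.BINARY, name="x")
m.addConstr(gp.quicksum((i % 7 + 1) * x[i] for i in range(50)) <= 60, name="knapsack")
m.setObjective(gp.quicksum((i % 11 + 1) * x[i] for i in range(50)), GRB.MAXIMIZE)

m.Params.MIPGap = 0.02     # stop once the incumbent is provably within 2% of optimal
m.Params.TimeLimit = 10    # or stop proving after 10 seconds, whichever comes first

m.optimize()
print("best solution found:", m.ObjVal)
print("best possible bound:", m.ObjBound)
print("remaining gap:", m.MIPGap)   # 0.0 means the solution is proven optimal
```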
Jon Krohn: 00:32:10 It’s interesting that you talk about peaks and valleys there in terms of optimization. Is another word for that topology, Jerry?
Jerry Yurchisin: 00:32:17 You can call it that. Yeah, because essentially, describing these constraints sets up what we call a feasible region. And if you think of it in a linear sense, it’s kind of like a box. It could be a square, but then you start slicing that box in a bunch of different ways around the outside with constraints, and you get this sort of complex shape. So there is a lot of geometry that is important to mathematical optimization. That’s actually precisely why we can make the guarantees that we have: because the geometry of the problem and the math that goes along with it allow us to say, okay, because of all of these proofs and theorems and all this mathematical theory, if this happens, then we know that this is the optimal solution.
Jon Krohn: 00:33:16 Very cool. And the reason why I bring up topology, you already know why, Jerry, but yesterday at the time of recording, so about a month before people hear this, we heard on social media from a longtime listener to the podcast and longtime social media follower, a guy named Roland Phillips, based actually, I guess, not too far away from you in Virginia, in a place called Roanoke.
Jerry Yurchisin: 00:33:40 Okay, yeah, that’s a little not super close, but pretty close.
Jon Krohn: 00:33:44 But yeah, so Roland Phillips out of Roanoke, he is a chartered financial consultant and he has a degree in mathematics. He does a lot of work in analytics and automating business processes. And he commented on a recent episode, episode 923, which I guess is now about a month ago relative to when people are hearing this episode. It was with Amy Hodler on graphs, on graph networks, and I guess she talked about topologies in something related to graphs. And yeah, Roland wrote that he really liked that episode, and then he found it interesting that topology, that kind of word, is being mentioned more and more. He says, I think the first mention with Jerry was episode 813, so one of the episodes of yours that I mentioned earlier. He says that you talked about topology in your previous appearance on the show. Yeah, yeah. It’s interesting. I guess, I dunno, I’m just tying a bunch of threads here together.
Jerry Yurchisin: 00:34:50 I think it’s a good point. I think there’s a little bit more of, hey, we need to take mathematics seriously in this, and there is room for it. We talk about graph structures and stuff like that. That’s something that is also prevalent in mathematical optimization when you’re solving these really complex problems where you’re trying to understand these yes/no, on/off decisions, or you need your decision variables to actually take discrete integer values. There’s a lot of that type of stuff that happens as well. And so yeah, it just sort of shows that understanding the mathematics behind things is never a bad idea. Sometimes it’s hard, sometimes it’s maybe not quite as useful, but it doesn’t hurt. And pretty much all of these cool breakthrough things that we had with deep learning 15 years ago or something like that, and now with LLMs and everything, there’s a mathematical process behind them. You don’t really need to know everything about it, but it’s there. And if you think about the history of all this sort of stuff, the history of the neural network, it was in the forties when that kind of stuff was being
Jon Krohn: 00:36:27 The fifties, I think. Yeah.
Jerry Yurchisin: 00:36:28 Oh, maybe the
Jon Krohn: 00:36:29 Forties, exactly how I think about it. Yeah.
Jerry Yurchisin: 00:36:32 But that’s
Jon Krohn: 00:36:33 When almost a century,
Jerry Yurchisin: 00:36:34 That’s when the math behind that was being invented. And the same thing is true with optimization. That was actually around the same time, when the breakthrough algorithms for mathematical optimization were developed, one being called the simplex algorithm, which, again, if you look into it a little bit, you’ll see why geometry is important in that algorithm. But yeah, that stuff was developed a long, long time ago. Particularly for something like deep learning, it took the massive amounts of data to make it useful and the massive amounts of computing power to actually have that be a breakthrough. So yeah, it’s interesting what may happen next with stuff that has long been cast aside, like, eh, whatever, can’t do that. But then with new technological breakthroughs, who knows?
Jon Krohn: 00:37:29 And speaking of neural networks, deep learning, new technological breakthroughs, something that is completely new at Gurobi, as far as I’m aware, and certainly something we haven’t talked about in any way in your preceding episodes, is the use of LLMs, of large language models. And so for example, I believe that you may now have, or Gurobi will soon have, a custom GPT in the ChatGPT store.
Jerry Yurchisin: 00:37:54 So right now we actually have technically three custom GPTs in the ChatGPT store. One of them is called Gurobot, which we now call Legacy Gurobot. And I’ll talk about what the new Gurobot is in a little bit.
Jon Krohn: 00:38:11 That name will never go out of fashion.
Jerry Yurchisin: 00:38:13 Yeah.
Jon Krohn: 00:38:14 The new Gurobot, that’s set.
Jerry Yurchisin: 00:38:16 Yep. We actually had a company vote about what we should call the new one, because we had the old Gurobot. We had other names and stuff like that, and new Gurobot won hands down. But the other things that you can find in the ChatGPT store are what we call the Gurobi AI Modeling Prompt Engineer and Modeling Assistant. And I highly recommend, if you don’t want to just take my word for all of this and you’re wondering, okay, how can mathematical optimization work for my specific problem, just go fire up that custom GPT. It’s meant to be a very conversational way of understanding whether mathematical optimization is the right approach for your problem. And then if it is, let’s have a little conversation back and forth to really develop the problem statement.
00:39:17 That is the core. That’s the most important thing with a mathematical optimization problem in practice: really understanding your problem and really understanding, again, what are all the levers I should have access to pull, what buttons can I push, what are all the things I actually have control of. And one of the things that leads to failure in optimization projects is, oh, I didn’t consider this, and the optimal solution is something that doesn’t look like what I was expecting, and it’s because I didn’t give it all the information, I didn’t model it properly. And that’s what this prompt engineer is really there to help you do: really understand your problem and come up with a very concrete and thorough problem statement. So by the end of that, you should clearly know what your objective is.
00:40:12 I want to minimize my costs, or I want to minimize customer churn or something like that. And here are my sets of constraints, and here are the decision variables, the decisions that I can make to do that. That’s what that’s for. And then we have the Modeling Assistant, which is made to take that problem statement and then actually give you a first stab at the mathematical formulation. Again, the process for all of these, for mathematical optimization, is: take your business problem, understand it, transform it into math, which is essentially sets of linear, or now nonlinear, equations and inequalities and things like that, and then transform that into code. That’s what the Modeling Assistant is supposed to do, the last two of those. It gives you a first stab at the formulation, the math part, and then also the code in gurobipy.
00:41:13 And if the problem is small enough, it’ll actually run it and solve it. We’ve attached essentially a Python wheel that has our solver, that limited solver capability, attached to it, so you can actually see it in action. So that’s a great place to start really understanding how optimization can be used, particularly in the context you’re most interested in. Because you can hear me talk about supply chain or logistics or whatever it may be, these traditional fields where optimization is used day in and day out by these massive companies, but there are also other things, like, okay, well I want to know about marketing mix optimization. I am in a marketing department. How can I best spend my budget to maximize clicks or something like that? Boom, we can talk about that, or whatever problem you’re trying to think about. It’s a great way to go about it. And all that’s in the ChatGPT store, and now we’re doing our own stuff as well with the new Gurobot.
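To give one flavor of the kind of model that last step might produce for the marketing-mix question Jerry mentions, here is a small hand-written sketch: split a budget across channels to maximize expected clicks. The channels, click rates, and caps are invented, and a linear clicks-per-dollar response is a big simplification, so treat it as an illustration rather than the assistant's actual output.

```python
import gurobipy as gp
from gurobipy import GRB

channels = ["search", "social", "display", "email"]
clicks_per_dollar = {"search": 2.1, "social": 1.6, "display": 0.9, "email": 3.0}
max_spend = {"search": 40_000, "social": 30_000, "display": 50_000, "email": 10_000}
budget = 80_000

m = gp.Model("marketing_mix")

# Decision: how much to spend on each channel.
spend = m.addVars(channels, lb=0.0, name="spend")

# Constraints: stay inside the total budget and each channel's cap.
m.addConstr(spend.sum() <= budget, name="budget")
m.addConstrs((spend[c] <= max_spend[c] for c in channels), name="cap")

# Objective: maximize expected clicks.
m.setObjective(gp.quicksum(clicks_per_dollar[c] * spend[c] for c in channels),
               GRB.MAXIMIZE)

m.optimize()
for c in channels:
    print(f"{c}: ${spend[c].X:,.0f}")
print("expected clicks:", m.ObjVal)
```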
Jon Krohn: 00:42:15 Excellent. So the idea with Gurobot, whether it’s in the ChatGPT store or outside of it, is that we’re using large language models to make it easier to do mathematical optimization. So we’re taking our business problem and the constraints that we have, maybe expressing that in natural language, and getting a head start on all the mathematical definitions that are essential to getting our optimization solver running.
Jerry Yurchisin: 00:42:39 And particularly with the ChatGPT store stuff and with the new Gurobot, which is something that we host, it’s essentially built on AWS with Claude running in the background, more or less. And so we have that, and with either of those, you start with a vague business problem and it helps you develop it. And then, obviously, I don’t think I need to preach to the choir here about how you need to take the output of LLMs with a grain of salt and with caution and all that sort of stuff. We all know that not everything that comes out is perfect. But particularly for this audience, who may be learning optimization and stuff like that, it’s really, really dang good. And it will give you really good responses, really correct model representations of your business problem, good code right out of the gate.
00:43:50 And that’s where we’re moving towards as a company: let’s build these tools, and Gurobot is the first of them, to really help lower the barrier to entry, to speed up the process of defining your model, of doing the modeling, doing the code writing and stuff like that. Really speeding that up. And where Gurobot sits now, it’s best if you’re a little bit of, I don’t want to say an expert, but if you have some experience with our API and stuff like that. Because it’s really calling on our material, our internal and external material, all that sort of stuff being used on the backend, it really provides a great way to do all the modeling, to do all of the coding, and to do things like that in a way that is just going to speed things up significantly. And we’re really excited about where we’re going to be going in the future, and there’s going to be a lot more that we’re going to hopefully put out there.
Jon Krohn: 00:45:05 Nice, really cool application, integrating LLMs essentially into the workflow of doing a mathematical optimization problem, making things easier for people in the flow of work, which is one of those great AI use cases. It’s nice to see it there for you folks at Gurobi. Something else that you have for us, I think, that’s completely new since your previous appearances on the show are some interesting new real life use cases of mathematical optimization. You have of course alluded to some of them. We’ve talked about the burrito optimization game as a toy example, or the new Gurobean coffee example. You’ve mentioned that application areas like supply chain and logistics tend to use mathematical optimization a fair bit. But I’d love to dig into a few more cool real life use cases that have cropped up in recent months.
Jerry Yurchisin: 00:46:04 So we recently had what we call the Gurobi Decision Intelligence Summit. It’s our fancy sort of event that we put on ourselves. We invite customers, we invite prospects, we invite anyone who’s interested in learning more about optimization. We invite you to come. This year’s just finished up a couple weeks ago; we were in Vegas. And so we’re bringing in super cool customers that are doing really cool things. And we had a couple talks that I thought were,
Jon Krohn: 00:46:41 I like how you had to pause there because you’re like, company names go into your head and then you can’t say them. So we get some pause, really cool companies,
Jerry Yurchisin: 00:46:52 But I will mention a couple. There’s a couple that I can mention; there are some that I can’t, sadly. Again, we’re the best kept secret in decision making. I guess if you’re going to come away with anything: optimization and Gurobi are the best kept secret, because people don’t like to talk about us. Why would you spill the beans? But there were two presentations that I really liked. One was from Toyota, and they were talking about how they used optimization for planning of vehicle manufacturing. So they’re getting demand forecasts of, okay, this is the number of this type of vehicle that I expect customers will want in this region at this time. So if you’re thinking about that by region over a certain amount of time, across the whole fleet of Toyota vehicles that they offer, that’s a pretty big problem.
00:47:56 And now you’re thinking about, okay, manufacturing that. How can I best manufacture these cars at minimal cost and everything? You sort of see all of the small things that trickle into making a car. It’s a very complex process. So they ran through how they’re building tools, and there is an aspect of LLMs and natural language in this as well. An optimization team built an optimization model, but they allowed their planners to interact with it and do scenario tests and what-if analysis on all of these sorts of things. Like, what if the tariffs on this particular thing go up from 0% to 10%, and then next week they’re 80%, and then the week after that they’re back down to 10, and then sometime later they’re 30%?
00:49:06 It’s an insane time to try and plan long-term manufacturing right now. It’s insane with all this fluctuation, particularly of tariffs. But they had a tool that had optimization in the back and an LLM sort of interface where the planners can really interact with this and say, okay, well what if tariffs are this? Or what if my supply of this thing was cut in half or something? Interacting with the optimization model in a very natural way, getting all of these cool scenarios, and really being able to understand, okay, what if this happens? What should I be doing? How should I be manufacturing things at some macro level and really making decisions that will impact the company? It is just providing a whole new way to access optimization for people who are not going to be writing the models, they’re not going to be doing any of the Python coding, but these are the people who are making the decisions, who have all this SME expertise, all this business expertise, all this foundational knowledge of, I actually know how to plan manufacturing for cars and stuff like that.
00:50:25 I know all of this, I don’t know optimization. But now this group at Toyota, they did an exceptional job of blending the two and letting people interact with that. So that was one super cool case. And the other one that I can mention is with Total Wine. And it’s a similar-ish problem, because it’s kind of supply, but essentially, if you think about what a Total Wine store is, it’s a massive store that has all the beer and wine that you could ever want, anything you’re interested in finding. And depending on state laws, there may be liquors and stuff like that too.
Jon Krohn: 00:51:11 And, here I thought it was a platform for getting complete complaints.
Jerry Yurchisin: 00:51:15 I love it. But what I really liked about their story is, if you think about the complexity of decision making that can happen within something like that, it’s like, okay, well, I buy a bunch of beer, I buy a bunch of wine. But you’re thinking again about complexities, and in the presentation the presenter was talking about, okay, I want to buy just one brand of beer or something. The choices that you have in just that single brand are pretty massive. Am I buying massive cases? Am I buying individual six packs? How many am I buying? Cases of 24? Cases of 18? All that sort of stuff. When am I getting them? How often are they coming? How often are they arriving, and everything like that. And then now you think about that for pretty much every beer that exists, particularly in North America, or are you importing them, and every wine.
00:52:18 It’s a massive, massive problem and not easy to solve. But what I really liked about this one is that the Toyota folks that I just mentioned, and a lot of our customers, have what we call operations research expertise in-house. Even in the Toyota example, the person who presented it did not have the traditional background of our common customer. He is an AI person, but he had some mathematical chops to him, so he took to it a little bit faster than I think some would. But the Total Wine folks were a team of data scientists. They were people who did not have a traditional operations research or industrial engineering background. Those are some of the common degree types for people who have been exposed to linear programming, mixed integer programming, the mathematical optimization things. That’s where you typically learn that.
00:53:20 But these were people who were, I’m a data scientist, I’ve been doing that for a decade now. Oh, we have this new problem type that we’re trying to solve, machine learning’s not cutting it. What else can we do? Oh, okay, I’ve learned of mathematical optimization. Now we need to actually do it. And so it was a total success story of taking a team of people who did not really know how to do this right away, understanding their learning, their pain points and stuff like that, understanding what worked for them and what didn’t. It was just a great story to hear. So if you’re listening to this now and you’re like, oh, well, I don’t have time to learn all of this, or I don’t know the benefits, or should I just hire someone, but that’s also complicated and could be time consuming and blah, blah, blah: it can be done in-house. You can build a team that can take care of this, that can do this at scale. And this is why I love working for the company I work for: we don’t just hand you the software and say, good luck, have fun, as long as your check clears, blah, blah, blah,
00:54:41 we’re not going to talk with you. No, we have an exceptional support team that helps you with this. So if you get stuck, not stuck like, hey, I don’t know how to build my model, but like, hey, this is taking a lot longer than I thought to run, or we’re getting these error messages, or we have issues with this or that: when you submit a ticket with us, you have someone with a PhD in optimization or decades of experience who looks at that and thinks, here’s how I can help you. So they leveraged that and they used our support system to really help them, and now they’re saving, I don’t want to mischaracterize the number, but it’s a lot of money, and they’re able to reinvest it. And that’s what’s really great about these optimization projects: yeah, you’re typically saving money, but it gives you an opportunity to reinvest and make things better elsewhere. So those are a couple of really cool customer stories that I was able to hear. And there’s tons more though. Tons, tons more.
Jon Krohn: 00:56:00 Thanks, Jerry. It’s great to be able to get all the detail on these kinds of mathematical optimization projects. It gives us color that we can use to imagine, in our own world, in our own businesses and the problems that we’re tackling, how we could be taking advantage of mathematical optimization too, and, as you say, get some savings and reinvest that into some more growth somewhere else. Now, looking to the future a little bit, in episode 887 of this podcast, we had the global CTO of Dell, John Roese, on the show, and it’s an incredible episode. He’s an amazing individual and a really compelling speaker. Incredible episode, 887. And in it, near the end of the episode, he talked a fair bit about quantum computing and how quantum computing is an inevitability: just like you could track deep learning capabilities over time, eventually you hit this inflection point where all of a sudden there’s tons of real world applications that are commercially viable. You could have modeled all of that for deep learning for years, decades in some cases, like Ray Kurzweil did. And John Roese says that same thing is now happening with quantum computing. Yes, today it is expensive and there are not a lot of real world applications, but just look at the trajectory of where this is going. It is an inevitability that quantum computing will change the world in the coming decades. How does quantum computing interact with mathematical optimization?
Jerry Yurchisin: 00:57:40 It’s something that kind of gets in the way of people doing optimization. The main reason is, if you’ve taken any sort of course on computational complexity and things like that, problem complexity, which I’ve been talking about a lot, you run across terms like P versus NP, problem types that are NP-hard, NP-complete, all these categorizations of decision problems that show how hard they are to solve. And there’s this idea that if something has a certain label, then in today’s computational world it is unsolvable and don’t even try it. And mixed integer programming, which is again the main model type that people use mathematical optimization and Gurobi for, falls into one of those super hard buckets. So people think: it falls in a super hard bucket, this super hard bucket can’t be solved today, so I need to wait for something like quantum. And so there is this sort of misconception in that thought path that I just laid out. What we are here to say is, okay, quantum computing, it may be an inevitability, it may not be.
00:59:17 I am not going to plant a flag on one side or the other on that. But what we do note is that today, and I think everyone would agree, it’s not a viable way to really solve problems. So if you’re a CTO today, or a CEO today or something like that, wouldn’t you like to save a ton of money or be super efficient now instead of waiting for the possibility of this happening in the future? So that’s one of the hurdles we’re trying to get over: you could be doing now all of the stuff that some of these quantum optimization pitches say will happen in the future. Like, oh, you can revolutionize your supply chain in the future with quantum. You could revolutionize your supply chain now with mathematical optimization. And here’s what’s really cool: okay, let’s say that in 10 years, quantum optimization becomes the thing. You’ve already laid the groundwork. You’re not starting from scratch, you’re not missing the boat.
01:00:28 All of the things that you would need to do today to use mathematical optimization, understanding your business case, understanding all of the problems that you can approach with this and all the ones you can’t, understanding all the stakeholders and their involvement, getting buy-in, getting all the data connected to make sure that you’re making the right decision with the right data going into your optimization model, all of that stuff needs to happen regardless of whether you’re using mathematical optimization or quantum. It still has to happen. So why not do it now? Why not get the benefits now? And then if quantum becomes the thing that some people think is an inevitability, you’re just that much better prepared. And if there is this sort of rising tide of quantum, you’ll know exactly where you are and what you can be doing. And then when you actually do prototype or proof-of-concept quantum, you have a reliable benchmark; you have the right number that you should be comparing it to.
01:01:37 You have an optimal state now versus what an optimal state could be with quantum. Because the difference between what we can provide now and what quantum would probably provide is a lot more granularity, a lot more complexity in the way that you model your problem, down to the nitty-grittiest detail. Sometimes that can’t happen with mathematical optimization today. If you’re talking about a supply chain, you’re not modeling individual workers and what they’re moving and doing; maybe with quantum, that is something that you can do. You can get that extra granularity to help you really do things. Or problems that maybe take 10 minutes or an hour to solve with optimization now, maybe you are solving in a second or a millisecond. So the whole thing is: it could be, but why not get started now?
Jon Krohn: 01:02:29 I love that perspective. I had no idea where you were going to go with that answer, and that was definitely not where I expected it to go. But I love that framing: if there are kinds of problems that you could be solving in your organization today that you think could only be solved in the future with quantum computing, that might not be true. Mathematical optimization could be the answer. Really interesting perspective. And I love that you’ve come on this show again, for the third year in a row, for our annual reminder of the importance of mathematical optimization and how it can be such a useful tool to have alongside the other analytical or predictive tools in our tool belts. And Jerry, you probably remember from previous years that I always have the same final two questions for you, and I didn’t really prepare you for the penultimate one today, so hopefully you already have in mind a book recommendation for us.
Jerry Yurchisin: 01:03:29 Yes, I do. I do, and this is pivoting very far from everything else that we’ve talked about today. But for me personally, I have two kids. I have a 4-year-old son and a one-and-a-half-year-old daughter, so that takes up a lot of my spare time. But I read a book a little while ago called A Better Man: A (Mostly Serious) Letter to My Son, by Michael Ian Black. He’s a comedian that I like, which is probably dating me and letting people know how old I am. But it was a great book that I read, and it really put into perspective raising kids, particularly a son, in today’s day and age. So yeah, I highly recommend that as a book for any newer parents out there, or prospective ones: hey, this is something that I’m going to dive into.
01:04:34 Highly recommend that book. It was great for me to read, put some good perspectives on that. So I’m going to pivot from all the math stuff that I usually talk about. One year I recommended a book about the number zero, and then the last one after that showed some Nintendo geekdom of mine; I think it was Ask Iwata, my other book that I recommended. This time, it’s all about parenting, because outside of talking to people like you day in and day out during my office hours, it’s just kids. So that’s on my brain 90% of my life.
Jon Krohn: 01:05:09 That must be wonderful, Jerry. I appreciate the recommendation, and I’m blown away that you remember your recommendations from previous years as well. All right. Final question. This one’s an easy one, a layup. How should people follow you after this episode? Obviously, we’ll have gurobi.com/learn in the show notes for people to get your free tutorials and code and open source resources, to get going with mathematical optimization in their own lives. But where else?
Jerry Yurchisin: 01:05:35 Sure. The best way is probably LinkedIn. That’s where I post a lot of my thoughts on the events that we have, the events that are upcoming, and a lot of stuff about great partners. When I do a presentation with a cool company that we work a lot with, like Nextmv, who operationalize optimization modeling, you’ll get a lot of that information there. And in addition to LinkedIn, I also have a Bluesky account where I post some thoughts here and there about random things, and our company is also on there and we promote a lot of our events there. So that’s another great way to stay in touch with what we are doing and what I’m doing. So LinkedIn and Bluesky are kind of the two go-tos for mathematical optimization and anything that you want to hear about from me, if that’s what you’re still into after listening to this episode,
Jon Krohn: 01:06:36 We really appreciate you coming on the show yet again. I always love these episodes and I’m sure a lot of our listeners do as well. Thanks for all of the new use cases and application ideas related to mathematical optimization this year, and hopefully we’ll be checking in with you again soon.
Jerry Yurchisin: 01:06:55 That sounds great. Happy to be on again, happy to sort of get some information out to your loyal listeners who are a bunch of cool people. And yeah, feel free to anyone to interact and hopefully I’ll see you guys around.
Jon Krohn: 01:07:11 Always a treat to have Jerry Yurchisin on the show. In today’s episode, he covered how mathematical optimization provides a framework for complex decision-making by defining decision variables, constraints and objectives to find provably optimal solutions. He talked about how, unlike machine learning, optimization tells you exactly what actions to take given your specific business constraints and goals, how Gurobi has developed custom GPTs and AI tools that streamline the application of mathematical optimization, and he talked about real world applications of the approach, such as Toyota’s vehicle manufacturing planning system and Total Wine’s complex inventory optimization.
01:07:48 As always, you can get all the show notes including the transcript for this episode, the video recording, any materials mentioned on the show, and the URLs for Jerry’s social media profiles, as well as my own, at superdatascience.com/931. Thanks to everyone on the SuperDataScience podcast team, our podcast manager, Sonja Brajovic, media editor, Mario Pombo, partnerships manager, Natalie Ziajski, researcher Serg Masís, writer Dr. Zara Karschay, and our founder Kirill Eremenko. Thanks to all of them for producing another awesome episode for us today. For enabling that super team to create this free podcast for you, we’re so grateful to our sponsors. You can support the show by checking out our sponsors’ links, which you can find in the show notes. And if you’d ever like to sponsor the show yourself, you can find out how to do that at jonkrohn.com/podcast. Otherwise, help me out by sharing this episode with folks who would love to learn about mathematical optimization. Review the episode on your favorite podcasting app or on YouTube, and subscribe, obviously, if you’re not a subscriber. But most importantly, I just hope you’ll keep on tuning in. I’m so grateful to have you listening and hope I can continue to make episodes you love for years and years to come. Until next time, keep on rocking it out there, and I’m looking forward to enjoying another round of the SuperDataScience Podcast with you very soon.