Jon Krohn: 00:00:00
This is episode number 743 with Piotr Grudzień, co-founder and CTO at Quickchat AI. Today’s episode is brought to you by Gurobi, the decision intelligence leader, and by CloudWolf, the Cloud Skills platform.
00:00:18
Welcome to the Super Data Science Podcast, the most listened-to podcast in the data science industry. Each week we bring you inspiring people and ideas to help you build a successful career in data science. I’m your host, Jon Krohn. Thanks for joining me today. And now, let’s make the complex simple.
00:00:49
Welcome back to the Super Data Science Podcast. Today I’m joined by the brilliant technical founder, Piotr Grudzień. Piotr is co-founder and CTO of Quickchat AI, a Y Combinator-backed conversation design platform that lets you quickly deploy and debug AI assistants for your business. He previously worked as an applied scientist at Microsoft and he holds a master’s in computer engineering from the University of Cambridge. Today’s episode is perfect for anyone who’d like to integrate conversational AI into their business. It should be accessible to technical and non-technical folks alike. In this episode, Piotr details what it takes to make a conversational AI system successful, whether that AI system is externally facing, such as a customer support agent, or internally facing, such as a subject matter expert. He also talks about what it’s been like working in the fast-developing large language model space over the past several years. He talks about what his favorite generative AI vendors are, what the future of LLMs and generative AI will entail, and what it takes to succeed as an AI entrepreneur. All right. You ready for this excellent episode? Let’s go.
00:01:56
Piotr, welcome to the Super Data Science podcast. Thanks for coming on. The people who are watching the YouTube version of this get treated to a beautiful view of Warsaw.
Piotr Grudzień: 00:02:09
What can I say?
Jon Krohn: 00:02:10
I actually thought it was a fake Zoom background or something that you’d engineered into the platform that we use for recording podcast episodes. Such is the kind of thing that somebody with a Cambridge engineering degree could be pulling off, but yeah, it’s real.
Piotr Grudzień: 00:02:27
Yeah. Absolutely. Come visit Warsaw and check it out for yourselves. It’s very cloudy today, but still very beautiful.
Jon Krohn: 00:02:33
Yeah, the winter’s generally dark.
Piotr Grudzień: 00:02:36
Yeah, absolutely. Yeah. It’s not the best time to visit Poland, but I still recommend it.
Jon Krohn: 00:02:41
Nice. All right. Well let’s jump right into the technical topics that we have planned for today. So you’re the co-founder and CTO of Quickchat AI. And so this is a startup that empowers companies to build their own multilingual AI assistant. So how does it work and how do you ensure your platform is accessible and adaptable to various business needs?
Piotr Grudzień: 00:03:02
Yeah. Absolutely. Let me start at the beginning. It’s funny how much the wording and explanations we can use to explain our product to people have changed. Today, everyone has talked to ChatGPT several times, so now it’s really sufficient to say: you’ve tried ChatGPT, you’ve used it for your own purposes, and now imagine integrating ChatGPT within your product and your company processes such that you can have a conversation with a computer that allows you to fulfill tasks, to be more productive, to complete certain actions, and also to have that system as a public-facing AI assistant that external users can use for sales purposes, for customer support and so on. The way we think about it is that it’s a platform that allows you to create conversational AI experiences exactly the way you want so that they can serve your business. It’s no longer a toy. It’s something that you can test from A to Z and really deploy with confidence to your users.
Jon Krohn: 00:04:14
Very cool. How do you distinguish yourselves from other solutions like this? In particular, at the recent OpenAI DevDay they announced the Assistants API. How does your solution fit in against other incumbent platforms, and particularly that API?
Piotr Grudzień: 00:04:37
Yeah. Yeah. That’s a very good question. What we are discovering today, having worked with businesses for a number of years now, since 2020, since GPT-3 first came out, is that it’s a very different challenge to create a system that a business will be confident putting front and center on their website and in their products. It’s a completely different engineering task from creating an AI assistant that’s very simple to set up. All the demos we see on Twitter are literally telling you it will take you 10 seconds to create an AI assistant. But the question we ask ourselves is, okay, our customers are going to use our product for a year, and then they’re going to look back and ask: how much business value did it deliver? How happy am I about the actual tiny details in the conversations that the users are having with the product? What kind of tools does Quickchat or their competitors give me to learn from these conversations, to improve them over time, and to go back to management and show that this product really enhances our business?
00:05:52
So it’s very much a different conversation. I feel like today a lot of developers want to make the setup of conversational AI products extremely simple, but then those tools are lacking the little knobs you need to tune the experience to be exactly what you want. And that’s really what we are focusing on. So sometimes we talk about our platform as the Photoshop of conversational AI, in the sense that we want to have it packed full of features that allow you to really dig in, become an expert in conversation design and testing, and get that experience exactly where you need it. Obviously we’re still only getting started and our roadmap is just packed full of features that we’ll be rolling out continuously. So our product will be very different a year or two from now than it is now, but that’s the general direction. We want to give people the power to fully control their conversational experiences, and those conversation designers … we feel this is going to be a new profession that will start coming up: the people who make computers talk exactly how they want and finally make voice bots or chatbots a really productive partner for you to use every day.
Jon Krohn: 00:07:17
So does that mean that it isn’t just an API? I was talking about the Assistants API that OpenAI now offers. So this sounds like, especially through the Photoshop analogy, it sounds like you have a user interface, so that maybe you don’t need to be a programmer to be using the Quickchat solution.
Piotr Grudzień: 00:07:35
Yes, exactly. Our basic product, which anyone can just go to our website, create an account and start using for free, is exactly a no-code solution that allows you to create your own AI assistant with its own knowledge, integrated with your favorite tools like messaging apps, and you also have a wide range of settings that allow you to tweak the conversation exactly as you like. And that’s a tool that’s available for anyone. You don’t need to be a programmer at all to use it. And of course we are working with a number of companies that have used the tool, used our business setup to get it to exactly what they want, but then there were some custom features that they also wanted, or a custom setup of modules that they wanted us to implement, to really get the quality exactly where it needs to be or implement the exact business processes that they need. And there I would say that Quickchat’s role is not only delivering the software that allows people to really control the conversation, but also teaching them about the business processes they need to take full advantage of introducing conversational AI in their companies.
00:08:55
So if you think about a large organization that switches to AI-powered customer support, putting an AI chatbot on your website as customer support is just the first step. Then you need processes to analyze conversations and gain insights from those. And those insights come in two parts. One is that those insights tell you how to improve the chatbot over time, add more things to its knowledge base or add more capabilities and more actions it can take from within the conversation. And the second one, which I think is really often neglected, is that those conversations that your users are having with the AI are an amazing source of business insights. If you think about it, those are the people who come to you and complain about your product, or they explain what they would like to achieve, and hidden in those conversations are ideas for your future product features. And being able to gain those insights and analyze them at scale, that’s an extremely important process that companies implementing conversational AI solutions will definitely want to implement in the future.
Jon Krohn: 00:10:08
Nice. I got you. So let me explain that back. That last point there. So you have an external facing chatbot powered by Quickchat AI that … Let’s say it could be my company. So my company Nebula, we are a platform that is automating as many white collar processes as we can, specifically like human resources things to begin with. So we have a platform that allows you to find talent. So we could have an external facing chatbot that could be interfacing with our clients or could be interfacing with the talent that they’re trying to attract to their companies. And as those people are having these conversations, they’re going to be bringing up issues with the way that our product is working, the way that the Nebula product is working. And so what you’re saying is that something that is baked into Quickchat already is that there’s some way for me to be gathering the insights from those conversations. What does that look like? So maybe there’s some feature in my platform that’s buggy. A really simple example. People keep trying to pay, but the credit card billing platform isn’t working. And so this is a huge problem. People want to be paying me and they can’t be and they’re having a conversation with an automated bot trying to resolve this. How would that get flagged to me? How would I find out that I’m having this issue with credit card swipes?
Piotr Grudzień: 00:11:50
Yeah. So that feature, which we call Conversation Insights, isn’t yet available in the self-serve solution where you set it all up yourself. But this is something that we make available for a custom project. And the way this would work is that … Well, first of all, it’s important for us to understand your business. And I already have some idea now based on your explanation. And there will usually be some aspects of what the customers are talking about that will be particularly interesting for you. So for example, insights about new feature ideas and insights about some emergencies or big issues. And there, Quickchat AI will automatically scan the conversations, and in your daily or weekly reports you’ll be flagged with the most common issues, and those would be categorized into buckets that you should particularly focus on. That’s the basic idea of how it works.
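To make that concrete, here is a minimal sketch of how conversations could be bucketed into issue categories with an off-the-shelf LLM and counted for a daily report. The category list, prompt wording, and use of the OpenAI Python client are illustrative assumptions, not a description of Quickchat’s actual pipeline.

```python
# Hypothetical sketch: bucket a day's conversations into issue categories
# with an off-the-shelf LLM, then count which categories came up most.
from collections import Counter
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

CATEGORIES = ["billing problem", "bug report", "feature request", "refund request", "other"]

def categorize(transcript: str) -> str:
    """Ask the model to assign exactly one category label to a conversation."""
    response = client.chat.completions.create(
        model="gpt-4",
        temperature=0,  # we want a stable, repeatable label
        messages=[
            {"role": "system",
             "content": f"Classify the support conversation into exactly one of: "
                        f"{', '.join(CATEGORIES)}. Reply with the category name only."},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content.strip().lower()

def daily_report(transcripts: list[str]) -> list[tuple[str, int]]:
    """Return issue categories sorted by how often they appeared today."""
    counts = Counter(categorize(t) for t in transcripts)
    return counts.most_common()
```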
00:12:59
And a very simple example of that is our human handoff feature. So the way human handoff works is that Quickchat AI automatically recognizes when the conversation is going in a direction where it will need human attention. So maybe someone wants a refund, or someone just wants to talk to a human, or someone keeps asking similar questions over and over, which might suggest that the AI is confused or that it doesn’t have the necessary knowledge to answer. And then our system can detect that automatically, disconnect the AI from the conversation, and have one of your human agents take over. And an interesting conversation insight feature being used there is that your human agent is pinged with a summary of the issue that the customer is facing. So there’s no need to read through a perhaps chaotic transcript; you just get a nice summary and you can act on it immediately.
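As a rough illustration of the handoff mechanics described above (again, not Quickchat’s actual implementation), one could chain two LLM calls: one to decide whether a human is needed and one to produce the summary that the agent is pinged with. The prompts and model choice are assumptions.

```python
# Hypothetical sketch: decide whether a conversation needs a human agent,
# and if so, produce a short summary for the agent who takes over.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def needs_human(transcript: str) -> bool:
    """LLM check for refund requests, explicit asks for a human, or the
    user repeating the same question (a sign the AI may be stuck)."""
    response = client.chat.completions.create(
        model="gpt-4",
        temperature=0,
        messages=[{"role": "user",
                   "content": "Answer YES or NO. Does this support conversation need a "
                              "human agent (refund request, user asks for a human, or the "
                              f"user keeps repeating the same question)?\n\n{transcript}"}],
    )
    return response.choices[0].message.content.strip().upper().startswith("YES")

def handoff_summary(transcript: str) -> str:
    """The summary a human agent is pinged with instead of the raw transcript."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user",
                   "content": "Summarize the customer's issue in two sentences so a "
                              f"human agent can act on it immediately:\n\n{transcript}"}],
    )
    return response.choices[0].message.content
```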
Jon Krohn: 00:14:04
Gurobi Optimization recently joined us to discuss how you can drive decision-making, giving you the confidence to harness provably optimal decisions. Trusted by 80% of the world’s leading enterprises, Gurobi’s cutting-edge optimization solver, lightweight APIs and flexible deployment, simplify the data-to-decision journey. Gurobi offers a wealth of resources for data scientists. Webinars like a recent one on using Gurobi in Databricks. They provide hands-on training, notebook examples and an extensive online course. Visit gurobi.com/sds for these resources and exclusive access to a competition illustrating optimization’s value with prizes for top performers. That’s G-U-R-O-B-I.com/sds.
00:14:49
That’s very cool. When you say that out loud, it sounds so obviously valuable to me. Yeah, that’s really cool. So yeah, you can be gathering insights from chats in these daily or weekly reports that capture the biggest issues, the topics that customers are discussing most in their chats. And then in addition to that, you have a process that can flag that a human needs to be involved in a conversation, or that it appears likely that a human needs to be involved in the conversation, and then you provide a summary to them. I’m guessing a lot of these features themselves are powered by generative AI. So obviously for that summary of the transcript, you’re going to be using a generative AI model. And then for the insights as well, even creating … I don’t know. Obviously you can’t go into proprietary secrets, but maybe to some extent you can fill me in on how you categorize all of the conversations from that day or that week into different discrete categories.
Piotr Grudzień: 00:15:51
Yeah. It’s interesting to think about these things in the way that our approach always focuses on: rather than assuming that LLMs are the solution to all of our problems, we treat LLMs as the language generation tool that we can use at any point during our processing. And the reason why this is important is that we have a lot of people mention to us things like, I would like to create an AI that is an expert in some subject. And then what many people will do is write a prompt that says, okay, this AI assistant is an expert in geography, and then off you go, go and talk to it. But the problem with a product like that is that it’s exciting to use at the very beginning, but then after you’ve used it for a week or two, it becomes repetitive. Unless you have a specific problem that keeps happening and you need to keep using that solution to solve it, it quickly becomes boring. So the question we usually ask these people is, okay, first, where is your expertise? Now explain your expertise to us and we’re going to use AI to build an experience that packages that expertise and delivers it back to your users.
00:17:28
And it’s very similar to what you just asked about, which is these conversation insights, we may try to generate them automatically, but usually the USP and the real value is in talking to the customer and really understanding what they need and really understanding the details of their business and then at the very end, using LLMs to answer those questions, solve those problems. So LLMs, they’re not, at least for now, universal problem solvers that also think for you and answer the questions that you should ask yourselves, but rather for now, I think we should know what we’re looking for and use LLMs as a way to get there rather than just hope that it works just like that.
Jon Krohn: 00:18:19
Nice. Nice. Yeah, that makes sense to me. Can you give us a concrete example, perhaps? Like an example of a particular customer … Maybe you don’t need to disclose who the customer is, but maybe just their industry or something and explain how you’d work with them to figure out what their big issues are and how you would be able to prime their LLMs to be able to tackle these kinds of user issues.
Piotr Grudzień: 00:18:45
Yeah. An interesting use case is market research, or very specifically the process of having human agents talk to customers or potential customers who have tried a particular product, and then trying to ask the right questions to get insights from them on how to improve the product or whether they would actually want to try it. And the tricky bit there is that if you’ve just tried a new cookie, then if I just ask you to describe the taste, you would just tell me what you thought. But only if I’m able to guide the conversation in the right way, maybe you will tell me that, well, actually last week you had a cookie that was a bit more crunchy. And that will give me an insight: okay, maybe that’s really the reason why you’re not as excited as you could be.
00:19:51
And the real expertise is in teaching the human agents how to conduct the conversation so that they can discover these things. And the interesting part is that it’s perfectly possible these days to set up an AI assistant with a very simple prompt that will have a conversation that sounds very much like market research. And I’m sure that if you saw the demo you would be impressed, and people would share it on Twitter and so on. But then when it comes to actually launching it to people and running 10,000 of these conversations and gaining insights from them, you’ll find that they’re all very generic. They’re just not up to scratch. And the reason is that the expert knowledge and the human experience that was gained over the years in the industry was not embedded in how the conversation should be conducted. The way we would approach it is we would talk to experts so they can show us several transcripts of what these conversations should look like, try to decompose them into the particular actions that should be taken by the interlocutor and the particular context that the interlocutor has, and only from there decompose the whole conversation into a set of actions with particular context that you can use as a skeleton for your AI to run.
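As a toy illustration of that decomposition (the class and field names here are hypothetical, just to show the shape of the idea), the resulting “skeleton” might be an ordered list of actions, each carrying the expert context the interlocutor needs at that step:

```python
# Hypothetical sketch of a conversation "skeleton": an ordered list of actions
# the interlocutor should take, each with the expert context it relies on.
from dataclasses import dataclass, field

@dataclass
class ConversationAction:
    goal: str                                   # what this step should accomplish
    context: str                                # expert knowledge the step relies on
    example_phrasings: list[str] = field(default_factory=list)

MARKET_RESEARCH_SKELETON = [
    ConversationAction(
        goal="Ask for an open-ended first impression of the product",
        context="Do not lead the respondent; let them pick which attributes to mention.",
        example_phrasings=["What went through your mind when you first tried it?"],
    ),
    ConversationAction(
        goal="Probe for comparisons with products they already know",
        context="Comparisons (e.g. 'crunchier than last week's cookie') are where the "
                "actionable insights usually hide.",
        example_phrasings=["How did it compare to similar products you've tried recently?"],
    ),
]
```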
Jon Krohn: 00:21:27
Very cool. And so interlocutor there just means it could be a human agent or an AI agent that is handling the conversation on behalf of the company.
Piotr Grudzień: 00:21:37
Exactly. Yes.
Jon Krohn: 00:21:38
Nice. Okay. Yeah. That’s crystal clear. And something else that you’ve talked about in previous interviews, and actually our researcher, Serg Masís, for … I already mentioned this to Piotr before we started recording. But many of your public interviews are done in Polish and Serg doesn’t speak Polish, but he wanted to be able to have insights from those conversations. So he was using the OpenAI Whisper algorithm to convert those Polish conversations into English so that he could prepare some of these questions. And so this is an example of one of those questions. You previously contrasted light solutions versus enterprise solutions. So can you expand on these two different categories of solutions that a company like yours can offer?
Piotr Grudzień: 00:22:25
Yeah, so that distinction would be what I referred to earlier as our self-serve solution versus the enterprise or custom projects that we develop for companies. And we do distinguish between those two simply because there are some companies that are perfectly happy with creating an account on our platform, figuring it out for themselves and using all the different features to achieve exactly what they want. And then there are other companies that much prefer that all the work of setting everything up is done by us, and perhaps that some of the modules we have are slightly tweaked so they fit their use case exactly. So I think that’s the usual starter/essential/business tiers, and then the talk-to-sales setup where it’s a high-touch integration.
Jon Krohn: 00:23:18
Very cool. Your tools would empower somebody with a role like a conversation designer at a client of yours. So this conversation designer can use Quickchat AI to be doing the kinds of fine-tuning that you’re describing, which would be understanding particular issues and using historical transcripts to get the right flow to the conversation for handling common customer issues. So you’ve talked about modules a few times, including just now in the context of light solutions versus enterprise solutions. Would you be able to talk us through the key modules that Quickchat offers and bring them to life? Let’s say I’m a conversation designer at my company and I want to be using Quickchat AI, what are the key modules that I might be using in my first week with Quickchat AI in order to be fine-tuning the conversations?
Piotr Grudzień: 00:24:11
Yes. So when setting up an AI assistant, usually the first starting point is the knowledge base. So you first really need to ask yourself the question of what do I want the AI assistant to know beyond the general world knowledge that it already has, and where does my company store that information? It might be that it’s spread out across your company’s websites and sub-pages and URLs, so that data can be downloaded. It can be PDFs. It might be that it’s your FAQ pages, maybe on Intercom. Maybe some other system. It could be that it’s Word documents. So it’s important to gather all the data and it can be uploaded into Quickchat. So that will be the first step.
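A very rough sketch of that gathering step might look like the following. The loaders, chunk size, and example sources are illustrative assumptions rather than Quickchat’s actual ingestion code.

```python
# Hypothetical sketch: gather knowledge-base text from URLs and local files
# into uniform chunks that can then be uploaded to an assistant.
import pathlib
import requests
from bs4 import BeautifulSoup  # pip install beautifulsoup4

def load_url(url: str) -> str:
    """Download a page and strip it down to its visible text."""
    html = requests.get(url, timeout=10).text
    return BeautifulSoup(html, "html.parser").get_text(separator="\n")

def load_file(path: str) -> str:
    """Read a plain-text or Markdown document from disk."""
    return pathlib.Path(path).read_text(encoding="utf-8")

def chunk(text: str, max_chars: int = 2000) -> list[str]:
    """Split long documents into retrieval-sized chunks at paragraph boundaries."""
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    chunks, current = [], ""
    for paragraph in paragraphs:
        if current and len(current) + len(paragraph) > max_chars:
            chunks.append(current.strip())
            current = ""
        current += paragraph + "\n\n"
    if current.strip():
        chunks.append(current.strip())
    return chunks

# Example sources (hypothetical): an FAQ page and a local pricing document.
knowledge_base: list[str] = []
for source in ["https://example.com/faq", "docs/pricing.md"]:
    text = load_url(source) if source.startswith("http") else load_file(source)
    knowledge_base.extend(chunk(text))
```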
00:25:05
And then the second aspect of the conversations that you start with is the AI setup, as we call it. The very basic setting there is to decide on the tone of voice, but you can also go into more detail. Maybe you’re creating an AI assistant that is tuned to be selling your products, and then your knowledge base would be filled with product descriptions along with the URLs. Maybe there will be some promo codes. There is a setting that makes the AI assistant more focused on providing URLs and being more salesy. There is a setting that we’re launching in a few days, which we call AI Profession, which gives you a basket of these behaviors all in one, so that you don’t need to think about them separately, but you can choose an AI assistant that is a salesman or an expert in something or an educator. And with this very simple setting, you can get this entire basket of solutions altogether.
00:26:16
And then obviously that gets you to the first stage where your AI system has the right setup, has the knowledge base, and it’s possible for you to start testing. So to start having conversations as if this was your users and start gaining insights into how well your knowledge base is being used in conversations. And that’s when the important stuff starts. Because very often companies discover that the knowledge base that they provided to the AI is incomplete, or in some parts self-contradictory, or there are some parts that are completely missing. And there the important part is to have the tools to be able to debug it. Because unfortunately, there’s no magic bullet. GPT-3.5, GPT-4 are very smart, but they will not provide good answers if the knowledge base itself doesn’t make it clear what the right answer should be. So we do have tools that we make available for our custom projects that allow you to have conversations, debug particular messages to really understand why the particular message was generated, and to really dig into your knowledge base and find that perhaps we have two blog posts that talk about the same thing, but the conclusion is different. And until your knowledge base is really cleaned up, then your AI assistant will not work as well as it could.
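One simple way to hunt for that kind of self-contradiction is sketched below: find pairs of knowledge-base chunks that cover the same topic (via embeddings) and ask an LLM judge whether they disagree. The models, threshold, and prompt are assumptions, offered only to illustrate the idea, not as Quickchat’s actual debugging tooling.

```python
# Hypothetical sketch: flag knowledge-base chunks that cover the same topic
# but appear to give conflicting answers.
from itertools import combinations
import numpy as np
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def embed(texts: list[str]) -> np.ndarray:
    """Embed each chunk so we can measure semantic similarity cheaply."""
    response = client.embeddings.create(model="text-embedding-ada-002", input=texts)
    return np.array([item.embedding for item in response.data])

def contradicts(chunk_a: str, chunk_b: str) -> bool:
    """LLM judge: do these two passages give conflicting answers?"""
    response = client.chat.completions.create(
        model="gpt-4",
        temperature=0,
        messages=[{"role": "user",
                   "content": "Do these two passages give conflicting answers to the same "
                              f"question? Reply YES or NO.\n\nA: {chunk_a}\n\nB: {chunk_b}"}],
    )
    return response.choices[0].message.content.strip().upper().startswith("YES")

def find_conflicts(chunks: list[str], similarity_threshold: float = 0.85):
    """Only run the LLM judge on chunks that are semantically close, to keep calls cheap."""
    vectors = embed(chunks)
    vectors /= np.linalg.norm(vectors, axis=1, keepdims=True)
    conflicts = []
    for i, j in combinations(range(len(chunks)), 2):
        if vectors[i] @ vectors[j] > similarity_threshold and contradicts(chunks[i], chunks[j]):
            conflicts.append((chunks[i], chunks[j]))
    return conflicts
```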
Jon Krohn: 00:27:53
Makes perfect sense. Yeah. I could see that happening all the time, where you have resources that were collated over many years, written by humans from … Yeah. They could be in the same department, but two years earlier the platform worked one way, and so they created the document one way, and then later the product worked some other way that directly contradicts it. But you’re just like, oh, sweet, we’ve got this new conversational assistant, let’s just give it all the information that we have, and we throw everything in without people realizing that there are these inconsistencies in guidance due to things changing over time or mistakes. Humans make mistakes.
Piotr Grudzień: 00:28:35
Yeah. Yeah. And then over time, as you keep on testing, and maybe you launched it to some group of users and then you can see some first conversations that weren’t as good as you thought, that’s when the really interesting stuff starts to happen, because we start to discover that what the correct answer should be is actually very subjective. It might be that an answer looks perfectly fine to one person, but someone else will say that you recommended one product here, but actually you should have recommended all three and talked less about each one of them. And our system has some features that allow you to tune these things. We call them AI Guidelines. AI Guidelines are instructions that allow you to really tell the AI how to behave in different situations, and we’ll be launching more and more of these features to really let you understand why a particular message was generated by the AI and to understand what to do so that next time it’s exactly as you want. We’re not going to say perfect, because perfect is very subjective, but it lets you set it up the way you like it.
Jon Krohn: 00:29:48
Data science and machine learning jobs increasingly demand cloud skills, with over 30% of job postings listing cloud skills as a requirement today, and that percentage set to continue growing. Thankfully, Kirill and Hadelin, who have taught machine learning to millions of students, have now launched CloudWolf to efficiently provide you with the essential cloud computing skills. With CloudWolf, commit just 30 minutes a day for 30 days, and you can obtain your official AWS certification badge. Secure your career’s future, join now at CloudWolf.com/SDS for a whopping 30% membership discount. Again, that’s CloudWolf.com/SDS to start your cloud journey today.
00:30:28
Sweet. All right. You’ve now given us a clear picture of what the Quickchat product is, and also it provides us with a general way of thinking about AI assistants and how we might want to be deploying them into our company. So thank you very much for that. Let’s get into some specific challenges with designing conversational assistants now. So when I’m using a conversational assistant with my company, it’s critical for me and probably most companies that it’s not going to say inappropriate things, unethical things, dangerous things or just go off brand. What kinds of safety checks or controls do you put in place? What guardrails do you have to put in to try to ensure that these issues don’t take place?
Piotr Grudzień: 00:31:15
Yeah. So the short answer is that that’s something that we would like our users to not need to worry about, and that’s something that we deal with internally. And by that I mean that the context and what the AI assistant can talk about is very much limited to the specific topics and knowledge bases that have been set up by the user. It’s interesting to look back on the history of LLMs, which goes back many years, but in 2020, obviously, when GPT-3 came out, they all started gaining in popularity. And it’s interesting to see how the topic of safety has evolved a lot. With the original GPT-3 it was very easy to make it go off-topic, to make it say inappropriate things. So in the early days, when we were one of the first users of GPT-3, we were talking to OpenAI researchers quite often about how to best create filters and other solutions to guardrail the models.
00:32:28
Today that conversation is a bit different because OpenAI has done a lot of work on tuning the models such that they follow instructions very, very closely and that they follow the topic of the conversation very closely as well. But obviously the work that LLM researchers do on guardrails is just one area, and then companies building on top of them, they need to do their own work and embed safety features in their own products to make sure that ultimately the end user doesn’t need to worry about it.
Jon Krohn: 00:32:59
Cool. Yeah. And so I guess that’s something that you would handle at Quickchat: you offer these specific safeguards that depend on exactly the situation that your client is in.
Piotr Grudzień: 00:33:14
Yes. Yes. So especially with custom projects, there are different considerations depending on the industry, depending on the specific needs of the customer and depending on the exact design of the conversation. Because if we’re talking about custom projects, conversations might not be open-ended, but follow some kind of a rough script. And there obviously that requires that the guardrails all are very, very strict as well.
Jon Krohn: 00:33:40
All right. And then so earlier in the episode you were talking about knowledge bases. They seem to be critical. Not seem to be. I know that they are critical in order to have these conversational agents be able to represent a company in a unique way and their particular product, their particular processes. And this would be internal or external. We’ve talked mostly in this episode, the examples have revolved around the external facing agents. But you mentioned right at the onset of this episode that of course these could also be used internally to allow employees to be solving problems more quickly than they otherwise could. Maybe be able to answer questions themselves as opposed to having to get in touch with an internal subject matter expert. So are there particular challenges associated with blending the external general knowledge that an LLM might come with already? So an LLM … You could be taking an off-the-shelf one like GPT-4 or GPT-3.5 Turbo or Anthropic’s Claude or whatever. But then you need to blend that with these knowledge bases. Are there any particular challenges associated with that?
Piotr Grudzień: 00:35:01
Yes. That’s an interesting question and I think we need to distinguish here between AI assistants that are supposed to flexibly handle and answer questions based on some knowledge base that you give them (and we already talked about how the knowledge base needs to be sufficiently clean and available in the right format and so on), and AI assistants that are closer to ChatGPT, which cannot fully be trusted but are a bit more flexible and help you with a wider variety of things. And actually the difference and the balance there is very subtle, and I think big companies are now starting to notice that. It’s not as simple as giving ChatGPT to all employees and letting them use it, because we know that there are some limitations. But it’s also not as simple as setting the temperature to zero and using the knowledge base, because then we’ve pretty much just built search.
00:36:09
So I think if a company is implementing an AI assistant, what needs to go along with that is tools and processes that allow you to be fully aware of the fact that the AI assistant is one thing, but it’s also a tool that helps you clean up your knowledge base and answer questions like: is my knowledge base even capable of giving me that answer? LLMs tend to give you answers that sound very plausible and very likely to be correct, but what if within your knowledge base there are two potential correct answers to the same question and the LLM always just gives you one? That’s not really the optimal result. What you should want is to be able to see that there are actually two possible answers, or maybe three, or maybe two of them are conflicting, and then it’s a task for someone to figure out what’s going on there and talk to the right people to find the loopholes in the knowledge base.
00:37:14
So I think the devil is in the details here. There’s no one AI assistant that will solve all the problems. But introducing an AI assistant needs to go hand in hand with introducing processes around data cleanup, around analyzing conversations to make sure that the quality of the assistant goes up over time, but also of the knowledge base itself. And I think that’s really what’s going to empower companies and let them use AI really efficiently over time.
Jon Krohn: 00:37:45
Nice. Yeah. Okay. That was a really concrete example. It makes perfect sense to me. So if we’re fine-tuning a model on some knowledge base, because of things like conflicting information or because of ambiguity … Or sorry, because of ambiguity, because of potentially multiple different good answers to a given question, it’s the kinds of tools that you build that allow you as a human to debug. We talked about this earlier. So earlier we were talking about modules that a conversation designer would use with the Quickchat tool. And so there are things like the knowledge base, tone of voice, and then you specifically went into tools for debugging conversations. So this sounds like a perfect example of that, where you have the conversation designer being able to adjudicate what the best answer is and fine-tune these conversations to make them better. You mentioned in your response this term temperature, which is something that I’m familiar with, but maybe not all of our listeners are. Would you mind digging into that a bit more?
Piotr Grudzień: 00:38:47
Yes. So basically temperature is a parameter that you use when asking a large language model to generate text for you. If you set the temperature to a high value, that means that if you generate with the same prompt several times, you will get widely different results. Some of them are generally less plausible, but there will be a wide variety. Whereas with temperature zero, if you generate with the same prompt several times, you’re very likely to get almost identical results. So this is the usual trade-off. Low temperature gives you answers that are very predictable but not very creative, might be repetitive, but it’s generally safer. It’s more likely to be just quoting directly from the knowledge base. Higher temperature is what you want to use for the LLM to write stories for you or write some fantastical scenarios. And I think that’s what people use for creating these amazing demos, because then you can generate 50 completely diverse results, choose the absolute best one, and then that’s really, really impressive.
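For listeners who want to see where that parameter actually lives, here is a minimal illustration using the OpenAI Python client; the prompt and the two temperature values are just examples.

```python
# Temperature in practice: the same prompt sampled three times at two settings.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
prompt = "Suggest a name for a chatbot that helps people file expense reports."

for temperature in (0.0, 1.2):
    completions = [
        client.chat.completions.create(
            model="gpt-4",
            temperature=temperature,  # 0.0 = predictable, higher = more varied
            messages=[{"role": "user", "content": prompt}],
        ).choices[0].message.content
        for _ in range(3)
    ]
    # At 0.0 the three answers are usually near-identical; at 1.2 they diverge
    # (and are occasionally less plausible).
    print(f"temperature={temperature}: {completions}")
```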
Jon Krohn: 00:40:03
Nice. Great answer. Thank you. Very clarifying. It was a perfect definition of temperature. I have nothing else to add to that. Clearly you have a huge amount of experience with using LLMs, whether they are developed by another company or yourselves. Let’s dig into how quickly this LLM landscape has been changing in recent years. So when you started off with building these Quickchat solutions, what was the ecosystem like then? What were the LLMs like that you were leveraging for your platform?
Piotr Grudzień: 00:40:41
Yes. I remember vividly the summer of 2020 when, somehow through Twitter and some Y Combinator connections, I first heard about GPT-3, this new model that had come out. And there was a very closed beta and you needed to be very quick to be able to sign up and get access to the model, and somehow I managed to be one of the first people to get access. And I remember it was this very, very novel idea. We had GPT-2 before, but the gap between GPT-2 and GPT-3 was so huge that it was just a complete paradigm shift. A completely different experience. So you could say that the idea of giving a model some text and having it continue was a brand new idea. And I remember when I first started playing around with GPT-3, for me it was completely amazing and I was 100% sure that this was going to change the tech world over the next few years.
00:41:49
Obviously in the very early days, our idea to build AI assistants was a bit controversial, I would say, simply because the models weren’t performing as well as they are today. They were slow and very expensive. So I remember when we were doing some first customer demos back in 2020, we knew that a short conversation might cost something like a dollar. It’s completely unfeasible. But we had this big bet that there would be huge advances in the models themselves, that there would be huge competition and the prices would just go down, which is exactly what we see. And if we compare GPT-3.5 Turbo in terms of how cheap and how fast it is, it’s absolutely amazing what the progress has been. And this idea of creating AI assistants that can use several calls to several LLMs to generate one response, this idea became very much plausible and very much within the budgets of typical projects.
00:43:01
So a lot has changed. Obviously the whole tech world shifted towards generative AI solutions, which caused the usual thing where now every problem is being solved with generative AI. So actually I feel like these days we have very few startups that … Maybe not very few, but it’s not as common to start with a problem. But they usually start with, I want to use GPT-4 and now let me find a problem that I can solve with that. That’s obviously very typical, but that leads to many interesting situations like for example, something I mentioned before, which is the focus on the most flashy demo. The product that’s easiest to set up to start with, but there’s much less attention paid to what happens in the long run. How do I let businesses optimize for the years to come? How do I make sure that my solution is viable over thousands of interactions and so on? That’s always been like that with new exciting technology.
Jon Krohn: 00:44:10
You mentioned there in the beginning when you got access to GPT-3 for the first time in 2020 that it was expensive and it was slow. But one other thing that I think was a huge issue until GPT-4 was released in early 2023 was hallucinations. They were a big problem before.
Piotr Grudzień: 00:44:31
That is true. We internally have been working on a number of solutions to try to remedy that. So I think if you’re interacting with LLMs via a platform like Quickchat, you’ve felt that hallucinations have become less of a problem over time. But it is true that if you’re using models just like that, then obviously you can tell that there’s been a huge focus within the OpenAI team and other competitors, and so models have become more and more usable in a vanilla way. For example, ChatGPT you can use as is, and millions of people find it extremely useful, so hats off to the OpenAI team in general and the LLM community.
Jon Krohn: 00:45:23
Mathematics forms the core of data science and machine learning. And now with my Mathematical Foundations of Machine Learning Course, you can get a firm grasp of that math, particularly the essential linear algebra and calculus. You can get all the lectures for free on my YouTube channel, but if you don’t mind paying a typically small amount for the Udemy version, you get everything from YouTube plus fully worked solutions to exercises and an official course completion certificate. As countless guests on the show have emphasized, to be the best data scientist you can be, you’ve got to know the underlying math. So check out the links to my Mathematical Foundations of Machine Learning Course in the show notes. Or at JonKrohn.com/Udemy. That’s JonKrohn.com/U-D-E-M-Y.
00:46:07
Yeah. It’s been night and day. It’s amazing how often … Maybe once a week when I post on LinkedIn some amazing new generative AI capability. It could be something that my company is rolling out for our customers or it could be some complete innovation from a third party. And about once a week somebody will write, “Ah, this will never take off because of hallucinations. They’re an issue. How can we deploy these systems?” And I’m like, have you been using GPT-4? Because I’ve been using GPT-4 since March, since it came out and I cannot think of one instance where I had an issue with hallucination where I was able to notice some problem. And so it’s amazing how … I guess this is just probably what humans have always been like. That some new technology comes along and people dig their heels in and are like, for reason X, this is never going to be an effective solution. And it seems like hallucinations are the thing that pops up most that I see with generative AI. Yeah. Obviously these people have not been using the modern tools. And it’s cool to know that with interfaces like Quickchat over top of even GPT-3 years ago, you were able to minimize hallucinations.
00:47:34
I don’t know to what extent you can go into describing how you prevent hallucinations from happening. So my experience with GPT-3, particularly around like 2021, was that it did make a lot of hallucinations. That was commonplace for me. So how does Quickchat identify that a hallucination might be there and prevent it from surfacing to a user?
Piotr Grudzień: 00:48:01
Yeah. I won’t go into a huge amount of detail, but the very basic idea is again that we don’t trust the model as much. We treat the model as a tool for generating text. Now, if you think about it, we focus a whole lot on how do we construct and format the context that we give to the model. So we really want to have tight control over what the model reads before the … And here I’m talking about the in-context learning. I’m not talking about fine-tuning. So we have very tight control over what the model reads just before generation. And then obviously there are post-processing steps that you can take. So you can ask yourself questions like, all right, here I gave the model this information and this is what the model said. Can I use LLMs again in some smart way to try and predict or evaluate how likely it is that this thing here is made up or incorrect?
00:49:02
Maybe we’re looking at a use case where using outside knowledge is dangerous 100% of the time. Maybe all I want is for the model to rephrase, in a smart way, the very context that I was able to find for it in the knowledge base. And then, in my post-processing step, I can make 100% sure that what I see in the response comes directly from what was given in the context. And there are several different pre- and post-processing steps that you can take to just eliminate the risk.
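A toy version of that post-processing idea might look like the snippet below: a deliberately crude word-overlap check followed by an optional LLM verification that the answer adds no claims beyond the context. It is a sketch of the general technique, not how Quickchat actually does it.

```python
# Hypothetical sketch: verify that a generated answer stays grounded in the
# context it was given before showing it to the user.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def grounded_enough(answer: str, context: str, min_overlap: float = 0.6) -> bool:
    """Crude first pass: what fraction of the answer's content words appear in the context?"""
    answer_words = {w.lower().strip(".,!?") for w in answer.split() if len(w) > 3}
    if not answer_words:
        return True
    context_lower = context.lower()
    overlap = sum(w in context_lower for w in answer_words) / len(answer_words)
    return overlap >= min_overlap

def llm_grounding_check(answer: str, context: str) -> bool:
    """Second pass: ask a model whether the answer makes claims the context doesn't support."""
    response = client.chat.completions.create(
        model="gpt-4",
        temperature=0,
        messages=[{"role": "user",
                   "content": "Does the ANSWER contain any factual claim that is not supported "
                              f"by the CONTEXT? Reply YES or NO.\n\nCONTEXT:\n{context}\n\n"
                              f"ANSWER:\n{answer}"}],
    )
    return response.choices[0].message.content.strip().upper().startswith("NO")

def safe_to_show(answer: str, context: str) -> bool:
    return grounded_enough(answer, context) and llm_grounding_check(answer, context)
```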
Jon Krohn: 00:49:35
Nice. That was a great answer. That makes sense to me and I appreciate you going into some detail there without divulging too much of your proprietary secret sauce there. So again, this might be something that touches on proprietary secret sauce, but we’ll see how it goes. So years ago you were using the OpenAI APIs. Do you still leverage OpenAI APIs today underneath Quickchat?
Piotr Grudzień: 00:50:00
Yes, we use OpenAI and other vendors as well, but I think at least at this point, it’s still safe to say that OpenAI is the best vendor. But it is true that in order to work with businesses, you have to be able to switch between different vendors to be able to adjust to particular customer needs, to be able to handle potential outages. So obviously we are watching the entire landscape very, very closely and Quickchat is integrated with several different vendors. Obviously new models keep coming out almost every day, so you need to really be on top of things to know which one is the best for different tasks currently.
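A bare-bones version of that kind of vendor fallback could look like this; the provider choices, model names, and error handling are illustrative assumptions, simply trying a primary client and falling back to an alternative when a call fails.

```python
# Hypothetical sketch: route a completion to a primary vendor and fall back
# to an alternative vendor if the call fails (e.g. during an outage).
from openai import OpenAI
from anthropic import Anthropic  # pip install anthropic

openai_client = OpenAI()        # assumes OPENAI_API_KEY is set
anthropic_client = Anthropic()  # assumes ANTHROPIC_API_KEY is set

def complete(prompt: str) -> str:
    try:
        response = openai_client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": prompt}],
            timeout=15,  # fail fast so the fallback can kick in
        )
        return response.choices[0].message.content
    except Exception:
        # Primary vendor failed or timed out: fall back to Anthropic.
        response = anthropic_client.messages.create(
            model="claude-2.1",
            max_tokens=512,
            messages=[{"role": "user", "content": prompt}],
        )
        return response.content[0].text
```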
Jon Krohn: 00:50:43
Nice. Are you able to recommend other particular vendors to our listeners in the circumstances when they might be useful?
Piotr Grudzień: 00:50:52
Yeah. Just in very general terms, the usual suspects, so Cohere, Anthropic. Some of the new Llama models as well. I highly recommend that all companies do their own research and really find what works best in their particular use case, because obviously the number of parameters is just one superficial metric, but what really matters is your use case: how big of a problem is latency, how big of a problem is potential downtime, and how well the model is tuned to your particular issue. Maybe you need to do your own fine-tuning. Maybe you need to work with someone else to build a dataset for your fine-tuning task. So yeah, just do your own research. That’s what I would recommend.
Jon Krohn: 00:51:40
Very cool. Yeah. That makes a lot of sense to me. Those are definitely the usual suspects, OpenAI, Cohere, Anthropic for sure. And yeah, cool to hear that sometimes the Llama 2 family comes in handy as well. That’s typically what we use in my company: Llama 2 for fine-tuning for specific tasks. And it makes perfect sense to be worried about downtime being a big factor and being able to swap if necessary. There was an instance where, at the time of recording this episode a couple of weeks ago, OpenAI had a huge outage. And it’s rare. You can look at their downtime page, where you can see, for their various services, whether there was downtime on a given day over the past year. And it’s green almost all the time. But that day it was red, and if you are using just one vendor, you are critically stuck to them. Now, multiple of those vendors could be behind the scenes relying on, say, one big cloud provider like AWS.
Piotr Grudzień: 00:52:45
Absolutely.
Jon Krohn: 00:52:46
But typically huge companies like OpenAI, Cohere, Anthropic, they should be … I don’t actually know the details on this. But if they are relying on third party data centers, then they should be relying on them in multiple different regions. Yeah. So hopefully that outage would be rare across multiple of these vendors. Very cool. Those are super interesting insights. Speaking of instability, do you think that … I don’t want to spend too much time on this. I try not to be just the buzzing news show. I want most of what we cover to be longterm knowledge that people probably with most super data science episodes come back years later and all still be relevant information. But that caveat aside recently everyone is aware of the turmoil between OpenAI’s board and its CEO. When you’re watching that happen? As somebody who’s really dependent on OpenAI services, does that make you think, oh … You already have backup vendors lined up like Anthropic and Cohere so maybe for you it’s not something that you’re like, oh man, do we need to think about switching? You probably get where I’m going with this question.
Piotr Grudzień: 00:54:17
That’s a good question. Obviously when all of the Sam Altman situation was happening over the weekend, we were watching it closely. I never thought seriously that this might result in some downtime or full outage of OpenAI. I didn’t really see a connection there. I think it was more-
Jon Krohn: 00:54:38
Yeah. Sorry. Yeah, I don’t mean like it to be … I wouldn’t expect that either. But I mean longer term as a partner stability as opposed to within a given day downtime. I wouldn’t be worried about that either. But just in terms of the reliability of a partner. I guess that’s where my question was going.
Piotr Grudzień: 00:54:59
Got it. Yeah. So in that case, I guess it is an impulse that will motivate you slightly more to do your homework around other vendors and your own reliability. I don’t think it’s going to change over time, but I think today most companies like ours are way more prepared for a potential outage at OpenAI than they were a long time ago. Because it’s just been more discussed and on the news and there are more alternatives as well. But I’m sure that the whole OpenAI team is very much aware of that and they’re aware that we are all having these discussions. And I think their number one consideration over those few days was how to make sure that the users, the companies that rely on OpenAI, that they don’t get affected. And I think in that sense they handled it very well.
Jon Krohn: 00:55:59
Yeah. I guess that answers everything that I was anticipating. All right. So beyond immediate politics happening inside some of these big companies, what do you see happening in the next few years? So for me, GPT-4 was a huge game changer. That release in March of 2023, it absolutely blew my mind. And I went from being somebody who was … Skeptical isn’t the right word because obviously I work in AI and been a data scientist for a long time and I know that there’s a huge amount of potential here. I’ve always known the things like an artificial general intelligence, for lack of a better term, are theoretically possible. But even with GPT-3 for me, I thought, yeah, okay, GPT-3, very useful tool, but it didn’t blow my mind like GPT-4 did. So obviously there’s these things where scaling up by another factor of 10 or 100 with … I don’t know. Some hypothetical GPT-5 or GPT-6. It seems like scaling up is going to continue to yield pretty mind-blowing dividends in terms of the abstractions that these models can anticipate and the reasoning that they can handle. We also have had in recent weeks, things like this Q* rumor as being this system that is able to perform math highly accurately in a way that suggests that the problem solving capabilities of AI systems are about to make a giant leap forward.
00:57:42
So yeah. I’m curious what insights you have as somebody who’s deeply embedded in the generative AI space as to what changes we might anticipate in the coming years.
Piotr Grudzień: 00:57:53
Yes. One trend I think is only going to strengthen is the work that companies like Quickchat do, which is take these models like GPT-3, GPT-4 that are performing very well already and bring them into production, business, real problem solving. So what I mean by that is so that people at home and people in offices will be able to talk to computers and reliably achieve tasks using that. I think the technology is very ready for websites turning from us clicking at buttons to us talking to an avatar or talking to a website and just completing tasks using voice. I think the technology is very much there. What is needed is a lot of work on implementation, on understanding the needs of businesses. And a lot of that work is going to continue and generate a lot of value because productivity will increase greatly. And I basically don’t see a future where in five years we don’t talk to our computers. So that’s for sure.
00:59:08
But there will be a lot of people who are interested in pushing AI more towards autonomy, or towards what we think about in terms of potential dangers of AI. I don’t think walking in the direction of GPT-3, 4, or 5 without any substantial changes will lead to any leaps. I think some breakthroughs will be needed. And I think what you alluded to, Q*, so any ideas around reinforcement-learning-style ideas, I think those will need to be introduced to really push the frontiers. And that’s where things get really interesting, because I really think that the real challenge is to make these models that are extremely smart within one domain able to work across domains. And that is obviously extremely difficult, because the number of different domains that we as humans operate in is enormous. Text is just one of them, and GPT-4 has perhaps mastered text. But then even interacting with things on the internet is a completely different task. To be able to navigate that, to be able to learn there. To be able to generate enough training data with reasonable tasks. To be able to go through billions and billions of examples and really generate understanding.
01:00:50
Another thought I had was that to really be able to navigate the real world, the models need to become much, much better at the tail events, the more black swan events. I think the models are too focused on the average case, whereas to really stand out and fulfill amazing tasks in the real world you need to navigate those one-in-a-million situations really well, and that’s something that humans are really good at, most likely because of the wiring in our brains that has been developed over millions of years of evolution. And I think that idea, which is still very much uncertain, needs to somehow be embedded in this new class of models. And I’m sure a lot of smart people are working on exactly that as we speak right now.
Jon Krohn: 01:01:50
Yeah. Nice. Great answers. And I agree with everything that you said. I do think that things like integrating reinforcement learning are critical to making a big leap forward in these generative AI conversations. And I agree with your vision that tools like you’re offering with Quickchat are going to enable us to have natural language conversations in so many different circumstances in real life, online, for dealing with customer problems or for just interacting with products. It’s so much more natural and yeah, it can be way faster and way more enjoyable. So I do agree with you that that is the future. Going back now into your past, in 2018, you were part of Y Combinator. So you were in the summer 2018 batch. Y Combinator is, I think, obviously the most well-known startup accelerator on the planet, and presumably one of the most competitive to get into. Can you tell us about that experience and how it helped you get Quickchat AI to where it is today?
Piotr Grudzień: 01:03:05
Sure. So in 2018, I was working at Microsoft in London on the machine learning team, working on NLP-related stuff to do with email. But at the same time, I got really into the blockchain scene back then, especially Ethereum. I remember I went to a talk by Vitalik Buterin, who was talking about Ethereum, and it got me really, really fascinated. And to this day, I think the basic idea behind Bitcoin, and the fact that it took us as humanity so long to figure out the double spend problem, is really fascinating. And the technology behind blockchain is really fascinating.
Jon Krohn: 01:03:54
Sorry. Sorry. Sorry. I really know very minimal about blockchain and Bitcoin. What is the double spend problem?
Piotr Grudzień: 01:04:02
Yeah. I’m trying to think if I can still explain it well enough. So the double spend problem is the very basic problem that was stopping people from creating real digital currencies back in the day. And that’s the problem that was solved by Satoshi Nakamoto, someone whose identity we actually still don’t know. And the very basic idea of blockchain is to be able to solve the double spend problem. In very, very broad terms, blockchain is a public ledger where people can contribute what they think are the financial transactions that happen in the world. And anyone can try to append another transaction, which basically says that you sent me $100, and it’s perfectly fine for anyone to try and attach a new transaction, even if it’s completely fraudulent and never happened, or if you giving me $100 would take your bank account to minus 50 bucks.
01:05:10
And then Satoshi Nakamoto’s idea was to basically make it very expensive, in money terms, to try and lie in the ledger and also to let other people lie in the ledger. So it means that if we are the Bitcoin community, you have some Bitcoins, I have some Bitcoins, and we have a shared motivation to keep Bitcoin valuable, to keep the price of Bitcoin high. And the only reason why I would want to prevent you from faking transactions to get rich is that I want to prevent Bitcoin from losing credibility and losing its value. We might want to collude, but then the majority would always outvote us. And that’s the basic idea, not very well explained, that is keeping Bitcoin afloat to this day. And that I think is extremely smart. And then Ethereum took that idea a step further and said, how about instead of that transaction being someone sending someone else money, we could make that transaction any computation.
01:06:27
And now we have a world computer where you can make much more complicated financial transactions, because you can essentially run any program, but that program still has the feature that you cannot run it in a way that violates some basic rules, because then people will jump on it and make sure that that transaction doesn’t go through and you lose your money. That’s the very basic idea, and in theory it allows a lot of amazing global projects to be run, like a global insurance scheme or a global bank that is completely independent of … Well, yeah, completely independent of any single entity and just run by democracy. By the majority of people.
01:07:21
And the idea I had when I applied to Y Combinator was that for people who create these projects on top of the world computer, the issue they struggle with is that once you kick off your project, you can’t easily edit the rules that you set up. You can’t easily debug it or push new code to the server. It might be impossible to roll it back. So you need very strong guardrails ahead of time, and you need to do a lot of testing to make sure that your solution works as expected. So I created software that allows you to run simulations of the economy that you created, to not only test for bugs in your code, but to test that the rules you set up will make people behave in the way you want. And that’s the idea that our company started with. I went through Y Combinator in 2018, but then we had a few pivots. Most of them were machine learning related, but that’s what got us to Quickchat in 2020.
Jon Krohn: 01:08:32
Very cool. Fascinating. Yeah, it’s always interesting for me to learn a bit about Bitcoin because I don’t know that much. We did a couple of episodes on it last year; I can quickly dig up the episode numbers in case people are interested. We did episode number 621 as well as episode number 625, with guests from Chainalysis, which is the world’s best-known analytics provider for blockchain data. But even in those episodes, we didn’t get very much into the things you were describing there, like the double spend problem. It’s not something I’d ever heard of before, because in those episodes we were primarily concerned with data analytics and data science applied to the blockchain, as opposed to the genesis of the ideas and the importance of the idea. So very cool. Yeah. I don’t know if you have general advice for people thinking about getting into an accelerator: what are the advantages of going into an accelerator or not if you have a startup idea?
Piotr Grudzień: 01:09:42
I definitely recommend Y Combinator. If you can get in, I definitely recommend it, because it just teaches you all that you need to know to maximize your chances of success. That said, Y Combinator is also amazing in the sense that they’ve published basically all of their wisdom online. I think every startup founder should read 100% of the stuff that Y Combinator has published online, because it literally tells you, step by step, how to build a startup and how to succeed, how to maximize your chances. Obviously the tricky part is that most of the advice is meta advice, so it tells you how to think about problems. It explains the importance of finding the right co-founder, of not giving up, of evaluating your idea as well, of putting a price tag on a product as fast as possible, of listening to your users and so on. But that’s just meta advice, and obviously what you actually need to do to succeed you have to figure out on your own, because you’re the expert on your users, you’re the expert on your product.
01:10:56
So Y Combinator, I definitely do recommend. There are lots of other accelerators that provide excellent guidance. But it is true that no accelerator is a replacement for actually doing the work. In order to create a successful product, you need to be ready for a lot of failures, for many years of trying over and over again, and there are no quick wins. If something feels like a quick win, then I’m sure you’ll find out that it’s actually not as simple. The typical feeling you get is that you’ve been heads-down, focused and working day after day for many, many months, and then suddenly, when you stop and maybe take a vacation and look back over the past year, you see how much you’ve achieved. But every day just feels like extremely hard work dug into the details, and that’s how it should feel. Anything else that feels like a quick win is actually diverting you from what you should be doing to make your product … to get it to where it should be.
Jon Krohn: 01:12:03
That’s great guidance. How do you know when it’s the right time to pivot? You talked about having this economic simulator for blockchain to start with and now you’re doing conversational AI. How do you know when it’s time to be pivoting from one idea to another?
Piotr Grudzień: 01:12:21
That’s a very good question. We pivoted a few times and I’m really not sure if we timed it well, to be honest. It’s a very difficult question. What I would say is that I definitely recommend having a co-founder, someone who is bought into the idea as much as you are, so that you can have honest conversations about where you really are. Because the important thing when you’re doing a startup is to really face the objective truth about your current situation. Another thing I will mention is that you need to try to monetize your product as quickly as possible, because it’s extremely dangerous to give away something for free or very, very cheaply and have users who pay you a lot of compliments and maybe even use your product, but actually only use it because it’s free, or only use it because you’re acting more as a consultant than a product company. All of these things might obscure your view and make you think that you’re more successful than you actually are, whereas maybe the truth is that you should have pivoted a few months ago.
01:13:36
I also absolutely recommend, whatever you’re working on, keeping your eyes wide open. Technology right now is evolving extremely fast, and startups will always be more nimble and agile than big companies, so it’s much easier for them to spot a new idea, create a new solution, and be first to market before the large competitors move there. So there’s no easy answer on when to pivot, but just try to stay true to what you’re looking for. Get a co-founder and talk a lot.
Jon Krohn: 01:14:12
Yeah. Yeah. Yeah. Those are great answers. Having a co-founder that keeps you objective is a great idea. And then also this idea of getting to pricing and selling as early as possible. I couldn’t agree more. That’s great guidance. That gives you a real sense of whether you have product-market fit or not.
01:14:31
Awesome. All right. Well, Piotr, this has been a fascinating episode. I have come out of it a lot more knowledgeable than I went into it. Thank you very much. Before I let you go, do you have a book recommendation for us?
Piotr Grudzień: 01:14:42
Oh, yes. It’s funny. I was talking to the people on my team about this book a lot and I really recommend it. I’m reading a biography of Walt Disney right now. The title is Walt Disney: American Imagination; I can’t remember the full title. I wouldn’t say it’s a business book. It’s a biography. But I highly recommend it to all startup founders to really understand what it means to be obsessed with your product and with creating something that is the absolute best in the world. The description of how much time Walt Disney spent on his early films is so inspiring. I don’t think many people match that level of dedication. Very, very inspiring. Highly recommended.
Jon Krohn: 01:15:36
Very cool. That is not the typical founder that we have brought up on the show, so that’s a great recommendation, but I totally get it. Absolutely game-changing entrepreneur and really a tech entrepreneur.
Piotr Grudzień: 01:15:50
Absolutely. Absolutely. It’s a different kind of tech. No computers around. But to go from drawing things on paper to creating Snow White in 1937, I think, it’s absolutely amazing what they created, but it was sheer dedication, hard work day after day for many, many years.
Jon Krohn: 01:16:13
Yeah, wild. Yeah. So if our listeners want to be able to continue to extract valuable knowledge from you, from your personal internal knowledge bases, how can they do that after the episode?
Piotr Grudzień: 01:16:28
Yeah. So the best way is just to look me up on LinkedIn or on Twitter. You can also follow Quickchat AI, and there you can find both me and my co-founder on social media. We try to share our thoughts on conversational AI, and AI more broadly, more and more often. So please do follow our blog and the Quickchat AI social media.
Jon Krohn: 01:16:49
Fantastic Piotr. Thank you so much for taking the time and yeah, maybe we will check in again in a few years and see how Quickchat is coming along.
Piotr Grudzień: 01:16:57
Absolutely. Thanks a lot. It was great talking to you.
Jon Krohn: 01:16:59
Fantastic. Piotr Grudzień and his team are making generative AI practical and commercially impactful. In today’s episode, Piotr filled us in on how the successful implementation of conversational agents for a business use case requires providing the agent with relevant context from a knowledge base, debugging factual ambiguities that could emerge in conversation, having guardrails in place to avoid harmful conversations, flagging when conversations require a human to be brought into the loop, gathering key insights from all of the AI agent’s conversations and reporting on those to the humans running the business, and having redundancy across multiple LLM providers, including perhaps a blend of proprietary APIs like OpenAI, Cohere, and Anthropic alongside open-source models like Llama 3. Separately, Piotr talked about how incorporating reinforcement learning into LLM approaches, such as the Q* model that’s rumored out of OpenAI, could be the key to making a leap forward in generative AI capabilities.
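For readers who want a sense of what the multi-provider redundancy mentioned above can look like in practice, here is a minimal, hedged sketch of a fallback pattern. It is not Quickchat’s actual implementation; the provider functions are stand-ins, and in a real system each would wrap the corresponding vendor SDK (OpenAI, Anthropic, Cohere, a hosted Llama 3, and so on).

```python
# Minimal sketch of LLM provider redundancy: try providers in order and
# return the first successful completion. Provider callables are placeholders.
from typing import Callable, List

def complete_with_fallback(prompt: str,
                           providers: List[Callable[[str], str]]) -> str:
    """Return the first successful completion, trying providers in order."""
    errors = []
    for provider in providers:
        try:
            return provider(prompt)
        except Exception as exc:  # timeout, rate limit, outage, ...
            errors.append(f"{getattr(provider, '__name__', provider)}: {exc}")
    raise RuntimeError("All providers failed: " + "; ".join(errors))

# Stand-in providers for illustration only:
def primary_provider(prompt):
    raise TimeoutError("provider down")

def fallback_provider(prompt):
    return f"answer to: {prompt}"

print(complete_with_fallback("Where is my order?",
                             [primary_provider, fallback_provider]))
```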
01:18:00
As always, you can get all the show notes, including the transcript for this episode, the video recording, any materials mentioned on the show, and the URLs for Piotr’s social media profiles, as well as my own, at www.superdatascience.com/743. Thanks to my colleagues at Nebula for supporting me while I create content like this Super Data Science episode for you. And thanks of course to Ivana, Mario, Natalie, Serg, Sylvia, Zara, and Kirill on the Super Data Science team for producing another sensational episode for us today.
01:18:27
For enabling that super team to create this free podcast for you, we’re very, very grateful indeed to our sponsors. You can support this show by checking out our sponsors’ links, which are in the show notes. And if you yourself are interested in sponsoring an episode, you can do that. You can get all the details on how by making your way to jonkrohn.com/podcast. Otherwise, please share, please review, please subscribe. Let your friends and colleagues know how much you love the Super Data Science Podcast. But if you don’t want to do any of that, that’s fine too. Most importantly, I just want you to keep on listening. I’m so grateful to have you listening, and I hope I can continue to make episodes you love for years and years to come. Until next time, keep on rocking it out there and I’m looking forward to enjoying another round of the Super Data Science Podcast with you very soon.