Kirill Eremenko: This is Episode number 353 with Founder of Designing for Analytics, Brian T. O’Neill.
Kirill Eremenko: Welcome to the SuperDataScience podcast. My name is Kirill Eremenko, Data Science Coach and Lifestyle Entrepreneur and each week we bring you inspiring people and ideas to help you build your successful career in data science. Thanks for being here today and now, let’s make the complex simple.
Kirill Eremenko: This episode is brought to you by my very own book, Confident Data Skills. This is not your average data science book; it's a holistic view of data science with lots of practical applications. All five steps of the data science process are covered, from asking the question, to data preparation, to analysis, to visualization and presentation. Plus you get career tips ranging from how to approach interviewers, get mentors, and master soft skills in the workplace.
Kirill Eremenko: This book contains over 18 case studies of real-world applications of data science. It covers algorithms such as random forests, k-nearest neighbors, Naive Bayes, logistic regression, k-means clustering, Thompson sampling, and more. However, the best part is yet to come. The best part is that this book has absolutely zero code. So how can a data science book have zero code? Well, easy. We focus on the intuition behind the data science algorithms so you actually understand them, so you get a feel for them. And on the practical side, you get plenty of case studies, plenty of examples of them being applied. The code is something you can pick up very easily once you understand how these things work. The benefit is that you don't have to sit in front of a computer to read this book. You can read it on a train, on a plane, on a park bench, in your bed before going to sleep. It's that simple, even though it covers very interesting and sometimes advanced topics at the same time.
Kirill Eremenko: Check this out, I’m very proud to announce that with dozens of five-star reviews on Amazon and Goodreads, this book is even used at UCSD, University of California San Diego, to teach one of their data science courses. So if you pick up Confident Data Skills, you’ll be in good company.
Kirill Eremenko: To sum up, if you’re looking for an exciting and thought provoking book on data science, you can get your copy of Confident Data Skills today on Amazon. It’s a purple book, it’s hard to miss, and once you get your copy on Amazon, make sure to head on over to www.confidentdataskills.com where you can redeem some additional bonuses and goodies just for buying the book. Make sure not to [inaudible 00:02:47] that step. It’s absolutely free. It’s included with your purchase of the book but you do need to let us know that you bought it. So once again, the book is called Confident Data Skills and the website is confidentdataskills.com. Thanks for checking it out and I’m sure you’ll enjoy.
Kirill Eremenko: Welcome back to the SuperDataScience podcast, everybody. Super excited to have you back here on the show. Today's episode is going to be very cool and interesting. All of our episodes are very cool, but today's is particularly interesting because we're approaching data science from a different perspective. To provide some context, let's try to answer this question: what are the outcomes that we're going for in this specific data science project? So you might be working on something, and how often do you ask yourself, what are the outcomes we're actually going for? Or, for instance, this question: how are we going to measure success in this data science piece of work, or project, or analytics tool, decision support system that you're building, model, insights? How are you going to measure success?
Kirill Eremenko: So the thing is that very often, we get caught up in lots of different things that comprise data science, from thinking about AI and data science strategy, to juggling the different components of IoT systems, to working on data preparation, building models, gathering insights, creating business decision support systems, and so on, visualizing our data, presenting on it. We get caught up in all these different things. But what we might get in the end is, in the words of Brian T. O'Neill, a result or insight that is technically right but effectively wrong. What does that mean? Or rather, why does that happen? Well, it can happen because along the way, we haven't been thinking about the end user, about their experience, about putting them at the center of everything. That's where human-centered design thinking comes in.
Kirill Eremenko: Brian T. O'Neill is an expert in the space of human-centered design, specifically for enabling decision making, human decision making to be precise, in data science. Human-centered design exists in other fields as well, but he has been bringing it into the space of data science and decision making for many years now. Basically, it means thinking about your customers all along the way.
Kirill Eremenko: That is a very powerful tool. It's a specific soft skill, but it's not just about presenting the insights; it's about thinking about your user throughout the whole journey, and in this podcast you'll get a lot of tips on how to do that. Brian T. O'Neill is a consultant in that space, and he's been doing it for many years. In this podcast, we will learn how to ask the right questions to understand the business needs and what is actually desired from a certain piece of work that you're doing, the seven steps he performs when he goes into companies to do his consulting work, and how to understand outputs versus outcomes, the consequences that come from your outputs.
Kirill Eremenko: So lots of interesting questions will be raised in this episode and I have a feeling if you apply the things that you learn here in your next data science project, you’ll see a different attitude from the people that you’re going to be presenting and delivering it to. They’ll have a much better experience and results from your project.
Kirill Eremenko: So there we go, that’s what this episode is all about. As usual, all of the links to connect with Brian T. O’Neill will be mentioned at the end of this episode, but I want to mention one already now. In case you don’t get to the end of the episode, but you want to learn more, Brian has set up a special page for us, thank you so much Brian. It’s designingforanalytics.com/superdatascience. You can learn more about his work there if you’d like.
Kirill Eremenko: On that note, let’s dive straight into it and let’s learn about human-centered design thinking in data science. Here we go. Without further ado, I bring to you Brian T. O’Neill, founder of designingforanalytics.com.
Kirill Eremenko: Welcome back to the SuperDataScience podcast, everybody. Super excited to have you back here on the show and today’s guest is Brian T. O’Neill calling in from Boston. Brian, how are you going today?
Brian T. O’Neill: I’m doing great. How’s it going?
Kirill Eremenko: Very, very good. I’m super excited about today’s show because we’re going to be talking about some really cool things, but first of all, you are a man of many activities and things that you do. So in addition to data science, you play the drums, right? Percussionist?
Brian T. O’Neill: That’s correct. I do do that.
Kirill Eremenko: That’s fantastic. Where can our listeners hear some of your work, because … we’ve chatted a bit about it before the podcast, but you play jazz mostly, is that right?
Brian T. O’Neill: Jazz, chamber music, orchestral music, Broadway shows, that type of thing. Little less on the rock pop music. Occasionally, some stuff like that, but yes, a lot of jazz, classical, world music.
Kirill Eremenko: It's not just a hobby, because, you know, I enjoy dabbling on the piano but I only know two pieces. You actually play professionally. You effectively have two lives, there and here. How do you combine that?
Brian T. O'Neill: Oh yeah. Well, that was my training. My formal training was in music, so I have a degree in percussion studies. I work as a freelance musician around Boston doing, as I was saying, a lot of classical work. I play with a lot of the Broadway theater shows that travel through town, in the pit orchestras for those. Occasionally some star attraction work; video game orchestras will come to town and pick up musicians. Then I run a group called Mr. Ho's Orchestrotica, which is spelled like it sounds, just orchestrotica.com if people are interested. That's more of what I call my startup. It's like running your own little business and promoting original music in the chamber, jazz, and global jazz kind of space.
Brian T. O'Neill: So yeah, I do that, and then I've been designing for the web since 1996, if you blur that out. About 25 years doing design. I started out as a web designer, gradually moved into the Boston startup dot-com scene, and then got into more enterprise stuff, Fidelity Investments and JPMorgan, working in banking and larger enterprise contexts. Then I got into some very nerdy IT-related software products. There are actually a lot of enterprise B2B companies here in the Boston area. When I went freelance in 2006, I started working for myself so I could balance my two careers. I just had clients that kept bringing me along to their next projects, and they tended to be at very technical companies, so products for other IT people, technical data products, that kind of thing.
Brian T. O'Neill: Analytics was simmering behind the scenes in all of these, and so that just became a focal point for me. About four years ago, I decided to specialize my consulting work in data products. My goal is really to help companies design innovative and engaging data products powered by data science and analytics, and to focus on that last mile, which is where humans interact with the stuff that all of these great, smart people are doing with data and math. At the end of the day, if there's a human in the loop in your system and you're not developing a fully automated solution, then they are a factor in the success of the work that gets done. Doing that well is a different skillset from the modeling piece, the training sets, getting the data cleaned up, and all those other things. You can get it technically right and effectively wrong, so I want to make sure that my clients and the people I train in my seminars are focused on that human last-mile piece: really understanding the problem space, understanding how humans are going to perceive the work that's being done, how they're going to understand the data.
Brian T. O'Neill: The visualization is part of this. We typically jump to that when we talk about design; we think of data visualization. I tend to think there's a layer, a perspective at a slightly higher elevation, which in the design world we call the user experience layer, and it sits above the interface. Because you can technically get the data visualization piece right, but if you don't have the right data to begin with and you don't understand the context of use, then it doesn't matter if the visualization is the best way possible to do it, whatever the heck that means. It doesn't matter, right? Because no one's logging in to use the service and to make decisions, you know …
Brian T. O'Neill: That's really what a lot of our work is about, right? It's really about decision support. So if we don't create decision support with these models and analytics services, then we're really not having an impact. We have to look beyond the ink and the data ink on the page and think about workflows: how do people do their jobs, what are they concerned about with this technology, do they understand it, what change management may be required there? That's how I see it, and one of the concerns I have … and jump in if I'm just babbling here, but it's the-
Kirill Eremenko: [inaudible 00:12:25] very interesting.
Brian T. O'Neill: Yeah, one of the reasons I know this is failing, and I keep hearing this repeated over and over, is what's sometimes called the operationalization of models. Non-software companies tend to talk about AI and predictive analytics in the context of improving an internal business, as opposed to creating a software product that has data science and analytics behind it. Those are two distinct branches, and they use words like operationalization and change management.
Brian T. O'Neill: From a design perspective, from a designer's lens, I don't like that framing, because I feel like what it means is that this team goes and does the technical part and spits out a spreadsheet, a visualization, a Tableau thing, a field in the CRM. There's some output, and then it's some other team's job to go in and make the business use that stuff. This is where I think things can break down, right? Because you've got two teams trying to do the right thing, but I think the more important question is: did we design the solution, the model, the software application, whatever the output medium is, with the engagement model in mind from the start, and look at that as part of the success of the overall data science work?
Brian T. O’Neill: It’s not a second thing. It’s not something that you pass off to another group. It’s integral to the work. Think of it as integral, not a deliverable that you pass to someone else to go shove down people’s throats. That’s not how you build stuff people want to use. Instead, you get them involved from the start, right? If it’s the sales team or it’s the CMO or the marketing department or whatever, they should be involved from the beginning of the project so there’s no big giant reveal. Like all of a sudden, the CEO is like, “Today, we’re making a big change. We’re releasing a new model to do X,” and jaws are hitting the floor and people are like, “F that. I’m not using that. I’m going to, I’m still calling the same … I’ve got my sales prospects, right? I’m not calling this list of sales prospects you came up with in the dark with some magic AI stuff. I don’t know what that’s about, but I know who’s going to buy this week and I’m going to call those people. That’s my job as a salesperson.”
Brian T. O'Neill: Well, right there, there's your six months of data science work down the toilet, right? Because this salesperson doesn't know how you came up with this list of strange customers they've never talked to, but you're saying, "Oh, they're going to close next week. They'll sign on the dotted line next week. And oh, by the way, here's what you should charge them. Here's the price quote that you should use," and the salesperson is like, "How did you come up with this number? Where does this come from? I have no idea what this is about." Because they weren't involved with the solutioning and the problem discovery, and there was no research done. That's where things can totally break down.
Brian T. O’Neill: So the designer lens is know these people are integral and we have to factor them in from the start and we’re all going to have a better time doing this work together because I’m sure … [inaudible 00:15:33] I’m sure you’ve had this experience, it’s more fun to work on stuff people want to use, right? Not stuff like, it just like-
Kirill Eremenko: For sure.
Brian T. O’Neill: You’re hoping this reveal, it’s like where’s the smiles? And instead it’s like, kind of quiet in the room and people are like, “What does that mean? What do I do with that number?” We don’t want to have those kinds of experiences. We want to deliver, like, “Yes, yes. When can I get more of that? Oh, could you also show me this? Does the model factor in this thing? Oh it does? Oh that’s awesome. I hate doing that work in spreadsheet.” That’s the kind of stuff we want to hear at the end.
Kirill Eremenko: Yeah, totally agree. It's interesting, because I was interviewing Stratos, one of our students, just yesterday on the podcast as well. He said that when he was applying for a data science job last year, at the interview for the job he ended up taking, one of the questions was related to exactly this, about soft skills: how he would present the data and the data science project, how he would talk to executives, how he would go about helping people understand the insights he's communicating. So it's exciting to see that companies are not only realizing this after the fact, once data science projects start to fail; they're doing it preemptively. They're hiring people who know what they're doing in terms of this, what you call it, operationalization, soft skills, and change management.
Kirill Eremenko: What I wanted to ask you is, walk us through the process. You've outlined how important it is; it's a totally critical part. What's the point of doing a project if nobody's going to end up using it? But once you go into a company and they need your help with this operationalization, or change management, or soft skills in data science, what are your typical steps? How do you identify the problems? Are they at the start, in the middle, or at the end of the data science project pipeline or work life cycle? And once you've identified the problems, what do you do about them?
Brian T. O'Neill: Well, the most popular response to this question by a consultant ever is, "It depends." There is a broad design process that I typically use, but that process is more like a shelf of ingredients. I may or may not use this ingredient with this particular pie this week with this client, or I may use a ton of it. Or I may start with the flour and add the water later, and the next time, the flour doesn't come in until way later in the process.
Brian T. O'Neill: One of the things clients need to understand when they're doing this type of work, creative work, discovery work to get into people's heads and understand the problem space, is that it's not highly analytical work, ironically. You're going to have to ping-pong back and forth to understand what's needed. So sometimes you need to get into the design itself in order to figure out what needs to be designed.
Brian T. O'Neill: Research is a big thing that's often missing in this space, and it can sound like this really expensive, long thing that takes forever to do. Nope. A lot of times what I'm talking about is having one-on-one conversations with the actual consumer of whatever it is, whoever's going to use this. You know, if we're talking about predictive analytics, whoever's going to use this predictive score, we need to figure out what is going to make them want, or not want, to use this from the beginning.
Brian T. O'Neill: You may even need to start with, well, we don't even know who's going to use this. So right there, we haven't even figured out who our team is, who the stakeholders in this project are, what their interests are, and what is going to make or break this. You may need to have a conversation with your senior-level stakeholders, because sometimes what can happen is you can't even get the time you're trying to… Say you're helping out the marketing department and they're like, "We don't have time to sit in your ideation sessions to go through this." Well, senior management needs to hear that. If the data science experts are the ones leading the process here, what management needs to hear is: look, we can build a model for anything if we have the right data for it.
Brian T. O'Neill: If you want us to build a decision support mechanism for the marketing department, the marketing people need to be involved in helping us understand the pain, the need, and how they're going to use this. If they're not, what you're going to do is pay my department $5 million over the next three months, and you're going to have a high-risk output at the end of it. So, do you want to take the chance that all the work we do hits the floor and is never used, or do you want to make sure the marketing people are saying, "Wow, we know how to stop advertising in the wrong spot. We know who to send our mailers to for the next campaign that's coming out. This is really helpful information."
Brian T. O'Neill: And if you want the latter, you've got to have those people involved at the right time. So, there needs to be a clear understanding of who our end users are, who our stakeholders are, and what success means for this output. Again, our visualization, our predictive model, whatever it's going to be: how will we measure its success at the end of the project, before we get into building anything, right? And what we may find is that you don't need a machine learning model for this product. Maybe the first version of something is like, you know what? Right now you're taking a wild-ass guess every time you decide which cohort of people we're going to send this campaign to, right?
Brian T. O'Neill: Well, what if we could simply tell you how many people opened the mail we sent last year, and compare it to these other metrics? It's just historical data, but we can at least get that going. Would that be an improvement? In one month, would that help you start making better decisions about where to spend marketing dollars? And if they said, "Yeah, that would actually be really great," well, now we have a way to start small, and we're really focused on that business outcome, right? Instead of focusing on building a model, we're focused on helping the marketing department know who we should send mailers to, and who we shouldn't, for the Spring 2021, whatever, shoe campaign, right?
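[Editor's note: The "start small" idea Brian describes, simple historical open rates per segment instead of a predictive model, can be sketched in a few lines of Python. The segment names and numbers below are invented for illustration and are not from the episode.]

```python
# Minimal sketch of a "no model yet" baseline: open rates per audience
# segment from last year's mail log. All data here is hypothetical.
from collections import defaultdict

# (segment, opened) pairs standing in for last year's send log.
sends = [
    ("students", True), ("students", False), ("students", True),
    ("retirees", False), ("retirees", False), ("retirees", True),
]

opens = defaultdict(int)
totals = defaultdict(int)
for segment, opened in sends:
    totals[segment] += 1
    opens[segment] += opened  # True counts as 1, False as 0

for segment in totals:
    rate = opens[segment] / totals[segment]
    print(f"{segment}: {rate:.0%} open rate")
```

Even a table this simple answers the marketer's question, "who should get the next mailer?", well before any pipeline or model exists.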
Brian T. O’Neill: So,
Kirill Eremenko: Yeah.
Brian T. O'Neill: So, that's part of it. There are several different steps. In my seminar, I have approximately seven different steps. You have your team building; then we have a stage of research and problem finding, or problem definition, which is where we get really crystal clear with our team about what problem we're trying to solve. Not what data science problem we're trying to solve, but what people problem we're trying to solve, and what business outcomes we're going for.
Brian T. O'Neill: We then move into starting a design brief, and this is where the question of ethics starts to come in. So we may be looking at: what are the second-order consequences of the work we're doing here? Where might we need to put checks in place so that the solutions we're building are ethical, useful, and usable, consciously thinking about this rather than just waiting for a story to hit the news that we don't want to hear about. So that's a factor in the process. From there, I'm a big fan of using a couple of tools from design called journey maps and service blueprints. Maybe you've heard about these before, but this is a visual way of talking through the customer's journey, where they are today and how they do their work today, plotting that out visually over time so we can understand what it's like to be this marketer who needs to send out things.
Brian T. O'Neill: How do you decide how to do that today? What process do you go through? Well, we collect these analytics from this tool, then we go into Tableau, then we look at, whatever, the CRM, and then I kind of take a guess based on what I think the market's doing. Anyhow, we map this thing out, and by understanding this customer journey and all the departments that may be involved, we can start to get a bigger picture of how our little, or maybe major, data science initiative fits into that workflow, and where it might hit the ground, right? Where is there a gap that may not be a data gap but an engagement gap, like trust?
Brian T. O'Neill: Maybe we find out the salespeople are on the road all the time. They're not going to open up a PDF, they're not going to open up Tableau in some desktop thing or whatever. They're only going to respond to text messages, whatever's on the screen in the little app they use. We need to provide them with really good recommendations on which door I should go knock on next if I'm selling widgets door-to-door, or something like that. We need to understand what it's like to be that person and do that job, so we have that context the entire time we're doing our work. So, the journey maps and service blueprints can help with that. The difference there is really whether you're talking about an external customer's experience, or about how a business process works internally. The service blueprint is for when you're building a model or something to improve operations inside a business. They talk about the front stage and the backstage; that's what the service blueprint version is.
Brian T. O’Neill: But they’re very similar. They look very similar. And then from there-
Kirill Eremenko: So, the service blueprint is internal?
Brian T. O'Neill: Yes. Again, think of it as the backstage, the behind-the-scenes. When you go to the Apple Store, it's thinking about the whole process of how they onboard you when you come in with a customer service issue on your iPhone, right? And then all of a sudden they walk away with your cracked iPhone screen. Well, behind the scenes there's a whole bunch of stuff happening. They're probably like: is it under warranty? No. Okay, if that's the process, then we do a quick five-minute check of the screen and see if we can do a hot swap. Nope. Okay. There's a whole process they go through there, but the customer doesn't see that, right? So that's more of the service blueprint version.
Brian T. O'Neill: So that may be applicable for your audience that's working as an employee inside a business, trying to improve the internals of that business. From there, yeah, another part of this is what I call the honeymooning and the onboarding. Again, we sometimes use this term operationalization, but I also like to think about what I call the honeymoon period, the period between when we make the announcement, or we "launch," launch or put into production, whatever the output of our analytics work is, and when it settles in. There's a period of time where it's new and it's different, and I call this the honeymoon. You may need to consciously put intentional effort into how you help with that transition, instead of relying heavily on training, because it's hard to get people to show up for that stuff. It may be something where we actually need to design the transition into the software.
Brian T. O'Neill: For example, how do we transition someone from the old way? We know that you used to use a spreadsheet here. Well, you can actually upload your spreadsheet here, and then we'll map it into our predictions and help you save some steps; maybe they would otherwise need to key in a bunch of data to get back recommendations. By understanding what the blockers and friction points are, we can smooth this transition out. And I'm sure you and your listeners have experienced clunky onboarding when you download a new app for your phone, and a lot of times they force you through a tour, and you're like skip, skip, skip, skip, skip. Just get me into the product, right?
Kirill Eremenko: Yeah.
Brian T. O’Neill: You kind of want the product to just be intuitive. You don’t want to read a bunch of screens about all this stuff it’s going to do, because you probably downloaded it because you have one thing you want to do, and now they want to tell you about 20 things that you need to do through a video or whatever. All that stuff is just in the way. But if you really understand how someone wants to use the service, and what their job is, or what they need to do, you can design that experience to gradually bring them into the new way of doing whatever that may be. So, from there you get into actually doing the sketching, algorithm design planning, getting visual with workflows. If there are visualizations that need to be presented here, then I like to work low-fidelity.
Brian T. O'Neill: So, I teach this idea of working lo-fi with a small team, your power team that we talked about in the first module, working at a whiteboard together, trying to get visual, and prototyping or simulating what our outputs might look like in low fidelity before we ever do any data work whatsoever. And again, partly what we're doing here is taking away the giant reveal. It shouldn't be: there's a black cloak, you walk into this dark room and then, bang, the lights go on, and here's the data science model. That is not how you want to release stuff.
Kirill Eremenko: It’s kind of the difference between Waterfall and Agile, I guess.
Brian T. O’Neill: Yeah, exactly. So, we want people nodding their heads as we move through this process because they know what we’re doing, they know why we’ve done it the way we’re doing it, and they’ve been involved throughout the process. So-
Kirill Eremenko: And they feel like owners as well.
Brian T. O’Neill: Yeah.
Kirill Eremenko: In the end the product is you’re presenting it to their bosses, they’ll be on your side helping you present it rather than on the opposite side.
Brian T. O'Neill: Exactly. Exactly. So yeah, there's this visual process, and we're doing this work in iteration with our team. And then, the last two, really the last formal phase here. And again, remember, by this point you may realize, wow, we don't even know what we're trying to do here. We haven't clarified the problem yet. We're sketching stuff, but we're realizing, by getting visual here, that our chief marketing officer, who has been on our team participating, still doesn't really understand how they're going to use this number that our model is going to come up with to do their work.
Brian T. O'Neill: We might need to go back to the drawing board, do some other research, or talk about the problem space more before we go any further, before we start doing a ton of work collecting data and building pipelines and all this stuff. You may ping-pong back and forth between these different stages before you move forward. But if we were to do this in the perfect theoretical way, step one through six in order, the last formal step would be validating the results.
Brian T. O'Neill: So, what does that mean in the context of a predictive model or something like this? Well, the easiest way to boil this down: let's say you're going to present a score from zero to 100, a probability that your model spits out. The CMO logs into this dashboard every day, and your model produces a score. On Tuesday, let's say the score is 67. Well, what are you going to do with that 67? And 67 as compared to what? You ask this person, well, what would you do with this 67, and let them talk about how they're going to react to this score.
Brian T. O’Neill: So, by presenting them a visual and having a conversation with them, we sometimes call this usability testing or design validation, we can start to tease out what might need to go into the engineering and the modeling. So, what you might hear is, “Well, 67 doesn’t feel very certain to me, but if I understood why it was 67, then I might know who to send my mailers to, right? But right now you just say it’s 67, and I don’t really know how you guys came up with that.” So, ding-ding, light goes on, right? We might need model interpretability here, right? We may need a way to show which features contributed most to that. And if they didn’t say that, and someone said, “You know what? Anything above an 80 I’m cool with that. I don’t really give a crap how you guys came up with it because it doesn’t matter. Anything above 80 is awesome. Anything below 50 I’m just going to totally ignore. I don’t care.”
Brian T. O’Neill: Well, at that point you may say, “Well, you know what? We can come up with a much better algorithm here that’s 92% accurate, if you don’t really need to know how we came up with this, and there’s no compliance issues,” or whatever. That can start to guide the technical decisions that are made in terms of how the actual data science part works. But guess what? You don’t need data to test this out. You may find out that, oh, with the 67 we need this kind of model interpretability, and in fact maybe we need a scoring system, qualitative ranges like, anything between 67 and 75 is a buy, anything between 52 and 67 is a hold. Red, yellow, green. Sometimes we talk about the traffic lights.
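The traffic-light idea above is, mechanically, just a mapping from the model's quantitative score to qualitative bands. Here is a minimal sketch using the thresholds mentioned in the conversation (52, 67, 75); the labels are illustrative, and the whole point is that the cut points come from talking to users, not from the model.

```python
# Hypothetical sketch: wrap a raw 0-100 model score in qualitative
# "traffic light" bands. Thresholds 52 / 67 / 75 echo the conversation;
# in practice they would be agreed on with the users of the score.

def band(score):
    """Map a 0-100 score to an illustrative (label, color) pair."""
    if score >= 75:
        return ("strong buy", "green")
    if score >= 67:
        return ("buy", "green")
    if score >= 52:
        return ("hold", "yellow")
    return ("ignore", "red")

print(band(67))   # the CMO's Tuesday score -> ("buy", "green")
print(band(40))   # -> ("ignore", "red")
```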
Brian T. O’Neill: The point here is you’re talking about putting a qualitative measure onto this quantitative score that came out, and the only way you’re going to know what those qualitative ranges should be is by having a conversation with your customers, with your users, to understand how they’re going to perceive this information and how they’re going to act on it. And you don’t actually have to build the entire model to know that. You can also start to tease out how accurate it needs to be, and this is something I hear about a fair amount: data scientists, especially young ones, want to do cool data science work. Some of the more academic ones want to publish papers about how accurate their models are, et cetera. They’re focused heavily on the accuracy of the model, and not so much on the “did someone use my model to make decisions?”
Brian T. O’Neill: In a business context, that’s what they care about. And what you may find out... I have a podcast episode about this; the title was something along the lines of, “When does a 60% accurate model beat an 85% accurate model?” And the joke is, well, it’s the one that actually gets used to make decisions. That’s what matters. And the reality was, this person, David Stevenson, was talking about a big light bulb moment for him. It was when he learned from his client, I forget if he was an employee or if he was consulting there, but he was spending all this time trying to get the accuracy up from 80 to 82% or 85%, and his business sponsor was like, “What the hell are you doing? This is so great. If you can tell me that this is 65% accurate, let’s go on to the next thing. I don’t care. I’ve made my decision. It’s a yes/no decision.”
Brian T. O’Neill: I forget the details, I’m kind of paraphrasing, and this could be wrong, but the point here was that it was more than accurate enough for some really great business value to be created. And spending twice as long to get a 5% increase in the quality of the prediction was not a good business decision whatsoever, because now you’re spending all your time doing work that the sponsor or the user doesn’t care about. It won’t make any difference in how that person does their job; whether it’s 80% or 85% accurate is completely meaningless. But if you never have those conversations with your stakeholders and your users, you’re never going to know that. You’re not going to know what the pain and gain looks like from their perspective. This empathy is what’s required for us to understand how to design really effective solutions. We have to put ourselves in their perspective, take off our technical hats, and look at things from the perspective of the person that’s consuming them.
Brian T. O’Neill: That’s what design is really about. It’s really about empathy and being able to put ourselves in their seat and their role, and relate to what that person’s job is like, and how they make decisions about things, and how do we slide our technology in there too to help with that.
Brian T. O’Neill: I know that was a long-winded explanation of the process, but it’s mushy, it’s gray, it’s not perfect, it’s not clean. It can be; we can put structure on it. That’s partly what we talk about in my seminar: yes, it’s supposed to be a little bit messy. We may need to ping pong back and forth. That’s the innovation space. But we’re also trying to fail fast here. We’re trying to learn quickly what’s working and what’s not without spending a ton of time building the wrong stuff. So, when you get that question, “What is our machine learning strategy?”, right there your alarm should be going off. This is a bad question. This question needs to be unpacked. And the reality is, your business sponsor, if you’re not at a software company, probably doesn’t understand what’s possible.
Brian T. O’Neill: So you may need to have a separate discussion about, well, what is AI? What is possible with these technologies? And realize together we need to have a better conversation about what business outcome we want. Yes, we will try to use machine learning if that’s the best thing possible for us. And if you really just want machine learning no matter what, then let’s talk about this in what I call lab mode. Let’s have a project where really what we’re doing here is we’re going to rehearsal and we’re practicing. We’re having a scrimmage.
Brian T. O’Neill: And if that’s the point, let’s take a really tiny project. There’s no expectation of business value. We’re just here to exercise our abilities, to see: can we collect data? Can we put the training data together? Can we test it? Can we deploy it? It’s really just to exercise our skillsets and perhaps to see where we need more talent. Is it visualization? Is it data engineering? Fine. But the point is you have a clear plan and a clear conversation to set expectations that the goal of this project is lab mode, to work on these skills, to see if in the future we actually have the skills to put AI to good use in a business context and produce some value.
Brian T. O’Neill: I don’t think that’s what’s happening. I think usually it’s like, go give us some cool (beep), excuse my French, go build some amazing thing with… hire some PhDs, and they’re going to come up with this magic sauce, and they’re going to give it to us at the end. And then, there’s this big disappointment, or the data scientists are saying, “Well, what is the problem you want me to work on?”
Brian T. O’Neill: And the business person is saying, “Well, what’s possible?” And they’re like, “Well, I don’t know, you’re the product manager. What would you like us to help you with?” And you can see there’s like a tennis game going back and forth. And my feeling is, and the people that I talk to in my show, it’s time for the data people to step up here and start to have a better understanding of the people that are going to use these solutions.
Brian T. O’Neill: And it’s not to say that business people don’t also have a responsibility to become more data literate, but I tend to think that the last straw in this game are the people that are writing the code and pushing this stuff out. It’s the data people. And so they are the linchpin in this.
Brian T. O’Neill: And I think that skill set needs to be developed, at least in part, by the data people. They need to learn how to ask good probing questions. They need to learn how to extract these needs from a stakeholder who may not understand what’s possible yet with these techniques, and to try to really guide that person to express their need more clearly so that your team, if you’re the data people, you can be assured that my work is not going to fall on the floor in six months. People are not going to be wondering what is the value of paying… And this is expensive, right? People are paying top dollar right now for this talent. But guess what? It’s going to change.
Brian T. O’Neill: The salaries are going to come down. Everyone’s jumping into this space and at some point there’s going to be a lot of people with “data science” in their title or claiming this, and who’s going to be left standing are the ones that can actually turn data science into value and outcomes. And that requires a different skillset. It’s not Python, it’s not R, it’s not Kubernetes, it’s not all that technical stuff. That’s part of it. But there’s another part of it here if you really want to connect it to the people.
Brian T. O’Neill: So, anyhow. I’m blabbering. You asked me some questions, but I’m hoping this is helpful to your listeners.
Kirill Eremenko: Very helpful. I’m listening, soaking it all in. Very interesting insights.
Kirill Eremenko: What I think would be very helpful for our audience is that the concepts you identified are fantastic. From team-building to design briefs, journey maps, service blueprints, honeymooning, sketching, algorithm design, validation, usability testing, very useful tips.
Kirill Eremenko: However, it sounds like that’s something that’s good for anybody to be aware of, but mostly a framework for a consultant who goes in to analyze a business, or maybe for a business leader or manager working with their stakeholders around data products.
Kirill Eremenko: The question that I’d love to get your opinion on is, what can an individual contributor do, an IC data scientist, somebody who’s there building the code? They’re not the only data scientist in the whole company, where they would obviously have the mandate to apply this framework of certain steps; they’re part of a bigger team. Maybe there’s a hundred people in this team, maybe there’s five or 20 people. They’re not the manager, they’re not the leader. They’re part of this bigger team, so they don’t have this control or say over what’s going to happen and what’s not going to happen. They’re just doing their job. How can they be better at design thinking?
Brian T. O’Neill: Sure. It’s a fair question. And ultimately these are strategic questions. They do come down to that. I think the way to think about this is by being objective and asking good questions. For example, how are we going to know that we did a good job with this project? If you’re having questions about the work you’re doing and you feel like, “God, this project is going off the rails,” well maybe it’s time to get your team together and just have an informal conversation and say, “It would really help me, could we come up with just five bullets that are going to dictate the success of this project?”
Brian T. O’Neill: How would we know? Not technically, not anything to do with the data, but at some point we’re going to present something to somebody, right? They’re going to consume this and they’re going to make a decision about whether it’s good, bad, okay, excellent, whatever. How is that going to happen?
Brian T. O’Neill: And if there’s silence in the room or it’s really mushy, then you can express that. Should we spend some time clarifying this so that we can make sure that we really hit a home run here? To use a baseball analogy, are we hitting a home run or a base hit, here? Well, if no one can tell us the difference between what a base hit and a home run is, then what do you think the chance of us hitting a home run is? It’s probably pretty low.
Brian T. O’Neill: It sounds really simple here, but you might get responses like, “We will impact the business, we will use AI in the CRM.” Well, we could probably find an off-the-shelf thing, create a field in the CRM, shove a data point into it and say, “We created AI in the CRM.” Whatever, right?
Brian T. O’Neill: That’s not clear enough for us to actually be actionable here and produce value. And I think you really should ask these questions with your team. It’s not meant to challenge anybody; it’s in pursuit of clarity for the team, so that we don’t create a data output that falls on the floor.
Brian T. O’Neill: And part of this is how you ask the questions. I have articles on my site about how you do this kind of research, but a lot of this really comes down to asking really good open-ended questions, listening, and trying to facilitate the conversation so that everyone sees why we’re asking these questions. But I think that’s one way to do it: simply have a question about what are the outcomes we’re going for here, and how will we measure that these were effective? I hate to say it, but it’s rare that I see that, because most of the time employees are compensated for inputs and effort. You trade your time, we pay you a salary every month to come in and use your data science skills, and really they’re paying for time.
Brian T. O’Neill: Even though you’re still going to get paid for your time, this model comes from a different one, which is, “What if our compensation was based on the results that we created, the outcomes and the value that we produce?” And I think part of the reason we don’t hear this question as much is partially because of the way most companies compensate their employees. And I’m not suggesting we change that or anything, but that’s, I think, part of the reason these questions don’t always come up. Instead we just kind of wait for a boss to tell us, “This is the next thing we’re doing. Here’s the project, here’s where the data are, build some connectors. We’re going to need to clean up this X, Y, and Z, dah, dah, dah, dah.” And you kind of just go in and do the work.
Brian T. O’Neill: At some point in your career, if you’re a junior, you’re probably going to go either down the expert/contributor path or into management. But in either of those two cases, the more you can start to realize: I’m not really being paid here for programming, that’s not really what they want. The reason why someone is funding my team is they want the value that our programming theoretically can produce for the business. And if you don’t know what that value is, it’s going to be a lot harder for you to be seen as a great contributor.
Brian T. O’Neill: And so trying to align your work with that bigger picture, that’s one way to really start to connect the dots here. It’s to say, “You know what? Yes, I know how to do the model. Yes, we have the data here. But my concern is, when we talked to so-and-so, the CMO, they said they’re not going to use this. If they can’t understand how we came up with this prediction, they’re not going to be able to use it. They can’t take the risk, and they’re going to keep the status quo.”
Brian T. O’Neill: So when your boss is saying, “Look, we can get 95% accuracy by using this deep learning model, blah, blah, blah,” then you can say, “Well, that’s fine. I’m with you if that’s what you want to do. But didn’t we hear so-and-so say they’re not going to use this? Do we want to maybe try a first version here, maybe a little bit less accurate, but we know this person’s going to use it because we can prove to them how the model came up with these recommendations? Maybe that’s where we should go first, and then we can see if we should make it more accurate.” You can try to have these conversations, and I know that’s tough.
Brian T. O’Neill: Sometimes it’s hard to have these conversations when you’re early in your career, but I would challenge your listeners: you’re not really there to write code. What a line of business really wants is the value that your code produces. It’s not really the code and the modeling and all the stuff that you learned in school; that’s not really what they want. It’s the output. It’s the outcome from your outputs.
Brian T. O’Neill: So if you always have that kind of lens in your mind and you can connect it to the people who are going to consume those outputs, you’re probably going to be more successful in your career in general.
Kirill Eremenko: Wow. That’s golden. I think if people follow that advice, they’ll be twice as successful already.
Brian T. O’Neill: Yeah.
Kirill Eremenko: Indeed.
Brian T. O’Neill: Sure.
Kirill Eremenko: It’s the output. As you said, the outcome of the output.
Brian T. O’Neill: Yeah.
Kirill Eremenko: That matters.
Brian T. O’Neill: Can I give you an example of this? Like real quick.
Kirill Eremenko: Sure.
Brian T. O’Neill: This is my learning moment for your listeners. I worked at Lycos. If you remember, 20 years ago Yahoo was the big search engine before Google, and Lycos and AltaVista were competitors. And I worked at Lycos as a designer, and we each belonged to different verticals. I focused a lot on the financial services products and some online trading platforms and things like this.
Brian T. O’Neill: I remember one day when I was designing one of the stock research pages for Lycos Finance or whatever it was, and I was talking to the product manager about the ad placements. For years, like most designers, I hated the ads. If you’re working in a media company where advertising is the model, you have to find these slots to put banner ads on the page, and you hate it and all this kind of stuff.
Brian T. O’Neill: And it didn’t click until this moment, this conversation with him, that, wait a second, these ads are what fund my salary here to be a designer and to do the work that I love to do. So what if I changed my perspective to: look, no one really likes looking at ads. We know the customers hate that, but at some point this funds the business. So what if I could use my creative energy to figure out how to fit ads into the experience in a way that’s not so annoying? Maybe it’s a little bit smoother, or maybe there’s an ad at a surprising place where it actually has some interesting context for the customer. Something like that.
Brian T. O’Neill: I looked at it more like, I actually want to help my team produce more advertising revenue, because that funds my salary, and that’s really what they’re asking for. And it was just this light that kind of went on for me. So I stopped fighting it and I realized it’s never going to go away. This is a media company. At the time they were looking at subscription businesses and other models, but at the time it was a media company, which meant advertising.
Brian T. O’Neill: And so I actually started coming up with some other advertising products that we could go out and sell, ones that kind of stayed out of the way of the UX, which was my job: to make really great user experiences with these interfaces. But also, I started to think about what would be some other ways we could sell creative advertising, because that’s really what it was about.
Brian T. O’Neill: But I was fighting it constantly. And part of that, you want that yin and yang, right? You kind of want some of that in the business, which is, you have your kind of purist designers. Designers can relate to this; they’re always going to go for simplicity and usability. And sometimes you may want to encourage people to opt into a form, right? To provide some more data. Well, maybe there’s a creative way to collect that data that is both transparent and ethical, but perhaps it’s fun. Maybe you turn it into a game instead of asking someone to fill out a survey. We put that into a game context, but we realize that we actually do need to collect this information. How can we do that in the interest of the business and the customer?
Brian T. O’Neill: It’s not just about the user experience piece. It’s about the business value that we’re creating, too. That was kind of the moment when the light went on for me in the advertising context. But I’m sure your listeners can probably find a way that they can start to see, “Wow, we’re going to help salespeople know who to call instead of just opening the CRM and smile and dial.” Right? What if we could tell them, “Here are the next 20 people. Based on all the data we have, we think these 20 people are most likely to sign on the dotted line within the next two months.”
Brian T. O’Neill: Put yourself in their shoes. What is it like to be a salesperson? Start to realize that’s really what the business hired you and your team for: to help these salespeople know who to call, so they spend less time calling the wrong people. That’s what you’re there for. Not Python. Not R.
Kirill Eremenko: Gotcha. Yeah. Wow, okay. So in a nutshell, keep in mind what you’re there for: you’re there for the outputs and the outcomes that come from your outputs, not for your inputs. And also, empathy. I love that you mentioned that, really understanding and sitting down. This has really helped me many times. Sitting down with the people that I’m creating a model for, or doing some analytics for, even just living with them through their whole working day. Understanding what they experience, what they feel throughout the day, really helps inform what I need to do.
Brian T. O’Neill: One other comment on this is, I’m going to totally cast a generalization, there’s lots of different people out there, but I’m going to say generally speaking, people with STEM backgrounds tend to be a little bit more introverted. They may find some of these kinds of research discussions a little bit uncomfortable. And here’s the great thing about doing good research: your primary job is to listen, it’s not to talk.
Brian T. O’Neill: So if you’re not comfortable doing this, really your job is to come up with some good questions, we call them open ended questions, which means questions that generally don’t start with the word “do,” because we don’t want questions to end with “yes” or “no.” We want to ask, “Tell me about X. Tell me about how you decide who to call when you’re on the hook for your sales numbers. How do you decide who should get what offer?”
Brian T. O’Neill: Ask open-ended questions here, and just listen. This is a way to kind of get comfortable with this process, where you don’t feel the need to talk like I am right now. I’m babbling, but it’s really about listening.
Kirill Eremenko: Gotcha. Totally agreed. Brian, you have quite a few things that you’re doing at the same time. Of course in addition to your music, you do consulting for companies, you run seminars, you are about to launch a course, which is very exciting, and you’ve got a podcast of your own. First of all, I want everybody to know that Brian’s podcast sounds amazing. It’s called Experiencing Data With Brian T. O’Neill. Check it out on iTunes. Congrats, Brian. You’ve done what, like a year now of the podcast?
Brian T. O’Neill: Yeah, right. Episode 34, I think, comes out tomorrow actually, based on when we’re recording this. So it’s been good. Yeah. We drop every two weeks.
Kirill Eremenko: Fantastic, really cool. So the episode you were talking about earlier is Episode 24, How Empathy Can Reveal a 60% Accurate Data Science Solution. So everybody, you’re on this podcast, you’re listening to this, this means you love podcasts already. Check out Experiencing Data With Brian T. O’Neill. I think you’re going to love it. And Brian, I wanted to ask you, what do you teach? You’ve already shared quite a lot of things on the podcast today. What is it that you teach, any additional insights you can provide from the seminars that you run? What are the discussions around there? I’m just curious what other themes exist in this space of using human-centric design and data science?
Brian T. O’Neill: Are you asking what’s in a seminar or a course? Is that what you’re asking?
Kirill Eremenko: Yeah, typically. What’s the news online [crosstalk 00:00:54:30]?
Brian T. O’Neill: Sure, sure. I’ll give you an idea of the self-guided video course, which I just put up. The curriculum is that process we talked about, those six or seven steps. For each module, I think there are seven, there’s a short video where I kind of talk about the key concepts, and then there’s a written module that goes with that. And it’s really focused on doing the work. It’s not “read a book, digest 10% of it, and 90% goes out the door.”
Brian T. O’Neill: Most adults learn by doing. And so what I really tried to do with this course is provide actionable steps for each module. What do I literally go out and do if I’m doing this work in my own organization? What do I go and do to put this into action?
Brian T. O’Neill: When we talk about getting together with your team, well, what does that mean, literally? So that’s what I have. The video is kind of an overview for the module, and then there are step-by-step activities. And then I link to examples when it’s relevant; I try to provide some examples there. One of the ways it’s different is that the course is called Designing Human-Centric Data Products. It’s loosely based on design thinking, which you’ve probably heard of before, but what I felt was missing was this lens on data products.
Brian T. O’Neill: So each module specifically talks about what is different in the context of data products. When I’m working with AI or probabilistic types of software applications, what are some of the considerations that are different? This is still a very new space. But generally speaking, I would say right now, a good 70% of the process is the same and 70% of the meat-and-potatoes of doing good design work is the same.
Brian T. O’Neill: Whether or not it’s a machine-learning model or just descriptive analytics or some other technique, most of it’s the same, but there are other considerations we need to add on when we’re talking about probabilistic models. And so each module has a specific call out about what are the considerations here if I’m building a predictive model or something like that. So that’s the course.
Brian T. O’Neill: And then there’s an instructor-led online seminar version, which is the same modules. The only difference is I release two modules per week, and then we have a call together with a cohort of people that are in Slack. So this is the doing it with other people, which partially helps keep you on track and makes sure that you actually go and do the work.
Brian T. O’Neill: And some people like to work alone, other people want to kind of have a cohort of people to go through it with and hopefully learn from each other, and so we have live Q&As. On Mondays we release the new modules and have a discussion about those, and then on Fridays we do a check-in. It’s actually spread out over four weeks, but it’s not four weeks of 40 hours a week.
Brian T. O’Neill: It’s very much designed so that what you put into it is what you get out of it, and the goal here is to give you time to actually go and do some of this work, and some of it takes time. It takes time to set up. Like, I want to go do a ride-along interview with my salesperson. I don’t know how they do their work, but I understand that I need to go understand a day in the life of the salesperson before I do this. Well, it could take a few days to set that up, and so I intentionally spread that seminar out over four weeks, so that there’s time for people to put this into play and then get feedback from me on it. So that’s the seminar and that’s how the training works.
Kirill Eremenko: That’s very cool. Like you said, you provide hands-on exercises for people to do in their jobs. Can you give us an example of a simple hands-on exercise somebody could go and do at their work tomorrow, to actually experience that feeling?
Brian T. O’Neill: Well, in terms of literally spelling out how to do it, I don’t know if I could do that briefly on the podcast. But if you’re talking about what some of the types of activities are, take the journey mapping and service blueprinting we talked about: who’s involved? Who do I need to bring to a session to do that? How do I set it up? In this case I actually provide a visual template that you can use to get going with this, but we talk about literally setting up the room and who needs to be involved. What is the goal of doing a journey map, and what do you do with the thing? Okay, I have the map. Now what?
Brian T. O’Neill: That’s what we go through in the course: literally, what is this for? What is the value of it? What do I do with it in order to move forward, and when do I do this? When is it important for me to use this particular tool? Because again, you may not need this tool, or it may be too late for it at the stage of the product that you’re in. So that’s another aspect of this: you don’t have to do every single module all the time on every project. What I want to do is give you seven kind of core areas, and this is not all of design. It’s just like algorithms, or different modeling methods.
Brian T. O’Neill: You don’t necessarily use all of them all the time on every project, but I wanted to give people seven kind of core areas that they can go deep on. Six months later you may say, “Oh, wow! I remember we did something with usability studies and we have a lot of screens to show people. I’m going to go dig out that module on testing that Brian had, and then I can use that on this project, because we’re doing lots of visuals,” or something like that, so …
Kirill Eremenko: Okay. Got you. What I love about online teaching is success stories, where people have applied what I wanted to convey, and they’ve gotten a job or a promotion or some kind of success. Any cool success story you can share about somebody who wasn’t using design thinking, and then through listening to you on the podcast, or taking your course, or somehow interacting with your work they decided, “I’m going to try design thinking,” and that completely changed their career? Anything inspiring like that, that you can share?
Brian T. O’Neill: I can’t, for the seminar and the course, because the course is just about to come out and the seminar is actually really new. I’m hoping to put some testimonials up soon. I’m in the process of gathering feedback from my first cohort of students that went through this, so hopefully you’ll be able to see some of those results from the seminar and the course on my website in the near future.
Brian T. O’Neill: So I can’t specifically point to that, but here’s one thing I got from one of the people in the course. This person is at an AI consulting firm, technically as an account and project manager, but what she realized was that a lot of the challenges they were having, they were working with a pharma company, were in this kind of user experience space. The client kept saying, “Well, just build some stuff and we’ll figure out whether it’s good later,” and she could smell that this could be really risky for them, because the client was happy with the work as long as they were doing stuff, as long as they were building pipelines and showing data, but the client couldn’t give a clear expression of how this information was going to be used to make decisions.
Brian T. O’Neill: And she wasn’t sure. She could see the team was struggling with this, and they had had some difficult conversations with their client. By taking the course, she said, “I feel a lot more armed now about what tool I use in this toolbox based on the current client situation we’re having. When it’s time to test this, I actually know how to test the results of our visualization or the application that we built. I know how to go do that with them now, so that they can see whether or not the work we did was useful, and then we can learn from that.” Before, it was very much “does the client say they’re happy or not?”
Brian T. O’Neill: Well, the client may look at something and say, “I’m happy. That looks really nice,” but if you didn’t actually talk to them about whether it works well, if no one can measure how it’s supposed to work, then you might just be happy with the way it looks on the surface, while underneath the covers it’s not actually producing that value. And now she knows how to go and have that conversation with the client and use that toolkit. So that was one of the big things for me, that she had gotten that out of it.
Brian T. O’Neill: Another student was actually a managing director at a big supply-chain-finance-related company, and he was trying to figure out, “How do I bring in some of our IP and our analysts’ work that we do on every single project? We have some IP here. We want to productize this into an application, so that we’re not spending as much time doing the same types of manual tooling work.” He just didn’t have a framework for how you get from ad hoc presentations of work to a software application that expresses this information in a routine way. He was really struggling with that, and so he feels that now he has a much better idea of what that process looks like to get to that UI, which will help him spend more time doing higher-value work for his clients.
Kirill Eremenko: Amazing. Those are really, really cool examples of this stuff in action, and from quite senior people as well, so you should give out certificates for your trainings, because this would be so valuable. It is already so valuable, but imagine somebody, I can just see somebody coming to an interview and being asked, “Okay, this is a data science interview. What else do you know?”
Kirill Eremenko: “Python, R,” and normally people are like, “I know SQL, I know this. I know Tableau. I know Kubernetes. I know TensorFlow 2.0,” whatever else, and that’s all great. But imagine naming two, three, four, five, whatever technical tools, and then in addition saying, “Plus I did a whole training on design-centered thinking in the space of analytics, data science and AI, and this is what I know. This is the framework I apply. This is my awareness of the situation. This is how confident I am about dealing with internal and external stakeholders,” and boom!
Kirill Eremenko: You just blow them away. Nobody ever says that at interviews. Whether you’re looking for a job, looking to get better at your company, to get a promotion, or to grow your existing business, at your annual review, whatever it is, or talking to your manager at your next one-on-one, you start mentioning these things, discussing these things. You’ll be the first person in the whole AI team talking about design-centered thinking.
Kirill Eremenko: I think this is a great addition to anybody’s career, very exciting. I’m glad, and it’s very, very cool, that you decided to go from music into this, and that on top of that you’re finding time to teach this and spread this knowledge. I think that’s very, very cool. And then there’s this podcast, with the interesting guests that you’re interviewing. Hats off to you for the amazing contributions you’re making to this-
Brian T. O’Neill: Great. I appreciate that. As for the certificate thing, someone asked me about this and I told her, “Look, this is the real world. It’s not school, and school’s about getting a grade.” If you look at why school is the way it is, school was designed, from what I understand, listening to a podcast on this, basically to optimize factory workers, right? You process them, you train them. If they don’t pass, you send them back until they’ve learned the skill, then you move them up the ladder, and you use quantitative testing to figure out whether or not they pass.
Brian T. O’Neill: That is not the world of business. That’s not what we’re there for. It’s not to say I know more Python functions by memory than you do. That’s easy stuff to measure, right? The biggest reward I think you could get out of taking my course or seminar, and the thing that would be the biggest thanks for me, is when your profile, when your résumé, starts to talk about the results of the work that you have created with your team, right? When it says, “I helped the business save $2 million a month by building a model that did X,” that is going to make you stand out at your next job, versus when you say, “I know R, Python, Kubernetes. I’m certified in Microsoft Cloud, whatever, blah-blah-blah.”
Brian T. O’Neill: Well, guess what? You look like all the other people that are now coming into this field, except for the number: I have seven years of this instead of six. If you want to command a higher salary, it’s when you can say, “Look, yeah, I don’t know Python as well as this other guy, but did they help create a $2 million savings with their data science work? Do they know how to go talk to a stakeholder who says, ‘Give me AI,’ when they have no idea what they want? I actually can go in there and help you figure out what they mean when they say they want AI, and what we should really spend our precious data science dollars on.”
Brian T. O’Neill: When you can have that conversation, and I think my course will help people learn how to do that, that’s a good thank-you to me, and that’s going to speak way louder than any certificate with my logo on it. I appreciate the gesture, and I know where you’re going with it, but that’s really what’s going to make a bigger difference for you as a data science practitioner: being able to really show the results and the outcomes that you have helped produce. And it’s hard to do this. Sometimes it’s hard to measure it and really track it back to exactly your work, I get it, but at least have that in mind with the work you’re doing, and you’ll probably find you have a great career ahead of you, I think.
Kirill Eremenko: Fantastic, Brian. Well, that’s a great note to end this episode on. I think everybody’s gotten what they needed out of this and much, much more. Before I let you go, what’s the best place to find you? Where can our listeners contact you or get in touch, or just follow your career and follow the things that you share?
Brian T. O’Neill: Sure. Yeah. I’d say if you go to my website, designingforanalytics.com, that’s probably the best place. I have a mailing list, so if you want to just keep track of what I’m doing, I do send out little insight articles every week and updates on the podcast, so each time we release an episode that gets sent out to the list. I’m also pretty active on LinkedIn. On Twitter, my handle is rhythmspice, R-H-Y-T-H-M spice.
Brian T. O’Neill: I’m not super active there, so I’d say hop on the list if you’re interested in following my work. I offer little deals sometimes, especially when I’m putting out a new offering, a new training or something like that. Usually, in the spirit of doing MVPs, right, I may contact you to do a little research, like, “Is this a useful service?” and then offer coupons and discounts and things like that to my subscribers, so yeah, that’s probably the best place.
Kirill Eremenko: Fantastic. Thank you, so once again, the website is designingforanalytics, all one word, .com. Definitely check it out, and the podcast is called Experiencing Data with Brian T. O’Neill. Brian, is LinkedIn a good place for people to connect?
Brian T. O’Neill: Yes, that’s a great place to connect as well.
Kirill Eremenko: Awesome. Great, so make sure to connect with Brian on LinkedIn. Okay, great, and one more question I have for you today: what’s a book you can recommend to our listeners? I’m sure you have something special prepared.
Brian T. O’Neill: Yes. Well, there are two books. I would say they’re both a little bit more on the business side, but are you familiar with Karim Lakhani, at-
Kirill Eremenko: No.
Brian T. O’Neill: He’s at Harvard Business School and he just wrote a text called Competing in the Age of AI. I actually went to the book release, I live close to Harvard here, and I’m really just about 20% of the way through it, but if you want to start to understand how AI is really changing the business landscape, and maybe start to feel like, “Oh, I can see how I fit into this,” I think that’s a good text for looking, at a high level, at how your business stakeholders are seeing it. It’s not a technical book whatsoever.
Brian T. O’Neill: The other text that I’m in the middle of reading that I’m enjoying is called Infonomics. I don’t know if you know Doug Laney, but this is a Gartner book and it really talks about how to monetize, manage and measure information, so it’s looking at data as an asset instead of this kind of like sawdust, right? It’s not sawdust. That dust has a lot of value, and so what do we do with it though, right? How do we create products with it? How do we improve products and services using data? And so Doug’s got a book there and I’m really interested in finishing that, so those are the two things that I’m reading right now.
Kirill Eremenko: Fantastic. Thanks. Thanks for your recommendation, so Competing in the Age of AI and Infonomics.
Brian T. O’Neill: Yes.
Kirill Eremenko: On that note, Brian, thanks so much for coming on the show. I really enjoyed our chat and I learned a ton from it, and I’m sure our listeners will pick up great things from it as well. Thank you so much.
Brian T. O’Neill: Awesome. Yeah, it’s been a pleasure to chat with you.
Kirill Eremenko: So there you have it, everybody. That was Brian T. O’Neill on human-centered design thinking for enabling decision making in data science. How exciting was that? Lots of valuable insights, which you can apply in your career right away. Now, what was my favorite part?
Kirill Eremenko: My favorite part was the concept of thinking about your end user throughout the whole process. Something I’ve talked about before quite a lot is that the most in-demand data scientists are the ones who can connect insights to end users, and that’s the last stage of the data science project lifecycle: the visualization, the presentation, the communication of insights. But Brian takes it a step further.
Kirill Eremenko: He says that you need to be thinking about your end user not in the very end of your project, which is important, which indeed already sets your part, but he’s saying think about your user throughout your whole project. From the moment you ask the questions, to then preparing your data, to then building your model, to then visualizing and presenting it, the whole five steps you need to be thinking about your user. That’s what human-centered design thinking is all about in data science. It doesn’t matter if you’re creating a data science product, or you’re building a model, or just delivering an insight, or a decision support application, whatever it is, think about the end user.
Kirill Eremenko: I think it’s a skill. It’s a skill, it’s an art, something that needs to be learned and practiced, and hopefully now after this podcast everybody will be a little bit more inspired to practice it. And as mentioned throughout the podcast, you can find Brian at designingforanalytics.com, so if you’re a business and you want to engage Brian to look at your analytics products, then head on over to designingforanalytics.com/superdatascience. That’s a way to get in touch with him, and we don’t have any affiliate arrangement with him. That’s just a nice link that he set up for our listeners.
Kirill Eremenko: On the other hand, if you are an individual contributor in the space of data science, if you’re a user or a data scientist basically, the things on his website that will benefit you most are his podcast, which is called Experiencing Data with Brian T. O’Neill, and since you’re listening to this podcast you already like podcasts, so check that out. Then there’s his seminar, which is an online seminar, and his course, which he just published recently or is publishing in the coming days. Check those out as well if you want to learn more about design thinking in data science. So there we go, that’s where you can find Brian T. O’Neill.
Kirill Eremenko: Of course, you can connect with him on LinkedIn as well, and all of these links plus all the materials that we mentioned throughout this podcast will be available as usual at www.superdatascience.com/353. That’s www.superdatascience.com/353, and another exciting piece of news is Brian T. O’Neill’s coming to DataScienceGO, so if you haven’t booked your tickets yet, this is the DataScienceGO US version, United States, in October 2020.
Kirill Eremenko: Brian T. O’Neill will be presenting there. We’re getting him to come all the way from Boston, so if you haven’t gotten your tickets yet head on over to datasciencego.com, get your tickets today, lock them in and we will see you there. You’ll see Brian T. O’Neill and lots of other exciting speakers, so there we go. That’s the end of today’s podcast. Thank you so much for being here today and I look forward to seeing you back here next time, and until then, happy analyzing.