SDS 425: The Past, Present, and Future of AI Services

Podcast Guest: Rama Akkiraju

December 9, 2020

This was a great conversation with Rama. Rama has been at IBM for 23 years and has worked on a plethora of projects, including Watson. We covered unstructured data, Watson, the skills needed to consume AI services, time to value, accuracy, the life cycle of AI models, and a lot more!

 

About Rama Akkiraju
Rama Akkiraju is an IBM Fellow, IBM Master Inventor and IBM Academy Member. Presently, Rama is the CTO of AI Operations, which aims to optimize IT operations management processes with AI infusion. Rama was named one of the ‘Top 20 Women in AI Research’ by Forbes magazine in May 2017 and was featured in the ‘A-Team in AI’ by Fortune magazine in July 2018. In addition, Rama was named one of the ‘Top 10 pioneering women in AI and Machine Learning’ by Enterprise Management 360 in April 2019. Rama is also the recipient of the University of California, Berkeley’s Athena Award for Technical and Executive Leadership for 2020. She is the recipient of four best paper awards in the AI and Operations Research areas, as well as multiple IBM Technical Awards. Rama served as the President of ISSIP, a Service Science professional society, for 2018 and continues to actively drive AI projects through this professional society.
Overview
Rama has spent over two decades at IBM, from before data science was a recognized discipline through the data science revolution. She notes that analytics was always part of the job, even before data science had a name. The “floodgate” moment for data science in this field, she says, was Watson and its success on Jeopardy against human players. Watson’s strength is its ability to move through unstructured data at a speed no human possibly could, and beyond the PR accomplishment of Jeopardy it has been used to diagnose medical cases that human doctors were unable to.
Rama’s work with Watson involved modeling people—personality traits, emotional sentiments, and conversational tones—to personalize chatbot experiences. She also works on social media modeling, drawing insights from unstructured social media data. The IBM Watson platform is open for any company to use as an AI service, for chatbot applications, speech-to-text applications, and more. There is a foundational set of services, and more specific AI solutions can be built on top of them for company-specific use cases. One example is IT management, which might use AI for scaling, software development processes, software deployment, and more. Rama looks at the skills needed for AI consumption as a whole, meaning the same core skills apply no matter which service you’re using. Most importantly, whoever builds the AI needs to know who it was made for and why — clarity of purpose for the data scientists training the model. Taking in, analyzing, and integrating user feedback is also essential to keep models on track.
When it comes to AI deployment, we discussed why companies don’t reach the ROI they’re after. The first issue is not understanding what AI can actually deliver: there is a lot of hype around what AI can do, and when companies actually deploy systems, they realize reality falls short of the hype. Transparency is also important to ensure solutions can scale. Have a plan to de-bias models and make explainability as robust as it can be. There are now mature AI platforms, such as OpenScale at IBM, with built-in tools to tackle apparent biases in models. You need all of this to deploy an AI model confidently.
As for the future, the benefit of AI is already clear in many companies and domains. If the domain for AI is narrowed, to tasks such as optimization, decision support, and automation, the chance of success is higher. Personalization is the next wave: making sure the services you’re using are tailored to your specific scenario. We can envision moving into AI-led, human-guided scenarios to reach personalized, customized interaction with AI systems. It’s about getting specific, defining goals, being truthful, and having a robust system you can trust.
In this episode you will learn:
  • 23 years at IBM, before and after data science [6:11]
  • IBM Watson and AI services [12:25]
  • Skills to utilize AI services [25:02]
  • How to achieve significant ROI on AI deployment [41:31]
  • What does the AI future look like to Rama? [52:41]
  • Ethics and the benefits of AI [1:04:37] 

Podcast Transcript

Kirill Eremenko: 00:00:00

This is episode number 425 with IBM Fellow, Rama Akkiraju. 
Kirill Eremenko: 00:00:12
Welcome to the SuperDataScience podcast. My name is Kirill Eremenko, Data Science Coach and Lifestyle Entrepreneur. And each week we bring you inspiring people and ideas to help you build your successful career in data science. Thanks for being here today, and now let’s make the complex simple. 
Kirill Eremenko: 00:00:44
Welcome back to the SuperDataScience podcast everybody. Super excited to have you back here on the show, and I’m super excited for the conversation that you’re about to hear. I totally enjoyed talking with Rama. Rama is an IBM Fellow. She’s been at IBM for 23 years. She’s an IBM Master Inventor and an IBM Academy Member. During her time at IBM, Rama has worked on some outstanding, very interesting projects, including IBM Watson, where she was a director of research and development. Yes, exactly that Watson that beat humans at the game of Jeopardy almost 10 years ago. Rama is also involved with the Watson platform and has a lot of insights to share about how AI has matured over the years and what to expect next. 
Kirill Eremenko: 00:01:44
In this fascinating conversation, here are some of the topics that we discussed. We talked about unstructured data, IBM Watson, the Watson platform, and the skills to consume AI services. That’s a very interesting and important topic, because a lot of companies like IBM are packaging up their AI services, so it’s important to know what kind of skills are required to consume these AI services and to help enterprises consume them. We also covered clarity of purpose, time to value, accuracy, and the lifecycle of AI models. All this was backed along the way with a sample case study of a fast food chain. We also talked about tools, setting expectations, and the future of AI. And at the end, we discussed the ethics of artificial intelligence.
Kirill Eremenko: 00:02:29
Lots of very interesting topics, very important topics. And you’re going to hear about all of them from somebody who’s been in this industry for a long time, who has seen it all and has great insights to share. This episode will be useful to you if you’re at any level of your data science journey, or if you’re a manager or executive at an enterprise. So with that note, can’t wait for you to check it out. Without further ado, I bring to you IBM Fellow, Rama Akkiraju. 
Kirill Eremenko: 00:03:08
Welcome back to the SuperDataScience podcast everybody. Super excited to have you back here on the show. And today we’ve got a very special guest calling in from San Jose, Rama Akkiraju. How are you doing, Rama? 
Rama Akkiraju: 00:03:19
I’m doing very well, Kirill. Thanks very much for having me on your show. 
Kirill Eremenko: 00:03:23
Yeah. Very excited to have you on the show and congrats on the puppy as you shared with me before the podcast. That’s so cool. Four months old. 
Rama Akkiraju: 00:03:34
Yes. Four months old. Keeping us on our toes. Pandemic puppy, you know? 
Kirill Eremenko: 00:03:42
Pandemic puppy. How many children do you have? 
Rama Akkiraju: 00:03:44
I have one. 
Kirill Eremenko: 00:03:45
One child? 
Rama Akkiraju: 00:03:46
Yes. 
Kirill Eremenko: 00:03:46
And so what made you decide to get a puppy? 
Rama Akkiraju: 00:03:50
My daughter is the one who really pushed us to get one. 
Kirill Eremenko: 00:03:55
And it’s a Shih Tzu, right? 
Rama Akkiraju: 00:03:57
It’s a Shih Tzu. It’s really adorable. 
Kirill Eremenko: 00:04:03
Okay. That’s beautiful. That’s beautiful. What’s her name? 
Rama Akkiraju: 00:04:04
His name is Scout. 
Kirill Eremenko: 00:04:06
Scout. Very cool. Well hope Scout is having fun and it’s good that you’re able to take them for walks even during the pandemic. 
Rama Akkiraju: 00:04:16
Yeah. He’ll get cooped up otherwise. 
Kirill Eremenko: 00:04:18
Yeah. Crazy. 
Rama Akkiraju: 00:04:20
And we get cooped up too. 
Kirill Eremenko: 00:04:21
Yeah. Yeah. Are things getting better with the pandemic, now that a vaccine has been announced? Is it looking hopeful in the United States? 
Rama Akkiraju: 00:04:34
Well, to the contrary, the cases are rising and things are getting worse, and the second wave is going on right now. So in fact there are more cautions and suggested curfews and lockdowns, many places all across the country. So we just have to ride it through, the second wave, and hope for the vaccines to be more generally available and get vaccinated. And hopefully also there are no major side effects with those vaccines. So we’ll see. It’s going to take awhile. 
Kirill Eremenko: 00:05:08
Yeah. Yeah. Definitely. Well, as you say, we can only wait and see. There’s a lot of smart people figuring this out all around the world, so it’s good. 
Rama Akkiraju: 00:05:20
Yeah. Thanks to all the pharmaceutical researchers who are working around the clock to find vaccines for it, right? 
Kirill Eremenko: 00:05:27
Absolutely. A lot of data science involved as well. I spoke with one of the data scientists on this podcast who is in charge of the data pipeline for the COVID consortium project. And it’s very interesting how, like the skills we talk about and the skills we apply in business normally, can be used in crisis situations like this one. 
Rama Akkiraju: 00:05:53
Yeah. Yeah. I totally see that and I understand. There’s a lot of data science in there and a lot of disciplined processes, disciplined way of collecting data, cataloging, testing, all of that. Yeah. 
Kirill Eremenko: 00:06:08
Yeah, absolutely. And speaking of data and data science, you’ve been at IBM for, this is a- 
Rama Akkiraju: 00:06:17
Pretty much all my career. 
Kirill Eremenko: 00:06:19
A number of years. 23 years at IBM. Congratulations, what an incredible career, and you’ve grown through the different positions there. How has your experience been? What’s very interesting for me is that you were at IBM before data science was a thing, and then you were at IBM after data science, like 2010, 2012, when data science really became the big thing. What’s changed in the world or in your work before and after? Do you see a big difference, or was it very gradual and smooth? 
Rama Akkiraju: 00:07:02
Analytics was always there, whether it was mathematical modeling or algorithms for solving problems. Manufacturing and supply chain had a lot of analytical algorithms; operations research especially was being applied quite a bit to optimize manufacturing and supply chain processes. So they were always there, and there was a lot of mathematical modeling, and also AI iterative refinement algorithms, A-star search type algorithms, and all of those kinds of things. 
Rama Akkiraju: 00:07:34
I think what has changed now is our ability to tap into unstructured data. With the advent of cloud compute and the advancements in machine learning algorithms, it is now possible for us to do natural language processing at scale, because we have enough compute and the algorithms are efficient enough. 
Rama Akkiraju: 00:08:03
So suddenly the world of unstructured data analysis opened up. And that is the new opportunity with this whole AI and data science. Data mining was there before: we were looking at web clicks and all that when the eCommerce and internet boom happened. That was all very structured data, still the number of clicks, what people bought, what they purchased, the patterns around it. There was a lot of data mining and still a lot of analytics, but now we can tap into all the enterprise documents. We can tap into social media conversations. We can tap into large volumes of healthcare journals and articles and doctor’s notes and legal notes and all kinds of stuff. And we can tap into the insights that that data gives. 
Rama Akkiraju: 00:09:01
And when you combine that with the previous generations’ (and of course today’s) continuing ability to do data analysis on structured data, you now have deeper, richer insights. And that’s the opportunity that I see. That is what I have seen, back from when I was working on AI iterative refinement algorithms and operations research type of things for manufacturing and supply chain, through the waves of eCommerce, the analytics around eCommerce and customer CRM analysis, now to tapping into unstructured data. 
Rama Akkiraju: 00:09:47
And there was a landmark, I would say, in this whole thing as to what opened up the flood gate. And that was the Jeopardy, the game of Jeopardy that IBM had hosted. It had shown that, of course it’s a great combination of compute and analyzing natural language data to answer or to pose questions for answers. But that is the one that’s kind of a landmark moment, I would say, in this journey that suddenly kind of opened things up for everybody. Before that I would say, yes, I mean, Google Search, in general, Search, has been a huge wave and that continues to be so in solving problems. And that kind of builds the bridge, but to bring it to, I would say to specific use cases and enterprises beyond Search, I would say that Jeopardy was kind of the landmark moment. 
Kirill Eremenko: 00:10:46
This is where IBM Watson beat human players at the game of Jeopardy? 
Rama Akkiraju: 00:10:52
The game of Jeopardy. Right. Which is an answer-and-question game. 
Kirill Eremenko: 00:10:58
Yeah. Famous moment. It was a while ago. What year was it? Do you remember? 
Rama Akkiraju: 00:11:03
2012 or 2013, something like that, I think. 
Kirill Eremenko: 00:11:18
2013. You’re right. It’s not clear, but a long time ago. Almost 10 years ago, which is very interesting. And in addition to that, I heard of a case where somebody, I think a woman, had cancer and no doctor was able to diagnose what kind of cancer she had. That same IBM Watson that won the game of Jeopardy, by going through all this unstructured data and all these reports, this disparate knowledge from around the world about different types of cancer and different symptoms, which would be very hard for any one human to hold at the same time, was able to diagnose this lady with exactly the type of cancer that she had, which was a rare type of cancer. And from there it was possible to decide what the correct treatment was. 
Kirill Eremenko: 00:12:09
That was like even more mind blowing for me because Jeopardy is a game, which is a big accomplishment. It’s like a big PR accomplishment, but here it’s actually helping somebody with their healthcare. As far as I understand, you worked on the IBM Watson project as well. What can you tell us about that? 
Rama Akkiraju: 00:12:31
Yeah. That’s the healthcare part of IBM Watson. That was one of the first attempts, I would say, at taking the Jeopardy technology and really applying it to an enterprise use case. We at IBM learned interesting lessons from that episode, because there were a lot of expectations, and the reality of AI is somewhere else. There were some very successful outcomes, and there were some from which we learned significant lessons: that it takes hard work, it takes a lot of the right kind of data, and it takes setting the right expectations with our customers about the capability of these AI models on day one, versus what it takes in terms of the life cycle management of these models and all that. So yeah, the success stories, especially when they positively impact human lives, are of course something to celebrate. So we’re very happy about that. 
Rama Akkiraju: 00:13:48
The work that I did in Watson is around modeling people. This is understanding people’s various aspects, such as their personality traits, their communication tones, their emotions and sentiments, so that we can personalize engagements for people. Be it in chatbots, when a human is interacting with a chatbot, how can those conversations be personalized? Or in social media monitoring type scenarios, where you would like to scan social media content and understand what people are saying about different products, what their sentiments are, and those sorts of things. 
Rama Akkiraju: 00:14:39
Or in public relations type scenarios, where you want to understand, when the CEO or CFO of a company is putting out statements, whether those can be analyzed to gauge how people might feel about what they’re saying about the company, what the sentiment is towards where the company is headed next, and those sorts of things. 
Rama Akkiraju: 00:15:05
And in general, about communications between people, or chats between chatbots and humans: what are the communication tones? Are they frustrated at this point in time? Are they happy with the way the call is going? Is human intervention needed at this point, given how the call is going, and those sorts of things. These are all some of the use cases for the work that we had done, but it’s all about understanding people. We call that area people insights. In the field of human-computer interaction, it’s called user modeling, basically. That’s the area that I worked on in Watson. What you will see on the Watson platform are services such as Personality Insights, which detects people’s personality traits, along with sentiment analysis, emotion analysis, and communication tone analysis, called Tone Analyzer. So these are the four AI-based services that my team and I worked on while I was at Watson. 
Kirill Eremenko: 00:16:24
Wow! That’s very interesting. I hope you’re enjoying this episode. We’ll get back to it after this quick break. And Confident Data Skills Edition Two is out. This is the second edition of the book I published in 2018. Some time has passed since then. A lot of things have changed in the space of artificial intelligence and data science. If you’re not familiar with the book, then it helps develop an understanding of all of the main data science algorithms and the data science process on an intuitive level. So no code, no complex mathematics, just intuitive explanations of the algorithms and useful practical examples and case studies. 
Kirill Eremenko: 00:17:06
This book will be extremely helpful for you if you’re starting out, or if you’re looking to cement in that intuitive feeling for the algorithms as you progress through your career. Specifically you will learn about Decision Trees, Random Forests, K-Nearest Neighbors, Naïve Bayes, Logistic Regression, K-Means Clustering, Hierarchical Clustering, Reinforcement Learning, Upper Confidence Bound and Thompson Sampling. 
Kirill Eremenko: 00:17:29
And in this second edition, I also added Robotic Process Automation, Computer Vision, Natural Language Processing, Reinforcement Learning and Deep Learning and Neural Networks. Plus of course you will learn extremely valuable skills for a career such as ethics in AI, presentation skills, data science interview tips and much more. So if you want to get a grip and really cement in your intuitive understanding of this field, then this is the book for you, and you can get it on Amazon already today. It’s called Confident Data Skills Edition Two, and it’s a purple book. So enjoy, and let’s get back to the podcast. 
Kirill Eremenko: 00:18:05
It’s actually very cool to hear because I’ve seen this Watson platform and I’ve actually used it as a case study in one of my presentations about AI and Natural Language Processing. 
Rama Akkiraju: 00:18:19
Oh really? Which service was that? 
Kirill Eremenko: 00:18:24
I just talked about the AI Watson platform and how … There was an example of a company, I think it was, I might be mistaken, AutoCAD, the creators of AutoCAD, or some company whose designers use their software. And they had tens of thousands of support queries per day that they needed to address. And they used the IBM Watson platform to automate that, and they were able to bring the average response time from several days to under five minutes and process many more queries and so on. You have a case study of that on the website, and I was very inspired by that, so I used it in one of my presentations. But it’s really cool to now be talking with one of the people who worked on the IBM Watson platform. 
Rama Akkiraju: 00:19:25
Yeah. As I said, there are many capabilities, many services in the Watson platform. There is Watson Assistant, which is the chatbot service. There is Watson Discovery, which allows you to ingest enterprise content and get insights from it. Then there is what is called the Natural Language Understanding suite of services, which includes extraction of entities from a given text document, extraction of sentiments, emotions, concepts, and those sorts of things. And then there is speech-to-text, text-to-speech, and then there is Personality Insights and such. As I said, all of these services constitute the Watson platform, and the ones that I have personally worked on are the ones I just mentioned, like Personality Insights and the services that analyze sentiment and emotions. 
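To make the services Rama lists a little more concrete, here is a minimal sketch of calling one of them, Natural Language Understanding, for sentiment and emotion. It assumes the ibm-watson Python SDK; the API key, service URL, and version date are placeholders for values from your own IBM Cloud instance, and exact parameter names may differ between SDK versions.

```python
# Minimal sketch (not an official example): sentiment and emotion analysis
# with Watson Natural Language Understanding via the ibm-watson Python SDK.
# The API key, service URL, and version date below are placeholders.
from ibm_watson import NaturalLanguageUnderstandingV1
from ibm_watson.natural_language_understanding_v1 import Features, SentimentOptions, EmotionOptions
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

authenticator = IAMAuthenticator("YOUR_API_KEY")            # placeholder credential
nlu = NaturalLanguageUnderstandingV1(version="2021-03-25",  # placeholder version date
                                     authenticator=authenticator)
nlu.set_service_url("YOUR_SERVICE_URL")                     # placeholder instance URL

response = nlu.analyze(
    text="The support team resolved my issue quickly. Great experience!",
    features=Features(sentiment=SentimentOptions(), emotion=EmotionOptions()),
).get_result()

print(response["sentiment"]["document"])  # overall sentiment label and score
print(response["emotion"]["document"])    # emotion scores such as joy, anger, sadness
```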
Kirill Eremenko: 00:20:13
Okay. Gotcha. So it’s quite a big platform. I thought I had the example somewhere here, but I don’t have it in front of me. It’s very interesting. And so basically any company can come and start using the IBM Watson platform to their benefit, right? So it’s quite a fast process to get up to speed. Is that right? 
Rama Akkiraju: 00:20:45
Yeah. The idea there is that AI services are a set of building-block services, available to everybody, that you can use to build whatever business application you want to build. It could be a chatbot application. It could be that you have a specific scenario where you are applying speech-to-text and translating some audio documents into text documents. So there are different kinds of scenarios; depending on your scenario, you can bring in these different foundational services, assemble them, put your application together, and deploy it. 
Rama Akkiraju: 00:21:35
In addition to what is offered on the Watson platform, you can do many more things with AI, of course. You can infuse AI into IT operations management, for example. You can infuse AI into security management. You can infuse AI into financial services related use cases and so on. So it can be applied to many different domains. What we have built in the Watson platform is the foundational set of services; on top of it, depending on the industry and the use case, there can be more specific AI-based solutions, built either by clients themselves by leveraging some of the foundational services, or by vendors. And in some cases, IBM itself is building those higher-level AI-infused solutions and applications so that clients in those industries and use cases don’t have to do it themselves. 
Rama Akkiraju: 00:22:46
So for example, let’s take IT management. IT management has many aspects. It could be about managing outages and incidents, when a system becomes unresponsive or there is excess traffic and it needs to be scaled up, any number of those things. It could be about optimizing the software development process itself, where code is written but there are some vulnerabilities in the code, security vulnerabilities, and you could apply AI to detect those, give you an early signal about where the risks are, and give you suggestions on how to improve the code so that things don’t get bad. 
Rama Akkiraju: 00:23:26
Similarly, in other parts of the software development life cycle, you can look at the deployment of applications and say, hey, this deployment is highly risky because it’s got these configuration changes, which have been noted to be highly problematic: in the past, whenever you deployed these kinds of changes, major outages occurred. So there are different kinds of predictions you can do. And this is one example in the IT domain. There are others in the security domain, in the financial services domain, and of course in the healthcare domain and so on. 
Rama Akkiraju: 00:23:56
So there are the foundational services, the basic understanding of text, images, voice and those types of things, which form the foundation for processing all this enterprise data. And then on top of it, you have the actual problems to solve, for which you would build your own models. And underneath you may use any of these building-block services, depending on the kind of data, whether you’re working with images or with audio or with unstructured text. 
Kirill Eremenko: 00:24:27
Okay. It sounds useful once it’s created, but if, for example, I’m a manager or a business owner or an entrepreneur listening to this, I might be confused, or it might be a bit overwhelming. Okay, how do I do this? Okay, there’s the IBM Watson platform, how do I go there? How do I connect to it? Do I need an API? Do I need this and that? It feels like there’s a barrier to start using it. 
Kirill Eremenko: 00:25:02
So the question I had is: what kind of skills does an organization need to have on board in order to be able to interact with this IBM Watson platform? And I think this question will be useful not just to managers and executives who might want to use something like the IBM Watson platform, but also to data scientists who are listening, as guidance on, okay, what skills do I need to add to my portfolio to be that enabler for enterprises to interact with these kinds of plug-and-play, ready-made AI solutions out there? 
Rama Akkiraju: 00:25:40
Yeah, it’s a very good question. Actually, let’s not make it specific to the Watson platform alone. In general, how do you consume AI services? That is the question at heart, because there is nothing specific to Watson services that is either extremely better or worse. Really, everybody out there packages every AI service as a containerized piece of software that is deployed in Kubernetes and made available either as a Software as a Service, a SaaS service, or, if clients prefer it to be on-prem, deployed on-prem. It’s still packaged and containerized and all that. 
Rama Akkiraju: 00:26:27
So that part, a microservice-based architecture and accessing APIs either as SaaS or on-prem, is the same across the board in the industry. Every vendor follows the same approach. Every company that is building these AI services and delivering them to their customers uses the same approach. So there is nothing significantly better or worse that is specific to Watson there. 
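As an illustration of that consumption pattern, a containerized model exposed behind an HTTP API, whether SaaS or on-prem, here is a minimal sketch. The endpoint URL, token, and JSON fields are hypothetical assumptions, not any vendor's actual contract.

```python
# Minimal sketch of consuming a packaged AI microservice over REST.
# The endpoint, bearer token, and request/response fields are hypothetical;
# a real service (SaaS or on-prem behind Kubernetes) defines its own contract.
import requests

ENDPOINT = "https://ai-services.example.com/v1/sentiment"  # hypothetical URL
TOKEN = "YOUR_ACCESS_TOKEN"                                # hypothetical credential

def analyze_sentiment(text: str) -> dict:
    """Send text to the (hypothetical) sentiment microservice and return its JSON result."""
    resp = requests.post(
        ENDPOINT,
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"text": text},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    print(analyze_sentiment("My order arrived late and cold."))
```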
Rama Akkiraju: 00:26:55
What is really at the heart of the question that you asked is what does it take to consume an AI model and derive value out of it? Would you agree? Is that at the core of your question? 
Kirill Eremenko: 00:27:11
Yes. 
Rama Akkiraju: 00:27:12
Okay. So then let me state it in the following way. There are actually many considerations. The first one is that whoever is building it has to have clarity of purpose. Why are they building this AI model? Whose problem is it solving? And when I build it, do I make sure that this model is … First of all, is this a model that is pre-trained, or does it need to be trained with the data that the client provides in their environment? If it is a pre-trained model, say, for example, a speech-to-text model: in order to provide an English speech-to-text model, the builder of the speech-to-text model has to train it with thousands of hours, at least a thousand, ideally 2,000 or so hours, of audio data with various accents, people speaking English with various accents from different parts of the world and so on, for it to be robust. 
Rama Akkiraju: 00:28:20
But then I say clarity of purpose because, let’s say that this speech-to-text system is going to be deployed in a drive-through fast food ordering retail company. In that setting, people who are ordering food via a drive-through setup are not going to use long sentences with a lot of grammar in them and all that. It’s very specific. The menu is given, they’re ordering specific items, there’s a lot of background noise, maybe a dog barking in the back of the car or children crying in the back of the car, ambient noise in the environment. The speech-to-text model has to really do well for that use case. 
Rama Akkiraju: 00:29:16
When I say clarity of purpose: of course, you have to have a speech-to-text model that is good at understanding general English terms, but it’s more important in that case for it to be specific and useful in that context, that it understands that menu, that it understands how people speak with those background noises and all that. So when you build a model for that purpose, the purpose has to be clear for the data scientists so that they can train it with the right kind of data and make sure that when it is deployed in that particular setting, when somebody is ordering burgers, fries, and milkshakes and this and that, it understands that terminology. So that’s one thing. I’m just giving that as an example, but the same applies if you take it to the healthcare domain, or to the contract understanding domain. There are specific terms and specific things that are applicable to specific industries and domains. 
Rama Akkiraju: 00:30:08
Having clarity on that is super important; that’s what, first of all, makes the model more usable and relevant. That’s the first thing. And it may be pre-trainable, like in the case of speech-to-text, because you say, okay, it’s going to be for the fast food center, these are the typical kinds of things they have on the menu, and so on. So, clarity of purpose. Now we come to the client who’s buying that AI model, let’s say. 
Rama Akkiraju: 00:30:40
So let’s say this fast food company is purchasing this AI model that is able to understand the spoken speech of customers who are ordering the fast food, and they’re deploying it in their restaurants, in the fast food centers. They now need to have clarity on how long it is going to take for this AI model to be reliable enough for them to deploy it more broadly across all their fast food centers. Initially they’ll do a POC; anybody who wants to try it out will test it out first. I mean, you don’t deploy a new AI model system just like that. 
Rama Akkiraju: 00:31:23
So they’ll do a POC. The proof of concept may be three months or six months, maybe at 10 or 20 fast food centers. Then you understand where the system is good at recognizing the orders and where it’s making mistakes, and you give feedback to the system. Part of that feedback is, you say: this prediction, you transcribed this order as fries when it was not fries; when they said burger you actually mistook it for something else; and those sorts of things. 
Rama Akkiraju: 00:32:00
So you give feedback to the system. The system then takes that, retrains the models, learns from the new accents that it is seeing, and improves. Over a period of maybe three or four months of POC time, with all this feedback going in, the system starts to get fine-tuned very well to those specific accents and specific background noises with the feedback you gave. That’s the time to value. Companies who are consuming this have to be prepared. I’m not saying that every AI model needs to go through three to six months; I’m just giving that one example so we can talk it through. You’re with me? 
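The POC feedback loop Rama describes, logging the model's transcriptions, collecting human corrections, and folding them back in as new training data, might look roughly like this in outline. The file paths, column names, and the retraining step are placeholders for whatever customization mechanism the speech service actually offers.

```python
# Rough sketch of the POC feedback loop: compare model transcripts against
# human corrections, keep the misrecognized orders, and queue them as new
# training examples. File paths, columns, and the retraining hook are hypothetical.
import csv

def collect_corrections(log_path: str, corrections_path: str) -> list:
    """Return the orders where the human-corrected transcript differs from the model's."""
    with open(log_path, newline="") as f:
        model_rows = {row["order_id"]: row["model_transcript"] for row in csv.DictReader(f)}
    mismatches = []
    with open(corrections_path, newline="") as f:
        for row in csv.DictReader(f):
            predicted = model_rows.get(row["order_id"], "")
            if predicted.strip().lower() != row["human_transcript"].strip().lower():
                mismatches.append({
                    "order_id": row["order_id"],
                    "audio_file": row["audio_file"],
                    "corrected_text": row["human_transcript"],
                })
    return mismatches

def queue_for_retraining(examples: list, out_path: str = "retrain_corpus.csv") -> None:
    """Write the corrected examples to a corpus file for the next fine-tuning run."""
    with open(out_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["order_id", "audio_file", "corrected_text"])
        writer.writeheader()
        writer.writerows(examples)

# Usage: queue_for_retraining(collect_corrections("model_log.csv", "human_review.csv"))
```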
Kirill Eremenko: 00:32:39
Yep. 
Rama Akkiraju: 00:32:39
Yeah. So this is the time to value. They need to understand it. In some cases it may work out of the box; it may just work fine, perhaps with some training from their own environment. But in some cases it could be like this. So it’s about understanding what it takes, how long it takes for this AI model to start delivering value at the level of accuracy that I desire. Even if you deploy it, you may still get things wrong, because even humans who are taking orders sometimes mix things up. When there is too much noise in the background and all that, it’s hard. And sometimes you get two orders of fries, or sometimes your burger is missing. That happens. 
Rama Akkiraju: 00:33:20
You can’t expect an AI system to be perfect all the time, but there’s some level of accuracy requirement that you may set. Say, at least 90% of the time, 95% of the time, it has to be good. Is that acceptable? Or does it have to be 97%? If three orders in a hundred are going to go wrong, customers are going to be unhappy. Is that something you’re really comfortable with or not? Those are the decisions that the consumers of the AI model have to make, because it’s not guaranteed to be perfect. So that’s the time to value and also the accuracy aspect. 
Rama Akkiraju: 00:33:59
Then there is the data, which is: how much data is needed for this AI model or solution to train and get to value? How many audio samples from my other customers who are giving orders should I provide to the system as part of the initial training? And where do I put this data? You may have to hire people who are willing to give those orders in a setup, and you have to acquire that data legally. 
Rama Akkiraju: 00:34:32
In some cases, the companies may have all the data that they need. It may just be a matter of cleaning it, and I shouldn’t say that lightly, actually. Cleaning, organizing and analyzing data is a huge part of getting data ready for AI. We call it at IBM the AI Ladder: you first collect data, then you cleanse it, you organize it, then you analyze it. And that’s when you are ready to actually use that data to do AI. 
Rama Akkiraju: 00:34:59
So there are all of these things around data. Where does the data reside? How long does it take to aggregate or assemble it? Who owns that data? What is the governance and compliance of the data? What is the lineage of this data? Do we need to label it? In many cases, and in this particular speech-to-text example that I’m walking through, you may have to label it: customers are giving orders with a lot of background noise, you have humans transcribe it for you, and you feed that as additional training data to the pre-built speech-to-text system and so on. So that’s the data part. 
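A minimal sketch of the collect, cleanse, organize steps of that AI Ladder for the labeled drive-through transcripts is shown below. It is purely illustrative; the file name, column names, and cleaning rules are assumptions made for the example.

```python
# Illustrative sketch of the data-preparation ("AI Ladder") steps for labeled
# audio transcripts: collect, cleanse, organize, then split for training.
# Column names and cleaning rules are assumptions for the sake of the example.
import pandas as pd

def prepare_transcripts(path: str):
    df = pd.read_csv(path)                                # collect: one row per labeled audio clip
    df = df.dropna(subset=["audio_file", "transcript"])   # cleanse: drop unlabeled clips
    df["transcript"] = (df["transcript"]
                        .str.lower()
                        .str.replace(r"[^a-z0-9' ]", "", regex=True)
                        .str.strip())                     # cleanse: normalize label text
    df = df.drop_duplicates(subset="audio_file")          # organize: one label per clip
    train = df.sample(frac=0.9, random_state=42)          # split: hold out validation data
    valid = df.drop(train.index)
    return train, valid

# Usage: train_df, valid_df = prepare_transcripts("drive_thru_labels.csv")
```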
Rama Akkiraju: 00:35:33
So I talked about clarity of purpose, first from the builder’s side. Then I talked about consumers of the AI models having clarity on the time to value: how long does it take? And then the data part of it: how much data do I need to give it in order for it to deliver value to me? Then there is the skills part of it. Are data scientists needed to manage the life cycle of these AI models? When I say life cycle, what do I mean? I mean that users have to give feedback. The model could be making mistakes, and you have to give it feedback for it to learn from the mistakes it’s making. Does that have to happen forever? Would somebody have to keep on teaching it? 
Rama Akkiraju: 00:36:15
Frankly speaking, we don’t know, because really, how many AI systems are out there that have been in production for multiple months or years, have reached a level of accuracy that’s comfortable, that’s acceptable to the users, and have plateaued in their learning so that they’re not learning anymore? They make mistakes occasionally, but they’re not learning anymore. Do we have examples of that? Maybe, maybe not. I say that because it’s fairly new; many companies are still exploring, experimenting with it. In some industries, things have progressed more than in others. In healthcare domains, for example, many of the AI systems that special-purpose companies or IBM or others have built are in production and have been through multiple generations and such. 
Rama Akkiraju: 00:37:11
So maybe in some domains it has reached that level of maturity, having gone through multiple iterations of improvements and reached a level that is stable. But in many other cases, it’s early days. Do users have to give feedback all the time, forever? Not clear. Initially you would definitely have to give it feedback until it gets to a point, but after that, it may occasionally still make mistakes, especially when it starts to see data that it hasn’t seen in its training. So in such cases, doing very disciplined error analysis, knowing where it’s making mistakes, and giving it feedback would be helpful. 
Rama Akkiraju: 00:37:49
So what I’m saying is that there are the skills required, as a subject matter expert, to give feedback, and the skills required to ensure that you are doing error analysis on these models in a disciplined way, looking at where they’re making mistakes. There may be tools and reporting and all that around the AI models for doing all of these things, but somebody has to look at it and steer it in a certain direction. 
Rama Akkiraju: 00:38:13
And again, automation tools and AI platforms are offering more and more of these things, but still some amount of skill is required. So companies who are consuming it would have to know some amount of AI and AI terms: at least, what is accuracy? What is precision? What is recall? What are the trade-offs? How do you give feedback, and how is a new model built? How do you deploy the new model, and all of those things. Skills are an important factor. So, are new roles needed in the organization? What training is required for existing people in their current roles so they can manage these AI models in production? These are all things that companies have to think about. 
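For the AI terms Rama lists here, accuracy, precision, recall, and their trade-offs, a small worked sketch on a toy confusion matrix may help; the counts are made up purely for illustration.

```python
# Worked example of accuracy, precision, and recall from a toy confusion matrix.
# The counts are invented purely to illustrate the definitions and trade-offs.
tp, fp, fn, tn = 90, 10, 15, 885   # e.g. "order item correctly recognized" as the positive class

accuracy = (tp + tn) / (tp + tn + fp + fn)   # overall fraction of correct predictions
precision = tp / (tp + fp)                   # of the items flagged positive, how many were right
recall = tp / (tp + fn)                      # of the true positives, how many were found

print(f"accuracy={accuracy:.3f} precision={precision:.3f} recall={recall:.3f}")
# Tightening the decision threshold usually trades recall for precision, and vice versa.
```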
Rama Akkiraju: 00:38:53
And I’ll say two more things, I know, in this long-winded answer to your short question. I’ll say two more things: tools and infrastructure are important, and setting expectations is also important. Tools and infrastructure means: you build AI models, they’re containerized, let’s say as microservices, and you deploy them. What is your usage? How much load do you expect? How much do you need to scale? Is it on-prem? Is it a cloud-based service that’s being deployed? What is the onboarding like? How do you set it up? How do you train it? How do you deploy it, and how much infrastructure is needed? How many servers, nodes, pods? What is the configuration like? How do you collect the payload data, that is, the data that’s coming in when somebody is using this AI model? How do you save it? How do you make sure that the compliance and data privacy requirements are met with all the data that you’re collecting? These are all part of the tools and infrastructure that one has to have a plan and processes for. 
Rama Akkiraju: 00:40:02
And finally, setting expectations with end-users. End-users being not necessarily the companies that are deploying these AI models, but their end-users, who will be using them. Let’s say, in this particular case, the people who are actually ordering food at the fast food centers: they need to know that the order taker is a new automated speech-to-text system and that they have to be a little bit patient with it; sometimes it may get it wrong. So setting their expectations is important. And also setting the expectations of the company that is purchasing this, let’s say the fast food company itself; setting those expectations is important. So in response to your question, these are all the factors: clarity of purpose, time to value, data strategy, skills strategy, tools and infrastructure, and the last one was setting expectations. These are all things that are very important to really deploying and getting value out of AI models. 
Kirill Eremenko: 00:41:08
Wow! Thank you very much. That’s a very detailed answer and really makes it clear. And I love that you used a case study example with this fast food chain; it puts it into perspective, seeing all this applied rather than just abstract concepts. I could actually feel how all this would be applied. Boston Consulting Group came out with a report in October saying that in 2018, about 40% of enterprises had an AI strategy, and in 2020 it’s now 60% of enterprises that have an AI strategy. So it’s growing, which is good, which is exciting. However, at the same time, only 10% of enterprises see significant financial returns on their AI investments. Why would you say that is the case, and where are the pitfalls that you’re seeing that companies usually trip up on when deploying AI? 
Rama Akkiraju: 00:42:19
Yeah. Some of the things that I mentioned in the journey to AI apply to this question as well. Some of it is, first of all, not understanding what AI can and cannot do. There may be hyped up expectations about what AI can deliver. And when you actually deploy AI models, you realize that there is more hype than reality. And therefore you feel disappointed. That could be one thing that could be playing a role. Another thing is that companies and enterprises have this significant need to have AI that is explainable, that is trustworthy, and that is more transparent. Of course secure and all of the other things. 
Rama Akkiraju: 00:43:27
In the initial instantiations of AI models, many vendors had their AI services; people started using them, started experimenting with them. But when you actually have to build real-world, enterprise-grade, enterprise-scale solutions and fit them into your business processes, you have to have all of these things. It should be trustworthy, the platform should be robust, and the AI models have to be able to explain what predictions they are making and why, because in many domains there is a lot of audit. 
Rama Akkiraju: 00:44:10
One of the examples here I would like to give is the insurance domain, where, let’s say, you built an AI model to assist in loan approval. The insurance domain is highly regulated and has a lot of oversight. If there is an AI model that is making predictions saying this person’s loan is approved, this person’s loan is rejected, and you cannot explain why, even when there is an audit or when the borrower is disputing it, then it’s a problem. If it is, let’s say, unfairly biased on certain attributes, like race, gender, or age, there is liability associated with it. 
Rama Akkiraju: 00:45:13
That is one of the reasons why, after the initial exploration, companies sort of backed out or said, wait a minute, my explorations are fine, but I really cannot deploy this, because the system doesn’t explain what it’s doing, it’s unfairly biased, it is a black-box system. I don’t know if I can trust it; I can’t apply it. In fact, in some of the surveys that were done later on, I don’t have the exact numbers, but most companies said that one of the main concerns, one of the main factors they care about when they deploy AI, is trust and explainability. 
Rama Akkiraju: 00:46:00
Coming back to why companies haven’t been able to really derive value: the platforms and the solutions that AI vendors are offering have to be able to support these things. They have to be able to explain. They have to be transparent, show how the model arrived at a prediction and which variables influenced it, to the extent that it can, and have different capabilities to de-bias, to show how it’s making predictions, the explainability and all that. And it took a while for companies and vendors to build these capabilities: transparency with the data, transparency with the models, and all that. 
Rama Akkiraju: 00:46:57
So now, I would say, after the initial hype, we’re getting to a point where we have somewhat more mature AI platforms that are able to offer these capabilities. For example, I’ll give an IBM example. We have a product called OpenScale that allows you to bring in your own AI model, wherever you have built it, and you can test that model for unfair bias on specific factors, specific attributes, and see if the model is behaving fairly or unfairly. And if it is behaving unfairly, you have access to algorithms in the platform that allow you to de-bias and retrain the models, deploy them again, monitor what they’re doing, and so on. 
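To illustrate the kind of fairness check Rama describes (without reproducing OpenScale's actual API), here is a minimal sketch of one common group-fairness metric, the disparate impact ratio, computed over model predictions. The attribute, group labels, and data are hypothetical.

```python
# Illustrative group-fairness check, in the spirit of what Rama describes
# (not the OpenScale API): the disparate impact ratio compares favorable-outcome
# rates between a monitored group and a reference group. The data is made up.
def favorable_rate(decisions):
    """Fraction of favorable outcomes (1 = approved, 0 = rejected)."""
    return sum(decisions) / len(decisions)

def disparate_impact(monitored, reference):
    """Ratio of favorable rates; values well below 1.0 suggest the monitored group is disadvantaged."""
    return favorable_rate(monitored) / favorable_rate(reference)

# Hypothetical loan-approval predictions split by a protected attribute.
group_a = [1, 0, 1, 1, 0, 1, 1, 0]   # reference group: 5/8 approved
group_b = [0, 0, 1, 0, 0, 1, 0, 0]   # monitored group: 2/8 approved

ratio = disparate_impact(group_b, group_a)
print(f"disparate impact ratio: {ratio:.2f}")  # ~0.40; a common rule of thumb flags values below 0.80
```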
Rama Akkiraju: 00:47:48
So there are many of these factors that enterprises then realized: unless all these things are in place, I can’t really trust this AI model, and I can’t really put it in production. And I’ll give you one other example, an interesting project that I worked on. You’ve got a lot of data in English, social media data, speech data, and all that, and you’ve built all these models that understand English speech-to-text, that understand English text and then do natural language understanding and all that. And you go to a company, a global, let’s say, financial services company, and say: here, I have all of these foundational AI services, and we built an AI solution, a chatbot system; you can deploy it in your environment. The client would go: great, all right, but my customers speak 15 different languages. Our customers in the U.S. and the UK speak English, but their accents are different. In Australia they speak English, but their accents are different. 
Rama Akkiraju: 00:48:59
Our customers in Spain, we want to apply this in Spain too, does your system speak Spanish? Oh, by the way, we have operations in Latin America, but Spain Spanish is different from Mexican Spanish, which is different from Argentinian Spanish and so on. Does it speak all these different Spanish varieties and dialects? When you say AI for enterprises at scale, AI has to understand all of these languages as well, and natural language understanding, speech-to-text, all of these models have to be trained and available in all these languages. And also the language of the industry: the financial services domain, the insurance domain language and so on. As in the case of the fast food center, I said it has to understand the menus and the ingredients and so on. 
Rama Akkiraju: 00:49:44
These are all things that need to be ready for AI to be enterprise ready. So we are on a journey as an industry, I would say. We have to build all of these things, the platforms have to be mature enough to support all of these kinds of requirements for companies, and the skills within companies who are consuming these AI models have to come together. They may have to be re-skilled in some ways, a little bit, with an understanding of AI and how to manage these models. When all of these things happen, that’s when companies are ready to really derive value from it. 
Rama Akkiraju: 00:50:26
We have to prep the whole pipeline with all of these. We need tools for preparing the data. We need tools for managing these models. We need tools to tell us whether a model is fair or not, and if unfair, how to fix it. We need tools to help us understand how many prediction mistakes it’s making, what the errors are, and how we fix them, and so on. A lot of factors have to come together and the platforms have to get mature. It’s only now that the platforms are capturing this level of concern for managing AI models. And once they are mature enough, which they are starting to become now, companies will start to really benefit. 
Rama Akkiraju: 00:51:07
By saying so, I’m not saying that we have to wait until they become mature. Obviously, in some cases, in certain use cases and scenarios, you can just go ahead and start deploying AI-based solutions. Chatbots were the first wave, I would say, used by companies specifically to see how the most frequently asked questions can be addressed by these chatbots as opposed to humans doing it. And there has been a fair bit of success there, a first wave and a second wave of success stories in that area. Now companies are starting to move into other business processes where they’re starting to look at AI. So, for example, I mentioned AI in IT operations management. It’s becoming pretty prevalent, where outages and incidents and issues are automatically detected and addressed by automated AI systems. And similarly in security management and in other domains, it’s starting to make its way. 
Kirill Eremenko: 00:52:05
It makes me think of the Gartner Hype Cycle where it feels like, from what you’re saying, that AI has gone through that peak of the hype part, then down to the trough of disillusionment and slowly we’re getting out into this plateau of productivity where as you say, the whole industry is maturing and we understand what we need, the different tools we need and how AI will go forward. And with that, I have a question, what does the future look like to you? How soon will we get to a stage where more and more companies are able to deploy AI effectively and comfortably and what next? 
Rama Akkiraju: 00:52:59
As I said, in some domains it is already proving its value. It’s already there. If you narrow the problem scope, not trying to solve general-purpose AI but doing narrow AI for a specific industry, specific use cases, then you can actually get the accuracy up to a level that’s acceptable, and you can also, in specific domains, address all the things that I mentioned, such as fairness, transparency, explainability, robustness, the continuous learning aspects of the models, accuracy, and all these things. 
Rama Akkiraju: 00:53:39
In some domains, it’s already there. In some other domains, it is making its way. But as I said, the first point I want to make here is that if you narrow the domain, we can have a lot more success, and wherever companies have narrowed it and applied it, they have had good success. So specifically, AI is being used for automating things, for optimizing things, and for offering decision support. Then where would it go next?
Rama Akkiraju: 00:54:12
There is a lot of applicability to augmenting human intelligence right now, and we could say some of the things we talked about are going in that direction. Then personalization and natural interfaces are the next wave, I would say. When I say personalization, what I mean is: you can do all the things related to automation, optimization and all that, but you want to get to a point where the services being offered to you are very specific to your case, to your scenario. When you are interacting with a chatbot, the chatbot really understands your current situation. 
Rama Akkiraju: 00:54:59
Let’s say you just missed your flight and it’s connected to your calendar system and it understands your preferences, automatically finds out all the rest of the reservations that can be made for you and gives you a full travel plan, an alternate travel plan for you right off the bat. And you don’t have to go look for these alternatives. Should I go change the hotel now? Or should I change the car reservation now? That’s a personalization example. There is a lot of scope there in personalization, and there is also a lot of scope for providing natural interfaces to AI systems. 
Rama Akkiraju: 00:55:40
So for example, right now we are still using dashboards and web pages and tools for interacting with the predictions and the recommendations that the AI system is making. Speech interfaces are almost there, but they are not all very mature or integrated into these specific use case scenarios yet. You could imagine, even in business settings, just the way we ask Siri and Google questions, we don’t have to type them in. We should be able to say, hey, show me my sales reports for this month. Show me how it compares with the sales for last year around the same time. Or show me how many incidents I had in my IT environment in the past month, and compare that with the incidents that I had in the month before. 
Rama Akkiraju: 00:56:30
I can ask these kinds of questions, just speak them out to the AI system, and it should be able to understand, translate, form the right kind of queries, make the right kind of predictions, join all the information, and give it to me. That is, at least I would like to think, one wave of what’s next for AI. In general, if we look at what’s happening in the industry, there is a strong trend toward automation, optimization and decision support. These are all things that are pretty much possible now. From siloed systems, we go to more integrated ones. From only structured data analysis, we go to combined structured and unstructured data analysis. And from very discrete human and AI handoffs, we can now move to more natural human and AI collaboration: more human-led and maybe AI-guided type scenarios, and we can envision going into more AI-led and human-guided type scenarios. 
Rama Akkiraju: 00:57:46
And again, where applicable. I’m not saying that in all cases AI has to lead and the human guides it, or the human leads and AI guides it; it really depends on the situation and so on. But since the question is more about where AI is headed, I would like to think that while we continue to optimize scenarios such as automation, optimization, decision support, and so on, we can start to look forward to this next wave of enterprise products and solutions where there’s a lot more personalization and a lot more natural interaction with AI systems. 
Kirill Eremenko: 00:58:32
Wow! Thank you. Fantastic answer. We’re already running out of time. I can’t believe how fast this hour has gone by, but I definitely enjoyed and learned a lot. Thank you, Rama. Where can our listeners find you and follow you to learn more about how your career develops and things that you work on? 
Rama Akkiraju: 00:58:56
I’m on LinkedIn. People can follow me on LinkedIn. I write blog posts from time to time on things that I’m working on. I’m more on LinkedIn than on Twitter, but I’m on Twitter as well. So those would be the two places to find me. 
Kirill Eremenko: 00:59:11
Fantastic. That’s very cool. What is one book that you can recommend to our listeners? 
Rama Akkiraju: 00:59:21
I actually would recommend maybe two. 
Kirill Eremenko: 00:59:26
That’s okay. That’s okay. 
Rama Akkiraju: 00:59:29
I will connect it to AI, I’ll tell you. One book that I’ve read recently is called Bad Blood. It’s about the company Theranos, written by John Carreyrou, a Wall Street Journal investigative journalist. It’s about how this supersmart lady, Elizabeth Holmes, with a lot of ambition, started this company Theranos, which was mainly meant to be able to detect many kinds of diseases and symptoms and such from your blood, but with just one or two droplets of blood instead of taking vials of blood and all that. It’s a real story, and to cut to the chase, she’s now being prosecuted because there were a lot of claims that she made about the company that were not quite true. Anyway, just a very interesting read. 
Rama Akkiraju: 01:00:30
And why do I talk about that book in the context of AI? The takeaway for me from it is that when you are in the field of science, and I’m not necessarily saying that was AI alone per se, there was a lot of diagnostics and all of that in that particular story, real story, integrity and truthfulness about where your technology is, is super important. If you’re too far ahead in your vision and you’ve sold your vision, but the reality hasn’t caught up and you’re still bridging that gap in your mind, whether for the purposes of deceiving or because you believe it will get there, and therefore you still sell your vision as the reality, that is when things start to really fall apart. 
Rama Akkiraju: 01:01:38
So one thing that was a real, interesting learning for me from that book is how important it is to be honest and truthful and state it as it is. So if an AI system or an AI model, for example, to bring it back to AI, is only going to go so far, and that is the current state, not that it cannot go further in the future, but if that is where it is, I think setting those expectations with customers, being truthful and honest, is a lot more important than selling a vision that may or may not be achievable. And that’s part of every hype; you talked about Gartner’s Hype Cycle. Part of what happens in the industry is that we tend to get ahead of what is possible. 
Rama Akkiraju: 01:02:22
The vision level, it’s important. It’s good to have that vision, but when you actually sell, it’s really what is available and what is the art of the possible. So anyway, that book is an interesting lesson learned for all scientists on how to be truthful and honest about the state of their work. 
Rama Akkiraju: 01:02:46
The other one is an interesting one around … It’s called Code Zero; let me bring up the name of the author. There are multiple books with that same title. Just give me a moment. Marc Elsberg. So this one is an interesting one. It’s a fictional story about how personalized AI can go so far in intruding into your life, where it can start to give you suggestions on how to change yourself to be a good date when you want a date, or what to say to this person to have a nice conversation. That may be against your own personality, but it slowly starts to manipulate you to a point where you’re no longer yourself. You’re just listening to the AI system that is giving you suggestions, and you’re just taking them. And you’re now becoming somebody else that you are not.
Rama Akkiraju: 01:03:48
And there are dangers in getting into that. This fictional company, a social media company, starts to go down that slippery path of manipulating people based on their data, in some cases knowing clearly that they are exceeding all the moral lines and all that. It’s a racy thriller kind of a story, but it brought up a lot of interesting questions. I know you asked for books. One other thing I would recommend is this Netflix documentary-
Kirill Eremenko: 01:04:24
The Social Dilemma.
Rama Akkiraju: 01:04:25
… The Social Dilemma. Yeah. That was also an interesting one that brought up a lot of ethical questions around recommenders and personalization and those sorts of things.
Kirill Eremenko: 01:04:35
I totally loved the documentary. What do you think? Do you think we will find a way to get the benefits of AI but, at the same time, stay ethical? What is your feeling after this movie? Do we have a chance?
Rama Akkiraju: 01:04:53
Yeah. Interestingly, actually, I’m co-chair of an AI Council at an industry consortium called CompTIA. I posted a link about it on my LinkedIn page. A few of my colleagues who are part of this AI Council at CompTIA all work for different companies, and as part of that we are writing best practices for building AI models. Actually, many of the things that I talked about today are going to be coming out as white papers and such. We actually had an afternoon chat about that social media documentary.
Rama Akkiraju: 01:05:31
So coming back to answering your question, I don’t think I can give any better answer than you or anybody else. As a human society, we have to find a way to stay true to ourselves and not be manipulated by these AI systems. But it’s so hard, because we have built this web of all these different social media tools and apps that offer so many conveniences to us, and at the same time, as part of those conveniences, sneak in so many things that are taking us in a different direction and draining our time and emotional energy and all that.
Rama Akkiraju: 01:06:19
As adults, we are struggling. Imagine what teenagers and kids in the early adolescence phase are going through. It’s mind-boggling. One concrete suggestion that jumped out at me when I watched the documentary came from one of the researchers being interviewed, who said: if it is a pull, you are asking a question, and that’s a genuine one. It originated from you; you have some need and you are asking Google or any kind of search engine or any kind of tool for information. But when it starts recommending things to you, that’s the push from their side. Watch for that very carefully, as to whether you want to consume it or not, because that suddenly starts to take you in a direction where you’re not driving your agenda; they are driving their agenda to you.
Rama Akkiraju: 01:07:18
That’s one takeaway. As I said, I don’t have an answer to your question. I mean, we as humans have to find a way to keep our sanity and our focus. These social media tools really are there to offer so many conveniences and useful features, and at the same time they take away your focus from a number of things that we should be doing in our day. We have to find a balance, and it’s not that easy, especially with the kind of dopamine kicks that they keep giving. So being conscious of pull versus push, whose agenda you are serving, and just being very aware of it all the time whenever we consume it, I think, is the only way that we can drive our own agenda, as opposed to getting carried away by somebody else’s agenda.
Kirill Eremenko: 01:08:10
Yeah. And as you say, even as adults, it’s hard to be aware and conscious of these things, let alone for children. And I feel … I think they mentioned this in the movie as well, that legislation has a place, but it feels like legislation is not keeping up.
Rama Akkiraju: 01:08:32
It’s not catching up. Yeah. I mean, I think legislators all around the world, in all countries, are far behind in catching up to how fast these technologies are emerging, and to how many different facets there are to them that influence our lives and livelihoods and so many other aspects. Legislation definitely has a role to play. Absolutely.
Rama Akkiraju: 01:08:55
And similarly, self-control at the individual level also has a big role to play, which is much harder. No matter how much legislators legislate, in the end it all comes down to your individual discipline, and that is so hard. The education system is also lagging behind. Schools, colleges, and universities also have to teach the ethics and the etiquette around using social media, especially in these times when we are going through a pandemic and most students around the world are learning through online media forums.
Rama Akkiraju: 01:09:38
There’s so much on computers, on different online courses, not interacting physically with colleagues and friends, access to everything right there with a click. You’re listening to your teacher and you could look like you’re paying attention to your Zoom session, but you could be pulling up another browser, watching something, reading something; any number of things could happen. Distractions galore. Universities, and especially schools, middle schools and elementary schools and high schools, also have to catch up a lot to teach the etiquette and the morals around self-discipline and control. Otherwise, we will end up with a generation and a society of people who are just in it for the dopamine kicks. We lose our ability to focus for longer stretches and to do the kind of deep thinking that many scientists in previous generations were able to do, and to invent such important things, because they were able to take one idea, one thought, and sit on it for whatever time it took.
Rama Akkiraju: 01:10:52
It requires that solitude, that time to think. These days we are into something for 15, 20 minutes max and then say, oh, time for a dopamine kick, let’s go read this news article, let’s go do that. We all go through those distractions, and more so children. Yeah. I don’t want to end our conversation on a panicky note, so let’s bring it back to something more positive instead of the negative side. We are all so connected, information is so accessible, and we get to know things and can communicate with our family and friends at any time of the day. So there are a lot of good things too. We just have to find the right balance and navigate it carefully.
Kirill Eremenko: 01:11:39
Yeah. Yeah. Be the master of technology, not the other way around. 
Rama Akkiraju: 01:11:43
Exactly. 
Kirill Eremenko: 01:11:44
Awesome. Rama, thank you so much. It was a very interesting discussion. I’m very happy that you joined us on the show. 
Rama Akkiraju: 01:11:51
My pleasure. Thanks for having me on your show, Kirill.
Kirill Eremenko: 01:11:59
So there you have it, everybody. I hope you enjoyed this episode with Rama Akkiraju as much as I did and got valuable insights from it. My favorite part of the episode was what Rama mentioned about clarity of purpose. Super important, and the way she described it made it super clear that the company that is developing the AI and the company that is consuming the AI both need to have clarity of purpose, and also need to understand the time to value in order to get value out of artificial intelligence effectively, and to actually get the value in the first place. Very interesting, that example with natural language processing and speech-to-text recognition and how, depending on the industry and the application, it will be used differently, or will have a different purpose for different industries and different companies.
Kirill Eremenko: 01:12:57
That was my favorite takeaway. I’m sure you have yours. There’s lots of great insights. As usual, you can find the show notes for this episode at www.superdatascience.com/425. That’s www.superdatascience.com/425. There you will find the transcript for this episode, any materials and books that were mentioned on the podcast, as well as the URL to Rama’s LinkedIn and Twitter, where you can connect with her and follow her and her career. 
Kirill Eremenko: 01:13:25
If you enjoyed this episode and you know somebody who would benefit from understanding better how AI has matured over the years and how enterprises can apply artificial intelligence more successfully, then send them this episode. Very easy to share, just send them the link, www.superdatascience.com/425. They’ll be able to access both the audio and the video and choose what they would prefer to listen to or watch. And on that note, thank you so much for being here today. I look forward to seeing you back here next time. Until then, happy analyzing.