SDS 193: A serious talk on AI taking over jobs


Welcome to episode #193 of the Super Data Science Podcast. Here we go!

The Terminator. Blade Runner. The Matrix.

We’ve seen enough of robot uprising movies to understand that there are risks – just like in any tech – with Artificial Intelligence. Joining us today in this episode of Super Data Science Podcast is Roman Yampolskiy to talk about AI Safety.

Subscribe on iTunes, Stitcher Radio, or TuneIn

About Roman Yampolskiy

Roman Yampolskiy is a computer scientist at the University of Louisville and currently the director of the Cyber Security Laboratory in the Department of Computer Engineering and Computer Science at the Speed School of Engineering. He is also the author of many publications, including Artificial Superintelligence: A Futuristic Approach.

Overview

Aside from collecting robots, Roman uses them as a professor to spark high school students' interest in exploring the field of Artificial Intelligence. According to him, AI has passed major milestones in the past and will reach more in the future, so he highly recommends pursuing and committing to the field as a career.

AI is poised to change the landscape of businesses large and small, industries, organizations, and government bureaus. Systems and services could drastically improve, and AI can even make our everyday lives more comfortable and worthwhile. We have already seen notable successes in AI, from Siri, the virtual assistant, to AlphaGo, a computer program trained to play Go against expert players. These successes carry enormous potential and bring momentum to the future of AI.

But as the field grows exponentially, its threats pile up as well. If you haven't heard, a self-driving car killed a pedestrian because it failed to make an emergency stop, and an AI chatbot picked up racial slurs while conversing with users online. It's a sobering thought: if AI is going to penetrate everyone's lives, then we, as AI engineers, architects, scientists, and so on, have to be responsible and accountable for what we put out there. This is what Roman pushes for: AI safety.

AI Safety is a new field at the intersection of AI and cybersecurity. Roman is at the forefront of teaching AI professionals how to build safe and secure machines for the future. Discover how important this field is, and why people should not overlook it when releasing technology for human consumption, as Kirill and Roman dive deeper into it in this latest episode.

There’s too many to know about AI and AI Safety. So, today, to guide you, Roman answers questions like: Is AI going to take over jobs? Should we need a Universal Basic Income once AI has already outnumbered the labor force? How quickly will it take over the coming years? What is the concept of technological singularity? And, how will quantum computers play a role in this?

If AI is positioned to dominate the future, then take Roman's advice for AI career starters, professionals, and business owners. It's better to plan ahead and be in control of what the future brings, so start by listening in!

In this episode you will learn:

  • What is AI Safety? (05:17)
  • What has been done right and what has been done wrong for the future of AI. (06:38)
  • Prioritizing security when releasing a new tech. (09:50)
  • How a small failure of a machine can become global or drastic. (11:50)
  • Roman defines the meaning of Artificial Intelligence. (18:36)
  • The experts who contributed to Roman’s book. (20:45)
  • Ethical Considerations of AI. (21:59)
  • Will there be a need for Universal Basic Income if AI skyrockets? (25:20)
  • Advice for people starting their career. (29:39)
  • Advice for professionals to get to high-end jobs. (33:36)
  • Advice for business owners. (38:49)
  • In general, where are we going as a civilization with AI? (40:14)
  • Will AI take over humans? (41:40)
  • The concept of technological singularity. (42:46)
  • The use of quantum computers. (46:50)

Items mentioned in this podcast:

Follow Roman

Episode Transcript


Full Podcast Transcript


Kirill Eremenko: This is episode number 193 with artificial intelligence expert Roman Yampolski. Welcome to the Super Data Science Podcast. My name is Kirill Eremenko, data science coach and lifestyle entrepreneur, and each week we bring you inspiring people and ideas to help you build your successful career in data science. Thanks for being here today and now let's make the complex simple. Welcome back to the Super Data Science Podcast ladies and gentlemen.
Very excited to have you on the show today, and today we've got a very interesting episode about artificial intelligence. Roman Yampolski is a renowned expert in the field of AI. He's written multiple books, he's given many talks on this topic. As you'll see from this podcast, it is just so fast-paced, get ready to keep up, probably prepare yourself, and be in a high energy state because if you're taking notes, you're going to be taking notes like crazy.
If not, just make sure you concentrate, because the whole conversation is going to be very fast. You will actually notice how we dive straight into it. Right away from the very start we go straight into AI and it's boom, boom, boom, question-answer, question-answer, question-answer. It was very exciting for me personally to hear from one of the leading experts in the field of artificial intelligence.
What do we talk about today? Here are just some of the things we cover: artificial intelligence safety. What is artificial intelligence? Are artificial intelligence and the Internet evil? Careers in artificial intelligence and security. Artificial intelligence taking over jobs, universal basic income? Data science and data scientists automated by artificial intelligence?
How artificial intelligence impacts business owners and what business owners should know about AI? How quickly AI will take over in the coming years, and we'll even touch on quantum computers. That is just a quick overview of some of the topics we'll be covering. You will hear many more in this podcast. I personally can't wait for you to check out this conversation. Here we go. Without further ado, I bring to you AI expert, Roman Yampolski.
Kirill Eremenko: Welcome to the Super Data Science Podcast ladies and gentlemen. Today we've got a very exciting guest on the show Roman Yampolski. Roman welcome to the show. How are you doing today?
Roman Yampolski: Doing really well. That was the best pronunciation of my name ever.
Kirill Eremenko: That's true. The thing is that both Roman and I speak Russian, and before the show I asked him how to pronounce his name and I tried with a Russian accent, so it'd be Roman Yampolski. Now for some reason I said it with an English accent. I totally get it. I hope you can forgive me.
Roman Yampolski: No I actually liked it. I wasn't being sarcastic. It's pretty good.
Kirill Eremenko: In any case, very excited to have you on the show. For those who don't know, Roman is a top AI expert in the space of security and artificial intelligence in general. We were just chatting on video, and you've got all these robots behind you on the fridge. Tell us the story behind that. Why do you have so many robots sitting in your apartment?
Roman Yampolski: I'm a professor and part of my job is to recruit students. We go to local high schools, different events, and the best way to get kids interested in computer science and engineering is to let them play with robots. Every year I'll buy the latest models, and over the years I got quite a collection, my defense army behind me.
Kirill Eremenko: What's the latest robot just of this year? Do you know the name of the model?
Roman Yampolski: I have no clue, but actually they're getting more disappointing every year. I don't know if it's something to do with Toys "R" Us going out of business or what, but the quality is just dropping. I'm looking for good ones to buy under $100 and there is not much available.
Kirill Eremenko: Roman you're in Kentucky, right?
Roman Yampolski: Yes.
Kirill Eremenko: Louisville is that the city?
Roman Yampolski: Louisville, Kentucky.
Kirill Eremenko: Louisville, Kentucky. How did you get there? You've been in Kentucky 10 years. Tell us the story behind that.
Roman Yampolski: After I graduated with my PhD from University at Buffalo, I applied to I think it was 76 academic jobs and that was 2008. The market was not doing well. I got one offer and I took it.
Kirill Eremenko: It was in Kentucky?
Roman Yampolski: It was in Kentucky. Actually I love it. Kentucky is awesome. I think it has a brand problem. People don't recognize just how great it is.
Kirill Eremenko: That's for our non-US listeners that's in the middle of the US. Is that right?
Roman Yampolski: The real America yes.
Kirill Eremenko: Tell us a bit about your background. You mentioned you're a professor and you're in the space of artificial intelligence. If somebody off the street were to ask you, Roman what do you do? What would you say?
Roman Yampolski: I'm a computer scientist, which means I teach humans how to teach computers, and specifically I try to make whatever instructions we give to computers safe and secure, so bad guys can't hack them and the system works as intended. That's a very high level introduction to what I do.
Kirill Eremenko: How did you get into this field?
Roman Yampolski: I always loved technology, always loved video games and science fiction, and it naturally progressed to where now I'm doing science fiction.
Kirill Eremenko: I also wanted to congratulate you. You said you just published a book; it just became available on Amazon. Huge congratulations on that.
Roman Yampolski: Thank you. This is my tenth, so extra special, and it just came out as the number one new release on Amazon in artificial intelligence. It's a huge one, almost 500 pages, and has the best researchers and philosophers in the field of AI safety and security. Really excited.
Kirill Eremenko: This is your tenth book. That is crazy. That is so cool. For our listeners, the title is Artificial Intelligence Safety and Security. It's a first edition and you can find it on Amazon. Tell us a bit about this book, like what was the inspiration that went into this book? What is it all about?
Roman Yampolski: AI safety is a new field at the intersection of artificial intelligence and cybersecurity: how to make safe and secure machines. Everyone's trying to make more capable machines, just get AI to do something, release some product. Very few people are concerned with making those products safe and secure for consumers, for society in general. This field was born a couple of years ago and it had no textbook, no centralized source where people could quickly pick up what is going on and what the state of the art is. That was the idea behind the book.
Roman Yampolski: I got 28 chapters in it, most of them from top scholars in artificial intelligence, and philosophy, and cybersecurity, and many other domains and all of them speaking about this issue. How do we control intelligent machines? How do we secure them?
Kirill Eremenko: Right now we don't really have robots walking around the streets. The AI that most companies are using and making profits on is things like predicting consumer behavior, recommender systems, or products like Alexa. What is the concern there? Is the concern that they're going to rebel, or is the concern that there might be some biases inherent in AI, or some discrimination that they're performing? What is the main safety concern?
Roman Yampolski: I think I'll disagree a little. We do have robots all around us; we may not recognize them as such. For example, self-driving cars are exactly that. We also have delivery robots now. More and more we do have physical, embodied systems operating alongside humans, and we need to make sure nobody can hack your self-driving car and make you fly off a cliff. That would be one existing technology example. I do have an interest in future technology as well, so not just the intelligent systems we have today or in the next five years, but long term.
Roman Yampolski: What happens in 10 years, 20 years? What happens to military AI? What happens to the software you described, of course? All the problems people are concerned about, technological unemployment, algorithmic bias, are part of this. There are different components to the safety landscape. Some are more immediate concerns, economic concerns; others are more long term, more in terms of existential risk and survival.
Kirill Eremenko: It sounds like what the guys from Siri did, basically, when they were creating Siri: they looked to the future. The technology wasn't even ready yet and they were already creating something that would only be possible with future technologies. Does that sound about right with your book? You're not only describing the present, but also looking to the future and trying to see what precautions we can take before it's too late?
Roman Yampolski: That is exactly right. I think it is the only way to be successful in the business and startup space. You can't keep up if you're just working with what's been around for many years and not planning ahead. You have to look at exponential trends in technology and go: where is this technology going to be in five years? Wonderful. If I start work now, in four and a half years I'll release this amazing product which will match the capabilities of software and hardware at the time and will dominate the market.
Roman Yampolski: Same with cybersecurity. You need time to develop safety mechanisms. If somebody releases a product today and only then we start looking at how we can secure it, well, that's not good. That's what we did with the Internet, for example. We released it, it's super useful, but it has no security built in, and as a result we're all paying the cost. We're now doing it with Internet 2.0, the Internet of Things. Same thing.
Roman Yampolski: Just release the product, release the product, no safety or cybersecurity built in from the ground up. I'm hoping that we're not going to repeat this mistake with intelligent systems.
Kirill Eremenko: Can you maybe comment on Asimov's three laws of robotics?
Roman Yampolski: Sure. They're literary tools. They're designed to fail to make interesting books. They obviously cannot be implemented in any real systems. They are self-contradictory, they're ambiguously defined. I think it's more of a lesson in what doesn't work for safety and security.
Kirill Eremenko: Let's just quote them for the people who don't know them. They were in the movie I, Robot with Will Smith, and obviously in the books. The first one is, "A robot may not injure a human being or, through inaction, allow a human being to come to harm." Number two, "A robot must obey the orders given it by human beings except where such orders would conflict with the first law." Three, "A robot must protect its own existence as long as such protection does not conflict with the first or second law."
Kirill Eremenko: Where in your view do they fail? How would you augment them to make systems better and safer in the future?
Roman Yampolski: They fail at the level of definitions. When you say protect a human, you have to define what a human is and what it means to prevent them from coming to harm. If I see you smoking or eating a donut, that clearly endangers your health, so should I prevent you from engaging in those actions? I feel there is a non-zero chance of you having a car accident. Should I prevent you from ever driving, or should I keep you locked up in some box just to make sure you're in a good, safe environment at all times?
Roman Yampolski: You can go quite extreme with those. Obviously to a human they seem like silly solutions, but to a machine they make just as much sense given the requirements of the three laws.
Kirill Eremenko: I never thought of it that way. Given the complexity of the question then, how did you go about it in your book, if you don't mind sharing a couple of examples from the book that just got published?
Roman Yampolski: This is, as I said, a new field. We don't have many solutions. At this point we are recognizing the problems and what might go wrong. A lot of research is trying to understand how such systems fail. One of my projects is collecting different examples of how intelligent systems fail: how robots fail, how industrial systems can fail, how software fails in different domains. The hope is that we can analyze those examples to predict future problems and prevent them.
Roman Yampolski: A lot of it is classifying different types of failure: what we can expect from external actors, hackers, internal problems. There are solutions for small subdomain issues, but there are no global solutions yet. That's exactly what we're researching, how to make those systems safe and secure. As of today, no one has a safety mechanism which works for any level of intelligence, scales up, and works in all environments. That's exactly what the problem is.
Kirill Eremenko: Are we seeing in the current day and age already some examples of failures, where there have been consequences of artificial intelligence systems failing?
Roman Yampolski: Absolutely. I'll give you a few examples, some more trivial than others. Anytime you have a self-driving car kill a pedestrian, that's obviously a failure of the system; it's designed to avoid exactly that. More trivial is something like Microsoft's chatbot Tay, which was released to interact with users but very quickly became racist, violent, and nasty to its users, embarrassing the company.
Roman Yampolski: At that level it's clearly a failed chatbot, but they didn't anticipate this was going to happen, which is silly when you give users a chance to control the learning of a system.
Kirill Eremenko: Do you think these failings will become more global, more drastic in the future, or do you think that we still have an opportunity to control them?
Roman Yampolski: We have an opportunity to do better, but as of right now the trend is that they become more common. They are growing exponentially as technology itself grows and more and more people use different types of AI, and the damage they cause will be proportionate to how much control the system has. If you have a nuclear response system or a system controlling a power plant and it fails, the consequences are very serious. I see exactly that happening with more and more malevolent actors.
Roman Yampolski: Hackers will have the ability to use this technology in a negative way. It's a dual-use technology, so they'll be able to take a very reasonable, good product developed for a specific purpose and use it in an unpredictable way, or at least a way which we didn't foresee, to cause maximum damage.
Kirill Eremenko: Did you hear that literally a few days ago, I think MIT Technology Review posted on this, Google just gave control over its data center cooling to an artificial intelligence? I think a year ago they were doing some tests, and it was very successful, and with the help of DeepMind they've handed over control of their cooling systems to an artificial intelligence. We all know how massive Google is and how big their data centers must be. What are your thoughts on that?
Kirill Eremenko: This is a first massive step in that direction that artificial intelligence is now taking over this whole control. What problems do you see that that could cause?
Roman Yampolski: That's obviously a great solution. They're saving energy, I think something like a 40% improvement in the efficiency of those data centers. Again, how secure is the system against external impact? Can someone penetrate it and mess with it, disabling all Google services as a result? I don't have any internal knowledge of that specific system, but that's something I would check first, just to make sure this is not going to be an issue for us.
Kirill Eremenko: Let's say, hypothetically, you were implementing a system like that. What controls would you implement? Would you have another artificial intelligence monitoring it, or would you have humans monitoring it, or would it be a red button that a human can press to have everything revert back? I just want to understand, with an example, what a control system over an artificial intelligence looks like nowadays.
Roman Yampolski: You don't always have the option of reverting back to human control. The systems are so complex that no single person can manage them; you are stuck relying on software. It's good to have multiple levels of redundancy, where if this AI fails there is a system which can take over. Again, we don't have a perfect solution for how to control systems of such complexity. That's exactly what my research is all about: figuring out whether it is possible, what level of accuracy we can achieve with that, and so on.
Kirill Eremenko: My next question, then, is about artificial intelligence itself. What is your definition of AI? I would assume some complex systems use machine learning but not necessarily artificial intelligence; they don't have that component of reinforcement learning in them. Some other systems might use deep learning but, again, without reinforcement learning. What is the definition of artificial intelligence in your book, in your worldview?
Roman Yampolski: I don't think it's technology specific. There are many different ways to implement intelligent algorithms. The description is more related to the ability to automate human behavior, physical labor, cognitive labor. Automating things people know how to do, teaching machines to do them. Whether it's writing books, driving cars, any type of human behavior, and whether it's done with neural networks or some expert system approach, it doesn't really matter. The behavior is the same, and some of the possible problems are similar.
Kirill Eremenko: Basically, just so I understand, would a logistic regression, for instance, also qualify as an artificial intelligence system under this definition?
Roman Yampolski: The reason it's hard to pinpoint the definition is that it's so fuzzy, both for intelligence and for artificial intelligence. At some point I think I made a statement that the smallest unit of AI is an if statement: it makes a decision. You cannot go any smaller than that. It's trivial, but it's true. There are good definitions in terms of what we mean by intelligence: essentially, the ability to optimize for whatever your goals are in multiple environments, over other agents.
Roman Yampolski: We don't have a very well defined science of measuring intelligence of non-human agents, trying to understand how it can be combined with other agents. All of it is part of what I'm looking at, different aspects of it. What can we say about intelligence? How do you develop it? How do you control it? How do you measure it? How do you combine it? Anything to do with intelligence of any type of agent, human or artificial is part of what I'm interested in.
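Roman's quip that the smallest unit of AI is an if statement can be made concrete with a toy decision rule. This is a hypothetical sketch for illustration only; the function and the rule are invented, not from the episode:

```python
# The smallest unit of AI, per Roman's definition: a single if statement
# that makes a decision. The rule below is an invented toy example.

def classify_message(message: str) -> str:
    """A one-rule 'spam filter': the entire 'intelligence' is one condition."""
    if "free money" in message.lower():
        return "spam"
    return "not spam"

print(classify_message("Claim your FREE MONEY now!"))  # spam
print(classify_message("Lunch at noon?"))              # not spam
```

Anything from this single hard-coded rule up to a learned model like logistic regression sits on the same spectrum of automated decision-making; the fuzziness Roman describes is exactly where on that spectrum we draw the line and call it "AI."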
Kirill Eremenko: You said in your book you've got some of the brightest minds in the space sharing their thoughts. Could you mention a couple of people you talked to for this book specifically, and what their contribution to the field has been?
Roman Yampolski: This is an edited book. It has chapter contributions from many, many scholars, some of them big names, so a lot of name dropping here. Bill Joy, of Sun Microsystems, is one of the contributors. Ray Kurzweil, Director of Engineering at Google. Nick Bostrom, a philosopher famous for his work on the safety of superintelligence. Max Tegmark, who did amazing work in physics with the mathematical universe and such. Just to give you a feel, but there are I think 45 contributors in total.
Kirill Eremenko: How did you get them all in one room?
Roman Yampolski: They weren't in one room. It took about a year of my life to track them down and pester them into doing it. I think one day I'll write a book about how I got this book to come out. It's a lot of busy work.
Kirill Eremenko: Let's shift gears a little bit. I'd love to get your opinion on the ethical considerations of artificial intelligence. We talked a bit about safety, and we'll probably come back to that topic in a bit, but let's talk about ethics. What do you think will happen, and will it happen, when AI starts taking over jobs? Usually you hear things like: 100 years ago, 70% of the US, or something along those lines, basically more than half of the population of the US, was employed in agriculture and farming, creating food.
Kirill Eremenko: Now it's less than 3%, but that happened over a gradual period of time. Over 100 years technology helped free up humans to do more interesting things. Now we're facing a problem where this is going to happen very quickly over the space of a decade, a lot of jobs are going to be taken over by artificial intelligence whether they're blue collar or white collar jobs. The question I would have for you is, what are your predictions there and what's going to happen to the world?
Roman Yampolski: I agree with those trends, and I think there are a few differences from the historical precedent. One, we always developed tools: tools to help us be more productive, tools which we used to become more efficient, and so the economy grew. This is different, because now we're developing agents. They no longer need us to use them. They can do more and more independent work, and eventually, when we get to human-level performance, they'll be able to do all work for us.
Roman Yampolski: All jobs will be automated, including my job, any other job. That completely changes how society functions. You no longer have to go to work, which for some of us is a disappointment, while for others it's a great thing that they finally don't have to. There is an economic support net, some unconditional basic income you get. Many people would be quite happy with it. Short term, yes. But will it replace the jobs we're losing with other jobs, maybe better jobs?
Roman Yampolski: The problem is that if you lose a low-skill job, like mining in Kentucky, it doesn't mean you can now retrain the same people to do high-skill work. People often talk about how we'll create new jobs in this hyperspace, cyber-world, virtual space. Most of those jobs are not accessible to the people who lost their low-skill jobs. We need to come up with an intermediate plan for handling that.
Kirill Eremenko: Do you think we will come up with a plan like that?
Roman Yampolski: There are people working on it. Again, taxing robots and trying to redistribute that lost income back to the people who lost their jobs seems like a reasonable step, at least for the short term. There are plans to retrain people as much as possible. I think short term those are adequate solutions. I'm not sure we have a better plan long term for what to do with people's time. If all of us all of a sudden have 40, 60, 80 hours of extra time every week, what do we do with it?
Roman Yampolski: Not everyone is an artist or a poet. What happens? Is it an explosion in sports? Is it lots of fishing? We need to figure out what people do with their time.
Kirill Eremenko: How to get people not to be bored. When you don't have something you're contributing to, something you're giving back to or creating in your work and seeing the results of your labor, there's a possibility of becoming perpetually upset, sad, or even depressed, not just generally bored. It is quite a big challenge. I understand universal basic income and the whole premise for it, but that's my concern. What are people going to do with their time?
Kirill Eremenko: One idea I heard, though, was that in order to receive universal basic income, people would have to learn. They would have to study, go on websites like Khan Academy and take courses, or Coursera and others, and learn new skills, new things, or discover history and the like, just to keep them occupied. What are your thoughts on that?
Roman Yampolski: I agree we'll have a problem with so many people defining who they are through their careers. Loss of that identity would be problematic. We'll definitely see a lot more bored people, sad people, maybe suicidal tendencies will go up. We see it with more advanced societies requiring less to survive, less work effort. It may work for some to encourage learning and continuous education. It doesn't work for everybody, not everyone can take an online course and do well in it.
Roman Yampolski: Also, if it's unconditional income, then you can't really condition it on taking courses and doing well in them. That's another thing: it becomes your job, and it's not like a stipend for a student. I don't think sentience is a part of it; we just have concerns about very capable learning systems. Whether they're self-aware, whether they have consciousness, is a very different issue that has more to do with robot rights and things of that nature.
Roman Yampolski: If you have a system which is capable of learning at human level, with no safety mechanisms, no controls, and you just freely connect it to the Internet, you have very little control over what it's going to learn, how it's going to use that information, and who else is going to get access to it. In general, I would not recommend connecting your AGI to the Internet right away just to see what happens.
Kirill Eremenko: On that same topic, you mentioned you interviewed Ray Kurzweil and for our listeners on futurism.com, there's this little piece called the Dawn of the Singularity. It's based on predictions from Ray Kurzweil. Ray Kurzweil has been making predictions since the 90s. He's made 147 predictions and they have had an astonishing accuracy rate of 86%.
Kirill Eremenko: Here there's a little infographic where he predicts some really crazy things. For instance, in the year 2029 artificial intelligences will claim to be conscious and openly petition for recognition of this fact. Let's see another one: in 2040, non-biological intelligence is billions of times more capable than biological intelligence. 2045, the singularity: AI surpasses human beings as the smartest and most capable life form on the planet.
Kirill Eremenko: 2099, which, if you think about it, what does that give us? Only 81 years away. Organic human beings are a small minority of the intelligent life forms on Earth. Is that really the future that awaits us?
Roman Yampolski: Obviously nobody knows for sure. Ray has a good record of making very accurate predictions. He predicted the year computers would win chess tournaments against humans, the world championship. He has a number of other, as you said, reliable predictions. I think this far in advance it's very hard to be accurate, but the general trend he's pointing to, the growing capability of machines versus humans and the integration of biological with non-biological, is definitely something I agree with.
Kirill Eremenko: Let's move now away from futures and things like that, let's move more to now. What can people listening to this podcast do for their careers now, to take into account all these things that are happening around artificial intelligence?
Roman Yampolski: One thing, if you're a student: be very careful with the major you select. Make sure that by the time you graduate and are looking for a job, the job still exists. Don't major in some dead technology no one's going to use. Try, again, as we said, to predict the future and place yourself just right for the future demand in the occupation. Right now things like machine learning, cybersecurity, and cryptoeconomics are incredibly hot.
Roman Yampolski: There is large demand for them, and it's likely they will continue to grow in future years. If you're just at the point where you're selecting a major, I think those are good options for you. If you're already a working professional, maybe pick up those skills, maybe take a course. It's definitely going to be useful in your future to be able to do those things.
Kirill Eremenko: One example I really like of a profession that probably won't exist for very long is umpires, the people who watch tennis matches and call out whether the ball is out or in. Even now that's more of a tradition than a necessity. When that tradition dies off, maybe 10, 20, 30 years from now, it won't really be necessary. I wouldn't study to be an umpire, along those lines. Do you have an example?
Roman Yampolski: I don't even know if you study for that or just something you pick up in 20 minutes. I have no knowledge of that whatsoever, so don't listen to me.
Kirill Eremenko: What do I do? Do you have an example of a profession that will probably not exist in 20, 30 years?
Roman Yampolski: I'm much more pragmatic. Where are large numbers of people working? Things like accounting. To me it seems completely insane that there are people retyping information from printed receipts back into a computer, then using Excel to sum things up. All of that could be fully automated with today's technology. There is no reason for so many of those low-level accounting jobs to exist already. I think there are predictions which say that in the next 10 years or so, something like 84% of accountants will be automated.
Roman Yampolski: We saw it with tax professionals. Most people now do their taxes themselves, I think, with the help of software, and we'll see this trend continue.
Kirill Eremenko: I previously had on the podcast Daniel and Leigh Pullen, who are experts in robotic process automation, and that's one of the technologies that is going to edge out accounting in the near future. What they mentioned was that the real ethical concern is not about the accountants who are going to lose their jobs, but more about what kind of jobs are going to be edged out by artificial intelligence.
Kirill Eremenko: That is like as you mentioned, the low level accounting jobs, the re-entry jobs, the more learning jobs that people have to go through in order to get to the higher level accounting jobs. The problem there is once these low level jobs are edged out by artificial intelligence, the question is, how will people get to those high end jobs? Ultimately, higher more complex tasks like corporate accounting and things like that, you will still need humans for quite a bit of time after the low level jobs are gone.
Kirill Eremenko: The question is, after graduating from university, where will humans get the experience to reach those highly complex jobs if the low-level jobs are all taken over by artificial intelligence?
Roman Yampolski: That's definitely something to look at, how do we actually continue training humans as AI becomes more competitive in those low end jobs? Absolutely.
Kirill Eremenko: That's some great advice for students and people who are looking to get started in their careers. What about professionals who are in the space of data science? There are lots of people who are data scientists or moving into the field of data science right now, and they're listening to this podcast. What would you say to them? Is artificial intelligence a field they should look into, maybe to augment their machine learning skills? What types of artificial intelligence should they look out for, and things like that?
Roman Yampolski: Definitely try to understand the cutting-edge technology in machine learning; use those tools to help you build better models and automate the process of model building. More and more we see that the standard day-to-day work of data analysis is automatable; more and more data mining and things of that nature can be done fully autonomously by machines. What is your contribution? If you're doing the same thing every day, clicking a few buttons, you're highly replaceable.
Roman Yampolski: Try to come up with something unique, something differentiating you, you developing new not just number of layers in a neural net, but a new type of neural network. Things of that nature. Not possible for everyone, but if you can get there it will guarantee you job security for as long as possible.
Kirill Eremenko: I agree with that sentiment, and I actually always mention to data scientists that in my opinion, the highest-paid data science jobs, and the ones that will be the hardest to automate, are the connector jobs. The data scientists, machine learning experts, and AI experts who are not just creating the algorithms, but who are actually talking to the clients, explaining how these algorithms work, and acting as the connector between the world of technology and the world of business, or the world of the consumer, whoever is consuming that technology.
Kirill Eremenko: In my personal opinion, that is going to be the hardest part to automate, because it requires social skills, the ability to explain complex topics to a non-technical audience. What are your thoughts on that?
Roman Yampolski: That makes sense. I would call it human privilege. Something machines cannot yet do, participate in our social gatherings, go to clubs, play golf with us. It definitely gives you an advantage.
Kirill Eremenko: Do you think machines will ever be capable of creativity?
Roman Yampolski: I think they are already. We're seeing machines create beautiful paintings, music, and poems at a level where the average person cannot tell whether it was artificially generated or a human made it. To me, they've been doing it for a while.
Kirill Eremenko: I agree. However, I think that's directed creativity, like a human tells it to create a poem, and it creates a poem. A human tells it to recreate a painting, or create a painting, and it does that. Will they ever have their own ideas and thoughts on what they want to do?
Roman Yampolski: I think they will, but I want to keep it fair. How many people out of seven and a half billion engage in that level of creativity? Almost none. We call them creative geniuses; there are maybe a dozen of them. Most people with human-level general intelligence are not very creative, don't come up with anything novel, and can barely do it even when told to. Tell them to write a poem, and most cannot, though maybe some will succeed.
Roman Yampolski: I think it's a very high standard you're setting well beyond what people are expected to perform as. Nonetheless, I believe computers will get there where they will be super creative, much more creative than humans, we have certain advantages that you can really consider the space of possibilities fully and come up with things we'll probably won't understand.
Kirill Eremenko: When do you predict the Turing test will first be passed successfully?
Roman Yampolski: It really depends on how you define it. Turing, in his original work, talked about a 30% success rate over five minutes; we already got to that level. If you're talking about an unrestricted Turing test, that's basically equivalent to human-level intelligence. Again, relying on Ray Kurzweil's predictions, I think something like 2045 is a very reasonable number.
Kirill Eremenko: We talked about advice for students, advice for existing data scientists and people wanting to move into data science. Let's talk a little bit about advice for business owners. About 10% or just above that of our audience are business owners, entrepreneurs, directors, executives. Question for you. What would you say to them about artificial intelligence? How important is it for companies to start adopting AI, and should they start doing this quickly, or wait for their competitors to test the waters?
Roman Yampolski: I think they're already behind if they haven't started working on it. You're competing with companies which are optimizing their processes, automating them, making them as efficient as possible in terms of communication and data analysis. If you're not doing something with that, you're really falling behind already. I strongly suggest seeing how you can automate some of those processes, some of the labor costs. It's definitely worth your time, I think.
Kirill Eremenko: Andrew Ng has this quote that "AI is the new electricity." A hundred years ago most businesses didn't have electricity; it was a challenge, it was just getting introduced. I love this question, I ask people sometimes: name me even one business, it doesn't have to be an online business, any kind of business, that doesn't use electricity. Pretty much every single business in the world uses electricity nowadays.
Kirill Eremenko: My question to you will be, how quickly do you think we'll get to a stage where every single business in the world will be using artificial intelligence? How many years will that take?
Roman Yampolski: That goes back to our question about definitions. If we stick with a very low level of what it requires to be called artificial intelligence, then I think most businesses today already do it. They have spell checkers for writing their reports, they have GPS systems for navigating deliveries. It really depends on what you mean by that. If you're talking about human level intelligence, no one has that yet.
Roman Yampolski: I think all the big successful companies are betting heavily on machine learning and AI. Whether it's Google, Facebook, or Apple, all of them are at the forefront of that research.
Kirill Eremenko: We covered all of those aspects of advice. Thank you for that. My next question would be, in general, overall, where do you think we're going as a civilization? Do you really think that we're going to, as Elon Musk predicts, start integrating ourselves with artificial intelligence? Are we going to live side by side with AI?
Roman Yampolski: It's very hard for someone not to do something if everyone else is engaging in it and it gives them a competitive advantage. If your competitors integrate with machines, and they have better memory and access to the Internet, it's very hard for you to say, I'm not interested in that, I'll stay unaugmented, live an Amish lifestyle. You can, but you won't be competitive in that environment. It seems like the capitalist system forces you to adopt all the latest gadgets, and brain implants, or whatever it is, just to be able to participate in the system.
Roman Yampolski: If you don't have a smartphone today, are you really competitive in a business space? Do you react to changes in the market? I think it's going to happen. My concern is long term as that brain chip you have, that smartphone you including becomes more and more capable. What is it you contributing to this hybrid relationship? You become a bottleneck. You have a slow piece of meat attached to the processor. Very quickly you become irrelevant and the system removes you from the equation.
Kirill Eremenko: Does that mean that you're saying that artificial intelligence will rebel and aim to get rid of all humans?
Roman Yampolski: I don't like this term rebel; it implies some desire to take over, a power struggle. Those are human qualities. I'm just saying, if you're designing an engineered system, and you start by having a human brain and an artificial intelligence working together, right now that makes perfect sense. Humans have capabilities machines don't, and vice versa. The hybrid is more powerful. Over time, as machines are capable of doing more and more of what a person does, and doing it faster, there is less and less need for having the human in that system.
Roman Yampolski: If you take this to the extreme, there is zero need for a human to be part of that equation. They get taken out of that system. Now machines are the ones making decisions, producing everything, essentially in charge of everything. The question becomes, what is it we doing and what are the rights and privileges we have?
Kirill Eremenko: Tim Urban from Wait But Why has a wonderful piece on this, I think it's called the path to artificial intelligence. He talks about how, as soon as we have general artificial intelligence, which Ray Kurzweil predicts around the year 2045, that will be like a technological singularity. Once we have that, it will start thinking so fast and creating things so quickly that we won't be able to keep up.
Kirill Eremenko: It will be a completely different world. Even now we can see the amount of data in the world growing so fast that it takes us only a couple of years to double all the data created since the dawn of humankind. Once general artificial intelligence is in the game, it will start inventing things for us.
Kirill Eremenko: We will start living forever, we might get time travel, we might get teleportation, and things like that just because it's so much smarter than us. Do you think that that type of technological singularity is something that we are going to be faced with?
Roman Yampolski: I think so. It may be a much slower process; it may not learn that quickly, so instead of minutes or days it will take months or years, but that sounds like something we're going to face.
Kirill Eremenko: How do you prepare for a world like that then?
Roman Yampolski: I don't know if you can. The idea is that we somehow can stay in control and stay in charge, but so far I haven't seen any good examples or reasons to believe that a lower intelligence can control much, much higher intelligence. I'm still looking for a good mathematical proof or evidence of any kind, but it doesn't look promising so far.
Kirill Eremenko: Then the natural question is, why are we doing this? Why did you write your book? Why are people doing research in this space, when the brightest minds of the world are still debating whether or not we will be able to prepare or even survive this kind of change?
Roman Yampolski: It's a competitive marketplace. If you don't do it, your competitor will do it anyway. You might as well participate, or at least have some control over the outcomes. I do research hoping to find solutions, hoping to discover ways we can stay in control to a certain degree, or maybe partial control. We can do better than doing nothing.
Kirill Eremenko: Is the safest option to take a one way ticket to Mars and just look, watch, observe from there?
Roman Yampolski: I don't know. A lot of people I meet are interested in going to Mars, I never understood that. It doesn't seem like it's a fun place, there is not much to do, and you'll probably die very soon. I'm not excited yet.
Kirill Eremenko: At least you won't have robots to worry about and artificial intelligence.
Roman Yampolski: Mars is a planet completely populated by robots, over 100 of them right now.
Kirill Eremenko: What else was I going to talk about? What are your thoughts on how quickly Google DeepMind won at the game of Go? That was not expected for another 10 years. Does that mean our advances in AI are way faster than we were thinking?
Roman Yampolski: They are. It was a bit surprising, but again, they had very good hardware, a lot of compute. Usually projections were made for a standard computer, how long it's going to take before it's powerful enough, whereas they had access to a whole data center and could do years' worth of processing in a matter of weeks or months. I think you have to adjust for that, but still, it was about 10 years ahead of schedule. I think it's a very good warning sign about what's coming in other domains.
Kirill Eremenko: What are your thoughts on quantum computers? Will that somehow enable AI even further or is that some completely other area of technology?
Roman Yampolski: It may be useful. I don't think the human brain relies on quantum effects that much, despite what some people suggest. I think it's possible to get intelligent and superintelligent systems without quantum computing, just with the von Neumann architecture. It does seem to be doing some very cool things in the space of cryptography, for example breaking certain cryptographic protocols. It's definitely a very impactful technology, but it's still in its infancy right now.
Roman Yampolski: It's not very useful for anything, but it's growing exponentially as well in terms of number of cubits it has and I think it will impact e-commerce, how we do public encryption in the short term absolutely.
Kirill Eremenko: I was just going through my notes from this podcast and patching up the holes that we didn't cover. You mentioned that AI security is a good career path for the future, and from all the things we discussed today it's pretty evident that that is the case. Where would somebody learn about artificial intelligence security and build a career there? Is that something that is even taught anywhere?
Roman Yampolski: It is starting to be. As I said, we now have a first textbook on the subject, and there are a number of centers at actually the best universities. Places like Berkeley, MIT, Oxford, and Cambridge have centers specifically for studying AI safety, so if you're lucky enough, you can get into one of them. I'm always happy to take on students at the University of Louisville. You do have options.
Kirill Eremenko: I'm actually in awe and I will need some time to process all of this. Is there anything else that you personally would like to get across to our listeners? There'll be over 6,000 people listening to this podcast in the next couple of weeks. Is there any message you'd like to share with them before we start wrapping up?
Roman Yampolski: I'm always amazed at the number of people who know nothing about the things which are most important in my life like superintelligence, cryptocurrencies, life extension. My advice is always just to learn about those things. They might impact your life, change your life, and make your life better if you understand what the topic is. I assume your listeners are already above average in terms of what they're interested in, but if those topics sound completely novel to you, definitely do some reading.
Roman Yampolski: We mentioned Ray Kurzweil's work, he has awesome books in singularity and I would recommend you get engaged with that.
Kirill Eremenko: I can also recommend a newsletter by Peter Diamandis that you can get emailed to you; it's called Abundance Insider. I enjoy it once a week, and it gives you the most recent updates on all of these technologies. That's another good one. Roman, thank you so much for coming on the show. Before we wrap up, I wanted to ask: where can our listeners get in touch with you, follow you, and find out more about your career and the things you're going to be getting into in the future?
Roman Yampolski: You can follow me on Twitter, you can follow me on Facebook, just don't follow me home. It's very important. If you just google me, all my papers are available for free online. My books are available on Amazon. Just Google my name, Roman Yampolski.
Kirill Eremenko: Is LinkedIn also an option to follow you?
Roman Yampolski: I don't use it that much. I think it's more for industry than academia, but I do have an account, so if you absolutely must I'll friend you.
Kirill Eremenko: Twitter is better right?
Roman Yampolski: Twitter, Facebook yeah.
Kirill Eremenko: Roman, thank you so much for coming on the show. Wonderful discussion today. One last question for you. What's a book, apart of course from your own book that we already discussed, and by the way, for our listeners, it's called Artificial Intelligence Safety and Security and it just got released on Amazon; I highly recommend everybody pick it up. What's another book that you personally enjoyed that you can recommend to our listeners to help them enhance their careers?
Roman Yampolski: I mentioned books by Kurzweil; he has multiple books on the singularity. I think all of them are quite wonderful, both in terms of describing his vision and giving you connections to the work of other people in that space, so you can keep exploring afterwards.
Kirill Eremenko: Books by Ray Kurzweil. Once again, thank you so much Roman for coming onto the show. Very happy to have had you here, and I'm sure a lot of people are going to learn some valuable insights about AI from this podcast.
Roman Yampolski: Thank you so much for inviting me. I really enjoyed it.
Kirill Eremenko: There you have it. That was artificial intelligence expert Roman Yampolski. I hope you enjoyed this episode as much as I did, and you probably felt just like me: it was very, very fast-paced. It was like bam, bam, bam, question, answer, question, answer. It felt like one of those rapid-fire question sessions we sometimes have in episodes, but the whole episode was like that. I hope you enjoyed it and were able to keep up. If not, you can always relisten to it.
Kirill Eremenko: I know I will probably benefit from relistening to this episode and getting some additional takeaways, things that might've slipped by me during the conversation. On that note, make sure to pick up Roman's book, which has a lot more valuable insights. If you enjoyed today's episode, I'm assuming you're also going to get a lot of value from his newest book that just got released, and you can pick it up on Amazon.
Kirill Eremenko: You can get all the show notes for this episode at www.superdatascience.com/193. Make sure to follow Roman, check out his books, his most recent book, and some of the other things that we talked about. As I also mentioned, Roman's done a lot of talks, so you can probably find quite a few additional talks with him online. On that note, make sure to connect with Roman on LinkedIn and other social media, so you can follow along with the latest developments in the space of artificial intelligence and always stay updated.
Kirill Eremenko: One of a probably great source of information on AI. There we go. Hope you enjoyed today's podcast. I look forward to seeing you back here next time, but until then, happy analyzing.

Kirill Eremenko

I’m a Data Scientist and Entrepreneur. I also teach Data Science Online and host the SDS podcast where I interview some of the most inspiring Data Scientists from all around the world. I am passionate about bringing Data Science and Analytics to the world!
