SDS 120: Technological Singularity

Podcast Guest: Kirill Eremenko

January 7, 2018

Welcome to episode #120 of the Super Data Science Podcast. Here we go!

Today it’s Five Minute Friday time!
While ‘singularity’ is a term borrowed from physics to describe a situation where the normal rules of physics no longer apply, today we look at two definitions of ‘technological singularity’.
The more recent definition is attributed to the futurist Ray Kurzweil and refers to the observation that the speed at which new technologies come into this world is increasing.
The older definition is attributed to Vernor Vinge and his 1993 paper (although the term had been in use decades prior) and refers to the point at which humans create superhuman intelligence, which in turn drives further technological advancement.
Vinge further argues that technological singularity is inevitable.
Items mentioned in this podcast:
  • “The Coming Technological Singularity: How to Survive in the Post-Human Era” by Vernor Vinge
  • “The AI Revolution: The Road to Superintelligence” by Tim Urban
  • Download The Transcript
  • Music Credit: Our Lives Past by Phantom Sage (feat. Emily Stiles) [NCS Release]

Did you enjoy the podcast?
  • How do you expect your future to look as a result of the coming technological singularity?

Podcast Transcript

This is Five Minute Friday episode number 120: Technological Singularity.

Hello and welcome back to the SuperDataScience podcast. Previously on this show, we had a conversation with Hadelin where we were talking about the trends, the exciting things to anticipate in 2018 in the space of data science. And today I wanted to take it a step further and talk about technological singularity. Now we’re going to talk about two definitions of technological singularity, and you’ll see why in a second. But to start off with, I’m going to rewind even further and we’ll just see where the term ‘singularity’ comes from.
So singularity is a term borrowed from physics, where it is used to describe a situation where the normal laws of physics may no longer apply. For example, you may have seen in movies like Interstellar, or in other sci-fi movies and books, that when you enter the vicinity of a black hole, or you go into a black hole, the theory is that we can no longer apply the laws of physics that govern our day-to-day lives. And that is where the term ‘singularity’ originally comes from.
Now in terms of technological singularity, there are two definitions. And we're going to start with the second one, the more recent one. The reason for that is that it's not the one we're going to focus on in this podcast, it's not the one I'd like us to talk about; but at the same time, I want all of us to be on the same page and aware that there are two definitions of singularity and that they are slightly different. So if you're ever watching a YouTube video, reading a blog post, or having a conversation with somebody and they mention technological singularity, you will know which one they are talking about.
So technological singularity number two, the second definition, was introduced by Ray Kurzweil. And you may have heard of Ray Kurzweil; he's quite a famous futurist in Silicon Valley. His definition of technological singularity is linked to something he calls ‘the law of accelerating returns’, which is essentially a generalisation of Moore's Law beyond integrated circuits. Basically, what this means is that as time passes, we create better and more powerful machines, more powerful integrated circuits, transistors, chips, and so on. And as a result, the speed at which new technologies come into this world keeps increasing.
If you think about it, it's true. 20 years ago, not everybody had a mobile phone. 10 years ago, people had mobile phones, but they weren't touchscreen phones like the iPhones everybody has now; they still had buttons. Then, about 5 years ago, people started switching to iPhones and similar devices, and those kept getting more powerful. First we had one camera, now we have two cameras with a crazy number of megapixels, and hardly anybody uses a separate camera any more. So new technology keeps coming out, and it's arriving faster and faster. This progress in research and development is enabling things we couldn't even imagine some time ago, like virtual reality or self-driving cars.
So basically, what it's saying is that each year, the advancements in technology are not coming in at the same rate as the year before. They're coming in at almost double the rate; it's happening twice as fast. So what the singularity means according to Ray Kurzweil is this: if we could quantify advancements in technology, and it once took us 10 years to advance technology by some amount x, then the next advancement of the same size took us 5 years, then 2.5 years, then 15 months, then about 7 months, and so on, each interval half the length of the one before. The time it takes to advance technology keeps shrinking, so progress becomes faster and faster.
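To make that arithmetic concrete, here is a minimal Python sketch of the halving pattern; the 10-year starting interval is just the hypothetical figure from the example above, not a real measurement.

```python
# A minimal sketch of Kurzweil-style accelerating returns: each equally
# sized "advancement" arrives in half the time of the previous one.
# The 10-year starting interval is hypothetical, taken from the example above.

interval_years = 10.0   # time the first advancement takes
elapsed_years = 0.0     # total time elapsed so far

for advancement in range(1, 11):
    elapsed_years += interval_years
    print(f"advancement {advancement:2d}: took {interval_years:8.4f} years "
          f"(cumulative: {elapsed_years:8.4f} years)")
    interval_years /= 2  # the next advancement arrives twice as fast
```

The intervals form a geometric series (10 + 5 + 2.5 + …) that sums to 20 years, so in the limit an unbounded number of advancements fits inside a finite window; that limit point is the singularity in Kurzweil's sense.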
So according to Ray Kurzweil, some time in the next decade or two, we'll get to a point where major advancements in technology that currently take two to four years will be squished down into a day. Take the iPhone as a hypothetical example (granted, Apple has its own marketing reasons for its yearly release cycle): instead of a new model coming out once a year, advancements in technology would enable a new iPhone, twice as good as the previous one, to be released every day. And eventually it won't even be by the day; it'll be by the hour. An iPhone is released, and an hour later the technology, and the research behind it, has already advanced so far that the next version, twice as powerful, can be released. And then it will come down to a minute.
And so that's the singularity for Ray Kurzweil: the point in time when technological advancements come so quickly that they're happening pretty much instantaneously. And at that point, we don't know what the world is going to look like. That's the whole idea behind the singularity, whichever way you define it: we don't know what the world will look like, because we will no longer be able to apply our current or previous thinking to that world. If technologies are shooting out every minute, how can you imagine how you're going to live in that world? So that's singularity definition number two, by Ray Kurzweil, the more recent one.
The older definition of the technological singularity dates back quite some time. The main paper I'm going to mention today is called “The Coming Technological Singularity: How to Survive in the Post-Human Era” by Vernor Vinge, published in 1993, which is 25 years ago now. The term “technological singularity” was in use before Vinge's paper; it goes back decades, to people like John von Neumann and Stanislaw Ulam, who are mentioned in the paper. You can find out more there, but we're going to reference the point in time when this paper was written, and that is 1993.
So what does technological singularity mean according to definition number one? (And definition number one is the more widespread definition; you'll probably find more information on it, because it's older and has been adopted by more people.) This definition of technological singularity means the point in time when we create superhuman intelligence, or greater-than-human intelligence. And why is that important? Why is that a point after which we can no longer apply our normal rules of thinking (because that's the whole concept of a singularity, as we discussed just now)? Why is it such a big deal?
Well, I'm actually going to read a quote from the paper. By the way, we'll link to the paper in the show notes. If you like, read it; it's actually a fun read. It's nothing complex, it's a philosophical paper, and if you work in data science, analytics, or AI, you're bound to come across references to it. So here's the quote: “When greater-than-human intelligence drives progress, that progress will be much more rapid. In fact, there seems no reason why progress itself would not involve the creation of still more intelligent entities, on a still shorter time scale.” And then he gives an analogy of how this has already happened at least once in the world. If you think about it, before humans, progress wasn't technological progress; it was evolutionary, biological progress, and the only rate at which it happened was the speed of evolution, the speed of natural selection. No progress on this planet happened faster than natural selection, simply because animals didn't have the capacity to take in information, internalise it, think about it, and run “what-if” scenarios in their heads to come up with their own kind of progress.
Then humans came along, and we can run these what-ifs in our heads, and all of a sudden we have progress that happens much faster, as you can see all around us. It's happening way faster than natural selection or evolution; even our own bodies cannot keep up with the world we live in. Our brains are millions of years in the making, and they are not designed for cars and airplanes and things like that, because evolution hasn't caught up. The technological singularity we are talking about is when we create superhuman intelligence that exceeds our own, which will then take over technological progress and start to create its own inventions, its own technology.
And so how is that different from the second definition, the Ray Kurzweil definition of technological singularity? In both cases, we have very quick technological progress. Well, in this definition of the technological singularity, it is not humans creating these advancements, and not even humans leveraging their own non-intelligent machines to create technological advancements. In this case, intelligent machines take over progress entirely. There's another great quote, from I. J. Good, cited in the paper: “the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.” Regardless of the whole question of whether machines will take over the world or not, let's think about how different the world will be.
What that means is that if machines with superhuman intelligence are driving technological progress and creating inventions for us, then things we cannot even imagine right now, things that might otherwise take a hundred thousand or a million years at our current rates of research and creativity, could suddenly become possible in the next decade, or in the next century at most. And we're talking about crazy things: a warp drive to explore other galaxies, anti-gravity, teleportation, longevity, devices that let humans breathe under water, flying cars. Lots and lots of things that are only possible in science fiction could suddenly become possible in real life, simply because it's no longer humans but machines, a general, superintelligent AI, that is driving technological progress.
And so, as you can see, the two definitions are somewhat similar in that they both talk about accelerating technological progress, but they differ in that the first, the primary definition of the technological singularity, is the point in time when superhuman intelligence is created, and bam! From that moment onward, things develop incredibly quickly. I love the analogy that it's like standing on a platform waiting for a train: you see it coming, it's coming, it's coming, and then whoosh, it's gone past you. That's how quick it'll be.
So there we go, two definitions of technological singularity, something to look forward to and be excited about. And Vernor Vinge talks about something really cool in his paper that I really like: this whole concept of AI creating its own inventions, and more AI, to create even more complex inventions. It's called an intellectual runaway, one we cannot possibly keep up with. And what I really liked about the paper is that he tackles the inevitable question: “Can the singularity be avoided? Can we stop this from happening?” He mentions that in some science fiction stories, governments come up with laws and regulations to prevent it, to stop research in that space. But he gives a very good argument that any laws against advancements in AI and technology would be in vain, because the advantages of having that technology do the research for you, or of using the by-products of that research, are so compelling that if one government were to impose sanctions on creating artificial intelligence, or put regulations and limits around it to stop it from happening, that would just mean someone else gets to those super compelling advantages first. And that's a very strong argument for the belief that there's no way to stop this from happening, that the singularity is going to happen.
So make sure to check out the paper if you're interested in this; we'll link it in the show notes. Again, it's called “The Coming Technological Singularity: How to Survive in the Post-Human Era”. Also, there's a great blog post by Tim Urban called “The AI Revolution: The Road to Superintelligence”. It's a two-part blog post, and it's the length of a small book. It's very, very cool, with his amazing stick-figure drawings, and a very interesting read that summarises a lot of these concepts.
So there you go, I'll leave you with that. Something to ponder on. I hope you have a fantastic weekend, and maybe you'll find somebody to discuss superintelligence and the singularity with and see what their opinions are. I can't wait to see you back here next time. And until then, happy analyzing.