This is FiveMinuteFriday, Hybrid AI.
Welcome back to the SuperDataScience podcast, everybody. Super excited to have you back here on the show. Today we've got a very exciting topic: hybrid AI. So I was looking at the Data Science Insider, which is a newsletter that we send out every single week on Friday. It's absolutely free. If you haven't subscribed yet, check it out at www.superdatascience.com/dsi. Just subscribe and get your AI and data science updates weekly.
And yes, I read it as well. I find a lot of interesting articles there myself, because our team puts it together and I only get it when it's actually finalized, when it's being sent out to everybody. One of the articles I was looking at, this is in the DSI from April 17th, 2020, the third article, is titled "AI experts battle over technology's future," and it features a debate between Gary Marcus and Danny Lange. I hope I'm pronouncing his surname correctly. There's a link to the article, and as usual we provide links to all the articles so you can refer to them. So I went in and read further, and it's really interesting.
So, what is this all about? These are two experts who were debating, a few weeks ago now, on March 27th, what the future of AI is and what AI should look like. Danny Lange holds the view that we're used to: that deep learning is the future, that if you give enough data and compute power to a deep learning model, it will solve your problem. Whereas Gary Marcus brings a very interesting new perspective, and I decided to dig a bit further into it. I watched a snippet of a podcast with him and Lex Fridman, and I read a bit more about his work. So here's the deal. This is what Gary Marcus says: deep learning on its own isn't sufficient. We need to augment deep learning with at least some level of classical artificial intelligence. And the example he gave in the podcast with Lex Fridman is this: imagine you have a bottle and a cap. A deep learning AI can learn that bottles and caps come together a lot of the time, right?
It will see images of a cap on a bottle, a cap near a bottle, a bottle next to a cap, and so on. But it is virtually impossible for a deep learning model, say a convolutional neural network, to come up with a reasoning system on its own and work out that this cap is going to screw onto the bottle and prevent water from leaking out, how the two function together. It can't do logical or inductive inference from abstract knowledge like that; basically, all a convolutional neural network can do is process an image. So not only are these AI systems very narrow in that sense, but they also lack this logical understanding or reasoning in the background. And a couple of months ago, probably even a year ago now, we had Khai Pham on the podcast, who talked about reasoning and how reasoning is actually different from the artificial intelligence that we're building now.
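To make the idea concrete, here is a minimal sketch in Python of what such a hybrid could look like. Everything in it is hypothetical: the object detector is a stub standing in for a real trained CNN, and the relational knowledge is a tiny hand-written rule table, which is exactly the kind of classical, symbolic component Marcus argues the network cannot learn from co-occurrence alone.

```python
# Hypothetical sketch: neural perception plus symbolic reasoning.

def detect_objects(image_path):
    """Stand-in for a CNN object detector; a real system would run a trained model."""
    # Pretend the network has already labeled what it sees in the image.
    return ["bottle", "cap"]

# Classical, symbolic knowledge: relational facts the network does not induce
# from co-occurrence statistics on its own.
RULES = {
    ("cap", "bottle"): "the cap screws onto the bottle and stops water from leaking out",
}

def reason_about(image_path):
    objects = detect_objects(image_path)       # neural (perception) step
    inferences = []
    for a in objects:                           # symbolic (reasoning) step
        for b in objects:
            if (a, b) in RULES:
                inferences.append(RULES[(a, b)])
    return inferences

print(reason_about("bottle_and_cap.jpg"))
# -> ['the cap screws onto the bottle and stops water from leaking out']
```

The point of the sketch is only the division of labour: the deep learning part handles perception, while the explicit rules carry the abstract knowledge about how the objects relate.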
A very similar notion here, so I found that quite interesting. So what does Gary Marcus say the solution is? He wrote a paper about it, we'll link to it in the show notes, called The Next Decade in AI (it's actually an article, not a paper). He says that the future is augmenting deep learning systems with some classical artificial intelligence. And a great example of that is Google search. When you type in a search query, a classical AI component disambiguates the words. For instance, the example that Gary gave in this debate is that it will try to distinguish: are you talking about Paris as in Paris Hilton, Paris, Texas, or Paris, France? Which Paris do you mean when you type Paris into Google search? So it uses a classical AI system to disambiguate the meaning of Paris.
And then it will use some deep learning network to do other things, for example to find synonyms using the BERT model, the model that we've heard a lot about. And so that's just an interesting thing to think about, because we can get really carried away with hyped-up deep learning models such as BERT that are taking the world by storm, that everybody's talking about. But at the end of the day, what gets the best results? According to Gary, we're seeing more and more hybrid models in the world. For instance, in the article he wrote, The Next Decade in AI, there are 20 examples of different hybrid models that are being used.
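As an illustration of that Google search example, here is a toy sketch in Python. All of the data and function names are made up for this episode: the "knowledge base" is a tiny dictionary playing the role of the classical, symbolic disambiguation step, and the "neural" step is a canned stub where a production system would call something like a BERT-based model.

```python
# Hypothetical hybrid search sketch: classical disambiguation + neural stand-in.

KNOWLEDGE_BASE = {
    "paris": {
        "hilton celebrity": "Paris Hilton",
        "texas city": "Paris, Texas",
        "france city": "Paris, France",
    }
}

def disambiguate(query, context):
    """Classical AI step: pick the intended entity using explicit knowledge."""
    senses = KNOWLEDGE_BASE.get(query.lower(), {})
    for hints, entity in senses.items():
        if any(hint in context.lower() for hint in hints.split()):
            return entity
    return query  # fall back to the raw query if nothing matches

def neural_related_terms(entity):
    """Stand-in for a deep learning step (e.g. BERT-style semantic similarity)."""
    canned = {"Paris, France": ["Eiffel Tower", "Louvre", "Seine"]}
    return canned.get(entity, [])

entity = disambiguate("Paris", context="cheap hotels near the Eiffel Tower in France")
print(entity, neural_related_terms(entity))
# -> Paris, France ['Eiffel Tower', 'Louvre', 'Seine']
```

The division is the same as in the previous sketch: the symbolic lookup resolves the ambiguity explicitly, and the statistical model handles the fuzzier job of finding related terms.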
So, something to think about: what kinds of applications could you use a hybrid model for? Where are you using, or considering using, a deep learning model where maybe a hybrid model could be better? I encourage you to check out this paper, or rather this article, called The Next Decade in AI by Gary Marcus. Again, we'll link to it in the show notes. You can find it at www.superdatascience.com/ followed by the number of this episode. And yeah, that's just a very interesting consideration: deep learning models versus mixed or hybrid AI models that include some classical artificial intelligence.
On that note, hope you enjoyed this podcast. If you’re not subscribed to the DSI yet, check it out at www.superdatascience.com/dsi to get really cool updates like this into your inbox as well. And until next time, happy analyzing.