Google Brain’s Quoc Le speaks about Deep learning’s progress and its future

Dr. Quoc Le from Google Brain, speaking at the MIT Innovators Under 35 forum. Credit: Biotechin.Asia

Dr. Quoc Viet Le is a research scientist at Google Brain known for his path-breaking work on deep neural networks (DNNs). He is especially famous for his Ph.D. work in image processing under Andrew Ng, one of the pioneers of the DNN revolution. Le’s and Ng’s work demonstrated how computers could learn complicated features and patterns in a way similar to how the mammalian brain learns.

This work reignited interest in DNNs and set the current giants of the computer industry, such as Google, Facebook and Microsoft, racing to incorporate AI techniques into their software. DNNs perform effectively in tasks such as image processing, handwriting recognition and game playing, and are being explored as solutions to other problems such as self-driving cars, robotics, medical diagnosis and environmental and social problems.

Quoc Le was listed as one of the top tech innovators under 35 by the MIT Technology Review. At EmTech Asia, we asked Quoc Le a few questions about his take on neural networks: their development, philosophy, challenges and future role in enabling or threatening humanity.

In part two of our interview with Quoc Le, we discuss the bottlenecks in the development of neural networks, his take on adopting an open philosophy for artificial intelligence (AI) development, its future, and whether it could be a threat to humanity. Read on for insights from one of the brains behind making computers brainier. (Read part one of this interview here.)

Q: You told us about the rapid strides that deep neural networks have made so far. What is the current bottleneck in the development of this technology?

Le: Two things I can think of.

1. Scaling up the networks that we are training. Currently, the DNNs we are working with are about 100 times bigger than what people had tried before, and now we will try for 1,000 times. But we are still a few orders of magnitude short of the size of a rat or cat brain, let alone the human brain. So one thing we want to do is scale up to the size of an animal brain. We will face some challenges in this.

2. Mastering unsupervised learning

The training we have succeeded in doing so far is supervised learning – using data where the labels or ‘answers’ are known. Let me try to explain this. Imagine learning where you walk around with a teacher who tells you every day what to learn, and the answers to certain questions. What you learn comes from the answers the teacher gives you. That is supervised learning. If you were observing a collection of images, for example, the teacher points to each one and tells you what it is – whether it is an image of a cat, dog, car, house, etc.
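To make the distinction concrete, here is a minimal supervised-learning sketch – an editor’s illustration, not from the interview, assuming scikit-learn and toy feature vectors standing in for labelled images. The key point is that every training example arrives with a teacher-supplied answer:

```python
# Supervised learning: every example X[i] arrives with a label y[i]
# that a "teacher" (human annotator) has already provided.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy stand-in for images: 2-D feature vectors,
# labels 0 = "cat", 1 = "dog" supplied by the teacher.
X = np.array([[0.1, 0.9], [0.2, 0.8], [0.9, 0.1], [0.8, 0.2]])
y = np.array([0, 0, 1, 1])

clf = LogisticRegression()
clf.fit(X, y)                       # learn directly from the given answers
print(clf.predict([[0.15, 0.85]]))  # -> [0], i.e. "cat"
```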

What we don’t have enough of is unsupervised learning. In this case, you walk around with no teacher. You have observations, but nobody tells you what they really are, i.e., what the answers are. If you were observing images, for example, nobody tells you what categories they fall under. But given this collection of images, you can learn some sort of simpler representation of them, identify some patterns in the data, and use them later for some purpose. This is something humans do well, but machines do not, yet. This improvement must be made on the software side, and it is complicated.
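Here is a matching unsupervised sketch, again an illustration assuming scikit-learn and synthetic data: no labels are given, yet a simpler representation of the data can still be learned for later use:

```python
# Unsupervised learning: the same kind of data, but no labels at all.
# We can still learn a simpler representation and find patterns in it.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 64))    # 100 unlabeled "images" of 64 pixels each

pca = PCA(n_components=8).fit(X)  # discover 8 directions of structure
Z = pca.transform(X)              # each image becomes 8 numbers instead of 64
print(Z.shape)                    # (100, 8): a compressed representation we
                                  # can reuse later for some downstream task
```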

Q: How has unsupervised learning been used so far?

Le: In image processing – at Google, for example, we have the capability to collect images from a lot of websites, but these don’t come with labels (car, cow, dog, etc.). With improved unsupervised learning, we have made some progress in characterizing this data before doing any learning.

Similarly, it has been used in speech and handwriting recognition.

As a possible future idea in medical diagnosis and healthcare – suppose we want to learn only from the good doctors, which limits the amount of labelled data we can use. But we still have a lot of medical records, right? How do we learn from this large set of records if we don’t have the labels? As a first step, we can characterize patients into different possible categories based on their symptoms, even if we have no idea what those symptoms mean. So we can do things like that using unsupervised learning to make our job easier.
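As a sketch of that first step, here is a minimal clustering example; the symptom encoding and the data are hypothetical, invented purely for illustration, and it assumes scikit-learn:

```python
# A first step on unlabeled medical records: group patients by symptom
# patterns before any supervised learning. The encoding is hypothetical:
# each row is one patient, each column a binary "symptom present" flag.
import numpy as np
from sklearn.cluster import KMeans

records = np.array([
    [1, 1, 0, 0],  # fever, cough
    [1, 1, 0, 1],  # fever, cough, fatigue
    [0, 0, 1, 0],  # rash
    [0, 0, 1, 1],  # rash, fatigue
])

groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(records)
print(groups)  # e.g. [1 1 0 0]: two patient groups emerge, even though the
               # algorithm was never told what any symptom means
```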

Q: Other than scaling up networks and doing unsupervised learning, do you suggest any other steps that need to be taken in this field?

Le: Yes, we need to improve our understanding of neural networks. Currently, our understanding of DNNs and why they work so well is still limited. Back in the 1990s, one obstacle for people working on neural nets was understanding how they worked. It was a big problem then, and it actually convinced scientists not to work on neural networks for a period of time. They didn’t want to work on something they didn’t really understand.

Fast forward a decade or two. Today, we see that even though we can use DNNs well, our understanding of deep learning is still limited!

A better understanding would be great. It would also help with issues like safety and security.

Q: (With respect to that suggestion) The public is mostly interested in the applications of neural nets, such as automated cars. Do you think they would care about how the black box of the neural net system works, as long as it guarantees safety?

Le: I do think so. To take your example of self-driving cars: an algorithm that identifies a car from just a black pixel in an image is far different from one that identifies it from more concrete and reliable features like a tyre or windows. So if we know how the AI is identifying something, or what it is using to identify it, that is certainly better.
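One simple way to probe what a classifier actually uses is occlusion sensitivity. The sketch below is an editor’s illustration of that general technique; `predict_car_prob` is a hypothetical stand-in for any trained model that maps an image to a probability:

```python
# Occlusion sensitivity: hide one patch of the image at a time and watch
# the "car" score. Big drops over wheels or windows suggest the model
# relies on real features rather than, say, a single black pixel.
# `predict_car_prob` is a hypothetical stand-in for a trained model.
import numpy as np

def occlusion_map(image, predict_car_prob, patch=8):
    h, w = image.shape[:2]
    base = predict_car_prob(image)             # score on the intact image
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.5   # gray out one patch
            # how much the "car" score falls when this region is hidden
            heat[i // patch, j // patch] = base - predict_car_prob(occluded)
    return heat  # high values mark regions the model actually depends on
```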

Q: You mentioned following an open philosophy for the development of AI. Why do you think this is important?

Le: Yes. With new technology, the hardest part is getting people interested in working on it. If a company’s approach to a certain technology is open, a lot of people are inspired to work at such companies – it has happened to many of my friends. In my case, Google happened to be a very open company, and that was a factor that persuaded many of my friends to join.

I think it will happen in the future as well. Companies like Google, Facebook, Microsoft and Baidu have opened up. That’s a good sign. Now people have more open companies to choose from. Researchers care deeply about making a big impact on the world. As soon as a technology is forced to be developed in a secretive way, we will fail to attract talented people and fail in our mission to build good AI. So I think we will stay open as long as we want to do this.

Q: But when you keep a technology open, doesn’t that mean you no longer have control over who deploys it?

Le: I always debate with myself whether it’s good to have one AI or many AIs. Right now we don’t know. But my theory is that it will be better to have a more open AI that people can understand – like the example of the self-driving car above.

Right now it’s hard to say what is best, but that is a question for the far future. Maybe deep learning isn’t really the key technology that will lead to a breakthrough at some point; maybe it will be something else, right?

Q: There are many famous people raising concerns about AI and where deep learning will go. There are concerns that we have no clue what could happen if AI blows up fast and leads to a technological singularity – whether, for instance, it could eventually lead to mankind’s destruction.

Could you elaborate on your view on this?
Le: I have two comments.
1. The time frame for something like what Elon Musk described to happen is large – like 1,000 years.
If you zoom in on the current moment, remember that five years ago, AI was ‘not even a thing’! I was working at Stanford in secret because I was embarrassed that if I told people about it, they would laugh at me! Machines didn’t work that well then. Now things have started working well, and we have begun extrapolating that it will blow up and something dreadful might threaten humanity.
2. There is a risk with every technology in mankind’s history, whether it be a car or an airplane or nuclear power – they have all killed humans in some way or another. AI could be a risk later, but from my perspective as a scientist, we are very far from it.
In Andrej Karpathy’s work, it was initially observed that the machine learning algorithm performed equal to or better than humans. If you connect the dots, you might begin to think AI is already there (superior), beating humans at the task. But if you think about it, this happened because a lot of data in this restricted area was collected for the study. The images collected by Andrej were a very small subset of possible images. It may have happened that people overtrained on that dataset and it worked well. But if you use it for other applications, such as general object recognition, neural networks may not work that well yet.
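The generalization concern Le describes can be seen in miniature: a high-capacity model can memorize the dataset it was tuned on while failing on held-out data. A toy sketch, assuming scikit-learn and deliberately noisy labels:

```python
# Overfitting in miniature: a high-capacity model can look perfect on the
# dataset it was fit to, yet fail on fresh data -- the concern about
# strong results on one restricted benchmark.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = rng.integers(0, 2, size=200)  # labels are pure noise, on purpose

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
tree = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
print(tree.score(X_tr, y_tr))     # 1.0: the training set is memorized
print(tree.score(X_te, y_te))     # ~0.5: no better than chance on new data
```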
Q: Well, Le, I have grown up watching the movie ‘The Terminator’. So are you saying that the (‘judgement’) day when machines will overcome humans is too far away to think about?

Le: I won’t dismiss that as one possible future.
But firstly, yes, if it ever happens, it’s going to be very far away. Secondly, let’s be clear that it’s never the technology itself that does harm; it’s how you deploy the technology that can have consequences. So we need to be conscious of that.
So will there come a day like that? Everything is possible, so I can’t rule it out. But I think we’ll figure out a way to control it, if it happens.
April 3, 2016
