Martin Ford, Architects of Intelligence
MARTIN FORD: What did you learn from all of that?
YOSHUA BENGIO: We had previously used a sigmoid function to train neural nets, but it turned out that by using ReLUs we could suddenly train very deep nets much more easily. That was another big change that occurred around 2010 or 2011.
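As an editorial aside, the advantage of ReLUs that Bengio describes can be sketched numerically. The toy script below (not from the interview; the depth and input value are illustrative assumptions) shows why backpropagated gradients shrink through many sigmoid layers but pass through ReLU layers intact: each layer multiplies the gradient by the activation's derivative, which is at most 0.25 for a sigmoid but exactly 1 for an active ReLU unit.

```python
import math

def sigmoid_deriv(x):
    # Derivative of the logistic sigmoid: s(x) * (1 - s(x)), at most 0.25.
    s = 1.0 / (1.0 + math.exp(-x))
    return s * (1.0 - s)

def relu_deriv(x):
    # Derivative of ReLU: 1 for positive inputs, 0 otherwise.
    return 1.0 if x > 0 else 0.0

depth = 30   # number of stacked layers (illustrative)
x = 0.5      # a typical pre-activation value (illustrative)

sig_grad, relu_grad = 1.0, 1.0
for _ in range(depth):
    sig_grad *= sigmoid_deriv(x)   # shrinks by ~0.235 per layer here
    relu_grad *= relu_deriv(x)     # stays at 1 while the unit is active

print(f"sigmoid gradient after {depth} layers: {sig_grad:.3e}")
print(f"ReLU gradient after {depth} layers: {relu_grad:.3e}")
```

With a sigmoid, the gradient has all but vanished after 30 layers, while the ReLU gradient is unchanged; this is a simplified picture (real networks also have weight matrices in the chain), but it captures the training difficulty Bengio refers to.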
There is a very large dataset—the ImageNet dataset—which is used in computer vision, and people in that field would only believe in our deep learning methods if we could show good results on that dataset. Geoffrey Hinton’s group actually did it, following up on earlier work by Yann LeCun on convolutional networks—that is, neural networks which were specialized for images. In 2012, these new deep learning architectures with extra twists were used with huge success and showed a big improvement on existing methods. Within a couple of years, the whole computer vision community switched to these kinds of networks.
MARTIN FORD: So that’s the point at which deep learning really took off?
YOSHUA BENGIO: It was a bit later. By 2014, things were lining up for a big acceleration in the community's uptake of deep learning.
MARTIN FORD: That’s when it transitioned from being centered in universities to being in the mainstream domain at places like Google, Facebook, and Baidu?
YOSHUA BENGIO: Exactly. The shift started slightly earlier, around 2010, with companies like Google, IBM, and Microsoft, who were working on neural networks for speech recognition. By 2012, Google had these neural networks on their Android smartphones. It was revolutionary that the same deep learning technology could be used for both computer vision and speech recognition, and it drove a lot of attention toward the field.
MARTIN FORD: Thinking back to when you first started in neural networks, are you surprised at the distance things have come and the fact that they’ve become so central to what large companies, like Google and Facebook, are doing now?
YOSHUA BENGIO: Of course, we didn’t expect that. We’ve had a series of important and surprising breakthroughs with deep learning. I mentioned earlier that speech recognition came around 2010, and then computer vision around 2012. A couple of years later, in 2014 and 2015, we had breakthroughs in machine translation that ended up being used in Google Translate in 2016. 2016 was also the year we saw the breakthroughs with AlphaGo. All of these things, among a number of others, were really not expected.
I remember back in 2014 I looked at some of our results in caption generation, where the computer is trying to come up with a caption for an image, and I was amazed that we were able to do that. If you had asked me just one year earlier if we’d be able to do that in a year, I would have said no.
MARTIN FORD: Those captions are pretty remarkable. Sometimes they’re way off the mark, but most of the time they’re amazing.
YOSHUA BENGIO: Of course, they’re way off sometimes! They’re not trained on enough data, and there are also some fundamental advances in basic research that need to be made for those systems to really understand an image and really understand language. We’re far away from achieving those advances, but the fact that they were able to reach the level of performance that they have was not something we expected.
MARTIN FORD: Let’s talk about your career. What was your own path into the field of AI?
YOSHUA BENGIO: When I was young, I would read a lot of science fiction, and I’m sure that had an impact on me. It introduced me to topics such as AI and Asimov’s Three Laws of Robotics, and I wanted to go to college and study physics and mathematics. That changed when my brother and I became interested in computers. We saved our money to buy an Apple IIe and then an Atari 800. Software was scarce in those days, so we learned to program them ourselves in BASIC.
I got so excited with programming that I went into computer engineering and then computer science for my Master’s and PhD. While doing my Master’s around 1985, I started reading some papers on early neural nets, including some of Geoffrey Hinton’s papers, and it was like love at first sight. I quickly decided that this was the subject I wanted to do my research in.
MARTIN FORD: Is there any particular advice you’d give to someone who wants to get into the field of being a deep learning expert or researcher?
YOSHUA BENGIO: Just jump in the water and start swimming. There's a ton of information in the form of tutorials, videos, and open source libraries at all levels because there's so much interest in this field. And there is the book I co-authored, called Deep Learning, which helps newcomers get into the field and is available for free online. I see many undergrad students training themselves by reading lots and lots of papers, trying to reproduce those papers, and then applying to get into the labs which are doing this kind of research. If you're interested in the area, there's no better time to start than now.
MARTIN FORD: In terms of your career, one thing I noticed is that of the key people in deep learning, you’re the only one that remains entirely in the academic world. Most others are part-time at companies like Facebook or Google. What made you take that career pathway?
YOSHUA BENGIO: I’ve always valued academia and the freedom to work for the common good or the things that I believe would have more impact. I also value working with students both psychologically and in terms of the efficiency and productivity of my research. If I went into the industry, I would be leaving a lot of that behind.
I also wanted to stay in Montreal, and at that time, going into industry meant going to either California or New York. It was then that I thought that maybe we could build something in Montreal that could become a new Silicon Valley for AI. As a result, I decided to stay and create Mila, the Montreal Institute for Learning Algorithms.
Mila carries out basic research, and also plays a leadership role in the AI ecosystem in Montreal. This role involves working in partnership with the Vector Institute in Toronto, and Amii, in Edmonton, as part of the Canadian strategy to really push AI forward—in terms of science, in terms of the economy, and in terms of positive social impact.
MARTIN FORD: Since you mention it, let’s talk more about AI and the economy, and some of the risks there. I have written a lot about the potential for artificial intelligence to bring on a new Industrial Revolution, and potentially to lead to a lot of job losses. How do you feel about that hypothesis, do you think that it is overhyped?
YOSHUA BENGIO: No, I don’t think it’s overhyped. The part that is less clear is whether this is going to happen over a decade or three decades. What I can say is that even if we stop basic research in AI and deep learning tomorrow, the science has advanced enough that there’s already a huge amount of social and economic benefit to reap from it simply by engineering new services and new products from these ideas.
We also collect a huge amount of data that we don’t use. For example, in healthcare, we’re only using a tiny, tiny fraction of what is available, or of what will be available as even more gets digitized every day. Hardware companies are working hard to build deep learning chips that are soon going to be easily a thousand times faster or more energy-efficient than the ones we currently have. The fact that you could have these things everywhere around you, in cars and phones, is clearly going to change the world.
What will slow things down are things like social factors. It takes time to change the healthcare infrastructure, even if the technology is there. Society can’t change infinitely fast, even if the technology is moving forward.
MARTIN FORD: If this technology change does lead to a lot of jobs being eliminated, do you think something like a basic income would be a good solution?
YOSHUA BENGIO: I think a basic income could work, but we have to take a scientific view on this to get rid of our moral priors that say if a person doesn’t work, then they shouldn’t have an income. I think it’s crazy. I think we have to look at what’s going to work best for the economy and what’s going to work best for people’s happiness, and we can do pilot experiments to answer those questions.
It’s not like there’s one clear answer; there are many ways that society could take care of the people who are going to be left behind and minimize the amount of misery arising from this Industrial Revolution. I’m going to go back to something that my friend Yann LeCun said: If we had had the foresight in the 19th century to see how the Industrial Revolution would unfold, maybe we could have avoided much of the misery that followed. If in the 19th century we had put in place the kind of social safety net that currently exists in most Western nations, instead of waiting until the 1940s and 1950s, then hundreds of millions of people would have led a much better and healthier life. The thing is, it’s going to take probably much less than a century this time to unfold that story, and so the potential negative impacts could be even larger.
I think it’s really important to start thinking about it right now and to start scientifically studying the options to minimize misery and optimize global well-being. I think it’s possible to do it, and we shouldn’t just rely on our old biases and religious beliefs in order to decide on the answer to these questions.
MARTIN FORD: I agree, but as you say, it could unfold fairly rapidly. It’s going to be a staggering political problem, too.
YOSHUA BENGIO: Which is all the more reason to act quickly!
MARTIN FORD: A valid point. Beyond the economic impact, what are the other things we should worry about in terms of artificial intelligence?
YOSHUA BENGIO: I have been very active in speaking against killer robots.
MARTIN FORD: I noticed you signed a letter aimed at a university in Korea which seemed to be headed towards research on killer robots.
YOSHUA BENGIO: That’s right, and this letter is working. In fact, KAIST, the Korea Advanced Institute of Science and Technology, has been telling us that they will avoid going into the development of military systems which don’t have a human in the loop.
Let me go back to this question about a human in the loop because I think this is really important. People need to understand that current AI—and the AI that we can foresee in the reasonable future—does not, and will not, have a moral sense or moral understanding of what is right and what is wrong. I know there are differences across cultures, but these moral questions are important in people’s lives.
It’s true, not just for killer robots but all kinds of other things, like the work that a judge does deciding on the fate of a person—whether that person should return to prison or be freed into society. These are really difficult moral questions, where you have to understand human psychology, and you have to understand moral values. It’s crazy to put those decisions in the hands of machines, which don’t have that kind of understanding. It’s not just crazy; it’s wrong. We have to have social norms or laws, which make sure that computers in the foreseeable future don’t get those kinds of responsibilities.
MARTIN FORD: I want to challenge you on that. I think a lot of people would say that you have a very idealistic view of human beings and the quality of their judgment.
YOSHUA BENGIO: Sure, but I’d rather have an imperfect human being as a judge than a machine that doesn’t understand what it’s doing.
MARTIN FORD: But think of an autonomous security robot that would be happy to take a bullet first and shoot second, whereas a human would never do that, and that could potentially save lives. In theory, an autonomous security robot would also not be racist, if it were programmed correctly. These are actually areas where it might have an advantage over a human being. Would you agree?
YOSHUA BENGIO: Well, it might be the case one day, but I can tell you we’re not there yet. It’s not just about precision, it’s about understanding the human context, and computers have absolutely zero clues about that.
MARTIN FORD: Other than the military and weaponization aspects, is there anything else that we should be worried about with AI?
YOSHUA BENGIO: Yes, and this is something that hasn’t been discussed much, but now may come more to the forefront because of what happened with Facebook and Cambridge Analytica. The use of AI in advertising or generally in influencing people is something that we should be really aware of as dangerous for democracy—and is morally wrong in some ways. We should make sure that our society prevents those things as much as possible.
In Canada, for example, advertising that is directed at children is forbidden. There’s a good reason for that: We think that it’s immoral to manipulate their minds when they are so vulnerable. In fact, though, every one of us is vulnerable, and if it weren’t the case, then advertising wouldn’t work.
The other thing is that advertising actually hurts market forces because it gives larger companies a tool to slow down smaller companies coming into their markets because those larger companies can use their brand. Nowadays they can use AI to target their message to people in a much more accurate way, and I think that’s kind of scary, especially when it makes people do things that may be against their well-being. It could be the case in political advertising, for example, or advertising that could change your behavior and have an impact on your health. I think we should be really, really careful about how these tools are used to influence people in general.
MARTIN FORD: What about the warnings from people like Elon Musk and Stephen Hawking about an existential threat from superintelligent AI and getting into a recursive improvement loop? Are these things that we should be concerned about at this point?
YOSHUA BENGIO: I’m not concerned about these things, I think it’s fine that some people study the question. My understanding of the current science as it is now, and as I can foresee it, is that those kinds of scenarios are not realistic. Those kinds of scenarios are not compatible with how we build AI right now. Things may be different in a few decades, I have no idea, but that is science fiction as far as I’m concerned. I think perhaps those fears are detracting from some of the most pressing issues that we could act on now.
We’ve talked about killer robots and we’ve talked about political advertising, but there are other concerns, like how data could be biased and reinforce discrimination, for example. These are things that governments and companies can act on now, and we do have some ways to mitigate some of these issues. The debate shouldn’t focus so much on these very long-term potential risks, which I don’t think are compatible with my understanding of AI, but we should pay attention to short-term things like killer robots.
MARTIN FORD: I want to ask you about the potential competition with China and other countries. You’ve talked a lot about, for example, having limitations on autonomous weapons and one obvious concern there is that some countries might ignore those rules. How worried should we be about that international competition?
YOSHUA BENGIO: Firstly, on the scientific side I don’t have any concern. The more researchers around the world are working on a science, the better it is for that science. If China is investing a lot in AI that’s fine; at the end of the day, we’re all going to take advantage of the progress that’s going to come of that research.
However, I think the part about the Chinese government potentially using this technology either for military purposes or for internal policing is scary. If you take the current state of the science and build systems that will recognize people, recognize faces, and track them, then essentially you can build a Big Brother society in just a few years. It’s quite technically feasible and it is creating even more danger for democracy around the world. That is really something to be concerned about. It’s not just states like China where this could happen, either; it could also happen in liberal democracies if they slip towards autocratic rule, as we have seen in some countries.
Regarding the military race to use AI, we shouldn’t confuse killer robots with the use of AI in the military. I’m not saying that we should completely ban the use of AI in the military. For example, if the military uses AI to build weapons that will destroy killer robots, then that’s a good thing. What is immoral is to have these robots kill humans. It’s not like we all have to use AI immorally. We can build defensive weapons, and that could be useful to stop the race.
MARTIN FORD: It sounds like you feel there’s definitely a role for regulation in terms of autonomous weapons?
YOSHUA BENGIO: There’s a role for regulation everywhere. In the areas where AI is going to have a social impact, then we at least have to think about regulation. We have to consider what the right social mechanism is that will make sure that AI is used for good.
MARTIN FORD: And you think governments are equipped to take on that question?
YOSHUA BENGIO: I don’t trust companies to do it by themselves because their main focus is on maximizing profits. Of course, they’re also trying to remain popular among their users or customers, but they’re not completely transparent about what they do. It’s not always clear that those objectives that they’re implementing correspond to the well-being of the population in general.
I think governments have a really important role to play, and it’s not just individual governments, it’s the international community because many of these questions are not just local questions, they’re international questions.
MARTIN FORD: Do you believe that the benefits to all of this are going to clearly outweigh the risks?
YOSHUA BENGIO: They’ll only outweigh the risks if we act wisely. That’s why it’s so important to have those discussions. That’s why we don’t want to move straight ahead with blinkers on; we have to keep our eyes open to all of the potential dangers that are lurking.
MARTIN FORD: Where do you think this discussion should be taking place now? Is it something primarily think tanks and universities should do, or do you think this should be part of the political discussion both nationally and internationally?
YOSHUA BENGIO: It should totally be part of the political discussion. I was invited to speak at a meeting of G7 ministers, and one of the questions discussed was, “How do we develop AI in a way that’s both economically positive and keeps the trust of the people?”, because people today do have concerns. The answer is to not do things in secret or in ivory towers, but instead to have an open discussion where everybody around the table, including every citizen, should be part of the discussion. We’re going to have to make collective choices about what kind of future we want, and because AI is so powerful, every citizen should understand at some level what the issues are.