Université de Montréal

Portrait of a researcher

Creating intelligence

Yoshua Bengio

Professor in the Department of Computer Science and Operations Research, head of the Montreal Institute for Learning Algorithms, Scientific Director of the Institute for Data Valorization and holder of the Canada Research Chair in Statistical Learning Algorithms.

How does intelligence work, and how can it be created? Even as a teenager, Yoshua Bengio was fascinated by this thorny question. “I learned to program, I read science fiction and dreamed about artificial intelligence.”

Today he is considered one of the fathers of the computer systems inspired by the way our neurons function. This “deep learning” technique brought artificial intelligence out of the realm of fantasy, leading to spectacular advances in voice and image recognition and making technological applications like self-driving cars possible.

These results owe much to his patience and tenacity. Artificial neural networks, his research topic ever since his Master’s thesis, were far from popular before he demonstrated their potential in an article published in 2006. “I had a lot of trouble convincing my students to work on it. They were afraid they wouldn’t find jobs when they graduated,” he says.

That is no longer the case. In 2014, Google paid a steep price to acquire DeepMind, a young London firm specializing in artificial intelligence, where several of Professor Bengio’s former students were working. The race to attract talent was launched, and Bengio’s laboratory became the main breeding ground for artificial intelligence programmers. So much so that Google and Microsoft opened research centres in Montréal, and the local science and business communities joined forces to make the city a hub for this new industry.

What were the key steps in your career as a researcher?

In 2006, we discovered a technique that allowed us to train deep artificial neural networks that functioned much better than traditional networks. Then we worked with colleagues in neuroscience to develop biologically inspired variations on the computations performed by artificial neurons. That allowed us to significantly improve our systems. Afterwards we saw the first real-life applications of our research, in speech recognition, in 2010, and object recognition, in 2012. Lastly, in 2014, we made a discovery that is ushering in a similar revolution in the field of machine translation.

How do you see the future of artificial intelligence?

There is still a lot to discover before we reach human-like intelligence. At the moment there is a push toward what is known as “reinforcement learning”: combined with deep learning, it enabled applications such as AlphaGo, the first software program to beat a Go champion, in 2016. Go is a much greater challenge than chess for computers, because it calls for a form of intuition. We also need to make strides in unsupervised learning. Computers still can’t learn by themselves, the way children do just by observing the world around them. That’s an important skill we still have to give them.