Wednesday, January 16, 2019
15:30 - 17:15

Room 327

Special lecture in the Machine Learning Course: Deep neural circuits

Anirbit Mukherjee (Johns Hopkins University, USA)

"Deep Learning"/"Deep Neural Nets" is a technological marvel in our quest towards artificial intelligence and is now increasingly widely used. It has been shown to have unprecedented performance for a wide variety of purposes from playing chess to self-driving cars to astrophysics to experimental high-energy physics. But this recent revolutionary practical success of deep neural nets has turned out to be extremely challenging to be explainable by any theoretical framework.

In this review talk I will start from the basics and give an overview of the theorems we have proven in our attempts to make sense of deep learning. I will survey our work on (a) depth-hierarchy theorems for deep neural circuits, (b) convergence of algorithms that optimize over these function spaces, and (c) theoretical methods for estimating the risk function of neural nets.
