University of Texas at Austin

Past Event: CSEM Student Forum

Recovery Guarantees for One-hidden-layer Neural Networks

Kai Zhong, UT Austin

10 – 11AM
Friday Nov 3, 2017

POB 6.304

Abstract

Neural networks (NNs) have recently achieved great empirical success in a variety of applications, including computer vision, natural language processing, and reinforcement learning. However, due to the non-convexity of the underlying optimization problems, theoretical understanding of NNs remains limited. In this presentation, I will show that when the inputs are sampled from a Gaussian distribution and the activation function satisfies certain properties, the parameters of one-hidden-layer fully-connected NNs and one-hidden-layer convolutional NNs can be recovered in polynomial time. I will first give a brief overview of recent theoretical progress on neural networks, and then show that gradient descent with tensor-method initialization is guaranteed to converge to the ground-truth parameters of the NN with polynomial sample complexity and computational complexity.
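For illustration only, and not the speaker's algorithm: the sketch below sets up the kind of recovery problem the abstract describes, with Gaussian inputs, labels generated by a ground-truth one-hidden-layer ReLU network, and plain gradient descent started near the ground truth as a stand-in for the tensor-method initialization. The activation choice, network size, learning rate, and iteration count are all assumptions made for the sketch.

```python
import numpy as np

# Minimal sketch of the recovery setting (assumptions: ReLU activation,
# unit second-layer weights, full-batch gradient descent).
rng = np.random.default_rng(0)
d, k, n = 10, 3, 5000                        # input dim, hidden units, samples

W_true = rng.normal(size=(k, d))             # ground-truth hidden-layer weights
X = rng.normal(size=(n, d))                  # Gaussian inputs
y = np.maximum(X @ W_true.T, 0).sum(axis=1)  # labels from the true network

# Start near the ground truth (stand-in for the tensor-method initialization).
W = W_true + 0.1 * rng.normal(size=(k, d))

lr = 0.2
for _ in range(1000):
    H = X @ W.T                                    # pre-activations, shape (n, k)
    resid = np.maximum(H, 0).sum(axis=1) - y       # prediction residuals
    grad = ((resid[:, None] * (H > 0)).T @ X) / n  # grad of 0.5 * mean squared error
    W -= lr * grad

print("relative parameter error:",
      np.linalg.norm(W - W_true) / np.linalg.norm(W_true))
```

Because the iterate starts close to the true weights, no permutation of hidden units is needed when measuring the recovery error; the theoretical results discussed in the talk concern when such convergence to the ground truth is provably guaranteed.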

Event information

Date Friday, Nov 3, 2017, 10 – 11AM
Location POB 6.304
Hosted by Ivana Escobar