Guest Speaker: Honglak Lee
Deep Representation Learning: Challenges and New Directions
Machine learning is a powerful tool for tackling challenging problems in artificial intelligence. In practice, the success of machine learning algorithms critically depends on the feature representations of the input data, which often become a limiting factor. To address this problem, deep learning methods have recently emerged as successful techniques for learning feature hierarchies from unlabeled and labeled data. In this talk, I will present my perspectives on the progress, challenges, and some new directions. Specifically, I will describe my recent work addressing the following interrelated challenges: (1) how can we learn invariant yet discriminative features, and furthermore disentangle the underlying factors of variation to model high-order interactions between those factors? (2) how can we learn representations of the output data when the output variables have complex high-order dependencies? (3) how can we learn shared representations from heterogeneous input data modalities?
Honglak Lee is an Assistant Professor of Computer Science and Engineering at the University of Michigan, Ann Arbor. He received his Ph.D. from the Computer Science Department at Stanford University in 2010, advised by Prof. Andrew Ng. His primary research interests lie in machine learning, spanning deep learning, unsupervised and semi-supervised learning, transfer learning, graphical models, and optimization. He also works on application problems in computer vision, audio recognition, robot perception, and text processing. His work received best paper awards at ICML and CEAS. He has served as a guest editor of the IEEE TPAMI Special Issue on Learning Deep Architectures, as well as an area chair for ICML and NIPS. He received the Google Faculty Research Award in 2011, and was selected by IEEE Intelligent Systems as one of AI's 10 to Watch in 2013.