Faculty Candidate: Matus Telgarsky

Faculty candidate talk: Matus Telgarsky from the University of Michigan



Matus Telgarsky
480 Dreese Labs
2015 Neil Avenue
Columbus, Ohio 43210


Representation in Machine Learning: The Benefits of Depth

A basic and far-reaching design decision in machine learning is the concrete way in which prediction functions are represented. This talk will discuss two of the most common choices, along with some of their consequences. First is the family of linear representations, for instance as used in boosting, which approximate complicated functions by adding together many simple functions. Second is the family of layered representations, for instance as used in neural networks, where simple functions may be not only added but also composed. The key result for this second family is that while linear and few-layered representations can approximate any function, in doing so they can require exponentially many more simple functions than a many-layered representation. To close, the talk will cover many avenues for future work.
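To make the addition-versus-composition distinction concrete, the following is a minimal Python sketch (not taken from the talk) of the classic tent-map construction that underlies depth-separation results of this kind: composing one simple piecewise-linear bump k times yields a sawtooth with 2^k linear pieces, whereas a linear (boosting-style) combination of such bumps needs roughly one term per piece. All function names here are illustrative.

```python
# Illustrative sketch: composing a simple function k times produces a
# function that a flat sum of simple functions can only match with
# exponentially many terms.

def tent(x: float) -> float:
    """One 'simple function': the tent map on [0, 1].

    Expressible with two ReLU units: tent(x) = 2*relu(x) - 4*relu(x - 0.5).
    """
    return 2 * x if x <= 0.5 else 2 * (1 - x)

def deep_sawtooth(x: float, k: int) -> float:
    """Compose the tent map k times: a depth-k, O(k)-unit representation."""
    for _ in range(k):
        x = tent(x)
    return x

def count_slope_changes(f, n_samples: int = 100_000) -> int:
    """Count sign changes of the slope on [0, 1], i.e. (linear pieces) - 1."""
    xs = [i / n_samples for i in range(n_samples + 1)]
    ys = [f(x) for x in xs]
    slopes = [ys[i + 1] - ys[i] for i in range(n_samples)]
    return sum(
        1 for i in range(n_samples - 1)
        if (slopes[i] > 0) != (slopes[i + 1] > 0)
    )

if __name__ == "__main__":
    for k in (1, 2, 4, 8):
        changes = count_slope_changes(lambda x: deep_sawtooth(x, k))
        # A sum of tents (a linear representation) would need about one
        # term per linear piece; composition needs only k applications.
        print(f"depth k={k}: {changes + 1} linear pieces (2^k = {2**k})")
```

Running this prints 2, 4, 16, and 256 linear pieces for k = 1, 2, 4, 8: the number of pieces grows exponentially in the composition depth k, while the number of simple functions used grows only linearly, which is the flavor of separation the abstract describes.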

Matus Telgarsky obtained his PhD in Computer Science at UCSD in 2013 under Sanjoy Dasgupta; while there, his research focused primarily on the optimization and statistical aspects of unconstrained and unregularized algorithms (e.g., boosting), and to a lesser extent on clustering. Since then he has been a postdoctoral researcher at Rutgers University and the University of Michigan, as well as a consulting researcher at Microsoft Research in New York City; his most recent research focus has been representation and nonconvex optimization.