Faculty Candidate: Tuo Zhao
480 Dreese Labs
2015 Neil Ave
Compute Faster and Learn Better: Machine Learning via Nonconvex Model-based Optimization
Nonconvex optimization naturally arises in many machine learning problems (e.g., sparse learning, matrix factorization, and tensor decomposition). Machine learning researchers exploit various nonconvex formulations to gain modeling flexibility, estimation robustness, adaptivity, and computational scalability. Although classical computational complexity theory has shown that solving nonconvex optimization problems is NP-hard in the worst case, practitioners have proposed numerous heuristic optimization algorithms that achieve outstanding empirical performance in real-world applications.
To bridge this gap between practice and theory, we propose a new generation of model-based optimization algorithms and theory that incorporate statistical thinking into modern optimization. In particular, when designing practical computational algorithms, we take the underlying statistical models into consideration (e.g., sparsity, low-rankness). Our novel algorithms exploit hidden geometric structures behind many nonconvex optimization problems, and can obtain global optima with the desired statistical properties in polynomial time with high probability.
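As a concrete illustration of the model-based viewpoint (a standard textbook example, not the specific algorithms from the talk), the sketch below implements iterative hard thresholding for sparse linear regression. The projection step exploits the assumed sparsity of the underlying statistical model, which is exactly the kind of structure that makes this nonconvex problem tractable in practice; the data and parameter choices here are illustrative:

```python
import numpy as np

def iterative_hard_thresholding(X, y, s, n_iters=200):
    """Minimize ||y - X w||^2 subject to the nonconvex constraint ||w||_0 <= s,
    via projected gradient descent (iterative hard thresholding)."""
    n, d = X.shape
    step = 1.0 / np.linalg.norm(X, 2) ** 2  # 1/L, with L the gradient's Lipschitz constant
    w = np.zeros(d)
    for _ in range(n_iters):
        grad = X.T @ (X @ w - y)   # gradient of the least-squares loss
        w = w - step * grad        # gradient descent step
        # Projection onto {w : ||w||_0 <= s}: keep the s largest-magnitude entries
        idx = np.argsort(np.abs(w))[:-s]
        w[idx] = 0.0
    return w

# Toy example: recover a 3-sparse signal from noiseless random measurements
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
w_true = np.zeros(20)
w_true[[2, 7, 11]] = [1.5, -2.0, 3.0]
y = X @ w_true
w_hat = iterative_hard_thresholding(X, y, s=3)
```

Despite the nonconvex sparsity constraint, under suitable conditions on `X` (e.g., restricted isometry) this simple procedure provably recovers the true sparse parameter, illustrating how statistical structure can turn a worst-case-hard problem into a tractable one.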
Tuo Zhao is a PhD student in the Department of Computer Science at Johns Hopkins University (http://www.cs.jhu.edu/~tour). His research focuses on high-dimensional parametric and semiparametric learning, large-scale optimization, and applications to computational genomics and neuroimaging. He led the JHU team to victory in the INDI ADHD 200 global competition on fMRI-based diagnostic classification in 2011. He received a Siebel Scholarship in 2014 and a Baidu research fellowship in 2015.