Faculty Candidate: Mehrdad Mahdavi
480 Dreese Labs
2015 Neil Avenue
Columbus, Ohio 43210
Randomized algorithms for large-scale and high-dimensional learning: sketching, random projections, and stochastic optimization
As the scale and dimensionality of data continue to grow in many applications of data analytics, it becomes critical to develop efficient and effective algorithms for large-scale machine learning problems with huge numbers of features.
In this talk we introduce novel randomized methods that cope with large numbers of samples and huge numbers of features while maintaining the statistical power of big data. We first address learning from high-dimensional, large-scale data using sketching and random projection to reduce the number of samples and the number of features, respectively. We then consider the associated recovery problem: how to accurately recover the solution to the optimization problem in the original high-dimensional space from the solution learned after applying sketching and random projection. We show that sketching in the primal space can be combined with a randomized recovery algorithm in the dual space, and that with only a logarithmic number of calls to a solver for the small-scale problem, the proposed primal-dual sketching method recovers the optimum of the original problem to arbitrary precision.
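To make the dimensionality-reduction step concrete, the following is a minimal sketch of a Gaussian random projection (one standard instance of the technique named above; the talk's primal-dual recovery method itself is not reproduced here, and all names below are illustrative). By the Johnson-Lindenstrauss lemma, projecting d features down to k << d approximately preserves pairwise distances:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 200, 10_000, 400

X = rng.standard_normal((n, d))               # high-dimensional data (illustrative)
R = rng.standard_normal((d, k)) / np.sqrt(k)  # Gaussian random projection matrix
X_proj = X @ R                                # reduced k-dimensional representation

# Distortion of one pairwise distance after projection; with k = 400
# the ratio concentrates near 1 with high probability.
orig = np.linalg.norm(X[0] - X[1])
proj = np.linalg.norm(X_proj[0] - X_proj[1])
ratio = proj / orig
```

Learning then proceeds on `X_proj`; the recovery question in the abstract is how to map the low-dimensional solution back to the original d-dimensional space.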
We then introduce a new paradigm for optimization, dubbed mixed optimization (a.k.a. stochastic optimization with variance reduction), which interpolates between stochastic and full gradient methods and is able to (i) achieve a faster convergence rate in stochastic optimization, and (ii) attain a condition-number-independent convergence rate in deterministic optimization.
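The interpolation idea can be illustrated with an SVRG-style update, a standard variance-reduction scheme in this family (a hedged sketch only; the talk's mixed-optimization method may differ in its exact update, and the problem, step size, and epoch counts below are illustrative). An occasional full gradient at a reference point is combined with cheap stochastic corrections, so the gradient estimate's variance shrinks as the iterates converge:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 500, 20
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = X @ w_true  # noiseless least-squares target, f(w) = (1/2n)||Xw - y||^2

def full_grad(w):
    return X.T @ (X @ w - y) / n

def stoch_grad(w, i):
    return X[i] * (X[i] @ w - y[i])

w = np.zeros(d)
step = 0.01
for epoch in range(30):
    w_ref = w.copy()
    g_ref = full_grad(w_ref)          # occasional full gradient (the "mixed" part)
    for _ in range(n):
        i = rng.integers(n)
        # variance-reduced stochastic gradient: unbiased, with variance
        # that vanishes as w and w_ref both approach the optimum
        g = stoch_grad(w, i) - stoch_grad(w_ref, i) + g_ref
        w -= step * g

err = np.linalg.norm(w - w_true)
```

Plain SGD with a constant step size would stall at a noise floor here; the variance-reduced update instead converges linearly, which is the sense in which it matches full-gradient methods at stochastic per-iteration cost.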
Bio: Mehrdad Mahdavi is a Research Assistant Professor at the Toyota Technological Institute at Chicago (TTI-C). He obtained his Ph.D. from Michigan State University in 2014 and his M.Sc. from Sharif University of Technology, Tehran, Iran. He won the Top Cited Paper Award from the journal Applied Mathematics and Computation (Elsevier) in 2010 and the Mark Fulk Best Student Paper Award at the Conference on Learning Theory (COLT) in 2012. His current research interests include machine learning, with a focus on online learning, convex optimization, and sequential and statistical learning theory.
Host: Wei Xu