
Convex Optimization Algorithms and Recovery Theories for Sparse Models in Machine Learning

Bo Huang

Thesis Advisor(s):
Goldfarb, Donald
Date:
Type:
Dissertations
Department(s):
Industrial Engineering and Operations Research
Persistent URL:
Notes:
Ph.D., Columbia University.
Abstract:
Sparse modeling is a rapidly developing topic that arises frequently in areas such as machine learning, data analysis, and signal processing. One important application of sparse modeling is the recovery of a high-dimensional object from a relatively small number of noisy observations, which is the main focus of Compressed Sensing, Matrix Completion (MC), and Robust Principal Component Analysis (RPCA). However, the power of sparse models is hampered by the unprecedented size of the data sets that have become available in practice. It has therefore become increasingly important to harness convex optimization techniques that exploit any underlying "sparsity" structure in problems of extremely large size. This thesis focuses on two main aspects of sparse modeling. From the modeling perspective, it extends convex programming formulations for matrix completion and robust principal component analysis to the case of tensors, and derives theoretical guarantees for exact tensor recovery under a framework of strongly convex programming. On the optimization side, an efficient first-order algorithm with the optimal convergence rate is proposed and studied for a wide range of linearly constrained sparse modeling problems.
Subject(s):
Mathematics
Statistics
Operations research
Suggested Citation:
Bo Huang, Convex Optimization Algorithms and Recovery Theories for Sparse Models in Machine Learning, Columbia University Academic Commons.
