2024 Theses Doctoral
Resource-Efficient Machine Learning Systems: From Natural Behavior to Natural Language
Contemporary machine learning models exhibit unprecedented performance in the text, vision, and time-series domains, but at the cost of significant computational and human resources. Applying these technologies to science requires balancing accuracy and resource allocation, which I investigate here through three case studies.
In Chapter 1, I present a deep learning system for animal pose estimation from video. Existing approaches rely on frame-by-frame supervised deep learning, which requires extensive manual labeling, fails to generalize to data far outside of its training set, and occasionally produces scientifically critical errors that are hard to detect. The solution proposed here includes semi-supervised learning on unlabeled videos, video-centric network architectures, and a post-processing step that combines network ensembling and state-space modeling. These methods improve performance with both scarce and abundant labels, and are implemented in an easy-to-use software package and cloud application.
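To make the post-processing idea concrete, the following is a minimal sketch of one way to combine ensembling with state-space smoothing: average the keypoint predictions of several networks, use their disagreement as an observation-noise proxy, and smooth the track with a constant-velocity Kalman filter. The function name, array shapes, and noise parameters are illustrative assumptions, not the thesis's actual post-processor.

```python
# Minimal sketch: ensemble averaging + Kalman smoothing of a 2D keypoint track.
# Shapes and parameters are hypothetical; the thesis's implementation may differ.
import numpy as np

def ensemble_kalman_smooth(preds, dt=1.0, process_var=1.0):
    """preds: (n_models, n_frames, 2) array of (x, y) predictions for one keypoint."""
    obs = preds.mean(axis=0)                       # ensemble mean, (n_frames, 2)
    obs_var = preds.var(axis=0).mean(axis=1)       # ensemble disagreement per frame

    # Constant-velocity state-space model: state = [x, y, vx, vy].
    F = np.eye(4); F[0, 2] = F[1, 3] = dt          # transition matrix
    H = np.zeros((2, 4)); H[0, 0] = H[1, 1] = 1.0  # observation matrix
    Q = process_var * np.eye(4)                    # process noise

    n = obs.shape[0]
    m = np.zeros((n, 4)); P = np.zeros((n, 4, 4))
    m_prev = np.array([obs[0, 0], obs[0, 1], 0.0, 0.0])
    P_prev = np.eye(4)

    # Forward (filtering) pass.
    for t in range(n):
        m_pred = F @ m_prev
        P_pred = F @ P_prev @ F.T + Q
        R = (obs_var[t] + 1e-6) * np.eye(2)        # noisier frames are trusted less
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)
        m[t] = m_pred + K @ (obs[t] - H @ m_pred)
        P[t] = (np.eye(4) - K @ H) @ P_pred
        m_prev, P_prev = m[t], P[t]

    # Backward (Rauch-Tung-Striebel smoothing) pass.
    sm = m.copy()
    for t in range(n - 2, -1, -1):
        P_pred = F @ P[t] @ F.T + Q
        G = P[t] @ F.T @ np.linalg.inv(P_pred)
        sm[t] = m[t] + G @ (sm[t + 1] - F @ m[t])

    return sm[:, :2]  # smoothed (x, y) positions
```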
In Chapter 2, I turn to the Gaussian process, a canonical nonparametric model known for its poor scaling with dataset size. Existing methods accelerate Gaussian processes at the cost of modeling biases. I analyze two common techniques -- early truncated conjugate gradients and random Fourier features -- showing that they find hyperparameters that underfit and overfit the data, respectively. I then propose to eliminate these biases in exchange for increased variance, via randomized truncation estimators.
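The core trade-off can be illustrated with a small sketch of a "Russian roulette" randomized-truncation estimator: rather than always cutting a series off at a fixed order (which biases the result), the truncation level is sampled at random and the surviving terms are reweighted so the estimate is unbiased, at the price of extra variance. The toy geometric series, the function names, and the survival probability below are illustrative assumptions; in the thesis the terms would come from conjugate-gradient or feature-expansion iterates.

```python
# Minimal sketch of a randomized (Russian roulette) truncation estimator.
import numpy as np

rng = np.random.default_rng(0)

def delta(j, r=0.5):
    """Toy series term: sum_j r**j converges to 1 / (1 - r) = 2."""
    return r ** j

def russian_roulette_estimate(max_terms=100, q=0.6):
    """Unbiased estimate of sum_j delta(j) with a random truncation level.

    Term j survives with probability q**j and is reweighted by 1 / q**j,
    so the expectation equals the full, untruncated sum.
    """
    total, survive_prob = 0.0, 1.0
    for j in range(max_terms):                 # max_terms is only a safety cap
        total += delta(j) / survive_prob
        if rng.random() > q:                   # stop after term j with prob (1 - q)
            break
        survive_prob *= q                      # P(truncation level >= j + 1)
    return total

samples = np.array([russian_roulette_estimate() for _ in range(20000)])
print("mean estimate:", samples.mean(), " true value:", 1 / (1 - 0.5))
```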
In Chapter 3, I investigate continual learning, or "finetuning", in large language models (LLMs) with billions of weights. Training these models requires more memory than is typically available in academic clusters. Low-Rank Adaptation (LoRA) is a widely-used technique that saves memory by training only low-rank perturbations to selected weight matrices in a so-called "base model". I compare the performance of LoRA and full finetuning on two target domains, programming and mathematics, across different data regimes. I find that in most common settings, LoRA underperforms full finetuning, but it nevertheless exhibits a desirable form of regularization: it better maintains the base model's performance on tasks outside the target domain. I then propose best practices for finetuning with LoRA.
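The memory saving comes from freezing the base weights and training only a low-rank product added on top of them. The following is a minimal sketch of a LoRA layer; the class name, rank, scaling factor, and initialization constants are illustrative assumptions rather than the exact configuration studied in the thesis.

```python
# Minimal sketch of a LoRA layer: the frozen base weight is perturbed by a
# trainable low-rank product B @ A, scaled by alpha / rank.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 16, alpha: float = 32.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                             # freeze base weights
        in_f, out_f = base.in_features, base.out_features
        self.A = nn.Parameter(torch.randn(rank, in_f) * 0.01)   # small random init
        self.B = nn.Parameter(torch.zeros(out_f, rank))         # zero init: no change at start
        self.scale = alpha / rank

    def forward(self, x):
        # Base output plus the scaled low-rank perturbation (B @ A) x.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

# Usage: wrap selected weight matrices and train only the A/B factors.
layer = LoRALinear(nn.Linear(4096, 4096), rank=16)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(f"trainable parameters: {trainable}")  # 2 * 16 * 4096 instead of 4096**2
```

Only the two low-rank factors receive gradients, which is what cuts optimizer-state and gradient memory relative to full finetuning.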
In summary, applying state-of-the-art models to large scientific datasets necessitates taking computational shortcuts. This thesis highlights the implications of these shortcuts and emphasizes the need for careful empirical and theoretical investigation to find favorable trade-offs between accuracy and resource allocation.
Files
- Biderman_columbia_0054D_18667.pdf (application/pdf, 126 KB)
More About This Work
- Academic Units: Neurobiology and Behavior
- Thesis Advisors: Cunningham, John Patrick
- Degree: Ph.D., Columbia University
- Published Here: July 10, 2024