Doctoral Theses, 2025
Attention, Memory, and Generalizability in Neural Networks and Other Learning Systems
This dissertation applies elements of biological cognition to the design of artificially intelligent systems.
The first chapter provides a broad overview of the ways in which neural networks generalize models across datasets, contexts, and problems, and of recent progress in this area. Following this summary of the literature, the remaining chapters examine the modeling of three features of cognition that are relevant to neural networks: attention, memory, and generalization.
The second chapter presents and estimates a model of attention and problem-dependent effort for learning-based classification models. The third chapter focuses on the design of memory-related systems in biological brains, using data and literature from the fruit fly olfactory system as a motivating example and deriving principles for incorporating memory into artificial networks. The fourth chapter examines the problem of overfitting in data-driven artificial systems and introduces a new measure for evaluating the generalizability of results in the context of linear regression.
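The overfitting problem that motivates the fourth chapter can be illustrated with a generic train/test comparison for least-squares fits: as model flexibility grows, in-sample fit improves while out-of-sample fit degrades. The sketch below is a minimal illustration under assumed synthetic data and polynomial features; it is not the dissertation's proposed generalizability measure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: a linear signal plus Gaussian noise (illustrative assumption).
n = 40
x = rng.uniform(-1, 1, n)
y = 2.0 * x + rng.normal(0, 0.5, n)
x_test = rng.uniform(-1, 1, n)
y_test = 2.0 * x_test + rng.normal(0, 0.5, n)

def r_squared(y_true, y_pred):
    # Coefficient of determination: 1 - SS_res / SS_tot.
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

def fit_predict(x_tr, y_tr, x_ev, degree):
    # Ordinary least squares on polynomial features of the given degree.
    X_tr = np.vander(x_tr, degree + 1)
    X_ev = np.vander(x_ev, degree + 1)
    beta, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)
    return X_ev @ beta

for degree in (1, 15):
    r2_train = r_squared(y, fit_predict(x, y, x, degree))
    r2_test = r_squared(y_test, fit_predict(x, y, x_test, degree))
    print(f"degree {degree:2d}: train R^2 = {r2_train:.3f}, test R^2 = {r2_test:.3f}")
```

Because the degree-15 model nests the degree-1 model, its in-sample R² can only be higher; the gap between train and test R² is one simple diagnostic of how poorly a fit generalizes.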
Files
- Rohlfs_columbia_0054D_19029.pdf (application/pdf, 2.92 MB)
More About This Work
- Academic Units: Electrical Engineering
- Thesis Advisors: Lazar, Aurel A.
- Degree: D.E.S., Columbia University
- Published Here: February 12, 2025