2019 Doctoral Thesis
Advances in Deep Generative Modeling With Applications to Image Generation and Neuroscience
Deep generative modeling is an increasingly popular area of machine learning that takes advantage of recent developments in neural networks to estimate the distribution of observed data. In this dissertation we introduce three advances in this area. The first, Maximum Entropy Flow Networks, enables maximum entropy modeling by combining normalizing flows with the augmented Lagrangian optimization method. The second is the continuous Bernoulli, a new [0,1]-supported distribution introduced to fix the pervasive error in variational autoencoders of using a Bernoulli likelihood for non-binary data. The last, Deep Random Splines, is a novel distribution over functions in which samples are obtained by drawing Gaussian noise and transforming it through a neural network into the parameters of a spline. We apply these methods to model texture images, natural images, and neural population data, respectively, and observe significant improvements over current state-of-the-art alternatives.
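To make the second contribution concrete, below is a minimal NumPy sketch (not code from the dissertation) of the continuous Bernoulli log density. The normalizing constant C(λ) = 2·arctanh(1−2λ)/(1−2λ), with C(1/2) = 2, follows from integrating λ^x (1−λ)^(1−x) over [0,1]; the function names and the Taylor-expansion cutoff `eps` are illustrative choices.

```python
import numpy as np

def cb_log_norm_const(lam, eps=1e-2):
    """Log normalizing constant of the continuous Bernoulli.

    C(lam) = 2 * arctanh(1 - 2*lam) / (1 - 2*lam) for lam != 0.5 and C(0.5) = 2;
    a Taylor expansion around lam = 0.5 keeps the evaluation numerically stable there.
    """
    lam = np.asarray(lam, dtype=float)
    far = np.abs(lam - 0.5) > eps
    lam_safe = np.where(far, lam, 0.25)  # placeholder avoids 0/0 on the masked branch
    exact = np.log(2.0 * np.arctanh(1.0 - 2.0 * lam_safe) / (1.0 - 2.0 * lam_safe))
    taylor = (np.log(2.0)
              + (4.0 / 3.0) * (lam - 0.5) ** 2
              + (104.0 / 45.0) * (lam - 0.5) ** 4)
    return np.where(far, exact, taylor)

def cb_log_pdf(x, lam):
    """Log density log p(x | lam) = log C(lam) + x*log(lam) + (1-x)*log(1-lam),
    for x in [0, 1] and lam in (0, 1)."""
    x, lam = np.asarray(x, dtype=float), np.asarray(lam, dtype=float)
    return cb_log_norm_const(lam) + x * np.log(lam) + (1.0 - x) * np.log(1.0 - lam)

# Quick check: the density should integrate to roughly 1 over [0, 1].
xs = np.linspace(0.0, 1.0, 10001)
print(np.exp(cb_log_pdf(xs, 0.3)).mean())  # simple Riemann estimate, ~1.0
```

The exact expression for log C(λ) degenerates to 0/0 as λ approaches 1/2, which is why the sketch switches to a short Taylor expansion in that neighborhood.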
More About This Work
- Academic Units: Statistics
- Thesis Advisors: Cunningham, John Patrick
- Degree: Ph.D., Columbia University
- Published Here: October 9, 2019