High dimensional information processing

Title:

High dimensional information processing

Author(s):

Rahnama Rad, Kamiar

Thesis Advisor(s):

Paninski, Liam

Date:

2011

Type:

Dissertations

Department:

Statistics

Permanent URL:

http://hdl.handle.net/10022/AC:P:11287

Notes:

Ph.D., Columbia University.

Abstract:

Part I: Consider the n-dimensional vector y = Xβ + ε, where β ∈ R^p has only k nonzero entries and ε ∈ R^n is Gaussian noise. This can be viewed as a linear system with sparsity constraints, corrupted by noise, where the objective is to estimate the sparsity pattern of β given the observation vector y and the measurement matrix X. First, we derive a non-asymptotic upper bound on the probability that a specific wrong sparsity pattern is identified by the maximum-likelihood estimator. We find that this probability depends (inversely) exponentially on the difference between ‖Xβ‖2 and the ℓ2-norm of Xβ projected onto the range of the columns of X indexed by the wrong sparsity pattern. Second, when X is randomly drawn from a Gaussian ensemble, we calculate a non-asymptotic upper bound on the probability of the maximum-likelihood decoder not declaring (partially) the true sparsity pattern. Consequently, we obtain sufficient conditions on the sample size n that guarantee, almost surely, the recovery of the true sparsity pattern. We find that the required growth rate of the sample size n matches the growth rate of previously established necessary conditions.

Part II: Estimating two-dimensional firing rate maps is a common problem, arising in a number of contexts: the estimation of place fields in hippocampus, the analysis of temporally nonstationary tuning curves in sensory and motor areas, the estimation of firing rates following spike-triggered covariance analyses, etc. Here we introduce methods based on Gaussian process nonparametric Bayesian techniques for estimating these two-dimensional rate maps. These techniques offer a number of advantages: the estimates may be computed efficiently, come equipped with natural error bars, adapt their smoothness automatically to the local density and informativeness of the observed data, and permit direct fitting of the model hyperparameters (e.g., the prior smoothness of the rate map) via maximum marginal likelihood.
We illustrate the flexibility and performance of the new techniques on a variety of simulated and real data.

Part III: Many fundamental questions in theoretical neuroscience involve optimal decoding and the computation of Shannon information rates in populations of spiking neurons. Here, we apply methods from the asymptotic theory of statistical inference to obtain a clearer analytical understanding of these quantities. We find that for large neural populations carrying a finite total amount of information, the full spiking population response is asymptotically as informative as a single observation from a Gaussian process whose mean and covariance can be characterized explicitly in terms of network and single-neuron properties. The Gaussian form of this asymptotic sufficient statistic allows us, in certain cases, to perform optimal Bayesian decoding by simple linear transformations, and to obtain closed-form expressions for the Shannon information carried by the network. One technical advantage of the theory is that it may be applied easily even to non-Poisson point-process network models; for example, we find that under some conditions, neural populations with strong history-dependent (non-Poisson) effects carry exactly the same information as do simpler equivalent populations of non-interacting Poisson neurons with matched firing rates. We argue that our findings help to clarify some results from the recent literature on neural decoding and neuroprosthetic design.

Part IV: A model of distributed parameter estimation in networks is introduced, where agents have access to partially informative measurements over time. Each agent faces a local identification problem, in the sense that it cannot consistently estimate the parameter in isolation.
We prove that, despite local identification problems, if agents update their estimates recursively as a function of their neighbors’ beliefs, they can consistently estimate the true parameter provided that the communication network is strongly connected; that is, there exists an information path between any two agents in the network. We also show that the estimates of all agents are asymptotically normally distributed. Finally, we compute the asymptotic variance of the agents’ estimates in terms of their observation models and the network topology, and provide conditions under which the distributed estimators are as efficient as any centralized estimator.

Subject(s):

Neurosciences
Applied mathematics