Academic Commons Search Results
http://academiccommons.columbia.edu/catalog.rss?f%5Bdepartment_facet%5D%5B%5D=Statistics&f%5Bgenre_facet%5D%5B%5D=Technical+reports&q=&rows=500&sort=record_creation_date+desc
Finding a Maximum-Genus Graph Imbedding
http://academiccommons.columbia.edu/catalog/ac:142035
Furst, Merrick L.; Gross, Jonathan L.; McGeoch, Lyle A.
http://hdl.handle.net/10022/AC:P:11837
Mon, 28 Nov 2011 00:00:00 +0000
The computational complexity of constructing the imbeddings of a given graph into surfaces of different genus is not well understood. In this paper, topological methods and a reduction to linear matroid parity are used to develop a polynomial-time algorithm to find a maximum-genus cellular imbedding. This seems to be the first imbedding algorithm for which the running time is not exponential in the genus of the imbedding surface.
Subjects: Computer science
Author UNI(s): jlg2
Departments: Statistics, Computer Science
Genre: Technical reports

An Information-Theoretic Scale for Cultural Rule Systems
http://academiccommons.columbia.edu/catalog/ac:140503
Gross, Jonathan L.
http://hdl.handle.net/10022/AC:P:11478
Tue, 18 Oct 2011 00:00:00 +0000
Important cultural messages are expressed in nonverbal media such as food, clothing, or the allocation of space or time. For instance, how and what a group of persons eats on a particular occasion may convey public information about that occasion and about the group of persons eating together. Whereas attention seems to be most commonly directed toward the individual character of the information, the present concern is the quantity of public information, as observed in the pattern of nonverbal cultural signs. To measure this quantity, it is proposed that the pattern of cultural signs be encoded as a sequence of abstract symbols (e.g., letters of the alphabet) and its complexity appraised by a suitably adapted form of the measure of Kolmogorov and Chaitin. That is, an algorithmic language is constructed and the mathematical information quantity is reckoned as the length of the shortest program that yields the sequence. In this cultural context, the measure is called "intricacy". By focusing on syntactic structure and pattern variation rather than on background levels, intricacy resists some influences of material wealth that tend to distort comparisons of individuals and groups. A compact mathematical overview of the theory is presented, and an experiment to test it within the social medium of food sharing is briefly described.
Subjects: Information science, Sociology, Applied mathematics
Author UNI(s): jlg2
Departments: Statistics, Computer Science
Genre: Technical reports

About SparseLab
http://academiccommons.columbia.edu/catalog/ac:140160
Donoho, David L.; Stodden, Victoria C.; Tsaig, Yaakov
http://hdl.handle.net/10022/AC:P:11429
Tue, 11 Oct 2011 00:00:00 +0000
Changes and Enhancements for Release 2.0: 4 papers have been added to SparseLab 2.0: "Fast Solution of l1-norm Minimization Problems When the Solutions May be Sparse"; "Why Simple Shrinkage is Still Relevant For Redundant Representations"; "Stable Recovery of Sparse Overcomplete Representations in the Presence of Noise"; "On the Stability of Basis Pursuit in the Presence of Noise." SparseLab is a library of Matlab routines for finding sparse solutions to underdetermined systems. The library is available free of charge over the Internet. Versions are provided for Macintosh, UNIX, and Windows machines. Downloading and installation instructions are given here. SparseLab has over 400 .m files, which are documented, indexed, and cross-referenced in various ways. In this document we suggest several ways to get started using SparseLab: (a) trying out the pedagogical examples, (b) running the demonstrations, which illustrate the use of SparseLab in published papers, and (c) browsing the extensive collection of source files, which are self-documenting. SparseLab makes available, in one package, all the code to reproduce all the figures in the included published articles. The interested reader can inspect the source code to see exactly what algorithms were used and how parameters were set in producing our figures, and can then modify the source to produce variations on our results. SparseLab has been developed, in part, because of exhortations by Jon Claerbout of Stanford that computational scientists should engage in "really reproducible" research. This document helps with installation and getting started, as well as describing the philosophy, limitations, and rules of the road for this software.
Subjects: Technical communication, Computer science
Author UNI(s): vcs2115
Departments: Statistics
Genre: Technical reports

SparseLab Architecture
http://academiccommons.columbia.edu/catalog/ac:140164
Donoho, David L.; Stodden, Victoria C.; Tsaig, Yaakov
http://hdl.handle.net/10022/AC:P:11430
Tue, 11 Oct 2011 00:00:00 +0000
Changes and Enhancements for Release 2.0: 4 papers have been added to SparseLab 2.0: "Fast Solution of l1-norm Minimization Problems When the Solutions May be Sparse"; "Why Simple Shrinkage is Still Relevant For Redundant Representations"; "Stable Recovery of Sparse Overcomplete Representations in the Presence of Noise"; "On the Stability of Basis Pursuit in the Presence of Noise." This document describes the architecture of SparseLab version 2.0. It is designed for users who already have had day-to-day interaction with the package and now need specific details about the architecture of the package, for example to modify components for their own research.
Subjects: Technical communication, Computer science
Author UNI(s): vcs2115
Departments: Statistics
Genre: Technical reports

Some Problems in Topological Graph Theory
http://academiccommons.columbia.edu/catalog/ac:138034
Gross, Jonathan L.; Harary, Frank
http://hdl.handle.net/10022/AC:P:11055
Wed, 31 Aug 2011 00:00:00 +0000
Subjects: Computer science
Author UNI(s): jlg2
Departments: Statistics, Computer Science
Genre: Technical reports

Estimation of System Reliability Using a Semiparametric Model
http://academiccommons.columbia.edu/catalog/ac:135421
Wu, Leon Li; Teravainen, Timothy Kaleva; Kaiser, Gail E.; Anderson, Roger N.; Boulanger, Albert G.; Rudin, Cynthia
http://hdl.handle.net/10022/AC:P:10670
Fri, 08 Jul 2011 00:00:00 +0000
An important problem in reliability engineering is to predict the failure rate, that is, the frequency with which an engineered system or component fails. This paper presents a new method of estimating the failure rate using a semiparametric model with Gaussian process smoothing. The method provides accurate estimation based on historical data and does not make strong a priori assumptions about the failure-rate pattern (e.g., constant or monotonic). Experiments applying this method to power-system failure data, compared against other models, show its efficacy and accuracy. The method can also be used to estimate reliability for many other systems, such as software systems or components.
Subjects: Computer science
Author UNI(s): llw2107, tkt2103, gek1, rna1
Departments: Statistics, Computer Science, Center for Computational Learning Systems
Genre: Technical reports

Comparing Speed of Provider Data Entry: Electronic Versus Paper Methods
http://academiccommons.columbia.edu/catalog/ac:133547
Jackson, Kevin M.; Kaiser, Gail E.; Wong, Lyndon; Rabinowitz, Daniel; Chiang, Michael F.
http://hdl.handle.net/10022/AC:P:10508
Wed, 08 Jun 2011 00:00:00 +0000
Electronic health record (EHR) systems have significant potential advantages over traditional paper-based systems, but they require that providers assume responsibility for data entry. One significant barrier to adoption of EHRs is the perception of slowed data entry by providers. This study compares the speed of data entry using computer-based templates vs. paper for a large eye clinic, using 10 subjects and 10 simulated clinical scenarios. Data entry into the EHR was significantly slower (p < 0.01) than on traditional paper forms.
Subjects: Computer science
Author UNI(s): gek1, dr105
Departments: Statistics, Biomedical Informatics, Computer Science, Ophthalmology
Genre: Technical reports
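The last abstract reports a paired timing comparison (10 subjects, 10 matched scenarios, p < 0.01). The abstract does not state which statistical test was used; a paired t statistic is one standard way to frame such matched timing data. A minimal pure-Python sketch, in which the function name and any sample numbers are illustrative rather than taken from the study:

```python
import math
from statistics import mean, stdev

def paired_t_statistic(times_a, times_b):
    """Paired t statistic for two matched samples of task times.

    Returns (t, degrees_of_freedom). Converting t into a p-value
    still requires a t-distribution table or a stats library.
    """
    if len(times_a) != len(times_b):
        raise ValueError("samples must be paired (equal length)")
    diffs = [a - b for a, b in zip(times_a, times_b)]
    n = len(diffs)
    d_bar = mean(diffs)          # mean per-scenario time difference
    s_d = stdev(diffs)           # sample standard deviation of the differences
    t = d_bar / (s_d / math.sqrt(n))
    return t, n - 1

# Hypothetical per-scenario completion times (minutes), EHR vs. paper:
ehr   = [6.1, 7.4, 5.9, 8.2, 6.8, 7.0, 6.5, 7.7, 6.3, 7.1]
paper = [4.9, 5.8, 5.1, 6.4, 5.5, 5.9, 5.2, 6.1, 5.0, 5.7]
t, df = paired_t_statistic(ehr, paper)
```

A large positive t on such data would point the same direction as the paper's finding (EHR slower); the actual study's test and numbers are not reproduced here.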