Academic Commons Search Results
http://academiccommons.columbia.edu/catalog.rss?f%5Bdepartment_facet%5D%5B%5D=Electrical+Engineering&q=&rows=500&sort=record_creation_date+desc
Academic Commons Search Results
en-us

An Open Pipeline for Generating Executable Neural Circuits from Fruit Fly Brain Data
http://academiccommons.columbia.edu/catalog/ac:197713
Givon, Lev E.
http://dx.doi.org/10.7916/D8P26Z34
Tue, 19 Apr 2016 18:55:22 +0000
Despite considerable progress in mapping the fly’s connectome and elucidating the patterns of information flow in its brain, the complexity of the fly brain’s structure and the still-incomplete state of knowledge regarding its neural circuitry pose significant challenges, beyond satisfying the computational resource requirements of current fly brain models, that must be addressed to successfully reverse engineer the information processing capabilities of the fly brain. These include the need to explicitly facilitate collaborative development of brain models by combining the efforts of multiple researchers, and the need to enable programmatic generation of brain models that effectively utilize the burgeoning amount of increasingly detailed publicly available fly connectome data.
This thesis presents an open pipeline for modular construction of executable models of the fruit fly brain from incomplete biological brain data that addresses both of the above requirements. This pipeline consists of two major open-source components, respectively called Neurokernel and NeuroArch.
Neurokernel is a framework for collaborative construction of executable connectome-based fly brain models by integration of independently developed models of different functional units in the brain into a single emulation that can be executed on multiple Graphics Processing Units (GPUs). Neurokernel enforces a programming model that enables functional unit models that comply with its interface requirements to communicate during execution regardless of their internal design. We demonstrate the power of this programming model by using it to integrate independently developed models of the fly retina and lamina into a single vision processing system. We also show how Neurokernel’s communication performance scales over multiple GPUs, over the number of functional units in a brain emulation, and over the number of communication ports exposed by a functional unit model.
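The port-based programming model described above can be sketched in miniature: modules only interact through declared communication ports, so their internal designs stay hidden from each other. The class and port names below are invented for illustration and are not the actual Neurokernel API.

```python
# Hypothetical sketch of port-mediated module integration (not Neurokernel's API).
class Module:
    """A functional unit exposing named output ports and an input inbox."""
    def __init__(self, name, out_ports):
        self.name = name
        self.out_ports = {p: 0.0 for p in out_ports}
        self.inbox = {}

    def step(self):
        raise NotImplementedError

class Retina(Module):
    def __init__(self):
        super().__init__('retina', ['R1'])
        self.t = 0

    def step(self):
        # Internal design is arbitrary; only port values are visible outside.
        self.t += 1
        self.out_ports['R1'] = float(self.t % 2)

class Lamina(Module):
    def __init__(self):
        super().__init__('lamina', ['L1'])

    def step(self):
        # Respond to whatever arrived on the connected input port.
        self.out_ports['L1'] = 2.0 * self.inbox.get('R1', 0.0)

def run(modules, pattern, steps):
    """pattern maps (src_module, src_port) -> (dst_module, dst_port)."""
    for _ in range(steps):
        for m in modules:
            m.step()
        for (src, p), (dst, q) in pattern.items():
            dst.inbox[q] = src.out_ports[p]

retina, lamina = Retina(), Lamina()
run([retina, lamina], {(retina, 'R1'): (lamina, 'R1')}, steps=3)
```

Because the connector only touches declared ports, either module could be swapped for a different internal design without changing the integration code.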
Although the increasing amount of experimentally obtained biological data regarding the fruit fly brain affords brain modelers a potentially valuable resource for model development, the actual use of this data to construct executable neural circuit models is currently challenging: the disparate nature of different data sources, the range of storage formats they use, and the limited query features of those formats complicate the process of inferring executable circuit designs from biological data. To overcome these limitations, we created a software package called NeuroArch that defines a data model for concurrent representation of both biological data and model structure, and the relationships between them, within a single graph database. Coupled with a powerful interface for querying both types of data within the database in a uniform high-level manner, this representation enables construction and dispatching of executable neural circuits to Neurokernel for execution and evaluation.
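The idea of one graph holding both biological data and model structure can be sketched with a toy property graph; the node and edge schema below is hypothetical, not NeuroArch's actual data model.

```python
# Toy property graph: biological entities and executable-model nodes coexist,
# linked by 'models' edges, so one query traverses both kinds of data.
# Schema and class names are invented for illustration.
nodes = {
    'n1': {'kind': 'bio',   'type': 'neuron', 'neuropil': 'lamina'},
    'n2': {'kind': 'bio',   'type': 'neuron', 'neuropil': 'retina'},
    'm1': {'kind': 'model', 'class': 'LeakyIAF'},
    'm2': {'kind': 'model', 'class': 'MorrisLecar'},
}
edges = [('m1', 'models', 'n1'), ('m2', 'models', 'n2')]

def models_in_neuropil(neuropil):
    """Return model classes attached to biological neurons of a neuropil."""
    return sorted(nodes[src]['class']
                  for src, rel, dst in edges
                  if rel == 'models' and nodes[dst].get('neuropil') == neuropil)
```

A single high-level query like `models_in_neuropil('lamina')` crosses from biological annotations to executable-model nodes, which is the kind of uniform traversal the abstract describes.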
We demonstrate the utility of the NeuroArch/Neurokernel pipeline by using the packages to generate an executable model of the central complex of the fruit fly brain from both published and hypothetical data regarding overlapping neuron arborizations in different regions of the central complex neuropils. We also show how the pipeline empowers circuit model designers to devise computational analogues of biological experiments, such as concurrent recording from multiple neurons in parallel and emulation of genetic mutations that alter the fly’s neural circuitry.
Neurosciences, Computer science
leg22
Electrical Engineering
Dissertations

Investigation of Two-Dimensional Transition Metal Dichalcogenides by Optical and Scanning Tunneling Spectroscopy
http://academiccommons.columbia.edu/catalog/ac:197563
Rigosi, Albert Felix
http://dx.doi.org/10.7916/D8WH2PXD
Wed, 13 Apr 2016 12:20:01 +0000
The goal of this dissertation is not only to present completed works and initiated projects, but also to teach some of the material to readers who have limited exposure to condensed matter physics. I will offer an introduction to two-dimensional transition metal dichalcogenide materials (2D TMDCs) and the mathematics required to understand the research conducted. Some effort will be devoted to explaining the experimental setups and preparations. Projects that required elaborate sample fabrication, and the results they yielded, will be summarized. These results have significant implications for the science of bound electron-hole pairs, the effects of magnetic fields on such pairs, and the extraction of useful optical properties from the material systems in which these pairs reside. Specialized sample fabrication techniques for the longer-term projects that I led will also be presented, namely the construction of heterostructures by stacking various 2D TMDCs to explore the modulated properties of these novel arrangements. The latter portion of this dissertation will cover the nanoscopic dynamics of TMDC heterostructures. The Kramers-Kronig relations will be derived and discussed in detail.
Data and results regarding the electronic structure of these materials, their heterostructures, and their custom alloys, measured via scanning tunneling microscopy, will be presented. Coupled with the measured optical properties, significant numerical quantities that characterize these materials are extracted. Several appendices offer supplementary information and basic summaries of all the projects that were initiated.
Physics
afr2117
Physics, Electrical Engineering
Dissertations

Probing Transition Metal Dichalcogenide Monolayers and Heterostructures by Optical Spectroscopy and Scanning Tunneling Spectroscopy
http://academiccommons.columbia.edu/catalog/ac:197551
Hill, Heather Marie
http://dx.doi.org/10.7916/D88W3D85
Mon, 11 Apr 2016 18:25:30 +0000
Atomically thin two-dimensional materials, such as graphene and semiconducting transition metal dichalcogenides (TMDCs), exhibit remarkable and desirable optical and electronic properties. This dissertation focuses on the excitonic properties of monolayer TMDCs, first in isolation and then in contact with another material. We begin with a study of the exciton binding energy in two monolayer TMDCs, WS₂ and MoS₂. We observe excited states of the exciton by two different optical spectroscopy techniques: reflectance contrast and photoluminescence excitation (PLE) spectroscopy. We fit a hydrogenic model to the energies associated with the excited states and infer a binding energy that is an order of magnitude higher than in the bulk material. In the second half of this work, we study two types of two-dimensional vertical heterostructures. First, we investigate heterostructures composed of monolayer WS₂ partially capped with graphene one to four layers thick. Using reflectance contrast to measure the spectral broadening of the excitonic features, we measure the decrease in the coherence lifetime of the exciton in WS₂ due to charge and energy transfer when in contact with graphene. We then compare our results with the exciton lifetime in MoS₂/WS₂ and MoSe₂/WSe₂ heterostructures. In TMDC/TMDC heterostructures, the decrease in exciton lifetime is twice that in WS₂/graphene heterostructures and is due predominantly to charge transfer between the layers. Finally, we probe the band alignment in MoS₂/WS₂ heterostructures using scanning tunneling microscopy (STM) and spectroscopy (STS). We confirm the monolayer band gaps and the predicted type II band alignment in the heterostructure.
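The hydrogenic-model fit described above can be sketched numerically: in the 2D hydrogen model the excited-state series is E_n = E_g − Ry/(n − 1/2)², so a linear least-squares fit recovers the band gap E_g and effective Rydberg Ry, and the n = 1 binding energy follows as Ry/(1/2)² = 4·Ry. The energies below are synthetic, not the measured values.

```python
# Minimal sketch of a Rydberg-series fit (synthetic example data).
import numpy as np

def fit_rydberg_series(ns, energies_eV):
    """Least-squares fit of E_n = E_g - Ry/(n - 1/2)^2; returns (E_g, Ry)."""
    ns = np.asarray(ns, dtype=float)
    A = np.column_stack([np.ones_like(ns), -1.0 / (ns - 0.5) ** 2])
    (E_g, Ry), *_ = np.linalg.lstsq(A, np.asarray(energies_eV, dtype=float),
                                    rcond=None)
    return E_g, Ry

ns = np.array([1, 2, 3, 4])
E_gap_true, Ry_true = 2.4, 0.08          # hypothetical numbers, in eV
E = E_gap_true - Ry_true / (ns - 0.5) ** 2
E_g, Ry = fit_rydberg_series(ns, E)
binding_energy = 4 * Ry                  # n = 1 exciton depth below E_g
```

Because the model is linear in (E_g, Ry), a plain least-squares solve suffices; no nonlinear optimizer is needed.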
Drawing from all the research presented, we arrive at a favorable conclusion about the viability of TMDC-based devices.
Physics
hmh2131
Physics, Electrical Engineering
Dissertations

Architectures and Integrated Circuits for Efficient, High-power "Digital" Transmitters for Millimeter-wave Applications
http://academiccommons.columbia.edu/catalog/ac:197310
Chakrabarti, Anandaroop
http://dx.doi.org/10.7916/D8XP74VT
Mon, 04 Apr 2016 12:18:12 +0000
This thesis presents architectures and integrated circuits for the implementation of energy-efficient, high-power "digital" transmitters to realize high-speed long-haul links at millimeter-wave frequencies in nanoscale silicon-based processes.
Electrical engineering
ac3215
Electrical Engineering
Dissertations

User association for energy harvesting relay stations in cellular networks
http://academiccommons.columbia.edu/catalog/ac:196929
Wang, Zhe; Wang, Xiaodong; Aldiab, Motasem; Jaber, Tareq
http://dx.doi.org/10.7916/D89886X6
Wed, 30 Mar 2016 00:00:00 +0000
We consider a cellular wireless network enhanced by relay stations that are powered by renewable energy sources. Such a network consists of macro base stations (BS), relay stations (RSs), and many mobile stations (MSs). In addition to the traditional data/voice transmission between the BS and the MSs, a higher service tier may be provided to some MSs by using the energy harvesting RSs. We propose a network scenario that utilizes the energy harvesting relay stations to improve service quality without requiring additional licensed spectrum or transmission power, and design a user association algorithm for the energy harvesting RSs in such a network. The goal is to assign each MS an RS for relaying its signal so as to minimize the probability of relay service outage, i.e., the probability that an MS’s relay service request is rejected. First, we propose a network scenario and develop a mathematical model to estimate the rejection probability for a given user association. We then propose a low-complexity local search algorithm, which balances computational complexity and performance, to obtain a locally optimal user association. Simulation results are provided to demonstrate the superior performance of the proposed techniques over traditional methods.
Electrical engineering, Mathematics, Radio relay systems, Wireless communication systems, Energy harvesting, Mobile radio stations
zw2231, xw2008
Electrical Engineering
Articles

Reconstruction of novel transcription factor regulons through inference of their binding sites
http://academiccommons.columbia.edu/catalog/ac:196938
Elmas, Abdulkadir; Wang, Xiaodong; Samoilov, Michael S.
http://dx.doi.org/10.7916/D8K07469
Wed, 30 Mar 2016 00:00:00 +0000
Background: In most sequenced organisms the number of known regulatory genes (e.g., transcription factors (TFs)) vastly exceeds the number of experimentally verified regulons that could be associated with them. At present, identification of TF regulons is mostly done through comparative genomics approaches. Such methods could miss organism-specific regulatory interactions and often require expensive and time-consuming experimental techniques to generate the underlying data.
Results: In this work, we present an efficient algorithm that aims to identify a given transcription factor’s regulon through inference of its unknown binding sites, based on the discovery of its binding motif. The proposed approach relies on computational methods that utilize gene expression data sets and knockout fitness data sets, which are available or may be straightforwardly obtained for many organisms. We computationally constructed the profiles of putative regulons for the TFs LexA, PurR and Fur in E. coli K12 and identified their binding motifs. Comparisons with an experimentally verified database showed high recovery rates of the known regulon members, and indicated good predictions for the newly found genes with high biological significance. The proposed approach is also applicable to novel organisms for predicting unknown regulons of transcriptional regulators. Results for the hypothetical protein Dde0289 in D. alaskensis include the discovery of a Fis-type TF binding motif.
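The binding-site scoring at the core of such motif-based inference can be sketched with a position weight matrix (PWM); the motif counts and promoter sequence below are invented, and the actual pipeline additionally integrates expression and knockout-fitness data.

```python
# Hypothetical sketch of PWM-based binding-site scanning (toy motif data).
import math

counts = {  # toy 4-column motif count matrix, e.g. from aligned known sites
    'A': [8, 0, 1, 0],
    'C': [0, 1, 0, 8],
    'G': [1, 8, 0, 1],
    'T': [0, 0, 8, 0],
}

def pwm_from_counts(counts, pseudo=1.0, background=0.25):
    """Log-odds PWM from a count matrix, with pseudocounts."""
    width = len(counts['A'])
    totals = [sum(counts[b][i] for b in 'ACGT') + 4 * pseudo
              for i in range(width)]
    return {b: [math.log(((counts[b][i] + pseudo) / totals[i]) / background)
                for i in range(width)]
            for b in 'ACGT'}

def best_site(pwm, seq):
    """Highest-scoring window of motif width in seq: (score, offset)."""
    w = len(pwm['A'])
    return max((sum(pwm[seq[i + j]][j] for j in range(w)), i)
               for i in range(len(seq) - w + 1))

pwm = pwm_from_counts(counts)
score, offset = best_site(pwm, 'TTAGTCAA')   # consensus AGTC sits at offset 2
```

Scanning every promoter upstream region this way, and keeping the high-scoring hits, yields the putative regulon members the abstract describes.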
Conclusions: The proposed motif-based regulon inference approach can discover organism-specific regulatory interactions on a single genome, which may be missed by current comparative genomics techniques due to their limitations.
Electrical engineering, Genetics, Molecular biology, Mathematics, Transcription factors, Comparative genomics, Bioinformatics, Genomics--Data processing, Computational biology
ae2321, xw2008
Electrical Engineering
Articles

Neurokernel: An Open Source Platform for Emulating the Fruit Fly Brain
http://academiccommons.columbia.edu/catalog/ac:195647
Givon, Lev E.; Lazar, Aurel A.
http://dx.doi.org/10.7916/D8F76CGK
Tue, 15 Mar 2016 00:00:00 +0000
We have developed an open software platform called Neurokernel for collaborative development of comprehensive models of the brain of the fruit fly Drosophila melanogaster and for their execution and testing on multiple Graphics Processing Units (GPUs). Neurokernel provides a programming model that capitalizes upon the structural organization of the fly brain into a fixed number of functional modules to distinguish between these modules’ local information processing capabilities and the connectivity patterns that link them. By defining mandatory communication interfaces that specify how data is transmitted between models of each of these modules regardless of their internal design, Neurokernel explicitly enables multiple researchers to collaboratively model the fruit fly’s entire brain by integrating their independently developed models of its constituent processing units. We demonstrate the power of Neurokernel’s model integration by combining independently developed models of the retina and lamina neuropils in the fly’s visual system and by demonstrating their neuroinformation processing capability. We also illustrate Neurokernel’s ability to take advantage of direct GPU-to-GPU data transfers with benchmarks that demonstrate scaling of Neurokernel’s communication performance both over the number of interface ports exposed by an emulation’s constituent modules and over the total number of modules in an emulation.
Neurosciences, Electrical engineering, Drosophila melanogaster--Physiology, Graphics processing units, Brain--Models
leg22, aal1
Electrical Engineering
Articles

Distributed and Large-Scale Optimization
http://academiccommons.columbia.edu/catalog/ac:193921
Ali Younis Kalbat, Abdulrahman Younis
http://dx.doi.org/10.7916/D8D79B7V
Wed, 27 Jan 2016 00:00:00 +0000
This dissertation is motivated by the pressing need to solve real-world large-scale optimization problems, with the main objective of developing scalable algorithms that are capable of solving such problems efficiently. Large-scale optimization problems naturally appear in complex systems such as power networks and distributed control systems, which are the main systems of interest in this work. This dissertation aims to address four problems concerning the theory and application of large-scale optimization, which are explained below:
Chapter 2: In this chapter, a fast and parallelizable algorithm is developed for an arbitrary decomposable semidefinite program (SDP). Based on the alternating direction method of multipliers, we design a numerical algorithm that has guaranteed convergence under very mild assumptions. We show that each iteration of this algorithm has a simple closed-form solution, consisting of matrix multiplications and eigenvalue decompositions performed by individual agents as well as information exchanges between neighboring agents. The cheap iterations of the proposed algorithm enable solving a wide spectrum of real-world large-scale conic optimization problems that can be reformulated as SDPs.
Chapter 3: Motivated by the application of sparse SDPs to power networks, the objective of this chapter is to design a fast and parallelizable algorithm for solving the SDP relaxation of a large-scale optimal power flow (OPF) problem. OPF is a fundamental problem used for the operation and planning of power networks; it is non-convex and NP-hard in the worst case. The proposed algorithm would enable real-time power network management and improve the system's reliability.
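The closed-form iterations mentioned for Chapter 2 rest on cheap per-agent operations; the key primitive in ADMM schemes for SDPs is projection onto the positive semidefinite cone via one eigenvalue decomposition. A generic sketch of that step (not the dissertation's code):

```python
# Projection onto the PSD cone: the eigenvalue-decomposition step that makes
# each ADMM iteration closed-form. Generic illustration with NumPy.
import numpy as np

def project_psd(M):
    """Nearest PSD matrix in Frobenius norm: clip negative eigenvalues."""
    M = (M + M.T) / 2.0                  # symmetrize first
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.clip(w, 0.0, None)) @ V.T

M = np.array([[1.0, 0.0],
              [0.0, -2.0]])
P = project_psd(M)                       # negative eigenvalue clipped to zero
```

In a decomposed SDP, each agent applies this projection only to its own small block, which is what keeps the iterations cheap and parallelizable.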
In particular, this algorithm helps with the realization of the Smart Grid by allowing optimal decisions to be made very quickly in response to the stochastic nature of renewable energy. The proposed algorithm is evaluated on IEEE benchmark systems.
Chapter 4: The design of an optimal distributed controller using an efficient computational method is one of the most fundamental problems in the area of control systems, and it remains open due to its NP-hardness in the worst case. In this chapter, we first study the infinite-horizon optimal distributed control (ODC) problem (for deterministic systems) and then generalize the results to a stochastic ODC problem (for stochastic systems). Our approach rests on formulating each of these problems as a rank-constrained optimization from which an SDP relaxation can be derived. We show that both problems admit sparse SDP relaxations with solutions of rank at most 3. Since a rank-1 SDP matrix can be mapped back into a globally optimal controller, the rank-3 solution may be deployed to retrieve a near-global controller. We also propose a computationally cheap SDP relaxation for each problem and then develop effective heuristic methods to recover a near-optimal controller from the low-rank SDP solution. The design of several near-optimal structured controllers with global optimality degrees above 99% will be demonstrated.
Chapter 5: The frequency control problem in power networks aims to keep the global frequency of the system within a tight range by adjusting the output of generators in response to uncertain and stochastic demand. The intermittent nature of distributed power generation in the smart grid makes traditional decentralized frequency controllers less efficient, and demands distributed controllers that are able to deal with the uncertainty introduced into the system by non-dispatchable supplies (such as renewable energy), fluctuating loads, and measurement noise.
Motivated by this need, we study the frequency control problem using the results developed in Chapter 4. In particular, we formulate the problem and then conduct a case study on the IEEE 39-Bus New England system. The objective is to design a near-globally optimal distributed frequency controller for the New England test system by optimally adjusting the mechanical power input to each generator based on real-time measurements received from neighboring generators through a user-defined communication topology.
Electrical engineering, Applied mathematics, Computer science, Semidefinite programming, Mathematical optimization, Mathematical optimization--Methodology, Computer algorithms
ak3369
Electrical Engineering
Dissertations

Facilitating Formal Verification of Cooperative Driving Applications: Techniques and Case Study
http://academiccommons.columbia.edu/catalog/ac:193257
Lin, Shou-pon
http://dx.doi.org/10.7916/D8X63MQG
Mon, 11 Jan 2016 00:00:00 +0000
The next generation of intelligent vehicles will evolve from being able to drive autonomously to ones that communicate with other vehicles and execute joint behaviors. Before allowing these vehicles on public roads, we must guarantee that they will not cause accidents. We apply formal methods to ensure a degree of safety that cannot be assured with simulation or closed-track testing. However, several challenges need to be addressed when applying formal verification techniques to cooperative driving systems. This thesis focuses on techniques that address the following challenges: 1. automotive applications interact with the physical world in different ways; 2. cooperative driving systems are time-critical; 3. state explosion occurs when we apply formal verification to systems with more participants. First, we describe the multiple stack architecture. It combines several stacks, each of which addresses a particular mode of interaction with the physical world. The layered structure in each stack makes it possible for engineers to implement cooperative driving applications without being bogged down by the details of low-level devices. Having functions arranged in a layered fashion also helps us divide the verification of the whole system into smaller subproblems of independent module verification. Secondly, we present a framework for modeling protocol systems that use GPS clocks for synchronization. We introduce the timing stack, which separates a process into two parts: the part modeled as a finite-state machine that controls state transitions and message exchanges, and the part that determines the exact moment at which a timed event should occur. The availability of accurate clocks at different locations allows processes to execute actions simultaneously, reducing the interleaving that often arises in systems that use multiple timers to control timed events.
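The timing-stack separation described above can be sketched as follows; this is a hypothetical illustration (the class names `FSM` and `TimingStack` are invented), not the thesis's modeling framework.

```python
# Sketch: the FSM only reacts to events; a separate timing component decides
# the absolute (GPS-synchronized) instant each timed event fires, so
# processes at different locations can act at the same instant.
class FSM:
    def __init__(self, transitions, state):
        self.transitions, self.state = transitions, state

    def handle(self, event):
        self.state = self.transitions.get((self.state, event), self.state)

class TimingStack:
    """Maps timed events to absolute firing instants on a shared clock."""
    def __init__(self):
        self.schedule = []               # list of (fire_at, event)

    def arm(self, fire_at, event):
        self.schedule.append((fire_at, event))

    def due(self, now):
        ready = [e for t, e in self.schedule if t <= now]
        self.schedule = [(t, e) for t, e in self.schedule if t > now]
        return ready

fsm = FSM({('idle', 'merge_go'): 'merging'}, 'idle')
clock = TimingStack()
clock.arm(fire_at=10.0, event='merge_go')  # every car arms the same instant

for event in clock.due(now=10.0):          # fired on the shared GPS clock
    fsm.handle(event)
```

Because transitions are driven by absolute timestamps rather than by independent local timers, the verifier sees one synchronized transition instead of many interleavings.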
With accurate clocks, we create a lock protocol that resolves conflicting merge requests for driver-assisted merging. Thirdly, we introduce stratified probabilistic verification, which mitigates state explosion. It greatly improves the probability bound obtained by the original probabilistic verification algorithm. Unlike most techniques that aim at reducing the state space, it is a directed state traversal that prioritizes the states more likely to be encountered during system execution. When state traversal stops upon depleting memory, the unexplored states are the ones that are less likely to be reached. We construct a linear program whose solution is an upper bound on the probability of reaching those unexplored states. The stratified algorithm is particularly useful when considering a protocol system that depends on several imperfect components that may fail with small but hard-to-quantify probabilities. In that case, we adopt a compositional approach to verify a collection of components, assuming that the components have inexact probability guarantees. Finally, we present our design for driver-assisted merging. Its design is considerably simplified by the multiple stack architecture and GPS clocks. We use the stratified algorithm to show that the merging system fails less than once every 5 × 10¹³ merge attempts.
Electrical engineering, Computer science, Artificial intelligence, Automobile driving, Automobile driving--Steering--Automatic control, Automobiles--Automatic control, Motor vehicles--Automatic control, Automatic control--Computer programs
sl3357
Electrical Engineering
Dissertations

Silicon Photonic Devices and Their Applications
http://academiccommons.columbia.edu/catalog/ac:193245
Li, Ying
http://dx.doi.org/10.7916/D8CJ8D8J
Thu, 07 Jan 2016 00:00:00 +0000
Silicon photonics is the study and application of photonic systems that use silicon as an optical medium. Data is transferred in these systems by optical signals. This technology is seen as a potential successor to electronic computer chips and as a means to keep pace with Moore’s law. Cavity optomechanics is a rising field within silicon photonics. It focuses on the interaction between light and mechanical objects. Although currently at an early stage of growth, this field has attracted increasing attention. Here, we present highly sensitive optical detection of acceleration using an optomechanical accelerometer. The core of this accelerometer is a slot-type photonic crystal cavity with strong optomechanical interactions. We first discuss theoretically the optomechanical coupling in the air-slot mode-gap photonic crystal cavity. The dispersive coupling g_om is numerically calculated. Dynamical parametric oscillations for both cooling and amplification, in the resolved and unresolved sideband limits, are examined numerically, along with the displacement spectral density and cooling rates for various operating parameters. Experimental results also demonstrate that the cavity has a large optomechanical coupling rate. The optically induced spring effect, damping, and amplification of the mechanical modes are observed in measurements both in air and in vacuum. We then propose and demonstrate our optomechanical accelerometer. It can operate with a resolution of 730 ng/Hz¹/² (or equivalently 40.1 aN/Hz¹/²) and with a transduction bandwidth of ≈ 85 kHz. In the last part of this thesis, we also demonstrate an integrated photonic device, an on-chip spectrometer. This new type of on-chip microspectrometer is based on the Vernier effect of two cascaded micro-ring cavities.
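The Vernier-effect design trades the free spectral ranges (FSRs) of the two rings for an extended effective range, FSR_eff = FSR₁·FSR₂/|FSR₁ − FSR₂|; a quick sketch with hypothetical ring FSRs (not the thesis's device parameters):

```python
# Effective free spectral range of two cascaded, slightly detuned ring
# cavities (Vernier effect). The FSR values below are made up.
def vernier_fsr(fsr1_nm, fsr2_nm):
    """Extended FSR of the cascaded pair, in the same units as the inputs."""
    return fsr1_nm * fsr2_nm / abs(fsr1_nm - fsr2_nm)

extended = vernier_fsr(2.0, 2.2)   # two rings detuned by 0.2 nm -> ~22 nm
```

The closer the two ring FSRs are, the longer the beat period, which is how a small footprint can cover a wide measurement bandwidth.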
It can measure an optical spectrum with a bandwidth of 74 nm and a resolution of 0.22 nm in a small footprint of 1.5 mm².
Optics, Mechanical engineering, Silicon--Optical properties, Photonics, Optomechanics, Photonics--Industrial applications
yl2584
Electrical Engineering, Mechanical Engineering
Dissertations

Silicon Modulators, Switches and Sub-systems for Optical Interconnect
http://academiccommons.columbia.edu/catalog/ac:194352
Li, Qi
http://dx.doi.org/10.7916/D8WS8T00
Wed, 06 Jan 2016 00:00:00 +0000
Silicon photonics is emerging as a promising platform for manufacturing and integrating photonic devices for light generation, modulation, switching, and detection. Compatibility with existing CMOS microelectronic foundries and the high index contrast of silicon could enable low-cost, high-performance photonic systems, which find many applications in optical communication, data center networking, and photonic networks-on-chip. This thesis first develops and demonstrates several experiments on high-speed silicon modulators and switches with record performance and novel functionality. An 8×40 Gb/s transmitter based on silicon microrings is first presented. Then an end-to-end link using microrings for Binary Phase Shift Keying (BPSK) modulation and demodulation is shown, and its performance is compared with conventional BPSK modulation/demodulation techniques. Next, a silicon traveling-wave Mach-Zehnder modulator is demonstrated at data rates up to 56 Gb/s for OOK modulation and 48 Gb/s for BPSK modulation, showing its capability in high-speed communication systems. Then a single silicon microring is shown to provide 2×2 full crossbar switching functionality, enabling optical interconnects with an ultra-small footprint. Several other experiments on the silicon platform are then presented, including a fully integrated in-band Optical Signal-to-Noise Ratio (OSNR) monitor, characterization of the optical power upper bound in a silicon microring modulator, and wavelength conversion in a dispersion-engineered waveguide. The last part of this thesis concerns network-level applications of photonics: specifically, a broadcast-and-select network based on a star coupler is introduced, and its scalability is studied.
Finally, a novel switch architecture for data center networks is discussed, and its benefits as a disaggregated network are presented.
Electrical engineering, Optics, Silicon--Optical properties, Photonics, Phase shift keying, Microelectronics, Optical communications
ql2163
Electrical Engineering
Dissertations

Exploring Regulation Genes Involved in the Expression of L-Amino Acid Oxidase in Pseudoalteromonas sp. Rf-1
http://academiccommons.columbia.edu/catalog/ac:192340
Yu, Zhiliang; Wang, Ju; Lin, Jianxun; Zhao, Minyan; Qiu, Juanping
http://dx.doi.org/10.7916/D88S4PNR
Tue, 15 Dec 2015 00:00:00 +0000
Bacterial L-amino acid oxidase (LAAO) is believed to play important biological and ecological roles in marine niches, thus attracting increasing attention to the regulation mechanisms underlying its production. In this study, we investigated genes involved in LAAO production in the marine bacterium Pseudoalteromonas sp. Rf-1 using transposon mutagenesis. Of more than 4,000 mutants screened, 15 showed significant changes in LAAO activity. The desired transposon insertion was confirmed in 12 mutants, in which the disrupted genes and their corresponding functions were identified. Analysis of LAAO activity and lao gene expression revealed that a GntR family transcriptional regulator, a methylase, a non-ribosomal peptide synthetase, a TonB-dependent heme-receptor family protein, a Na⁺/H⁺ antiporter and related arsenite permease, the N-acetyltransferase GCN5, a ketol-acid reductoisomerase, and a SAM-dependent methyltransferase, and their coding genes, may be involved in either upregulation or downregulation pathways at the transcriptional, posttranscriptional, translational and/or posttranslational levels. The nhaD and sdmT genes were separately complemented into the corresponding mutants with abolished LAAO activity. Complementation of either gene restored LAAO activity and lao gene expression, demonstrating their regulatory role in LAAO biosynthesis. This study provides, for the first time, insights into the molecular mechanisms regulating LAAO production in Pseudoalteromonas sp. Rf-1, which is important for better understanding the biological and ecological roles of LAAO.
Genetics, Microbiology, Gene expression, Genetic regulation, Marine bacteria, Oxidases, Flavoproteins, Biosynthesis
Electrical Engineering
Articles

Polymyxin E Induces Rapid Paenibacillus polymyxa Death by Damaging Cell Membrane while Ca2+ Can Protect Cells from Damage
http://academiccommons.columbia.edu/catalog/ac:192256
Yu, Zhiliang; Cai, Yuanning; Qin, Wangrong; Lin, Jianxun; Qiu, Juanping
http://dx.doi.org/10.7916/D80P0ZRC
Mon, 14 Dec 2015 00:00:00 +0000
Polymyxin E, produced by Paenibacillus polymyxa, is an important antibiotic normally used against Gram-negative pathogens. In this study, we found that polymyxin E can kill its producer P. polymyxa, a Gram-positive bacterium, by disrupting its cell membrane. Membrane damage was clearly revealed by detecting the leakage of intracellular molecules. Observation using scanning electron microscopy also supported the finding that polymyxin E can destroy the cell membrane and cause extensive cell surface alteration. On the other hand, divalent cations can protect against polymyxin E. Compared with Mg2+, Ca2+ can more effectively alleviate polymyxin E-induced damage to the cell membrane, thus remarkably increasing P. polymyxa survival. Our findings shed light on a not-yet-described bactericidal mechanism of polymyxin E against Gram-positive bacteria and, more importantly, on the nature of the limited fermentation output of polymyxin E from P. polymyxa.
Molecular biology, Cellular biology, Biochemistry, Polymyxin, Gram-positive bacteria, Cell membranes
Electrical Engineering
Articles

Electrochemical Camera Chip for Simultaneous Imaging of Multiple Metabolites in Biofilms
http://academiccommons.columbia.edu/catalog/ac:189979
Bellin, Daniel Louis
http://dx.doi.org/10.7916/D8NK3DKH
Fri, 16 Oct 2015 00:00:00 +0000
Despite advances in monitoring spatiotemporal expression patterns of genes and proteins with fluorescent probes, direct detection of metabolites and small molecules remains challenging. Here we present an integrated circuit-based electrochemical camera chip capable of simultaneous spatial imaging of multiple redox-active metabolites, called phenazines, produced by Pseudomonas aeruginosa PA14 colony biofilms. Imaging of mutants with various capacities for phenazine production reveals local patterns of phenazine distribution in the biofilms. Such integrated circuit-based techniques promise wide applicability in detecting redox-active species from diverse biological samples.
Electrical engineering
dlb2149
Electrical Engineering
Dissertations

Computational Methods for Nonlinear Optimization Problems: Theory and Applications
http://academiccommons.columbia.edu/catalog/ac:189943
Madani, Ramtin
http://dx.doi.org/10.7916/D88S4PDM
Thu, 15 Oct 2015 00:00:00 +0000
This dissertation is motivated by the lack of efficient global optimization techniques for polynomial optimization problems. The objective is twofold. First, a new mathematical foundation for obtaining a global or near-global solution will be developed. Second, several case studies will be conducted on a variety of real-world problems. Global optimization, convex relaxation, and distributed computation are at the heart of this PhD dissertation. Some of the specific problems to be addressed in this thesis, on both the theory and the application of nonlinear optimization, are explained below:
Graph-theoretic algorithms for low-rank optimization problems: There is rapidly growing interest in the recovery of an unknown low-rank matrix from limited information and measurements. This problem occurs in many areas of engineering and applied science, such as machine learning, control, and computer vision. We develop a graph-theoretic technique in Part I that is able to generate a low-rank solution for a sparse Linear Matrix Inequality (LMI), which is directly applicable to a large set of problems such as low-rank matrix completion with many unknown entries. Our approach finds a solution with a guarantee on its rank, using recent advances in graph theory.
Resource allocation for energy systems: The flows in an electrical grid are described by nonlinear AC power flow equations. Due to the nonlinear interrelation among the physical parameters of the network, the feasibility region represented by the power flow equations may be nonconvex and disconnected. The nonlinearity of the network constraints has been studied since 1962, and various heuristic and local-search algorithms have been proposed in order to perform optimization over an electrical grid [Baldick, 2006; Pandya and Joshi, 2008].
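The nonlinearity at the heart of the power flow equations is that complex power injections are quadratic in the bus voltages, S = diag(V)·conj(Y·V). A minimal two-bus sketch with made-up admittances and voltages:

```python
# Two-bus AC power flow injections; Y and V values are hypothetical.
import numpy as np

def injections(Y, V):
    """Complex power injected at each bus (P + jQ) for admittances Y, voltages V."""
    return V * np.conj(Y @ V)

Y = np.array([[ 1 - 5j, -1 + 5j],
              [-1 + 5j,  1 - 5j]])          # single line between the buses
V = np.array([1.0 + 0j, 0.98 * np.exp(-0.05j)])  # bus 1 lags bus 0 slightly
S = injections(Y, V)                        # real power flows from bus 0 to bus 1
```

The quadratic dependence on V is what makes the feasible set nonconvex, and replacing the product V·conj(V) with a PSD matrix variable is exactly the lifting behind the SDP relaxations studied in Part II.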
Part II is concerned with finding convex formulations of the power flow equations using semidefinite programming (SDP). The potential of SDP relaxation for problems in power systems has been manifested in [Lavaei and Low, 2012], with further studies conducted in [Lavaei, 2011; Sojoudi and Lavaei, 2012]. A variety of graph-theoretic and algebraic methods are developed in Part II in order to facilitate performing fundamental, yet challenging tasks such as the optimal power flow (OPF) problem, security-constrained OPF and the classical power flow problem. Synthesis of distributed control systems: Real-world systems mostly consist of many interconnected subsystems, and designing an optimal controller for them poses several challenges to the field of control theory. The area of distributed control was created to address the challenges arising in the control of these systems. The objective is to design a constrained controller whose structure is specified by a set of permissible interactions between the local controllers with the aim of reducing the computation or communication complexity of the overall controller. It has long been known that the design of an optimal distributed (decentralized) controller is a daunting task because it amounts to an NP-hard optimization problem in general [Witsenhausen, 1968; Tsitsiklis and Athans, 1984]. Part III is devoted to studying the potential of the SDP relaxation for the optimal distributed control (ODC) problem. Our approach rests on formulating each of the different variations of the ODC problem as a rank-constrained optimization problem from which SDP relaxations can be derived. As the first contribution, we show that the ODC problem admits a sparse SDP relaxation with solutions of rank at most 3. Since a rank-1 SDP matrix can be mapped back into a globally-optimal controller, the low-rank SDP solution may be deployed to retrieve a near-global controller.
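The recovery step mentioned above, mapping a rank-1 SDP solution X = x xᵀ back to the underlying vector, can be sketched via the leading eigenpair (an illustrative NumPy sketch with made-up data, not code from the dissertation):

```python
import numpy as np

def recover_from_rank1(X):
    """Recover x from a (near-)rank-1 PSD matrix X ~ x x^T via its
    leading eigenpair (illustrative of the rank-1 recovery step)."""
    w, V = np.linalg.eigh(X)           # eigenvalues in ascending order
    return np.sqrt(max(w[-1], 0.0)) * V[:, -1]

# hypothetical parameter vector, recovered up to a global sign
x_true = np.array([1.0, -2.0, 0.5])
x_hat = recover_from_rank1(np.outer(x_true, x_true))
assert np.allclose(x_hat, x_true) or np.allclose(x_hat, -x_true)
```

For a near-rank-1 solution (small trailing eigenvalues), the same leading-eigenpair map yields the near-global point the abstract refers to.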
Parallel computation for sparse semidefinite programs: While small- to medium-sized semidefinite programs are efficiently solvable by second-order-based interior point methods in polynomial time up to any arbitrary precision [Vandenberghe and Boyd, 1996a], these methods are impractical for solving large-scale SDPs due to computation time and memory issues. In Part IV of this dissertation, a parallel algorithm for solving an arbitrary SDP is introduced based on the alternating direction method of multipliers. The proposed algorithm has guaranteed convergence under very mild assumptions. Each iteration of this algorithm has a simple closed-form solution, and consists of scalar multiplication and eigenvalue decomposition over matrices whose sizes are not greater than the treewidth of the sparsity graph of the SDP problem. The cheap iterations of the proposed algorithm enable solving real-world large-scale conic optimization problems.Engineering, Applied mathematics, Computer sciencerm3122Electrical EngineeringDissertationsLarge-Scale Video Event Detection
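The eigenvalue-decomposition step inside each ADMM iteration is a projection onto the positive semidefinite cone, computed by clipping negative eigenvalues. A minimal sketch of that step (illustrative only; the dissertation's algorithm applies it to treewidth-sized submatrices):

```python
import numpy as np

def project_psd(A):
    """Project a symmetric matrix onto the PSD cone by clipping negative
    eigenvalues: the eigendecomposition step inside each ADMM iteration."""
    w, V = np.linalg.eigh(A)
    return (V * np.maximum(w, 0.0)) @ V.T

# toy indefinite symmetric matrix
A = np.array([[1.0, 2.0], [2.0, -1.0]])
X = project_psd(A)   # nearest PSD matrix to A in Frobenius norm
```

Because the per-iteration work reduces to such small dense eigendecompositions plus scalar updates, the iterations parallelize across the submatrices of the sparsity graph.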
http://academiccommons.columbia.edu/catalog/ac:189685
Ye, Guangnanhttp://dx.doi.org/10.7916/D8BG2NGVTue, 13 Oct 2015 00:00:00 +0000Because of the rapid growth of large scale video recording and sharing, there is a growing need for robust and scalable solutions for analyzing video content. The ability to detect and recognize video events that capture real-world activities is one of the key and complex problems. This thesis aims at the development of robust and efficient solutions for large scale video event detection systems. In particular, we investigate the problem in two areas: first, event detection with automatically discovered event-specific concepts with organized ontology, and second, event detection with multi-modality representations and multi-source fusion. Existing event detection works use various low-level features with statistical learning models, and achieve promising performance. However, such approaches lack the capability of interpreting the abundant semantic content associated with complex video events. Therefore, mid-level semantic concept representation of complex events has emerged as a promising method for understanding video events. In this area, existing works can be categorized into two groups: those that manually define a specialized concept set for a specific event, and those that apply a general concept lexicon directly borrowed from existing object, scene and action concept libraries. The first approach seems to require tremendous manual efforts, whereas the second approach is often insufficient in capturing the rich semantics contained in video events. In this work, we propose an automatic event-driven concept discovery method, and build a large-scale event and concept library with well-organized ontology, called EventNet. Unlike past work, this method neither applies a generic concept library independent of the target events nor requires tedious manual annotations.
Extensive experiments over the zero-shot event retrieval task when no training samples are available show that the proposed EventNet library consistently and significantly outperforms the state-of-the-art methods. Although concept-based event representation can interpret the semantic content of video events, in order to achieve high accuracy in event detection, we also need to consider and combine various features of different modalities and/or across different levels. On the one hand, we observe that joint cross-modality patterns (e.g., audio-visual patterns) often exist in videos and provide strong multi-modal cues for detecting video events. We propose a joint audio-visual bi-modal codeword representation, called bi-modal words, to discover cross-modality correlations. On the other hand, combining features from multiple sources often produces performance gains, especially when the features complement each other. Existing multi-source late fusion methods usually apply direct combination of confidence scores from different sources. This becomes limiting because heterogeneous results from various sources often produce incomparable confidence scores at different scales. This makes direct late fusion inappropriate, thus posing a great challenge. Based upon the above considerations, we propose a robust late fusion method with rank minimization that not only achieves isotonicity among various scores from different sources, but also recovers a robust prediction score for individual test samples. We experimentally show that the proposed multi-modality representation and multi-source fusion methods achieve promising results compared with other benchmark baselines. The main contributions of the thesis include the following. 1.
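Rank minimization is typically relaxed to nuclear-norm minimization, whose proximal operator is singular value thresholding. A minimal sketch of that operator applied to a stacked score matrix (illustrative with random data; the thesis's robust late fusion formulation is more involved):

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximal operator of the nuclear
    norm, the standard convex surrogate for rank minimization."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# hypothetical stacked scores (test samples x sources); shrinking the
# singular values pulls the sources toward a shared low-rank consensus
rng = np.random.default_rng(0)
scores = rng.random((6, 3))
low_rank = svt(scores, tau=0.5)
```

Thresholding never increases rank, so iterating such a step inside a fusion objective drives heterogeneous per-source scores toward a common low-rank structure while the residual absorbs source-specific noise.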
Large scale event and concept ontology: a) propose an automatic framework for discovering event-driven concepts; b) build the largest video event ontology, EventNet, which includes 500 complex events and 4,490 event-specific concepts; c) build the first interactive system that allows users to explore high-level events and associated concepts in videos with event browsing, search, and tagging functions. 2. Event detection with multi-modality representations and multi-source fusion: a) propose novel bi-modal codeword construction for discovering multi-modality correlations; b) propose a novel robust late fusion with rank minimization method for combining information from multiple sources. The two parts of the thesis are complementary. Concept-based event representation provides rich semantic information for video events. Cross-modality features also provide complementary information from multiple sources. The combination of these two parts in a unified framework offers great potential for advancing the state of the art in large-scale event detection.Computer sciencegy2179Electrical EngineeringDissertationsHeavy Tails and Instabilities in Large-Scale Systems with Failures
http://academiccommons.columbia.edu/catalog/ac:189193
Skiani, Evangeliahttp://dx.doi.org/10.7916/D8MW2GKRFri, 02 Oct 2015 00:00:00 +0000Modern engineering systems, e.g., wireless communication networks, distributed computing systems, etc., are characterized by high variability and susceptibility to failures. Failure recovery is required to guarantee the successful operation of these systems. One straightforward and widely used mechanism is to restart the interrupted jobs from the beginning after a failure occurs. In network design, retransmissions are the primary building blocks of the network architecture that guarantee data delivery in the presence of channel failures. Retransmissions have recently been identified as a new origin of power laws in modern information networks. In particular, it was discovered that retransmissions give rise to long tails (delays) and possibly zero throughput. To this end, we investigate the impact of the ‘retransmission phenomenon’ on the performance of failure prone systems and propose adaptive solutions to address emerging instabilities. The preceding finding of power law phenomena due to retransmissions holds under the assumption that data sizes have infinite support. In practice, however, data sizes are upper bounded 0 ≤ L ≤ b, e.g., WaveLAN’s maximum transfer unit is 1500 bytes, YouTube videos are of limited duration, e-mail attachments cannot exceed 10MB, etc. To this end, we first provide a uniform characterization of the entire body of the distribution of the number of retransmissions, which can be represented as a product of a power law and the Gamma distribution. This rigorous approximation clearly demonstrates the transition from power law distributions in the main body to exponential tails. Furthermore, the results highlight the importance of wisely determining the size of data fragments in order to accommodate the performance needs in these systems as well as provide the appropriate tools for this fragmentation.
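The restart-from-scratch mechanism behind these heavy tails is easy to simulate. The sketch below (a toy Monte Carlo model with illustrative rates, not the thesis's analysis) counts transmission attempts for exponentially distributed jobs over Exp(1) channel availability periods, with and without truncating job sizes at b = 3:

```python
import random

def num_attempts(job, rng):
    """Attempts until one Exp(1) availability period fits the whole job
    (restart-from-scratch after every channel failure)."""
    n = 1
    while rng.expovariate(1.0) < job:
        n += 1
    return n

rng = random.Random(7)
jobs = [rng.expovariate(1.0) for _ in range(10000)]
unbounded = [num_attempts(L, rng) for L in jobs]          # L has infinite support
bounded = [num_attempts(min(L, 3.0), rng) for L in jobs]  # fragmented at b = 3
```

With unbounded jobs the attempt count is heavy-tailed (the largest jobs need exponentially many restarts); truncating at b caps the per-attempt failure probability at 1 - e^(-b), which is what produces the exponential tail beyond the power-law body described above.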
Second, we extend the analysis to the practically important case of correlated channels using modulated processes, e.g., Markov modulated, to capture the underlying dependencies. Our study shows that the tails of the retransmission and delay distributions are asymptotically insensitive to the channel correlations and are determined by the state that generates the lightest tail in the independent channel case. This insight is beneficial both for capacity planning and channel modeling since the independent model is sufficient and the correlation details do not matter. However, the preceding finding may be overly optimistic when the best state is atypical, since the effects of ‘bad’ states may still downgrade the performance. Third, we examine the effects of scheduling policies in queueing systems with failures and restarts. Fair sharing, e.g., processor sharing (PS), is a widely accepted approach to resource allocation among multiple users. We revisit the well-studied M/G/1 PS queue with a new focus on server failures and restarts. Interestingly, we discover a new phenomenon showing that PS-based scheduling induces complete instability in the presence of retransmissions, regardless of how low the traffic load may be. This novel phenomenon occurs even when the job sizes are bounded/fragmented, e.g., deterministic. This work demonstrates that scheduling one job at a time, such as first-come-first-serve, achieves a larger stability region and should be preferred in these systems. Last, we delve into the area of distributed computing and study the effects of commonly used mechanisms, i.e., restarts, fragmentation, replication, especially in cloud computing services. We evaluate the efficiency of these techniques under different assumptions on the data streams and discuss the corresponding optimization problem. These findings are useful for optimal resource allocation and fault tolerance in rapidly developing computing networks. 
In addition to networking and distributed computing systems, the aforementioned results improve our understanding of failure recovery management in large manufacturing and service systems, e.g., call centers. Scalable solutions to this problem increase in significance as these systems continuously grow in scale and complexity. The new phenomena and the techniques developed herein provide new insights in the areas of parallel computing, probability and statistics, as well as financial engineering.Electrical engineering, Operations researches3009Electrical EngineeringDissertationsReal-time Awareness and Fast Reconfiguration Capabilities for Agile Optical Networks
http://academiccommons.columbia.edu/catalog/ac:189208
Ahsan, Atiyah Sayyidahhttp://dx.doi.org/10.7916/D8QJ7GQXFri, 02 Oct 2015 00:00:00 +0000Ever-growing demand for speed and bandwidth coupled with increasing energy consumption in current networks are driving the need for intelligent, next-generation networking architectures that can overcome fundamental spectral and energy limitations. Metro-only internet traffic in particular is experiencing unprecedented growth rates and increasing twice as fast as long-haul traffic. The current quasi-static, peak-capacity-provisioned network is ill-equipped to support this rise of unpredictable, high bandwidth but short-duration traffic flows. A promising solution to address the emerging networking challenges is agile optical networking. Agile optical networking leverages novel photonic devices and multi-layer switching capabilities along with network awareness and intelligence to allocate resources in accordance with changing traffic demands and network conditions. However, network agility requires changing the wavelength configuration in the optical layer in real-time to match the traffic demands. Rapidly changing the wavelength loading conditions in optical amplifiers results in debilitating power fluctuations that propagate through the network and can lead to network instability, a problem that is avoided in current networks by using long reconfiguration times encompassing many small adjustments. An agile optical network, once successfully implemented, will be characterized by unpredictable transmission impairments. Power levels along any path in an agile network are constantly fluctuating due to the continuously changing wavelength configuration; consequently, power dependent transmission impairments are also constantly fluctuating.
Real-time knowledge of the state of the physical layer is thus critical for managing signal quality and reliability in an agile optical network, requiring the development of cost-effective, energy-efficient monitoring solutions that can support advanced modulation formats. This dissertation focuses on developing solutions for the two key requirements for a stable agile optical network. Techniques that allow wavelength reconfiguration on the order of seconds while maintaining stable network operation and minimal data loss are presented. Functionality of an existing advanced optical performance monitor is extended to include autonomous monitoring of both single and multiple channel systems, so that it can be used in an agile optical network for real-time introspection of the physical layer.Engineering, Electrical engineeringasa2157Electrical EngineeringDissertationsExcimer Laser Crystallization of Silicon Thin-Films for Monolithic 3D Integration
http://academiccommons.columbia.edu/catalog/ac:189637
Carta, Fabiohttp://dx.doi.org/10.7916/D8251HJ4Wed, 23 Sep 2015 00:00:00 +0000In 1964 the first metal oxide semiconductor (MOS) integrated circuit (IC) became available. Shortly after, in 1965, Gordon Moore predicted the pace of the device density increase in ICs. His prediction became a self-fulfilling prophecy which, taking advantage of the formal device scaling rules introduced by Robert Dennard in 1974, drove the evolution of the integrated electronics industry. In conventional two dimensional ICs, devices are integrated into a single layer of silicon in what is called the front end of line (FEOL) fabrication. Additional layers on top of the devices serve as isolating inter-layer dielectrics and metal interconnects and are fabricated in the back end of line (BEOL) process. Scaling the dimensions of devices allows for an increase in device density, improvements in device switching speed and a reduction of the cost per device. The conjunction of these benefits has driven the industry thus far. Over the past decade, further scaling of devices while also achieving an increase in performance and cost benefits has become extremely difficult. As the dimensional scaling of complementary MOS (CMOS) devices reaches its limits, three dimensional ICs (3DICs) are increasingly being considered as a path to achieve higher device densities. 3DICs offer a way to increase density by using multiple device layers on the same die, reducing the interconnect distance and allowing for a decrease in signal delay. Among different fabrication techniques, monolithic 3D integration is potentially more cost effective but requires high performance devices, a process compatible with transistor integration in the BEOL stack, and needs to deliver a high device density and uniformity in order to be adopted by the very large scale integration (VLSI) industry. This work focuses on a particular laser crystallization technique to achieve monolithic device integration.
The technique, called Excimer Laser Crystallization (ELC), makes use of an excimer laser to achieve a large grain polycrystalline thin-film starting from an amorphous layer, allowing integration of high quality thin-film transistors (TFTs). Thus far, the ELC technique has been studied on thin-films typically deposited on top of quartz substrates or Si/SiO₂ wafers. On the other hand, state-of-the-art VLSI integration uses more advanced BEOL stacks with low-κ material as interlayer dielectrics (ILDs) to passivate the copper (Cu) interconnect lines. This thesis focuses on three key aspects to enable laser crystallization in the BEOL for device integration: 1. Excimer laser crystallization of amorphous silicon on low-κ dielectric; 2. Excimer laser crystallization of amorphous silicon on BEOL processed wafers; 3. VLSI of TFTs on excimer laser crystallized silicon. The ELC of amorphous silicon on low-κ dielectric is first explored through one-dimensional (1D) finite element method (FEM) simulation of the temperature evolution during the laser exposure in two different systems: 1. amorphous silicon deposited on top of SiO₂ dielectric; 2. amorphous silicon deposited on top of low-κ dielectric. Simulations predict that a lower laser energy is necessary for crystallizing the silicon on the low-κ material. Experimental observations confirm the predicted behavior, yielding a 35% lower energy for crystallization of thin-film silicon on top of a low-κ dielectric. Material characterization through defect-enhanced SEM micrographs, Raman spectroscopy and XRD analysis shows an equivalent material morphology for the two systems, with a preferential (111) crystal orientation for the SiO₂ system. Silicon crystallization on BEOL processed wafers is studied through a combination of 1D FEM simulation and experimental observation on a silicon layer deposited on top of a SiO₂ dielectric protecting the underlying damascene Cu structure.
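The 1D thermal simulation described above amounts to integrating the heat equation through the depth of the film stack. A minimal explicit finite-difference (FTCS) sketch with illustrative, made-up parameters (the thesis uses FEM with realistic, layer-dependent material properties):

```python
import numpy as np

def heat_step(T, alpha, dt, dx):
    """One explicit (FTCS) step of the 1D heat equation
    dT/dt = alpha * d2T/dx2 with fixed-temperature boundaries."""
    Tn = T.copy()
    Tn[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    return Tn

dx, dt, alpha = 1e-8, 1e-12, 1e-6         # grid (m), step (s), diffusivity (m^2/s)
assert alpha * dt / dx**2 <= 0.5          # FTCS stability condition
T = np.full(100, 300.0)                   # film stack at ambient temperature (K)
T[0] = 1500.0                             # laser-heated surface node
for _ in range(500):                      # 0.5 ns of diffusion into the depth
    T = heat_step(T, alpha, dt, dx)
```

Swapping the diffusivity profile between an SiO₂-like and a (lower-conductivity) low-κ-like underlayer is the kind of comparison such a model supports: a more insulating underlayer retains heat in the silicon, consistent with the lower crystallization energy reported above.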
1D FEM simulations show that during the silicon laser exposure, because of the short pulse width of the laser (30 ns), the heat is retained in the amorphous silicon layer, allowing its melting while keeping the temperature of the Cu lines below 320 °C, which is a favorable condition for monolithic integration in the BEOL. Further experimental evidence shows the ability to crystallize a-Si on such structures while preserving the physical and electrical properties of the Cu lines. The feasibility of monolithic VLSI 3D integration is demonstrated through integration of TFT devices on 200 mm silicon wafers. The integration process and performance of the TFT devices are modeled through technology computer aided design (TCAD) simulations, which are used to define the process flow and the fabrication parameters. Characterization of the TFTs over multiple dies yields good device performance and uniformity. TFTs characterized with 1.5 V of supply voltage have a sub-threshold slope down to 79 mV/decade, current density up to 26.3 μA/μm, a threshold voltage of 0.23 V, current On/Off ratio above 10⁵ and device field effect mobility up to 19.8 cm²/(V s) for LPCVD-sourced silicon. Furthermore, the Levinson method allows characterization of the trap density in the thin-film polysilicon devices, yielding a mean value of 8.13×10¹² cm⁻². This work presents an integration scheme which proves to be compatible with VLSI in the BEOL of wafers. It paves the way to further developments which could lead to a high performance, cost effective, monolithic 3D integration approach useful in applications such as reconfigurable logic, displays, heterogeneous integration and on-chip optical communications.Engineeringfc2383Electrical EngineeringDissertationsMassively Parallel Spiking Neural Circuits: Encoding, Decoding and Functional Identification
http://academiccommons.columbia.edu/catalog/ac:189640
Zhou, Yiyinhttp://dx.doi.org/10.7916/D8SQ8ZQFWed, 23 Sep 2015 00:00:00 +0000This thesis presents a class of massively parallel spiking neural circuit architectures in which neurons are modeled by dendritic stimulus processors cascaded with spike generators. We investigate how visual stimuli can be represented by the spike times generated by the massively parallel neural circuits, how the spike times can be used to reconstruct and process visual stimuli, and the conditions under which visual stimuli can be faithfully represented/reconstructed. Functional identification of the massively parallel neural circuits from spike times and its evaluation are also investigated. Together, this thesis offers a comprehensive analytic framework of massively parallel spiking neural circuit architectures arising in the study of early visual systems. In encoding, modeling of visual stimuli in reproducing kernel Hilbert spaces is presented, recognizing the importance of studying visual encoding in a rigorous mathematical framework. For massively parallel neural circuits with biophysical spike generators, I/O characterization of the biophysical spike generators becomes possible by introducing phase response curve manifolds for the biophysical spike generators. I/O characterization of the entire neural circuit can then be interpreted as generalized sampling in the Hilbert space. Multi-component dendritic stimulus processors are introduced to model visual encoding in stereoscopic color vision. It is also shown that encoding of visual stimuli by an ensemble of complex cells has the complexity of Volterra dendritic stimulus processors. Based on the I/O characterization, reconstruction algorithms are derived to decode, from spike times, visual stimuli encoded by these massively parallel neural circuits. Decoding problems are first formulated as spline interpolation problems. Conditions for faithful reconstruction are presented, allowing the information content carried by the spikes to be probed.
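The spike generators discussed here can be illustrated with the simplest time-encoding model, an ideal integrate-and-fire neuron that maps a stimulus into spike times (a minimal sketch with illustrative parameters, not the biophysical spike generators used in the thesis):

```python
import math

def iaf_encode(u, dt, bias=1.5, kappa=1.0, delta=0.02):
    """Ideal integrate-and-fire time encoding: integrate (u + bias)/kappa
    and emit a spike time whenever the integral crosses threshold delta."""
    spikes, integral = [], 0.0
    for k, uk in enumerate(u):
        integral += (uk + bias) / kappa * dt
        if integral >= delta:
            spikes.append(k * dt)
            integral -= delta
    return spikes

# a 10 Hz sinusoidal "stimulus" sampled for 0.2 s
dt = 1e-4
u = [math.sin(2 * math.pi * 10 * k * dt) for k in range(2000)]
spikes = iaf_encode(u, dt)   # denser spikes where the stimulus is larger
```

Each inter-spike interval pins down the integral of the stimulus between consecutive spikes, which is why decoding can be posed as an interpolation problem from such measurements, in the spirit of the spline formulation above.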
Algorithms are developed to qualify the decoding in massively parallel settings. For stereoscopic color visual stimuli, demixing of individual channels from an unlabeled set of spike trains is demonstrated. For encoding with complex cells, decoding problems are formulated as rank minimization problems. It is shown that the decoding algorithm does not suffer from the curse of dimensionality and thereby allows for a visual representation using biologically realistic neural resources. The study of visual stimuli encoding and decoding enables the functional identification of massively parallel neural circuits. The duality between decoding and functional identification suggests that algorithms for functional identification of the projection of dendritic stimulus processors onto the space of input stimuli can be formulated similarly to the decoding algorithms. Functional identification of dendritic stimulus processors of neurons carrying stereoscopic color information as well as that of energy processing in complex cells is demonstrated. Furthermore, this duality also inspires a novel method to evaluate the quality of functional identification of massively parallel spiking neural circuits. By reconstructing novel stimuli using identified circuit parameters, the evaluation of the entire identified circuit is reduced to intuitive comparisons in stimulus space. The use of biophysical spike generators advances a methodology in the study of intrinsic noise sources in neurons and their effects on stimulus representation and on precision of functional identification. These effects are investigated using a class of nonlinear neural circuits consisting of both feedforward and feedback Volterra dendritic stimulus processors and biophysical spike generators. It is shown that encoding with neural circuits with intrinsic noise sources can be interpreted as generalized sampling with noisy measurements. 
Effects of noise on decoding and functional identification are derived theoretically and systematically investigated through extensive simulations. Finally, the massively parallel neural circuit architectures are shown to enable the implementation of identity preserving transformations in the spike domain using a switching matrix regulating the connection between encoding and decoding. Two realizations of the architectures are developed, and extensive examples using continuous visual streams are provided. Implications of this result on the problem of invariant object recognition in the spike domain are discussed.Electrical engineering, Neurosciencesyz2227Electrical EngineeringDissertationsResource Allocation for the Internet of Everything: From Energy Harvesting Tags to Cellular Networks
http://academiccommons.columbia.edu/catalog/ac:189619
Margolies, Robert Sethhttp://dx.doi.org/10.7916/D87D2TG9Tue, 22 Sep 2015 00:00:00 +0000In the near future, objects equipped with heterogeneous devices such as sensors, actuators, and tags will be able to interact with each other and cooperate to achieve common goals. These networks are termed the Internet of Things (IoT) and have applications in healthcare, smart buildings, assisted living, manufacturing, supply chain management, and intelligent transportation. The IoT vision is enabled by ubiquitous wireless communications, and there are numerous resource allocation challenges to efficiently connect each device to the network. In this thesis, we study wireless resource allocation problems that arise in the IoT, namely in the areas of energy harvesting tags, termed the Internet of Tags (IoTags), and in cellular networks (mobile and cognitive). First, we present our experience designing and developing Energy Harvesting Active Networked Tags (EnHANTs). The prototypes harvest indoor light energy using custom organic solar cells, communicate and form multihop networks using ultra-low-power Ultra-Wideband Impulse Radio (UWB-IR) transceivers, and dynamically adapt their communications and networking patterns to the energy harvesting and battery states. Using our custom designed small scale testbed, we evaluate energy-adaptive networking algorithms spanning the protocol stack (link, network, and flow control). Throughout the experimental evaluation, we highlight numerous phenomena which are typically difficult to capture in simulations and nearly impossible to model in analytical work. We believe that these lessons will be useful for the designers of many different types of energy harvesters and energy harvesting adaptive networks. Based on the lessons learned from EnHANTs, we present Power Aware Neighbor Discovery Asynchronously (Panda), a Neighbor Discovery (ND) protocol optimized for networks of energy harvesting nodes.
To enable object tracking and monitoring applications for IoTags, Panda is designed to efficiently identify nodes which are within wireless communication range of one another. By accounting for numerous hardware constraints which are typically ignored (i.e., energy costs for transmission/reception, and transceiver state switching times/costs), we formulate a power budget to guarantee perpetual ND. Finally, via testbed evaluation utilizing Commercial Off-The-Shelf (COTS) energy harvesting nodes, we demonstrate experimentally that Panda outperforms existing protocols by a factor of 2-3x. We then consider Proportional Fair (PF) cellular scheduling algorithms for mobile users. These users experience slow-fading wireless channels while traversing roads, train tracks, bus routes, etc. We leverage the predictable mobility on these routes and present the Predictive Finite-horizon PF Scheduling ((PF)2S) Framework. We collect extensive channel measurement results from a 3G network and characterize mobility-induced channel state trends. We show that a user’s channel state is highly reproducible and leverage that to develop a data rate prediction mechanism. Our trace-based simulations of the (PF)2S Framework indicate that the framework can increase the throughput by 15%–55% compared to traditional PF schedulers, while improving fairness. Finally, we study fragmentation within a probability model of combinatorial structures. Our model does not refer to any particular application. Yet, it is applicable to dynamic spectrum access networks, which can be used as the wireless access technology for numerous IoT applications. In dynamic spectrum access networks, users share the wireless resource and compete to transmit and receive data, and accordingly have specific bandwidth and residence-time requirements. We prove that the spectrum tends towards states of complete fragmentation.
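The baseline that (PF)2S builds on is the classical proportional fair rule: serve the user with the largest ratio of instantaneous to average rate, then update the averages with an exponential filter. A minimal sketch with illustrative rates (textbook PF, not the predictive framework proposed here):

```python
def pf_schedule(rates, avg, beta=0.1):
    """One slot of a proportional fair scheduler: serve the user with the
    largest instantaneous-to-average rate ratio, then update the
    exponentially weighted average throughputs."""
    user = max(range(len(rates)), key=lambda i: rates[i] / avg[i])
    for i in range(len(avg)):
        avg[i] = (1 - beta) * avg[i] + beta * (rates[i] if i == user else 0.0)
    return user

# user 0 has the better channel in every slot, but PF eventually serves
# user 1 once user 1's average throughput has fallen far enough behind
avg = [1.0, 1.0]
picks = [pf_schedule([2.0, 1.0], avg) for _ in range(5)]
print(picks)  # -> [0, 0, 0, 0, 1]
```

The predictive variant exploits the reproducible channel traces described above: knowing a user's upcoming rates over a finite horizon lets the scheduler defer or advance service rather than react only to current ratios.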
That is, for every request for j > 1 sub-channels, nearly all size-j requests are allocated j mutually disjoint sub-channels. In a suite of four theorems, we show how this result specializes for certain classes of request-size distributions. We also show that the delays in reaching the inefficient states of complete fragmentation can be surprisingly long. The results of this chapter provide insights into the fragmentation process and, in turn, into those circumstances where defragmentation is worth the cost it incurs.Electrical engineeringrsm2156Electrical EngineeringDissertationsVan der Waals Layered Materials: Surface Morphology, Interlayer Interaction, and Electronic Structure
http://academiccommons.columbia.edu/catalog/ac:189613
Yeh, Po-Chunhttp://dx.doi.org/10.7916/D8V69HZ1Thu, 17 Sep 2015 00:00:00 +0000Over the past decades, new materials have formed the backbone in shaping the landscape of technology. From the Si-based transistors in our smart devices to the carbon fibers that have redefined air transportation, the pursuit of a stronger, lighter, and cost-effective material has never ceased, nor has the attempt to fully understand their physics and material properties. Moore's law just turned 50 this year, yet it seems harder and harder to uphold as the industry has reached a point where the dimensions of those Si-based transistors are getting too small and thin to proceed quickly and without incurring substantial additional cost. Also, the transistor dimensions have been getting closer and closer to the physical limits of Si. Today, the most advanced node is merely "7 nm" – less than 15 layers of Si, silicon oxides, or other metal oxides. At this point, every layer of atoms counts. The search for new ultrathin materials as the "new silicon" has begun. In this regard, graphene, which is a single sheet of carbon atoms arranged in a hexagonal honeycomb lattice, has led to a huge interest in the science and technology communities due to its exotic physics that arises from low-dimensional confinement. This interest soon extended to two-dimensional (2D) electronic materials systems, especially semiconducting van der Waals layered materials such as MoS₂, which, unlike graphene, has a direct bandgap in its monolayer form. There are also many other promising candidates such as transition metal dichalcogenides (TMDCs), black phosphorene, and perovskites on this rapidly growing "2D materials" family tree.
From a condensed-matter-physics point of view, studying the electronic behavior of these 2D systems can provide insight into a variety of phenomena, including epitaxial growth, interfacial charge transfer, energy-momentum relations, and carrier mobility, that lead to advanced device fabrication and engineering. In this dissertation, I examine (1) the surface structure, including the growth, crystal quality, and thin-film surface corrugation of monolayer and few-layer MoS₂ and WSe₂, and (2) their electronic structure. The characteristics of these electronic systems depend intimately on the morphology of the surfaces they inhabit and on their interactions with the substrate or between layers. These physical properties are addressed in each chapter. This thesis is dedicated to the characterization of mono- and few-layer MoS₂ and WSe₂ using surface-sensitive probes such as low-energy electron microscopy and diffraction (LEEM and LEED). Prior to our studies, the characterization of monolayer MoS₂ and WSe₂ had generally been limited to optical and transport probes. Furthermore, thick silicon oxide layers have been heavily used as supporting substrates in order to allow optical microscopic characterization of the 2D material. To the best of our knowledge, this has prohibited studies of these materials on other surfaces and precluded the discovery of potentially rich interface interactions that may exist between MoS₂ and its supporting substrate. 
Thus, in our study, we use a so-called SPELEEM system (Spectroscopic Photo-Emission and Low-Energy Electron Microscopy) to access three imaging modalities: (1) real-space microscopy, which allows monolayer MoS₂ samples to be located; (2) spatially resolved low-energy diffraction, which allows the crystalline quality and domain orientation of MoS₂ samples to be confirmed; and (3) spatially resolved spectroscopy, which allows the electronic structure of MoS₂ samples to be mapped. Moreover, we have developed a sample-preparation procedure that yields surface-probe-ready, ultra-clean samples that can be transferred onto an arbitrary substrate. In this thesis, to fully understand the physics of MoS₂, such as the direct-to-indirect bandgap transition, hole mobility, strain, and large spin-orbit splitting, we investigate our samples using micro-probe angle-resolved photoemission (µ-ARPES), a powerful tool for directly measuring the electronic structure. We find that the valence bands of monolayer MoS₂, particularly the low-binding-energy bands, are distinctly different from those of bulk MoS₂ in that the valence band maximum (VBM) of a monolayer is located at K̄ of the first Brillouin zone (BZ), rather than at Γ̄, as is the case in bilayer and thicker MoS₂ crystals. Complemented by photoluminescence studies of the conduction bands, this result serves as direct evidence of the direct-to-indirect transition from monolayer to multilayer MoS₂. We later confirmed the same effect in WSe₂. In addition, by carefully studying the uppermost valence band (UVB) of both exfoliated and CVD-grown monolayer MoS₂, we found a compression in energy compared with the calculated band, an effect also observed in suspended samples with minimal substrate interaction. We tentatively attribute it to an intrinsic effect of monolayer MoS₂ owing to lattice relaxation. 
The degree of compression in CVD-grown MoS₂ is larger than that in exfoliated monolayer MoS₂, likely due to defects, doping, or stress. Furthermore, we find that the uppermost valence band near K̄ of monolayer MoS₂ is less dispersive than that of the bulk, which leads to a striking increase in the hole effective mass and, hence, a reduced carrier mobility of the monolayer compared to bulk MoS₂. Beyond monolayer MoS₂, we have studied the evolution of the bandgap as a function of interlayer twist angle in a bilayer MoS₂ system. Our µ-ARPES measurements over the whole surface Brillouin zone reveal that the Γ̄ state is, indeed, the highest-lying occupied state for all twist angles, affirming the indirect-bandgap designation for bilayer MoS₂ irrespective of twist angle. We directly quantify the energy separation between the high-symmetry points Γ̄ and K̄ of the highest occupied states; this energy separation is predicted to be directly proportional to the interlayer separation, which is a function of the twist angle. We also confirm that this trend is a result of the energy shift of the topmost occupied state at Γ̄, as predicted by DFT calculations. Finally, we report on the variation of the hole effective mass at Γ̄ and K̄ with respect to twist angle and compare it with theory. Our study provides a direct measurement and serves as an example of how interlayer coupling can affect the band structure and electron transitions, which is crucial in designing TMDC devices. 
To conclude this thesis, I briefly summarize our angle-resolved two-photon photoemission (2PPE) studies of self-assembled molecules, organic molecules, and graphene on highly crystalline metal systems, and our investigations of their interfacial charge transfer/trapping, image-potential states, and coverage-dependent dipole moments, as well as their work functions, using a tunable ultrafast femtosecond laser.Physics, Materials science, Electrical engineeringpy2175Electrical EngineeringDissertationsScalable Machine Learning for Visual Data
http://academiccommons.columbia.edu/catalog/ac:189406
Yu, Xinnanhttp://dx.doi.org/10.7916/D8F47NDBFri, 21 Aug 2015 00:00:00 +0000Recent years have seen a rapid growth of visual data produced by social media, large-scale surveillance cameras, biometric sensors, and mass media content providers. The unprecedented availability of visual data calls for machine learning methods that are effective and efficient in such large-scale settings. The input of any machine learning algorithm consists of data and supervision. In a large-scale setting, on the one hand, the data often comes with a large number of samples, each of high dimensionality. On the other hand, unconstrained visual data requires a large amount of supervision to make machine learning methods effective, yet supervised information is often limited and expensive to acquire. Both issues hinder the applicability of machine learning methods to large-scale visual data. In this thesis, we propose innovative approaches to scale up machine learning to address the challenges arising from both the scale of the data and the limitations of the supervision. The methods are developed with a special focus on visual data, yet they are also widely applicable to other domains that require scalable machine learning. Learning with high dimensionality: The "large scale" of visual data comes not only from the number of samples but also from the dimensionality of the features. While a considerable amount of effort has been spent on making machine learning scalable to more samples, few approaches address learning with high-dimensional data. In Part I, we propose an innovative solution for learning with very high-dimensional data. Specifically, we use a special structure, the circulant structure, to speed up linear projection, the most widely used operation in machine learning. 
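As an illustrative aside (a minimal sketch of the general circulant trick, not the thesis's implementation): multiplying a d-dimensional vector by a circulant matrix is a circular convolution, which the FFT computes in O(d log d) time while storing only the d defining parameters.

```python
import numpy as np

def circulant_project(c, x):
    """Multiply the circulant matrix C = circ(c) by x in O(d log d)
    via the FFT, instead of O(d^2) for a dense matrix-vector product."""
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

# Check against an explicitly built circulant matrix for a small dimension.
d = 8
rng = np.random.default_rng(0)
c, x = rng.standard_normal(d), rng.standard_normal(d)
C = np.array([[c[(i - j) % d] for j in range(d)] for i in range(d)])
assert np.allclose(C @ x, circulant_project(c, x))
```

The dense product needs d² multiplications and d² stored entries; the FFT route needs only the first column of C, which is the quadratic-to-linear space and quadratic-to-linearithmic time improvement described below.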
The special structure dramatically improves the space complexity from quadratic to linear and the computational complexity from quadratic to linearithmic in the feature dimension. The proposed approach is successfully applied in various frameworks of large-scale visual data analysis, including binary embedding, deep neural networks, and kernel approximation. The significantly improved efficiency is achieved with minimal loss of performance. For all these applications, we further propose optimizing the projection parameters on training data to improve performance. The scalability of learning algorithms is often fundamentally limited by the amount of supervision available. Massive visual data comes unstructured, with diverse distributions and high dimensionality -- a large amount of supervised information is required for learning methods to work. Unfortunately, it is difficult, and sometimes even impossible, to collect a sufficient amount of high-quality supervision, such as instance-by-instance labels or frame-by-frame annotations of videos. Learning from label proportions: To address this challenge, we need to design algorithms that utilize new types of supervision, often presented in weak forms, such as relatedness between classes or label statistics over groups. In Part II, we study a learning setting called Learning from Label Proportions (LLP), where the training data is provided in groups and only the proportion of each class in each group is known. The task is to learn a model to predict the class labels of individual instances. Beyond computer vision, this learning setting has broad applications in social science, marketing, and healthcare, where individual-level labels cannot be obtained due to privacy concerns. 
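To make the LLP setting concrete (a hedged toy sketch with hypothetical data, not the thesis's algorithm): training data arrives as bags of instances with per-bag class proportions, and a candidate instance-level classifier can be scored by how closely its predicted proportions match the given ones.

```python
import numpy as np

def proportion_risk(predict, bags, proportions):
    """Average gap between the predicted positive-class fraction in each
    bag and the given label proportion. `predict` maps an (n_i, d) array
    of instances to 0/1 labels; no instance-level labels are ever seen."""
    gaps = [abs(predict(X).mean() - p) for X, p in zip(bags, proportions)]
    return float(np.mean(gaps))

# Toy illustration: two bags of 1-D instances with known proportions.
bags = [np.array([[0.1], [0.9], [0.8]]), np.array([[0.2], [0.7]])]
proportions = [2 / 3, 1 / 2]  # fraction of positives in each bag
threshold = lambda X: (X[:, 0] > 0.5).astype(int)
print(proportion_risk(threshold, bags, proportions))  # → 0.0
```

A learner in this setting searches for the classifier minimizing such a proportion-matching risk, which is the intuition behind the EPRM framework analyzed next.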
We provide theoretical analysis under an intuitive framework called Empirical Proportion Risk Minimization (EPRM), which learns an instance-level classifier to match the given label proportions on the training data. The analysis answers the fundamental question of when and why LLP is possible. Under EPRM, we propose the proportion-SVM (∝SVM) algorithm, which jointly optimizes the latent instance labels and the classification model in a large-margin framework. The approach avoids making restrictive assumptions on the data, leading to state-of-the-art results. We have successfully applied the developed tools to challenging problems in computer vision, including instance-based event recognition and attribute modeling. Scaling up mid-level visual attributes: Besides learning with weak supervision, the limitation on supervision can also be alleviated by leveraging knowledge from different yet related tasks. Specifically, "visual attributes" have been extensively studied in computer vision. The idea is that attributes, which can be understood as models trained to recognize visual properties, can be leveraged in recognizing novel categories (being able to recognize green and orange is helpful for recognizing an apple). In a large-scale setting, unconstrained visual data requires a high-dimensional attribute space that is sufficiently expressive for the visual world. Ironically, though designed to improve the scalability of visual recognition, conventional attribute modeling requires expensive human effort for labeling detailed attributes and is inadequate for designing and learning a large set of attributes. To address these challenges, in Part III, we propose methods that can automatically design a large set of attribute models without user labeling burdens. We propose the weak attribute, which combines various types of existing recognition models to form an expressive space for visual recognition and retrieval. 
In addition, we develop category-level attributes to characterize distinct properties separating multiple categories. The attributes are optimized to be discriminative for the visual recognition task over known categories, providing both better efficiency and higher recognition rates over novel categories with a limited number of training samples.Artificial intelligence, Computer science, Electrical engineeringxy2154Electrical EngineeringDissertationsThree-Dimensional Object Search, Understanding, and Pose Estimation with Low-Cost Sensors
http://academiccommons.columbia.edu/catalog/ac:188352
Wang, Yanhttp://dx.doi.org/10.7916/D8RX9B7VMon, 06 Jul 2015 00:00:00 +0000With the recent development of low-cost depth sensors, an entirely new type of 3D data is being generated rapidly by regular consumers. Traditionally, 3D data was produced by a small number of professional designers (e.g., Computer-Aided Design (CAD) models); however, 3D data from massive consumer-level sensors has the potential to introduce many new applications, such as user-captured 3D warehouses and search engines, robots with 3D sensing capability, and customized 3D printing. Nevertheless, the low-cost sensors used by general consumers also pose new technological challenges. First, they have relatively high levels of sensor noise. Second, such consumer devices are often used in uncontrolled settings, resulting in challenging conditions such as poor lighting, cluttered scenes, and object occlusion. To address these emerging opportunities and associated challenges, this dissertation is dedicated to the development of novel algorithms and systems for 3D data understanding and processing, using input from a consumer-level 3D sensor. In particular, the key problems of 3D shape retrieval, scene understanding, and pose recognition are explored in order to present comprehensive coverage of the key aspects of content-based 3D shape analysis. To resolve the aforementioned challenges, we propose a flexible Markov Random Field (MRF) framework that uses local information to allow partial matching, and thus addresses the model incompleteness problem; the framework also uses higher-order correlations to provide additional robustness against sensor noise. With the MRF framework, these 3D analysis problems can be transformed into a unified potential-energy minimization problem, while preserving the flexibility to adapt to different settings and resolve the unique challenges of each problem. The contributions of the dissertation include: a. 
Cross-Domain 3D Retrieval: First, we tackle the problem of searching 3D noise-free models using noisy data captured by low-cost 3D sensors – a unique cross-domain setting. To manage the challenges of sensor noise and model incompleteness from consumer-level sensors, we propose a novel MRF formulation for the retrieval problem. The potential function of the random field is designed to capture both local shape and global spatial consistency in order to preserve the local matching capability while offering robustness against sensor noise. The specific form of the potential functions is determined efficiently by a series of weak classifiers, thus forming a variant of the Regression Tree Field (RTF). We achieve better retrieval precision and recall in the cross-domain setting with a consumer-level depth sensor than state-of-the-art approaches. b. 3D Scene Understanding: We develop a scene understanding system based on input from consumer-level depth sensors. To resolve the key challenge of the lack of annotated 3D training data, we construct an MRF that connects the input 3D point cloud and the associated 2D reference images from which the 3D point cloud is stitched. A series of weak classifiers is trained to obtain an approximate semantic segmentation result from the reference images. The potential function of the field is designed to integrate the results from the classifiers while taking advantage of 3D spatial consistency in order to output a comprehensive scene understanding result. We achieve comparable accuracy and much faster speed than state-of-the-art 3D scene understanding systems, with the difference that we do not require annotated 3D training data. c. Pose Recognition of Deformable Objects: We develop a method for enabling a robotic system to recognize the pose of, and manipulate, deformable objects. 
More specifically, garment pose is recognized with the help of an offline simulated database and the proposed retrieval approach. We use a novel binary feature representation extracted from the reconstructed 3D surfaces to allow efficient matching, thus achieving real-time performance. A spatial weight is further learned in order to integrate the local matching results. The system shows superior recognition accuracy and faster speed than state-of-the-art approaches. d. Application to 2D Data: In addition to the traditional 3D applications, we explore the possibility of extending the MRF formulation to 2D data, especially data used in classical low-level 2D vision problems such as image deblurring and denoising. One well-known technique that uses an image prior, the probabilistic patch-based prior, is known to have a bottleneck in finding the most similar model from a model set, which can be posed as a retrieval problem. Therefore, we apply the MRF formulation originally developed for 3D shape retrieval and extend it to this 2D problem by introducing a grid-like random field structure. We achieve 40x acceleration compared with the state-of-the-art algorithm while preserving quality. We organize the dissertation as follows. First, the core problems of 3D shape retrieval, scene understanding, and pose recognition, together with the proposed MRF- and RTF-based solutions, are explored in Part I. In Part II, the extension to 2D data is discussed. Extensive evaluation is performed for each specific task in order to compare the proposed approaches with state-of-the-art algorithms and systems, and to justify the components of the proposed methods. Finally, in Part III, we include concluding remarks and a discussion of open issues and future work.Computer science, Roboticsyw2383Electrical EngineeringDissertationsPhotonic Interconnection Networks for Applications in Heterogeneous Utility Computing Systems
http://academiccommons.columbia.edu/catalog/ac:187809
Chen, Cathyhttp://dx.doi.org/10.7916/D82806PVTue, 12 May 2015 15:55:11 +0000Growing demands on heterogeneous utility computing in future cloud and high-performance computing systems are driving the development of processor-hardware accelerator interconnects with greater performance, flexibility, and dynamism. Recent innovations in the field of utility computing have led to the emergence of heterogeneous compute elements. By leveraging the computing advantages of hardware accelerators alongside typical general-purpose processors, performance efficiency can be maximized. The network linking these compute nodes is increasingly becoming the bottleneck in these architectures, restricting hardware accelerators to localized computing.
A high-bandwidth, agile interconnect is an imperative enabler for hardware accelerator delocalization in heterogeneous utility computing. A redesign of these systems' interconnect and architecture will be essential to establishing high-bandwidth, low-latency, efficient, and dynamic heterogeneous systems that can meet the challenges of next-generation utility computing.
By leveraging an optics-based approach, this dissertation presents the design and implementation of optically-connected hardware accelerators (OCHA) that exploit the distance-independent energy dissipation and bandwidth density of photonic transceivers, in combination with the flexibility, efficiency and data parallelization offered by optical networks. By replacing the electronic buses with an optical interconnection network, architectures that delocalize hardware accelerators can be created that are otherwise infeasible.
With delocalized optically-connected hardware accelerator nodes accessible by processors at run time, the system can alleviate the network latency issues that plague current heterogeneous systems. Accelerators that would otherwise sit idle, waiting for their master CPUs to feed them data, can instead operate at high utilization rates, leading to dramatic improvements in overall system performance.
This work presents a prototype optically-connected hardware accelerator module and a custom optical-network-aware, dynamic hardware accelerator allocator that communicate transparently and optically across an optical interconnection network. The hardware accelerators and processor are optimized to enable hardware acceleration across an optical network using fast packet switching. The versatility of the optical network enables additional performance benefits, including optical multicasting to exploit the data parallelism found in many accelerated data sets. The integration of hardware acceleration, heterogeneous computing, and optics constitutes a critical step for both computing and optics.
The massive data parallelism, application-dependent location and function, and the network latency and bandwidth limitations facing networks today align well with the strengths of optical communications-based systems. Moreover, ongoing efforts focused on the development of low-cost optical components and subsystems suitable for computing environments may benefit from the high-volume heterogeneous computing market. This work therefore takes the first steps in merging the areas of hardware acceleration and optics by developing architectures, protocols, and systems to interface the two technologies, and by demonstrating areas of potential benefit and directions for future work. Next-generation heterogeneous utility computing systems will indubitably benefit from the use of efficient, flexible, and high-performance optically connected hardware acceleration.Electrical engineeringElectrical EngineeringDissertationsEnergy Efficient, Cross-Layer Enabled, Dynamic Aggregation Networks for Next Generation Internet
http://academiccommons.columbia.edu/catalog/ac:186470
Wang, Michaelhttp://dx.doi.org/10.7916/D8416W13Fri, 24 Apr 2015 18:31:34 +0000Today, Internet traffic is growing at a near-exponential rate, driven predominantly by data-center-based applications and Internet-of-Things services. This fast-paced growth in Internet traffic calls into question the ability of the existing optical network infrastructure to support continued growth. The overall efficiency of optical networking equipment has not been able to keep up with the traffic growth, creating an energy gap that makes energy and cost expenditures scale linearly with traffic growth. The implication of this energy gap is that it is infeasible to continue using existing networking equipment to meet the growing bandwidth demand. A redesign of the optical networking platform is needed.
The focus of this dissertation is the design and implementation of energy-efficient, cross-layer enabled, dynamic optical networking platforms, a promising approach to addressing the exponentially growing Internet bandwidth demand. Chapter 1 explains the motivation for this work by detailing the huge Internet traffic growth and the unsustainable energy growth of today's networking equipment. Chapter 2 describes the challenges and objectives of enabling agile, dynamic optical networking platforms and the vision of the Center for Integrated Access Networks (CIAN) to realize these objectives; the research objectives of this dissertation and the large body of related work in this field are also summarized.
Chapter 3 details the design and implementation of dynamic networking platforms that support wavelength switching granularity. The main contribution of this work involves the experimental validation of deep cross-layer communication across the optical performance monitoring (OPM), data, and control planes. The first experiment shows QoS-aware video streaming over a metro-scale test-bed through optical power monitoring of the transmission wavelength and cross-layer feedback control of the power level. The second experiment extends the performance monitoring capabilities to include real-time monitoring of OSNR and polarization mode dispersion (PMD) to enable dynamic wavelength switching and selective restoration.
Chapter 4 explains the author's contributions to designing dynamic networking at the sub-wavelength switching granularity, which can provide greater network efficiency due to its finer granularity. To support dynamic switching, regeneration, adding/dropping, and control decisions on each individual packet, the cross-layer enabled node architecture is enhanced with an FPGA controller that brings much more precise timing and control to the switching, OPM, and control planes. Furthermore, QoS-aware packet protection and dynamic switching, dropping, and regeneration functionalities were experimentally demonstrated in a multi-node network.
Chapter 5 describes a technique to perform optical grooming, a process of optically combining multiple incoming data streams into a single data stream, which can simultaneously achieve greater bandwidth utilization and increased spectral efficiency. In addition, an experimental demonstration highlighting a fully functioning multi-node, agile optical networking platform is detailed. Finally, a summary and discussion of future work is provided in Chapter 6. The future of the Internet is very exciting, filled with not-yet-invented applications and services driven by cloud computing and Internet-of-Things. The author is cautiously optimistic that agile, dynamically reconfigurable optical networking is the solution to realizing this future.Electrical engineering, OpticsElectrical EngineeringDissertationsPhotonic Switches and Networks for High-Performance Computing and Data Centers
http://academiccommons.columbia.edu/catalog/ac:186512
Wang, Howardhttp://dx.doi.org/10.7916/D8PC31B2Fri, 17 Apr 2015 18:17:07 +0000The accelerated growth in performance of microprocessors and the emergence of chip multiprocessors, which are now widely leveraged in current data centers and high-performance computing (HPC) systems, have motivated the need for novel interconnection network solutions to meet the growing need for data transmission across all levels of the infrastructure. This work posits that, given the unique characteristics of optics---advantages and limitations---purpose-driven systems-level designs are necessary in order to harness the tremendous performance and efficiency opportunities that can be enabled by photonic interconnects. First, an enhanced optically connected network architecture is presented featuring advanced photonic functionalities to support a wider class of bandwidth-intensive traffic patterns characteristic of cloud computing systems. This proposed architectural framework can enable a rich set of photonic resources to be allocated on demand to optimize communications between various applications within the data center. A prototype of the proposed optical network architecture is constructed, and a demonstration of two unique functionalities, serving to validate the physical-layer feasibility of the system, is presented. An instantiation of this architectural framework is presented that enables physical-layer data duplication in order to more effectively support reliable group data delivery in the data center. Compared to conventional solutions that duplicate data in the network or application layer, this architecture achieves efficient data transmission over ultra-fast, loss-free, energy-efficient, and low-cost optical paths, with simplified flow control, congestion control, and group membership management. Both an end-to-end hardware experiment and large-scale simulations were carried out to evaluate the efficacy of the design. 
Next, the challenges associated with interfacing to photonically-switched networks are explored. In particular, various interface designs aimed at addressing the unique challenges imposed by optical packet-switched networks are proposed and evaluated. First, an overview of the data vortex optical packet switch architecture is given. A high-speed optical packet formatter and interface is then presented, along with the results of end-to-end data exchanges across the interface connected to a data vortex network. Next, the design of a low-power all-optical interface alternative is validated with an end-to-end demonstration. Finally, various unique photonic switching node designs are introduced for a variety of applications: a nanosecond-scale bidirectional 2 × 2 switch to construct efficient optical fat-tree architectures, a 4 × 4 switch capable of operating as both a nanosecond-scale optical packet switch and as an optical circuit switch, and a non-blocking 4 × 4 switch designed for constructing on-chip photonic integrated networks.Electrical engineeringhw2047Electrical EngineeringDissertationsCross-layer resource allocation in wireless and optical networks
http://academiccommons.columbia.edu/catalog/ac:186473
Birand, Berkhttp://dx.doi.org/10.7916/D8VQ31K6Mon, 06 Apr 2015 15:19:46 +0000The success of the Internet can be largely attributed to its modular architecture and its high level of abstraction. As a result, the Internet is an extremely heterogeneous network in which a multitude of wireless, electronic, and optical devices coexist. Yet, wireless and optical technologies are approaching their capacity limits. In this thesis, we study cross-layer and cross-domain optimizations in wireless and optical networks to improve the scalability of heterogeneous networks. Specifically, we investigate the benefits in capacity improvement and energy efficiency of improved interaction between different layers, as well as different domains.
First, we use the Local Pooling (LoP) conditions to identify all the network graphs under primary interference constraints in which Greedy Maximal Scheduling (GMS) achieves 100% throughput. In addition, we show that in all bipartite graphs of size up to 7 × n, GMS is guaranteed to achieve 66% throughput. Finally, we study the performance of GMS in interference graphs and show that it can perform arbitrarily badly.
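As a hedged illustration of GMS under primary interference (a generic sketch of the scheduling rule, not the thesis's analysis): the scheduler repeatedly activates the link with the longest queue and discards every link sharing a node with it, so the chosen links form a maximal matching in the network graph.

```python
def greedy_maximal_schedule(links, weights):
    """Greedy Maximal Scheduling under primary interference: repeatedly
    pick the heaviest remaining link, then discard every link that shares
    a node with it. The result is a maximal matching in the network graph.
    `links` is a list of (u, v) edges; `weights` maps a link to its queue length."""
    schedule, used = [], set()
    for link in sorted(links, key=weights.get, reverse=True):
        u, v = link
        if u not in used and v not in used:
            schedule.append(link)
            used.update((u, v))
    return schedule

# 7-node path graph with hypothetical queue lengths.
links = [(i, i + 1) for i in range(6)]
weights = {(0, 1): 5, (1, 2): 9, (2, 3): 1, (3, 4): 8, (4, 5): 2, (5, 6): 7}
print(greedy_maximal_schedule(links, weights))  # → [(1, 2), (3, 4), (5, 6)]
```

Whether this low-complexity rule attains the full capacity region depends on the graph, which is exactly what the LoP conditions above characterize.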
We study the properties of evolving graphs of networks whose structure changes due to node mobility. We present several graph metrics that quantify change in an evolving graph sequence and apply these metrics to several sources of mobility. We relate our results on the effect of the rate of graph change to the performance of higher-layer network algorithms in dynamic networks.
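One simple example of a metric quantifying change between consecutive snapshots of an evolving graph (an illustrative choice for concreteness; the metrics studied in the thesis may differ) is the fraction of edges added or removed between snapshots:

```python
def edge_churn(g_prev, g_next):
    """Jaccard distance between the edge sets of two consecutive graph
    snapshots: |symmetric difference| / |union|. 0 means no change,
    1 means the two snapshots share no edges."""
    union = g_prev | g_next
    if not union:
        return 0.0
    return len(g_prev ^ g_next) / len(union)

# Two snapshots of a mobile network (edges stored as frozensets of node pairs).
g1 = {frozenset(e) for e in [(1, 2), (2, 3), (3, 4)]}
g2 = {frozenset(e) for e in [(1, 2), (3, 4), (4, 5)]}
print(edge_churn(g1, g2))  # → 0.5
```

A higher-layer algorithm whose state depends on the topology (e.g., a routing table) must rebuild roughly in proportion to such a churn rate, which is the kind of relationship the results above quantify.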
We then consider optical networks, and formulate a global optimization problem that captures the QoT constraints in future dynamic optical networks. We design a power control algorithm for solving this problem by using feedback from Optical Performance Monitors (OPMs). We evaluate this algorithm via extensive simulations on a network-scale optical network simulator, as well as experiments with commercial optical network equipment.
Finally, we consider a cellular network with Coordinated Multi-Point (CoMP) Joint Transmission (JT) capabilities that allow multiple base stations (BSs) to transmit simultaneously to a single user. We formulate the OFDMA Joint Scheduling (OJS) problem of determining a subframe schedule and deciding whether to use JT, and we prove hardness results for this problem. Based on a decomposition framework, we develop efficient scheduling algorithms for bipartite and series-parallel planar graphs, and approximation algorithms for general graphs. We then consider a queueing model that evolves over time, and prove that solving the OJS problem with a specific queue-based utility function (in every subframe) achieves maximum throughput in CoMP-enabled networks.Computer science, Information technology, Electrical engineeringbb2408Electrical EngineeringDissertationsIntegrated circuit-based electrochemical sensor for spatially resolved detection of redox-active metabolites in biofilms
http://academiccommons.columbia.edu/catalog/ac:185400
Bellin, Daniel L.; Sakhtah, Hassan; Thimot, Jordan; Emmett, Kevin; Dietrich, Lars; Shepard, Kenneth L.http://dx.doi.org/10.7916/D8C24VBGTue, 31 Mar 2015 00:00:00 +0000Despite advances in monitoring spatiotemporal expression patterns of genes and proteins with fluorescent probes, direct detection of metabolites and small molecules remains challenging. A technique for spatially resolved detection of small molecules would benefit the study of redox-active metabolites that are produced by microbial biofilms and can affect their development. Here we present an integrated circuit-based electrochemical sensing platform featuring an array of working electrodes and parallel potentiostat channels. ‘Images’ over a 3.25 × 0.9 mm² area can be captured with a diffusion-limited spatial resolution of 750 µm. We demonstrate that square wave voltammetry can be used to detect, identify, and quantify (for concentrations as low as 2.6 µM) four distinct redox-active metabolites called phenazines. We characterize phenazine production in both wild-type and mutant Pseudomonas aeruginosa PA14 colony biofilms, and find correlations with fluorescent reporter imaging of phenazine biosynthetic gene expression.Molecular biology, Electrical engineering, Nanotechnology, Biophysicsdlb2149, hs2593, jat2176, kje2109, ld2444, kls30Physics, Electrical Engineering, Biological SciencesArticlesIdentification of gene interactions associated with disease from gene expression data using synergy networks
http://academiccommons.columbia.edu/catalog/ac:184938
Watkinson, John; Wang, Xiaodong; Zheng, Tian; Anastassiou, Dimitrishttp://dx.doi.org/10.7916/D81835DPFri, 27 Mar 2015 00:00:00 +0000Analysis of microarray data has been used for the inference of gene-gene interactions. If, however, the aim is the discovery of disease-related biological mechanisms, then the criterion for defining such interactions must be specifically linked to disease. Here we present a computational methodology that jointly analyzes two sets of microarray data, one in the presence and one in the absence of a disease, identifying gene pairs whose correlation with disease is due to cooperative, rather than independent, contributions of genes, using the recently developed information theoretic measure of synergy. High levels of synergy in gene pairs indicate possible membership of the two genes in a shared pathway and lead to a graphical representation of inferred gene-gene interactions associated with disease, in the form of a "synergy network." We apply this technique to a set of publicly available prostate cancer expression data and successfully validate our results, confirming that they cannot be due to pure chance and providing a biological explanation for gene pairs with exceptionally high synergy. Thus, synergy networks provide a computational methodology helpful for deriving "disease interactomes" from biological data. When coupled with additional biological knowledge, they can also be helpful for deciphering biological mechanisms responsible for disease.Genetics, Biostatisticsxw2008, tz33, da8Statistics, Electrical EngineeringArticlesModeling and Analysis of Graphene Resonant Channel Transistors for RF Filters
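The information-theoretic synergy measure used in the abstract above can be sketched directly from joint probability tables: the synergy of a gene pair with respect to a condition is the mutual information of the pair minus the sum of the individual mutual informations. The toy XOR-like distribution below is an illustration chosen for this sketch, not data or code from the article:

```python
import numpy as np

def mutual_information(joint):
    """I(X;Y) in bits from a 2-D joint probability table p(x, y)."""
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

def synergy(p):
    """Syn(G1, G2; C) = I((G1, G2); C) - I(G1; C) - I(G2; C).

    p: 3-D array p(g1, g2, c) with axes (gene 1, gene 2, condition)."""
    n1, n2, nc = p.shape
    i_pair = mutual_information(p.reshape(n1 * n2, nc))  # pair as one variable
    return i_pair - mutual_information(p.sum(axis=1)) - mutual_information(p.sum(axis=0))

# Toy XOR-like pair: neither gene alone predicts the condition, the pair does.
p = np.zeros((2, 2, 2))
for g1 in (0, 1):
    for g2 in (0, 1):
        p[g1, g2, g1 ^ g2] = 0.25
print(synergy(p))  # 1.0 bit: a purely cooperative association
```

High synergy in this sense is exactly the "cooperative rather than independent" contribution the abstract describes: each gene alone carries zero information about the condition, yet the pair determines it completely.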
http://academiccommons.columbia.edu/catalog/ac:184141
Lekas, Michaelhttp://dx.doi.org/10.7916/D8WM1C7CWed, 04 Mar 2015 00:00:00 +0000In recent years the proliferation of wireless devices such as tablets and smartphones has resulted in an unprecedented crowding of the radio spectrum around the world. The high density of radio signals being transmitted at any one time has necessitated the use of high-performance radio-frequency (RF) filters prior to any receiver signal path in order to protect the device against interference. State-of-the-art filter technologies based on piezoelectric resonators provide good rejection of interfering signals, but do not scale well to cover the large range of frequencies currently allocated for cellular communications. This thesis presents measurements and analysis of a new active resonator technology, known as a graphene resonant channel transistor (G-RCT), that has the potential to be used in micron-scale RF filters that are capable of covering these larger bandwidths. A compact model for G-RCTs is developed that accurately replicates the AC, DC, and frequency tuning characteristics of the device, enabling the design and simulation of hybrid electromechanical circuits. The device noise is also modeled, and analytical expressions for the noise figure of circuits using G-RCTs are derived. Finally, expressions for third-order intermodulation distortion are derived and validated with measurements.Electrical engineeringml3077Electrical EngineeringDissertationsRadiation Damage and Radiation-Based Device-Fabrication Techniques in LiNbO3
http://academiccommons.columbia.edu/catalog/ac:194981
Huang, Hsu-Chenghttp://dx.doi.org/10.7916/D86D5RTSTue, 24 Feb 2015 00:00:00 +0000The study of radiation effects in materials and devices has been of much interest for a broad variety of scientific reasons and applied purposes. Examples include the investigation of the robustness of devices in radiation environments such as interstellar space, and the use of irradiation in device-fabrication methods such as ion implantation. Thus, a fundamental understanding of the irradiation-induced change in materials properties is crucial for applied science. In this thesis, an investigation of radiation damage in complex oxides, especially LiNbO3 and related devices, was carried out. Different radiation sources were examined, including light and heavy ions, electrons, and gamma rays. A set of instruments was employed to probe the induced damage from different perspectives. These instruments include optical methods (optical microscopy and micro-Raman spectroscopy), ion beam analysis (Rutherford backscattering spectroscopy), and direct visualization (SEM, TEM and AFM). It was found that the degree of ion-matter interaction (stopping power) plays a major role in initiating damage. Different forms of damage, including surface deformation, material amorphization, defect clusters, and long-range strain fields, were observed. Probing the response of oxide crystals with these tools provides a comprehensive basis for better comprehension of radiation damage. The detailed studies and their findings are described in more detail in Chapters 4, 5, and 7. In addition to the materials characterization, the impact of radiation effects on one specific active device (a thin-film electro-optic modulator) was also studied. It was found that radiation dose and distribution both affect device performance. More details are in Chapter 6. Our knowledge of these radiation effects allows us to utilize and engineer this damage for the development of advanced radiation-enabled device fabrication techniques.
Both light- and heavy-ion irradiation were studied, and it was found that the nature of their different damage mechanisms leads to essentially different responses to wet chemical etching. Using radiation-enabled selective etching allowed us to explore new or improved techniques for fabricating sub-µm-thick LiNbO3 thin films and high-resolution patterning. Co-irradiation with both ion species was also investigated; this method enables straightforward patterning of freestanding thin films. The characteristics of these methods make them useful for the fabrication of current and future photonic integrated platforms. More details are in Chapters 3 and 8. In addition to the radiation damage studies, other materials-based fabrication methods were investigated - particularly those that were a consequence of our materials fabrication studies. For example, domain engineering of ferroelectric materials (poling) for nonlinear-optic applications was investigated, and it was found that domain broadening can be sufficiently suppressed using a thinned sample. More details are in Chapter 3.Electrical engineering, Materials Science, Opticshh2362Electrical EngineeringDissertationsSimultaneous Iterative Learning and Feedback Control Design
http://academiccommons.columbia.edu/catalog/ac:182998
Chinnan, Anil Philiphttp://dx.doi.org/10.7916/D8NC6000Tue, 10 Feb 2015 00:00:00 +0000Iterative learning controllers aim to produce high precision tracking in operations where the same tracking maneuver is repeated over and over again. Model-based iterative learning control laws are designed from the system Markov parameters, which could be inaccurate. Chapter 2 examines several important learning control laws and develops an understanding of how and when inaccuracy in knowledge of the Markov parameters results in instability of the learning process. While an iterative learning controller can compensate for unknown repeating errors and disturbances, it is not suited to handle non-repeating, stochastic errors and disturbances, which can be more effectively handled by a feedback controller. Chapter 3 explores feedback and iterative learning combination controllers, showing how a one-time-step-behind disturbance estimator and a one-repetition-behind disturbance estimator can be incorporated together in such a combination. Since learning control applications are finite-time by their very nature, frequency-response-based design techniques are not best suited for designing the feedback controller in this context. A finite-time feedback controller design approach is more appropriate given the overall aim of zero tracking error for the entire trajectory, even for shorter trajectories where the system response is still in its transient phase and has not yet reached steady state. Chapter 4 presents a combination of finite-time feedback and learning control as a natural solution for such a control objective, showing how a finite-time feedback controller and an iterative learning controller can be simultaneously synthesized during the learning process. Finally, Chapter 5 examines different configurations where a combination of a feedback controller and an iterative learning controller can be implemented.
Numerical results are used to illustrate the feedback and iterative controller designs developed in this thesis.Electrical engineering, Mechanical engineering, Roboticsapc2113Electrical EngineeringDissertationsChallenges and Solutions for High Performance Analog Circuits with Robust Operation in Low Power Digital CMOS
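The P-type learning law that such iterative learning controllers build on can be sketched for a toy plant described by its Markov parameters. The plant, learning gain, and desired trajectory below are hypothetical illustrations, not values from the thesis; the point is only the repetition-to-repetition update u_{k+1} = u_k + L·e_k:

```python
import numpy as np

# Lower-triangular Toeplitz matrix of Markov parameters, so that y = H @ u.
T = 20
h = 0.5 ** np.arange(T)              # hypothetical plant: h[0] must be nonzero
H = np.array([[h[t - i] if i <= t else 0.0 for i in range(T)] for t in range(T)])

y_des = np.sin(np.linspace(0.0, np.pi, T))   # the repeated tracking maneuver
u = np.zeros(T)
L = 0.5                              # learning gain; contraction needs |1 - L*h[0]| < 1

for k in range(200):
    e = y_des - H @ u                # tracking error on repetition k
    u = u + L * e                    # P-type update: u_{k+1} = u_k + L * e_k

print(np.max(np.abs(y_des - H @ u)))  # residual tracking error, near zero
```

If the Markov parameters used to pick L differ from the true plant's, the contraction condition can fail, which is the model-inaccuracy instability question the abstract's Chapter 2 studies.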
http://academiccommons.columbia.edu/catalog/ac:182964
Hsu, Chun-Weihttp://dx.doi.org/10.7916/D89C6W87Thu, 05 Feb 2015 00:00:00 +0000In modern System-on-Chip products, analog circuits need to co-exist with digital circuits integrated on the same chip. This brings on a lot of challenges since analog circuits need to maintain their performance while being subjected to disturbances from the digital circuits. Device size scaling is driven by digital applications to reduce size and improve performance but also results in the need to reduce the supply voltage. Moreover, in some applications, digital circuits require a changing supply voltage to adapt performance to workloads. So it is further desirable to develop design solutions for analog circuits that can operate with a flexible supply voltage, which can be reduced well below 1V. In this thesis challenges and solutions for key high performance analog circuit functions are explored and demonstrated that operate robustly in a digital environment, function with flexible supply voltages or have a digital-like operation. A combined phase detector consisting of a phase-frequency detector and sub-sampling phase detector is proposed for phase-locked loops (PLLs). The phase-frequency function offers robust operation and the sub-sampling detector leads to low in-band phase noise. A 2.2GHz PLL with a combined phase detector was prototyped in a 65nm CMOS process, with an on-chip loop filter area of only 0.04mm^2. The experimental results show that the PLL with the combined phase detector is more robust to disturbances than a sub-sampling PLL, while still achieving a measured in-band phase noise of -122dBc/Hz which is comparable to the excellent noise performance of a sub-sampling PLL. A pulse-controlled common-mode feedback (CMFB) circuit is proposed for a 0.6V-1.2V supply-scalable fully-differential amplifier that was implemented in a low power/leakage 65nm CMOS technology. An integrator built with the amplifier occupies an active area of 0.01mm^2. 
When the supply is changed from 0.6V to 1.2V, the measured frequency response changes are small, demonstrating the flexible supply operation of the differential amplifier with the pulse-controlled CMFB. Next, models are developed to study the performance scaling of a continuous-time sigma-delta modulator (SDM) with a varying supply voltage. It is demonstrated that the loop filter and the quantizer exhibit different supply dependence. The loop noise performance becomes better at a higher supply thanks to larger signal swings and better signal-to-noise ratio, while the figure of merit determined by the quantization noise gets better at a lower supply voltage, thanks to the quantizer power dissipation reduction. The theoretical models were verified with simulations of a 0.6V-1.2V 2MHz continuous-time SDM design in a 65nm CMOS low power/leakage process. Finally, two design techniques are introduced that leverage the continued improvement of digital circuit blocks for the realization of analog functions. A voltage-controlled-ring-oscillator-based amplifier with zero compensation is proposed that internally uses a phase-domain representation of the analog signal. This provides a huge DC gain without significant penalties on the unity-gain bandwidth or area. With this amplifier a 4th-order 40-MHz active-UGB-RC filter was implemented that offers a wide bandwidth, superior linearity and small area. The filter prototype in a 55nm CMOS process has an active area of 0.07mm^2 and a power consumption of 7.8mW at 1.2V. The in-band IIP3 and out-of-band IIP3 are measured as 27.3dBm and 22.5dBm, respectively. A digital in-situ biasing technique is proposed to overcome the design challenges of conventional analog biasing circuits in an advanced CMOS process. A digital CMFB was simulated in a 65nm CMOS technology to demonstrate the advantages of this digital biasing scheme. 
Using time-based successive approximation conversion, the digital CMFB provides the desired analog output with a more robust operation and a smaller area, but without needing any stability compensation schemes like in conventional analog CMFBs. In summary, analog design techniques are continuously evolving to adapt to the integration with digital circuits on the same chip and are increasingly using digital-like blocks to realize analog functions in highly-integrated SOC chips. The signal representation in analog circuits is moving from traditional electrical signals such as voltage or current, to time and phase-domain representations. These changes make analog circuits more robust to voltage disturbances and supply variations. In addition to improved robustness, analog circuits based on timing signals benefit from the faster and smaller transistors offered by the continued feature scaling in CMOS technologies.Electrical engineering, Engineeringch2730Electrical EngineeringDissertationsResource Allocation for Energy Harvesting Communications
http://academiccommons.columbia.edu/catalog/ac:194972
Wang, Zhehttp://dx.doi.org/10.7916/D87P8X60Tue, 03 Feb 2015 00:00:00 +0000With the rapid development of energy harvesting technologies, a new paradigm of wireless communications that employs energy harvesting transmitters has become a reality. The renewable energy source enables the flexible deployment of the transmitters and prolongs their lifetimes. To make the best use of the harvested energy, many challenging research issues arise from the new paradigm of communications. In particular, optimal resource (energy, bandwidth, etc.) allocation is key to the design of an efficient wireless system powered by renewable energy sources. In this thesis, we focus on several resource allocation problems for energy harvesting communications, including the energy allocation for a single energy harvesting transmitter, and the joint energy and spectral resource allocation for energy harvesting networks. More specifically, the resource allocation problems discussed in this thesis are summarized as follows. We solve the problem of designing an affordable optimal energy allocation strategy for the system of energy harvesting active networked tags (EnHANTs) that is adapted to the identification requests and the energy harvesting dynamics. We formulate a Markov decision process (MDP) problem to optimize the overall system performance, which takes into consideration both the system activity time and the communication reliability. To solve the problem, both a static exhaustive search method and a modified policy iteration algorithm are employed to obtain the optimal energy allocation policy. We develop an energy allocation algorithm to maximize the achievable rate for an access-controlled energy harvesting transmitter based on causal observations of the channel fading states. We formulate the stochastic optimization problem as a Markov decision process (MDP) with continuous states and define an approximate value function based on a piecewise linear fit in terms of the battery state.
We show that with the approximate value function, the update in each iteration consists of a group of convex problems with a continuous parameter and we derive the optimal solution to these convex problems in closed-form. Specifically, the computational complexity of the proposed algorithm is significantly lower than that of the standard discrete MDP method. We propose an efficient iterative algorithm to obtain the optimal energy-bandwidth allocation for multiple flat-fading point-to-point channels, maximizing the weighted sum-rate given the predictions of the energy and channel state. For the special case that each transmitter only communicates with one receiver and the objective is to maximize the total throughput, we develop efficient algorithms for optimally solving the subproblems involved in the iterative algorithm. Moreover, a heuristic algorithm is also proposed for energy-bandwidth allocation based on the causal energy and channel observations. We consider the energy-bandwidth allocation problem in multiple orthogonal and non-orthogonal flat-fading broadcast channels to maximize the weighted sum-rate given the predictions of energy and channel states. To efficiently obtain the optimal allocation, we extend the iterative algorithm originally proposed for multiple flat-fading point-to-point channels and further develop the optimal algorithms to solve the corresponding subproblems. For the orthogonal broadcast channel, the proportionally-fair (PF) throughput maximization problem is formulated and we derive the equivalence conditions such that the optimal solution can be obtained by solving a weighted throughput maximization problem. The algorithm to obtain the proper weights is also proposed. We consider the energy-subchannel allocation problem for energy harvesting networks in frequency-selective fading channels. 
We first assume that the harvested energy and subchannel gains can be predicted and propose an algorithm to efficiently obtain the energy-subchannel allocations for all links over the scheduling period based on controlled water-filling. The proposed algorithm is shown to be asymptotically optimal when the bandwidth of the subchannel goes to zero. A causal algorithm is also proposed based on the Q-learning technique that makes use of the statistics of the energy harvesting and channel fading processes.Electrical engineeringzw2168Electrical EngineeringDissertationsStudies on Mixed Slab-Toroidal Electron Temperature Gradient Mode Instabilities in the Columbia Linear Machine
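Classic water-filling, the building block that the controlled water-filling above generalizes, allocates power across parallel subchannels so that better channels get more power until a common "water level" is reached. The sketch below is a minimal illustration with assumed gains, noise level, and power budget, not the thesis's algorithm:

```python
import numpy as np

def water_filling(gains, power, noise=1.0):
    """Split a total `power` across parallel subchannels to maximize
    sum_i log(1 + gains[i] * p[i] / noise): classic water-filling."""
    floors = noise / np.asarray(gains, dtype=float)   # inverse channel quality
    lo, hi = floors.min(), floors.max() + power
    for _ in range(100):                              # bisect on the water level mu
        mu = 0.5 * (lo + hi)
        if np.maximum(mu - floors, 0.0).sum() > power:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.5 * (lo + hi) - floors, 0.0)

alloc = water_filling([2.0, 1.0, 0.2], power=3.0)
print(alloc)  # strongest subchannel gets the most power; weakest may get none
```

With gains [2.0, 1.0, 0.2] and a budget of 3, the water level settles where the weakest subchannel receives nothing, illustrating why allocations must adapt as harvested energy and channel states change over the scheduling period.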
http://academiccommons.columbia.edu/catalog/ac:181490
Balbaky, Abed Raoufhttp://dx.doi.org/10.7916/D89S1PSQFri, 19 Dec 2014 00:00:00 +0000This thesis investigates the behavior of electron temperature gradient (ETG) driven instabilities in the Columbia Linear Machine (CLM). Building on prior work in CLM, the primary goal of this research is to produce, identify, and illuminate the basic physics of these instabilities, and explore the behavior of these instabilities in the presence of trapping and curved magnetic field lines. The first part of this thesis is focused on studying the saturated ETG mode, and the general behavior of the mode under varying levels of magnetic curvature. Measuring ETG modes can be problematic since they have large real frequencies, fast growth rates (~MHz) and small spatial scales, but carefully designed probe diagnostics can overcome these limits. In order to produce curved magnetic field lines, we modified CLM to operate with an internal movable mirror coil. We determined the temperature and density profiles under varying curvature, and measured changes in the mode structure and frequency. We found small changes in the azimuthal/poloidal structure and frequency, characterized by an increase in the m-number (m_slab ~ 10-13 and Δm ~ 1), along with small changes in the axial/toroidal structure (k_ǁ,curvature < k_ǁ,slab) and frequency (ω_curvature < ω_slab). We also present one of the first experimental scalings of ETG mode amplitude as a function of magnetic curvature. Our key finding was that the overall level of saturated ETG mode amplitude showed a modest increase (~1.5x), which is slightly larger than existing theory and simulations would predict, and that the power density and amplitude of individual mode peaks can increase more dramatically (~2-3x in amplitude). The second part of this thesis studies the radial transport for saturated ETG modes in CLM.
ETG modes are believed to be a significant source of anomalous electron energy transport in plasmas, and a better understanding of these modes and the transport they drive across magnetic field lines is of particular interest for advanced tokamaks and future fusion reactors, where there is a continued push for energy efficiency. A specially designed triple probe has been developed, which can measure fluctuations in temperature and potential simultaneously, with the high frequency and spatial resolution suitable for ETG studies. We present an experimental scaling of radial transport as a function of magnetic field curvature, again one of the first of its kind. Our findings indicate a modest increase in radial transport (~2x) with increased curvature, but unlike saturated mode amplitudes, we find that radial transport saturates for higher levels of curvature in CLM.Plasma physicsarb2128Electrical EngineeringDissertationsDefect Mediated Sub-Bandgap Optical Absorption in Ion-Implanted Silicon Nano-Wire Waveguide Photodetectors
http://academiccommons.columbia.edu/catalog/ac:180901
Souhan, Brianhttp://dx.doi.org/10.7916/D8WW7GB0Fri, 21 Nov 2014 00:00:00 +0000Silicon has numerous benefits as a photonic integrated circuit platform, including optical transparency from 1.1 µm to greater than 5 µm, tight optical confinement due to its high index of refraction, high third order non-linearity, and lack of two photon absorption at wavelengths above 2.2 µm. Additionally, silicon photonics has the added benefit of decades of fabrication knowledge from the CMOS industry. Despite these advantages, enormous challenges exist in two areas: optical sources and optical detectors for silicon photonic integrated circuits. The same bandgap energy that leads to optical transparency at telecom and mid-infrared wavelengths limits both generation and detection in this same regime. This dissertation focuses on the detection problem, exploring the use of defect-mediated sub-bandgap optical absorption in ion-implanted silicon nano-wire waveguides. Section I of this dissertation focuses on fabrication and the ion-implantation process, including a primer on Shockley-Read-Hall theory and its application to defect-mediated sub-bandgap optical absorption. Section II examines the devices for use at telecom wavelengths. In Chapter 4, the fabrication and characterization of metal-semiconductor-metal ion-implanted silicon nano-wire waveguide photodiodes is examined. These devices require minimal fabrication, are compatible with standard CMOS fabrication processes, and exhibited responsivities as high as 0.51 A/W and frequency responses greater than 2.6 GHz. With improved fabrication tolerances, frequency responses of greater than 10 GHz are expected. Chapter 5 examines these ion-implanted photodiodes in a p-i-n configuration as a high speed data interconnect, demonstrating error-free operation at 10 Gb/s with expected sensitivities approaching those of Ge detectors.
Section III extends the above research to longer wavelengths, starting with data reception at 1.9 µm in Chapter 6, exhibiting an approximate 5 dB penalty in sensitivity compared to the same diodes at 1.55 µm, at a data rate of 1 Gb/s, limited by RC due to the 2 mm length of the device. Chapter 7 goes even further, characterizing Si+ implanted silicon nano-wire waveguides for operation between 2.2 µm and 2.35 µm. These devices showed responsivities as high as 9.9 mA/W, with internal quantum efficiencies approaching 5%. Chapter 8 concludes with the characterization of Zn+ implanted silicon nano-wire waveguides operating in the same wavelength regime, exhibiting higher overall responsivity, albeit at a much higher reverse bias. These long wavelength devices open up new areas of research for silicon photonics, allowing for CMOS compatible detectors operating into the mid-infrared region, useful for chemical sensing, free-space communications, and medical imaging.Electrical engineeringbs2695Electrical EngineeringDissertationsHelium-ion-induced radiation damage in LiNbO₃ thin-film electro-optic modulators
http://academiccommons.columbia.edu/catalog/ac:179745
Huang, Hsu-Cheng; Dadap Jr., Jerry I.; Malladi, Girish; Kymissis, Ioannis; Bakhru, Hassaram; Osgood Jr., Richard M.http://dx.doi.org/10.7916/D84Q7SPNWed, 19 Nov 2014 00:00:00 +0000Helium-ion-induced radiation damage in a LiNbO₃-thin-film (10 μm-thick) modulator is experimentally investigated. The results demonstrate a degradation of the device performance in the presence of He⁺ irradiation at doses of ≥ 10¹⁶ cm⁻². The experiments also show that the presence of the He⁺ stopping region, which determines the degree of overlap between the ion-damaged region and the guided optical mode, plays a major role in determining the degree of degradation in modulation performance. Our measurements showed that the higher overlap can lead to an additional ~5.5 dB propagation loss. An irradiation-induced change of crystal-film anisotropy (nₒ−nₑ) of ~36% was observed for the highest dose used in the experiments. The relevant device extinction ratio, VπL, and device insertion loss, as well as the damage mechanisms of each of these parameters, are also reported and discussed.Electrical engineering, Physicshh2362, jid5, ik2174, rmo1Applied Physics and Applied Mathematics, Electrical EngineeringArticlesProbing the response of 2D crystals by optical spectroscopy
http://academiccommons.columbia.edu/catalog/ac:178240
Li, Yileihttp://dx.doi.org/10.7916/D8319TGXWed, 08 Oct 2014 00:00:00 +0000Atomically thin two-dimensional crystals form a distinct and growing class of new materials. The electromagnetic response of a two-dimensional crystal provides direct access to its electronic properties. This thesis presents a series of experimental studies of the electromagnetic response of model two-dimensional crystals as probed by optical spectroscopy. Our aim is to obtain an understanding of their intrinsic linear and nonlinear response and the many-body interactions in these materials, as well as to explore the potential to use the two-dimensional materials for sensing applications. In the two studies of graphene, we either removed contamination from the environment to reveal the intrinsic response or intentionally applied adsorbates to investigate how the electrons interact with the extrinsic molecules. In the first study, we obtained ultra-clean graphene using hexagonal boron nitride as the substrate, which allowed us to use Raman spectroscopy to probe the intrinsic electron-phonon and electron-electron interactions, free from substrate-induced sample inhomogeneity. In a second study, we demonstrate a strong near-field electromagnetic interaction of graphene plasmons with the vibrations of adsorbed molecules. Our results reveal the potential of graphene for molecular sensing. In our investigations of the monolayer transition metal dichalcogenides, we performed measurements of the linear and the second-order nonlinear dielectric response. From the linear dielectric response, we demonstrate strong light-matter interactions even for a single layer of these materials. Several trends in the excitonic properties of this group of materials were obtained from the measured dielectric function.
In the nonlinear optical study, we observed a large enhancement of the second-harmonic signal from monolayers as compared to the bulk sample, a consequence of the breaking of the inversion symmetry present in the bulk. In addition to the results for monolayers, we describe the behavior of few-layer materials, where the symmetry properties change layer by layer. For monolayers (and samples of odd layer thickness with broken inversion symmetry), the strong and anisotropic second-harmonic response provides a simple optical probe of crystallographic orientation. In the magneto-optic study of transition metal dichalcogenide monolayers, we demonstrate the induction of valley splitting and polarization by the application of an external magnetic field. The interaction of the valleys with the magnetic field reflects their non-zero magnetic moments, which are compared to theoretical models. We further clarify the electronic configuration of the charged excitons and important many-body corrections to the trion binding energy through the control of valley polarization achieved by the external magnetic field.Condensed matter physics, Opticsyl2673Physics, Electrical EngineeringDissertationsControl Systems for Silicon Photonic Microring Devices
http://academiccommons.columbia.edu/catalog/ac:189313
Padmaraju, Kishorehttp://dx.doi.org/10.7916/D8P849F2Wed, 01 Oct 2014 00:00:00 +0000The continuing growth of microelectronics in speed, scale, and complexity has led to a looming bandwidth bottleneck for traditional electronic interconnects. This has precipitated the penetration of optical interconnects to smaller, more localized scales, in such applications as data centers, supercomputers, and access networks. For this next generation of optical interconnects, the silicon photonic platform has received wide attention for its ability to manifest more economical, high-performance photonics. The high index contrast and CMOS compatibility of the silicon platform give the potential to intimately integrate small footprint, power-efficient, high-bandwidth photonic interconnects with existing high-performance CMOS microelectronics. Within the silicon photonic platform, traditional photonic elements can be manifested with smaller footprint and higher energy-efficiency. Additionally, the high index contrast allows the successful implementation of silicon microring-based devices, which push the limits on achievable footprint and energy-efficiency metrics. While laboratory demonstrations have testified to their capabilities as powerful modulators, switches, and filters, the commercial implementation of microring-based devices is impeded by their susceptibility to fabrication tolerances and their inherent temperature sensitivity. This work develops and demonstrates methods to resolve the aforementioned sensitivities of microring-based devices: specifically, the use of integrated heaters to thermally tune and lock microring resonators to laser wavelengths, and the underlying control systems that enable such functionality. The first developed method utilizes power monitoring to show the successful thermal stabilization of a microring modulator under conditions that would normally render it inoperable.
In a later demonstration, the photodetector used for power monitoring is co-integrated with the microring modulator, again demonstrating thermal stabilization of a microring modulator and validating the use of defect-enhanced silicon photodiodes for on-chip control systems. Secondly, a generalized method is developed that uses dithering signals to generate anti-symmetric error signals for use in stabilizing microring resonators. A control system utilizing a dithering signal is shown to successfully wavelength lock and thermally stabilize a microring resonator. Characterizations are performed on the robustness and speed of the wavelength locking process when using dithering signals. An FPGA implementation of the control system is used to scale to a WDM microring demultiplexer, demonstrating the simultaneous wavelength locking of multiple microring resonators. Additionally, the dithering technique is adopted to create control systems for microring-based switches, which have traditionally posed a challenging problem due to their multi-state configurations. The aforementioned control systems are rigorously tested for applications with high speed data and analyzed for power efficiency and scalability to show that they can successfully scale to commercial implementations and be the enabling factor in the commercial deployment of microring-based devices.Electrical engineering, Opticskp2362Electrical EngineeringDissertationsElucidating the sequence and structural specificities of DNA-binding factors
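The dithering idea described above can be illustrated with a toy model: perturb the heater setting back and forth by a small amount, demodulate the monitored through-port power into an anti-symmetric error signal proportional to the local slope, and integrate that error to hold the ring at resonance. The Lorentzian line shape, dither amplitude, and loop gain below are assumed for illustration only, not parameters from the thesis:

```python
def through_port(detune):
    """Through-port power near a microring resonance: a Lorentzian dip
    (toy model; detune is in units of the resonance half-width)."""
    return 1.0 - 1.0 / (1.0 + detune ** 2)

# Hypothetical numbers: initial thermal offset, dither amplitude, loop gain.
offset, dither, gain = 1.5, 0.05, 0.5
for _ in range(500):
    # Dither the heater and demodulate: the difference of the two monitored
    # power readings is an anti-symmetric error signal (~ the local slope).
    err = through_port(offset + dither) - through_port(offset - dither)
    offset -= gain * err             # integrate the error toward the dip
print(abs(offset))                   # near zero: locked onto the resonance
```

Because the error signal changes sign on either side of the dip, the loop knows which direction to correct, which is what plain power monitoring at an extremum cannot provide on its own.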
http://academiccommons.columbia.edu/catalog/ac:178182
Lazarovici, Allan. http://dx.doi.org/10.7916/D8KP80R3. Tue, 30 Sep 2014 00:00:00 +0000. Characterizing the binding preferences of transcription factors is a major objective in molecular biology. Important processes such as development and responses to environmental stresses are regulated by the interactions between transcription factors and DNA. In this thesis, we address three key issues in the analysis of protein-DNA interactions. First, we demonstrate how transcription factor binding motifs can be inferred from ChIP-seq data by integrating a peak-calling algorithm and a biophysical model of transcription factor specificity. Next, we show that high-resolution DNase I cleavage profiles can provide detailed information about the role that DNA shape plays in protein-DNA recognition. Our analysis reveals the interplay between DNA sequence, methylation status, DNA geometry, and DNase I cleavage. Finally, we construct a model of transcription factor-DNA interaction that allows multiple transcription factors to bind cooperatively and competitively. In addition, the model can infer transcription factor concentration. As the binding preferences of transcription factors continue to be characterized with a high degree of precision, we anticipate that the use of these more realistic models will become more prevalent. Biophysics; Cellular biology. al2394. Electrical Engineering; Biological Sciences. Dissertations.

Metal-semiconductor-metal ion-implanted Si waveguide photodetectors for C-band operation
http://academiccommons.columbia.edu/catalog/ac:177609
Souhan, Brian; Grote, Richard R.; Driscoll, Jeffrey B.; Lu, Ming; Stein, Aaron; Bakhru, Hassaram; Osgood, Jr., Richard M. http://dx.doi.org/10.7916/D8542M4Z. Tue, 23 Sep 2014 00:00:00 +0000. Metal-semiconductor-metal Si waveguide photodetectors are demonstrated with responsivities of greater than 0.5 A/W at a wavelength of 1550 nm for a device length of 1 mm. Sub-bandgap absorption in the Si waveguide is achieved by creating divacancy lattice defects via Si+ ion implantation. The modal absorption coefficient of the ion-implanted Si waveguide is measured to be ≈185 dB/cm, resulting in a detector responsivity of ≈0.51 A/W at a 50 V bias. The frequency response of a typical 1 mm long detector is measured to be 2.6 GHz, with simulations showing that a frequency response of 9.8 GHz is achievable with an optimized contact configuration and a bias voltage of 15 V. Due to the ease with which these devices can be fabricated, and their potential for high performance, these detectors are suitable for various applications in Si-based photonic integrated circuits. Electrical engineering. bs2695, rmo1. Electrical Engineering. Articles.

Gene Regulatory Network Reconstruction Using Conditional Mutual Information
http://academiccommons.columbia.edu/catalog/ac:197809
Liang, Kuo-Ching; Wang, Xiaodong. http://dx.doi.org/10.7916/D85B00ZK. Tue, 09 Sep 2014 16:33:38 +0000.

1. Introduction: The prediction of the functions of genes and the elucidation of gene regulatory mechanisms have been important topics of genomic research. The advances in microarray technology over the past decade have provided a wealth of information by allowing us to observe the expression levels of thousands of genes at once. With the increasing availability of gene expression data, the development of tools that can more accurately predict gene-to-gene interactions and uncover more complex interactions between genes has become an intense area of research.
1.1. Background
Gene Clustering Algorithms
Some of the first attempts at determining gene regulation were based on gene expression clustering algorithms. These algorithms identify genes that are likely to be coregulated by grouping genes that exhibit similar expression profiles under the same conditions. Different clustering algorithms differ in the metric used to measure similarity between gene expressions, and in how the metric is used to group similarly expressed genes into clusters [1]. In [2], a hierarchical clustering algorithm using a correlation coefficient metric is proposed. The K-means algorithm has also been applied to partition genes into different clusters [3]. Other clustering algorithms such as the self-organizing map (SOM) [4], mutual-information-based algorithms [5, 6], and graph-theory-based algorithms [7] have also been proposed.
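As an illustration of the correlation-based clustering idea, the following sketch (with hypothetical data and threshold; not the specific algorithms of [1–7]) groups genes whose pairwise Pearson correlation exceeds a cutoff:

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient between two expression vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def correlation_clusters(profiles, threshold=0.9):
    """Group genes whose pairwise correlation exceeds `threshold`
    (single-linkage style grouping via union-find)."""
    genes = list(profiles)
    parent = {g: g for g in genes}
    def find(g):
        while parent[g] != g:
            parent[g] = parent[parent[g]]
            g = parent[g]
        return g
    for i, gi in enumerate(genes):
        for gj in genes[i + 1:]:
            if pearson(profiles[gi], profiles[gj]) > threshold:
                parent[find(gi)] = find(gj)
    clusters = {}
    for g in genes:
        clusters.setdefault(find(g), set()).add(g)
    return list(clusters.values())

# Toy expression profiles over 6 conditions (hypothetical data):
profiles = {
    "g1": [1.0, 2.0, 3.0, 4.0, 5.0, 6.0],
    "g2": [2.1, 4.0, 6.2, 8.1, 9.9, 12.0],   # tracks g1
    "g3": [6.0, 5.0, 4.0, 3.0, 2.0, 1.0],    # anti-correlated with g1
}
print(correlation_clusters(profiles))
```

With these profiles, g1 and g2 end up in one cluster and g3 in its own, since only the g1-g2 correlation exceeds the cutoff.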
Graphical Algorithms
While gene clustering algorithms allow us to discover genes that are coregulated, they do not reveal much of the underlying biological mechanism, such as the regulatory pathways. In recent years, many models have been proposed that attempt to understand how individual genes interact with each other to govern the diverse biological processes in the cell. In [8–10], gene regulatory network inference based on graphical models is proposed. A graphical model depicts the relationships among the nodes of a graph, each of which is associated with a random variable; links between nodes represent dependence between the corresponding variables. For network inference based on the graphical Gaussian model [11, 12], the random variables corresponding to the nodes are assumed to be jointly distributed according to a multivariate Gaussian distribution N(μ, Σ), with mean vector μ and covariance matrix Σ. In [13], the gene-to-gene interaction is predicted from expression data using Bayesian networks, another type of graphical model. The dependence relationship between the variables is denoted by a directed acyclic graph whose nodes are associated with the variables X_1, …, X_n, and two nodes are linked if a dependent relationship exists between the corresponding variables. Given a set of expression values D, the algorithm selects the graph that best describes D by choosing the graph that maximizes a scoring function based on Bayes' rule. In [14], gene regulatory network reconstruction based on the dynamic Bayesian network is proposed to support cycles in the network and time-series expression data.
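In the graphical Gaussian model, a missing link corresponds to conditional independence, which for Gaussian variables means zero partial correlation. A minimal sketch of the standard first-order partial correlation computed from pairwise correlations (the numeric values are hypothetical, and this is not the exact procedure of [11, 12]):

```python
from math import sqrt

def partial_corr(r_xy, r_xz, r_yz):
    """First-order partial correlation of X and Y given Z,
    computed from the three pairwise correlations."""
    return (r_xy - r_xz * r_yz) / sqrt((1 - r_xz**2) * (1 - r_yz**2))

# If X and Y are correlated only through a common variable Z
# (r_xy == r_xz * r_yz), the partial correlation vanishes
# (up to floating-point error):
print(partial_corr(0.48, 0.8, 0.6))
```

A near-zero partial correlation suggests that the X-Y dependence is explained away by Z, so no direct link is drawn between X and Y.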
Relevance Network Algorithms
Another method that is related to graphical models is the relevance network. Relevance networks are based on the idea of the "covariance graph," where a link exists between genes g_i and g_j if and only if the corresponding gene expressions of g_i and g_j are marginally dependent [15]. Different measures of dependence have been used in relevance-network-based algorithms. In [16], the correlation coefficient is used to represent the dependence between two genes, and in both [16, 17], mutual information is used to measure the nonlinear relationship between the expressions of two genes. Since these metrics are computed from a finite number of samples, a threshold is often imposed so that two nodes are connected only if the computed metric between the two nodes is above the threshold. In [17], the entropies and the joint entropy are first computed based on histograms, and the mutual information of X and Y is then computed as I(X; Y) = H(X) + H(Y) − H(X, Y). In [18], the proposed ARACNE algorithm uses the Gaussian kernel estimator to estimate the mutual information between the expressions x_i and x_j of genes g_i and g_j. Before the mutual information is estimated from the observed expressions using the Gaussian kernel estimator, x_i and x_j are copula-transformed to take values between 0 and 1. This step is performed so that the expression data are transformed to a uniform distribution and arbitrary artifacts from microarray processing are removed. In gene regulatory networks, if gene g_1 regulates g_2, which in turn regulates g_3, then g_1 and g_3 will also be highly correlated. Using methods based on the relevance network, a link will often be incorrectly inferred between g_1 and g_3 due to the high correlation measure. In [18], ARACNE tries to resolve this problem by using the data processing inequality (DPI). From the DPI, if X, Y, and Z form a Markov chain (denoted X → Y → Z), then I(X; Z) ≤ min{I(X; Y), I(Y; Z)} [19]. For a triplet of genes where the estimated mutual information of all three pairs of genes exceeds the threshold, the link with the lowest mutual information is removed by ARACNE in the DPI step.
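The DPI step can be sketched as follows (a simplified version of ARACNE's pruning; the MI values below are hypothetical):

```python
from itertools import combinations

def dpi_prune(mi, edges):
    """Data processing inequality step (simplified sketch):
    for every fully connected gene triplet, remove the edge with the
    lowest mutual information.  `mi` maps frozenset pairs to MI values."""
    edges = set(edges)
    genes = sorted({g for e in edges for g in e})
    to_remove = set()
    for a, b, c in combinations(genes, 3):
        triplet = [frozenset(p) for p in ((a, b), (a, c), (b, c))]
        if all(t in edges for t in triplet):
            to_remove.add(min(triplet, key=lambda t: mi[t]))
    return edges - to_remove

# Hypothetical MI estimates for a g1 -> g2 -> g3 cascade: the indirect
# g1-g3 link has the weakest MI and is removed by the DPI step.
mi = {frozenset(("g1", "g2")): 0.9,
      frozenset(("g2", "g3")): 0.8,
      frozenset(("g1", "g3")): 0.5}
pruned = dpi_prune(mi, mi.keys())
print(sorted(sorted(e) for e in pruned))  # prints [['g1', 'g2'], ['g2', 'g3']]
```

Note that real ARACNE also uses a DPI tolerance parameter to avoid removing edges whose MI values are statistically indistinguishable; that refinement is omitted here.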
While relevance-network-based methods such as ARACNE perform well when the interactions in the gene regulatory network are between pairs of genes, they are unable to completely discover interactions that result from the joint regulation of a target gene by two or more genes. The XOR interactive regulation is one such interaction that can be recognized only by exploiting the conditional dependence between the variables of interest. Using conditional mutual information (CMI), it is possible to detect the XOR and other nonlinear interactive regulations by two genes. Several recent works have attempted to incorporate information-theoretic measures for more than two variables in regulatory network discovery. In [20], a CMI measure where the conditioning variable takes discrete values in two states (high and low) is proposed to discover the transcriptional interactions in human B lymphocytes. In [21, 22], methods based on both MI and CMI have also been proposed to decrease the false-positive rate in the detection of interactions. In [23], the conditional coexpression model is introduced, and the CMI is used as a measure of conditional coexpression. In [24], an extension of the context likelihood of relatedness (CLR) algorithm [25], called "synergy-augmented CLR," is proposed. The technique uses the recently developed information-theoretic concept of synergy [26] to define a numerical score for a transcriptional interaction by identifying the most synergistic partner gene for the interaction. In this work, we propose a relevance-network-based gene regulatory network inference algorithm similar to [24], using information-theoretic measures to determine the relationship between triplets of genes.
1.2. Objective
Here, we make use of both mutual information and conditional mutual information as measures of dependence between gene expressions.
The main focus of this work is to discover the potential interactions between genes by adapting the relevance network model, which is also used in [17, 18]. The inference of the connectivity, or the "wiring," of the network is also an important aspect of biological network inference. The proposed network inference algorithm uses an adaptive partitioning scheme to estimate the mutual information between X and Y conditioned on Z, where Z can be either discrete or continuous. We show that using both mutual information and conditional mutual information allows us to more accurately detect correlations due to interactive regulation and other complex gene-to-gene relationships. In this work, our primary focus is on the detection of Boolean interactive regulation and other interactions that cause incorrect inferences, such as coregulation and indirect regulation. The experimental results show that the proposed network inference algorithm can successfully detect these types of regulation and outperform two commonly used algorithms, BANJO and ARACNE. The remainder of the paper is organized as follows. In Section 2, we present the system model for regulatory network inference. In Section 3, we present the adaptive partitioning algorithms for estimating mutual information and conditional mutual information, as well as our proposed network inference algorithm based on MI-CMI. In Section 4, we present experimental results. Section 5 concludes the paper.
2. System Model: Suppose that the given set of genes forms a regulatory network, where each node of the network represents a gene. Associated with each node g_i is a random variable X_i with an unknown steady-state distribution from which the expressions of g_i are generated. We assume that for gene g_i we have a vector of steady-state gene expressions x_i, whose mth entry is the expression of gene g_i under condition m.
In a network inference problem, our primary goal is to correctly identify the links representing direct regulation and to reduce the false-negative and false-positive links. A false negative can be due to incorrect estimation of the metric that measures the interaction between the expressions of two genes. When interactive regulation is introduced into the network, false negatives may also occur because no significant interaction is detected between the regulated gene and any single one of the regulating genes; the regulation is only detectable when the regulated gene and all of the regulating genes are considered together. For example, in Figure 1(a), gene g_3 is regulated by an XOR interaction of genes g_1 and g_2. Using mutual information as the metric, the individual interactions between g_1 and g_3 and between g_2 and g_3 are not discovered, since I(X_1; X_3) = I(X_2; X_3) = 0. Figure 1: (a) XOR interactive regulation of g_3 by g_1 and g_2. (b) Coregulation of g_3 and g_4. In the relevance network approach, two nodes are connected when they exhibit a high degree of interaction according to the chosen metric. Using metrics such as the correlation coefficient and mutual information, a high degree of interaction between two genes typically indicates that one of the genes is directly or indirectly regulating the other gene, or that the two genes are being coregulated by another gene. In relevance networks, indirect regulation and coregulated genes are often the cause of false-positive links. ARACNE, as discussed in the previous section, removes indirect regulation by the application of the DPI. However, ARACNE and other network inference algorithms based only on the correlation coefficient or mutual information are unable to identify genes that are being coregulated, particularly if they are coregulated by the same mechanism. For example, in Figure 1(b), both g_3 and g_4 are regulated by an AND interaction of g_1 and g_2.
Using the correlation coefficient or mutual information as the metric will always result in a high interaction score between g_3 and g_4, in most cases greater than the interaction between either regulated gene and either one of the regulating genes, so that applying the DPI will result in a false-positive link. The insufficiency of using only mutual information or the correlation coefficient, as discussed above, naturally leads us to the use of conditional mutual information as the metric of choice in our proposed regulatory network inference algorithm. For Figure 1(a), it is clear that the interaction between g_1 and g_3 and that between g_2 and g_3 can be detected by I(X_1; X_3 | X_2) and I(X_2; X_3 | X_1). To resolve false positives due to coregulated genes, recall that the conditional mutual information I(X; Y | Z) measures the reduction in information provided about X by observing Y, conditioned on having observed Z. An example of Figure 1(a) can be seen in [27]. In Figure 1(b), coregulation of g_3 and g_4 can be recognized by the fact that, if g_3 and g_4 are regulated by the same biological mechanism, I(X_3; X_4) is large, since each gene's expression carries essentially all of the information provided by the other. On the other hand, having observed X_1, which regulates both g_3 and g_4, the information provided about X_4 by observing X_3 is reduced, and we have I(X_3; X_4 | X_1) < I(X_3; X_4). Thus, by considering both the mutual information and the conditional mutual information, we are able to reduce the number of false-positive links due to coregulation. An example of Figure 1(b) can be seen in [28]. Following the above discussion, in the next section we develop a relevance-network-based regulatory network inference algorithm that utilizes both mutual information and conditional mutual information to predict interactions between genes from the observed gene expression data. It is clear that we need efficient estimators that can accurately compute mutual information and conditional mutual information from data.
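The XOR effect described for Figure 1(a) can be verified exactly from the joint distribution, rather than estimated from samples; a minimal sketch (the noiseless binary setting and variable names are illustrative):

```python
from math import log2
from itertools import product

# Joint distribution of (X1, X2, X3) with X1, X2 ~ uniform Bernoulli
# and X3 = X1 XOR X2 (the noiseless version of Figure 1(a)).
p = {(x1, x2, x1 ^ x2): 0.25 for x1, x2 in product((0, 1), repeat=2)}

def marginal(dist, axes):
    out = {}
    for outcome, pr in dist.items():
        key = tuple(outcome[a] for a in axes)
        out[key] = out.get(key, 0.0) + pr
    return out

def mi(dist, ax, ay):
    """Mutual information I(X_ax; X_ay) in bits."""
    pxy = marginal(dist, (ax, ay))
    px, py = marginal(dist, (ax,)), marginal(dist, (ay,))
    return sum(pr * log2(pr / (px[(x,)] * py[(y,)]))
               for (x, y), pr in pxy.items() if pr > 0)

def cmi(dist, ax, ay, az):
    """Conditional mutual information I(X_ax; X_ay | X_az) in bits,
    computed as a weighted sum of MIs over the conditioning values."""
    pz = marginal(dist, (az,))
    total = 0.0
    for (z,), pr_z in pz.items():
        sliced = {k: v / pr_z for k, v in dist.items() if k[az] == z}
        total += pr_z * mi(sliced, ax, ay)
    return total

print(mi(p, 0, 2), cmi(p, 0, 2, 1))  # prints: 0.0 1.0
```

The pairwise MI between an input and the XOR output is exactly zero, while conditioning on the other input reveals a full bit of dependence, which is precisely why the CMI metric can recover links that the MI threshold misses.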
Moreover, the conditional mutual information estimator should be able to support both discrete and continuous conditioning variables to allow for wider-ranging uses.
3. MI-CMI Regulatory Network Inference Algorithm: There are several mutual information estimators, such as the Gaussian kernel estimator and the equipartition estimator [29], but each has its weaknesses. The Gaussian kernel estimator requires a smoothing window that needs to be optimized for different underlying distributions, which increases the estimator's complexity. While the equipartition estimator is simple in nature, the different grids in a partition often have variable efficiency in terms of their contribution to the mutual information estimate, due to the underlying sample distribution. In this section, we make use of an adaptive partitioning mutual information estimator proposed in [30] and extend it to estimating conditional mutual information. These estimators are then employed in building our MI-CMI-based relevance network for regulatory network inference.
3.1. Adaptive Partitioning Mutual Information Estimator
Let us consider a pair of random variables X and Y, both of which are assumed to take values on the real line for simplicity. For each random variable, we have N samples, x_1, …, x_N and y_1, …, y_N. From the samples we wish to obtain an estimate of the mutual information I(X; Y). For mutual information estimators that partition the samples according to equal-length or equiprobable partitions, many of the grids may turn out to be inefficient due to the distribution of the samples. For example, let X = cos Θ and Y = sin Θ, where Θ is uniformly distributed on [0, 2π). The samples then fall on the unit circle, and grids inside the circle do not contribute to the estimation of the mutual information between X and Y. Therefore, a partitioning scheme that can adaptively change the number, size, and placement of the grids is more efficient in estimating mutual information.
In the following, we describe the mutual information estimator proposed in [30], which adaptively partitions the observation space based on the unknown underlying distributions of the samples. In the adaptive partitioning scheme, the sample space is divided into rectangular grids of varying sizes depending on the underlying distributions. A grid denoted as A = A_x × A_y has the x-axis range A_x = [a_1, a_2] and the y-axis range A_y = [b_1, b_2]. Furthermore, the set containing all the grids of the partitioning is denoted as P. Let us denote f_XY, f_X, and f_Y as the densities of the distributions P_XY, P_X, and P_Y, respectively. We then define the following conditional distributions,

P^A_XY(E) = P_XY(E ∩ A) / P_XY(A),   P^A_X(E) = P_X(E ∩ A_x) / P_X(A_x),   P^A_Y(E) = P_Y(E ∩ A_y) / P_Y(A_y),

and their densities,

f^A_XY(x, y) = f_XY(x, y) 1_A(x, y) / P_XY(A),   f^A_X(x) = f_X(x) 1_{A_x}(x) / P_X(A_x),   f^A_Y(y) = f_Y(y) 1_{A_y}(y) / P_Y(A_y),

respectively, where 1_A denotes the indicator function of the set A. The mutual information I(X; Y) can now be written as

I(X; Y) = D_P(X; Y) + R_P(X; Y),

where

D_P(X; Y) = Σ_{A ∈ P} P_XY(A) log [ P_XY(A) / (P_X(A_x) P_Y(A_y)) ]

is called the restricted divergence and R_P(X; Y) = Σ_{A ∈ P} P_XY(A) I^A(X; Y) is the residual divergence, with I^A(X; Y) denoting the mutual information under the restricted distribution P^A_XY. We define a sequence of partitionings P_1, P_2, … of the sample space as nested if each grid A ∈ P_k is a disjoint union of grids B_1, …, B_m ∈ P_{k+1}, where m can be different for each A. Thus, P_{k+1} can be seen as a refinement of P_k. A nested sequence is said to be asymptotically sufficient for X and Y if for every measurable set E and every ε > 0 there exists a K such that for each k ≥ K, one can find an A ∈ σ(P_k) satisfying

P_XY(A Δ E) < ε,

where σ(P_k) denotes the σ-algebra generated by P_k, and Δ denotes the symmetric difference. In [30], it is shown that if the nested sequence is asymptotically sufficient for X and Y, then

D_{P_k}(X; Y) → I(X; Y) as k → ∞.

Given the N pairs of samples (x_k, y_k), we define

P̂(A) = (1/N) Σ_{k=1}^{N} 1_A(x_k, y_k),

that is, the frequency of the samples falling into the grid A. Then, the restricted divergence can be estimated from the samples with the following estimator:

D̂_P(X; Y) = Σ_{A ∈ P} P̂(A) log [ P̂(A) / (P̂_X(A_x) P̂_Y(A_y)) ].     (7)

Furthermore, in [30] it is shown that the residual divergence approaches zero as the partition is refined, and that

D̂_{P_k}(X; Y) → I(X; Y) as N, k → ∞.
Thus, mutual information can be estimated by computing the relative sample frequencies on appropriately placed rectangular grids. We now give the adaptive partitioning algorithm that constructs an asymptotically sufficient sequence of partitions for mutual information estimation. Algorithm 1 (Adaptive partitioning algorithm for mutual information estimation). (i) Initialization: partition the x-axis at x̄ and the y-axis at ȳ, respectively, such that

P̂_X((−∞, x̄]) = P̂_X((x̄, ∞)) = P̂_Y((−∞, ȳ]) = P̂_Y((ȳ, ∞)) = 1/2;

that is, x̄ and ȳ are the equiprobable partition points for X and Y with respect to the empirical marginal distributions, and the sample space is divided into 4 grids. This partition is denoted as P_1. (ii) Partitioning: for a grid A ∈ P_k, select the partition points x̄_A and ȳ_A such that the samples in A are divided into equiprobable halves along each axis. Denote N_A as the total number of samples in the grid A and N_1, N_2, N_3, N_4 as the total numbers of samples in each of the quadrants created by the above partition. Compute Pearson's chi-squared test for the uniform distribution,

χ² = Σ_{i=1}^{4} (N_i − N_A/4)² / (N_A/4) ≤ χ²_{3, 0.05} = 7.815.     (11)
If the sample distribution over the quadrants passes the uniform test, that is, (11) holds, A is added to the final partition P*. If the sample distribution does not pass the uniform test, the four quadrant grids A_1, A_2, A_3, and A_4 are added to P_{k+1}. (iii) Repeat step (ii) for all grids in P_k. (iv) Repeat steps (ii) and (iii) until no grid requires further partitioning. When the partitioning process is terminated, define P* as the set of all grids that have passed the uniform test. (v) Using the partition P*, compute the mutual information estimate according to (7). Here, we give an example of how to adaptively partition a given set of sampled data. In this example, we sampled 100 pairs of X and Y that are jointly Gaussian with a correlation coefficient of 0.9 and both with mean zero. The 100 sample pairs are plotted in Figure 2(a). In Figure 2(b), we plot the same samples in their ordinal plot, meaning that each sample of X and Y is ranked in decreasing order with respect to the other samples from the same random variable, and the sample pairs are plotted by their integer-valued ranks. In the ordinal plots, an equiprobable partition is equivalent to a partition at the midpoint. In Figure 2(b), we can also see the dashed lines dividing the samples into 4 grids. This is the initialization partition that is always kept no matter how the samples are distributed. In Figure 2(c), we can see that the 4 grids are each partitioned into 4 quadrants by the dashed lines. Table 1 shows the distribution of the samples in the quadrants created by the partitioning of the 4 grids during the second-level partition, and their chi-squared statistics. To pass the uniform chi-squared test at the 95% level, the chi-squared test statistic should be less than 7.815. As we can see from Table 1, all 4 grids failed the test and thus require further partitioning.
Table 1: Quadrant sample counts in each grid after the second-level partition, and result of the chi-squared test.
        Quadrant 1   Quadrant 2   Quadrant 3   Quadrant 4   χ² statistic   Pass?
Grid 1          18            7            6           11         8.4762      no
Grid 2           0            0            7            1        17.0000      no
Grid 3           1            6            1            0        11.0000      no
Grid 4          12            6           18            6         9.4286      no

Figure 2: Example of adaptive partitioning steps for pairwise mutual information. (a) 100 samples of X and Y, jointly Gaussian with correlation coefficient 0.9. (b) Initialization partition of the original samples. (c) Second-level partition of the grids. (d) Third-level partition of the grids.

In Figure 2(d), we can see that the 13 nonzero grids from the previous steps are each divided into 4 quadrants by the dashed lines. Table 2 similarly shows, for the third-level partition, the quadrant sample counts in each of the grids and their chi-squared test results. From Table 2, we can see that all grids pass the chi-squared test; thus no further partitioning is needed, and the adaptive partitioning scheme has partitioned the samples into the 13 grids shown in Figure 2(d).
Table 2: Quadrant sample counts in each grid after the third-level partition, and result of the chi-squared test.
        Quadrant 1   Quadrant 2   Quadrant 3   Quadrant 4   χ² statistic   Pass?
Grid 1           7            3            5            3         2.4444     yes
Grid 2           2            1            1            3         1.5714     yes
Grid 3           2            3            0            1         3.3333     yes
Grid 4           3            3            4            1         1.7273     yes
Grid 5           2            0            2            3         2.7143     yes
Grid 6           0            0            1            0         3.0000     yes
Grid 7           0            1            0            0         3.0000     yes
Grid 8           2            2            1            1         0.6667     yes
Grid 9           4            2            5            1         3.3333     yes
Grid 10          2            0            1            3         3.3333     yes
Grid 11          1            0            0            0         3.0000     yes
Grid 12          3            2            1            0         3.3333     yes
Grid 13          2            5            5            6         2.0000     yes

3.2. Conditional Mutual Information Estimator
Works in various fields have utilized conditional mutual information to test for conditional independence. However, in most cases they are limited to conditioning on a discrete, often binary, random variable [31, 32]. When conditioning on a discrete random variable Z, the conditional mutual information can be computed as

I(X; Y | Z) = Σ_z Pr(Z = z) I(X; Y | Z = z).
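For a discrete conditioning variable, this weighted-sum computation can be sketched directly from samples using plug-in frequency estimates (the XOR data below are illustrative):

```python
from math import log2
from collections import Counter

def mi_from_pairs(pairs):
    """Plug-in MI estimate (in bits) from a list of (x, y) samples."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def cmi_discrete(samples):
    """CMI estimate I(X; Y | Z) from (x, y, z) samples with discrete z:
    bin the samples by z and take the weighted sum of per-bin MIs."""
    n = len(samples)
    bins = {}
    for x, y, z in samples:
        bins.setdefault(z, []).append((x, y))
    return sum((len(b) / n) * mi_from_pairs(b) for b in bins.values())

# Noiseless XOR samples: every (x, y) combination repeated, z = x XOR y.
samples = [(x, y, x ^ y) for x in (0, 1) for y in (0, 1)] * 25

# I(X; Z) is zero, while I(X; Z | Y) is one bit:
print(mi_from_pairs([(x, z) for x, _, z in samples]),
      cmi_discrete([(x, z, y) for x, y, z in samples]))  # prints: 0.0 1.0
```

This is the discrete special case; the adaptive partitioning extension developed below replaces the exact z-bins with adaptively chosen intervals on the z-axis.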
This is done by simply dividing the samples into bins according to the value Z takes, and taking the weighted summation of the mutual information in each bin. In the case of conditioning on a continuous random variable, however, the partitioning of the z-axis is often not so clear. Next, we propose a modification to the adaptive partitioning estimator that also adaptively partitions the z-axis, to allow the estimation of conditional mutual information when the conditioning random variable is continuous. Let us consider a triplet of random variables X, Y, and Z, each taking values on the real line. Given the samples x_1, …, x_N, y_1, …, y_N, and z_1, …, z_N, we wish to compute an estimate of the conditional mutual information I(X; Y | Z). Suppose that the space is divided into cuboids of various sizes depending on the underlying distributions. The cuboid denoted as A = A_x × A_y × A_z has range A_x on the x-axis, A_y on the y-axis, and A_z on the z-axis, and the set containing all the cuboids of the partition is denoted as P. We then define the following conditional distribution:

P^A_XYZ(E) = P_XYZ(E ∩ A) / P_XYZ(A),

and its density

f^A_XYZ(x, y, z) = f_XYZ(x, y, z) 1_A(x, y, z) / P_XYZ(A).

Similar to (3), we can write I(X; Y | Z) as

I(X; Y | Z) = Q_P(X; Y | Z) + R_P(X; Y | Z),

where Q_P denotes the restricted divergence and R_P denotes the residual divergence of the partition P. We can rewrite Q_P as

Q_P(X; Y | Z) = Σ_{A ∈ P} P_XYZ(A) log [ P_XYZ(A) P_Z(A_z) / (P_XZ(A_x × A_z) P_YZ(A_y × A_z)) ].

Notice that this is simply a weighted sum of the restricted divergence as computed in (3), for samples grouped according to the z-axis partition. For a partition P, define

P̂(A) = (1/N) Σ_{k=1}^{N} 1_A(x_k, y_k, z_k),

and the restricted divergence can be estimated as

Q̂_P(X; Y | Z) = Σ_{A ∈ P} P̂(A) log [ P̂(A) P̂_Z(A_z) / (P̂_XZ(A_x × A_z) P̂_YZ(A_y × A_z)) ].     (18)

Following the proof in [30, 33], the residual divergence approaches zero as the partition is refined, and Q̂_P converges to the conditional mutual information. We can see from (15) and (17) that the integral in the definition of R_P vanishes if and only if f^A_XY = f^A_X f^A_Y, that is, X and Y are independent in the cuboid A. In the following, we propose an adaptive partitioning scheme that partitions the given samples into cuboids, where in each cuboid the conditional distributions of X and Y given Z are independent. Similar to Algorithm 1, we use Pearson's chi-squared test to determine the independence of the samples. We now present the algorithm for estimating the conditional mutual information with a continuous conditioning variable. Algorithm 2 (Adaptive partitioning algorithm for conditional mutual information estimation). (i) Initialization: partition the x-, y-, and z-axes at x̄, ȳ, and z̄, respectively, such that each axis is divided into two equiprobable halves;
that is, x̄, ȳ, and z̄ are the equiprobable partition points for X, Y, and Z with respect to the empirical marginal distributions, and the sample space is divided into 8 cuboids. This partition is denoted as P_1. (ii) Partitioning: for a cuboid A ∈ P_k, select the partition points x̄_A, ȳ_A, and z̄_A such that the samples in A are divided into equiprobable halves along each axis. Denote N_A as the total number of samples in the cuboid A and N_1, …, N_8 as the total numbers of samples in each of the octants created by the above partition. Compute Pearson's chi-squared test for the uniform distribution,

χ² = Σ_{i=1}^{8} (N_i − N_A/8)² / (N_A/8) ≤ χ²_{7, 0.05}.     (24)

If the sample distribution passes the uniform test, that is, if (24) holds, the cuboid A is added to the final partition P*. If the sample distribution does not pass the uniform test, the cuboids A_1, …, A_8 are added to P_{k+1}. (iii) Repeat step (ii) for all cuboids in P_k. (iv) Repeat steps (ii) and (iii) until no cuboid requires further partitioning. When the partitioning process is terminated, define P* as the set of all cuboids that have passed the uniform test. (v) Using the partition P*, compute the conditional mutual information estimate according to (18). Figures 3 and 4 give an adaptive partition of trivariate sample data. Note that the data are generated by an XOR gate, with random noise added to both the inputs and the output. We can see that the z-axis is partitioned into two regions. In the initial step, the sample data is divided into 8 cuboids. The 4 cuboids without any data points are discarded, and the other 4 are added to P_2. In the second step, each of the 4 cuboids is divided into 8 cuboids and tested for uniform distribution with the chi-squared test. All 4 pass the test and are added to P*. In the next step, no cuboid remains to be partitioned, and the partitioning process is terminated. Figure 3: Adaptive partition of X and Y given Z. Figure 4: Adaptive partition of X and Y given Z. Compared to the estimation of conditional mutual information for a discrete conditioning variable, we can see that, instead of grouping samples into subsets where samples belonging to the same subset have the same value of the discrete conditioning variable, here we group samples based on the adaptively determined partitioning of Z on the z-axis. The problem of estimating the conditional mutual information is thus broken down into estimating the mutual information for each group of samples, where the samples are grouped by the z-axis interval to which they belong. Note that the complexity of the Gaussian kernel estimator is known to be quadratic in the number of samples. However, the complexity of the adaptive partitioning estimator is dependent upon the joint distribution of the variables. For example, suppose X and Y are independent and identically distributed uniform random variables.
To compute the mutual information estimate from pairs of such samples will take, on average, only the four initializing grids, since the sample pairs are typically uniformly distributed in each of the grids and no further subpartitions are necessary according to the chi-squared test. On the other hand, suppose that X = cos Θ and Y = sin Θ, where Θ is uniformly distributed on [0, 2π); it will take many more subpartitions to obtain a uniform distribution of the samples on each of the resulting grids. From our experience, for samples of jointly Gaussian pairs, the Gaussian kernel estimator takes about 2 minutes to compute the MI, whereas for the adaptive partitioning algorithm the time is between 2.5 and 3 minutes, in MATLAB code running on a Pentium 4 2.54 GHz machine. However, this is without taking into consideration the overhead required by the Gaussian kernel estimator to compute the smoothing window.
3.3. Gene Regulatory Network Inference Algorithm
To infer a gene regulatory network that has various interactive regulations and coregulations, we propose a strategy of using both mutual information and conditional mutual information to reconstruct the regulatory network. In our proposed algorithm, we first use mutual information as the metric to build the regulatory network, similarly to [17], to capture most of the direct regulations. To decrease the complexity of the algorithm by avoiding computing the conditional mutual information for all triplets, while still allowing us to detect most of the causes of false positives and false negatives, we compute the CMI only for triplets of genes where either all three genes are connected or none of the three genes are connected. The decrease in complexity depends on several factors. Once the pairwise MI threshold is chosen, the triplets that have one or two connections between the three genes indicate that the pairwise MI is sufficient for the determination of the interaction between the three genes, and the use of CMI is not necessary.
Thus, instead of computing the CMI for all triplets of genes, the CMI needs to be computed only for those triplets that are completely connected or completely unconnected. The amount of decrease in complexity then depends on the ratio of triplets that have only one or two connections, which in turn depends on the actual connectivities between the genes and on the threshold selected for the pairwise mutual information phase of the algorithm. The MI-CMI gene regulatory network inference algorithm is as follows. Algorithm 3 (MI-CMI gene regulatory network inference algorithm). (i) For a gene expression dataset containing n genes, compute the mutual information estimate Î(X_i; X_j) for all gene pairs (g_i, g_j) using Algorithm 1. (ii) Initialize the graph G as a zero matrix. Set G_ij = 1 if Î(X_i; X_j) > γ, where γ is a predetermined threshold. (iii) Detecting indirect regulation and coregulation: for any triplet of genes (g_i, g_j, g_k) where G_ij = G_ik = G_jk = 1, compute the conditional mutual information estimates Î(X_i; X_j | X_k), Î(X_i; X_k | X_j), and Î(X_j; X_k | X_i) using Algorithm 2. (a) If Î(X_i; X_j | X_k) ≈ 0 and Î(X_i; X_k | X_j) ≈ 0, this means that X_j and X_k contain nearly the same information regarding X_i: having observed X_j, X_k contains no new information about X_i, and vice versa. Also, having observed X_i, the information contained about X_k in X_j is reduced. This indicates that g_j and g_k are regulated by g_i through the same mechanism, meaning that the gene pair is coregulated; thus G_jk is set to 0. (b) If the three CMI estimates are all significant and Î(X_i; X_k | X_j) is the smallest, this indicates that g_i regulates g_j, and g_j regulates g_k, and that g_k is indirectly regulated by g_i, as indicated by the smallest CMI. Using the DPI similarly to [18], G_ik is set to 0. (iv) Detecting interactive regulation: for any triplet of genes (g_i, g_j, g_k) where G_ij = G_ik = G_jk = 0, compute the conditional mutual information estimates Î(X_i; X_j | X_k), Î(X_i; X_k | X_j), and Î(X_j; X_k | X_i) using Algorithm 2. (a) If one or two of the CMI estimates are greater than γ, this indicates that the genes contain interactions that were not captured using MI, and we set the corresponding link or links to 1. (b) If all three of the CMI estimates are greater than γ, this may indicate that the two regulating genes have had some prior interaction, or that there is an XOR interaction between the three genes. Thus, we apply the DPI to remove the link with the weakest estimated CMI, and the links corresponding to the two largest estimated CMIs are set to 1.
4. Experimental Results: In this section, we present simulation results to demonstrate the performance of the algorithms discussed in Section 3. We first illustrate the performance of Algorithm 2 for estimating the conditional mutual information of jointly Gaussian random variables. Next, we consider the performance of Algorithm 1 for estimating mutual information, by implementing the regulatory network inference algorithm in [18] but replacing the Gaussian kernel mutual information estimator employed there with Algorithm 1. Finally, we compare the network inference performance of Algorithm 3 with that of ARACNE [18] and BANJO [11] on synthetic networks.
4.1. Conditional Mutual Information of Jointly Gaussian Random Variables
To assess the accuracy of Algorithms 1 and 2 for the estimation of gene regulatory networks, we consider estimating the pairwise and conditional mutual information of multivariate Gaussian distributions. In our simulation, we compare the MI and CMI estimates of Algorithms 1 and 2 with those of the b-spline estimators.
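A toy sketch of the overall MI-CMI strategy on the noiseless XOR network (the threshold and decision rules are simplified placeholders, not the exact conditions of Algorithm 3, and the exact distribution stands in for the adaptive estimators):

```python
from math import log2
from itertools import combinations, product

def mi_bits(joint, a, b):
    # I(X_a; X_b) in bits from a joint distribution {outcome_tuple: prob}.
    def marg(axes):
        m = {}
        for o, p in joint.items():
            k = tuple(o[i] for i in axes)
            m[k] = m.get(k, 0.0) + p
        return m
    pab, pa, pb = marg((a, b)), marg((a,)), marg((b,))
    return sum(p * log2(p / (pa[(x,)] * pb[(y,)]))
               for (x, y), p in pab.items() if p > 0)

def cmi_bits(joint, a, b, c):
    # I(X_a; X_b | X_c) as a weighted sum over the values of X_c.
    out = 0.0
    for z in {o[c] for o in joint}:
        pz = sum(p for o, p in joint.items() if o[c] == z)
        cond = {o: p / pz for o, p in joint.items() if o[c] == z}
        out += pz * mi_bits(cond, a, b)
    return out

def mi_cmi_infer(joint, n, gamma=0.1):
    """Toy version of the MI-CMI strategy: threshold pairwise MI, then
    revisit completely unconnected triplets with CMI to recover
    interactive (e.g. XOR) regulation.  Decision rules simplified."""
    edges = {frozenset((i, j)) for i, j in combinations(range(n), 2)
             if mi_bits(joint, i, j) > gamma}
    for i, j, k in combinations(range(n), 3):
        trip = [frozenset(p) for p in ((i, j), (i, k), (j, k))]
        if not any(t in edges for t in trip):
            cmis = [cmi_bits(joint, i, j, k), cmi_bits(joint, i, k, j),
                    cmi_bits(joint, j, k, i)]
            if sum(c > gamma for c in cmis) in (1, 2):
                edges |= {t for t, c in zip(trip, cmis) if c > gamma}
            elif all(c > gamma for c in cmis):  # possible XOR interaction
                weakest = min(zip(trip, cmis), key=lambda tc: tc[1])[0]
                edges |= set(trip) - {weakest}
    return edges

# Noiseless XOR network: X2 = X0 XOR X1, so all pairwise MIs are zero.
joint = {(x0, x1, x0 ^ x1): 0.25 for x0, x1 in product((0, 1), repeat=2)}
print(sorted(sorted(e) for e in mi_cmi_infer(joint, 3)))  # prints [[0, 2], [1, 2]]
```

The pairwise-MI phase finds no edges at all; the CMI phase recovers the links from both inputs to the XOR target while leaving the inputs unconnected, mirroring step (iv) of Algorithm 3.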
A b-spline MI estimator is proposed in [34], which divides the sample range into a number of bins. In contrast to classical histogram estimators, where each sample contributes only to the bin that contains it, the b-spline estimator spreads the weight of each sample over several bins. In the case of a third-order b-spline estimator, a sample located in bin k is assigned to bins k-1, k, and k+1, and the weight of the sample in each bin is computed from the b-spline coefficients. Here, we modify the b-spline estimator proposed in [34] to estimate the 3-way MI so that the CMI can be obtained with the relationship . For MI estimation, we generated bivariate Gaussian samples with correlation coefficients 0, 0.3, and 0.6. For each coefficient, we generated samples of each size and computed the estimated MI using Algorithm 1 and the third-order b-spline estimator with 10 bins proposed in [34]; each estimate is averaged over 500 sets of samples. For a bivariate Gaussian distribution, the exact MI of X and Y is

I(X;Y) = -(1/2) log(1 - rho^2),

where rho is the correlation coefficient between X and Y. For CMI estimation, we generated samples from trivariate Gaussian distributions with four different covariance matrices (the conditional correlation coefficients they induce are listed in Table 4).
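Returning to the b-spline estimator described above, its order-3 weighting can be sketched as follows. This is a minimal illustration, not the exact implementation of [34]; boundary bins are not handled, and the function name is ours:

```python
def bspline3_weights(x, n_bins):
    """Spread a sample x in [0, 1) over three adjacent bins using
    uniform quadratic (order-3) B-spline weights that sum to 1."""
    pos = x * n_bins
    k = int(pos)          # bin containing the sample
    t = pos - k           # position within the bin, in [0, 1)
    # Weights for bins k-1, k, k+1 (boundary bins not handled here).
    return {k - 1: 0.5 * (1 - t) ** 2,
            k:     0.5 * (-2 * t * t + 2 * t + 1),
            k + 1: 0.5 * t * t}

w = bspline3_weights(0.47, 10)  # sample falls in bin 4
print(round(sum(w.values()), 12))  # -> 1.0, the sample's mass is conserved
```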
For each Gaussian distribution, we generated samples of each size and computed the estimated CMI using Algorithm 2 and the modified third-order b-spline estimator with 10 bins. For each sample size, the estimated CMI is averaged over 500 sets of samples. For a trivariate Gaussian distribution, the exact CMI of X and Y given Z is

I(X;Y|Z) = (1/2) log[ var(X|Z) var(Y|Z) / ( var(X|Z) var(Y|Z) - cov(X,Y|Z)^2 ) ],

where var(X|Z), var(Y|Z), and cov(X,Y|Z) are the conditional variances of X and of Y, and the conditional covariance between X and Y, given Z, respectively. For a trivariate Gaussian distribution, the conditional covariance matrix of (X, Y) given Z is

Sigma_{(X,Y)|Z} = Sigma_{(X,Y)} - Sigma_{(X,Y),Z} Sigma_{Z,Z}^{-1} Sigma_{Z,(X,Y)},
where Sigma_{U,V} denotes the covariance of U and V. The results of the MI estimation are given in Table 3, and the results of the CMI estimation are given in Table 4. In both the MI and the CMI estimation, the adaptive algorithms give estimates closer to the analytical values for all correlation coefficients and covariance matrices, except for the MI estimation at correlation coefficient 0.3. Both tables also show that, as the sample size grows, the adaptive algorithms converge toward the analytical values for both MI and CMI estimation. This is not true for the b-spline algorithms: for MI estimation at correlation coefficient 0.6, and for CMI estimation with covariance matrix 4, the b-spline estimators converge to incorrect values.

Table 3. Estimated MI of a bivariate Gaussian distribution for different correlation coefficients, using Algorithm 1 (Adaptive) and the b-spline algorithm, for sample sizes n = 100 to 500.

Corr. coeff.  Algorithm    n=100   n=200   n=300   n=400   n=500   Analytical
0             Adaptive     0.0080  0.0036  0.0022  0.0022  0.0015  0
0             b-spline 10  0.0912  0.0443  0.0288  0.0210  0.0166  0
0.3           Adaptive     0.0280  0.0287  0.0305  0.0319  0.0330  0.0472
0.3           b-spline 10  0.1248  0.0789  0.0640  0.0562  0.0515  0.0472
0.6           Adaptive     0.1371  0.1730  0.1916  0.1999  0.2052  0.2231
0.6           b-spline 10  0.2471  0.2029  0.1879  0.1781  0.1719  0.2231

Table 4. Estimated CMI of a trivariate Gaussian distribution for different covariance matrices (identified by the conditional correlation coefficient each induces), using Algorithm 2 and the modified b-spline algorithm.

Cond. corr.  Algorithm    n=100   n=200   n=300   n=400   n=500   Analytical
0            Adaptive     0.0263  0.0215  0.0171  0.0175  0.0113  0
0            b-spline 10  0.1899  0.1039  0.0711  0.0536  0.0429  0
0            b-spline 20  0.7888  0.5592  0.4330  0.3497  0.2943  0
0.1612       Adaptive     0.0310  0.0278  0.0253  0.0249  0.0187  0.0132
0.1612       b-spline 10  0.1899  0.1065  0.0759  0.0603  0.0495  0.0132
0.3035       Adaptive     0.0497  0.0510  0.0534  0.0565  0.0582  0.0483
0.3035       b-spline 10  0.2251  0.1377  0.1032  0.0855  0.0761  0.0483
0.7408       Adaptive     0.2294  0.3050  0.3234  0.3444  0.3784  0.3979
0.7408       b-spline 10  0.2773  0.2390  0.2190  0.2092  0.2029  0.3979
0.7408       b-spline 20  0.6387  0.5323  0.4719  0.4378  0.4121  0.3979

As a comparison, we also performed the CMI estimation for covariance matrices 1 and 4 using a b-spline estimator with 20 bins. In [34], the b-spline method is shown to perform similarly to the kernel density estimator (KDE), with MI estimates of the same level of significance, while the KDE is more computationally intensive; our comparisons therefore include only the b-spline method. With 20 bins, the b-spline estimator now converges to the correct analytical value for matrix 4. However, for matrix 1 it does not converge to zero as the 10-bin estimator does. This illustrates the drawback of the b-spline estimators for MI and CMI estimation: their accuracy depends on the choice of parameters. Algorithms 1 and 2, on the other hand, are nonparametric and need no prior knowledge of the underlying distributions to produce good estimates. Looking more closely at the CMI estimation, for small sample sizes and large CMI values Algorithm 2 has a negative bias, which shrinks quickly as the sample size increases; when the true CMI value is small, Algorithm 2 tends to overestimate. It should be noted that estimating the CMI from a finite number of samples for a distribution with zero conditional correlation coefficient will typically result in a nonzero value.
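The closed-form quantities behind the "Analytical" column can be sketched directly. The symbols X, Y, Z, the function names, and the example covariance matrix are ours for illustration; the paper's four matrices are not reproduced here:

```python
import math

def cond_stats(S):
    """Given a 3x3 covariance matrix S for (X, Y, Z), return the
    conditional (co)variances of X and Y given Z via the Schur
    complement S_xy - S_xz S_zz^{-1} S_zy (scalar-Z case)."""
    vx = S[0][0] - S[0][2] ** 2 / S[2][2]        # Var(X | Z)
    vy = S[1][1] - S[1][2] ** 2 / S[2][2]        # Var(Y | Z)
    cxy = S[0][1] - S[0][2] * S[1][2] / S[2][2]  # Cov(X, Y | Z)
    return vx, vy, cxy

def gaussian_cmi(S):
    """Exact I(X;Y|Z) = -0.5 * log(1 - rho^2) for a trivariate Gaussian,
    where rho is the conditional correlation of X and Y given Z."""
    vx, vy, cxy = cond_stats(S)
    rho = cxy / math.sqrt(vx * vy)
    return -0.5 * math.log(1 - rho * rho)

# Illustrative covariance matrix (not one of the paper's four):
S = [[1.0, 0.5, 0.3],
     [0.5, 1.0, 0.4],
     [0.3, 0.4, 1.0]]
print([round(v, 4) for v in cond_stats(S)])  # -> [0.91, 0.84, 0.38]
# A conditional correlation of 0.7408, as in Table 4, gives
# -0.5 * log(1 - 0.7408**2) = 0.3979 to four decimals.
```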
Nevertheless, the estimation results are still reasonably accurate, even with only 100 samples, so that conditional independence can be easily detected.

4.2. Regulatory Networks with Only Direct Regulation

Next, we implemented the algorithm described in [18], replacing its Gaussian kernel MI estimator with Algorithm 1. The modified algorithm is then compared with the original ARACNE algorithm of [18]. The purpose of this comparison is to show that the adaptive partitioning MI estimator is a valid alternative to the Gaussian kernel estimator. Specifically, we constructed 25 synthetic regulatory networks, each with 20 one-to-one gene regulations, using NetBuilder [35]. To compare network inference performance, we adopt the same metrics as [18]: recall and precision. Recall, defined as TP/(TP + FN), where TP is the number of true positive links and FN is the number of false negative links, measures the fraction of true links that are correctly identified. Precision, defined as TP/(TP + FP), where FP is the number of false positive links, measures the fraction of predicted links that are correct. The values of, and the trade-off between, the two metrics change with the selected threshold. At a low threshold, more links are admitted as gene interactions, potentially capturing more true links and yielding high recall; however, as more links are included, the number of false positives also increases, which lowers precision. Conversely, at a high threshold only links with strong estimated interactions are admitted; these mostly represent true interactions, improving precision, but weaker true interactions are rejected, reducing recall. In Figure 5, we plot the precision versus recall performance of the two algorithms. The two algorithms perform identically.
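The two metrics defined above can be sketched as follows (the function names are ours):

```python
def recall(tp, fn):
    """Fraction of true links that were recovered: TP / (TP + FN)."""
    return tp / (tp + fn)

def precision(tp, fp):
    """Fraction of predicted links that are correct: TP / (TP + FP)."""
    return tp / (tp + fp)

# 20 true links, 15 recovered, 5 missed, 10 spurious predictions:
print(recall(15, 5), precision(15, 10))  # -> 0.75 0.6
```

Lowering the admission threshold raises TP but typically raises FP faster, which is the recall/precision trade-off described above.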
This shows that the adaptive partitioning MI estimator can be employed as an alternative to the Gaussian kernel estimator in capturing gene-to-gene interactions. The comparison in Figure 5 uses only synthetic networks constructed with purely pairwise connectivity, to show that the adaptive partitioning algorithm can replace the kernel-based estimator in the ARACNE algorithm without loss of performance. The simulations below show that, in the presence of coregulation by two genes, the CMI is needed to improve regulatory network inference. Note that since MI and CMI are estimated from a finite number of samples, the estimated values are always greater than 0. In the relevance-network approach, an arbitrarily low threshold admits any number of links as detected gene interactions, and a sufficiently low threshold admits all possible links. When large numbers of links are admitted, the number of false negatives is small, which leads to large recall values. Comparisons at large recall values are therefore not meaningful and are not included in the figures.

Figure 5. Comparison of ARACNE and the relevance-network-based algorithm with the adaptive partitioning MI estimator and DPI.

4.3. Regulatory Networks with Coregulation and Interactive Regulation

We now compare the performance of Algorithm 3, ARACNE, and BANJO for regulatory network inference in the presence of coregulated and interactively regulated genes. We again use the synthetic network modeling software NetBuilder to generate random networks. NetBuilder allows modeling of gene-to-gene interactions such as activation by transcription factor combinations (AND and OR), repression (NOT), and other combinatorial interactions.
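A toy generator for the kinds of combinatorial regulation just mentioned can be sketched as follows. This is our illustration only; it is not NetBuilder's actual interface, and all names are ours:

```python
import random

def regulate(kind, a, b):
    """Combinatorial regulation of a target by regulator states a, b."""
    return {"AND": a and b,
            "OR":  a or b,
            "XOR": a != b,
            "NOT": not a}[kind]

def sample_states(kind, n=100, seed=0):
    """Draw random regulator states and the resulting target state."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        a, b = rng.random() < 0.5, rng.random() < 0.5
        out.append((a, b, regulate(kind, a, b)))
    return out

# Under XOR regulation, neither regulator alone predicts the target,
# which is why pairwise MI misses such links.
print(regulate("XOR", True, True))  # -> False
```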
We generated 50 synthetic networks, each containing 15 to 25 nodes with 20 links. For each node, we generated 100 steady-state expression data samples. To compare the effects of interactive regulation and coregulation on the performance of the three algorithms, two sets of synthetic networks were constructed: in one set of 25 networks, 30% of the interactions involve interactive regulation and coregulation; in the other set of 25 networks, 60% do. In Figures 6 and 7, we plot precision versus recall for the two sets of synthetic networks. Algorithm 3 outperforms both ARACNE and BANJO in precision over the entire range of interest. Notice that the improvement over ARACNE is greater for the dataset with 60% coregulation and interactive regulation, which is expected since ARACNE in most cases cannot detect XOR interactions, and its application of the DPI to coregulated genes can introduce both false positives and false negatives. Surprisingly, BANJO performs better than ARACNE at high recall values on the networks with 60% coregulation and interactive regulation. In [18], the Gaussian network algorithm is shown to perform worse when the network contains only direct interactions between pairs of genes. It is possible that, because Gaussian-network-based algorithms such as BANJO model the expression values of nodes with joint distributions, they are able to discover some of the coregulations and interactive regulations that ARACNE misses.

Figure 6. Precision versus recall for datasets with 30% coregulated or interactively regulated links.
Figure 7. Precision versus recall for datasets with 60% coregulated or interactively regulated links.

In Figure 9, we give an example of a network discovered by each algorithm. For the MI-CMI algorithm, we randomly permuted, for each gene, the expressions across the different conditions, similar to [17]. We performed 30 such permutations, and for each permutation we computed the pairwise mutual information using Algorithm 1 for all possible pairs. The highest mutual information observed over the 30 permutations is used as the threshold for both the MI-CMI algorithm and ARACNE. Results for BANJO were obtained using the default parameters. Figure 9(a) shows the network inferred by the MI-CMI algorithm, Figure 9(b) the network inferred by ARACNE, and Figure 9(c) the network discovered by BANJO. In each panel, red links represent XOR interactions, green links OR interactions, and blue links AND interactions; false negative links are marked with a cross, and false positive links are drawn as dashed lines. The true underlying network is shown in Figure 8. BANJO produced the most false positive links, from both indirect regulation and coregulation, whereas the MI-CMI algorithm and ARACNE each produced only one. The MI-CMI algorithm and BANJO completely discovered similar numbers of interactive regulations, 5 and 4, respectively; an interactive regulation is completely discovered when both regulating genes are linked correctly to the interactively regulated gene. ARACNE completely discovered only 2 interactive regulations, and for most of the others only one of the two links.

Figure 8. True underlying network configuration inferred in Figure 9.
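The permutation-based threshold selection described above can be sketched as follows. We substitute a crude equal-width histogram MI estimate for Algorithm 1, so the numbers are illustrative only, and all names are ours:

```python
import math
import random

def hist_mi(x, y, bins=5):
    """Crude histogram MI estimate (a stand-in for Algorithm 1)."""
    def binned(v):
        lo, hi = min(v), max(v)
        return [min(int((s - lo) / (hi - lo + 1e-12) * bins), bins - 1)
                for s in v]
    bx, by = binned(x), binned(y)
    n = len(x)
    pxy, px, py = {}, {}, {}
    for a, b in zip(bx, by):
        pxy[a, b] = pxy.get((a, b), 0) + 1
        px[a] = px.get(a, 0) + 1
        py[b] = py.get(b, 0) + 1
    # sum over cells of p(a,b) * log( p(a,b) / (p(a) p(b)) )
    return sum(c / n * math.log(c * n / (px[a] * py[b]))
               for (a, b), c in pxy.items())

def permutation_threshold(x, y, n_perm=30, seed=0):
    """Max MI over n_perm random permutations of y: the significance
    threshold above which a link is admitted."""
    rng = random.Random(seed)
    yp = list(y)
    best = 0.0
    for _ in range(n_perm):
        rng.shuffle(yp)
        best = max(best, hist_mi(x, yp))
    return best
```

A link between two genes is then admitted only if their unpermuted MI exceeds the permutation threshold.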
Figure 9. (a) Synthetic network inferred by the MI-CMI algorithm. (b) Synthetic network inferred by ARACNE. (c) Synthetic network inferred by BANJO.

5. Conclusions

We have proposed a new gene regulatory network inference algorithm that employs both mutual information and conditional mutual information to discover possible direct and interactive regulations between genes, and to eliminate false links due to indirect regulation and coregulation. The mutual information and conditional mutual information are estimated from the expression data using an adaptive partitioning estimator. We have shown that the proposed network inference method outperforms BANJO and ARACNE when the underlying regulatory network contains coregulated or interactively regulated genes. In this work, we have focused on the discovery of the joint regulation of a gene by two other genes. It is possible to extend this work to joint regulation by multiple genes by extending the proposed conditional mutual information estimator to higher order. However, doing so poses computational problems. As the dimension of the CMI increases, an increasing number of samples is needed to maintain the same level of accuracy, and the number of gene sets to be tested also grows, rendering brute-force computation over all possible sets of genes impractical. One way to reduce the required computation is to exploit the constraints that known biochemical reactions between the genes place on the possible connectivities. This is a direction for future research.

Genetics, Microbiology; xw2008; Electrical Engineering; Articles

A simple spiking retina model for exact video stimulus representation
http://academiccommons.columbia.edu/catalog/ac:197789
Lazar, Aurel A.; Pnevmatikakis, Eftychios. http://dx.doi.org/10.7916/D8JM2842. Tue, 09 Sep 2014 16:33:16 +0000.

A computational model for the representation of visual stimuli with a population of spiking neurons is presented. We show that under mild conditions it is possible to faithfully represent an analog video stream as a sequence of spike trains, and we provide an algorithm that recovers the video input using only the spike times of the population. In our model an analog video stream, bandlimited in time, reaches the dendritic trees of a neural population. At each neuron, the multi-dimensional video input is filtered by the neuron's spatiotemporal receptive field, and the one-dimensional output dendritic current enters the soma of the neuron (see Figure 1). The set of spatial receptive fields is modeled as a Gabor filterbank. The spike generation mechanism is threshold based: each time the dendritic current exceeds a threshold, a spike is fired and the membrane potential is reset by a negative potential through a negative feedback loop triggered by the spike. This simple spike mechanism has been shown to accurately model the responses of various neurons in the early visual system [1].

Figure 1. Encoding and decoding mechanisms for video stimuli: the stimulus is filtered by the receptive fields of the neurons and enters the soma. Spike generation is threshold based, and a negative feedback mechanism resets the membrane potential after each spike. In the decoding part, each spike, represented by a delta pulse, is weighted by an appropriate coefficient and then filtered by the same receptive field for stimulus reconstruction. The total sum is passed through a low-pass filter to recover the original input stimulus.
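The threshold-and-reset spike generation described above can be sketched as follows. All parameters, units, and names here are illustrative, not the paper's:

```python
def encode(current, threshold=1.0, reset=1.0, dt=1.0):
    """Threshold-based encoder: integrate the input current; when the
    membrane potential crosses the threshold, record a spike time and
    subtract a fixed reset amount (the negative feedback loop)."""
    v, spikes = 0.0, []
    for k, i_k in enumerate(current):
        v += i_k * dt              # integrate the dendritic current
        if v >= threshold:
            spikes.append(k * dt)  # record spike time
            v -= reset             # feedback resets the potential
    return spikes

# A constant suprathreshold current yields regularly spaced spikes:
spikes = encode([0.25] * 16)  # -> spikes at t = 3.0, 7.0, 11.0, 15.0
```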
We prove and demonstrate that we can recover the whole video stream based only on knowledge of the spike times, provided that the neural population is sufficiently large. Increasing the number of neurons to achieve better representation is consistent with basic neurobiological thought [2]. Although very precise, the responses of visual neurons show some variability between subsequent stimulus repeats, which can be attributed to various noise sources [1]. We examine the effect of noise on our algorithm and show that the reconstruction quality degrades gracefully when white noise is present at the input or in the feedback loop.

Neurosciences, Electrical engineering; aal1; Electrical Engineering; Articles

Reconstruction of Sensory Stimuli Encoded with Integrate-and-Fire Neurons with Random Thresholds
http://academiccommons.columbia.edu/catalog/ac:196856
Lazar, Aurel A.; Pnevmatikakis, Eftychios. http://dx.doi.org/10.7916/D8W094F0. Tue, 09 Sep 2014 16:31:16 +0000.

We present a general approach to the reconstruction of sensory stimuli encoded with leaky integrate-and-fire neurons with random thresholds. The stimuli are modeled as elements of a Reproducing Kernel Hilbert Space. The reconstruction is based on finding a stimulus that minimizes a regularized quadratic optimality criterion. We discuss in detail the reconstruction of sensory stimuli modeled as absolutely continuous functions as well as stimuli with absolutely continuous first-order derivatives. Reconstruction results are presented for stimuli encoded with a single neuron as well as with a population of neurons. Examples are given that demonstrate the performance of the reconstruction algorithms as a function of threshold variability.

Electrical engineering, Neurosciences; aal1; Electrical Engineering; Articles

Encoding auditory scenes with a population of Hodgkin-Huxley neurons
http://academiccommons.columbia.edu/catalog/ac:196095
Lazar, Aurel A.; Turetsky, Robert. http://dx.doi.org/10.7916/D8MP51RP. Tue, 09 Sep 2014 16:27:56 +0000.

The article evaluates auditory models built from a population of Hodgkin-Huxley neurons. It was found that such a model maintains the intelligibility of natural and synthetic sound stimuli. Hodgkin-Huxley neurons were found to encode sounds with approximately the same intelligibility as integrate-and-fire neurons. The results show that Hodgkin-Huxley neurons can be used interchangeably in computational modeling.

Neurosciences, Biology; aal1; Electrical Engineering; Articles

Variable window binding for mutually exclusive alternative splicing
http://academiccommons.columbia.edu/catalog/ac:192647
Anastassiou, Dimitris; Liu, Hairuo; Varadan, Vinay. http://dx.doi.org/10.7916/D8Q23XQZ. Tue, 09 Sep 2014 00:00:00 +0000.

Background: Genes of advanced organisms undergo alternative splicing, which can be mutually exclusive, in the sense that only one exon is included in the mature mRNA out of a cluster of alternative choices, often arranged in a tandem array. In many cases, however, the details of the underlying biological mechanisms are unknown. Results: We describe 'variable window binding' - a mechanism used for mutually exclusive alternative splicing by which a segment ('window') of a conserved nucleotide 'anchor' sequence upstream of the exon 6 cluster in the pre-mRNA of the fruitfly Dscam gene binds to one of the introns, thereby activating selection of the exon directly downstream from the binding site. This mechanism is supported by the fact that the anchor sequence can be inferred solely from a comparison of the intron sequences using a genetic algorithm. Because the window location varies for each exon choice, regulation can be achieved by obstructing part of that sequence. We also describe a related mechanism based on competing pre-mRNA stem-loop structures that could explain the mutually exclusive choice of exon 17 of the Dscam gene. Conclusion: On the basis of comparative sequence analysis, we propose efficient biological mechanisms of alternative splicing of the Drosophila Dscam gene that rely on the inherent structure of the pre-mRNA. Related mechanisms employing 'locus control regions' could be involved in other cases of mutually exclusive choices of exons or genes.

Genetics, Bioinformatics, Messenger RNA, RNA splicing, Locus of control, Exons (Genetics); da8; Electrical Engineering; Articles

Maximum parsimony xor haplotyping by sparse dictionary selection
http://academiccommons.columbia.edu/catalog/ac:194784
Elmas, Abdulkadir; Jajamovich, Guido H.; Wang, Xiaodong. http://dx.doi.org/10.7916/D82N50RZ. Tue, 09 Sep 2014 00:00:00 +0000.

Background: The xor-genotype is a cost-effective alternative to the genotype sequence of an individual. Recent methods developed for haplotype inference have aimed at finding the solution based on xor-genotype data. Given the xor-genotypes of a group of unrelated individuals, it is possible to infer the haplotype pairs for each individual with the aid of a small number of regular genotypes. Results: We propose a framework of maximum-parsimony inference of haplotypes based on the search of a sparse dictionary, and we present a greedy method that can effectively infer the haplotype pairs given a set of xor-genotypes augmented by a small number of regular genotypes. We test the performance of the proposed approach on synthetic data sets with different numbers of individuals and SNPs, and compare its performance with the state-of-the-art xor-haplotyping methods PPXH and XOR-HAPLOGEN. Conclusions: Experimental results show good inference quality for the proposed method under all circumstances, especially on large data sets. Results on a real database, CFTR, also demonstrate significantly better performance. The proposed algorithm is also capable of finding accurate solutions with missing data and/or typing errors.

Bioinformatics, Genetics, Molecular biology, Genetics--Research, Biometry--Research, Bioinformatics--Methodology, Genomics--Data processing; ae2321, ghj2105, xw2008; Electrical Engineering; Articles

Notch and VEGF pathways play distinct but complementary roles in tumor angiogenesis
http://academiccommons.columbia.edu/catalog/ac:194760
Hernandez, Sonia; Banerjee, Debarshi; Garcia, Alejandro; Kangsamaksin, Thaned; Cheng, Wei-Yi; Anastassiou, Dimitris; Funahashi, Yasuhiro; Kadenhe-Chiweshe, Angela; Shawber, Carrie; Kitajewski, Jan K.; Kandel, Jessica; Yamashiro, Darrell J. http://dx.doi.org/10.7916/D8D21W3X. Tue, 09 Sep 2014 00:00:00 +0000.

Background: Anti-angiogenesis is a validated strategy to treat cancer, with efficacy in controlling both primary tumor growth and metastasis. The role of the Notch family of proteins in tumor angiogenesis is still emerging, but recent data suggest that Notch signaling may function in the physiologic response to loss of VEGF signaling, and thus participate in tumor adaptation to VEGF inhibitors. Methods: We asked whether combining Notch and VEGF blockade would enhance suppression of tumor angiogenesis and growth, using the NGP neuroblastoma model. NGP tumors were engineered to express a Notch1 decoy construct, which restricts Notch signaling, and then treated with either the anti-VEGF antibody bevacizumab or vehicle. Results: Combining Notch and VEGF blockade led to blood vessel regression, increasing endothelial cell apoptosis and disrupting pericyte coverage of endothelial cells. Combined Notch and VEGF blockade did not affect tumor weight, but did additively reduce tumor viability. Conclusions: Our results indicate that Notch and VEGF pathways play distinct but complementary roles in tumor angiogenesis, and show that concurrent blockade disrupts primary tumor vasculature and viability further than inhibition of either pathway alone.

Molecular biology, Oncology, Medicine, Vascular endothelial growth factor, Neuroblastoma, Neovascularization, Bevacizumab; db2789, tk2012, wc2302, da8, avk2102, cjs2002, jkk9, dy39; Obstetrics and Gynecology, Pediatrics, Electrical Engineering, Surgery; Articles

Multiple-Symbol Decision-Feedback Space-Time Differential Decoding in Fading Channels
http://academiccommons.columbia.edu/catalog/ac:194767
Liu, Yan; Wang, Xiaodong. http://dx.doi.org/10.7916/D84M931K. Tue, 09 Sep 2014 00:00:00 +0000.

Space-time differential coding (STDC) is an effective technique for exploiting transmitter diversity that does not require channel state information at the receiver. However, like conventional differential modulation schemes, it exhibits an error floor in fading channels. In this paper, we develop an STDC decoding technique based on multiple-symbol detection and decision feedback, which makes use of the second-order statistics of the fading processes and has very low computational complexity. This decoding method can significantly lower the error floor of the conventional STDC decoding algorithm, especially in fast fading channels. The application of the proposed multiple-symbol decision-feedback STDC decoding technique in orthogonal frequency-division multiplexing (OFDM) systems is also discussed.

Electrical engineering, Technical communication, Computer science, Orthogonal frequency division multiplexing, Space time codes, Wireless communication systems--Technological innovations; xw2008; Electrical Engineering; Articles

Factor-Graph-Based Soft Self-Iterative Equalizer for Multipath Channels
http://academiccommons.columbia.edu/catalog/ac:194060
Lu, Ben; Yue, Guosen; Wang, Xiaodong; Madihian, Mohammad. http://dx.doi.org/10.7916/D87P8WWJ. Tue, 09 Sep 2014 00:00:00 +0000.

We consider factor-graph-based soft self-iterative equalization in wireless multipath channels. Since factor graphs are able to characterize multipath channels at the per-path level, the corresponding soft self-iterative equalizer has reduced computational complexity in sparse multipath channels. The performance of the considered self-iterative equalizer is analyzed in both single-antenna and multiple-antenna multipath channels. When the factor graphs of multipath channels have no cycles or only mild cycle conditions, the considered self-iterative equalizer can converge to optimum performance after a few iterations; however, it may suffer from local convergence in channels with severe cycle conditions.

Electrical engineering, Technical communication, Equalizers (Electronics), Radio--Transmitters and transmission--Fading, MIMO systems; xw2008; Electrical Engineering; Articles

Gene regulatory network inference by point-based Gaussian approximation filters incorporating the prior information
http://academiccommons.columbia.edu/catalog/ac:194860
Jia, Bin; Wang, Xiaodong. http://dx.doi.org/10.7916/D8833QGN. Tue, 09 Sep 2014 00:00:00 +0000.

The extended Kalman filter (EKF) has been applied to inferring gene regulatory networks. However, it is well known that the EKF becomes less accurate when the system exhibits high nonlinearity. In addition, certain prior information about the gene regulatory network exists in practice, and no systematic approach has been developed to incorporate such prior information into the Kalman-type filter for inferring the structure of the gene regulatory network. In this paper, an inference framework based on point-based Gaussian approximation filters that can exploit the prior information is developed to solve the gene regulatory network inference problem. Different point-based Gaussian approximation filters, including the unscented Kalman filter (UKF), the third-degree cubature Kalman filter (CKF3), and the fifth-degree cubature Kalman filter (CKF5) are employed. Several types of network prior information, including the existing network structure information, sparsity assumption, and the range constraint of parameters, are considered, and the corresponding filters incorporating the prior information are developed. Experiments on a synthetic network of eight genes and the yeast protein synthesis network of five genes are carried out to demonstrate the performance of the proposed framework. The results show that the proposed methods provide more accurate inference results than existing methods, such as the EKF and the traditional UKF.

Bioinformatics, Biostatistics, Mathematics, Gene regulatory network, Kalman filtering, Gaussian distribution, Iterative methods (Mathematics); xw2008; Electrical Engineering; Articles

Maximum-parsimony haplotype frequencies inference based on a joint constrained sparse representation of pooled DNA
http://academiccommons.columbia.edu/catalog/ac:194706
Jajamovich, Guido H.; Iliadis, Alexandros; Anastassiou, Dimitris; Wang, Xiaodong. http://dx.doi.org/10.7916/D8JS9NWV. Tue, 09 Sep 2014 00:00:00 +0000.

Background: DNA pooling constitutes a cost-effective alternative in genome-wide association studies. In DNA pooling, equimolar amounts of DNA from different individuals are mixed into one sample, and the frequency of each allele in each position is observed in a single genotype experiment. The identification of haplotype frequencies from pooled data, in addition to single locus analysis, is of separate interest within these studies, as haplotypes could increase statistical power and provide additional insight. Results: We developed a method for maximum-parsimony haplotype frequency estimation from pooled DNA data based on the sparse representation of the DNA pools in a dictionary of haplotypes. Extensions to scenarios where data is noisy or even missing are also presented. The resulting method is first applied to simulated data based on the haplotypes and their associated frequencies of the AGT gene. We further evaluate our methodology on datasets consisting of SNPs from the first 7Mb of the HapMap CEU population. Noise and missing data were further introduced in the datasets in order to test the extensions of the proposed method. Both HIPPO and HAPLOPOOL were also applied to these datasets to compare performances. Conclusions: We evaluate our methodology on scenarios where pooling is more efficient relative to individual genotyping; that is, in datasets that contain pools with a small number of individuals. We show that in such scenarios our methodology outperforms state-of-the-art methods such as HIPPO and HAPLOPOOL.

Bioinformatics, Genetics, Biology, Genetics--Research, Biometry, DNA--Data processing; ghj2105, ai2159, da8, xw2008; Electrical Engineering; Articles

Adaptive Transmitter Precoding for Time Division Duplex CDMA in Fading Multipath Channels: Strategy and Analysis
http://academiccommons.columbia.edu/catalog/ac:190598
Reynolds, Daryl; Høst-Madsen, Anders; Wang, Xiaodong. http://dx.doi.org/10.7916/D85M644H. Tue, 09 Sep 2014 00:00:00 +0000.

The recently developed blind adaptive techniques for multiuser detection in code division multiple access (CDMA) systems offer an attractive compromise between performance and complexity. However, the desire to further reduce complexity at the mobile unit has led to the investigation of techniques that move signal processing from the mobile unit to the base station. In this paper, we investigate transmitter precoding for downlink time division duplex (TDD) CDMA communications. In particular, we develop a linear minimum mean square error precoding strategy using blind channel estimation for fading multipath channels that allows for simple matched filtering at the mobile unit and is easy to make adaptive. We also present a performance analysis using tools developed for the analysis of conventional (receiver-based) linear blind multiuser detection in unknown channels. We compare the analytical and simulation results to traditional receiver-based blind multiuser detection. It is seen that transmitter precoding offers a reasonable alternative for TDD-mode CDMA when minimizing computational complexity at the mobile unit is a priority.

Electrical engineering, Multiuser detection (Telecommunication), Code division multiple access, Computational complexity, Wireless communication systems--Design and construction; xw2008; Electrical Engineering; Articles

Human cancer cells express Slug-based epithelial-mesenchymal transition gene expression signature obtained in vivo
http://academiccommons.columbia.edu/catalog/ac:195250
Anastassiou, Dimitris; Rumjantseva, Viktoria; Cheng, Weiyi; Huang, Jianzhong; Canoll, Peter D.; Yamashiro, Darrell; Kandel, Jessica J.http://dx.doi.org/10.7916/D8X928SWTue, 09 Sep 2014 00:00:00 +0000Background: The biological mechanisms underlying cancer cell motility and invasiveness remain unclear, although it has been hypothesized that they involve some type of epithelial-mesenchymal transition (EMT). Methods: We used xenograft models of human cancer cells in immunocompromised mice, profiling the harvested tumors separately with species-specific probes and computationally analyzing the results. Results: Here we show that human cancer cells express in vivo a precise multi-cancer invasion-associated gene expression signature that prominently includes many EMT markers, among them the transcription factor Slug, fibronectin, and α-SMA. We found that human, but not mouse, cells express the signature and Slug is the only upregulated EMT-inducing transcription factor. The signature is also present in samples from many publicly available cancer gene expression datasets, suggesting that it is produced by the cancer cells themselves in multiple cancer types, including nonepithelial cancers such as neuroblastoma. Furthermore, we found that the presence of the signature in human xenografted cells was associated with a downregulation of adipocyte markers in the mouse tissue adjacent to the invasive tumor, suggesting that the signature is triggered by contextual microenvironmental interactions when the cancer cells encounter adipocytes, as previously reported. 
Conclusions: The known, precise and consistent gene composition of this cancer mesenchymal transition signature, particularly when combined with simultaneous analysis of the adjacent microenvironment, provides unique opportunities for shedding light on the underlying mechanisms of cancer invasiveness as well as identifying potential diagnostic markers and targets for metastasis-inhibiting therapeutics.Biomechanics, Oncology, Molecular biology, Cancer cells--Motility, Epithelial cells, Mesenchymal stem cells, Cancer invasivenessda8, pc561, dy39, jjk47Pediatrics, Electrical Engineering, Surgery, Pathology and Cell BiologyArticlesA haplotype inference algorithm for trios based on deterministic sampling
http://academiccommons.columbia.edu/catalog/ac:183683
Iliadis, Alexandros; Watkinson, John; Anastassiou, Dimitris; Wang, Xiaodonghttp://dx.doi.org/10.7916/D81N7ZHVMon, 08 Sep 2014 00:00:00 +0000In genome-wide association studies, thousands of individuals are genotyped in hundreds of thousands of single nucleotide polymorphisms (SNPs). Statistical power can be increased when haplotypes, rather than three-valued genotypes, are used in analysis, so the problem of haplotype phase inference (phasing) is particularly relevant. Several phasing algorithms have been developed for data from unrelated individuals, based on different models, some of which have been extended to father-mother-child "trio" data. We introduce a technique for phasing trio datasets using a tree-based deterministic sampling scheme. We have compared our method with the publicly available algorithms PHASE v2.1, BEAGLE v3.0.2 and 2SNP v1.7 on datasets of varying numbers of markers and trios. We have found that the computational complexity of PHASE makes it prohibitive for routine use; on the other hand 2SNP, though the fastest method for small datasets, was significantly inaccurate. We have shown that our method outperforms BEAGLE in terms of speed and accuracy for small to intermediate numbers of trios across all marker sizes examined. Our method is implemented in the "Tree-Based Deterministic Sampling" (TDS) package, available for download at http://www.ee.columbia.edu/~anastas/tds. Using a tree-based deterministic sampling technique, we present an intuitive and conceptually simple phasing algorithm for trio data. The trade-off between speed and accuracy achieved by our algorithm makes it a strong candidate for routine use on trio datasets.Genetics, Bioinformaticsda8, xw2008Electrical EngineeringArticlesMultilevel Mixture Kalman Filter
http://academiccommons.columbia.edu/catalog/ac:183076
Guo, Dong; Wang, Xiaodong; Chen, Ronghttp://dx.doi.org/10.7916/D8959FXFMon, 08 Sep 2014 00:00:00 +0000The mixture Kalman filter is a general sequential Monte Carlo technique for conditional linear dynamic systems. It generates samples of some indicator variables recursively based on sequential importance sampling (SIS) and integrates out the linear and Gaussian state variables conditioned on these indicators. Due to the marginalization process, the complexity of the mixture Kalman filter is quite high if the dimension of the indicator sampling space is high. In this paper, we address this difficulty by developing a new Monte Carlo sampling scheme, namely, the multilevel mixture Kalman filter. The basic idea is to make use of the multilevel or hierarchical structure of the space from which the indicator variables take values. That is, we draw samples in a multilevel fashion, beginning with sampling from the highest-level sampling space and then drawing samples from the associated subspace of the newly drawn samples in a lower-level sampling space, until reaching the desired sampling space. Such a multilevel sampling scheme can be used in conjunction with delayed estimation methods, such as the delayed-sample method, resulting in the delayed multilevel mixture Kalman filter. Examples in wireless communication, specifically coherent and noncoherent 16-QAM over flat-fading channels, are provided to demonstrate the performance of the proposed multilevel mixture Kalman filter.Electrical engineeringxw2008Electrical EngineeringArticlesAdaptive antenna selection and Tx/Rx beamforming for large-scale MIMO systems in 60 GHz channels
http://academiccommons.columbia.edu/catalog/ac:183119
Dong, Ke; Prasad, Narayan; Wang, Xiaodong; Zhu, Shihuahttp://dx.doi.org/10.7916/D83J3BB7Mon, 08 Sep 2014 00:00:00 +0000We consider a large-scale MIMO system operating in the 60 GHz band employing beamforming for high-speed data transmission. We assume that the number of RF chains is smaller than the number of antennas, which motivates the use of antenna selection to exploit the beamforming gain afforded by the large-scale antenna array. However, at the receiver only a linear combination of the receive antenna outputs is available; this constraint, together with the large dimension of the MIMO system, makes it challenging to devise an efficient antenna selection algorithm. By exploiting the strong line-of-sight property of the 60 GHz channels, we propose an iterative antenna selection algorithm based on discrete stochastic approximation that can quickly lock onto a near-optimal antenna subset. Moreover, given a selected antenna subset, we propose an adaptive transmit and receive beamforming algorithm based on the stochastic gradient method that makes use of a low-rate feedback channel to inform the transmitter about the selected beams. Simulation results show that both the proposed antenna selection and the adaptive beamforming techniques exhibit fast convergence and near-optimal performance.Electrical engineeringxw2008Electrical EngineeringArticlesMulti-cancer computational analysis reveals invasion-associated variant of desmoplastic reaction involving INHBA, THBS2 and COL11A1
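Iterative antenna-subset search of the kind described in the abstract above can be illustrated with a toy model. The rank-1 line-of-sight channel, the capacity metric, and the simple accept-if-better swap rule below are illustrative simplifications (a loose stand-in for discrete stochastic approximation, not the authors' algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)
N_rx, N_sel, N_tx = 16, 4, 2

# Toy 60 GHz-style channel: dominant line-of-sight rank-1 term plus
# weak scattering (all parameter values here are illustrative).
a = np.exp(1j * np.pi * 0.3 * np.arange(N_rx))   # receive steering vector
b = np.exp(1j * np.pi * 0.1 * np.arange(N_tx))   # transmit steering vector
H = np.outer(a, b) + 0.1 * (rng.standard_normal((N_rx, N_tx))
                            + 1j * rng.standard_normal((N_rx, N_tx)))

def capacity(subset, snr=10.0):
    """MIMO capacity of the sub-channel formed by the chosen rx antennas."""
    s = np.linalg.svd(H[sorted(subset), :], compute_uv=False)
    return float(np.sum(np.log2(1.0 + snr * s**2 / N_tx)))

# Start from a random subset; propose single-antenna swaps and keep
# any swap that improves the metric.
subset = set(rng.choice(N_rx, N_sel, replace=False).tolist())
cap0 = capacity(subset)
for _ in range(200):
    drop = int(rng.choice(sorted(subset)))
    add = int(rng.choice(sorted(set(range(N_rx)) - subset)))
    cand = (subset - {drop}) | {add}
    if capacity(cand) > capacity(subset):
        subset = cand
```

Because only improving swaps are accepted, the selected subset's capacity is monotonically non-decreasing; the near-rank-1 structure of the channel is what lets such a local search lock on quickly.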
http://academiccommons.columbia.edu/catalog/ac:183614
Kim, Hoon; Watkinson, John; Varadan, Vinay; Anastassiou, Dimitrishttp://dx.doi.org/10.7916/D8TB158ZMon, 08 Sep 2014 00:00:00 +0000Despite extensive research, the details of the biological mechanisms by which cancer cells acquire motility and invasiveness are largely unknown. This study identifies an invasion-associated gene signature shedding light on these mechanisms. We analyze data from multiple cancers using a novel computational method identifying sets of genes whose coordinated overexpression indicates the presence of a particular phenotype, in this case high-stage cancer. We conclude that there is one shared "core" metastasis-associated gene expression signature corresponding to a specific variant of stromal desmoplastic reaction, present in a large subset of samples that have exceeded a threshold of invasive transition specific to each cancer, indicating that the corresponding biological mechanism is triggered at that point. For example, this threshold is reached at stage IIIc in ovarian cancer and at stage II in colorectal cancer. Therefore, its presence indicates that the corresponding stage has been reached. It has several features, such as coordinated overexpression of particular collagens, mainly COL11A1, and of other genes, mainly THBS2 and INHBA. The composition of the overexpressed genes indicates invasion-facilitating altered proteolysis in the extracellular matrix. The prominent presence in the signature of INHBA in all cancers strongly suggests a biological mechanism centered on activin A-induced TGF-β signaling, because activin A is a member of the TGF-β superfamily consisting of an INHBA homodimer. Furthermore, we establish that the signature is predictive of neoadjuvant therapy response in at least one breast cancer data set. 
Therefore, these results can be used for developing high specificity biomarkers sensing cancer invasion and predicting response to neoadjuvant therapy, as well as potential multi-cancer metastasis inhibiting therapeutics targeting the corresponding biological mechanism.Genetics, Cellular biologyda8Electrical EngineeringArticlesSpectrogram Analysis of Genomes
http://academiccommons.columbia.edu/catalog/ac:185023
Sussillo, David; Kundaje, Anshul; Anastassiou, Dimitrishttp://dx.doi.org/10.7916/D81834WJMon, 08 Sep 2014 00:00:00 +0000We performed frequency-domain analysis of the genomes of various organisms using tricolor spectrograms, identifying several types of distinct visual patterns characterizing specific DNA regions. We relate the patterns and their frequency characteristics to the sequence characteristics of the DNA. At times, the spectrogram patterns could be related to the structure of the corresponding protein region by using various public databases such as GenBank. Some patterns are explained by the biological nature of the corresponding regions, which relate to chromosome structure and protein coding, and some patterns have as yet unknown biological significance. We found biologically meaningful patterns on scales ranging from millions of base pairs down to a few hundred base pairs. Chromosome-wide patterns include periodicities ranging from 2 to 300. The color of the spectrogram depends on the nucleotide content at specific frequencies, and therefore can be used as a local indicator of CG content and other measures of relative base content. Several smaller-scale patterns were found to represent different types of domains made up of various tandem repeats.Geneticsda8Electrical EngineeringArticlesRobust discovery of periodically expressed genes using the Laplace periodogram
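A minimal version of such a DNA spectrogram maps each base to a binary indicator sequence and takes windowed Fourier magnitudes, assigning one base per color channel. The synthetic sequence, window sizes, and the choice of A/C/G as the three channels below are illustrative assumptions (the paper's exact color mapping may differ):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sequence: random background with an embedded exact
# period-3 repeat, the hallmark of protein-coding regions.
seq = rng.choice(list("ACGT"), 3000).tolist()
seq[1000:2000] = list("ATG" * 334)[:1000]
seq = "".join(seq)

def spectrogram_rgb(seq, win=240, step=60):
    """Tricolor spectrogram: R/G/B channels from the A, C, G indicator
    sequences (the T indicator is linearly dependent on the other three)."""
    frames = []
    for start in range(0, len(seq) - win + 1, step):
        chunk = seq[start:start + win]
        chans = []
        for base in "ACG":
            x = np.fromiter((c == base for c in chunk), float)
            chans.append(np.abs(np.fft.rfft(x - x.mean())) ** 2)
        frames.append(np.stack(chans, axis=-1))
    return np.stack(frames)              # shape: (time, frequency, 3)

S = spectrogram_rgb(seq)
k3 = 240 // 3                            # frequency bin of the 1/3 periodicity
coding = S[17:30, k3, :].mean()          # frames fully inside the repeat
background = S[0:13, k3, :].mean()       # frames fully inside random DNA
```

Plotting `S` (time on one axis, frequency on the other, the three channels as RGB) makes the coding region stand out as a bright band at the period-3 frequency, which is the kind of visual pattern the abstract describes.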
http://academiccommons.columbia.edu/catalog/ac:185105
Liang, Kuo-ching; Wang, Xiaodong; Li, Ta-Hsinhttp://dx.doi.org/10.7916/D8CF9NHXMon, 08 Sep 2014 00:00:00 +0000Time-course gene expression analysis has become increasingly important due to the growing availability of experimental data. The detection of genes that are periodically expressed is an important step which allows us to study the regulatory mechanisms associated with the cell cycle. In this work, we present the Laplace periodogram, which employs the least absolute deviation criterion to provide a more robust detection of periodic gene expression in the presence of outliers. The Laplace periodogram is shown to perform comparably to existing methods on the Saccharomyces cerevisiae and Arabidopsis time-course datasets, and to outperform existing methods when outliers are present. Time-course gene expression data are often noisy due to the limitations of current technology, and may include outliers. These artifacts corrupt the available data and make the detection of periodicity difficult in many cases. The Laplace periodogram is shown to perform well for data both with and without outliers, and also for data that are non-uniformly sampled.Geneticsxw2008, tl2136Electrical EngineeringArticlesGene Prediction Using Multinomial Probit Regression with Bayesian Gene Selection
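The construction behind the Laplace periodogram — a least-absolute-deviation (LAD) regression of the series on a cosine and sine at each candidate frequency, scored as L(ω) ∝ (n/4)(â² + b̂²) — can be sketched as follows. The IRLS solver and the simulated outlier pattern are illustrative assumptions (an exact LAD solve would use linear programming):

```python
import numpy as np

def lad_fit(X, y, iters=30, eps=1e-8):
    """Least-absolute-deviation regression via iteratively reweighted
    least squares (a simple stand-in for an exact LP-based LAD solver)."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(iters):
        r = y - X @ beta
        w = 1.0 / np.maximum(np.abs(r), eps)
        beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    return beta

def laplace_periodogram(x, freqs):
    n, t = len(x), np.arange(len(x))
    out = []
    for f in freqs:
        X = np.column_stack([np.cos(2 * np.pi * f * t),
                             np.sin(2 * np.pi * f * t)])
        a, b = lad_fit(X, x)
        out.append(n / 4.0 * (a * a + b * b))
    return np.array(out)

rng = np.random.default_rng(2)
n, f0 = 200, 0.1
t = np.arange(n)
x = np.sin(2 * np.pi * f0 * t) + 0.3 * rng.standard_normal(n)
x[::25] += 15.0                       # heavy periodic outliers

freqs = np.linspace(0.02, 0.45, 44)   # grid contains f0 = 0.1
L = laplace_periodogram(x, freqs)
f_peak = freqs[np.argmax(L)]
```

The outliers here are themselves periodic (every 25 samples), which corrupts an ordinary least-squares periodogram with spurious harmonics; the absolute-deviation criterion fits the majority of the data and still peaks at the true frequency.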
http://academiccommons.columbia.edu/catalog/ac:185076
Zhou, Xiaobo; Wang, Xiaodong; Dougherty, Edward R.http://dx.doi.org/10.7916/D8K35S2NMon, 08 Sep 2014 00:00:00 +0000A critical issue for the construction of genetic regulatory networks is the identification of network topology from data. In the context of deterministic and probabilistic Boolean networks, as well as their extension to multilevel quantization, this issue is related to the more general problem of expression prediction in which we want to find small subsets of genes to be used as predictors of target genes. Given some maximum number of predictors to be used, a full search of all possible predictor sets is combinatorially prohibitive except for small predictor sets, and even then may require supercomputing. Hence, suboptimal approaches to finding predictor sets and network topologies are desirable. This paper considers Bayesian variable selection for prediction using a multinomial probit regression model with data augmentation to turn the multinomial problem into a sequence of smoothing problems. There are multiple regression equations and we want to select the same strongest genes for all regression equations to constitute a target predictor set or, in the context of a genetic network, the dependency set for the target. The probit regressor is approximated as a linear combination of the genes and a Gibbs sampler is employed to find the strongest genes. Numerical techniques to speed up the computation are discussed. After finding the strongest genes, we predict the target gene based on the strongest genes, with the coefficient of determination being used to measure predictor accuracy. 
Using malignant melanoma microarray data, we compare two predictor models, the estimated probit regressors themselves and the optimal full-logic predictor based on the selected strongest genes, and we compare these to optimal prediction without feature selection.Genetics, Bioinformaticsxw2008Electrical EngineeringArticlesAdvanced Signal Processing for Cognitive Radio Networks
http://academiccommons.columbia.edu/catalog/ac:184490
Liang, Ying-Chang; Wang, Xiaodong; Zeng, Yonghong; Choi, Jinho; Zhang, Rui; Luise, Marcohttp://dx.doi.org/10.7916/D8Z31X1RMon, 08 Sep 2014 00:00:00 +0000Cognitive radio is the enabling technology for dynamic spectrum access, which may potentially mitigate the radio spectrum scarcity problem encountered in many countries. Thus, cognitive radio is widely regarded as one of the most promising technologies for future wireless communications. For instance, the Federal Communications Commission (FCC) of the US approved in November 2008 the use of mobile devices in unused TV bands (TV white spaces), and many governments worldwide have also moved to support this new spectrum sharing model. This has recently been accompanied by a significant upsurge in academic research and application initiatives, especially in unlocking the potential of the "white space" of TV bands for various broadband wireless services.Electrical engineeringxw2008Electrical EngineeringArticlesPerformance Comparisons of MIMO Techniques with Application to WCDMA Systems
http://academiccommons.columbia.edu/catalog/ac:184444
Li, Chuxiang; Wang, Xiaodonghttp://dx.doi.org/10.7916/D8F769Z1Mon, 08 Sep 2014 00:00:00 +0000Multiple-input multiple-output (MIMO) communication techniques have received great attention and gained significant development in recent years. In this paper, we analyze and compare the performances of different MIMO techniques. In particular, we compare the performance of three MIMO methods, namely BLAST, STBC, and linear precoding/decoding. We provide both an analytical performance analysis in terms of average receiver performance and simulation results in terms of the BER. Moreover, the applications of MIMO techniques in WCDMA systems are also considered in this study. Specifically, a subspace tracking algorithm and a quantized feedback scheme are introduced into the system to simplify implementation of the beamforming scheme. It is seen that the BLAST scheme can achieve the best performance in the high data rate transmission scenario; the beamforming scheme has better performance than the STBC strategies in the diversity transmission scenario; and the beamforming scheme can be effectively realized in WCDMA systems employing the subspace tracking and the quantized feedback approach.Electrical engineering, Computer sciencexw2008Electrical EngineeringArticlesFast and accurate haplotype frequency estimation for large haplotype vectors from pooled DNA data
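One of the compared schemes, STBC, is easy to illustrate with the classic two-antenna Alamouti code: two symbols are sent over two symbol periods so that simple linear combining at the receiver recovers them with full diversity. This sketch (noiseless, flat fading, an assumed 2×1 configuration) is illustrative and is not the paper's simulation setup:

```python
import numpy as np

rng = np.random.default_rng(5)

def alamouti_tx(s):
    """Encode a symbol pair over two antennas and two symbol times."""
    s1, s2 = s
    return np.array([[s1, s2],                        # time 1: antennas (1, 2)
                     [-np.conj(s2), np.conj(s1)]])    # time 2

def alamouti_rx(r, h):
    """Linear combining that decouples the two symbols."""
    h1, h2 = h
    r1, r2 = r
    s1 = np.conj(h1) * r1 + h2 * np.conj(r2)
    s2 = np.conj(h2) * r1 - h1 * np.conj(r2)
    return np.array([s1, s2]) / (abs(h1) ** 2 + abs(h2) ** 2)

# QPSK pair through a random flat-fading channel, noiseless for clarity.
s = np.array([1 + 1j, -1 + 1j]) / np.sqrt(2)
h = (rng.standard_normal(2) + 1j * rng.standard_normal(2)) / np.sqrt(2)
X = alamouti_tx(s)
r = X @ h                 # r[t] = sum over antennas of X[t, a] * h[a]
s_hat = alamouti_rx(r, h)
```

Expanding the combiner shows each estimate equals (|h1|² + |h2|²) times the sent symbol, which is exactly the diversity gain that makes STBC attractive in the diversity transmission scenario discussed above.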
http://academiccommons.columbia.edu/catalog/ac:180467
Iliadis, Alexandros; Anastassiou, Dimitris; Wang, Xiaodonghttp://dx.doi.org/10.7916/D84Q7SB5Mon, 08 Sep 2014 00:00:00 +0000Background: Typically, the first phase of a genome-wide association study (GWAS) includes genotyping across hundreds of individuals and validation of the most significant SNPs. Allelotyping of pooled genomic DNA is a common approach to reduce the overall cost of the study. Knowledge of haplotype structure can provide additional information to single locus analyses. Several methods have been proposed for estimating haplotype frequencies in a population from pooled DNA data. Results: We introduce a technique for haplotype frequency estimation in a population from pooled DNA samples, focusing on datasets containing a small number of individuals per pool (2 or 3 individuals) and a large number of markers. We compare our method with the publicly available state-of-the-art algorithms HIPPO and HAPLOPOOL on datasets of varying numbers of pools and marker sizes. We demonstrate that our algorithm provides improvements in accuracy and computational time over competing methods for a large number of markers while demonstrating comparable performance for smaller marker sizes. Our method is implemented in the "Tree-Based Deterministic Sampling Pool" (TDSPool) package, which is available for download at http://www.ee.columbia.edu/~anastas/tdspool. Conclusions: Using a tree-based deterministic sampling technique, we present an algorithm for haplotype frequency estimation from pooled data. Our method demonstrates superior performance in datasets with a large number of markers and could be the method of choice for haplotype frequency estimation in such datasets.Genetics, Bioinformaticsai2159, da8, xw2008Electrical EngineeringArticlesHierarchical Dirichlet process model for gene expression clustering
http://academiccommons.columbia.edu/catalog/ac:180107
Wang, Liming; Wang, Xiaodonghttp://dx.doi.org/10.7916/D8QZ2899Mon, 08 Sep 2014 00:00:00 +0000Clustering is an important data processing tool for interpreting microarray data and genomic network inference. In this article, we propose a clustering algorithm based on the hierarchical Dirichlet process (HDP). The HDP clustering introduces a hierarchical structure into the statistical model which captures the hierarchical features prevalent in biological data such as gene expression data. We develop a Gibbs sampling algorithm based on the Chinese restaurant metaphor for the HDP clustering. We apply the proposed HDP algorithm to both regulatory network segmentation and gene expression clustering. The HDP algorithm is shown to outperform several popular clustering algorithms by revealing the underlying hierarchical structure of the data. For the yeast cell cycle data, we compare the HDP result to the standard result and show that the HDP algorithm provides more information and reduces the unnecessary clustering fragments.Geneticsxw2008Electrical EngineeringArticlesSequential Statistical Signal Processing with Applications to Distributed Systems
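The Chinese restaurant metaphor can be illustrated with a flat (non-hierarchical) Dirichlet process mixture: in each Gibbs step a point joins an existing cluster with probability proportional to the cluster size times its posterior predictive likelihood, or opens a new cluster with probability proportional to the concentration α. The 1-D Gaussian model and all parameter values below are illustrative and far simpler than the paper's HDP:

```python
import numpy as np

rng = np.random.default_rng(3)
# Two well-separated 1-D "expression" blobs.
x = np.concatenate([rng.normal(-5, 0.5, 50), rng.normal(5, 0.5, 50)])
n = len(x)
alpha, sigma, tau = 1.0, 0.5, 5.0  # concentration, noise scale, mean-prior scale

def pred_loglik(xi, members):
    """Posterior predictive log-density of xi for a cluster with the given
    members, under a N(0, tau^2) prior on the cluster mean and N(mean,
    sigma^2) observations (empty members = a brand-new cluster)."""
    m = len(members)
    prec = 1.0 / tau**2 + m / sigma**2
    mu = (np.sum(members) / sigma**2) / prec
    var = 1.0 / prec + sigma**2
    return -0.5 * np.log(2 * np.pi * var) - 0.5 * (xi - mu) ** 2 / var

z = np.zeros(n, dtype=int)                 # start with one big table
for sweep in range(20):                    # collapsed Gibbs sweeps
    for i in range(n):
        others = np.arange(n) != i
        labels = np.unique(z[others])
        logp = [np.log(np.sum(others & (z == k)))
                + pred_loglik(x[i], x[others & (z == k)]) for k in labels]
        logp.append(np.log(alpha) + pred_loglik(x[i], x[:0]))  # new table
        logp = np.array(logp)
        p = np.exp(logp - logp.max())
        p /= p.sum()
        pick = rng.choice(len(p), p=p)
        z[i] = labels[pick] if pick < len(labels) else z.max() + 1
```

The HDP version adds a second, shared level of restaurants so that clusters (dishes) can be reused across groups of data, which is what lets it capture the hierarchical structure the abstract describes.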
http://academiccommons.columbia.edu/catalog/ac:177169
Yilmaz, Yasinhttp://dx.doi.org/10.7916/D8GX48W5Fri, 29 Aug 2014 00:00:00 +0000Detection and estimation, two classical statistical signal processing problems with well-established theories, are traditionally studied under the fixed-sample-size and centralized setups, e.g., Neyman-Pearson target detection and Bayesian parameter estimation. Recently, they appear in more challenging setups with stringent constraints on critical resources, e.g., time, energy, and bandwidth, in emerging technologies such as wireless sensor networks, cognitive radio, smart grid, cyber-physical systems (CPS), the internet of things (IoT), and networked control systems. These emerging systems have applications in a wide range of areas, such as communications, energy, the military, transportation, health care, and infrastructure. Sequential (i.e., online) methods are much better suited than conventional fixed-sample-size (i.e., offline) methods to the ever-increasing demands on time efficiency and latency. Furthermore, as a result of decreasing device sizes and the tendency to connect more and more devices, there are stringent energy and bandwidth constraints on devices (i.e., nodes) in a distributed system (i.e., network), requiring decentralized operation with low transmission rates. Hence, for statistical inference (e.g., detection and/or estimation) problems in distributed systems, today's challenge is achieving high performance (e.g., time efficiency) while satisfying resource (e.g., energy and bandwidth) constraints. In this thesis, we address this challenge by (i) first finding optimum (centralized) sequential schemes for detection, estimation, and joint detection and estimation if not available in the literature, and (ii) then developing their asymptotically optimal decentralized versions through an adaptive non-uniform sampling technique called level-triggered sampling. 
We propose and rigorously analyze decentralized detection, estimation, and joint detection and estimation schemes based on level-triggered sampling, resulting in a systematic theory of event-based statistical signal processing. We also show both analytically and numerically that the proposed schemes significantly outperform their counterparts based on conventional uniform sampling in terms of time efficiency. Moreover, they are compatible with the existing hardware as they work with discrete-time observations produced by conventional A/D converters. We apply the developed schemes to several problems, namely spectrum sensing and dynamic spectrum access in cognitive radio, state estimation and outage detection in smart grid, and target detection in multi-input multi-output (MIMO) wireless sensor networks.Electrical engineeringyy2336Electrical EngineeringDissertationsNon-linear Dynamics in ETG Mode Saturation and Beam-Plasma Instabilities
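Level-triggered sampling can be sketched for a single-sensor detection problem: the node accumulates a local log-likelihood ratio (LLR) and transmits one bit whenever the accumulation crosses a level ±Δ; the fusion center rebuilds a quantized statistic from those bits and applies SPRT-style thresholds. The Gaussian model, Δ, and the thresholds below are illustrative assumptions, not the thesis's exact schemes:

```python
import numpy as np

rng = np.random.default_rng(4)
delta = 1.0                    # local level size (Δ)
A = 20.0                       # fusion-center decision threshold (symmetric)

# H0: x ~ N(0, 1);  H1: x ~ N(mu, 1).  Per-sample LLR: x*mu - mu^2/2.
mu = 0.5
x = rng.normal(mu, 1.0, 5000)  # observations generated under H1
llr = x * mu - mu**2 / 2

# Sensor: transmit a one-bit message at each ±Δ crossing of the running LLR.
messages = []                  # (sample index, sign) pairs
acc = 0.0
for t, inc in enumerate(llr):
    acc += inc
    while abs(acc) >= delta:
        s = 1 if acc > 0 else -1
        messages.append((t, s))
        acc -= s * delta

# Fusion center: rebuild a quantized LLR from the bits and run an SPRT.
fused = np.cumsum([s * delta for _, s in messages])
decision = None
for v in fused:
    if v >= A:
        decision = 1
        break
    if v <= -A:
        decision = 0
        break
```

The event-based flavor is visible here: bits are sent only when the statistic has actually moved by Δ, so informative samples trigger communication while uninformative stretches stay silent, in contrast to uniform-in-time sampling.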
http://academiccommons.columbia.edu/catalog/ac:177448
Tokluoglu, Erinc K.http://dx.doi.org/10.7916/D8MG7MQZTue, 19 Aug 2014 00:00:00 +0000Non-linear mechanisms arise frequently in plasmas and beam-plasma systems, resulting in dynamics not predicted by linear theory. The non-linear mechanisms can influence the time evolution of plasma instabilities and can be used to describe their saturation. Furthermore, time- and space-averaged non-linear fields generated by instabilities can lead to collisionless transport and plasma heating. Beam-plasma systems exhibit counter-intuitive beam defocusing and scaling behavior, which are interesting areas of study for both low-temperature and high-energy-density physics. Non-linear mode interactions in the form of phase coupling can describe energy transfer to other modes and can be used to describe the saturation of plasma instabilities. In the first part of this thesis, a theoretical model was formulated to explain the saturation mechanism of the slab Electron Temperature Gradient (ETG) mode observed in the Columbia Linear Machine (CLM), based on experimental time-series data collected through probe diagnostics [1]. ETG modes are considered to be a major player in the unexplained high levels of electron transport observed in tokamak fusion experiments, and the saturation mechanism of these modes is still an active area of investigation. The data in frequency space indicated phase coupling between three modes, through a higher-order spectral correlation coefficient known as bicoherence. The resulting model is similar to [2], which was a treatment for ITG modes observed in the CLM, and correctly predicts the observed saturation level of the ETG turbulence. The scenario is further supported by the fact that the observed mode frequencies are in close alignment with those predicted by theoretical dispersion relations. 
Non-linear effects arise frequently in beam-plasma systems and can be important both for low-temperature plasma devices commonly used for material processing and for high-energy-density applications relevant to inertial fusion. The non-linear time-averaged fields generated by beam-plasma instabilities can be responsible for defocusing and distorting beams propagating in background plasma. This can be problematic in inertial fusion applications, where the beam is intended to propagate ballistically as the background plasma neutralizes the beam space charge and current. We used the particle-in-cell (PIC) code LSP to numerically investigate the defocusing effects an ion beam propagating in background plasma experiences as it is exposed to the non-linear fields generated by the two-stream instability between beam ions and plasma electrons. Supported by theory and benchmarked by the numerical solutions of the governing E&M equations, the simulations were used to find and check scaling laws for the defocusing forces in the parameter space of beam and plasma density as well as the beam ion mass. A transition region where the defocusing fields peak has been identified, which should be avoided in the design of experimental devices. We further proposed a diagnostic tool to identify the presence of the two-stream instability in a system with parameters similar to the National Drift Compression Experiment II (NDCX-II) and conducted proof-of-concept simulations. In the case of an electron beam propagating in background plasma, instability-driven collisionless scattering and plasma heating are observed. 1-D simulations conducted in EDIPIC were benchmarked in LSP to study the excitation and time evolution of the electron-electron two-stream instability. Coupling of electron dynamics, via the non-linear ponderomotive force created by instability-generated fields, with ion cavity formation and ion-acoustic mode excitation was observed. 
Furthermore, 2-D simulations of an electron beam in a background plasma were performed, and many of the effects observed in 1-D simulations were replicated. Moreover, the generation of oblique modes with transverse wave numbers was observed in the simulations, resulting in significant transverse scattering of beam electrons, and the time evolution of the turbulent spectrum was studied via Fourier techniques. It is plausible that the excited modes might be interacting non-linearly via mode coupling; however, further theoretical and numerical investigation of the turbulent spectrum is needed. The study of the more realistic 2-D system and its spectrum is important for understanding the collisionless heating of plasmas by beams and the underlying energy delivery, which can have important applications especially in low-temperature plasma systems used primarily in etching and materials processing.Plasma physicsekt2109Electrical EngineeringDissertationsRF Frontend for Spectrum Analysis in Cognitive Radio
http://academiccommons.columbia.edu/catalog/ac:197492
Jayaraman, Karthikhttp://dx.doi.org/10.7916/D8ZC810ZMon, 07 Jul 2014 11:31:53 +0000Advances in wireless technology have sparked a plethora of mobile communication standards to support a variety of applications. The FCC predicts a looming crisis due to the exponentially growing demand for spectrum and recommends increasing the efficiency of spectrum utilization. Cognitive Radio (CR) is envisioned as a radio technology which detects and exploits empty spectrum to improve the quality of communication. A spectrum analyzer for detecting spectrum holes is a key component required for implementing cognitive radio. Mitola's vision of using an RF analog-to-digital converter (ADC) to digitize the entire spectrum is not yet a reality. The traditional spectrum analysis technique based on an RF front end using an LO sweep is too slow, making it unsuitable for tracking fast-hopping signals. In this work, we demonstrate an RF front end that can relax the ADC's requirements by splitting the input spectrum into multiple channels. It avoids the problem of PLL settling by incorporating LO synthesis within the signal path using a concept called the Iterative Down Converter. An example 0.75GHz-11.25GHz RF channelizer is designed in a 65nm standard CMOS process. The channelizer splits the input spectrum (10.5GHz bandwidth) into seven channels (each of bandwidth 1.5GHz). The channelizer shows the ability to rapidly switch from one channel to another (within a few ns) as well as to down-convert multiple channels simultaneously (concurrency). The channelizer achieves a dynamic range of 54dB over a bandwidth of 10.5GHz, while consuming 540mW of power. The harmonic rejection mixer plays a key role in a broadband receiver. A novel order-scalable harmonic rejection mixer architecture is described in this research. A proof-of-principle prototype has been designed and fabricated in a 45nm SOI technology. 
Experimental results demonstrate an operating range of 0.5GHz to 1.5GHz for the LO frequency while offering harmonic rejection better than 55dB for the 3rd harmonic and 58dB for the 5th harmonic across LO frequencies. While cognitive radio addresses spectrum efficiency in the frequency domain, electronic beam steering provides a spatial-domain solution. Electronic beamforming using phased arrays has been claimed to improve spectrum efficiency by serving a greater number of users for a given bandwidth. An LO-path phase shifter with frequency doubling is demonstrated for WiMAX applications.Electrical engineeringkj2347Electrical EngineeringDissertationsNanophotonics for Optoelectronic Devices: Extrinsic Silicon Photonic Receivers and Organic Photovoltaics
http://academiccommons.columbia.edu/catalog/ac:175668
Grote, Richardhttp://dx.doi.org/10.7916/D8TB1528Mon, 07 Jul 2014 00:00:00 +0000The demand for high data rate communications and renewable energy sources has led to new materials and platforms for optoelectronic devices, which require nanometer scale feature sizes. Devices that operate in the visible and near-infrared commonly have active areas with dimensions on the order of the diffraction limit λ/(2n), where λ is the free space wavelength and n is the index of refraction, for which the ray optics modeling techniques and bulk focusing optics traditionally used in optoelectronic device design are no longer applicable. In this subwavelength regime, nanophotonic light-trapping strategies are required to localize electromagnetic fields in the active area. This dissertation details the application of nanophotonics to two optoelectronic systems: extrinsic photodetectors for silicon photonics and light-trapping in organic photovoltaics. Error-free reception of 10 Gb/s data at λ = 1.55 μm is demonstrated with a Si⁺ ion-implanted silicon waveguide photodiode. To mitigate the relatively small absorption coefficient of ion-implanted silicon, resonant cavity enhancement using in-line Fabry-Pérot and 1D photonic crystal cavities, as well as slow light enhancement using a coupled resonator optical waveguide are discussed. The extension of these photodiodes to the mid-infrared is demonstrated using Zn⁺ implantation to detect over a range of λ = 2.2-2.4 μm, and a new method for modulation and switching in integrated optics by using interference in a resonant cavity, termed coherent perfect loss (CPL), is presented. Finally, the upper limit of nanophotonic light trapping is derived for organic photovoltaics with material anisotropy included.Electrical engineering, Electromagnetics, Opticsrrg2130Electrical EngineeringDissertationsArchitectures for Improved Organic Semiconductor Devices
http://academiccommons.columbia.edu/catalog/ac:175349
Beck, Jonathanhttp://dx.doi.org/10.7916/D82B8W54Mon, 07 Jul 2014 00:00:00 +0000Advancements in the microelectronics industry have brought increasing performance and decreasing prices to a wide range of users. Conventional silicon-based electronics have followed Moore's law to provide an ever-increasing integrated circuit transistor density, which drives processing power, solid-state memory density, and sensor technologies. As shrinking conventional integrated circuits became more challenging, researchers began exploring electronics with the potential to penetrate new applications with a low price of entry: "Electronics everywhere." The new generation of electronics is thin, light, flexible, and inexpensive. Organic electronics are part of the new generation of thin-film electronics, relying on the synthetic flexibility of carbon molecules to create organic semiconductors, absorbers, and emitters which perform useful tasks. Organic electronics can be fabricated with low energy input on a variety of novel substrates, including inexpensive plastic sheets. The potential ease of synthesis and fabrication of organic-based devices means that organic electronics can be made at very low cost. Successfully demonstrated organic semiconductor devices include photovoltaics, photodetectors, transistors, and light emitting diodes. Several challenges that face organic semiconductor devices are low performance relative to conventional devices, long-term device stability, and development of new organic-compatible processes and materials. While the absorption and emission performance of organic materials in photovoltaics and light emitting diodes is extraordinarily high for thin films, the charge conduction mobilities are generally low. Building highly efficient devices with low-mobility materials is one challenge. Many organic semiconductor films are unstable during fabrication, storage, and operation due to reactions with water, oxygen and hydroxide. 
A final challenge facing organic electronics is the need for new processes and materials for electrodes, semiconductors and substrates compatible with low-temperature, flexible, and oxygenated and aromatic solvent-free fabrication. Materials and processes must be capable of future high volume production in order to enable low costs. In this thesis we explore several techniques to improve organic semiconductor device performance and enable new fabrication processes. In Chapter 2, I describe the integration of sub-optical-wavelength nanostructured electrodes that improve fill factor and power conversion efficiency in organic photovoltaic devices. Photovoltaic fill factor performance is one of the primary challenges facing organic photovoltaics because most organic semiconductors have poor charge mobility. Our electrical and optical measurements and simulations indicate that nanostructured electrodes improve charge extraction in organic photovoltaics. In Chapter 3, I describe a general method for maximizing the efficiency of organic photovoltaic devices by simultaneously optimizing light absorption and charge carrier collection. We analyze the potential benefits of light trapping strategies for maximizing the overall power conversion efficiency of organic photovoltaic devices. This technique may be used to improve organic photovoltaic materials with low absorption, or short exciton diffusion and carrier-recombination lengths, opening up the device design space. In Chapter 4, I describe a process for high-quality graphene transfer onto chemically sensitive, weakly interacting organic semiconductor thin-films. Graphene is a promising flexible and highly transparent electrode for organic electronics; however, transferring graphene films onto organic semiconductor devices was previously impossible. We demonstrate a new transfer technique based on an elastomeric stamp coated with a fluorinated polymer release layer. 
We fabricate three classes of organic semiconductor devices: field effect transistors without high temperature annealing, transparent organic light-emitting diodes, and transparent small-molecule organic photovoltaic devices.Electrical engineering, Materials sciencejhb2158Electrical EngineeringDissertationsSilicon Photonics: All-Optical Devices for Linear and Nonlinear Applications
http://academiccommons.columbia.edu/catalog/ac:175330
Driscoll, Jeffreyhttp://dx.doi.org/10.7916/D8K935N4Mon, 07 Jul 2014 00:00:00 +0000Silicon photonics has grown rapidly since the first Si electro-optic switch was demonstrated in 1987, and the field has never grown more quickly than it has over the past decade, fueled by milestone achievements in semiconductor processing technologies for low loss waveguides, high-speed Si modulators, Si lasers, Si detectors, and an enormous toolbox of passive and active integrated devices. Silicon photonics is now on the verge of major commercialization breakthroughs, and optical communication links remain the force driving integrated and Si photonics towards the first commercial telecom and datacom transceivers; however, other potential and future applications are being uncovered and refined as researchers reveal the benefits of manipulating photons on the nanoscale. This thesis documents an exploration into the unique guided-wave and nonlinear properties of deeply-scaled high-index-contrast sub-wavelength Si waveguides. It is found that the tight confinement inherent to single-mode channel waveguides on the silicon-on-insulator platform leads to rich physics, which can be leveraged for new devices extending well beyond simple passive interconnects and electro-optic devices. The following chapters will concentrate, in detail, on a number of unique physical features of Si waveguides and extend these attributes towards new and interesting devices. Linear optical properties and nonlinear optical properties are investigated, both of which are strongly affected by tight optical confinement of the guided waveguide modes. As will be shown, tight optical confinement directly results in strongly vectorial modal components, where the electric and magnetic fields of the guided modes extend into all spatial dimensions, even along the axis of propagation. 
In fact, the longitudinal electric and magnetic field components can be just as strong as the transverse fields, directly affecting the modal group velocity and energy transport properties since the longitudinal fields are shown to contribute no time-averaged momentum. Furthermore, the vectorial modal components, in conjunction with the tensorial nature of the third-order susceptibility of Si, lead to nonlinear properties which are dependent on waveguide orientation with respect to the Si parent crystal and the construction of the modal electric field components. This consideration is used to maximize effective nonlinearity and realize nonlinear Kerr gratings along specific waveguide trajectories. Tight optical confinement leads to a natural enhancement of the intrinsically large effective nonlinearity of Si waveguides, and in fact, the effective nonlinearity can be made almost 10^6 times greater in Si waveguides than that of standard single-mode fiber. Such a large nonlinearity motivates chip-scale all-optical signal processing techniques. Wavelength conversion by both four-wave-mixing (FWM) and cross-phase-modulation (XPM) will be discussed, including a technique that allows for enhanced broadband discrete FWM over arbitrary spectral spans by modulating both the linear and nonlinear waveguide properties through periodic changes in waveguide geometry. This quasi-phase-matching approach has very real applications towards connecting mature telecom sources, detectors, and components to other spectral regimes, including the mid-IR. Other signal processing techniques such as all-optical modulation format conversion via XPM will also be discussed. This thesis will conclude by looking at ways to extend the bandwidth capacity of Si waveguide interconnects on chip. As the number of processing cores continues to scale as a means for computational performance gains, on-chip link capacity will become an increasingly important issue. 
Metallic traces have severe limitations and are envisioned to eventually bow to integrated photonic links. The aggregate bandwidth supported by a single waveguide link will therefore become a crucial consideration as integrated photonics approaches the CPU. One way to increase aggregate bandwidth is to utilize different eigen-modes of a multimode waveguide, and integrated waveguide mode-muxes and demuxes for achieving simultaneous mode-division-multiplexing and wavelength-division-multiplexing will be demonstrated.Electrical engineering, Opticsjbd2112Electrical EngineeringDissertationsChip scale low dimensional materials: optoelectronics and nonlinear optics
http://academiccommons.columbia.edu/catalog/ac:175882
Gu, Tingyihttp://dx.doi.org/10.7916/D8FX77KJMon, 07 Jul 2014 00:00:00 +0000The CMOS foundry infrastructure enables integration of high-density, high-performance optical transceivers. We developed integrated devices that combine resonators, waveguides, tapered couplers, pn junctions, and electrodes. Not only is volume manufacturing in silicon foundries promising for low-loss optical components operating in the IR and mid-IR range, it also provides a robust platform for revealing new physical phenomena. The thesis starts with a comparison between photonic crystal and micro-ring resonator based on-chip routers, showing that photonic crystal switches have a small footprint and consume low operating power, but their higher linear loss may require extra energy for signal amplification. Different designs are employed in their implementation for optical signal routing on chip. The second part of chapter 2 reviews graphene-based optoelectronic devices, such as modulators, lasers, switches, and detectors, with potential for group IV optoelectronic integrated circuits (OEICs). In chapter 3, highly efficient thermo-optic control is shown to act as on-chip switches and (transmittance-)tunable filters. Local temperature tuning compensates the wavelength difference between two resonances, and a separate electrode is used for fine tuning of the optical pathway between two resonators. In the frequency domain, the two-cavity system also serves as an optical analogue of Autler-Townes splitting, where the cavity-cavity resonance detuning is controlled by the length (phase) of the pathway between them. The high thermal sensitivity of the cavity resonance also effectively reflects the heat distribution around the nanoheaters, from which the thermal conductivity of the planar porous suspended silicon membrane is derived. Chapters 4 and 5 analyze graphene-silicon photonic crystal cavities with high Q and small mode volume. 
While the monolithic silicon PhC shows negligible nonlinear response to milliwatt laser excitation, it becomes highly nonlinear under microwatt excitation after transfer of a single layer of graphene, as reflected by giant two-photon-absorption-induced optical bistability, low-power dynamic switching and regenerative oscillation, and coherent four-wave-mixing arising from the high Kerr coefficient. The single layer of graphene lowers the operational power 20-fold without increasing the linear propagation loss. Chapter 6 moves on to high-Q ring resonators made of silicon nitride grown by plasma-enhanced chemical vapor deposition (PECVD SiN). PECVD SiN grown at low temperature is compatible with CMOS processing. The resonator-enhanced light-matter interaction leads to molecular-absorption-induced quality factor enhancement and thermal bistability near the critical coupling region. In chapter 7, carrier transport and recombination in InAs quantum-dot-based GaAs solar cells are characterized by current-voltage curves. The extracted parameters include the voltage-dependent ideality factor and the series and shunt resistances. The device variance across the wafer is analyzed and compared. Quantum dots offer extra photocurrent by extending the absorption edge further into the IR range, but the higher recombination rate increases the dark current as well. Dots of different sizes, enabled by growth techniques, are compared.Optics, Electrical engineering, Physicstg2342Electrical Engineering, Mechanical EngineeringDissertationsDesign Techniques for Analog-to-Digital Converters in Scaled CMOS Technologies
http://academiccommons.columbia.edu/catalog/ac:193045
Kuppambatti, Jayanth Narasimhanhttp://dx.doi.org/10.7916/D8RV0KT8Mon, 07 Jul 2014 00:00:00 +0000Analog-to-digital converters (ADCs) are analog pre-processing systems that convert real-life analog signals, such as the outputs of sensors or antennas, into digital bits that are processed by the system's digital back-end. Due to the various issues associated with CMOS technology scaling such as reduced signal swings and lower transistor gains, the design of ADCs has seen a number of challenges in medium to high resolution and wideband digitization applications. The various chapters of this thesis focus on efficient design techniques for ADCs that aim to address the challenges associated with design in scaled CMOS technologies. This thesis discusses the design of three analog and mixed-signal prototypes: the first prototype introduces current pre-charging (CRP) techniques to generate the reference in Multiplying Digital-to-Analog Converters (MDACs) of pipeline ADCs. CRP techniques are specifically applied to Zero-Crossing Based (ZCB) Pipeline-SAR ADCs in this work. The proposed reference pre-charge technique relaxes power and area requirements for reference voltage generation and distribution in ZCB Pipeline ADCs, by eliminating power-hungry low-impedance reference voltage buffers. The next prototype describes the design of a radiation-hard dual-channel 12-bit 40MS/s pipeline ADC with extended dynamic range, for use in the readout electronics upgrade for the ATLAS Liquid Argon Calorimeters at the CERN Large Hadron Collider. The design consists of two pipeline A/D channels with four MDACs with nominal 12-bit resolution each, which are verified to be radiation-hard beyond the required specifications. The final prototype proposes Switched-Mode Signal Processing, a new design paradigm that achieves rail-to-rail signal swings with high linearity at ultra-low supply voltages. 
Switched-Mode Signal Processing represents analog information in terms of pulse widths and replaces the output stage of OTAs with power-efficient rail-to-rail Class-D stages, thus producing Switched-Mode Operational Amplifiers (SMOAs). The SMOAs are used to implement a Programmable Gain Amplifier (PGA) that has a programmable gain from 0-12dB.Electrical engineeringjnk2113Electrical EngineeringDissertationsBlind Decoding of Multiple Description Codes over OFDM Systems via Sequential Monte Carlo
http://academiccommons.columbia.edu/catalog/ac:174750
Yang, Zigang; Guo, Dong; Wang, Xiaodonghttp://dx.doi.org/10.7916/D88K7755Fri, 06 Jun 2014 00:00:00 +0000We consider the problem of transmitting a continuous source through an OFDM system. Multiple description scalar quantization (MDSQ) is applied to the source signal, resulting in two correlated source descriptions. The two descriptions are then OFDM modulated and transmitted through two parallel frequency-selective fading channels. At the receiver, a blind turbo receiver is developed for joint OFDM demodulation and MDSQ decoding. The extrinsic information of the two descriptions is exchanged between the two decoders to improve system performance. A blind soft-input soft-output OFDM detector is developed, which is based on the techniques of importance sampling and resampling. Such a detector is capable of exchanging the so-called extrinsic information with the other component in the above turbo receiver, and successively improving the overall receiver performance. Finally, we also treat channel-coded systems, and a novel blind turbo receiver is developed for joint demodulation, channel decoding, and MDSQ source decoding.Electrical engineeringxw2008Electrical EngineeringArticlesAttractor Molecular Signatures and Their Applications for Prognostic Biomarkers
http://academiccommons.columbia.edu/catalog/ac:173844
Cheng, Wei-Yihttp://dx.doi.org/10.7916/D8NP22JKTue, 13 May 2014 00:00:00 +0000This dissertation presents a novel data mining algorithm identifying molecular signatures, called attractor metagenes, from large biological data sets. It also presents a computational model for combining such signatures to create prognostic biomarkers. Using the algorithm on multiple cancer data sets, we identified three such gene co-expression signatures that are present in nearly identical form in different tumor types representing biomolecular events in cancer, namely mitotic chromosomal instability, mesenchymal transition, and lymphocyte infiltration. A comprehensive experimental investigation using mouse xenograft models on the mesenchymal transition attractor metagene showed that the signature was expressed in the human cancer cells, but not in the mouse stroma. The attractor metagenes were used to build the winning model of a breast cancer prognosis challenge. When applied on larger data sets from 12 different cancer types from The Cancer Genome Atlas Pan-Cancer project, the algorithm identified additional pan-cancer molecular signatures, some of which involve methylation sites, microRNA expression, and protein activity.Electrical engineering, Bioinformaticswc2302Electrical EngineeringDissertationsThin-film Bulk Acoustic Resonators on Integrated Circuits for Physical Sensing Applications
http://academiccommons.columbia.edu/catalog/ac:172910
Johnston, Matthew Leighhttp://dx.doi.org/10.7916/D8HT2MDJTue, 15 Apr 2014 00:00:00 +0000Merging chemical and biomolecular sensors with silicon integrated circuits has the potential to push complex electronics into a low-cost, portable platform, greatly simplifying system-level instrumentation and extending the reach and functionality of point-of-use technologies. One such class of sensor, the thin-film bulk acoustic resonator (FBAR), has a micron-scale size and low gigahertz frequency range that is ideally matched with modern complementary metal-oxide-semiconductor (CMOS) technologies. An FBAR sensor can enable label-free detection of analytes in real time, and CMOS integration can overcome the measurement complexity and equipment cost normally required for detection with acoustic resonators. This thesis describes a body of work conducted to integrate an array of FBAR sensors with an active CMOS substrate. A monolithic fabrication method is developed, which allows for FBAR devices to be built directly on the top surface of the CMOS chip through post-processing. A custom substrate is designed and fabricated in 0.18 µm CMOS to support oscillation and frequency measurement for each sensor site in a 6×4 array. The fabrication of 0.8-1.5 GHz FBAR devices is validated for both off-chip and on-chip devices, and the integrated system is characterized for sensitivity and limit of detection. On-chip, parallel measurement of multiple sensors in real time is demonstrated for a quantitative vapor sensing application, and the limit of detection is below 50 ppm. This sensor platform could be used for a broad scope of label-free detection applications in chemistry, biology, and medicine, and it demonstrates potential for enabling a low-cost, point-of-use instrument.Electrical engineeringmj2200Electrical EngineeringDissertationsSelected machine learning reductions
http://academiccommons.columbia.edu/catalog/ac:172841
Choromanska, Anna Ewahttp://dx.doi.org/10.7916/D8QF8QZ9Fri, 11 Apr 2014 00:00:00 +0000Machine learning is a field of science that aims to extract knowledge from data. Optimization lies at the core of machine learning, as many learning problems are formulated as optimization problems where the goal is to minimize or maximize an objective function. More complex machine learning problems are then often solved by reducing them to simpler sub-problems solvable by known optimization techniques. This dissertation addresses two elements of the machine learning system 'pipeline': designing efficient basic optimization tools tailored to solve specific learning problems, or in other words to optimize a specific objective function, and creating more elaborate learning tools whose sub-blocks are essentially optimization solvers equipped with such basic optimization tools. In the first part of this thesis we focus on a very specific learning problem where the objective function, either convex or non-convex, involves the minimization of the partition function, the normalizer of a distribution, as is the case in conditional random fields (CRFs) or log-linear models. Our work proposes a tight quadratic bound on the partition function whose parameters are easily recovered by a simple algorithm that we propose. The bound gives rise to a family of new optimization-based learning algorithms with linear convergence rates, based on bound majorization (we developed batch, both full-rank and low-rank, and semi-stochastic variants), that successfully compete with state-of-the-art techniques (among them gradient descent methods, Newton and quasi-Newton methods like L-BFGS, etc.). The only constraint we introduce is on the number of classes, which is assumed to be finite and enumerable. 
The bound majorization method we develop is also the first reduction scheme discussed in this thesis, where throughout this thesis by 'reduction' we mean a learning approach or algorithmic technique that converts a complex machine learning problem into a set of simpler problems (which can be as small as a single problem). Secondly, we focus on developing two more sophisticated machine learning tools for solving harder learning problems. The tools that we develop are built from basic optimization sub-blocks tailored to solve simpler optimization sub-problems. We first focus on the multi-class classification problem where the number of classes is very large. We reduce this problem to a set of simpler sub-problems that we solve using basic optimization methods performing additive updates on the parameter vector. Secondly, we address the problem of learning a data representation when the data is unlabeled for any classification task. We reduce this problem to a set of simpler sub-problems that we solve using basic optimization methods; however, this time the parameter vector is updated multiplicatively. In both problems we assume that the data come in a stream that can even be infinite. We now provide a more specific description of each of these problems and describe our approach to solving them. In the multi-class classification problem it is desirable to achieve training and testing running times that are logarithmic in the label complexity. The existing approaches to this problem are either intractable or do not adapt well to the data. We propose a reduction of this problem to a set of binary regression problems organized in a tree structure, and introduce a new splitting criterion (objective function) that allows gradient-descent-style optimization (bound optimization methods can also be used). The decision tree algorithm that we obtain differs from traditional decision trees in the objective optimized, and in how that optimization is done. 
The new objective has useful properties, such as guaranteeing balanced and small-error splits, while the optimization uses an online learning algorithm that is queried and trained simultaneously as we pass over the data. Furthermore, we prove an upper bound on the number of splits required to reduce the entropy of the tree leaves below a small threshold. We empirically show that the trees we obtain have logarithmic depth, which implies logarithmic training and testing running times, and significantly smaller error than random trees. Finally, we consider the problem of unsupervised (clustering) learning of a data representation, where the quality of the obtained clustering is measured using a very simple, intuitive, and widely cited objective: the k-means clustering objective. We introduce a family of online clustering algorithms by extending algorithms for online supervised learning, with access to expert predictors (which are basic sub-blocks of our learning system), to the unsupervised learning setting. The parameter vector corresponds to the probability distribution over the experts. Different update rules for the parameter vector depend on an approximation to the current value of the k-means clustering objective obtained by each expert, and model different levels of non-stationarity in the data. We show that when the experts are batch clustering algorithms with approximation guarantees with respect to the k-means clustering objective, applied to a sliding window of the data stream, our algorithms obtain approximation guarantees with respect to the k-means clustering objective. We thus simultaneously address an open problem, posed by Dasgupta, of approximating the k-means clustering objective on data streams. 
We experimentally show that our algorithms' empirical performance tracks that of the best clustering algorithm in their expert sets and that our algorithms outperform widely used online algorithms.Computer scienceaec2163Computer Science, Electrical EngineeringDissertationsMicro-Raman spectroscopic visualization of lattice vibrations and strain in He+- implanted single-crystal LiNbO3
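The expert-weighting scheme described in the abstract above can be sketched in a few lines; this is a hypothetical 1-D illustration with made-up experts and a made-up learning rate, not the dissertation's algorithm:

```python
import math

def kmeans_cost(points, centers):
    """Sum of squared distances from each point to its nearest center."""
    return sum(min((p - c) ** 2 for c in centers) for p in points)

def expert_weights(stream, experts, eta=0.5):
    """Multiplicative-weights update over expert clusterings.

    Each expert maps the data seen so far to a list of centers; weights
    shrink in proportion to each expert's current k-means cost.
    """
    weights = [1.0] * len(experts)
    window = []
    for x in stream:
        window.append(x)
        costs = [kmeans_cost(window, e(window)) for e in experts]
        weights = [w * math.exp(-eta * c / len(window))
                   for w, c in zip(weights, costs)]
        total = sum(weights)
        weights = [w / total for w in weights]  # keep a distribution over experts
    return weights

# Bimodal data: the two-center expert should end up with the larger weight.
stream = [0.1, 0.2, 10.1, 10.2, 0.15, 9.9]
experts = [
    lambda w: [sum(w) / len(w)],   # one center at the running mean
    lambda w: [min(w), max(w)],    # two centers at the extremes
]
w = expert_weights(stream, experts)
print(w[1] > w[0])  # prints True
```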
http://academiccommons.columbia.edu/catalog/ac:172060
Huang, Hsu-Cheng; Dadap, Jerry I.; Herman, Irving P.; Bakhru, Hassaram; Osgood, Jr., Richard M.http://dx.doi.org/10.7916/D8BC3WKFThu, 27 Mar 2014 00:00:00 +0000Scanning micro-Raman spectroscopy has been utilized to image and investigate strain in He+-implanted congruent LiNbO3 samples. By using abruptly patterned implanted samples, we show that the spatial two-dimensional mapping of the Raman spectral peaks can be used to image the strain distribution and determine its absolute magnitude. We demonstrate that both short- and long-range length-scale in-plane and out-of-plane strain and stress states can be determined using the secular equations of phonon-deformation-potential theory. We also show that two-dimensional Raman imaging can be used to visualize the relaxation of strain in the crystal during low-temperature annealing.Electrical engineering, Physicshh2362, jid5, iph1, rmo1Applied Physics and Applied Mathematics, Electrical EngineeringArticlesCoding Techniques for Advanced Wireless Communication Systems
http://academiccommons.columbia.edu/catalog/ac:171931
Gong, Chenhttp://dx.doi.org/10.7916/D8PR7T29Thu, 20 Mar 2014 00:00:00 +0000Motivated by the ever-increasing demand for larger capacity and higher quality, wireless communication has grown from single-pair point-to-point systems to multiple-transceiver-pair communication networks. Various new communication techniques, such as cooperative communication, interference management, and multi-carrier communication, are employed to enhance the system capacity and improve the communication quality. Even in some single-pair communication scenarios, the different quality demands of different types of information messages call for more advanced coding schemes that provide stronger protection for more important messages, such as system emergency messages. This thesis proposes several coding schemes to address these challenges. More specifically, the proposed coding schemes are summarized as follows. Message-wise error protection is a new unequal error protection scheme in which some special messages in a codebook are more strongly protected than ordinary messages. We propose the first practical coding scheme for message-wise error protection based on LDPC codes, where codeword flipping is employed to separate the special message codewords from the ordinary message codewords. We consider a half-duplex 4-node joint relay system with two sources, one relay, and one destination, where the relay combines the information from both sources and transmits it to the destination together with both sources. We propose joint network and channel coding schemes based on superposition coding (SC) and Raptor coding (RC), and design practical Raptor codes for the proposed coding schemes. We propose novel coding and decoding methods for a fully connected K-user Gaussian interference channel. Each transmitter encodes its information into multiple layers and transmits the superposition of those layers. 
Each receiver performs a twofold task by first identifying which interferers it should decode and then determining which layers of them should be decoded. We propose practical coding schemes that employ quadrature amplitude modulation (QAM) and Raptor codes. We propose group decoding and the associated rate allocation schemes for the multi-relay assisted interference channels, where both the relays and the destinations employ constrained group decoding. We consider two types of relay systems, the hopping relay system with no direct source-destination links, and the inband relay system with direct source-destination links. For each relay type, our objective is to design the relay assignment and group decoding strategies at the relays and destinations, to maximize the minimum information rate among all source-destination pairs. We consider a distributed storage system employing existing regenerating codes where the storage nodes are scattered in a wireless network. The existing full-downloading approach, where the data collector downloads all symbols from a subset of the storage nodes for data reconstruction, becomes less efficient in wireless networks. This is because, due to fading, the wireless channels may not offer sufficient bandwidth for full downloading. We propose a partial downloading scheme that allows downloading a portion of the symbols from any storage node, and formulate a cross-layer wireless resource allocation problem for data reconstruction employing such partial downloading. We derive necessary and sufficient conditions for the data reconstructability for partial downloading, in terms of the numbers of downloaded symbols from the storage nodes. We also propose channel and power allocation schemes for partial downloading in wireless distributed storage systems.Electrical engineeringcg2474Electrical EngineeringDissertationsDesign and Optimization of Low-power Level-crossing ADCs
http://academiccommons.columbia.edu/catalog/ac:171934
Weltin-Wu, Colinhttp://dx.doi.org/10.7916/D8K072B8Thu, 20 Mar 2014 00:00:00 +0000This thesis investigates some of the practical issues related to the implementation of level-crossing ADCs in nanometer CMOS. A level-crossing ADC targeting minimum power is designed and measured. Three techniques to circumvent performance limitations due to the zero-crossing detector at the heart of the ADC are proposed and demonstrated: an adaptive resolution algorithm, an adaptive bias current algorithm, and automatic offset cancelation. The ADC, fabricated in 130 nm CMOS, is designed to operate over a 20 kHz bandwidth while consuming a maximum of 8.5 uW. A peak SNDR of 54 dB for this 8-bit ADC demonstrates a key advantage of level-crossing sampling, namely SNDR higher than the classic Nyquist limit.Electrical engineeringcw2319Electrical EngineeringDissertationsCardiac tissue characterization using near-infrared spectroscopy
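The event-driven sampling behind the level-crossing ADC described above can be illustrated with a toy software model; this sketch shows only the sampling principle, not the measured chip's adaptive-resolution or bias-current logic, and the signal and level spacing are illustrative:

```python
import math

def level_crossing_sample(signal, delta):
    """Record (index, level) only when the input crosses a quantization level.

    Levels are spaced 'delta' apart, so the output rate depends on signal
    activity rather than on the fixed clock of a Nyquist-rate ADC.
    """
    samples = []
    level = math.floor(signal[0] / delta)
    for i, x in enumerate(signal[1:], start=1):
        new_level = math.floor(x / delta)
        if new_level != level:
            level = new_level
            samples.append((i, level * delta))
    return samples

# A slowly varying tone produces far fewer events than input points.
tone = [math.sin(2 * math.pi * t / 200.0) for t in range(400)]
events = level_crossing_sample(tone, 0.25)
print(0 < len(events) < len(tone))  # prints True
```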
http://academiccommons.columbia.edu/catalog/ac:171806
Singh-Moon, Rajinder; Hendon, Christine P.http://dx.doi.org/10.7916/D8TM785WFri, 14 Mar 2014 00:00:00 +0000Cardiac tissue from swine and canine hearts was assessed ex vivo using diffuse reflectance near-infrared spectroscopy (NIRS). The slope of the reflectance measured between 800 and 880 nm was found to reveal differences between epicardial fat and normal myocardium. This parameter was observed to increase monotonically in measurements obtained from the onset of radio frequency ablation (RFA). A sheath-style fiber optic catheter was then developed to allow real-time sampling of the zone of resistive heating during RFA treatment. A model was developed and used to extract changes in tissue absorption and reduced scattering based on the steady-state diffusion approximation. It was found that key changes in tissue optical properties occur during application of RF energy and can be monitored using NIRS. These results encourage the development of NIRS-integrated catheters for real-time guidance of cardiac ablation treatment.Biology, Mediciners3226, cpf2115Electrical Engineering, AnesthesiologyArticlesMulti-Sensor Fusion of Infrared and Electro-Optic Signals for High Resolution Night Images
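The 800-880 nm slope parameter described above amounts to a least-squares fit of reflectance against wavelength over that band; a minimal sketch, using synthetic reflectance values rather than the paper's data:

```python
def reflectance_slope(wavelengths, reflectance, lo=800.0, hi=880.0):
    """Least-squares slope of reflectance vs. wavelength within [lo, hi] nm."""
    pts = [(w, r) for w, r in zip(wavelengths, reflectance) if lo <= w <= hi]
    n = len(pts)
    sw = sum(w for w, _ in pts)    # sum of wavelengths
    sr = sum(r for _, r in pts)    # sum of reflectances
    swr = sum(w * r for w, r in pts)
    sww = sum(w * w for w, _ in pts)
    return (n * swr - sw * sr) / (n * sww - sw * sw)

# Synthetic spectrum with a known slope of 0.002 per nm inside the band.
wl = list(range(780, 901, 10))
refl = [0.002 * w + 0.1 for w in wl]
print(round(reflectance_slope(wl, refl), 6))  # prints 0.002
```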
http://academiccommons.columbia.edu/catalog/ac:171076
Huang, Xiaopeng; Netravali, Ravi; Man, Hong; Lawrence, Victorhttp://dx.doi.org/10.7916/D82V2D57Wed, 26 Feb 2014 00:00:00 +0000Electro-optic (EO) image sensors offer high resolution and low noise in daylight, but they do not work in dark environments. Infrared (IR) image sensors exhibit poor resolution and cannot separate objects with similar temperatures. Therefore, we propose a novel framework for IR image enhancement based on information (e.g., edges) from EO images, which improves the resolution of IR images and helps us distinguish objects at night. Our framework improves resolution by superimposing/blending the edges of the EO image onto the corresponding transformed IR image. In this framework, we adopt the theoretical point spread function (PSF) proposed by Hardie et al. for the IR image, which has the modulation transfer function (MTF) of a uniform detector array and the incoherent optical transfer function (OTF) of diffraction-limited optics. In addition, we design an inverse filter for the proposed PSF and use it for the IR image transformation. The framework requires four main steps: (1) inverse filter-based IR image transformation; (2) EO image edge detection; (3) registration; and (4) blending/superimposing of the obtained image pair. Simulation results show both blended and superimposed IR images, and demonstrate that blended IR images have better quality than superimposed ones. Additionally, based on the same steps, simulation results show a blended IR image of better quality when only the original IR image is available.Electrical engineering, OpticsElectrical EngineeringArticlesTowards A Dynamic QoS-aware Over-The-Top Video Streaming in LTE
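As a rough sketch of steps (2) and (4) of the four-step fusion framework described above — EO edge detection and blending — the fragment below uses a plain gradient-magnitude edge map in place of a full edge detector and a simple weighted blend. The inverse-filter transformation (1) and registration (3) are assumed to have been done already; all function names and the blend weight are illustrative, not from the paper.

```python
import numpy as np

def edge_map(img):
    """Step (2): gradient-magnitude edge detection on the EO image
    (a stand-in for a proper edge detector such as Canny)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    return mag / mag.max() if mag.max() > 0 else mag

def blend(ir_img, eo_edges, alpha=0.3):
    """Step (4): blend EO edges into the (already registered and
    inverse-filtered) IR image; alpha is an illustrative weight."""
    return np.clip((1 - alpha) * ir_img + alpha * eo_edges, 0.0, 1.0)

# Toy example: a vertical step edge in the EO image is transferred onto
# a featureless IR image, making the object boundary visible at "night".
eo = np.zeros((8, 8)); eo[:, 4:] = 1.0
ir = np.full((8, 8), 0.5)
fused = blend(ir, edge_map(eo))
```

Blending (a weighted sum) degrades gracefully when the EO edges are noisy, which is consistent with the paper's finding that blended images outperform hard superimposition.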
http://academiccommons.columbia.edu/catalog/ac:170979
Nam, Hyunwoo; Kim, Kyung Hwa; Kim, Bong Ho; Calin, Doru; Schulzrinne, Henninghttp://dx.doi.org/10.7916/D80863BSFri, 21 Feb 2014 00:00:00 +0000We present a study of the traffic behavior of two popular over-the-top (OTT) video streaming services (YouTube and Netflix). Our analysis is conducted on different mobile devices (iOS and Android) over various wireless networks (Wi-Fi, 3G and LTE) under dynamic network conditions. Our measurements show that the video players frequently discard a large amount of video content although it is successfully delivered to a client. We first investigate the root cause of this unwanted behavior. Then, we propose a Quality-of-Service (QoS)-aware video streaming architecture in Long Term Evolution (LTE) networks to reduce wasted network resources and improve user experience. The architecture includes a selective packet discarding mechanism, which can be placed in packet data network gateways (P-GW). In addition, our QoS-aware rules assist video players in selecting an appropriate resolution under fluctuating channel conditions. We monitor network conditions and configure QoS parameters to control availability of the maximum bandwidth in real time. In our experimental setup, the proposed platform shows up to 20.58% savings in downlink bandwidth and improves user experience by reducing the buffer underflow period to an average of 32 seconds.Computer sciencehn2203, kk2515, dc2686, hgs10Computer Science, Electrical EngineeringTechnical reportsTowards Dynamic Network Condition-Aware Video Server Selection Algorithms over Wireless Networks
http://academiccommons.columbia.edu/catalog/ac:170975
Nam, Hyunwoo; Kim, Kyung Hwa; Schulzrinne, Henning; Calin, Doruhttp://dx.doi.org/10.7916/D8416V3KFri, 21 Feb 2014 00:00:00 +0000We investigate video server selection algorithms in a distributed video-on-demand system. We conduct a detailed study of the YouTube Content Delivery Network (CDN) on PCs and mobile devices over Wi-Fi and 3G networks under varying network conditions. We observed that a location-aware video server selection algorithm assigns a video content server based on the network attachment point of a client. We found that such distance-based algorithms carry the risk of directing a client to a suboptimal content server, although better performing video delivery servers may exist. To solve this problem, we propose to use dynamic network information, such as packet loss rates and Round Trip Time (RTT) between an edge node of a wireless network (e.g., an Internet Service Provider (ISP) router in a Wi-Fi network or a Radio Network Controller (RNC) node in a 3G network) and video content servers, to find the optimal video content server when a video is requested. Our empirical study shows that the proposed architecture can provide higher TCP performance, leading to better viewing quality compared to location-based video server selection algorithms.Computer sciencehn2203, kk2515, hgs10, dc2686Computer Science, Electrical EngineeringTechnical reportsLarge Scale Nearest Neighbor Search - Theories, Algorithms, and Applications
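The dynamic, network-condition-aware selection idea above can be sketched minimally: score each candidate server by its measured RTT inflated by its packet loss rate, and pick the minimum. The scoring function, weight, and field names are hypothetical illustrations; in the report the measurements are taken between the wireless edge node and the content servers, not at the client.

```python
def pick_server(servers):
    """Choose the candidate with the best measured path quality.
    Each candidate is a dict with 'rtt_ms' and 'loss' (loss rate in [0, 1]).
    The score (RTT inflated by loss) is an illustrative metric only."""
    def score(s):
        return s['rtt_ms'] * (1.0 + 10.0 * s['loss'])
    return min(servers, key=score)

# A nearby but lossy server loses to a slightly more distant loss-free one,
# exactly the failure mode of purely distance-based selection.
near_lossy = {'name': 'near', 'rtt_ms': 20, 'loss': 0.2}
far_clean = {'name': 'far', 'rtt_ms': 35, 'loss': 0.0}
best = pick_server([near_lossy, far_clean])
```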
http://academiccommons.columbia.edu/catalog/ac:168490
He, Junfenghttp://dx.doi.org/10.7916/D83776PGMon, 06 Jan 2014 00:00:00 +0000We are witnessing a data explosion era, in which huge data sets of billions or more samples represented by high-dimensional feature vectors can easily be found on the Web, in enterprise data centers, in surveillance sensor systems, and so on. On these large scale data sets, nearest neighbor search is fundamental for many applications including content-based search/retrieval, recommendation, clustering, graph and social network research, as well as many other machine learning and data mining problems. Exhaustive search is the simplest and most straightforward approach to nearest neighbor search, but it cannot scale to data sets of the sizes mentioned above. To make large scale nearest neighbor search practical, we need the online search step to be sublinear in the database size, which means offline indexing is necessary. Moreover, to achieve sublinear search time, we usually need to sacrifice some search accuracy, and hence we can often only obtain approximate nearest neighbors instead of exact ones. In other words, by large scale nearest neighbor search, we mean approximate nearest neighbor search methods with sublinear online search time via offline indexing. To some extent, indexing a vector dataset for (sublinear time) approximate search can be achieved by partitioning the feature space into different regions and mapping each point to its closest region. There are different kinds of partition structures, for example, tree-based partitions, hashing-based partitions, clustering/quantization-based partitions, etc. From the viewpoint of how the data partition function is generated, partition methods can be grouped into two main categories: 1. data-independent (random) partitions such as locality sensitive hashing, randomized trees/forests methods, etc.; 2. 
data-dependent (optimized) partitions, such as compact hashing, quantization-based indexing methods, and some tree-based methods like the kd-tree and PCA tree. With the offline indexing/partitioning, online approximate nearest neighbor search usually consists of three steps: locate the region that the query point falls in, obtain candidates (the database points in the regions near the query region), and rerank/return the candidates. For large scale nearest neighbor search, the key question is: how do we design the optimal offline indexing such that the online search is as fast as possible while meeting a required accuracy? In this thesis, we have studied theories, algorithms, systems and applications for (approximate) nearest neighbor search on large scale data sets, for both indexing with random partitions and indexing with learned partitions. Our specific main contributions are: 1. We unify various nearest neighbor search methods into the data partition framework, and provide a general formulation of optimal data partition, which supports the fastest search speed while satisfying a required search accuracy. The formulation is general, and can be used to explain most existing (sublinear) large scale approximate nearest neighbor search methods. 2. For indexing with data-independent partitions, we have developed theories on the lower and upper bounds of their time and space complexity, based on the optimal data partition formulation. The bounds are applicable to a general group of methods called Nearest Neighbor Preferred Hashing and Nearest Neighbor Preferred Partition, including locality sensitive hashing, random forests, and many other random hashing methods. Moreover, we also extend the theory to study how to choose the parameters for indexing methods with random partitions. 3. 
For indexing with data-dependent partitions, we have applied the same formulation to develop a joint optimization approach with two important criteria: nearest neighbor preservation and region size balancing. We have applied the joint optimization to different partition structures such as hashing and clustering, and obtained several new nearest neighbor search methods that outperform (or are at least comparable to) state-of-the-art solutions for large scale nearest neighbor search. 4. We have further studied fundamental problems for nearest neighbor search beyond search methods, for example: what is the difficulty of nearest neighbor search on a given data set (independent of search methods)? What data properties affect the difficulty, and how? How are the theoretical analysis and algorithm design of the large scale nearest neighbor search problem affected by the data set difficulty? 5. Finally, we have applied our nearest neighbor search methods to practical applications. We focus on the development of large visual search engines using the new indexing methods developed in this thesis. The techniques can be applied to other domains with data-intensive applications, and moreover can be extended to applications beyond visual search engines, such as large scale machine learning, data mining, and social network analysis.Computer science, Electrical engineeringjh2700Electrical EngineeringDissertationsA Correlated 1-D Monatomic Condensed Matter System: Experiment and Theory
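The data-independent (random) partition category discussed in the abstract above can be made concrete with a toy random-hyperplane LSH index: the sign pattern of a few random projections names the region a point falls in, and a query reranks only the points sharing its region. This is a generic textbook construction for illustration, not code from the thesis; the class and parameter names are invented.

```python
import numpy as np

class HyperplaneLSH:
    """Data-independent random partition: each of `n_bits` random
    hyperplanes splits the space in two; the sign pattern is the region key."""
    def __init__(self, dim, n_bits=8, seed=0):
        rng = np.random.default_rng(seed)
        self.planes = rng.standard_normal((n_bits, dim))
        self.buckets = {}
        self.data = np.empty((0, dim))

    def _key(self, x):
        return tuple((self.planes @ x > 0).astype(int))

    def index(self, data):
        # Offline step: map every database point to its region.
        self.data = np.asarray(data, dtype=float)
        for i, x in enumerate(self.data):
            self.buckets.setdefault(self._key(x), []).append(i)

    def query(self, q, k=1):
        # Online steps: locate the query's region, gather candidates,
        # then rerank the candidates by exact distance.
        q = np.asarray(q, dtype=float)
        cand = list(self.buckets.get(self._key(q), []))
        if not cand:                      # fall back to a full scan if empty
            cand = list(range(len(self.data)))
        d = np.linalg.norm(self.data[cand] - q, axis=1)
        return [cand[i] for i in np.argsort(d)[:k]]
```

Search time is sublinear only in expectation (a bucket holds a small fraction of the database), and accuracy is approximate: a true neighbor hashed to a different bucket is missed, which is exactly the speed/accuracy trade-off the formulation above optimizes.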
http://academiccommons.columbia.edu/catalog/ac:168505
Zaki, Nader Wasfyhttp://dx.doi.org/10.7916/D89S1P0DMon, 06 Jan 2014 00:00:00 +0000A one-dimensional quantum mechanical system is experimentally synthesized and investigated for physical phenomena that it may inherit due to quantum confinement and electron correlations. The experimentally realized system is a self-assembled array of monatomic cobalt wires that are grown under ultra-high vacuum conditions on a vicinal copper (111) substrate using a recipe developed specifically for this work. This work experimentally demonstrates that this 1-D system undergoes a charge density wave instability, which is a first for such a 1-D phenomenon on a metallic substrate. It is determined experimentally that this 1-D system undergoes an electronic phase transition at a temperature of about 85 K, in which the higher temperature electronic phase is itinerant rather than localized. Using ab initio density functional theory, the cause of the measured charge density wave instability is assigned to ferromagnetic interactions along the chain. Specifically, it is deduced that the instability is driven by spin-minority spin-exchange interactions predominantly in the cobalt dxz/dyz orbitals. Beyond shedding light on electron correlations in a physically realized quantum mechanical 1-D system, this work demonstrates that this particular system is a new test-case example for advanced theoretical techniques in predicting the correct structural ground phase.Condensed matter physicsnz2137Electrical EngineeringDissertationsMulti-Photon Spectroscopic Studies of Molecule/Metal Interfaces and Graphene
http://academiccommons.columbia.edu/catalog/ac:192820
Hong, Sung Younghttp://dx.doi.org/10.7916/D8QZ27WHMon, 06 Jan 2014 00:00:00 +0000This dissertation presents multi-photon spectroscopic studies on molecule/metal interfaces and graphene. Two different aspects of these ultrathin molecular or atomic materials were investigated: (1) the electronic structure of molecule/metal interfaces and (2) nonlinear optical properties of graphene. For case (1), two-photon photoemission (TPPE) using a femtosecond laser was employed to investigate occupied and unoccupied electronic states of molecule/metal interfaces. Here we selected two specific examples of interfaces, benzenethiols on Cu(111) and hexa-cata-hexabenzocoronene (HBC) on Cu(111), which are important model systems for an organic/electrode interface of organic semiconductor devices. Although the same copper substrate was used for all the experiments, the nature of the interfaces was strongly affected by the interaction between the molecular adlayers and the metal substrate. Our TPPE measurements on two benzenethiol species, thiophenol and p-fluorothiophenol, on Cu(111) focus on the role of adsorbates in shifting surface polarization and effecting surface electron confinement. As the coverage of each molecule increases, their photoemission-measured work functions exhibit nearly identical behavior up to 0.4-0.5 ML, at which point their behavior diverges; this behavior can be fit to an interfacial bond model for the surface dipole. In addition, our results show the emergence of an interfacial electronic state 0.1-0.2 eV below the Fermi level. This electronic state is attributed to quantum-mechanical-confinement shifting of the Cu(111) surface state by the molecular adsorbates. Another TPPE study of ours was carried out on an organic semiconductor, HBC, deposited on Cu(111). An increase of HBC coverage continuously shifts the vacuum level of the Cu substrate until a coverage of 2 ML is reached. 
In the same range of coverage, the Shockley state and the image potential states are quenched while new unoccupied states develop. The momentum- and polarization-resolved photoemission spectra reveal that the new states originate from a Cu image state. Electron localization is discussed with respect to the structural evolution of HBC. For case (2), nonlinear optical scanning microscopy was designed to study third-harmonic generation (THG) from micron-scale graphene crystals on a glass substrate. The polarization-, thickness-, and orientation-dependence of the THG signals from the graphene was measured and compared to theoretical predictions using the nonlinear optical slab model of Bloembergen and Pershan. The results revealed in-plane isotropy and out-of-plane anisotropy of the THG signals and sub-quadratic dependence on the layer number. Due to the strong THG signal, background-free imaging of graphene crystals was carried out. This result implies the potential application of THG for imaging graphene on arbitrary substrates.Chemistry, Physical chemistry, Physicssh2580Chemistry, Electrical EngineeringDissertationsTopics in Routing and Network Coding for Wireless Networks
http://academiccommons.columbia.edu/catalog/ac:168086
Desai, Maulikhttp://dx.doi.org/10.7916/D83R0QSJFri, 06 Dec 2013 00:00:00 +0000This dissertation presents topics in routing and network coding for wireless networks. We present a multipurpose multipath routing mechanism. We propose an efficient packet encoding algorithm that can easily integrate a routing scheme with network coding. We also discuss max-min fair rate allocation and scheduling algorithms for the flows in a wireless network that utilizes coding. We propose Polar Coordinate Routing (PCR) to create multiple paths between a source and a destination in wireless networks. Our scheme creates paths that are circular segments of different radii connecting source-destination pairs. We propose a non-Euclidean distance metric that allows messages to travel along these paths. Using PCR it is possible to maintain a known separation among the paths, which reduces the interference between the nodes belonging to two separate routes. Our extensive simulations show that while PCR achieves a known separation between the routes, it does so with a small increase in overall hop count. Moreover, we demonstrate that the variances of average separation and hop count are lower for the paths created using PCR compared to existing schemes, indicating a more reliable system. Existing multipath routing schemes in wireless networks do not perform well in areas with obstacles or low node density. To overcome adverse areas in a network, we integrate PCR with simple robotic routing, which lets a message circumnavigate an obstacle and follow the multipath trajectory to the destination as soon as the obstacle is passed. Next we propose an efficient packet encoding algorithm to integrate a routing scheme with network coding. Note that this packet encoding algorithm is not dependent on PCR. In fact, it can be coupled with any routing scheme in order to leverage the benefits offered by both an advanced routing scheme and an enhanced packet encoding algorithm. 
Our algorithm, based on bipartite graphs, lets a node exhaustively search its queue to identify the maximum set of packets that can be combined in a single transmission. We extend this algorithm to consider multiple next hop neighbors for a packet while searching for an optimal packet combination, which improves the likelihood of combining more packets in a single transmission. Finally, we propose an algorithm to assign max-min fair rates to the flows in a wireless network that utilizes coding. We demonstrate that when a network uses coding, a direct application of the conventional progressive filling algorithm to achieve max-min fairness may yield incorrect or suboptimal results. To emulate progressive filling correctly for a wireless network with coding, we couple a conflict-graph-based framework with a linear program. Our model helps us directly select a bottleneck flow at each iteration of the algorithm, eliminating the need to gradually increase the rates of the flows until a bottleneck is found. We demonstrate the caveats in selecting the bottleneck flows and setting up transmission scheduling constraints in order to avoid suboptimal results. We first propose a centralized fair rate allocation algorithm assuming global knowledge of the network. We also present a novel yet simple distributed algorithm that achieves the same results as the centralized algorithm. We also present centralized as well as distributed scheduling algorithms that help flows achieve their fair rates. We run our rate allocation algorithm on various topologies. We use various fairness metrics to show that our rate allocation algorithm outperforms existing algorithms (based on network utility maximization) in terms of fairness.Computer engineering, Electrical engineeringmmd2165Electrical EngineeringDissertationsSystems for pervasive electronics and interfaces
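For reference, the conventional progressive filling algorithm that the dissertation adapts (and shows can misbehave when coding is present) can be sketched as follows: all unfrozen flows are raised at the same rate, and whenever a link saturates, the flows crossing it are frozen at their current rate. The function name and the flow/link encoding are illustrative, and this sketch ignores the coding and conflict-graph constraints that are the dissertation's actual contribution.

```python
def progressive_filling(flows, links, capacity):
    """Conventional (coding-unaware) progressive filling for max-min fairness.
    `flows` maps flow id -> set of links the flow traverses;
    `capacity` maps link id -> capacity."""
    rate = {f: 0.0 for f in flows}
    frozen = set()
    residual = dict(capacity)
    while len(frozen) < len(flows):
        active = [f for f in flows if f not in frozen]
        # Smallest equal increment that saturates some link used by active flows.
        inc = min(residual[l] / sum(1 for f in active if l in flows[f])
                  for l in links
                  if any(l in flows[f] for f in active))
        for f in active:
            rate[f] += inc
            for l in flows[f]:
                residual[l] -= inc
        # Freeze every active flow that crosses a now-saturated link.
        for l in links:
            if residual[l] <= 1e-9:
                for f in active:
                    if l in flows[f]:
                        frozen.add(f)
    return rate

# A two-link example: f2 is bottlenecked by link b and freezes at 0.25,
# after which f1 keeps filling link a up to 0.75.
rates = progressive_filling({'f1': {'a'}, 'f2': {'a', 'b'}},
                            ['a', 'b'], {'a': 1.0, 'b': 0.25})
```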
http://academiccommons.columbia.edu/catalog/ac:167028
Sarik, Johnhttp://hdl.handle.net/10022/AC:P:22149Thu, 07 Nov 2013 00:00:00 +0000Energy Harvesting Active Networked Tags (EnHANTs) are a new type of wireless device in the domain between RFIDs and sensor networks. Future EnHANTs will be small, flexible, and self-powered devices that can be attached to everyday objects that are traditionally not networked to enable "Internet of Things" applications. This work describes the design and development of the EnHANT prototypes and testbed. The current prototypes use thin-film photovoltaics optimized for indoor light harvesting, form multihop networks using ultra-low-power Ultra-Wideband Impulse Radio (UWB-IR) transceivers, and implement energy harvesting adaptive networking protocols. The current testbed enables the evaluation of different algorithms by exposing individual prototypes to repeatable light conditions based on real-world irradiance data. New approaches to characterizing the energy available to energy harvesting devices were explored. A mobile data-logger was used to record the intensity of ambient light, determine the light source, and record the acceleration from motion during different real world activities. These traces were used to model the behavior of photovoltaic and inertial energy harvesters in real world deployments and can be replayed in the EnHANTs testbed. In addition, new techniques to evaluate the efficiency of different photovoltaic technologies under indoor illumination were developed. A proof-of-concept system was built to characterize photovoltaics under a standardized set of conditions in which the radiant intensity and spectral composition of the light source were systematically varied. Techniques to structure student research projects within the EnHANTs project were developed. Project-based learning approaches were implemented to engage students using real-world system development constraints. 
A survey of the students showed that this approach is an effective method for developing technical, professional, and soft skills. Open source hardware was also applied to the EnHANTs project and extended into other domains. A laboratory-based class in flat panel display technology was developed. The course introduces fundamental concepts of display systems and reinforces these concepts through the fabrication of three display devices. A lab kit platform was developed to enable remote students to use low-cost, course-specific hardware to complete the lab exercises. This platform was also applied to external projects targeted at non-university students. A workshop was developed to teach artists, designers, and hobbyists how to design and build custom user interfaces using thin-film electronics and rapid prototyping tools. Surveys of the students and workshop participants showed that this platform is an effective teaching tool and can be easily adapted and expanded.Electrical engineeringjcs2160Electrical EngineeringDissertationsIdentification of Dendritic Processing in Spiking Neural Circuits
http://academiccommons.columbia.edu/catalog/ac:167013
Slutskiy, Yevgeniyhttp://hdl.handle.net/10022/AC:P:22146Thu, 07 Nov 2013 00:00:00 +0000A large body of experimental evidence points to sophisticated signal processing taking place at the level of dendritic trees and dendritic branches of neurons. This evidence suggests that, in addition to inferring the connectivity between neurons, identifying analog dendritic processing in individual cells is fundamentally important to understanding the underlying principles of neural computation. In this thesis, we develop a novel theoretical framework for the identification of dendritic processing directly from spike times produced by spiking neurons. The problem setting of spiking neurons is necessary since such neurons make up the majority of electrically excitable cells in most nervous systems and it is often hard or even impossible to directly monitor the activity within dendrites. Thus, action potentials produced by neurons often constitute the only causal and observable correlate of dendritic processing. In order to remain true to the underlying biophysics of electrically excitable cells, we employ well-established mechanistic models of action potential generation to describe the nonlinear mapping of the aggregate current produced by the tree into an asynchronous sequence of spikes. Specific models of spike generation considered include conductance-based models such as Hodgkin-Huxley, Morris-Lecar, and FitzHugh-Nagumo, as well as simpler models of the integrate-and-fire and threshold-and-fire type. The aggregate time-varying current driving the spike generator is taken to be produced by a dendritic stimulus processor, which is a nonlinear dynamical system capable of describing arbitrary linear and nonlinear transformations performed on one or more input stimuli. In the case of multiple stimuli, it can also describe the cross-coupling, or interaction, between various stimulus features. 
The behavior of the dendritic stimulus processor is fully captured by one or more kernels, which provide a characterization of the signal processing that is consistent with the broader cable theory description of dendritic trees. We prove that the neural identification problem, stated in terms of identifying the kernels of the dendritic stimulus processor, is mathematically dual to the neural population encoding problem. Specifically, we show that the collection of spikes produced by a single neuron in multiple experimental trials can be treated as a single multidimensional spike train of a population of neurons encoding the parameters of the dendritic stimulus processor. Using the theory of sampling in reproducing kernel Hilbert spaces, we then derive precise results demonstrating that, during any experiment, the entire neural circuit is projected onto the space of input stimuli and parameters of this projection are faithfully encoded in the spike train. Spike times are shown to correspond to generalized samples, or measurements, of this projection in a system of coordinates that is not fixed but is both neuron- and stimulus-dependent. We examine the theoretical conditions under which it may be possible to reconstruct the dendritic stimulus processor from these samples and derive corresponding experimental conditions for the minimum number of spikes and stimuli that need to be used. We also provide explicit algorithms for reconstructing the kernel projection and demonstrate that, under natural conditions, this projection converges to the true kernel. The developed methodology is quite general and can be applied to a number of neural circuits. In particular, the methods discussed span all sensory modalities, including vision, audition and olfaction, in which external stimuli are typically continuous functions of time and space. 
The results can also be applied to circuits in higher brain centers that receive multi-dimensional spike trains as input stimuli instead of continuous signals. In addition, the modularity of the approach allows one to extend it to mixed-signal circuits processing both continuous and spiking stimuli, to circuits with extensive lateral connections and feedback, as well as to multisensory circuits concurrently processing multiple stimuli of different dimensions, such as audio and video. Another important extension of the approach can be used to estimate the phase response curves of a neuron. All of the theoretical results are accompanied by detailed examples demonstrating the performance of the proposed identification algorithms. We employ both synthetic and naturalistic stimuli such as natural video and audio to highlight the power of the approach. Finally, we consider the implications of our work for problems pertaining to neural encoding and decoding and discuss promising directions for future research.Neurosciences, Electrical engineering, Biophysicsys2146Electrical EngineeringDissertationsDesign of Scalable On-Demand Video Streaming Systems Leveraging Video Viewing Patterns
http://academiccommons.columbia.edu/catalog/ac:166939
Hwang, Kyung-Wookhttp://hdl.handle.net/10022/AC:P:22120Mon, 04 Nov 2013 00:00:00 +0000The explosive growth in on-demand access to video across all forms of delivery (Internet, traditional cable, IPTV, wireless) has renewed interest in scalable delivery methods. Approaches using Content Delivery Networks (CDNs), Peer-to-Peer (P2P) approaches, and their combinations have been proposed as viable options to ease the load on servers and network links. However, there has been little focus on how to take advantage of user viewing patterns to understand their impact on existing mechanisms and to design new solutions that improve the streaming service quality. In this dissertation, we leverage the observation that users watch only a small portion of videos to understand the limits of existing designs and to optimize two scalable approaches -- content placement and P2P Video-on-Demand (VoD) streaming. Then, we present our novel scalable system called Joint-Family which enables adaptive bitrate streaming (ABR) in P2P VoD, supporting user viewing patterns. We first provide evidence of such user viewing behavior from data collected from a nationally deployed VoD service. In contrast to using a simplistic popularity-based placement and traditionally proposed caching strategies (such as CDNs), we use a Mixed Integer Programming formulation to model the placement problem and employ an innovative approach that scales well. We have performed detailed simulations using actual traces of user viewing sessions (including stream control operations such as pause, fast-forward, and rewind). Our results show that the use of a segment-based placement strategy yields substantial savings in both disk storage requirements at origin servers/VHOs and network bandwidth use. For example, compared to a simple caching scheme using full videos, our MIP-based placement using segments can achieve up to 71% reduction in peak link bandwidth usage. 
Second, we note that the policies adopted in existing P2P VoD systems have not taken user viewing behavior -- that users abandon videos -- into account. We show that abandonment can result in increased interruptions and wasted resources. As a result, we reconsider the set of policies to use in the presence of abandonment. Our goal is to balance the conflicting needs of delivering videos without interruption and minimizing wastage. We find that an Earliest-First chunk selection policy in conjunction with the Earliest-Deadline peer selection policy allows us to achieve high download rates. We take advantage of abandonment by converting peers to "partial seeds"; this increases capacity. We minimize wastage by using a playback lookahead window. We use analysis and simulation experiments using real-world traces to show the effectiveness of our approach. Finally, we propose Joint-Family, a protocol that combines P2P and adaptive bitrate (ABR) streaming for VoD. While P2P for VoD and ABR have been proposed previously, they have not been studied together because they attempt to tackle problems with seemingly orthogonal goals. We motivate our approach through analysis that overcomes a misconception resulting from prior analytical work, and show that the popularity of a P2P swarm and seed staying time have a significant bearing on the achievable per-receiver download rate. Specifically, our analysis shows that popularity affects swarm efficiency when seeds stay "long enough". We also show that ABR in a P2P setting helps viewers achieve higher playback rates and/or fewer interruptions. We develop the Joint-Family protocol based on the observations from our analysis. Peers in Joint-Family simultaneously participate in multiple swarms to exchange chunks of different bitrates. We adopt chunk, bitrate, and peer selection policies that minimize the occurrence of interruptions while delivering high-quality video and improving the efficiency of the system. 
Using traces from a large-scale commercial VoD service, we compare Joint-Family with existing approaches for P2P VoD and show that viewers in Joint-Family enjoy higher playback rates with minimal interruption, irrespective of video popularity.Computer science, Computer engineering, Electrical engineeringComputer Science, Electrical EngineeringDissertationsExtracting Three-Dimensional Orientation and Tractography of Myofibers Using Optical Coherence Tomography
http://academiccommons.columbia.edu/catalog/ac:165688
Gan, Yu; Fleming, Christine P.http://hdl.handle.net/10022/AC:P:21780Wed, 25 Sep 2013 00:00:00 +0000Abnormal changes in the orientation of myofibers are associated with various cardiac diseases such as arrhythmia, irregular contraction, and cardiomyopathy. To extract fiber information, we present a method of quantifying fiber orientation and reconstructing three-dimensional tractography of myofibers using optical coherence tomography (OCT). A gradient-based algorithm was developed to quantify fiber orientation in three dimensions, and a particle filtering technique was employed to track myofibers. Prior to image processing, three-dimensional image data sets were acquired from all cardiac chambers and the ventricular septum of swine hearts using an OCT system without optical clearing. The algorithm was validated through a rotation test and comparison with manual measurements. The experimental results demonstrate that we are able to visualize three-dimensional fiber tractography in myocardial tissue.Cellular biology, Medical imaging and radiology, Bioinformaticsyg2327, cpf2115Electrical EngineeringArticlesMobile Video Is Inefficient: A Traffic Analysis
http://academiccommons.columbia.edu/catalog/ac:165639
Nam, Hyunwoo; Kim, Bong Ho; Calin, Doru; Schulzrinne, Henning G.http://hdl.handle.net/10022/AC:P:21762Mon, 23 Sep 2013 00:00:00 +0000Video streaming on mobile devices is on the rise. According to recent reports, mobile video streaming traffic accounted for 52.8% of total mobile data traffic in 2011, and it is forecast to reach 66.4% in 2015. We analyzed the network traffic behaviors of the two most popular HTTP-based video streaming services: YouTube and Netflix. Our research indicates that the network traffic behavior depends on factors such as the type of device, the multimedia application in use, and network conditions. Furthermore, we found that a large part of the downloaded video content can be discarded by the video player even though it is successfully delivered to the client. This unwanted behavior often occurs when the video player changes the resolution under fluctuating network conditions and the playout buffer is full while downloading a video. Some of the measurements show that the discarded data may exceed 35% of the total video content.Computer sciencehn2203, dc2686, hgs10Computer Science, Electrical EngineeringTechnical reportsThe Challenge of Communicating Computational Research
http://academiccommons.columbia.edu/catalog/ac:165636
Hong, Neil Chue; Jockers, Matthew L.; Ellis, Daniel P. W.; Stodden, Victoria C.http://hdl.handle.net/10022/AC:P:21703Fri, 20 Sep 2013 00:00:00 +0000Computational approaches to scholarship have revolutionized how research is done but have at the same time complicated the process of disseminating the results of that research. Conclusions may be produced using mathematical models or custom software that are not easily accessible to, or reproducible by, those outside the research team. And in some fields, a lack of understanding of computational approaches may lead to skepticism about their use. The panel considers urgent questions faced by researchers across the range of academic disciplines. How can scientists and social scientists address the lack of access to the software and code used to produce many research results, which has led to a crisis of verifiability and concern about the accuracy of the scientific record? How can digital humanists approach discussions of computational methods, which may not fit into traditional forms of scholarship and can be viewed with suspicion in disciplines that prize the art of scholarly analysis? Computational researchers are examining communication practices, policies, and tools that promise to more effectively convey their research process and the results it produces. The panelists are: Neil Chue Hong, Director of the Software Sustainability Institute; Matthew L. Jockers, Assistant Professor of English at the University of Nebraska-Lincoln; and Daniel P. W. Ellis, Associate Professor of Electrical Engineering at Columbia University.Technical communication, Information sciencede171, vcs2115Statistics, Electrical Engineering, Scholarly Communication Program, Center for Digital Research and Scholarship, Libraries and Information ServicesInterviews and roundtablesExperimental investigations of the role of proximity approximation in near-field radiative transfer
http://academiccommons.columbia.edu/catalog/ac:189127
Gu, Ninghttp://dx.doi.org/10.7916/D89P311QMon, 16 Sep 2013 00:00:00 +0000The nature of thermal radiative transfer changes significantly as the nominal gap between two objects becomes comparable to or smaller than the characteristic wavelength given by Wien's displacement law. At larger gaps, conventional theory of blackbody radiation is sufficient to describe the radiative transfer; at smaller gaps, however, wave effects such as evanescent wave tunneling, interference and diffraction render the classical theory invalid. The change in radiative transfer between two objects is most dramatic when they can support electromagnetic surface polaritons because of the high local density of states at the interface between the object and vacuum. When two objects of polar dielectric materials are close enough, the enhanced near-field radiation due to surface phonon polariton tunneling can exceed the blackbody limit by several orders of magnitude. This enhanced radiation at nanoscale has potential applications in energy transfer, heat assisted magnetic recording and near-field radiative cooling. In recent years, several experiments measuring the enhanced near-field radiation between a micro-sphere and a plane substrate have been reported. To measure the radiative transfer, the magnitude of which can be less than 10 nW, the sensor of choice is the bi-material micro-cantilever. My thesis has focused on two aspects of near-field radiative transfer between a micro-sphere and a substrate: (1) to enable quantitative comparison between experimental measurement and theoretical/numerical prediction of near-field radiative transfer. (2) to develop a comprehensive thermal model for the experimental measurement procedure. To enable the first task, an improved experimental apparatus to measure the near-field radiation between a micro-sphere and a substrate has been developed. 
In previous experimental apparatuses, radiative transfer was measured between a micro-sphere and a truncated plane surface. This was necessary because of the optical configuration. Our new apparatus overcomes this drawback with a newly designed optical path. With this new apparatus, the experiments are truly between a micro-sphere and an infinite plane. Measurements for micro-spheres with a wide range of radii, from 2.5 µm to 25 µm, have been conducted. The experimental measurements are compared to the numerical prediction using the modified proximity approximation. In contrast to van der Waals force and Casimir force measurements, in which the proximity approximation agrees better when applied to larger spheres, in radiative heat transfer measurements the modified proximity approximation agrees better for smaller spheres. This surprising finding is explained by the difference in the nature of radiative transfer and forces. To go along with the improved apparatus, we have also modified the method of data acquisition, the calibration procedures and the thermal model for the experiment. In terms of data collection, we can now eliminate the effects of spurious forces; the second change we have implemented in the experiment is that the substrate is translated at a constant velocity, as opposed to in discrete steps. We have developed a thermal model for the new experimental procedure.Mechanical engineering, Electrical engineering, Physicsng2220Electrical Engineering, Mechanical EngineeringDissertationsContinuous-Time and Companding Digital Signal Processors Using Adaptivity and Asynchronous Techniques
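The proximity approximation compared against measurement in the abstract above treats the sphere-plate geometry as a stack of annular rings, each exchanging heat like a parallel-plate gap. A minimal numerical sketch, with a user-supplied plane-plane flux model as an assumption (the function name and the 1/d form in the usage below are illustrative, not the thesis's model):

```python
import numpy as np

def proximity_approx_flux(radius, gap, plane_flux, n=2000):
    """Sphere-plate near-field heat rate via the proximity approximation.

    Slice the sphere's lower hemisphere into n annular rings; each ring at
    radial coordinate r sits at local gap d(r) = gap + R - sqrt(R^2 - r^2)
    and contributes the plane-plane flux density plane_flux(d) over its
    projected area 2*pi*r*dr.
    """
    dr = radius / n
    r = (np.arange(n) + 0.5) * dr                      # midpoint radius of each ring
    local_gap = gap + radius - np.sqrt(radius**2 - r**2)
    return float(np.sum(plane_flux(local_gap) * 2 * np.pi * r * dr))
```

Because only the rings nearest the plate see a small gap, the approximation is dominated by the sphere's tip, which is why its accuracy can depend on sphere radius.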
http://academiccommons.columbia.edu/catalog/ac:165165
Vezyrtzis, Christoshttp://hdl.handle.net/10022/AC:P:21617Fri, 13 Sep 2013 00:00:00 +0000The fully synchronous approach has been the norm for digital signal processors (DSPs) for many decades. Due to its simplicity, the classical DSP structure has been used in many applications. However, due to its rigid discrete-time operation, a classical DSP has limited efficiency or inadequate resolution for some emerging applications, such as processing of multimedia and biological signals. This thesis proposes fundamentally new approaches to designing DSPs, which are different from the classical scheme. The defining characteristic of all new DSPs examined in this thesis is the notion of "adaptivity" or "adaptability." Adaptive DSPs dynamically change their behavior to adjust to some property of their input stream, for example the rate of change of the input. This thesis presents both enhancements to existing adaptive DSPs and new adaptive DSPs. The main class of DSPs examined throughout the thesis are continuous-time (CT) DSPs. CT DSPs are clock-less and event-driven; they naturally adapt their activity and power consumption to the rate of their inputs. The absence of a clock also provides a complete avoidance of aliasing in the frequency domain, hence improved signal fidelity. The core of this thesis deals with the complete and systematic design of a truly general-purpose CT DSP. A scalable design methodology for CT DSPs is presented. This leads to the main contribution of this thesis, namely a new CT DSP chip. This chip is the first general-purpose CT DSP chip, able to process many different classes of CT and synchronous signals. The chip handles various types of signals, i.e., various digital modulations, both synchronous and asynchronous, without requiring any reconfiguration; this property is demonstrated for the first time in CT DSPs and is impossible for classical DSPs.
As opposed to previous CT DSPs, which were limited to using only one type of digital format, and whose design was hard to scale for different bandwidths and bit-widths, this chip has a formal, robust and scalable design, due to the systematic usage of asynchronous design techniques. The second contribution of this thesis is a complete methodology to design adaptive delay lines. In particular, it is shown how to make the granularity, i.e. the number of stages, adaptive in a real-time delay line. Adaptive granularity brings about a significant improvement in the line's power consumption, up to 70% as reported by simulations on two design examples. This enhancement can have a direct large power impact on any CT DSP, since a delay line consumes the majority of a CT DSP's power. The robust methodology presented in this thesis allows safe dynamic reconfiguration of the line's granularity, on-the-fly and according to the input traffic. As a final contribution, the thesis also examines two additional DSPs: one operating in the CT domain and one using the companding technique. The former operates only on level-crossing samples; the proposed methodology shows a potential for high-quality outputs by using a complex interpolation function. Finally, a companding DSP is presented for MPEG audio. Companding DSPs adapt their dynamic range to the amplitude of their input; the resulting DSPs can offer high-quality outputs even for small inputs. By applying companding to MPEG DSPs, it is shown how the DSP distortion can be made almost inaudible, without requiring complex arithmetic hardware.Electrical engineering, Computer science, Computer engineeringcv2176Electrical EngineeringDissertationsProtein Secondary Structure Prediction using Deterministic Sequential Sampling
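The level-crossing sampling mentioned in the abstract above is the event-driven input scheme a CT DSP consumes: a sample is produced only when the signal moves a full quantization step, so a slowly varying input generates few events and little activity. A minimal sketch over a discretized input (the function name and tuple format are illustrative, not the thesis's hardware interface):

```python
def level_crossings(signal, delta):
    """Emit (index, level) events whenever the input crosses a new
    quantization level spaced `delta` apart from the last emitted level.
    There is no clock: a constant input produces no events at all.
    """
    events = []
    last = signal[0]                       # last emitted quantization level
    for i, x in enumerate(signal[1:], start=1):
        while x - last >= delta:           # input rose by at least one step
            last += delta
            events.append((i, last))
        while last - x >= delta:           # input fell by at least one step
            last -= delta
            events.append((i, last))
    return events
```

A fast swing emits a burst of events (several per input step), which is exactly the bursty traffic the adaptive delay line must absorb.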
http://academiccommons.columbia.edu/catalog/ac:164181
Liang, Kuo-ching; Wang, Xiaodonghttp://hdl.handle.net/10022/AC:P:21317Fri, 09 Aug 2013 00:00:00 +0000The prediction of the secondary structure of a protein from its amino acid sequence is an important step towards the prediction of its three-dimensional structure. While many of the existing algorithms utilize similarity and homology to proteins with known secondary structures in the Protein Data Bank, other proteins with low similarity measures require a single-sequence approach to the discovery of their secondary structure. In this paper we propose an algorithm based on the deterministic sequential sampling method and a hidden Markov model for single-sequence protein secondary structure prediction. Predictions are made from windowed observations by a weighted average over possible conformations within the observation window. The proposed algorithm is shown to achieve better performance on a real dataset compared to the existing single-sequence algorithm.Bioinformaticsxw2008Electrical EngineeringArticlesBayesian Multiple-Instance Motif Discovery with BAMBI: Inference of Recombinase and Transcription Factor Binding Sites
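The windowed weighted average over conformations described in the abstract above can be illustrated by brute force on a tiny window: enumerate every state path, weight it by emission and transition probabilities, and predict the center residue by weighted vote. This is only an illustrative reading of the idea, not the authors' deterministic sequential sampler (the function name and the H/E/C state set are assumptions):

```python
from itertools import product

def predict_center(window_scores, trans):
    """Predict the secondary-structure state of a window's center residue.

    window_scores[pos][s] is the emission probability of state s at position
    pos; trans[a][b] is the a->b transition probability. Every path through
    the window is weighted, and each path votes for its center state.
    States are assumed to be H (helix), E (strand), C (coil).
    """
    states = "HEC"
    mid = len(window_scores) // 2
    vote = dict.fromkeys(states, 0.0)
    for path in product(range(3), repeat=len(window_scores)):
        w = 1.0
        for pos, s in enumerate(path):
            w *= window_scores[pos][s]
            if pos:
                w *= trans[path[pos - 1]][s]
        vote[states[path[mid]]] += w
    return max(vote, key=vote.get)
```

Deterministic sequential sampling keeps only the highest-weight paths at each step instead of enumerating all of them, which is what makes realistic window sizes tractable.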
http://academiccommons.columbia.edu/catalog/ac:164068
Jajamovich, Guido H.; Wang, Xiaodong; Arkin, Adam P.; Samoilov, Michael S.http://hdl.handle.net/10022/AC:P:21301Wed, 07 Aug 2013 00:00:00 +0000Finding conserved motifs in genomic sequences represents one of the essential bioinformatics problems. However, achieving high discovery performance without imposing substantial auxiliary constraints on possible motif features remains a key algorithmic challenge. This work describes BAMBI--a sequential Monte Carlo motif identification algorithm, which is based on a position weight matrix model that does not require additional constraints and is able to estimate such motif properties as length, logo, number of instances and their locations solely on the basis of primary nucleotide sequence data. Furthermore, should biologically meaningful information about motif attributes be available, BAMBI takes advantage of this knowledge to further refine the discovery results. In practical applications, we show that the proposed approach can be used to find sites of such diverse DNA-binding molecules as the cAMP receptor protein (CRP) and Din-family site-specific serine recombinases. Results obtained by BAMBI in these and other settings demonstrate better statistical performance than any of the four widely-used profile-based motif discovery methods: MEME, BioProspector with BioOptimizer, SeSiMCMC and Motif Sampler, as measured by the nucleotide-level correlation coefficient. Additionally, in the case of Din-family recombinase target site discovery, the BAMBI-inferred motif is found to be the only one functionally accurate from the underlying biochemical mechanism standpoint. C++ and Matlab code is available at http://www.ee.colubmia.edu/guido/BAMBI or http://genomics.lbl.gov/BAMBI/.Bioinformaticsghj2105, xw2008Electrical EngineeringArticlesFirst in vivo Real-Time Imaging of Endocardial RF Ablation by Optical Coherence Tomography
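The position weight matrix model underlying BAMBI and the other profile-based methods named above scores a candidate site by how much more likely it is under the motif than under background. A minimal sketch of that standard scoring (the function name and uniform-background assumption are illustrative; this is not BAMBI's inference procedure):

```python
import math

def pwm_score(seq, pwm, background=0.25):
    """Log-likelihood ratio of `seq` under a position weight matrix.

    pwm[i] holds the probabilities of A, C, G, T at motif position i;
    the score sums, per position, the log-ratio of the motif probability
    to a uniform background. Positive scores favor the motif model.
    """
    idx = {"A": 0, "C": 1, "G": 2, "T": 3}
    return sum(math.log(pwm[i][idx[b]] / background) for i, b in enumerate(seq))
```

Motif discovery then amounts to jointly inferring the matrix entries, the motif length, and the instance locations from the sequences alone, which is what BAMBI's sequential Monte Carlo sampler does.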
http://academiccommons.columbia.edu/catalog/ac:164061
Fleming, Christine P.; Rosenthal, Noah; Rollins, Andrew M.; Arruda, Mauriciohttp://hdl.handle.net/10022/AC:P:21305Wed, 07 Aug 2013 00:00:00 +0000We report the first in vivo use of optical coherence tomography (OCT), a high-resolution (~10 µm) real-time imaging technology, to scan subendocardial tissue and to monitor radiofrequency (RF) lesion formation. Endocardial imaging during an open chest procedure in a female pig was conducted with a forward imaging catheter with a Fourier Domain OCT system at 20 frames per second. Images of the endocardial surface and subendocardial tissue were obtained when the catheter was in direct contact with the endocardial surface. The formation and progressive increase in size of cavities within the myocardium were observed in the OCT images when a steam pop was audible. Our initial findings suggest that imaging with a forward scanning OCT catheter can assess tip electrode–tissue interface contact, image subsurface myocardial structure, and visualize dynamic effects of intramural RF energy delivery.Biomedical engineering, Medical imaging and radiologycpf2115Electrical EngineeringArticles