Academic Commons Search Results
http://academiccommons.columbia.edu/catalog.rss?f%5Bdepartment_facet%5D%5B%5D=Industrial+Engineering+and+Operations+Research&q=&rows=500&sort=record_creation_date+desc
From Continuous to Discrete: Studies on Continuity Corrections and Monte Carlo Simulation with Applications to Barrier Options and American Options
http://academiccommons.columbia.edu/catalog/ac:171186
Cao, Menghui
http://dx.doi.org/10.7916/D8PG1PS1
Fri, 28 Feb 2014 15:26:30 +0000
This dissertation (1) establishes continuity corrections for first-passage probabilities of the Brownian bridge and for barrier joint probabilities, which are applied to the pricing of two-dimensional barrier and partial barrier options, and (2) introduces new variance reduction techniques and computational improvements to Monte Carlo methods for pricing American options.
The joint distribution of Brownian motion and its first passage time has found applications in many areas, including sequential analysis, pricing of barrier options, and credit risk modeling. There are, however, no simple closed-form solutions for these joint probabilities in a discrete-time setting. Chapter 2 shows that discrete two-dimensional barrier and partial barrier joint probabilities can be approximated by their continuous-time counterparts with remarkable accuracy after shifting the barrier away from the underlying by a factor. We achieve this through a uniform continuity correction theorem on the first passage probabilities of the Brownian bridge, extending relevant results in Siegmund (1985a). The continuity corrections are applied to the pricing of two-dimensional barrier and partial barrier options, extending the results in Broadie, Glasserman & Kou (1997) on one-dimensional barrier options. One interesting aspect is that for type B partial barrier options the correction cannot be applied uniformly throughout a pricing formula: it applies only to some barrier values, leaving the others unchanged, and the direction of the correction may also vary within a single formula.
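As a point of reference, the one-dimensional correction of Broadie, Glasserman & Kou (1997) that Chapter 2 extends shifts a discretely monitored barrier away from the underlying by a factor involving the constant β = −ζ(1/2)/√(2π) ≈ 0.5826. The sketch below is a rough illustration of that one-dimensional correction applied to the first-passage probability of a drifted Brownian motion, checked against simulation of the discretely monitored maximum; the parameter values are arbitrary, and the two-dimensional and partial-barrier corrections developed in the dissertation are not reproduced here.

```python
import numpy as np
from scipy.stats import norm

BETA = 0.5826  # approx. -zeta(1/2)/sqrt(2*pi), the Broadie-Glasserman-Kou constant


def continuous_crossing_prob(b, mu, sigma, T):
    """P(max_{0<=t<=T} (mu*t + sigma*W_t) >= b) for a Brownian motion with drift."""
    sT = sigma * np.sqrt(T)
    return norm.cdf((mu * T - b) / sT) + np.exp(2 * mu * b / sigma**2) * norm.cdf((-mu * T - b) / sT)


def corrected_discrete_crossing_prob(b, mu, sigma, T, m):
    """Approximate the crossing probability under m equally spaced monitoring dates
    by shifting the barrier away from the start point by BETA*sigma*sqrt(T/m)."""
    return continuous_crossing_prob(b + BETA * sigma * np.sqrt(T / m), mu, sigma, T)


def simulated_discrete_crossing_prob(b, mu, sigma, T, m, n_paths=200_000, seed=0):
    """Monte Carlo estimate of the discretely monitored crossing probability."""
    rng = np.random.default_rng(seed)
    dt = T / m
    increments = mu * dt + sigma * np.sqrt(dt) * rng.standard_normal((n_paths, m))
    discrete_max = np.max(np.cumsum(increments, axis=1), axis=1)
    return float(np.mean(discrete_max >= b))


if __name__ == "__main__":
    b, mu, sigma, T, m = 0.10, 0.05, 0.20, 1.0, 50  # illustrative values only
    print("continuous barrier     :", continuous_crossing_prob(b, mu, sigma, T))
    print("shifted-barrier approx.:", corrected_discrete_crossing_prob(b, mu, sigma, T, m))
    print("simulated discrete     :", simulated_discrete_crossing_prob(b, mu, sigma, T, m))
```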
In Chapter 3 we introduce new variance reduction techniques and computational improvements to Monte Carlo methods for pricing American-style options. For simulation algorithms that compute lower bounds on American option values, we apply martingale control variates and introduce a local policy enhancement, which uses a local simulation to improve the exercise policy. For duality-based upper bound methods, specifically the primal-dual simulation algorithm (Andersen and Broadie 2004), we develop two improvements: sub-optimality checking, which saves unnecessary computation when it is sub-optimal to exercise the option along the sample path, and boundary distance grouping, which reduces computational time by skipping computation on selected sample paths based on their distance to the exercise boundary. Numerical results are given for single-asset Bermudan options, moving-window Asian options and Bermudan max options. In some examples the computational time is reduced by a factor of several hundred, while the confidence interval for the true option value is considerably tighter than before the improvements.

Subjects: Operations research, Finance. Departments: Operations Research, Industrial Engineering and Operations Research. Type: Dissertations.

Pricing, Trading and Clearing of Defaultable Claims Subject to Counterparty Risk
http://academiccommons.columbia.edu/catalog/ac:169814
Kim, Jinbeom
http://dx.doi.org/10.7916/D8319SWW
Mon, 03 Feb 2014 12:12:22 +0000
The recent financial crisis and subsequent regulatory changes in over-the-counter (OTC) markets have given rise to new valuation and trading frameworks for defaultable claims for investors and dealer banks. More OTC market participants have adopted market conventions that incorporate counterparty risk into the valuation of OTC derivatives. In addition, the use of collateral has become common for most bilateral trades to reduce counterparty default risk. At the same time, to increase transparency and market stability, U.S. and European regulators have required mandatory clearing of defaultable derivatives through central counterparties. This dissertation examines these changes and analyzes their impact on the pricing, trading and clearing of defaultable claims. In the first part of the thesis, we study a valuation framework for financial contracts subject to reference and counterparty default risks with a collateralization requirement. We propose a fixed-point approach to analyze the mark-to-market contract value with counterparty risk provision, and show via a contraction-mapping argument that it is the unique bounded and continuous fixed point. This leads us to develop an accurate iterative numerical scheme for valuation. Specifically, we solve a sequence of linear inhomogeneous partial differential equations whose solutions converge to the fixed-point price function. We apply our methodology to compute the bid and ask prices for both defaultable equity and fixed-income derivatives, and illustrate the non-trivial effects of counterparty risk, collateralization ratio and liquidation convention on the bid-ask prices. In the second part, we study the problem of pricing and trading of defaultable claims among investors with heterogeneous risk preferences and market views. Based on the utility-indifference pricing methodology, we construct the bid-ask spreads for risk-averse buyers and sellers, and show that the spreads widen as risk aversion or trading volume increases. Moreover, we analyze the buyer's optimal static trading position under various market settings, including (i) when the market pricing rule is linear, and (ii) when the counterparty -- single or multiple sellers -- may have different nonlinear pricing rules generated by risk aversion and belief heterogeneity. For defaultable bonds and credit default swaps, we provide explicit formulas for the optimal trading positions, and examine the combined effect of heterogeneous risk aversions and beliefs. In particular, we find that belief heterogeneity, rather than the difference in risk aversion, is crucial to trigger a trade. Finally, we study the impact of central clearing on the credit default swap (CDS) market. Central clearing of CDS through a central counterparty (CCP) has been proposed as a tool for mitigating systemic risk and counterparty risk in the CDS market. The design of CCPs involves the implementation of margin requirements and a default fund, for which various designs have been proposed. We propose a mathematical model to quantify the impact of the CCP design on the incentive for clearing and analyze the market equilibrium. We determine the minimum number of clearing participants required so that they have an incentive to clear part of their exposures. Furthermore, we analyze the equilibrium CDS positions and their dependence on the initial margin, risk aversion, and counterparty risk in the inter-dealer market.
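As a side illustration of the fixed-point valuation idea in the first part above: a contract value with counterparty risk provision depends on the value itself (through the provision), so it can be computed by repeatedly applying the pricing map until it stabilizes. The sketch below is a deliberately simplified one-period caricature of such a Picard iteration; the grid, payoff and loss rate are hypothetical, and the dissertation's actual scheme iterates over solutions of linear inhomogeneous PDEs rather than this toy map.

```python
import numpy as np


def picard_iterate(pricing_operator, p0, tol=1e-8, max_iter=200):
    """Generic fixed-point (Picard) iteration: apply a contraction mapping to an
    initial guess until successive price functions agree to within `tol`."""
    p = p0
    for k in range(max_iter):
        p_next = pricing_operator(p)
        if np.max(np.abs(p_next - p)) < tol:
            return p_next, k + 1
        p = p_next
    raise RuntimeError("fixed-point iteration did not converge")


# Toy one-period example with hypothetical numbers: the value solves
#   p = clean_value - loss_rate * max(p, 0),
# i.e. a counterparty-risk provision proportional to the positive part of p.
clean_value = np.linspace(-1.0, 1.0, 5)   # default-free values on a toy grid
loss_rate = 0.3                            # stand-in for default prob. times LGD


def operator(p):
    return clean_value - loss_rate * np.maximum(p, 0.0)


price, iters = picard_iterate(operator, np.zeros_like(clean_value))
print(f"converged in {iters} iterations:", np.round(price, 4))
```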
Our numerical results show that minimizing the initial margin maximizes the total clearing positions as well as the CCP's revenue.

Subjects: Operations research, Finance. Author ID: jk3071. Department: Industrial Engineering and Operations Research. Type: Dissertations.

Two Papers of Financial Engineering Relating to the Risk of the 2007-2008 Financial Crisis
http://academiccommons.columbia.edu/catalog/ac:167143
Zhong, Haowen
http://dx.doi.org/10.7916/D8CC0XMG
Fri, 15 Nov 2013 17:04:33 +0000
This dissertation studies two financial engineering and econometrics problems relating to two facets of the 2007-2008 financial crisis. In the first part, we construct the Spatial Capital Asset Pricing Model and the Spatial Arbitrage Pricing Theory to characterize the risk premiums of futures contracts on real estate assets, and we provide rigorous econometric analysis of the new models. An empirical study shows that there is significant spatial interaction among S&P/Case-Shiller Home Price Index futures returns. In the second part, we perform empirical studies on jump risk in the equity market. We propose a simple affine jump-diffusion model for equity returns, which seems to outperform existing models (including models with Lévy jumps) during the financial crisis and to be at least as good during normal times once model complexity is taken into account. In comparing the models, we make two empirical findings: (i) jump intensity seems to increase significantly during the financial crisis, while on average there appears to be little change in jump sizes; (ii) a finite number of large jumps in returns over any finite time horizon seems to fit the data well both before and after the crisis.

Subjects: Operations research, Statistics. Author ID: hz2193. Department: Industrial Engineering and Operations Research. Type: Dissertations.

Cutting Planes for Convex Objective Nonconvex Optimization
http://academiccommons.columbia.edu/catalog/ac:166569
Michalka, Alexander
http://hdl.handle.net/10022/AC:P:22000
Thu, 17 Oct 2013 14:46:03 +0000
This thesis studies methods for tightening relaxations of optimization problems with a convex objective over a nonconvex domain. A class of linear inequalities obtained by lifting simple valid inequalities is introduced, and it is shown that this class of inequalities is sufficient to describe the epigraph of a convex and differentiable function over a general domain. In the special case where the objective is a positive definite quadratic function, polynomial-time separation procedures using the new class of lifted inequalities are developed for the cases where the domain is the complement of the interior of a polyhedron, a union of polyhedra, or the complement of the interior of an ellipsoid. Extensions to positive semidefinite and indefinite quadratic objectives are also studied. Applications and computational considerations are discussed, and the results of a series of numerical experiments are presented.

Subjects: Industrial engineering. Author ID: adm2148. Department: Industrial Engineering and Operations Research. Type: Dissertations.

Resource Cost Aware Scheduling Problems
http://academiccommons.columbia.edu/catalog/ac:166566
Carrasco, Rodrigo
http://hdl.handle.net/10022/AC:P:21999
Thu, 17 Oct 2013 14:31:58 +0000
Managing the consumption of non-renewable and/or limited resources has become an important issue in many different settings. In this dissertation we explore the topic of resource cost aware scheduling. Unlike pure scheduling problems, in the resource-cost-aware setting we are interested not only in a scheduling performance metric but also in the cost of the resources consumed to achieve a certain performance level. There are several ways in which the cost of non-renewable resources can be incorporated into a scheduling problem. Throughout this dissertation we focus on the case where the resource consumption cost is added, as part of the objective, to a scheduling performance metric such as weighted completion time or weighted tardiness. In our work we make several contributions to the problem of scheduling with non-renewable resources. For the specific setting in which energy consumption is the resource of interest, our contributions are the following. We introduce a model that extends previous energy cost models by allowing more general cost functions that can be job-dependent. We further generalize the problem by allowing arbitrary precedence constraints and release dates. We give approximation algorithms for minimizing an objective that combines a scheduling metric, namely total weighted completion time or total weighted tardiness, with the total energy consumption cost. Our approximation algorithm is based on an interval-and-speed-indexed IP formulation: we solve the linear relaxation of this IP and use its solution to compute a schedule. We show that these algorithms have small constant approximation ratios. Through experimental analysis we show that the empirical approximation ratios are much better than the theoretical ones and that in fact the solutions are close to optimal. We also show empirically that the algorithm can be used in additional settings not covered by the theoretical results, such as flow-time objectives or an online setting, with good approximation and competitiveness ratios.

Subjects: Industrial engineering, Applied mathematics. Department: Industrial Engineering and Operations Research. Type: Dissertations.

Stochastic Models of Limit Order Markets
http://academiccommons.columbia.edu/catalog/ac:161685
Kukanov, Arseniy
http://hdl.handle.net/10022/AC:P:20511
Thu, 30 May 2013 16:40:28 +0000
During the last two decades most stock and derivatives exchanges in the world transitioned to electronic trading in limit order books, creating a need for a new set of quantitative models to describe these order-driven markets. This dissertation offers a collection of models that provide insight into the structure of modern financial markets, and can help to optimize trading decisions in practical applications. In the first part of the thesis we study the dynamics of prices, order flows and liquidity in limit order markets over short timescales. We propose a stylized order book model that predicts a particularly simple linear relation between price changes and order flow imbalance, defined as the difference between net changes in supply and demand. The slope in this linear relation, called the price impact coefficient, is inversely proportional in our model to market depth, a measure of liquidity. Our empirical results confirm both of these predictions. The linear relation between order flow imbalance and price changes holds for time intervals between 50 milliseconds and 5 minutes. The inverse relation between the price impact coefficient and market depth holds on longer timescales. These findings shed new light on intraday variations in market volatility. According to our model, volatility fluctuates due to changes in market depth or in order flow variance. Previous studies also found a positive correlation between volatility and trading volume, but in order-driven markets prices are determined by limit order book activity, so the association between trading volume and volatility is unclear. We show how a spurious correlation between these variables can indeed emerge in our linear model due to time aggregation of high-frequency data. Finally, we observe short-term positive autocorrelation in order flow imbalance and discuss an application of this variable as a measure of adverse selection in limit order executions. Our results suggest that monitoring recent order flow can improve the quality of order executions in practice. In the second part of the thesis we study the problem of optimal order placement in a fragmented limit order market. To execute a trade, market participants can submit limit orders or market orders across various exchanges where a stock is traded. In practice these decisions are influenced by the sizes of order queues and by statistical properties of order flows in each limit order book, and also by rebates that exchanges pay for limit order submissions. We present a realistic model of limit order executions and formalize the search for an optimal order placement policy as a convex optimization problem. Based on this formulation we study how various factors determine an investor's order placement decisions. In the case when a single exchange is used for order execution, we derive an explicit formula for the optimal limit and market order quantities. Our solution shows that the optimal split between market and limit orders largely depends on one's tolerance to execution risk. Market orders help to alleviate this risk because they execute with certainty. Correspondingly, we find that an optimal order allocation shifts to these more expensive orders when the execution risk is of primary concern, for example when the intended trade quantity is large or when it is costly to catch up on the quantity after limit order execution fails.
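The linear price-impact relation in the first part of the thesis regresses price changes on order flow imbalance. The abstract defines the imbalance only verbally (net change in demand minus net change in supply); the sketch below assumes one common level-1 formalization of that quantity and fits the slope by least squares on synthetic quote data, update by update rather than over the fixed time intervals used in the dissertation. The data and variable names are illustrative only.

```python
import numpy as np


def order_flow_imbalance(bid_px, bid_sz, ask_px, ask_sz):
    """Order flow imbalance over consecutive level-1 updates: increases in bid-side
    interest count as demand, increases in ask-side interest count as supply."""
    e = np.zeros(len(bid_px) - 1)
    for n in range(1, len(bid_px)):
        e[n - 1] = ((bid_px[n] >= bid_px[n - 1]) * bid_sz[n]
                    - (bid_px[n] <= bid_px[n - 1]) * bid_sz[n - 1]
                    - (ask_px[n] <= ask_px[n - 1]) * ask_sz[n]
                    + (ask_px[n] >= ask_px[n - 1]) * ask_sz[n - 1])
    return e


# Fit  mid-price change ~ beta * OFI  on synthetic level-1 data.
rng = np.random.default_rng(1)
n = 1000
bid_px = 100 + np.cumsum(rng.choice([-0.01, 0.0, 0.01], n))
ask_px = bid_px + 0.01
bid_sz = rng.integers(1, 500, n).astype(float)
ask_sz = rng.integers(1, 500, n).astype(float)

ofi = order_flow_imbalance(bid_px, bid_sz, ask_px, ask_sz)
dmid = np.diff((bid_px + ask_px) / 2.0)
beta = np.polyfit(ofi, dmid, 1)[0]   # estimated price impact coefficient
print("estimated price impact coefficient:", beta)
```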
We also characterize the optimal solution in the general case of simultaneous order placement on multiple exchanges, and show that it sets execution shortfall probabilities to specific threshold values computed from model parameters. Finally, we propose a non-parametric stochastic algorithm that computes an optimal solution by resampling historical data and does not require specifying order flow distributions. A numerical implementation of this algorithm is used to study the sensitivity of an optimal solution to changes in model parameters. Our numerical results show that order placement optimization can bring a substantial reduction in trading costs, especially for small orders and in cases when order flows are relatively uncorrelated across trading venues. The order placement optimization framework developed in this thesis can also be used to quantify the costs and benefits of financial market fragmentation from the point of view of an individual investor. For instance, we find that a positive correlation between order flows, which is empirically observed in the fragmented U.S. equity market, increases the costs of trading. As the correlation increases it may become more expensive to trade in a fragmented market than in a consolidated market. In the third part of the thesis we analyze the dynamics of limit order queues at the best bid or ask of an exchange. These queues consist of orders submitted by a variety of market participants, yet existing order book models commonly assume that all orders have similar dynamics. In practice, some orders are submitted by trade execution algorithms in an attempt to buy or sell a certain quantity of assets under time constraints, and these orders are canceled if their realized waiting time exceeds a patience threshold. In contrast, high-frequency traders submit and cancel orders depending on the order book state, and their orders are not driven by patience. The interaction between these two order types within a single FIFO queue leads to bursts of order cancelations when queues are small and to anomalously long waiting times when queues are large. We analyze a fluid model that describes the evolution of large order queues in liquid markets, taking into account the heterogeneity between order submission and cancelation strategies of different traders. Our results show that after a finite initial time interval, the queue reaches a specific structure where all orders from high-frequency traders stay in the queue until execution but most orders from execution algorithms exceed their patience thresholds and are canceled. This "order crowding" effect has been previously noted by participants in highly liquid stock and futures markets and was attributed to the large participation of high-frequency traders. In our model, their presence creates an additional workload, which increases queue waiting times for new orders. Our analysis of the fluid model leads to waiting time estimates that take into account the distribution of order types in a queue. These estimates are tested against a large dataset of realized limit order waiting times collected by a U.S. equity brokerage firm. The queue composition at the moment of order submission noticeably affects an order's waiting time, and we find that assuming a single order type for all orders in the queue leads to unrealistic results. Estimates that instead assume a mix of heterogeneous orders in the queue are closer to empirical data.
Our model for a limit order queue with heterogeneous order types also appears to be interesting from a methodological point of view. It introduces a new type of behavior in a queueing system where one class of jobs has state-dependent dynamics, while others are driven by patience. Although this model is motivated by the analysis of limit order books, it may find applications in studying other service systems with state-dependent abandonments.

Subjects: Operations research, Finance, Statistics. Author ID: ak2870. Department: Industrial Engineering and Operations Research. Type: Dissertations.

Financial Portfolio Risk Management: Model Risk, Robustness and Rebalancing Error
http://academiccommons.columbia.edu/catalog/ac:161415
Xu, Xingbo
http://hdl.handle.net/10022/AC:P:20382
Mon, 20 May 2013 15:59:07 +0000
Risk management has always been a key component of portfolio management. While more and more complicated models are proposed and implemented as research advances, they all inevitably rely on imperfect assumptions and estimates. This dissertation aims to investigate the gap between complicated theoretical modelling and practice. We focus mainly on two directions: model risk and rebalancing error. In the first part of the thesis, we develop a framework for quantifying the impact of model error and for measuring and minimizing risk in a way that is robust to model error. This robust approach starts from a baseline model and finds the worst-case error in risk measurement that would be incurred through a deviation from the baseline model, given a precise constraint on the plausibility of the deviation. Using relative entropy to constrain model distance leads to an explicit characterization of worst-case model errors; this characterization lends itself to Monte Carlo simulation, allowing straightforward calculation of bounds on model error with very little computational effort beyond that required to evaluate performance under the baseline nominal model. This approach goes well beyond the effect of errors in parameter estimates to consider errors in the underlying stochastic assumptions of the model and to characterize the greatest vulnerabilities to error in a model. We apply this approach to problems of portfolio risk measurement, credit risk, delta hedging, and counterparty risk measured through credit valuation adjustment. In the second part, we apply this robust approach to a dynamic portfolio control problem. The sources of model error include the evolution of market factors and the influence of these factors on asset returns. We analyze both finite- and infinite-horizon problems in a model in which returns are driven by factors that evolve stochastically. The model incorporates transaction costs and leads to simple and tractable optimal robust controls for multiple assets. We illustrate the performance of the controls on historical data. Robustness does improve performance in out-of-sample tests in which the model is estimated on a rolling window of data and then applied over a subsequent time period. By acknowledging uncertainty in the estimated model, the robust rules lead to less aggressive trading and are less sensitive to sharp moves in underlying prices. In the last part, we analyze the error between a discretely rebalanced portfolio and its continuously rebalanced counterpart in the presence of jumps or mean-reversion in the underlying asset dynamics. With discrete rebalancing, the portfolio's composition is restored to a set of fixed target weights at discrete intervals; with continuous rebalancing, the target weights are maintained at all times. We examine the difference between the two portfolios as the number of discrete rebalancing dates increases. We derive the limiting variance of the relative error between the two portfolios for both the mean-reverting and jump-diffusion cases. For both cases, we derive "volatility adjustments" to improve the approximation of the discretely rebalanced portfolio by the continuously rebalanced portfolio, based on the limiting covariance between the relative rebalancing error and the level of the continuously rebalanced portfolio.
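For the worst-case bounds in the first part, a relative-entropy constraint typically yields an exponential change of measure: the worst-case model reweights baseline scenarios in proportion to exp(θ · loss), and sweeping θ traces out error bounds while reusing the same Monte Carlo samples. The sketch below illustrates that reweighting on synthetic losses; it is a generic illustration of the technique under that assumed characterization, not the dissertation's specific applications (portfolio risk, credit risk, delta hedging, CVA).

```python
import numpy as np


def worst_case_bounds(losses, thetas):
    """For each tilting parameter theta, reweight baseline Monte Carlo samples by the
    exponential-tilt likelihood ratio m ~ exp(theta * loss); report the worst-case
    expected loss and the relative-entropy budget that this tilt corresponds to."""
    losses = np.asarray(losses, dtype=float)
    results = []
    for theta in thetas:
        w = np.exp(theta * (losses - losses.max()))   # shifted for numerical stability
        w /= w.mean()                                  # normalized likelihood ratio
        worst_mean = np.mean(w * losses)               # worst-case expected loss
        rel_entropy = np.mean(w * np.log(w))           # KL distance from the baseline
        results.append((theta, rel_entropy, worst_mean))
    return results


# Baseline (nominal) model: losses are simulated once; every tilt reuses the samples.
rng = np.random.default_rng(7)
baseline_losses = rng.lognormal(mean=0.0, sigma=0.5, size=100_000)
for theta, eta, bound in worst_case_bounds(baseline_losses, [0.0, 0.1, 0.2, 0.4]):
    print(f"theta={theta:.1f}  relative entropy={eta:.4f}  worst-case mean={bound:.4f}")
```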
These results are based on strong approximation results for jump-diffusion processes.

Subjects: Operations research, Finance, Mathematics. Author ID: xx2126. Departments: Industrial Engineering and Operations Research, Business. Type: Dissertations.

Tournaments With Forbidden Substructures and the Erdos-Hajnal Conjecture
http://academiccommons.columbia.edu/catalog/ac:160247
Choromanski, Krzysztof
http://hdl.handle.net/10022/AC:P:20024
Mon, 29 Apr 2013 15:29:42 +0000
A celebrated conjecture of Erdos and Hajnal states that for every undirected graph H there exists ɛ(H)>0 such that every undirected graph on n vertices that does not contain H as an induced subgraph contains a clique or a stable set of size at least n^{ɛ(H)}. In 2001 Alon, Pach and Solymosi proved that the conjecture has an equivalent directed version, where undirected graphs are replaced by tournaments, and cliques and stable sets by transitive subtournaments. This dissertation addresses the directed version of the conjecture and some closely related problems in the directed setting. For a long time the conjecture was known to be true only for very specific small graphs and for graphs obtained from them by the so-called substitution procedure proposed by Alon, Pach and Solymosi. All the graphs that are an outcome of this procedure have nontrivial homogeneous sets. Tournaments without nontrivial homogeneous sets are called prime. They play a central role here, since if the conjecture is not true then the smallest counterexample is prime. We remark that for a long time the conjecture was known to be true only for some prime graphs of order at most 5. There exist 5-vertex graphs for which the conjecture is still open; however, one of the corollaries of the results presented in this thesis states that all tournaments on at most 5 vertices satisfy the conjecture. In the first part of the thesis we establish the conjecture for new infinite classes of tournaments containing infinitely many prime tournaments. We first prove the conjecture for so-called constellations. It turns out that almost all tournaments on at most 5 vertices are either constellations or are obtained from constellations by substitutions. The only 5-vertex tournament for which this is not the case is the tournament in which every vertex has outdegree 2; we call this tournament C_{5}. Another result of this thesis is a proof of the conjecture for this tournament. We also present a structural characterization of the tournaments satisfying the conjecture in the almost linear sense. In the second part of the thesis we focus on upper bounds on the coefficients ɛ(H) for several classes of tournaments. In particular we analyze how they depend on the structure of the tournament. We prove that for almost all h-vertex tournaments ɛ(H) ≤ (4/h)(1+o(1)). As a byproduct of the methods we use here, we get upper bounds on ɛ(H) for undirected graphs. We also present upper bounds on ɛ(H) for tournaments with small nontrivial homogeneous sets, in particular prime tournaments. Finally we analyze tournaments with large ɛ(H) and explore some of their structural properties.

Subjects: Mathematics. Author ID: kmc2178. Department: Industrial Engineering and Operations Research. Type: Dissertations.

Optimization Algorithms for Structured Machine Learning and Image Processing Problems
http://academiccommons.columbia.edu/catalog/ac:158764
Qin, Zhiwei
http://hdl.handle.net/10022/AC:P:19648
Fri, 05 Apr 2013 10:47:07 +0000
Optimization algorithms are often the solution engine for machine learning and image processing techniques, but they can also become the bottleneck in applying these techniques if they are unable to cope with the size of the data. With the rapid advancement of modern technology, data of unprecedented size has become more and more available, and there is an increasing demand to process and interpret the data. Traditional optimization methods, such as the interior-point method, can solve a wide array of problems arising from the machine learning domain, but it is also this generality that often prevents them from dealing with large data efficiently. Hence, specialized algorithms that can readily take advantage of the problem structure are highly desirable and of immediate practical interest. This thesis focuses on developing efficient optimization algorithms for machine learning and image processing problems of diverse types, including supervised learning (e.g., the group lasso), unsupervised learning (e.g., robust tensor decompositions), and total-variation image denoising. These algorithms are of wide interest to the optimization, machine learning, and image processing communities. Specifically, (i) we present two algorithms to solve the Group Lasso problem. First, we propose a general version of the Block Coordinate Descent (BCD) algorithm for the Group Lasso that employs an efficient approach for optimizing each subproblem exactly. We show that it exhibits excellent performance when the groups are of moderate size. For groups of large size, we propose an extension of the proximal gradient algorithm based on variable step-lengths that can be viewed as a simplified version of BCD. By combining the two approaches we obtain an implementation that is very competitive and often outperforms other state-of-the-art approaches for this problem. We show how these methods fit into the globally convergent general block coordinate gradient descent framework of Tseng and Yun (2009), and we show that the proposed approach is more efficient in practice than the one implemented in that work. In addition, we apply our algorithms to the Multiple Measurement Vector (MMV) recovery problem, which can be viewed as a special case of the Group Lasso problem, and compare their performance to other methods in this particular instance; (ii) we further investigate sparse linear models with two commonly adopted general sparsity-inducing regularization terms, the overlapping Group Lasso penalties given by the l1/l2-norm and the l1/l_infinity-norm. We propose a unified framework based on the augmented Lagrangian method, under which problems with both types of regularization and their variants can be efficiently solved. As one of the core building blocks of this framework, we develop new algorithms using a partial-linearization/splitting technique and prove that the accelerated versions of these algorithms require O(1/sqrt(epsilon)) iterations to obtain an epsilon-optimal solution. We compare the performance of these algorithms against that of the alternating direction augmented Lagrangian and FISTA methods on a collection of data sets and apply them to two real-world problems to compare the relative merits of the two norms; (iii) we study the problem of robust low-rank tensor recovery in a convex optimization framework, drawing upon recent advances in robust Principal Component Analysis and tensor completion.
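As background for item (i), a textbook proximal-gradient (ISTA-style) baseline for the Group Lasso objective min_x 0.5*||Ax - b||^2 + lambda * sum_g ||x_g||_2 is sketched below, using block soft-thresholding as the proximal step. This is the generic method that the thesis's BCD and variable-step-length algorithms are designed to improve on, not those algorithms themselves; the data and group structure are synthetic.

```python
import numpy as np


def group_soft_threshold(v, t):
    """Proximal operator of t*||.||_2 on one group (block soft-thresholding)."""
    norm = np.linalg.norm(v)
    return np.zeros_like(v) if norm <= t else (1.0 - t / norm) * v


def group_lasso_prox_grad(A, b, groups, lam, n_iter=500):
    """Proximal gradient for 0.5*||Ax - b||^2 + lam * sum_g ||x_g||_2."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        z = x - step * A.T @ (A @ x - b)        # gradient step on the smooth part
        for g in groups:                         # prox step, separable across groups
            x[g] = group_soft_threshold(z[g], step * lam)
    return x


# Small synthetic example: 3 groups of 4 coefficients, only the first group active.
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 12))
x_true = np.zeros(12)
x_true[:4] = [1.0, -2.0, 0.5, 1.5]
b = A @ x_true + 0.05 * rng.standard_normal(60)
groups = [slice(0, 4), slice(4, 8), slice(8, 12)]
print(np.round(group_lasso_prox_grad(A, b, groups, lam=1.0), 2))
```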
We propose tailored optimization algorithms with global convergence guarantees for solving both the constrained and the Lagrangian formulations of the problem. These algorithms are based on the highly efficient alternating direction augmented Lagrangian and accelerated proximal gradient methods. We also propose a nonconvex model that can often improve the recovery results over the convex models. We investigate the empirical recoverability properties of the convex and nonconvex formulations and compare the computational performance of the algorithms on simulated data. We demonstrate through a number of real applications the practical effectiveness of this convex optimization framework for robust low-rank tensor recovery; (iv) we consider the image denoising problem using total variation regularization. This problem is computationally challenging to solve due to the non-differentiability and non-linearity of the regularization term. We propose a new alternating direction augmented Lagrangian method, involving subproblems that can be solved efficiently and exactly. The global convergence of the new algorithm is established for the anisotropic total variation model. We compare our method with the split Bregman method and demonstrate the superiority of our method in computational performance on a set of standard test images.

Subjects: Operations research, Computer science, Statistics. Author ID: zq2107. Department: Industrial Engineering and Operations Research. Type: Dissertations.

Models for managing surge capacity in the face of an influenza epidemic
http://academiccommons.columbia.edu/catalog/ac:157364
Zenteno, Ana
http://hdl.handle.net/10022/AC:P:19200
Fri, 01 Mar 2013 10:01:02 +0000
Influenza pandemics pose an imminent risk to society. Yearly outbreaks already represent heavy social and economic burdens. A pandemic could severely affect infrastructure and commerce through high absenteeism, supply chain disruptions, and other effects over an extended and uncertain period of time. Governmental institutions such as the Centers for Disease Control and Prevention (CDC) and the U.S. Department of Health and Human Services (HHS) have issued guidelines on how to prepare for a potential pandemic; however, much work still needs to be done to meet them. From a planner's perspective, the complexity of outlining plans to manage future resources during an epidemic stems from the uncertainty of how severe the epidemic will be. Uncertainty in parameters such as the contagion rate (how fast the disease spreads) makes the course and severity of the epidemic unforeseeable, exposing any planning strategy to a potentially wasteful allocation of resources. Our approach involves the use of additional resources in response to a robust model of the evolution of the epidemic, so as to hedge against the uncertainty in its evolution and intensity. Under existing plans, large cities would make use of networks of volunteers, students, and recent retirees, or borrow staff from neighboring communities. Taking into account that such additional resources are likely to be significantly constrained (e.g. in quantity and duration), we seek to produce robust emergency staff commitment levels that work well under different trajectories and degrees of severity of the pandemic. Our methodology combines robust optimization techniques with epidemiological (SEIR) models and system performance modeling. We describe cutting-plane algorithms, analogous to generalized Benders' decomposition, that prove to be fast and numerically accurate. Our results yield insights on the structure of optimal robust strategies and on practical rules-of-thumb that can be deployed during the epidemic. To assess the efficacy of our solutions, we study their performance under different scenarios and compare them against other seemingly good strategies through numerical experiments. This work would be particularly valuable for institutions that provide public services, whose operational continuity is critical for a community, especially in view of an event of this caliber. As far as we know, this is the first time this problem has been addressed in a rigorous way; in particular, we are not aware of any other applications of robust optimization in epidemiology.

Subjects: Operations research, Public health. Author ID: acz2103. Department: Industrial Engineering and Operations Research. Type: Dissertations.

Rare Events in Stochastic Systems: Modeling, Simulation Design and Algorithm Analysis
http://academiccommons.columbia.edu/catalog/ac:156733
Shi, Yixi
http://hdl.handle.net/10022/AC:P:19034
Wed, 13 Feb 2013 12:32:00 +0000
This dissertation explores a few topics in the study of rare events in stochastic systems, with a particular emphasis on the simulation aspect. This line of research has been receiving a substantial amount of interest in recent years, mainly motivated by scientific and industrial applications in which system performance is frequently measured in terms of events with very small probabilities. The topics mainly break down into the following themes: algorithm analysis (Chapters 2, 3, 4 and 5), simulation design (Chapters 3, 4 and 5), and modeling (Chapter 5). The titles of the main chapters are as follows.
Chapter 2: Analysis of a Splitting Estimator for Rare Event Probabilities in Jackson Networks
Chapter 3: Splitting for Heavy-tailed Systems: An Exploration with Two Algorithms
Chapter 4: State Dependent Importance Sampling with Cross Entropy for Heavy-tailed Systems
Chapter 5: Stochastic Insurance-Reinsurance Networks: Modeling, Analysis and Efficient Monte Carlo

Subjects: Engineering, Mathematics. Author ID: ys2347. Department: Industrial Engineering and Operations Research. Type: Dissertations.

Chance Constrained Optimal Power Flow: Risk-Aware Network Control under Uncertainty
http://academiccommons.columbia.edu/catalog/ac:156182
Bienstock, Daniel; Chertkov, Michael; Harnett, Sean
http://hdl.handle.net/10022/AC:P:18933
Tue, 05 Feb 2013 10:34:34 +0000
When uncontrollable resources fluctuate, Optimum Power Flow (OPF), routinely used by the electric power industry to re-dispatch hourly controllable generation (coal, gas and hydro plants) over control areas of transmission networks, can result in grid instability and, potentially, cascading outages. This risk arises because OPF dispatch is computed without awareness of major uncertainty, in particular fluctuations in renewable output. As a result, grid operation under OPF with renewable variability can lead to frequent conditions where power line flow ratings are significantly exceeded. Such a condition, which is borne out by simulations of real grids, would likely result in automatic line tripping to protect lines from thermal stress, a risky and undesirable outcome which compromises stability. Smart grid goals include a commitment to large penetration of highly fluctuating renewables, thus calling for a reconsideration of current practices, in particular the use of standard OPF. Our Chance Constrained (CC) OPF corrects the problem and mitigates dangerous renewable fluctuations with minimal changes in the current operational procedure. Assuming availability of a reliable wind forecast parameterizing the distribution function of the uncertain generation, our CC-OPF satisfies all the constraints with high probability while simultaneously minimizing the cost of economic re-dispatch. CC-OPF allows efficient implementation, e.g. solving a typical instance over the 2746-bus Polish network in 20 seconds on a standard laptop.

Subjects: Industrial engineering, Operations research. Author IDs: db17, srh2144. Departments: Industrial Engineering and Operations Research, Applied Physics and Applied Mathematics. Type: Articles.

Models for managing the impact of an influenza epidemic
http://academiccommons.columbia.edu/catalog/ac:153905
Bienstock, Daniel; Zenteno Langle, Ana Cecilia
http://hdl.handle.net/10022/AC:P:15119
Mon, 29 Oct 2012 09:30:05 +0000
We present methodologies for managing the impact of workforce absenteeism on the operational continuity of public services during an influenza epidemic. From a planner's perspective, it is of paramount importance to design contingency plans to administer resources in the face of such an event; however, there is significant complexity underlying this task, stemming from uncertainty about the likely severity and evolution of the epidemic. Our approach involves the procurement of additional resources in response to a robust model of the evolution of the epidemic. We develop insights on the structure of optimal robust strategies and on practical rules-of-thumb that can be applied should an epidemic take place. We present numerical examples that illustrate the effectiveness of our results.

Subjects: Public health. Author IDs: db17, acz2103. Departments: Industrial Engineering and Operations Research, Applied Physics and Applied Mathematics. Type: Articles.

Chance Constrained Optimal Power Flow: Risk-Aware Network Control under Uncertainty
http://academiccommons.columbia.edu/catalog/ac:153902
Bienstock, Daniel; Chertkov, Michael; Harnett, Sean
http://hdl.handle.net/10022/AC:P:15118
Mon, 29 Oct 2012 09:19:08 +0000
When uncontrollable resources fluctuate, Optimum Power Flow (OPF), routinely used by the electric power industry to re-dispatch hourly controllable generation (coal, gas and hydro plants) over control areas of transmission networks, can result in grid instability and, potentially, cascading outages. This risk arises because OPF dispatch is computed without awareness of major uncertainty, in particular fluctuations in renewable output. As a result, grid operation under OPF with renewable variability can lead to frequent conditions where power line flow ratings are significantly exceeded. Such a condition, which is borne out by simulations of real grids, would likely result in automatic line tripping to protect lines from thermal stress, a risky and undesirable outcome which compromises stability. Smart grid goals include a commitment to large penetration of highly fluctuating renewables, thus calling for a reconsideration of current practices, in particular the use of standard OPF. Our Chance Constrained (CC) OPF corrects the problem and mitigates dangerous renewable fluctuations with minimal changes in the current operational procedure. Assuming availability of a reliable wind forecast parameterizing the distribution function of the uncertain generation, our CC-OPF satisfies all the constraints with high probability while simultaneously minimizing the cost of economic re-dispatch. CC-OPF allows efficient implementation, e.g. solving a typical instance over the 2746-bus Polish network in 20 seconds on a standard laptop.

Subjects: Industrial engineering, Operations research. Author ID: db17. Departments: Industrial Engineering and Operations Research, Applied Physics and Applied Mathematics. Type: Articles.

Contingent Capital: Valuation and Risk Implications Under Alternative Conversion Mechanisms
http://academiccommons.columbia.edu/catalog/ac:152933
Nouri, Behzad
http://hdl.handle.net/10022/AC:P:14800
Fri, 28 Sep 2012 11:11:35 +0000
Several proposals for enhancing the stability of the financial system include requirements that banks hold some form of contingent capital, meaning equity that becomes available to a bank in the event of a crisis or financial distress. Specific proposals vary in their choice of conversion trigger and conversion mechanism, and they have inspired extensive scrutiny regarding their effectiveness in avoiding costly public rescues and bail-outs and regarding potential adverse effects on market dynamics. While allowing banks to lever up and earn a higher return on their equity capital during upturns in financial markets, contingent capital provides an automatic mechanism to reduce debt and raise the loss-bearing capital cushion during downturns and market crashes, making it possible to achieve stability and robustness in the financial sector without reducing the efficiency and competitiveness of the banking system through higher regulatory capital requirements. However, many researchers have raised concerns regarding unintended consequences and implications of such instruments for market dynamics. Among these concerns are death spirals in the stock price near conversion, the possibility of profitable stock or book manipulation by either investors or the issuer, the marketability of and demand for such hybrid instruments, contagion and systemic risks arising from investors' hedging strategies, and stronger risk-taking incentives for issuers. Though substantial, many of these issues can be addressed through prudent design of the trigger and conversion mechanism. In the following chapters, we develop multiple models for pricing and analysis of contingent capital under different conversion mechanisms. In Chapter 2 we analyze the case of contingent capital with a capital-ratio trigger and partial, on-going conversion. The capital ratio we use is based on accounting or book value to approximate the regulatory ratios that determine capital requirements for banks. The conversion process is partial and on-going in the sense that each time a bank's capital ratio reaches the minimum threshold, just enough debt is converted to equity to meet the capital requirement, so long as the contingent capital has not been depleted. In Chapter 3 we simplify the design to all-at-once conversion; however, we perform the analysis with a much richer model that incorporates tail risk in the form of jumps, an endogenous optimal default policy, and debt rollover. We also investigate the case of bail-in debt, where at default the original shareholders are wiped out and the converted investors take control of the firm. In the case of contingent convertibles, the conversion trigger is assumed to be a contractual term specified in terms of the market value of assets. For bail-in debt, the trigger is the point at which the original shareholders optimally default. We study shareholders' incentives to change the capital structure and how CoCos affect risk incentives. Several researchers have advocated the use of a market-based trigger, which is forward-looking, continuously updated and readily available, while others have raised concerns regarding its unintended consequences.
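The partial, on-going conversion rule described for Chapter 2 has a simple book-value arithmetic: when the capital ratio falls to the trigger level α, the amount converted is the equity shortfall α·assets − equity, capped by the remaining contingent-capital tranche. A small sketch with hypothetical numbers (not the dissertation's pricing model, just the conversion rule itself):

```python
def conversion_amount(assets, equity, coco_outstanding, alpha):
    """Contingent capital converted so that book equity / assets is restored to the
    minimum ratio alpha, capped at the remaining contingent-capital tranche."""
    if equity / assets >= alpha:
        return 0.0
    shortfall = alpha * assets - equity        # equity needed to get back to the trigger
    return min(shortfall, coco_outstanding)    # cannot convert more than is outstanding


# Hypothetical numbers: assets 100, book equity 3, 5 of contingent capital, 6% trigger.
x = conversion_amount(assets=100.0, equity=3.0, coco_outstanding=5.0, alpha=0.06)
print(x)   # converts 3.0, so equity rises to 6.0 and the ratio is back at 6%
```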
In Chapter 4 we investigate one of these issues, namely the existence and uniqueness of equilibrium when the conversion trigger is based on the stock price.

Subjects: Finance, Operations research. Author ID: bn2164. Departments: Industrial Engineering and Operations Research, Business. Type: Dissertations.

Three Essays on Dynamic Pricing and Resource Allocation
http://academiccommons.columbia.edu/catalog/ac:151966
Nur, Cavdaroglu
http://hdl.handle.net/10022/AC:P:14492
Thu, 23 Aug 2012 11:27:07 +0000
This thesis consists of three essays that focus on different aspects of pricing and resource allocation. We use techniques from supply chain and revenue management, scenario-based robust optimization and game theory to study the behavior of firms in different competitive and non-competitive settings. We develop dynamic programming models that account for pricing and resource allocation decisions of firms in such settings. In Chapter 2, we focus on the resource allocation problem of a service firm, particularly a health-care facility. We formulate a general model that is applicable to various resource allocation problems of a hospital. To this end, we consider a system with multiple customer classes that display different reactions to delays in service. By adopting a dynamic-programming approach, we show that the optimal policy is not simple but exhibits desirable monotonicity properties. Furthermore, we propose a simple threshold heuristic policy that performs well in our experiments. In Chapter 3, we study a dynamic pricing problem for a monopolist seller that operates in a setting where buyers have market power, and where each potential sale takes the form of a bilateral negotiation. We review the dynamic programming formulation of the negotiation problem, and propose a simple and tractable deterministic "fluid" analogue for this problem. The main emphasis of the chapter is on extending the formulation to the dynamic setting in which both the buyer and the seller have limited prior information about their counterparty's valuation and negotiation skill. In Chapter 4, we consider the revenue maximization problem of a seller who operates in a market with two types of customers, namely "investors" and "regular buyers". In a two-period setting, we model and solve the pricing game between the seller and the investors in the latter period, and based on the solution of this game, we analyze the revenue maximization problem of the seller in the former period. Moreover, we study the effects on total system profits when the seller and the investors cooperate through a contracting mechanism rather than competing with each other, and explore the contracting opportunities that lead to higher profits for both agents.

Subjects: Operations research. Department: Industrial Engineering and Operations Research. Type: Dissertations.

Forbidden Substructures in Graphs and Trigraphs, and Related Coloring Problems
http://academiccommons.columbia.edu/catalog/ac:146465
Penev, Irena
http://hdl.handle.net/10022/AC:P:13082
Tue, 01 May 2012 16:34:11 +0000
Given a graph G, χ(G) denotes the chromatic number of G, and ω(G) denotes the clique number of G (i.e. the maximum number of pairwise adjacent vertices in G). A graph G is perfect provided that for every induced subgraph H of G, χ(H) = ω(H). This thesis addresses several problems from the theory of perfect graphs and generalizations of perfect graphs. The bull is a five-vertex graph consisting of a triangle and two vertex-disjoint pendant edges; a graph is said to be bull-free provided that no induced subgraph of it is a bull. The first result of this thesis is a structure theorem for bull-free perfect graphs. This is joint work with Chudnovsky, and it first appeared in [12]. The second result of this thesis is a decomposition theorem for bull-free perfect graphs, which we then use to give a polynomial-time combinatorial coloring algorithm for bull-free perfect graphs. We remark that de Figueiredo and Maffray [33] previously solved this same problem; however, the algorithm presented in this thesis is faster than the algorithm from [33]. We note that a decomposition theorem very similar to (but slightly weaker than) the one from this thesis was originally proven in [52]; however, the proof in this thesis is significantly different from the one in [52]. The algorithm from this thesis is very similar to the one from [52]. A class G of graphs is said to be χ-bounded provided that there exists a function f such that for all G in G, and all induced subgraphs H of G, we have that χ(H) ≤ f(ω(H)). χ-bounded classes were introduced by Gyarfas [41] as a generalization of the class of perfect graphs (clearly, the class of perfect graphs is χ-bounded by the identity function). Given a graph H, we denote by Forb*(H) the class of all graphs that do not contain any subdivision of H as an induced subgraph. In [57], Scott proved that Forb*(T) is χ-bounded for every tree T, and he conjectured that Forb*(H) is χ-bounded for every graph H. Recently, a group of authors constructed a counterexample to Scott's conjecture [51]. This raises the following question: for which graphs H is Scott's conjecture true? In this thesis, we present the proof of Scott's conjecture for the cases when H is the paw (i.e. a four-vertex graph consisting of a triangle and a pendant edge), the bull, and a necklace (i.e. a graph obtained from a path by choosing a matching such that no edge of the matching is incident with an endpoint of the path, and for each edge of the matching, adding a vertex adjacent to the ends of this edge). This is joint work with Chudnovsky, Scott, and Trotignon, and it originally appeared in [13]. Finally, we consider several operations (namely, "substitution," "gluing along a clique," and "gluing along a bounded number of vertices"), and we show that the closure of a χ-bounded class under any one of them, as well as under certain combinations of these three operations (in particular, the combination of substitution and gluing along a clique, as well as the combination of gluing along a clique and gluing along a bounded number of vertices), is again χ-bounded. This is joint work with Chudnovsky, Scott, and Trotignon, and it originally appeared in [14].

Subjects: Mathematics. Author ID: ip2158. Departments: Mathematics, Industrial Engineering and Operations Research. Type: Dissertations.

Essays on Inventory Management and Object Allocation
http://academiccommons.columbia.edu/catalog/ac:144769
Lee, Thiam Hui
http://hdl.handle.net/10022/AC:P:12623
Fri, 17 Feb 2012 15:52:21 +0000
This dissertation consists of three essays. In the first, we establish a framework for proving equivalences between mechanisms that allocate indivisible objects to agents. In the second, we study a newsvendor model where the inventory manager has access to two experts that provide advice, and examine how and when an optimal algorithm can be efficiently computed. In the third, we study the classical single-resource capacity allocation problem and investigate the relationship between data availability and performance guarantees. We first study mechanisms that solve the problem of allocating indivisible objects to agents. We consider the class of mechanisms that utilize the Top Trading Cycles (TTC) algorithm (these may differ based on how they prioritize agents), and show a general approach to proving equivalences between mechanisms from this class. This approach is used to give alternative and simpler proofs of two recent equivalence results for mechanisms with linear priority structures. We also use the same approach to show that these equivalence results can be generalized to mechanisms where the agent priority structure is described by a tree. Second, we study the newsvendor model where the manager has recourse to advice, or decision recommendations, from two experts, and where the objective is to minimize worst-case regret from not following the advice of the better of the two experts. We show the model can be reduced to the classic machine-learning problem of predicting binary sequences, but with an asymmetric cost function, allowing us to obtain an optimal algorithm by modifying a well-known existing one. However, the algorithm we modify, and consequently the optimal algorithm we describe, is not known to be efficiently computable, because it requires evaluations of a function v which is the objective value of recursively defined optimization problems. We analyze v and show that when the two cost parameters of the newsvendor model are small multiples of a common factor, its evaluation is computationally efficient. We also provide a novel and direct asymptotic analysis of v that differs from previous approaches. Our asymptotic analysis gives us insight into the transient structure of v as its parameters scale, enabling us to formulate a heuristic for evaluating v generally. This, in turn, defines a heuristic for the optimal algorithm whose decisions we find in a numerical study to be close to optimal. In our third essay, we study the classical single-resource capacity allocation problem. In particular, we analyze the relationship between data availability (in the form of demand samples) and performance guarantees for solutions derived from that data. This is done by describing a class of solutions called epsilon-backwards accurate policies and determining a suboptimality gap for this class of solutions. The suboptimality gap we find is in terms of epsilon and is also distribution-free. We then relate solutions generated by a Monte Carlo algorithm to epsilon-backwards accurate policies, showing a lower bound on the quantity of data necessary to ensure that the solution generated by the algorithm is epsilon-backwards accurate with high probability. Combining the two results then allows us to give a lower bound on the data needed to generate an α-approximation with a given confidence probability 1-delta.
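For the third essay's single-resource problem, the kind of data-driven decision being analyzed can be illustrated with the two-fare special case: Littlewood's rule protects for the high fare the (1 − p_low/p_high)-quantile of high-fare demand, which a sample-based method estimates with an empirical quantile. The sketch below is only this textbook illustration with synthetic samples and hypothetical fares; it is not the thesis's epsilon-backwards accurate construction or its Monte Carlo algorithm.

```python
import numpy as np


def protection_level_from_samples(high_fare_demand_samples, p_high, p_low):
    """Two-fare Littlewood rule estimated from data: protect for the high fare the
    (1 - p_low/p_high)-quantile of its demand, here taken as the empirical quantile."""
    q = 1.0 - p_low / p_high
    return float(np.quantile(high_fare_demand_samples, q))


# Hypothetical fares and synthetic demand samples for the high-fare class.
rng = np.random.default_rng(3)
samples = rng.poisson(lam=40, size=2_000)
print(protection_level_from_samples(samples, p_high=300.0, p_low=120.0))
```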
We find that this lower bound is polynomial in the number of fares, M, and 1/α.

Subjects: Operations research. Author ID: thl2102. Department: Industrial Engineering and Operations Research. Type: Dissertations.

Multiproduct Pricing Management and Design of New Service Products
http://academiccommons.columbia.edu/catalog/ac:144706
Wang, Ruxian
http://hdl.handle.net/10022/AC:P:12603
Fri, 17 Feb 2012 12:45:47 +0000
In this thesis, we study price optimization and competition for multiple differentiated substitutable products under the general Nested Logit model, and we also consider the design and pricing of new service products, e.g., flexible warranties and refundable warranties, under customers' strategic claim behavior. Chapter 2 considers firms that sell multiple differentiated substitutable products and customers whose purchase behavior follows the Nested Logit model, of which the Multinomial Logit model is a special case. In the Nested Logit model, customers make product selection decisions sequentially: they first select a class or nest of products and subsequently choose a product within the selected class. We consider the general Nested Logit model with product-differentiated price coefficients and general nest-heterogeneous degrees. We show that the adjusted markup, which is defined as price minus cost minus the reciprocal of the price coefficient, is constant across all the products in each nest. When optimizing multiple nests of products, the adjusted nested markup is also constant within a nest. By using this result, the multi-product optimization problem can be reduced to a single-dimensional problem on a bounded interval, which is easy to solve. We also use this result to simplify the oligopolistic price competition and characterize the Nash equilibrium. Furthermore, we investigate its application to dynamic pricing and revenue management. In Chapter 3, we investigate the flexible monthly warranty, which offers flexibility to customers and allows them to cancel at any time without penalty. Frequent technological innovations and price declines severely affect sales of extended warranties as product replacement upon failure becomes an increasingly attractive alternative. To increase sales and profitability, we propose offering flexible-duration extended warranties. These warranties can appeal to customers who are uncertain about how long they will keep the product as well as to customers who are uncertain about the product's reliability. Flexibility may be added to existing services in the form of monthly billing with month-by-month commitments, or by making existing warranties easier to cancel, with pro-rated refunds. This thesis studies flexible warranties from the perspectives of both the customer and the provider. We present a model of the customer's optimal coverage decisions under the objective of minimizing expected support costs over a random planning horizon. We show that under some mild conditions the customer's optimal coverage policy has a threshold structure. We also show through an analytical study and through numerical examples how flexible warranties can result in higher profits and higher attach rates. Chapter 4 examines the design and pricing of residual value warranties, which refund customers at the end of the warranty period based on their claim history. Traditional extended warranties for IT products do not differentiate customers according to their usage rates or operating environment. These warranties are priced to cover the costs of high-usage customers, who tend to experience more failures and are therefore more costly to support. This makes traditional warranties economically unattractive to low-usage customers. In this chapter, we introduce, design and price residual value warranties.
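Chapter 2's constant-adjusted-markup property is what collapses multi-product pricing to a one-dimensional search. The sketch below illustrates that reduction for the Multinomial Logit special case with product-specific price coefficients (a simplifying assumption; the chapter treats the general Nested Logit model): fix a common adjusted markup theta, recover each price as p_i = c_i + 1/beta_i + theta, and grid-search theta over a bounded interval. All parameter values are hypothetical.

```python
import numpy as np


def mnl_profit(theta, a, beta, c):
    """Expected profit when every product carries the same adjusted markup theta,
    i.e. p_i = c_i + 1/beta_i + theta, under an MNL model with utilities
    a_i - beta_i * p_i and an outside option of utility zero."""
    p = c + 1.0 / beta + theta
    w = np.exp(a - beta * p)
    q = w / (1.0 + w.sum())            # purchase probabilities
    return float(np.sum((p - c) * q))


# Hypothetical attractiveness, price sensitivities, and unit costs for 3 products.
a = np.array([1.0, 0.5, 1.5])
beta = np.array([0.8, 1.0, 1.2])
c = np.array([2.0, 1.5, 2.5])

thetas = np.linspace(0.0, 10.0, 2001)          # one-dimensional search interval
profits = [mnl_profit(t, a, beta, c) for t in thetas]
best = thetas[int(np.argmax(profits))]
print("optimal adjusted markup:", best, "prices:", c + 1.0 / beta + best)
```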
These warranties refund a part of the upfront price to customers who have zero or few claims according to a pre-determined refund schedule. By design, the net cost of these warranties is lower for light users than for heavy users. As a result, a residual value warranty can enable the provider to price-discriminate based on usage rates or operating conditions without the need to monitor individual customers' usage. Theoretical results and numerical experiments demonstrate how residual value warranties can appeal to a broader range of customers and significantly increase the provider's profits.Operations research, Industrial engineeringrw2267Industrial Engineering and Operations ResearchDissertationsA Simulation Model to Analyze the Impact of Golf Skills and a Scenario-based Approach to Options Portfolio Optimization
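A minimal numerical sketch of the single-dimensional reduction described in the Chapter 2 abstract above, specialized to the Multinomial Logit case: fixing a common adjusted markup reduces the multiproduct pricing problem to a scalar search. All numbers (costs, intercepts, price coefficients) are illustrative assumptions, not values from the thesis; the full Nested Logit model applies the same idea with an additional nest-level adjusted markup.

    import numpy as np
    from scipy.optimize import minimize_scalar

    c = np.array([10.0, 12.0, 15.0])   # unit costs (made up)
    a = np.array([1.0, 1.2, 0.8])      # attraction/quality intercepts (made up)
    b = np.array([0.30, 0.25, 0.20])   # product-specific price coefficients (made up)

    def negative_profit(theta):
        # impose the structural result: every product carries the same adjusted markup theta
        p = c + 1.0 / b + theta
        attraction = np.exp(a - b * p)              # MNL attraction values; outside option = 1
        share = attraction / (1.0 + attraction.sum())
        return -np.dot(p - c, share)                # negative expected profit

    res = minimize_scalar(negative_profit, bounds=(0.0, 50.0), method="bounded")
    print("common adjusted markup:", round(res.x, 3), "profit:", round(-res.fun, 3))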
http://academiccommons.columbia.edu/catalog/ac:143076
Ko, Soonminhttp://hdl.handle.net/10022/AC:P:12166Tue, 10 Jan 2012 14:41:51 +0000A simulation model of the game of golf is developed to analyze the impact of various skills (e.g., driving distance, directional accuracy, putting skill, and others) on golf scores. The course model includes realistic features of a golf course such as rough, sand, water, and trees. Golfer shot patterns are modeled with t distributions and mixtures of t and normal distributions since normal distributions do not provide good fits to the data. The model is calibrated to extensive data for amateur and professional golfers. The golf simulation is used to assess the impact of distance and direction on scores, to determine what factors separate pros from amateurs, and to determine the impact of course length on scores. In the second part of the thesis, we use a scenario-based approach to solve a portfolio optimization problem with options. The solution provides the optimal payoff profile given an investor's view of the future, his utility function or risk appetite, and the market prices of options. The scenario-based approach has several advantages over the traditional covariance matrix method, including additional flexibility in the choice of constraints and objective function.Engineering, Operations researchsk2822Industrial Engineering and Operations Research, BusinessDissertationsRisk Premia and Optimal Liquidation of Defaultable Securities
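A small sketch of how a heavy-tailed shot pattern of the kind described above can be sampled, using the standard construction of a multivariate t as a scale mixture of normals; the degrees of freedom and dispersion values are illustrative assumptions rather than the thesis's calibrated parameters.

    import numpy as np

    rng = np.random.default_rng(0)

    def shot_errors(n, df=4, sd_distance=12.0, sd_direction=8.0):
        """Distance/direction errors (in yards) drawn from a bivariate t distribution."""
        z = rng.normal(size=(n, 2)) * np.array([sd_distance, sd_direction])
        w = rng.chisquare(df, size=(n, 1)) / df
        return z / np.sqrt(w)            # scale mixture of normals = multivariate t

    errors = shot_errors(100000)
    print("fraction of shots more than 25 yards offline:", (abs(errors[:, 1]) > 25).mean())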
http://academiccommons.columbia.edu/catalog/ac:139526
Leung, Siu Tang; Liu, Penghttp://hdl.handle.net/10022/AC:P:11331Mon, 03 Oct 2011 10:19:01 +0000This paper studies the optimal timing to liquidate defaultable securities in a general intensity-based credit risk model under stochastic interest rate. We incorporate the potential price discrepancy between the market and investors, which is characterized by risk-neutral valuation under different default risk premia specifications. To quantify the value of optimally timing to sell, we introduce the delayed liquidation premium which is closely related to the stochastic bracket between the market price and a pricing kernel. We analyze the optimal liquidation policy for various credit derivatives. Our model serves as the building block for the sequential buying and selling problem. We also discuss the extensions to a jump-diffusion default intensity model as well as a defaultable equity model.Finance, Economic theorytl2497Industrial Engineering and Operations ResearchArticlesAdding Trust to P2P Distribution of Paid Content
http://academiccommons.columbia.edu/catalog/ac:138893
Sherman, Alex; Stavrou, Angelos; Nieh, Jason; Keromytis, Angelos D.; Stein, Clifford S.http://hdl.handle.net/10022/AC:P:11195Mon, 19 Sep 2011 12:56:04 +0000While peer-to-peer (P2P) file-sharing is a powerful and cost-effective content distribution model, most paid-for digital-content providers (CPs) use direct download to deliver their content. CPs are hesitant to rely on a P2P distribution model because it introduces a number of security concerns including content pollution by malicious peers, and lack of enforcement of authorized downloads. Furthermore, because users communicate directly with one another, the users can easily form illegal file-sharing clusters to exchange copyrighted content. Such exchange could hurt the content providers' profits. We present a P2P system TP2P, where we introduce a notion of trusted auditors (TAs). TAs are P2P peers that police the system by covertly monitoring and taking measures against misbehaving peers. This policing allows TP2P to enable a stronger security model making P2P a viable alternative for the distribution of paid digital content. Through analysis and simulation, we show the effectiveness of even a small number of TAs at policing the system. In a system with as many as 60% of misbehaving users, even a small number of TAs can detect 99% of illegal cluster formation. We develop a simple economic model to show that even with such a large presence of malicious nodes, TP2P can improve CP's profits (which could translate to user savings) by 62% to 122%, even while assuming conservative estimates of content and bandwidth costs. We implemented TP2P as a layer on top of BitTorrent and demonstrated experimentally using PlanetLab that our system provides trusted P2P file sharing with negligible performance overhead.Computer sciencejn234, ak2052, cs2035Computer Science, Industrial Engineering and Operations ResearchArticlesAccounting for Risk Aversion in Derivatives Purchase Timing
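One way to see why a small number of covert auditors can detect most cluster formation, under a deliberately simplified assumption that an illegal cluster recruits peers uniformly at random (the paper's analysis and simulations are richer than this back-of-the-envelope calculation):

    # Assumed recruitment model (not the paper's): a cluster recruits k peers at
    # random and is detected if at least one recruited peer is a covert trusted auditor.
    def detection_probability(ta_fraction, cluster_size):
        return 1.0 - (1.0 - ta_fraction) ** cluster_size

    for f in (0.01, 0.05, 0.10):
        print(f, [round(detection_probability(f, k), 3) for k in (10, 50, 100)])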
http://academiccommons.columbia.edu/catalog/ac:138783
Leung, Siu Tang; Ludkovski, Mikehttp://hdl.handle.net/10022/AC:P:11191Fri, 16 Sep 2011 09:45:22 +0000We study the problem of optimal timing to buy/sell derivatives by a risk-averse agent in incomplete markets. Adopting the exponential utility indifference valuation, we investigate this timing flexibility and the associated delayed purchase premium. This leads to a stochastic control and optimal stopping problem that combines the observed market price dynamics and the agent's risk preferences. Our results extend recent work on indifference valuation of American options, as well as the authors' first paper (Leung and Ludkovski, SIAM J. Fin. Math., 2011). In the case of Markovian models of contracts on non-traded assets, we provide analytical characterizations and numerical studies of the optimal purchase strategies, with applications to both equity and credit derivatives.Finance, Economic theorytl2497Industrial Engineering and Operations ResearchArticlesAlgorithms for Sparse and Low-Rank Optimization: Convergence, Complexity and Applications
http://academiccommons.columbia.edu/catalog/ac:137539
Ma, ShiqianMon, 22 Aug 2011 11:53:09 +0000Solving optimization problems with sparse or low-rank optimal solutions has been an important topic since the recent emergence of compressed sensing and its matrix extensions such as the matrix rank minimization and robust principal component analysis problems. Compressed sensing enables one to recover a signal or image with fewer observations than the "length" of the signal or image, and thus provides potential breakthroughs in applications where data acquisition is costly. However, the potential impact of compressed sensing cannot be realized without efficient optimization algorithms that can handle extremely large-scale and dense data from real applications. Although the convex relaxations of these problems can be reformulated as either linear programming, second-order cone programming or semidefinite programming problems, the standard methods for solving these relaxations are not applicable because the problems are usually of huge size and contain dense data. In this dissertation, we give efficient algorithms for solving these "sparse" optimization problems and analyze the convergence and iteration complexity properties of these algorithms. Chapter 2 presents algorithms for solving the linearly constrained matrix rank minimization problem. The tightest convex relaxation of this problem is the linearly constrained nuclear norm minimization. Although the latter can be cast and solved as a semidefinite programming problem, such an approach is computationally expensive when the matrices are large. In Chapter 2, we propose fixed-point and Bregman iterative algorithms for solving the nuclear norm minimization problem and prove convergence of the first of these algorithms. By using a homotopy approach together with an approximate singular value decomposition procedure, we get a very fast, robust and powerful algorithm, which we call FPCA (Fixed Point Continuation with Approximate SVD), that can solve very large matrix rank minimization problems (a one-step sketch of the shrinkage iteration follows this record). Our numerical results on randomly generated and real matrix completion problems demonstrate that this algorithm is much faster and provides much better recoverability than semidefinite programming solvers such as SDPT3. For example, our algorithm can recover 1000 × 1000 matrices of rank 50 with a relative error of 10^-5 in about 3 minutes by sampling only 20 percent of the elements. We know of no other method that achieves as good recoverability. Numerical experiments on online recommendation, DNA microarray data sets and image inpainting problems demonstrate the effectiveness of our algorithms. In Chapter 3, we study the convergence/recoverability properties of the fixed point continuation algorithm and its variants for matrix rank minimization. Heuristics for determining the rank of the matrix when its true rank is not known are also proposed. Some of these algorithms are closely related to greedy algorithms in compressed sensing. Numerical results for these algorithms for solving linearly constrained matrix rank minimization problems are reported. Chapters 4 and 5 consider alternating direction type methods for solving composite convex optimization problems. We present in Chapter 4 alternating linearization algorithms that are based on an alternating direction augmented Lagrangian approach for minimizing the sum of two convex functions.
Our basic methods require at most O(1/ε) iterations to obtain an ε-optimal solution, while our accelerated (i.e., fast) versions require at most O(1/√ε) iterations, with little change in the computational effort required at each iteration. For the more general problem of minimizing the sum of K convex functions, we propose multiple-splitting algorithms. We propose both basic and accelerated algorithms with O(1/ε) and O(1/√ε) iteration complexity bounds for obtaining an ε-optimal solution. To the best of our knowledge, the complexity results presented in these two chapters are the first ones of this type that have been given for splitting and alternating direction type methods. Numerical results on various applications in sparse and low-rank optimization, including compressed sensing, matrix completion, image deblurring, and robust principal component analysis, are reported to demonstrate the efficiency of our methods.Operations researchsm2756Industrial Engineering and Operations ResearchDissertationsMany-Server Queues with Time-Varying Arrivals, Customer Abandonment, and non-Exponential Distributions
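A minimal sketch of the fixed-point (shrinkage) step that underlies FPCA as described above: a gradient step on the data-fit term followed by singular value thresholding, the prox operator of the nuclear norm. The operator A, data b, step size tau and threshold mu are illustrative placeholders; the actual FPCA algorithm adds the homotopy/continuation strategy and an approximate SVD.

    import numpy as np

    def singular_value_threshold(Y, threshold):
        """Prox operator of the nuclear norm: shrink the singular values of Y."""
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        return (U * np.maximum(s - threshold, 0.0)) @ Vt

    def fixed_point_step(X, A, b, tau, mu):
        # gradient step on the least-squares fit, then singular value shrinkage
        grad = (A.T @ (A @ X.reshape(-1) - b)).reshape(X.shape)
        return singular_value_threshold(X - tau * grad, tau * mu)

    rng = np.random.default_rng(0)
    X0 = np.zeros((20, 20))
    A = rng.normal(size=(150, 400))        # linear measurement operator acting on vec(X)
    b = A @ rng.normal(size=400)           # observed measurements
    X1 = fixed_point_step(X0, A, b, tau=1e-3, mu=1.0)
    print(np.linalg.matrix_rank(X1))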
http://academiccommons.columbia.edu/catalog/ac:136569
Liu, Yunanhttp://hdl.handle.net/10022/AC:P:10801Tue, 02 Aug 2011 15:07:23 +0000This thesis develops deterministic heavy-traffic fluid approximations for many-server stochastic queueing models. The queueing models, with many homogeneous servers working independently in parallel, are intended to model large-scale service systems such as call centers and health care systems. Such models also have been employed to study communication, computing and manufacturing systems. The heavy-traffic approximations yield relatively simple formulas for quantities describing system performance, such as the expected number of customers waiting in the queue. The new performance approximations are valuable because, in the generality considered, these complex systems are not amenable to exact mathematical analysis. Since the approximate performance measures can be computed quite rapidly, they usefully complement more cumbersome computer simulation. Thus these heavy-traffic approximations can be used to improve capacity planning and operational control. More specifically, the heavy-traffic approximations here are for large-scale service systems, having many servers and a high arrival rate. The main focus is on systems that have time-varying arrival rates and staffing functions. The system is considered under the assumption that there are alternating periods of overloading and underloading, which commonly occurs when service providers are unable to adjust the staffing frequently enough to economically meet demand at all times. The models also allow the realistic features of customer abandonment and non-exponential probability distributions for the service times and the times customers are willing to wait before abandoning. These features make the overall stochastic model non-Markovian and thus very difficult to analyze directly. This thesis provides effective algorithms to compute approximate performance descriptions for these complex systems. These algorithms are based on ordinary differential equations and fixed point equations associated with contraction operators. Simulation experiments are conducted to verify that the approximations are effective. This thesis consists of four pieces of work, each presented in one chapter. The first chapter (Chapter 2) develops the basic fluid approximation for a non-Markovian many-server queue with time-varying arrival rate and staffing (a sketch of the Markovian special case follows this record). The second chapter (Chapter 3) extends the fluid approximation to systems with complex network structure and Markovian routing of customers to other queues after completing service at each queue. The extension to open networks of queues has important applications. For one example, in hospitals, patients usually move among different units such as emergency rooms, operating rooms, and intensive care units. For another example, in manufacturing systems, individual products visit different work stations one or more times. The open network fluid model has multiple queues, each of which has a time-varying arrival rate and staffing function. The third chapter (Chapter 4) studies the large-time asymptotic dynamics of a single fluid queue. When the model parameters are constant, convergence to the steady state as time evolves is established. When the arrival rates are periodic functions, such as in service systems with daily or seasonal cycles, the existence of a periodic steady state and the convergence to that periodic steady state as time evolves are established. Conditions are provided under which this convergence is exponentially fast.
The fourth chapter (Chapter 5) uses a fluid approximation to gain insight into nearly periodic behavior seen in overloaded stationary many-server queues with customer abandonment and nearly deterministic service times. Deterministic service times are of applied interest because computer-generated service times, such as automated messages, may well be deterministic, and computer-generated service is becoming more prevalent. With deterministic service times, if all the servers remain busy for a long interval of time, then the times customers enter service assume a periodic behavior throughout that interval. In overloaded large-scale systems, these intervals tend to persist for a long time, producing nearly periodic behavior. To gain insight, a heavy-traffic limit theorem is established showing that the fluid model arises as the many-server heavy-traffic limit of a sequence of appropriately scaled queueing models, all having these deterministic service times. Simulation experiments confirm that the transient behavior of the limiting fluid model provides a useful description of the transient performance of the queueing system. However, unlike the asymptotic loss of memory results in the previous chapter for service times with densities, the stationary fluid model with deterministic service times does not approach steady state as time evolves independent of the initial conditions. Since the queueing model with deterministic service times approaches a proper steady state as time evolves, this model with deterministic service times provides an example where the limit interchange (limiting steady state as time evolves and heavy traffic as scale increases) is not valid.Operations researchyl2342Industrial Engineering and Operations ResearchDissertationsFirst Order Methods for Large-Scale Sparse Optimization
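To make the ODE-based fluid approximation concrete, here is a sketch of the Markovian (exponential) special case of the time-varying fluid model described above, integrated with a simple Euler scheme; the arrival-rate, staffing, and rate values are illustrative assumptions, and the thesis treats general non-exponential service and patience distributions.

    import numpy as np

    mu, theta = 1.0, 0.5                       # service and abandonment rates (made up)
    lam = lambda t: 100.0 + 20.0 * np.sin(t)   # time-varying arrival rate
    staff = lambda t: 100.0                    # staffing function

    def fluid_path(x0=0.0, horizon=20.0, dt=0.001):
        t, x, out = 0.0, x0, []
        while t < horizon:
            in_service = min(x, staff(t))        # fluid in service
            in_queue = max(x - staff(t), 0.0)    # fluid waiting, subject to abandonment
            x += dt * (lam(t) - mu * in_service - theta * in_queue)
            out.append((t, x))
            t += dt
        return np.array(out)

    print(fluid_path()[-1])                    # (time, total fluid content) near the horizon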
http://academiccommons.columbia.edu/catalog/ac:135750
Aybat, Necdet Serhathttp://hdl.handle.net/10022/AC:P:10735Fri, 15 Jul 2011 12:00:39 +0000In today's digital world, improvements in acquisition and storage technology are allowing us to acquire more accurate and finer application-specific data, whether it be tick-by-tick price data from the stock market or frame-by-frame high resolution images and videos from surveillance systems, remote sensing satellites and biomedical imaging systems. Many important large-scale applications can be modeled as optimization problems with millions of decision variables. Very often, the desired solution is sparse in some form, either because the optimal solution is indeed sparse, or because a sparse solution has some desirable properties. Sparse and low-rank solutions to large scale optimization problems are typically obtained by regularizing the objective function with L1 and nuclear norms, respectively. Practical instances of these problems are very high dimensional (~ million variables) and typically have dense and ill-conditioned data matrices. Therefore, interior point based methods are ill-suited for solving these problems. The large scale of these problems forces one to use the so-called first-order methods that only use gradient information at each iterate. These methods are efficient for problems with a "simple" feasible set such that Euclidean projections onto the set can be computed very efficiently, e.g. the positive orthant, the n-dimensional hypercube, the simplex, and the Euclidean ball. When the feasible set is "simple", the subproblems used to compute the iterates can be solved efficiently. Unfortunately, most applications do not have "simple" feasible sets. A commonly used technique to handle general constraints is to relax them so that the resulting problem has only "simple" constraints, and then to solve a single penalty or Lagrangian problem. However, these methods generally do not guarantee convergence to feasibility. The focus of this thesis is on developing new fast first-order iterative algorithms for computing sparse and low-rank solutions to large-scale optimization problems with very mild restrictions on the feasible set - we allow linear equalities, norm-ball and conic inequalities, and also certain non-smooth convex inequalities to define the constraint set. The proposed algorithms guarantee that the sequence of iterates converges to an optimal feasible solution of the original problem, and each subproblem is an optimization problem with a "simple" feasible set. In addition, for any eps > 0, by relaxing the feasibility requirement of each iteration, the proposed algorithms can compute an eps-optimal and eps-feasible solution within O(log(1/eps)) iterations which requires O(1/eps) basic operations in the worst case. Algorithm parameters do not depend on eps > 0. Thus, these new methods compute iterates arbitrarily close to feasibility and optimality as they continue to run. Moreover, the computational complexity of each basic operation for these new algorithms is the same as that of existing first-order algorithms running on "simple" feasible sets. Our numerical studies showed that only O(log(1/eps)) basic operations, as opposed to O(1/eps) worst case theoretical bound, are needed for obtaining eps-feasible and eps-optimal solutions. 
We have implemented these new first-order methods for the following problem classes: Basis Pursuit (BP) in compressed sensing, Matrix Rank Minimization, Principal Component Pursuit (PCP) and Stable Principal Component Pursuit (SPCP) in principal component analysis. These problems have applications in signal and image processing, video surveillance, face recognition, latent semantic indexing, and ranking and collaborative filtering. To the best of our knowledge, an algorithm for the SPCP problem that has O(1/eps) iteration complexity and a per-iteration complexity equal to that of a singular value decomposition is given for the first time.Operations research, Applied mathematicsnsa2106Industrial Engineering and Operations ResearchDissertationsQuantitative Modeling of Credit Derivatives
http://academiccommons.columbia.edu/catalog/ac:131549
Kan, Yu Hanghttp://hdl.handle.net/10022/AC:P:10272Thu, 05 May 2011 13:46:38 +0000The recent financial crisis has revealed major shortcomings in the existing approaches for modeling credit derivatives. This dissertation studies various issues related to the modeling of credit derivatives: hedging of portfolio credit derivatives, calibration of dynamic credit models, and modeling of credit default swap portfolios. In the first part, we compare the performance of various hedging strategies for index collateralized debt obligation (CDO) tranches during the recent financial crisis. Our empirical analysis shows evidence for market incompleteness: a large proportion of risk in the CDO tranches appears to be unhedgeable. We also show that, unlike what is commonly assumed, dynamic models do not necessarily perform better than static models, nor do high-dimensional bottom-up models perform better than simpler top-down models. On the other hand, model-free regression-based hedging appears to be surprisingly effective when compared to other hedging strategies. The second part is devoted to computational methods for constructing an arbitrage-free CDO pricing model compatible with observed CDO prices. This method makes use of an inversion formula for computing the aggregate default rate in a portfolio from expected tranche notionals, and a quadratic programming method for recovering expected tranche notionals from CDO spreads. Comparing this approach to other calibration methods, we find that model-dependent quantities such as the forward starting tranche spreads and jump-to-default ratios are quite sensitive to the calibration method used, even within the same model class. The last chapter of this dissertation focuses on statistical modeling of credit default swaps (CDSs). We undertake a systematic study of the univariate and multivariate properties of CDS spreads, using time series of the CDX Investment Grade index constituents from 2005 to 2009. We then propose a heavy-tailed multivariate time series model for CDS spreads that captures these properties. Our model can be used as a framework for measuring and managing the risk of CDS portfolios, and is shown to have better performance than the affine jump-diffusion or random walk models for predicting loss quantiles of various CDS portfolios.Finance, Mathematicsyk2246Industrial Engineering and Operations ResearchDissertationsContagion and Systemic Risk in Financial Networks
http://academiccommons.columbia.edu/catalog/ac:131474
Moussa, Amalhttp://hdl.handle.net/10022/AC:P:10249Fri, 29 Apr 2011 18:12:27 +0000The 2007-2009 financial crisis has shed light on the importance of contagion and systemic risk, and revealed the lack of adequate indicators for measuring and monitoring them. This dissertation addresses these issues and leads to several recommendations for the design of an improved assessment of systemic importance, improved rating methods for structured finance securities, and their use by investors and risk managers. Using a complete data set of all mutual exposures and capital levels of financial institutions in Brazil in 2007 and 2008, we explore in chapter 2 the structure and dynamics of the Brazilian financial system. We show that the Brazilian financial system exhibits a complex network structure characterized by a strong degree of heterogeneity in connectivity and exposure sizes across institutions, which is qualitatively and quantitatively similar to the statistical features observed in other financial systems. We find that the Brazilian financial network is well represented by a directed scale-free network, rather than a small world network. Based on these observations, we propose a stochastic model for the structure of banking networks, representing them as a directed weighted scale-free network with power-law distributions for the in-degree and out-degree of nodes and a Pareto distribution for exposures. This model may then be used for simulation studies of contagion and systemic risk in networks. We propose in chapter 3 a quantitative methodology for assessing contagion and systemic risk in a network of interlinked institutions. We introduce the Contagion Index as a metric of the systemic importance of a single institution or a set of institutions that combines the effects of both common market shocks to portfolios and contagion through counterparty exposures (a minimal cascade sketch follows this record). Using a directed scale-free graph simulation of the financial system, we study the sensitivity of contagion to a change in aggregate network parameters: connectivity, concentration of exposures, heterogeneity in degree distribution and network size. More concentrated and more heterogeneous networks are found to be more resilient to contagion. The impact of connectivity is more nuanced: in well-capitalized networks, increasing connectivity improves the resilience to contagion when the initial level of connectivity is high, but increases contagion when the initial level of connectivity is low. In undercapitalized networks, increasing connectivity tends to increase the severity of contagion. We also study the sensitivity of contagion to local measures of connectivity and concentration across counterparties (the counterparty susceptibility and local network frailty), which are found to have a monotonically increasing relationship with the systemic risk of an institution. Requiring a minimum (aggregate) capital ratio is shown to reduce the systemic impact of defaults of large institutions; we show that the same effect may be achieved with less capital by imposing such capital requirements only on systemically important institutions and those exposed to them. In chapter 4, we apply this methodology to the study of the Brazilian financial system. Using the Contagion Index, we study the potential for default contagion and systemic risk in the Brazilian system and analyze the contribution of balance sheet size and network structure to systemic risk.
Our study reveals that, aside from balance sheet size, the network-based local measures of connectivity and concentration of exposures across counterparties introduced in chapter 3, the counterparty susceptibility and local network frailty, contribute significantly to the systemic importance of an institution in the Brazilian network. Thus, imposing an upper bound on these variables could help reduce contagion. We examine the impact of various capital requirements on the extent of contagion in the Brazilian financial system, and show that targeted capital requirements achieve the same reduction in systemic risk with lower capital requirements for financial institutions. The methodology we proposed in chapter 3 for estimating contagion and systemic risk requires visibility into the entire network structure. Reconstructing bilateral exposures from balance sheet data is then a question of interest in a financial system where bilateral exposures are not disclosed. We propose in chapter 5 two methods to derive a distribution of bilateral exposures matrices. The first method attempts to recover the balance sheet assets and liabilities "sample by sample". Each sample of the bilateral exposures matrix is the solution of a relative entropy minimization problem subject to the balance sheet constraints. However, a solution to this problem does not always exist when dealing with sparse sample matrices. Thus, we propose a second method that attempts to recover the assets and liabilities "in the mean". This approach is the analogue of the Weighted Monte Carlo method introduced by Avellaneda et al. (2001). We first simulate independent samples of the bilateral exposures matrix from a relevant prior distribution on the network structure, then we compute posterior probabilities by maximizing the entropy under the constraints that the balance sheet assets and liabilities are recovered in the mean. We discuss the pros and cons of each approach and explain how each could be used to detect systemically important institutions in the financial system. The recent crisis has also raised many questions regarding the meaning of structured finance credit ratings issued by rating agencies and the methodology behind them. Chapter 6 aims at clarifying some misconceptions related to structured finance ratings and how they are commonly interpreted: we discuss the comparability of structured finance ratings with bond ratings, the interaction between the rating procedure and the tranching procedure and its consequences for the stability of structured finance ratings in time. These insights are illustrated in a factor model by simulating rating transitions for CDO tranches using a nested Monte Carlo method. In particular, we show that the downgrade risk of a CDO tranche can be quite different from a bond with the same initial rating. Structured finance ratings follow path-dependent dynamics that cannot be adequately described, as usually done, by a matrix of transition probabilities. Therefore, a simple labeling via default probability or expected loss does not sufficiently discriminate their downgrade risk. We propose to supplement ratings with indicators of downgrade risk. To overcome some of the drawbacks of existing rating methods, we suggest a risk-based rating procedure for structured products.
Finally, we formulate a series of recommendations regarding the use of credit ratings for CDOs and other structured credit instruments.Finance, Statisticsam2810Statistics, Industrial Engineering and Operations ResearchDissertationsA Case for P2P Delivery of Paid Content
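As a companion to the contagion analysis summarized in this record, a minimal default-cascade sketch on a toy exposure network: an initial failure wipes out counterparties' exposures to the failed bank, and any bank whose cumulative losses reach its capital fails in turn. The exposure matrix and capital levels are made up, and the sketch ignores the common market shocks that the Contagion Index of chapter 3 also incorporates.

    import numpy as np

    exposures = np.array([[ 0., 20.,  5.],   # exposures[i, j] = loss of bank i if bank j defaults
                          [30.,  0., 10.],
                          [ 5., 15.,  0.]])
    capital = np.array([25., 35., 12.])

    def cascade(initial_default):
        defaulted = {initial_default}
        while True:
            losses = exposures[:, sorted(defaulted)].sum(axis=1)
            newly = {i for i in range(len(capital))
                     if i not in defaulted and losses[i] >= capital[i]}
            if not newly:
                return defaulted
            defaulted |= newly

    print(cascade(1))        # banks that end up in default if bank 1 fails first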
http://academiccommons.columbia.edu/catalog/ac:110614
Sherman, Alex; Stavrou, Angelos; Nieh, Jason; Stein, Clifford S.; Keromytis, Angelos D.http://hdl.handle.net/10022/AC:P:29479Wed, 27 Apr 2011 16:19:27 +0000P2P file sharing provides a powerful content distribution model by leveraging users' computing and bandwidth resources. However, companies have been reluctant to rely on P2P systems for paid content distribution due to their inability to limit the exploitation of these systems for free file sharing. We present TP2, a system that combines the more cost-effective and scalable distribution capabilities of P2P systems with a level of trust and control over content distribution similar to direct download content delivery networks. TP2 uses two key mechanisms that can be layered on top of existing P2P systems. First, it provides strong authentication to prevent free file sharing in the system. Second, it introduces a new notion of trusted auditors to detect and limit malicious attempts to gain information about participants in the system to facilitate additional out-of-band free file sharing. We analyze TP2 by modeling it as a novel game between malicious users who try to form free file sharing clusters and trusted auditors who curb the growth of such clusters. Our analysis shows that a small fraction of trusted auditors is sufficient to protect the P2P system against unauthorized file sharing. Using a simple economic model, we further show that TP2 provides a more cost-effective content distribution solution, resulting in higher profits for a content provider even in the presence of a large percentage of malicious users. Finally, we implemented TP2 on top of BitTorrent and use PlanetLab to show that our system can provide trusted P2P file sharing with negligible performance overhead.Computer sciencejn234, cs2035, ak2052Computer Science, Industrial Engineering and Operations ResearchTechnical reportsMitigating the Effect of Free-Riders in BitTorrent using Trusted Agents
http://academiccommons.columbia.edu/catalog/ac:110826
Sherman, Alex; Stavrou, Angelos; Nieh, Jason; Stein, Clifford S.http://hdl.handle.net/10022/AC:P:29544Wed, 27 Apr 2011 09:56:20 +0000Even though Peer-to-Peer (P2P) systems present a cost-effective and scalable solution to content distribution, most entertainment, media and software content providers continue to rely on expensive, centralized solutions such as Content Delivery Networks. One of the main reasons is that the current P2P systems cannot guarantee reasonable performance as they depend on the willingness of users to contribute bandwidth. Moreover, even systems like BitTorrent, which employ a tit-for-tat protocol to encourage fair bandwidth exchange between users, are prone to free-riding (i.e. peers that do not upload). Our experiments on PlanetLab extend previous research (e.g. LargeViewExploit, BitTyrant) demonstrating that such selfish behavior can seriously degrade the performance of regular users in many more scenarios beyond simple free-riding: we observed an overhead of up to 430% for 80% of free-riding identities easily generated by a small set of selfish users. To mitigate the effects of selfish users, we propose a new P2P architecture that classifies peers with the help of a small number of trusted nodes that we call Trusted Auditors (TAs). TAs participate in P2P download like regular clients and detect free-riding identities by observing their neighbors' behavior. Using TAs, we can place compliant users into a separate service pool, resulting in better performance. Furthermore, we show that TAs are more effective at ensuring the performance of the system than a mere increase in bandwidth capacity: with 80% free-riding identities, a single-TA system has a 6% download-time overhead, while without the TA, even with three times the bandwidth capacity, we measure a 100% overhead.Computer sciencejn234, cs2035Computer Science, Industrial Engineering and Operations ResearchTechnical reportsFairTorrent: Bringing Fairness to Peer-to-Peer Systems
http://academiccommons.columbia.edu/catalog/ac:110957
Sherman, Alex; Nieh, Jason; Stein, Clifford S.http://hdl.handle.net/10022/AC:P:29585Tue, 26 Apr 2011 12:15:48 +0000The lack of fair bandwidth allocation in Peer-to-Peer systems causes many performance problems, including users being disincentivized from contributing upload bandwidth, free riders taking as much from the system as possible while contributing as little as possible, and a lack of quality-of-service guarantees to support streaming applications. We present FairTorrent, a simple distributed scheduling algorithm for Peer-to-Peer systems that fosters fair bandwidth allocation among peers. For each peer, FairTorrent maintains a deficit counter, which represents the number of bytes uploaded to that peer minus the number of bytes downloaded from it. It then uploads to the peer with the lowest deficit counter. FairTorrent automatically adjusts to variations in bandwidth among peers and is resilient to exploitation by free-riding peers. We have implemented FairTorrent inside a BitTorrent client without modifications to the BitTorrent protocol, and compared its performance on PlanetLab against other widely-used BitTorrent clients. Our results show that FairTorrent can provide up to two orders of magnitude better fairness and up to five times better download performance for high-contributing peers. It thereby gives users an incentive to contribute more bandwidth and improves overall system performance.Computer sciencejn234, cs2035Computer Science, Industrial Engineering and Operations ResearchTechnical reportsGroup Ratio Round-Robin: O(1) Proportional Share Scheduling for Uniprocessor and Multiprocessor Systems
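A minimal sketch of the deficit-counter rule just described; the class and peer names are illustrative, and this is not the authors' implementation, which runs inside a real BitTorrent client.

    class FairTorrentScheduler:
        """Toy version of the FairTorrent deficit-counter rule."""

        def __init__(self, peers):
            self.deficit = {p: 0 for p in peers}   # bytes uploaded to peer minus bytes downloaded from it

        def on_download(self, peer, nbytes):
            self.deficit[peer] -= nbytes

        def on_upload(self, peer, nbytes):
            self.deficit[peer] += nbytes

        def next_upload_target(self):
            # always serve the peer we currently "owe" the most
            return min(self.deficit, key=self.deficit.get)

    sched = FairTorrentScheduler(["peer_a", "peer_b", "peer_c"])
    sched.on_download("peer_b", 4096)              # peer_b just sent us a block
    print(sched.next_upload_target())              # -> peer_b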
http://academiccommons.columbia.edu/catalog/ac:109814
Caprita, Bogdan; Chan, Wong Chun; Nieh, Jason; Stein, Clifford S.; Zheng, Haoqianghttp://hdl.handle.net/10022/AC:P:29230Fri, 22 Apr 2011 13:48:44 +0000Proportional share resource management provides a flexible and useful abstraction for multiplexing time-shared resources. We present Group Ratio Round-Robin (GR3), the first proportional share scheduler that combines accurate proportional fairness scheduling behavior with O(1) scheduling overhead on both uniprocessor and multiprocessor systems. GR3 uses a novel client grouping strategy to organize clients into groups of similar processor allocations which can be more easily scheduled. Using this grouping strategy, GR3 combines the benefits of low overhead round-robin execution with a novel ratio-based scheduling algorithm. GR3 can provide fairness within a constant factor of the ideal generalized processor sharing model for client weights with a fixed upper bound and preserves its fairness properties on multiprocessor systems. We have implemented GR3 in Linux and measured its performance against other schedulers commonly used in research and practice, including the standard Linux scheduler, Weighted Fair Queueing, Virtual-Time Round-Robin, and Smoothed Round-Robin. Our experimental results demonstrate that GR3 can provide much lower scheduling overhead and much better scheduling accuracy in practice than these other approaches.Computer sciencejn234, cs2035Computer Science, Industrial Engineering and Operations ResearchTechnical reportsLearning mixtures of product distributions over discrete domains
http://academiccommons.columbia.edu/catalog/ac:110398
Feldman, Jon; O'Donnell, Ryan; Servedio, Rocco Anthonyhttp://hdl.handle.net/10022/AC:P:29411Thu, 21 Apr 2011 12:41:48 +0000We consider the problem of learning mixtures of product distributions over discrete domains in the distribution learning framework introduced by Kearns et al. We give a poly(n/ε) time algorithm for learning a mixture of k arbitrary product distributions over the n-dimensional Boolean cube {0,1}^n to accuracy ε, for any constant k. Previous polynomial time algorithms could only achieve this for k = 2 product distributions; our result answers an open question stated independently by Cryan and by Freund and Mansour. We further give evidence that no polynomial time algorithm can succeed when k is superconstant, by reduction from a notorious open problem in PAC learning. Finally, we generalize our poly(n/ε) time algorithm to learn any mixture of k = O(1) product distributions over {0, 1, ..., b}^n, for any b = O(1).Computer scienceras2105Industrial Engineering and Operations Research, Computer ScienceTechnical reportsGrouped Distributed Queues: Distributed Queue, Proportional Share Multiprocessor Scheduling
http://academiccommons.columbia.edu/catalog/ac:110491
Caprita, Bogdan; Nieh, Jason; Stein, Clifford S.http://hdl.handle.net/10022/AC:P:29440Thu, 21 Apr 2011 09:45:32 +0000We present Grouped Distributed Queues (GDQ), the first proportional share scheduler for multiprocessor systems that, by using a distributed queue architecture, scales well with a large number of processors and processes. GDQ achieves accurate proportional fairness scheduling with only O(1) scheduling overhead. GDQ takes a novel approach to distributed queuing: instead of creating per-processor queues that need to be constantly balanced to achieve any measure of proportional sharing fairness, GDQ uses a simple grouping strategy to organize processes into groups based on similar processor time allocation rights, and then assigns processors to groups based on aggregate group shares. Group membership of processes is static, and fairness is achieved by dynamically migrating processors among groups. The set of processors working on a group use simple, low-overhead round-robin queues, while processor reallocation among groups is achieved using a new multiprocessor adaptation of the well-known Weighted Fair Queuing algorithm. By commoditizing processors and decoupling their allocation from process scheduling, GDQ provides, with only constant scheduling cost, fairness within a constant of the ideal generalized processor sharing model for process weights with a fixed upper bound. We have implemented GDQ in Linux and measured its performance. Our experimental results show that GDQ has low overhead and scales well with the number of processors.Computer sciencejn234, cs2035Computer Science, Industrial Engineering and Operations ResearchTechnical reportsOptimal adaptive control of cascading power grid failures
http://academiccommons.columbia.edu/catalog/ac:129328
Bienstock, Danielhttp://hdl.handle.net/10022/AC:P:9744Mon, 20 Dec 2010 14:40:12 +0000Power grids have long been a source of interesting optimization problems. Perhaps best known among the optimization community are the unit commitment problems and related generator dispatching tasks. However, recent blackout events have renewed interest in problems related to grid vulnerabilities. A difficult problem that has been widely studied, the N-K problem, concerns the detection of small-cardinality sets of lines or buses whose simultaneous outage could develop into a significant failure event. This is a hard combinatorial problem which, unlike the typical formulations for the unit commitment problem, includes a detailed model of flows in the grid. A different set of algorithmic questions concerns how to react to protect a grid when a significant event has taken place. This is the outlook that we take in this paper. In this context, the central modeling ingredient is that power grids display cascading behavior. In this paper, building on prior models for cascades, we consider an affine, adaptive, distributed control algorithm that is computed at the start of the cascade and deployed during the cascade. The control sheds demand as a function of observations of the state of the grid, with the objective of terminating the cascade with a minimum amount of demand lost. The optimization problem handled at the start of the cascade computes the coefficients in the affine control (one set of coefficients per demand bus). We present numerical experiments with parallel implementations of our algorithms, using as data a snapshot of the U.S. Eastern Interconnect, with approximately 15000 buses and 23000 lines.Electrical engineeringdb17Industrial Engineering and Operations Research, Applied Physics and Applied MathematicsArticlesBehavior-Based Modeling and Its Application to Email Analysis
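A rough sketch of the shape of such an affine, observation-driven shedding rule: each demand bus cuts load by an affine function of a locally observed overload signal. The coefficients and the scalar signal here are illustrative stand-ins, since the paper's control is computed by an optimization over a detailed cascade model of the grid.

    def shed_demand(demand, overload_signal, a, b):
        """Apply a per-bus affine shedding rule; coefficients a, b stand in for the
        values the start-of-cascade optimization would compute."""
        new_demand = []
        for d, y, ai, bi in zip(demand, overload_signal, a, b):
            cut = max(0.0, ai + bi * y)        # affine function of the local observation
            new_demand.append(max(0.0, d - cut))
        return new_demand

    # toy numbers: two demand buses, the first one observes a 20% overload signal
    print(shed_demand(demand=[100.0, 80.0], overload_signal=[0.2, 0.0],
                      a=[0.0, 0.0], b=[50.0, 50.0]))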
http://academiccommons.columbia.edu/catalog/ac:125674
Stolfo, Salvatore; Hershkop, Shlomo; Hu, Chia-wei; Li, Wei-Jen; Nimeskern, Olivier; Wang, Kehttp://hdl.handle.net/10022/AC:P:8686Wed, 28 Apr 2010 12:52:39 +0000The Email Mining Toolkit (EMT) is a data mining system that computes behavior profiles or models of user email accounts. These models may be used for a multitude of tasks including forensic analyses and detection tasks of value to law enforcement and intelligence agencies, as well as for other typical tasks such as virus and spam detection. To demonstrate the power of the methods, we focus on the application of these models to detect the early onset of a viral propagation without the "content-based" (or signature-based) analysis in common use in virus scanners. We present several experiments using real email from 15 users with injected simulated viral emails and describe how the combination of different behavior models improves overall detection rates. The performance results vary depending upon parameter settings, approaching 99% true positive (TP) (percentage of viral emails caught) in general cases with 0.38% false positive (FP) (percentage of emails with attachments that are mislabeled as viral). The models used for this study are based upon volume and velocity statistics of a user's email rate and an analysis of the user's (social) cliques revealed in the person's email behavior. We show by way of simulation that virus propagations are detectable since viruses may emit emails at rates different from what human behavior suggests is normal, and email is directed to groups of recipients in ways that violate the users' typical communications with their social groups.Computer sciencesjs11, sh553, ch176Computer Science, Industrial Engineering and Operations ResearchArticlesContinuity of a queueing integral representation in the M1 topology
http://academiccommons.columbia.edu/catalog/ac:125349
Pang, Guodong; Whitt, Wardhttp://hdl.handle.net/10022/AC:P:8584Fri, 02 Apr 2010 16:18:37 +0000We establish continuity of the integral representation y(t) = x(t) + ∫_0^t h(y(s)) ds, t ≥ 0, mapping a function x into a function y when the underlying function space D is endowed with the Skorohod M1 topology. We apply this integral representation with the continuous mapping theorem to establish heavy-traffic stochastic-process limits for many-server queueing models when the limit process has jumps unmatched in the converging processes as can occur with bursty arrival processes or service interruptions. The proof of M1-continuity is based on a new characterization of the M1 convergence, in which the time portions of the parametric representations are absolutely continuous with respect to Lebesgue measure, and the derivatives are uniformly bounded and converge in L1.gp2224, ww2040Industrial Engineering and Operations ResearchArticlesThe N-k Problem in Power Grids: New Models, Formulations and Numerical Experiments (Extended Version)
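A small numerical sketch of the integral representation above: given x and h, the function y can be computed by a simple forward (Euler-type) recursion on a grid. The particular choices of x, h, and grid size are illustrative only.

    import numpy as np

    def solve_integral_representation(x, h, horizon=1.0, n=1000):
        """Forward recursion for y(t) = x(t) + integral_0^t h(y(s)) ds on a uniform grid."""
        t = np.linspace(0.0, horizon, n + 1)
        dt = horizon / n
        y = np.empty(n + 1)
        y[0] = x(0.0)
        integral = 0.0
        for k in range(n):
            integral += h(y[k]) * dt       # accumulate the integral term
            y[k + 1] = x(t[k + 1]) + integral
        return t, y

    t, y = solve_integral_representation(x=lambda t: 1.0 + t, h=lambda v: -0.5 * v)
    print(y[-1])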
http://academiccommons.columbia.edu/catalog/ac:125318
Bienstock, Daniel; Verma, Abhinavhttp://hdl.handle.net/10022/AC:P:8574Wed, 17 Mar 2010 17:38:39 +0000Given a power grid modeled by a network together with equations describing the power flows, power generation and consumption, and the laws of physics, the so-called N-k problem asks whether there exists a set of k or fewer arcs whose removal will cause the system to fail. The case where k is small is of practical interest. We present theoretical and computational results involving a mixed-integer model and a continuous nonlinear model related to this question.db17Industrial Engineering and Operations Research, Applied Physics and Applied MathematicsArticles
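A brute-force toy illustration of the combinatorial question posed above, using simple graph connectivity in place of the paper's power-flow models; the graph, bus names, and k are made up, and the paper's mixed-integer and continuous nonlinear formulations are what make realistic instances tractable.

    import itertools
    import networkx as nx

    grid = nx.Graph([("gen", "a"), ("a", "b"), ("b", "load"), ("gen", "b"), ("a", "load")])

    def vulnerable_sets(graph, k, source="gen", sink="load"):
        """Return every set of k lines whose removal disconnects source from sink."""
        hits = []
        for lines in itertools.combinations(graph.edges, k):
            damaged = graph.copy()
            damaged.remove_edges_from(lines)
            if not nx.has_path(damaged, source, sink):
                hits.append(lines)
        return hits

    print(vulnerable_sets(grid, 2))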