Academic Commons Search Results
http://academiccommons.columbia.edu/catalog.rss?f%5Bsubject_facet%5D%5B%5D=Operations+research&q=&rows=500&sort=record_creation_date+desc
Perfect Simulation and Deployment Strategies for Detection
http://academiccommons.columbia.edu/catalog/ac:189976
Author: Wallwater, Aya
DOI: http://dx.doi.org/10.7916/D8X066JB
Date: Fri, 16 Oct 2015 15:06:43 +0000

This dissertation contains two parts. The first part provides the first algorithm that, under minimal assumptions, allows one to simulate the stationary waiting-time sequence of a single-server queue backwards in time, jointly with the input processes of the queue
(inter-arrival and service times).
The single-server queue is useful in applications of DCFTP (Dominated Coupling From The Past), a well-known protocol for unbiased simulation from steady-state distributions. Our algorithm terminates in finite time assuming only finite means of the inter-arrival and service times. To simulate the single-server queue in stationarity until the first idle period with finite expected termination time, we additionally require finite variances. This requirement is also necessary for the idle time (which is a natural coalescence time in DCFTP applications) to have finite mean. Thus, in this sense, our algorithm is applicable under minimal assumptions.
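The backward-in-time construction is the dissertation's contribution; the forward direction, however, is easy to sketch. The snippet below is an illustrative sketch only (an assumed M/M/1-type example, not the paper's algorithm): it runs the Lindley recursion from an empty queue until the server next goes idle, the coalescence event mentioned above.

```python
import random

def waiting_times_until_idle(lam, mu, seed=0):
    """Forward Lindley recursion W_{n+1} = max(W_n + S_n - A_{n+1}, 0)
    for a single-server queue started empty, with Exp(lam) inter-arrival
    and Exp(mu) service times (illustrative choices).  Returns the
    waiting-time sequence up to and including the first return to idle."""
    rng = random.Random(seed)
    w, ws = 0.0, [0.0]
    while True:
        s = rng.expovariate(mu)   # service time of the current customer
        a = rng.expovariate(lam)  # inter-arrival time to the next customer
        w = max(w + s - a, 0.0)
        ws.append(w)
        if w == 0.0:              # the next customer finds the server idle
            return ws
```

Stability requires lam < mu; as the abstract notes, the idle time has finite mean only when the service and inter-arrival times have finite variance.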
The second part studies the behavior of diffusion processes in a random environment.
We consider an adversary that moves in a given domain, and our goal is to produce an optimal strategy to detect and neutralize him by a given deadline. We assume that the target's dynamics follow a diffusion process whose parameters are informed by available intelligence. We dedicate one chapter to the rigorous formulation of the detection problem, an introduction of several frameworks in which our methods can be applied, and a discussion of the challenges of finding the analytical optimal solution. In the following chapter, we present our main result: the large-deviation behavior of the adversary's survival probability under a given strategy. This result in turn gives rise to asymptotically efficient Monte Carlo algorithms.

Subjects: Operations research
UNI: aw2589
Departments: Operations Research, Industrial Engineering and Operations Research
Type: Dissertations

Heavy Tails and Instabilities in Large-Scale Systems with Failures
http://academiccommons.columbia.edu/catalog/ac:189193
Author: Skiani, Evangelia
DOI: http://dx.doi.org/10.7916/D8MW2GKR
Date: Fri, 02 Oct 2015 12:15:41 +0000

Modern engineering systems, e.g., wireless communication networks, distributed computing systems, etc., are characterized by high variability and susceptibility to failures. Failure recovery is required to guarantee the successful operation of these systems. One straightforward and widely used mechanism is to restart interrupted jobs from the beginning after a failure occurs. In network design, retransmissions are the primary building block of the network architecture that guarantees data delivery in the presence of channel failures. Retransmissions have recently been identified as a new origin of power laws in modern information networks. In particular, it was discovered that retransmissions give rise to long tails (delays) and possibly zero throughput. To this end, we investigate the impact of the 'retransmission phenomenon' on the performance of failure-prone systems and propose adaptive solutions to address the emerging instabilities.
The preceding finding of power law phenomena due to retransmissions holds under the assumption that data sizes have infinite support. In practice, however, data sizes are upper bounded 0 ≤ L ≤ b, e.g., WaveLAN’s maximum transfer unit is 1500 bytes, YouTube videos are of limited duration, e-mail attachments cannot exceed 10MB, etc. To this end, we first provide a uniform characterization of the entire body of the distribution of the number of retransmissions, which can be represented as a product of a power law and the Gamma distribution. This rigorous approximation clearly demonstrates the transition from power law distributions in the main body to exponential tails. Furthermore, the results highlight the importance of wisely determining the size of data fragments in order to accommodate the performance needs in these systems as well as provide the appropriate tools for this fragmentation.
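A toy version of this retransmission model can be simulated directly. All primitives below are assumed for illustration (exponential channel up-times, bounded-Pareto document sizes), not taken from the dissertation: a transmission succeeds on the first attempt whose up-period exceeds the document size.

```python
import random

def retransmissions(nu, sample_L, n_paths=20000, seed=1):
    """Monte Carlo for the number of transmission attempts N: the channel
    stays up for i.i.d. Exp(nu) periods A_i, and a document of size L
    goes through on the first attempt with A_i > L.  Bounding L <= b
    caps the per-attempt failure probability at 1 - exp(-nu*b), which is
    what turns the power-law tail into an eventually geometric one."""
    rng = random.Random(seed)
    counts = []
    for _ in range(n_paths):
        L = sample_L(rng)
        n = 1
        while rng.expovariate(nu) <= L:   # attempt fails, retransmit
            n += 1
        counts.append(n)
    return counts

# illustrative bounded-Pareto document sizes, truncated at b = 3
counts = retransmissions(1.0, lambda rng: min(rng.paretovariate(1.5), 3.0))
```

Replacing the bound b = 3 with infinity recovers the heavy-tailed regime described in the text.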
Second, we extend the analysis to the practically important case of correlated channels using modulated processes, e.g., Markov modulated, to capture the underlying dependencies. Our study shows that the tails of the retransmission and delay distributions are asymptotically insensitive to the channel correlations and are determined by the state that generates the lightest tail in the independent channel case. This insight is beneficial both for capacity planning and channel modeling since the independent model is sufficient and the correlation details do not matter. However, the preceding finding may be overly optimistic when the best state is atypical, since the effects of ‘bad’ states may still downgrade the performance.
Third, we examine the effects of scheduling policies in queueing systems with failures and restarts. Fair sharing, e.g., processor sharing (PS), is a widely accepted approach to resource allocation among multiple users. We revisit the well-studied M/G/1 PS queue with a new focus on server failures and restarts. Interestingly, we discover a new phenomenon showing that PS-based scheduling induces complete instability in the presence of retransmissions, regardless of how low the traffic load may be. This novel phenomenon occurs even when the job sizes are bounded/fragmented, e.g., deterministic. This work demonstrates that scheduling one job at a time, such as first-come-first-serve, achieves a larger stability region and should be preferred in these systems.
Last, we delve into the area of distributed computing and study the effects of commonly used mechanisms, i.e., restarts, fragmentation, and replication, especially in cloud computing services. We evaluate the efficiency of these techniques under different assumptions on the data streams and discuss the corresponding optimization problem. These findings are useful for optimal resource allocation and fault tolerance in rapidly developing computing networks. In addition to networking and distributed computing systems, the aforementioned results improve our understanding of failure recovery management in large manufacturing and service systems, e.g., call centers. Scalable solutions to this problem increase in significance as these systems continuously grow in scale and complexity. The new phenomena and the techniques developed herein provide new insights in the areas of parallel computing, probability and statistics, as well as financial engineering.

Subjects: Electrical engineering, Operations research
UNI: es3009
Departments: Electrical Engineering
Type: Dissertations

Stochastic Networks: Modeling, Simulation Design and Risk Control
http://academiccommons.columbia.edu/catalog/ac:189655
Author: Li, Juan
DOI: http://dx.doi.org/10.7916/D88P5ZV3
Date: Mon, 28 Sep 2015 12:09:02 +0000

This dissertation studies stochastic network problems that arise in various areas with important industrial applications. Due to uncertainty in both external and internal variables, these networks are exposed to the risk of failure with some probability, which, in many cases, is very small. It is thus desirable to develop efficient simulation algorithms to study the stability of these networks and provide guidance for risk control.
Chapter 2 models equilibrium allocations in a distribution network as the solution of a linear program (LP) which minimizes the cost of unserved demands across nodes in the network. Assuming that the demands are random (following a jointly Gaussian law), we study the probability that the optimal cost exceeds a large threshold, which is a rare event. Our contribution is the development of importance sampling and conditional Monte Carlo algorithms for estimating this probability. We establish the asymptotic efficiency of our algorithms and also present numerical results that demonstrate the strong performance of our algorithms.
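The mean-shift (exponential-tilting) idea behind such importance sampling estimators can be sketched on the simplest Gaussian rare event, the probability that a sum of i.i.d. normals exceeds a large threshold, rather than the chapter's LP functional. Everything below is an illustrative sketch, not the chapter's algorithm.

```python
import math, random

def phi_tail(x):
    """P(Z > x) for standard normal Z."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def is_tail_estimate(n, a, n_samples=50000, seed=2):
    """Importance-sampling estimate of P(X_1 + ... + X_n > a) for i.i.d.
    N(0,1) X_i, sampling instead from N(theta, 1) with theta = a/n, the
    classical tilt that makes the rare event typical.  Each hit is
    weighted by the likelihood ratio dP/dQ = exp(-theta*s + n*theta^2/2)."""
    rng = random.Random(seed)
    theta = a / n
    total = 0.0
    for _ in range(n_samples):
        s = sum(rng.gauss(theta, 1.0) for _ in range(n))
        if s > a:
            total += math.exp(-theta * s + n * theta * theta / 2.0)
    return total / n_samples
```

Since the sum is exactly N(0, n), the estimate can be checked against the closed-form tail, a luxury the LP-cost event does not offer.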
Chapter 3 studies an insurance-reinsurance network model that deals with default contagion risks with a particular aim of capturing cascading effects at the time of defaults. We capture these effects by finding an equilibrium allocation of settlements that can be found as the unique optimal solution of an optimization problem. We are able to obtain an asymptotic description of the most likely ways in which the default of a specific group of participants can occur, by solving a multidimensional Knapsack integer programming problem. We also propose a class of strongly efficient Monte Carlo estimators for computing the expected loss of the network conditioned on the failure of a specific set of companies.
Chapter 4 discusses control schemes for maintaining a low failure probability for a transmission system power line. We construct a stochastic differential equation to describe the temperature evolution in a line subject to stochastic exogenous factors such as ambient temperature, and present a solution to the resulting stochastic heat equation. A number of control algorithms designed to limit the probability that a line exceeds its critical temperature are provided.

Subjects: Operations research, Engineering, Finance
UNI: jl3035
Departments: Operations Research, Industrial Engineering and Operations Research
Type: Dissertations

Ranking Algorithms on Directed Configuration Networks
http://academiccommons.columbia.edu/catalog/ac:189652
Author: Chen, Ningyuan
DOI: http://dx.doi.org/10.7916/D8J38RX8
Date: Mon, 28 Sep 2015 12:08:56 +0000

In recent decades, complex real-world networks, such as social networks, the World Wide Web, financial networks, etc., have become a popular subject for both researchers and practitioners, largely due to advances in computing power and big-data analytics. A key issue in analyzing these networks is the centrality of nodes, and ranking algorithms, e.g., Google's PageRank, are designed to measure it. We analyze the asymptotic distribution of the rank of a randomly chosen node, computed by a family of ranking algorithms on a random graph, including PageRank, as the size of the network grows to infinity.
We propose a configuration model that generates the structure of a directed graph given the in- and out-degree distributions of the nodes. The algorithm guarantees that the generated graph is simple (without self-loops or multiple edges in the same direction) for a broad spectrum of degree distributions, including power laws. A power-law degree distribution is often referred to as the scale-free property and is observed in many real-world networks. On the random graph G_n = (V_n, E_n) generated by the configuration model, we study the distribution of the ranks, which solve
R_i = ∑_{j: (j,i) ∈ E_n} C_j R_j + Q_i
for all nodes i, with weights C_j and personalization values Q_i.
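This recursion can be solved by fixed-point iteration on any concrete graph. The sketch below uses standard PageRank weights, C_j = c/outdeg(j) and Q_i = (1-c)/n, which are illustrative special cases rather than the general setup of the abstract.

```python
def rank(edges, n, c=0.85, iters=100):
    """Fixed-point iteration for R_i = Q_i + sum over edges (j,i) of
    C_j * R_j, with PageRank weights C_j = c/outdeg(j), Q_i = (1-c)/n.
    Assumes every node has at least one outgoing edge (no dangling nodes),
    so the total rank mass stays equal to 1."""
    outdeg = [0] * n
    for j, _ in edges:
        outdeg[j] += 1
    r = [1.0 / n] * n
    for _ in range(iters):
        new = [(1.0 - c) / n] * n
        for j, i in edges:
            new[i] += c / outdeg[j] * r[j]
        r = new
    return r

# tiny 4-node example; node 3 has no incoming edges
edges = [(0, 1), (1, 2), (2, 0), (3, 0), (0, 2)]
ranks = rank(edges, 4)
```

Node 3, which nothing points to, ends up with the minimum possible rank (1-c)/n, while node 2, fed by nodes 0 and 1, ranks higher.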
We show that as the size of the graph n → ∞, the rank of a randomly chosen node converges weakly to the endogenous solution of the stochastic fixed-point equation
R =_D ∑_{i=1}^N C_i R_i + Q,
where (Q, N, {C_i}) is a random vector and the {R_i} are i.i.d. copies of R, independent of (Q, N, {C_i}). The proof of this main result proceeds in three steps. First, we show that the rank of a randomly chosen node can be approximated by running the ranking algorithm on the graph for finitely many iterations. Second, by coupling the graph with a branching tree governed by the empirical size-biased degree distribution, we approximate the finitely iterated ranking algorithm by its value at the root of the branching tree. Finally, we prove that the rank of the root of the branching tree converges to that of a limiting weighted branching process, which is independent of n and solves the stochastic fixed-point equation. Our result formalizes the well-known heuristic that networks often locally possess a tree-like structure. We present a numerical example showing that the approximation is very accurate for the English Wikipedia graph (over 5 million pages).
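One practical way to sample the endogenous solution R of this fixed-point equation is an iterative bootstrap ("population dynamics") scheme. The sketch below uses illustrative primitives only (N = 2 children, Q = 1, C_i ~ Uniform(0, 0.5), so that E[R] = E[Q]/(1 - N·E[C]) = 2); it is not the dissertation's algorithm.

```python
import random

def bootstrap_fixed_point(n_pool=10000, n_iter=15, seed=3):
    """Iterative bootstrap sampler for R =_D sum_{i=1}^N C_i R_i + Q.
    Illustrative primitives: N = 2, Q = 1, C_i ~ Uniform(0, 0.5).
    Each pass rebuilds the whole pool by resampling children from the
    previous pool, so the cost is linear (not exponential) in the
    recursion depth."""
    rng = random.Random(seed)
    pool = [0.0] * n_pool
    for _ in range(n_iter):
        pool = [1.0 + sum(rng.uniform(0.0, 0.5) * rng.choice(pool)
                          for _ in range(2))
                for _ in range(n_pool)]
    return pool
```

With these primitives the pool mean contracts geometrically toward E[R] = 2, so even a modest number of passes suffices.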
To draw a sample from the endogenous solution of the stochastic fixed-point equation, one can run linear branching recursions on a weighted branching process. We provide an iterative simulation algorithm based on the bootstrap. Compared to naive Monte Carlo, our algorithm reduces the complexity from exponential to linear in the number of recursions. We show that as the bootstrap sample size tends to infinity, the sample drawn according to our algorithm converges to the target distribution in the Kantorovich-Rubinstein distance and the estimator is consistent.

Subjects: Operations research, Computer science
UNI: nc2462
Departments: Industrial Engineering, Industrial Engineering and Operations Research
Type: Dissertations

Two Essays in Financial Engineering
http://academiccommons.columbia.edu/catalog/ac:189643
Author: Yang, Linan
DOI: http://dx.doi.org/10.7916/D8K35T1M
Date: Mon, 28 Sep 2015 12:08:34 +0000

This dissertation consists of two parts. In the first part, we investigate the potential impact of wrong-way risk on calculating the credit valuation adjustment (CVA) of a derivatives portfolio. A CVA is an adjustment applied to the value of a derivative contract or a portfolio of derivatives to account for counterparty credit risk. Measuring CVA requires combining models of market and credit risk. Wrong-way risk refers to the possibility that a counterparty's likelihood of default increases with the market value of the exposure. We develop a method for bounding wrong-way risk, holding fixed the marginal models for market and credit risk and varying the dependence between them. Given simulated paths of the two models, we solve a linear program to find the worst-case CVA resulting from wrong-way risk. We analyze properties of the solution and prove convergence of the estimated bound as the number of paths increases. The worst case can be overly pessimistic, so we extend the procedure to a tempered CVA by penalizing the deviation of the joint model of market and credit risk from a reference model. By varying the penalty for deviations, we can sweep out the full range of possible CVA values for different degrees of wrong-way risk. Here, too, we prove convergence of the estimate of the tempered CVA and of the joint distribution that attains it. Our method addresses an important source of model risk in counterparty risk measurement.

In the second part, we study investors' trading behavior in a model of realization utility. We assume that investors' trading decisions are driven not only by the utility of consumption and terminal wealth, but also by the utility burst from realizing a gain or a loss.
More precisely, we consider a dynamic trading problem in which an investor decides when to purchase and sell a stock to maximize her wealth utility and realization utility, with her reference points adapting to the stock's gains and losses asymmetrically. We study, both theoretically and numerically, the optimal trading strategies and asset pricing implications of two types of agents: adaptive agents, who anticipate the future adaptation of their reference points, and naive agents, who fail to do so. We find that an adaptive agent sells the stock more frequently when the stock is at a gain than a naive agent does, and that the adaptive agent asks for a higher risk premium for the stock than the naive agent does in equilibrium. Moreover, compared to a non-adaptive agent whose reference point does not change with the stock's gains and losses, both the adaptive and naive agents sell the stock less frequently, and the naive agent requires the same risk premium as the non-adaptive agent does.

Subjects: Operations research, Finance
UNI: ly2220
Departments: Operations Research, Business, Industrial Engineering and Operations Research
Type: Dissertations

Decision Making with Coupled Learning: Applications in Inventory Management and Auctions
http://academiccommons.columbia.edu/catalog/ac:188400
Author: Chaneton, Juan Manuel
DOI: http://dx.doi.org/10.7916/D86M3632
Date: Fri, 07 Aug 2015 12:05:17 +0000

Operational decisions can be complicated by the presence of uncertainty. In many cases, there exist means to reduce uncertainty, though these may come at a cost. Decision makers then face the dilemma of acting based on current, incomplete information versus investing in trying to minimize uncertainty. Understanding the impact of this trade-off on decisions and performance is the central topic of this thesis.
When attempting to construct probabilistic models based on data, operational decisions often affect the amount and quality of data that is collected. This introduces an exploration-exploitation trade-off between decisions and information collection. Much of the literature has sought to understand how operational decisions should be modified to incorporate this trade-off. While studying two well-known operational problems, we ask an even more basic question: does the exploration-exploitation trade-off matter in the first place? In the first two parts of this thesis we focus on this question in the context of the newsvendor problem and sequential auctions with incomplete private information.
We first analyze the well-studied stationary multi-period newsvendor problem, in which a retailer sells perishable items and unmet demand is lost and unobserved. This latter limitation, referred to as demand censoring, is what introduces the exploration-exploitation trade-off in this problem. We focus on two questions: i.) what is the value of accounting for the exploration-exploitation trade-off; and, ii.) what is the cost imposed by having access only to sales data as opposed to underlying demand samples? Quite remarkably, we show that, for a broad family of tractable cases, there is essentially no exploration-exploitation trade-off; i.e., there is almost no value of accounting for the impact of decisions on information collection. Moreover, we establish that losses due to demand censoring (as compared to having full access to demand samples) are limited, but these are of higher order than those due to ignoring the exploration-exploitation trade-off. In other words, efforts aimed at improving information collection concerning lost sales are more valuable than analytic or computational efforts to pin down the optimal policy in the presence of censoring.
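The "essentially no trade-off" message can be illustrated with a myopic stochastic-approximation policy that never deliberately explores: it relies only on the event {sales < q}, which coincides with {demand < q} and is therefore observable even under censoring. All parameters below (uniform demand, costs) are assumed for illustration, not taken from the thesis.

```python
import random

def adaptive_order(p=3.0, h=1.0, T=20000, seed=4):
    """Newsvendor stochastic approximation using only sales data.
    With underage cost p and overage cost h, the optimal order quantity
    is the p/(p+h) quantile of demand; for Demand ~ Uniform(0, 100) and
    p/(p+h) = 0.75 that is q* = 75.  The observable indicator of a
    stockout (sales == q) gives an unbiased subgradient step."""
    rng = random.Random(seed)
    q, avg = 50.0, 0.0
    for t in range(1, T + 1):
        demand = rng.uniform(0.0, 100.0)
        stocked_out = demand >= q           # observable from sales alone
        grad = p if stocked_out else -h     # subgradient of expected profit
        q = min(max(q + (50.0 / t) * grad, 0.0), 100.0)
        if t > T // 2:                      # average the tail iterates
            avg += q / (T - T // 2)
    return avg
```

The policy converges to the critical fractile despite never seeing a single uncensored demand realization, consistent with the small censoring losses the text reports.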
In the second part of this thesis we examine the problem of an agent bidding on a sequence of repeated auctions for an item. The agent does not fully know his own valuation of the object and he can only collect information if he wins an auction. This coupling introduces the exploration-exploitation trade-off in this problem. We study the value of accounting for information collection on decisions and find that: i.) in general the exploration-exploitation trade-off cannot be ignored (that is, in some cases ignoring exploration can substantially affect rewards), but ii.) for a broad class of instances, ignoring exploration can indeed produce nearly optimal results. We characterize this class through a set of conditions on the problem primitives, and we demonstrate with examples that these are satisfied for common settings found in the literature.
In the third part of this thesis we study the impact of uncertainty in the context of inventory record inaccuracies in inventory management systems. Record inaccuracies, mismatches between physical and recorded inventory, are frequently encountered in practice and can markedly affect revenues. Most of the literature is devoted to analyzing the cost-benefit relationship between investing in means to reduce inaccuracies and accounting for them in operational decisions. We focus on the less explored approach of using available data to reduce the uncertainty in inventory. In practice, collecting point-of-sale (POS) data is substantially simpler than collecting stock information. We propose a model in which inventory is regarded as a virtually unobservable quantity and POS data is used to infer its state over time. Additionally, our method also works as an effective estimator of censored demand in the presence of inaccurate records. We test our methodology with extensive numerical experiments based on both simulated and actual retailing data. The results show that it is remarkably effective in inferring unobservable past statistics and predicting future stock status, even in the presence of severe data misspecification.

Subjects: Operations research, Business
UNI: jmc2274
Departments: Business
Type: Dissertations

Smart Grid Risk Management
http://academiccommons.columbia.edu/catalog/ac:188373
Author: Abad Lopez, Carlos Adrian
DOI: http://dx.doi.org/10.7916/D8028QR9
Date: Tue, 21 Jul 2015 00:00:00 +0000

Current electricity infrastructure is being stressed from several directions: high demand, unreliable supply, extreme weather conditions, and accidents, among others. Infrastructure planners have traditionally focused only on the cost of the system; today, resilience and sustainability are increasingly important. In this dissertation, we develop computational tools for efficiently managing electricity resources to help create a more reliable and sustainable electrical grid. The tools we present in this work will help electric utilities coordinate demand to allow the smooth, large-scale integration of renewable sources of energy into traditional grids, as well as provide infrastructure planners and operators in developing countries a framework for making informed planning and control decisions in the presence of uncertainty.

Demand-side management is considered the most viable solution for maintaining grid stability as generation from intermittent renewable sources increases. Demand-side management, particularly demand response (DR) programs that attempt to alter customers' energy consumption either through price-based incentives or up-front power interruption contracts, is more cost-effective and sustainable in addressing short-term supply-demand imbalances than the alternative of increasing fossil fuel-based fast spinning reserves. An essential step in compensating participating customers and benchmarking the effectiveness of DR programs is to be able to independently detect the load reduction from observed meter data. Electric utilities implementing automated DR programs through direct load control switches are also interested in detecting the reduction in demand to efficiently pinpoint non-functioning devices and reduce maintenance costs.
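A minimal sketch of l1-based load-reduction detection: model the meter reading as baseline plus a sparse deviation plus noise, and recover the deviation by soft-thresholding the residuals, which is the closed-form minimizer of the lasso-style objective. The day profile and all numbers below are hypothetical, and this is not AutoGrid's method.

```python
def soft_threshold(v, lam):
    """Elementwise solution of min_d 0.5*(r - d)^2 + lam*|d|."""
    return [max(abs(x) - lam, 0.0) * (1 if x > 0 else -1) for x in v]

def detect_reduction(meter, baseline, lam):
    """Sparse change detection: meter = baseline + delta + noise with
    delta sparse; estimate delta by soft-thresholding the residuals,
    so small noise is zeroed out and only real curtailment survives."""
    residual = [m - b for m, b in zip(meter, baseline)]
    return soft_threshold(residual, lam)

# hypothetical day: flat 10 kW baseline, a 3 kW DR curtailment in hours 14-16
baseline = [10.0] * 24
meter = [10.0 + 0.2 * ((h * 7) % 5 - 2) / 2 for h in range(24)]  # small noise
for h in (14, 15, 16):
    meter[h] -= 3.0
delta = detect_reduction(meter, baseline, lam=0.5)
event_hours = [h for h, d in enumerate(delta) if d < -0.5]
```

Because the noise magnitude stays below the threshold lam, the recovered deviation is exactly zero outside the curtailment window.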
We develop sparse optimization methods for detecting a small change in a customer's demand for electricity in response to a price change or signal from the utility; dynamic learning methods for scheduling the maintenance of direct load control switches whose operating state is not directly observable and can only be inferred from metered electricity consumption; and machine learning methods for accurately forecasting the load of hundreds of thousands of residential, commercial and industrial customers. These algorithms have been implemented in the software system provided by AutoGrid, Inc., and this system has helped several utilities in the Pacific Northwest, Oklahoma, California and Texas provide more reliable power to their customers at significantly reduced prices.

Providing power to widely spread-out communities in developing countries using the conventional power grid is not economically feasible. The most attractive alternative source of affordable energy for these communities is solar micro-grids. We discuss risk-aware robust methods to optimally size and operate solar micro-grids in the presence of uncertain demand and uncertain renewable generation. These algorithms help system operators increase their revenue while making their systems more resilient to inclement weather conditions.

Subjects: Operations research, Energy
UNI: ca2446
Departments: Operations Research, Industrial Engineering and Operations Research
Type: Dissertations

Methods for Pricing Pre-Earnings Equity Options and Leveraged ETF Options
http://academiccommons.columbia.edu/catalog/ac:186986
Author: Santoli, Marco
DOI: http://dx.doi.org/10.7916/D86Q1W99
Date: Thu, 07 May 2015 00:17:52 +0000

In this thesis, we present several analytical and numerical methods for two financial engineering problems: 1) accounting for the impact of an earnings announcement on the price and implied volatility of the associated equity options, and 2) analyzing the price dynamics of leveraged exchange-traded funds (LETFs) and the valuation of LETF options. Our pricing models capture the main characteristics of these options, along with jumps and stochastic volatility in the underlying asset. We illustrate our results through numerical implementation and calibration using market data.
In the first part, we model the pricing of equity options around an earnings announcement (EA). Empirical studies have shown that an earnings announcement can lead to an immediate price shock to the company stock. Since many companies also have options written on their stocks, the option prices should reflect the uncertain price impact of an upcoming EA before expiration. To represent the shock due to earnings, we incorporate a random jump on the announcement date in the dynamics of the stock price. We consider different distributions of the scheduled earnings jump as well as different underlying stock price dynamics before and after the EA date. Our main contributions include analytical option pricing formulas when the underlying stock price follows the Kou model along with a double-exponential or Gaussian EA jump on the announcement date. Furthermore, we derive analytic bounds and asymptotics for the pre-EA implied volatility under various models. The calibration results demonstrate adequate fit of the entire implied volatility surface prior to an announcement. The comparison of the risk-neutral distribution of the EA jump to its historical counterpart is also discussed. Moreover, we discuss the valuation and exercise strategy of pre-EA American options, and present an analytical approximation and numerical results.
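The Gaussian-EA-jump case admits a quick sanity check: a single compensated Gaussian jump in the log-price simply inflates the terminal log-variance, so the call price equals Black-Scholes with variance sigma^2*T + jump_vol^2. The sketch below uses plain GBM with r = 0 (a simplification of the models above) and all parameters are illustrative.

```python
import math, random

def bs_call(s, k, r, sigma, t):
    """Standard Black-Scholes call price."""
    d1 = (math.log(s / k) + (r + sigma * sigma / 2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    N = lambda x: 0.5 * math.erfc(-x / math.sqrt(2))
    return s * N(d1) - k * math.exp(-r * t) * N(d2)

def ea_call_mc(s0=100.0, k=100.0, sigma=0.2, t=0.25, jump_vol=0.05,
               n_paths=200000, seed=5):
    """Monte Carlo price of a call whose life spans one earnings
    announcement: GBM (r = 0) plus one Gaussian log-price jump
    N(-jump_vol^2/2, jump_vol^2) on the EA date, compensated so the
    stock remains a martingale."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)
        j = rng.gauss(-jump_vol ** 2 / 2, jump_vol)
        st = s0 * math.exp(-sigma ** 2 * t / 2 + sigma * math.sqrt(t) * z + j)
        total += max(st - k, 0.0)
    return total / n_paths
```

The extra pre-EA variance is exactly what lifts the short-dated implied volatility ahead of the announcement.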
The second part focuses on the analysis of LETFs. We start by providing a quantitative risk analysis of LETFs with an emphasis on the impact of leverage ratios and investment horizons. Given an investment horizon, different leverage ratios imply different levels of risk. Therefore, the idea of an admissible range of leverage ratios is introduced. For an admissible leverage ratio, the associated LETF satisfies a given risk constraint based on, for example, the value-at-risk (VaR) and conditional VaR. Moreover, we discuss the concept of an admissible risk horizon so that the investor can control risk exposure by selecting an appropriate holding period. The intra-horizon risk is calculated, showing that higher leverage can significantly increase the probability of an LETF value hitting a lower level. This leads us to evaluate a stop-loss/take-profit strategy for LETFs and determine the optimal take-profit level given a stop-loss risk constraint. In addition, the impact of volatility exposure on the returns of different LETF portfolios is investigated.
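The volatility-decay mechanism behind these risks is easy to see in simulation: a daily-rebalanced LETF with leverage beta tracks (S_T/S_0)^beta times a penalty exp((beta - beta^2)/2 * RV), where RV is the realized variance of daily returns. The sketch below (GBM index, illustrative parameters, not the thesis' model) checks that identity path by path.

```python
import math, random

def letf_decay_error(beta=3.0, sigma=0.2, t=0.25, steps=63,
                     n_paths=2000, seed=6):
    """Simulate a daily-rebalanced LETF on a zero-drift GBM index and
    compare it with the approximation
        L_T ~ (S_T/S_0)^beta * exp((beta - beta^2)/2 * RV),
    RV = sum of squared daily returns.  Returns the worst relative
    error across paths; the gap is third order in the daily return."""
    rng = random.Random(seed)
    dt = t / steps
    worst = 0.0
    for _ in range(n_paths):
        s, l, rv = 1.0, 1.0, 0.0
        for _ in range(steps):
            r = math.exp(-sigma ** 2 * dt / 2
                         + sigma * math.sqrt(dt) * rng.gauss(0, 1)) - 1.0
            l *= 1.0 + beta * r        # daily rebalancing to leverage beta
            s *= 1.0 + r
            rv += r * r
        approx = s ** beta * math.exp((beta - beta ** 2) / 2 * rv)
        worst = max(worst, abs(l / approx - 1.0))
    return worst
```

Since (beta - beta^2)/2 is negative for beta > 1, realized variance erodes the leveraged fund even when the index goes nowhere, which is why longer horizons shrink the admissible range of leverage ratios.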
In the last chapter, we study the pricing of options written on LETFs. Since LETFs on the same reference index share the same source of risk, it is important to price these options consistently. In addition, LETFs can theoretically experience a loss greater than 100%. In practice, some LETF providers design the fund so that the daily returns are capped both downward and upward. We incorporate these features and model the reference index by a stochastic volatility model with jumps. An efficient numerical algorithm based on transform methods to value options under this model is presented. We illustrate the accuracy of our pricing algorithm by comparing it to existing methods. Calibration using empirical option data shows the impact of the leverage ratio on the implied volatility. Our method is extended to price American-style LETF options.

Subjects: Finance, Operations research
Departments: Industrial Engineering and Operations Research
Type: Dissertations

Optimal Multiple Stopping Approach to Mean Reversion Trading
http://academiccommons.columbia.edu/catalog/ac:186941
Author: Li, Xin
DOI: http://dx.doi.org/10.7916/D88K781S
Date: Fri, 24 Apr 2015 18:33:24 +0000

This thesis studies the optimal timing of trades under mean-reverting price dynamics subject to fixed transaction costs. We first formulate an optimal double stopping problem whereby a speculative investor can choose when to enter and subsequently exit the market. The investor's value functions and optimal timing strategies are derived when prices are driven by an Ornstein-Uhlenbeck (OU), exponential OU, or Cox-Ingersoll-Ross (CIR) process. Moreover, we analyze a related optimal switching problem that involves an infinite sequence of trades. In addition to solving for the value functions and optimal switching strategies, we identify the conditions under which the double stopping and switching problems admit the same optimal entry and/or exit timing strategies. A number of extensions are also considered, such as incorporating a stop-loss constraint or a minimum holding period under the OU model.
A typical solution approach for optimal stopping problems is to study the associated free boundary problems or variational inequalities (VIs). For the double optimal stopping problem, we apply a probabilistic methodology and rigorously derive the optimal price intervals for market entry and exit. A key step of our approach involves a transformation, which in turn allows us to characterize the value function as the smallest concave majorant of the reward function in the transformed coordinate. In contrast to the variational inequality approach, this approach directly constructs the value function as well as the optimal entry and exit regions, without a priori conjecturing a candidate value function or timing strategy. Having solved the optimal double stopping problem, we then apply our results to deduce a similar solution structure for the optimal switching problem. We also verify that our value functions solve the associated VIs.
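A crude numerical counterpart to these entry and exit boundaries is to simulate OU paths and grid-search a buy-low/sell-high threshold rule. Everything below (OU parameters, the threshold form of the strategy, the costs) is assumed for illustration; it is a brute-force stand-in, not the probabilistic construction described above.

```python
import math, random

def simulate_ou(x0, speed, mean, sigma, steps, dt, rng):
    """Exact discretization of dX = speed*(mean - X) dt + sigma dW."""
    x, path = x0, [x0]
    a = math.exp(-speed * dt)
    sd = sigma * math.sqrt((1 - a * a) / (2 * speed))
    for _ in range(steps):
        x = mean + (x - mean) * a + sd * rng.gauss(0, 1)
        path.append(x)
    return path

def best_thresholds(cost=0.1, n_paths=100, seed=7):
    """Grid search over rules 'enter the first time X <= d, exit the
    first time X >= u afterwards', paying a transaction cost per leg.
    Returns (average profit, d, u) for the best pair found."""
    rng = random.Random(seed)
    paths = [simulate_ou(0.0, 2.0, 0.0, 0.5, 500, 0.01, rng)
             for _ in range(n_paths)]
    grid = [i / 5.0 for i in range(-5, 6)]   # thresholds from -1.0 to 1.0
    best = None
    for d in grid:
        for u in grid:
            if u <= d:
                continue
            total = 0.0
            for p in paths:
                t_in = next((t for t, x in enumerate(p) if x <= d), None)
                if t_in is None:
                    continue
                t_out = next((t for t in range(t_in + 1, len(p))
                              if p[t] >= u), None)
                if t_out is not None:
                    total += p[t_out] - p[t_in] - 2 * cost
            avg = total / n_paths
            if best is None or avg > best[0]:
                best = (avg, d, u)
    return best
```

The grid search recovers a profitable buy-low-sell-high band around the long-run mean, the qualitative shape the thesis derives exactly.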
Among our results, we find that under OU or CIR price dynamics, the optimal stopping problems admit the typical buy-low-sell-high strategies. However, when prices are driven by an exponential OU process, the investor generally enters when the price is low, but may find it optimal to wait if the current price is sufficiently close to zero. In other words, the continuation (waiting) region for entry is disconnected. A similar phenomenon is observed in the OU model with a stop-loss constraint. Indeed, the entry region is again characterized by a bounded price interval that lies strictly above the stop-loss level. As for the exit timing, a higher stop-loss level always implies a lower optimal take-profit level. In all three models, numerical results are provided to illustrate the dependence of timing strategies on model parameters.

Subjects: Operations research, Finance
Departments: Industrial Engineering and Operations Research
Type: Dissertations

Excluding Induced Paths: Graph Structure and Coloring
http://academiccommons.columbia.edu/catalog/ac:186521
Author: Maceli, Peter Lawson
DOI: http://dx.doi.org/10.7916/D8WW7GK4
Date: Mon, 20 Apr 2015 12:17:03 +0000

An induced subgraph of a given graph is any graph that can be obtained by successively deleting vertices, possibly none. In this thesis, we present several new structural and algorithmic results on a number of different classes of graphs that are closed under taking induced subgraphs.
The first result of this thesis is related to a conjecture of Hayward and Nastos on the structure of graphs with no induced four-edge path or four-edge antipath. They conjectured that every such graph which is both prime and perfect is either a split graph or contains a certain useful arrangement of simplicial and antisimplicial vertices. We give a counterexample to their conjecture, and prove a slightly weaker version. This is joint work with Maria Chudnovsky, and first appeared in Journal of Graph Theory.
The second result of this thesis is a decomposition theorem for the class of all graphs with no induced four-edge path or four-edge antipath. We show that every such graph can be obtained from pentagons and split graphs by repeated application of complementation, substitution, and split graph unification. Split graph unification is a new graph operation we introduced, which is a generalization of substitution and involves "gluing" two graphs along a common induced split graph. This is a combination of joint work with Maria Chudnovsky and Irena Penev, together with later work of Louis Esperet, Laetitia Lemoine and Frederic Maffray, and first appeared in.
The third result of this thesis is related to the problem of determining the complexity of coloring graphs which do not contain some fixed induced subgraph. We show that three-coloring graphs with no induced six-edge path or triangle can be done in polynomial time. This is joint work with Maria Chudnovsky and Mingxian Zhong, and first appeared in. Working together with Flavia Bonomo, Oliver Schaudt, and Maya Stein, we have since simplified and extended this result.
Operations research, Mathematics, Computer science | plm2109 | Operations Research, Industrial Engineering | Dissertations
Essays on Inventory Management and Conjoint Analysis
http://academiccommons.columbia.edu/catalog/ac:181257
Chen, Yupeng
http://dx.doi.org/10.7916/D8GX49BD
Wed, 10 Dec 2014 00:00:00 +0000
With recent theoretical and algorithmic advancements, modern optimization methodologies have seen a substantial expansion of modeling power, being applied to solve challenging problems in impressively diverse areas. This dissertation aims to extend the modeling frontier of optimization methodologies in two exciting fields: inventory management and conjoint analysis. Although the three essays concern distinct applications using different optimization methodologies, they share a unifying theme, which is to develop intuitive models using advanced optimization techniques to solve problems of practical relevance. The first essay (Chapter 2) applies robust optimization to solve a single-installation inventory model with non-stationary uncertain demand. A classical problem in operations research, this inventory model becomes very challenging to analyze when lost-sales dynamics, non-zero fixed ordering costs, and positive lead times are introduced. In this essay, we propose a robust cycle-based control policy, built on an innovative decomposition idea, to solve a family of variants of this model. The policy is simple, flexible, and easily implementable, and numerical experiments suggest that it has very promising empirical performance. The policy can be used both when excess demand is backlogged and when it is lost, with non-zero fixed ordering costs, and also when the lead time is non-zero. The policy decisions are computed by solving a collection of linear programs, even when there is a positive fixed ordering cost. The policy also extends in a very simple manner to the joint pricing and inventory control problem. The second essay (Chapter 3) applies sparse machine learning to model multimodal continuous heterogeneity in conjoint analysis. Consumers' heterogeneous preferences can often be represented using a multimodal continuous heterogeneity (MCH) distribution.
One interpretation of MCH is that the consumer population consists of a few distinct segments, each of which contains a heterogeneous sub-population. Modeling MCH raises considerable challenges, as both across-segment and within-segment heterogeneity need to be accounted for. In this essay, we propose an innovative sparse learning approach for modeling MCH and apply it to conjoint analysis, where adequate modeling of consumer heterogeneity is critical. The sparse learning approach models MCH via a two-stage divide-and-conquer framework, in which we first decompose the consumer population by recovering a set of candidate segmentations using structured sparsity modeling, and then use each candidate segmentation to develop a set of individual-level representations of MCH. We select the optimal individual-level representation of MCH and the corresponding optimal candidate segmentation using cross-validation. Two notable features of our approach are that it accommodates both across-segment and within-segment heterogeneity and endogenously imposes an adequate amount of shrinkage to recover the individual-level partworths. We empirically validate the performance of the sparse learning approach using extensive simulation experiments and two empirical conjoint data sets. The third essay (Chapter 4) applies dynamic discrete choice models to investigate the impact of return policies on consumers' product purchase and return behavior. Return policies are ubiquitous in the marketplace, allowing consumers to use and evaluate a product before fully committing to the purchase. Despite the clear practical relevance of return policies, however, few studies have provided empirical assessments of how consumers' purchase and return decisions respond to the return policies facing them. In this essay, we propose to model consumers' purchase and return decisions using a dynamic discrete choice model with forward-looking behavior and Bayesian learning.
More specifically, we postulate that consumers' purchase and return decisions are optimal solutions to an underlying dynamic expected utility maximization problem, in which consumers learn their true evaluations of products via usage in a Bayesian manner and make purchase and return decisions to maximize the expected present value of their utility; return policies impact these decisions by entering the dynamic expected utility maximization problem as constraints. Our proposed model provides a behaviorally plausible approach to examine the impact of return policies on consumers' purchase and return behavior.
Operations research | yc2561 | Operations Research | Dissertations
The Theory of Systemic Risk
http://academiccommons.columbia.edu/catalog/ac:178176
Chen, Chen
http://dx.doi.org/10.7916/D8W37TWC
Tue, 30 Sep 2014 00:00:00 +0000
Systemic risk is an issue of great concern in modern financial markets as well as, more broadly, in the management of complex business and engineering systems. It refers to the risk of collapse of an entire complex system as a result of the actions taken by the individual component entities or agents that comprise the system. We investigate the topic of systemic risk from the perspectives of measurement, structural sources, and risk factors. In particular, we propose an axiomatic framework for the measurement and management of systemic risk based on the simultaneous analysis of outcomes across agents in the system and over scenarios of nature. Our framework defines a broad class of systemic risk measures that accommodate a rich set of regulatory preferences. This general class of systemic risk measures captures many recently proposed specific measures of systemic risk as special cases, and highlights their implicit assumptions. Moreover, the systemic risk measures that satisfy our conditions yield decentralized decompositions, i.e., the systemic risk can be decomposed into risk due to individual agents. Furthermore, one can associate with each agent a shadow price for systemic risk that correctly accounts for the externalities of the agent's individual decision-making on the entire system. We also provide a structural model for a financial network consisting of a set of firms holding common assets. In the model, endogenous asset prices are captured by the market clearing condition when the economy is in equilibrium. The key ingredients of the financial market captured in this model include the general portfolio choice flexibility of firms given posted asset prices and economic states, and the mark-to-market wealth of firms.
We analyze price sensitivity and characterize the key features of financial holding networks that minimize systemic risk, as a function of overall leverage. Finally, we propose a framework to estimate risk measures based on risk factors. By introducing a form of factor-separable risk measures, the acceptance set of the original risk measure is connected to the acceptance sets of the factor-separable risk measures. We demonstrate that tight bounds for factor-separable coherent risk measures can be explicitly constructed.
Operations research | cc3136 | Industrial Engineering and Operations Research, Business | Dissertations
Studies in Stochastic Networks: Efficient Monte-Carlo Methods, Modeling and Asymptotic Analysis
http://academiccommons.columbia.edu/catalog/ac:177127
Dong, Jing
http://dx.doi.org/10.7916/D8X63K4F
Tue, 12 Aug 2014 00:00:00 +0000
This dissertation contains two parts. The first part develops a series of bias reduction techniques for point processes on stable unbounded regions, the steady-state distribution of infinite-server queues, the steady-state distribution of multi-server loss queues and loss networks, and sample paths of stochastic differential equations. These techniques can be applied for efficient performance evaluation and optimization of the corresponding stochastic models. We perform a detailed running-time analysis, under heavy traffic, of the perfect sampling algorithms for infinite-server queues and multi-server loss queues, and prove that the algorithms achieve a nearly optimal order of complexity. The second part aims to model and analyze the load-dependent slowdown effect in service systems. One important phenomenon we observe in such systems is bi-stability, where the system alternates randomly between two performance regions. We conduct a heavy traffic asymptotic analysis of the system dynamics and provide operational solutions to avoid the bad performance region.
Operations research, Applied mathematics | jd2736 | Industrial Engineering and Operations Research | Dissertations
Stochastic Approximation Algorithms in the Estimation of Quasi-Stationary Distribution of Finite and General State Space Markov Chains
http://academiccommons.columbia.edu/catalog/ac:177124
Zheng, Shuheng
http://dx.doi.org/10.7916/D89C6VM9
Tue, 12 Aug 2014 00:00:00 +0000
This thesis studies stochastic approximation algorithms for estimating the quasi-stationary distribution of Markov chains. Existing numerical linear algebra methods and probabilistic methods might be computationally demanding and intractable in large state spaces. We take our motivation from a heuristic described in the physics literature and use the stochastic approximation framework to analyze and extend it. The thesis begins by looking at the finite-dimensional setting. The finite-dimensional quasi-stationary estimation algorithm was proposed in the physics literature by [#latestoliveira, #oliveiradickman1, #dickman]; however, no proof was given there and it was not recognized as a stochastic approximation algorithm. This and related schemes were analyzed in the context of urn problems, and the consistency of the estimator is shown there [#aldous1988two, #pemantle, #athreya]. The rate of convergence is studied by [#athreya] in special cases only. The first chapter provides a different proof of the algorithm's consistency and establishes a rate of convergence in more generality than [#athreya]. It is discovered that the rate of convergence is only fast when a certain restrictive eigenvalue condition is satisfied. Using the tool of iterate averaging, the algorithm can be modified so that this eigenvalue condition is eliminated. The thesis then moves on to the general state space discrete-time Markov chain setting. In this setting, the stochastic approximation framework does not have a strong theory in the current literature, so several of the convergence results have to be adapted because the iterates of our algorithm are measure-valued. The chapter formulates the quasi-stationary estimation algorithm in this setting. Then, we extend the ODE method of [#kushner2003stochastic] and prove the consistency of the algorithm.
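The finite-state heuristic analyzed in the thesis can be sketched in a few lines: simulate the chain, and whenever it is absorbed, restart from a state drawn from the current occupation-frequency estimate; the long-run occupation frequencies then approximate the quasi-stationary distribution. The three-state chain, seed, and step count below are illustrative assumptions, not taken from the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy absorbing chain on states {0, 1, 2}, with state 0 absorbing.
P = np.array([[1.0, 0.0, 0.0],
              [0.2, 0.5, 0.3],
              [0.1, 0.4, 0.5]])

def estimate_qsd(P, n_steps=100_000):
    """Stochastic-approximation estimate of the quasi-stationary
    distribution: run the chain and, upon absorption, restart from a
    state sampled from the current occupation-frequency estimate."""
    n = P.shape[0]
    counts = np.zeros(n)
    counts[1] = 1.0          # start mass on a non-absorbing state
    state = 1
    for _ in range(n_steps):
        state = rng.choice(n, p=P[state])
        if state == 0:       # absorbed: redraw from current estimate
            state = rng.choice(n, p=counts / counts.sum())
        counts[state] += 1.0
    return counts / counts.sum()

qsd = estimate_qsd(P)
```

For this chain the quasi-stationary distribution is the normalized left eigenvector of the non-absorbing sub-matrix for its largest eigenvalue, roughly (0.536, 0.464) on states 1 and 2, which the empirical frequencies approach.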
Through the proof, several non-restrictive conditions required for convergence of the algorithm are discovered. Finally, the thesis tests the algorithm by running some numerical experiments. The examples are designed to test the algorithm in various edge cases. The algorithm is also empirically compared against the Fleming-Viot method.
Operations research | Industrial Engineering and Operations Research | Dissertations
Essays in Financial Engineering
http://academiccommons.columbia.edu/catalog/ac:177072
Ahn, Andrew
http://dx.doi.org/10.7916/D80K26R0
Sat, 19 Jul 2014 00:00:00 +0000
This thesis consists of three essays in financial engineering. In particular, we study problems in option pricing, stochastic control and risk management. In the first essay, we develop an accurate and efficient pricing approach for options on leveraged ETFs (LETFs). Our approach allows us to price these options quickly and in a manner that is consistent with the underlying ETF price dynamics. The numerical results also demonstrate that LETF option prices are model-dependent, particularly in high-volatility environments. In the second essay, we extend a linear programming (LP) technique for approximately solving high-dimensional control problems in a diffusion setting. The original LP technique applies to finite horizon problems with an exponentially distributed horizon, T. We extend the approach to fixed horizon problems. We then apply these techniques to dynamic portfolio optimization problems and evaluate their performance using convex duality methods. The numerical results suggest that the LP approach is a very promising one for tackling high-dimensional control problems. In the final essay, we propose a factor model-based approach for performing scenario analysis in a risk management context. We argue that our approach addresses some important drawbacks of standard scenario analysis and, in a preliminary numerical investigation with option portfolios, we show that it produces superior results as well.
Operations research | aja2133 | Industrial Engineering and Operations Research | Dissertations
Essays on Infrastructure Design and Planning for Clean Energy Systems
http://academiccommons.columbia.edu/catalog/ac:176967
Kocaman, Ayse
http://dx.doi.org/10.7916/D8JW8C2F
Mon, 07 Jul 2014 00:00:00 +0000
The International Energy Agency estimates that nearly 1.3 billion people do not have access to electricity, and a billion more have only unreliable and intermittent supply. Moreover, current electricity generation mostly relies on fossil fuels, which are finite and one of the greatest threats to the environment. Rising population growth rates, depleting fuel sources, environmental issues and economic developments have increased the need for mathematical optimization to provide a formal framework that enables systematic and clear decision-making in energy operations. Through its methodologies and algorithms, this thesis provides tools for energy generation, transmission and distribution system design, and helps policy makers make cost assessments in energy infrastructure planning rapidly and accurately. In Chapter 2, we focus on local-level power distribution system planning for rural electrification using techniques from combinatorial optimization. We describe a heuristic algorithm that provides a quick solution for the partial electrification problem, where the distribution network can only connect a pre-specified number of households with low-voltage lines. The algorithm demonstrates the effect of household settlement patterns on electrification cost. We also describe the first heuristic algorithm that selects the locations and service areas of transformers without requiring candidate solutions, and simultaneously builds a two-level grid network in a green-field setting. The algorithms are applied to real-world rural settings in Africa, where household locations digitized from satellite imagery are prescribed. In Chapters 3 and 4, we focus on power generation and transmission using clean energy sources. Here, we imagine a country in the future where hydro and solar are the dominant sources and fossil fuels are only available in minimal form.
We discuss the problem of modeling hydro and solar energy production and allocation, including long-term investments and storage, capturing the stochastic nature of hourly supply and demand data. We mathematically model two hybrid energy generation and allocation systems in which the time variability of energy sources and demand is balanced using the water stored in reservoirs. In Chapter 3, we use conventional hydro power stations (incoming stream flows are stored in large dams and water release is deferred until it is needed), and in Chapter 4, we use pumped hydro stations (water is pumped from a lower reservoir to an upper reservoir during periods of low demand, to be released for generation when demand is high). The aim of the models is to determine the optimal sizing of the infrastructure needed to match demand and supply in the most reliable and cost-effective way. An innovative contribution of this work is a new perspective on energy modeling that includes fine-grained sources of uncertainty, such as stream flow and solar radiation at the hourly level, as well as the spatial location of supply and demand and the transmission network at the national level. In addition, we compare the conventional and pumped hydro power systems in terms of reliability and cost efficiency, and quantitatively show the improvement provided by including pumped hydro storage. The model is presented with a case study of India and helps to answer whether solar energy, in addition to the hydro power potential of the Himalaya Mountains, would be enough to meet growing electricity demand if fossil fuels could be almost completely phased out from electricity generation.
Environmental engineering, Operations research, Energy | ask2170 | Mechanical Engineering, Earth and Environmental Engineering | Dissertations
High-Dimensional Portfolio Management: Taxes, Execution and Information Relaxations
http://academiccommons.columbia.edu/catalog/ac:185815
Wang, Chun
http://dx.doi.org/10.7916/D8M043JJ
Mon, 07 Jul 2014 00:00:00 +0000
Portfolio management has always been a key topic in finance research. While many researchers have studied portfolio management problems, most of the work to date assumes trading is frictionless. This dissertation presents our investigation of optimal trading policies, and our efforts in applying duality methods based on information relaxations, for portfolio problems where the investor manages multiple securities and confronts trading frictions, in particular capital gains taxes and execution costs. In Chapter 2, we consider dynamic asset allocation problems where the investor is required to pay capital gains taxes on her investment gains. This is a very challenging problem because the tax to be paid whenever a security is sold depends on the tax basis, i.e. the price(s) at which the security was originally purchased. This feature results in high-dimensional and path-dependent problems which cannot be solved exactly except in the case of very stylized problems with just one or two securities and relatively few time periods. The asset allocation problem with taxes has several variations depending on: (i) whether we use the exact or average tax basis, and (ii) whether we allow the full use of losses (FUL) or the limited use of losses (LUL). We consider all of these variations in this chapter but focus mainly on the exact and average tax-basis LUL cases, since these problems are the most realistic and generally the most challenging. We develop several sub-optimal trading policies for these problems and use duality techniques based on information relaxations to assess their performance. Our numerical experiments consider problems with as many as 20 securities and 20 time periods.
The principal contribution of this chapter is in demonstrating that much larger problems can now be tackled through the use of sophisticated optimization techniques and duality methods based on information relaxations. We show, in fact, that the dual formulations of exact tax-basis problems are much easier to solve than the corresponding primal problems. Indeed, we can easily solve dual problem instances where the number of securities and time periods is much larger than 20. We also note, however, that while the average tax-basis problem is relatively easier to solve in general, its corresponding dual problem instances are non-convex and more difficult to solve. We therefore propose an approach for the average tax-basis dual problem that enables valid dual bounds to still be obtained. In Chapter 3, we consider a portfolio execution problem where a possibly risk-averse agent needs to trade a fixed number of shares in multiple stocks over a short time horizon. Our price dynamics can capture linear but stochastic temporary and permanent price impacts as well as stochastic volatility. In general, however, it is not possible to solve for the optimal policy in this model, even numerically, and so we must instead search for good sub-optimal policies. Our principal policy is a variant of an open-loop feedback control (OLFC) policy, and we show how the corresponding OLFC value function may be used to construct good primal and dual bounds on the optimal value function. The dual bound is constructed using the recently developed duality methods based on information relaxations. One of the contributions of this chapter is the identification of sufficient conditions to guarantee convexity, and hence tractability, of the associated dual problem instances. That said, we do not claim that the only plausible models are those where all dual problem instances are convex.
We also show that it is straightforward to include a non-linear temporary price impact as well as return predictability in our model. We demonstrate numerically that good dual bounds can be computed quickly, even when nested Monte-Carlo simulations are required to estimate the so-called dual penalties. These results suggest that the dual methodology can be applied in many models where closed-form expressions for the dual penalties cannot be computed. In Chapter 4, we apply duality methods based on information relaxations to dynamic zero-sum games. We show these methods can easily be used to construct dual lower and upper bounds on the optimal value of these games. In particular, these bounds can be used to evaluate sub-optimal policies for zero-sum games when calculating the optimal policies and game value is intractable.
Operations research, Finance | Industrial Engineering and Operations Research | Dissertations
Convex Optimization Algorithms and Recovery Theories for Sparse Models in Machine Learning
http://academiccommons.columbia.edu/catalog/ac:175385
Huang, Bo
http://dx.doi.org/10.7916/D8VM49DM
Mon, 07 Jul 2014 00:00:00 +0000
Sparse modeling is a rapidly developing topic that arises frequently in areas such as machine learning, data analysis and signal processing. One important application of sparse modeling is the recovery of a high-dimensional object from a relatively small number of noisy observations, which is the main focus of compressed sensing, matrix completion (MC) and robust principal component analysis (RPCA). However, the power of sparse models is hampered by the unprecedented size of the data that has become more and more available in practice. Therefore, it has become increasingly important to better harness convex optimization techniques to take advantage of any underlying "sparsity" structure in problems of extremely large size. This thesis focuses on two main aspects of sparse modeling. From the modeling perspective, it extends convex programming formulations for matrix completion and robust principal component analysis problems to the case of tensors, and derives theoretical guarantees for exact tensor recovery under a framework of strongly convex programming. On the optimization side, an efficient first-order algorithm with the optimal convergence rate is proposed and studied for a wide range of linearly constrained sparse modeling problems.
Mathematics, Statistics, Operations research | Industrial Engineering and Operations Research | Dissertations
New Quantitative Approaches to Asset Selection and Portfolio Construction
http://academiccommons.columbia.edu/catalog/ac:175867
Song, Irene
http://dx.doi.org/10.7916/D83N21JV
Mon, 07 Jul 2014 00:00:00 +0000
Since the publication of Markowitz's landmark paper "Portfolio Selection" in 1952, portfolio construction has evolved into a disciplined and personalized process. In this process, security selection and portfolio optimization constitute key steps for making investment decisions across a collection of assets. The use of quantitative algorithms and models in these steps has become a widely accepted investment practice by modern investors. This dissertation is devoted to exploring and developing those quantitative algorithms and models. In the first part of the dissertation, we present two efficiency-based approaches to security selection: (i) a quantitative stock selection strategy based on operational efficiency and (ii) a quantitative currency selection strategy based on macroeconomic efficiency. In developing the efficiency-based stock selection strategy, we exploit a potential positive link between a firm's operational efficiency and its stock performance. By means of data envelopment analysis (DEA), a non-parametric approach to productive efficiency analysis, we quantify a firm's operational efficiency into a single score representing a consolidated measure of financial ratios. The financial ratios integrated into an efficiency score are selected on the basis of their predictive power for the firm's future operating performance, using the LASSO (least absolute shrinkage and selection operator)-based variable selection method. The computed efficiency scores are directly used for identifying stocks worthy of investment. The basic idea behind the proposed stock selection strategy is that, as efficient firms are presumed to be more profitable than inefficient firms, higher returns are expected from their stocks. This idea is tested in a contextual and empirical setting provided by the U.S. Information Technology (IT) sector.
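A DEA efficiency score of the kind used in this stock selection strategy can be sketched with a standard input-oriented CCR model in multiplier form, solved as a linear program; the firm data below are hypothetical, scipy is assumed available, and the thesis's actual specification (with LASSO-selected financial ratios) is not reproduced here:

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of unit o (multiplier form).

    X: (n_units, n_inputs), Y: (n_units, n_outputs).
    Maximize u.y_o  s.t.  v.x_o = 1,  u.y_j - v.x_j <= 0,  u, v >= 0.
    """
    n, m = X.shape
    _, s = Y.shape
    # Decision vector z = [u (s output weights), v (m input weights)].
    c = np.concatenate([-Y[o], np.zeros(m)])            # maximize u.y_o
    A_ub = np.hstack([Y, -X])                           # u.y_j - v.x_j <= 0
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.zeros(s), X[o]])[None]    # v.x_o = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=(0, None))
    return -res.fun

# Hypothetical firms: one input (assets), one output (revenue).
X = np.array([[2.0], [4.0], [8.0], [5.0]])
Y = np.array([[1.0], [2.0], [4.0], [1.0]])
scores = [dea_ccr_efficiency(X, Y, o) for o in range(len(X))]
# Firms 0-2 share the best output/input ratio (0.5) and score 1.0;
# firm 3's ratio is 0.2, giving efficiency 0.4.
```

With a single input and output, the CCR score reduces to each firm's output/input ratio relative to the best ratio in the sample, which makes the toy numbers easy to verify by hand.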
Our empirical findings confirm that there is a strong positive relationship between a firm's operational efficiency and its stock performance, and further establish that a firm's operational efficiency has significant explanatory power in describing the cross-sectional variations of stock returns. We moreover offer an economic argument that posits operational efficiency as a systematic risk factor and the most likely source of the excess returns of investing in efficient firms. The efficiency-based currency selection strategy is developed in a similar way; i.e. currencies are selected based on a certain efficiency metric. An exchange rate has long been regarded as a reliable barometer of the state of the economy and a measure of the international competitiveness of countries. While strong and appreciating currencies correspond to productive and efficient economies, weak and depreciating currencies correspond to slowing and less efficient economies. This study hence develops a currency selection strategy that utilizes the macroeconomic efficiency of countries, measured based on a widely accepted relationship between exchange rates and macroeconomic variables. To quantify the macroeconomic efficiency of countries, we first establish a multilateral framework using effective exchange rates and trade-weighted macroeconomic variables. This framework is used for transforming three representative bilateral structural exchange rate models: the flexible price monetary model, the sticky price monetary model, and the sticky price asset model, into their multilateral counterparts. We then translate these multilateral models into DEA models, which yield an efficiency score representing an aggregate measure of macroeconomic variables. Consistent with the stock selection strategy, the resulting efficiency scores are used for identifying currencies worthy of investment. We evaluate our currency selection strategy against appropriate market and strategic benchmarks using historical data.
Our empirical results confirm that currencies of efficient countries perform more strongly than those of inefficient countries, and further suggest that, compared to exchange rate models based on standard regression analysis, our DEA-based models improve the predictability of the future performance of currencies. In the first part of the dissertation, we also develop a data-driven variable selection method for DEA based on the group LASSO. This method extends the LASSO-based variable selection method used for specifying a DEA model for estimating a firm's operational efficiency. In our proposed method, we derive a special constrained version of the group LASSO with a loss function suited for variable selection in DEA models, and solve it by a new tailored algorithm based on the alternating direction method of multipliers (ADMM). We conduct a thorough evaluation of the proposed method against two variable selection methods widely used in the DEA literature: the efficiency contribution measure (ECM) method and the regression-based (RB) test, using Monte Carlo simulations. The simulation results show that our method provides more favorable performance than its benchmarks. In the second part of the dissertation, we propose a generalized risk budgeting (GRB) approach to portfolio construction. In a GRB portfolio, assets are grouped into possibly overlapping subsets, and each subset is allocated a risk budget that has been pre-specified by the investor. Minimum variance, risk parity and risk budgeting portfolios are all special instances of a GRB portfolio. The GRB portfolio optimization problem is to find a GRB portfolio with an optimal risk-return profile, where risk is measured using any positively homogeneous risk measure. When the subsets form a partition, the assets all have identical returns, and we restrict ourselves to long-only portfolios, the GRB problem can in fact be solved as a convex optimization problem.
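The standard risk budgeting special case (singleton subsets, variance as the risk measure) admits a well-known convex formulation: minimize 0.5 x'Σx minus a budget-weighted log barrier, then normalize; the first-order condition forces each asset's variance contribution to equal its budget. A minimal sketch, with a hypothetical 3-asset covariance matrix and scipy assumed available:

```python
import numpy as np
from scipy.optimize import minimize

def risk_budget_weights(Sigma, budgets):
    """Long-only risk-budgeting weights: asset i contributes the
    fraction budgets[i] of total portfolio variance.

    Convex formulation: min 0.5 x'Sigma x - sum_i b_i log(x_i), x > 0;
    the normalized minimizer satisfies the risk budgets exactly.
    """
    b = np.asarray(budgets, dtype=float)
    n = len(b)

    def obj(x):
        return 0.5 * x @ Sigma @ x - b @ np.log(x)

    def grad(x):
        return Sigma @ x - b / x

    res = minimize(obj, np.ones(n), jac=grad, method="L-BFGS-B",
                   bounds=[(1e-9, None)] * n,
                   options={"ftol": 1e-15, "gtol": 1e-10})
    return res.x / res.x.sum()

# Hypothetical covariance matrix and risk budgets.
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])
w = risk_budget_weights(Sigma, [0.5, 0.3, 0.2])
rc = w * (Sigma @ w) / (w @ Sigma @ w)   # realized risk contributions
```

At the optimum, x_i (Sigma x)_i = b_i for every asset, so the normalized weights reproduce the prescribed budgets; the general GRB problem with overlapping subsets does not reduce to this form.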
In general, however, the GRB problem is a constrained non-convex problem, for which we propose two solution approaches. The first approach uses a semidefinite programming (SDP) relaxation to obtain an (upper) bound on the optimal objective function value. In the second approach we develop a numerical algorithm that integrates augmented Lagrangian and Markov chain Monte Carlo (MCMC) methods in order to find a point in the vicinity of a very good local optimum. This point is then supplied to a standard non-linear optimization routine with the goal of finding this local optimum. It should be emphasized that the merit of this second approach is in its generic nature: in particular, it provides a starting-point strategy for any non-linear optimization algorithm.
Operations research | Industrial Engineering and Operations Research | Dissertations
Network Resource Allocation Under Fairness Constraints
http://academiccommons.columbia.edu/catalog/ac:176038
Chandramouli, Shyam Sundar
http://dx.doi.org/10.7916/D8S46Q3V
Mon, 07 Jul 2014 00:00:00 +0000
This work considers the basic problem of allocating resources among a group of agents in a network, when the agents are equipped with single-peaked preferences over their assignments. This generalizes the classical claims problem, which concerns the division of an estate's liquidation value when the total claim on it exceeds this value. The claims problem also models the problem of rationing a single commodity, the problem of dividing the cost of a public project among the people it serves, and the problem of apportioning taxes. A key consideration in this classical literature is equity: the good (or the "bad," in the case of apportioning taxes or costs) should be distributed as fairly as possible. The main contribution of this dissertation is a comprehensive treatment of a generalization of this classical rationing problem to a network setting. Bochet et al. recently introduced a generalization of the classical rationing problem to the network setting. For this problem they designed an allocation mechanism, the egalitarian mechanism, that is Pareto optimal, envy-free and strategyproof. In chapter 2, it is shown that the egalitarian mechanism is in fact group strategyproof, implying that no coalition of agents can collectively misreport their information to obtain a (weakly) better allocation for themselves. Further, a complete characterization of the set of all group strategyproof mechanisms is obtained. The egalitarian mechanism satisfies many attractive properties, but fails consistency, an important property in the literature on rationing problems. It is shown in chapter 3 that no Pareto optimal mechanism can be both envy-free and consistent. Chapter 3 is devoted to the edge-fair mechanism, which is Pareto optimal, group strategyproof, and consistent.
In a related model where the agents are located on the edges of the graph rather than the nodes, the edge-fair rule is shown to be envy-free, group strategyproof, and consistent. Chapter 4 extends the egalitarian mechanism to the problem of finding an optimal exchange in non-bipartite networks. The results vary depending on whether the commodity being exchanged is divisible or indivisible. For the latter case, it is shown that no efficient mechanism can be strategyproof, and that the egalitarian mechanism is Pareto optimal and envy-free. Chapter 5 generalizes recent work on finding stable and balanced allocations in graphs with unit capacities and unit weights to more general networks. The existence of a stable and balanced allocation is established by a transformation to an equivalent unit capacity network.

Subjects: Operations research. Department: Industrial Engineering and Operations Research. Type: Dissertations.

Data-driven Decisions in Service Systems
http://academiccommons.columbia.edu/catalog/ac:175604
Kim, Song-Hee
http://dx.doi.org/10.7916/D8D798KH
Mon, 07 Jul 2014 00:00:00 +0000

This thesis makes contributions to help provide data-driven (or evidence-based) decision support to service systems, especially hospitals. Three selected topics are presented. First, we discuss how Little's Law, which relates average limits and expected values of stationary distributions, can be applied to service systems data that are collected over a finite time interval. To make inferences based on the indirect estimator of average waiting times, we propose methods for estimating confidence intervals and for adjusting estimates to reduce bias. We show our new methods are effective using simulations and data from a US bank call center. Second, we address important issues that need to be taken into account when testing whether real arrival data can be modeled by nonhomogeneous Poisson processes (NHPPs). We apply our method to data from a US bank call center and a hospital emergency department and demonstrate that their arrivals come from NHPPs. Lastly, we discuss an approach to standardize the Intensive Care Unit admission process, which currently lacks well-defined criteria. Using data from nearly 200,000 hospitalizations, we discuss how we can quantify the impact of Intensive Care Unit admission on individual patients' clinical outcomes. We then use this quantified impact and a stylized model to discuss optimal admission policies. We use simulation to compare the performance of our proposed optimal policies to the current admission policy, and show that the gain can be significant.

Subjects: Operations research. Author UNI: sk3116. Department: Industrial Engineering and Operations Research. Type: Dissertations.

On the Kidney Exchange Problem and Online Minimum Energy Scheduling
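The finite-window issue behind the first topic can be seen in a small sketch (a generic M/M/1 simulation with illustrative parameters, not the estimators of the thesis): an indirect estimator W = L/lambda computed over a finite observation window truncates time-in-system at the window's end and is therefore biased low relative to the direct average.

```python
import random

def mm1_times(lam=1.0, mu=1.25, n=20000, seed=42):
    """Arrival and departure epochs of n customers in an M/M/1 queue,
    generated via the Lindley recursion for the waiting time in queue."""
    rng = random.Random(seed)
    t = wq = 0.0
    prev_s = None
    arr, dep = [], []
    for _ in range(n):
        a = rng.expovariate(lam)   # inter-arrival time
        s = rng.expovariate(mu)    # service time
        if prev_s is not None:
            wq = max(wq + prev_s - a, 0.0)   # Lindley recursion
        t += a
        arr.append(t)
        dep.append(t + wq + s)
        prev_s = s
    return arr, dep

arr, dep = mm1_times()
T = arr[-1]                                  # observation window [0, T]
lam_hat = len(arr) / T                       # observed arrival rate
# Time-average number in system over [0, T]; sojourns truncated at T.
L_hat = sum(min(d, T) - a for a, d in zip(arr, dep)) / T
W_direct = sum(d - a for a, d in zip(arr, dep)) / len(arr)
W_indirect = L_hat / lam_hat                 # Little's Law: L = lambda * W
```

Here W_indirect <= W_direct by construction, since customers still in the system at time T contribute only their time up to T; bias adjustments and confidence intervals of the kind the thesis develops address exactly this edge effect.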
http://academiccommons.columbia.edu/catalog/ac:175610
Herrera Humphries, Tulia
http://dx.doi.org/10.7916/D8125QSX
Mon, 07 Jul 2014 00:00:00 +0000

The allocation and management of scarce resources are of central importance in the design of policies to improve social well-being. This dissertation consists of three essays; the first two deal with the problem of allocating kidneys, and the third with power management in computing devices. Kidney exchange programs are an attractive alternative for patients who need a kidney transplant and who have a willing, but medically incompatible, donor. A registry that keeps track of such patient-donor pairs can find matches through exchanges amongst such pairs. This results in a quicker transplant for the patients involved and, equally importantly, keeps such patients off the long wait list of patients without an intended donor. As of March 2014, there were at least 99,000 candidates waiting for a kidney transplant in the U.S. However, in 2013 only 16,893 transplants were conducted. This imbalance between supply and demand, among other factors, has driven the development of multiple kidney exchange programs in the U.S. and the subsequent development of matching mechanisms to run the programs. In the first essay we consider a matching problem arising in kidney exchanges between hospitals. Focusing on the case of two hospitals, we construct a strategy-proof matching mechanism that is guaranteed to return a matching that is at least 3/4 the size of a maximum cardinality matching. It is known that no better performance is possible if one focuses on mechanisms that return a maximal matching, and so our mechanism is best possible within this natural class of mechanisms. For path-cycle graphs we construct a mechanism that returns a matching that is at least 4/5 the size of a maximum cardinality matching. This mechanism does not necessarily return a maximal matching. Finally, we construct a mechanism that is universally truthful on path-cycle graphs and whose performance is within 2/3 of optimal.
Again, it is known that no better ratio is possible. In most of the existing literature, mechanisms are typically evaluated by their overall performance on a large exchange pool, based on which conclusions and recommendations are drawn. In our second essay, we consider a dynamic framework to evaluate extensively used kidney exchange mechanisms. We conduct a simulation-based study of a dynamically evolving exchange pool over 9 years. Our results suggest that some of the features that are critical in a mechanism in the static setting have only a minor impact on its long-run performance when viewed in the dynamic setting. More importantly, features that are generally underestimated in the static setting, such as the pairs' arrival rates, turn out to be relevant when we look at a dynamically evolving exchange pool. In particular, we provide insights into the effect on waiting times and on the probability of receiving an offer of controllable features, such as the frequency at which matchings are run and the structures through which pairs can be matched (cycles or chains), as well as inherent features, such as the pairs' ABO-PRA characteristics, the availability of altruistic donors, and whether or not compatible pairs join the exchange. We evaluate the odds of receiving an offer and the expected time until an offer for each ABO-PRA type of pair in the model. Power management in computing devices aims to minimize the energy consumed to perform tasks while keeping performance at acceptable levels. A widely used power management strategy is to transition devices and/or components to lower power consumption states during inactivity periods. Transitions between power states consume energy; thus, depending on these costs, it may be advantageous to stay in the high power state during some inactivity periods.
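The trade-off just described has the classic "ski rental" structure; a minimal sketch (with illustrative constants, not the dissertation's algorithm) is the break-even rule, which is 2-competitive for a two-power-state device:

```python
def power_down_costs(idle_gaps, p_high=1.0, wake_cost=4.0):
    """Break-even rule: in each idle gap, stay in the high-power state
    (drawing p_high per time unit) for tau = wake_cost / p_high, then
    sleep (low-power draw taken as 0; returning to service costs
    wake_cost).  Compares the online cost with the offline optimum that
    knows each gap's length in advance."""
    tau = wake_cost / p_high
    online = opt = 0.0
    for gap in idle_gaps:
        online += p_high * min(gap, tau) + (wake_cost if gap > tau else 0.0)
        opt += min(p_high * gap, wake_cost)  # offline: sleep iff gap > tau
    return online, opt

online, opt = power_down_costs([1.0, 3.0, 10.0, 0.5])  # 12.5 vs 8.5
```

On a short gap the rule pays exactly what the optimum pays; on a long gap it pays the waiting energy plus one wake cost, i.e. at most twice the optimum, which is where competitive-ratio guarantees of this kind come from.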
In our third essay we consider the problem of minimizing the total energy consumed by a 2-power-state device to process jobs that are sent over time by a constrained adversary. Jobs can be preempted, but deadlines need to be met. In this problem, an algorithm must decide when to schedule the jobs, as well as a sequence of power states and the discrete time thresholds at which these states will be reached. We provide an online algorithm to minimize the energy consumption when the cost of a transition to the low power state is small enough. In this case, the problem of minimizing the energy consumption is equivalent to minimizing the total number of inactivity periods. We also provide an algorithm to minimize the energy consumption when it may be advantageous to stay in the high power state during some inactivity periods. In both cases we provide upper bounds on the competitive ratio of our algorithms, and lower bounds on the competitive ratio of all online algorithms.

Subjects: Operations research. Department: Industrial Engineering and Operations Research. Type: Dissertations.

Graph Structure and Coloring
http://academiccommons.columbia.edu/catalog/ac:175631
Plumettaz, Matthieu
http://dx.doi.org/10.7916/D87M0637
Mon, 07 Jul 2014 00:00:00 +0000

We denote by G=(V,E) a graph with vertex set V and edge set E. A graph G is claw-free if no vertex of G has three pairwise nonadjacent neighbours. Claw-free graphs are a natural generalization of line graphs. This thesis answers several questions about claw-free graphs and line graphs. In 1988, Chvatal and Sbihi proved a decomposition theorem for claw-free perfect graphs. They showed that claw-free perfect graphs either have a clique-cutset or come from two basic classes of graphs called elementary and peculiar graphs. In 1999, Maffray and Reed successfully described how elementary graphs can be built using line graphs of bipartite graphs and local augmentation. However, gluing two claw-free perfect graphs on a clique does not necessarily produce claw-free graphs. The first result of this thesis is a complete structural description of claw-free perfect graphs. We also give a construction for all perfect circular interval graphs. This is joint work with Chudnovsky. Erdos and Lovasz conjectured in 1968 that for every graph G and all integers s, t ≥ 2 such that s+t-1 = χ(G) > ω(G), there exists a partition (S,T) of the vertex set of G such that ω(G|S) ≥ s and χ(G|T) ≥ t. This conjecture is known in the graph theory community as the Erdos-Lovasz Tihany Conjecture. For general graphs, the only settled cases of the conjecture are when s and t are small. Recently, the conjecture was proved for a few special classes of graphs: graphs with stability number 2, line graphs and quasi-line graphs. The second part of this thesis considers the conjecture for claw-free graphs and presents some progress on it. This is joint work with Chudnovsky and Fradkin. Reed's ω, Δ, χ conjecture proposes that every graph satisfies χ ≤ ⌈(Δ+1+ω)/2⌉; it is known to hold for all claw-free graphs. The third part of this thesis considers a local strengthening of this conjecture.
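Reed's bound is easy to sanity-check by brute force on a small graph (a toy verification, not part of the thesis): compute χ, ω, and Δ exhaustively and compare with ⌈(Δ+1+ω)/2⌉. The 5-cycle, which is claw-free, attains the bound with equality.

```python
from itertools import combinations, product
from math import ceil

def chromatic_number(n, edges):
    """Least k admitting a proper k-coloring (exhaustive search)."""
    for k in range(1, n + 1):
        if any(all(col[u] != col[v] for u, v in edges)
               for col in product(range(k), repeat=n)):
            return k

def clique_number(n, edges):
    """Largest clique size (exhaustive search)."""
    es = {frozenset(e) for e in edges}
    return max(k for k in range(1, n + 1)
               if any(all(frozenset(p) in es for p in combinations(s, 2))
                      for s in combinations(range(n), k)))

def max_degree(n, edges):
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return max(deg)

# C5: chi = 3, omega = 2, Delta = 2, and ceil((Delta + 1 + omega) / 2) = 3.
n, edges = 5, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
chi = chromatic_number(n, edges)
omega = clique_number(n, edges)
delta = max_degree(n, edges)
```

The exhaustive search is exponential, of course; the point of the thesis's results is that for line and quasi-line graphs, colorings meeting such bounds can be built in polynomial time.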
We prove the local strengthening for line graphs, then note that previous results immediately tell us that the local strengthening holds for all quasi-line graphs. Our proofs lead to polynomial-time algorithms for constructing colorings that achieve our bounds: the complexities are O(n²) for line graphs and O(n³m²) for quasi-line graphs. For line graphs, this is faster than the best known algorithm for constructing a coloring that achieves the bound of Reed's original conjecture. This is joint work with Chudnovsky, King and Seymour.

Subjects: Operations research. Author UNI: mp2761. Department: Industrial Engineering and Operations Research. Type: Dissertations.

Sequential Optimization in Changing Environments: Theory and Application to Online Content Recommendation Services
http://academiccommons.columbia.edu/catalog/ac:176086
Gur, Yonatan
http://dx.doi.org/10.7916/D8639MWF
Mon, 07 Jul 2014 00:00:00 +0000

Recent technological developments allow the online collection of valuable information that can be efficiently used to optimize decisions "on the fly" and at a low cost. These advances have greatly influenced the decision-making process in various areas of operations management, including pricing, inventory, and retail management. In this thesis we study methodological as well as practical aspects arising in online sequential optimization in the presence of such real-time information streams. On the methodological front, we study aspects of sequential optimization in the presence of temporal changes, such as designing decision-making policies that adapt to temporal changes in the underlying environment (that drives performance) when only partial information about this changing environment is available, and quantifying the added complexity in sequential decision-making problems when temporal changes are introduced. On the applied front, we study practical aspects associated with a class of online services that focus on creating customized recommendations (e.g., Amazon, Netflix). In particular, we focus on online content recommendations, a new class of online services that allows publishers to direct readers from articles they are currently reading to other web-based content they may be interested in, by means of links attached to said article. In the first part of the thesis we consider a non-stationary variant of a sequential stochastic optimization problem, where the underlying cost functions may change along the horizon. We propose a measure, termed the variation budget, that controls the extent of said change, and study how restrictions on this budget impact achievable performance. As a yardstick to quantify performance in non-stationary settings we propose a regret measure relative to a dynamic oracle benchmark.
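The flavor of such non-stationary settings can be seen in a toy experiment (a generic sketch with invented parameters, not the thesis's policy): when two arms' mean rewards swap mid-horizon, an epsilon-greedy learner that periodically discards stale estimates tracks the change, while one that never forgets keeps exploiting the stale arm.

```python
import random

def bandit_reward(horizon=10000, restart_every=2000, seed=0):
    """Epsilon-greedy on two Bernoulli arms whose means swap once,
    mid-horizon; restart_every > 0 periodically resets the estimates,
    a crude way of respecting a budget on temporal variation."""
    rng = random.Random(seed)
    means = [0.7, 0.3]
    counts, sums = [0, 0], [0.0, 0.0]
    total = 0.0
    for t in range(horizon):
        if t == horizon // 2:
            means.reverse()                      # the environment changes
        if restart_every and t % restart_every == 0:
            counts, sums = [0, 0], [0.0, 0.0]    # forget stale estimates
        if 0 in counts or rng.random() < 0.1:
            arm = rng.randrange(2)               # explore
        else:
            arm = max((0, 1), key=lambda a: sums[a] / counts[a])
        r = 1.0 if rng.random() < means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += r
        total += r
    return total

with_restarts = bandit_reward(restart_every=2000)
no_restarts = bandit_reward(restart_every=0)
```

Restarting trades a little regret in stationary stretches for the ability to follow the changing optimum, mirroring the tension that the variation-budget analysis quantifies.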
We identify sharp conditions under which it is possible to achieve long-run-average optimality and more refined performance measures such as rate optimality that fully characterize the complexity of such problems. In doing so, we also establish a strong connection between two rather disparate strands of literature: adversarial online convex optimization, and the more traditional stochastic approximation paradigm (couched in a non-stationary setting). This connection is the key to deriving well-performing policies in the latter, by leveraging the structure of optimal policies in the former. Finally, tight bounds on the minimax regret allow us to quantify the "price of non-stationarity," which mathematically captures the added complexity embedded in a temporally changing environment versus a stationary one. In the second part of the thesis we consider another core stochastic optimization problem couched in a multi-armed bandit (MAB) setting. We develop a MAB formulation that allows for a broad range of temporal uncertainties in the rewards, characterize the (regret) complexity of this class of MAB problems by establishing a direct link between the extent of allowable reward "variation" and the minimal achievable worst-case regret, and provide an optimal policy that achieves that performance. Similarly to the first part of the thesis, our analysis draws concrete connections between two strands of literature: the adversarial and the stochastic MAB frameworks. The third part of the thesis studies applied optimization aspects arising in online content recommendations, which allow web-based publishers to direct readers from articles they are currently reading to other web-based content. We study the content recommendation problem and its unique dynamic features from both theoretical as well as practical perspectives.
Using a large data set of browsing history at major media sites, we develop a representation of content along two key dimensions: clickability, the likelihood to click to an article when it is recommended; and engageability, the likelihood to click from an article when it hosts a recommendation. Based on this representation, we propose a class of user path-focused heuristics, whose purpose is to simultaneously ensure a high instantaneous probability of clicking recommended articles, while also optimizing engagement along the future path. We rigorously quantify the performance of these heuristics and validate their impact through a live experiment. The third part of the thesis is based on a collaboration with a leading provider of content recommendations to online publishers.

Subjects: Operations research, Business, Mathematics. Department: Business. Type: Dissertations.

Evacuating Damaged and Destroyed Buildings on 9/11: Behavioral and Structural Barriers
http://academiccommons.columbia.edu/catalog/ac:174721
Groeger, Justina L.; Stellman, Steven D.; Kravitt, Alexandra; Brackbill, Robert M.
http://dx.doi.org/10.7916/D8DB7ZXJ
Wed, 04 Jun 2014 00:00:00 +0000

Introduction: Evacuation of the World Trade Center (WTC) twin towers and surrounding buildings damaged in the September 11, 2001 attacks provides a unique opportunity to study factors that affect emergency evacuation of high-rise buildings.

Problem: The goal of this study is to understand the extent to which structural and behavioral barriers and limitations of personal mobility affected evacuation by occupants of affected buildings on September 11, 2001.

Methods: This analysis included 5,023 civilian, adult enrollees within the World Trade Center Health Registry who evacuated the two World Trade Center towers and over 30 other Lower Manhattan buildings that were damaged or destroyed on September 11, 2001. Multinomial logistic regression was used to predict total evacuation time (more than 30 and up to 60 minutes, and more than 1 hour and less than 2 hours, relative to 30 minutes or less) in relation to the number of infrastructure barriers and the number of behavioral barriers, adjusted for demographic and other factors.

Results: A higher percentage of evacuees reported encountering at least one behavioral barrier (84.9%) than reported at least one infrastructure barrier (51.9%). This pattern was consistent in all buildings except WTC 1, the first building attacked, where greater than 90% of evacuees reported encountering both types of barriers. Smoke and poor lighting were the most frequently reported structural barriers. Extreme crowding, lack of communication with officials, and being surrounded by panicked crowds were the most frequently reported behavioral barriers. Multivariate analyses showed evacuation time to be independently associated with the number of each type of barrier as well as gender (longer times for women), but not with the floor from which evacuation began.
After adjustment, personal mobility impairment was not associated with increased evacuation time.

Conclusion: Because most high-rise buildings have unique designs, infrastructure factors tend to be less predictable than behavioral factors, but both need to be considered in developing emergency evacuation plans in order to decrease evacuation time and, consequently, the risk of injury and death during an emergency evacuation.

Subjects: Behavioral sciences, Operations research. Author UNI: sds91. Department: Epidemiology. Type: Articles.

Dynamic Markets with Many Agents: Applications in Social Learning and Competition
http://academiccommons.columbia.edu/catalog/ac:174798
Ifrach, Bar
http://dx.doi.org/10.7916/D8NK3C48
Tue, 15 Apr 2014 00:00:00 +0000

This thesis considers two applications of dynamic economic models with many agents. The dynamics of the economic systems under consideration are intractable, since they depend on the (stochastic) outcomes of the agents' actions. However, as the number of agents grows large, approximations to the aggregate behavior of agents come to light. I use this observation to characterize market dynamics and subsequently to study these applications. Chapter 2 studies the problem of devising a pricing strategy to maximize the revenues extracted from a stream of consumers with heterogeneous preferences. Consumers, however, do not know the quality of the product or service and engage in a social learning process to learn it. Using a mean-field approximation, the transient of this social learning process is uncovered and the pricing problem is analyzed. Chapter 3 adds to the previous chapter in analyzing features of this social learning process with finitely many agents. In addition, the chapter generalizes the information structure to include cases where consumers take into account the order in which reviews were submitted. Chapter 4 considers a model of dynamic oligopoly competition in the spirit of models that are widespread in industrial organization. The computation of equilibrium strategies of such models suffers from the curse of dimensionality when the number of agents (firms) is large. For a market structure with few dominant firms and many fringe firms, I study an alternative equilibrium concept in which fringe firms are represented succinctly with a low-dimensional set of statistics.
The chapter explores how this new equilibrium concept expands the class of dynamic oligopoly models that can be studied computationally in empirical work.

Subjects: Operations research, Economics. Author UNI: bi2118. Department: Business. Type: Dissertations.

From Continuous to Discrete: Studies on Continuity Corrections and Monte Carlo Simulation with Applications to Barrier Options and American Options
http://academiccommons.columbia.edu/catalog/ac:171186
Cao, Menghui
http://dx.doi.org/10.7916/D8PG1PS1
Fri, 28 Feb 2014 00:00:00 +0000

This dissertation 1) shows continuity corrections for first passage probabilities of Brownian bridge and barrier joint probabilities, which are applied to the pricing of two-dimensional barrier and partial barrier options, and 2) introduces new variance reduction techniques and computational improvements to Monte Carlo methods for pricing American options. The joint distribution of Brownian motion and its first passage time has found applications in many areas, including sequential analysis, pricing of barrier options, and credit risk modeling. There are, however, no simple closed-form solutions for these joint probabilities in a discrete-time setting. Chapter 2 shows that discrete two-dimensional barrier and partial barrier joint probabilities can be approximated by their continuous-time probabilities with remarkable accuracy after shifting the barrier away from the underlying by a factor. We achieve this through a uniform continuity correction theorem on the first passage probabilities for Brownian bridge, extending relevant results in Siegmund (1985a). The continuity corrections are applied to the pricing of two-dimensional barrier and partial barrier options, extending the results in Broadie, Glasserman & Kou (1997) on one-dimensional barrier options. One interesting aspect is that for type B partial barrier options, the barrier correction cannot be applied uniformly throughout one pricing formula: it applies only to some barrier values, leaving the others unchanged, and the direction of correction may also vary within one formula. In Chapter 3 we introduce new variance reduction techniques and computational improvements to Monte Carlo methods for pricing American-style options. For simulation algorithms that compute lower bounds of American option values, we apply martingale control variates and introduce the local policy enhancement, which adopts a local simulation to improve the exercise policy.
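The one-dimensional barrier-shift idea that Chapter 2 extends can be sketched for a simple down-and-out call (illustrative parameters; the constant β ≈ 0.5826 is from Broadie, Glasserman & Kou (1997), and this toy is not the two-dimensional setting of the thesis): on a shared set of simulated paths, a barrier monitored at m discrete dates behaves like a continuously monitored barrier moved away from the underlying by the factor exp(βσ√(T/m)).

```python
import math
import random

def down_and_out_call(s0=100.0, k=100.0, h=90.0, sigma=0.3, T=0.5,
                      fine=500, coarse=25, n_paths=2000, seed=11):
    """Monte Carlo prices of a down-and-out call on one set of GBM paths
    (zero rates): (i) barrier h checked at `coarse` equally spaced dates,
    (ii) barrier h checked at every fine step (quasi-continuous), and
    (iii) the shifted barrier h * exp(-beta * sigma * sqrt(T / coarse))
    checked at every fine step."""
    beta = 0.5826                      # ~ -zeta(1/2) / sqrt(2 * pi)
    h_shift = h * math.exp(-beta * sigma * math.sqrt(T / coarse))
    dt = T / fine
    drift = -0.5 * sigma * sigma * dt
    vol = sigma * math.sqrt(dt)
    step = fine // coarse
    rng = random.Random(seed)
    disc = cont = corr = 0.0
    for _ in range(n_paths):
        s, a_disc, a_cont, a_corr = s0, True, True, True
        for i in range(1, fine + 1):
            s *= math.exp(drift + vol * rng.gauss(0.0, 1.0))
            if s <= h:
                a_cont = False
                if i % step == 0:      # a discrete monitoring date
                    a_disc = False
            if s <= h_shift:
                a_corr = False
        payoff = max(s - k, 0.0)
        disc += payoff * a_disc
        cont += payoff * a_cont
        corr += payoff * a_corr
    return disc / n_paths, cont / n_paths, corr / n_paths

disc, cont, corr = down_and_out_call()
```

On common paths, disc >= cont and corr >= cont hold pathwise (discrete monitoring, and a lower barrier, each knock out fewer paths), and the correction theorem says the shifted-barrier price should track the discretely monitored one closely, which is what makes continuous-barrier formulas usable for discretely monitored contracts.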
For duality-based upper bound methods, specifically the primal-dual simulation algorithm (Andersen and Broadie 2004), we have developed two improvements. One is sub-optimality checking, which saves unnecessary computation when it is sub-optimal to exercise the option along the sample path; the second is boundary distance grouping, which reduces computational time by skipping computation on selected sample paths based on the distance to the exercise boundary. Numerical results are given for single-asset Bermudan options, moving window Asian options and Bermudan max options. In some examples the computational time is reduced by a factor of several hundred, while the confidence interval of the true option value is considerably tighter than before the improvements.

Subjects: Operations research, Finance. Department: Industrial Engineering and Operations Research. Type: Dissertations.

Infrastructure Scaling and Pricing
http://academiccommons.columbia.edu/catalog/ac:171000
Gocmen, Fikret Caner
http://dx.doi.org/10.7916/D8SQ8XFF
Tue, 18 Feb 2014 00:00:00 +0000

Infrastructure systems play a crucial role in our daily lives. They include, but are not limited to, the highways we take while we commute to work, the stadiums we go to watch games, and the power plants that provide the electricity we consume in our homes. In this thesis we study infrastructure systems from several different perspectives, with a focus on pricing and scalability. The pricing aspect of our research focuses on two industries: toll roads and sports events. Afterwards, we analyze the potential impact of small modular infrastructure on a wide variety of industries. We start by analyzing the problem of determining the tolls that maximize revenue for a managed lane operator, that is, an operator who can charge a toll for the use of some lanes on a highway while a number of parallel lanes remain free to use. Managing toll lanes for profit is becoming increasingly common as private contractors agree to build additional lane capacity in return for the opportunity to retain toll revenue. We start by modeling the lanes as queues and show that the dynamic revenue-maximizing toll is always greater than or equal to the myopic toll that maximizes expected revenue from each arriving vehicle. Numerical examples show that a dynamic revenue-maximizing toll scheme can generate significantly more expected revenue than either a myopic or a static toll scheme. An important implication is that the revenue-maximizing fee does not depend only on the current state, but also on anticipated future arrivals. We discuss the managerial implications and present several numerical examples. Next, we relax the queueing assumption and model traffic propagation on a highway realistically by using simulation. We devise a framework that can be used to obtain revenue-maximizing tolls in such a context.
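The dynamic-versus-myopic comparison can be sketched in a stylized discrete-time model (the model, parameters, and value-of-time distribution below are invented for illustration, not the dissertation's queueing analysis): value iteration yields a dynamic toll that is never below the myopic toll, because admitting a car imposes a congestion cost on future revenue.

```python
def toll_policies(N=5, A=6.0, B=1.0, s=0.5, gamma=0.95, grid=100, iters=500):
    """Value iteration for a stylized toll lane: each period one potential
    driver with value of time ~ Uniform(0, 1) sees n cars in the lane,
    saves g(n) = A - B*(n+1) time units by paying the toll, and joins iff
    value * g(n) >= toll; a non-empty lane finishes one service w.p. s.
    Returns (dynamic toll, myopic toll) for each state n."""
    g = [max(A - B * (n + 1), 0.0) for n in range(N + 1)]

    def bellman(n, V):
        best_v, best_p = -1.0, 0.0
        for k in range(grid + 1):
            p = g[n] * k / grid
            q = (1.0 - p / g[n]) if (g[n] > 0 and n < N) else 0.0
            up, down = min(n + 1, N), max(n - 1, 0)
            no_join = s * V[down] + (1 - s) * V[n] if n > 0 else V[0]
            ev = q * (s * V[n] + (1 - s) * V[up]) + (1 - q) * no_join
            v = q * p + gamma * ev
            if v > best_v:
                best_v, best_p = v, p
        return best_v, best_p

    V = [0.0] * (N + 1)
    for _ in range(iters):
        V = [bellman(n, V)[0] for n in range(N + 1)]
    dynamic = [bellman(n, V)[1] for n in range(N + 1)]
    myopic = [gn / 2.0 for gn in g]   # argmax of p * (1 - p / g(n))
    return dynamic, myopic

dyn, myo = toll_policies()
```

Since the continuation value falls with congestion, the Bellman objective is effectively q(p)·(p − c) with a shadow cost c ≥ 0 of admitting a car, pushing the optimal toll above the myopic g(n)/2.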
We calibrate our framework by using data from the SR-91 Highway in Orange County, CA and explore different tolling schemes. Our numerical experiments suggest that simple dynamic tolling mechanisms can lead to substantial revenue improvements over myopic and time-of-use tolling policies. In the third part, we analyze the revenue management of consumer options for tournaments. Sporting event managers typically offer only advance tickets, which guarantee a seat at a future sporting event in return for an upfront payment. Some event managers and ticket resellers have started to offer call options under which a customer can pay a small amount now for the guaranteed option to attend a future sporting event by paying an additional amount later. We consider the case of tournament options where the event manager sells team-specific options for a tournament final, such as the Super Bowl, before the finalists are determined. These options guarantee a final game ticket to the bearer if his team advances to the finals. We develop an approach by which an event manager can determine the revenue-maximizing prices and amounts of advance tickets and options to sell for a tournament final. Afterwards, for a specific tournament structure, we show that offering options is guaranteed to increase expected revenue for the event. We also establish bounds for the revenue improvement and show that introducing options can increase social welfare. We conclude by presenting a numerical application of our approach. Finally, we argue that advances made in automation, communication and manufacturing portend a dramatic reversal of the "bigger is better" approach to cost reductions prevalent in many basic infrastructure industries, e.g. transportation, electric power generation and raw material processing. We show that the traditional reductions in capital costs achieved by scaling up in size are generally matched by learning effects in the mass-production process when scaling up in numbers instead.
In addition, using the U.S. electricity generation sector as a case study, we argue that the primary operating cost advantage of large unit scale is reduced labor, which can be eliminated by employing low-cost automation technologies. Finally, we argue that locational, operational and financial flexibilities that accompany smaller unit scale can reduce investment and operating costs even further. All these factors combined argue that with current technology, economies of numbers may well dominate economies of unit scale.

Subjects: Business, Operations research, Applied mathematics. Department: Business. Type: Dissertations.

Design and Evaluation of Procurement Combinatorial Auctions
http://academiccommons.columbia.edu/catalog/ac:173476
Kim, Sang Won
http://dx.doi.org/10.7916/D8DF6P8C
Tue, 18 Feb 2014 00:00:00 +0000

The main advantage of a procurement combinatorial auction (CA) is that it allows suppliers to express cost synergies through package bids. However, bidders can also strategically take advantage of this flexibility, by discounting package bids and "inflating" bid prices for single items, even in the absence of cost synergies; the latter behavior can hurt the performance of the auction. It is an empirical question whether allowing package bids and running a CA improves performance in a given setting. Analyzing the actual performance of a CA requires evaluating cost efficiency and the margins of the winning bidders, which is typically private and sensitive information of the bidders. Thus motivated, in Chapter 2 of this dissertation, we develop a structural estimation approach for large-scale first-price CAs to estimate the firms' cost structure using the bid data. To overcome the computational difficulties arising from the large number of bids observed in large-scale CAs, we propose a novel simplified model of bidders' behavior based on pricing package characteristics. Overall, this work develops the first practical tool to empirically evaluate the performance of large-scale first-price CAs commonly used in procurement settings. In Chapter 3, we apply our method to the Chilean school meals auction, in which the government procures half a billion dollars' worth of meal services every year and bidders submit thousands of package bids. Our estimates suggest that bidders' cost synergies are economically significant in this application (~5%), and the current CA mechanism achieves high allocative efficiency (~98%) and reasonable margins for the bidders (~5%).
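A classic benchmark for such auctions is the Vickrey-Clarke-Groves (VCG) mechanism, which can be made concrete with a brute-force toy (invented bids on two items; real CAs of this scale require dedicated winner-determination solvers): each winning supplier is paid its part of the cheapest allocation plus its marginal contribution, i.e. the cost increase the auctioneer would face without it.

```python
from itertools import combinations

def cheapest_cover(items, bids, exclude=None):
    """Min-cost exact cover of `items` by package bids of the form
    (bidder, package, cost); brute force over subsets of bids."""
    usable = [b for b in bids if b[0] != exclude]
    best_cost, best = float("inf"), ()
    for r in range(1, len(usable) + 1):
        for combo in combinations(usable, r):
            covered = [i for _, pkg, _ in combo for i in pkg]
            if len(covered) == len(items) and set(covered) == set(items):
                cost = sum(c for _, _, c in combo)
                if cost < best_cost:
                    best_cost, best = cost, combo
    return best_cost, best

items = {"A", "B"}
bids = [("s1", {"A", "B"}, 10.0),   # package discount by supplier s1
        ("s1", {"A"}, 7.0),
        ("s2", {"B"}, 6.0),
        ("s2", {"A", "B"}, 12.0),
        ("s3", {"A"}, 8.0)]
cost, winning = cheapest_cover(items, bids)   # s1 wins with its 10.0 package bid
payments = {}
for w in {b[0] for b in winning}:
    others = sum(c for bidder, _, c in winning if bidder != w)
    cost_without, _ = cheapest_cover(items, bids, exclude=w)
    payments[w] = cost_without - others       # VCG payment to winner w
```

Here s1 bids 10 for the package but is paid 12, the cost of the best allocation without it (s2's package bid); that premium is what makes truthful bidding a dominant strategy under VCG, and also why VCG procurement costs can balloon when competition is thin, the issue the chapter's asymptotic analysis examines.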
We believe this is the first work in the literature that empirically shows that a CA performs well in a real-world application. We also conduct a counterfactual analysis to study the performance of the Vickrey-Clarke-Groves (VCG) mechanism in our empirical application. While it is well known in the literature that the VCG mechanism achieves allocative efficiency, its application in practice is at best rare, due to several potential weaknesses such as prohibitively high procurement costs. Interestingly, contrary to recent theoretical work, the results show that the VCG mechanism achieves reasonable procurement costs in our application. Motivated by this observation, Chapter 4 addresses this apparent paradox between the theory and our empirical application. Focusing on the high procurement cost issue, we study the impact of competition on the revenue performance of the VCG mechanism using an asymptotic analysis. We believe the findings in this chapter add useful insights for the practical usage of the VCG mechanism.

Subjects: Business, Economics, Operations research. Author UNI: skim14. Department: Business. Type: Dissertations.

Pricing, Trading and Clearing of Defaultable Claims Subject to Counterparty Risk
http://academiccommons.columbia.edu/catalog/ac:169814
Kim, Jinbeom
http://dx.doi.org/10.7916/D8319SWW
Mon, 03 Feb 2014 00:00:00 +0000

The recent financial crisis and subsequent regulatory changes on over-the-counter (OTC) markets have given rise to new valuation and trading frameworks for defaultable claims for investors and dealer banks. More OTC market participants have adopted the new market conventions that incorporate counterparty risk into the valuation of OTC derivatives. In addition, the use of collateral has become common for most bilateral trades to reduce counterparty default risk. On the other hand, to increase transparency and market stability, the U.S. and European regulators have required mandatory clearing of defaultable derivatives through central counterparties. This dissertation tackles these changes and analyzes their impacts on the pricing, trading and clearing of defaultable claims. In the first part of the thesis, we study a valuation framework for financial contracts subject to reference and counterparty default risks with a collateralization requirement. We propose a fixed-point approach to analyze the mark-to-market contract value with counterparty risk provision, and show that it is a unique bounded and continuous fixed point via contraction mapping. This leads us to develop an accurate iterative numerical scheme for valuation. Specifically, we solve a sequence of linear inhomogeneous partial differential equations, whose solutions converge to the fixed point price function. We apply our methodology to compute the bid and ask prices for both defaultable equity and fixed-income derivatives, and illustrate the non-trivial effects of counterparty risk, collateralization ratio and liquidation convention on the bid-ask prices. In the second part, we study the problem of pricing and trading of defaultable claims among investors with heterogeneous risk preferences and market views.
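The contraction fixed-point idea from the first part can be illustrated in one dimension (a toy scalar analogue with invented numbers, not the thesis's PDE scheme): if the mark-to-market value solves v = F(v) for a contraction F, Picard iteration converges geometrically from any starting point.

```python
def picard(F, v0=0.0, tol=1e-12, max_iter=10_000):
    """Iterate v <- F(v) until successive values agree to within tol."""
    v = v0
    for _ in range(max_iter):
        v_next = F(v)
        if abs(v_next - v) < tol:
            return v_next
        v = v_next
    raise RuntimeError("no convergence")

# Toy analogue: a claim pays 5.0 if the counterparty survives (prob 0.9);
# on default the recovery, capped at 4.0, depends on the contract's own
# value -- so the price appears on both sides of the equation.
price = picard(lambda v: 0.9 * 5.0 + 0.1 * min(v, 4.0))
```

Because the default branch enters with weight 0.1, the map contracts with modulus at most 0.1, so a handful of iterations already pins down the unique price, mirroring why the iterative PDE scheme converges quickly.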
Based on the utility-indifference pricing methodology, we construct the bid-ask spreads for risk-averse buyers and sellers, and show that the spreads widen as risk aversion or trading volume increases. Moreover, we analyze the buyer's optimal static trading position under various market settings, including (i) when the market pricing rule is linear, and (ii) when the counterparty -- single or multiple sellers -- may have different nonlinear pricing rules generated by risk aversion and belief heterogeneity. For defaultable bonds and credit default swaps, we provide explicit formulas for the optimal trading positions, and examine the combined effect of heterogeneous risk aversions and beliefs. In particular, we find that belief heterogeneity, rather than the difference in risk aversion, is crucial to trigger a trade. Finally, we study the impact of central clearing on the credit default swap (CDS) market. Central clearing of CDS through a central counterparty (CCP) has been proposed as a tool for mitigating systemic risk and counterparty risk in the CDS market. The design of CCPs involves the implementation of margin requirements and a default fund, for which various designs have been proposed. We propose a mathematical model to quantify the impact of the design of the CCP on the incentive for clearing and analyze the market equilibrium. We determine the minimum number of clearing participants required so that they have an incentive to clear part of their exposures. Furthermore, we analyze the equilibrium CDS positions and their dependence on the initial margin, risk aversion, and counterparty risk in the inter-dealer market. Our numerical results show that minimizing the initial margin maximizes the total clearing positions as well as the CCP's revenue.

Operations research, Finance | jk3071 | Industrial Engineering and Operations Research | Dissertations

Perfect Simulation, Sample-path Large Deviations, and Multiscale Modeling for Some Fundamental Queueing Systems
http://academiccommons.columbia.edu/catalog/ac:181094
Chen, Xinyun | http://dx.doi.org/10.7916/D8WH2MZ1 | Mon, 06 Jan 2014 00:00:00 +0000

As a primary branch of Operations Research, Queueing Theory models and analyzes engineering systems with random fluctuations. With the development of the internet and computational techniques, the engineering systems of today are much bigger in scale and more complicated in structure than 20 years ago, which raises numerous new problems for researchers in the field of queueing theory. The aim of this thesis is to explore new methods and tools, from both algorithmic and analytical perspectives, that are useful for solving such problems. In Chapters 1 and 2, we introduce some techniques of asymptotic analysis that are relatively new to queueing applications in order to give a more accurate probabilistic characterization of queueing models with large scale and complicated structure. In particular, Chapter 1 gives the first functional large deviation result for an infinite-server system with general inter-arrival and service times. The functional approach we use enables a nice description of the whole system over the entire time horizon of interest, which is important in real problems. In Chapter 2, we construct a queueing model for the so-called limit order book that is used in major financial markets worldwide. We use an asymptotic approach called multi-scale modeling to disentangle the complicated dependence among the elements in the trading system and to reduce the model dimensionality. The asymptotic regime we use is inspired by empirical observations, and the resulting limit process explains and reproduces stylized features of real market data. Chapter 2 also provides a nice example of novel applications of queueing models in systems, such as the electronic trading system, that are traditionally outside the scope of queueing theory. Chapters 3 and 4 focus on stochastic simulation methods for performance evaluation of queueing models where analytic approaches fail.
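As a warm-up for the perfect simulation theme of Chapter 3, here is a minimal coupling-from-the-past (Propp-Wilson) sketch on a toy monotone birth-death chain. The chain, its coupling, and all numbers are illustrative assumptions, not the fluid-network algorithm itself.

```python
import random

N = 4  # toy state space {0, ..., N}

def step(x, u):
    # Monotone random-walk update driven by a shared uniform u:
    # up if u < 0.4, down if u > 0.6, otherwise hold (reflecting at 0 and N).
    if u < 0.4:
        return min(x + 1, N)
    if u > 0.6:
        return max(x - 1, 0)
    return x

def cftp(rng):
    us = []          # us[k] drives the transition at time -(k+1)
    T = 1
    while True:
        while len(us) < T:
            us.append(rng.random())   # fresh randomness only for the deeper past
        lo, hi = 0, N                 # bottom and top chains started at time -T
        for k in range(T - 1, -1, -1):
            lo, hi = step(lo, us[k]), step(hi, us[k])
        if lo == hi:                  # coalesced by time 0: exact stationary draw
            return lo
        T *= 2                        # otherwise restart from further in the past
```

Because the update is monotone, it suffices to track the bottom and top chains; when they coalesce by time 0, the common value is an exact sample from the stationary distribution (uniform for this symmetric chain), with no burn-in bias.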
In Chapter 3, we develop a perfect sampling algorithm to generate exact samples from the stationary distribution of stochastic fluid networks in polynomial time. Our approach can be used for time-varying networks with general inter-arrival and service times, whose stationary distributions have no analytic expression. In Chapter 4, we focus on stochastic systems with continuous random fluctuations, for instance, where the workload arrives at the system in a continuous flow, such as a Lévy process. We develop a general framework of simulation algorithms featuring a deterministic error bound and an almost square root convergence rate. As an application, we apply this framework to estimate the stationary distributions of reflected Brownian motions; the performance of our algorithm is better than that of prevalent existing numerical methods.

Operations research | xc2177 | Industrial Engineering and Operations Research | Dissertations

Two Papers of Financial Engineering Relating to the Risk of the 2007--2008 Financial Crisis
http://academiccommons.columbia.edu/catalog/ac:167143
Zhong, Haowen | http://dx.doi.org/10.7916/D8CC0XMG | Fri, 15 Nov 2013 00:00:00 +0000

This dissertation studies two financial engineering and econometrics problems relating to two facets of the 2007-2008 financial crisis. In the first part, we construct the Spatial Capital Asset Pricing Model and the Spatial Arbitrage Pricing Theory to characterize the risk premiums of futures contracts on real estate assets. We also provide rigorous econometric analysis of the new models. An empirical study shows that there is significant spatial interaction among the S&P/Case-Shiller Home Price Index futures returns. In the second part, we perform empirical studies on the jump risk in the equity market. We propose a simple affine jump-diffusion model for equity returns, which seems to outperform existing ones (including models with Lévy jumps) during the financial crisis and is at least as good during normal times, if model complexity is taken into account. In comparing the models, we make two empirical findings: (i) jump intensity seems to increase significantly during the financial crisis, while on average there appears to be little change in jump sizes; (ii) a finite number of large jumps in returns over any finite time horizon seems to fit the data well both before and after the crisis.

Operations research, Statistics | hz2193 | Industrial Engineering and Operations Research | Dissertations

Data-driven System Design in Service Operations
http://academiccommons.columbia.edu/catalog/ac:163306
Lu, Yina | http://hdl.handle.net/10022/AC:P:21080 | Tue, 16 Jul 2013 00:00:00 +0000

The service industry has become an increasingly important component of the world's economy. Simultaneously, the data collected from service systems has grown rapidly in both size and complexity due to the rapid spread of information technology, providing new opportunities and challenges for operations management researchers. This dissertation aims to explore methodologies to extract information from data and provide powerful insights to guide the design of service delivery systems. To do this, we analyze three applications in the retail, healthcare, and IT service industries. In the first application, we conduct an empirical study to analyze how waiting in queue in the context of a retail store affects customers' purchasing behavior. The methodology combines a novel dataset collected via video recognition technology with traditional point-of-sales data. We find that waiting in queue has a nonlinear impact on purchase incidence and that customers appear to focus mostly on the length of the queue, without adjusting enough for the speed at which the line moves. We also find that customers' sensitivity to waiting is heterogeneous and negatively correlated with price sensitivity. These findings have important implications for queueing system design and pricing management under congestion. The second application focuses on disaster planning in healthcare. According to a U.S. government mandate, the New York City metropolitan area must be capable of caring for 400 burn-injured patients during a catastrophic event, a number that far exceeds the current burn bed capacity. We develop a new system for prioritizing patients for transfer to burn beds as they become available and demonstrate its superiority over several other triage methods.
Based on data from previous burn catastrophes, we study the feasibility of admitting the required number of patients to burn beds within the critical three-to-five-day time frame. We find that this is unlikely and that the ability to do so is highly dependent on the type of event and the demographics of the patient population. This work has implications for how disaster plans in other metropolitan areas should be developed. In the third application, we study workers' productivity in a global IT service delivery system, where service requests from possibly globally distributed customers are managed centrally and served by agents. Based on a novel dataset that tracks the detailed time intervals an agent spends on all business-related activities, we develop a methodology to study the variation of productivity over time, motivated by econometric tools from survival analysis. This approach can be used to identify different mechanisms by which workload affects productivity. The findings provide important insights for the design of workload allocation policies that account for agents' workload management behavior.

Operations research | yl2494 | Business | Dissertations

Approximate dynamic programming for large scale systems
http://academiccommons.columbia.edu/catalog/ac:169790
Desai, Vijay V. | http://hdl.handle.net/10022/AC:P:20875 | Fri, 28 Jun 2013 00:00:00 +0000

Sequential decision making under uncertainty is at the heart of a wide variety of practical problems. These problems can be cast as dynamic programs, and the optimal value function can be computed by solving Bellman's equation. However, this approach is limited in its applicability. As the number of state variables increases, the state space size grows exponentially, a phenomenon known as the curse of dimensionality, rendering the standard dynamic programming approach impractical. An effective way of addressing the curse of dimensionality is through parameterized value function approximation. Such an approximation is determined by a relatively small number of parameters and serves as an estimate of the optimal value function. But in order for this approach to be effective, we need Approximate Dynamic Programming (ADP) algorithms that can deliver 'good' approximations to the optimal value function; such an approximation can then be used to derive policies for effective decision-making. From a practical standpoint, in order to assess the effectiveness of such an approximation, there is also a need for methods that give a sense of the suboptimality of a policy. This thesis is an attempt to address both of these issues. First, we introduce a new ADP algorithm based on linear programming to compute value function approximations. LP approaches to approximate DP have typically relied on a natural 'projection' of a well-studied linear program for exact dynamic programming. Such programs restrict attention to approximations that are lower bounds to the optimal cost-to-go function. Our program -- the 'smoothed approximate linear program' -- is distinct from such approaches and relaxes the restriction to lower bounding approximations in an appropriate fashion while remaining computationally tractable.
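For context, the classical approximate linear program that the smoothed version relaxes can be written down for a toy discounted-cost MDP. The three-state, two-action example, the basis functions, and all numbers below are invented for illustration (SciPy assumed available); the point is the lower-bound restriction, checked against value iteration.

```python
import numpy as np
from scipy.optimize import linprog

# Toy MDP (all parameters are made up for illustration).
alpha = 0.9
g = np.array([[1.0, 2.0],      # g[s, a]: one-step cost
              [0.5, 1.5],
              [2.0, 0.2]])
P = np.array([                  # P[a][s, s']: transition kernel per action
    [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1], [0.2, 0.3, 0.5]],
    [[0.3, 0.4, 0.3], [0.5, 0.2, 0.3], [0.1, 0.1, 0.8]],
])
Phi = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])  # basis: constant + linear
mu = np.ones(3) / 3.0           # state-relevance weights

# ALP: maximize mu' Phi r  subject to  Phi r <= g(., a) + alpha P_a Phi r, all a.
A_ub = np.vstack([Phi - alpha * P[a] @ Phi for a in range(2)])
b_ub = np.concatenate([g[:, a] for a in range(2)])
res = linprog(-(Phi.T @ mu), A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * 2)
J_alp = Phi @ res.x             # the ALP value function approximation

# Value iteration for the true optimal cost-to-go, to check the bound.
J = np.zeros(3)
for _ in range(2000):
    J = np.min(g + alpha * np.stack([P[a] @ J for a in range(2)], axis=1), axis=1)
print(J_alp, J)
```

Any feasible point of this LP satisfies Phi r <= T(Phi r) and hence is a pointwise lower bound on the optimal cost-to-go; that is exactly the restriction the smoothed ALP relaxes.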
The resulting program enjoys strong approximation guarantees and is shown to perform well in numerical experiments with the game of Tetris and a queueing network control problem. Next, we consider optimal stopping problems with applications to pricing of high-dimensional American options. We introduce the pathwise optimization (PO) method: a new convex optimization procedure to produce upper and lower bounds on the optimal value (the 'price') of high-dimensional optimal stopping problems. The PO method builds on a dual characterization of optimal stopping problems as optimization problems over the space of martingales, which we dub the martingale duality approach. We demonstrate via numerical experiments that the PO method produces upper bounds and lower bounds (via suboptimal exercise policies) of a quality comparable to state-of-the-art approaches. Further, we develop an approximation theory relevant to martingale duality approaches in general and the PO method in particular. Finally, we consider a broad class of MDPs and introduce a new tractable method for computing bounds by considering information relaxations and introducing penalties. The method delivers tight bounds by identifying the best penalty function among a parameterized class of penalty functions. We implement our method on a high-dimensional financial application, namely optimal execution, and demonstrate the practical value of the method vis-a-vis competing methods available in the literature. In addition, we provide theory to show that bounds generated by our method are provably tighter than some of the other available approaches.

Operations research, Mathematics | vvd2101 | Industrial Engineering and Operations Research, Business | Dissertations

Stochastic Models of Limit Order Markets
http://academiccommons.columbia.edu/catalog/ac:161685
Kukanov, Arseniy | http://hdl.handle.net/10022/AC:P:20511 | Thu, 30 May 2013 00:00:00 +0000

During the last two decades most stock and derivatives exchanges in the world transitioned to electronic trading in limit order books, creating a need for a new set of quantitative models to describe these order-driven markets. This dissertation offers a collection of models that provide insight into the structure of modern financial markets, and can help to optimize trading decisions in practical applications. In the first part of the thesis we study the dynamics of prices, order flows and liquidity in limit order markets over short timescales. We propose a stylized order book model that predicts a particularly simple linear relation between price changes and order flow imbalance, defined as the difference between net changes in supply and demand. The slope in this linear relation, called the price impact coefficient, is inversely proportional in our model to market depth, a measure of liquidity. Our empirical results confirm both of these predictions. The linear relation between order flow imbalance and price changes holds for time intervals between 50 milliseconds and 5 minutes. The inverse relation between the price impact coefficient and market depth holds on longer timescales. These findings shed new light on intraday variations in market volatility. According to our model, volatility fluctuates due to changes in market depth or in order flow variance. Previous studies also found a positive correlation between volatility and trading volume, but in order-driven markets prices are determined by limit order book activity, so the association between trading volume and volatility is unclear. We show how a spurious correlation between these variables can indeed emerge in our linear model due to time aggregation of high-frequency data.
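The stylized linear relation can be checked on synthetic data. In the sketch below, price changes are generated as order flow imbalance (OFI) divided by twice an assumed depth D, plus noise; the 1/(2D) slope, the depth value, and the noise level are all assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 500.0                                   # assumed market depth at the best quotes
ofi = rng.normal(0.0, 300.0, 5000)          # order flow imbalance per interval
dp = ofi / (2 * D) + rng.normal(0.0, 0.05, 5000)  # price change = impact + noise

beta = np.polyfit(ofi, dp, 1)[0]            # OLS estimate of the price impact coefficient
print(beta, 1 / (2 * D))
```

Regressing price changes on OFI recovers a slope close to 1/(2D); in this synthetic setup, doubling the depth halves the estimated price impact coefficient, the inverse-depth relation described above.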
Finally, we observe short-term positive autocorrelation in order flow imbalance and discuss an application of this variable as a measure of adverse selection in limit order executions. Our results suggest that monitoring recent order flow can improve the quality of order executions in practice. In the second part of the thesis we study the problem of optimal order placement in a fragmented limit order market. To execute a trade, market participants can submit limit orders or market orders across various exchanges where a stock is traded. In practice these decisions are influenced by the sizes of order queues and by the statistical properties of order flows in each limit order book, and also by the rebates that exchanges pay for limit order submissions. We present a realistic model of limit order executions and formalize the search for an optimal order placement policy as a convex optimization problem. Based on this formulation we study how various factors determine an investor's order placement decisions. In the case where a single exchange is used for order execution, we derive an explicit formula for the optimal limit and market order quantities. Our solution shows that the optimal split between market and limit orders largely depends on one's tolerance for execution risk. Market orders help to alleviate this risk because they execute with certainty. Correspondingly, we find that the optimal order allocation shifts to these more expensive orders when execution risk is of primary concern, for example when the intended trade quantity is large or when it is costly to catch up on the quantity after a limit order execution fails. We also characterize the optimal solution in the general case of simultaneous order placement on multiple exchanges, and show that it sets execution shortfall probabilities to specific threshold values computed from model parameters.
Finally, we propose a non-parametric stochastic algorithm that computes an optimal solution by resampling historical data and does not require specifying order flow distributions. A numerical implementation of this algorithm is used to study the sensitivity of an optimal solution to changes in model parameters. Our numerical results show that order placement optimization can bring a substantial reduction in trading costs, especially for small orders and in cases when order flows are relatively uncorrelated across trading venues. The order placement optimization framework developed in this thesis can also be used to quantify the costs and benefits of financial market fragmentation from the point of view of an individual investor. For instance, we find that a positive correlation between order flows, which is empirically observed in the fragmented U.S. equity market, increases the costs of trading. As the correlation increases, it may become more expensive to trade in a fragmented market than in a consolidated market. In the third part of the thesis we analyze the dynamics of limit order queues at the best bid or ask of an exchange. These queues consist of orders submitted by a variety of market participants, yet existing order book models commonly assume that all orders have similar dynamics. In practice, some orders are submitted by trade execution algorithms in an attempt to buy or sell a certain quantity of assets under time constraints, and these orders are canceled if their realized waiting time exceeds a patience threshold. In contrast, high-frequency traders submit and cancel orders depending on the order book state, and their orders are not driven by patience. The interaction between these two order types within a single FIFO queue leads to bursts of order cancelations for small queues and anomalously long waiting times in large queues.
We analyze a fluid model that describes the evolution of large order queues in liquid markets, taking into account the heterogeneity between the order submission and cancelation strategies of different traders. Our results show that after a finite initial time interval, the queue reaches a specific structure where all orders from high-frequency traders stay in the queue until execution but most orders from execution algorithms exceed their patience thresholds and are canceled. This "order crowding" effect has been previously noted by participants in highly liquid stock and futures markets and was attributed to a large participation of high-frequency traders. In our model, their presence creates an additional workload, which increases queue waiting times for new orders. Our analysis of the fluid model leads to waiting time estimates that take into account the distribution of order types in a queue. These estimates are tested against a large dataset of realized limit order waiting times collected by a U.S. equity brokerage firm. The queue composition at the moment of order submission noticeably affects an order's waiting time, and we find that assuming a single order type for all orders in the queue leads to unrealistic results. Estimates that instead assume a mix of heterogeneous orders in the queue are closer to empirical data. Our model for a limit order queue with heterogeneous order types also appears to be interesting from a methodological point of view. It introduces a new type of behavior in a queueing system, where one class of jobs has state-dependent dynamics while others are driven by patience. Although this model is motivated by the analysis of limit order books, it may find applications in studying other service systems with state-dependent abandonments.

Operations research, Finance, Statistics | ak2870 | Industrial Engineering and Operations Research | Dissertations

Financial Portfolio Risk Management: Model Risk, Robustness and Rebalancing Error
http://academiccommons.columbia.edu/catalog/ac:161415
Xu, Xingbo | http://hdl.handle.net/10022/AC:P:20382 | Mon, 20 May 2013 00:00:00 +0000

Risk management has always been a key component of portfolio management. While more and more complicated models are proposed and implemented as research advances, they all inevitably rely on imperfect assumptions and estimates. This dissertation aims to investigate the gap between complicated theoretical modelling and practice. We mainly focus on two directions: model risk and rebalancing error. In the first part of the thesis, we develop a framework for quantifying the impact of model error and for measuring and minimizing risk in a way that is robust to model error. This robust approach starts from a baseline model and finds the worst-case error in risk measurement that would be incurred through a deviation from the baseline model, given a precise constraint on the plausibility of the deviation. Using relative entropy to constrain model distance leads to an explicit characterization of worst-case model errors; this characterization lends itself to Monte Carlo simulation, allowing straightforward calculation of bounds on model error with very little computational effort beyond that required to evaluate performance under the baseline model. This approach goes well beyond the effect of errors in parameter estimates to consider errors in the underlying stochastic assumptions of the model and to characterize the greatest vulnerabilities to error in a model. We apply this approach to problems of portfolio risk measurement, credit risk, delta hedging, and counterparty risk measured through credit valuation adjustment. In the second part, we apply this robust approach to a dynamic portfolio control problem. The sources of model error include the evolution of market factors and the influence of these factors on asset returns. We analyze both finite- and infinite-horizon problems in a model in which returns are driven by factors that evolve stochastically.
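The relative-entropy device of the first part has a simple Monte Carlo expression: the worst-case mean loss over models within Kullback-Leibler distance eta of the baseline is attained by an exponential change of measure dQ/dP proportional to exp(theta\*L). The sketch below assumes a Gaussian baseline and a made-up entropy budget, and sweeps theta by bisection until the estimated KL matches the budget.

```python
import numpy as np

rng = np.random.default_rng(1)
L = rng.normal(0.0, 1.0, 100_000)    # baseline loss samples (assumed N(0,1))
eta = 0.1                             # relative-entropy budget (illustrative)

def tilt(theta):
    # Normalized likelihood ratio of the exponentially tilted measure,
    # with the tilted mean loss and the KL divergence estimated under P.
    w = np.exp(theta * L)
    w /= w.mean()
    return np.mean(w * L), np.mean(w * np.log(w))

# Bisect on theta >= 0: the estimated KL grows with the tilt.
lo, hi = 0.0, 5.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if tilt(mid)[1] < eta:
        lo = mid
    else:
        hi = mid
worst, kl = tilt(0.5 * (lo + hi))
print(worst, kl)
```

For a standard normal baseline the worst-case mean is sqrt(2\*eta) in closed form, which the sample estimate reproduces; note that only baseline samples are needed, echoing the point that the bounds cost little beyond evaluating the baseline model.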
The model incorporates transaction costs and leads to simple and tractable optimal robust controls for multiple assets. We illustrate the performance of the controls on historical data. Robustness does improve performance in out-of-sample tests in which the model is estimated on a rolling window of data and then applied over a subsequent time period. By acknowledging uncertainty in the estimated model, the robust rules lead to less aggressive trading and are less sensitive to sharp moves in underlying prices. In the last part, we analyze the error between a discretely rebalanced portfolio and its continuously rebalanced counterpart in the presence of jumps or mean-reversion in the underlying asset dynamics. With discrete rebalancing, the portfolio's composition is restored to a set of fixed target weights at discrete intervals; with continuous rebalancing, the target weights are maintained at all times. We examine the difference between the two portfolios as the number of discrete rebalancing dates increases. We derive the limiting variance of the relative error between the two portfolios for both the mean-reverting and jump-diffusion cases. For both cases, we derive "volatility adjustments" to improve the approximation of the discretely rebalanced portfolio by the continuously rebalanced portfolio, based on the limiting covariance between the relative rebalancing error and the level of the continuously rebalanced portfolio. These results are based on strong approximation results for jump-diffusion processes.

Operations research, Finance, Mathematics | xx2126 | Industrial Engineering and Operations Research, Business | Dissertations

Optimization Algorithms for Structured Machine Learning and Image Processing Problems
http://academiccommons.columbia.edu/catalog/ac:158764
Qin, Zhiwei | http://hdl.handle.net/10022/AC:P:19648 | Fri, 05 Apr 2013 00:00:00 +0000

Optimization algorithms are often the solution engine for machine learning and image processing techniques, but they can also become the bottleneck in applying these techniques if they are unable to cope with the size of the data. With the rapid advancement of modern technology, data of unprecedented size has become more and more available, and there is an increasing demand to process and interpret these data. Traditional optimization methods, such as the interior-point method, can solve a wide array of problems arising from the machine learning domain, but it is also this generality that often prevents them from dealing with large data efficiently. Hence, specialized algorithms that can readily take advantage of the problem structure are highly desirable and of immediate practical interest. This thesis focuses on developing efficient optimization algorithms for machine learning and image processing problems of diverse types, including supervised learning (e.g., the group lasso), unsupervised learning (e.g., robust tensor decompositions), and total-variation image denoising. These algorithms are of wide interest to the optimization, machine learning, and image processing communities. Specifically, (i) we present two algorithms to solve the Group Lasso problem. First, we propose a general version of the Block Coordinate Descent (BCD) algorithm for the Group Lasso that employs an efficient approach for optimizing each subproblem exactly. We show that it exhibits excellent performance when the groups are of moderate size. For groups of large size, we propose an extension of the proximal gradient algorithm based on variable step-lengths that can be viewed as a simplified version of BCD. By combining the two approaches we obtain an implementation that is very competitive and often outperforms other state-of-the-art approaches for this problem.
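The proximal gradient view mentioned above amounts to a gradient step on the smooth loss followed by group soft-thresholding. A minimal sketch on synthetic data, with the group structure, regularization weight, step size, and noise level all assumed for illustration (this is plain ISTA with a fixed step, not the variable step-length extension):

```python
import numpy as np

# Synthetic group lasso instance: 15 features in 3 groups; only group 1 is active.
rng = np.random.default_rng(2)
n = 200
groups = [np.arange(0, 5), np.arange(5, 10), np.arange(10, 15)]
X = rng.normal(size=(n, 15))
w_true = np.zeros(15)
w_true[:5] = 1.0
y = X @ w_true + 0.1 * rng.normal(size=n)

# min_w 0.5*||y - Xw||^2 + lam * sum_g ||w_g||_2
lam = 5.0
step = 1.0 / np.linalg.norm(X, 2) ** 2   # 1/L, with L the Lipschitz constant
w = np.zeros(15)
for _ in range(500):
    z = w - step * (X.T @ (X @ w - y))   # gradient step on the least-squares loss
    for g in groups:                      # prox: group-wise soft-thresholding
        nrm = np.linalg.norm(z[g])
        z[g] *= max(0.0, 1.0 - step * lam / max(nrm, 1e-12))
    w = z
print(np.round(w, 2))
```

The group penalty shrinks the two inactive groups to (essentially) zero as whole blocks while leaving the active group close to its true value, the grouped-selection behavior that motivates the Group Lasso.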
We show how these methods fit into the globally convergent general block coordinate gradient descent framework of Tseng and Yun (2009). We also show that the proposed approach is more efficient in practice than the one implemented in Tseng and Yun (2009). In addition, we apply our algorithms to the Multiple Measurement Vector (MMV) recovery problem, which can be viewed as a special case of the Group Lasso problem, and compare their performance to other methods in this particular instance; (ii) we further investigate sparse linear models with two commonly adopted general sparsity-inducing regularization terms, the overlapping Group Lasso penalty (the l1/l2-norm) and the l1/l∞-norm. We propose a unified framework based on the augmented Lagrangian method, under which problems with both types of regularization and their variants can be efficiently solved. As one of the core building blocks of this framework, we develop new algorithms using a partial-linearization/splitting technique and prove that the accelerated versions of these algorithms require O(1/√ε) iterations to obtain an ε-optimal solution. We compare the performance of these algorithms against that of the alternating direction augmented Lagrangian and FISTA methods on a collection of data sets and apply them to two real-world problems to compare the relative merits of the two norms; (iii) we study the problem of robust low-rank tensor recovery in a convex optimization framework, drawing upon recent advances in robust Principal Component Analysis and tensor completion. We propose tailored optimization algorithms with global convergence guarantees for solving both the constrained and the Lagrangian formulations of the problem. These algorithms are based on the highly efficient alternating direction augmented Lagrangian and accelerated proximal gradient methods. We also propose a nonconvex model that can often improve the recovery results from the convex models.
We investigate the empirical recoverability properties of the convex and nonconvex formulations and compare the computational performance of the algorithms on simulated data. We demonstrate through a number of real applications the practical effectiveness of this convex optimization framework for robust low-rank tensor recovery; (iv) we consider the image denoising problem using total variation regularization. This problem is computationally challenging to solve due to the non-differentiability and non-linearity of the regularization term. We propose a new alternating direction augmented Lagrangian method, involving subproblems that can be solved efficiently and exactly. The global convergence of the new algorithm is established for the anisotropic total variation model. We compare our method with the split Bregman method and demonstrate the superiority of our method in computational performance on a set of standard test images.

Operations research, Computer science, Statistics | zq2107 | Industrial Engineering and Operations Research | Dissertations

Models for managing surge capacity in the face of an influenza epidemic
http://academiccommons.columbia.edu/catalog/ac:157364
Zenteno, Ana | http://hdl.handle.net/10022/AC:P:19200 | Fri, 01 Mar 2013 00:00:00 +0000

Influenza pandemics pose an imminent risk to society. Yearly outbreaks already represent heavy social and economic burdens. A pandemic could severely affect infrastructure and commerce through high absenteeism, supply chain disruptions, and other effects over an extended and uncertain period of time. Governmental institutions such as the Centers for Disease Control and Prevention (CDC) and the U.S. Department of Health and Human Services (HHS) have issued guidelines on how to prepare for a potential pandemic; however, much work still needs to be done in order to meet them. From a planner's perspective, the complexity of outlining plans to manage future resources during an epidemic stems from the uncertainty of how severe the epidemic will be. Uncertainty in parameters such as the contagion rate (how fast the disease spreads) makes the course and severity of the epidemic unforeseeable, exposing any planning strategy to a potentially wasteful allocation of resources. Our approach involves the use of additional resources in response to a robust model of the evolution of the epidemic, so as to hedge against the uncertainty in its evolution and intensity. Under existing plans, large cities would make use of networks of volunteers, students, and recent retirees, or borrow staff from neighboring communities. Taking into account that such additional resources are likely to be significantly constrained (e.g., in quantity and duration), we seek to produce robust emergency staff commitment levels that work well under different trajectories and degrees of severity of the pandemic. Our methodology combines Robust Optimization techniques with Epidemiology (SEIR models) and system performance modeling. We describe cutting-plane algorithms analogous to generalized Benders' decomposition that prove fast and numerically accurate.
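The epidemic component can be sketched with a forward-Euler integration of the SEIR compartments. The rates below (contagion rate beta, mean latency 1/sigma, mean infectious period 1/gamma) are illustrative stand-ins, and it is exactly the uncertainty in beta that the robust model hedges against.

```python
# SEIR compartments as population fractions; all parameter values are assumptions.
beta, sigma, gamma = 0.5, 1 / 3, 1 / 5   # contagion, 1/latency, 1/recovery (per day)
S, E, I, R = 0.999, 0.001, 0.0, 0.0
dt, days = 0.1, 180
peak_I = 0.0
for _ in range(int(days / dt)):
    dS = -beta * S * I                   # new infections leave S
    dE = beta * S * I - sigma * E        # ...enter E, then progress to I
    dI = sigma * E - gamma * I
    dR = gamma * I
    S, E, I, R = S + dS * dt, E + dE * dt, I + dI * dt, R + dR * dt
    peak_I = max(peak_I, I)              # peak prevalence drives surge staffing needs
print(round(peak_I, 3), round(R, 3))
```

With basic reproduction number beta/gamma = 2.5, roughly 89% of the population is eventually infected in this run; sweeping beta over a plausible range generates exactly the kind of trajectory uncertainty that the robust staffing levels must cover.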
Our results yield insights into the structure of optimal robust strategies and into practical rules of thumb that can be deployed during the epidemic. To assess the efficacy of our solutions, we study their performance under different scenarios and compare them against other seemingly good strategies through numerical experiments. This work would be particularly valuable for institutions that provide public services, whose operational continuity is critical for a community, especially in view of an event of this caliber. As far as we know, this is the first time this problem has been addressed in a rigorous way; in particular, we are not aware of any other robust optimization applications in epidemiology.

Operations research, Public health | acz2103 | Industrial Engineering and Operations Research | Dissertations

Chance Constrained Optimal Power Flow: Risk-Aware Network Control under Uncertainty
http://academiccommons.columbia.edu/catalog/ac:156182
Bienstock, Daniel; Chertkov, Michael; Harnett, Seanhttp://hdl.handle.net/10022/AC:P:18933Tue, 05 Feb 2013 00:00:00 +0000When uncontrollable resources fluctuate, Optimal Power Flow (OPF), routinely used by the electric power industry to re-dispatch hourly controllable generation (coal, gas and hydro plants) over control areas of transmission networks, can result in grid instability, and, potentially, cascading outages. This risk arises because OPF dispatch is computed without awareness of major uncertainty, in particular fluctuations in renewable output. As a result, grid operation under OPF with renewable variability can lead to frequent conditions where power line flow ratings are significantly exceeded. Such a condition, which is borne out by simulations of real grids, would likely result in automatic line tripping to protect lines from thermal stress, a risky and undesirable outcome which compromises stability. Smart grid goals include a commitment to large penetration of highly fluctuating renewables, thus calling for a reconsideration of current practices, in particular the use of standard OPF. Our Chance Constrained (CC) OPF corrects the problem and mitigates dangerous renewable fluctuations with minimal changes in the current operational procedure. Assuming availability of a reliable wind forecast parameterizing the distribution function of the uncertain generation, our CC-OPF satisfies all the constraints with high probability while simultaneously minimizing the cost of economic re-dispatch. CC-OPF allows efficient implementation, e.g. solving a typical instance over the 2746-bus Polish network in 20 seconds on a standard laptop.Industrial engineering, Operations researchdb17, srh2144Applied Physics and Applied Mathematics, Industrial Engineering and Operations ResearchArticlesChance Constrained Optimal Power Flow: Risk-Aware Network Control under Uncertainty
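For a single line under a Gaussian model of forecast error, a chance constraint of the kind described in the abstract above reduces to a deterministic constraint with a quantile-based margin; the sketch below is a generic illustration of that reduction, with made-up numbers, not the paper's network-wide formulation:

```python
from statistics import NormalDist

def deterministic_flow_cap(rating, sigma, eps):
    """Deterministic equivalent of the chance constraint
    P(nominal_flow + fluctuation <= rating) >= 1 - eps,
    with fluctuation ~ Normal(0, sigma^2):
    enforce nominal_flow <= rating - z_{1-eps} * sigma."""
    z = NormalDist().inv_cdf(1.0 - eps)
    return rating - z * sigma

# More wind variability (sigma) or a tighter risk budget (eps) leaves
# less of the line's thermal rating available for scheduled flow.
print(deterministic_flow_cap(rating=100.0, sigma=5.0, eps=0.05))
```

This reduction is what keeps chance-constrained dispatch tractable: once the fluctuation distribution is parameterized by the forecast, the probabilistic constraints become deterministic convex constraints.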
http://academiccommons.columbia.edu/catalog/ac:153902
Bienstock, Daniel; Chertkov, Michael; Harnett, Seanhttp://hdl.handle.net/10022/AC:P:15118Mon, 29 Oct 2012 00:00:00 +0000When uncontrollable resources fluctuate, Optimal Power Flow (OPF), routinely used by the electric power industry to re-dispatch hourly controllable generation (coal, gas and hydro plants) over control areas of transmission networks, can result in grid instability, and, potentially, cascading outages. This risk arises because OPF dispatch is computed without awareness of major uncertainty, in particular fluctuations in renewable output. As a result, grid operation under OPF with renewable variability can lead to frequent conditions where power line flow ratings are significantly exceeded. Such a condition, which is borne out by simulations of real grids, would likely result in automatic line tripping to protect lines from thermal stress, a risky and undesirable outcome which compromises stability. Smart grid goals include a commitment to large penetration of highly fluctuating renewables, thus calling for a reconsideration of current practices, in particular the use of standard OPF. Our Chance Constrained (CC) OPF corrects the problem and mitigates dangerous renewable fluctuations with minimal changes in the current operational procedure. Assuming availability of a reliable wind forecast parameterizing the distribution function of the uncertain generation, our CC-OPF satisfies all the constraints with high probability while simultaneously minimizing the cost of economic re-dispatch. CC-OPF allows efficient implementation, e.g. solving a typical instance over the 2746-bus Polish network in 20 seconds on a standard laptop.Industrial engineering, Operations researchdb17Applied Physics and Applied Mathematics, Industrial Engineering and Operations ResearchArticlesPrice competition and the impact of service attributes: Structural estimation and analytical characterizations of equilibrium behavior
http://academiccommons.columbia.edu/catalog/ac:153522
Pierson, Margaret Parkerhttp://hdl.handle.net/10022/AC:P:14979Wed, 17 Oct 2012 00:00:00 +0000This dissertation addresses a number of outstanding, fundamental questions in the operations management and industrial organization literature. Operations management literature has a long history of studying the competitive impact of operational, firm-level strategic decisions within oligopoly markets. The first essay reports on an empirical study of an important industry, the drive-thru fast-food industry. We estimate a competition model, derived from an underlying Mixed Multinomial Logit (MMNL) consumer choice model, using detailed empirical data. The main goal is to measure to what extent waiting time performance, along with price levels, brand attributes, and geographical and demographic factors, impacts competing firms' market shares. The primary goal of our second essay is to characterize the equilibrium behavior of price competition models with Mixed Multinomial Logit (MMNL) demand functions under affine cost structures. In spite of the huge popularity of MMNL models in both the theoretical and empirical literature, it is not known, in general, whether a Nash equilibrium (in pure strategies) of prices exists, and whether the equilibria can be uniquely characterized as the solutions to the system of First Order Condition (FOC) equations. In the third essay, which is the most general in its context, we establish that in the absence of cost efficiencies resulting from a merger, aggregate profits of the merging firms increase, as do equilibrium prices, for general price competition models with general nonlinear demand and cost functions as long as the models are supermodular, with two additional structural conditions: (i) each firm's profit function is strictly quasi-concave in its own price(s), and (ii) markets are competitive, i.e., in the pre-merger industry, each firm's profits increase when any of its competitors increases its price, unilaterally. 
Even the equilibrium profits of the remaining firms in the industry increase, while the consumer ends up holding the bag, i.e., consumer welfare declines. As demonstrated by this essay, the answers to these sorts of strategy questions have implications not only for the firms and customers but also for the policy makers policing these markets.Operations research, BusinessBusinessDissertationsContingent Capital: Valuation and Risk Implications Under Alternative Conversion Mechanisms
http://academiccommons.columbia.edu/catalog/ac:152933
Nouri, Behzadhttp://hdl.handle.net/10022/AC:P:14800Fri, 28 Sep 2012 00:00:00 +0000Several proposals for enhancing the stability of the financial system include requirements that banks hold some form of contingent capital, meaning equity that becomes available to a bank in the event of a crisis or financial distress. Specific proposals vary in their choice of conversion trigger and conversion mechanism, and have inspired extensive scrutiny regarding their effectiveness in avoiding costly public rescues and bail-outs and their potential adverse effects on market dynamics. While allowing banks to leverage and gain a higher return on their equity capital during upturns in financial markets, contingent capital provides an automatic mechanism to reduce debt and raise the loss-bearing capital cushion during downturns and market crashes, thereby making it possible to achieve stability and robustness in the financial sector without reducing the efficiency and competitiveness of the banking system through higher regulatory capital requirements. However, many researchers have raised concerns regarding unintended consequences and implications of such instruments for market dynamics. Death spirals in the stock price near conversion, the possibility of profitable stock or book manipulations by either the investors or the issuer, the marketability of and demand for such hybrid instruments, contagion and systemic risks arising from the hedging strategies of the investors, and higher risk-taking incentives for issuers are among such concerns. Though substantial, many of these issues can be addressed through a prudent design of the trigger and conversion mechanism. In the following chapters, we develop multiple models for pricing and analysis of contingent capital under different conversion mechanisms. In Chapter 2 we analyze the case of contingent capital with a capital-ratio trigger and partial and on-going conversion. 
The capital ratio we use is based on accounting or book value to approximate the regulatory ratios that determine capital requirements for banks. The conversion process is partial and on-going in the sense that each time a bank's capital ratio reaches the minimum threshold, just enough debt is converted to equity to meet the capital requirement, so long as the contingent capital has not been depleted. In Chapter 3 we simplify the design to all-at-once conversion; however, we perform the analysis with a much richer model that incorporates tail risk in the form of jumps, an endogenous optimal default policy, and debt rollover. We also investigate the case of bail-in debt, where at default the original shareholders are wiped out and the converted investors take control of the firm. In the case of contingent convertibles, the conversion trigger is assumed to be a contractual term specified in terms of the market value of assets. For bail-in debt, the trigger is the point at which the original shareholders optimally default. We study the incentives of shareholders to change the capital structure and how CoCos affect risk incentives. Several researchers have advocated the use of a market-based trigger, which is forward-looking, continuously updated and readily available, while others have raised concerns regarding the unintended consequences of such a trigger. In Chapter 4 we investigate one of these issues, namely the existence and uniqueness of equilibrium when the conversion trigger is based on the stock price.Finance, Operations researchbn2164Industrial Engineering and Operations Research, BusinessDissertationsThree Essays on Dynamic Pricing and Resource Allocation
http://academiccommons.columbia.edu/catalog/ac:151966
Nur, Cavdarogluhttp://hdl.handle.net/10022/AC:P:14492Thu, 23 Aug 2012 00:00:00 +0000This thesis consists of three essays that focus on different aspects of pricing and resource allocation. We use techniques from supply chain and revenue management, scenario-based robust optimization and game theory to study the behavior of firms in different competitive and non-competitive settings. We develop dynamic programming models that account for the pricing and resource allocation decisions of firms in such settings. In Chapter 2, we focus on the resource allocation problem of a service firm, particularly a health-care facility. We formulate a general model that is applicable to various resource allocation problems of a hospital. To this end, we consider a system with multiple customer classes that display different reactions to delays in service. By adopting a dynamic-programming approach, we show that the optimal policy is not simple but exhibits desirable monotonicity properties. Furthermore, we propose a simple threshold heuristic policy that performs well in our experiments. In Chapter 3, we study a dynamic pricing problem for a monopolist seller that operates in a setting where buyers have market power, and where each potential sale takes the form of a bilateral negotiation. We review the dynamic programming formulation of the negotiation problem, and propose a simple and tractable deterministic "fluid" analogue for this problem. The main emphasis of the chapter is on expanding the formulation to the dynamic setting where both the buyer and seller have limited prior information on their counterparty's valuation and negotiation skill. In Chapter 4, we consider the revenue maximization problem of a seller who operates in a market with two types of customers, namely "investors" and "regular buyers". 
In a two-period setting, we model and solve the pricing game between the seller and the investors in the latter period, and based on the solution of this game, we analyze the revenue maximization problem of the seller in the former period. Moreover, we study the effects on the total system profits when the seller and the investors cooperate through a contracting mechanism rather than competing with each other; and explore the contracting opportunities that lead to higher profits for both agents.Operations researchIndustrial Engineering and Operations ResearchDissertationsModeling Customer Behavior for Revenue Management
http://academiccommons.columbia.edu/catalog/ac:151773
Bansal, Matulyahttp://hdl.handle.net/10022/AC:P:14424Fri, 17 Aug 2012 00:00:00 +0000In this thesis, we model and analyze the impact of two behavioral aspects of customer decision-making upon the revenue maximization problem of a monopolist firm. First, we study the revenue maximization problem of a monopolist firm selling a homogeneous good to a market of risk-averse, strategic customers. Using a discrete (but arbitrary) valuation distribution, we show how the dynamic pricing problem with strategic customers can be formulated as a mechanism design problem, thereby making it more amenable to analysis. We characterize the optimal solution, and solve the problem for several special cases. We perform asymptotic analysis for the low risk-aversion case and show that it is asymptotically optimal to offer at most two products. Second, we consider a revenue-maximizing monopolist firm that serves a market of customers who are heterogeneous with respect to their valuations and desire for a quality attribute. Instead of optimizing the net utility that results from an appropriate combination of product price and quality, as in the traditional model of customer behavior, we consider a setting where customers purchase the cheapest product subject to its quality exceeding a customer-specific quality threshold. We call such preferences threshold preferences. We solve the firm’s product design problem in this setting, and contrast it with the traditional model of customer choice behavior. We consider several scenarios where such preferences might arise, and identify the optimal solution in each case. In addition to these product design problems, we study the problem of identifying the optimal putting strategy for a golfer. We develop a model of golfer putting skill, and combine it with a putt trajectory and holeout model to identify a golfer’s optimal putting strategy. 
The problem of identifying the optimal putting strategy is shown to be equivalent to a two-dimensional stochastic shortest path problem, with continuous state and control space, and solved using approximate dynamic programming. We calibrate the golfer model to professional and amateur player data, and use the calibrated model to answer several interesting questions, e.g., how does green reading ability affect golfer performance, how do professional and amateur golfers differ in their strategy, how do uphill and downhill putts compare in difficulty, etc.Business, Operations researchmb2431BusinessDissertationsStrategic Models in Supply Network Design
http://academiccommons.columbia.edu/catalog/ac:147203
Lederman, Rogerhttp://hdl.handle.net/10022/AC:P:13314Thu, 24 May 2012 00:00:00 +0000This dissertation contains a series of essays intended to introduce strategic modeling techniques into the network design problem. While investment in production capacity has long been approached as a critical strategic decision, the increasing need for robust, responsive supply capabilities has made it essential to take a network view, where multiple products and sites are considered simultaneously. In traditional network planning, models have rarely accounted for the behavior of additional players - customers, competitors, suppliers - on whom a firm can exert only a limited influence. We analyze a set of models that account for the dynamics of the firm's interaction with these outside actors. In Chapters 2 and 3, we develop game-theoretic models to characterize the allocation of resources in a network context. In Chapter 2, we use series-parallel networks to model the arrangement of producers whose output is bundled. This structure may arise, for example, when various components of the production process are outsourced individually. We study supply-function mechanisms through which producers strategically manage scarce capacity. Our results show how network structure can be analyzed to measure producers' market power and its effect on equilibrium markups. Chapter 3 looks at the network design problem of a vertically integrated firm with the ability to flexibly allocate resources across markets. We consider optimal design of the firm's production network as an upper-level decision to be optimized with respect to competitive outcomes in the lower stage. We find that optimal strategies regarding the location and centralization of production will differ across firms, depending on their competitive position in the market. The final two chapters discuss practical issues regarding the availability of model inputs in a multi-product context. 
In Chapter 4, we propose a method to construct competitor sets through estimation of a latent-segment choice model. We present a case study in a hotel market, where demand is distributed both spatially and temporally. We show how widely available data on market events can be used to drive identification of customer segments, providing a basis to assess competitive interactions. Chapter 5 provides a further example, in the setting of urban transportation networks, of how user behavior on a network can be estimated from partially observed data. We present a novel two-phase approach for performing this estimation in real time.Operations research, Businessrdl2102BusinessDissertationsMultiproduct Pricing Management and Design of New Service Products
http://academiccommons.columbia.edu/catalog/ac:144706
Wang, Ruxianhttp://hdl.handle.net/10022/AC:P:12603Fri, 17 Feb 2012 00:00:00 +0000In this thesis, we study price optimization and competition of multiple differentiated substitutable products under the general Nested Logit model, and also consider the design and pricing of new service products, e.g., flexible warranties and refundable warranties, under customers' strategic claim behavior. Chapter 2 considers firms that sell multiple differentiated substitutable products and customers whose purchase behavior follows the Nested Logit model, of which the Multinomial Logit model is a special case. In the Nested Logit model, customers make product selection decisions sequentially: they first select a class or a nest of products and subsequently choose a product within the selected class. We consider the general Nested Logit model with product-differentiated price coefficients and general nest-heterogeneous degrees. We show that the adjusted markup, which is defined as price minus cost minus the reciprocal of the price coefficient, is constant across all the products in each nest. When optimizing multiple nests of products, the adjusted nested markup is also constant within a nest. By using this result, the multi-product optimization problem can be reduced to a single-dimensional problem on a bounded interval, which is easy to solve. We also use this result to simplify the oligopolistic price competition and characterize the Nash equilibrium. Furthermore, we investigate its application to dynamic pricing and revenue management. In Chapter 3, we investigate the flexible monthly warranty, which offers flexibility to customers and allows them to cancel it at any time without any penalty. Frequent technological innovations and price declines severely affect sales of extended warranties, as product replacement upon failure becomes an increasingly attractive alternative. To increase sales and profitability, we propose offering flexible-duration extended warranties. 
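The constant adjusted-markup property described above can be sketched for the Multinomial Logit special case with a common price coefficient, where the multi-product pricing problem collapses to a one-dimensional search over a single markup; the parameters below are illustrative (the dissertation treats the general Nested Logit case):

```python
import math

def mnl_shares(prices, a, b):
    """MNL purchase probabilities with an outside (no-purchase) option."""
    w = [math.exp(ai - b * p) for ai, p in zip(a, prices)]
    d = 1.0 + sum(w)
    return [wi / d for wi in w]

def profit(m, costs, a, b):
    """With a common price coefficient, every product carries the same
    markup m at the optimum, so expected profit depends on m alone."""
    return m * sum(mnl_shares([c + m for c in costs], a, b))

def best_markup(costs, a, b, lo=0.0, hi=50.0, iters=200):
    for _ in range(iters):  # ternary search; profit is unimodal in m
        m1, m2 = lo + (hi - lo) / 3.0, hi - (hi - lo) / 3.0
        if profit(m1, costs, a, b) < profit(m2, costs, a, b):
            lo = m1
        else:
            hi = m2
    return 0.5 * (lo + hi)

costs, a, b = [1.0, 2.0], [1.0, 1.5], 1.0
m_star = best_markup(costs, a, b)  # optimal common markup, here above 1/b
```

Reducing the search to one dimension is what makes both the monopoly problem and the equilibrium characterization tractable.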
These warranties can appeal to customers who are uncertain about how long they will keep the product as well as to customers who are uncertain about the product's reliability. Flexibility may be added to existing services in the form of monthly billing with month-by-month commitments, or by making existing warranties easier to cancel, with pro-rated refunds. This thesis studies flexible warranties from the perspectives of both the customer and the provider. We present a model of the customer's optimal coverage decisions under the objective of minimizing expected support costs over a random planning horizon. We show that under some mild conditions the customer's optimal coverage policy has a threshold structure. We also show through an analytical study and through numerical examples how flexible warranties can result in higher profits and higher attach rates. Chapter 4 examines the design and pricing of residual value warranties, which refund customers at the end of the warranty period based on their claim history. Traditional extended warranties for IT products do not differentiate customers according to their usage rates or operating environment. These warranties are priced to cover the costs of high-usage customers who tend to experience more failures and are therefore more costly to support. This makes traditional warranties economically unattractive to low-usage customers. In this chapter, we introduce, design and price residual value warranties. These warranties refund a part of the upfront price to customers who have zero or few claims, according to a pre-determined refund schedule. By design, the net cost of these warranties is lower for light users than for heavy users. As a result, a residual value warranty can enable the provider to price-discriminate based on usage rates or operating conditions without the need to monitor individual customers' usage. 
Theoretical results and numerical experiments demonstrate how residual value warranties can appeal to a broader range of customers and significantly increase the provider's profits.Operations research, Industrial engineeringrw2267Industrial Engineering and Operations ResearchDissertationsEssays on Inventory Management and Object Allocation
http://academiccommons.columbia.edu/catalog/ac:144769
Lee, Thiam Huihttp://hdl.handle.net/10022/AC:P:12623Fri, 17 Feb 2012 00:00:00 +0000This dissertation consists of three essays. In the first, we establish a framework for proving equivalences between mechanisms that allocate indivisible objects to agents. In the second, we study a newsvendor model where the inventory manager has access to two experts that provide advice, and examine how and when an optimal algorithm can be efficiently computed. In the third, we study the classical single-resource capacity allocation problem and investigate the relationship between data availability and performance guarantees. We first study mechanisms that solve the problem of allocating indivisible objects to agents. We consider the class of mechanisms that utilize the Top Trading Cycles (TTC) algorithm (these may differ based on how they prioritize agents), and show a general approach to proving equivalences between mechanisms from this class. This approach is used to give alternative and simpler proofs for two recent equivalence results for mechanisms with linear priority structures. We also use the same approach to show that these equivalence results can be generalized to mechanisms where the agent priority structure is described by a tree. Second, we study the newsvendor model where the manager has recourse to advice, or decision recommendations, from two experts, and where the objective is to minimize worst-case regret from not following the advice of the better of the two agents. We show the model can be reduced to the classic machine-learning problem of predicting binary sequences, but with an asymmetric cost function, allowing us to obtain an optimal algorithm by modifying a well-known existing one. However, the algorithm we modify, and consequently the optimal algorithm we describe, is not known to be efficiently computable, because it requires evaluations of a function v, which is the objective value of recursively defined optimization problems. 
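A generic exponential-weights scheme conveys the flavor of the two-experts newsvendor problem described above; this is a standard illustrative baseline under asymmetric costs, not the minimax-optimal algorithm the essay derives:

```python
import math

def newsvendor_loss(q, d, cu, co):
    """Asymmetric cost: cu per unit of unmet demand, co per unit left over."""
    return cu * max(d - q, 0.0) + co * max(q - d, 0.0)

def follow_two_experts(demands, advice_a, advice_b, cu=2.0, co=1.0, eta=0.1):
    """Order a weight-blended quantity each period; exponentially
    downweight each expert by the loss its own advice would have incurred."""
    wa = wb = 1.0
    total = 0.0
    for d, qa, qb in zip(demands, advice_a, advice_b):
        q = (wa * qa + wb * qb) / (wa + wb)
        total += newsvendor_loss(q, d, cu, co)
        wa *= math.exp(-eta * newsvendor_loss(qa, d, cu, co))
        wb *= math.exp(-eta * newsvendor_loss(qb, d, cu, co))
    return total
```

Because the loss is convex in the order quantity, blending the two recommendations is sound, and the cumulative loss tracks that of the better expert.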
We analyze v and show that when the two cost parameters of the newsvendor model are small multiples of a common factor, its evaluation is computationally efficient. We also provide a novel and direct asymptotic analysis of v that differs from previous approaches. Our asymptotic analysis gives us insight into the transient structure of v as its parameters scale, enabling us to formulate a heuristic for evaluating v generally. This, in turn, defines a heuristic for the optimal algorithm whose decisions we find in a numerical study to be close to optimal. In our third essay, we study the classical single-resource capacity allocation problem. In particular, we analyze the relationship between data availability (in the form of demand samples) and performance guarantees for solutions derived from that data. This is done by describing a class of solutions called epsilon-backwards accurate policies and determining a suboptimality gap for this class of solutions. The suboptimality gap we find is in terms of epsilon and is also distribution-free. We then relate solutions generated by a Monte Carlo algorithm and epsilon-backwards accurate policies, showing a lower bound on the quantity of data necessary to ensure that the solution generated by the algorithm is epsilon-backwards accurate with a high probability. Combining the two results then allows us to give a lower bound on the data needed to generate an alpha-approximation with a given confidence probability 1-delta. We find that this lower bound is polynomial in the number of fares, M, and 1/alpha.Operations researchthl2102Industrial Engineering and Operations ResearchDissertationsA Simulation Model to Analyze the Impact of Golf Skills and a Scenario-based Approach to Options Portfolio Optimization
http://academiccommons.columbia.edu/catalog/ac:143076
Ko, Soonminhttp://hdl.handle.net/10022/AC:P:12166Tue, 10 Jan 2012 00:00:00 +0000A simulation model of the game of golf is developed to analyze the impact of various skills (e.g., driving distance, directional accuracy, putting skill, and others) on golf scores. The course model includes realistic features of a golf course, including rough, sand, water, and trees. Golfer shot patterns are modeled with t distributions and mixtures of t and normal distributions, since normal distributions do not provide good fits to the data. The model is calibrated to extensive data for amateur and professional golfers. The golf simulation is used to assess the impact of distance and direction on scores, to determine what factors separate pros from amateurs, and to determine the impact of course length on scores. In the second part of the thesis, we use a scenario-based approach to solve a portfolio optimization problem with options. The solution provides the optimal payoff profile given an investor's view of the future, his utility function or risk appetite, and the market prices of options. The scenario-based approach has several advantages over the traditional covariance matrix method, including additional flexibility in the choice of constraints and objective function.Engineering, Operations researchsk2822Industrial Engineering and Operations Research, BusinessDissertationsSupply Chain Management: Supplier Financing Schemes and Inventory Strategies
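A heavy-tailed shot pattern of the kind fitted in the golf simulation above needs no special library: a Student-t draw can be assembled from a normal and a chi-square variate. The shot model below is an illustrative stand-in, not the dissertation's calibrated pattern:

```python
import math
import random

def t_draw(df, rng):
    """Student-t variate: a standard normal divided by sqrt(chi-square / df);
    its heavier tails capture occasional wild shots."""
    z = rng.gauss(0.0, 1.0)
    v = rng.gammavariate(df / 2.0, 2.0)  # chi-square with df degrees of freedom
    return z / math.sqrt(v / df)

def drive(mean_dist, sd_dist, sd_dir_deg, df, rng):
    """One tee shot with t-distributed distance and direction errors."""
    dist = mean_dist + sd_dist * t_draw(df, rng)
    angle = math.radians(sd_dir_deg * t_draw(df, rng))
    return dist * math.cos(angle), dist * math.sin(angle)  # (down-range, lateral)

rng = random.Random(7)
shots = [drive(250.0, 15.0, 3.0, 5, rng) for _ in range(10000)]
```

With df = 5 the tails are markedly heavier than a normal's while the variance stays finite, which is the qualitative feature that motivates t-based shot models.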
http://academiccommons.columbia.edu/catalog/ac:142635
Wang, Minhttp://hdl.handle.net/10022/AC:P:11857Wed, 30 Nov 2011 00:00:00 +0000This dissertation addresses a few fundamental questions on the interface between supplier financing schemes and inventory management. Traditionally, retailers finance their inventories through an independent financing institution or by drawing from their own cash reserves, without any supplier involvement (Independent Financing). However, suppliers may reduce their buyers' costs and stimulate sales and associated revenues and profits, by either (i) adopting the financing function themselves (Trade Credit), or (ii) subsidizing the inventory costs (Inventory Subsidies). In the first part (Chapter 2) we analyze and compare the equilibrium performance of supply chains under these three basic financing schemes. The objective is to compare the equilibrium profits of the individual chain members, the aggregate supply chain profits, the equilibrium wholesale price, the expected sales volumes and the average inventory levels under the three financing options, and thus provide important insights for the selection and implementation of supply chain financing mechanisms. Several of the financing schemes introduce a new type of inventory control problem for the retailers in response to terms specified by their suppliers. In Chapter 3 we therefore consider the inventory management problem of a firm which incurs inventory carrying costs with a general shelf age dependent structure and, even more generally, that of a firm with shelf age and delay dependent inventory and backlogging costs. Beyond identifying the structure of optimal replenishment strategies and corresponding algorithms to compute them, it is often important to understand how changes in various primitives of the inventory model impact the optimal policy parameters and performance measures. In spite of a voluminous literature over more than fifty years, very little is known about this area. 
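One of the policy classes analyzed in this line of work, the (r, nQ) rule, is mechanically simple; here is a sketch for integer inventory positions (notation illustrative):

```python
def r_nq_order(position, r, q):
    """(r, nQ) rule: once the inventory position falls to r or below,
    order the smallest multiple of q that lifts it back into (r, r + q]."""
    if position > r:
        return 0
    n = (r - position) // q + 1  # integer positions assumed
    return n * q
```

With q = 1 this reduces to a base-stock policy with order-up-to level r + 1; the monotonicity questions studied here ask how r and q move as cost primitives change.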
In Chapter 4, we therefore study monotonicity properties of stochastic inventory systems governed by an (r; q) or (r; nq) policy and apply the results in our general theorems both to standard inventory models and to those with general shelf age and delay dependent inventory costs.Business, Operations researchmw2426BusinessDissertationsAlgorithms for Sparse and Low-Rank Optimization: Convergence, Complexity and Applications
http://academiccommons.columbia.edu/catalog/ac:137539
Ma, ShiqianMon, 22 Aug 2011 00:00:00 +0000Solving optimization problems with sparse or low-rank optimal solutions has been an important topic since the recent emergence of compressed sensing and its matrix extensions such as the matrix rank minimization and robust principal component analysis problems. Compressed sensing enables one to recover a signal or image with fewer observations than the "length" of the signal or image, and thus provides potential breakthroughs in applications where data acquisition is costly. However, the potential impact of compressed sensing cannot be realized without efficient optimization algorithms that can handle extremely large-scale and dense data from real applications. Although the convex relaxations of these problems can be reformulated as either linear programming, second-order cone programming or semidefinite programming problems, the standard methods for solving these relaxations are not applicable because the problems are usually of huge size and contain dense data. In this dissertation, we give efficient algorithms for solving these "sparse" optimization problems and analyze the convergence and iteration complexity properties of these algorithms. Chapter 2 presents algorithms for solving the linearly constrained matrix rank minimization problem. The tightest convex relaxation of this problem is the linearly constrained nuclear norm minimization. Although the latter can be cast and solved as a semidefinite programming problem, such an approach is computationally expensive when the matrices are large. In Chapter 2, we propose fixed-point and Bregman iterative algorithms for solving the nuclear norm minimization problem and prove convergence of the first of these algorithms. 
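The vector analogue of such fixed-point iterations is easy to exhibit: proximal-gradient steps for l1-regularized least squares, where the matrix version applies the identical shrinkage to singular values instead of entries. A toy sketch, not the FPCA implementation:

```python
def shrink(v, tau):
    """Soft-thresholding, the prox operator of tau * |v|; applied to the
    singular values of a matrix it becomes the prox of the nuclear norm."""
    return max(abs(v) - tau, 0.0) * (1.0 if v >= 0 else -1.0)

def fixed_point_l1(A, b, mu, step=0.1, iters=1000):
    """Iterate x <- shrink(x - step * A^T (A x - b), step * mu), i.e. a
    gradient step on 0.5 * ||Ax - b||^2 followed by the prox of mu * ||x||_1."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        res = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        grad = [sum(A[i][j] * res[i] for i in range(m)) for j in range(n)]
        x = [shrink(x[j] - step * grad[j], step * mu) for j in range(n)]
    return x
```

On A equal to the identity the iteration converges to coordinate-wise shrinkage of b, so entries smaller than mu are driven exactly to zero; the matrix counterpart zeroes small singular values, which is what produces low-rank solutions.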
By using a homotopy approach together with an approximate singular value decomposition procedure, we get a very fast, robust and powerful algorithm, which we call FPCA (Fixed Point Continuation with Approximate SVD), that can solve very large matrix rank minimization problems. Our numerical results on randomly generated and real matrix completion problems demonstrate that this algorithm is much faster and provides much better recoverability than semidefinite programming solvers such as SDPT3. For example, our algorithm can recover 1000 × 1000 matrices of rank 50 with a relative error of 10^-5 in about 3 minutes by sampling only 20 percent of the elements. We know of no other method that achieves such good recoverability. Numerical experiments on online recommendation, DNA microarray data sets and image inpainting problems demonstrate the effectiveness of our algorithms. In Chapter 3, we study the convergence/recoverability properties of the fixed point continuation algorithm and its variants for matrix rank minimization. Heuristics for determining the rank of the matrix when its true rank is not known are also proposed. Some of these algorithms are closely related to greedy algorithms in compressed sensing. Numerical results for these algorithms for solving linearly constrained matrix rank minimization problems are reported. Chapters 4 and 5 consider alternating direction type methods for solving composite convex optimization problems. We present in Chapter 4 alternating linearization algorithms that are based on an alternating direction augmented Lagrangian approach for minimizing the sum of two convex functions. Our basic methods require at most O(1/ε) iterations to obtain an ε-optimal solution, while our accelerated (i.e., fast) versions require at most O(1/√ε) iterations, with little change in the computational effort required at each iteration. 
For the more general problem of minimizing the sum of K convex functions, we propose multiple-splitting algorithms, in both basic and accelerated variants, with O(1/ε) and O(1/√ε) iteration complexity bounds, respectively, for obtaining an ε-optimal solution. To the best of our knowledge, the complexity results presented in these two chapters are the first of this type that have been given for splitting and alternating direction type methods. Numerical results on various applications in sparse and low-rank optimization, including compressed sensing, matrix completion, image deblurring, and robust principal component analysis, are reported to demonstrate the efficiency of our methods. Operations research. sm2756. Industrial Engineering and Operations Research. Dissertations. Many-Server Queues with Time-Varying Arrivals, Customer Abandonment, and non-Exponential Distributions
http://academiccommons.columbia.edu/catalog/ac:136569
Liu, Yunan. http://hdl.handle.net/10022/AC:P:10801. Tue, 02 Aug 2011 00:00:00 +0000. This thesis develops deterministic heavy-traffic fluid approximations for many-server stochastic queueing models. The queueing models, with many homogeneous servers working independently in parallel, are intended to model large-scale service systems such as call centers and health care systems. Such models have also been employed to study communication, computing and manufacturing systems. The heavy-traffic approximations yield relatively simple formulas for quantities describing system performance, such as the expected number of customers waiting in the queue. The new performance approximations are valuable because, in the generality considered, these complex systems are not amenable to exact mathematical analysis. Since the approximate performance measures can be computed quite rapidly, they usefully complement more cumbersome computer simulation. Thus these heavy-traffic approximations can be used to improve capacity planning and operational control. More specifically, the heavy-traffic approximations here are for large-scale service systems, having many servers and a high arrival rate. The main focus is on systems that have time-varying arrival rates and staffing functions. The system is considered under the assumption that there are alternating periods of overloading and underloading, which commonly occurs when service providers are unable to adjust the staffing frequently enough to economically meet demand at all times. The models also allow the realistic features of customer abandonment and non-exponential probability distributions for the service times and the times customers are willing to wait before abandoning. These features make the overall stochastic model non-Markovian and thus very difficult to analyze directly. This thesis provides effective algorithms to compute approximate performance descriptions for these complex systems.
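As a minimal illustration of such a fluid approximation, consider a Markovian special case with arrival rate λ(t), staffing s(t), service rate μ and abandonment rate θ (the thesis treats far more general non-exponential models); the fluid content x(t) then solves x'(t) = λ(t) - μ·min(x, s(t)) - θ·max(x - s(t), 0), which can be integrated by forward Euler:

```python
def fluid_queue(lam, s, mu, theta, x0, T, dt=0.001):
    """Forward-Euler solution of the fluid ODE
    x'(t) = lam(t) - mu*min(x, s(t)) - theta*max(x - s(t), 0)."""
    x, t = x0, 0.0
    path = [x0]
    for _ in range(int(T / dt)):
        busy = min(x, s(t))           # fluid in service
        queue = max(x - s(t), 0.0)    # fluid waiting, subject to abandonment
        x = max(x + dt * (lam(t) - mu * busy - theta * queue), 0.0)
        t += dt
        path.append(x)
    return path
```

For a constant overloaded instance (λ = 2, s = 1, μ = θ = 1) the fluid content converges to the steady-state level x = 2 at which arrivals balance service plus abandonment.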
These algorithms are based on ordinary differential equations and fixed point equations associated with contraction operators. Simulation experiments are conducted to verify that the approximations are effective. This thesis consists of four pieces of work, each presented in one chapter. The first chapter (Chapter 2) develops the basic fluid approximation for a non-Markovian many-server queue with time-varying arrival rate and staffing. The second chapter (Chapter 3) extends the fluid approximation to systems with complex network structure, with Markovian routing of customers to other queues after they complete service at each queue. The extension to open networks of queues has important applications. For one example, in hospitals, patients usually move among different units such as emergency rooms, operating rooms, and intensive care units. For another example, in manufacturing systems, individual products visit different work stations one or more times. The open network fluid model has multiple queues, each of which has a time-varying arrival rate and staffing function. The third chapter (Chapter 4) studies the large-time asymptotic dynamics of a single fluid queue. When the model parameters are constant, convergence to the steady state as time evolves is established. When the arrival rates are periodic functions, such as in service systems with daily or seasonal cycles, the existence of a periodic steady state and the convergence to that periodic steady state as time evolves are established. Conditions are provided under which this convergence is exponentially fast. The fourth chapter (Chapter 5) uses a fluid approximation to gain insight into nearly periodic behavior seen in overloaded stationary many-server queues with customer abandonment and nearly deterministic service times.
Deterministic service times are of applied interest because computer-generated service times, such as automated messages, may well be deterministic, and computer-generated service is becoming more prevalent. With deterministic service times, if all the servers remain busy for a long interval of time, then the times customers enter service assume a periodic behavior throughout that interval. In overloaded large-scale systems, these intervals tend to persist for a long time, producing nearly periodic behavior. To gain insight, a heavy-traffic limit theorem is established showing that the fluid model arises as the many-server heavy-traffic limit of a sequence of appropriately scaled queueing models, all having these deterministic service times. Simulation experiments confirm that the transient behavior of the limiting fluid model provides a useful description of the transient performance of the queueing system. However, unlike the asymptotic loss of memory results in the previous chapter for service times with densities, the stationary fluid model with deterministic service times does not approach steady state as time evolves independently of the initial conditions. Since the queueing model with deterministic service times approaches a proper steady state as time evolves, this model with deterministic service times provides an example where the limit interchange (limiting steady state as time evolves and heavy traffic as scale increases) is not valid. Operations research. yl2342. Industrial Engineering and Operations Research. Dissertations. First Order Methods for Large-Scale Sparse Optimization
http://academiccommons.columbia.edu/catalog/ac:135750
Aybat, Necdet Serhat. http://hdl.handle.net/10022/AC:P:10735. Fri, 15 Jul 2011 00:00:00 +0000. In today's digital world, improvements in acquisition and storage technology are allowing us to acquire more accurate and finer application-specific data, whether it be tick-by-tick price data from the stock market or frame-by-frame high resolution images and videos from surveillance systems, remote sensing satellites and biomedical imaging systems. Many important large-scale applications can be modeled as optimization problems with millions of decision variables. Very often, the desired solution is sparse in some form, either because the optimal solution is indeed sparse, or because a sparse solution has some desirable properties. Sparse and low-rank solutions to large scale optimization problems are typically obtained by regularizing the objective function with L1 and nuclear norms, respectively. Practical instances of these problems are very high dimensional (~ million variables) and typically have dense and ill-conditioned data matrices. Therefore, interior point based methods are ill-suited for solving these problems. The large scale of these problems forces one to use the so-called first-order methods that only use gradient information at each iterate. These methods are efficient for problems with a "simple" feasible set such that Euclidean projections onto the set can be computed very efficiently, e.g. the positive orthant, the n-dimensional hypercube, the simplex, and the Euclidean ball. When the feasible set is "simple", the subproblems used to compute the iterates can be solved efficiently. Unfortunately, most applications do not have "simple" feasible sets. A commonly used technique to handle general constraints is to relax them so that the resulting problem has only "simple" constraints, and then to solve a single penalty or Lagrangian problem. However, these methods generally do not guarantee convergence to feasibility.
The focus of this thesis is on developing new fast first-order iterative algorithms for computing sparse and low-rank solutions to large-scale optimization problems with very mild restrictions on the feasible set: we allow linear equalities, norm-ball and conic inequalities, and also certain non-smooth convex inequalities to define the constraint set. The proposed algorithms guarantee that the sequence of iterates converges to an optimal feasible solution of the original problem, and each subproblem is an optimization problem with a "simple" feasible set. In addition, for any eps > 0, by relaxing the feasibility requirement of each iteration, the proposed algorithms can compute an eps-optimal and eps-feasible solution within O(log(1/eps)) iterations, which requires O(1/eps) basic operations in the worst case. Algorithm parameters do not depend on eps > 0. Thus, these new methods compute iterates arbitrarily close to feasibility and optimality as they continue to run. Moreover, the computational complexity of each basic operation for these new algorithms is the same as that of existing first-order algorithms running on "simple" feasible sets. Our numerical studies showed that only O(log(1/eps)) basic operations, as opposed to the O(1/eps) worst-case theoretical bound, are needed for obtaining eps-feasible and eps-optimal solutions. We have implemented these new first-order methods for the following problem classes: Basis Pursuit (BP) in compressed sensing, Matrix Rank Minimization, Principal Component Pursuit (PCP) and Stable Principal Component Pursuit (SPCP) in principal component analysis. These problems have applications in signal and image processing, video surveillance, face recognition, latent semantic indexing, and ranking and collaborative filtering.
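A minimal example of a first-order method in this family, applied to the unconstrained l1-regularized least-squares relaxation of basis pursuit (a textbook ISTA-style sketch, not the thesis's constrained algorithms):

```python
import numpy as np

def soft_threshold(v, t):
    """Elementwise shrinkage: the prox operator of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, iters=500):
    """ISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1:
    a gradient step on the smooth term followed by soft-thresholding."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - grad / L, lam / L)
    return x
```

Each iteration costs only matrix-vector products and an elementwise shrinkage, which is what makes first-order methods viable at the problem sizes discussed above.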
To the best of our knowledge, this is the first algorithm for the SPCP problem that has O(1/eps) iteration complexity with a per-iteration cost equal to that of a singular value decomposition. Operations research, Applied mathematics. nsa2106. Industrial Engineering and Operations Research. Dissertations. Essays in Consumer Choice Driven Assortment Planning
http://academiccommons.columbia.edu/catalog/ac:131420
Saure, Denis R. http://hdl.handle.net/10022/AC:P:10232. Thu, 28 Apr 2011 00:00:00 +0000. Product assortment selection is among the most critical decisions facing retailers: product variety and relevance are fundamental drivers of consumers' purchase decisions and ultimately of a retailer's profitability. In the last couple of decades an increasing number of firms have gained the ability to frequently revisit their assortment decisions during a selling season. In addition, the development and consolidation of online retailing have introduced new levels of operational flexibility and cheap access to detailed transactional information. These new operational features present the retailer with both benefits and challenges. The ability to revisit the assortment decision frequently allows the retailer to introduce and test new products during the selling season, adjust on the fly to unexpected changes in consumer preferences, and use customer profile information to customize the online shopping experience in real time. Our main objective in this thesis is to formulate and solve assortment optimization models addressing the challenges present in modern retail environments. We begin by analyzing the role of the assortment decision in balancing information collection and revenue maximization when consumer preferences are initially unknown. By considering utility-maximizing consumers, we establish fundamental limits on the performance of any assortment policy whose aim is to maximize long-run revenues. In addition, we propose adaptive assortment policies that attain such performance limits. Our results highlight salient features of this dynamic assortment problem that distinguish it from similar problems of sequential decision making under model uncertainty. Next, we extend the analysis to the case when additional consumer profile information is available; our primary motivation here is the emerging area of online advertisement.
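A common parametric model for utility-maximizing consumers in this literature is the multinomial logit; a minimal sketch of assortment-dependent choice probabilities and expected revenue (the specific model and names here are illustrative assumptions, not necessarily the thesis's):

```python
import math

def mnl_choice_probs(utilities, assortment):
    """Multinomial-logit purchase probabilities for an offered assortment.
    utilities: dict product -> mean utility; the outside option has utility 0."""
    weights = {p: math.exp(utilities[p]) for p in assortment}
    denom = 1.0 + sum(weights.values())      # 1.0 = exp(0), the no-purchase option
    probs = {p: w / denom for p, w in weights.items()}
    probs["no_purchase"] = 1.0 / denom
    return probs

def expected_revenue(utilities, prices, assortment):
    """Expected revenue of offering the given assortment under the MNL model."""
    probs = mnl_choice_probs(utilities, assortment)
    return sum(prices[p] * probs[p] for p in assortment)
```

An adaptive policy of the kind studied here would estimate the utilities from observed purchases while repeatedly choosing the assortment that maximizes this expected revenue.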
As in the previous setup, we identify fundamental performance limits and propose adaptive policies attaining these limits. Finally, we focus on the effects of competition and of consumers' access to information on assortment strategies. In particular, we study competition among retailers when they have access to common products, i.e., products that are also available to their competitors, and where consumers have full information about the retailers' offerings. Our results shed light on equilibrium properties in such settings and on the effect that common products have on equilibrium behavior. Operations research. drs2114. Business. Dissertations. Continuity of a queueing integral representation in the M1 topology
http://academiccommons.columbia.edu/catalog/ac:125349
Pang, Guodong; Whitt, Ward. http://hdl.handle.net/10022/AC:P:8584. Fri, 02 Apr 2010 00:00:00 +0000. We establish continuity of the integral representation y(t) = x(t) + ∫_0^t h(y(s)) ds, t ≥ 0, mapping a function x into a function y when the underlying function space D is endowed with the Skorohod M1 topology. We apply this integral representation with the continuous mapping theorem to establish heavy-traffic stochastic-process limits for many-server queueing models when the limit process has jumps unmatched in the converging processes, as can occur with bursty arrival processes or service interruptions. The proof of M1-continuity is based on a new characterization of M1 convergence, in which the time portions of the parametric representations are absolutely continuous with respect to Lebesgue measure and the derivatives are uniformly bounded and converge in L1. Operations research. ww2040. Industrial Engineering and Operations Research. Articles
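The integral representation itself can be approximated numerically by a forward Euler recursion (an illustrative sketch; the paper's contribution is the M1-continuity proof, not this discretization):

```python
def solve_integral_rep(x, h, T, dt=0.001):
    """Euler discretization of y(t) = x(t) + int_0^t h(y(s)) ds
    on the grid t_k = k*dt, 0 <= t_k <= T."""
    n = int(T / dt)
    integral = 0.0
    y = [x(0.0)]                      # y(0) = x(0)
    for k in range(1, n + 1):
        integral += h(y[-1]) * dt     # accumulate int_0^t h(y(s)) ds
        y.append(x(k * dt) + integral)
    return y
```

For example, with x(t) = 1 and h(y) = -y the representation reduces to the ODE y' = -y with y(0) = 1, so the recursion tracks e^{-t}; continuity in the M1 topology concerns how such solutions behave when x has jumps.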