Academic Commons Search Results
https://academiccommons.columbia.edu/catalog?action=index&controller=catalog&f%5Bdepartment_facet%5D%5B%5D=Industrial+Engineering+and+Operations+Research&format=rss&fq%5B%5D=has_model_ssim%3A%22info%3Afedora%2Fldpd%3AContentAggregator%22&q=&rows=500&sort=record_creation_date+desc
Academic Commons Search Results (en-us)

Optimal Stopping and Switching Problems with Financial Applications
https://academiccommons.columbia.edu/catalog/ac:204576
Wang, Zheng
http://dx.doi.org/10.7916/D8VQ330D
Wed, 02 Nov 2016 18:03:14 +0000

This dissertation studies a collection of problems on trading assets and derivatives over finite and infinite horizons. In the first part, we analyze an optimal switching problem with transaction costs that involves an infinite sequence of trades. The investor's value functions and optimal timing strategies are derived when prices are driven by an exponential Ornstein-Uhlenbeck (XOU) or Cox-Ingersoll-Ross (CIR) process. We compare the findings to the results from the associated optimal double stopping problems and identify the conditions under which the double stopping and switching problems admit the same optimal entry and/or exit timing strategies. Our results show that when prices are driven by a CIR process, optimal strategies for the switching problems are of the classic buy-low-sell-high type. On the other hand, under XOU price dynamics, the investor should refrain from entering the market if the current price is very close to zero. As a result, the continuation (waiting) region for entry is disconnected. In both models, we provide numerical examples to illustrate the dependence of timing strategies on model parameters. In the second part, we study the problem of trading futures with transaction costs when the underlying spot price is mean-reverting. Specifically, we model the spot dynamics by the OU, CIR or XOU model. The futures term structure is derived and its connection to futures price dynamics is examined. For each futures contract, we describe the evolution of the roll yield, and compute explicitly the expected roll yield. For the futures trading problem, we incorporate the investor's timing options to enter and exit the market, as well as a chooser option to long or short a futures upon entry. This leads us to formulate and solve the corresponding optimal double stopping problems to determine the optimal trading strategies.
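The exponential OU (XOU) dynamics above are straightforward to simulate; the sketch below is a minimal Euler discretization of X_t = exp(Z_t) with Z an OU process. All parameter values are hypothetical illustrations, not taken from the dissertation.

```python
import math
import random

def simulate_xou(x0, mu, theta, sigma, dt, n_steps, rng):
    """Euler scheme for an exponential OU (XOU) price: X_t = exp(Z_t),
    where Z follows dZ_t = mu*(theta - Z_t) dt + sigma dW_t."""
    z = math.log(x0)
    path = [x0]
    for _ in range(n_steps):
        z += mu * (theta - z) * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        path.append(math.exp(z))
    return path

# Hypothetical parameters: one year of daily steps, seeded for reproducibility.
rng = random.Random(42)
path = simulate_xou(x0=1.0, mu=2.0, theta=0.0, sigma=0.3, dt=1.0 / 252, n_steps=252, rng=rng)
```

Note that the simulated price stays strictly positive, which is exactly the feature that makes the XOU model's near-zero entry behavior discussed above interesting.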
Numerical results are presented to illustrate the optimal entry and exit boundaries under different models. We find that the option to choose between a long or short position induces the investor to delay market entry, as compared to the case where the investor pre-commits to go either long or short. Finally, we analyze the optimal risk-averse timing to sell a risky asset. The investor's risk preference is described by the exponential, power or log utility. Two stochastic models are considered for the asset price -- the geometric Brownian motion (GBM) and XOU models to account for, respectively, the trending and mean-reverting price dynamics. In all cases, we derive the optimal thresholds and certainty equivalents to sell the asset, and compare them across models and utilities, with emphasis on their dependence on asset price, risk aversion, and quantity. We find that the timing option may render the investor's value function and certainty equivalent non-concave in price even though the utility function is concave in wealth. Numerical results are provided to illustrate the investor's optimal strategies and the premia associated with optimally timing to sell with different utilities under different price dynamics.
Subjects: Operations research, Finance, Finance--Mathematical models, Assets (Accounting), Mathematical optimization, Ornstein-Uhlenbeck process, Brownian motion processes
Author ID: zw2192 · Department: Industrial Engineering and Operations Research · Type: Dissertations

Optimization in Strategic Environments
https://academiccommons.columbia.edu/catalog/ac:202863
Feigenbaum, Itai Izhak
http://dx.doi.org/10.7916/D8377916
Fri, 14 Oct 2016 18:07:05 +0000

This work considers the problem faced by a decision maker (planner) trying to optimize over incomplete data. The missing data is privately held by agents whose objectives are different from the planner's, and who can falsely report it in order to advance their objectives. The goal is to design optimization mechanisms (algorithms) that achieve "good" results when agents' reports follow a game-theoretic equilibrium. In the first part of this work, the goal is to design mechanisms that provide a small worst-case approximation ratio (guarantee a large fraction of the optimal value in all instances) at equilibrium. The emphasis is on strategyproof mechanisms -- where truthfulness is a dominant strategy equilibrium -- and on the approximation ratio at that equilibrium. Two problems are considered: variants of the knapsack and facility location problems. In the knapsack problem, items are privately owned by agents, who can hide items or report fake ones; each agent's utility equals the total value of their own items included in the knapsack, while the planner wishes to choose the items that maximize the sum of utilities. In the facility location problem, agents have private linear single-sinked/peaked preferences regarding the location of a facility on an interval, while the planner wishes to locate the facility in a way that maximizes one of several objectives. A variety of mechanisms and lower bounds are provided for these problems. The second part of this work explores the problem of reassigning students to schools. Students have privately known preferences over the schools. After an initial assignment is made, the students' preferences change, get reported again, and a reassignment must be obtained.
The goal is to design a reassignment mechanism that incentivizes truthfulness, provides high student welfare, transfers relatively few students from their initial assignment, and respects student priorities at schools. The class of mechanisms considered is permuted lottery deferred acceptance (PLDA) mechanisms, a natural class of mechanisms based on permuting the lottery numbers students initially draw to decide the initial assignment. Both theoretical and experimental evidence is provided to support the use of a PLDA mechanism called reversed lottery deferred acceptance (RLDA). The evidence suggests that under some conditions, all PLDA mechanisms generate roughly equal welfare, and that RLDA minimizes transfers among PLDA mechanisms.
Subjects: Operations research, Planners, Mathematical optimization, Algorithms
Author ID: iif2103 · Department: Industrial Engineering and Operations Research · Type: Dissertations

EO-Performance relationships in Reverse Internationalization by Chinese Global Startup OEMs: Social Networks and Strategic Flexibility
https://academiccommons.columbia.edu/catalog/ac:203306
Chin, Tachia; Tsai, Sang-Bing; Fang, Kai; Zhu, Wenzhong; Liu, Ren-huai; Yang, Dongjin; Tsuei, Richard Ting Chang
http://dx.doi.org/10.7916/D8CZ37DH
Fri, 07 Oct 2016 16:41:03 +0000

Due to the context-sensitive nature of entrepreneurial orientation (EO), it is imperative to explore in depth the EO-performance mechanism in China at this critical, specific stage of its economic reform. In the context of "reverse internationalization" by Chinese global startup original equipment manufacturers (OEMs), this paper aims to clarify the unique links and complicated interrelationships between the individual EO dimensions and firm performance. Using structural equation modeling, we found that during reverse internationalization, proactiveness is positively related to performance; risk taking is not statistically associated with performance; and innovativeness is negatively related to performance. The proactiveness-performance relationship is mediated by strategic flexibility and moderated by social networking relationships. The dynamic and complex institutional setting, coupled with the issues of overcapacity and rising labor costs in China, may explain why our distinctive results occur. This research advances the understanding of how contingent factors (social network relationships and strategic flexibility) help entrepreneurial firms break down institutional barriers and reap the most from EO. It brings new insights into how Chinese global startup OEMs draw on EO to undertake reverse internationalization, responding to the calls for unraveling the heterogeneous characteristics of EO sub-dimensions and for more contextually embedded treatment of EO-performance associations.
Subjects: Economics, Commerce-Business, Internationalized territories, Structural equation modeling, International economic relations, Globalization--Economic aspects
Author ID: kf2449 · Department: Industrial Engineering and Operations Research · Type: Articles

On the Trade-offs between Modeling Power and Algorithmic Complexity
https://academiccommons.columbia.edu/catalog/ac:202359
Ye, Chun
http://dx.doi.org/10.7916/D87W6CDC
Fri, 16 Sep 2016 18:03:56 +0000

Mathematical modeling is a central component of operations research. Most of the academic research in our field focuses on developing algorithmic tools for solving various mathematical problems arising from our models. However, our procedure for selecting the best model to use in any particular application is ad hoc. This dissertation seeks to rigorously quantify the trade-offs between various design criteria in model construction through a series of case studies. The hope is that a better understanding of the pros and cons of different models (for the same application) can guide and improve the model selection process.
In this dissertation, we focus on two broad types of trade-offs. The first type arises naturally in mechanism or market design, a discipline that focuses on developing optimization models for complex multi-agent systems. Such systems may require satisfying multiple objectives that are potentially in conflict with one another. Hence, finding a solution that simultaneously satisfies several design requirements is challenging. The second type addresses the dynamics between model complexity and computational tractability in the context of approximation algorithms for some discrete optimization problems. The need to study this type of trade-off is motivated by certain industry problems where the goal is to obtain the best solution within a reasonable time frame. Hence, being able to quantify and compare the degree of sub-optimality of the solution obtained under different models is helpful. Chapters 2-5 of the dissertation focus on trade-offs of the first type and Chapters 6-7 on the second type.
Subjects: Operations research, Mathematical models, Multiagent systems, Mathematical optimization, Approximation algorithms
Author ID: cy2214 · Department: Industrial Engineering and Operations Research · Type: Dissertations

Approximation Algorithms for Demand-Response Contract Execution and Coflow Scheduling
https://academiccommons.columbia.edu/catalog/ac:202248
Qiu, Zhen
http://dx.doi.org/10.7916/D8FQ9WVP
Thu, 15 Sep 2016 18:03:31 +0000

Solving operations research problems with approximation algorithms has been an important topic, since approximation algorithms can provide near-optimal solutions to NP-hard problems while achieving computational efficiency. In this thesis, we consider two different problems, from the fields of optimal control and scheduling theory respectively, and develop efficient approximation algorithms for these problems with performance guarantees.
Chapter 2 presents approximation algorithms for solving the optimal execution problem for demand-response contracts in electricity markets. Demand-side participation is essential for achieving real-time energy balance in today's electricity grid. Demand-response contracts, where an electric utility company buys options from consumers to reduce their load in the future, are an important tool to increase demand-side participation. In this chapter, we consider the operational problem of optimally exercising the available contracts over the planning horizon such that the total cost to satisfy the demand is minimized. In particular, we consider the objective of minimizing the sum of the expected ℓ_β-norm of the load deviations from given thresholds and the contract execution costs over the planning horizon. For β=∞, this reduces to minimizing the expected peak load. The peak load provides a good proxy for the total cost of the utility, as spikes in electricity prices are observed only in peak load periods. We present a data-driven near-optimal algorithm for the contract execution problem. Our algorithm is a sample average approximation (SAA) based dynamic program over a multi-period planning horizon. We provide a sample complexity bound on the number of demand samples required to compute a (1+ε)-approximate policy for any ε>0. Our SAA algorithm is quite general, and we show that it can be adapted to general demand models, including Markovian demands, and to general objective functions. For the special case where the demand in each period is i.i.d., we show that a static solution is optimal for the dynamic problem. We also conduct a numerical study to evaluate the performance of our SAA-based DP algorithm. Our numerical experiments show that we can achieve a (1+ε)-approximation with significantly fewer samples than the theoretical bounds imply.
Moreover, the structure of the approximate policy shows that it can be well approximated by a simple affine function of the state.
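To give a flavor of the SAA idea, here is a drastically simplified, single-period sketch (not the dissertation's multi-period dynamic program): the expected cost of exceeding a load threshold is estimated by averaging over demand samples, and the number of contracts to exercise is chosen by brute force. All names and parameters are illustrative assumptions.

```python
import random

def saa_best_exercise(demand_samples, threshold, reductions, exec_costs):
    """Toy single-period SAA: choose how many contracts to exercise
    (cheapest first) to minimize the sample-average excess load over the
    threshold plus the total execution cost. Contract i reduces load by
    reductions[i] at cost exec_costs[i]."""
    order = sorted(range(len(reductions)), key=lambda i: exec_costs[i])
    best_k, best_cost = 0, float("inf")
    for k in range(len(reductions) + 1):
        chosen = order[:k]
        red = sum(reductions[i] for i in chosen)
        exec_cost = sum(exec_costs[i] for i in chosen)
        avg_excess = sum(max(d - red - threshold, 0.0)
                         for d in demand_samples) / len(demand_samples)
        if avg_excess + exec_cost < best_cost:
            best_k, best_cost = k, avg_excess + exec_cost
    return best_k, best_cost

# Illustrative instance: noisy demand around 100, three identical contracts.
rng = random.Random(0)
samples = [100 + rng.gauss(0, 10) for _ in range(1000)]
k, cost = saa_best_exercise(samples, threshold=95.0,
                            reductions=[5.0, 5.0, 5.0], exec_costs=[1.0, 2.0, 4.0])
```

As in the SAA framework, the quality of the chosen policy improves as the number of demand samples grows.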
In Chapter 3, we study the NP-hard coflow scheduling problem and develop a polynomial-time approximation algorithm for the problem with a constant approximation ratio. Communications in datacenter jobs (such as the shuffle operations in MapReduce applications) often involve many parallel flows, which may be processed simultaneously. This highly parallel structure presents new scheduling challenges in optimizing job-level performance objectives in data centers. Chowdhury and Stoica [13] introduced the coflow abstraction to capture these communication patterns, and recently Chowdhury et al. [15] developed effective heuristics to schedule coflows. In this chapter, we consider the problem of efficiently scheduling coflows so as to minimize the total weighted completion time, which has been shown to be strongly NP-hard [15]. Our main result is the first polynomial-time deterministic approximation algorithm for this problem, with an approximation ratio of 64/3, and a randomized version of the algorithm, with a ratio of (8 + 16√2)/3. Our results use techniques from both combinatorial scheduling and matching theory, and rely on a clever grouping of coflows.
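For intuition about weighted completion time objectives, consider the classical single-machine special case, where Smith's rule (schedule jobs in non-increasing weight-to-processing-time ratio) is exactly optimal. The coflow algorithms above are far more involved; this sketch only illustrates the base case the objective generalizes.

```python
def smiths_rule(jobs):
    """Minimize total weighted completion time on one machine by
    scheduling jobs in non-increasing w/p order (Smith's rule).
    Each job is a (processing_time, weight) pair."""
    order = sorted(jobs, key=lambda j: j[1] / j[0], reverse=True)
    t, total = 0.0, 0.0
    for p, w in order:
        t += p            # completion time of this job
        total += w * t    # weighted completion time contribution
    return order, total

jobs = [(3.0, 1.0), (1.0, 2.0), (2.0, 2.0)]  # (processing time, weight)
order, obj = smiths_rule(jobs)
```

On this instance the rule schedules the (1.0, 2.0) job first and achieves a total weighted completion time of 14.0.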
In Chapter 4, we carry out a comprehensive experimental analysis on a Facebook trace and extensive simulated instances to evaluate the practical performance of several algorithms for coflow scheduling, including our approximation algorithms developed in Chapter 3. Our experiments suggest that simple algorithms provide effective approximations of the optimal, and that the performance of the approximation algorithm of Chapter 3 is relatively robust, near-optimal, and always among the best compared with the other algorithms, in both the offline and online settings.
Subjects: Operations research, Approximation algorithms, Mathematical optimization, Scheduling
Author ID: zq2110 · Department: Industrial Engineering and Operations Research · Type: Dissertations

Resource Allocation in Wireless Networks: Theory and Applications
https://academiccommons.columbia.edu/catalog/ac:202119
Marasevic, Jelena Rajko
http://dx.doi.org/10.7916/D85T3KP0
Wed, 07 Sep 2016 18:04:55 +0000

Limited wireless resources, such as spectrum and maximum power, give rise to various resource allocation problems that are interesting from both theoretical and application viewpoints. While the problems in some wireless networking applications are amenable to general resource allocation methods, others require a more specialized approach suited to their unique structural characteristics. We study both types of problems in this thesis.
We start with the general problem of alpha-fair packing: maximizing sum_j w_j f_α(x_j), where w_j > 0 for all j, with f_α(x_j) = ln(x_j) if α = 1 and f_α(x_j) = x_j^(1-α)/(1-α) if α ≠ 1, α > 0, subject to positive linear constraints of the form Ax ≤ b, x ≥ 0, where A and b are non-negative. This problem has broad applications within and outside wireless networking. We present a distributed algorithm for general alpha that converges to an epsilon-approximate solution in time (number of distributed iterations) that has an inverse polynomial dependence on the approximation parameter epsilon and poly-logarithmic dependence on the problem size. This is the first distributed algorithm for weighted alpha-fair packing with poly-logarithmic convergence in the input size. We also obtain structural results that characterize alpha-fair allocations as the value of alpha is varied. These results deepen our understanding of fairness guarantees in alpha-fair packing allocations, and also provide insights into the behavior of alpha-fair allocations in the asymptotic cases when alpha tends to zero, one, and infinity.
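The alpha-fair objective can be transcribed directly; this sketch covers the objective only, not the distributed algorithm.

```python
import math

def alpha_fair_utility(x, alpha, w=None):
    """Weighted alpha-fair objective sum_j w_j * f_alpha(x_j), with
    f_alpha(x) = ln(x) for alpha = 1 and x^(1-alpha)/(1-alpha) otherwise
    (alpha > 0, all x_j > 0)."""
    if w is None:
        w = [1.0] * len(x)
    if alpha == 1.0:
        return sum(wj * math.log(xj) for wj, xj in zip(w, x))
    return sum(wj * xj ** (1.0 - alpha) / (1.0 - alpha) for wj, xj in zip(w, x))
```

For α = 2 and an equal split x = (0.5, 0.5) with unit weights, the utility is -4.0; as α grows, the objective penalizes small coordinates ever more heavily, which is why large-α allocations approach max-min fairness.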
With these general tools on hand, we consider an application in wireless networks where fairness is of paramount importance: rate allocation and routing in energy-harvesting networks. We discuss the importance of fairness in such networks and the cases where our results on alpha-fair packing apply. We then turn our focus to rate allocation in energy-harvesting networks that have highly variable energy sources and that are used for applications such as monitoring and tracking. In such networks, it is essential to guarantee fairness over both the network nodes and the time slots and to be as fair as possible -- in particular, to require max-min fairness. We first develop an algorithm that obtains a max-min fair rate assignment for any routing that is specified at the input. Then, we consider the problem of determining a "good" routing. We consider various routing types and either provide polynomial-time algorithms for finding such routings or prove that the problems are NP-hard. Our results reveal an interesting trade-off between the complexities of computation and implementation. The results can also be applied to other related fairness problems.
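For a single shared resource, max-min fairness reduces to the textbook progressive-filling (water-filling) computation sketched below; the thesis treats the much harder networked, routing-dependent version.

```python
def max_min_fair(capacity, demands):
    """Max-min fair allocation of one capacity among users with given
    demands: serve users in increasing order of demand, giving each the
    smaller of its demand and an equal share of the remaining capacity."""
    n = len(demands)
    alloc = [0.0] * n
    order = sorted(range(n), key=lambda i: demands[i])
    remaining = capacity
    for pos, i in enumerate(order):
        share = remaining / (n - pos)      # equal split of what is left
        alloc[i] = min(demands[i], share)  # capped by the user's demand
        remaining -= alloc[i]
    return alloc
```

For example, max_min_fair(10.0, [2.0, 8.0, 8.0]) returns [2.0, 4.0, 4.0]: the small demand is fully served, and the leftover capacity is split evenly between the two larger demands.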
The second part of the thesis is devoted to the study of resource allocation problems that require a specialized approach. The problems we focus on arise in wireless networks employing full-duplex communication -- the simultaneous transmission and reception on the same frequency channel. Our primary goal is to understand the benefits and complexities tied to using this novel wireless technology through the study of resource (power, time, and channel) allocation problems. Towards that goal, we introduce a new realistic model of a compact (e.g., smartphone) full-duplex receiver and demonstrate its accuracy via measurements. First, we focus on the resource allocation problems with the objective of maximizing the sum of uplink and downlink rates, possibly over multiple orthogonal channels. For the single-channel case, we quantify the rate improvement as a function of the remaining self-interference and signal-to-noise ratios and provide structural results that characterize the sum of uplink and downlink rates on a full-duplex channel. Building on these results, we consider the multi-channel case and develop a polynomial-time algorithm which is nearly optimal in practice under very mild restrictions. To reduce the running time, we develop an efficient nearly-optimal algorithm under the high SINR approximation.
Then, we study the achievable capacity regions of full-duplex links in the single- and multi-channel cases. We present analytical results that characterize the uplink and downlink capacity region and efficient algorithms for computing rate pairs at the region's boundary. We also provide near-optimal and heuristic algorithms that "convexify" the capacity region when it is not convex. The convexified region corresponds to a combination of a few full-duplex rates (i.e., to time sharing between different operation modes). The analytical results provide insights into the properties of the full-duplex capacity region and are essential for future development of fair resource allocation and scheduling algorithms in Wi-Fi and cellular networks incorporating full-duplex.
Subjects: Electrical engineering, Operations research, Computer science, Wireless sensor networks, Resource allocation, Energy harvesting
Author ID: jrm2207 · Departments: Electrical Engineering, Industrial Engineering and Operations Research · Type: Dissertations

Essays on Approximation Algorithms for Robust Linear Optimization Problems
https://academiccommons.columbia.edu/catalog/ac:202072
Lu, Brian Yin
http://dx.doi.org/10.7916/D8ZG6SGM
Thu, 01 Sep 2016 12:12:14 +0000

Solving optimization problems under uncertainty has been an important topic since the appearance of mathematical optimization in the mid-19th century. George Dantzig's 1955 paper, "Linear Programming under Uncertainty", is considered one of the ten most influential papers in Management Science [26]. The methodology introduced in Dantzig's paper is named stochastic programming, since it assumes an underlying probability distribution of the uncertain input parameters. However, stochastic programming suffers from the "curse of dimensionality", and knowing the exact distribution of the input parameters may not be realistic. On the other hand, robust optimization models the uncertainty using a deterministic uncertainty set. The goal is to optimize against the worst-case scenario from the uncertainty set. In recent years, many studies in robust optimization have been conducted, and we refer the reader to Ben-Tal and Nemirovski [4-6], El Ghaoui and Lebret [19], Bertsimas and Sim [15, 16], Goldfarb and Iyengar [23], and Bertsimas et al. [8] for a review of robust optimization. Computing an optimal adjustable (or dynamic) solution to a robust optimization problem is generally hard. This motivates us to study the hardness of approximation of the problem and to provide efficient approximation algorithms. In this dissertation, we consider adjustable robust linear optimization problems with packing and covering formulations and their approximation algorithms. In particular, we study the performance of static solutions and affine solutions as approximations for the adjustable robust problem.
Chapters 2 and 3 consider the two-stage adjustable robust linear packing problem with uncertain second-stage constraint coefficients. For general convex, compact and down-monotone uncertainty sets, the problem is often intractable, since it requires computing a solution for all possible realizations of the uncertain parameters [22]. In particular, for a fairly general class of uncertainty sets, we show that the two-stage adjustable robust problem is NP-hard to approximate within a factor that is better than Ω(log n), where n is the number of columns of the uncertain coefficient matrix. On the other hand, a static solution is a single (here and now) solution that is feasible for all possible realizations of the uncertain parameters and can be computed efficiently. We study the performance of static solutions as an approximation for the adjustable robust problem and relate their optimality to a transformation of the uncertainty set. With this transformation, we show that for a fairly general class of uncertainty sets, the static solution is optimal for the adjustable robust problem. This is surprising, since the static solution is widely perceived as highly conservative. Moreover, when the static solution is not optimal, we provide an instance-based tight approximation bound that is related to a measure of non-convexity of the transformation of the uncertainty set. We also show that for two-stage problems, our bound is at least as good as (and in many cases significantly better than) the bound given by the symmetry of the uncertainty set [11, 12]. Moreover, our results can be generalized to the case where the objective coefficients and right-hand side are also uncertain.
In Chapter 3, we focus on the two-stage problems with a family of column-wise and constraint-wise uncertainty sets, where any constraint describing the set involves entries of only a single column or a single row. This is a fairly general class of uncertainty sets for modeling constraint coefficient uncertainty. Moreover, it is the family of uncertainty sets that gives the previous hardness result. On the positive side, we show that a static solution is an O(log n · min(log Γ, log(m+n)))-approximation for the two-stage adjustable robust problem, where m and n denote the numbers of rows and columns of the constraint matrix and Γ is the maximum possible ratio of upper bounds of the uncertain constraint coefficients. Therefore, for constant Γ, the performance bound for static solutions surprisingly matches the hardness of approximation for the adjustable problem. Furthermore, in general the static solution provides nearly the best efficient approximation for the two-stage adjustable robust problem.
In Chapter 4, we extend our results from Chapter 2 to a multi-stage adjustable robust linear optimization problem. In particular, we consider the case where the choice of the uncertain constraint coefficient matrix for each stage is independent of the others. In real-world applications, decision problems are often multi-stage, and an iterative implementation of a two-stage solution may result in a suboptimal solution for the multi-stage problem. We consider the static solution for the adjustable robust problem and the transformation of the uncertainty sets introduced in Chapter 2. We show that the static solution is optimal for the adjustable robust problem when the transformation of the uncertainty set for each stage is convex.
Chapter 5 considers a two-stage adjustable robust linear covering problem with an uncertain right-hand-side parameter. As mentioned earlier, such problems are often intractable due to the astronomically many extreme points of the uncertainty set. We introduce a new approximation framework where we consider a "simple" set that is "close" to the original uncertainty set. Moreover, the adjustable robust problem can be solved efficiently over the extended set. We show that the approximation bound is related to a geometric factor that represents the Banach-Mazur distance between the two sets. Using this framework, we provide approximation bounds that are better than the bounds given by an affine policy in [7] for a large class of interesting uncertainty sets. For instance, we provide an approximate solution that gives an m^(1/4)-approximation for the two-stage adjustable robust problem with a hypersphere uncertainty set, while the affine policy has an approximation ratio of O(√m). Moreover, our bound for the general p-norm ball is m^((p-1)/p²), as opposed to the m^(1/p) given by an affine policy.
Subjects: Operations research, Approximation algorithms, Mathematical optimization, Uncertainty (Information theory), Robust optimization
Author ID: yl2662 · Department: Industrial Engineering and Operations Research · Type: Dissertations

Dynamic Algorithms for Shortest Paths and Matching
https://academiccommons.columbia.edu/catalog/ac:202021
Bernstein, Aaron
http://dx.doi.org/10.7916/D8QF8T2W
Fri, 19 Aug 2016 19:22:23 +0000

There is a long history of research in theoretical computer science devoted to designing efficient algorithms for graph problems. In many modern applications the graph in question is changing over time, and we would like to avoid rerunning our algorithm on the entire graph every time a small change occurs. The evolving nature of graphs motivates the dynamic graph model, in which the goal is to minimize the amount of work needed to reoptimize the solution when the graph changes. There is a large body of literature on dynamic algorithms for basic problems that arise in graphs. This thesis presents several improved dynamic algorithms for two fundamental graph problems: shortest paths and matching.
Subjects: Computer science, Mathematics, Computer algorithms, Graphic methods
Author ID: ab3417 · Departments: Computer Science, Industrial Engineering and Operations Research · Type: Dissertations

Soft Regulation with Crowd Recommendation: Coordinating Self-Interested Agents in Sociotechnical Systems under Imperfect Information
https://academiccommons.columbia.edu/catalog/ac:197950
Luo, Yu; Iyengar, Garud N.; Venkatasubramanian, Venkat
http://dx.doi.org/10.7916/D8HM58FX
Wed, 27 Apr 2016 13:48:12 +0000

Regulating emerging industries is challenging, even controversial at times. Under-regulation can result in safety threats to plant personnel, surrounding communities, and the environment. Over-regulation may hinder innovation, progress, and economic growth. Since one typically has limited understanding of, and experience with, the novel technology in practice, it is difficult to achieve a properly balanced regulation. In this work, we propose a control and coordination policy called soft regulation that attempts to strike the right balance and create a collective learning environment. In the soft regulation mechanism, individual agents can accept, reject, or partially accept the regulator's recommendation. This non-intrusive coordination does not interrupt normal operations. The extent to which an agent accepts the recommendation is mediated by a confidence level (from 0 to 100%). Among all possible recommendation methods, we investigate two in particular: the best recommendation, wherein the regulator is completely informed, and the crowd recommendation, wherein the regulator collects the crowd's average and recommends that value. We show by analysis and simulation that soft regulation with crowd recommendation performs well: it converges to the optimum and is as good as the best recommendation for a wide range of confidence levels. This work offers a new theoretical perspective on the concept of the wisdom of crowds.
Subjects: Sociology, Industrial engineering, System science, Sociotechnical systems, Learning, Operations research, Collective behavior
Author IDs: yl2750, gi10, vv2213 · Departments: Chemical Engineering, Industrial Engineering and Operations Research · Type: Articles

Applied Inventory Management: New Approaches to Age-Old Problems
https://academiccommons.columbia.edu/catalog/ac:194202
Daniel Guetta, Charles Raphael
http://dx.doi.org/10.7916/D84M94B1
Fri, 05 Feb 2016 15:26:26 +0000

Supply chain management is one of the fundamental topics in the field of operations research, and a vast literature exists on the subject. Many recent developments in the field are rapidly narrowing the gap between the systems handled in the literature and the real-life problems companies need to solve on a day-to-day basis. However, there are certain features often observed in real-world systems that elude even these most recent developments. In this thesis, we consider a number of these features, and propose some new heuristics together with methodologies to evaluate their performance.
In Chapter 2, we consider a general two-echelon distribution system consisting of a depot and multiple sales outlets which face random demands for a given item. The replenishment process consists of two stages: the depot procures the item from an outside supplier, while the retailers' inventories are replenished by shipments from the depot. Both replenishment stages are associated with a given facility-specific leadtime. The depot as well as the retailers face a limited inventory capacity. We propose a heuristic for this class of dynamic programming models to obtain an upper bound on optimal costs, together with a new approach to generate lower bounds based on Lagrangian relaxation. We report on an extensive numerical study with close to 14,000 instances that evaluates the accuracy of the lower bound and the optimality gap of the various heuristic policies. Our study reveals that our policy performs exceedingly well across almost the entire parameter spectrum.
In Chapter 3, we extend the model above to deal with distribution systems involving several items. In this setting, two interdependencies can arise between items that considerably complicate the problem. First, shared storage capacity at each of the retail outlets results in a trade-off between items; ordering more of one item means less space is available for another. Second, economies of scope can occur in the order costs if several items can be ordered from a single supplier, incurring only one fixed cost. To our knowledge, our approach is the first that has been proposed to handle such complex, multi-echelon, multi-item systems. We propose a heuristic for this class of dynamic programming models, to obtain an upper bound on optimal costs, together with an approach to generate lower bounds. We report on an extensive numerical study with close to 1,200 instances that reveals that our heuristic performs excellently across the entire parameter spectrum.

In Chapter 4, we consider a periodic-review stochastic inventory control system consisting of a single retailer which faces random demands for a given item, and in which demand forecasts are dynamically updated (for example, new information observed in one period may affect our beliefs about demand distributions in future periods). Replenishment orders are subject to fixed and variable costs. A number of heuristics exist to deal with such systems, but to our knowledge, no general approach exists to find lower bounds on optimal costs therein. We develop a general approach for finding lower bounds on the cost of such systems using an information relaxation. We test our approach in a model with advance demand information, and obtain good lower bounds over a range of problem parameters.
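As a simple point of reference for the inventory models above, here is a textbook simulation of a single-item, zero-leadtime base-stock (order-up-to) policy, far simpler than the two-echelon systems studied in the thesis; all parameter values are illustrative assumptions.

```python
import random

def simulate_base_stock(S, demands, h, b):
    """Simulate an order-up-to-S policy with zero leadtime: each period the
    inventory is raised to S, demand is realized, and we pay holding cost h
    per unit left over and backorder cost b per unit short. Returns the
    average per-period cost."""
    total = 0.0
    for d in demands:
        inv = S - d  # position after ordering up to S and serving demand d
        total += h * max(inv, 0) + b * max(-inv, 0)
    return total / len(demands)

# Illustrative instance: uniform demand on {0, ..., 10}, seeded for reproducibility.
rng = random.Random(1)
demands = [rng.randint(0, 10) for _ in range(10000)]
cost = simulate_base_stock(S=8, demands=demands, h=1.0, b=4.0)
```

With these costs the newsvendor critical fractile is b/(b+h) = 0.8, so an order-up-to level near the 80th percentile of demand (here S = 8) is a sensible choice.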
Finally, in Appendix A, we begin to tackle the problem of using these methods in real supply chain systems. We were able to obtain data from a luxury goods manufacturer to inspire our study. Unfortunately, the methods we developed in earlier chapters were not directly applicable to these data. Instead, we developed some alternate heuristic methods, and we considered statistical techniques that might be used to obtain the parameters required for these heuristics from the data available.
Operations research, Industrial engineering, Business administration, Business logistics, Industrial management, Inventory control, Inventory control--Data processing, Inventory control--Evaluation, Inventory control--Management, Retail trade--Inventory control
Business, Industrial Engineering and Operations Research
Dissertations

Wind Resource Assessment for Utility-Scale Clean Energy Project: The Case of Sao Vicente Island
https://academiccommons.columbia.edu/catalog/ac:191372
Yussuff, Abdulmutalib
http://dx.doi.org/10.7916/D8N58M04
Wed, 25 Nov 2015 10:21:52 +0000

Accurate wind resource assessment is of high importance in wind power project development. This thesis estimates the annual energy yield and emission reduction potential for a grid-connected 5.95 MW wind power plant on the island of Sao Vicente in Cape Verde. Wind speed data from the Sao Vicente wind farm are processed and analyzed in R (statistical software). The maximum annual wind energy potential at the site is 53,470.2 MWh, but analysis shows that the plant can harness an estimated 14,185 MWh per annum. The estimated annual greenhouse gas (GHG) emissions displacement is 10,071 tonnes of CO2. In monetary terms, the GHG displacement is worth € 60,428 per annum based on the European trading system price of € 6 per tonne of CO2. The estimated investment cost of the 5.95 MW wind power project is € 15.5 million, against an estimated investment cost for a similar project in Germany of € 10 million, based on the investment benchmark of $ 1,800/kW published by the Fraunhofer Institute and in comparison with a typical Vestas wind turbine cost of $ 1,800/kW. The difference in investment cost between Cape Verde and Germany is attributed to the additional costs of overcoming the complex terrain barriers to the good wind site in Sao Vicente; importing turbine and equipment parts; foreign consultancy services and manpower; and pre-feasibility and feasibility studies to identify suitable sites. With the prevailing electricity tariff of € 0.28 per kWh in Cape Verde, it was estimated that the wind power project will break even within 4 years, with or without carbon credit. This indicates that the project is financially viable. In the context of Nigeria's coastal area of Lagos, the wind resource potential lies within Class 1 (<5 m/s) at a hub height of 74 metres. This indicates that a wind power project could be realized using a turbine with a cut-in speed below 3 m/s in the best-case scenario.
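The reported break-even and carbon-value figures can be reproduced from the abstract's numbers with a back-of-the-envelope calculation (a sketch only: constant annual output and tariff are assumed, and O&M costs and discounting are ignored):

```python
# Back-of-the-envelope check of the figures reported in the abstract.
annual_energy_kwh = 14_185_000   # estimated annual yield (14,185 MWh)
tariff_eur_per_kwh = 0.28        # prevailing electricity tariff in Cape Verde
capex_eur = 15_500_000           # estimated project investment cost
ghg_tonnes = 10_071              # annual CO2 displacement
ets_price_eur = 6                # assumed European trading system carbon price

annual_revenue_eur = annual_energy_kwh * tariff_eur_per_kwh
simple_payback_years = capex_eur / annual_revenue_eur
# ~3.9 years, consistent with the reported break-even "within 4 years"
carbon_value_eur = ghg_tonnes * ets_price_eur
# = EUR 60,426, matching the reported EUR 60,428 up to rounding
```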
The implication is that a larger number of small wind turbines would be needed to reach utility scale.
Environmental engineering, Alternative energy, Natural resource management, Wind power, Winds--Speed--Measurement
ay2328
Industrial Engineering and Operations Research
Master's theses

A collective approach to reducing carbon dioxide emission: A case study of four University of Lagos Halls of residence
https://academiccommons.columbia.edu/catalog/ac:191369
Abolarin, S. M.; Gbadegesin, A. O.; Shitta, M. B.; Yussuff, Abdulmutalib; Eguma, C. A.; Ehwerhemuepha, L.; Adegbenro, O.
http://dx.doi.org/10.7916/D8542N7Q
Wed, 25 Nov 2015 10:02:58 +0000

A major focus of existing literature on energy conservation is the modelling and quantification of energy savings and the corresponding carbon dioxide emissions from lighting. While many studies have established theoretical frameworks concerning these issues, very little documentation exists relating to energy savings and emission levels in students' hostels. This paper considers the lighting efficiency improvement of four University of Lagos halls of residence in order to quantify the energy savings and the reduction in carbon dioxide emissions that can be achieved. Compact fluorescent lamps are considered as alternatives to the currently predominant conventional fluorescent and incandescent bulbs. The existing electricity consumption data obtained from an energy audit are used in combination with conversion factors to estimate the annual CO2 contributed to the atmosphere by lighting in each of the buildings. The results of the study show that a reduction of over 45% in carbon dioxide emissions can be achieved. Individuals can also do much to reduce emissions: using energy-saving appliances, turning off appliances when not in use, and reducing the use of fossil fuels are simple measures that can be adopted to reduce the annual carbon footprint, support economic growth, improve the environment and public health, and help save the planet.
Energy, Sustainability, Climate change, Buildings--Energy conservation, University of Lagos, Electric utilities--Energy conservation, Carbon dioxide mitigation
ay2328
Industrial Engineering and Operations Research
Articles

Perfect Simulation and Deployment Strategies for Detection
https://academiccommons.columbia.edu/catalog/ac:189976
Wallwater, Aya
http://dx.doi.org/10.7916/D8X066JB
Fri, 16 Oct 2015 15:06:43 +0000

This dissertation contains two parts. The first part provides the first algorithm that, under minimal assumptions, allows one to simulate the stationary waiting-time sequence of a single-server queue backwards in time, jointly with the input processes of the queue (inter-arrival and service times).
The single-server queue is useful in applications of DCFTP (Dominated Coupling From The Past), a well-known protocol for unbiased simulation from steady-state distributions. Our algorithm terminates in finite time assuming only a finite mean of the inter-arrival and service times. In order to simulate the single-server queue in stationarity until the first idle period with finite expected termination time, we require the existence of a finite variance. This requirement is also necessary for such an idle time (which is a natural coalescence time in DCFTP applications) to have finite mean. Thus, in this sense, our algorithm is applicable under minimal assumptions.
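The backward-in-time construction is the technical core of the dissertation; as a point of reference only, the forward recursion it simulates in stationarity is the classical Lindley recursion for FIFO waiting times (a minimal sketch, not the DCFTP algorithm itself):

```python
def lindley_waiting_times(services, interarrivals):
    """Forward Lindley recursion for a single-server FIFO queue:
    W_1 = 0 and W_{n+1} = max(W_n + S_n - A_{n+1}, 0),
    where services[k] is S_{k+1} and interarrivals[k] is the time
    between arrivals k+1 and k+2."""
    w = [0.0]
    for s, a in zip(services, interarrivals):
        w.append(max(w[-1] + s - a, 0.0))
    return w
```

With E[S] < E[A] the recursion is stable; the finite-variance condition above is what makes the first idle period (a natural coalescence time) have finite mean.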
The second part studies the behavior of diffusion processes in a random environment.
We consider an adversary that moves in a given domain, and our goal is to produce an optimal strategy to detect and neutralize it by a given deadline. We assume that the target's dynamics follow a diffusion process whose parameters are informed by available intelligence information. We dedicate one chapter to the rigorous formulation of the detection problem, an introduction of several frameworks that can be considered when applying our methods, and a discussion of the challenges of finding the analytical optimal solution. Then, in the following chapter, we present our main result, a large deviations result on the adversary's survival probability under a given strategy. This result later gives rise to asymptotically efficient Monte Carlo algorithms.
Operations research
aw2589
Operations Research, Industrial Engineering and Operations Research
Dissertations

Stochastic Networks: Modeling, Simulation Design and Risk Control
https://academiccommons.columbia.edu/catalog/ac:189655
Li, Juan
http://dx.doi.org/10.7916/D88P5ZV3
Mon, 28 Sep 2015 12:09:02 +0000

This dissertation studies stochastic network problems that arise in various areas with important industrial applications. Due to uncertainty in both external and internal variables, these networks are exposed to the risk of failure with a certain probability, which, in many cases, is very small. It is thus desirable to develop efficient simulation algorithms to study the stability of these networks and provide guidance for risk control.
Chapter 2 models equilibrium allocations in a distribution network as the solution of a linear program (LP) which minimizes the cost of unserved demands across nodes in the network. Assuming that the demands are random (following a jointly Gaussian law), we study the probability that the optimal cost exceeds a large threshold, which is a rare event. Our contribution is the development of importance sampling and conditional Monte Carlo algorithms for estimating this probability. We establish the asymptotic efficiency of our algorithms and present numerical results that demonstrate their strong performance.
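The chapter's estimators target rare events defined through an LP over jointly Gaussian demands; the core idea can be illustrated on the simplest Gaussian rare event, estimating P(Z > b) by exponential tilting, i.e. shifting the sampling mean into the rare region (an illustrative sketch only, not the dissertation's estimator):

```python
import math
import random

def gaussian_tail_is(b, n=200_000, seed=7):
    """Importance-sampling estimate of P(Z > b) for Z ~ N(0,1):
    draw X ~ N(b, 1) and reweight each hit by the likelihood ratio
    phi(x) / phi(x - b) = exp(-b*x + b^2/2)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(b, 1.0)
        if x > b:
            total += math.exp(-b * x + b * b / 2.0)
    return total / n
```

For b = 4 the true probability is about 3.17e-5, so naive Monte Carlo would see roughly one hit per 30,000 samples, while the tilted sampler hits the rare region about half the time.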
Chapter 3 studies an insurance-reinsurance network model that deals with default contagion risks, with a particular aim of capturing cascading effects at the time of defaults. We capture these effects through an equilibrium allocation of settlements, obtained as the unique optimal solution of an optimization problem. We obtain an asymptotic description of the most likely ways in which the default of a specific group of participants can occur by solving a multidimensional Knapsack integer programming problem. We also propose a class of strongly efficient Monte Carlo estimators for computing the expected loss of the network conditioned on the failure of a specific set of companies.
Chapter 4 discusses control schemes for maintaining low failure probability of a transmission system power line. We construct a stochastic differential equation to describe the temperature evolution in a line subject to stochastic exogenous factors such as ambient temperature, and present a solution to the resulting stochastic heat equation. A number of control algorithms designed to limit the probability that a line exceeds its critical temperature are provided.
Operations research, Engineering, Finance
jl3035
Operations Research, Industrial Engineering and Operations Research
Dissertations

Ranking Algorithms on Directed Configuration Networks
https://academiccommons.columbia.edu/catalog/ac:189652
Chen, Ningyuan
http://dx.doi.org/10.7916/D8J38RX8
Mon, 28 Sep 2015 12:08:56 +0000

In recent decades, complex real-world networks, such as social networks, the World Wide Web, financial networks, etc., have become a popular subject for both researchers and practitioners. This is largely due to the advances in computing power and big-data analytics. A key issue in analyzing these networks is the centrality of nodes, and ranking algorithms, such as Google's PageRank, are designed to measure it. We analyze the asymptotic distribution of the rank of a randomly chosen node, computed by a family of ranking algorithms (including PageRank) on a random graph, as the size of the network grows to infinity.
We propose a configuration model that generates the structure of a directed graph given the in- and out-degree distributions of the nodes. The algorithm guarantees that the generated graph is simple (without self-loops or multiple edges in the same direction) for a broad spectrum of degree distributions, including power-law distributions. A power-law degree distribution is known as the scale-free property and is observed in many real-world networks. On the random graph G_n = (V_n, E_n) generated by the configuration model, we study the distribution of the ranks, which solve
R_i = ∑_{j: (j,i) ∈ E_n} C_j R_j + Q_i
for every node i, where C_j is the weight of node j and Q_i is the personalization value of node i.
We show that as the size of the graph n → ∞, the rank of a randomly chosen node converges weakly to the endogenous solution of the stochastic fixed-point equation
R =^D ∑_{i=1}^N C_i R_i + Q,
where (Q, N, {C_i}) is a random vector and the {R_i} are i.i.d. copies of R, independent of (Q, N, {C_i}). The proof of this main result proceeds in three steps. First, we show that the rank of a randomly chosen node can be approximated by applying the ranking algorithm on the graph for finitely many iterations. Second, by coupling the graph to a branching tree governed by the empirical size-biased distribution, we approximate the finite iteration of the ranking algorithm by the root node of the branching tree. Finally, we prove that the rank of the root of the branching tree converges to that of a limiting weighted branching process, which is independent of n and solves the stochastic fixed-point equation. Our result formalizes the well-known heuristic that such a network often locally possesses a tree-like structure. We present a numerical example showing that the approximation is very accurate for English Wikipedia pages (over 5 million).
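A minimal finite-n instance of the recursion uses the classical PageRank choices C_j = c/outdegree(j) and Q_i = (1-c)/n (a sketch; the dissertation's analysis covers general weights):

```python
def pagerank_ranks(edges, n, c=0.85, iters=100):
    """Fixed-point iteration for R_i = sum_{j:(j,i) in E} C_j*R_j + Q_i
    with the PageRank weights C_j = c / outdeg(j) and Q_i = (1 - c) / n."""
    outdeg = [0] * n
    for j, _ in edges:
        outdeg[j] += 1
    q = (1.0 - c) / n
    r = [1.0 / n] * n
    for _ in range(iters):
        nxt = [q] * n
        for j, i in edges:          # edge j -> i contributes C_j * R_j to R_i
            nxt[i] += (c / outdeg[j]) * r[j]
        r = nxt
    return r
```

On a graph without dangling nodes the ranks sum to one, and the node with the most in-links from well-ranked nodes comes out on top.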
To draw a sample from the endogenous solution of the stochastic fixed-point equation, one can run linear branching recursions on a weighted branching process. We provide an iterative simulation algorithm based on the bootstrap. Compared to naive Monte Carlo, our algorithm reduces the complexity from exponential to linear in the number of recursions. We show that as the bootstrap sample size tends to infinity, the sample drawn according to our algorithm converges to the target distribution in the Kantorovich-Rubinstein distance and the estimator is consistent.
Operations research, Computer science
nc2462
Industrial Engineering, Industrial Engineering and Operations Research
Dissertations

Two Essays in Financial Engineering
https://academiccommons.columbia.edu/catalog/ac:189643
Yang, Linan
http://dx.doi.org/10.7916/D8K35T1M
Mon, 28 Sep 2015 12:08:34 +0000

This dissertation consists of two parts. In the first part, we investigate the potential impact of wrong-way risk on calculating the credit valuation adjustment (CVA) of a derivatives portfolio. A CVA is an adjustment applied to the value of a derivative contract or a portfolio of derivatives to account for counterparty credit risk. Measuring CVA requires combining models of market and credit risk. Wrong-way risk refers to the possibility that a counterparty's likelihood of default increases with the market value of the exposure. We develop a method for bounding wrong-way risk, holding fixed the marginal models for market and credit risk and varying the dependence between them. Given simulated paths of the two models, we solve a linear program to find the worst-case CVA resulting from wrong-way risk. We analyze properties of the solution and prove convergence of the estimated bound as the number of paths increases. The worst case can be overly pessimistic, so we extend the procedure to a tempered CVA by penalizing the deviation of the joint model of market and credit risk from a reference model. By varying the penalty for deviations, we can sweep out the full range of possible CVA values for different degrees of wrong-way risk. Here, too, we prove convergence of the estimate of the tempered CVA and of the joint distribution that attains it. Our method addresses an important source of model risk in counterparty risk measurement. In the second part, we study investors' trading behavior in a model of realization utility. We assume that investors' trading decisions are driven not only by the utility of consumption and terminal wealth, but also by the utility burst from realizing a gain or a loss.
More precisely, we consider a dynamic trading problem in which an investor decides when to purchase and sell a stock to maximize her wealth utility and realization utility, with her reference points adapting to the stock's gains and losses asymmetrically. We study, both theoretically and numerically, the optimal trading strategies and asset pricing implications of two types of agents: adaptive agents, who anticipate the future adaptation of their reference points, and naive agents, who fail to do so. We find that an adaptive agent sells the stock more frequently when the stock is at a gain than a naive agent does, and that the adaptive agent asks for a higher risk premium for the stock than the naive agent does in equilibrium. Moreover, compared to a non-adaptive agent whose reference point does not change with the stock's gains and losses, both the adaptive and naive agents sell the stock less frequently, and the naive agent requires the same risk premium as the non-adaptive agent does.
Operations research, Finance
ly2220
Operations Research, Business, Industrial Engineering and Operations Research
Dissertations

Smart Grid Risk Management
https://academiccommons.columbia.edu/catalog/ac:188373
Abad Lopez, Carlos Adrian
http://dx.doi.org/10.7916/D8028QR9
Tue, 21 Jul 2015 12:07:19 +0000

Current electricity infrastructure is being stressed from several directions: high demand, unreliable supply, extreme weather conditions and accidents, among others. Infrastructure planners have traditionally focused only on the cost of the system; today, resilience and sustainability are becoming increasingly important. In this dissertation, we develop computational tools for efficiently managing electricity resources to help create a more reliable and sustainable electrical grid. The tools we present in this work will help electric utilities coordinate demand to allow the smooth, large-scale integration of renewable sources of energy into traditional grids, and will provide infrastructure planners and operators in developing countries with a framework for making informed planning and control decisions in the presence of uncertainty.
Demand-side management is considered the most viable solution for maintaining grid stability as generation from intermittent renewable sources increases. Demand-side management, particularly demand response (DR) programs that attempt to alter the energy consumption of customers either by using price-based incentives or up-front power interruption contracts, is more cost-effective and sustainable in addressing short-term supply-demand imbalances than the alternative of increasing fossil fuel-based fast spinning reserves. An essential step in compensating participating customers and benchmarking the effectiveness of DR programs is to independently detect the load reduction from observed meter data. Electric utilities implementing automated DR programs through direct load control switches are also interested in detecting the reduction in demand, in order to efficiently pinpoint non-functioning devices and reduce maintenance costs. We develop sparse optimization methods for detecting a small change in a customer's demand for electricity in response to a price change or signal from the utility; dynamic learning methods for scheduling the maintenance of direct load control switches whose operating state is not directly observable and can only be inferred from the metered electricity consumption; and machine learning methods for accurately forecasting the load of hundreds of thousands of residential, commercial and industrial customers. These algorithms have been implemented in the software system provided by AutoGrid, Inc., and this system has helped several utilities in the Pacific Northwest, Oklahoma, California and Texas provide more reliable power to their customers at significantly reduced prices.
Providing power to widely dispersed communities in developing countries using the conventional power grid is not economically feasible. The most attractive alternative source of affordable energy for these communities is solar micro-grids. We discuss risk-aware robust methods to optimally size and operate solar micro-grids in the presence of uncertain demand and uncertain renewable generation. These algorithms help system operators increase their revenue while making their systems more resilient to inclement weather conditions.
Operations research, Energy
ca2446
Industrial Engineering and Operations Research, Operations Research
Dissertations

Design and Analysis of Matching and Auction Markets
https://academiccommons.columbia.edu/catalog/ac:188367
Saban, Daniela
http://dx.doi.org/10.7916/D8348JJ5
Fri, 10 Jul 2015 12:13:25 +0000

Auctions and matching mechanisms have become an increasingly important tool to allocate scarce resources among competing individuals or firms. Every day, millions of auctions are run for a variety of purposes, ranging from selling valuable art or advertisement space on websites to acquiring goods for government use. Every year, matching mechanisms are used to decide the public school assignments of thousands of incoming high school students, who are competing to obtain a seat in their most preferred school. This thesis addresses several questions that arise when designing and analyzing matching and auction markets.
The first part of the dissertation is devoted to matching markets. In Chapter 2, we study markets with indivisible goods where monetary compensations are not possible. Each individual is endowed with an object and has ordinal preferences over all objects. When preferences are strict, the Top-Trading Cycles (TTC) mechanism invented by Gale is Pareto efficient and strategy-proof, finds a core allocation, and is the only mechanism satisfying these properties. In the extensive literature on this problem since then, the TTC mechanism has been characterized in multiple ways, establishing its central role within the class of all allocation mechanisms. In many real applications, however, individual preferences have subjective indifferences; in this case, no simple adaptation of the TTC mechanism is Pareto efficient and strategy-proof. We provide a foundation for extending the TTC mechanism to the preference domain with indifferences while guaranteeing Pareto efficiency and strategy-proofness. As a by-product, we establish sufficient conditions for a mechanism (within a broad class of mechanisms) to be strategy-proof and use these conditions to design computationally efficient mechanisms.
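For strict preferences, Gale's TTC mechanism is short enough to state as code (a textbook sketch of the mechanism discussed above, with agent i endowed with object i; it does not cover the indifference extension that is the chapter's contribution):

```python
def top_trading_cycles(prefs):
    """Gale's Top Trading Cycles for strict preferences.
    prefs[i] is agent i's ranking of objects; agent i initially owns object i."""
    n = len(prefs)
    remaining = set(range(n))
    alloc = {}
    while remaining:
        # each remaining agent points at the owner of its top remaining object
        target = {i: next(o for o in prefs[i] if o in remaining)
                  for i in remaining}
        # walk the pointers from an arbitrary agent until a cycle closes
        path, pos = [], {}
        i = next(iter(remaining))
        while i not in pos:
            pos[i] = len(path)
            path.append(i)
            i = target[i]
        for j in path[pos[i]:]:      # trade along the cycle, then remove it
            alloc[j] = target[j]
            remaining.discard(j)
    return alloc
```

For example, if agents 0 and 1 each prefer the other's endowment while agent 2 prefers its own, the mechanism swaps 0 and 1 and leaves 2 in place.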
In Chapter 3, we study several questions associated with the Random Priority (RP) mechanism from a computational perspective. The RP mechanism is a popular way to allocate objects to agents with strict ordinal preferences over the objects. In this mechanism, an ordering over the agents is selected uniformly at random; the first agent is then allocated his most-preferred object, the second agent is allocated his most-preferred object among the remaining ones, and so on. The outcome of the mechanism is a bi-stochastic matrix in which entry (i,a) represents the probability that agent i is given object a. It is shown that the problem of computing the RP allocation matrix is #P-complete. Furthermore, it is NP-complete to decide whether a given agent i receives a given object a with positive probability under the RP mechanism, whereas it is possible to decide in polynomial time whether or not agent i receives object a with probability 1. The implications of these results for approximating the RP allocation matrix and for finding constrained Pareto optimal matchings are discussed.
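The #P-completeness result means that, in general, the only exact method enumerates all n! priority orders, as in this brute-force sketch (illustrative only, feasible just for tiny n):

```python
from itertools import permutations
from fractions import Fraction

def rp_matrix(prefs):
    """Exact Random Priority allocation matrix by enumerating all n!
    orderings; prefs[i] is agent i's strict ranking of the n objects.
    The factorial blow-up is consistent with the #P-hardness of the problem."""
    n = len(prefs)
    P = [[Fraction(0)] * n for _ in range(n)]
    orders = list(permutations(range(n)))
    for order in orders:
        taken = set()
        for agent in order:
            obj = next(o for o in prefs[agent] if o not in taken)
            taken.add(obj)
            P[agent][obj] += Fraction(1, len(orders))
    return P
```

With two agents who rank the objects identically, each receives each object with probability 1/2, as a coin flip over the two orderings would suggest.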
Chapter 4 focuses on assignment markets (matching markets with transferable utilities), such as labor and housing markets. We consider a two-sided assignment market with agent types and a stochastic structure similar to models used in empirical studies, and characterize the size of the core in such markets. The value generated from a match between a pair of agents is the sum of two random productivity terms, each of which depends only on the type but not the identity of one of the agents, and a third deterministic term driven by the pair of types. We allow the number of agents to grow, keeping the number of agent types fixed. Let n be the number of agents and K be the number of types on the side of the market with more types. We find, under reasonable assumptions, that the relative variation in utility per agent over core outcomes is bounded as O^*(1/n^{1/K}), where polylogarithmic factors have been suppressed. Further, we show that this bound is tight in the worst case, and provide a tighter bound under more restrictive assumptions.
In the second part of the dissertation, we study auction markets. Chapter 5 considers the problem faced by a procurement agency that runs an auction-type mechanism to construct an assortment of products with posted prices, from a set of differentiated products offered by strategic suppliers. Heterogeneous consumers then buy their most preferred alternative from the assortment as needed. Framework agreements (FAs), widely used in the public sector, take this form; this type of mechanism is also relevant in other contexts, such as the design of medical formularies and group buying. When evaluating the bids, the procurement agency must consider the trade-off between offering a richer menu of products for consumers, versus offering less variety, hoping to engage the suppliers in a more aggressive price competition. We develop a mechanism design approach to study this problem, and provide a characterization of the optimal mechanisms. This characterization allows us to quantify the optimal trade-off between product variety and price competition, in terms of suppliers' costs, products' characteristics, and consumers' characteristics. We then use the optimal mechanism as a benchmark to evaluate the performance of the Chilean government procurement agency's current implementation of FAs, used to acquire US $2 billion worth of goods per year. We show how simple modifications to the current mechanism, which increase price competition among close substitutes, can considerably improve performance.
Business
dhs2131
Business, Industrial Engineering and Operations Research
Dissertations

Methods for Pricing Pre-Earnings Equity Options and Leveraged ETF Options
https://academiccommons.columbia.edu/catalog/ac:186986
Santoli, Marco
http://dx.doi.org/10.7916/D86Q1W99
Thu, 07 May 2015 00:17:52 +0000

In this thesis, we present several analytical and numerical methods for two financial engineering problems: 1) accounting for the impact of an earnings announcement on the price and implied volatility of the associated equity options, and 2) analyzing the price dynamics of leveraged exchange-traded funds (LETFs) and valuation of LETF options. Our pricing models capture the main characteristics of these options, along with jumps and stochastic volatility in the underlying asset. We illustrate our results through numerical implementation and calibration using market data.
In the first part, we model the pricing of equity options around an earnings announcement (EA). Empirical studies have shown that an earnings announcement can lead to an immediate price shock to the company's stock. Since many companies also have options written on their stocks, the option prices should reflect the uncertain price impact of an upcoming EA before expiration. To represent the shock due to earnings, we incorporate a random jump on the announcement date in the dynamics of the stock price. We consider different distributions of the scheduled earnings jump as well as different underlying stock price dynamics before and after the EA date. Our main contributions include analytical option pricing formulas when the underlying stock price follows the Kou model along with a double-exponential or Gaussian EA jump on the announcement date. Furthermore, we derive analytic bounds and asymptotics for the pre-EA implied volatility under various models. The calibration results demonstrate an adequate fit of the entire implied volatility surface prior to an announcement. The comparison of the risk-neutral distribution of the EA jump to its historical counterpart is also discussed. Moreover, we discuss the valuation and exercise strategy of pre-EA American options, and present an analytical approximation and numerical results.
The second part focuses on the analysis of LETFs. We start by providing a quantitative risk analysis of LETFs with an emphasis on the impact of leverage ratios and investment horizons. Given an investment horizon, different leverage ratios imply different levels of risk. Therefore, the idea of an admissible range of leverage ratios is introduced. For an admissible leverage ratio, the associated LETF satisfies a given risk constraint based on, for example, the value-at-risk (VaR) and conditional VaR. Moreover, we discuss the concept of an admissible risk horizon so that the investor can control risk exposure by selecting an appropriate holding period. The intra-horizon risk is calculated, showing that higher leverage can significantly increase the probability of an LETF value hitting a lower level. This leads us to evaluate a stop-loss/take-profit strategy for LETFs and determine the optimal take-profit given a stop-loss risk constraint. In addition, the impact of volatility exposure on the returns of different LETF portfolios is investigated.
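The dependence of an LETF's return on the path of the index, not just its net move, is visible already in a two-day example of daily-rebalanced leveraging (a sketch of the standard LETF mechanic, not of the thesis's risk measures):

```python
def letf_value(daily_returns, beta, v0=1.0):
    """Value of a fund that delivers beta times the index's *daily* return,
    compounded daily; over several days this differs from beta times the
    index's cumulative return (volatility decay)."""
    v = v0
    for r in daily_returns:
        v *= 1.0 + beta * r
    return v
```

If the index gains 10% and then falls back exactly to its starting level, a 3x fund ends roughly 5.5% down even though the index is flat, which is why longer horizons and higher leverage ratios demand the kind of risk constraints discussed above.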
In the last chapter, we study the pricing of options written on LETFs. Since LETFs on the same reference index share the same source of risk, it is important to consider a consistent pricing methodology for these options. In addition, LETFs can theoretically experience a loss greater than 100%. In practice, some LETF providers design the fund so that the daily returns are capped both downward and upward. We incorporate these features and model the reference index by a stochastic volatility model with jumps. An efficient numerical algorithm based on transform methods to value options under this model is presented. We illustrate the accuracy of our pricing algorithm by comparing it to existing methods. Calibration using empirical option data shows the impact of the leverage ratio on the implied volatility. Our method is extended to price American-style LETF options.
Finance, Operations research
Industrial Engineering and Operations Research
Dissertations

Optimal Multiple Stopping Approach to Mean Reversion Trading
https://academiccommons.columbia.edu/catalog/ac:186941
Li, Xin
http://dx.doi.org/10.7916/D88K781S
Fri, 24 Apr 2015 18:33:24 +0000

This thesis studies the optimal timing of trades under mean-reverting price dynamics subject to fixed transaction costs. We first formulate an optimal double stopping problem whereby a speculative investor can choose when to enter and subsequently exit the market. The investor's value functions and optimal timing strategies are derived when prices are driven by an Ornstein-Uhlenbeck (OU), exponential OU, or Cox-Ingersoll-Ross (CIR) process. Moreover, we analyze a related optimal switching problem that involves an infinite sequence of trades. In addition to solving for the value functions and optimal switching strategies, we identify the conditions under which the double stopping and switching problems admit the same optimal entry and/or exit timing strategies. A number of extensions are also considered, such as incorporating a stop-loss constraint, or a minimum holding period under the OU model.
A typical solution approach for optimal stopping problems is to study the associated free boundary problems or variational inequalities (VIs). For the optimal double stopping problem, we apply a probabilistic methodology and rigorously derive the optimal price intervals for market entry and exit. A key step of our approach involves a transformation, which in turn allows us to characterize the value function as the smallest concave majorant of the reward function in the transformed coordinate. In contrast to the variational inequality approach, this approach directly constructs the value function as well as the optimal entry and exit regions, without a priori conjecturing a candidate value function or timing strategy. Having solved the optimal double stopping problem, we then apply our results to deduce a similar solution structure for the optimal switching problem. We also verify that our value functions solve the associated VIs.
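On a discrete grid, the smallest concave majorant mentioned above is simply the upper convex hull of the graph of the reward function, computable by a monotone-chain scan (an illustrative sketch in the transformed coordinate, not the thesis's analytical construction):

```python
def smallest_concave_majorant(xs, ys):
    """Smallest concave majorant of the points (xs[k], ys[k]), xs increasing:
    build the upper convex hull, then interpolate linearly along it."""
    hull = []  # indices of upper-hull vertices
    for k in range(len(xs)):
        while len(hull) >= 2:
            i, j = hull[-2], hull[-1]
            # drop j when slope(i, j) <= slope(j, k): concavity would fail
            if (ys[j] - ys[i]) * (xs[k] - xs[j]) <= (ys[k] - ys[j]) * (xs[j] - xs[i]):
                hull.pop()
            else:
                break
        hull.append(k)
    out, h = [], 0
    for k in range(len(xs)):
        while h + 1 < len(hull) and xs[hull[h + 1]] < xs[k]:
            h += 1
        i, j = hull[h], hull[min(h + 1, len(hull) - 1)]
        if i == j or xs[j] == xs[i]:
            out.append(ys[i])
        else:
            t = (xs[k] - xs[i]) / (xs[j] - xs[i])
            out.append(ys[i] + t * (ys[j] - ys[i]))
    return out
```

A reward with a dip gets bridged by a straight segment (the continuation region), while an already concave reward is its own majorant, mirroring how the construction separates stopping and waiting regions.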
Among our results, we find that under OU or CIR price dynamics, the optimal stopping problems admit the typical buy-low-sell-high strategies. However, when the prices are driven by an exponential OU process, the investor generally enters when the price is low, but may find it optimal to wait if the current price is sufficiently close to zero. In other words, the continuation (waiting) region for entry is disconnected. A similar phenomenon is observed in the OU model with stop-loss constraint. Indeed, the entry region is again characterized by a bounded price interval that lies strictly above the stop-loss level. As for the exit timing, a higher stop-loss level always implies a lower optimal take-profit level. In all three models, numerical results are provided to illustrate the dependence of timing strategies on model parameters.Operations research, FinanceIndustrial Engineering and Operations ResearchDissertationsDomestic Septic Tanks for Treating Sewage and Biogas Generation
https://academiccommons.columbia.edu/catalog/ac:183894
Modjinou, Mawufemohttp://dx.doi.org/10.7916/D81G0K40Tue, 03 Mar 2015 15:38:59 +0000This study designs a novel septic tank, the Anaerobic Upflow Domestic Septic Tank (AUDST), to recover biogas as energy and treat domestic sewage. This green technology offers an alternative to existing Domestic Septic Tanks (DST) by anaerobically pre-treating sewage to reduce bacteria, pollutants, Total Suspended Solids (TSS), Chemical Oxygen Demand (COD) and Biological Oxygen Demand (BOD) before the effluent is discharged or removed by cesspit trucks. Studies have shown that DST in homes only partially treat, or merely store, sewage. These DST must be emptied from time to time because they lack features that sustain anaerobic activity, and the sludge is usually disposed of directly into the sea, other water bodies, or even open places such as "Lavender Hills" without any treatment or disinfection. These practices cause severe public health and environmental problems. To tackle the challenge at the household level, we redesign the DST to treat domestic sewage with less management, low operating cost, and low secondary discharge of pollutants. The proposed design operates through three units: desilting, anaerobic digestion, and facultative filtration. The anaerobic digestion stage consists of a baffle and an anaerobic filter that accommodate sludge and provide more intimate contact between anaerobic biomass and sewage, which improves treatment performance. The anaerobic unit is fitted with locally woven baskets prefilled with packing materials, strengthening the biological treatment process at this stage. 
The facultative filtration unit of the model is packed with low-cost, highly durable filtering media such as gravel (3-6 mm in diameter) to produce effluent with lower pollutant and suspended-solids content, meeting Ghana's Environmental Protection Agency (EPA) standards for the discharge of domestic effluents.Mechanical engineering, Environmental engineeringmm4488Industrial Engineering and Operations ResearchMaster's thesesThe Theory of Systemic Risk
https://academiccommons.columbia.edu/catalog/ac:178176
Chenhttp://dx.doi.org/10.7916/D8W37TWCTue, 30 Sep 2014 14:42:47 +0000Systemic risk is an issue of great concern in modern financial markets as well as, more broadly, in the management of complex business and engineering systems. It refers to the risk of collapse of an entire complex system, as a result of the actions taken by the individual component entities or agents that comprise the system. We investigate the topic of systemic risk from the perspectives of measurement, structural sources, and risk factors. In particular, we propose an axiomatic framework for the measurement and management of systemic risk based on the simultaneous analysis of outcomes across agents in the system and over scenarios of nature. Our framework defines a broad class of systemic risk measures that accommodate a rich set of regulatory preferences. This general class of systemic risk measures captures many specific measures of systemic risk that have recently been proposed as special cases, and highlights their implicit assumptions. Moreover, the systemic risk measures that satisfy our conditions yield decentralized decompositions, i.e., the systemic risk can be decomposed into risk due to individual agents. Furthermore, one can associate a shadow price for systemic risk to each agent that correctly accounts for the externalities of the agent's individual decision-making on the entire system. In addition, we provide a structural model for a financial network consisting of a set of firms holding common assets. In the model, endogenous asset prices are captured by the market clearing condition when the economy is in equilibrium. The key ingredients in the financial market that are captured in this model include the general portfolio choice flexibility of firms given posted asset prices and economic states, and the mark-to-market wealth of firms. 
We analyze price sensitivity and characterize the key features of financial holding networks that minimize systemic risk, as a function of overall leverage. Finally, we propose a framework for estimating risk measures based on risk factors. By introducing a form of factor-separable risk measures, we connect the acceptance set of the original risk measure to the acceptance sets of the factor-separable risk measures. We demonstrate that tight bounds for factor-separable coherent risk measures can be explicitly constructed.Operations researchcc3136Industrial Engineering and Operations Research, BusinessDissertationsStudies in Stochastic Networks: Efficient Monte-Carlo Methods, Modeling and Asymptotic Analysis
https://academiccommons.columbia.edu/catalog/ac:177127
Dong, Jinghttp://dx.doi.org/10.7916/D8X63K4FTue, 12 Aug 2014 18:10:34 +0000This dissertation contains two parts. The first part develops a series of bias reduction techniques for: point processes on stable unbounded regions, steady-state distribution of infinite server queues, steady-state distribution of multi-server loss queues and loss networks and sample path of stochastic differential equations. These techniques can be applied for efficient performance evaluation and optimization of the corresponding stochastic models. We perform detailed running time analysis under heavy traffic of the perfect sampling algorithms for infinite server queues and multi-server loss queues and prove that the algorithms achieve nearly optimal order of complexity. The second part aims to model and analyze the load-dependent slowdown effect in service systems. One important phenomenon we observe in such systems is bi-stability, where the system alternates randomly between two performance regions. We conduct heavy traffic asymptotic analysis of system dynamics and provide operational solutions to avoid the bad performance region.Operations research, Applied mathematicsjd2736Industrial Engineering and Operations ResearchDissertationsStochastic Approximation Algorithms in the Estimation of Quasi-Stationary Distribution of Finite and General State Space Markov Chains
https://academiccommons.columbia.edu/catalog/ac:177124
Zheng, Shuhenghttp://dx.doi.org/10.7916/D89C6VM9Tue, 12 Aug 2014 15:34:38 +0000This thesis studies stochastic approximation algorithms for estimating the quasi-stationary distribution of Markov chains. Existing numerical linear algebra methods and probabilistic methods might be computationally demanding and intractable in large state spaces. We take our motivation from a heuristic described in the physics literature and use the stochastic approximation framework to analyze and extend it.
The thesis begins by looking at the finite dimensional setting. The finite dimensional quasi-stationary estimation algorithm was proposed in the physics literature by [#latestoliveira, #oliveiradickman1, #dickman]; however, no proof was given there, and it was not recognized as a stochastic approximation algorithm. This and related schemes were analyzed in the context of urn problems, where the consistency of the estimator was shown [#aldous1988two, #pemantle, #athreya]. The rate of convergence was studied by [#athreya] in special cases only. The first chapter provides a different proof of the algorithm's consistency and establishes a rate of convergence in more generality than [#athreya]. It turns out that the rate of convergence is fast only when a certain restrictive eigenvalue condition is satisfied. Using the tool of iterate averaging, we modify the algorithm and eliminate this eigenvalue condition.
The thesis then moves on to the general state space discrete-time Markov chain setting. In this setting, the stochastic approximation framework does not have a strong theory in the current literature, so several of the convergence results have to be adapted because the iterates of our algorithm are measure-valued. The chapter formulates the quasi-stationary estimation algorithm in this setting. Then, we extend the ODE method of [#kushner2003stochastic] and prove the consistency of the algorithm. Through the proof, several non-restrictive conditions required for convergence of the algorithm are discovered.
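A minimal sketch of the estimation scheme described above, for a three-state chain with one absorbing state: the chain is simulated, restarted from the current estimate upon absorption, and the estimate is updated with Robbins-Monro step sizes 1/(n+1). The chain and its parameters are illustrative choices, not from the thesis.

```python
import random

def qsd_estimate(P, absorbing, start, steps, rng):
    """Stochastic-approximation estimate of the quasi-stationary distribution.

    P maps each state to a list of (next_state, probability) pairs. Whenever
    the chain is absorbed, it is restarted from a state sampled from the
    current estimate -- the heuristic analyzed in the thesis."""
    states = [s for s in P if s != absorbing]
    mu = {s: 1.0 / len(states) for s in states}
    x = start
    for n in range(1, steps + 1):
        nxt, probs = zip(*P[x])
        x = rng.choices(nxt, weights=probs)[0]
        if x == absorbing:  # restart from the current estimate
            x = rng.choices(states, weights=[mu[s] for s in states])[0]
        gamma = 1.0 / (n + 1)  # Robbins-Monro step sizes
        for s in states:
            mu[s] += gamma * ((1.0 if s == x else 0.0) - mu[s])
    return mu

P = {1: [(0, 0.1), (2, 0.9)], 2: [(1, 0.5), (2, 0.5)], 0: [(0, 1.0)]}
rng = random.Random(1)
mu = qsd_estimate(P, absorbing=0, start=1, steps=300_000, rng=rng)
print(mu)  # the exact QSD here is approximately {1: 0.341, 2: 0.659}
```

For this chain the quasi-stationary distribution can be computed by hand as the normalized left Perron eigenvector of the substochastic matrix restricted to the transient states, which gives roughly (0.341, 0.659).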
Finally, the thesis tests the algorithm by running some numerical experiments. The examples are designed to test the algorithm in various edge cases. The algorithm is also empirically compared against the Fleming-Viot method.Operations researchIndustrial Engineering and Operations ResearchDissertationsEssays in Financial Engineering
https://academiccommons.columbia.edu/catalog/ac:177072
Ahn, Andrewhttp://dx.doi.org/10.7916/D80K26R0Sat, 19 Jul 2014 14:34:30 +0000This thesis consists of three essays in financial engineering. In particular we study problems in option pricing, stochastic control and risk management.
In the first essay, we develop an accurate and efficient pricing approach for options on leveraged ETFs (LETFs). Our approach allows us to price these options quickly and in a manner that is consistent with the underlying ETF price dynamics. The numerical results also demonstrate that LETF option prices are model-dependent, particularly in high-volatility environments.
In the second essay, we extend a linear programming (LP) technique for approximately solving high-dimensional control problems in a diffusion setting. The original LP technique applies to finite horizon problems with an exponentially-distributed horizon, T. We extend the approach to fixed horizon problems. We then apply these techniques to dynamic portfolio optimization problems and evaluate their performance using convex duality methods. The numerical results suggest that the LP approach is a very promising one for tackling high-dimensional control problems.
In the final essay, we propose a factor model-based approach for performing scenario analysis in a risk management context. We argue that our approach addresses some important drawbacks to a standard scenario analysis and, in a preliminary numerical investigation with option portfolios, we show that it produces superior results as well.Operations researchaja2133Industrial Engineering and Operations ResearchDissertationsNetwork Resource Allocation Under Fairness Constraints
https://academiccommons.columbia.edu/catalog/ac:176038
Chandramouli, Shyam Sundarhttp://dx.doi.org/10.7916/D8S46Q3VMon, 07 Jul 2014 11:46:04 +0000This work considers the basic problem of allocating resources among a group of agents in a network, when the agents are equipped with single-peaked preferences over their assignments. This generalizes the classical claims problem, which concerns the division of an estate's liquidation value when the total claim on it exceeds this value. The claims problem also models the problem of rationing a single commodity, or the problem of dividing the cost of a public project among the people it serves, or the problem of apportioning taxes. A key consideration in this classical literature is equity: the good (or the ``bad,'' in the case of apportioning taxes or costs) should be distributed as fairly as possible. The main contribution of this dissertation is a comprehensive treatment of a generalization of this classical rationing problem to a network setting.
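For the classical single-resource rationing problem that the network model generalizes, the canonical equitable solution is the uniform rule: every agent receives a common level, capped at (under excess demand) or padded up to (under excess supply) their peak. The sketch below, a bisection on the common level, illustrates that classical rule only; it is not the network egalitarian mechanism of Bochet et al.

```python
def uniform_rule(peaks, supply):
    """Uniform rule for dividing `supply` among agents with single-peaked
    preferences, where peaks[i] is agent i's preferred consumption level."""
    total = sum(peaks)
    if total >= supply:            # excess demand: cap everyone at a common level
        f = lambda lam: sum(min(p, lam) for p in peaks)
        lo, hi = 0.0, max(peaks)
    else:                          # excess supply: pad everyone up to a common level
        f = lambda lam: sum(max(p, lam) for p in peaks)
        lo, hi = 0.0, supply
    for _ in range(200):           # bisection on the common level lambda
        mid = (lo + hi) / 2
        if f(mid) < supply:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2
    return [min(p, lam) if total >= supply else max(p, lam) for p in peaks]

# Excess demand: peaks sum to 10 but only 6 units exist; the common cap is 2.5.
print(uniform_rule([6, 3, 1], 6))
```

In the example, the agent with peak 1 receives exactly their peak, while the two over-demanders are rationed down to the common level 2.5; this is the allocation that is Pareto optimal, envy free and strategyproof in the classical setting.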
Bochet et al. recently introduced a generalization of the classical rationing problem to the network setting. For this problem they designed an allocation mechanism---the egalitarian mechanism---that is Pareto optimal, envy free and strategyproof. In chapter 2, it is shown that the egalitarian mechanism is in fact group strategyproof, implying that no coalition of agents can collectively misreport their information to obtain a (weakly) better allocation for themselves. Further, a complete characterization of the set of all group strategyproof mechanisms is obtained.
The egalitarian mechanism satisfies many attractive properties, but fails consistency, an important property in the literature on rationing problems. It is shown in chapter 3 that no Pareto optimal mechanism can be envy-free and consistent. Chapter 3 is devoted to the edge-fair mechanism that is Pareto optimal, group strategyproof, and consistent. In a related model where the agents are located on the edges of the graph rather than the nodes, the edge-fair rule is shown to be envy-free, group strategyproof, and consistent.
Chapter 4 extends the egalitarian mechanism to the problem of finding an optimal exchange in non-bipartite networks. The results vary depending on whether the commodity being exchanged is divisible or indivisible. For the latter case, it is shown that no efficient mechanism can be strategyproof, and that the egalitarian mechanism is Pareto optimal and envy-free. Chapter 5 generalizes recent work on finding stable and balanced allocations in graphs with unit capacities and unit weights to more general networks. The existence of a stable and balanced allocation is established by a transformation to an equivalent unit capacity network.Operations researchIndustrial Engineering and Operations ResearchDissertationsNew Quantitative Approaches to Asset Selection and Portfolio Construction
https://academiccommons.columbia.edu/catalog/ac:175867
Song, Irenehttp://dx.doi.org/10.7916/D83N21JVMon, 07 Jul 2014 11:39:33 +0000Since the publication of Markowitz's landmark paper "Portfolio Selection" in 1952, portfolio construction has evolved into a disciplined and personalized process. In this process, security selection and portfolio optimization constitute key steps for making investment decisions across a collection of assets. The use of quantitative algorithms and models in these steps has become a widely-accepted investment practice by modern investors. This dissertation is devoted to exploring and developing those quantitative algorithms and models.
In the first part of the dissertation, we present two efficiency-based approaches to security selection: (i) a quantitative stock selection strategy based on operational efficiency and (ii) a quantitative currency selection strategy based on macroeconomic efficiency. In developing the efficiency-based stock selection strategy, we exploit a potential positive link between firm's operational efficiency and its stock performance. By means of data envelopment analysis (DEA), a non-parametric approach to productive efficiency analysis, we quantify firm's operational efficiency into a single score representing a consolidated measure of financial ratios. The financial ratios integrated into an efficiency score are selected on the basis of their predictive power for the firm's future operating performance using the LASSO (least absolute shrinkage and selection operator)-based variable selection method. The computed efficiency scores are directly used for identifying stocks worthy of investment. The basic idea behind the proposed stock selection strategy is that as efficient firms are presumed to be more profitable than inefficient firms, higher returns are expected from their stocks. This idea is tested in a contextual and empirical setting provided by the U.S. Information Technology (IT) sector. Our empirical findings confirm that there is a strong positive relationship between firm's operational efficiency and its stock performance, and further establish that firm's operational efficiency has significant explanatory power in describing the cross-sectional variations of stock returns. We moreover offer an economic argument that posits operational efficiency as a systematic risk factor and the most likely source of excess returns of investing in efficient firms.
The efficiency-based currency selection strategy is developed in a similar way; i.e. currencies are selected based on a certain efficiency metric. An exchange rate has long been regarded as a reliable barometer of the state of the economy and the measure of international competitiveness of countries. While strong and appreciating currencies correspond to productive and efficient economies, weak and depreciating currencies correspond to slowing down and less efficient economies. This study hence develops a currency selection strategy that utilizes macroeconomic efficiency of countries measured based on a widely-accepted relationship between exchange rates and macroeconomic variables. For quantifying macroeconomic efficiency of countries, we first establish a multilateral framework using effective exchange rates and trade-weighted macroeconomic variables. This framework is used for transforming the three representative bilateral structural exchange rate models: the flexible price monetary model, the sticky price monetary model, and the sticky price asset model, into their multilateral counterparts. We then translate these multilateral models into DEA models, which yield an efficiency score representing an aggregate measure of macroeconomic variables. Consistent with the stock selection strategy, the resulting efficiency scores are used for identifying currencies worthy of investment. We evaluate our currency selection strategy against appropriate market and strategic benchmarks using historical data. Our empirical results confirm that currencies of efficient countries have stronger performance than those of inefficient countries, and further suggest that compared to the exchange rate models based on standard regression analysis, our models based on DEA improve on the predictability of the future performance of currencies.
In the first part of the dissertation, we also develop a data-driven variable selection method for DEA based on the group LASSO. This method extends the LASSO-based variable selection method used for specifying a DEA model for estimating firm's operational efficiency. In our proposed method, we derive a special constrained version of the group LASSO with the loss function suited for variable selection in DEA models and solve it by a new tailored algorithm based on the alternating direction method of multipliers (ADMM). We conduct a thorough evaluation of the proposed method against two widely-used variable selection methods: the efficiency contribution measure (ECM) method and the regression-based (RB) test, in the DEA literature using Monte Carlo simulations. The simulation results show that our method provides more favorable performance compared with its benchmarks.
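To give a flavor of the ADMM machinery invoked above, here is a toy scalar lasso instance, min over x of ½(x−a)² + λ|x|, solved by the standard three-step ADMM iteration with a soft-thresholding z-update. The thesis' algorithm handles a constrained group LASSO with a DEA-specific loss; this scalar sketch only illustrates the splitting pattern.

```python
def soft_threshold(v, k):
    """Proximal operator of k*|.|."""
    return max(v - k, 0.0) - max(-v - k, 0.0)

def lasso_admm_1d(a, lam, rho=1.0, iters=200):
    """ADMM for min_x 0.5*(x - a)**2 + lam*|x|, split as f(x) + g(z), x = z."""
    x = z = u = 0.0
    for _ in range(iters):
        x = (a + rho * (z - u)) / (1.0 + rho)  # minimize the smooth part
        z = soft_threshold(x + u, lam / rho)   # minimize the l1 part
        u += x - z                             # scaled dual update
    return z

print(lasso_admm_1d(3.0, 1.0))  # closed form: soft_threshold(3, 1) = 2.0
```

The iterates converge geometrically to the closed-form solution sign(a)·max(|a|−λ, 0), so the ADMM output can be checked exactly in this toy case.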
In the second part of the dissertation, we propose a generalized risk budgeting (GRB) approach to portfolio construction. In a GRB portfolio, assets are grouped into possibly overlapping subsets, and each subset is allocated a risk budget that has been pre-specified by the investor. Minimum variance, risk parity and risk budgeting portfolios are all special instances of a GRB portfolio. The GRB portfolio optimization problem is to find a GRB portfolio with an optimal risk-return profile where risk is measured using any positively homogeneous risk measure. When the subsets form a partition, the assets all have identical returns and we restrict ourselves to long-only portfolios, then the GRB problem can in fact be solved as a convex optimization problem. In general, however, the GRB problem is a constrained non-convex problem, for which we propose two solution approaches. The first approach uses a semidefinite programming (SDP) relaxation to obtain an (upper) bound on the optimal objective function value. In the second approach we develop a numerical algorithm that integrates augmented Lagrangian and Markov chain Monte Carlo (MCMC) methods in order to find a point in the vicinity of a very good local optimum. This point is then supplied to a standard non-linear optimization routine with the goal of finding this local optimum. It should be emphasized that the merit of this second approach is in its generic nature: in particular, it provides a starting-point strategy for any non-linear optimization algorithms.Operations researchIndustrial Engineering and Operations ResearchDissertationsGraph Structure and Coloring
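Risk parity, cited above as a special instance of a GRB portfolio, admits a closed form in the two-asset case: weights inversely proportional to volatilities equalize the two risk contributions for any correlation. The plain-Python check below (illustrative parameters) verifies this; the general GRB problem requires the SDP relaxation and augmented Lagrangian/MCMC machinery described in the abstract.

```python
import math

def risk_contributions(w, cov):
    """RC_i = w_i * (cov @ w)_i / sigma_p; the RC_i sum to the portfolio volatility."""
    n = len(w)
    cw = [sum(cov[i][j] * w[j] for j in range(n)) for i in range(n)]
    sigma_p = math.sqrt(sum(w[i] * cw[i] for i in range(n)))
    return [w[i] * cw[i] / sigma_p for i in range(n)]

# Two assets with volatilities 20% and 10% and correlation 0.3.
s1, s2, rho = 0.2, 0.1, 0.3
cov = [[s1 * s1, rho * s1 * s2], [rho * s1 * s2, s2 * s2]]
w = [s2 / (s1 + s2), s1 / (s1 + s2)]   # inverse-volatility weights: [1/3, 2/3]
rc = risk_contributions(w, cov)
print(rc)  # the two risk contributions are equal
```

A short calculation confirms the equality: with these weights both contributions equal σ₁²σ₂²(1+ρ)/(σ₁+σ₂)² divided by the portfolio volatility, independent of ρ.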
https://academiccommons.columbia.edu/catalog/ac:175631
Plumettaz, Matthieuhttp://dx.doi.org/10.7916/D87M0637Mon, 07 Jul 2014 11:36:44 +0000We denote by G=(V,E) a graph with vertex set V and edge set E. A graph G is claw-free if no vertex of G has three pairwise nonadjacent neighbours. Claw-free graphs are a natural generalization of line graphs. This thesis answers several questions about claw-free graphs and line graphs.
In 1988, Chvatal and Sbihi proved a decomposition theorem for claw-free perfect graphs. They showed that claw-free perfect graphs either have a clique-cutset or come from two basic classes of graphs called elementary and peculiar graphs. In 1999, Maffray and Reed successfully described how elementary graphs can be built using line graphs of bipartite graphs and local augmentation. However gluing two claw-free perfect graphs on a clique does not necessarily produce claw-free graphs. The first result of this thesis is a complete structural description of claw-free perfect graphs. We also give a construction for all perfect circular interval graphs. This is joint work with Chudnovsky.
Erdos and Lovasz conjectured in 1968 that for every graph G and all integers s,t≥ 2 such that s+t-1=χ(G) > ω(G), there exists a partition (S,T) of the vertex set of G such that χ(G|S)≥ s and χ(G|T)≥ t. This conjecture is known in the graph theory community as the Erdos-Lovasz Tihany Conjecture. For general graphs, the only settled cases of the conjecture are when s and t are small. Recently, the conjecture was proved for a few special classes of graphs: graphs with stability number 2, line graphs and quasi-line graphs. The second part of this thesis considers the conjecture for claw-free graphs and presents some progress on it. This is joint work with Chudnovsky and Fradkin.
Reed's ω, Δ, χ conjecture proposes that every graph satisfies χ ≤ ⌈(Δ+1+ω)/2⌉; it is known to hold for all claw-free graphs. The third part of this thesis considers a local strengthening of this conjecture. We prove the local strengthening for line graphs, then note that previous results immediately tell us that the local strengthening holds for all quasi-line graphs. Our proofs lead to polytime algorithms for constructing colorings that achieve our bounds: the complexities are O(n²) for line graphs and O(n³m²) for quasi-line graphs. For line graphs, this is faster than the best known algorithm for constructing a coloring that achieves the bound of Reed's original conjecture. This is joint work with Chudnovsky, King and Seymour.Operations researchmp2761Industrial Engineering and Operations ResearchDissertationsOn the Kidney Exchange Problem and Online Minimum Energy Scheduling
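The bound in Reed's conjecture is easy to check numerically on small graphs. The brute-force sketch below verifies χ ≤ ⌈(Δ+1+ω)/2⌉ on a 5-cycle; this is an illustrative example only (the 5-cycle is not a line graph, and brute force is exponential, nothing like the polytime algorithms of the thesis).

```python
from itertools import combinations, product

def chromatic_number(n, edges):
    """Smallest k admitting a proper k-coloring (brute force over colorings)."""
    for k in range(1, n + 1):
        for col in product(range(k), repeat=n):
            if all(col[u] != col[v] for u, v in edges):
                return k

def clique_number(n, edges):
    """Largest k such that some k vertices are pairwise adjacent."""
    es = set(map(frozenset, edges))
    for k in range(n, 0, -1):
        for sub in combinations(range(n), k):
            if all(frozenset(p) in es for p in combinations(sub, 2)):
                return k

def max_degree(n, edges):
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return max(deg)

# 5-cycle: chi = 3, Delta = 2, omega = 2, so the bound is ceil(5/2) = 3.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
chi = chromatic_number(5, edges)
bound = -(-(max_degree(5, edges) + 1 + clique_number(5, edges)) // 2)  # ceiling
print(chi, bound)
```

Here the odd cycle needs three colors, and the conjectured bound is tight.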
https://academiccommons.columbia.edu/catalog/ac:175610
Herrera Humphries, Tuliahttp://dx.doi.org/10.7916/D8125QSXMon, 07 Jul 2014 11:36:10 +0000The allocation and management of scarce resources are of central importance in the design
of policies to improve social well-being. This dissertation consists of three essays; the first
two deal with the problem of allocating kidneys, and the third with power management
in computing devices.
Kidney exchange programs are an attractive alternative for patients who need a kidney
transplant and who have a willing, but medically incompatible, donor. A registry that keeps
track of such patient-donor pairs can find matches through exchanges amongst such pairs.
This results in a quicker transplant for the patients involved, and equally importantly, keeps
such patients from the long wait list of patients without an intended donor. As of March
2014, there were at least 99,000 candidates waiting for a kidney transplant in the U.S.
However, in 2013 only 16,893 transplants were conducted. This imbalance between supply
and demand, among other factors, has driven the development of multiple kidney exchange
programs in the U.S. and the subsequent development of matching mechanisms to run the
programs.
In the first essay we consider a matching problem arising in kidney exchanges between
hospitals. Focusing on the case of two hospitals, we construct a strategy-proof matching
mechanism that is guaranteed to return a matching that is at least 3/4 the size of a maximum cardinality
matching. It is known that no better performance is possible if one focuses on
mechanisms that return a maximal matching, and so our mechanism is best possible within
this natural class of mechanisms. For path-cycle graphs we construct a mechanism that
returns a matching that is at least 4/5 the size of max-cardinality matching. This mechanism
does not necessarily return a maximal matching. Finally, we construct a mechanism that is
universally truthful on path-cycle graphs and whose performance is within 2/3 of optimal.
Again, it is known that no better ratio is possible.
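For context on the guarantees quoted above: any maximal matching is at least half the size of a maximum-cardinality matching, and this classical ½ baseline is what the 3/4 and 4/5 mechanisms improve upon while also ensuring truthfulness. The toy sketch below illustrates only that baseline bound, not the thesis' mechanisms.

```python
from itertools import combinations

def greedy_maximal_matching(edges):
    """Scan edges in the given order, adding any edge whose endpoints are free."""
    matched, m = set(), []
    for u, v in edges:
        if u not in matched and v not in matched:
            m.append((u, v))
            matched.update((u, v))
    return m

def maximum_matching(edges):
    """Brute-force maximum-cardinality matching (fine for tiny graphs)."""
    for k in range(len(edges), 0, -1):
        for sub in combinations(edges, k):
            nodes = [x for e in sub for x in e]
            if len(nodes) == len(set(nodes)):  # pairwise vertex-disjoint
                return list(sub)
    return []

# Path 0-1-2-3 with an unlucky edge order: greedy takes the middle edge
# and gets stuck at size 1, while the maximum matching has size 2.
edges = [(1, 2), (0, 1), (2, 3)]
greedy, best = greedy_maximal_matching(edges), maximum_matching(edges)
print(len(greedy), len(best))
```

The example attains the worst case of the ½ guarantee, which is why mechanisms restricted to maximal matchings cannot do better than 3/4 in the two-hospital setting without further structure.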
In most of the existing literature, mechanisms are typically evaluated by their overall
performance on a large exchange pool, based on which conclusions and recommendations
are drawn. In our second essay, we consider a dynamic framework to evaluate extensively
used kidney exchange mechanisms. We conduct a simulation-based study of a dynamically
evolving exchange pool over 9 years. Our results suggest that some of the features that
are critical in a mechanism in the static setting have only a minor impact on its long-run
performance when viewed in the dynamic setting. More importantly, features that
are generally underestimated in the static setting, such as the pairs' arrival rates, turn
out to be relevant when we look at a dynamically evolving exchange pool. In particular,
we provide insights into the effect on waiting times and on the probability of receiving
an offer of controllable features, such as the frequency at which matchings are run and
the structures through which pairs can be matched (cycles or chains), as well as inherent
features, such as the pairs' ABO-PRA characteristics, the availability of altruistic donors,
and whether or not compatible pairs join the exchange. We evaluate the odds of receiving
an offer and the expected time to receive an offer for each ABO-PRA type of pair in the model.
Power management in computing devices aims to minimize the energy consumed to perform
tasks while keeping performance at acceptable levels. A widely used power management
strategy is to transition devices and/or components to lower power
consumption states during inactivity periods. Transitions between power states consume
energy; thus, depending on such costs, it may be advantageous to stay in the high power state
during some inactivity periods. In our third essay we consider the problem of minimizing
the total energy consumed by a 2-power state device, to process jobs that are sent over time
by a constrained adversary. Jobs can be preempted, but deadlines need to be met. In this
problem, an algorithm must decide when to schedule the jobs, as well as a sequence of power
states, and the discrete time thresholds at which these states will be reached. We provide
an online algorithm to minimize the energy consumption when the cost of a transition to
the low power state is small enough. In this case, the problem of minimizing the energy
consumption is equivalent to minimizing the total number of inactivity periods. We also
provide an algorithm to minimize the energy consumption when it may be advantageous
to stay in high power state during some inactivity periods. In both cases we provide upper
bounds on the competitive ratio of our algorithms, and lower bounds on the competitive
ratio of all online algorithms.Operations researchIndustrial Engineering and Operations ResearchDissertationsData-driven Decisions in Service Systems
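The 2-power-state trade-off described in this abstract has a classical ski-rental flavor. Assuming the high power state costs one energy unit per time unit, the low power state costs nothing, and a transition back up costs β, the rule "stay awake for β time units, then power down" is 2-competitive on each inactivity period. The sketch below illustrates only that textbook baseline, not the thesis' constrained-adversary algorithms.

```python
def opt_cost(idle, beta):
    """Offline optimum for one idle period of length `idle`: either stay
    awake (cost = idle) or power down immediately (transition cost beta)."""
    return min(idle, beta)

def online_cost(idle, beta):
    """Stay awake for beta time units, then power down and later pay the
    transition cost beta; this never exceeds twice the offline optimum."""
    return idle if idle <= beta else 2 * beta

beta = 5.0
ratios = [online_cost(l, beta) / opt_cost(l, beta) for l in (1.0, 5.0, 7.0, 100.0)]
print(max(ratios))
```

Short idle periods cost the online rule exactly the optimum, while long ones cost 2β against an optimum of β, so the worst-case ratio is exactly 2.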
https://academiccommons.columbia.edu/catalog/ac:175604
Kim, Song-Heehttp://dx.doi.org/10.7916/D8D798KHMon, 07 Jul 2014 11:35:52 +0000This thesis makes contributions to help provide data-driven (or evidence-based) decision support to service systems, especially hospitals. Three selected topics are presented.
First, we discuss how Little's Law (L = λW), which relates the time-average number in system to the arrival rate and the average waiting time, can be applied to service systems data collected over a finite time interval. To make inferences based on the indirect estimator of average waiting times, we propose methods for estimating confidence intervals and for adjusting estimates to reduce bias. We show that our new methods are effective using simulations and data from a US bank call center.
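When the observation interval begins and ends with an empty system, the finite-interval version of Little's Law holds exactly: the indirect estimate L̄/λ̄ coincides with the direct average of the individual sojourn times. The sketch below, on hypothetical data, illustrates that identity; the bias and confidence-interval issues studied in the thesis arise precisely when the interval does not start and end empty.

```python
def little_indirect(arrivals, departures, T):
    """Indirect waiting-time estimate W = Lbar / lam over [0, T], where Lbar
    is the time-average number in system and lam = N / T is the arrival rate."""
    events = sorted([(t, +1) for t in arrivals] + [(t, -1) for t in departures])
    area, last_t, level = 0.0, 0.0, 0
    for t, d in events:            # sweep the piecewise-constant queue length
        area += level * (t - last_t)
        last_t, level = t, level + d
    area += level * (T - last_t)
    lam = len(arrivals) / T
    return (area / T) / lam

# Three customers; the system is empty at times 0 and T = 5.
arrivals = [0.0, 1.0, 2.5]
departures = [2.0, 3.0, 4.0]
direct = sum(d - a for a, d in zip(arrivals, departures)) / len(arrivals)
w_hat = little_indirect(arrivals, departures, T=5.0)
print(w_hat, direct)  # both equal 11/6
```

Here the sojourn times are 2, 2 and 1.5, the time-average number in system is 1.1, and the arrival rate is 0.6, so both estimates equal 11/6 exactly.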
Second, we address important issues that need to be taken into account when testing whether real arrival data can be modeled by nonhomogeneous Poisson processes (NHPPs). We apply our method to data from a US bank call center and a hospital emergency department and demonstrate that their arrivals come from NHPPs.
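A standard building block for such tests is the conditional-uniform property: given n arrivals of a homogeneous Poisson process on [0, T], the arrival times are distributed as n i.i.d. uniforms, so a Kolmogorov-Smirnov test against the uniform distribution can be applied to the scaled times. The sketch below shows only that core step, on a deterministic example; the actual NHPP testing procedure (e.g. splitting the horizon into subintervals over which the rate is roughly constant) involves considerably more care.

```python
def ks_uniform(points, T):
    """KS distance between scaled arrival times and the uniform distribution
    on [0, 1]. Conditional on n arrivals in [0, T], a homogeneous Poisson
    process has arrival times distributed as n i.i.d. uniforms."""
    u = sorted(t / T for t in points)
    n = len(u)
    return max(max((i + 1) / n - u[i], u[i] - i / n) for i in range(n))

# Deterministic check: evenly spaced points give KS distance exactly 0.1,
# well under the 5% critical value 1.36 / sqrt(9) ~ 0.45.
d = ks_uniform([1, 2, 3, 4, 5, 6, 7, 8, 9], T=10)
print(d)
```

With real data one would compare the statistic with the asymptotic critical value 1.36/√n at the 5% level (or use an exact small-sample table).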
Lastly, we discuss an approach to standardize the Intensive Care Unit admission process, which currently lacks well-defined criteria. Using data from nearly 200,000 hospitalizations, we discuss how to quantify the impact of Intensive Care Unit admission on individual patients' clinical outcomes. We then use this quantified impact and a stylized model to discuss optimal admission policies. We use simulation to compare the performance of our proposed optimal policies to the current admission policy, and show that the gain can be significant.Operations researchsk3116Industrial Engineering and Operations ResearchDissertationsHigh-Dimensional Portfolio Management: Taxes, Execution and Information Relaxations
https://academiccommons.columbia.edu/catalog/ac:185815
Wang, Chunhttp://dx.doi.org/10.7916/D8M043JJMon, 07 Jul 2014 11:34:06 +0000Portfolio management has always been a key topic in finance research. While many researchers have studied portfolio management problems, most of the work to date assumes trading is frictionless. This dissertation presents our investigation of optimal trading policies, and our efforts to apply duality methods based on information relaxations, for portfolio problems in which the investor manages multiple securities and confronts trading frictions, in particular capital gains taxes and execution costs.
In Chapter 2, we consider dynamic asset allocation problems where the investor is required to pay capital gains taxes on her investment gains. This is a very challenging problem because the tax to be paid whenever a security is sold depends on the tax basis, i.e. the price(s) at which the security was originally purchased. This feature results in high-dimensional and path-dependent problems which cannot be solved exactly except in the case of very stylized problems with just one or two securities and relatively few time periods. The asset allocation problem with taxes has several variations depending on: (i) whether we use the exact or average tax-basis and (ii) whether we allow the full use of losses (FUL) or the limited use of losses (LUL). We consider all of these variations in this chapter but focus mainly on the exact and average-cost tax-basis LUL cases since these problems are the most realistic and generally the most challenging. We develop several sub-optimal trading policies for these problems and use duality techniques based on information relaxations to assess their performance. Our numerical experiments consider problems with as many as 20 securities and 20 time periods. The principal contribution of this chapter is in demonstrating that much larger problems can now be tackled through the use of sophisticated optimization techniques and duality methods based on information relaxations. We show in fact that the dual formulation of exact tax-basis problems is much easier to solve than the corresponding primal problem. Indeed, we can easily solve dual problem instances where the number of securities and time periods is much larger than 20. We also note, however, that while the average tax-basis problem is relatively easier to solve in general, its corresponding dual problem instances are non-convex and more difficult to solve. We therefore propose an approach for the average tax-basis dual problem that enables valid dual bounds to still be obtained.
In Chapter 3, we consider a portfolio execution problem where a possibly risk-averse agent needs to trade a fixed number of shares in multiple stocks over a short time horizon. Our price dynamics can capture linear but stochastic temporary and permanent price impacts as well as stochastic volatility. In general, however, it is not possible to solve for the optimal policy in this model, even numerically, so we instead search for good sub-optimal policies. Our principal policy is a variant of an open-loop feedback control (OLFC) policy, and we show how the corresponding OLFC value function may be used to construct good primal and dual bounds on the optimal value function. The dual bound is constructed using the recently developed duality methods based on information relaxations. One of the contributions of this chapter is the identification of sufficient conditions to guarantee convexity, and hence tractability, of the associated dual problem instances. That said, we do not claim that the only plausible models are those where all dual problem instances are convex. We also show that it is straightforward to include a non-linear temporary price impact as well as return predictability in our model. We demonstrate numerically that good dual bounds can be computed quickly even when nested Monte Carlo simulations are required to estimate the so-called dual penalties. These results suggest that the dual methodology can be applied in many models where closed-form expressions for the dual penalties cannot be computed.
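The primal-dual logic of information relaxations can be sketched on a toy stopping problem: any feasible heuristic policy yields a lower bound on the optimal value, while letting a clairvoyant see the whole path (here with the trivial zero penalty) yields an upper bound. The random walk and threshold rule below are illustrative, not from the dissertation:

```python
import random

random.seed(7)

def simulate_path(T=5):
    """A symmetric random walk observed at times 0..T."""
    x, path = 0.0, [0.0]
    for _ in range(T):
        x += random.choice([-1.0, 1.0])
        path.append(x)
    return path

N = 20000
primal = dual = 0.0
for _ in range(N):
    path = simulate_path()
    # Primal (lower) bound: a non-anticipating heuristic -- stop the first
    # time the walk reaches 2, otherwise take the terminal value.
    primal += next((x for x in path if x >= 2.0), path[-1])
    # Dual (upper) bound with zero penalty: a clairvoyant who sees the whole
    # path stops at its maximum.
    dual += max(path)
primal /= N
dual /= N
print(primal, dual)  # primal <= true optimal value <= dual
```

Tighter penalties than zero shrink the gap between the two bounds; the zero-penalty clairvoyant bound is simply the loosest member of the family.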
In Chapter 4, we apply duality methods based on information relaxations to dynamic zero-sum games. We show these methods can easily be used to construct dual lower and upper bounds for the optimal value of these games. In particular, these bounds can be used to evaluate sub-optimal policies for zero-sum games when calculating the optimal policies and game value is intractable.
Operations research, Finance | Industrial Engineering and Operations Research | Dissertations

Convex Optimization Algorithms and Recovery Theories for Sparse Models in Machine Learning
https://academiccommons.columbia.edu/catalog/ac:175385
Huang, Bo
http://dx.doi.org/10.7916/D8VM49DM
Mon, 07 Jul 2014 11:31:19 +0000
Sparse modeling is a rapidly developing topic that arises frequently in areas such as machine learning, data analysis and signal processing. One important application of sparse modeling is the recovery of a high-dimensional object from a relatively small number of noisy observations, which is the main focus of compressed sensing, matrix completion (MC) and robust principal component analysis (RPCA). However, the power of sparse models is hampered by the unprecedented size of the data that has become more and more available in practice. Therefore, it has become increasingly important to better harness convex optimization techniques to take advantage of any underlying "sparsity" structure in problems of extremely large size.
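A building block behind many convex matrix-recovery formulations of this kind is singular value soft-thresholding, the proximal operator of the nuclear norm. The sketch below is a generic illustration of that operator, not the algorithm developed in the thesis:

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: shrink each singular value of X by tau.
    This is the proximal operator of tau * ||.||_* (the nuclear norm) and is
    the core step of many matrix-completion / RPCA solvers."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return U @ np.diag(s_shrunk) @ Vt

X = np.diag([3.0, 1.0])
Y = svt(X, 2.0)
# Singular values 3 and 1 shrink to 1 and 0, so Y has rank 1.
print(np.round(Y, 6))
```

Thresholding small singular values to zero is what drives the iterates toward low-rank (i.e., "sparse" in the spectral sense) solutions.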
This thesis focuses on two main aspects of sparse modeling. From the modeling perspective, it extends convex programming formulations for matrix completion and robust principal component analysis problems to the case of tensors, and derives theoretical guarantees for exact tensor recovery under a framework of strongly convex programming. On the optimization side, an efficient first-order algorithm with the optimal convergence rate is proposed and studied for a wide range of linearly constrained sparse modeling problems.
Mathematics, Statistics, Operations research | Industrial Engineering and Operations Research | Dissertations

From Continuous to Discrete: Studies on Continuity Corrections and Monte Carlo Simulation with Applications to Barrier Options and American Options
https://academiccommons.columbia.edu/catalog/ac:171186
Cao, Menghui
http://dx.doi.org/10.7916/D8PG1PS1
Fri, 28 Feb 2014 15:26:30 +0000
This dissertation 1) shows continuity corrections for first passage probabilities of Brownian bridge and barrier joint probabilities, which are applied to the pricing of two-dimensional barrier and partial barrier options, and 2) introduces new variance reduction techniques and computational improvements to Monte Carlo methods for pricing American options.
The joint distribution of Brownian motion and its first passage time has found applications in many areas, including sequential analysis, pricing of barrier options, and credit risk modeling. There are, however, no simple closed-form solutions for these joint probabilities in a discrete-time setting. Chapter 2 shows that discrete two-dimensional barrier and partial barrier joint probabilities can be approximated by their continuous-time counterparts with remarkable accuracy after shifting the barrier away from the underlying by a factor. We achieve this through a uniform continuity correction theorem on the first passage probabilities for Brownian bridge, extending relevant results in Siegmund (1985a). The continuity corrections are applied to the pricing of two-dimensional barrier and partial barrier options, extending the results in Broadie, Glasserman & Kou (1997) on one-dimensional barrier options. One interesting aspect is that for type B partial barrier options, the barrier correction cannot be applied uniformly throughout one pricing formula, but only to some barrier values, leaving the others unchanged; the direction of the correction may also vary within one formula.
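The flavor of such corrections can be seen in the one-dimensional result of Broadie, Glasserman & Kou (1997): a discretely monitored barrier H is replaced by H·exp(±βσ√Δt), with β = −ζ(1/2)/√(2π) ≈ 0.5826, the plus sign for an up barrier and the minus sign for a down barrier, after which a continuous-barrier formula can be used. A quick sketch of the shift itself:

```python
import math

BETA = 0.5826  # beta = -zeta(1/2)/sqrt(2*pi), Broadie, Glasserman & Kou (1997)

def corrected_barrier(H, sigma, T, m, up=True):
    """Shift a discretely monitored barrier (m monitoring dates over horizon T)
    so that a continuous-barrier formula approximates the discrete price."""
    dt = T / m
    sign = 1.0 if up else -1.0
    return H * math.exp(sign * BETA * sigma * math.sqrt(dt))

# Daily monitoring of a barrier at 100 with 20% volatility over one year:
print(corrected_barrier(100.0, 0.20, 1.0, 252, up=True))   # slightly above 100
print(corrected_barrier(100.0, 0.20, 1.0, 252, up=False))  # slightly below 100
```

The shift is tiny for frequent monitoring but makes a material difference in barrier option prices, since prices are very sensitive to the barrier level.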
In Chapter 3 we introduce new variance reduction techniques and computational improvements to Monte Carlo methods for pricing American-style options. For simulation algorithms that compute lower bounds of American option values, we apply martingale control variates and introduce the local policy enhancement, which adopts a local simulation to improve the exercise policy. For duality-based upper bound methods, specifically the primal-dual simulation algorithm (Andersen and Broadie 2004), we develop two improvements. One is sub-optimality checking, which saves unnecessary computation when it is sub-optimal to exercise the option along the sample path; the second is boundary distance grouping, which reduces computational time by skipping computation on selected sample paths based on the distance to the exercise boundary. Numerical results are given for single asset Bermudan options, moving window Asian options and Bermudan max options. In some examples the computational time is reduced by a factor of several hundred, while the confidence interval of the true option value is considerably tighter than before the improvements.
Operations research, Finance | Industrial Engineering and Operations Research | Dissertations

Pricing, Trading and Clearing of Defaultable Claims Subject to Counterparty Risk
https://academiccommons.columbia.edu/catalog/ac:169814
Kim, Jinbeom
http://dx.doi.org/10.7916/D8319SWW
Mon, 03 Feb 2014 12:12:22 +0000
The recent financial crisis and subsequent regulatory changes on over-the-counter (OTC) markets have given rise to new valuation and trading frameworks for defaultable claims for investors and dealer banks. More OTC market participants have adopted the new market conventions that incorporate counterparty risk into the valuation of OTC derivatives. In addition, the use of collateral has become common for most bilateral trades to reduce counterparty default risk. On the other hand, to increase transparency and market stability, the U.S. and European regulators have required mandatory clearing of defaultable derivatives through central counterparties. This dissertation tackles these changes and analyzes their impacts on the pricing, trading and clearing of defaultable claims. In the first part of the thesis, we study a valuation framework for financial contracts subject to reference and counterparty default risks with a collateralization requirement. We propose a fixed point approach to analyze the mark-to-market contract value with counterparty risk provision, and show that it is a unique bounded and continuous fixed point via contraction mapping. This leads us to develop an accurate iterative numerical scheme for valuation. Specifically, we solve a sequence of linear inhomogeneous partial differential equations, whose solutions converge to the fixed point price function. We apply our methodology to compute the bid and ask prices for both defaultable equity and fixed-income derivatives, and illustrate the non-trivial effects of counterparty risk, collateralization ratio and liquidation convention on the bid-ask prices. In the second part, we study the problem of pricing and trading of defaultable claims among investors with heterogeneous risk preferences and market views.
Based on the utility-indifference pricing methodology, we construct the bid-ask spreads for risk-averse buyers and sellers, and show that the spreads widen as risk aversion or trading volume increases. Moreover, we analyze the buyer's optimal static trading position under various market settings, including (i) when the market pricing rule is linear, and (ii) when the counterparty -- single or multiple sellers -- may have different nonlinear pricing rules generated by risk aversion and belief heterogeneity. For defaultable bonds and credit default swaps, we provide explicit formulas for the optimal trading positions, and examine the combined effect of heterogeneous risk aversions and beliefs. In particular, we find that belief heterogeneity, rather than the difference in risk aversion, is crucial to trigger a trade. Finally, we study the impact of central clearing on the credit default swap (CDS) market. Central clearing of CDS through a central counterparty (CCP) has been proposed as a tool for mitigating systemic risk and counterparty risk in the CDS market. The design of CCPs involves the implementation of margin requirements and a default fund, for which various designs have been proposed. We propose a mathematical model to quantify the impact of the design of the CCP on the incentive for clearing and analyze the market equilibrium. We determine the minimum number of clearing participants required so that they have an incentive to clear part of their exposures. Furthermore, we analyze the equilibrium CDS positions and their dependence on the initial margin, risk aversion, and counterparty risk in the inter-dealer market. Our numerical results show that minimizing the initial margin maximizes the total clearing positions as well as the CCP's revenue.
Operations research, Finance | jk3071 | Industrial Engineering and Operations Research | Dissertations

Perfect Simulation, Sample-path Large Deviations, and Multiscale Modeling for Some Fundamental Queueing Systems
https://academiccommons.columbia.edu/catalog/ac:181094
Chen, Xinyun
http://dx.doi.org/10.7916/D8WH2MZ1
Mon, 06 Jan 2014 18:13:10 +0000
As a primary branch of operations research, queueing theory models and analyzes engineering systems with random fluctuations. With the development of the internet and computational techniques, the engineering systems of today are much bigger in scale and more complicated in structure than those of 20 years ago, which raises numerous new problems for researchers in the field of queueing theory. The aim of this thesis is to explore new methods and tools, from both algorithmic and analytical perspectives, that are useful for solving such problems.
In Chapters 1 and 2, we introduce some techniques of asymptotic analysis that are relatively new to queueing applications in order to give a more accurate probabilistic characterization of queueing models with large scale and complicated structure. In particular, Chapter 1 gives the first functional large deviations result for infinite-server systems with general inter-arrival and service times. The functional approach we use enables a nice description of the whole system over the entire time horizon of interest, which is important in real problems. In Chapter 2, we construct a queueing model for the so-called limit order book that is used in major financial markets worldwide. We use an asymptotic approach called multi-scale modeling to disentangle the complicated dependence among the elements in the trading system and to reduce the model dimensionality. The asymptotic regime we use is inspired by empirical observations, and the resulting limit process explains and reproduces stylized features of real market data. Chapter 2 also provides a nice example of novel applications of queueing models in systems, such as the electronic trading system, that are traditionally outside the scope of queueing theory.
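The infinite-server model of Chapter 1 can be illustrated by direct bookkeeping: every arrival enters service immediately, so the number in system at time t is a simple count. The arrival and service times below are arbitrary illustrative values:

```python
def infinite_server_count(arrivals, services, t):
    """Number of customers in an infinite-server system at time t: every
    arrival starts service immediately, so customer i is present
    iff a_i <= t < a_i + s_i (no waiting, no queueing for servers)."""
    return sum(1 for a, s in zip(arrivals, services) if a <= t < a + s)

arrivals = [0.0, 1.0, 2.0, 4.0]
services = [2.5, 2.5, 0.5, 3.0]
print(infinite_server_count(arrivals, services, 2.0))  # customers 0, 1, 2
print(infinite_server_count(arrivals, services, 3.0))  # customer 1 only
```

Functional limit theorems of the kind the chapter studies describe the fluctuations of this count, viewed as a process over the whole time horizon, as the arrival rate grows.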
Chapters 3 and 4 focus on stochastic simulation methods for performance evaluation of queueing models where analytic approaches fail.
In Chapter 3, we develop a perfect sampling algorithm to generate exact samples from the stationary distribution of stochastic fluid networks in polynomial time. Our approach can be used for time-varying networks with general inter-arrival and service times, whose stationary distributions have no analytic expression. In Chapter 4, we focus on stochastic systems with continuous random fluctuations, for instance systems whose workload arrives as a continuous flow such as a Lévy process. We develop a general framework of simulation algorithms featuring a deterministic error bound and an almost square-root convergence rate. As an application, we apply this framework to estimate the stationary distributions of reflected Brownian motions, and the performance of our algorithm is better than that of prevalent existing numerical methods.
Operations research | xc2177 | Industrial Engineering and Operations Research | Dissertations

Two Papers of Financial Engineering Relating to the Risk of the 2007--2008 Financial Crisis
https://academiccommons.columbia.edu/catalog/ac:167143
Zhong, Haowen
http://dx.doi.org/10.7916/D8CC0XMG
Fri, 15 Nov 2013 17:04:33 +0000
This dissertation studies two financial engineering and econometrics problems relating to two facets of the 2007-2008 financial crisis. In the first part, we construct the Spatial Capital Asset Pricing Model and the Spatial Arbitrage Pricing Theory to characterize the risk premiums of futures contracts on real estate assets. We also provide rigorous econometric analysis of the new models. An empirical study shows there exists significant spatial interaction among the S&P/Case-Shiller Home Price Index futures returns. In the second part, we perform empirical studies on the jump risk in the equity market. We propose a simple affine jump-diffusion model for equity returns, which seems to outperform existing ones (including models with Lévy jumps) during the financial crisis and is at least as good during normal times, if model complexity is taken into account. In comparing the models, we make two empirical findings: (i) jump intensity seems to increase significantly during the financial crisis, while on average there appears to be little change in jump sizes; (ii) a finite number of large jumps in returns over any finite time horizon seems to fit the data well both before and after the crisis.
Operations research, Statistics | hz2193 | Industrial Engineering and Operations Research | Dissertations

Cutting Planes for Convex Objective Nonconvex Optimization
https://academiccommons.columbia.edu/catalog/ac:166569
Michalka, Alexander
http://hdl.handle.net/10022/AC:P:22000
Thu, 17 Oct 2013 14:46:03 +0000
This thesis studies methods for tightening relaxations of optimization problems with convex objectives over a nonconvex domain. A class of linear inequalities obtained by lifting easily derived valid inequalities is introduced, and it is shown that this class of inequalities is sufficient to describe the epigraph of a convex and differentiable function over a general domain. In the special case where the objective is a positive definite quadratic function, polynomial-time separation procedures using the new class of lifted inequalities are developed for the cases when the domain is the complement of the interior of a polyhedron, a union of polyhedra, or the complement of the interior of an ellipsoid. Extensions for positive semidefinite and indefinite quadratic objectives are also studied. Applications and computational considerations are discussed, and the results from a series of numerical experiments are presented.
Industrial engineering | adm2148 | Industrial Engineering and Operations Research | Dissertations

Resource Cost Aware Scheduling Problems
https://academiccommons.columbia.edu/catalog/ac:166566
Carrasco, Rodrigo
http://hdl.handle.net/10022/AC:P:21999
Thu, 17 Oct 2013 14:31:58 +0000
Managing the consumption of non-renewable and/or limited resources has become an important issue in many different settings. In this dissertation we explore the topic of resource cost aware scheduling. Unlike pure scheduling problems, in the resource cost aware setting we are interested not only in a scheduling performance metric but also in the cost of the resources consumed to achieve a certain performance level. There are several ways in which the cost of non-renewable resources can be added to a scheduling problem. Throughout this dissertation we focus on the case where the resource consumption cost is added, as part of the objective, to a scheduling performance metric such as weighted completion time or weighted tardiness, among others. In our work we make several contributions to the problem of scheduling with non-renewable resources. For the specific setting in which energy consumption is the only important resource, our contributions are the following. We introduce a model that extends previous energy cost models by allowing more general cost functions that can be job-dependent. We further generalize the problem by allowing arbitrary precedence constraints and release dates. We give approximation algorithms for minimizing an objective that is a combination of a scheduling metric, namely total weighted completion time or total weighted tardiness, and the total energy consumption cost. Our approximation algorithm is based on an interval-and-speed-indexed IP formulation. We solve the linear relaxation of this IP and use this solution to compute a schedule. We show that these algorithms have small constant approximation ratios. Through experimental analysis we show that the empirical approximation ratios are much better than the theoretical ones and that in fact the solutions are close to optimal.
We also show empirically that the algorithm can be used in additional settings not covered by the theoretical results, such as minimizing flow time or operating in an online setting, with good approximation and competitiveness ratios.
Industrial engineering, Applied mathematics | Industrial Engineering and Operations Research | Dissertations

Approximate dynamic programming for large scale systems
https://academiccommons.columbia.edu/catalog/ac:169790
Desai, Vijay V.
http://hdl.handle.net/10022/AC:P:20875
Fri, 28 Jun 2013 10:51:16 +0000
Sequential decision making under uncertainty is at the heart of a wide variety of practical problems. These problems can be cast as dynamic programs, and the optimal value function can be computed by solving Bellman's equation. However, this approach is limited in its applicability. As the number of state variables increases, the state space size grows exponentially, a phenomenon known as the curse of dimensionality, rendering the standard dynamic programming approach impractical. An effective way of addressing the curse of dimensionality is through parameterized value function approximation. Such an approximation is determined by a relatively small number of parameters and serves as an estimate of the optimal value function. But in order for this approach to be effective, we need Approximate Dynamic Programming (ADP) algorithms that can deliver `good' approximations to the optimal value function; such an approximation can then be used to derive policies for effective decision-making. From a practical standpoint, in order to assess the effectiveness of such an approximation, there is also a need for methods that give a sense of the suboptimality of a policy. This thesis is an attempt to address both these issues. First, we introduce a new ADP algorithm based on linear programming to compute value function approximations. LP approaches to approximate DP have typically relied on a natural `projection' of a well studied linear program for exact dynamic programming. Such programs restrict attention to approximations that are lower bounds to the optimal cost-to-go function. Our program -- the `smoothed approximate linear program' -- is distinct from such approaches and relaxes the restriction to lower bounding approximations in an appropriate fashion while remaining computationally tractable.
The resulting program enjoys strong approximation guarantees and is shown to perform well in numerical experiments with the game of Tetris and a queueing network control problem. Next, we consider optimal stopping problems with applications to pricing of high-dimensional American options. We introduce the pathwise optimization (PO) method: a new convex optimization procedure to produce upper and lower bounds on the optimal value (the `price') of high-dimensional optimal stopping problems. The PO method builds on a dual characterization of optimal stopping problems as optimization problems over the space of martingales, which we dub the martingale duality approach. We demonstrate via numerical experiments that the PO method produces upper bounds and lower bounds (via suboptimal exercise policies) of a quality comparable with state-of-the-art approaches. Further, we develop an approximation theory relevant to martingale duality approaches in general and the PO method in particular. Finally, we consider a broad class of MDPs and introduce a new tractable method for computing bounds by considering information relaxations and introducing penalties. The method delivers tight bounds by identifying the best penalty function among a parameterized class of penalty functions. We implement our method on a high-dimensional financial application, namely optimal execution, and demonstrate the practical value of the method vis-a-vis competing methods available in the literature. In addition, we provide theory to show that bounds generated by our method are provably tighter than some of the other available approaches.
Operations research, Mathematics | vvd2101 | Industrial Engineering and Operations Research, Business | Dissertations

Stochastic Models of Limit Order Markets
https://academiccommons.columbia.edu/catalog/ac:161685
Kukanov, Arseniy
http://hdl.handle.net/10022/AC:P:20511
Thu, 30 May 2013 16:40:28 +0000
During the last two decades most stock and derivatives exchanges in the world transitioned to electronic trading in limit order books, creating a need for a new set of quantitative models to describe these order-driven markets. This dissertation offers a collection of models that provide insight into the structure of modern financial markets, and can help to optimize trading decisions in practical applications. In the first part of the thesis we study the dynamics of prices, order flows and liquidity in limit order markets over short timescales. We propose a stylized order book model that predicts a particularly simple linear relation between price changes and order flow imbalance, defined as the difference between net changes in supply and demand. The slope in this linear relation, called the price impact coefficient, is inversely proportional in our model to market depth, a measure of liquidity. Our empirical results confirm both of these predictions. The linear relation between order flow imbalance and price changes holds for time intervals between 50 milliseconds and 5 minutes. The inverse relation between the price impact coefficient and market depth holds on longer timescales. These findings shed new light on intraday variations in market volatility. According to our model, volatility fluctuates due to changes in market depth or in order flow variance. Previous studies also found a positive correlation between volatility and trading volume, but in order-driven markets prices are determined by the limit order book activity, so the association between trading volume and volatility is unclear. We show how a spurious correlation between these variables can indeed emerge in our linear model due to time aggregation of high-frequency data.
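One common formalization of order flow imbalance (following Cont, Kukanov & Stoikov's work on order book events) aggregates best-quote updates as sketched below; the exact bookkeeping convention and the synthetic quote sequence are illustrative assumptions:

```python
def order_flow_imbalance(quotes):
    """quotes: list of (bid_price, bid_size, ask_price, ask_size) snapshots.
    Each update n contributes
      e_n =  1{Pb_n >= Pb_{n-1}} qb_n - 1{Pb_n <= Pb_{n-1}} qb_{n-1}
           - 1{Pa_n <= Pa_{n-1}} qa_n + 1{Pa_n >= Pa_{n-1}} qa_{n-1},
    so growth on the bid side counts as added demand and growth on the
    ask side counts as added supply."""
    ofi = 0.0
    for (pb0, qb0, pa0, qa0), (pb1, qb1, pa1, qa1) in zip(quotes, quotes[1:]):
        e = 0.0
        if pb1 >= pb0: e += qb1
        if pb1 <= pb0: e -= qb0
        if pa1 <= pa0: e -= qa1
        if pa1 >= pa0: e += qa0
        ofi += e
    return ofi

quotes = [
    (99.0, 100, 101.0, 100),
    (99.0, 150, 101.0, 100),  # bid depth grows at the same price: e = +50
    (99.0, 150, 100.0, 200),  # ask price falls with size 200:     e = -200
]
print(order_flow_imbalance(quotes))  # -150.0
```

In the stylized model described above, the price change over an interval is then approximately the price impact coefficient times this imbalance, with the coefficient inversely proportional to market depth.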
Finally, we observe short-term positive autocorrelation in order flow imbalance and discuss an application of this variable as a measure of adverse selection in limit order executions. Our results suggest that monitoring recent order flow can improve the quality of order executions in practice. In the second part of the thesis we study the problem of optimal order placement in a fragmented limit order market. To execute a trade, market participants can submit limit orders or market orders across various exchanges where a stock is traded. In practice these decisions are influenced by the sizes of order queues and by statistical properties of order flows in each limit order book, and also by the rebates that exchanges pay for limit order submissions. We present a realistic model of limit order executions and formalize the search for an optimal order placement policy as a convex optimization problem. Based on this formulation we study how various factors determine an investor's order placement decisions. In the case when a single exchange is used for order execution, we derive an explicit formula for the optimal limit and market order quantities. Our solution shows that the optimal split between market and limit orders largely depends on one's tolerance to execution risk. Market orders help to alleviate this risk because they execute with certainty. Correspondingly, we find that the optimal order allocation shifts to these more expensive orders when execution risk is of primary concern, for example when the intended trade quantity is large or when it is costly to catch up on the quantity after a limit order execution fails. We also characterize the optimal solution in the general case of simultaneous order placement on multiple exchanges, and show that it sets execution shortfall probabilities to specific threshold values computed from model parameters.
Finally, we propose a non-parametric stochastic algorithm that computes an optimal solution by resampling historical data and does not require specifying order flow distributions. A numerical implementation of this algorithm is used to study the sensitivity of an optimal solution to changes in model parameters. Our numerical results show that order placement optimization can bring a substantial reduction in trading costs, especially for small orders and in cases when order flows are relatively uncorrelated across trading venues. The order placement optimization framework developed in this thesis can also be used to quantify the costs and benefits of financial market fragmentation from the point of view of an individual investor. For instance, we find that a positive correlation between order flows, which is empirically observed in the fragmented U.S. equity market, increases the costs of trading. As the correlation increases it may become more expensive to trade in a fragmented market than in a consolidated market. In the third part of the thesis we analyze the dynamics of limit order queues at the best bid or ask of an exchange. These queues consist of orders submitted by a variety of market participants, yet existing order book models commonly assume that all orders have similar dynamics. In practice, some orders are submitted by trade execution algorithms in an attempt to buy or sell a certain quantity of assets under time constraints, and these orders are canceled if their realized waiting time exceeds a patience threshold. In contrast, high-frequency traders submit and cancel orders depending on the order book state, and their orders are not driven by patience. The interaction between these two order types within a single FIFO queue leads to bursts of order cancelations for small queues and anomalously long waiting times in large queues.
We analyze a fluid model that describes the evolution of large order queues in liquid markets, taking into account the heterogeneity between the order submission and cancelation strategies of different traders. Our results show that after a finite initial time interval, the queue reaches a specific structure where all orders from high-frequency traders stay in the queue until execution but most orders from execution algorithms exceed their patience thresholds and are canceled. This "order crowding" effect has been previously noted by participants in highly liquid stock and futures markets and was attributed to the large participation of high-frequency traders. In our model, their presence creates an additional workload, which increases queue waiting times for new orders. Our analysis of the fluid model leads to waiting time estimates that take into account the distribution of order types in a queue. These estimates are tested against a large dataset of realized limit order waiting times collected by a U.S. equity brokerage firm. The queue composition at the moment of order submission noticeably affects its waiting time, and we find that assuming a single order type for all orders in the queue leads to unrealistic results. Estimates that instead assume a mix of heterogeneous orders in the queue are closer to empirical data. Our model for a limit order queue with heterogeneous order types also appears to be interesting from a methodological point of view. It introduces a new type of behavior in a queueing system where one class of jobs has state-dependent dynamics, while others are driven by patience. Although this model is motivated by the analysis of limit order books, it may find applications in studying other service systems with state-dependent abandonments.
Operations research, Finance, Statistics | ak2870 | Industrial Engineering and Operations Research | Dissertations

Financial Portfolio Risk Management: Model Risk, Robustness and Rebalancing Error
https://academiccommons.columbia.edu/catalog/ac:161415
Xu, Xingbo
http://hdl.handle.net/10022/AC:P:20382
Mon, 20 May 2013 15:59:07 +0000
Risk management has always been a key component of portfolio management. While more and more complicated models are proposed and implemented as research advances, they all inevitably rely on imperfect assumptions and estimates. This dissertation aims to investigate the gap between complicated theoretical modelling and practice. We mainly focus on two directions: model risk and rebalancing error. In the first part of the thesis, we develop a framework for quantifying the impact of model error and for measuring and minimizing risk in a way that is robust to model error. This robust approach starts from a baseline model and finds the worst-case error in risk measurement that would be incurred through a deviation from the baseline model, given a precise constraint on the plausibility of the deviation. Using relative entropy to constrain model distance leads to an explicit characterization of worst-case model errors; this characterization lends itself to Monte Carlo simulation, allowing straightforward calculation of bounds on model error with very little computational effort beyond that required to evaluate performance under the baseline nominal model. This approach goes well beyond the effect of errors in parameter estimates to consider errors in the underlying stochastic assumptions of the model and to characterize the greatest vulnerabilities to error in a model. We apply this approach to problems of portfolio risk measurement, credit risk, delta hedging, and counterparty risk measured through credit valuation adjustment. In the second part, we apply this robust approach to a dynamic portfolio control problem. The sources of model error include the evolution of market factors and the influence of these factors on asset returns. We analyze both finite- and infinite-horizon problems in a model in which returns are driven by factors that evolve stochastically.
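The relative-entropy characterization of worst-case model error described above admits a compact Monte Carlo sketch: within a KL ball around the baseline, the worst case for an expected loss is attained by an exponential change of measure, which can be estimated by reweighting baseline samples. The Gaussian loss and the tilting parameter θ below are illustrative choices, not the dissertation's applications:

```python
import math
import random

random.seed(1)

def worst_case_mean(losses, theta):
    """Worst-case expected loss under an exponential tilt proportional to
    e^{theta * L}: for theta > 0 this is the maximizer of E_Q[L] over a
    relative-entropy ball around the baseline (larger theta = larger ball),
    estimated here by likelihood-ratio reweighting of baseline samples."""
    weights = [math.exp(theta * l) for l in losses]
    z = sum(weights)
    return sum(w * l for w, l in zip(weights, losses)) / z

losses = [random.gauss(0.0, 1.0) for _ in range(50000)]
nominal = sum(losses) / len(losses)
robust = worst_case_mean(losses, theta=0.5)
print(nominal, robust)  # the tilted mean exceeds the nominal mean for theta > 0
```

The key computational point from the text survives in the sketch: the robust bound reuses the same baseline samples, so it costs little beyond the nominal evaluation.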
The model incorporates transaction costs and leads to simple and tractable optimal robust controls for multiple assets. We illustrate the performance of the controls on historical data. Robustness does improve performance in out-of-sample tests in which the model is estimated on a rolling window of data and then applied over a subsequent time period. By acknowledging uncertainty in the estimated model, the robust rules lead to less aggressive trading and are less sensitive to sharp moves in underlying prices. In the last part, we analyze the error between a discretely rebalanced portfolio and its continuously rebalanced counterpart in the presence of jumps or mean-reversion in the underlying asset dynamics. With discrete rebalancing, the portfolio's composition is restored to a set of fixed target weights at discrete intervals; with continuous rebalancing, the target weights are maintained at all times. We examine the difference between the two portfolios as the number of discrete rebalancing dates increases. We derive the limiting variance of the relative error between the two portfolios for both the mean-reverting and jump-diffusion cases. For both cases, we derive "volatility adjustments" to improve the approximation of the discretely rebalanced portfolio by the continuously rebalanced portfolio, based on the limiting covariance between the relative rebalancing error and the level of the continuously rebalanced portfolio. These results are based on strong approximation results for jump-diffusion processes.Operations research, Finance, Mathematicsxx2126Industrial Engineering and Operations Research, BusinessDissertationsTournaments With Forbidden Substructures and the Erdos-Hajnal Conjecture
https://academiccommons.columbia.edu/catalog/ac:160247
Choromanski, Krzysztofhttp://hdl.handle.net/10022/AC:P:20024Mon, 29 Apr 2013 15:29:42 +0000A celebrated conjecture of Erdos and Hajnal states that for every undirected graph H there exists ɛ(H)>0 such that every undirected graph on n vertices that does not contain H as an induced subgraph contains a clique or a stable set of size at least n^{ɛ(H)}. In 2001 Alon, Pach and Solymosi proved that the conjecture has an equivalent directed version, where undirected graphs are replaced by tournaments and cliques and stable sets by transitive subtournaments. This dissertation addresses the directed version of the conjecture and some problems in the directed setting that are closely related to it. For a long time the conjecture was known to be true only for very specific small graphs and graphs obtained from them by the so-called substitution procedure proposed by Alon, Pach and Solymosi. All the graphs that are an outcome of this procedure have nontrivial homogeneous sets. Tournaments without nontrivial homogeneous sets are called prime. They play a central role here since if the conjecture is not true then the smallest counterexample is prime. We remark that for a long time the conjecture was known to be true only for some prime graphs of order at most 5. There exist 5-vertex graphs for which the conjecture is still open; however, one of the corollaries of the results presented in the thesis states that all tournaments on at most 5 vertices satisfy the conjecture. In the first part of the thesis we will establish the conjecture for new infinite classes of tournaments containing infinitely many prime tournaments. We will first prove the conjecture for so-called constellations. It turns out that almost all tournaments on at most 5 vertices are either constellations or are obtained from constellations by substitutions. The only 5-vertex tournament for which this is not the case is a tournament in which every vertex has outdegree 2. We call this tournament C_{5}.
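As a concrete illustration of the objects involved, a brute-force check finds the largest transitive subtournament of C_{5} by testing vertex subsets for directed 3-cycles. The encoding of C_{5} below (vertex i beats i+1 and i+2 mod 5) is one standard presentation, assumed here for illustration:

```python
from itertools import combinations, permutations

# C_5: the 5-vertex tournament in which every vertex has outdegree 2,
# encoded (for illustration) as: vertex i beats i+1 and i+2 (mod 5).
beats = {(i, (i + k) % 5) for i in range(5) for k in (1, 2)}

def is_transitive(S):
    # A subtournament is transitive iff it contains no directed 3-cycle.
    return not any((a, b) in beats and (b, c) in beats and (c, a) in beats
                   for a, b, c in permutations(S, 3))

def largest_transitive_subtournament(vertices):
    # Search subsets from largest to smallest; the first transitive one wins.
    for size in range(len(vertices), 0, -1):
        if any(is_transitive(S) for S in combinations(vertices, size)):
            return size
    return 0

print(largest_transitive_subtournament(range(5)))  # prints 3
```

For example, {0, 1, 2} is transitive (0 beats 1 and 2, 1 beats 2), while every 4-vertex subset of C_{5} contains a directed 3-cycle.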
Another result of this thesis is the proof of the conjecture for this tournament. We also present here the structural characterization of the tournaments satisfying the conjecture in an almost linear sense. In the second part of the thesis we focus on the upper bounds on the coefficients ɛ(H) for several classes of tournaments. In particular we analyze how they depend on the structure of the tournament. We prove that for almost all h-vertex tournaments ɛ(H) ≤ (4/h)(1+o(1)). As a byproduct of the methods we use here, we get upper bounds for ɛ(H) of undirected graphs. We also present upper bounds on ɛ(H) of tournaments with small nontrivial homogeneous sets, in particular prime tournaments. Finally we analyze tournaments with large ɛ(H) and explore some of their structural properties.Mathematicskmc2178Industrial Engineering and Operations ResearchDissertationsOptimization Algorithms for Structured Machine Learning and Image Processing Problems
https://academiccommons.columbia.edu/catalog/ac:158764
Qin, Zhiweihttp://hdl.handle.net/10022/AC:P:19648Fri, 05 Apr 2013 10:47:07 +0000Optimization algorithms are often the solution engine for machine learning and image processing techniques, but they can also become the bottleneck in applying these techniques if they are unable to cope with the size of the data. With the rapid advancement of modern technology, data of unprecedented size has become more and more available, and there is an increasing demand to process and interpret the data. Traditional optimization methods, such as the interior-point method, can solve a wide array of problems arising from the machine learning domain, but it is also this generality that often prevents them from dealing with large data efficiently. Hence, specialized algorithms that can readily take advantage of the problem structure are highly desirable and of immediate practical interest. This thesis focuses on developing efficient optimization algorithms for machine learning and image processing problems of diverse types, including supervised learning (e.g., the group lasso), unsupervised learning (e.g., robust tensor decompositions), and total-variation image denoising. These algorithms are of wide interest to the optimization, machine learning, and image processing communities. Specifically, (i) we present two algorithms to solve the Group Lasso problem. First, we propose a general version of the Block Coordinate Descent (BCD) algorithm for the Group Lasso that employs an efficient approach for optimizing each subproblem exactly. We show that it exhibits excellent performance when the groups are of moderate size. For groups of large size, we propose an extension of the proximal gradient algorithm based on variable step-lengths that can be viewed as a simplified version of BCD. By combining the two approaches we obtain an implementation that is very competitive and often outperforms other state-of-the-art approaches for this problem. 
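The proximal gradient approach for the Group Lasso can be sketched in a few lines: each proximal subproblem is solved exactly by block soft-thresholding the groups. The toy instance below (two groups of three coefficients, one truly active) is an assumed setup for illustration, not data from the thesis:

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy Group Lasso instance: minimize 0.5*||Ax - b||^2 + lam * sum_g ||x_g||_2.
A = rng.standard_normal((50, 6))
x_true = np.array([1.5, -2.0, 1.0, 0.0, 0.0, 0.0])   # only group 0 is active
b = A @ x_true + 0.01 * rng.standard_normal(50)
groups = [slice(0, 3), slice(3, 6)]
lam = 1.0                                  # regularization weight

step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1 / Lipschitz constant of the gradient
x = np.zeros(6)
for _ in range(500):
    z = x - step * (A.T @ (A @ x - b))     # gradient step on the smooth part
    for g in groups:                       # proximal step: block soft-thresholding
        n = np.linalg.norm(z[g])
        z[g] = max(0.0, 1.0 - lam * step / n) * z[g] if n > 0 else z[g]
    x = z

print(np.round(x, 2))   # the inactive group is driven to (essentially) zero
```

The key property on display is that the group penalty zeroes out entire blocks at once, unlike the plain lasso, which zeroes individual coordinates.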
We show how these methods fit into the globally convergent general block coordinate gradient descent framework in (Tseng and Yun, 2009). We also show that the proposed approach is more efficient in practice than the one implemented in (Tseng and Yun, 2009). In addition, we apply our algorithms to the Multiple Measurement Vector (MMV) recovery problem, which can be viewed as a special case of the Group Lasso problem, and compare their performance to other methods in this particular instance; (ii) we further investigate sparse linear models with two commonly adopted general sparsity-inducing regularization terms, the overlapping Group Lasso penalty l1/l2-norm and the l1/l∞-norm. We propose a unified framework based on the augmented Lagrangian method, under which problems with both types of regularization and their variants can be efficiently solved. As one of the core building-blocks of this framework, we develop new algorithms using a partial-linearization/splitting technique and prove that the accelerated versions of these algorithms require O(1/√ε) iterations to obtain an ε-optimal solution. We compare the performance of these algorithms against that of the alternating direction augmented Lagrangian and FISTA methods on a collection of data sets and apply them to two real-world problems to compare the relative merits of the two norms; (iii) we study the problem of robust low-rank tensor recovery in a convex optimization framework, drawing upon recent advances in robust Principal Component Analysis and tensor completion. We propose tailored optimization algorithms with global convergence guarantees for solving both the constrained and the Lagrangian formulations of the problem. These algorithms are based on the highly efficient alternating direction augmented Lagrangian and accelerated proximal gradient methods. We also propose a nonconvex model that can often improve the recovery results from the convex models.
We investigate the empirical recoverability properties of the convex and nonconvex formulations and compare the computational performance of the algorithms on simulated data. We demonstrate through a number of real applications the practical effectiveness of this convex optimization framework for robust low-rank tensor recovery; (iv) we consider the image denoising problem using total variation regularization. This problem is computationally challenging to solve due to the non-differentiability and non-linearity of the regularization term. We propose a new alternating direction augmented Lagrangian method, involving subproblems that can be solved efficiently and exactly. The global convergence of the new algorithm is established for the anisotropic total variation model. We compare our method with the split Bregman method and demonstrate the superiority of our method in computational performance on a set of standard test images.Operations research, Computer science, Statisticszq2107Industrial Engineering and Operations ResearchDissertationsModels for managing surge capacity in the face of an influenza epidemic
https://academiccommons.columbia.edu/catalog/ac:157364
Zenteno, Anahttp://hdl.handle.net/10022/AC:P:19200Fri, 01 Mar 2013 10:01:02 +0000Influenza pandemics pose an imminent risk to society. Yearly outbreaks already represent heavy social and economic burdens. A pandemic could severely affect infrastructure and commerce through high absenteeism, supply chain disruptions, and other effects over an extended and uncertain period of time. Governmental institutions such as the Centers for Disease Control and Prevention (CDC) and the U.S. Department of Health and Human Services (HHS) have issued guidelines on how to prepare for a potential pandemic; however, much work still needs to be done in order to meet them. From a planner's perspective, the complexity of outlining plans to manage future resources during an epidemic stems from the uncertainty of how severe the epidemic will be. Uncertainty in parameters such as the contagion rate (how fast the disease spreads) makes the course and severity of the epidemic unforeseeable, exposing any planning strategy to a potentially wasteful allocation of resources. Our approach involves the use of additional resources in response to a robust model of the evolution of the epidemic so as to hedge against the uncertainty in its evolution and intensity. Under existing plans, large cities would make use of networks of volunteers, students, and recent retirees, or borrow staff from neighboring communities. Taking into account that such additional resources are likely to be significantly constrained (e.g. in quantity and duration), we seek to produce robust emergency staff commitment levels that work well under different trajectories and degrees of severity of the pandemic. Our methodology combines Robust Optimization techniques with Epidemiology (SEIR models) and system performance modeling. We describe cutting-plane algorithms analogous to generalized Benders' decomposition that prove fast and numerically accurate.
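The planner's difficulty described above, that uncertainty in the contagion rate changes both the height and the timing of the epidemic peak, can be seen in a minimal SEIR sketch. Parameter values here are illustrative, not those calibrated in the thesis:

```python
import numpy as np

def seir_infectious_path(beta, sigma=1/3, gamma=1/5, days=200, i0=1e-4):
    # Discrete-time SEIR dynamics on population fractions:
    # beta = contagion rate, 1/sigma = mean latency, 1/gamma = mean infectious period.
    s, e, i, r = 1.0 - i0, 0.0, i0, 0.0
    path = []
    for _ in range(days):
        new_e = beta * s * i        # susceptibles exposed
        new_i = sigma * e           # exposed become infectious
        new_r = gamma * i           # infectious recover
        s, e, i, r = s - new_e, e + new_e - new_i, i + new_i - new_r, r + new_r
        path.append(i)
    return np.array(path)

# A modest change in the contagion rate moves both the height and the timing
# of the peak -- the uncertainty a robust staffing plan must hedge against.
for beta in (0.4, 0.5, 0.6):
    p = seir_infectious_path(beta)
    print(f"beta={beta}: peak infectious fraction={p.max():.3f} on day {p.argmax()}")
```

Higher contagion rates produce higher and earlier peaks, which is why a single staffing plan tuned to one trajectory can perform badly on the others.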
Our results yield insights on the structure of optimal robust strategies and on practical rules-of-thumb that can be deployed during the epidemic. To assess the efficacy of our solutions, we study their performance under different scenarios and compare them against other seemingly good strategies through numerical experiments. This work would be particularly valuable for institutions that provide public services, whose operational continuity is critical for a community, especially in view of an event of this caliber. As far as we know, this is the first time this problem has been addressed in a rigorous way; in particular, we are not aware of any other robust optimization applications in epidemiology.Operations research, Public healthacz2103Industrial Engineering and Operations ResearchDissertationsRare Events in Stochastic Systems: Modeling, Simulation Design and Algorithm Analysis
https://academiccommons.columbia.edu/catalog/ac:156733
Shi, Yixihttp://hdl.handle.net/10022/AC:P:19034Wed, 13 Feb 2013 12:32:00 +0000This dissertation explores a few topics in the study of rare events in stochastic systems, with a particular emphasis on the simulation aspect. This line of research has been receiving a substantial amount of interest in recent years, mainly motivated by scientific and industrial applications in which system performance is frequently measured in terms of events with very small probabilities. The topics mainly break down into the following themes: Algorithm Analysis: Chapters 2, 3, 4 and 5. Simulation Design: Chapters 3, 4 and 5. Modeling: Chapter 5. The titles of the main chapters are detailed as follows: Chapter 2: Analysis of a Splitting Estimator for Rare Event Probabilities in Jackson Networks; Chapter 3: Splitting for Heavy-tailed Systems: An Exploration with Two Algorithms; Chapter 4: State Dependent Importance Sampling with Cross Entropy for Heavy-tailed Systems; Chapter 5: Stochastic Insurance-Reinsurance Networks: Modeling, Analysis and Efficient Monte CarloEngineering, Mathematicsys2347Industrial Engineering and Operations ResearchDissertationsChance Constrained Optimal Power Flow: Risk-Aware Network Control under Uncertainty
https://academiccommons.columbia.edu/catalog/ac:156182
Bienstock, Daniel; Chertkov, Michael; Harnett, Seanhttp://hdl.handle.net/10022/AC:P:18933Tue, 05 Feb 2013 10:34:34 +0000When uncontrollable resources fluctuate, Optimum Power Flow (OPF), routinely used by the electric power industry to re-dispatch hourly controllable generation (coal, gas and hydro plants) over control areas of transmission networks, can result in grid instability, and, potentially, cascading outages. This risk arises because OPF dispatch is computed without awareness of major uncertainty, in particular fluctuations in renewable output. As a result, grid operation under OPF with renewable variability can lead to frequent conditions where power line flow ratings are significantly exceeded. Such a condition, which is borne out by simulations of real grids, would likely result in automatic line tripping to protect lines from thermal stress, a risky and undesirable outcome which compromises stability. Smart grid goals include a commitment to large penetration of highly fluctuating renewables, thus calling for a reconsideration of current practices, in particular the use of standard OPF. Our Chance Constrained (CC) OPF corrects the problem and mitigates dangerous renewable fluctuations with minimal changes in the current operational procedure. Assuming availability of a reliable wind forecast parameterizing the distribution function of the uncertain generation, our CC-OPF satisfies all the constraints with high probability while simultaneously minimizing the cost of economic re-dispatch. CC-OPF allows efficient implementation, e.g. solving a typical instance over the 2746-bus Polish network in 20 seconds on a standard laptop.Industrial engineering, Operations researchdb17, srh2144Industrial Engineering and Operations Research, Applied Physics and Applied MathematicsArticlesModels for managing the impact of an influenza epidemic
https://academiccommons.columbia.edu/catalog/ac:153905
Bienstock, Daniel; Zenteno Langle, Ana Ceciliahttp://hdl.handle.net/10022/AC:P:15119Mon, 29 Oct 2012 09:30:05 +0000We present methodologies for managing the impact of workforce absenteeism on the operational continuity of public services during an influenza epidemic. From a planner’s perspective, it is of paramount importance to design contingency plans to administer resources in the face of such an event; however, there is significant complexity underlying this task, stemming from uncertainty about the likely severity and evolution of the epidemic. Our approach involves the procurement of additional resources in response to a robust model of the evolution of the epidemic. We develop insights on the structure of optimal robust strategies and on practical rules-of-thumb that can be applied should an epidemic take place. We present numerical examples that illustrate the effectiveness of our results.Public healthdb17, acz2103Industrial Engineering and Operations Research, Applied Physics and Applied MathematicsArticlesChance Constrained Optimal Power Flow: Risk-Aware Network Control under Uncertainty
https://academiccommons.columbia.edu/catalog/ac:153902
Bienstock, Daniel; Chertkov, Michael; Harnett, Seanhttp://hdl.handle.net/10022/AC:P:15118Mon, 29 Oct 2012 09:19:08 +0000When uncontrollable resources fluctuate, Optimum Power Flow (OPF), routinely used by the electric power industry to re-dispatch hourly controllable generation (coal, gas and hydro plants) over control areas of transmission networks, can result in grid instability, and, potentially, cascading outages. This risk arises because OPF dispatch is computed without awareness of major uncertainty, in particular fluctuations in renewable output. As a result, grid operation under OPF with renewable variability can lead to frequent conditions where power line flow ratings are significantly exceeded. Such a condition, which is borne out by simulations of real grids, would likely result in automatic line tripping to protect lines from thermal stress, a risky and undesirable outcome which compromises stability. Smart grid goals include a commitment to large penetration of highly fluctuating renewables, thus calling for a reconsideration of current practices, in particular the use of standard OPF. Our Chance Constrained (CC) OPF corrects the problem and mitigates dangerous renewable fluctuations with minimal changes in the current operational procedure. Assuming availability of a reliable wind forecast parameterizing the distribution function of the uncertain generation, our CC-OPF satisfies all the constraints with high probability while simultaneously minimizing the cost of economic re-dispatch. CC-OPF allows efficient implementation, e.g. solving a typical instance over the 2746-bus Polish network in 20 seconds on a standard laptop.Industrial engineering, Operations researchdb17Industrial Engineering and Operations Research, Applied Physics and Applied MathematicsArticlesContingent Capital: Valuation and Risk Implications Under Alternative Conversion Mechanisms
https://academiccommons.columbia.edu/catalog/ac:152933
Nouri, Behzadhttp://hdl.handle.net/10022/AC:P:14800Fri, 28 Sep 2012 11:11:35 +0000Several proposals for enhancing the stability of the financial system include requirements that banks hold some form of contingent capital, meaning equity that becomes available to a bank in the event of a crisis or financial distress. Specific proposals vary in their choice of conversion trigger and conversion mechanism, and have inspired extensive scrutiny regarding their effectiveness in avoiding costly public rescues and bail-outs and their potential adverse effects on market dynamics. While allowing banks to leverage and gain a higher return on their equity capital during upturns in financial markets, contingent capital provides an automatic mechanism to reduce debt and raise the loss-bearing capital cushion during downturns and market crashes, thereby making it possible to achieve stability and robustness in the financial sector without reducing the efficiency and competitiveness of the banking system through higher regulatory capital requirements. However, many researchers have raised concerns regarding unintended consequences and implications of such instruments for market dynamics. Death spirals in the stock price near conversion, the possibility of profitable stock or book manipulations by either the investors or the issuer, the marketability of and demand for such hybrid instruments, contagion and systemic risks arising from the hedging strategies of the investors, and higher risk-taking incentives for issuers are among such concerns. Though substantial, many of these issues can be addressed through a prudent design of the trigger and conversion mechanism. In the following chapters, we develop multiple models for pricing and analysis of contingent capital under different conversion mechanisms. In Chapter 2 we analyze the case of contingent capital with a capital-ratio trigger and partial and on-going conversion.
The capital ratio we use is based on accounting or book value to approximate the regulatory ratios that determine capital requirements for banks. The conversion process is partial and on-going in the sense that each time a bank's capital ratio reaches the minimum threshold, just enough debt is converted to equity to meet the capital requirement, so long as the contingent capital has not been depleted. In Chapter 3 we simplify the design to all-at-once conversion; however, we perform the analysis through a much richer model which incorporates tail risk in terms of jumps, an endogenous optimal default policy, and debt rollover. We also investigate the case of bail-in debt, where at default the original shareholders are wiped out and the converted investors take control of the firm. In the case of contingent convertibles the conversion trigger is assumed to be a contractual term specified in terms of the market value of assets. For bail-in debt the trigger is the point at which the original shareholders optimally default. We study incentives of shareholders to change the capital structure and how CoCos affect risk incentives. Several researchers have advocated use of a market-based trigger which is forward looking, continuously updated and readily available, while some others have raised concerns regarding unintended consequences of a market-based trigger. In Chapter 4 we investigate one of these issues, namely the existence and uniqueness of equilibrium when the conversion trigger is based on the stock price.Finance, Operations researchbn2164Industrial Engineering and Operations Research, BusinessDissertationsThree Essays on Dynamic Pricing and Resource Allocation
https://academiccommons.columbia.edu/catalog/ac:151966
Nur, Cavdarogluhttp://hdl.handle.net/10022/AC:P:14492Thu, 23 Aug 2012 11:27:07 +0000This thesis consists of three essays that focus on different aspects of pricing and resource allocation. We use techniques from supply chain and revenue management, scenario-based robust optimization and game theory to study the behavior of firms in different competitive and non-competitive settings. We develop dynamic programming models that account for pricing and resource allocation decisions of firms in such settings. In Chapter 2, we focus on the resource allocation problem of a service firm, particularly a health-care facility. We formulate a general model that is applicable to various resource allocation problems of a hospital. To this end, we consider a system with multiple customer classes that display different reactions to delays in service. By adopting a dynamic-programming approach, we show that the optimal policy is not simple but exhibits desirable monotonicity properties. Furthermore, we propose a simple threshold heuristic policy that performs well in our experiments. In Chapter 3, we study a dynamic pricing problem for a monopolist seller that operates in a setting where buyers have market power, and where each potential sale takes the form of a bilateral negotiation. We review the dynamic programming formulation of the negotiation problem, and propose a simple and tractable deterministic "fluid" analogue for this problem. The main emphasis of the chapter is in expanding the formulation to the dynamic setting where both the buyer and seller have limited prior information on their counterparty's valuation and negotiation skill. In Chapter 4, we consider the revenue maximization problem of a seller who operates in a market where there are two types of customers, namely "investors" and "regular buyers".
In a two-period setting, we model and solve the pricing game between the seller and the investors in the latter period, and based on the solution of this game, we analyze the revenue maximization problem of the seller in the former period. Moreover, we study the effects on the total system profits when the seller and the investors cooperate through a contracting mechanism rather than competing with each other; and explore the contracting opportunities that lead to higher profits for both agents.Operations researchIndustrial Engineering and Operations ResearchDissertationsForbidden Substructures in Graphs and Trigraphs, and Related Coloring Problems
https://academiccommons.columbia.edu/catalog/ac:146465
Penev, Irenahttp://hdl.handle.net/10022/AC:P:13082Tue, 01 May 2012 16:34:11 +0000Given a graph G, χ(G) denotes the chromatic number of G, and ω(G) denotes the clique number of G (i.e. the maximum number of pairwise adjacent vertices in G). A graph G is perfect provided that for every induced subgraph H of G, χ(H) = ω(H). This thesis addresses several problems from the theory of perfect graphs and generalizations of perfect graphs. The bull is a five-vertex graph consisting of a triangle and two vertex-disjoint pendant edges; a graph is said to be bull-free provided that no induced subgraph of it is a bull. The first result of this thesis is a structure theorem for bull-free perfect graphs. This is joint work with Chudnovsky, and it first appeared in [12]. The second result of this thesis is a decomposition theorem for bull-free perfect graphs, which we then use to give a polynomial-time combinatorial coloring algorithm for bull-free perfect graphs. We remark that de Figueiredo and Maffray [33] previously solved this same problem; however, the algorithm presented in this thesis is faster than the algorithm from [33]. We note that a decomposition theorem very similar to (but slightly weaker than) the one from this thesis was originally proven in [52]; however, the proof in this thesis is significantly different from the one in [52]. The algorithm from this thesis is very similar to the one from [52]. A class G of graphs is said to be χ-bounded provided that there exists a function f such that for all G in G, and all induced subgraphs H of G, we have that χ(H) ≤ f(ω(H)). χ-bounded classes were introduced by Gyarfas [41] as a generalization of the class of perfect graphs (clearly, the class of perfect graphs is χ-bounded by the identity function). Given a graph H, we denote by Forb*(H) the class of all graphs that do not contain any subdivision of H as an induced subgraph.
In [57], Scott proved that Forb*(T) is χ-bounded for every tree T, and he conjectured that Forb*(H) is χ-bounded for every graph H. Recently, a group of authors constructed a counterexample to Scott's conjecture [51]. This raises the following question: for which graphs H is Scott's conjecture true? In this thesis, we present the proof of Scott's conjecture for the cases when H is the paw (i.e. a four-vertex graph consisting of a triangle and a pendant edge), the bull, and a necklace (i.e. a graph obtained from a path by choosing a matching such that no edge of the matching is incident with an endpoint of the path, and for each edge of the matching, adding a vertex adjacent to the ends of this edge). This is joint work with Chudnovsky, Scott, and Trotignon, and it originally appeared in [13]. Finally, we consider several operations (namely, "substitution," "gluing along a clique," and "gluing along a bounded number of vertices"), and we show that the closure of a χ-bounded class under any one of them, as well as under certain combinations of these three operations (in particular, the combination of substitution and gluing along a clique, as well as the combination of gluing along a clique and gluing along a bounded number of vertices) is again χ-bounded. This is joint work with Chudnovsky, Scott, and Trotignon, and it originally appeared in [14].Mathematicsip2158Mathematics, Industrial Engineering and Operations ResearchDissertationsEssays on Inventory Management and Object Allocation
https://academiccommons.columbia.edu/catalog/ac:144769
Lee, Thiam Huihttp://hdl.handle.net/10022/AC:P:12623Fri, 17 Feb 2012 15:52:21 +0000This dissertation consists of three essays. In the first, we establish a framework for proving equivalences between mechanisms that allocate indivisible objects to agents. In the second, we study a newsvendor model where the inventory manager has access to two experts that provide advice, and examine how and when an optimal algorithm can be efficiently computed. In the third, we study the classical single-resource capacity allocation problem and investigate the relationship between data availability and performance guarantees. We first study mechanisms that solve the problem of allocating indivisible objects to agents. We consider the class of mechanisms that utilize the Top Trading Cycles (TTC) algorithm (these may differ based on how they prioritize agents), and show a general approach to proving equivalences between mechanisms from this class. This approach is used to show alternative and simpler proofs for two recent equivalence results for mechanisms with linear priority structures. We also use the same approach to show that these equivalence results can be generalized to mechanisms where the agent priority structure is described by a tree. Second, we study the newsvendor model where the manager has recourse to advice, or decision recommendations, from two experts, and where the objective is to minimize worst-case regret from not following the advice of the better of the two experts. We show the model can be reduced to the classic machine-learning problem of predicting binary sequences but with an asymmetric cost function, allowing us to obtain an optimal algorithm by modifying a well-known existing one. However, the algorithm we modify, and consequently the optimal algorithm we describe, is not known to be efficiently computable, because it requires evaluations of a function v which is the objective value of recursively defined optimization problems.
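The expert-advice setup and the asymmetric newsvendor cost can be conveyed with a generic exponentially weighted forecaster on an assumed toy instance. This is not the thesis's optimal algorithm, and the demand distribution, cost parameters, and fixed expert recommendations are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
b_under, h_over = 3.0, 1.0   # underage and overage costs (illustrative, asymmetric)

def nv_cost(q, d):
    # Newsvendor cost: lost sales are costlier than leftover inventory here.
    return b_under * max(d - q, 0.0) + h_over * max(q - d, 0.0)

demand = rng.poisson(20, size=200).astype(float)
experts = [np.full(200, 18.0), np.full(200, 24.0)]  # two fixed recommendations (assumed)

eta = 0.05                    # learning rate of the exponentially weighted forecaster
w = np.ones(2)
alg_total, exp_totals = 0.0, np.zeros(2)
for t, d in enumerate(demand):
    q = sum(wi * ex[t] for wi, ex in zip(w, experts)) / w.sum()  # weighted order
    alg_total += nv_cost(q, d)
    step_costs = np.array([nv_cost(ex[t], d) for ex in experts])
    exp_totals += step_costs
    w *= np.exp(-eta * step_costs)        # downweight the costlier expert

regret = alg_total - exp_totals.min()
print(f"algorithm={alg_total:.1f}  best expert={exp_totals.min():.1f}  regret={regret:.1f}")
```

Because the newsvendor cost is convex in the order quantity, the cost of the weighted order never exceeds the weighted average of the experts' costs, which is what keeps the regret against the better expert small.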
We analyze v and show that when the two cost parameters of the newsvendor model are small multiples of a common factor, its evaluation is computationally efficient. We also provide a novel and direct asymptotic analysis of v that differs from previous approaches. Our asymptotic analysis gives us insight into the transient structure of v as its parameters scale, enabling us to formulate a heuristic for evaluating v generally. This, in turn, defines a heuristic for the optimal algorithm whose decisions we find in a numerical study to be close to optimal. In our third essay, we study the classical single-resource capacity allocation problem. In particular, we analyze the relationship between data availability (in the form of demand samples) and performance guarantees for solutions derived from that data. This is done by describing a class of solutions called epsilon-backwards accurate policies and determining a suboptimality gap for this class of solutions. The suboptimality gap we find is in terms of epsilon and is also distribution-free. We then relate solutions generated by a Monte Carlo algorithm and epsilon-backwards accurate policies, showing a lower bound on the quantity of data necessary to ensure that the solution generated by the algorithm is epsilon-backwards accurate with a high probability. Combining the two results then allows us to give a lower bound on the data needed to generate an α-approximation with a given confidence probability 1-delta. We find that this lower bound is polynomial in the number of fares, M, and 1/α.Operations researchthl2102Industrial Engineering and Operations ResearchDissertationsMultiproduct Pricing Management and Design of New Service Products
https://academiccommons.columbia.edu/catalog/ac:144706
Wang, Ruxianhttp://hdl.handle.net/10022/AC:P:12603Fri, 17 Feb 2012 12:45:47 +0000In this thesis, we study price optimization and competition of multiple differentiated substitutable products under the general Nested Logit model and also consider the design and pricing of new service products, e.g., flexible warranty and refundable warranty, under customers' strategic claim behavior. Chapter 2 considers firms that sell multiple differentiated substitutable products and customers whose purchase behavior follows the Nested Logit model, of which the Multinomial Logit model is a special case. In the Nested Logit model, customers make product selection decisions sequentially: they first select a class or a nest of products and subsequently choose a product within the selected class. We consider the general Nested Logit model with product-differentiated price coefficients and general nest-heterogeneous degrees. We show that the adjusted markup, which is defined as price minus cost minus the reciprocal of the price coefficient, is constant across all the products in each nest. When optimizing multiple nests of products, the adjusted nested markup is also constant within a nest. By using this result, the multi-product optimization problem can be reduced to a single-dimensional problem in a bounded interval, which is easy to solve. We also use this result to simplify the oligopolistic price competition and characterize the Nash equilibrium. Furthermore, we investigate its application to dynamic pricing and revenue management. In Chapter 3, we investigate the flexible monthly warranty, which offers flexibility to customers and allows them to cancel it at any time without penalty. Frequent technological innovations and price declines severely affect sales of extended warranties as product replacement upon failure becomes an increasingly attractive alternative. To increase sales and profitability, we propose offering flexible-duration extended warranties. 
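In the Multinomial Logit special case with a common price coefficient b, the constant adjusted markup p_i - c_i - 1/b reduces the multiproduct pricing problem to a one-dimensional search over a single scalar theta, as described above. A numerical sketch with illustrative parameters (not taken from the thesis):

```python
import math

def mnl_profit(prices, a, c, b):
    """Expected profit under a Multinomial Logit model: product i has
    utility a_i - b * p_i and the outside option has weight 1."""
    w = [math.exp(ai - b * p) for ai, p in zip(a, prices)]
    denom = 1.0 + sum(w)
    return sum((p - ci) * wi / denom for p, ci, wi in zip(prices, c, w))

def optimize_by_markup(a, c, b, grid=20000, hi=10.0):
    """One-dimensional grid search over the common adjusted markup
    theta = p_i - c_i - 1/b, which is constant across products at the optimum."""
    best_profit, best_theta = -1.0, 0.0
    for k in range(grid + 1):
        theta = hi * k / grid
        prices = [ci + 1.0 / b + theta for ci in c]
        profit = mnl_profit(prices, a, c, b)
        if profit > best_profit:
            best_profit, best_theta = profit, theta
    return best_profit, best_theta
```

Because the optimum lies in the one-parameter constant-markup family, any price vector that breaks the constant-markup structure earns strictly less profit.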
These warranties can appeal to customers who are uncertain about how long they will keep the product as well as to customers who are uncertain about the product's reliability. Flexibility may be added to existing services in the form of monthly billing with month-by-month commitments, or by making existing warranties easier to cancel, with pro-rated refunds. This thesis studies flexible warranties from the perspectives of both the customer and the provider. We present a model of the customer's optimal coverage decisions under the objective of minimizing expected support costs over a random planning horizon. We show that under some mild conditions the customer's optimal coverage policy has a threshold structure. We also show through an analytical study and through numerical examples how flexible warranties can result in higher profits and higher attach rates. Chapter 4 examines the design and pricing of residual value warranties, which refund customers at the end of the warranty period based on their claim history. Traditional extended warranties for IT products do not differentiate customers according to their usage rates or operating environment. These warranties are priced to cover the costs of high-usage customers who tend to experience more failures and are therefore more costly to support. This makes traditional warranties economically unattractive to low-usage customers. In this chapter, we introduce, design and price residual value warranties. These warranties refund a part of the upfront price to customers who have zero or few claims according to a pre-determined refund schedule. By design, the net cost of these warranties is lower for light users than for heavy users. As a result, a residual value warranty can enable the provider to price-discriminate based on usage rates or operating conditions without the need to monitor individual customers' usage. 
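The net-cost asymmetry between light and heavy users can be illustrated with Poisson claim counts and a hypothetical refund schedule; both are assumptions for illustration, not the thesis's model.

```python
import math

def expected_net_cost(price, refund_by_claims, claim_rate):
    """Expected net cost of a residual value warranty: the upfront price minus
    the expected refund, with claim counts modeled as Poisson(claim_rate).
    refund_by_claims maps a claim count to its refund; other counts get 0."""
    expected_refund = sum(
        r * math.exp(-claim_rate) * claim_rate ** k / math.factorial(k)
        for k, r in refund_by_claims.items())
    return price - expected_refund

# Light users (fewer expected claims) face a lower net cost by design.
schedule = {0: 50.0, 1: 20.0}   # hypothetical refund schedule
light = expected_net_cost(100.0, schedule, claim_rate=0.5)
heavy = expected_net_cost(100.0, schedule, claim_rate=2.0)
```

With these illustrative numbers the light user's expected net cost is roughly 64 versus roughly 88 for the heavy user, the self-selection effect the abstract describes.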
Theoretical results and numerical experiments demonstrate how residual value warranties can appeal to a broader range of customers and significantly increase the provider's profits.Operations research, Industrial engineeringrw2267Industrial Engineering and Operations ResearchDissertationsA Simulation Model to Analyze the Impact of Golf Skills and a Scenario-based Approach to Options Portfolio Optimization
https://academiccommons.columbia.edu/catalog/ac:143076
Ko, Soonminhttp://hdl.handle.net/10022/AC:P:12166Tue, 10 Jan 2012 14:41:51 +0000A simulation model of the game of golf is developed to analyze the impact of various skills (e.g., driving distance, directional accuracy, putting skill, and others) on golf scores. The golf course model includes realistic features of a golf course including rough, sand, water, and trees. Golfer shot patterns are modeled with t distributions and mixtures of t and normal distributions since normal distributions do not provide good fits to the data. The model is calibrated to extensive data for amateur and professional golfers. The golf simulation is used to assess the impact of distance and direction on scores, to determine what factors separate pros from amateurs, and to determine the impact of course length on scores. In the second part of the thesis, we use a scenario-based approach to solve a portfolio optimization problem with options. The solution provides the optimal payoff profile given an investor's view of the future, his utility function or risk appetite, and the market prices of options. The scenario-based approach has several advantages over the traditional covariance matrix method, including additional flexibility in the choice of constraints and objective function.Engineering, Operations researchsk2822Industrial Engineering and Operations Research, BusinessDissertationsRisk Premia and Optimal Liquidation of Defaultable Securities
https://academiccommons.columbia.edu/catalog/ac:139526
Leung, Siu Tang; Liu, Penghttp://hdl.handle.net/10022/AC:P:11331Mon, 03 Oct 2011 10:19:01 +0000This paper studies the optimal timing to liquidate defaultable securities in a general intensity-based credit risk model under stochastic interest rates. We incorporate the potential price discrepancy between the market and investors, which is characterized by risk-neutral valuation under different default risk premia specifications. To quantify the value of optimally timing the sale, we introduce the delayed liquidation premium, which is closely related to the stochastic bracket between the market price and a pricing kernel. We analyze the optimal liquidation policy for various credit derivatives. Our model serves as the building block for the sequential buying and selling problem. We also discuss the extensions to a jump-diffusion default intensity model as well as a defaultable equity model.Finance, Economic theorytl2497Industrial Engineering and Operations ResearchArticlesAdding Trust to P2P Distribution of Paid Content
https://academiccommons.columbia.edu/catalog/ac:138893
Sherman, Alex; Stavrou, Angelos; Nieh, Jason; Keromytis, Angelos D.; Stein, Clifford S.http://hdl.handle.net/10022/AC:P:11195Mon, 19 Sep 2011 12:56:04 +0000While peer-to-peer (P2P) file-sharing is a powerful and cost-effective content distribution model, most paid-for digital-content providers (CPs) use direct download to deliver their content. CPs are hesitant to rely on a P2P distribution model because it introduces a number of security concerns including content pollution by malicious peers, and lack of enforcement of authorized downloads. Furthermore, because users communicate directly with one another, the users can easily form illegal file-sharing clusters to exchange copyrighted content. Such exchange could hurt the content providers' profits. We present a P2P system TP2P, where we introduce a notion of trusted auditors (TAs). TAs are P2P peers that police the system by covertly monitoring and taking measures against misbehaving peers. This policing allows TP2P to enable a stronger security model making P2P a viable alternative for the distribution of paid digital content. Through analysis and simulation, we show the effectiveness of even a small number of TAs at policing the system. In a system with as many as 60% of misbehaving users, even a small number of TAs can detect 99% of illegal cluster formation. We develop a simple economic model to show that even with such a large presence of malicious nodes, TP2P can improve CP's profits (which could translate to user savings) by 62% to 122%, even while assuming conservative estimates of content and bandwidth costs. We implemented TP2P as a layer on top of BitTorrent and demonstrated experimentally using PlanetLab that our system provides trusted P2P file sharing with negligible performance overhead.Computer sciencejn234, ak2052, cs2035Computer Science, Industrial Engineering and Operations ResearchArticlesAccounting for Risk Aversion in Derivatives Purchase Timing
https://academiccommons.columbia.edu/catalog/ac:138783
Leung, Siu Tang; Ludkovski, Mikehttp://hdl.handle.net/10022/AC:P:11191Fri, 16 Sep 2011 09:45:22 +0000We study the problem of optimal timing to buy/sell derivatives by a risk-averse agent in incomplete markets. Adopting the exponential utility indifference valuation, we investigate this timing flexibility and the associated delayed purchase premium. This leads to a stochastic control and optimal stopping problem that combines the observed market price dynamics and the agent's risk preferences. Our results extend recent work on indifference valuation of American options, as well as the authors' first paper (Leung and Ludkovski, SIAM J. Fin. Math., 2011). In the case of Markovian models of contracts on non-traded assets, we provide analytical characterizations and numerical studies of the optimal purchase strategies, with applications to both equity and credit derivatives.Finance, Economic theorytl2497Industrial Engineering and Operations ResearchArticlesAlgorithms for Sparse and Low-Rank Optimization: Convergence, Complexity and Applications
https://academiccommons.columbia.edu/catalog/ac:137539
Ma, ShiqianMon, 22 Aug 2011 11:53:09 +0000Solving optimization problems with sparse or low-rank optimal solutions has been an important topic since the recent emergence of compressed sensing and its matrix extensions such as the matrix rank minimization and robust principal component analysis problems. Compressed sensing enables one to recover a signal or image with fewer observations than the "length" of the signal or image, and thus provides potential breakthroughs in applications where data acquisition is costly. However, the potential impact of compressed sensing cannot be realized without efficient optimization algorithms that can handle extremely large-scale and dense data from real applications. Although the convex relaxations of these problems can be reformulated as either linear programming, second-order cone programming or semidefinite programming problems, the standard methods for solving these relaxations are not applicable because the problems are usually of huge size and contain dense data. In this dissertation, we give efficient algorithms for solving these "sparse" optimization problems and analyze the convergence and iteration complexity properties of these algorithms. Chapter 2 presents algorithms for solving the linearly constrained matrix rank minimization problem. The tightest convex relaxation of this problem is the linearly constrained nuclear norm minimization. Although the latter can be cast and solved as a semidefinite programming problem, such an approach is computationally expensive when the matrices are large. In Chapter 2, we propose fixed-point and Bregman iterative algorithms for solving the nuclear norm minimization problem and prove convergence of the first of these algorithms. 
By using a homotopy approach together with an approximate singular value decomposition procedure, we get a very fast, robust and powerful algorithm, which we call FPCA (Fixed Point Continuation with Approximate SVD), that can solve very large matrix rank minimization problems. Our numerical results on randomly generated and real matrix completion problems demonstrate that this algorithm is much faster and provides much better recoverability than semidefinite programming solvers such as SDPT3. For example, our algorithm can recover 1000 × 1000 matrices of rank 50 with a relative error of 10^-5 in about 3 minutes by sampling only 20 percent of the elements. We know of no other method that achieves such good recoverability. Numerical experiments on online recommendation, DNA microarray data set and image inpainting problems demonstrate the effectiveness of our algorithms. In Chapter 3, we study the convergence/recoverability properties of the fixed point continuation algorithm and its variants for matrix rank minimization. Heuristics for determining the rank of the matrix when its true rank is not known are also proposed. Some of these algorithms are closely related to greedy algorithms in compressed sensing. Numerical results for these algorithms for solving linearly constrained matrix rank minimization problems are reported. Chapters 4 and 5 consider alternating direction type methods for solving composite convex optimization problems. We present in Chapter 4 alternating linearization algorithms that are based on an alternating direction augmented Lagrangian approach for minimizing the sum of two convex functions. Our basic methods require at most O(1/ε) iterations to obtain an ε-optimal solution, while our accelerated (i.e., fast) versions require at most O(1/√ε) iterations, with little change in the computational effort required at each iteration. 
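The fixed-point iterations underlying such algorithms alternate a gradient step with a shrinkage (soft-thresholding) step; in the nuclear-norm setting the thresholding is applied to singular values. A minimal l1 (vector) sketch on toy data, for illustration only:

```python
def shrink(x, t):
    """Soft-thresholding: the proximal operator of t * ||.||_1. Applied to
    singular values, this is the key step in nuclear norm algorithms."""
    return [max(abs(v) - t, 0.0) * (1 if v > 0 else -1) for v in x]

def fixed_point_l1(A, b, mu, tau=1.0, iters=200):
    """Iterate x <- shrink(x - tau * A^T (A x - b), tau * mu) to minimize
    mu * ||x||_1 + 0.5 * ||A x - b||^2 (plain lists, no numpy)."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        r = [Ax[i] - b[i] for i in range(m)]
        grad = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        x = shrink([x[j] - tau * grad[j] for j in range(n)], tau * mu)
    return x
```

With A the identity, the iteration converges in one step to the soft-thresholded data, which makes the role of the shrinkage operator easy to see.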
For the more general problem of minimizing the sum of K convex functions, we propose multiple-splitting algorithms. We propose both basic and accelerated algorithms with O(1/ε) and O(1/√ε) iteration complexity bounds for obtaining an ε-optimal solution. To the best of our knowledge, the complexity results presented in these two chapters are the first ones of this type that have been given for splitting and alternating direction type methods. Numerical results on various applications in sparse and low-rank optimization, including compressed sensing, matrix completion, image deblurring, and robust principal component analysis, are reported to demonstrate the efficiency of our methods.Operations researchsm2756Industrial Engineering and Operations ResearchDissertationsMany-Server Queues with Time-Varying Arrivals, Customer Abandonment, and non-Exponential Distributions
https://academiccommons.columbia.edu/catalog/ac:136569
Liu, Yunanhttp://hdl.handle.net/10022/AC:P:10801Tue, 02 Aug 2011 15:07:23 +0000This thesis develops deterministic heavy-traffic fluid approximations for many-server stochastic queueing models. The queueing models, with many homogeneous servers working independently in parallel, are intended to model large-scale service systems such as call centers and health care systems. Such models have also been employed to study communication, computing and manufacturing systems. The heavy-traffic approximations yield relatively simple formulas for quantities describing system performance, such as the expected number of customers waiting in the queue. The new performance approximations are valuable because, in the generality considered, these complex systems are not amenable to exact mathematical analysis. Since the approximate performance measures can be computed quite rapidly, they usefully complement more cumbersome computer simulation. Thus these heavy-traffic approximations can be used to improve capacity planning and operational control. More specifically, the heavy-traffic approximations here are for large-scale service systems, having many servers and a high arrival rate. The main focus is on systems that have time-varying arrival rates and staffing functions. The system is considered under the assumption that there are alternating periods of overloading and underloading, which commonly occurs when service providers are unable to adjust the staffing frequently enough to economically meet demand at all times. The models also allow the realistic features of customer abandonment and non-exponential probability distributions for the service times and the times customers are willing to wait before abandoning. These features make the overall stochastic model non-Markovian and thus very difficult to analyze directly. This thesis provides effective algorithms to compute approximate performance descriptions for these complex systems. 
These algorithms are based on ordinary differential equations and fixed point equations associated with contraction operators. Simulation experiments are conducted to verify that the approximations are effective. This thesis consists of four pieces of work, each presented in one chapter. The first chapter (Chapter 2) develops the basic fluid approximation for a non-Markovian many-server queue with time-varying arrival rate and staffing. The second chapter (Chapter 3) extends the fluid approximation to systems with complex network structure and Markovian routing of customers to other queues after they complete service at each queue. The extension to open networks of queues has important applications. For one example, in hospitals, patients usually move among different units such as emergency rooms, operating rooms, and intensive care units. For another example, in manufacturing systems, individual products visit different work stations one or more times. The open network fluid model has multiple queues, each of which has a time-varying arrival rate and staffing function. The third chapter (Chapter 4) studies the large-time asymptotic dynamics of a single fluid queue. When the model parameters are constant, convergence to the steady state as time evolves is established. When the arrival rates are periodic functions, such as in service systems with daily or seasonal cycles, the existence of a periodic steady state and the convergence to that periodic steady state as time evolves are established. Conditions are provided under which this convergence is exponentially fast. The fourth chapter (Chapter 5) uses a fluid approximation to gain insight into nearly periodic behavior seen in overloaded stationary many-server queues with customer abandonment and nearly deterministic service times. 
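The ODE-based computation can be illustrated in the simplest Markovian special case (exponential service and patience times with constant rates; illustrative parameters, not the thesis's general non-Markovian algorithm), integrated by forward Euler:

```python
def fluid_queue(lam, s, mu, theta, b0=0.0, dt=0.001, horizon=50.0):
    """Fluid approximation for a many-server queue with abandonment:
    b'(t) = lam - mu * min(b, s) - theta * max(b - s, 0),
    where b is the fluid content, s the staffing level, mu the service
    rate, and theta the abandonment rate. Forward-Euler integration."""
    b, t = b0, 0.0
    while t < horizon:
        served = mu * min(b, s)
        abandoned = theta * max(b - s, 0.0)
        b += dt * (lam - served - abandoned)
        t += dt
    return b
```

In an overloaded example (lam = 2, s = 1, mu = theta = 1) the fluid content converges to the steady state b = s + (lam - mu*s)/theta = 2, matching the balance of inflow against service plus abandonment.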
Deterministic service times are of applied interest because computer-generated service times, such as automated messages, may well be deterministic, and computer-generated service is becoming more prevalent. With deterministic service times, if all the servers remain busy for a long interval of time, then the times at which customers enter service assume periodic behavior throughout that interval. In overloaded large-scale systems, these intervals tend to persist for a long time, producing nearly periodic behavior. To gain insight, a heavy-traffic limit theorem is established showing that the fluid model arises as the many-server heavy-traffic limit of a sequence of appropriately scaled queueing models, all having these deterministic service times. Simulation experiments confirm that the transient behavior of the limiting fluid model provides a useful description of the transient performance of the queueing system. However, unlike the asymptotic loss of memory results in the previous chapter for service times with densities, the stationary fluid model with deterministic service times does not approach steady state as time evolves independent of the initial conditions. Since the queueing model with deterministic service times approaches a proper steady state as time evolves, this model with deterministic service times provides an example where the limit interchange (limiting steady state as time evolves and heavy traffic as scale increases) is not valid.Operations researchyl2342Industrial Engineering and Operations ResearchDissertationsFirst Order Methods for Large-Scale Sparse Optimization
https://academiccommons.columbia.edu/catalog/ac:135750
Aybat, Necdet Serhathttp://hdl.handle.net/10022/AC:P:10735Fri, 15 Jul 2011 12:00:39 +0000In today's digital world, improvements in acquisition and storage technology are allowing us to acquire more accurate and finer application-specific data, whether it be tick-by-tick price data from the stock market or frame-by-frame high resolution images and videos from surveillance systems, remote sensing satellites and biomedical imaging systems. Many important large-scale applications can be modeled as optimization problems with millions of decision variables. Very often, the desired solution is sparse in some form, either because the optimal solution is indeed sparse, or because a sparse solution has some desirable properties. Sparse and low-rank solutions to large scale optimization problems are typically obtained by regularizing the objective function with L1 and nuclear norms, respectively. Practical instances of these problems are very high dimensional (~ million variables) and typically have dense and ill-conditioned data matrices. Therefore, interior point based methods are ill-suited for solving these problems. The large scale of these problems forces one to use the so-called first-order methods that only use gradient information at each iterate. These methods are efficient for problems with a "simple" feasible set such that Euclidean projections onto the set can be computed very efficiently, e.g. the positive orthant, the n-dimensional hypercube, the simplex, and the Euclidean ball. When the feasible set is "simple", the subproblems used to compute the iterates can be solved efficiently. Unfortunately, most applications do not have "simple" feasible sets. A commonly used technique to handle general constraints is to relax them so that the resulting problem has only "simple" constraints, and then to solve a single penalty or Lagrangian problem. However, these methods generally do not guarantee convergence to feasibility. 
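Projection onto one of the "simple" sets mentioned above, the probability simplex, can be computed with the standard sort-and-threshold algorithm; a generic sketch, not code from the thesis:

```python
def project_simplex(v):
    """Euclidean projection onto {x : x >= 0, sum(x) = 1} by the standard
    sort-and-threshold algorithm: sort descending, find the largest prefix
    whose running average keeps entries positive, then shift and clip."""
    u = sorted(v, reverse=True)
    css, tau = 0.0, 0.0
    for j, uj in enumerate(u, start=1):
        css += uj
        t = (css - 1.0) / j
        if uj - t > 0:
            tau = t
    return [max(x - tau, 0.0) for x in v]
```

Projections like this cost O(n log n), which is what makes the per-iteration subproblems of first-order methods cheap on such sets.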
The focus of this thesis is on developing new fast first-order iterative algorithms for computing sparse and low-rank solutions to large-scale optimization problems with very mild restrictions on the feasible set - we allow linear equalities, norm-ball and conic inequalities, and also certain non-smooth convex inequalities to define the constraint set. The proposed algorithms guarantee that the sequence of iterates converges to an optimal feasible solution of the original problem, and each subproblem is an optimization problem with a "simple" feasible set. In addition, for any ε > 0, by relaxing the feasibility requirement of each iteration, the proposed algorithms can compute an ε-optimal and ε-feasible solution within O(log(1/ε)) iterations, which requires O(1/ε) basic operations in the worst case. Algorithm parameters do not depend on ε > 0. Thus, these new methods compute iterates arbitrarily close to feasibility and optimality as they continue to run. Moreover, the computational complexity of each basic operation for these new algorithms is the same as that of existing first-order algorithms running on "simple" feasible sets. Our numerical studies showed that only O(log(1/ε)) basic operations, as opposed to the O(1/ε) worst-case theoretical bound, are needed to obtain ε-feasible and ε-optimal solutions. We have implemented these new first-order methods for the following problem classes: Basis Pursuit (BP) in compressed sensing, Matrix Rank Minimization, Principal Component Pursuit (PCP) and Stable Principal Component Pursuit (SPCP) in principal component analysis. These problems have applications in signal and image processing, video surveillance, face recognition, latent semantic indexing, and ranking and collaborative filtering. 
To the best of our knowledge, this is the first algorithm for the SPCP problem that has O(1/ε) iteration complexity and a per-iteration complexity equal to that of a singular value decomposition.Operations research, Applied mathematicsnsa2106Industrial Engineering and Operations ResearchDissertationsQuantitative Modeling of Credit Derivatives
https://academiccommons.columbia.edu/catalog/ac:131549
Kan, Yu Hanghttp://hdl.handle.net/10022/AC:P:10272Thu, 05 May 2011 13:46:38 +0000The recent financial crisis has revealed major shortcomings in the existing approaches for modeling credit derivatives. This dissertation studies various issues related to the modeling of credit derivatives: hedging of portfolio credit derivatives, calibration of dynamic credit models, and modeling of credit default swap portfolios. In the first part, we compare the performance of various hedging strategies for index collateralized debt obligation (CDO) tranches during the recent financial crisis. Our empirical analysis shows evidence for market incompleteness: a large proportion of risk in the CDO tranches appears to be unhedgeable. We also show that, unlike what is commonly assumed, dynamic models do not necessarily perform better than static models, nor do high-dimensional bottom-up models perform better than simpler top-down models. On the other hand, model-free regression-based hedging appears to be surprisingly effective when compared to other hedging strategies. The second part is devoted to computational methods for constructing an arbitrage-free CDO pricing model compatible with observed CDO prices. This method makes use of an inversion formula for computing the aggregate default rate in a portfolio from expected tranche notionals, and a quadratic programming method for recovering expected tranche notionals from CDO spreads. Comparing this approach to other calibration methods, we find that model-dependent quantities such as the forward starting tranche spreads and jump-to-default ratios are quite sensitive to the calibration method used, even within the same model class. The last chapter of this dissertation focuses on statistical modeling of credit default swaps (CDSs). We undertake a systematic study of the univariate and multivariate properties of CDS spreads, using time series of the CDX Investment Grade index constituents from 2005 to 2009. 
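The model-free regression-based hedging compared in the first part amounts to estimating a hedge ratio by least squares of tranche price changes on index changes; a toy sketch with hypothetical numbers, not the dissertation's empirical estimates:

```python
def ols_hedge_ratio(tranche_moves, index_moves):
    """Least-squares hedge ratio: regress tranche P&L changes on index
    changes; the slope is the number of index units to hold against
    one unit of the tranche."""
    n = len(index_moves)
    mx = sum(index_moves) / n
    my = sum(tranche_moves) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(index_moves, tranche_moves))
    var = sum((x - mx) ** 2 for x in index_moves)
    return cov / var
```

The appeal of this approach is exactly its model-freeness: the ratio is fit directly to observed co-movements rather than derived from a pricing model.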
We then propose a heavy-tailed multivariate time series model for CDS spreads that captures these properties. Our model can be used as a framework for measuring and managing the risk of CDS portfolios, and is shown to have better performance than the affine jump-diffusion or random walk models for predicting loss quantiles of various CDS portfolios.Finance, Mathematicsyk2246Industrial Engineering and Operations ResearchDissertationsContagion and Systemic Risk in Financial Networks
https://academiccommons.columbia.edu/catalog/ac:131474
Moussa, Amalhttp://hdl.handle.net/10022/AC:P:10249Fri, 29 Apr 2011 18:12:27 +0000The 2007-2009 financial crisis has shed light on the importance of contagion and systemic risk, and revealed the lack of adequate indicators for measuring and monitoring them. This dissertation addresses these issues and leads to several recommendations for the design of an improved assessment of systemic importance, improved rating methods for structured finance securities, and their use by investors and risk managers. Using a complete data set of all mutual exposures and capital levels of financial institutions in Brazil in 2007 and 2008, we explore in chapter 2 the structure and dynamics of the Brazilian financial system. We show that the Brazilian financial system exhibits a complex network structure characterized by a strong degree of heterogeneity in connectivity and exposure sizes across institutions, which is qualitatively and quantitatively similar to the statistical features observed in other financial systems. We find that the Brazilian financial network is well represented by a directed scale-free network, rather than a small-world network. Based on these observations, we propose a stochastic model for the structure of banking networks, representing them as a directed weighted scale-free network with power law distributions for in-degree and out-degree of nodes and a Pareto distribution for exposures. This model may then be used for simulation studies of contagion and systemic risk in networks. We propose in chapter 3 a quantitative methodology for assessing contagion and systemic risk in a network of interlinked institutions. We introduce the Contagion Index as a metric of the systemic importance of a single institution or a set of institutions, which combines the effects of both common market shocks to portfolios and contagion through counterparty exposures. 
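Contagion through counterparty exposures can be sketched with a simple default cascade in which a bank fails once its losses from defaulted counterparties reach its capital. This assumes full loss given default and is a generic illustration, not the dissertation's Contagion Index:

```python
def default_cascade(exposures, capital, initially_defaulted):
    """exposures[i][j] = loss bank i suffers if bank j defaults (full loss
    given default, a simplifying assumption). Iterate until no further bank's
    cumulative counterparty losses reach its capital; return the default set."""
    n = len(capital)
    defaulted = set(initially_defaulted)
    changed = True
    while changed:
        changed = False
        for i in range(n):
            if i in defaulted:
                continue
            loss = sum(exposures[i][j] for j in defaulted)
            if loss >= capital[i]:
                defaulted.add(i)
                changed = True
    return defaulted
```

Running the cascade from each institution's default gives a crude measure of its systemic footprint: the total capital or balance-sheet size wiped out in the final default set.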
Using a directed scale-free graph simulation of the financial system, we study the sensitivity of contagion to a change in aggregate network parameters: connectivity, concentration of exposures, heterogeneity in degree distribution and network size. More concentrated and more heterogeneous networks are found to be more resilient to contagion. The impact of connectivity is more controversial: in well-capitalized networks, increasing connectivity improves the resilience to contagion when the initial level of connectivity is high, but increases contagion when the initial level of connectivity is low. In undercapitalized networks, increasing connectivity tends to increase the severity of contagion. We also study the sensitivity of contagion to local measures of connectivity and concentration across counterparties --the counterparty susceptibility and local network frailty-- that are found to have a monotonically increasing relationship with the systemic risk of an institution. Requiring a minimum (aggregate) capital ratio is shown to reduce the systemic impact of defaults of large institutions; we show that the same effect may be achieved with less capital by imposing such capital requirements only on systemically important institutions and those exposed to them. In chapter 4, we apply this methodology to the study of the Brazilian financial system. Using the Contagion Index, we study the potential for default contagion and systemic risk in the Brazilian system and analyze the contribution of balance sheet size and network structure to systemic risk. Our study reveals that, aside from balance sheet size, the network-based local measures of connectivity and concentration of exposures across counterparties introduced in chapter 3, the counterparty susceptibility and local network frailty, contribute significantly to the systemic importance of an institution in the Brazilian network. Thus, imposing an upper bound on these variables could help reduce contagion. 
We examine the impact of various capital requirements on the extent of contagion in the Brazilian financial system, and show that targeted capital requirements achieve the same reduction in systemic risk with lower capital requirements for financial institutions. The methodology we proposed in chapter 3 for estimating contagion and systemic risk requires visibility on the entire network structure. Reconstructing bilateral exposures from balance sheet data is then a question of interest in a financial system where bilateral exposures are not disclosed. We propose in chapter 5 two methods to derive a distribution of bilateral exposures matrices. The first method attempts to recover the balance sheet assets and liabilities "sample by sample". Each sample of the bilateral exposures matrix is the solution of a relative entropy minimization problem subject to the balance sheet constraints. However, a solution to this problem does not always exist when dealing with sparse sample matrices. Thus, we propose a second method that attempts to recover the assets and liabilities "in the mean". This approach is the analogue of the Weighted Monte Carlo method introduced by Avellaneda et al. (2001). We first simulate independent samples of the bilateral exposures matrix from a relevant prior distribution on the network structure, then we compute posterior probabilities by maximizing the entropy under the constraints that the balance sheet assets and liabilities are recovered in the mean. We discuss the pros and cons of each approach and explain how each could be used to detect systemically important institutions in the financial system. The recent crisis has also raised many questions regarding the meaning of structured finance credit ratings issued by rating agencies and the methodology behind them. 
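A standard building block for such relative-entropy reconstructions is iterative proportional fitting (the RAS algorithm), which matches given row and column totals starting from a prior support with a zero diagonal, since institutions have no self-exposure. This is a generic sketch, not the entropy-minimization procedure of chapter 5 itself:

```python
def ipf(prior, row_sums, col_sums, iters=500):
    """Iterative proportional fitting: alternately rescale rows and columns
    of the prior until both marginal constraints hold. The limit minimizes
    relative entropy to the prior, and zeros in the prior stay zero."""
    n = len(prior)
    X = [row[:] for row in prior]
    for _ in range(iters):
        for i in range(n):
            s = sum(X[i])
            if s > 0:
                X[i] = [x * row_sums[i] / s for x in X[i]]
        for j in range(n):
            s = sum(X[i][j] for i in range(n))
            if s > 0:
                for i in range(n):
                    X[i][j] *= col_sums[j] / s
    return X
```

Here row sums play the role of interbank assets and column sums of interbank liabilities; the zero diagonal in the prior is preserved throughout the iteration.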
Chapter 6 aims to clarify some misconceptions related to structured finance ratings and how they are commonly interpreted: we discuss the comparability of structured finance ratings with bond ratings, the interaction between the rating procedure and the tranching procedure, and their consequences for the stability of structured finance ratings over time. These insights are illustrated in a factor model by simulating rating transitions for CDO tranches using a nested Monte Carlo method. In particular, we show that the downgrade risk of a CDO tranche can be quite different from that of a bond with the same initial rating. Structured finance ratings follow path-dependent dynamics that cannot be adequately described, as is usually done, by a matrix of transition probabilities. Therefore, a simple labeling via default probability or expected loss does not sufficiently discriminate their downgrade risk. We propose to supplement ratings with indicators of downgrade risk. To overcome some of the drawbacks of existing rating methods, we suggest a risk-based rating procedure for structured products. Finally, we formulate a series of recommendations regarding the use of credit ratings for CDOs and other structured credit instruments.
Subjects: Finance, Statistics | Author UNIs: am2810 | Departments: Statistics; Industrial Engineering and Operations Research | Type: Dissertations

A Case for P2P Delivery of Paid Content
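As a hands-on illustration of the kind of factor-model tranche simulation this chapter refers to, here is a minimal one-factor Gaussian copula estimate of expected tranche loss (a single Monte Carlo level only; the chapter's rating-transition study uses a nested Monte Carlo, which this does not reproduce). Zero recovery, a homogeneous default probability, and all parameter values are our own simplifying assumptions.

```python
import math
import random
from statistics import NormalDist

def tranche_loss(attach, detach, n_names=100, pd=0.02, rho=0.3,
                 n_sims=2000, seed=1):
    """Expected loss fraction of the [attach, detach] tranche of a pool of
    n_names equal credits under a one-factor Gaussian copula, zero recovery.
    Name i defaults when sqrt(rho)*M + sqrt(1-rho)*Z_i < Phi^{-1}(pd)."""
    c = NormalDist().inv_cdf(pd)            # common default threshold
    rng = random.Random(seed)
    width = detach - attach
    total = 0.0
    for _ in range(n_sims):
        m = rng.gauss(0.0, 1.0)             # common market factor
        defaults = sum(
            1 for _ in range(n_names)
            if math.sqrt(rho) * m + math.sqrt(1.0 - rho) * rng.gauss(0.0, 1.0) < c
        )
        pool_loss = defaults / n_names
        total += min(max(pool_loss - attach, 0.0), width) / width
    return total / n_sims
```

The expected loss fraction of a 0–3% equity tranche far exceeds that of a 10–100% senior tranche, the asymmetry that drives the differing downgrade dynamics discussed above.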
https://academiccommons.columbia.edu/catalog/ac:110614
Sherman, Alex; Stavrou, Angelos; Nieh, Jason; Stein, Clifford S.; Keromytis, Angelos D. | http://hdl.handle.net/10022/AC:P:29479 | Wed, 27 Apr 2011 16:19:27 +0000
P2P file sharing provides a powerful content distribution model by leveraging users' computing and bandwidth resources. However, companies have been reluctant to rely on P2P systems for paid content distribution due to their inability to limit the exploitation of these systems for free file sharing. We present TP2, a system that combines the more cost-effective and scalable distribution capabilities of P2P systems with a level of trust and control over content distribution similar to direct-download content delivery networks. TP2 uses two key mechanisms that can be layered on top of existing P2P systems. First, it provides strong authentication to prevent free file sharing in the system. Second, it introduces a new notion of trusted auditors to detect and limit malicious attempts to gain information about participants in the system that would facilitate additional out-of-band free file sharing. We analyze TP2 by modeling it as a novel game between malicious users who try to form free file-sharing clusters and trusted auditors who curb the growth of such clusters. Our analysis shows that a small fraction of trusted auditors is sufficient to protect the P2P system against unauthorized file sharing. Using a simple economic model, we further show that TP2 provides a more cost-effective content distribution solution, resulting in higher profits for a content provider even in the presence of a large percentage of malicious users. Finally, we implemented TP2 on top of BitTorrent and used PlanetLab to show that our system can provide trusted P2P file sharing with negligible performance overhead.
Subjects: Computer science | Author UNIs: jn234, cs2035, ak2052 | Departments: Computer Science; Industrial Engineering and Operations Research | Type: Technical reports

Mitigating the Effect of Free-Riders in BitTorrent using Trusted Agents
https://academiccommons.columbia.edu/catalog/ac:110826
Sherman, Alex; Stavrou, Angelos; Nieh, Jason; Stein, Clifford S. | http://hdl.handle.net/10022/AC:P:29544 | Wed, 27 Apr 2011 09:56:20 +0000
Even though Peer-to-Peer (P2P) systems present a cost-effective and scalable solution to content distribution, most entertainment, media, and software content providers continue to rely on expensive, centralized solutions such as Content Delivery Networks. One of the main reasons is that current P2P systems cannot guarantee reasonable performance as they depend on the willingness of users to contribute bandwidth. Moreover, even systems like BitTorrent, which employ a tit-for-tat protocol to encourage fair bandwidth exchange between users, are prone to free-riding (i.e., peers that do not upload). Our experiments on PlanetLab extend previous research (e.g., LargeViewExploit, BitTyrant) demonstrating that such selfish behavior can seriously degrade the performance of regular users in many more scenarios beyond simple free-riding: we observed an overhead of up to 430% with 80% free-riding identities, which a small set of selfish users can easily generate. To mitigate the effects of selfish users, we propose a new P2P architecture that classifies peers with the help of a small number of trusted nodes that we call Trusted Auditors (TAs). TAs participate in P2P downloads like regular clients and detect free-riding identities by observing their neighbors' behavior. Using TAs, we can separate compliant users into a separate service pool, resulting in better performance. Furthermore, we show that TAs are more effective at ensuring the performance of the system than a mere increase in bandwidth capacity: with 80% free-riding identities, a single-TA system has a 6% download-time overhead, whereas without the TA, even with three times the bandwidth capacity, we measure a 100% overhead.
Subjects: Computer science | Author UNIs: jn234, cs2035 | Departments: Computer Science; Industrial Engineering and Operations Research | Type: Technical reports

FairTorrent: Bringing Fairness to Peer-to-Peer Systems
https://academiccommons.columbia.edu/catalog/ac:110957
Sherman, Alex; Nieh, Jason; Stein, Clifford S. | http://hdl.handle.net/10022/AC:P:29585 | Tue, 26 Apr 2011 12:15:48 +0000
The lack of fair bandwidth allocation in Peer-to-Peer systems causes many performance problems, including users being disincentivized from contributing upload bandwidth, free riders taking as much from the system as possible while contributing as little as possible, and a lack of quality-of-service guarantees to support streaming applications. We present FairTorrent, a simple distributed scheduling algorithm for Peer-to-Peer systems that fosters fair bandwidth allocation among peers. For each peer, FairTorrent maintains a deficit counter, which represents the number of bytes uploaded to that peer minus the number of bytes downloaded from it. It then uploads to the peer with the lowest deficit counter. FairTorrent automatically adjusts to variations in bandwidth among peers and is resilient to exploitation by free-riding peers. We have implemented FairTorrent inside a BitTorrent client without modifications to the BitTorrent protocol, and compared its performance on PlanetLab against other widely used BitTorrent clients. Our results show that FairTorrent can provide up to two orders of magnitude better fairness and up to five times better download performance for high-contributing peers. It thereby gives users an incentive to contribute more bandwidth and improves overall system performance.
Subjects: Computer science | Author UNIs: jn234, cs2035 | Departments: Computer Science; Industrial Engineering and Operations Research | Type: Technical reports

Group Ratio Round-Robin: O(1) Proportional Share Scheduling for Uniprocessor and Multiprocessor Systems
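The deficit-counter rule in the FairTorrent abstract is simple enough to state in a few lines. This sketch keeps one counter per peer and always serves the peer with the lowest deficit; the class name, block size, and method API are our own illustrative choices.

```python
class FairTorrentScheduler:
    """Per-peer deficit = bytes uploaded to that peer minus bytes
    downloaded from it; each block is uploaded to the peer whose
    deficit is currently lowest."""

    def __init__(self):
        self.deficit = {}

    def on_download(self, peer, nbytes):
        # receiving data from a peer lowers its deficit (we owe it)
        self.deficit[peer] = self.deficit.get(peer, 0) - nbytes

    def next_upload(self, block_size=16384):
        # serve the peer we owe the most; account for the block sent
        if not self.deficit:
            return None
        peer = min(self.deficit, key=self.deficit.get)
        self.deficit[peer] += block_size
        return peer
```

A free-rider that uploads nothing to us never drives its deficit negative, so it is simply served last, which is the resilience-to-free-riding property claimed in the abstract.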
https://academiccommons.columbia.edu/catalog/ac:109814
Caprita, Bogdan; Chan, Wong Chun; Nieh, Jason; Stein, Clifford S.; Zheng, Haoqiang | http://hdl.handle.net/10022/AC:P:29230 | Fri, 22 Apr 2011 13:48:44 +0000
Proportional share resource management provides a flexible and useful abstraction for multiplexing time-shared resources. We present Group Ratio Round-Robin (GR3), the first proportional share scheduler that combines accurate proportional fairness scheduling behavior with O(1) scheduling overhead on both uniprocessor and multiprocessor systems. GR3 uses a novel client grouping strategy to organize clients with similar processor allocations into groups that can be scheduled more easily. Using this grouping strategy, GR3 combines the benefits of low-overhead round-robin execution with a novel ratio-based scheduling algorithm. GR3 can provide fairness within a constant factor of the ideal generalized processor sharing model for client weights with a fixed upper bound and preserves its fairness properties on multiprocessor systems. We have implemented GR3 in Linux and measured its performance against other schedulers commonly used in research and practice, including the standard Linux scheduler, Weighted Fair Queueing, Virtual-Time Round-Robin, and Smoothed Round-Robin. Our experimental results demonstrate that GR3 can provide much lower scheduling overhead and much better scheduling accuracy in practice than these other approaches.
Subjects: Computer science | Author UNIs: jn234, cs2035 | Departments: Computer Science; Industrial Engineering and Operations Research | Type: Technical reports

Learning mixtures of product distributions over discrete domains
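The grouping idea in the GR3 abstract can be sketched as follows: clients whose weights share the same floor(log2 w) go into one group served round-robin, and groups are selected in proportion to their aggregate weight. The credit-based intergroup rule below is a deliberate simplification of GR3's ratio-based algorithm (and, unlike GR3, is not O(1) in the number of groups); all names are ours.

```python
import math
from collections import defaultdict, deque

class GroupedScheduler:
    """Toy grouped proportional-share scheduler: round-robin inside each
    weight group, credit-proportional selection between groups."""

    def __init__(self, clients):              # clients: {name: weight}
        self.groups = defaultdict(deque)
        self.gweight = defaultdict(float)
        self.credit = defaultdict(float)
        for name, w in clients.items():
            g = int(math.log2(w))             # clients with similar weight share a group
            self.groups[g].append(name)
            self.gweight[g] += w

    def next(self):
        # every group earns credit proportional to its weight; run the max
        for g in self.groups:
            self.credit[g] += self.gweight[g]
        g = max(self.credit, key=self.credit.get)
        self.credit[g] -= sum(self.gweight.values())
        client = self.groups[g][0]
        self.groups[g].rotate(-1)             # O(1) round-robin within group
        return client
```

Over any long window, each group receives time slices in proportion to its aggregate weight, and the cheap intra-group round-robin splits that share evenly among its similar-weight members.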
https://academiccommons.columbia.edu/catalog/ac:110398
Feldman, Jon; O'Donnell, Ryan; Servedio, Rocco Anthony | http://hdl.handle.net/10022/AC:P:29411 | Thu, 21 Apr 2011 12:41:48 +0000
We consider the problem of learning mixtures of product distributions over discrete domains in the distribution learning framework introduced by Kearns et al. We give a poly(n/ε) time algorithm for learning a mixture of k arbitrary product distributions over the n-dimensional Boolean cube {0,1}^n to accuracy ε, for any constant k. Previous polynomial time algorithms could only achieve this for k = 2 product distributions; our result answers an open question stated independently by Cryan and by Freund and Mansour. We further give evidence that no polynomial time algorithm can succeed when k is superconstant, by reduction from a notorious open problem in PAC learning. Finally, we generalize our poly(n/ε) time algorithm to learn any mixture of k = O(1) product distributions over {0, 1, …, b}^n, for any b = O(1).
Subjects: Computer science | Author UNIs: ras2105 | Departments: Industrial Engineering and Operations Research; Computer Science | Type: Technical reports

Grouped Distributed Queues: Distributed Queue, Proportional Share Multiprocessor Scheduling
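For intuition about the learning problem itself (not the paper's algorithm, whose polynomial-time guarantee rests on entirely different techniques), here is a standard EM baseline for fitting a mixture of k product distributions over {0,1}^n. EM carries no such guarantee and can get stuck in local optima; all names here are ours.

```python
import random

def em_mixture(data, k=2, iters=50, seed=0):
    """EM for a mixture of k product (independent-Bernoulli) distributions
    over {0,1}^n.  Returns mixing weights pi and per-component coordinate
    biases p[j][i] = Pr[coordinate i is 1 under component j]."""
    rng = random.Random(seed)
    n = len(data[0])
    pi = [1.0 / k] * k
    p = [[rng.uniform(0.25, 0.75) for _ in range(n)] for _ in range(k)]
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point
        resp = []
        for x in data:
            w = []
            for j in range(k):
                lik = pi[j]
                for i in range(n):
                    lik *= p[j][i] if x[i] else (1.0 - p[j][i])
                w.append(lik)
            s = sum(w) or 1.0
            resp.append([v / s for v in w])
        # M-step: re-estimate mixing weights and coordinate biases
        for j in range(k):
            nj = sum(r[j] for r in resp)
            pi[j] = nj / len(data)
            for i in range(n):
                num = sum(r[j] * x[i] for r, x in zip(resp, data))
                p[j][i] = min(max(num / (nj or 1.0), 1e-6), 1.0 - 1e-6)
    return pi, p
```

The clamping of p away from 0 and 1 keeps every sample's likelihood positive, so the E-step normalization is always well defined.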
https://academiccommons.columbia.edu/catalog/ac:110491
Caprita, Bogdan; Nieh, Jason; Stein, Clifford S. | http://hdl.handle.net/10022/AC:P:29440 | Thu, 21 Apr 2011 09:45:32 +0000
We present Grouped Distributed Queues (GDQ), the first proportional share scheduler for multiprocessor systems that, by using a distributed queue architecture, scales well with a large number of processors and processes. GDQ achieves accurate proportional fairness scheduling with only O(1) scheduling overhead. GDQ takes a novel approach to distributed queuing: instead of creating per-processor queues that need to be constantly balanced to achieve any measure of proportional-sharing fairness, GDQ uses a simple grouping strategy to organize processes into groups based on similar processor time allocation rights, and then assigns processors to groups based on aggregate group shares. Group membership of processes is static, and fairness is achieved by dynamically migrating processors among groups. The processors working on a group use simple, low-overhead round-robin queues, while processor reallocation among groups is achieved using a new multiprocessor adaptation of the well-known Weighted Fair Queuing algorithm. By commoditizing processors and decoupling their allocation from process scheduling, GDQ provides, with only constant scheduling cost, fairness within a constant factor of the ideal generalized processor sharing model for process weights with a fixed upper bound. We have implemented GDQ in Linux and measured its performance. Our experimental results show that GDQ has low overhead and scales well with the number of processors.
Subjects: Computer science | Author UNIs: jn234, cs2035 | Departments: Computer Science; Industrial Engineering and Operations Research | Type: Technical reports

Optimal adaptive control of cascading power grid failures
https://academiccommons.columbia.edu/catalog/ac:129328
Bienstock, Daniel | http://hdl.handle.net/10022/AC:P:9744 | Mon, 20 Dec 2010 14:40:12 +0000
Power grids have long been a source of interesting optimization problems. Perhaps best known among the optimization community are the unit commitment problems and related generator dispatching tasks. However, recent blackout events have renewed interest in problems related to grid vulnerabilities. A difficult problem that has been widely studied, the N-K problem, concerns the detection of small-cardinality sets of lines or buses whose simultaneous outage could develop into a significant failure event. This is a hard combinatorial problem which, unlike the typical formulations for the unit commitment problem, includes a detailed model of flows in the grid. A different set of algorithmic questions concerns how to react to protect a grid when a significant event has taken place. This is the outlook that we take in this paper. In this context, the central modeling ingredient is that power grids display cascading behavior. In this paper, building on prior models for cascades, we consider an affine, adaptive, distributed control algorithm that is computed at the start of the cascade and deployed during the cascade. The control sheds demand as a function of observations of the state of the grid, with the objective of terminating the cascade with a minimum amount of demand lost. The optimization problem handled at the start of the cascade computes the coefficients in the affine control (one set of coefficients per demand bus). We present numerical experiments with parallel implementations of our algorithms, using as data a snapshot of the U.S. Eastern Interconnect, with approximately 15,000 buses and 23,000 lines.
Subjects: Electrical engineering | Author UNIs: db17 | Departments: Industrial Engineering and Operations Research; Applied Physics and Applied Mathematics | Type: Articles

Behavior-Based Modeling and Its Application to Email Analysis
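To see why an affine shedding rule can terminate a cascade, consider a deliberately tiny example: identical parallel lines serving one aggregate demand, with shedding proportional to the observed overload. This ignores power-flow physics and the per-bus coefficients of the paper entirely; it is only meant to show the control trading lost demand against cascade depth, and every name and number is hypothetical.

```python
def cascade(demand, capacities, alpha=0.0):
    """Toy cascade on parallel lines serving one demand.  Load splits
    evenly across surviving lines; an affine control sheds demand in
    proportion to observed overload (shed = alpha * overload); any line
    still overloaded after shedding trips, and the process repeats.
    Returns (demand served, lines surviving)."""
    live = list(capacities)
    while live:
        per_line = demand / len(live)
        overload = sum(max(per_line - c, 0.0) for c in live)
        if overload == 0.0:
            return demand, len(live)            # cascade has terminated
        demand = max(demand - alpha * overload, 0.0)   # affine shedding
        per_line = demand / len(live)
        survivors = [c for c in live if per_line <= c]
        if len(survivors) == len(live):         # shedding cleared all overloads
            return demand, len(live)
        live = survivors
    return 0.0, 0
```

With alpha = 0 (no control), one overload round can trip every line and black out the system; a modest alpha sheds just enough demand to stop the cascade with most demand still served, which is the trade-off the paper's optimization tunes.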
https://academiccommons.columbia.edu/catalog/ac:125674
Stolfo, Salvatore; Hershkop, Shlomo; Hu, Chia-wei; Li, Wei-Jen; Nimeskern, Olivier; Wang, Ke | http://hdl.handle.net/10022/AC:P:8686 | Wed, 28 Apr 2010 12:52:39 +0000
The Email Mining Toolkit (EMT) is a data mining system that computes behavior profiles or models of user email accounts. These models may be used for a multitude of tasks, including forensic analyses and detection tasks of value to law enforcement and intelligence agencies, as well as for other typical tasks such as virus and spam detection. To demonstrate the power of the methods, we focus on the application of these models to detect the early onset of a viral propagation without the "content-based" (or signature-based) analysis in common use in virus scanners. We present several experiments using real email from 15 users with injected simulated viral emails and describe how the combination of different behavior models improves overall detection rates. The performance results vary depending upon parameter settings, approaching a 99% true positive (TP) rate (percentage of viral emails caught) in general cases, with a 0.38% false positive (FP) rate (percentage of emails with attachments that are mislabeled as viral). The models used for this study are based upon volume and velocity statistics of a user's email rate and an analysis of the user's (social) cliques revealed in the person's email behavior. We show by way of simulation that virus propagations are detectable since viruses may emit emails at rates different from what normal human behavior suggests, and may direct email to groups of recipients in ways that violate the users' typical communications with their social groups.
Subjects: Computer science | Author UNIs: sjs11, sh553, ch176 | Departments: Computer Science; Industrial Engineering and Operations Research | Type: Articles

Continuity of a queueing integral representation in the M1 topology
https://academiccommons.columbia.edu/catalog/ac:125349
Pang, Guodong; Whitt, Ward | http://hdl.handle.net/10022/AC:P:8584 | Fri, 02 Apr 2010 12:18:37 +0000
We establish continuity of the integral representation y(t) = x(t) + ∫₀ᵗ h(y(s)) ds, t ≥ 0, mapping a function x into a function y, when the underlying function space D is endowed with the Skorohod M1 topology. We apply this integral representation with the continuous mapping theorem to establish heavy-traffic stochastic-process limits for many-server queueing models when the limit process has jumps unmatched in the converging processes, as can occur with bursty arrival processes or service interruptions. The proof of M1-continuity is based on a new characterization of M1 convergence, in which the time portions of the parametric representations are absolutely continuous with respect to Lebesgue measure and the derivatives are uniformly bounded and converge in L1.
Subjects: Operations research | Author UNIs: ww2040 | Departments: Industrial Engineering and Operations Research | Type: Articles

The N-k Problem in Power Grids: New Models, Formulations and Numerical Experiments (Extended Version)
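For concreteness, the mapping x ↦ y defined by y(t) = x(t) + ∫₀ᵗ h(y(s)) ds can be evaluated numerically for smooth inputs by forward Euler on the equivalent increment relation dy = dx + h(y) dt. This says nothing about the M1 continuity argument itself, which concerns functions with jumps; the function names are ours.

```python
import math

def solve_integral(x, h, T=1.0, dt=1e-4):
    """Forward-Euler evaluation of y(T) for y(t) = x(t) + ∫_0^t h(y(s)) ds,
    using the increment form dy = dx + h(y) dt, with y(0) = x(0)."""
    t, y = 0.0, x(0.0)
    while t < T:
        y = y + (x(t + dt) - x(t)) + h(y) * dt
        t += dt
    return y
```

Taking x ≡ 1 and h(y) = -y reduces the representation to y' = -y with y(0) = 1, whose exact solution is y(T) = exp(-T), giving a simple accuracy check.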
https://academiccommons.columbia.edu/catalog/ac:125318
Bienstock, Daniel; Verma, Abhinav | http://hdl.handle.net/10022/AC:P:8574 | Wed, 17 Mar 2010 17:38:39 +0000
Given a power grid modeled by a network together with equations describing the power flows, power generation and consumption, and the laws of physics, the so-called N-k problem asks whether there exists a set of k or fewer arcs whose removal will cause the system to fail. The case where k is small is of practical interest. We present theoretical and computational results involving a mixed-integer model and a continuous nonlinear model related to this question.
Author UNIs: db17 | Departments: Industrial Engineering and Operations Research; Applied Physics and Applied Mathematics | Type: Articles