Academic Commons Search Results
https://academiccommons.columbia.edu/catalog?action=index&controller=catalog&f%5Bpub_date_facet%5D%5B%5D=2012&f%5Bsubject_facet%5D%5B%5D=Operations+research&format=rss&fq%5B%5D=has_model_ssim%3A%22info%3Afedora%2Fldpd%3AContentAggregator%22&q=&rows=500&sort=record_creation_date+desc
Dynamic Trading Strategies in the Presence of Market Frictions
https://academiccommons.columbia.edu/catalog/ac:dv41ns1rpx
Saglam, Mehmet | 10.7916/D8KS7ZCD | Tue, 05 Dec 2017 17:29:20 +0000

This thesis studies the impact of various fundamental frictions in the microstructure of financial markets. The specific market frictions we consider are latency in high-frequency trading, transaction costs arising from price impact or commissions, unhedgeable inventory risks due to stochastic volatility, and time-varying liquidity costs. We explore the implications of each of these frictions in rigorous theoretical models from an investor's point of view and derive analytical expressions or efficient computational procedures for dynamic strategies. Specific methodologies in computing these policies include stochastic control theory, dynamic programming, and tools from applied probability and stochastic processes.
In the first chapter, we describe a theoretical model for the quantitative valuation of latency and its impact on the optimal dynamic trading strategy. Our model measures the trading frictions created by the presence of latency by considering the optimal execution problem of a representative investor. Via a dynamic programming analysis, our model provides a closed-form expression for the cost of latency in terms of well-known parameters of the underlying asset. We implement our model by estimating the latency cost incurred by trading on a human time scale. Examining NYSE common stocks from 1995 to 2005, we find that the median latency cost across our sample more than tripled during this time period.
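The intuition behind a latency cost can be sketched with a toy Monte Carlo that is not the chapter's model: if an order is priced off a quote observed `latency` seconds ago and the mid-price follows a driftless Gaussian random walk, the expected adverse move grows with the square root of the latency. All parameter values below are illustrative assumptions.

```python
import math
import random

def latency_slippage(sigma=0.02, latency=0.5, n_trials=20000, seed=0):
    """Average absolute price move between the quote a trader observes and
    the price `latency` seconds later, when the order actually arrives.
    The price is a driftless Gaussian random walk with volatility `sigma`
    per square-root second.  A toy illustration, not the thesis's model."""
    rng = random.Random(seed)
    scale = sigma * math.sqrt(latency)
    return sum(abs(rng.gauss(0.0, scale)) for _ in range(n_trials)) / n_trials

# For a Gaussian move, E|X| = sigma * sqrt(latency) * sqrt(2/pi),
# so this toy cost grows like sqrt(latency):
print(latency_slippage(latency=1.0), latency_slippage(latency=0.01))
```

In this toy, cutting latency from one second to ten milliseconds shrinks the expected adverse move tenfold, which is the flavor of the trade-off the chapter quantifies rigorously.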
In the second chapter, we provide a highly tractable dynamic trading policy for portfolio choice problems with return predictability and transaction costs. Our rebalancing rule is a linear function of the return-predicting factors and can be utilized in a wide spectrum of portfolio choice models with minimal assumptions. Linear rebalancing rules make it possible to compute exact and efficient formulations of portfolio choice models with linear constraints, proportional and nonlinear transaction costs, and a quadratic utility function of terminal wealth. We illustrate the implementation of the best linear rebalancing rule in the context of portfolio execution with positivity constraints in the presence of short-term predictability. We show that there is a considerable performance gain in using linear rebalancing rules compared to static policies with a shrinking horizon or to the dynamic policy implied by solving the dynamic program without the constraints.
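A linear rebalancing rule of the kind described can be sketched as follows; the coefficients here are made-up placeholders, whereas in the thesis they would be chosen by solving the portfolio optimization offline.

```python
import numpy as np

def run_linear_policy(a, B, factors):
    """Apply a linear rebalancing rule: trade_t = a[t] + B[t] @ f_t, where
    f_t is the vector of return-predicting factors observed at time t.
    Returns the cumulative position.  A sketch only: the coefficients
    (a, B) are fixed for illustration rather than optimized."""
    position = 0.0
    for t, f_t in enumerate(factors):
        position += float(a[t] + B[t] @ f_t)  # trade is linear in the factors
    return position

T, n_factors = 5, 2
a = np.zeros(T)                               # per-period intercepts (placeholder)
B = np.tile(np.array([0.5, -0.2]), (T, 1))    # per-period factor loadings (placeholder)
factors = np.random.default_rng(0).standard_normal((T, n_factors))
print(run_linear_policy(a, B, factors))
```

Because the trade is affine in the factors, expected costs and constraints become deterministic functions of (a, B), which is what makes exact optimization over this policy class tractable.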
Finally, in the last chapter, we propose a factor-based model that incorporates common factor shocks for the security returns. Under these realistic factor dynamics, we solve analytically for the dynamic trading policy in the class of linear policies. Our model can accommodate stochastic volatility and liquidity costs as a function of factor exposures. Calibrating our model with empirical data, we show that our trading policy achieves superior performance in the presence of common factor shocks.

Finance, Operations research, Stochastic processes, Business | Business | Theses

Chance Constrained Optimal Power Flow: Risk-Aware Network Control under Uncertainty
https://academiccommons.columbia.edu/catalog/ac:153902
Bienstock, Daniel; Chertkov, Michael; Harnett, Sean | 10.7916/D8PN9GCX | Tue, 27 Jun 2017 15:37:47 +0000

When uncontrollable resources fluctuate, Optimal Power Flow (OPF), routinely used by the electric power industry to redispatch hourly controllable generation (coal, gas and hydro plants) over control areas of transmission networks, can result in grid instability and, potentially, cascading outages. This risk arises because OPF dispatch is computed without awareness of major uncertainty, in particular fluctuations in renewable output. As a result, grid operation under OPF with renewable variability can lead to frequent conditions where power line flow ratings are significantly exceeded. Such a condition, which is borne out by simulations of real grids, would likely result in automatic line tripping to protect lines from thermal stress, a risky and undesirable outcome which compromises stability. Smart grid goals include a commitment to large penetration of highly fluctuating renewables, thus calling for a reconsideration of current practices, in particular the use of standard OPF. Our Chance Constrained (CC) OPF corrects the problem and mitigates dangerous renewable fluctuations with minimal changes in the current operational procedure. Assuming availability of a reliable wind forecast parameterizing the distribution function of the uncertain generation, our CC-OPF satisfies all the constraints with high probability while simultaneously minimizing the cost of economic redispatch. CC-OPF allows efficient implementation, e.g., solving a typical instance over the 2746-bus Polish network in 20 seconds on a standard laptop.

Industrial engineering, Operations research | db17, srh2144 | Industrial Engineering and Operations Research | Articles

Approximate dynamic programming for large scale systems
https://academiccommons.columbia.edu/catalog/ac:169790
Desai, Vijay V. | 10.7916/D82V2PH1 | Thu, 08 Jun 2017 16:12:40 +0000

Sequential decision making under uncertainty is at the heart of a wide variety of practical problems. These problems can be cast as dynamic programs, and the optimal value function can be computed by solving Bellman's equation. However, this approach is limited in its applicability. As the number of state variables increases, the state space size grows exponentially, a phenomenon known as the curse of dimensionality, rendering the standard dynamic programming approach impractical. An effective way of addressing the curse of dimensionality is through parameterized value function approximation. Such an approximation is determined by a relatively small number of parameters and serves as an estimate of the optimal value function. But in order for this approach to be effective, we need Approximate Dynamic Programming (ADP) algorithms that can deliver a `good' approximation to the optimal value function; such an approximation can then be used to derive policies for effective decision-making. From a practical standpoint, in order to assess the effectiveness of such an approximation, there is also a need for methods that give a sense of the suboptimality of a policy. This thesis is an attempt to address both of these issues. First, we introduce a new ADP algorithm based on linear programming to compute value function approximations. LP approaches to approximate DP have typically relied on a natural `projection' of a well studied linear program for exact dynamic programming. Such programs restrict attention to approximations that are lower bounds to the optimal cost-to-go function. Our program -- the `smoothed approximate linear program' -- is distinct from such approaches and relaxes the restriction to lower bounding approximations in an appropriate fashion while remaining computationally tractable.
The resulting program enjoys strong approximation guarantees and is shown to perform well in numerical experiments with the game of Tetris and a queueing network control problem. Next, we consider optimal stopping problems with applications to the pricing of high-dimensional American options. We introduce the pathwise optimization (PO) method: a new convex optimization procedure to produce upper and lower bounds on the optimal value (the `price') of high-dimensional optimal stopping problems. The PO method builds on a dual characterization of optimal stopping problems as optimization problems over the space of martingales, which we dub the martingale duality approach. We demonstrate via numerical experiments that the PO method produces upper bounds and lower bounds (via suboptimal exercise policies) of a quality comparable with state-of-the-art approaches. Further, we develop an approximation theory relevant to martingale duality approaches in general and the PO method in particular. Finally, we consider a broad class of MDPs and introduce a new tractable method for computing bounds by considering information relaxations and introducing penalties. The method delivers tight bounds by identifying the best penalty function among a parameterized class of penalty functions. We implement our method on a high-dimensional financial application, namely optimal execution, and demonstrate the practical value of the method vis-a-vis competing methods available in the literature. In addition, we provide theory to show that bounds generated by our method are provably tighter than some of the other available approaches.

Operations research, Mathematics | vvd2101 | Industrial Engineering and Operations Research | Theses

Dynamic Markets with Many Agents: Applications in Social Learning and Competition
https://academiccommons.columbia.edu/catalog/ac:174798
Ifrach, Bar | 10.7916/D8NK3C48 | Thu, 08 Jun 2017 16:06:11 +0000

This thesis considers two applications in dynamic economic models with many agents. The dynamics of the economic systems under consideration are intractable since they depend on the (stochastic) outcomes of the agents' actions. However, as the number of agents grows large, approximations to the aggregate behavior of agents come to light. I use this observation to characterize market dynamics and subsequently to study these applications.
Chapter 2 studies the problem of devising a pricing strategy to maximize the revenues extracted from a stream of consumers with heterogeneous preferences. Consumers, however, do not know the quality of the product or service and engage in a social learning process to learn it. Using a mean-field approximation, the transient behavior of this social learning process is uncovered and the pricing problem is analyzed.
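A minimal sketch of the mean-field idea, with hypothetical numbers and a deliberately simplified Beta-Bernoulli review model that is not the chapter's model: in a large market, random review counts can be replaced by their expected fractions, and the resulting posterior mean of quality tracks the true value.

```python
def posterior_mean_after_reviews(q=0.7, n_reviews=10000, a=1, b=1):
    """Mean-field sketch of social learning from reviews: consumers hold a
    Beta(a, b) prior over an unknown quality q in [0, 1]; each purchase
    yields a positive review with probability q.  In the mean-field regime
    the random count of positives is replaced by its expectation q * n,
    and the posterior mean is the usual Beta-Bernoulli update."""
    positives = round(q * n_reviews)          # deterministic mean-field fraction
    return (a + positives) / (a + b + n_reviews)

print(posterior_mean_after_reviews())         # close to the true quality 0.7
```

The point of the sketch is only that the learning transient becomes deterministic in the large-market limit, which is what makes the pricing problem analyzable.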
Chapter 3 builds on the previous chapter by analyzing features of this social learning process with finitely many agents. In addition, the chapter generalizes the information structure to include cases where consumers take into account the order in which reviews were submitted.
Chapter 4 considers a model of dynamic oligopoly competition in the spirit of models that are widespread in industrial organization. The computation of equilibrium strategies for such models suffers from the curse of dimensionality when the number of agents (firms) is large. For a market structure with a few dominant firms and many fringe firms, I study an alternative equilibrium concept in which fringe firms are represented succinctly by a low-dimensional set of statistics. The chapter explores how this new equilibrium concept expands the class of dynamic oligopoly models that can be studied computationally in empirical work.

Operations research, Economics | bi2118 | Business | Theses

Price competition and the impact of service attributes: Structural estimation and analytical characterizations of equilibrium behavior
https://academiccommons.columbia.edu/catalog/ac:153522
Pierson, Margaret Parker | 10.7916/D89029WZ | Wed, 07 Jun 2017 17:00:33 +0000

This dissertation addresses a number of outstanding, fundamental questions in the operations management and industrial organization literatures. Operations management literature has a long history of studying the competitive impact of operational, firm-level strategic decisions within oligopoly markets. The first essay reports on an empirical study of an important industry, the drive-thru fast-food industry. We estimate a competition model, derived from an underlying Mixed Multinomial Logit (MMNL) consumer choice model, using detailed empirical data. The main goal is to measure to what extent waiting time performance, along with price levels, brand attributes, and geographical and demographic factors, impacts competing firms' market shares. The primary goal of our second essay is to characterize the equilibrium behavior of price competition models with MMNL demand functions under affine cost structures. In spite of the huge popularity of MMNL models in both the theoretical and empirical literature, it is not known, in general, whether a Nash equilibrium (in pure strategies) of prices exists, and whether the equilibria can be uniquely characterized as the solutions to the system of First Order Condition (FOC) equations. In the third essay, which is the most general in its context, we establish that, in the absence of cost efficiencies resulting from a merger, aggregate profits of the merging firms increase, as do equilibrium prices, for general price competition models with general nonlinear demand and cost functions, as long as the models are supermodular and satisfy two additional structural conditions: (i) each firm's profit function is strictly quasi-concave in its own price(s), and (ii) markets are competitive, i.e., in the pre-merger industry, each firm's profits increase when any of its competitors unilaterally increases its price.
Even the equilibrium profits of the remaining firms in the industry increase, while the consumer ends up holding the bag, i.e., consumer welfare declines. As demonstrated by this essay, the answers to these sorts of strategy questions have implications not only for the firms and customers but also for the policy makers policing these markets.

Operations research, Business | Business | Theses

Contingent Capital: Valuation and Risk Implications Under Alternative Conversion Mechanisms
https://academiccommons.columbia.edu/catalog/ac:152933
Nouri, Behzad | 10.7916/D80P164K | Wed, 07 Jun 2017 17:00:14 +0000

Several proposals for enhancing the stability of the financial system include requirements that banks hold some form of contingent capital, meaning equity that becomes available to a bank in the event of a crisis or financial distress. Specific proposals vary in their choice of conversion trigger and conversion mechanism, and have inspired extensive scrutiny regarding their effectiveness in avoiding costly public rescues and bail-outs and their potential adverse effects on market dynamics. While allowing banks to leverage and earn a higher return on their equity capital during upturns in financial markets, contingent capital provides an automatic mechanism to reduce debt and raise the loss-bearing capital cushion during downturns and market crashes, thereby making it possible to achieve stability and robustness in the financial sector without reducing the efficiency and competitiveness of the banking system through higher regulatory capital requirements. However, many researchers have raised concerns regarding unintended consequences and implications of such instruments for market dynamics. Death spirals in the stock price near conversion, the possibility of profitable stock or book manipulations by either the investors or the issuer, the marketability of and demand for such hybrid instruments, contagion and systemic risks arising from the hedging strategies of the investors, and higher risk-taking incentives for issuers are among such concerns. Though substantial, many of these issues can be addressed through a prudent design of the trigger and conversion mechanism. In the following chapters, we develop multiple models for the pricing and analysis of contingent capital under different conversion mechanisms. In Chapter 2 we analyze the case of contingent capital with a capital-ratio trigger and partial, on-going conversion.
The capital ratio we use is based on accounting or book value to approximate the regulatory ratios that determine capital requirements for banks. The conversion process is partial and on-going in the sense that each time a bank's capital ratio reaches the minimum threshold, just enough debt is converted to equity to meet the capital requirement, so long as the contingent capital has not been depleted. In Chapter 3 we simplify the design to all-at-once conversion; however, we perform the analysis through a much richer model which incorporates tail risk in the form of jumps, an endogenous optimal default policy, and debt rollover. We also investigate the case of bail-in debt, where at default the original shareholders are wiped out and the converted investors take control of the firm. In the case of contingent convertibles, the conversion trigger is assumed to be a contractual term specified in terms of the market value of assets. For bail-in debt, the trigger is the point at which the original shareholders optimally default. We study the incentives of shareholders to change the capital structure and how CoCos affect risk incentives. Several researchers have advocated the use of a market-based trigger, which is forward looking, continuously updated and readily available, while others have raised concerns regarding the unintended consequences of a market-based trigger. In Chapter 4 we investigate one of these issues, namely the existence and uniqueness of equilibrium when the conversion trigger is based on the stock price.

Finance, Operations research | bn2164 | Industrial Engineering and Operations Research | Theses

Three Essays on Dynamic Pricing and Resource Allocation
https://academiccommons.columbia.edu/catalog/ac:151966
Nur, Cavdaroglu | 10.7916/D8W09D0G | Wed, 07 Jun 2017 17:00:14 +0000

This thesis consists of three essays that focus on different aspects of pricing and resource allocation. We use techniques from supply chain and revenue management, scenario-based robust optimization, and game theory to study the behavior of firms in different competitive and non-competitive settings. We develop dynamic programming models that account for the pricing and resource allocation decisions of firms in such settings. In Chapter 2, we focus on the resource allocation problem of a service firm, particularly a health-care facility. We formulate a general model that is applicable to various resource allocation problems of a hospital. To this end, we consider a system with multiple customer classes that display different reactions to delays in service. By adopting a dynamic-programming approach, we show that the optimal policy is not simple but exhibits desirable monotonicity properties. Furthermore, we propose a simple threshold heuristic policy that performs well in our experiments. In Chapter 3, we study a dynamic pricing problem for a monopolist seller that operates in a setting where buyers have market power, and where each potential sale takes the form of a bilateral negotiation. We review the dynamic programming formulation of the negotiation problem, and propose a simple and tractable deterministic "fluid" analogue for this problem. The main emphasis of the chapter is on expanding the formulation to the dynamic setting where both the buyer and seller have limited prior information on their counterparty's valuation and negotiation skill. In Chapter 4, we consider the revenue maximization problem of a seller who operates in a market with two types of customers, namely "investors" and "regular buyers".
In a two-period setting, we model and solve the pricing game between the seller and the investors in the latter period, and based on the solution of this game, we analyze the revenue maximization problem of the seller in the former period. Moreover, we study the effects on total system profits when the seller and the investors cooperate through a contracting mechanism rather than competing with each other, and explore the contracting opportunities that lead to higher profits for both agents.

Operations research | Industrial Engineering and Operations Research | Theses

Strategic Models in Supply Network Design
https://academiccommons.columbia.edu/catalog/ac:147203
Lederman, Roger | 10.7916/D8X63V2V | Wed, 07 Jun 2017 16:58:54 +0000

This dissertation contains a series of essays intended to introduce strategic modeling techniques into the network design problem. While investment in production capacity has long been approached as a critical strategic decision, the increasing need for robust, responsive supply capabilities has made it essential to take a network view, where multiple products and sites are considered simultaneously. In traditional network planning, models have rarely accounted for the behavior of additional players - customers, competitors, suppliers - on whom a firm can exert only a limited influence. We analyze a set of models that account for the dynamics of the firm's interaction with these outside actors. In Chapters 2 and 3, we develop game-theoretic models to characterize the allocation of resources in a network context. In Chapter 2, we use series-parallel networks to model the arrangement of producers whose output is bundled. This structure may arise, for example, when various components of the production process are outsourced individually. We study supply-function mechanisms through which producers strategically manage scarce capacity. Our results show how network structure can be analyzed to measure producers' market power and its effect on equilibrium markups. Chapter 3 looks at the network design problem of a vertically integrated firm with the ability to flexibly allocate resources across markets. We consider optimal design of the firm's production network as an upper-level decision to be optimized with respect to competitive outcomes in the lower stage. We find that optimal strategies regarding the location and centralization of production will differ across firms, depending on their competitive position in the market. The final two chapters discuss practical issues regarding the availability of model inputs in a multi-product context.
In Chapter 4, we propose a method to construct competitor sets through estimation of a latent-segment choice model. We present a case study in a hotel market, where demand is distributed both spatially and temporally. We show how widely available data on market events can be used to drive identification of customer segments, providing a basis to assess competitive interactions. Chapter 5 provides a further example, in the setting of urban transportation networks, of how user behavior on a network can be estimated from partially observed data. We present a novel two-phase approach for performing this estimation in real time.

Operations research, Business | rdl2102 | Business | Theses

Modeling Customer Behavior for Revenue Management
https://academiccommons.columbia.edu/catalog/ac:151773
Bansal, Matulya | 10.7916/D8KP8867 | Wed, 07 Jun 2017 16:55:57 +0000

In this thesis, we model and analyze the impact of two behavioral aspects of customer decision making on the revenue maximization problem of a monopolist firm. First, we study the revenue maximization problem of a monopolist firm selling a homogeneous good to a market of risk-averse, strategic customers. Using a discrete (but arbitrary) valuation distribution, we show how the dynamic pricing problem with strategic customers can be formulated as a mechanism design problem, thereby making it more amenable to analysis. We characterize the optimal solution, and solve the problem for several special cases. We perform asymptotic analysis for the low risk-aversion case and show that it is asymptotically optimal to offer at most two products. Second, we consider a revenue-maximizing monopolist firm that serves a market of customers who are heterogeneous with respect to their valuations and desire for a quality attribute. Instead of optimizing the net utility that results from an appropriate combination of product price and quality, as in the traditional model of customer behavior, we consider a setting where customers purchase the cheapest product subject to its quality exceeding a customer-specific quality threshold. We call such preferences threshold preferences. We solve the firm’s product design problem in this setting, and contrast it with the traditional model of customer choice behavior. We consider several scenarios where such preferences might arise, and identify the optimal solution in each case. In addition to these product design problems, we study the problem of identifying the optimal putting strategy for a golfer. We develop a model of golfer putting skill, and combine it with a putt trajectory and hole-out model to identify a golfer’s optimal putting strategy.
The problem of identifying the optimal putting strategy is shown to be equivalent to a two-dimensional stochastic shortest path problem with continuous state and control spaces, and is solved using approximate dynamic programming. We calibrate the golfer model to professional and amateur player data, and use the calibrated model to answer several interesting questions: e.g., how does green-reading ability affect golfer performance, how do professional and amateur golfers differ in their strategy, and how do uphill and downhill putts compare in difficulty?

Business, Operations research | mb2431 | Business | Theses

Essays on Inventory Management and Object Allocation
https://academiccommons.columbia.edu/catalog/ac:144769
Lee, Thiam Hui | 10.7916/D8JM2HM7 | Wed, 07 Jun 2017 02:51:18 +0000

This dissertation consists of three essays. In the first, we establish a framework for proving equivalences between mechanisms that allocate indivisible objects to agents. In the second, we study a newsvendor model where the inventory manager has access to two experts that provide advice, and examine how and when an optimal algorithm can be efficiently computed. In the third, we study the classical single-resource capacity allocation problem and investigate the relationship between data availability and performance guarantees.
We first study mechanisms that solve the problem of allocating indivisible objects to agents. We consider the class of mechanisms that utilize the Top Trading Cycles (TTC) algorithm (these may differ based on how they prioritize agents), and show a general approach to proving equivalences between mechanisms from this class. This approach is used to give alternative, simpler proofs of two recent equivalence results for mechanisms with linear priority structures. We also use the same approach to show that these equivalence results can be generalized to mechanisms where the agent priority structure is described by a tree.
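For reference, the baseline Top Trading Cycles algorithm that these mechanisms build on can be sketched as follows. This is the standard housing-market version, with each agent initially owning one object, not the generalized tree-priority variant studied in the essay.

```python
def top_trading_cycles(prefs, owner):
    """Top Trading Cycles for the classic housing market: prefs[i] is agent
    i's preference list over objects (best first), and owner[o] is the agent
    who initially holds object o.  Returns a dict mapping agent -> object."""
    owner = dict(owner)
    allocation = {}
    remaining = set(prefs)
    while remaining:
        # Each remaining agent points at the owner of its best remaining object.
        points_to = {}
        for i in remaining:
            best = next(o for o in prefs[i] if owner.get(o) in remaining)
            points_to[i] = (owner[best], best)
        # Follow the pointers from any agent; a cycle must exist.
        path, i = [], next(iter(remaining))
        while i not in path:
            path.append(i)
            i = points_to[i][0]
        cycle = path[path.index(i):]
        # Agents in the cycle trade along it and leave the market.
        for j in cycle:
            allocation[j] = points_to[j][1]
            remaining.discard(j)
    return allocation

example = top_trading_cycles(
    {0: ['B', 'A', 'C'], 1: ['A', 'B', 'C'], 2: ['A', 'B', 'C']},
    {'A': 0, 'B': 1, 'C': 2},
)
print(example)  # agents 0 and 1 swap their objects; agent 2 keeps C
```

The prioritization choices the essay studies correspond to how initial ownership (or pointing rights) is assigned; the cycle-trading core above stays the same.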
Second, we study the newsvendor model where the manager has recourse to advice, or decision recommendations, from two experts, and where the objective is to minimize the worst-case regret from not following the advice of the better of the two experts. We show the model can be reduced to the classic machine-learning problem of predicting binary sequences, but with an asymmetric cost function, allowing us to obtain an optimal algorithm by modifying a well-known existing one. However, the algorithm we modify, and consequently the optimal algorithm we describe, is not known to be efficiently computable, because it requires evaluations of a function v which is the objective value of recursively defined optimization problems. We analyze v and show that when the two cost parameters of the newsvendor model are small multiples of a common factor, its evaluation is computationally efficient. We also provide a novel and direct asymptotic analysis of v that differs from previous approaches. Our asymptotic analysis gives us insight into the transient structure of v as its parameters scale, enabling us to formulate a heuristic for evaluating v generally. This, in turn, defines a heuristic for the optimal algorithm whose decisions we find in a numerical study to be close to optimal.
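The regret-to-the-better-expert objective can be illustrated with a deliberately simple stand-in policy (follow-the-leader, not the optimal algorithm described in the essay); all cost parameters and demand numbers below are hypothetical.

```python
def nv_cost(q, d, cu=2.0, co=1.0):
    """Newsvendor cost of ordering q against demand d: underage penalty cu
    per lost sale, overage penalty co per leftover unit."""
    return cu * max(d - q, 0.0) + co * max(q - d, 0.0)

def follow_the_leader(expert_qs, demands, cu=2.0, co=1.0):
    """Each period, order what the expert with the lower cost-so-far advises.
    A simple stand-in for the essay's minimax-regret algorithm, used here
    only to illustrate the 'regret vs. the better expert' metric."""
    totals = [0.0, 0.0]          # cumulative cost of each expert's advice
    my_cost = 0.0
    for (q0, q1), d in zip(expert_qs, demands):
        leader = 0 if totals[0] <= totals[1] else 1
        my_cost += nv_cost((q0, q1)[leader], d, cu, co)
        totals[0] += nv_cost(q0, d, cu, co)
        totals[1] += nv_cost(q1, d, cu, co)
    return my_cost - min(totals)  # regret vs. the better expert in hindsight

advice = [(10, 20)] * 6          # expert 0 always says 10, expert 1 always 20
demand = [18, 19, 17, 20, 18, 19]
print(follow_the_leader(advice, demand))
```

The asymmetry between cu and co is exactly what turns the underlying binary-sequence prediction problem into one with an asymmetric cost function.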
In our third essay, we study the classical single-resource capacity allocation problem. In particular, we analyze the relationship between data availability (in the form of demand samples) and performance guarantees for solutions derived from that data. This is done by describing a class of solutions called epsilon-backwards accurate policies and determining a suboptimality gap for this class of solutions. The suboptimality gap we find is in terms of epsilon and is also distribution-free. We then relate solutions generated by a Monte Carlo algorithm to epsilon-backwards accurate policies, showing a lower bound on the quantity of data necessary to ensure that the solution generated by the algorithm is epsilon-backwards accurate with high probability. Combining the two results then allows us to give a lower bound on the data needed to generate an alpha-approximation with a given confidence probability 1 - delta. We find that this lower bound is polynomial in the number of fares, M, and 1/alpha.

Operations research | thl2102 | Industrial Engineering and Operations Research | Theses

A Simulation Model to Analyze the Impact of Golf Skills and a Scenario-based Approach to Options Portfolio Optimization
https://academiccommons.columbia.edu/catalog/ac:143076
Ko, Soonmin | 10.7916/D85D8ZT3 | Wed, 07 Jun 2017 02:46:24 +0000

A simulation model of the game of golf is developed to analyze the impact of various skills (e.g., driving distance, directional accuracy, putting skill, and others) on golf scores. The golf course model includes realistic features of a golf course, including rough, sand, water, and trees. Golfer shot patterns are modeled with t distributions and mixtures of t and normal distributions, since normal distributions do not provide good fits to the data. The model is calibrated to extensive data for amateur and professional golfers. The golf simulation is used to assess the impact of distance and direction on scores, to determine what factors separate pros from amateurs, and to determine the impact of course length on scores. In the second part of the thesis, we use a scenario-based approach to solve a portfolio optimization problem with options. The solution provides the optimal payoff profile given an investor's view of the future, his utility function or risk appetite, and the market prices of options. The scenario-based approach has several advantages over the traditional covariance matrix method, including additional flexibility in the choice of constraints and objective function.

Engineering, Operations research | sk2822 | Industrial Engineering and Operations Research | Theses

Multiproduct Pricing Management and Design of New Service Products
https://academiccommons.columbia.edu/catalog/ac:144706
Wang, Ruxian | 10.7916/D81V5MXD | Wed, 07 Jun 2017 02:45:51 +0000

In this thesis, we study the price optimization and competition of multiple differentiated substitutable products under the general Nested Logit model, and also consider the design and pricing of new service products, e.g., flexible warranties and refundable warranties, under customers' strategic claim behavior. Chapter 2 considers firms that sell multiple differentiated substitutable products and customers whose purchase behavior follows the Nested Logit model, of which the Multinomial Logit model is a special case. In the Nested Logit model, customers make product selection decisions sequentially: they first select a class or nest of products and subsequently choose a product within the selected class. We consider the general Nested Logit model with product-differentiated price coefficients and general nest-heterogeneous degrees. We show that the adjusted markup, which is defined as price minus cost minus the reciprocal of the price coefficient, is constant across all the products in each nest. When optimizing multiple nests of products, the adjusted nested markup is also constant within a nest. Using this result, the multi-product optimization problem can be reduced to a single-dimensional problem on a bounded interval, which is easy to solve. We also use this result to simplify the oligopolistic price competition and characterize the Nash equilibrium. Furthermore, we investigate its application to dynamic pricing and revenue management. In Chapter 3, we investigate the flexible monthly warranty, which offers flexibility to customers and allows them to cancel at any time without penalty. Frequent technological innovations and price declines severely affect sales of extended warranties, as product replacement upon failure becomes an increasingly attractive alternative. To increase sales and profitability, we propose offering flexible-duration extended warranties.
These warranties can appeal to customers who are uncertain about how long they will keep the product as well as to customers who are uncertain about the product's reliability. Flexibility may be added to existing services in the form of monthly billing with month-by-month commitments, or by making existing warranties easier to cancel, with pro-rated refunds. This thesis studies flexible warranties from the perspectives of both the customer and the provider. We present a model of the customer's optimal coverage decisions under the objective of minimizing expected support costs over a random planning horizon. We show that under some mild conditions the customer's optimal coverage policy has a threshold structure. We also show, through an analytical study and numerical examples, how flexible warranties can result in higher profits and higher attach rates. Chapter 4 examines the design and pricing of residual value warranties, which refund customers at the end of the warranty period based on their claim history. Traditional extended warranties for IT products do not differentiate customers according to their usage rates or operating environment. These warranties are priced to cover the costs of high-usage customers, who tend to experience more failures and are therefore more costly to support. This makes traditional warranties economically unattractive to low-usage customers. In this chapter, we introduce, design and price residual value warranties. These warranties refund a part of the upfront price to customers who have zero or few claims, according to a pre-determined refund schedule. By design, the net cost of these warranties is lower for light users than for heavy users. As a result, a residual value warranty can enable the provider to price-discriminate based on usage rates or operating conditions without the need to monitor individual customers' usage.
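The light-user/heavy-user logic can be sketched numerically under an assumed Poisson claim model and a made-up refund schedule; none of the numbers below come from the thesis.

```python
from math import exp, factorial

def expected_net_cost(price, refund_by_claims, claim_rate):
    """Expected net cost of a residual value warranty: the upfront `price`
    minus the expected refund, where refund_by_claims[k] is the refund paid
    to a customer who files k claims (zero beyond the schedule) and the
    claim count is Poisson(claim_rate).  Illustrative assumption, not the
    thesis's calibrated model."""
    expected_refund = sum(
        refund_by_claims[k] * exp(-claim_rate) * claim_rate**k / factorial(k)
        for k in range(len(refund_by_claims))
    )
    return price - expected_refund

schedule = [40.0, 15.0]   # refund for 0 claims, 1 claim; nothing afterwards
light = expected_net_cost(100.0, schedule, claim_rate=0.2)
heavy = expected_net_cost(100.0, schedule, claim_rate=1.5)
print(light, heavy)       # the light user's expected net cost is lower
```

Because light users are more likely to collect the refund, the same posted price self-selects into a lower effective price for them, which is the price-discrimination effect described above.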
Theoretical results and numerical experiments demonstrate how residual value warranties can appeal to a broader range of customers and significantly increase the provider's profits.

Operations research, Industrial engineering | rw2267 | Industrial Engineering and Operations Research | Theses