Academic Commons Search Results
https://academiccommons.columbia.edu/catalog?action=index&controller=catalog&f%5Bsubject_facet%5D%5B%5D=Industrial+engineering&format=rss&fq%5B%5D=has_model_ssim%3A%22info%3Afedora%2Fldpd%3AContentAggregator%22&q=&rows=500&sort=record_creation_date+desc
Language: en-us

Non-Bayesian Inference and Prediction
https://academiccommons.columbia.edu/catalog/ac:v9s4mw6mcg
Xiao, Di
DOI: 10.7916/D8Q81RMV | Tue, 10 Oct 2017 22:16:52 +0000

In this thesis, we first propose a coherent inference model obtained by distorting the prior density in Bayes' rule and replacing the likelihood with a so-called pseudo-likelihood. This model includes existing non-Bayesian inference models as special cases and implies new models of base-rate neglect and conservatism. We prove a necessary and sufficient condition under which the coherent inference model is processing consistent, i.e., implies the same posterior density regardless of how the samples are grouped and processed retrospectively. We show that processing consistency does not imply Bayes' rule by proving a necessary and sufficient condition under which the coherent inference model can be obtained by applying Bayes' rule to a false stochastic model. We then propose a prediction model that combines a stochastic model with certain parameters and a processing-consistent, coherent inference model. We show that this prediction model is processing consistent (i.e., its predictions do not depend on how samples are grouped and processed prospectively) if and only if it is Bayesian. Finally, we apply the new model of conservatism to a car selection problem, a consumption-based asset pricing model, and a regime-switching asset pricing model.

Subjects: Operations research, Mathematical statistics, Industrial engineering
Author UNI: dx2125 | Department: Industrial Engineering and Operations Research | Type: Theses

Fundamental Tradeoffs for Modeling Customer Preferences in Revenue Management
https://academiccommons.columbia.edu/catalog/ac:dncjsxkspb
Desir, Antoine Minh
DOI: 10.7916/D8F76QZ0 | Wed, 02 Aug 2017 16:11:46 +0000

Revenue management (RM) is the science of selling the right product, to the right person, at the right price. A key to the success of RM, which now spans a broad array of industries, is its grounding in mathematical modeling and analytics. This dissertation contributes to the development of new RM tools by (1) exploring fundamental tradeoffs underlying RM problems and (2) designing efficient algorithms for RM applications. Another underlying theme of this dissertation is the modeling of customer preferences, a key component of any RM problem.
The first chapters of this dissertation focus on the model selection problem: many demand models are available but picking the right model is a challenging task. In particular, we explore the tension between the richness of a model and its tractability. To quantify this tradeoff, we focus on the assortment optimization problem, a very general and core RM problem. To capture customer preferences in this context, we use choice models, a particular type of demand model. In Chapters 1, 2, 3 and 4 we design efficient algorithms for the assortment optimization problem under different choice models. By assessing the strengths and weaknesses of different choice models, we can quantify the cost in tractability one has to pay for better predictive power. This in turn leads to a better understanding of the tradeoffs underlying the model selection problem.
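As a concrete illustration of the kind of tractability result this literature relies on (a sketch under standard assumptions, not code from the dissertation): under the plain Multinomial Logit model, a classical result of Talluri and van Ryzin shows that some revenue-ordered assortment is optimal, so only n candidate assortments need to be checked rather than 2^n subsets. All prices and preference weights below are hypothetical.

```python
# Assortment optimization under the Multinomial Logit (MNL) model.
# Classical result (Talluri & van Ryzin): some revenue-ordered
# assortment {products with the k highest prices} is optimal, so n
# candidates suffice instead of 2^n.  Illustrative numbers only.

def expected_revenue(assortment, prices, weights):
    """MNL: product i in S is chosen w.p. w_i / (1 + sum_{j in S} w_j)."""
    denom = 1.0 + sum(weights[i] for i in assortment)
    return sum(prices[i] * weights[i] for i in assortment) / denom

def best_revenue_ordered(prices, weights):
    order = sorted(range(len(prices)), key=lambda i: -prices[i])
    best_set, best_rev = [], 0.0
    for k in range(1, len(order) + 1):
        cand = order[:k]                       # k highest-priced products
        rev = expected_revenue(cand, prices, weights)
        if rev > best_rev:
            best_set, best_rev = cand, rev
    return best_set, best_rev

prices  = [10.0, 8.0, 6.0, 4.0]   # hypothetical unit revenues
weights = [1.0, 1.5, 2.0, 3.0]    # hypothetical MNL preference weights
S, rev = best_revenue_ordered(prices, weights)
print(S, round(rev, 3))
```

On this toy instance the search keeps only the two highest-priced products; adding the cheaper, more popular ones dilutes expected revenue.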
In Chapter 5, we focus on a different question underlying any RM problem: choosing how to sell a given product. We illustrate this tradeoff by focusing on the problem of selling ad impressions via Internet display advertising platforms. In particular, we study how the presence of risk-averse buyers affects the preference for reservation contracts over real-time purchases via a second-price auction. In order to capture the risk aversion of buyers, we study different utility models.

Subjects: Operations research, Revenue management, Consumers' preferences--Mathematical models, Industrial engineering
Author UNI: ad2918 | Department: Industrial Engineering and Operations Research | Type: Theses

Soft Regulation with Crowd Recommendation: Coordinating Self-Interested Agents in Sociotechnical Systems under Imperfect Information
https://academiccommons.columbia.edu/catalog/ac:197950
Luo, Yu; Iyengar, Garud N.; Venkatasubramanian, Venkat
DOI: 10.7916/D8HM58FX | Fri, 30 Jun 2017 16:57:24 +0000

Regulating emerging industries is challenging, even controversial at times. Under-regulation can result in safety threats to plant personnel, surrounding communities, and the environment. Over-regulation may hinder innovation, progress, and economic growth. Since one typically has limited understanding of, and experience with, the novel technology in practice, it is difficult to achieve properly balanced regulation. In this work, we propose a control and coordination policy called soft regulation that attempts to strike the right balance and create a collective learning environment. Under the soft regulation mechanism, individual agents can accept, reject, or partially accept the regulator's recommendation. This non-intrusive coordination does not interrupt normal operations. The extent to which an agent accepts the recommendation is mediated by a confidence level (from 0 to 100%). Among all possible recommendation methods, we investigate two in particular: the best recommendation, wherein the regulator is completely informed, and the crowd recommendation, wherein the regulator collects the crowd's average and recommends that value. We show by analysis and simulation that soft regulation with crowd recommendation performs well: it converges to the optimum and is as good as the best recommendation for a wide range of confidence levels. This work offers a new theoretical perspective on the concept of the wisdom of crowds.

Subjects: Learning, Operations research, Collective behavior, Sociotechnical systems, Sociology, Industrial engineering, System theory
Author UNIs: yl2750, gi10, vv2213 | Departments: Chemical Engineering; Industrial Engineering and Operations Research | Type: Articles

Chance Constrained Optimal Power Flow: Risk-Aware Network Control under Uncertainty
https://academiccommons.columbia.edu/catalog/ac:156182
Bienstock, Daniel; Chertkov, Michael; Harnett, Sean
DOI: 10.7916/D8VH5ZQV | Tue, 27 Jun 2017 17:57:54 +0000

When uncontrollable resources fluctuate, Optimum Power Flow (OPF), routinely used by the electric power industry to re-dispatch hourly controllable generation (coal, gas, and hydro plants) over control areas of transmission networks, can result in grid instability and, potentially, cascading outages. This risk arises because OPF dispatch is computed without awareness of major uncertainty, in particular fluctuations in renewable output. As a result, grid operation under OPF with renewable variability can lead to frequent conditions where power line flow ratings are significantly exceeded. Such conditions, which are borne out by simulations of real grids, would likely result in automatic line tripping to protect lines from thermal stress, a risky and undesirable outcome that compromises stability. Smart grid goals include a commitment to large penetration of highly fluctuating renewables, calling for a reconsideration of current practices, in particular the use of standard OPF. Our Chance Constrained (CC) OPF corrects the problem and mitigates dangerous renewable fluctuations with minimal changes to the current operational procedure. Assuming the availability of a reliable wind forecast parameterizing the distribution function of the uncertain generation, our CC-OPF satisfies all the constraints with high probability while simultaneously minimizing the cost of economic re-dispatch. CC-OPF allows efficient implementation, e.g., solving a typical instance over the 2746-bus Polish network in 20 seconds on a standard laptop.

Subjects: Industrial engineering, Operations research
Author UNIs: db17, srh2144 | Department: Industrial Engineering and Operations Research | Type: Articles

Chance Constrained Optimal Power Flow: Risk-Aware Network Control under Uncertainty
https://academiccommons.columbia.edu/catalog/ac:153902
Bienstock, Daniel; Chertkov, Michael; Harnett, Sean
DOI: 10.7916/D8PN9GCX | Tue, 27 Jun 2017 15:37:47 +0000

When uncontrollable resources fluctuate, Optimum Power Flow (OPF), routinely used by the electric power industry to re-dispatch hourly controllable generation (coal, gas, and hydro plants) over control areas of transmission networks, can result in grid instability and, potentially, cascading outages. This risk arises because OPF dispatch is computed without awareness of major uncertainty, in particular fluctuations in renewable output. As a result, grid operation under OPF with renewable variability can lead to frequent conditions where power line flow ratings are significantly exceeded. Such conditions, which are borne out by simulations of real grids, would likely result in automatic line tripping to protect lines from thermal stress, a risky and undesirable outcome that compromises stability. Smart grid goals include a commitment to large penetration of highly fluctuating renewables, calling for a reconsideration of current practices, in particular the use of standard OPF. Our Chance Constrained (CC) OPF corrects the problem and mitigates dangerous renewable fluctuations with minimal changes to the current operational procedure. Assuming the availability of a reliable wind forecast parameterizing the distribution function of the uncertain generation, our CC-OPF satisfies all the constraints with high probability while simultaneously minimizing the cost of economic re-dispatch. CC-OPF allows efficient implementation, e.g., solving a typical instance over the 2746-bus Polish network in 20 seconds on a standard laptop.

Subjects: Industrial engineering, Operations research
Author UNIs: db17, srh2144 | Department: Industrial Engineering and Operations Research | Type: Articles

Leveraging the mining industry's energy demand to improve host countries' power infrastructure
https://academiccommons.columbia.edu/catalog/ac:156324
Toledano, Perrine
DOI: 10.7916/D8KK9M58 | Mon, 19 Jun 2017 17:48:28 +0000

The World Bank estimates that African investment needs in infrastructure would cost US$93 billion per year, only half of which is for the power sector. At the same time, the availability of power lies at the core of a mine's development strategy; mining operators need to ensure that the energy demand of mining operations is met. This is especially the case in remote areas, where mining companies are developing large projects with little or no connectivity to national grids and very limited options for electricity supply.
To address these energy problems, the mining industry has adopted different solutions depending on the power situation of the country, the project's energy demand, and the project's distance from the grid. When sourcing from the grid is too expensive, or when there is no grid, the industry finances and builds its own power generation facilities or sources from a third party that is a private power generator. When sourcing from the grid is less expensive than own generation, the industry either sources from the grid or finances/co-finances the upgrade of the power assets under various arrangements with the public utility.

For a mining company, the goal is to maximize cost savings. For a host country, the challenge is to maximize welfare gains by leveraging any investment in power infrastructure development for the electrification needs of the country. This could be achieved by connecting the mine to the grid and incentivizing the company to produce extra capacity to sell to the public utility, in order to increase supply and reduce the electricity cost, or by requiring that the privately financed network be open to third-party access, so that towns and populations between the mine and the grid also benefit from the privately financed distribution lines.

Both cost savings and welfare gains can be achieved simultaneously if sound regulations and efficient coordination mechanisms are in place. Without appropriate regulation, the opportunity for the country will be missed. Without appropriate coordination mechanisms within the mining industry, or between the industry and the government, economies of scale will be lost. Therefore, to take advantage of the mining industry's investments in power infrastructure, and to make sure that the country benefits from those investments, an appropriate planning, regulatory, and commercial framework is needed. If power assets are leveraged and designed to contribute to the development of public infrastructure at the national, regional, or community levels, the incremental capital cost of building additional capacity can be reduced and the economic and social spillover effects can extend far beyond the mining sector.
The purpose of this working paper is to distill good-practice principles, informed by expert opinion, observed in power infrastructure development around the world that leverages the mining industry's energy demand.

Subjects: Economics, Industrial engineering
Author UNI: pt2179 | Department: Vale Columbia Center on Sustainable International Investment | Type: Reports

Essays on Cloud Pricing and Causal Inference
https://academiccommons.columbia.edu/catalog/ac:200357
Kilcioglu, Cinar
DOI: 10.7916/D8R78F9Q | Thu, 15 Jun 2017 15:04:37 +0000

In this thesis, we study the economics and operations of cloud computing, and we propose new matching methods in observational studies that enable us to estimate the effect of green building practices on market rents.
In the first part, we study a stylized revenue maximization problem for a provider of cloud computing services, where the service provider (SP) operates an infinite-capacity system in a market of customers who are heterogeneous in their valuation and congestion sensitivity. The SP offers two service options: one with guaranteed service availability, and one where users bid for resource availability and only the "winning" bids at any point in time get access to the service. We show that even though capacity is unlimited, in several settings, depending on the relation between valuation and congestion sensitivity, the revenue-maximizing service provider will choose to make the spot service option stochastically unavailable. This form of intentional service degradation is optimal in settings where user valuation per unit time increases sub-linearly with respect to congestion sensitivity (i.e., the disutility per unit time when the service is unavailable) -- a form of "damaged goods." We provide data evidence based on an analysis of price traces from the largest cloud service provider, Amazon Web Services.
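The "damaged goods" effect described here can be reproduced with a toy two-type example (all parameters are illustrative inventions of this note, not the thesis's model or data): when valuation grows sub-linearly in congestion sensitivity, a grid search over prices and spot availability finds that the revenue-maximizing provider deliberately sets availability below 100% in order to segment the market.

```python
# Toy "damaged goods" illustration: two customer types (valuation v,
# congestion sensitivity c), with v growing sub-linearly in c.
# Guaranteed service costs pg; the spot option costs ps and has
# availability q, so a spot user suffers disutility c*(1-q).
# A grid search shows the revenue optimum degrades the spot service
# (q < 1).  Hypothetical parameters only.

import itertools

TYPES = [(4.0, 1.0), (6.0, 9.0)]   # (v, c): v is sub-linear in c

def revenue(pg, ps, q):
    rev = 0.0
    for v, c in TYPES:
        u_g = v - pg                      # utility of guaranteed service
        u_s = v - c * (1.0 - q) - ps      # utility of degraded spot
        if max(u_g, u_s) < 0.0:
            continue                      # customer walks away
        rev += pg if u_g >= u_s else ps   # picks the better option
    return rev

prices = [i / 4 for i in range(41)]       # 0.00 .. 10.00, step 0.25
avail = [i / 20 for i in range(21)]       # 0.00 .. 1.00, step 0.05
best = max((revenue(pg, ps, q), q, pg, ps)
           for pg, ps, q in itertools.product(prices, prices, avail))
full = max(revenue(p, p, 1.0) for p in prices)  # fully available spot
print(best, full)
```

With these numbers, intentionally degraded spot availability strictly beats the best single-price, fully available design.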
In the second part, we study competition on price and quality in cloud computing. The public "infrastructure as a service" cloud market possesses unique features that make it difficult to predict long-run economic behavior. On the one hand, major providers buy their hardware from the same manufacturers, operate in similar locations, and offer a similar menu of products. On the other hand, the competitors use different proprietary "fabric" to manage virtualization, resource allocation, and data transfer. The menus offered by each provider involve a discrete number of choices (virtual machine sizes) and allow providers to locate in different parts of the price-quality space. We document this differentiation empirically by running benchmarking tests, which allows us to calibrate a model of firm technology; firm technology is an input into our theoretical model of price-quality competition. The monopoly case highlights the importance of competition in blocking a "bad equilibrium" where performance is intentionally slowed down or options are unduly limited. In duopoly, price competition is fierce, but prices do not converge to the same level because of price-quality differentiation. The model helps explain market trends, such as the healthy operating profit margin recently reported by Amazon Web Services. Our empirically calibrated model helps explain not only price-cutting behavior but also how providers can maintain a profit despite predictions that the market "should be" totally commoditized.
The backbone of cloud computing is datacenters, whose energy consumption is enormous. In recent years, there has been an extensive effort to make datacenters more energy efficient. Similarly, buildings are in the process of going "green," as they have a major impact on the environment through excessive use of resources. In the last part of this thesis, we revisit a previous study on the economics of environmentally sustainable buildings and estimate the effect of green building practices on market rents. For this, we use new matching methods that take advantage of the clustered structure of the buildings data. We propose a general framework for matching in observational studies, and specific matching methods within this framework that simultaneously achieve three goals: (i) maximize the information content of a matched sample (and, in some cases, also minimize the variance of a difference-in-means effect estimator); (ii) form the matches using a flexible matching structure (such as a one-to-many/many-to-one structure); and (iii) directly attain covariate balance as specified by the investigator before matching. To our knowledge, existing matching methods achieve at most two of these goals simultaneously. Also, unlike most matching methods, the proposed methods do not require estimation of the propensity score or other dimensionality reduction techniques, although with the proposed methods these can be used as additional balancing covariates in the context of (iii). Using these matching methods, we find that green buildings have 3.3% higher rental rates per square foot than otherwise similar buildings without green ratings, a moderately larger effect than the one previously found.

Subjects: Data processing service centers--Energy conservation, Cloud computing, Industrial management, Industrial engineering, Operations research
Author UNI: ck2560 | Department: Business | Type: Theses

Applied Inventory Management: New Approaches to Age-Old Problems
https://academiccommons.columbia.edu/catalog/ac:194202
Daniel Guetta, Charles Raphael
DOI: 10.7916/D84M94B1 | Wed, 14 Jun 2017 21:12:18 +0000

Supply chain management is one of the fundamental topics in the field of operations research, and a vast literature exists on the subject. Many recent developments in the field are rapidly narrowing the gap between the systems handled in the literature and the real-life problems companies need to solve on a day-to-day basis. However, certain features often observed in real-world systems elude even these most recent developments. In this thesis, we consider a number of these features and propose new heuristics, together with methodologies to evaluate their performance.
In Chapter 2, we consider a general two-echelon distribution system consisting of a depot and multiple sales outlets that face random demands for a given item. The replenishment process consists of two stages: the depot procures the item from an outside supplier, while the retailers' inventories are replenished by shipments from the depot. Both replenishment stages are associated with a facility-specific leadtime. The depot as well as the retailers face limited inventory capacity. We propose a heuristic for this class of dynamic programming models to obtain an upper bound on optimal costs, together with a new approach, based on Lagrangian relaxation, to generate lower bounds. We report on an extensive numerical study with close to 14,000 instances that evaluates the accuracy of the lower bound and the optimality gap of the various heuristic policies. Our study reveals that our policy performs exceedingly well across almost the entire parameter spectrum.
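To give a feel for the system studied in Chapter 2, here is a minimal simulation of a one-depot, two-retailer network under a naive local base-stock heuristic. This is the kind of simple baseline such heuristics are measured against, not the chapter's proposed policy, and every parameter below is illustrative.

```python
# Minimal simulation of a one-depot, two-retailer distribution system
# under a naive local order-up-to (base-stock) heuristic.  Baseline
# sketch only; all parameters are illustrative.

import math
import random
from collections import deque

random.seed(0)

L_DEPOT, L_RETAIL = 2, 1               # replenishment leadtimes (periods)
S_DEPOT, S_RETAIL = 30, 12             # order-up-to levels
H_DEPOT, H_RETAIL, B = 0.5, 1.0, 9.0   # holding and backorder costs
MEAN_DEMAND, T = 5.0, 5000

def poisson(lam):                      # Knuth's Poisson sampler
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

depot_stock = S_DEPOT
depot_pipe = deque([0] * L_DEPOT)      # in-transit supplier orders
ret_stock = [S_RETAIL, S_RETAIL]       # net stock (negative = backorders)
ret_pipe = [deque([0] * L_RETAIL) for _ in range(2)]
cost = 0.0
for _ in range(T):
    # shipments arrive
    depot_stock += depot_pipe.popleft()
    depot_pipe.append(0)
    for r in range(2):
        ret_stock[r] += ret_pipe[r].popleft()
        ret_pipe[r].append(0)
    # retailers order up to S_RETAIL; depot ships what it has on hand
    for r in range(2):
        want = S_RETAIL - (ret_stock[r] + sum(ret_pipe[r]))
        ship = max(0, min(want, depot_stock))
        depot_stock -= ship
        ret_pipe[r][-1] += ship
    # depot orders up to S_DEPOT from the (uncapacitated) supplier
    depot_pipe[-1] += S_DEPOT - (depot_stock + sum(depot_pipe))
    # demand realizes, then period costs accrue
    for r in range(2):
        ret_stock[r] -= poisson(MEAN_DEMAND)
        cost += H_RETAIL * max(ret_stock[r], 0) + B * max(-ret_stock[r], 0)
    cost += H_DEPOT * depot_stock
print(round(cost / T, 2))   # long-run average cost of the heuristic
```

A heuristic like the chapter's would be judged by how far such a simulated cost sits above a computed lower bound.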
In Chapter 3, we extend the model above to distribution systems involving several items. In this setting, two interdependencies can arise between items that considerably complicate the problem. First, shared storage capacity at each of the retail outlets results in a trade-off between items; ordering more of one item means less space is available for another. Second, economies of scope can occur in the order costs if several items can be ordered from a single supplier, incurring only one fixed cost. To our knowledge, our approach is the first proposed to handle such complex, multi-echelon, multi-item systems. We propose a heuristic for this class of dynamic programming models to obtain an upper bound on optimal costs, together with an approach to generate lower bounds. We report on an extensive numerical study with close to 1,200 instances which reveals that our heuristic performs excellently across the entire parameter spectrum.

In Chapter 4, we consider a periodic-review stochastic inventory control system consisting of a single retailer that faces random demands for a given item, and in which demand forecasts are dynamically updated (for example, new information observed in one period may affect our beliefs about demand distributions in future periods). Replenishment orders are subject to fixed and variable costs. A number of heuristics exist for such systems, but to our knowledge, no general approach exists for finding lower bounds on their optimal costs. We develop a general approach for finding lower bounds on the cost of such systems using an information relaxation. We test our approach on a model with advance demand information and obtain good lower bounds over a range of problem parameters.
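The lower-bounding idea in Chapter 4 can be illustrated in its simplest "perfect hindsight" form (a cruder relaxation than the chapter's information-relaxation approach): reveal each simulated demand path in advance, solve the resulting deterministic fixed-plus-linear-cost instance exactly with the Wagner-Whitin recursion, and average. The expected hindsight optimum lower-bounds the cost of any nonanticipating policy. Parameters below are hypothetical.

```python
# Perfect-hindsight lower bound for an inventory problem with a fixed
# order cost K, linear cost c, and holding cost h: solve each revealed
# demand path exactly (Wagner-Whitin dynamic program), then average.
# Illustrative parameters only.

import random

K, c, h = 50.0, 2.0, 1.0   # fixed cost, unit cost, holding cost
T = 12                     # planning horizon (periods)

def wagner_whitin(d):
    """Exact min cost to meet demands d[0..n-1] with no backorders."""
    n = len(d)
    best = [0.0] + [float("inf")] * n   # best[t]: min cost of periods < t
    for t in range(n):
        hold, seg = 0.0, 0
        for j in range(t, n):           # one order at t covers t..j
            seg += d[j]
            if j > t:
                hold += h * (j - t) * d[j]   # d[j] is held j-t periods
            # a span of all-zero demand needs no order at all
            cand = best[t] + (0.0 if seg == 0 else K + c * seg + hold)
            if cand < best[j + 1]:
                best[j + 1] = cand
    return best[n]

random.seed(1)
paths = [[random.randint(0, 10) for _ in range(T)] for _ in range(500)]
bound = sum(wagner_whitin(d) for d in paths) / len(paths)
print(round(bound, 2))   # lower bound on any policy's average cost
```

The gap between a heuristic's simulated cost and this bound certifies the heuristic's suboptimality, which is exactly the role lower bounds play throughout the thesis.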
Finally, in Appendix A, we begin to tackle the problem of using these methods in real supply chain systems. We obtained data from a luxury goods manufacturer to inspire our study. Unfortunately, the methods developed in earlier chapters were not directly applicable to these data. Instead, we developed alternative heuristic methods, and we considered statistical techniques that might be used to obtain the parameters required for these heuristics from the available data.

Subjects: Industrial management, Inventory control, Inventory control--Data processing, Inventory control--Evaluation, Inventory control--Management, Retail trade--Inventory control, Business logistics, Operations research, Industrial engineering
Department: Business | Type: Theses

Cutting Planes for Convex Objective Nonconvex Optimization
https://academiccommons.columbia.edu/catalog/ac:166569
Michalka, Alexander
DOI: 10.7916/D8TF04RG | Thu, 08 Jun 2017 16:12:38 +0000

This thesis studies methods for tightening relaxations of optimization problems with convex objectives over a nonconvex domain. A class of linear inequalities derived by lifting easily obtained valid inequalities is introduced, and it is shown that this class of inequalities is sufficient to describe the epigraph of a convex and differentiable function over a general domain. In the special case where the objective is a positive definite quadratic function, polynomial-time separation procedures using the new class of lifted inequalities are developed for the cases where the domain is the complement of the interior of a polyhedron, a union of polyhedra, or the complement of the interior of an ellipsoid. Extensions to positive semidefinite and indefinite quadratic objectives are also studied. Applications and computational considerations are discussed, and the results of a series of numerical experiments are presented.

Subjects: Industrial engineering
Author UNI: adm2148 | Department: Industrial Engineering and Operations Research | Type: Theses

Resource Cost Aware Scheduling Problems
https://academiccommons.columbia.edu/catalog/ac:166566
Carrasco, Rodrigo
DOI: 10.7916/D8Z60WDX | Thu, 08 Jun 2017 16:12:25 +0000

Managing the consumption of non-renewable and/or limited resources has become an important issue in many different settings. In this dissertation we explore the topic of resource cost aware scheduling. Unlike pure scheduling problems, in the resource cost aware setting we are interested not only in a scheduling performance metric but also in the cost of the resources consumed to achieve a certain performance level. There are several ways in which the cost of non-renewable resources can be added to a scheduling problem. Throughout this dissertation we focus on the case where the resource consumption cost is added, as part of the objective, to a scheduling performance metric such as weighted completion time or weighted tardiness.

We make several contributions to the problem of scheduling with non-renewable resources. For the specific setting in which energy consumption is the important resource, our contributions are the following. We introduce a model that extends previous energy cost models by allowing more general cost functions that can be job-dependent. We further generalize the problem by allowing arbitrary precedence constraints and release dates. We give approximation algorithms for minimizing an objective that combines a scheduling metric, namely total weighted completion time or total weighted tardiness, with the total energy consumption cost. Our approximation algorithm is based on an interval-and-speed-indexed IP formulation: we solve the linear relaxation of this IP and use the solution to compute a schedule. We show that these algorithms have small constant approximation ratios. Through experimental analysis we show that the empirical approximation ratios are much better than the theoretical ones and that the solutions are in fact close to optimal.
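The energy/performance tradeoff at the core of this setting can be seen in a one-job sketch (my illustration, not the dissertation's IP-based algorithm): under the classic power law P(s) = s^alpha, a job of size p run at speed s finishes at time p/s and consumes p * s^(alpha-1) energy, and minimizing weighted completion time plus energy has a closed-form optimal speed.

```python
# One-job speed-scaling sketch: minimize  w*(p/s) + p*s**(alpha-1)
# (weighted completion time plus energy under power law P(s)=s**alpha).
# Setting the derivative to zero gives s* = (w/(alpha-1))**(1/alpha).
# Illustrative numbers; not the dissertation's algorithm.

def cost(s, p, w, alpha):
    return w * p / s + p * s ** (alpha - 1)

def optimal_speed(w, alpha):
    return (w / (alpha - 1)) ** (1.0 / alpha)

p, w, alpha = 10.0, 16.0, 3.0
s_star = optimal_speed(w, alpha)     # analytically (16/2)**(1/3) = 2.0
# verify against a brute-force grid search over speeds 0.001..10.000
grid = min(range(1, 10001), key=lambda i: cost(i / 1000, p, w, alpha))
print(s_star, grid / 1000)
```

The same marginal tradeoff, discretized over intervals and speeds, is what an interval-and-speed-indexed formulation encodes for many jobs at once.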
We also show empirically that the algorithm can be used, with good approximation and competitiveness ratios, in additional settings not covered by the theoretical results, such as flow time objectives or an online setting.

Subjects: Industrial engineering, Mathematics
Department: Industrial Engineering and Operations Research | Type: Theses

Rapid Advance: High Technology in China in the Global Electronic Age
https://academiccommons.columbia.edu/catalog/ac:161506
Mays, Susan Kay
DOI: 10.7916/D8HQ464Q | Thu, 08 Jun 2017 13:59:26 +0000

This study examines how a critical high-technology industry in China, the semiconductor industry, advanced from being an isolated, centrally planned industry in the mid-1980s to being an important participant in the competitive global semiconductor industry after 2000. The research examines the most important trends, projects, and enterprises in China, with attention to China's global partners and China's rapidly growing role in the world economy. In the 1990s, semiconductor enterprises in China proactively made key structural changes and global linkages that set the stage for the industry's growth after 2000. The study thus provides an industry-level assessment of how reforms and technological upgrading occurred in contemporary China, including the degree and character of so-called state-led development. This research also shows that the development of this high-technology industry had direct and positive effects on China's larger business environment and trade policies. Finally, this study compares the development of the semiconductor industry in China with its development in Japan, South Korea, and Taiwan, identifying differences in national approaches and the effects of the global information revolution.

Subjects: Economic history, Asians, Industrial engineering
Author UNI: sm2075 | Department: History | Type: Theses

Development of Construction Projects Scheduling with Evolutionary Algorithms
https://academiccommons.columbia.edu/catalog/ac:140087
Tavakolan, Mehdi
DOI: 10.7916/D85M6CPK | Wed, 07 Jun 2017 02:45:59 +0000

Evolutionary Algorithms (EAs), appropriate tools for optimizing multi-objective problems, have been applied to optimize construction projects over the last two decades. However, improving the convergence ratio and processing time of the most widely applied algorithms, such as the Genetic Algorithm (GA), Particle Swarm Optimization (PSO), and Ant Colony Optimization (ACO), remains poorly understood in the construction engineering and management domains. Furthermore, hybrid algorithms such as the Hybrid Genetic Algorithm-Particle Swarm Optimization (HGAPSO) and the Shuffled Frog Leaping Algorithm (SFLA) have been presented in the computational optimization and water resource management domains in recent years to avoid the pitfalls of the aforementioned algorithms. In this dissertation, I present three studies on hybrid algorithms showing that our proposed hybrid approaches are superior to existing optimization algorithms in finding project schedule solutions with lower total project cost, shorter total project duration, and lower total resource allocation moments. In the first, I present an HGAPSO approach to solve complex time-cost-resource optimization (TCRO) problems in construction project planning; our approach uses fuzzy set theory to characterize uncertainty about the input data (i.e., the time, cost, and resources required to perform an activity). In the second, I present an SFLA approach to solve TCRO problems in which activity splitting is allowed during execution. The third study evaluates the impact of inflation on resource unit prices during the execution of construction projects. This research presents a comprehensive TCRO model, comparing the two hybrid algorithms, HGAPSO and SFLA, with the three most capable algorithms -- GA, PSO, and ACO -- on six examples that differ in project structure, construction assumptions, and types of time-cost functions.
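To make the TCRO setup concrete, here is a minimal genetic algorithm for a toy discrete time-cost tradeoff with serial activities: each activity picks a mode (duration, direct cost), and the objective adds an overhead cost per period of project duration. This is a plain-GA stand-in sketch; the instance and parameters are mine, not the dissertation's HGAPSO or SFLA, and the tiny instance is also solved exhaustively for comparison.

```python
# Minimal GA for a discrete time-cost tradeoff: serial activities,
# total cost = direct costs + OVERHEAD * project duration.
# Toy stand-in for the hybrid EAs discussed above; illustrative data.

import itertools
import random

random.seed(7)

MODES = [  # (duration, direct cost) options per activity
    [(3, 100), (2, 180), (1, 300)],
    [(5, 200), (4, 260), (2, 500)],
    [(4, 150), (3, 210), (2, 320)],
    [(6, 220), (4, 380), (3, 480)],
    [(2,  90), (1, 170)],
    [(5, 180), (3, 300), (2, 420)],
]
OVERHEAD = 70.0   # indirect cost per time period

def total_cost(modes):
    dur = sum(MODES[i][m][0] for i, m in enumerate(modes))
    direct = sum(MODES[i][m][1] for i, m in enumerate(modes))
    return direct + OVERHEAD * dur

def ga(pop_size=30, generations=60, mut=0.1):
    pop = [[random.randrange(len(a)) for a in MODES] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=total_cost)
        elite = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, len(MODES))
            child = a[:cut] + b[cut:]         # one-point crossover
            for i in range(len(child)):       # per-gene mutation
                if random.random() < mut:
                    child[i] = random.randrange(len(MODES[i]))
            children.append(child)
        pop = elite + children
    return min(pop, key=total_cost)

best = ga()
exact = min(itertools.product(*(range(len(a)) for a in MODES)),
            key=lambda m: total_cost(list(m)))
print(total_cost(best), total_cost(list(exact)))
```

On realistic project networks exhaustive enumeration is impossible, which is where convergence speed between competing EAs, the focus of the dissertation, starts to matter.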
Each of the three studies helps overcome some of the shortcomings of EAs and contributes to obtaining project schedule solutions that are optimal in total project duration, total project cost, and total resource allocation moments for construction projects in the planning stage. The findings have significant implications for improving the scheduling of construction projects.

Subjects: Civil engineering, Industrial engineering
Author UNI: mt2568 | Department: Civil Engineering and Engineering Mechanics | Type: Theses

Multiproduct Pricing Management and Design of New Service Products
https://academiccommons.columbia.edu/catalog/ac:144706
Wang, Ruxian
DOI: 10.7916/D81V5MXD | Wed, 07 Jun 2017 02:45:51 +0000

In this thesis, we study price optimization and competition for multiple differentiated substitutable products under the general Nested Logit model, and we also consider the design and pricing of new service products, e.g., flexible warranties and refundable warranties, under customers' strategic claim behavior.

Chapter 2 considers firms that sell multiple differentiated substitutable products to customers whose purchase behavior follows the Nested Logit model, of which the Multinomial Logit model is a special case. In the Nested Logit model, customers make product selection decisions sequentially: they first select a class, or nest, of products and subsequently choose a product within the selected class. We consider the general Nested Logit model with product-differentiated price coefficients and general nest-heterogeneous degrees. We show that the adjusted markup, defined as price minus cost minus the reciprocal of the price coefficient, is constant across all products in each nest. When optimizing multiple nests of products, the adjusted nested markup is also constant within a nest. Using this result, the multi-product optimization problem can be reduced to a single-dimensional problem on a bounded interval, which is easy to solve. We also use this result to simplify the oligopolistic price competition and characterize the Nash equilibrium, and we investigate its application to dynamic pricing and revenue management.

In Chapter 3, we investigate the flexible monthly warranty, which offers customers flexibility by allowing them to cancel at any time without penalty. Frequent technological innovations and price declines severely affect sales of extended warranties, as product replacement upon failure becomes an increasingly attractive alternative. To increase sales and profitability, we propose offering flexible-duration extended warranties.
These warranties can appeal to customers who are uncertain about how long they will keep the product, as well as to customers who are uncertain about the product's reliability. Flexibility may be added to existing services in the form of monthly billing with month-by-month commitments, or by making existing warranties easier to cancel, with pro-rated refunds. This thesis studies flexible warranties from the perspectives of both the customer and the provider. We present a model of the customer's optimal coverage decisions under the objective of minimizing expected support costs over a random planning horizon, and we show that under some mild conditions the customer's optimal coverage policy has a threshold structure. We also show, through an analytical study and numerical examples, how flexible warranties can result in higher profits and higher attach rates.

Chapter 4 examines the design and pricing of residual value warranties, which refund customers at the end of the warranty period based on their claim history. Traditional extended warranties for IT products do not differentiate customers according to their usage rates or operating environment. These warranties are priced to cover the costs of high-usage customers, who tend to experience more failures and are therefore more costly to support. This makes traditional warranties economically unattractive to low-usage customers. In this chapter, we introduce, design, and price residual value warranties, which refund part of the upfront price to customers who have zero or few claims, according to a pre-determined refund schedule. By design, the net cost of these warranties is lower for light users than for heavy users. As a result, a residual value warranty can enable the provider to price-discriminate based on usage rates or operating conditions without the need to monitor individual customers' usage.
Theoretical results and numerical experiments demonstrate how residual value warranties can appeal to a broader range of customers and significantly increase the provider's profits.Operations research, Industrial engineeringrw2267Industrial Engineering and Operations ResearchThesesThe N-k Problem in Power Grids: New Models, Formulations and Numerical Experiments (Extended Version)
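As a sketch of the constant-adjusted-markup result in Wang's thesis above: in the Multinomial Logit special case with a common price coefficient b, every product's optimal price takes the form cost + 1/b + θ for one shared adjusted markup θ, so the multi-product pricing problem reduces to a one-dimensional search. The code below is a minimal illustration under those assumptions (common coefficient, outside option with utility zero); the function names are illustrative, not from the thesis.

```python
import math

def mnl_profit(theta, a, c, b):
    """Expected profit per customer when every product carries the same
    adjusted markup theta, i.e. price p_i = c_i + 1/b + theta, under the
    Multinomial Logit model with qualities a, costs c, price coefficient b."""
    prices = [ci + 1.0 / b + theta for ci in c]
    weights = [math.exp(ai - b * pi) for ai, pi in zip(a, prices)]
    denom = 1.0 + sum(weights)  # the 1.0 is the no-purchase option
    # every margin equals 1/b + theta, so profit = margin * purchase prob.
    return (1.0 / b + theta) * sum(weights) / denom

def best_markup(a, c, b, lo=0.0, hi=10.0, steps=2000):
    """One-dimensional grid search over the common adjusted markup theta."""
    grid = [lo + (hi - lo) * k / steps for k in range(steps + 1)]
    return max(grid, key=lambda th: mnl_profit(th, a, c, b))
```

A finer search (e.g. golden-section) would exploit the unimodality that makes the bounded-interval reduction easy to solve.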
https://academiccommons.columbia.edu/catalog/ac:125318
Bienstock, Daniel; Verma, Abhinav10.7916/D8Z89K4QWed, 31 May 2017 19:34:26 +0000Given a power grid modeled by a network together with equations describing the power flows, power generation and consumption, and the laws of physics, the so-called N-k problem asks whether there exists a set of k or fewer arcs whose removal will cause the system to fail. The case where k is small is of practical interest. We present theoretical and computational results involving a mixed-integer model and a continuous nonlinear model related to this question.Industrial engineeringdb17ArticlesStability and Structural Properties of Stochastic Storage Networks
https://academiccommons.columbia.edu/catalog/ac:v15dv41nv6
Kella, Offer; Whitt, Ward10.7916/D8TM7PK7Thu, 25 May 2017 18:46:41 +0000We establish stability, monotonicity, concavity and subadditivity properties for open stochastic storage networks in which the driving process has stationary increments. A principal example is a stochastic fluid network in which the external inputs are random but all internal flows are deterministic. For the general model, the multi-dimensional content process is tight under the natural stability condition. The multi-dimensional content process is also stochastically increasing when the process starts at the origin, implying convergence to a proper limit under the natural stability condition. In addition, the content process is monotone in its initial conditions. Hence, when any content process with nonzero initial conditions hits the origin, it couples with the content process starting at the origin. However, in general, a tight content process need not hit the origin.Stochastic systems, Lévy processes, Stochastic systems, Industrial engineering, Probabilities, Queuing theory, Operations researchww2040Industrial Engineering and Operations ResearchArticlesMarkov Chain Models to Estimate the Premium for Extended Hedge Fund Lockups
https://academiccommons.columbia.edu/catalog/ac:9cnp5hqc0g
Derman, Emanuel; Park, Kun Soo; Whitt, Ward10.7916/D8BP0F80Thu, 25 May 2017 18:37:04 +0000A lockup period for investment in a hedge fund is a time period after making the investment during which the investor cannot freely redeem his investment. It is routine to have a one-year lockup period, but recently the requested lockup periods have grown longer. We estimate the premium for such extended lockup, taking the point of view of a manager of a fund of funds, who has to choose between two investments in similar funds in the same strategy category, the first having a one-year lockup and the second having an n-year lockup. Assuming that the manager will rebalance his portfolio of hedge funds on a yearly basis, if permitted, we define the annual lockup premium as the difference between the rates of return from these investments. We develop a Markov chain model to estimate this lockup premium. By solving systems of equations, we fit the Markov chain transition probabilities to three directly observable hedge fund performance measures: the persistence of return, the variance of return and the hedge-fund death rate. The model quantifies the way the lockup premium depends on these parameters. Data from the TASS database are used to estimate the persistence, which is found to be statistically significant.Hedge funds, Liquidity (Economics), Markov processes, Stochastic systems, Industrial engineering, Probabilities, Queuing theory, Operations researched2114, ww2040Industrial Engineering and Operations ResearchArticlesWhat you should know about queueing models to set staffing requirements in service systems
https://academiccommons.columbia.edu/catalog/ac:fqz612jm7c
Whitt, Ward10.7916/D88P6C1PThu, 25 May 2017 18:37:02 +0000One traditional application of queueing models is to help set staffing requirements in service systems, but the way to do so is not entirely straightforward, largely because demand in service systems typically varies greatly by the time of day. This article discusses ways - old and new - to cope with that time-varying demand.Call centers, Queuing theory, Queuing theory, Queuing networks (Data transmission), Industrial engineering, Probabilities, Operations researchww2040Industrial Engineering and Operations ResearchArticlesWorkload bounds in fluid models with priorities
https://academiccommons.columbia.edu/catalog/ac:dbrv15dv54
Berger, Arthur W.; Whitt, Ward10.7916/D8RV116ZThu, 25 May 2017 18:36:56 +0000In this paper we establish upper and lower bounds on the steady-state per-class workload distributions in a single-server queue with multiple priority classes. Motivated by communication network applications, the model has constant processing rate and general input processes with stationary increments. The bounds involve corresponding quantities in related models with the first-come first-served discipline. We apply the bounds to support a new notion of effective bandwidths for multi-class systems with priorities. We also apply the lower bound to obtain sufficient conditions for the workload distributions to have heavy tails.Queuing theory, Stochastic models, Industrial engineering, Probabilities, Queuing theory, Operations researchww2040Industrial Engineering and Operations ResearchArticlesLinear stochastic fluid networks
https://academiccommons.columbia.edu/catalog/ac:d51c59zw58
Kella, Offer; Whitt, Ward10.7916/D89P3D52Thu, 25 May 2017 18:36:47 +0000We introduce open stochastic fluid networks that can be regarded as continuous analogues or fluid limits of open networks of infinite-server queues. Random exogenous input may come to any of the queues. At each queue, a c.d.f.-valued stochastic process governs the proportion of the input processed by a given time after arrival. The routeing may be deterministic (a specified sequence of successive queue visits) or proportional, i.e. a stochastic transition matrix may govern the proportion of the output routed from one queue to another. This stochastic fluid network with deterministic c.d.f.s governing processing at the queues arises as the limit of normalized networks of infinite-server queues with batch arrival processes where the batch sizes grow. In this limit, one can think of each particle having an evolution through the network, depending on its time and place of arrival, but otherwise independent of all other particles. A key property associated with this independence is the linearity: the workload associated with a superposition of inputs, each possibly having its own pattern of flow through the network, is simply the sum of the component workloads. As with infinite-server queueing models, the tractability makes the linear stochastic fluid network a natural candidate for approximations.Stochastic systems, Storage area networks (Computer networks), Lévy processes, Queuing networks (Data transmission), Industrial engineering, Probabilities, Queuing theory, Operations researchww2040Industrial Engineering and Operations ResearchArticlesDecomposition approximations for time-dependent Markovian queueing networks
https://academiccommons.columbia.edu/catalog/ac:r7sqv9s4pp
Whitt, Ward10.7916/D8H99HPCThu, 25 May 2017 18:36:41 +0000Motivated by the development of complex telephone call center networks, we present a general framework for decompositions to approximately solve Markovian queueing networks with time-dependent and state-dependent transition rates. The decompositions are based on assuming either full or partial product form for the time-dependent probability vectors at each time. These decompositions reduce the number of time-dependent ordinary differential equations that must be solved. We show how special structure in the transition rates can be exploited to speed up computation. There is extra theoretical support for the decomposition approximation when the steady-state distribution of the time-homogeneous version of the model has product form.Queuing theory, Queuing networks (Data transmission), Markov processes, Call centers, Industrial engineering, Probabilities, Operations researchww2040Industrial Engineering and Operations ResearchArticlesUsing different response-time requirements to smooth time-varying demand for service
https://academiccommons.columbia.edu/catalog/ac:02v6wwpzgw
Whitt, Ward10.7916/D87W6QNNThu, 25 May 2017 18:36:38 +0000Many service systems have demand that varies significantly by time of day, making it costly to provide sufficient capacity to be able to respond very quickly to each service request. Fortunately, however, different service requests often have very different response-time requirements. Some service requests may need immediate response, while others can tolerate substantial delays. Thus it is often possible to smooth demand by partitioning the service requests into separate priority classes according to their response-time requirements. Classes with more stringent performance requirements are given higher priority for service. Lower capacity may be required if lower-priority-class demand can be met during off-peak periods. We show how the priority classes can be defined and the resulting required fixed capacity can be determined, directly accounting for the time-dependent behavior. For this purpose, we exploit relatively simple analytical models, in particular, Mt/G/∞ and deterministic offered-load models. The analysis also provides an estimate of the capacity savings that can be obtained from partitioning time-varying demand into priority classes.Queuing theory, Queuing networks (Data transmission), Industrial engineering, Probabilities, Operations researchww2040Industrial Engineering and Operations ResearchArticlesAnalysis of join-the-shortest-queue routing for web server farms
https://academiccommons.columbia.edu/catalog/ac:xksn02v700
Gupta, Varun; Harchol-Balter, Mor; Sigman, Karl; Whitt, Ward10.7916/D81J9P84Thu, 25 May 2017 18:36:35 +0000Join the Shortest Queue (JSQ) is a popular routing policy for server farms. However, until now all analysis of JSQ has been limited to First-Come-First-Serve (FCFS) server farms, whereas it is known that web server farms are better modeled as Processor Sharing (PS) server farms. We provide the first approximate analysis of JSQ in the PS server farm model for general job-size distributions, obtaining the distribution of queue length at each queue. To do this, we approximate the queue length of each queue in the server farm by a one-dimensional Markov chain, in a novel fashion. We also discover some interesting insensitivity properties of PS server farms with JSQ routing, and discuss the near-optimality of JSQ.Queuing theory, Industrial engineering, Probabilities, Operations researchvg2297, ks20, ww2040Industrial Engineering and Operations ResearchArticlesCoping with Time-Varying Demand When Setting Staffing Requirements for a Service System
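The Join-the-Shortest-Queue policy analyzed by Gupta, Harchol-Balter, Sigman and Whitt above is simple to state even though its analysis is not. A minimal sketch of the routing rule itself (departures omitted, purely to show the assignment logic; names are illustrative):

```python
def jsq_route(queue_lengths):
    """Join-the-Shortest-Queue: return the index of a shortest queue,
    breaking ties in favor of the lowest index."""
    return min(range(len(queue_lengths)), key=lambda i: queue_lengths[i])

def route_batch(arrivals, n_servers):
    """Assign a sequence of arriving jobs to n_servers queues under JSQ,
    returning the final queue lengths (no departures, for illustration)."""
    queues = [0] * n_servers
    for _ in range(arrivals):
        queues[jsq_route(queues)] += 1
    return queues
```

In a full simulation each queue would also be drained according to the service discipline (PS in the web-server-farm model of the paper).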
https://academiccommons.columbia.edu/catalog/ac:gmsbcc2fsb
Green, Linda V.; Kolesar, Peter J.; Whitt, Ward10.7916/D8RR29QBThu, 25 May 2017 18:36:32 +0000We review queueing-theory methods for setting staffing requirements in service systems where customer demand varies in a predictable pattern over the day. Analyzing these systems is not straightforward, because standard queueing theory focuses on the long-run steady-state behavior of stationary models. We show how to adapt stationary queueing models for use in nonstationary environments so that time-dependent performance is captured and staffing requirements can be set. Straightforward stationary analysis applies with relatively little modification in systems where service times are short and the targeted quality of service is high. When service times are moderate and the targeted quality of service is still high, time-lag refinements can improve traditional stationary independent period-by-period and peak-hour approximations. Time-varying infinite-server models help develop refinements, because closed-form expressions exist for their time-dependent behavior. More difficult cases with very long service times and other complicated features, such as end-of-day effects, can often be treated by a modified-offered-load approximation, which is based on an associated infinite-server model. Numerical algorithms and deterministic fluid models are useful when the system is overloaded for an extensive period of time. Our discussion focuses on telephone call centers, but applications to police patrol, banking, and hospital emergency rooms are also mentioned.Call centers, Queuing theory, Police patrol--Mathematical models, Banks and banking, Hospitals--Emergency services, Industrial engineering, Probabilities, Queuing theory, Operations researchlvg1, pjk4, ww2040Industrial Engineering and Operations ResearchArticlesValue-Based Routing and Preference-Based Routing in Customer Contact Centers
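Many of the stationary approximations surveyed in the Green, Kolesar and Whitt review above build on the square-root-staffing rule s ≈ m + β√m, where m is the offered load (the mean number of busy servers in the associated infinite-server model) and β encodes the quality-of-service target. A minimal generic sketch of that rule, not the paper's specific algorithm:

```python
import math

def offered_load(arrival_rate, mean_service_time):
    """Offered load m = lambda * E[S], the mean number of busy servers
    in the corresponding infinite-server model."""
    return arrival_rate * mean_service_time

def sqrt_staffing(m, beta):
    """Square-root staffing: s = ceil(m + beta * sqrt(m)), where beta
    reflects the targeted quality of service."""
    return math.ceil(m + beta * math.sqrt(m))
```

In a period-by-period (SIPP-style) approach, one would apply this rule separately to each period's estimated offered load.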
https://academiccommons.columbia.edu/catalog/ac:v6wwpzgmvb
Sisselman, Michael E.; Whitt, Ward10.7916/D8H70T92Thu, 25 May 2017 18:36:28 +0000Telephone call centers and their generalizations—customer contact centers—usually handle several types of customer service requests (calls). Since customer service representatives (agents) have different call-handling abilities and are typically cross-trained in multiple skills, contact centers exploit skill-based routing (SBR) to assign calls to appropriate agents, aiming to respond properly as well as promptly. Established agent-staffing and SBR algorithms ensure that agents have the required call-handling skills and that call routing is performed so that constraints are met for standard congestion measures, such as the percentage of calls of each type that abandon before starting service and the percentage of answered calls of each type that are delayed more than a specified number of seconds. We propose going beyond traditional congestion measures to focus on the expected value derived from having particular agents handle various calls. Expected value might represent expected revenue or the likelihood of first-call resolution. Value might also reflect agent call-handling preferences. We show how value-based routing (VBR) and preference-based routing (PBR) can be introduced in the context of an existing SBR framework, based on static-priority routing using a highly-structured priority matrix, so that constraints are still met for standard congestion measures. Since an existing SBR framework is used to implement VBR and PBR, it is not necessary to replace the automatic call distributor (ACD). We show how mathematical programming can be used, with established staffing requirements, to find a desirable priority matrix. 
We select the priority matrix to use during a specified time interval (e.g., a 30-minute period) by maximizing the total expected value over that time interval, subject to staffing constraints.Customer services, Call centers, Labor turnover, Industrial engineering, Probabilities, Queuing theory, Operations researchww2040Industrial Engineering and Operations ResearchArticlesHeavy-traffic extreme-value limits for Erlang delay models
https://academiccommons.columbia.edu/catalog/ac:jdfn2z34w1
Pang, Guodong; Whitt, Ward10.7916/D8251WQXThu, 25 May 2017 18:36:24 +0000We consider the maximum queue length and the maximum number of idle servers in the classical Erlang delay model and the generalization allowing customer abandonment – the M/M/n + M queue. We use strong approximations to show, under regularity conditions, that properly scaled versions of the maximum queue length and maximum number of idle servers over subintervals [0, t] in the delay models converge jointly to independent random variables with the Gumbel extreme-value distribution in the QED and ED many-server heavy-traffic limiting regimes as n and t increase to infinity together appropriately; we require that tn → ∞ and tn = o(n^{1/2−ε}) as n → ∞ for some ε > 0.Queuing theory, Queuing networks (Data transmission), Limit theorems (Probability theory), Industrial engineering, Probabilities, Operations researchgp2224, ww2040Industrial Engineering and Operations ResearchArticlesExplicit M/G/1 waiting-time distributions for a class of long-tail service-time distributions
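For reference, the "classical Erlang delay model" in the Pang and Whitt abstract above is the M/M/n queue, whose steady-state probability of delay is given by the Erlang-C formula. A minimal sketch using the standard stable Erlang-B recursion (a textbook computation, not the paper's method):

```python
def erlang_b(n, a):
    """Erlang-B blocking probability for n servers and offered load a,
    computed by the standard stable recursion B(k) = a*B(k-1)/(k + a*B(k-1))."""
    b = 1.0
    for k in range(1, n + 1):
        b = a * b / (k + a * b)
    return b

def erlang_c(n, a):
    """Erlang-C probability of delay in the M/M/n queue, valid for a < n."""
    b = erlang_b(n, a)
    return n * b / (n - a * (1.0 - b))
```

For n = 1 this reduces to the familiar M/M/1 result: the delay probability equals the traffic intensity.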
https://academiccommons.columbia.edu/catalog/ac:stqjq2bvs8
Abate, Joseph; Whitt, Ward10.7916/D8WM1RXWThu, 25 May 2017 18:36:22 +0000O. J. Boxma and J. W. Cohen recently obtained an explicit expression for the M/G/1 steady-state waiting-time distribution for a class of service-time distributions with power tails. We extend their explicit representation from a one-parameter family of service-time distributions to a two-parameter family. The complementary cumulative distribution functions (ccdf's) of the service times all have the asymptotic form F^c(t) ∼ αt^{−3/2} as t → ∞, so that the associated waiting-time ccdf's have the asymptotic form W^c(t) ∼ βt^{−1/2} as t → ∞. Thus the second moment of the service time and the mean of the waiting time are infinite. Our result here also extends our own earlier explicit expression for the M/G/1 steady-state waiting-time distribution when the service-time distribution is an exponential mixture of inverse Gaussian distributions (EMIG). The EMIG distributions form a two-parameter family with ccdf having the asymptotic form F^c(t) ∼ αt^{−3/2}e^{−ηt} as t → ∞. We now show that a variant of our previous argument applies when the service-time ccdf is an undamped EMIG, i.e., with ccdf G^c(t) = e^{ηt}F^c(t) for F^c(t) above, which has the power tail G^c(t) ∼ αt^{−3/2} as t → ∞. The Boxma-Cohen long-tail service-time distribution is a special case of an undamped EMIG.Queuing theory, Inverse Gaussian distribution, Industrial engineering, Probabilities, Operations researchww2040Industrial Engineering and Operations ResearchArticlesStaffing a Call Center with Uncertain Arrival Rate and Absenteeism
https://academiccommons.columbia.edu/catalog/ac:m905qfttgm
Whitt, Ward10.7916/D8M3377TThu, 25 May 2017 18:36:18 +0000This paper proposes simple methods for staffing a single-class call center with uncertain arrival rate and uncertain staffing due to employee absenteeism. The arrival rate and the proportion of servers present are treated as random variables. The basic model is a multi-server queue with customer abandonment, allowing non-exponential service-time and time-to-abandon distributions. The goal is to maximize the expected net return, given throughput benefit and server, customer-abandonment and customer-waiting costs, but attention is also given to the standard deviation of the return. The approach is to approximate the performance and the net return, conditional on the random model-parameter vector, and then uncondition to get the desired results. Two recently-developed approximations are used for the conditional performance measures: first, a deterministic fluid approximation and, second, a numerical algorithm based on a purely Markovian birth-and-death model, having state-dependent death rates.Uncertainty (Information theory), Call centers, Absenteeism (Labor), Industrial engineering, Probabilities, Queuing theory, Operations researchww2040Industrial Engineering and Operations ResearchArticlesHeavy-traffic extreme-value limits for queues
https://academiccommons.columbia.edu/catalog/ac:jsxksn02wr
Glynn, Peter W.; Whitt, Ward10.7916/D80C5786Thu, 25 May 2017 18:36:16 +0000We consider the maximum waiting time among the first n customers in the GI/G/1 queue. We use strong approximations to prove, under regularity conditions, convergence of the normalized maximum wait to the Gumbel extreme-value distribution when the traffic intensity ρ approaches 1 from below and n approaches infinity at a suitable rate. The normalization depends on the interarrival-time and service-time distributions only through their first two moments, corresponding to the iterated limit in which first ρ approaches 1 and then n approaches infinity. We need n to approach infinity sufficiently fast so that n(1 − ρ)^2 → ∞. We also need n to approach infinity sufficiently slowly: If the service time has a finite pth moment for p > 2, then it suffices for (1 − ρ)n^{1/p} to remain bounded; if the service time has a finite moment generating function, then it suffices to have (1 − ρ) log n → 0. This limit can hold even when the normalized maximum waiting time fails to converge to the Gumbel distribution as n → ∞ for each fixed ρ. Similar limits hold for the queue length process.Queuing theory, Limit theorems (Probability theory), Industrial engineering, Probabilities, Queuing theory, Operations researchww2040Industrial Engineering and Operations ResearchArticlesLimits for Cumulative Input Processes to Queues
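The waiting times studied in the Glynn and Whitt abstract above satisfy the classical Lindley recursion W_{k+1} = max(0, W_k + S_k − A_{k+1}), which makes the maximum wait easy to simulate. A minimal sketch (the helper name is illustrative):

```python
def lindley_waits(service, interarrival):
    """Waiting times of successive customers in a FCFS GI/G/1 queue via the
    Lindley recursion W_{k+1} = max(0, W_k + S_k - A_{k+1}).
    service[k] is customer k's service time; interarrival[k] is the time
    between the arrivals of customers k and k+1. The first wait is zero."""
    waits = [0.0]
    for s, a in zip(service, interarrival):
        waits.append(max(0.0, waits[-1] + s - a))
    return waits
```

The maximum of the returned list, suitably normalized, is the quantity whose Gumbel limit the paper establishes.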
https://academiccommons.columbia.edu/catalog/ac:05qfttdz09
Whitt, Ward10.7916/D8348XW6Thu, 25 May 2017 18:36:13 +0000We establish functional central limit theorems (FCLTs) for a cumulative input process to a fluid queue from the superposition of independent on—off sources, where the on periods and off periods may have heavy-tailed probability distributions. Variants of these FCLTs hold for cumulative busy-time and idle-time processes associated with standard queueing models. The heavy-tailed on-period and off-period distributions can cause the limit process to have discontinuous sample paths (e.g., to be a non-Brownian stable process or more general Lévy process) even though the converging processes have continuous sample paths. Consequently, we exploit the Skorohod M1 topology on the function space D of right-continuous functions with left limits. The limits here combined with the previously established continuity of the reflection map in the M1 topology imply both heavy-traffic and non-heavy-traffic FCLTs for buffer-content processes in stochastic fluid networks.Central limit theorem, Symmetry (Mathematics), Industrial engineering, Stochastic systems, Lévy processes, Probabilities, Queuing theory, Operations researchww2040Industrial Engineering and Operations ResearchArticlesDynamic staffing in a telephone call center aiming to immediately answer all calls
https://academiccommons.columbia.edu/catalog/ac:7h44j0zpd2
Whitt, Ward10.7916/D8J67VDFThu, 25 May 2017 18:36:10 +0000This paper proposes practical modeling and analysis methods to facilitate dynamic staffing in a telephone call center with the objective of immediately answering all calls. Because of this goal, it is natural to use infinite-server queueing models. These models are very useful because they are so tractable. A key to the dynamic staffing is exploiting detailed knowledge of system state in order to obtain good estimates of the mean and variance of the demand in the near future. The near-term staffing needs, e.g., for the next minute or the next 20 min., can often be predicted by exploiting information about recent demand and current calls in progress, as well as historical data. The remaining holding times of calls in progress can be predicted by classifying and keeping track of call types, by measuring holding-time distributions and by taking account of the elapsed holding times of calls in progress. The number of new calls in service can be predicted by exploiting information about both historical and recent demand.Call centers, Forecasting, Queuing networks (Data transmission), Call centers, Industrial engineering, Probabilities, Queuing theory, Operations researchww2040Industrial Engineering and Operations ResearchArticlesHeavy-traffic limits for many-server queues with service interruptions
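The infinite-server models in the Whitt staffing abstract above are tractable largely because the time-varying offered load of an Mt/G/∞ queue starting empty has the explicit form m(t) = ∫_0^t λ(u) P(S > t − u) du. A minimal numerical sketch of that standard formula, with illustrative names:

```python
import math

def offered_load_mt_g_inf(lam, service_ccdf, t, n=10000):
    """Offered load m(t) = integral_0^t lam(u) * P(S > t - u) du for an
    Mt/G/infinity queue starting empty, via the trapezoidal rule.
    lam(u) is the arrival-rate function; service_ccdf(x) = P(S > x)."""
    h = t / n
    def f(u):
        return lam(u) * service_ccdf(t - u)
    total = 0.5 * (f(0.0) + f(t))
    total += sum(f(k * h) for k in range(1, n))
    return total * h
```

For a constant rate λ and exponential service times with rate μ, this recovers the familiar m(t) = (λ/μ)(1 − e^{−μt}).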
https://academiccommons.columbia.edu/catalog/ac:d51c59zw57
Pang, Guodong; Whitt, Ward10.7916/D8SN0NGVThu, 25 May 2017 18:36:07 +0000We establish many-server heavy-traffic limits for G/M/n + M queueing models, allowing customer abandonment (the +M), subject to exogenous regenerative service interruptions. With unscaled service interruption times, we obtain a FWLLN for the queue-length process, where the limit is an ordinary differential equation in a two-state random environment. With asymptotically negligible service interruptions, we obtain a FCLT for the queue-length process, where the limit is characterized as the pathwise unique solution to a stochastic integral equation with jumps. When the arrivals are renewal and the interruption cycle time is exponential, the limit is a Markov process, being a jump-diffusion process in the QED regime and an O-U process driven by a Lévy process in the ED regime (and for infinite-server queues). A stochastic-decomposition property of the steady-state distribution of the limit process in the ED regime (and for infinite-server queues) is obtained.Queuing networks (Data transmission), Lévy processes, Industrial engineering, Probabilities, Queuing theory, Operations researchgp2224, ww2040Industrial Engineering and Operations ResearchArticlesTwo fluid approximations for multi-server queues with abandonments
https://academiccommons.columbia.edu/catalog/ac:x3ffbg79fp
Whitt, Ward10.7916/D8154VHNThu, 25 May 2017 18:36:05 +0000Insight is provided into a previously developed M/M/s/r+M(n) approximation for the M/GI/s/r+GI queueing model by establishing fluid and diffusion limits for the approximating model. Fluid approximations for the two models are compared in the many-server efficiency-driven (overloaded) regime. The two fluid approximations do not coincide, but they are close.Queuing networks (Data transmission), Queuing theory, Call centers, Industrial engineering, Probabilities, Operations researchww2040Industrial Engineering and Operations ResearchArticlesThe last departure time from an Mt/G/∞ queue with a terminating arrival process
https://academiccommons.columbia.edu/catalog/ac:gxd2547d95
Goldberg, David Alan; Whitt, Ward10.7916/D8K64WJTThu, 25 May 2017 18:36:01 +0000This paper studies the last departure time from a queue with a terminating arrival process. This problem is motivated by a model of two-stage inspection in which finitely many items come to a first stage for screening. Items failing first-stage inspection go to a second stage to be examined further. Assuming that arrivals at the second stage can be regarded as an independent thinning of the departures from the first stage, the arrival process at the second stage is approximately a terminating Poisson process. If the failure probabilities are not constant, then this Poisson process will be nonhomogeneous. The last departure time from an Mt/G/∞ queue with a terminating arrival process serves as a remarkably tractable approximation, which is appropriate when there are ample inspection resources at the second stage. For this model, the last departure time is a Poisson random maximum, so that it is possible to give exact expressions and develop useful approximations based on extreme-value theory.Queuing theory, Queuing networks (Data transmission), Industrial engineering, Probabilities, Operations researchww2040Industrial Engineering and Operations ResearchArticlesContinuity of a queueing integral representation in the M1 topology
https://academiccommons.columbia.edu/catalog/ac:125349
Pang, Guodong; Whitt, Ward10.7916/D8SF32D0Thu, 13 Apr 2017 15:46:36 +0000We establish continuity of the integral representation y(t) = x(t) + ∫_0^t h(y(s)) ds, t ≥ 0, mapping a function x into a function y when the underlying function space D is endowed with the Skorohod M1 topology. We apply this integral representation with the continuous mapping theorem to establish heavy-traffic stochastic-process limits for many-server queueing models when the limit process has jumps unmatched in the converging processes as can occur with bursty arrival processes or service interruptions. The proof of M1-continuity is based on a new characterization of the M1 convergence, in which the time portions of the parametric representations are absolutely continuous with respect to Lebesgue measure, and the derivatives are uniformly bounded and converge in L1.Industrial engineering, Operations researchgp2224, ww2040Industrial Engineering and Operations ResearchArticlesSqueezing the most out of ATM
https://academiccommons.columbia.edu/catalog/ac:125355
Choudhury, Gagan L.; Lucantoni, David M.; Whitt, Ward10.7916/D8NP29NTThu, 13 Apr 2017 15:46:34 +0000Although ATM seems to be the wave of the future, one analysis requires that the utilization of the network be quite low. That analysis is based on asymptotic decay rates of steady-state distributions used to develop a concept of effective bandwidths for connection admission control. The present authors have developed an exact numerical algorithm that shows that the effective-bandwidth approximation can overestimate the target small blocking probabilities by several orders of magnitude when there are many sources that are more bursty than Poisson. The bad news is that the appealing simple connection admission control algorithm using effective bandwidths based solely on tail-probability asymptotic decay rates may actually not be as effective as many have hoped. The good news is that the statistical multiplexing gain on ATM networks may actually be higher than some have feared. For one example, thought to be realistic, the analysis indicates that the network actually can support twice as many sources as predicted by the effective-bandwidth approximation. The authors also show that the effective bandwidth approximation is not always conservative. Specifically, for sources less bursty than Poisson, the asymptotic constant grows exponentially in the number of sources (when they are scaled as above) and the effective-bandwidth approximation can greatly underestimate the target blocking probabilities. Finally, they develop new approximations that work much better than the pure effective-bandwidth approximation.Industrial engineeringww2040Industrial Engineering and Operations ResearchArticles