Doctoral Thesis

Computational Methods for Nonlinear Optimization Problems: Theory and Applications

Madani, Ramtin

This dissertation is motivated by the lack of efficient global optimization techniques for polynomial optimization problems. The objective is twofold. First, a new mathematical foundation for obtaining a global or near-global solution is developed. Second, several case studies are conducted on a variety of real-world problems. Global optimization, convex relaxation, and distributed computation are at the heart of this PhD dissertation. Some of the specific problems addressed in this thesis, concerning both the theory and the applications of nonlinear optimization, are described below:
Graph-theoretic algorithms for low-rank optimization problems: There is rapidly growing interest in the recovery of an unknown low-rank matrix from limited information and measurements. This problem arises in many areas of engineering and applied science, such as machine learning, control, and computer vision. In Part I, we develop a graph-theoretic technique that generates a low-rank solution for a sparse linear matrix inequality (LMI), which is directly applicable to a large class of problems such as low-rank matrix completion with many unknown entries. Our approach finds a solution with a guarantee on its rank, using recent advances in graph theory.
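As an illustration of the class of problems targeted in Part I (not the dissertation's graph-theoretic algorithm itself), the following is a minimal Python sketch of low-rank matrix completion via the standard nuclear-norm SDP relaxation, using cvxpy; the instance size and observation pattern are hypothetical.

    import numpy as np
    import cvxpy as cp

    # Hypothetical instance: recover a rank-2 matrix from roughly half of its entries.
    np.random.seed(0)
    n, r = 20, 2
    M = np.random.randn(n, r) @ np.random.randn(r, n)   # ground-truth low-rank matrix
    mask = (np.random.rand(n, n) < 0.5).astype(float)   # 1 where an entry is observed

    # Nuclear-norm minimization: a standard convex (SDP-representable) surrogate
    # for rank minimization, subject to agreement with the observed entries.
    X = cp.Variable((n, n))
    problem = cp.Problem(cp.Minimize(cp.normNuc(X)),
                         [cp.multiply(mask, X) == cp.multiply(mask, M)])
    problem.solve()

    print("recovered rank:", np.linalg.matrix_rank(X.value, tol=1e-6))

This sketch shows only the generic convex-surrogate approach; the contribution of Part I is a guarantee on the rank of a solution to a sparse LMI obtained by graph-theoretic arguments, which the nuclear norm alone does not provide.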
Resource allocation for energy systems: The flows in an electrical grid are described by the nonlinear AC power flow equations. Due to the nonlinear interrelation among the physical parameters of the network, the feasibility region defined by the power flow equations may be nonconvex and disconnected. The nonlinearity of the network constraints has been studied since 1962, and various heuristic and local-search algorithms have been proposed for optimization over an electrical grid [Baldick, 2006; Pandya and Joshi, 2008]. Part II is concerned with finding convex formulations of the power flow equations using semidefinite programming (SDP). The potential of SDP relaxation for problems in power systems was demonstrated in [Lavaei and Low, 2012], with further studies conducted in [Lavaei, 2011; Sojoudi and Lavaei, 2012]. A variety of graph-theoretic and algebraic methods are developed in Part II to facilitate fundamental yet challenging tasks such as the optimal power flow (OPF) problem, security-constrained OPF, and the classical power flow problem.
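The basic SDP relaxation referenced here (as popularized in [Lavaei and Low, 2012]) can be sketched as follows; the notation is generic and not taken from the dissertation.

    % Bus power injections are quadratic in the complex voltage vector v, e.g.
    %   p_k = tr(M_k v v^H)  and  q_k = tr(N_k v v^H)
    % for suitable Hermitian matrices M_k, N_k built from the admittance matrix.
    % Substituting W = v v^H and dropping the nonconvex constraint rank(W) = 1
    % yields a semidefinite program:
    \begin{align*}
      \min_{W \succeq 0} \quad & c(W) \\
      \text{s.t.} \quad & \underline{p}_k \le \operatorname{tr}(M_k W) \le \overline{p}_k,
      \qquad \underline{q}_k \le \operatorname{tr}(N_k W) \le \overline{q}_k .
    \end{align*}
    % If an optimal W has rank one, then W = v v^H recovers a globally optimal
    % operating point of the original nonconvex problem.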
Synthesis of distributed control systems: Real-world systems mostly consist of many interconnected subsystems, and designing an optimal controller for them poses several challenges to the field of control theory. The area of distributed control emerged to address the challenges arising in the control of such systems. The objective is to design a constrained controller whose structure is specified by a set of permissible interactions between the local controllers, with the aim of reducing the computation or communication complexity of the overall controller. It has long been known that designing an optimal distributed (decentralized) controller is a daunting task because it amounts to an NP-hard optimization problem in general [Witsenhausen, 1968; Tsitsiklis and Athans, 1984]. Part III is devoted to studying the potential of the SDP relaxation for the optimal distributed control (ODC) problem. Our approach rests on formulating several variations of the ODC problem as rank-constrained optimization problems, from which SDP relaxations can be derived. As the first contribution, we show that the ODC problem admits a sparse SDP relaxation with solutions of rank at most 3. Since a rank-1 SDP matrix can be mapped back into a globally optimal controller, the low-rank SDP solution may be used to retrieve a near-global controller.
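The lifting described in this paragraph follows the standard pattern for quadratically constrained problems; the sketch below uses generic notation, not the dissertation's specific ODC formulation.

    % Collect the controller parameters and auxiliary variables in a vector x,
    % write the design objective and constraints as quadratic functions of x,
    % and lift to a matrix variable X standing in for x x^T:
    \begin{align*}
      \min_{X} \quad & \operatorname{tr}(C X) \\
      \text{s.t.} \quad & \operatorname{tr}(A_i X) \le b_i, \quad i = 1, \dots, m, \\
                        & X \succeq 0, \qquad \operatorname{rank}(X) = 1 .
    \end{align*}
    % Dropping rank(X) = 1 gives the SDP relaxation. A rank-1 solution X = x x^T
    % maps back to a globally optimal controller; when the relaxation returns a
    % low-rank (e.g., rank-3) solution, it can be used to retrieve a near-global one.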
Parallel computation for sparse semidefinite programs: While small- to medium-sized semidefinite programs can be solved efficiently by second-order interior-point methods in polynomial time to any desired precision [Vandenberghe and Boyd, 1996a], these methods are impractical for large-scale SDPs due to computation time and memory requirements. In Part IV of this dissertation, a parallel algorithm for solving an arbitrary SDP is introduced based on the alternating direction method of multipliers (ADMM). The proposed algorithm has guaranteed convergence under very mild assumptions. Each iteration of the algorithm has a simple closed-form solution, consisting of scalar multiplications and eigenvalue decompositions of matrices whose sizes are no greater than the treewidth of the sparsity graph of the SDP problem. The cheap iterations of the proposed algorithm enable solving real-world large-scale conic optimization problems.
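To give a concrete sense of the per-iteration work (this is not the algorithm of Part IV itself, only the typical closed-form building block of ADMM-style SDP solvers), below is a minimal Python sketch of the eigenvalue-based projection onto the positive semidefinite cone; the function name is hypothetical.

    import numpy as np

    def project_psd(S: np.ndarray) -> np.ndarray:
        """Project a symmetric matrix onto the PSD cone in Frobenius norm.

        An eigenvalue decomposition of this kind is the typical per-iteration
        subproblem in ADMM-style SDP solvers; in a decomposed formulation it is
        applied only to small blocks whose sizes are bounded by the treewidth
        of the problem's sparsity graph, rather than to one large matrix.
        """
        S = (S + S.T) / 2                      # symmetrize against round-off
        eigvals, eigvecs = np.linalg.eigh(S)   # spectral decomposition
        eigvals = np.clip(eigvals, 0.0, None)  # discard negative eigenvalues
        return (eigvecs * eigvals) @ eigvecs.T

    # Hypothetical usage on a small random symmetric block:
    A = np.random.randn(5, 5)
    print(project_psd(A + A.T))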

Files

  • Madani_columbia_0054D_13011.pdf (application/pdf, 5.52 MB)

More About This Work

Academic Units
Electrical Engineering
Thesis Advisors
Lavaei, Javad
Degree
Ph.D., Columbia University
Published Here
October 15, 2015