Academic Commons


Solving the Depth Interpolation Problem on a Parallel Architecture with Efficient Numerical Methods

Choi, Dong Jae

Many constraint propagation problems in early vision, including depth interpolation, can be cast as solving a large system of linear equations whose matrix is symmetric and positive definite (SPD). Usually, the resulting SPD matrix is sparse. We solve the depth interpolation problem on a parallel architecture: a fine-grained SIMD machine with local and global communication networks. We show how the Chebyshev acceleration and conjugate gradient methods can be run on this architecture for sparse SPD matrices. Using an abstract SIMD model, we show for several synthetic and real images that the adaptive Chebyshev acceleration method executes faster than the conjugate gradient method when given near-optimal initial estimates of the smallest and largest eigenvalues of the iteration matrix. We extend these iterative methods through a multigrid approach with a fixed multilevel coordination strategy, and show again that the adaptive Chebyshev acceleration method executes faster than the conjugate gradient method when both are accelerated with multigrid. Furthermore, the optimal Chebyshev acceleration method performs best, since it requires only local computations, whereas the adaptive Chebyshev acceleration and conjugate gradient methods require both local and global computations.
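To make the local-versus-global distinction concrete, the following is a minimal sketch (not the report's SIMD implementation) of the two classical iterations being compared. Conjugate gradient requires inner products at every step, which on a fine-grained SIMD machine become global reductions; Chebyshev iteration replaces them with fixed scalar coefficients derived from eigenvalue bounds, so each step uses only local stencil work. The matrix here is a generic 1-D Laplacian stand-in for the sparse SPD interpolation matrix, chosen only because its exact spectral bounds are known.

```python
import numpy as np

def conjugate_gradient(A, b, x0, iters):
    """CG for SPD A. Each step needs inner products (global reductions on a SIMD machine)."""
    x = x0.copy()
    r = b - A @ x
    p = r.copy()
    rs = r @ r                       # global reduction
    for _ in range(iters):
        Ap = A @ p                   # local (stencil) computation
        alpha = rs / (p @ Ap)        # global reduction
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r               # global reduction
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def chebyshev(A, b, x0, lmin, lmax, iters):
    """Chebyshev iteration: needs eigenvalue bounds of A, but no inner products,
    so every step is purely local."""
    theta = 0.5 * (lmax + lmin)
    delta = 0.5 * (lmax - lmin)
    sigma = theta / delta
    rho = 1.0 / sigma
    x = x0.copy()
    r = b - A @ x
    d = r / theta
    for _ in range(iters):
        x = x + d
        r = r - A @ d                # local stencil update; no reductions
        rho_new = 1.0 / (2.0 * sigma - rho)
        d = rho_new * rho * d + (2.0 * rho_new / delta) * r
        rho = rho_new
    return x

# 1-D Laplacian as a stand-in for the sparse SPD interpolation matrix.
n = 32
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.random.default_rng(0).standard_normal(n)
lmin = 2 - 2 * np.cos(np.pi / (n + 1))        # exact spectral bounds
lmax = 2 - 2 * np.cos(n * np.pi / (n + 1))
x_cg = conjugate_gradient(A, b, np.zeros(n), 60)
x_ch = chebyshev(A, b, np.zeros(n), lmin, lmax, 400)
```

Chebyshev needs more iterations than CG here, but each iteration is cheaper in communication; this is the trade-off the abstract's final claim rests on, since the "optimal" variant fixes the eigenvalue bounds up front and never performs a global reduction.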

More About This Work

Academic Units
Computer Science
Publisher
Department of Computer Science, Columbia University
Series
Columbia University Computer Science Technical Reports, CUCS-358-88
Published Here
December 19, 2011