1988 Reports
Solving the Depth Interpolation Problem on a Parallel Architecture with Efficient Numerical Methods
Many constraint propagation problems in early vision, including depth interpolation, can be cast as solving a large system of linear equations whose matrix is symmetric and positive definite (SPD) and, typically, sparse. We solve the depth interpolation problem on a parallel architecture: a fine-grained SIMD machine with local and global communication networks. We show how the Chebyshev acceleration and conjugate gradient methods can be run on this parallel architecture for sparse SPD matrices. Using an abstract SIMD model, we show for several synthetic and real images that the adaptive Chebyshev acceleration method executes faster than the conjugate gradient method when given near-optimal initial estimates of the smallest and largest eigenvalues of the iteration matrix. We extend these iterative methods through a multigrid approach with a fixed multilevel coordination strategy, and show again that the adaptive Chebyshev acceleration method executes faster than the conjugate gradient method when both are accelerated further by multigrid. Furthermore, we show that the optimal Chebyshev acceleration method performs best, since it requires only local computations, whereas the adaptive Chebyshev acceleration and conjugate gradient methods require both local and global computations.
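The abstract compares two iterative solvers for sparse SPD systems: conjugate gradient, and Chebyshev acceleration driven by estimates of the extreme eigenvalues. A minimal serial sketch of both iterations is given below, using a 1-D Laplacian as a stand-in for the interpolation system; all names, parameters, and the test matrix are illustrative and not taken from the report (the Chebyshev recurrence follows the standard textbook form, e.g. Saad's Algorithm 12.1):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Standard conjugate gradient for an SPD matrix A."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r                      # global reduction (inner product)
    for _ in range(max_iter):
        if np.sqrt(rs) < tol:
            break
        Ap = A @ p
        alpha = rs / (p @ Ap)       # another global reduction
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def chebyshev(A, b, lmin, lmax, tol=1e-10, max_iter=1000):
    """Chebyshev acceleration given bounds lmin <= lambda(A) <= lmax."""
    x = np.zeros_like(b)
    theta = 0.5 * (lmax + lmin)     # centre of the spectrum
    delta = 0.5 * (lmax - lmin)     # half-width of the spectrum
    sigma1 = theta / delta
    rho = 1.0 / sigma1
    r = b - A @ x
    d = r / theta
    for _ in range(max_iter):
        x += d
        r -= A @ d
        if np.linalg.norm(r) < tol:
            break
        # The step coefficients come from a scalar recurrence on the
        # eigenvalue bounds -- no inner products of iterates are needed.
        rho_new = 1.0 / (2.0 * sigma1 - rho)
        d = rho_new * rho * d + (2.0 * rho_new / delta) * r
        rho = rho_new
    return x

# Stand-in SPD system: the 1-D Laplacian tridiag(-1, 2, -1), whose extreme
# eigenvalues are known in closed form: 2 - 2*cos(k*pi/(n+1)).
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.sin(np.linspace(0.0, np.pi, n))
lmin = 2.0 - 2.0 * np.cos(np.pi / (n + 1))
lmax = 2.0 - 2.0 * np.cos(n * np.pi / (n + 1))

res_cg = np.linalg.norm(A @ conjugate_gradient(A, b) - b)
res_ch = np.linalg.norm(A @ chebyshev(A, b, lmin, lmax) - b)
```

The sketch also illustrates the locality point made in the abstract: each conjugate gradient step needs the inner products `r @ r` and `p @ Ap`, which on a SIMD machine are global reductions, whereas the Chebyshev coefficients are a scalar recurrence on the eigenvalue bounds, so each step needs only the local matrix-vector product.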
Files
- cucs-358-88.pdf application/pdf 5.92 MB
More About This Work
- Academic Units
- Computer Science
- Publisher
- Department of Computer Science, Columbia University
- Series
- Columbia University Computer Science Technical Reports, CUCS-358-88
- Published Here
- December 19, 2011