Academic Commons Search Results
http://academiccommons.columbia.edu/catalog.rss?f%5Bauthor_facet%5D%5B%5D=Boult%2C+Terrance+E.&f%5Bdepartment_facet%5D%5B%5D=Computer+Science&q=&rows=500&sort=record_creation_date+desc
An Optimal Complexity Algorithm for Computing Topological Degree in Two Dimensions
http://academiccommons.columbia.edu/catalog/ac:163688
Boult, Terrance E.; Sikorski, Krzysztof
http://hdl.handle.net/10022/AC:P:20995
Wed, 10 Jul 2013 00:00:00 +0000
An algorithm is presented to compute the topological degree for any function from a class F. The class F consists of functions defined on the two-dimensional unit square C, f: C → ℝ², which satisfy a Lipschitz condition with constant K > 0 and whose infinity norm on the boundary of C is at least d > 0. A worst-case lower bound, m* = 4⌈K/(4d)⌉, is established on the number of function evaluations necessary to compute the topological degree for any function f from the class F. The parallel information used by our algorithm permits the computation of the degree for every f in F with m* function evaluations. The cost of our algorithm is shown to be almost equal to the complexity of the problem.
Subject: Computer science. Department: Computer Science. Type: Technical reports.

Reproducing Kernels for Visual Surface Interpolation.
http://academiccommons.columbia.edu/catalog/ac:163685
Boult, Terrance E.
http://hdl.handle.net/10022/AC:P:20993
Wed, 10 Jul 2013 00:00:00 +0000
We examine the details of two related methods for the recovery of visual surfaces from sparse depth data. The methods use the reproducing kernels of Hilbert spaces to construct a spline interpolating the data, such that this spline is of minimal norm. We discuss the numerical properties of the two methods presented, and give example interpolations.
Subject: Computer science. Department: Computer Science. Type: Technical reports.

Can We Approximate Zeros of Functions with Non-zero Topological Degree?
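As an illustrative sketch of the kernel-spline construction described in the abstract above: in a reproducing kernel Hilbert space, the minimal-norm interpolant of scattered data is a linear combination of kernel sections whose coefficients solve the Gram system. The Gaussian kernel below is a stand-in assumption for illustration; the report's actual kernels and function spaces differ.

```python
import numpy as np

def rkhs_interpolate(points, values, kernel):
    """Minimal-norm RKHS interpolant: solve the Gram system K c = y,
    then evaluate s(x) = sum_i c_i * k(x, x_i)."""
    K = np.array([[kernel(p, q) for q in points] for p in points])
    c = np.linalg.solve(K, values)

    def spline(x):
        return sum(ci * kernel(x, pi) for ci, pi in zip(c, points))

    return spline

# Gaussian kernel as an illustrative stand-in (positive definite for
# distinct points, so the Gram system is solvable).
gauss = lambda p, q: np.exp(-np.sum((np.asarray(p) - np.asarray(q)) ** 2))

pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
vals = np.array([1.0, 2.0, 3.0])
s = rkhs_interpolate(pts, vals, gauss)
# s reproduces the data: s(pts[i]) matches vals[i] to solver precision.
```

The same Gram-matrix structure underlies both methods the report compares; only the choice of kernel (and hence the norm being minimized) changes.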
http://academiccommons.columbia.edu/catalog/ac:144957
Boult, Terrance E.; Sikorski, Krzysztof A.
http://hdl.handle.net/10022/AC:P:12684
Thu, 23 Feb 2012 00:00:00 +0000
The bisection method provides an affirmative answer for scalar functions. We show that the answer is negative for bivariate functions. This means, in particular, that an arbitrary continuation method cannot approximate a zero of every smooth bivariate function with non-zero topological degree.
Subject: Computer science. Department: Computer Science. Type: Technical reports.

Flow Trees: A Lower Bound Computation Tool with Applications to Rearrangeable Multihop Lightwave Network Optimization
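The affirmative scalar case cited in the abstract above rests on bisection: a sign change brackets a zero, and each step halves the bracket. A minimal sketch (function and tolerance are illustrative):

```python
def bisect(f, a, b, tol=1e-10):
    """Approximate a zero of f on [a, b], assuming f(a) and f(b)
    have opposite signs (a sign change brackets a zero)."""
    fa = f(a)
    assert fa * f(b) < 0, "endpoints must bracket a sign change"
    while b - a > tol:
        m = 0.5 * (a + b)
        fm = f(m)
        if fm == 0.0:
            return m
        if fa * fm < 0:
            b = m          # zero lies in the left half
        else:
            a, fa = m, fm  # zero lies in the right half
    return 0.5 * (a + b)

root = bisect(lambda x: x ** 2 - 2.0, 0.0, 2.0)  # approx sqrt(2)
```

The report's negative result is precisely that no analogous bracketing guarantee carries over to smooth bivariate functions of non-zero degree.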
http://academiccommons.columbia.edu/catalog/ac:143784
Yener, Bulent; Boult, Terrance E.
http://hdl.handle.net/10022/AC:P:12381
Fri, 27 Jan 2012 00:00:00 +0000
This paper presents a new method for computing lower bounds for multihop network design problems which is particularly well suited to optical networks. More specifically, given N stations, each with d transceivers, and pairwise average traffic values of the stations, the method provides a lower bound for the combined problem of finding an optimum (i) allocation of wavelengths to the stations to determine a configuration, and (ii) routing of the traffic on this configuration while minimizing congestion, defined as the maximum flow assigned to any link. The lower bounds can be computed in time polynomial in the network size. Consequently, the results in this work yield a tool which can be used in (i) evaluating the quality of heuristic design algorithms, and (ii) determining a termination criterion during minimization. The lower bound computation is based on first building flow trees to find a lower bound on the total flow, and then distributing the total flow over the links to minimize the congestion.
Subject: Computer science. Department: Computer Science. Type: Technical reports.

Logical Embeddings for Minimum Congestion Routing in Lightwave Networks
http://academiccommons.columbia.edu/catalog/ac:143607
Yener, Bulent; Boult, Terrance E.
http://hdl.handle.net/10022/AC:P:12337
Wed, 25 Jan 2012 00:00:00 +0000
The problem considered in this paper is motivated by the independence between logical and physical topology in Wavelength Division Multiplexing (WDM) based local and metropolitan lightwave networks. This paper suggests logical embeddings of digraphs into multihop lightwave networks to maximize the throughput under nonuniform traffic conditions. Defining congestion as the maximum flow carried on any link, two perturbation heuristics are presented to find a good logical embedding on which the routing problem is solved with minimum congestion. A constructive proof for a lower bound of the problem is given, and obtaining an optimal solution for integral routing is shown to be NP-complete. The performance of the heuristics is empirically analyzed on various traffic models. Simulation results show that our heuristics perform, on the average, close to a computed lower bound. Since this lower bound is not quite tight, we suspect that the actual performance is better. In addition, we show that 5%-20% performance improvements can be obtained over the previous work.
Subject: Computer science. Department: Computer Science. Type: Technical reports.

Image Understanding and Robotics Research at Columbia University
http://academiccommons.columbia.edu/catalog/ac:143529
Kender, John R.; Allen, Peter K.; Boult, Terrance E.
http://hdl.handle.net/10022/AC:P:12271
Fri, 20 Jan 2012 00:00:00 +0000
Over the past year, the research investigations of the Vision/Robotics Laboratory at Columbia University have reflected the interests of its four faculty members, two staff programmers, and 16 Ph.D. students. Several of the projects involve other faculty members in the department or the university, or researchers at AT&T, IBM, or Philips. We list below a summary of our interests and results, together with the principal researchers associated with them. Since it is difficult to separate those aspects of robotic research that are purely visual from those that are vision-like (for example, tactile sensing) or vision-related (for example, integrated vision-robotic systems), we have listed all robotic research that is not purely manipulative. The majority of our current investigations are deepenings of work reported last year; this was the second year of both our basic Image Understanding contract and our Strategic Computing contract. Therefore, the form of this year's report closely resembles last year's. Although there are a few new initiatives, mainly we report the new results we have obtained in the same five basic research areas. Much of this work is summarized on a video tape that is available on request. We also note two service contributions this past year. The Special Issue on Computer Vision of the Proceedings of the IEEE, August, 1988, was co-edited by one of us (John Kender [27]). And the upcoming IEEE Computer Society Conference on Computer Vision and Pattern Recognition, June, 1989, is co-program chaired by one of us (John Kender [23]).
Subject: Computer science. Department: Computer Science. Type: Technical reports.

An Algorithm to Recover Generalized Cylinders from a Single Intensity View
http://academiccommons.columbia.edu/catalog/ac:143339
Gross, Ari D.; Boult, Terrance E.
http://hdl.handle.net/10022/AC:P:12219
Tue, 17 Jan 2012 00:00:00 +0000
Understanding a scene involves the ability to recover the shape of objects in an environment. Generalized cylinders are a flexible, loosely defined class of parametric shapes capable of modeling many real-world objects. Straight homogeneous generalized cylinders are an important subclass of generalized cylinders whose cross sections are scaled versions of a reference curve. In this paper, a general method is presented for recovering straight homogeneous generalized cylinders from monocular intensity images. The algorithm is much more general in scope than any other developed to date, combining constraints derived from both contour and intensity information. We first demonstrate that contour information alone is insufficient to recover a straight homogeneous generalized cylinder uniquely. Next, we show that the sign and magnitude of the Gaussian curvature at a point vary among members of a contour-equivalent class. The image contour fails to constrain two parameters required to recover the shape of a generalized cylinder: the 3D axis location and the object tilt. Next, a method for "ruling" straight homogeneous generalized cylinder images is developed. Once the rulings of the image have been recovered, we show that all parameters derivable from contour alone can be recovered. Recovering the two remaining parameters (modulo scale) not constrained by the image contour requires incorporating additional information into the recovery process, e.g. intensity information. We derive a method for recovering the tilt of the object using the ruled contour image and intensity values along cross-sectional geodesics. In addition, we derive a method for recovering the location of the object's 3D axis from intensity values along meridians of the surface. Using the different methods outlined in this paper constitutes an algorithm for recovering all the shape parameters (modulo scale) of a straight homogeneous generalized cylinder.
Subject: Computer science. Department: Computer Science. Type: Technical reports.

On the Complexity of Constrained Distance Transforms and Digital Distance Map Updates in Two Dimensions
http://academiccommons.columbia.edu/catalog/ac:142977
Boult, Terrance E.
http://hdl.handle.net/10022/AC:P:12119
Fri, 23 Dec 2011 00:00:00 +0000
Using digital distance maps, one can easily solve the shortest path problem from any point by simply following the gradient of the distance map. Other researchers have developed techniques to quickly compute such maps. One such technique, the constrained distance transform, is described and a computational complexity analysis is presented. Unfortunately, algorithms for digital distance maps have all assumed a static environment. This paper extends the usefulness of planar digital distance maps by providing an efficient means for dealing with obstacles in motion. In particular, an algorithm is presented that allows one to compute what portions of a map will probably be affected by an obstacle's motion. The regions that must be checked for possible update when an obstacle moves are those that are in its "shadow", or in the shadow of obstacles that are partially in the shadow of the moving obstacle. The technique can handle multiple fixed goals and multiple obstacles moving and interacting in an arbitrary fashion. A complexity analysis and short verification is presented. The algorithm is demonstrated on a number of synthetic two-dimensional examples, and example timing results are reported.
Subject: Computer science. Department: Computer Science. Type: Technical reports.

Energy-Based Segmentation of Very Sparse Range Surfaces
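A minimal sketch of the distance-map idea in the abstract above, assuming a 4-connected grid with unit step costs (the constrained distance transform itself is more elaborate): build the map by breadth-first propagation from the goal, then recover a shortest path by following the discrete gradient, i.e. stepping to any strictly smaller neighbor.

```python
from collections import deque

def distance_map(grid, goal):
    """4-connected BFS distance transform; cells equal to 1 are obstacles."""
    rows, cols = len(grid), len(grid[0])
    INF = float("inf")
    dist = [[INF] * cols for _ in range(rows)]
    gr, gc = goal
    dist[gr][gc] = 0
    q = deque([goal])
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and dist[nr][nc] == INF:
                dist[nr][nc] = dist[r][c] + 1
                q.append((nr, nc))
    return dist

def shortest_path(dist, start):
    """Follow the discrete gradient (a minimal neighbor) down to the goal.
    Assumes the start is reachable, so dist[start] is finite."""
    path = [start]
    r, c = start
    while dist[r][c] > 0:
        r, c = min(((r + dr, c + dc)
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                    if 0 <= r + dr < len(dist) and 0 <= c + dc < len(dist[0])),
                   key=lambda p: dist[p[0]][p[1]])
        path.append((r, c))
    return path

grid = [[0, 0, 0],
        [0, 1, 0],   # one obstacle cell
        [0, 0, 0]]
d = distance_map(grid, (0, 2))
path = shortest_path(d, (2, 0))  # routes around the obstacle to (0, 2)
```

On a BFS map every non-goal free cell has a neighbor whose distance is exactly one less, which is why plain gradient following terminates at the goal.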
http://academiccommons.columbia.edu/catalog/ac:142886
Boult, Terrance E.; Lerner, Mark D.
http://hdl.handle.net/10022/AC:P:12091
Thu, 22 Dec 2011 00:00:00 +0000
This paper describes a new segmentation technique for very sparse surfaces which is based on minimizing the energy of the surfaces in the scene. While it could be used in almost any system as part of surface reconstruction/model recovery, the algorithm is designed to be usable when the depth information is scattered and very sparse, as is generally the case with depth generated by stereo algorithms. We show results from a sequential algorithm that processes laser range-finder data or synthetic data. We then discuss a parallel implementation running on the Connection Machine. The idea of segmentation by energy minimization is not new. However, prior techniques have relied on discrete regularization or Markov random fields to model the surfaces, build smooth surfaces, and detect depth edges. Both of the aforementioned techniques are ineffective at energy minimization for very sparse data. In addition, this method does not require edge detection and is thus also applicable when edge information is unreliable or unavailable. Our model is extremely general; it does not depend on a model of the surface shape but only on the energy for bending a surface. Thus the surfaces can grow in a more data-directed manner. The technique presented herein models the surfaces with reproducing kernel-based splines, which can be shown to solve a regularized surface reconstruction problem. From the functional form of these splines we derive computable bounds on the energy of a surface over a given finite region. The computation of the spline, and the corresponding surface representation, are quite efficient for very sparse data. An interesting property of the algorithm is that it makes no attempt to determine segmentation boundaries; the algorithm can be viewed as a classification scheme which segments the data into collections of points which are "from" the same surface. Among the significant advantages of the method is the capacity to process overlapping transparent surfaces, as well as surfaces with large occluded areas.
Subject: Computer science. Department: Computer Science. Type: Technical reports.

Separable Image Warping with Spatial Lookup Tables
http://academiccommons.columbia.edu/catalog/ac:142867
Wolberg, George; Boult, Terrance E.
http://hdl.handle.net/10022/AC:P:12085
Thu, 22 Dec 2011 00:00:00 +0000
Image warping refers to the 2-D resampling of a source image onto a target image. In the general case, this requires costly 2-D filtering operations. Simplifications are possible when the warp can be expressed as a cascade of orthogonal 1-D transformations. In these cases, separable transformations have been introduced to realize large performance gains. The central ideas in this area were formulated in the 2-pass algorithm by Catmull and Smith. Although that method applies over an important class of transformations, there are intrinsic problems which limit its usefulness. The goal of this work is to extend the 2-pass approach to handle arbitrary spatial mapping functions. We address the difficulties intrinsic to 2-pass scanline algorithms: bottlenecking, foldovers, and the lack of closed-form inverse solutions. These problems are shown to be resolved in a general, efficient, separable technique, with graceful degradation for transformations of increasing complexity.
Subject: Computer science. Department: Computer Science. Type: Technical reports.

On the Recovery of Superellipsoids
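To illustrate the 2-pass decomposition discussed in the abstract above, here is a deliberately simplified sketch: nearest-neighbor sampling, with the spatial lookup tables reduced to per-output-pixel source coordinates, and none of the bottleneck/foldover handling that is the paper's actual contribution.

```python
import numpy as np

def warp_2pass(src, xmap, ymap):
    """Two-pass separable warp (nearest-neighbor, illustrative):
    pass 1 resamples each row using the x lookup table; pass 2
    resamples each column of the intermediate image using the y
    lookup table. xmap[y, x] / ymap[y, x] give the source coordinate
    sampled for output pixel (y, x)."""
    h, w = src.shape
    # Pass 1: horizontal resampling into an intermediate image.
    inter = np.zeros_like(src)
    for y in range(h):
        for x in range(w):
            sx = int(round(xmap[y, x]))
            if 0 <= sx < w:
                inter[y, x] = src[y, sx]
    # Pass 2: vertical resampling of the intermediate image.
    out = np.zeros_like(src)
    for y in range(h):
        for x in range(w):
            sy = int(round(ymap[y, x]))
            if 0 <= sy < h:
                out[y, x] = inter[sy, x]
    return out

# A pure translation by (1, 1): output (y, x) samples source (y-1, x-1).
src = np.arange(16, dtype=float).reshape(4, 4)
ys, xs = np.mgrid[0:4, 0:4]
shifted = warp_2pass(src, xs - 1, ys - 1)
```

The separability pays off because each pass is a cheap 1-D operation; the hard cases the report addresses arise when a pass collapses many pixels onto few (bottlenecking) or the mapping is not monotone along a scanline (foldover).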
http://academiccommons.columbia.edu/catalog/ac:142369
Boult, Terrance E.; Gross, Ari D.
http://hdl.handle.net/10022/AC:P:11949
Fri, 09 Dec 2011 00:00:00 +0000
Superellipsoids are parameterized solids which can appear like cubes, spheres, octahedrons, 8-pointed stars, or anything in between. They can also be stretched, bent, tapered, and combined with boolean operations to model a wide range of objects. Columbia's vision group is interested in using superquadrics as model primitives for computer vision applications because they are flexible enough to allow modeling of many objects, yet they can be described by a few (5-14) numbers. This paper discusses research into the recovery of superellipsoids from 3-D information, in particular range data. This research can be divided into two parts: a study of potential error-of-fit measures for recovering superquadrics, and implementation and experimentation with a system which attempts to recover superellipsoids by minimizing an error-of-fit measure. This paper presents an overview of work in both areas. Included are data from an initial comparison of 4 error-of-fit measures in terms of the inter-relationship between each measure and the parameters defining the superellipsoid. Also discussed is an experimental system which employs a nonlinear least-squares minimization technique to recover the parameters. This paper discusses both the advantages of this technique and some of its major drawbacks. Examples are presented, using both synthetic data and range data, where the system successfully recovers superellipsoids, including "negative" volumes as would occur if superellipsoids were used in a constructive solid modeling system.
Subject: Computer science. Department: Computer Science. Type: Technical reports.

Image Understanding and Robotics Research at Columbia University
http://academiccommons.columbia.edu/catalog/ac:142377
Kender, John R.; Allen, Peter K.; Boult, Terrance E.; Ibrahim, Hussein
http://hdl.handle.net/10022/AC:P:11950
Fri, 09 Dec 2011 00:00:00 +0000
The research investigations of the Vision/Robotics Laboratory at Columbia University reflect the diversity of interests of its four faculty members, two staff programmers, and 15 Ph.D. students. Several of the projects involve either a visiting computer science post-doc, other faculty members in the department or the university, or researchers at AT&T Bell Laboratories or Philips Laboratories. We list below a summary of our interests and results, together with the principal researchers associated with them. Since it is difficult to separate those aspects of robotic research that are purely visual from those that are vision-like (for example, tactile sensing) or vision-related (for example, integrated vision-robotic systems), we have listed all robotic research that is not purely manipulative.
Subject: Computer science. Department: Computer Science. Type: Technical reports.

Regularization: Problems and Promises
http://academiccommons.columbia.edu/catalog/ac:142341
Boult, Terrance E.
http://hdl.handle.net/10022/AC:P:11940
Wed, 07 Dec 2011 00:00:00 +0000
Regularization is becoming a popular framework for describing and solving many ill-posed problems of computer vision. Of course, a generalized framework is only useful if it provides additional insights or benefits unavailable without it. This paper discusses some of the benefits promised by the regularization framework. Additionally, as a mathematical paradigm for vision, regularization presents many difficulties for the vision researcher, and some of these difficulties are discussed in this paper. The paper then discusses the lack of development of most of the "promises of regularization" theory, and gives a brief look at some of the promises which have been realized. In the context of smooth surface reconstruction, the paper addresses one of the most difficult problems with the use of regularization: the problem of determining an appropriate functional class, norm, and stabilizing functional. In particular, results are discussed from an experiment which subjectively orders various functional classes and stabilizing functionals for a regularization-based formulation of the surface reconstruction problem. The conclusions drawn include the fact that there exist non-traditional formulations of this regularization problem which provide better results. The paper concludes with a brief mention of two more general frameworks and their relationship to regularization.
Subject: Computer science. Department: Computer Science. Type: Technical reports.

An integrated approach to stereo matching, surface reconstruction and depth segmentation using consistent smoothness assumptions
http://academiccommons.columbia.edu/catalog/ac:142338
Chen, Liang-Hua; Boult, Terrance E.
http://hdl.handle.net/10022/AC:P:11939
Wed, 07 Dec 2011 00:00:00 +0000
This paper presents a new algorithm for stereo matching which makes use of simultaneous matching, surface reconstruction, and segmentation of world surfaces. By integrating these three phases, which are traditionally temporally separated, the algorithm can make use of the current surface information to help disambiguate the potential matches. After discussing the required mathematical background, the paper describes the integrated process of matching, reconstruction, and segmentation. Unlike past attempts at integrating these processes, the presented algorithm uses a single smoothness criterion for matching, reconstruction, and segmentation alike. The segmentation part of the process is based on estimates of surface bending energy, and is significantly different from previous segmentation algorithms. Examples are presented showing results on both synthetic images and camera-acquired images. The camera-based examples include both a traditional type of scene with two objects, and a scene with transparent objects.
Subject: Computer science. Department: Computer Science. Type: Technical reports.

Recovery of superquadrics from depth information
http://academiccommons.columbia.edu/catalog/ac:142099
Boult, Terrance E.; Gross, Ari D.
http://hdl.handle.net/10022/AC:P:11863
Fri, 02 Dec 2011 00:00:00 +0000
Superquadrics are a class of volumetric primitives which can model objects including rectangular solids with rounded corners, ellipsoids, octahedrons, 8-pointed stars, hyperbolic sheets, and toroids with cross sections ranging from rectangles with rounded corners to elliptical regions. They can be stretched, bent, tapered, and combined with boolean operations to model a wide range of objects. This paper discusses our progress at attempting to recover a subclass of superquadrics from 3D depth data. The first section of this paper presents a mathematical definition of superquadrics. Some of the rationale for using superquadrics for object recognition is then discussed. Briefly, superquadrics are flexible enough to represent a wide class of objects, but are simple enough to be recovered from 3D data. Additionally, the surface and its normal surface both have well-defined inside-out functions which provide a useful tool for their recovery. The third section examines some of the difficulties to be encountered when modeling objects with superquadrics, or attempting to recover superquadrics from 3D data. These difficulties include the general problems of a non-orthogonal representation, difficulties of dealing with objects which are not exactly representable with CSG operations on the primitives, the need to recognize negative objects, certain numerical instabilities, and some problems caused by using the inside-out function as an approximation of the distance of a point from the superquadric. Our current system employs a nonlinear least-squares minimization technique on the inside-out function to recover the parameters. After discussing the details of the current system, the paper presents examples, using noisy synthetic data, where the system successfully uses multiple views to recover underlying superquadrics. Also presented are examples using range data, including the recovery of a negative superellipsoid. Some pros and cons of our approach, as well as a few conclusions and a discussion of our planned future work, appear in the final section. The main result is that least-squares minimization using the inside-out function allows both positive and negative instances of superellipsoids to be recovered from depth data. A second, preliminary result is that a single view of a superquadric may not be sufficient for reconstruction without additional assumptions.
Subject: Computer science. Department: Computer Science. Type: Technical reports.

An Experimental System for the Integration of Information from Stereo and Multiple Shape From Texture Algorithms
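For reference, a common axis-aligned, centered form of the superellipsoid inside-out function mentioned in the abstract above (an illustrative formulation; the report's exact parameterization, with pose and deformations, is richer). Least-squares recovery minimizes a residual of this function over the depth points:

```python
def inside_out(p, a1, a2, a3, e1, e2):
    """Superellipsoid inside-out function F(x, y, z) for an axis-aligned,
    centered model: F < 1 inside the surface, F = 1 on it, F > 1 outside.
    a1..a3 are the semi-axis lengths; e1, e2 the shape exponents."""
    x, y, z = p
    # abs() keeps fractional exponents well defined for negative coords.
    return (((abs(x) / a1) ** (2 / e2) + (abs(y) / a2) ** (2 / e2)) ** (e2 / e1)
            + (abs(z) / a3) ** (2 / e1))

# e1 = e2 = 1 reduces to an ellipsoid; with unit semi-axes, F is x²+y²+z².
F_on = inside_out((1.0, 0.0, 0.0), 1, 1, 1, 1, 1)   # on the unit sphere
F_in = inside_out((0.2, 0.2, 0.2), 1, 1, 1, 1, 1)   # inside
F_out = inside_out((2.0, 0.0, 0.0), 1, 1, 1, 1, 1)  # outside
```

The inside/on/outside trichotomy is what makes F usable as an error-of-fit residual, and its poor behavior as a distance approximation far from the surface is one of the drawbacks the report discusses.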
http://academiccommons.columbia.edu/catalog/ac:142102
Boult, Terrance E.; Moerdler, Mark
http://hdl.handle.net/10022/AC:P:11864
Fri, 02 Dec 2011 00:00:00 +0000
In numerous computer vision applications, there is both the need and the ability to access multiple types of information about the three-dimensional aspects of objects or surfaces. When this information comes from different sources, the combination becomes non-trivial. This paper describes the present state of ongoing research in Columbia's Vision Laboratory on the integration of multiple visual sensing methodologies which yield three-dimensional information. In particular, feature-based stereo algorithms and various shape-from-texture algorithms are already in operation, and multi-view shape-from-texture and shape-from-shading modules are expected to be incorporated. Unlike most systems for multi-sensor integration, which fuse all the information at one conceptual level, e.g., the surface level, the system under development uses two levels of data fusion: intra-process integration and inter-process integration. The paper discusses intra-process integration techniques for feature-based stereo and shape-from-texture algorithms. It also discusses an inter-process integration technique based on smooth models of surfaces. Examples are presented using camera-acquired images.
Subject: Computer science. Department: Computer Science. Type: Technical reports.

Updating Distance Maps When Objects Move
http://academiccommons.columbia.edu/catalog/ac:142096
Boult, Terrance E.
http://hdl.handle.net/10022/AC:P:11862
Fri, 02 Dec 2011 00:00:00 +0000
Using a discrete distance transform, one can quickly build a map of the distance from a goal to every point in a digital map. Using this map, one can easily solve the shortest path problem from any point by simply following the gradient of the distance map. This technique can be used in any number of dimensions and can incorporate obstacles of arbitrary shape (represented in the digital map), including pseudo-obstacles caused by unattainable configurations of a robotic system. This paper further extends the usefulness of the digital distance transform technique by providing an efficient means for dealing with objects which undergo motion. In particular, an algorithm is presented that allows one to update only those portions of the distance map that will potentially change as an object moves. The technique is based on an analysis of the distance transform as a problem in wave propagation. The regions that must be checked for possible update when an object moves are those that are in its "shadow", or in the shadow of objects that are partially in the shadow of the moving object. The technique can handle multiple goals, and multiple objects moving and interacting in an arbitrary fashion. The algorithm is demonstrated on a number of synthetic two-dimensional examples.
Subject: Computer science. Department: Computer Science. Type: Technical reports.

Smoothness assumptions in human and machine vision, and their implications for optimal surface interpolation
http://academiccommons.columbia.edu/catalog/ac:141209
Boult, Terrance E.
http://hdl.handle.net/10022/AC:P:11722
Wed, 02 Nov 2011 00:00:00 +0000
In this paper we shall examine what smoothness assumptions are made about object surfaces, object motion, and image intensities. We begin by looking into the physiological limits of vision and how these might influence our perception of smoothness. We then look at a sampling of the computer vision and psychology literature, inferring smoothness constraints from the mathematical assumptions tacitly presumed by researchers. This look at computer vision and the psychology of vision is not meant to be an inclusive study, but rather representative of the assumptions made, and in part representative of the mathematical models used therein. We shall conclude that the prevalent assumptions are that surfaces, motion, and intensity images are functions in C², C¹, and C², respectively. In the latter portion of this paper we examine one use of explicit assumptions on smoothness in the definition of an existing method for obtaining "optimal" surface interpolation. We briefly introduce the nomenclature of information-based complexity, originated by Traub, Wozniakowski, and their colleagues, which is the mathematical machinery used in obtaining these "optimal" surfaces. This theory requires that we know the class of functions from which our desired surface comes, and part of the definition of a class is the degree of smoothness. We then survey many possible classes for the visual interpolation problem of two-dimensional surfaces, and state formulas from which one can obtain the optimal surface interpolating given depth data.
Subject: Computer science. Department: Computer Science. Type: Technical reports.

Complexity of Computing Topological Degree of Lipschitz Functions in N Dimensions
http://academiccommons.columbia.edu/catalog/ac:141131
Boult, Terrance E.; Sikorski, Krzysztof A.
http://hdl.handle.net/10022/AC:P:11704
Tue, 01 Nov 2011 00:00:00 +0000
Subject: Computer science. Department: Computer Science. Type: Technical reports.

Visual Surface Interpolation: A Comparison of Two Methods
http://academiccommons.columbia.edu/catalog/ac:141134
Boult, Terrance E.
http://hdl.handle.net/10022/AC:P:11705
Tue, 01 Nov 2011 00:00:00 +0000
We critically compare two different methods for visual surface interpolation. One method uses the reproducing kernels of Hilbert spaces to construct a spline interpolating the data, such that this spline is of minimal norm. The other method, presented in Grimson (1981), recovers the surface of minimal norm by direct minimization of the norm with a gradient projection algorithm. We present the problem that each algorithm is attempting to solve, then briefly introduce both methods. The main contribution is an analysis of each algorithm in terms of the worst-case running time (serial processor), space complexity, and rough estimates of the running time and space costs for massively parallel implementations. We then conclude with a discussion of the differences in the internal representation of the surface in both algorithms.
Subject: Computer science. Department: Computer Science. Type: Technical reports.

Information Based Complexity Applied to Optimal Recovery of the 2 1/2-D Sketch
http://academiccommons.columbia.edu/catalog/ac:141095
Kender, John R.; Lee, David; Boult, Terrance E.
http://hdl.handle.net/10022/AC:P:11693
Tue, 01 Nov 2011 00:00:00 +0000
In this paper, we introduce the information-based complexity approach to optimal algorithms as a paradigm for solving image understanding problems, and obtain the optimal-error algorithm for recovering the "2 1/2-D sketch" (i.e., a dense depth map) from a sparse depth map. First, we give an interpolation algorithm that is provably optimal for surface reconstruction; furthermore, the algorithm runs in linear time. Second, we show that adaptive information (i.e., the intelligent and selective determination of where to sample next, based on the values of previous samples) cannot improve the accuracy of reconstruction. Third, we discuss properties of an implementation of the algorithm which make it very amenable to parallel processing, and which allow for point-wise determination of surface depth without the necessity of global surface reconstruction. We conclude with some remarks on a serial implementation.
Subject: Computer science. Department: Computer Science. Type: Technical reports.
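The optimal-error ("central") recovery idea behind the abstract above can be illustrated in one dimension for a Lipschitz function class (a simplifying assumption for illustration; the report treats two-dimensional surfaces and its own class): the tightest upper and lower envelopes consistent with the samples bound every candidate function, and their midpoint minimizes the worst-case error pointwise.

```python
def central_estimate(x, samples, L):
    """Optimal worst-case (central) recovery of a scalar Lipschitz-L
    function from samples [(x_i, y_i), ...]: at x, every consistent
    function lies between u(x) = min_i (y_i + L|x - x_i|) and
    l(x) = max_i (y_i - L|x - x_i|); the midpoint halves the gap."""
    u = min(y + L * abs(x - xi) for xi, y in samples)
    l = max(y - L * abs(x - xi) for xi, y in samples)
    return 0.5 * (u + l)

samples = [(0.0, 0.0), (1.0, 1.0)]
mid = central_estimate(0.5, samples, L=2.0)  # u = 1.0, l = 0.0, midpoint 0.5
```

Each evaluation touches every sample independently of the others, which also hints at why such central algorithms parallelize well and permit point-wise depth estimates without a global reconstruction pass.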