Academic Commons Search Results
https://academiccommons.columbia.edu/catalog?action=index&controller=catalog&f%5Bauthor_facet%5D%5B%5D=Gnewuch%2C+Michael&f%5Bdepartment_facet%5D%5B%5D=Computer+Science&format=rss&fq%5B%5D=has_model_ssim%3A%22info%3Afedora%2Fldpd%3AContentAggregator%22&q=&rows=500&sort=record_creation_date+desc
A New Randomized Algorithm to Approximate the Star Discrepancy Based on Threshold Accepting
https://academiccommons.columbia.edu/catalog/ac:135483
Gnewuch, Michael; Wahlstrom, Magnus; Winzen, Carola
http://hdl.handle.net/10022/AC:P:10679
Mon, 11 Jul 2011 12:01:42 +0000

We present a new algorithm for estimating the star discrepancy of arbitrary point sets. Like the discrepancy-approximation algorithm of Winker and Fang [SIAM J. Numer. Anal. 34 (1997), 2028-2042], it is based on the optimization method threshold accepting. Our improvements include, among others, a non-uniform sampling strategy that is better suited to higher-dimensional inputs and additionally takes into account the topological characteristics of the given point set, and rounding steps that transform the axis-parallel boxes on which the discrepancy is tested into critical test boxes. These critical test boxes provably yield higher discrepancy values and contain the box that exhibits the maximum value of the local discrepancy. We provide comprehensive experiments to test the new algorithm. Our randomized algorithm frequently computes the exact discrepancy in all cases where this can be checked (i.e., where the exact discrepancy of the point set can be computed in feasible time). Most importantly, in higher dimensions the new method performs clearly better than all previously known methods.

Subjects: Computer science. Department: Computer Science. Type: Technical reports.

Entropy, Randomization, Derandomization, and Discrepancy
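The threshold-accepting search at the core of the star-discrepancy abstract above can be illustrated with a bare-bones sketch. This is the baseline Winker-Fang-style random walk over test boxes, not the authors' improved algorithm: it omits the non-uniform sampling strategy and the critical-box rounding steps, and all names and parameter choices (step size, threshold schedule) are illustrative assumptions.

```python
import random

def local_discrepancy(y, pts):
    """|fraction of points in [0, y) - volume of [0, y)| for one test box."""
    n, d = len(pts), len(y)
    inside = sum(all(p[k] < y[k] for k in range(d)) for p in pts)
    vol = 1.0
    for yk in y:
        vol *= yk
    return abs(inside / n - vol)

def ta_star_discrepancy(pts, n_iter=10000, seed=0):
    """Lower-bound the star discrepancy by threshold accepting:
    perturb one coordinate of the test box at a time, and accept any
    move that worsens the objective by no more than a shrinking threshold."""
    rng = random.Random(seed)
    d = len(pts[0])
    y = [rng.random() for _ in range(d)]
    cur = local_discrepancy(y, pts)
    best = cur
    for t in range(n_iter):
        thresh = 0.1 * (1 - t / n_iter)   # linearly decreasing threshold
        k = rng.randrange(d)              # perturb one coordinate
        cand = y[:]
        cand[k] = min(1.0, max(1e-9, cand[k] + rng.uniform(-0.1, 0.1)))
        val = local_discrepancy(cand, pts)
        best = max(best, val)             # every evaluated box is a valid lower bound
        if val >= cur - thresh:           # accept unless much worse than current
            y, cur = cand, val
    return best
```

Because every evaluated box yields a true local discrepancy, the returned value is always a valid lower bound on the star discrepancy; the paper's refinements aim to make that bound tight.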
https://academiccommons.columbia.edu/catalog/ac:135480
Gnewuch, Michael
http://hdl.handle.net/10022/AC:P:10678
Mon, 11 Jul 2011 11:56:56 +0000

The star discrepancy is a measure of how uniformly distributed a finite point set is in the d-dimensional unit cube. It is related to high-dimensional numerical integration of certain function classes, as expressed by the Koksma-Hlawka inequality. A sharp version of this inequality states that the worst-case error of approximating the integral of functions from the unit ball of some Sobolev space by an equal-weight cubature is exactly the star discrepancy of the set of sample points. In many applications, e.g., in physics, quantum chemistry, or finance, it is essential to approximate high-dimensional integrals. With regard to the Koksma-Hlawka inequality, the following three questions are therefore very important: (i) What are good bounds, with explicitly given dependence on the dimension d, for the smallest possible discrepancy of any n-point set for moderate n? (ii) How can we efficiently construct point sets that satisfy such bounds? (iii) How can we efficiently calculate the discrepancy of given point sets? We discuss these questions and survey and explain some approaches to tackle them, relying on metric entropy, randomization, and derandomization.

Subjects: Computer science. Department: Computer Science. Type: Technical reports.

Weighted Geometric Discrepancies and Numerical Integration on Reproducing Kernel Hilbert Spaces
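Question (iii) of the survey abstract above, computing the star discrepancy of a given point set, can be answered exactly for tiny inputs: the supremum is attained on critical test boxes whose corner coordinates come from the point coordinates themselves (plus 1). The following brute-force sketch enumerates that grid; it assumes points in [0,1)^d, costs time exponential in d, and is meant only to make the definition concrete.

```python
from itertools import product

def star_discrepancy_exact(pts):
    """Exact star discrepancy by enumerating critical test boxes.
    Only feasible for very small n and d."""
    n, d = len(pts), len(pts[0])
    # Candidate corner coordinates per axis: the point coordinates and 1.0.
    grids = [sorted({p[j] for p in pts} | {1.0}) for j in range(d)]
    best = 0.0
    for y in product(*grids):
        vol = 1.0
        for yj in y:
            vol *= yj
        # Half-open count bounds the "too few points" side,
        # closed count bounds the "too many points" side.
        open_cnt = sum(all(p[j] < y[j] for j in range(d)) for p in pts)
        closed_cnt = sum(all(p[j] <= y[j] for j in range(d)) for p in pts)
        best = max(best, vol - open_cnt / n, closed_cnt / n - vol)
    return best
```

For n points in dimension d this inspects at most (n+1)^d boxes, which is exactly why the efficient approximation methods surveyed here are needed.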
https://academiccommons.columbia.edu/catalog/ac:133602
Gnewuch, Michael
http://hdl.handle.net/10022/AC:P:10523
Thu, 09 Jun 2011 11:48:59 +0000

We extend the notion of L2-B-discrepancy introduced in [E. Novak, H. Wozniakowski, L2 discrepancy and multivariate integration, in: Analytic Number Theory. Essays in Honour of Klaus Roth, W. W. L. Chen, W. T. Gowers, H. Halberstam, W. M. Schmidt, and R. C. Vaughan (eds.), Cambridge University Press, Cambridge, 2009, 359-388] to what we want to call the weighted geometric L2-discrepancy. This extended notion allows us to consider weights that moderate the importance of different groups of variables, as well as volume measures different from the Lebesgue measure and classes of test sets different from measurable subsets of Euclidean spaces. We relate the weighted geometric L2-discrepancy to numerical integration defined over weighted reproducing kernel Hilbert spaces, and in this way settle an open problem posed by Novak and Wozniakowski. Furthermore, we prove an upper bound on the numerical integration error for cubature formulas that use admissible sample points. The set of admissible sample points may actually be a subset of the integration domain of measure zero. We illustrate that, particularly in infinite-dimensional numerical integration, it is crucial to distinguish between the whole integration domain and the set of those sample points that can actually be used by algorithms.

Subjects: Computer science. Department: Computer Science. Type: Technical reports.

Infinite-Dimensional Integration on Weighted Hilbert Spaces
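For context on the L2-discrepancy being generalized above: in the classical special case (unweighted, Lebesgue measure, anchored boxes as test sets) the L2 star discrepancy admits a closed form, Warnock's formula, computable in O(d n^2) time. A minimal sketch of that special case, not of the paper's weighted geometric extension:

```python
def l2_star_discrepancy(pts):
    """Warnock's closed form for the classical L2 star discrepancy
    of points in the d-dimensional unit cube."""
    n, d = len(pts), len(pts[0])
    term1 = 3.0 ** (-d)
    term2 = 0.0                      # single sum over the points
    for p in pts:
        prod = 1.0
        for x in p:
            prod *= (1.0 - x * x) / 2.0
        term2 += prod
    term3 = 0.0                      # double sum over pairs of points
    for p in pts:
        for q in pts:
            prod = 1.0
            for x, z in zip(p, q):
                prod *= 1.0 - max(x, z)
            term3 += prod
    val = term1 - (2.0 / n) * term2 + term3 / (n * n)
    return max(val, 0.0) ** 0.5      # clamp tiny negative rounding error
```

Unlike the star discrepancy, this L2 quantity is exactly computable in polynomial time, which is one reason L2-type discrepancies are attractive to generalize.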
https://academiccommons.columbia.edu/catalog/ac:133568
Gnewuch, Michael
http://hdl.handle.net/10022/AC:P:10512
Thu, 09 Jun 2011 10:17:50 +0000

We study the numerical integration problem for functions of infinitely many variables. The functions we want to integrate belong to a reproducing kernel Hilbert space endowed with a weighted norm. We study the worst-case ε-complexity, defined as the minimal cost among all algorithms whose worst-case error over the Hilbert space unit ball is at most ε. Here we assume that the cost of evaluating a function depends polynomially on the number of active variables. The infinite-dimensional integration problem is (polynomially) tractable if the ε-complexity is bounded by a constant times a power of 1/ε; the smallest such power is called the exponent of tractability. First we study finite-order weights. We provide improved lower bounds on the exponent of tractability for general finite-order weights and improved upper bounds for three newly defined classes of finite-order weights. The constructive upper bounds are obtained by multilevel algorithms that use, at each level, quasi-Monte Carlo integration points whose projections onto specific sets of coordinates exhibit a small discrepancy. The newly defined finite-intersection weights model the situation where each group of variables interacts with at most ρ other groups of variables, where ρ is some fixed number. For these weights we obtain a sharp upper bound. This is the first class of weights for which the exact exponent of tractability is known for any possible decay of the weights and any polynomial degree of the cost function. For the other two classes of finite-order weights, our upper bounds are sharp if, e.g., the decay of the weights is fast or slow enough. We extend our analysis to the case of arbitrary weights. In particular, from our results for finite-order weights we conclude a lower bound on the exponent of tractability for arbitrary weights and a constructive upper bound for product weights.

Although for simplicity we confine ourselves to explicit upper bounds for four classes of weights, we stress that our multilevel algorithm, together with our default choice of quasi-Monte Carlo points, is applicable to any class of weights.

Subjects: Computer science. Department: Computer Science. Type: Technical reports.

Quasi-Polynomial Tractability
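The multilevel idea behind the constructive bounds above can be illustrated with a deliberately simplified sketch: plain Monte Carlo stands in for the low-discrepancy points, the integrand is a product-weight example with the hypothetical weights γ_j = j^(-3), the truncation dimension doubles per level, and the sample size halves per level to reflect the polynomial cost of evaluating more active variables. This is an illustration of the telescoping principle only, not the paper's algorithm.

```python
import random

def f_trunc(x, d, gamma):
    """Product-weight integrand truncated to its first d variables;
    its integral over the unit cube is exactly 1 for any d."""
    prod = 1.0
    for j in range(d):
        prod *= 1.0 + gamma[j] * (x[j] - 0.5)
    return prod

def multilevel_estimate(levels=4, n0=4096, seed=1):
    """Telescoping estimator: E[f_{d_L}] = E[f_{d_0}] + sum_l E[f_{d_l} - f_{d_{l-1}}],
    with d_l = 2**l active variables and decreasing sample sizes n_l."""
    rng = random.Random(seed)
    gamma = [(j + 1) ** -3 for j in range(2 ** levels)]  # hypothetical decaying weights
    est = 0.0
    for l in range(levels + 1):
        d_hi = 2 ** l
        d_lo = 2 ** (l - 1) if l > 0 else 0
        n_l = max(n0 // 2 ** l, 16)      # fewer samples on costlier levels
        acc = 0.0
        for _ in range(n_l):
            x = [rng.random() for _ in range(d_hi)]
            hi = f_trunc(x, d_hi, gamma)
            lo = f_trunc(x, d_lo, gamma) if l > 0 else 0.0
            acc += hi - lo               # same sample for both truncations
        est += acc / n_l
    return est
```

Because the weights decay, the level differences f_{d_l} - f_{d_{l-1}} have small variance, so the expensive high-dimensional levels need few samples; replacing the random points by quasi-Monte Carlo points with small projected discrepancy is what yields the rates proved in the paper.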
https://academiccommons.columbia.edu/catalog/ac:133535
Gnewuch, Michael; Wozniakowski, Henryk
http://hdl.handle.net/10022/AC:P:10503
Tue, 07 Jun 2011 15:57:17 +0000

Tractability of multivariate problems has nowadays become a popular research subject. Polynomial tractability means that a d-variate problem can be solved to within $\varepsilon$ at a cost polynomial in $\varepsilon^{-1}$ and d. Unfortunately, many multivariate problems are not polynomially tractable. This holds for all non-trivial unweighted linear tensor product problems; by an unweighted problem we mean the case where all variables and groups of variables play the same role. It seems natural to ask for the ``smallest'' non-exponential function $T:[1,\infty)\times[1,\infty)\to[1,\infty)$ for which we have T-tractability of unweighted linear tensor product problems, that is, for which the cost of a multivariate problem can be bounded by a multiple of a power of $T(\varepsilon^{-1},d)$. Under natural assumptions, it turns out that this function is $T^{qpol}(x,y):=\exp((1+\ln\,x)(1+\ln y))$ for all $x,y\in[1,\infty)$. The function $T^{qpol}$ goes to infinity faster than any polynomial, although not ``much'' faster, and that is why we refer to $T^{qpol}$-tractability as quasi-polynomial tractability. The main purpose of this paper is to promote quasi-polynomial tractability, especially for the study of unweighted multivariate problems. We do this for the worst-case and randomized settings and for algorithms using arbitrary linear functionals or only function values. We prove relations between quasi-polynomial tractability in these two settings and for the two classes of algorithms.

Subjects: Computer science. Department: Computer Science. Type: Technical reports.

Generalized Tractability for Multivariate Problems: Part II: Linear Tensor Product Problems, Linear Information, and Unrestricted Tractability
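The name ``quasi-polynomial'' in the abstract above comes from an easy factorization: for fixed $y$, $T^{qpol}(x,y)=\exp((1+\ln x)(1+\ln y))=e^{1+\ln y}\,x^{1+\ln y}$, i.e., a polynomial in $x$ whose degree grows only logarithmically with $y$. A quick numerical sketch of this identity (function names are ours):

```python
import math

def T_qpol(x, y):
    """Quasi-polynomial tractability function T^qpol(x, y) = exp((1+ln x)(1+ln y))."""
    return math.exp((1.0 + math.log(x)) * (1.0 + math.log(y)))

def T_qpol_factored(x, y):
    """Same function factored as e^(1+ln y) * x^(1+ln y):
    for fixed y, polynomial in x of degree 1 + ln y."""
    e = 1.0 + math.log(y)
    return math.exp(e) * x ** e
```

So for each fixed dimension $d$ the cost bound is polynomial in $\varepsilon^{-1}$, but the exponent $1+\ln d$ drifts upward with $d$, which is exactly the ``faster than any polynomial, although not much faster'' behavior described above.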
https://academiccommons.columbia.edu/catalog/ac:110791
Gnewuch, Michael; Wozniakowski, Henryk
http://hdl.handle.net/10022/AC:P:29533
Wed, 27 Apr 2011 10:20:50 +0000

We continue the study of generalized tractability initiated in our previous paper ``Generalized tractability for multivariate problems, Part I: Linear tensor product problems and linear information'', J. Complexity 23 (2007), 262-295. We study linear tensor product problems for which we can compute linear information given by arbitrary continuous linear functionals. We want to approximate an operator $S_d$ given as the $d$-fold tensor product of a compact linear operator $S_1$ for $d=1,2,\dots$, with $\|S_1\|=1$, where $S_1$ has at least two positive singular values. Let $n(\varepsilon,S_d)$ be the minimal number of information evaluations needed to approximate $S_d$ to within $\varepsilon\in[0,1]$. We study \emph{generalized tractability} by verifying when $n(\varepsilon,S_d)$ can be bounded by a multiple of a power of $T(\varepsilon^{-1},d)$ for all $(\varepsilon^{-1},d)\in\Omega \subseteq[1,\infty)\times \mathbb{N}$. Here, $T$ is a \emph{tractability} function which is non-decreasing in both variables and grows slower than exponentially to infinity. We study the \emph{exponent of tractability}, which is the smallest power of $T(\varepsilon^{-1},d)$ whose multiple bounds $n(\varepsilon,S_d)$. We also study \emph{weak tractability}, i.e., when $\lim_{\varepsilon^{-1}+d\to\infty,\,(\varepsilon^{-1},d)\in\Omega} \ln\,n(\varepsilon,S_d)/(\varepsilon^{-1}+d)=0$.

In our previous paper, we studied generalized tractability for proper subsets $\Omega$ of $[1,\infty)\times\mathbb{N}$, whereas in this paper we take the unrestricted domain $\Omega^{\rm unr}=[1,\infty)\times\mathbb{N}$. We consider the three cases in which $S_1$ has only finitely many positive singular values, or the singular values decay exponentially or polynomially fast. Weak tractability holds in these three cases, and for all linear tensor product problems for which the singular values of $S_1$ decay slightly faster than logarithmically. We provide necessary and sufficient conditions on the function $T$ such that generalized tractability holds. These conditions are obtained in terms of the singular values of $S_1$ and mostly limiting properties of $T$. The tractability conditions tell us how fast $T$ must go to infinity. It is known that $T$ must go to infinity faster than polynomially. We show that generalized tractability is obtained for $T(x,y)=x^{1+\ln\,y}$. We also study tractability functions $T$ of product form, $T(x,y)=f_1(x)f_2(y)$. Assume that $a_i=\liminf_{x\to\infty}(\ln\,\ln f_i(x))/(\ln\,\ln\,x)$ is finite for $i=1,2$. Then generalized tractability takes place iff $$a_i>1\ \ \mbox{for } i=1,2\ \ \mbox{and}\ \ (a_1-1)(a_2-1)\ge1,$$ where in the case $(a_1-1)(a_2-1)=1$ one more condition, given in the paper, must be assumed. If $(a_1-1)(a_2-1)>1$, then the exponent of tractability is zero, and if $(a_1-1)(a_2-1)=1$, then the exponent of tractability is finite. It is interesting to add that for $T$ of product form, the tractability conditions as well as the exponent of tractability depend only on the second singular value of $S_1$; they do \emph{not} depend on the rate of decay of the singular values. Finally, we compare the results obtained in this paper for the unrestricted domain $\Omega^{\rm unr}$ with the results from our previous paper for the restricted domain $\Omega^{\rm res}=[1,\infty)\times\{1,2,\dots,d^*\}\,\cup\,[1,\varepsilon_0^{-1})\times\mathbb{N}$ with $d^*\ge1$ and $\varepsilon_0\in(0,1)$. In general, the tractability results are quite different. We may have generalized tractability for the restricted domain and no generalized tractability for the unrestricted domain; this is the case, for instance, for polynomial tractability $T(x,y)=xy$. We may also have generalized tractability for both domains, with different or with the same exponents of tractability.

Subjects: Computer science. Department: Computer Science. Type: Technical reports.
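The product-form criterion in the last abstract can be probed numerically. The sketch below evaluates $\ln(\ln f(x))/\ln(\ln x)$ at one large $x$ as a finite-$x$ proxy for the $\liminf$ defining $a_i$ (an assumption of this illustration, valid for the two sample functions because the ratio is constant in $x$ for them):

```python
import math

def a_exponent(f, x=1e8):
    """Finite-x proxy for a = liminf_{x->oo} ln(ln f(x)) / ln(ln x)."""
    return math.log(math.log(f(x))) / math.log(math.log(x))

# f(x) = exp((ln x)^2) has a = 2 exactly, so taking f1 = f2 = f gives
# (a1 - 1)(a2 - 1) = 1: the boundary case of the tractability criterion.
f = lambda x: math.exp(math.log(x) ** 2)

# g(x) = x (polynomial growth) has a = 1, violating a_i > 1: consistent
# with T having to grow faster than polynomially.
g = lambda x: x
```

This matches the abstract's picture: a polynomial factor contributes $a_i=1$ and rules out generalized tractability, while factors like $\exp((\ln x)^2)$ sit exactly on the boundary $(a_1-1)(a_2-1)=1$, where the extra condition from the paper decides the outcome.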