Explanation-Based Methods for Simplifying Intractable Theories

Ellman, Thomas

Existing machine learning programs possess only limited abilities to exploit previously acquired background knowledge. A technique called "explanation-based learning" (EBL) has recently been developed to address this problem. EBL is limited, however, by the requirement that the background knowledge meet restrictive conditions: EBL cannot operate without a complete, correct, and tractable theory of the domain under study, and in many cases no adequate domain theory can be found. The research proposed here will address this limitation. It will be primarily directed toward extending EBL methods to handle intractable theories. Techniques will be developed for using explanations of examples to make domain theories more tractable. The explanations will be used to find assumptions that can simplify intractable theories. A useful class of assumptions, called "optimistic assumptions," will be defined informally. A program will be developed to learn assumptions drawn from this class. The program will be tested in the domain of the card game "hearts," and possibly in other domains as well. This research will be significant inasmuch as the "optimistic" assumptions appear to be applicable to a wide variety of domains. The research will also be relevant to the problems of incomplete and incorrect theories, in addition to the problem of intractability.
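The contrast the abstract draws between an intractable exact theory and a simplified one can be sketched with a toy example. The following Python fragment is purely illustrative and hypothetical (it is not the proposed program, and the particular assumption is invented for this sketch): deciding whether a card in hearts is "safe" under the exact theory requires reasoning over every possible deal of the hidden cards, which grows combinatorially, whereas an optimistic assumption (here: "no unseen card beats ours; only cards already on the table matter") reduces the test to a constant-time check at the risk of occasional error.

```python
from itertools import combinations

def safe_exact(card, hidden_cards, opponents=3):
    """Exact theory: a card is provably safe only if no opponent can
    hold a higher card of its suit in ANY possible deal of the hidden
    cards. Enumerating candidate opponent hands is what makes the
    full theory intractable as hands grow."""
    suit, rank = card
    higher = [c for c in hidden_cards if c[0] == suit and c[1] > rank]
    hand_size = len(hidden_cards) // opponents
    for hand in combinations(hidden_cards, hand_size):
        if any(c in higher for c in hand):
            return False  # some deal lets an opponent beat us
    return True

def safe_optimistic(card, cards_in_trick):
    """Simplified theory under a hypothetical optimistic assumption:
    ignore all hidden cards and check only the cards already played
    in the current trick. Cheap, but can be wrong."""
    suit, rank = card
    return all(not (s == suit and r > rank) for (s, r) in cards_in_trick)

# Example: 12 hidden cards split among 3 opponents.
hidden = [('H', r) for r in range(2, 8)] + [('S', r) for r in range(2, 8)]
print(safe_exact(('H', 9), hidden))            # no higher heart exists
print(safe_exact(('H', 3), hidden))            # hearts 4-7 are unseen
print(safe_optimistic(('H', 3), [('H', 2)]))   # optimistic: looks safe
```

The optimistic check may wrongly call the three of hearts "safe" when a higher heart is hidden; the research summarized above concerns learning, from explanations of examples, which such simplifying assumptions are worth making.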




Department of Computer Science, Columbia University
Columbia University Computer Science Technical Reports, CUCS-265-87
Published here: November 28, 2011