Integrating Learning: Controlling Explanation

Lebowitz, Michael

Similarity-based learning, which relies largely on structural comparisons of instances, and explanation-based learning, a knowledge-intensive method for analyzing instances to build generalized schemata, are two major inductive learning techniques in use in Artificial Intelligence. In this paper, we propose a combination of the two methods: applying explanation-based techniques in the course of similarity-based learning. For domains lacking detailed explanatory rules, this combination can achieve the power of explanation-based learning without some of the computational problems that can otherwise arise. We show how the ideas of predictability and interest can be particularly valuable in this context. We include an example of the computer program UNIMEM applying explanation to a generalization formed using similarity-based methods.
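To make the combination concrete, the following is a minimal, hypothetical Python sketch of the two-phase idea the abstract describes: a similarity-based pass forms a generalization by intersecting shared features of instances, and an explanation-based pass then checks which of the shared features simple domain rules can account for. The data, the rule format, and the function names are illustrative assumptions, not UNIMEM's actual representation or algorithm.

    from functools import reduce

    # Illustrative instances as feature dictionaries (not from the paper).
    instances = [
        {"bird": True, "flies": True, "has_wings": True, "color": "red"},
        {"bird": True, "flies": True, "has_wings": True, "color": "blue"},
        {"bird": True, "flies": True, "has_wings": True, "color": "red"},
    ]

    def similarity_based_generalize(examples):
        """Similarity-based pass: keep only (feature, value) pairs
        shared by every instance."""
        shared = reduce(lambda a, b: a & b,
                        (set(ex.items()) for ex in examples))
        return dict(shared)

    # Toy explanatory rules (hypothetical): antecedent features imply
    # a concluded feature.
    rules = [
        ({"bird": True, "has_wings": True}, ("flies", True)),
    ]

    def explain(generalization):
        """Explanation-based pass: partition the generalized features
        into those a rule accounts for and those kept on similarity
        evidence alone."""
        explained, unexplained = {}, {}
        for feat, val in generalization.items():
            supported = any(
                concl == (feat, val) and
                all(generalization.get(k) == v for k, v in ante.items())
                for ante, concl in rules
            )
            (explained if supported else unexplained)[feat] = val
        return explained, unexplained

    gen = similarity_based_generalize(instances)
    explained, unexplained = explain(gen)
    print("generalization:", gen)        # shared features only; color drops out
    print("explained:", explained)       # {'flies': True}, backed by a rule
    print("unexplained:", unexplained)   # features with no explanatory support

Running this keeps the features common to all three instances, then marks "flies" as explained (it follows from the wing rule) while "bird" and "has_wings" remain similarity-only regularities. In the paper's terms, notions like predictability and interest would govern which of those unexplained features merit the cost of further explanation.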

More About This Work

Academic Units
Computer Science
Publisher
Department of Computer Science, Columbia University
Series
Columbia University Computer Science Technical Reports, CUCS-201-85
Published Here
November 7, 2011