Finding New Rules for Incomplete Theories: Induction with Explicit Biases in Varying Contexts

Danyluk, Andrea Pohoreckyj

Many AI problem solvers possess explicitly encoded knowledge, a domain theory, that they use to solve problems. If these problem solvers are to be autonomous, they must be able to detect and to fill gaps in their own knowledge. The field of machine learning addresses this issue. Recently two disparate machine learning approaches have emerged as predominant in the field: explanation-based learning (EBL) and similarity-based learning (SBL). EBL and SBL have been applied to problems in a variety of domains. Both methods have clear problems, however. EBL assumes that a system is given an explicit theory of the domain that is complete, correct, and tractable. These assumptions are clearly unrealistic for most complex, real-world problems. SBL suffers because of its lack of an explicit theory of the domain. The simplicity of the method requires that human intervention play a large role in tailoring input examples and the features describing them in such a way as to allow a system to choose an appropriate set of features to define a concept. Biasing a system in this way may result in its being unable to discover all concepts in even a single domain. Less tailoring of the examples leaves a system open to the possibility of not converging on the best definition for a concept, or on any definition at all, due to the computational complexity. The research described in this proposal addresses a number of the problems found in explanation-based and similarity-based learning. The major focus of the research is the elimination of the assumption that the domain theory of an EBL system is complete. In particular, it considers the problem of working with an incomplete theory by suggesting a method by which gaps in an EBL system's knowledge can be detected and filled. We suggest that when EBL cannot derive a complete explanation, the partial explanation forms a context in which learning takes place.
Information extracted from partial explanations, as well as from complete explanations, can be exploited by SBL to do better induction of the missing domain knowledge. The extracted information constitutes an explicit bias for similarity-based learning. A second problem to be addressed is that of making the biases of SBL explicit. Finally, all testing of the claims made in this proposal is to be done in the Gemini learning system. The development of the system addresses the goal of constructing an integrated learning architecture utilizing both EBL and SBL.
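The core idea, a partial explanation serving as an explicit bias for a similarity-based step, can be illustrated with a minimal sketch. This is not the Gemini system itself; the toy "cup" domain, the attribute names, and the functions `explain` and `induce_with_bias` are all illustrative assumptions. The sketch shows an incomplete domain theory confirming only part of a concept, with induction over the unexplained attributes filling the gap.

```python
# Illustrative sketch only (NOT the actual Gemini system): a partial
# explanation from an incomplete domain theory biases a simple
# similarity-based (inductive) step over the remaining attributes.

# Toy, deliberately incomplete domain theory: it requires "liftable"
# and "stable" for a cup, but says nothing about holding liquid.
DOMAIN_THEORY = {
    "cup": [{"liftable", "stable"}],
}

def explain(example, goal):
    """Attempt an explanation of `example` (a set of attributes).
    Returns (required, confirmed); the explanation is only partial
    when `confirmed` falls short of the full concept."""
    for required in DOMAIN_THEORY.get(goal, []):
        confirmed = required & example
        if confirmed:
            return required, confirmed
    return set(), set()

def induce_with_bias(positives, negatives, bias):
    """Similarity-based step: keep attributes shared by all positive
    examples and absent from every negative, ignoring attributes the
    partial explanation already accounts for (the explicit bias)."""
    common = set.intersection(*positives) - bias
    return {a for a in common if all(a not in n for n in negatives)}
```

Under these assumptions, the explained attributes narrow the hypothesis space: given positive cup examples containing `open_vessel` and a negative example without it, the inductive step recovers `open_vessel` as the missing piece of domain knowledge rather than re-deriving what the theory already explains.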

More About This Work

Academic Units
Computer Science
Publisher
Department of Computer Science, Columbia University
Series
Columbia University Computer Science Technical Reports, CUCS-466-89
Published Here
December 23, 2011