We present a cognitive model of the human ability to acquire causal relationships. We report on experimental evidence demonstrating that human learners acquire accurate causal relationships more rapidly when training examples are consistent with a general theory of causality. This paper describes a learning process that uses a general theory of causality as background knowledge. The learning process, which we call theory-driven learning (TDL), hypothesizes causal relationships consistent with both observed data and the general theory of causality. TDL accounts for data on both the rate at which human learners acquire causal relationships and the types of causal relationships they acquire. Experiments with TDL demonstrate the advantage of theory-driven learning over similarity-based approaches for acquiring causal relationships: fewer examples are required to learn an accurate relationship.
In this paper, we demonstrate how different forms of background knowledge can be integrated with an inductive method for generating constant-free Horn clause rules. Furthermore, we evaluate, both theoretically and empirically, the effect that these types of knowledge have on the cost of learning a rule and on the accuracy of a learned rule. Moreover, we demonstrate that a hybrid explanation-based and inductive learning method can advantageously use an approximate domain theory, even when this theory is incorrect and incomplete.
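As an illustration of the core operation such an inductive learner repeats, the following sketch checks whether a constant-free Horn clause covers an example under a given variable binding. The predicate names, data layout, and helper are hypothetical, chosen only to make the idea concrete; they are not drawn from the system described above.

```python
# Minimal sketch (illustrative, not the published algorithm) of testing
# whether a constant-free Horn clause body covers an example: every body
# literal, with its variables bound, must appear among the ground facts.

def covers(clause_body, facts, binding):
    """True if each body literal, instantiated via `binding`,
    is present in the set of ground facts."""
    ground = {(pred, tuple(binding[v] for v in args))
              for pred, args in clause_body}
    return ground <= facts

# Clause: grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
body = [("parent", ("X", "Y")), ("parent", ("Y", "Z"))]
facts = {("parent", ("ann", "bob")), ("parent", ("bob", "cal"))}
print(covers(body, facts, {"X": "ann", "Y": "bob", "Z": "cal"}))  # True
```

Background knowledge enters such a learner by supplying additional facts or derived literals that the coverage test can draw on, which is one way an approximate domain theory can reduce the number of examples needed.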
Researchers debate whether higher-order learning can be reduced to an associative process. To shed light on this question, we perform two psychological experiments whose results cannot be accounted for by any current model of concept acquisition. We investigate inducing a set of causally related concepts from examples. We show that human subjects make fewer errors and learn more rapidly when the set of concepts is logically consistent, whether the concepts are learned sequentially or simultaneously. We compare the results of these subjects to those of subjects learning equivalent concepts that share sets of relevant features but are not logically consistent. We enhance a neural network model to simulate our psychological experiments.
This paper discusses an approach to integrating empirical and explanation based learning techniques. The paper focuses on OCCAM, a program that has the capability to acquire via empirical means the knowledge needed for analytical learning. Two examples of this capability are discussed:
The ability to use empirical techniques to acquire a domain theory for explanation based learning.
The ability to use empirical learning techniques to find common patterns for causal relationships. These patterns encode a theory of causality (i.e., a set of general principles for recognizing causal relationships). Once acquired, a theory of causality can facilitate later learning by focusing on hypotheses which are consistent with the theory.
We describe an incremental learning algorithm, called theory-driven learning, that creates rules to predict the effect of actions. Theory-driven learning exploits knowledge of regularities among rules to constrain the learning problem. We demonstrate that this knowledge enables the learning system to rapidly converge on accurate predictive rules and to tolerate more complex training data. An algorithm for incrementally learning these regularities is described and we provide evidence that the resulting regularities are sufficiently general to facilitate learning in new domains.
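The constraining role that a theory of causality plays in TDL can be sketched as a filter over candidate predictive rules: only hypotheses the theory licenses are retained for induction. The constraint shown (a cause must temporally precede its effect) and all names below are illustrative assumptions, not the actual regularities used by the system.

```python
# Illustrative sketch (not the published TDL algorithm): a general theory
# of causality prunes the hypothesis space, so the learner considers only
# candidate rules consistent with every constraint in the theory.

def consistent_with_theory(rule, theory):
    """A hypothetical check: each constraint in the causal theory
    must accept the candidate rule."""
    return all(constraint(rule) for constraint in theory)

def tdl_candidates(candidate_rules, theory):
    """Keep only the candidate predictive rules the theory licenses."""
    return [r for r in candidate_rules if consistent_with_theory(r, theory)]

# Toy theory: a cause must occur before its effect.
theory = [lambda r: r["cause_time"] < r["effect_time"]]
rules = [
    {"name": "push->move", "cause_time": 0, "effect_time": 1},
    {"name": "move->push", "cause_time": 1, "effect_time": 0},
]
print([r["name"] for r in tdl_candidates(rules, theory)])  # ['push->move']
```

Because the filter shrinks the space of hypotheses compatible with the data, fewer training examples are needed to single out an accurate rule, which is the convergence advantage the abstract describes.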
The influence of the prior causal knowledge of subjects on the rate of learning, the categories formed, and the attributes attended to during learning is explored. Conjunctive concepts are thought to be easier for subjects to learn than disjunctive concepts. Conditions are reported under which the opposite occurs. In particular, it is demonstrated that prior knowledge can influence the rate of concept learning and that the influence of prior causal knowledge can dominate the influence of the logical form. A computational model of this learning task is presented. In order to represent the prior knowledge of the subjects, an extension to explanation-based learning is developed to deal with imprecise domain knowledge.
In this paper, we argue that techniques proposed for combining empirical and explanation-based learning methods can also be used to detect errors in rule-based expert systems, to isolate the blame for these errors to a small number of rules, and to suggest revisions to the rules that eliminate these errors. We demonstrate that FOCL, an extension to Quinlan's FOIL program, can learn in spite of an incorrect domain theory (e.g., a knowledge base of an expert system that contains some erroneous rules). A prototype knowledge acquisition tool, KR-FOCL, has been constructed that can utilize a trace of FOCL to suggest revisions to a rule base.
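FOIL, which FOCL extends, grows a clause by greedily adding the literal with the highest information gain; FOCL additionally considers literals derived from the (possibly incorrect) domain theory. The sketch below shows Quinlan's FOIL gain measure; the example counts are made up for illustration.

```python
import math

# Hedged sketch of FOIL's information-gain heuristic for literal
# selection: gain = t * (I_before - I_after), where t is the number of
# positive tuples still covered after adding the literal and
# I = -log2(p / (p + n)) for p positive and n negative tuples.

def foil_gain(pos_before, neg_before, pos_after, neg_after):
    """Information gain from specializing a clause with a literal."""
    if pos_after == 0:
        return 0.0
    info_before = -math.log2(pos_before / (pos_before + neg_before))
    info_after = -math.log2(pos_after / (pos_after + neg_after))
    return pos_after * (info_before - info_after)

# Comparing two candidate literals on 10 positive / 10 negative tuples:
g1 = foil_gain(10, 10, 8, 2)  # keeps 8 positives, 2 negatives
g2 = foil_gain(10, 10, 5, 0)  # keeps 5 positives, 0 negatives; gain 5.0
```

A learner that scores theory-derived literals with the same measure can exploit correct parts of an approximate domain theory while empirically learned literals compensate for its erroneous rules.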