Learning approximate diagnosis
Model-based diagnosis (MBD) offers several advantages over experiential rule-based systems. A principal shortcoming of MBD, however, is that it learns nothing from the examples it solves: an MBD system facing the same task a second time incurs the same computational effort as the first. Our earlier work on incorporating explanation-based learning (EBL) in MBD suggested a diagnostic architecture integrating EBL and MBD components. In this architecture, EBL was used to learn diagnostic rules, but because the diagnoses proposed by the rules could be erroneous, constraint suspension testing was used to check every proposed diagnosis. Insisting on perfect accuracy causes the performance of this "learning while doing" scheme to deteriorate rapidly with the size of the device being diagnosed. In this paper, we describe a method for trading accuracy off against efficiency. In this approach, most diagnosis problems are handled by associational rules learned from previous problems; model-based reasoning and learning are activated only when performance drops below a given threshold. We present empirical results on circuits with increasing numbers of components, illustrating how the approach scales up.
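The control loop described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: `model_based_diagnose` stands in for a full MBD engine with constraint suspension testing, the rule store is a simple symptom-to-diagnosis map, and accuracy is tracked over a sliding window of verified outcomes.

```python
from collections import deque

class ApproximateDiagnoser:
    """Sketch: fast learned rules, with MBD fallback below an accuracy threshold."""

    def __init__(self, model_based_diagnose, threshold=0.8, window=20):
        self.mbd = model_based_diagnose       # slow but sound fallback (hypothetical)
        self.rules = {}                       # symptoms -> learned diagnosis
        self.threshold = threshold            # minimum acceptable recent accuracy
        self.outcomes = deque(maxlen=window)  # recent correctness reports

    def accuracy(self):
        # With no feedback yet, optimistically trust the rules.
        return sum(self.outcomes) / len(self.outcomes) if self.outcomes else 1.0

    def diagnose(self, symptoms):
        # Fast path: apply a learned associational rule while accuracy holds.
        if symptoms in self.rules and self.accuracy() >= self.threshold:
            return self.rules[symptoms]
        # Slow path: invoke model-based reasoning, then cache an EBL-style rule.
        diagnosis = self.mbd(symptoms)
        self.rules[symptoms] = diagnosis
        return diagnosis

    def feedback(self, correct):
        # Caller reports whether a rule-based diagnosis was later verified.
        self.outcomes.append(bool(correct))
```

The design choice this illustrates is that verification (and its cost) moves out of the inner loop: rules are trusted until measured performance falls below the threshold, at which point the system pays for model-based reasoning again and learns from it.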