ICS has a technical report series; its three academic departments and three ORUs each generate new information and knowledge.

The first parallel designs for implementing self-organizing lists on the Xnet interconnection network are presented. Self-organizing lists permute the order of list entries after an entry is accessed, according to some update heuristic. The heuristic attempts to place frequently requested entries closer to the front of the list. This paper outlines Xnet systems for self-organizing lists under the move-to-front and transpose update heuristics. Our novel designs can be used to achieve high-speed lossless text compression.
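As a minimal sequential illustration of the two update heuristics named above (not the paper's parallel Xnet designs), both rules can be sketched in Python:

```python
def move_to_front(lst, key):
    """Access `key`; on a hit, move the entry to the front of the list."""
    if key in lst:
        lst.remove(key)
        lst.insert(0, key)
        return True
    return False

def transpose(lst, key):
    """Access `key`; on a hit, swap the entry with its predecessor,
    so frequently requested entries drift toward the front."""
    try:
        i = lst.index(key)
    except ValueError:
        return False
    if i > 0:
        lst[i - 1], lst[i] = lst[i], lst[i - 1]
    return True
```

Move-to-front promotes an accessed entry all the way to the head in one step, while transpose moves it only one position, trading adaptation speed for stability.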

Diagnostic expert systems constructed using traditional knowledge-engineering techniques identify malfunctioning components using rules that associate symptoms with diagnoses. Model-based diagnosis (MBD) systems use models of devices to find faults given observations of abnormal behavior. These approaches to diagnosis are complementary. We consider hybrid diagnosis systems that include both associational and model-based diagnostic components. We present results on explanation-based learning (EBL) methods aimed at improving the performance of hybrid diagnostic problem solvers. We describe two architectures, EBL_IA and EBL(p). EBL_IA is a form of "learning in advance" that pre-compiles models into associations; at run time the diagnostic system is purely associational. In EBL(p), the run-time diagnosis system contains associational, MBD, and EBL components. Learned associational rules are preferred, but when they are incomplete they may produce too many incorrect diagnoses. When errors cause performance to dip below a given threshold p, EBL(p) activates MBD and explanation-based "learning while doing". We present results of empirical studies comparing MBD without learning against EBL_IA and EBL(p). The main conclusions are as follows: EBL_IA is superior when it is feasible, but it is not feasible for large devices; EBL(p) can speed up MBD and scale up to larger devices in situations where perfect accuracy is not required.
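The EBL(p) control regime described above can be caricatured in a few lines. The sketch below is purely illustrative, not the paper's architecture: the class name, the rule table, and the error-rate bookkeeping are all assumptions, and `mbd` stands in for any model-based diagnoser callable.

```python
class HybridDiagnoser:
    """Illustrative sketch: prefer learned associational rules, and fall
    back to model-based diagnosis (learning a new rule from its result)
    once the observed error rate of the rules exceeds the threshold p."""

    def __init__(self, mbd, p):
        self.mbd = mbd      # model-based fallback: symptoms -> diagnosis
        self.p = p          # error-rate threshold
        self.rules = {}     # learned symptom-tuple -> diagnosis associations
        self.trials = 0
        self.errors = 0

    def diagnose(self, symptoms, true_fault=None):
        key = tuple(sorted(symptoms))
        error_rate = self.errors / self.trials if self.trials else 0.0
        if key in self.rules and error_rate <= self.p:
            guess = self.rules[key]
            if true_fault is not None:     # feedback available: track errors
                self.trials += 1
                if guess != true_fault:
                    self.errors += 1
            return guess
        # MBD plus "learning while doing": compile the model-based
        # answer into a new associational rule for future queries.
        diagnosis = self.mbd(symptoms)
        self.rules[key] = diagnosis
        return diagnosis
```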

This paper presents a knowledge-based learning method and reports on case studies in different domains. The method integrates abduction and learning. Abduction provides an improved method for constructing explanations, in comparison with deductive methods traditionally associated with explanation-based learning. The improvement enlarges the set of examples that can be explained, so that one can learn from additional examples using explanation-based macro-learning techniques. Abduction also provides a form of knowledge-level learning. The importance of abductive learning is shown by case studies involving over a hundred examples taken from diverse domains requiring logical, physical, and psychological knowledge and reasoning. The case studies are relevant to a wide range of practical tasks, including: natural language understanding and plan recognition; qualitative physical reasoning and postdiction; diagnosis and signal interpretation; and decision-making under uncertainty. The descriptions of the case studies show how to set up abduction engines in particular domains and how abduction solves particular tasks. They show how to provide and how to represent the relevant knowledge, including both domain knowledge and meta-knowledge relevant to abduction and search control. The description of each case study includes an example, its explanation, and discussions of what is learned by macro-learning and by abductive inference.

We address the problem of enumerating (producing) all models of a given theory. We show that the enumeration task can be performed in time proportional to the product of the number of models and the effort needed to generate each model in isolation. In other words, the requirement of generating a new solution in each iteration does not in itself introduce substantial complexity. Consequently, it is possible to decide whether any tractably satisfiable formula has more than K solutions in time polynomial in the size of the formula and in K. In the special cases of Horn formulas and 2-CNFs, although counting is #P-complete, deciding whether the count exceeds K is polynomial in K.
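A minimal sequential sketch of the enumerate-with-cutoff idea (not the paper's algorithm): count the models of a CNF by backtracking, pruning falsified branches, and stopping as soon as the count exceeds K. The clause representation (signed integers, positive meaning the variable is true) and all names are illustrative.

```python
def count_models_up_to(clauses, n_vars, k):
    """Return min(#models, k + 1) for a CNF given as a list of clauses,
    each a list of nonzero ints (positive literal = variable true)."""
    count = 0

    def clause_state(clause, assign):
        # True = satisfied, False = falsified, None = still undecided
        undecided = False
        for lit in clause:
            val = assign.get(abs(lit))
            if val is None:
                undecided = True
            elif val == (lit > 0):
                return True
        return None if undecided else False

    def extend(assign, var):
        nonlocal count
        if count > k:
            return  # already more than k models: stop searching
        if any(clause_state(c, assign) is False for c in clauses):
            return  # some clause is falsified: prune this branch
        if var > n_vars:
            count += 1  # complete assignment satisfying every clause
            return
        for val in (False, True):
            assign[var] = val
            extend(assign, var + 1)
            del assign[var]

    extend({}, 1)
    return count
```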

We maintain the minimum spanning tree of a point set in the plane, subject to point insertions and deletions, in time O(n^1/2 log^2 n) per update operation. We reduce the problem to maintaining bichromatic closest pairs, which we solve in time O(n^ε) per update. Our algorithm uses a novel construction, the ordered nearest neighbors of a sequence of points. Any point set or bichromatic point set can be ordered so that this graph is a simple path. Our results generalize to higher dimensions, and to fully dynamic algorithms for maintaining maxima of decomposable functions, including the diameter of a point set and the bichromatic farthest pair.

We convert constructive solid geometry input to explicit representations of polygons, polyhedra, or more generally d-dimensional polyhedra, in time O(n^d), improving a previous O(n^d log n) bound. We then show that any Boolean formula can be preprocessed in time O(n log n/ log log n) so that the value of the formula can be maintained, as variables are changed one by one, in time O(log n/ log log n) per change; this speeds up certain output-sensitive algorithms for constructive solid geometry.

We used a genetic search algorithm to find sets of points with many halving lines. There are sets of 10 points with 13 halving lines, 12 points with 18 halving lines, 14 points with 22 halving lines, 16 points with 27 halving lines, and 18 points with 32 halving lines. We find a construction generalizing the 12-point configuration and show that, for any n = 3 · 2^i, there are configurations of n points with n log_4 (2n/3) = 3(i + 1)2^(i-1) halving lines.
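The quantity the genetic search optimizes can be computed directly. A brute-force sketch (names illustrative) for an even-size point set in general position: a halving line passes through two of the points and leaves exactly (n − 2)/2 of the remaining points strictly on each side.

```python
from itertools import combinations

def halving_lines(points):
    """Count halving lines of an even-size point set in general position
    (no three points collinear), by checking every pair of points."""
    n = len(points)
    assert n % 2 == 0
    count = 0
    for (ax, ay), (bx, by) in combinations(points, 2):
        left = 0
        for (px, py) in points:
            if (px, py) in ((ax, ay), (bx, by)):
                continue
            # Sign of the cross product: which side of line a->b is p on?
            cross = (bx - ax) * (py - ay) - (by - ay) * (px - ax)
            if cross > 0:
                left += 1
        if left == (n - 2) // 2:
            count += 1
    return count
```

For example, a unit square has exactly two halving lines, its diagonals. This O(n^3) check is only a verifier; finding configurations that maximize the count is the hard part the genetic search addresses.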

This paper presents several investigations into the prospects for identifying meaningful structures in empirical data, namely, structures permitting effective organization of the data to meet the requirements of future queries. We propose a general framework whereby the notion of identifiability is given a precise formal definition similar to that of learnability. Using this framework, we then explore whether a tractable procedure exists for deciding whether a given relation is decomposable into a constraint network or a CNF theory with desirable topology and, if the answer is positive, for identifying the desired decomposition. Finally, we address the problem of expressing a given relation as a Horn theory and, if this is impossible, finding the best k-Horn approximation to the given relation. We show that both problems can be solved in time polynomial in the length of the data.
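A small illustration of the Horn-expressibility question (not the paper's polynomial-time algorithm): a relation over boolean variables is the model set of some Horn theory exactly when it is closed under componentwise conjunction, since Horn theories are precisely those whose models are intersection-closed. That characterization yields a direct, if quadratic, check; all names below are illustrative.

```python
from itertools import combinations

def is_horn_expressible(models):
    """Return True iff the given set of boolean tuples (the relation)
    is closed under componentwise AND, i.e. is the model set of some
    Horn theory."""
    models = set(models)
    for m1, m2 in combinations(models, 2):
        meet = tuple(a and b for a, b in zip(m1, m2))
        if meet not in models:
            return False
    return True
```

When the check fails, the relation has no exact Horn representation, which is where the best k-Horn approximation problem mentioned above takes over.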

We compute the k smallest spanning trees of a point set in the planar Euclidean metric in time O(n log n log k + k min(k, n)^1/2 log(k/n)), and in the rectilinear metrics in time O(n log n + n log log n log k + k min(k, n)^1/2 log(k/n)). In three or four dimensions our time bound is O(n^(4/3+ε) + k min(k, n)^1/2 log(k/n)), and in higher dimensions the bound is O(n^(2-2/(⌈d/2⌉+1)+ε) + k n^1/2 log n).

Any connected plane nearest neighbor graph has diameter Ω(n^(1/6)). This bound generalizes to Ω(n^(1/(3d))) in any dimension d.