Recent work in knowledge compilation suggests that relations that can be described precisely by either Horn theories or tree constraint networks are identifiable in output-polynomial time. Algorithms for computing approximations in these languages have also been proposed. When we tested such approximations on artificially generated and real-life data, we immediately observed that they admit numerous superfluous models. As a result, although certain entailment queries can be answered reliably, these methods may be ineffective for a large class of membership queries.
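
To illustrate where the superfluous models come from, the sketch below computes the Horn upper bound of a small Boolean relation, relying on the standard characterization of its models as the closure of the relation under componentwise conjunction (bitwise AND). The function name and the toy relation are ours for illustration and do not reflect the algorithms studied in the paper.

    def horn_closure(models):
        """Close a set of Boolean tuples under componentwise AND (bitwise intersection)."""
        closed = set(models)
        frontier = set(models)
        while frontier:
            new = set()
            for a in frontier:
                for b in closed:
                    c = tuple(x & y for x, y in zip(a, b))
                    if c not in closed:
                        new.add(c)
            closed |= new
            frontier = new
        return closed

    if __name__ == "__main__":
        # Toy relation over three Boolean variables (illustrative only).
        relation = {(1, 1, 0), (0, 1, 1), (1, 0, 1)}
        envelope = horn_closure(relation)      # models of the tightest Horn theory
        superfluous = envelope - relation      # accepted by the Horn bound, absent from the data
        print("original models:   ", sorted(relation))
        print("Horn upper bound:  ", sorted(envelope))
        print("superfluous models:", sorted(superfluous))

On this toy relation the Horn bound admits four models that are not in the data; it is exactly this kind of false positive that makes membership queries unreliable.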
To improve the approximation quality, we investigate the k-decomposition problem: determining whether a relation can be described by a disjunction of k tractable theories. The paper discusses the complexity of this task, outlines several algorithms for computing both exact and approximate k-decompositions, and evaluates the potential of the approach empirically. We focus on the classes of tree constraint networks and Horn theories and report results on artificially generated relations and on three real-life cases. Our experiments show that for uniform random relations, the quality of upper-bound approximations improves as k increases. However, when very high accuracy is required, decomposition is not effective, since k grows linearly with the size of the data. When the data comes from a near-tractable source, by contrast, the approach is useful: experiments on the King-Rook-King problem show that the generalizing power of such methods is comparable to that of recently developed learning algorithms.
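
Stated schematically (the notation is ours: models(phi) denotes the set of models of a theory phi, and L the chosen tractable language), the decision problem reads:

\[ \textsc{k-Decomposition}(\rho, \mathcal{L}, k):\ \exists\, \varphi_1,\dots,\varphi_k \in \mathcal{L}\ \text{such that}\ \mathrm{models}(\varphi_1)\cup\cdots\cup\mathrm{models}(\varphi_k)=\rho. \]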