
About

The annual meeting of the Cognitive Science Society is aimed at basic and applied cognitive science research. The conference hosts the latest theories and data from the world's best cognitive science researchers. Each year, in addition to submitted papers, researchers are invited to highlight some aspect of cognitive science.

Paper Presentations

VITAL: A Connectionist Parser

VITAL is an experiment in parsing natural language through the interaction of many local processes without an explicit global control. The global interpretation is achieved through the convergence of a network of processes on a mutually consistent state through cycles of spreading activation and an implicit form of mutual inhibition. This can be viewed as a constraint satisfaction technique consistent with current research both in linguistics and connectionism.
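
Purely to make the mechanism concrete: the toy loop below sketches one generic cycle of spreading activation with mutual inhibition settling into a consistent state, in the spirit of the constraint-satisfaction view described above. The word senses, weights, and update rule are invented for illustration and are not VITAL's actual network.

```python
# Illustrative sketch only: a generic spreading-activation / mutual-inhibition
# settling loop, not VITAL's implementation. Nodes and weights are hypothetical.
import numpy as np

# Candidate readings for the ambiguous words in "I saw her duck":
nodes = ["saw/VERB", "saw/NOUN", "duck/VERB", "duck/NOUN"]

# Positive weights link mutually consistent readings (excitation);
# negative weights link competing alternatives (implicit mutual inhibition).
W = np.array([
    [ 0.0, -1.0,  0.5,  0.5],   # saw/VERB supports either reading of "duck"
    [-1.0,  0.0,  0.0,  0.0],   # saw/NOUN competes with saw/VERB
    [ 0.5,  0.0,  0.0, -1.0],   # duck/VERB and duck/NOUN compete
    [ 0.5,  0.0, -1.0,  0.0],
])

a = np.full(len(nodes), 0.5)        # initial activations
a[0] += 0.2                          # small bias, e.g. from lexical frequency

for _ in range(50):                  # cycles of spreading activation
    a = np.clip(a + 0.1 * (W @ a), 0.0, 1.0)   # bounded activation update

for name, act in zip(nodes, a):
    print(f"{name:10s} {act:.2f}")
# The verb reading of "saw" wins through mutual inhibition; the two readings of
# "duck" stay balanced because nothing in this toy network disambiguates them.
```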

Experiments With Sequential Associative Memories

Humans are very good at manipulating sequential information, but sequences present special problems for connectionist models. As an approach to sequential problems we have examined totally connected subnetworks of cells called sequential associative memories (SAMs). The coefficients for SAM cells are unmodifiable and are generated at random. A subnetwork of SAM cells performs two tasks: (1) their activations determine a state for the network that permits previous inputs and outputs to be recalled, and (2) they increase the dimensionality of input and output representations to make it possible for other (modifiable) cells in the network to learn difficult tasks. The second function is similar to the distributed method, a way of generating intermediate cells for non-sequential problems. Results from several experiments are presented. The first is a robotic control task that required a network to produce one of several sequences of outputs when input cells were set to a corresponding 'plan number'. The second experiment was to learn a sequential version of the parity function that would generalize to arbitrarily long input strings. Finally, we attempted to teach a network how to add arbitrarily long pairs of binary numbers. Here we were successful if the network contained a cell dedicated to the notion of 'carry'; otherwise the network performed at less than 100% for unseen sequences longer than those used during training. Each of these tasks required a representation of state, and hence a network with feedback. All were learned using subnetworks of SAM cells.
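
As a rough illustration of the two functions attributed to SAM cells, the sketch below uses a fixed, randomly weighted recurrent subnetwork to carry state while only a linear readout is trained to recall the input presented two steps earlier. The network size, the tanh update, and the ridge-regression readout are my own assumptions, not the reported experiments.

```python
# Minimal sketch: a subnetwork with fixed, randomly generated coefficients
# carries state over time; only the readout is modifiable. Sizes and the
# readout method are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_sam, T = 50, 2000

W_in  = rng.normal(0, 1.0, (n_sam, 1))       # unmodifiable input weights
W_rec = rng.normal(0, 0.5, (n_sam, n_sam))   # unmodifiable recurrent weights
W_rec *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_rec)))  # keep dynamics stable

u = rng.integers(0, 2, T).astype(float)      # random binary input stream
states = np.zeros((T, n_sam))
x = np.zeros(n_sam)
for t in range(T):
    x = np.tanh(W_in[:, 0] * u[t] + W_rec @ x)   # SAM state update (fixed weights)
    states[t] = x

# Modifiable readout: recall the input presented two steps earlier.
target = np.roll(u, 2)
X, y = states[10:], target[10:]
w = np.linalg.solve(X.T @ X + 1e-3 * np.eye(n_sam), X.T @ y)  # ridge readout

pred = (X @ w > 0.5).astype(float)
print("recall accuracy for input at t-2:", (pred == y).mean())
```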

Using Rules and Task Division to Augment Connectionist Learning

Learning as a function of task complexity was examined in human learning and two connectionist simulations. An example task involved learning to map basic input/output digital logic functions for six digital gates (AND, OR, XOR, and negated versions) with 2 or 6 inputs. Humans given instruction learned the task in about 300 trials and showed no effect of the number of inputs. Backpropagation learning in a network with 20 hidden units required 68,000 trials and scaled poorly, requiring 8 times as many trials to learn the 6-input gates as to learn the 2-input gates. A second simulation combined backpropagation with task division based upon rules humans use to perform the task. The combined approach improved the scaling of the problem, learning in 3,100 trials and requiring about 3 times as many trials to learn the 6-input gates as to learn the 2-input gates. Issues regarding scaling and augmenting connectionist learning with rule-based instruction are discussed.
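
For readers who want to see the first kind of simulation in miniature, here is a small backpropagation network with 20 hidden units learning the 2-input XOR gate. The hidden-layer size matches the abstract, but the learning rate, epoch count, and initialization are arbitrary choices, and this run is not the reported 68,000-trial simulation.

```python
# Backpropagation on 2-input XOR with one 20-unit hidden layer.
# Hyperparameters are illustrative assumptions, not the reported setup.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)        # XOR targets

W1 = rng.normal(0, 1, (2, 20)); b1 = np.zeros(20)
W2 = rng.normal(0, 1, (20, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

lr = 1.0
for epoch in range(5000):
    h = sigmoid(X @ W1 + b1)               # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)    # backpropagate squared error
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(0)

print(np.round(out, 2).ravel())            # should approach [0, 1, 1, 0]
```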

On-Line Processing Of A Procedural Text

The processing of sentences, propositions, and conceptual structures was studied using a task environment which required subjects to read, interpret on-line, and recall a procedural text while reading times were measured for each sentence. A declarative representation of the conceptual frame structure of the procedure expressed in the text, as well as propositional and syntactic analysis of sentences, provided variables that were used to predict these three sets of data. Results showed that properties of the procedural frame, as well as propositional density, and clause structure predicted reading times, recall, and on-line interpretation, and that reading times decreased when high-level conceptual frame processing increased. These results were interpreted as evidence for parallel on-line conceptual processing of sentences during input. As well, reading times for information near boundaries of conceptual structure reflected some buffering in comprehension.

Understanding Stories in their Social Context

Stories concerning multiple agents with interacting goals and plans are difficult to understand; the task can be simplified, however, if a program is given sufficient knowledge of social structures. Representations of social aspects of the story may also be necessary components of a satisfactory understanding of the story; here, we consider the distinctions which must be representable to support the task of advice-giving in the social domain. This paper elaborates on established goal taxonomies in order to capture distinctions among goals embedded in a social context; these distinctions serve both as the basis for choosing advice and as inferential shortcuts. It goes on to explore ways in which such social goals can be predicted from a detailed understanding of the conventions defining the social structures linking the various agents. Social units, social-situations, and triangles are introduced as conceptual structures which organize interpersonal-themes. The goals predicted by these themes provide a focus for motivational and impact inferences which would otherwise be lacking. These and related structures also allow some direct predictions of actions, as when a social-situation provides scriptal specification of action sequences or when a contract underlying a social-unit licenses specific recourses in response to obligation failures.

A Theory of Simplicity

The simplicity of a hypothesis for a person cannot be measured by the simplicity of the person's representation of that hypothesis (for example, the number of symbols used), because any hypothesis can be represented with a single symbol. A better measure of simplicity is the ease with which the hypothesis can be used to account for actual and foreseeable data. But it is also important to allow for different ways in which data might be represented. We suggest that the relevant ways of representing data are those ways in which the person is interested, i.e., those representations that most directly help to answer questions the person wants to answer. In particular, we suggest that the simplicity of a hypothesis for a person is determined by the shortness of the connection between that hypothesis and the data that interest the person, as measured by the number of intermediate steps he or she needs to appreciate in order to appreciate the complete connection.
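
The proposed measure can be made concrete with a toy inference graph: the simplicity of a hypothesis, relative to a datum of interest, is the number of intermediate steps on the shortest chain connecting them. The hypotheses, steps, and breadth-first counting below are an invented illustration of that idea, not the authors' formalism.

```python
# Toy illustration: simplicity as the length of the shortest chain of inference
# steps from a hypothesis to a datum of interest. The graph is invented.
from collections import deque

# Directed edges: each edge is one inference step a person must appreciate.
steps = {
    "H1: all ravens are black": ["this raven is black"],
    "H2: pigment gene G is fixed in ravens": ["ravens express pigment P",
                                              "G codes for P"],
    "ravens express pigment P": ["this raven is black"],
    "G codes for P": [],
    "this raven is black": [],
}

def steps_to_data(hypothesis, datum):
    """Length of the shortest chain of steps from hypothesis to datum."""
    queue = deque([(hypothesis, 0)])
    seen = {hypothesis}
    while queue:
        node, dist = queue.popleft()
        if node == datum:
            return dist
        for nxt in steps.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return float("inf")

datum = "this raven is black"
for h in ["H1: all ravens are black", "H2: pigment gene G is fixed in ravens"]:
    print(h, "->", steps_to_data(h, datum), "step(s)")
# Fewer intermediate steps = simpler hypothesis relative to these data.
```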

The Induction of Mental Structures While Learning to Use Symbolic Systems

Subjects learned to map phrase-structure-defined strings onto geometric figure arrays. "String-generation" subjects produced symbol strings corresponding to arrays; "string-interpretation" subjects constructed arrays corresponding to strings. "Mixed" subjects alternated between these tasks. Subjects' knowledge of symbol sequence acceptability was periodically probed. Mixed subjects learned the structure dramatically faster than other subjects. This suggests that natural acquisition of structure underlying symbol-world mapping systems like language depends on learning multi-directional mappings.

Integrating Marker Passing and Connectionism for Handling Conceptual and Structural Ambiguities

This paper discusses the problem of selecting the correct knowledge structures in parsing natural language texts which are conceptually and structurally ambiguous and require dynamic reinterpretation. An approach to this problem is presented which represents all knowledge structures in a uniform manner and which uses a constrained marker passing mechanism augmented with elements of connectionist models. This approach is shown to have the advantage of completely integrating all parsing processes, while maintaining a simple, domain-independent processing mechanism.
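
To illustrate the flavor of a constrained marker-passing mechanism with connectionist-style evidence strengths, the sketch below spreads decaying markers from two concepts through a small semantic network and ranks the intersections. The network, decay factor, and depth limit are hypothetical and do not reproduce the paper's parser.

```python
# Illustrative constrained marker passing: markers spread a limited depth with
# decaying strength; intersections propose candidate knowledge structures.
# The network and parameters are invented.
network = {
    "waiter":       ["restaurant", "person"],
    "menu":         ["restaurant", "document"],
    "restaurant":   ["eating-event"],
    "person":       [],
    "document":     [],
    "eating-event": [],
}

def pass_markers(origin, max_depth=2, decay=0.5):
    """Spread markers from one concept; strength halves at each link."""
    strengths = {origin: 1.0}
    frontier = [origin]
    for _ in range(max_depth):
        nxt = []
        for node in frontier:
            for nb in network.get(node, []):
                s = strengths[node] * decay
                if s > strengths.get(nb, 0.0):
                    strengths[nb] = s
                    nxt.append(nb)
        frontier = nxt
    return strengths

m1 = pass_markers("waiter")
m2 = pass_markers("menu")
# Combined marker strength plays the role of a connectionist evidence score.
hits = {c: m1[c] * m2[c] for c in m1 if c in m2}
print(sorted(hits.items(), key=lambda kv: -kv[1]))   # "restaurant" ranks first
```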

Integrated Common Sense and Theoretical Mental Models In Physics Problem Solving

Cognitive scientists have recently developed models of physicists' problem solving behavior. Their models propose a rich set of cognitive constructs including procedures (Heller and Reif, 1984), problem-solving schemata (Larkin, 1983), categorization rules (Chi, Feltovich & Glaser, 1981), phenomenological primitives (diSessa, 1983), forward and backward chaining (Larkin, McDermott, Simon, & Simon, 1980), and qualitative reasoning (deKleer, 1975; Forbus, 1986; deKleer and Brown, 1986; and others in Bobrow, ed., 1986). These constructs have proved useful in understanding aspects of physics reasoning. This paper will provide an analysis of physics problem solving skill that integrates cognitive constructs previously considered disparate. The main point is this: commonsense reasoning about situations provides an indispensable resource for coping with physics problem solving complexity. More precisely, I will argue that the systematic integration of the deep structure of situational and theoretical knowledge can reproduce competent physics cognition. To support this claim I will discuss the capabilities of running computer programs, written in Prolog, that implement several representations and reasoning processes. In addition, I will show how the Prolog models capture the essence of a think-aloud protocol of a physicist recovering from an error while working a novel problem.

A connectionist model of selective attention in visual perception

This paper describes a model of selective attention that is part of a connectionist object recognition system called MORSEL. MORSEL is capable of identifying multiple objects presented simultaneously on its "retina," but because of capacity limitations, MORSEL requires attention to prevent it from trying to do too much at once. Attentional selection is performed by a network of simple computing units that constructs a variable-diameter "spotlight" on the retina, allowing sensory information within the spotlight to be preferentially processed. Simulations of the model demonstrate that attention is more critical for less familiar items and that attention can be used to reduce inter-item crosstalk. The model suggests four distinct roles of attention in visual information processing, as well as a novel view of attentional selection that has characteristics of both early and late selection theories.
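
A minimal sketch of the spotlight idea, under my own simplifications (a 1-D retina and a Gaussian window rather than MORSEL's network of simple units): attention multiplicatively gates retinal activity so that the attended item passes through while crosstalk from a second, simultaneously presented item is suppressed.

```python
# Illustrative spotlight gating, not MORSEL's network. The 1-D retina, Gaussian
# profile, and diameter parameter are simplifying assumptions.
import numpy as np

retina = np.zeros(40)
retina[8:12]  = 1.0     # item A
retina[25:29] = 1.0     # item B presented simultaneously

def spotlight(center, diameter):
    """Soft attentional window; a wider diameter is less selective."""
    pos = np.arange(len(retina))
    return np.exp(-((pos - center) ** 2) / (2 * (diameter / 2) ** 2))

gated = retina * spotlight(center=10, diameter=6)   # attend to item A
print("energy passed from A:", gated[8:12].sum().round(2))
print("crosstalk from B:   ", gated[25:29].sum().round(2))
```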

Spatial Reasoning Using Sinusoidal Oscillations

This paper outlines some preliminary results concerning the use of sinusoidal oscillations to represent vectors in two-dimensional space. The proposed representation scheme permits efficient implementation of translation and rotation, and immediate detection of such relations as collinearity and proximity of points. This scheme is then extended so that arbitrary convex regions of the plane are represented using a pair of signals varying over time. Finally, the advantages of representing convex regions in this way are shown to derive from the resulting ease with which such regions can be translated and rotated in the plane, and—more strikingly—from the simplicity of determining whether two such regions overlap.
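
The basic vector encoding can be checked numerically. Assuming the standard reading that a 2-D vector (x, y) is carried by the signal x·cos(ωt) + y·sin(ωt), the sketch below verifies that adding signals implements translation and that delaying the phase by θ implements rotation by θ; the frequency and sampling are arbitrary, and the paper's extension to convex regions is not reproduced.

```python
# Numerical check of the sinusoidal vector encoding (x, y) -> x*cos(wt) + y*sin(wt).
# Frequency and sample count are arbitrary choices.
import numpy as np

w = 2 * np.pi                    # one oscillation per unit time
t = np.linspace(0, 1, 256, endpoint=False)

def encode(v):
    x, y = v
    return x * np.cos(w * t) + y * np.sin(w * t)

def decode(signal):
    # Project back onto the cosine and sine components over one full period.
    return np.array([2 * np.mean(signal * np.cos(w * t)),
                     2 * np.mean(signal * np.sin(w * t))])

a, b = np.array([1.0, 2.0]), np.array([3.0, -1.0])

# Translation: adding the signals adds the vectors.
print(decode(encode(a) + encode(b)))          # ~ [4., 1.]

# Rotation: delaying the signal's phase by theta rotates the vector by theta.
theta = np.pi / 2
k = int(round(theta / (2 * np.pi) * len(t)))  # delay expressed in samples
print(decode(np.roll(encode(a), k)))          # ~ [-2., 1.] (a rotated by 90 degrees)
```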

Parsing Metacommunication in Natural Language Dialogue to Understand Indirect Requests

This paper reports on the development of a natural language processing system based on human communication theory. Our system, DIALS (for DIALogue Structures), implements and extends the theory of metacommunication developed in the field of human speech communication. The theory of Dialogue Structures is based on research showing that the interpretation of conversation is enabled by metacommunications helpful in managing interactions and that indirect requests are usually patterns expressing relationships in the interaction rather than simply expressing the content of the request. As such, indirect requests are best interpreted by a semantic grammar expert at managing communication, rather than a semantic grammar knowledgeable about some specific task domain. Our system, based on this approach, correctly interprets all indirect requests from a corpus of 1500 requests transcribed from tape recordings with a combined total of over 80 minutes of continuous conversation across 27 dialogues between airline reservation agents and customers.

The Architecture of Children's Physics Knowledge: A Problem-Solving Perspective

The project investigated the nature of young children's physics knowledge and the architecture of its development. I utilized two contexts of development for this purpose: comparison of cross-age developments in knowledge of a domain and fine-grained analysis of developments that occurred in the context of problem resolution. The empirical base consisted of three conditions under which preschoolers were asked to establish equilibrium on the pan balance. Analysis focused on the child's transformation of a number-based to a weight-based approach to the problem. All the conditions employed the same nine sets of elements to be balanced; the conditions varied a) whether or not the child received feedback from the apparatus and b) the order of set presentation (total n = 56). A sequence of fine-grained analyses of the videotaped data led to a view of children's physics knowledge as localized and context-sensitive, with the steps involved in its development remarkably limited in extension: in a) the scope within which children come to represent weight or weight differences (e.g., discrete elements versus the collective weight of elements in a pan), b) the scope of contexts in which they come to view weight as relevant to the goal of mechanical equilibrium, and c) the bounds of diagnostic and causal implications.

Explorations in Understanding How Physical Systems Work

This paper presents a theory of how to enable people to understand how physical systems work. Two key hypotheses have emerged from our research. The first is that in order to understand a physical system, students need to acquire causal mental models for how the system works. Further, it is not enough to have just a single mental model. Students need alternative mental models that represent the system's behavior from different, but coordinated, perspectives, such as at the macroscopic and microscopic levels. The second hypothesis is that in order to make causal understanding feasible in the initial stages of learning, students have to be introduced to simplified models. These models then get gradually refined into more sophisticated mental models. We will present a theory outlining (1) the properties of an easily learnable, coherent set of initial models, and (2) the types of evolutions needed for students to acquire a more powerful set of models with broad utility.

Patching up Old Plans

Recent research has demonstrated the value of re-using old plans rather than creating plans from scratch. This approach to planning creates the need for efficient and flexible plan adaptation methods to transform a past plan to fit the current problem. A characteristic of plans is that they often fail. This creates the need for efficient and flexible plan repair methods. We propose a uniform treatment of the two issues, plan adaptation and repair, based on a combination of Case-Based Reasoning and heuristics. Plan adaptation involves incremental modification of the old plan, and fixing of anticipated problems through similarity-based retrieval of cases that supply appropriate modifications. Plan repair involves explanation-based retrieval of previous failures that supply possible repairs. A selected repair is then adapted to fit the current failure. The proposed approach gives a planner the flexibility to access a broad range of adaptation and repair strategies not available to planners that use either of the two methods in isolation. The approach has been implemented in the PERSUADER, a case-based planner that generates and repairs plans to resolve labor-management disputes.

Access and Use of Previous Solutions In a Problem Solving Situation

An important component of problem solving is the ability to make use of previous examples. This requires noticing the relevance between the current and previous problems. We examine the role of the superficial and structural relations among problems and the remindings that these similarities elicit in a problem solving situation. Students learned to program in an electronic book environment in which they were able to store and later retrieve solved problems. Their use of previous solutions suggests that novices are indeed sensitive to structural similarities and can use retrieved solutions in new problem situations.

The Use of Explanations for Completing and Correcting Causal Models

Causal models describe some part of the world to allow an information system to perform complex tasks such as diagnosis. However, as many researchers have discovered, such models are rarely complete or consistent. As well, the world may change slightly, making a previously complete model incomplete. A computational theory of the use of causal models must allow for completion and correction in the face of new evidence. This paper discusses these issues with respect to the evolution of a causal model in a diagnosis task. The reasoner's goal is to diagnose a fault in a malfunctioning automobile, and it improves its diagnostic model by comparing it with an instructor's. A general process model is presented with two implementations. Related work in explanation-based learning and in incorrect causal models is discussed.

A Dynamical Theory of the Power-Law of Learning in Problem-Solving

The ubiquitous power-law of practice has been a touchstone of cognitive models. It predicts that the speed of performance of a task will improve as a power of the number of times that the task is performed. In this paper we derive the power-law from a graph-dynamical theory of learning by considering changes in problem-space graph topology due to the addition of operators, and alterations in the decision procedure used to decide which operator to apply at a particular point in the problem space. The general approach of applying dynamical principles to cognitive problems holds much promise in unifying other areas of learning and intelligent activity.
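
For reference, the power-law of practice itself says that the time to perform a task after n practice trials falls as T(n) = A·n^(-b), which is a straight line in log-log coordinates. The short check below, with arbitrary A and b, recovers the exponent by a log-log fit; it illustrates the law being derived, not the paper's graph-dynamical derivation.

```python
# The power-law of practice: T(n) = A * n**(-b). A noise-free curve is a straight
# line in log-log coordinates, so a log-log fit recovers A and b. Values arbitrary.
import numpy as np

A, b = 5.0, 0.4
n = np.arange(1, 201)
T = A * n ** (-b)                      # idealized practice curve

slope, intercept = np.polyfit(np.log(n), np.log(T), 1)
print("recovered exponent b =", -slope)             # ~ 0.4
print("recovered scale    A =", np.exp(intercept))  # ~ 5.0
```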

Poster Presentations

Unsupervised Learning of Correlational Structure

People can learn simply from observation, without explicit feedback. Natural language acquisition is perhaps the most spectacular example, but unsupervised learning occurs in many domains. We present 1) a task analysis of a broad class of unsupervised learning problems and 2) an initial simulation based on the task analysis which successfully learns all the rule types identified in the analysis. Our task analysis characterizes systems of interpredictive correlational rules which could be the basis for category formation in unsupervised learning. For example, observation of various animals could lead to abstracting covariation rules among wings, feathers, and flight, and also among fins, scales, and swimming. These rules in turn could form the basis for the categories bird and fish. Our analysis identifies three types of predictive features and three types of rules which may be available in input: universal, contrastive, and exception-based rules. This analysis guided design of our learning procedures. Our simulation succeeds in learning all three rule types. This is difficult because procedures which facilitate learning one rule type may inhibit learning another. Further, our simulation is restricted in psychologically motivated ways and succeeds despite these requirements. We know of no other simulation or modeling project which addresses exactly this class of learning problems. Our results demonstrate the existence of successful procedures. However, we believe our most valuable contributions are our task analysis and framework for testing the power and limits of domain-general learning procedures applied to unsupervised learning problems.
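
As a toy illustration of the covariation idea, and nothing more: the snippet below computes plain feature correlations from a handful of invented, unlabeled observations and reports which features best predict one another, the kind of interpredictive structure that could seed categories such as bird and fish. It does not implement the paper's three rule types or its restricted procedures.

```python
# Covariation from unlabeled observations. The feature list and data are invented;
# this is plain correlation, not the paper's learning procedures.
import numpy as np

features = ["wings", "feathers", "flies", "fins", "scales", "swims"]
data = np.array([
    [1, 1, 1, 0, 0, 0],   # bird-like observations
    [1, 1, 1, 0, 0, 0],
    [1, 1, 0, 0, 0, 0],   # a flightless bird: an exception to the "flies" rule
    [0, 0, 0, 1, 1, 1],   # fish-like observations
    [0, 0, 0, 1, 1, 1],
    [0, 0, 0, 1, 0, 1],   # a fish without scales
])

corr = np.corrcoef(data.T)
for i, f in enumerate(features):
    partners = [features[j] for j in np.argsort(-corr[i]) if j != i][:2]
    print(f"{f:8s} best predicted by {partners}")
```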

Similarity-Based and Explanation-Based Learning of Explanatory and Nonexplanatory Information

We suggest that human learners employ both similarity-based learning (SBL) and explanation-based learning (EBL) procedures and that the successful use of these procedures is determined by the characteristics of the information to be learned. In a domain without underlying causal structure, multiple examples can lead to successful SBL, but not to successful EBL. In a domain with underlying causal structure, the use of appropriate background knowledge can lead to successful EBL, but not to SBL. A series of experiments was carried out in which a common initial passage was followed with a variety of different types of information (a second similar instance, a second contrasting instance, frequency data, or explanations). EBL occurred only when subjects had sufficient background knowledge and when the information to be learned could be causally structured. SBL occurred when there were multiple examples, even in domains without causal structure.

Netzsprech - Another Case for Distributed 'Rule' Systems

This paper compares conventional symbolic rule systems with distributed network models, arguing strongly for the latter. NETZSPRECH, a network that transcribes German text in a manner similar to NetTalk, is first introduced for this purpose and serves as an example for the arguments.

Direct Inferences in a Connectionist Knowledge Structure

A model of human cognition is proposed in which all concept properties are context dependent. Concepts are comprised of multiple facets, each motivated by a different functional property. A connectionist implementation is presented in which conceptual modification yields the 'direct inferences' implicit in the structure of a knowledge base.

Processing Aspectual Semantics

A computational treatment of aspect in English is presented. A set of aspectual values is introduced and discussed. The lexical and contextual clues for determining aspectual values are determined. The structure of the entry in the main dictionary supporting aspectual (as well as other types of) analysis is illustrated. A computational framework for an aspectual analyzer is described, in which the latter is conceived as one of a group of specialist analysis modules working together, in a distributed (blackboard-oriented) computational environment.

Conjoint Syntactic and Semantic Context Effects: Tasks and Representations

Syntactic and semantic relatedness were orthogonally varied in a series of experiments by presenting semantically related and unrelated noun and verb targets in phrasal contexts syntactically disposing to nouns or verbs. In addition, the subjects' task, naming or lexical decision on the target, was varied across experiments. In lexical decision, semantic facilitation and inhibition effects depended on context-target match, especially for noun targets. In several experiments, naming data showed only weak semantic effects, which were not modulated by context-target match. However, there was clear evidence of syntactic inhibition in these experiments. Finally, robust semantic facilitation was observed in a naming experiment where contexts and targets were always syntactically matched. Thus, although in some experiments lexical decision appeared to reflect additional text-level integration processes to which naming was immune, the naming task was less consistent across experiments. This contradiction may be resolved if a distinction is introduced between situations where lexical targets are part of the sequence being tested and situations where they are external probes.

Generalization by humans and multi-layer adaptive networks

Generalization of a pattern categorization task was investigated in a simple, deterministic, inductive learning task. Each of eight patterns in a training set was specified in terms of four binary features. After subjects learned to categorize these patterns in a supervised learning paradigm they were asked to generalize their knowledge by categorizing novel patterns. We analyzed both the details of the learning process as well as subjects' generalizations to novel patterns. Certain patterns in the training set were consistently found to be more difficult to learn than others. The subsequent generalizations made by subjects indicate that in spite of important individual differences, subjects showed systematic similarities in how they generalized to novel situations. The generalization performance of subjects was compared to the generalizations that could possibly be generated by a two-layer adaptive network. A comparison of network and human generalization indicates that using a minimal network architecture is not a sufficient constraint to guarantee that a network will generalize the way humans do.
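
To make the comparison network concrete: the sketch below trains a two-layer network (four binary inputs feeding a single category unit through one layer of modifiable weights) on 8 of the 16 possible patterns with the delta rule and then tests it on the 8 novel patterns. The category rule and the training split are invented; the point is only the architecture, not the reported results.

```python
# A two-layer (single weight layer) network: 4 binary inputs, one category unit,
# delta-rule training on 8 patterns, generalization tested on the other 8.
# The category rule and split are assumptions for illustration.
import itertools
import numpy as np

rng = np.random.default_rng(1)
patterns = np.array(list(itertools.product([0, 1], repeat=4)), dtype=float)
labels = (patterns[:, 0] + patterns[:, 1] >= 1).astype(float)   # assumed rule

idx = rng.permutation(16)
train, test = idx[:8], idx[8:]

w, bias, lr = np.zeros(4), 0.0, 0.5
for _ in range(100):                          # delta-rule training epochs
    for i in train:
        out = 1.0 / (1.0 + np.exp(-(patterns[i] @ w + bias)))
        err = labels[i] - out
        w += lr * err * patterns[i]
        bias += lr * err

pred = (patterns[test] @ w + bias > 0).astype(float)
print("generalization accuracy on novel patterns:", (pred == labels[test]).mean())
```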

Pattern-Based Parsing for Word-Sense Disambiguation

In the study of natural language understanding, the reductionist approach has been commonly used by A.I. researchers. Here we develop a technique for parsing based on this approach. We use a set of semantic primitives to represent word meanings and utilize patterns of sentences for mapping sentences onto meaning structures. To assist the parsing process, we develop semantic mappings for primitive sentences, semantic transformations for decomposing complex sentences using function words, and axioms that encode world knowledge. We then explore the application of our approach to the word polysemy problem.

A Model of Meter Perception in Music

A fundamental problem in music cognition is the question of how the listener extracts the music's temporal organization. We describe a model, implemented as a computer simulation, that constructs a hierarchical representation of metric structure that conforms to the requirements of Lerdahl & Jackendoff's (1983) generative theory. The model integrates bottom-up processing of score data with top-down processes that generate predictions of temporal structure, and with rules of organization that correspond to musical intuition. Several examples of the program's output are used to illustrate these processes.

The Role of Mapping in Analogical Transfer

This paper aims to provide a view of the role of analogical mapping in the entire process of analogical problem solving. In many models, analogical mapping is responsible for identifying the analogy between two problems by considering structural and semantic similarities. However, given a non-trivial analogy problem, success of mapping does not always guarantee successful transfer of analogy. In fact, there exist many analogy problems which succeed on analogical mapping but which fail on analogical transfer. While a potential mapping between problems can be generated, that mapping might not be justifiable until transfer from one problem to another is attempted. We present our analogical mapping method and show how it works for inter-domain and intra-domain analogies. We demonstrate several analogy problems in which a mapping can be generated that cannot be transferred. We also compare our method to two general mapping mechanisms, SME and ACME, and show that it performs at least as well as, and sometimes better than, either of those methods.