The study of how children and adults learn mathematics has given rise to a rich set of psychological phenomena involving mental representation, conceptual understanding, working memory, relational reasoning, and problem solving. The subfield of rational number processing and reasoning focuses on the mental representation and conceptual understanding of rational numbers, and of fractions in particular. Fractions differ from other number types, such as whole numbers, both conceptually and in format. Previous research has highlighted the extent to which fractions and other rational numbers pose challenges for children and adults with respect to magnitude estimation, and the persistent misconceptions they invite. The goal of this dissertation is to highlight the distinct ways in which people reason with different types of rational numbers. First, a neuroimaging study provides evidence that fractions yield a pattern of neural activation during magnitude estimation that differs from both decimals and integers (Chapter 2). Second, a set of behavioral studies with adults highlights the affordances of the bipartite format of fractions for relational reasoning tasks (Chapter 3). Finally, a developmental study with pre-algebra students provides evidence for a significant relationship between relational understanding of fractions and algebra performance, specifically algebraic modeling. This work is framed by a view of mathematical notation as a type of conceptual modeling. In particular, decimals have advantages in measurement and in representing magnitude. Fractions, by contrast, have advantages in relational contexts, because their bipartite (a/b) format inherently specifies a relation between the cardinalities of two sets. When mathematics is viewed as a type of relational modeling, rational expressions provide a gateway to more complex mathematical notations and concepts, such as those in algebra.
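The contrast this abstract draws between decimals and the bipartite a/b format can be made concrete with a small illustrative sketch (not part of the dissertation's materials), using Python's standard `fractions` module: a fraction names the two cardinalities it relates, while its decimal equivalent collapses them into a single magnitude.

```python
from fractions import Fraction

# A fraction expresses a relation between two set cardinalities:
# e.g., 3 parts of one set for every 4 parts of another.
ratio = Fraction(3, 4)
assert (ratio.numerator, ratio.denominator) == (3, 4)

# Its decimal equivalent carries the same magnitude...
assert float(ratio) == 0.75
# ...but 0.75 alone no longer records which pair of cardinalities
# produced it: 3/4, 6/8, and 75/100 all collapse to the same decimal.
assert float(Fraction(6, 8)) == float(Fraction(75, 100)) == 0.75

# As magnitudes the three are equal, which is what makes decimals handy
# for measurement; the relational reading ("a parts to b parts") lives
# in the bipartite format itself.
assert Fraction(6, 8) == Fraction(75, 100) == Fraction(3, 4)
```

Note that `Fraction` reduces automatically (`Fraction(6, 8)` stores 3/4), so recovering an unreduced relation such as 6-to-8 requires keeping the raw pair of cardinalities.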

Improving STEM education is a persistent challenge in the United States. Analogy offers a potential tool for improving educational outcomes because analogical comparison increases attention to the structural-relational information that characterizes experts' conceptual representations. The current project investigated analogy-inspired instruction in two lab studies with UCLA undergraduates and one naturalistic classroom study. In Study 1, UCLA undergraduates learned about STEM concepts from lecture videos designed using analogical principles or from control videos, and performance was assessed with an immediate posttest. Performance was similar across both instructional conditions, which may be attributable to the high-ability sample. In Study 2, UCLA undergraduates learned how to solve equation construction problems from videos that represented relational information explicitly in a geometric format, in a carefully matched symbolic format, or in an adaptation of the gold standard of instruction for this topic, JUMP Math. While all lessons improved performance, the geometric and symbolic lessons were most effective. As in Study 1, the high-ability sample demonstrated an ability to learn from all types of instruction. The classroom study investigated the efficacy of analogical instruction in an online class environment in the context of cognitive load theory. UCLA students enrolled in Life Sciences 30A: Quantitative Concepts for Life Scientists (in Winter quarter 2021) learned topics through a structured teacher-directed approach to analogical instruction or a less structured student-directed approach, and exam performance was measured. Students benefited from the teacher-directed approach, and the benefit was especially pronounced for low-performing students. Implications for designing educational interventions for students with lower abilities, and for successful researcher-practitioner collaborations, are discussed.

How do humans acquire relational concepts such as *larger*, which are essential for analogical inference and other forms of high-level reasoning? Are they necessarily innate, or can they be learned from non-relational inputs? Using comparative relations as a model domain, we show that structured relations can be learned from unstructured inputs of realistic complexity, applying bottom-up Bayesian learning mechanisms that make minimal assumptions about innate representations. First, we introduce *Bayesian Analogy with Relational Transformations* (*BART*), which represents relations as probabilistic weight distributions over object features. BART learns two-place relations such as *larger* by bootstrapping from empirical priors derived from initial learning of one-place predicates such as *large*. The learned relational representations allow classification of novel pairs and yield the kind of distance effect observed in both humans and other primates. Furthermore, BART can transform its learned weight distributions to reliably solve four-term analogies based on higher-order relations such as *opposite* (e.g., *larger:smaller* :: *fiercer:meeker*). Next, we present BARTlet, a representationally simpler version of BART that models how symbolic magnitudes (e.g., size or intelligence of animals) are derived, represented, and compared. BARTlet creates magnitude distributions for objects by applying BART-like weights for categorical predicates such as *large* (learned with the aid of empirical priors derived from pre-categorical comparisons) to more primitive object features. By incorporating psychological reference points that control the precision of these magnitudes in working memory, BARTlet can account for a wide range of empirical phenomena involving magnitude comparisons, including the distance effect, the congruity effect, the markedness effect, and sensitivity to the range of stimuli. Finally, we extend the original discriminative BART model to generate (rather than classify) relational instances, allowing it to make quasi-deductive transitive inferences (e.g., "If *A* is larger than *B* and *B* is larger than *C*, then *A* is larger than *C*") and predict human responses to questions such as, "What is an animal that is smaller than a dog?" Our work is the first demonstration that relations and symbolic magnitudes can be learned from complex non-relational inputs by bootstrapping from prior learning of simpler concepts, enabling human-like analogical, comparative, generative, and deductive reasoning.
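The bootstrapping idea in this abstract, learning a two-place relation such as *larger* by starting from weights for a one-place predicate such as *large*, can be illustrated with a deliberately simplified toy model. This sketch substitutes plain logistic-regression SGD for BART's Bayesian weight learning; the feature layout, learning rate, and training sizes are all hypothetical choices for illustration.

```python
import math
import random

random.seed(1)

def make_object(size, noise=0.05):
    # Feature vector: [bias, noisy size feature, irrelevant feature].
    return [1.0, size + random.gauss(0, noise), random.gauss(0, 1)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sgd_step(w, x, y, lr=0.5):
    # One logistic-regression SGD step (a stand-in for BART's
    # Bayesian weight update).
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
    return [wi + lr * (y - p) * xi for wi, xi in zip(w, x)]

# Stage 1: learn the one-place predicate "large" (true size > 0.5).
w_large = [0.0, 0.0, 0.0]
for _ in range(3000):
    s = random.uniform(0, 1)
    w_large = sgd_step(w_large, make_object(s), 1.0 if s > 0.5 else 0.0)

# Stage 2: bootstrap the two-place relation "larger(A, B)", scored on
# the feature-wise difference x_A - x_B, initializing from the "large"
# weights (a crude analogue of BART's empirical priors).
w_larger = list(w_large)
for _ in range(3000):
    sa, sb = random.uniform(0, 1), random.uniform(0, 1)
    diff = [a - b for a, b in zip(make_object(sa), make_object(sb))]
    w_larger = sgd_step(w_larger, diff, 1.0 if sa > sb else 0.0)

def p_larger(size_a, size_b):
    diff = [a - b for a, b in zip(make_object(size_a), make_object(size_b))]
    return sigmoid(sum(wi * di for wi, di in zip(w_larger, diff)))

# Transitive pattern: larger(A, B) and larger(B, C), and also larger(A, C).
assert p_larger(0.9, 0.5) > 0.5 and p_larger(0.5, 0.1) > 0.5
assert p_larger(0.9, 0.1) > 0.5

# Distance effect: widely separated pairs are judged more confidently.
assert p_larger(0.9, 0.1) > p_larger(0.6, 0.4)
```

The point of the sketch is only the two-stage structure: the relation learner never sees relational symbols, yet inherits a useful starting point from the previously learned one-place predicate, and its graded confidence reproduces a distance-effect-like pattern.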