
About

The annual meeting of the Cognitive Science Society is devoted to basic and applied cognitive science research. The conference showcases the latest theories and data from cognitive science researchers around the world. Each year, in addition to submitted papers, invited researchers highlight particular aspects of cognitive science.

Member Abstracts

Pre-Training Leads to a Structural Novelty Effect in Spatial Visual Statistical Learning

We investigated the influence of structural properties of previously learned stimuli on Spatial Visual Statistical Learning. Participants (n=170) were first exposed to a stream of scenes containing only one type of regularity (horizontal or vertical pairs), followed by a stream containing both types of regularities. We found that participants performed above chance for the pairs of the first stream (M=54.7%, SE=1.2, p<0.001, BF=91.89) as well as for the novel type of pair in the second stream (M=55.6%, SE=1.9, p=0.005, BF=4.04), but not for the familiar type of pair in the second stream (M=51.5%, SE=2.0, p=0.465, BF=0.11). This novelty effect indicates interference between similarly structured pairs in the first and second streams of scenes, suggesting representational overlap between pairs of the same orientation.

Individual Differences in Deepfake Detection: Mindblindness and Political Orientation

The proliferation of tools for producing and distributing deepfake videos threatens the integrity of systems of justice, democratic processes, and the general ability to critically assess evidence. This study sought to identify individual differences that predict one’s ability to detect these forgeries. It was hypothesized that measures of affect detection and political orientation would correlate with performance on a deepfake detection task. Within a sample (N = 173) of college undergraduates and participants from Amazon’s Mechanical Turk, affect detection ability was shown to correlate with deepfake detection ability, r(171) = .73, p < .001, and general orientation to the political left was shown to correlate with deepfake detection ability, r(171) = .42, p < .001. The results of this study serve to identify populations who are particularly susceptible to deception via deepfake video and to inform the development of interventions that may help defend against attempts to influence them.

The Anatomy of Discourse: Linguistic Predictors of Narrative and Argument Quality

Narratives (sequences of purposively related concrete situations) and arguments (reasoning and conclusions in an attempt to persuade) are distinct cornerstones of human discourse. While theories of their linguistic structures exist, it is unclear which theorized features influence perception of narrative and argument quality. Furthermore, differences in their usage over time and across formal versus informal media remain unexplored. Thus, we use an original dataset of news and Reddit discourse (consisting of >10,000 clauses), annotated for clause-level discourse elements (e.g., generic statements vs. events; Smith, 2003) and their coherence relations (e.g., cause/effect; Wolf & Gibson, 2005). We identify the features that correspond to differing perceptions of narrative and argument quality across multiple dimensions. Since the documents cover marijuana legalization discourse during a period of massive attitude shift in the U.S. (2008-2019), we also examine changes over time in discourse structure within this rapidly evolving sociopolitical context.

Children’s reasoning about hypothetical interventions to complex and dynamic causal systems

Across 3 studies, we investigated children's ability to consider hypothetical interventions to complex and dynamic causal systems. Five- to 7-year-olds learned about novel food chains and were asked about the effects of the removal of one species on others in the food chain. In Studies 1 (n = 72) and 2 (n = 72), 6- and 7-year-olds made correct inferences about the effects on remaining species, but performed better when reasoning about direct predators or prey than indirectly-connected species. Five-year-olds' performance was at chance across all question types. In Study 3 (n = 65, target n = 72), we are currently investigating whether 5-7-year-old children’s performance improves when given more background information on the causal dynamics of the food chains. The results indicate that hypothetical thinking about dynamic causal systems develops between 5 and 7 years. This ability may be leveraged for teaching science concepts.

Testing an interference-based model of working memory in children with developmental language disorder and their typically developing peers

Children with Developmental Language Disorder (DLD) have deficits in verbal and nonverbal processing relative to typically developing (TD) peers. We examined working memory in DLD relative to age-matched TD peers (9-13 years) under the serial-order-in-a-box – complex span model. This model posits a time-based mechanism, Free Time, that governs how interference affects processing performance. Results showed that Free Time was positively associated with accuracy when recall and interference stimuli had verbal features (b = 0.00; stat = 3.11; p < .01), and combined verbal and nonverbal features (b = 0.00; stat = 3.05; p < .01). Group differences in this relationship were evident when recall stimuli had verbal features regardless of interference stimuli features (b = -0.00; stat = -3.66; p < .001; b = 0.00; stat = 2.97; p < .01). Findings suggest a greater role of Free Time for verbal than nonverbal content, which varies depending on participant characteristics.

Does surprisal affect word learning? Evidence from seven languages

What makes a word easy to learn? We know that early-learned words tend to be frequent and to name concrete referents. Here we investigate a novel predictor: a word's surprisal, i.e., how unpredictable it is given its context. We computed surprisal for words in child-directed speech and used it to predict age of acquisition (AoA) while controlling for known predictors such as concreteness, frequency, and the mean length of the utterances in which the word appeared. Predicates with greater surprisal (i.e., those less predictable from context) were learned later. Noun learning was not dependent on surprisal. Surprisal was a powerful mediator of both frequency and concreteness, reducing their ordinarily strong effects on AoA. Differences in surprisal across languages also proved moderately predictive of differences in AoA across languages: translation equivalents with higher surprisal had later AoAs.
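For reference, the quantity presumably follows the standard information-theoretic definition of surprisal, the negative log probability of a word given its preceding context (the abstract does not specify the language model used to estimate it):

```latex
% Standard definition of surprisal for word w_t given its preceding context
\mathrm{surprisal}(w_t) = -\log_2 P\!\left(w_t \mid w_1, \ldots, w_{t-1}\right)
```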

Does cognitive dissonance depend on self-concept? 2-year-old children, but not 1-year-olds, show blind choice-induced preferences

As adults, we not only choose what we prefer, but we also tend to adapt our preferences post hoc to fit our previous choices, even when those choices were blind. This is thought to result from cognitive dissonance, as an effort to reconcile our choices and preferences. It has been argued to rely on the self-concept, but it has also been found in preschoolers and monkeys. In a preregistered study, we therefore investigated when blind choice-induced preferences emerge and whether they are related to self-concept development in the second year of life. Results from N=200 children aged 16-36 months provide strong evidence that blind choice-induced preferences develop between 1 and 2 years (BF10=12.5). Two-year-olds avoided an object that they had previously discarded in a blind choice, whereas 1-year-olds did not yet show any such preferences. Further, we found substantial evidence against a relation with measures of self-concept development (BF10=0.2).

The N400 event-related potential component reflects a learning signal

Many studies of the N400 event-related brain potential (ERP) component have tried to understand its functional significance. Recently, the N400 was modeled in a neural network as the update of a probabilistic representation of sentence meaning. The change in activation induced by a word was proposed to reflect a semantic prediction error that drives adaptation of the model’s connections (Rabovsky et al., 2018), which implies that more negative N400 amplitudes should lead to greater adaptation. By experimentally manipulating expectancy in a sentence reading task (n=33), we show that this manipulation influenced not only N400 amplitudes but also implicit memory: reaction times in a perceptual identification task were significantly faster for previously unexpected words. Additionally, the difference in N400 amplitude correlated with the reaction time benefit for unexpected compared to expected items. These findings support the interpretation of the N400 as a prediction error and learning signal.

The function of function: People use teleological information to predict prevalence

Folk-biological concepts are sensitive to both statistical information about feature prevalence (Hampton, 1995; Kim & Murphy, 2011; Rosch & Mervis, 1975) and teleological beliefs about function (Atran, 1995; Keil, 1994; Kelemen, Rottman, & Seston, 2013; Lombrozo & Rehder, 2012), but it is unknown how these two types of information interact to shape concepts. In three studies (N = 438) using novel animal kinds, we found that information about prevalence and teleology inform each other: People assume that common features are functional, and they assume that functional features are common. However, people use teleological information to predict the future distribution of features across the category, despite conflicting information about current prevalence. Thus, both prevalence information and teleological beliefs serve important conceptual functions: Prevalence information encodes the current state of the category, while teleological beliefs provide a means of predicting future category change.

Eye movement consistency in global-local perceptual processing predicts schizotypy

Here we examined whether eye movement measures in global-local perceptual processing tasks, where abnormalities are typically found in individuals with schizophrenia, could be used to predict schizotypy through Eye Movement analysis with Hidden Markov Models (EMHMM). Using both multiple regression analysis and a Gaussian process classifier to predict schizotypy, we found that, in addition to longer response times in contour integration, less consistent eye fixations to locate a stimulus and more consistent subsequent fixations to start engaging local processing in the embedded figures task predicted high schizotypy. These effects may be related to reduced top-down attention control due to deficient global processing and to an enhanced local processing bias, respectively. In addition, performance in embedded figures could further enhance classification accuracy when used in conjunction with the above predictors, suggesting the multifactorial nature of the identification problem. These predictors may be important endophenotype markers for schizotypal personality.

Characterizing Artistic Style Based on the Entropy Rate of Imaginary Stroke Sequences

Recent progress in neuroaesthetics has shown that when people appreciate a piece of visual artwork, they can also reconstruct imaginary stroke sequences from the static brushstrokes. This subjective feature is consistent with the general art practice of copying works of art: imitators can interpret the original creative process of practitioners and physically retrace their creative tracks. However, this feature has not been measured to characterize artistic styles. This paper designs an experiment to measure this subjective characteristic. Thirty-four pieces of calligraphy and painting in different styles were chosen as materials, and 130 participants were invited to draw their imaginary stroke sequences on the chosen pieces. The obtained traces are reinterpreted as AOI (Area of Interest) sequences and modeled as Markov chains, and the entropy rate with standard error is calculated to measure their randomness. Results show that the measurement can differentiate artistic styles.
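As a rough sketch of the quantity described (not the authors' pipeline; the AOI coding, toy sequence, and all names below are illustrative), the entropy rate of a first-order Markov chain fit to an observed AOI sequence can be estimated as follows:

```python
# Illustrative sketch: estimate the entropy rate of an AOI sequence modeled as a
# first-order Markov chain, H = -sum_i pi_i sum_j P_ij log2 P_ij.
import numpy as np

def entropy_rate(aoi_sequence, n_aois):
    """Estimate the entropy rate (bits/transition) from one observed AOI sequence."""
    counts = np.zeros((n_aois, n_aois))
    for a, b in zip(aoi_sequence[:-1], aoi_sequence[1:]):
        counts[a, b] += 1.0                      # transition counts
    row_sums = counts.sum(axis=1, keepdims=True)
    P = np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)
    pi = counts.sum(axis=1) / counts.sum()       # empirical occupancy as stationary dist.
    logP = np.where(P > 0, np.log2(np.where(P > 0, P, 1.0)), 0.0)
    return float(-np.sum(pi[:, None] * P * logP))

# Toy AOI sequence over three areas of interest (0, 1, 2)
print(entropy_rate([0, 1, 2, 1, 0, 1, 2, 2, 0], n_aois=3))
```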

Distinctive Features of Emotion Concepts

Some theories of abstract word representation posit a role for affective information; however, little has been done to delineate how this holds for the class of emotion concepts relative to other types of abstract concepts. Work using distributional semantic models has shown that emotion words tend to co-occur with other highly affective words, complicating the picture of how affective information contributes to the grounding of abstract concepts generally and emotion concepts specifically. Here, a novel set of data from 180 participants, collected using a feature generation paradigm with emotion, non-emotion-related abstract, and concrete stimuli, is leveraged to test how the distributional properties of emotion concepts and their associated features differ from those of other abstract concepts. Using co-occurrence statistics and normative ratings for affect, results show that the importance of affect differs between emotion and non-emotion-related abstract words.

Persian collective nouns enhance ensemble size perception

Our visual system can extract the similarities of artificial or natural groups of items through ensemble perception. Cognitive linguistics, in turn, postulates that shifts of the mental image schema used to describe a scene from global to local, and vice versa, are common in daily conversation. Here we hypothesized that global or local priming would modulate participants' thresholds in a mean discrimination task (MDT). Ten right-handed, 14-year-old participants went through an ABBA within-subject counterbalanced design comparing the priming effects of 4 pairs of collective and singular nouns on a subsequent ensemble size perception task (MDT): a display of circles of random sizes was followed by a mean discrimination task in which participants decided which of the test circles matched the mean circle size of the previous display. Our preliminary results show that Persian collective nouns, compared to singular nouns, decreased the threshold for discriminating mean circle size in the final display.

Increasing the duration of working memory in dogs with visual cues

Working memory duration in dogs was long assumed to be around 27 seconds; however, a study on a group of dogs revealed that they were able to maintain spatial memory of an object for up to 240 seconds. In our study, we tested 50 dogs of multiple breeds. In Experiment 1, a small ball was placed in one of 3 wooden boxes in front of the dogs. The dogs had retention intervals of 1, 2, 3, and 4 minutes consecutively, and were then asked to search for the ball. In Experiment 2, the same ball was placed in one of 3 coloured boxes (green, blue, and purple), colours that are distinguishable to dog vision. The retention intervals were kept the same. Our results revealed that the addition of colour increased the dogs' likelihood of finding the ball, which suggests an extension of the duration of working memory.

The Role of Attention in Learning through Overheard Speech

Children can learn words through overhearing (i.e., without being directly addressed; Akhtar et al., 2001). Although research shows that children attend to experimenters when new information is presented in an overheard context (Akhtar, 2005), the role attention plays in retention is understudied. In prior studies, spacing presentations in time facilitated an effortful retrieval process that strengthened long-term retention of the newly learned word (Vlach et al., 2012). The current study examined whether monolingual children retain novel labels for novel shape categories in an overhearing context and the role that attention plays in successful retention. Preliminary results (N = 17) suggest that children, regardless of at-chance performance in the task, show shorter attention durations when objects are presented spaced out in time rather than in massed succession. This study has implications for how children learn from indirect input and the role attention plays in this unique learning environment.

Hippocampal replay as context-driven memory reactivation

Hippocampal replay is not a simple recapitulation of recent experience, with awake replay often unrolling in reverse temporal order upon receipt of reward, in a manner dependent on reward magnitude. These findings have led to the proposal that replay serves to update values in accordance with reinforcement learning theories. We argue that there may be a more parsimonious account of these observations involving simple associations between contexts and experiences: During wakefulness, animals associate experiences with the contexts in which they are encoded, in a manner modulated by the salience of each experience. During periods of quiescence, replay emerges when contextual cues trigger a cascade of reactivations driven by the reinstatement of each memory’s encoding context, which in turn facilitates memory consolidation. Our theory unifies numerous replay phenomena, including findings that reinforcement learning models fail to account for.

Let’s talk structure: the positive consequences of structural representations

How should we represent social categories? Essentialism, which posits an internal category essence, has negative consequences such as group-based generalization and intolerance of nonconformity. Structural representations, which consider categories as situated in a larger context, could be more constructive. When considering a neutral context, adults who learned a structural (food availability) cause for a novel group’s diet, versus a biological (allergies, digestion) or cultural (taboo) cause, generalized in a context-sensitive manner, considered nonconformity to be more possible and acceptable, and suggested intervening on the structural context rather than the group to change the property. When considering social stratification, adults who learned a structural (discrimination) cause for a novel group working a low-status job, versus a biological (physically well-suited) or cultural (traditions/values) cause, showed more context-sensitive generalization, considered the present disparity to be more unacceptable, and suggested more structural interventions. Structural representations may consequently be a more constructive alternative to essentialism.

Investigating the nature of infants' lexical speed of processing

Children vary widely in their lexical development. These differences have been shown to persist into later ages and can even be found in adults. One parameter often found to predict vocabulary size is speed of processing, measured as the reaction time in familiar word recognition tasks. However, the underlying nature of speed of processing is still unclear: is it a purely linguistic phenomenon, or is it tied to the general cognitive abilities of the child? Our study aims to shed light on the nature of speed of processing by testing 17-month-olds in a word learning experiment and assessing their reaction times during word recognition for both newly acquired and familiar words, as well as their visual reaction times. Our results can thus disentangle how broad or narrow lexical speed of processing is and help us better understand the origin of its link with language skills across the lifespan.

Investigating Scientific Inquiry Skills from Process Data

Process data recording students’ interactions with digital assessment items are available in digital educational assessments and have become a focus for cognitive scientists analyzing inquiry skills during problem solving. This study examines inquiry behaviors involving the use of tools (i.e., resource tabs) in short response construction items from the 2019 National Assessment of Educational Progress science assessment. We visualized the occurrence times and durations of response construction behaviors and tool use behaviors, and conducted correlation analyses and mixed-effects regressions between the count (and duration) of tool use behaviors and item scores. The results reveal that tool use behaviors are significantly associated with item scores and with the probability of finishing the whole problem block; increasing tool use durations or counts increases the chances of getting higher scores, but it also increases the chances of not finishing the block. This study exemplifies how to use process data to investigate scientific inquiry skills.
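A minimal sketch of the kind of mixed-effects regression described, assuming hypothetical column names, toy data, and a linear (rather than logistic) model for simplicity; this is not the study's data schema or analysis code:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical per-item records: score, tool-use count, and tool-use duration (s).
df = pd.DataFrame({
    "student_id":    [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
    "item_score":    [0, 1, 1, 1, 2, 1, 0, 0, 1, 2, 1, 2],
    "tool_count":    [0, 2, 3, 1, 4, 2, 0, 1, 2, 5, 3, 4],
    "tool_duration": [0.0, 20.5, 35.2, 10.1, 48.0, 22.3, 0.0, 5.4, 18.9, 60.2, 30.7, 44.1],
})

# Fixed effects of tool-use count and duration on item score,
# with a random intercept per student.
model = smf.mixedlm("item_score ~ tool_count + tool_duration",
                    data=df, groups=df["student_id"])
print(model.fit().summary())
```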

Learning to enact photosynthesis: Towards a characterization of the way academic language mediates concept formation

Forming new science concepts, such as photosynthesis, is one of the ways students transition from everyday thinking to a more scientific worldview. Academic language (AL) is a central mediator of this transition. However, it is unclear how the features of AL (such as nominalisation and encapsulation), and classroom dynamics based on AL, promote new concept formation. We present an analytical framework to explore the way AL mediates the transition to scientific thinking, extending the simulation model of language understanding. We use this framework to develop a distributed cognition model of classroom dynamics, where the teacher first develops a standard mental simulation from textbook descriptions. She then uses this standard simulation to nudge students' individual mental simulations towards a 'gist simulation', which roughly captures the textbook model. We are currently examining how this model, along with an interactive media system to teach AL, could advance our understanding of science classroom dynamics.

Bridging Executive Function and Metacognition through Post-Error Slowing

Executive function (EF) and metacognition are two core concepts in cognitive psychology, yet research linking EF and metacognition directly remains rare. It has been suggested that error monitoring can be used as a window onto the association between EF and metacognition (Roebers, 2018). Error monitoring is indexed by post-error slowing (PES), a phenomenon describing a delayed response in actions after error commissions. The present study uses hierarchical multiple regression to investigate whether EF and metacognition can both predict PES. Individual constructs of EF are measured by four computerised tasks. Metacognition is assessed by a simplified version of the Self-Regulated Learning Questionnaire (Dowson & McInerney, 2004). A total of 456 participants (mean age = 11.9 years, SD = 0.92) are included in the final analysis. Results indicate that two EF constructs, inhibition and planning, predict corresponding PES in inhibition and planning tasks. Only the regulation component of metacognition predicts PES in the planning task.

If you think your action was erroneous, you will reject the outcome you actually wanted: a case of reverse choice blindness

In choice blindness (CB) experiments participants often accept a manipulated outcome as their actual choice. In a typical CB experiment the manual actions that participants perform are always correct (pointing, writing, etc.), while the outcome is mismatched. However, what would happen if an error was induced at the motor level, but the outcome nevertheless remained correct? We investigated this by having participants drag a mouse cursor across the screen to the face they found the most attractive, while we manipulated either the outcome (classic CB), or the cursor (forced motor deviation), or sometimes both. Interestingly, what we found was that when the cursor was manipulated but not the outcome, the motor ‘wrongness’ would override the goal ‘rightness’, and participants ended up rejecting the outcome they actually wanted. We will discuss the implications of this new reverse choice blindness effect for theories of self-monitoring, agency and preference change.

Experience with Equations in Sequence Promotes Procedural Fluency

Mathematics, by its very nature, is rife with patterns. However, students frequently treat mathematics as a series of isolated and unlinked exercises. The current study examined whether experience with extending mathematical patterns affected adults’ ability to solve equations that involved patterns and/or to reason about mathematical relationships in new contexts. Participants who were given 13 trials of pattern extension experience then went on to demonstrate both more efficient problem-solving (i.e., faster response times) and more accurate problem-solving at posttest, relative to individuals who were given an equivalent amount of explicit instruction related to solving the equations. However, there was no difference between the groups in the ability to abstract the structure of the underlying mathematical relationships. These findings suggest that patterning tasks like those used in this study may be useful in supporting math performance.

Understanding Image Sequences Via Narrative Sensemaking

When humans make sense of the world, they do not understand it as a cascade of observations; rather, from a cascade of observations, humans assemble a holistic narrative, connecting their observations using prior knowledge and inference. The final product of observations connected with prior knowledge and inference may be modeled as a knowledge graph. The process of sensemaking described above is one we seek to emulate in the realm of image understanding through a computational system. Starting from observed objects and relationships in a sequence of images (from Visual Genome Scene Graphs), the system we are building consults a commonsense knowledge network (ConceptNet), over-generates a set of hypothesized narrative-based connections between observations, and evaluates and trims its hypotheses through Multi-Objective Optimization to create a consistent set. The resultant knowledge graph reflects the system’s consistent speculations, beyond the directly observable, of what is happening in, and across, the images.

Machine Learning Models for Predicting, Understanding, and Influencing Health Perception

Lay perceptions of medical conditions and treatments determine people’s health behaviors, guide biomedical research funding, and have important consequences for both individual and societal wellbeing. Yet it has been nearly impossible to quantitatively predict lay health perceptions for hundreds of everyday diseases due to the myriad psychological forces governing health-related attitudes and beliefs. Here we present a data-driven approach that uses text explanations on healthcare websites, combined with large-scale survey data, to train a machine learning model capable of predicting lay health perception. We use our model to analyze how language influences health perceptions, interpret the psychological underpinnings of health judgment, and quantify differences between different descriptions of disease states. Our model is accurate, cost-effective, and scalable, and offers researchers and practitioners a new tool for studying health-related attitudes and beliefs.
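A highly simplified sketch of the general approach (website text in, lay perception ratings out); the toy descriptions, ratings, and the TF-IDF plus ridge regression pipeline are illustrative assumptions, not the authors' actual model:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

# Hypothetical disease descriptions and mean survey ratings (1-10 perceived severity).
descriptions = [
    "a common, short-lived viral infection of the upper airways",
    "a chronic condition requiring lifelong insulin management",
    "a rare, aggressive cancer with limited treatment options",
]
perceived_severity = [2.1, 5.8, 9.3]

# Text features -> regression model predicting lay perception from language.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), Ridge(alpha=1.0))
model.fit(descriptions, perceived_severity)
print(model.predict(["a mild seasonal allergy managed with antihistamines"]))
```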

Beware of Strangers: Dogs’ Empathetic Response to Unfamiliar Humans

Empathy is a complex cognitive ability once thought to be unique to humans (Batson, 2003). However, studies suggest dogs can exhibit empathetic behaviors towards owners in distress (Sanford et al., 2018; Bourg et al., 2020). The current study examines the empathetic capacities of dogs presented with a trapped stranger crying or humming behind a closed see-through door. Opening behavior and physiological markers of stress, including heart rate variability (HRV) and coded stress behaviors, were measured. Unlike in past research, dogs did not open more often or faster for the distressed stranger than for the non-distressed stranger. This fits with findings on the importance of familiarity for empathetic responding (de Waal, 2008). Additionally, HRV and owner-reported fear in the crying condition were lower for dogs that opened than for those that did not, suggesting that, like children, dogs must have low personal distress to show empathy (Eisenberg et al., 1996).

Lightness and darkness are mentally represented during language processing

Growing evidence from the sentence-picture verification paradigm suggests rapid integration of implied perceptual context during sentence processing. This evidence, however, is usually criticized for not providing a strong test of the mechanisms underlying such integration. We addressed this question in relation to situations describing different sources of light. We hypothesized that if comprehenders simulate the perceptual state of affairs implied by a sentence, then response times should be faster only when pictures are fully (not partially) congruent with sentence content. Participants (N=200) read sentences like “The sun/the moon is shining onto a horse” followed by pictures depicting mentioned (Experiment 1) or non-mentioned (Experiment 2) objects in either a matching or a mismatching perceptual context (sunny/moonlit background). Verification times (analyzed with linear mixed models) were shorter only when the mentioned (but not non-mentioned) objects matched the perceptual context implied by the sentence. The results are discussed in support of the simulation account.

The Role of Categories in the Formation of Liking Evaluations

We examine when, why, and how people use category-based knowledge in determining how much they will like an object based on memories. We find that people rely on category-based knowledge when making liking evaluations of items from memory and rely on such knowledge more when items are typical of a category. We suggest that people rely on categories to fill in information that memories leave out and use typicality as a cue for the likelihood that category-based knowledge will be a good substitute for knowledge about an item. In 2 studies, using products and color patches as stimuli, we find that people rely on category liking evaluations in forming liking evaluations of specific items from memory and that they do so more when an item is typical (vs. atypical) of that category. This work shows that category-based knowledge can play an important part in preference formation.

Effects of lifetime knowledge on language processing in German and English

In two self-paced reading experiments, we investigated the integration of knowledge about cultural figures during language processing. Experiment 1 investigated how contextually-defined lifetime information (dead/alive) was integrated with temporal verb morphology (English Present Perfect and Past Simple). Experiment 2 investigated how long-term knowledge about a cultural figure, prompted by their picture, is integrated with two types of linguistic input in German: temporal phrases containing a year, and biographical information (e.g., Joaquin Phoenix: "In the year 2013 / *1960, I starred in the film ‘Her’ / *‘Psycho’"). Experiment 1 revealed longer reading times and higher rejection rates when life status mismatched tense (i.e., dead – Present Perfect, alive – Past Simple). In Experiment 2, shorter reading times and higher accuracy rates emerged in conditions containing lifetime-year mismatches. No effects were found for biographical information violations. Together, the two experiments provide cross-linguistic evidence that knowledge of a referent’s lifetime is integrated during language processing.

The Octopus: Implications for Cognitive Science

Octopuses challenge many common assumptions and received views about the relationship between the nervous system, cognition, and the mind. Despite the major anatomical and functional differences between the octopus and vertebrate neurocognitive systems—as well as the divergent evolutionary histories of these clades—octopuses have vertebrate-like cognitive and behavioural capacities and even display aspects of putative “mentality” previously thought to be applicable (if at all) only to vertebrates. Octopuses thus raise significant implications for scientific and philosophical studies of the mind, brain, and cognition, e.g., regarding the mechanisms and substrates of cognition, the functions and structure of consciousness, and the implementation of various cognitive routines. Furthermore, the evolutionary and ecological factors that influenced the development of octopus cognition and behaviour warrant a reexamination of presuppositions about how intelligence arises. This presentation provides an overview of some of the implications octopuses raise for cognitive science.

Modeling Capacity-Limited Decision Making Using a Variational Autoencoder

Due to information processing constraints and cognitive limitations, humans must form limited representations of complex decision making tasks. However, the mechanisms by which humans generate representations of task-relevant stimuli remain unclear. We develop a model that seeks to account for the formation of these representations using a β-variational autoencoder (β-VAE) trained with a novel utility-based learning objective. The proposed model forms latent representations of decision making tasks that are constrained in their information complexity. We show through simulation that this approach can account for important phenomena in human economic decision making tasks. This model provides a method of forming task-relevant representations that can be used to make decisions in a human-like manner.
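For context, the standard β-VAE objective on which such a model builds (the utility-based term added here is not specified in the abstract and is omitted):

```latex
% Standard beta-VAE objective: reconstruction term minus a beta-weighted KL penalty
% that constrains the information complexity of the latent representation z.
\mathcal{L}(\theta, \phi; x) =
  \mathbb{E}_{q_\phi(z \mid x)}\!\left[ \log p_\theta(x \mid z) \right]
  - \beta \, D_{\mathrm{KL}}\!\left( q_\phi(z \mid x) \,\|\, p(z) \right)
```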

Overcoming Error: Association between Attentional Reorientation and Vocabulary Size

According to prediction-based theories, prior learning creates expectations for subsequent learning events. As they learn words, individuals develop accurate and inaccurate expectations about word meaning. Existing research shows that people who shift their gaze to the referent of words more quickly have larger vocabularies. This shifting reflects the processing of accurate expectations about the referents of these words. Is the speed of processing inaccurate expectations also related to vocabulary size? To examine this question, adults learned eight novel word-object mappings during cross-situational word learning (CSWL). The mappings were either consistent or inconsistent with a prior familiarization phase. Early in CSWL, hearing inconsistent words violated expectations about the referent of those words. Shifting from a distractor to the target of an inconsistent word during CSWL was significantly associated with productive and receptive vocabulary. These findings are consistent with prediction-based theories, in which individuals use prediction errors to adjust their expectations.

Effects of perceptual and emotional imagery of food names on word recognition memory: four behavioral experiments

The aim of this study was to identify the effects of perceptual (visual and olfactory) and emotional imagery of food names on word recognition memory. First, we asked healthy Japanese participants to imagine visual (Experiment 1), olfactory (Experiment 2), emotional (Experiment 3), or preferential features (Experiment 4) of presented words associated with foods and drinks (imagining condition), while we also asked them to passively read presented words (reading condition). Second, they judged whether each word had been previously presented (word recognition memory task). Results showed that accuracy rates in the imagining condition were significantly higher than those in the reading condition in the word recognition memory tasks of all experiments. Our findings suggest that perceptual and emotional imagery of food names facilitates word recognition memory.

Recovering human category structure across development using sparse judgments

Multidimensional scaling (MDS) has provided insight into the structure of human perception and conceptual knowledge. However, MDS usually requires participants to produce large numbers of similarity judgments, leading to prohibitively long experiments for most developmental research. Here we propose a method that combines simple grouping tasks with recent neural network models to uncover participants’ psychological spaces. We validate the method on simulated data and find that it can uncover the true structure even when given heterogeneous groupings. We then apply the method to data from the World Color Survey and find that it can uncover language-specific color organization. Finally, we apply the method to a novel developmental experiment and find age-dependent differences in conceptual spaces.
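For contrast, the data-hungry baseline the sparse-grouping method is meant to replace is ordinary MDS run on a full pairwise dissimilarity matrix; a minimal sketch with a hypothetical matrix (not the study's data):

```python
import numpy as np
from sklearn.manifold import MDS

# Hypothetical pairwise dissimilarities among four items (symmetric, zero diagonal).
D = np.array([
    [0.0, 0.2, 0.9, 0.8],
    [0.2, 0.0, 0.8, 0.9],
    [0.9, 0.8, 0.0, 0.3],
    [0.8, 0.9, 0.3, 0.0],
])

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(D)   # 2-D coordinates of the recovered psychological space
print(coords)
```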

Bayesian Experimental Design for Intractable Models of Cognition

Bayesian experimental design (BED) is a methodology to identify designs that are expected to yield informative data. Recent work in cognitive science considered BED for cognitive models with tractable and known likelihood functions. However, as cognitive models have become more complex and richer, their likelihood functions are often intractable. In this work, we leverage recent advances in BED for intractable models and demonstrate their application on a set of multi-armed bandit tasks. We further propose a generalized latent state model that unifies two previously proposed models. Our experiments show that data gathered using optimal designs results in improved model discrimination and parameter estimation, as compared to naive designs. Furthermore, we find that increasing the number of bandit arms increases the expected information gain in experiments.
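As background, the design utility that BED typically maximizes is the expected information gain, i.e., the expected reduction in posterior entropy over model parameters (equivalently, the mutual information between parameters and prospective data); estimating it when the likelihood is intractable is the contribution described above and is not shown here:

```latex
% Expected information gain of a design d over model parameters theta
\mathrm{EIG}(d) = \mathbb{E}_{p(y \mid d)}\!\left[
    H\!\left[\, p(\theta) \,\right] - H\!\left[\, p(\theta \mid y, d) \,\right]
\right]
```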

Using videos and animations to study zebra finch social behaviors

Animals socially interact with each other using complex behavioral displays. To better understand the dynamics of these interactions, videos and animations on computer screens have been used in place of one of the interacting animals. While both offer more experimental control, whether responses to such stimuli are similar to responses to live animals remains poorly understood. Here, using the song of the male zebra finch, a songbird, as an example of a complex behavioral display, we examine responses to videos and animations of female zebra finches. We show that male zebra finches sing to videos and animations of female zebra finches, and that the properties of these songs are similar to songs directed towards live female zebra finches, especially for longer videos. Overall, these results highlight the potential of using videos and animations to better understand social interactions involved in the production of complex behavioral displays, like birdsong.

Learning ecological and artificial visual categories: rhesus macaques, humans, and machines

Comparative studies of categorization using non-human animals are difficult to conduct because studies of human categorization typically rely on verbal reports. Moreover, animal performance may reflect reinforcement learning, whereby discrete features act as discriminative cues for categorization. We trained humans, monkeys, and computer algorithms – an associative, feature-driven algorithm and a neural network – to classify four simultaneously presented visual stimuli, each belonging to a different perceptual category, in a specific order. There were two sets of categories: naturalistic photographs and close-up sections of paintings with distinctive styles. All living subjects classified stimuli better than predicted by chance or by feature-driven learning alone, even when stimuli changed on every trial. However, humans more closely resembled monkeys when classifying the more abstract painting stimuli than photographic stimuli. This points to a common, non-associative, non-linguistic classification strategy in both species, one that humans can rely on in the absence of linguistic labels for categories.

Core knowledge objects in reasoning and language use for highly abstract inductive tasks

Core knowledge concepts such as object behavior principles provide a rich inventory of primitives for thinking and learning in the natural world. However, it remains unexplored how these concepts are reused for problem-solving and communication in highly abstract domains. We analyze a large-scale natural language study drawing on the Abstraction and Reasoning Corpus (ARC), a set of highly abstract visual tasks in which solvers construct outputs from input grids according to an inferred pattern. ARC explicitly incorporates core knowledge principles without any real-world objects. In the study, subjects solved ARC tasks and communicated the inferred patterns via written explanations to other subjects, who attempted to solve the tasks using only those explanations. We examine how subjects solve, communicate, and interpret these explanations, and we show that subjects use fundamentally abstract core knowledge properties (object cohesion and contact causality) to reason about, understand, and communicate the inference tasks with language.

Calibration information reduces bias during estimation of factorials: A (partial) replication and extension of Tversky and Kahneman (1973)

Tversky and Kahneman (1973) found that, under time pressure, people massively underestimated the value of 8! (correct value 40,320), and that this bias was mitigated for participants presented with the descending order (8x7x6x5x4x3x2x1; Median=2,250) relative to the ascending order (1x2x3x4x5x6x7x8; Median=512). In a first-ever replication (N=140), we also found predominant underestimation, but no significant between-subjects descending vs. ascending order effect. However, when participants then estimated the opposite order, we reproduced this order effect within-subjects. Finally, participants received calibration information (the correct value of 6! or 10!) and again estimated both orders of 8!. Participants who received 10! made more accurate estimates for 8! (Median=38,000), which did not differ statistically from the correct value. Participants who received 6! still grossly underestimated (Median=2,678.5), despite 8! being closer to 6! than to 10! in linear and log units. Thus, we surprisingly found the classic factorial estimation bias only within-subjects, and provide evidence for how calibration can reduce it.
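The factorial values at issue are easy to verify directly (a trivial check, not part of the study's materials):

```python
import math

print(math.factorial(8))    # 40320   (the quantity participants estimated)
print(math.factorial(6))    # 720     (calibration value given to one group)
print(math.factorial(10))   # 3628800 (calibration value given to the other group)
```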

No evidence for attraction to consonance in budgerigars (Melopsittacus undulatus) from a place preference paradigm

Tone combinations with small integer frequency ratios are perceived as pleasant and are referred to as “consonant”. Human consonance preference has been connected to a preference for sounds, such as the human voice, that inherently contain consonant intervals via the harmonic series. As such, we might expect other species with harmonic vocalizations to also show attraction to consonance. We tested budgerigars and humans in a place preference test. Subjects could freely spend time with consonant or dissonant versions of a piano melody. Time spent with each stimulus type was used as a measure of attraction. Human females spent more time with consonant stimuli, but males showed no preference. In budgerigars, neither sex showed a preference. This did not change when the experiment was repeated with consonant and dissonant versions of budgerigar sounds. The amount of nonlinearity in budgerigar vocalizations can explain these results, which has relevant implications for future cross-species consonance studies.

‘Kindergarten’ versus ‘Gartenkinder’: EEG-evidence on the effects of familiarity and semantic transparency on German compounds

This study investigated the effects of semantic transparency and familiarity on the lexical processing of German compounds. We measured event-related potentials (ERPs) while participants saw compound triplets sharing the same head, like /garten/ (‘garden’): (a) semantically transparent compounds, /Gemüsegarten/ (‘vegetable garden’), (b) semantically opaque compounds, /Kindergarten/ (‘kinder garden’), and (c) possible but nonexistent novel compounds, /Gesichtsgarten/ (‘face garden’). Participants made nonword decisions to compounds with a scrambled constituent, /Semüregarten/. ERPs for semantically transparent and opaque compounds were alike, irrespective of whether the manipulation was on the modifier or the head. By contrast, novel compounds showed strong N400 effects relative to both transparent and opaque compounds. These findings indicate that, during compound processing, the brain differentiates between familiar and novel, but not between transparent and opaque compounds. Compound processing independent of semantic compositionality differs from that found in other Indo-European languages, stressing the importance of cross-language comparisons.

Mapping between numerical and non-numerical magnitude information: An observational study of the integration and interconversion between magnitudes and formats in Colombian children

Improving magnitude processing during early childhood is essential for the further development of numerical cognition and mathematical skills. In this regard, there is an intense debate about whether numbers are processed using a number-specific system or a general magnitude processing system. Additionally, the available evidence focuses on interference between magnitudes rather than on the translation and integration process. This study aims to analyze the ability to map between non-symbolic and symbolic numerical information and to integrate numbers and space. For this purpose, we designed an observational between-subjects study using two cross-format comparison tasks involving the integration and interconversion of magnitudes and formats. For each task, we will assess the relation between ratio and performance, and the discrimination thresholds, in 8- and 12-year-old Colombian children to explore the developmental trajectory of these numerical cognition processes.

Pointing North Online: Using photographs of known environments to evaluate north pointing accuracy

Previous research has found that cognitive maps are not consistently oriented towards north as people tend to bias their north-pointing estimates towards nearby roads (Brunyé et al., 2015). While pointing studies are typically conducted within familiar environments, it is not clear whether north-pointing estimates will show a similar bias towards nearby roads when individuals are not physically located in the environment. In essence, a north-pointing task when not located within the environment is a perspective-taking task. In a series of experiments, participants rated their familiarity with the Texas A&M campus and two nearby cities, completed a self-assessment of sense-of-direction, and then pointed towards north. The pointing task used photographs of the A&M campus to provide a location and initial orientation. These experiments provide new insights into individual differences in north pointing and perspective-taking skills when an individual is not physically present within the environment.

On the Gradual Construction of Complex Abstract Representations in Spatial Problem Solving

Finding adequate representations is an important challenge in solving complex problems. Especially in unfamiliar task domains, initially chosen representations might not cover all of the domain's relevant aspects. I present a theory of representational change based on results from a case study of dyadic problem solving using a spatial transformation task of gradually increasing complexity. This theory proposes that change is driven by the expressive limitations of the representational substrates in question. Iconic representations such as gestures are useful for representing simple objects, but full resemblance soon encounters limits. Metonymic gestures, which iconically resemble parts of an object while referring to the whole, can extend this scope. By omitting aspects of the problem domain, such partial resemblances can then feed into memory retrieval processes for metaphors that bear no actual similarity to domain objects but do resemble, for instance, such metonymic gestures. Finally, elaborating on such metaphors can open up further representational possibilities.

Bilinguals Infer in L2 Similarly, but not in Dual-language

Inference-making is a complex mental process, as it involves retrieving prior knowledge and active meaning-making, making it appropriate for exploring processing automaticity vs. difficulty in different languages. People are prone to falsely recognizing sentences that represent the inferences they made, due to gist encoding. To examine whether this process differs for L1 and L2, we presented forty-eight Turkish-English bilingual participants with Turkish, English, and dual-language sentence groups that allowed them to mentally configure objects and draw spatial inferences. Inferred sentences were recognized significantly more than new sentences. We observed this in L1 and L2, but not in the dual-language condition. Higher L2 proficiency and lower executive functioning abilities were related to higher false recognition. These results are aligned with bilingual memory organization, suggesting L2 approaches L1 automaticity with increased proficiency. Lower-EF participants might prefer less effortful strategies for processing information, such as averaging, abstracting, inferring, and gist extraction.

I see where this is going: Modeling the development of infants' goal-predictive gaze

From about six months of age onward, infants observing an action, such as a grasp, start to shift their gaze from the moving agent to the goal before the action is complete. A variety of factors that influence such goal-predictive gaze have been identified, but the underlying cognitive processes are heavily debated. We propose that our minds structure sensorimotor dynamics into probabilistic, generative event-predictive models and choose actions with the objective of minimizing predicted uncertainty. We implement this proposition by means of event-predictive learning and active inference. Trained on manual object manipulations, the model develops goal-predictive gaze: it starts fixating on the anticipated goal at the start of an observed event when a familiar agent (i.e., a hand) is involved, while it keeps tracking unfamiliar agents (e.g., a claw) performing the same movement. We conclude that event-predictive learning combined with active inference may be critical for eliciting action-goal predictions.

Utilizing ACT-R to investigate interactions between working memory and visuospatial attention while driving

In an effort towards predicting mental workload while driving, previous research found interactions between working memory load and visuospatial demands, which complicates the accurate prediction of momentary mental workload. To investigate this interaction, the cognitive concepts of working memory load and visuospatial attention were integrated into a driving model built in the cognitive architecture ACT-R. The model was developed to drive safely on a multi-lane highway with ongoing traffic while performing a secondary n-back task using speed signs. To manipulate visuospatial demands, the model drives through a construction site with reduced lane width in certain blocks of the experiment. Furthermore, it is able to handle complex driving situations such as overtaking traffic while adjusting its speed according to the n-back task. The behavioral results show a negative effect on driving performance with increasing difficulty of the secondary task. Additionally, the model indicates an interaction at a common, task-unspecific level.

The Effects of Messages About Intellectual Ability on Children’s Activity Preferences

Research shows that there is a strong cultural bias against women: Many people believe that women are less intellectually talented than men (Kirkcaldy et al., 2007; Upson & Friedman, 2012), and these beliefs emerge in childhood. For example, girls as young as six were less motivated than boys to pursue gender-neutral activities that were described as being for children who are “really smart” and more inclined toward activities intended for “hardworking” children (Bian, Cimpian, & Leslie, 2015). This study investigates whether framing activities as stereotypically masculine or feminine alters the impact of messages about intellectual ability on 6- and 7-year-olds’ selection of preferred activities. Preliminary findings (N=75) indicate that making activities stereotypically feminine reduces girls’ preference for "hardworking" over "smart" games. Such results suggest that presenting young girls with toys that are targeted specifically to them might counteract the effects of negative gender stereotypes about intelligence.

Identifying local cognitive representations in the brain across age spans through voxel searchlights and representational similarity analysis

Localizing function in the brain has been an elusive long-term goal in the study of cognition. Prior studies have utilized four reference abilities (RAs) to capture cognition (Salthouse, 2009), and full-brain cortical networks have been tied to these abilities using common multi-voxel patterns across subjects in distinct age groups. Using voxel searchlights, the current study explores purely local cortical representations of cognition, which have received less attention. This work analyzes 240 subjects’ responses to cognitive tasks from the four RAs. The current study further applies representational similarity analysis (RSA; Kriegeskorte, Mur, & Bandettini, 2008) to the similarity of brain activity from tasks within the same RA; RSA can capture representational consistencies within each subject even when exact voxel patterns vary across subjects. We found distinct topographical localizations for each RA that were mostly consistent across age and suggested refinements of broader functional divisions of the brain from prior literature.
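An illustrative sketch of the RSA step within a single searchlight (random placeholder data, not the study's fMRI responses): build a representational dissimilarity matrix (RDM) from each task's voxel patterns and compare the RDMs.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
patterns_task_a = rng.normal(size=(6, 50))   # 6 conditions x 50 searchlight voxels
patterns_task_b = rng.normal(size=(6, 50))

# Condensed RDMs: pairwise correlation distances between condition patterns.
rdm_a = pdist(patterns_task_a, metric="correlation")
rdm_b = pdist(patterns_task_b, metric="correlation")

rho, p = spearmanr(rdm_a, rdm_b)             # second-order similarity of the two RDMs
print(f"RDM similarity (Spearman rho) = {rho:.3f}, p = {p:.3f}")
```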

Investigating the Utility of Prompting Novice Programmers for Self-Explanations to Improve Mental Models

A mental model is an internal representation that explains how something works. Mental model construction is facilitated by self-explanation, the active generation of explanations for oneself. The overarching goal of this research is to empirically investigate the utility of self-explanation for developing mental models when learning to program. Programming is notoriously challenging and, despite evidence of the importance of mental models for learning, little work has focused on mental models of students learning how to program. They need correct mental models of the notional machine, an abstraction of the steps taken by a computer as it processes a program. Because students do not spontaneously self-explain, we are using a user-centered approach to design a computer tutor to prompt for self-explanation about the notional machine. Here, we present qualitative results on students’ interactions with an initial version of the tutor, including the form of their self-explanations and corresponding mental models.

Effects of memory organization on credit assignment in human reinforcement learning

How does the similarity structure of memory influence credit assignment in reinforcement learning? Memory spaces vary in how integrated versus separable their constituent dimensions are, and in how clustered versus distributed items are across dimensions. Greater integration may cause people to overattribute value to multiple dimensions, potentially leading to generalization errors. Greater clustering may bias people to attribute value to discrete category centers (category-based learning) rather than map value continuously across the space (function learning). In this study, subjects complete a value-learning task in which stimuli are sampled from low-dimensional perceptual spaces and reward is mapped to one dimension. Each space is intended to engender a different degree of integration and clustering in memory, such that effects of memory organization on learning can be probed. Additionally, we investigate how credit assignment on each of these artificial perceptual spaces differs from credit assignment on more complex spaces that define real-world semantic concepts.

Impacts of colors and container types on predicted and perceived flavor of non-alcoholic beverages

Although it is documented that vision can impact flavor perception, less is known about the interrelations between subsets of cross-modal factors. Within the framework of cross-modal perception, three between-group experiments were conducted to determine the impact of color and container type on flavor perception. Two different containers and 10 colors were tested in two online experiments (n1 = 67, n2 = 63); in the third experiment, two non-sweetened colored drinks were tested (n = 32). Hedonic, associative, and emotional measurements were applied. Our results indicate that color can increase the expected sense of flavor. For instance, red, pink, and orange (average values 3.7, 3.7, and 3.6 on a 5-point scale) were most strongly associated with sweetness. For some colors (e.g., red and brown), predicted sweetness is also determined by the type of container (bottle vs. glass). Additionally, the sense of freshness as a cross-modal factor increases the likeability of the drink.

Application of machine learning to signal entrainment identifies predictive processing in sign language.

We present the first analysis of multi-frequency neural entrainment to the dynamic visual features that drive sign language comprehension. Using EEG coherence to optical flow in video stimuli as a measure, we are able to classify fluent signers’ brain states as reflecting online language comprehension or as non-comprehension during viewing of non-linguistic videos that are equivalent in low-level spatiotemporal features and high-level scene parameters. The data also indicate that lower frequencies, such as 1 Hz and 4 Hz, contribute substantially to brain-state classification, indicating that neural coherence to the signal at these frequencies is relevant to language comprehension. These findings suggest that fluent signers rely on predictive processing during online comprehension.

Knowing the Shape of the Solution: Causal Structure Constrains Evaluation of Possible Causes.

We investigate whether reasoners are sensitive to the underlying causal structure of an event when evaluating its likely causes. Participants read stories in which two unknown causes led to an outcome in either a converging or linear structure. They were then asked to select two of three possible causes to complete the story. Two candidates were semantically-related, direct causes of the outcome. The third was an unrelated, indirect cause of the outcome that was conditional on a directly-related event. Differences in abstract structure, and not association, guided people’s evaluations of the most likely causes (e.g., ‘breeze blowing’ was judged an unlikely direct cause of a noisy room compared to ‘alarm ringing’ or ‘door slamming’, but a likely indirect cause, conditional on the door slamming). Results demonstrate that people consider abstract information about the structure of an event when evaluating causes. Knowledge of causal structure may therefore guide hypothesis evaluation.

Chunking as a Rational Solution to the Speed-Accuracy Trade-off in a Serial Reaction Time Task

When exposed to perceptual sequences, we are able to gradually identify patterns within them and form a compact internal description of the sequence. One proposal for how disparate sequential items can become one is that people form chunks. We study chunking in the context of serial reaction time tasks. We propose a rational model of chunking that progressively rearranges and modifies its representation to arrive at one that serves participants' utility under task demands. Our model predicts that participants should, on average, learn longer chunks when optimizing for speed than when optimizing for accuracy. We tested this prediction experimentally by instructing and rewarding one group of participants to act as fast as possible, while the other group was instructed to act as accurately as possible. From several independent sources of evidence, we confirmed our model's prediction that participants in the fast condition chunked more than participants in the accurate condition. These results shed new light on the benefits of chunking and pave the way for future studies of structural and representational learning.
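
The speed-accuracy logic behind this prediction can be illustrated with a toy utility calculation (the cost and error assumptions below are my own, purely for illustration, not the rational model itself): longer chunks save per-chunk initiation time but are assumed to be more error-prone, so the utility-maximizing chunk length shifts with how heavily speed is weighted.

# Toy speed-accuracy trade-off over chunk lengths (illustrative assumptions only).
import numpy as np

seq_len = 12                                   # length of the motor sequence
chunk_lengths = np.array([1, 2, 3, 4, 6])      # candidate chunk sizes that tile the sequence

def expected_utility(k, speed_weight):
    n_chunks = seq_len / k
    time_cost = n_chunks * 1.0 + seq_len * 0.1        # assumed per-chunk initiation + per-key cost
    accuracy = (1 - 0.01 * k) ** seq_len              # assumed: elements in longer chunks err more
    return speed_weight * (-time_cost) + (1 - speed_weight) * 10 * accuracy

for label, w in [("accuracy-focused", 0.2), ("speed-focused", 0.8)]:
    utilities = [expected_utility(k, w) for k in chunk_lengths]
    print(f"{label:16s} -> best chunk length = {chunk_lengths[int(np.argmax(utilities))]}")

Under these made-up numbers the speed-focused objective favours longer chunks than the accuracy-focused one, which is the qualitative pattern the model predicts.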

The effectiveness of face-name mnemonics on name recall

Remembering names is generally considered a difficult task. The face-name mnemonic strategy enhances encoding of face-name associations using a name transformation, a prominent facial feature, and interactive imagery. Name recall involves mentally retracing these steps. However, the components of the face-name mnemonic strategy are relatively unexplored. The present study tested the effectiveness of variants of the face-name mnemonic strategy in comparison to an uninstructed group for learning face-name associations. Experiment 1 demonstrated a significant advantage of the name-transformation mnemonic strategy compared to the uninstructed approach. Experiment 2 further examines the name-transformation mnemonic strategy with young adults. This study demonstrates the effectiveness of a variation of the face-name mnemonic strategy with implications for memory rehabilitation interventions.

Using Causality to Map Difficulties in a Qualitative Physics Problem

A key element of conceptual understanding in physics education is the ability to make qualitative inferences. In this study, we investigated whether people use causal notions when solving simple, qualitative physics problems. We utilized the principles of directionality and holding variables constant to assess participants' accuracy in answering questions about inferences from cause to effect (CE), effect to cause (EC), and from one cause to the other cause (CC). Participants responded with greater accuracy for CE and EC questions when alternative causal information was held constant compared to when it was made explicitly unknown. For CC questions, accuracy was lower across all alternative information types. Further, participants generally treated ambiguous alternative causes as if they were held explicitly constant. These results indicate that using notions of causality can potentially help identify difficult qualitative physics questions and be used as a tool in instructional design.

Interpreting Data Tables: Can Variable Symmetry Scaffold Performance?

Data interpretation is crucial in modern society. One common data structure that people frequently encounter is 2 x 2 tables. Past work suggests that the nature of the variables affects how people interpret 2 x 2 tables. Specifically, people interpret tables with symmetric variables (present/present; e.g., treatment A vs. treatment B) more accurately than tables with asymmetric variables (present/absent; e.g., treatment vs. no treatment). This study tested whether interpreting tables with symmetric variables could scaffold later interpretation of tables with asymmetric variables. Undergraduates interpreted tables and rated the importance of each cell to their interpretations. Some participants interpreted tables with symmetric variables before tables with asymmetric variables; others interpreted only tables with asymmetric variables. Participants who first interpreted tables with symmetric variables later judged cells in the bottom row of asymmetric tables to be more important. Thus, experience with symmetric variables shifted participants’ views of tables with asymmetric variables.

Competing goals in the construction and perception of moral narratives

Narratives can communicate moral character by describing one's past actions and motivations. When telling moral narratives, people might twist the truth to appear better than they are, while concealing this goal. We show that audiences evaluate a narrator’s moral character by inferring weights on three goals: providing accurate information, appearing morally good, and projecting an image of informativeness. Participants judged how narrators explained their choice during a “claim” task. Narrators could claim a raise at the risk of causing a co-worker to lose money. They could then lie or tell the truth and be direct or indirect about their motivations (e.g., “I’m not selfish” vs. “It was better for the co-worker”). Our results suggest that to be perceived as morally good, narrators must find a balance: trying too hard to appear generous is costly. We introduce a model for how narratives are constructed through recursive inference of audience perception.

Impact of Performing A Secondary Task on Recall

In a memory task, focusing on to-be-remembered information while concurrently engaging in a secondary task may result in impaired memory, possibly due to limited cognitive resources. However, previous research has demonstrated circumstances where interleaving a secondary task can impair immediate recall but enhance long-term retention. This suggests that the type and difficulty of the secondary task also affect memory. The present study explores the effect of processing a secondary task on recall. In Experiment 1, increasing the complexity of the secondary task had a detrimental effect on delayed free recall. Experiments 2 and 3 examine the effect of increasing cognitive load with faster stimulus presentation on delayed free recall and serial recall. These findings have implications for theories advocating the domain-general nature of cognitive resources.

Navigating by Narratives: Cognitive Maps Encode Engagement with Physical and Fictional Worlds

What can stories teach us about the real world? How can stories change our behavior? From research on cognitive maps, we know that hippocampal neurons form maps of physical space and time. Here, we propose that the same neural processes underlie engagement with fictional stories: during story processing we create cognitive maps of various domains—spatial, social, moral, etc.—that we can later use for real-world navigation. This perspective can also inspire new methodological tools for assessing story processing, and we give an example showing how a story can be modeled as paths taken on a cognitive map. In summary, we see stories as affording paths that guide our navigation through multiple dimensions of life, which raises the implication that the function of stories, including their moral content, is not only to be understood at the level of abstract and linguistically coherent propositions, but at the fundamental level of navigation.

Emotion-Color Association in Biologically Inspired Deep Neural Networks

Deep Neural Network (DNN) representations correlate very well with neural responses measured in primates' brains and with psychological representations from human similarity-judgement tasks, making them candidate models for human behavior-related tasks. This study investigates whether DNNs can learn an implicit association between colors and emotions for images. An experiment was conducted in which subjects were asked to select a color for a given emotion-inducing image. These human responses (decision probabilities) were modeled using representations extracted from pre-trained DNNs for the images and colors (a square of the color). The resulting model showed a fuzzy linear relationship with the decision probabilities. Finally, this model was applied to emotion classification tasks, specifically with very few training examples, showing an improvement in accuracy over a standard classification model. This analysis can be of relevance to psychologists studying these associations and to AI researchers modeling emotional intelligence in machines.
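
One way to picture the modeling step described above (a sketch under assumptions, not the authors' implementation): treat pre-extracted DNN features of an image and of a candidate color as inputs and fit a simple regression onto the human color-choice probabilities. The feature arrays below are simulated stand-ins for real DNN embeddings.

# Illustrative sketch: predicting human color-choice probabilities from
# (image feature, color feature) pairs. All arrays here are simulated.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n_images, n_colors, d_img, d_col = 30, 8, 128, 16
img_feats = rng.normal(size=(n_images, d_img))             # hypothetical DNN image embeddings
col_feats = rng.normal(size=(n_colors, d_col))             # hypothetical DNN color-patch embeddings
probs = rng.dirichlet(np.ones(n_colors), size=n_images)    # simulated human choice probabilities

# Build one row per (image, color) pair and regress onto the corresponding probability.
X = np.concatenate([np.repeat(img_feats, n_colors, axis=0),
                    np.tile(col_feats, (n_images, 1))], axis=1)
y = probs.ravel()
model = LinearRegression().fit(X, y)
print("R^2 on the (simulated) training data:", round(model.score(X, y), 3))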

Eye-Tracking Multi-Modal Inference

When you see a glass fall off the table, you can predict it will break without seeing it hit the ground. Similarly, if you hear the glass shatter, you can infer what happened without seeing anything at all. Our knowledge of the causal mechanisms structuring our world allows us to make impressively accurate inferences based on incomplete information spread across multiple sensory modalities. In this work, we study the cognitive processing that supports this remarkable behavior. We utilize the Plinko domain, an intuitive physics setup where marbles are dropped into a box from one of three holes, colliding with obstacles as they fall to the ground. Participants judge where they think the ball fell from based on visual and auditory evidence. We track participants' eye-gaze to gain deeper insight into their mental processes, and develop models that characterize the computational processes underlying participant behavior.
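
The core inference in a task like this can be illustrated with a toy Bayesian fusion of visual and auditory evidence over the three possible drop holes (the likelihood numbers are made up for illustration and are not the authors' model):

# Toy Bayesian cue combination over three candidate drop holes.
import numpy as np

prior = np.array([1/3, 1/3, 1/3])                 # no hole is a priori more likely
p_visual_given_hole = np.array([0.6, 0.3, 0.1])   # likelihood of the partial visual evidence
p_audio_given_hole  = np.array([0.2, 0.5, 0.3])   # likelihood of the heard collision sounds

posterior = prior * p_visual_given_hole * p_audio_given_hole
posterior /= posterior.sum()
print("posterior over holes:", posterior.round(3))   # combines both modalities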

Anaphoric distance dependencies in the sequential structure of wordless visual narratives

Language has been characterized as a “unique” facet of human cognition with complex syntactic features like anaphora and distance dependencies. However, visual narratives, like comics, have been argued to use similar sequencing mechanisms. These narrative structures include “refiner” panels that “zoom in” on the contents of another panel. Similar to linguistic anaphora, refiners co-referentially connect inexplicit information in one unit (refiner/pronoun) to a more informative “antecedent.” Also, refiners can follow their antecedents (anaphoric) or precede them (cataphoric) with either proximal or distant connections. We explored these constraints of order and distance on visual narrative refiners by measuring event-related brain potentials (ERPs) to wordless comic strips. Anaphoric refiners evoked late sustained negativities (Nref) while distant anaphoric refiners attenuated N400s compared to all others, and all distance dependencies evoked leftward negativities. These responses are consistent with (neuro)cognitive responses shown to anaphora in language, suggesting domain-general constraints on the sequencing of referential dependencies.

Multiversionality: Considering multiple possibilities in the processing of narrative

We propose a conceptual framework of multiversional narrative processing. Multiversional narrative processing is the consideration of multiple possible event sequences for an incomplete narrative during reception. It occurs naturally and is experienced in a wide range of cases, such as suspense, surprise, counterfactuals, and detective stories. Receiving a narrative, we propose, is characterized by the spontaneous creation of competing interpretive versions of the narrative that are then used to create predictions and projections for the narrative’s future. These predictions serve as a mechanism for integrating incoming information and updating the narrative model through prediction error, without completely eliminating past versions. We define this process as having three aspects: (1) constrained expectations, (2) preference projection, and (3) causal extrapolation. Constrained expectations and preference projections respectively create the bounds and subjective desires for a narrative’s progress, while causal extrapolation builds, reworks, and maintains the potential models for understanding the narrative.

Do infants infer prosocial goals from disadvantageous payoffs in joint action?

People may engage in a joint activity (JA) to accrue material rewards or to help others. From a third-party perspective, the occurrence of JAs is thus ambiguous about the goals of the participating agents. We argue that the payoff structure of a JA (how costs and rewards are distributed) may help disambiguate these goals. Specifically, we hypothesize that an agent’s participation in a JA should be interpreted as prosocially motivated when its costs cannot be recouped by material rewards (disadvantageous payoff). We tested this hypothesis across three looking-time experiments with 12-month-olds. As predicted, infants expected a disadvantaged agent to behave altruistically towards her JA partner (Exp. 1). However, this expectation might be explained by a sensitivity to changes in overall reward distribution from familiarization to test (Exps. 2 & 3). Our results call for a re-evaluation of the role that payoff information plays in early goal attribution within JA contexts.

Exploring Online Goal Inference in Real World Environments

Machine learning offers techniques for predicting behavior, but has limited capability to use behavior, context, and domain knowledge to infer intentions and predict the behavior trajectory (i.e., the sequence of directed actions taken toward a goal) of humans in real time to assist in strategic planning. Some existing models of human goal inference (Zhi-Xuan, Mann, Silver, Tenenbaum, & Mansinghka, 2020) possess some of these capabilities; however, these models are typically evaluated in toy worlds, which limits our understanding of how these approaches would generalize to real-world domains. Here, we introduce a novel integration of Bayesian goal inference and a deep learning model for body-joint (pose) estimation (OpenPose) to address this generalizability problem and evaluate how this integrated approach can infer the goals of humans in real-world environments. This exploration provides an opportunity to evaluate capabilities required for real-time goal inference in real-world environments and can highlight benefits and limitations that inform future research.
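
A minimal sketch of the Bayesian goal-inference step (the pose-estimation front end is omitted; the goal locations, trajectory, and likelihood model below are assumptions for illustration): score each candidate goal by how well it explains the observed movement so far, for example via a noisily rational (softmax) preference for steps that reduce distance to the goal.

# Toy online goal inference from a 2-D trajectory (illustrative only).
import numpy as np

goals = np.array([[0.0, 10.0], [10.0, 10.0], [10.0, 0.0]])               # hypothetical goal locations
trajectory = np.array([[5.0, 0.0], [5.5, 1.0], [6.2, 2.1], [7.0, 3.0]])  # observed positions
beta = 2.0                                                               # rationality (inverse temperature)

log_posterior = np.zeros(len(goals))                                     # uniform prior in log space
for prev, curr in zip(trajectory[:-1], trajectory[1:]):
    for g, goal in enumerate(goals):
        progress = np.linalg.norm(prev - goal) - np.linalg.norm(curr - goal)
        # Softmax-style step likelihood; assumes the normalizer is roughly goal-independent.
        log_posterior[g] += beta * progress

posterior = np.exp(log_posterior - log_posterior.max())
posterior /= posterior.sum()
print("posterior over goals:", posterior.round(3))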

Can losing the sense of smell affect odor language?

A number of studies have explored whether language is grounded in action and perception; however, little attention has been given to the sense of smell. Here we directly test whether olfactory information is necessary for comprehension of odor-related language by comparing language performance in a group of participants with no sense of smell (anosmics) with that of a group of control participants with an intact sense of smell. We found no difference in comprehension of odor- and taste-related language between anosmics and controls using a lexical decision task, suggesting olfaction is not crucial to semantic representations of odor-related language. However, we did find that anosmics were better at remembering odor-related words than controls, and they also rated odor- and taste-related words as more positively valenced than control participants did. We suggest odor-related language is more salient and emotional to anosmics because it reminds them of their missing sense. Overall, this study supports the proposal that odor-related language is not grounded in odor perception.

Learning to be attractive: A test of the skills hypothesis in spotted bowerbirds (Ptilonorhynchus maculatus)

Male spotted bowerbirds perform vigorous courtship dances to visiting females on elaborate display arenas. Each arena is built and defended by one dominant male, which commonly tolerates one or more subordinate males. These non-territorial “auxiliary” males are thought to be inexperienced sub-adults. The skills hypothesis suggests that auxiliaries attend established bowers to practice their courtship skills, but little is known about the development of courtship motor performance and which courtship properties are refined with experience. Here we investigate whether auxiliaries are as proficient as bower owners in performing courtship. First, we investigate whether specific courtship moves are used in different contexts within a courtship routine and whether such flexible use of courtship elements is shared both by bower owners and auxiliaries. Second, we examine other fine-scale parameters of courtship dances, in order to further test for possible differences in courtship properties depending on dominance status.

State vs. Trait: Examining Gaming the System in the Context of Math Perception Tasks

In the development and analysis of interventions designed to improve student learning, it is important to consider potential influences of student behavior. Unproductive behaviors, such as “gaming the system,” have been studied for their potential impacts on the measurement and assessment of student knowledge within these interventions. Conversely, less attention has been given to factors that may influence gaming behavior. Gaming may be attributed to student-level traits, but could also be a temporary state brought on by systemic causes such as the perceived difficulty or presentation of a problem. We leverage prior research in this area to develop a data-driven measure of gaming and address whether this behavior occurs as a “state” or a “trait” within a randomized controlled trial. We find that these factors differ across the two study conditions, suggesting that context contributes to the underlying causes of gaming behavior.

Cyclic reactivation of internal working memory representations of distinct feature dimensions

Recently, several behavioral studies have demonstrated 4-10 Hz rhythmic fluctuations in attention. So far, this attentional sampling has only been demonstrated with regard to external stimuli. Attention, however, is often directed towards internal working memory representations. We conducted a human behavioral dense-sampling experiment to test whether simultaneously held representations of two distinct feature dimensions (color and orientation) also exhibit a rhythmic temporal profile. We found an oscillatory component at 9.4 Hz in the joint time-courses of both representations, presumably reflecting a common early perceptual sampling process in the alpha-frequency range. Further, we observed an oscillatory component at 3.5 Hz with a significant phase-difference between feature dimensions. This likely corresponds to a later attentional sampling process, indicating that internal representations of distinct features are activated in alternation. In summary, we demonstrate the cyclic reactivation of internal representations, as well as the co-occurrence of perceptual and attentional rhythmic fluctuations at distinct frequencies.

It’s not so simple: morphosyntactically simpler languages are not always easier to learn

Are morphosyntactically simpler languages easier to learn? In theory building and empirical work alike, researchers often assume so. While plausible, this assumption nevertheless requires empirical confirmation. To test it, we manipulate form-to-meaning transparency (FtMT) and morphophonemic cohesion (MC) in three large-scale artificial language learning experiments with English and Mandarin participants. We find that (a) FtMT does not affect learnability in all semantic domains (for English speakers, the more transparent marking of plural number in a separate morpheme increases learnability with nouns, but not with pronouns); (b) MC similarly affects learnability in a domain-dependent fashion (expressing subjects and verbs in two words, as opposed to one concatenated word, increases learnability with nouns, but not with pronouns); (c) the effects are L1-dependent (for Mandarin speakers, pronouns behave like English nouns with regard to plural marking). Taken together, the results falsify the strong hypothesis that simpler languages are necessarily easier to learn.

Sound production of Asian elephant high-frequency vocalisations

Anatomical and cognitive adaptations that overcome morpho-mechanical constraints of the vocal folds increase vocal diversity across taxa. The Asian elephants' vocal repertoire ranges from infrasonic rumbles (F0 ~ 20 Hz) to higher-pitched trumpets (F0 340-540 Hz) and species-specific squeaks (F0 300-2300 Hz). While rumbles are congruent with vocal fold vibration in large mammals, trumpets and squeaks have been hypothesised to be emitted by the trunk, although the sound source has remained unknown. We use an acoustic camera to visualise nasal emission of trumpets but oral emission of squeaks, as well as an event of simultaneous oral and nasal emission (biphonation), in a captive group of female Asian elephants. By combining these findings with acoustic, behavioural and morphological data, we suggest that trumpets are produced by vibration of nasal cartilages, but squeaks by vibration of the tensely closed lips. Our data further suggest that context or vocal production learning might be involved in squeak sound production.

Embodied Metaphor in Communication about Experiences of COVID-19 Pandemic

This study investigated how a group of twenty-seven Wuhan citizens (coronavirus patients and their family members, medical staff, college students, social workers, a teacher, and a journalist) employed metaphors to communicate about their experiences of the COVID-19 pandemic through in-depth individual interviews. The analysis of metaphors captured the different kinds of emotional states and psychological conditions of the research participants, focusing on their mental imagery of COVID-19, extreme emotional experiences, and symbolic behaviors under the pandemic. The results show that multiple metaphors were used to construe the emotionally complex, isolating experiences of the COVID-19 pandemic. Most metaphorical narratives are grounded in embodied sensorimotor experiences such as body parts, batting, fighting, hitting, weight, temperature, spatialization, motion, violence, light, and journeys. Embodied metaphors were manifested in both verbal expressions and nonlinguistic behaviors (e.g., patients' obsessive-compulsive behaviors). These results suggest that the bodily experiences of the pandemic, the environment, and psychological factors combine to shape people's metaphorical thinking processes.

Does mental simulation of alternative research outcomes reduce bias in predicted results?

Can mental simulation of alternative research outcomes reduce bias? We attempted to extend Hirt et al.'s (2004) finding of debiasing when alternative basketball standings were easy to simulate and participants were low in need for structure (NFS). Amazon Mechanical Turk (AMT) participants explained why taking notes by hand might improve test scores; then three groups of them explained different outcomes: consider-opposite participants explained benefits of laptop notetaking, and two transfer groups explained either a plausible or an implausible outcome for unrelated research. None of the three groups differed from a baseline group in test score estimates or in the judged likelihood that taking notes by hand leads to higher test scores. However, low-NFS participants estimated a marginally lower likelihood that notetaking by hand is superior, suggesting less bias toward their initial explanation. We consider whether variation in participants' psychology backgrounds might have overwhelmed effects, and discuss a replication with students taking introductory psychology.

Verbs are More Metaphoric than Nouns: Evidence from the Lexicon

When asked to paraphrase semantically strained sentences (e.g., the lantern limped), participants alter word meanings to fit the context (e.g., the light flickered). Judges who were asked to classify these meaning extensions classified verb extensions as primarily metaphoric and noun extensions as primarily taxonomic or metonymic/associative (King & Gentner, 2019; in prep.). To determine whether these online patterns are reflected in established word senses, we sampled 120 nouns and 120 verbs from three different frequency bands (100 wpm, 10 wpm, and 1 wpm) and obtained metaphoricity ratings of every sense for each word (1015 senses total, Oxford Dictionary API). Verb senses were rated as significantly more metaphoric than noun senses overall. In addition, metaphoricity increased as sense frequency decreased, and this effect was stronger for verbs than for nouns. Verbs’ high propensity for metaphoric extension has implications for evolution of word meanings.

The role of period correction and continuous input from a co-performer in joint rushing

Recent studies provide experimental evidence for joint rushing - the phenomenon that participants in rhythmic group activities unintentionally increase their tempo. We hypothesized that joint rushing arises from the combined workings of established sensorimotor synchronization mechanisms (e.g., phase and period correction) with a simple phase advance mechanism that has been studied in mass-synchronizing insects. We invited musicians and non-musicians to participate in three tapping experiments. Participants were asked to continue to produce a constant target tempo alone (solo) or together with a partner (joint). The results show that joint rushing induces a lasting period correction but stops when auditory feedback of the partner’s taps is removed. Musical training reduced the magnitude of joint rushing but did not eliminate it. These results are consistent with our hypothesis that joint rushing occurs because adjustments induced by a simple phase advance mechanism alter the period of internal timekeeping.

Native perception of non-native speech: Speaker accent mitigates penalization for language errors in non-native speech unless the listener is conscientious

Non-native speakers have been found to be penalized for their accent and grammatical errors. However, little is known about whether and how accent and grammaticality interact to influence native listeners’ perception of non-native speech, and whether listeners’ personality plays a role in this. We examined these questions by relating the acceptability scores of 40 English speech stimuli rated by 60 British listeners (30 female; 30 unfamiliar with accented speech) to factors including speaker accent (British vs. Polish), grammaticality (well-formed vs. error-filled) and listener personality. The results suggest that a non-native accent “protects” the speaker from being penalized for grammatical errors unless the listener has a certain personality profile: compared to non-native accented speech, the acceptability ratings of native accented speech showed a larger decrease when grammatical errors were present, yet listeners who were more conscientious, extraverted, or agreeable tended to give lower acceptability ratings to non-native accented speech, regardless of grammaticality.

Pragmatic Bias and the Learnability of Semantic Distinctions

Cross-linguistically prevalent semantic distinctions are widely assumed to be easier to learn, due to the naturalness of the underlying concepts. Here we propose that pragmatic pressures can also shape this cross-linguistic prevalence, and offer evidence from evidentiality (the encoding of information source). Languages with grammatical evidential systems overwhelmingly encode indirect sources (reported information or hearsay) but very rarely mark direct, visual experience. Conceptually, humans reason naturally about what they see; on pragmatic grounds, however, when encoding a single source, reported information is more informative because it is potentially unreliable and consequently more marked. In two Artificial Language Learning experiments, we directly compared the learnability of two simple evidential systems, each marking only a visual or a reportative source. Across experiments, participants more easily learned to mark reportative information sources. Our results provide support for a pragmatic bias that shapes both the cross-linguistic frequency and the learnability of evidential semantic distinctions.

The effect of evidentiality markers on the survival processing effect

Human memory retains information related to survival more effectively, a phenomenon termed the survival processing effect (SPE). Adaptive memory should also store reliable information more effectively than unreliable information. In an experiment (n=107), we asked whether the SPE depends on the reliability of information as encoded by linguistic markers. In Turkish, evidentiality markers encode whether information was gathered firsthand (-di) or not (-miş), and thus provide cues to source reliability. We found that sentences processed for their relevance to survival were better recalled than those processed for their relevance to moving. This survival processing effect was significant for both direct and indirect evidentiality markers, although it was stronger for direct evidentiality. That the SPE persisted in the indirect evidentiality condition suggests that survival-related information is privileged in encoding even when potentially marked as unreliable by linguistic markers. Thus, the effect of language on memory is not profound but rather computed online.

Concept mapping produces delayed benefits in online learning.

The recent pandemic has emphasized the need for developing more effective pedagogical practices in online education. While activities that require deeper connection of concepts can enhance overall learning relative to mere rote rehearsal, are conceptual activities also viable options for inclusion in online coursework, given the asynchronous and flexible nature of online learning? To answer this question, students in 2 online courses completed either concept-mapping exercises or control exercises (i.e., definitions of concepts). Results indicate that while concept-mapping is initially no better than memorization, as experience with mapping increases, a growth in learning emerges later in the course. Students also rate the educational utility and enjoyability of conceptual activities similarly to control exercises, suggesting that students do not perceive these conceptual activities more negatively. As such, concept-mapping does have utility for online learning, but a warm-up period is necessary before benefits are realized.

The subjective value of creative outputs: appropriate or original?

Creativity is defined as the ability to produce an object or idea that is both original and appropriate to the context. How the originality and appropriateness of a production are integrated in creative cognitive processes is not understood. We propose that creativity involves an evaluative component based on individual preferences for the appropriateness and originality of candidate ideas stemming from a generative component. Through behavioral experiments inspired by neuroeconomics and using computational modeling, we aimed to characterize this evaluative component. Participants had to generate creative word associations and then rate how appropriate and original each of their responses was, and how much they liked it. We found that the way individuals balanced appropriateness and originality to build a subjective value of their ideas was correlated with creative abilities. These findings provide new insights into how individual preferences can impact creativity.

Looking at the Pragmatics of Laughter

Laughter and gaze play an important role in managing and coordinating social interactions. We investigate whether laughs performing distinct pragmatic functions, either related to humour or to a potentially socially discomforting utterance or situation, are accompanied by different gaze patterns. Using a multimodal corpus of dyadic taste-testing interactions, we show that people tend not to look at their partner while producing laughs related to humour, whereas laughs that relate to potential social discomfort are accompanied by gaze at the partner. With respect to the non-laughing partner's gaze at the laugher, we observe the opposite pattern around the laughter offset. We show that gaze contributes to the synchronisation and alignment of laughter production, analogously to previously reported results for speech turn-taking. Our study also provides empirical evidence for the debate about gaze aversion, opposing the view that social stress is the main explanatory factor.

Affordances and Grounding Within Concreteness Fading When Learning Proof in STEM's Geometry

The Fourth Industrial Revolution’s technological innovations are driving demand for high-skill jobs in a global Knowledge Economy, prompting Learning Scientists to advocate for developing Deeper Conceptual Understanding of STEM domains. In this study, Concreteness Fading is employed to develop Deeper Conceptual Understanding of Deductive Proof in geometry: the enactive stage grounds justifications in spatial relationships through manipulating object-shapes; the iconic stage grounds justifications in perceptual relationships through drawing images of object-shapes; the symbolic stage grounds justifications in rule-based relationships through writing numbers in formulas about object-shapes. Think-aloud protocols were analyzed from post-secondary non-math majors (N = 8, male = 2, female = 6) randomly assigned to four conditions: enactive-iconic-symbolic (n = 2), enactive-symbolic (n = 2), iconic-symbolic (n = 2), symbolic (n = 2). Findings reveal reliance on the affordances of spatial-based and perceptual-based proofs by participants after reaching a perceived impasse, demonstrating cognitive flexibility crucial for 21st-century problem solving.

Humans violate Occam’s razor in learning Gaussian mixture models

Learning the generative model of the world from limited data is an important problem faced by both human and artificial intelligence. How should one choose among multiple generative models that all fit the existing data well? Occam's razor suggests that one should select the simplest one. Here we asked whether Occam’s razor applies to human learning of probability distribution models. On each trial, participants saw 20-160 samples of spatial locations from a Gaussian mixture model and were asked to choose, among four different Gaussian mixture models, the one that had generated the samples. In three experiments, we found that participants did not, as Occam’s razor would suggest, prefer the one-cluster option that appears to be simplest, other things being equal. Instead, they showed a preference for options with two or three clusters. Such violation of Occam’s razor sheds light on the distinction between complexity-based and experience-based priors for model selection.
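
A rough illustration of the model-selection question posed above (a generic sketch, not the authors' procedure or stimuli): under a complexity-penalizing criterion such as BIC, extra clusters are preferred only if they earn their additional parameters.

# Comparing Gaussian mixture models with different numbers of clusters via BIC.
# Data are simulated here; the study's actual stimuli and answer options differ.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
samples = rng.normal(loc=0.0, scale=2.0, size=(80, 2))   # 2-D spatial samples from one broad cluster

for k in (1, 2, 3, 4):
    gmm = GaussianMixture(n_components=k, random_state=0).fit(samples)
    # Lower BIC = better trade-off between fit and complexity (an Occam-like penalty).
    print(f"{k} cluster(s): BIC = {gmm.bic(samples):.1f}")

On data actually generated by a single broad Gaussian, the one-component model typically attains the lowest BIC, which is the sense in which a preference for two or three clusters departs from a complexity-penalizing ideal.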

Unpacking the computations of human spatial search under uncertainty: noisy utility maximization, discounting, and probability warping

Humans navigate daily decision-making by flexibly choosing appropriate approximations of what ought to be done. Which mental algorithms do people use, and when? We use behavioural experiments and modelling to investigate three computational principles known to influence decision making: noisy utility maximization, discounting, and the probability warping principle of Prospect Theory. While these principles have been shown to separately influence human behaviour in simple laboratory tasks, such as bandits and gambles, we evaluate their combined use in the context of a naturalistic spatial search that required sequential decision-making. We found that while aggregate human behaviour can be reasonably well explained by an optimal planner with noisy utility maximization, individual-level behaviour exhibits consistent irregularities that deviate from expected utility theory. We show that model-based prediction of individual-level behaviours in our experiment is significantly improved by combining the three computational principles, and benefits particularly strongly from probability warping. Furthermore, our results suggest that probability warping may be a common factor in human decision making that generalizes beyond the gambles explored in Prospect Theory to natural human behaviours such as spatial search and navigation.
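
For concreteness, the three ingredients named above can be written in a few lines (a generic sketch of the standard formulations with arbitrary parameter values; not the authors' fitted model): a Prospect-Theory-style probability weighting function, step-wise discounting, and softmax (noisy) utility maximization.

# Generic building blocks: probability warping, discounting, noisy utility maximization.
import numpy as np

def warp(p, gamma=0.61):
    # Tversky & Kahneman (1992) weighting function; gamma < 1 overweights small probabilities.
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

def discount(reward, steps_away, delta=0.9):
    # Exponential discounting of rewards that are several decision steps away.
    return reward * delta**steps_away

def softmax_choice(utilities, beta=3.0):
    # Noisy utility maximization: higher beta = closer to strict maximization.
    z = beta * np.asarray(utilities)
    z -= z.max()
    return np.exp(z) / np.exp(z).sum()

# Example: two search targets described by (probability of reward, reward, steps to reach).
options = [(0.1, 10.0, 2), (0.8, 1.0, 1)]
utilities = [warp(p) * discount(r, s) for p, r, s in options]
print("choice probabilities:", softmax_choice(utilities).round(3))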

Compositionality, modularity, and the architecture of the language faculty

It is often assumed that language is strongly compositional, i.e., that the meaning of complex expressions is uniquely determined by the meanings of their constituents and their mode of composition (Fodor, 1987). Compositionality naturally connects to a broadly modular architecture of the language faculty, according to which our capacity for assigning meaning relies exclusively on lexical and syntactic knowledge (Baggio et al., 2015). Here, we discuss several arguments against strong compositionality. One such argument focuses on novel experimental data on the interpretation of privative adjectives (e.g., ‘fake’) (Partee, 2007). These data show that the interpretation of these adjectives is inexorably connected to the conceptual structure of the modified noun. We argue that lexical and syntactic information serve as important cues for, but do not uniquely determine, the process of meaning assignment (Martin, 2016). We discuss consequences for semantic theorising and the cognitive architecture of the language faculty.

Meta-strategy learning in physical problem-solving: the effect of embodied experience

'Embodied cognition' suggests that our motor experiences broadly shape our cognitive and perceptual capabilities, but work in this tradition often considers tasks that directly relate to or manipulate the body. Here we study how a history of natural embodied experience affects abstract physical problem-solving in a virtual, disembodied physical reasoning task. We compare how groups with different embodied experience (congenitally limb-different versus two-handed children and adults) perform on this task, and find that while there is no difference in overall performance, limb-different participants solved problems using fewer actions and spent a longer time thinking before acting. This suggests that differences in embodied experience drive the acquisition of different meta-strategies for balancing acting with thinking, even on tasks that are designed to equalize differences in embodiment.

Testing the Altercentrism Hypothesis in Young Infants

We test infants’ proposed altercentric learning bias (Southgate, 2020). By tracking others’ attention, infants build their own models of the environment based on the well-developed models of adults. If this is true, an event should be better remembered if witnessed together with someone else than if attended to alone. Eight-month-old infants (6 conditions, n = 32/group) saw an object being hidden first in one, then in another location. At the end of each trial, one of the locations was revealed, always empty. Participants correctly remembered the location and looked longer if it was empty (conditions 1-2). When an agent attends to the first hiding but not the second, infants misremember the object in the first location (condition 3), providing initial evidence for the altercentric bias. Contrary to our predictions, when the agent only attends to the last hiding location, or to both, infants have no expectation about the object’s whereabouts (conditions 4-6).

Comparing Markov and quantum random walk models of categorization decisions

Quantum probability theory has successfully provided accurate descriptions of behavior in decision making, and here we apply the same principles to two category learning tasks, one using overlapping, information-integration (II) categories and the other using overlapping, rule-based (RB) categories. Since II categories lack verbalizable descriptions, unlike RB categories, we assert that an II categorization decision is characterized by quantum probability theory, whereas an RB categorization decision is governed by classical probability theory. In our experiment, participants learn to categorize stimuli as members of either category S or K during an acquisition phase, and then rate the likelihood on a scale of 0 to 5 that a stimulus belongs to one category and subsequently perform the same rating for the other category during a transfer phase. With II categories but not RB ones, the quantum model notably outperforms an analogous Markov model and the order effects on likelihood ratings are significant.
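
To make the contrast concrete (a generic textbook-style sketch, not the authors' fitted models): a Markov model evolves a probability vector with a stochastic matrix, whereas a quantum walk evolves an amplitude vector with a unitary matrix, so interference between paths can produce patterns, such as order effects, that a Markov chain cannot.

# Markov vs. quantum walk on a small ring of evidence states (illustrative only).
import numpy as np
from scipy.linalg import expm

n = 5
# Markov: symmetric nearest-neighbour transition matrix (each column sums to 1).
T = np.zeros((n, n))
for i in range(n):
    T[i, i] = 0.5
    T[(i + 1) % n, i] = 0.25
    T[(i - 1) % n, i] = 0.25

# Quantum: unitary evolution generated by a Hermitian nearest-neighbour Hamiltonian.
H = np.zeros((n, n))
for i in range(n):
    H[i, (i + 1) % n] = H[(i + 1) % n, i] = 1.0
U = expm(-1j * H)                                  # evolution for one unit of time

p0 = np.zeros(n); p0[0] = 1.0                      # classical probability starts at state 0
psi0 = np.zeros(n, dtype=complex); psi0[0] = 1.0   # quantum amplitude starts at state 0

p_t = np.linalg.matrix_power(T, 4) @ p0            # Markov distribution after 4 steps
psi_t = np.linalg.matrix_power(U, 4) @ psi0        # quantum state after 4 time units
print("Markov probabilities :", p_t.round(3))
print("Quantum probabilities:", (np.abs(psi_t) ** 2).round(3))

The models compared in the study are fit to likelihood ratings and their order effects; this toy omits that measurement step and only shows the differing state dynamics.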

How does mental sorting scale?

Human cognition can tackle a wide range of problems, producing scalable results within a manageable timeframe. Many cognitive models can predict human behavior but lack such scalability, resulting in rapidly increasing processing times for more complex inputs. We present a task in which participants mentally sort sequences of rectangles by size while we measure reaction times (RTs) and accuracy. By manipulating the size of the input and the presence of latent structure in the sequences, we investigate i) how mental sorting scales with input complexity, ii) how latent structure influences scaling, and iii) how mental computations can be captured by plausible cognitive models. Our results reveal that RTs scale linearly with sequence length and that participants can learn and actively use latent structure to sort faster. This behavior is in line with a noisy sorting algorithm, which sequentially rules out potential hypotheses about the latent structure, thus reducing complexity while retaining accuracy.
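
The flavour of a noisy, hypothesis-elimination account can be conveyed with a toy simulation (my own illustrative sketch, not the authors' model): each pairwise size comparison is only correct with some probability, and the evidence is used to reweight hypotheses about the underlying ordering.

# Toy "noisy sorting by hypothesis reweighting" (illustrative sketch only).
import itertools, random
import numpy as np

random.seed(0)
sizes = [3, 1, 4, 2]                       # true sizes of the presented rectangles
n = len(sizes)
p_correct = 0.9                            # probability a single comparison is judged correctly

# One hypothesis per possible ordering of the items, tracked as a log-weight.
hypotheses = {perm: 0.0 for perm in itertools.permutations(range(n))}

comparisons = 0
for i, j in itertools.combinations(range(n), 2):
    comparisons += 1
    truth = sizes[i] < sizes[j]
    observed = truth if random.random() < p_correct else not truth      # noisy comparison
    for perm, logw in hypotheses.items():
        consistent = perm.index(i) < perm.index(j)                      # does this ordering agree?
        hypotheses[perm] = logw + np.log(p_correct if consistent == observed else 1 - p_correct)

best = max(hypotheses, key=hypotheses.get)
print("inferred order (item indices, smallest first):", best, "after", comparisons, "comparisons")

In the study, knowledge of the latent structure restricts the plausible hypotheses in advance, which is what lets participants sort faster without sacrificing accuracy.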

What’s in a role? The effects of personality and political differences on gender stereotype processing

When people read the sentence “The babysitter put on a TV show for the kids because he needed to use the washroom,” the male identity prompted by the pronoun clashes with the stereotypical expectation of babysitters as female, rendering the pronoun “he” more difficult to process than “she” would be. We ask whether participants’ HEXACO PI-R personality traits, Political Ideology, and Disgust Sensitivity (DS-R) modulate their reactions to pronouns Congruent versus Incongruent with stereotyped role nouns. 80 English-speaking participants read 40 sentences with female/male stereotypes and were asked to rate each item on a Likert scale (1-6) from “Completely Inappropriate” to “Completely Appropriate.” Initial analyses indicate that Openness correlates with higher Appropriateness ratings for Incongruent stereotypes, and Introversion correlates with low Appropriateness ratings of Incongruent, Female stereotypes. We also expect a correlation between high Conservativeness, Disgust Sensitivity, and low Appropriateness ratings for Incongruent stereotypes.

The Effect of Investment Position on Belief Formation and Trading Behavior

We propose an interaction in expectation formation between the returns of an investment and the favorability of new information. Such an interaction can have consequences for trading behavior, such as the Disposition Effect or differences in the profitability of selling and buying decisions. We introduce a context-sensitive Reinforcement Learning model to capture this effect and validate it in a pre-registered investment experiment. Using a Bayesian hierarchical model-fitting approach, we find that the interaction stems mainly from participants incorporating unfavorable information more strongly when in a gain position and less so when in a loss position. By providing increasing levels of additional information about the price movements, we are able to mitigate these effects in a second phase of the experiment. Speaking to the strength of this effect on belief formation, very clear information is needed to mitigate the observed adverse belief and investment patterns.
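
The kind of asymmetric belief updating described above can be sketched as a delta-rule learning rate that depends on whether the news is favorable and whether the trader currently holds a gain or a loss (the function, parameter names, and values below are illustrative assumptions, not the fitted model):

# Context-sensitive belief updating about an asset's drift (illustrative sketch).
def update_belief(belief, evidence, in_gain_position,
                  lr_fav=0.10, lr_unfav_gain=0.20, lr_unfav_loss=0.05):
    # Delta-rule update whose learning rate depends on context.
    # evidence > belief counts as favorable news; the defaults encode the reported
    # pattern that unfavorable news is weighted more in a gain than in a loss position.
    error = evidence - belief
    if error > 0:
        lr = lr_fav
    else:
        lr = lr_unfav_gain if in_gain_position else lr_unfav_loss
    return belief + lr * error

belief = 0.0
for price_change, gain in [(+1.0, True), (-1.0, True), (-1.0, False), (+1.0, False)]:
    belief = update_belief(belief, price_change, gain)
    print(f"evidence={price_change:+.0f}, gain_position={gain}, belief={belief:+.3f}")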

Songbirds can learn flexible contextual control over syllable sequencing

The flexible control of sequential behavior is a fundamental aspect of speech, enabling endless reordering of a limited set of learned vocal elements (syllables or words). Songbirds are phylogenetically distant from humans but share both the capacity for vocal learning and neural circuitry for vocal control that includes direct pallial-brainstem projections. Based on these similarities, we hypothesized that songbirds might likewise be able to learn flexible, moment-by-moment control over vocalizations. Here, we demonstrate that Bengalese finches (Lonchura striata domestica), which sing variable syllable sequences, can learn to rapidly modify the probability of specific sequences (e.g. ‘ab-c’ versus ‘ab-d’) in response to arbitrary visual cues. Moreover, once learned, this modulation of sequencing occurs immediately following changes in contextual cues and persists without external reinforcement. Our findings reveal a capacity in songbirds for learned contextual control over syllable sequencing that parallels human cognitive control over syllable sequencing in speech.

Action Speaks Louder than Words and Gaze: The Relative Importance of Modalities in Deictic Reference

Deictic communication is fundamentally multimodal. Spatial demonstratives frequently co-occur with eye gaze and physical pointing to draw the attention of an addressee to an object location (e.g. this cup; that chair). Yet the relative importance of language, gesture, and eye gaze in deictic reference has not thus far been elucidated. In three online experiments, we manipulated the congruency of pointing, gazing and verbal cues to establish their relative importance for demonstrative choice (Experiment 1) and choice of referent (Experiments 2 and 3). Participants saw an image of a person sitting behind a table, interacting with items placed proximally or distally relative to the pictured person, with manipulation of pointing, eye gaze and language (and congruence/incongruence of these modalities). While all three modalities affected demonstrative choice (Experiment 1) and referent choice (Experiments 2 and 3), results show that pointing is the dominant deictic cue for demonstrative and referent choice.

Perceptual similarity and learning from sequential statistics.

Most models of statistical learning do not consider the perceptual properties of the units that make up a sequence. Here, we manipulate the similarity of units in a modified version of a classic statistical learning paradigm (Aslin et al., 1998). After a 3-minute familiarization stream, we asked participants to rate their familiarity with words and part-words, with the addition of non-words (trisyllabic items not presented during familiarization). We developed a simple recurrent neural network that used distributed representations for the syllables and produced results similar to the human data. We then explored how perceptual similarity impacted learning of different familiarization streams. Specifically, greater similarity among units was predicted to lead to poorer discrimination between words and part-words. Based on the model's results, we created a new familiarization sequence of trisyllabic words and tested for impacts of perceptual similarity on human performance.
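
A minimal sketch of this kind of recurrent network (a generic Elman-style implementation with a made-up syllable inventory and stream; not the authors' network, representations, or stimuli): the network is trained to predict the next syllable, and the probability it assigns to upcoming syllables at test serves as a familiarity proxy.

# Minimal Elman-style next-syllable predictor (illustrative sketch; simulated stream).
import random
import torch
import torch.nn as nn

random.seed(0); torch.manual_seed(0)
syllables = ["pa", "bi", "ku", "go", "la", "tu"]               # hypothetical inventory
words = [["pa", "bi", "ku"], ["go", "la", "tu"]]               # hypothetical trisyllabic "words"
stream = [s for _ in range(80) for s in random.choice(words)]  # familiarization stream
idx = {s: i for i, s in enumerate(syllables)}
data = torch.tensor([idx[s] for s in stream])

class SRN(nn.Module):
    def __init__(self, n_syll, d=16):
        super().__init__()
        self.emb = nn.Embedding(n_syll, d)                     # distributed syllable representations
        self.rnn = nn.RNN(d, d, batch_first=True)
        self.out = nn.Linear(d, n_syll)
    def forward(self, x):
        h, _ = self.rnn(self.emb(x))
        return self.out(h)

model = SRN(len(syllables))
opt = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()
inputs, targets = data[:-1].unsqueeze(0), data[1:].unsqueeze(0)
for _ in range(150):                                           # train to predict the next syllable
    opt.zero_grad()
    loss = loss_fn(model(inputs).reshape(-1, len(syllables)), targets.reshape(-1))
    loss.backward()
    opt.step()

def familiarity(item):
    # Mean probability the network assigns to each upcoming syllable within the item.
    x = torch.tensor([[idx[s] for s in item]])
    with torch.no_grad():
        probs = model(x).softmax(-1)
    return probs[0, :-1].gather(1, x[0, 1:].unsqueeze(1)).mean().item()

print("word      :", round(familiarity(["pa", "bi", "ku"]), 2))
print("part-word :", round(familiarity(["ku", "go", "la"]), 2))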

Who is motivating? Students evaluate encouragement based on speaker’s knowledge

Students often receive encouragement ("You can do it!") to take on and stick with challenges, yet they don’t always listen. Past work suggests that domain knowledge underlies speaker credibility. We propose that students find encouragement more or less motivating based not only on the speaker’s domain knowledge (e.g., math), but also on their knowledge of the student’s abilities (e.g., math abilities). Adolescents (n=369; ages 11-19) said they would be more likely to seek out and listen to encouragement from a hypothetical person who has both knowledge of the domain and their abilities compared to someone who has knowledge of just one or neither. When asked to reason about real people in their lives (parents, teachers, peers), knowledge of domain and ability also significantly predicted whose encouragement participants would seek out and listen to. Ongoing work is experimentally testing this hypothesis with behavioral measures of persistence and challenge-seeking.

Do you hear how BIG it is? Iconic Prosody in Child Directed Language Supports Language Acquisition

Child-directed language has been characterized by exaggerated prosody, which can serve multiple functions, including highlighting properties of meaning via iconicity. Iconic prosody may help language acquisition by bringing properties of displaced or unknown referents to the language learner’s “mind’s eye” or by facilitating the acquisition of abstract features such as “direction” or “speed”. We investigate iconic prosody in semi-naturalistic caregiver-child interactions. 50 caregivers were asked to talk to their child (2-4 years) about a set of toys that were either known or unknown to the child, and either present or absent from the interaction. In a first analysis, we examined instances of iconic prosody as subjectively coded. In a second analysis, we looked at acoustic modulations for a set of seed words. In both analyses, we found that caregivers used iconic prosody more when talking about unknown or displaced objects, pointing to a neglected role for prosody in word learning.

Discovering computational principles in models and brains

A growing toolbox is emerging for linking neuroimaging data to computations supporting human cognition. In representational similarity analysis (RSA), for example, activity patterns over voxels are compared in response to similar vs. dissimilar stimuli. Resulting similarity matrices are compared to similarity matrices based on theoretical principles or computational models. Similarly, complex EEG or MEG time series can be compared to information-theoretic variables, such as stimulus entropy or surprisal, allowing inferences about the sensitivity of neural responses to aspects of signal information over time. A challenge is determining to what degree such analyses can identify hallmarks of specific computations rather than computationally non-specific resonance with inputs. Here, we apply RSA and information-theoretic analyses to one well-characterized model: TRACE. We consider whether these analyses identify known principles underlying TRACE, and whether TRACE exhibits sensitivity to information-theoretic variables similar to that observed in human brains.
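
As an illustration of the information-theoretic quantities mentioned above (generic definitions with made-up candidate words and probabilities, not tied to the TRACE simulations themselves): surprisal is the negative log probability of the incoming unit, and entropy is the expected surprisal over the current predictive distribution.

# Surprisal and entropy of a predictive distribution over candidate words (toy numbers).
import numpy as np

candidates = ["cat", "cap", "can", "cut"]
p = np.array([0.55, 0.20, 0.15, 0.10])              # hypothetical model probabilities mid-word

entropy = -np.sum(p * np.log2(p))                   # uncertainty before the next segment arrives
surprisal = -np.log2(p[candidates.index("cap")])    # if the input turns out to be "cap"
print(f"entropy = {entropy:.2f} bits, surprisal of 'cap' = {surprisal:.2f} bits")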

Does encouraging gesture use help us connect remote associations?: The role of mental imagery

Previous research has shown that gestures help people think and solve problems more successfully. Recent studies have also found that encouraging gesture use helps generate new creative ideas and enhances verbal improvisation. Moreover, fluid arm movements have been associated with an improved ability to connect remote associates. Research is still limited, with an emphasis on divergent thinking, and the mechanisms behind the gesture-creativity interplay are not clear. This study examined whether encouraging gesture use could enhance the ability of young adults (N = 90) to connect both verbal and visual remote associates, and hypothesised that mental imagery skills could facilitate that relationship. Our preliminary results showed that encouraging gestures did not improve remote association scores; however, mental imagery ability was a significant predictor of verbal remote associates’ scores when gestures were encouraged. We suggest that individuals who have higher mental imagery skills might benefit more from gestures for visualising verbal information.

Capturing uncertainty in relational learning: A Bayesian model of discrimination-based transitive inference

Research on discrimination-based transitive inference (TI) has demonstrated a widespread capacity for relational inference in people and non-human animals. In this domain individuals learn to choose the reinforced item in a set of interrelated discriminations (e.g., A–/B+; B–/C+) and are tested for transitive inference on novel, non-adjacent pairs (e.g., A vs. C). Existing models suggest that transitive responding can be supported by associative learning mechanisms, but they fail to account for evidence that knowledge about the hierarchical nature of the task, whether instructed or discovered during training, has a dramatic influence on learning. I present a model which formalizes TI as the estimation of items’ positions along a latent dimension and tracks learners’ uncertainty about the mapping between item position and feedback. The model naturally accounts for standard effects in TI, while going beyond associative models in explaining the effects of knowledge and rich feedback on relational inference.
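
The core idea of estimating latent item positions together with uncertainty can be conveyed with a small simulation (my own Elo/TrueSkill-flavoured approximation for illustration; not the presented model): each reinforced discrimination nudges position estimates apart, with more uncertain items moving more, and transitive test pairs are evaluated from the resulting positions.

# Sketch: latent-position estimates with uncertainty, updated from A-/B+ style feedback.
import numpy as np

items = "ABCDE"
mu = {i: 0.0 for i in items}       # estimated position on the latent dimension
sigma = {i: 1.0 for i in items}    # uncertainty about that position

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

def update(loser, winner, lr=0.5, shrink=0.95):
    p_win = logistic(mu[winner] - mu[loser])       # predicted probability of the observed feedback
    surprise = 1.0 - p_win                         # bigger when the outcome was unexpected
    step = lr * surprise
    mu[winner] += step * sigma[winner]             # uncertain items move more
    mu[loser]  -= step * sigma[loser]
    sigma[winner] *= shrink                        # confidence grows with feedback
    sigma[loser]  *= shrink

training_pairs = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "E")]   # A-/B+, B-/C+, C-/D+, D-/E+
for _ in range(30):
    for loser, winner in training_pairs:
        update(loser, winner)

# Transitive test on the novel, non-adjacent pair B vs. D.
p_choose_D = logistic(mu["D"] - mu["B"])
print({i: round(mu[i], 2) for i in items}, " P(choose D over B) =", round(p_choose_D, 2))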

I see where you are going: Perception of persuasion goals in moral narratives influences character impressions

Impressions of others’ moral character are key to our social lives, but we rarely directly witness immoral acts and instead rely on the stories we hear. This creates significant opportunities for narrators to distort their stories in an effort to appear more moral. How does the detection of such persuasion goals affect readers' impressions of the authors' character? Participants read autobiographical stories written by other participants about morally questionable actions they had taken – written once with no goal, and then again with the goal of appearing morally good or bad. Readers were very accurate in detecting the authors’ goals, but the goals were nonetheless effective in modulating character impressions. Critically, this effect vanished when readers thought the author didn't care about communicating information accurately. This suggests that audiences sometimes fail to discount narrators' goals when evaluating their character, but only when those goals don't come at the cost of communicating information accurately.

The Role of Hand Gestures in Emotion Communication

Previous research suggests that humans use multimodal information during emotional communication, yet only a few studies have investigated how emotional information is presented in non-verbal channels. We examined the role of hand gestures in communicating emotions. In a between-subject design, we analyzed the size, frequency, and type of gestures under encouraged and spontaneous gesturing conditions. Participants (N=36) were asked to describe narratives with emotional content. Preliminary analyses with 18 individuals show that, interestingly, gesture frequencies and gesture use with specific emotional phrases did not differ between groups. There was also no significant difference between the two groups in the types of gestures they produced. However, gestures in the encouraged condition were significantly bigger in size than gestures in the spontaneous condition. These mixed preliminary results suggest that people’s hand gestures during emotional communication should be investigated along all their dimensions rather than by focusing on one aspect.

Implicit and Explicit Cognitive Processes Associated with COVID-19 Mask-Usage Decisions

Simple, non-pharmaceutical health interventions (e.g., masks) could prevent avoidable COVID-19 deaths (Reiner et al., 2020). Why do some still refuse to wear masks? People may find it easier to rely on implicit, prior knowledge to avoid needing to inhibit new, competing knowledge (e.g., heuristics; Tversky & Kahneman, 1974). Not only does knowledge impact decision making, but from a contextualized deficit framework, knowledge should interact with context to promote differential health decisions (Allum et al., 2008). A computer mouse-tracking paradigm evaluated how context (e.g., trust-in-experts; incidence rate) and germ knowledge impacted cognitive conflict driving mask-usage decisions. Results indicated that increased trust-in-experts, higher positive COVID-19 incidence rates, as well as accurate germ theories promoted mask-usage endorsement, which also reduced cognitive conflict between knowledge about new and old public-health mask guidelines. Contextual factors may help remediate the cognitive stress associated with inhibiting prior inaccuracies in favor of updated, scientific mask recommendations.

Investigating novice and expert programmers' problem solving via protocol analysis

The goal of the present study was to use content analysis to gain insight into the problem-solving processes of novice and expert programmers. While classic work on programmers identifies goals and plans as key constructs needed to code, there is relatively little work using protocol analysis. We recruited 7 expert and 12 novice programmers who completed up to 3 brief programming problems while providing a talk-aloud account of their problem-solving process. Based on analysis of the transcriptions of this talk-aloud data, we identified the goals and steps used, as well as the broad differences between experts and novices in their problem-solving processes. These differences were formalized into Python ACT-R models, and model output was compared to programs written by human participants.

A Neurocomputational Model of Prospective and Retrospective Timing

Keeping track of time is essential for everyday behavior. Theoretical models have proposed a wide variety of neural processes that could tell time, but it is unclear which ones the brain actually uses. Low-level neural models are specific, but rarely explicate how cognitive processes, such as attention and memory, modulate prospective and retrospective timing. Here we develop a neurocomputational model of prospective and retrospective timing, using a spiking recurrent neural network. The model captures behavior of individual spiking neurons and population dynamics when producing and perceiving time intervals, thus bridging low- and high-level phenomena. When interrupting events are introduced, the model delays responding in a way similar to pigeons and rats. Crucially, the model also explains why attending to incoming stimuli decreases prospective estimates and increases retrospective estimates of time. In sum, our model offers a neurocomputational account of prospective and retrospective timing, from low-level neural dynamics to high-level cognition.

Vienna Talking Faces: A multimodal database of synchronized videos (ViTaFa)

Attractiveness research is typically conducted using static photographs of human faces, often cropped or otherwise edited. We present a database of videos of talking faces with synchronized audio, which will facilitate more ecologically valid, multimodal research into face and voice attractiveness. Our database contains videos, synchronized voice recordings, and photographs of 20 male and 20 female German speakers under different emotional conditions (neutral, sad, happy, angry, flirty). Recordings were made simultaneously from three different angles (frontal, profile, 3/4 view) in front of a green screen. We report the results of a comprehensive validation of this database, making it usable for multisensory empirical aesthetics research as well as for face and emotion research. An online study was conducted to collect ratings on several dimensions, including general attractiveness, allowing us to uncover the effects of predictors of facial attractiveness in a natural setting.

Stone tools and trained brains: Comparing anatomical connectivity in expert toolmakers versus naïve subjects using Diffusion Tensor Imaging

Our study gathered Diffusion Tensor Imaging data to compare anatomical connectivity in expert stone toolmakers with naïve subjects with no prior toolmaking experience. The introduction of stone tool technology marked a shift in the evolution of human cognition, as early hominins gradually developed their capacity for complex hierarchical action planning and coordination. It is hypothesized that other abilities requiring these same capacities, like language, co-opted this neurocognitive scaffolding. Similarities in connectivity between experts and novices thus may be explained by the involvement of these networks in language or by a ubiquitous human competence in everyday tool use. Differences are likely explained by the increased complexity of the tool types experts make and use. These differences would support findings from a previous analysis within this study that found tool types of varying complexity (Oldowan, Acheulean, Levallois) differentially activated language networks for subjects with different levels of expertise.

The Perception of Reduced Reliability in an External Store Reduces Vulnerability to its Manipulation.

Offloading cognition to external stores is practiced ubiquitously in daily life (e.g., counting on fingers, writing lists), yet it is a relatively new area of investigation within cognitive science. Previous experiments have assessed its benefits and drawbacks, including participants' lowered memory for offloaded information that is no longer available (Gardony et al., 2013; Sparrow et al., 2011). In addition, when offloading, individuals appear susceptible to manipulations of their external store (Risko et al., 2019). We report a series of experiments investigating how the perceived reliability of an external store affects individuals' susceptibility to manipulation of that store. Consistent with previous research, results suggest that the majority of participants do not notice an item inserted into their external store. However, once cued to this event, individuals do become more likely to subsequently notice a manipulation of their external store. Implications of this research for our understanding of distributed memory systems will be discussed.

The Relationship Between Intelligence Mindset and Test Anxiety as Mediated by Effort Regulation

Test anxiety affects a sizable proportion of college students, especially in competitive STEM fields. Prior research has proposed interventions aimed at changing students’ implicit beliefs about intelligence (intelligence mindset) to help reduce students’ test anxiety, but results have been mixed and the mechanisms underlying this relationship are unclear. We propose that students’ beliefs about effort regulation may partially mediate the relationship between intelligence mindsets and test anxiety. We tested our model as an exploratory post-hoc analysis in a small sample of introductory physics students reporting psychological threat in a laboratory study. Effort regulation was measured as self-reported judgements of persistence in the face of difficulty and test anxiety was measured on a problem by problem basis during a laboratory physics assessment. Students’ intelligence mindset at pretest was a significant predictor of test anxiety at posttest, and this relationship was mediated by self-reported effort regulation. We discuss potential implications of these findings for mindset-based interventions aimed at reducing test anxiety.

Exploring the effects of disgust-related images on cognition in chimpanzees

Sensory stimuli can mediate cognition and behavior in different ways. In humans, chemosensory threat cues enhance performance and increase vigilance. In non-human primates, visual, olfactory and even tactile cues of biological contaminants elicit avoidance behaviors, i.e. manifestations of the adaptive system of disgust in humans. Nevertheless, how contaminant sensory cues may affect cognitive processes in non-human primates remains largely unexplored. We tested how visual cues suggesting pathogen presence may affect cognitive performance in chimpanzees. We used disgust-related images displayed at regular intervals during a number ordering task on touch screens. Images of carcasses provoked more errors following their display compared to control and other condition images (i.e. invertebrates and food). Our results support the hypothesis that visual disgust elicitors decrease performance by distracting individuals. Future studies should determine whether cognitive responses evolved differently across threat contexts (fear vs. disgust) given their differences in outcomes (death vs. disease).

A cognitive bias for Zipfian distributions? Uniform distributions become more skewed via cultural transmission

There is growing evidence that cognitive biases play a role in shaping language structure. We ask whether such biases contribute to the prevalence of Zipfian word-frequency distributions, one of the striking commonalities between languages. Recent work suggests Zipfian distributions confer a learnability advantage, facilitating word learning and segmentation (e.g. Lavi-Rotbain & Arnon, 2019). However, it remains unclear whether this reflects the impact of prior linguistic experience with such distributions or a cognitive preference for them. Here, we use an iterated learning paradigm to see if learners change a uniform word distribution into a skewed one via cultural transmission. We exposed the first learner to a story where six nonce words appeared equally often, and asked them to re-tell it. Their output served as input for the next learner. Over time, word distributions became more skewed (lower entropy). The findings provide novel evidence for a cognitive bias for skewed distributions in language.
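
For readers unfamiliar with the entropy measure of skew used here, the sketch below (with made-up word counts, not the study's data) shows how the Shannon entropy of a word-frequency distribution drops as the distribution becomes more skewed across transmission generations.

```python
import numpy as np

def entropy(counts):
    """Shannon entropy (bits) of a word-frequency distribution."""
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()
    return float(-(p * np.log2(p)).sum())

# Hypothetical counts of six nonce words across transmission generations
generations = {
    "input (uniform)": [10, 10, 10, 10, 10, 10],
    "generation 1":    [14, 12, 10, 9, 8, 7],
    "generation 5":    [25, 12, 9, 6, 5, 3],
}
for label, counts in generations.items():
    print(f"{label:16s} entropy = {entropy(counts):.2f} bits")
```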

Efficient adaptation to listener proficiency: The case of referring expressions

If speakers communicate efficiently, they should produce more linguistic material when comprehension difficulty increases. Comprehension difficulty can be impacted by the message itself (studied extensively) or by properties of the listener (studied less). Here we investigate the impact of listeners’ estimated proficiency on speakers’ productions, using referential choice as a case study. Compared to full noun phrases (fNPs, ‘the woman’), pronouns (‘she’) can convey similar content with less linguistic material: accordingly, speakers use more fNPs when the message is unpredictable (Tily & Piantadosi, 2009) and when listeners lack relevant information (Bard & Aylett, 2004). If referential choice is also impacted by listeners’ estimated proficiency, then speakers should use more fNPs when conversing with language learners. To test this, we compared participants' descriptions of the same picture book to learners and proficient speakers. Indeed, participants used more fNPs when their interlocutors were child- or adult-learners, illustrating efficient adaptation to listener type.

Knowledge transfer for tool use in the Goffin's cockatoo

Are Goffin's cockatoos capable of transferring a tool-use skill acquired in a certain situation to a new contextual setting with which they have no previous experience? In our study, the performance of thirteen adult subjects (divided into two groups: experimental or control) was compared in a two-stage experiment in which the learning component about the tool was manipulated by providing more diverse training for the experimental group in stage one. We hypothesized that this broader learning of the tool's affordances would enable the birds to transfer its use to solve a novel task. Our results show that the experimental group outperformed the control group in stage two (higher success rate and faster learning speed), which we interpret as a product of behavioural flexibility being enhanced during stage one: by operating the tool in more diverse contexts, these individuals might have acquired an advantageous experience, transferable to tackling an untrained problem more efficiently.

Creating a safe environment for text donation: towards a truly informed consent

Our social media activity is a valuable source of information about our preferences and our psychological and social processes. However, collecting such private data, including messages, for scientific research is at an early stage (Ueberwasser & Stark, 2017), which is natural given the privacy issues involved (Bemmann & Buschek, 2020). Our study is geared towards: i) making the process of sharing personal data more ethical, consensual, informed and comfortable; ii) identifying profiles of participants willing to share these data. 293 students of both technical and non-technical backgrounds completed an online questionnaire designed to identify the relationship between willingness to share the data and factors such as: 1) kinds of data; 2) method of data processing; 3) purpose of data gathering and use; 4) demographics of participants. Qualitative and quantitative analyses revealed categories of participants' concerns and preferences regarding the form of anonymization, conditional on the subjects' profile and their technical skills.

Why do people criticize others for suffering irrationally?

People sometimes criticize others for feeling sad, especially when they judge that person's sadness to be irrational. According to canonical models of blame, these reactions reflect attributions of control over the negative emotion. However, some scholars have recently suggested that blame reactions towards others' emotions ignore attributions of control and instead reflect negative reactions to the emotion itself. We present a study that adjudicates between these two competing views. Our results support control-based models of blame and criticism. We also identify two cues that people rely on when attributing control over emotions; namely, how well calibrated the emotion is to its eliciting circumstances and the sufferer's capacity to think rationally. In sum: People will criticize others for their emotional suffering when they judge both that the suffering is irrational, and that the sufferer is sufficiently rational to recognize this fact, and so can choose to stop feeling upset.

Impairment effect of infantile coloration on face discrimination in chimpanzees

Impaired face recognition for certain face categories is known in both humans and non-human primates. A previous study found that chimpanzees are worse at discriminating infant faces than adult faces. Chimpanzee infant faces differ from adult faces in color and shape. However, it remains unclear whether impaired face discrimination for infant faces is solely due to facial color or shape, or due to a combination of both. We investigated which facial features have greater effects on the difficulty of face identification. Adult chimpanzees were required to match faces in a matching-to-sample task with four types of face stimuli whose shape and color were manipulated independently. We found that chimpanzees' performance decreased when asked to match faces with infant coloration regardless of shape. This study is the first to demonstrate the impairment effect of infantile coloration on face recognition in non-human primates.

The association between preschool teacher-child relationship and children’s kindergarten outcomes

This study examined relations between teacher-child closeness and conflict in preschool and children’s behavior problems, social skills, and executive function (EF) in kindergarten, and explored if these relations are moderated by parental education. The study also sought to examine the relation between teacher-child closeness and conflict and the subscales of children’s behavior problems and social skills. The sample consisted of 126 preschool children (M = 56.70 months, SD = 3.89). Regression analyses revealed that teacher-child conflict predicted children’s social skills, specifically assertion, engagement, and cooperation. Parental education moderated the association between teacher-child conflict and EF, and also emerged as a marginally significant moderator of teacher-child closeness and behavior problems. The findings thereby indicated differential relations between teacher-child closeness and conflict and children’s outcomes. With regard to future research, it may be important to consider other aspects of the teacher-child relationship and classroom environment as well.

Do Ancient Philosophies Help Us Understand Modern Psychologies?

People from Western and East Asian cultures exhibit systematic differences in perception, attention, and cognition. Why do people from these cultures think differently? According to an influential proposal, psychological differences between Westerners and East Asians reflect, and may derive from, differences between ancient Greek and ancient Chinese philosophies. Here, we critique this proposal in two ways. First, we argue that the way ancient Greek philosophy is represented in the cultural psychology literature is skewed, highlighting differences between ancient Greek and Chinese beliefs, and obscuring similarities. Second, we argue that no causal mechanism has been offered by which ancient philosophies could give rise to modern cross-cultural differences.

Effects of Scaling Shoulder Width on Passability Affordance in Virtual Reality

Passability of an aperture, as a perceived affordance, is determined by the fit between apparent aspects of the environment (e.g., the perceived gap) and the perceived body scale. To understand the effects of body scaling on the affordance of passability, we conducted a virtual reality study in which, across blocked trials, we assigned participants (N = 20) different shoulder widths (narrow, normal, and wide). Participants were instructed to walk naturally through an aperture scaled to their virtual shoulders, without colliding, and to reach a target on a table. The results showed that participants got closer to the target when assigned narrow rather than normal shoulders. This was also reflected in their perceptual judgments: those with narrow virtual shoulders reported a smaller shoulder width, an effect not seen in the wide-shoulder condition. Together, these results demonstrate an asymmetry in the effects of body scaling.

Narratives of Consensus: a Decade of Reddit Discourse on Marijuana Legalization

Defying polarization, U.S. support for marijuana legalization grew from 36% to 67% during 2005-2019 (Pew Research Center, 2019). To identify discourse properties that accompanied this shift, we present the largest social media corpus on the topic (>3M documents) and the first to target Reddit (2008-2019). We used geolocation inference to distinguish U.S. discourse from international chatter and separate pre- and post-legalization content given marijuana’s uneven legal status across U.S. states. Research on online discourse has focused on generalized statements (e.g., moral arguments/sentiments). However, we combine topic modeling with hierarchical clustering to show that personal anecdotes and attitudes were a major driver of discourse especially during the legalization drive, at the expense of certain more generalizable themes. We classified comments by expressed attitude and presence of persuasion attempts using neural networks to show that the anecdotal discourse reflected not only social sharing, but also active argumentation.
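
A minimal sketch of the topic-modeling-plus-clustering step, assuming scikit-learn and a handful of invented comments in place of the actual Reddit corpus (the real pipeline, corpus, and hyperparameters are not specified in the abstract), might look like this:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.cluster import AgglomerativeClustering

# Invented stand-ins for Reddit comments on legalization
docs = [
    "my doctor suggested it for chronic pain and it really helped",
    "tax revenue from legal sales could fund schools in my state",
    "a friend was arrested years ago and it changed his life",
    "prohibition wastes police resources and taxpayer money",
    "personal stories from patients moved me more than statistics",
    "regulation keeps the product safer than the street market",
]

# 1) Fit a topic model over the corpus
vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=4, random_state=0)
doc_topics = lda.fit_transform(X)     # document-by-topic proportions

# 2) Hierarchically cluster topics by their word distributions
topic_word = lda.components_ / lda.components_.sum(axis=1, keepdims=True)
labels = AgglomerativeClustering(n_clusters=2).fit_predict(topic_word)
print("Cluster label per topic:", labels)
```

Grouping topics hierarchically in this way is what allows anecdotal themes to be separated from more generalized argumentative ones at the corpus level.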

Exploring learning trajectories with dynamic infinite hidden Markov models

Learning the contingencies of a complex experiment is hard, and animals likely revise their strategies multiple times during the process. Individuals learn in an idiosyncratic manner and may even end up with different asymptotic strategies. Modeling such long-run acquisition requires a flexible and extensible structure which can capture radically new behaviours as well as slow changes in existing ones. To this end, we suggest a dynamic input-output infinite hidden Markov model whose latent states capture behaviours. We fit this model to data collected from mice who learnt a contrast detection task over tens of sessions and thousands of trials. Different stages of learning are quantified via the number and psychometric nature of prevalent behavioural states. Our model indicates that initial learning proceeds via drastic changes in behavior (i.e. new states), whereas later learning consists of adaptations to existing states, even if the task structure changes notably at this time.
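
The full dynamic infinite HMM is beyond a short sketch, but the underlying idea of reading latent behavioural states off trial-by-trial choices can be illustrated with a fixed two-state HMM and a forward (filtering) pass; all parameter values below are invented for illustration and are not the model described above.

```python
import numpy as np

# Minimal 2-state HMM over binary accuracy (1 = correct choice).
# States: 0 = disengaged/lapse, 1 = task-engaged.
A = np.array([[0.95, 0.05],     # state transition probabilities
              [0.02, 0.98]])
B = np.array([[0.5, 0.5],       # P(incorrect | state), P(correct | state)
              [0.2, 0.8]])
pi = np.array([0.9, 0.1])       # initial state distribution

def filter_states(obs):
    """Forward (filtering) pass: P(state_t | obs_1..t) for each trial."""
    alpha = pi * B[:, obs[0]]
    alpha /= alpha.sum()
    posteriors = [alpha]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        alpha /= alpha.sum()
        posteriors.append(alpha)
    return np.array(posteriors)

# A mouse that starts near chance and then performs well
choices = [0, 1, 0, 0, 1, 1, 1, 1, 1, 1]
post = filter_states(choices)
print("P(engaged) per trial:", np.round(post[:, 1], 2))
```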

The Relationship Between Mental Imagery Vividness and Blind Reaching Performance

Mental imagery is a core topic in cognitive science and central to most perception and action theories. Although mental imagery is often experimentally elicited, a growing body of literature highlights that people differ in their ability to consciously experience mental imagery with some reporting an inability to experience imagery. We examine the relationship between mental imagery vividness and performance on blind reach tasks within a VR environment. The findings indicated that accuracy did not significantly differ based on differences in imagery vividness during a baseline blind reach task. However, participants who reported experiencing more vivid mental imagery and received terminal visual feedback during a recalibration phase demonstrated greater shifts in movement strategy post-recalibration compared to those with less vivid imagery. The results indicate a degree of ‘perceptual learning’ following limited visual feedback that was predicted by vividness of imagery and feedback type. Implications for perception and action theory are discussed.

A Cognitive Bias for Cross-Category Word Order Harmony

Cross-linguistically, heads tend to be ordered consistently relative to dependents. This tendency is called Cross-Category Harmony. Alternative explanations for harmony include cognitive and non-cognitive processes (e.g., grammaticalization pathways), but evidence disentangling them is still lacking. We report two artificial language learning experiments testing harmony between verb phrases (VP) and adpositional phrases (PP) and between VPs and noun phrases consisting of adjectives and nouns (NP). These two cases are critically different: typological evidence for the former is strong but there is no typological evidence for the latter. Our results parallel the typology; we find a strong preference for harmonic orders between VP and PP regardless of whether the participants' native language has harmonic order (English speakers) or mixed orders (Chinese speakers), but no preference for harmonic order between VP and NP. This suggests that a cognitive bias for harmony may play a role in shaping typology.

Face, body and object representations in the human and dog brain

Neural representations for faces, bodies, and objects have been studied extensively in humans. However, much less is known about how our socio-cognitive niche shaped the evolution of these neural bases. Canine neuroscience allows us to close this gap by studying a longstanding, close companion of humans. Here, we study the neural underpinnings of face, body and object processing in pet dogs (Canis familiaris) and humans. Fifteen awake and unrestrained dogs and forty humans underwent MRI scanning and viewed faces, bodies, objects, and scrambled images. Preliminary results for the dogs indicate temporal regions selective for animate stimuli and a potentially distinct sub-region selective for bodies, and replicate previous findings of category-selective regions in humans. Investigating the multivariate patterns of activation indicates similar categorical object representations in both species. Our findings will provide insights into the potentially convergent evolution of a core cognitive skill in the dog and human brain.

Diachronic Entropy Rate in Language Evolution: A Case Study of 2500 Years of Historical Chinese

Information theory (Shannon, 1948) plays an important role in psycholinguistic and linguistic theories (Genzel & Charniak, 2002; Hale, 2003; Levy, 2008). Here, we examine how entropy rate, a measure of the information content encoded in each individual word, changes diachronically in Chinese. We conduct a computational study of the four main developmental stages of Chinese: Old Chinese, Middle Chinese, Early Modern Chinese, and Modern Chinese. We approximate the entropy rate of each century using a diachronic trigram language model with interpolated Kneser-Ney smoothing (Chen & Goodman, 1999), trained on multiple comprehensive data sets selected according to Chinese philology studies (Wang, 1980; Gao & Jing, 2005) and covering over 2,500 years of corpus data. Our modeling results show that entropy rate increases, on average, by 0.026 per century. Within each major stage, historical Chinese demonstrates a steady rise in entropy rate, suggesting a vocabulary increase. In transitional stages, around the 10th and 15th centuries, entropy rate fluctuates more, lending support to the hypothesis that grammar competition under language contact is one of the driving forces behind major changes in diachronic Chinese. Our study demonstrates the interaction between psycholinguistic pressures and the evolution of linguistic systems.
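
As a rough sketch of the entropy-rate estimation only, using a toy English stand-in rather than the historical Chinese corpora and NLTK's interpolated Kneser-Ney implementation rather than the authors' exact setup, the per-century quantity can be approximated as the average negative log2 probability per token on held-out text:

```python
from nltk.lm import KneserNeyInterpolated
from nltk.lm.preprocessing import padded_everygram_pipeline, pad_both_ends
from nltk.util import ngrams

ORDER = 3  # trigram model

# Hypothetical tokenised sentences standing in for one century's corpus slice
train_sents = [["the", "king", "ordered", "the", "army"],
               ["the", "army", "crossed", "the", "river"]]
test_sents = [["the", "king", "crossed", "the", "river"]]

# Fit an interpolated Kneser-Ney trigram model
train_grams, vocab = padded_everygram_pipeline(ORDER, train_sents)
lm = KneserNeyInterpolated(ORDER)
lm.fit(train_grams, vocab)

# Entropy rate: average negative log2 probability per token on held-out text
log_prob_sum, n_tokens = 0.0, 0
for sent in test_sents:
    padded = list(pad_both_ends(sent, n=ORDER))
    for gram in ngrams(padded, ORDER):
        *context, word = gram
        log_prob_sum += lm.logscore(word, tuple(context))  # log2 P(word | context)
        n_tokens += 1
print("Entropy rate (bits/token):", -log_prob_sum / n_tokens)
```

Repeating this per century and comparing the resulting values is, in spirit, what the diachronic trend of 0.026 per century summarizes.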

Virtual Poster Presenter: An Emotional Cognitive Architecture in Action

The emotional capabilities of interactive virtual agents grow every year, yet most such agents still lack emotional intelligence. Their ability to recognize and express emotions may be of little value if the agent cannot decide how to behave appropriately in a given social context. The central part of the problem, the logic of socially emotional behavior, remains an unsolved challenge. Here this challenge is addressed within a virtual reality paradigm of a poster presenter, designed on the basis of the emotional Biologically Inspired Cognitive Architecture (eBICA). The eBICA framework combines the formalisms of semantic maps, moral schemas, mental states, and narratives in order to achieve believable, socially acceptable interactive behavior of the presenter bot. The paradigm involves the establishment and maintenance of a stable socially emotional contact with mutual empathy and trust. Evaluations of the prototype by participants at two international virtual conferences speak in favor of the selected approach.

Unveiling unconscious biases and stereotypes in students: The necessity of self-reflection in Higher Education

Preparing the next generation for an ever-changing environment is of utmost importance in Higher Education. Though cultural diversity is highly prevalent in modern societies, (implicit) biases, stereotypes, and racism remain a persistent yet mostly overlooked part of everyday life. Many students are not aware of their biases. Hence, it is imperative for them to undergo a critical self-reflection process about their beliefs and (unconscious) biases. To induce this process, raise awareness, and confront them with potential (unconscious) biases and stereotypes, 404 university students in teacher training and vocational education completed an Implicit Association Test (IAT) on skin color before watching a lecture discussing biases and stereotypes. The preliminary results show a variety of reactions after taking the test, ranging from denial and anger to discomfort and surprise. We will discuss possibilities to support students in leveraging the resulting cognitive dissonance as an opportunity to begin their individual self-reflection process.

A Bilingual Inhibitory Control Advantage in Mandarin-English Speaking High School Students in China: An Internet-Based Study

The question of whether bilingual language experience confers a cognitive advantage remains open. Controversy arises from assertions that putative advantages can instead be explained by differences in culture, socioeconomic class, or immigration status, as well as the classification of bilingual experience as a fixed variable rather than a random effect. The present study addresses these issues by assessing the impact of variability in English (L2) language experience on executive function in a group of Mandarin-English speakers (n = 41) from Shenzhen. Participants reported on demographic details, language history, perceived stress, and performed a Simon task online. Data were analysed using Linear Mixed Effects (LME) models to test for individual differences on Simon task performance. Results showed higher levels of L2 proficiency were associated with reduced Simon effects, suggesting a cognitive advantage.

Intention beyond Desire: Humans Spontaneously Commit to Future Actions

It is an ancient insight that human actions are driven by desires. Yet this view misses one mental representation, intention, with which agents regulate conflicting desires by committing to an admissible plan. Here we demonstrate four behavioral signatures of intention observed only in humans: disruption resistance, sticking with a plan despite setbacks; exclusiveness, avoiding paths with temptations of re-planning; deliberation, the gradual emergence of a commitment plan; and temporal leap, forming future plans before finishing the current one. Humans were compared against an optimal model formulated as a Markov Decision Process (MDP), which acts only to maximize expected future rewards; conflicting desires are defined as a reward function returning positive rewards for multiple states. The model showed none of the behavioral signatures of intention. These results reveal that humans regulate conflicting desires with intentions, which directly drive actions.
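
A minimal sketch of the reward-maximizing baseline (a small MDP solved by value iteration, with an invented 1-D world containing two equally rewarding goals; not the study's actual task) shows why such an agent has no notion of commitment: its action values toward the two goals stay exactly tied, so nothing prevents re-planning.

```python
import numpy as np

N_STATES, GAMMA = 7, 0.9
reward = np.zeros(N_STATES)
reward[0], reward[-1] = 1.0, 1.0            # two conflicting desires (goal states)

def value_iteration(n_iter=200):
    V = np.zeros(N_STATES)                  # goal states are terminal (V = 0)
    for _ in range(n_iter):
        for s in range(1, N_STATES - 1):
            V[s] = max(reward[s + a] + GAMMA * V[s + a] for a in (-1, +1))
    return V

V = value_iteration()
mid = N_STATES // 2
q_values = {a: round(reward[mid + a] + GAMMA * V[mid + a], 3) for a in (-1, +1)}
print("State values:", np.round(V, 3))
print("Action values from the middle state (left vs right):", q_values)
```

The tied action values in the middle state illustrate the contrast drawn above: a pure reward maximizer has no basis for the disruption resistance or exclusiveness that human commitment displays.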

The Statistical Properties of Color and Shape of Objects in Visual Categorization

We perceive and categorize the world constrained by the restrictions of our sensory and neural systems. Additionally, naming modulates how we categorize the things we see. Here we explore a third influence on visual categorization: how the perceived statistical regularities of shapes and colors modulate our experience in categorizing objects. We conducted our analyses in artificial systems, using computer vision to process pictures of real objects and artificial neural networks to categorize them. We found that the statistical regularities of different object sets produced either shape or color biases, depending on the nature of the set. Our statistics-based categorization approach reveals complementary mechanisms of categorization bias, relevant to a more comprehensive understanding of the linguistic shape bias and the color bias for food, and it lets us hypothesize why categorization may vary across populations.

Humans start out altercentric: the ontogenetic development of other-centered cognition

A traditional view of understanding others' mental states is that, early in ontogeny, infants start from a me-first position and learn to understand others through themselves. Here we propose an opposite developmental trajectory, in which infants might start out highly altercentric (Southgate, 2020) and, through development, increasingly rely on their own point of view. In a pre-registered cross-sectional study we present 1- to 6-year-old children with a task in which altercentric modulation has been found with 14-month-old infants (Kampis & Kovács, 2020). There, infants tended to search longer in a box when another person believed an object to be present than when she believed it was empty. We predicted this tendency to decrease with age. Preliminary results with n=191 children, included based on their search in baseline trials, show a decrease in altercentric modulation with age (r=-.174, p=.016), which will be discussed together with the relationship between altercentrism and the development of self-concept.

The ‘know-what’ and the ‘know-how’: importance of declarative and procedural memory systems in the L2 learning of morphology, syntax and semantics

Dividing attention by loading working memory is an effective method of probing the declarative and procedural underpinnings of linguistic knowledge. The current study explores WM/DA effects on four specific L2 domains: morphology (aspect versus case), syntax and semantics (collocations). We contrast performance of 68 learners of Polish as a foreign language (L1 Chinese) in a grammaticality judgement task in baseline and divided attention conditions. We found corroborating evidence for heightened dependency on declarative memory in early (A2) L2 learners across all linguistic categories and in both experimental conditions. While the lexical judgement of more advanced (B1) learners also proved vulnerable to secondary-task interference, introduction of additional cognitive load had a positive effect on participants’ grammatical judgement, yielding more accurate responses in the aspect category. These results point towards the need for pursuing a new line of inquiry into a potentially facilitative role of cognitive load on L2 learners’ grammatical processing.

Emotion Expression Captured by Utterances in Acting and Underpinning Internal Changes in Actors

The purpose of this study is to capture the structure of the interactive role-making process and to introduce an integrated perspective on actors' creativity in that process. In particular, this study focuses on the characteristics of utterances in an acting training that emphasizes paying attention to a partner and communicating. Statistical analyses are conducted on temporal changes, individual differences, and the granularity of utterances. We discuss the influence of attention toward partners on utterances, along with the possible internal changes in actors, reflecting a perspective that views creative action as it is characterized in the five A's framework. In addition, by applying the theory of constructed emotion, the present study attempts to provide a possible demonstration of how truthful emotion is born under imaginary settings in acting.

Computational Analysis of Social Cues in the Response to Joint Attention, The More the Better

The Response to Joint Attention (RJA) allows an individual to coordinate attention with a partner by following social cues such as gaze, head turns, or gestures. One developmental hypothesis suggests that this ability may initially rely on the perception of head motion and is then refined until the individual responds to the final gaze direction. Autism spectrum disorder (ASD) is associated with reduced joint attention behaviors; developing these behaviors is the goal of many therapeutic interventions, which could benefit from precise knowledge of how social cues trigger (or fail to trigger) the RJA. In this study, we test the developmental hypothesis. We developed a computational model simulating gaze-following tasks and explored the effect of differences in the amount of information and the temporal dynamics of social and non-social cues. Our model is contrasted with previous empirical studies, and it describes developmental trajectories of typical and atypical RJA.

Effect of morphine administration on human social motivation during stress

Physical social contact, such as grooming in primates or touch in humans, is fundamental to create and maintain social bonds. The Brain Opioid Theory of Social Attachment postulates that µ-opioids play a central role in social connection. Accordingly, pharmacological studies in isolated animals indicate that µ-opioid agonists reduce, and µ-opioid antagonists increase distress responses and motivation for social contact. Despite the abundance of animal studies, human evidence is still lacking. Here, we investigated the neurochemical basis of social motivation under stress in healthy human volunteers, following morphine (µ-opioid agonist) or placebo administration. By adopting a translational approach, real physical effort and facial hedonic reactions, together with self-reports of wanting and liking for social touch, were assessed. Preliminary results revealed increased adverse response to stress following morphine administration. In line with animal models and previous evidence in humans, this enhanced stress response led to increased motivation to obtain social touch.

Processing differences among irregular inflection classes

Theories of inflectional morphology differ in terms of how they treat semi-productive inflection types, that is, inflections that apply to multiple words but are not completely productive (e.g. grow-grew, know-knew, but not clow-clowed). How such semi-regular classes generalize may help distinguish theories, but little work has explored this question due to the difficulty of finding overgeneralized uses of these inflectional classes in naturalistic corpora. We address this issue by conducting a prompted lexical decision study on English past tenses. Participants were shown a regular or irregular verb in the infinitive form (to snow, to grow) and then presented with either a correct inflection (snowed, grew) or an overgeneralization (snew, growed) and asked to indicate whether it is the correct past tense form. We compare how various overgeneralized types (snow-snew, sneeze-snoze) differ in terms of reaction times and accuracy rates, finding differences between classes that may inform future theoretical comparisons.

Expertise modulates neural tracking of dance and sign language

Information in speech appears in bursts. To optimize speech perception, the brain aligns these bursts of information with slow rhythms in neural excitability (<10 Hz). How does the brain track the timing of external events? Here we tested whether neural stimulus-tracking depends on participants’ expertise, or on a language-specific mechanism. We recorded electroencephalography (EEG) in participants who were experts in either ballet or in sign language, while they watched videos of ballet or sign language. We show that stimulus-tracking depends on expertise: Dancers' brain activity more closely tracked videos of dance, whereas signers' brain activity more closely tracked videos of sign language. This effect of expertise emerged at frontal channels, but not at occipital channels. These results suggest that frontal cortex forms temporal predictions based on expert knowledge. The brain may use the same predictive mechanisms to optimize perception of temporally-predictable stimuli such as speech, music, sign language, and dance.

The differential effect of explicit and implicit instructions on response execution: a hypnosis study

Our decisions are informed by a variety of sources of information, including prior knowledge and contextual cues. Previous research has shown that explicit and implicit cues influence our decisions differently. Using a choice reaction time task, we aimed to explore this differential influence through a Bayesian drift diffusion model of decision making. We contrasted the effect of two types of instructions, one presented under hypnosis and followed by a posthypnotic amnesia suggestion and the other in a normal waking state, on drift rate (v), threshold (a), and non-decision time (t). We then compared this effect between participants who reported involuntariness or amnesia and those who did not. Results suggest that involuntary responses to implicit cues may require less evidence to be executed, as they were characterized by a lower threshold and a higher drift rate; this should be tested in confirmatory research.
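
For readers unfamiliar with the drift diffusion parameters mentioned above, the sketch below simulates a basic DDM; the two parameter settings are invented solely to illustrate how a lower threshold (a) combined with a higher drift rate (v) produces faster responses, and they are not the study's estimates.

```python
import numpy as np

def simulate_ddm(v, a, t0, n_trials=500, dt=0.001, noise=1.0, seed=1):
    """Simulate a drift diffusion model: evidence starts at a/2 and
    accumulates with drift v until it hits 0 or a; t0 is non-decision time."""
    rng = np.random.default_rng(seed)
    rts = np.empty(n_trials)
    resp = np.empty(n_trials, dtype=int)
    for i in range(n_trials):
        x, t = a / 2.0, 0.0
        while 0.0 < x < a:
            x += v * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        resp[i] = int(x >= a)               # 1 = upper (correct) boundary
        rts[i] = t + t0
    return resp, rts

# Invented parameter sets: "involuntary" = lower threshold, higher drift
for label, (v, a) in {"baseline": (1.0, 2.0), "involuntary": (1.5, 1.4)}.items():
    resp, rts = simulate_ddm(v, a, t0=0.3)
    print(f"{label:12s} mean RT = {rts.mean():.2f} s, accuracy = {resp.mean():.2f}")
```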

How Face Mask in COVID-19 Pandemic Disrupts Face Learning and Recognition in Adults with Autism Spectrum Disorder?

The use of face masks is one of the measures adopted by the general community to stop the transmission of disease during this ongoing COVID-19 pandemic. This wide use of face masks has indeed been shown to disrupt day-to-day face recognition. People with autism spectrum disorder (ASD) often have pre-existing impairments in face recognition and are expected to be more vulnerable to this disruption. When faces were initially learned unobstructed, we showed that people with higher autistic traits had lower face recognition performance for masked faces. In contrast, when masked faces were learned first, there was a stronger facilitation in subsequent recognition of masked faces in typically developing adults, but not in adults with ASD; this facilitation was predicted by a higher level of empathy. This paper also discusses how autistic traits and empathy influence the processing of faces with and without face masks.

Chimpanzees seek help, but not strategically

Seeking help can be a highly adaptive behavior. From an evolutionary perspective, strategic help-seeking can significantly improve an individual's fitness. However, it has not yet been investigated whether our closest living relatives –chimpanzees– seek help strategically. In Study 1, we investigated whether chimpanzees seek help selectively when they need it. Chimpanzees (N=19) sought help when it was necessary, but not if they could solve the problem themselves (Chi2(1)=30.821, p<.001). In Study 2, we investigated whether chimpanzees seek help strategically: do they consider the action-related costs of potential helpers as much as they consider their own? Chimpanzees (N=14) had a preference for a low-cost option when they had to obtain a reward on their own, but not when they sought help from others (Chi2(1)=7.989, p=.005). These findings imply that chimpanzees seek help when they need it, but they do not strategically consider others' costs when deciding whom to seek help from.

The importance of stability in children’s and adults’ block-building

Block-building is an early-developing spatial skill in which multiple spatial actions unfold over time. The processes underlying block-building have eluded our understanding, but existing data show that children’s and adults’ construction paths are highly systematic and selective. In this paper, we investigate whether manipulating the stability of block models to be built affects the character of construction paths. We asked participants to build models with either strong or weak support. We measured the step-by-step actions taken and used eye-tracking to assess differences between adults and children in how they collect and use visual information while building. We find that children and adults are highly selective in their construction paths, but models with weak support are more difficult to build, reflected in the specifics of their paths and the eye movements made during building. These results suggest that the stability of a structure drives information-gathering and building strategies in block construction.

Oh, the Irony!: Interpersonal Variation in the Processing of Foreign-Accented and Native Irony

Research shows that language processing mechanisms are permeable to the speaker’s accent, but virtually no data exists on non-literal language. Our online rating study investigated whether accent-based biases could hinder making inferences from ironic speech. Ninety-six participants listened to dialogues between native and foreign-accented English speakers and rated them on several scales. We found that the ironic intent in the accented speech was missed significantly more often than in the native speech for all irony types. Importantly, participants’ individual differences significantly affected the ratings and interacted with both accent and irony-type. More conservative participants were worse at detecting irony than their liberal peers but this effect was stronger for accented speech and a rarer irony type. In contrast, high empathy facilitated irony detection. The results demonstrate that interpersonal variation in personality and moral values affects language comprehension and needs to be accounted for in models of language processing.

Time course of EEG oscillations during creative problem solving

The Remote Associates Test is a creativity task that consists of finding a word that connects three unrelated words. We examined EEG power in healthy participants performing an adaptation of this task that allowed us to explore two mechanisms: remote semantic combination, by varying the associative strength between the cue words and the solution, and the insight phenomenon reported by the subjects. Time-frequency analyses revealed that associative remoteness was associated with early synchronization in the alpha and beta bands in laterotemporal and temporoparietal clusters, and with late frontal activity in the theta band just before participants found the solution. Insight was associated with synchronization in the alpha and gamma bands in inferotemporal clusters, and a frontal synchronization in the theta band immediately preceded the response. Our findings provide new insights into the dynamic mechanisms involved in this verbal creativity task.

Promoting Relational Responding: The Role of Prior Exposure to the Sample

The relational match-to-sample (RMTS) task is used to gauge sensitivity and preference for relational content in the presence of compelling object-based alternatives. On each trial, participants see a triad of 3-element shape sequences: target item (YXY), object match (YVO), and relational match (TWT). Traditionally, human adults show a moderate relational preference — supporting the structural alignment account of similarity-based processing. In the present study, we replicate preliminary findings showing that relational responding is facilitated relative to baseline if the target item is initially presented in isolation with a query to generate a short written description. However, we also observed that initial isolated presentation of the target with no accompanying task led to a similarly elevated rate of relational responding. These findings suggest that a minimal manipulation of presenting the target by itself prior to revealing the full RMTS triad promotes relational responding. We discuss implications for underlying mechanisms driving RMTS performance.

Attentional strategies during category learning: an eye-tracking study

In categorization tasks optimization of performance depends on attention to relevant stimulus features. However, studies show that adults and children seem to use different strategies when learning new categories. Deng & Sloutsky (2016) found that adults attend selectively, whereas children prefer to attend diffusely, even when the stimuli possess deterministic features (i.e. with 100% probability of belonging to a certain category). Most studies of attention in category learning rely on behavioral effects to infer attentional strategy, despite the availability of eye-tracking technology. In this study, we combine the use of category training and transfer paradigms (see Miles & Minda, 2009 and Deng & Sloutsky, 2016) and eye-tracking methods to investigate attentional strategies of children and adults during category learning. Preliminary results of our adult pilot (N = 14) confirm the prediction that adults optimize performance by attending increasingly to deterministic features of the stimulus once the categorical rule is found.

Language as a bootstrap for compositional visual reasoning

People think and learn abstractly and compositionally. These two key properties of human cognition are shared with natural language: we use a finite, composable vocabulary of nameable concepts to generate and understand a combinatorially large space of new sentences. In this paper, we present a domain of compositional reasoning tasks and an artificial language learning paradigm designed to probe the role language plays in bootstrapping learning. We discuss results from a language-guided program learning model suggesting that language can play an important role in bootstrapping learning by providing an important signal for search on individual problems, and a cue towards named, reusable abstractions across the domain as a whole. We evaluate adults on the same domain, comparing learning performance between those tasked with jointly learning language and solving reasoning tasks and those who approach the domain only as a collection of inductive reasoning problems. We find that adults provided with abstract language prompts are better equipped to generalize and compose concepts learned across a domain than adults solving the same problems using reasoning alone.

Ritualized commitment displays in humans and non-human primates

Collective ritual is virtually omnipresent across past and present human cultures, and analogous behaviors have been documented in non-human primates. However, surprisingly little is known about the evolution of ritual in the hominin lineage or about its underlying neurocognitive mechanisms. Here, we identify similarity, coalitional, and commitment signals as the essential features of collective ritual and argue that these signals evolved to facilitate mutualistic cooperation. We compare evidence for the communicative function of ritual between contemporary hunter-gatherers and non-human primates and discuss the underlying cognitive mechanisms facilitating these signals. Importantly, we will provide experimental evidence from our lab supporting the role of ritual as a platform for cooperative communication. Synthesizing this evidence, we will suggest that between 500 and 300 ka, collective ritual as a repetitively performed communicative act evolved from rudimentary signaling systems to help facilitate mutualistic cooperation and collective action.

Effects on word learning from spacing and category variability

Not all categories are made the same. Some categories have high within-category variability (e.g., “vehicles” can look very different) and some have low within-category variability (e.g., “apples” are pretty similar). Categories can also vary in their between-category variability, where some categories are very similar to each other (e.g., “apples” and “oranges”) and some are very different (e.g., “apples” and “vehicles”). Studies have found that categories with high within- and between-category variability are learned best in massed formats, and categories with low within- and between-category variability are learned best in interleaved formats. However, the unique contribution of each of these kinds of variability (i.e., within and between) has not been studied independently. These studies investigate the unique contributions of within- and between-category variability to 3-year-old children’s word learning in interleaved and massed presentations. The results inform existing understanding of interleaving in word learning and how category variability impacts learning.

‘Hello! *What your name?’ Children’s evaluations of ungrammatical speakers after live interaction

Children use accent to categorize speakers as in-group or out-group members (i.e., fellow speakers of language variety X or some other variety). This study tested how much 3- to 5-year-old children (N=159) consider both grammaticality and accent when categorizing speakers. After interacting with a native or non-native experimenter whose speech contained grammatical errors (or not), children completed a cultural categorization task, where they were asked where the experimenter likely grew up (a familiar- or unfamiliar-looking dwelling), as well as a resource allocation task, where children could share stickers with the experimenter. Results showed that children relied primarily on accent when deciding where the experimenter grew up, being more likely to associate native speakers with familiar dwellings than non-native speakers. However, children shared stickers with all speakers equally. The latter result contrasts with previous work using non-interactive paradigms, and may indicate that live interactions foster more favorable perceptions of non-native speakers.

Thinking about thinking through inverse reasoning

Human Theory of Mind enables us to attribute mental states like beliefs and desires based on how other people act. However, in many social interactions (particularly ones that lack observable action), people also directly think about other people's thinking. Here we present a computational framework, Bayesian inverse reasoning, for thinking about other people's thoughts. Our framework formalizes inferences about thinking by inferring a generative model of reasoning decisions and computational processes, structured around a principle of rational mental effort: the idea that people expect other agents to allocate thinking rationally. We show that this model quantitatively predicts human judgements in a task where participants must infer the mental causes behind an agent's pauses as they navigate and solve a maze. Our results contribute to our understanding of the richness of the human ability to think about other minds, and to even think about thinking itself.

Can early birds … fly? Awakening conventional metaphors further down the maze

Conventional metaphors such as early bird are interpreted rather fast and efficiently since they might be stored as lexicalized, non-compositional expressions. Pissani and de Almeida (2020, submitted) showed that, after reading metaphors (John is an early bird so he can…), participants took longer and were less accurate selecting the continuing word (attend) when it was paired with a literally related distractor (fly) rather than an unrelated one (cry) in a maze task. This suggests that the literal meaning of a conventional metaphor is available immediately afterward. But does the availability of the literal meaning remain further downstream? We examined whether this effect is replicated when there is a medium (6–8 words) and a large (11–13 words) distance between metaphor and word selection. Results indicated that the awakening effect persists but decreases significantly as word distance increases, suggesting that the literal meaning is still available downstream but fades rapidly.

Complexity of processing to activate magnitude representation for common fractions and precision of their magnitude representations in fraction magnitude comparison

This study investigated the complexity of the processing/strategy required to activate magnitude representations for common fractions (1/4, 1/3, 1/2, 2/3, and 3/4), and the precision of their magnitude representations on the mental number line. We compared magnitude comparison performance for pairs involving any one of the common fractions with performance for pairs of two uncommon fractions. We hypothesized that mean reaction time would be shorter for pairs involving a common fraction if a simple process/strategy activates its magnitude representation, and that the magnitude distance effect would be smaller for pairs involving common fractions if their magnitude representations are precise. A Bayesian mixed-effects regression analysis found a shorter mean reaction time for pairs involving common fractions, except for 1/4. Comparisons involving common fractions did not show a smaller magnitude distance effect. Thus, the processing required to activate magnitude representations is simpler for common fractions (except 1/4) than for uncommon fractions, although common fractions do not have more precise magnitude representations than uncommon fractions.

Perspective Taking in Virtual Reality: Addressing Gender Bias in STEM

Altering an individual’s identity during virtual tasks, such that it is incongruent with their own identity, can reduce unconscious biases and increase empathy and prosocial behavior. While such virtual embodiment has been investigated for race, age, and socioeconomic status, the question remains whether immersive perspective taking reduces implicit gender bias. Accordingly, we investigated the effect of gendered embodiment on gender bias related to the underrepresentation of women in science, technology, engineering, and math fields. Undergraduate students (N=65) undertook a simulated virtual interview task, which included the rating and selection of male and female candidates. Females were significantly more empathetic than males. Greater empathy predicted higher candidate ratings, except for competence of the female candidate. Preliminary findings establish foundations for future comparative studies and contribute to elucidating the nature of empathy during and after virtual perspective taking, such that it may have a greater impact on males for modulating gender bias.

Appreciating interleaved benefits: The effect of metacognitive activities on the selection of learning strategy

Empirical evidence demonstrates the superiority of interleaved over blocked learning schedules in inductive learning, yet many students believe the opposite. We investigated the effect of metacognitive activities on learners' awareness of the benefits of interleaving. Participants experienced two learning schedules for different painting styles and took a transfer test. We then manipulated the provision of post-test feedback for each schedule and of metacognitive prompts to reflect on the learning schedules. Finally, participants were asked to choose the more effective of the two schedules. Although there were no significant effects of either the post-test feedback or the metacognitive prompts, the performance difference between the two schedules significantly predicted participants' schedule selection: the greater the benefit participants experienced from the interleaved schedule, the more likely they were to choose it. Furthermore, participants' responses to the metacognitive prompts revealed that those who acknowledged spacing benefits from the interleaved schedule were more likely to choose the interleaved schedule.

Distributed Brain Connectivity Predicts Individual Differences in Forgetting: A Neurocomputational Analysis of resting-state fMRI

A complete and holistic understanding of human cognitive function using a cognitive model should incorporate both idiographic biological parameters derived from behavioral data and interactions between connected brain networks identified by neuroanatomical techniques. Here, we argue, first, that computational modeling can be used to extract biological parameters and, second, that such a biological parameter should be identifiable through the corresponding brain functional networks. We tested this empirically with the decay rate of long-term memory (rate of forgetting, α), measured in a Swahili-English vocabulary learning task, and resting-state fMRI (rs-fMRI) data, both collected from 33 participants. α was estimated using a rational model of episodic memory. The rs-fMRI data from each brain were processed into a 264 x 264 connectivity matrix, and further selected and grouped into memory-related network matrices using group lasso and cross-validation techniques. We were able to (1) predict the observed α using these connectivity matrices and (2) show that forgetting can be related to specific brain networks.
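
As a schematic of the prediction step only, with random stand-in data (including a planted signal so the output is non-trivial), a much smaller parcellation than the 264-node one, and plain lasso in place of the group lasso named above:

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import LassoCV
from sklearn.model_selection import KFold, cross_val_predict

rng = np.random.default_rng(0)
n_sub, n_roi = 33, 50                     # 50 regions here; the study used 264

# Random stand-ins for per-subject connectivity matrices and rates of forgetting
conn = rng.normal(size=(n_sub, n_roi, n_roi))
conn = (conn + conn.transpose(0, 2, 1)) / 2                          # symmetrise
alpha_obs = 0.3 + 0.1 * conn[:, 0, 1] + rng.normal(0, 0.02, n_sub)   # planted signal

# Vectorise the upper triangle of each matrix into an edge-wise feature vector
iu = np.triu_indices(n_roi, k=1)
X = conn[:, iu[0], iu[1]]

# Sparse regression with cross-validated predictions
model = LassoCV(cv=5, random_state=0)
pred = cross_val_predict(model, X, alpha_obs, cv=KFold(5, shuffle=True, random_state=0))
r, p = pearsonr(alpha_obs, pred)
print(f"Predicted vs. observed rate of forgetting: r = {r:.2f}, p = {p:.3f}")
```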

Competence Assessment by Stimulus Matching: An Application of GOMS to Assess Chunks in Memory

It has been shown that, in hand-written transcription tasks, temporal micro-behavioural chunk signals hold promise as measures of competence in various domains (e.g., Cheng, 2014). However, data capture under that approach requires graphics tablets, which are relatively uncommon. In this theoretical paper we propose and explore an alternative method, Competence Assessment by Stimulus Matching (CASM). This new method uses simple mouse-driven interfaces to produce temporal chunk signals as measures of learners’ ability. However, it is not obvious which features of CASM will produce effective competence measures, and the design space of CASM tasks is large. This paper therefore uses GOMS modelling to explore the design space and to find factors that maximize the discrimination of chunk-based measures of competence. The modelling results show that CASM has potential for using chunk signals to measure competence in the domain of English language.

Subitizing Abilities of Bilingual Subset-Knowers

Previous studies have found that bilingual children receive different Give-Number task knower-levels across their languages as subset-knowers. However, recent work reveals that the reliability of the Give-Number task is only moderate for subset-knower levels. This raises the possibility that the differences found in knower-levels across bilinguals’ languages are explained by a lack of reliability of the Give-Number task. To address this possibility, we presented bilingual children with the Give-Number task and a subitizing task, each performed in both of their languages. If differences in knower-levels reflect true differences in children’s understanding of small numbers rather than random noise, then differences in subitizing abilities should also be non-random. Data collection is still in progress, but preliminary planned analyses (N=13/64) revealed no differences in subitizing abilities across languages based on knower-level classification. We discuss the implications of these findings for knowledge transfer across bilinguals’ languages.

Brain Connectivity-Based Prediction of Semantic Network Properties Related to Creativity

The associative theory of creativity proposes that creative ability relies on the organization of semantic memory, yet the relationships between semantic memory structure and brain connectivity, in relation to creativity, remain poorly understood. Here, we explored the relationships between the network properties of individual semantic memory, patterns of functional brain connectivity, and real-life creativity. To this end, we acquired functional magnetic resonance imaging data while participants underwent a semantic relatedness judgment task. Participants’ relatedness ratings between word-pairs were used to estimate their individual semantic networks, whose network properties were significantly related to their real-life creativity. Using a connectome-based predictive modeling approach, we identified patterns of on-task functional connectivity that predicted creativity-related semantic memory network properties in novel individuals. Furthermore, the predicted semantic network properties partially mediated the relationship between functional connectivity and real-life creativity. These results provide new insights on how brain connectivity supports the associative mechanisms of creativity.

Sampling Associations with (Un)related Suggestions

The rationale behind brainstorming is that generating ideas in groups can direct one's thoughts into areas that might otherwise be difficult to reach or consider, resulting in better performance than individuals working separately. However, while intuitively appealing, studies have repeatedly found no evidence for such synergy. Here, we reinvestigate group synergy in a task with simulated agents. We use state-of-the-art language embeddings to generate close or far-off semantic associates to people's ideas. This design allows us to adopt a sampling-for-inference perspective, in which individuals produce sample ideas from a generative process (Sanborn & Chater, 2016). We hypothesize that suggestions can allow participants to draw samples from previously unconsidered concepts and, additionally, that knowledge about the diversity of solutions can spur broader idea generation. Our work helps inform broader questions about when and why groups can be expected to outperform individuals.

Belief Change Triggers Behavioral Change

Beliefs have long been posited to be a predictor of behavior. However, empirical investigations into the relationship between beliefs (e.g., “vaccines cause autism”) and behaviors (e.g., vaccinating one’s child), mostly correlational in nature, have provided conflicting findings. To explore the causal impact of beliefs on behaviors, participants first rated the accuracy of a set of statements (health-related in Study 1, politically-charged in Studies 2 and 3) and chose corresponding campaigns to which to donate available funds. They were then provided with relevant evidence in favor of the correct statements and against the incorrect statements. Finally, participants rated the accuracy of the initial set of statements again and were given a chance to change their donation choices. The results of all three studies show that belief change predicts behavioral change, a finding of particular relevance for interventions aimed at promoting constructive behaviors such as recycling, donating to charity, or employing preventative health measures.

Computational Modelling of the Cross-Cultural Differences in Face Perception

The other-race effect refers to the difficulty of discriminating between faces from ethnic and racial groups other than one’s own. We challenged the hypothesis that same-race faces are holistically encoded while other-race faces are analytically encoded. We proposed that the analytic and holistic hypotheses could be discriminated based on their information-processing properties: (1) processing order of facial features, (2) stopping rule, and (3) process dependency. We compared Eastern and Western participants using psychophysically adjusted configural facial features in a face categorization task. The two cultures showed markedly different processing strategies: the Easterners demonstrated a higher level of holistic processing than the Westerners. The computational modeling results show a weak other-race effect for the Westerners and a reversed effect for the Easterners. Overall, all subjects demonstrated parallel processing of facial features. In addition, some showed across-feature process dependency, which supports a strong form of the facial holistic hypothesis.

How do chimpanzees explore their environment prior to a risky decision?

Seeking information is a ubiquitous requirement of life. Here, we studied how chimpanzees, one of humans’ closest living relatives, explore initially unknown payoff distributions before making a final exploitative draw (see Hertwig et al., 2004). More specifically, across two conditions (stable and changing), chimpanzees (N=15) could explore a risky (outcome variance) and a safe (no variance) assortment prior to making a decision. In the stable condition, the safe and risky assortments remained on the same side and the food in the same location across trials. In the changing condition, the side of the assortments, as well as the location of the food within the assortments, changed. We investigated (1) whether chimpanzees explore changing environments more than stable ones and (2) which strategies they use to explore their environments. We will discuss our findings in light of the evolutionary roots of human exploration and decision-making strategies.

How do the Concepts of Native Language Influence Second Language Learning? : Evidence from the Reconstruction of Word Semantic Domain

The meaning of a word is acquired using other words. The current study explores how Japanese native speakers and adult second-language (L2) learners of Japanese apply the meanings of verbs that belong to the same semantic domain, focusing on the semantic domains of the Chinese verbs “kai (to open),” “guan (to close),” “kan (to see/to look at/to watch),” and “chuan (to wear).” A comparison of the results from native speakers and L2 learners revealed a marked dissimilarity between the patterns of their semantic domains. Native speakers were able to use all of the words correctly, whereas L2 learners made errors that were heavily influenced by the word concepts of their native language. Our findings suggest two main points: first, adult L2 learners may have their own semantic domains, shaped by their native language; second, it is important to systematically investigate words that belong to the same domain as a whole.

Pragmatics of Metaphor Revisited: Formalizing the Role of Typicality and Alternative Utterances in Metaphor Understanding

Experimental pragmatics tells us that a metaphor conveys salient features of a vehicle and that highly typical features tend to be salient. But can highly atypical features also be salient? When asking if John is loyal and hearing “John is a fox”, will the hearer conclude that John is disloyal because loyalty is saliently atypical for a fox? This prediction follows from our RSA-based model of metaphor understanding which relies on gradient salience. Our behavioral experiments corroborate the model's predictions, providing evidence that high and low typicality are salient and result in high interpretation confidence and agreement, while average typicality is not salient and makes a metaphor confusing. Our model implements the idea that other features of a vehicle, along with possible alternative vehicles, influence metaphor interpretation. It produces a significantly better fit compared to an existing RSA model of metaphor understanding, supporting our predictions about the factors at play.
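
For readers unfamiliar with the Rational Speech Act (RSA) framework the abstract builds on, the toy sketch below shows the generic literal-listener / speaker / pragmatic-listener recursion over binary features. The vehicles, features, typicality values, and rationality parameter are invented for illustration and do not reproduce the authors' model or stimuli.

```python
# Toy Rational Speech Act (RSA) recursion over binary features, to show the
# general machinery behind gradient-salience accounts of metaphor. Vehicles,
# features, typicality values, and the rationality parameter are invented
# for illustration; this is not the authors' model or stimuli.
import numpy as np

vehicles = ["fox", "person"]                       # possible utterances
features = ["cunning", "loyal"]                    # queried feature dimensions
# P(feature is true of a referent | the referent is of this kind)
typicality = np.array([
    [0.9, 0.1],   # fox: typically cunning, atypically loyal
    [0.5, 0.5],   # person: uninformative baseline
])
states = [(c, l) for c in (0, 1) for l in (0, 1)]  # (cunning, loyal) truth values

def literal_listener(utterance):
    """Base level: P(each feature | utterance taken literally)."""
    return typicality[vehicles.index(utterance)]

def speaker(feature_values, rationality=3.0):
    """P(utterance | intended feature values): softmax over informativity."""
    utilities = np.array([
        np.sum(np.log(np.where(feature_values, p, 1 - p) + 1e-9))
        for p in typicality
    ])
    w = np.exp(rationality * utilities)
    return w / w.sum()

def pragmatic_listener(utterance):
    """P(feature values | utterance) for a listener reasoning about the speaker."""
    prior = np.full(len(states), 1.0 / len(states))
    post = np.array([
        prior[i] * speaker(np.array(s))[vehicles.index(utterance)]
        for i, s in enumerate(states)
    ])
    return post / post.sum()

# Hearing "John is a fox" when loyalty is at issue shifts belief toward
# "cunning and disloyal", mirroring the salient-atypicality prediction.
for s, p in zip(states, pragmatic_listener("fox")):
    print(dict(zip(features, s)), round(float(p), 3))
```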

Deciding to be wrong: Optimism and pessimism in motivated information search

In social psychology, a common finding is that people prefer confirmation-biased information. Although this confirmatory information seeking is commonly treated as an error in judgment, we note that biased sources of information can sometimes be more useful than more accurate sources. Such confirmatory sources will only advise someone to deviate from the policy they think is most useful when the sources are sure that taking an alternative action is correct. For this reason, these sources can allow people to avoid particularly costly errors. Avoiding such costly errors can sometimes be worth the price of inaccurate beliefs, even though these beliefs lead to more errors in total. In two studies, we find initial support for this idea. Within a Partially Observable Markov Decision Process, we show that participants prefer optimistically biased information when they would otherwise miss out on a particularly large reward, and pessimistically biased information when they would otherwise face particularly strong punishment.

Encouraging far-sightedness with automatically generated descriptions of optimal planning strategies: Potentials and Limitations

People often fall victim to decision-making biases, e.g. short-sightedness, that lead to unfavorable outcomes in their lives. It is possible to overcome these biases by teaching people better decision-making strategies, but finding effective interventions is an open problem, with a key challenge being the lack of transfer to the real world. Here, we tested a new approach to improving human decision-making that leverages Artificial Intelligence to discover procedural descriptions of effective planning strategies. Our benchmark problem concerned improving far-sightedness. We found that our intervention elicited transfer to a similar task in a different domain, but its effects on more naturalistic financial decisions were not statistically significant. Even though the tested intervention is on par with conventional approaches, which also struggle with far transfer, further improvements are required to help people make better decisions in real life. We conclude that future work should focus on training decision-making in more naturalistic scenarios.

The role of counterfactual reasoning in responsibility judgments

To hold someone responsible, we need to assess what causal role their action played in bringing about the outcome. Causality, in turn, can be understood in terms of difference-making -- C caused E if E would have been different had C been different. In previous work, the counterfactual of “C being different” is often construed as “C being absent”. Here, we explore how in social situations, this counterfactual can be alternatively construed by imagining how a different person would have acted in the same situation. We propose a computational model that formalizes this idea of counterfactual replacement. The model compares what actually happened with what would have happened had the person of interest been replaced by someone else, and then predicts responsibility to the extent that the replacement person would likely have changed the outcome. We test our model against people’s responsibility judgments in a variety of scenarios across several experiments.

The human visual system spontaneously computes approximate number

Is numerosity an abstraction that arises downstream of basic perceptual processing, or does the brain process numerosity like a primary perceptual feature such as color or motion? Here, we tested whether visual cortex computes number automatically, even when number is task-irrelevant and is being processed unconsciously. We recorded electroencephalography while subjects watched dotcloud stimuli that alternated in numerical content at 15 Hz, under one of three conditions: judging numerosity, judging convex hull, or making no judgments. Under all three conditions, we observed oscillatory activity at 15 Hz in early visual cortex. The strength of this signal depended on the numerical content of the stimuli in all three conditions and did not differ significantly across the conditions. Results show that number can be computed spontaneously by visual cortex, even when participants are attending to non-numerical information, consistent with the proposal that number is a primary perceptual attribute of visual stimuli.

Distributed semantics in a neural network model of human speech recognition

While there are interesting correspondences between form and meaning in many languages, psycholinguists conventionally consider them to be marginal, as they affect only a small subset of words. As such, a common simplification in computational models is to use empirically- or theoretically-motivated representations for form and random vectors for semantics. We recently introduced a novel model of human speech recognition, EARSHOT, which maps spectral slices (form) to pseudo-semantic patterns (sparse random vectors [SRVs]). Here, we replace SRVs with SkipGram vectors. Empirically-based semantics allow the model to learn more quickly and, surprisingly, exhibit more realistic form competition effects. These improved form competition effects do not depend on the particular form-to-meaning mapping in the training lexicon; rather, they arise as a result of the nontrivial output structure. These results suggest that while form-meaning mappings may be mainly arbitrary, realistic semantics afford important computational qualities that promote better fits to human behavior.
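
To make the contrast between the two kinds of semantic targets concrete, the sketch below builds both sparse random vectors and SkipGram embeddings (via gensim) for a toy vocabulary. The corpus, dimensionality, and sparsity level are placeholders rather than EARSHOT's actual training materials.

```python
# Sketch of the two semantic target representations contrasted above:
# sparse random vectors (SRVs) vs. SkipGram (word2vec) embeddings.
# The toy corpus, vocabulary, dimensionality, and sparsity are placeholders,
# not EARSHOT's actual training materials. Requires gensim >= 4.
import numpy as np
from gensim.models import Word2Vec

vocab = ["cat", "dog", "cab", "log"]
dim, n_active = 300, 10
rng = np.random.default_rng(1)

# (1) Pseudo-semantic targets: sparse random binary vectors.
srv = {}
for word in vocab:
    v = np.zeros(dim)
    v[rng.choice(dim, size=n_active, replace=False)] = 1.0
    srv[word] = v

# (2) Empirically grounded targets: SkipGram embeddings (sg=1).
toy_corpus = [["the", "cat", "chased", "the", "dog"],
              ["the", "dog", "slept", "on", "the", "log"],
              ["a", "cab", "passed", "the", "dog"]]
w2v = Word2Vec(toy_corpus, vector_size=dim, sg=1, min_count=1, window=2, epochs=50)
skipgram = {w: w2v.wv[w] for w in vocab if w in w2v.wv}

# Either dictionary could serve as output targets for a network mapping
# spectral slices (form) to semantic patterns (meaning).
print(srv["cat"][:10])
print(skipgram["cat"][:5])
```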

Folk theory of epidemics: insights from a 14-day diary study during COVID-19

To cope with the pandemic, we need first to predict it. How does the human brain model epidemic dynamics? In February and March 2020, we conducted an online diary study, where each participant completed 14 days of predictions about the ongoing COVID-19 epidemic in China. Over 400 Chinese adults participated in our study, spanning a total of 40 days. On the group level, we find participants’ predictions of the ending date of the epidemic are correlated with daily new cases. Modeling analysis shows that participants’ predictions agree with a Gaussian process model that generalizes observed case numbers to the future by similarity-based function learning, but deviate from an ideal epidemic model. On the individual level, we find individuals’ pandemic predictions are correlated with their psychological traits or states, such as future time perspective and negative emotions (anxiety, depression, and stress).
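
A minimal sketch of the kind of similarity-based extrapolation the abstract attributes to participants might look like the following: a Gaussian process fitted to observed daily case counts and queried about future days. The toy data, kernel, and parameter values are illustrative only, not the study's model.

```python
# Illustrative sketch of similarity-based generalization of observed case
# counts into the future with Gaussian process regression. The toy data,
# kernel, and parameters are placeholders, not the study's model.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

days = np.arange(1, 21).reshape(-1, 1)                    # 20 observed days
cases = (50 * np.exp(-0.5 * ((days.ravel() - 12) / 4) ** 2)
         + np.random.default_rng(2).normal(0, 2, 20))     # toy epidemic curve

kernel = 1.0 * RBF(length_scale=5.0) + WhiteKernel(noise_level=1.0)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(days, cases)

future = np.arange(21, 41).reshape(-1, 1)                 # extrapolate 20 days
mean, sd = gp.predict(future, return_std=True)
print("predicted new cases on day 30: %.1f +/- %.1f" % (mean[9], sd[9]))
```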

Vertical directionality ratings as lexical norms for English verbs

While lexical norm databases are being developed with a renewed emphasis on perceptual psycholinguistic features, verbs are often neglected. Research on nouns and adjectives utilizes vertical spatial localization ratings, providing an analogue for the inclusion of verbs via vertical directionality ratings. This study demonstrates the feasibility of collecting such ratings for 32 English verbs, as well as the possibility of assessing directionality ratings in other spatial dimensions. Further, ratings were analyzed using distributional semantic models. Results indicate that language statistics are strongly associated with human ratings, providing convergent validity for vertical directionality as a useful psycholinguistic measure. Additionally, a comparison of the predictive performance of LSA and Web 1T 5-gram models on human ratings revealed that the first-order 5-gram model accounted for significant unique variance in the norms, while LSA did not. It may be concluded that spatial verb associations are encoded linguistically by proximal, syntactic dependencies.

Network Dynamics of Scientific Knowledge Reveal a Single Conceptual Core that Declines Over Time

How does scientific knowledge grow? Some argue that science advances by developing a set of core concepts, while others argue complementary or competing core/periphery structures allow for scientific progress. We conducted a large-scale analysis of abstracts from the American Physical Society, spanning 58 years. For each year, we created a network of concepts, where single- and multi-word noun phrases were linked if they appeared in the same abstract. Our results suggest that a single core––rather than multiple cores––best explains the structure of our concept networks. We also see that only a small subset of core concepts (around 10%) remain within the core. Finally, we see a decline in the relative size of the core, driven by the growth of scientific production. The decline in core size is negatively associated with discoveries that push the scientific frontier, which suggests that strong core/periphery structures may be important for innovation.
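
The network-construction step can be illustrated roughly as follows: concepts extracted from abstracts are linked when they co-occur, and a densely interconnected subset is taken as a candidate core. The k-core decomposition below is only a stand-in for whatever core/periphery method the study actually employed, and the abstracts and phrases are toy examples.

```python
# Rough illustration of the network-construction step: noun phrases that
# co-occur in the same abstract are linked, and a densely interconnected
# subset is taken as a candidate core. k-core decomposition here is only a
# stand-in for the study's core/periphery method; abstracts are toy examples.
from itertools import combinations
import networkx as nx

# Each abstract is represented by its extracted noun phrases.
abstracts_one_year = [
    {"quantum entanglement", "bell inequality", "photon pair"},
    {"quantum entanglement", "photon pair", "optical fiber"},
    {"dark matter", "galaxy rotation curve"},
]

G = nx.Graph()
for phrases in abstracts_one_year:
    for a, b in combinations(sorted(phrases), 2):
        # Edge weights count how often two concepts co-occur.
        w = G.get_edge_data(a, b, default={"weight": 0})["weight"]
        G.add_edge(a, b, weight=w + 1)

core = nx.k_core(G, k=2)            # concepts embedded in dense neighborhoods
print("core size / total nodes:", core.number_of_nodes(), "/", G.number_of_nodes())
```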

Explaining the Gestalt principle of common fate as amortized inference

Humans perceive the world through a rich, object-centric lens. We are able to infer 3D geometry and features of objects from sparse and noisy data. Gestalt rules describe how perceptual stimuli tend to be grouped based on properties like proximity, closure, and continuity. However, it remains an open question how these mechanisms are implemented algorithmically in the brain, and how (or why) they functionally support 3D object perception. Here, we describe a computational model which accounts for the Gestalt principle of Common Fate - grouping stimuli by shared motion statistics. We argue that this mechanism can be explained as bottom-up neural amortized inference in a top-down generative model for object-based scenes. Our generative model places a low-dimensional prior on the motion and shape of objects, while our inference network learns to group feature clusters using inverse renderings of noisily textured objects moving through time, effectively enabling 3D shape perception.

Associative learning of new word forms in a first language and gustatory stimuli

This study investigated the effect of gustatory stimuli on the associative learning of new (meaningless) word forms in a first language. Japanese native speakers performed the following tasks: (1) subjective evaluations of gustatory stimuli; (2) learning tasks of associative pairs of a new word form and gustatory stimulus (G) or only new word forms (W); (3) recognition memory tasks associated with the G/W condition; (4) free recall task for the G/W conditions. The accuracy rates of W were highest, whereas there was no significant difference between free recall scores. Subjective evaluations of gustatory stimuli negatively correlated with the free recall performance of word forms associated with the gustatory stimuli, while accuracy rates of the recognition and free recall tasks of G were positively correlated. Accordingly, learning new word forms on their own is more effective than associative learning of new word forms and gustatory stimuli in one day of learning.

Sampling Heuristics for Active Function Learning

People are capable of learning diverse functional relationships from data; nevertheless, they are most accurate when learning linear relationships, and deviate further from estimating the true relationship when presented with non-linear functions. We investigate whether, when given the opportunity to learn actively, people choose samples in an efficient fashion, and whether better sampling policies improve their ability to learn linear and non-linear functions. We find that, across multiple different function families, people make informative sampling choices consistent with a simple, low-effort policy that minimizes uncertainty at extreme values without requiring adaptation to evidence. While participants were most accurate at learning linear functions, those who more closely adhered to the simple sampling strategy also made better predictions across all non-linear functions. We discuss how the use of this heuristic might reflect rational allocation of limited cognitive resources.

A preregistered study exploring language-specific distributional learning advantages in English-Mandarin bilingual adults

Bilinguals are reported to have language learning advantages. One possible pathway is a language-specific transfer effect, whereby sensitivity to structural regularities in known languages can be brought to novel languages that share features. To test for specific linguistic feature transfer, we designed a task for bilinguals with homogeneous language exposure (bilingual in same languages) and heterogeneous feature representation (differing proficiencies). As Hindi and Mandarin have retroflexion in phoneme contrasts, we conducted a pre-registered study with a statistical learning task of a Hindi dental-retroflex contrast on parallel English-Mandarin bilinguals with varied Mandarin proficiency. Unlike the pilot study (N = 15), the main study (N = 50) found no evidence for a learning effect, and language-experience did not explain learning variance. As these stimuli have shown learning effects in children, learning effects for this feature may be fragile in this adult population, and language-specific neural commitments may prevent learning of the contrast.

An information and coding theoretical approach to combinatorial communication

A fundamental characteristic of human language is its combinatorial nature, which facilitates the communication of infinite meanings (i.e., words) built from finite sounds (i.e., phonemes). Investigating the selective drivers of this combinatorial feature represents a major challenge in the field of language evolution and has prospered into a rich interdisciplinary field. Here, we discuss the emergence of combinatorial structures in (human and non-human) communication systems from an information and coding theoretical perspective. We describe how “noise” (i.e., factors constraining communication processes) can hamper and distort signal transmission and perception, and how such noise-induced impairments can be circumvented through a combinatorial coding scheme that adds redundancy to a signalling system, in turn increasing its robustness and enhancing signal detection and discrimination. In doing so we argue that basic combinatoriality has emerged due to universal constraints imposed on signalling systems, and that human language-like productive combinatoriality builds upon this phenomenon.

Serial reversal learning using a colour discrimination task in two Ara species

Enhanced behavioural flexibility has been linked to relatively large brain size and social complexity, and Ara parrots exhibit both properties. We tested parrots of A. ambiguus and A. glaucogularis in a serial reversal learning paradigm, in which all individuals completed an acquisition phase and 10 reversal learning phases. Behavioural flexibility (and thus learning performance) was measured as the number of errors per trial. We also conducted social observations of individuals’ social interactions. We found that both species made significantly fewer errors per trial during each reversal phase and over the course of the ten reversal phases. All individuals gradually developed a generalised learning strategy, i.e., ‘win-stay-lose-shift’, thus showing ‘learning to learn efficiently’. Our results also support the Social Intelligence Hypothesis, whereby individuals’ learning performance was positively related to social interaction. These results show that the two Ara species demonstrate behavioural flexibility, and that social complexity provides one explanation for varied individual learning performance.

Visual Processing of Biological Motion in the Periphery under Attentional Load

Biological motion is a crucial stimulus with social and survival value that can be processed incidentally. However, no study has examined the factors that would affect bottom-up processing of biological motion in depth. In this study, we investigated the effect of perceptual load and eccentricity on biological motion perception. Human subjects performed a letter search task at the center while biological motion in the form of point-light displays was displayed as a distractor in the periphery. We manipulated the perceptual load at the center as well as the eccentricity of the distractor stimuli in the periphery. Our results show that when the perceptual load is low, people are distracted more by biological motion at near eccentricities, whereas when the load is high, the position of the distractor does not have any effect. In sum, these results suggest that bottom-up perception of biological motion is influenced by perceptual load and eccentricity.

Confidence in control: Metacognitive computations for information search

Having low confidence in a decision can justify the costly search for extra information. Rich literatures have separately modelled the metacognitive monitoring processes involved in confidence formation and the control processes guiding search, but these two processes have yet to be treated in unison. Here, we model the two as inference and action in a unified partially-observable Markov decision problem where decision confidence is generated by more sophisticated postdecisional or second-order models. Our work highlights how different metacognitive monitoring architectures generate diverse relationships between object- and meta-level accuracy as well as normative information collection in the face of costs. In particular, we demonstrate that decreased metacognitive efficiency prescribes both increased and decreased search, depending on the underlying model of metacognitive confidence. More broadly, our work shows how it is crucial to model interactions between metacognitive monitoring and control, whether in information search or beyond.

Is it for all? Spatial abilities matter in processing gestures during the comprehension of spatial language

Observing gestures facilitates listeners’ comprehension, especially for visual-spatial information. However, people differ in how and to what extent they benefit from gestures depending on their visual-spatial abilities. This study examined whether and how spatial skills (i.e., mental rotation) relate to how much listeners benefit from observing gestures. We tested 51 Turkish-speaking adults’ comprehension of spatial relations when the critical spatial information was provided in three different conditions: (1) only in speech (e.g., saying “right” without making any gesture), (2) both in speech and in gesture (saying “right” and gesturing to the right), and (3) only in gesture (saying “here” and gesturing to the right). We found that mental rotation scores were associated with increased accuracy only for the speech + gesture (z = 2.37, p = .02) and gesture-only conditions (z = 2.13, p = .03). These results suggest that visual-spatial cognitive resources might be important for gesture processing.

Impact of Socio-Economic Status on Cognitive Processing

Previous studies have demonstrated a strong relationship between socio-economic status (SES) and cognitive abilities in young children; however, the effects of SES on the developed cognitive architecture remain less explored. The current study examined whether and how cognitive processing differs as a function of socio-economic status in adults. We measured the performance of individuals from lower and higher SES backgrounds (29.9 ± 8.2 years; 81 male, 19 female) on tests of working memory (Digit Span), visuo-spatial memory (visual retention and recognition), and executive functioning (Stroop test and Koh’s block test). Lower-SES adults had significantly lower test scores on all measures compared with higher-SES adults. These findings suggest that lower-SES individuals exhibit lower cognitive abilities compared with higher-SES individuals. The current findings add to the previous literature by demonstrating a significant association between SES and cognitive processing in adults.

Can Retrieval Practice of the Testing Effect Increase Self-efficacy in Tests and Reduce Test Anxiety in 10- to 11-Year-Olds?

Retrieval practice was used to teach the testing effect, with the aim of increasing feelings of self-efficacy in test taking and reducing test anxiety in 10- to 11-year-olds. The impact of this intervention was measured with a new Self-efficacy in Test Taking measure and an adapted thoughts subscale from the Children’s Test Anxiety Scale, in pre-test and post-test conditions and with a control group. The intervention was designed to target the self-knowledge beliefs ‘layer’ of the S-REF (Self-Referent Executive Function) model of test anxiety. It was delivered in primary classrooms over six weeks and aimed to teach children to believe in their test-taking abilities and that testing routes were ‘well-oiled’, in order to improve feelings of self-efficacy about taking high-stakes tests. There was a significant increase in self-efficacy in test taking for students who were high in test anxiety (η²p = 0.07) when measured with a subscale from the new Self-efficacy in Test Taking measure.

The influence of media exposure on children’s evaluations of non-local accents

This study investigated whether positive or negative media exposure to non-local accents influences children’s attitudes toward those accents. Following exposure to heroic or villainous cartoon characters with either a regional (British; Experiment 1, N=89) or non-native (Experiment 2, N=84) accent, children were tested on a friend preference task, during which they chose, between a locally-accented child and a British-accented (Experiment 1) or Korean-accented (Experiment 2) child, which one they wanted to be friends with. Consistent with previous literature, children selected native-accented speakers at above-chance rates, both when paired with British-accented (M=55%, β=0.25, p=0.03) and Korean-accented children (M=86%, β=2.63, p<.001). There was no evidence that exposure to heroic vs. villainous characters influenced children’s preferences for native-accented children in either experiment. Follow-up work is investigating whether protracted exposure to evil/heroic characters with non-local accents influences children’s evaluations of non-local varieties across a wider range of tasks.

Can algorithms learn from babies? Exploring how infant learning can inform and inspire unsupervised learning algorithms

Most of the recent success in machine learning has been achieved in supervised learning and predicated on the availability of large amounts of labelled training data. On the other hand, effectively using readily available unlabelled data has proven a much more difficult endeavour. In contrast to algorithms, infants spontaneously learn from the available sensory information without explicit instructions, supervision or feedback. Thus, infant learning can be viewed as a highly successful approach to unsupervised learning.  In this work, we explore the parallels between infant learning and recent successes of unsupervised machine learning in the area of contrastive learning. We examine how the principles of infant learning and developmental cognitive neuroscience can inform and inspire the development of novel contrastive learning algorithms. We focus on the phenomenon of category learning and explore how these principles can be applied to better understand and improve contrastive methods.

Tracking the Unknown: Modeling Long-Term Implicit Skill Acquisition as Non-Parametric Bayesian Sequence Learning

Long perceptuo-motor sequences underlie skills from walking to language learning, and are often learned gradually and unconsciously in the face of noise. We used a non-parametric Bayesian n-gram model (Teh, 2006) to characterize the multi-day evolution of human subjects’ implicit representation of a serial reaction time task sequence with second-order contingencies. The reaction time for an element in the sequence depended simultaneously on zero, one, and more preceding elements, predicting frequency, repetition, and higher-order learning effects. Our trial-level dynamic model captured these coexisting facilitation effects by seamlessly combining information from shorter and longer windows onto past events. We show how shifting their priors over window lengths allowed subjects to grow and refine their internal sequence representations week by week.
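
The central idea of blending predictions from context windows of different lengths can be illustrated with a much simpler interpolated n-gram predictor than the hierarchical Bayesian model used in the study (Teh, 2006). The sketch below is illustrative only, with an invented element stream and arbitrary mixture weights.

```python
# Much-simplified illustration of combining predictions from context windows
# of different lengths: a plain interpolated 0th/1st/2nd-order predictor,
# not the hierarchical Bayesian n-gram model used in the study (Teh, 2006).
# The element stream and mixture weights are invented.
from collections import Counter, defaultdict

sequence = list("ABDCBAABDCBAABDCBA")   # toy SRT-style element stream

unigram = Counter(sequence)
bigram = defaultdict(Counter)
trigram = defaultdict(Counter)
for i, x in enumerate(sequence):
    if i >= 1:
        bigram[sequence[i - 1]][x] += 1
    if i >= 2:
        trigram[(sequence[i - 2], sequence[i - 1])][x] += 1

def predict(context, lambdas=(0.2, 0.3, 0.5)):
    """P(next element) as a mixture of 0th-, 1st-, and 2nd-order predictions."""
    total = sum(unigram.values())
    probs = {}
    for x in unigram:
        p0 = unigram[x] / total
        c1 = bigram[context[-1]]
        p1 = c1[x] / sum(c1.values()) if c1 else p0
        c2 = trigram[tuple(context[-2:])]
        p2 = c2[x] / sum(c2.values()) if c2 else p1
        probs[x] = lambdas[0] * p0 + lambdas[1] * p1 + lambdas[2] * p2
    return probs

print(predict(["D", "C"]))   # the longer context sharpens the prediction of "B"
```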

Effects of global discourse coherence on local contextual predictions

During language comprehension, words are easier to process when predictable based on local sentence context. It is unclear how information available from the global discourse interacts with this local predictive processing. To test this, we conducted an online self-paced reading study (n=100), manipulating the predictability of a critical word in a target sentence while varying the coherency of the target sentence with a preamble context that preceded it. We found that people processed the critical word faster when it was highly predictable from local context, replicating a common finding. We also found people were slower to process the critical word when the target sentence was presented in an incoherent context. However, no interaction was observed. Results indicate that subtler semantic changes, such as topic shifts, slow language processing but do not reduce the benefit of a highly predictable, local context.

Computational-Neuroscientific Correspondence of Oscillating-TN SOM Neural Networks

Oscillating-TN (Topological Neighborhood) Self-Organising-Map (SOM) artificial neural networks can facilitate the study of neurodevelopmental cognitive phenomena. Their cognitive modelling significance rests primarily on the premise of biological realism. Despite the difference in neuronal activity description between spike-train brain signaling and the rate-based computer SOM models, there is a valid analogy in cortical columnar activation synchrony. A cortical macrocolumn can be modeled as a computer-trained SOM with emerging or structural minicolumns represented by SOM-TN groups of neurons. Neural excitation and lateral inhibition result in structural cortical changes modeled by SOM Hebbian TN-activation. Oscillating-TN SOMs can model brain plasticity and regulate sensory desensitization. Neural synchrony can be modeled at various levels: macroscopically, there is an analogy between an oscillating local field potential and a SOM oscillating-TN width computational session. There are also arguments to support the hypothesis that SOM stability or entrenchment during computational map formation associates with neural oscillatory sensory prediction.
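
To make the topological-neighborhood (TN) terminology concrete, the sketch below runs a minimal SOM update in which the neighborhood width oscillates rather than decaying monotonically. The oscillation schedule, map size, and learning parameters are invented for illustration and are not the authors' model.

```python
# Minimal self-organising map (SOM) update with an oscillating topological
# neighborhood (TN) width, purely to make the TN terminology concrete.
# The oscillation schedule, map size, and parameters are illustrative and
# are not the authors' model.
import numpy as np

rng = np.random.default_rng(3)
grid = 10                                            # 10 x 10 map of units
weights = rng.random((grid, grid, 2))                # 2-D input space
coords = np.stack(np.meshgrid(np.arange(grid), np.arange(grid),
                              indexing="ij"), axis=-1)

def train_step(x, t, lr=0.1, base_sigma=3.0):
    # Oscillating neighborhood width instead of the usual monotonic decay.
    sigma = base_sigma * (1.0 + 0.5 * np.sin(0.05 * t))
    # Best-matching unit (BMU) in input space.
    d = np.linalg.norm(weights - x, axis=-1)
    bmu = np.unravel_index(np.argmin(d), d.shape)
    # Gaussian neighborhood around the BMU on the map grid.
    grid_d = np.linalg.norm(coords - np.array(bmu), axis=-1)
    h = np.exp(-grid_d ** 2 / (2 * sigma ** 2))
    # Hebbian-style pull of neighboring units toward the input.
    weights[...] = weights + lr * h[..., None] * (x - weights)

for t in range(2000):
    train_step(rng.random(2), t)
# After training, distant map units typically represent distinct input regions.
print(weights[0, 0], weights[9, 9])
```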

Memory Performance in Special Forces: Speedier Responses Explain Improved Retrieval Performance after Physical Exertion

Performance on cognitive tasks is typically affected by many factors simultaneously. For example, performance on a retrieval practice task is determined not only by memory encoding and retrieval processes, but also by motor processes, executive functions, affective state, etc. Often, these contributing processes also vary over time. This makes it challenging to attribute performance changes to a single mechanism, especially in real-world settings. Here we analyse performance data from multiple retrieval practice sessions completed by special forces trainees before and after a high-intensity speed march. Learning outcomes improved from session to session, which suggests increased memory function. However, a linear ballistic accumulator fitted to the data showed that changes in non-memory processes—a decrease in non-retrieval time in particular—sufficed to explain the observed improvements. This work demonstrates that even with a simple task, assuming that performance changes are attributable to a single cognitive process can lead to erroneous conclusions.
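
A minimal simulation of a linear ballistic accumulator, sketched below, shows how a change in non-decision time alone can speed responses without any change to the accumulation ("memory") process. The parameter values are illustrative, not the values fitted in the study.

```python
# Minimal linear ballistic accumulator (LBA) simulation, showing how a change
# in non-decision (non-retrieval) time alone speeds responses without any
# change to the accumulation process. Parameter values are illustrative,
# not the values fitted in the study.
import numpy as np

rng = np.random.default_rng(4)

def simulate_lba(n_trials, v=(1.0, 0.7), sv=0.3, A=0.5, b=1.2, t0=0.25):
    """Return (response, RT) arrays for a 2-choice LBA.

    v: mean drift per accumulator, sv: drift SD, A: start-point range,
    b: response threshold, t0: non-decision time (encoding + motor).
    """
    drifts = rng.normal(loc=v, scale=sv, size=(n_trials, len(v)))
    drifts = np.clip(drifts, 1e-3, None)          # keep drift rates positive
    starts = rng.uniform(0, A, size=(n_trials, len(v)))
    finish = (b - starts) / drifts                # time to threshold per accumulator
    response = finish.argmin(axis=1)              # first accumulator to finish wins
    rt = finish.min(axis=1) + t0
    return response, rt

# Only t0 differs between these two runs, yet mean RT drops noticeably.
_, rt_slow = simulate_lba(10_000, t0=0.35)
_, rt_fast = simulate_lba(10_000, t0=0.20)
print(round(rt_slow.mean(), 3), round(rt_fast.mean(), 3))
```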

Regression, encoding, control: an integrated approach to shared representations with distributed coding

Artificial systems currently outperform humans in diverse computational domains, but none has achieved parity in speed and overall versatility in mastering novel tasks. A critical component of human success in this regard is the ability to redeploy and redirect data passed between cognitive subsystems (via abstract feature representations) in response to changing task demands. However, analyzing shared representations is difficult in neural systems with distributed nonlinear coding. This work presents a simple but effective approach to this problem. In experiments, the proposed model robustly predicts the behavior and performance of multitasking networks on image data (MNIST) using common deep network architectures. Consistent with existing theory in cognitive control, representation structure varies in response to (a) environmental pressures for representation sharing, (b) demands for parallel processing capacity, and (c) tolerance for crosstalk. Implications for geometric (dimension, curvature), functional (automaticity, generalizability, modularity), and applied aspects of representation learning are discussed.

Are you talking about me? A pilot investigation of how gender modulates the effects of self-relevance and valence on emotional feelings

A pilot study (n=68) investigated participants’ emotional valence after reading sentences that varied in self-relevance (referring to the participant/someone else), valence (positive/negative), and the gender of the person expressing the sentence (man/woman). Positive sentences induced more positive emotion than negative sentences. Self-relevance enhanced the emotional response, with positive self-relevant sentences rated more positively than positive other-relevant sentences, and negative self-relevant sentences rated more negatively than negative other-relevant sentences. Gender impacted these responses. First, the difference in emotional valence felt in response to positive versus negative sentences was larger for women than men – women responded more positively to positive sentences than men. Second, self-relevance enhanced the positive emotion in response to positive sentences more strongly when the sentence came from the opposite gender. These findings suggest that women may have stronger responses to emotional sentences and that processing biases may exist to prioritize positive self-relevant information from the opposite gender.

Common Origins of Social Interaction of Different Species: The Model of Coherent Intelligence Linking Physics to Social Sciences

This paper studies social interaction in newborns and in various species whose behavioral development is attributed to the circular sensory-motor Stage 3 of behavior development. The article brings together 16 striking findings on social behavior in groups of different species, highlighting three questions: (1) how can newborns successfully classify social phenomena that are abstract or absent from their reality; (2) can emotional contagion arise through cues of body language that subjects cannot consciously perceive; and (3) how do organisms distinguish identical stimuli by their importance (value) without perceptual driver stimuli? The analysis suggests that organisms can interact at initial stages of development by distinguishing cues of a similar modality by their significance (their value). This ability contributes to the assimilation of knowledge about initial social phenomena. The paper proposes the model of Coherent Intelligence, which is based on experimental evidence in the modern literature and on the laws of physics.

Cognitive Effort and Preference: A Curious Case of Rotated Words

Understanding how we judge effortfulness promises insight into the myriad choices we make each day. Here, we present research investigating a curious finding: in Dunn et al. (2019), individuals anticipated that reading a single rotated word would be more effortful than reading two upright words. This contravenes the expectation that the latter will take significantly longer. In Dunn et al. (2019), participants had limited experience engaging in the task beyond judging anticipated effort. In the present investigation, participants performed demand selection and forced-choice tasks in which they read either one rotated word or two upright words. We analyze results in terms of choice, accuracy, vocal onset time, and task duration. Participants still generally considered reading two words less effortful than (and preferable to) reading one rotated word but, interestingly, this tendency diminished as the task wore on. We discuss implications for understanding individuals’ effort-based choices.

A Self-Supervised and Predictive Processing-Based Model of Event Segmentation and Learning

“Event” is a fuzzy term that refers to bounded spatio-temporal units. Events guide behavior to allow adaptation to complex environments. The study of event segmentation investigates the mechanisms behind the ability to segment the continuous flow of information into discrete units. Event Segmentation Theory states that people predict observed ongoing activities and monitor their prediction errors for event segmentation. In this study, inspired by Event Segmentation Theory and predictive processing, we introduced a computational model of event segmentation and learning. To verify that our method can segment ongoing activity into meaningful parts and learn them via passive observation, we compared the performance of our method with that of humans on fine and coarse segmentation tasks from two psychological experiments. The results demonstrated that our model not only learned segmented behavioral units accurately but also displayed segmentation performance similar to that of human subjects.

A theory of algorithms and implementations and their relevance to cognitive science

The question of how algorithms in general and cognitive skills in particular are implemented by our nervous system is at the core of cognitive science. The notions of what it means for a physical system (such as our nervous system) to implement an algorithm, however, are surprisingly vague. We argue that a rigorous theory is needed to formulate and evaluate precise hypotheses about the brain's cognitive functions and propose a definition of the term algorithm as a chain of functions. Subsequently, we define the term implementation via a sequence of projections from a dynamical system, represented by a Markov process, to the algorithm. We furthermore show the practical applicability of this approach in a simulated example. We believe that the theory proposed here contributes to bridging the gap between the algorithmic and the implementational level by rendering the task at hand theoretically precise.

Exploring the influence of semantics on the German plural system: a wug study

The role of semantics in inflectional morphology has long been debated (Huang & Pinker, 2010; Pinker & Prince, 1988; Ramscar, 2002), with most of the focus on the English past tense. This paper explores whether an effect of semantics can be found for German noun plural generalisation, a system as yet only poorly understood. German speakers were asked to first freely produce and then rate plural forms of 24 new wug words, presented in a semantically manipulated context. We expected that the German plural class ending in -n would be used more frequently with nouns presented as persons than as objects (Gaeta, 2008). While this hypothesis was not confirmed, the post-hoc discovery of other semantic influences prevents us from completely rejecting the original hypothesis. In light of these results we discuss possible sources of the observed pattern of plural classes and stress the importance of replicating wug studies with novel sets of wug words. We conclude that generalisation of the German plural system cannot easily be explained by either phonological or semantic influences.

Are Explicit Frequency Counters Necessary in Computational Models of Early Word Segmentation?

Frequency counters are computational mechanisms that track the frequency or probability of speech units. Such counters are idealizations that re-describe frequency effects in early word segmentation without providing an underlying learning mechanism from which these effects arise. Previous work has shown that Implicit Chunking represents a plausible learning mechanism explaining infants’ sensitivity to statistical cues when segmenting small-scale artificial languages (French et al., 2011). However, no work has examined whether Implicit Chunking can segment naturalistic speech in a developmentally plausible way. Here, we show how a novel symbolic model of Implicit Chunking, CLASSIC-Utterance-Boundary, performs as well as or better than previous frequency-based models (i.e., transitional probability, chunking) at predicting children’s age of first production for words and a range of word-level characteristics of children’s vocabularies (word frequency, word length, neighborhood density, phonotactic probability). We suggest that explicit frequency counters are not necessary to explain infants’ speech segmentation in naturalistic settings.
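
The transitional-probability baseline mentioned above can be sketched as follows (this is the comparison model, not CLASSIC-Utterance-Boundary itself): compute forward transitional probabilities between syllables and posit word boundaries at local dips. The toy "speech" and the dip rule are illustrative simplifications.

```python
# Sketch of the transitional-probability baseline (the comparison model, not
# CLASSIC-Utterance-Boundary itself): compute forward transitional
# probabilities between syllables and posit word boundaries at local dips.
# The toy "speech" and the dip rule are illustrative simplifications.
from collections import Counter, defaultdict

utterances = ["pa bi ku go la tu da ro pi",
              "go la tu pa bi ku da ro pi",
              "da ro pi go la tu pa bi ku"]
syllabified = [u.split() for u in utterances]

pair_counts = defaultdict(Counter)
first_counts = Counter()
for utt in syllabified:
    for a, b in zip(utt, utt[1:]):
        pair_counts[a][b] += 1
        first_counts[a] += 1

def tp(a, b):
    """Forward transitional probability P(b | a)."""
    return pair_counts[a][b] / first_counts[a] if first_counts[a] else 0.0

def segment(utt):
    """Insert a word boundary wherever TP dips relative to its neighbours."""
    tps = [tp(a, b) for a, b in zip(utt, utt[1:])]
    words, current = [], [utt[0]]
    for i, syll in enumerate(utt[1:], start=1):
        left = tps[i - 1]
        right = tps[i] if i < len(tps) else 1.0
        if left < right:                  # local TP dip -> boundary before syll
            words.append("".join(current))
            current = []
        current.append(syll)
    words.append("".join(current))
    return words

print(segment(syllabified[0]))            # -> ['pabiku', 'golatu', 'daropi']
```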

Linguistic distributional information about object labels affects ultrarapid object categorization

When given unrestricted time to process an image, people are faster and more accurate at making categorical decisions about a depicted object (e.g., Labrador) if it is close in sensorimotor and linguistic distributional experience to its target category concept (e.g., dog). In this preregistered study, we examined whether sensorimotor and linguistic distributional information affect object categorisation differently as a function of time available for perceptual processing. Using an ultrarapid categorisation paradigm with backwards masking, we systematically varied onset timing (SOA) of a post-stimulus mask (17-133ms) following a briefly displayed (17ms) object. Preliminary results suggest that linguistic distributional distance between concept and category (e.g., Labrador → dog), but not sensorimotor distance, affects categorisation accuracy and RT even in rapid categorisation, and that these effects do not vary systematically by SOA. These findings support the role of a linguistic shortcut (i.e., using linguistic distributional instead of sensorimotor information) in rapid object categorisation.

Grounding Word Learning Across Situations

Word learning models are typically evaluated on the problem of observing words together with sets of atomic objects and learning an alignment between them. We use ADAM, a Python software platform for modeling grounded language acquisition, to evaluate a particular word learning model, Pursuit (Stevens, Gleitman, Trueswell, & Yang, 2017), under more realistic learning conditions (see, e.g., Gleitman and Trueswell (2020) for review). In particular, we manipulate the degree of referential ambiguity and the salience of attentional cues available to the learner, and we present extensions to Pursuit that address the challenges of non-atomic meanings and of exploiting attentional cues.

Augmenting Linguistic Intelligence through Chess Training: An Empirical Study

Does playing chess improve reading, writing, speaking, and listening skills, the components of Linguistic Intelligence? Linguistic Intelligence is defined as the ability to manipulate the syntax, phonetics, pragmatics, and semantics of language. The experimental group underwent chess training once a week throughout the year (around 25 to 30 sessions) as part of a co-curricular school activity. ANCOVA showed significantly greater gains in Verbal Reasoning (an integral part of Linguistic Intelligence) for the experimental group than for the control group, who were involved in other extra-curricular activities. The study used a pretest-posttest control group design, with the WISC-IV (India) and the Binet-Kamat Test of Intelligence for assessment. Seventy children in the experimental group and 81 children in the control group, of both genders and aged 5-16, were randomly selected from four schools. Results indicate enhanced Linguistic Intelligence, which could also lead to improved communication and critical and analytical skills.

Behind the Bar: Coordinated Collision Avoidance in a Goal-Directed Joint Action Task

The current study explores the dynamics of interpersonal collision avoidance during an everyday coordination task. The task was a full-body adaptation of the collision-avoidance arm-movement task employed by Richardson et al. (2015) and required pairs of co-actors to avoid colliding with one another as they moved back and forth within a collaborative workspace. Specifically, participants were required to serve drinks to humanoid customers in a virtual bar, where they needed to move back and forth between drink service and refill stations. As expected, stable patterns of movement coordination emerged between co-actors, with one participant in a pair exhibiting a more circular movement trajectory between the service and refill locations, while the second participant exhibited a more straight-line trajectory between the two locations. Consistent with the results of Richardson et al. (2015), this pattern was observed across nearly all pairs, indicating that the same dynamical processes governing simple hand-arm coordination tasks also regulate more complex, full-body, and naturalistic goal-directed interpersonal coordination tasks.

Emotion Words May Connect Complex Emotional Events and Facial Expressions in Early Childhood

Recent theories suggest that emotion words may facilitate the development of emotion concepts. However, most research investigating this relation in early childhood has been correlational. To assess whether emotion words causally influence emotion concept development, we conducted a pre-test post-test study examining which facial configurations 3-year-olds associate with complex emotional scenarios (annoyed, disgusted, and nervous). Between pre- and post-test, children were randomly assigned to one of three conditions. Children observed a facial configuration paired with a scenario while presented with either an explicit emotion label, a vague emotion label, or irrelevant information. Data from 54 children (36 female, mean age = 3.53) revealed that children’s average change in number of correct responses from pre-test to post-test by condition was as follows: Explicit = 1.00 (SD=1.68); Vague = 0.11 (SD=1.45); Irrelevant = -0.28 (SD=1.71). These results hold implications for how specific emotion words may causally influence children’s ability to learn new emotion concepts.

SpeakEasy Pronunciation Trainer: Personalized Multimodal Pronunciation Training

The primary goals of computer-assisted pronunciation training (CAPT) systems are to provide a personalized interactive environment and to accurately diagnose mispronunciations. Automatic speech recognition (ASR) systems have been shown to be an effective tool for diagnosing mispronunciations. While the data ASR systems output can be difficult for the layperson to understand, presenting it in a multimodal fashion can make it easier and feeding it into an automated narrative system can produce personalized feedback. In the absence of native speech examples, synthetic examples produced by text-to-speech (TTS) engines have proven to be an adequate substitute, making data collection easier and allowing for larger CAPT systems. In this work we present the SpeakEasy pronunciation trainer, a CAPT system that leverages ASR, TTS, automated narrative systems, and multimodal data representation to provide a personalized interactive environment that tracks a user's progress over time.

Interpretations of meaningful and ambiguous hand gestures from individuals with and without Autism Spectrum Disorder (ASD)

Previous research indicates that Autism Spectrum Disorder (ASD) is associated with altered production of co-speech gestures. However, little research has examined whether ASD impacts the processes involved in interpreting observed gestures. We collected meaningfulness ratings and one-word interpretations for 165 video-recorded gestures of varying ambiguity from individuals with and without an ASD diagnosis. The resulting dataset contains insights into gesture processing in individuals with and without ASD, including the number and variability of interpretations assigned to gestures as well as tendencies to endorse ambiguous gestures as meaningful. We also used a subset of these stimuli to identify the neural mechanisms by which gestures enhance memory in neurotypical adults. This collection of videos, as well as the interpretations and ratings from each group, will be made available through the Open Science Framework for use in future research.

Lay Theories of Manipulation: Do Consumers Believe They are Susceptible to Marketers’ Trickery?

Persuasion is hard. Why, then, do some consumers think that marketers can easily manipulate them? Three studies and an internal meta-analysis suggest that consumers’ beliefs about marketing manipulation are rooted in a deeper and older aspect of human psychology: the motivation to understand life events. Consumers higher in the motivation to make sense of their environments not only detect persuasion where it exists but also where none exists (i.e., they make false-positive errors). Whereas consumers higher in sense-making motives believe that manipulations are more effective, objective sense-making abilities negatively predict false manipulation detection. We also tested how manipulation beliefs relate to personality traits, gender, and age. These findings help (1) understand the origins of lay (false) beliefs about the marketplace and persuasion in general, (2) consider marketing segmentation strategies to reach particularly skeptical consumers, and (3) understand ways to attenuate false-positive beliefs and foster accurate persuasion detection, which is critical in the era of infodemics.

Task strategies mediate the interaction between working memory and other cognitive systems

Individuals can learn differently, even in very simple tasks. In an association learning task, participants learned associations between correct keypresses and sets of categorically related images. Following a delay, participants were then tested on the associations in a surprise test. This task has been used to examine the relationship between reinforcement learning and working memory. However, the strategies that individuals use to explore the associations between keypresses and images as well as the opportunity to take advantage of a simple rule based on the structure of the task without having to retrieve the correct response may both impact test performance. Two experiments were conducted to test these hypotheses. Results showed that the performance difference from the end of the learning phase to the testing phase differed significantly between set size conditions in a way that is more consistent with strategic differences than an interaction between working memory and reinforcement learning.

Modeling human planning in a life-like search-and-rescue mission

The ability to plan under a variety of constraints, environments, and uncertainties is one of the greatest puzzles of human intelligence. Planning in human-like domains remains intractable for machine learning algorithms, for which one of the greatest challenges is designing appropriate representations and state-evaluation heuristics. How do humans navigate their natural world so efficiently? Which computations drive them? Existing studies have investigated planning in simple tasks, such as gambles and multi-armed bandits, with little resemblance to natural tasks. We analyze human behavior in a search-and-rescue mission in a large 3D environment, designed to simulate a real-world context. We propose a hierarchical planning framework, which jointly solves an orienteering problem on the high level and a set of local Partially Observable Markov Decision Processes on the low level. Using this framework, we evaluate alternative computational models of human planning that capture core mental representations of human spatial planning.
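As a rough illustration of the high-level layer only, the sketch below treats room selection as an orienteering problem solved with a greedy heuristic under a travel budget. The rewards, travel costs, budget, and the greedy rule are all made-up assumptions, not the authors' framework, and the low-level POMDPs are abstracted into a fixed travel-cost table.

```python
# Illustrative sketch only: a greedy orienteering heuristic over "rooms", standing in
# for the high level of a hierarchical planner. Numbers are invented; the low-level
# POMDPs are collapsed into the fixed travel-cost table below.
reward = {"roomA": 3.0, "roomB": 5.0, "roomC": 2.0}
travel = {("start", "roomA"): 4, ("start", "roomB"): 7, ("start", "roomC"): 3,
          ("roomA", "roomB"): 5, ("roomA", "roomC"): 6,
          ("roomB", "roomC"): 4}

def cost(a, b):
    return travel.get((a, b), travel.get((b, a)))

def greedy_orienteering(start="start", budget=12.0):
    here, remaining, route = start, budget, []
    unvisited = set(reward)
    while unvisited:
        # Choose the reachable room with the best reward-per-cost ratio.
        options = [(reward[r] / cost(here, r), r) for r in unvisited
                   if cost(here, r) <= remaining]
        if not options:
            break
        _, best = max(options)
        remaining -= cost(here, best)
        route.append(best)
        unvisited.remove(best)
        here = best
    return route

print(greedy_orienteering())  # e.g. ['roomA', 'roomB'] under the toy numbers above
```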

Perceptual Sensitivity to an Artificial Co-Actor in Competitive 2D Pong

Deep reinforcement learning (Deep RL) methods can train artificial agents (AAs) to reach or exceed human-level performance. However, in multiagent contexts requiring competitive behavior or where the aim is to use AAs for human training, the qualitative behaviors AAs adopt may be just as important as their performance to ensure representative training. This paper compares human behaviors and performance when competing against either a human expert or an AA opponent trained using Deep RL on a 2-dimensional version of Pong. Results show that participants were not sensitive to the movement differences between the human expert and AA. Further, the participants did not alter their behaviors, except to compensate for differences in the environmental states caused by the opponents. The paper concludes with discussion on the potential impacts of AA training on human behavior with regard to representative design in the areas of skill development and team training.

Assessing prosocial tendencies of parrots in food sharing situations

Prosociality is considered one of the driving forces for cooperation in human and animal societies. The presence of prosociality in different taxa suggests convergent evolution. To broaden the phylogenetic spectrum in our understanding of prosociality, we examined two parrot species (i.e., African grey parrots (AGP) and blue-headed macaws (BHM)) using a food-sharing paradigm. By controlling the parrots’ hunger level, we tested how satiation may affect their willingness to share food with their most affiliated partner and a less affiliated partner. We also assessed whether they reciprocated if roles were reversed and examined birds’ regurgitation behaviour following the test condition. Preliminary results show that the parrots did not directly transfer any food pieces to their partner. However, food-sharing by regurgitation subsequent to the test situation occurred between the most affiliative partners and more frequently in the AGP than the BHM. This study highlights parrots as a fruitful model for studying prosociality.

Long-term effects of valence, concreteness, and arousal on lexical reproduction

Emotional factors like valence, concreteness, and arousal have been shown to influence lexical processing, in that they exhibit non-neutrality, positivity, and negativity biases, respectively (e.g. Kuperman et al. 2014, J. Exp. Psych.; Pauligk et al. 2019, Scientific Reports). Since even weak cognitive biases can yield strong tendencies on a larger time scale, we investigate diachronic long-term effects of these factors on the reproductive success of English words. We operationalize reproductive success by means of diachronic growth, age-of-acquisition and prevalence of words. By combining emotional norms (Warriner et al. 2013, Beh. Res. Meth.; Kuperman et al. 2012, Beh. Res. Meth.) with historical language data (COHA; controlling for semantic shift), we show that long-term effects of valence and concreteness largely mimic cognitive short-term biases. However, arousal, quite surprisingly, exhibits a clearly positive effect on lexical reproduction. We attribute this reversed effect to (i) interactions among emotional factors and (ii) social effects.

The Funny Thing About Algorithm Aversion: Investigating Bias Toward AI Humor

Though humans should defer to the superior judgement of AI in an increasing number of domains, certain biases prevent us from doing so. Understanding when and how these biases occur is a central challenge for human-computer interaction. A proposed source of such bias is the perceived subjectivity of tasks. We tested this hypothesis using one of the most subjective tasks possible: Evaluating joke funniness. Across two experiments, we addressed the following: Would people rate jokes as less funny if they believed an AI created them? When asked to rate jokes and guess their likeliest source, participants evaluated jokes attributed to humans as the funniest and those to AI as the least funny. However, when we explicitly framed these same jokes as either human or AI-created, there was no difference in performance-level ratings. These results challenge the notion that task subjectivity always biases users against AI if the source is transparent.

Structural Inductive Biases in Emergent Communication

In order to communicate, humans flatten complex ideas and their attributes into a sequence of words. Humans can use this ability to express and understand complex hierarchical and relational concepts, such as kinship relations and logical deduction chains. We simulate communication of relational and hierarchical concepts using artificial agents. We propose a new set of graph communication games, which show that agents parametrized by graph neural networks develop a more compositional language compared to bag-of-words and sequence models. Graph-based agents are also more successful at systematic generalization to new combinations of familiar features. We release the implementation to facilitate further research on emergent communication over complex data.

Pandemic Panic: The Effect of Disaster-Related Stress on Negotiation Outcomes

Prior research often finds increased altruism following natural disasters. One explanation is the social heuristic hypothesis: humans are prosocial by nature but become self-interested when they have the opportunity to deliberate. As the stress of a disaster lowers people’s ability to engage in effortful deliberation, their heuristic prosocial tendencies emerge. However, this link has often been explored with very simple tasks; here, we study the impact of COVID-related stress on outcomes in multi-issue negotiations with a computational virtual agent. In two experiments with a virtual negotiation partner, we explore two distinct pathways for how COVID-19 stress shapes prosocial behavior. Consistent with the social heuristic hypothesis, COVID-stress is correlated with giving, mediated by heuristic thinking. But COVID-stress also seems to enhance information exchange and perspective taking, which allowed participants to create more value that they could then give away. Our results give new insights into the relationship between stress, cognition, and prosocial behavior.

Verb learning in young children: Are types of comparisons important?

Evidence shows comparing events helps children learn verbs (e.g., Childers et al., 2016), but studies are needed to understand whether event type is important. Study 1 examines two types of experiences during learning: seeing similar events followed by varied events, or all varied events. Two-and-a-half- (n=22), 3½- (n=20), and 4½-year-olds (n=14) learned 4 novel verbs in a between-subjects design (similar events first or all varied events) and responded by pointing at test. A 3 (Age: 2, 3, 4 years) x 2 (Condition: similar, varied) univariate ANOVA with proportion correct as the dependent variable showed a main effect of Age only, F(2, 55) = 7.6, p = .001. All children succeeded (one-sample t-tests, ps < .03), whereas in a prior study, younger children failed. A second study confirms these results with a different set of video events, and with events separated by 1-minute distractors. Here, only 2½- and 3½-year-olds who saw similar events extended verbs, perhaps because events were separated in time. These studies show that verb learners can benefit from comparing events, and comparing similar events can be especially useful.

Metaphors Embedded in Chinese Characters Bridge Dissimilar Concepts

How related is skin to a quilt or door to worry? Here, we show that linguistic experience strongly informs people’s judgments of such word pairs. We asked Chinese-speakers, English-speakers, and Chinese-English bilinguals to rate semantic and visual similarity between pairs of Chinese words and of their English translation equivalents. Some pairs were unrelated, others were also unrelated but shared a radical (e.g., “expert” and “dolphin” share the radical meaning “pig”), others also shared a radical which invokes a metaphorical relationship. For example, a quilt covers the body like skin; understand, with a sun radical, invokes understanding as illumination. Importantly, the shared radicals are not part of the pronounced word form. Chinese speakers rated word pairs with metaphorical connections as more similar than other pairs. English speakers did not even though they were sensitive to shared radicals. Chinese-English bilinguals showed sensitivity to the metaphorical connections even when tested with English words.

Empirical Support for a Rate-Distortion Account of Pragmatic Reasoning

Iterated models of pragmatic reasoning, such as the Rational Speech Act model (RSA; Frank & Goodman, 2012), aim to explain how meaning is understood in context. We propose an optimal experiment design approach for teasing apart such models, in which contexts are optimized for differentiating model predictions in reference games. We use this approach to compare RSA with RD-RSA (Zaslavsky et al., 2020), a recent variant of RSA grounded in Rate-Distortion theory. First, we show that our optimal experiment design approach finds cases in which the two models yield qualitatively different predictions, in contrast to previous experimental settings for which these models generate similar predictions. Next, we test the models on newly collected experimental data using our optimal design. Our results show that in this experimental setting RD-RSA robustly outperforms the standard RSA model. This finding supports the idea that Rate-Distortion theory may characterize human pragmatic reasoning.
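For reference, one common formulation of the standard RSA recursion is sketched below (with m a meaning, u an utterance, [[u]](m) the literal semantics, C(u) an utterance cost, and α a rationality parameter); notation varies across papers, and RD-RSA replaces the speaker's objective with a rate-distortion trade-off that is not reproduced here.

```latex
% One common formulation of the RSA recursion; notation may differ across papers.
\begin{align*}
P_{L_0}(m \mid u) &\propto [\![u]\!](m)\, P(m) \\
P_{S_1}(u \mid m) &\propto \exp\!\big(\alpha\,[\log P_{L_0}(m \mid u) - C(u)]\big) \\
P_{L_1}(m \mid u) &\propto P_{S_1}(u \mid m)\, P(m)
\end{align*}
```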

A virtual actor with socially emotional behavior

The framework of emotional Biologically Inspired Cognitive Architecture (eBICA) is used to define a cognitive model, producing believable socially emotional behavior in social interaction paradigms in a virtual environment. The paradigm selected for this study is a virtual pet (a penguin) interacting with a human user. Its implementation in Unity on a desktop PC in two versions, with and without an Oculus VR headset, was used in experiments involving 20 college student participants. Several versions of the model were compared. Results support the validity of the eBICA framework and indicate that the combination of somatic factors, cognitive appraisals and moral schemas in one model has the potential to make behavior of a virtual actor believable and socially attractive. At the same time, partial randomization of behavior does not affect the general result. The work has implications for the design of future emotionally intelligent collaborative robots and virtual assistants.

Exploring the Structure and Grounding of Concrete and Abstract Categories

Category production tasks (also known as semantic fluency) typically concentrate on concrete categories, meaning little is known about abstract categories and potential differences in their structure. Using a category production task for 67 concrete (e.g., animals, tools) and 50 abstract (e.g., science, emotion) categories, we investigated differences in produced member concepts. Abstract categories were smaller than concrete categories, and their member concepts were generated more slowly and had longer, more phonologically distinct names. Abstract category members were grounded in sensorimotor experience (i.e., with high sensorimotor strength), though overall to a lesser extent than concrete category members. Several ostensibly abstract categories appeared amongst the highest-rated in sensorimotor strength (e.g., sport) while some ostensibly concrete categories were amongst the lowest-rated (e.g., chemical element). The data highlight linguistic and semantic differences in concrete and abstract category structure, but also that they share a common sensorimotor grounding.

Primates evolved spectrally complex calls in compensation for reduction in olfactory cognition

Tetrapod vertebrates evolved acoustic calling to circumvent impediments in communication media. Motor and auditory neural underpinnings of such calling are hundreds of millions of years old. More complex vocalizations, known as display, however, have since convergently evolved in many lineages. I hypothesized that music-like calling might correspond to larger sizes of neural structures corresponding to visual-spatial and motor control capacities, perhaps due to processing overlap. I tested this theory on primates by comparing relative brain component sizes to several spectrographic indexes—song complexity, reappearance diversity [ARDI], and syllable count. Visual and spatial components had moderately positive associations with vocal complexity measures. Areas associated with emotion, arousal, and motivation as well as motor control (especially of head, neck, and eyes) had even stronger associations. Olfaction, however, had negative correlations with all indexes, suggesting an evolutionary trade-off between olfaction and other brain components in the evolution towards signals of greater acoustic complexity.

Is Iconic Language More Vivid?

Iconicity refers to instances in which the form of language resembles its meaning (Perniss et al., 2010). The most prominent example in spoken language is onomatopoeia (e.g., woosh, which sounds like a gust of wind). Here we tested whether iconicity makes language more vivid by depicting the sensorimotor experiences being referred to. In Experiment 1, 44 participants read ten short passages (five iconic, five non-iconic), and then rated their vividness on several scales. These passages differed on two key words, which were either iconic (e.g., screech) or non-iconic (e.g., yell). We found no evidence that iconic language was more vivid. In Experiment 2, 199 participants each rated one longer passage that was either iconic or non-iconic (differing on eight key words). We found only marginal evidence of iconic language being more vivid, on subscales related to felt vividness. These results suggest that iconicity does not make written language more vivid.

Fast or efficient? Strategy selection in the game Entropy Mastermind

How do people acquire information and make decisions in an inherently complex world? We use the game Entropy Mastermind to investigate the cognitive strategies people adopt under various task conditions in situations characterized by high combinatorial complexity. N = 42 participants completed a total of 271 games, varying in incentive structure: In the speed condition, participants were incentivised for solving games quickly; in the efficiency condition, incentives were given for solutions requiring few problem solving steps; in the mixed condition, both speed and efficiency were incentivised. We found that participants adapted their problem solving strategies to the imposed constraints: In the speed condition simpler strategies were used, making feedback easy to interpret. Our results support the hypothesis that in complex environments people flexibly adjust their cognitive strategies to optimize the fit between situational requirements and their cognitive resources.

The Omniglot Jr. challenge: Can a model achieve child-level character generation and classification?

Lake et al. (2015) presented the Omniglot dataset to study how people generate and classify characters. They proposed a model for one-shot learning of new concepts based on inferring compositionally structured generative models, which could transfer from familiar concepts to new ones. Their Bayesian Program Learning (BPL) model was able both to classify and to generate characters similarly to adults. However, adults have years of experience; would a similar model apply to children without this prior knowledge? We introduce a new dataset called Omniglot_Jr., composed of Omniglot letters generated by children aged 3-6. Training BPL on children's data for classification and generation, we find that it achieves higher classification accuracy and generates more adult-looking letters. We propose the challenge of reproducing children's distinctive pattern of mistakes. Skills such as character recognition depend on child-like learning; this challenge should help us understand how that learning is possible, and how to simulate it in computational systems.

Effects of articulatory suppression on the homophone judgments of Chinese-character words

This study examined the nature of phonological processing in Chinese-character word recognition. Chinese and Japanese use similar character-based writing systems. Previous studies have suggested that Japanese readers use abstract, non-articulatory phonology when they read two-character Japanese kanji words. We examined whether Chinese readers use the same process. In Experiment 1, 25 native Chinese speakers performed a homophone judgment task using two-character Chinese words. The participants made more errors in the articulatory suppression condition than in the control condition. This suggests that the phonological information used during the task was speech-like, articulatory phonology. In Experiment 2, 24 native Chinese speakers performed a homophone judgment task using one-character Chinese words. Articulatory suppression did not disrupt performance, suggesting that Chinese speakers use non-articulatory phonology when they read one-character Chinese words. The results of the two experiments indicate that the phonological processes used by Chinese and Japanese readers are not the same.

Sustained Attention in Phonological Form Preparation: Evidence from Highly Associated Word Pairs

In phonological form preparation, speakers are able to prepare in advance when to-be-spoken words share initial segments, despite not knowing which word they will be asked to produce. The standard account viewed preparation as partial production and therefore claimed that all possible words must share that segment (e.g. Roelofs, 1997). Alternatively, preparation occurs through a flexible sustained attention process that is external to production. Recently, O’Seaghdha and Frazer (2014) demonstrated preparation benefits in non-unanimous sets using picture naming and word reading versions of the blocked cyclic task, though not in the more attentionally-demanding paired associates version. Here, the current studies demonstrate small, but significant, preparation benefits using the paired-associates task, but only when cue-target pairs are highly associated (e.g. dog-cat, oreo-cookie, hot-cold, worm-bait). This finding supports the sustained attentional account of preparation in word production.

Great apes’ understanding of others’ beliefs in two manual search tasks

Humans are ultra-social: they spontaneously incorporate others’ mental states into their action planning (Kaminski et al., 2008). They are also altercentric: their behavior is influenced by others’ perspectives, even perspectives irrelevant to their instrumental goal (Kampis & Southgate, 2020). Recent evidence suggests that, similarly to human infants, non-human great apes anticipate others’ actions based on their beliefs (Krupenye et al., 2016; Kano et al., 2019), raising the critical question of whether altercentrism is uniquely human. In two experiments, we tested chimpanzees, bonobos, and orangutans in a manual search paradigm adapted from Mendes et al. (2008). These experiments replicated findings demonstrating apes’ first-person object-tracking abilities. Experiment 1 found no evidence for altercentrism: apes’ search behavior was not spontaneously modulated by another agent’s beliefs (unlike 14-month-old human infants; Kampis & Kovács, 2020). Experiment 2 found tentative evidence that apes inferred a person’s actions based on her beliefs, and adapted their own actions based on this information.

Moderators of Acquiescing to Intuition: Strength of Intuition, Task Characteristics and Individual Differences.

Will people maintain their initial intuitive decision even when they know the rational answer is different? Some dual-process models assume that there is a process of detection and correction of initial intuitive biases. But do these models also account for patterns of behaviour that some people exhibit when they return to their initial intuitive response, even after acknowledging the rational response? Over four studies we tested for acquiescence by asking participants to solve congruent and incongruent problems utilising a three-response decision paradigm, manipulating base rates and ratios as well as measuring both Cognitive Reflection Test (CRT) and Rational-Experiential Inventory (REI) responses. Results suggest that for incongruent problems participants demonstrated acquiescence across all studies. In addition, although individuals appear to be more rational and can explicitly recognise the rational response, in some cases, when moderated by task characteristics and cognitive reflection, they are unable to suppress their initial intuitive decision.

Mind over Body: Investigating Cognitive Control of Cycling Performance with Dual-Task Interference

In cognitive psychology, dual-task investigations have indicated that internal language plays a role in a variety of cognitive functions. This preregistered study investigated whether physical endurance as exemplified by cycling performance depends on internal language and internal visual experience. A sample of 50 physically active participants performed 12 cycling trials, each lasting one minute, where they were required to cycle as fast as possible while remembering either a sequence of letters and numbers (verbal interference) or locations on a grid (nonverbal interference). We found that participants cycled a shorter distance in the verbal interference condition compared with the no-interference condition (p < .001) and the visuospatial interference condition (nonsignificant: p = .10). Further, participants who reported that self-talk usually helps their sports performance were more negatively affected by verbal interference. Our study comprises a first attempt at using the dual-task method to investigate the causal role of self-talk in physical performance.

Distinct rhythms of joint and individual action: Evidence from an auditory sequence production paradigm

Many joint actions such as group dance and music performance require that partners take turns producing actions at specific temporal intervals. Here we assessed whether turn-taking partners can co-produce temporal intervals with the same levels of accuracy and precision as individuals. Participants learned to tap a piano key at the rate of a metronome cue, either alone (Individual) or in alternation with a partner (Joint). Findings revealed that partners did not achieve individual coordination levels: Temporal accuracy (deviation from the cued tapping rate) and precision were reduced in Joint relative to Individual sequences, though partners displayed learning across the experiment. Critically, partners appeared to group temporal intervals according to the turn-taking structure; no such grouping patterns were observed in Individual sequences. Together, these findings indicate that co-production of temporal intervals with a turn-taking partner poses challenges to coordination, and that partners attempt to overcome this challenge by rhythmically grouping intervals according to the turn-taking structure.

Investigating the effect of distance entropy on semantic priming

Recent studies have found that people are sensitive to the large-scale network structure of semantic free associations. The current work aims to conduct a stronger test of people’s sensitivity to structural nuances within the semantic network by going beyond measurements of path lengths between word pairs (e.g., Kumar et al., 2019). Here we examine the influence of distance entropy on semantic priming. Distance entropy is the entropy of the shortest paths from a target word to all other words in the network. Simulations suggested that nodes with lower distance entropy (i.e., many shortest paths of similar lengths) led to more “democratic” spread of activation overall—with higher median and less variable activation levels among nodes—and hence should be more “effective” primes. However, analyses of Semantic Priming Project data were not conclusive and targeted experiments are needed to further examine the effect of distance entropy on semantic priming.
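A minimal sketch of one reading of this measure is given below: the Shannon entropy of the distribution of shortest-path lengths from a target word to every other word in the network, computed here on a tiny hypothetical network with networkx. This is an illustration only, not the authors' code or network.

```python
# Minimal sketch (not the authors' code): one reading of "distance entropy" as the
# Shannon entropy of the distribution of shortest-path lengths from a target word
# to every other word in a semantic network. The toy network below is hypothetical.
import math
from collections import Counter

import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("dog", "cat"), ("dog", "bone"), ("cat", "mouse"),
    ("mouse", "cheese"), ("bone", "cheese"),
])

def distance_entropy(graph: nx.Graph, target: str) -> float:
    # Shortest-path length from the target to every reachable node (excluding itself).
    lengths = nx.single_source_shortest_path_length(graph, target)
    dists = [d for node, d in lengths.items() if node != target]
    counts = Counter(dists)
    total = sum(counts.values())
    probs = [c / total for c in counts.values()]
    # Lower entropy = many shortest paths of similar length, as described above.
    return -sum(p * math.log2(p) for p in probs)

for word in G.nodes:
    print(word, round(distance_entropy(G, word), 3))
```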

Do chimpanzees represent the actions of a co-ordination partner?

Effective social co-ordination benefits from mentally representing a partner’s actions. Chimpanzees can successfully work together, but the cognitive mechanisms they employ to aid social co-ordination remain unclear. Studies of action planning show that, like humans, chimpanzees demonstrate the end-state-comfort effect: considering the end of an action sequence during motor planning, e.g. using an initially awkward grasp when handling an overturned glass to facilitate how it will end up being held. Human research shows that we extend this to a partner; we pass objects in a way that facilitates the action to be performed with them. Here, we assessed the location in which chimpanzees passed a tool to an experimenter to investigate action accommodation. We manipulated the experimenter’s hand location and their ease of access to locations, and found some effect on passing behaviour, indicating that, under certain conditions, chimpanzees consider a partner’s actions when planning their own actions in a co-ordination context.

Long-range sequential dependencies precede complex syntactic production in language acquisition

To convey meaning, language relies on hierarchically organized, long-range relationships spanning words, phrases, sentences, and discourse. As the distances between elements in language sequences increase, the strength of the long-range relationships between those elements decays following a power law. This power-law relationship has been attributed variously to long-range sequential organization present in language syntax, semantics, and discourse structure. However, non-linguistic behaviors in numerous phylogenetically distant species, ranging from humpback whale song to fruit fly motility, demonstrate similar long-range statistical dependencies. Therefore, we hypothesized that long-range statistical dependencies in speech may occur independently of linguistic structure. To test this hypothesis, we measured long-range dependencies in speech corpora from children (aged 6 months to 12 years). We find that adult-like power-law statistical dependencies are present in human vocalizations prior to the production of complex linguistic structure. These linguistic structures cannot, therefore, be the sole cause of long-range statistical dependencies in language.
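One common way to quantify such decay is sketched below under stated assumptions (a toy symbol sequence, a simple plug-in mutual-information estimator, and a log-log linear fit): estimate the dependency between elements at increasing lags and read the power-law exponent off the slope. This is illustrative only and not the authors' analysis pipeline.

```python
# Illustrative sketch only (not the authors' analysis): mutual information between
# symbols at lag d in a sequence, followed by a power-law fit on log-log axes.
import math
import random
from collections import Counter

import numpy as np

def mutual_information(seq, lag):
    pairs = list(zip(seq, seq[lag:]))
    joint = Counter(pairs)
    left = Counter(a for a, _ in pairs)
    right = Counter(b for _, b in pairs)
    n = len(pairs)
    mi = 0.0
    for (a, b), c in joint.items():
        p_ab = c / n
        mi += p_ab * math.log2(p_ab / ((left[a] / n) * (right[b] / n)))
    return mi

random.seed(0)
seq = [random.choice("abcd") for _ in range(5000)]  # stand-in for a transcribed vocalization
lags = range(1, 21)
mi = [max(mutual_information(seq, d), 1e-6) for d in lags]  # floor avoids log(0)

# A power law MI(d) ~ c * d**(-k) is a straight line in log-log space; k is -slope.
slope, intercept = np.polyfit(np.log(list(lags)), np.log(mi), 1)
print("estimated decay exponent:", round(-slope, 2))
```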

Effects of interim testing and feature highlighting on natural category learning

Previous studies have suggested testing and feature highlighting each facilitates category learning. We investigated whether the beneficial effects of interim testing change with the presence of feature highlighting in natural category learning. Participants learned various rock categories that were divided into two sections. They studied a series of rocks with or without feature descriptions highlighted on each image and either took an interim test or not between the two sections. On the final test, participants classified studied and new rock exemplars from both sections. The testing group outperformed the no-testing group for both sections, indicating both backward and forward effects of testing. Such beneficial effects of testing occurred regardless of whether the feature highlighting was provided or not. The feature highlighting, however, showed negative effects on learning of both sections, suggesting that provision of explicit instruction may impede learning when it is not appropriately embedded in the learning material.

Learning rate and success as a function of code-switching strategies in the input

Code-switching is a natural phenomenon in which a speaker alternates between two languages. Although code-switching could be a useful tool for foreign language learning (Bhatti et al., 2018; Macaro, 2005), it is unknown what types of code-switches are potentially most useful. To investigate this, we present an experiment in which we compare learning rate and success of learning vocabulary (nouns) and functional categories (determiners) from input containing two types of switches from English into a Swahili-based artificial language: inserting two adjacent words (e.g., Kiti ro is next to the book) or inserting two distant words (e.g., The kiti is next to book ro). Images help participants deduce word meaning. Recall accuracy over successive cycles is used to provide a measure of learning rate for nouns and determiners, allowing us to gauge the effect of code-switching on learning grammar and lexis independently. Data collection (n ≈ 40 per condition) is ongoing.

New exposure, no constraints: Semantic restrictions on novel nouns do not constrain adults’ subsequent referent selections

Children and adults use linguistic context to learn new words. For instance, in “She wears the dax,” “dax” likely refers to clothing. We asked whether learners retain these constraints across ambiguous word-learning exposures. Adults (n=139) learned 12 words. On Exposure 1, novel nouns were presented as the object of either Restrictive (e.g., “wears”) or Non-restrictive (“finds”) verbs. Participants selected which of two compatible referents the noun referred to. On Exposure 2, participants heard each noun again and chose between a distractor referent and the unselected referent from Exposure 1—now the only referent compatible with the previous Restrictive verb. If adults retain selectional restrictions, then Restrictive verbs should increase selection of the compatible referent. However, we found no difference between Restrictive (M=.74, SD=.26) and Non-restrictive (M=.76, SD=.24) conditions, p=.24. This suggests adults use verbs’ selectional restrictions to identify referents in the moment but do not retain these restrictions across exposures.

The Effects of Dyadic Conversations on Coronavirus-Related Belief Change

In a high-risk environment, such as during an epidemic, people are exposed to a large amount of information, both accurate and inaccurate. Following exposure, they typically discuss the information with each other in conversations. Here, we assessed the effects of such conversations on their beliefs. A sample of 126 M-Turk participants first rated the accuracy of a set of COVID-19 statements (pre-test). They were then paired and asked to discuss either any of these statements (low epistemic condition) or only the statements they thought were accurate (high epistemic condition). Finally, they rated the accuracy of the initial statements again (post-test). We did not find an effect of epistemic condition on belief change. However, we found that individuals were sensitive to their conversational partners and changed their beliefs according to their partners’ conveyed beliefs. This influence was strongest for initially moderately held beliefs.

Enhancing Preschool Readiness: Evidence from a Home-based Game to Improve 5-year-old Children’s Mastery of Symbolic Numbers and Concepts

Preschool children vary in their numerical knowledge, and this variation predicts math achievement throughout elementary school. Can preschool interventions that exercise school-relevant numerical concepts support later school math learning, and if so, what numerical activities should be targeted to best foster this learning? Here we ask whether a game-based intervention targeting preschool children’s understanding of the base-10 compositional system of number words and symbols improves their school-relevant numerical concepts in the short term. Five- to six-year-old children who played a numerical board game at home with their parents for two-three weeks showed improved preschool numerical concepts, compared to children who played a game with similar materials and procedures but no numerical content. This finding takes a first step toward developing and evaluating a suite of game-based interventions, leveraging research in developmental cognitive science both to enhance children’s learning in school and to deepen understanding of how children learn.

A neural network model of referent identification in the intermodal preferential looking task

We present a neural network model of referent identification in a preferential looking task. The inputs are visual representations of pairs of objects concurrent with unfolding sequences of phonemes identifying the target object. The model is trained to output the semantic representation of the target object and to suppress the semantic representation of the distractor object. Referent identification is achieved in the model based only on bottom-up processing. The training set uses a lexicon of 200 words and their visual and semantic referents, reported by parents as typically known by toddlers. The phonological, visual and semantic representations are derived from real corpora. The model successfully replicates experimental evidence that phonological, perceptual and categorical relationships between target and distractor modulate the temporal pattern of visual attention. In particular, the network captures early effects of phonological similarity, followed by later effects of semantic similarity on referent identification.

Gene expression under human self-domestication: an in silico exploration of modern human high-frequency variants

Domesticated animals and modern humans show a set of behavioral and molecular changes that converge on shared genetic targets. There is evidence that these changes disproportionately target neurotransmission, in particular the glutamatergic signaling system. This has led to proposals that attenuation of glutamatergic signaling may be crucial for the downregulation of the stress response in both modern humans and domesticates and for the potentiation of exploratory motor output in our species. Here, we use a deep learning method (ExPecto) to predict in silico the gene expression of H. sapiens-specific (relative to the closest extinct human species, Neanderthals and Denisovans) variants of genes involved in glutamatergic signaling. This approach allows us to hone hypotheses about the functional implications of genetic changes in our species' recent evolution, including proposals as to the neurobiological substrates of the so-called 'self-domestication' of H. sapiens.

In Touch with Causation: Understanding the Impact of Kinesthetic Haptics on Causality

Humans rely on multimodal information to make judgements about events occurring in their environment. Haptic feedback, in particular, is essential to how people learn about and manipulate the objects they use daily. While much work has investigated how visual and auditory information affect the perception of causal events, little has explored how causal judgments change with the addition of haptic feedback. To begin addressing this question, we ran a psychophysical study based on the Michottean launching paradigm. We compared the use of combined visual and haptic information with solely visual information during causally ambiguous collisions. We manipulated the offset between when the first object stops moving and the second object starts moving. Using a custom one-degree-of-freedom haptic device, users in the vision-and-haptics condition received kinesthetic haptic feedback synchronized to the second object’s motion. The results demonstrate that adding haptic information increases causal perception for events with larger temporal offsets.

Spectrotemporal cues and attention modulate neural networks for speech and music

Speech and music are fundamental human communication modes. To what extent they rely on specific brain networks or exploit general auditory mechanisms based on their spectrotemporal acoustic structure is debated. We aimed at defining connectivity patterns modulated by attention to auditory content and spectrotemporal information, using fMRI. Participants tried to recognise sung speech stimuli that were gradually deprived of spectral or temporal information. Although auditory cortices appeared to specialise in temporal (left) and spectral (right) encoding, modularity of the bilateral connectivity network was largely unaffected by spectrotemporal degradations or attention. However, while participants' recognition decreased when necessary acoustic information - spectral for attention to melodies, temporal for attention to sentences - was degraded, efficiency of information flow in the network increased, and different subnetworks emerged. This suggests that the loss of crucial spectral (melody) or temporal (speech) information is compensated for within the network by recruiting additional and different neural resources.

The asymmetry between descriptions of vertical and horizontal spatial relations

Language guides how we conceptualize the level of detail in categorical spatial relations (Bowerman, 1996; Choi, 2006). This study examined whether details in different dimensions (i.e., distance and direction) given in spatial descriptions vary as a function of spatial categories. We asked Turkish speaking participants (N=40) to describe the spatial relation between two geometric objects aligned either on the horizontal or the vertical axis. There were three relations (on/above, under/below, next to/near) of which the direction (left, right) and the distance (adjacent, 5 mm, 10 mm, 15 mm) varied equally. More detailed descriptions in terms of direction and distance were provided for ‘next to/near’ compared to ‘on/above’ and ‘under/below’ relations. These results suggest an asymmetry between vertical and horizontal axes. Sensitivity for detailed descriptions along the horizontal axis may relate to language-specific spatial categorization.

The effect of semantic categorization on object location memory

We often organize objects around both visual and semantic boundaries in space. Across four experiments, we examined how semantically consistent partitions influenced memory for object locations. Participants learned the locations of items in a semantically partitioned display (where each partition contained objects from a single category), as well as a purely visually partitioned display (where each partition contained a random assortment of objects from different categories). While semantic partitions significantly improved location memory over the purely visually partitioned display, this advantage was significantly reduced when participants were cued to the correct partition during recall. Our results suggest that semantic category information benefits memory via strengthening the association between a given category and a spatial region delineated by a partition. Further, there was some indication that this benefit may come with the drawback of reducing memory precision for objects within a partitioned space.

A metric of children’s inference-making difficulty during language comprehension

Reading comprehension research has identified sources of children’s difficulty with inference-making: lack of semantic/content knowledge and logical reasoning difficulty. NLP tools modeling semantic knowledge (e.g. BERT) can predict adult inference-making, but it is unclear whether they can predict children’s inference-making difficulty. In our ongoing study, we will examine whether our new inference difficulty metric can predict kindergarten students’ inference-making, using empirical data from a classroom intervention (ELCII). Students were given verbal information on a topic and multiple-choice questions, which require students to draw an inference from two given scaffolds. To develop this metric, we will train BERT on children’s books and ELCII content to compute an additive inference vector, the sum of the two vectorized scaffolds. The cosine distance between the additive and correct inferences may indicate inference difficulty. Results will indicate whether a probabilistic semantic space can model children’s inferences or if other components (e.g. logic) should be considered.
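A minimal numeric sketch of the proposed metric is given below; the BERT encoding step is replaced by made-up placeholder vectors, so only the additive-vector and cosine-distance computation reflects the description above.

```python
# Minimal sketch (assumptions flagged): the BERT encoding step is replaced by
# placeholder vectors; in the study described above, scaffold and inference
# sentences would be embedded with a BERT model trained on children's books and
# ELCII content. The vector values here are made up for illustration.
import numpy as np

def cosine_distance(a: np.ndarray, b: np.ndarray) -> float:
    return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical sentence embeddings for the two scaffolds and the correct inference.
scaffold_1 = np.array([0.2, 0.7, 0.1, 0.4])
scaffold_2 = np.array([0.5, 0.1, 0.6, 0.2])
correct_inference = np.array([0.6, 0.8, 0.5, 0.5])

# Additive inference vector: the sum of the two vectorized scaffolds.
additive = scaffold_1 + scaffold_2

# Larger cosine distance = the inference is harder to reach from the scaffolds alone.
difficulty = cosine_distance(additive, correct_inference)
print(round(difficulty, 3))
```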

Of Pieces and Patterns: Modeling Poetic Devices

How many rhymes are possible in English? How much alliteration or assonance? This paper explores the space of common phonologically driven poetic devices. I investigate instances of these poetic devices across unique words in the dictionary, assuming perfect matching in relevant corresponding elements of word pairs. Documenting the frequencies of poetic device matches across words provides a baseline for understanding the diversity and use of these forms in the wild. It also allows us to describe the distribution of words across these patterns within each poetic device. We show that certain devices, such as alliteration and stress, support a relatively small number of unique phonological patterns, providing more consolidated and predictable resources than other devices. Forms like masculine and feminine rhyme, assonance, and consonance display relatively more sound patterns, and unique words are distributed across them differently. These results are discussed in terms of their poetic and cognitive implications.
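The counting idea can be sketched as below for a single device, treating alliteration as a shared word-initial phoneme over a tiny hand-written lexicon; the words, transcriptions, and choice of device are assumptions for illustration, whereas the paper works over a full dictionary and several devices.

```python
# Illustrative sketch: alliteration as a shared word-initial phoneme, counted over a
# tiny hand-written lexicon. Words and transcriptions here are hypothetical; the
# paper covers a full dictionary and further devices (rhyme, assonance, consonance).
from collections import Counter

pronunciations = {
    "cat":    ["K", "AE", "T"],
    "kitten": ["K", "IH", "T", "AH", "N"],
    "dog":    ["D", "AO", "G"],
    "day":    ["D", "EY"],
    "apple":  ["AE", "P", "AH", "L"],
}

# One pattern per word-initial phoneme; the count is the number of unique words
# available to alliterate on that pattern.
alliteration_patterns = Counter(phones[0] for phones in pronunciations.values())
print(alliteration_patterns)   # e.g. Counter({'K': 2, 'D': 2, 'AE': 1})

# Number of alliterating word pairs per pattern: n * (n - 1) / 2.
pairs = {p: n * (n - 1) // 2 for p, n in alliteration_patterns.items()}
print(pairs)
```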

Associative learning of new word forms in a first language and haptic features in a single-day experiment

This study focused on associative learning for new words in the first language and haptic stimuli. In the first phase, healthy Japanese participants made nine subjective evaluations of haptic stimuli using five-point semantic differential scales (e.g., regarding stickiness, scored from 1 [not sticky] to 5 [sticky]). In the second and third phases, the participants carried out two learning tasks for associative pairs of a new word in Japanese and a haptic stimulus (H), or new words only (W). In the fourth phase, after each learning task, participants performed recognition and free recall tasks. The results of the recognition tasks showed that the accuracy rates for W were better than those for H, and the response times for W were faster than those for H. Further, preference for haptic features negatively correlated with free recall scores for H; however, there was no significant difference between the free recall scores for H and W.

Infants combine kind and quantity concepts

The meaning of complex expressions (“two apples”) is computed by accessing and combining the concepts linked to their constituent words (“two”, “apples”). Across three eye-tracking experiments (N = 60), we demonstrate that preverbal infants can perform such computations and successfully derive the meaning of novel quantified noun phrases. Experiment 1 established that 12-month-olds can learn two distinct novel labels (pseudowords) denoting a singleton or a pair. Experiments 2-3 indicated that infants combine the meanings of the newly learnt quantity labels with those of familiar kind labels. When presented with four potential referents (e.g., 1 duck, 2 ducks, 1 ball, 2 balls) and asked to look at one ball, infants oriented to the target satisfying the meaning of both labels (1 ball) over the distractors satisfying the meaning of the labels separately (2 balls, 1 duck). Conceptual combination skills that enable complex thought seem to be operational in infancy, and can be triggered by linguistic stimuli.

Re-examining cross-cultural similarity judgements using lexical co-occurrence

Is “cow” more closely related to “grass” or “chicken”? Speakers of different languages judge similarity in this context differently, but why? One possibility is that cultures co-varying with these languages induce variation in conceptualizations of similarity. Specifically, East Asian cultures may promote reasoning about thematic similarity, by which cow and grass are more related whereas Western cultures may bias similarity judgements toward taxonomic relations, like cow-chicken. This difference in notions of similarity is the consensus interpretation for cross-cultural variation in this paradigm. We consider, and provide evidence for, an alternative possibility, by which notions of similarity are equivalent across contexts, but the statistics of the environment vary. On this account, similarity judgements are guided by co-occurrence in experience, and observing or hearing about cows and grass or cows and chickens more often could induce preferences for the relevant grouping, and account for apparent differences in notions of similarity across contexts.

Certainly Strange: A Probabilistic Perspective on Ignorance

Knowing that something is unknown is an important part of human cognition. While Bayesian models of cognition have been successful in explaining many aspects of human learning, current explanations of how humans realise that they need to introduce a new concept are problematic. Bayesian models lack a principled way to describe ignorance, as doing so requires comparing the probabilities of concepts in the model with the probabilities of concepts not present in the model, which is by definition impossible. Formal definitions of uncertainty (e.g. Shannon-entropy) are commonly used as a substitute for ignorance, but we will show that these concepts are fundamentally distinct, and thus that something more is needed. Enhancing probability theory to allow Bayesian agents to conclude that they are ignorant would be an important advance for both cognitive engineering and cognitive science. In this research project, we formally analyse this challenge.
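The point can be made concrete with the definition of Shannon entropy, which sums only over the hypothesis space the model already represents: a uniform distribution over known hypotheses maximizes entropy, yet says nothing about concepts outside that space, which is the sense of ignorance at issue here.

```latex
% Shannon entropy is defined over the model's own hypothesis space \mathcal{X}:
H(X) = -\sum_{x \in \mathcal{X}} p(x) \log p(x)
```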

Mutual exclusivity inferences in 12-to-15-month-olds: an online looking-while-listening study

Novel word disambiguation via mutual exclusivity is the tendency to discard familiar objects as referents of novel words. While mutual-exclusivity inferences have been widely documented in toddlers beyond 18 months of age, studies typically failed to show it in younger infants. Recently, Pomiechowska, Brody, Csibra, and Gliga (2021) showed that 12-month-old infants can use mutual exclusivity, but only if familiar objects are targeted by non-verbal communication (i.e., pointing) prior to labeling. This arguably triggers infants to represent these objects under a conceptual description, a representational format that is necessary for mutual exclusivity inferences but is not spontaneously generated in young infants. The present study is a direct online replication of the task used by Pomiechowska et al. with a wider age range (12-15 months) and adapted for Zoom-based testing.

Semantic and Phonological False Memory: A Review of Theory and Data

Deese/Roediger/McDermott (DRM) list words can share either semantic relatedness or phonological resemblance with their critical distractor. We review three lines of evidence in which semantic and phonological DRM illusions have been compared: (a) studies in which the two illusions were tracked in populations with different semantic or surface memory abilities; (b) studies that investigated the effects of manipulations that target semantic content or surface content or both; and (c) studies that examined hybrid forms of the illusion in which there was both semantic and surface resemblance between false memories and actual experiences. The three lines of evidence showed that semantic and phonological DRM illusions display dissociative patterns in most instances, indicating that they are two distinct types of false memories. We also discuss how the two major theories of the DRM illusion, fuzzy-trace theory and the activation/monitoring framework, account for the underlying mechanisms for the semantic and phonological illusions.

Approximate division on multiple visual ensembles

Prior work demonstrates that humans represent the approximate number of items across multiple sets simultaneously. Here we investigated whether adults can approximately divide multiple distinct overlapping sets simultaneously. Participants viewed arrays with dots of one to four colors (potential dividends) and a non-symbolic divisor of 2-4. Participants then estimated the quotient of the division operation. On each trial a cue indicated whether to divide over the superset of all dots or a color subset. The cue was presented before or after the brief presentation of the dividend array. We found that the capacity to divide over a pre-selected subset was not affected by set size. By comparing the estimation error between the cue-before and cue-after conditions, we determined that a substantial proportion of participants can approximately divide three ensembles simultaneously (two subsets plus the superset). These findings demonstrate the computational efficiency and possible utility of ensemble representations.

Multiple items in working memory are cyclically activated at a theta-rhythm

Representations held in working memory (WM) are crucial in guiding human attention in a goal-directed fashion. Currently, it is debated whether only a single or several of these representations can be active and bias behaviour at any given moment. In our present study, 25 university students performed a behavioural dense-sampling experiment to produce an estimate of the temporal activation patterns of two simultaneously held visual templates. We report two key novel results. First, the performance related to both representations was not continuous, but fluctuated rhythmically at 6 Hz. This corresponds to neural oscillations in the theta-band, whose functional importance in WM is well established. Second, our findings suggest that two concurrently held representations may be prioritized in alternation, not simultaneously. Our data extend recent research on rhythmic sampling of external information by demonstrating an analogous mechanism in the cyclic activation of internal WM representations.

How are Spatial Distance, Temporal Distance and Temporal Valuation Related?

A widely shared view on temporal representation suggests that people conceptualize time metaphorically as a spatial journey from a back (past) location to a front (future) location. This view predicts 1) shorter estimated distances to and better evaluations of front/future than back/past events (an asymmetry); 2) positive correlations between space, time, and evaluation; 3) negative correlations between responses to the front/future and the back/past. In the present study, participants performed a temporal distance task, a time discounting task, and a spatial distance task, all with back/past and front/future versions. Results showed that 1) there was no asymmetry between back/past and front/future in any task; 2) spatial and temporal tasks correlated positively, but they did not correlate with time discounting; and 3) responses toward the front/future and back/past correlated positively (and not negatively) in all three tasks. The results suggest the need to revise the "moving forward view of time".

Contextual Diversity and the Lexical Organization of Multiword Expressions

Corpus-based models of lexical strength have questioned the role of word frequency in lexical organization. Specifically, closer fits to lexical behavior data on single words have been obtained by measures of contextual diversity, which modifies frequency by ignoring word repetition in context, semantic diversity, which considers the semantic consistency of contextual word distribution, and socially-based semantic diversity, which encodes the communication patterns of individuals across discourses (Adelman, Brown & Quesada, 2006; Jones, Johns, & Recchia, 2012; Johns, in press). The present work aimed at determining whether diversity also drives lexical organization at the level of multiword units. Normative ratings of familiarity for 210 English idioms (Libben & Titone, 2008) were predicted from contextual, semantic and socially-based diversity measures computed from a 55-billion word corpus of Reddit comments. Results confirmed the superiority of diversity measures over word frequency, suggesting that multiword idiomatic phrases show lexical organization dynamics similar to those of single words.
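As a minimal sketch of the contextual-diversity idea (not taken from the cited papers), the snippet below contrasts raw token frequency with the number of distinct contexts containing a word at least once, using made-up toy contexts.

```python
# Minimal sketch (not from the cited papers): contextual diversity counts the number
# of distinct contexts containing a word at least once, ignoring repetitions within a
# context, in contrast with raw token frequency. The toy "contexts" are hypothetical.
from collections import Counter

contexts = [
    "kick the bucket means to die".split(),
    "he kicked the bucket last year".split(),
    "the bucket was full of water the bucket leaked".split(),
]

frequency = Counter(tok for ctx in contexts for tok in ctx)
contextual_diversity = Counter(tok for ctx in contexts for tok in set(ctx))

print(frequency["bucket"])             # 4: counts every occurrence
print(contextual_diversity["bucket"])  # 3: counts each context only once
```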

Mental Models of Illness in the COVID-19 Era

The COVID-19 pandemic and its profound global effects may be changing the way we think about illness. We surveyed 120 American adults to explore the effects of the pandemic on their mental models of illness. Participants read three vignettes: one relating to COVID-19, one to cancer, and another to the common cold, and were asked questions relating to the diagnosis, treatment, prevention, time-course, and transmission of each disease. Results showed that participants were more likely to correctly diagnose COVID-19 (93% accuracy) compared to a cold (60% accuracy) or cancer (51% accuracy). Of the 40 participants that incorrectly diagnosed the cold vignette, 7 misdiagnosed a cold as COVID-19. These and other preliminary findings suggest a distinct mental model for COVID-19 compared to other illnesses. The prevalence of COVID-19 in everyday discourse may lead to biased responding, similar to errors in medical diagnosis that result from physicians’ expertise (Hashem et al., 2003).

Who Needs More Help? Sixteen-Month-Old Infants Prefer to Look at and Reach for Helpers who Help with Harder Tasks

Not every prosocial act is equally praiseworthy. As adults, we tend to evaluate helpers depending on others’ needs; helping someone with a hard task may be more burdensome but also more prosocial than helping someone with an easy task. Despite growing evidence that infants are sensitive to the costs of others’ actions, whether such representations inform their evaluation of prosocial acts remains an open question. Here, we ask whether 16-month-olds’ preference between two helpers is sensitive to task difficulty. Infants preferentially reached for (Exp. 1), and looked at (Exp. 2), an agent who helped someone facing a high-cost task over an agent who helped someone facing a low-cost task. Such preference disappeared when the agents completed the same tasks in a self-serving context. These results suggest that infants use action cost not only to predict and explain others’ behaviors but also to evaluate others in prosocial contexts.

Inferring Knowledge from Behavior in Search-and-rescue Tasks

Theory-of-Mind inference is natural for humans but poses significant computational challenges. The core difficulty can be traced back to the exponential growth of paths to consider in planning given a mental state. In this paper we tackle this problem in a search-and-rescue task implemented in Minecraft. Our goal is to infer differences in knowledge from participants' continuous-time trajectory. By abstracting the spatio-temporal state space and the reward function together, we surface natural decision points, on which we compare the participants' behavior to myopic rational agents of varying knowledgeability. Collectively, the abstraction and rational agent analysis yield successful inference of participants' knowledge states and reveal distinct patterns of their exploratory behavior.

When and Why Do Reasoners Generalize Causal Integration Functions? Causal Invariance as Generalizable Causal Knowledge

The present paper reports an experiment testing two views of how reasoners learn and generalize potentially complex causal knowledge. Previous work has focused on reasoners’ ability to learn integration functions that best describe how pre-defined candidate causes combine, potentially interactively, to produce an outcome in a domain. This empirical-function learning view predicts that participants would generalize an acquired integration function based on similarity to stimuli they experienced in the domain. An alternative causal-invariance view recognizes that one’s current representation may not yield invariant/useable causal knowledge, that is, knowledge that holds true when applied to new circumstances. This view incorporates useable causal knowledge as a goal and deviation from causal invariance as a criterion for knowledge revision. It predicts that participants would re-represent causes such that they do not interact with other causes, even when in participants’ experience all (pre-defined) causes in that domain interact. Our results favor the causal-invariance view.

Cultural differences in analogical reasoning

According to a long-standing dogma, Westerners are more capable of thinking abstractly than East Asians. Here, we challenge this generalization by comparing US and Chinese adults in a paradigm case of abstract thinking: analogy. Chinese and American participants completed the most difficult set of Raven’s (2003) Standard Progressive Matrices (SPM), a widely used test of analogical reasoning. For each item, participants attempted to discern the analogical relationships between parts of an incomplete matrix to identify the correct way to complete it. Chinese participants produced significantly more correct answers on the SPM, indicating more successful analogical reasoning. We replicated this result with a second sample of Chinese participants. This cross-cultural difference remained significant when demographic factors were controlled. We predicted this difference, reasoning that East Asians’ sensitivity to context gives them an advantage over Westerners in various kinds of abstract thinking.

8- to 10-month-old infants extract non-adjacent dependencies from segmental information

Infants have been shown to be particularly adept at extracting so-called non-adjacent dependencies (NADs) from auditory linguistic input (for a review, cf. Mueller, Milne, & Männel, 2018). Mueller, Friederici & Männel (2012) showed that 3- to 4-month-olds readily learn arbitrary associations between specific non-adjacent syllables from an artificial language (e.g. fikato) and related this ability to individual differences in pitch processing. Here, we addressed the question of whether syllables are the building block NAD learning operates on, or whether smaller segmental units also allow for such structural generalizations (Nespor, Peña, & Mehler, 2003). In an oddball paradigm, 8- to 10-month-olds showed a differential ERP response to standard (e.g. bokäwu, liwase) and deviant (e.g. sogäle, kisüru) exemplars of an exclusively vowel-based NAD, and to an intensity manipulation. Implications of successful rule generalization from this smaller segmental unit and its relation to auditory sensory processing are discussed.

Pragmatic Reasoning Ability Predicts Syntactic Framing Effects on Social Judgments

Although subject-complement statements like “girls are as good as boys at math” seem egalitarian, the group in the complement position (boys) is often judged superior. Across two experiments, we examined whether this syntactic framing effect is driven by the ability to discern the pragmatic implications of the syntax. After reading subject-complement statements about the equal math ability of girls and boys or of unstereotyped social groups, participants judged which group was better at math. They also completed a novel measure of pragmatic reasoning ability for subject-complement statements. We found reliable framing effects regardless of stereotype strength, and these effects were uniquely predicted by pragmatic reasoning ability over and above other social-cognitive factors. Moreover, for unstereotyped groups, pragmatic reasoning ability predicted explicit recognition of, and resistance to, the influence of framing. Our findings point to pragmatic reasoning as both a mechanism driving syntactic framing effects and a tool for counteracting them.

Vocal patterns in schizophrenia: toward a cumulative approach

Voice atypicalities are a characteristic feature of schizophrenia, often associated with core negative symptoms. A recent meta-analysis identified atypicalities in pitch, speech rate, and pauses. However, heterogeneity across studies was large and replications almost nonexistent. Further, it is not clear whether vocal patterns are directly related to the mechanisms underlying the disorder and could therefore be found across languages, or not. In this study we implemented a more rigorously cumulative scientific approach by collecting and analyzing a large cross-linguistic corpus of voice recordings. We critically employed meta-analytic priors to systematically assess the replicability of previous findings, and modeled between-participants variability and cross-linguistic differences. We replicate previous meta-analytic findings across all languages for reduced pitch variability, while increased pause duration and lower speech rate results were replicated only in some languages. Most atypical voice patterns, thus, seem not to be distinctive of schizophrenia in general, but more specifically situated in linguistic/cultural differences.

Tangled Physics: Knots as a challenge for physical scene understanding

Humans have a remarkable capacity to make intuitive predictions about physical scenes. Recent studies suggest that this capacity recruits a general-purpose “physics engine” that reliably simulates how scenes will unfold. Here, we complicate this picture by introducing knots to the study of intuitive physics. Three experiments reveal that even basic judgments about knots strain human physical reasoning. Experiments 1 and 2 presented photographs of simple knots and asked participants to judge each knot’s relative strength. Strikingly, observers reliably ranked weaker knots as strong and stronger knots as weak. Experiment 3 presented photographs of tangled strings and asked participants whether the tangle forms a knot when pulled. When shown a tangle that would not form a knot when pulled taut, subjects were at or near chance discriminating knots from non-knots. These failures challenge the domain-generality of physical reasoning mechanisms, and perhaps suggest that soft-body phenomena recruit different cognitive processes than rigid-body physics.

We have nothing to fear but everything: A surprising effect of training set diversity on the generalization of learned fear

When an otherwise neutral stimulus signals an aversive event, we learn to use that information to avoid negative outcomes. Extended too broadly, however, fear learning can become maladaptive. In concept learning, people tend to restrict generalization to the narrowest category consistent with the learning set (Xu & Tenenbaum, 2007). We tested whether this pattern extends to fear conditioning. Shocks were associated with bananas (basic level group) or with fruit (superordinate group), but never with vegetables (both). In generalization, both groups were shown novel fruits and vegetables (no shocks). Surprisingly, while the basic level group’s Skin Conductance Response (SCR) reliably diminished from acquisition to generalization, the superordinate group’s SCR did not. Further, SCR to vegetables was greater in generalization for the superordinate group, suggesting that superordinate training encourages overgeneralizing threat beyond what concept learning models would predict. These novel findings have implications for both learning models and anxiety disorders (e.g., PTSD).

Sensitivity to geometric shape regularity in humans and baboons: A putative signature of human singularity

Among primates, humans are special in their ability to create and manipulate highly elaborate structures of language, mathematics or music. We show that this sensitivity is present in a much simpler domain: the visual perception of geometric shapes. We asked human subjects to detect an intruder shape among six quadrilaterals. Although the intruder was defined by an identical amount of displacement of a single vertex, the results revealed a geometric regularity effect: detection was considerably easier with most regular shapes. This effect was replicated in several tasks and in both uneducated adults and preschoolers. Baboons, however, showed no such geometric regularity effect even after extensive training. Baboon behavior was captured by convolutional neural networks (CNNs) but a symbolic model was needed to fit human behavior. Our results indicate that the human propensity for symbolic abstraction permeates even elementary shape perception and they suggest a new putative signature of human singularity.

Directionality Effects and Exceptions in Learning Phonological Alternations

The present study explores learning vowel harmony with exceptions using an artificial language learning paradigm. Participants were exposed to a back/round vowel harmony pattern in which one affix (prefix or suffix) alternated between /me/ and /mo/ depending on the phonetic feature of the stem vowels. In Experiment 1, participants were able to learn the behaviors of alternating and non-alternating affixes, but were more likely to generalize to novel affixes for non-alternating items than alternating items. In Experiment 2, participants were exposed to learning data that contained non-alternating affixes in prefix position while alternating affixes were all suffixes, or vice versa. Participants were able to extend the non-alternating affixes to novel items. Overall, the patterns of alternating affixes are harder to learn than patterns of exceptions, which aligns with previous results of a non-alternation bias. Our study raises the question of how biases towards exceptionality and directionality interact in phonological learning.

Can computers tell a story? Discourse Structure in Computer-generated Text and Humans

Text-generation algorithms like GPT-2 (Radford et al., 2019) and GPT-3 (Brown et al., 2020) produce documents which resemble coherent human writing. But no study has compared the discourse-linguistic features of artificial text with those of comparable human content. We used a sample of Reddit and news discourse as prompts to generate artificial text using fine-tuned GPT-2 (Grover; Zellers et al., 2019). Blind annotators identified clause-level discourse features (e.g., states and events; Smith, 2003), and coherence relations (e.g., contrast; Wolf and Gibson, 2005) in prompts and generated text. Across the >20,000 clauses, Grover recreates human word co-occurrence patterns and clause types across discourse modes. However, its coherence relations are shorter and of lower quality, with many nonsensical instances. Therefore, annotators could perfectly guess the human/algorithmic source of documents. Using a corresponding GPT-3 sample, we discuss aspects of generation that have and have not improved since Grover.
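
For readers unfamiliar with the generation step, the following minimal sketch shows prompt-conditioned sampling with the Hugging Face transformers library. The off-the-shelf "gpt2" checkpoint and the prompt are stand-ins chosen for illustration; the study itself used the fine-tuned Grover model, which is not reproduced here.

```python
# Minimal sketch of prompt-conditioned text generation with an off-the-shelf
# GPT-2 checkpoint (the study used a fine-tuned Grover model; "gpt2" here is a
# stand-in for illustration). Requires the Hugging Face `transformers` package.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "The debate over marijuana legalization has"   # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt")

# Nucleus sampling keeps generation varied while avoiding low-probability tails.
output_ids = model.generate(
    **inputs,
    max_length=120,
    do_sample=True,
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```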

Stereotypes as Bayesian Judgements of Social Groups

A stereotype is a generalization about people within a certain category––and category information is often used to make probabilistic predictions about people within a particular group. The current work examines whether stereotypes can be understood in terms of conditional probabilities as per Bayesian reasoning. For instance, the stereotype of Germans as efficient can be understood as the conditional probability of someone being efficient given that they are German. Whether such representations follow Bayes’ rule was tested in a replication and extension of McCauley and Stitt’s (1978) original studies. Across two experiments, we found that people's judgements of eight different social groups were appropriately Bayesian, i.e., their direct posterior predictions were in line with what Bayes' rule suggests they should be, given subjects’ priors and likelihood ratios. For any given social group, it was also the case that traits with a high calculated diagnostic ratio distinguished stereotypic from non-stereotypic traits.
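
A worked toy example of this Bayesian reading may help: the judged probability of a trait given group membership should follow from the reverse conditional and the two base rates, and the diagnostic ratio compares that posterior to the trait's base rate. The probability values below are made up for illustration and are not the study's data.

```python
# Minimal sketch of the Bayesian view of stereotypes: the stereotypic judgement
# P(trait | group) should follow from P(group | trait), P(trait), and P(group).
# All probabilities below are made-up illustrative values, not the study's data.

p_trait = 0.20              # prior: P(efficient) in the population at large
p_group = 0.05              # prior: P(German)
p_group_given_trait = 0.15  # likelihood: P(German | efficient)

# Bayes' rule: P(efficient | German) = P(German | efficient) * P(efficient) / P(German)
p_trait_given_group = p_group_given_trait * p_trait / p_group

# Diagnostic ratio: how much more likely the trait is in the group than overall.
diagnostic_ratio = p_trait_given_group / p_trait

print(f"P(trait | group)  = {p_trait_given_group:.2f}")
print(f"Diagnostic ratio  = {diagnostic_ratio:.2f}")
```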

Contribution of receptive field center and surround to repetition suppression in macaque visual area V2

Primate inferotemporal cortex (ITC) neurons respond with declining strength to repeated presentations of large natural images. This phenomenon - repetition suppression - has been assumed to arise at the level of ITC because ITC neurons possess the large receptive fields and sophisticated selectivity to recognize images as repetitions. It was recently discovered that V2 neurons exhibit repetition suppression under identical conditions. How do V2 neurons, with classical receptive fields encompassing only a small fraction of the image, recognize it as a repetition? One possibility is that they are sensitive to repetition of content outside the classical receptive field, in the surround. To assess this, we recorded neuronal responses to displays while independently controlling repetition of elements in the classical receptive field and the surround. We found that content in the surround contributed to repetition suppression and that this occurred relatively late in the response, consistent with being mediated by feedback.

Biological Motion Perception under Attentional Load

Biological motion perception is supported by a network of regions in the occipito-temporal areas, primarily in superior temporal sulcus (STS), and premotor cortex (PMC). How biological motion is processed outside the focus of attention and whether it is modulated by attentional load remain unknown. We investigated the bottom-up processing of biological motion under different levels of attentional load (high vs. low) with functional magnetic resonance imaging (N=13). In line with previous work, we found that fronto-parietal attention regions were significantly more activated when the attentional load was high than when it was low. Importantly, biological motion under low attentional load yielded activity in STS and PMC, whereas activity for biological motion under high load was restricted to low-level motion-sensitive areas. These results show that biological motion is processed outside the focus of attention and that this processing is modulated by attentional load.

Do humans recalibrate the confidence of advisers?

In collaborative tasks, humans can make better joint decisions by aggregating individual information in proportion to their communicated confidence (Bahrami et al., 2010). However, if people blindly rely on their partner’s confidence expressions, they could easily reach suboptimal solutions when their collaborator's confidence judgments are not calibrated to their performance, but for instance exhibit an overconfidence bias. Given that calibrated advisers are rated as more credible (Sah et al., 2013), we propose that prior experience with a collaborator will lead to a recalibration of their confidence judgements before incorporating their advice. In an online experiment, participants first viewed two other fictitious participants, one calibrated and one biased, perform a categorization task. Following this, participants completed a similar task by taking advice from just one of the two previously observed advisers on a given trial. We tested whether participants chose the adviser who had the trial-by-trial highest expressed or recalibrated confidence.

Categorization of robot animacy using implicit visual cues

Recognition of animacy is fundamental to human cognition, yet robots complicate this categorization, because they are non-living objects with human-like traits. We examined this categorization of robot animacy using speech balloons from comics, which require connecting to animate “stems” (speakers), or coercing inanimate objects to become animate (e.g., a talking toaster). Participants rated the text-image congruity of silhouettes of humans, inanimate objects, and robots paired with descriptive words placed in either a speech balloon or a label box. Overall, human and object text-image pairs were rated as more congruent than those with robots. However, a positive correlation suggested human-looking robots with balloons were more congruent than less human-looking ones, but such a graded congruency did not appear with labels. This suggested that speech balloons select for an animate stem compared to labels, but also that intuitions for animacy in robots fall along a gradient depending on their human-like traits.

Bonobos' (Pan paniscus) and chimpanzees' (Pan troglodytes) understanding of, and pupillary responses to, others' needs

Humans are uniquely impressive cooperators, and yet it remains unclear exactly which cognitive and motivational mechanisms set human cooperation apart. Our closest relatives, bonobos and chimpanzees, have also demonstrated a range of prosocial tendencies across experimental and observational contexts. Critically, however, we do not yet know whether their helping behavior, like that of humans, is motivated by an acute sensitivity to others’ needs. We investigated this question in a novel eye-tracking task, with a large sample of captive apes. While their gaze and pupils were tracked, apes viewed controlled videos of an agent reaching toward objects that ultimately would or would not be attainable without help. If apes are acutely sensitive to others’ needs, we predicted that they would show greater pupil dilation (arousal) when the agent could not complete a goal on his own. Our findings shed light on the mechanistic and evolutionary bases of human prosociality and empathy.

Mechanisms of early causal reasoning: Investigating infants' sensitivity to confounded information in a causal reasoning task, using EEG and eyetracking

Distinguishing spurious correlations from unconfounded causal evidence is a challenge in everyday reasoning. Adapting the classic “blicket detector” paradigm (Sobel et al., 2014), we investigate whether 15-17-month-old infants’ neural activity (theta oscillations) indicates that infants recognize when the information they are expecting or observing is confounded or not. By concurrently tracking infants’ eye-movements, we also investigate whether infants correctly infer the functionality of unknown objects (when such an inference is possible, i.e. when data is unconfounded) and predict future events based on these causal inferences. Data collection is ongoing (current N = 29) and preliminary analysis suggests infants show increased theta activity in anticipation of receiving un-confounded (as opposed to confounded) information. By relating the neural measures of information expectation to the accuracy of infants’ subsequent predictions, we hope to offer a unique insight into the mechanisms, development, and individual differences in early causal reasoning.

Humorous Judgments of Incongruity in Short Internet Videos

With an increased reliance on technology to exchange social communication, social media platforms have become vehicles for humorous content — often taking the form of short internet videos (Vine or TikTok). Dynel (2016) hypothesizes that short internet videos may be effective because they capitalize on contrast to promote humor — i.e., consistent with incongruity theory (see Morreal, 2012). Specifically, incongruity theory assumes that humor arises from a contrast between an expectation and a perceived deviation from that expectation. To test the effects of incongruity in short internet videos, we manipulated linguistic and paralinguistic content in common short internet videos from the defunct social media platform Vine. Results indicated that incongruence is extremely important for humor perception of short internet videos, as more linguistic and paralinguistic incongruity promoted higher social judgments of humor. The findings support the claim that incongruity is an extremely effective tool for short internet videos to communicate humor.

Revising the Scope of Linguistic Relativity: Language Influences Perception in Non-linguistic Tasks

According to linguistic relativity, the structure of a language influences its speaker’s cognition. To examine the generalization to non-linguistic tasks, we compared Korean and German native speakers. In Korean, a grammatical distinction forces speakers to discriminate between tight and loose spatial relations, while in German, this falls into a single semantic category. In visual priming and masking experiments, we found 1) direct and 2) indirect evidence for linguistic relativity. We observed 1) congruence effects in the discrimination of tight/loose spatial relations only for Korean speakers and 2) a reduced efficiency of metacontrast masking (which relies on spatial fit between stimulus and mask) for Korean speakers, as attention was automatically captured by varying distances, according to their grammar. With object-substitution masks (where tightness of fit does not play a role), Korean speakers no longer outperformed German speakers. In our study, the highly practiced characteristics of one’s mother tongue affected attention and, hence, perception.

A model of timing in simple anticipatory decisions

While most models of response times have focused on reactive response times, many of the decisions we make involve planning ahead and making anticipatory responses. We present an accumulator model of these anticipatory timing choices, where a decision maker must make a response at a specific time depending on when they expect an event to occur. This model is applied to simple perceptual decisions where participants must determine the trajectory of a stimulus and anticipate its future location. We manipulate the stimulus speed, its travel distance, and the length of time it is occluded, requiring the decision maker to mentally represent its position and motion. Generally, we find that participants' anticipatory responses tend to be too late much more often than too early. This pattern of results can be accounted for by a Wald accumulator model where the drift, threshold, and non-decision time change with the stimulus manipulations.
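
As a rough illustration of the modelling approach, the sketch below simulates a single-boundary Wald accumulator, in which response time is a non-decision time plus an inverse-Gaussian first-passage time. The parameter values, the mapping of stimulus speed onto drift rate, and the target time are illustrative assumptions rather than fitted values from the study.

```python
# Minimal sketch of a single-boundary Wald accumulator for anticipatory timing.
# With unit diffusion noise, the first-passage time of a drift v to threshold a
# is inverse-Gaussian (Wald) with mean a/v and shape a**2. Parameters are
# illustrative, not fitted values from the study.
import numpy as np

rng = np.random.default_rng(0)

def simulate_rts(drift, threshold, t0, n=10_000):
    """Response time = non-decision time + Wald first-passage time."""
    mean = threshold / drift          # mean first-passage time
    shape = threshold ** 2            # shape parameter (unit diffusion)
    fpt = rng.wald(mean, shape, size=n)
    return t0 + fpt

# A faster-moving (or shorter-occlusion) stimulus might be modelled with a
# higher drift rate, yielding earlier anticipatory responses on average.
rts_slow = simulate_rts(drift=1.0, threshold=1.5, t0=0.2)
rts_fast = simulate_rts(drift=2.0, threshold=1.5, t0=0.2)

target_time = 1.5   # illustrative moment at which the occluded stimulus reappears
print(f"slow: mean RT {rts_slow.mean():.2f}s, late responses {np.mean(rts_slow > target_time):.0%}")
print(f"fast: mean RT {rts_fast.mean():.2f}s, late responses {np.mean(rts_fast > target_time):.0%}")
```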

Inherence bias in explanation increases with age and cognitive impairment

People tend to explain events using inherent more than extrinsic factors, a phenomenon known as the inherence bias. This bias is hypothesized to be more pronounced when cognitive resources are scarce. Here, we tested an important prediction of this account: namely, that aging and cognitive impairment should increase the inherence bias in explanation. Participants were shown vignettes of surprising scientific discoveries, and were asked to generate and evaluate explanations for those events. Our results indicate that as age increased, participants were more likely to generate inherent explanations, though age did not lead participants to endorse more inherent explanations when generation was not required. Older adults with Mild Cognitive Impairment generated a similar proportion of inherent explanations as healthy adults on average, though they did not do so increasingly with age. These findings suggest that cognitive deficits due to aging can have downstream effects on how we engage in complex reasoning.

Can Action Bias the Perception of Ambiguous Auditory Stimuli?

According to the theory of common coding, actions are represented in terms of their sensory effects. Hence, performing or anticipating an action biases perception. Previous studies provided evidence for this notion by showing how the perception of ambiguous visual stimuli can be affected by concurrent actions. Here we investigated whether performing a directed action can affect the perception of ambiguous auditory stimuli in a gamified dual-task. In an online study, participants had to avoid obstacles in an endless runner game while classifying the pitch shift in an ambiguous sequence of Shepard tones. Response times indicate interference between both tasks, but pitch shift classifications seem to be unaffected by the motor task. Meanwhile, participants showed a strong compatibility effect between base pitch and pitch shift classifications, in line with a typical SMARC effect. We discuss possible reasons for the absence of perceptual modulations and implications for common coding approaches.

Who thinks wh-questions are exhaustive?

Asking and answering questions is a staple of human communication. To answer a question effectively, a hearer must interpret the speaker's intention given the specific question asked. ‘Wh’-questions like ‘Where can I get coffee?’ are underspecified for (non-)exhaustivity, i.e., how many answers must be provided to resolve the speaker's goal. Intuitions from the semantics literature report that questions are generally exhaustive, and non-exhaustive only in the context of specific linguistic factors (e.g., the modal ‘can’, certain ‘wh’-words). To test these assumptions, we collected question paraphrase ratings for naturally occurring root questions in variable linguistic contexts. In contrast to previous claims, we find that questions are not biased for exhaustivity. However, other prior observations are supported by the data. We argue that a full account of the observed distribution of meanings must integrate discourse factors like the hearer's estimate of the speaker's goal, alongside (or subsuming the effect of) linguistic cues.

Affordances in the wild: Anthropological Contributions to Embodied Cognitive Science

The usual approach to studying affordances is in controlled laboratory situations. While recognizing the value of controlled experimentation, I argue for the benefits of additionally considering affordances “in the wild,” i.e., in the context of real-world activity. As an example, I describe ethnographic observation about the introduction of cellphones in rural Uganda in the early 2000s, highlighting the unexpected, innovative uses that emerged in that unique context. This case, I suggest, illustrates the constitutive role of material, sociocultural constraints in the perception and realization of affordances which often go unacknowledged in experimental laboratory research. Theoretically, the case raises questions about the definition of affordances, in particular how narrowly or broadly to conceptualize their spatiotemporal dimensionality. And methodologically, it poses the question of how anthropology can contribute to embodied cognitive science and, more broadly, how experimental and observational approaches can help one another to further our understanding of psychological phenomena.

Semantic networks of space and time between deaf signers and Spanish listeners

Studies on the processing, functional, and social distribution of spoken and signed languages suggest partial overlaps between the mental lexicons of deaf signers and hearers. However, factors such as ontogenetic development, language acquisition conditions, the development of deaf culture, and the lexical repertoire available in each language suggest important differences. The aim was to explore the semantic networks of the conceptual domains of space and time in the Uruguayan deaf signing population and Spanish-speaking hearers. Sixty participants carried out a word association task in their respective languages and with semantically equivalent lexical items. Both groups showed an important formal similarity between their semantic networks. Mainly, a categorical-semantic analysis showed a bias of the hearers toward taxonomic and introspective semantic relationships. In contrast, the deaf signers showed a bias toward situational semantic relationships and entities. These findings suggest differences in concrete/abstract thinking between the two populations when organizing their mental lexicon.

Investigating the Impact of Metacognition on Working Memory and Procedural Learning Mechanisms

This study examined the influence of metacognition on declarative and reinforcement learning (RL) mechanisms. We collected data from 218 undergraduates using a within-subjects metacognitive manipulation of a stimulus-response (S-R) learning task created by Collins (2018). Contributions of declarative and RL mechanisms are assessed by differences in learning rate for blocks of 3 items versus 6 items, and by the rate of forgetting with an incidental post-test. If metacognition differentially affects declarative and RL mechanisms, we expect a three-way interaction between the task phase (learning/post-test), block type (long/short), and metacognition (before/during). Our results showed significant main effects of phase (F(1,217)=143.18, p=9.18e-32), length (F(1,217)=541.11, p=2.06e-104) and metacognition (F(1,217)=19.78, p=9.22e-06), with better performance during the learning phase, short blocks, and metacognitive manipulation. A significant phase by metacognition interaction (F(1,217)=8.11, p=4.45e-03) suggested that metacognitive monitoring improved test performance while having little effect on learning performance.

Impact of Living Environment on the Development of Cognitive Navigation Strategy in Chinese Urban and Rural Children

Previous studies suggest that the environment affects human spatial navigation ability. It remains an open question when and how the environment exerts influence on developmental changes in spatial cognition. To answer this question, the current study compared the general navigation ability and the specific ability in using allocentric vs. egocentric cues between urban and rural children in China. Our results showed urban and rural children did not differ in their general navigation ability when they could use various cognitive strategies in a free navigation task. However, the 9- to 12-year-old rural children performed significantly better than the age-matched urban children when they were instructed to use allocentric cues in a similar maze navigation task. No such difference was found between the 5- to 8-year-old rural and urban children, suggesting that living environment plays an important role in the development of the allocentric processing ability for navigation during middle childhood.

Preschool-aged children can use communicators' influence on others to infer what they know

How do we know what others know? Prior work has focused on our early-emerging ability to infer the knowledge of single agents. Yet we often infer others’ knowledge by observing interactions between agents. In particular, the ability to reason about what communicators know can help children identify knowledgeable teachers. The present study investigates whether preschool-aged children infer what communicators know from how their communication influences listeners. Children observed two scenarios where a listener failed to activate a toy before succeeding. In the Effective-Communicator scenario, a speaker spoke nonsense language to a listener after they failed but before they succeeded. In the Ineffective-Communicator scenario, a speaker spoke to a listener before they initially failed. By around age 5, children preferred the speaker in the Effective-Communicator scenario when asked which speaker knew how the toy works. These results suggest young children can infer communicators’ knowledge solely from the presence of communication and its influence on others.

More than nothing: Behavioural and neuronal correlates of numerosity zero in the carrion crow

Although representations of countable numerosities, i.e., the number of elements in a set, have been deciphered down to single neurons, the neuronal representations of numerosity zero (empty set, ES) remain largely unknown. We probed the behavioural and neuronal markers of numerosity zero in carrion crows. Crows were trained on a numerosity discrimination task with small numerosities including the ES. Behavioural performance functions exhibited a numerical distance effect in one crow, suggesting the quantitative handling of the empty set alongside countable numerosities. Single-cell recordings in the nidopallium caudolaterale (NCL) revealed a great proportion of neurons tuned to the ES. NCL neurons integrated the empty set into the neural number line, shown by neuronal distance and size effects. Neuronal representations were behaviourally relevant. These findings mark the first account of neuronal ES representations outside the mammalian taxon. They underline the pivotal role of the NCL not only for numerical but also for general cognition.

Individuals with High Kinesthetic Intelligence Experience an Active Embodiment Illusion Assessed with Pupil Dilation

The role of the Sense of Embodiment (SoE) in teleoperation is becoming prominent. The SoE affects both the operator's experience and the task performance. In this study we investigate how the individual level of kinaesthetic intelligence affects the SoE during an embodiment illusion experience (EIE). The experimental group comprises dancers and gymnasts who practise at a competitive level. We hypothesise that individuals with high kinaesthetic intelligence are more resilient to the EIE, due to their awareness of joint positions in space. Moreover, we designed an active EIE to better assess the sense of agency and self-location. Usually, EIEs propose static tasks which are appropriate to assess the sense of ownership, but cannot clearly assess the other two components of the SoE. Finally, for the first time, to the knowledge of the authors, the variation of pupil dilation was used as a psycho-physiological measure of the SoE.

More is not necessarily better – how different aspects of sensorimotor experience affect recognition memory for words

We investigated the effect of semantic information on word memory, using imageability and sensorimotor strength as predictors, with data from a mega-study of word recognition memory (Cortese et al., 2010; 2015), as well as from an online memory task. Memory performance was analysed in hierarchical linear regressions. Both sensorimotor strength and imageability had an effect on word memory performance, but not as strong as reported in previous literature. However, the effects were smaller when the memory task was unexpected, suggesting that the semantic effects are dependent on memory strategies (or context). Most importantly, different types of sensorimotor strength had a variable effect on memory, which was not in line with the prediction of the semantic richness effect, and highlighted the importance of a multi-dimensional approach to measuring and testing semantic experience, and its effect on cognitive processing. The findings have implications for the use of semantic variables in memory research.

“It Depends”: How Children Reason about Stable and Unstable Causes

Adults have been shown to favor stable causal relationships – those that hold robustly across background contexts – in their actions and causal/explanatory generalizations (Vasilyeva et al., 2018). Here we explore how this preference develops. We present results from one study with 141 4- to 7-year-olds investigating whether children pay attention to causal stability when they explain observations and design interventions in novel contexts. We report developmental shifts in reliance on causal stability in a range of inferential tasks, highlight the important role of perceived average causal strength in determining children’s causal preferences, and discuss the implications of our findings for theories of early causal learning. To our knowledge, this is the first study exploring the role of stability in children’s causal reasoning.

Do I trust you more if you speak like me?

We trust ingroups more than outgroups (Balliet et al. 2014, Psychological Bulletin), but to correctly identify group members we need reliable markers of group membership (Cohen 2012, Current Anthropology). Artificial language experiments show that linguistic tags can serve as such group markers (Roberts 2013, Language and Linguistics Compass). We now tested whether sharing a language also promotes trust, in comparison to other physical and cultural ingroup markers. We created an online alien game in which participants assumed the identity of an alien and played a simultaneous trust game (Berg et al. 1995, Games and Economic Behavior) with two other aliens exhibiting the ingroup versus outgroup marker types (artificial languages, body parts, costumes). GLM analyses showed that participants entrusted significantly more money to aliens sharing their own social marker, independently of marker type. Thus, a considerable number of people placed more trust in those who spoke or looked like themselves.

Investigating indirect and direct reputation formation in dogs and wolves

Reputation is a key component in social interactions of group-living animals. Considering dogs’ dependence on humans, it may benefit them to form reputations of humans to choose an appropriate partner with whom to associate. It is also unknown whether this ability is an effect of domestication or was inherited from their ancestor, the wolf. This study investigates whether dogs and wolves can form reputations of humans through indirect and/or direct experience in a begging situation. Seven wolves and six dogs participated in an experiment that comprised three parts: baseline, observation, and testing. In the observation phase, the subject saw a dog interact with two people – one generous and one selfish. The observer could then choose which person to approach in the test. The subjects were also tested after direct experience with the two people. Preliminary results suggest that dogs and wolves cannot form reputations of humans through indirect or limited direct experience.

Causal judgment in the wild

We use forecasting models for the 2020 US presidential election to test a model of human causal judgment. Across tens of thousands of simulations of possible outcomes of the election, we computed, for each US state, an adjusted measure of the correlation between a democratic victory in that state and a democratic victory at the national level. These scores accurately predicted the extent to which US participants (N=207, pre-registered) viewed victory in a given state as having caused Joe Biden to win the presidency. This supports the theory that people intuitively select as causes of an outcome the factors with the largest average causal effect on that outcome across possible worlds. This is the first evidence that the theory scales to real-world complex settings, and suggests a deep connection between cognitive processes for prediction and causal judgment.
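
The sketch below illustrates the kind of computation described: for each state, the correlation between a state-level Democratic win and a national Democratic win is taken across simulated outcomes. The simulation matrix, state labels, and the majority rule for the national outcome are toy stand-ins for the real forecasting model, and the adjustment applied in the study is omitted.

```python
# Minimal sketch: for each state, correlate a Democratic win in that state with
# a Democratic win nationally across simulated election outcomes. The binary
# simulation matrix below is randomly generated as a stand-in for the output of
# an actual forecasting model.
import numpy as np

rng = np.random.default_rng(1)
n_sims, n_states = 10_000, 5
state_names = [f"state_{i}" for i in range(n_states)]   # hypothetical labels

# 1 = Democratic win in that state on that simulation run (toy probabilities).
state_wins = rng.random((n_sims, n_states)) < rng.uniform(0.3, 0.7, size=n_states)

# Toy national outcome: Democrats win if they carry a majority of these states.
national_win = state_wins.sum(axis=1) > n_states / 2

# Phi coefficient (Pearson correlation of two binary variables) per state.
for name, wins in zip(state_names, state_wins.T):
    phi = np.corrcoef(wins.astype(float), national_win.astype(float))[0, 1]
    print(f"{name}: correlation with national victory = {phi:.2f}")
```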

Student collaboration during code tracing activities

Learning to program is challenging because it involves novel skills. In contrast to the majority of work focusing on code generation, we target the skill of code tracing. Code tracing involves simulating the high-level actions a computer takes when it executes the program, including the flow of execution through it and how variable values change as a result. Code tracing supports program comprehension and generation. However, many students do not code trace effectively. Our project investigates the utility of peer tutoring for the learning of code tracing. While this approach remains untested in this context, it has successfully been used in other domains to improve student learning. We will present qualitative data from a case study involving students tutoring each other to code trace. This work is the first step in the context of a broader project focused on identifying ways to help students learn to code.
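
As a concrete illustration of the target skill (not drawn from the study's materials), the snippet below shows the kind of trace a student might produce by hand, recording the flow of execution and how variable values change:

```python
# Illustrative code-tracing example (not from the study's materials): the
# comments on the right record the trace a student would write out by hand,
# i.e., the order of execution and the value of each variable after each line.
total = 0            # total = 0
for i in range(3):   # i takes the values 0, 1, 2 in turn
    total += i * 2   # i=0: total = 0; i=1: total = 2; i=2: total = 6
print(total)         # prints 6
```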

Do Judgments of Learning and Judgments of Inference Enhance Text Learning?

The present study investigated whether judgments of learning (JOLs) and judgments of inference (JOIs) enhance the retention and transfer of previously studied text (backward effect) and newly studied text (forward effect). Participants read two different passages, presented as Sections A and B. After studying Section A, participants made either JOLs or JOIs on Section A, while the control group read texts without any interim judgments. Then all groups studied Section B and took a final test for Sections A and B. For Section A, there were no significant performance differences among the groups, showing no backward effect of metacognitive judgments on text learning (neither JOLs nor JOIs). In contrast, for Section B, the JOI group outperformed both the JOL and control groups on the transfer test, indicating a forward effect of JOIs on text learning. Metacognitive judgments with a higher-order learning goal appear to help subsequent learning of new material.

Students Prefer to Learn from Figures that Include Spatial Supports for Comparison

Visual comparison is used in education to convey important commonalities and differences. This process is more effective when the figures are spatially aligned so that the corresponding parts and relations are maximally clear (direct placement) (Matlen, Gentner, & Franconeri, 2020). Yet science textbooks often fail to follow this principle when arranging figures meant to be compared (Jee et al., in prep)—perhaps in service of visual appeal. To explore whether this choice in fact maximizes visual appeal, we gave middle-school students illustrations characteristic of textbook figures, along with modified versions that followed direct placement principles. Students were significantly more likely to choose the direct placement version when given the goal of helping other students see differences among the figures (M=94%), than when given the goal to make the figure “look nice” (M=61%). These findings suggest that direct placements improve the educational value of a figure without sacrificing its aesthetic appeal.

Do Time Constraints Re-Prioritize Attention to Shapes During Visual Photo Inspection?

People's visual experiences are easy to examine along natural language boundaries, e.g., by categories or attributes. However, it is more difficult to elicit detailed visuospatial information about what a person attends to, e.g., the specific shape of a tree. Paying attention to the shapes of things not only feeds into tasks like visual category learning, but also enables us to differentiate similarly named objects and to take on creative visual pursuits, like poetically describing the shape of a thing, or finding shapes in the clouds or stars. We use a new data collection method that elicits people's prioritized attention to shapes during visual photo inspection by asking them to trace important parts of the image under varying time constraints. Using data collected via crowdsourcing over a set of 187 photographs, we examine changes in patterns of visual attention across individuals, across image types, and across time constraints.

Slovaks in Czechia: L1 Attrition and L2 Acquisition in Two Mutually Intelligible Languages

The paper addresses both issues of first language (L1) attrition and second language (L2) acquisition in the context of two mutually intelligible languages from a psycholinguistic perspective. It summarises results of a study examining how native speakers of Slovak living long-term in Czechia process and produce Slovak and Czech cognates and noncognates. Two experimental sessions consisting of lexical decision task and picture naming task were conducted once with Slovak stimuli and once with Czech stimuli. The results showed that Slovak noncognates were processed and produced more slowly by Slovaks who use their L1 less, and their L2 more. Analogically, this subgroup processed and produced Czech noncognates relatively faster. Thus, the experiments give evidence of (a) L1 attrition (online processing difficulties), and (b) the shift towards L2 patterns depending on the amount of L1 and L2 use even in speakers of two very close and mutually intelligible languages.

Cognitive Supports for Objective Numeracy

Political ideology leads educated adults, especially the highly numerate, to selectively reason about numbers that support their beliefs (“motivated numeracy”). We investigated whether supports that help children’s quantitative reasoning (number-lines) might also help political partisans. To test this, we asked 429 adults to interpret fictional data, in table or number-line format, about the effect of gun control on crime or the effect of a skin cream on rashes. We found that data presented in number-line formats yielded greater accuracy than table formats, controlling for numeracy skills (χ2(1) = 21.88, p < .001), regardless of whether the true interpretation of the data affirmed, was neutral to, or disaffirmed participants’ political outlooks. Solving table problems after number-line problems yielded greater accuracy compared to solving table problems first (χ2(1) = 4.78, p < .005), suggesting number-line practice is educational. Our research has important implications for communicating policy data and improving objectivity.

Mutual Exclusivity as Competition in Cross-situational Word Learning

Children learn word meanings by making use of commonalities across the usages of a word in different situations. However, early word learning experiences have a high level of uncertainty. For a word in an utterance, there are many possible meanings in the environment (referential uncertainty). Similarly, for a meaning, there are multiple possible words in the utterance (linguistic uncertainty). We propose a general framework to investigate the role of mutual exclusivity bias (asserting one-to-one mappings between words and their meanings) in early word learning. Through a set of computational studies, we show that to successfully learn word meanings under uncertainty, a model needs to implement two types of competition: words competing for the association to a meaning reduces linguistic uncertainty, and meanings competing for a word limits referential uncertainty. Our work highlights the importance of an algorithmic-level analysis to shed light on different mechanisms that implement a computational-level theory.
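
A minimal sketch of the two competitions may clarify the proposal: associations are strengthened in proportion to a normalisation over meanings (meanings competing for a word) and a normalisation over words (words competing for a meaning). The toy corpus and update rule below are illustrative and are not the specific model evaluated in the paper.

```python
# Minimal sketch of cross-situational word learning with two competitions:
# normalising over meanings (meanings compete for a word) and over words
# (words compete for a meaning). The toy corpus and update rule are
# illustrative, not the specific model from the paper.
import numpy as np

words = ["ball", "dog", "cup"]
meanings = ["BALL", "DOG", "CUP"]
# Each learning situation pairs some words with some candidate referents.
situations = [
    (["ball", "dog"], ["BALL", "DOG"]),
    (["ball", "cup"], ["BALL", "CUP"]),
    (["dog", "cup"], ["DOG", "CUP"]),
] * 20

A = np.ones((len(words), len(meanings)))   # word-by-meaning association strengths

for ws, ms in situations:
    wi = [words.index(w) for w in ws]
    mi = [meanings.index(m) for m in ms]
    sub = A[np.ix_(wi, mi)]
    # Meanings compete for each word (row-normalise) ...
    align_m = sub / sub.sum(axis=1, keepdims=True)
    # ... and words compete for each meaning (column-normalise).
    align_w = sub / sub.sum(axis=0, keepdims=True)
    # Strengthen associations in proportion to both competitions.
    A[np.ix_(wi, mi)] += align_m * align_w

# Mutual-exclusivity-like behaviour: each word ends up dominated by one meaning.
for w, row in zip(words, A):
    print(w, "->", meanings[int(np.argmax(row))], np.round(row / row.sum(), 2))
```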

Knowledge-Gap Awareness as Mediating Cognitive Mechanism in Tool-Mediated Learning in Computer Science: A Multi-Method Experimental Study

Interaction with environmental resources such as technological tools facilitates conceptual learning and preparation for future learning. Writing and compiling code, failing, and trying again fosters deep understanding of computational concepts. This study investigates knowledge gap awareness as a potential mediating cognitive mechanism in the learning process from situated interaction with a technological tool to internalized conceptual understanding. Students engaged in a tool-mediated or tool-dissociated learning activity on functional programming and were subsequently assessed on their learning, again under tool-mediated or tool-dissociated conditions. Students’ assessment performances were triangulated with their open comments and questionnaire responses on the study’s learning and assessment activities. The findings support the assumption that tool mediation facilitates knowledge gap awareness, which in turn facilitates conceptual learning.

Category learning in preschool and primary school children: The use of rule-based and similarity-based strategies

Categories can be learned through different strategies. Sometimes we may use abstract rules to categorize objects, and other times we may rely on the perceptual similarity among stimuli. The ability to categorize objects based on a common pattern develops from early childhood and exhibits systematic age differences. Numerous studies have demonstrated that younger children rely on similarity-based processes, while older children employ rule-based categorization strategies (Miles & Minda, 2009; Rabi & Minda, 2014; Deng & Sloutsky, 2016). We used a model-based approach to investigate individual differences in category learning in pre-school children (6 years old) and primary school children (6-8 and 10-11 years old). Our results suggest that older children were more likely to employ a rule-based categorization strategy and demonstrated better learning outcomes. Lastly, we employed several computational models of categorization to uncover the properties of the process that may best account for the obtained results.

Using playback to investigate multimodal signalling of attractiveness in ring doves (Streptopelia risoria)

Multimodal signals consist of multiple components in multiple sensory channels and are common in animal courtship. Signal components can carry unique or redundant information about the courting animal. The response to such multimodal displays might additionally reveal multisensory integration such that the response to the whole display is not simply the sum of the responses to the individual parts. In this study, we used high-quality audiovisual recordings of courting male ring doves and measured female behavioural responses to video playback. 21 females were split into three conditions: one featured multimodal, audiovisual playback, while in the other two, either the visual or auditory courtship component was occluded using familiar stimuli (foliage; vacuum cleaner sound). We analysed female behaviours associated with sexual stimulation and compared frequency of behaviours across conditions and between playback and control intervals. Additionally, we measured blood levels of oestradiol before and after testing.

Causal reasoning under time pressure: testing theories of systematic non-normative reasoning patterns

While research indicates that people are skilled causal reasoners, systematic deviations from the normative causal Bayesian network model have been observed. These include Markov violations, failures to ‘explain away’, and conservative responding. Different processes have been posited to account for these violations: sampling, associative reasoning, and heuristics. These processes entail effects of response time. To test the relationships between these theories, normative violations, and reasoning time, we conducted a causal reasoning study employing time pressure manipulations and response time measurements. Our results show that time pressure decreases overall accuracy. Crucially, we find that time pressure does not affect the magnitude of Markov independence violations. This is not what many existing explanations would predict. We find evidence that participants’ responses result from two separate cognitive processes and that time pressure modulates their relative contribution to responses. Hence we provide an explanation of non-normative reasoning patterns based on a mixture of cognitive processes.
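
For reference, the normative benchmark against which Markov violations are measured can be written down directly. The sketch below uses an illustrative common-cause network (one cause, two effects) with made-up parameters: given the cause's state, the other effect should not change the judgement, whereas it is informative when the cause is unobserved.

```python
# Worked example of the Markov property in a common-cause network C -> E1, C -> E2:
# once the state of C is known, E1 should be irrelevant to judgements about E2.
# The parameter values are illustrative.
p_c = 0.5                       # P(C = 1)
p_e_given_c = {1: 0.8, 0: 0.1}  # P(Ei = 1 | C), same for both effects

def p_e2_given_c_e1(c, e1):
    """P(E2 = 1 | C = c, E1 = e1) under the causal Bayes net."""
    # E1 and E2 are conditionally independent given C, so E1 drops out.
    return p_e_given_c[c]

def p_e2_given_e1(e1):
    """P(E2 = 1 | E1 = e1) when C is NOT observed: E1 is now informative."""
    # Posterior over C given E1, then marginalise over C.
    like1 = p_e_given_c[1] if e1 else 1 - p_e_given_c[1]
    like0 = p_e_given_c[0] if e1 else 1 - p_e_given_c[0]
    post_c1 = like1 * p_c / (like1 * p_c + like0 * (1 - p_c))
    return post_c1 * p_e_given_c[1] + (1 - post_c1) * p_e_given_c[0]

# Normative benchmark: identical values on the first two lines (Markov property);
# a participant whose judgements differ there shows a Markov violation.
print(p_e2_given_c_e1(c=1, e1=1))   # 0.8
print(p_e2_given_c_e1(c=1, e1=0))   # 0.8  (E1 makes no difference given C)
print(round(p_e2_given_e1(1), 3))   # about 0.72: without C observed, E1 is informative
```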

Modeling "spatial purport of perceptual experience": egocentric space perception in a semi-realistic 3D virtual environment

Egocentric space perception is multimodal, closely tied to action and bodily movements and has an inherent phenomenal dimension. One prominent account, provided by Rick Grush, has postulated the posterior parietal cortex as the key neural area. A computational model based on the Kalman filter has been proposed to account for the operation of this brain region, underscoring the importance of bodily skills for perceiving spatial properties. The current study provides a first direct simulation of this model in a semi-realistic 3D virtual environment. The goal of the simulation was to develop an agent with a realistic ability for egocentric space perception based on a neural approximation of the Kalman filter. To achieve this goal, we use machine learning techniques, with a strong focus on unsupervised methods of reinforcement learning. The resulting agent is tested behaviorally on ecologically plausible tasks to evaluate its internal, learned representations. The poster presents simulation results and discusses the model.
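
For readers unfamiliar with the underlying estimator, the sketch below is a minimal one-dimensional Kalman filter of the general kind the model builds on: a self-motion prediction is combined with a noisy egocentric observation, weighted by the Kalman gain. The dynamics, noise values, and scenario are illustrative and say nothing about the actual agent or virtual environment used in the study.

```python
# Minimal 1-D Kalman filter of the kind the model builds on: the agent tracks
# the egocentric position of an object by combining a motion prediction with a
# noisy sensory observation. Dynamics and noise values are illustrative.
import numpy as np

rng = np.random.default_rng(2)

# True object position drifts as the (simulated) agent moves; observations are noisy.
process_var, obs_var = 0.05, 0.5
true_pos, velocity = 0.0, 0.3

x_hat, p_var = 0.0, 1.0    # estimate of position and its uncertainty
for t in range(20):
    # World update (hidden from the agent).
    true_pos += velocity + rng.normal(0, np.sqrt(process_var))
    z = true_pos + rng.normal(0, np.sqrt(obs_var))   # noisy egocentric observation

    # Predict step: propagate the estimate with the known self-motion command.
    x_hat += velocity
    p_var += process_var

    # Update step: weight the observation by the Kalman gain.
    k = p_var / (p_var + obs_var)
    x_hat += k * (z - x_hat)
    p_var *= (1 - k)

    print(f"t={t:02d} true={true_pos:5.2f} estimate={x_hat:5.2f} gain={k:.2f}")
```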

Phylogenetic map of vocal learning in parrots

Vocal learning is considered a crucial component of human language. The ability for vocal learning is rare and among birds has been detected only in songbirds, hummingbirds, and parrots. Parrots are probably the most advanced vocal learners who learn new vocalisations throughout their lives and are known for their ability to imitate human speech. Thus parrots present an intriguing model to shed light on how human language evolved. However, little is known about how widely vocal learning is distributed in Psittaciformes, an avian order comprising 399 species. In the past decade, surveying behaviour from online video repositories has become a promising research tool to investigate animal behaviour. In this study, we conducted a YouTube survey and provided an overview of the phylogenetic distribution of (allospecific) vocal learning in parrots to enhance our understanding of the evolution of language. We discuss why some parrot species are better imitators than others.

The impact of interface alignment structure on aesthetic appreciation and usability rating

Although interacting with visual displays of websites is a central part of our everyday online practices, little is known about the impact of geometric principles on users’ perceptions. In an exploratory study, canonical interface layout structures were extracted from websites for different purposes, such as education, medicine or finance, amongst others. Preference for the obtained prototypes was then rated in a quasi-experiment (n=65) following aesthetic and usability criteria (Lavie & Tractinsky, 2004). Our results indicate that vertical and horizontal alignment properties, such as different degrees of symmetry and complexity, shape the rating of expressive and creative aesthetics, as well as perceived usability. This contribution aims at shedding light on geometric constraints in UX analysis and giving an outlook on potential impacts of preference on content comprehension in an ecological setting.

Language representations in L2 learners: Toward neural models

Using computational models, we investigated how the language background (L1) of bilinguals influences the representation and use of the second language (L2). Using the essay portion of the International Corpus Network of Asian Learners of English (ICNALE), we compared variables indicating syntactic complexity in learners' L2 production and used them to predict L1. We then trained neural language models based on BERT to predict the L1 of these English learners. Results showed the systematic influence of L1 syntax properties on English learners' L2 production, which further confirmed integrations of syntactic knowledge across languages in bilingual speakers. Results also showed that neural models can learn to represent and detect such L1 effects, while multilingually trained models showed no advantage in doing so.
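
The sketch below shows the general shape of such a classifier using generic Hugging Face transformers classes: an essay is tokenised and passed through a BERT encoder with a classification head over candidate L1 labels. The checkpoint name, label set, and example sentence are placeholders, and the study's own fine-tuned models and training procedure are not reproduced.

```python
# Minimal sketch of a BERT-based L1 classifier of the kind described above:
# an English essay goes in, a predicted L1 label comes out. The checkpoint,
# label set, and example essay are placeholders; the study's own fine-tuned
# models and training procedure are not reproduced here.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

l1_labels = ["Chinese", "Japanese", "Korean", "Thai", "Indonesian"]  # illustrative
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(l1_labels)
)  # untrained classification head: fine-tuning on ICNALE essays would be required

essay = "In my opinion, part-time job is good experience for university student."
inputs = tokenizer(essay, truncation=True, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.softmax(logits, dim=-1).squeeze()
print({lab: round(float(p), 3) for lab, p in zip(l1_labels, probs)})
```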

Chimpanzees utilize video information when facing its referent later in another room

In humans, out-of-sight events and objects can be referred to through language. Such referentiality serves a function to convey information that could be used when we face the referents afterwards. However, it is unclear whether the comprehension of such referentiality is shared by non-linguistic animals. To address this, we explored whether chimpanzees would utilize video information when they face its referent in another room later. They first watched a food-hiding event (food being hidden in either a green or red cup) through video in one room. They then moved to the next room and received a choice test to locate the food. Two out of five chimpanzees performed better than expected by chance. This suggests that like humans, chimpanzees can utilize referential information across time and space between the two rooms, at least based on correspondence between objects in the video and those in reality.

Acquiring the meaning of conditionals

Children acquire conditionals late for reasons that are poorly understood. One possibility is that conditionals have multiple meanings. For instance, the statement “If he goes out without an umbrella, he will get wet” is logically true when he goes out without an umbrella and he gets wet (conjunction), when he goes out with an umbrella and does not get wet (bi-conditional) and when he goes out with an umbrella and (still) gets wet (conditional). Here, we employ a new paradigm to test these interpretations in young children. Eighty 3- to 6-year-olds were asked to match an if-then statement with one of two pictures: one depicting a scenario where the conditional is false vs. one of the three scenarios where the conditional is true. Results show that children had a conjunctive interpretation from age 3, but the development of the other two interpretations is protracted until after age 5, with great inter-subject variation.

The Learnability of Goal-directedness in Jazz Music

Musicians and listeners perceive dependency structures between musical events such as chords and keys. Music theory postulates the goal-directedness of such dependencies, which manifests in formal grammar models as right-headed (head-final, left-branching) phrase structure. Goal-directedness has a direct cognitive interpretation; dependencies that point forward in time can be understood as creating expectation, and the empirical correlates of this relationship are a topic of current psychological research. This study presents a computational grammar model that represents the abstract concept of headedness but does not encode properties specific to music. Bayesian grammar learning is applied to infer a grammar for Jazz and its headedness proportions from a corpus of Jazz-chord sequences. The results show that the inferred grammar is right-headed. A second simulation using artificial data was conducted to verify the correct functionality of the headedness induction. The goal-directedness of Jazz harmony is thus demonstrated to be learnable without music-specific prior knowledge.
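
As a drastically simplified illustration of the inference target (not the grammar model itself), one can treat each harmonic dependency as right- or left-headed and infer the right-headedness proportion with a Beta-Bernoulli model; the dependency counts below are invented.

```python
# Drastically simplified stand-in for the grammar-learning result: treat each
# harmonic dependency as either right-headed (1) or left-headed (0) and infer
# the headedness proportion with a Beta-Bernoulli model. The counts are made up;
# the actual study infers a full grammar from Jazz chord sequences.
from scipy import stats

alpha0, beta0 = 1.0, 1.0              # uniform prior over the right-headedness proportion
right_headed, left_headed = 310, 95   # illustrative dependency counts

posterior = stats.beta(alpha0 + right_headed, beta0 + left_headed)
lo, hi = posterior.interval(0.95)
print(f"posterior mean P(right-headed) = {posterior.mean():.2f}")
print(f"95% credible interval = ({lo:.2f}, {hi:.2f})")
```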

Biological motion perception in perceptual decision-making framework: ERP evidence in humans

Neurophysiological studies in non-human primates suggest that perceptual decision-making consists of two stages of information processing: sensory evidence accumulation and response selection. Recent work with humans shows that the sensory evidence accumulation process can be tracked with the CPP component derived from EEG. As most studies in the field use simple motion stimuli, it remains unclear whether these processes generalize to more complex and socially important stimuli such as biological motion. In the present study, we used point-light displays with 4 levels of coherence and recorded EEG as human subjects (N=14) performed a perceptual decision-making task. Our results show that biological motion elicited a CPP component whose peak rate tracks the coherence level of the stimuli, albeit with a later onset than observed previously. These results suggest that similar decision-making mechanisms may play a role in biological motion perception.

A systematic investigation into team coordination breakdowns

A critically understudied aspect of team adaptation is the phenomenon of coordination breakdowns (CBs). CBs are characterized by a temporarily diminished ability to function effectively as a team. However, team research currently lacks robust methods for identifying and anticipating when teams transition from functioning effectively to a CB. With the current study, we aim to deepen our understanding of how team coordination dynamics across various interaction modalities reflect CBs and effective teamwork, utilizing a three-pronged research approach. First, we used audiovisual data from four-person teams involved in a stressful collaborative game task to manually identify CBs. Second, we applied team coordination dynamics measures to physiological and speech data obtained during the task to computationally identify CBs. Third, the latter output was used as input for transition anticipation methods, generating computationally anticipated CBs. Our findings contribute theoretically and methodologically to the systematic investigation of CBs.

Kea show three signatures of domain-general inference

Domain-general thought requires an ability to combine information from different cognitive domains within a single judgement. We presented kea parrots (Nestor notabilis) with probabilistic choice tasks that required them to make predictions about which of two hidden samples was most likely to contain a rewarding token. Over the course of three experiments, we found that kea used relative rather than absolute quantities to make their sampling predictions and could integrate either knowledge about a physical barrier or demonstrators’ sampling biases to adjust their judgements. This work provides the first evidence for domain-general statistical inference outside of humans and the great apes. This suggests that at least two structurally distinct brains have independently evolved an ability to integrate information, highlighting how comparative cognition may inspire the development of novel artificial general intelligence systems.

Modelling the production effect in recognition memory

Memory is reliably enhanced for information read aloud compared with information read silently—this is known as the production effect. Theoretical accounts of this effect have been largely verbal in nature, with few exceptions, yet its robustness (and that of related phenomena) suggests that it is worth integrating into existing computational approaches to memory. A leading account of the production effect proposes that production leads to encoding of additional features at study and that these features are available at test to assist retrieval, conferring the observed memory benefit. We implement a version of this account into the Retrieving Effectively from Memory (REM) computational framework and examine its ability to capture key phenomena associated with the production effect. We compare and contrast the current implementation in REM with a pre-existing implementation of this effect in MINERVA2, in addition to discussing alternative conceptualizations and future work.

Does the spacing effect depend on prior knowledge? Evaluating the role of word familiarity in learning from spaced vs. massed schedules

Spacing out information promotes retention more than massing information – a robust finding in psychological science. Research on the spacing effect has primarily manipulated aspects of the learning schedule (e.g., item repetitions, retention interval). Limited work has considered how familiarity with the to-be-learned information impacts the spacing effect. The current study addressed this gap by examining how word frequency/familiarity affects retention on a massed or spaced schedule. One-hundred adults were presented with 24 high familiarity (e.g., apple), low familiarity (e.g., vestige), or nonsense (e.g., blicket) words on a massed and spaced schedule. Retrieval was tested after a 5-minute delay, revealing a significant spacing effect regardless of word familiarity. Furthermore, overall performance was significantly greater for highly familiar items. These results suggest that spacing is effective regardless of general word familiarity. Future studies will assess how learners’ prior knowledge of test items – as measured by a vocabulary test – impacts the spacing effect.

Growing knowledge culturally across generations to solve novel, complex tasks

Knowledge built culturally across generations rests on language, but the power and mechanisms of language as a means of cultural learning are not well understood. We take a first step towards reverse-engineering cultural learning through language. We developed a suite of complex, high-stakes video games, which we deployed in an iterated learning paradigm. Game participants were limited to only two attempts (lives) per game, after which they wrote a message to a future participant who read the message before playing. Knowledge accumulated gradually across generations, with later generations advancing further in the games while performing more efficient actions, following learning trajectories strikingly similar to those of isolated, immortal individuals. These results suggest that language is sufficient for accumulating the diverse repertoires of knowledge acquired in these tasks. The video game paradigm is thus a rich test-bed for theories of cultural transmission and learning from language.

The influence of the ability to rely on an external store on value-directed remembering

Whether through setting reminders on a smartphone, or creating grocery lists, we often rely on external aids to skirt the accuracy/capacity limitations of internal memory. When relying on internal memory, memory for valuable information is better than for less valuable information. Across two preregistered experiments, we investigated how the availability of an external aid influences internal memory for high- and low-value information. We presented participants with to-be-stored words paired with values and assessed recall. Perceived external store availability and word value were manipulated within-participants. Recall was significantly higher for high-value items and when participants knew not to expect access to the external store. Critically, we found a significant reduction in the value effect when participants were told that they could use the external store, suggesting a reduction in the differential encoding of information by value. Results highlight the potential cost to memory for high-value information when external stores are available.

Am I tone-deaf? Assessing pitch discrimination in 700,000 people

Congenital amusia of pitch (tone-deafness), which affects ~1.5% of the population, involves a deficit in pitch processing affecting the perception of musical melody and some speech contrasts. Lay knowledge of tone-deafness considers the phenomenon to be categorical, as does prior work contrasting ‘amusics’ to ‘controls’, designated as such by thresholds on diagnostic tests. Is amusia a qualitative break from normal pitch discrimination, or does it represent the extreme end of a continuously distributed skill? Large-scale datasets, combined with theoretically motivated tools for extracting latent measures of ability, can answer this question. We studied individual differences in pitch discrimination in 700,000 people using Bayesian hierarchical diffusion models. We found no evidence for a categorical deficit: pitch perception ability was normally and continuously distributed. We additionally report preliminary findings on pitch perception ability as a function of age, gender, native language, musical experience, and self-assessments of tone-deafness.
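
A schematic of the kind of Bayesian hierarchical diffusion specification used for extracting a latent discrimination ability per participant; the parameterization and priors below are generic placeholders, not the study's model.

```latex
% Schematic hierarchical diffusion model, trial j of participant i
% (parameter names and priors are illustrative, not the study's specification)
\begin{align}
(\mathrm{RT}_{ij},\ \mathrm{choice}_{ij}) &\sim \mathrm{Wiener}(\alpha_i,\ \tau_i,\ \beta_i,\ \delta_{ij}) \\
\delta_{ij} &= \nu_i \, \Delta\mathrm{pitch}_{ij} \\
\nu_i &\sim \mathcal{N}(\mu_\nu,\ \sigma_\nu^{2}) \\
\alpha_i &\sim \mathcal{N}^{+}(\mu_\alpha,\ \sigma_\alpha^{2}), \qquad
\tau_i \sim \mathcal{N}^{+}(\mu_\tau,\ \sigma_\tau^{2})
\end{align}
```

Under a specification of this sort, a continuous, unimodal population distribution of the latent ability ν, rather than a bimodal one, is the pattern consistent with the reported absence of a categorical deficit.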

Verbal working memory capacity modulates category representation

Previous studies investigating the influence of a working memory load on category learning have yielded mixed findings. The current study investigated the role of increased working memory load in the process of encoding two artificial categories with a rule-plus-similarity structure. During category learning, adults were trained with a dual task that integrated the category learning task with a secondary working memory task. Participants were instructed to retain verbal (auditorily delivered) information while learning the categories in each trial. The learning phase was followed by recognition and categorization tasks evaluating their learning of the categories and memory for exemplar features. The results indicated that verbal working memory load affected category learning and representation compared to the control condition, in which participants learned the categories without interference. The dual-task paradigm disrupted attention optimization and the formation of a rule-based category representation while increasing the probability of similarity-based encoding.

Aging and Social Robots: How Overspecification Affects Real-Time Language Processing

Despite the rise in communicative technologies for healthy aging, little research has focused on how effectively older adults process language spoken by artificial agents. We explore whether a robot's redundant (but potentially helpful) descriptions facilitate real-time comprehension in younger and older listeners. Gaze was recorded as participants heard instructions like "Tap on the [purple/closed] umbrella" for a display containing eight unique objects. We manipulated the description (no-adjective, color-adjective, state-adjective) and the visual context, specifically whether there was another object bearing the property denoted by the adjective (purple/closed notebook). Relative to the no-adjective condition, redundant color adjectives speeded comprehension when they uniquely identified targets, whereas (less-salient) state adjectives always impeded comprehension. No age-related differences were observed. Paralleling human-human studies, language processing in human-robot communication is facilitated when salient information narrows visual search. Together, these findings help inform the future design of communicative technologies.

Evaluating infants’ reasoning about agents using the Baby Intuitions Benchmark (BIB)

Young infants reason about the goals, preferences, and actions of others. State-of-the-art computational models, however, still fail in such reasoning. The Baby Intuitions Benchmark (BIB) was designed to test agency reasoning in AI using an infant behavioral paradigm. While BIB’s presentation of simple animations makes it particularly suitable for testing AI, such vignettes have yet to be validated with infants. In this pilot, 11-month-old infants watched two sets of animations from BIB, one on agents’ consistent preferences and the other on agents’ efficient actions. Infants looked longer towards violations in agents’ behavior in both the preference (N = 24, β = 3.24, p = .040) and efficiency tasks (N = 24, β = 4.50, p = .016). These preliminary results suggest that infants’ agency reasoning is abstract enough to be elicited by simple animations and validate BIB as a test of agency reasoning for humans and AIs.

The Proceduralization of Metacognitive Skills

Metacognitive control is the deliberate manipulation of cognitive states such as attentional control and emotional regulation (Flavell, 1979). Metacognitive control is known to improve with practice to become more skillful, yet the mechanisms for developing metacognitive skills remain unclear. I propose that metacognitive skills can be explained through the skill acquisition model advanced by Fitts and Posner (1967) and Anderson (1982). This account will focus on the process of proceduralization, where declarative task knowledge is converted into procedural knowledge. This model has been well researched in the development of both motor skills and cognitive skills (Ford et al., 2005; Anderson, 2007). To date, the model has not yet been robustly applied to the acquisition of metacognitive skills. As Anderson used an ACT-R model to frame his account of cognitive skills, I will apply an ACT-R model to account for the development of metacognitive skills.

CoViDisgust: Language Comprehension at the Intersection of a Global Pandemic and Individual Disgust Sensitivity

Recent research suggests that an individual's disgust sensitivity affects language comprehension and correlates with political attitudes. Importantly, disgust sensitivity is not a stable measure and can be manipulated dynamically. We investigated the effect of the ongoing pandemic on language processing in a word rating and lexical decision study. Each participant was first exposed to either headlines portraying CoViD-19 as a serious disease or those downplaying it. The results showed an interaction between person-based factors, inherent word characteristics, and the participants’ responses. After reading headlines emphasizing the threat of CoViD-19, easily disgusted participants considered the least disgusting words more disgusting. Further, political views played a role. More liberal participants rated the words lower for disgust in the downplayed condition but higher in the severe condition than their more conservative peers. The results of this study shed new light on how the media's stance on the pandemic may affect the public’s response.

Hand constraint affects semantic processing of hand-manipulable objects: An fNIRS study

Embodied cognition theory predicts that semantic processing shares processing resources with sensorimotor systems. The present study aimed to reveal the mechanisms by which motor simulation affects semantic processing of objects that are manipulated by hand. We measured activation of the inferior parietal lobule (IPL) with functional near-infrared spectroscopy (fNIRS) to examine the effect of constraining hand movement. Participants were presented with two words representing names of objects that can be manipulated by hand (e.g. cup) or objects that cannot be manipulated by hand (e.g. windmill), and answered which object was larger. We analyzed the effects of two factors on IPL activity: hand constraint and hand manipulability of the objects represented by the words. We found that (1) IPL activity for hand-manipulable objects was significantly higher than for non-manipulable objects under the control condition, and that (2) the difference in IPL activity between hand-manipulable and non-manipulable objects was significantly reduced under the hand-constraint condition.

LARC: Language annotated Abstraction and Reasoning Corpus

The Abstraction and Reasoning Challenge (ARC) is a set of tasks where one must induce a program from a few given input-output examples and apply it to a new input. Although humans can easily solve most tasks, ARC is challenging for state-of-the-art algorithms. We hypothesize that humans use intuitive program induction, and interpret ARC tasks by constructing "natural programs" in language. We experimentally study "natural programs" by formulating a two-player game: a participant solves a task and then communicates the program to another participant using natural language; the second participant must solve the task using the description alone. We find that at least 361 out of 400 tasks can be solved from a natural language description, demonstrating that natural language is sufficient for transmitting these natural programs. We compared the natural language programs to computer programs constructed using two separate state-of-the-art program synthesis approaches, and conducted a study on leveraging natural language annotations to improve the performance of program-synthesis tools.

A large-scale comparison of cross-situational word learning models

One problem language learners face is extracting word meanings from scenes with many possible referents. Despite the ambiguity of individual situations, a large body of empirical work shows that people are able to learn cross-situationally when a word occurs in different situations. Many computational models of cross-situational word learning have been proposed, yet there is little consensus on the main mechanisms supporting learning, in part due to the profusion of disparate studies and models, and the lack of systematic model comparisons across a wide range of studies. This study compares the performance of several extant models on a dataset of 44 experimental conditions and a total of 1,696 participants. Using cross-validation, we fit multiple models representing both associative learning and hypothesis-testing theories of word learning, identify the two best-fitting models, and discuss issues of model and mechanism identifiability. Finally, we test the models' ability to generalize to additional experiments, including developmental data.
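
As a minimal illustration of one family of models in such comparisons, a simple associative (co-occurrence counting) learner with a Luce-choice rule at test might look like the following. This is a sketch only, not any of the specific models evaluated in the study.

```python
# Minimal associative cross-situational learner: count word-object co-occurrences,
# then choose referents at test with Luce's choice rule. Illustrative sketch only.
from collections import defaultdict

counts = defaultdict(lambda: defaultdict(float))  # counts[word][object]

def observe(words, objects):
    """One ambiguous learning trial: every word co-occurs with every object."""
    for w in words:
        for o in objects:
            counts[w][o] += 1.0

def p_choose(word, candidates):
    """Luce choice: probability of picking each candidate referent for `word`."""
    strengths = [counts[word][o] for o in candidates]
    total = sum(strengths)
    if total == 0:                      # unknown word: guess uniformly
        return [1.0 / len(candidates)] * len(candidates)
    return [s / total for s in strengths]

observe(["dax", "wug"], ["DOG", "CUP"])
observe(["dax", "fep"], ["DOG", "BALL"])
print(p_choose("dax", ["DOG", "CUP"]))  # "dax" now favors DOG
```

Hypothesis-testing models differ from this by tracking a single (or small set of) candidate meanings per word rather than graded associations, which is one of the mechanistic contrasts the model comparison targets.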

Learning how to use the verb ‘want’: A corpus study

Children’s production of mental state verbs is studied to research their theory of mind and general cognitive development. Desire verbs are a rich resource, as children produce them frequently and early in development, with ‘want’ being among the most frequently produced. We report on a corpus study of 450+ instances of ‘want’ as gathered from dialogues with children in CHILDES. We describe a novel coding scheme that measures the semantic development of ‘want’ utterances, such as the use of negation, clause type, complement subject, and the semantic type of the complement, in addition to more conventional categories. We report on these features’ frequencies for children aged 2-4. Noteworthy findings suggest that children talk about their own desires most often, but as they grow older, they talk more about others’ desires; they desire more complex objects as they mature; and they primarily use questions to talk about second-person desires.

Evolutionary influences in learned bird communication signals

To what extent are learned communication signals a product of biological evolution? Songbirds are a good candidate group for studying this question, since songbird species show remarkable similarities but also large variability in their vocalizations, with many examples of cultural transmission. To this end, in a large sample of songbird species and other birds that do not learn their songs, we analyzed whether evolutionary relations estimated from molecular genetics can predict acoustic similarity between songs. We assessed birdsong similarity with various acoustic features extracted by signal processing methods. Surprisingly, we found that the extent to which learned songs reflect genetic relations is comparable to, if not greater than, that of innate vocalizations. These findings suggest that even in communication signals that are largely determined by cultural transmission (e.g., human language), evolutionary constraints could manifest, as famously suggested by Chomsky in his theory of universal grammar.

Now or Later: Representational Convergence in Simulated Simultaneous and Sequential Bilingual Learning Contexts

In bilinguals, certain concepts across languages come to be represented similarly—a semantic convergence effect that reflects interactivity between languages. The causal factors that affect semantic convergence are not fully understood; this gap may be due to limitations of the correlative methods used in extant work, which assesses the representations of real-world bilinguals. Here, we utilize an artificial language learning paradigm—inspired by the study of category learning—to elucidate causal influences on semantic convergence. We contrast simulated simultaneous bilingual learners with simulated sequential bilingual learners before assessing the representations of both. Bilingual groups are additionally compared to simulated monolingual controls from each language. We report on the pattern of semantic convergence and conclude with implications for theories of bilingual representation.

Comparative Aesthetics: A novel approach to investigate multimodal attractiveness in humans and animals

It has long been debated whether aesthetics is something uniquely human. Our ongoing project aims to empirically address comparative aesthetics in a collaboration between psychologists and biologists. Defining aesthetic responses as approach behaviour associated with sensory pleasure and corresponding physiological states, we are exploring whether humans and non-human animals share similar mechanisms in their evaluation of sexually attractive stimuli. Specifically, we compare humans and ring doves (Streptopelia risoria), a species whose courtship behaviour and physiology have been extensively investigated in behavioural and neuroendocrinology research. Both species form pair bonds, communicate using visual and auditory signals, and can be tested in laboratory conditions with conspecifics or using video and auditory stimuli. Truly comparative work involves using as similar an experimental approach as possible for both human and animal participants. Here we report our comparative methods and experimental paradigms, and some recent results.

Designing a behavioral experiment to study the factors underlying procrastination

Procrastination is ubiquitous, but its underlying processes are poorly understood. Studies of procrastination have used self-report questionnaires and therefore have limited use as the basis for process models. As typical procrastination behavior is characterized by a delay in starting the work and rushing at the end, we argue that studies should emphasize the time course of progress. We designed a reading task to quantify the time course of procrastination. Subjects were given seven days to work online on a boring and lengthy reading task. We tested whether reward delay is a predictor of procrastination. We introduce a metric for quantifying the degree of procrastination from the time course of progress. The degree of procrastination tended to be higher in the delayed-reward condition than in the instantaneous-reward condition. We also observed great individual variation in the progress course. Further work is needed to investigate what factors contribute to this variability.

Sensorimotor similarity: A fully grounded and efficient measure of semantic similarity

Experimental design and computational modelling across the cognitive sciences often rely on measures of semantic similarity between words/concepts. Traditional measures of semantic similarity typically rely on corpus-derived linguistic distributional similarity (e.g. LSA) or distance in taxonomic databases (e.g. WordNet), which are theoretically problematic in their lack of grounding in sensorimotor experience. We present a new measure of sensorimotor similarity between concepts, based on multidimensional comparisons of their experiential strength across 11 perceptual and action-effector dimensions in the Lancaster Sensorimotor norms. We demonstrate that, in modelling human similarity and relatedness judgements, sensorimotor similarity has comparable explanatory power to LSA and WordNet distance, explains variance in human judgements which is missed by other measures, and does so with the advantages of remaining both fully grounded and computationally efficient. We further introduce a web-based tool for easily calculating and visualising sensorimotor similarity between words, featuring coverage of nearly 800 million word pairs.
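
One straightforward way to realize a multidimensional comparison across the 11 Lancaster dimensions is a cosine similarity over the sensorimotor strength vectors. The vectors below are invented for illustration, and the authors' exact comparison metric may differ from plain cosine.

```python
# Illustrative sensorimotor similarity as cosine similarity over 11-dimensional
# strength vectors (6 perceptual + 5 action-effector dimensions in the Lancaster
# norms). The numbers are made up; the authors' exact metric may differ.
import numpy as np

def sensorimotor_similarity(u, v):
    u, v = np.asarray(u, float), np.asarray(v, float)
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical strength ratings (0-5) on the 11 dimensions for two concepts.
lemon  = [1.2, 0.3, 4.5, 4.8, 3.9, 0.1, 2.0, 0.2, 0.1, 0.3, 3.1]
orange = [1.0, 0.2, 4.6, 4.9, 3.7, 0.1, 2.2, 0.2, 0.1, 0.2, 3.3]
print(sensorimotor_similarity(lemon, orange))  # high similarity expected
```

Because the measure operates on experiential strength ratings rather than corpus co-occurrence, it remains grounded while costing only a single 11-dimensional vector comparison per word pair.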

Suboptimal deployment of object-mediated space-based attention during a flanker task

Space-based and object-based attention studies suggest these selective mechanisms can be involuntarily or voluntarily deployed. We performed two experiments exploring automatic deployment of object-mediated space-based attention. Subjects performed a modified flanker task with targets and distractors presented within the same or different object frames. If object selection occurs automatically, the flanker effect should be larger in the same-object condition. However, both object frame conditions produced equally large flanker effects in accuracy. Next, we manipulated the observer’s sustained attentional spotlight via an inducer task to determine whether object-mediated space-based selection depends on initial spotlight size. This time, object-based effects emerged only during narrow spotlight conditions. The results from both experiments suggest the deployment of object-based attention may occur when spatial attention is initially focused narrowly, even when such selection is suboptimal. These results add to the existing literature while reconciling previous inconsistent findings of object-based selection.

The influence of the self-perspective in infant theory of mind

We examine whether the development of self-awareness influences infants’ ability to track and use others’ perspectives to make belief-based action predictions. Based on the altercentric hypothesis (Southgate, 2020), we expect infants who do not yet have a self-perspective to make more accurate predictions of an agent’s actions on a non-verbal mentalizing task (NVMT) compared to infants who may encode the other’s and their own conflicting perspectives. To test this, we presented 18-month-olds, half of whom passed the mirror self-recognition (MSR) task, with a NVMT and used anticipatory looking as a measure of action-based attribution. Contrary to our hypothesis, preliminary findings with 32 infants, using the differential looking score, suggest that those who pass the MSR task are more accurate in their anticipatory looking compared to infants who do not pass the MSR task. All other preregistered analyses will be conducted once data collection is complete in June 2021.

Preschoolers' Learning of Words with Emotional Variability in Shared Book Reading

Mapping words to referents is important for language acquisition. Learning a word relies heavily on memory because it requires forming associations between labels and referents, integrating examples across time, and retrieving words. Memory supports, such as variability, can be added to word learning events to help children learn words. This study examined the effect of emotional variability on word learning. Four-year-olds learned eight novel words presented four times in a storybook organized in one of three conditions: no variability, low variability, or high variability. Words with no variability were presented in the same emotional context (i.e., happy, sad, afraid, or angry). Words with low variability were presented with two emotions (e.g., happy and sad), and words with high variability were presented with all four emotions. After hearing the book, preschoolers participated in a generalization test. This study informs our understanding of the role of social contextual variability in word learning.

War Language in Tweets of Politicians, Reporters, and Medical Experts: A Focus on Covid-19

Metaphors can both shape and reflect how we think about complex issues. Here, we explore the prevalence of WAR language in a large corpus (N = 1.63 million) of Tweets. We compare how different groups of people use the language of WAR to talk about different topics. In Study 1, we find that about 5% of Tweets about Covid-19 from the general public include WAR language, replicating prior work (Wicke & Bolognesi, 2020). In Studies 2 and 3, we find that politicians use WAR language much more often, while Reporters and Medical Experts use WAR language less often. The findings are relevant to current debates about the role of language in democracy, and to theories of metaphor in communication.

A computationally rational model of human reinforcement learning

Human learning efficiency in reinforcement learning tasks decreases when the number of presented stimuli increases, a finding known as the "set size effect". From the computational rationality perspective, this effect can be interpreted as the brain’s balancing of task performance against rising cognitive costs. Still, it remains unclear how best to quantify cognitive cost in learning tasks. One candidate is policy complexity, defined in terms of information theory as the mutual information between the sensory input and the behavioral response. However, using a published data set (Collins & Frank, 2012), we show that policy complexity alone cannot explain the set size effect because the optimal policy complexity does not necessarily increase with the set size. We therefore propose a computational model and conduct a model-based analysis to show that the minimal constituents of cognitive cost are policy complexity and representation complexity, the amount of information conveyed from sensory inputs to internal representations.
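
For reference, policy complexity in this sense is the mutual information between stimulus S and action A under the learned policy π (the notation here is generic):

```latex
% Policy complexity as the mutual information between stimulus S and action A
I(S;A) \;=\; \sum_{s}\sum_{a} p(s)\,\pi(a \mid s)\,\log\frac{\pi(a \mid s)}{p(a)},
\qquad p(a) \;=\; \sum_{s} p(s)\,\pi(a \mid s)
```

The abstract's point is that the optimum of this quantity need not grow with set size, which is what motivates adding a representation-complexity term to the cost.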

Effect of Formative Feedback on the Metacognitive Debugging Strategy Using Polling Technologies

The metacognitive debugging process is a cognition regulation strategy carried out by students to recognize weaknesses in their learning and adjust strategies to improve their performance. The effects of formative feedback on participants' metacognitive debugging strategy were examined using an experimental method in an online course on pedagogical practice. A total of 300 responses to the MAI instrument from 60 students (20 in the control group, 20 in an Individual experimental group, and 20 in a Collaborative experimental group) were quantitatively analyzed. The results revealed that formative feedback significantly affects students' metacognitive debugging strategy: (1) the level of debugging strategy used in the experimental groups was significantly higher than in the control group; (2) Teacher-Student feedback (the Collaborative group) showed better results for the debugging strategy. The groups that received formative feedback also showed a positive effect on academic performance.

Detecting the involvement of agents through physical reasoning

The physical world is rich with social information that people readily detect and extract, such as inferring that someone was present when we encounter a stack of rocks in the woods. How do people recognize that a physical scene contains social information? Research in developmental psychology has argued that this capacity is supported by a sensitivity to violations of randomness. Here we present a computational model of this idea and test its explanatory power in a quantitative manner. Our model infers agency by estimating the likelihood that a scene would arise naturally, as determined by human intuitive physics instantiated as a physics engine. Our results suggest that people's ability to detect agency in a physical scene is sensitive not only to the superficial visual properties, but also to the underlying physical generative process. Our results highlight how people use intuitive physics to decide when to engage in nuanced social reasoning.
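
Schematically, the inference described here can be viewed as a Bayesian comparison of generative processes, in which the likelihood of the scene under a purely physical process is approximated by forward-simulating an intuitive-physics engine. The formulation below is our paraphrase, not the paper's exact likelihood model.

```latex
% Agency detection as Bayesian comparison of generative processes (schematic)
P(\text{agent} \mid \text{scene}) \;=\;
\frac{P(\text{scene} \mid \text{agent})\,P(\text{agent})}
     {P(\text{scene} \mid \text{agent})\,P(\text{agent})
      + P(\text{scene} \mid \text{physics})\,P(\text{physics})}
```

Here P(scene | physics) is estimated by how often randomly initialized configurations, run through the physics engine, settle into something like the observed scene; a neatly stacked pile of rocks is physically improbable, so the agent hypothesis wins.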

Visuospatial Skills and the Workforce: A Brief Review

Visuospatial cognitive skills are increasingly recognized as critical in many areas of both formal and informal learning, but far less research has looked at their role in the workforce. We know, for example, that strong visuospatial skills in high school are predictive of STEM occupations, but we know less about how these skills are actually deployed in the workplace, and about the extent to which they might also be important in various non-STEM occupational pathways. Our review provides current context for these questions, including identifying occupations for which these research questions are actively being explored, as well as areas that urgently call for more research. We argue that systematic and rigorous cognitive research should be used to inform evidence-based best practices for addressing the unique workforce-related challenges of the 21st century.

The Interplay Between Local and Global Strategies in Navigational Decisions

Although the Traveling Salesman Problem (TSP) is an NP-hard problem, human schedulers can find solutions that are comparable to, if not better than, those of existing algorithms. However, it is still unclear what heuristics they adopt in order to select an approximate solution (Schaefer, 2018). In solving a navigational problem in 3D, the directional heading is an essential factor because changes of direction require additional energy expenditure. In our study, we focus on the connection between local choices minimizing distance and local choices minimizing the turning angle. We then examine human solutions to the TSP in light of the tradeoff between local and global optimization of distance and turning angle. We conducted two experiments showing that while subjects were more likely to move to the nearest node when planning the next step, they also took the turning angle into consideration and overall adopted a strategy that combines local and global heuristics.

Know your network: Sensitivity to structure in social learning

To glean information from social networks, people must be adept at distinguishing real evidence from hearsay. Here, we investigate human inferences within a simulated social network. We introduce a novel social learning task in which participants infer their surrounding social network structure while also using social communications to aid judgments about the shared environment. We find that the majority of adults correctly identified independent and acyclic network structures, though they struggled to identify cyclic communications. Comparison of judgments with several social inference models supports a naïve social learning account that integrates social evidence with the learner’s own observations in a structure-insensitive way. This suggests that people are capable of using communication patterns to identify social influences, but they may still be misled by the distortions of evidence that these network dynamics can produce.

Testing the ‘inherent superiority hypothesis’ in behavioural flexibility of grey squirrels

Enhanced cognitive ability has been shown to impart fitness advantages to some species by facilitating establishment in new environments. Enhanced cognitive ability may be an adaptation driven during the establishment process in response to new environments or, alternatively, may reflect a species' characteristic. We used an intraspecific comparative paradigm to examine the source of the cognitive ability (novel problem solving, motor memory and spatial learning) of a successful mammalian invader and urban dweller, the Eastern grey squirrel (Sciurus carolinensis), using well-established tasks. Free-ranging squirrels residing in rural and urban habitats in native environments (US) were compared with their counterparts living in non-native environments (UK). The four groups of squirrels showed comparable performance in most measures, suggesting that the previously reported ‘enhanced’ performance is likely a general characteristic of this species. Despite this, some cognitive abilities, such as solving novel problems, have undergone mild variation during the adaptive process in new environments.

Syntactic adaptation and word learning in French and English

Syntactic priming may be a key mechanism underlying children’s learning of novel words. Havron et al. (2019) exposed French-speaking children (ages 3 to 4) to a speaker biased toward using either familiar verbs or familiar nouns in the same syntactic context. This influenced participants’ interpretations of ambiguous novel words presented in the same syntactic frame. In Experiment 1, we successfully replicated Havron et al. with 77 French-speaking adults, using a web-based eye-tracking paradigm. Experiment 2 adapted this paradigm to English: repeated exposure to a syntactic structure induced 102 English-speaking adults to update their expectations about the meanings of novel words. Our results indicate participants adapted to the specific linguistic structure used, not just the speaker’s tendency to mention actions or objects. These findings support the role of rapid adaptation during word learning. Experiments in progress investigate whether the English paradigm yields successful learning in 3- to 5-year-old children.

Androgen responsiveness to simulated territorial intrusions in Allobates femoralis males: evidence supporting the challenge hypothesis in a territorial frog

Territorial behaviour has been widely described across many animal taxa, where the acquisition and defence of territory are critical for the fitness of an individual. Extensive evidence suggests that androgens are involved in the modulation of territorial behaviour in male vertebrates. A short-term increase of androgen following a territorial encounter appears to favour the outcome of a challenge. The “Challenge Hypothesis” outlines the relationship between androgen and social challenges in male vertebrates. Here we tested the challenge hypothesis in the highly territorial poison frog, Allobates femoralis, in its natural habitat by exposing males to simulated territorial intrusions in the form of acoustic playbacks. We repeatedly quantified androgen concentrations of individual males via a non-invasive water-borne sampling approach. Our results show that A. femoralis males exhibited a positive behavioural and androgenic response after being confronted with simulated territorial intrusions, providing support for the Challenge Hypothesis in a territorial frog.

Transfer of Knowledge in a Semantic Navigation Task Without the Accurate Map: Model-based Analysis of Knowledge Transfer

Humans can adapt knowledge acquired for one problem to solve other problems from different problem domains. To seek evidence of knowledge transfer, we investigated human navigation behavior on the network of Wikipedia. We showed that the performance of human players lies between that of the best navigator, who has full knowledge of the network structure of Wikipedia, and that of the poorest navigator, who has no knowledge of it. This suggests that human players transferred their knowledge of the nominal concepts onto the network structure of Wikipedia. Further, we conducted an analysis of the degree of confidence in decision making based on reinforcement learning. We found that human players may be very certain even about their first choice, from which we suspect that human players can solve the new problem by successfully transferring their everyday knowledge from the very beginning; this may be the true power of knowledge transfer across problem domains.

Synchronising the emergence of institutions and value systems: a model of opinion dynamics mediated by proportional representation

Individuals increasingly participate in online platforms where they copy, share and form their opinions. Social interactions on these platforms are mediated by digital institutions, which dictate algorithms that in turn affect how users form and evolve their opinions. In this work, we examine the conditions under which convergence on shared opinions can be obtained in a social network where connected agents repeatedly update their normalised cardinal preferences (i.e. value systems) under the influence of a non-constant reflexive signal (i.e. institution) that aggregates the population's information using a proportional representation rule. We analyse the impact of institutions that aggregate (i) expressed opinions (i.e. opinion-aggregation institutions), and (ii) cardinal preferences (i.e. value-aggregation institutions). We find that, in certain regions of the parameter space, moderate institutional influence can lead to moderate consensus and strong institutional influence can lead to polarisation. In our randomised network, local coordination alone, in the total absence of institutions, does not lead to convergence on shared opinions, but very low levels of institutional influence are sufficient to generate a feedback loop that favours global conventions. We also show that opinion aggregation may act as a catalyst for value change and convergence. When applied to digital institutions, we show that the best mechanism to avoid extremism is to increase the initial diversity of the value systems in the population.
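
A stripped-down version of institution-mediated opinion updating can clarify the role of the institutional-influence parameter. The update rule, network, and parameter values below are illustrative only, not the paper's model.

```python
# Toy institution-mediated opinion dynamics: each agent moves toward a mix of
# its neighbours' opinions and an institutional signal aggregating the whole
# population. Update rule and parameters are illustrative, not the paper's model.
import numpy as np

rng = np.random.default_rng(0)
n, steps = 100, 200
opinions = rng.uniform(-1, 1, n)          # initial value systems
adjacency = rng.random((n, n)) < 0.05     # sparse random network
np.fill_diagonal(adjacency, False)

mu_local, mu_inst = 0.1, 0.05             # local vs. institutional influence

for _ in range(steps):
    institution = opinions.mean()         # proportional aggregation of expressed opinions
    degree = np.maximum(adjacency.sum(1), 1)
    neighbour_mean = np.where(adjacency.sum(1) > 0,
                              (adjacency @ opinions) / degree,
                              opinions)
    opinions += mu_local * (neighbour_mean - opinions) + mu_inst * (institution - opinions)

print(opinions.std())  # low spread indicates convergence on a shared opinion
```

Sweeping mu_inst in a toy model like this is the kind of exercise that makes the reported regimes (no convergence without institutions, convergence at low influence, polarisation risk at high influence) concrete.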

Creative Foraging: Examining Relations Between Foraging Styles, Semantic Memory Structure, and Creative Thinking

Creativity has been separately related to differences in foraging search styles and semantic memory structure. Here, we converge computational methods to examine the relation of creative foraging styles, semantic memory structure, and creative thinking. A large sample of participants was divided into groups based on their exploration and exploitation strategies in a novel creative foraging game. Their semantic memory networks were estimated and compared, based on an animal category semantic fluency task. We find differential relations between the properties of semantic memory structure and foraging styles and link such differences to performance in a standard creative thinking task. Our results highlight the interaction of semantic memory structure and foraging strategies in creativity.
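
A toy version of estimating a semantic network from fluency data, linking adjacently named animals and comparing simple graph statistics between groups, is sketched below; it is a generic illustration, not the estimation procedure used in the study.

```python
# Toy estimation of a semantic network from fluency lists: link animals named
# adjacently by a participant, then compute simple network statistics that
# could be compared between foraging-style groups. Generic sketch only.
import networkx as nx

fluency_lists = [
    ["dog", "cat", "lion", "tiger", "shark"],
    ["dog", "wolf", "lion", "shark", "whale"],
]

G = nx.Graph()
for animals in fluency_lists:
    G.add_edges_from(zip(animals, animals[1:]))  # adjacent responses share an edge

print(nx.average_clustering(G), nx.average_shortest_path_length(G))
```

Group-level differences in statistics such as clustering and path length are the kind of structural contrast that can then be related to exploration versus exploitation styles in the foraging game.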

Effects of Combining Refutation and Self-Explanation on Student Learning

Misconceptions in science are ubiquitous and difficult to revise, but refutation texts are one effective tool for prompting conceptual change (Tippett, 2010). However, refutation texts are effective only to the extent that students are forming coherent representations. Self-explanation is a well-studied method for increasing coherence of readers’ mental representations (Allen, McNamara & McCrudden, 2015). The current study examines the unexplored question of whether these common interventions enhance each other’s positive learning outcomes. Two-hundred fifteen UCSD undergraduates were randomly assigned to read either a refutation or an expository text about the phases of the moon, and were prompted to either self-explain or think aloud while reading. Students then took a post-test assessing knowledge of moon phases and related misconceptions. We measured both accuracy and explanatory qualities, such as causality and circularity, in order to assess the relative efficacy of refutation and self-explanation and their combination.

Papers

Mental Representation of Budgeting Categories

Understanding how people mentally represent expenditures is crucial to understanding how they manage their money. In this paper, we report three studies that investigate people’s representation of budgeting categories by asking people to categorize common expenditures of money (e.g., rent, dining out, etc.). We then examine the implications of these taxonomic representations of expenditures for how people selectively restrict their uses of money (e.g., when overspending on one item, for which other items would people choose to spend less). We found that there is consensus in people’s representations of expenditures, and that both the category membership and taxonomic distance between items predict how people restrict their spending.

Connecting perceptual and procedural abstractions in physical construction

Compositionality is a core feature of human cognition and behavior. People readily decompose visual objects into parts and complex procedures into subtasks. Here we investigate how these two abilities interact to support learning in a block-tower assembly experiment. We measured the way participants segmented these towers based on shape information alone, and asked how well the resulting parts explained the procedures other participants used to build them. We found that people decomposed these shapes in consistent ways and the most common parts appeared especially frequently as subroutines in the assembly experiment. Moreover, we found that the subroutines participants used converged over time, reflecting shared biases toward certain ways of reconstructing each tower. More broadly, our findings suggest important similarities between the perceptual and procedural abstractions humans use to perceive and interact with objects in their environment.

Peekbank: Exploring children's word recognition through an open, large-scale repository for developmental eye-tracking data

The ability to rapidly recognize words and link them to referents in context is central to children's early language development. This ability, often called word recognition in the developmental literature, is typically studied in the looking-while-listening paradigm, which measures infants' fixation on a target object (vs. a distractor) after hearing a target label. We present a large-scale, open database of infant and toddler eye-tracking data from looking-while-listening tasks. The goal of this effort is to address theoretical and methodological challenges in measuring vocabulary development. We present two analyses of the current database (N=1,320): (1) capturing age-related changes in infants' word recognition while generalizing across item-level variability and (2) assessing how a central methodological decision -- selecting the time window of analysis -- impacts the reliability of measurement. Future efforts will expand the scope of the current database to advance our understanding of participant-level and item-level variation in children's vocabulary development.

Unifying Models for Belief and Syllogistic Reasoning

Judging if a conclusion follows logically from a given set of premises can depend much more on the believability than on the logical validity of the conclusion. This so-called belief bias effect has been replicated repeatedly for many decades now. An interesting observation, however, is that process models for deductive reasoning and models of the belief bias do not overlap much: they have largely been developed independently. Models of the belief bias often just implement first-order logic for the reasoning part, thereby neglecting a whole research field. This paper aims to change that by presenting a first attempt at substituting the first-order logic components of two models of the belief bias, selective scrutiny and misinterpreted necessity, with two state-of-the-art approaches for modeling human syllogistic reasoning, mReasoner and PHM. In addition, we propose an approach for extending the traditionally dichotomous predictions to numerical rating scales, thereby enabling more detailed analysis. Evaluating the models on a dataset published with a recent meta-analysis of the belief bias effect, we demonstrate the general success of the augmented models and discuss the implications of our extensions in terms of the limitations of the current focus of research as well as the potential for future investigation of human reasoning.

"If only Santa had one more present": Exploring the development of near-miss counterfactual reasoning

Near-misses sting: As adults, we intuitively understand that someone who just missed a desirable outcome (near-miss) feels worse than someone who missed by a far margin (far-miss). What cognitive capacities support these intuitions, and how do they emerge in early childhood? We presented adults (n=42) and six- to eight-year-olds (n=91; pre-registered) with various near-miss scenarios. We found that (1) adults generally infer that a near-miss character would feel worse than a far-miss character, (2) yet their inferences vary depending on the context, and (3) children show a strikingly different pattern from adults, robustly choosing the far-miss character as feeling worse. The tendency to judge the near-miss character as feeling worse increased with age, but even 8-year-olds were still below chance. These patterns raise the possibility that young children start with a distance-based bias that gradually gets replaced by adult-like inferences that involve counterfactual reasoning.

The Use of Co-Speech Gestures in Conveying Japanese Phrases with Verbs

An influential claim about co-speech gestures is that they are used as a supplement to unsaid meaning in speech. However, when the gesture is fully synchronized with speech, the supplementary role appears unnecessary. The present study examined whether people use gestures differently when they produce Japanese noun phrases that contain verbs. This study compared an ambiguous noun phrase “Rakka-shiteiru (falling) otoko-no (man’s) keitai (cell-phone),” which can be interpreted following the left branching (LB) structure as “The falling man’s cell-phone ({{falling, man}, cell-phone})” or the right branching (RB) structure as “The man’s cell-phone, which is falling ({falling, {man, cell-phone}}).” This study predicted that the onset of the first gesture would be delayed with the RB structure, as the important chunk {man, cell-phone} is produced later in the utterance than in the LB structure. The results supported the prediction, indicating that the onset of the first gesture tended to be delayed when RB was produced. This finding suggests that people may disambiguate syntactically ambiguous linguistic structures through gesture use.

Interaction Flexibility in Artificial Agents Teaming with Humans

Team interaction involves the division of labor and coordination of actions between members to achieve a shared goal. Although the dynamics of interactions that afford effective coordination and performance have been a focus in the cognitive science community, less is known about how to generate these flexible and adaptable coordination patterns. This is important when the goal is to design artificial agents that can augment and enhance team coordination as synthetic teammates. Although previous research has demonstrated the negative impact of model-based agents on the pattern of interactions between members using recurrence quantification methods, more recent work utilizing deep reinforcement learning has demonstrated a promising approach to bootstrap the design of agents to team with humans effectively. This paper explores the impact of artificial agent design on the interaction patterns that are exhibited in human-autonomous agent teams and discusses future directions that can facilitate the design of human-compatible artificial agents.

Pragmatic factors can explain variation in interpretation preferences for quantifier-negation utterances: A computational approach

Traditional investigations of quantifier-negation scope ambiguity (e.g., Everyone didn't go, meaning that no one went or not everyone went) have focused on universal quantifiers, and how ambiguity in interpretation preferences is due to the logical operators themselves and the syntactic relation between those operators. We investigate a broader range of quantifiers in combination with negation, observing differences in interpretation preferences both across quantifiers and also within the same quantifier (confirmed by corpus analysis). To explain this variation, we extend a computational cognitive model that incorporates pragmatic context-related factors, and which previously accounted for every-negation, to predict human interpretation preferences also for some and no. We evaluate the model's predictions against human judgments for quantifier-negation utterances, finding a strong qualitative and quantitative match when the listener has particular expectations about the world in which the utterance occurs. These results suggest that pragmatic factors can explain variation in interpretation preferences.
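
Models of this kind are often cast in the Rational Speech Act tradition; one generic way to write a pragmatic listener that weighs scope interpretations by contextual world expectations is the following (our notation and formulation, not necessarily the authors' exact model):

```latex
% Schematic pragmatic listener over world state w and scope interpretation i
% for a quantifier-negation utterance u (generic RSA-style formulation)
P_{L_1}(w, i \mid u) \;\propto\; P_{S_1}(u \mid w, i)\;P(i)\;P(w)
```

Context enters through the world-state prior P(w); different priors for different quantifier-negation utterances are what yield the variation in interpretation preferences described above.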

Where Word and World Meet: Intuitive Correspondence Between Visual and Linguistic Symmetry

Symmetry is ubiquitous in nature, in logic and mathematics, and in perception, language, and thought. Although humans are exquisitely sensitive to visual symmetry (e.g., of a butterfly), linguistic symmetry goes far beyond visuospatial properties: Many words refer to abstract, logically symmetrical concepts (e.g., equal, marry). This raises a question: Do representations of symmetry correspond across language and vision, and if so, how? To address this question, we used a cross-modal matching paradigm. On each trial, adult participants observed a visual stimulus (either symmetrical or non-symmetrical) and had to choose between a symmetrical and non-symmetrical English predicate unrelated to the stimulus (e.g., "negotiate" vs. "propose"). In a first study with visual events (symmetrical collision or asymmetrical launch), participants reliably chose the predicate matching the event's symmetry. A second study showed that this "matching" generalized to static objects, and was weakened when the stimuli's binary-relational nature was made less apparent (i.e., one object with a symmetrical contour, rather than two symmetrically configured objects). Taken together, our findings support the existence of an abstract relational concept of symmetry which humans access via both perceptual and linguistic means. More broadly, this work sheds light on the rich, structured nature of the language-cognition interface, and points towards a possible avenue for acquisition of word-to-world mappings for the seemingly inaccessible logical symmetry of linguistic terms.

Preparing Unprepared Students For Future Learning

Based on strategy-awareness (knowing which problem-solving strategy to use) and time-awareness (knowing when to use it), students are categorized into Rote (neither type of awareness), Dabbler (strategy-aware only) or Selective (both types of awareness). It has been shown that Selective students are often significantly more prepared for future learning than Rote and Dabbler students (Abdelshiheed et al., 2020). In this work, we explore the impact of explicit strategy instruction on Rote and Dabbler students across two domains: logic and probability. During the logic instruction, our logic tutor handles both Forward-Chaining (FC) and Backward-Chaining (BC) strategies, with FC being the default; the Experimental condition is taught how to use BC via worked examples and when to use it via prompts. Six weeks later, all students are trained on a probability tutor that supports BC only. Our results show that the Experimental condition significantly outperforms Control in both domains, and Experimental Rote students catch up with Selective students.

Toward Transformer-Based NLP for Extracting Psychosocial Indicators of Moral Disengagement

Moral disengagement is a mechanism whereby people distance or disconnect their actions from their moral evaluation. This work presents a novel knowledge graph schema, dataset, and transformer-based NLP model to identify and represent indicators of moral disengagement in text. Our graph schema is informed by Albert Bandura’s psychosocial mechanisms of moral disengagement, including dehumanization, victimization, moral condemnation and justification, and attribution (or displacement) of responsibility. Our preliminary dataset is comprised of online posts from five different communities. We present initial evidence that (1) our theory-based schema can represent moral disengagement indicators across these communities and (2) our transformer-based NLP model can identify indicators of moral disengagement in text. As it matures, this thread of computational social science research can help us understand the spread of morally-disengaged language and its effect on online communities.

Do language models learn typicality judgments from text?

Building on research arguing for the possibility of conceptual and categorical knowledge acquisition through statistics contained in language, we evaluate predictive language models (LMs), informed solely by textual input, on a prevalent phenomenon in cognitive science: typicality. Inspired by experiments that involve language processing and show robust typicality effects in humans, we propose two tests for LMs. Our first test targets whether typicality modulates LM probabilities in assigning taxonomic category memberships to items. The second test investigates sensitivities to typicality in LMs' probabilities when extending new information about items to their categories. Both tests show modest, but not completely absent, correspondence between LMs and humans, suggesting that text-based exposure alone is insufficient to acquire typicality knowledge.
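
As a concrete example of the kind of probing involved in the first test (a generic sketch, not the authors' exact stimuli or scoring procedure), one can compare a masked language model's probability of a category word for a typical versus an atypical category member.

```python
# Generic sketch of typicality probing with a masked LM: compare the probability
# of "bird" for a typical vs. an atypical category member. Not the paper's exact
# stimuli, models, or scoring procedure.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

name = "bert-base-uncased"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForMaskedLM.from_pretrained(name).eval()

def p_of(word, sentence):
    inputs = tok(sentence, return_tensors="pt")
    mask_pos = (inputs.input_ids == tok.mask_token_id).nonzero()[0, 1]
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = logits[0, mask_pos].softmax(-1)
    return probs[tok.convert_tokens_to_ids(word)].item()

print(p_of("bird", f"A robin is a {tok.mask_token}."))    # typical item
print(p_of("bird", f"A penguin is a {tok.mask_token}."))  # atypical item
```

A human-like typicality gradient would show up as systematically higher category probabilities for typical members; the abstract reports only a modest correspondence of this kind.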

"The Parrot next to the Hamster (and) next to the Bunny" Sheds Light on Recursion in Child Romanian

The current paper brings experimental evidence that Romanian 4- and 5-year-olds are able to understand recursive prepositional modifiers such as "papagalul de lȃngǎ hamsterul de lȃngǎ iepuraş", ‘the parrot next to the hamster next to the bunny’. Twenty-three children engaged in a picture matching task (PMT) in which they heard sentences containing either recursive structures or coordinative structures, and had to choose between a picture corresponding to a recursive interpretation and a picture corresponding to a coordinative interpretation. Interestingly, children provided recursive interpretations of recursive structures to a quite high degree, though their behavior was not fully adult-like. We argue that this can be accounted for through children’s sensitivity to specific recursion cues that are present in Romanian, as well as to the contrast between recursion and coordination, which is activated through the experimental set-up.

Understanding Others' Roles Based on Perspective Taking in Coordinated Group Behavior

Interacting through understanding others' roles based on perspective taking is important for achieving a group goal. However, complex and dynamic interactions, such as group non-verbal behaviors with three or more members, have not been fully examined. Our theoretical contribution expands the range of the theory applied to problem solving and learning in cognitive science to group non-verbal behavior with three members. In this study, participant triads repeatedly engaged in a coordinated drawing task, operating reels to adjust the thread tensions and moving a pen connected to the three threads to draw an equilateral triangle. We measured the pen positions and tensions. Analyzing group behavior quantitatively, we found that the role of slightly stretching one's thread contributed significantly to improved performance in drawing quickly. This suggests that maintaining overall balance, through individuals' understanding of others' roles based on perspective taking, is key to coordination.

Arc Length as a Geometric Constraint for Psychological Spaces

Many cognitive models assume that stimuli can be represented as points in a latent psychological space. However, it has been difficult to provide these spaces with a geometric structure where the distance between items accurately reflects their subjective dissimilarity. In this paper, we propose a new method to give psychological spaces a geometric structure by equating the amount of change undergone by a stimulus with the arc length of a curve in psychological space. We then assess our method with a categorization experiment where participants classified continuously changing visual stimuli according to their rate of change. Our results indicate that individuals’ judgements are well predicted by arc length, suggesting that it may be a promising geometric constraint for psychological spaces in other contexts.
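
For reference, the arc-length quantity appealed to here is standardly defined (in our notation, not necessarily the authors') as

\[ s \;=\; \int_{t_0}^{t_1} \left\lVert \frac{d\mathbf{x}(t)}{dt} \right\rVert \, dt , \]

so that, on this proposal, the psychological distance between two stimuli tracks the total amount of change accumulated along the curve \(\mathbf{x}(t)\) connecting them rather than the straight-line separation of its endpoints.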

A Causal Proximity Effect in Moral Judgment

In three experiments (total N = 1302) we investigated whether causal proximity affects moral judgments. We manipulated causal proximity by varying the length of chains mediating between actions and outcomes, and by varying the strengths of causal links. We demonstrate that moral judgments are affected by causal proximity with longer chains or weaker links leading to more lenient moral evaluations. Moreover, we identify outcome foreseeability as the crucial factor linking causal proximity and moral judgments. While effects of causal proximity on moral judgments were small when controlling for factors that were confounded in previous studies, knowledge about the presence of causal links substantially alters judgments of permissibility and responsibility. The experiments demonstrate a tight coupling between causal representations, inferences about mental states, and moral reasoning.

In-the-Moment Visual Information from the Infant's Egocentric View Determines the Success of Infant Word Learning: A Computational Study

Infants learn the meaning of words from accumulated experiences of real-time interactions with their caregivers. To study the effects of visual sensory input on word learning, we recorded infants' views of the world using head-mounted eye trackers during free-flowing play with a caregiver. While playing, infants were exposed to novel label-object mappings, and learning outcomes for these items were tested after the play session. In this study we use a classification-based approach to link properties of infants' visual scenes during naturalistic labeling moments to their word learning outcomes. We find that a model which integrates both highly informative and ambiguous sensory evidence is a better fit to infants' individual learning outcomes than models where either type of evidence is taken alone, and that raw labeling frequency is unable to account for the word learning differences we observe. Here we demonstrate how a computational model, using only raw pixels taken from the egocentric scene image, can derive insights on human language learning.

Children know what words other children know

To communicate successfully, we need to use words that our conversational partner understands. Adults maintain precise models of the words people are likely to know, using both prior experience with their conversational partner and general metalinguistic information. Do children also know what words others are likely to know? We asked children ages 4-8 (n = 62) to predict whether a very young child would know each of 15 familiar animal words. With minimal information, even children as young as 4 made reliable predictions about the target child's vocabulary knowledge. Children were more likely to judge that a younger child would know an early-acquired word (e.g., dog) than a late-acquired word (e.g., lobster), and this pattern became more robust over development. Thus, even preschool age children are adept at inferring other children's vocabulary knowledge, and they could leverage this information to communicate effectively.

The emergence of indexicality in an artificial language

We investigated the emergence of register-like indexical associations, whereby linguistic forms that are associated with groups of speakers acquire novel associations with contextual features of those groups. We employed an artificial-language paradigm in which participants were exposed to an “alien” language spoken by two alien species wearing two different ceremonial outfits. The language varied with respect to plural suffixes, such that one suffix was associated reliably with one species and outfit in training. We then tested participants on what associations they had acquired. In two experiments we manipulated which aliens wore which outfits in the test phase. Regardless of condition or length of training, participants associated suffixes strongly with aliens rather than clothing. In a third experiment we introduced a new alien species in the test phase. For these aliens, which participants had not seen during training, participants made a clear association based on outfit. These results show clearly ranked indexical (or proto-indexical) associations on the part of participants and lay clear groundwork for the experimental investigation of the emergence of indexical social meaning in language.

People Adjust Recency Adaptively to Environment Structure

Recency effects—giving exaggerated importance to recent outcomes—are a common aspect of decision tasks. In the current study, we explore two explanations of recency-based decision making, that it is (1) a deliberate strategy for adaptive decision making in real-world environments which tend to be dynamic and autocorrelated, and/or (2) a product of processing limitations of working memory. Supporting explanation 1, we found that participants strategically adjusted their recency levels across trials to achieve optimal levels in a range of tasks. Furthermore, they started with default recency values that had high aggregate performance across environments. However, only some correlations between recency values and WM scores were significant, providing no clear conclusion regarding explanation 2. Ultimately, we propose that recency involves a combination of the two—people can strategically change recency within the limits of WM capacities to adapt to external environments.

Children Use Artifacts to Infer Others’ Shared Interests

Artifacts – the objects we own, make, and choose – provide a source of rich social information. Adults use people’s artifacts to judge others’ traits, interests, and social affiliations. Here we show that 4-year-old children (N=32) infer others’ shared interests from their artifacts. When asked who had the same interests as a target character, children chose the character with a conceptually similar object to the target’s – an object used for the same activity – over a character with a perceptually similar object. When asked which person had the same arbitrary property (bedtime, birthday, or middle name), children did not systematically select either character, and most often reported that they did not know. Adults (N=32) made similar inferences, but differed in their tendency to use artifacts to infer friendships. Overall, by age 4, children show a sophisticated ability to make selective, warranted inferences about others’ interests based solely on their artifacts.

Hierarchical syntactic structure predicts listeners’ sequence completion in music

Studies in psycholinguistics have provided compelling evidence that theoretical syntactic structures have cognitive correlates that inform and influence language perception. Generative grammar models also present a principled way to represent a plethora of hierarchical structures outside the domain of language. Hierarchical aspects of musical structure, in particular, are often described through grammar models. Whether such models carry perceptual relevance in music, however, requires further study. To address the descriptive adequacy of a grammar model in music, unfamiliar musical phrases consisting of chord progressions within the Jazz idiom were used, and zero to three chords were cut from the end of each phrase. A total of 150 participants were then presented with these stimuli and asked to provide a Closure Response, that is, to predict how many more chords (0, 1, 2, or 3) were expected before the chord progression was complete. Simultaneously, a grammar model of hierarchical structure as well as a bigram model were trained over a corpus of 150 expert-annotated Jazz tunes. The models were then used to estimate probability distributions of Closure Responses for the stimuli presented to the participants. Bayesian mixed-effects models reveal that the models carry predictive value for the participants' response distributions and that the hierarchical model contains incremental predictive information over the bigram model. The present results suggest that -- akin to language -- hierarchical relationships between musical events have a cognitive correlate, which influences the perception and interpretation of music.

Recovering Quantitative Models of Human Information Processing with Differentiable Architecture Search

The integration of behavioral phenomena into mechanistic models of cognitive function is a fundamental staple of cognitive science. Yet, researchers are beginning to accumulate increasing amounts of data without having the temporal or monetary resources to integrate these data into scientific theories. We seek to overcome these limitations by incorporating existing machine learning techniques into an open-source pipeline for the automated construction of quantitative models. This pipeline leverages the use of neural architecture search to automate the discovery of interpretable model architectures, and automatic differentiation to automate the fitting of model parameters to data. We evaluate the utility of these methods based on their ability to recover quantitative models of human information processing from synthetic data. We find that these methods are capable of recovering basic quantitative motifs from models of psychophysics, learning and decision making. We also highlight weaknesses of this framework and discuss future directions for their mitigation.
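
The automatic-differentiation half of such a pipeline can be sketched in a few lines (illustrative only; the logarithmic toy model, loss, and optimizer settings below are our assumptions, not the authors' implementation):

```python
# Sketch: fit a free parameter of a simple quantitative model to synthetic data
# via automatic differentiation. The model y = k * log(1 + x) is illustrative.
import torch

torch.manual_seed(0)
x = torch.linspace(1.0, 10.0, 50)
true_k = 2.5
y_obs = true_k * torch.log1p(x) + 0.05 * torch.randn_like(x)   # synthetic data

k = torch.tensor(1.0, requires_grad=True)                      # free parameter
optimizer = torch.optim.Adam([k], lr=0.05)

for _ in range(500):
    optimizer.zero_grad()
    loss = torch.mean((k * torch.log1p(x) - y_obs) ** 2)       # squared error
    loss.backward()                                            # gradients via autodiff
    optimizer.step()

print(f"recovered k = {k.item():.2f} (true k = {true_k})")
```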

Planning and Action Organization in Ill-Defined Tasks: The Case of Everyday Activities

Planning and organization of one's actions are crucial for successfully performing everyday activities such as setting the table. While existing research has addressed planning for well-defined tasks and control of already established sequences, little is known about how such sequences are planned in ill-defined tasks such as everyday activities. Initial attempts suggest that planning may be opportunistic, based on a number of environmental factors, to minimize cognitive and physical effort. We address two questions arising from the existing work: First, to what extent is variation in human everyday activity behavior captured by the proposed opportunistic consideration of environmental factors? We address this question by employing machine learning baselines to gauge the proposed model's explanatory scope. Second, to what extent are existing models of sequence control consistent with opportunistic action organization? We address this by investigating and discussing the implications opportunistic planning has for the mechanisms currently assumed for sequence control.

Blame Blocking and Expertise Effects Revisited

This paper examines whether advanced law students are resistant to the blame blocking effect—a tendency to assign harsher punishments to failed attempts than to failed attempts in which an independent causal chain leads to the intended harm (Cushman, 2008). This effect goes against the criminal law principle that intentionally acting towards committing a crime, and not accidental outcomes, determines liability for attempts. To further investigate whether advanced students of law judge blame blocking scenarios correctly (in line with their legal expertise), in two experiments we compared their punishment responses with those of four populations: beginning law students, advanced philosophy students, advanced natural science students, and laypeople with no academic background. We did not observe the blame blocking effect in any of the four student populations, and it was only partially present in the lay population. We discuss the implications of these findings for research on legal expertise and the blame blocking effect in general.

When are Humans Reasoning with Modus Tollens?

Modus tollens is a rule of inference in classical, two-valued logic which allows one to derive the negation of the antecedent from a conditional and the negation of its consequent. In this paper, we investigate when humans draw such conclusions and what modulates the application of modus tollens. We consider conditionals which may or may not be obligations and which may or may not have necessary antecedents. We show that humans make significantly more modus tollens inferences in the case of obligation conditionals and that the time to make a modus tollens inference is shorter than the time to answer "nothing follows". We illustrate how these differences can be modeled within the weak completion semantics.
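
For reference, the inference pattern at issue is

\[ \frac{p \rightarrow q \qquad \neg q}{\therefore\ \neg p} \]

so that, for example (an illustration of our own, not an item from the study), from "If the library is open, then the lights are on" and "The lights are not on", modus tollens licenses "The library is not open".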

Perceptual Processes of Face Recognition: Single feature orientation and holistic information contribute to the face inversion effect

In this study (n=144) we investigated the perceptual processes that are the basis of the face inversion effect (better recognition for upright vs inverted faces). We evaluated the effects of disrupting configural information (i.e., the spatial relationships among the main facial features) and of disrupting holistic information indexed by the face outline. We used scrambled faces, which are characterized by a disruption of configural information, and scrambled no-contour faces, which, in addition to disrupted configural information, also suffered a disruption of the face outline. Using an old/new recognition task we obtained a robust inversion effect for scrambled faces. No significant inversion effect was found for scrambled no-contour faces. Our results provide direct evidence that holistic information plays a significant role in the inversion effect. We also confirmed that it is possible to obtain a robust inversion effect when configural information is disrupted.

Different kinds of cognitive plausibility: why are transformers better than RNNs at predicting N400 amplitude?

Despite being designed for performance rather than cognitive plausibility, transformer language models have been found to be better at predicting metrics used to assess human language comprehension than language models with other architectures, such as recurrent neural networks. Based on how well they predict the N400, a neural signal associated with processing difficulty, we propose and provide evidence for one possible explanation—their predictions are affected by the preceding context in a way analogous to the effect of semantic facilitation in humans.

Younger and Older Speakers' Use of Linguistic Redundancy with a Social Robot

Social robotics has shown expansive growth in areas related to companionship/assistance for older adults. Critically, everyday interactions with artificial agents often involve spoken language in the context of a shared visual environment. Therefore, language interfaces for these applications must account for the distinctive nature of visually-situated communication revealed by psycholinguistic studies. In traditional frameworks, "rational" speakers were thought to avoid redundancy, yet human-human communication research shows that both younger and older speakers include redundant information (e.g., color adjectives) in descriptions to facilitate listeners' visual search. However, this "cooperative" use of redundant expressions hinges on beliefs about listeners' perception (e.g., "pop-out" nature of human color processing). We explored the incidence and nature of younger and older speakers' redundant descriptions for a robot partner in different visual environments. Whereas both age groups produced redundant descriptions, there were important age differences for when these descriptions occurred and for the properties encoded in them.

Musical syntactic structure improves memory for melody: evidence from the processing of ambiguous melodies

Memories of most stimuli in the auditory and other domains are prone to the disruptive interference of intervening events, whereby memory performance continuously declines as the number of intervening events increases. However, melodies in a familiar musical idiom are robust to such interference. We propose that representations of musical structure emerging from syntactic processing may provide partially redundant information that accounts for this robust encoding in memory. The present study employs tonally ambiguous melodies which afford two different syntactic interpretations in the tonal idiom. Crucially, since the melodies are ambiguous, memory across two presentations of the same melody cannot bias whether the interpretation in a second listening will be the same as the first, unless a representation of the first syntactic interpretation is also encoded in memory in addition to sensory information. The melodies were presented in a Memory Task, based on a continuous recognition paradigm, as well as in a Structure Task, where participants reported their syntactic interpretation of each melody following a disambiguating cue. Our results replicate memory-for-melody's robustness to interference, and further establish a predictive relationship between memory performance in the Memory Task and the robustness of syntactic interpretations against the bias introduced by the disambiguating cue in the Structure Task. As a consequence, our results support that a representation based on a disambiguating syntactic parse provides an additional, partially redundant encoding that feeds into memory alongside sensory information. Furthermore, establishing a relationship between memory performance and the formation of structural representations supports the relevance of syntactic relationships towards the experience of music.

Explaining Machine Learned Relational Concepts in Visual Domains - Effects of Perceived Accuracy on Joint Performance and Trust

Most machine learning based decision support systems are black box models that are not interpretable for humans. However, the demand for explainable models to create comprehensible and trustworthy systems is growing, particularly in complex domains involving risky decisions. In many domains, decision making is based on visual information. We argue that explanations nevertheless need to be verbal to communicate the relevance of specific feature values and critical relations for a classification decision. To address that claim, we introduce a fictitious visual domain from archeology where aerial views of ancient grave sites must be classified. Trustworthiness relies, among other factors, on the perceived or assumed correctness of a system's decisions. Models learned by induction from data, in general, cannot have perfect predictive accuracy, and one can assume that unexplained erroneous system decisions might reduce trust. In a 2×2 factorial online experiment with 190 participants, we investigated the effect of verbal explanations and information about system errors. Our results show that explanations increase comprehension of the factors on which classification of grave sites is based and that explanations increase the joint performance of human and system for new decision tasks. Furthermore, explanations result in more confidence in decision making and higher trust in the system.

Revisiting the Role of Uncertainty-Driven Exploration in a (Perceived) Non-Stationary World

Humans are often faced with an exploration-versus-exploitation trade-off. A commonly used paradigm, multi-armed bandit, has shown humans to exhibit an "uncertainty bonus", which combines with estimated reward to drive exploration. However, previous studies often modeled belief updating using either a Bayesian model that assumed the reward contingency to remain stationary, or a reinforcement learning model. Separately, we previously showed that human learning in the bandit task is best captured by a dynamic-belief Bayesian model. We hypothesize that the estimated uncertainty bonus may depend on which learning model is employed. Here, we re-analyze a bandit dataset using all three learning models. We find that the dynamic-belief model captures human choice behavior best, while also uncovering a much larger uncertainty bonus than the other models. More broadly, our results also emphasize the importance of an appropriate learning model, as it is crucial for correctly characterizing the processes underlying human decision making.

Regularisation, Systematicity and Naturalness in a Silent Gesture Learning Task

Typological analysis of the world’s languages shows that, of the 6 possible basic word orders, SOV and SVO orders are predominant, a preference supported by experimental studies in which participants improvise gestures to describe events. Silent gesture studies have also provided evidence for natural ordering patterns, where SOV and SVO orders are used selectively depending on the semantics of the event, a finding recently supported by data from natural sign languages. We present an artificial language learning task using gesture to ask to what extent preferences for natural ordering patterns, in addition to biases for regular languages, are at play during learning in the manual modality.

Macaques preferentially attend to intermediately surprising information

Normative learning theories dictate that we should preferentially attend to informative sources, but only up to the point that our limited learning systems can process their content. Humans, including infants, show this predicted strategic deployment of attention. Here we demonstrate that rhesus monkeys, much like humans, attend to events of moderate surprisingness over both more and less surprising events. They do this in the absence of any specific goal or contingent reward, indicating that the behavioral pattern is spontaneous. We suggest this U-shaped attentional preference represents an evolutionarily preserved strategy for guiding intelligent organisms toward material that is maximally useful for learning.

Learning part-based abstractions for visual object concepts

The ability to represent semantic structure in the environment — objects, parts, and relations — is a core aspect of human visual perception and cognition. Here we leverage recent advances in program synthesis to develop an algorithm for learning the part-based structure of drawings as represented by graphics programs. This algorithm iteratively learns a library of abstract subroutines that can be used to more compactly represent a set of drawings by capturing common structural elements. Our experiments explore how this algorithm exploits statistical regularities across drawings to learn new subroutines. Together, these findings highlight the potential for understanding human visual concept learning via program-like abstractions.

One and known: Incidental probability judgments from very few samples

We test whether people are able to reason based on incidentally acquired probabilistic and context-specific magnitude information. We manipulated the variance of values drawn from two normal distributions while participants performed an unrelated counting task. Our results show that people do learn category-specific information incidentally, and that the pattern of their judgments is broadly consistent with normative Bayesian reasoning at the cohort level, but with large individual-level variability. We find that this variability is explained well by a frugal memory sampling approximation; observer models making this assumption explain approximately 70% of the variation in participants' responses. We also find that behavior while judging easily discriminable categories is consistent with a model observer drawing fewer samples from memory, while behavior while judging less discriminable categories is better fit by models drawing more samples from memory. Thus, our model-based analysis additionally reveals resource-rationality in memory sampling.
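
A minimal sketch of a frugal memory-sampling observer of the kind described (the function, parameter values, and distributions below are our assumptions, not the paper's model specification):

```python
# Sketch: probability judgments formed from a handful of samples retrieved
# from memory rather than from the full remembered distribution.
import numpy as np

def sample_based_judgment(memory_values, query, k=3, rng=None):
    """Estimate P(new category draw > query) from k samples drawn from memory."""
    rng = rng or np.random.default_rng()
    samples = rng.choice(memory_values, size=k, replace=True)
    return float(np.mean(samples > query))

rng = np.random.default_rng(1)
memory = rng.normal(10.0, 2.0, size=200)   # incidentally encoded exemplars
print(sample_based_judgment(memory, 11.0, k=3, rng=rng))   # coarse, high-variance estimate
```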

The role of eye movement pattern and global-local information processing abilities in isolated English word reading

In isolated word reading, readers have the best performance when fixating between the beginning and center of a word, i.e., the optimal viewing position (OVP). Also, perceptual expertise literature suggests that both global and local processing are important for visual stimulus recognition. Here we showed that in lexical decision, higher similarity to an eye movement pattern that focused at the OVP and better local processing ability predicted faster response time (RT), in addition to verbal working memory and lexical knowledge. Also, this eye movement pattern was associated with longer RT in naming isolated single letters, suggesting conflicting visual abilities required for identifying isolated letters and letter strings. In contrast, word and pseudoword naming RT, and lexical decision and naming accuracy, were predicted by lexical knowledge but not eye movement pattern or global-local processing abilities. Thus, visual processing abilities are important factors accounting for isolated word reading fluency not involving naming.

Cognitive cost and information gain trade off in a large-scale number guessing game

How do people ask questions to zero in on a correct answer? Although we can formally define an optimal query to maximize information gain, algorithms for finding this optimal guess may impose large resource costs in space (memory) and time (computation). To understand how people trade off the information gain and the computational difficulty of choosing the ideal query, we turned to a large dataset of 380,000 guesses made during a number-guessing game with Amazon Alexa. We analyzed whether the arithmetic difficulty of following the optimal strategy predicts how far a guess deviates from the theoretically optimal query. We find that when memory load is higher, and when more arithmetic operations need to be performed, human guesses deviate more from the most informative query. These results suggest that human computational resource constraints limit how people seek out informative questions.
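
For concreteness (a sketch under our own assumptions, not the analysis code used in the study), the expected information gain of a guess g when the hidden number is uniform on 1..n and feedback is "lower", "higher", or "correct" can be computed directly; bisecting the remaining interval maximizes it:

```python
# Sketch: expected information gain (in bits) of a guess in a number-guessing
# game with "lower"/"higher"/"correct" feedback over a uniform range.
import math

def expected_info_gain(g, lo, hi):
    n = hi - lo + 1
    prior_entropy = math.log2(n)
    outcome_sizes = {"lower": g - lo, "higher": hi - g, "correct": 1}
    expected_posterior = sum(
        (size / n) * math.log2(size) for size in outcome_sizes.values() if size > 0
    )
    return prior_entropy - expected_posterior

# The most informative guess bisects the interval (here, 50 for the range 1-100).
print(max(range(1, 101), key=lambda g: expected_info_gain(g, 1, 100)))
```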

Contextual Flexibility Guides Communication in a Cooperative Language Game

Context-sensitive communication not only requires speakers to choose relevant utterances from alternatives, but also to retrieve and evaluate the relevant utterances from memory in the first place. In this work, we compared different proposals about how underlying semantic representations work together with higher-level selection processes to enable individuals to flexibly utilize context to guide their language use. We examined speaker and guesser performance in a two-player iterative language game based on Codenames, which asks speakers to choose a single 'clue' word that allows their partner to select a pair of target words from a context of distractors. The descriptive analyses indicated that speakers were sensitive to the shared semantic neighborhood of the target word pair and were able to use guesser feedback to shift their clues closer to the unguessed word. We also formulated a series of computational models combining different semantic representations with different selection processes. Model comparisons suggested that a model which integrated contextualized lexical representations based on association networks with a contextualized model of pragmatic reasoning was better able to predict behavior in the game compared to models that lacked context at either the representational or process level. Our findings suggest that flexibility in communication is driven by context-sensitivity at the level of both representations and processes.

Building a Psychological Ground Truth Dataset with Empathy and Theory-of-Mind During the COVID-19 Pandemic

As the mental health crisis deepens with the prolonged COVID-19 pandemic, there is an increasing need for understanding individuals’ emotional experiences. We have built a large-scale Korean text corpus with five self-labeled psychological ground-truths: empathy, loneliness, stress, personality, and emotions. We collected 19,025 documents of daily emotional experiences from 3,805 Korean residents from October to December 2020. We also collected 42,128 sentences with different levels of theory-of-mind. Each sentence was annotated by trained psychology students and reviewed by experts. Participants ranged in age from their early 20s to late 80s and had various social and economic statuses. The pandemic impacted the majority of daily lives, and participants often reported negative emotional experiences. We found the most frequent topics: responses to confirmed cases, health concerns of family members, anger towards people without masks, stress-relief strategies, lifestyle changes, and preventive practices. We then trained a Word2Vec model to observe specific words that match each topic from the topic model. The current dataset will serve as benchmark data for large-scale and computational methods for identifying mental health levels based on text. This dataset is expected to be used and transformed in many creative ways to mitigate COVID-19-related mental health problems.

A sequential sampling account of semantic relatedness decisions

Semantic relatedness decisions – decisions about whether two concepts are semantically related or not – depend on cognitive processes of semantic memory and decision making. However, behavioral findings are mostly interpreted in the light of memory retrieval as spreading activation, neglecting decision components. We propose sequential sampling models (SSMs) of decision making as a novel computational account of choice and response time data. In a simulation study, we generate data from basic SSM versions. We show if and how these models can account for two established behavioral benchmarks: the inverted-U shape of response times and the relatedness effect. Further, one of the SSMs, the Leaky Competing Accumulator model, makes a novel prediction: the relatedness effect reverses for weakly related word pairs. Reanalyzing a publicly available data set, we found credible evidence that this prediction holds empirically. Our results provide strong support for SSMs as a viable computational account of semantic relatedness decisions.
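
To illustrate the class of models at issue (a minimal sketch with invented parameter values, not the fitted models reported above), a two-accumulator Leaky Competing Accumulator can be simulated as a noisy race between "related" and "unrelated" responses:

```python
# Sketch: one trial of a two-accumulator Leaky Competing Accumulator (LCA).
import numpy as np

def lca_trial(drift_related, drift_unrelated, leak=0.2, inhibition=0.3,
              noise=0.3, threshold=1.0, dt=0.01, max_t=3.0, rng=None):
    rng = rng or np.random.default_rng()
    x = np.zeros(2)                          # activations: [related, unrelated]
    drifts = np.array([drift_related, drift_unrelated])
    for step in range(int(max_t / dt)):
        inhib = inhibition * x[::-1]         # lateral inhibition from the rival
        dx = (drifts - leak * x - inhib) * dt + noise * np.sqrt(dt) * rng.standard_normal(2)
        x = np.maximum(x + dx, 0.0)          # activations stay non-negative
        if x.max() >= threshold:
            return ("related" if x.argmax() == 0 else "unrelated", step * dt)
    return ("no decision", max_t)

rng = np.random.default_rng(0)
print([lca_trial(1.2, 0.6, rng=rng) for _ in range(3)])   # choices and response times
```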

Zipf's law of abbreviation and common ground: Past communicative success hampers the re-optimization of language

Zipf’s Law of Abbreviation (ZLA) states that the more frequently a word is used, the shorter its length tends to be. This arises due to the optimal trade-off between competing pressures for accuracy and efficiency in communication, known as the Principle of Least Effort. Existing research has not focused on how individuals adapt their language use to remain optimal despite language change and whether social factors like common ground affect this. To investigate this, we replicated and extended the artificial language learning paradigm and communication game of Kanwal et al. (2017). We found participants were able to re-optimize their language use according to ZLA after a language change event, but this ability was hampered by common ground. This research identifies common ground as one potential cause for observed sub-optimalities in human languages and may have implications for understanding the dynamics of language change across communities where common ground varies.
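
As a toy illustration of ZLA itself (the word counts below are invented for illustration, not data from this study), the law amounts to a negative rank correlation between word frequency and word length:

```python
# Toy sketch: Zipf's Law of Abbreviation as a negative frequency-length correlation.
from scipy.stats import spearmanr

toy_counts = {"the": 50000, "of": 30000, "and": 28000, "you": 20000,
              "information": 900, "communication": 400,
              "approximately": 150, "notwithstanding": 20}
rho, _ = spearmanr(list(toy_counts.values()), [len(w) for w in toy_counts])
print(f"Spearman rho = {rho:.2f}")   # negative: more frequent words are shorter
```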

Body Image During Quarantine: Generational Effects of Social Media Pressure on Body Appearance Perception

One of the consequences of the pandemic is that throughout 2020 virtual interactions largely replaced face-to-face interactions. Though there are few studies of how social media impacts body image perception across genders, research suggests that socializing through a virtual self-body image might have distinct implications for men and women. In an online study, we examined whether the type of social pressure and body ideal exert distinct pressures on members of the X, Y, and Z generations. Results showed that media pressure affected body image satisfaction significantly more than other kinds of social pressure across genders and generations, with young males reporting a higher impact compared to older males. Males experienced more pressure to be muscular and women to be thin, especially in the younger generation. Future research should focus on social media as a potential intervention tool for the detection and prevention of body image disorders in both young female and male adults.

AgeNet: A Neurobiological Model of Age-related Word Retrieval Deficits

Normal aging is associated with an increase in word finding problems. Competing explanations posit that age impairs access to phonological representations (transmission deficit) or leads to a deterioration of semantic representations (under-activation). Because these accounts are difficult to disentangle in a highly interactive language system, we employed a neuro-biologically grounded spiking network model which was lesioned to reflect transmission deficits or under-activation. Results of three simulated picture naming paradigms were in line with the transmission deficit account that normal aging impairs access to representations during word production. These initial findings suggest that this is a promising approach for understanding age-related changes to language ability in an interactive system.

Causal Learning With Delays Up to 21 Hours

Delays between causes and effects are commonly found in cause-effect relationships in real life. However, previous studies have only investigated delays on the order of seconds. In the current study we tested whether people can learn a cause-effect relation with hour long delays. The delays between the cause and effect were either 0, 3, 9, or 21 hours, and the study lasted 16 days. Surprisingly, we found that participants were able to learn the causal relation about equally as well in all four conditions. These findings demonstrate a remarkable ability to accurately learn causal relations in a realistic timeframe that has never been tested before.

Can 1- and 2-year-old toddlers learn causal action sequences?

Toddlers can learn cause-effect relationships between single actions and outcomes. However, real-world causality is often more complex. We investigated whether toddlers (12- to 35-month-olds) can learn that a sequence of two actions is causally necessary, from observing the actions of an adult demonstrator. In Experiment 1, toddlers saw evidence that performing a two-action sequence (AB) on a puzzle-box was necessary to produce a sticker, and evidence that B alone was not sufficient. Toddlers were then given the opportunity to interact with the box and retrieve up to five stickers. Toddlers had difficulty reproducing the required two-action sequence, with the ability to do so improving with age. In Experiment 2, toddlers saw evidence that performing a single action (B) was sufficient to produce an effect (i.e., a sequence was not causally necessary). Toddlers were more successful and performed fewer sequences in Experiment 2, suggesting some sensitivity to the sequential causal structure.

Evaluating Transformative Decisions

Recent philosophical work has taken interest in the decision-theoretic problems posed by transformative experiences, or experiences that are epistemically revelatory and life-changing (like becoming a parent). The problem is roughly as follows: if we cannot know what it’s like to be a parent (its subjective value) before actually becoming one, then how are we to decide whether to become one? This topic has received recent empirical attention, some of which has challenged the central importance of subjective value for transformative decision-making. Here, we present empirical work suggesting these findings can be explained by the evaluability bias, in which people weigh decision criteria not based on their importance, but on how easy they are to evaluate. Participants not only find subjective value important, but they report being willing to pay a great deal of money to get this information. Furthermore, participants who were most uncertain about whether to undergo a transformative experience were most likely to report interest in seeking out information about subjective value. We conclude by considering the philosophical and empirical implications of this work.

A confirmation bias due to approximate active inference

Collecting new information about the outside world is a key aspect of brain function. In the context of vision, we move our eyes multiple times per second to accumulate evidence about a scene. Prior studies have suggested that this process is goal-directed and close to optimal. Here, we show that this process of seeking new information suffers from a confirmation bias similar to what has been observed in a wide range of other contexts. We present data from a new gaze-contingent task that allows us to both estimate a participant's current belief, and compare that to their subsequent eye-movements. We find that these eye-movements are biased in a confirmatory way. Finally, we show that these empirical results can be parsimoniously explained under the assumption that the brain performs approximate, not exact, inference, with computations being more approximate in decision-making compared to sensory areas.

Model-based foraging using latent-cause inference

Foraging has been suggested to provide a more ecologically-valid context for studying decision-making. However, the environments used in human foraging tasks fail to capture the structure of real world environments which contain multiple levels of spatio-temporal regularities. We ask if foragers detect these statistical regularities and use them to construct a model of the environment that guides their patch-leaving decisions. We propose a model of how foragers might accomplish this, and test its predictions in a foraging task with a structured environment that includes patches of varying quality and predictable transitions. Here, we show that human foraging decisions reflect ongoing, statistically-optimal structure learning. Participants modulated decisions based on the current and potential future context. From model fits to behavior, we can identify an individual's structure learning ability and relate it to decision strategy. These findings demonstrate the utility of leveraging model-based reinforcement learning to understand foraging behavior.

Learning Evolved Combinatorial Symbols with a Neuro-symbolic Generative Model

Humans have the ability to rapidly understand rich combinatorial concepts from limited data. Here we investigate this ability in the context of auditory signals, which have been evolved in a cultural transmission experiment to study the emergence of combinatorial structure in language. We propose a neuro-symbolic generative model which combines the strengths of previous approaches to concept learning. Our model performs fast inference drawing on neural network methods, while still retaining the interpretability and generalization from limited data seen in structured generative approaches. This model outperforms a purely neural network-based approach on classification as evaluated against both ground truth and human experimental classification preferences, and produces superior reproductions of observed signals as well. Our results demonstrate the power of flexible combined neural-symbolic architectures for human-like generalization in raw perceptual domains and offer a step towards developing precise computational models of inductive biases in language evolution.

Order Effects in Bayesian Updates

Order effects occur when judgments about a hypothesis's probability given a sequence of information do not equal the probability of the same hypothesis when the order of the information is reversed. Several experiments in the literature provide evidence of order effects. We propose a Bayesian update model for order effects in which each question can be thought of as a mini-experiment in which the respondents reflect on their beliefs. We show that order effects appear, and that they have a simple cognitive explanation: the respondent's prior belief that two questions are correlated. The proposed Bayesian model allows us to make several predictions: (1) we find certain conditions on the priors that limit the existence of order effects; (2) we show that, for our model, the QQ equality is not necessarily satisfied (due to symmetry assumptions); and (3) the proposed Bayesian model has the advantage of possessing fewer parameters than its quantum counterpart.
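
As a point of reference (our paraphrase, not the authors' formalism), an order effect is the inequality

\[ \Pr(H \mid A \text{ then } B) \;\neq\; \Pr(H \mid B \text{ then } A), \]

whereas an agent who conditions on answers that are conditionally independent given H reaches the same posterior in either order, since \(\Pr(H \mid A, B) \propto \Pr(A \mid H)\,\Pr(B \mid H)\,\Pr(H)\) is symmetric in A and B. On the account sketched above, the symmetry is broken because the respondent's belief that the two questions are correlated makes the likelihood of the second answer depend on the first.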

Why belief in species purpose prompts moral condemnation of individuals who fail to fulfill that purpose

Suppose humans exist in order to reproduce. Does it follow that an individual who chooses not to reproduce is committing a moral wrong? Past work suggests that, right or wrong, beliefs about species-level purpose are associated with moral condemnation of individuals who choose not to fulfill that purpose. Across two experiments we investigate why. Experiment 1 replicates a causal effect of species-level purpose on moral condemnation. Experiment 2 finds evidence that when a species is believed to exist to perform some action, people infer that the action is good for the species, and that this belief in turn supports moral condemnation of individuals who choose not to perform the action. Together, these findings shed light on how our descriptive understanding can sometimes shape our prescriptive judgments.

I Know You Know I’m Signaling: Novel gestures are designed to guide observers’ inferences about communicative goals

For a gesture to be successful, observers must recognize its communicative purpose. Are communicators sensitive to this problem and do they try to ease their observer’s inferential burden? We propose that people shape their gestures to help observers easily infer that their movements are meant to communicate. Using computational models of recursive goal inference, we show that this hypothesis predicts that gestures ought to reveal that the movement is inconsistent with the space of non-communicative goals in the environment. In two gesture-design experiments, we find that people spontaneously shape communicative movements in response to the distribution of potential instrumental goals, ensuring that the movement can be easily differentiated from instrumental action. Our results show that people are sensitive to the inferential demands that observers face. As a result, people actively work to help ensure that the goal of their communicative movement is understood.

Incidental discrete emotions influence processes of evidence accumulation in reinforcement-learning

Discrete emotions are known to elicit changes in decision-making. Previous research has found that affect biases response times and the perception of evidence for choices, among other key factors of decision-making. However, little is known about how affect influences the specific cognitive mechanisms that underlie decision-making. We investigated these mechanisms by fitting a hierarchical reinforcement-learning decision diffusion model to participant choice data. Following the collection of baseline decision-making data, participants took part in a writing exercise to generate neutral or discrete emotions. Following the writing exercise, participants made additional decisions. We found that exposure to discrete emotions modulates decision-making through several mechanisms including rates of learning and evidence accumulation, separation of decision thresholds, and sensitivity to noise. Furthermore, we found that exposure to each of the four discrete emotions modulated decision-making differently. These findings integrate learning and decision process models to expand on previous research and elucidate processes of affective decision-making.

Learning exceptions to the rule in human and model via hippocampal encoding

We explore the impact of learning sequence on performance in a rule-plus-exception categorization task. Behavioural results indicate that exception categorization accuracy improves when exceptions are introduced later in learning, after exposure to rule-following stimuli. Simulations of this task using a neural network model of hippocampus and its subfields replicate these behavioural findings. Representational similarity analysis of the model’s hidden layers suggests that model representations are also impacted by trial sequence. These results provide novel computational evidence of hippocampus’s sensitivity to learning sequence and further support this region’s proposed role in category learning.

Association knowledge guides conjunctive predictions in novel situations

The mind readily learns cue-outcome associations where an object predicts a specific outcome. Previous work suggested that when multiple objects associated with different outcomes were jointly presented, the mind made conjunctive predictions that represented the common property of the associated outcomes. Using attentional tracking measures, we provided more evidence for the weighted summation framework when the conjunctive predictions involved spatial locations (Experiment 1) or conceptual categories (Experiment 2). Then, we examined the reverse of conjunction, where participants were presented with a single object, which is a part of an object pair that was previously associated with an outcome (Experiment 3). Rather than making predictions based on mental operations such as subtraction, we found that participants’ predictions were purely based on previous associations. These results together demonstrated the robust tendency to make conjunctive predictions based on knowledge of cue-outcome associations.

A neural dynamic process model of combined bottom-up and top-down guidance in triple conjunction visual search

The surprising efficiency of triple conjunction search has created a puzzle for modelers who link visual feature binding to selective attention, igniting an ongoing debate on whether features are bound with or without attention. Nordfang and Wolfe (2014) identified feature sharing and grouping as important factors in solving the puzzle and thereby established new constraints for models of visual search. Here we extend our neural dynamic model of scene perception and visual search (Grieben et al., 2020) to account for these constraints without the need for preattentive binding. By demonstrating that visual search is not only guided top-down, but that its efficiency is affected by bottom-up salience, we address a major theoretical weakness of models of conjunctive visual search (Proulx, 2007). We show how these complex interactions emerge naturally from the underlying neural dynamics.

The psychophysics of number arise from resource-limited spatial memory

People can identify the number of objects in small sets rapidly and without error but become increasingly noisy for larger sets. However, the cognitive mechanisms underlying these ubiquitous psychophysics are poorly understood. We present a model of a limited-capacity visual system optimized to individuate and remember the location of objects in a scene which gives rise to all key aspects of number psychophysics, including error-free small number perception and scalar variability for larger numbers. We therefore propose that number psychophysics can be understood as an emergent property of primitive perceptual mechanisms --- namely, the process of identifying and representing individual objects in a scene. To test our theory, we ran two experiments: a change-localization task to measure participants' memory for the locations of objects (Experiment 1) and a numerical estimation task (Experiment 2). Our model accounts well for participants' performance in both experiments, despite only being optimized to efficiently encode where objects are present in a scene. Our results demonstrate that the key psychophysical features of numerical cognition do not arise from separate modules or capacities specific to number, but rather from lower-level constraints on perception which are manifested even in non-numerical tasks.

Infants’ Social Communication from a Predictive Processing Perspective

Predictive Processing (PP) has been suggested to account for early cognitive development (Köster et al., 2020). In this paper, we extend its application to early social coordination in parent-infant interactions. Interpersonal neural synchrony in parent-child interactions is hypothesized to be a function of the coupling of internal generative models for mutual prediction. By aligning these internal models and reducing prediction errors, social uncertainty is reduced, and interpersonal neural synchrony is enhanced. Support for this hypothesis is provided by assessing neural synchrony during mother-infant interaction.

Why is scaling up models of language evolution hard?

Computational model simulations have been very fruitful for gaining insight into how the systematic structure we observe in the world's natural languages could have emerged through cultural evolution. However, these model simulations operate on a toy scale compared to the size of actual human vocabularies, due to the prohibitive computational resource demands that simulations with larger lexicons would pose. Using computational complexity analysis, we show that this is not an implementational artifact, but instead it reflects a deeper theoretical issue: these models are (in their current formulation) computationally intractable. This has important theoretical implications, because it means that there is no way of knowing whether or not the properties and regularities observed for the toy models would scale up. All is not lost however, because awareness of intractability allows us to face the issue of scaling head-on, and can guide the development of our theories.

Less Egocentric Biases in Theory of Mind When Observing Agents in Unbalanced Decision Problems

Theory of Mind (ToM) or mentalizing is the ability to infer mental states of oneself and other agents. Theory of mind plays a key role in social interactions as it allows one to predict other agents' likely future actions by inferring what they may intend or know. However, there is a wide range of ToM skills of increasing complexity. While most people are generally capable of performing complex ToM reasoning such as recursive belief inference when explicitly prompted, there is much evidence that humans do not always use ToM to their full capabilities. Instead, people often fall back to heuristics and biases, such as an egocentric bias that projects one's beliefs and perspective onto the observed agent. We explore which (internal or external) factors may influence the mentalizing processes that humans employ unsolicitedly, i.e., employ without being primed or explicitly triggered. In this paper we present an online study investigating unbalanced decision problems where one choice is significantly better than the other. Our results demonstrate that participants are significantly less likely to exhibit an egocentric bias in such situations.

Modeling joint attention from egocentric vision

Numerous studies in cognitive development have provided converging evidence that Joint Attention (JA) is crucial for children to learn about the world together with their parents. However, a closer look reveals that, in the literature, JA has been operationally defined in different ways. For example, some definitions require explicit signals of “awareness” of being in JA—such as gaze following, while others simply define JA as shared gaze to an object or activity. But what if “awareness” is possible without gaze following? The present study examines egocentric images collected via head-mounted eye-trackers during parent-child toy play. A Convolutional Neural Network model was used to process and learn to classify raw egocentric images as JA vs not JA. We demonstrate individual child and parent egocentric views can be classified as being part of a JA bout at above chance levels. This provides new evidence that an individual can be “aware” they are in JA based solely on the in-the-moment visual information. Moreover, both models trained on child views and those trained on parent views leveraged the visual properties associated with visual object holding to improve classification accuracy—suggesting a critical role for object handling in not only establishing JA, as shown in previous research, but also in inferring the social partner’s attentional state during JA.
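
A minimal sketch of a frame-level classifier of this kind (the architecture, sizes, and names below are illustrative assumptions, not the authors' model):

```python
# Sketch: a small CNN that maps raw egocentric frames to JA / not-JA logits.
import torch
import torch.nn as nn

class JAClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 2)    # logits for [not JA, JA]

    def forward(self, frames):                # frames: (batch, 3, H, W)
        return self.classifier(self.features(frames).flatten(1))

model = JAClassifier()
print(model(torch.randn(4, 3, 128, 128)).shape)   # torch.Size([4, 2])
```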

Large-scale study of speech acts' development using automatic labelling

Studies of children's language use in the wild (e.g., in the context of child-caregiver social interaction) have been slowed by the time- and resource-consuming task of hand annotating utterances for communicative intents/speech acts. Existing studies have typically focused on investigating rather small samples of children, raising the question of how their findings generalize both to larger and more representative populations and to a richer set of interaction contexts. Here we propose a simple automatic model for speech act labeling in early childhood based on the INCA-A coding scheme (Ninio et al., 1994). After validating the model against ground truth labels, we automatically annotated the entire English-language data from the CHILDES corpus. The major theoretical result was that earlier findings generalize quite well at a large scale. Our model will be shared with the community so that researchers can use it with their data to investigate various questions related to language use in both typical and atypical populations of children.

The Effects of Onset and Offset Masking on the Time Course of Non-Native Spoken-Word Recognition in Noise

Using the visual-world paradigm, the present study investigated the effects of word onset and offset masking on the time course of non-native spoken-word recognition in the presence of background noise. In two experiments, Dutch non-native listeners heard English target words, preceded by carrier sentences that were noise-free (Experiment 1) or contained intermittent noise (Experiment 2). Target words were either onset-masked, offset-masked, or not masked at all. Analyses showed that onset masking delayed target word recognition more than offset masking did. These results suggest that – in line with contemporary models of spoken-word recognition – non-native listeners rely strongly on word onset information when hearing words in noise.

Aesthetic experience is influenced by causality in biological movements

People watching is a ubiquitous component of human activities. An important aspect of such activities is the aesthetic experience that arises naturally from seeing how elegantly people move their bodies in performing different actions. What makes some body movements look better than others? We examined how visual processing contributes to the aesthetic experience of seeing actions, using point-light “creatures” generated by spatially scrambling the locations of a point-light walker’s joints. Observers rated how aesthetically pleasing and lifelike each creature looked in a video of the creature moving from left to right. They viewed four kinds of creatures: the joints’ trajectories were either from an upright walker (thus exhibiting gravitational acceleration) or an inverted walker (thus defying gravity), and were either congruent with the direction of global body displacements or incongruent (as in the moonwalk). Observers gave higher aesthetic and animacy ratings for creatures with upright versus inverted trajectories, and for congruent versus incongruent movements. Moreover, after regressing out the influence of animacy, the creatures that move in a natural causal manner (in accordance with gravity and their body displacements) were still preferred. The subtle differences between the different kinds of creatures suggest a role of automatic perceptual mechanisms in these preferences. Thus, while our thinking minds may enjoy watching the magical moonwalk, our automatic minds, with a taste for causality, may curtail the impression of its visual beauty.

What interventions can decrease or increase belief polarisation in a population of rational agents?

In many situations where people communicate (e.g., Twitter, Facebook, etc.), people self-organise into ‘echo chambers’ of like-minded individuals, with different echo chambers espousing very different beliefs. Why does this occur? Previous work has demonstrated that such belief polarisation can emerge even when all agents are completely rational, as long as their initial beliefs are heterogeneous and they do not automatically know whom to trust. In this work, we used agent-based simulations to further investigate the mechanisms for belief polarisation. Our work extended previous work by using a more realistic scenario. In this scenario, we found that previously proposed methods for reducing belief polarisation did not work, but we were able to find a new method that did. However, this same method could be reversed by adversarial entities to increase belief polarisation. We discuss how this danger can best be mitigated and what theoretical conclusions can be drawn from our findings.

A formal comparison/contrast of associative and relational learning: a case study of relational schema induction

Relational schema induction involves a series of learning tasks conforming to a common (group-like) structure. The paradigm contrasts associative versus relational aspects of learning for cognitive, developmental, and comparative psychology. Yet a theory accounting for the relationship between these forms of learning has not been fully developed. We use (mathematical) category theory methods to redress this situation: both forms of learning involve a (universal) construction that differs in terms of “dimensionality”, i.e., one-dimensional (associative) versus two-dimensional (relational). Accordingly, the development of relational learning pertains to changes in the dimensionality of the underlying relational schemas that are induced.

Naive Utility Calculus underlies the reproduction of disparities in social groups

On the road to a more fair and just world, we must recognize ubiquitous disparities in our society, but awareness alone is not enough: Observed disparities between groups often get wrongly attributed to inherent traits (e.g., African Americans are disproportionately arrested because they are more prone to crime), creating a self-perpetuating feedback loop. As shown in a past study (Meng & Xu, 2020), such reasoning can result from the Naive Utility Calculus (Jara-Ettinger et al., 2016): If an agent knows a target trait's “hit rate” in every group and avoids unnecessary sampling, it is rational to infer that groups sampled from more often have higher hit rates. The previous study used non-social categories (robot chickens) as stimuli, which raises the question of whether the results generalize to the social domain. In the current study, we replicated past findings using novel social groups (aliens): Overall, people were more likely to check groups examined more often by the agent, but when observed hit rates did not support the agent's sampling behavior, people incorporated both information sources to infer group hit rates. This work brought NUC-based models one step closer to tackling disparities in a real world consisting of social groups.

Tracking what matters: A decision-variable account of human behavior in bandit tasks

We study human learning and decision-making in tasks with probabilistic rewards. Recent studies in a 2-armed bandit task find that a modification of classical Q-learning algorithms, with outcome-dependent learning rates, better explains behavior compared to constant learning rates. We propose a simple alternative: humans directly track the decision variable underlying choice in the task. Under this policy learning perspective, asymmetric learning can be reinterpreted as increasing confidence in the preferred choice. We provide specific update rules for incorporating partial feedback (outcomes on chosen arms) and complete feedback (outcomes on chosen and unchosen arms), and show that our model consistently outperforms previously proposed models on a range of datasets. Our model and update rules also add nuance to previous findings of perseverative behavior in bandit tasks; we show evidence of outcome-dependent choice perseveration, i.e., that humans persevere in their choices unless contradictory evidence is presented.
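
To make the contrast concrete, here is a minimal sketch (not the authors' code or exact update rules) of the two kinds of model being compared: a Q-learning update with outcome-dependent learning rates versus a direct update of a single decision variable; all function names and parameter values are illustrative assumptions.

```python
# Illustrative contrast for a 2-armed bandit: asymmetric Q-learning vs.
# directly tracking a decision variable that favors arm 1 over arm 0.
import numpy as np

def q_update(q, choice, reward, alpha_pos=0.3, alpha_neg=0.1):
    """Q-learning with outcome-dependent learning rates (illustrative values)."""
    delta = reward - q[choice]
    alpha = alpha_pos if delta > 0 else alpha_neg
    q[choice] += alpha * delta
    return q

def dv_update(dv, choice, reward, alpha=0.2):
    """Update a single decision variable in [-1, 1]; binary reward assumed.
    Rewards on the chosen arm push the variable toward that arm."""
    signed_outcome = (1 if choice == 1 else -1) * (1 if reward else -1)
    return dv + alpha * (signed_outcome - dv)

def softmax_choice(values, beta=5.0, rng=np.random.default_rng()):
    p = np.exp(beta * np.asarray(values, dtype=float))
    p /= p.sum()
    return rng.choice(len(p), p=p)
```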

Expectancy violations about physical properties of animated objects in dogs

Dogs are not particularly known for complex physical cognitive abilities. However, a number of recent violation-of-expectation studies have challenged this view. In the current eye-tracking study, we further investigated dogs’ (N=15) reaction to physically implausible events, particularly in the context of support, occlusion, and launching events. In Experiment 1, the dogs watched a rolling ball moving over a gap in a surface either falling down or hovering over the gap. In Experiment 2, the dogs saw a ball rolling behind a narrow pole either disappearing behind it or re-appearing on the other side. In Experiment 3, the dogs observed launching events either with or without contact between the balls. The dogs’ pupil dilation response and looking times suggest that they form implicit expectations about occlusion and launching events but not about gravity-related events at least in the context of animated objects on a screen.

How to Revise Beliefs from Conditionals: A New Proposal

A large body of work has demonstrated the utility of the Bayesian framework for capturing inference in both specialist and everyday contexts. However, the central tool of the framework, conditionalization via Bayes’ rule, does not apply directly to a common type of learning: the acquisition of conditional information. How should an agent change her beliefs on learning that “If A, then C”? This issue, which is central to both reasoning and argumentation, has recently prompted considerable research interest. In this paper, we critique a prominent proposal and provide a new, alternative, answer.

The Sound of Pedagogical Questions

Questions are prevalent in everyday speech and they are often used to teach. When learners receive explicit cues that the intent of a question is pedagogical, they draw inferences that lead to superior learning. Although the ability to infer pedagogical intent is critical, very little is known about the mechanisms that support the inference that any particular statement is pedagogical or not. We tested the hypothesis that the prosody of speech marks the intent of pedagogical and information-seeking questions. In Studies 1 and 2, 256 naïve participants rated 100 pedagogical and information-seeking questions, spoken in child- or adult-directed speech. We found that naïve listeners can accurately infer pedagogical intent on the basis of prosody alone. In Study 3, we begin charting the acoustic features that differentiate pedagogical from information-seeking questions. These findings provide a window into the mechanisms that allow learners to infer pedagogical intent in otherwise ambiguous situations.

Numbers vs. Variables: The Effect of Symbols on Students’ Math Problem-Solving

Numbers and variables often follow the same principles of arithmetic operations, yet numbers can be computed to a value whereas variables cannot. We examined the effect of symbols—numbers versus variables—on middle school students’ problem-solving behaviors in a dynamic algebra notation system by presenting problems in numbers (e.g., 3+5−3) or variables (e.g., x+y−x). We found that compared to problems presented in numbers, students attempted the problems more times and took more total steps when the problems were presented in variables. We did not find differences in pre-solving pause time or strategy efficiency on the two types of problems, indicating that students might notice problem structure in both types of problems. The results have implications for research on cognitive processes of symbols as well as the design of educational technologies.

Embodied morality: Repetitive motor actions change moral decision-making

Can the body affect our morals? In the present study, we tested whether motor system activation can change our moral decisions. Participants (N = 70) were presented with the choice to kill one person in order to save several lives. The action was described by means of hand (e.g., “push”) or foot (e.g., “kick”) verbs. As a secondary task, they moved rhythmically either their hands or their feet. Participants refused to act more often in both hand and foot dilemmas when they had been moving the same effector. We propose that the repetitive rhythm activates motor areas, leading to a more detailed simulation of the harmful act and thus making the decision to carry it out more difficult. These findings reveal that mundane activities of the body can affect our most elevated decisions and suggest a causal implication of the motor system in moral cognition.

Using Machine Learning to Predict Bilingual Language Proficiency from Reaction Time Priming Data

Studies of bilingual language processing typically assign participants to groups based on their language proficiency and average across participants in order to compare the two groups. This approach loses much of the nuance and individual differences that could be important for furthering theories of bilingual language comprehension. In this study, we present a novel use of machine learning (ML) to develop a predictive model of language proficiency based on behavioral data collected in a priming task. The model achieved 75% accuracy in predicting which participants were proficient in both Spanish and English. Our results indicate that ML can be a useful tool for characterizing and studying individual differences.
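
A hypothetical pipeline of this general kind might look like the following scikit-learn sketch; the features, labels, classifier choice, and data here are synthetic placeholders rather than the model used in the study.

```python
# Illustrative sketch: predicting a binary proficiency label from
# per-participant reaction-time priming features. All data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_participants = 80
X = rng.normal(size=(n_participants, 3))      # e.g., mean RT, priming effect, error rate
y = rng.integers(0, 2, size=n_participants)   # 1 = proficient in both languages (placeholder)

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)     # held-out classification accuracy
print(scores.mean())
```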

The Dynamics of Exemplar and Prototype Representations Depend on Environmental Statistics

How people represent categories—and how those representations change over time—is a basic question about human cognition. Previous research has suggested that people categorize objects by comparing them to category prototypes in early stages of learning but use strategies that consider the individual exemplars within each category in later stages. However, many category learning experiments do not accurately reflect the environmental statistics of the real world, where the probability that we encounter an object changes over time. Our goal in this study was to introduce memory constraints by presenting each stimulus at intervals corresponding to the power-law function of memory decay. Since the exemplar model relies on the individual’s ability to store and retrieve previously seen exemplars, we hypothesized that adding memory constraints that better reflect real environments would favor the exemplar model more strongly early on than later. Confirming our hypothesis, the results illustrate that under realistic environmental statistics with memory constraints, the exemplar model’s advantage over the prototype model decreases over time.
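
One way to read "intervals corresponding to the power-law function of memory decay" is as a presentation schedule in which the probability of re-encountering a stimulus falls off as a power function of the lag since it was last seen. The sketch below implements that reading under assumed parameters; it should not be taken as the authors' exact procedure.

```python
# Assumed reading: recency-weighted presentation schedule with power-law decay.
import numpy as np

rng = np.random.default_rng(1)
n_stimuli, decay = 8, 0.8
last_seen = np.zeros(n_stimuli)          # trial index of each item's last presentation

schedule = []
for t in range(1, 201):
    lag = t - last_seen                  # trials since each item was last shown
    weights = lag ** (-decay)            # power law: recently seen items dominate
    weights /= weights.sum()
    item = rng.choice(n_stimuli, p=weights)
    schedule.append(item)
    last_seen[item] = t
```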

Do Scalar Implicatures Prime? The Case of Exclusive ‘or’

Understanding language requires comprehenders to understand not only what speakers say, but also what speakers might imply. Scalar items (e.g., some, numerals) often invite comprehenders to compute scalar implicatures, pragmatically strengthening the semantic meaning of scalar items by negating their stronger alternatives. Recent priming evidence suggests that scalar implicatures may share underlying mechanisms, priming both within and between implicature types. We report two experiments designed to extend these findings to or, which has an inclusive meaning that can be strengthened to an exclusive meaning, potentially via scalar implicature. Experiment 1 investigated or alongside some and numerals, holding the number of visual symbols constant. Experiment 2 reduced the visual complexity of Experiment 1. Both experiments found robust within-category priming, but failed to fully replicate or extend between-category priming effects. We discuss implications of these results with respect to visual manipulations and the potential fragility of priming across different categories of scalar implicature.

A Computational Model of Comprehension in Manga Style Visual Narratives

Understanding a sequence of images as a visual narrative is challenging because it requires not only the understanding of what is shown at a particular moment but also what has changed, been omitted or is out of frame. The human cognitive system makes inferences about the state of the world based on transitions between sequential frames. In this paper, we present a principled analysis of the stylistic differences between two dominant styles of multi-modal narratives, western comics and manga. These two styles differ in terms of screening, ballooning, layout, language, and reading order. We first provide a systematic account of these differences based on an annotated dataset consisting of both comics and manga. We then annotate these datasets with a new feature set and evaluate the contributions of these features through development of a computational model of multi-modal comprehension. The model evaluation is presented through the cloze test that measures the accuracy of the model in predicting unseen next frames given the prior frames in a sequence. Our results provide initial benchmarks and insight into the fundamental challenges that the multi-modal narrative understanding task presents for computational models both for language and vision.

Cognitive Properties of Norm Representations

Norms are central to social life. They help people select actions that benefit the community and facilitate behavior prediction and coordination. However, little is known about the cognitive properties of norms. Here we focus on norm activation, context specificity, and how those properties differ for the two major types of norms: prescriptions and prohibitions. In two studies, participants are exposed to a variety of contexts by way of scene images and either (a) freely generate norms that apply to the context or (b) decide whether each of a series of candidate norms applies to a given context. Across both studies, people showed high levels of context specificity and fast norm activation, and these properties were substantially stronger for prescriptions than for prohibitions.

Designing probabilistic category learning experiments: The probabilistic prototype distortion task

Many category learning experiments use supervised learning (i.e., trial-by-trial feedback). Most of those procedures use deterministic feedback, teaching participants to classify exemplars into consistent categories (i.e., stimulus i is always classified in category k). Though some researchers suggest that natural learning conditions are more likely to be inconsistent, the literature using probabilistic feedback in category learning experiments is sparse. Our analysis of the literature suggests that part of the reason for this sparsity is a relative lack of flexibility in current paradigms and procedures for designing probabilistic feedback experiments. The work we report here offers a novel paradigm (the Probabilistic Prototype Distortion task) which allows researchers greater flexibility when creating experiments with different p(category|feature) probabilities, and also allows parametrically manipulating the amount of randomness in an experimental task. In the current work, we offer a detailed procedure, implementation, experimental results, and discussion of this novel procedure. Our results suggest that experiments designed with these procedures allow subjects to achieve the desired classification performance.

Individual Variability in Strategies and Learning Outcomes in Auditory Category Learning

Learning the sounds of a new language depends on the ability to learn novel auditory categories. Multidimensional categories, whether speech or nonspeech, can be learned through feedback and different category structures are proposed to recruit separate cognitive and neural mechanisms. There is substantial individual variability in learning; however, it is rare to compare learning of different categories in the same individuals. Understanding the sources of variability has theoretical implications for category learning. In this study, we trained the same participants on three types of multidimensional auditory categories. Participants learned nonspeech rule-based, nonspeech information-integration, and Mandarin speech categories. Learning of all category types was related across individuals and differences in working memory similarly supported learning across tasks. There was substantial variability in learning outcomes and strategies used to learn the categories. There are multiple paths to successful learning and appreciation of individual variability is essential to understanding the mechanisms of learning.

Can Closed-ended Practice Tests Promote Understanding from Text?

Many studies have demonstrated that testing students on to-be-learned materials can be an effective learning activity. However, past studies have also shown that some practice test formats are more effective than others. Open-ended recall or short answer practice tests may be effective because the questions prompt deeper processing as students must generate an answer. With closed-ended testing formats such as multiple-choice or true-false tests, there are concerns that they may prompt only superficial processing, and that any benefits will not extend to non-practiced information or over time. They also may not be effective for improving comprehension from text as measured by how-and-why questions. The present study explored the utility of practice tests with closed-ended questions to improve learning from text. Results showed closed-ended practice testing can lead to benefits even when the learning outcome was comprehension of text.

Infants’ interpretation of information-seeking actions

Although infants can frequently observe others gathering information, it is an open question whether and how they make sense of such activities since the mental causes and intended effects of these are hidden and underdetermined by the available evidence. We tested the hypothesis that infants possess a naive theory that leads them to grasp the purpose of information-gathering actions when they serve as sub-goals of higher-order instrumental goals. We presented 14-month-old infants with actions that were inefficient with respect to the agent’s instrumental goal but could or could not be justified as information-seeking behavior via this theory. We expected longer looks in the condition where the detour could not be justified and the results were in line with our predictions. While this evidence is compatible with our hypothesis, further studies are in progress to rule out alternative interpretations of our findings.

Gesture Dynamics and Therapeutic Success in Patient-Therapist Dyads

We investigated gesture dynamics by examining wrist-worn accelerometer data from 28 patient-therapist dyads involved in multiple sessions of mentalization-based therapy. We sought to determine whether there were long-term correlations in the signals and to evaluate the degree of complexity matching between patient and therapist. Moreover, we looked into the relationship between complexity matching and the level of therapeutic success (operationalized by change in mentalization and the severity of symptoms). The results indicated that patient and therapist gesture dynamics exhibited long-term correlations significantly different from those produced by white noise. Further, six patient-therapist dyads matched each other in complexity across sessions, but no systematic relationship between the patients' and therapists' dynamics was observed, and there were no relationships between these dynamics and measures of therapeutic success.

Eye Movement Traces of Linguistic Knowledge

This study examines how linguistic knowledge is manifested in eye movements in reading, focusing on the effects of two key word properties, frequency and surprisal, on three progressively longer standard fixation measures: First Fixation, Gaze Duration, and Total Fixation. Comparing English L1 speakers to a large and linguistically diverse group of English L2 speakers, we obtain the following results. 1) Word property effects on reading times are larger in L2 than in L1. 2) Differences between L1 and L2 speakers are substantially larger in the response to frequency than to surprisal. 3) The functional form of the relation between fixation times and frequency and surprisal in L2 is superlinear. 4) In L2 speakers, proficiency modulates frequency effects as a U-shaped function. We discuss the implications of these results for theories of language processing and acquisition, as well as for the general interpretation of frequency and surprisal effects in reading.

Is children's norm learning rational? A meta-analysis

A good deal of recent research has examined children’s norm learning across a wide range of novel contexts. The typical interpretation of these findings is that children’s norm learning is driven by group-based biases. In this paper, we present an alternative interpretation and corresponding meta-analyses that cast the current body of evidence in a rather different light. First, we argue the extant literature uses an ill-suited standard for assessing bias. Rather than comparing children’s judgments to what is expected under random chance (a ‘random standard’), bias is better assessed by comparing children’s judgments to what is most probable, given their total evidence (an ‘evidential standard’). Next, we report a meta-analysis of the known findings to date (k = 40 effect sizes; N = 1,369 in total; ages 4- to 13-years-old) to compare children’s norm learning against an appropriate evidential standard. Meta-analytic estimates reveal that children’s norm learning is not restrictively biased toward narrow-scope inferences on account of group-based factors. Rather, the findings to date are consistent with children’s norm learning being rational (i.e., statistically appropriate, given their evidence) or even inclusively biased toward making the wide-scope inference that a novel norm applies to everyone in a population. We conclude with brief discussion of implications for current understanding and future research on norm acquisition.

Modelling Characters’ Mental Depth in Stories Told by Children Aged 4-10

From age 3-4, children are generally capable of telling stories about a topic free of choice. Over the years their stories become more sophisticated in content and structure, reflecting various aspects of cognitive development. Here we focus on children’s ability to construe characters with increasing levels of mental depth, arguably reflecting socio-cognitive capacities including Theory of Mind. Within our sample of 51 stories told by children aged 4-10, characters range from flat “actors” performing simple actions, to “agents” having basic perceptive, emotional, and intentional capacities, to fully-blown “persons” with complex inner lives. We argue for the underexplored potential of computationally extracted story-internal factors (e.g. lexical/syntactic complexity) in explaining variance in character depth, as opposed to story-external factors (e.g. age, socioeconomic status) on which existing work has focused. We show that especially lexical richness explains variance in character depth, and this effect is larger than and not moderated by age.

English Negative Constructions and Communicative Functions in Child Language

How does abstract linguistic negation develop in early child language? Previous research has suggested that abstract negation develops in stages and from more concrete communicative functions such as rejection, prohibition, or non-existence. The evidence for the emergence of these functions in stages is mixed, however, leaving the possibility that negation is an abstract concept from the beginning that can serve multiple specific functions depending on early communicative environment. Leveraging automatic annotations of large-scale child speech corpora in English, we examine the production trajectories of seven negative constructions that tend to convey communicative functions previously discussed in the literature. The results demonstrate the emergence and gradual increase of these constructions in child speech within the age range of 18-36 months. Production mostly remains stable, regular, and close to parents’ levels after this age range. These findings are consistent with two hypotheses: first, that negation starts as an abstract concept that can serve multiple functions from the beginning; and second, that negation develops in stages from specific communicative functions but this development is early and quick, leaving our corpus methods incapable of detecting them from the available corpus data.

What Transformers Might Know About the Physical World: T5 and the Origins of Knowledge

Features of the physical world may be acquired from the statistical properties of language. Here we investigate how the Transformer language model T5 is able to gain knowledge of the visual world without being able to see or feel. In a series of four studies, we show that T5 possesses an implicit understanding of the relative sizes of animals, their weights, and their shapes, but not their colors, that aligns well with that of humans. As the size of the models was increased from 60M to 11B parameters, we found that the fit to human judgments improved dramatically, suggesting that the difference between humans and these learning systems might ultimately disappear as the parameter sizes grow even larger. The results imply that knowledge of the perceptual world—and much of semantic memory—might be acquired in disembodied learning systems using real-time inferential processes.

Revising Core Beliefs in Young Children

A set of fundamental beliefs governs our reasoning about objects and agents since infancy. Studies have shown that infants and children show enhanced exploration and learning when they observe apparent violations of these beliefs. However, little is known about whether these beliefs can be revised given counterevidence. In the present experiments, we demonstrate that 4- to 6-year-old children can revise their most fundamental beliefs in the physical domain (Experiment 1) and the psychological domain (Experiment 2) when they observe multiple pieces of belief-violating evidence.

Learning Approximate and Exact Numeral Systems via Reinforcement Learning

Recent work (Xu et al., 2020) has suggested that numeral systems in different languages are shaped by a functional need for efficient communication in an information-theoretic sense. Here we take a learning-theoretic approach and show how efficient communication emerges via reinforcement learning. In our framework, two artificial agents play a Lewis signaling game where the goal is to convey a numeral concept. The agents gradually learn to communicate using reinforcement learning, and the resulting numeral systems are shown to be efficient in the information-theoretic framework of Regier et al. (2015); Gibson et al. (2017). They are also shown to be similar to human numeral systems of the same type. Our results thus provide a mechanistic explanation via reinforcement learning of the recent results in Xu et al. (2020) and can potentially be generalized to other semantic domains.
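
The core mechanism can be sketched as a Lewis signaling game trained with simple (Roth-Erev-style) reinforcement, as below; the numbers of meanings, signals, and trials are arbitrary choices, and the authors' actual learning algorithm may differ.

```python
# Minimal Lewis signaling game learned by reinforcement of successful rounds.
import numpy as np

rng = np.random.default_rng(0)
N_MEANINGS, N_SIGNALS = 10, 10               # numeral concepts and available signals
sender = np.ones((N_MEANINGS, N_SIGNALS))    # propensity of each signal given a meaning
receiver = np.ones((N_SIGNALS, N_MEANINGS))  # propensity of each meaning given a signal

def sample(propensities):
    p = propensities / propensities.sum()
    return rng.choice(len(p), p=p)

for t in range(50_000):
    meaning = rng.integers(N_MEANINGS)       # nature draws a numeral concept
    signal = sample(sender[meaning])         # sender encodes it
    guess = sample(receiver[signal])         # receiver decodes it
    if guess == meaning:                     # reinforce on communicative success
        sender[meaning, signal] += 1.0
        receiver[signal, meaning] += 1.0
```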

Memory Constraints on Cross Situational Word Learning

A simple memory component is added to local (“Pursuit”; Stevens, Gleitman, Trueswell, and Yang (2017)) and global (e.g., Yu and Smith (2007); Fazly, Alishahi, and Stevenson (2010)) models of cross-situational word learning. Only a finite (and small) number of words can be concurrently learned; successfully learned words are removed from the memory buffer and stored in the lexicon. The memory buffer improves the empirical coverage for both local and global learning models. However, the complex task of homophone learning (Yurovsky & Yu, 2008) provides a more decisive advantage for the local model (dubbed Memory Bound Pursuit; MBP). Implications and limitations of these results are discussed.
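
The following is a schematic sketch of a memory-bounded cross-situational learner, loosely in the spirit of the model described here; the buffer size, reward increment, and threshold are illustrative, and the sampling details of Pursuit are simplified.

```python
# Hedged sketch of a memory-limited, Pursuit-like cross-situational learner.
import random

BUFFER_SIZE = 5      # how many words can be "in progress" at once (assumed)
GAMMA = 0.1          # reward increment for a confirmed hypothesis (assumed)
THRESHOLD = 0.8      # score at which a word graduates to the lexicon (assumed)

lexicon = {}         # word -> learned referent
buffer = {}          # word -> (hypothesized referent, score)

def observe(words, referents):
    """One learning instance: words heard together with candidate referents."""
    for w in words:
        if w in lexicon:
            continue
        if w in buffer:
            ref, score = buffer[w]
            if ref in referents:                     # current hypothesis confirmed
                score += GAMMA * (1 - score)
            else:                                    # disconfirmed: resample a referent
                ref, score = random.choice(referents), GAMMA
            if score >= THRESHOLD:                   # move to lexicon, free a slot
                lexicon[w] = ref
                del buffer[w]
            else:
                buffer[w] = (ref, score)
        elif len(buffer) < BUFFER_SIZE:              # attend to only a few new words
            buffer[w] = (random.choice(referents), GAMMA)
```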

Desires can conflict with intentions; plans cannot

While many formal frameworks distinguish between desires and intentions, and considerable empirical work shows that people interpret them differently, no studies examine how people reason about them. We extend Harner and Khemlani’s (2020) model-based theory of relations describing desire. The theory holds that people represent desires, as in, e.g., Pav wants to visit Angkor Wat, by pairing a factual representation of the negation of the desire (e.g., that Pav is not currently visiting Angkor Wat) with a future possibility where the desire is realized. We propose that intentions, which people express using verbs like plan, are represented as future actions that agents seek to perform. A particular individual’s plans must be consistent with one another, whereas desires can conflict with these plans. We show how the model theory distinguishes desires and intentions, namely that models can be coherent even when a desire and a plan are inconsistent with each other. The distinctions make predictions about how reasoners should assess the consistency of statements concerning desires and intentions, and we report on two experiments that validate them.

Comparison Promotes the Spontaneous Transfer of Relational Categories

Snoddy and Kurtz (2020) demonstrated spontaneous transfer of relational categories to new learning. Recognition memory data suggested that transfer was driven by schematization during learning. In the present study, we explored whether schema abstraction underlies transfer and recognition effects. Participants were assigned to condition based on the type of initial learning: classification with comparison, supervised observation with comparison, single-item supervised observation, or baseline (no learning). After initial learning, participants underwent a study phase and recognition test on novel stimuli followed by a target category learning task involving the same underlying category structures expressed in a new domain. During the recognition test, all conditions led to increased false alarms relative to baseline. Only the comparison conditions exhibited analogical transfer on the target category learning task. Results suggest that comparison facilitates the transfer of relational categories (due to schema abstraction), but recognition memory effects may be driven by more general categorization mechanisms.

Three-dimensional pose discrimination in natural images of humans

Perceiving 3D structure in natural images is an immense computational challenge for the visual system. While many previous studies focused on rigid 3D objects, we applied a novel method on a common set of non-rigid objects—static images of the human body in the natural world. We investigated to what extent human ability to interpret 3D poses in natural images depends on pose typicality and viewpoint informativeness. We tested subjects on matching natural pose images with synthetic body images of the same poses given viewpoint changes. We found that performance for typical poses was measurably better than atypical poses; however, we found no significant difference between informative and less informative viewpoints. Results suggested that human ability to interpret 3D poses depends on pose typicality but not viewpoint informativeness. Further comparisons of 2D and 3D pose matching models suggested that humans probably use prior knowledge of 3D pose structures.

Open-Minded, Not Naïve: Three-Month-Old Infants Encode Objects as the Goals of Other People’s Reaches

When people act on objects, their goals can depend on the objects’ intrinsic properties and conventional uses (e.g., using forks, not knives, to eat spaghetti), locations (e.g., clearing the table, regardless of what is on it), or both (eating with the fork next to your plate, not your dining partner’s). For adults, objects’ intrinsic properties matter more than their locations in most action contexts. Whereas 5-month-old infants privilege objects’ intrinsic properties in attributing goals to people reaching for objects, 3-month-old infants do not. Do younger infants fail to view reaching as goal-directed, or are they uncertain which properties of objects are relevant in different contexts? Here we show that 3-month-old infants attribute goals to others’ reaching actions when given information that their actions depend on what, not where, an object is. Our findings suggest that 3-month-old infants can learn about others’ object goals, before they reach for objects themselves.

Challenges for using Representational Similarity Analysis to Infer Cognitive Processes: A Demonstration from Interactive Activation Models of Word Reading

Representational Similarity Analysis (RSA) is a powerful tool for linking brain activity patterns to cognitive processes via similarity, allowing researchers to identify the neural substrates of different cognitive levels of representation. However, the ability to map between levels of representation and brain activity using similarity depends on underlying assumptions about the dynamics of cognitive processing. To demonstrate this point, we present three toy models that make different assumptions about the interactivity within the reading system: (1) discrete and feedforward, (2) cascading and feedforward, and (3) fully interactive. With the temporal resolution of fMRI, only the discrete, feedforward model provides a straightforward mapping between activation similarity and level of representation. These simulations indicate the need for a cautious interpretation of RSA results, especially with processes that are highly interactive and with neuroimaging methods that have low temporal resolution. The study further suggests a role for fully fleshed-out computational models in RSA analyses.

Mental representations of objects reflect the ways in which we interact with them

In order to interact with objects in our environment, humans rely on an understanding of the actions that can be performed on them, as well as their properties. When considering concrete motor actions, this knowledge has been called the object affordance. Can this notion be generalized to any type of interaction that one can have with an object? In this paper we introduce a method to represent objects in a space where each dimension corresponds to a broad mode of interaction, based on verb selectional preferences in text corpora. This object embedding makes it possible to predict human judgments of verb applicability to objects better than a variety of alternative approaches. Furthermore, we show that the dimensions in this space can be used to predict categorical and functional dimensions in a state-of-the-art mental representation of objects, derived solely from human judgments of object similarity. These results suggest that interaction knowledge accounts for a large part of mental representations of objects.
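
The general recipe of representing objects by their verb co-occurrence profiles can be illustrated with a toy PPMI-weighted count matrix; the verbs, objects, and counts below are invented for illustration and are not the corpus-derived embedding used in the paper.

```python
# Toy illustration: object vectors from (verb, object) co-occurrence counts.
import numpy as np

verbs = ["eat", "open", "read", "drive"]
objects = ["apple", "door", "book", "car"]
counts = np.array([[50,  0,  1,  0],     # eat
                   [ 1, 40,  5, 10],     # open
                   [ 0,  0, 60,  0],     # read
                   [ 0,  0,  0, 45]])    # drive

# PPMI weighting so each object's vector reflects its distinctive interaction modes.
p = counts / counts.sum()
expected = p.sum(1, keepdims=True) @ p.sum(0, keepdims=True)   # independence baseline
pmi = np.log((p + 1e-12) / (expected + 1e-12))
object_vectors = np.clip(pmi, 0, None).T    # rows: objects, dimensions: verbs
print(dict(zip(objects, object_vectors.round(2))))
```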

Measuring More to Learn More From the Block Design Test: A Literature Review

The block design test (BDT), in which a person has to recreate a visual design using colored blocks, is notable among cognitive assessments because it makes so much of a person's problem-solving strategy “visible” through their ongoing manual actions. While, for decades, numerous pockets of research on the BDT have identified certain behavioral variables as being important for certain cognitive or neurological hypotheses, there is no unifying framework for bringing together this spread of variables and hypotheses. In this paper, we identify 25 independent and dependent variables that have been examined as part of published BDT studies across many areas of cognitive science and present a sample of the research on each one. We also suggest variables of interest for future BDT research, especially as made possible with the advent of advanced recording technologies like wearable eye trackers.

Using prototype-defined checkerboards to investigate the mechanisms contributing to the Composite Face Effect

We report the results from two experiments (n=192) examining the congruency effect (better performance for congruent vs. incongruent stimuli) for prototype-defined checkerboard composites. We used the same complete matching task design as that used to study a robust index of face recognition, i.e., the composite face effect. The results from both experiments reveal an effect of order of presentation for congruent and incongruent trials. Critically, participants presented with incongruent trials first and then congruent trials revealed a significant congruency effect. In contrast, participants presented with congruent trials first and then incongruent trials showed no congruency effect. These results contribute to the composite effect literature by reporting the first evidence of a congruency effect for artificial non-face stimuli which do not have a predefined orientation. They also provide evidence in support of test order as a determining factor potentially modulating the composite effect.

Deciding to be Authentic: Intuition is Favored Over Deliberation for Self-Reflective Decisions

People think they ought to make some decisions on the basis of deliberative analysis, and others on the basis of intuitive, gut feelings. What accounts for this variation in people’s preferences for intuition versus deliberation? We propose that intuition might be prescribed for some decisions because people’s folk theory of decision-making accords a special role to authenticity, where authenticity is uniquely associated with intuitive choice. Two pre-registered experiments find evidence in favor of this claim. In Experiment 1 (N=631), we find that decisions made on the basis of intuition (vs. deliberation) are more likely to be judged authentic, especially in domains where authenticity is plausibly valued. In Experiment 2 (N=177), we find that people are more likely to prescribe intuition as a basis for choice when the value of authenticity is heightened experimentally. These effects hold beyond previously recognized influences, such as computational costs, presumed efficacy, objectivity, complexity, and expertise.

The Role of Mindreading in a Pluralist Framework of Social Cognition

How do we manage to understand the minds of others and usefully interact with them? In the last decade, the debate on these issues has developed from unitary to pluralist approaches. According to the latter, we make use of multiple socio-cognitive strategies when predicting, interpreting, and reacting to the behavior of others. This means a departure from the view of mindreading as the main strategy underlying social cognition. In this paper, we address the question of the controversial status of mindreading within such a pluralist framework. Contrary to many other accounts, we ascribe mindreading an equal status in a pluralist framework. Mindreading is required for a variety of central situations in life and importantly underlies the way in which we understand other people. Mindreading is also no less reliable than alternative strategies; reliability is not so much a matter of different competing socio-cognitive strategies, but rather of their complementary use.

Preschoolers’ Spontaneous Gesture Production Predicts Analogical Transfer

We explore the link between children’s gesture production and analogical reasoning. Specifically, we ask whether children who spontaneously gesture when completing a retelling task are more likely to engage in analogical transfer, compared to those who do not gesture. To test this, 85 5-7-year-olds listened to three superficially distinct stories that shared a common abstract problem and solution. After each of the first two exemplar stories, participants were asked to retell the story events to a naïve listener and their speech and spontaneous gesture(s) were coded. For the third story, participants were asked to generate the analogous solution themselves. Results indicate a significant relationship between children’s analogical transfer and gesture production. This preliminary study suggests that children’s spontaneous gestures may provide a window into their analogical processing. We discuss future directions aimed at further examining the mechanism underlying this relationship.

Lexically-Mediated Compensation for Coarticulation in Older Adults

The claim that contextual knowledge exerts a top-down influence on sensory processing is supported by evidence for lexically-mediated compensation for coarticulation (LCfC) in spoken language processing. In this phenomenon, a lexically restored context phoneme (e.g., the final phoneme in Christma# or fooli#) influences perception of a subsequent target phoneme (e.g., a phoneme ambiguous between /t/ and /k/). A recent report shows that carefully vetted materials produce robust, replicable LCfC effects in younger adults (18-34 years old). Here, we asked whether we would observe LCfC in a sample of older adults (aged 60+). This is of interest because older adults must often contend with age-related declines in sensory processing, with previous research suggesting that older adults may compensate for age-related changes by relying more strongly on contextual knowledge. We observed robust LCfC effects in younger and older samples, with no significant difference in the effect size between age groups.

How effective is perceptual training? Evaluating two perceptual training methods on a difficult visual categorisation task

Perceptual training leads to improvements in a wide range of simple visual tasks. However, it is still unclear how effective it can be for more difficult visual tasks in real-world domains such as radiology. Is it possible to train people to the level of experts? If so, what method is best, and how much training is necessary? Over four training sessions, we trained medically naive participants to identify the degree of fatty liver tissue present in ultrasound images. We found that both COMPARISON and SINGLE-CASE perceptual training techniques resulted in significant post-training improvement, but that the SINGLE-CASE training was more effective. Whilst people showed rapid learning with less than one hour of training, they did not improve to the level of experts, and additional training sessions did not provide significant benefits beyond the initial session. This suggests that perceptual training could usefully augment, but not replace, the traditional rule-based training that medical students currently receive.

Speak before you listen: Pragmatic reasoning in multi-trial language games

Rational Speech Act theory (Frank & Goodman, 2012) has been successfully applied in numerous communicative settings, including studies using one-shot web-based language games. Several follow-up studies of the latter, however, suggest that listeners may not behave as pragmatically as originally suggested in those tasks. We investigate whether, in such reference games, listeners’ pragmatic reasoning about an informative speaker is improved by greater exposure to the task and/or prior experience with being a speaker in this task. While we find limited evidence that increased exposure results in more pragmatic responses, listeners do show increased pragmatic reasoning after playing the role of the speaker. Moreover, we find that only in the Speaker-first condition does participants’ tendency to be an informative speaker predict their degree of pragmatic behavior as a listener. These findings demonstrate that, in these settings, experience as a speaker enhances the ability of listeners to reason pragmatically, as modeled by RSA.
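
For reference, a minimal Rational Speech Act model for a three-object, two-utterance reference game can be written as follows; the lexicon and rationality parameter are illustrative choices following the general Frank & Goodman (2012) recipe, not the specific models fit in this study.

```python
# Minimal RSA sketch: literal listener -> pragmatic speaker -> pragmatic listener.
import numpy as np

# semantics[u, o] = 1 if utterance u is literally true of object o (toy lexicon)
semantics = np.array([[1, 1, 0],    # "glasses" fits objects 0 and 1
                      [0, 1, 1]])   # "hat" fits objects 1 and 2
alpha = 1.0                         # speaker rationality (assumed)

def normalize(m, axis):
    return m / m.sum(axis=axis, keepdims=True)

L0 = normalize(semantics, axis=1)              # literal listener P(object | utterance)
S1 = normalize((L0 ** alpha).T, axis=1)        # pragmatic speaker P(utterance | object)
L1 = normalize(S1.T, axis=1)                   # pragmatic listener P(object | utterance)
print(L1)   # "glasses" now favors the object that only "glasses" fits (object 0)
```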

Modeling speech act development in early childhood: the role of frequency and linguistic cues

A crucial step in children's language development is the mastery of how to use language in context. This involves the ability to recognize and use major categories of speech acts (e.g., learning that a "question" is different from a "request"). The current work provides a quantitative account of speech acts' emergence in the wild. Using a longitudinal corpus of child-caregiver conversations annotated for speech acts (Snow et al., 1996), we introduced two complementary measures of learning based on both children's production and comprehension. We also tested two predictors of learning based on input frequency and the quality of the speech acts' linguistic cues. We found that children's developmental trajectory differed markedly between production and comprehension. In addition, development in these two dimensions was not explained by the same predictors (e.g., frequency in child-directed speech was predictive of production, but not of comprehension). The broader impact of this work is to provide a computational framework for the study of communicative development in which both measures and predictors of children's pragmatic development can be tested and compared.

The Optimal Amount of Visuals Promotes Children’s Comprehension and Attention: An Eye Tracking Study

This preregistered study examined whether extraneous illustration details promote attentional competition and hinder reading comprehension in beginning readers. Reading comprehension was highest in the Streamlined Condition (text + relevant illustrations) compared to a Standard Condition (text + relevant illustrations + extraneous illustrations) and Text Only Condition (no illustrations). Gaze shifts away from the text were highest in the Standard Condition, indicating increased distractibility while reading text with extraneous illustration details. Gaze shifts away from the text were associated with performance on an independent measure of attention, validating eye gaze patterns as an assessment of attentional allocation while reading. Lower comprehension in the Standard Condition was associated with higher gaze shifts away from text and lower scores on the independent measure of selective attention. This study suggests that illustrations can support reading comprehension, but only when they are optimally designed. Importantly, the removal of extraneous details did not decrease book enjoyment.

Labels, Even Arbitrary Ones, Facilitate Categorization

Labels may play a role in the formation and acquisition of object categories. We investigated this using a free-categorization task, manipulating the presence or absence of labels and whether labels were random or reinforced one of two alternative categorization cues (taxonomic or thematic relationships). When labels were absent, participants used thematic and taxonomic cues equally to categorize stimuli. When present, labels were used as the primary cue for category formation, with random labels leading participants to attend less to taxonomic and thematic relations between stimuli. When labels redundantly reinforced either thematic or taxonomic cues, the use of the cue in question was boosted along with the use of labels as a cue for categorization. Most interestingly, in spite of previously observed associations between labels and taxonomic grouping, labels did not preferentially boost the use of either taxonomic or thematic cues in comparison with the other.

Improvised Numerals Rely on 1-to-1 Correspondence

Symbolic representations of number are instrumental to mathematical reasoning and many aspects of social organization. What explains their emergence in human cultures? To understand how functional and cognitive constraints impact people’s communication about number, we used a drawing-based reference game to investigate how human dyads coordinated to form novel number systems. We found a systematic bias towards symbols exploiting 1-to-1 correspondence to objects in visual arrays, and that this strategy was contingent on the communicative relevance of number. Moreover, the meaning of these symbols was transparent to third party observers not present during their production. Finally, model-based analyses of these symbols' visual properties suggest that the ability to decode exact quantity from them may rely on perceptual processing mechanisms beyond those sufficient for object recognition. These findings contribute to our understanding of how both communicative need and capacity for visual abstraction constrain the emergence of iconic representations of exact number.

Learning communicative acts in children’s conversations: a Hidden Topic Markov Model analysis of the CHILDES corpus

Over their first years of life, children learn not just the words of their native languages, but how to use them to communicate. Because manual annotation of communicative intent does not scale to large corpora, our understanding of communicative act development is limited to case studies of a few children at a few time points. We present an approach to automatic identification of communicative acts using a Hidden Topic Markov Model, applying it to the CHILDES database. We first describe qualitative changes in parent-child communication over development, and then use our method to demonstrate two large-scale features of communicative development: (1) children develop a parent-like repertoire of our model's communicative acts rapidly, their learning rate peaking around 14 months of age, and (2) this period of steep repertoire change coincides with the highest predictability between parents' acts and children's, suggesting that structured interactions play a role in learning to communicate.

Intrinsic Rewards in Human Curiosity-Driven Exploration: An Empirical Study

Despite their apparent importance for the acquisition of full-fledged human intelligence, mechanisms of intrinsically motivated autonomous learning are poorly understood. How do humans identify useful sources of knowledge and decide which learning situations to approach in the absence of external rewards? While recognition of this important problem has grown in the psychological sciences over recent years, an intriguing proposal for a possible mechanism comes from artificial intelligence, where efficient autonomous learning is achieved by programming agents to follow the heuristic of maximizing learning progress (LP) during exploration. In this study, we set out to examine the empirical evidence for this idea. Using computational modeling, we demonstrate that humans show signs of following LP while they freely explore and practice a set of multiple learning activities of varying difficulty, including an activity that is impossible to learn. Different approaches to operationalizing the notion of LP and their plausibility in light of the empirical data are also discussed. We also show that models combining several types of intrinsic rewards fit human exploration data better than the single-component models considered so far in theoretical accounts.
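
One common way to operationalize the learning-progress heuristic is as the recent change in performance on each activity, with activity choice biased toward high absolute progress. The sketch below illustrates that idea with an arbitrary window size and temperature; it is not the authors' specific model.

```python
# Sketch of the learning-progress (LP) heuristic for choosing among activities.
import numpy as np

def learning_progress(history, window=10):
    """Absolute difference between mean performance in the newest window and
    the previous window of attempts on one activity (window size assumed)."""
    if len(history) < 2 * window:
        return 0.0
    recent = np.mean(history[-window:])
    older = np.mean(history[-2 * window:-window])
    return abs(recent - older)

def choose_activity(histories, temperature=0.1, rng=np.random.default_rng()):
    """Softmax choice over per-activity LP estimates (temperature assumed)."""
    lp = np.array([learning_progress(h) for h in histories])
    p = np.exp(lp / temperature)
    p /= p.sum()
    return rng.choice(len(histories), p=p)
```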

Relating confidence judgements to temporal biases in perceptual decision-making

Decision-making is often hierarchical and approximate in nature: decisions are not being made based on actual observations, but on intermediate variables that themselves have to be inferred. Recently, we showed that during sequential perceptual decision-making, those conditions induce characteristic temporal biases that depend on the balance of sensory and category information present in the stimulus. Here, we show that the same model makes predictions for when observers will be over-confident and when they will be under-confident with respect to a Bayesian observer. We tested these predictions by collecting new data in a dual-report decision-making task. We found that for most participants the bias in confidence judgments changed in the predicted direction for stimulus changes that led them from over-weighting early evidence to equal weighting of evidence or over-weighting of late evidence. Our results suggest that approximate hierarchical inference might provide the computational basis for biases beyond low-level perceptual decision-making, including those affecting higher level cognitive functions like confidence judgements.

Exploring Causal Overhypotheses in Active Learning

People’s active interventions play a key role in causal learning. Past studies have tended to focus on how interventions help people learn relationships where causes are independently sufficient to produce an effect. In reality, however, people can learn different rules governing how multiple causes combine to produce an effect, i.e., different functional forms. These forms are examples of causal overhypotheses—abstract beliefs about causal relationships that are acquired in one situation and transferred to another. Here we present an active "blicket" experiment to study whether and how people learn overhypotheses in an active setting. Our results showed participants can learn disjunctive and conjunctive overhypotheses through active training, as measured in a new disjunctive task. Furthermore, intervening on two objects led to better conjunctive judgments, and complementarily, conjunctive training predicted more objects in future interventions. Overall, these results expand our understanding of how active learning can facilitate causal inference.

Modelling Human Communication as a Rejection Game

We present a computational model of interactions between a Speaker and a Hearer in a signalling game. The partly cooperative/partly competitive interaction is intended to reproduce essential aspects of human communication, with the goal of approximating human language use and analysing it by means of simulation studies. This was accomplished by implementing a language that accommodates compositional signals, allowing agents to express infinitely many meanings with a finite set of signals. Personal attributes integral to human decision making, such as sympathy and trust, were implemented as adjustable parameters, providing the opportunity to create and study individuals with different 'personalities'. Over the course of several rounds, agents learn their optimal strategies using the Moran algorithm. The model was able to substantiate widely confirmed notions about human communication such as correlations between truthfulness and trust, demonstrating the possibility of correlation between aspects of linguistic cognition and social aspects of language use.

Superordinate Word Knowledge Predicts Longitudinal Vocabulary Growth

Does knowing certain words help children learn other words? We hypothesized that knowledge of more general (more superordinate) words at time 1 would lead to faster vocabulary growth as measured through vocabulary checklists administered at later timepoints. We find that this is indeed the case. Children who have similar vocabularies at time 1, but differ in their productive knowledge of more general words such as “animal,” “picture,” and “get,” go on to have different rates of word learning. Knowledge of more general words is associated with faster vocabulary growth, particularly of words semantically related to the superordinate terms they are reported to produce. This positive relationship between knowledge of more general words and word learning remains even when controlling for measures of verbal and nonverbal intelligence.

Inferring Structural Constraints in Musical Sequences via Multiple Self-Alignment

A critical aspect of the way humans recognize and understand meaning in sequential data is the ability to identify abstract structural repetitions. We present a novel approach to discovering structural repetitions within sequences that uses a multiple Smith-Waterman self-alignment. We illustrate our approach in the context of finding different forms of structural repetition in music composition. Feature-specific alignment scoring functions enable structure finding in primitive features such as rhythm, melody, and lyrics. These can be compounded to create scoring functions that find higher-level structure including verse-chorus structure. We demonstrate our approach by finding harmonic, pitch, rhythmic, and lyrical structure in symbolic music and compounding these viewpoints to identify the abstract structure of verse-chorus segmentation.
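
As a loose illustration of the alignment machinery described above (not the authors' implementation), the sketch below computes a Smith-Waterman local alignment score with a pluggable, feature-specific scoring function; the pitch encoding, score values, and gap penalty are invented for the example.

```
# Minimal Smith-Waterman local alignment sketch with a pluggable scoring
# function. The note encoding, match/mismatch scores, and gap penalty are
# illustrative assumptions, not the paper's feature-specific scoring functions.
def smith_waterman(a, b, score, gap=-2):
    """Return the best local alignment score between sequences a and b."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            H[i][j] = max(0,
                          H[i - 1][j - 1] + score(a[i - 1], b[j - 1]),
                          H[i - 1][j] + gap,
                          H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

def pitch_score(x, y):
    """Toy scoring function over pitches: reward exact repeats."""
    return 3 if x == y else -1

melody = [60, 62, 64, 60, 67, 60, 62, 64, 60, 69]  # MIDI pitches
# A full self-alignment would mask the trivial identity diagonal; here we
# simply align the two halves to expose the repeated opening motif.
print(smith_waterman(melody[:5], melody[5:], pitch_score))
```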

Scaffolded Self-explanation with Visual Representations Promotes Efficient Learning in Early Algebra

Although visual representations are generally beneficial for learners, past research also suggests that often only a subset of learners benefits from visual representations. In this work, we designed and evaluated anticipatory diagrammatic self-explanation, a novel form of instructional scaffolding in which visual representations are used to guide learners’ inference generation as they solve algebra problems in an Intelligent Tutoring System. We conducted a classroom experiment with 84 students in grades 5-8 in the US to investigate the effectiveness of anticipatory diagrammatic self-explanation on algebra performance and learning. The results show that anticipatory diagrammatic self-explanation benefits learners on problem-solving performance and the acquisition of formal problem-solving strategies. These effects mostly did not depend on students’ prior knowledge. We analyze and discuss how performance with the visual representation may have influenced the enhanced problem-solving performance.

Perception of soft materials relies on physics-based object representations: Behavioral and computational evidence

When encountering objects, we readily perceive not only low-level properties (e.g., color and orientation), but also seemingly higher-level ones--including aspects of physics (e.g., mass). Perhaps nowhere is this contrast more salient than in the perception of soft materials such as cloths: the dynamics of these objects (including how their three-dimensional forms vary) are determined by their physical properties such as stiffness, elasticity, and mass. Here we hypothesize that the perception of cloths and their physical properties must involve not only image statistics, but also abstract object representations that incorporate "intuitive physics". We provide behavioral and computational evidence for this hypothesis. We find that humans can visually match the stiffness of cloths with unfamiliar textures from the way they undergo natural transformations (e.g., flapping in the wind) across different scenarios. A computational model that casts cloth perception as mental physics simulation explains important aspects of this behavior. The full paper can be found at https://www.biorxiv.org/content/10.1101/2021.05.12.443806v1.

‘Decoding’ the locus of spatial representation from simple localization errors

Representing a location in space requires two things: an anchor point, and a code (or coordinate system) to define other locations relative to that anchor point. Recent work has shed light on the latter, providing evidence that the default ‘format’ of visuospatial representation may be polar coordinates (i.e., angle/distance relations). Yet the former remains a topic of debate. For example, a classic distinction in the realm of spatial navigation research pits representation relative to landmarks against representation relative to boundaries. Here, we exploit the polar format of spatial representations to propose a new method for assessing the locus of spatial representation. Specifically, we show that from simple localization errors we can infer the anchor point from which observers localized a target point. We highlight a few basic demonstrations of this method and discuss possible applications for further research on spatial representation.

The computer judge: Expectations about algorithmic decision-making

The use of algorithmic decision-making is steadily increasing, but people may have misgivings about machines making moral decisions. In two experiments (N = 551), we examined whether people expect machines to weigh information differently than humans in making moral decisions. We found that people expected that a computer judge would be more likely to convict than a human judge, and that both judge types would be more likely to convict based on individuating information than on base-rate information. While our main hypotheses were not supported, these findings suggest that people might anticipate machines will commit to decisions based on less evidence than a human would require, providing a possible explanation for why people are averse to machines making moral decisions.

It's Complicated: Improving Decisions on Causally Complex Topics

We make frequent decisions about how to manage our health, yet do so with information that is highly complex or received piecemeal. Causal models can provide guidance about how components of a complex system interact, yet models that provide a complete causal story may be more complex than people can reason about. Prior work has provided mixed insights into our ability to make decisions with causal models, showing that people can use them in novel domains but that they may impede decisions in familiar ones. We examine how tailoring causal information to the question at hand may aid decision making, using simple diagrams with only the relevant causal paths (Experiment 1) or those paths highlighted within a complex causal model (Experiment 2). We find that diagrams tailored to a choice improve decision accuracy over complex diagrams or prior knowledge, providing new evidence for how causal models can aid decisions.

Disentangling factors in the placement of manner adverbials in German: the effect of distributional similarity

Processing differences between obligatory constituents like verbal complements and facultative ones like adjuncts are widely discussed in psycholinguistics. But the relative positional and semantic variability of adjuncts makes analyzing these differences a daunting challenge. Focusing on the intricate problem of the default position of manner adverbials in German, we present a re-analysis of a recent psycholinguistic study on their ordering preferences in which we explore the extent to which similarities between word embeddings can be used as stand-ins for shared semantic memory, representing the probability of seamless conceptual combination. In re-analyzing six experiments across different paradigms, the addition of the new predictors yields substantially better models which show that these factors considerably interact with established lexical and grammatical predictors.

Reason-Based Constraint in Theory of Mind

In the face of strong evidence that a coin landed heads, can someone simply choose to believe it landed tails? Knowing that a large earthquake could result in personal tragedy, can someone simply choose to desire that it occur? We propose that in the face of strong reasons to adopt a given belief or desire, people are perceived to lack control: they cannot simply believe or desire otherwise. We test this “reason-based constraint” account of mental state change, and find that people reliably judge that evidence constrains belief formation, and utility constrains desire formation, in others. These results were not explained by a heuristic that simply treats irrational mental states as impossible to adopt intentionally. Rather, constraint results from the perceived influence of reasons on reasoning: people judge others as free to adopt irrational attitudes through actions that eliminate their awareness of strong reasons. These findings fill an important gap in our understanding of folk psychological reasoning, with implications for attributions of autonomy and moral responsibility.

From music to animacy: Causal reasoning links animate agents with musical sounds

Listening to music activates representations of movement and social agents. Why? We ask whether high-level causal reasoning about how music was generated can lead people to link musical sounds with animate agents. To test this, we asked whether people (N=60) make flexible inferences about whether an agent caused musical sounds, integrating information from the sounds’ timing and from the visual context in which they were produced. Using a 2x2 within-subject design, we found evidence of causal reasoning: In a context where producing a musical sequence would require self-propelled movement, people inferred that an agent had been present causing the sounds. When the context provided an alternative possible explanation, this ‘explained away’ the agent, reducing the tendency to infer an agent was present for the same acoustic stimuli. People can use causal reasoning to infer whether an agent produced musical sounds, suggesting that high-level cognition can link music with social concepts.

Syntactic satiation is driven by speaker-specific adaptation

Listeners adapt to variability in language use by updating their expectations over variants, often in speaker-specific ways. We propose that adaptation of this sort contributes to satiation, the phenomenon whereby the acceptability of unacceptable sentences increases after repeated exposure. We provide support for an adaptation account of satiation by showing that the satiation of purportedly unacceptable island-violating constructions demonstrates speaker-specificity, a key property of adaptation.

Falling Through the Gaps: Neural Architectures as Models of Morphological Rule Learning

Recent advances in neural architectures have revived the problem of morphological rule learning. We evaluate the Transformer as a model of morphological rule learning and compare it with Recurrent Neural Networks (RNN) on English, German, and Russian. We bring to the fore a hitherto overlooked problem, the morphological gaps, where the expected inflection of a word is missing. For example, 63 Russian verbs lack a first-person-singular present form such that one cannot comfortably say "*oščušču" ("I feel"). Even English has gaps, such as the past participle of "stride": the function of morphological inflection can be partial. Both neural architectures produce inflections that ought to be missing. Analyses reveal that Transformers recapitulate the statistical distribution of inflections in the training data, similar to RNNs. Models' success on English and German is driven by the fact that rules in these languages can be identified with the majority forms, which is not universal.

Keep Calm and Move On: Interplay between Morphological Cue Occurrence and Frequency-based Heuristics for Sentence Comprehension in Korean

We explore how morphological cue occurrence and frequency-based heuristics interplay during sentence comprehension in Korean, a lesser-studied language in this respect. Two self-paced reading experiments with a suffixal passive construction (verb-final vs. verb-initial) and a morphological causative construction (verb-initial in comparison to the same word-order pattern in the suffixal passive) revealed that the heuristics (canonicity of word order; typicality of form-function pairings involving case-marking) affected processing behaviours more strongly than the expected advantage of an early-arriving morphological cue in comprehension. Our findings support the heuristic-before-algorithm processing architecture, which is driven by the general property of human cognition that continuously seeks to reduce the burden of work at hand at the earliest opportunity. This appeals to the online cognitive equilibrium hypothesis, which argues for the processor’s propensity to enter and remain in the state of cognitive equilibrium as early and for as long as possible during processing.

Recollection & Traumatic Growth: Unique Mediational Pathways Through Traumatic Stress Components

Although the severity of the COVID-19 outbreak varies from time to time, the pandemic has affected large populations all around the world. Given the increasingly severe measures taken by the authorities, healthcare professionals have experienced positive and negative effects of these events, both personally and vicariously. The main aim of this study is to examine how remembering influences vicarious traumatization and post-traumatic growth in a sample of healthcare workers. We proposed a multiple mediation model testing the distinct roles of stress components (hypervigilance, avoidance, intrusion) in the link between recollective features of remembering and post-traumatic growth, which allows us to characterize memory-linked mechanisms underlying the effects of traumatic stress on growth. We demonstrated unique pathways by which remembering influenced traumatic growth. For the links of emotional intensity and imagery with growth, we found full mediation through avoidance and intrusion. Individuals recalling events with high emotional intensity and imagery tended to experience more intrusions of trauma, which in turn resulted in traumatic growth. The opposite pattern was found for avoidance: emotionally intense and vivid recall of events increased avoidance responses, but high avoidance reduced traumatic growth. With respect to reliving, while the pattern was similar, we found a partial mediation, showing the significant role reliving plays in supporting traumatic growth.

Probing the Mental Representation of Relation-Defined Categories

The mental representation of relation-based concepts is different from that of feature-based concepts. In the present experiment, participants learned to categorize two fictional diseases that were defined either by a feature (e.g., short cells) or an ordinal relation (e.g., diseased cells being shorter than healthy cells). After the participants learned the categorization task to criterion, their strategies were probed in a transfer task in which features and relations were pitted against one another. Finally, participants engaged in a stimulus reconstruction task. The results supported the prediction that participants who had adopted a feature-based strategy on a stimulus dimension, as identified by transfer data, tended to reconstruct values close to the means presented during training. By contrast, participants who had adopted a relation-based strategy tended to exaggerate that dimension away from the mean of the training examples and in the direction of the category-defining comparative relation. These data add to the growing literature suggesting that, unlike featural categories, relational categories are not represented in terms of the category’s central tendency.

Givenness Hierarchy Theoretic Referential Choice in Situated Contexts

We present a computational cognitive model of referential choice that models and explains the choice between a wide variety of referring forms using a small set of features important to situated contexts. By combining explainable machine learning techniques, data collected in situated contexts, and recent computational models of cognitive status, we produce an accurate and explainable model of referential choice that provides an intuitive pragmatic account of this process in humans, and an intuitive method for computationally enabling this capability in robots and other autonomous agents.

Statistical properties of the speed-accuracy trade-off (SAT) paradigm in sentence processing

Studies of the speed-accuracy trade-off (SAT) have been influential in arguing for the direct-access model of retrieval in sentence processing. The direct-access model assumes that long-distance dependencies rely on a content-addressable search for the correct representation in memory. Here, we address two important weaknesses in the statistical methods standardly used for analysing SAT data. First, these methods are based on non-hierarchical modelling. We show how a hierarchical model can be fit to SAT data, and we test parameter recovery in this more conservative model. The parameters most relevant to the direct-access account cannot be accurately estimated, and we attribute this to the standard SAT model being overparameterised for the limited data available to fit it. Second, the power properties of SAT studies are unknown. We conduct a power analysis and show that inferences from null results to the null hypothesis, though commonplace in the SAT literature, may be unwarranted.
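
For context on the modeling setup discussed above, the sketch below writes down the conventional shifted-exponential SAT function together with a simple hierarchical generative assumption (subject-level parameters drawn around group means); the parameter values, spreads, and response deadlines are illustrative guesses, not the paper's model or priors.

```
import numpy as np

# Sketch of the standard three-parameter SAT function and a hierarchical
# generative assumption: subject-level parameters scattered around group means.
# All numeric values below are illustrative, not taken from the paper.
def sat(t, lam, beta, delta):
    """Accuracy (d') as a function of processing time t: asymptote lam,
    rate beta, intercept delta."""
    return np.where(t > delta, lam * (1 - np.exp(-beta * (t - delta))), 0.0)

rng = np.random.default_rng(0)
group_lam, group_beta, group_delta = 3.0, 2.5, 0.4
n_subjects = 20
lam_s = rng.normal(group_lam, 0.5, n_subjects)       # subject asymptotes
beta_s = rng.normal(group_beta, 0.4, n_subjects)      # subject rates
delta_s = rng.normal(group_delta, 0.05, n_subjects)   # subject intercepts

t = np.linspace(0.1, 3.0, 14)  # response deadlines in seconds
dprime = np.array([sat(t, l, b, d) for l, b, d in zip(lam_s, beta_s, delta_s)])
print(dprime.shape)  # (subjects, deadlines): the data a hierarchical fit would model
```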

Using Machine Teaching to Investigate Human Assumptions when Teaching Reinforcement Learners

Successful teaching requires an assumption of how the learner learns - how the learner uses experiences from the world to update their internal states. We investigate what expectations people have about a learner with a behavioral experiment: Human teachers were asked to teach a sequential decision-making task to an artificial dog in an online manner using rewards and punishments. The artificial dogs were implemented with either an Action Signaling agent or a Q-learner with different discount factors. Our findings are threefold: First, we used machine teaching to prove that the optimal teaching complexity across all the learners is the same, and thus the differences in human performance were solely due to the discrepancy between the human teacher’s theory of mind and the actual student model. Second, we found that Q-learners with small discount factors were easier to teach than action signaling agents, challenging the established conclusion from prior work. Third, we showed that the efficiency of teaching was monotonically increasing as the discount factors decreased, suggesting that humans’ theory of mind is biased towards myopic learners.
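
To make the role of the discount factor concrete, the sketch below shows a standard tabular Q-learning update in which a smaller gamma yields a more myopic learner; the states, actions, reward scheme, and parameter values are illustrative stand-ins, not the study's artificial-dog implementation.

```
import random

# Minimal tabular Q-learning update illustrating the discount factor gamma:
# smaller gamma makes the learner more myopic. States, actions, rewards, and
# parameter values are illustrative assumptions, not the study's agents.
ACTIONS = ["left", "right", "stay"]

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.3):
    """One step: Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_b Q(s',b) - Q(s,a))."""
    best_next = max(Q.get((s_next, b), 0.0) for b in ACTIONS)
    old = Q.get((s, a), 0.0)
    Q[(s, a)] = old + alpha * (r + gamma * best_next - old)

Q = {}
state = 0
for step in range(100):
    action = random.choice(ACTIONS)               # pure exploration, for brevity
    reward = 1.0 if action == "right" else -1.0   # stand-in for teacher feedback
    next_state = min(state + 1, 5) if action == "right" else state
    q_update(Q, state, action, reward, next_state)
    state = next_state
print(max(Q, key=Q.get))  # state-action pair with the highest learned value
```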

Unfolding Conscious Awareness from Non-Conscious Perception in Non-Human Animals

Conscious awareness of the events and stimuli around us is a central part of our everyday experience. Yet, are humans the only species that experiences conscious awareness? Since non-verbal species cannot report their internal states, philosophers and scientists have long debated whether the question of animal consciousness is empirically testable, and it still remains a topic of speculation (Dawkins, 2015; Gutfreund, 2017). In the large spectrum of views, some advocate that consciousness may require complex processes like language, a capacity that is unique to adult humans (Dennett, 1995) or a human-like theory of mind (Carruthers, 1998), which may extend to only a few selected species such as great apes (e.g., Krupenye, Kano, Hirata, Call, & Tomasello, 2016; but see Horschler, MacLean, & Santos, 2020). In contrast, others have used neuroanatomical similarities to argue that a number of species (including some birds and octopuses) are likely to be capable of generating conscious experience (see, for example, the Cambridge declaration on consciousness, 2012). Still others point to intelligent behaviors which, at least in humans, seem to coincide with conscious awareness as supporting evidence for animal consciousness. These include behaviors such as planning (Osvath & Osvath, 2008) or metacognition (Hampton, Engelberg, & Brady, 2020; Rosati & Santos, 2016; for review, see Boly et al., 2013; Griffin & Speck, 2004). Yet, since many complex human behaviors and high-level functions can be performed outside of conscious awareness (i.e., Hassin, 2013), it is difficult to determine whether non-human animals that display intelligent behaviors are indeed conscious or not (Carruthers, 2018). Furthermore, given the ambiguity and difficulty in disentangling conscious from non-conscious processes in non-verbal species, many consider the question of animal consciousness as far from having been resolved (Dawkins, 2015; Gutfreund, 2017). For many, the gap in evidence needed to unambiguously infer animal consciousness is considered “as wide as ever” (Dawkins, 2012).

Modelling Recognition in Human Puzzle Solving

Our ability to play games like chess and Go relies both on planning several moves ahead and on recognition or gist - intuitively assessing the quality of possible game states without explicit planning. In this paper, we investigate the role of recognition in puzzle solving. We introduce a simple puzzle game to study planning and recognition in a non-adversarial context and a reinforcement learning agent which solves these puzzles relying purely on recognition. The agent relies on a neural network to capture intuitions about which game states are promising. We find that our model effectively predicts the relative difficulty of the puzzles for humans and shows similar qualitative patterns of success and initial moves to humans. Our task and model provide a basis for the study of planning and intuitive notions of fit in puzzle solving that is simple enough for use in developmental studies.

The Greedy and Recursive Search for Morphological Productivity

As children acquire the knowledge of their language's morphology, they invariably discover the productive processes that can generalize to new words. Morphological learning is made challenging by the fact that even fully productive rules have exceptions, as in the well-known case of English past tense verbs, which features the -ed rule against the irregular verbs. The Tolerance Principle is a recent proposal that provides a precise threshold of exceptions that a productive rule can withstand. Its empirical application so far, however, requires the researcher to fully specify rules defined over a set of words. We propose a greedy search model that automatically hypothesizes rules and evaluates their productivity over a vocabulary. When the search for broader productivity fails, the model recursively subdivides the vocabulary and continues the search for productivity over narrower rules. Trained on psychologically realistic data from child-directed input, our model displays developmental patterns observed in child morphology acquisition, including the notoriously complex case of German noun pluralization. It also produces responses to nonce words that, despite receiving only a fraction of the training data, are more similar to those of human subjects than current neural network models' responses are.
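
As a worked illustration of the productivity threshold mentioned above, the sketch below computes the bound under the standard formulation of the Tolerance Principle (a rule applying to N items tolerates at most N / ln N exceptions); the vocabulary and exception counts are invented for the example, not taken from the paper's training data.

```
import math

# Sketch of the Tolerance Principle threshold: a rule applying to N items
# is productive if it has at most N / ln N exceptions. The counts below
# are made up for illustration, not taken from the paper's corpora.
def tolerance_threshold(n):
    return n / math.log(n)

def is_productive(n_items, n_exceptions):
    return n_exceptions <= tolerance_threshold(n_items)

# e.g., a hypothetical vocabulary of 1,200 verbs with 150 irregulars:
print(round(tolerance_threshold(1200), 1))  # ~169.3 tolerated exceptions
print(is_productive(1200, 150))             # True -> the -ed rule is productive
```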

Modeling rules and similarity in colexification

Colexification, or the expression of multiple concepts by the same word, is ubiquitous in language. Colexifications may appear rule-like, as when an artifact is used for an activity ('repair the shower'/'take a shower'), or similarity-based ('child' refers to both 'young person' and 'descendant'). We investigate whether these two modes of generalization (rules and similarity) reflect how people structure new meanings. We propose computational models based on rules, similarity, and a hybrid of the two, and correlate model predictions to human behavior—in a novel task, participants generalized labels across colexified meanings. We found that a model using similarity correlated much better with human behavior than rules. Further, the similarity model was significantly outperformed by a hybrid model of the two mechanisms. However, the difference in correlations was modest, suggesting that a framework which combines rules and similarity largely relies on similarity-based generalization to characterize human expectations about colexification.

Judgements of political statements are influenced by speaker identity

Analyses of political discourse typically focus on the semantic content of politicians’ statements. This approach treats the meaning of a speaker’s words as independent from the speaker’s identity itself; however, there are reasons to believe that one might influence the other. Features of a speaker’s identity influence others’ judgements of their character (e.g., Kinzler & DeJesus, 2013), and thus speaker identity could influence listeners’ assessment of the semantics and validity of the statements themselves. Here, we collect U.S. participants’ judgements of the political orientation of different statements, from liberal to conservative, heard in one of three accents: a generic U.S. accent, a Southern U.S. accent, or an Australian accent. In comparison to identical statements conveyed in the generic U.S. accent, participants tended to perceive the U.S. Southern-accented statements as more conservative and the Australian-accented statements as more liberal.

Quantifying Lexical Ambiguity in Speech To and From English-Learning Children

Because words have multiple meanings, language users must often choose appropriate meanings according to the context of use. How this potential ambiguity affects first language learning, especially word learning, is unknown. Here, we present the first large-scale study of how children are exposed to, and themselves use, ambiguous words in their language learning environments. We tag 180,000 words in two longitudinal child language corpora with word senses from WordNet, focusing on the period between 9 and 51 months and limiting the analysis to words from a popular parental vocabulary report. We then compare the diversity of sense usage in adult speech around children to that observed in a sample of adult language, as well as the diversity of sense usage in children's own productions. To accomplish this we use a Bayesian model-based estimate of sense entropy, a measure of diversity that takes into account the uncertainty inherent in small sample sizes. This reveals that sense diversity in caregivers' speech to children is similar to that observed in a sample of adult-directed written material, and that children's use of nouns, but not verbs, is similarly diverse to that of adults. Finally, we show that sense entropy is a significant predictor of vocabulary development: children begin to produce words with a higher diversity of adult sense usage at later ages. We discuss the implications of our findings for theories of word learning.
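
To give a sense of the diversity measure described above, the toy sketch below computes sense entropy with simple additive (Dirichlet-style) smoothing; this is a simplified stand-in for the paper's Bayesian model-based estimator, and the sense counts are invented for illustration.

```
import numpy as np

# Toy sense-entropy calculation with additive smoothing, a simplified
# stand-in for the paper's Bayesian model-based estimator.
# The sense counts below are invented for illustration.
def sense_entropy(counts, alpha=1.0):
    """Shannon entropy (bits) of a word's sense distribution, smoothed by alpha."""
    counts = np.asarray(counts, dtype=float) + alpha
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# A word used 80 times in one sense and 5 in another, vs. one used evenly:
print(round(sense_entropy([80, 5]), 2))   # low diversity of sense usage
print(round(sense_entropy([40, 45]), 2))  # near-maximal diversity (~1 bit)
```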

Testing a Process Model of Causal Reasoning With Inhibitory Causal Links

In this paper, we test people’s causal judgments when the graphs have inhibitory causal relations. We find evidence that a particularly important class of errors, known as Markov violations, extends to these settings. These Markov violations are important because they are incompatible with causal graphical models, a theoretical framework that is often used as a computational level account of causal cognition. In contrast, the systematic pattern of errors is in line with the predictions of a recently proposed rational process model that models people as reasoning about concrete cases (Davis & Rehder, 2020). These findings demonstrate that errors in causal reasoning extend across a range of settings, and do so in line with the predictions of a model that describes the process by which causal judgments are drawn.

Humans fail to outwit adaptive rock, paper, scissors opponents

How do humans adapt when others exploit patterns in their behavior? When can people modify such patterns and when are they simply trapped? The present work explores these questions using the children's game of rock, paper, scissors (RPS). Adult participants played 300 rounds of RPS against one of eight bot opponents. The bots chose a move each round by exploiting unique sequential regularities in participant move choices. In order to avoid losing against their bot opponent, participants needed to recognize the ways in which their own behavior was predictable and disrupt the pattern. We find that for simple biases, participants were able to recognize that they were being exploited and even counter-exploit their opponents. However, for more complex sequential dependencies, participants were unable to change their behavior and lost reliably to the bots. Results provide a quantitative delineation of people's ability to identify and alter patterns in their past decisions.

From Alien Zoo to Spy School: A Preregistered Study of Linguistic Sound Symbolism and its Links to Reading in 8-year-olds

Adults and children systematically match certain kinds of words to certain kinds of shapes according to the sounds of their phonemes (e.g., ‘kiki’-spiky, ‘bouba’-curvy). These sound-shape mappings rely on multisensory processing of perceived goodness of fit between vision and audition. Dyslexic individuals have shown deficits in general multisensory processing and sound-symbolic matching, suggesting that multisensory processing deficits may be developmentally implicated in early reading difficulties. A longitudinal cohort study tracking bilingual children in Singapore showed that early predictors of English reading at 4 years (e.g., phonological awareness, vocabulary size and letter knowledge) did not correlate with a novel child-friendly task eliciting the bouba-kiki effect at 6 years. However, since the children had not yet started formal reading instruction, it is difficult to interpret the lack of relationship. In the current study, we followed the same cohort of children into their early reading years and tested their English word and pseudoword reading abilities at 8.5 years. In our preregistered analysis, no significant relationship was observed between earlier multisensory sound-shape matching and reading outcomes, but known predictors of reading showed strong relationships in this cohort of bilingual children.

Visual representation of negation: Real world data analysis on comic image design

There has been a widely held view that visual representations (e.g., photographs and illustrations) do not depict negation, for example, the negation that can be expressed by the sentence "the train is not coming". This view is empirically challenged by analyzing real-world visual representations in comic (manga) illustrations. In an experiment using image captioning tasks, we gave people comic illustrations and asked them to explain what they could read from them. The collected data showed that some comic illustrations could depict negation without any aid of sequences (multiple panels) or conventional devices (special symbols). This type of comic illustration was subjected to further experiments in which images were classified into those containing negation and those not containing negation. While this image classification was easy for humans, it was difficult for data-driven machines, i.e., deep learning models (CNNs), to achieve the same high performance. Given these findings, we argue that some comic illustrations evoke background knowledge and thus can depict negation with purely visual elements.

Specialization and selective social attention establishes the balance between individual and social learning

A key question individuals face in any social learning environment is when to innovate alone and when to imitate others. Previous simulation results have found that the best performing groups exhibit an intermediate balance, yet it is still largely unknown how individuals collectively negotiate this balance. We use an immersive collective foraging experiment, implemented in the Minecraft game engine, facilitating unprecedented access to spatial trajectories and visual field data. The virtual environment imposes a limited field of view, creating a natural trade-off between allocating visual attention towards individual search or to look towards peers for social imitation. By analyzing foraging patterns, social interactions (visual and spatial), and social influence, we shine new light on how groups collectively adapt to the fluctuating demands of the environment through specialization and selective imitation, rather than homogeneity and indiscriminate copying of others.

Can Children use Numerical Reasoning to Compare Odds in Games?

Children can represent, compute, and manipulate numbers from very early in development. Additionally, beginning in infancy, children appear to have intuitions about probability, correctly anticipating the outcomes of simple sampling events. In two experiments, we examined 3- to 7-year-olds’ (N=196) ability to compare the number of items across sets in games of chance. In Experiment 1, children were asked to select between two games with different numbers of hiding locations to either find or hide a gold coin. Using a similar set up, in Experiment 2, they were asked to select the game that would make it easy or hard for another player to find the coin. Results from both experiments suggest that by around age 5, children can use numerical reasoning to compare odds: they were more likely to select the game with more cups when asked to help hide the gold coin than find it (Experiment 1) and when asked to make the game hard rather than easy (Experiment 2).

Compositional processing emerges in neural networks solving math problems

A longstanding question in cognitive science concerns the learning mechanisms underlying compositionality in human cognition. Humans can infer the structured relationships (e.g., grammatical rules) implicit in their sensory observations (e.g., auditory speech), and use this knowledge to guide the composition of simpler meanings into complex wholes. Recent progress in artificial neural networks has shown that when large models are trained on enough linguistic data, grammatical structure emerges in their representations. We extend this work to the domain of mathematical reasoning, where it is possible to formulate precise hypotheses about how meanings (e.g., the quantities corresponding to numerals) should be composed according to structured rules (e.g., order of operations). Our work shows that neural networks are not only able to infer something about the structured relationships implicit in their training data, but can also deploy this knowledge to guide the composition of individual meanings into composite wholes.

Integrating emotional expressions with utterances in pragmatic inference

Human communication involves far more than words; speakers’ utterances are often accompanied by various kinds of emotional expressions. How do listeners represent and integrate these distinct sources of information to make communicative inferences? We first show that people, as listeners, integrate both verbal and emotional information when inferring true states of the world and others' communicative goals, and then present computational models that formalize these inferences by considering different ways in which these signals might be generated. Results suggest that while listeners understand that utterances and emotional expressions are generated by a balance of speakers’ informational and social goals, they additionally consider the possibility that emotional expressions are noncommunicative signals that directly reflect the speaker’s internal states. These results are consistent with the predictions of a probabilistic model that integrates goal inferences with linguistic and emotional signals, moving us towards a more complete formal theory of human communicative reasoning.

Racial Bias in Emotion Inference: An Experimental Study Using a Word Embedding Method

We investigated racial bias in emotion inference by having participants describe the emotion of the person featured in images of Asian, Black, and White individuals. We collected 4,197 sentences (63,900 tokens) and used the data to train Word2Vec, a neural network-based word embedding model. We calculated the cosine distance between emotion words and words indicating the target for each racial group in order to measure the strength of association. Although all images portrayed neutral emotions, the results show that negative emotion words were close to the Asian and Black target words, whereas neutral words were close to the White target words. This result indicates that stereotypes contribute to racial bias in Artificial Intelligence, as crowdsourcing workers can generate annotations that depend on the featured person's race. Based on the present study, we suggest employing various de-biasing methods, such as data augmentation or removing bias components in word embeddings, before using Artificial Intelligence in the real world.
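
As a rough sketch of the embedding-based association measure described above, the snippet below trains Word2Vec on a placeholder corpus and computes cosine distances between emotion words and group-indicating words; the corpus, word choices, and hyperparameters are invented for illustration (assuming gensim 4.x), not the study's data or settings.

```
import numpy as np
from gensim.models import Word2Vec

# Sketch of the association measure: train Word2Vec on collected descriptions
# and compare cosine distances between emotion words and group-indicating
# words. The corpus, tokens, and hyperparameters are placeholders, not the
# study's data or settings. Assumes gensim 4.x (vector_size / epochs names).
corpus = [["the", "woman", "looks", "calm", "and", "relaxed"],
          ["the", "man", "seems", "angry", "and", "upset"]] * 50  # toy corpus

model = Word2Vec(corpus, vector_size=50, window=3, min_count=1, epochs=20, seed=1)

def cosine_distance(w1, w2):
    """1 - cosine similarity between two word vectors."""
    v1, v2 = model.wv[w1], model.wv[w2]
    return 1.0 - float(np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2)))

print(cosine_distance("woman", "calm"))
print(cosine_distance("man", "angry"))
```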

Quantifiers satisfying semantic universals are simpler

Despite wide variation among natural languages, there are linguistic properties thought to be universal to all or almost all natural languages. Here, we consider universals at the semantic level, in the domain of quantifiers, which are given by the properties of monotonicity, quantity, and conservativity. We investigate whether these universals might be explained by differences in complexity. We generate a large collection of quantifiers, based on a simple yet expressive grammar, and compute both their complexities and whether they adhere to these universal properties. We find that quantifiers satisfying semantic universals are less complex: they have a shorter minimal description length.
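
To make the complexity comparison concrete, the toy sketch below treats quantifiers as functions of |A∩B| and |A\B| on small models, measures "complexity" as the length of their defining expression, and checks upward monotonicity; the grammar, quantifiers, and complexity metric are simplified assumptions, not the paper's generation procedure, and merely illustrate the direction of the claim.

```
from itertools import product

# Toy illustration: quantifiers as functions of AB = |A∩B| and AnotB = |A\B|,
# with "complexity" measured as the token length of a defining expression.
# The grammar, quantifiers, and metric are simplified assumptions, not the
# paper's generation procedure.
QUANTIFIERS = {
    "some":       ("AB > 0",               lambda AB, AnotB: AB > 0),
    "all":        ("AnotB == 0",           lambda AB, AnotB: AnotB == 0),
    "at_least_3": ("AB >= 3",              lambda AB, AnotB: AB >= 3),
    "exactly_2":  ("AB >= 2 and AB <= 2",  lambda AB, AnotB: AB >= 2 and AB <= 2),
}

def complexity(expr):
    return len(expr.split())  # description length in grammar symbols (toy metric)

def upward_monotone(q, max_size=6):
    """True if enlarging B (moving one element of A from outside B into B,
    i.e., AB + 1 and AnotB - 1) never turns the quantifier false."""
    return all(not q(ab, anb) or q(ab + 1, anb - 1)
               for ab, anb in product(range(max_size), range(1, max_size)))

for name, (expr, q) in QUANTIFIERS.items():
    print(f"{name:>11}: complexity={complexity(expr)}, monotone={upward_monotone(q)}")
```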

Speechless Reader Model: A neurocognitive model for human reading reveals cognitive underpinnings of baboon lexical decision behavior.

Animal reading studies have shown that word/non-word decision behavior can be performed by baboons and pigeons, despite their inability to access phonological and semantic representations. Previous modeling work used different learning models (e.g., deep-learning architectures) to successfully reproduce baboon lexical decisions. More transparent investigations of the implemented representations underlying baboons’ behavior are currently missing, however. Here we apply the highly transparent Speechless Reader Model, which is motivated by human reading and its underlying neurocognitive processes, to existing baboon data. We implemented four variants that comprise different sets of representations—all four models implemented visual-orthographic prediction errors. In addition, one model included prediction errors derived from positional letter frequencies, another included prediction errors constrained by specific letter sequences, and finally, a combinatory model combined all three prediction errors. We compared the models’ behavior to that of the baboons and thereby identified the model which most adequately mirrored the animals’ learning success. This model combined the image-based prediction error and the letter-based prediction error that also accounts for the transitional probabilities within the letter sequence. Thus, we conclude that animals, similarly to humans, use prediction error representations that capture orthographic codes to implement efficient reading-like behavior.

Why people err on multiple-choice analogical reasoning tests

A widespread tool in analogy research consists of multiple-choice tests that require identifying a relation between two situations and mapping it to another two situations in order to find the correct response option. A key source of difficulty during such tests is attributed to the complexity of mapping. However, most people do not construct mappings purely in the mind, but also compare the emerging mapping with the existing response options, so the features of these options may affect the reasoning process. This study examined the impact of the relational match of error options with respect to the correct option (the proportion of correct elements present in a given error option) on option selection. Results indicate that option selection depends almost linearly on relational match. Moreover, the higher the working memory capacity of the participants, the more relationally matching errors they selected. The study suggests careful design of error options in multiple-choice reasoning tests, because the pattern of these options can affect the solution process.

Dual Processes on Dual Dimensions: Associative and Propositionally-Mediated Discrimination and Peak Shift.

Dual-process accounts posit that human learning can occur as a consequence of both associative and propositional processes. This can be contrasted with single process accounts that suggest learning is entirely propositional. In this paper, we offer evidence for both associative and propositional processes using a within-subjects two alternative forced choice discrimination paradigm. Stimuli that varied concurrently along two dimensions were created and each participant’s awareness was directed toward one, facilitating rule induction (i.e., propositional processing) on that dimension. Performance on the other dimension was then used to assess associatively-based performance. We report results that are initially inconsistent with both single process and dual-process accounts of discrimination learning. However, we then show how an associative network, that represents stimuli integrally, can predict the performance shown by participants in the experiment, providing evidence for a dual-process account.

Deconstructing the Label Advantage Effect

Is language unique in how it evokes conceptual content, and if so, why? In an influential study, Lupyan & Thompson-Schill (2012) report that labels (nouns like “dog”) have a stronger cuing effect in picture verification tasks than “equally informative and predictive nonverbal cues”, like the sound of a barking dog. Here we sought to better understand the factors that lead to a label advantage. First, while we replicate the label advantage itself, our data do not support the assumption that labels and environmental sounds are equally informative. Instead, we show that different cue types are associated with target images to different degrees, and that labels show the strongest association. Moreover, the degree of association is a better predictor of reaction times than cue type. Thus, we conclude that labels are not more effective at activating the same semantic content than non-verbal cues, but rather activate different semantic content.

Just Following Directions! The Effects of Gender on Direction Giving

Direction giving involves diverse cognitive processes such as creating a mental map, following the desired route, and choosing the correct terminology to provide directions efficiently. Many differences have been speculated to exist in the speech of men and women, yet research on gender-based differences in spontaneous direction giving is limited. This small-scale qualitative study uses Cognitive Discourse Analysis to investigate whether men and women differ in the frequency of usage of projective terms, cardinal directions, hedges, modal verbs, landmarks, serial orientation measures, and distance indicators in route directions. The patterns emerging consistently through the results show that gender plays an important role in the provision of directions. Key results included the use of humor by women when giving directions, as well as a higher usage of landmarks and hedges than by men. Key results contradicting previous findings showed no usage of cardinal directions by either gender, as well as the serial orientation marker ‘then’ being used more by women than by men.

Linguistic Metaphors Shape Attitudes towards Immigration

Immigration policy has been one of the top concerns of American voters over the last decade and has attracted some of the most heated rhetoric in politics and news media across the world. Much like other political language, talk about immigration is suffused with metaphor. To what extent does the language about immigration, and specifically the metaphors used, influence people’s views of the issues? How powerful are these metaphors? In our studies, we exposed participants to one of four versions of a passage about an increase in immigrants in one town. The four versions of the passage included all identical facts and figures and differed in only a single word at the beginning of the passage, describing the increase in immigrant labor as either an “increase,” a “boost,” an “invasion,” or a “flood.” Although the passages differed only in this one word, participants’ attitudes towards this increase and their predictions about its effects on the economy differed significantly depending on the metaphor. Of course, opinions on immigration differ across political affiliations. Remarkably, the single word metaphor was strong enough to mitigate much of the difference in opinion on immigration between Democrats and Republicans in our sample. Further analyses suggested that the results are not due simply to positive or negative lexical associations to the metaphorical words, and also that metaphors can act covertly in organizing people’s beliefs.

Sensory Modality of Input Influences the Encoding of Motion Events in Speech But Not Co-Speech Gestures

Visual and auditory channels have different affordances and this is mirrored in what information is available for linguistic encoding. The visual channel has high spatial acuity, whereas the auditory channel has better temporal acuity. These differences may lead to different conceptualizations of events and affect multimodal language production. Previous studies of motion events typically present visual input to elicit speech and gesture. The present study compared events presented as audio-only, visual-only, or multimodal (visual+audio) input and assessed speech and co-speech gesture for path and manner of motion in Turkish. Speakers with audio-only input mentioned path more and manner less in verbal descriptions, compared to speakers who had visual input. There was no difference in the type or frequency of gestures across conditions, and gestures were dominated by path-only gestures. This suggests that input modality influences speakers’ encoding of path and manner of motion events in speech, but not in co-speech gestures.

Modeling the Anticipatory Remapping of Spatial Body Representations: A Free Energy Approach

According to theories of event-predictive cognition, neural processing focuses on the next relevant interaction targets. Evidence for this notion comes from the anticipatory crossmodal congruency effect (aCCE), which implies that spatial body representations are mapped onto future goal locations in advance of a goal-directed action. Here we present a free energy based normative process model that accounts for the aCCE quantitatively by applying crossmodal mappings between vision and touch as well as active inference. A comparison with a diffusion model shows that our model accounts for the response time distributions and the aCCE with a sparser set of parameters. However, the temporal dynamics of the model require further fine tuning to account for all aspects of the aCCE. The model shows how the free energy framework can be used to account for behavioral data in general and how to implement theories of event-predictive cognition in a normative cognitive process model.

Change of Body Representation in Symmetric Body Parts

Many researchers have demonstrated changes of body representation, including the rubber hand illusion and the Pinocchio illusion. However, they focused on the change of a single body part. The human body has a symmetric structure; therefore, the modification of body representation can be facilitated by the corresponding movement of symmetric parts. In our four experiments, participants moved their hands in different manners with distorted vision, and we measured whether their behavior changed throughout the task. The four tasks differed in whether participants' hands were moving simultaneously or separately and whether they moved their hands to the same point or a different point. A behavioral change occurred in all experiments. When the participants moved their hands to the same point simultaneously, the greatest behavioral change was facilitated. Neither moving both hands simultaneously nor moving them to the same position alone facilitated this change.

Gender differences in face-based trait perception and social decision making

Despite recent progress in promoting gender equality, gender bias continues to pose challenges to women's career advancement. Here, we use a statistically grounded framework to investigate how face-based social perception may contribute to gender biases in political and job application settings. By analyzing a large face dataset and performing a novel behavioral experiment, we find that: 1) female faces exhibit a stronger anti-correlation between perceived trustworthiness and dominance, 2) this anti-correlation is due to distinct sets of facial features humans utilize to assess female and male faces for trustworthiness and dominance, 3) perceived dominance positively contributes to social decision preferences for female faces, contrary to prior suggestions that perceived dominance affects female candidates negatively, and 4) the anti-correlated perception of trustworthiness and dominance puts females at a disadvantage in competitive environments. More generally, our findings reveal the important role of face-based trait perceptions underlying gender biases in social decision making.

Do you speak 'kid'? The role of experience in comprehending child speech

Child speech deviates from adult speech in predictable ways. Are listeners who routinely interact with children implicitly aware of these systematic deviations, and thereby better at comprehending children? In Experiment 1, we explore this possibility by testing three types of participants with variable experience interacting with children: undergraduates with minimal experience with children (N=48), mothers of young children (N=48), and early childcare educators (N=36). Participants transcribed single-word utterances produced by the same set of children at 2.5-, 4-, and 5.5-years-old. In Experiment 2, mothers (N=50) completed a similar transcription task that featured speech by their own, and another, 2.5-year-old child. Participants performed similarly regardless of their experience with children, while mothers demonstrated a Familiar Talker Advantage with their own child’s speech. Our findings suggest that while experience with children may not facilitate improved comprehension of child speech in general, it may lead to enhanced comprehension of those children in particular.

Productive Failure and Student Emotions

Productive failure (PF) is a learning paradigm that reverses the standard order of instruction by asking students to solve problems prior to instruction. This paradigm has been shown to be effective for fostering student learning. To date, however, the role of student emotion in productive failure has not been investigated. In other paradigms, there is some evidence that failure elicits negative emotions and that these emotions can interfere with learning. This leads to a conundrum given productive failure’s positive effect on learning. To shed light on this, we report on results from a study (N = 48) in the productive failure paradigm. For the analysis, we used a mixed-methods approach to investigate the distribution of emotions in productive failure, how these changed across different instructional activities, and the relation between emotions and posttest performance.

Effects of syntactic and semantic predictability on sentence comprehension: A comparison between native and non-native speakers

Prediction is pervasive during sentence comprehension among native speakers of a language. But whether non-native speakers predict to the same extent as native speakers remains an open question. To examine the effects of semantic and syntactic predictability in native and non-native speakers, we conducted a self-paced reading and an acceptability judgement task. The results suggest that the effects of semantic and syntactic predictability are unequivocally robust among native speakers during sentence comprehension. However, the effects of syntactic predictability seem to be more robust for native speakers than for non-native speakers, who are largely sensitive to semantic predictability.

In the blink of an eye? Evidence for a reduced attentional blink for eyes

Eye contact serves as an important social signal and humans show a special sensitivity for detecting eyes. Here, we asked whether people’s sensitivity to eyes would enable them to overcome temporal limitations in visual attention. We used an "attentional blink" (AB) paradigm, in which the second of two visual stimuli presented in quick succession typically cannot be detected. Participants performed a rapid serial visual presentation (RSVP) task and were asked to identify, within a stream of symbols, a target and to detect whether the target was succeeded by a probe. The probe was either an image of an eye (with direct gaze) or of a star. As expected, participants' detection rate for the star was poor, demonstrating the typical attentional blink. Crucially, detection rate for the eye was significantly better. This reduced attentional blink suggests that people's sensitivity to eyes is strong enough to circumvent fundamental limitations in visuotemporal attention.

Transfer of learned opponent models in repeated games

Human learning transfer takes advantage of important cognitive building blocks such as an abstract representation of concepts underlying tasks and causal models of the environment. One way to build abstract representations of the environment when the task involves interactions with others is to build a model of the opponent that may inform what actions they are likely to take next. In this study, we explore opponent modelling and its role in learning transfer by letting human participants play different games against the same computer agent, who possesses human-like theory of mind abilities with a limited degree of iterated reasoning. We find that participants deviate from Nash equilibrium play and learn to adapt to the opponent's strategy to exploit it. Moreover, we show that participants transfer their learning to new games and that this transfer is moderated by the level of sophistication of the opponent. Computational modelling shows that it is likely that players start each game using a model-based learning strategy that facilitates generalisation and opponent model transfer, but then switch to behaviour that is consistent with a model-free learning strategy in the later stages of the interaction.

The impact of readability on trust in information

The increased prevalence of “fake news" in recent years makes it important for us to better understand how we decide which information to trust and accept as true. Existing research has identified the important roles of factors such as trust in the source of the information and reading ability. We report on the results of a study on how the style of writing, and specifically the difficulty level of the text providing the information, affects our trust in the information provided. Participants trusted the information more in texts that were more difficult to read, regardless of whether the information presented was correct or incorrect. Regardless of readability, participants trusted paragraphs providing truthful information more than those providing false information. This result has important implications for information on social media platforms, where the source may or may not be readily available and is often not an expert on the topic.

We Are What We Say: Pragmatic Violations Have Social Costs

In two studies we show that a speaker’s choice to obey or violate the pragmatic maxims of Relevance and Informativeness – as well as the reasons behind these choices (Inability vs. Unwillingness) – affect how the speaker is socially perceived, revealing a connection between pragmatic reasoning and social evaluation. Our findings further suggest that core dimensions of social evaluation (Competence vs. Warmth) are differentially informed by different aspects of a speaker’s conversational behavior. We conclude that, even after a brief exposure to someone’s conversational behavior, people draw social inferences about the speaker by reasoning along the same principles that inform pragmatic inferences. Our results highlight how pragmatic reasoning, social evaluation and person perception jointly underlie the meaning conveyed by linguistic utterances in communication.

Visual communication of object concepts at different levels of abstraction

People can produce drawings of specific entities (e.g., Garfield), as well as general categories (e.g., “cat”). What explains this ability to produce such varied drawings of even highly familiar object concepts? We hypothesized that drawing objects at different levels of abstraction depends on both sensory information and representational goals, such that drawings intended to portray a recently seen object preserve more detail than those intended to represent a category. Participants drew objects cued either with a photo or a category label. For each cue type, half the participants aimed to draw a specific exemplar; the other half aimed to draw the category. We found that label-cued category drawings were the most recognizable at the basic level, whereas photo-cued exemplar drawings were the least recognizable. Together, these findings highlight the importance of task context for explaining how people use drawings to communicate visual concepts in different ways.

Compression: A Lossless Mechanism for Learning Complex Structured Relational Representations

People learn by both decomposing and combining concepts; most accounts of combination are either compositional or conjunctive. We augment the DORA model of representation learning to build new predicate representations by combining (or compressing) existing predicate representations (e.g., building a predicate a_b by combining predicates a and b). The resulting model learns structured relational representations from experience and then combines these relational concepts to form more complex, compressed concepts. We show that the resulting model provides an account of a category learning experiment in which categories are defined as novel combinations of relational concepts.

Communicating uncertain beliefs with conditionals: Probabilistic modeling and experimental data

Conditionals like 'If A, then C' can be used, among other things, to convey important knowledge about rules, dependencies and causal relationships. Much work has been devoted to the interpretation of conditional sentences, but much less is known about when speakers choose to use a conditional over another type of utterance in communication. To fill this gap, we consider a recently proposed computational model from probabilistic pragmatics, adapted for modeling the use of conditionals in natural language, and compare its predictions to production data from a behavioral experiment. In a novel experimental approach, we manipulate relevant causal beliefs that might influence whether utterances with conditional structure are preferred over utterances without conditional structure. This is a step towards a systematic, quantitative investigation of the situations that do or do not elicit the natural use of conditionals.

The Impact of Ignorance Beyond Causation: An Experimental Meta-Analysis

Norm violations have been demonstrated to impact a wide range of seemingly non-normative judgments. Among other things, when agents' actions violate prescriptive norms they tend to be seen as having done those actions more freely, as having acted more intentionally, as being more of a cause of subsequent outcomes, and even as being less happy. The explanation of this effect continues to be debated, with some researchers appealing to features of actions that violate norms, and other researchers emphasizing the importance of agents' mental states when acting. Here, we report the results of a large-scale experiment that replicates and extends twelve of the studies that originally demonstrated the pervasive impact of norm violations. In each case, we build on the pre-existing experimental paradigms to additionally manipulate whether the agents knew that they were violating a norm while holding fixed the action done. We find evidence for a pervasive impact of ignorance: the impact of norm violations on non-normative judgments depends largely on the agent knowing that they were violating a norm when acting.

Epistemic and aleatory uncertainty in decisions from experience

People intuitively distinguish between uncertainty they believe is potentially resolvable and uncertainty that arises from inherently stochastic processes. The vast majority of experiments investigating decisions based on experience, however, have focused exclusively on scenarios that promote a stochastic interpretation by representing options as images that remain identical each time they are presented. In the current research, we contrasted this method with one in which the visual appearance of options was subtly differentiated each time participants encountered them. We found that introducing this variability to the appearance of options influenced the way people interpreted uncertainty. Although there was little evidence of an impact on exploration, these differences in interpretation may reveal other limitations to the generalisability of previous decision-making tasks.

Characterizing the development of relational reasoning in India

Relational reasoning is an important component of abstract thought that emerges early in development but shows substantial variation across contexts, with children in the US and China following distinct developmental trajectories in one paradigm between 18m and 4y. To understand the mechanisms through which variation in the learning environment influences the development of relational reasoning, we examine early relational reasoning in Punjabi speakers in India, who share some cultural and linguistic elements of their experience with children in both the US and China. In a causal relational match-to-sample task, we find that 3-year-olds in India exhibit performance that is intermediate to their high-performing peers in China and the relatively poor performance observed in the US at this age. These results suggest complexity and variability in the development of relational reasoning and lay a foundation for future research designed to tease apart the factors associated with early diversity in relational reasoning.

Passive Versus Active: Frameworks of Active Learning for Linking Humans to Machines

Numerous studies show that the more actively learners participate in the learning process, the more they learn. Although a variety of methods have recently been introduced to use active learning to increase learning outcomes, empirical experiments are lacking. In this study, we introduce two frameworks of human active learning and conduct two experiments to determine how these frameworks can be used as learning tools. In Experiment 1, we compared three types of active learning with passive learning in order to empirically confirm the effect of active learning. In Experiment 2, building on the results of Experiment 1, we used machine learning simulations based on the frameworks to explore whether more active learners obtain better outcomes. Both experiments showed that active learning is effective in both human and machine learning, and our analyses of the two experiments fit within the taxonomy and classification of the frameworks of active learning. This result is significant in that it has practical implications for both human and machine learning methods.

Chunks are not “Content-Free”: Hierarchical Representations Preserve Perceptual Detail within Chunks

Chunks allow us to use long-term knowledge to efficiently represent the world in working memory. Most views of chunking assume that when we use chunks, this results in the loss of specific perceptual details, since it is presumed the contents of chunks are decoded from long-term memory rather than reflecting the exact details of the item that was presented. However, in two experiments, we find that in situations where participants make use of chunks to improve visual working memory, access to instance-specific perceptual detail (that cannot be retrieved from long-term memory) increased, rather than decreased. This supports an alternative view: that chunks facilitate the encoding and retention into memory of perceptual details as part of structured, hierarchical memories, rather than serving as mere “content-free” pointers. It also provides a strong contrast to accounts in which working memory capacity is assumed to be exhaustively described by the number of chunks remembered.

A Demonstration of The Positive Manifold of Cognitive Test Inter-correlations, and how it Relates to General Intelligence, Modularity, and Lexical Knowledge

Widely recognized in differential psychology, but less so in cognitive science, the positive manifold is the phenomenon of all cognitive tests inter-correlating positively. Frequently demonstrated in people, it can also be observed in non-human species. With 217 Ecuadorian adult participants, who performed 11 cognitive tests, we show that all 55 pairwise inter-correlations are positive, and of large magnitude. Additionally, factor analysis revealed a single underlying general, or g, factor, often identified as general intelligence. This robustly replicates the positive manifold in a non-WEIRD (Western, Educated, Industrialized, Rich, Democratic) context. We further demonstrate that tests of lexical knowledge, such as word pronunciation, have particularly high loadings on g. We explore explanations for the positive manifold, and the implications for understanding the mind as being composed of independent cognitive processing modules. We propose that the positive manifold reveals a neglected but important role of lexical-conceptual knowledge in high-level, top-down, domain-general cognitive processing.

Exploring mental representation with a memory matching game

Games have been integral to the development of cognitive science. As experimental tasks, games can provide an ideal environment for studying questions about mental representation, memory, and strategic decision making. Here, we explore the potential of memory matching games as an experimental task by using a simple online version to conceptually replicate two classic effects from the cognitive scientific literature—the picture superiority effect (Paivio & Csapo, 1973) and the word length effect (Baddeley, Thomson, & Buchanan, 1975). We manipulate the Item Format of the game pieces (pictures vs. words) and their Label Length (short vs. long). As expected, we find a picture superiority effect. We do not find the predicted word length effect. We argue that the results of the study, along with several practical properties of the task, support the use of the game for cognitive scientific research.

Word order affects the frequency of adjective use across languages

Recent research has proposed that adjective form (i.e., whether adjectives typically occur before or after the nouns they modify) interacts with considerations from efficient communication to determine the rate at which we use adjectives to resolve reference to objects. According to this efficiency hypothesis, languages with pre-nominal adjectives use modifying adjectives at a higher rate in an effort to aid incremental reference resolution. We test this broad typological prediction in a large-scale corpus analysis of 74 languages, finding that languages that favor pre-nominal adjectives indeed do exhibit higher rates of adjectival modification than languages that favor post-nominal adjectives.

How People Make Causal Judgments about Unprecedented Societal Events

Counterfactual theories of causal judgment propose that people infer causality between events by comparing an actual outcome with what would have happened in a relevant alternative situation. If the candidate cause is “difference-making”, people infer causality. This framework has not been applied to people’s judgments about unprecedented societal events (e.g., global pandemics), in which people have limited causal knowledge (e.g., about effective policies). In these contexts, it is less clear how people reason counterfactually. This study examined this issue. Participants judged whether a mandatory evacuation reduced population bite rates during a novel insect infestation. People tended to rely on prior causal knowledge, unless data from close alternatives (i.e., structurally similar counterfactuals) provided counterevidence. There were also notable individual differences, such that some people privileged prior knowledge regardless of the available counterevidence or privileged far alternatives (i.e., structurally distinct counterfactuals), which may have implications for understanding public disagreement about policy issues.

You can’t trust an angry group: asymmetric evaluations of angry and surprised rhetoric affect confidence in trending opinions

Communication in groups allows social learners to influence one another and change their beliefs over time. Though some of the same heuristics that guide learners’ trust in individual informants can be applied to groups, variation in how individual beliefs are aggregated into a collective judgement can radically alter the accuracy of collective judgement. How do observers evaluate collective judgements? We present two experiments testing the impact of affective signals on observer trust. In each experiment, one faction “converts” group members from an opposing faction, or is converted by them. When the focal faction is surprised at the opposing view, observer trust in the focal faction’s belief rises or falls as consensus increases or decreases. When the focal faction is angry, observer trust falls when consensus decreases, but does not rise even when the “consensus” approaches unanimity. Affective signals in group interactions may help naive learners evaluate collective accuracy.

Children's Use of Causal Structure When Making Similarity Judgments

A deep understanding of any phenomenon requires knowing how its causal elements are related to one another. Here, we examine whether children recognize similar causal structures across superficially distinct events. We presented 4- to 7-year-olds with three-variable narratives in which story events unfold according to a causal chain or a common effect structure. We then asked children to make judgments about which stories are the most similar. Results indicate that the ability to recognize and use abstract causal structure as a metric of similarity develops gradually between the ages of 4 and 7: While we find no evidence that 4-year-olds recognize the common causal structure between events, 7-year-olds have a relatively mature understanding of causal system categories when making similarity judgements. Five- and 6-year-olds show mixed success. We discuss these findings in light of children’s developing causal and abstract reasoning and propose directions for future work.

The Role of Physical Inference in Pronoun Resolution

When do people use knowledge about the world in order to comprehend language? We asked whether pronoun resolution decisions are influenced by knowledge about physical plausibility. Results showed that referents which are more physically plausible in described events were more likely to be selected as antecedents of ambiguous pronouns, implying that resolution decisions were driven by physical inference. An alternative explanation is that these decisions were driven instead by distributional word knowledge. We tested this by including predictions of a statistical language model (BERT) and found that physical plausibility explained variance on top of the statistical language model predictions. This indicates that at least part of people's pronoun resolution judgments comes from knowledge about the world and not the word. This result constrains psycholinguistic models of comprehension—world knowledge must influence propositional interpretation—and raises the broader question of how non-linguistic physical inference processes are incorporated during comprehension.

Displacement and Evolution: A Neurocognitive and Comparative Perspective

By re-evaluating Crow (2000)’s claim that “Schizophrenia [is] the price that Homo sapiens pays for language”, we suggest that displacement, the ability to refer to things and situations beyond the here and now, partly realized through syntactic operations, could be related to the symptoms of schizophrenia. Mainly supported by episodic memory, displacement has been found in nonhuman animals, but in more limited form than in humans. As a conserved subcortical region, the hippocampus plays a key role in episodic memory across species. Evidence in humans suggests that the parietal lobe and basal ganglia are also involved in episodic memory. We propose that the greater development of human displacement could rely on better coordination among the hippocampus, parietal lobe, and basal ganglia. Given that all these areas take part in language processing, displacement could have served as an interface between episodic memory and language.

Tolerance to failure unleashes the benefits of cognitive diversity in collective problem solving

Collective problem solving is supposed to benefit from cognitive diversity (e.g., when a team consists of individuals with different learning strategies). However, recent evidence for this claim fails to rule out an alternative explanation: that the benefit is due to moderate non-conformity, not diversity. We extend a previous agent-based simulation to distinguish these hypotheses, and demonstrate that diverse learning strategies alone do not yield the expected benefit. We extend the model further, based on an idea from the philosophy of science: Group-level benefits in complex problem solving often entail individual-level failures. Accordingly, we parameterize tolerance for failure, and show that there is an interaction between tolerance for failure and diversity. When tolerance for failure is zero, heterogeneous and homogeneous groups perform equally well; when it is non-zero, diverse groups can outperform homogeneous groups. Our agent-based simulations help clarify when cognitive diversity benefits collective problem solving.

Cognitive and Cultural Diversity in Human Evolution

Most well-accepted models of cognitive evolution define the modern human mind in terms of an amalgamation of species-specific cognitive mechanisms, many of which are described as adaptive. Likewise, these models often use the rich archaeological record of Homo sapiens to illustrate how ‘uniquely human’ mental abilities gave our species an evolutionary advantage over extinct hominins. Recent evidence from various fields, however, indicates that closely related species, particularly Neanderthals and Denisovans, likely had cognitive capacities very similar to ours, and that several key aspects of ‘modern’ cognition are not exclusive to our lineage. The sum of these data therefore requires a timely revision of human cognitive evolution models. On the one hand, claims of species-specific cognitive mechanisms have been weakened. On the other hand, there are tangible differences among extinct and extant humans that call for an explanation. One way to accommodate these differences is to understand cognition as shaped by sociocultural and environmental factors, and to argue for culture-specific rather than species-specific cognition over the course of human evolution.

Mechanistic Learning Goals Enhance Elementary Student Understanding and Enjoyment of Heart Lessons.

Biologists, lay adults, and children alike value causal explanations of how biological entities work. Despite this, elementary school science education has historically lacked mechanistic content. In line with recent science education standards, we investigated the effects of mechanistic learning goals on understanding of an in-depth lesson about how the heart works. Children ages 6 to 11 who were given mechanistic learning goals performed better on knowledge assessments of the heart lesson and enjoyed their learning goal more than children who were given a relatively superficial learning goal—to focus on labels. Thus, learning goals orienting children towards mechanistic content during lessons enhance science learning and enjoyment.

Does Amy Know Ben Knows You Know Your Cards? A Computational Model of Higher-Order Epistemic Reasoning

Reasoning about what other people know is an important cognitive ability, known as epistemic reasoning, which has fascinated psychologists, economists, and logicians. In this paper, we propose a computational model of humans’ epistemic reasoning, including higher-order epistemic reasoning—reasoning about what one person knows about another person’s knowledge—that we test in an experiment using a deductive card game called “Aces and Eights”. Our starting point is the model of perfect higher-order epistemic reasoners given by the framework of dynamic epistemic logic. We modify this idealized model with bounds on the level of feasible epistemic reasoning and stochastic update of a player’s space of possibilities in response to new information. These modifications are crucial for explaining the variation in human performance across different participants and different games in the experiment. Our results demonstrate how research on epistemic logic and cognitive models can inform each other.

The Sensorimotor Dynamics of Joint Attention

Social interactions are composed of coordinated, multimodal behaviors with each individual taking turns and sharing attention. By the second year of life, infants are able to engage in coordinated interactions with their caregivers. Although research has focused on the social behaviors that enable parent-infant dyads to engage in joint attention, little work has been done to understand the sensorimotor mechanisms underlying coordination. Using wireless head-mounted eye trackers and motion sensing, we recorded 31 dyads as they played freely in a home-like laboratory. We identified moments of visual joint attention, when parent and infant were looking at the same object, and then measured the dyad’s head and hand movements during and around joint attention. We found evidence that both parents and infants still their bodies during joint attention. We also compared instances of joint attention that were led by the parent or by the infant and identified different sensorimotor pathways that support the two types of joint attention. These results provide the foundation for continued exploration of the critical role of sensorimotor processes on coordinated social behavior and its development.

Lies are crafted to the audience

Do people cater their lies to their own beliefs or others' beliefs? One dominant individual-based account considers lying to be an internal tradeoff between self-interest, norms, and morals. However, recent audience-based accounts suggest that lying behavior can be better explained within a communicative framework, wherein speakers consider others' beliefs to design plausible lies, highlighting the role of theory of mind in strategic lying. We tease apart these accounts by examining human lying behavior in a novel asymmetric, dyadic lying game in which speakers' beliefs differ from those they ascribe to their audience. We compare participants' average reported lie (controlling for the truth) across conditions that manipulated the player's and the audience's beliefs. We find that people spontaneously tune their lies to beliefs unique to their audience, more than to their own beliefs. These results support the audience-based account of lying: estimates of how listeners will respond determine how people decide to lie.

Selection, Engagement, & Enhancement: A Framework for Modeling Visual Attention

This paper presents a theoretical framework for modeling human visual attention. The framework's core claim is that three mechanisms drive attention: selection, which picks out an item for further processing; engagement, which tags a selected item as relevant or irrelevant to the current task; and enhancement, which increases sensitivity to task-relevant items and decreases sensitivity to task-irrelevant items. Building on these mechanisms, the framework is able to explain human performance on attentionally demanding tasks like visual search and multiple object tracking, and it supports a broad range of predictions about the interactions between such tasks.

Word Probability Re-Estimation Using Topic Modeling and Lexical Decision Data

Two assumptions of psycholinguistic research are that text corpora can be used as a proxy for the language that people have been exposed to and that the reaction time with which people recognize words decreases with the probability (or frequency) of the words in a corpus. We propose a method that produces topic-specific word probabilities from a text corpus using latent Dirichlet allocation, then combines them to fit lexical decision reaction times and re-estimates word probabilities. We evaluated how well lexical decision reaction times from an independent dataset could be predicted from the re-estimated word probabilities compared to the original probabilities. In a proof-of-concept experiment, the re-estimated word frequency model explained up to 9.6% of additional variability in reaction times at the group level and up to 2.9% at the level of individual participants.
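
The core re-estimation idea can be sketched in a few lines of Python. This is a minimal illustration, not the authors' pipeline: the toy corpus, the made-up reaction times, and the grid-search weighting step are all assumptions standing in for the paper's corpus, lexical decision data, and fitting procedure.

```python
# Minimal sketch: re-estimate word probabilities as a weighted mixture of LDA
# topic-word distributions, with mixture weights chosen to fit lexical decision
# reaction times. Corpus, RTs, and the fitting step are illustrative assumptions.
import numpy as np
from gensim import corpora, models

docs = [["the", "cat", "sat", "on", "the", "mat"],
        ["stocks", "fell", "as", "markets", "reacted"],
        ["the", "dog", "chased", "the", "cat"]]
dictionary = corpora.Dictionary(docs)
bow = [dictionary.doc2bow(d) for d in docs]
lda = models.LdaModel(bow, id2word=dictionary, num_topics=2, random_state=0)

# Topic-specific word probabilities: shape (num_topics, vocab_size)
phi = lda.get_topics()

# Hypothetical lexical decision data: mean RT (ms) for a few target words.
rt = {"cat": 540.0, "dog": 560.0, "stocks": 620.0}
idx = np.array([dictionary.token2id[w] for w in rt])
y = np.array(list(rt.values()))

# Fit topic weights so that -log(mixture probability) predicts RT
# (a simple grid search over the 2-topic simplex stands in for proper fitting).
best_w, best_err = None, np.inf
for w0 in np.linspace(0.01, 0.99, 99):
    weights = np.array([w0, 1.0 - w0])
    p = weights @ phi[:, idx]               # mixture word probabilities
    x = -np.log(p)
    beta = np.polyfit(x, y, 1)              # linear model: RT ~ -log p
    err = np.mean((np.polyval(beta, x) - y) ** 2)
    if err < best_err:
        best_w, best_err = weights, err

# Re-estimated probabilities for the whole vocabulary under the fitted weights.
p_reestimated = best_w @ phi
print(dict(zip(rt, p_reestimated[idx])))
```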

Inferring Actions, Intentions, and Causal Relations in a Deep Neural Network

From a young age, we can select actions to achieve desired goals, infer the goals of other agents, and learn causal relations in our environment through social interactions. Crucially, these abilities are productive and generative: we can impute desires to others that we have never held ourselves. These abilities are often captured by only partially overlapping models, each requiring substantial changes to fit combinations of abilities. Here, in an attempt to unify previous models, we present a neural network underpinned by the linearly solvable Markov Decision Process (LMDP) framework which permits a distributed representation of tasks. The network contains two pathways: one captures the desirability of states, and another encodes the passive dynamics of state transitions in the absence of control. Interactions between pathways are bound by a principle of rational action, enabling generative inference of actions, goals, and causal relations supported by gradient updates to parts of the network.
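
For readers unfamiliar with the LMDP framework the network builds on, the sketch below shows the two ingredients that the abstract's pathways correspond to, state costs (desirability) and passive dynamics, and how they combine linearly into optimal values and actions. The toy chain environment and parameter choices are assumptions for illustration only, not the paper's model.

```python
# Minimal sketch of the linearly solvable MDP (LMDP) machinery: one pathway
# supplies state costs q, the other the passive dynamics P; desirability and
# the optimal policy follow from a linear equation. Toy chain is an assumption.
import numpy as np

n = 5                                       # states 0..4; state 4 is an absorbing goal
q = np.array([1.0, 1.0, 1.0, 1.0, 0.0])     # state costs (reaching the goal is free)

# Passive (uncontrolled) dynamics: a random walk on the chain.
P = np.zeros((n, n))
for s in range(n - 1):
    P[s, max(s - 1, 0)] += 0.5
    P[s, s + 1] += 0.5
P[n - 1, n - 1] = 1.0                       # goal is absorbing

# Desirability z(s) = exp(-v(s)) satisfies z = exp(-q) * (P @ z) for non-goal
# states, with z(goal) = 1; solve by fixed-point iteration.
z = np.ones(n)
for _ in range(1000):
    z_new = np.exp(-q) * (P @ z)
    z_new[n - 1] = 1.0
    z = z_new

# Optimal controlled transition probabilities: u*(s'|s) proportional to P(s'|s) z(s').
u = P * z[None, :]
u /= u.sum(axis=1, keepdims=True)
print(np.round(z, 3))
print(np.round(u, 3))
```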

Categories affect color perception of only some simultaneously present objects

There is broad empirical evidence suggesting that higher-level cognitive processes, such as language, categorization, and emotion, shape human visual perception. For example, categories that we acquire throughout life have been found to alter our perceptual discriminations and distort perceptual processing. Here, we study categorical effects on perception by adapting the perceptual matching task to minimize potential non-perceptual influences on the results. We found that learned category-color associations bias human color matching judgments away from their category ideal on a color continuum. This effect, however, unequally biased the two objects (probe and manipulator) that were simultaneously present on the screen, thus demonstrating a more nuanced picture of top-down influences on perception than has been assumed both by theories of categorical perception and by the El Greco methodological fallacy. We suggest that only the concurrent memory for visually present objects is subject to a contrast-from-caricature distortion due to category-association learning.

Coin it up: Generalization of creative constructions in the wild

Language is inherently flexible: people continually generalize over observed data to produce creative linguistic expressions. This process is constrained by a wide range of factors, whose interaction is not fully understood. We present a novel study of the creative use of verb constructions “in the wild”, in a very large social media corpus. Our first experiment confirms, on this large-scale dataset, the important interaction of category variability and item similarity within creative extensions in actual language use. Our second experiment confirms the novel hypothesis that low-frequency exemplars may play a role in generalization by signaling the area of semantic space where creative coinages occur.

Social Media Spillover: Attitude-Inconsistent Tweets Reduce Memory for Subsequent Information

Social media users are generally exposed to information that is predominantly consistent with their attitudes and beliefs (i.e., filter bubbles), which can increase polarization and decrease understanding of complex and controversial topics. One potential approach to mitigating the negative consequences of filter bubbles is intentional exposure to information that is inconsistent with attitudes. However, it is unclear how exposure to attitude-inconsistent information in social media contexts influences memory for controversial information. To fill this gap, this study examines the effects of presenting participants (n = 96) with Twitter content on a controversial topic (i.e., labor unions) that was either pro-union, anti-union, or neutral. Participants then read a media article including both pro-union and anti-union information. Participants who saw Twitter content that was inconsistent with their prior attitudes regarding labor unions recalled less of the article content compared to those who saw Twitter content that was consistent with their prior attitudes. The findings suggest that Twitter users’ memory for information related to controversial topics may not benefit from exposure to messages outside their filter bubble.

Are Convolutional Neural Networks or Transformers more like human vision?

Modern machine learning models for computer vision exceed humans in accuracy on specific visual recognition tasks, notably on datasets like ImageNet. However, high accuracy can be achieved in many ways. The particular decision function found by a machine learning system is determined not only by the data to which the system is exposed, but also the inductive biases of the model, which are typically harder to characterize. In this work, we follow a recent trend of in-depth behavioral analyses of neural network models that go beyond accuracy as an evaluation metric by looking at patterns of errors. Our focus is on comparing a suite of standard Convolutional Neural Networks (CNNs) and a recently-proposed attention-based network, the Vision Transformer (ViT), which relaxes the translation-invariance constraint of CNNs and therefore represents a model with a weaker set of inductive biases. Attention-based networks have previously been shown to achieve higher accuracy than CNNs on vision tasks, and we demonstrate, using new metrics for examining error consistency with more granularity, that their errors are also more consistent with those of humans. These results have implications both for building more human-like vision models, as well as for understanding visual object recognition in humans.
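
One simple way to compare error patterns at the trial level is a kappa-style error-consistency measure: agreement on which trials two observers get right, corrected for the agreement expected from their accuracies alone. The sketch below uses made-up correctness vectors and does not reproduce the finer-grained metrics introduced in the paper.

```python
# Minimal sketch of a trial-level error-consistency measure; the paper's more
# granular metrics are not reproduced. The correctness vectors are made-up data.
import numpy as np

def error_consistency(correct_a, correct_b):
    """Kappa-style agreement in which trials two observers answer correctly,
    beyond the agreement expected from their overall accuracies."""
    a = np.asarray(correct_a, dtype=bool)
    b = np.asarray(correct_b, dtype=bool)
    c_obs = np.mean(a == b)                       # observed trial-wise agreement
    p_a, p_b = a.mean(), b.mean()                 # accuracies
    c_exp = p_a * p_b + (1 - p_a) * (1 - p_b)     # agreement expected by chance
    return (c_obs - c_exp) / (1 - c_exp)

rng = np.random.default_rng(0)
human = rng.random(500) < 0.85                    # hypothetical human correctness
cnn = rng.random(500) < 0.85                      # errors independent of the human
vit = np.where(rng.random(500) < 0.7, human, rng.random(500) < 0.85)  # partly shared errors

print("CNN vs. human:", round(error_consistency(cnn, human), 3))
print("ViT vs. human:", round(error_consistency(vit, human), 3))
```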

Additional acquisition sessions monotonically benefit retention and relearning

Spacing is a highly effective encoding strategy that has been shown to benefit memory in a variety of domains. Recent work has emphasized the evaluation of spaced practice under conditions that more closely reflect daily life. This work found that spacing repetitions over days, weeks, or months is effective over retention intervals as long as one year. One aspect of spaced study that has received less attention, however, is the relationship between the number of acquisition sessions and final retention. That is, if a student preparing for an exam plans to allocate 10 hours to preparing for that exam, is there an optimal, or perhaps minimal, number of study sessions that they should engage in to best leverage the benefits of spacing? In the present experiment we had participants complete 16 practice tests of Japanese-English pairs (e.g., boushi – hat). These practice tests were either all completed in one session, or distributed across two, three, or four sessions. These sessions were spaced either 1, 7, 30, or 90 days apart. Participants completed four test trials following a retention interval of 90 days, 180 days, or some variable length. Our results suggest that the number of acquisition sessions monotonically enhanced first-trial test performance as well as relearning, though evidence for enhanced relearning between one and two sessions was ambiguous. Unexpectedly, these monotonic trends were stable across practice lags and retention intervals. These findings suggest that, in addition to the temporal lag between practice episodes, the number of sessions over which one elects to distribute those episodes also has ramifications for long-term retention, and that each additional session yields meaningful benefits.

Adult Intuitions about Mechanistic Content in Elementary School Science Lessons

Elementary schools provide students with their first encounters with formal science, creating both foundations for students’ knowledge of science content, and impressions of what it means to learn science. Here, we examined adults’, including K-12 teachers’, intuitions regarding different types of content relevant to elementary school science, namely, labels, function, and mechanism. We focused primarily on perceptions of mechanistic explanations—causal explanations that explain how something works. This focus stems from children’s curiosity and aptitude for mechanism. Across four studies we predicted, and found, that participants deprioritize mechanistic explanations relative to more superficial explanation types: labels and function. This intuition, which appears to be reflected in formal science curricula, misestimates children’s abilities and attitudes towards mechanistic information.

Multimodal Behaviors from Children Elicit Parent Responses in Real-Time Social Interaction

Social interactions serve as the primary training ground for many of a child’s positive cognitive and developmental abilities. Parent responsiveness has been identified as one key mechanism through which children gain more mature skills, but the question of how children elicit responses from their parents remains to be fully investigated. In this study, we utilized head-mounted eye trackers to track moment-by-moment shifts in gaze, manual action, and speech during parent-child toy play. This allowed us to identify the moments preceding a parent response and the type and timing of the parent’s response relative to the child’s behaviors. We found that child events of attention and action – where they were both touching and looking at a toy – were more successful in eliciting parent responses overall and in eliciting multimodal parent responses than events of just child look or child touch. The parent’s latency to respond to their child differed by event type and duration, suggesting that child behaviors influence parent responses. Implications and future directions are discussed.

Episodic memory demands modulate novel metaphor use during event narration

Metaphor is an important part of everyday thought and language. Although we are often not aware of metaphor in everyday speech, on occasion, a particularly creative or novel use of metaphor will make us pay attention. It has been hypothesized that one of the driving cognitive factors behind the use of novel metaphor is a need to describe a new reality (as opposed to a preexisting reality) that would otherwise be difficult to convey using conventionalized metaphor. Accordingly, novel metaphor use in everyday language may be more associated with episodic memory demands, in contrast to conventional metaphor, which is associated with semantic memory. To test this idea we analyzed novel metaphor use in the Hippocorpus, a corpus of more than 5000 recalled and imagined stories about memorable life events told from a first-person perspective. In this dataset, recalled events have been shown to rely on episodic memory to a greater extent than descriptions of imagined events (i.e., narrating an event as if it happened to you rather than describing an event that actually happened to you), which largely draw on semantic memory. We hypothesized that novel metaphor use during event narration should be modulated by the extent to which language users are able to draw on primary experience to describe events. We found that novel metaphor counts in recalled events were significantly higher than in imagined events. Importantly, we found that factors that influence the extent to which language users are able to draw on primary experience during event narration (i.e., openness to experience, similarity to one's own experience, and how memorable or important an event was) modulated novel metaphor use in different ways in imagined compared to recalled events. The work paves the way for using large-scale corpora to analyze underlying cognitive processes that modulate metaphorical language use.

Do You Know What I Know? Children Use Informants’ Beliefs About Their Abilities to Calibrate Choices During Pedagogy

Models of pedagogy highlight the reciprocal reasoning underlying learner-teacher interactions, including that learners' inferences should be shaped by what they believe a teacher knows about them. Yet, little is known about how this influences learning, despite the fact that even young children make rapid inferences about teaching from sparse data. In the current work, six- to eight-year-olds' performance on a picture-matching game was either overestimated, underestimated, or accurately represented by a confederate (the "Teacher"), who then presented three new matching games of varying assessed difficulty (too easy, too hard, just right). A simple model of this problem predicts that while children should follow the recommendation of an accurate Teacher, learners should choose easier games when the Teacher overestimated their abilities, and harder games when she underestimated them. Results from our experiment support these predictions, providing insight into children’s ability to consider teachers’ knowledge when learning from pedagogy.

How Goals Erase Framing Effects in Risky Decision Making

Evidence has shown that goals systematically change risk preferences in repeated decisions under risk. For instance, decision makers could aim to reach goals in a limited time, such as “making at least $1000 with ten stock investments within a year.” We test whether goal-based risky decisions differ when facing gains as compared to losses. More specifically, we examine the impact of outcome framing (gains vs. losses) and state framing (positive vs. negative resource states) on goal-based risky decisions. Our results (N=100) reveal no framing effects; instead, we find a consistently strong effect of the goal on risk preferences independent of framing. Computational modeling showed that a dynamic version of prospect theory, with a goal-dependent reference point, described 87% of participants best. This model treats outcomes as gains and losses depending on the state-goal distance. Our results show how goals can erase standard framing effects observed in risky choices without goals.
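
A minimal sketch of the modeling idea, assuming a standard prospect-theory value function and a reference point placed at the state-goal distance; the parameter values and the coding rule are illustrative assumptions, not the fitted model reported above.

```python
# Minimal sketch of a prospect-theory value function with a goal-dependent
# reference point: outcomes are coded as gains or losses relative to the amount
# still needed to reach the goal. Parameters (alpha, beta, lam) are illustrative.
def pt_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Standard prospect-theory value function (concave gains, convex weighted losses)."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** beta)

def goal_based_value(outcome, state, goal):
    """Code an outcome relative to the state-goal distance instead of zero."""
    reference = goal - state           # what is still needed to reach the goal
    return pt_value(outcome - reference)

# Example: with $800 in hand and a $1000 goal, a $150 payoff still falls short
# of the $200 needed and is therefore evaluated as a loss.
print(goal_based_value(outcome=150, state=800, goal=1000))   # negative (loss)
print(goal_based_value(outcome=250, state=800, goal=1000))   # positive (gain)
```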

Try smarter, not harder: Exploration and strategy diversity are related to infant persistence

Much research into persistence focuses on methods to increase trying without distinguishing whether persistence is rational. However, expectations of effort efficiency suggest that reducing effort in the face of repeated failure is logical. We performed archival behavioral coding to propose exploration as a rational means to extend persistence as new information is gained and the possibility of success is maintained. Infants were presented with an impossible task and exploratory behavior was classified. Infants decreased effort with increased experience failing, but persisted for longer when using several exploratory strategies and exploring for proportionally longer. These results confirm that infants are sensitive to the utility of their actions, and that exploration offers a means to persist even in the face of failure.

Let's talk (efficiently) about us: Person systems achieve near-optimal compression

Systems of personal pronouns (e.g., 'you' and 'I') vary widely across languages, but at the same time not all possible systems are attested. Linguistic theories have generally accounted for this in terms of strong grammatical constraints, but recent experimental work challenges this view. Here, we take a novel approach to understanding personal pronoun systems by invoking a recent information-theoretic framework for semantic systems that predicts that languages efficiently compress meanings into forms. We find that a test set of cross-linguistically attested personal pronoun systems achieves near-optimal compression, supporting the hypothesis that efficient compression shapes semantic systems. Further, our best-fitting model includes an egocentric bias that favors a salient speaker representation, accounting for a well-known typological generalization of person systems ('Zwicky's Generalization') without the need for a hard grammatical constraint.
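
The trade-off at issue can be illustrated with a toy calculation (not the paper's analysis): for a deterministic person system, complexity can be measured as the mutual information between meanings and forms, and communicative cost as a literal listener's expected surprisal about the intended meaning. The meanings, the example partition, and the need prior below are all assumptions.

```python
# Toy illustration of the complexity / informativeness trade-off for a
# deterministic person-pronoun system. Meanings, partition, and prior are assumptions.
import numpy as np

meanings = ["1sg", "2sg", "3sg", "1pl", "2pl", "3pl"]
prior = np.array([0.3, 0.2, 0.2, 0.1, 0.1, 0.1])   # hypothetical need probabilities

# An English-like partition: number is only partially marked.
system = {"1sg": "I", "2sg": "you", "2pl": "you",
          "3sg": "s/he", "1pl": "we", "3pl": "they"}

def complexity_and_cost(system, meanings, prior):
    words = sorted(set(system.values()))
    p_w = {w: sum(p for m, p in zip(meanings, prior) if system[m] == w) for w in words}
    # Complexity: I(M;W); with a deterministic encoder this equals H(W).
    complexity = -sum(p * np.log2(p) for p in p_w.values() if p > 0)
    # Communicative cost: expected surprisal of the meaning for a literal
    # Bayesian listener, -E[log2 p(m | w)].
    cost = 0.0
    for m, p in zip(meanings, prior):
        w = system[m]
        cost += p * -np.log2(p / p_w[w])
    return complexity, cost

print(complexity_and_cost(system, meanings, prior))
```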

Listeners evaluate native and non-native speakers differently (but not in the way you think)

Speaking in a foreign accent has often been thought to carry many disadvantages. Here we probe the social evaluation of foreign-accented vs. native speakers using spoken utterances that either obey or violate the pragmatic principle of Informativeness. We show that listeners form different impressions of native and non-native speakers with identical pragmatic behavior: specifically, in contexts where violations of Informativeness can be detrimental to or misleading for the listener, people rated underinformative speakers more negatively on trustworthiness and interpersonal appeal compared to informative speakers, but this tendency was mitigated in some cases for speakers with foreign accents. Furthermore, this mitigating effect was strongest for less proficient non-native speakers who were presumably not fully responsible for their linguistic choices. Contrary to previous studies, we also find no consistent global bias against non-native speakers. Thus the fact that non-native speakers have imperfect control of the linguistic signal affects pragmatic inferences and social evaluation in ways that can lead to surprising social benefits.

Perceptual and Memory Metacognition in Children

Confidence can be experienced for all kinds of decisions – evaluating the value of a piece of artwork, determining whether the lights are flickering, or remembering where we left our keys. These are fundamentally different kinds of decisions, but does that mean the confidence we feel is also fundamentally different for each one? Here, we test competing theories of domain-generality and domain-specificity in metacognitive ability by correlating individual differences in memory and perceptual confidence judgments in childhood. Children performed a recognition memory task and an area discrimination task followed by confidence judgments. Using 4 measures of metacognitive ability (indicated by higher confidence for accurate compared to inaccurate judgments: difference scores, meta-d’, MRatio, and HMeta-d’), we find no significant correlations between this ability in memory and perceptual tasks. These findings support an account of domain-specificity in children’s metacognitive abilities.

The Burgeoning Reach of Social Learning and Culture in Animals’ Lives

Culture – the totality of traditions acquired in a community by social learning from others – has increasingly been found to be pervasive not only in humans but in many animals’ lives, with profound implications for comparative cognitive science as well as evolutionary biology, anthropology and conservation (Aplin, 2019; Brakes et al., 2019; Whiten, 2011; Whiten, 2017a; Whiten et al., 2017). Compared to individual learning, learning from experienced others can more safely and efficiently assimilate the wisdom already accumulated in those individuals.

A Layered Bridge from Sound to Meaning: Investigating Cross-linguistic Phonosemantic Correspondences

The present paper addresses the study of cross-linguistic phonosemantic correspondences within a deep learning framework. An LSTM-based Recurrent Neural Network is trained to associate the phonetic representation of a word, encoded as a sequence of feature vectors, to its corresponding semantic representation in a multilingual and cross-family vector space. The processing network is then tested, without further training, in a language that does not appear in the training set and belongs to a different language family. The performance of the model is evaluated through a comparison with a monolingual and mono-family upper bound and a randomized baseline. After the assessment of the network's performance, the distribution of phonosemantic properties in the lexicon is inspected in relation to different (psycho)linguistic variables, showing a link between lexical non-arbitrariness and semantic, syntactic, pragmatic, and developmental factors.

What is the Cooperative Behavior of Moving in Shared Spaces?

The development of mobility technologies has led to the concept of shared spaces, in which mobility devices and pedestrians share a single common space. Compared to conventional separated spaces, cooperative behaviors are critical in shared spaces because all agents can move freely at their own speed and in their own directions with few constraints. An experiment was conducted using indices for one's own cost, others' benefit, and one's own loss to reveal the nature of the cooperative behaviors associated with moving. We found that when people are required to behave cooperatively, compared to when they are encouraged to behave without urgency, they frequently change their speed and direction both so as not to interrupt others and so as to reach their destinations more quickly. Therefore, it was concluded that both others' benefit and one's own benefit are critical for cooperative behaviors when moving in shared spaces.

‘What we ought to do is…’: Are we More Willing to Defer to Experts who Provide Descriptive Facts Than Those who Offer Prescriptive Advice?

A considerable amount of cognition is, in some way, social. Here we consider one example: our reliance upon experts for information about phenomena within a particular domain. Novices and experts share some knowledge within the domain in question, which is crucial for knowing when to seek expert advice and how to evaluate that advice. Just when we decide to relinquish our own knowledge or skill in deference to expertise remains an important question for cognitive scientists. Here we explored some conditions that might influence when we choose to defer to experts. In two experiments (N=570) we demonstrated that participants have a greater willingness to defer when experts provide descriptive information (i.e., facts) about their domain of expertise than when they provide prescriptive advice about what we ought to do with those facts. We interpret these results from the perspective that individuals exercise greater vigilance when given prescriptive advice in the form of normative statements. From this perspective, individuals feel threatened, and therefore are less deferential, when experts tell them what to do rather than share knowledge with them.

Domestic dogs’ gaze and behaviour in 2-alternative choice tasks

Species such as humans rely on their excellent visual abilities to perceive and navigate the world. Dogs have co-habited with humans for millennia, yet we know little about how they gather and use visual information to guide decision-making. Across five experiments, we presented pet dogs (N=49) with two foods of unequal value in a 2-alternative choice task, and measured whether dogs showed preferential gazing, and whether visual attention patterns were associated with item choice. Overall, dogs looked significantly longer at the preferred (high value) food over the low value alternative. There was also evidence of item-dependent predictive gaze—dogs looked proportionally longer at the item they subsequently chose. Surprisingly, dogs’ choice behavior was only slightly above chance, despite visual discrimination. These results suggest that dogs use visual information in the environment to inform their choice behavior, but that other factors may also contribute to their decision-making.

Modelling the development of counting with memory-augmented neural networks

Learning to count is an important example of the broader human capacity for systematic generalization, and the development of counting is often characterized by an inflection point when children rapidly acquire proficiency with the procedures that support this ability. We aimed to model this process by training a reinforcement learning agent to select N items from a binary vector when instructed (known as the give-N task). We found that a memory-augmented modular network architecture based on the recently proposed Emergent Symbol Binding Network (ESBN) exhibited an inflection during learning that resembled human development. This model was also capable of systematic extrapolation outside the range of its training set: for example, trained only to select between 1 and 10 items, it could succeed at selecting 11 to 15 items as long as it could make use of an arbitrary count sequence of at least that length. The close parallels to child development and the capacity for extrapolation suggest that our model could shed light on the emergence of systematicity in humans.
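
A give-N-style training environment is straightforward to specify. The sketch below is our own construction of the task described above (a binary item vector plus an instructed N), not the authors' code, and it omits the count-sequence input that the ESBN-based model exploits.

```python
# Minimal sketch of a give-N-style environment: the agent sees candidate items
# and an instruction N, and is rewarded for ending the episode with exactly N selected.
import numpy as np

class GiveN:
    """Give-N-style task: select exactly N items from a row of candidate items."""

    def __init__(self, n_slots=20):
        self.n_slots = n_slots

    def reset(self, n):
        self.target = n
        self.selected = np.zeros(self.n_slots, dtype=int)
        return self._obs()

    def _obs(self):
        # Observation: the current selection vector plus the instructed number.
        return {"items": self.selected.copy(), "instruction": self.target}

    def step(self, action):
        # Actions 0..n_slots-1 select an item; action == n_slots declares "done".
        if action < self.n_slots:
            self.selected[action] = 1
            return self._obs(), 0.0, False
        reward = 1.0 if self.selected.sum() == self.target else 0.0
        return self._obs(), reward, True

# Train on instructions 1..10; probe extrapolation with instructions 11..15.
env = GiveN()
obs = env.reset(n=3)
obs, reward, done = env.step(0)
```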

Maternal behaviors that mediate skill development in Sumatran orangutans

Although orangutans are closely related to humans, very little is known about their ontogenetic development. In particular, there is a lack of systematic research on the maternal behaviors that mediate skill development in early infancy. To address this topic, we conducted a longitudinal study in which a Sumatran orangutan (Pongo abelii) mother-infant dyad was systematically observed across 28 months, starting with the infant’s birth. Our data revealed several classes of maternal behavior that potentially influenced infant skill development. The timing of these behaviors was contingent upon infant competence level, as active interventions were intense during periods of skill acquisition. The same behaviors were flexibly deployed independent of whether the infant was in the process of acquiring foraging, locomotor or social skills. Our findings suggest that the maternal behaviors that mediate infant skill development in Sumatran orangutans have features reminiscent of human scaffolding, and raise questions about intentionality in such behaviors.

Language- and spatially-mediated attention in toddlers

Selective attention involves attending to task-relevant information and inhibiting task-irrelevant information. While spatial priming is known to efficiently shape selective attention, the nature of language-mediated effects on selective attention is not well-understood, particularly in young toddlers. We compare the impact of language-mediated and spatially-mediated attention in an eye-tracking paradigm in which two objects are presented in one of four possible locations and one of the objects is highlighted. The impact of labelling on attention orienting during a prime phase was tested in a subsequent probe phase, where either the identity, location or both were manipulated, and compared to the impact of spatial priming. To elucidate the role of development on these effects, the study was conducted with 18- and 26-month-old toddlers. The results revealed that both language-mediated and spatially-mediated priming lead to attention orienting during the probe phase: attended information during the prime phase facilitates attention during the probe phase while ignored features are inhibited. However, in contrast to spatially-mediated attention during the prime phase, language-mediated attention can override these inhibitory effects. The impact of language on overcoming inhibitory effects is particularly noteworthy in the older age group.

Solid ground makes solid understandings: does simple comparison pave the way for more complex comparisons?

In this experiment, we investigated the role of dimensional distinctiveness in the generalization of novel names for unfamiliar objects. In a comparison design, we manipulated the sequence of trial difficulty, starting either with more difficult trials or with easier trials. To achieve this, we manipulated the dimensional distinctiveness of the first comparison trials and of the later transfer trials. Results showed that high-distinctiveness (easy) stimuli increased children’s later performance in the low-distinctiveness (difficult) condition, whereas low-distinctiveness early training led to no later improvement in easier trials. Last, a correct answer on the first trial of the first learning part predicted the level of performance in the second learning part. We interpret these findings in terms of differential costs of comparison for varying levels of distinctiveness and level of abstraction from one condition to another.

Visual Analogy: Deep Learning Versus Compositional Models

Is analogical reasoning a task that must be learned to solve from scratch by applying deep learning models to massive numbers of reasoning problems? Or are analogies solved by computing similarities between structured representations of analogs? We address this question by comparing human performance on visual analogies created using images of familiar three-dimensional objects (cars and their subregions) with the performance of alternative computational models. Human reasoners achieved above-chance accuracy for all problem types, but made more errors in several conditions (e.g., when relevant subregions were occluded). We compared human performance to that of two recent deep learning models (Siamese Network and Relation Network) directly trained to solve these analogy problems, as well as to that of a compositional model that assesses relational similarity between part-based representations. The compositional model based on part representations, but not the deep learning models, generated qualitative performance similar to that of human reasoners.

Who’s Stopping You? – Using Microanalysis to Explore the Impact of Science Anxiety on Self-Regulated Learning Operations

Research shows that anxiety can disrupt learning processes, but few studies have examined anxiety’s relationships to online learning behaviors. This study considers the interplay between students’ anxiety about science and behavior within an online system designed to support self-regulated science inquiry. Using the searching, monitoring, assessing, rehearsing, and translating (SMART) classification schema for self-regulated learning (SRL), we leverage microanalysis of self-regulated behaviors to better understand how science anxiety inhibits (or supports) different learning operations. Specifically, we show that while science anxiety is positively associated with searching behaviors, it is negatively associated with monitoring behaviors, suggesting that anxious students may avoid evaluation, opting instead to compensate with information-seeking. These findings help us to better understand SRL processes and may also help us support anxious students in developing SRL strategies.

The impact of child-directed language on children’s lexical development

This study investigated (1) whether and how English caregivers adjust their speech (i.e., mean length of utterances, lexical diversity, lexical sophistication, sentence types, and deixis) according to different contexts, children’s knowledge, and age, and (2) which aspects of parental speech input predict children’s immediate learning of novel words as well as their vocabulary size. We studied a semi-naturalistic corpus, in which English caregivers talked to their children (3-4 years old) about toys that were present or absent, and known or unknown to the children. We found that caregivers flexibly adjusted various aspects of their speech to maintain an informative and engaging learning environment. Furthermore, we found that a rich lexicon and yes-no questions predict better immediate word learning, whereas caregivers' lexical diversity, lexical frequency, and use of yes-no questions are related to children’s general vocabulary size. In conclusion, higher-quality caregiver language predicts better immediate word learning and larger vocabulary size.

Illusory bimodality in repeated reconstructions of probability distributions

Probability density estimation is widely known as an ill-posed statistical problem whose solution depends on extra constraints. We investigated what prior beliefs people might have when learning an arbitrary probability distribution, especially whether the distribution is believed to be unimodal or multimodal. In each block of our experiments, participants repeatedly reconstructed a one-dimensional spatial distribution after observing every 60 new samples from the distribution. The probability distribution function (PDF) they reported on each trial was submitted to a spectral analysis, where the powers of the 1-cycle, 2-cycle, …, n-cycle components respectively indicate participants’ tendency to report unimodal, bimodal, …, n-modal distributions. In two experiments, the reported PDFs showed significant bimodality—the 2-cycle power was above chance and even larger than the 1-cycle power—not only for bimodal distributions, but also for a uniform distribution. Such illusory bimodality for the uniform distribution was first found when we used an adaptive procedure analogous to “human MCMC”, updating the generative distribution of samples from trial to trial to reinforce potential biases in the reported PDFs (Experiment 1). However, even when we fixed the generative distribution across trials (Experiment 2), the illusory bimodality did not vanish, and it was observed even before participants experienced any bimodal distributions in the experiment. We consider a few kernel density models and discuss further computational explanations (e.g., prior beliefs following a Chinese Restaurant Process) for this new phenomenon.
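
The spectral scoring of a reported PDF can be sketched with a plain discrete Fourier transform; the normalization choices, chance baselines, and statistical tests used in the experiments are omitted here, and the example PDFs are made up.

```python
# Minimal sketch of scoring a binned, reported PDF for unimodal vs. bimodal
# structure via its 1-cycle and 2-cycle spectral power over the response range.
import numpy as np

def cycle_powers(pdf, n_cycles=4):
    """Power of the 1-, 2-, ..., n-cycle components of a binned PDF."""
    pdf = np.asarray(pdf, dtype=float)
    pdf = pdf / pdf.sum()                     # normalize to a probability vector
    spectrum = np.fft.rfft(pdf - pdf.mean())  # remove the flat (0-cycle) component
    return np.abs(spectrum[1:n_cycles + 1]) ** 2

x = np.linspace(0, 1, 64, endpoint=False)
unimodal = np.exp(-0.5 * ((x - 0.5) / 0.1) ** 2)
bimodal = (np.exp(-0.5 * ((x - 0.25) / 0.08) ** 2) +
           np.exp(-0.5 * ((x - 0.75) / 0.08) ** 2))

print("unimodal powers:", np.round(cycle_powers(unimodal), 5))
print("bimodal  powers:", np.round(cycle_powers(bimodal), 5))
# A reported PDF counts as bimodal-leaning when its 2-cycle power exceeds its
# 1-cycle power; the abstract's illusory bimodality is this pattern appearing
# for uniform generative distributions.
```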

Studying science denial with a complex problem-solving task

Explanations of science denial rooted in individual cognition tend to focus on general trait-like factors such as cognitive style, conspiracist ideation or delusional ideation. However, we argue that this focus typically glosses over the concrete, mechanistic elements of belief formation, such as hypothesis generation, data gathering, or hypothesis evaluation. We show, empirically, that such elements predict variance in science denial not accounted for by cognitive style, even after accounting for social factors such as political ideology. We conclude that a cognitive account of science denial would benefit from the study of complex (i.e., open-ended, multi-stage) problem solving that incorporates these mechanistic elements.

Come Together: Integrating Perspective Taking and Perspectival Expressions

Conversational interaction involves integrating the perspectives of multiple interlocutors with varying knowledge and beliefs. An issue that has received little attention in cognitive modeling of pragmatics is how speakers deal with the choice of words like come that are inherently perspectival. How do such lexical perspectival items fit into a speaker's overall integration of conversational perspective? We present new experimental results on production of perspectival words, in which speakers have varying degrees of certainty about their addressee's perspective. We show that the Multiple Perspectives Model closely fits the empirical data, lending support to the hypothesis that use of perspectival words can be naturally accommodated as a type of conversational perspective taking.

Dynamics of Counterfactual Retrieval

People often think about counterfactual possibilities to an event and imagine how it could have been otherwise. The study of how this occurs is central to many areas of cognitive science, including decision making, social cognition, and causal judgment; however, modeling the memory processes at play in naturalistic counterfactual retrieval has been difficult. We use established memory models to evaluate and compare multiple mechanisms that could be involved in counterfactual retrieval. Our models are able to capture nuanced dynamics of retrieval (e.g. how retrieved counterfactuals cue subsequent counterfactuals), and can predict the effects of retrieval on evaluations and decisions. In doing so, we show how existing theories of counterfactual thinking can be combined with quantitative models of memory search to provide new insights about the formation and consequences of counterfactual thought.

Exemplar Account for Category Variability Effect: Single Category based Categorization

The category variability effect refers to the finding that an item midway between two categories is more similar to the low-variability category yet tends to be classified into the high-variability category, which challenges the exemplar model. We hypothesized, however, that this effect can result from the use of a single-category strategy in a binary categorization task, specifically when only the low-variability category is referenced for categorization. One experiment was conducted with a recognition task inserted into the categorization task to selectively deepen the processing of exemplars from the high-variability category, the low-variability category, or both categories. The results showed that the strongest category variability effect occurred when the low-variability category was emphasized in the recognition task. The exemplar model SD-GCM provided a good account of the category variability effect, with a large weight for the low-variability category and a small weight for the high-variability category, thereby supporting our hypothesis.
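
As a rough illustration of the single-category strategy the abstract describes (not the fitted SD-GCM itself), the sketch below classifies an item by comparing its summed exemplar similarity to the low-variability category against a criterion; all exemplar values and parameters are made up for the example.

```python
import numpy as np

def single_category_choice(x, low_var_exemplars, criterion=0.2, sensitivity=2.0):
    """Single-category strategy: only the low-variability category is referenced.

    The item is classified into the low-variability category if its summed
    similarity to that category's exemplars exceeds a criterion; otherwise it
    is assigned to the high-variability category by default. Because the
    low-variability exemplars are tightly clustered, a middle item falls below
    the criterion and ends up in the high-variability category even though it
    is physically closer to the low-variability exemplars.
    """
    sims = np.exp(-sensitivity * np.abs(x - np.asarray(low_var_exemplars)))
    return "low-variability" if sims.sum() > criterion else "high-variability"

low = [4.0, 4.5, 5.0]                     # tight, low-variability category
print(single_category_choice(4.6, low))   # well inside  -> "low-variability"
print(single_category_choice(6.5, low))   # middle item  -> "high-variability"
```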

Disgraced Professionals: Revelation of Immorality Decreases Evaluations of Professionals’ Competence and Contribution

Competence and morality are two of the most important dimensions in social evaluation. Recent studies have suggested the primacy of morality, showing that information about immorality of an ordinary target person decreases evaluation of their competence. We examined the effect of moral taint on multiple non-moral judgments: ratings of the competence, accomplishment, and contribution of fictitious professionals who were described as highly successful in various fields. Moral taint significantly decreased participants’ non-moral social evaluations of professionals regardless of their field. Mediation analyses showed that the negative impact of immoral character on competence judgments is more strongly mediated by the decrease in participants’ psychological involvement with the target, rather than a decrease in perceived social intelligence of the target. These findings suggest that motivation to distance oneself from immoral others plays a critical role in the revision of social evaluations.

How do the semantic properties of visual explanations guide causal inference?

What visualization strategies do people use to communicate abstract knowledge to others? We developed a drawing paradigm to elicit visual explanations about novel machines and obtained detailed annotations of the semantic information conveyed in each drawing. We found that these visual explanations contained: (1) greater emphasis on causally relevant parts of the machine, (2) less emphasis on structural features that were visually salient but causally irrelevant, and (3) more symbols, relative to baseline drawings intended only to communicate the machines' appearance. However, this overall pattern of emphasis did not necessarily improve naive viewers' ability to infer how to operate the machines, nor their ability to identify them, suggesting a potential mismatch between what people believe a visual explanation contains and what may be most useful. Taken together, our findings advance our understanding of how communicative goals constrain visual communication of abstract knowledge across behavioral contexts.

Individual vs. Joint Perception: a Pragmatic Model of Pointing as Smithian Helping

The simple gesture of pointing can greatly augment one’s ability to comprehend states of the world based on observations. It triggers additional inferences relevant to one’s task at hand. We model an agent’s update to its belief about the world based on individual observations using a partially observable Markov decision process (POMDP), a mainstream artificial intelligence (AI) model of how to act rationally according to beliefs formed through observation. On top of that, we model pointing as a communicative act between agents who have a mutual understanding that the pointed-to observation must be relevant and interpretable. Our model measures “relevance” by defining a Smithian Value of Information (SVI) as the improvement in the POMDP agent’s utility before versus after receiving the pointing. We model agents as calculating SVI using the cognitive theory of Smithian helping as a principle for coordinating separate beliefs for action prediction and action evaluation. We then import SVI into the rational speech act (RSA) framework as the utility function of an utterance. Together, these components yield a pragmatic model of pointing that allows for contextually flexible interpretations. We demonstrate the power of our Smithian pointing model by extending the Wumpus world, a classic AI task in which a hunter hunts a monster with only partial observability of the world. We add another agent as a guide who can help only by marking, or not marking, an observation already perceived by the hunter with a pointing gesture, without providing new observations or offering any instrumental help. Our results show that this severely limited and overloaded communication nevertheless significantly improves the hunters’ performance. The advantage of pointing is indeed due to a computation of relevance based on Smithian helping, as it disappears completely when the task is too difficult or too easy for the guide to help.
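
As a highly simplified, one-step illustration of the value-of-information idea behind SVI (not the paper's POMDP or its Smithian helping machinery), the sketch below measures how much the best attainable expected utility improves when a belief is updated on the pointed-to observation; the beliefs, utilities, and likelihoods are toy placeholders.

```python
import numpy as np

def expected_utilities(belief, utility):
    """Expected utility of each action under a belief over world states.

    belief: array of P(state); utility: matrix utility[action, state].
    """
    return utility @ belief

def smithian_value_of_information(belief, utility, likelihood):
    """Crude one-step proxy for SVI: improvement in the best attainable
    expected utility after a Bayesian update on the pointed-to observation.

    likelihood: array P(observation | state) for the observation pointed at.
    """
    posterior = belief * likelihood
    posterior = posterior / posterior.sum()
    before = expected_utilities(belief, utility).max()
    after = expected_utilities(posterior, utility).max()
    return after - before

# Toy example: two world states, two actions
belief = np.array([0.5, 0.5])                 # hunter's prior over states
utility = np.array([[ 1.0, -1.0],             # action 0 pays off in state 0
                    [-1.0,  1.0]])            # action 1 pays off in state 1
likelihood = np.array([0.9, 0.2])             # pointed observation favors state 0
print(smithian_value_of_information(belief, utility, likelihood))
```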

Biologically Plausible Spiking Neural Networks for Perceptual Filling-In

Visual perception begins with a low-level derivation of spatio-temporal edges and advances to a higher-level perception of filled surfaces. According to the isomorphic theory, this perceptual filling-in is governed by an activation spread across the retinotopic map, driven from edges to interiors. Here we propose two biologically plausible spiking neural networks that demonstrate perceptual filling-in by solving the Poisson equation. Each network exhibits a distinct dynamic and architecture and could be realized, and further integrated, in the brain.
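
To make the underlying computation concrete, the sketch below shows filling-in as iterative relaxation of the Laplace/Poisson equation over a toy retinotopic grid, with edge-driven cells clamped as boundary conditions; this illustrates the equation being solved, not the spiking implementations proposed in the paper.

```python
import numpy as np

def fill_in(edge_values, edge_mask, n_iters=2000):
    """Fill surface brightness from edge signals by Jacobi relaxation.

    Interior cells converge toward the average of their four neighbours
    (a discrete solution of the Laplace/Poisson equation); cells where
    edge_mask is True stay clamped to the edge-driven values, so activation
    effectively spreads from edges to interiors.
    """
    surface = np.where(edge_mask, edge_values, 0.0).astype(float)
    for _ in range(n_iters):
        neighbours = (np.roll(surface, 1, 0) + np.roll(surface, -1, 0) +
                      np.roll(surface, 1, 1) + np.roll(surface, -1, 1)) / 4.0
        surface = np.where(edge_mask, edge_values, neighbours)
    return surface

# Toy retinotopic map: a bright square outline whose interior gets filled in
grid = np.zeros((40, 40))
mask = np.zeros_like(grid, dtype=bool)
mask[10, 10:30] = mask[30, 10:30] = mask[10:31, 10] = mask[10:31, 30] = True
grid[mask] = 1.0
filled = fill_in(grid, mask)
print(filled[20, 20])   # interior value approaches the edge value of 1.0
```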

Is the emotional mapping of lines caused by the motion they imply?

Different patterns of lines can express different emotions, but the reason for this metaphor has not been fully revealed. Some studies speculate that it may be caused by the motion implied by the lines. To test this speculation, we conducted an experiment on the relationship between the emotional expression of lines and their implied motion. We created 87 different line patterns and visualized the motion implied by the lines as dynamic effects. Participants chose descriptors from a list of 29 emotion words for each sample. The results show that the test samples cover the classical two-dimensional emotional space well and that the emotional expression of lines is clearly related to their implied motion. The implied motion, compared with the static version, tends to shift toward positive valence and high arousal in the emotional space. In addition, the speed of the motion affects emotional arousal.

Local Sampling with Momentum Accounts for Human Random Sequence Generation

Many models of cognition assume that people can generate independent samples, yet people fail to do so in random generation tasks. One prominent explanation for this behavior is that people use learned schemas. Instead, we propose that deviations from randomness arise from people sampling locally rather than independently. To test these explanations, we taught people one- and two-dimensional arrangements of syllables and asked them to generate random sequences from them. Our results reproduce characteristic features of human random generation, such as a preference for adjacent items and an avoidance of repetitions, but we also find an effect of dimensionality on the patterns people produce. Furthermore, model comparisons revealed that local sampling accounted for participants' sequences better than a schema account. Finally, by evaluating the importance of each model's constituents, we show that the local sampling model proposed new states based on its current trajectory rather than on an inhibition-of-return-like principle.
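
As a toy illustration of the local-sampling-with-momentum idea (not the fitted model from the paper), the sketch below generates a "random" sequence by walking over a learned 1-D arrangement of syllables, biased to continue in its current direction; the syllables and parameters are placeholders.

```python
import random

def local_sample_with_momentum(items, n_steps=30, p_continue=0.7):
    """Generate a 'random' sequence by a local walk with momentum.

    Instead of sampling items independently, the next item is drawn from the
    neighbourhood of the current one, and the walk tends to keep moving in the
    same direction (momentum), producing runs of adjacent items and few
    immediate repetitions -- characteristic features of human random generation.
    """
    position = random.randrange(len(items))
    direction = random.choice([-1, 1])
    sequence = [items[position]]
    for _ in range(n_steps - 1):
        if random.random() > p_continue:
            direction = -direction              # occasionally reverse course
        position = (position + direction) % len(items)
        sequence.append(items[position])
    return sequence

syllables = ["ba", "do", "ki", "lu", "me", "no", "pi", "ru"]
print(local_sample_with_momentum(syllables))
```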

Moral Judgments and Triage Principles related to COVID-19 Pandemic

The present study explores moral judgment in COVID-19-related moral dilemmas involving the allocation of ventilators under conflicting allocation principles. Utilitarian triage criteria, such as the chance of recovery or longer life expectancy, are opposed to egalitarian procedures such as random allocation and ‘first come, first served’. In the first part of the experiment, participants are presented with three hypothetical situations in which two patients are admitted to a hospital in a critical state and need a ventilator, but only one is available. The patients' conditions are described, and several triage procedures are suggested and rated by participants. Separately, participants rated their agreement with several triage principles. The results show a clear preference for utilitarian allocation principles. The random allocation principle receives the lowest ratings. Agreement with ‘first come, first served’ correlates with belief-in-fate scores, hinting that the egalitarian nature of this principle is questionable.

A Mixture of Experts in Associative Generalization

After learning that one stimulus predicts an outcome (e.g., an aqua-colored rectangle leads to shock) and a very similar stimulus predicts no outcome (e.g., a slightly greener rectangle leads to no shock), some participants generalize the predictive relationship on the basis of physical similarity to the predictive stimulus, while others generalize on the basis of the relational difference between the two stimuli (e.g., “higher likelihood of shock for bluer stimuli”). To date, these individual differences in generalization rules have remained unexplored in associative learning. Here, we present evidence that a given individual simultaneously entertains belief in both “similarity” and “relational” rules, and generalizes using a mixture of these strategies. Using a “mixture of experts” modelling framework constrained by participants' self-reported rule beliefs, we show that considering multiple rules predicts generalization gradients better than a single rule, and that generalization behavior is better described as switching between, rather than averaging over, different rules.
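
The sketch below illustrates the contrast the abstract draws between averaging over and switching between a similarity-based and a relational generalization rule; the rule functions, stimulus dimension, and mixture weight are illustrative assumptions rather than the fitted model.

```python
import numpy as np

def similarity_rule(stimulus, cs_plus=0.4, width=0.1):
    """Generalize by physical similarity to the predictive stimulus (CS+)."""
    return np.exp(-((stimulus - cs_plus) ** 2) / (2 * width ** 2))

def relational_rule(stimulus, cs_plus=0.4, cs_minus=0.5):
    """Generalize by the relational difference ('bluer than CS- means shock')."""
    return 1.0 / (1.0 + np.exp(20 * (stimulus - (cs_plus + cs_minus) / 2)))

stimuli = np.linspace(0.0, 1.0, 11)        # e.g., colour values from blue to green
w = 0.6                                    # belief in the similarity rule

# Averaging over rules: a single blended gradient
blended = w * similarity_rule(stimuli) + (1 - w) * relational_rule(stimuli)

# Switching between rules: each prediction governed by one rule, chosen with prob. w
rng = np.random.default_rng(0)
use_similarity = rng.random(stimuli.shape) < w
switched = np.where(use_similarity, similarity_rule(stimuli), relational_rule(stimuli))

print(np.round(blended, 2))
print(np.round(switched, 2))
```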

Complementary Structure-Learning Neural Networks for Relational Reasoning

The neural mechanisms supporting flexible relational inferences, especially in novel situations, are a major focus of current research. In the complementary learning systems framework, pattern separation in the hippocampus allows rapid learning in novel environments, while slower learning in neocortex accumulates small weight changes to extract systematic structure from well-learned environments. In this work, we adapt this framework to a task from a recent fMRI experiment where novel transitive inferences must be made according to implicit relational structure. We show that computational models capturing the basic cognitive properties of these two systems can explain relational transitive inferences in both familiar and novel environments, and reproduce key phenomena observed in the fMRI experiment.

Expectation Violation Leads to Generalization: The Effect of Prediction Error on the Acquisition of New Syntactic Structures

Prediction error is known to enhance priming effects for familiar syntactic structures; it also strengthens the formation of new declarative memories. Here, we investigate whether violating expectations may aid the acquisition of new abstract syntactic structures, too, by enhancing memory for individual instances which can then form the basis for abstraction. In a cross-situational artificial language learning paradigm, participants were exposed to novel syntactic structures in ways that either violated their expectations (Surprisal group) or that conformed to them (Control group). Results from a delayed post-test show that participants in the Surprisal group developed stronger representations of the structures’ form-meaning mappings and were better able to generalize them to new instances, relative to the Control group.

A model of selection history in visual attention

Attention can be biased by previous learning and experience. We present an algorithmic-level model of this bias in visual attention that predicts quantitatively how bottom-up, top-down, and selection-history signals compete to control attention. In the model, the output of saliency maps as bottom-up guidance interacts with a history map that encodes learning effects and with top-down task control to prioritize visual features. We test the model on a reaction-time (RT) data set from the experiment presented in Feldmann-Wustefeld, Uengoer, and Schubö (2015). The model accurately predicts parameters of reaction-time distributions from an integrated priority map that comprises an optimal, weighted combination of separate maps. Analysis of the weights confirms effects of learning history on attention guidance.
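
As a minimal sketch of the priority-map idea described here, the snippet below combines a bottom-up saliency map, a selection-history map, and a top-down task map as a weighted sum and reads out the winning location; the maps and weights are placeholders, not the fitted model parameters.

```python
import numpy as np

def priority_map(saliency, history, top_down, weights=(0.5, 0.3, 0.2)):
    """Weighted combination of bottom-up, history, and top-down maps."""
    w_s, w_h, w_t = weights
    return w_s * saliency + w_h * history + w_t * top_down

rng = np.random.default_rng(1)
shape = (20, 20)
saliency = rng.random(shape)          # bottom-up feature contrast
history = np.zeros(shape)
history[5, 5] = 1.0                   # a location selected/rewarded in the past
top_down = np.zeros(shape)
top_down[:, 10:] = 0.5                # task set favouring the right half

priority = priority_map(saliency, history, top_down)
attended = np.unravel_index(np.argmax(priority), shape)
print(attended)                       # location that wins the competition for attention
```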

Electrophysiological signatures of multimodal comprehension in second language

Language is multimodal: non-linguistic cues, such as prosody, gestures, and mouth movements, are always present in face-to-face communication and interact to support processing. In this paper, we ask whether and how multimodal cues affect L2 processing by recording EEG from highly proficient bilinguals watching naturalistic materials. For each word, we quantified surprisal and the informativeness of prosody, gestures, and mouth movements. We found that each cue modulates the N400: prosodic accentuation, meaningful gestures, and informative mouth movements all reduce the N400. Further, the effects of meaningful gestures, but not of mouth informativeness, are enhanced by prosodic accentuation, whereas the effects of mouth movements are enhanced by meaningful gestures but reduced by beat gestures. Compared with L1 comprehenders, L2 participants benefit less from the cues and their interactions, except for meaningful gestures and mouth movements. Thus, in real-world language comprehension, L2 comprehenders use multimodal cues just as L1 comprehenders do, albeit to a lesser extent.

Seeing in the dark: Testing deep neural network and analysis-by-synthesis accounts of 3D shape perception with highly degraded images

The visual system does not require extensive signal in its inputs to compute rich, three-dimensional (3D) shape percepts. Even under highly degraded stimulus conditions, we can accurately interpret images in terms of volumetric objects. What computations support such broad generalization in the visual system? To answer this question, we exploit two degraded image modalities – silhouettes and two-tone “Mooney” images – alongside regular shaded images. We test two distinct approaches to vision: deep networks for classification and analysis-by-synthesis for scene inference. Deep networks perform substantially below human level even after training on 18 times more images per category than existing large-scale image sets for object classification. We also present a novel analysis-by-synthesis architecture that infers 3D scenes from images via optimization in a differentiable, physically based renderer. This model also performs substantially below human level. Nevertheless, both approaches can explain some of the key behavioral patterns. We discuss the insights these results provide for reverse-engineering visual cognition.

Rhythmic behaviors in chimpanzees: range, functional contexts, sex differences and emotional correlates

There have recently been multiple calls to investigate the rhythmic behaviors (RBs) of non-human animals as a way to gain insight into the evolution of human rhythm cognition and musicality. Currently, empirical data from non-human species are scarce. Most strikingly, we lack data from chimpanzees (Pan troglodytes), our closest genetic relatives. Here, we present an observational study conducted at three sites (N=41), in which we systematically documented RBs in chimpanzees, with a particular focus on functional contexts, sex differences, and emotional correlates. We found that RBs were frequent in chimpanzees, occurred primarily in social contexts, and often had social consequences. RBs were not exclusively associated with high arousal or playfulness. RBs were more frequent in males than females, but sex did not affect their social efficacy. Our findings are consistent with social theories of the evolution of musicality, but also highlight a role for RBs in chimpanzee inter-sexual communication.

Loopholes, a Window into Value Alignment and the Learning of Meaning

Finding and exploiting loopholes is a familiar facet of fable, law, and everyday life. But cognitive, computational, and empirical work on this behavior remains scarce. Engaging with loopholes requires a nuanced understanding of goals, social ambiguity, and value alignment. We trace loophole behavior to early childhood, and we propose that exploiting loopholes results from a conflict in actors' goals combined with a pressure to cooperate. A survey of 260 parents reporting on 425 children reveals that loophole behavior is prevalent, frequent, and diverse in daily parent-child interactions, emerging around ages five to six and tapering off from around ages nine to ten into adolescence. A further experiment shows that adults consider loophole behavior in children as less costly than non-compliance, and children increasingly differentiate loophole behavior from non-compliance from ages four to ten. We discuss limitations of the current work together with a proposal for a formal framework for loophole behavior.

What is a 'mechanism'? A distinction between two sub-types of mechanistic explanations

Mechanistic explanations reveal the rich causal structure of the world we inhabit. For instance, an explanation like “A clock ticks because an internal motor turns a gear which moves the hands” explains a feature of the clock (i.e., the fact that it ticks) by describing the parts and actions that cause it. People often seek out such explanations, as they may be particularly valuable to understanding the world. However, are mechanistic explanations truly a single class of explanation? Here, we distinguish between two subtypes of mechanism: constitutive and etiological. We argue that this distinction, long made by philosophers of science, has cognitive consequences: People treat these two kinds of explanation differently and prefer one kind over the other. We discuss implications for understanding mechanism and for explanation preferences more broadly.

Lions, tigers and bears: Conveying a superordinate category without a superordinate label

We asked whether categories expressed through lists of salient exemplars (e.g., car, truck, boat, etc.) convey the same meaning as categories expressed through conventional superordinate nouns (e.g., vehicles). We asked English speakers to list category members, with one group given superordinate labels like vehicles and the other group given only a list of salient exemplars. We found that the responses of the group given labels were more related, more typical, and less diverse than the responses of the group given exemplars. This result suggests that when people do not see a superordinate label, the categories that they infer are less well aligned across participants. In addition, categories inferred based on exemplars may be broader in general than categories given by superordinate labels.

The role of causal models in evaluating simple and complex legal explanations

Despite the increase in studies investigating people’s explanatory preferences in the domains of psychology and philosophy, little is known about their preferences in more applied domains, such as the criminal justice system. We show that when people evaluate competing legal accounts of the same evidence that vary in complexity, their explanatory preferences are affected by: i) whether they are required to draw causal models of the evidence, and ii) the actual structure that is drawn. Although previous research has shown that people can reason correctly about causality, ours is one of the first studies that shows that generating and drawing causal models directly affects people’s evaluations of explanations.

A Longitudinal Study of Great Ape Cognition: Stability, Reliability and the Influence of Individual Characteristics

Primate cognition research allows us to reconstruct the evolution of human cognition. However, temporal and contextual factors that induce variation in cognitive studies with great apes are poorly understood. Here we report on a longitudinal study where we repeatedly tested a comparatively large sample of great apes (N = 40) with the same set of cognitive measures. We investigated the stability of group-level results, the reliability of individual differences, and the relation between cognitive performance and individual-level characteristics. We found results to be relatively stable on a group level. Some, but not all, tasks showed acceptable levels of reliability. Cognitive performance across tasks was not systematically related to any particular individual-level predictor. This study highlights the importance of methodological considerations – especially when studying individual differences – on the route to building a more robust science of primate cognitive evolution.

Explore, Exploit, Create: Inventing goals in play

Recent models of children’s exploratory play assume, either implicitly or explicitly, that the goal of exploration is to acquire an accurate representation of the world. Under this “play as rational exploration” view, actions are motivated by the value of information or other extrinsically defined rewards; however, this fails to explain the richness and variety observed in children’s play. We propose instead that distinctively human play is often characterized not primarily by the pursuit of external goals but by the creation of new goals. Using a novel free play paradigm, we find that both adults (N=140) and children (N=19, ages 3-8) invent a rich diversity of goals, take costly actions to pursue those goals despite receiving no external reward, and deploy rich planning strategies in the process.

Investigating cross-cultural differences in reasoning, vision, and social cognition through replication

People perceive, think, and act in a multitude of different ways across cultures, and there is an extensive history of research documenting these differences. At the heart of much of this work is a contrast between Western and East Asian cultures that has inspired important efforts to document human psychology in populations outside of the WEIRD (Western, Educated, Industrial, Rich, Democratic) demographic, which is much overrepresented within psychological research (Henrich, Heine, & Norenzayan, 2010). Recent recommendations for measuring cultural distance (Muthukrishna et al., 2020) profile the US and China as focal points for cultural comparisons, but define cultural distance using explicit self-report measures. Here, we evaluate cross-cultural differences between the US and China using implicit and experimental measures. We attempt to reproduce and test extensions of prior work demonstrating cross-cultural differences in reasoning, vision, and social cognition with convenience snowball samples of university students. Few of these differences appeared in our sample.

Combining rules and simulation to explain infant physical learning

Two very different kinds of views exist of how infants learn about physics: on the one hand, through domain-specific rules, on the other, through general purpose simulation. We attempt to reconcile these two views through a model that uses simulation to bootstrap rule learning. This model makes a variety of predictions about rapid concept acquisition in young infants which are consistent with experiments performed by developmental cognitive scientists. Consistent with the developmental literature, our model learns physical rules from just a few examples, but only when those examples are consistent with general physical principles. A model without simulation shows no such biases. Our approach provides a general mechanism for explaining how simulation and rule learning might bootstrap off of each other throughout development, and opens up a number of new questions about how children learn physical representations.

They is Changing: Pragmatic and Grammatical Factors that License Singular they

Singular they has become increasingly common as a personal pronoun of reference for non-binary individuals and in use with generic referents. While previous accounts of the licensing conditions of they are primarily syntactic, pragmatics may also play a role. By Maximize Presupposition (Heim 1991), speakers who use they rather than a more specific gender marked pronoun are potentially signaling that they do not know the antecedent’s gender or that it is not relevant to their current goals. This would predict that socially close referents would be less felicitous antecedents for they.  In this study, participants made judgments for nine types of antecedents. Gender marking, specificity, and social distance had reliable effects on acceptability.  In addition, cluster analyses indicated that participants naturally fell into three groups, which align with those predicted by Konnelly and Cowper (2020). Individuals who were younger, more open to non-binary gender, and had more experience with non-binary individuals accepted they in more situations.

"Fringe" beliefs aren't fringe

COVID-19 and the 2021 U.S. Capitol attacks have highlighted the potential dangers of pseudoscientific and conspiratorial belief adoption. Approaches to combating misinformed beliefs have tried to "pre-bunk" or "inoculate" people against misinformation adoption and have yielded only modest results. These approaches presume that some citizens may be more gullible than others and thus susceptible to multiple misinformed beliefs. We provide evidence of an alternative account: it's simply too hard for all people to be accurate in all domains of belief, but most individuals are trying. We collected data on a constellation of human beliefs across domains from more than 1,700 people on Amazon Mechanical Turk, and find misinformed beliefs to be broadly, but thinly, spread among the population. Further, we do not find that individuals who adopt one misinformed belief are more likely to engage in pseudoscientific or conspiratorial thinking across the board, in opposition to "slippery slope" notions of misinformation adoption.

Planning to plan: a Bayesian model for optimizing the depth of decision tree search

Planning, the process of evaluating the future consequences of actions, is typically formalized as search over a decision tree. This procedure increases expected rewards but is computationally expensive. Past attempts to understand how people mitigate the costs of planning have been guided by heuristics or the accumulation of prior experience, both of which are intractable in novel, high-complexity tasks. In this work, we propose a normative framework for optimizing the depth of tree search. Specifically, we model a metacognitive process via Bayesian inference to compute optimal planning depth. We show that our model makes sensible predictions over a range of parameters without relying on retrospection and that integrating past experiences into our model produces results that are consistent with the transition from goal-directed to habitual behavior over time and the uncertainty associated with prospective and retrospective estimates. Finally, we derive an online variant of our model that replicates these results.

Invariance of Information Seeking Across Reward Magnitudes

Most theoretical accounts of non-instrumental information seeking suggest that the magnitude of rewards has a direct influence on the attractiveness of the information. Specifically, the magnitude of rewards is assumed to be proportional to the strength of information seeking (or avoidant) behaviour. In a series of experiments using numerical and pictorial stimuli, we explore the extent to which observed information seeking behaviour tracks these predictions. Our findings indicate a robust independence of information seeking from outcome magnitude and valence with preferences for information largely remaining constant across different reward valence and magnitudes. We discuss these results in the context of current computational models with suggestions for future theoretical and empirical work.

Possibility judgments may depend on assessments of similarity to known events

We explore whether people’s judgments about the possibility of events are predicted by their knowledge of similar events. Participants read 80 events from a list that included ordinary, unusual, and impossible events. Participants rated whether the events were possible or whether the events were similar to events they knew to have happened. The averaged ratings for the two judgments were strongly correlated, and the correlation remained significant in an analysis limited to a subset of events that were viewed neither as totally impossible nor as extremely similar to known events. These findings provide preliminary evidence that adults may judge whether events are possible by relying on a memory-based heuristic that aims to identify whether these events are similar to known events.

Learning to communicate about shared procedural abstractions

Many real-world tasks require agents to coordinate their behavior to achieve shared goals. Successful collaboration requires not only adopting the same communicative conventions, but also grounding these conventions in the same task-appropriate conceptual abstractions. We investigate how humans use natural language to collaboratively solve physical assembly problems more effectively over time. Human participants were paired up in an online environment to reconstruct scenes containing two block towers. One participant could see the target towers, and sent assembly instructions for the other participant to reconstruct. Participants provided increasingly concise instructions across repeated attempts on each pair of towers, using more abstract referring expressions that captured each scene's hierarchical structure. To explain these findings, we extend recent probabilistic models of ad hoc convention formation with an explicit perceptual learning mechanism. These results shed light on the inductive biases that enable intelligent agents to coordinate upon shared procedural abstractions.

A bathtub by any other name: the reduction of German compounds in predictive contexts

The Uniform Information Density hypothesis (UID) predicts that lexical choice between long and short word forms depends on the predictability of the referent in context, and recent studies have shown such an effect of predictability on lexical choice during online production. We here set out to test whether the UID predictions hold up in a related setting, but different language (German) and different phenomenon, namely the choice between compounds (e.g. Badewanne / bathtub) or their base forms (Wanne / tub). Our study is consistent with the UID: we find that participants choose the shorter base form more often in predictive contexts, showing an active tendency to be information-theoretically efficient.

Using Recurrent Neural Networks to Understand Human Reward Learning

Computational models are highly useful in cognitive science for revealing the mechanisms of learning and decision making. However, it is hard to know whether all meaningful variance in behavior has been accounted for by the best-fitting model selected through model comparison. In this work, we propose using recurrent neural networks (RNNs) to assess the limits of predictability afforded by a model of behavior, and to reveal what (if anything) is missing from the cognitive models. We apply this approach to a complex reward-learning task with a large choice space and rich individual variability. The RNN models outperform the best known cognitive model throughout the entire learning phase. By analyzing and comparing model predictions, we show that the RNN models are more accurate at capturing the temporal dependency between subsequent choices, and better at identifying the subspace of choices where participants' behavior is more likely to reside. The RNNs can also capture individual differences across participants by utilizing an embedding. The usefulness of this approach suggests promising applications of RNNs for predicting human behavior in complex cognitive tasks, in order to reveal cognitive mechanisms and their variability.
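
To give a sense of the general approach (not the architecture or task from the paper), the sketch below trains a small GRU to predict the next choice from the history of previous choices and rewards on synthetic "win-stay" data; its held-out predictive accuracy is the kind of yardstick against which a cognitive model could be compared. Everything here is a toy placeholder.

```python
import torch
import torch.nn as nn

class ChoicePredictor(nn.Module):
    """GRU mapping (previous choice, previous reward) histories to next-choice logits."""
    def __init__(self, n_choices, hidden_size=32):
        super().__init__()
        self.gru = nn.GRU(input_size=n_choices + 1, hidden_size=hidden_size,
                          batch_first=True)
        self.readout = nn.Linear(hidden_size, n_choices)

    def forward(self, prev_choices_onehot, prev_rewards):
        x = torch.cat([prev_choices_onehot, prev_rewards.unsqueeze(-1)], dim=-1)
        h, _ = self.gru(x)
        return self.readout(h)              # logits for the choice on each trial

# Toy data: 50 'participants', 100 trials, 4 options; behaviour follows win-stay
n_subj, n_trials, n_choices = 50, 100, 4
choices = torch.randint(n_choices, (n_subj, n_trials))
rewards = (torch.rand(n_subj, n_trials) < 0.5).float()
for t in range(1, n_trials):                # inject a win-stay regularity to be learned
    stay = rewards[:, t - 1] > 0
    choices[stay, t] = choices[stay, t - 1]

prev_onehot = nn.functional.one_hot(choices[:, :-1], n_choices).float()
prev_rewards, targets = rewards[:, :-1], choices[:, 1:]

model = ChoicePredictor(n_choices)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()
for epoch in range(200):
    logits = model(prev_onehot, prev_rewards)
    loss = loss_fn(logits.reshape(-1, n_choices), targets.reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
print(float(loss))   # training negative log-likelihood per trial
```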

Using the Interpolated Maze Task to Assess Incremental Processing in English Relative Clauses

In English, subject relative clauses (SRCs) are processed more quickly than object relative clauses (ORCs), but open questions remain about where in the clause the slowdown occurs. The surprisal theory of incremental processing, under which processing difficulty corresponds to probabilistic expectations about upcoming material, predicts that slowdown should occur immediately on the material that disambiguates the subject from the object relative clause. However, evidence from eye-tracking and self-paced reading studies suggests that slowdown occurs downstream of the RC-disambiguating material, on the relative clause verb. These methods, however, suffer from well-known spillover effects that make their results difficult to interpret. To address these issues, we introduce and deploy a novel variant of the Maze task for reading times (Forster, Guerrera, & Elliot, 2009), called the Interpolated Maze, in two English web-based experiments. In Experiment 1, we find that the locus of reading-time differences between SRCs and ORCs falls immediately on the disambiguating definite determiner. Experiment 2 provides a control, showing that ORCs are read more slowly than lexically matched, non-anomalous material. These results provide new evidence for the locus of processing difficulty in relative clauses and support the surprisal theory of incremental processing.

Variation in spatial concepts: Different frames of reference on different axes

The physical properties of space may be universal, but the way people conceptualize space is not. In some groups, people tend to use egocentric space (e.g. left, right) to encode the locations of objects, while in other groups, people encode the same spatial scene using allocentric space (e.g. upriver, downriver). These different spatial Frames of Reference (FoRs) characterize the way people talk about spatial relations and the way they think about them, even when they are not using language. Although spatial language and spatial thinking tend to covary, the root causes of this variation are unclear. Here we propose that this variation in FoR use reflects the spatial discriminability of the relevant spatial continua. In an initial test of this proposal in a group of indigenous Bolivians, we compared FoR use across spatial axes that are known to differ in discriminability. In two non-verbal tests, participants spontaneously used different FoRs on different spatial axes: On the lateral axis, where egocentric (left-right) discrimination is difficult, their behavior was predominantly allocentric; on the sagittal axis, where egocentric (front-back) discrimination is relatively easy, their behavior was predominantly egocentric. These findings support the spatial discriminability hypothesis, which may explain variation in spatial concepts not only across axes, but also across groups, between individuals, and over development.

Evaluating General versus Singular Causal Prevention

Most psychological studies have focused on how people reason about generative causation, in which a cause produces an effect. We study the prevention of effects at both the general and the singular level. A general prevention query might ask how strongly a vaccine is expected to reduce the risk of contracting COVID-19, whereas a singular prevention query might ask whether the absence of COVID-19 in a vaccinated person actually resulted from this person's vaccination. We propose a computational model of how knowledge about the general strength of a preventive cause can be used to assess whether a preventive link is instantiated in a singular case. We also discuss how psychological models of causal strength learning relate to mathematical models of vaccination efficacy used in medical research. The results of an experiment suggest that many, but not all, people differentiate between preventive-strength and singular-prevention queries, in line with the formal model.

Risk-taking in adversarial games: What can 1 billion online chess games tell us?

Humans are social beings, and most of our decisions are influenced by considerations of how others will respond. Whether in poker or political negotiations, the riskiness of a decision is often determined by the variance of the other party's possible responses. Such socially-contingent decisions can be framed in terms of adversarial games, which differ from other risky situations such as lotteries because the risk arises from uncertainty about the opponent's decisions, and not some independent stochasticity in the world. We use chess as a lens through which we can study human risk-taking behavior in adversarial decision making. We develop a novel algorithm for calculating the riskiness of each move in a chess game, and apply it to data from over 1 billion online chess games. We find that players not only exhibit state-dependent risk preferences, but also change their risk-taking strategy depending on their opponent, and that this effect differs in experts and novices.
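
The abstract does not spell out its riskiness algorithm, so the sketch below shows only one simple way such a quantity could be operationalized: the spread of position evaluations over the opponent's plausible replies to a candidate move. The evaluation numbers are invented for illustration and this is not claimed to be the authors' measure.

```python
import statistics

def move_riskiness(reply_evaluations):
    """Riskiness of a candidate move as the spread of evaluations of the
    opponent's plausible replies (in pawn units, from the mover's perspective).

    A safe move leaves similar evaluations no matter how the opponent responds;
    a risky move leads to positions whose value swings widely with the reply.
    """
    return statistics.pstdev(reply_evaluations)

# Toy numbers: evaluations of the position after each plausible opponent reply
solid_move = [0.3, 0.2, 0.4, 0.3]        # quiet consolidating move
sharp_move = [2.5, -1.8, 0.1, 3.0]       # speculative sacrifice
print(move_riskiness(solid_move), move_riskiness(sharp_move))
```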

Modeling Question Asking Using Neural Program Generation

People ask questions that are far richer, more informative, and more creative than current AI systems. We propose a neuro-symbolic framework for modeling human question asking, which represents questions as formal programs and generates programs with an encoder-decoder based deep neural network. From extensive experiments using an information-search game, we show that our method can predict which questions humans are likely to ask in unconstrained settings. We also propose a novel grammar-based question generation framework trained with reinforcement learning, which is able to generate creative questions without supervised human data.

Chaining and the formation of spatial semantic categories in childhood

Children face the problem of extending a limited spatial lexicon to potentially infinite spatial situations. Previous work has examined how spatial semantic categories may be formed in child development, but it is unclear how children extend these categories to novel situations over the developmental time course. Drawing on cognitive linguistic theories of category extension, we present a framework that models the incremental extension of spatial relational words to novel situations through time. We describe a longitudinal dataset and computational analyses for investigating the extension of spatial word meanings in a developmental setting. Our preliminary results suggest that the formation of spatial categories takes place through an exemplar-based process of chaining, similar to the process underlying the growth of linguistic categories in history. Our work offers opportunities to explore the connection between ontogeny and phylogeny in the process of word meaning extension.

Cognitive Strategies for Parameter Estimation in Model Exploration

Virtual laboratories that enable novice scientists to construct, evaluate and revise models of complex systems heavily involve parameter estimation tasks. We seek to understand novice strategies for parameter estimation in model exploration to design better cognitive supports for them. We conducted a study of 50 college students for a parameter estimation task in exploring an ecological model. We identified three types of behavioral patterns and their underlying cognitive strategies. Specifically, the students used systematic search, problem decomposition and reduction, and global search followed by local search as their cognitive strategies.

Limits to Early Mental State Reasoning: Fourteen- to 15-Month-Old Infants Appreciate Whether Others Can See Objects, But Not Others’ Experiences of Objects

Research provides evidence that infants infer what others can and cannot see from their differing perspectives, but do infants appreciate that their own perspective on an object can differ from that of a person who views the same object from a different direction? First, infants were shown two faces with screens in front of or behind them. Infants correctly inferred that a face that was visible to them was occluded for a person sitting across from them. Two further experiments then presented infants with two unscreened faces: one upright and one inverted. Infants attributed their own perspective on those faces to the other person: they did not appreciate that faces that were upright for them were inverted for the other person. Thus, infants appreciate that others may see things that they do not, but they fail to grasp that others may experience the same visible objects differently than they do.

Developmental Change in What Elicits Curiosity

Across the lifespan, humans direct their learning towards information they are curious to know. However, it is unclear what elicits curiosity, and whether and how this changes across development. Is curiosity triggered by surprise and uncertainty, as prior research suggests, or by expected learning, which is often confounded with these features? In the present research, we use a Bayesian reinforcement learning model to quantify and disentangle surprise, uncertainty, and expected learning. We use the resulting model-estimated features to predict curiosity ratings from 5- to 9-year-olds and adults in an augmented multi-armed bandit task. Like adults’ curiosity, children’s curiosity was best predicted by expected learning. However, after accounting for expected learning, children (but not adults) were also more curious when uncertainty was higher and surprise lower. This research points to developmental changes in what elicits curiosity and calls for a reexamination of research that confounds these elicitors.

An Observer-oriented Theory of Creativity and Aesthetic Measure

As a step toward computational modeling of cognitive processes and the interpretation of creativity, we present an extended theory of computational creative systems that aims to better define the evaluation of creative systems and processes. Previous research has focused largely on the generation aspect of creative systems. This paper extends this formalism by modeling the judgment of creativity through computational models of observer systems. As a concrete illustration of its applicability, we show how this theory can support the interpretation of MacGyver problems, defined in the cognitive systems research community as classical planning problems designed to elucidate the cognitive process of creative problem-solving. Finally, we demonstrate how this theory provides an interpretation of previous empirical work on approximating an aesthetic measure in artistic domains for individual and group preferences over photographs and gestural performances. Our overall contributions are: (a) an initial formal model of creativity that incorporates a generative process and a given observer, and (b) an application of the theory to the interpretation of results from both symbolic (classical planning) and statistical (preference learning) approaches.

Distributional learning of recursive structures

Languages differ regarding the depth, structure, and syntactic domains of recursive structures. Even within a language, some structures allow infinite self-embedding while others are more restricted. For example, English allows infinite free embedding of the prenominal genitive -s, whereas the postnominal genitive of is largely restricted to one level and to a limited set of items. Therefore, speakers need to learn from experience which specific structures allow free embedding and which do not. One effort to account for the underlying learning mechanism, the distributional learning proposal, suggests that recursion of a structure (e.g., X1’s X2) is licensed if the X1 and X2 positions are productively substitutable in the input. A series of corpus studies has confirmed the availability of such distributional cues in child-directed speech. The present study further tests the distributional learning proposal with an artificial language learning experiment.

White Matter Tract Properties and Mathematics Skills: A Longitudinal Study of Children Born Preterm and Full-term

Children born preterm are at increased risk for white matter injury, impaired cognitive development and lower academic achievement. Here, we examined the association between fractional anisotropy and volume of select white matter tracts at age 5 with mathematics skills at age 7 in children born preterm (<33 weeks gestational age, n=52) without severe neurological complications and children born full-term (38-41 weeks gestational age, n=34). The preterm group had significantly lower mathematics scores and lower volume in several white matter tracts. Using multiple linear regression models, we examined white matter tracts that have previously been associated with mathematical cognition. We found a significant interaction with term status: fractional anisotropy of the corticospinal tract, and volume of corticospinal tract and parietal superior longitudinal fasciculus were significantly associated with mathematics skills in children born full-term, but not in children born preterm. These findings indicate white matter plasticity following preterm birth.

Cognitive Linguistics Support for the Evolution of Language from Animal Cognition

This paper explores previous arguments that language evolved not from animal communication, as most scholars naturally assume, but instead out of animal cognition. It is proposed here that additional support for this argument comes from Cognitive Linguistics, an interdisciplinary branch of linguistics. Cognitive Linguistics bridges communication and cognition and, like Cognitive Discourse Analysis, studies language use in terms of what it demonstrates about underlying cognitive processes and concepts. The paper presents key examples of animal cognition’s links to language, with support from Cognitive Linguistics, as well as the approach’s application to animal cognition in terms of domain-general symbolism beyond verbal language. However, because communication remains a major function of language, the communicative aspect ought to be maintained in the explanation of language evolution.

A computational evaluation of gender asymmetry in semantic change

A fundamental goal in cognitive and historical linguistic research on semantic change is to characterize the regularity in how word meanings change over time. We examine a common belief that has not yet been evaluated comprehensively, which asserts that gender of a word influences its direction of semantic change. By this account, female terms like mistress should undergo pejorative change in meaning systematically more so than male terms like master. We evaluate this claim in gender-marked word pairs in English and French respectively as languages without and with grammatical gender. Our results provide supporting evidence for gender asymmetry in semantic change of English words but not French words. Our study raises questions about the generality of the claim about gender asymmetry in semantic change and provides a scalable computational framework for understanding the social roots of word meaning change.

Statistical Power in Response Signal Paradigm Experiments

The speed-accuracy tradeoff (SAT) method has produced several prominent findings in sentence processing. While a substantial number of SAT studies have yielded statistical null results regarding the degree to which certain factors influence the speed of sentence-processing operations, the statistical power of the SAT paradigm is not known. As a result, it is not entirely clear how to interpret these findings. We addressed this problem by means of a simulation study in which we simulated SAT experiments for a range of known effect sizes in order to determine the statistical power of typical SAT experiments. We found that while SAT experiments appear to have quite satisfactory power to detect differences in asymptotic accuracy, this is not the case for speed-related parameters. We conclude that the failure to find an effect in speed-related parameters in SAT experiments may be less meaningful than previously thought.

The lure of the self: How we misattribute our lesser likes to the “other” in perspective-taking and decision-making

How do we represent other people? Our representations are prone to a wide range of biases. We project our mental states onto others (especially when we assume they are similar to us), or rely on existing stereotypes (when we think they are different). But sometimes it can be unclear how similar or different a person actually is from us. How does this affect how we represent their preferences? Here, subjects declared their favorite and least favorite colors and were introduced to another person whose preferences were neither completely similar nor completely dissimilar to their own. Across experiments, people successfully remembered the other person’s preferences, but they also tended to falsely ascribe their own lower-ranked preferences to the other player in multiple memory and decision-making tasks. These results suggest a tendency to distance ourselves from the preferences we identify with least in our hierarchy of preferences, and to associate them instead with the “other”.

Behavioral interference or facilitation does not distinguish between competitive and noncompetitive accounts of lexical selection in word production

One of the major debates in the field of word production is whether lexical selection is competitive or not. For nearly half a century, semantic interference effects in picture naming latencies have been claimed as evidence for competitive (relative threshold) models of lexical selection, while semantic facilitation effects have been claimed as evidence for non-competitive (simple threshold) models instead. In this paper, we use a computational modeling approach to compare the consequences of competitive and noncompetitive selection algorithms for blocked cyclic picture naming latencies, combined with two approaches to representing taxonomic and thematic semantic features. We show that although our simple model can capture both semantic interference and facilitation, the presence or absence of competition in the selection mechanism is unrelated to the polarity of these semantic effects. These results question the validity of prior assumptions and offer new perspectives on the origins of interference and facilitation in language production.

Dimensions of Moral Status

In a recent theoretical paper, Birch, Schnell, and Clayton (2020) introduced a multidimensional framework of animal consciousness. In two online studies, we adopted their classification system and asked which of these dimensions contribute most to moral concern for non-human beings. Participants placed moral value on mental attributes more than on physical similarity to humans in biology, appearance, and size. Specifically, behavioural indications of rich and complex visual processing had strong effects on both consciousness ratings and moral concern, more so than indications of self-awareness. Furthermore, moral worth was highly correlated with consciousness ratings, across both items and participants. We discuss our findings in light of the philosophical debate over the moral significance of functional aspects of consciousness (Carruthers, 2019; Danaher, 2020; Levy, 2014), and in relation to the relevance of the scientific study of consciousness to ethics.

Vector Autoregression, Cross-Correlation, and Cross-Recurrence Quantification Analysis: A Case Study in Social Cohesion and Collective Action

As time series analysis continues to capture the interest of cognitive and behavioral researchers, it is increasingly important to evaluate these methods and compare their respective insights. Here, we evaluate three popular analyses: vector autoregression, cross-correlation, and cross-recurrence quantification analysis. Using social cohesion data derived from Twitter and daily counts of real-world events during the Arab Spring, we present a case study using these methods and evaluate their benefits, limitations, and differences in results. We propose that researchers interested in time series analysis consider these differences and use multiple methods to assure reliability of their results.
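
As a small illustration of how two of these analyses differ in what they extract from the same data, the sketch below computes lagged cross-correlations and fits a lag-1 vector autoregression by least squares on synthetic series standing in for the cohesion and event counts; all data and coefficients are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(42)
T = 200
cohesion = rng.normal(size=T)
events = np.empty(T)
events[0] = rng.normal()
for t in range(1, T):                      # events partly driven by yesterday's cohesion
    events[t] = 0.6 * cohesion[t - 1] + rng.normal(scale=0.5)

def lagged_cross_correlation(x, y, max_lag=5):
    """corr(x[t], y[t + lag]) for lags 0..max_lag (positive lag: x leads y)."""
    n = len(x)
    return {lag: np.corrcoef(x[:n - lag], y[lag:])[0, 1] for lag in range(max_lag + 1)}

def var1_coefficients(x, y):
    """Lag-1 VAR fit by ordinary least squares: [x_t, y_t] ~ A @ [x_{t-1}, y_{t-1}]."""
    past = np.column_stack([x[:-1], y[:-1]])
    present = np.column_stack([x[1:], y[1:]])
    A, *_ = np.linalg.lstsq(past, present, rcond=None)
    return A.T                              # row i: how series i depends on both pasts

print(lagged_cross_correlation(cohesion, events))
print(var1_coefficients(cohesion, events))
```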

Predicting the N400 ERP component using the Sentence Gestalt model trained on a large scale corpus

The N400 component of the event-related brain potential is widely used to investigate language and meaning processing. However, despite much research, the component’s functional basis remains actively debated. Recent work showed that the update of the predictive representation of sentence meaning (semantic update, or SU) generated by the Sentence Gestalt model (McClelland, St. John, & Taraban, 1989) consistently displayed a pattern similar to the N400 amplitude across a series of conditions known to modulate this event-related potential. These results led Rabovsky, Hansen, and McClelland (2018) to suggest that the N400 might reflect change in a probabilistic representation of meaning corresponding to an implicit semantic prediction error. However, a limitation of this work is that the model was trained on a small artificial corpus and thus could not be presented with the same naturalistic stimuli used in empirical experiments. In the present study, we overcome this limitation and directly model the amplitude of the N400 elicited during naturalistic sentence processing, using as a predictor the SU generated by a Sentence Gestalt model trained on a large corpus of texts. The results reported in this paper corroborate the hypothesis that the N400 component reflects the change in a probabilistic representation of meaning after every word presentation. Further analyses demonstrate that the SU of the Sentence Gestalt model and the amplitude of the N400 are influenced similarly by the stochastic and positional properties of the linguistic input.

Structural Comparisons of Noun and Verb Networks in the Mental Lexicon

Recent studies have applied network-based approaches to analyze the organization and retrieval of specific semantic categories, with a focus on the animal category. The current study extended previous work by using network science tools to quantitatively investigate the structural differences between noun and verb categories at various levels of specificity. Specific (animal and body movement) and general noun and verb networks were constructed from four verbal fluency tasks. Common network measures indicated that the two verb networks were more condensed and less modular than the noun networks, supporting the view that nouns are better organized in the mental lexicon than verbs. Comparing the specific and general networks within each lexical category also corroborated lexical semantic studies showing that nouns have a clearer hierarchical structure. The results of this paper, along with recent semantic network studies, provide converging evidence for the usefulness of network science in semantic memory research.
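
A toy sketch, with made-up edges rather than the fluency-derived networks, of the kind of measures reported: density (how condensed a network is) and modularity over detected communities, computed with networkx.

```python
# Toy sketch of the network measures discussed (density and modularity),
# computed with networkx on an invented fluency-style network.
import networkx as nx
from networkx.algorithms import community

# Hypothetical co-occurrence edges from a verbal fluency task.
edges = [("dog", "cat"), ("cat", "mouse"), ("dog", "wolf"),
         ("wolf", "fox"), ("eagle", "hawk"), ("hawk", "owl")]
G = nx.Graph(edges)

density = nx.density(G)                                   # how condensed the network is
partition = community.greedy_modularity_communities(G)    # detected communities
modularity = community.modularity(G, partition)

print(f"density = {density:.3f}, modularity = {modularity:.3f}, "
      f"communities = {[sorted(c) for c in partition]}")
```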

Do left-right and back-front mental timelines activate simultaneously?

We asked whether it is possible to simultaneously activate two timelines in the human mind. We hypothesized that the lateral (left-right) and sagittal (back-front) spatial dimensions can be coactivated and expected the congruent space-time mappings of each dimension (back-past front-future and left-past right-future), but not the non-coherent ones, to prime each other. Participants were asked to keep in mind the two spatial dimensions as discrete entities. Spanish speakers categorized the temporal reference of sentences by pressing a sagittal directional key with their left or right hand. Results suggest that (i) full congruence facilitates the spatial representation of time the most, (ii) full incongruence interferes the most with the spatial representation of time, and (iii) the two partial forms of congruence produce similar interference effects between the two spatial dimensions and time. The results were interpreted according to the Coherent Working Models approach.

Can Deep Convolutional Neural Networks Learn Same-Different Relations?

Same-different visual reasoning is a basic skill central to abstract combinatorial thought. This fact has led neural network researchers to test same-different classification on deep convolutional neural networks (DCNNs), which has resulted in a controversy regarding whether this skill is within the capacity of these models. However, most tests of same-different classification rely on test images that come from the same pixel-level distribution as the training images, rendering the results inconclusive. In this study we tested relational same-different reasoning in DCNNs. In a series of simulations we show that DCNNs are capable of visual same-different classification, but only when the test images are similar to the training images at the pixel level. In contrast, even when there are only subtle differences between the test and training images, the performance of DCNNs can drop to chance levels. This is true even when the DCNNs' training regime included a wide distribution of images or when they were trained in a multi-task setup in which training included an additional relational task with test images from the same pixel-level distribution.

Phonological Interactions, Process Types, and Minimum Description Length Principles

Learnability has been a topic of great interest in phonology. Of particular interest is the question of the relative learnability of process interactions. In both historical and experimental domains, researchers have noted that certain kinds of interactions are harder to learn than others. In both domains, however, the results are seemingly in conflict. One potential source of the conflicting outcomes is the types of processes involved. In this paper, we investigate the effect of process types on the learnability of different interaction types, using an ideal minimum-description-length (MDL) learner. We find that the model indeed predicts different learnability outcomes for each interaction type; however, the asymmetry is largely independent of the process type. This computational model explains certain elements of the historical findings as well as some of the experimental findings on the relative learnability of linguistic process interactions, while contradicting other behavioral findings.
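
A toy illustration of the MDL objective that such an ideal learner optimizes, not the learner used here: total description length is the cost of encoding the grammar plus the cost of encoding the data given the grammar, and the grammar with the smaller total is preferred. The grammar strings, alphabet size, and data probabilities below are invented.

```python
# Toy MDL scoring: total cost = bits to encode the grammar + bits to encode
# the data given the grammar. Grammars and probabilities here are made up.
import math

def description_length(grammar, p_data_given_grammar, bits_per_symbol=5):
    """Total MDL cost of a candidate grammar for a fixed corpus."""
    grammar_cost = len(grammar) * bits_per_symbol
    data_cost = -math.log2(p_data_given_grammar)
    return grammar_cost + data_cost

# Two invented candidate grammars: the longer one fits the corpus better
# (assigns it higher probability), so MDL weighs extra rules against extra fit.
candidates = {
    "single rule": ("V -> [+nasal] / _N", 2 ** -60),
    "two rules": ("V -> [+nasal] / _N ; k -> tS / _i", 2 ** -25),
}
scores = {name: description_length(g, p) for name, (g, p) in candidates.items()}
print(scores)
print("MDL prefers:", min(scores, key=scores.get))
```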

A Nonlinear Dynamical Systems Approach to Emotional Attractor States during Media Viewing

This study examined dynamic attractor states in skin conductance activity during resting baselines and media viewing in order to determine whether there are qualitatively distinct dynamics during information processing and whether those dynamics vary based on features of task stimuli. The results indicate that media viewing shifts one from a resting non-chaotic attractor to a chaotic attractor. Content valence (positive or negative) and the emotional context in which videos were delivered (presentation order) had a significant impact on the probability of exhibiting a chaotic attractor. Using the nonlinear dynamical systems approach, this study provides new insight into emotional information processing, the electrodermal system, and the relationship between physiological and emotional experiences.

Providing explanations shifts preschoolers’ metaphor preferences

In order to learn from metaphors, children must not only be able to understand metaphors, but also appreciate their relative informativeness. Although functional metaphors based on abstract commonalities (e.g. “Eyes are windows”) allow for more learning than perceptual metaphors based on superficial commonalities (e.g. “Eyes are buttons”), previous research shows that preschoolers prefer perceptual metaphors over functional metaphors. In the present studies, we ask whether providing additional context can shift metaphor preferences in preschoolers and adults. Experiment 1 finds that pedagogical context increases preferences for functional metaphors in adults, but not preschoolers. Experiment 2 finds that providing explanations for conceptual similarities in a metaphor increases preschoolers’ preferences for functional metaphors. These findings suggest that providing explanations allows even preschoolers to appreciate the informativeness of functional metaphors.

Goffin's cockatoos learn to discriminate objects based on weight alone in an object choice task

Paying attention to weight is important when deciding upon an object’s efficacy or value in various contexts (e.g. tool use, foraging). Proprioceptive discrimination learning, with objects that differ only in weight, has so far been investigated in a handful of primate species. Here we show that while Goffin’s cockatoos learn faster when additional colour cues are used, they can also quickly learn to discriminate between objects on the basis of their weight alone. Ultimately, the birds learned to discriminate between visually identical objects on the basis of weight much faster than primates, although methodological differences between tasks should be considered.

Predicting children's and adults' preferences in physical interactions via physics simulation

Curiosity is a fundamental driver of human behavior, and yet because of its open-ended nature and the wide variety of behaviors it inspires in different contexts, it is remarkably difficult to study in a laboratory context. A promising approach to developing and testing theories of curiosity is to instantiate them in artificial agents that are able to act and explore in a simulated environment, and then compare the behavior of these agents to humans exploring the same stimuli. Here we propose a new experimental paradigm for examining children's -- and AI agents' -- curiosity about objects' physical interactions. We let them choose which object to drop another object onto in order to create the most interesting effect. We compared adults' (N=155) and children's choices (N=66; 3-7 year-olds) and found that both children and adults show a strong preference for choosing target objects that could potentially contain the dropped object. Adults alone also make choices consistent with achieving support relations. We contextualize our results using heuristic computational models based on 3D physical simulations of the same scenarios judged by participants.

Children infer the behavioral contexts of unfamiliar foreign songs

Humans readily form musical inferences: upon hearing a Blackfoot lullaby, a Korean listener is far more likely to judge the music’s function as “to soothe a baby” than as “for dancing”. Are such inferences driven by experience, or does the mind naturally detect form-function links? We tested this in a large online sample of 2,418 children, who were played songs from 70 world cultures and guessed the original behavioral context. Results show that inferences were reliable, with practically no improvement in performance from the youngest (age 3) to the oldest (age 12) children. Moreover, their intuitions tightly correlated with adults’ intuitions about the same songs (N = 85,068). And both children’s and adults’ intuitions were predictable from a few key musical features of the songs. These results support the existence of universal links between form and function in music, and imply that sensitivity to these links is minimally, if at all, experience-dependent.

Automatic computation of navigational affordances explains selective processing of geometry in scene perception: behavioral and computational evidence

One of the more surprising findings in visual cognition is the apparent sparsity of our scene percepts. Yet, scene perception also enables planning and navigation, which require a detailed, structured analysis of the scene geometry, including exit locations and the obstacles along the way. Here, we hypothesize that the computation of navigational affordances (e.g., paths to an exit) is a “default” task in the mind, and that this task induces selective analysis of the scene geometry most relevant to computing these affordances. In an indoor scene setting, we show that observers more readily detect changes if these changes impact the shortest paths to visible exits. We show that behavioral detection rates are explained by a new model of attention that makes heterogeneous-precision inferences about the scene geometry, relative to how its different regions impact navigational affordance computation. This work provides a formal window into the contents of our scene percepts.

Utilizing Dynamic and Embodied Visualization to Facilitate Understanding of Normal Probability Distributions

Teachers often use drawings of the normal distribution to support explanations of related statistical concepts, assuming that the normal curve provides a common language for such discussions. However, we find that students may not understand the basic features of the normal curve. In Study 1, we showed that students who already have studied the normal distribution in a college-level class do not understand basic concepts associated with it. Then, in Study 2, we investigated whether a brief instructional, narrated video could improve students’ understanding of the normal probability distribution. Specifically, we compared three instructional formats: static slides, a video recording of a hand physically drawing those plots, and a screen recording of the hand-drawing. Despite the brevity of the intervention, we found significant improvements in students’ understanding of the normal probability distribution and related probability concepts. The findings are discussed in relation to the dynamic representation and embodied cognition literature.

Is there a predictability hierarchy in reference resolution?

The concept of accessibility has often been invoked to explain reference resolution. According to the Givenness Hierarchy theory, a referent’s accessibility in the mental state of a comprehender is encoded in the form of the referent as part of its lexical semantic representation. However, the current literature has not reached a consensus on what accessibility exactly means and how best to quantify it. The factors that modulate accessibility show a great extent of overlap with another, independently motivated concept, predictability, raising the possibility of a “Predictability Hierarchy” that mirrors the Givenness Hierarchy. In a self-paced reading study, we examine whether there is such a “Predictability Hierarchy” by manipulating the predictability and the form of a referent presented to the participants. Our results indicate that although there is no strong evidence for approximating the Givenness Hierarchy with a “Predictability Hierarchy,” there is some preliminary evidence for a partial correlation between the form and the predictability of a referent.

Cognitive Argumentation and the Selection Task

This paper presents a study of the selection task based on Cognitive Argumentation (CA), a computational framework for dialectic argumentation-based reasoning. CA is built on a theoretical framework of argumentation in AI, which is then grounded via cognitive principles from Cognitive Science. The aim is to understand variations of the selection task by studying how argumentative reasoning is flexible enough to uniformly capture the differences among individuals' selections, the canonical response groups, and the shifts across the different contexts in which the experiment is carried out. Our approach is assessed with respect to the criteria developed in the meta-analysis of Ragni and Johnson-Laird (2018).

Evaluating Information and Misinformation during the COVID-19 Pandemic: Evidence for Epistemic Vigilance

There are many ways to go wrong when evaluating new information, e.g. by putting unwarranted trust in non-experts, or failing to scrutinize information about threats. We examined how effective people were at evaluating information about the COVID-19 pandemic. Early in the course of the pandemic, we recruited 1791 participants from six countries with varying levels of pandemic severity, and asked them to evaluate true and false pandemic-related statements (assertions and prescriptions) sampled from the media. We experimentally manipulated the source of each statement (a doctor, a political/religious leader, social media, etc.). Overall, people proved to be epistemically vigilant: they distinguished between true and false statements, especially prescriptions, and they trusted doctors more than other sources. These effects were moderated by feeling threatened by the pandemic, and by strong identification with some sources (political/religious leaders). These findings provide optimism in the fight against misinformation, while highlighting challenges posed by politics and ideology.

The Mystery of Early Taxonomic Development

Research has long investigated how knowledge about the world is connected by meaningful, semantic links. Much of this research has focused on a specific type of semantic link known as a taxonomic link, which connects concepts belonging to the same semantic category. However, many inconsistencies have emerged regarding how and when this specific type of semantic link is formed. The goal of the present study was to investigate these contradictory findings and provide an explanation for the inconsistent results. To do this, we examined the linguistic environment of stimulus sets from three studies that either supported or did not support protracted taxonomic development. Results provided evidence for the idea that semantic links between members of taxonomic categories in early development may be based on simple co-occurrences in language.

A Unified, Resource-Rational Account of the Allais and Ellsberg Paradoxes

Decades of empirical and theoretical research on human decision-making have broadly categorized it into two separate realms: decision-making under risk and decision-making under uncertainty, with the Allais paradox and the Ellsberg paradox being a prominent example of each, respectively. In this work, we present the first unified, resource-rational account of these two paradoxes. Specifically, we show that Nobandegani et al.’s (2018) sample-based expected utility model provides a unified, process-level account of the two variants of the Allais paradox (the common-consequence effect and the common-ratio effect) and of the Ellsberg paradox. Our work suggests that the broad framework of resource-rationality could permit a unified treatment of decision-making under risk and decision-making under uncertainty, thus approaching a unified account of human decision-making.

Computational challenges in explaining communication: How deep the rabbit hole goes

When people are unsure of the intended meaning of a word, they often ask for clarification. One way of doing so, often assumed in models of communication, is to point at a potential target: "Do you mean [points at the rabbit]?" However, what if the target is unavailable? Then the only recourse is language itself, which seems equivalent to pulling oneself up from a swamp by one's hair. We created two computational models of communication, one able to point and one not. The latter incorporates inference to resolve the meaning of non-pointing signals. Simulations show that agents in both models reach perceived understanding equally quickly. While this means agents think they are successfully communicating, non-pointing agents understand each other only at chance level. This shows that state-of-the-art computational explanations have difficulty explaining how people solve the puzzle of underdetermination, and that doing so will require a fundamental leap forward.

The Emergence of Cultural Attractors: An Agent-Based Model of Collective Perceptual Alignment

Cultural attractor landscapes describe the time-evolution of cultural variants over transmission events. When variants sit at a local minimum of a stable attractor landscape, there will be no cumulative error over transmissions, laying the foundation for cumulative culture. But because cultural attractors are emergent products of dynamic populations of cognitive landscapes, which are in turn emergent products of individual experience within a culture, stable cultural attractor landscapes cannot be taken for granted. Yet, little is known about how cultural attractors form or stabilize. We present an agent-based model of cultural attractor dynamics, which adapts a cognitive model of unsupervised learning of phoneme categories in individual learners to a multi-agent, sociocultural setting wherein individual learners provide the training input to each other. We find that constraints at the level of cognition, development, and demographic structure determine the tendency for populations to self-organize into and dynamically stabilize a cultural attractor landscape.

Which acoustic features support the language-cognition link in infancy: A machine-learning approach

From the ambient auditory environment, infants identify which communicative signals are linked to cognition. By 3 to 4 months of age, they have already begun to establish this link: listening to their native language and to non-human primate vocalizations supports infants’ core cognitive capacities, including object categorization. This study aims to shed light on the specific acoustic properties of these vocalizations that enable their links to cognition. We constructed a series of supervised machine-learning models to distinguish vocalizations that support cognition from those that do not, based on classes of acoustic features derived from a collection of human language and non-human vocalization samples. The models highlight a potential role for spectral envelope and rhythmic features from both human languages and non-human vocalizations. These results implicate a potential role for underlying perceptual mechanisms sensitive to spectral envelope and rhythmic features in infants’ establishment of the uniquely human language-cognition link.

Coherence-Building in Multiple Document Comprehension

The current study examined the extent to which the cohesion detected in readers’ constructed responses to multiple documents was predictive of persuasive, source-based essay quality. Participants (N=95) completed multiple-documents reading tasks wherein they were prompted to think-aloud, self-explain, or evaluate the sources while reading a set of four texts. They were then asked to write a source-based essay based on their reading. Natural Language Processing techniques were used to automatically analyze the cohesion of the constructed responses at both within- and across-documents levels. Results indicated that within-document cohesion was negatively related to essay quality, whereas across-documents cohesion was positively related to essay quality. Further, these relations differed by instructional condition such that strategic instructions to either self-explain or evaluate sources seemed to promote across-text integration, compared to thinking aloud. Overall, this study indicates that the cohesion of constructed responses to text can provide insights into the coherence of the mental representations readers construct while reading multiple documents.

Analyzing contingent interactions in R with `chattr`

The `chattr` R package enables users to easily detect and describe temporal contingencies in pre-annotated interactional data. Temporal contingency analysis is ubiquitous across signal system research, including human and non-human animal communication. Current approaches require manual evaluation (i.e., do not scale up), are proprietary/over-specialized (i.e., have limited utility), or are constructed ad hoc per study (i.e., are variable in construct). `chattr`'s theoretically motivated, customizable, and open-source code provides a set of core functions that allow users to quickly and automatically extract contingency information from data already annotated for interactant activity (via manual or automated annotation). We demonstrate the use of `chattr` by testing predictions about turn-taking behavior in three language development corpora. We find that the package effectively recovers documented variation in linguistic input given both manual and automatically created speech annotations, and we note future directions for package development key to its use across multiple research domains.

Parents Adaptively Use Anaphora During Parent-child Social Interaction

Anaphora, a ubiquitous feature of natural language, poses a particular challenge to young children as they first learn language due to its referential ambiguity. In spite of this, parents and caregivers use anaphora frequently in child-directed speech, potentially presenting a risk to effective communication if children do not yet have the linguistic capabilities of resolving anaphora successfully. Through an eye-tracking study in a naturalistic free-play context, we examine the strategies that parents employ to calibrate their use of anaphora to their child's linguistic development level. We show that, in this way, parents are able to intuitively scaffold the complexity of their speech such that greater referential ambiguity does not hurt overall communication success.

Toward a Comprehensive Developmental Theory for Symbolic Magnitude Understanding

Whether different formats of numbers are represented by one or more systems across development is a subject of long-standing interest in the field of numerical cognition, with seemingly contradictory results. Here we examined numerical comparison to test a developmental theory that can reconcile these discrepancies. In Experiment 1, we found that numerical understanding progresses through three continuous phases of association between numerical symbols and an approximate sense of numerosity. In the youngest age group (prefluent phase), comparing numerals was slower than comparing dot arrays, but became similar (fluent phase) and then faster (overlearning phase) with age. Because this developmental change occurred first for the numeric range 1-9, followed by 10-99 and then 100-999, multiple phases co-existed during childhood. Furthermore, results from Experiment 2 indicated that comparing different formats of numbers was affected by ratio even at the highest levels of proficiency, suggesting that the approximate number system is never fully replaced.

Aesthetic perception of prosodic patterns as a factor in speech segmentation

This study addresses the hypothesis that the aesthetic appeal of linguistic features may influence their learnability and in turn their stability in a language. Focusing on prosodic patterns, we investigated the crucial baseline assumption that linguistic features like stress affect aesthetic appeal. Listeners’ liking, beauty and naturalness ratings of isochronous words and words with initially, medially or finally lengthened or shortened syllables revealed that, indeed, these patterns differed in their aesthetic appeal. Interestingly, the aesthetic appeal of prosodic patterns corresponded to their effectiveness for speech segmentation in other experiments, indicating a potential connection between aesthetics and language learning and opening up avenues for further research on the role of aesthetics in language acquisition and change.

How the Mind Creates Structure: Hierarchical Learning of Action Sequences

Humans have the astonishing capacity to quickly adapt to varying environmental demands and reach complex goals in the absence of extrinsic rewards. Part of what underlies this capacity is the ability to flexibly reuse and recombine previous experiences, and to plan future courses of action in a psychological space that is shaped by these experiences. Decades of research have suggested that humans use hierarchical representations for efficient planning and flexibility, but the origin of these representations has remained elusive. This study investigates how 73 participants learned hierarchical representations through experience, in a task in which they had to perform complex action sequences to obtain rewards. Complex action sequences were composed of simpler action sequences, which were not rewarded, but whose execution led to changes in the environment. After participants learned action sequences, they completed a transfer phase. Unbeknownst to them, we manipulated either complex or simple sequences by exchanging individual elements, requiring them to relearn. Relearning progressed more slowly when simple (rather than complex) sequences were changed, in accordance with a hierarchical representation in which lower levels are quickly consolidated, potentially stabilizing exploration, while higher levels remain malleable, with benefits for flexible recombination.

Spatial Language Use Predicts Spatial Memory of Children: Evidence from Sign, Speech, and Speech-plus-gesture

There is a strong relation between children’s exposure to spatial terms and their later memory accuracy. In the current study, we tested whether the production of spatial terms by children themselves predicts memory accuracy and whether and how the language modality of these encodings modulates memory accuracy differently. Hearing child speakers of Turkish and deaf child signers of Turkish Sign Language described pictures of objects in various spatial relations to each other and were later tested on their memory for these pictures in a surprise memory task. We found that having described the spatial relation between the objects predicted better memory accuracy. However, the modality of these descriptions, in sign, speech, or speech-plus-gesture, did not reveal differences in memory accuracy. We discuss the implications of these findings for the relation between spatial language, memory, and the modality of encoding.

The Shape of Modified Numerals

The pattern of implicatures of the modified numeral 'more than n' depends on the roundness of n. Cummins, Sauerland, and Solt (2012) present experimental evidence for the relation between roundness and implicature patterns, and propose a pragmatic account of the phenomenon. More recently, Hesse and Benz (2020) present more extensive evidence showing that implicatures also depend on the magnitude of n and propose a novel explanation based on the Approximate Number System (Dehaene, 1999). Despite the wealth of experimental data, no formal account has yet been proposed to characterize the full posterior distribution over numbers of a listener after hearing 'more than n'. We develop one such account within the Rational Speech Act framework, quantitatively reconstructing the pragmatic reasoning of a rational listener. We show that our pragmatic account correctly predicts various features of the experimental data.
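
A minimal Rational Speech Act sketch of the kind of listener model described, with a flat prior, an invented utterance set, and placeholder parameters rather than the paper's fitted account: a literal listener spreads belief uniformly over quantities above n, a speaker chooses thresholds softmax-optimally by informativeness, and the pragmatic listener inverts that speaker.

```python
# Minimal RSA sketch for "more than n" (placeholder prior, utterances, alpha).
import numpy as np

numbers = np.arange(1, 151)                         # candidate true quantities
prior = np.ones(len(numbers)) / len(numbers)        # flat prior (placeholder)
utterances = [60, 70, 80, 90, 100]                  # available "more than n" thresholds
alpha = 4.0                                         # speaker rationality (placeholder)

def L0(n):
    """Literal listener: uniform over quantities strictly above n."""
    p = (numbers > n) * prior
    return p / p.sum()

def S1(quantity):
    """Speaker: softmax over true utterances, scored by informativeness."""
    utils = np.array([np.log(L0(n)[numbers == quantity][0]) if quantity > n
                      else -np.inf for n in utterances])
    if np.all(np.isinf(utils)):
        return np.zeros(len(utterances))            # no true "more than n" available
    expu = np.exp(alpha * utils)
    return expu / expu.sum()

def L1(n):
    """Pragmatic listener: Bayesian posterior over quantities given 'more than n'."""
    idx = utterances.index(n)
    scores = np.array([S1(q)[idx] for q in numbers]) * prior
    return scores / scores.sum()

posterior = L1(70)
print("P(quantity <= 80 | 'more than 70') =", round(posterior[numbers <= 80].sum(), 3))
```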

The Construct and Criterion Validity of a Cognitive Game-based Assessment: Cognitive Control, Academic Achievement, and Prefrontal Cortex Connectivity

Cognitive control—the ability to execute goal-relevant responses in the presence of competing goal-irrelevant response alternatives—predicts academic achievement, delinquency, and occupational success. Assessing children's cognitive control is challenging due to the tedious nature of cognitive assessments and children’s low attention spans. This study examined whether a cognitive game-based assessment (GBA) may alleviate these challenges by investigating the construct and criterion validity of implementing game-based features into a traditional cognitive assessment, and the associations between GBA performance, academic achievement outcomes, and associated neural substrates in children ages 3-5. Performance on the GBA was significantly associated with performance on a traditional measure of cognitive control, functional brain connectivity, and mathematical and verbal test outcomes. Children also showed a stronger preference and higher ratings of enjoyment for the GBA compared to the traditional cognitive control assessment.

Biologically Constrained Large-Scale Model of the Wisconsin Card Sorting Test

We propose a biologically constrained, large-scale neural network model that solves the Wisconsin Card Sorting Test (WCST). The WCST has been widely used in clinical and research settings to study cognitive flexibility and executive function. The model shows a good quantitative match with human responses across a number of WCST scoring indices, while consisting of neural networks that functionally and anatomically map to brain areas and structures implicated in the task, such as the prefrontal cortex and the cortico-basal ganglia-thalamus-cortical loop. We argue that the model provides a mechanistic account of WCST solving, and demonstrate its robustness by examining its performance across a range of biologically motivated parameter values.

Differences in implicit vs. explicit grammar processing as revealed by drift-diffusion modeling of reaction times

Learning new languages is a complex cognitive task involving both implicit and explicit processes. Batterink, Oudiette, Reber, and Paller (2014) report that participants with vs. without conscious awareness of a hidden semi-artificial language regularity showed no significant differences in grammar learning, suggesting that implicit/explicit routes may be functionally equivalent. However, their operationalization of learning via median reaction times might not capture underlying differences in cognitive processes. In a conceptual replication, we compared rule-aware (n=14) and rule-unaware (n=21) participants via drift-diffusion modeling, which can quantify distinct subcomponents of evidence-accumulation processes (Ratcliff & Rouder, 1998). For both groups, grammar learning was manifested in non-decision parameters, suggesting anticipation of motor responses. For rule-aware participants only, learning also affected bias in evidence accumulation during word reading. These results suggest that implicit grammar learning may be manifested through low-level mechanisms, whereas explicit grammar learning may involve more direct engagement with encoded target meanings.
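
A sketch of the drift-diffusion process whose subcomponents (drift rate, boundary, starting-point bias, non-decision time) such analyses estimate; the simulation below uses placeholder parameter values rather than fitted ones.

```python
# Illustrative drift-diffusion simulation; parameter values are placeholders.
import numpy as np

rng = np.random.default_rng(2)

def simulate_ddm(drift, boundary=1.0, bias=0.5, non_decision=0.3,
                 dt=0.001, noise=1.0):
    """Return (reaction time, upper-boundary response?) for one trial."""
    evidence = bias * boundary                 # starting point between 0 and boundary
    t = 0.0
    while 0.0 < evidence < boundary:
        evidence += drift * dt + noise * np.sqrt(dt) * rng.normal()
        t += dt
    return non_decision + t, evidence >= boundary

trials = [simulate_ddm(drift=1.2) for _ in range(2000)]
rts = np.array([rt for rt, _ in trials])
upper = np.array([resp for _, resp in trials])
print(f"median RT = {np.median(rts):.3f} s, upper-boundary proportion = {upper.mean():.2f}")
```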

Causal Learning With Interrupted Time Series

Interrupted time series analysis (ITSA) is a statistical procedure that evaluates whether an intervention causes a change in the intercept and/or slope of a time series. However, very little research has assessed causal learning in interrupted time series situations. We systematically investigated whether people are able to learn causal influences from a process akin to ITSA, and compared four different presentation formats of the stimuli. We found that participants' judgments agreed with ITSA in cases in which the pre-intervention slope is zero or in the same direction as the changes in intercept or slope. However, participants had considerable difficulty controlling for the pre-intervention slope when it is in the opposite direction of the changes in intercept or slope. The presentation formats did not affect judgments in most cases, but did in one. We discuss these results in terms of two potential heuristics that people might use aside from a process akin to ITSA.
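
A sketch of the segmented regression underlying ITSA, on made-up data: the outcome is modeled with a pre-intervention slope, a level change at the intervention point, and a post-intervention change in slope, here fitted with statsmodels.

```python
# Segmented (interrupted time series) regression on invented data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
time = np.arange(40)
intervention = (time >= 20).astype(float)          # 0 before, 1 after the intervention
time_since = np.where(time >= 20, time - 20, 0)    # post-intervention clock

# Simulated series: baseline slope 0.5, level jump of 4, slope change of -0.8.
y = 0.5 * time + 4 * intervention - 0.8 * time_since + rng.normal(0, 1, len(time))

X = sm.add_constant(np.column_stack([time, intervention, time_since]))
fit = sm.OLS(y, X).fit()
print(fit.params)   # [intercept, pre-intervention slope, level change, slope change]
```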

Leveraging rapid scene perception in attentional learning

In addition to saliency and goal-based factors, a scene’s semantic content has been shown to guide attention in visual search tasks. Here, we ask if this rapidly available guidance signal can be leveraged to learn new attentional strategies. In a variant of the scene preview paradigm (Castelhano & Heaven, 2010), participants searched for targets embedded in real-world scenes with target locations linked to scene gist. We found that activating gist with scene previews significantly increased search efficiency over time in a manner consistent with formal theories of skill acquisition. We combine VGG16 and EBRW to provide a biologically inspired account of the gist preview advantage and its effects on learning in gist-guided attention. Preliminary model results suggest that, when preview information is useful, stimulus features may amplify the similarities and differences between exemplars.

Fatal errors in the food domain: children’s categorization performance and strategy depend on both food processing and neophobic dispositions.

In this study, preschool children were tested in a food versus nonfood categorization task. We studied the influence of edibility cues such as food processing (whole versus sliced items) on children’s categorization abilities. We also correlated children’s categorization performance and strategy with their food rejection scores (neophobia). 137 children aged 4-6 years were asked to discriminate foods from nonfoods. Results revealed that food processing features (slicing) afforded edibility, leading to potentially hazardous incorrect categorization. We also found that children’s categorization performance was negatively correlated with their food rejection scores. Moreover, as expected, children with high food rejection scores displayed a more conservative categorization strategy (i.e., categorizing food items as inedible) than children with lower food rejection scores. However, contrary to our expectations, both performance and strategy of less neophobic and picky children were affected by food processing. These children committed dangerous errors, categorizing many nonfood items as food when sliced.

Web-scraping the Expression of Loneliness during COVID-19

We investigated the subjective experience of loneliness during COVID-19 by analyzing social media postings from March 2020 to January 2021. We collected text data from loneliness-related subgroups of Reddit and sampled 12787 posts that were written in ten consecutive days from each month. The results suggest that when individuals express their loneliness, they show an internal focus of attention on their emotions, desires, and cognitive appraisals rather than an external focus of attention on situations or other people. Linguistic markers of emotions expressed by lonely individuals included depression, anxiety, anger, hate, helplessness, and sadness. Also, loneliness-related topics were generally about their internal states pertinent to various social relationships, interpersonal interaction deficits, and their own lives in broad time perspectives. COVID-19 related loneliness was associated with negative appraisal of one’s situation and reaching out for new relationships online.

Categorization in the Wild: Category and Feature Learning across Languages

Categories such as 'animal' or 'furniture' play a pivotal role in processing, organizing, and communicating world knowledge. Many theories and computational models of categorization exist, but evaluation has disproportionately focused on artificially simplified learning problems (e.g., by assuming a given set of relevant features or small data sets), and on English native speakers. This paper presents a large-scale computational study of category and feature learning. We approximate the learning environment with natural language text, and scale previous work in three ways: we (1) model the full complexity of the learning process, acquiring categories and structured features jointly; (2) study the generalizability of categorization models to five diverse languages; and (3) learn categorizations comprising hundreds of concepts and thousands of features. Our experiments show that meaningful representations emerge across languages. We further demonstrate that a joint model of category and feature acquisition produces more relevant and coherent features than simpler models, suggesting it as an exploratory tool to support cross-cultural categorization studies.

Eye movements when reading spaced and unspaced texts in Arabic

This study investigated the extent to which varying interword spacing influences eye movements during reading in Arabic. Previous work conducted in Latin-script languages suggested that interword spaces facilitate word recognition, and that word recognition is inhibited when interword spaces are either removed or replaced by other characters (Rayner et al., 1998; Sheridan et al., 2013). We focused on the influence of interword spaces on reading Arabic, which is characterized by the use of interword spaces and a position-informative allographic system. Based on an eye-tracking experiment in which subjects read Arabic sentences presented at three levels of interword spacing and two levels of target word frequency, we found that eliminating interword spaces did not significantly inhibit reading, yet widening interword spaces exerted a facilitative effect. We argued that the effect of eliminating interword spaces was compensated for by the ligating properties of Arabic letters during sentence reading: Arabic ligatures are position-informative, providing sufficient visual cues for word recognition regardless of the presence of interword spaces.

Experts Interpret Generalizations Differently Than Novices

Generic statements, such as “mosquitoes fly” and “mosquitoes carry malaria,” are remarkable in that they are an intuitive and readily understood means of conveying knowledge, and yet their implied prevalence (the specific quantification they convey) can vary widely. This variability may lead to miscommunication, with speakers using generic statements flexibly and listeners rigidly interpreting them as implying near-universal prevalence (Cimpian et al., 2010). However, recent research found that listeners with applicable prior knowledge interpret generic statements flexibly (Tessler & Goodman, 2019b). The evident importance of prior knowledge suggests that expertise may impact how people interpret generic statements. We investigated whether experts and novices systematically differ in the way they interpret generic statements, using the esport League of Legends as a cultural microcosm. As hypothesized, experts interpreted generic statements more flexibly than novices did, and novices tended to assume generic statements applied more broadly than experts did.

Logic Programs as Executable Experimental Task Specifications

This paper proposes a formalized approach to the specification of experimental tasks in cognitive science. Put briefly, the proposal is to represent the structure of a task by a logic program that accepts only valid experimental event logs for the chosen paradigm. It is argued that the proposed approach stands to benefit the research process at various stages as it involves the creation of executable documentation for experimental tasks, which may facilitate the communication, validation, implementation, and analysis of experimental tasks. A worked example is presented in detail and some potential new directions of research at the intersection of psychology and computer science are discussed.

Measuring and predicting variation in the interestingness of physical structures

Curiosity drives much of human behavior, but its open-ended nature makes it hard to study in the laboratory. Moreover, computational theories of curiosity -- models of how intrinsic motivation promotes complex behaviors -- have been challenging to test because of technical limits. To circumvent this problem, we develop a new way to assess intrinsic motivation for building: we assume people build what they find interesting, so we asked them to rate the "interestingness" of visual stimuli -- in this case, simple block towers. Adults gave a range of ratings to towers built by children, with taller towers rated higher. To probe interestingness further, we developed controlled tower stimuli in a simulated 3D environment. While tower height predicted much of the variation in ratings, people also favored more precarious towers, as inferred from geometric features and simulated dynamics. These ratings and features therefore give a clear target for computational accounts of curiosity to explain.

A Formal Operational Model of ACT-R: Structure and Behaviour

It is a long-standing challenge to devise a formal model of ACT-R as a basis for formal reasoning about ACT-R. The ACT-R architecture is a composition of components (such as modules) with predefined interfaces between components and predefined interactions on those interfaces. Reasoning about the correctness of a formal model of ACT-R benefits from the separation of abstraction levels, i.e., reasoning on the level of interfaces and interactions between components in isolation from the concrete behaviour of each component. We propose a formal semantics of ACT-R that preserves the structural properties of architectural components, i.e., the interfaces of modules to the remaining architecture as well as communication between modules within the architecture. We demonstrate how our new formal semantics of ACT-R serves to prove the correctness of the timed automaton based operational semantics for ACT-R (TA-ACT-R) at the level of architectural components.

Making Progress on the Effort Paradox: Progress Information Moderates Cognitive Demand Avoidance

The law of least mental effort suggests that humans seek to minimize cognitive effort exertion. It is thought that we do so because effort is inherently aversive, playing the role of the cost function in a cost-benefit analysis. However, this is not always the case: some human activities are valued precisely because they are effortful. This dual nature of effort as both valued and costly is known as the Effort Paradox. The question is therefore: what features differentiate an aversively effortful task from a valued one? In the current study, we explore how perceived progress might be one of these features. Across two experiments, we demonstrate that people willfully choose to engage in more demanding cognitive tasks when doing so yields telegraphed progress information. These results suggest that perceived progress may play a moderating role in cognitive effort aversion and hint at the possibility that progress itself may be an inherently valuable stimulus.

If it works we didn’t need it: Intuitive judgments of ‘overreaction’

When laypeople decide whether a costly intervention is an overreaction or an appropriate response, they likely base those judgments on mental simulation about what could happen, or what would have happened without an intervention. To narrow down from the infinite set of possibilities they could consider, they may engage in a process of sampling. We examine whether judgments of overreaction can be explained by a utility-weighted sampling account from the JDM literature, a norm-weighted sampling account from the causal judgment literature, both, or neither. Three experiments test whether these judgments are overly influenced by low-risk bad outcomes (utility-weighted sampling), or by what is likely and prescriptively good (norm-weighted sampling). Overall, participants’ judgments indicate that they disregard low-risk bad outcomes, and that even when a high-risk outcome is successfully avoided, the intervention is judged an overreaction. These results favor a norm-weighted sampling account in the specific case of evaluating overreactions.

Epistemic verbs produce spatial models

Verbs such as ‘know’ and ‘think’ help people describe mental states, and reasoners without any training in logic can make epistemic inferences about mental states. For instance, verbs such as ‘know’ are factive, i.e., they describe true propositions, and the statement Ora knows that it’s sunny licenses the inference that it’s sunny. Logicians have accordingly developed epistemic logics capable of characterizing valid and invalid epistemic inferences based on operators that serve as analogs to verbs such as ‘know’ and ‘think’. Recent work suggests that no existing logical system can capture the inferences that naïve individuals tend to make. This paper describes a new theory of epistemic reasoning that operates on the assumption that reasoners represent epistemic relations as spatial models. The theory accords with recent theoretical advances, existing data, as well as two novel experiments that show how reasoners cope with nested epistemic verbs, e.g., Ami knows that Ora thinks it’s sunny.

Supervised category learning: When do participants use a partially diagnostic feature?

We report a supervised category learning experiment in which the training phase contains both classification and observation learning blocks. To explain the use of different categorization strategies, we propose an account in which the use of a stimulus dimension depends on how well the dimension is learned. Our results show that there is an overall preference for unidimensional categorization based on the perfectly diagnostic dimension. This preference for unidimensional categorization is negatively correlated with how well participants learn the partially diagnostic dimensions, and is also negatively correlated with mean response time. Bayesian modeling results show that participants use a partially diagnostic dimension only when it is learned with a very high level of accuracy. Different strategies are thus used for categorization depending on how well the perfectly and partially diagnostic dimensions are learned.

More than the sum of its parts: Acquiring semantically complex quantifiers

How does the acquisition of semantically complex expressions track the acquisition of their constituent meanings? We investigate this question using the English quantifiers both and either. These quantifiers, while morphologically simplex, are semantically complex, comprising two pieces: (i) universal/existential quantification and (ii) a restriction of the quantificational domain to size 2. Across two experiments, we compared the acquisition of these quantifiers with expressions that encode the conceptual pieces contributing to their make-up (two, all, any). Our results suggest that having all of the parts is not enough to put together the whole, a finding that could have implications for quantifier learning more broadly.

Extending rational models of communication from beliefs to actions

Speakers communicate to influence their partner's beliefs and shape their actions. Belief- and action-based objectives have been explored independently in recent computational models, but it has been challenging to explicitly compare or integrate them. Indeed, we find that they are conflated in standard referential communication tasks. To distinguish these accounts, we introduce a new paradigm called signaling bandits, generalizing classic Lewis signaling games to a multi-armed bandit setting where all targets in the context have some relative value. We develop three speaker models: a belief-oriented speaker with a purely informative objective; an action-oriented speaker with an instrumental objective; and a combined speaker which integrates the two by inducing listener beliefs that generally lead to desirable actions. We then present a series of simulations demonstrating that grounding production choices in future listener actions results in relevance effects and flexible uses of nonliteral language. More broadly, our findings suggest that language games based on richer decision problems are a promising avenue for insight into rational communication.

Cumulative frequency can explain cognate facilitation in language models

Cognates – words which share form and meaning across two languages – have been extensively studied to understand the bilingual mental lexicon. One consistent finding is that bilingual speakers process cognates faster than non-cognates, an effect known as cognate facilitation. Yet, there is no agreement on the underlying factors driving this effect. In this paper, we use computational modeling to test whether the effect can be explained by the cumulative frequency hypothesis. We train a computational language model on two language pairs (Dutch–English, Norwegian–English) under different conditions of input presentation and test it on sentence stimuli from two existing studies with bilingual speakers of those languages. We find that our model can exhibit a cognate effect, lending support to the cumulative frequency hypothesis. Further analyses reveal that the size of the effect in the model depends on its linguistic accuracy. We interpret our results within the literature on cognate processing.
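
A toy sketch of the cumulative frequency hypothesis itself, with invented counts: a cognate's effective frequency pools its frequencies across both languages, which lowers a simple frequency-based processing cost relative to a matched non-cognate whose forms cannot pool.

```python
# Toy illustration of the cumulative frequency hypothesis (counts are made up).
import math

frequency = {
    ("winter", "nl"): 1200, ("winter", "en"): 1500,   # Dutch-English cognate
    ("appel", "nl"): 1300,  ("apple", "en"): 1400,    # non-cognate translation pair
}

def processing_cost(effective_frequency, corpus_size=1_000_000):
    """Higher effective frequency -> lower cost (negative log relative frequency)."""
    return -math.log(effective_frequency / corpus_size)

# Cognate forms pool frequency across languages; non-cognate forms do not.
cognate_cost = processing_cost(frequency[("winter", "nl")] + frequency[("winter", "en")])
noncognate_cost = processing_cost(frequency[("appel", "nl")])
print(f"cognate cost = {cognate_cost:.2f}, non-cognate cost = {noncognate_cost:.2f}")
```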

A Rational Account of Anchor Effects in Hindsight Bias

Hindsight bias is exhibited when knowledge of an outcome (i.e., an anchor) affects subsequent recollections of previous predictions (i.e., an estimate). Hindsight bias usually leads to estimates being remembered as closer to the anchor than they actually were. The exact amount of hindsight bias exhibited depends on the anchor value and the anchor plausibility, with experimental results showing that hindsight bias is elicited only when the anchor is perceived to be plausible. In this paper we present a Bayesian model that captures the relationship between hindsight bias and anchor plausibility. This model provides a rational account of hindsight bias by considering memory recall as a statistical problem, where the goal is to reconstruct the original estimate using the anchor as new evidence. Simulations show that the modeled trends align closely with previously published human data.
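
A sketch in the spirit of such a reconstruction model, with placeholder numbers rather than the paper's parameterization: the recalled estimate is a precision-weighted combination of a noisy memory of the original estimate and the anchor, with the anchor's weight scaled by its plausibility under prior knowledge, so implausible anchors produce little bias.

```python
# Bayesian-style reconstruction sketch of anchor effects in hindsight bias;
# all parameter values below are placeholders.
import numpy as np
from scipy.stats import norm

def recalled_estimate(original, anchor, memory_sd=8.0, anchor_sd=5.0,
                      prior_mean=50.0, prior_sd=15.0):
    # Plausibility of the anchor under prior knowledge scales its weight.
    plausibility = norm.pdf(anchor, prior_mean, prior_sd) / norm.pdf(prior_mean, prior_mean, prior_sd)
    w_memory = 1 / memory_sd ** 2
    w_anchor = plausibility / anchor_sd ** 2
    return (w_memory * original + w_anchor * anchor) / (w_memory + w_anchor)

for anchor in (55, 70, 120):   # plausible, less plausible, implausible outcomes
    recall = recalled_estimate(original=40, anchor=anchor)
    print(f"anchor {anchor:3d}: recalled estimate = {recall:.1f} (bias = {recall - 40:+.1f})")
```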

The Role of The Basal Ganglia in the Human Cognitive Architecture: A Dynamic Causal Modeling Comparison Across Tasks and Individuals

The basal ganglia (BG) performs an important functional role in cognition, but models disagree about the nature of the relationship between BG activity and activity in other cortical areas. Previous computational models can be categorized as implementing the effects of the BG on prefrontal cortex as either local and direct, or as involving other regions and, therefore, modulatory. To test which of these two effects best represents the role of the BG, a large fMRI dataset of 200 participants performing six representative cognitive tasks was analyzed through Dynamic Causal Modeling (DCM). To ensure that the DCM models were realistic and representative of a general brain architecture, the models were implemented within the putative neural underpinnings of the Common Model of Cognition, an abstract blueprint for cognition. The comparison showed that a Mixed model, including both Direct and Modulatory connectivity, consistently outperformed models that included only direct or modulatory connections. It was also found that the relative rankings of the Direct and Modulatory models depended on the specific task, suggesting that the BG is a flexible system that adapts to task demands.

The online advantage of repairing metrical structure: Stress shift in pupillometry

In this paper we use pupillometry, a non-invasive, naturalistic method of measuring attention and cognitive load, to measure the effect of stress clash (Chinése shíp) and its metrical repair (Chínese shíp) during auditory sentence processing. We addressed two main research questions. The first question explores whether phonologically-disfavored metrical structures induce processing costs indexed by changes in pupil size. The second investigates whether the application of an optional process of stress retraction called the Rhythm Rule (Liberman & Prince, 1977) ameliorates or compounds any general penalty for stress clash. We find that unrepaired stress clash leads to greater pupil diameter relative to non-clashing sequences, indicating increased attention and cognitive load. We also find that repaired sequences lead to a decrease in overall pupil diameter, indicating facilitation.

Modeling a direct role of vocabulary size in driving cross-accent word identification

Children typically do not spontaneously recognize accented productions of known words until approximately 19 months. In 15-month-olds, however, this ability is correlated with vocabulary size. Vocabulary size may support cross-accent accommodation by decreasing the likelihood that a variant production is considered to be an unknown word. We simulated a cross-accent word identification experiment, with word tokens generated from a two-dimensional Gaussian space, and accented productions simulated via linear transforms. Simulated participants were Bayesian classifiers with large or small vocabularies. Our large vocabulary group accurately classified more accented tokens and were less likely to classify an accented token as an unknown word. Thus, one way a growing vocabulary size may foster cross-accent accommodation is through increasing infants’ propensity to fit accented variants to known words, rather than treating them as unknown words.
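
A sketch of the simulation logic with invented parameters: known words are two-dimensional Gaussians, the accent is a linear transform applied to sampled tokens, and a Bayesian classifier assigns each token to its best-fitting word unless that word's likelihood falls below a fixed "unknown word" level; with these settings a larger vocabulary tends to leave fewer accented tokens classified as unknown.

```python
# Sketch of a cross-accent word identification simulation (invented parameters).
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(4)

def make_lexicon(n_words):
    """Each known word is a 2-D Gaussian in an abstract phonetic space."""
    means = rng.normal(0, 5, size=(n_words, 2))
    return [multivariate_normal(mean=m, cov=np.eye(2)) for m in means]

def classify(token, lexicon, unknown_density=1e-4):
    """Index of the best-fitting known word, or -1 for 'unknown word'."""
    likelihoods = [w.pdf(token) for w in lexicon]
    best = int(np.argmax(likelihoods))
    return best if likelihoods[best] > unknown_density else -1

accent = np.array([[1.0, 0.4],      # the accent: a fixed linear transform
                   [0.0, 1.0]])     # applied to every produced token

n_test = 500
for n_words, label in ((10, "small vocabulary"), (200, "large vocabulary")):
    lexicon = make_lexicon(n_words)
    word_idx = rng.integers(n_words, size=n_test)
    tokens = np.array([lexicon[i].rvs(random_state=rng) for i in word_idx]) @ accent.T
    unknown_rate = np.mean([classify(t, lexicon) == -1 for t in tokens])
    print(f"{label}: proportion of accented tokens judged 'unknown' = {unknown_rate:.2f}")
```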

Promoting thinking in terms of causal structures: Impact on performance in solving complex problems

Goldwater and Gentner (2015) showed that the sensitivity for causal structures can be promoted with an intervention combining explication of causal models and guided structural alignment of situations from disparate fields with the same underlying causal model. We extended this intervention with inference questions and combined it with a subsequent complex problem-solving (CPS) task, in order to investigate whether enhanced sensitivity for causal structures results in better performance in CPS. This study (N = 108) compares the CPS performance indicators knowledge acquisition and knowledge application among three experimental groups (intervention, intervention extended with inference questions, control group) and reveals the following results: 1) The effectiveness of the intervention in increasing the sensitivity for causal structures was replicated. 2) Sensitivity for causal structures and CPS performance indicators were significantly positively correlated. 3) There is no direct effect of the intervention on CPS performance, but an indirect-only effect via enhanced sensitivity.

What did I sign? A study of the impenetrability of legalese in contracts

Legal documents, in the form of terms of service agreements and other private contracts, are now an increasingly prevalent part of everyday life. While legal documents have long been acknowledged to be difficult to understand without training, it remains an open question whether ever-increasing exposure to contracts might have mitigated this difficulty. Moreover, insofar as this difficulty has persisted, there remains no systematic analysis of which linguistic structures contribute most heavily to the processing difficulty of legal texts, nor of whether this difficulty is heightened for those with less language experience. Here, we investigate these issues, and in a well-powered experiment find evidence that (a) both recall and comprehension of legal propositions in a contract are hindered by use of a legal register relative to plain-English translations; (b) certain linguistic structures, such as center-embedding, hinder recall to a greater degree than others, such as passive voice; and (c) language experience influences comprehension of legal propositions. Surprisingly, language experience did not influence recall, nor was there an interaction between legal register and language experience on recall or comprehension. These findings suggest that legal language poses heightened difficulties for those with less language experience (who tend to be of lower socioeconomic status and to have diminished access to the justice system), and that eliminating complex features of legalese would benefit readers of all levels.

Who went fishing? Inferences from social evaluations

Humans have a remarkable ability to go beyond the observable. From seeing the current state of our shared kitchen, we can infer what happened and who did it. Prior work has shown how the physical state of the world licenses inferences about the causal history of events, and the agents that participated in these events. Here, we investigate a previously unstudied source of evidence about what happened: social evaluations. In our experiment, we present situations in which a group failed to optimally coordinate their actions. Participants learn how much each agent was blamed for the outcome, and their task is to make inferences about the situation, the agents' actions, as well as the agents' capabilities. We develop a computational model that accurately captures participants' inferences. The model assumes that people blame others by considering what they should have done, and what causal role their action played. By inverting this generative model of blame, people can figure out what happened.

Emotions as the product of body and mind: The hierarchical structure of folk concepts of mental life among US adults and children

How are emotions understood to relate to other aspects of mental life? Among US adults, concepts of mental life are anchored by a distinction between physiological sensations (BODY), social-emotional abilities (HEART), and perceptual-cognitive capacities (MIND); these conceptual units are in place by 7-9y (Weisman et al., 2017a, 2017b, 2018). Here we reanalyze these datasets to explore the structural relationships among BODY, HEART, and MIND. Across six studies (N=1758), adults’ assessments of the mental lives of robots, beetles, birds, goats, and other entities revealed a clear hierarchical structure: social-emotional abilities were virtually never granted to any entity perceived to lack physiological sensations or perceptual-cognitive abilities. This is consistent with a folk theory—similar to prominent theories in affective science—in which emotions emerge from the combination of more basic capacities for sensation and cognition. Studies of US children (4-9y, N=445) suggest that it takes years for children to acquire this understanding.

Compositional generalization in multi-armed bandits

To what extent do human reward learning and decision-making rely on the ability to represent and generate richly structured relationships between options? We provide evidence that structure learning and the principle of compositionality play crucial roles in human reinforcement learning. In a new multi-armed bandit paradigm, we found evidence that participants are able to learn representations of different reward structures and combine them to make correct generalizations about options in novel contexts. Moreover, we found substantial evidence that participants transferred knowledge of simpler reward structures to make compositional generalizations about rewards in complex contexts. This allowed participants to accumulate more rewards earlier, and to explore less whenever such knowledge transfer was possible. We also provide a computational model which is able to generalize and compose knowledge for complex reward structures. This model describes participant behaviour in the compositional generalization task better than various other models of decision-making and transfer learning.

Understanding distal goals from proximal communicative actions

Can people interpret communicative action modulations in terms of the actor’s distal goal? We investigated situations in which the proximal goal of an action (i.e., the movement endpoint) does not overlap with its distal goal (i.e., a final location beyond the movement endpoint). Participants were presented with animations of an object being moved at different velocities towards a designated endpoint. The distal goal, however, was for the object to be moved past this endpoint, to one of two occluded final locations. Participants were asked to select the location which they considered the likely distal goal of the action. As predicted, participants detected differences in movement velocity and, based on these differences, systematically mapped the movements to the two distal goal locations. These findings extend previous research on sensorimotor communication by demonstrating that communicative action modulations are not restricted to proximal goals but can also contain information about distal goals.

A novel non-linguistic audio-visual learning paradigm to test the cognitive correlates of learning rate

Audio-visual (AV) associative learning is central to many aspects of cognitive development and is key in reading acquisition. Most studies thus far have examined AV associative learning involving linguistic stimuli. Yet it is important to examine cross-modal learning free of familiarity confounds. We therefore designed an AV learning paradigm relying on novel, non-linguistic auditory and visual stimuli, which were both unfamiliar to participants. In addition to AV learning, we measured performance on reading-related abilities, as well as on more domain-general skills, in a population of healthy Italian-speaking adults (N=57). By fitting trial-by-trial performance in our novel learning task, we demonstrate the expected variability in speed of learning (learning rate) across participants. We then show that speed of learning in our novel learning task is positively associated with working memory and replicate this result in a set of French-speaking participants (N=32), showing that it holds in another language.

Frequency vs. Salience in First Language Acquisition: The Acquisition of Aspect Marking in Chintang

Frequency of occurrence in the input is a major factor determining ease of acquisition for first language learners. However, little is known about the factors relevant for the acquisition of low-frequency items. We examine the use of aspectual markers in a longitudinal corpus of Chintang (Sino-Tibetan, Nepal) children (ages 2;1-4;5). Only 7.7% of all Chintang verbs are overtly marked for aspect. Chintang has three aspect markers, one of which is substantially more frequent than the others. One of the low-frequency markers is positionally and prosodically more salient, appearing at the word boundary. Using a Bayesian beta-binomial model, we assess the distribution and flexibility of use of aspectual markers in the input and children's production. Our analysis shows that the most frequent marker was acquired earliest, as predicted. For the low-frequency markers, position, segmentability and uniformity are better predictors of ease of acquisition.
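
As a rough illustration of the modeling approach, the sketch below computes a conjugate beta-binomial posterior for the proportion of overtly aspect-marked verbs; the counts and prior are invented for illustration and are not the Chintang corpus data or the authors' model specification.

    # Sketch: beta-binomial posterior for the rate of overt aspect marking
    # (illustrative counts, not the Chintang corpus).
    from scipy.stats import beta

    def posterior(marked, total, a_prior=1.0, b_prior=1.0):
        # Beta(a, b) prior + binomial likelihood -> Beta posterior.
        a_post = a_prior + marked
        b_post = b_prior + (total - marked)
        lo, hi = beta.ppf([0.025, 0.975], a_post, b_post)
        return a_post / (a_post + b_post), (lo, hi)

    for speaker, marked, total in [("child, age 2", 12, 300), ("adult input", 77, 1000)]:
        mean, ci = posterior(marked, total)
        print(f"{speaker}: posterior mean {mean:.3f}, 95% interval ({ci[0]:.3f}, {ci[1]:.3f})")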

Predicting Memory Errors with a Bayesian Model of Concept Generalization

“Similarity” is often thought to dictate memory errors. For example, in visual memory, memory judgements of lures are related to their psychophysical similarity to targets: an approximately exponential function in stimulus space (Schurgin et al. 2020). However, similarity is ill-defined for more complex stimuli, and memory errors seem to depend on all the remembered items, not just pairwise similarity. Such effects can be captured by a model that views similarity as a byproduct of Bayesian generalization (Tenenbaum & Griffiths, 2001). Here we ask whether the propensity of people to generalize from a set to an item predicts memory errors to that item. We use the “number game” generalization task to collect human judgements about set membership for symbolic numbers and show that memory errors for numbers are consistent with these generalization judgements rather than pairwise similarity. These results suggest that generalization propensity, rather than “similarity”, drives memory errors.
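
A minimal sketch of the size-principle generalization computation behind the "number game" (Tenenbaum & Griffiths, 2001) follows; the hypothesis space, priors, and example numbers are illustrative and are not the stimuli or model settings used in the study.

    # Sketch: size-principle generalization in the "number game" (illustrative hypotheses).
    DOMAIN = range(1, 101)
    HYPOTHESES = {f"multiples of {k}": {n for n in DOMAIN if n % k == 0} for k in range(2, 11)}
    HYPOTHESES.update({f"interval {a}-{a + 9}": set(range(a, a + 10)) for a in range(1, 92, 10)})
    HYPOTHESES["powers of 2"] = {2 ** i for i in range(1, 7)}

    def p_generalize(observed, probe):
        # p(probe in concept | observed) with uniform priors and the size principle:
        # each consistent hypothesis h gets likelihood (1/|h|)^n.
        num = den = 0.0
        n = len(observed)
        for h in HYPOTHESES.values():
            if set(observed) <= h:
                w = (1.0 / len(h)) ** n
                den += w
                num += w if probe in h else 0.0
        return num / den if den else 0.0

    observed = [16, 8, 2, 64]
    for probe in (32, 10, 17):
        print(probe, round(p_generalize(observed, probe), 3))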

The role of clustering in the efficient solution of small Traveling Salesperson Problems

Human solutions to the Traveling Salesperson Problem (TSP) are surprisingly close to optimal and unexpectedly efficient. We posit that humans solve instances of the TSP by first clustering the points into smaller regions and then solving each cluster as a simpler TSP. Prior research has shown that participants cluster visual stimuli reliably. That is, their clustering and re-clustering of the same stimulus are similar, especially when the stimulus is relatively more clustered. In this study, participants solved the same TSP instances twice. On the second presentation, half of the instances were flipped about the horizontal and vertical axes. Participants solved the TSP reliably, with their two tours of the same instance sharing 77 percent of the same edges on average. In addition, within-participant reliability was higher for more clustered versus more dispersed instances. Our findings are consistent with the proposal that people use clustering strategies to solve the TSP.

Distribution of unidimensional space in the LSU time lexicon

In signed languages, space can be used to build linguistic analogs for mental images and linguistically support conceptual domains such as time. This research aimed to (i) describe the spatial patterns of the Uruguayan Sign Language (LSU) time lexicon, (ii) test whether variables such as time construal, reference type, and iconicity labeling produce a clear-cut spatial pattern in the LSU time lexicon, and (iii) determine whether the LSU time lexicon might prime the mental timeline for LSU signers. We discuss how we selected a corpus, labeled space according to certain parameters, and characterized signs within unidimensional spaces. We applied a Chi-square goodness-of-fit test to compare multiple observed proportions among variables. The results confirmed a bias toward the Sagittal space for deictic time and biases in Hand number and Reference type for sequential and span time. We suggest considering these biases in studies of time discrimination with the deaf population.
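
The chi-square goodness-of-fit comparison described above can be run in a few lines; the counts below are invented for illustration and do not come from the LSU corpus.

    # Sketch: chi-square goodness-of-fit test on sign counts per spatial axis
    # (made-up counts, not the LSU data).
    from scipy.stats import chisquare

    observed = [42, 18, 15]                   # e.g., sagittal, lateral, vertical sign counts
    stat, p = chisquare(observed)             # default null hypothesis: uniform proportions
    print(f"chi2 = {stat:.2f}, p = {p:.4f}")  # small p suggests a bias toward one axis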

Categorical Perception as a Combination of Nature and Nurture

This paper reviews the existing literature on categorical perception of sounds and colors in different animals, including humans. We highlight that categorical perception is a combination of nature and nurture; to be specific, categorical perception is innate with a phylogenetic root, but it can also be modified by postnatal experience. We also suggest that language is not wholly the basis for categorical perception, as the Sapir-Whorf hypothesis posits; instead, language is one type of experience that can affect the nurture part of categorical perception across domains and modalities in humans.

SUSTAIN captures category learning, recognition, and hippocampal activation in a unidimensional vs information-integration task

There is a growing interest in alternative explanations to the dual-system account of how people learn category structures varying in their optimal decision bounds (unidimensional and information-integration structures). Recognition memory performance and hippocampal activation patterns in these tasks are two interesting findings, which have not been formally explained. Here, we carry out a formal simulation with SUSTAIN (Love, Medin, & Gureckis, 2004), an adaptive model of category learning, which had great success in accounting for recognition memory performance and fMRI activity patterns. We show, for the first time, that a formal single-system model of category learning can accommodate recognition performance after learning and is consistent with fMRI data obtained while participants learned these structures.

Parent-Child Conversation About Negative Aspects of the Biological World

The biological world includes many negatively-valenced activities, like predation, parasitism, and disease. How do parents discuss these activities with their children? Parents of children aged 4 to 12 (n = 147) were asked to discuss an illustrated book of animal facts with their child. Some facts were neutral (e.g., “meerkats live in groups of 2 to 30”) and some were negative (e.g., “meerkats wage war on neighboring colonies to expand their territory”). Parents did not selectively omit negative facts. Instead, they selectively embellished those facts, adding their own comments and questions, often couched in explicitly negative language. Children, in turn, were more likely to remember the negative facts but less likely to generalize them beyond the animal in the book. These findings suggest that early input relevant to biological competition may hamper children’s developing understanding of ecology and evolution.

Awareness of motor intention and inhibitory control: the role of reactive and proactive components

An open problem in the Libet task literature concerns the relationship between the moment in which awareness of motor intention arises and inhibition efficiency in response to an external stimulus (taking into account both the reactive and proactive mechanisms). In this study, 112 volunteers performed Libet’s clock task to evaluate motor intention awareness, a Stop Signal Task (SST) to evaluate inhibitory efficiency in its mainly reactive component, and a Cued Go/No-Go task to evaluate inhibitory efficiency in its mainly proactive dimension. We observed that a delayed emergence of the awareness of motor intention is related to better reactive inhibitory efficiency. No relationship was observed with the proactive component.

Early Analogical Extensions: An ERP Study on Preschoolers' Semantic Approximations

This study investigates whether the ERPs of 4-year-olds in response to verbal overextensions reflect the encoding of actions through abstract categories. Participants were presented with images of actions (e.g. peeling an orange) while hearing a sentence containing a conventional verb (e.g. peeling), an approximative verb (e.g. undressing), a superficially related verb (e.g. pressing) or a pseudoverb (e.g. rauging). The N400 for approximative verbs significantly differed from the pseudoverb condition, but not from the conventional verb condition. In contrast, the N400 for superficially related verbs was significantly greater than for conventional verbs, but no significant difference was found with the pseudoverb condition. These results confirm our hypothesis that encoding mainly focuses on general categories (e.g. taking of an envelope). The implications of the findings are discussed regarding the conceptual organization of preschoolers and their analogical abilities.

Characterizing the object categories two children see and interact with in a dense dataset of naturalistic visual experience

What do infants and young children tend to see in their everyday lives? Relatively little work has examined the categories and objects that tend to be in the infant view during everyday experience, despite the fact that this knowledge is central to theories of category learning. Here, we analyzed the prevalence of the categories (e.g., people, animals, food) in the infant view in a longitudinal dataset of egocentric infant visual experience. Overall, we found a surprising amount of consistency in the broad characteristics of children's visual environment across individuals and across developmental time, in contrast to prior work examining the changing nature of the social signals in the infant view. In addition, we analyzed the distribution and identity of the categories that children tended to touch and interact with in this dataset, generalizing previous findings that these objects tended to be distributed in a Zipfian manner. Taken together, these findings take a first step towards characterizing infants' changing visual environment, and call for future work to examine the generalizability of these results and to link them to learning outcomes.

Theory Acquisition as Constraint-Based Program Synthesis

What computations enable humans to leap from mere observations to rich explanatory theories? Prior work has focused on stochastic algorithms that rely on random, local perturbations to model the search for satisfactory theories. Here we introduce a new approach inspired by the practice of ‘debugging’ from computer programming, whereby learners use past experience to constrain future proposals, and are thus able to consider large leaps in their current theory to fix specific deficiencies. We apply our ‘debugging’ algorithm to the magnetism domain introduced by Ullman, Goodman, and Tenenbaum (2010) and compare its efficiency and accuracy to their stochastic-search algorithm. We find that our algorithm not only requires fewer iterations to find a solution, but that the solutions it finds more reliably recover the correct latent theories, and are more robust to sparse data. Our findings suggest the promise of such constraint-based approaches to emulate the way humans efficiently navigate large, discrete hypothesis spaces.

In Sync or Vocal? How Bottlenose Dolphins Coordinate in a Cooperative Task

Cooperation experiments have long been used to explore the cognition underlying animals' coordination towards a shared goal. While the ability to understand the need for a partner has been demonstrated in a number of species, far fewer studies have explored the behavioral strategies animals use to coordinate their behavior in such tasks. Here, we investigate the strategies two dolphin dyads used to coordinate their behavior during a cooperative button-pressing task that required precise behavioral synchronization. Both dyads were more likely to succeed if they used whistles prior to pressing their buttons, but the results showed that they adopted different strategies. Specifically, one dyad favored physical synchrony, waiting nearby for their partner and swimming together to approach the buttons. The other dyad was much more vocal, and more likely to swim independently before coordinating at the buttons. Only for this second dyad did increased whistling lead to more success. Our results suggest that bottlenose dolphins have the behavioral flexibility to employ either vocal signals or physical synchrony to coordinate their cooperative efforts.

Extrapolation Under Caricatured Representations

Research on contrastive category learning has revealed a robust tendency for learners to develop caricatured representations (elsewhere: ideals or extreme points) to support successful discriminative classification. These representations are defined by extreme values on some task-relevant dimension and are often indicated as highly representative of their categories. Work in this area has elaborated the task constraints and contexts necessary for these representations to emerge, but little research has scrutinized whether caricatured representations extend beyond a category’s known range of feature values. To this end, across two experiments, we investigated whether the most representative items for a category can extend beyond the training set. Data from pairwise typicality comparisons following learning suggests that caricatured categories may be supported by representations that extend past the feature range present in training. The findings are better explained by certain representational frameworks (e.g., adaptive reference points, boundaries) than others (e.g., exemplars, clusters).

Temporal explanations help resolve temporal conflicts

People can explain phenomena by appealing to temporal relations, e.g., you might explain a colleague’s absence at a meeting by inferring that their prior meeting did not end on time. Cognitive scientists have yet to investigate temporal explanations, and explanatory reasoning research tends to focus on how people assess causal explanations; it shows that reasoners often generate causal explanations to resolve conflicts. We posit that temporal explanations help reasoners resolve temporal conflicts, and describe three experiments that test this hypothesis. Experiment 1 provided participants with temporal information that was consistent or inconsistent and elicited their inferences about what followed. Participants spontaneously provided temporal explanations to resolve inconsistencies, and many of them also provided more conservative refutations. In Experiments 2 and 3, participants evaluated explanations and refutations in light of conflicting information. The studies showed that participants spontaneously generate temporal explanations and, in certain cases, prefer them even when a more conservative refutation is available. The research is the first to examine patterns in temporal explanatory reasoning.

Shared temporal expectation across higher- and lower-level cognition

Temporal expectation for future events allows people to prepare more efficiently for the future. In sensorimotor tasks, it has been considered an important factor that influences the accuracy and speed of responding to specific sensory events. However, there is no consensus on whether the temporal expectation functioning in sensorimotor tasks is simply an emergent property of task-specific, low-level circuits, or an abstract representation shared by higher-level cognition. In four experiments, we asked whether two simultaneously processed tasks—one of lower-level and the other of higher-level cognition—would be influenced by the same temporal expectation. One task was speeded response to a target stimulus, where the target was cancelled on 30% of the trials. The other task was a real-time gambling task, where participants needed to predict from time to time whether the current trial would end with a target or a cancellation. Both the target and cancellation latencies followed specific distributions, with the distribution of cancellation latencies varying across blocks. Participants’ choices in gambling provided real-time measures of the updating of temporal expectation over time, which suggested an imperfect representation of the temporal distributions. Importantly, we found that on a trial when participants predicted an ending of cancellation instead of target, their subsequent response to the target was strikingly slower (up to a one-third increase in response time). This implies that temporal expectation is shared across higher-level and lower-level cognitive tasks.
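
To make the idea of real-time updating concrete, the sketch below computes how the probability of an eventual cancellation could be updated as time elapses within a trial, assuming illustrative gamma latency distributions and a 30% cancellation rate; these are not the distributions used in the experiments.

    # Sketch: updating the predicted trial outcome (target vs. cancellation) over time,
    # under assumed latency distributions (parameters are illustrative).
    import numpy as np
    from scipy.stats import gamma

    t = np.linspace(0, 5, 501)              # elapsed time within a trial (seconds)
    target = gamma(a=4, scale=0.5)          # assumed target-latency distribution
    cancel = gamma(a=2, scale=1.0)          # assumed cancellation-latency distribution
    p_cancel_prior = 0.3                    # 30% of trials end in cancellation

    # P(trial ends in cancellation | neither event has occurred by time t)
    num = p_cancel_prior * cancel.sf(t)
    den = num + (1 - p_cancel_prior) * target.sf(t)
    p_cancel_given_t = num / den

    for sec in (0.5, 1.5, 3.0):
        print(f"t={sec:.1f}s: P(cancellation) = {p_cancel_given_t[np.searchsorted(t, sec)]:.2f}")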

What happened here? Children integrate physical reasoning to infer actions from indirect evidence

As we navigate through the world, we often leave traces of our actions: a broken branch, a footprint in the mud, a dirty coffee mug at a desk. As observers, these traces enable us to make surprisingly complex social inferences about the actions that may have caused them: what the other person may have been doing, what their likely goals were, and more. But how might a conspicuous lack of evidence prompt this same reasoning? We hypothesize that children use intuitive physics to infer possible prior actions and their outcomes, even in the absence of evidence. In support of this proposal, we found that children readily reconstruct an agent’s actions after observing indirect evidence. Importantly, they are also able to use the difficulty of concealing such evidence to interpret its absence.

Latent Event-Predictive Encodings through Counterfactual Regularization

A critical challenge for any intelligent system is to infer structure from continuous data streams. Theories of event-predictive cognition suggest that the brain segments sensorimotor information into compact event encodings, which are used to anticipate and interpret environmental dynamics. Here, we introduce a SUrprise-GAted Recurrent neural network (SUGAR) using a novel form of counterfactual regularization. We test the model on a hierarchical sequence prediction task, where sequences are generated by alternating hidden graph structures. Our model learns to both compress the temporal dynamics of the task into latent event-predictive encodings and anticipate event transitions at the right moments, given noisy hidden signals about them. The addition of the counterfactual regularization term ensures fluid transitions from one latent code to the next, whereby the resulting latent codes exhibit compositional properties. The implemented mechanisms offer a host of useful applications in other domains, including hierarchical reasoning, planning, and decision making.

The Structure of Team Search Behaviors with Varying Access to Information

In many team-based activities, members search for information to gain situational awareness and thereby structure their own behavior. The extent to which members are coupled and in control of their surrounding environment can be assessed via the fluctuations of their searching behaviors. To facilitate prospective control, assistive technologies such as a head-up display (HUD) can alleviate the demands of search and facilitate team performance. This study investigated how three-person teams divided their labor and structured their search behavior when playing a multiplayer search-and-retrieval task where first-person visibility and access to a HUD were manipulated. Results showed that access to task-relevant information facilitated performance and division of labor, and increased prospective control of searching behaviors, as indexed by detrended fluctuation analysis (DFA). Over multiple sessions, teams learned to use the HUD to structure their behavior to achieve the task goal. Results indicate the potential in using DFA for monitoring prospective control in team contexts.
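
For readers unfamiliar with DFA, the sketch below implements a standard detrended fluctuation analysis on a synthetic series; it follows the textbook recipe rather than the authors' exact pipeline, and the scales and data are illustrative.

    # Sketch: detrended fluctuation analysis (DFA) of a behavioral time series
    # (synthetic data; a standard DFA recipe, not the authors' pipeline).
    import numpy as np

    def dfa(signal, scales):
        y = np.cumsum(signal - np.mean(signal))      # integrated profile
        flucts = []
        for s in scales:
            rms = []
            for i in range(len(y) // s):
                seg = y[i * s:(i + 1) * s]
                x = np.arange(s)
                trend = np.polyval(np.polyfit(x, seg, 1), x)   # local linear detrend
                rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
            flucts.append(np.mean(rms))
        # Scaling exponent alpha: slope of log F(s) against log s.
        return np.polyfit(np.log(scales), np.log(flucts), 1)[0]

    rng = np.random.default_rng(1)
    white = rng.normal(size=4096)
    print("white-noise alpha (expected ~0.5):", round(dfa(white, [16, 32, 64, 128, 256]), 2))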

The identity of the partner matters even when naming everyday objects

Social factors, such as partner familiarity (e.g., talking to a friend vs. stranger), may affect some conversations but not others. While researchers do not always control for partner identity when conducting interactive studies, the current empirical report of a language production experiment conducted via Zoom presents effects of partner familiarity (friends vs. strangers) on the form and content of referring expressions in the mundane task of describing everyday objects. First, speakers interacting with a friend were less disfluent than speakers interacting with a stranger, showing that more effort is invested in interactions with strangers. Second, speakers interacting with a friend showed more sensitivity to prior context. Surprisingly, these effects reveal that speakers are sensitive to partner identity even when describing everyday objects whose labels are shared across all language users. The current findings suggest that researchers should consider social factors as part of the experimental design of interactive tasks.

Children consider the probability of random success when evaluating knowledge

To infer what others know, we must consider under what epistemic states their actions were both rational and probable. We test whether preschoolers can compare the probability of different actions (and outcomes) under different epistemic states—and use this to evaluate what others know. Specifically, four- to six-year-olds (n=90) were asked to help evaluate an agent’s knowledge state by asking the agent to complete either an “undiagnostic” task (where success was assured), or a “diagnostic task” (where the probability of random success was low). By age six, children understood that the “diagnostic” task would more likely reveal the agent’s knowledge state; four- and five-year-olds had no reliable preference, although children in all age groups understood that the “diagnostic” task was harder. These results suggest that, by the end of preschool, children understand how agents’ epistemic states and environment jointly determine success—considering whether agents’ actions imply knowledge, or just luck.

Development of Self and Other’s Body Perception: Effects of Familiarity and Gender on How Children Perceive Adults

Our ability to perceive our own and other people’s bodies is critical to the success of social interactions. Research has shown that adults have a distorted perception of their own body and those of other adults. However, these studies ask perceivers to make estimates for adults who have a similar bodily make-up to their own. This study explored the developmental progression in how children perceive their own body (5- to 12-year-olds) (Exp1) and whether children have similar distortions as adults when estimating the dimensions of adult bodies both unknown (Exp2) and familiar to them (Exp3). Overall, children showed similar distortions to those found in adults’ estimations of their own bodies (i.e., limbs with a smaller density of sensory receptors showed a larger error than those with a higher density), and their perception of adults’ bodies showed less distortion when perceiver and model were of the same gender, but not when the adult was familiar to the child.

Seeing is believing: testing an explicit linking assumption for visual world eye-tracking in psycholinguistics

Experimental investigation is fundamental to theory-building in cognitive science, but its value depends on the linking assumptions made by researchers about the mapping between empirical measurements and theoretical constructs. We argue that sufficient clarity and justification are often lacking for linking assumptions made in visual world eye-tracking, a widely used experimental method in psycholinguistic research. We test what we term the Referential Belief linking assumption: that the proportion of looks to a referent in a time window reflects participants’ degree of belief that the referent is the intended target in that time window. We do so by comparing eye-tracking data against explicit beliefs collected in an incremental decision task (Exp. 1), which replicates a scalar implicature processing study (Exp. 3 of Sun & Breheny, 2020). In Exp. 2, we replicate Sun and Breheny (2020) in a web-based eye-tracking paradigm using WebGazer.js. The results provide support for the Referential Belief link and cautious optimism for the prospect of conducting web-based eye-tracking. We discuss limitations on both fronts.

Language Proficiency Impacts the Benefits of Co-Speech Gesture for Narrative Understanding Through a Visual Attention Mechanism

Co-speech gestures can enhance a listener’s understanding of a spoken message, yet children show a greater benefit of gesture than adults (Hostetter, 2011). We explore whether this effect may be driven by language proficiency, and in turn, by differences in how children visually attend to gesture when they are processing narratives in their stronger vs. weaker language. Bilingual children were shown narratives with scripted gesture in their stronger and weaker languages, while their visual attention was monitored. Memory for narratives was then assessed. Our findings suggest language proficiency does affect the degree to which children benefit from co-speech gesture – children showed a greater boost from co-speech gesture when processing narratives in their weaker language. Results also suggest that greater attention to gesture when processing one’s weaker language may underlie this effect.

Evidential meaning of English clause-embedding verbs

English clause-embedding verbs can be used with an evidential meaning (Simons, 2007; Murray, 2017), modulating the degree to which the speaker is committed to the truth of the proposition in the embedded clause. For example, an utterance like “I think the movie starts at 4” can signal that the speaker is uncertain whether the proposition the movie starts at 4 is true, and would like to attenuate their claim. Previous research has provided detailed accounts of lexical and contextual features that give rise to evidential meanings — however, less is known about how widespread these uses are, whether they are part of the verb’s lexical meaning, or whether they emerge under certain pragmatic conditions. In this study, we addressed these questions by conducting two large-scale acceptability judgment experiments. In line with observations in the literature (Simons, 2007), we found that non-factive clause-embedding verbs are the most acceptable in evidential contexts. We also found, however, that even highly factive verbs can be acceptable as evidentials under favorable pragmatic conditions.

Two languages, one mind: the effects of language learning on motion event processing in early Cantonese-English bilinguals

Can learning a second language (L2) redirect what we perceive to be similar events? This study investigated how Cantonese-English bilinguals categorized and processed spontaneous motion when access to language ranged from maximal to minimal. In Experiment 1, participants verbalized the target events in either Cantonese or English right before making their similarity judgements. Results suggested that bilinguals patterned with English monolinguals in both lexicalization and conceptualization, irrespective of the language of operation. In Experiment 2, participants experienced verbal interference while making their decisions. Results showed that bilinguals followed an English-like way of event conceptualization, as indicated by their processing efficiency of manner and path. However, no cross-linguistic differences were found in speakers’ categorical preferences. The overall findings suggest that subtle typological differences between the L1 and L2 can restructure bilinguals’ cognitive behaviour, and that the magnitude of such impact is modulated by the degree of language involvement.

Pragmatic impacts on children’s understanding of exact equality

The distinctly human ability to both represent number exactly and develop symbolic number systems has raised the question of whether such number concepts are culturally constructed through symbolic systems. Although previous work with innumerate and semi-numerate groups has provided some evidence that understanding exact equality is related to numeracy, it is possible that previous failures were driven by pragmatic factors, rather than the absence of conceptual knowledge. Here, we test whether such factors affect performance on a test of exact equality in 3- to 5-year-old children by modifying previous methods to draw children’s attention to number. We find no effect of highlighting exact equality, either through framing the task as a “Number” game or as a “Sharing” game. Instead, we replicate previous findings showing a link between numeracy and an understanding of exact equality, strengthening the proposal that exact number concepts are facilitated by the acquisition of symbolic number systems.

Blame the Player and the Game

How do people assign credit for others’ actions? The Correspondence Bias — a classic bias in social psychology — holds that people are predisposed to attribute behaviors to dispositional, rather than situational, factors. However, recent work suggests that the pattern of data cited as evidence of a bias may be a natural consequence of attribution under uncertainty. Here we devise a novel “Bucket-Toss” task in which we can independently and parametrically manipulate and measure situation and disposition pressures to evaluate whether attributions to dispositions and situations are consistent with probabilistic inference. We find that as the strength of the situation or disposition is varied, attributions to the other (unobserved) cause follow roughly symmetric patterns of graded attribution. Together, these results confirm that social attribution appears to be largely consistent with unbiased inference under uncertainty.

Event visibility in sign language motion: Evidence from Austrian Sign Language (ÖGS)

This is the first kinematic investigation of articulator motion in Austrian Sign Language, which connects the kinesiology of sign production and linguistic markers of Aktionsart in the native language of the Deaf community in Austria. Our work used a 3D motion capture approach to sign language analysis to investigate the relationship between the semantics (event structure) of signed verbs and the kinematics of hand articulator movement. The data indicate that the underlying semantics of events in verb signs is reflected in sign duration and acceleration of the dominant hand during sign production. The finding that articulator dynamics (acceleration and deceleration of hand motion) characterizes the event structure in verb signs has significance for linguistic theories of visual communication and for understanding the relationship between iconicity in sign language and perceptual biases in meaning construction based on visual input.

Infinite use of finite means? Evaluating the generalization of center embedding learned from an artificial grammar

Human language is often assumed to make "infinite use of finite means" - that is, to generate an infinite number of possible utterances from a finite number of building blocks. From an acquisition perspective, this assumed property of language is interesting because learners must acquire their languages from a finite number of examples. To acquire an infinite language, learners must therefore generalize beyond the finite bounds of the linguistic data they have observed. In this work, we use an artificial language learning experiment to investigate whether people generalize in this way. We train participants on sequences from a simple grammar featuring center embedding, where the training sequences have at most two levels of embedding, and then evaluate whether participants accept sequences of a greater depth of embedding. We find that, when participants learn the pattern for sequences of the sizes they have observed, they also extrapolate it to sequences with a greater depth of embedding. These results support the hypothesis that the learning biases of humans favor languages with an infinite generative capacity.

Do gestures really facilitate speech production?

Why do people gesture when they speak? According to one influential proposal, the Lexical Retrieval Hypothesis (LRH), gestures facilitate speech production by helping people find the right spatial words. Do gestures also help speakers find the right words when they talk about abstract concepts that are spatialized metaphorically? If so, gesture prevention should increase disfluencies during speech about both literal and metaphorical space. We sought to conceptually replicate the finding that gesture prevention increases disfluencies in speech about literal space, which has been interpreted as evidence for the LRH, and to extend this pattern to speech about metaphorical space. Our large dataset provided no evidence that gestures facilitate speech production, even for speech about literal space. Upon reexamining past research, we conclude that there is, in fact, no reliable evidence that preventing gestures makes speech more disfluent. These findings challenge long-held beliefs about why people gesture when they speak.

Meaning in brains and machines: Internal activation update in large-scale language model partially reflects the N400 brain potential

The N400 brain potential has been used as a neural correlate of meaning-related processing in the brain, but its underlying computational mechanism is still not well understood. Although efforts to model the N400 as an update of a probabilistic representation of meaning have been promising, the limited scope of earlier models has restricted experiments to highly simplified sentences. Here, we expand modelling of the N400 to naturalistic sentences using a large-scale, state-of-the-art deep learning language model. We investigate the correspondence between updates in the internal state of the model and the N400 in one quantitative experiment and four qualitative experiments. Our findings suggest that activation updates in the model correspond to several N400 effects, but cannot account for all of them.

Category Learning is Shaped by the Multifaceted Development of Selective Attention

Selective attention allows adults to preferentially exploit input relevant to their goals. One critical role of selective attention is in adult category learning: adults can simplify the entities they encounter into groups of entities that they can treat as equivalent by focusing on category-relevant attributes, while filtering out category-irrelevant attributes. However, much category learning takes place during development, when selective attention substantially matures. We designed two experiments to disentangle the contributions of the focusing and filtering aspects of selective attention to category learning over development. Experiment 1 provided evidence that learning simple categories was accompanied by selective attention in both four-year-old and five-year-old children and in adults. Experiment 2 further provided evidence that only focusing contributed to selective attention in four-year-olds, whereas both focusing and filtering contributed to selective attention in five-year-olds and adults. Thus, category learning recruits different aspects of selective attention with development.

A conceptual framework for empathy and its application to investigate nonhuman animals

Do nonhuman animals (hereafter “animals”) possess empathy, and if so, to what degree? Can we develop a conceptual framework that allows us to characterize similarities and differences between implementations of empathy in humans and animals? We aim to answer these questions in two steps. First, we develop a new conceptual framework by distinguishing different levels of empathy, starting with paradigmatic cases of human empathy developing in human ontogeny. Second, we describe in detail which of these levels of empathy can be found in other species, based on animal studies. This approach allows a detailed characterization of the relation between empathy in humans and in other animals.

Predicting Learning and Knowledge Transfer in Two Early Mathematical Equivalence Interventions

Many students fail to develop adequate understanding of mathematical equivalence in early grades, which impacts later algebra learning. Work from McNeil and colleagues proposes that this failure is partly due to the format of traditional instruction and practice with highly similar problems, which encourages students to develop ineffective representations of problem types (McNeil, 2014; McNeil & Alibali, 2005). In the current study, we explore students’ learning trajectories in two matched equivalence interventions. We show that, relative to an active control, the principle-based treatment intervention gives rise to a greater number of successful learners, a designation that, in turn, leads to improved performance on distal transfer assessments. We further demonstrate a predictive relationship between students’ engagement with the intervention, via workbook completion, and likelihood of becoming a successful learner. Our findings have implications for early detection of learning and subsequent scaffolding for low-performing students.

Dynamic Perception Revealed by Cursor Movements and Hidden Markov Modeling

We explore the dynamic coordination of perception, decision, and action underpinning perceptual choices by recording cursor movements during a binary response task. Stimuli were presented sequentially to control the time-course of perception, and we utilized a Hidden Markov Model (HMM) to relate measured movements of the mouse cursor to latent cognitive processes. Stimuli were simple perceptual objects composed of two features, one of which was fully diagnostic of the correct response, while the other provided a probabilistic cue. The order of their arrival varied across trials, allowing us to manipulate the order of feature processing. The model builds upon response-time methods and makes predictions, every 10 milliseconds of each trial, about when individual features were perceived and how evidence accumulated towards a response.
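
A stripped-down version of such a model, a Gaussian-emission HMM whose forward pass yields state probabilities every 10 ms, is sketched below; the states, transition matrix, and emission parameters are invented for illustration and are not the fitted model from the study.

    # Sketch: forward algorithm of a toy Gaussian-emission HMM over a cursor feature
    # sampled every 10 ms (illustrative parameters only).
    import numpy as np
    from scipy.stats import norm

    A = np.array([[0.95, 0.04, 0.01],   # transitions: undecided, feature perceived, committing
                  [0.00, 0.93, 0.07],
                  [0.00, 0.00, 1.00]])
    pi = np.array([1.0, 0.0, 0.0])      # trials start in the "undecided" state
    mu = np.array([0.0, 0.3, 1.0])      # mean cursor velocity toward the chosen option
    sigma = 0.2

    def forward(obs):
        # Returns P(state | observations so far) at each 10 ms step.
        alpha = pi * norm.pdf(obs[0], mu, sigma)
        alpha /= alpha.sum()
        path = [alpha]
        for o in obs[1:]:
            alpha = (alpha @ A) * norm.pdf(o, mu, sigma)
            alpha /= alpha.sum()
            path.append(alpha)
        return np.array(path)

    velocity = np.concatenate([np.full(30, 0.0), np.full(30, 0.3), np.full(30, 1.0)])
    noisy = velocity + np.random.default_rng(2).normal(0, 0.2, velocity.size)
    print("final state probabilities:", np.round(forward(noisy)[-1], 2))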

Human Learners Integrate Visual and Linguistic Information in Cross-Situational Verb Learning

Learning verbs is challenging because it is difficult to infer the precise meaning of a verb when there are a multitude of relations that one can derive from a single event. To study this verb learning challenge, we used children's egocentric view collected from naturalistic toy-play interaction as learning materials and investigated how visual and linguistic information provided in individual naming moments as well as cross-situational information provided from multiple learning moments can help learners resolve this mapping problem using the Human Simulation Paradigm. Our results show that learners benefit from seeing children's egocentric views compared to third-person observations. In addition, linguistic information can help learners identify the correct verb meaning by eliminating possible meanings that do not belong to the linguistic category. Learners are also able to integrate visual and linguistic information both within and across learning situations to reduce the ambiguity in the space of possible verb meanings.

Children’s use of Reasoning by Exclusion to Track Identities of Occluded Objects

Reasoning by exclusion allows us to infer properties of unobserved objects from currently observed objects, formalized by P or Q, not P, therefore Q. Previous work suggested that, by age 3, children can use this kind of reasoning to infer the location of a hidden object after learning that another location is empty (e.g. Mody & Carey, 2016). In the current study, we asked whether children could use reasoning by exclusion to infer the identities of previously unobserved occluded objects in a task that required them to track the locations of multiple occluded objects. Forty-nine 4-7-year-olds viewed animated arrays of virtual “cards” depicting images which were then hidden by occluders. The occluders then swapped locations during the maintenance period. Children were asked to select which card was hidden in a probed location. During the encoding period, we manipulated whether children saw all the card faces (Face-up block) or all but one of the card faces (Exclusion block), for which children had to reason by exclusion to infer the target in half of the trials. We found that all children succeeded in the Face-up block, but only 6-year-olds succeeded in the Exclusion block when they had to deploy logical reasoning to identify a previously-unseen hidden target. Our results suggest that children’s ability to reason by exclusion to infer the identity of a hidden target while tracking multiple objects and locations may undergo protracted development.

Preference reversals between intertemporal choice and pricing

Preference reversals in risky choice -- where people favor low-risk prospects in binary choice but assign higher prices to high-risk prospects -- have led to models of response processes that differentiate pricing from choice. Theories of intertemporal choice do not distinguish between response processes, assuming instead that eliciting choices or prices will lead to the same inferences about people’s preferences for delayed outcomes. Here, we show that this assumption is incorrect. Participants in a price-choice experiment showed systematic preferences for smaller-sooner (SS) over larger-later (LL) options in binary choice, but reversed this apparent preference by pricing the exact same LL options higher than the SS options. This reversal in pricing results in less impulsive behavior, suggesting that pricing frames may reduce choice impulsivity. To explain these diverging price and choice findings in a common framework, we propose a variant of a pricing model from risky choice that accommodates these effects.

Extent of bilingual experience in modulating young adults’ processing of social-communicative cues in a cue integration task: An eye-tracking study

This study investigated whether bilingual experience would influence young adults’ integration of multiple cues to infer a speaker’s intention. Using a cue-integration task coupled with eye-tracking, we examined the effects of balanced language usage on young bilingual adults’ ability to integrate multiple cues in determining a speaker’s referential intent. Behavioral and eye-tracking findings indicated that balanced bilinguals were better than unbalanced bilinguals at identifying a target object in the three-cue condition (i.e., contextual, semantic and gaze cues were shown). However, there were no group differences in the two-cue condition (i.e., only contextual and semantic cues were shown). Our results suggest that the extent of bilingualism could modulate the sensitivity to and integration of multiple cues in the intention-inference process. We argue that balanced bilinguals’ greater exposure to complex communicative situations could enhance their ability to utilize multiple cues to understand a speaker’s intention.

The Role of Verbal and Visuospatial Working Memory in Supporting Mathematics Learning With and Without Hand Gesture

Gesture during math instruction supports learning in children and adults. The mechanism by which gesture enhances learning across development is not known. One possibility is that instruction with gesture engages different cognitive abilities during learning than instruction without gesture. Our previous work showed a positive relationship between visuospatial working memory capacity and learning only when gesture was present, and a positive relationship between verbal working memory capacity and learning only when gesture was absent, suggesting that gesture may be processed using visuospatial working memory. The aim of the current experiment was to replicate and extend these prior findings with new instruction, random assignment to instructional condition, and improved measures of both learning and cognitive abilities. Participants observed video instruction in a novel mathematical system that either included speech and gesture or only speech. After instruction, participants completed a posttest to assess learning. Finally, participants completed tasks to assess verbal and visuospatial working memory capacity as well as fluid and crystallized intelligence. We found that gesture benefitted learning in adults. Contrary to previous findings, both learning with gesture and learning without gesture were supported by visuospatial working memory. These findings suggest that changing characteristics of instruction does not necessarily change the cognitive resources supporting learning in a novel math task.

Why and how to study the impact of perception on language emergence in artificial agents

The study of emergent languages in deep multi-agent simulations has become an important research field. While targeting different objectives, most studies focus on analyzing properties of the emergent language—often in relation to the agents’ inputs—ignoring the influence of the agents’ perceptual processes. In this work, we use communication games to investigate how differences in perception affect emergent language. Using a conventional setup, we train two deep reinforcement learning agents, a sender and a receiver, on a reference game. However, we systematically manipulate the agents’ perception by enforcing similar representations for objects with specific shared features. We find that perceptual biases of both sender and receiver influence which object features the agents’ messages are grounded in. When uniformly enforcing the similarity of all features that are relevant for the reference game, agents perform better and the emergent protocol better captures conceptual input properties.

Human Learning from Artificial Intelligence: Evidence from Human Go Players’ Decisions after AlphaGo

Although Artificial Intelligence (AI) is expected to outperform humans in many domains of decision-making, the process by which AI arrives at its superior decisions is often hidden and too complex for humans to fully grasp. As a result, humans may find it difficult to learn from AI, and accordingly, our knowledge about whether and how humans learn from AI is also limited. In this paper, we aim to expand our understanding by examining human decision-making in the board game Go. Our analysis of 1.3 million move decisions made by professional Go players suggests that people learned to make decisions like AI after they observed the reasoning processes of AI, rather than its mere actions. Follow-up analyses compared the decision quality of two groups of players: those who had access to AI programs and those who did not. In line with the initial results, decision quality significantly improved for the players with AI access after they gained access to the reasoning processes of AI, but not for the players without AI access. Our results demonstrate that humans can learn from AI even in a complex domain where the computation process of AI is also complicated.

Fast and Flexible: Human program induction in abstract reasoning tasks

The Abstraction and Reasoning Corpus (ARC) is a collection of program induction tasks that was recently proposed by Chollet (2019) as a measure of machine intelligence. Here, we report a preliminary set of results from a behavioral study of humans solving a subset of tasks from ARC (40 out of 1000). We found that humans were able to infer the underlying program and generate the correct test output for a novel test input example, with an average of 84% of tasks solved per participant, and with 65% of tasks being solved by more than 80% of participants. Additionally, we find interesting patterns of behavioral consistency and variability across the action sequences people used to generate their responses, the natural language descriptions they used to describe the rule for each task, and the errors they made. Our findings suggest that people can quickly and reliably determine the relevant features and properties of a task to compose a correct solution, despite limited experience in this domain. This dataset offers useful insights for designing AI systems that can solve abstract reasoning tasks such as ARC with the fluidity of human intelligence.

Uncovering the Metricity of Representational Spaces in the Brain: Evidence from Colors and Letters

An ongoing debate about the structure of conceptual space is based on two competing mathematical theories of similarity that make distinct predictions about the structure of mental representations and how to model the representational space they are stored in. These are known as metric (Shepard, 1962) and ultrametric (Tversky, 1977) theories, modeled by multidimensional scaling and additive trees respectively. Turning to the brain to resolve this conflict, we propose a computational framework to assess behavioral and neural data’s underlying structure and investigate whether the behaviorally known spaces for colors (metric) and letters (ultrametric) can be reproduced from neural data. Our results show that the metric color wheel can be reproduced from brain area V4, but that neural activations of the letters from extrastriate cortex (V2-V5) are also metric instead of being ultrametric. Finally, we discuss three possibilities for the brain’s similarity structure, including a potential metric bias.
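
To illustrate the distinction at issue, the sketch below counts violations of the ordinary triangle inequality (metric) and of the stronger, max-based triangle inequality (ultrametric) in a dissimilarity matrix; the matrix is random and purely illustrative, not the behavioral or neural data analyzed in the study.

    # Sketch: metric vs. ultrametric checks on a toy dissimilarity matrix.
    import itertools
    import numpy as np

    rng = np.random.default_rng(3)
    d = rng.uniform(0.1, 1.0, size=(6, 6))
    d = (d + d.T) / 2                      # symmetrize
    np.fill_diagonal(d, 0.0)

    metric_violations = ultrametric_violations = 0
    for i, j, k in itertools.permutations(range(6), 3):
        if d[i, j] > d[i, k] + d[k, j] + 1e-12:          # triangle inequality
            metric_violations += 1
        if d[i, j] > max(d[i, k], d[k, j]) + 1e-12:      # strong (ultrametric) inequality
            ultrametric_violations += 1

    print("triangle-inequality violations:", metric_violations)
    print("ultrametric violations:", ultrametric_violations)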

Auditory, temporal, and visual sensory discrimination advantage of musicians

Literature on sensory discrimination suggests that it relies on two separate abilities, one related to the processing of auditory-temporal stimuli, and the other involved in the processing of non-temporal visual stimuli. Musical training is associated with structural and functional adaptations in the brain, which improve sensory processing. However, to date the advantage of musicians has been particularly evident in auditory and temporal tasks (those related to the perception of music). This study aimed to investigate potential advantages of musicians not only in the ability to discriminate auditory and temporal stimuli, but also with regard to visual discrimination. Nine adaptive stimulus discrimination tasks were administered to 56 musicians and 54 non-musicians, with both groups matched on working memory capacity. The musicians displayed better discrimination scores in each modality, including the visual one. The results support the view of modality-independent perceptual benefits resulting from prolonged musical training.

How do blind people know that blue is cold? Distributional semantics encode color-adjective associations.

Certain colors are strongly associated with certain adjectives (e.g. red is hot, blue is cold). Some of these associations are grounded in visual experiences like seeing hot embers glow red. Surprisingly, many congenitally blind people show similar color associations, despite lacking all visual experience of color. Presumably, they learn these associations via language. Can we detect these associations in the statistics of language? And if so, what form do they take? We apply a projection method to word embeddings trained on corpora of spoken and written text to identify color-adjective associations as they are represented in language. We show that these projections are predictive of color-adjective ratings collected from blind and sighted people, and that the effect size depends on the training corpus. Finally, we examine how color-adjective associations might be represented in language by training word embeddings on corpora from which various sources of color-semantic information are removed.
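
The projection method can be illustrated with a few lines of linear algebra; the tiny hand-made vectors below stand in for real trained embeddings, so the numbers mean nothing beyond showing the computation.

    # Sketch: projecting color-word vectors onto an adjective axis derived from embeddings
    # (hypothetical 4-d vectors in place of real trained embeddings).
    import numpy as np

    emb = {
        "hot":  np.array([ 1.0, 0.1, 0.0, 0.2]),
        "cold": np.array([-1.0, 0.0, 0.1, 0.2]),
        "red":  np.array([ 0.7, 0.3, 0.1, 0.1]),
        "blue": np.array([-0.6, 0.2, 0.2, 0.1]),
    }

    axis = emb["hot"] - emb["cold"]        # adjective axis running from cold to hot
    axis /= np.linalg.norm(axis)

    for color in ("red", "blue"):
        score = float(emb[color] @ axis)   # signed projection onto the axis
        print(f"{color}: {score:+.2f} (positive = hotter, negative = colder)")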

Modeling artificial category learning from pixels: Revisiting Shepard, Hovland, and Jenkins (1961) with deep neural networks

Recent work has paired classic category learning models with convolutional neural networks (CNNs), allowing researchers to study categorization behavior from raw image inputs. However, this research typically uses naturalistic images, which assess participant responses to existing categories; yet much of traditional category learning research has focused on using novel, artificial stimuli to examine how people acquire categories in the first place. In this work, we pair a CNN with ALCOVE (Kruschke, 1992), a well-known exemplar model of categorization, and examine whether this model can reproduce the classic type ordering effect from Shepard, Hovland, and Jenkins (1961) on raw images rather than abstract features. We examine this question with a variety of CNN architectures and image datasets and compare ALCOVE-CNN to two other models that lack certain key features of ALCOVE. We found that our ALCOVE-CNN model could reproduce the type ordering effect more often than the other models we tested, but only in limited situations. Success varied greatly across the configurations we tested, suggesting that the feature representations provided by CNNs impose strong constraints on properly capturing this effect.
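
For readers unfamiliar with ALCOVE, the sketch below shows its exemplar-based forward pass (attention-weighted city-block similarity followed by a softmax over category activations), applied to a generic feature vector such as one a CNN might produce. The parameter values and random features are placeholders, and the learning rules for attention and association weights are omitted.

```python
import numpy as np

def alcove_response(stimulus, exemplars, attention, assoc, c=1.0, phi=2.0):
    """Forward pass of ALCOVE's exemplar model (Kruschke, 1992).

    stimulus:  feature vector for the current item (e.g., CNN features)
    exemplars: (n_exemplars, n_features) stored exemplar representations
    attention: per-feature attention weights
    assoc:     (n_exemplars, n_categories) association weights
    Returns softmax choice probabilities over categories.
    """
    dist = np.abs(exemplars - stimulus) @ attention   # attention-weighted city-block distance
    hidden = np.exp(-c * dist)                        # exemplar activations
    out = hidden @ assoc                              # category activations
    expd = np.exp(phi * out)
    return expd / expd.sum()

# Toy usage with random values standing in for CNN features:
rng = np.random.default_rng(4)
probs = alcove_response(
    stimulus=rng.normal(size=8),
    exemplars=rng.normal(size=(6, 8)),
    attention=np.ones(8) / 8,
    assoc=rng.normal(size=(6, 2)),
)
print(probs)
```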

Modeling procrastination as rational metareasoning about task effort

Current theories of procrastination argue that people put things off into the future with the expectation that they will be better able to do them later. In this paper, we rationalize such expectations within the framework of evidence accumulation models of the choice process. Specifically, we show that it is rational for observers to adopt lower decision thresholds for choices with weak evidence for any alternative, and that observers learning to estimate optimal decision thresholds for tasks that involve decisions will find it reasonable to put the tasks off until the threshold has been sufficiently lowered by time-varying urgency. We designed a computational model and an experiment to differentiate our theory from more general expectancy-based temporal motivation accounts. Both simulation and experimental results support our proposal, indicating a large role for choice difficulty in people's self-assessed estimates of how likely they are to procrastinate on any given task.
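
A toy illustration of the core idea, evidence accumulation toward a bound that is lowered over time by an urgency signal, is sketched below. The collapsing-bound form and all parameter values are assumptions for illustration, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(0)

def time_to_decision(drift, threshold0, urgency, dt=0.01, max_t=50.0):
    """Simulate one evidence-accumulation trial with a collapsing bound.

    The effective threshold shrinks over time (threshold0 - urgency * t),
    so weak-evidence choices (small |drift|) tend to be resolved late,
    after urgency has lowered the bound: a toy analogue of putting a
    decision off until it becomes easy enough to make.
    """
    x, t = 0.0, 0.0
    while t < max_t:
        bound = max(threshold0 - urgency * t, 0.05)
        if abs(x) >= bound:
            return t
        x += drift * dt + rng.normal(0.0, np.sqrt(dt))
        t += dt
    return max_t

# Weak evidence -> later decisions, on average:
weak = np.mean([time_to_decision(0.05, 2.0, 0.05) for _ in range(200)])
strong = np.mean([time_to_decision(1.0, 2.0, 0.05) for _ in range(200)])
print(f"mean decision time: weak evidence {weak:.2f}, strong evidence {strong:.2f}")
```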

A Deep Gaze into Social and Referential Interaction

In this study, we explicitly code and study the social, referential, and pragmatic features of gaze in spontaneous human-human dyadic interaction, providing novel observations that can be implemented in a machine to improve multimodal human-agent dialogue. Gaze is an important non-verbal social signal that carries attentional cues about where to look and provides information about others' intentions and future actions. In this work, various types of gaze behaviour are annotated in detail along with speech to explore the meaning of temporal patterns in gaze cues and their co-relations. Considering that 80% of the total stimuli perceived by the brain are visual, gaze behaviour is complex and challenging to model; hence, implementing human-human gaze cues in avatars and robots could improve human-agent interaction and make it more natural.

Regularization of nouns due to drift, not selection: An artificial-language experiment

Corpus data suggests that frequent words have lower rates of replacement and regularization. It is not clear, however, whether this holds due to stronger selection against innovation among high-frequency words or due to weaker drift at high frequencies. Here, we report two experiments designed to probe this question. Participants were tasked with learning a simple miniature language consisting of two nouns and two plural markers. After exposing plural markers to drift and selection of varying strengths, we tracked noun regularization. Regularization was greater for low- than for high-frequency nouns, with no detectable effect of selection. Our results therefore suggest that lower rates of regularization of more frequent words may be due to drift alone.

Are people still smarter than machines? If so, why?

The last few years have witnessed amazing breakthroughs in machine intelligence, using systems that rely on neural networks as advocated in Parallel Distributed Processing (Rumelhart, McClelland et al., 1986). Yet in this talk, I will argue, in agreement with Lake et al. (2017) and others, that we still have a long way to go before any machine has truly captured human-like cognitive and learning abilities. Unlike Lake et al., I will argue that we should seek the reasons for many of the amazing achievements of human intelligence not in built-in biases or special-purpose start-up software, but in a fuller appreciation of the roles of culture and experience. I will argue for a central role for culturally constructed formal systems as powerful tools that extend human abilities beyond what can be achieved without these resources. I will also argue for a central role of language-based instruction and explanation.

Respect the code: Speakers expect novel conventions to generalize within but not across social group boundaries

Speakers use different language to communicate with partners in different communities. But how do we learn and represent which conventions to use with which partners? In this paper, we argue that solving this challenging computational problem requires speakers to supplement their lexical representations with knowledge of social group structure. We formalize this idea by extending a recent hierarchical Bayesian model of convention formation with an intermediate layer explicitly representing the latent communities each partner belongs to, and derive predictions about how conventions formed within a group ought to extend to new in-group and out-group members. We then present evidence from two behavioral experiments testing these predictions using a minimal group paradigm. Taken together, our findings provide a first step toward a formal framework for understanding the interplay between language use and social group knowledge.

Characterizing Variability in Shared Meaning through Millions of Sketches

The study of mental representations of concepts has historically focused on the representations of the “average” person. Here, we shift away from this aggregate view and examine the principles of variability across people in conceptual representations. Using a database of millions of sketches by people worldwide, we ask what predicts whether people converge or diverge in their representations of a specific concept, and which kinds of concepts tend to be more or less variable. We find that larger and more dense populations tend to have less variable representations, and concepts high in valence and arousal tend to be less variable across people. Further, two countries tend to have people with more similar conceptual representations when they are linguistically, geographically, and culturally similar. Our work provides the first characterization of the principles of variability in shared meaning across a large, diverse sample of participants.

Towards a Cognitive Model of Collaborative Memory

While humans routinely encode and retrieve memories in groups, the bulk of our knowledge of human memory comes from paradigms with individuals in isolation. The primary phenomenon of interest within the relatively new field of collaborative memory is collaborative inhibition: the tendency for collaborative groups to underperform in free recall tasks compared to nominal groups of the same size. This effect has been found in a variety of materials and group compositions (Rajaram & Pereira-Pasarin, 2010). However, the majority of research in this field is guided by verbal theories without formal computational models. In this paper we adapt the Search of Associative Memory (SAM; Raaijmakers & Shiffrin, 1981) model to collaborative free recall. We present a framework to scale SAM to collaborative paradigms with multiple SAM models working together. Our simulation results with the collaborative SAM model suggest that retrieval disruption, responsible for the part-set cuing effect in individuals, is also the cause of collaborative inhibition when multiple models are working together. Our work provides an existence proof that SAM can act as a unified theory to explain both individual and collaborative memory effects, and offers a framework for future predictions of scaling to increased group sizes, shared knowledge, and spread of false memories.

Recognition of Minimal Pairs in (Un)predictive Sentence Contexts in Two Types of Noise

Top-down predictive processes and bottom-up auditory processes interact in speech comprehension. In background noise, the acoustic signal is degraded. This study investigated the interaction of these processes in a word recognition paradigm using high and low predictability sentences in two types of background noise and using phonetically controlled contrasts. Previous studies have reported false hearing, but have not provided insight into what phonetic features are most prone to false hearing. We here systematically explore this issue and find that plosives lead to increased false hearing compared to vowels. Furthermore, this study on German for the first time replicates the overall false hearing effect in young adults for a language other than English.

Top-Down Effects on Anthropomorphism of a Robot

Anthropomorphism, or the attribution of human mental states and characteristics to non-human entities, has been widely demonstrated to be cued automatically by certain bottom-up appearance and behavior features in machines. The potential for top-down effects to influence anthropomorphism remains underexplored, even though most people's exposure to robots prominently features linguistic descriptions, e.g., in common discourse, public media, and product advertising. The results of this online experiment suggest that top-down linguistic cues increase anthropomorphism of a robot, and that these top-down cues may be as important an influence as bottom-up cues. Moreover, these results suggest that this increased anthropomorphism is associated with increased unwarranted expectations of the robot's capabilities and increased moral regard for the robot. As robots and other machines become more integrated into human society, it becomes increasingly important to understand the extent to which top-down influences matter for our thought, talk, and treatment of robots.

Visuo-Locomotive Update in the Wild: The Role of (Un)Familiarity in Choice of Navigation Strategy, and its Application in Computational Spatial Design

We study active human visuo-locomotive experience in everyday navigation from the viewpoints of environmental familiarity, embodied reorientation, and (sensorimotor) spatial update. Following a naturalistic, in situ, embodied multimodal behaviour analysis method, we conclude that familiar users rely on environmental cues as a navigation aid and exhibit proactive decision-making, whereas unfamiliar users rely on manifest cues, are late in decision-making, and show no sign of sensorimotor spatial update. Qualitative analysis reveals that both groups are able to sketch-map their route and consider path integration: i.e., conscious spatial representation updating was possible but not preferred during active navigation. Overall, the experimental task did not trigger automatic or reflex-like spatial updating, as subjects preferred strategies involving memory of perceptual cues and available manifest cues instead of relying on motor simulation and continuous spatial update. Building on these behavioural outcomes, we also outline applications in the computational modelling of navigation within cognitive technologies for architectural design synthesis.

Speakers Use More Informative Referring Expressions to Describe Surprising Events

Production of referring expressions (the dog, it, Snoopy) is a complex process regulated by a combination of linguistic and cognitive constraints. In this paper, we explore the impact of world knowledge on the types of references speakers produce, focusing on the predictability of event progressions. We argue that speakers are more likely to use a full noun phrase, rather than drop the subject or use a pronoun, when they describe an event progression they find surprising. In order to avoid the influence of distributional properties of event descriptions, we created an artificial world and trained subjects to recognize typical collision-event progressions within it. Speakers then described novel scenes, which either conformed to their expectations or violated them, in a free production experiment. The results reveal that unpredictable event progressions lead to a more frequent production of full noun phrases, in contrast to reduced linguistic expressions (pronouns and null subjects). We conclude that speakers choose more informative descriptions to talk about surprising events.

Emotions in Games: Toward a Unified Process-Level Account

Strategic decision-making is chiefly studied in behavioral economics using multi-agent games. Decades of empirical research has revealed that emotions play a crucial role in strategic decision-making, calling into question the “emotionless” homo economicus. In this work, we present a unified process-level account of a broad range of empirical findings on the effect of emotions in Prisoner’s Dilemma and Ultimatum games—the two most studied games in behavioral sciences. Under the empirically well-supported assumption that emotions modulate loss aversion, we show that Nobandegani et al.’s (2018) sample-based expected utility model can account for the effect of emotions on: (i) cooperation rate in Prisoner’s Dilemma, and (ii) the rejection rate of unfair offers in the Ultimatum game. We conclude by discussing the implications of our work for emotion research, and for developing a unified process-level account of the role of emotions in strategic decision-making.

When Does an Individual Accept Misinformation?

A new phenomenon is the spread and acceptance of "fake news" at the level of the individual user, facilitated by social media such as Twitter. So far, state-of-the-art socio-psychological theories and cognitive models focus on explaining how the accuracy of fake news is judged on average, with little consideration of the individual. This paper takes the analysis to a new level: a breadth of core models are comparatively assessed on their predictive accuracy for the individual decision maker, i.e., how well a model can predict an individual's decision before the decision is made. Such an analysis requires the raw responses of each individual as well as the implementation and adaptation of theories so that they predict individual responses. We used two previously collected large data sets with a total of 3309 participants and searched for, analyzed, and refined existing classical and heuristic modeling approaches. The results suggest that classical reasoning models, sentiment analysis models, and heuristic approaches can best predict the "Accept" or "Reject" response of a person. A hybrid model that combines these models outperformed all individual models, pointing to an adaptive toolbox.

Feature encoding modulates cue-based retrieval: Modeling interference effects in both grammatical and ungrammatical sentences

Studies on similarity-based interference in subject-verb number agreement dependencies have found a consistent facilitatory effect in ungrammatical sentences but no conclusive effect in grammatical sentences. Existing models propose that interference is caused either by a faulty representation of the input (encoding-based models) or by difficulty in retrieving the subject based on cues at the verb (retrieval-based models). Neither class of model captures the observed patterns in human reading time data. We propose a new model that integrates a feature encoding mechanism into an existing cue-based retrieval model. Our model outperforms the cue-based retrieval model in explaining interference effect data from both grammatical and ungrammatical sentences. These modeling results yield a new insight into sentence processing: encoding modulates retrieval. Nouns stored in memory undergo feature distortion, which in turn affects how retrieval unfolds during dependency completion.

Malleability of Intelligence through Chess Training: A Two-Year Empirical Study

The study analyzed the effect of 2-year systematic chess training on the IQ of schoolchildren. A pretest–posttest with control group design was used with randomly selected children of both genders studying in four city schools (grades 3–9). The experimental group (N = 80) underwent weekly chess training for 2 years, while the control group (N = 77) was involved in extracurricular activities offered in school, such as cricket, football, and hockey. Both groups took part in these activities after school hours. Intelligence was measured by the Wechsler Intelligence Scale for Children (WISC-IV INDIA). Assessment was carried out prior to the chess training and after 1 and 2 years of training. ANCOVA revealed a significant increase of about 12 points in scores after both the first and second years of training. When a systematic in-school chess training program is offered, one could expect a significant increase in IQ.

Episodic Memory Cues in Acquisition of Novel Visual-Phonological Associations: a Webcam-Based Eye-Tracking Study

When learning to bind visual symbols to sounds, to what extent do beginning readers track seemingly irrelevant information such as a symbol’s position within a visual display? In this study, we used adult typical readers’ own webcams to track their eye movements during a paired associate learning task that arbitrarily bound unfamiliar characters with monosyllabic pseudowords. Overall, participants’ error rate in recognition (Phase 1) decreased as a function of exposure, but was not modulated by the episodic memory-based effect of ‘looking-at-nothing’. Moreover, participants’ lowest error rate in both recognition and recall (Phases 1 and 2) was associated with item consistency across multiple exposures, in terms of spatial and contextual properties (i.e., stimulus’ screen location and co-occurrences with specific distractor items during encoding). Taken together, our findings suggest that normally developing readers extract statistical regularities in the input during visual-phonological associative learning, leading to rapid acquisition of these pre-orthographic representations.

Why We Should Report Colorimetric Data In Every Paper

This is a modern horror story about an innocently misbehaving projector, and why we beseech everyone to report minimal colorimetric data about stimulus displays. We present anecdotal experience of configuring a projector to display video stimuli in a high-tesla MRI room, along with all the gotchas, (broken) technical assumptions, and theoretical rehashings that should be considered by every scientist who uses computers to display visual stimuli. The moral of our story is: (1) check that your monitor/projector is actually showing the colors and luminances that you think it is, (2) make explicit assumptions regarding the physical/perceptual space of your stimuli and how they relate to any model analysis you will perform. This is especially important when modeling non-human animals, since most equipment and data formats implicitly assume human perception. We show that innocent changes to display settings such as brightness reduction can cause dangerously unexpected results. Understanding and reporting colorimetric data in scientific publications is important for two reasons: (1) reproducibility, and (2) model fidelity.

Using Simulations to Understand the Reading of Rapidly Displayed Subtitles

Liao et al. (2020) reported an eye-movement experiment in which subtitles were displayed at three different rates, with a key finding being that, with increasing speeds, participants made fewer, shorter fixations and longer saccades. To understand why these eye-movement behaviors might be adaptive, we completed simulations using the E-Z Reader model (Reichle et al., 2012) to examine how subtitle speed might affect word identification and sentence comprehension, as well as the efficacy of six possible compensatory reading strategies. These simulations suggest that the imposition of a lexical-processing deadline and/or strategy of skipping short words may support reading comprehension in impoverished conditions.

Joint Action in Deaf and Hearing Toddlers: A Mobile Eye-Tracking Study

Infants experience the world through their actions with objects and their interactions with other people, especially their parents. Prior research has shown that school-age children with hearing loss experience poorer quality interactions with typically hearing parents, and difficulties in controlling their visual attention. In the current study, we used mobile eye-tracking to investigate parent-child interactions in toddlers with and without hearing loss. Parents and toddlers engaged in a goal-directed, interactive task that involved inserting coins into a slot and required joint coordination between the parent and the child. We examined the visual behaviors of the toddlers and the scaffolding behaviors of the parents. In contrast to previous work, preliminary findings reveal a pattern of potential similarities between deaf and hearing toddlers or their parents.

Cross-language structural priming in recurrent neural network language models

Recurrent neural network (RNN) language models that are trained on large text corpora have shown a remarkable ability to capture properties of human syntactic processing (Linzen & Baroni, 2021). For example, the fact that these models display human-like structural priming effects (Prasad, Van Schijndel, & Linzen, 2019; van Schijndel & Linzen, 2018) suggests that they develop implicit syntactic representations that may not be unlike those of the human language system. A rarely explored question is whether RNNs are also able to simulate aspects of human multilingual sentence processing (Frank, 2021) even though training RNNs on two or more languages simultaneously is technically unproblematic.

A Unifying Model of Grapheme-Color Associations in Synesthetes and Controls

Grapheme-color synesthetes experience linguistic symbols as having a consistent color (e.g., “The letter R is burgundy.”). Intriguingly, certain letters tend to be associated with certain colors, and these biases are not random: numerous properties of letters influence which letter is associated with which color. These influences, called “Regulatory Factors” (RFs), each explain some fraction of the variation in observed associations. No comprehensive model of the influences on grapheme-color associations exists: RFs have only been measured in isolation, are not always operationalized consistently, and often make competing predictions that cannot be accounted for in a univariate model. Here, we describe a statistical framework that integrates the predictions of all RFs into a single model, and thus yields a unified account of their influence on grapheme-color associations. Our model also links these predictions to measurable properties of language, offering a window into the multifactorial contributions to letter representation in the brain.

Causation by Ignorance

Epistemic states, i.e., what an agent knows or believes, play a crucial role in people's moral evaluations of the agent's actions. Whether and to what extent epistemic states also influence an agent's perceived causal contribution to an outcome remains the subject of debate. In three experiments, we investigate people's causal and counterfactual judgments about ignorant causal agents. We find that agents' epistemic states, the conditions of their ignorance, and their epistemic actions influence both how causal an agent is perceived to be and the kind of counterfactual alternatives people consider. We take these findings to indicate the crucial role of epistemic states in causal cognition and counterfactual models of causation.

Los Angeles Reading Corpus of Individual Differences: Pilot distribution and analysis

We introduce the LARC-ID, a pilot corpus of eye-movements obtained from subjects reading texts from a range of genres. Materials were presented in multiple paragraphs on the screen to more closely match naturalistic reading environments. Readers were encouraged to read for comprehension and enjoyment, engaging in various kinds of comprehension questions, including an open-ended reflection at the end of each text. Subjects also participated in a battery of individual difference measures, including those known to predict reading behavior in controlled experimental contexts, e.g., Rapid Automatized Naming (RAN; Denckla & Rudel, 1976), the author recognition (ART; Stanovich & West, 1989), and reading span (RSpan; Daneman & Carpenter, 1980) tasks. In addition to describing the central properties of the text and relationships between tasks in the battery, we present a sample analysis exploring how intrinsic lexical characteristics (length, frequency, and morphological complexity) interact with selected individual difference measures. The analysis provides the very first glimpse into what we hope will become a useful resource for reading researchers and educators. The entire corpus is freely available for unrestricted use.

On the Role of Low-level Linguistic Levels for Reading Time Prediction

It has been shown that complexity metrics computed by a syntactic parser are predictive of human reading time, which is an approximation of human sentence comprehension difficulty. Nevertheless, parsers usually take as input sentences that have already been processed or even manually annotated. We propose to study a more realistic scenario, where the various processing levels (tokenization, PoS and morphology tagging, lemmatization, syntactic parsing, and sentence segmentation) are predicted incrementally from raw text. To this end, we propose a versatile modeling framework, which we call the Reading Machine, that performs all of these linguistic tasks and allows cognitive constraints such as incrementality to be incorporated. We illustrate the behavior of this setting through a case study in which we test the hypothesis that complexity metrics computed at different processing levels predict human reading difficulty, and that applying cognitive constraints to the machine (e.g., incrementality) yields better predictions.

Visual scoping operations for physical assembly

Planning is hard. The use of subgoals can make planning more tractable, but selecting these subgoals is computationally costly. What algorithms might enable us to reap the benefits of planning using subgoals while minimizing the computational overhead of selecting them? We propose visual scoping, a strategy that interleaves planning and acting by alternately defining a spatial region as the next subgoal and selecting actions to achieve it. We evaluated our visual scoping algorithm on a variety of physical assembly problems against two baselines: planning all subgoals in advance and planning without subgoals. We found that visual scoping achieves comparable task performance to the subgoal planner while requiring only a fraction of the total computational cost. Together, these results contribute to our understanding of how humans might make efficient use of cognitive resources to solve complex planning problems.

Do you see what I see? A meta-analysis of the Dot Perspective Task

Recent research has found evidence for implicit theory of mind, suggesting that humans quickly and involuntarily compute the mental states of others. One highly influential task within this literature, known as the Dot Perspective Task (DPT), purports to demonstrate implicit visual perspective taking in adult subjects. However, some studies using variations of the DPT have challenged these findings, suggesting that the DPT does not demonstrate genuine perspective taking; instead, they argue that these results reflect simple attentional cueing. Additionally, some researchers have argued that the DPT is sensitive to unintended attentional and intentional factors. We report the preliminary findings of an ongoing meta-analysis which analyzes participant-level data from 23 experiments and 1381 individual subjects. We find evidence for both directional cueing and implicit perspective taking within the DPT, although the effects of directional cueing are significantly larger. Additionally, we find that the effects of perspective taking are sensitive to attentional and intentional factors. These results cast doubt upon much of the evidence which has been taken to demonstrate implicit theory of mind. At the same time, they suggest that future work may utilize a carefully controlled version of the DPT in order to measure genuine implicit theory of mind more accurately.

The Decorated Learning Environment: Simply Noise or an Opportunity for Incidental Learning?

Maintaining attention during instruction is challenging, as children face various sources of distraction (peers, announcements, noise) as well as competition from the visual environment itself. Prior studies found that decorated environments promote off-task behavior and reduce learning. Additionally, many classroom displays are not relevant to ongoing instruction. This raises the possibility that increasing alignment between displays and instructional content may afford opportunities for incidental learning, reducing the detrimental effects of environment-driven off-task behavior. To investigate this possibility, participants completed a lesson in which the alignment between the lesson content and displays was manipulated (relevant, educational but irrelevant, or no displays). Attention to the lesson and learning gains for content presented in the lesson and/or displays were measured. Results suggest younger children's learning can benefit from displays that reinforce the lesson content. However, there was no evidence of incidental learning from displays without additional lesson support. Implications for classroom design are discussed.

Multi-Level Linguistic Alignment in a Virtual Collaborative Problem-Solving Task

Co-creating meaning in conversation is challenging. Success is often determined by people's abilities to coordinate their language in strategic ways to signal problems and to align mental representations. Here we explore one set of grounding mechanisms, known as interactive linguistic alignment, that makes use of the ways people re-use, i.e., "align to," the lexical, semantic, and syntactic forms of others' utterances. In particular, we focus on the temporal development of multi-level linguistic alignment and examine how its expression is related to communicative outcomes within a unique collaborative problem-solving paradigm. The primary task, situated within an educational video game, requires creative thinking among three people where the paths to possible solutions are highly variable. We find that over time interactions are marked by decreasing lexical and syntactic alignment, with a trade-off of increasing semantic alignment. However, greater semantic alignment does not necessarily translate into better team performance. Overall, these findings provide greater clarity on the role of grounding mechanisms in complex and dynamic collaborative problem-solving tasks.

Improving Medical Image Decision Making by Leveraging Metacognitive Processes and Representational Similarity

Improving the accuracy of medical image interpretation is critical to improving the diagnosis of many diseases. Using both novices (undergraduates) and experts (medical professionals), we investigate methods for improving the accuracy of a single decision maker by aggregating repeated decisions from an individual in different ways. Our participants made classification decisions (cancerous versus non-cancerous) and confidence judgments on a series of cell images, viewing and classifying each image twice. We first applied the maximum confidence slating algorithm (Koriat, 2012), which leverages metacognitive ability by using the most confident response for an image as the 'final response'. We also examined algorithms that aggregated decisions based on image similarity, leveraging neural network models to determine similarity. We found maximum confidence slating improves classification accuracy for both novices and experts. However, aggregating responses on similar images improves classification accuracy for novices and not experts, suggesting differences in the decision mechanisms of novices and experts.
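
The maximum confidence slating rule itself is simple to state; the sketch below shows one minimal way it could be implemented for two passes over the same images. The data structures and toy values are hypothetical, intended only to illustrate the rule.

```python
def max_confidence_slating(first, second):
    """Combine two (decision, confidence) passes over the same images by
    keeping, for each image, the response given with higher confidence
    (a minimal version of Koriat's 2012 slating rule)."""
    final = {}
    for image_id in first:
        d1, c1 = first[image_id]
        d2, c2 = second[image_id]
        final[image_id] = d1 if c1 >= c2 else d2
    return final

# Hypothetical usage with toy data: values are (label, confidence 0-100)
pass1 = {"img1": ("cancer", 80), "img2": ("normal", 55)}
pass2 = {"img1": ("cancer", 60), "img2": ("cancer", 90)}
print(max_confidence_slating(pass1, pass2))
```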

Categorical perception of p-values

Traditional statistics instruction emphasizes a .05 significance level for hypothesis tests. Here, we investigate the consequences of this training for researchers' mental representations of probabilities: whether .05 becomes a boundary, i.e., a discontinuity of the mental number line, that alters their perception of differences between p-values. Graduate students (n = 25) with statistical training viewed pairs of p-values and judged whether they were 'similar' or 'different'. After controlling for covariates, participants were more likely and faster to judge p-values as 'different' when they crossed the .05 boundary (e.g., .047 vs. .052) than when they did not (e.g., .027 vs. .032). This categorical perception effect suggests that traditional statistical instruction creates a psychologically real divide between so-called statistically significant and non-significant p-values. Such a distortion is undesirable given modern approaches to statistical reasoning that de-emphasize dichotomizing p-values.

I can tell you know a lot, although I'm not sure what: Modeling broad epistemic inference from minimal action

Inferences about other people's knowledge and beliefs are central to social interaction. In many situations, however, it's not possible to be sure what other people know because their behavior is consistent with a range of potential epistemic states. Nonetheless, this behavior can give us coarse intuitions about how much someone might know, even if we cannot pinpoint the exact nature of this knowledge. We present a computational model of this kind of broad epistemic-state inference, centered on the expectation that agents maximize epistemic utilities. We evaluate our model in a graded inference task where people had to infer how much an agent knew based on the actions they chose. Critically, the agent's behavior was always under-determined, but nonetheless contained information about how much knowledge they possessed. Our model captures nuanced patterns in participant judgments, revealing a quantitative capacity to infer amorphous knowledge from minimal behavioral evidence.

Predicting Social Reopening Following COVID-19 Lockdown Using Bounded Rationality and Threshold Models

The exercise of reopening optional social spaces following the COVID-19 lockdowns presents each individual with a complex problem in determining whether or not to attend these spaces given how the risks of virus transmission scale with crowding. In order to tackle this problem while recognizing individual cognitive capacity limits, this paper used a simulation model based on the El Farol Bar Problem and generated a population of agents relying on simple predictive strategies to determine their attendance to a retail location. It was determined that the more heterogeneous the threshold for crowding among agents was, the less variance there was in overall attendance numbers. This stability in daily attendance comes at the expense of the ability of the most cautious members to enjoy recreational spaces as these locations become the realm of only those least concerned with potential crowding.
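
The sketch below illustrates the general mechanic of a threshold-based attendance model in the spirit of the El Farol Bar Problem, comparing homogeneous and heterogeneous crowding thresholds. The attendance rule and parameter values are simplified assumptions for illustration, not the paper's full model of boundedly rational predictive strategies.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(thresholds, days=200):
    """Toy El-Farol-style dynamics: each agent attends on day t+1 only if
    yesterday's attendance was below their personal crowding threshold."""
    n = len(thresholds)
    attendance = [n // 2]                      # arbitrary starting level
    for _ in range(days):
        going = np.sum(attendance[-1] < thresholds)
        attendance.append(int(going))
    return np.array(attendance[20:])           # drop burn-in

n_agents = 100
homogeneous = np.full(n_agents, 60.0)
heterogeneous = rng.uniform(20, 100, n_agents)

print("variance, homogeneous thresholds:  ", simulate(homogeneous).var())
print("variance, heterogeneous thresholds:", simulate(heterogeneous).var())
```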

Relationship between Delay Discounting and Risk Preference in Chimpanzees (Pan troglodytes) and Humans

Adaptive decisions require that decision makers factor in the subjective values of different possible outcomes and the probability of these outcomes occurring. Subjective values depend, among other things, on how far away an outcome is in time. This can be captured by assessing an individual's delay discounting of different options. An individual's risk preference also affects how attractive particular choice options appear to them. In humans, probability discounting and delay discounting are often related: people who show more risky behaviors also tend to be more impulsive and less patient. Based on such findings, single-process models of delay discounting and probability discounting have been suggested. In the current study, we tested whether this relationship is equally present in chimpanzees, one of humans' closest extant evolutionary relatives. We presented 23 chimpanzees with a patience task and a risky-choice task. The patience task was designed to explicitly distinguish between delay preference and self-control (i.e., the ability to wait out a given delay). Still, we found no strong correlations between risk and delay preferences. As this task has not been used with humans before, we implemented a computerized version and tested it in a sample of twenty adult participants. Initial results indicate that the task is well suited to capture patience, making it a promising candidate for behavioral delay discounting experiments in humans.

A Grounded Approach to Modeling Generic Knowledge Acquisition

We introduce and implement a cognitively plausible model for learning from generic language: statements that express generalizations about members of a category and are an important aspect of concept development in language acquisition (Carlson & Pelletier, 1995; Gelman, 2009). We extend ADAM, a computational framework designed to model grounded language acquisition, by introducing the concept network. This new layer of abstraction enables the system to encode knowledge learned from generic statements and to represent the associations between concepts learned by the system. Through three tasks that utilize the concept network, we demonstrate that our extensions to ADAM can acquire generic information and provide an example of how ADAM can be used to model language acquisition.

Variability in causal judgments

People’s causal judgments exhibit substantial variability, but the processes that lead to this variability are not currently understood. In this paper, we use a repeated-measures design to study the within-participant variability of conditional probability judgments in common-cause networks. We establish that these judgments indeed exhibit substantial within-participant variability. This variability differs by inference type and is related to the extent to which participants commit Markov violations. The consistency and systematicity of this variability suggest that it may be an important source of evidence for the cognitive processes that lead to causal judgments. The systematic study of both within- and between-person variability broadens the scope of behavior that can be studied in causal cognition and promotes the evaluation of formal models of the underlying process. The data and methods provided in this paper provide tools to enable the further study of within-participant variability in causal judgment.

A computational model for simulating the future using a memory timeline

The ability to learn temporal relationships and use that knowledge to simulate future events is among the most remarkable aspects of cognition. A recently introduced behavioral task, called Judgment of Imminence (JOI), combined with the well-known Judgment of Recency (JOR) task, pointed to a remarkable symmetry between the temporal organization of memory and prediction. The data were consistent with the hypothesis that both memory and prediction can be organized as a compressed mental timeline. This means that the past and future can be remembered or simulated sequentially relative to the present. The compression implies that events closer to the present, regardless of whether they are in the past or in the future, are represented more accurately than those further from the present. Here we used an existing JOR model based on a compressed memory timeline to build an associative representation that can learn temporal relationships and create a timeline of the future, which mirrors the timeline of the past. We show that this approach can simultaneously account for response times and accuracy in both JOR and JOI. This work provides a time-local, neural-level mechanistic account of how the temporal organization of memory can be used to learn the temporal structure of the world and simulate the future efficiently as a compressed mental timeline.

Readers' text skimming behavior changes with variation in working memory capacity

Text skimming is a common reading behavior in which readers scan text at a faster-than-normal rate in an attempt to form an understanding when they are unable to read at normal speed. Research has suggested that reading time varies across a skimmed text, guided by attention and comprehension goals. However, do individual differences in the ability to manage attention affect skimming? Are those better at managing attention (i.e., those with high working memory capacity; WMC) also better at managing text skimming? Two experiments were conducted in which participants who varied in WMC were asked to skim an unfamiliar expository text, both on a computer and while being eye-tracked. In both experiments, while participants did spend more time reading earlier portions of the text, this pattern interacted with WMC. Those higher in WMC balanced their reading effort more evenly across the entire text, suggesting that text skimming behavior is sensitive to differences in WMC.

Distraction in Semantic Analogies and Their Relationship with Abstract Reasoning

Three leading analogical reasoning paradigms were solved by 251 participants: scene analogies and pictorial A:B::C:D analogies (both semantically rich) and geometric analogies (semantically lean). Pictorial analogies included four types of lures among the response options (perceptual, categorical, semantic, relational). Moreover, distractors related to both B and C were introduced. Additionally, a fluid intelligence test was administered to examine the relationship between the paradigms and general reasoning ability. Results indicated that: (a) objects semantically related to C yielded the strongest distraction in four-term analogies, categorical and relational distractors yielded moderate effects, and perceptual distraction was negligible; (b) distractors related to B were relatively easy to ignore, suggesting that C is the primary object of reference during response selection; (c) whilst the three tasks correlated significantly without controlling for fluid intelligence, only the semantically rich tasks did so after fluid intelligence was accounted for, suggesting a common mechanism, independent of fluid intelligence, underlying the two.

Simulating the factors that correct the erroneous process of phonological generation in Japanese

Language acquisition is supported by an ability called phonological awareness, which allows children to become intentionally aware of phonological units. It is known that erroneous pronunciation appears during the formation of phonological awareness. To clarify children's internal processes during this formation, this research aims to examine the factors that correct an erroneous phonological generation process. To do this, we utilized the innate and experiential factors of the memory retrieval mechanism in the cognitive architecture ACT-R. Specifically, we performed simulations to examine the interaction tasks that contribute to the acquisition of phonological awareness. As a result, it was shown that repeating a single task causes incorrect convergence and that this convergence can be prevented by performing other kinds of learning between tasks. In the future, it will be necessary to examine learning between tasks that can be associated with real situations and to confirm the overall process of phonological awareness formation.

Discretisation and Continuity: Simulating the Emergence of Symbols in Communication Games

Signalling systems of various species (humans and non-human animals), as well as our world, exhibit both discrete and continuous properties. However, continuous meanings are not always expressed using continuous forms; instead, they are frequently categorised into discrete symbols. While discrete symbols are ubiquitous in communication, the emergence of discretisation from a continuous form space is not well understood. We investigate the emergence of discrete symbols by simulating the learning process of two agents that acquire a shared signalling system. The task is formalised as a reinforcement learning problem in a continuous form and meaning space. We identify two central causes for the emergence of discretisation: 1) sub-optimal signalling conventions and 2) a topological mismatch between form and meaning space. A long version of this paper has been accepted for publication in Cognition (International Journal of Cognitive Science).

A computational model of counting along a mental line

Are mental additions of single-digit numbers solved through direct retrieval from long-term memory or through persistent use of an automatized counting procedure along a mental line? In this paper, we present an experiment based on small additions along an artificial mental line, which tends to show that for very small addends (+2 to +4), counting may still be used at the end of three weeks of training. To investigate this issue, we developed the AutoCoP computational model, which describes how small additions could be solved based on attention, working memory, and experience. Simulations of AutoCoP based on this experiment showed that the effects detected at the behavioral level are reproduced and are consistent with the theory, which assumes the use of a counting procedure in experts.

Children's Novelty Preferences Depend on Information-Seeking Goals

Children are often drawn to novelty, but these preferences may depend on their goals. In two experiments (N = 302), we show that children have differing preferences for novelty when seeking information compared to when they are asked to prioritize other goals. In Experiment 1, 4-7-year-olds wanted to have typical items (e.g., a four-legged chair) and learn about atypical items (e.g., a ten-legged chair). In Experiment 2, 4-6-year-olds wanted to learn about foreign characters, but liked foreign and local characters equally. We propose that children prefer to learn about novel instances for the promise of new information, which is evident in at least two domains (artifacts and people). However, this preference diminishes when children are asked about who they like, and it reverses to a familiarity preference when choosing between artifacts to acquire. In sum, our findings suggest that children’s preferences for novelty versus familiarity are sensitive to different goals.

Unexpected guests: When disconfirmed predictions linger

Previous literature suggests that the language processor generates expectations about upcoming material. Several studies have found evidence for a prediction error cost in cases where the comprehender encountered not the predicted word but a plausible unexpected continuation instead. This cost is argued to be a result of an inhibitory process that suppresses activation of the originally predicted word. Others have found no such evidence for a prediction cost. In a probe recognition memory task, we find evidence for interference from an incorrectly predicted word, and in a self-paced reading study, we find evidence for facilitation when the originally predicted word is encountered later on. Taken together, our results provide evidence against a strong version of the suppression account, in which incorrectly predicted words are fully inhibited. Instead we argue in favor of a passive lingering activation account, in which activation for the disconfirmed prediction gradually decays over time.

Beliefs are most swayed by social prevalence under uncertainty

We rely heavily on information from the social world to inform our real-world beliefs. How is this social information used, and when is it most influential? We assess the role of one kind of social information, the prevalence of a belief, in belief updating. Using real-world pseudoscientific and conspiratorial claims, we show that increases in people’s estimates of the prevalence of a belief led to increases in their endorsement of said belief. Prevalence information elicited the strongest belief change when people were most uncertain of their initial belief, suggesting that people weigh social information rationally according to the strength of their initial evidence. We discuss the implications of our results in the context of the present misinformation epidemic.

Learning from Agentic Actions: Modelling Causal Inference from Intention

People have the fascinating ability to infer causality by observing other humans’ actions. We modelled this process using a Bayesian rational agent model and showed how people can reason about another agent’s beliefs and, by extension, infer the world’s causal structure. We compared the model’s predictions against humans’ causal judgements on a novel inference task. Participants (N = 171) were shown a dynamic scene depicting either a human agent, robot agent, or both agents acting on two objects sequentially before observing an outcome. After observing the human (vs the less intentional robot) agent, people were more likely to infer that both objects (in sequential order) caused the outcome. When two agents of different intentionality were shown, people favored the object that the intentional agent interacted with as the cause of the outcome. Our model captured these inference patterns well and revealed insights into reasoning about semi-intentional agents and multi-agent contexts.

Information sampling for contingency planning

From navigation in unfamiliar environments to career planning, people typically first sample information before committing to a plan. However, most studies find that people adopt myopic strategies when sampling information. Here we challenge those findings by investigating whether contingency planning is a driver of information sampling. To this aim, we developed a novel navigation task that is a shortest path finding problem under uncertainty of bridge closures. Participants (n = 109) were allowed to sample information on bridge statuses prior to committing to a path. We developed a computational model in which the agent samples information based on the cost of switching to a contingency plan. We find that this model fits human behavior well and is qualitatively similar to the approximated optimal solution. Together, this suggests that humans use contingency planning as a driver of information sampling.

A Model-Based Analysis of Changes in the Semantic Structure of Free Recall Due to Cognitive Impairment

Alzheimer’s disease leads to a decline in both episodic and semantic memory. Free recall tasks are commonly used in assessments designed to diagnose and monitor cognitive impairment, but tend to focus only on episodic memory. Our goal is to understand the influence of semantic memory on the sequence of free recall in a clinical data set. We develop a cognitive process model that incorporates the influence of semantic similarity and other stimulus properties on the order of free recall. The model also incorporates a decision process based on the Luce choice rule, allowing for different levels of response determinism. We apply the model to a real-world data set including free recall data from 2392 Alzheimer’s patients and their caregivers. We find that semantic similarity between items strongly influences the order of free recall, regardless of impairment. We also observe a trend for response determinism to decrease as impairment increases.
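
The Luce choice rule with a response-determinism parameter can be written compactly; the sketch below shows one common formulation in which retrieval strengths are raised to a power before normalisation. The toy strengths and the exponent-based parameterisation are illustrative assumptions, not necessarily the exact form used in the model described above.

```python
import numpy as np

def luce_choice_probs(strengths, determinism=1.0):
    """Luce choice rule over retrieval strengths.

    Raising strengths to a power before normalising controls response
    determinism: large exponents concentrate probability on the strongest
    item, while an exponent of 0 gives uniform (fully random) responding.
    """
    s = np.asarray(strengths, dtype=float) ** determinism
    return s / s.sum()

# Toy retrieval strengths (e.g., semantic similarity to the last recalled item):
strengths = [0.8, 0.5, 0.2]
for d in (0.0, 1.0, 4.0):
    print(f"determinism={d}: {np.round(luce_choice_probs(strengths, d), 3)}")
```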

Searching for the Cause: Search Behavior in Explanation of Causal Chains

Understanding cause and effect relationships gives us the power to produce desired effects and avoid negative outcomes. Despite the power of causal explanations, people often lack a full understanding of how causes relate to or produce their effects. In two experiments, we explored how people search for information to enrich their causal explanations of real-world phenomena when given the chance. Participants completed an information search task that provided a causal relationship in which they could seek out mechanistic information at different steps between the cause and the effect. We measured where people searched in the causal chain of events that made up the explanation. We found that, whether allowed to search freely (Experiment 1) or instructed that they must search for information (Experiment 2), participants consistently sought out information closest to the root cause in the explanation. We discuss implications for how to improve the teaching of new explanations to maximize the informational desires of the learner.

Rewiring the Wisdom of the Crowd

Digitally-enabled means for judgment aggregation have renewed interest in "wisdom of the crowd" effects and kick-started collective intelligence design as an emerging field in the cognitive and computational sciences. A keenly debated question here is whether social influence helps or hinders collective accuracy on estimation tasks, with recently introduced network theories offering a reconciliation of seemingly contradictory past results. Yet, despite a growing body of literature linking social network structure and the accuracy of collective beliefs, strategies for exploiting network structure to harness crowd wisdom are under-explored. In this paper, we introduce a potential new tool for collective intelligence design informed by such network theories: rewiring algorithms. We provide a proof of concept through agent-based modelling and simulation, showing that rewiring algorithms that dynamically manipulate the structure of communicating social networks can increase the accuracy of collective estimations in the absence of knowledge of the ground truth.
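
As a rough illustration of what a rewiring intervention might look like in an agent-based estimation setting, the sketch below re-draws each agent's neighbours between rounds of social averaging. The averaging rule, the random-reassignment rewiring, and all parameter values are illustrative assumptions; they are not the specific algorithms evaluated in the paper, and the printed errors only demonstrate the mechanics rather than reproduce its results.

```python
import numpy as np

rng = np.random.default_rng(2)

def collective_error(estimates, truth):
    return abs(estimates.mean() - truth)

def simulate(n=100, k=5, rounds=10, rewire=False, truth=1000.0):
    """Agents hold noisy estimates of `truth` and repeatedly move their
    estimate toward the mean of k network neighbours.  With rewire=True,
    each agent's neighbours are re-drawn at random every round, a toy
    stand-in for a rewiring algorithm that keeps influence from
    concentrating on a few well-connected individuals."""
    estimates = rng.lognormal(mean=np.log(truth), sigma=0.6, size=n)
    neighbours = np.array([rng.choice(n, size=k, replace=False) for _ in range(n)])
    for _ in range(rounds):
        if rewire:
            neighbours = np.array([rng.choice(n, size=k, replace=False) for _ in range(n)])
        social_mean = estimates[neighbours].mean(axis=1)
        estimates = 0.5 * estimates + 0.5 * social_mean
    return collective_error(estimates, truth)

print("error, static network:  ", simulate(rewire=False))
print("error, rewired network: ", simulate(rewire=True))
```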

Modeling the Mistakes of Boundedly Rational Agents Within a Bayesian Theory of Mind

When inferring the goals that others are trying to achieve, people intuitively understand that others might make mistakes along the way. This is crucial for activities such as teaching, offering assistance, and deciding between blame or forgiveness. However, Bayesian models of theory of mind have generally not accounted for these mistakes, instead modeling agents as mostly optimal in achieving their goals. As a result, they are unable to explain phenomena like locking oneself out of one's house, or losing a game of chess. Here, we extend the Bayesian Theory of Mind framework to model boundedly rational agents who may have mistaken goals, plans, and actions. We formalize this by modeling agents as probabilistic programs, where goals may be confused with semantically similar states, plans may be misguided due to resource-bounded planning, and actions may be unintended due to execution errors. We present experiments eliciting human goal inferences in two domains: (i) a gridworld puzzle with gems locked behind doors, and (ii) a block-stacking domain. Our model better explains human inferences than alternatives, while generalizing across domains. These findings indicate the importance of modeling others as bounded agents, in order to account for the full richness of human intuitive psychology.

Preferences in the quantified description of visual groups

Research suggests that people minimize the amount of effort used to generate natural language descriptions of visual scenes. In the case of visual scenes with multiple groups, recent work has found that people tend to generate quantitative descriptions that mention the number and cardinality of groups (e.g., "two groups of three limes"), but omit the total quantity (e.g., "six limes"). This finding suggests that people groupitize, that is, they more quickly determine the number of grouped items by rapid enumeration of subgroups rather than by slower item-by-item counting. A recent proposal predicts that during description, people exert less effort by encoding and reporting only information that is readily available to perception. In previously studied description tasks, people may have omitted the total quantity from their descriptions because of considerations of brevity and informativity. In this paper, we describe a study designed to test how individuals balance effort, brevity, and informativity when evaluating quantified descriptions. The experiment was designed to elicit more fine-grained preferences for descriptions using direct comparisons between two competing descriptive forms. The results suggest that perceptual effort plays a central role in how people describe grouped collections of items.

Is convenient secure? Exploring the impact of metacognitive beliefs in password selection

Recent research has examined which factors influence a user's password-setting practices, including emotions such as anger and risk-taking tendencies. However, research has shown that factors such as memorability and perceived memorability have a greater influence on password choice. Some recent work has found a negative correlation between the perceived memorability and the perceived security of passwords, particularly passphrases (which are technically more secure). It is unclear, however, whether this effect extends to groups with extensive experience in digital spaces (IT professionals, entrepreneurs, etc.). Furthermore, it has not been determined whether the correlation also holds for random, uncommonly worded, or structurally complex passphrases, as opposed to relatively less secure, common or simple passphrases. This study examines this problem using a diverse demographic sample and different categories of passphrases.

Need for speed: Applying ex-Gaussian modeling techniques to examine intra-individual reaction time variability in expert Tetris players

Studies have shown that video game players exhibit superior performance to non-video game players on a number of cognitive tasks. Comparisons between the groups have mainly relied on measures of central tendency such as the mean, without examining intra-individual differences in performance. In the present study, top-ranking Tetris players and novice Tetris players completed a cognitive reaction time (RT) task. Results show that the top-ranking players exhibit faster reaction times compared to novice players. Beyond the mean RT, we used the ex-Gaussian modeling technique and found differences in variability and attention between the two groups. Future studies can use modeling techniques such as the ex-Gaussian distribution to analyze the whole distribution at an individual level, beyond measures of central tendency, and further examine the behavioral differences between video game players and non-video game players.
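
The ex-Gaussian is the convolution of a Gaussian (mu, sigma) with an exponential (tau). A minimal sketch of fitting these parameters to one participant's reaction times, using scipy's exponnorm parameterization (K = tau / sigma) and simulated data in place of the real Tetris-player RTs:

```python
# Sketch of fitting ex-Gaussian parameters (mu, sigma, tau) to one participant's
# reaction times. The RT values below are simulated for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Simulated RTs in ms: Gaussian component plus an exponential tail.
rts = rng.normal(350, 40, 300) + rng.exponential(120, 300)

K, loc, scale = stats.exponnorm.fit(rts)
mu, sigma, tau = loc, scale, K * scale
print(f"mu={mu:.1f} ms, sigma={sigma:.1f} ms, tau={tau:.1f} ms")
# mu and sigma describe the central tendency and variability of the Gaussian part;
# tau indexes the slow tail, often linked to attentional lapses.
```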

Social meta-inference and the evidentiary value of consensus

Reasoning beyond available data is a ubiquitous feature of human cognition. But while the availability of first-hand data typically diminishes as the concepts we reason about become more complex, our ability to draw inferences seems not to. We may offset the sparsity of direct evidence by observing the statements of others, but such social meta-inference comes with challenges of its own. The strength of socially-provided evidence depends on multiple factors which themselves must be inferred, like the knowledge, social goals, and independence of the people providing the data. Here, we present the results of an experiment aimed at examining how people draw conclusions from information provided by others in the context of social media posts. By systematically varying the degree of consensus along with the number of people and distinct arguments involved, we are able to assess how much each factor affects the conclusions reasoners draw. Across a range of topics, we find that while people are influenced by the number of people on each side of an argument, the number of posts is the dominant factor driving belief revision. In contrast to well-established findings in simpler domains, we find that people are largely insensitive to the diversity of the arguments made.

Modelling the Sense-Making of Diagrams Using Image Schemas

We model the sense-making process of diagrams as conceptual blends of the diagrams’ geometric configurations with apt image schemas. We specify image schemas and geometric configurations with typed FOL theories. In addition, for the latter, we utilise some Qualitative Spatial Reasoning formalisms. Using an algebraic specification language, we can compute the conceptual blends of image schemas and geometry as category-theoretic colimits. We show through several examples how this model captures the sort of direct inferences we draw from diagrammatic representations due to our embodied cognition. We argue that this approach to sense-making might be of value for the design and application of diagrammatic and graphical visualisations, as well as for AI in general.

Explaining Algorithm Aversion with Metacognitive Bandits

Human-AI collaboration is an increasingly commonplace part of decision-making in real-world applications. However, how humans behave when collaborating with AI is not well understood. We develop metacognitive bandits, a computational model of a human's advice-seeking behavior when working with an AI. The model describes a person's metacognitive process of deciding when to rely on their own judgment and when to solicit the advice of the AI. It also accounts for the difficulty of each trial in making the decision to solicit advice. We illustrate that the metacognitive bandit makes decisions similar to those of humans in a behavioral experiment. We also demonstrate that algorithm aversion, a widely reported bias, can be explained as the result of a quasi-optimal sequential decision-making process. Our model does not need to assume any prior biases towards AI to produce this behavior.
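
The following toy sketch conveys the flavor of a bandit-style formulation of advice seeking; it is not the authors' model. It treats "rely on self" and "ask the AI" as two arms with Beta-distributed accuracy beliefs and chooses between them via Thompson sampling, omitting the per-trial difficulty term the abstract describes. All accuracy values are hypothetical.

```python
# Toy sketch of a "metacognitive bandit": on each trial the agent either relies
# on its own judgment or solicits the AI, tracking Beta posteriors over the
# accuracy of each option. Simplified illustration only.
import numpy as np

rng = np.random.default_rng(0)
true_acc = {"self": 0.65, "ai": 0.80}        # hypothetical accuracies
beliefs = {k: [1, 1] for k in true_acc}      # Beta(successes + 1, failures + 1)

choices = []
for trial in range(200):
    # Thompson sampling: sample an accuracy for each option, pick the larger.
    samples = {k: rng.beta(a, b) for k, (a, b) in beliefs.items()}
    choice = max(samples, key=samples.get)
    correct = rng.random() < true_acc[choice]
    beliefs[choice][0 if correct else 1] += 1
    choices.append(choice)

print("proportion of trials soliciting AI advice:",
      choices.count("ai") / len(choices))
```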

A crosslinguistic study of the acquisition of time words in English- and German-speaking children

Unlike English, German contains single words for “the day after tomorrow” (übermorgen) and “the day before yesterday” (vorgestern). How might these cross-linguistic differences influence children’s acquisition of time words? Prior work shows that English-speaking preschoolers learn the deictic status of time words (e.g., yesterday was in the past) long before learning their precise temporal locations (e.g., yesterday was exactly one day ago). Here we ask whether the set of time words influences children’s understanding of proximal (yesterday/tomorrow) and distal (day before yesterday/day after tomorrow) terms. English- and German-speaking 3- to 7-year-olds (N = 253) marked the temporal location of each term relative to today on a calendar template. While children in both language groups demonstrated equal knowledge of deictic status, German speakers were more likely to have precise meanings for proximal and distal items, suggesting that having more alternative time words available may help narrow the scope of children’s meanings.

Priming implicatures in young children

Children struggle to derive scalar implicatures. Initially this was thought to relate to a lack of the cognitive resources required for the computation. More recently, however, there has been a shift towards explanations based on alternatives (what a speaker could have said but did not). The argument is that children struggle to make the scalar implicature associated with some because they are unaware of its relationship with the stronger alternative all. We present a priming study that investigates this. We show that children's implicatures can be primed equally by alternatives in quantifier and ad hoc expressions. This suggests that children are aware of the scalar relationship between some and all, even if they choose not to derive the implicature.

Temporal Continuity and the Judgment of Actual Causation

Psychological theories of actual causation aim to characterize which of multiple causes of an event is singled out as the primary cause. We present one such theory called the continuity account of actual causation. The continuity account treats events as changes of state in continuous time and traces a sequence of state changes backwards through time from an event to its primary cause. The account is broadly compatible with the physical process view of causation, and we test it by asking people to identify the primary cause of events occurring in simple physical systems. An initial experiment confirms that root causes are more likely to be chosen as primary causes than are immediate causes. A second experiment demonstrates that root causes that have temporal continuity with the effect are preferred even when probability-raising accounts would predict otherwise. The results of both experiments are consistent with the continuity account, and suggest that inferences about changes of state in continuous time may underpin an important class of actual causation judgments.

Children’s Generalization of Novel Relational Nouns in Comparison Contexts: An Eye Tracking Analysis

Comparison settings (i.e., several stimuli introduced simultaneously) favor novel word learning and generalization. This study investigates the temporal dynamics of 6-year-olds' solving strategies in a relational noun (e.g., “x is the dax for y”) comparison and generalization task, using eye tracking data. We manipulated the conceptual distance between the task's items and recorded children's performance and eye tracking data. We analyze and interpret solving strategies following the predictions made by two hypotheses, the Projection-First and the Alignment-First. Eye tracking data clearly revealed that children first extract the relation from comparisons of items within a pair and then search for a match for the extracted relation, which confirms the predictions of the Projection-First hypothesis. Further analyses of error and correct trials suggest that errors occurred in the late (choice) phase of a trial.

Exploring the variable effects of frequency and semantic diversity as predictors for a word's ease of acquisition in different word classes

Infants' vocabulary development is inevitably dependent on the speech they hear in their environment. This paper reports an investigation of the vocabulary statistics that predict a word's age of acquisition, focusing on frequency and contextual diversity as derived from child-directed speech data along with associative norms generated by adults. Age of acquisition is operationalised using parental report of infant word knowledge in a British English-speaking population. The work can be considered an extension of Hills, Maouene, Riordan, and Smith (2010) using a fully British English dataset. We found significant effects of both word frequency and word associations on age of acquisition. Interestingly, the strength of these predictors differed between word classes, with frequency being the strongest predictor for nouns and associations the strongest predictor for function words.

Jointly Perceiving Physics and Mind: Motion, force and intention

Physics and mind are two major causes of motion. In a leash-chasing display, a disc (“sheep”) is being chased by another disc (“wolf”), which is physically constrained by a leash attached to a third disc (“master”). A number of interesting motions can emerge from this simple system, such as a wolf being dragged away from its target. Therefore, it is important for vision to jointly infer a physics-mind combination that can best explain the motions. Here we report two discoveries from studying this display that support this theory. First, an intuitive physical system like a leash can greatly lessen the detrimental effects of spatial deviation and diminished objecthood on perceived chasing, strengthening its robustness. Second, a mutual dependency exists between physics and mind, where disrupting one will inevitably result in impaired perception of the other. These results collectively support a joint perception of physics and mind.

The effect of uncertainty and reward probability on information seeking behaviour

People’s desire to seek or avoid information is influenced not only by the possible outcomes of an event, but also by the probability of those particular outcomes occurring. There are, however, competing explanations as to how and why people’s desire for non-instrumental information is affected by factors including expected value, probability of outcome, and a unique formulation of outcome uncertainty. Over two experiments, we find that people’s preference for non-instrumental information is positively correlated with probability when the outcome is positive (i.e., winning money) and negatively correlated when the outcome is negative (i.e., losing money). Furthermore, at the aggregate level, we find the probability of an outcome to be a better predictor of information preference than the expected value of the event or its outcome uncertainty.

Child-directed Listening: How Caregiver Inference Enables Children's Early Verbal Communication

How do adults understand children's speech? Children's productions over the course of language development often bear little resemblance to typical adult pronunciations, yet caregivers nonetheless reliably recover meaning from them. Here, we employ a suite of Bayesian models of spoken word recognition to understand how adults overcome the noisiness of child language, showing that communicative success between children and adults relies heavily on adult inferential processes. By evaluating competing models on phonetically-annotated child language from the Providence corpus, we show that adults' recovered meanings are best predicted by prior expectations fitted specifically to the child language environment, rather than to typical adult-adult language. After quantifying the contribution of this "child-directed listening" over developmental time, we discuss the consequences for theories of language acquisition, as well as the implications for commonly-used methods for assessing children's linguistic proficiency.

Modeling Communication to Coordinate Perspectives in Cooperation

Communication is highly overloaded. Despite this, even young children are good at leveraging context to understand ambiguous signals. We propose a computational account of overloaded signaling from a shared agency perspective, which we call the Imagined We for Communication. Under this framework, communication is a way for cooperators to coordinate their perspectives, allowing them to act together to achieve shared goals. We assume agents are rational, utility-maximizing cooperators, which puts constraints on how signals can be sent and interpreted. We implement this model in a set of simulations that demonstrate its success under increasing ambiguity as well as increasing layers of reasoning. Our model is capable of improving performance with deeper recursive reasoning; however, it outperforms comparison baselines at even the shallowest level of reasoning, highlighting how shared knowledge and cooperative logic can do much of the heavy lifting in language.

Can losses help attenuate learning traps?

Recent work has demonstrated robust learning traps during learning from experience – decision-making biases that persist due to the choice-contingent nature of outcome feedback. In two experiments, we investigate the effect of outcome valence on learning trap development. Participants chose to approach or avoid category exemplars associated with rewards or losses and, to maximize reward, had to learn a categorization rule based on two stimulus dimensions. We replicate previous findings showing that when outcome feedback was contingent upon approaching exemplars, people frequently fell into the trap of using an incomplete categorization rule based on only a single dimension, which was suboptimal for long-term reward. Notably, learning trap development was attenuated in an environment with frequent loss outcomes, even when participants received explicit information about the base rates of gains and losses. The implications of these findings for theoretical models and future research are discussed.

The mental representation of integers: Further evidence for the negative number line as a reflection of the natural number line

Humans are able to make sense of extraordinarily abstract concepts in mathematics (e.g., negative numbers). What is the underlying representation of these concepts? Integers extend natural numbers by including zero and negative numbers. To study the mental representation of integers, we employed a number comparison task in an online context. We replicated the previously reported distance effect, in that far comparisons were faster than near comparisons. Specifically, we observed reliable distance effects for positive and negative comparisons, and critically, an inverse distance effect for mixed comparisons. We conclude that the mental representation of integers may align with a hypothesis proposing that the mental number line for negative numbers mirrors the natural number line. Moreover, we conclude that web-based data collection is a promising tool for future numerical cognition research.

Categorical Belief Updating Under Uncertainty

The need to update our estimates of probabilities (e.g., the accuracy of a test) given new information is commonplace. Ideally, a new instance (e.g., a correct report) would simply be added to the tally, but we are often uncertain whether a new instance has occurred. We present an experiment in which participants receive conflicting reports from two early-warning cancer tests, one of which has higher historical accuracy (HA). We present a model showing that, while uncertain which test is correct, estimates of the accuracy of both tests should be reduced. However, among our participants, we find two dominant approaches: (1) participants increase their estimate for the higher-HA test and reduce the other; (2) participants make no change to either. Based on mixed methods, we argue that the two approaches represent two sides of a ‘binary’ decision, i.e., (1) updating as if we have complete certainty about which test is correct, and (2) updating as if we have no information.
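
One way to see why both accuracy estimates should drop is a grid-based Bayesian calculation: place a Beta prior on each test's accuracy and condition on the event that the two reports conflict (so exactly one must be wrong, but we do not know which). The specific priors below are illustrative assumptions, not the parameters of the authors' model.

```python
# Grid-based sketch of normative updating after two tests give conflicting
# reports. The Beta priors (roughly 81% vs 67% historical accuracy) are
# illustrative assumptions; the key quantity is each test's posterior mean
# accuracy given only that exactly one of the two must be wrong.
import numpy as np

theta = np.linspace(0.001, 0.999, 400)            # grid over possible accuracies
prior1 = theta**16 * (1 - theta)**3               # ~Beta(17, 4): mean ~0.81
prior2 = theta**13 * (1 - theta)**6               # ~Beta(14, 7): mean ~0.67
prior1 /= prior1.sum()
prior2 /= prior2.sum()

# Joint posterior over (theta1, theta2), conditioned on a conflicting pair:
# P(conflict | theta1, theta2) = theta1*(1 - theta2) + (1 - theta1)*theta2.
t1, t2 = np.meshgrid(theta, theta, indexing="ij")
joint = np.outer(prior1, prior2) * (t1 * (1 - t2) + (1 - t1) * t2)
joint /= joint.sum()

mean1 = (joint.sum(axis=1) * theta).sum()
mean2 = (joint.sum(axis=0) * theta).sum()
print(f"posterior mean accuracy: test 1 = {mean1:.3f}, test 2 = {mean2:.3f}")
# Both posterior means fall below the prior means, illustrating the model's claim.
```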

Modeling Object Recognition in Newborn Chicks using Deep Neural Networks

In recent years, the brain and cognitive sciences have made great strides developing a mechanistic understanding of object recognition in mature brains. Despite this progress, fundamental questions remain about the origins and computational foundations of object recognition. What learning algorithms underlie object recognition in newborn brains? Since newborn animals learn largely through unsupervised learning, we explored whether unsupervised learning algorithms can be used to predict the view-invariant object recognition behavior of newborn chicks. Specifically, we used feature representations derived from unsupervised deep neural networks (DNNs) as inputs to cognitive models of categorization. We show that features derived from unsupervised DNNs make competitive predictions about chick behavior compared to supervised features. More generally, we argue that linking controlled-rearing studies to image-computable DNN models opens new experimental avenues for studying the origins and computational basis of object recognition in newborn animals.

Social structure and lexical uniformity: a case study of gender differences in the Kata Kolok community

Language emergence is characterized by a high degree of lexical variation. It has been suggested that the speed at which lexical conventionalization occurs depends partially on social structure. In large communities, individuals receive input from many sources, creating a pressure for lexical convergence. In small, insular communities, individuals can remember idiolects and share common ground with interlocutors, allowing these communities to retain a high degree of lexical variation. We look at lexical variation in Kata Kolok, a sign language which emerged six generations ago in a Balinese village, where women tend to have more tightly-knit social networks than men. We test whether there are differing degrees of lexical uniformity between women and men by reanalyzing a picture description task in Kata Kolok. We find that women’s productions exhibit less lexical uniformity than men’s. One possible explanation of this finding is that women's more tightly-knit social networks allow for remembering idiolects, alleviating the pressure for lexical convergence, but social network data from the Kata Kolok community is needed to support this explanation.

Arguing with experts: Subjective disagreements on matters of taste

When two people disagree about matters of taste, neither is in the wrong: There is nothing contradictory in a dialog where one interlocutor says 'The rollercoaster was scary!' and the other responds 'No, it was not scary.' This contrasts with disagreements about objective facts. This phenomenon is known as faultless disagreement, and is central for theorizing about subjective expressions. Faultless disagreement is typically assumed to stem from subjective expressions having a special semantics. We present evidence that people’s judgments of faultless disagreement are sensitive not only to the lexical content of a sentence, but also to the broader discourse context (properties of the interlocutors in the dialog) and to extra-contextual factors (participants’ own attitudes about that particular domain). These results problematize arguments that faultless disagreement stems directly from the semantics of subjective lexical items.

Learning New Categories for Natural Objects

People learn new categories on a daily basis, and the study of category learning is a major topic of research in cognitive science. However, most prior work has focused on how people learn categories over abstracted, artificial (and usually perceptual) representations. Little is known about how new categories are learnt for natural objects, for which people have extensive prior knowledge. We examine this question in three pre-registered studies involving the learning of new categories for everyday foods. Our models use word vectors derived from large-scale natural language data to proxy mental representations for foods, and apply classical models of categorization over these vectorized representations to predict participant categorization judgments. This approach achieves high predictive accuracy rates, and can be used to identify the real-world settings in which category learning is impaired. In doing so, it shows how existing theories of categorization can be used to predict and improve everyday cognition and behavior.
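
As an illustration of the general approach (applying a classical categorization model over vectorized representations), the sketch below uses a simple prototype rule over placeholder word vectors; in practice the vectors would come from a pretrained word-embedding model, and the authors' specific models and stimuli may differ.

```python
# Sketch of a classical prototype model of categorization applied over word
# vectors. The 50-dimensional vectors below are random placeholders standing in
# for embeddings of food words derived from large-scale language data.
import numpy as np

rng = np.random.default_rng(3)
vocab = ["apple", "banana", "pear", "steak", "bacon", "salmon", "tofu"]
vectors = {w: rng.normal(size=50) for w in vocab}          # placeholder vectors

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def prototype_classify(item, categories):
    """Assign `item` to the category whose prototype (the mean vector of its
    training examples) it is most similar to."""
    protos = {c: np.mean([vectors[w] for w in ws], axis=0)
              for c, ws in categories.items()}
    return max(protos, key=lambda c: cosine(vectors[item], protos[c]))

training = {"fruit": ["apple", "banana"], "meat": ["steak", "bacon"]}
for novel in ["pear", "salmon", "tofu"]:
    print(novel, "->", prototype_classify(novel, training))
```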

East-West Revisited: Is Holistic Thinking Relational Thinking?

Analogical reasoning is at the core of human cognition, but is it universal? Do people from different cultures reason analogically in the same way? Despite the prevalence of analogical research, to date, there is almost no cross-cultural work investigating analogical reasoning in adults from non-WEIRD cultures. Here we fill this important gap by revisiting a long-standing cross-cultural difference—the holistic-analytic difference between Easterners and Westerners (Nisbett, 2001)—to ask whether this difference is also evident in analogical reasoning. Decades of cross-cultural research have shown that Easterners are more attentive to contextual relations than Westerners, giving rise to an untested presumption that Easterners are more relational, and more analogical, than Westerners. We tested this assumption using the classic holistic-analytic task and a scene analogy mapping task with US and Chinese participants. While we replicated the holistic-analytic (East-West) difference, US and Chinese participants did not differ in the analogy task.

On Factors Influencing Typing Time: Insights from a Viral Online Typing Game

Context effects in human spoken language are well-documented and play a central role in the theory of language production. However, the role of context in written language production is far less well understood, even though a considerable proportion of the language produced by many people today is written. Here we analyze the factors predictive of English language typing times in a large, naturalistic corpus from the popular TypeRacer.com website. We find broad consistency with the major documented effects of linguistic context on spoken language production, suggesting potential modality-independence in the cognitive mechanisms underlying language production and/or similar optimization pressures on the production systems in both modalities.

Dynamic Action Facilitates Learning of Non-Adjacent Dependencies in Visual Sequences

Many events that humans and other organisms experience contain regularities in which certain elements within an event predict certain others. While some of these regularities involve tracking the co-occurrences between temporally adjacent stimuli, others involve tracking the co-occurrences between temporally distant stimuli (i.e., non-adjacent dependencies, NADs). Prior research shows robust learning of adjacent dependencies in humans and other species, whereas learning NADs is more difficult, and often requires support from properties of the stimulus to help learners notice the NADs. Here we report on four experiments that examined NAD learning from various types of visual stimuli. The results suggest that continuous movements aid the acquisition of NADs. We also found that human motion leads to more robust NAD learning compared to object motion, perhaps because of a richer representation. This richer representation could result in better memory and recall, and provide a stronger signal for NAD learning.

Does anything predict anchoring bias?

Anchoring – the tendency for recently seen numbers to affect estimates – is a robust bias affecting expert and novice judgements across many fields. An anchoring task, in which people (N=301) estimated the number of circles in 10 stimulus figures after comparison to an anchoring value, was conducted within a larger study including numerous intelligence, personality, decision style and attention measures. Individual anchoring susceptibility was calculated and compared to potential predictor variables. Two of eight broad ability measures (from Cattell-Horn-Carroll intelligence theory) correlated weakly but significantly with anchoring (Gq = 0.16, Gf = 0.12). No decision style or attention measures correlated significantly with anchoring, nor did the Big 5 personality traits, directly. Indirectly, however, as the anchoring task continued and fatigue increased, people relied more on anchors, and higher neuroticism may have increased this tendency. Overall, the results suggest that our ability to predict anchoring is poor, and implications of this are discussed.

How hard is cognitive science?

Cognitive science is itself a cognitive activity. Yet, computational cognitive science tools are seldom used to study (limits of) cognitive scientists' thinking. Here, we do so using computational-level modeling and complexity analysis. We present an idealized formal model of a core inference problem faced by cognitive scientists: Given observations of a system's behaviors, infer cognitive processes that could plausibly produce the behavior. We consider variants of this problem at different levels of explanation and prove that at each level, the inference problem is intractable, or even uncomputable. We discuss the implications for cognitive science.

A Task-Optimized Neural Network Model of Decision Confidence

Our decisions are accompanied by a sense of confidence, a metacognitive assessment of how likely those decisions are to be correct, but the mechanisms that underlie this capacity remain poorly understood. Recent behavioral and neural findings have suggested that decisions are made in accord with an optimal ‘balance-of-evidence’ rule, whereas confidence is estimated using a heuristic ‘response-congruent-evidence’ rule. We developed a deep neural network model optimized to classify images and predict its own likelihood of being correct, and found that this model naturally accounts for some of the key behavioral dissociations between decisions and confidence ratings. Further investigation revealed that neither the ‘balance-of-evidence’ rule nor the ‘response-congruent-evidence’ rule fully characterized the strategy that the model learned. We argue instead that the model learns to flexibly approximate the distribution of its training data, and, analogously, that apparently suboptimal features of human confidence ratings may arise from optimization for the statistics of naturalistic settings.
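
A minimal PyTorch sketch of the general idea of a classifier with an auxiliary head trained to predict its own correctness is shown below; the architecture, loss weighting, and dummy data are illustrative assumptions and not the authors' network.

```python
# Minimal sketch of a classifier with an auxiliary confidence head trained to
# predict whether the classification decision is correct. Illustrative only.
import torch
import torch.nn as nn

class ConfidenceNet(nn.Module):
    def __init__(self, n_features=784, n_classes=10):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(n_features, 128), nn.ReLU())
        self.class_head = nn.Linear(128, n_classes)   # the decision
        self.conf_head = nn.Linear(128, 1)            # P(decision is correct)

    def forward(self, x):
        h = self.backbone(x)
        return self.class_head(h), torch.sigmoid(self.conf_head(h)).squeeze(-1)

model = ConfidenceNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))   # dummy batch

opt.zero_grad()
logits, conf = model(x)
correct = (logits.argmax(dim=1) == y).float()               # target for confidence
loss = nn.functional.cross_entropy(logits, y) \
     + nn.functional.binary_cross_entropy(conf, correct)
loss.backward()
opt.step()
```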

A common framework for quantifying the learnability of nouns and verbs

Across the world's languages, children reliably learn nouns more easily than verbs. Attempts to understand the difficulty of verb learning have focused on determining whether the challenge stems from differences in the linguistic usage of nouns and verbs, or instead conceptual differences in the categories that they label. We introduce a novel metric to quantify the contributions of both sources of difficulty using unsupervised learning models trained on corpora of language and images. We find that there is less alignment between the linguistic usage of verbs and their categories than for nouns and their categories. However, this difference is driven almost entirely by differences in the structure of their visual categories: Relative to nouns, events described by the same verb are more variable and events described by two different verbs are more similar. We conclude that differences between noun and verb learning need not be due to fundamental differences in learning processes, but may instead be driven by the difficulty of one-shot generalization from verbs' visual categories.

Identifiability and Specificity of the Two-Point Visual Control Model of Steering

Estimating the parameters of cognitive models is crucial for accurately describing the cognitive processing of individuals under varying circumstances. To ensure that individual parameter estimates represent individual cognitive processes, it is important to consider model identification and specific influence. Model identification refers to whether a unique set of parameter estimates is associated with a particular pattern in the data; specific influence means that certain experimental manipulations affect only specific cognitive processes, reflected in changes in only those parameters that represent those processes, and not others. These two general concepts also apply to cognitive models of more applied tasks and settings, such as driving. In the current work, we test whether these two requirements hold in a commonly used cognitive model of the visual control of steering behavior. For this model, we test identifiability and then estimate parameters from two experiments to understand how cognitive load and driving speed specifically influence the model's parameter estimates. The results indicate that the two-point visual control of steering model is identified, and that cognitive load and driving speed are related to different parameters.

Developmental changes in perceived moral standing of robots

We live in an age where robots are increasingly present in the social and moral world. Here, we explore how children and adults think about the mental lives and moral standing of robots. In Experiment 1 (N = 116), we found that children granted humans and robots more mental life and vulnerability to harm than an anthropomorphized control (i.e., a toy bear). In Experiment 2 (N = 157), we found that, relative to children, adults ascribed less mental life and vulnerability to harm to robots. In Experiment 3 (N = 152), we modified our experiment to be within-subjects and measured beliefs concerning moral standing. Though younger children again appeared willing to assign mental capacities — particularly those related to experience (e.g., being capable of experiencing hunger) — to robots, older children and adults did so to a lesser degree. This diminished attribution of mental life tracked with diminished ratings of robot moral standing. This informs ongoing debates concerning emerging attitudes about artificial life.

Listeners can use coarticulation cues to predict an upcoming novel word

During lexical access, listeners turn unfolding phonetic input into words. We tested how participants interpret words that aren't in their lexicon, either due to their coarticulation cues or because they label a novel object. In a 2-picture Visual World study, 57 adults saw a familiar object and an unfamiliar object, while hearing sentences directing their gaze to the target in 3 conditions: with a familiar word (“crib”), a novel word (“crig”), or a familiar word with coarticulation cueing a novel word (“cri(g)b”). When coarticulation cues matched the novel word (“cri(g)b”), participants looked more at the unfamiliar object than when the cues matched the familiar word, suggesting lexical competition can include a novel word under appropriate circumstances. When hearing a novel word (e.g. “crig”), participants showed two patterns: Roughly half looked more at the unfamiliar object, as expected, while the rest surprisingly looked more at the familiar object. We discuss the interaction of mutual exclusivity, phonetic similarity, and coarticulation cues in driving lexical access.

Fifty Shades of Social Cognition. How to Capture the Varieties of Socio-cognitive Abilities?

Numerous disciplines study the nature of social cognition. In philosophy of mind, too, there are debates about socio-cognitive abilities such as joint action, mindreading, and commitment. However, the so-called standard notions require demanding conditions, with the result that the abilities of young children and non-human animals, for example, cannot be captured by this terminology. By introducing minimal notions, a step has been taken towards capturing a greater variety of phenomena in the field of social cognition. In this way, current empirical findings can be connected to theoretical work in philosophy. However, when minimal and standard notions are characterized by a dichotomous interpretation of a two-system approach, quite a few instances still fall through the conceptual net. This paper demonstrates how many instances remain neglected and explores the challenges of developing a disjunctive conceptual schema that can capture the varieties of socio-cognitive abilities.

How does the Chimpanzee Mind Represent its Cultures?

Tools are peculiar parts of our environment, and tool manufacture remains one of the most prodigious achievements of humankind over the last million years. Chimpanzees, along with some non-primate species, also use and sometimes manufacture tools. In my research, I have investigated the cognitive, ecological, social and emotional factors influencing tool use in wild and captive apes, with a focus on Ugandan chimpanzees. In parallel, I have researched cognitive aspects of the evolution of emotional and intentional communication by studying primate, particularly great ape, vocalizations. Finally, in more recent years, I have investigated the same topics in children, to examine possible homologies with humans given our shared ancestry, as well as human specificities. My goal is to understand the evolutionary pressures that launched humans on the particular evolutionary pathway that has allowed them to become the ultimate culture-bearers. I am also interested in how other species, in turn, see the world. The research program I develop integrates these interests in a comparative, ecological, cognitive and socio-emotional approach to cultural knowledge in great apes and humans.

Competition from novel features drives scalar inferences in reference games

Scalar implicatures, one of the signatures of pragmatic reasoning, are believed to arise from competing alternative utterances, which the listener knows the speaker could have used to express a strengthened meaning. But do scalar implicatures also arise in the presence of nonce objects, for which no alternative name is known? We conduct a series of experiments assessing the degree of scalar strengthening driven by familiar and nonce objects. We find that nonce objects can drive scalar implicatures as strongly as familiar objects in simple reference games. Our experiments also reveal an asymmetry in the relative strengths of familiar- and nonce-driven inferences: relative to the prior, participants preferentially interpret the name of a shared feature as referring to an object with an additional nonce feature over an object with an additional familiar feature, suggesting that familiar alternatives exert greater scalar pressure than nonce alternatives. We also present exploratory model simulations suggesting that our results may be explained by rationally reasoning about a high-cost description of the novel object. Our findings support the idea that novel lexical entries may be generated from one-shot encounters and spontaneously used in pragmatic inference.

Individual Differences in Causal Learning

Causal inference from observed cases is a central cognitive challenge. There has been some evidence for individual differences in causal learning strategies, but prior work has not examined fine-grained sequences of judgments. In this paper, we report a large-scale model-fitting effort to determine the best-fitting causal inference models for individual participants. We fit a range of different model types against multiple judgment sequences from each participant, thereby enabling comparisons of learning strategy both between and within participants. The model-fitting effort revealed some diversity in learning strategy along both dimensions, though individuals did exhibit some stability. Overall, however, the model fits were worse than expected, particularly when compared to the high accuracy reported for many of the models when used to predict group-level causal judgments. These results thus raise the question of whether these models accurately describe average behavior without accurately describing many (or any) individuals' behavior.

Reasoning about social attitudes with uncertain beliefs

We propose a computational model of social preference judgments that accounts for the degree of an agent’s uncertainty about the preferences of others. Underlying this model is the principle that, in the face of social uncertainty, people interpret social agents’ behavior under an assumption of expected utility maximization. We evaluate our model in two experiments, each testing a different kind of social preference reasoning: predicting social choices given information about social preferences, and inferring social preferences after observing social choices. The results support our model and highlight how uncertainty influences our social judgments.

Predicting Learning and Retention of a Complex Task Using a Cognitive Architecture

We use a model to explore the implications of ACT-R's learning and forgetting mechanisms for understanding learning and retention on a complex task. The model performs a spreadsheet task that has 14 non-iterated subtasks. The model predicts a learning curve and knowledge decay for different learning stages. The model's learning curve fits the human data well for the first four trials without decay. When decay is examined, however, we have to modify the retention equation for the model's predictions to match the data and the shapes predicted by other learning theories. To fix this anomaly, we modified the effect of time on decay (weighting time outside the experiment less heavily than time within the experiment) and the strength of newly learned memories (making them less well known than the previous default value). From these results, we learn that training and testing have been confounded in many studies.
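
For readers unfamiliar with the mechanism involved, ACT-R's base-level learning equation sets activation to B = ln(sum_j t_j^(-d)), where the t_j are the ages of past presentations and d is the decay rate (typically 0.5). The sketch below illustrates this equation and one hypothetical way of down-weighting time that elapses outside the experiment; the 0.2 weighting and the practice schedule are assumptions for illustration, not the authors' exact modification.

```python
# Sketch of ACT-R base-level activation, B = ln(sum_j t_j^(-d)), and of one way
# to down-weight time that elapses between sessions. The 0.2 weighting and the
# practice schedule are illustrative assumptions.
import numpy as np

def base_level(ages_in_seconds, d=0.5):
    """Standard ACT-R base-level learning: activation from past presentations."""
    ages = np.asarray(ages_in_seconds, dtype=float)
    return np.log(np.sum(ages ** -d))

# Four practice trials spaced 60 s apart, then a retention test one week later.
practice_times = np.array([0, 60, 120, 180])           # seconds from first trial
delay = 7 * 24 * 3600                                   # one week, in seconds

test_time = practice_times[-1] + delay
ages_real = test_time - practice_times                  # full elapsed time
print("B with full decay of the delay:   ", round(base_level(ages_real), 3))

# Scale only the between-session delay (not within-session time) by 0.2.
ages_scaled = (practice_times[-1] - practice_times) + 0.2 * delay
print("B with down-weighted delay (x0.2):", round(base_level(ages_scaled), 3))
```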

EEG Reveals Familiarity by Controlling Confidence in Memory Retrieval

We explore the separation of decision confidence and familiarity components in EEG data from recognition memory experiments. We first develop and test a classifier designed to classify decision confidence on new trials. We then use this classifier to control for confidence in the selection of familiarity and correct-rejection trials. This allows us to reveal a familiarity component that is of similar magnitude for recollection and familiarity judgements. This familiarity component shows a more frontal extent than is obtained without confidence matching. We believe that this preliminary result can serve as a guide for designing future electrophysiological experiments to better separate the different components of recognition memory, and that the technique of using classifiers to control for response-related covariates can be used for early exploration of these components in existing data.

Variation in Linguistic Complexity and its Cognitive Underpinning

Linguistic complexity – manifested in terms of hierarchical recursive structures generated by grammar – is often discussed from the perspective of cross-linguistic comparison (cf. Everett, 2005; Nevins, Pesetsky, & Rodrigues, 2009 on Pirahã). In this paper, we focus instead on the variation in complexity within a single language, English, and specifically on the lower bound of complexity (cf. Futrell et al., 2016). We report the results of two studies, a corpus study (Study 1) and a production experiment (Study 2), that investigate the syntactic complexity of expressions that arise in the context of human-computer interaction and compare them to the standard language. The results of both studies show that the expressions generated in the context of human-computer interaction exhibit less structural complexity and often violate the norms of the language (e.g., “margaret mead culture famous research”). Our results suggest that such expressions are generated by a qualitatively different type of formal grammar, Linear Grammar (Jackendoff & Wittenberg, 2017), rather than by recursive grammar (Roeper, 1999).

Examining Infant Relation Categorization Through Deep Neural Networks

Categorizing spatial relations is central to the development of visual understanding and spatial cognition, with roots in the first few months of life. Quinn (2003) reviews two findings in infant relation categorization: categorizing one object as above/below another precedes categorizing an object as between other objects, and categorizing relations over specific objects predates abstract relations over varying objects. We model these phenomena with deep neural networks, including contemporary architectures specialized for relational learning and vision models pretrained on baby headcam footage (Sullivan et al., 2020). Across two computational experiments, we can account for most of the developmental findings, suggesting these neural network models are useful for studying the computational mechanisms of infant categorization.

Studying the Evolution of Cooperation and Prosociality in Birds

The social brain hypothesis (Humphrey, 1976) posits that the intricacies of social life may have been a significant selection pressure for the evolution of mind. This evolution may act on competition between group members and strategies to outcompete others (Machiavellian intelligence hypothesis: Byrne & Whiten, 1988), or on cooperative tendencies between group members that provide benefits that cannot be reached by a single individual (Vygotskian intelligence hypothesis: Moll & Tomasello, 2007). The latter hypothesis, however, creates an evolutionary conundrum: cooperation is prone to free-riders, and with defection being an evolutionarily stable strategy, the occurrence and complexity of cooperation in humans and other animals remain a puzzle. Several theoretical advances have nonetheless been made to explain the evolution of cooperation, with kin selection (Hamilton, 1964) and reciprocal altruism (Trivers, 1971) being the most prominent. However, the proximate mechanisms that foster the strategies proposed in the Vygotskian intelligence hypothesis, and the required cognition in nonhuman animals, remain a hotly debated topic (see Massen et al., 2019).

Broken Telephone: Children's Judgments of Messages Delivered by Non-Native Speakers are Influenced by Processing Fluency

Children and adults show preferences for native speakers and judge them to be more credible sources of information than non-native speakers. Previous research with children has attributed this bias to a preference for in-group members. The present study investigated the role of processing fluency in children’s social judgments. Children were shown two speakers (one with a native accent and the other with a non-native accent) relaying a message from another individual. They were then asked to make credibility and social judgments about the speakers and their messages. Children were also asked a processing fluency question and a question about the speakers’ comprehension of the original message. Responses to the processing fluency question and the question about the speakers’ comprehension predicted credibility judgments, but did not predict preference. These findings suggest that processing fluency may play a role in developing biases towards non-native accented speakers. Implications are discussed.

Visual Statistical Learning in the Reading of Unspaced Chinese Sentences

Chinese texts are renowned for the lack of physical spaces between words in a sentence. Reading these sentences requires a stage of word segmentation, the mechanism of which may involve visual statistical learning. In three experiments employing the RSVP task along with the Saffran et al. (1997) paradigm, we provide evidence that foreign learners of Chinese can capture the statistical information embedded in a string of characters and use that information to distinguish a “word” from a “nonword”. The statistical learning effect (.57) was comparable to that observed previously in an auditory task using the same stimuli. The results of the experiments also suggest that significant visual statistical learning requires a conscious level of processing that directs the participants’ attention to the characters, as well as an unconscious level at which the distributional information across the characters can be continuously computed and accumulated.
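
The statistical information at issue is typically captured by transitional probabilities, TP(B | A) = frequency(AB) / frequency(A), which are high within words and low across word boundaries. A small sketch of that computation over a placeholder character stream (not the actual Chinese stimuli):

```python
# Sketch of the transitional-probability computation assumed to underlie this
# kind of statistical learning. The character labels and "words" below are
# hypothetical placeholders.
from collections import Counter
import random

words = [("A", "B", "C"), ("D", "E", "F"), ("G", "H", "I")]  # placeholder "words"
random.seed(0)
stream = [ch for _ in range(100) for ch in random.choice(words)]

unigrams = Counter(stream)
bigrams = Counter(zip(stream, stream[1:]))

def tp(a, b):
    """Transitional probability of seeing b immediately after a."""
    return bigrams[(a, b)] / unigrams[a]

print("within-word TP  A->B:", round(tp("A", "B"), 2))   # high (close to 1.0)
print("across-word TP  C->D:", round(tp("C", "D"), 2))   # low  (roughly 1/3)
```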

Making Heads or Tails of it: A Competition–Compensation Account of Morphological Deficits in Language Impairment

Children with developmental language disorder (DLD) regularly use the base form of verbs (e.g., dance) instead of inflected forms (e.g., danced). We propose an account of this behavior in which children with DLD have difficulty processing novel inflected verbs in their input. This leads the inflected form to face stronger competition from alternatives. Competition is resolved by the production of a more accessible alternative with high semantic overlap with the inflected form: in English, the bare form. We test our account computationally by training a nonparametric Bayesian model that infers the productivity of the inflectional suffix (-ed). We systematically vary the number of novel types of inflected verbs in the input to simulate the input as processed by children with and without DLD. Modeling results are consistent with our hypothesis, suggesting that children’s inconsistent use of inflectional morphemes could stem from inferences they make on the basis of impoverished data.

Rise of QAnon: A Mental Model of Good and Evil Stews in an Echochamber

The QAnon conspiracy posits that Satan-worshiping Democrats operate a covert child sex-trafficking operation, which Donald Trump is destined to expose and annihilate. Emblematic of the ease with which political misconceptions can spread through social media, QAnon originated in late 2017 and rapidly grew to shape the political beliefs of millions. To illuminate the process by which a conspiracy theory spreads, we report two computational studies examining the social network structure and semantic content of tweets produced by users central to the early QAnon network on Twitter. Using data mined in the summer of 2018, we examined over 800,000 tweets about QAnon made by about 100,000 users. The majority of users disseminated rather than produced information, serving to create an online echo chamber. Users appeared to hold a simplistic mental model in which political events are viewed as a struggle between antithetical forces—both observed and unobserved—of Good and Evil.

Sharing is not Needed: Modeling Animal Coordinated Hunting with Reinforcement Learning

Coordinated hunting is widely observed in animals, and sharing rewards is often considered a major incentive for this success. However, it is unclear what causal roles are played by this reward-sharing mechanism. In order to systematically examine the effects of sharing rewards in animal coordinated hunting, we conduct a suite of modeling experiments using a state-of-the-art multi-agent reinforcement learning algorithm. The models are trained and evaluated with a task that simulates real-world collective hunting. We manipulate four evolutionarily important variables: reward distribution, hunting party size, free-rider problems, and hunting difficulty. Our results indicate that individually rewarded predators outperform predators that share rewards, especially when the hunting is difficult, the group size is large, and the action cost is high. Moreover, predators with shared rewards suffer from the free-rider problem. We conclude that sharing reward is neither necessary nor sufficient for modeling animal coordinated hunting through reinforcement learning.
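
The manipulation of reward distribution can be pictured as a small change in the per-capture reward assignment; the sketch below contrasts an equal split with individually earned rewards. The function names, prey value, and action cost are hypothetical illustrations rather than details of the authors' training setup.

```python
# Sketch of the reward-assignment step that distinguishes shared from individual
# reward conditions in a coordinated-hunting simulation. Illustrative only.
def assign_rewards(predators, capturers, prey_value=10.0, action_cost=0.5,
                   share=True):
    """Return per-predator reward for one capture event.

    predators: list of agent ids in the hunting party
    capturers: subset of ids that actually made contact with the prey
    """
    rewards = {}
    for agent in predators:
        if share:
            reward = prey_value / len(predators)        # everyone splits the prey
        else:
            reward = prey_value if agent in capturers else 0.0
        rewards[agent] = reward - action_cost            # all agents pay to act
    return rewards

party = ["p1", "p2", "p3", "p4"]
print("shared:    ", assign_rewards(party, capturers={"p1"}))
print("individual:", assign_rewards(party, capturers={"p1"}, share=False))
```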

Workshops

Combating the climate crisis with cognitive science

The climate crisis is one of the most alarming issues of our time. Our planet is deteriorating at an unprecedented scale and accelerating rate, putting human societies and countless biological species in grave danger. The root cause of this problem is human behavior, and thus it could prove crucial to examine the psychology behind the human behaviors that drive unsustainable living and impede the enactment of climate policy. Unfortunately, despite the importance of psychological research in responding to the climate crisis, the field has had very little influence on the climate policy process or on mobilizing action on climate change, handicapping progress towards a sustainable future. This workshop aims to bring together scientists working in the broad area of climate change and sustainability, along with cognitive scientists, to engage in the development of ideas related to using cognitive science research in understanding, reducing, and responding to the climate crisis.

Using Games to Understand Intelligence

Over the last few decades, games have become one of the most popular recreational activities, not only among children but also among adults. Consequently, they have also gained popularity as an avenue for studying cognition. Games offer several advantages, such as the possibility of gathering large data sets, engaging participants to play for long periods, and more closely resembling real-world complexities. In this workshop, we will bring together leading researchers from across the cognitive sciences to explore how games can be used to study diverse aspects of intelligent behavior, explore their differences compared to classical lab experiments, and discuss the future of game-based cognitive science research.

Career Paths beyond the Tenure Track for Cognitive Scientists

Cognitive science research has far-reaching implications, but many graduate students are trained solely for tenure-track faculty positions. Academic training develops a wide range of skills in behavioral research, literature reviewing, data analysis, scientific publishing, grant writing, teaching, and student mentorship. These skills have direct application in many other careers, but training within academia typically neglects to address how these skills translate to other work environments and career paths. As growth in the number of doctoral trainees continues to outpace permanent academic positions, more doctoral recipients have been seeking employment beyond faculty positions and academia. Those who are interested in exploring alternative career paths may not know where to turn for guidance. Our goal in this professional development workshop is to offer such guidance and an opportunity to network with scholars in similar situations.

Interdisciplinary Advances in Affective Cognition

In Ancient Greek philosophy, emotion is considered the opposite of cognition: cognition is rational while emotion is irrational; cognition is cold while emotion is hot. Such thinking has influenced the tradition in the field of cognitive science, where emotion is often described as what contaminates or irrationalizes our judgments and decision-making. Thus, while the ways in which humans express, recognize, and experience emotion have been studied extensively in other subfields of psychology (e.g., affective science), rigorous attempts to understand the role of emotion in thinking, learning, and reasoning have been surprisingly sparse in contemporary cognitive science. As a quick example, a keyword search in the Cognitive Science Proceedings for "Affect" or "Emotion" shows that only 2% of published proceedings over the past three years (55 out of 2454 between 2018 and 2020) have titles including one of these key words.

Symbolic and Sub-Symbolic Systems in People and Machines

To what extent is symbolic processing required for intelligent behaviour? Advances in both sub-symbolic deep learning systems and explicitly symbolic probabilistic program induction approaches have recently reinvigorated this long-standing question about cognition. While sub-symbolic approaches have shown impressive results, they still lag far behind human cognition, e.g., in the compositional re-use of learned concepts or in generalizing to new contexts. Symbolic systems have successfully addressed some of these shortcomings, but face other unsolved issues relating to feature selection, thorny search spaces, and scalability. This workshop intends to bring together established and newly emerging perspectives on the debate and explore the recently rekindled interest in hybrid architectures.

Engineering and reverse-engineering morality

Recent years have witnessed a burst of progress on building formal models of moral decision-making. In psychology, neuroscience and philosophy, the goal has been to “reverse-engineer” the principles of human morality. Meanwhile, in AI ethics, the goal has been to engineer systems that can make moral decisions, in some ways inspired by how humans do this. We aim to showcase the state of the art in both fields and to show how they can be hybridized into a computational cognitive science of morality.

Symposia

Conceptual Foundations of Sustainability

Threats to the health of our environment are numerous, ranging from air and water pollution to deforestation, overpopulation, and climate change. Much research in fields such as biology, earth science, and engineering is devoted to documenting, understanding, and attempting to mitigate the harm. The root cause of all such problems, however, is human behavior. As such, changes to human behavior—and the internal processes that drive them—are essential to solutions. Cognitive scientists therefore have a critical role to play in sustainability research and interventions (e.g., Jaipal, 2014; Weir, 2018, 2019).

Multimodal signalling of attractiveness

A large literature on human facial attractiveness has adopted an evolutionary approach (Little et al., 2011). Much less research has examined cues in other modalities, such as smell (Groyecka et al., 2017) and audition (Zäske et al., 2020). Although these different modalities may interact significantly in human mate choice (Feinberg, 2008), it is not yet understood how humans integrate cues from different sensory modalities. In the literature on animal communication, the most prominent theories suggest that different modalities either signal different qualities of an individual (multiple messages hypothesis) or communicate the same information (back-up signal hypothesis; Moller & Pomiankowski, 1993). These theories tend to disregard the possible interaction of different sensory modalities, and the role of multisensory integration.

Animal Consciousness in Comparison to Human Consciousness

Do some species of nonhuman animals (hereafter “animals”) enjoy consciousness, and to what degree? This is a notoriously difficult question for at least two reasons: first, we need a sufficiently clear concept of consciousness, and second, it remains difficult to characterize convincing strategies for accessing conscious experiences in other species, since we have to rely on third-person access and mostly on behavioral data. Lacking communicative access to animal minds, it is difficult to justify an analogy argument. Central open questions guiding the symposium include: (1) Concerning scientific access: how can we develop a nonverbal access to conscious experiences in animals? (2) Are there behavioral markers of consciousness in animals? (3) What is the main functional role of consciousness from an evolutionary perspective? (4) Can we offer a conceptual framework which allows us to adequately characterize evolutionarily old basic forms of consciousness and their relation to standard conscious experiences in humans? The symposium comprises four talks which together aim to outline answers to these questions.

The Deep History of Information Technologies: a Cognitive Perspective

Cognition constrains and influences human cultural productions, among which are information technologies. Information technologies, because of and through their intensive use, can be expected to reflect human cognition particularly well. Cognitive approaches to information technologies thus have the potential to inform both cognitive science and historical disciplines. Beyond high ecological validity, we demonstrate the relevance of real-world data for testing and informing theories about how the mind works through four different case studies and contexts: how we represent the world and space around us (Riggsby), how we represent more abstract concepts such as number (Chrisomalis), how we optimize written characters for our visual system (Miton), and how coinage is designed to minimize possible errors (Morin). Discussion and moderation will be led by Valeria Giardino, a philosopher whose main research topics are reasoning with diagrams and the role of cognitive artifacts in improving thought.

Comparative approaches to memory development

Memories for events experienced during infancy and early childhood are rarely recollected later in life—a phenomenon termed infantile and childhood amnesia. The formation and retrieval of such episodic memories relies, in part, on the hippocampus. Characterizing the role of hippocampal development in offsetting infantile and childhood amnesia is key to understanding (i) why infantile and childhood amnesia occur and (ii) how episodic memory capacities develop in early ontogeny. Comparative research is necessary for this enterprise because many paradigms and techniques work better with either humans or non-human animals. The four papers in this symposium gather current work in developmental psychology, developmental cognitive neuroscience, and behavioral neuroscience that characterizes the complex and heterogeneous developmental profile of behavioral gains in the component processes underlying episodic memory capacity in humans, together with work on the mammalian hippocampus and how its development accompanies these behavioral changes. By leveraging and triangulating multiple levels of analysis, we can gain insights that are unavailable using a siloed approach. This collection of work helps delineate clear future directions for a comparative approach to memory development.

Conceptual Blending in Animal Cognition: A Comparative Approach

Are the differences between human and alloanimal cognition a matter of kind or of degree? This question continues to generate controversial arguments for the uniqueness of certain features of human cognition, with no clear consensus in sight (see, e.g., Hauser, Chomsky & Fitch, 2002; Suddendorf & Corballis, 2007). To move the debate into fresh territory, this symposium develops a proposal from conceptual blending theory (CBT: Fauconnier & Turner, 2002; Turner, 2014) to argue that the differences in question are both a matter of kind and of degree. The symposium also takes up a line of inquiry initiated by Pelkey, who has proposed synthesizing CBT with related insights from Charles S. Peirce, Jakob Johann von Uexküll, and biosemiotics to build a stronger case for alloanimal blending. We bring together a diverse group of researchers to discuss human-unique cognitive abilities through the lens of CBT. Turner introduces CBT and outlines the cross-species cline of conceptual blending. Pelkey provides evidence for various types of blends in bats and discusses the conclusions of these analyses. Leonardis, Semenuks, and Coulson emphasize the importance of taking non-human perspectives in analyzing behaviors with CBT. Adachi discusses work on metaphorical and cross-modal mapping in primates. Forster serves as the moderator.

Cognition in Context

Theories of the evolution of cognition hold that cognitive processes evolved in response to the complexity and challenges posed by the physical and social environment. To date, cognitive abilities have mostly been studied under controlled laboratory conditions that facilitate replicability and high-resolution measurements (Cauchoix, Hermer, Chaine, & Morand-Ferron, 2017). Yet, under these circumstances, cognitive abilities are evaluated in relatively stable and homogeneous situations that hardly match the species’ natural environments (Niemelä & Dingemanse, 2014). Thus, results drawn from these controlled studies do not necessarily generalize to the range of cognitive processes displayed by individuals in naturalistic settings (Cauchoix, Chaine, & Barragan-Jason, 2020).

Minds at play

Play and curiosity are unmistakable signatures of an active mind. It is perhaps unsurprising, then, that recent advances in machine learning and robotics have used algorithmic approximations of curiosity to build artificial agents that can intelligently explore their environments, learn more efficiently, and acquire more generalizable skills (e.g., Chitnis et al., 2020; Lynch et al., 2020; Forestier et al., 2017).

The evolution of rhythm from neurons to ecology

Why are animal rhythms important? Cross-species work can help isolate what is unique in the human capacity for rhythm. In addition, cross-species work can support inferences about the origins and evolution of human rhythmic capacities. Neural tissue and cognitive capacities do not fossilize: by pinpointing shared biological mechanisms between humans and other animals, cross-species research can help reconstruct the evolution of rhythmic capacities in humans. This symposium will unify multiple comparative approaches to the evolution of musical rhythm. Specifically, we aim to (1) provide a platform for multiple fields to compare theoretical frameworks and methodologies across species, (2) integrate findings from behaviour, neuroscience, modelling, and cognition, (3) actively spur cross-fertilization between musical rhythm and animal timing research, and (4) draw inferences about the evolution of human rhythm.

Sequential meaning-making in language and visual narratives

The last two decades have seen emerging research on the structure and cognition of visual narratives, like those found in comics and picture stories (Cohn & Magliano, 2020). A primary characteristic of this research has been the comparison between meaning-making in sequences of pictures and meaning-making in sequences of words or sentences in language. This comparison has extended across research methods and their findings... In this symposium, we further explore these comparisons between language and visual narratives in four presentations on visual narrative sequencing. These studies span methodologies from corpus research to behavioral and neurocognitive experimentation, and they probe several fundamental topics of meaning-making found in both the verbal and visual modalities: the expression of motion events, the generation of inferences, anaphoricity and co-reference, and information density.

Explanation in Human Thinking

Jörg Cassens, Rebekah Wegener, Lorenz Habenicht, and Julian Blohm discuss the dialogic form of explanations. Explanations are a long-established research topic in a wide variety of disciplines, ranging from philosophy (van Fraassen, 1980; Achinstein, 1983) through the cognitive sciences and psychology (Lalljee et al., 1983; Keil and Wilson, 2000; Lombrozo, 2006) to computer science in general and artificial intelligence in particular (Schank, 1986; Leake, 1992, 1995; Sørmo et al., 2005). However, while there is compelling research supporting the value, structure, and function of explanation, as Edwards et al. (2019) argue, “accounts of explanation typically define explanation (the product) rather than explaining (the process)”. By contrast, we aim at an understanding of explanation as a functional variety of language behaviour, one that treats explaining as a process rather than merely a product.

Music Cognition: The Complexity of Musical Structure

Music is highly complex and provides a rich variety of insights into the human mind, its mental structures, and processes. Experienced musicians are able to create complex structures in real time effortlessly, yet there is at present no successful model of full musical structure. The integration of different musical aspects such as melody, rhythm, voice leading, and form as well as the representation of long-term structure are particularly challenging. To open new possibilities for the study of higher-order structure in music and its perceptual correlates, cognitive music research would benefit from further mutual integration of theoretical, mathematical, computational, and psychological research, similar to advancements in linguistics. This symposium therefore focuses on the formal understanding and empirical investigation of music-theoretically motivated research questions in music cognition. It connects perspectives from music theory, behavioral research, corpus research, and computational modeling, and aims to initiate interdisciplinary discussions about the currently most challenging topics related to the cognition of higher-order structures in music.

Similarity-based Influences in Judgment and Decision Making

Psychological similarity—the subjective distance between objects in the world or in memory—is a highly influential concept in many areas of cognitive psychology, such as learning, memory, categorization, judgment, and preferential choice. The contributions within this symposium evaluate the fundamental role that similarity plays in human judgment and decision making. We bring together experts from distinct subdisciplines of psychology who examine the influence of similarity on categorization, consumer choice, risky choice, social norms, and memory-based choices. Specifically, the contributions elaborate on three key questions repeatedly pursued within cognitive psychology: 1) how does similarity activate previous experiences and render them available within a given choice context? 2) how does similarity interact with feature or knowledge abstraction processes? 3) how is similarity represented psychologically? To address these questions, the contributions within this symposium focus on reinstating similarity-based processes within formal cognitive models and testing their predictions experimentally.
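
As a concrete illustration of the first question, the sketch below is purely hypothetical (it is not a model from any of the contributions; the feature values and sensitivity parameter are made up). It implements one common exemplar-style formalization: the activation of each stored experience decays exponentially with its feature distance from the current choice context, and the normalized activations indicate how available each past experience is.

```python
import numpy as np

# Purely illustrative exemplar-similarity sketch (hypothetical values,
# not a model from the symposium): the activation of each stored
# experience decays exponentially with its distance to the current
# choice context.

def similarity(probe, exemplars, sensitivity=1.0):
    """Exponentially decaying similarity between a probe and stored exemplars."""
    distances = np.linalg.norm(exemplars - probe, axis=1)  # Euclidean distance in feature space
    return np.exp(-sensitivity * distances)

def retrieval_weights(probe, exemplars, sensitivity=1.0):
    """Normalized activations: how available each past experience is."""
    activations = similarity(probe, exemplars, sensitivity)
    return activations / activations.sum()

# Hypothetical example: three remembered options described by two
# features (e.g., price and quality), both scaled to [0, 1].
past_experiences = np.array([[0.2, 0.8],
                             [0.5, 0.5],
                             [0.9, 0.1]])
current_option = np.array([0.3, 0.7])

weights = retrieval_weights(current_option, past_experiences, sensitivity=3.0)
print(weights)  # the most similar past experience receives the largest weight
```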

Tutorials

Tutorial: Introduction to PCIbex – An Open-Science Platform for Online Experiments: Design, Data-Collection and Code-Sharing

PCIbex is a free web platform for designing and running a wide variety of behavioral experiments online. It provides a simple and accessible, yet versatile, code interface for designing experimental tasks, and it makes it possible to share experiment code and resources with a single click, allowing for easy replication. PCIbex comes with an easy-to-learn mini-language that builds on a standard JavaScript engine. Advanced features include audio- and video-recording as well as an integration of the webgazer eye-tracking API. The tutorial will provide a hands-on introduction to the basic functionalities, as well as illustrations of advanced features and time for Q&A. While PCIbex is already in fairly wide use in the linguistics community, the tutorial aims to make this resource more widely accessible within the cognitive science community at large.

Practical Interpretation and Insights with Recurrence Quantification Analysis for Decision Making Research

A cornerstone of behavioral modeling and decision-making research is characterizing the strategies people employ to make decisions. We often study strategies by collecting sequences of choices, which hold considerable information about human cognition and behavior. However, it can be difficult to identify patterns and extract strategies (e.g., “win-stay-lose-shift”) from noisy, raw sequence data. Consequently, many standard analytical approaches aggregate choice data (e.g., over time, across multiple participants, etc.). Unfortunately, this aggregation obscures temporal patterns that may provide insight into the strategies employed, strategy switches, or even the adaptation of strategies over time. As illustrated by McCormick, Blaha, and Gonzalez (2020b), we can generate novel insights about decision-making strategies by characterizing patterns over choice sequences with recurrence quantification analysis.
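
To make the general idea concrete, here is a minimal, purely illustrative sketch of recurrence quantification for a categorical choice sequence; it is not the analysis pipeline of McCormick, Blaha, and Gonzalez (2020b), and the example sequence is hypothetical. A point (i, j) counts as recurrent when the choices on trials i and j match; repeated sub-sequences (e.g., recurring win-stay runs) then appear as diagonal line segments, which the determinism measure summarizes.

```python
import numpy as np

# Illustrative recurrence quantification for a categorical choice
# sequence (hypothetical data): a point (i, j) is recurrent when the
# choice at trial i matches the choice at trial j.

def recurrence_matrix(choices):
    c = np.asarray(choices)
    return (c[:, None] == c[None, :]).astype(int)

def recurrence_rate(rm):
    """Proportion of recurrent points, excluding the trivial main diagonal."""
    n = rm.shape[0]
    off_diag = rm.sum() - n  # the main diagonal is always recurrent
    return off_diag / (n * n - n)

def determinism(rm, min_length=2):
    """Proportion of off-diagonal recurrent points that fall on diagonal
    line segments of at least `min_length` (i.e., repeated sub-sequences)."""
    n = rm.shape[0]
    on_lines, total = 0, 0
    for k in range(1, n):                 # each upper off-diagonal (matrix is symmetric)
        diag = np.diagonal(rm, offset=k)
        total += diag.sum()
        run = 0
        for v in list(diag) + [0]:        # sentinel flushes the final run
            if v == 1:
                run += 1
            else:
                if run >= min_length:
                    on_lines += run
                run = 0
    return on_lines / total if total else 0.0

# Hypothetical choice sequence from a repeated binary-choice task.
choices = ["A", "A", "B", "A", "A", "B", "B", "B", "A", "B"]
rm = recurrence_matrix(choices)
print("recurrence rate:", round(recurrence_rate(rm), 3))
print("determinism:", round(determinism(rm), 3))
```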