eScholarship
Open Access Publications from the University of California

This series is automatically populated with publications deposited by UC Merced Department of Cognitive Science researchers in accordance with the University of California’s open access policies. For more information see Open Access Policy Deposits and the UC Publication Management System.

Strategic identity signaling in heterogeneous networks.

(2022)

Significance: Much of online conversation today consists of signaling one's political identity. Although many signals are obvious to everyone, others are covert, recognizable to one's ingroup while obscured from the outgroup. This type of covert identity signaling is critical for collaborations in a diverse society, but measuring covert signals has been difficult, slowing down theoretical development. We develop a method to detect covert and overt signals in tweets posted before the 2020 US presidential election and use a behavioral experiment to test predictions of a mathematical theory of covert signaling. Our results show that covert political signaling is more common when the perceived audience is politically diverse and open the door to a better understanding of communication in politically polarized societies.

Impact of COVID-19 forecast visualizations on pandemic risk perceptions.

(2022)

People worldwide use SARS-CoV-2 (COVID-19) visualizations to make life-and-death decisions about pandemic risks. Understanding how these visualizations influence risk perceptions is crucial for improving pandemic communication. To examine this influence, we conducted two online experiments in October and December of 2020 (N = 2549) in which we presented participants with 34 visualization techniques (available at the time of publication on the CDC's website) of the same COVID-19 mortality data. We found that visualizing the data on a cumulative scale consistently led participants to believe that they and others were at greater risk than before viewing the visualizations. In contrast, visualizing the same data on a weekly incident scale led to variable changes in risk perceptions. Further, uncertainty forecast visualizations also affected risk perceptions, with visualizations showing six or more models increasing risk estimates more than the others tested. Differences between COVID-19 visualizations of the same data thus produce different risk perceptions, fundamentally changing viewers' interpretation of the information.
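As a purely hypothetical illustration of the two scales contrasted above (not the study's stimuli or data), the sketch below plots the same made-up weekly mortality counts on a cumulative scale and on a weekly incident scale; the counts, number of weeks, and plotting choices are all assumptions.

```python
# Hypothetical data and plotting choices; not materials from the study.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
weeks = np.arange(1, 35)                              # 34 made-up weeks
weekly_deaths = rng.poisson(5000 + 3000 * np.sin(weeks / 5))
cumulative_deaths = np.cumsum(weekly_deaths)

fig, (ax_cum, ax_inc) = plt.subplots(1, 2, figsize=(10, 4))

# Same numbers, cumulative scale: a monotonically rising curve.
ax_cum.plot(weeks, cumulative_deaths)
ax_cum.set(title="Cumulative scale", xlabel="Week", ylabel="Total deaths")

# Same numbers, weekly incident scale: week-to-week rises and falls are visible.
ax_inc.bar(weeks, weekly_deaths)
ax_inc.set(title="Weekly incident scale", xlabel="Week", ylabel="Deaths per week")

fig.tight_layout()
plt.show()
```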

Neuroprosthetics: The Restoration of Brain Damage

(2022)

Neuroprosthetics are devices used when there is an interruption in the brain signals that produce and complete intended movements, resulting in various forms of paralysis and physical disability. The disrupted connections can be replicated by electrical pulses delivered by the device. Neuroprosthetics can also be used by amputees with prosthetic limbs: a component placed over the motor cortex is connected to a computer, which decodes the brain's signals and relays the message to the prosthetic limb, producing the intended movement. The main obstacles to optimizing neuroprosthetic abilities currently include the need to better understand the brain's functional "software", improving the sensory features of neuroprosthetics, controversy over privacy and accountability, and safety concerns. Overall, neuroprosthetics can allow affected individuals to regain their physical independence, and progress continues toward supporting a wider range of movements.

An ERP index of real-time error correction within a noisy-channel framework of human communication.

(2021)

Recent evidence suggests that language processing is well-adapted to noise in the input (e.g., spelling or speech errors, misreading or mishearing) and that comprehenders readily correct the input via rational inference over possible intended sentences given probable noise corruptions. In the current study, we probed the processing of noisy linguistic input, asking whether well-studied ERP components may serve as useful indices of this inferential process. In particular, we examined sentences where semantic violations could be attributed to noise, for example, "The storyteller could turn any incident into an amusing antidote", where the implausible word "antidote" is orthographically and phonologically close to the intended "anecdote". We found that the processing of such sentences, where the probability that the message was corrupted by noise exceeds the probability that it was produced intentionally and perceived accurately, was associated with a reduced (less negative) N400 effect and an increased P600 effect, compared to semantic violations that are unlikely to be attributed to noise ("The storyteller could turn any incident into an amusing hearse"). Further, the magnitudes of these ERP effects were correlated with the probability that the comprehender retrieved a plausible alternative. This work thus adds to the growing body of literature suggesting that many aspects of language processing are optimized for dealing with noise in the input, and opens the door to electrophysiological investigations of the computations that support the processing of imperfect input.
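To make the comparison described above concrete, here is a schematic rendering of the noisy-channel inference; the notation (an intended sentence s_i, a perceived sentence s_p) is introduced here for illustration and is not taken from the paper.

```latex
% Schematic noisy-channel inference (notation introduced for illustration only).
% The comprehender infers the intended sentence s_i from the perceived sentence s_p:
P(s_i \mid s_p) \;\propto\; P(s_i)\, P(s_p \mid s_i)
% A correction such as "antidote" -> "anecdote" is favored roughly when a plausible
% alternative plus a likely noise corruption outweighs a literal, accurately
% perceived rendering of the implausible word:
P(s_{\mathrm{anecdote}})\, P(s_{\mathrm{antidote}} \mid s_{\mathrm{anecdote}})
  \;>\; P(s_{\mathrm{antidote}})\, P(s_{\mathrm{antidote}} \mid s_{\mathrm{antidote}})
```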

Revisiting how we operationalize joint attention.

(2021)

Parent-child interactions support the development of a wide range of socio-cognitive abilities in young children. As infants become increasingly mobile, the nature of these interactions changes from person-oriented to object-oriented, with the latter relying on children's emerging ability to engage in joint attention. Joint attention is acknowledged to be a foundational ability in early child development, broadly speaking, yet its operationalization has varied substantially over the course of several decades of developmental research devoted to its characterization. Here, we outline two broad research perspectives on what constitutes joint attention: social and associative accounts. Differences center on the criteria for what qualifies as joint attention and on the hypothesized developmental mechanisms that underlie the ability. After providing a theoretical overview, we introduce a joint attention coding scheme that we have developed iteratively based on careful reading of the literature and our own data coding experiences. This coding scheme provides objective guidelines for characterizing multimodal parent-child interactions. The need for such guidelines is acute given the widespread use of this and other developmental measures to assess atypically developing populations. We conclude with a call for open discussion about the need for researchers to include, in publications pertaining to joint attention, a clear description of what qualifies as joint attention as well as details about their coding. We provide instructions for using our coding scheme in the service of starting such a discussion.

Characterizing Bilingual Effects on Cognition: The Search for Meaningful Individual Differences.

(2021)

A debate over the past decade has focused on the so-called bilingual advantage: the idea that bilingual and multilingual individuals have enhanced domain-general executive functions, relative to monolinguals, due to competition-induced monitoring of both processing and representation from the task-irrelevant language(s). In this commentary, we consider a recent study by Pot, Keijzer, and de Bot (2018), which focused on the relationship between individual differences in language usage and performance on an executive function task among multilingual older adults. We discuss their approach and findings in light of a more general movement towards embracing complexity in this domain of research, including individuals' sociocultural context and position in the lifespan. The field increasingly considers interactions between bilingualism/multilingualism and cognition, employing measures of language use well beyond the early dichotomous perspectives on language background. Moreover, new measures of bilingualism and analytical approaches are helping researchers interrogate the complexities of specific processing issues. Indeed, our review of the bilingualism/multilingualism literature confirms the increased appreciation researchers have for the range of factors, beyond whether someone speaks one, two, or more languages, that impact specific cognitive processes. Here, we highlight some of the most salient of these, and incorporate suggestions for a way forward that likewise encompasses neural perspectives on the topic.

The Cross-Modal Suppressive Role of Visual Context on Speech Intelligibility: An ERP Study.

(2020)

The efficacy of audiovisual (AV) integration is reflected in the degree of cross-modal suppression of the auditory event-related potentials (ERPs, P1-N1-P2), while stronger semantic encoding is reflected in enhanced late ERP negativities (e.g., N450). We hypothesized that increasing visual stimulus reliability should lead to more robust AV integration and enhanced semantic prediction, reflected in suppression of auditory ERPs and an enhanced N450, respectively. EEG was acquired while individuals watched and listened to clear and blurred videos of a speaker uttering intact or highly intelligible degraded (vocoded) words and made binary judgments about word meaning (animate or inanimate). We found that intact speech evoked a larger negativity between 280 and 527 ms than vocoded speech, suggestive of more robust semantic prediction for the intact signal. For visual reliability, we found that greater cross-modal ERP suppression occurred for clear than for blurred videos prior to sound onset and for the P2 ERP. Additionally, the later semantic-related negativity tended to be larger for clear than for blurred videos. These results suggest that the cross-modal effect is largely confined to suppression of early auditory networks, with a weak effect on networks associated with semantic prediction. However, the semantic-related visual effect on the late negativity may have been tempered by the vocoded signal's high reliability.

Tracking differential activation of primary and supplementary motor cortex across timing tasks: An fNIRS validation study.

(2020)

Functional near-infrared spectroscopy (fNIRS) provides an alternative to functional magnetic resonance imaging (fMRI) for assessing changes in cortical hemodynamics. To establish the utility of fNIRS for measuring differential recruitment of the motor network during the production of timing-based actions, we measured cortical hemodynamic responses in 10 healthy adults while they performed two versions of a finger-tapping task. The task, used in an earlier fMRI study (Jantzen et al., 2004), was designed to track the neural basis of different timing behaviors. Participants paced their tapping to a metronomic tone, then continued tapping at the established pace without the tone. Initial tapping was either synchronous or syncopated relative to the tone. This produced a 2 × 2 design: synchronous or syncopated tapping and pacing the tapping with or continuing without a tone. Accuracy of the timing of tapping was tracked while cortical hemodynamics were monitored using fNIRS. Hemodynamic responses were computed by canonical statistical analysis across trials in each of the four conditions. Task-induced brain activation resulted in significant increases in oxygenated hemoglobin concentration (oxy-Hb) in a broad region in and around the motor cortex. Overall, syncopated tapping was harder behaviorally and produced more cortical activation than synchronous tapping. Thus, we observed significant changes in oxy-Hb in direct relation to the complexity of the task.
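As a rough sketch of the kind of canonical analysis described above, and not the authors' actual pipeline, the example below convolves a hypothetical tapping-block boxcar with a standard double-gamma HRF and fits the resulting regressor to a simulated oxy-Hb time series; the sampling rate, block timing, effect size, and noise level are all assumptions.

```python
# Minimal canonical-GLM sketch for an oxy-Hb time series; all parameters are assumed.
import numpy as np
from scipy.stats import gamma

fs = 10.0                                        # assumed fNIRS sampling rate (Hz)
t = np.arange(0, 30, 1 / fs)                     # 30-s kernel support
hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0   # simple double-gamma HRF shape
hrf /= hrf.sum()

n = 3000                                         # 5 min of samples at 10 Hz
boxcar = np.zeros(n)
for onset in range(0, n, 600):                   # hypothetical 20-s tapping block per minute
    boxcar[onset:onset + 200] = 1.0

regressor = np.convolve(boxcar, hrf)[:n]         # predicted hemodynamic response
X = np.column_stack([regressor, np.ones(n)])     # design matrix: task + intercept

rng = np.random.default_rng(0)
oxy_hb = 0.8 * regressor + rng.normal(scale=0.3, size=n)   # simulated oxy-Hb signal

beta, *_ = np.linalg.lstsq(X, oxy_hb, rcond=None)
print(f"estimated task beta: {beta[0]:.2f}")     # recovers roughly the simulated 0.8
```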

Joint Attention in Hearing Parent-Deaf Child and Hearing Parent-Hearing Child Dyads.

(2020)

Here we characterize the establishment of joint attention in hearing parent-deaf child dyads and hearing parent-hearing child dyads. Deaf children were candidates for cochlear implantation who had not yet been implanted and who had no exposure to formal manual communication (e.g., American Sign Language). Because many parents whose deaf children undergo early cochlear implant surgery do not themselves know a visual language, these dyads do not share a formal communication system based in a common sensory modality prior to the child's implantation. Joint attention episodes were identified during free play between hearing parents and their hearing children (N = 4) and hearing parents and their deaf children (N = 4). Attentional episode types included successful parent-initiated joint attention, unsuccessful parent-initiated joint attention, passive attention, successful child-initiated joint attention, and unsuccessful child-initiated joint attention. Based on the proportion of time spent in each episode type, group differences emerged in successful and unsuccessful parent-initiated attempts at joint attention, parent passive attention, and successful child-initiated attempts at joint attention. These findings highlight joint attention as an indicator of early communicative efficacy in parent-child interaction for different child populations. We discuss the active role parents and children play in communication, regardless of their hearing status.