eScholarship
Open Access Publications from the University of California

About

The Donald Bren School of Information and Computer Sciences aims for excellence in research and education. Our mission is to lead the innovation of new information and computing technology through fundamental research in the core areas of information and computer sciences, by cultivating authentic, cutting-edge research collaborations across the broad range of computing and information application domains, and by studying their economic, commercial, and social significance.

Department of Computer Science

There are 871 publications in this collection, published between 1995 and 2024.
Faculty Publications (5)

Teaching Computational Thinking to Multilingual Students through Inquiry-based Learning

Central to the theory of learning are inquiry-based approaches to education. Whereas there is a plethora of research on inquiry learning in the domain of science [19], [20], few studies have analyzed how inquiry-based learning can be applied to computer science education, and how different approaches to inquiry may benefit diverse learners. This is one of the first studies to analyze teacher enactment of inquiry-based learning during the implementation of an upper elementary, computational thinking curriculum, and to explore how teacher approaches to inquiry appear to support or constrain multilingual students' development of computational thinking and computer science identities. Design-based research was used to iteratively develop, test, and refine the inquiry-based curriculum, which aligns with computer science and literacy standards, provides linguistic scaffolding, and integrates culturally responsive materials. We adopt a cross-case mixed-methods design to collect data from five teachers and 149 students including detailed field notes, teacher interviews, student computational artifacts, and student identity surveys. Through analyses of teacher moves, we find that teachers adopt different approaches to inquiry that can be indexed along a continuum ranging from open to closed. Patterns in student data revealed that those who received more structured inquiry lessons developed more sophisticated computational artifacts and showed greater identification with the field of computer science. Findings from this study are being used to add more structured inquiry approaches to the next iteration of our curriculum, including integrating USE/MODIFY/CREATE models into lessons and applying metacognitive strategies from reading research to students' programming activities.

Teaching Computational Thinking to English Learners

Computational thinking is an essential skill for full participation in society in today’s world (Wing, 2006). Yet there has been little discussion about the teaching and learning of computational thinking to English learners. In this paper, we first review what computational thinking is, why it is important in education, and the particular challenges faced in teaching computational thinking to speakers of English as a second language. We then discuss some approaches for addressing these challenges, giving examples from two recent K–12 initiatives in which we have been involved.

Computational Thinking and Literacy

Today’s students will enter a workforce that is powerfully shaped by computing. To be successful in a changing economy, students must learn to think algorithmically and computationally, to solve problems with varying levels of abstraction. These computational thinking skills have become so integrated into social function as to represent fundamental literacies. However, computer science has not been widely taught in K-12 schools. Efforts to create computer science standards and frameworks have yet to make their way into mandated course requirements. Despite a plethora of research on digital literacies, research on the role of computational thinking in the literature is sparse. This conceptual paper proposes a three-dimensional framework for exploring the relationship between computational thinking and literacy through: 1) situating computational thinking in the literature as a literacy; 2) outlining mechanisms by which students’ existing literacy skills can be leveraged to foster computational thinking; and 3) elaborating ways in which computational thinking skills facilitate literacy development.

2 more works
Open Access Policy Deposits (865)

Towards a systems view of IBS

© 2015 Nature Publishing Group, a division of Macmillan Publishers Limited. All Rights Reserved. Despite an extensive body of reported information about peripheral and central mechanisms involved in the pathophysiology of IBS symptoms, no comprehensive disease model has emerged that would guide the development of novel, effective therapies. In this Review, we will first describe novel insights into some key components of brain–gut interactions, starting with the emerging findings of distinct functional and structural brain signatures of IBS. We will then point out emerging correlations between these brain networks and genomic, gastrointestinal, immune and gut-microbiome-related parameters. We will incorporate this new information, as well as the reported extensive literature on various peripheral mechanisms, into a systems-based disease model of IBS, and discuss the implications of such a model for improved understanding of the disorder, and for the development of more-effective treatment approaches in the future.

Poseidon: Mitigating Interest Flooding DDoS Attacks in Named Data Networking

Content-Centric Networking (CCN) is an emerging networking paradigm being considered as a possible replacement for the current IP-based host-centric Internet infrastructure. CCN focuses on content distribution, which is arguably not well served by IP. Named-Data Networking (NDN) is an example of CCN. NDN is also an active research project under the NSF Future Internet Architectures (FIA) program. FIA emphasizes security and privacy from the outset and by design. To be a viable Internet architecture, NDN must be resilient against current and emerging threats. This paper focuses on distributed denial-of-service (DDoS) attacks; in particular, we address interest flooding, an attack that exploits key architectural features of NDN. We show that an adversary with limited resources can mount such an attack, with significant impact on network performance. We then introduce Poseidon: a framework for detecting and mitigating interest flooding attacks. Finally, we report on the results of extensive simulations assessing the proposed countermeasures. © 2013 IEEE.
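The core intuition behind interest-flooding detection is that an attacker's interests request non-existent content and are therefore rarely satisfied by returning data. A minimal sketch of that idea, with a per-interface satisfaction-ratio monitor, is shown below. This is an illustration only: Poseidon's actual detection combines additional statistics (such as pending-interest-table occupancy) and collaborative pushback between routers, and the `window` and `min_ratio` parameters here are hypothetical.

```python
from collections import defaultdict

class InterfaceMonitor:
    """Flags interfaces whose interests are rarely satisfied by data.

    Simplified sketch: a real NDN router would also weigh PIT occupancy
    and coordinate with neighbors; thresholds below are assumptions.
    """

    def __init__(self, window=100, min_ratio=0.5):
        self.window = window        # interests per measurement window
        self.min_ratio = min_ratio  # minimum acceptable satisfaction ratio
        self.sent = defaultdict(int)
        self.satisfied = defaultdict(int)
        self.limited = set()

    def on_interest(self, iface):
        """Record an incoming interest; re-evaluate at window boundaries."""
        self.sent[iface] += 1
        if self.sent[iface] >= self.window:
            ratio = self.satisfied[iface] / self.sent[iface]
            if ratio < self.min_ratio:
                self.limited.add(iface)      # rate-limit suspect interface
            else:
                self.limited.discard(iface)  # recovered: lift the limit
            self.sent[iface] = self.satisfied[iface] = 0

    def on_data(self, iface):
        """Record a data packet satisfying an earlier interest."""
        self.satisfied[iface] += 1

    def allow(self, iface):
        """Whether new interests from this interface should be forwarded."""
        return iface not in self.limited
```

An interface sending many interests that are never answered (the flooding signature) ends up rate-limited, while a well-behaved interface whose interests are satisfied stays unaffected.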

Protein profiles: Biases and protocols

The use of evolutionary profiles to predict protein secondary structure, as well as other protein structural features, has been standard practice since the 1990s. Using profiles in the input of such predictors, in place or in addition to the sequence itself, leads to significantly more accurate predictions. While profiles can enhance structural signals, their role remains somewhat surprising as proteins do not use profiles when folding in vivo. Furthermore, the same sequence-based redundancy reduction protocols initially derived to train and evaluate sequence-based predictors, have been applied to train and evaluate profile-based predictors. This can lead to unfair comparisons since profiles may facilitate the bleeding of information between training and test sets. Here we use the extensively studied problem of secondary structure prediction to better evaluate the role of profiles and show that: (1) high levels of profile similarity between training and test proteins are observed when using standard sequence-based redundancy protocols; (2) the gain in accuracy for profile-based predictors, over sequence-based predictors, strongly relies on these high levels of profile similarity between training and test proteins; and (3) the overall accuracy of a profile-based predictor on a given protein dataset provides a biased measure when trying to estimate the actual accuracy of the predictor, or when comparing it to other predictors. We show, however, that this bias can be mitigated by implementing a new protocol (EVALpro) which evaluates the accuracy of profile-based predictors as a function of the profile similarity between training and test proteins. Such a protocol not only allows for a fair comparison of the predictors on equally hard or easy examples, but also reduces the impact of choosing a given similarity cutoff when selecting test proteins. 
The EVALpro program is available in the SCRATCH suite (www.scratch.proteomics.ics.uci.edu) and can be downloaded at: www.download.igb.uci.edu/#evalpro.
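The evaluation protocol described above amounts to stratifying the test set: instead of reporting one overall accuracy, each test protein is binned by its maximum profile similarity to any training protein, and accuracy is averaged within each bin. A minimal sketch of that stratification follows; note the actual EVALpro protocol compares alignment-derived profiles, whereas a cosine similarity over fixed-length profile summaries stands in here for illustration, and the bin edges are assumptions.

```python
import numpy as np
from collections import defaultdict

def profile_similarity(p, q):
    """Cosine similarity between two fixed-length profile vectors.

    Illustration only: real profile comparisons are alignment-based.
    """
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(p @ q / (np.linalg.norm(p) * np.linalg.norm(q)))

def accuracy_by_similarity(test_set, train_profiles,
                           bins=(0.0, 0.5, 0.8, 1.0)):
    """Average prediction accuracy per profile-similarity bin.

    test_set: iterable of (profile_vector, accuracy) pairs.
    train_profiles: profile vectors of the training proteins.
    Each test protein is assigned to the bin containing its maximum
    similarity to the training set (hypothetical bin edges).
    """
    binned = defaultdict(list)
    for profile, acc in test_set:
        sim = max(profile_similarity(profile, t) for t in train_profiles)
        for lo, hi in zip(bins, bins[1:]):
            if lo <= sim <= hi:
                binned[(lo, hi)].append(acc)
                break
    return {b: sum(accs) / len(accs) for b, accs in binned.items()}
```

Reporting accuracy per bin exposes exactly the bias the abstract describes: a predictor can look strong overall while its gains are concentrated in the high-similarity bins, where profile information has effectively leaked from training to test.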

862 more works