eScholarship
Open Access Publications from the University of California

UC Irvine Libraries

LAUC-I and Library Staff Research

The UCI Libraries provide vital leadership in UCI's distinction as a premier research university. The Libraries are committed to supporting and inspiring members of UCI's diverse community to create and contribute new models of research, scholarship, and innovations in all academic subject areas.

To that end, the UCI Libraries have created two spaces for the depositing and sharing of publications by UCI affiliates. The first is dedicated to research produced by members of the Library Association of the University of California, Irvine (LAUC-I) and library staff (see below).

The second is broader in scope and is open to faculty who partner with the UCI Libraries and whose contributions do not fall within the purview of any of the campus's established research centers, departments, and programs. This research is linked in the left sidebar under “Affiliated Units”.

UAV Forge

(2025)

UAV Forge is a multidisciplinary engineering design team at the University of California, Irvine (UCI) that designs, manufactures, programs, and tests autonomous aerial vehicles. The design aims to satisfy the constraints required to participate in the SUAS 2025 competition season. The SUAS competition is designed to spark interest in Unmanned Aerial Systems (UAS), promote careers and technology in the field, and allow students to take on a challenging mission.

De novo colorectal cancer after kidney transplantation: a systematic review and meta-analysis

(2025)

Background

Kidney transplant (KT) patients have higher risks of developing de novo colorectal cancer (CRC) compared to the general population. However, there is still a knowledge gap in their clinical characteristics, as most single- or multi-center efforts are underpowered and lack generalizability.

Methods

PubMed, Web of Science, Cochrane CENTRAL, and Scopus databases were queried for studies published until July 22nd, 2024. Studies reporting the clinicopathologic characteristics and outcomes of de novo CRC among KT recipients were included.

Results

Forty-nine articles were included, involving 1855 KT patients who developed CRC. The mean time from transplantation to CRC diagnosis was 8.7 years (95% CI 7.2-10.3 years; I2 = 98.3%). De novo CRC was most commonly located in the ascending colon (43.6%; 95% CI 29.5%-58.9%; I2 = 55.3%), and 37.1% of patients had advanced CRC at diagnosis (95% CI 22.3%-54.8%; I2 = 64.1%). Although 68.8% underwent curative-intent treatment (95% CI 45.4%-85.4%; I2 = 65.4%), the pooled 5-year survival rate was 31.8% (95% CI 10.5%-65.1%; I2 = 82.5%).
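Pooled proportions with I2 heterogeneity like those above typically come from a random-effects meta-analysis. As a rough illustration only, with entirely made-up study counts (not the paper's data), a DerSimonian-Laird pooling of proportions on the logit scale can be sketched in plain Python:

```python
import math

# Hypothetical example data, NOT from the paper:
# (events, total patients) per study, e.g. advanced CRC cases
studies = [(12, 40), (25, 60), (9, 50), (30, 70)]

def pool_proportions(studies):
    # Per-study logit proportions and within-study variances
    y = [math.log(e / (n - e)) for e, n in studies]
    v = [1.0 / e + 1.0 / (n - e) for e, n in studies]
    w = [1.0 / vi for vi in v]
    # Inverse-variance fixed-effect pooled logit
    fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    # Cochran's Q and the I^2 heterogeneity statistic (as a percentage)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, y))
    df = len(studies) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    # DerSimonian-Laird between-study variance tau^2
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)
    # Random-effects pooled logit, back-transformed to a proportion
    wr = [1.0 / (vi + tau2) for vi in v]
    re_logit = sum(wi * yi for wi, yi in zip(wr, y)) / sum(wr)
    return 1.0 / (1.0 + math.exp(-re_logit)), i2

pooled, i2 = pool_proportions(studies)
```

Because the random-effects estimate is a weighted average on the logit scale, the back-transformed pooled proportion always lands between the smallest and largest study proportions.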

Conclusions

On average, de novo CRC was diagnosed within 10 years after KT, and nearly 40% of patients already had advanced-stage disease at diagnosis. The pooled 5-year survival rate was 31.8%. However, heterogeneity across studies was wide, and further research is required. PROSPERO registration: CRD42023415767.

EBSCO Interface Change Up

(2025)

The summer of 2025 will bring a change up of the EBSCO interface for many academic libraries. This article describes many of the changes and the ‘new normal’ user interface (UI) for searching the extensive EBSCO offerings. To ready our communities, some libraries have already created updated videos for their students, like this upbeat review from Phillips Library at Aurora University. The vendor’s “Introduction to the New EBSCOhost - Tutorial” states that the “interface features many improvements including personalized dashboards, modern results lists, enhanced displays, greater citing and sharing options, and enhanced detailed record and viewer experiences.” Let’s take a look at these enhancements.

Collectively Creating an AI Literacies Community of Practice

(2025)

This presentation will share the AI Literacies Community of Practice (CoP), which was developed by academic librarians at the University of Florida to deepen our collective understanding of artificial intelligence technologies and establish a common language around literacies and pedagogies related to information literacy. This initiative included co-facilitated bimonthly virtual meetings and pre-selected readings over a sixteen-week period, during which participants explored topics such as AI literacies, AI ethics, teaching impacts, and AI pedagogies. Additionally, the co-facilitators organized bimonthly conversations in a comfortable space within the library to encourage open dialogue and community building around AI topics. This approach led to a core group of members who attended regularly and formed the community. We found that this approach not only encouraged colleagues to learn from one another, but also fostered an environment where experimentation with emerging technologies could occur without judgment. In this presentation, the core members of the CoP will share practical tips on how to develop an AI literacies community of practice and highlight both the successes and failures of our collective approach. By sharing our model, resources, and experiences, we hope to provide a framework that other librarians can adapt to suit their institutional needs and interests.

You Think You Know: Where Learner-Centered Pedagogy Meets Management

(2024)

This contributed volume focuses on person-centered management practices in academic libraries that create space for criticism, sharing of lived experiences, and a willingness to investigate and make changes to the status quo.

Guidance on terminology, application, and reporting of citation searching: the TARCiS statement

(2024)

Evidence syntheses adhering to systematic literature searching techniques are a cornerstone of evidence-based healthcare. Beyond term-based searching in electronic databases, citation searching is a prevalent technique for identifying relevant sources of evidence. However, for decades, citation searching methodology and terminology have not been standardised. An evidence-guided, four-round Delphi consensus study was conducted with 27 international methodological experts to develop the Terminology, Application, and Reporting of Citation Searching (TARCiS) statement. TARCiS comprises 10 specific recommendations, each with a rationale and explanation of when and how to conduct and report citation searching in the context of systematic literature searches. The statement also presents four research priorities, and systematic review teams are encouraged to incorporate TARCiS into standardised workflows.

Accuracy of Prospective Assessments of 4 Large Language Model Chatbot Responses to Patient Questions About Emergency Care: Experimental Comparative Study

(2024)

Background

Recent surveys indicate that 48% of consumers actively use generative artificial intelligence (AI) for health-related inquiries. Despite widespread adoption and the potential to improve health care access, scant research examines the performance of AI chatbot responses regarding emergency care advice.

Objective

We assessed the quality of AI chatbot responses to common emergency care questions. We sought to determine qualitative differences in responses from 4 free-access AI chatbots, for 10 different serious and benign emergency conditions.

Methods

We created 10 emergency care questions that we fed into the free-access versions of ChatGPT 3.5 (OpenAI), Google Bard, Bing AI Chat (Microsoft), and Claude AI (Anthropic) on November 26, 2023. Each response was graded by 5 board-certified emergency medicine (EM) faculty across 8 domains: percentage accuracy, presence of dangerous information, factual accuracy, clarity, completeness, understandability, source reliability, and source relevancy. We determined the correct, complete response to the 10 questions from reputable and scholarly emergency medical references, compiled by an EM resident physician. For the readability of the chatbot responses, we used the Flesch-Kincaid Grade Level of each response, from the readability statistics embedded in Microsoft Word. Differences between chatbots were assessed with the chi-square test.
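The study reads the Flesch-Kincaid Grade Level from Microsoft Word's readability statistics; the underlying formula itself is public (0.39 × words per sentence + 11.8 × syllables per word − 15.59). A rough stand-alone sketch follows, using a heuristic syllable counter — this is an illustrative approximation, not Word's exact algorithm, so its scores can differ slightly from those reported:

```python
import re

def count_syllables(word):
    # Crude heuristic: count vowel groups, then drop a silent trailing 'e'
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and n > 1 and not word.endswith(("le", "ee")):
        n -= 1
    return max(1, n)

def fk_grade(text):
    # Flesch-Kincaid Grade Level: higher values mean harder-to-read text
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59
```

Short, monosyllabic sentences score near (or below) grade 0, while dense polysyllabic medical prose — like much chatbot output — scores well above the grade 6-8 range usually recommended for patient-facing material.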

Results

Each of the 4 chatbots' responses to the 10 clinical questions was scored across 8 domains by 5 EM faculty, yielding 400 assessments per chatbot. Together, the 4 chatbots had the best performance in clarity and understandability (both 85%), intermediate performance in accuracy and completeness (both 50%), and poor performance (10%) for source relevance and reliability (mostly unreported). Chatbots contained dangerous information in 5% to 35% of responses, with no statistical difference between chatbots on this metric (P=.24). ChatGPT, Google Bard, and Claude AI had similar performances across 6 out of 8 domains. Only Bing AI performed better, with more identified or relevant sources (40%; the others had 0%-10%). The Flesch-Kincaid grade level was 7.7-8.9 for all chatbots except ChatGPT (10.8), all too advanced for the average emergency patient. Responses included both dangerous advice (eg, starting cardiopulmonary resuscitation with no pulse check) and generally inappropriate advice (eg, loosening the collar to improve breathing without evidence of airway compromise).

Conclusions

AI chatbots, though ubiquitous, have significant deficiencies in EM patient advice, despite relatively consistent performance. Information for when to seek urgent or emergent care is frequently incomplete and inaccurate, and patients may be unaware of misinformation. Sources are not generally provided. Patients who use AI to guide health care decisions assume potential risks. AI chatbots for health should be subject to further research, refinement, and regulation. We strongly recommend proper medical consultation to prevent potential adverse outcomes.