UCLA Electronic Theses and Dissertations

A Comparative Analysis of Fine-Tuned Llama 3-8B and DistilBERT for News Classification

Abstract

This thesis presents a comparative analysis of the Llama 3-8B and DistilBERT language models for news classification across 26 classes. Using a balanced dataset, we employed Low-Rank Adaptation (LoRA) to fine-tune Llama 3-8B and traditional full-parameter fine-tuning for DistilBERT. The study evaluates the performance, efficiency, and practical applicability of these models in categorizing news articles.
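To make the two training setups concrete, the following is a minimal sketch using the Hugging Face transformers and peft libraries. The checkpoint names, LoRA rank, scaling factor, and target modules are illustrative assumptions, not the configuration reported in the thesis.

```python
# Sketch of the two fine-tuning regimes: LoRA adapters for Llama 3-8B
# versus full-parameter fine-tuning for DistilBERT. Hyperparameters and
# checkpoint identifiers are assumptions for illustration only.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import LoraConfig, get_peft_model

NUM_CLASSES = 26  # number of news categories in the study

# --- LoRA fine-tuning for Llama 3-8B (parameter-efficient) ---
llama_name = "meta-llama/Meta-Llama-3-8B"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(llama_name)
tokenizer.pad_token = tokenizer.eos_token  # Llama defines no pad token

llama = AutoModelForSequenceClassification.from_pretrained(
    llama_name, num_labels=NUM_CLASSES, torch_dtype=torch.bfloat16
)
llama.config.pad_token_id = tokenizer.pad_token_id

lora_config = LoraConfig(
    r=16,                                 # low-rank dimension (assumed)
    lora_alpha=32,                        # LoRA scaling factor (assumed)
    target_modules=["q_proj", "v_proj"],  # attention projections (assumed)
    lora_dropout=0.05,
    task_type="SEQ_CLS",
)
llama = get_peft_model(llama, lora_config)
llama.print_trainable_parameters()  # only adapters + head are trainable

# --- Traditional fine-tuning for DistilBERT (all weights updated) ---
distilbert = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=NUM_CLASSES
)
```

Either model can then be trained with the standard transformers Trainer; the key contrast is that LoRA updates only a small set of injected low-rank matrices, while DistilBERT's roughly 66M parameters are all updated.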

Our experiments show that Llama 3-8B consistently outperforms DistilBERT in overall accuracy, reaching roughly 70% versus DistilBERT's 60%. Both models nevertheless demonstrate competitive capabilities and exhibit distinct strengths across different news categories. The analysis also uncovers substantial run-to-run variability in category-specific performance, underscoring the importance of robust evaluation procedures in model assessment.
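One way to surface such run-to-run variability is to repeat training under different random seeds and compare per-category scores across runs. The sketch below assumes a hypothetical train_and_predict helper; the seeds and metric choice are illustrative, not taken from the thesis.

```python
# Hedged sketch of multi-run, per-category evaluation: train with several
# seeds and report the mean and spread of per-class F1 scores.
import numpy as np
from sklearn.metrics import f1_score

def per_category_f1_across_runs(train_and_predict, X, y, seeds=(0, 1, 2)):
    """Collect per-class F1 for each seed to expose run-to-run variability.

    train_and_predict(X, y, seed=...) is a hypothetical function that trains
    a classifier with the given seed and returns predictions on y's examples.
    """
    scores = []
    for seed in seeds:
        y_pred = train_and_predict(X, y, seed=seed)
        scores.append(f1_score(y, y_pred, average=None))  # one F1 per class
    scores = np.stack(scores)          # shape: (num_runs, num_classes)
    return scores.mean(axis=0), scores.std(axis=0)  # per-class mean, spread
```

A large standard deviation for a category signals that a single-run score there is unreliable, which is the kind of effect the abstract attributes to both models.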
