eScholarship: Open Access Publications from the University of California
UC Santa Cruz Electronic Theses and Dissertations

Iterative LLM-Driven RTL Design Using HDLAgent

Creative Commons BY-SA 4.0 license
Abstract

Trained on billions of lines of GitHub code, Large Language Models (LLMs) have emerged as formidable programming assistants, demonstrating pronounced proficiency in popular programming languages such as Python and C++. This proficiency is attributable to the extensive availability of open-source code in these languages. Training an LLM, however, is an immensely resource-intensive process: Meta's Llama2-7B model required 184,320 GPU hours of training on 2,048 A100 GPUs, at a cost approaching one million dollars. Building a robust model also requires vast datasets. The substantial time, money, and data needed to train an LLM on a new language may discourage the creation of future languages, since they will not be compatible with LLMs “out of the box”.

In response to these challenges, this study introduces HDLAgent, an LLM agent accompanied by a language analysis framework. HDLAgent employs chain-of-thought reasoning and transfer learning to equip LLMs with knowledge of languages absent from their training datasets. Focusing primarily on Hardware Description Languages (HDLs), HDLAgent uses summaries of language tutorials and a selection of simple code examples as context for circuit design in the target language. Its efficacy is further enhanced by a compiler error feedback loop, which adopts a retrieval-augmented strategy that maps compiler error messages to explanations and corrective suggestions. Through this approach, HDLAgent significantly accelerates the process of enabling LLMs to understand and operate in previously unfamiliar programming languages, potentially transforming the adaptability and utility of LLMs as new programming languages continue to emerge.
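To make the workflow concrete, the sketch below is a minimal Python rendering of the generate-compile-repair loop the abstract describes; it is not HDLAgent's actual implementation. The helper callables llm_complete, compile_hdl, and lookup_error_hint are hypothetical stand-ins for an LLM completion API, the target-language compiler, and the retrieval step that maps error messages to stored explanations.

    from typing import Callable, Tuple

    def generate_rtl(
        spec: str,
        language_context: str,
        llm_complete: Callable[[str], str],
        compile_hdl: Callable[[str], Tuple[bool, str]],
        lookup_error_hint: Callable[[str], str],
        max_iters: int = 5,
    ) -> str:
        """Iteratively ask an LLM for RTL, feeding compiler errors back as hints.

        All three callables are hypothetical placeholders, not HDLAgent APIs.
        """
        # Seed the prompt with a tutorial summary and simple examples for the
        # target HDL, giving the model context for a language absent from its
        # training data.
        prompt = (
            f"{language_context}\n\n"
            f"Using the language described above, write a module that implements:\n"
            f"{spec}\n"
            "Think step by step, then output only the final code."
        )
        code = llm_complete(prompt)

        for _ in range(max_iters):
            ok, error_msg = compile_hdl(code)
            if ok:
                return code  # the design compiles; stop iterating
            # Retrieval-augmented repair: map the raw compiler error to a
            # stored explanation and corrective suggestion before re-prompting.
            hint = lookup_error_hint(error_msg)
            prompt = (
                f"{language_context}\n\n"
                f"This code failed to compile:\n{code}\n\n"
                f"Compiler error: {error_msg}\n"
                f"Explanation and suggested fix: {hint}\n"
                "Produce a corrected version of the module."
            )
            code = llm_complete(prompt)

        raise RuntimeError("no compiling design within the iteration budget")

Passing the helpers in as callables keeps the sketch self-contained; in practice each would wrap a specific model endpoint, HDL toolchain, and error-message database.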
