eScholarship
Open Access Publications from the University of California

UC Irvine

UC Irvine Electronic Theses and Dissertations

Large Language Models for Programming Industrial Control Systems and Mitigating Real-World Software Vulnerabilities

Creative Commons 'BY-SA' version 4.0 license
Abstract

This manuscript comprises two sections: automated code generation for Programmable Logic Controllers, and vulnerability repair for Common Vulnerabilities & Exposures (CVEs) with Large Language Models (LLMs).

The application of LLMs to Industrial Control Systems (ICS) is a relatively unexplored area. State-of-the-art LLMs such as GPT-4 and Code Llama fail to produce valid programs for ICS operated by Programmable Logic Controllers (PLCs), leaving abundant potential to incorporate LLMs into the PLC programming process and achieve end-to-end automation of common ICS tasks. We propose LLM4PLC, a user-guided iterative pipeline that leverages user feedback and external verification tools (grammar checkers, compilers, and SMV verifiers), as well as Parameter-Efficient Fine-Tuning and Prompt Engineering, to guide the LLM's generation. We run a complete test suite on GPT-3.5, GPT-4, Code Llama-7B, Code Llama-34B, and fine-tuned variants of both Code Llama models. Ultimately, we demonstrate that the LLM4PLC pipeline improves the generation success rate from 47% to 72%, and the Survey-of-Experts code quality score from 2.25/10 to 7.75/10.

Software vulnerabilities continue to be ubiquitous, even in the era of AI-powered code assistants, advanced static analysis tools, and extensive testing frameworks. It has become apparent that we must not only prevent these bugs, but also eliminate them in a quick, efficient manner. Yet human code intervention is slow, costly, and can itself introduce further security vulnerabilities, especially in legacy codebases. The advent of highly advanced Large Language Models (LLMs) has opened up the possibility for many software defects to be patched automatically. We propose LLM4CVE, an LLM-based iterative pipeline that robustly fixes vulnerable functions with high accuracy. We examine our pipeline with state-of-the-art LLMs, such as GPT-3.5, GPT-4o, Llama 3 8B, and Llama 3 70B, along with fine-tuned variants of selected models. We achieve an increase in ground-truth code similarity of 20% with Llama 3 70B.
