Towards Augmenting and Evaluating Large Language Models
- Author: Liu, Tianyang
- Advisor(s): McAuley, Julian
Abstract
In the rapidly evolving field of Natural Language Processing (NLP), the advent of Large Language Models (LLMs) marks a significant milestone, setting new standards in language understanding and generation. This thesis focuses on augmenting and evaluating LLMs. It introduces ToolkenGPT, a novel method that integrates external tools via tool embeddings to enrich model functionality and adaptability, and RepoBench, a benchmark for assessing the proficiency of LLMs at repository-level code auto-completion. Additionally, the thesis rethinks approaches to tabular data reasoning, exploring how LLMs can be better tailored to understand and interpret structured data formats effectively.
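To make the "tool embeddings" idea concrete, the sketch below shows one minimal way such an integration could look: each external tool is treated as an extra token whose learnable embedding is appended to a frozen language model's output head, so decoding can select either a word or a tool. This is an illustrative assumption based only on the abstract's description, not the thesis's actual implementation; the class name `ToolkenHead` and all dimensions are hypothetical.

```python
# Hypothetical sketch of tool embeddings appended to a frozen LM head.
# Assumed, not taken from the thesis: names, shapes, and training setup.
import torch
import torch.nn as nn

class ToolkenHead(nn.Module):
    def __init__(self, lm_head: nn.Linear, num_tools: int):
        super().__init__()
        self.lm_head = lm_head  # original vocabulary projection, kept frozen
        for p in self.lm_head.parameters():
            p.requires_grad = False
        # One learnable embedding per tool, same width as the LM hidden size.
        self.tool_embeddings = nn.Parameter(
            torch.randn(num_tools, lm_head.in_features) * 0.02
        )

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        word_logits = self.lm_head(hidden)             # (..., vocab_size)
        tool_logits = hidden @ self.tool_embeddings.T  # (..., num_tools)
        # Decoding takes the argmax over words *and* tools; picking a tool
        # index would trigger a call to the corresponding external tool.
        return torch.cat([word_logits, tool_logits], dim=-1)

# Usage: wrap an existing head and train only the tool embeddings.
head = ToolkenHead(nn.Linear(4096, 32000, bias=False), num_tools=8)
logits = head(torch.randn(1, 10, 4096))  # shape (1, 10, 32000 + 8)
```

The appeal of this design, as the abstract suggests, is adaptability: because only the small tool-embedding matrix is trained, new tools can be added without fine-tuning the underlying model.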