LLM-Coordination: Developing Coordinating Agents with Large Language Models
- Agashe, Saaket
- Advisor(s): Wang, Xin
Abstract
It is essential for intelligent agents not only to excel in isolated settings but also to coordinate with partners toward common goals. Current multi-agent coordination methods rely on Reinforcement Learning (RL) to train agents that can work together effectively. Meanwhile, agents based on Large Language Models (LLMs) have shown promising reasoning and planning capabilities in single-agent tasks, at times outperforming RL-based methods. In this study, we build and assess the effectiveness of LLM agents in various coordination scenarios. We introduce the LLM-Coordination Framework, which enables LLMs to complete coordination tasks. We evaluate our method on three game environments and organize the evaluation along five axes: Theory of Mind, Situated Reasoning, Sustained Coordination, Robustness to Partners, and Explicit Assistance. First, the evaluation of Theory of Mind and Situated Reasoning reveals the ability of LLMs to infer a partner's intentions and reason about actions accordingly. Next, the evaluation of Sustained Coordination and Robustness to Partners shows that LLMs can coordinate with unknown partners in complex, long-horizon tasks, outperforming RL baselines. Finally, to test Explicit Assistance, the ability of an agent to offer help proactively, we introduce two novel layouts into the Overcooked-AI benchmark, examining whether agents can prioritize helping their partners at the cost of time they could have spent on their own tasks. This research underscores the promising capabilities of LLMs in sophisticated coordination environments and reveals their potential for building strong real-world multi-agent coordination systems.
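To make the abstract's decision loop concrete, here is a minimal, hypothetical sketch of a single decision step for an LLM coordination agent: describe the state in text, ask the model to infer the partner's intent (Theory of Mind), then pick a complementary action. All names here (GameState, llm_complete, the prompt format, and the action list) are illustrative assumptions, not the thesis's actual implementation.

```python
# Hypothetical sketch of one decision step for an LLM coordination agent.
# Not the thesis's implementation; all names and prompts are illustrative.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class GameState:
    """Text descriptions of the environment and both agents."""
    environment: str     # e.g., "Pot at (3,4) holds 2 of 3 onions."
    self_status: str     # e.g., "You are holding an onion."
    partner_status: str  # e.g., "Partner is fetching a dish."


def choose_action(state: GameState,
                  actions: List[str],
                  llm_complete: Callable[[str], str]) -> str:
    """One step: infer the partner's likely intent, then select the
    action from `actions` that best complements it."""
    prompt = (
        "You are a cooperative agent in a two-player kitchen game.\n"
        f"Environment: {state.environment}\n"
        f"Your status: {state.self_status}\n"
        f"Partner status: {state.partner_status}\n"
        "First, infer what your partner is likely trying to do.\n"
        "Then pick exactly one action from this list that best "
        f"complements the partner: {actions}\n"
        "Answer with the action string only."
    )
    reply = llm_complete(prompt).strip()
    # Fall back to a safe default if the model replies off-list.
    return reply if reply in actions else "wait"


if __name__ == "__main__":
    # Stubbed model for demonstration; a real system would call an LLM API.
    state = GameState(
        environment="Pot at (3,4) holds 2 of 3 onions; dispenser at (1,2).",
        self_status="You are holding an onion.",
        partner_status="Partner is fetching a dish for serving.",
    )
    stub = lambda prompt: "put onion in pot"
    print(choose_action(state, ["put onion in pot", "fetch dish", "wait"], stub))
```

Conditioning the action choice on an explicit partner-intent inference step, rather than on raw state alone, is the pattern the abstract's Theory of Mind evaluation probes.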