We address the challenge of multi-agent cooperation, in which decentralized agents must achieve a common goal under complex partial observations. Existing cooperative agent systems often struggle to efficiently process continuously accumulating information, tend to produce globally suboptimal plans because they do not account for collaborators, and are prone to false planning when collaborators change the environment. To overcome these challenges, we propose the RElevance, Proximity, and Validation-Enhanced Cooperative Language Agent (REVECA), a novel cognitive architecture powered by GPT-4o-mini. REVECA enables efficient memory management, optimal planning, and cost-effective prevention of false planning by leveraging Relevance Estimation, Adaptive Planning, and Trajectory-based Validation. Extensive experiments demonstrate REVECA's superiority over existing methods across various benchmarks, and a user study reveals its potential for trustworthy human-AI cooperation.
In noisy environments, agents often accumulate irrelevant information, which overwhelms their memory and impairs decision-making. Relevance Estimation lets an agent prioritize goal-relevant observations and avoid being distracted by irrelevant objects.
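A minimal sketch of what relevance-weighted memory management could look like; the MemoryEntry structure and the keyword-overlap scorer below are hypothetical illustrations, not REVECA's actual LLM-based relevance estimation.

```python
from dataclasses import dataclass, field


@dataclass
class MemoryEntry:
    """A single observation stored by the agent."""
    description: str   # e.g. "saw a mug on the kitchen table"
    relevance: float = 0.0


@dataclass
class RelevanceMemory:
    """Keeps only the observations most relevant to the current goal."""
    capacity: int = 50
    entries: list[MemoryEntry] = field(default_factory=list)

    def add(self, entry: MemoryEntry, goal: str) -> None:
        entry.relevance = score_relevance(entry.description, goal)
        self.entries.append(entry)
        # Prune the least relevant entries so accumulating observations
        # never overwhelm downstream planning.
        self.entries.sort(key=lambda e: e.relevance, reverse=True)
        del self.entries[self.capacity:]


def score_relevance(description: str, goal: str) -> float:
    """Hypothetical scorer: keyword overlap between observation and goal.
    REVECA's relevance estimation is LLM-driven and richer than this."""
    obs_words = set(description.lower().split())
    goal_words = set(goal.lower().split())
    return len(obs_words & goal_words) / max(len(goal_words), 1)
```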
By considering collaborators' positions and task priorities, Adaptive Planning guides agents toward globally optimal plans, ensuring each task is assigned to the most suitable agent.
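A hedged sketch of proximity-aware task assignment under these ideas; the greedy cost model (urgent tasks first, nearest free agent wins) is an illustrative assumption rather than the paper's exact planning rule.

```python
import math
from dataclasses import dataclass


@dataclass
class Subtask:
    name: str
    location: tuple[float, float]
    priority: int  # lower value = more urgent


@dataclass
class Agent:
    name: str
    position: tuple[float, float]


def assign_tasks(agents: list[Agent], subtasks: list[Subtask]) -> dict[str, str]:
    """Greedily assign each subtask to the closest free agent,
    handling the most urgent subtasks first. Illustrative only."""
    assignment: dict[str, str] = {}
    free = {a.name: a for a in agents}
    for task in sorted(subtasks, key=lambda t: t.priority):
        if not free:
            break
        nearest = min(free.values(), key=lambda a: math.dist(a.position, task.location))
        assignment[task.name] = nearest.name
        free.pop(nearest.name)
    return assignment
```

In this toy version, accounting for collaborators' positions is what prevents two agents from converging on the same nearby task while a distant one goes unserved.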
In partially observable environments, an agent's knowledge can become outdated through interactions it never observed, leading to false plans. Trajectory-based Validation lets agents infer collaborators' past trajectories to verify the information their plans rely on.
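A simplified sketch of this check: before committing to a plan step, the agent asks whether a collaborator's trajectory passed near the object the plan depends on after the agent last saw it, in which case the remembered state may be stale. The function name, the reach threshold, and the trajectory representation are hypothetical.

```python
import math

Point = tuple[float, float]


def may_be_outdated(object_position: Point,
                    collaborator_trajectory: list[Point],
                    last_seen_step: int,
                    reach: float = 1.5) -> bool:
    """Return True if the collaborator came within `reach` of the object
    after the agent last observed it, so its remembered state may be stale."""
    for step, pos in enumerate(collaborator_trajectory):
        if step > last_seen_step and math.dist(pos, object_position) <= reach:
            return True
    return False


# Usage: if may_be_outdated(...) is True, the agent re-verifies the object's
# state (e.g. by revisiting it or querying the collaborator) before planning on it.
```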
@article{seo2024llm,
title={LLM-Based Cooperative Agents using Information Relevance and Plan Validation},
author={Seo, SeungWon and Lee, Junhyeok and Noh, SeongRae and Kang, HyeongYeop},
journal={arXiv preprint arXiv:2405.16751},
year={2024}
}