
LLMs are currently effective at answering questions that are clearly asked, but they can struggle with ambiguous queries. This highlights the need for intelligent agents capable of asking clarification questions, a capability that demands complex understanding, state tracking, and planning across multi-turn conversations. In this paper, we study a surrogate problem, using entity-deducing games to evaluate the conversational planning capabilities of different models. We systematically evaluate various LLMs and find significant gaps in their conversational planning ability. Drawing inspiration from Reinforcement Learning from Human Feedback (RLHF), we apply Reinforcement Learning from Self-Playing (RLSP) to vanilla Vicuna models to enhance planning capacity through self-play in the game. This research offers insights into potential advancements toward more intelligent and autonomous agents.
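The core of RLSP is a self-play loop in which the model plays the questioner in an entity-deducing game and receives a reward when it identifies the hidden entity. As a minimal sketch of that loop, the toy game below replaces both LLM roles with simple stand-ins: a heuristic questioner that queries binary attributes and a truthful judge over a hypothetical entity table. The entity pool, attribute questions, and terminal-reward shape are illustrative assumptions, not the paper's implementation.

```python
import random

# Hypothetical entity pool with binary attributes. In the paper's setting both
# the questioner and the judge are LLMs and the hidden entity is a real-world
# concept; this table is a simplified stand-in.
ENTITIES = {
    "penguin": {"alive": True,  "flies": False, "man_made": False},
    "eagle":   {"alive": True,  "flies": True,  "man_made": False},
    "drone":   {"alive": False, "flies": True,  "man_made": True},
    "statue":  {"alive": False, "flies": False, "man_made": True},
}
ATTRIBUTES = ["alive", "flies", "man_made"]

def play_episode(max_turns=5):
    """One self-play episode: the questioner asks yes/no questions to narrow
    down a hidden entity, then guesses. Returns (reward, turns_used)."""
    secret = random.choice(list(ENTITIES))
    candidates = set(ENTITIES)
    for turn in range(1, max_turns + 1):
        # Questioner stand-in: pick the attribute that most evenly splits the
        # remaining candidates. An RLSP-trained LLM would instead generate
        # this question in natural language.
        attr = max(ATTRIBUTES, key=lambda a: min(
            sum(ENTITIES[c][a] for c in candidates),
            sum(not ENTITIES[c][a] for c in candidates)))
        # Judge stand-in: answers truthfully about the secret entity.
        answer = ENTITIES[secret][attr]
        candidates = {c for c in candidates if ENTITIES[c][attr] == answer}
        if len(candidates) == 1:
            guess = candidates.pop()
            # Sparse terminal reward for a correct guess; this episode-level
            # signal is what a policy-gradient update would consume.
            return (1.0 if guess == secret else 0.0), turn
    return 0.0, max_turns

rewards = [play_episode()[0] for _ in range(100)]
print(f"mean episode reward: {sum(rewards) / len(rewards):.2f}")
```

In the actual RLSP setup, the episode reward would instead drive a reinforcement-learning update on the Vicuna questioner's generated dialogue turns, so that question-asking strategies which deduce the entity in fewer turns are reinforced.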

Related readings and updates.

Robust Autonomy Emerges from Self-Play

Self-play has powered breakthroughs in two-player and multi-player games. Here we show that self-play is a surprisingly effective strategy in another domain: robust and naturalistic driving emerges entirely from self-play in simulation at unprecedented scale, 1.6 billion km of driving. This is enabled by GigaFlow, a batched simulator that can synthesize and train on 42 years of subjective driving experience per hour on a single…

Towards Learning Multi-Agent Negotiations via Self-Play

Making sophisticated, robust, and safe sequential decisions is at the heart of intelligent systems. This is especially critical for planning in complex multi-agent environments, where agents need to anticipate other agents' intentions and possible future actions. Traditional methods formulate the problem as a Markov Decision Process, but the solutions often rely on various assumptions and become brittle when presented with corner cases. In…