arxiv:2602.17100

AgentConductor: Topology Evolution for Multi-Agent Competition-Level Code Generation

Published on Feb 19 · Submitted by Siyu Wang on Mar 4
Abstract

AgentConductor uses reinforcement learning-optimized multi-agent systems with an LLM-based orchestrator to dynamically generate interaction topologies for code generation, improving accuracy while reducing computational costs.

AI-generated summary

Large language model (LLM)-driven multi-agent systems (MAS) coordinate specialized agents through predefined interaction topologies and have shown promise for complex tasks such as competition-level code generation. Recent studies demonstrate that carefully designed multi-agent workflows and communication graphs can significantly improve code generation performance by leveraging collaborative reasoning. However, existing methods neither adapt topology density to task difficulty nor iteratively refine the topology within an instance using execution feedback, which leads to redundant communication and performance bottlenecks. To address these issues, we propose AgentConductor: a reinforcement learning-optimized MAS with an LLM-based orchestrator agent as its core, which enables end-to-end feedback-driven dynamic generation of interaction topologies. For each query, AgentConductor infers agent roles and task difficulty, then constructs a task-adapted, density-aware layered directed acyclic graph (DAG) topology, underpinned by two key innovations. First, we design a novel topological density function that captures communication-aware mathematical characterizations of multi-agent interactions. Second, we adopt difficulty interval partitioning, which yields a precise topological density upper bound per difficulty level, avoiding excessive pruning and enabling finer-grained control. Empirically, across three competition-level and two foundational code datasets, AgentConductor achieves state-of-the-art accuracy, outperforming the strongest baseline by up to 14.6% in pass@1 accuracy while reducing topology density by 13% and token cost by 68%.
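The paper describes the topological density function and the per-difficulty density caps only at a high level. A minimal sketch of what a density measure over a layered DAG might look like, with hypothetical agent names and cap values (none of these specifics come from the paper):

```python
def layered_dag_density(layers, edges):
    """Density of a layered DAG: realized edges over the maximum
    possible forward edges (every agent talking to every agent in
    any later layer).

    layers: list of lists of agent names, e.g. [["planner"], ["coder"]]
    edges:  set of (src, dst) pairs crossing from one layer to a later one
    """
    max_edges = 0
    for i in range(len(layers)):
        later = sum(len(layer) for layer in layers[i + 1:])
        max_edges += len(layers[i]) * later
    return len(edges) / max_edges if max_edges else 0.0


# Hypothetical per-difficulty density caps, illustrating the idea of
# difficulty interval partitioning (the paper's actual bounds differ):
DENSITY_CAPS = {"easy": 0.3, "medium": 0.6, "hard": 1.0}

layers = [["planner"], ["coder", "tester"], ["reviewer"]]
edges = {("planner", "coder"), ("planner", "tester"),
         ("coder", "reviewer"), ("tester", "reviewer")}
print(layered_dag_density(layers, edges))  # 4 realized / 5 possible = 0.8
```

Under this measure, pruning edges from a fully connected layered graph until the density falls below the cap for the inferred difficulty level would give the "density-aware" topology the abstract refers to.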

Community

Paper submitter

A new framework dynamically adjusts multi-agent connections to solve complex programming challenges while using fewer tokens.

The big deal here is the shift from rigid workflows to fluid teamwork.

Typical multi-agent systems use a fixed, hardcoded workflow for every single problem. If you have a team of 5 specialized AI agents, all five talk to each other in the exact same pattern whether the task is printing a line of text or solving a massive competitive programming challenge.

This wastes huge amounts of computing power on simple tasks and fails on complex tasks that actually require a different structure.

AgentConductor fixes this by acting like a smart human project manager. It looks at the problem, judges the difficulty, and creates a custom communication graph just for that specific task. Easy tasks get a small, cheap team; hard tasks get a large, highly connected team. Even better, if the generated code fails to run, the manager reads the error message and rewrites the team workflow on the fly to try a new strategy.
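That manager loop can be sketched in a few lines. Everything here is a toy stand-in under stated assumptions: `estimate_difficulty` replaces the orchestrator LLM's judgment, `run_agents` replaces the actual multi-agent pass, and the two-topology choice replaces the paper's learned, density-aware construction:

```python
import os
import subprocess
import sys
import tempfile


def estimate_difficulty(problem: str) -> str:
    # Placeholder heuristic; in the paper the orchestrator is itself an LLM.
    return "hard" if len(problem) > 500 else "easy"


def build_topology(difficulty: str, feedback):
    # Small team for easy tasks; a larger, denser layered team otherwise.
    # Error feedback from a failed run steers the next topology.
    if difficulty == "easy" and feedback is None:
        return [["coder"]]
    return [["planner"], ["coder", "tester"], ["reviewer"]]


def run_agents(topology, problem, feedback) -> str:
    # Stand-in for the multi-agent pass: a real system would route the
    # problem through the topology's agents and return their program.
    return "print('hello')"


def run_candidate(code: str):
    # Execute the generated program and capture any error message.
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        proc = subprocess.run([sys.executable, path],
                              capture_output=True, text=True, timeout=10)
        return proc.returncode == 0, proc.stderr
    finally:
        os.unlink(path)


def solve(problem: str, max_rounds: int = 3):
    difficulty = estimate_difficulty(problem)
    feedback = None
    for _ in range(max_rounds):
        topology = build_topology(difficulty, feedback)
        code = run_agents(topology, problem, feedback)
        ok, err = run_candidate(code)
        if ok:
            return code
        feedback = err  # the manager rewrites the workflow from this
    return None
```

The point of the sketch is the control flow: difficulty is judged once per query, but the topology is rebuilt on every failed round using the execution error, which is the "rewrites the team workflow on the fly" behavior described above.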

The result: it drastically improves coding accuracy while cutting token costs by 68%, evidence that AI teams need flexible, task-specific management rather than rigid, one-size-fits-all pipelines.

