arxiv:2602.05892

ContextBench: A Benchmark for Context Retrieval in Coding Agents

Published on Feb 5 · Submitted by Zhaoyang Chu on Feb 11

Abstract

AI-generated summary: ContextBench evaluates context retrieval in coding agents through detailed process analysis, revealing that advanced agent designs yield only limited improvements in context usage and that substantial gaps remain between explored and utilized information.

LLM-based coding agents have shown strong performance on automated issue resolution benchmarks, yet existing evaluations largely focus on final task success, providing limited insight into how agents retrieve and use code context during problem solving. We introduce ContextBench, a process-oriented evaluation of context retrieval in coding agents. ContextBench consists of 1,136 issue-resolution tasks from 66 repositories across eight programming languages, each augmented with human-annotated gold contexts. We further implement an automated evaluation framework that tracks agent trajectories and measures context recall, precision, and efficiency throughout issue resolution. Using ContextBench, we evaluate four frontier LLMs and five coding agents. Our results show that sophisticated agent scaffolding yields only marginal gains in context retrieval ("The Bitter Lesson" of coding agents), LLMs consistently favor recall over precision, and substantial gaps exist between explored and utilized context. ContextBench augments existing end-to-end benchmarks with intermediate gold-context metrics that unbox the issue-resolution process. These contexts offer valuable intermediate signals for guiding LLM reasoning in software tasks.
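To make the setup concrete, here is a minimal sketch of what a single issue-resolution task with gold-context annotations at file, block, and line granularity could look like. The class and field names (GoldContext, Task, line_ranges, and the placeholder values) are illustrative assumptions, not ContextBench's released schema.

```python
from dataclasses import dataclass, field

# Hypothetical record layout for one issue-resolution task with human-annotated
# gold contexts. Names are illustrative, not ContextBench's actual schema.

@dataclass
class GoldContext:
    file_path: str                      # gold file the fix depends on
    line_ranges: list[tuple[int, int]]  # gold spans (block/line granularity), inclusive

@dataclass
class Task:
    task_id: str
    repo: str
    language: str
    issue_text: str
    gold_contexts: list[GoldContext] = field(default_factory=list)

# Toy example with placeholder values.
example = Task(
    task_id="demo-0001",
    repo="octocat/hello-world",
    language="Python",
    issue_text="TypeError raised when the config value is None",
    gold_contexts=[
        GoldContext("src/config.py", [(42, 58)]),    # block-level gold span
        GoldContext("src/loader.py", [(101, 101)]),  # single gold line
    ],
)
```

An agent's trajectory would then be scored by comparing the files and line spans it explores and edits against these gold annotations.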

Community

Paper submitter

Most repo-level benchmarks measure Pass@k ✅
But fixing a bug does not mean the agent understood the code 👀

We built ContextBench 🎉
A benchmark to measure whether coding agents actually retrieve and use the right context 🔍📂

📊 What's inside
🧩 1,136 real-world issues
📁 66 repositories
🌍 8 programming languages
🧠 Expert-verified gold contexts at file, block, and line granularity
👣 Full trajectory tracking of agent behavior
📈 Metrics: Recall, Precision, F1, Efficiency, Usage Drop
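As a rough illustration of how such metrics could be computed, here is a toy sketch assuming standard set-based definitions over (file, line) pairs; the paper's exact formulations may differ, and efficiency is omitted.

```python
# Contexts modeled as sets of (file_path, line_number) pairs.
# Assumed textbook definitions, not ContextBench's official harness.

def precision_recall_f1(retrieved: set, gold: set) -> tuple[float, float, float]:
    """Set-based precision/recall/F1 of retrieved context against gold context."""
    if not retrieved or not gold:
        return 0.0, 0.0, 0.0
    hits = len(retrieved & gold)
    if hits == 0:
        return 0.0, 0.0, 0.0
    precision = hits / len(retrieved)
    recall = hits / len(gold)
    return precision, recall, 2 * precision * recall / (precision + recall)


def usage_drop(explored: set, utilized: set, gold: set) -> float:
    """Gap between gold context the agent explored and gold context it actually used."""
    if not gold:
        return 0.0
    return (len(explored & gold) - len(utilized & gold)) / len(gold)


# Toy trajectory: the agent opens all gold lines plus one unrelated file,
# but only one gold line ends up influencing the final patch.
gold = {("src/config.py", ln) for ln in range(42, 59)}
explored = gold | {("src/unrelated.py", 7)}
utilized = {("src/config.py", 42)}

print(precision_recall_f1(explored, gold))   # high recall, slightly lower precision
print(usage_drop(explored, utilized, gold))  # large gap: retrieved != utilized
```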

πŸ” What surprised us
1️⃣ Complex agentic scaffolds often do not improve retrieval quality πŸ˜… Instead, they introduce over-engineering.
A familiar pattern in AI research… the Bitter Lesson again πŸ‹

2️⃣ Many SOTA LLMs chase high recall but sacrifice precision πŸ“‰
More context retrieved, more noise introduced

3️⃣ Retrieved β‰  Utilized ❗
Agents frequently inspect the right code but fail to incorporate it

4️⃣ More balanced retrieval strategies achieve stronger Pass@1 at lower cost βš–οΈβœ¨

