arxiv:2602.20672

BBQ-to-Image: Numeric Bounding Box and Qolor Control in Large-Scale Text-to-Image Models

Published on Feb 24
· Submitted by
Ron Mokady
on Mar 4
Abstract

BBQ is a text-to-image model that enables precise numeric control over object attributes through structured-text conditioning without architectural changes.

AI-generated summary

Text-to-image models have rapidly advanced in realism and controllability, with recent approaches leveraging long, detailed captions to support fine-grained generation. However, a fundamental parametric gap remains: existing models rely on descriptive language, whereas professional workflows require precise numeric control over object location, size, and color. In this work, we introduce BBQ, a large-scale text-to-image model that directly conditions on numeric bounding boxes and RGB triplets within a unified structured-text framework. We obtain precise spatial and chromatic control by training on captions enriched with parametric annotations, without architectural modifications or inference-time optimization. This also enables intuitive user interfaces such as object dragging and color pickers, replacing ambiguous iterative prompting with precise, familiar controls. Across comprehensive evaluations, BBQ achieves strong box alignment and improves RGB color fidelity over state-of-the-art baselines. More broadly, our results support a new paradigm in which user intent is translated into an intermediate structured language, consumed by a flow-based transformer acting as a renderer and naturally accommodating numeric parameters.
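Concretely, a caption enriched with parametric annotations might be serialized as structured text like the following sketch. This is an illustrative assumption, not the paper's actual schema: the tag format, field names (`obj`, `box`, `rgb`), and coordinate convention are all hypothetical, chosen only to show how numeric boxes and RGB triplets can live inside a single text prompt.

```python
def make_structured_caption(description, objects):
    """Serialize per-object bounding boxes and RGB colors into one caption string.

    Each object carries a normalized box (x0, y0, x1, y1) in [0, 1]
    and an (r, g, b) triplet in 0-255. Field names are illustrative.
    """
    parts = [description]
    for obj in objects:
        x0, y0, x1, y1 = obj["box"]
        r, g, b = obj["rgb"]
        parts.append(
            f'<obj name="{obj["name"]}" '
            f"box=({x0:.2f},{y0:.2f},{x1:.2f},{y1:.2f}) "
            f"rgb=({r},{g},{b})>"
        )
    return " ".join(parts)

caption = make_structured_caption(
    "A living room with a sofa and a lamp.",
    [
        {"name": "sofa", "box": (0.10, 0.55, 0.70, 0.95), "rgb": (0, 128, 128)},
        {"name": "lamp", "box": (0.75, 0.20, 0.90, 0.60), "rgb": (255, 215, 0)},
    ],
)
print(caption)
```

A UI layer (drag handles, color picker) would write these numeric fields directly, and the model — trained on captions in this style — consumes the result as ordinary text, which is why no architectural change is needed.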

Community

Paper submitter

Fibo BBQ: Bounding Box & Qolor Control in Large-Scale Text-to-Image Models

Text prompts are a terrible UI for precision. It's much more intuitive to drag objects into place or use a color picker than to write "put it 20% left, make it teal, slightly bigger…"
Fibo BBQ demonstrates that we can train large-scale T2I models with numeric parameters (e.g., positions, boxes, colors) as part of a structured caption, at scale.

This is a step toward richer controllability: more “knobs” beyond text, and new UI paradigms that feel like design tools, not prompt engineering.


