Modalities: Audio, Text
Formats: parquet

AudioCapsT2ARetrieval

An MTEB dataset
Massive Text Embedding Benchmark

Natural language description for any kind of audio in the wild.

Task category: t2a (text-to-audio retrieval)
Domains: Encyclopaedic, Written
Reference: https://audiocaps.github.io/
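In a t2a (text-to-audio) retrieval task, each text caption acts as a query and each audio clip as a corpus document: both are embedded into a shared vector space, documents are ranked by similarity to each query, and ranking metrics score the result. A minimal illustrative sketch with made-up 2-D embeddings (not the MTEB implementation):

```python
import math

# Toy example: 3 caption "queries" and 4 audio "documents" as 2-D embeddings.
# Real embeddings would come from a text/audio encoder model.
queries = [(1.0, 0.0), (0.0, 1.0), (0.7, 0.7)]
docs = [(0.9, 0.1), (0.1, 0.9), (0.6, 0.8), (0.5, 0.5)]

def cosine(a, b):
    """Cosine similarity between two 2-D vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# For each query, rank document indices by descending similarity;
# retrieval metrics are then computed over these rankings.
ranking = [
    sorted(range(len(docs)), key=lambda j: -cosine(q, docs[j]))
    for q in queries
]
```

In this dataset the corpus entries are the 10-second audio clips and the relevance judgments map each caption to exactly one clip.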

Source datasets:

How to evaluate on this task

You can evaluate an embedding model on this dataset using the following code:

import mteb

task = mteb.get_task("AudioCapsT2ARetrieval")
evaluator = mteb.MTEB([task])

model = mteb.get_model(YOUR_MODEL)  # replace YOUR_MODEL with the name of the embedding model to evaluate
evaluator.run(model)

To learn more about how to run models on MTEB tasks, check out the GitHub repository.

Citation

If you use this dataset, please cite the dataset as well as mteb, as this dataset likely includes additional processing as part of the MMTEB contribution.


@inproceedings{kim2019audiocaps,
  author = {Kim, Chris Dongjoo and Kim, Byeongchang and Lee, Hyunmin and Kim, Gunhee},
  booktitle = {Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)},
  pages = {119--132},
  title = {Audiocaps: Generating captions for audios in the wild},
  year = {2019},
}


@article{enevoldsen2025mmtebmassivemultilingualtext,
  title={MMTEB: Massive Multilingual Text Embedding Benchmark},
  author={Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff},
  publisher = {arXiv},
  journal={arXiv preprint arXiv:2502.13595},
  year={2025},
  url={https://arxiv.org/abs/2502.13595},
  doi = {10.48550/arXiv.2502.13595},
}

@article{muennighoff2022mteb,
  author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Loïc and Reimers, Nils},
  title = {MTEB: Massive Text Embedding Benchmark},
  publisher = {arXiv},
  journal={arXiv preprint arXiv:2210.07316},
  year = {2022},
  url = {https://arxiv.org/abs/2210.07316},
  doi = {10.48550/ARXIV.2210.07316},
}

Dataset Statistics

The following JSON contains the descriptive statistics for the task. They can also be obtained using:

import mteb

task = mteb.get_task("AudioCapsT2ARetrieval")

desc_stats = task.metadata.descriptive_stats
{
    "test": {
        "num_samples": 5294,
        "number_of_characters": 258732,
        "documents_text_statistics": null,
        "documents_image_statistics": null,
        "documents_audio_statistics": {
            "total_duration_seconds": 8708.250125,
            "min_duration_seconds": 1.7415,
            "average_duration_seconds": 9.862117921857305,
            "max_duration_seconds": 10.0,
            "unique_audios": 883,
            "average_sampling_rate": 24000.0,
            "sampling_rates": {
                "24000": 883
            }
        },
        "queries_text_statistics": {
            "total_text_length": 258732,
            "min_text_length": 14,
            "average_text_length": 58.65608705508955,
            "max_text_length": 210,
            "unique_texts": 4201
        },
        "queries_image_statistics": null,
        "queries_audio_statistics": null,
        "relevant_docs_statistics": {
            "num_relevant_docs": 4411,
            "min_relevant_docs_per_query": 1,
            "average_relevant_docs_per_query": 1.0,
            "max_relevant_docs_per_query": 1,
            "unique_relevant_docs": 883
        },
        "top_ranked_statistics": null
    }
}
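As a quick sanity check, the reported aggregates are internally consistent: each average equals the corresponding total divided by its count, and the sample count is the sum of queries and unique audio documents (assuming, per the 1.0 average above, one relevant document per query). A minimal sketch using the values above:

```python
# Values copied from the descriptive statistics above.
total_duration = 8708.250125   # seconds, over 883 unique audio documents
unique_audios = 883
total_text_length = 258732     # characters, over 4411 caption queries
num_queries = 4411             # equals num_relevant_docs (1 relevant doc each)

avg_duration = total_duration / unique_audios       # ~9.8621 s per clip
avg_text_length = total_text_length / num_queries   # ~58.66 chars per caption
num_samples = num_queries + unique_audios           # 4411 + 883 = 5294
```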

This dataset card was automatically generated using MTEB
