Access request for MedConcept-23M
Please acknowledge the provenance and responsible-use notice below before accessing this repository.
By requesting access to this repository, you acknowledge that MedConcept-23M is a research resource derived from publicly available PMC-OA materials and metadata processing. You are responsible for complying with any applicable upstream source terms when reconstructing, redistributing, or using source-derived content. You also agree not to use this resource to identify individuals, and not to use it as a substitute for clinical judgment or patient care.
Dataset Card for MedConcept-23M
Dataset Summary
MedConcept-23M is a large-scale biomedical image-text-concept resource associated with the ConceptCLIP project.
It is designed to support research on:
- biomedical vision-language pretraining
- concept-grounded medical image understanding
- explainable medical AI
- multimodal retrieval and downstream evaluation
In the associated paper, MedConcept-23M is described as a large-scale resource of biomedical image-text-concept triplets used to pretrain ConceptCLIP.
What This Repository Contains
This repository is intended to host the released artifacts associated with MedConcept-23M, such as processed captions, concept annotations, metadata, and reconstruction-related resources, depending on the release version.
Users should inspect the repository file tree carefully to understand exactly which assets are included in the current release.
Data Provenance
MedConcept-23M is derived from biomedical figure-caption data collected from the PubMed Central Open Access Subset (PMC-OA) and further enriched with standardized biomedical concepts.
At a high level:
- biomedical image-text pairs are collected from PMC-OA
- captions are processed to identify candidate biomedical entities
- entities are linked to standardized concepts
- the resulting image-text pairs are enriched into image-text-concept triplets
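The enrichment steps above can be sketched in miniature. This is a hypothetical illustration only: the actual pipeline performs large-scale biomedical entity recognition and linking to standardized concept vocabularies, whereas the tiny lookup table, the substring matcher, and the concept IDs below are stand-ins invented for this sketch.

```python
# Hypothetical sketch of the image-text-concept enrichment steps above.
# The real pipeline uses large-scale biomedical entity recognition and
# concept linking; this lookup table and matcher are illustrative stand-ins.

CONCEPT_TABLE = {
    "chest x-ray": "C1306645",  # illustrative concept ID, not authoritative
    "pneumonia": "C0032285",    # illustrative concept ID, not authoritative
}

def extract_entities(caption: str) -> list[str]:
    """Find candidate biomedical entities by naive substring matching."""
    text = caption.lower()
    return [term for term in CONCEPT_TABLE if term in text]

def enrich(image_id: str, caption: str) -> dict:
    """Turn an image-text pair into an image-text-concept triplet."""
    concepts = [
        {"mention": mention, "concept_id": CONCEPT_TABLE[mention]}
        for mention in extract_entities(caption)
    ]
    return {"image": image_id, "caption": caption, "concepts": concepts}

triplet = enrich("PMC123_fig1.jpg",
                 "Chest X-ray showing right lower lobe pneumonia.")
print(triplet["concepts"])
```

The key point is the shape of the output: each pair becomes a triplet whose `concepts` field carries linked standardized identifiers alongside the raw caption.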
This provenance matters because downstream users may interact with both:
- files released directly in this repository, and
- reconstructed or linked source-derived content obtained from upstream resources
Gated Access Notice
This repository uses gated access with contact sharing and an additional acknowledgment form.
The gate is intended to make users explicitly acknowledge that:
- this is a research resource, not a clinical product
- provenance and upstream-source compliance matter
- users remain responsible for lawful and ethical use
- source-derived content may carry obligations beyond the metadata hosted here
By requesting access, users confirm that they understand these points.
Intended Uses
Direct Use
- Biomedical vision-language pretraining
- Research on medical concept grounding
- Retrieval benchmarking and representation learning
- Data-centric studies of biomedical multimodal learning
- Reproducibility work for the ConceptCLIP project
Downstream Use
- Fine-tuning or continued pretraining of biomedical foundation models
- Studying concept extraction or concept alignment in medical AI
- Building explainable medical image analysis pipelines
- Internal research evaluation under appropriate governance
Out-of-Scope Use
- Re-identification or privacy attacks
- Use as a medical device or as a substitute for clinician judgment
- Clinical decision-making without task-specific validation and oversight
- Redistribution or reuse in ways that violate upstream source terms, local law, or institutional policy
Responsible-Use and Compliance Notice
Using this repository may involve more than simply downloading files.
Depending on your workflow, you may also:
- reconstruct content from upstream sources
- link back to upstream materials
- build derived datasets or training corpora
If you do so, you are responsible for:
- checking applicable upstream terms
- preserving provenance where needed
- following your institution's data-governance requirements
- complying with applicable law and publisher / repository rules
This dataset should not be used for any attempt to identify individuals or for any privacy-invasive purpose.
Recommended Good Practice
- Keep clear records of which files come from this repository versus upstream reconstruction.
- Preserve source provenance and versioning information.
- Document all filtering, cleaning, and reconstruction choices in derived work.
- Validate downstream models before any real-world use.
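One lightweight way to follow the record-keeping practices above is a provenance manifest that stores a content hash and an origin label for each file. The sketch below is a suggestion, not part of the release; the file name, origin labels, and manifest layout are assumptions.

```python
# Hypothetical provenance manifest: record which files came from this
# repository versus upstream reconstruction, with content hashes so that
# derived datasets can be traced back to exact inputs.

import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 to avoid loading it all into memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def add_record(manifest: dict, path: Path, origin: str) -> None:
    """origin is an assumed label, e.g. 'repository' or 'upstream-reconstruction'."""
    manifest[path.name] = {"sha256": sha256_of(path), "origin": origin}

manifest: dict = {}
sample = Path("caption_sample.txt")          # illustrative file for the demo
sample.write_text("Figure 1. Chest X-ray.")
add_record(manifest, sample, "repository")
print(json.dumps(manifest, indent=2))
```

Writing the manifest out alongside any derived corpus makes the "repository versus upstream" distinction auditable later.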
Relationship to Other Project Repositories
- Model weights: JerrryNie/ConceptCLIP
- Held-out retrieval benchmark: JerrryNie/pmc9k
- Code and reconstruction utilities: https://github.com/JerrryNie/ConceptCLIP
Example Loading Pattern
```python
from datasets import load_dataset

# Replace the split / config names with the actual ones present in the repo.
dataset = load_dataset("JerrryNie/MedConcept-23M")
print(dataset)
```
Limitations
- The released repository contents may differ from a fully reconstructed end-to-end corpus.
- Upstream biomedical figures and captions can contain publication bias, modality imbalance, and domain skew.
- Rare diseases or rare concepts may still be underrepresented.
- Benchmark performance achieved with models trained on this resource does not by itself establish clinical validity.
Citation
If you use this resource, please cite:
```bibtex
@article{nie2025conceptclip,
  title={An Explainable Biomedical Foundation Model via Large-Scale Concept-Enhanced Vision-Language Pre-training},
  author={Nie, Yuxiang and He, Sunan and Bie, Yequan and Wang, Yihui and Chen, Zhixuan and Yang, Shu and Cai, Zhiyuan and Wang, Hongmei and Wang, Xi and Luo, Luyang and Wu, Mingxiang and Wu, Xian and Chan, Ronald Cheong Kin and Lau, Yuk Ming and Zheng, Yefeng and Rajpurkar, Pranav and Chen, Hao},
  journal={arXiv preprint arXiv:2501.15579},
  year={2025}
}
```
Contact
For questions about the dataset release, please use the repository discussion page or the main project repository.