# CatPred-DB: Enzyme Kinetic Parameters Database
- Paper: CatPred: A comprehensive framework for deep learning in vitro enzyme kinetic parameters
- GitHub: https://github.com/maranasgroup/CatPred-DB
## Dataset Description
CatPred-DB contains the benchmark datasets introduced alongside the CatPred deep learning framework for predicting in vitro enzyme kinetic parameters. The datasets cover three key kinetic parameters:
| Parameter | Description | Datapoints |
|---|---|---|
| kcat | Turnover number | 23,197 |
| Km | Michaelis constant | 41,174 |
| Ki | Inhibition constant | 11,929 |
These datasets were curated to address the lack of standardized, high-quality benchmarks for enzyme kinetics prediction, with particular attention to coverage of out-of-distribution enzyme sequences.
## Uses
**Direct Use:** Training, evaluating, and benchmarking machine learning models that predict enzyme kinetic parameters from protein sequences or structural features.
**Downstream Use:** Training or benchmarking other machine learning models for enzyme kinetic parameter prediction, or reproducing and extending the experiments described in the CatPred publication.
**Out-of-Scope Use:** This dataset reflects in vitro measurements and may not generalize to in vivo conditions. It should not be used as the sole basis for clinical or industrial enzyme selection without additional experimental validation.
## Dataset Structure
The repository contains:
- `datasets/`: CSV files for kcat, Km, and Ki with train/test splits
- `scripts/`: Preprocessing and utility scripts
### Data Fields
Each entry typically includes:
| Field | Description |
|---|---|
| `sequence` | Enzyme amino acid sequence |
| `sequence_source` | Source of the sequence |
| `uniprot` | UniProt identifier |
| `substrate_smiles` | Substrate chemical structure in SMILES format |
| `value` | Raw measured kinetic parameter value |
| `log10_value` | Log10-transformed kinetic value (use this for modeling) |
| `log10km_mean` | Log10 mean Km value for the enzyme-substrate pair |
| `temperature` | Assay temperature (°C) |
| `ph` | Assay pH |
| `ec` | Enzyme Commission (EC) number |
| `taxonomy_id` | NCBI taxonomy ID of the source organism |
| `group` | Train/val/test split assignment |
| `pdbpath` | Path to associated PDB structure file (if available) |
| `sequence_40cluster` | Sequence cluster ID at 40% identity threshold |
| `sequence_60cluster` | Sequence cluster ID at 60% identity threshold |
| `sequence_80cluster` | Sequence cluster ID at 80% identity threshold |
| `sequence_99cluster` | Sequence cluster ID at 99% identity threshold |
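The `value` and `log10_value` fields are related by a base-10 logarithm, which compresses the wide dynamic range of kinetic measurements. A quick sanity check of the relationship, using a hypothetical measurement rather than a real dataset entry:

```python
import math

# Hypothetical kcat measurement in s^-1 (not an actual dataset entry).
value = 23.5

# The log10_value column stores the base-10 log of the raw measurement.
log10_value = math.log10(value)

# Round-tripping through 10**x recovers the raw value.
recovered = 10 ** log10_value
assert math.isclose(recovered, value)
```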
## Source Data
Data were compiled and curated from public biochemical databases, including BRENDA and SABIO-RK, as described in the CatPred publication. Splits were designed to evaluate generalization to enzyme sequences dissimilar to those seen during training. All SMILES strings were sanitized with RDKit, and entries with invalid SMILES were removed.
## Dataset Splits
Each kinetic parameter (kcat, km, ki) has two split strategies, described below.
### Split strategies
Random splits divide the data without regard to sequence similarity. These are useful for a general baseline but tend to overestimate real-world model performance, since training and test enzymes may be closely related.
Sequence-similarity splits (`seq_test_sequence_XXcluster`) ensure that test-set enzymes share less than XX% sequence identity with any enzyme in the training set. This is the more rigorous benchmark: a model that performs well here is genuinely generalizing to novel enzymes rather than recognizing similar sequences it has effectively seen before.
Five strictness levels are provided:
| Cluster threshold | Test set stringency |
|---|---|
| 20% | Hardest: test enzymes are very dissimilar to training data |
| 40% | Hard |
| 60% | Moderate |
| 80% | Easy |
| 99% | Easiest: nearly any sequence may appear in the test set |
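The cluster-ID columns make it possible to build a similarity-aware split yourself: hold out whole clusters, so related sequences never straddle the train/test boundary. A minimal sketch on toy records (the real CSVs already ship pre-split; in practice the cluster ID would come from a column such as `sequence_40cluster`):

```python
import random

# Toy records: (sequence_id, cluster_id) pairs standing in for dataset rows.
records = [("seq1", "c1"), ("seq2", "c1"), ("seq3", "c2"),
           ("seq4", "c3"), ("seq5", "c3"), ("seq6", "c4")]

rng = random.Random(0)
clusters = sorted({c for _, c in records})
rng.shuffle(clusters)

# Hold out ~25% of clusters (not rows) for testing.
n_test = max(1, len(clusters) // 4)
test_clusters = set(clusters[:n_test])

train = [r for r in records if r[1] not in test_clusters]
test = [r for r in records if r[1] in test_clusters]

# No cluster appears on both sides of the split.
assert not ({c for _, c in train} & {c for _, c in test})
```

Splitting by cluster rather than by row is what makes the benchmark stringent: a model cannot score well simply by memorizing near-duplicate sequences.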
### File Naming
Each split file is named `{parameter}-{strategy}_{subset}.csv`. The subsets are:
| Subset | Contents | When to use |
|---|---|---|
| `train` | Training data only | Model development and hyperparameter tuning |
| `val` | Validation data only | Monitoring training, early stopping |
| `test` | Test data only | Final benchmark evaluation |
| `trainval` | Train + val combined | Retrain the final model after hyperparameters are locked in |
| `trainvaltest` | All data combined | Train a release model once all evaluation is complete |
## Quickstart Usage
### Install the HuggingFace `datasets` package
Each subset can be loaded into Python using the HuggingFace `datasets` library. First, install it from the command line:
pip install datasets
### Load a subset
>>> from datasets import load_dataset
# Options: "kcat", "km", "ki"
>>> ds = load_dataset("kunikohunter/CatPred-DB", "kcat")
>>> train = ds["train"]
>>> val = ds["validation"]
>>> test = ds["test"]
kcat-random_train.csv: 100%|██████████| 28.0M/28.0M [00:04<00:00, 6.29MB/s]
kcat-random_trainval.csv: 100%|██████████| 31.1M/31.1M [00:04<00:00, 6.81MB/s]
kcat-random_trainvaltest.csv: 100%|██████████| 34.5M/34.5M [00:05<00:00, 6.86MB/s]
Generating train split: 100%|██████████| 18789/18789 [00:00<00:00, 67580.51 examples/s]
Generating validation split: 100%|██████████| 20877/20877 [00:00<00:00, 78951.22 examples/s]
Generating test split: 100%|██████████| 23197/23197 [00:00<00:00, 74253.91 examples/s]
### Key columns
| Column | Description |
|---|---|
| `sequence` | Enzyme amino acid sequence |
| `uniprot` | UniProt identifier |
| `reactant_smiles` | Substrate SMILES |
| `value` | Raw kinetic value |
| `log10_value` | Log10-transformed value (use this as your target) |
| `temperature` | Assay temperature (°C), nullable |
| `ph` | Assay pH, nullable |
| `ec` | EC number |
| `sequence_40cluster` | Cluster ID at 40% identity (use for similarity-based splits) |
### Recommended split workflow
1. **train** + **val** → tune architecture and hyperparameters
2. **trainval** + **test** → final benchmark (report results here)
3. **trainvaltest** → train the final released model on all available data
This three-stage approach is standard practice in ML: you only touch the test set once, and the combined files make it easy to retrain on progressively more data as you move from experimentation to deployment.
### Basic training setup
>>> df = ds["train"].to_pandas()
# Drop rows with missing targets or substrates before extracting columns
>>> mask = df["log10_value"].notna() & df["reactant_smiles"].notna()
>>> df = df[mask]
>>> X_seq = df["sequence"]
>>> X_sub = df["reactant_smiles"]
>>> y = df["log10_value"]
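Before training a real model, it helps to know the floor any sequence-aware model should beat. A constant mean predictor on `log10_value` provides that baseline; the sketch below uses toy numbers standing in for the filtered `y` series above, and reports RMSE in log10 units:

```python
import math

# Toy log10 targets standing in for train/test values of df["log10_value"].
y_train = [1.2, -0.5, 0.8, 2.1, 0.0]
y_test = [1.0, 0.3]

# Constant baseline: always predict the training-set mean.
mean_pred = sum(y_train) / len(y_train)

# RMSE of the baseline on held-out data, in log10 units.
rmse = math.sqrt(sum((y - mean_pred) ** 2 for y in y_test) / len(y_test))
print(round(rmse, 3))  # 0.357
```

Any model whose test RMSE is not clearly below this baseline is learning nothing useful from the sequence or substrate features.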
## Citation
If you use this dataset, please cite:
BibTeX:
@article{boorla2025catpred,
  title={CatPred: a comprehensive framework for deep learning in vitro enzyme kinetic parameters},
  author={Boorla, Veda Sheersh and Maranas, Costas D.},
  journal={Nature Communications},
  volume={16},
  pages={2072},
  year={2025},
  doi={10.1038/s41467-025-57215-9}
}
APA: Boorla, V. S., & Maranas, C. D. (2025). CatPred: a comprehensive framework for deep learning in vitro enzyme kinetic parameters. Nature Communications, 16, 2072. https://doi.org/10.1038/s41467-025-57215-9
## License
MIT - see LICENSE
## Dataset Card Authors
Jessica Lin, Kuniko Hunter, Manasa Yadavalli, McGuire Metts