# Clustered PPI datasets (BIOGRID + STRING) with sequence-disjoint splits
This dataset repo contains multiple dataset variants of protein–protein interactions (PPIs), built by clustering proteins by sequence similarity and then constructing train/valid/test splits that are intended to be disjoint at the protein level (and thus hard to memorize via near-identical sequences).
Artifacts are stored as compressed pickles (*.pkl.gz). A helper downloader exists in this repo:
data_processing/download_ppi_data.py::download_clustered_ppi_data
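If you prefer not to go through the helper, an artifact can also be fetched and loaded directly. A minimal sketch, assuming the Hugging Face repo id used in the examples further down; the filename shown is a placeholder, not a verified path — check the repo file listing for the actual per-variant artifact names:

```python
import pandas as pd
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="Synthyra/clustered_ppi_string",     # repo id taken from the download examples below
    filename="string_human_st040/train.pkl.gz",  # hypothetical path; check the repo file listing
    repo_type="dataset",
)

# pandas infers gzip compression from the .pkl.gz extension
train_df = pd.read_pickle(path)
print(train_df.shape)
```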
## What’s in each split dataframe?
Each split is a pandas.DataFrame with (at minimum):
- IdA / IdB: protein identifiers
- OrgA / OrgB: organism identifiers (STRING taxon id for STRING datasets; BIOGRID org id for BIOGRID datasets)
- labels: `>0` indicates a positive interaction, `0` indicates a sampled negative
Some variants also include additional columns (e.g. cluster_a, cluster_b, confidences, org_a, org_b).
When negatives are concatenated, some of these columns may be NaN for negative rows.
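For orientation, here is a minimal inspection sketch using only the columns listed above; everything else (the function name, the printed summary) is illustrative:

```python
import pandas as pd

def summarize_split(df: pd.DataFrame) -> None:
    """Print column layout and label balance for one split dataframe."""
    print(df.columns.tolist())

    positives = df[df["labels"] > 0]   # >0 marks a positive interaction
    negatives = df[df["labels"] == 0]  # 0 marks a sampled negative
    print(f"rows={len(df)} pos={len(positives)} neg={len(negatives)} "
          f"pos_rate={len(positives) / max(len(df), 1):.3f}")

    # distinct proteins appearing on either side of a pair
    print("unique proteins:", pd.concat([df["IdA"], df["IdB"]]).nunique())
```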
## Dataset variants (index)
A machine-readable index is available at:
tables/dataset_index.csv
| variant | source | threshold | train rows | valid rows | test rows | train pos rate | protein overlap (max) |
|---|---|---|---|---|---|---|---|
| string_human_st040 | string_human | st040 | 12000396 | 10554 | 20110 | 0.500 | 0 |
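A sketch for pulling the index into pandas, assuming the repo id used in the download examples below:

```python
import pandas as pd
from huggingface_hub import hf_hub_download

index_path = hf_hub_download(
    repo_id="Synthyra/clustered_ppi_string",  # repo id assumed from the download examples below
    filename="tables/dataset_index.csv",
    repo_type="dataset",
)
print(pd.read_csv(index_path))
```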
## Per-variant deep dive (plots + stats)
Each variant has:
- `plots/<variant>/...png` (rendered below)
- `tables/<variant>/summary.csv` and `tables/<variant>/schema.csv`
### string_human_st040
Open report
#### Summary tables
- `tables/string_human_st040/summary.csv`
- `tables/string_human_st040/schema.csv`
#### Label balance
- train: `plots/string_human_st040/train_label_counts.png`
- valid: `plots/string_human_st040/valid_label_counts.png`
- test: `plots/string_human_st040/test_label_counts.png`
#### Organism distributions (positives vs negatives)

![plots/string_human_st040/train_organism_distribution.png](plots/string_human_st040/train_organism_distribution.png)

- train data: `plots/string_human_st040/train_organism_distribution.csv`, stats: `plots/string_human_st040/train_organism_distribution_stats.csv`
- valid data: `plots/string_human_st040/valid_organism_distribution.csv`, stats: `plots/string_human_st040/valid_organism_distribution_stats.csv`
- test data: `plots/string_human_st040/test_organism_distribution.csv`, stats: `plots/string_human_st040/test_organism_distribution_stats.csv`
#### Cross-split organism shift tests
- positives: `plots/string_human_st040/cross_split_pos_stats.csv`
- negatives: `plots/string_human_st040/cross_split_neg_stats.csv`
#### Sequence length distributions (unique proteins)

![plots/string_human_st040/train_seq_lengths.png](plots/string_human_st040/train_seq_lengths.png)

- train stats: `plots/string_human_st040/train_seq_length_stats.csv`
- valid stats: `plots/string_human_st040/valid_seq_length_stats.csv`
- test stats: `plots/string_human_st040/test_seq_length_stats.csv`
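These statistics can also be reproduced approximately from the `seq_dict` returned by the downloader, restricted to the unique proteins that actually occur in a given split. This assumes `seq_dict` maps protein id to amino-acid sequence, which is not spelled out in this card:

```python
import pandas as pd

def seq_length_stats(df: pd.DataFrame, seq_dict: dict) -> pd.Series:
    """Length distribution over the unique proteins appearing in one split."""
    proteins = pd.concat([df["IdA"], df["IdB"]]).dropna().unique()
    lengths = pd.Series(
        [len(seq_dict[p]) for p in proteins if p in seq_dict],
        name="seq_length",
    )
    return lengths.describe()  # count, mean, std, min, quartiles, max
```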
#### Top organism pairs
- train positives: `plots/string_human_st040/train_top_org_pairs_pos.png`
- train negatives: `plots/string_human_st040/train_top_org_pairs_neg.png`
- valid positives: `plots/string_human_st040/valid_top_org_pairs_pos.png`
- valid negatives: `plots/string_human_st040/valid_top_org_pairs_neg.png`
- test positives: `plots/string_human_st040/test_top_org_pairs_pos.png`
- test negatives: `plots/string_human_st040/test_top_org_pairs_neg.png`
## How to download and load
Use the helper in this codebase:
```python
from data_processing.download_ppi_data import download_clustered_ppi_data

# BIOGRID example
train_df, valid_df, test_df, interaction_set, seq_dict = download_clustered_ppi_data(
    data_type='biogrid',
    cluster_percentage=0.5,
    hf_repo='Synthyra/clustered_ppi_string',
)

# STRING example (descriptor must match the variant prefix: e.g. 'human' or 'model_orgs')
train_df, valid_df, test_df, interaction_set, seq_dict = download_clustered_ppi_data(
    data_type='string',
    descriptor='human',
    cluster_percentage=0.5,
    hf_repo='Synthyra/clustered_ppi_string',
)
```
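A hedged sketch of how the returned objects might fit together when assembling model inputs. The semantics of `seq_dict` (protein id to sequence) are an assumption and should be checked against `download_ppi_data.py`:

```python
def make_examples(df, seq_dict):
    """Build (sequence_a, sequence_b, binary_label) triples from one split."""
    examples = []
    for id_a, id_b, label in zip(df["IdA"], df["IdB"], df["labels"]):
        seq_a = seq_dict.get(id_a)  # assumes seq_dict: protein id -> sequence
        seq_b = seq_dict.get(id_b)
        if seq_a is None or seq_b is None:
            continue  # skip pairs whose sequences are missing from seq_dict
        examples.append((seq_a, seq_b, int(label > 0)))
    return examples

train_examples = make_examples(train_df, seq_dict)
print(len(train_examples))
```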