Annotated SlimPajama Dataset
Dataset Description
This is the first fully annotated release of SlimPajama, providing comprehensive quality metrics for data-centric large language model research. It covers approximately 580 billion tokens from the training set of the original SlimPajama dataset, annotated across 25 quality dimensions.
Note: This dataset contains only the training set portion of the original SlimPajama dataset, which is why the token count is approximately 580B rather than the full 627B tokens.
Dataset Statistics
- Total samples: ~580B tokens from SlimPajama training set
- Quality metrics: 25 dimensions across 3 categories
- Domains: 7 domains (CommonCrawl, C4, GitHub, Books, ArXiv, Wikipedia, StackExchange)
- Annotation coverage: 100% of the training set
Quality Metrics
The dataset includes 25 quality scores across three main categories:
1. Natural Language Quality Signals (11 metrics)
Rule-based measures from RedPajama indicating text naturalness:
- rps_doc_frac_no_alph_words: Fraction of words with no alphabetical characters
- rps_doc_mean_word_length: Mean word length after normalization
- rps_doc_frac_unique_words: Fraction of unique words (degeneracy measure)
- rps_doc_unigram_entropy: Entropy of unigram distribution
- rps_doc_word_count: Number of words after normalization
- rps_lines_ending_with_terminal_punctution_mark: Lines ending with terminal punctuation
- rps_lines_numerical_chars_fraction: Ratio of numerical to total characters
- rps_lines_uppercase_letter_fraction: Ratio of uppercase to total characters
- rps_doc_num_sentences: Number of sentences in content
- rps_doc_frac_chars_top_2gram: Fraction of characters in top word 2-gram
- rps_doc_frac_chars_top_3gram: Fraction of characters in top word 3-gram
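For intuition, the first and third signals can be approximated in a few lines of Python. The exact tokenization and normalization used by RedPajama differ, so treat this as a sketch; note that the stored signals use a 0-100 percentage scale:

```python
import re

def frac_no_alph_words(text):
    # Percentage of whitespace-separated words with no alphabetic character
    # (stored signals use a 0-100 scale).
    words = text.split()
    if not words:
        return 0.0
    no_alph = sum(1 for w in words if not re.search(r"[A-Za-z]", w))
    return 100.0 * no_alph / len(words)

def frac_unique_words(text):
    # Percentage of unique (lowercased) words, a simple degeneracy measure.
    words = [w.lower() for w in text.split()]
    if not words:
        return 0.0
    return 100.0 * len(set(words)) / len(words)

sample = "The cat sat on the mat. 42 42 42"
print(round(frac_no_alph_words(sample), 1))  # 33.3 (3 of 9 words have no letters)
print(round(frac_unique_words(sample), 1))   # 66.7
```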
2. Data Importance Scores (3 metrics)
DSIR-based importance weights measuring similarity to high-quality domains:
- dsir_books: Importance score relative to Books domain
- dsir_wiki: Importance score relative to Wikipedia domain
- dsir_math: Importance score relative to AutoMathText domain
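Conceptually, each DSIR score is a log-likelihood ratio between a hashed n-gram model of the target domain and one of the raw data. The sketch below illustrates the idea on toy corpora; the actual annotations were produced with the real DSIR pipeline, whose hashing, n-gram orders, and smoothing differ:

```python
import hashlib
import math
from collections import Counter

BUCKETS = 4096

def hashed_ngram_counts(text):
    # Bag of hashed unigrams and bigrams, roughly DSIR's feature space.
    words = text.lower().split()
    grams = words + [" ".join(p) for p in zip(words, words[1:])]
    return Counter(int(hashlib.md5(g.encode()).hexdigest(), 16) % BUCKETS
                   for g in grams)

def smoothed_bucket_probs(texts):
    # Add-one-smoothed bucket probabilities over a (toy) corpus.
    total = Counter()
    for t in texts:
        total.update(hashed_ngram_counts(t))
    denom = sum(total.values()) + BUCKETS
    return [(total[b] + 1) / denom for b in range(BUCKETS)]

def importance_weight(text, target_probs, raw_probs):
    # DSIR-style weight: log p_target(x) - log p_raw(x) under the two models.
    return sum(c * (math.log(target_probs[b]) - math.log(raw_probs[b]))
               for b, c in hashed_ngram_counts(text).items())

target = smoothed_bucket_probs(["the theory of riemannian manifolds",
                                "a novel about the sea and its storms"])
raw = smoothed_bucket_probs(["click here for cheap deals now",
                             "buy one get one free today only"])
w_booky = importance_weight("the theory of the sea", target, raw)
w_webby = importance_weight("cheap deals today only", target, raw)
print(w_booky > w_webby)  # book-like text scores higher against the book-style target
```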
3. Model-based Quality Ratings (11 metrics)
Existing Metrics:
- fineweb_edu: Educational value (from FineWeb-Edu) - single value in list format
- ad_en: Advertisement detection (from WanjuanCC) - logits for binary classification [label_0, label_1]
- fluency_en: Fluency assessment (from WanjuanCC) - logits for binary classification [label_0, label_1]
- qurater: QuRating scores as a list [Writing Style, Required Expertise, Facts and Trivia, Educational Value]
PRRC Framework (Our Contribution):
- modernbert_professionalism: Professionalism logits for 6 levels (0-5 scale) - use argmax() to get rating
- modernbert_readability: Readability logits for 6 levels (0-5 scale) - use argmax() to get rating
- modernbert_reasoning: Reasoning logits for 6 levels (0-5 scale) - use argmax() to get rating
- modernbert_cleanliness: Cleanliness logits for 6 levels (0-5 scale) - use argmax() to get rating
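Converting the 6-way logits into a rating is a single argmax; a softmax-weighted expectation is a possible softer alternative (the expected_rating helper is our illustration, not part of the dataset tooling):

```python
import numpy as np

def logits_to_rating(logits):
    # Hard 0-5 rating: index of the largest logit, as recommended above.
    return int(np.argmax(logits))

def expected_rating(logits):
    # Softer alternative: softmax-weighted mean of the levels 0-5.
    z = np.exp(np.asarray(logits, dtype=float) - np.max(logits))
    p = z / z.sum()
    return float(np.dot(p, np.arange(len(p))))

logits = [-4.97, -3.71, -1.29, 0.78, 3.56, 2.96]  # example 6-way logits
print(logits_to_rating(logits))  # 4
```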
PRRC Framework Details
Our PRRC framework introduces four novel dimensions for comprehensive data quality assessment:
- Professionalism: Measures the degree of expertise and prerequisite knowledge required to comprehend the text
- Readability: Evaluates text clarity, coherence, and ease of understanding
- Reasoning: Assesses the complexity of logical reasoning and analytical thinking required
- Cleanliness: Evaluates text formatting, completeness, and absence of noise/irrelevant content
Each PRRC dimension is rated on an additive 0-5 scale, with the fine-tuned raters achieving F1 scores of 87-92% on held-out test sets.
Dataset Structure
The dataset structure for each example:
{
"id": "unique_document_id",
"content": "Main text content of the document",
"sub_path": "domain_name", # e.g., "arxiv", "github", "wikipedia", etc.
# Natural Language Quality Signals (RedPajama-style metrics)
"rps_doc_frac_no_alph_words": float,
"rps_doc_mean_word_length": float,
"rps_doc_frac_unique_words": float,
"rps_doc_unigram_entropy": float,
"rps_doc_word_count": int,
"rps_lines_ending_with_terminal_punctution_mark": float,
"rps_lines_numerical_chars_fraction": float,
"rps_lines_uppercase_letter_fraction": float,
"rps_doc_num_sentences": int,
"rps_doc_frac_chars_top_2gram": float,
"rps_doc_frac_chars_top_3gram": float,
# Data Importance Scores (DSIR)
"dsir_books": float,
"dsir_wiki": float,
"dsir_math": float,
# Model-based Quality Ratings
"fineweb_edu": [float], # Single value in list
"ad_en": [float, float], # [has_ad_logit, no_ad_logit] - use argmax() to get 0-1 rating
"fluency_en": [float, float], # [not_fluent_logit, fluent_logit] - use argmax() to get 0-1 rating
"qurater": [float, float, float, float], # [Writing Style, Required Expertise, Facts and Trivia, Educational Value]
# PRRC Framework (Our Contribution) - all contain 6 logits for levels 0-5
"modernbert_professionalism": [float, float, float, float, float, float], # Use argmax() to get 0-5 rating
"modernbert_readability": [float, float, float, float, float, float], # Use argmax() to get 0-5 rating
"modernbert_reasoning": [float, float, float, float, float, float], # Use argmax() to get 0-5 rating
"modernbert_cleanliness": [float, float, float, float, float, float] # Use argmax() to get 0-5 rating
}
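The list-valued fields can be sanity-checked against the schema above; the validate_example helper and the toy record below are illustrative, not part of the dataset tooling:

```python
def validate_example(ex):
    # Check the list-valued fields against the documented schema.
    expected_lengths = {
        "fineweb_edu": 1,
        "ad_en": 2,
        "fluency_en": 2,
        "qurater": 4,
        "modernbert_professionalism": 6,
        "modernbert_readability": 6,
        "modernbert_reasoning": 6,
        "modernbert_cleanliness": 6,
    }
    problems = []
    for field, n in expected_lengths.items():
        if len(ex.get(field, [])) != n:
            problems.append(f"{field}: expected {n} values")
    return problems

toy = {
    "fineweb_edu": [1.33], "ad_en": [-0.4, 0.45], "fluency_en": [-2.7, 3.7],
    "qurater": [-0.8, 5.7, 0.16, 1.4],
    "modernbert_professionalism": [0.0] * 6, "modernbert_readability": [0.0] * 6,
    "modernbert_reasoning": [0.0] * 6, "modernbert_cleanliness": [0.0] * 6,
}
print(validate_example(toy))  # [] means the example matches the schema
```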
Usage
Loading the Dataset
from datasets import load_dataset
# Load the full dataset
dataset = load_dataset("opendatalab/SlimPajama-627B-Annotated")
# Load a specific split if available
train_dataset = load_dataset("opendatalab/SlimPajama-627B-Annotated", split="train")
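Because the training split holds roughly 580B tokens, materializing it in memory is impractical; a streaming pass with an on-the-fly filter is the usual pattern. The sketch below uses stand-in records so it runs offline; with the datasets library you would pass streaming=True to load_dataset, as noted in the comment:

```python
from itertools import islice

def quality_filter(stream, min_fineweb=2.5):
    # Keep only examples whose FineWeb-Edu score clears a threshold.
    # In practice `stream` would come from:
    #   load_dataset("opendatalab/SlimPajama-627B-Annotated",
    #                split="train", streaming=True)
    for ex in stream:
        if ex["fineweb_edu"][0] >= min_fineweb:
            yield ex

# Stand-in records so the sketch runs without network access:
fake_stream = iter([
    {"id": "a", "fineweb_edu": [3.1]},
    {"id": "b", "fineweb_edu": [1.2]},
    {"id": "c", "fineweb_edu": [2.6]},
])
kept = [ex["id"] for ex in islice(quality_filter(fake_stream), 10)]
print(kept)  # ['a', 'c']
```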
Data Processing and Selection Example
import pandas as pd
import numpy as np
from datasets import load_dataset
# Load dataset (in practice, work on a shard or a streamed sample; the full
# training split is far too large to hold in a single DataFrame)
dataset = load_dataset("opendatalab/SlimPajama-627B-Annotated", split="train")
# Convert to pandas for easier manipulation
df = dataset.to_pandas()
# Process PRRC scores (convert logits to ratings using argmax)
df['professionalism_score'] = df['modernbert_professionalism'].apply(lambda x: np.argmax(x))
df['readability_score'] = df['modernbert_readability'].apply(lambda x: np.argmax(x))
df['reasoning_score'] = df['modernbert_reasoning'].apply(lambda x: np.argmax(x))
df['cleanliness_score'] = df['modernbert_cleanliness'].apply(lambda x: np.argmax(x))
# Process binary classification scores
df['advertisement_score'] = df['ad_en'].apply(lambda x: np.argmax(x)) # 0 = has ad, 1 = no ad
df['fluency_score'] = df['fluency_en'].apply(lambda x: np.argmax(x)) # 0 = not fluent, 1 = fluent
# Extract QuRating scores
df['writing_style'] = df['qurater'].apply(lambda x: x[0])
df['required_expertise'] = df['qurater'].apply(lambda x: x[1])
df['facts_trivia'] = df['qurater'].apply(lambda x: x[2])
df['educational_value'] = df['qurater'].apply(lambda x: x[3])
# Extract FineWeb-Edu score
df['fineweb_educational'] = df['fineweb_edu'].apply(lambda x: x[0])
# Example: Multi-dimensional quality score combination (Meta-rater approach)
# Using the learned weights from the Meta-rater paper
weights = {
'educational_value': 0.0564, # From qurater[3]
'rps_doc_frac_no_alph_words': 0.0493,
'fineweb_educational': 0.0493,
'rps_lines_uppercase_letter_fraction': 0.0488,
'facts_trivia': 0.0477, # From qurater[2]
'rps_doc_frac_chars_top_3gram': 0.0473,
'rps_lines_ending_with_terminal_punctution_mark': 0.0473,
'rps_doc_frac_chars_top_2gram': 0.0471,
'dsir_wiki': 0.0469,
'rps_lines_numerical_chars_fraction': 0.0460,
'rps_doc_num_sentences': 0.0458,
'dsir_math': 0.0448,
'reasoning_score': 0.0444,
'rps_doc_frac_unique_words': 0.0432,
'rps_doc_word_count': 0.0423,
'rps_doc_unigram_entropy': 0.0422,
'dsir_books': 0.0414,
'professionalism_score': 0.0405,
'fluency_score': 0.0402,
'readability_score': 0.0393,
'required_expertise': 0.0373, # From qurater[1]
'advertisement_score': 0.0368,
'cleanliness_score': 0.0117,
'rps_doc_mean_word_length': 0.0065,
'writing_style': 0.0005, # From qurater[0]
}
# Calculate weighted quality score
# Note: the raw metrics live on very different scales (DSIR scores are large
# negative numbers, several signals are 0-100 percentages), so min-max
# normalize each metric before applying the weights
quality_score = np.zeros(len(df))
for metric, weight in weights.items():
    if metric in df.columns:
        values = df[metric].to_numpy(dtype=float)
        value_range = values.max() - values.min()
        normalized = (values - values.min()) / value_range if value_range > 0 else np.zeros_like(values)
        quality_score += normalized * weight
# Select top-k samples based on quality score
top_k = 10000
top_k_indices = np.argsort(quality_score)[-top_k:]
selected_data = df.iloc[top_k_indices]
print(f"Selected top {top_k} samples using Meta-rater weights")
Applications
This annotated dataset enables:
- Data-Centric LLM Research: Study the impact of different quality dimensions on model performance
- Multi-dimensional Data Selection: Implement sophisticated data selection strategies beyond single-metric approaches
- Quality Score Analysis: Analyze correlations and relationships between different quality metrics
- Benchmark Development: Create standardized benchmarks for data quality assessment
- Efficient Pre-training: Select high-quality subsets for more efficient model training
- Domain-specific Analysis: Compare quality distributions across different domains (ArXiv, GitHub, Wikipedia, etc.)
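The last application above can be sketched with a per-domain aggregation over sub_path; the helper and toy records are illustrative:

```python
from collections import defaultdict
from statistics import mean

def per_domain_means(examples, field="rps_doc_unigram_entropy"):
    # Group annotated examples by domain (sub_path) and average one metric.
    buckets = defaultdict(list)
    for ex in examples:
        buckets[ex["sub_path"]].append(ex[field])
    return {domain: mean(vals) for domain, vals in buckets.items()}

toy_examples = [
    {"sub_path": "arxiv", "rps_doc_unigram_entropy": 6.4},
    {"sub_path": "arxiv", "rps_doc_unigram_entropy": 5.8},
    {"sub_path": "github", "rps_doc_unigram_entropy": 4.1},
]
print(per_domain_means(toy_examples))
```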
Annotation Process
The quality scores were generated using:
- Rule-based metrics: Extracted using established heuristics from RedPajama and DSIR
- Existing model-based ratings: Applied pre-trained classifiers from FineWeb-Edu, WanjuanCC, and QuRating
- PRRC ratings: Generated using Llama-3.3-70B-Instruct for annotation, followed by fine-tuned ModernBERT models for efficient scoring
Citation
If you use Meta-rater in your research, please cite our paper:
@article{zhuang2025meta,
title={Meta-rater: A Multi-dimensional Data Selection Method for Pre-training Language Models},
author={Zhuang, Xinlin and Peng, Jiahui and Ma, Ren and Wang, Yinfan and Bai, Tianyi and Wei, Xingjian and Qiu, Jiantao and Zhang, Chi and Qian, Ying and He, Conghui},
journal={arXiv preprint arXiv:2504.14194},
year={2025}
}
License
This dataset is released under the same license as the original SlimPajama dataset. Please refer to the original SlimPajama repository for licensing details.
Acknowledgments
This work builds upon:
- SlimPajama: The original dataset from Cerebras
- RedPajama: Natural language quality signals
- DSIR: Data importance scoring methodology
- FineWeb-Edu: Educational value assessment
- WanjuanCC: Advertisement and fluency detection
- QuRating: Multi-dimensional quality rating framework
Contact
- Project Lead: Ren Ma (maren@pjlab.org.cn)
- Corresponding Author: Conghui He (heconghui@pjlab.org.cn)
- Issues: Please use GitHub Issues for questions.
Star us on GitHub and Hugging Face if you find Meta-rater useful!
Made with ❤️ by the OpenDataLab team