Dataset schema (one record per GitHub issue or pull request from `huggingface/datasets`); the records below list these fields in the same order, separated by `|`:

| column | dtype | range / values |
|---|---|---|
| url | large_string | lengths 58–61 |
| repository_url | large_string | 1 class |
| labels_url | large_string | lengths 72–75 |
| comments_url | large_string | lengths 67–70 |
| events_url | large_string | lengths 65–68 |
| html_url | large_string | lengths 46–51 |
| id | int64 | 599M–3.95B |
| node_id | large_string | lengths 18–32 |
| number | int64 | 1–8.01k |
| title | large_string | lengths 1–290 |
| user | dict | |
| labels | list | lengths 0–4 |
| state | large_string | 2 classes |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | list | lengths 0–4 |
| milestone | dict | |
| comments | list | lengths 0–30 |
| created_at | timestamp[us, tz=UTC] | 2020-04-14 10:18:02 – 2026-02-16 17:42:30 |
| updated_at | timestamp[us, tz=UTC] | 2020-04-27 16:04:17 – 2026-02-17 06:56:40 |
| closed_at | timestamp[us, tz=UTC] | 2020-04-14 12:01:40 – 2026-02-16 23:43:36, nullable |
| author_association | large_string | 4 classes |
| type | float64 | |
| active_lock_reason | float64 | |
| draft | float64 | 0–1, nullable |
| pull_request | dict | |
| body | large_string | lengths 0–228k, nullable |
| closed_by | dict | |
| reactions | dict | |
| timeline_url | large_string | lengths 67–70 |
| performed_via_github_app | float64 | |
| state_reason | large_string | 4 classes |
| sub_issues_summary | dict | |
| issue_dependencies_summary | dict | |
| pinned_comment | float64 | |
| is_pull_request | bool | 2 classes |
https://api.github.com/repos/huggingface/datasets/issues/8008
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/8008/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/8008/comments
|
https://api.github.com/repos/huggingface/datasets/issues/8008/events
|
https://github.com/huggingface/datasets/pull/8008
| 3,948,857,401
|
PR_kwDODunzps7EJ_oM
| 8,008
|
fix: prevent duplicate keywords in load_dataset_builder (#4910)
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/190333801?v=4",
"events_url": "https://api.github.com/users/DhyeyTeraiya/events{/privacy}",
"followers_url": "https://api.github.com/users/DhyeyTeraiya/followers",
"following_url": "https://api.github.com/users/DhyeyTeraiya/following{/other_user}",
"gists_url": "https://api.github.com/users/DhyeyTeraiya/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/DhyeyTeraiya",
"id": 190333801,
"login": "DhyeyTeraiya",
"node_id": "U_kgDOC1hDaQ",
"organizations_url": "https://api.github.com/users/DhyeyTeraiya/orgs",
"received_events_url": "https://api.github.com/users/DhyeyTeraiya/received_events",
"repos_url": "https://api.github.com/users/DhyeyTeraiya/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/DhyeyTeraiya/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DhyeyTeraiya/subscriptions",
"type": "User",
"url": "https://api.github.com/users/DhyeyTeraiya",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2026-02-16T17:42:30
| 2026-02-16T17:42:30
| null |
NONE
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/8008.diff",
"html_url": "https://github.com/huggingface/datasets/pull/8008",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/8008.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/8008"
}
| null | null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/8008/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/8008/timeline
| null | null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/8007
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/8007/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/8007/comments
|
https://api.github.com/repos/huggingface/datasets/issues/8007/events
|
https://github.com/huggingface/datasets/issues/8007
| 3,946,695,329
|
I_kwDODunzps7rPcqh
| 8,007
|
Add option for loading audio with video
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/36135455?v=4",
"events_url": "https://api.github.com/users/Samoed/events{/privacy}",
"followers_url": "https://api.github.com/users/Samoed/followers",
"following_url": "https://api.github.com/users/Samoed/following{/other_user}",
"gists_url": "https://api.github.com/users/Samoed/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Samoed",
"id": 36135455,
"login": "Samoed",
"node_id": "MDQ6VXNlcjM2MTM1NDU1",
"organizations_url": "https://api.github.com/users/Samoed/orgs",
"received_events_url": "https://api.github.com/users/Samoed/received_events",
"repos_url": "https://api.github.com/users/Samoed/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Samoed/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Samoed/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Samoed",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"Hi,\nI’m interested in working on this issue.\n\nFrom what I understand, audio decoding would involve an AudioDecoder, and previous discussions have suggested handling it separately. I’d like to explore whether integrating audio extraction in relation to the current video decoding workflow would be appropriate, or if maintaining a separate decoder is the preferred design.\n\nAny guidance would be greatly appreciated.\n\nThanks!"
] | 2026-02-16T09:11:02
| 2026-02-16T15:56:21
| null |
NONE
| null | null | null | null |
### Describe the bug
Currently, `torchcodec` doesn't allow extracting `Audio` from `Video` (https://github.com/meta-pytorch/torchcodec/issues/1158), so when I upload videos with audio to the Hub using `videofolder`, there is no way to retrieve the audio from them. `VideoDecoder` could probably be extended with an `audio` parameter to expose this information.
### Steps to reproduce the bug
```python
from datasets import load_dataset
test_ds = load_dataset("videofolder", data_dir="/path/to/video")
# uploaded version
# test_ds = load_dataset("Samoed/testds")
test_ds["train"][1]["video"]
# <torchcodec.decoders._video_decoder.VideoDecoder
```
### Expected behavior
Some option to retrieve audio information.
### Environment info
```
datasets==4.5.0
```
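In the meantime, a rough workaround sketch (assuming the `Video` feature accepts `decode=False` the way `Image`/`Audio` do, and using a placeholder path): keep the files undecoded and extract the audio track with an external tool.
```python
# Hedged workaround sketch: keep videos undecoded so the audio track can be
# extracted with an external tool (ffmpeg, torchaudio, ...). Assumes the Video
# feature accepts decode=False; the data_dir path is a placeholder.
from datasets import load_dataset, Video

ds = load_dataset("videofolder", data_dir="/path/to/video", split="train")
ds = ds.cast_column("video", Video(decode=False))

sample = ds[1]["video"]   # dict with "path" and "bytes" instead of a VideoDecoder
print(sample["path"])     # feed this file to your audio extraction tool of choice
```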
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/8007/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/8007/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| null | false
|
https://api.github.com/repos/huggingface/datasets/issues/8006
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/8006/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/8006/comments
|
https://api.github.com/repos/huggingface/datasets/issues/8006/events
|
https://github.com/huggingface/datasets/issues/8006
| 3,944,394,074
|
I_kwDODunzps7rGq1a
| 8,006
|
Regression: from_generator crashes if one generator call returns no results
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/53510?v=4",
"events_url": "https://api.github.com/users/hartmans/events{/privacy}",
"followers_url": "https://api.github.com/users/hartmans/followers",
"following_url": "https://api.github.com/users/hartmans/following{/other_user}",
"gists_url": "https://api.github.com/users/hartmans/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hartmans",
"id": 53510,
"login": "hartmans",
"node_id": "MDQ6VXNlcjUzNTEw",
"organizations_url": "https://api.github.com/users/hartmans/orgs",
"received_events_url": "https://api.github.com/users/hartmans/received_events",
"repos_url": "https://api.github.com/users/hartmans/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hartmans/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hartmans/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hartmans",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2026-02-15T16:16:48
| 2026-02-15T16:16:48
| null |
CONTRIBUTOR
| null | null | null | null |
### Describe the bug
Dataset.from_generator splits any list kwarg to the generator and makes a separate call to the generator for each member of the list, even in the num_proc=1 case.
I have a generator that processes a number of files, filtering them and producing examples. It used to work fine if one of the files produced no examples.
It doesn't work any more.
I believe that commit 2ed6f72d88c0f37b75751cd0cd41a485439e16c9 is responsible.
What ends up happening is that the code tries to update the input shard lengths, but the list of shard lengths is still empty.
### Steps to reproduce the bug
```python
import datasets
def gen_examples(num_examples, fingerprint):
assert len(num_examples) == 1
for x in range(num_examples[0]):
yield dict(feature=x)
ds = datasets.Dataset.from_generator(gen_examples, gen_kwargs=dict(num_examples=[0,1],
fingerprint=str(id([]))))
```
Traceback looks like
```
Generating train split: 0 examples [00:00, ? examples/s]
Generating train split: 0 examples [00:00, ? examples/s]
Traceback (most recent call last):
File "/usr/local/lib/python3.13/dist-packages/datasets/builder.py", line 1598, in _prepare_split_single
original_shard_lengths[original_shard_id] += 1
~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^
IndexError: list index out of range
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/srv/models/zendegi_ai/sexpositive-sft/ai-tools/training_tools/foo.py", line 10, in <module>
ds = datasets.Dataset.from_generator(gen_examples, gen_kwargs=dict(num_examples=[0,1],
fingerprint=str(id([]))))
File "/usr/local/lib/python3.13/dist-packages/datasets/arrow_dataset.py", line 1204, in from_generator
).read()
~~~~^^
File "/usr/local/lib/python3.13/dist-packages/datasets/io/generator.py", line 52, in read
self.builder.download_and_prepare(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
download_config=download_config,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...<3 lines>...
num_proc=self.num_proc,
^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/usr/local/lib/python3.13/dist-packages/datasets/builder.py", line 884, in download_and_prepare
self._download_and_prepare(
~~~~~~~~~~~~~~~~~~~~~~~~~~^
dl_manager=dl_manager,
^^^^^^^^^^^^^^^^^^^^^^
...<2 lines>...
**download_and_prepare_kwargs,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/usr/local/lib/python3.13/dist-packages/datasets/builder.py", line 1634, in _download_and_prepare
super()._download_and_prepare(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
dl_manager,
^^^^^^^^^^^
verification_mode,
^^^^^^^^^^^^^^^^^^
**prepare_splits_kwargs,
^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/usr/local/lib/python3.13/dist-packages/datasets/builder.py", line 947, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.13/dist-packages/datasets/builder.py", line 1438, in _prepare_split
for job_id, done, content in self._prepare_split_single(
~~~~~~~~~~~~~~~~~~~~~~~~~~^
gen_kwargs=gen_kwargs, job_id=job_id, **_prepare_split_args
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
):
^
File "/usr/local/lib/python3.13/dist-packages/datasets/builder.py", line 1617, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
```
### Expected behavior
I expect a dataset with one example.
In my case it's fairly easy to work around this. Ideally this behavior would be supported.
If it's not going to be supported, catching the situation and returning a "generators must always return at least one example" error would have saved me hours of debugging.
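A hedged sketch of one possible workaround (not a fix in `datasets` itself): filter out inputs that are known to produce no examples before calling `from_generator`, so every generated shard yields at least one row.
```python
# Hedged workaround sketch: drop inputs that would yield no examples so that
# every generator shard produces at least one row.
import datasets

def gen_examples(num_examples, fingerprint):
    assert len(num_examples) == 1
    for x in range(num_examples[0]):
        yield dict(feature=x)

counts = [0, 1]
non_empty = [n for n in counts if n > 0]  # [1]: the empty "shard" is removed up front

ds = datasets.Dataset.from_generator(
    gen_examples,
    gen_kwargs=dict(num_examples=non_empty, fingerprint=str(id([]))),
)
print(len(ds))  # 1
```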
### Environment info
- `datasets` version: 4.5.0
- Platform: Linux-6.18.5+deb14-amd64-x86_64-with-glibc2.41
- Python version: 3.13.5
- `huggingface_hub` version: 1.4.1
- PyArrow version: 23.0.0
- Pandas version: 3.0.0
- `fsspec` version: 2025.10.0
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/8006/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/8006/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| null | false
|
https://api.github.com/repos/huggingface/datasets/issues/8005
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/8005/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/8005/comments
|
https://api.github.com/repos/huggingface/datasets/issues/8005/events
|
https://github.com/huggingface/datasets/issues/8005
| 3,941,908,297
|
I_kwDODunzps7q9L9J
| 8,005
|
Multi-channel audio is automatically cast to mono, num_channels is ignored
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/9717211?v=4",
"events_url": "https://api.github.com/users/ZackHodari/events{/privacy}",
"followers_url": "https://api.github.com/users/ZackHodari/followers",
"following_url": "https://api.github.com/users/ZackHodari/following{/other_user}",
"gists_url": "https://api.github.com/users/ZackHodari/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ZackHodari",
"id": 9717211,
"login": "ZackHodari",
"node_id": "MDQ6VXNlcjk3MTcyMTE=",
"organizations_url": "https://api.github.com/users/ZackHodari/orgs",
"received_events_url": "https://api.github.com/users/ZackHodari/received_events",
"repos_url": "https://api.github.com/users/ZackHodari/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ZackHodari/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZackHodari/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ZackHodari",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"**Workaround**\nDirectly load audio using torchcodec, this is what datasets does under the hood (but doesn't maintain multi-channel)\n\n```python\nimport torchcodec\n\ndecoder = torchcodec.decoders.AudioDecoder(audio[\"bytes\"])\naudio_samples = decoder.get_all_samples()\n\naudio = audio_samples.data.numpy()\nsample_rate = audio_samples.sample_rate\n```"
] | 2026-02-14T17:28:03
| 2026-02-14T17:29:43
| null |
NONE
| null | null | null | null |
### Describe the bug
The `num_channels` parameter in `datasets.Audio()` is documented to preserve stereo channels when set to `None` (preserve original) or `2` (explicit stereo), but it currently downmixes all audio to mono regardless of this setting.
### Steps to reproduce the bug
```python
import numpy as np
import soundfile as sf
import tempfile
from datasets import Dataset, Audio
# Create a stereo audio file
sample_rate = 16000
duration = 1.0
num_samples = int(sample_rate * duration)
left_channel = np.sin(2 * np.pi * 440 * np.linspace(0, duration, num_samples))
right_channel = np.sin(2 * np.pi * 880 * np.linspace(0, duration, num_samples))
stereo_audio = np.stack([left_channel, right_channel], axis=1).astype(np.float32)
temp_file = tempfile.NamedTemporaryFile(suffix=".wav", delete=False)
sf.write(temp_file.name, stereo_audio, sample_rate)
# Create HuggingFace dataset
dataset_dict = {"audio": [temp_file.name]}
ds = Dataset.from_dict(dataset_dict)
# Test with num_channels=2
ds_stereo = ds.cast_column("audio", Audio(num_channels=2))
audio_data = ds_stereo[0]["audio"]
print(f"Original file shape (via soundfile): {sf.read(temp_file.name)[0].shape}")
# Output: (16000, 2) ✓ Stereo
print(f"HF datasets shape with num_channels=2: {audio_data['array'].shape}")
# Output: (16000,) ✗ Mono (should be (2, 16000))
```
**Result:**
- Original file: `(16000, 2)` - stereo ✓
- `Audio(num_channels=None)`: `(16000,)` - mono ✗
- `Audio(num_channels=2)`: `(16000,)` - mono ✗
- `Audio(num_channels=1)`: `(16000,)` - mono ✓
### Expected behavior
According to the documentation, `Audio` decoding should return samples with shape `(num_channels, num_samples)`:
- `num_channels=None` should preserve the original number of channels from the source file
- `num_channels=2` should preserve/convert to stereo output with shape `(2, num_samples)`
- `num_channels=1` should downmix to mono with shape `(num_samples,)`
**Actual Behavior**
All `num_channels` settings produce mono output with shape `(num_samples,)`, even when the source audio file is stereo.
### Environment info
OS: macOS / Linux
Python 3.10.19
```
datasets 4.4.2
torchcodec 0.10.0
```
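Another possible workaround sketch (hedged; it reuses `temp_file` from the snippet above and assumes `Audio(decode=False)` keeps the raw path/bytes): skip the library's decoding and read the file with `soundfile`, which preserves the channel layout.
```python
# Hedged workaround sketch: bypass the Audio decoding path and read the raw
# file with soundfile so both channels are preserved.
import io
import soundfile as sf
from datasets import Dataset, Audio

ds_raw = Dataset.from_dict({"audio": [temp_file.name]}).cast_column("audio", Audio(decode=False))
raw = ds_raw[0]["audio"]  # {"path": ..., "bytes": ...}

data, sr = sf.read(io.BytesIO(raw["bytes"])) if raw.get("bytes") else sf.read(raw["path"])
print(data.shape)  # (16000, 2): stereo preserved
```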
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/8005/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/8005/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| null | false
|
https://api.github.com/repos/huggingface/datasets/issues/8004
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/8004/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/8004/comments
|
https://api.github.com/repos/huggingface/datasets/issues/8004/events
|
https://github.com/huggingface/datasets/pull/8004
| 3,939,675,475
|
PR_kwDODunzps7DsCBM
| 8,004
|
fix save_to_disk/load_from_disk with pathlib.Path input
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/64578610?v=4",
"events_url": "https://api.github.com/users/Mr-Neutr0n/events{/privacy}",
"followers_url": "https://api.github.com/users/Mr-Neutr0n/followers",
"following_url": "https://api.github.com/users/Mr-Neutr0n/following{/other_user}",
"gists_url": "https://api.github.com/users/Mr-Neutr0n/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Mr-Neutr0n",
"id": 64578610,
"login": "Mr-Neutr0n",
"node_id": "MDQ6VXNlcjY0NTc4NjEw",
"organizations_url": "https://api.github.com/users/Mr-Neutr0n/orgs",
"received_events_url": "https://api.github.com/users/Mr-Neutr0n/received_events",
"repos_url": "https://api.github.com/users/Mr-Neutr0n/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Mr-Neutr0n/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mr-Neutr0n/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Mr-Neutr0n",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2026-02-13T23:03:55
| 2026-02-13T23:03:55
| null |
NONE
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/8004.diff",
"html_url": "https://github.com/huggingface/datasets/pull/8004",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/8004.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/8004"
}
|
Since #6704, `save_to_disk` and `load_from_disk` use `fsspec.core.url_to_fs` which expects a `str`, but both methods accept `PathLike` (which includes `pathlib.Path`). Passing a `Path` object raises a `TypeError` because `url_to_fs` can't handle it.
Fixed by converting the path argument with `os.fspath()` before handing it off to `url_to_fs`. This affects all five call sites across `Dataset`, `DatasetDict`, and the standalone `load_from_disk` function.
Fixes #6829
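For illustration, a minimal sketch of the conversion pattern described above (not the exact `datasets` source; `resolve` is a hypothetical helper):
```python
# Normalize PathLike inputs to str before handing them to fsspec's url_to_fs.
import os
from pathlib import Path

from fsspec.core import url_to_fs

def resolve(dataset_path):
    # os.fspath() accepts both str and pathlib.Path, so url_to_fs always sees a str
    fs, resolved_path = url_to_fs(os.fspath(dataset_path))
    return fs, resolved_path

fs, p = resolve(Path("/tmp/my_dataset"))
print(type(fs).__name__, p)  # e.g. LocalFileSystem /tmp/my_dataset
```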
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/8004/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/8004/timeline
| null | null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/8003
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/8003/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/8003/comments
|
https://api.github.com/repos/huggingface/datasets/issues/8003/events
|
https://github.com/huggingface/datasets/pull/8003
| 3,938,501,557
|
PR_kwDODunzps7DoBmp
| 8,003
|
very basic support for more hf urls
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_8003). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2026-02-13T18:30:56
| 2026-02-13T18:47:58
| 2026-02-13T18:47:57
|
MEMBER
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/8003.diff",
"html_url": "https://github.com/huggingface/datasets/pull/8003",
"merged_at": "2026-02-13 18:47:57",
"patch_url": "https://github.com/huggingface/datasets/pull/8003.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/8003"
}
| null |
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/8003/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/8003/timeline
| null | null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/8002
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/8002/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/8002/comments
|
https://api.github.com/repos/huggingface/datasets/issues/8002/events
|
https://github.com/huggingface/datasets/issues/8002
| 3,937,013,814
|
I_kwDODunzps7qqhA2
| 8,002
|
The `Sequence` class in features do not have "dtype"
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/167085390?v=4",
"events_url": "https://api.github.com/users/gonzalo-santamaria-iic/events{/privacy}",
"followers_url": "https://api.github.com/users/gonzalo-santamaria-iic/followers",
"following_url": "https://api.github.com/users/gonzalo-santamaria-iic/following{/other_user}",
"gists_url": "https://api.github.com/users/gonzalo-santamaria-iic/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gonzalo-santamaria-iic",
"id": 167085390,
"login": "gonzalo-santamaria-iic",
"node_id": "U_kgDOCfWFTg",
"organizations_url": "https://api.github.com/users/gonzalo-santamaria-iic/orgs",
"received_events_url": "https://api.github.com/users/gonzalo-santamaria-iic/received_events",
"repos_url": "https://api.github.com/users/gonzalo-santamaria-iic/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gonzalo-santamaria-iic/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gonzalo-santamaria-iic/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gonzalo-santamaria-iic",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2026-02-13T12:45:19
| 2026-02-13T12:52:36
| null |
NONE
| null | null | null | null |
### Describe the bug
**I'm not sure if this is a bug.**
I see that a `FeatureType` object contains an attribute called `dtype`, but it is not defined when the feature is a `Sequence` or a `List`.
When I try to run a multilabel classification with this example script from the transformers library:
https://github.com/huggingface/transformers/blob/main/examples/pytorch/text-classification/run_classification.py#L442
I get this error on the linked line:
```shell
AttributeError: 'List' object has no attribute 'dtype'. Did you mean: '_type'?
```
Looking at the check that the script is attempting to perform, could we perhaps add a `dtype = "list"` attribute to these `FeatureType`s (`Sequence`, `List`, etc.)?
### Steps to reproduce the bug
For example, this code works for me:
```python
from datasets import ClassLabel, Features, Sequence, Value
features = {'text': Value('string'), 'label': ClassLabel(names=['No', 'Yes'])}
print(features["text"].dtype)
print(features["label"].dtype)
```
```output
'string'
'int64'
```
and this code does not work for me:
```python
from datasets import ClassLabel, Features, Sequence, Value
features = {'text': Value('string'), 'label': Sequence(ClassLabel(names=['No', 'Yes']))}
print(features["label"].dtype) # it could be equal to "list"?
```
```output
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'List' object has no attribute 'dtype'. Did you mean: '_type'?
```
### Expected behavior
The attribute `dtype` equal to `"list"` when using objects of type `Sequence`.
```python
from datasets import ClassLabel, Features, Sequence, Value
features = {'text': Value('string'), 'label': Sequence(ClassLabel(names=['No', 'Yes']))}
print(features["label"].dtype)
```
```output
'list'
```
### Environment info
I have installed `datasets==4.5.0`.
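As a hedged aside, a small sketch of how a script can cope with this today without relying on `dtype` existing (`feature_dtype` is a hypothetical helper, not a `datasets` API):
```python
from datasets import ClassLabel, Sequence, Value

features = {"text": Value("string"), "label": Sequence(ClassLabel(names=["No", "Yes"]))}

def feature_dtype(feature):
    # Fall back to "list" for list-like features that expose no dtype attribute.
    return getattr(feature, "dtype", "list")

print(feature_dtype(features["text"]))   # 'string'
print(feature_dtype(features["label"]))  # 'list'
```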
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/8002/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/8002/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| null | false
|
https://api.github.com/repos/huggingface/datasets/issues/8000
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/8000/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/8000/comments
|
https://api.github.com/repos/huggingface/datasets/issues/8000/events
|
https://github.com/huggingface/datasets/pull/8000
| 3,919,504,253
|
PR_kwDODunzps7CpK9i
| 8,000
|
Fix: make environment variable naming consistent (issue #7998)
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/199906670?v=4",
"events_url": "https://api.github.com/users/AnkitAhlawat7742/events{/privacy}",
"followers_url": "https://api.github.com/users/AnkitAhlawat7742/followers",
"following_url": "https://api.github.com/users/AnkitAhlawat7742/following{/other_user}",
"gists_url": "https://api.github.com/users/AnkitAhlawat7742/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/AnkitAhlawat7742",
"id": 199906670,
"login": "AnkitAhlawat7742",
"node_id": "U_kgDOC-pVbg",
"organizations_url": "https://api.github.com/users/AnkitAhlawat7742/orgs",
"received_events_url": "https://api.github.com/users/AnkitAhlawat7742/received_events",
"repos_url": "https://api.github.com/users/AnkitAhlawat7742/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/AnkitAhlawat7742/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AnkitAhlawat7742/subscriptions",
"type": "User",
"url": "https://api.github.com/users/AnkitAhlawat7742",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi @lhoestq ,\r\nI’ve addressed the inconsistent environment variable name HF_DATASETS_DISABLE_PROGRESS_BARS in docstrings and warning messages. Please review the changes and let me know if there’s anything else I should update to complete this issue fix."
] | 2026-02-10T05:19:53
| 2026-02-16T15:29:49
| 2026-02-16T15:29:49
|
CONTRIBUTOR
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/8000.diff",
"html_url": "https://github.com/huggingface/datasets/pull/8000",
"merged_at": "2026-02-16 15:29:49",
"patch_url": "https://github.com/huggingface/datasets/pull/8000.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/8000"
}
|
Fix: https://github.com/huggingface/datasets/issues/7998
**Summary**
Addressed the inconsistent environment variable name HF_DATASETS_DISABLE_PROGRESS_BARS used for toggling progress bars, particularly in docstrings and warning messages.
**Changes**
- Updated function docstrings to use the correct environment variable name
- Corrected warning messages in `tqdm.py` to ensure consistency
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/8000/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/8000/timeline
| null | null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7999
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7999/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7999/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7999/events
|
https://github.com/huggingface/datasets/issues/7999
| 3,915,367,642
|
I_kwDODunzps7pX8Ta
| 7,999
|
Too many dataloader workers: 4 (max is dataset.num_shards=3). Stopping 1 dataloader workers.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/50061868?v=4",
"events_url": "https://api.github.com/users/D222097/events{/privacy}",
"followers_url": "https://api.github.com/users/D222097/followers",
"following_url": "https://api.github.com/users/D222097/following{/other_user}",
"gists_url": "https://api.github.com/users/D222097/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/D222097",
"id": 50061868,
"login": "D222097",
"node_id": "MDQ6VXNlcjUwMDYxODY4",
"organizations_url": "https://api.github.com/users/D222097/orgs",
"received_events_url": "https://api.github.com/users/D222097/received_events",
"repos_url": "https://api.github.com/users/D222097/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/D222097/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/D222097/subscriptions",
"type": "User",
"url": "https://api.github.com/users/D222097",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"Hi, thanks for the clear question and code snippet!\n\nFrom my understanding, **hf_dataset.num_shards** represents the number of actual iterable partitions that the streaming dataset can be read from in parallel. This is not the same as the number of underlying Parquet files—many files can be grouped into a much smaller number of shards internally.\n\nThe warning seems to persist because **datasets[sub_idx]** is a sliced or sharded view of the original dataset, and its effective num_shards can be smaller than expected after this step. So even if **num_workers** is set equal to **datasets[sub_idx].num_shards**, additional internal sharding can still result in fewer usable shards, which triggers the warning.\n\nBecause of this, manually overriding **num_shards** does not appear to increase real parallelism and may just create extra workers that stay idle. For streaming datasets, it seems safer to let the dataset control sharding and to cap the number of workers to the effective shard count (or even use fewer workers and adjust batch size instead), since performance is often I/O-bound.\n\nHope this helps clarify what’s going on!"
] | 2026-02-09T09:26:37
| 2026-02-15T14:25:56
| null |
NONE
| null | null | null | null |
Hi !
I’m working on training with a large-scale dataset (100+ Parquet files) using lazy loading, and I’m struggling to understand and optimize the `num_shards` setting. In the lerobot repo, `streaming_datasets.py` does:
```
from datasets import load_dataset
self.hf_dataset: datasets.IterableDataset = load_dataset(
self.repo_id if not self.streaming_from_local else str(self.root),
split="train",
streaming=self.streaming,
data_files="data/*/*.parquet",
revision=self.revision,
)
self.num_shards = min(self.hf_dataset.num_shards, max_num_shards)
```
```
dataloader = torch.utils.data.DataLoader(
datasets[sub_idx],
num_workers=datasets[sub_idx].num_shards, #cfg.num_workers,
batch_size=cfg.batch_size,
shuffle=shuffle and not cfg.dataset.streaming,
sampler=sampler,
collate_fn=FlowerDataCollator(),
pin_memory=device.type == "cuda",
drop_last=True,
prefetch_factor=2 if cfg.num_workers > 0 else None,
)
```
What exactly does hf_dataset.**num_shards** represent? Is it safe to manually override/edit num_shards?
My batch loading is slower than expected (2-3 s per batch), and `num_workers` cannot be increased because of the warning: `Too many dataloader workers: 4 (max is dataset.num_shards=3). Stopping 1 dataloader workers.`
Even when I use `num_workers=datasets[sub_idx].num_shards`, the warning still appears! (My `num_workers` is 4 and `hf_dataset.num_shards` is 100+, so `datasets[sub_idx].num_shards` is 4.)
Why does the "too many workers" warning persist even when `num_workers` equals `dataset.num_shards`, and how do I fix this?
Thanks so much for any insights or help with this! Really appreciate your time and expertise 😊
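A hedged sketch of the usual mitigation (it reuses the names `cfg`, `datasets`, `sub_idx`, `FlowerDataCollator`, and `device` from the snippets above, so it is not standalone): cap the worker count at the effective shard count so the warning cannot trigger.
```python
import torch

# Cap workers at the number of shards the sliced dataset actually exposes.
effective_workers = min(cfg.num_workers, datasets[sub_idx].num_shards)

dataloader = torch.utils.data.DataLoader(
    datasets[sub_idx],
    num_workers=effective_workers,
    batch_size=cfg.batch_size,
    collate_fn=FlowerDataCollator(),
    pin_memory=device.type == "cuda",
    drop_last=True,
    prefetch_factor=2 if effective_workers > 0 else None,
)
```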
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7999/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7999/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7998
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7998/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7998/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7998/events
|
https://github.com/huggingface/datasets/issues/7998
| 3,912,624,238
|
I_kwDODunzps7pNehu
| 7,998
|
[doc] Inconsistent ENV VAR Name for Progress Bar Toggle
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/49304833?v=4",
"events_url": "https://api.github.com/users/Moenupa/events{/privacy}",
"followers_url": "https://api.github.com/users/Moenupa/followers",
"following_url": "https://api.github.com/users/Moenupa/following{/other_user}",
"gists_url": "https://api.github.com/users/Moenupa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Moenupa",
"id": 49304833,
"login": "Moenupa",
"node_id": "MDQ6VXNlcjQ5MzA0ODMz",
"organizations_url": "https://api.github.com/users/Moenupa/orgs",
"received_events_url": "https://api.github.com/users/Moenupa/received_events",
"repos_url": "https://api.github.com/users/Moenupa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Moenupa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Moenupa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Moenupa",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2026-02-08T12:16:44
| 2026-02-16T15:29:50
| 2026-02-16T15:29:50
|
NONE
| null | null | null | null |
Code uses env var name `HF_DATASETS_DISABLE_PROGRESS_BARS`.
https://github.com/huggingface/datasets/blob/025593f2f0722f31fc136e0ae45da4ff44d4416a/src/datasets/config.py#L221-L226
Docstrings and warnings report env var name `HF_DATASETS_DISABLE_PROGRESS_BAR` without the ending `S`.
https://github.com/huggingface/datasets/blob/025593f2f0722f31fc136e0ae45da4ff44d4416a/src/datasets/utils/tqdm.py#L61-L73
https://github.com/huggingface/datasets/blob/025593f2f0722f31fc136e0ae45da4ff44d4416a/src/datasets/utils/tqdm.py#L78-L90
https://github.com/huggingface/datasets/blob/025593f2f0722f31fc136e0ae45da4ff44d4416a/src/datasets/utils/tqdm.py#L95-L100
This affects doc webpages as well, e.g., see https://huggingface.co/docs/datasets/en/package_reference/utilities#datasets.enable_progress_bars.
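For reference, only the spelling with the trailing `S` has an effect, since `config.py` reads it at module import time; a quick sketch:
```python
import os

# Correct name (the one read by datasets.config at import time):
os.environ["HF_DATASETS_DISABLE_PROGRESS_BARS"] = "1"
# Name currently shown in the docstrings/warnings, which the code does not read:
# os.environ["HF_DATASETS_DISABLE_PROGRESS_BAR"] = "1"

import datasets  # progress bars should now be disabled by default
```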
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7998/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7998/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7997
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7997/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7997/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7997/events
|
https://github.com/huggingface/datasets/pull/7997
| 3,912,160,109
|
PR_kwDODunzps7CRNIl
| 7,997
|
fix: Dataset.map writer initialization when first examples return None
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/34209028?v=4",
"events_url": "https://api.github.com/users/veeceey/events{/privacy}",
"followers_url": "https://api.github.com/users/veeceey/followers",
"following_url": "https://api.github.com/users/veeceey/following{/other_user}",
"gists_url": "https://api.github.com/users/veeceey/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/veeceey",
"id": 34209028,
"login": "veeceey",
"node_id": "MDQ6VXNlcjM0MjA5MDI4",
"organizations_url": "https://api.github.com/users/veeceey/orgs",
"received_events_url": "https://api.github.com/users/veeceey/received_events",
"repos_url": "https://api.github.com/users/veeceey/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/veeceey/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/veeceey/subscriptions",
"type": "User",
"url": "https://api.github.com/users/veeceey",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Closing in favor of #7996 which addresses the same issue with the same fix."
] | 2026-02-08T07:02:00
| 2026-02-11T08:23:06
| 2026-02-11T08:23:06
|
NONE
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7997.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7997",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7997.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7997"
}
|
Fixes #7990
## Summary
When Dataset.map is called and the first examples processed return None, the writer is never properly initialized, causing a ValueError.
## Changes
- Modified _map_single to initialize the writer early if the first batch returns empty results
- Ensures writer is set before the first call to writer.write_batch
## Test Plan
- Added test case that reproduces the bug
- Verified the fix resolves the issue
- Existing tests still pass
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/34209028?v=4",
"events_url": "https://api.github.com/users/veeceey/events{/privacy}",
"followers_url": "https://api.github.com/users/veeceey/followers",
"following_url": "https://api.github.com/users/veeceey/following{/other_user}",
"gists_url": "https://api.github.com/users/veeceey/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/veeceey",
"id": 34209028,
"login": "veeceey",
"node_id": "MDQ6VXNlcjM0MjA5MDI4",
"organizations_url": "https://api.github.com/users/veeceey/orgs",
"received_events_url": "https://api.github.com/users/veeceey/received_events",
"repos_url": "https://api.github.com/users/veeceey/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/veeceey/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/veeceey/subscriptions",
"type": "User",
"url": "https://api.github.com/users/veeceey",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7997/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7997/timeline
| null | null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7996
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7996/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7996/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7996/events
|
https://github.com/huggingface/datasets/pull/7996
| 3,912,066,322
|
PR_kwDODunzps7CQ6GC
| 7,996
|
Fix Dataset.map writer initialization when early examples return None
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/34209028?v=4",
"events_url": "https://api.github.com/users/veeceey/events{/privacy}",
"followers_url": "https://api.github.com/users/veeceey/followers",
"following_url": "https://api.github.com/users/veeceey/following{/other_user}",
"gists_url": "https://api.github.com/users/veeceey/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/veeceey",
"id": 34209028,
"login": "veeceey",
"node_id": "MDQ6VXNlcjM0MjA5MDI4",
"organizations_url": "https://api.github.com/users/veeceey/orgs",
"received_events_url": "https://api.github.com/users/veeceey/received_events",
"repos_url": "https://api.github.com/users/veeceey/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/veeceey/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/veeceey/subscriptions",
"type": "User",
"url": "https://api.github.com/users/veeceey",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2026-02-08T05:52:45
| 2026-02-08T05:52:45
| null |
NONE
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7996.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7996",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7996.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7996"
}
|
## Summary
Fixes #7990
This PR fixes a bug in `Dataset.map()` where the writer initialization was incorrectly tied to the index being 0, causing crashes when the map function returns `None` for the first few examples and later returns a dict.
### Changes
- **Non-batched mode** (line 3676): Changed from `if i == 0:` to `if writer is None:`
- **Batched mode** (line 3701): Changed from `if i and i[0] == 0:` to `if writer is None:`
### Why This Fix Works
The original code assumed that `update_data` would always be determined by the time the first example (i=0) was processed. However, `update_data` is set lazily after processing each example - it becomes `True` when the function first returns a non-None value.
If a function returns `None` for early examples and a dict for later ones:
1. At i=0, the function returns `None`, so `update_data` remains `None`
2. Writer is NOT initialized (because we're not updating data)
3. At i=2, the function returns a dict, so `update_data` becomes `True`
4. **Old code**: Tries to use `writer` (still None) because i != 0 → crash
5. **New code**: Checks `if writer is None` and initializes it → works correctly
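A minimal, self-contained illustration of the guard change described above (the names below are hypothetical stand-ins, not the actual `datasets` internals):
```python
def process(example, idx):
    # A map-like function that returns None for the first two examples.
    return None if idx < 2 else {"x": example["x"] * 10}

def run(examples):
    writer = None                   # created lazily, like the Arrow writer in map()
    output = []
    for i, example in enumerate(examples):
        result = process(example, i)
        if result is None:
            output.append(example)  # no update from the function: keep the example as-is
            continue
        if writer is None:          # old code checked `i == 0` here and crashed for i > 0
            writer = []             # stand-in for initializing the writer on first use
        writer.append(result)
        output.append(result)
    return output

print(run([{"x": 1}, {"x": 2}, {"x": 3}]))  # [{'x': 1}, {'x': 2}, {'x': 30}]
```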
### Test Plan
The fix can be verified with this minimal test case from the issue:
```python
from datasets import Dataset
ds = Dataset.from_dict({"x": [1, 2, 3]})
def fn(example, idx):
if idx < 2:
return None
return {"x": [example["x"] * 10]}
# Should work without errors
result = list(ds.map(fn, with_indices=True))
print(result) # [{'x': 1}, {'x': 2}, {'x': [30]}]
```
**Before this fix**: Crashes with `AttributeError: 'NoneType' object has no attribute 'write'`
**After this fix**: Works correctly
### Related
This fix ensures the writer is initialized the first time a non-None value is returned, regardless of which example index that occurs at. This makes the code more robust to different map function behaviors.
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7996/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7996/timeline
| null | null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7995
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7995/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7995/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7995/events
|
https://github.com/huggingface/datasets/pull/7995
| 3,912,055,867
|
PR_kwDODunzps7CQ4C9
| 7,995
|
Bump fsspec upper bound to 2026.2.0 (fixes #7994)
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/11176606?v=4",
"events_url": "https://api.github.com/users/jayzuccarelli/events{/privacy}",
"followers_url": "https://api.github.com/users/jayzuccarelli/followers",
"following_url": "https://api.github.com/users/jayzuccarelli/following{/other_user}",
"gists_url": "https://api.github.com/users/jayzuccarelli/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jayzuccarelli",
"id": 11176606,
"login": "jayzuccarelli",
"node_id": "MDQ6VXNlcjExMTc2NjA2",
"organizations_url": "https://api.github.com/users/jayzuccarelli/orgs",
"received_events_url": "https://api.github.com/users/jayzuccarelli/received_events",
"repos_url": "https://api.github.com/users/jayzuccarelli/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jayzuccarelli/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jayzuccarelli/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jayzuccarelli",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Apologize for the ping @lhoestq but I wanted to know whether this is something you could look into? And more generally whether you have a policy to regularly bump fsspec? Thanks 🙏",
"thanks for opening a PR, I'm triggering the CI and I'll merge if it's all green :)",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7995). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2026-02-08T05:43:15
| 2026-02-16T15:28:59
| 2026-02-16T15:28:59
|
CONTRIBUTOR
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7995.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7995",
"merged_at": "2026-02-16 15:28:59",
"patch_url": "https://github.com/huggingface/datasets/pull/7995.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7995"
}
|
Fixes #7994: bumps the fsspec upper bound so the latest version can be used; the CI will validate compatibility.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7995/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7995/timeline
| null | null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7994
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7994/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7994/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7994/events
|
https://github.com/huggingface/datasets/issues/7994
| 3,906,330,806
|
I_kwDODunzps7o1eC2
| 7,994
|
Bump fsspec upper bound constraint
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/528003?v=4",
"events_url": "https://api.github.com/users/hadim/events{/privacy}",
"followers_url": "https://api.github.com/users/hadim/followers",
"following_url": "https://api.github.com/users/hadim/following{/other_user}",
"gists_url": "https://api.github.com/users/hadim/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hadim",
"id": 528003,
"login": "hadim",
"node_id": "MDQ6VXNlcjUyODAwMw==",
"organizations_url": "https://api.github.com/users/hadim/orgs",
"received_events_url": "https://api.github.com/users/hadim/received_events",
"repos_url": "https://api.github.com/users/hadim/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hadim/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hadim/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hadim",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2026-02-06T11:37:54
| 2026-02-16T15:29:00
| 2026-02-16T15:29:00
|
NONE
| null | null | null | null |
Would it be possible to bump fsspec upper bound to the latest (2026.2.0)?
I saw you had some API compat issues in the past (https://github.com/huggingface/datasets/issues/7326) and I understand the need for an upper bound.
But I wonder if you think your CI and tests are a good proxy to catch fsspec API breakage? If that's the case, then triggering the datasets CI with the latest version of fsspec should tell us whether it's all good, right?
Happy to open a PR if needed.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7994/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7994/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7993
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7993/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7993/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7993/events
|
https://github.com/huggingface/datasets/pull/7993
| 3,898,606,021
|
PR_kwDODunzps7Bkr3F
| 7,993
|
:sparkles: Add 'SparseCsv' builder and 'sparse_collate_fn' for efficient high-dimensional sparse data loading
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/22107086?v=4",
"events_url": "https://api.github.com/users/Ebraheem1/events{/privacy}",
"followers_url": "https://api.github.com/users/Ebraheem1/followers",
"following_url": "https://api.github.com/users/Ebraheem1/following{/other_user}",
"gists_url": "https://api.github.com/users/Ebraheem1/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Ebraheem1",
"id": 22107086,
"login": "Ebraheem1",
"node_id": "MDQ6VXNlcjIyMTA3MDg2",
"organizations_url": "https://api.github.com/users/Ebraheem1/orgs",
"received_events_url": "https://api.github.com/users/Ebraheem1/received_events",
"repos_url": "https://api.github.com/users/Ebraheem1/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Ebraheem1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ebraheem1/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Ebraheem1",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2026-02-04T21:59:39
| 2026-02-04T22:00:48
| null |
NONE
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7993.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7993",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7993.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7993"
}
|
This PR introduces a new dataset builder, SparseCsv, designed to handle "wide" tabular datasets (e.g., 100k+ columns common in transcriptomics, sparse NLP features, or recommender systems) that are typically too large to load into memory as dense Arrow tables.
It also adds a utility function, `sparse_collate_fn`, to seamlessly convert these sparse examples into `torch.sparse` or `scipy.sparse` matrices during training.
This PR should fix #7377
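As a purely illustrative sketch (the per-example sparse row format and the PR's actual `sparse_collate_fn` signature are assumptions here, not the PR's code), collating index/value lists into a `scipy.sparse` CSR batch can look like this:
```python
# Illustrative only: collate per-example {"indices", "values"} rows into one CSR batch.
import numpy as np
from scipy import sparse

def collate_sparse_examples(examples, num_features):
    indptr, indices, values = [0], [], []
    for ex in examples:
        indices.extend(ex["indices"])
        values.extend(ex["values"])
        indptr.append(len(indices))
    return sparse.csr_matrix(
        (np.asarray(values), np.asarray(indices), np.asarray(indptr)),
        shape=(len(examples), num_features),
    )

batch = collate_sparse_examples(
    [{"indices": [0, 5], "values": [1.0, 2.0]}, {"indices": [3], "values": [4.0]}],
    num_features=10,
)
print(batch.toarray())  # dense view of the 2 x 10 sparse batch
```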
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7993/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7993/timeline
| null | null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7992
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7992/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7992/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7992/events
|
https://github.com/huggingface/datasets/pull/7992
| 3,897,848,157
|
PR_kwDODunzps7BiJps
| 7,992
|
Add `IterableDataset.reshard()`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7992). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2026-02-04T18:24:41
| 2026-02-04T18:55:38
| 2026-02-04T18:55:35
|
MEMBER
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7992.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7992",
"merged_at": "2026-02-04 18:55:35",
"patch_url": "https://github.com/huggingface/datasets/pull/7992.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7992"
}
|
To increase the number of shards of a dataset, you can use [`IterableDataset.reshard`]:
```py
>>> dataset
IterableDataset({
features: ['label', 'title', 'content'],
num_shards: 4
})
>>> dataset.reshard()
IterableDataset({
features: ['label', 'title', 'content'],
num_shards: 3600
})
```
The resharding mechanism depends on the dataset file format.
For example for Parquet, it reshards using row groups instead of having one file per shard.
We can implement other formats later (e.g. JSON Lines and CSV can be split by recovering line boundaries from arbitrary offsets).
Other details:
* fixed concatenate after shuffling, now it correctly shuffles the shards: close https://github.com/huggingface/datasets/issues/7196
* fixed interleave after split_by_node: close https://github.com/huggingface/datasets/issues/7868
* changed a bit how the effective seed works in multi node/worker/epoch situations
related to https://github.com/huggingface/datasets/issues/7917
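As a rough usage sketch (the dataset name below is a placeholder, and this assumes a streaming Parquet dataset), the extra shards mainly help spread an `IterableDataset` across `DataLoader` workers:
```python
from datasets import load_dataset
from torch.utils.data import DataLoader

# "username/some-parquet-dataset" is a placeholder repo id.
ds = load_dataset("username/some-parquet-dataset", split="train", streaming=True)
ds = ds.reshard()  # e.g. go from a few files to one shard per Parquet row group
loader = DataLoader(ds, batch_size=32, num_workers=8)  # each worker now gets its own shards
```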
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7992/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7992/timeline
| null | null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7991
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7991/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7991/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7991/events
|
https://github.com/huggingface/datasets/issues/7991
| 3,896,884,513
|
I_kwDODunzps7oRb0h
| 7,991
|
list(api.list_datasets()) giving jsondecode error
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/199609168?v=4",
"events_url": "https://api.github.com/users/Moll-j/events{/privacy}",
"followers_url": "https://api.github.com/users/Moll-j/followers",
"following_url": "https://api.github.com/users/Moll-j/following{/other_user}",
"gists_url": "https://api.github.com/users/Moll-j/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Moll-j",
"id": 199609168,
"login": "Moll-j",
"node_id": "U_kgDOC-XLUA",
"organizations_url": "https://api.github.com/users/Moll-j/orgs",
"received_events_url": "https://api.github.com/users/Moll-j/received_events",
"repos_url": "https://api.github.com/users/Moll-j/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Moll-j/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Moll-j/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Moll-j",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2026-02-04T14:39:46
| 2026-02-05T10:30:09
| 2026-02-05T10:30:09
|
NONE
| null | null | null | null |
I am using the Python API wrapper to list all datasets available on Hugging Face. This is for research: I need the full list of datasets to determine what percentage have language tags, among other questions that require the complete list. However, the following code, which worked a few months ago:
```
from huggingface_hub import HfApi
api = HfApi(token=token)
datasets = list(api.list_datasets())
```
now gives a JSONDecodeError when it reaches 2000 results. My understanding is that this is a pagination issue: no cursor is exposed to the API, so it doesn't know where to resume reading after the limit.
Is there any way to find all datasets available and combine them into a list?
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/199609168?v=4",
"events_url": "https://api.github.com/users/Moll-j/events{/privacy}",
"followers_url": "https://api.github.com/users/Moll-j/followers",
"following_url": "https://api.github.com/users/Moll-j/following{/other_user}",
"gists_url": "https://api.github.com/users/Moll-j/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Moll-j",
"id": 199609168,
"login": "Moll-j",
"node_id": "U_kgDOC-XLUA",
"organizations_url": "https://api.github.com/users/Moll-j/orgs",
"received_events_url": "https://api.github.com/users/Moll-j/received_events",
"repos_url": "https://api.github.com/users/Moll-j/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Moll-j/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Moll-j/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Moll-j",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7991/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7991/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7990
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7990/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7990/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7990/events
|
https://github.com/huggingface/datasets/issues/7990
| 3,895,870,826
|
I_kwDODunzps7oNkVq
| 7,990
|
Dataset.map crashes when first examples return None and later examples return dict — writer not initialized
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/30819640?v=4",
"events_url": "https://api.github.com/users/meta-program/events{/privacy}",
"followers_url": "https://api.github.com/users/meta-program/followers",
"following_url": "https://api.github.com/users/meta-program/following{/other_user}",
"gists_url": "https://api.github.com/users/meta-program/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/meta-program",
"id": 30819640,
"login": "meta-program",
"node_id": "MDQ6VXNlcjMwODE5NjQw",
"organizations_url": "https://api.github.com/users/meta-program/orgs",
"received_events_url": "https://api.github.com/users/meta-program/received_events",
"repos_url": "https://api.github.com/users/meta-program/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/meta-program/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/meta-program/subscriptions",
"type": "User",
"url": "https://api.github.com/users/meta-program",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2026-02-04T10:43:20
| 2026-02-04T10:43:20
| null |
NONE
| null | null | null | null |
### Describe the bug
I detected a serious [bug from datasets/arrow_dataset.py](https://github.com/huggingface/datasets/blob/main/src/datasets/arrow_dataset.py#L3676)
---
**Description of the bug**
`Dataset.map` crashes with `writer is None` when the map function returns `None` for the first few examples and a dictionary (or `pa.Table` / DataFrame) for later examples. This happens because the internal writer is initialized only when `i == 0` (or `i[0] == 0` in batched mode), but `update_data` is determined lazily after processing the first example/batch.
**Steps to reproduce**
```python
from datasets import Dataset
ds = Dataset.from_dict({"x": [1, 2, 3]})
def fn(example, idx):
if idx < 2:
return None
return {"x": [example["x"] * 10]}
list(ds.map(fn, with_indices=True))
```
**Expected behavior**
* The function should work regardless of when `update_data` becomes `True`.
* Writer should be initialized the first time a non-`None` return occurs, not tied to the first index.
**Environment info**
* `datasets` version: <insert your version>
* Python version: 3.12
* OS: <insert your OS>
**Suggested fix**
Replace `if i == 0` / `if i[0] == 0` checks with `if writer is None` when initializing the writer.
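A generic sketch of that pattern (stand-in code, not the actual `arrow_dataset.py` internals) could look like:
```python
# Stand-in example: initialize the writer on the first non-None result, not at index 0.
results = [None, None, {"x": 30}, {"x": 40}]

writer = None
for i, batch in enumerate(results):
    if batch is None:
        continue
    if writer is None:   # was effectively `if i == 0`, which fails when early results are None
        writer = []      # placeholder for the Arrow writer initialization
    writer.append(batch)

print(writer)  # [{'x': 30}, {'x': 40}]
```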
---
### Steps to reproduce the bug
```python
from datasets import Dataset
# Create a minimal dataset
ds = Dataset.from_dict({"x": [1, 2, 3]})
# Define a map function that returns None for first examples, dict later
def fn(example, idx):
if idx < 2:
return None
return {"x": [example["x"] * 10]}
# Apply map with indices
list(ds.map(fn, with_indices=True))
```
**Expected:** function executes without errors.
**Observed:** crashes with `AttributeError: 'NoneType' object has no attribute 'write'` because the internal writer is not initialized when the first non-None return happens after i > 0.
---
This is minimal and clearly demonstrates the exact failure condition (`None` early, `dict` later).
### Expected behavior
---
**Expected behavior**
The `Dataset.map` function should handle map functions that return `None` for some examples and a dictionary (or `pa.Table` / DataFrame) for later examples. In this case, the internal writer should be initialized when the first non-`None` value is returned, so that the dataset can be updated without crashing. The code should run successfully for all examples and return the updated dataset.
---
### Environment info
- python3.12
- datasets==3.6.0 [but the latest version still has this problem]
- transformers==4.55.2
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7990/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7990/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7989
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7989/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7989/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7989/events
|
https://github.com/huggingface/datasets/pull/7989
| 3,895,613,949
|
PR_kwDODunzps7BaxDx
| 7,989
|
Remove pre-release workaround in CI for `transformers v5` and `huggingface_hub v1`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/36770234?v=4",
"events_url": "https://api.github.com/users/hanouticelina/events{/privacy}",
"followers_url": "https://api.github.com/users/hanouticelina/followers",
"following_url": "https://api.github.com/users/hanouticelina/following{/other_user}",
"gists_url": "https://api.github.com/users/hanouticelina/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hanouticelina",
"id": 36770234,
"login": "hanouticelina",
"node_id": "MDQ6VXNlcjM2NzcwMjM0",
"organizations_url": "https://api.github.com/users/hanouticelina/orgs",
"received_events_url": "https://api.github.com/users/hanouticelina/received_events",
"repos_url": "https://api.github.com/users/hanouticelina/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hanouticelina/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hanouticelina/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hanouticelina",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7989). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2026-02-04T09:42:49
| 2026-02-04T15:20:04
| 2026-02-04T15:20:02
|
CONTRIBUTOR
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7989.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7989",
"merged_at": "2026-02-04 15:20:02",
"patch_url": "https://github.com/huggingface/datasets/pull/7989.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7989"
}
|
This PR removes the workaround for pre-release `transformers v5.*` / `huggingface_hub v1.*` in the `test_py314_future` job, since both are now officially released.
cc @Wauplin just for viz since you introduced this in https://github.com/huggingface/datasets/pull/7783.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7989/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7989/timeline
| null | null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7988
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7988/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7988/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7988/events
|
https://github.com/huggingface/datasets/issues/7988
| 3,895,353,435
|
I_kwDODunzps7oLmBb
| 7,988
|
`Dataset.map()` breaks when `function` calls `import polars as pl` and `num_proc`>1: "UnboundLocalError: cannot access local variable 'pl' where it is not associated with a value"
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/7464471?v=4",
"events_url": "https://api.github.com/users/ligz08/events{/privacy}",
"followers_url": "https://api.github.com/users/ligz08/followers",
"following_url": "https://api.github.com/users/ligz08/following{/other_user}",
"gists_url": "https://api.github.com/users/ligz08/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ligz08",
"id": 7464471,
"login": "ligz08",
"node_id": "MDQ6VXNlcjc0NjQ0NzE=",
"organizations_url": "https://api.github.com/users/ligz08/orgs",
"received_events_url": "https://api.github.com/users/ligz08/received_events",
"repos_url": "https://api.github.com/users/ligz08/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ligz08/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ligz08/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ligz08",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2026-02-04T08:42:23
| 2026-02-04T08:42:23
| null |
NONE
| null | null | null | null |
### Describe the bug
# Repro
These two conditions seem to consistently reproduce the issue:
- function passed to `Dataset.map()` explicitly or implicitly calls `import polars as pl`
- `num_proc` > 1
# Trace
```
RemoteTraceback Traceback (most recent call last)
RemoteTraceback:
"""
Traceback (most recent call last):
File "c:\Users\{redacted}\.venv\Lib\site-packages\multiprocess\pool.py", line 125, in worker
result = (True, func(*args, **kwds))
^^^^^^^^^^^^^^^^^^^
File "c:\Users\{redacted}\.venv\Lib\site-packages\datasets\utils\py_utils.py", line 586, in _write_generator_to_queue
for i, result in enumerate(func(**kwargs)):
^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Users\{redacted}\.venv\Lib\site-packages\datasets\arrow_dataset.py", line 3687, in _map_single
and isinstance(example, pl.DataFrame)
^^
UnboundLocalError: cannot access local variable 'pl' where it is not associated with a value
"""
The above exception was the direct cause of the following exception:
UnboundLocalError Traceback (most recent call last)
Cell In[2], [line 9](vscode-notebook-cell:?execution_count=2&line=9)
6 import polars as pl
7 return {'squared': sample['n'] ** 2}
----> [9](vscode-notebook-cell:?execution_count=2&line=9) ds.map(square, num_proc=2)
File c:\Users\{redacted}\.venv\Lib\site-packages\datasets\arrow_dataset.py:562, in transmit_format.<locals>.wrapper(*args, **kwargs)
555 self_format = {
556 "type": self._format_type,
557 "format_kwargs": self._format_kwargs,
558 "columns": self._format_columns,
559 "output_all_columns": self._output_all_columns,
560 }
561 # apply actual function
--> [562](file:///C:/Users/{redacted}/.venv/Lib/site-packages/datasets/arrow_dataset.py:562) out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
563 datasets: list["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
564 # re-apply format to the output
File c:\Users\{redacted}\.venv\Lib\site-packages\datasets\arrow_dataset.py:3334, in Dataset.map(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc, try_original_type)
3331 os.environ = prev_env
3332 logger.info(f"Spawning {num_proc} processes")
-> [3334](file:///C:/Users/{redacted}/.venv/Lib/site-packages/datasets/arrow_dataset.py:3334) for rank, done, content in iflatmap_unordered(
3335 pool, Dataset._map_single, kwargs_iterable=unprocessed_kwargs_per_job
3336 ):
3337 check_if_shard_done(rank, done, content)
3339 pool.close()
File c:\Users\{redacted}\.venv\Lib\site-packages\datasets\utils\py_utils.py:626, in iflatmap_unordered(pool, func, kwargs_iterable)
623 finally:
624 if not pool_changed:
625 # we get the result in case there's an error to raise
--> [626](file:///C:/Users/{redacted}/.venv/Lib/site-packages/datasets/utils/py_utils.py:626) [async_result.get(timeout=0.05) for async_result in async_results]
File c:\Users\{redacted}\.venv\Lib\site-packages\multiprocess\pool.py:774, in ApplyResult.get(self, timeout)
772 return self._value
773 else:
--> [774](file:///C:/Users/{redacted}/.venv/Lib/site-packages/multiprocess/pool.py:774) raise self._value
UnboundLocalError: cannot access local variable 'pl' where it is not associated with a value
```
# Why `import polars` in a worker function?
To my knowledge `Dataset.map()` doesn't support a worker init function, and objects useful inside a function aren't always picklable, so I commonly use this pattern to construct the unpicklable object inside a worker process:
```python
def func(example, **kwargs):
if 'my_unpickable_object' not in globals():
from my_module import MyClass
my_unpickable_object = MyClass(**kwargs)
return {'newcol': my_unpickable_object.calculate_something(example['n'])}
ds = Dataset.load_from_disk(...)
ds.map(func, num_proc=2, ...)
```
and here `from my_module import MyClass` may implicitly call `import polars as pl` e.g. when `my_module.py` has that line, or when it imports some other module containing `import polars as pl`.
# A workaround
Things seem to work OK if I don't do any import inside `func` and instead pass a class already imported outside `func` as a constructor, like below, although I'm unsure how many scenarios this workaround can cover.
```python
from my_module import MyClass
def func(example, constructor=MyClass, **kwargs):
if 'my_unpickable_object' not in globals():
my_unpickable_object = constructor(**kwargs)
return {'newcol': my_unpickable_object.calculate_something(example['n'])}
ds = Dataset.load_from_disk(...)
ds.map(func, num_proc=2, ...)
```
# My speculations
Before the crash point, on line 3656 of `arrow_dataset.py` it reads:
```python
if config.POLARS_AVAILABLE and "polars" in sys.modules:
import polars as pl
```
My guess is that these conditions somehow get messed up in a worker process, so it ends up not entering the `if` block.
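As an illustrative sketch only (one possible defensive pattern, not the actual datasets source), the check could resolve polars lazily instead of relying on `"polars" in sys.modules` being set in the worker:
```python
# Illustrative sketch: never reference `pl` unless polars could actually be imported.
import importlib.util

def _maybe_import_polars():
    if importlib.util.find_spec("polars") is None:
        return None
    import polars as pl
    return pl

pl = _maybe_import_polars()
example = {"n": 1}
if pl is not None and isinstance(example, pl.DataFrame):
    print("example is a polars DataFrame")
else:
    print("example is a plain dict")
```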
### Steps to reproduce the bug
```python
from datasets import Dataset
ds = Dataset.from_dict({'n': list(range(10_000))})
def square(sample):
import polars as pl
return {'squared': sample['n'] ** 2}
ds.map(square, num_proc=2)
```
### Expected behavior
```
Dataset({
features: ['n', 'squared'],
num_rows: 10000
})
```
### Environment info
- `datasets` version: 4.5.0
- Platform: Windows-11-10.0.26200-SP0
- Python version: 3.12.10
- `huggingface_hub` version: 0.31.2
- PyArrow version: 21.0.0
- Pandas version: 2.2.3
- `fsspec` version: 2025.3.0
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7988/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7988/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7987
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7987/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7987/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7987/events
|
https://github.com/huggingface/datasets/pull/7987
| 3,894,713,494
|
PR_kwDODunzps7BX0pY
| 7,987
|
Fix index out of bound error with original_shard_lengths.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/511073?v=4",
"events_url": "https://api.github.com/users/jonathanasdf/events{/privacy}",
"followers_url": "https://api.github.com/users/jonathanasdf/followers",
"following_url": "https://api.github.com/users/jonathanasdf/following{/other_user}",
"gists_url": "https://api.github.com/users/jonathanasdf/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jonathanasdf",
"id": 511073,
"login": "jonathanasdf",
"node_id": "MDQ6VXNlcjUxMTA3Mw==",
"organizations_url": "https://api.github.com/users/jonathanasdf/orgs",
"received_events_url": "https://api.github.com/users/jonathanasdf/received_events",
"repos_url": "https://api.github.com/users/jonathanasdf/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jonathanasdf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jonathanasdf/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jonathanasdf",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2026-02-04T05:20:43
| 2026-02-04T05:20:43
| null |
NONE
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7987.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7987",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7987.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7987"
}
|
I have gotten the following error
```
original_shard_lengths[original_shard_id] += 1
~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^
IndexError: list index out of range
```
Not sure what causes it, but this fixes the error. This may not be the proper fix for the root cause though.
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7987/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7987/timeline
| null | null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7986
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7986/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7986/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7986/events
|
https://github.com/huggingface/datasets/issues/7986
| 3,892,776,651
|
I_kwDODunzps7oBw7L
| 7,986
|
`Dataset.map()` causes cache miss/fingerprint change when closure captures self containing non-deterministic state.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/60375730?v=4",
"events_url": "https://api.github.com/users/Cloud0310/events{/privacy}",
"followers_url": "https://api.github.com/users/Cloud0310/followers",
"following_url": "https://api.github.com/users/Cloud0310/following{/other_user}",
"gists_url": "https://api.github.com/users/Cloud0310/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Cloud0310",
"id": 60375730,
"login": "Cloud0310",
"node_id": "MDQ6VXNlcjYwMzc1NzMw",
"organizations_url": "https://api.github.com/users/Cloud0310/orgs",
"received_events_url": "https://api.github.com/users/Cloud0310/received_events",
"repos_url": "https://api.github.com/users/Cloud0310/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Cloud0310/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cloud0310/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Cloud0310",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"I suggest metion this in docs specifically for attention with use, tell users explicitly to pass arguments with `fn_kwargs` param or using `functools.partial` to create a pure funcion."
] | 2026-02-03T19:16:49
| 2026-02-06T08:38:13
| null |
NONE
| null | null | null | null |
### Describe the bug
When using `.map()` with a function defined inside a method of a **class that holds any non-deterministic state** (i.e. a closure), if that function captures `self` to access a configuration variable (e.g. `self.foo`), the fingerprint mechanism serializes the entire class instance state.
If the class instance contains any non-deterministic state (such as random seeds, loggers, or distinct object IDs; in my case, PyTorch Lightning's `LightningDataModule`), the fingerprint changes on every run, rendering the cache useless.
While this may be intended behavior for `dill`, it is a significant "gotcha" for users migrating code into classes, as unrelated state changes cause massive re-processing overhead.
Real world "cache explosion" screenshot caused by the fingerprint mismatch:
<img width="942" height="382" alt="Image" src="https://github.com/user-attachments/assets/2fb0acba-ac07-4f00-bf30-c1ac932c9072" />
### Steps to reproduce the bug
Minimal reproduction code block:
```python3
import datasets
import uuid
# Prevent logging spam
datasets.logging.set_verbosity_error()
class ReproduceIssue:
def __init__(self):
# This is the variable we actually care about in the map function
self.foo = 32
# This simulates "dirty" internal state often found in framework classes
# (e.g., unique IDs, pointers to loggers, thread locks, or random seeds)
self.hidden_state = uuid.uuid4()
self.dataset = datasets.Dataset.from_dict({"strokes": [1, 2, 3]})
def setup(self):
# Closure captures 'self' to access 'self.foo'
def preprocess(batch):
# Accessing self binds the function to the specific instance state
_ = self.foo
return {"foo": batch["bar"]}
return self.dataset.map(preprocess, batched=True)
print("--- Run 1 ---")
inst1 = ReproduceIssue()
ds1 = inst1.setup()
print(f"Fingerprint 1: {ds1._fingerprint}")
print("\n--- Run 2 (New Instance) ---")
inst2 = ReproduceIssue()
ds2 = inst2.setup()
print(f"Fingerprint 2: {ds2._fingerprint}")
if ds1._fingerprint != ds2._fingerprint:
print("\n❌ ISSUE REPRODUCED: Fingerprints differ (Cache Miss).")
else:
print("\n✅ Fingerprints match.")
```
Result:
```
--- Run 1 ---
Mapping: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 2025.26 examples/s]
Fingerprint 1: 1ce6104f9e97912a
--- Run 2 (New Instance) ---
Mapping: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 2300.77 examples/s]
Fingerprint 2: c0fc011ff86ea571
--- Result ---
❌ CACHE MISS: Fingerprints are different!
```
### Expected behavior
The fingerprint should ideally depend **only on the bytecode of the function and the values of the variables actually accessed** (`self.foo`), rather than on the state of the whole `self` object.
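In the meantime, a workaround sketch (illustrative; `fn_kwargs` is an existing `map()` parameter) is to use a module-level function and pass the configuration explicitly, so the hash covers only the function and its explicit arguments:
```python
# Workaround sketch: no closure over `self`, configuration goes through fn_kwargs.
import datasets

def preprocess(batch, foo):
    _ = foo  # configuration passed explicitly instead of via `self`
    return {"foo": batch["strokes"]}

ds = datasets.Dataset.from_dict({"strokes": [1, 2, 3]})
mapped = ds.map(preprocess, batched=True, fn_kwargs={"foo": 32})
print(mapped._fingerprint)  # stable across fresh runs for identical inputs
```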
### Environment info
datasets version: 4.5.0, platform: any, python version: 3.13.
This was encountered while subclassing PyTorch Lightning's `LightningDataModule`; these objects inherently **contain internal state that differs per instance**.
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7986/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7986/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7985
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7985/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7985/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7985/events
|
https://github.com/huggingface/datasets/pull/7985
| 3,892,480,150
|
PR_kwDODunzps7BQaGn
| 7,985
|
Remove unused data files optims
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7985). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2026-02-03T17:58:30
| 2026-02-03T18:30:30
| 2026-02-03T18:30:28
|
MEMBER
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7985.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7985",
"merged_at": "2026-02-03 18:30:28",
"patch_url": "https://github.com/huggingface/datasets/pull/7985.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7985"
}
|
This fixes module inference when there are many metadata files.
E.g. the Lance dataset at https://huggingface.co/datasets/davanstrien/encyclopaedia-britannica-lance has > 200 metadata files.
These optimizations are not used anymore; they come from a time when we were dealing with slow data file iterators instead of lists.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7985/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7985/timeline
| null | null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7984
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7984/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7984/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7984/events
|
https://github.com/huggingface/datasets/issues/7984
| 3,891,431,105
|
I_kwDODunzps7n8obB
| 7,984
|
Data
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/228845628?v=4",
"events_url": "https://api.github.com/users/iLenceJhay/events{/privacy}",
"followers_url": "https://api.github.com/users/iLenceJhay/followers",
"following_url": "https://api.github.com/users/iLenceJhay/following{/other_user}",
"gists_url": "https://api.github.com/users/iLenceJhay/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/iLenceJhay",
"id": 228845628,
"login": "iLenceJhay",
"node_id": "U_kgDODaPoPA",
"organizations_url": "https://api.github.com/users/iLenceJhay/orgs",
"received_events_url": "https://api.github.com/users/iLenceJhay/received_events",
"repos_url": "https://api.github.com/users/iLenceJhay/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/iLenceJhay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iLenceJhay/subscriptions",
"type": "User",
"url": "https://api.github.com/users/iLenceJhay",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2026-02-03T14:01:48
| 2026-02-03T14:01:48
| null |
NONE
| null | null | null | null | null | null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7984/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7984/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7983
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7983/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7983/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7983/events
|
https://github.com/huggingface/datasets/pull/7983
| 3,888,225,779
|
PR_kwDODunzps7BCJgV
| 7,983
|
Add Zarr streaming support (POC)
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/163377666?v=4",
"events_url": "https://api.github.com/users/KOKOSde/events{/privacy}",
"followers_url": "https://api.github.com/users/KOKOSde/followers",
"following_url": "https://api.github.com/users/KOKOSde/following{/other_user}",
"gists_url": "https://api.github.com/users/KOKOSde/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/KOKOSde",
"id": 163377666,
"login": "KOKOSde",
"node_id": "U_kgDOCbzyAg",
"organizations_url": "https://api.github.com/users/KOKOSde/orgs",
"received_events_url": "https://api.github.com/users/KOKOSde/received_events",
"repos_url": "https://api.github.com/users/KOKOSde/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/KOKOSde/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KOKOSde/subscriptions",
"type": "User",
"url": "https://api.github.com/users/KOKOSde",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"Hi! It looks like the GitHub Actions check suites for this PR are in `action_required` (no workflows actually ran). This is usually due to fork workflow approval.\n\nCould a maintainer please approve/run the workflows so CI can execute? Happy to address anything CI flags once it runs.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7983). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"> Great PR ! It looks mostly good to me, I added a fes suggestions.\r\n> \r\n> Btw the current implementation for streaming returns a StreamingIterableDataset with `num_shards=1`, which corresponds to 1 metadata file.\r\n> \r\n> For large datasets it's maybe more practical to have a more finegrained sharding, e.g. at data file level. Wdyt ?\r\n\r\nThanks for the review. I agree on sharding: num_shards currently tracks the number of input Zarr stores, so it is often 1, which is not ideal for large datasets.\r\n\r\nI can implement finer-grained sharding using row-range shards aligned to axis-0 chunk boundaries (instead of one shard per metadata file), with a config knob like rows_per_shard / target_num_shards.\r\n\r\nI can add this in this PR if you want it before merge, or do it as a focused follow-up PR right after."
] | 2026-02-03T00:06:46
| 2026-02-17T04:47:56
| null |
NONE
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7983.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7983",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7983.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7983"
}
|
Add initial Zarr streaming support (POC).
This introduces a `zarr` packaged module and docs/tests to validate basic loading.
Note: I pushed a follow-up commit to fix an accidental duplication in `benchmarks/benchmark_zarr_streaming.py` (file now contains a single benchmark script).
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7983/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7983/timeline
| null | null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7982
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7982/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7982/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7982/events
|
https://github.com/huggingface/datasets/pull/7982
| 3,888,131,856
|
PR_kwDODunzps7BB1zZ
| 7,982
|
Fix unstable tokenizer fingerprinting (enables map cache reuse)
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/163377666?v=4",
"events_url": "https://api.github.com/users/KOKOSde/events{/privacy}",
"followers_url": "https://api.github.com/users/KOKOSde/followers",
"following_url": "https://api.github.com/users/KOKOSde/following{/other_user}",
"gists_url": "https://api.github.com/users/KOKOSde/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/KOKOSde",
"id": 163377666,
"login": "KOKOSde",
"node_id": "U_kgDOCbzyAg",
"organizations_url": "https://api.github.com/users/KOKOSde/orgs",
"received_events_url": "https://api.github.com/users/KOKOSde/received_events",
"repos_url": "https://api.github.com/users/KOKOSde/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/KOKOSde/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KOKOSde/subscriptions",
"type": "User",
"url": "https://api.github.com/users/KOKOSde",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"Hi! It looks like the GitHub Actions check suites for this PR are in `action_required` (no workflows actually ran). This is usually due to fork workflow approval.\n\nCould a maintainer please approve/run the workflows so CI can execute? Happy to address anything CI flags once it runs."
] | 2026-02-02T23:34:51
| 2026-02-09T22:14:43
| null |
NONE
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7982.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7982",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7982.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7982"
}
|
Fix unstable dataset fingerprinting when hashing `PreTrainedTokenizerFast`.
Some tokenizers backed by `tokenizers.Tokenizer` mutate runtime settings (padding/truncation) when called, which can change the serialized state and make dataset fingerprints unstable. That prevents `.map(load_from_cache_file=True)` from reusing cache files.
Fix: when hashing, temporarily disable backend padding/truncation so runtime settings don’t affect the fingerprint, then restore the original settings.
Includes a regression test showing `Hasher.hash(tokenizer)` stays stable after calling the tokenizer.
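A minimal sketch of the idea (an assumption about how it could be done, not the PR's actual patch; it assumes the backend `padding`/`truncation` dicts round-trip through `enable_padding`/`enable_truncation`):
```python
from contextlib import contextmanager

@contextmanager
def stable_backend_state(tokenizer):
    """Temporarily clear backend padding/truncation so hashing sees a fixed state."""
    backend = tokenizer.backend_tokenizer  # the underlying tokenizers.Tokenizer
    padding, truncation = backend.padding, backend.truncation
    backend.no_padding()
    backend.no_truncation()
    try:
        yield tokenizer
    finally:
        if padding is not None:
            backend.enable_padding(**padding)
        if truncation is not None:
            backend.enable_truncation(**truncation)

# Usage sketch (requires transformers; commented out to avoid a download at import time):
# from transformers import AutoTokenizer
# from datasets.fingerprint import Hasher
# tok = AutoTokenizer.from_pretrained("bert-base-uncased")
# tok("hello", padding="max_length", max_length=8, truncation=True)  # mutates backend state
# with stable_backend_state(tok):
#     print(Hasher.hash(tok))  # same value before and after calling the tokenizer
```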
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7982/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7982/timeline
| null | null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7981
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7981/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7981/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7981/events
|
https://github.com/huggingface/datasets/pull/7981
| 3,887,077,016
|
PR_kwDODunzps7A-V7J
| 7,981
|
Support pandas 3
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7981). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2026-02-02T17:16:37
| 2026-02-02T17:34:25
| 2026-02-02T17:34:22
|
MEMBER
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7981.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7981",
"merged_at": "2026-02-02 17:34:22",
"patch_url": "https://github.com/huggingface/datasets/pull/7981.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7981"
}
| null |
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7981/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7981/timeline
| null | null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7980
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7980/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7980/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7980/events
|
https://github.com/huggingface/datasets/pull/7980
| 3,886,785,042
|
PR_kwDODunzps7A9Wrj
| 7,980
|
Drop python 3.9
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7980). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2026-02-02T16:13:04
| 2026-02-02T16:26:31
| 2026-02-02T16:26:29
|
MEMBER
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7980.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7980",
"merged_at": "2026-02-02 16:26:29",
"patch_url": "https://github.com/huggingface/datasets/pull/7980.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7980"
}
|
Python 3.9 reached end of life a few months ago, and transformers doesn't support it anymore.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7980/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7980/timeline
| null | null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7979
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7979/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7979/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7979/events
|
https://github.com/huggingface/datasets/pull/7979
| 3,886,772,007
|
PR_kwDODunzps7A9T06
| 7,979
|
Use temp files in push_to_hub to save memory
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7979). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2026-02-02T16:10:38
| 2026-02-02T16:26:16
| 2026-02-02T16:26:14
|
MEMBER
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7979.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7979",
"merged_at": "2026-02-02 16:26:14",
"patch_url": "https://github.com/huggingface/datasets/pull/7979.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7979"
}
|
Write Parquet data to temporary files on disk prior to upload to save memory.
This is enabled for datasets loaded with either streaming=True or streaming=False.
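As an illustrative sketch of the pattern (not the PR's code), each shard can be written to a temporary Parquet file and uploaded from disk instead of keeping the serialized bytes in memory:
```python
import tempfile
import pyarrow as pa
import pyarrow.parquet as pq

def write_shard_to_tempfile(table: pa.Table) -> str:
    """Write one Parquet shard to a temp file and return its path for upload."""
    tmp = tempfile.NamedTemporaryFile(suffix=".parquet", delete=False)
    tmp.close()  # close the handle so the writer can reopen the path on all platforms
    pq.write_table(table, tmp.name)
    return tmp.name  # upload this file, then delete it

shard_path = write_shard_to_tempfile(pa.table({"x": [1, 2, 3]}))
print(shard_path)
```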
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7979/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7979/timeline
| null | null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7978
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7978/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7978/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7978/events
|
https://github.com/huggingface/datasets/pull/7978
| 3,879,787,436
|
PR_kwDODunzps7AmQfP
| 7,978
|
Fix 4910 kwargs
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/218264809?v=4",
"events_url": "https://api.github.com/users/vedanta777/events{/privacy}",
"followers_url": "https://api.github.com/users/vedanta777/followers",
"following_url": "https://api.github.com/users/vedanta777/following{/other_user}",
"gists_url": "https://api.github.com/users/vedanta777/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/vedanta777",
"id": 218264809,
"login": "vedanta777",
"node_id": "U_kgDODQJ06Q",
"organizations_url": "https://api.github.com/users/vedanta777/orgs",
"received_events_url": "https://api.github.com/users/vedanta777/received_events",
"repos_url": "https://api.github.com/users/vedanta777/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/vedanta777/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vedanta777/subscriptions",
"type": "User",
"url": "https://api.github.com/users/vedanta777",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2026-01-31T18:36:32
| 2026-02-02T13:08:33
| null |
NONE
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7978.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7978",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7978.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7978"
}
|
Fix #4910: merge duplicate kwargs in `load_dataset_builder()`.
Problem: `load_dataset("dataset", base_path="./data")` raises `TypeError: got multiple values for keyword argument 'base_path'`.
Fix: merge with `{**builder_kwargs, **config_kwargs}` so user-provided kwargs override the dataset defaults.
Repro (Python):
Before: `load_dataset("rotten_tomatoes", base_path="./sample_data")` raises a TypeError.
After: the same call works.
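A hedged sketch of the merging described above; the function name is illustrative and this is not the actual `load.py` code:
```python
# Illustrative only: merge module-provided builder kwargs with user kwargs so the
# user's values win and no keyword is passed twice to load_dataset_builder().
def merge_kwargs(builder_kwargs: dict, config_kwargs: dict) -> dict:
    return {**builder_kwargs, **config_kwargs}


# e.g. merge_kwargs({"base_path": "<module default>"}, {"base_path": "./data"})
# -> {"base_path": "./data"}
```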
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7978/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7978/timeline
| null | null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7977
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7977/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7977/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7977/events
|
https://github.com/huggingface/datasets/pull/7977
| 3,879,142,697
|
PR_kwDODunzps7AkMoM
| 7,977
|
Updated get_dataset_config_names returning default in offline mode
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/178829649?v=4",
"events_url": "https://api.github.com/users/abigailtech/events{/privacy}",
"followers_url": "https://api.github.com/users/abigailtech/followers",
"following_url": "https://api.github.com/users/abigailtech/following{/other_user}",
"gists_url": "https://api.github.com/users/abigailtech/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/abigailtech",
"id": 178829649,
"login": "abigailtech",
"node_id": "U_kgDOCqi5UQ",
"organizations_url": "https://api.github.com/users/abigailtech/orgs",
"received_events_url": "https://api.github.com/users/abigailtech/received_events",
"repos_url": "https://api.github.com/users/abigailtech/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/abigailtech/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abigailtech/subscriptions",
"type": "User",
"url": "https://api.github.com/users/abigailtech",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2026-01-31T12:56:21
| 2026-02-01T07:25:33
| 2026-02-01T07:25:33
|
NONE
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7977.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7977",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7977.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7977"
}
|
When a dataset is cached and accessed in offline mode, get_dataset_config_names was returning default instead of the actual cached config names. This happened because CachedDatasetModuleFactory.get_module returned a DatasetModule without builder_configs_parameters, causing the fallback to default in get_dataset_config_names.
The fix reads config_name from each dataset_info file in the cache directory and includes them as builder_configs_parameters in the returned DatasetModule. Invalid or missing dataset_info.json files are handled.
**Testing:**
1. Download a dataset in online mode so it gets cached
2. Switch to offline mode and call get_dataset_config_names
3. Verify it returns the cached config names instead of ['default']
**Example:**
- HF_DATASETS_OFFLINE=0 HF_HOME="/tmp/hftemp" python -c "import datasets; datasets.load_dataset('cais/mmlu', 'all')"
- HF_DATASETS_OFFLINE=1 HF_HOME="/tmp/hftemp" python -c "import datasets; print(datasets.get_dataset_config_names('cais/mmlu'))"
- -> Expected output: ['all']
Fixes https://github.com/huggingface/datasets/issues/7947
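A hedged sketch of the approach described above (reading config names from cached `dataset_info.json` files); paths and the helper name are assumptions, not the actual `CachedDatasetModuleFactory` code:
```python
# Illustrative sketch: collect config names from cached dataset_info.json files,
# skipping invalid or missing ones, and fall back to "default" if none are found.
import json
import os


def cached_config_names(cache_dir: str) -> list[str]:
    names = set()
    for root, _, files in os.walk(cache_dir):
        if "dataset_info.json" not in files:
            continue
        try:
            with open(os.path.join(root, "dataset_info.json"), encoding="utf-8") as f:
                info = json.load(f)
        except (OSError, json.JSONDecodeError):
            continue  # invalid or unreadable file: ignore it
        if info.get("config_name"):
            names.add(info["config_name"])
    return sorted(names) or ["default"]
```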
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/178829649?v=4",
"events_url": "https://api.github.com/users/abigailtech/events{/privacy}",
"followers_url": "https://api.github.com/users/abigailtech/followers",
"following_url": "https://api.github.com/users/abigailtech/following{/other_user}",
"gists_url": "https://api.github.com/users/abigailtech/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/abigailtech",
"id": 178829649,
"login": "abigailtech",
"node_id": "U_kgDOCqi5UQ",
"organizations_url": "https://api.github.com/users/abigailtech/orgs",
"received_events_url": "https://api.github.com/users/abigailtech/received_events",
"repos_url": "https://api.github.com/users/abigailtech/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/abigailtech/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abigailtech/subscriptions",
"type": "User",
"url": "https://api.github.com/users/abigailtech",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7977/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7977/timeline
| null | null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7976
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7976/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7976/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7976/events
|
https://github.com/huggingface/datasets/pull/7976
| 3,879,038,987
|
PR_kwDODunzps7Aj2hP
| 7,976
|
Write image/audio/video blobs as is in parquet (PLAIN)
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7976). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2026-01-31T11:49:39
| 2026-02-03T20:03:48
| 2026-01-31T11:50:33
|
MEMBER
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7976.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7976",
"merged_at": "2026-01-31 11:50:32",
"patch_url": "https://github.com/huggingface/datasets/pull/7976.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7976"
}
|
following #7971
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7976/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7976/timeline
| null | null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7975
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7975/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7975/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7975/events
|
https://github.com/huggingface/datasets/pull/7975
| 3,878,625,407
|
PR_kwDODunzps7AikAO
| 7,975
|
Docs: add Dataset.from_dict example
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/163377666?v=4",
"events_url": "https://api.github.com/users/KOKOSde/events{/privacy}",
"followers_url": "https://api.github.com/users/KOKOSde/followers",
"following_url": "https://api.github.com/users/KOKOSde/following{/other_user}",
"gists_url": "https://api.github.com/users/KOKOSde/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/KOKOSde",
"id": 163377666,
"login": "KOKOSde",
"node_id": "U_kgDOCbzyAg",
"organizations_url": "https://api.github.com/users/KOKOSde/orgs",
"received_events_url": "https://api.github.com/users/KOKOSde/received_events",
"repos_url": "https://api.github.com/users/KOKOSde/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/KOKOSde/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KOKOSde/subscriptions",
"type": "User",
"url": "https://api.github.com/users/KOKOSde",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2026-01-31T07:00:43
| 2026-02-05T05:50:11
| null |
NONE
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7975.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7975",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7975.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7975"
}
|
Docs: add a minimal `Dataset.from_dict` example.
This helps new users discover the most direct way to build a small dataset from in-memory Python data.
Docs-only change.
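An example of the kind of snippet this docs PR adds (the column names and values here are illustrative):
```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["hello", "world"], "label": [0, 1]})
print(ds[0])  # {'text': 'hello', 'label': 0}
```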
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7975/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7975/timeline
| null | null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7974
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7974/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7974/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7974/events
|
https://github.com/huggingface/datasets/pull/7974
| 3,878,625,349
|
PR_kwDODunzps7Aij_g
| 7,974
|
Fix duplicate kwargs in load_dataset_builder
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/163377666?v=4",
"events_url": "https://api.github.com/users/KOKOSde/events{/privacy}",
"followers_url": "https://api.github.com/users/KOKOSde/followers",
"following_url": "https://api.github.com/users/KOKOSde/following{/other_user}",
"gists_url": "https://api.github.com/users/KOKOSde/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/KOKOSde",
"id": 163377666,
"login": "KOKOSde",
"node_id": "U_kgDOCbzyAg",
"organizations_url": "https://api.github.com/users/KOKOSde/orgs",
"received_events_url": "https://api.github.com/users/KOKOSde/received_events",
"repos_url": "https://api.github.com/users/KOKOSde/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/KOKOSde/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KOKOSde/subscriptions",
"type": "User",
"url": "https://api.github.com/users/KOKOSde",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2026-01-31T07:00:39
| 2026-02-05T05:49:31
| null |
NONE
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7974.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7974",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7974.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7974"
}
|
Avoid passing duplicate keyword arguments to `load_dataset_builder`.
Some module factories provide values in `builder_kwargs` (e.g. `base_path`), and users can also pass the same keys via `config_kwargs`, which raises:
`TypeError: ... got multiple values for keyword argument ...`.
Fix: if `config_kwargs` is provided, drop overlapping keys from `builder_kwargs` (keep the user-provided values).
Includes a regression test.
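A hedged sketch of the fix described above (dropping overlapping keys from `builder_kwargs`); the helper name is illustrative, not the actual patch:
```python
# Illustrative only: keep the user's config_kwargs and drop the overlapping keys
# from the module-provided builder_kwargs before calling load_dataset_builder().
def drop_overlapping_keys(builder_kwargs: dict, config_kwargs: dict) -> dict:
    return {k: v for k, v in builder_kwargs.items() if k not in config_kwargs}
```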
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7974/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7974/timeline
| null | null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7973
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7973/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7973/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7973/events
|
https://github.com/huggingface/datasets/pull/7973
| 3,878,514,101
|
PR_kwDODunzps7AiMd8
| 7,973
|
Fix resolve_pattern for local symlinked files
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/163377666?v=4",
"events_url": "https://api.github.com/users/KOKOSde/events{/privacy}",
"followers_url": "https://api.github.com/users/KOKOSde/followers",
"following_url": "https://api.github.com/users/KOKOSde/following{/other_user}",
"gists_url": "https://api.github.com/users/KOKOSde/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/KOKOSde",
"id": 163377666,
"login": "KOKOSde",
"node_id": "U_kgDOCbzyAg",
"organizations_url": "https://api.github.com/users/KOKOSde/orgs",
"received_events_url": "https://api.github.com/users/KOKOSde/received_events",
"repos_url": "https://api.github.com/users/KOKOSde/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/KOKOSde/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KOKOSde/subscriptions",
"type": "User",
"url": "https://api.github.com/users/KOKOSde",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"Hi! It looks like the GitHub Actions check suites for this PR are in `action_required` (no workflows actually ran). This is usually due to fork workflow approval.\n\nCould a maintainer please approve/run the workflows so CI can execute? Happy to address anything CI flags once it runs."
] | 2026-01-31T06:04:51
| 2026-02-09T22:14:42
| null |
NONE
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7973.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7973",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7973.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7973"
}
|
Fix `resolve_pattern` for *local symlinked files*.
Problem: on the local `file://` filesystem, `fsspec` can report symlinks as `type=="other"` and omit the `islink` flag, so symlinked files are skipped.
Fix: when `protocol=="file"`, treat `os.path.islink(filepath)` as a link candidate and include it if it resolves to a regular file.
Includes a regression test in `tests/test_data_files.py`.
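A hedged sketch of the symlink handling described above; names and the exact integration point are illustrative, not the actual `resolve_pattern` patch:
```python
# Illustrative only: on the local "file" protocol, treat an os-level symlink that
# resolves to a regular file as a valid match even if fsspec reports type "other".
import os


def is_valid_local_match(filepath: str, protocol: str, info: dict) -> bool:
    if info.get("type") == "file":
        return True
    if protocol == "file" and os.path.islink(filepath):
        return os.path.isfile(os.path.realpath(filepath))
    return False
```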
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7973/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7973/timeline
| null | null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7972
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7972/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7972/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7972/events
|
https://github.com/huggingface/datasets/pull/7972
| 3,874,083,781
|
PR_kwDODunzps7AT07I
| 7,972
|
feat: implement iter_arrow for skip, take and step iterables
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/192764477?v=4",
"events_url": "https://api.github.com/users/Edge-Explorer/events{/privacy}",
"followers_url": "https://api.github.com/users/Edge-Explorer/followers",
"following_url": "https://api.github.com/users/Edge-Explorer/following{/other_user}",
"gists_url": "https://api.github.com/users/Edge-Explorer/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Edge-Explorer",
"id": 192764477,
"login": "Edge-Explorer",
"node_id": "U_kgDOC31aPQ",
"organizations_url": "https://api.github.com/users/Edge-Explorer/orgs",
"received_events_url": "https://api.github.com/users/Edge-Explorer/received_events",
"repos_url": "https://api.github.com/users/Edge-Explorer/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Edge-Explorer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Edge-Explorer/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Edge-Explorer",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2026-01-30T05:47:13
| 2026-02-17T06:56:40
| null |
CONTRIBUTOR
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7972.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7972",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7972.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7972"
}
|
This commit optimizes streaming operations by implementing `_iter_arrow` for `SkipExamplesIterable`, `TakeExamplesIterable`, and `StepExamplesIterable`.
### Key Changes:
- **Fast Batch Processing**: Enabled batch-level slicing for `.skip(n)` and `.take(n)` on streaming datasets, bypassing slow row-by-row iteration (see the sketch after this description).
- **Optimized Sharding**: Updated `StepExamplesIterable` (used in distributed training) to use Arrow's `.take()` to extract multiple records from a batch simultaneously.
- **State Preservation**: Reinforced `_init_state_dict` and `load_state_dict` to support flawless checkpointing and resumption while using Arrow iteration.
### Performance Impact:
Users will experience significant performance gains when skipping or taking examples in streaming mode. By staying in the "Arrow path" and avoiding Python dictionary conversions, data loading overhead is drastically reduced, especially for large-scale training jobs.
### Testing:
Integrated 6 new unit tests into `tests/test_iterable_dataset.py` to verify:
- Functional correctness for `skip`, `take`, and `step` using Arrow iteration.
- Reliable state checkpointing and resumption after partial iteration.
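A minimal sketch of the batch-level skip/take idea (not the actual `SkipExamplesIterable`/`TakeExamplesIterable` code; names and signatures are illustrative):
```python
# Illustrative only: skip/take over a stream of Arrow tables by slicing whole
# batches instead of iterating row by row.
from collections.abc import Iterable, Iterator

import pyarrow as pa


def iter_skip(tables: Iterable[pa.Table], n: int) -> Iterator[pa.Table]:
    remaining = n
    for table in tables:
        if remaining >= table.num_rows:
            remaining -= table.num_rows  # drop the whole batch at once
            continue
        yield table.slice(remaining) if remaining else table
        remaining = 0


def iter_take(tables: Iterable[pa.Table], n: int) -> Iterator[pa.Table]:
    remaining = n
    for table in tables:
        if remaining <= 0:
            break
        yield table.slice(0, min(remaining, table.num_rows))
        remaining -= table.num_rows
```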
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7972/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7972/timeline
| null | null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7971
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7971/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7971/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7971/events
|
https://github.com/huggingface/datasets/pull/7971
| 3,871,984,311
|
PR_kwDODunzps7AMzLl
| 7,971
|
push_to_hub() for videos
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7971). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2026-01-29T18:16:58
| 2026-01-31T11:50:25
| 2026-01-29T18:56:04
|
MEMBER
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7971.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7971",
"merged_at": "2026-01-29 18:56:04",
"patch_url": "https://github.com/huggingface/datasets/pull/7971.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7971"
}
|
possible now that row group sizes are auto-determined based on the content size after https://github.com/huggingface/datasets/pull/7589
Videos are uploaded as PLAIN in Parquet to make sure they can be seeked remotely and with random access to frames in https://github.com/huggingface/datasets/pull/7976
In the future it could be cool to have the same behavior as when videos are separate files, i.e. lazily load them instead of downloading them completely in streaming mode.
Right now there is this discrepancy:
- `load_dataset("username/my-folder-of-videos", streaming=True)` -> videos are lazy loaded one by one when iterating, and only actually downloaded when accessing frames in `torchcodec`
- `load_dataset("username/my-video-dataset-in-parquet", streaming=True)` -> videos are downloaded one by one when iterating, even if no frame is accessed in `torchcodec`
close https://github.com/huggingface/datasets/issues/7493
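A hypothetical usage example of the feature described above (the repo id and file path are placeholders, and decoding requires torchcodec):
```python
from datasets import Dataset, Video

ds = Dataset.from_dict({"video": ["path/to/clip.mp4"]}).cast_column("video", Video())
# ds.push_to_hub("username/my-video-dataset")  # video bytes are embedded in the parquet shards
```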
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7971/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7971/timeline
| null | null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7970
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7970/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7970/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7970/events
|
https://github.com/huggingface/datasets/issues/7970
| 3,869,700,866
|
I_kwDODunzps7mpvMC
| 7,970
|
cast_column(..., Audio) fails with load_dataset("csv",)
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/148754?v=4",
"events_url": "https://api.github.com/users/jstangroome/events{/privacy}",
"followers_url": "https://api.github.com/users/jstangroome/followers",
"following_url": "https://api.github.com/users/jstangroome/following{/other_user}",
"gists_url": "https://api.github.com/users/jstangroome/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jstangroome",
"id": 148754,
"login": "jstangroome",
"node_id": "MDQ6VXNlcjE0ODc1NA==",
"organizations_url": "https://api.github.com/users/jstangroome/orgs",
"received_events_url": "https://api.github.com/users/jstangroome/received_events",
"repos_url": "https://api.github.com/users/jstangroome/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jstangroome/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jstangroome/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jstangroome",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"The following code *does* work:\n```py\nfrom datasets import load_dataset,Audio,Features\n\ndataset = load_dataset(\"csv\",data_files=\"audio.csv\",features=Features({\"audio\": Audio()}))\nprint(dataset[\"train\"][0][\"audio\"])\n```",
"Thanks for reporing ! Are you using pandas v3 by any chance ? The CSV loader uses pandas and this release is brand new and might have caused a breaking change",
"pandas 3.0.0 was present but I've also reproduced the issue with pandas 2.3.3."
] | 2026-01-29T09:33:35
| 2026-01-29T22:24:14
| null |
NONE
| null | null | null | null |
### Describe the bug
Attempt to load a dataset from a csv with a single `audio` column with a single row with a path to an audio file fails when casting the column to Audio, but the exact same dataset created from a dictionary succeeds.
### Steps to reproduce the bug
1. Have any valid audio file `audio.wav`
2. Have a csv file named `audio.csv` with the following content:
```csv
"audio"
"audio.wav"
```
3. Attempt to execute the following python code:
```py
from datasets import load_dataset,Audio,Dataset
dataset = Dataset.from_dict({"audio": ["audio.wav"]})
dataset = dataset.cast_column("audio", Audio())
print(dataset[0]["audio"])
# ^^ succeeds with output: <datasets.features._torchcodec.AudioDecoder object at 0x7a32b341a3c0>
dataset = load_dataset("csv", data_files="audio.csv")
dataset = dataset.cast_column("audio", Audio())
# ^^ errors and terminates
print(dataset[0]["audio"])
```
The error is:
```pytb
Traceback (most recent call last):
File "~/datasets-bug/explore.py", line 8, in <module>
dataset = dataset.cast_column("audio", Audio(sampling_rate=24000))
File "~/datasets-bug/.venv/lib/python3.14/site-packages/datasets/dataset_dict.py", line 337, in cast_column
return DatasetDict({k: dataset.cast_column(column=column, feature=feature) for k, dataset in self.items()})
~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~/datasets-bug/.venv/lib/python3.14/site-packages/datasets/fingerprint.py", line 468, in wrapper
out = func(dataset, *args, **kwargs)
File "~/datasets-bug/.venv/lib/python3.14/site-packages/datasets/arrow_dataset.py", line 2201, in cast_column
dataset._data = dataset._data.cast(dataset.features.arrow_schema)
~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~/datasets-bug/.venv/lib/python3.14/site-packages/datasets/table.py", line 1124, in cast
return MemoryMappedTable(table_cast(self.table, *args, **kwargs), self.path, replays)
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~/datasets-bug/.venv/lib/python3.14/site-packages/datasets/table.py", line 2272, in table_cast
return cast_table_to_schema(table, schema)
File "~/datasets-bug/.venv/lib/python3.14/site-packages/datasets/table.py", line 2224, in cast_table_to_schema
cast_array_to_feature(
~~~~~~~~~~~~~~~~~~~~~^
table[name] if name in table_column_names else pa.array([None] * len(table), type=schema.field(name).type),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
feature,
^^^^^^^^
)
^
File "~/datasets-bug/.venv/lib/python3.14/site-packages/datasets/table.py", line 1795, in wrapper
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
~~~~^^^^^^^^^^^^^^^^^^^^^^^^
File "~/datasets-bug/.venv/lib/python3.14/site-packages/datasets/table.py", line 1995, in cast_array_to_feature
return feature.cast_storage(array)
~~~~~~~~~~~~~~~~~~~~^^^^^^^
File "~/datasets-bug/.venv/lib/python3.14/site-packages/datasets/features/audio.py", line 272, in cast_storage
return array_cast(storage, self.pa_type)
File "~/datasets-bug/.venv/lib/python3.14/site-packages/datasets/table.py", line 1797, in wrapper
return func(array, *args, **kwargs)
File "~/datasets-bug/.venv/lib/python3.14/site-packages/datasets/table.py", line 1949, in array_cast
return array.cast(pa_type)
~~~~~~~~~~^^^^^^^^^
File "pyarrow/array.pxi", line 1147, in pyarrow.lib.Array.cast
File "~/datasets-bug/.venv/lib/python3.14/site-packages/pyarrow/compute.py", line 412, in cast
return call_function("cast", [arr], options, memory_pool)
File "pyarrow/_compute.pyx", line 604, in pyarrow._compute.call_function
File "pyarrow/_compute.pyx", line 399, in pyarrow._compute.Function.call
File "pyarrow/error.pxi", line 155, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 92, in pyarrow.lib.check_status
pyarrow.lib.ArrowNotImplementedError: Unsupported cast from large_string to struct using function cast_struct
```
### Expected behavior
The audio column with file paths loaded from a csv can be converted to AudioDecoder objects the same as an identical dataset created from a dict.
### Environment info
datasets 4.3.0 and 4.5.0, Ubuntu 24.04 amd64, python 3.13.11 and 3.14.2
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7970/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7970/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7969
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7969/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7969/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7969/events
|
https://github.com/huggingface/datasets/pull/7969
| 3,865,100,307
|
PR_kwDODunzps6_1p9H
| 7,969
|
Count examples in lance
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7969). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2026-01-28T12:00:37
| 2026-01-28T13:00:26
| 2026-01-28T13:00:23
|
MEMBER
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7969.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7969",
"merged_at": "2026-01-28 13:00:23",
"patch_url": "https://github.com/huggingface/datasets/pull/7969.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7969"
}
|
```python
In [1]: from datasets import load_dataset_builder, StreamingDownloadManager
In [2]: b = load_dataset_builder("lance-format/openvid-lance")
Resolving data files: 100%|█| 240/240 [00:00<00:00, 42675.64it/s
In [3]: b.count_examples(StreamingDownloadManager())
Out[3]: {'train': 937957}
```
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7969/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7969/timeline
| null | null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7968
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7968/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7968/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7968/events
|
https://github.com/huggingface/datasets/issues/7968
| 3,864,988,355
|
I_kwDODunzps7mXwrD
| 7,968
|
Potential conflicting type checks and dead code in `/src/datasets/table.py`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/243496043?v=4",
"events_url": "https://api.github.com/users/rc4typecheck/events{/privacy}",
"followers_url": "https://api.github.com/users/rc4typecheck/followers",
"following_url": "https://api.github.com/users/rc4typecheck/following{/other_user}",
"gists_url": "https://api.github.com/users/rc4typecheck/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/rc4typecheck",
"id": 243496043,
"login": "rc4typecheck",
"node_id": "U_kgDODoN0aw",
"organizations_url": "https://api.github.com/users/rc4typecheck/orgs",
"received_events_url": "https://api.github.com/users/rc4typecheck/received_events",
"repos_url": "https://api.github.com/users/rc4typecheck/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/rc4typecheck/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rc4typecheck/subscriptions",
"type": "User",
"url": "https://api.github.com/users/rc4typecheck",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"ConcatenationTable is a subclass of datasets.table.Table but not pa.Table, so it should be fine"
] | 2026-01-28T11:34:53
| 2026-01-28T13:05:28
| null |
NONE
| null | null | null | null |
When statically analyzing and manually reviewing the code, I noticed a potential logic conflict in `/src/datasets/table.py` as follows:
```
def to_blocks(table: Union[pa.Table, Table]) -> list[list[TableBlock]]:
if isinstance(table, pa.Table):
return [[InMemoryTable(table)]]
elif isinstance(table, ConcatenationTable): # dead code
return copy.deepcopy(table.blocks)
else:
return [[table]]
```
Within the function, the condition `isinstance(table, ConcatenationTable)` at line 4 will never be True because the previous condition `isinstance(table, pa.Table)` at line 2 would have already caught all instances of `ConcatenationTable` (since `ConcatenationTable` is a subtype of `pa.Table`). This creates a logical conflict in the type checking flow.
Please verify whether this logic is intentional or an issue warranting a refactor or fix.
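A quick check consistent with the maintainer's reply above (`ConcatenationTable` subclasses `datasets.table.Table`, not `pa.Table`):
```python
import pyarrow as pa
from datasets.table import ConcatenationTable, Table

print(issubclass(ConcatenationTable, Table))     # True
print(issubclass(ConcatenationTable, pa.Table))  # False
```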
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7968/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7968/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7967
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7967/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7967/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7967/events
|
https://github.com/huggingface/datasets/pull/7967
| 3,863,579,646
|
PR_kwDODunzps6_wnm8
| 7,967
|
Issue 7756 Fix - multiprocessing hang issue with start method check
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/218264809?v=4",
"events_url": "https://api.github.com/users/vedanta777/events{/privacy}",
"followers_url": "https://api.github.com/users/vedanta777/followers",
"following_url": "https://api.github.com/users/vedanta777/following{/other_user}",
"gists_url": "https://api.github.com/users/vedanta777/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/vedanta777",
"id": 218264809,
"login": "vedanta777",
"node_id": "U_kgDODQJ06Q",
"organizations_url": "https://api.github.com/users/vedanta777/orgs",
"received_events_url": "https://api.github.com/users/vedanta777/received_events",
"repos_url": "https://api.github.com/users/vedanta777/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/vedanta777/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vedanta777/subscriptions",
"type": "User",
"url": "https://api.github.com/users/vedanta777",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2026-01-28T05:02:20
| 2026-01-31T18:26:03
| null |
NONE
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7967.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7967",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7967.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7967"
}
|
Added a fix to prevent multiprocessing hangs by checking the start method.
If a problematic multiprocessing start method is detected, the code falls back to a safe one.
https://github.com/huggingface/datasets/issues/7756
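A heavily hedged sketch of a start-method guard of the kind described above; the function name and the exact fallback policy are assumptions, not the actual patch:
```python
# Illustrative only: detect a potentially problematic start method and fall back
# to "spawn" before creating worker processes.
import multiprocessing


def safe_pool(num_proc: int):
    method = multiprocessing.get_start_method(allow_none=True)
    if method == "fork":
        # fork-based pools can hang when the parent process already holds threads/locks
        ctx = multiprocessing.get_context("spawn")
    else:
        ctx = multiprocessing.get_context(method)
    return ctx.Pool(num_proc)
```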
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7967/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7967/timeline
| null | null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7966
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7966/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7966/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7966/events
|
https://github.com/huggingface/datasets/pull/7966
| 3,861,774,379
|
PR_kwDODunzps6_qp_e
| 7,966
|
Infer types from lance blobs
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7966). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2026-01-27T18:00:25
| 2026-01-28T13:02:25
| 2026-01-28T13:02:23
|
MEMBER
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7966.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7966",
"merged_at": "2026-01-28 13:02:22",
"patch_url": "https://github.com/huggingface/datasets/pull/7966.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7966"
}
|
Ex: infer Video() type in https://huggingface.co/datasets/lance-format/openvid-lance and Image() type in https://huggingface.co/datasets/lance-format/laion-1m
```python
from datasets import load_dataset
ds = load_dataset("lance-format/laion-1m", streaming=True, split="train")
print(ds.features["image"])
# Image()
ds = load_dataset("lance-format/openvid-lance", streaming=True, split="train")
print(ds.features["video_blob"])
# Video()
```
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7966/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7966/timeline
| null | null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7965
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7965/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7965/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7965/events
|
https://github.com/huggingface/datasets/issues/7965
| 3,858,483,549
|
I_kwDODunzps7l-8ld
| 7,965
|
`huggingface_hub.errors.HfHubHTTPError: 404 Client Error: Not Found for url` when fetching a dataset with `datasets.load_dataset`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/17039389?v=4",
"events_url": "https://api.github.com/users/harupy/events{/privacy}",
"followers_url": "https://api.github.com/users/harupy/followers",
"following_url": "https://api.github.com/users/harupy/following{/other_user}",
"gists_url": "https://api.github.com/users/harupy/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/harupy",
"id": 17039389,
"login": "harupy",
"node_id": "MDQ6VXNlcjE3MDM5Mzg5",
"organizations_url": "https://api.github.com/users/harupy/orgs",
"received_events_url": "https://api.github.com/users/harupy/received_events",
"repos_url": "https://api.github.com/users/harupy/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/harupy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/harupy/subscriptions",
"type": "User",
"url": "https://api.github.com/users/harupy",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi ! Yes you should use `cornell-movie-review-data/rotten_tomatoes` instead of `rotten_tomatoes`, which is the legacy name. Those datasets have been moved under their actual owners accounts some time ago (but we were keeping the old names as aliases)\n\nSome other impacted names are:\n- `imdb` -> `stanfordnlp/imdb`\n- `wikitext` -> `Salesforce/wikitext`\n- `gsm8k` -> `openai/gsm8k`\n- `winogrande` -> `allenai/winogrande`\n\nWe're working on re-enabling them as aliases for backward compatibility. I'll post updates here, sorry for the inconvenience.\n\n**Using the actual name instead of the old legacy name is more future proof though**",
"Thanks for the heads up @lhoestq ! fyi, this change is likely breaking a lot of repos that have legacy names hardcoded ([example](https://github.com/allenai/olmes/pull/40)) Would be helpful to many to share this update in a more visible way if it is likely to persist for a while.",
"[internal tracking link](https://github.com/huggingface-internal/moon-landing/pull/16539)",
"@lhoestq Thanks for clarifying!",
"The aliases are re-enabled :)",
"Thanks!",
"Can I close this issue?",
"Yep :)"
] | 2026-01-27T02:20:31
| 2026-01-28T15:14:50
| 2026-01-28T15:14:50
|
NONE
| null | null | null | null |
Not a bug but a question. We started getting the following error:
https://github.com/mlflow/mlflow/actions/runs/21368603305/job/61506951617
```
ests/data/test_huggingface_dataset_and_source.py::test_from_huggingface_dataset_constructs_expected_dataset_with_revision - huggingface_hub.errors.HfHubHTTPError: 404 Client Error: Not Found for url: https://huggingface.co/api/datasets/rotten_tomatoes/revision/aa13bc287fa6fcab6daf52f0dfb9994269ffea28 (Request ID: Root=1-6977aeca-35bc2b5b605884926a9224d0;aa2391f3-26e8-4975-b9bb-114b2fa40223)
```
Adding a user id fixed the issue (https://github.com/mlflow/mlflow/pull/20350). `https://huggingface.co/api/datasets` no longer accepts a name-only path like `rotten_tomatoes`? Just wondering what changed. Thanks!
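For reference, the namespaced id that avoids the 404 (as noted in the comments above):
```python
from datasets import load_dataset

ds = load_dataset("cornell-movie-review-data/rotten_tomatoes")  # instead of the legacy "rotten_tomatoes"
```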
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/17039389?v=4",
"events_url": "https://api.github.com/users/harupy/events{/privacy}",
"followers_url": "https://api.github.com/users/harupy/followers",
"following_url": "https://api.github.com/users/harupy/following{/other_user}",
"gists_url": "https://api.github.com/users/harupy/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/harupy",
"id": 17039389,
"login": "harupy",
"node_id": "MDQ6VXNlcjE3MDM5Mzg5",
"organizations_url": "https://api.github.com/users/harupy/orgs",
"received_events_url": "https://api.github.com/users/harupy/received_events",
"repos_url": "https://api.github.com/users/harupy/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/harupy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/harupy/subscriptions",
"type": "User",
"url": "https://api.github.com/users/harupy",
"user_view_type": "public"
}
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7965/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7965/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7964
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7964/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7964/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7964/events
|
https://github.com/huggingface/datasets/pull/7964
| 3,858,025,706
|
PR_kwDODunzps6_eOZR
| 7,964
|
handle blob lance
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7964). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2026-01-26T22:56:24
| 2026-01-26T22:59:18
| 2026-01-26T22:56:38
|
MEMBER
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7964.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7964",
"merged_at": "2026-01-26 22:56:38",
"patch_url": "https://github.com/huggingface/datasets/pull/7964.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7964"
}
|
following #7913
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7964/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7964/timeline
| null | null | null | null | null | true
|