# OpenGitHub Issues

## What is it?

The full development metadata of 16 public GitHub repositories, fetched from the GitHub REST API and GraphQL API, converted to Parquet and hosted here for easy access.
Right now the archive has 18.2M rows across 8 tables in 61.8 MB of Zstd-compressed Parquet. Every issue, pull request, comment, code review, timeline event, file change, and CI status check is stored as a separate table you can load individually or query together.
This is the companion to OpenGitHub, which mirrors the real-time GitHub event stream via GH Archive. That dataset tells you what happened across all of GitHub. This one gives you the full picture for specific repos: complete issue threads, full PR review conversations, the state machine from open to close.
People use it for:
- Code review research with inline comments attached to specific diff lines
- Project health metrics like merge rates, review turnaround, label usage
- Issue triage and classification with full text, labels, and timeline
- Software engineering process mining from timeline event sequences
Last updated: 2026-04-08 08:30 UTC.
## Latest Sync
New items since the previous publish:
| Repository | Issues | PRs | Comments | Reviews | Timeline |
|---|---|---|---|---|---|
| kubernetes/kubernetes | — | — | +38.3K | — | — |
## Repositories
| Repository | Issues | PRs | Comments | Reviews | Timeline | Total | Last Updated |
|---|---|---|---|---|---|---|---|
| ClickHouse/ClickHouse | 100.8K | 72.8K | 303.9K | 101.0K | 25.6K | 1.3M | 2026-04-07 08:59 UTC |
| duckdb/duckdb | 18.1K | 11.4K | 61.0K | 13.2K | 10.0K | 310.0K | 2026-04-07 08:48 UTC |
| etcd-io/etcd | 21.0K | 13.8K | 124.2K | 28.5K | 11.6K | 319.5K | 2026-04-07 08:59 UTC |
| facebook/react | 33.7K | 19.2K | 170.7K | 20.1K | 251.2K | 861.0K | 2026-04-07 07:32 UTC |
| golang/go | 75.9K | 4.9K | 536.2K | 323 | 268.3K | 957.5K | 2026-04-07 08:20 UTC |
| kubernetes/kubernetes | 137.4K | 88.8K | 1.9M | 302.4K | 10.0K | 3.9M | 2026-04-07 14:23 UTC |
| mdn/content | 41.6K | 31.5K | 157.5K | 18.9K | 13.3K | 412.7K | 2026-04-07 08:57 UTC |
| microsoft/TypeScript | 62.1K | 19.1K | 336.7K | 41.9K | 13.2K | 1.1M | 2026-04-07 07:22 UTC |
| moby/moby | 51.4K | 28.1K | 101.7K | 50.4K | 10.0K | 588.8K | 2026-04-07 10:20 UTC |
| pingcap/tidb | 67.2K | 44.5K | 487.5K | 162.6K | 11.4K | 1.2M | 2026-04-07 09:01 UTC |
| python/cpython | 145.8K | 69.8K | 864.6K | 149.9K | 26.8K | 1.9M | 2026-04-07 08:40 UTC |
| redis/redis | 14.6K | 7.6K | 81.5K | 27.2K | 11.0K | 207.4K | 2026-04-07 08:53 UTC |
| rust-lang/rust | 154.0K | 92.2K | 1.5M | 185.9K | 47.4K | 3.7M | 2026-04-07 08:57 UTC |
| swiftlang/swift | 84.4K | 66.8K | 447.3K | 108.5K | 14.0K | 1.4M | 2026-04-07 08:51 UTC |
| vuejs/core | 12.1K | 6.1K | 35.7K | 4.8K | 10.4K | 90.4K | 2026-04-07 08:56 UTC |
| vuejs/docs | 3.3K | 2.2K | 7.0K | 2.7K | 10.0K | 40.4K | 2026-04-03 19:23 UTC |
## How to download and use this dataset

Data lives at `data/{table}/{owner}/{repo}/0.parquet`. Load a single table, a single repo, or everything at once. It's a standard Hugging Face Parquet layout, so it works with DuckDB, `datasets`, pandas, and `huggingface_hub` out of the box.
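This layout means a single repo's table is one file away. A minimal DuckDB sketch (any owner/repo pair from the Repositories table above works; `duckdb/duckdb` is just an illustration):

```sql
-- Count rows in one repo's issues table, reading a single Parquet file
SELECT COUNT(*) as n
FROM read_parquet('hf://datasets/open-index/open-github-issues/data/issues/duckdb/duckdb/0.parquet');
```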
### Using DuckDB

DuckDB reads Parquet directly from Hugging Face; no download step is needed. Save any query below as a `.sql` file and run it with `duckdb < query.sql`.
```sql
-- Top issue authors across all repos
SELECT
    author,
    COUNT(*) as issue_count,
    COUNT(*) FILTER (WHERE state = 'open') as open,
    COUNT(*) FILTER (WHERE state = 'closed') as closed
FROM read_parquet('hf://datasets/open-index/open-github-issues/data/issues/**/0.parquet')
WHERE is_pull_request = false
GROUP BY author
ORDER BY issue_count DESC
LIMIT 20;
```

```sql
-- PR merge rate by repo
SELECT
    split_part(filename, '/', 8) || '/' || split_part(filename, '/', 9) as repo,
    COUNT(*) as total_prs,
    COUNT(*) FILTER (WHERE merged) as merged,
    ROUND(COUNT(*) FILTER (WHERE merged) * 100.0 / COUNT(*), 1) as merge_pct
FROM read_parquet('hf://datasets/open-index/open-github-issues/data/pull_requests/**/0.parquet', filename=true)
GROUP BY repo
ORDER BY total_prs DESC;
```

```sql
-- Most reviewed PRs by number of review submissions
SELECT
    r.pr_number,
    COUNT(*) as review_count,
    COUNT(*) FILTER (WHERE r.state = 'APPROVED') as approvals,
    COUNT(*) FILTER (WHERE r.state = 'CHANGES_REQUESTED') as changes_requested
FROM read_parquet('hf://datasets/open-index/open-github-issues/data/reviews/**/0.parquet') r
GROUP BY r.pr_number
ORDER BY review_count DESC
LIMIT 20;
```

```sql
-- Label activity over time (monthly)
SELECT
    date_trunc('month', created_at) as month,
    COUNT(*) as label_events
FROM read_parquet('hf://datasets/open-index/open-github-issues/data/timeline_events/**/0.parquet')
WHERE event_type = 'LabeledEvent'
GROUP BY month
ORDER BY month;
```

```sql
-- Largest PRs by lines changed
SELECT
    number,
    additions,
    deletions,
    changed_files,
    additions + deletions as total_lines
FROM read_parquet('hf://datasets/open-index/open-github-issues/data/pull_requests/**/0.parquet')
ORDER BY total_lines DESC
LIMIT 20;
```
### Using Python (uv run)

These scripts use PEP 723 inline metadata. Save as a `.py` file and run with `uv run script.py`. No virtualenv or `pip install` needed.
Stream issues:
```python
# /// script
# requires-python = ">=3.11"
# dependencies = ["datasets"]
# ///
from datasets import load_dataset

ds = load_dataset("open-index/open-github-issues", "issues", streaming=True)
for i, row in enumerate(ds["train"]):
    print(f"#{row['number']}: [{row['state']}] {row['title']} (by {row['author']})")
    if i >= 19:
        break
```
Load a specific repo:
```python
# /// script
# requires-python = ">=3.11"
# dependencies = ["datasets"]
# ///
from datasets import load_dataset

ds = load_dataset(
    "open-index/open-github-issues",
    "pull_requests",
    data_files="data/pull_requests/facebook/react/0.parquet",
)
df = ds["train"].to_pandas()

print(f"Loaded {len(df)} pull requests")
print(f"Merged: {df['merged'].sum()} ({df['merged'].mean()*100:.1f}%)")
print("\nTop 10 by lines changed:")
df["total_lines"] = df["additions"] + df["deletions"]
print(df.nlargest(10, "total_lines")[["number", "additions", "deletions", "total_lines"]].to_string(index=False))
```
Download files:
```python
# /// script
# requires-python = ">=3.11"
# dependencies = ["huggingface-hub"]
# ///
from huggingface_hub import snapshot_download

# Download only issues
snapshot_download(
    "open-index/open-github-issues",
    repo_type="dataset",
    local_dir="./open-github-issues/",
    allow_patterns="data/issues/**/*.parquet",
)
print("Downloaded issues parquet files to ./open-github-issues/")
```
For faster downloads, install the transfer extra with `pip install 'huggingface_hub[hf_transfer]'` and set `HF_HUB_ENABLE_HF_TRANSFER=1`.
## Dataset structure

### issues

Both issues and PRs live in this table (check `is_pull_request`). Join with `pull_requests` on `number` for PR-specific fields like merge status and diff stats.
| Column | Type | Description |
|---|---|---|
| `number` | int32 | Issue/PR number (primary key) |
| `node_id` | string | GitHub GraphQL node ID |
| `is_pull_request` | bool | True if this is a PR |
| `title` | string | Title |
| `body` | string | Full body text in Markdown |
| `state` | string | `open` or `closed` |
| `state_reason` | string | `completed`, `not_planned`, or `reopened` |
| `author` | string | Username of the creator |
| `created_at` | timestamp | When opened |
| `updated_at` | timestamp | Last activity |
| `closed_at` | timestamp | When closed (null if open) |
| `labels` | string (JSON) | Array of label names |
| `assignees` | string (JSON) | Array of assignee usernames |
| `milestone_title` | string | Milestone name |
| `milestone_number` | int32 | Milestone number |
| `reactions` | string (JSON) | Reaction counts (`{"+1": 5, "heart": 2}`) |
| `comment_count` | int32 | Number of comments |
| `locked` | bool | Whether the conversation is locked |
| `lock_reason` | string | Lock reason |
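To make the join concrete, here is a sketch pairing shared metadata with PR diff stats for one repo. Note `number` is only unique per repo, so join per-repo files (as here) or carry the file path along:

```sql
-- Largest merged PRs in one repo, with titles from the issues table
SELECT i.number, i.title, p.additions, p.deletions
FROM read_parquet('hf://datasets/open-index/open-github-issues/data/issues/duckdb/duckdb/0.parquet') i
JOIN read_parquet('hf://datasets/open-index/open-github-issues/data/pull_requests/duckdb/duckdb/0.parquet') p
  ON i.number = p.number
WHERE p.merged
ORDER BY p.additions + p.deletions DESC
LIMIT 10;
```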
### pull_requests

PR-specific fields. Join with `issues` on `number` for title, body, labels, and other shared fields.
| Column | Type | Description |
|---|---|---|
| `number` | int32 | PR number (matches `issues.number`) |
| `merged` | bool | Whether the PR was merged |
| `merged_at` | timestamp | When merged |
| `merged_by` | string | Username who merged |
| `merge_commit_sha` | string | Merge commit SHA |
| `base_ref` | string | Target branch (e.g. `main`) |
| `head_ref` | string | Source branch |
| `head_sha` | string | Head commit SHA |
| `additions` | int32 | Lines added |
| `deletions` | int32 | Lines deleted |
| `changed_files` | int32 | Number of files changed |
| `draft` | bool | Whether the PR is a draft |
| `maintainer_can_modify` | bool | Whether maintainers can push to the head branch |
### comments

Conversation comments on issues and PRs. These are the threaded discussion comments, not inline code review comments (those are in `review_comments`).
| Column | Type | Description |
|---|---|---|
| `id` | int64 | Comment ID (primary key) |
| `issue_number` | int32 | Parent issue/PR number |
| `author` | string | Username |
| `body` | string | Comment body in Markdown |
| `created_at` | timestamp | When posted |
| `updated_at` | timestamp | Last edit |
| `reactions` | string (JSON) | Reaction counts |
| `author_association` | string | OWNER, MEMBER, CONTRIBUTOR, NONE, etc. |
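The `issue_number` column is the join key back to `issues`. A sketch for one repo (`golang/go` is just an example):

```sql
-- Most-commented threads, with titles from the issues table
SELECT i.number, i.title, COUNT(*) as n_comments
FROM read_parquet('hf://datasets/open-index/open-github-issues/data/comments/golang/go/0.parquet') c
JOIN read_parquet('hf://datasets/open-index/open-github-issues/data/issues/golang/go/0.parquet') i
  ON c.issue_number = i.number
GROUP BY i.number, i.title
ORDER BY n_comments DESC
LIMIT 10;
```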
### review_comments

Inline code review comments on PR diffs. Each comment is attached to a specific file and line in the diff.
| Column | Type | Description |
|---|---|---|
| `id` | int64 | Comment ID (primary key) |
| `pr_number` | int32 | Parent PR number |
| `review_id` | int64 | Parent review ID |
| `author` | string | Reviewer username |
| `body` | string | Comment body in Markdown |
| `path` | string | File path in the diff |
| `line` | int32 | Line number |
| `side` | string | LEFT (old code) or RIGHT (new code) |
| `diff_hunk` | string | Surrounding diff context |
| `created_at` | timestamp | When posted |
| `updated_at` | timestamp | Last edit |
| `in_reply_to_id` | int64 | Parent comment ID (for threaded replies) |
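Reply threads can be reconstructed with a self-join on `in_reply_to_id`; a sketch for one repo:

```sql
-- Top-level review comments with the most replies (facebook/react as an example)
SELECT parent.id, parent.path, COUNT(*) as replies
FROM read_parquet('hf://datasets/open-index/open-github-issues/data/review_comments/facebook/react/0.parquet') reply
JOIN read_parquet('hf://datasets/open-index/open-github-issues/data/review_comments/facebook/react/0.parquet') parent
  ON reply.in_reply_to_id = parent.id
GROUP BY parent.id, parent.path
ORDER BY replies DESC
LIMIT 10;
```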
### reviews

PR review decisions. One row per review action on a PR.
| Column | Type | Description |
|---|---|---|
| `id` | int64 | Review ID (primary key) |
| `pr_number` | int32 | Parent PR number |
| `author` | string | Reviewer username |
| `state` | string | APPROVED, CHANGES_REQUESTED, COMMENTED, DISMISSED |
| `body` | string | Review summary in Markdown |
| `submitted_at` | timestamp | When submitted |
| `commit_id` | string | Commit SHA that was reviewed |
### timeline_events

The full lifecycle of every issue and PR. Every label change, assignment, cross-reference, merge, force-push, lock, and other state transition.
| Column | Type | Description |
|---|---|---|
| `id` | string | Event ID (node_id or synthesized) |
| `issue_number` | int32 | Parent issue/PR number |
| `event_type` | string | Event type (see below) |
| `actor` | string | Username who triggered the event |
| `created_at` | timestamp | When it happened |
| `database_id` | int64 | GitHub database ID for the event |
| `label_name` | string | Label name (labeled, unlabeled) |
| `label_color` | string | Label hex color |
| `state_reason` | string | Close reason: COMPLETED, NOT_PLANNED (closed) |
| `assignee_login` | string | Username assigned/unassigned (assigned, unassigned) |
| `milestone_title` | string | Milestone name (milestoned, demilestoned) |
| `title_from` | string | Previous title before rename (renamed) |
| `title_to` | string | New title after rename (renamed) |
| `ref_type` | string | Referenced item type: Issue or PullRequest (cross-referenced, referenced) |
| `ref_number` | int32 | Referenced issue/PR number |
| `ref_url` | string | URL of the referenced item |
| `will_close` | bool | Whether the reference will close this issue |
| `lock_reason` | string | Lock reason (locked) |
| `data` | string (JSON) | Remaining event-specific payload (common fields stripped) |
Event types: `labeled`, `unlabeled`, `closed`, `reopened`, `assigned`, `unassigned`, `milestoned`, `demilestoned`, `renamed`, `cross-referenced`, `referenced`, `locked`, `unlocked`, `pinned`, `merged`, `review_requested`, `head_ref_force_pushed`, `head_ref_deleted`, `ready_for_review`, `convert_to_draft`, and more.

Common fields (`actor`, `created_at`, `database_id`, and the extracted columns above) are stored in dedicated columns and removed from `data` to reduce storage, so the `data` field contains only the remaining event-specific payload. See the GitHub GraphQL timeline items documentation for the full type catalog.
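Since every transition is a row in this table, an issue's full history reads off with a filter and a sort. A minimal sketch (issue number 1 of `rust-lang/rust` is just a placeholder):

```sql
-- Chronological event stream for a single issue
SELECT created_at, event_type, actor, label_name, state_reason
FROM read_parquet('hf://datasets/open-index/open-github-issues/data/timeline_events/rust-lang/rust/0.parquet')
WHERE issue_number = 1
ORDER BY created_at;
```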
### pr_files

Every file touched by each pull request, with per-file diff statistics.
| Column | Type | Description |
|---|---|---|
| `pr_number` | int32 | Parent PR number |
| `path` | string | File path |
| `additions` | int32 | Lines added |
| `deletions` | int32 | Lines deleted |
| `status` | string | added, removed, modified, renamed |
| `previous_filename` | string | Original path (for renames) |
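The per-file stats make change-hotspot queries a single aggregation; a sketch for one repo:

```sql
-- Highest-churn paths in python/cpython
SELECT path,
       COUNT(DISTINCT pr_number) as prs_touching,
       SUM(additions + deletions) as total_churn
FROM read_parquet('hf://datasets/open-index/open-github-issues/data/pr_files/python/cpython/0.parquet')
GROUP BY path
ORDER BY total_churn DESC
LIMIT 10;
```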
### commit_statuses

CI/CD status checks and GitHub Actions results for each commit.
| Column | Type | Description |
|---|---|---|
| `sha` | string | Commit SHA |
| `context` | string | Check name (e.g. `ci/circleci`, `check:build`) |
| `state` | string | success, failure, pending, error |
| `description` | string | Status description |
| `target_url` | string | Link to CI details |
| `created_at` | timestamp | When reported |
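One way to tie checks back to PRs is matching `sha` against `pull_requests.head_sha`. Note this join key is an assumption of the sketch, not documented above, and it only sees checks on each PR's final head commit:

```sql
-- Check outcomes on PR head commits (etcd-io/etcd as an example)
SELECT s.context,
       COUNT(*) as checks,
       COUNT(*) FILTER (WHERE s.state = 'failure') as failures
FROM read_parquet('hf://datasets/open-index/open-github-issues/data/commit_statuses/etcd-io/etcd/0.parquet') s
JOIN read_parquet('hf://datasets/open-index/open-github-issues/data/pull_requests/etcd-io/etcd/0.parquet') p
  ON s.sha = p.head_sha
GROUP BY s.context
ORDER BY checks DESC
LIMIT 10;
```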
## Dataset statistics

| Table | Rows | Description |
|---|---|---|
| `issues` | 1.0M | Issues and pull requests (shared metadata) |
| `pull_requests` | 578.9K | PR-specific fields (merge status, diffs, refs) |
| `comments` | 5.7M | Conversation comments on issues and PRs |
| `review_comments` | 1.3M | Inline code review comments on PR diffs |
| `reviews` | 1.2M | PR review decisions |
| `timeline_events` | 744.4K | Activity timeline (labels, closes, merges, assignments) |
| `pr_files` | 7.4M | Files changed in each pull request |
| `commit_statuses` | 164.0K | CI/CD status checks per commit |
| Total | 18.2M | |
## How it's built
The sync pipeline uses both GitHub APIs. The REST API handles bulk listing: issues, comments, and review comments are fetched repo-wide with `since`-based incremental pagination and parallel page fetching across multiple tokens. The GraphQL API handles per-item detail: one query grabs reviews, timeline events, file changes, and commit statuses in a single round trip, with automatic REST fallback for PRs with more than 100 files or reviews.
Multiple GitHub Personal Access Tokens rotate round-robin to spread rate limit load. The pipeline is fully incremental and idempotent: re-running picks up only what changed since the last sync.
Everything lands in per-repo DuckDB files first, then gets exported to Parquet with Zstd compression for publishing here. No filtering, deduplication, or content changes. Bot activity, automated PRs, CI noise, Dependabot upgrades, all of it is preserved, because that's how repos actually work.
## Known limitations

- Point-in-time snapshot. Data reflects the state at the last sync, not real time. Incremental updates capture everything that changed since the previous sync.
- Bot activity included. Comments and PRs from bots (Dependabot, Renovate, GitHub Actions, etc.) are included without filtering. This is intentional. Filter on `author` if you want humans only.
- JSON columns. `labels`, `assignees`, `reactions`, and `data` contain JSON strings. Use `json_extract()` in DuckDB or `json.loads()` in Python (see the sketch after this list).
- Body text can be large. Issue and comment bodies contain full Markdown, sometimes with embedded images. Project only the columns you need for memory-constrained workloads.
- Timeline data varies by event type. The `data` field in `timeline_events` contains the raw event payload as JSON. The schema depends on `event_type`.
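A minimal DuckDB sketch of working with one of these JSON columns. DuckDB's bundled json extension provides `from_json`; the `'["VARCHAR"]'` structure assumes `labels` is a flat JSON array of strings, as documented above:

```sql
-- Ten most-used labels, unnested from the JSON-encoded labels column
SELECT label, COUNT(*) as n_issues
FROM (
    SELECT unnest(from_json(labels, '["VARCHAR"]')) as label
    FROM read_parquet('hf://datasets/open-index/open-github-issues/data/issues/**/0.parquet')
)
GROUP BY label
ORDER BY n_issues DESC
LIMIT 10;
```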
## Personal and sensitive information
Usernames, user IDs, and author associations are included as they appear in the GitHub API. All data was already publicly accessible on GitHub. Email addresses do not appear in this dataset (they exist only in git commit objects, which are in the separate code archive, not here). No private repository data is present.
## License
Released under the Open Data Commons Attribution License (ODC-By) v1.0. The underlying data is sourced from GitHub's public API. GitHub's Terms of Service apply to the original data.
## Thanks
All the data here comes from GitHub's public REST API and GraphQL API. We are grateful to the open-source maintainers and contributors whose work is represented in these tables.
- OpenGitHub, our companion dataset covering the full GitHub event stream via GH Archive by Ilya Grigorik
- Built with DuckDB (Go driver), Apache Parquet (Zstd compression), published via Hugging Face Hub
Questions, feedback, or issues? Open a discussion on the Community tab.