| owner | repo | id | issue_number | author | body | created_at | updated_at | reactions | author_association |
|---|---|---|---|---|---|---|---|---|---|
ClickHouse | ClickHouse | 316,627,713 | 1,001 | kolobaev | I changed "pragma GCC diagnostic" to "pragma clang diagnostic".
**make failed:**
ClickHouse/dbms/src/Common/Collator.cpp:13:0: error: ignoring #pragma clang diagnostic [-Werror=unknown-pragmas]
#pragma clang diagnostic ignored "-Wunused-private-field"
cc1plus: all warnings being treated as errors
dbms/CMake... | 2017-07-20T08:00:15 | 2017-07-20T08:00:15 | {} | NONE |
ClickHouse | ClickHouse | 318,123,846 | 1,028 | alexey-milovidov | `Array(Nullable(...))` is Ok,
but `Nullable(Array(...))` is prohibited (in the current version it can be created but mostly does not work; it will be strictly prohibited in the next version). | 2017-07-26T17:24:05 | 2017-07-26T17:24:05 | {} | MEMBER |
ClickHouse | ClickHouse | 318,197,220 | 935 | ei-grad | @yctn https://stackoverflow.com/a/5380763/2649222 | 2017-07-26T22:07:23 | 2017-07-26T22:08:01 | {} | CONTRIBUTOR |
ClickHouse | ClickHouse | 315,859,987 | 993 | ztlpn | Confirmed. Currently `execution_time` quotas are checked only when (and if) resulting output blocks appear. So, e.g., if the query is an aggregation returning a small number of rows, this quota is checked only at the very end. And if the query returns zero rows, the quota is never checked. | 2017-07-17T19:40:21 | 2017-07-17T19:40:21 | {} | CONTRIBUTOR |
ClickHouse | ClickHouse | 315,935,314 | 990 | jackpgao | Thanks @ztlpn .
I searched the whole documentation but didn't find a demo like this.
Hope you can provide more demos in the documentation.
| 2017-07-18T01:54:42 | 2017-07-18T01:54:42 | {} | NONE |
ClickHouse | ClickHouse | 316,424,177 | 1,001 | alexey-milovidov | Ok, now I understand the motivation... | 2017-07-19T15:29:12 | 2017-07-19T15:29:12 | {} | MEMBER |
ClickHouse | ClickHouse | 316,496,498 | 999 | ztlpn | Easily reproducible, thanks! Will fix. | 2017-07-19T19:48:08 | 2017-07-19T19:48:08 | {} | CONTRIBUTOR |
ClickHouse | ClickHouse | 316,719,485 | 1,001 | alexey-milovidov | It should be like this:
```
#if __clang__
#pragma clang diagnostic push
#pragma clang diagnostic ignored "-Wunused-private-field"
#endif
``` | 2017-07-20T14:23:35 | 2017-07-20T14:23:35 | {} | MEMBER |
ClickHouse | ClickHouse | 316,788,593 | 1,009 | alexey-milovidov | Hi.
OPTIMIZE doesn't guarantee that all data parts will be merged; it just performs an additional step of selecting which parts to merge and then merges them. The selection of which data parts to merge is done by a heuristic based on the number of data parts, their sizes and ages, non-uniformity of sizes, etc. Sometimes it ... | 2017-07-20T18:21:15 | 2017-07-20T18:21:15 | {} | MEMBER |
ClickHouse | ClickHouse | 317,068,469 | 1,013 | alexey-milovidov | Perfect. | 2017-07-21T17:51:15 | 2017-07-21T17:51:15 | {} | MEMBER |
ClickHouse | ClickHouse | 318,108,964 | 1,029 | ekonkov | autotest | 2017-07-26T16:31:41 | 2017-07-26T16:31:41 | {} | CONTRIBUTOR |
ClickHouse | ClickHouse | 318,480,037 | 1,029 | proller | autotest | 2017-07-27T20:40:18 | 2017-07-27T20:40:18 | {} | CONTRIBUTOR |
ClickHouse | ClickHouse | 319,056,321 | 1,045 | robot-metrika-test | Can one of the admins verify this patch? | 2017-07-31T12:44:02 | 2017-07-31T12:44:02 | {} | NONE |
ClickHouse | ClickHouse | 319,413,647 | 1,002 | proller | can you try build current master? | 2017-08-01T15:54:28 | 2017-08-01T15:54:28 | {} | CONTRIBUTOR |
ClickHouse | ClickHouse | 319,466,124 | 1,051 | vavrusa | Hi 👋, I got back from vacation, sorry it took so long. Is the test suite going to be run by CI on this PR? | 2017-08-01T19:05:11 | 2017-08-01T19:05:11 | {} | CONTRIBUTOR |
ClickHouse | ClickHouse | 316,786,181 | 999 | mfridental | You're right, the issue is still there | 2017-07-20T18:11:53 | 2017-07-20T18:11:53 | {} | CONTRIBUTOR |
ClickHouse | ClickHouse | 316,791,215 | 1,009 | remenska | Indeed, after a while, now I see the data is summed. Thanks for the tip on optimize!
Is there any structured way to control or estimate how long before the Summing merge tree is merged, based, say, on data rate or something? It does affect our storage requirements estimations.
I gave that as example, but I have a r... | 2017-07-20T18:31:31 | 2017-07-20T18:38:47 | {} | NONE |
ClickHouse | ClickHouse | 316,900,994 | 1,010 | sigsergv | This is how it looks in ps:
~~~~~
3759 pts/10 Ss \_ /bin/zsh
3773 pts/10 S | \_ sudo -i
3774 pts/10 S | \_ -bash
12861 pts/10 S+ | \_ dpkg -i ch-defunct-test_1.0-1_amd64.deb
12871 pts/10 S+ | \_ /usr/bin/perl -w /usr/share/debconf/frontend /var/lib/dpkg/info/... | 2017-07-21T04:41:49 | 2017-07-21T04:41:49 | {} | NONE |
ClickHouse | ClickHouse | 316,920,297 | 985 | alexey-milovidov | Note: `__restrict` was removed, because it is not applicable to `std::reverse_iterator`.
(The code didn't compile under clang.) | 2017-07-21T06:54:56 | 2017-07-21T06:54:56 | {} | MEMBER |
ClickHouse | ClickHouse | 317,539,327 | 1,016 | alexey-milovidov | Basically, ClickHouse can handle Chinese:
```
:) CREATE TABLE test.chinese (d Date, s String) ENGINE = MergeTree(d, s, 8192)
CREATE TABLE test.chinese
(
d Date,
s String
) ENGINE = MergeTree(d, s, 8192)
Ok.
0 rows in set. Elapsed: 0.073 sec.
:) INSERT INTO test.chinese VALUES ('2000-01-0... | 2017-07-24T20:08:13 | 2017-07-24T20:08:13 | {} | MEMBER |
ClickHouse | ClickHouse | 317,691,788 | 1,008 | YiuRULE | Ok, should be done now. :)
I kept the name `generateUUIDv4` because some libraries that encapsulate UUIDv4 use components of the UUIDv4 algorithm (to determine the version). Naming it `generateRandomUUID` could be a little ambiguous, because one might think that every bit of the UUID is random, which is not totally th... | 2017-07-25T10:06:32 | 2017-07-25T14:01:19 | {} | CONTRIBUTOR |
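The point that not every bit of a version-4 UUID is random can be checked with Python's standard library (a neutral illustration, not ClickHouse code):

```python
import uuid

# RFC 4122 fixes 6 of the 128 bits in a v4 UUID: the 4-bit version field
# (value 4) and the 2-bit variant field (value 0b10). The other 122 bits
# are random, which is why "random UUID" is almost, but not exactly, right.
u = uuid.uuid4()
print(u.version)              # 4
print((u.int >> 62) & 0b11)   # 2 (binary 10): the variant bits
```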
ClickHouse | ClickHouse | 319,063,611 | 1,037 | kam1sh | There are also weird things with `date` field parsing. ClickHouse parsed it without errors, but when I executed `SELECT max(date), min(date) FROM logs` it gave me 0000-00-00 in both columns.
But when I dropped the table and created it from another CSV, where this field was not in `1500953104` but in `2017-07-25` form, `max(da... | 2017-07-31T13:14:49 | 2017-07-31T13:16:29 | {} | NONE |
ClickHouse | ClickHouse | 319,238,127 | 1,044 | alexey-milovidov | Reasonable.
There are the following ways to implement this:
1. Add a command line option `--ask-password`. Allow specifying this option in the client configuration file to set the default behaviour.
2. Ask for the password if it was not specified and the default empty password was incorrect. | 2017-08-01T01:02:00 | 2017-08-01T01:02:00 | {} | MEMBER |
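A rough sketch of option 1 in Python (the flag name comes from the suggestion above; this is not the real clickhouse-client implementation):

```python
import argparse
import getpass

def resolve_password(args) -> str:
    """Return the password to use: prompt interactively when --ask-password
    is given, otherwise fall back to the (possibly empty) --password value."""
    if args.ask_password:
        return getpass.getpass("Password: ")
    return args.password

parser = argparse.ArgumentParser()
parser.add_argument("--password", default="")
parser.add_argument("--ask-password", action="store_true")

args = parser.parse_args(["--password", "secret"])
print(resolve_password(args))  # secret
```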
ClickHouse | ClickHouse | 315,913,276 | 985 | alexey-milovidov | I will review this in two days. | 2017-07-17T23:23:27 | 2017-07-17T23:23:27 | {} | MEMBER |
ClickHouse | ClickHouse | 316,803,137 | 1,009 | alexey-milovidov | To get summed data, you just use GROUP BY in all queries from SummingMergeTree.
The storage size requirement is also not a concern: the overhead for larger datasets should be negligible. | 2017-07-20T19:17:09 | 2017-07-20T19:17:09 | {
"+1": 2
} | MEMBER |
ClickHouse | ClickHouse | 318,108,985 | 1,030 | ekonkov | autotest | 2017-07-26T16:31:47 | 2017-07-26T16:31:47 | {} | CONTRIBUTOR |
ClickHouse | ClickHouse | 318,112,600 | 1,020 | ekonkov | autotest | 2017-07-26T16:44:40 | 2017-07-26T16:44:40 | {} | CONTRIBUTOR |
ClickHouse | ClickHouse | 318,401,356 | 79 | seufagner | @alexey-milovidov Should I just implement default values for Date and DateTime? I can do this :) | 2017-07-27T15:42:43 | 2017-07-27T15:42:43 | {} | NONE |
ClickHouse | ClickHouse | 318,996,472 | 1,028 | mxzwrnz | @alexey-milovidov
The array I use is created via `groupUniqArray(Nullable(Int64))`. Is it the intended behaviour that this array is of type `Nullable(Array(Int64))` instead of `Array(Nullable(Int64))`?
Could you explain how to create an `Array(Nullable(...))` from `groupUniqArray()` instead? | 2017-07-31T08:00:08 | 2017-07-31T08:00:08 | {} | NONE |
ClickHouse | ClickHouse | 319,140,630 | 1,028 | alexey-milovidov | It is broken in current (master) version:
```
:) SELECT groupUniqArray(x) FROM (SELECT arrayJoin([1, NULL, 2]) AS x)
SELECT groupUniqArray(x)
FROM
(
SELECT arrayJoin([1, NULL, 2]) AS x
)
Received exception from server:
Code: 43. DB::Exception: Received from localhost:9000, 127.0.0.1. DB::Exception:... | 2017-07-31T17:38:38 | 2017-07-31T17:39:33 | {} | MEMBER |
ClickHouse | ClickHouse | 319,199,798 | 1,037 | alexey-milovidov | Parsing from unix timestamp is not supported for Date fields for now. | 2017-07-31T21:19:50 | 2017-07-31T21:19:50 | {} | MEMBER |
ClickHouse | ClickHouse | 316,370,793 | 1,001 | robot-metrika-test | Can one of the admins verify this patch? | 2017-07-19T12:32:03 | 2017-07-19T12:32:03 | {} | NONE |
ClickHouse | ClickHouse | 316,370,798 | 985 | robot-metrika-test | Can one of the admins verify this patch? | 2017-07-19T12:32:06 | 2017-07-19T12:32:06 | {} | NONE |
ClickHouse | ClickHouse | 316,804,786 | 1,009 | remenska | Golden tip, maybe I missed it in the documentation 🥇 | 2017-07-20T19:23:55 | 2017-07-20T19:23:55 | {} | NONE |
ClickHouse | ClickHouse | 317,335,960 | 775 | dmitryluhtionov | Thanks. Strange. I tested it on my side and everything worked correctly for me.
2017-07-24 9:57 GMT+03:00 alexey-milovidov <notifications@github.com>:
> Yes, everything is Ok now.
>
> —
> You are receiving this because you authored the thread.
> Reply to this email directly, view it on GitHub
> <https://github.com/yandex/Cl... | 2017-07-24T07:00:18 | 2017-07-24T07:00:18 | {} | CONTRIBUTOR |
ClickHouse | ClickHouse | 318,109,065 | 1,027 | ekonkov | autotest | 2017-07-26T16:32:05 | 2017-07-26T16:32:05 | {} | CONTRIBUTOR |
ClickHouse | ClickHouse | 318,953,073 | 1,016 | jackpgao | @alexey-milovidov Sorry for the late reply.
I use iTerm2 on a MacBook Pro; when I put Chinese characters into the query, they turn into blanks:
```sql
:) select distinct city, city_name from XXXX where date=today() limit 10;
SELECT DISTINCT
city,
city_name
FROM XXXX
WHERE date = today()
LIMIT 10
... | 2017-07-31T02:36:09 | 2017-07-31T02:36:09 | {} | NONE |
ClickHouse | ClickHouse | 319,483,057 | 1,045 | alexey-milovidov | Also, it would be nice if you could share performance testing results.
Both total numbers (query execution speed) and `perf` listings are interesting! | 2017-08-01T20:14:18 | 2017-08-01T20:14:18 | {} | MEMBER |
ClickHouse | ClickHouse | 316,511,736 | 999 | mfridental | Confirming that fix for #998 also fixes this issue. Thanks! | 2017-07-19T20:45:34 | 2017-07-19T20:45:34 | {} | CONTRIBUTOR |
ClickHouse | ClickHouse | 318,177,817 | 1,029 | alexey-milovidov | This breaks our current CI environment. | 2017-07-26T20:48:35 | 2017-07-26T20:48:35 | {} | MEMBER |
ClickHouse | ClickHouse | 318,177,971 | 1,029 | alexey-milovidov | Using any clang version before 4.0 is strongly discouraged. | 2017-07-26T20:49:07 | 2017-07-26T20:49:07 | {} | MEMBER |
ClickHouse | ClickHouse | 316,540,017 | 999 | ztlpn | @mfridental are you sure? Because I still can reproduce it. | 2017-07-19T22:40:27 | 2017-07-19T22:40:27 | {} | CONTRIBUTOR |
ClickHouse | ClickHouse | 316,692,494 | 1,008 | robot-metrika-test | Can one of the admins verify this patch? | 2017-07-20T12:49:03 | 2017-07-20T12:49:03 | {} | NONE |
ClickHouse | ClickHouse | 317,101,030 | 1,010 | sigsergv | Yes, I think I've found the cause of the problem: the exec in the bash invocation when the daemon starts, unclosed file descriptors, and the use of debconf. I've added db_stop at the end of our postinst and that fixed the problem. At least it doesn't hang at the end anymore. | 2017-07-21T20:10:52 | 2017-07-21T20:10:52 | {} | NONE |
ClickHouse | ClickHouse | 317,160,600 | 1,001 | kolobaev | thx, it works! | 2017-07-22T06:39:39 | 2017-07-22T06:39:39 | {} | NONE |
ClickHouse | ClickHouse | 317,362,800 | 1,003 | sunsingerus | >run make:
>make -j16 clickhouse
Failed again:
```
[100%] Built target clickhouse-local
Scanning dependencies of target clickhouse
[100%] Building CXX object dbms/src/Server/CMakeFiles/clickhouse.dir/main.cpp.o
[100%] Linking CXX executable clickhouse
../../../libs/libmysqlxx/libmysqlclient.a(vio.c.o): In fun... | 2017-07-24T09:05:05 | 2017-07-24T09:05:05 | {} | NONE |
ClickHouse | ClickHouse | 319,467,145 | 1,051 | robot-metrika-test | Can one of the admins verify this patch? | 2017-08-01T19:09:02 | 2017-08-01T19:09:02 | {} | NONE |
ClickHouse | ClickHouse | 317,085,265 | 1,015 | alexey-milovidov | I prefer more informal comments: what we are doing and why. | 2017-07-21T18:59:03 | 2017-07-21T18:59:03 | {} | MEMBER |
ClickHouse | ClickHouse | 317,335,414 | 775 | alexey-milovidov | Yes, everything is Ok now. | 2017-07-24T06:57:03 | 2017-07-24T06:57:03 | {} | MEMBER |
ClickHouse | ClickHouse | 317,348,057 | 1,003 | sunsingerus | >rm build/CMakeCache.txt
>run cmake again and share output
Please, find output of the `cmake .. -DCMAKE_BUILD_TYPE:STRING=Release` command:
```
-- The C compiler identification is GNU 6.2.0
-- The CXX compiler identification is GNU 6.2.0
-- Check for working C compiler: /usr/local/bin/cc
-- Check for working... | 2017-07-24T08:01:38 | 2017-07-24T08:01:38 | {} | NONE |
ClickHouse | ClickHouse | 317,774,513 | 1,016 | jackpgao | @alexey-milovidov Actually, I can never input Chinese characters with clickhouse-client in the terminal.
All the characters become blank.
| 2017-07-25T15:25:08 | 2017-07-25T15:25:08 | {} | NONE |
ClickHouse | ClickHouse | 318,109,032 | 1,031 | ekonkov | autotest | 2017-07-26T16:31:58 | 2017-07-26T16:31:58 | {} | CONTRIBUTOR |
ClickHouse | ClickHouse | 318,464,774 | 79 | alexey-milovidov | We have implicit default values for Date and DateTime - `0000-00-00` and `0000-00-00 00:00:00`.
The problem is not with default values, but that empty string is not parsed as default in CSV. | 2017-07-27T19:37:07 | 2017-07-27T19:37:07 | {} | MEMBER |
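The behaviour described above (an empty CSV cell failing to become the implicit default) can be sketched in Python; `DEFAULT_DATE` and the fallback rule are illustrative, not ClickHouse internals:

```python
import csv
import io

DEFAULT_DATE = "0000-00-00"  # ClickHouse's implicit default for Date

def parse_date_field(raw: str) -> str:
    # Hypothetical fallback discussed in the thread: treat an empty CSV
    # cell as the column's default value instead of a parse error.
    return raw if raw else DEFAULT_DATE

rows = list(csv.reader(io.StringIO("2017-07-27,a\n,b\n")))
print([parse_date_field(d) for d, _ in rows])
# ['2017-07-27', '0000-00-00']
```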
ClickHouse | ClickHouse | 319,238,463 | 1,016 | alexey-milovidov | This could be an issue of Mac OS builds of client.
What happens if you use `clickhouse-client` on Ubuntu (either on server through ssh or using Docker)? | 2017-08-01T01:04:30 | 2017-08-01T01:04:30 | {} | MEMBER |
ClickHouse | ClickHouse | 319,466,625 | 1,051 | robot-metrika-test | Can one of the admins verify this patch? | 2017-08-01T19:07:02 | 2017-08-01T19:07:02 | {} | NONE |
ClickHouse | ClickHouse | 319,481,725 | 1,045 | prog8 | Thanks @alexey-milovidov | 2017-08-01T20:09:02 | 2017-08-01T20:09:02 | {} | CONTRIBUTOR |
ClickHouse | ClickHouse | 319,482,728 | 1,045 | alexey-milovidov | Ok. I have added remaining changes. Thank you!
About copy avoidance: this is definitely worth doing.
For example, look at `CompressedReadBufferFromFile::nextImpl`.
This method prepares buffer for decompressed data (`memory`) and sets `working_buffer` to point to it. Then decompresses into `working_buffer`.
If y... | 2017-08-01T20:12:58 | 2017-08-01T20:12:58 | {} | MEMBER |
ClickHouse | ClickHouse | 319,488,654 | 1,045 | prog8 | Yeah, I can do a copy-free version, but only for reads; for writes there will still be a `memcpy` because of the hash function (checksum).
I think I will not use the non-compression version in production, because it turns out I would waste too much disk space, so I cannot afford speeding up queries in favor of storage us... | 2017-08-01T20:34:52 | 2017-08-01T20:34:52 | {} | CONTRIBUTOR |
ClickHouse | ClickHouse | 318,290,089 | 1,025 | bamx23 | With the latest version of `docker-compose.yml` file you could start a server with `docker-compose up -d server` and connect a client with `docker-compose run client`. Also HTTP-API will be available on the host machine at port `8123`. | 2017-07-27T08:05:57 | 2017-07-27T08:05:57 | {} | CONTRIBUTOR |
ClickHouse | ClickHouse | 319,055,867 | 1,045 | robot-metrika-test | Can one of the admins verify this patch? | 2017-07-31T12:42:03 | 2017-07-31T12:42:03 | {} | NONE |
ClickHouse | ClickHouse | 319,489,628 | 1,045 | alexey-milovidov | Ok. | 2017-08-01T20:38:24 | 2017-08-01T20:38:24 | {} | MEMBER |
ClickHouse | ClickHouse | 317,797,574 | 1,016 | alexey-milovidov | It depends on what line editing library you use for build.
ClickHouse supports `readline`, `libedit` and also supports build with no editing library.
`readline` should have no issues with Chinese.
`readline` is preferred, if it is available in your system.
To be sure, do `sudo apt-get install libreadline-dev` be... | 2017-07-25T16:43:43 | 2017-07-25T16:44:31 | {} | MEMBER |
ClickHouse | ClickHouse | 317,963,589 | 590 | MikhailKalashnikov | I have a similar error with a distributed table with an alias column:
```sql
CREATE DATABASE tmpdb;
CREATE TABLE tmpdb.test_alias_local10 (
Id Int8,
EventDate Date DEFAULT today(),
field1 Int8,
field2 String,
field3 ALIAS CASE WHEN field1 = 1 THEN field2 ELSE '0' END
) ENGINE = MergeTree(EventDat... | 2017-07-26T06:44:24 | 2017-07-26T06:44:24 | {} | NONE |
ClickHouse | ClickHouse | 315,833,499 | 990 | ztlpn | There is no INTERVAL type at the moment. But you can use arithmetic functions with Date and DateTime types. For example, here is how you get DateTime 3 hours ago from now and Date one week ago from today:
```
:) select now(), now() - 3 * 3600, today(), today() - 7
SELECT
now(),
now() - (3 * 3600),
... | 2017-07-17T18:03:29 | 2017-07-17T18:03:29 | {
"+1": 1
} | CONTRIBUTOR |
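The same arithmetic can be mirrored with Python's datetime module, just to show the semantics (second offsets for DateTime values, day offsets for Date values):

```python
from datetime import datetime, date, timedelta

# ClickHouse: now() - 3 * 3600  -> subtract seconds from a DateTime
# ClickHouse: today() - 7       -> subtract days from a Date
three_hours_ago = datetime.now() - timedelta(seconds=3 * 3600)
one_week_ago = date.today() - timedelta(days=7)

print(three_hours_ago)
print(one_week_ago)
```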
ClickHouse | ClickHouse | 316,121,980 | 996 | alexey-milovidov | See update. | 2017-07-18T16:35:10 | 2017-07-18T16:35:10 | {} | MEMBER |
ClickHouse | ClickHouse | 316,411,526 | 845 | zrkn | +1 | 2017-07-19T14:48:28 | 2017-07-19T14:48:28 | {} | NONE |
ClickHouse | ClickHouse | 316,635,938 | 1,000 | YiuRULE | Did you follow the [building instructions](https://clickhouse.yandex/docs/en/development/build.html), especially the `Use GCC 6 for builds` step?
I also had this kind of problem when building ClickHouse for my PRs; after changing the environment, it worked for me.
When you use `cmake`, does it display that the `GNU ... | 2017-07-20T08:35:24 | 2017-07-20T08:38:43 | {} | CONTRIBUTOR |
ClickHouse | ClickHouse | 316,797,530 | 1,008 | alexey-milovidov | PS. Also you could use name `generateRandomUUID` instead of `generateUUIDv4`. This will clearly state the type of UUID algorithm. This is up to your choice, `generateUUIDv4` is also Ok :) | 2017-07-20T18:55:45 | 2017-07-20T18:55:45 | {} | MEMBER |
ClickHouse | ClickHouse | 319,477,130 | 1,051 | alexey-milovidov | autotests | 2017-08-01T19:49:55 | 2017-08-01T19:49:55 | {} | MEMBER |
ClickHouse | ClickHouse | 319,675,874 | 1,002 | sunsingerus | Made an attempt with master
```
cat /etc/redhat-release
Fedora release 26 (Twenty Six)
g++ --version
g++ (GCC) 7.1.1 20170622 (Red Hat 7.1.1-3)
Copyright (C) 2017 Free Software Foundation, Inc.
```
Results:
compilation OK
linking FAILED, with the same error as [described here](https://github.com/yandex/Clic... | 2017-08-02T13:43:17 | 2017-08-02T13:44:01 | {} | NONE |
ClickHouse | ClickHouse | 316,424,738 | 1,001 | alexey-milovidov | Just change `pragma GCC diagnostic` to `pragma clang diagnostic`. | 2017-07-19T15:30:57 | 2017-07-19T15:30:57 | {} | MEMBER |
ClickHouse | ClickHouse | 316,799,790 | 1,009 | remenska | Alright, thanks again. What I understand from this is: if we want summed-up data from SummingMergeTree to show up with a lag of about an hour (or some other controllable amount of time), we have to force-optimize.
Closing this, thanks again. | 2017-07-20T19:04:09 | 2017-07-20T19:04:09 | {} | NONE |
ClickHouse | ClickHouse | 317,314,863 | 972 | ipolevoy | bump, hey good people from Clickhouse, any advice? | 2017-07-24T04:10:24 | 2017-07-24T04:10:24 | {} | NONE |
ClickHouse | ClickHouse | 317,334,539 | 775 | dmitryluhtionov | After the last three commits, is it correct now?
2017-07-23 8:21 GMT+03:00 alexey-milovidov <notifications@github.com>:
> Don't worry, I am going to rewrite these functions by myself.
>
> —
> You are receiving this because you authored the thread.
> Reply to this email directly, view it on GitHub
> <https://gith... | 2017-07-24T06:51:47 | 2017-07-24T06:51:47 | {} | CONTRIBUTOR |
ClickHouse | ClickHouse | 317,802,992 | 1,008 | alexey-milovidov | Ok, thanks! | 2017-07-25T17:02:11 | 2017-07-25T17:02:11 | {} | MEMBER |
ClickHouse | ClickHouse | 319,010,776 | 1,037 | kam1sh | Also, I formatted this Timestamp field into "YYYY-MM-DD hh:mm:ss", and after that the CSV was successfully eaten by ClickHouse. But it's a temporary solution. | 2017-07-31T09:06:06 | 2017-07-31T09:06:06 | {} | NONE |
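That reformatting workaround takes only a few lines of Python; the sample value is the one mentioned earlier in the thread, interpreted here as UTC:

```python
from datetime import datetime, timezone

# Pre-format the epoch field before loading, since Date columns did not
# accept raw unix timestamps at the time.
ts = 1500953104  # value from the thread
formatted = datetime.fromtimestamp(ts, tz=timezone.utc).strftime("%Y-%m-%d %H:%M:%S")
print(formatted)  # 2017-07-25 03:25:04
```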
ClickHouse | ClickHouse | 319,297,055 | 1,016 | jackpgao | Actually I use a Docker image on Mac.
It is based on an Ubuntu image.
I have another question.
Superset, which was created by Airbnb, is a very powerful visualization tool.
I can't input a query with Chinese characters. The errors are as below:
```python
{"status": "failed", "query_id": 1356, "error_essage": "C... | 2017-08-01T07:56:29 | 2017-08-01T07:56:29 | {} | NONE |
ClickHouse | ClickHouse | 319,355,786 | 1,016 | jackpgao | superset use sqlalchemy_clickhouse: https://github.com/cloudflare/sqlalchemy-clickhouse | 2017-08-01T12:28:29 | 2017-08-01T12:28:29 | {} | NONE |
ClickHouse | ClickHouse | 316,326,248 | 1,001 | robot-metrika-test | Can one of the admins verify this patch? | 2017-07-19T09:24:13 | 2017-07-19T09:24:13 | {} | NONE |
ClickHouse | ClickHouse | 316,356,023 | 1,001 | kolobaev | Debian 8 (Jessie) 3.16.36-1+deb8u2 amd64
gcc-6: Installed: 6.4.0-1
g++-6: Installed: 6.4.0-1
libc6: Installed: 2.24-12
| 2017-07-19T11:27:37 | 2017-07-19T11:27:37 | {} | NONE |
ClickHouse | ClickHouse | 318,803,646 | 717 | VladislavPershin | @seufagner, 16.04 ubuntu:
sed -i 's/trusty/xenial/' /etc/apt/sources.list.d/clickhouse.list
apt-get update
apt-get install clickhouse-server-base clickhouse-server-common clickhouse-client -y | 2017-07-29T04:40:14 | 2017-07-29T04:40:14 | {} | NONE |
ClickHouse | ClickHouse | 315,767,624 | 9 | elrik75 | Histogram approximation in one pass (without setting the min/max) is also possible, as explained here: http://jmlr.org/papers/volume11/ben-haim10a/ben-haim10a.pdf
Implementations in various languages are available. | 2017-07-17T14:12:23 | 2017-07-17T14:12:23 | {} | NONE |
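A toy version of the one-pass method from that paper (merge the closest pair of weighted centroids whenever the bin budget is exceeded) might look like this; it is a sketch, not the paper's full algorithm with quantile interpolation:

```python
def streaming_histogram(values, max_bins=5):
    """One-pass histogram sketch in the spirit of Ben-Haim & Tom-Tov (2010):
    keep at most max_bins (centroid, count) pairs; when over budget, merge
    the two closest centroids, weighted by their counts."""
    bins = []  # sorted list of [centroid, count]
    for v in values:
        bins.append([float(v), 1])
        bins.sort(key=lambda b: b[0])
        while len(bins) > max_bins:
            # find the adjacent pair with the smallest gap between centroids
            i = min(range(len(bins) - 1), key=lambda j: bins[j + 1][0] - bins[j][0])
            (c1, n1), (c2, n2) = bins[i], bins[i + 1]
            bins[i:i + 2] = [[(c1 * n1 + c2 * n2) / (n1 + n2), n1 + n2]]
    return bins

print(streaming_histogram([1, 2, 40, 41, 42, 43, 90], max_bins=3))
```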
ClickHouse | ClickHouse | 315,844,683 | 982 | ztlpn | Good catch! Yes, when one of the shards has a local replica, ClickHouse uses it regardless of any settings. | 2017-07-17T18:42:55 | 2017-07-17T18:42:55 | {} | CONTRIBUTOR |
ClickHouse | ClickHouse | 316,693,209 | 1,008 | robot-metrika-test | Can one of the admins verify this patch? | 2017-07-20T12:52:02 | 2017-07-20T12:52:02 | {} | NONE |
ClickHouse | ClickHouse | 316,796,134 | 1,008 | alexey-milovidov | Mostly Ok.
1. I prefer the function to be named with a lowercase `v`: `generateUUIDv4`,
because in similar abbreviations `v` is traditionally lowercase. Examples: `IPv4`, `IPv6`.
2. Method `RandImpl::execute` now assumes that sizeof(T) divides sizeof(ReturnType). This looks unnatural. We could do better. ... | 2017-07-20T18:50:21 | 2017-07-20T18:51:49 | {} | MEMBER |
ClickHouse | ClickHouse | 316,799,177 | 1,009 | alexey-milovidov | > Is there any structured way to control or estimate how long before the Summing merge tree is merged, based, say, on data rate or something?
It is difficult to give exact number. You could expect about tens of data parts for each partition, with sizes having power law distribution. Data parts are get merged to sing... | 2017-07-20T19:01:52 | 2017-07-20T19:01:52 | {} | MEMBER |
ClickHouse | ClickHouse | 317,023,439 | 1,010 | ztlpn | Are you sure that the problem lies in the ClickHouse init script and not in your scripts? I tried substituting clickhouse-server for nginx (off the top of my head) and got the same problem...
Also, IMO it is bad practice to restart anything in postinst scripts - any problem like this and the packaging system is locked. | 2017-07-21T14:55:30 | 2017-07-21T14:55:30 | {} | CONTRIBUTOR |
ClickHouse | ClickHouse | 317,229,942 | 775 | alexey-milovidov | Just realized that these functions are totally wrong:
```
:) SELECT MACStringToNum('01:02:03:04:05:06') AS x, hex(x), MACNumToString(1108152157446)
SELECT
MACStringToNum('01:02:03:04:05:06') AS x,
hex(x),
MACNumToString(1108152157446)
┌─────────────x─┬─hex(MACStringToNum(\'01:02:03:04:05:06... | 2017-07-23T05:20:36 | 2017-07-23T05:20:36 | {} | MEMBER |
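For reference, the expected behaviour of such conversion functions can be written out in Python (hypothetical helper names; this is not the ClickHouse implementation). The numeric value matches the one shown in the query above:

```python
def mac_string_to_num(s: str) -> int:
    # '01:02:03:04:05:06' -> 0x010203040506 (six bytes, big-endian)
    parts = s.split(':')
    assert len(parts) == 6
    return int(''.join(parts), 16)

def mac_num_to_string(n: int) -> str:
    # inverse: extract the six bytes from high to low
    return ':'.join(f'{(n >> shift) & 0xFF:02x}' for shift in range(40, -1, -8))

x = mac_string_to_num('01:02:03:04:05:06')
print(x)                      # 1108152157446
print(mac_num_to_string(x))   # 01:02:03:04:05:06
```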
ClickHouse | ClickHouse | 317,229,960 | 775 | alexey-milovidov | Don't worry, I am going to rewrite these functions by myself. | 2017-07-23T05:21:09 | 2017-07-23T05:21:09 | {} | MEMBER |
ClickHouse | ClickHouse | 319,160,465 | 1,046 | robot-metrika-test | Can one of the admins verify this patch? | 2017-07-31T18:47:02 | 2017-07-31T18:47:02 | {} | NONE |
ClickHouse | ClickHouse | 319,161,070 | 1,046 | robot-metrika-test | Can one of the admins verify this patch? | 2017-07-31T18:49:02 | 2017-07-31T18:49:02 | {} | NONE |
ClickHouse | ClickHouse | 316,422,985 | 1,001 | alexey-milovidov | What is the motivation of this change?
As I remember, the warning appears when compiling with clang
(note that clang understands `#pragma GCC diagnostic` as well)
And when disabling ICU, private field is really unused.
(And I decided to keep ifdefs in .cpp file, not in header, though it's also possible to move pr... | 2017-07-19T15:25:21 | 2017-07-19T15:25:21 | {} | MEMBER |
ClickHouse | ClickHouse | 317,005,991 | 1,003 | proller | Please
rm build/CMakeCache.txt
run cmake again and share output | 2017-07-21T13:49:40 | 2017-07-21T13:49:40 | {} | CONTRIBUTOR |
ClickHouse | ClickHouse | 317,006,804 | 1,003 | proller | Also, you usually don't need the `failover` executable; to skip all tests, run make:
make -j16 clickhouse
If it fails again, please run
make VERBOSE=1 clickhouse
and show us the failed command | 2017-07-21T13:52:33 | 2017-07-21T13:52:33 | {} | CONTRIBUTOR |
ClickHouse | ClickHouse | 317,536,425 | 1,017 | alexey-milovidov | Thank you for the appreciation!
AVX optimizations are on our long list of features to implement. Probably we will introduce them in a few places where they could provide maximum benefit (your suggestions?). This will require dynamic dispatching, as there is still a lot of hardware without AVX support. | 2017-07-24T19:56:37 | 2017-07-24T19:56:37 | {
"+1": 1
} | MEMBER |
ClickHouse | ClickHouse | 316,370,320 | 1,002 | sunsingerus | having the same error on Fedora 26 | 2017-07-19T12:30:05 | 2017-07-19T12:30:05 | {} | NONE |
ClickHouse | ClickHouse | 317,372,121 | 1,018 | ztlpn | Duplicate of #934
It was fixed in #939 | 2017-07-24T09:43:30 | 2017-07-24T09:43:30 | {} | CONTRIBUTOR |
ClickHouse | ClickHouse | 317,552,824 | 1,017 | innerr | Look forward to it! | 2017-07-24T20:58:53 | 2017-07-24T20:58:53 | {} | CONTRIBUTOR |
ClickHouse | ClickHouse | 319,349,254 | 1,016 | alexey-milovidov | In the error message we see that the query was cut: it should end with `FORMAT TabSeparatedWithNamesAndTypes` but actually we have `FORMAT TabSeparatedWithNamesAndT`. It is definitely related to multibyte characters.
Looks like an issue either with Superset or with ClickHouse JDBC driver.
I will ask developer of JDBC driv... | 2017-08-01T11:55:23 | 2017-08-01T11:55:23 | {} | MEMBER |
ClickHouse | ClickHouse | 319,361,131 | 1,016 | serebrserg | This query works correctly through the JDBC driver:
```sql
select '北京' as n where '北京' = '北京'
```
Anyway that doesn't relate to the issue. | 2017-08-01T12:52:30 | 2017-08-01T12:52:30 | {} | CONTRIBUTOR |