clickhouse / clickhouse

ClickHouse® is a free analytics DBMS for big data

Home Page: https://clickhouse.com

License: Apache License 2.0

Topics: dbms, olap, analytics, sql, distributed-database, big-data, mpp, clickhouse, hacktoberfest

clickhouse's Introduction


ClickHouse® is an open-source column-oriented database management system that allows generating analytical data reports in real-time.

How To Install (Linux, macOS, FreeBSD)

curl https://clickhouse.com/ | sh

Useful Links

  • Official website has a quick high-level overview of ClickHouse on the main page.
  • ClickHouse Cloud: ClickHouse as a service, built by the creators and maintainers.
  • Tutorial shows how to set up and query a small ClickHouse cluster.
  • Documentation provides more in-depth information.
  • YouTube channel has a lot of content about ClickHouse in video format.
  • Slack and Telegram allow chatting with ClickHouse users in real-time.
  • Blog contains various ClickHouse-related articles, as well as announcements and reports about events.
  • Code Browser (github.dev) with syntax highlighting.
  • Contacts can help you get your questions answered if you have any.

Monthly Release & Community Call

Every month we get together with the community (users, contributors, customers, those interested in learning more about ClickHouse) to discuss what is coming in the latest release. If you are interested in sharing what you've built on ClickHouse, let us know.

Upcoming Events

Keep an eye out for upcoming meetups and events around the world. Somewhere else you want us to be? Please feel free to reach out to tyler <at> clickhouse <dot> com. You can also peruse ClickHouse Events for a list of all upcoming trainings, meetups, speaking engagements, etc.

Recent Recordings

  • Recent Meetup Videos: Meetup Playlist. Whenever possible, recordings of the ClickHouse Community Meetups are edited and presented as individual talks. Currently featuring "Modern SQL in 2023", "Fast, Concurrent, and Consistent Asynchronous INSERTS in ClickHouse", and "Full-Text Indices: Design and Experiments".
  • Recording available: v24.2 Release Call. All the features of 24.2 in one convenient video! Watch it now!

Interested in joining ClickHouse and making it your full-time job?

We are a globally diverse and distributed team, united behind a common goal of creating industry-leading, real-time analytics. Here, you will have an opportunity to solve some of the most cutting-edge technical challenges and have direct ownership of your work and vision. If you are a contributor by nature, a thinker and a doer - we’ll definitely click!

Check out our current openings here: https://clickhouse.com/company/careers

Can't find what you are looking for, but want to let us know you are interested in joining ClickHouse? Email [email protected]!


clickhouse's Issues

Implement DETACH PART

It's possible to ATTACH PARTs manually, but DETACH only works with whole partitions, which makes moving data between nodes impossible. Currently, the only workaround is to delete the part's directory and restart the node, but that is not reliable: we don't keep track of merges that might include the part we want to delete, and queries executed before the restart will fail because of the missing part.

At least support for DETACH PART for non-replicated MergeTree tables would be a good starting point and would solve our issue since we handle replication externally.

Also see the discussion here: https://groups.google.com/forum/#!msg/clickhouse/ZABNyXsiIWI/fZyFk6IQBAAJ
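
For illustration, a sketch of the proposed syntax next to what works today (table and part names are illustrative; the DETACH PART form is the feature being requested here, though later ClickHouse releases did add it):

-- Proposed: detach a single part by name
ALTER TABLE events DETACH PART '20160801_20160801_1_1_0'

-- Existing: only whole partitions can be detached
ALTER TABLE events DETACH PARTITION 201608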

Filters do not work on a sorted integer column with negative values

If a table is sorted by an integer column and that column contains negative values, then equality filters on the column do not always work correctly.

Here is the test case:

::) CREATE TABLE test
:-] (
:-]     key Int32,
:-]     name String,
:-]     merge_date Date
:-] ) ENGINE = MergeTree(merge_date, key, 8192);

CREATE TABLE test
(
    key Int32,
    name String,
    merge_date Date
) ENGINE = MergeTree(merge_date, key, 8192)

Ok.

0 rows in set. Elapsed: 0.037 sec.

:) insert into test values (1,'1','2016-07-07')

INSERT INTO test VALUES

Ok.

1 rows in set. Elapsed: 0.001 sec.

:) select * from test where key=1

SELECT *
FROM test
WHERE key = 1

┌─key─┬─name─┬─merge_date─┐
│   1 │ 1    │ 2016-07-07 │
└─────┴──────┴────────────┘

1 rows in set. Elapsed: 0.001 sec.

:) insert into test values (-1,'-1','2016-07-07')

INSERT INTO test VALUES

Ok.

1 rows in set. Elapsed: 0.001 sec.

:) select * from test where key=1

SELECT *
FROM test
WHERE key = 1

┌─key─┬─name─┬─merge_date─┐
│   1 │ 1    │ 2016-07-07 │
└─────┴──────┴────────────┘

1 rows in set. Elapsed: 0.001 sec.

:) optimize table test

OPTIMIZE TABLE test

Ok.

0 rows in set. Elapsed: 0.001 sec.

:) select * from test where key=1

SELECT *
FROM test
WHERE key = 1

Ok.

0 rows in set. Elapsed: 0.001 sec.

:)

Running INSERT query on a VIEW does not throw an exception

When trying to insert into a non-materialized VIEW, ClickHouse returns Ok
but actually inserts nothing.
It is expected that nothing is inserted in this case, since the documentation says a non-materialized view is just a saved query.
However, I think an exception should be thrown here to prevent potential bugs.
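
A minimal repro sketch (table and view names are illustrative, not from the original report):

CREATE TABLE src (x UInt32) ENGINE = Memory
CREATE VIEW v AS SELECT x FROM src

INSERT INTO v VALUES (1)  -- returns Ok
SELECT count() FROM v     -- 0: nothing was stored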

Docs lack an example for INSERT via clickhouse-client

echo -ne "1, 'some text', '2016-08-14 00:00:00'\n2, 'some more text', '2016-08-14 00:00:01'" | clickhouse-client --database=test --query="INSERT INTO test FORMAT CSV";

cat <<_EOF | clickhouse-client --database=test --query="INSERT INTO test FORMAT CSV";
3, 'some text', '2016-08-14 00:00:00'
4, 'some more text', '2016-08-14 00:00:01'
_EOF

cat file.csv | clickhouse-client --database=test --query="INSERT INTO test FORMAT CSV";

Option for returning result instead of empty

We need to add an option (setting) to return a result from one of the rows containing the initial values of the aggregate functions, instead of returning no result (as we discussed here: https://groups.google.com/forum/#!topic/clickhouse/2JS_yzvYAHM).
This is described in the documentation:
"However, in contrast to standard SQL, if the table doesn't have any rows (either there aren't any at all, or there aren't any after using WHERE to filter), an empty result is returned, and not the result from one of the rows containing the initial values of aggregate functions."

It will be very useful.
Thanks!
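
To illustrate the behavior in question (my example, not from the report; later releases added an empty_result_for_aggregation_by_empty_set setting for exactly this, if I recall correctly):

SELECT count(), sum(x) FROM t WHERE 0
-- ClickHouse: empty result set
-- standard SQL: one row with count() = 0 and sum(x) = NULL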

how to delete data at specific hour

I have

CREATE TABLE default.toy
(
    day Date, 
    hour Int8, 
    info String
) ENGINE = MergeTree(day, (day, hour), 1024)

and I batch-insert data every hour. From the docs I can drop a month's partition, but if one hour of dirty data was inserted, how do I delete just that hour's data? I don't want to drop the whole month's partition.
Can I sub-partition by day, or by hour?
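
For reference, later ClickHouse releases (not available when this was asked) added custom partitioning and mutations, either of which covers this; a sketch:

-- Partition by (day, hour) so a single hour can be dropped:
CREATE TABLE toy (day Date, hour Int8, info String)
ENGINE = MergeTree PARTITION BY (day, hour) ORDER BY (day, hour)

ALTER TABLE toy DROP PARTITION ('2016-07-07', 13)

-- Or delete the rows in place with a mutation:
ALTER TABLE toy DELETE WHERE day = '2016-07-07' AND hour = 13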

Aggregate function running out of memory

Hi,

I'm creating a table containing 3 columns (uid, ev and timestamp) and running the sequenceMatch function grouped by uid, searching for a pattern on events. The table has 10 million unique uids and 100 evs per uid, so a total of 1 billion rows. The query below is running out of memory (which is 8 GB on the test machine). I understand from the docs that a MergeTree is sorted by the primary key within each part.

Is it guaranteed that a uid is present in a single part, as it is part of the primary key? If so, can the aggregation not be merged and flushed per part, rather than all at the end, deallocating the data as it goes?

If uid is not guaranteed to be in the same part, can you suggest a better alternative to achieve the result below?

Table:
CREATE TABLE ev ( uid String, ev String, t DateTime, d Date) ENGINE = MergeTree(d, (uid, t, d), 8192)

Query:
SELECT count()
FROM
(
SELECT uid
FROM ev
GROUP BY uid
HAVING sequenceMatch('{"pattern":["ev1","ev2"]}')(t, ev)
)

Thanks
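
One possible mitigation (an assumption on my part, not from the thread; note that in general a uid is not confined to a single part, since every INSERT creates new parts that are only merged over time): let the GROUP BY spill to disk once it exceeds a memory budget, using a setting added in later releases:

SET max_bytes_before_external_group_by = 4000000000  -- spill to disk after ~4 GB
SET max_memory_usage = 8000000000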

/usr/bin/ld: cannot find -lanl

I'm trying to install ClickHouse on CentOS 7.

Installation steps
https://github.com/yandex/ClickHouse/blob/master/doc/build.md

error information is as follows:

/usr/bin/ld: cannot find -lanl
collect2: error: ld returned 1 exit status
make[2]: *** [contrib/libpoco/bin/f2cpsp] Error 1
make[1]: *** [contrib/libpoco/PageCompiler/File2Page/CMakeFiles/File2Page.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....

Looks like the ld command cannot find the libanl library.

echo $LD_LIBRARY_PATH
/lib64:/usr/lib64:/usr/local/lib64:/usr/local:/usr/local/lib/mysql:/usr/lib

ls -l /usr/lib64/libanl.so
lrwxrwxrwx 1 root root 23 Jul 27 16:00 /usr/lib64/libanl.so -> ../../lib64/libanl.so.1

ls -l /lib64/libanl.so
lrwxrwxrwx 1 root root 23 Jul 27 16:00 /lib64/libanl.so -> ../../lib64/libanl.so.1

rpm -qf /lib64/libanl.so.1
glibc-2.17-106.el7_2.6.x86_64
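
A note that may explain this (my diagnosis, not from the thread): LD_LIBRARY_PATH only affects the runtime loader, not the linker's search path, so ld never looks there at link time. Pointing the link-time search path at the directory holding libanl.so should help, e.g.:

export LIBRARY_PATH=/usr/lib64:$LIBRARY_PATH
# or pass it through the linker flags explicitly:
cmake -DCMAKE_EXE_LINKER_FLAGS="-L/usr/lib64" ..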

Comparison with ElasticSearch

Great work, thanks for opensourcing it.
It's rare for an open source project to have an abundance of documentation 👍
But for some of the queries in the presented benchmark it might be better to use an inverted index,
like count() on all the advertisements from a particular advertisement provider.
If filtering clicks by advertisement provider is the most expensive stage of the query, an inverted index would serve as a precomputed filter. Faceting/aggregation causes random IO, which might be slow if there are a lot of clicks for that advertisement provider. So it's rather unclear which approach is better.
I'll probably compare them myself, but I'm interested in your opinion on this topic.

C++ client

The documentation makes a reference to a C++ API / some way of programmatically accessing a ClickHouse server from C++ over TCP.

Can you guys separate it out into a libclickhouseclient and provide headers?

Excuse my ignorance if this exists already and I'm unable to find it.

Cannot create table with info_id column name

Run this query through http post interface

CREATE TABLE default.TestTimes (
    EventDate Date,
    info_id UInt32,
    test_id UInt32
) ENGINE = MergeTree(EventDate, info_id, 8192)

got this response

Code: 62, e.displayText() = DB::Exception: Syntax error: failed at position 134 (line 5, col 42): 8192), expected identifier, e.what() = DB::Exception, Stack trace:

0. clickhouse-server(StackTrace::StackTrace()+0xe) [0x10a385e]
1. clickhouse-server(DB::parseQuery(DB::IParser&, char const*, char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0x9e) [0x134442e]
2. clickhouse-server() [0x141406d]
3. clickhouse-server(DB::executeQuery(DB::ReadBuffer&, DB::WriteBuffer&, DB::Context&, std::shared_ptr<DB::IBlockInputStream>&, std::function<void (std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)>)+0x210) [0x14161c0]
4. clickhouse-server(DB::HTTPHandler::processQuery(Poco::Net::HTTPServerRequest&, Poco::Net::HTTPServerResponse&, DB::HTTPHandler::Output&)+0x129b) [0x104cf7b]
5. clickhouse-server(DB::HTTPHandler::handleRequest(Poco::Net::HTTPServerRequest&, Poco::Net::HTTPServerResponse&)+0x78) [0x104db38]
6. clickhouse-server(Poco::Net::HTTPServerConnection::run()+0x298) [0x2a8cb38]
7. clickhouse-server(Poco::Net::TCPServerConnection::start()+0x7) [0x2a6fed7]
8. clickhouse-server(Poco::Net::TCPServerDispatcher::run()+0x11f) [0x2a6c70f]
9. clickhouse-server(Poco::PooledThread::run()+0x97) [0x303e927]
10. clickhouse-server(Poco::ThreadImpl::runnableEntry(void*)+0x87) [0x306e227]
11. /lib64/libpthread.so.0(+0x80a4) [0x7f02cfe800a4]
12. /lib64/libc.so.6(clone+0x6d) [0x7f02cf23ffed]

The parser fails on inf_id as an identifier as well; both failing names start with in, which suggests identifiers beginning with in may be mis-tokenized.
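
A possible workaround (an untested assumption, not from the report): quote the identifier so the tokenizer cannot confuse it with a keyword:

CREATE TABLE default.TestTimes (
    EventDate Date,
    info_id UInt32,
    test_id UInt32
) ENGINE = MergeTree(EventDate, `info_id`, 8192)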

Python driver

Are there any estimates on making a Python driver for ClickHouse?
Or, what is the best way of running queries from Python?
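
In the meantime, the HTTP interface works from any language, Python included; a minimal shell sketch of the same POST that any Python HTTP library can issue:

echo 'SELECT 1' | curl 'http://localhost:8123/' --data-binary @-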

TCP documentation

Hi, I am using ClickHouse from Go, currently via HTTP, but I want TCP transport too. Do you have any documentation about it? For now I get this error (I am doing something wrong):

echo "SELECT 1" | nc localhost 9000
DB::NetException: Unexpected packet from client
0. clickhouse-server(StackTrace::StackTrace()+0xe) [0xfd7c7e]
1. clickhouse-server(DB::Exception::Exception(std::string const&, int)+0x1e) [0xf72a6e]
2. clickhouse-server(DB::TCPHandler::receiveHello()+0xa92) [0xf8de92]
3. clickhouse-server(DB::TCPHandler::runImpl()+0x1b8) [0xf90818]
4. clickhouse-server(DB::TCPHandler::run()+0x17) [0xf91957]
5. clickhouse-server(Poco::Net::TCPServerConnection::start()+0x7) [0x2a84b47]
6. clickhouse-server(Poco::Net::TCPServerDispatcher::run()+0x107) [0x2a8f837]
7. clickhouse-server(Poco::PooledThread::run()+0x7f) [0x307860f]
8. clickhouse-server(Poco::ThreadImpl::runnableEntry(void*)+0x87) [0x3036087]
9. /lib/x86_64-linux-gnu/libpthread.so.0(+0x8182) [0x7fd479fdd182]
10. /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d) [0x7fd4795f847d]
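
What is happening here (my reading of the stack trace, not official documentation): port 9000 speaks the native binary protocol, and the server expects a Hello handshake packet first (note TCPHandler::receiveHello in the trace), so piping raw SQL through nc cannot work. For ad-hoc queries from the shell, the native client or the HTTP port does the job:

clickhouse-client --host localhost --port 9000 --query "SELECT 1"
echo "SELECT 1" | curl 'http://localhost:8123/' --data-binary @-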

toLocalTime function

Hi,

Can we add a toTimeZone function that takes a timezone parameter?

So if I want to convert any datetime to a timezone, I can just do toTimeZone(datetime, "AST") or toTimeZone(datetime, "+4:00") and get the converted datetime back.
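
For reference, later releases did add such a function, taking IANA time zone names rather than abbreviations or offsets; a sketch assuming one of those releases:

SELECT toTimeZone(now(), 'Asia/Dubai')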

Segmentation fault when inserting data

Initially I thought that the primary key must contain at least two columns, so I created a table with a constant expression in the primary key tuple with this query:

create table hoba (naber Date) ENGINE = MergeTree(naber, (1, naber), 8192)

The query is executed successfully and I tried to INSERT data to the table:

INSERT INTO hoba (naber) VALUES ('2010-01-01')

However, this query causes ClickHouse to crash with a segmentation fault and exit ungracefully:

2016.06.21 13:27:03.361 [ 6 ] <Error> BaseDaemon: (from thread 5) Received signal Segmentation fault (11).
2016.06.21 13:27:03.361 [ 6 ] <Error> BaseDaemon: Address: 0x1
2016.06.21 13:27:03.366 [ 6 ] <Error> BaseDaemon: 1. clickhouse-server(DB::IDataTypeNumberFixed<unsigned char, DB::ColumnVector<unsigned char> >::serializeBinary(DB::IColumn const&, unsigned long, DB::WriteBuffer&) const+0x39) [0x100f269]
2016.06.21 13:27:03.366 [ 6 ] <Error> BaseDaemon: 2. clickhouse-server(Poco::ThreadImpl::runnableEntry(void*)+0x87) [0x3018287]
2016.06.21 13:27:03.366 [ 6 ] <Error> BaseDaemon: 3. /lib/x86_64-linux-gnu/libpthread.so.0(+0x8184) [0x7f8078fc7184]
2016.06.21 13:27:03.366 [ 6 ] <Error> BaseDaemon: 4. /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d) [0x7f80785e237d]

Using constant values in the primary key may be invalid, but in that case ClickHouse should reject such a table definition at CREATE time.
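
For the record (my note, not from the report): a single-column primary key is allowed, so the constant is unnecessary in the first place:

create table hoba (naber Date) ENGINE = MergeTree(naber, naber, 8192)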

Enabling CORS for HTTP Interface

Hi,

Is there a configuration setting where we can enable cross-domain resource sharing (CORS) for the HTTP interface?

Right now when I query using jQuery I get the following error:

"No 'Access-Control-Allow-Origin' header is present on the requested resource"

Thanks!
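
For reference (a setting added in later releases, if I remember correctly): add_http_cors_header makes the server return Access-Control-Allow-Origin when the request carries an Origin header, and it can be passed per query:

curl -v -H 'Origin: http://example.com' 'http://localhost:8123/?add_http_cors_header=1&query=SELECT+1'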

example dataset star_schema cannot make successfully

Thank you for open sourcing the great project!

Today I wanted to use the free star_schema dataset, but it does not compile successfully; the error message is as follows:

root@f90500858fe7:/work/datasets/ssb-dbgen# make
gcc -O -DDBNAME=\"dss\" -DLinux -DDB2  -DSSBM    -c -o driver.o driver.c
In file included from driver.c:55:0:
dss.h:201:6: error: conflicting types for 'getopt'
 int  getopt PROTO((int arg_cnt, char **arg_vect, char *oprions));
      ^
In file included from /usr/include/unistd.h:874:0,
                 from driver.c:12:
/usr/include/getopt.h:150:12: note: previous declaration of 'getopt' was here
 extern int getopt (int ___argc, char *const *___argv, const char *__shortopts)
            ^
driver.c: In function 'partial':
driver.c:605:20: warning: format '%d' expects argument of type 'int', but argument 4 has type 'long int' [-Wformat=]
   fprintf (stderr, "\tStarting to load stage %d of %d for %s...",
                    ^
driver.c: In function 'pload':
driver.c:634:20: warning: format '%d' expects argument of type 'int', but argument 3 has type 'long int' [-Wformat=]
   fprintf (stderr, "Starting %d children to load %s",
                    ^
driver.c: In function 'main':
driver.c:1019:5: warning: format '%d' expects argument of type 'int', but argument 3 has type 'long int' [-Wformat=]
     "Generating update pair #%d for %s [pid: %d]",
     ^
driver.c:1118:24: warning: format '%ld' expects argument of type 'long int', but argument 5 has type '__pid_t {aka int}' [-Wformat=]
       fprintf (stderr, "%s data for %s [pid: %ld]",
                        ^
driver.c:1125:13: warning: format '%d' expects argument of type 'int', but argument 3 has type 'long int' [-Wformat=]
      printf("Validation checksum for %s at %d GB: %0x\n",
             ^
driver.c:1125:13: warning: format '%x' expects argument of type 'unsigned int', but argument 4 has type 'long unsigned int' [-Wformat=]
<builtin>: recipe for target 'driver.o' failed
make: *** [driver.o] Error 1

My env:

root@f90500858fe7:/work/datasets/ssb-dbgen# cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=16.04
DISTRIB_CODENAME=xenial
DISTRIB_DESCRIPTION="Ubuntu 16.04 LTS"

root@f90500858fe7:/work/datasets/ssb-dbgen# env
HOSTNAME=f90500858fe7
TERM=xterm
LS_COLORS=rs=0:di=01;34:ln=01;36: ... (truncated)
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
DISABLE_MONGODB=1
PWD=/work/datasets/ssb-dbgen
THREADS=1
CXX=g++-5
SHLVL=1
HOME=/root
CC=gcc-5
_=/usr/bin/env
OLDPWD=/work/datasets

Can you help on this?

Thanks,
Hongbin
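
A likely fix (inferred from the error, not from the thread): modern glibc already declares getopt in unistd.h, so the old prototype at dss.h:201 conflicts with it. Commenting that line out should let the build proceed, e.g.:

sed -i '201 s|^|// |' dss.h
make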

add toStartOfWeek function

Hi!

Can you add toStartOfWeek function?

The behavior of this function would be similar to toStartOfMonth,
but I want the function to return the date truncated to Monday.

example :

toStartOfWeek(2016-06-28, 0) -> 2016-06-26
toStartOfWeek(2016-06-23, 1) -> 2016-06-20

Here the first argument is a date,
and the second argument is the first day of the week (Sunday, Monday, ...).
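
For the Monday-based case a function already exists, matching the second example above (as far as I know, toMonday ships with ClickHouse):

SELECT toMonday(toDate('2016-06-23'))  -- returns 2016-06-20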

OS X docker client connection failure (-it)

On OS X 10.11.5, following the example at https://hub.docker.com/r/yandex/clickhouse-server/, when trying to attach with the native client:

  1. No tty is returned (-it)
  2. The client crashes
    ClickHouse client version 1.1.53998.
    Connecting to localhost:9000.
    Code: 210. DB::NetException: Connection refused: (localhost:9000, ::1)

docker -v
Docker version 1.12.0-rc4, build e4a0dbc, experimental

Server:
docker run -d -p 8123 --name ccs yandex/clickhouse-server

Client:
docker run -it --rm --link ccs:clickhouse-server yandex/clickhouse-client
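
A possible cause (my assumption, not confirmed in the thread): inside the client container, localhost is the client container itself, while the linked server is reachable under the clickhouse-server alias. Pointing the client there may fix the connection:

docker run -it --rm --link ccs:clickhouse-server yandex/clickhouse-client --host clickhouse-server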

AVX Support

Since many Xeon processors available don't actually seem to support SSE4 but do support AVX, are there any plans to update to AVX?

Unknown identifier when a column is referenced in subquery

The following query doesn't work and complains that the t used in the subquery is an unknown identifier.

select (select t), (select dummy), t from system.one array join([1,2] as t)

I guess the problem is that t comes from ARRAY JOIN, and the subqueries can only reference columns from the table system.one. I think this is a consistency problem; however, if it's a technical restriction, the documentation should state it.

Missing SSE 4.2 -- Cannot start on Intel(r) Xeon(r) CPU E5-2630 v3 @ 2.40GHz

I am trying to start ClickHouse but get the error described below. SSE4 is not listed for the CPU, but is SSE 4.2 really so important that ClickHouse cannot start without it? Can we make it start anyway?

root@bc601037-cd0f-c491-80cc-806f9d992c91:~# sudo service clickhouse-server start
SSE 4.2 instruction set is not supported
sudo lshw

     *-cpu
          product: Intel(r) Xeon(r) CPU E5-2630 v3 @ 2.40GHz
          vendor: Intel Corp.
          physical id: 1
          bus info: cpu@0
          width: 64 bits
          capabilities: fpu fpu_exception vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx x86-64 pni monitor ds_cpl est tm2 cx16 xtpr

I would assume SSE4 would have to be listed here, but the CPU's ARK page is quite unhelpful for finding out whether the CPU supports it. It only lists AVX 2.0.
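
A note on the hardware (my reasoning, not from the thread): the E5-2630 v3 is a Haswell part, and Haswell does support SSE 4.2 (AVX2 implies it), so the flag is most likely being masked by the hypervisor or truncated in the lshw output. The kernel's own view should settle it:

grep -o 'sse4_2' /proc/cpuinfo | sort -u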

Type's default value used upon insert via pipe instead of table's schema defined defaults

There's a table with default values for column:

CREATE TABLE xdr_tst (
    date Date,
    answered Int8 DEFAULT -1,
    end_date UInt64,
    start_date UInt64
) ENGINE = MergeTree(date, (start_date, end_date), 8192);

I do insert of the following data:

cat /tmp/zero.csv | od -c
0000000 s t a r t _ d a t e \t e n d _ d
0000020 a t e \t a n s w e r e d \t d a t
0000040 e \n 6 3 2 2 4 1 1 1 9 9 4 3 1 1
0000060 1 9 1 9 0 \t 6 3 2 2 4 1 1 1 9 9
0000100 4 3 1 1 1 9 1 9 0 \t \t 2 0 1 6 -
0000120 0 8 - 2 4 \n
0000126

Note that data for the column with default values is missing

As a result I have:

SELECT *
FROM xdr_tst

┌───────date─┬─answered─┬────────────end_date─┬──────────start_date─┐
│ 2016-08-24 │        0 │ 6322411199431119190 │ 6322411199431119190 │
└────────────┴──────────┴─────────────────────┴─────────────────────┘

1 rows in set. Elapsed: 0.002 sec.

But I expected that answered would be equal to -1 in this case.

commands and source data are attached
req.txt
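
Possibly relevant (later additions; I may be misremembering the exact setting names): newer versions have settings that make text formats fall back to the column's DEFAULT for omitted or empty fields:

SET input_format_defaults_for_omitted_fields = 1
SET input_format_tsv_empty_as_default = 1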

Minor bugfix to Russian doc

Doc URL https://clickhouse.yandex/reference_ru.html#%D0%92%D0%BE%D0%B7%D0%BC%D0%BE%D0%B6%D0%BD%D1%8B%D0%B5%20%D0%B3%D0%BB%D1%83%D0%BF%D1%8B%D0%B5%20%D0%B2%D0%BE%D0%BF%D1%80%D0%BE%D1%81%D1%8B

-"Распределённая сортировка является основной причиной тормозов при выполнении несложных map-reduce задач."
+"Распределённая сортировка является основной причиной долгого выполнения при выполнении несложных map-reduce задач."

-"Впрочем, производительность при выполнении таких задач является сильно неоптимальной по сравнению"
+"Впрочем, производительность при выполнении таких задач является не оптимальной в использовании по сравнению"

-"Задачи в YT выполняются с помощью произвольного кода в режиме streaming"
+"Задачи в YT выполняются с помощью произвольного кода в режиме потока(ов) (streaming)"

-"(как наиболее высоким throughput на длинных запросах, так и наиболее низкой latency на коротких запросах)"
+"(как наиболее высокой пропускной способностью (throughput) на длинных запросах, так и наиболее низкой задержкой (latency) на коротких запросах)"

-"В кажом шарде можно указать от одной до произвольного числа реплик. Можно указать разное число реплик для каждого шарда."
+"В каждом шарде можно указать от одной до произвольного числа реплик. Можно указать разное число реплик для каждого шарда."

-"Движок принимает параметры: имя столбца типа Date"
+"Движок принимает параметры: имя столбца c типом данных Date/Timestamp?"

-"При вставке, данные относящиеся к разным месяцам, разбиваются на разные кусочки"
+"При вставке, данные относящиеся к разным месяцам, разбиваются на разные сегменты"

-"Вставки никак не мешают чтениям"
+"Операции вставки данных никак не мешают операциям чтения"

-"Чтения из таблицы автоматически распараллеливаются."
+"Операции чтения из таблицы автоматически распараллеливаются."

IS NULL and IS NOT NULL equivalents

Hey,
Is there a way to figure out whether the value of a field is the default value or not?

I know that I can do
SELECT * from TABLE where int == 0 for numbers and
SELECT * from TABLE where str == '' for strings, but is there a way to unify these two and write something like
SELECT * from TABLE where var == default(var)?
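
For reference (a later addition, as far as I know): defaultValueOfArgumentType expresses exactly this comparison against the type's default value:

SELECT * from TABLE where var == defaultValueOfArgumentType(var)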

Possible doc bug in DataTypes

https://clickhouse.yandex/reference_en.html#Data%20types
FixedString(N)

When reading a string that contains more bytes, an error message is returned.
When writing a string, null bytes are not trimmed off of the end of the string, but are output.

I assume this should be:

When writing a string that contains more bytes, an error message is returned.
When reading a string, null bytes are not trimmed off of the end of the string, but are output.

Regards,
Oli

problem with loading data

I know NULL isn't supported by ClickHouse, but if I have a CSV with a ,,, line in it, why can't ClickHouse just insert the data with default values?

jdbc driver

I've read that the JDBC driver will be open-sourced later. When will it be ready, and where will the code go?

Thanks.

UDF support

It would be great if we could create user-defined functions in ClickHouse. I don't know whether it has a C++ API for that, but a high-level language runtime such as V8 or Lua would be better, since they allow us to create functions at runtime, similar to PL/pgSQL and PL/SQL.

Since ClickHouse is an analytical database, users will want to perform complex analytical queries such as funnel and retention, and implementing them in ANSI SQL (with joins, CTEs etc.) is quite inefficient. UDFs would help us avoid expensive JOINs, so it would be a huge win.

UDFs can be created using CREATE FUNCTION syntax similar to this one:

CREATE FUNCTION dummy_func() RETURNS int8 AS '
    return 1;
' LANGUAGE V8;

Implementing aggregate functions may be harder than scalar functions but even the support for scalar functions would be great.

Error fetching data from materialized view

I loaded example ontime dataset and created a materialized view with the following definition:

CREATE MATERIALIZED VIEW basic ENGINE = AggregatingMergeTree(FlightDate, Carrier, 8192) AS
SELECT
    FlightDate,
    Carrier,
    uniqState(FlightNum) AS Users
FROM ontime
GROUP BY
    FlightDate,
    Carrier

Then I inserted data into the ontime table and tried to query the basic table with SELECT * FROM basic LIMIT 10 via clickhouse-client.

The output of the query is:

Exception on client:
Code: 89. DB::Exception: QuickLZ compression method is disabled: while receiving packet from localhost:9000, ::1

The problem is that I am trying to fetch the aggregation state stored in the Users column. I can execute the following query successfully: select uniqMerge(Users) from basic, since the output of uniqMerge is a numeric value. I think the server should throw a more descriptive error or return the aggregation state as a binary type.

Issue passing timestamps with ms precision

I am trying to import 200M CSV records into ClickHouse to take it for a spin. I have the following schema:

CREATE TABLE vod (
  createdAt DateTime,
  sessionId String,
  deviceId String,
  clientTimestamp DateTime,
  event String,
  publisherId String,
  mediaId String,
  mediaDuration Float32,
  remoteAddress String,
  fromPosition Float32,
  toPosition Float32,
  secondsViewed Float32,
  browser String,
  os String,
  platform String,
  version String,
  plugin String,
  timezoneOffset Int16,
  url String,
  altMediaId String,
  clockDifferential Int32,
  seriesName String,
  episodeName String,
  channel String,
  classification String,
  errors UInt16,
  vendorVersion String,
  screenType String,
  protocolVersion String,
  publisherName String,
  genre String,
  programId String,
  programName String,
  demo1 String,
  demo2 String,
  demo3 String,
  connectionType String,
  streamingType String,
  longitude Float32,
  latitude Float32,
  id String,
  mediaType String,
  insertedAt DateTime,
  clientDeviceId String,
  endpoint String
)
ENGINE = Merge(createdAt, (publisherName, createdAt), 8192);

The problem I have is that createdAt is specified as 2016-07-14T14:00:00.002Z, which causes issues when I try to import the data. Here is the error:

ubuntu@secondary-realtime:/vol/20160715/vod/data$ cat 00.csv | clickhouse-client --query="INSERT INTO vod FORMAT CSV"
Code: 27. DB::Exception: Cannot parse input: expected , before: T14:00:00.002Z,35d326b8-d9df-f916-2926-feca78234209,ae533d61912a45a76462f3bde8b44b89,2016-07-14T14:00:00.164Z,PROGRESS,f0619170-c1e9-47cf-b347-f068ee3a3b65,9264: 

Row 1:
Column 0,   name: createdAt,         type: Date,    parsed text: "2016-07-14"
ERROR: garbage after Date: "T14:00:00."
ERROR: Date must be in YYYY-MM-DD format.

: (at row 1)

ubuntu@secondary-realtime:/vol/20160715/vod/data$ clickhouse-client 
ClickHouse client version 1.1.53988.
Connecting to localhost:9000.
Connected to ClickHouse server version 1.1.53988.

Here is a typical row of CSV.

2016-07-14T14:00:00.002Z,35d326b8-d9df-f916-2926-feca78234209,ae533d61912a45a76462f3bde8b44b89,2016-07-14T14:00:00.164Z,PROGRESS,f0619170-c1e9-47cf-b347-f068ee3a3b65,9264697,2580,d4
50eaf3215d99090b6be9ec76d088a1,1173.9,1233.9,60,IE,Windows 8,Microsoft Windows,8.0,shim,600, , ,-162, , , , ,0,Publisher1-PC-1.90-CV-1.8,tablet,1,Publisher1, , , , , , , , , , ,981bc1632b
e4f5d6fdec020b96c129a8,vod, , ,meter

Do you have any advice on how I can proceed?
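
One workaround (a sketch; it assumes every timestamp has exactly the 2016-07-14T14:00:00.002Z shape, and note that the sub-second part is dropped, since DateTime has one-second resolution): rewrite the timestamps into YYYY-MM-DD hh:mm:ss on the way in:

sed -E 's/([0-9]{4}-[0-9]{2}-[0-9]{2})T([0-9]{2}:[0-9]{2}:[0-9]{2})\.[0-9]{3}Z/\1 \2/g' 00.csv | clickhouse-client --query="INSERT INTO vod FORMAT CSV"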

Default values don’t work when using JSONEachRow format w/o specifying column names

The code:

DROP TABLE IF EXISTS test_table;

CREATE TABLE test_table (
  a String,
  b Date,
  c DEFAULT sipHash64(a),
  d MATERIALIZED sipHash64(a)
) ENGINE = MergeTree(b, a, 8192);

INSERT INTO test_table (a, b) values ('1', '2016-01-01');

INSERT INTO test_table (a, b) FORMAT JSONEachRow {"a":"2","b":"2016-01-02"}

INSERT INTO test_table values ('3', '2016-01-03'); -- throws an error

INSERT INTO test_table FORMAT JSONEachRow {"a":"4","b":"2016-01-04"} -- doesn’t throw an error

SELECT *, d FROM test_table ORDER BY a;

The result:

┌─a─┬──────────b─┬────────────────────c─┬────────────────────d─┐
│ 1 │ 2016-01-01 │  5003827105613308882 │  5003827105613308882 │
│ 2 │ 2016-01-02 │ 11449545338359147399 │ 11449545338359147399 │
│ 4 │ 2016-01-04 │                    0 │  3672830208859661989 │
└───┴────────────┴──────────────────────┴──────────────────────┘

The expected result:

┌─a─┬──────────b─┬────────────────────c─┬────────────────────d─┐
│ 1 │ 2016-01-01 │  5003827105613308882 │  5003827105613308882 │
│ 2 │ 2016-01-02 │ 11449545338359147399 │ 11449545338359147399 │
│ 4 │ 2016-01-04 │  3672830208859661989 │  3672830208859661989 │
└───┴────────────┴──────────────────────┴──────────────────────┘

Distinct and order by with different columns

When selecting distinct rows and ordering by another column, I see a strange result. Most RDBMSs return an error on queries like this, except MySQL (where the behavior is documented).

create table t1 (a UInt8,b UInt8) ENGINE=Memory;
INSERT INTO t1 VALUES (2,1),(1,2),(3,3),(2,4);
SELECT * FROM t1

┌─a─┬─b─┐
│ 2 │ 1 │
│ 1 │ 2 │
│ 3 │ 3 │
│ 2 │ 4 │
└───┴───┘
SELECT DISTINCT a FROM t1 ORDER BY b ASC
┌─a─┐
│ 2 │
│ 1 │
│ 3 │
└───┘

The result above is correct if "distinct" is applied after the ordering.

SELECT DISTINCT a FROM t1 ORDER BY b DESC
┌─a─┐
│ 3 │
│ 1 │
│ 2 │
└───┘

The result above is not correct if "distinct" is applied after the ordering.

LEFT JOIN ....ON SUPPORT

Is LEFT JOIN ... ON supported? I tested it and it doesn't work, e.g. ON (a.uid = b.id); using a regex in ON doesn't work either.
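
For reference (the state of things at the time of this report, to the best of my knowledge): only the USING form with an explicit strictness keyword was supported, and ON-clause joins arrived in later releases. A sketch of the supported form:

SELECT * FROM a ANY LEFT JOIN b USING (id)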

Initial column values are not used during DEFAULT expression evaluation.

I've created a table in the following way:

CREATE TABLE IF NOT EXISTS gusev.`vf_vg-desktop_8582E5B16CE53B87` (
  id UInt32,
  track UInt8,
  codec String,
  content String,
  rdate Date DEFAULT today(),
  track_id String DEFAULT concat(concat(concat(toString(track), '-'), codec), content)
) ENGINE=MergeTree(rdate, (id, track_id), 8192)

Then got an exception while trying to insert a row:

INSERT INTO gusev.`vf_vg-desktop_8582E5B16CE53B87` (id,track,codec) FORMAT TabSeparated
1       0       h264

Error in log:

2016.08.04 20:47:08.229 [ 5 ] <Error> HTTPHandler: Code: 10, e.displayText() = DB::Exception: Not found column: 'content', e.what() = DB::Exception, Stack trace:

0. clickhouse-server(StackTrace::StackTrace()+0xe) [0xfcad5e]
1. clickhouse-server(DB::Exception::Exception(std::string const&, int)+0x1e) [0xf65d4e]
2. clickhouse-server(DB::ExpressionAction::execute(DB::Block&) const+0x186f) [0x12848ff]
3. clickhouse-server(DB::ExpressionActions::execute(DB::Block&) const+0x32) [0x1284d42]
4. clickhouse-server(DB::evaluateMissingDefaults(DB::Block&, DB::NamesAndTypesList const&, std::unordered_map<std::string, DB::ColumnDefault, std::hash<std::string>, std::equal_to<std::string>, std::allocator<std::pair<std::string const, DB::ColumnDefault> > > const&, DB::Context const&)+0x63d) [0x136761d]
5. clickhouse-server(DB::AddingDefaultBlockOutputStream::write(DB::Block const&)+0x24) [0x293a564]
6. clickhouse-server(DB::ProhibitColumnsBlockOutputStream::write(DB::Block const&)+0x47) [0x2939b67]
7. clickhouse-server(DB::copyData(DB::IBlockInputStream&, DB::IBlockOutputStream&, std::atomic<bool>*)+0x7d) [0x11d4f0d]
8. clickhouse-server(DB::executeQuery(DB::ReadBuffer&, DB::WriteBuffer&, DB::Context&, std::shared_ptr<DB::IBlockInputStream>&, std::function<void (std::string const&)>)+0x5a1) [0x13597d1]
9. clickhouse-server(DB::HTTPHandler::processQuery(Poco::Net::HTTPServerRequest&, Poco::Net::HTTPServerResponse&, DB::HTTPHandler::Output&)+0x1024) [0xf6f294]
10. clickhouse-server(DB::HTTPHandler::handleRequest(Poco::Net::HTTPServerRequest&, Poco::Net::HTTPServerResponse&)+0x8a) [0xf702ca]
11. clickhouse-server(Poco::Net::HTTPServerConnection::run()+0x26a) [0x2a7716a]
12. clickhouse-server(Poco::Net::TCPServerConnection::start()+0x7) [0x2a66d37]
13. clickhouse-server(Poco::Net::TCPServerDispatcher::run()+0x107) [0x2a71a37]
14. clickhouse-server(Poco::PooledThread::run()+0x7f) [0x305a80f]
15. clickhouse-server(Poco::ThreadImpl::runnableEntry(void*)+0x87) [0x3018287]
16. /lib/x86_64-linux-gnu/libpthread.so.0(+0x8184) [0x7f8c180a8184]
17. /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d) [0x7f8c176ba37d]

But if I define the content column as content String DEFAULT '', then it works as expected.

Include not found

I have built the docker server image, and when I run it using

docker run -it --rm clickhouse/server 

I'm curious whether I should be concerned about the Include not found messages in the console:

Include not found: clickhouse_remote_servers
Include not found: clickhouse_compression
2016.06.16 10:17:42.461 [ 1 ] <Warning> Application: Logging to console
2016.06.16 10:17:42.471 [ 1 ] <Information> : Starting daemon with revision 53981
2016.06.16 10:17:42.471 [ 1 ] <Information> Application: starting up
2016.06.16 10:17:42.471 [ 1 ] <Debug> Application: rlimit on number of file descriptors is 262144
2016.06.16 10:17:42.471 [ 1 ] <Debug> Application: Initializing DateLUT.
2016.06.16 10:17:42.471 [ 1 ] <Trace> Application: Initialized DateLUT.
2016.06.16 10:17:42.472 [ 1 ] <Debug> Application: Configuration parameter 'interserver_http_host' doesn't exist or exists and empty. Will use '85cbe55f8ddd' as replica host.
2016.06.16 10:17:42.472 [ 1 ] <Debug> UsersConfigReloader: Loading users config
Include not found: networks
Include not found: networks
2016.06.16 10:17:42.473 [ 1 ] <Information> Application: Loading metadata.
2016.06.16 10:17:42.473 [ 1 ] <Information> DatabaseOrdinary (default): Total 0 tables.
2016.06.16 10:17:42.473 [ 1 ] <Debug> Application: Loaded metadata.
2016.06.16 10:17:42.473 [ 1 ] <Information> DatabaseOrdinary (system): Total 0 tables.
2016.06.16 10:17:42.474 [ 1 ] <Information> Application: Ready for connections.

non-debian builds: fail on make install / make package

Looks like there is a missing rule

CMake Error at /tmp/build/contrib/libpoco/cmake_install.cmake:36 (file):
  file INSTALL cannot find
  "/tmp/build/contrib/libpoco/Poco/PocoConfigVersion.cmake".
Call Stack (most recent call first):
  /tmp/build/contrib/cmake_install.cmake:43 (include)
  /tmp/build/cmake_install.cmake:37 (include)

Touching /tmp/build/contrib/libpoco/Poco/PocoConfigVersion.cmake makes it possible to complete the process.

clickhouse-client --query does not return results

I'm using clickhouse-client in interactive mode, and when I run a query with an inner join between 2 subqueries, results are returned.

However, when I run the same query in batch mode, i.e. clickhouse-client --query="", no results are returned in bash.

There's no error in the log, and the access log looks the same for the 2 queries.

Can you see if you can replicate it?

Minor bugfix to Russian doc 1

-"Вы можете прервать длинный запрос, нажав Ctrl+C. При этом вам всё-равно придётся чуть-чуть подождать, пока сервер остановит запрос. На некоторых стадиях выполнения, запрос невозможно прервать. Если вы не дождётесь и нажмёте Ctrl+C второй раз, то клиент будет завершён."
+"Вы можете прервать длинный запрос, нажав Ctrl+C. При этом вам всё равно придётся чуть-чуть подождать, пока сервер остановит запрос. На некоторых стадиях выполнения, запрос невозможно прервать. Если вы не дождётесь и нажмёте Ctrl+C второй раз, то клиент будет завершён."

-"Принимает Float32 или Float64 и возвращает UInt8, равный 1, если агрумент не бесконечный и не NaN, иначе 0."
+"Принимает Float32 или Float64 и возвращает UInt8, равный 1, если аргумент не бесконечный и не NaN, иначе 0."

-"Принимает строку, число, дату или дату-с-временем. Возвращает строку, содержащую шестнадцатиричное представление аргумента. Используются заглавные буквы A-F. Не используются префиксы %%0x%% и суффиксы %%h%%. Для строк просто все байты кодируются в виде двух шестнадцатиричных цифр. Числа выводятся в big endian ("человеческом") формате. Для чисел вырезаются старшие нули, но только по целым байтам. Например, %%hex(1) = '01'%%. Даты кодируются как число дней с начала unix-эпохи. Даты-с-временем кодируются как число секунд с начала unix-эпохи."
+"Принимает строку, число, дату или дату-с-временем. Возвращает строку, содержащую шестнадцатеричное представление аргумента. Используются заглавные буквы A-F. Не используются префиксы %%0x%% и суффиксы %%h%%. Для строк просто все байты кодируются в виде двух шестнадцатеричных цифр. Числа выводятся в big endian ("человеческом") формате. Для чисел вырезаются старшие нули, но только по целым байтам. Например, %%hex(1) = '01'%%. Даты кодируются как число дней с начала unix-эпохи. Даты-с-временем кодируются как число секунд с начала unix-эпохи."

data load with POST stack traced

cat /tmp/data_dump.sql | POST 'http://localhost:8123/'
Code: 62, e.displayText() = DB::Exception: Syntax error: failed at end of query.
Expected CHECK, e.what() = DB::Exception, Stack trace:

  1. clickhouse-server(StackTrace::StackTrace()+0xe) [0xfcad5e]
  2. clickhouse-server(DB::parseQuery(DB::IParser&, char const_, char const_, std::string const&)+0x194) [0x127d114]
  3. clickhouse-server() [0x1356c88]
  4. clickhouse-server(DB::executeQuery(DB::ReadBuffer&, DB::WriteBuffer&, DB::Context&, std::shared_ptrDB::IBlockInputStream&, std::function<void (std::string const&)>)+0x20e) [0x135943e]
  5. clickhouse-server(DB::HTTPHandler::processQuery(Poco::Net::HTTPServerRequest&, Poco::Net::HTTPServerResponse&, DB::HTTPHandler::Output&)+0x1024) [0xf6f294]
  6. clickhouse-server(DB::HTTPHandler::handleRequest(Poco::Net::HTTPServerRequest&, Poco::Net::HTTPServerResponse&)+0x8a) [0xf702ca]
  7. clickhouse-server(Poco::Net::HTTPServerConnection::run()+0x26a) [0x2a7716a]
  8. clickhouse-server(Poco::Net::TCPServerConnection::start()+0x7) [0x2a66d37]
  9. clickhouse-server(Poco::Net::TCPServerDispatcher::run()+0x107) [0x2a71a37]
  10. clickhouse-server(Poco::PooledThread::run()+0x7f) [0x305a80f]
  11. clickhouse-server(Poco::ThreadImpl::runnableEntry(void*)+0x87) [0x3018287]
  12. /lib/x86_64-linux-gnu/libpthread.so.0(+0x8182) [0x7eff1f513182]
  13. /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d) [0x7eff1eb2e47d]

Service fails to restart on ubuntu-trusty-14.04-amd64-server-20160314

I'm trying to install ClickHouse on AWS. Initially I decided to use instance-store for better IO performance, and the AMI that supports instance-store is ubuntu-trusty-14.04-amd64-server-20160314.manifest.xml (ami-2c717046).

I simply install ClickHouse from http://repo.yandex.ru/clickhouse/trusty stable main, start the ClickHouse service, and run clickhouse-client. Then I create a table like this one (the exact table definition doesn't make a difference):

CREATE TABLE testtable ( EventDate Date, UserID UInt64, Attrs Nested( Key String, Value String) ) ENGINE = MergeTree(EventDate, UserID, 8192)

Then I restarted the clickhouse-server service, but clickhouse-client was not able to connect to the server. Unfortunately clickhouse-server doesn't log any error message (clickhouse.err, stderr and stdout are empty), but the output of the clickhouse.log file is similar to this:

2016.07.13 12:00:07.699 [ 1 ] <Information> : Starting daemon with revision 53988
2016.07.13 12:00:07.814 [ 1 ] <Information> Application: starting up
2016.07.13 12:00:07.814 [ 1 ] <Debug> Application: rlimit on number of file descriptors is 262144
2016.07.13 12:00:07.814 [ 1 ] <Debug> Application: Initializing DateLUT.
2016.07.13 12:00:07.814 [ 1 ] <Trace> Application: Initialized DateLUT.
2016.07.13 12:00:07.815 [ 1 ] <Debug> Application: Configuration parameter 'interserver_http_host' doesn't exist or exists and empty. Will use 'clickhouse1.localdomain' as replica host.
2016.07.13 12:00:07.815 [ 1 ] <Debug> UsersConfigReloader: Loading users config
2016.07.13 12:00:07.816 [ 1 ] <Information> Application: Loading metadata.
2016.07.13 12:00:07.816 [ 1 ] <Information> DatabaseOrdinary (default): Total 1 tables.

It seems that the server got stuck when loading the table and was never able to start. service clickhouse-server stop also got stuck since the process doesn't respond, so I had to execute the forcestop command in order to shut down the clickhouse-server process.

If I don't create any table, I'm able to restart the service without any issue. Also, when I try to install ClickHouse on ubuntu/images/hvm-ssd/ubuntu-trusty-14.04-amd64-server-20160114.5 (ami-fce3c696), it works as expected.
