
clickhouse-net's Introduction

ClickHouse.ADO

.NET driver for Yandex ClickHouse. This driver implements the native ClickHouse protocol, shamelessly ripped out of the original ClickHouse sources. In some ways it does not comply with ADO.NET conventions; this is intentional.

A description in Russian is also available; see below.

Changelog

v.2.0.5: Support for Map columns (thanks to @jorgeparavicini). Provisional support for JSON columns. Case-insensitive connection strings.

v.2.0.4: Fixed connection becoming unusable after any recoverable error.

v.2.0.3: Added support for INSERT ... SETTINGS setting=value syntax.

v.2.0.2.1 and v.2.0.2.2: Added net461 target and downgraded K4os.Compression.LZ4 requirement for it.

v.2.0.2: Switched to async IO, implemented System.Data.Common stuff like DbProviderFactory. Added support for IPv4 and IPv6 columns.

v.1.5.6-no-polling-on-tls: Backported changes from 2.0.3.

v.1.5.5-no-polling-on-tls: Patched a bug preventing SSL/TLS secured connections from working properly.

v.1.5.5: Added support for Bool type.

v.1.5.3: Fixed errors reading empty arrays.

v.1.5.2: Added support for Date32 type.

v.1.5.1: Introduced a new way to handle ClickHouse's Tuple type: values are now read as System.Tuple<> instead of System.Object[]. This change makes it possible to read Array(Tuple(...))-typed values.

v.1.4.0: Fixed query parsing when values are escaped.

v.1.3.1: Added support for LowCardinality type. Extended support for Decimal types.

v.1.2.6: Added (quite limited) support for timezones.

Important usage notes

SSL/TLS support

To wrap your ClickHouse connection in an SSL/TLS tunnel, first enable it on your server (the tcp_port_secure setting in config.xml), then add Encrypt=True to the connection string (and do not forget to change the port number).

SSL/TLS did not work properly before 1.5.5 and led to an infinite wait for data to arrive. It was 'patched' in the 1.5.5-no-polling-on-tls version and fully fixed in 2.x+.
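
For example (a minimal sketch; the host, credentials and the 9440 port below are placeholders, use whatever your tcp_port_secure is set to):

// Connect over TLS: Encrypt=True plus the server's secure port.
var settings = new ClickHouseConnectionSettings(
    "Host=ch.example.com;Port=9440;Encrypt=True;Database=default;User=default;Password=");
using (var connection = new ClickHouseConnection(settings)) {
    connection.Open();
    // ... run commands over the encrypted connection ...
}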

Raw SQL debug output

If you'd like to see all queries the driver emits to the server, add Trace=True to the connection string and set up a .NET trace listener for the category ClickHouse.Ado.
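
For instance (a rough sketch; any System.Diagnostics listener should do, since the driver writes its trace output under the ClickHouse.Ado category):

using System;
using System.Diagnostics;

// Make the driver's trace output visible on the console.
Trace.Listeners.Add(new TextWriterTraceListener(Console.Out));

var settings = new ClickHouseConnectionSettings(
    "Host=localhost;Port=9000;Database=default;User=default;Password=;Trace=True");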

No multiple queries

The ClickHouse engine does not support parsing multiple queries per IDbCommand.Execute* roundtrip. Please split your queries into separately executed commands.
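
In other words, something like the following (a sketch, assuming an open connection):

// Run each statement as its own command instead of joining them with ';'.
connection.CreateCommand("CREATE TABLE IF NOT EXISTS t (x Int32) ENGINE = Memory").ExecuteNonQuery();
connection.CreateCommand("INSERT INTO t VALUES (1)").ExecuteNonQuery();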

Always use NextResult

Although you might think NextResult is unnecessary given the aforementioned lack of multiple-query support, that is completely wrong! You must always use NextResult: the ClickHouse protocol and engine may and will return multiple result sets per query, and sometimes their schemas differ (definitely with regard to field ordering if the query doesn't specify it explicitly). See the sketch below.
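
A minimal reading pattern that honours this (a sketch, assuming an open connection; it mirrors how the reader is used elsewhere on this page):

// Read every block the server returns; take field names and order from the reader
// itself rather than assuming them from the SELECT list.
using (var reader = connection.CreateCommand("SELECT * FROM some_table").ExecuteReader()) {
    do {
        while (reader.Read()) {
            for (var i = 0; i < reader.FieldCount; i++)
                Console.Write($"{reader.GetName(i)}={reader.GetValue(i)} ");
            Console.WriteLine();
        }
    } while (reader.NextResult());
}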

Hidden bulk-insert functionality

The ClickHouse documentation strongly advises inserting records in bulk (1000+ per request). This driver can do bulk inserts. To do so, use the special insert syntax:

INSERT INTO some_table (col1, col2, col3) VALUES @bulk

Then add a parameter named bulk whose Value is castable to IEnumerable; each of its items must be IEnumerable too. Empty lists are not allowed. Alternatively, you may pass an IBulkInsertEnumerable implementation as the bulk value to speed up processing and use less memory inside the driver. This can be used conveniently with the following syntax:

CREATE TABLE test (date Date, time DateTime, str String, int UInt16) ENGINE=MergeTree(date,(time,str,int), 8192)
class MyPersistableObject:IEnumerable{
	public string MyStringField;
	public DateTime MyDateField;
	public int MyIntField;

	//Count and order of returns must match column order in SQL INSERT
	public IEnumerator GetEnumerator(){
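		// Note: MyDateField is yielded twice on purpose: it feeds both the 'date' and 'time' columns.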
		yield return MyDateField;
		yield return MyDateField;
		yield return MyStringField;
		yield return (ushort)MyIntField;
	}
}

//... somewhere elsewhere ...
var list=new List<MyPersistableObject>();

// fill the list to insert
list.Add(new MyPersistableObject());

var command=connection.CreateCommand();
command.CommandText="INSERT INTO test (date,time,str,int) VALUES @bulk";
command.Parameters.Add(new ClickHouseParameter{
	ParameterName="bulk",
	Value=list
});
command.ExecuteNonQuery();

Extending and deriving

If you've fixed bugs or written a useful addition to this driver, please submit a pull request back here.

If you need some functionality or have found a bug but are unable to implement/fix it yourself, please file a ticket here on GitHub.

ClickHouse.ADO in Russian

.NET driver for Yandex ClickHouse. Unlike the official JDBC client, this driver is not a wrapper over the ClickHouse HTTP interface; it implements the native protocol. The protocol (and parts of its implementation) were shamelessly ripped out of the ClickHouse sources themselves. In some cases this driver does not behave like ordinary ADO.NET drivers; this is intentional and is dictated by ClickHouse specifics.

Important changes

v.2.0.5: Support for Map columns (thanks to @jorgeparavicini). Preliminary support for JSON columns. Connection strings ignore the case of setting names.

v.2.0.4: Fixed the connection breaking after any error, even a recoverable one.

v.2.0.3: Added support for the SETTINGS subclause in SQL INSERT commands.

v.2.0.2.1 and v.2.0.2.2: Added the net461 target and lowered the required K4os.Compression.LZ4 version for it.

v.2.0.2: Switched to asynchronous I/O, implemented the System.Data.Common machinery, including DbProviderFactory. Added support for the IPv4 and IPv6 types.

v.1.5.6-no-polling-on-tls: Backported changes from 2.0.3.

v.1.5.5-no-polling-on-tls: "Fixed" a bug with hangs on connections using SSL/TLS.

v.1.5.5: Added support for the Bool type.

v.1.5.3: Fixed an error reading empty arrays.

v.1.5.2: Added support for the Date32 type.

v.1.5.1: Changed how ClickHouse's Tuple type is read and written. Values are now read as System.Tuple<> instead of the previously used System.Object[]. This change makes it possible to read Array(Tuple(...)) columns, which previously failed with errors.

v.1.4.0: Fixed parsing of queries whose values contain escaped characters.

v.1.3.1: Added support for the LowCardinality type. Improved handling of Decimal types.

v.1.2.6: Added limited support for time zones.

Read this before use

SSL/TLS support

To wrap the ClickHouse protocol in an SSL/TLS tunnel you must, first, enable SSL on the server (the tcp_port_secure setting in config.xml) and, second, add Encrypt=True to the connection string (also do not forget to change the port number you use).

Before version 1.5.5-no-polling-on-tls, SSL/TLS did not work properly and led to hangs. It was "patched" in 1.5.5-no-polling-on-tls and fully fixed in 2.x+.

SQL debug output

If you want to see which SQL statements the driver sends to the server, add Trace=True to the connection string and enable a trace listener for the category ClickHouse.Ado.

No support for multiple queries

The ClickHouse engine cannot process several SQL queries in a single IDbCommand.Execute* call. Split your queries into separate commands.

Always use NextResult

Given the above, it may seem that NextResult is unnecessary, but that is completely wrong. Using NextResult is mandatory: the ClickHouse protocol and engine may and will return several data sets per query, and, worse, the schemas of those sets may differ (at the very least the field order may be shuffled if the query does not specify it explicitly).

Hidden bulk-insert feature

The ClickHouse documentation says it is better to insert data in batches of 100+ records. A special syntax is provided for this:

INSERT INTO some_table (col1, col2, col3) VALUES @bulk

For this command you must supply a parameter named bulk whose Value is convertible to IEnumerable, and each of its elements must in turn also be IEnumerable. In addition, you can pass an object implementing IBulkInsertEnumerable as the bulk parameter value; this reduces memory and CPU usage inside the ClickHouse driver. This is convenient to use with the following syntax:

CREATE TABLE test (date Date, time DateTime, str String, int UInt16) ENGINE=MergeTree(date,(time,str,int), 8192)
class MyPersistableObject:IEnumerable{
	public string MyStringField;
	public DateTime MyDateField;
	public int MyIntField;

	//The number and order of the yield returns must match the number and order of the columns in the SQL INSERT
	public IEnumerator GetEnumerator(){
		yield return MyDateField;
		yield return MyDateField;
		yield return MyStringField;
		yield return (ushort)MyIntField;
	}
}

//... somewhere else ...
var list=new List<MyPersistableObject>();

// fill the list of objects to insert
list.Add(new MyPersistableObject());

var command=connection.CreateCommand();
command.CommandText="INSERT INTO test (date,time,str,int) VALUES @bulk";
command.Parameters.Add(new ClickHouseParameter{
	ParameterName="bulk",
	Value=list
});
command.ExecuteNonQuery();

Extending and deriving

If you have fixed a bug or implemented some feature, please submit a pull request to this repository.

If you are missing some functionality or have found a bug you cannot fix yourself, file a ticket here on GitHub.

clickhouse-net's People

Contributors

alexxstst, antonalekseevaa, bfoxstudio, dmitrydem, dorzhevsky, eminemjk, enragez, ilyabreev, isaaafc, jiangxianfu, jorgeparavicini, killwort, mohammadrhz, olegstotsky, pitchalt, sgtholotoaster, stuv7cb, sych474, treno1, vitaliymf, vlanin, xontab


clickhouse-net's Issues

problem working with datetime on clickhouse-server and clickhouse.ado

I have a problem working with DateTime on clickhouse-server and ClickHouse.Ado.
When I perform an insert with a DateTime from a remote app using ClickHouse.Ado and then select the DateTime column, the server returns the DateTime without the timezone offset.
If I execute the select command from the server machine using clickhouse-client, the server returns the DateTime correctly (with my timezone offset).
Example:
The same query: SELECT toDateTime('2018-07-20 11:55:00')
returned from clickhouse-client directly on the server: 2018-07-20 11:55:00
returned from the remote app with ClickHouse.Ado: 2018-07-20 14:55:00

For reproduction purposes:
My timezone: America/Sao_Paulo (-3 hours)
Debian server version: 9.4
.NET Framework version on remote app: 4.6.1
clickhouse-server version: 1.1.54394 and 1.1.54383 (same result on both versions)
clickhouse.ado version: 1.1.9

Thanks, Ulisses Ottoni
P.S.: Sorry my bad english

Possible infinite cycle in ProtocolFormatter

In my usage scenario a ClickHouse select query can take some time, and the default SocketTimeout value (1000, i.e. 1 s) is too small; this value can of course be changed in the connection string, but I have noticed a strange thing: when the timeout is reached, the data reader (most likely) falls into an infinite loop that never stops (and causes 100% CPU load on one core); I was able to reproduce this situation at least twice.

I've reviewed the code, and it seems this is possible in ProtocolFormatter.ReadBytes method:

            do
            {
                read += _ioStream.Read(bytes, read, i - read);
            }
            while (read < i);

_ioStream (which is actually a NetworkStream) can return 0 if the underlying socket is closed (this is mentioned in the documentation for NetworkStream.Read). It seems the socket is closed when the timeout is reached, so this might be the reason for the infinite loop.

My proposals:

  • increase the default value of SocketTimeout to at least 60000 (60 sec)
  • change the "ReadBytes" code to handle a zero-read result (see the sketch below). Should it throw an exception? I don't know much about the ClickHouse binary protocol.
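
For illustration, the zero-read guard could look roughly like this (a hypothetical rewrite of the loop above, not the actual driver code; the larger timeout is just a connection-string setting such as SocketTimeout=60000):

// Hypothetical sketch: treat a zero-length read as a closed socket and fail fast
// instead of spinning forever.
var bytes = new byte[i];
var read = 0;
do
{
    var chunk = _ioStream.Read(bytes, read, i - read);
    if (chunk == 0)
        throw new System.IO.IOException("Connection closed by the server while reading the response.");
    read += chunk;
} while (read < i);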

ChunkedStream

Good afternoon,

I cannot even build your project.

  1. I downloaded and unpacked it.
  2. I open ClickHouse.sln in Visual Studio 2017.
  3. I compile and get an error:
    Severity Code Description Project File Line Suppression State
    Error CS0246 The type or namespace name 'ChunkedStream' could not be found (are you missing a using directive or an assembly reference?) ClickHouse.Ado D:\Work\ClickHouse-Net-master\ClickHouse.Ado\Impl\Compress\HashingCompressor.cs 34 Active

Maybe there is a build instruction hidden somewhere?

Exceptions received while reading Decimal values from DB

Description

I have a few Decimal(x, y) fields in my DB, and when I try to read data I receive an "Unknown column type" exception.

Steps to reproduce

CREATE TABLE default.logger (
timestamp UInt64,
id UInt64,
value Decimal(19, 9)
) ENGINE = MergeTree()
ORDER BY
id SETTINGS index_granularity = 8192

And I'm trying to read data from this table for a test by simple "Select * FROM tableName" command.

Workarounds

Don't know for now

What should be done

Decimals should be added to Types or method public static ColumnType Create(string name) should be customized.

DB::Exception

Hi. I have an application that logs data into a ClickHouse database with code like this:

private ClickHouseConnection GetConnection()
 {
   var settings = new ClickHouseConnectionSettings("Host=...;Port=...;Database=Data;User=default");
   var chn = new ClickHouseConnection(settings);
   chn.Open();
   return chn;
}
...
using (var chn = GetConnection())
{
   var cmd = chn.CreateCommand($"INSERT INTO ... values...");
   cmd.ExecuteNonQuery();
}

The insert command runs ~10 times per second, but every 2-5 minutes I get an exception in the log:

DB::Exception: Timeout exceeded while receiving data from client. Waited for 300 seconds, timeout is 300 seconds.
StackTrace
   at ClickHouse.Ado.Impl.ProtocolFormatter.ReadAndThrowException()
   at ClickHouse.Ado.Impl.ProtocolFormatter.ReadPacket(Response rv)
   at ClickHouse.Ado.Impl.ProtocolFormatter.ReadResponse()
   at ClickHouse.Ado.ClickHouseCommand.Execute(Boolean readResponse, ClickHouseConnection connection)
   at ClickHouse.Ado.ClickHouseCommand.ExecuteNonQuery()

Any ideas why this happens?

DataReader - Read returns false when called before NextResult

Description

Most ORM frameworks will not call NextResult() before Read(). Consequently it fails with ORM frameworks that are provider agnostic.

Npgsql, Oracle, Vertica and SQL data readers don't enforce NextResult() to be called before Read()
https://github.com/npgsql/npgsql/blob/dev/src/Npgsql/NpgsqlDataReader.cs

Steps to reproduce

Call reader.Read() without calling reader.NextResult()
Expected: Should return true when you have rows

Inserted DateTime value has +10 hours offset

Hi, Andrey.

I'm using ClickHouse.ADO

If I insert DateTime values in DateTime columns in table they will have +10 hours offset.
For example 03.10.2017 12:30:00 becomes 03.10.2017 22:30:00.

It doesn't matter whether the date is UTC or local; it gets shifted by +10 hours anyway. I tested databases on different servers, one at UTC+3, another at UTC+5 and another at UTC+10, but all of them showed the +10 hour offset. I am now in Vladivostok, which is UTC+10, so I thought that might be the cause of the problem. I tried changing the time on my local machine, but that changed nothing.

I tried to insert DateTime values using other ClickHouse clients such as DBeaver. And it worked nice. DateTime values were being inserted as they had to be. So I decided that problem in ClickHouse.ADO.

I looked through your source and tried changing the value of "use_client_time_zone" in QuerySettings, but it didn't help. I also noticed that you calculate the DateTime as a number of seconds since the Unix epoch and then write the bytes of that number to the stream. So maybe the problem is somewhere around there? What do you think?

Add support for UUID Type

Description

Yandex implemented UUID type

What should be done

Implement GuidColumnType and add it to type map in ColumnType

Insert Null value in Nullable(Int16) Column

Description

Continuing to use your library, I came across another bug: when I try to insert a NULL value into a Nullable(Int16) column, I get an error.

Steps to reproduce

  1. The TestTable table was created for testing.

CREATE TABLE tflow.TestTable (
Int16Column Nullable(Int16),
Name String
) ENGINE = ReplacingMergeTree() ORDER BY Name SETTINGS index_granularity = 32768

  2. A test application has been created.

TestTable.cs

public class TestTable : IEnumerable
{
    public Int16? Int16Column { get; set; }
    public string Name { get; set; }

    public IEnumerator GetEnumerator()
    {
        yield return Int16Column;
        yield return Name;
    }
}

        var testConnection = new  ClickHouseConnection(_connString_);

        var list = new List<TestTable>();

        list.Add(new TestTable()
        {
            Int16Column = null,
            Name = "Example 1"
        });

        var commandBulkInsert = testConnection.CreateCommand();
        commandBulkInsert.CommandText = "INSERT INTO TestTable (Int16Column,Name) VALUES @bulk";
        commandBulkInsert.Parameters.Add(new ClickHouseParameter
        {
            ParameterName = "bulk",
            Value = list
        });

        testConnection.Open();
        commandBulkInsert.ExecuteNonQuery();
        testConnection.Close();

Result: System.NullReferenceException: "Object reference not set to an instance of an object."

  3. For comparison, the same NULL value was inserted using DBeaver. SQL:
    INSERT INTO TestTable VALUES (NULL,'Example 2')
  4. Query result
    SELECT * FROM TestTable
Int16Column Name
[NULL] Example 2

GUID to UUID type convertation problem

Description

When inserting records containing a GUID, an incorrect conversion occurs.
As a result, the table contains a value that does not match the transmitted one.

Steps to reproduce

  1. The TestTable table was created for testing.

CREATE TABLE tflow.TestTable (
GlobalUID UUID,
Name String
) ENGINE = ReplacingMergeTree() ORDER BY (GlobalUID, Name) SETTINGS index_granularity = 32768

  2. A test application has been created.

         var testConnection = new  ClickHouseConnection(_connString_);
    
         var list = new List<TestTable>();
    
         list.Add(new TestTable()
         {
             GlobalUID = Guid.Parse("dca0e161-9503-41a1-9de2-18528bfffe88"),
             Name = "Example 1"
         });
    
         var commandBulkInsert = testConnection.CreateCommand();
         commandBulkInsert.CommandText = "INSERT INTO TestTable (GlobalUID,Name) VALUES @bulk";
         commandBulkInsert.Parameters.Add(new ClickHouseParameter
         {
             ParameterName = "bulk",
             Value = list
         });
    
         testConnection.Open();
         commandBulkInsert.ExecuteNonQuery();
         testConnection.Close();
    

TestTable.cs

public class TestTable : IEnumerable
{
    public Guid GlobalUID { get; set; }
    public string Name { get; set; }

    public IEnumerator GetEnumerator()
    {
        yield return GlobalUID;
        yield return Name;
    }
}
  3. For comparison, the same GUID was inserted using DBeaver. SQL:
    INSERT INTO TestTable VALUES (toUUID('dca0e161-9503-41a1-9de2-18528bfffe88'),'Example 2')
  4. Query result
    SELECT * FROM TestTable
GlobalUID Name
41a19503-dca0-e161-88fe-ff8b5218e29d Example 1
dca0e161-9503-41a1-9de2-18528bfffe88 Example 2

Conclusion

As the test shows, when a specific value (dca0e161-9503-41a1-9de2-18528bfffe88) is inserted via ClickHouse-Net, a completely different one (41a19503-dca0-e161-88fe-ff8b5218e29d) ends up in the database.

Hi! I ran into a trivial problem. I can't find how to send a single set of data using the insert statement. How to do it? The example in README does not work for me.

Description

... provide some general description of what has gone wrong or what you're proposing ...

Steps to reproduce

  1. You should include the database structure as one of the steps (just the CREATE commands you used to create the relevant tables, plus some data suitable for a test)
  2. The second most important thing is the SQL commands you run through the driver that give trouble
  3. And lastly, you could include some code that reproduces the error
  4. You may skip any of these steps if you feel they're not relevant to your issue

Workarounds

... if you know some workaround for the issue, provide it here ...

What should be done

... if you know what should be done to mitigate the issue, provide it here ...
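
For reference, a single row can be sent through the driver's documented @bulk mechanism by wrapping it in a one-element list (a sketch, assuming a table test(date Date, str String) and an open connection):

// The bulk value must be an IEnumerable of rows, each row itself an IEnumerable
// of column values; a single row is simply a one-element outer collection.
var row = new object[] { DateTime.Today, "hello" };
var cmd = connection.CreateCommand("INSERT INTO test (date, str) VALUES @bulk");
cmd.Parameters.Add(new ClickHouseParameter { ParameterName = "bulk", Value = new[] { row } });
cmd.ExecuteNonQuery();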

Inability to insert into tables with special characters in the name

When trying to insert into a table whose name needs special quoting, the application gets stuck.

Steps to reproduce

  1. Create any table with name like super_table++
  2. Run any insert (either single or bulk) using syntax
    "INSERT INTO super_db.super_table++ (id, name) VALUES (1, 'meow') "
  3. Application hangs, and after 300s timeout raises an exception and disconnects.

Workarounds

Don't use quotes ;-) But in my case the table name may contain all sorts of characters, not just digits and letters. ClickHouse itself supports creation and insertion with this syntax.

What should be done

Please amend Scanner.NextToken case 2 to be more flexible about characters, and (char)96 (the backtick) has to be allowed in case 1 as well.

executeNonQuery hangs...

Hello,
testing this, I made something very simple that hangs...

I have that table:
create table vince_test (fakedate Date, csa String, server Int32) Engine=MergeTree(fakedate,(csa,server),8192)

and I am doing "insert into vince_test values ('2017-05-17','CSA_CPTY1233',0)"

var cnx = GetConnection();
cnx.ChangeDatabase("my_db");
var sql = "insert into vince_test values ('2017-05-17','CSA_CPTY1233',0)";
cnx.CreateCommand(sql).ExecuteNonQuery(); // <=== hangs forever...

I know it must be something stupid...

hint ?

PS:
when I do:
insert into vince_test (fakedate,csa,server) values ('2017-05-17','CSA_CPTY1233',0)
I get back an exception:
ClickHouse.Ado.ClickHouseException: 'DB::Exception: Checksum doesn't match: corrupted data.'

ouch !

Can this SDK support Json or not?

I want to use JsonEachRow when inserting or querying. Is there any support for JSON, and how can I set the parameter type?

ClickHouseParameter parameter = new ClickHouseParameter
{
DbType = DbType.Object,
ParameterName = "bulk",
Value = list.SafeSelect(x => x.JsonSerialize()).ToArray()
};

What should I write connection string for the cluster clickhouse ?

What should I write connection string for the cluster clickhouse ?
On my test server (a single ClickHouse node) the connection string below works fine.

<connectionStrings>
   <add name="ck.conn" connectionString="Compress=True;CheckCompressedHash=False;Compressor=lz4;Host=192.168.2.10;Port=9000;Database=test;User=default;Password=" />
 </connectionStrings>

but I have no idea how to write a connection string for a ClickHouse cluster. Any ideas?

Select statement with UUID type field fails with System.ArgumentException

Description

A select query fails when there are UUID-type fields in the field list.

Steps to reproduce

  1. Set up a database and fill it with some test data
CREATE DATABASE test_uuid

USE test_uuid

CREATE TABLE contains_uuid

CREATE TABLE contains_uuid (
 id UUID,
 name String
)
ENGINE = Log()

INSERT INTO contains_uuid (id, name) VALUES ('b79d6111-f72a-46f4-81ff-62c16edb573e', 'test1'), ('b79d6111-f72a-52f4-81ff-62c16edb573e', 'test2')
  2. Then
public static void SelectWithUuidFields()
        {
            using (var cnn = GetConnection())
            using (var cmd = cnn.CreateCommand("SELECT id, name FROM contains_uuid"))
            {
                using (var reader = cmd.ExecuteReader())
                {
                    PrintData(reader);
                }
            }

        }

        private static void PrintData(IDataReader reader)
        {
            do
            {
                Console.Write("Fields: ");
                for (var i = 0; i < reader.FieldCount; i++)
                {
                    Console.Write("{0}:{1} ", reader.GetName(i), reader.GetDataTypeName(i));
                }
                Console.WriteLine();
                while (reader.Read())
                {
                    for (var i = 0; i < reader.FieldCount; i++)
                    {
                        var val = reader.GetValue(i);
                        if (val.GetType().IsArray)
                        {
                            Console.Write('[');
                            Console.Write(string.Join(", ", ((IEnumerable) val).Cast<object>()));
                            Console.Write(']');
                        }
                        else
                        {
                            Console.Write(val);
                        }
                        Console.Write(", ");
                    }
                    Console.WriteLine();
                }
                Console.WriteLine();
            } while (reader.NextResult());
        }

and

 reader.NextResult()

throws an exception

System.ArgumentException: 'Object must be an array of primitives.'

StackTrace

   at System.Buffer.BlockCopy(Array src, Int32 srcOffset, Array dst, Int32 dstOffset, Int32 count)
   at ClickHouse.Ado.Impl.ColumnTypes.GuidColumnType.Read(ProtocolFormatter formatter, Int32 rows)
   at ClickHouse.Ado.Impl.Data.ColumnInfo.Read(ProtocolFormatter formatter, Int32 rows)
   at ClickHouse.Ado.Impl.Data.Block.Read(ProtocolFormatter formatter)
   at ClickHouse.Ado.Impl.ProtocolFormatter.ReadPacket(Response rv)
   at ClickHouse.Ado.Impl.ProtocolFormatter.ReadBlock()
   at ClickHouse.Ado.ClickHouseDataReader.NextResult()
   at ConsoleApp2.Program.PrintData(IDataReader reader) in .\Program.cs:line 113
   at ConsoleApp2.Program.SelectWithUuidFields() in .\Program.cs:line 77
   at ConsoleApp2.Program.Main(String[] args) in .\Program.cs:line 41
  3. At the same time, if we modify the query to
"SELECT id, name FROM contains_uuid"

no exception will be thrown and list of string field values will be printed.

Other requisites:

  • net core 2.1 console application
  • "ClickHouse.Ado" nuget package version="1.1.13"

Strange incompatibilities in build for netstandard1.1

Andrey,

My netcore1.0 app references the netstandard1.5 build of ClickHouse.Ado.
Recently I switched to the netcore1.1 target, which starts using the build that targets "netcore1.1", and for some reason this build is quite different from net451 / netstandard1.5: for example, ClickHouseConnection doesn't implement IDbConnection and ClickHouseDataReader doesn't implement IDataReader (??).

Could you explain

  1. why a special "netcore1.1" build is needed at all (netstandard1.5 can be referenced by a wide variety of platforms, including both netcore1.0 and netcore1.1)
  2. why "netcore1.1" doesn't implement standard ADO.NET interfaces (this is really weird)

New logo for ClickHouse

I wanted to contribute to ClickHouse and I designed a logo for ClickHouse. If you like it, you could use it.

(proposed logo image)

Failing to connect (at handshake stage) to Hosted ClickHouse cluster

Description

I have run into a connection issue while trying to work with ClickHouse via this API:

			var connectionString = $"Compress=True;CheckCompressedHash=False;Compressor=lz4;Host={_opts.Host};Port={_opts.Port};Database={_opts.Database};User={_opts.User};Password={_opts.Password}";
			var settings = new ClickHouseConnectionSettings(connectionString);
			var connection = new ClickHouseConnection(settings);
			connection.Open();

Here an exception is raised: SocketException: An existing connection was forcibly closed by the remote host.

After some digging, I discovered that it happens in the middle of the handshake process, right after the client name ("ClickHouse .NET client library") is sent.


Slow NextResult when large dataset

Description

If you return a relatively large result set from ClickHouse (like 25k rows), it takes ~10 seconds for the reader to do what it needs to do.
The query itself takes 300 ms in ClickHouse, but it takes about 10 seconds before the response is returned. I have tried making the same call from a bash console and the result is instant (even when piping the result to a web service), also across machines, to verify that it is not a network issue.

So I'm quite sure the problem is in the .NET implementation, in the NextResult call.

Lz4

I'm having a problem with the LZ4 library you are using. Its author suggested using the new port of lz4. Is it possible to fix this?

Details are here

MiloszKrajewski/lz4net#38

Array(Int64) bulk insert casting exception

Description

During a bulk insert of Array(Int64) column values there is a casting error.

Steps to reproduce

  1. CREATE TABLE tmp ( text_a String, longs Array(Int64), timestamp Date ) ENGINE = MergeTree() PARTITION BY toYYYYMM(timestamp) ORDER BY (timestamp)
    public class TmpObj : IEnumerable
    {
        public string TextA { get; set; }
        public long[] Longs { get; set; }
        public DateTime Timestamp { get; set; }

        public IEnumerator GetEnumerator()
        {
            yield return TextA;
            yield return Longs;
            yield return Timestamp;
        }
    }

    public class Program
    {
        public static async Task Main(string[] args)
        {
            ClickTest();
        }

        private static void ClickTest()
        {
            try
            {
                List<TmpObj> a = new List<TmpObj>();
                a.Add(new TmpObj()
                {
                    TextA = "a",
                    Longs = new long[]{1,2,3},
                    Timestamp = DateTime.Now
                });

                var settings = new ClickHouseConnectionSettings("Host=localhost;Port=9000;Database=default;" +
                                                                "User=default;");
                using (var conn = new ClickHouseConnection(settings))
                using (var command = conn.CreateCommand())
                {
                    command.CommandText = "INSERT INTO tmp " +
                                          "(text_a, longs, timestamp)" +
                                          "VALUES @bulk";
                    command.Parameters.Add(new ClickHouseParameter
                    {
                        ParameterName = "bulk",
                        Value = a
                    });

                    conn.Open();

                    command.ExecuteNonQuery();
                }
            }
            catch (Exception ex)
            {
                throw;
            }
        }
    }
  2. Exception: Unable to cast Int64[] to IEnumerable

What should be done

Fix the casting in ArrayColumnType.ValuesFromConst()

Missed implementation of DbProviderFactory

Hi Andrey,

I've tried to use the netstandard build of ClickHouse.Ado and found that it doesn't have a DbProviderFactory implementation - it is needed for my data access layer used for SQL command generation.
Then I tried to implement it myself and found that both ClickHouseConnection and ClickHouseCommand have no default constructor for some reason (it is needed for the DbProviderFactory.CreateCommand() and CreateConnection() implementations).

And what is more important I found that ClickHouseConnection implements IDbConnection but doesn't derive from DbConnection class (as well as ClickHouseCommand doesn't derive from DbCommand).

Is there some reason why ClickHouse connector cannot be implemented in a standard way? For example, here is Microsoft's SQLite connector implementation: https://github.com/aspnet/Microsoft.Data.Sqlite/tree/dev/src/Microsoft.Data.Sqlite.Core

Please add support for the UUID Type

Description

... provide some general description of what has gone wrong or what you're proposing ...

Steps to reproduce

  1. You should include the database structure as one of the steps (just the CREATE commands you used to create the relevant tables, plus some data suitable for a test)
  2. The second most important thing is the SQL commands you run through the driver that give trouble
  3. And lastly, you could include some code that reproduces the error
  4. You may skip any of these steps if you feel they're not relevant to your issue

Workarounds

... if you know some workaround for the issue, provide it here ...

What should be done

... if you know what should be done to mitigate the issue, provide it here ...

Query_log fault

Description

I think there is a problem when you run a query with your wrapper. After I run a query I look for it in "system.query_log" but I can't find it. I execute other queries with Tabix or with clickhouse-client,
and they do appear, all of them with the same user: "DEFAULT".

I put this in the user configuration in case:
<log_queries>1</log_queries>

Thanks.

SocketException in Handshake

Description

Getting SocketException in Handshake method when trying to read server message

Steps to reproduce

  1. Connect to clickhouse database
  2. Get an exception like this:
    System.IO.IOException: Unable to read data from the transport connection: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond. ---> System.Net.Sockets.SocketException: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond
       at System.Net.Sockets.Socket.Receive(Byte[] buffer, Int32 offset, Int32 size, SocketFlags socketFlags)
       at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size)
    --- End of inner exception stack trace ---
       at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size)
       at ClickHouse.Ado.Impl.UnclosableStream.Read(Byte[] buffer, Int32 offset, Int32 count) in D:\ChildProjects\ClickHouse.NET\ClickHouse.Ado\Impl\UnclosableStream.cs:line 49
       at ClickHouse.Ado.Impl.ProtocolFormatter.ReadBytes(Int32 i) in D:\ChildProjects\ClickHouse.NET\ClickHouse.Ado\Impl\ProtocolFormatter.cs:line 398
       at ClickHouse.Ado.Impl.ProtocolFormatter.ReadByte() in D:\ChildProjects\ClickHouse.NET\ClickHouse.Ado\Impl\ProtocolFormatter.cs:line 423
       at ClickHouse.Ado.Impl.ProtocolFormatter.ReadUInt() in D:\ChildProjects\ClickHouse.NET\ClickHouse.Ado\Impl\ProtocolFormatter.cs:line 352
       at ClickHouse.Ado.Impl.ProtocolFormatter.Handshake(ClickHouseConnectionSettings connectionSettings) in D:\ChildProjects\ClickHouse.NET\ClickHouse.Ado\Impl\ProtocolFormatter.cs:line 67
       at ClickHouse.Ado.ClickHouseConnection.Open() in D:\ChildProjects\ClickHouse.NET\ClickHouse.Ado\ClickHouseConnection.cs:line 96
       at ClickHouse.Isql.Program.Main(String[] args) in D:\ChildProjects\ClickHouse.NET\ClickHouse.ISQL\Program.cs:line 128

Workarounds

I can connect successfully via HTTP (e.g. with HttpWebRequest), but I get this exception via TCP.
How can I fix my problem?
Which server settings should I fix?
Right now I am working on port 8123.

Problem with query_log

I think there is a problem with queries. When I execute a query with this API I can't find it in the system.query_log table.

I execute the same queries with the same user using Tabix or clickhouse-client and they do appear, but when I execute them with this wrapper they don't.

Problem in INSERT from file

I recently started using ClickHouse and am planning to migrate data via this interface.
I have a requirement to insert millions of rows of data from a text file stored on HDFS or Azure storage.
I can't find any solution for this. I need functionality similar to SQL bulk insert for a file on external storage. Let me know if I can get some help with this.
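
The driver has no built-in file import, but the @bulk parameter described in the README accepts any IEnumerable of rows, so one possible approach is to stream the file and yield rows lazily (a sketch; the file path, tab-separated format and column layout text_a String, value Int64 are assumptions, and for millions of rows you would likely want to insert in batches):

using System;
using System.Collections.Generic;
using System.IO;

// Hypothetical helper: turn a local copy of the file into rows for @bulk
// without loading everything into memory at once.
static IEnumerable<object[]> ReadRows(string path) {
    foreach (var line in File.ReadLines(path)) {
        var parts = line.Split('\t');
        yield return new object[] { parts[0], long.Parse(parts[1]) };
    }
}

// Usage, assuming an open ClickHouseConnection `connection`:
var cmd = connection.CreateCommand("INSERT INTO tmp (text_a, value) VALUES @bulk");
cmd.Parameters.Add(new ClickHouseParameter { ParameterName = "bulk", Value = ReadRows("/data/rows.tsv") });
cmd.ExecuteNonQuery();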

Terrible queries performance

Description

As the repo description says, this library is not a wrapper over the HTTP interface, so I thought queries over the native protocol would be faster than, for example, getting JSON and deserializing it. But it works incredibly slowly. I did some investigation that shows exponential growth of the data processing time.

Steps to reproduce

I tried three ways to get the data: a slightly modified method from the tests (TestPerformance), ExecuteSelectCommand from this wrapper, and a simple wrapper for executing requests over HTTP:

{
Stopwatch sw = new Stopwatch();
            sw.Start();    
            for (int i = 1; i <= 16384; i *= 2)
            {
                string query = $"SELECT address from (SELECT DISTINCT  address FROM (SELECT from_address as address FROM addresses) ANY FULL OUTER JOIN (SELECT to_address as address FROM addresses) USING (address) ) LIMIT {i}";
                using (var cnn = GetConnection())
                {
                    var cmd = cnn.CreateCommand(query);
                    var list = new List<List<object>>();
                    using (var reader = cmd.ExecuteReader())
                    {                       
                        reader.ReadAll(x =>
                        {
                            var rowList = new List<Object>();
                            for (var j = 0; j < x.FieldCount; j++)
                                rowList.Add(x.GetValue(j));
                            list.Add(rowList);                          
                        });
                    }
                }
                Console.WriteLine($"{i} records: {sw.ElapsedMilliseconds} ms. IDataReader.ReadAll");                
                var nodes = _db.ExecuteSelectCommand(query);
                sw.Restart();
                Console.WriteLine($"{i} records: {sw.ElapsedMilliseconds} ms. ExecuteSelectCommand");
                sw.Restart();
                var nodesJson = ClickhouseQueryExecutor.ExecuteQuery<List<HolderMap>>(query);
                Console.WriteLine($"{i} records: {sw.ElapsedMilliseconds} ms. JSON ");
            }
}
public static T ExecuteQuery<T>(string query) where T : class
        {
            _webClient.Headers[HttpRequestHeader.AcceptEncoding] = "gzip";
            var data = Encoding.ASCII.GetBytes(query + " FORMAT JSON");
            var response = _webClient.UploadData("http://house.click:8123/", data);
            var responseString = JObject.Parse(Encoding.Default.GetString(response));
            var responseBody = responseString["data"];
            var results = JsonConvert.DeserializeObject<T>(responseBody.ToString());
            return results;
        }

And get next result:
1 records: 609 ms. IDataReader.ReadAll
1 records: 425 ms. ExecuteSelectCommand
1 records: 595 ms. JSON
2 records: 1012 ms. IDataReader.ReadAll
2 records: 397 ms. ExecuteSelectCommand
2 records: 246 ms. JSON
4 records: 640 ms. IDataReader.ReadAll
4 records: 404 ms. ExecuteSelectCommand
4 records: 264 ms. JSON
8 records: 668 ms. IDataReader.ReadAll
8 records: 395 ms. ExecuteSelectCommand
8 records: 254 ms. JSON
16 records: 652 ms. IDataReader.ReadAll
16 records: 398 ms. ExecuteSelectCommand
16 records: 242 ms. JSON
32 records: 638 ms. IDataReader.ReadAll
32 records: 387 ms. ExecuteSelectCommand
32 records: 270 ms. JSON
64 records: 712 ms. IDataReader.ReadAll
64 records: 441 ms. ExecuteSelectCommand
64 records: 261 ms. JSON
128 records: 777 ms. IDataReader.ReadAll
128 records: 515 ms. ExecuteSelectCommand
128 records: 256 ms. JSON
256 records: 948 ms. IDataReader.ReadAll
256 records: 685 ms. ExecuteSelectCommand
256 records: 314 ms. JSON
512 records: 1361 ms. IDataReader.ReadAll
512 records: 1052 ms. ExecuteSelectCommand
512 records: 299 ms. JSON
1024 records: 2037 ms. IDataReader.ReadAll
1024 records: 1715 ms. ExecuteSelectCommand
1024 records: 335 ms. JSON
2048 records: 3713 ms. IDataReader.ReadAll
2048 records: 6521 ms. ExecuteSelectCommand
2048 records: 603 ms. JSON
4096 records: 6949 ms. IDataReader.ReadAll
4096 records: 6341 ms. ExecuteSelectCommand
4096 records: 793 ms. JSON
8192 records: 13315 ms. IDataReader.ReadAll
8192 records: 12631 ms. ExecuteSelectCommand
8192 records: 1143 ms. JSON
16384 records: 25422 ms. IDataReader.ReadAll
16384 records: 24835 ms. ExecuteSelectCommand
16384 records: 1286 ms. JSON

Any thoughts?

Inserting into Nullable(UUID) column

I'm running into a problem inserting null into a Nullable(UUID) column.

Table:

CREATE TABLE logs.box_history
(
	id UUID, 
	box_id Nullable(UUID)
) ENGINE = Log

Attempt 1

using (var command = new ClickHouseCommand(connection, "insert into box_history (id, box_id) values (@id, @box_id)"))
{
    command.Parameters.Add("id", DbType.Guid, Guid.NewGuid());
    command.Parameters.Add("box_id", null);
    command.ExecuteNonQuery();
}

Executing this gives me the error:

Unhandled Exception: System.InvalidCastException: Cannot convert parameter with type AnsiString to Guid.                   
   at ClickHouse.Ado.Impl.ColumnTypes.GuidColumnType.ValueFromParam(ClickHouseParameter parameter)                         
   at ClickHouse.Ado.Impl.ColumnTypes.NullableColumnType.ValueFromParam(ClickHouseParameter parameter)                    
   at ClickHouse.Ado.ClickHouseCommand.Execute(Boolean readResponse, ClickHouseConnection connection)                      
   at ClickHouse.Ado.ClickHouseCommand.ExecuteNonQuery()                                               

Attempt 2

Changing the code to:

command.Parameters.Add("box_id", DbType.Guid, null);

Gives this error:

Unhandled Exception: System.InvalidCastException: Null object cannot be converted to a value type.
   at System.Convert.ChangeType(Object value, Type conversionType, IFormatProvider provider)
   at ClickHouse.Ado.Impl.ColumnTypes.GuidColumnType.ValueFromParam(ClickHouseParameter parameter)
   at ClickHouse.Ado.Impl.ColumnTypes.NullableColumnType.ValueFromParam(ClickHouseParameter parameter)
   at ClickHouse.Ado.ClickHouseCommand.Execute(Boolean readResponse, ClickHouseConnection connection)
   at ClickHouse.Ado.ClickHouseCommand.ExecuteNonQuery()

Attempt 3

And changing the code to use DBNull.Value doesn't seem to help either:

command.Parameters.Add("box_id", DbType.Guid, DBNull.Value);

Gives the error:

Unhandled Exception: System.InvalidCastException: Invalid cast from 'System.DBNull' to 'System.Guid'.
   at System.Convert.DefaultToType(IConvertible value, Type targetType, IFormatProvider provider)
   at ClickHouse.Ado.Impl.ColumnTypes.GuidColumnType.ValueFromParam(ClickHouseParameter parameter)
   at ClickHouse.Ado.Impl.ColumnTypes.NullableColumnType.ValueFromParam(ClickHouseParameter parameter)
   at ClickHouse.Ado.ClickHouseCommand.Execute(Boolean readResponse, ClickHouseConnection connection)
   at ClickHouse.Ado.ClickHouseCommand.ExecuteNonQuery()

How do I get null values into the database?

Is bulk insert of uint[] supported?

Bulk insert with basic types works perfectly. When we try to also write a newly added field
alter table table add column field2 Array(UInt32)

public uint[] field2{ get; set; }

we get the error
Message: Managed Debugging Assistant 'FatalExecutionEngineError' has detected a problem in 'C:\Git\App1\bin\Debug\App1.exe'.
Additional information: The runtime has encountered a fatal error. The address of the error was at 0x131cc846, on thread 0x39bc. The error code is 0x80131623. This error may be a bug in the CLR or in the unsafe or non-verifiable portions of user code. Common sources of this bug include user marshaling errors for COM-interop or PInvoke, which may corrupt the stack.

Any ideas?

Execution with Progress

The API is fantastic, but I miss some features. For some of them the API exists but is not implemented yet.

  • Cancellation of a query.
  • Execution of a query with progress. This feature exists in the Python API and is very useful.

Bad message type 53 received from server.

My code

string cstr = "Compress=True;CheckCompressedHash=False;Compressor=lz4;Host=192.168.7.34;Port=22;Database=default;User=root;Password=111";

var settings = new ClickHouseConnectionSettings(cstr);

var cnn = new ClickHouseConnection(settings);

cnn.Open();


But I receive an error:

Bad message type 53 received from server.

What does code 53 mean?

Possible error with DISTINCT

I don't know whether this is a ClickHouse.Ado error or not. I have a simple distributed table and I run a simple query:

"SELECT COUNT ( DISTINCT campo) AS cnt FROM BBDD.Tabla
WHERE InsertionDate >= today()"

I get a timeout when I call reader.NextResult(), with the exception:
"Connection reset by peer while writing to socket"

   at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size)
   at ClickHouse.Ado.Impl.UnclosableStream.Read(Byte[] buffer, Int32 offset, Int32 count)
   at ClickHouse.Ado.Impl.ProtocolFormatter.ReadBytes(Int32 i)
   at ClickHouse.Ado.Impl.ProtocolFormatter.ReadByte()
   at ClickHouse.Ado.Impl.ProtocolFormatter.ReadUInt()
   at ClickHouse.Ado.Impl.ProtocolFormatter.ReadPacket(Response rv)
   at ClickHouse.Ado.Impl.ProtocolFormatter.ReadBlock()
   at ClickHouse.Ado.ClickHouseDataReader.NextResult()
   at CHB_BackTestingConsole.ClickHouse.RunQuery(String query, List`1 l_header, List`1 l_types)

If I do the same query, but without the "DISTINCT", it works perfectly.
"SELECT COUNT ( campo) AS cnt FROM BBDD.Tabla
WHERE InsertionDate >= today()"

I tried the same query with other clients (Python APIs, SQLAlchemy, Tabix, etc.) and it works perfectly.
I tried changing connection parameters, without success:
ConnectionTimeout=2000;
SocketTimeout=1000;
DataTransferTimeout=100000;

Any ideas? Could it be a bug in your code, or maybe in the native driver?

Thanks.

Problem with 1.1.10 release

I tried upgrading to the latest ClickHouse.Ado 1.1.10 NuGet package, but for some reason it doesn't work for me. I execute queries and iterate through the results with a data reader; I am able to read all rows, but then my thread blocks on either the data reader dispose or the connection close (still not sure what exactly blocks execution). Just to note, this is a .NET Core 2.1 app.

Version 1.1.9 works fine with the same queries; as far as I can see from the commits, the only significant change is the use of BufferedStream (#46), so maybe it is related to that.

Type mismatch for Enum8


There is an error in the following code: escaping the string for the enum name is unnecessary:
public override string AsClickHouseType()
{
return $"Enum{BaseSize}({string.Join(",", Values.Select(x => $"{ProtocolFormatter.EscapeStringValue(x.Item1)}={x.Item2}"))})";
}

ClickHouseCommand.ExecuteReader throws NotSupportedException when used with Dapper

Description

I'm trying to integrate ClickHouse.NET with Dapper. When I invoke the ExecuteReader method, a NotSupportedException is thrown. IMHO this is a ClickHouse.NET issue rather than a Dapper problem.

Steps to reproduce

  1. Create "persons" table in ClickHouse
    CREATE TABLE persons (inn String, name String, pass String) ENGINE=MergeTree() ORDER BY inn
  2. Create .NET project targeting .NET 4.5
  3. Install ClickHouse.NET and Dapper NuGet packages
  4. Create Person class
    class Person
    {
    public string inn { get; set; }
    public string name { get; set; }
    public string pass { get; set; }
    }
  5. Run the following code
    using (var cnn = GetConnection())
    {
    var r = cnn.QueryMultiple(@"SELECT * FROM persons ORDER BY inn");
    var result = new List<Person>();
    while (!r.IsConsumed)
    {
    var list = r.Read();
    result.AddRange(list);
    }
    }

I think this issue is somehow related to DapperLib/Dapper#441

What should be done

I suggest to just remove the following code from ExecuteReader method
if ((behavior & (CommandBehavior.SchemaOnly | CommandBehavior.KeyInfo | CommandBehavior.SingleResult | CommandBehavior.SingleRow | CommandBehavior.SequentialAccess)) != 0) throw new NotSupportedException($"CommandBehavior {behavior} is not supported.");

SELECT ... WHERE Id IN @bulk

Description

It would be really cool to have the possibility of using bulk parameters (enumerables) for parameters in an IN clause.
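
Until such support exists, one common workaround is to inline the values into the SQL text (a sketch, assuming an open connection; fine for numeric ids, but string values would need proper escaping):

// Build the IN (...) list manually from a collection of numeric ids.
var ids = new long[] { 1, 2, 3 };
var sql = $"SELECT * FROM some_table WHERE Id IN ({string.Join(",", ids)})";
using (var reader = connection.CreateCommand(sql).ExecuteReader()) {
    // ... read as usual ...
}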

Inserting into UUID type column

It is impossible to perform an INSERT INTO command for a UUID-type field: execution of the ExecuteNonQuery() method never completes.

Steps to reproduce:

  1. Table structure is following:
    CREATE TABLE UUIDTest (
    primaryKey UUID
    ) ENGINE = MergeTree() ORDER BY primaryKey

  2. Insert command:
    INSERT INTO "UUIDTest" ( "primaryKey") VALUES ('5c41db8e-4614-42a9-bb26-1cbdf9741f78')

  3. C# code:
    var sql = "INSERT INTO "UUIDTest" ( "primaryKey") VALUES ('5c41db8e-4614-42a9-bb26-1cbdf9741f78')";

         using (var cnn = … )
         {
             cnn.Open();
             var cmd = cnn.CreateCommand(sql);
    
             cmd.ExecuteNonQuery();
         }
    

Sum(DecimalField) length is not correct

Description

With a decimal field specified as Decimal(9,2), the driver fails when reading a sum:

  1. Select anotherColumn,DecimalColumn from Table: OK!
  2. Select anotherColumn,Sum(DecimalColumn) from Table group by anotherColumn: Fails!

Steps to reproduce

  1. Create Table B (Id UInt16, Money Decimal(9,2)) Engine = Log
  2. insert into B values (1,3.14);
  3. Query: SELECT Id,Money from B => OK
  4. Query: SELECT Id,Sum(Money) from B group by Id => Fails (Expected same result as above)


Workarounds

I would like to hear some ;)
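
One thing that may be worth trying (an untested sketch): ClickHouse widens sum(Decimal(9,2)) to a larger Decimal, which is presumably what the driver trips over, so casting the aggregate back to the original width on the server side might sidestep it:

// Hypothetical workaround: cast the widened aggregate back to Decimal(9,2) in SQL
// so the driver only sees a column width it already handles.
var sql = "SELECT Id, CAST(Sum(Money) AS Decimal(9,2)) AS MoneySum FROM B GROUP BY Id";
using (var cmd = connection.CreateCommand(sql))
using (var reader = cmd.ExecuteReader()) {
    // ... read rows as usual ...
}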

String object null value leads to skipped column

Description

During a bulk insert, if the value of a string column is null, that column gets skipped.

Steps to reproduce

  1. CREATE TABLE tmp ( text_a String, text_b String, timestamp Date ) ENGINE = MergeTree() PARTITION BY toYYYYMM(timestamp) ORDER BY (timestamp)
public class TmpObj : IEnumerable
    {
        public string TextA { get; set; }
        public string TextB { get; set; }
        public DateTime Timestamp { get; set; }

        public IEnumerator GetEnumerator()
        {
            yield return TextA;
            yield return TextB;
            yield return Timestamp;
        }
    }

    public class Program
    {
        public static async Task Main(string[] args)
        {
            ClickTest();
        }

        private static void ClickTest()
        {
            try
            {
                List<TmpObj> a = new List<TmpObj>();
                a.Add(new TmpObj()
                {
                    TextA = "a",
                    TextB = null,
                    Timestamp = DateTime.Now
                });

                var settings = new ClickHouseConnectionSettings("Host=localhost;Port=9000;Database=default;" +
                                                                "User=default;");
                using (var conn = new ClickHouseConnection(settings))
                using (var command = conn.CreateCommand())
                {
                    command.CommandText = "INSERT INTO tmp " +
                                          "(text_a, text_b, timestamp)" +
                                          "VALUES @bulk";
                    command.Parameters.Add(new ClickHouseParameter
                    {
                        ParameterName = "bulk",
                        Value = a
                    });

                    conn.Open();

                    command.ExecuteNonQuery();
                }
            }
            catch (Exception ex)
            {
                throw;
            }
        }
    }
  2. Unhandled Exception: System.FormatException: Column count in parameter table (2) doesn't match column count in schema (3).
    at ClickHouse.Ado.ClickHouseCommand.Execute(Boolean readResponse, ClickHouseConnection connection) in /Users/asuvorkin/Documents/GitKraken/ClickHouse-Net/ClickHouse.Ado/ClickHouseCommand.cs:line 104
    at ClickHouse.Ado.ClickHouseCommand.ExecuteNonQuery() in /Users/asuvorkin/Documents/GitKraken/ClickHouse-Net/ClickHouse.Ado/ClickHouseCommand.cs:line 156

What should be done

Change ClickHouseCommand.cs, line 101 from var colCount = table.First().OfType<object>().Count(); to var colCount = table.First().Cast<object>().Count();

Cannot build with VS2017 v15.6.7

Description

So, I've cloned the repo and opened it in VS.
It asked whether to download .NET 4.5 or upgrade the project to 4.6.1. I tried to install 4.5, but since I already had 4.5.1 it couldn't be installed.
Then I upgraded the project to 4.6.1, but got this error:

Then I opened the csproj file and saw this:
<Project Sdk="Microsoft.NET.Sdk" ToolsVersion="15.0">
After capitalizing "Sdk" to "SDK":
<Project SDK="Microsoft.NET.SDK" ToolsVersion="15.0">
the project loaded fine, but I don't see any code in it, and manually adding files/folders doesn't make sense.

Any advice?

ClickHouseConnectionSettings throws NullReferenceException on ToString()

Description

ClickHouseConnectionSettings throws a NullReferenceException if I try to call ToString() on it.

Steps to reproduce

var settings = new ClickHouseConnectionSettings("Compress=True;CheckCompressedHash=False;Compressor=lz4;Host=clickhouse;Port=9000;User=default;Password=;SocketTimeout=600000;Database=Test;");
var connectionString = settings.ToString(); // throws NullReferenceException

Build error because of missed ZstdNet.1.0.0 package

It seems ClickHouse.Ado.csproj references the 'ZstdNet' package, which is no longer needed and causes build errors:

I haven't tested the connector yet, mostly because I have a .NET Core application.
Do you have any plans to add a netstandard build? I see that the only uncommon dependency is 'lz4net', and it already has a netstandard build ( https://www.nuget.org/packages/lz4net/ ).

Also I think it would be a good idea to publish 'ClickHouse.Ado' on NuGet :-)

BTW, I can help with these two things: migrating the csproj to the VS 2017 format that supports multiple targets at once (net451 / netstandard1.x) and NuGet package generation with 'dotnet pack'.

the expectation when you run the query

For any query, ExecuteReader() does not wait for the query to complete.
My SELECT takes about 200 seconds in ClickHouse, so my app gets an empty reader unless I call sleep(). Do I have to guess the sleep duration every time? Is that serious? Please explain; it shouldn't work this way. When using a SQL reader with MSSQL, Execute waits for the query to finish.

Parameters naming

It is more a question than an issue.

Is there a particular reason not to allow parameter names starting with a lowercase character? For example, "@param1" raises an exception in the regex used for parameter matching.

It is an easy thing to fix (adjust the regex), but I was wondering why it was made this way.

Thanks!

Multiple insert command is not working

Hello!

I'm trying to execute a multiple-insert command (like INSERT INTO Table (Col1, Col2, Col3) VALUES (Val1-1, Val1-2, Val1-3),(Val2-1, Val2-2, Val2-3)), which works correctly in clickhouse-client, but I get a "Sequence contains no matching element" exception:
System.Linq.Enumerable.First[TSource](IEnumerable`1 source, Func`2 predicate)
ClickHouse.Ado.ClickHouseParameterCollection.get_Item(String parameterName)
ClickHouse.Ado.ClickHouseCommand.<SubstituteParameters>b__28_0(Match m)
System.Text.RegularExpressions.RegexReplacement.Replace(MatchEvaluator evaluator, Regex regex, String input, Int32 count, Int32 startat)
System.Text.RegularExpressions.Regex.Replace(String input, MatchEvaluator evaluator, Int32 count, Int32 startat)
System.Text.RegularExpressions.Regex.Replace(String input, MatchEvaluator evaluator)
ClickHouse.Ado.ClickHouseCommand.SubstituteParameters(String commandText)
ClickHouse.Ado.ClickHouseCommand.Execute(Boolean readResponse)
ClickHouse.Ado.ClickHouseCommand.ExecuteNonQuery()

This exception occurs in the ClickHouseCommand.SubstituteParameters method, which, as I understand it, the program should not enter at all, since I do not pass any parameters. I would really rather not use the hidden bulk-insert functionality, because implementing GetEnumerator() for each entity is very undesirable. Could you explain why the exception occurs? Is it a bug?

Downloaded from NuGet, cannot be used

Description

... provide some general description of what has gone wrong or what you're proposing ...

Steps to reproduce

  1. You should include the database structure as one of the steps (just the CREATE commands you used to create the relevant tables, plus some data suitable for a test)
  2. The second most important thing is the SQL commands you run through the driver that give trouble
  3. And lastly, you could include some code that reproduces the error
  4. You may skip any of these steps if you feel they're not relevant to your issue

Workarounds

... if you know some workaround for the issue, provide it here ...

What should be done

... if you know what should be done to mitigate the issue, provide it here ...
@jiangxianfu
