
`zlib deflate failed, output buffer too small` error when writing a Parquet + GZIP file into S3 using the S3 table function


Comments (5)

tb-fjurado commented on June 25, 2024

Ok I've got a reproducer:

INSERT INTO FUNCTION s3('https://test-sinks.s3.eu-west-3.amazonaws.com/rep_deflate/myfile.parquet', 'REDACTED', 'REDACTED', 'Parquet')
SETTINGS output_format_parquet_compression_method = 'gzip'
SELECT *
FROM generateRandom('a UInt64, d UInt64', 1, 64, 2)
LIMIT 100000

Query id: b40c068e-10f5-4f77-b598-0cbfffb37943


Elapsed: 0.031 sec.

Received exception from server (version 24.2.1):
Code: 1002. DB::Exception: Received from localhost:9000. DB::Exception: Error while writing a table: IOError: zlib deflate failed, output buffer too small. ()

If we have two fields with long strings it works:

INSERT INTO FUNCTION s3('https://test-sinks.s3.eu-west-3.amazonaws.com/rep_deflate/myfile.parquet', 'REDACTED', 'REDACTED', 'Parquet')
SETTINGS output_format_parquet_compression_method = 'gzip'
SELECT *
FROM generateRandom('a String, d String', 1, 1024, 2)
LIMIT 100000

Query id: 5244e390-3f49-4ce6-9081-c25af8644146

Ok.

0 rows in set. Elapsed: 3.304 sec. Processed 100.19 thousand rows, 104.38 MB (30.32 thousand rows/s., 31.59 MB/s.)
Peak memory usage: 258.04 MiB.

If we have a single UInt64 column with enough rows it fails:

INSERT INTO FUNCTION s3('https://test-sinks.s3.eu-west-3.amazonaws.com/rep_deflate/myfile.parquet', 'REDACTED', 'REDACTED', 'Parquet')
SETTINGS output_format_parquet_compression_method = 'gzip'
SELECT *
FROM generateRandom('a UInt64', 1, 1024, 2)
LIMIT 100000

Query id: 581138a6-f3e2-4f38-b114-8e2e45dfda34


Elapsed: 0.031 sec.

Received exception from server (version 24.2.1):
Code: 1002. DB::Exception: Received from localhost:9000. DB::Exception: Error while writing a table: IOError: zlib deflate failed, output buffer too small. ()

But if we use fewer rows, it works:

INSERT INTO FUNCTION s3('https://test-sinks.s3.eu-west-3.amazonaws.com/rep_deflate/myfile.parquet', 'REDACTED', 'REDACTED', 'Parquet')
SETTINGS output_format_parquet_compression_method = 'gzip'
SELECT *
FROM generateRandom('a UInt64', 1, 1024, 2)
LIMIT 10000

Query id: 5059d82c-f15a-4a19-8123-f7c9cd5388af

Ok.

0 rows in set. Elapsed: 0.155 sec. Processed 10.00 thousand rows, 80.00 KB (64.49 thousand rows/s., 515.95 KB/s.)
Peak memory usage: 16.86 KiB.


tb-fjurado commented on June 25, 2024

It doesn't seem exclusive to UInt64 either; it seems that at some column block size the compression breaks. For example, here's a case with Int8. This works:

INSERT INTO FUNCTION s3('https://test-sinks.s3.eu-west-3.amazonaws.com/rep_deflate/myfile.parquet', 'REDACTED', 'REDACTED', 'Parquet')
SETTINGS output_format_parquet_compression_method = 'gzip'
SELECT *
FROM generateRandom('a Int8', 1, 1024, 2)
LIMIT 550000

Query id: df0dee41-9aa5-467f-8db1-58ff16b25ee1

Ok.

But if we insert 560k rows it fails:

INSERT INTO FUNCTION s3('https://test-sinks.s3.eu-west-3.amazonaws.com/rep_deflate/myfile.parquet', 'REDACTED', 'REDACTED', 'Parquet')
SETTINGS output_format_parquet_compression_method = 'gzip'
SELECT *
FROM generateRandom('a Int8', 1, 1024, 2)
LIMIT 560000

Query id: 22150098-bd2a-4fac-a5bf-f4cb04b1c977


Elapsed: 0.025 sec.

Received exception from server (version 24.2.1):
Code: 1002. DB::Exception: Received from localhost:9000. DB::Exception: Error while writing a table: IOError: zlib deflate failed, output buffer too small. ()

So it seems there's some threshold at which it breaks. If we enable the custom encoder, it works even when inserting 1M rows:

INSERT INTO FUNCTION s3('https://test-sinks.s3.eu-west-3.amazonaws.com/rep_deflate/myfile.parquet', 'REDACTED', 'REDACTED', 'Parquet')
SETTINGS output_format_parquet_compression_method = 'gzip', output_format_parquet_use_custom_encoder = 1
SELECT *
FROM generateRandom('a Int8', 1, 1024, 2)
LIMIT 1000000

Query id: f1d40025-ed20-4508-87bc-11f2a18903d6

Ok.


jrdi commented on June 25, 2024

The error seems to be coming from zlib, and there is very little information about it on the internet. Checking the code, it seems that zlib makes a wrong estimate of the buffer size to allocate? The only thing I've been able to find is this bug report, apache/arrow#2756, where the cause seems to be discussed in apache/arrow#2756 (comment). Maybe it's a matter of upgrading?

@tb-fjurado since the issue you're facing seems to be exclusively related to Arrow, have you tried enabling output_format_parquet_use_custom_encoder? By the comments and changes here, it looks like it is stable enough already.
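As a side note, a minimal standalone way to test the "wrong buffer size estimate" hypothesis outside ClickHouse might look like the sketch below. This is not ClickHouse or Arrow code: the gzip window-bits choice, the 800000-byte input, and the random data are assumptions. It just follows the same single-shot pattern of deflating into a buffer sized by deflateBound(), whose failure mode is exactly an output buffer that turns out to be too small.

/* Sketch: single-shot deflate into a deflateBound()-sized buffer.
 * Link it against stock zlib and against zlib-ng to compare whether
 * the bound can actually be exceeded for incompressible input. */
#include <stdio.h>
#include <stdlib.h>
#include <zlib.h>

int main(void) {
    const uLong source_len = 800000;              /* ~100k random UInt64 values (assumption) */
    unsigned char *src = malloc(source_len);
    if (!src) return 1;
    for (uLong i = 0; i < source_len; i++)
        src[i] = (unsigned char)rand();            /* effectively incompressible data */

    z_stream strm = {0};
    /* windowBits = 15 + 16 selects a gzip wrapper; that Parquet's GZIP codec
       does the same is an assumption here. */
    if (deflateInit2(&strm, Z_DEFAULT_COMPRESSION, Z_DEFLATED, 15 + 16, 8,
                     Z_DEFAULT_STRATEGY) != Z_OK)
        return 1;

    uLong bound = deflateBound(&strm, source_len);
    unsigned char *dst = malloc(bound);
    if (!dst) return 1;

    strm.next_in   = src;
    strm.avail_in  = (uInt)source_len;
    strm.next_out  = dst;
    strm.avail_out = (uInt)bound;

    int ret = deflate(&strm, Z_FINISH);            /* single-shot compression */
    if (ret == Z_STREAM_END)
        printf("ok: %lu -> %lu bytes (bound %lu)\n",
               source_len, strm.total_out, bound);
    else
        printf("deflate did not finish (ret=%d, avail_out=%u): bound too small\n",
               ret, strm.avail_out);

    deflateEnd(&strm);
    free(src);
    free(dst);
    return 0;
}

If the "did not finish" branch fires only under zlib-ng, that would point at the bound estimate rather than at the Arrow or ClickHouse layers.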


tb-fjurado commented on June 25, 2024

> The error seems to be coming from zlib, and there is very little information about it on the internet. Checking the code, it seems that zlib makes a wrong estimate of the buffer size to allocate? The only thing I've been able to find is this bug report, apache/arrow#2756, where the cause seems to be discussed in apache/arrow#2756 (comment). Maybe it's a matter of upgrading?
>
> @tb-fjurado since the issue you're facing seems to be exclusively related to Arrow, have you tried enabling output_format_parquet_use_custom_encoder? By the comments and changes here, it looks like it is stable enough already.

Yes, I tried enabling it and it worked as expected. I see that #63210 has already been merged, so I understand that the custom encoder is the way forward and we don't want to invest more time in Arrow, right?

Also, does anybody know from which CH version we could consider the custom encoder stable, even if it is not set as the default? I saw this comment #53130 (comment) from Aug 2023 that said it should be good enough to enable, and I haven't found many more (if any) changes to the encoder in the PRs since. Just to know if we can enable it right away in our current CH deployments or if we need to upgrade.

Thanks!


al13n321 commented on June 25, 2024

> Also, does anybody know from which CH version we could consider the custom encoder stable, even if it is not set as the default? I saw this comment #53130 (comment) from Aug 2023 that said it should be good enough to enable, and I haven't found many more (if any) changes to the encoder in the PRs since. Just to know if we can enable it right away in our current CH deployments or if we need to upgrade.

Yes, the last significant fix was #52951 (August 2023), so 23.10+ should be good.


Yes, seems to be a problem with zlib-ng. Apparently:

  1. deflateBound(sourceLen = 800000) (in zlib) returns 800268.
  2. MaxCompressedLen() (in arrow compression_zlib.cc) adds 12 to it with comment: "ARROW-3514: return a more pessimistic estimate to account for bugs in old zlib versions."
  3. deflate() (in zlib) ends up trying to write 38 bytes more than that (or at least strm->state->pending = 38 - I didn't follow the code carefully, this interpretation may be incorrect).

Maybe I'll investigate more.
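To make the arithmetic above concrete, here is a tiny sketch. It is not Arrow's actual code; the +12 margin, the 800268 figure, and the ~38 extra bytes are taken from the observations above, not measured here.

/* Sizing-chain sketch based on the numbers reported above. */
#include <stdio.h>
#include <zlib.h>

int main(void) {
    z_stream strm = {0};
    if (deflateInit(&strm, Z_DEFAULT_COMPRESSION) != Z_OK)
        return 1;

    const uLong source_len = 800000;                    /* 100k UInt64 values */
    uLong bound     = deflateBound(&strm, source_len);  /* reported as 800268 above */
    uLong allocated = bound + 12;                        /* pessimistic ARROW-3514 margin */
    uLong needed    = allocated + 38;                    /* deflate reportedly wanted ~38 bytes more */

    printf("bound=%lu allocated=%lu needed=%lu -> %s\n",
           bound, allocated, needed,
           needed > allocated ? "output buffer too small" : "fits");

    deflateEnd(&strm);
    return 0;
}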

