
Comments (9)

adschm commented on May 31, 2024

@Loquacity Actually, decompression is quite user-unfriendly as far as I was able to determine. You essentially need to decompress all chunks individually by name:

SELECT decompress_chunk('_timescaledb_internal._hyper_1_203_chunk');

This can theoretically be aggregated like this (derived from the compression example in the docs):

SELECT decompress_chunk(i) FROM show_chunks('mytable', older_than => INTERVAL '120 days') i;

This works well if you can ensure that the interval (or condition) 1. includes ALL compressed chunks and 2. includes NO uncompressed chunks. If you miss any compressed chunks, you won't be able to remove the compress property. If you include uncompressed chunks, the process will fail and exit on the first one, right in the middle of the run. This is even worse because the result is sorted arbitrarily, so you won't be able to simply rerun the command: it will now fail on the first chunk in the set that is already decompressed.
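
On TimescaleDB 2.x, one way to sidestep the interval guessing might be to select only the chunks that are actually compressed from the timescaledb_information.chunks view (a sketch, assuming that view and its is_compressed column are available in your version; 'mytable' is a placeholder):

SELECT decompress_chunk(format('%I.%I', chunk_schema, chunk_name)::regclass)
FROM timescaledb_information.chunks
WHERE hypertable_name = 'mytable'
  AND is_compressed;

This avoids both failure modes: every compressed chunk is included, and no uncompressed ones.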

I essentially worked around this annoying problem by printing chunk names for a larger range to the console, so that I can make sure all compressed chunks are included, e.g.

SELECT i FROM show_chunks('ldt.machineconnectparameterlogs', older_than => INTERVAL '100 days') i;

This yields a list of chunks, one per line; in more complicated scenarios it would even be possible to omit the filter entirely:

_timescaledb_internal._hyper_1_2_chunk
_timescaledb_internal._hyper_1_3_chunk
_timescaledb_internal._hyper_1_4_chunk
[...]

I then paste this list into Notepad++ and do a multi-column edit (Alt key plus selection) to construct a list of commands like in the initial example:

SELECT decompress_chunk('_timescaledb_internal._hyper_1_2_chunk');
SELECT decompress_chunk('_timescaledb_internal._hyper_1_3_chunk');
SELECT decompress_chunk('_timescaledb_internal._hyper_1_4_chunk');
[...]

Pasting this whole block back into the CLI then executes each line individually. The compressed chunks are decompressed, and the uncompressed ones raise an error, which I don't care about; since each line is a separate command, an error will not terminate execution.
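
If you are using psql, the paste-back step can be automated with the \gexec meta-command, which runs each row of the previous result as its own statement (a sketch; 'mytable' and the interval are placeholders):

SELECT format('SELECT decompress_chunk(%L);', i)
FROM show_chunks('mytable', older_than => INTERVAL '100 days') i \gexec

As with pasting lines by hand, each generated statement runs separately, so (unless ON_ERROR_STOP is set) an error on an uncompressed chunk does not abort the rest.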

While this is a terribly ugly process, it is highly effective for getting the job done.

Eventually, when everything is decompressed, you will be able to run
ALTER TABLE mytable SET (timescaledb.compress=false);
successfully.

@Puciek Your error is different from mine. You seem to have some additional dependencies/complexity that I've not been faced with so far.

manojkarthick commented on May 31, 2024

I seem to have the same issue as @Puciek. I ran the following: (1) removed the compression policy, (2) decompressed all the compressed chunks, (3) disabled compression.
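
For reference, the three steps might look like this (a sketch, assuming TimescaleDB 2.x; 'mytable' is a placeholder, and if_compressed => true skips chunks that are already uncompressed):

SELECT remove_compression_policy('mytable');
SELECT decompress_chunk(c, if_compressed => true) FROM show_chunks('mytable') c;
ALTER TABLE mytable SET (timescaledb.compress = false);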

When I try to disable compression, I get the following error message:

psql> alter table <table_name> set (timescaledb.compress=false);

[2021-10-25 12:04:41] [2BP01] ERROR: cannot drop table _timescaledb_internal._compressed_hypertable_292 because other objects depend on it
[2021-10-25 12:04:41] Detail: table _timescaledb_internal.compress_hyper_292_7671_chunk depends on table _timescaledb_internal._compressed_hypertable_292
[2021-10-25 12:04:41] table _timescaledb_internal.compress_hyper_292_7673_chunk depends on table 
<snip>
[2021-10-25 12:04:41] Hint: Use DROP ... CASCADE to drop the dependent objects too.

Any thoughts on what the issue might be?

solugebefola commented on May 31, 2024

There's this in the docs now (compression.md line 403, from timescale/docs.timescale.com-content#525):

Next, pause the job with:

SELECT alter_job(<job_id>, scheduled => false);

Does that resolve this issue?

tylerfontaine commented on May 31, 2024

It doesn't; those are different things. Specifically, this setting needs to be removed from the hypertable itself.

adschm commented on May 31, 2024

Unfortunately, disabling compression really doesn't seem to be covered in the docs at all ...?

The ALTER TABLE command above will fail if there are any compressed chunks.

Loquacity commented on May 31, 2024

Unfortunately, disabling compression really doesn't seem to be covered in the docs at all ...?

The ALTER TABLE command above will fail if there are any compressed chunks.

Happy to add it, how do you do it?

AidaPaul commented on May 31, 2024

I actually just tried that, as I need to change the table structure in a way that renaming won't fix, but when I try the command:

xxxxxx> ALTER TABLE data_source_tagentry SET (timescaledb.compress=false)
[2021-10-07 08:04:05] [2BP01] ERROR: cannot drop table _timescaledb_internal._compressed_hypertable_4 because other objects depend on it
[2021-10-07 08:04:05] Detail: table _timescaledb_internal.compress_hyper_4_367_chunk depends on table _timescaledb_internal._compressed_hypertable_4
[2021-10-07 08:04:05] table _timescaledb_internal.compress_hyper_4_379_chunk depends on table _timescaledb_internal._compressed_hypertable_4
[2021-10-07 08:04:05] Hint: Use DROP ... CASCADE to drop the dependent objects too.

So I'm clearly missing something!

AidaPaul commented on May 31, 2024

@adschm Yeah, I wound up just making a copy of the table and inserting the data over from the compressed table. It worked without being wonky, and it was also faster than decompressing all those chunks. A bit weird!
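
The copy-and-reinsert approach might look roughly like this (a sketch with placeholder names; 'mytable' and the 'time' column are assumptions, and the new table must be made a hypertable before inserting):

CREATE TABLE mytable_new (LIKE mytable INCLUDING ALL);
SELECT create_hypertable('mytable_new', 'time');
INSERT INTO mytable_new SELECT * FROM mytable;

Afterwards you can drop the old table and rename the new one into place.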

adschm commented on May 31, 2024

This can theoretically be aggregated like this (derived from the compression example in the docs):

SELECT decompress_chunk(i) FROM show_chunks('mytable', older_than => INTERVAL '120 days') i;

This works well if you can ensure that the interval (or condition) 1. includes ALL compressed chunks and 2. includes NO uncompressed chunks. If you miss any compressed chunks, you won't be able to remove the compress property. If you include uncompressed chunks, the process will fail and exit on the first one, right in the middle of the run. This is even worse because the result is sorted arbitrarily, so you won't be able to simply rerun the command: it will now fail on the first chunk in the set that is already decompressed.

Actually, decompress_chunk() appears to have a switch to reduce this to a warning:

https://docs.timescale.com/api/latest/compression/decompress_chunk/#sample-usage
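
Concretely, the page above describes an if_compressed argument; passing it makes decompress_chunk() skip chunks that are not compressed instead of raising an error, so the aggregated form becomes safely rerunnable ('mytable' and the interval are placeholders as before):

SELECT decompress_chunk(i, if_compressed => true)
FROM show_chunks('mytable', older_than => INTERVAL '120 days') i;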
