Allow DDL changes to Compressed Hypertables via Migration #21
base: master
@@ -0,0 +1,81 @@
-- This utility helps make major DDL changes to compressed
-- hypertables. Because you must decompress the entire hypertable to
-- alter it, you might not have sufficient space to do that. An
-- alternative method is to copy the data from one table to a new
-- table with the correct schema, constraints, etc. This tool helps
-- you do that chunk by chunk, compressing as you go, to greatly
-- reduce the space required to perform the migration.
--
-- Prerequisites:
-- The target table must be created first, and compression must be
-- configured for it.
--
-- You must have sufficient space for the compressed data, so roughly
-- double your current hypertable size, which is usually still
-- considerably smaller than all the uncompressed data.
--
-- Variables:
-- old_table is the original table to copy from
-- new_table is the table you are copying to
-- older_than is the limit for the chunks to recompress. It will
-- recompress all chunks older than the interval specified.
--
-- Limitations:
-- This tool does not re-create policies, continuous aggregates, or
-- other dependent objects. It only copies the data to a new table.
--
-- Usage:
-- CALL migrate_data('copy_from', 'copy_to', '30 days');
--
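To illustrate the prerequisite above, the target table and its compression settings could be prepared along these lines. The table name `copy_to` matches the usage example, but the columns and compression settings here are purely hypothetical assumptions:

```sql
-- Hypothetical target-table setup; column names and compression
-- settings are illustrative assumptions, not part of this utility.
CREATE TABLE copy_to (
    "time"    timestamptz NOT NULL,
    device_id integer,
    value     double precision
);

SELECT create_hypertable('copy_to', 'time');

ALTER TABLE copy_to SET (
    timescaledb.compress,
    timescaledb.compress_segmentby = 'device_id',
    timescaledb.compress_orderby   = '"time" DESC'
);
```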
CREATE OR REPLACE procedure migrate_data(
    old_table text,
    new_table text,
    older_than interval)
language plpgsql
as
$$
DECLARE
    c_row record;
    c_curs cursor for
        select * from chunknames;
Review comment: I think you'll have to open this later given that the temp table isn't created until later in the run, or at least it's a bit confusing.

Review comment: I might even just avoid the temp table altogether and use an array, which you can also loop through.

Author reply: I thought the same, but it does actually work. Though I'm not against an array; I'll just move to an array, which means I wouldn't need the cursor either.
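The array-based variant discussed in this thread might look roughly like the following sketch: collect the chunk names into a `text[]` and iterate with `FOREACH`, removing the need for both the temp table and the cursor. Only the relevant DECLARE/BEGIN fragment is shown.

```sql
-- Sketch only: array-based replacement for the temp table + cursor.
DECLARE
    chunks text[];
    chunk  text;
BEGIN
    -- collect all chunk names for the source hypertable into an array
    SELECT array_agg(c::text) INTO chunks
    FROM show_chunks(old_table) AS c;

    -- iterate over the array instead of a cursor over a temp table
    FOREACH chunk IN ARRAY chunks LOOP
        EXECUTE format('insert into %I (select * from %s)',
                       new_table, chunk);
    END LOOP;
END;
```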
    last_chunk text;

BEGIN

    -- create temp table for storing original hypertable chunk names
    CREATE TEMPORARY TABLE chunknames (chunkname text);

    insert into chunknames
        (select * from show_chunks(old_table));

    -- loop through the chunks one at a time and copy the data to the
    -- new hypertable
    FOR c_row in c_curs LOOP
        EXECUTE format('insert into %I
            (select * from %s)', new_table, c_row.chunkname);
Review comment: Switching to regclasses should help here. We may have to do some casting to text beforehand into other variables, but it shouldn't be too bad, and then we won't be switching between %s/%I and we'll actually make sure things are formatted correctly.

Author reply: The problem wasn't the regclasses so much as that you have to do this as a dynamic query in order for the variables to be read in. Or at least, that was the only way that made it work.

        -- get most recent chunk from the new hypertable after copying the
Review comment: I think this whole thing is not accounting correctly for space partitioning. Also, does selecting directly from compressed chunks work, or do you have to go through the hypertable in order to get proper routing?

Author reply: It does not; I somehow managed to do all this testing with an uncompressed copy-from table. Oops.
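One way to address the routing question raised here is to copy through the hypertable itself, so normal tuple routing handles compression and any space partitioning. This is a sketch under stated assumptions: a timestamptz column named `"time"`, and variables `c_start`/`c_end` holding the chunk's range (e.g. taken from the `timescaledb_information.chunks` view); none of these names come from the patch itself.

```sql
-- Sketch: copy one chunk's time range via the hypertable rather than
-- reading the chunk relation directly. "time", c_start and c_end are
-- assumptions about the schema and the surrounding procedure.
EXECUTE format(
    'insert into %I select * from %I where "time" >= $1 and "time" < $2',
    new_table, old_table)
USING c_start, c_end;
```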
        -- whole chunk is finished.
        -- it will only grab chunks up to the older_than interval
        -- specified.
        select a.show_chunks into last_chunk from
            (select * from
                show_chunks('copyto', older_than => older_than)
Review comment: The static string in this seems wrong to me?

Author reply: It is. Will fix.
                order by show_chunks DESC
                LIMIT 1) a;
        RAISE NOTICE 'Copied Chunk % into %', c_row.chunkname, last_chunk;

        -- compress that last chunk.
        PERFORM compress_chunk(last_chunk);

        RAISE NOTICE 'Compressed %', last_chunk;
    END LOOP;

    drop table chunknames;
END;
$$;
Review comment: I'd use regclasses for these to properly account for schema qualification.

Author reply: Good point.
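A regclass-based signature along the lines of this suggestion might look like the following sketch (loop simplified to the copy step; the temp table, compression calls, and notices are omitted):

```sql
-- Sketch of a regclass-typed signature; %s on a regclass renders a
-- correctly quoted, schema-qualified name, so no %I juggling is needed.
CREATE OR REPLACE PROCEDURE migrate_data(
    old_table  regclass,
    new_table  regclass,
    older_than interval)
LANGUAGE plpgsql
AS
$$
DECLARE
    chunk regclass;
BEGIN
    FOR chunk IN SELECT show_chunks(old_table) LOOP
        EXECUTE format('insert into %s (select * from %s)',
                       new_table, chunk);
    END LOOP;
END;
$$;
```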