Release 2.6.0
This release is medium priority for upgrade. We recommend that you
upgrade at the next available opportunity.

This release adds major new features since the 2.5.2 release,
including:

* Compression in continuous aggregates
* Experimental support for timezones in continuous aggregates
* Experimental support for monthly buckets in continuous aggregates

It also includes several bug fixes. Telemetry reports are switched to a new
format, and now include more detailed statistics on compression, distributed
hypertables, and indexes.

**Features**

* #3768 Allow ALTER TABLE ADD COLUMN with DEFAULT on compressed
hypertable
* #3769 Allow ALTER TABLE DROP COLUMN on compressed hypertable
* #3943 Optimize first/last
* #3945 Add support for ALTER SCHEMA on multi-node
* #3949 Add support for DROP SCHEMA on multi-node

**Bugfixes**

* #3808 Properly handle max_retries option
* #3863 Fix remote transaction heal logic
* #3869 Fix ALTER SET/DROP NULL constraint on distributed hypertable
* #3944 Fix segfault in add_compression_policy
* #3961 Fix crash in EXPLAIN VERBOSE on distributed hypertable
* #4015 Eliminate float rounding instabilities in interpolate
* #4019 Update ts_extension_oid in transitioning state
* #4073 Fix buffer overflow in partition scheme

**Improvements**

Query planning performance is improved for hypertables with a large
number of chunks.

**Thanks**

* @fvannee for reporting a first/last memory leak
* @mmouterde for reporting an issue with floats and interpolate
akuzm authored and svenklemm committed Feb 17, 2022
1 parent fc5341c commit 23962c8
Showing 8 changed files with 179 additions and 166 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/abi.yaml
@@ -68,7 +68,7 @@ jobs:
cp build_abi/install_lib/* `pg_config --pkglibdir`
chown -R postgres /mnt
set -o pipefail
sudo -u postgres make -C build_abi -k regresscheck regresscheck-t regresscheck-shared | tee installcheck.log
sudo -u postgres make -C build_abi -k regresscheck regresscheck-t regresscheck-shared IGNORES="memoize" | tee installcheck.log
EOF
- name: Show regression diffs
14 changes: 13 additions & 1 deletion CHANGELOG.md
@@ -4,7 +4,16 @@
`psql` with the `-X` flag to prevent any `.psqlrc` commands from
accidentally triggering the load of a previous DB version.**

## Unreleased
## 2.6.0 (2022-02-16)
This release is medium priority for upgrade. We recommend that you upgrade at the next available opportunity.

This release adds major new features since the 2.5.2 release, including:

* Compression in continuous aggregates
* Experimental support for timezones in continuous aggregates
* Experimental support for monthly buckets in continuous aggregates

The release also includes several bug fixes. Telemetry reports now include new and more detailed statistics on regular tables and views, compression, distributed hypertables, and continuous aggregates, which will help us improve TimescaleDB.

**Features**
* #3768 Allow ALTER TABLE ADD COLUMN with DEFAULT on compressed hypertable
@@ -23,6 +32,9 @@ accidentally triggering the load of a previous DB version.**
* #4019 Update ts_extension_oid in transitioning state
* #4073 Fix buffer overflow in partition scheme

**Improvements**
* Query planning performance is improved for hypertables with a large number of chunks.

**Thanks**
* @fvannee for reporting a first/last memory leak
* @mmouterde for reporting an issue with floats and interpolate
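The changelog entry above mentions the new jsonb telemetry format. Since `get_telemetry_report()` now returns `jsonb` instead of text (see the function redefinition in `sql/updates/2.5.2--2.6.0.sql` below), the report can be navigated with standard JSON operators. A minimal sketch; the top-level key names shown are assumptions about the new report layout, not confirmed by this commit:

```sql
-- Pretty-print the full telemetry report (jsonb as of 2.6.0).
SELECT jsonb_pretty(get_telemetry_report());

-- Drill into one section with jsonb operators; the 'relations' key
-- is an assumed name for the per-relation statistics section.
SELECT get_telemetry_report() -> 'relations';
```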
5 changes: 3 additions & 2 deletions sql/CMakeLists.txt
@@ -32,11 +32,12 @@ set(MOD_FILES
updates/2.4.1--2.4.2.sql
updates/2.4.2--2.5.0.sql
updates/2.5.0--2.5.1.sql
updates/2.5.1--2.5.2.sql)
updates/2.5.1--2.5.2.sql
updates/2.5.2--2.6.0.sql)

# The downgrade file to generate a downgrade script for the current version, as
# specified in version.config
set(CURRENT_REV_FILE reverse-dev.sql)
set(CURRENT_REV_FILE 2.6.0--2.5.2.sql)
# Files for generating old downgrade scripts. This should only include files for
# downgrade to from one version to its previous version since we do not support
# skipping versions when downgrading.
42 changes: 42 additions & 0 deletions sql/updates/2.5.2--2.6.0.sql
@@ -0,0 +1,42 @@
DROP FUNCTION IF EXISTS @extschema@.recompress_chunk;
DROP FUNCTION IF EXISTS @extschema@.delete_data_node;
DROP FUNCTION IF EXISTS @extschema@.get_telemetry_report;

-- Also see the comments for ContinuousAggsBucketFunction structure.
CREATE TABLE _timescaledb_catalog.continuous_aggs_bucket_function(
mat_hypertable_id integer PRIMARY KEY REFERENCES _timescaledb_catalog.hypertable (id) ON DELETE CASCADE,
-- The schema of the function. Equals TRUE for "timescaledb_experimental", FALSE otherwise.
experimental bool NOT NULL,
-- Name of the bucketing function, e.g. "time_bucket" or "time_bucket_ng"
name text NOT NULL,
-- `bucket_width` argument of the function, e.g. "1 month"
bucket_width text NOT NULL,
-- `origin` argument of the function provided by the user
origin text NOT NULL,
-- `timezone` argument of the function provided by the user
timezone text NOT NULL
);

-- in tables.sql the same is done with GRANT SELECT ON ALL TABLES IN SCHEMA
GRANT SELECT ON _timescaledb_catalog.continuous_aggs_bucket_function TO PUBLIC;

SELECT pg_catalog.pg_extension_config_dump('_timescaledb_catalog.continuous_aggs_bucket_function', '');

-- Adding overloaded versions of invalidation_process_hypertable_log() and invalidation_process_cagg_log()
-- with bucket_functions argument is done in cagg_utils.sql. Note that this file is included when building
-- the update scripts, so we don't have to do it here.

DROP VIEW IF EXISTS timescaledb_information.continuous_aggregates;

CREATE FUNCTION @extschema@.delete_data_node(
node_name NAME,
if_exists BOOLEAN = FALSE,
force BOOLEAN = FALSE,
repartition BOOLEAN = TRUE,
drop_database BOOLEAN = FALSE
) RETURNS BOOLEAN AS '@MODULE_PATHNAME@', 'ts_data_node_delete' LANGUAGE C VOLATILE;

CREATE FUNCTION @extschema@.get_telemetry_report() RETURNS jsonb
AS '@MODULE_PATHNAME@', 'ts_telemetry_get_report_jsonb'
LANGUAGE C STABLE PARALLEL SAFE;
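
The `continuous_aggs_bucket_function` catalog table created above is populated when a continuous aggregate uses the experimental bucketing function. A hedged sketch of how a row might come to exist, assuming a `metrics(ts, value)` hypertable; the view name and column names are illustrative, not from this commit:

```sql
-- Hypothetical monthly, timezone-aware continuous aggregate using the
-- experimental time_bucket_ng; variable-width buckets like '1 month'
-- are what the new catalog table records.
CREATE MATERIALIZED VIEW monthly_summary
WITH (timescaledb.continuous) AS
SELECT timescaledb_experimental.time_bucket_ng('1 month', ts, 'Europe/Berlin') AS bucket,
       avg(value) AS avg_value
FROM metrics
GROUP BY bucket;

-- Inspect how the bucketing function was recorded
-- (readable by PUBLIC per the GRANT above):
SELECT * FROM _timescaledb_catalog.continuous_aggs_bucket_function;
```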

119 changes: 119 additions & 0 deletions sql/updates/2.6.0--2.5.2.sql
@@ -0,0 +1,119 @@

DROP PROCEDURE IF EXISTS @extschema@.recompress_chunk;
DROP FUNCTION IF EXISTS _timescaledb_internal.chunk_status;
DROP FUNCTION IF EXISTS @extschema@.delete_data_node;
DROP FUNCTION IF EXISTS @extschema@.get_telemetry_report;

CREATE FUNCTION @extschema@.delete_data_node(
node_name NAME,
if_exists BOOLEAN = FALSE,
force BOOLEAN = FALSE,
repartition BOOLEAN = TRUE
) RETURNS BOOLEAN AS '@MODULE_PATHNAME@', 'ts_data_node_delete' LANGUAGE C VOLATILE;

CREATE FUNCTION @extschema@.get_telemetry_report(always_display_report boolean DEFAULT false) RETURNS TEXT
AS '@MODULE_PATHNAME@', 'ts_get_telemetry_report' LANGUAGE C STABLE PARALLEL SAFE;

DO $$
DECLARE
caggs text[];
caggs_nr int;
BEGIN
SELECT array_agg(format('%I.%I', user_view_schema, user_view_name)) FROM _timescaledb_catalog.continuous_agg WHERE bucket_width < 0 INTO caggs;
SELECT array_length(caggs, 1) INTO caggs_nr;
IF caggs_nr > 0 THEN
RAISE EXCEPTION 'Downgrade is impossible since % continuous aggregates exist which use variable buckets: %', caggs_nr, caggs
USING HINT = 'Remove the corresponding continuous aggregates manually before downgrading';
END IF;
END
$$ LANGUAGE 'plpgsql';

-- It's safe to drop the table.
-- ALTER EXTENSION is required to revert the effect of pg_extension_config_dump()
-- See "The list of tables configured to be dumped" test in test/sql/updates/post.catalog.sql
ALTER EXTENSION timescaledb DROP TABLE _timescaledb_catalog.continuous_aggs_bucket_function;

-- Actually drop the table.
-- ALTER EXTENSION only removes the table from the extension but doesn't drop it.
DROP TABLE IF EXISTS _timescaledb_catalog.continuous_aggs_bucket_function;

-- Drop overloaded versions of invalidation_process_hypertable_log() and invalidation_process_cagg_log()
-- with bucket_functions argument.

ALTER EXTENSION timescaledb DROP FUNCTION _timescaledb_internal.invalidation_process_hypertable_log(
mat_hypertable_id INTEGER,
raw_hypertable_id INTEGER,
dimtype REGTYPE,
mat_hypertable_ids INTEGER[],
bucket_widths BIGINT[],
max_bucket_widths BIGINT[],
bucket_functions TEXT[]
);

DROP FUNCTION IF EXISTS _timescaledb_internal.invalidation_process_hypertable_log(
mat_hypertable_id INTEGER,
raw_hypertable_id INTEGER,
dimtype REGTYPE,
mat_hypertable_ids INTEGER[],
bucket_widths BIGINT[],
max_bucket_widths BIGINT[],
bucket_functions TEXT[]
);

ALTER EXTENSION timescaledb DROP FUNCTION _timescaledb_internal.invalidation_process_cagg_log(
mat_hypertable_id INTEGER,
raw_hypertable_id INTEGER,
dimtype REGTYPE,
window_start BIGINT,
window_end BIGINT,
mat_hypertable_ids INTEGER[],
bucket_widths BIGINT[],
max_bucket_widths BIGINT[],
bucket_functions TEXT[],
OUT ret_window_start BIGINT,
OUT ret_window_end BIGINT
);

DROP FUNCTION IF EXISTS _timescaledb_internal.invalidation_process_cagg_log(
mat_hypertable_id INTEGER,
raw_hypertable_id INTEGER,
dimtype REGTYPE,
window_start BIGINT,
window_end BIGINT,
mat_hypertable_ids INTEGER[],
bucket_widths BIGINT[],
max_bucket_widths BIGINT[],
bucket_functions TEXT[],
OUT ret_window_start BIGINT,
OUT ret_window_end BIGINT
);

--undo compression feature for caggs
--check that all continuous aggregates have compression disabled
--this check is sufficient as we cannot have compressed chunks, if
-- compression is disabled
DO $$
DECLARE
cagg_name NAME;
cnt INTEGER := 0;
BEGIN
FOR cagg_name IN
SELECT view_name,
materialization_hypertable_schema,
materialization_hypertable_name
FROM timescaledb_information.continuous_aggregates
WHERE compression_enabled is TRUE
LOOP
RAISE NOTICE 'compression is enabled for continuous aggregate: %', cagg_name;
cnt := cnt + 1;
END LOOP;
IF cnt > 0 THEN
RAISE EXCEPTION 'cannot downgrade as compression is enabled for continuous aggregates'
USING DETAIL = 'Please disable compression on all continuous aggregates before downgrading.',
HINT = 'To disable compression, call decompress_chunk to decompress chunks, then drop any existing compression policy on the continuous aggregate, and finally run ALTER MATERIALIZED VIEW % SET timescaledb.compress = ''false''. ';
END IF;
END $$;

-- revert changes to continuous aggregates view definition
DROP VIEW IF EXISTS timescaledb_information.continuous_aggregates;
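
The compression check in the DO block above refuses to downgrade while any continuous aggregate has compression enabled. A sketch of the cleanup its hint describes, for one affected cagg; `my_cagg` is an illustrative name, and the exact availability of these helpers on continuous aggregates should be checked against the 2.6 documentation:

```sql
-- 1. Decompress any compressed chunks backing the cagg
--    (second argument skips chunks that are not compressed).
SELECT decompress_chunk(c, true) FROM show_chunks('my_cagg') c;

-- 2. Drop the compression policy on the cagg, if one exists.
SELECT remove_compression_policy('my_cagg', if_exists => true);

-- 3. Disable compression so the downgrade check passes.
ALTER MATERIALIZED VIEW my_cagg SET (timescaledb.compress = 'false');
```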

42 changes: 0 additions & 42 deletions sql/updates/latest-dev.sql
@@ -1,42 +0,0 @@
DROP FUNCTION IF EXISTS @extschema@.recompress_chunk;
DROP FUNCTION IF EXISTS @extschema@.delete_data_node;
DROP FUNCTION IF EXISTS @extschema@.get_telemetry_report;

-- Also see the comments for ContinuousAggsBucketFunction structure.
CREATE TABLE _timescaledb_catalog.continuous_aggs_bucket_function(
mat_hypertable_id integer PRIMARY KEY REFERENCES _timescaledb_catalog.hypertable (id) ON DELETE CASCADE,
-- The schema of the function. Equals TRUE for "timescaledb_experimental", FALSE otherwise.
experimental bool NOT NULL,
-- Name of the bucketing function, e.g. "time_bucket" or "time_bucket_ng"
name text NOT NULL,
-- `bucket_width` argument of the function, e.g. "1 month"
bucket_width text NOT NULL,
-- `origin` argument of the function provided by the user
origin text NOT NULL,
-- `timezone` argument of the function provided by the user
timezone text NOT NULL
);

-- in tables.sql the same is done with GRANT SELECT ON ALL TABLES IN SCHEMA
GRANT SELECT ON _timescaledb_catalog.continuous_aggs_bucket_function TO PUBLIC;

SELECT pg_catalog.pg_extension_config_dump('_timescaledb_catalog.continuous_aggs_bucket_function', '');

-- Adding overloaded versions of invalidation_process_hypertable_log() and invalidation_process_cagg_log()
-- with bucket_functions argument is done in cagg_utils.sql. Note that this file is included when building
-- the update scripts, so we don't have to do it here.

DROP VIEW IF EXISTS timescaledb_information.continuous_aggregates;

CREATE FUNCTION @extschema@.delete_data_node(
node_name NAME,
if_exists BOOLEAN = FALSE,
force BOOLEAN = FALSE,
repartition BOOLEAN = TRUE,
drop_database BOOLEAN = FALSE
) RETURNS BOOLEAN AS '@MODULE_PATHNAME@', 'ts_data_node_delete' LANGUAGE C VOLATILE;

CREATE FUNCTION @extschema@.get_telemetry_report() RETURNS jsonb
AS '@MODULE_PATHNAME@', 'ts_telemetry_get_report_jsonb'
LANGUAGE C STABLE PARALLEL SAFE;

119 changes: 0 additions & 119 deletions sql/updates/reverse-dev.sql
@@ -1,119 +0,0 @@

DROP PROCEDURE IF EXISTS @extschema@.recompress_chunk;
DROP FUNCTION IF EXISTS _timescaledb_internal.chunk_status;
DROP FUNCTION IF EXISTS @extschema@.delete_data_node;
DROP FUNCTION IF EXISTS @extschema@.get_telemetry_report;

CREATE FUNCTION @extschema@.delete_data_node(
node_name NAME,
if_exists BOOLEAN = FALSE,
force BOOLEAN = FALSE,
repartition BOOLEAN = TRUE
) RETURNS BOOLEAN AS '@MODULE_PATHNAME@', 'ts_data_node_delete' LANGUAGE C VOLATILE;

CREATE FUNCTION @extschema@.get_telemetry_report(always_display_report boolean DEFAULT false) RETURNS TEXT
AS '@MODULE_PATHNAME@', 'ts_get_telemetry_report' LANGUAGE C STABLE PARALLEL SAFE;

DO $$
DECLARE
caggs text[];
caggs_nr int;
BEGIN
SELECT array_agg(format('%I.%I', user_view_schema, user_view_name)) FROM _timescaledb_catalog.continuous_agg WHERE bucket_width < 0 INTO caggs;
SELECT array_length(caggs, 1) INTO caggs_nr;
IF caggs_nr > 0 THEN
RAISE EXCEPTION 'Downgrade is impossible since % continuous aggregates exist which use variable buckets: %', caggs_nr, caggs
USING HINT = 'Remove the corresponding continuous aggregates manually before downgrading';
END IF;
END
$$ LANGUAGE 'plpgsql';

-- It's safe to drop the table.
-- ALTER EXTENSION is required to revert the effect of pg_extension_config_dump()
-- See "The list of tables configured to be dumped" test in test/sql/updates/post.catalog.sql
ALTER EXTENSION timescaledb DROP TABLE _timescaledb_catalog.continuous_aggs_bucket_function;

-- Actually drop the table.
-- ALTER EXTENSION only removes the table from the extension but doesn't drop it.
DROP TABLE IF EXISTS _timescaledb_catalog.continuous_aggs_bucket_function;

-- Drop overloaded versions of invalidation_process_hypertable_log() and invalidation_process_cagg_log()
-- with bucket_functions argument.

ALTER EXTENSION timescaledb DROP FUNCTION _timescaledb_internal.invalidation_process_hypertable_log(
mat_hypertable_id INTEGER,
raw_hypertable_id INTEGER,
dimtype REGTYPE,
mat_hypertable_ids INTEGER[],
bucket_widths BIGINT[],
max_bucket_widths BIGINT[],
bucket_functions TEXT[]
);

DROP FUNCTION IF EXISTS _timescaledb_internal.invalidation_process_hypertable_log(
mat_hypertable_id INTEGER,
raw_hypertable_id INTEGER,
dimtype REGTYPE,
mat_hypertable_ids INTEGER[],
bucket_widths BIGINT[],
max_bucket_widths BIGINT[],
bucket_functions TEXT[]
);

ALTER EXTENSION timescaledb DROP FUNCTION _timescaledb_internal.invalidation_process_cagg_log(
mat_hypertable_id INTEGER,
raw_hypertable_id INTEGER,
dimtype REGTYPE,
window_start BIGINT,
window_end BIGINT,
mat_hypertable_ids INTEGER[],
bucket_widths BIGINT[],
max_bucket_widths BIGINT[],
bucket_functions TEXT[],
OUT ret_window_start BIGINT,
OUT ret_window_end BIGINT
);

DROP FUNCTION IF EXISTS _timescaledb_internal.invalidation_process_cagg_log(
mat_hypertable_id INTEGER,
raw_hypertable_id INTEGER,
dimtype REGTYPE,
window_start BIGINT,
window_end BIGINT,
mat_hypertable_ids INTEGER[],
bucket_widths BIGINT[],
max_bucket_widths BIGINT[],
bucket_functions TEXT[],
OUT ret_window_start BIGINT,
OUT ret_window_end BIGINT
);

--undo compression feature for caggs
--check that all continuous aggregates have compression disabled
--this check is sufficient as we cannot have compressed chunks, if
-- compression is disabled
DO $$
DECLARE
cagg_name NAME;
cnt INTEGER := 0;
BEGIN
FOR cagg_name IN
SELECT view_name,
materialization_hypertable_schema,
materialization_hypertable_name
FROM timescaledb_information.continuous_aggregates
WHERE compression_enabled is TRUE
LOOP
RAISE NOTICE 'Compression is enabled for continuous aggregate: %', cagg_name;
cnt := cnt + 1;
END LOOP;
IF cnt > 0 THEN
RAISE EXCEPTION 'Cannot downgrade as compression is enabled for continuous aggregates'
USING DETAIL = 'Please disable compression on all continuous aggregates before downgrading',
HINT = 'To disable compression, call decompress_chunk to decompress chunks, then drop any existing compression policy on the continuous aggregate, and finally run ALTER MATERIALIZED VIEW % SET timescaledb.compress = ''false'' ';
END IF;
END $$;

-- revert changes to continuous aggregates view definition
DROP VIEW IF EXISTS timescaledb_information.continuous_aggregates;

2 changes: 1 addition & 1 deletion version.config
@@ -1,3 +1,3 @@
version = 2.6.0-dev
version = 2.6.0
update_from_version = 2.5.2
downgrade_to_version = 2.5.2
