Merge branch 'latest' into gsg-improvements-part1-lana

Loquacity authored Aug 28, 2023
2 parents 4e2347c + 49581f1 commit 5ee17e1
Showing 27 changed files with 162 additions and 141 deletions.
4 changes: 2 additions & 2 deletions .github/workflows/deploy.yml
Original file line number Diff line number Diff line change
@@ -19,7 +19,7 @@ jobs:
id: timescale
run: |
echo "DEV_FOLDER=$(echo ${GITHUB_HEAD_REF})" >> $GITHUB_OUTPUT
echo "HYPHENATED_BRANCH_NAME=$(echo "${GITHUB_HEAD_REF}" | sed 's|/|-|')" >> $GITHUB_ENV
echo "HYPHENATED_BRANCH_NAME=$(echo "${GITHUB_HEAD_REF}" | sed 's|/|-|' | sed 's/\./-/g')" >> $GITHUB_ENV
- name: Repository Dispatch
uses: peter-evans/repository-dispatch@26b39ed245ab8f31526069329e112ab2fb224588
@@ -30,7 +30,7 @@ jobs:
client-payload: '{"branch": "${{ steps.timescale.outputs.DEV_FOLDER }}", "pr_number": "${{ env.PR_NUMBER }}"}'

- name: Write comment
uses: marocchino/sticky-pull-request-comment@f6a2580ed520ae15da6076e7410b088d1c5dddd9
uses: marocchino/sticky-pull-request-comment@efaaab3fd41a9c3de579aba759d2552635e590fd
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
message: |
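The branch-name sanitization added in the deploy.yml change above can be sketched in isolation; the branch name below is hypothetical:

```shell
# A sketch of the sanitization added above: the first "/" and every "."
# in a branch name become hyphens.
GITHUB_HEAD_REF="feature/gsg-improvements-1.2"
echo "$GITHUB_HEAD_REF" | sed 's|/|-|' | sed 's/\./-/g'
# → feature-gsg-improvements-1-2
```

Note that `s|/|-|` has no `g` flag, so only the first slash is replaced; the added `s/\./-/g` replaces every dot.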
26 changes: 21 additions & 5 deletions _partials/_caggs-intro.md
@@ -8,12 +8,28 @@ temperature readings taken every second, you can find the average temperature
for each hour. Every time you run this query, the database needs to scan the
entire table and recalculate the average.

Continuous aggregate views are refreshed automatically in the background as new
data is added, or old data is modified. Timescale tracks these changes to the
dataset, and automatically updates the view in the background. This does not add
any maintenance burden to your database, and does not slow down `INSERT`
operations.
Continuous aggregates are a kind of hypertable that is refreshed automatically
in the background as new data is added, or old data is modified. Changes to your
dataset are tracked, and the hypertable behind the continuous aggregate is
automatically updated in the background.

You don't need to manually refresh your continuous aggregates; they are
continuously and incrementally updated in the background. Continuous aggregates
also have a much lower maintenance burden than regular PostgreSQL materialized
views, because the whole view is not created from scratch on each refresh. This
means that you can get on with working with your data instead of maintaining
your database.

Because continuous aggregates are based on hypertables, you can query them in
exactly the same way as your other tables, and enable [compression][compression]
or [data tiering][data-tiering] on your continuous aggregates. You can even
create
[continuous aggregates on top of your continuous aggregates][hierarchical-caggs].

By default, querying continuous aggregates provides you with real-time data.
Pre-aggregated data from the materialized view is combined with recent data that
hasn't been aggregated yet. This gives you up-to-date results on every query.
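The behavior described above can be sketched with a hypothetical `conditions` hypertable, following the temperature example at the top of this partial:

```sql
-- A minimal sketch; assumes a hypertable "conditions" with a
-- TIMESTAMPTZ "time" column and a "temperature" column.
CREATE MATERIALIZED VIEW conditions_hourly
WITH (timescaledb.continuous) AS
SELECT time_bucket('1 hour', time) AS bucket,
       avg(temperature) AS avg_temp
FROM conditions
GROUP BY bucket;

-- Query it like any other table; by default, recent rows that are not
-- yet materialized are included in the result (real-time aggregation).
SELECT * FROM conditions_hourly ORDER BY bucket DESC LIMIT 5;
```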

[data-tiering]: /use-timescale/:currentVersion:/data-tiering/
[compression]: /use-timescale/:currentVersion:/compression/
[hierarchical-caggs]: /use-timescale/:currentVersion:/continuous-aggregates/hierarchical-continuous-aggregates/
16 changes: 9 additions & 7 deletions _partials/_caggs-types.md
@@ -1,16 +1,18 @@
There are three main types of aggregation: materialized views, continuous
aggregates, and real time aggregates.
There are three main ways to make aggregation easier: materialized views,
continuous aggregates, and real time aggregates.

[Materialized views][pg-materialized views] are a standard PostgreSQL feature.
They are used to cache the result of a complex query so that you can reuse it
later. Materialized views are not updated automatically, but you can manually
refresh them as required.
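For comparison, a minimal sketch of a plain materialized view over a hypothetical `conditions` table:

```sql
-- Cache a complex query result; this view is NOT updated automatically.
CREATE MATERIALIZED VIEW daily_avg AS
SELECT date_trunc('day', time) AS day, avg(temperature) AS avg_temp
FROM conditions
GROUP BY day;

-- Recompute the whole view from scratch, on demand:
REFRESH MATERIALIZED VIEW daily_avg;
```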

[Continuous aggregates][about-caggs] are a Timescale only feature. They work in a similar way
to a materialized view, but they are refreshed automatically. Continuous
aggregates update to a set point in time called the materialization threshold,
which means that they do not include the most recent data chunk from the
underlying hypertable.
[Continuous aggregates][about-caggs] are a Timescale-only feature. They work in
a similar way to a materialized view, but they are updated automatically in the
background, as new data is added to your database. Continuous aggregates are
updated continuously and incrementally, which means they are less resource
intensive to maintain than materialized views. Continuous aggregates are based
on hypertables, and you can query them in the same way as you do your other
tables.

[Real-time aggregates][real-time-aggs] are a Timescale-only feature. They are
the same as continuous aggregates, but they add the most recent raw data to the
2 changes: 1 addition & 1 deletion _partials/_usage-based-storage-intro.md
@@ -5,6 +5,6 @@ You are only charged for the storage space that you actually use. Make sure you
[compression][compression], a [data retention policy][data-retention], and
[data tiering][data-tiering], to help you manage costs.

[compression]: /use-timescale/:currentVersion:/compression/
[compression]: /use-timescale/:currentVersion:/compression/about-compression
[data-retention]: /use-timescale/:currentVersion:/data-retention/
[data-tiering]: /use-timescale/:currentVersion:/data-tiering/
30 changes: 30 additions & 0 deletions about/release-notes/index.md
@@ -17,6 +17,36 @@ GitHub and be notified by email whenever a new release is available. On the
click `Watch`, select `Custom` and then check `Releases`.
</Highlight>

## TimescaleDB&nbsp;2.11.2 on 2023-08-17

These release notes are for the release of TimescaleDB&nbsp;2.11.2 on
2023-08-17.

<Highlight type="note">
This release contains bug fixes since the 2.11.1 release.
It is recommended that you upgrade at the next available opportunity.
</Highlight>

### Complete list of features

* #5923 Feature flags for TimescaleDB features

### Complete list of bug fixes

* #5680 Fix DISTINCT query with `JOIN` on multiple `segmentby` columns
* #5774 Fixed two bugs in decompression sorted merge code
* #5786 Ensure pg_config --cppflags are passed
* #5906 Fix quoting owners in SQL scripts
* #5912 Fix crash in 1-step integer policy creation

### Acknowledgments

Timescale thanks:

* @mrksngl for submitting a PR to fix extension upgrade scripts
* @ericdevries for reporting an issue with DISTINCT queries using
  `segmentby` columns of a compressed hypertable

## TimescaleDB&nbsp;2.11.1 on 2023-06-29

These release notes are for the release of TimescaleDB&nbsp;2.11.1 on
23 changes: 6 additions & 17 deletions about/timescaledb-editions.md
@@ -10,15 +10,15 @@ tags: [learn, contribute]

There are two versions of TimescaleDB available:

* TimescaleDB with an Apache 2 licence
* TimescaleDB Apache 2 Edition
* TimescaleDB Community Edition

The TimescaleDB Apache 2 Edition is the version of TimescaleDB that is available
under the [Apache 2.0 license][apache-license]. This is a classic open source license,
meaning that it is completely unrestricted: anyone can take this code and offer it
as a service.

### TimescaleDB Apache 2 Edition
## TimescaleDB Apache 2 Edition

You can install TimescaleDB Apache 2 Edition on your own on-premises or cloud
infrastructure and run it for free.
@@ -29,17 +29,10 @@ main contributor.
You can modify the TimescaleDB Apache 2 Edition source code and run it for
production use.

TimescaleDB Apache 2 Edition is available from these service providers:
## TimescaleDB Community Edition

* [Azure Database for PostgreSQL][azure-database]
* [Digital Ocean][digital-ocean]
* [Aiven for PostgreSQL][aiven]
* [Neon. Serverless Postgres][neon]

### TimescaleDB Community Edition

TimescaleDB Community Edition is the latest, most updated version of TimescaleDB,
available under the
TimescaleDB Community Edition is the most advanced and feature-complete
version of TimescaleDB, available under the terms of the
[Timescale License (TSL)][timescale-license].

For more information about the Timescale license, see [this blog post][license-blog].
@@ -63,7 +56,7 @@ the TimescaleDB Community Edition source code and offer it as a service.
You can access a hosted version of TimescaleDB Community Edition through
[Timescale][timescale-cloud], which is a cloud-native platform for time-series data.

### Feature comparison
## Feature comparison

<table>
<tr>
@@ -517,12 +510,8 @@ You can access a hosted version of TimescaleDB Community Edition through

<!-- vale Google.Units = NO -->

[aiven]: https://aiven.io/postgresql
[azure-database]: https://azure.microsoft.com/en-us/services/postgresql/?&ef_id=CjwKCAjwhOyJBhA4EiwAEcJdcWZ6_o9d5INkZvm1MGsOsinuXgDwV_ySL5vc34z3pyxxrP0R49J_8xoCVvIQAvD_BwE:G:s&OCID=AID2200277_SEM_CjwKCAjwhOyJBhA4EiwAEcJdcWZ6_o9d5INkZvm1MGsOsinuXgDwV_ySL5vc34z3pyxxrP0R49J_8xoCVvIQAvD_BwE:G:s&gclid=CjwKCAjwhOyJBhA4EiwAEcJdcWZ6_o9d5INkZvm1MGsOsinuXgDwV_ySL5vc34z3pyxxrP0R49J_8xoCVvIQAvD_BwE#overview
[digital-ocean]: https://docs.digitalocean.com/products/databases/postgresql/details/supported-extensions/
[license-blog]: https://blog.timescale.com/blog/building-open-source-business-in-cloud-era-v2/
[mst]: /mst/:currentVersion:
[timescale-cloud]: /use-timescale/:currentVersion:/services/
[timescale-license]: https://github.com/timescale/timescaledb/blob/master/tsl/LICENSE-TIMESCALE
[neon]: https://neon.tech/
[apache-license]: https://github.com/timescale/timescaledb/blob/master/LICENSE-APACHE
9 changes: 4 additions & 5 deletions api/add_data_node.md
@@ -101,11 +101,10 @@ TimescaleDB extension on the data node unless it is already installed.

### Sample usage

Let's assume that you have an existing hypertable `conditions` and
want to use `time` as the time partitioning column and `location` as
the space partitioning column. You also want to distribute the chunks
of the hypertable on two data nodes `dn1.example.com` and
`dn2.example.com`:
If you have an existing hypertable `conditions`, you might want to use `time`
as the time partitioning column and `location` as the space partitioning
column, and distribute the chunks of the hypertable on two data nodes
`dn1.example.com` and `dn2.example.com`:

```sql
SELECT add_data_node('dn1', host => 'dn1.example.com');
10 changes: 5 additions & 5 deletions api/compression.md
@@ -12,9 +12,11 @@ Before you set up compression, you need to
[set up a compression policy][add_compression_policy].

<Highlight type="note">
Before you set up compression for the first time, read the compression
[blog post][blog-compression] and
[documentation][using-compression].
Before you set up compression for the first time, read
the compression
[blog post](https://blog.timescale.com/blog/building-columnar-compression-in-a-row-oriented-database/)
and
[documentation](https://docs.timescale.com/use-timescale/latest/compression/).
</Highlight>

You can also [compress chunks manually][compress_chunk], instead of using an
@@ -54,8 +56,6 @@ In TimescaleDB&nbsp;2.11 and later, you can update and delete compressed data.
You can also use advanced insert statements like `ON CONFLICT` and `RETURNING`.
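A hedged sketch of such a statement, assuming a hypothetical hypertable `conditions(time, device_id, temperature)` with compression enabled and a unique index on `(time, device_id)`:

```sql
-- Upsert into a hypertable that has compressed chunks
-- (TimescaleDB 2.11 or later):
INSERT INTO conditions (time, device_id, temperature)
VALUES ('2023-08-01 00:00:00+00', 7, 21.5)
ON CONFLICT (time, device_id)
DO UPDATE SET temperature = EXCLUDED.temperature
RETURNING *;
```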

[add_compression_policy]: /api/:currentVersion:/compression/add_compression_policy/
[blog-compression]: https://blog.timescale.com/blog/building-columnar-compression-in-a-row-oriented-database/
[compress_chunk]: /api/:currentVersion:/compression/compress_chunk/
[configure-compression]: /api/:currentVersion:/compression/alter_table_compression/
[using-compression]: /use-timescale/:currentVersion:/compression/
[skipscan]: /use-timescale/:currentVersion:/query-data/skipscan/
13 changes: 0 additions & 13 deletions api/create_hypertable.md
@@ -147,19 +147,6 @@ SELECT create_hypertable('conditions', 'time', chunk_time_interval => 8640000000
SELECT create_hypertable('conditions', 'time', chunk_time_interval => INTERVAL '1 day');
```

Convert table `conditions` to hypertable with time partitioning on `time` and
space partitioning (4 partitions) on `location`:

```sql
SELECT create_hypertable('conditions', 'time', 'location', 4);
```

The same as above, but using a custom partitioning function:

```sql
SELECT create_hypertable('conditions', 'time', 'location', 4, partitioning_func => 'location_hash');
```

Convert table `conditions` to hypertable. Do not raise a warning
if `conditions` is already a hypertable:

28 changes: 15 additions & 13 deletions api/drop_chunks.md
@@ -28,19 +28,19 @@ specified one.
Chunks can only be dropped based on their time intervals. They cannot be dropped
based on a space partition.

### Required arguments
## Required arguments

|Name|Type|Description|
|---|---|---|
| `relation` | REGCLASS | Hypertable or continuous aggregate from which to drop chunks. |
| `older_than` | INTERVAL | Specification of cut-off point where any full chunks older than this timestamp should be removed. |
|-|-|-|
|`relation`|REGCLASS|Hypertable or continuous aggregate from which to drop chunks.|
|`older_than`|INTERVAL|Specification of cut-off point where any full chunks older than this timestamp should be removed.|

### Optional arguments
## Optional arguments

|Name|Type|Description|
|---|---|---|
| `newer_than` | INTERVAL | Specification of cut-off point where any full chunks newer than this timestamp should be removed. |
| `verbose` | BOOLEAN | Setting to true displays messages about the progress of the reorder command. Defaults to false.|
|-|-|-|
|`newer_than`|INTERVAL|Specification of cut-off point where any full chunks newer than this timestamp should be removed.|
|`verbose`|BOOLEAN|Setting to true displays messages about the progress of the reorder command. Defaults to false.|

The `older_than` and `newer_than` parameters can be specified in two ways:

@@ -60,13 +60,15 @@ you are removing things _in the past_. If you want to remove data
in the future, for example to delete erroneous entries, use a timestamp.
</Highlight>

When both arguments are used, the function returns the intersection of the resulting two ranges. For example,
specifying `newer_than => 4 months` and `older_than => 3 months` drops all full chunks that are between 3 and
4 months old. Similarly, specifying `newer_than => '2017-01-01'` and `older_than => '2017-02-01'` drops
all full chunks between '2017-01-01' and '2017-02-01'. Specifying parameters that do not result in an overlapping
When both arguments are used, the function returns the intersection of the
resulting two ranges. For example, specifying `newer_than => 4 months` and
`older_than => 3 months` drops all full chunks that are between 3 and 4 months
old. Similarly, specifying `newer_than => '2017-01-01'` and
`older_than => '2017-02-01'` drops all full chunks between '2017-01-01' and
'2017-02-01'. Specifying parameters that do not result in an overlapping
intersection between two ranges results in an error.
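The intersection case described above can be sketched as follows, assuming a hypertable named `conditions`:

```sql
-- Drop all full chunks that are between 3 and 4 months old:
SELECT drop_chunks('conditions',
       older_than => INTERVAL '3 months',
       newer_than => INTERVAL '4 months');
```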

### Sample usage
## Sample usage

Drop all chunks from hypertable `conditions` older than 3 months:

11 changes: 6 additions & 5 deletions api/set_chunk_time_interval.md
@@ -15,14 +15,15 @@ Sets the `chunk_time_interval` on a hypertable. The new interval is used
when new chunks are created, and time intervals on existing chunks are
not changed.

### Required arguments
## Required arguments

|Name|Type|Description|
|-|-|-|
|`hypertable`|REGCLASS| Hypertable to update interval for|
|`hypertable`|REGCLASS|Hypertable or continuous aggregate to update interval for|
|`chunk_time_interval`|See note|Event time that each new chunk covers|

The valid types for the `chunk_time_interval` depend on the type used for the hypertable `time` column:
The valid types for the `chunk_time_interval` depend on the type used for the
hypertable `time` column:

|`time` column type|`chunk_time_interval` type|Time unit|
|-|-|-|
@@ -38,7 +39,7 @@

For more information, see the [`create_hypertable` section][create-hypertable].

### Optional arguments
## Optional arguments

|Name|Type|Description|
|-|-|-|
@@ -47,7 +48,7 @@ For more information, see the [`create_hypertable` section][create-hypertable].
You need to use `dimension_name` argument only if your hypertable has multiple
time dimensions.
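A minimal sketch of the call, assuming a hypertable named `conditions` whose time dimension is a TIMESTAMPTZ column named `time`:

```sql
-- New chunks cover 24 hours; existing chunks are left unchanged.
SELECT set_chunk_time_interval('conditions', INTERVAL '24 hours');

-- With multiple time dimensions, name the one to change:
SELECT set_chunk_time_interval('conditions', INTERVAL '24 hours',
       dimension_name => 'time');
```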

### Sample usage
## Sample usage

For a TIMESTAMP column, set `chunk_time_interval` to 24 hours:

43 changes: 23 additions & 20 deletions api/show_chunks.md
@@ -16,37 +16,40 @@ Get list of chunks associated with a hypertable.
The function accepts the following required and optional arguments. These arguments
have the same semantics as the `drop_chunks` [function][drop_chunks].

### Required arguments
## Required arguments

|Name|Type|Description|
|---|---|---|
| `relation` | REGCLASS | Hypertable or continuous aggregate from which to select chunks. |
|-|-|-|
|`relation`|REGCLASS|Hypertable or continuous aggregate from which to select chunks.|

### Optional arguments
## Optional arguments

|Name|Type|Description|
|---|---|---|
| `older_than` | ANY | Specification of cut-off point where any full chunks older than this timestamp should be shown. |
| `newer_than` | ANY | Specification of cut-off point where any full chunks newer than this timestamp should be shown. |
|-|-|-|
|`older_than`|ANY|Specification of cut-off point where any full chunks older than this timestamp should be shown.|
|`newer_than`|ANY|Specification of cut-off point where any full chunks newer than this timestamp should be shown.|

The `older_than` and `newer_than` parameters can be specified in two ways:

* **interval type:** The cut-off point is computed as `now() -
older_than` and similarly `now() - newer_than`. An error is returned if an INTERVAL is supplied
and the time column is not one of a TIMESTAMP, TIMESTAMPTZ, or
DATE.

* **timestamp, date, or integer type:** The cut-off point is
explicitly given as a TIMESTAMP / TIMESTAMPTZ / DATE or as a
SMALLINT / INT / BIGINT. The choice of timestamp or integer must follow the type of the hypertable's time column.

When both arguments are used, the function returns the intersection of the resulting two ranges. For example,
specifying `newer_than => 4 months` and `older_than => 3 months` shows all full chunks that are between 3 and
4 months old. Similarly, specifying `newer_than => '2017-01-01'` and `older_than => '2017-02-01'` shows
all full chunks between '2017-01-01' and '2017-02-01'. Specifying parameters that do not result in an overlapping
older_than` and similarly `now() - newer_than`. An error is returned if an
INTERVAL is supplied and the time column is not one of a TIMESTAMP,
TIMESTAMPTZ, or DATE.

* **timestamp, date, or integer type:** The cut-off point is explicitly given
as a TIMESTAMP / TIMESTAMPTZ / DATE or as a SMALLINT / INT / BIGINT. The
choice of timestamp or integer must follow the type of the hypertable's time
column.

When both arguments are used, the function returns the intersection of the
resulting two ranges. For example, specifying `newer_than => 4 months` and
`older_than => 3 months` shows all full chunks that are between 3 and 4 months
old. Similarly, specifying `newer_than => '2017-01-01'` and
`older_than => '2017-02-01'` shows all full chunks between '2017-01-01' and
'2017-02-01'. Specifying parameters that do not result in an overlapping
intersection between two ranges results in an error.
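As with `drop_chunks`, the intersection case above can be sketched as follows, assuming a hypertable named `conditions`:

```sql
-- Show all full chunks that are between 3 and 4 months old:
SELECT show_chunks('conditions',
       older_than => INTERVAL '3 months',
       newer_than => INTERVAL '4 months');
```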

### Sample usage
## Sample usage

Get list of all chunks associated with a table:
