When running the create_distributed_table function on a table with thousands of partitions and a large citus.shard_count configured, the following error is triggered:
ERROR: out of memory
DETAIL: Failed on request of size 56 in memory context "ExprContext".
How to reproduce
Execute the following SQL statements:
CREATE SCHEMA oom;
CREATE TABLE oom.orders (
id bigint,
order_time timestamp without time zone NOT NULL,
region_id bigint NOT NULL
)
PARTITION BY RANGE (order_time);
SELECT create_time_partitions(
table_name := 'oom.orders',
partition_interval := '1 day',
start_from := now() - '5 years'::interval,
end_at := now() + '5 years'::interval
);
This creates a table with 3653 partitions and no data. Next, run the following commands to increase the citus.shard_count value and distribute the newly created table.
SET citus.shard_count to 200;
SELECT create_distributed_table('oom.orders', 'region_id');
After some time, you should see the OOM error.
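The scale involved explains why a per-partition leak is fatal here: each partition is itself distributed into citus.shard_count shards, so memory and metadata work grow with partitions × shards. A quick back-of-envelope check:

```python
# Back-of-envelope: total shards created when distributing the partitioned table.
partitions = 3653    # one daily partition over roughly 10 years (see repro above)
shard_count = 200    # SET citus.shard_count TO 200
total_shards = partitions * shard_count
print(total_shards)  # 730600 shards' worth of placements and metadata
```

With over 700,000 shards to create, even a small per-shard or per-partition allocation that is never freed adds up quickly.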
The text was updated successfully, but these errors were encountered:
…ns (#6722)
There is a memory leak when distributing a table with many partitions, because we do not release ExprContext memory until all partitions are distributed. We made two improvements to resolve the issue:
1. We create and delete a MemoryContext for each per-partition call to
`CreateDistributedTable`,
2. We rebuild the cache once after all placements are inserted, instead of
after each placement of a shard.
DESCRIPTION: Fixes memory leak during distribution of a table with a lot
of partitions and shards.
Fixes #6572.
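The first improvement follows PostgreSQL's standard pattern of giving each loop iteration its own short-lived memory context, so allocations made while distributing one partition are freed before the next one starts (in C this is done with PostgreSQL's memory-context API, e.g. AllocSetContextCreate/MemoryContextDelete). The sketch below mimics the idea in Python with a hypothetical Arena stand-in; it is illustrative only, not Citus code:

```python
# Illustrative sketch of the per-iteration memory-context pattern.
# The Arena class is a hypothetical stand-in for a PostgreSQL MemoryContext.

class Arena:
    """Collects allocations so they can all be freed in one shot."""
    def __init__(self):
        self.allocations = []

    def alloc(self, obj):
        self.allocations.append(obj)
        return obj

    def delete(self):
        # Drop every allocation made in this arena at once.
        self.allocations.clear()

def distribute_partitions(partitions):
    distributed = []
    for partition in partitions:
        # Before the fix: allocations piled up in one long-lived ExprContext
        # until *all* partitions were distributed.
        # After the fix: each partition gets its own context that is deleted
        # as soon as that partition is done, bounding peak memory to one
        # partition's worth of scratch state.
        arena = Arena()
        scratch = arena.alloc({"partition": partition})
        distributed.append(scratch["partition"])
        arena.delete()
    return distributed
```

The second improvement is independent: rebuilding the metadata cache once after all placements are inserted turns a per-placement cost into a single pass.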
Settings
PostgreSQL version: 14
Citus version: 11.1
Coordinator node: 4 vCores / 16 GiB RAM, 512 GiB storage
Worker nodes: 4 nodes, 16 vCores / 512 GiB RAM, 4096 GiB storage