Status: Open
Labels: bug (Something isn't working)
Describe the bug
Related to #14692
Thanks to #15700, some queries can now succeed under tighter memory limits. However, there are still failing cases. One strange situation is that some queries succeed under a stricter memory limit but fail when given a more generous one.
For example, sort-tpch Q5 fails with a 100M limit but succeeds with a 30M limit.
Query 5 failed: Resources exhausted: Additional allocation failed with top memory consumers (across reservations) as:
ExternalSorterMerge[0]#1(can spill: false) consumed 99.9 MB,
ExternalSorter[0]#0(can spill: true) consumed 0.0 B.
Error: Failed to allocate additional 160.0 KB for ExternalSorterMerge[0] with 39.1 MB already allocated for this reservation - 109.7 KB remain available for the total pool
Failed Queries: 5
To Reproduce
- 100M, partitions 1 (fails)
cargo run --profile release-nonlto --bin dfbench -- sort-tpch --path benchmarks/data/tpch_sf1 --partitions 1 --memory-limit 100M --query 5
- 30M, partitions 1 (succeeds with spill)
cargo run --profile release-nonlto --bin dfbench -- sort-tpch --path benchmarks/data/tpch_sf1 --partitions 1 --memory-limit 30M --query 5
Expected behavior
sort-tpch Q5 succeeds with both the 30M and the 100M memory limit
Additional context
sort-tpch Q6, which requires more memory than Q5, succeeds under the same configuration. Therefore, Q5 is not a query that is inherently too memory-intensive for these limits.
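One plausible mechanism for this inversion, suggested by the error message (the merge consumer reports `can spill: false` while the sorter reports `can spill: true`), is sketched below as a toy model. This is not DataFusion's actual implementation; the pool, batch sizes, input size, and buffer constants are all invented for illustration. The idea: under a small limit the sort phase spills early, so the merge phase streams runs from disk through small buffers; under a large limit all data stays in the (spillable) sort reservation until the non-spillable merge reservation takes it over and then cannot obtain the small extra allocation it needs.

```rust
const BATCH: usize = 1 << 20;          // 1 MB input batches (assumed)
const MERGE_EXTRA: usize = 160 << 10;  // extra 160 KB the merge asks for, as in the error
const SPILL_READ_BUF: usize = 64 << 10; // per-run read buffer when merging spills (assumed)

struct Pool {
    limit: usize,
    used: usize,
}

impl Pool {
    fn try_grow(&mut self, bytes: usize) -> bool {
        if self.used + bytes > self.limit {
            false // "Resources exhausted"
        } else {
            self.used += bytes;
            true
        }
    }
    fn shrink(&mut self, bytes: usize) {
        self.used -= bytes;
    }
}

/// Run the toy sort+merge under `limit` bytes; returns whether it succeeds.
fn run(limit: usize) -> bool {
    let total = (100 << 20) - (100 << 10); // ~99.9 MB of input (assumed)
    let mut pool = Pool { limit, used: 0 };
    let (mut in_mem, mut remaining, mut spills) = (0usize, total, 0usize);

    // Sort phase: buffer batches; on pool exhaustion, spill (free) the buffer.
    while remaining > 0 {
        let b = BATCH.min(remaining);
        if !pool.try_grow(b) {
            pool.shrink(in_mem); // spillable reservation: flush run to disk
            in_mem = 0;
            spills += 1;
            if !pool.try_grow(b) {
                return false;
            }
        }
        in_mem += b;
        remaining -= b;
    }

    // Merge phase.
    if spills > 0 {
        // The final in-memory run is spilled too; the merge then streams
        // each run through a small read buffer, needing little memory.
        pool.shrink(in_mem);
        pool.try_grow((spills + 1) * SPILL_READ_BUF)
    } else {
        // Nothing ever spilled: the non-spillable merge reservation holds
        // nearly the whole pool and still needs a bit more.
        pool.try_grow(MERGE_EXTRA)
    }
}

fn main() {
    for (name, limit) in [("30M", 30usize << 20), ("100M", 100usize << 20)] {
        let ok = run(limit);
        println!("limit {name}: {}", if ok { "succeeds" } else { "fails" });
    }
}
```

In this model the 30M run spills three times, merges from disk, and succeeds, while the 100M run keeps everything in memory and fails on the final 160 KB allocation, mirroring the observed behavior. Whether this is the actual cause in DataFusion would need confirmation by inspecting `ExternalSorter`'s spill/merge logic.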