perf(vacuum-filter): optimize the has hot path (lower the cost of negative API Key short-circuits) #757
Conversation
📝 Walkthrough: refactors hashing and index computation; removes MurmurHash3 x86-32x2 and implements …
Code Review
This pull request primarily aims to optimize the hot path performance of VacuumFilter.has, reducing the cost of negative short-circuiting in API Key verification through various low-level optimizations. However, a critical vulnerability was identified in the indexTag method: dynamic resizing of the internal scratch buffer does not guarantee 4-byte alignment, leading to a RangeError when creating a Uint32Array view. This can be exploited to cause a Denial of Service (DoS). Additionally, there are two instances of code duplication that could be extracted into helper functions to improve maintainability.
```ts
this.scratch32 = new Uint32Array(this.scratch.buffer);
}

let asciiLen = 0;
for (; asciiLen < strLen; asciiLen++) {
  const c = key.charCodeAt(asciiLen);
  if (c > 0x7f) break;
  this.scratch[asciiLen] = c;
// encodeInto may truncate when the out buffer is too small: grow and retry when read < strLen
```
The indexTag method has a critical vulnerability: it can throw a RangeError when creating a Uint32Array if the scratch buffer's length is not a multiple of 4. This can lead to a Denial of Service (DoS) when processing specific input lengths. This issue occurs within a duplicated code block for resizing the scratch buffer and recreating the scratch32 view. The proposed fix addresses the alignment issue, and extracting this logic into a helper method (e.g., resizeScratch(minSize: number)) would further improve maintainability and prevent future occurrences of this vulnerability due to duplicated code.
```ts
this.scratch32 = new Uint32Array(this.scratch.buffer);
}
let asciiLen = 0;
for (; asciiLen < strLen; asciiLen++) {
  const c = key.charCodeAt(asciiLen);
  if (c > 0x7f) break;
  this.scratch[asciiLen] = c;
// encodeInto may truncate when the out buffer is too small: grow and retry when read < strLen
if (this.scratch.length < strLen) {
  // Ensure the new length is a multiple of 4 to avoid RangeError when creating Uint32Array
  const newLen = (Math.max(this.scratch.length * 2, strLen) + 3) & ~3;
  this.scratch = new Uint8Array(newLen);
  this.scratch32 = new Uint32Array(this.scratch.buffer);
}
```
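As a concrete illustration of the helper extraction suggested above, a minimal sketch could look like the following (class and field names are illustrative and assumed to match the surrounding diff; this is not the PR's actual code):

```ts
// Sketch only: centralize scratch growth and keep the byte length 4-byte aligned
// so the Uint32Array view over the same buffer can always be recreated safely.
class ScratchOwner {
  private scratch = new Uint8Array(256);
  private scratch32 = new Uint32Array(this.scratch.buffer);

  protected resizeScratch(minSize: number): void {
    if (this.scratch.length >= minSize) return;
    // Round up to a multiple of 4 to avoid a RangeError from the Uint32Array constructor.
    const newLen = (Math.max(this.scratch.length * 2, minSize) + 3) & ~3;
    this.scratch = new Uint8Array(newLen);
    this.scratch32 = new Uint32Array(this.scratch.buffer);
  }
}
```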
```ts
  this.scratch = new Uint8Array(Math.max(this.scratch.length * 2, strLen * 4));
  this.scratch32 = new Uint32Array(this.scratch.buffer);
  encoded = textEncoder.encodeInto(key, this.scratch);
}

if (asciiLen === strLen) {
  murmur3X86_32x2(this.scratch, strLen, this.hashSeedA, this.hashSeedB, this.hashOut);
} else {
  // Non-ASCII: hand off to TextEncoder (rare path)
  const keyBytes = textEncoder.encode(key);
  murmur3X86_32x2(keyBytes, keyBytes.length, this.hashSeedA, this.hashSeedB, this.hashOut);
}

const hvIndex = this.hashOut[0] >>> 0;
const hvTag = this.hashOut[1] >>> 0;

// The reference implementation uses `hash % numBuckets`. Keep it simple and fast here (works even when numBuckets is not a power of two).
const index = hvIndex % this.numBuckets;
// In extreme cases encodeInto may still truncate because the buffer is too small: fall back to encode (guarantees correctness)
```
Similar to the previous finding, if the buffer is resized here due to UTF-8 encoding expansion, the length must be ensured to be a multiple of 4. While strLen * 4 is always a multiple of 4, the this.scratch.length * 2 branch might not be if the buffer was previously allocated with a non-multiple-of-4 length (though the previous fix would prevent that). It is safer to always align the allocation size.
```ts
  this.scratch = new Uint8Array(Math.max(this.scratch.length * 2, strLen * 4));
  this.scratch32 = new Uint32Array(this.scratch.buffer);
  encoded = textEncoder.encodeInto(key, this.scratch);
}
if (asciiLen === strLen) {
  murmur3X86_32x2(this.scratch, strLen, this.hashSeedA, this.hashSeedB, this.hashOut);
} else {
  // Non-ASCII: hand off to TextEncoder (rare path)
  const keyBytes = textEncoder.encode(key);
  murmur3X86_32x2(keyBytes, keyBytes.length, this.hashSeedA, this.hashSeedB, this.hashOut);
}
const hvIndex = this.hashOut[0] >>> 0;
const hvTag = this.hashOut[1] >>> 0;
// The reference implementation uses `hash % numBuckets`. Keep it simple and fast here (works even when numBuckets is not a power of two).
const index = hvIndex % this.numBuckets;
// In extreme cases encodeInto may still truncate because the buffer is too small: fall back to encode (guarantees correctness)
if (encoded.read < strLen) {
  // UTF-8 worst case is 4 bytes/char; use 4x as the upper bound (only affects the rare non-ASCII keys)
  // Ensure the new length is a multiple of 4
  const newLen = (Math.max(this.scratch.length * 2, strLen * 4) + 3) & ~3;
  this.scratch = new Uint8Array(newLen);
  this.scratch32 = new Uint32Array(this.scratch.buffer);
  encoded = textEncoder.encodeInto(key, this.scratch);
}
```
```ts
  this.bucketMask === 0 && this.numBuckets <= FAST_REDUCE_MAX_BUCKETS
    ? this.numBuckets * INV_2_32
    : null;
this.table = new Uint32Array(this.numBuckets * BUCKET_SIZE);
return;
```
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@src/lib/vacuum-filter/vacuum-filter.ts`:
- Around line 456-470: The resize logic for this.scratch/scratch32 around
textEncoder.encodeInto can produce an ArrayBuffer whose byteLength is not a
multiple of 4, causing new Uint32Array(this.scratch.buffer) to throw; update the
expansion in the block that checks this.scratch.length < strLen and the fallback
that grows to Math.max(this.scratch.length * 2, strLen * 4) to round the new
length up to a multiple of 4 (e.g., compute newLen =
roundUpToMultipleOf4(Math.max(...)) before creating new Uint8Array and then set
this.scratch32 = new Uint32Array(this.scratch.buffer)); ensure all allocations
for this.scratch use that 4-byte alignment so scratch32 construction is always
safe.
🧹 Nitpick comments (4)
src/lib/vacuum-filter/vacuum-filter.ts (4)
475-482: Single hash + secondary mixing to derive the tag: the collision profile changes compared with the old pair of independent hashes. The old implementation computed two independent MurmurHash3 values over the input bytes (different seeds), giving ~64 bits of independence; the new implementation `hvTag = fmix32(hvIndex ^ seedB)` derives the tag deterministically from the index hash, so the effective entropy drops to 32 bits. This means that if two keys produce the same MurmurHash3 output, their (index, tag) pairs are identical; under the old scheme the second hash would also have to collide for that to happen. For the 32-bit fingerprint + API Key scenario the practical FPR is still extremely low and the impact is negligible, but if someone later chooses a smaller fingerprint (e.g. 8-bit), the probability of a colliding pair (same index and same tag) degrades from ~1/2^40 to ~1/2^32; this behavior change is worth noting in a comment or the docs.
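For reference, the standard MurmurHash3 32-bit finalizer that such a derivation typically relies on looks like this (a sketch; the in-tree `fmix32` may differ in minor details):

```ts
// MurmurHash3 fmix32 finalizer (avalanche step), shown for reference only.
function fmix32(h: number): number {
  h ^= h >>> 16;
  h = Math.imul(h, 0x85ebca6b);
  h ^= h >>> 13;
  h = Math.imul(h, 0xc2b2ae35);
  h ^= h >>> 16;
  return h >>> 0;
}

// Tag derivation as discussed above (hvIndex and seedB are 32-bit values):
// const hvTag = fmix32(hvIndex ^ seedB);
```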
488-495: Three-tier fast index mapping: the logic is correct, but the `fastReduceMul` path has a readability trap. In `fastReduceMul ? ((hvIndex * fastReduceMul) | 0) >>> 0`, the `| 0` acts as `Math.trunc` (truncating the double to a 32-bit signed integer), followed by `>>> 0` to convert to unsigned. Since `hvIndex * fastReduceMul` lies in [0, numBuckets) (at most ~2^21), it cannot overflow the 32-bit signed range, so the behavior is correct. Still, using `| 0` for truncation is an easily misread idiom; consider adding a brief one-line comment explaining the intent (pure suggestion).
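A small standalone illustration of the idiom being discussed (not taken from the PR):

```ts
// For non-negative doubles below 2^31, `| 0` truncates exactly like Math.trunc.
const x = 1234.9876;
console.log(x | 0);          // 1234
console.log(Math.trunc(x));  // 1234

// Outside that range the two diverge, which is why the idiom deserves a comment:
console.log((2 ** 31 + 0.5) | 0);        // -2147483648 (wraps to int32)
console.log(Math.trunc(2 ** 31 + 0.5));  // 2147483648
```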
334-338: The `bucketMask`/`fastReduceMul` computation appears twice in the constructor with identical logic; consider extracting it into a private helper method to remove the duplication. The small-scale (< 10_000) and large-scale paths assign `bucketMask`/`fastReduceMul` with exactly the same code; if a later change misses one of the two sites, it introduces an inconsistency. Example helper extraction:

```diff
+ private initFastReduce(): void {
+   this.bucketMask =
+     (this.numBuckets & (this.numBuckets - 1)) === 0 ? this.numBuckets - 1 : 0;
+   this.fastReduceMul =
+     this.bucketMask === 0 && this.numBuckets <= FAST_REDUCE_MAX_BUCKETS
+       ? this.numBuckets * INV_2_32
+       : null;
+ }
```

Then call `this.initFastReduce()` at the end of both initialization paths. Note that `bucketMask` and `fastReduceMul` would need to change from `readonly` to plain `private`, or the helper would need a type assertion. Also applies to: 360-364.
463-474: In the encodeInto truncation fallback logic, the third check still uses the result of the second `encoded`; the logic is correct, but reading it requires tracking the variable reassignment. Lines 464-469 perform the second `encodeInto` and overwrite `encoded`. Lines 473-474 then check `encoded.read < strLen` again to decide whether to fall back to `textEncoder.encode()`. Since `encoded` does point to the second call's return value at that point, the logic is correct. But a reader who misses the `encoded = ...` reassignment on line 469 could easily assume the first result is still in use. Consider adding a short comment, or inlining the fallback into the `if` block, to improve readability (pure suggestion).
```ts
const index =
  bucketMask !== 0
    ? (hvIndex & bucketMask) >>> 0
    : fastReduceMul
      ? ((hvIndex * fastReduceMul) | 0) >>> 0
      : hvIndex % this.numBuckets;
```
Fast reduce is incorrect
((hvIndex * fastReduceMul) | 0) >>> 0 is not equivalent to hvIndex % this.numBuckets and will produce out-of-range indexes (e.g., for numBuckets = 1000, this yields values up to 999,999). The intended “multiply-high” fast reduction needs the high 32 bits of the 64-bit product, not the low 32 bits.
This will break has/add/delete by indexing the table outside the expected bucket range. Please implement a correct reduction (or keep %) for non-power-of-two bucket counts.
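For illustration, a multiply-high reduction along the lines this comment asks for could look like the sketch below (names assumed; not the PR's code). Assuming the fast path only applies when `numBuckets <= 2^21` (the FAST_REDUCE_MAX_BUCKETS bound seen elsewhere in this thread) and `hvIndex < 2^32`, the full product fits in an IEEE754 double, so the high 32 bits can be taken exactly with `Math.floor`; a fully general variant can use `BigInt`:

```ts
// Multiply-high ("fastrange") reduction: maps a 32-bit hash into [0, numBuckets).
// Exact as long as hvIndex * numBuckets < 2^53 (true for numBuckets <= 2^21).
function fastReduce(hvIndex: number, numBuckets: number): number {
  return Math.floor((hvIndex * numBuckets) / 2 ** 32);
}

// General variant without the 2^21 bound, using 64-bit integer math via BigInt.
function fastReduceBig(hvIndex: number, numBuckets: number): number {
  return Number((BigInt(hvIndex) * BigInt(numBuckets)) >> 32n);
}
```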
Path: src/lib/vacuum-filter/vacuum-filter.ts, lines 488-493.
Code Review Summary
This PR optimizes the VacuumFilter.has hot path by replacing the dual MurmurHash3 computation with a single hash + fmix32 derivation, switching from an ASCII fast-path to TextEncoder.encodeInto, adding a Uint32Array view for little-endian block reads, and implementing a fast-reduce alternative to modulo for bucket index mapping.
The algorithmic changes (single-hash + fmix32 tag derivation, fast-reduce via multiply-high) are mathematically sound. However, there is a runtime crash bug in the scratch buffer resizing logic.
PR Size: S
- Lines changed: 159 (97 additions, 62 deletions)
- Files changed: 1
Issues Found
| Category | Critical | High | Medium | Low |
|---|---|---|---|---|
| Logic/Bugs | 0 | 1 | 0 | 0 |
| Security | 0 | 0 | 0 | 0 |
| Error Handling | 0 | 0 | 0 | 0 |
| Types | 0 | 0 | 0 | 0 |
| Comments/Docs | 0 | 0 | 0 | 0 |
| Tests | 0 | 0 | 0 | 0 |
| Simplification | 0 | 0 | 0 | 0 |
High Priority Issues (Should Fix)
- [LOGIC-BUG] `Uint32Array` alignment crash (vacuum-filter.ts:457): When `key.length > scratch.length * 2` and `key.length % 4 !== 0`, the `new Uint8Array(strLen)` creates a buffer whose byte length is not a multiple of 4. The subsequent `new Uint32Array(this.scratch.buffer)` throws a `RangeError`. Fix: round up to the nearest multiple of 4 with `(size + 3) & ~3`. See inline comment for details.
Review Coverage
- Logic and correctness
- Security (OWASP Top 10)
- Error handling
- Type safety
- Documentation accuracy
- Test coverage
- Code clarity
Automated review by Claude AI
```ts
// Key optimization: prefer TextEncoder.encodeInto (no allocation, and encoding happens in native code)
const strLen = key.length;
if (this.scratch.length < strLen) {
  this.scratch = new Uint8Array(Math.max(this.scratch.length * 2, strLen));
```
[High] [LOGIC-BUG] Uint32Array construction will throw RangeError when scratch buffer size is not a multiple of 4
Why this is a problem: When key.length exceeds this.scratch.length * 2 and is not a multiple of 4, the new Uint8Array will have a non-aligned byte length. The subsequent new Uint32Array(this.scratch.buffer) requires the buffer's byteLength to be a multiple of 4, and will throw a RangeError otherwise.
Concrete trigger: any string with .length > 512 (initial scratch is 256, so 256 * 2 = 512) where .length % 4 !== 0. For example, a 513-character string. Once triggered, the misalignment cascades to all future resizes via doubling (e.g., 513 * 2 = 1026, also not a multiple of 4).
Verified with Node.js:
```ts
const buf = new Uint8Array(513);
new Uint32Array(buf.buffer); // RangeError: byte length of Uint32Array should be a multiple of 4
```

The same issue exists on lines 465-466 (the second resize path): though `strLen * 4` is inherently aligned, if the first resize produced a non-aligned size and `this.scratch.length * 2` wins the `Math.max`, the bug propagates.
Suggested fix:
```ts
if (this.scratch.length < strLen) {
  const newLen = (Math.max(this.scratch.length * 2, strLen) + 3) & ~3;
  this.scratch = new Uint8Array(newLen);
  this.scratch32 = new Uint32Array(this.scratch.buffer);
}
```

Apply the same alignment to the second resize for consistency:
```ts
const newLen = (Math.max(this.scratch.length * 2, strLen * 4) + 3) & ~3;
this.scratch = new Uint8Array(newLen);
this.scratch32 = new Uint32Array(this.scratch.buffer);
```

```ts
// - Otherwise: fall back to `%`
const bucketMask = this.bucketMask;
const fastReduceMul = this.fastReduceMul;
const index =
  bucketMask !== 0
    ? (hvIndex & bucketMask) >>> 0
```
Incorrect fast reduce math
The fastReduceMul path here is still using the low 32 bits of a floating-point product (((hvIndex * fastReduceMul) | 0) >>> 0), which is not equivalent to hvIndex % this.numBuckets and can return indexes outside [0, numBuckets). This will break has/add/delete by reading/writing the wrong buckets. Since this exact issue has already been noted in prior threads, this needs to be replaced with a correct reduction (or fall back to %) for non-power-of-two bucket counts.
Path: src/lib/vacuum-filter/vacuum-filter.ts, lines 507-512.

```ts
const index =
  bucketMask !== 0
    ? (hvIndex & bucketMask) >>> 0
    : fastReduceMul
      ? ((hvIndex * fastReduceMul) | 0) >>> 0 // |0 truncates (equivalent to floor; value range < 2^31)
      : hvIndex % this.numBuckets;
```
Fast-reduce returns wrong range
The fastReduceMul branch is computing ((hvIndex * fastReduceMul) | 0) >>> 0, which is not equivalent to floor(hvIndex * numBuckets / 2^32) and can yield values outside [0, numBuckets). With fastReduceMul = numBuckets / 2^32, hvIndex * fastReduceMul is already < numBuckets, so the correct operation is Math.floor(...) (or a safe truncation). Using |0 forces 32-bit signed conversion and changes the value domain, breaking bucket indexing in has/add/delete for non-power-of-two numBuckets.
Path: src/lib/vacuum-filter/vacuum-filter.ts, lines 510-515.
Two improvements have been made in response to the Greptile/CodeRabbit concerns about the fast-reduce branch:

Local …
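One way to sanity-check the updated `>>> 0` branch locally is a brute-force comparison like the sketch below (`FAST_REDUCE_MAX_BUCKETS = 2^21` is assumed from the PR description; this is not the author's actual verification script):

```ts
// Randomized check: for numBuckets <= 2^21 and a 32-bit hvIndex, the product
// hvIndex * (numBuckets / 2^32) is exact and < 2^32, so `>>> 0` equals Math.floor.
const FAST_REDUCE_MAX_BUCKETS = 2 ** 21; // assumption based on the PR description

for (let trial = 0; trial < 1_000_000; trial++) {
  const numBuckets = 1 + Math.floor(Math.random() * FAST_REDUCE_MAX_BUCKETS);
  const hvIndex = Math.floor(Math.random() * 2 ** 32);
  const fastReduceMul = numBuckets / 2 ** 32;

  const fast = (hvIndex * fastReduceMul) >>> 0;
  const expected = Math.floor((hvIndex * numBuckets) / 2 ** 32);

  if (fast !== expected || fast >= numBuckets) {
    throw new Error(`mismatch: hvIndex=${hvIndex}, numBuckets=${numBuckets}`);
  }
}
console.log("fast-reduce truncation matched Math.floor on all sampled inputs");
```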
```ts
const index =
  bucketMask !== 0
    ? (hvIndex & bucketMask) >>> 0
    : fastReduceMul
      ? (hvIndex * fastReduceMul) >>> 0 // >>>0 truncates (equivalent to floor; value range < 2^32)
      : hvIndex % this.numBuckets;
```
Fast-reduce index is wrong
In indexTag, the fast-reduce branch uses ((hvIndex * fastReduceMul) >>> 0), but fastReduceMul is numBuckets / 2^32. For hvIndex < 2^32, the product is < numBuckets (and numBuckets <= 2^21 here), so >>> 0 is not equivalent to Math.floor(...): it converts to uint32 and will wrap for any product ≥ 2^32, producing out-of-range indexes and breaking bucket reads/writes.
This should be implemented as Math.floor(hvIndex * fastReduceMul) (or another correct multiply-high reduction), otherwise has/add/delete can index the wrong bucket.
Path: src/lib/vacuum-filter/vacuum-filter.ts, lines 510-515.

Force-pushed from 0c57161 to de19ef8.
Background

After #734 was merged, a local microbench showed that `VacuumFilter.has` can be noticeably slower than `Set.has` when the same batch of string instances is queried repeatedly (sometimes appearing to reach ~10x). Main reasons:

- `Set.has` is a native V8 implementation, and V8 caches the string hash on the string instance (a huge advantage when a benchmark keeps reusing the same string objects).
- `VacuumFilter.has` goes through a pure-JS hot path on every query: a hand-written ASCII copy in JS (string -> bytes), plus MurmurHash3 scanning the bytes again (two passes), plus computing two hashes at once (index/tag), plus `% numBuckets`.

The gap is therefore amplified in a microbench; in real requests (keys parsed from request headers are usually fresh string instances), the caching advantage of `Set.has` shrinks considerably.

Improvements

- Use `TextEncoder.encodeInto` to write into a reused scratch buffer (no allocation; encoding happens in native code); see the sketch after this list.
- Index mapping: when `numBuckets` is a power of two, use a bitwise AND (fastest); when `numBuckets <= 2^21`, use the fast reduction `floor(hvIndex * numBuckets / 2^32)` (exact under IEEE754 precision constraints); otherwise fall back to `% numBuckets`.
- Read 4-byte blocks through a `Uint32Array` view; on other endianness, fall back to byte reads to stay portable.
- Fix the `RangeError` caused by missing 4-byte alignment of `scratch32`, and add unit-test coverage.
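A minimal sketch of the encodeInto-into-scratch pattern from the first bullet (names assumed; simplified relative to the PR, which grows the buffer more lazily):

```ts
// Reuse a module-level scratch buffer so the hot path performs no per-call allocation.
const textEncoder = new TextEncoder();
let scratch = new Uint8Array(256);

function keyToUtf8(key: string): { bytes: Uint8Array; byteLength: number } {
  // UTF-8 needs at most 4 bytes per UTF-16 code unit; grow (rarely) when needed,
  // keeping the length 4-byte aligned so a Uint32Array view over the buffer stays valid.
  const maxBytes = key.length * 4;
  if (scratch.length < maxBytes) {
    scratch = new Uint8Array((Math.max(scratch.length * 2, maxBytes) + 3) & ~3);
  }
  const result = textEncoder.encodeInto(key, scratch);
  return { bytes: scratch, byteLength: result.written ?? 0 };
}

// Usage: hash keyToUtf8(key).bytes over [0, byteLength) without allocating a new array.
```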
Quantified benchmark (local microbench)

Command (skipped by default; requires explicitly setting the environment variable `RUN_VACUUM_FILTER_BENCH=1`):

*nix:

```bash
RUN_VACUUM_FILTER_BENCH=1 node --expose-gc node_modules/vitest/vitest.mjs run tests/unit/vacuum-filter/vacuum-filter-has.bench.test.ts
```

PowerShell:

```powershell
$env:RUN_VACUUM_FILTER_BENCH='1'; node --expose-gc node_modules/vitest/vitest.mjs run tests/unit/vacuum-filter/vacuum-filter-has.bench.test.ts
```

Parameters (fixed inside the test): … (`Set.has` benefits from string hash caching, which biases the comparison toward Set)

Environment: …

Results (median; lower ns/op is better): …

Key takeaways:

- VF/Set drops from ~5.9x to ~3.0x (hit) and from ~3.1x to ~1.8x (miss).
- `VacuumFilter.has` is clearly faster than `Set.has` (VF/Set < 1), and the optimization widens that gap further.

Notes
- The tag is now derived from `hvIndex` via a secondary mix, so (index, tag) are no longer two independent hashes; with the current default 32-bit fingerprint, the collision probability remains extremely low.

Greptile Overview
Greptile Summary
This PR optimizes the hot path of `VacuumFilter.has` to reduce overhead for API key lookups, addressing performance gaps identified in microbenchmarks.

Key improvements:
- `TextEncoder.encodeInto` with reusable scratch buffer (zero allocation)
- Fast reduction `floor(hvIndex * numBuckets / 2^32)` when `numBuckets <= 2^21` (maintains IEEE754 precision)
- `Uint32Array` view for 4-byte block reads in MurmurHash3 (falls back to byte-level reads for portability)
- Fixes the `RangeError` when creating the `Uint32Array` view

Correctness verification:
- For `numBuckets <= 2^21` and `hvIndex < 2^32`, the product stays within IEEE754 double precision (< 2^53), ensuring `(hvIndex * fastReduceMul) >>> 0` is equivalent to `Math.floor(hvIndex * numBuckets / 2^32)` and always produces results in range `[0, numBuckets)`

Test quality fix:
Confidence Score: 5/5
Important Files Changed
- `src/lib/vacuum-filter/vacuum-filter.ts`: `VacuumFilter.has` hot path with encoding/hashing improvements, correct fast-reduce implementation, and scratch buffer alignment fix
- `tests/unit/vacuum-filter/vacuum-filter-has.bench.test.ts`: `VacuumFilter.has` performance improvements (opt-in via env variable)