Maintain last written LSN for each page to enable prefetch on vacuum, delete and other massive update operations #245

Conversation
```c
/*
 * NEON: despite the comment above, we need to update the page LSN here.
 * See discussion on pgsql-hackers: https://www.postgresql.org/message-id/flat/039076d4f6cdd871691686361f83cb8a6913a86a.camel%40j-davis.com#101ba42b004f9988e3d54fce26fb3462
 * For Neon this assignment is critical: otherwise the last written LSN
 * tracked at the compute doesn't match the page LSN assigned by WAL-redo,
 * and as a result the prefetched page is rejected.
 */
PageSetLSN(page, lsn);
```
Note: this was done in upstream commit 7bf713d, but with a check for XLogHintBitIsNeeded:

```diff
@@ -8838,6 +8837,9 @@ heap_xlog_visible(XLogReaderState *record)
PageSetAllVisible(page);
+ if (XLogHintBitIsNeeded())
+ PageSetLSN(page, lsn);
+
MarkBufferDirty(buffer);
}
 else if (action == BLK_RESTORED)
```
I believe we need to do it unconditionally in Neon, so unfortunately we still need to carry a patch here. But let's move it to the same location, above the MarkBufferDirty call.
Done, but please note that upstream committed it with the XLogHintBitIsNeeded() check, while we are currently running Neon with wal_log_hints=off.
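Given that, the Neon-side hunk presumably drops the check so the LSN is set unconditionally (a sketch of the change being discussed, not the exact committed patch):

```diff
 	PageSetAllVisible(page);

+	/* NEON: set the page LSN unconditionally (no XLogHintBitIsNeeded()
+	 * check), since with wal_log_hints=off the compute's last-written-LSN
+	 * cache would otherwise disagree with the LSN set by WAL-redo. */
+	PageSetLSN(page, lsn);
+
 	MarkBufferDirty(buffer);
```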
I'm not super happy with the memory overhead of tracking so many pages, but that's fine for now.
Please note that there is an eviction algorithm for this cache: we do not need to keep information about all pages in memory. The current default cache size is 128k entries, which is a few MB.
Those "few MB" are what I'm not super happy about. On low-memory systems (like our free tier, 256MB) every MB counts. For now I think this is OK, but we should revisit it in the future.
(#245)
* Maintain last written LSN for each page to enable prefetch on vacuum, delete and other massive update operations
* Move PageSetLSN in heap_xlog_visible before MarkBufferDirty