
Conversation

@knizhnik

… delete and other massive update operations

Comment on lines 8949 to 8955
/*
 * NEON: despite the comment above, we need to update the page LSN here.
 * See discussion at hackers: https://www.postgresql.org/message-id/flat/039076d4f6cdd871691686361f83cb8a6913a86a.camel%40j-davis.com#101ba42b004f9988e3d54fce26fb3462
 * For Neon this assignment is critical: otherwise the last written LSN
 * tracked at the compute doesn't match the page LSN assigned by WAL redo,
 * and as a result the prefetched page is rejected.
 */
PageSetLSN(page, lsn);
@hlinnaka (Contributor) commented on Nov 23, 2022:


Note: this was done in upstream commit 7bf713d, but with a check for XLogHintBitIsNeeded:

@@ -8838,6 +8837,9 @@ heap_xlog_visible(XLogReaderState *record)
 
                PageSetAllVisible(page);
 
+               if (XLogHintBitIsNeeded())
+                       PageSetLSN(page, lsn);
+
                MarkBufferDirty(buffer);
        }
        else if (action == BLK_RESTORED)

I believe we need to do it always in Neon, so unfortunately we still need to carry a patch here. But let's move this to the same location, above the MarkBufferDirty.
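For reference, a sketch of how the relocated hunk could look with the Neon patch applied; the surrounding lines are taken from the diff above, and only the unconditional PageSetLSN (no XLogHintBitIsNeeded() check) is Neon-specific:

                PageSetAllVisible(page);

                /*
                 * NEON: unlike upstream commit 7bf713d, set the page LSN
                 * unconditionally, so that the last written LSN tracked at
                 * the compute matches the LSN assigned by WAL redo and
                 * prefetched pages are not rejected.
                 */
                PageSetLSN(page, lsn);

                MarkBufferDirty(buffer);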

@knizhnik (Author) replied:

Done, but please note that upstream committed it with the XLogHintBitIsNeeded() check, and we are currently running Neon with wal_log_hints=off.
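For context, as of PostgreSQL 15 the upstream check expands to roughly the following (from src/include/access/xlog.h), which is why, with wal_log_hints=off and data checksums disabled, the upstream version would skip the PageSetLSN call entirely:

/* true if a hint-bit-only change must be WAL-logged (and the page LSN set) */
#define XLogHintBitIsNeeded() (DataChecksumsEnabled() || wal_log_hints)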

@MMeent (Contributor) left a comment:

I'm not super happy with the memory overhead of tracking so many pages, but that's fine for now.

@knizhnik (Author):

Please note that there is an eviction algorithm for this cache: we do not need to keep information about all pages in memory. The current default cache size is 128k entries, i.e. a few MB.

@MMeent (Contributor) commented on Nov 23, 2022:

> Please note that there is an eviction algorithm for this cache: we do not need to keep information about all pages in memory. The current default cache size is 128k entries, i.e. a few MB.

Those "few MB" is what I'm not super happy about. In low-memory systems (like our free tier - 256MB) every MB counts.

For now I think this is OK, but we need to revisit it in the future.
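As a rough back-of-the-envelope check on those "few MB", here is a self-contained sketch; the entry layout below is a hypothetical stand-in for the cache's real struct, assumed to hold a relation/block key, an LSN, and LRU links for eviction:

#include <stdint.h>
#include <stdio.h>

/* Hypothetical last-written-LSN cache entry; the field set is an
 * assumption for sizing purposes, not Neon's actual definition. */
typedef struct LwLsnEntry
{
    uint32_t            spcOid;     /* tablespace OID   */
    uint32_t            dbOid;      /* database OID     */
    uint32_t            relNumber;  /* relation number  */
    uint32_t            forkNum;    /* fork number      */
    uint32_t            blockNum;   /* block number     */
    uint64_t            lsn;        /* last written LSN */
    struct LwLsnEntry  *lru_prev;   /* eviction list links */
    struct LwLsnEntry  *lru_next;
} LwLsnEntry;

int
main(void)
{
    size_t  entries = 128 * 1024;   /* default cache size from the thread */
    size_t  bytes = entries * sizeof(LwLsnEntry);

    /* On a typical 64-bit build this prints roughly 6 MB, consistent
     * with the "few MB" estimate above (hash-table overhead excluded). */
    printf("%zu entries x %zu bytes = ~%zu MB\n",
           entries, sizeof(LwLsnEntry), bytes / (1024 * 1024));
    return 0;
}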

@knizhnik merged commit edf4c16 into REL_15_STABLE_neon on Nov 24, 2022
@knizhnik deleted the last_written_lsn_v15 branch on November 24, 2022 09:45
MMeent pushed a commit that referenced this pull request on Feb 10, 2023:

Maintain last written LSN for each page to enable prefetch on vacuum, delete and other massive update operations (#245)

* Maintain last written LSN for each page to enable prefetch on vacuum, delete and other massive update operations
* Move PageSetLSN in heap_xlog_visible before MarkBufferDirty

Commits with the same message were subsequently pushed referencing this pull request by MMeent (May 11, 2023) and by tristan957 (Aug 10, 2023; Nov 8, 2023, three times; Feb 5, 2024, twice; Feb 6, 2024; May 10, 2024).
