Bump vendor/postgres #1573
Conversation
Force-pushed from 3cb2b1a to a5d9e23
Force-pushed from c6094e6 to 2458372
The pg_regress test is failing. In the debug build, it fails with an assertion failure:
In the release build, it fails with an error:
I'm not seeing this on my laptop.
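As an aside, the debug/release difference above matches the usual pattern where an assertion guards the same condition that a release build reports as a runtime error. PostgreSQL's Assert() is compiled out unless assertions are enabled; the standalone sketch below mimics that with the standard C assert()/NDEBUG mechanism. The function name and the MAX_PAGES value are hypothetical, not taken from the actual code.

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

#define MAX_PAGES 64            /* hypothetical capacity, not the real value */

/*
 * Hypothetical capacity check. With assertions enabled (a debug build),
 * assert() aborts with an assertion failure; with NDEBUG defined (a
 * release-style build), assert() is compiled out and the explicit check
 * below reports an error instead.
 */
static void
check_capacity(int npages)
{
    assert(npages < MAX_PAGES);

    if (npages >= MAX_PAGES)
    {
        fprintf(stderr, "Inmem storage overflow\n");
        exit(1);
    }
}

int
main(void)
{
    check_capacity(MAX_PAGES);  /* deliberately hits the failure path */
    return 0;
}
```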
Force-pushed from 3c24e0a to 52da6a9
I also cannot reproduce it on my laptop.
I'd rather decrease MAX_PAGES to make the problem appear more readily, so that we can hunt it down and fix it. I don't think there is any WAL record that legitimately requires that many buffers.
52da6a9
to
c8327ae
Compare
I lowered MAX_PAGES to 32, and now I can reproduce the "Inmem storage overflow" error:
It seems to happen with many different WAL record types. I think the important thing here is that you have a lot of WAL records to replay in one batch: 30650 records in the above example. Even if one WAL record needs to evict just one page from the local buffer cache into the "inmem" area, that adds up when you have a lot of records to replay. It occurs to me that we could clear the "inmem" area after every record. Every record is supposed to be independent of the others; it shouldn't be necessary to carry over any buffers other than the target page we're applying the records for.
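A minimal sketch of that idea, assuming a bounded overflow area that evicted pages are parked in during replay: resetting it after each record keeps its size bounded by a single record's needs rather than the size of the whole batch. All names here (InmemArea, inmem_put, inmem_reset, apply_record) are hypothetical and not taken from the actual walredo code.

```c
#include <stdio.h>
#include <string.h>

#define MAX_PAGES 32            /* bounded overflow area, as in the experiment above */
#define BLCKSZ    8192

typedef struct
{
    int   npages;               /* pages currently parked in the overflow area */
    char  pages[MAX_PAGES][BLCKSZ];
} InmemArea;

static InmemArea inmem_area;

/* Park one evicted page in the overflow area; fail if it is full. */
static int
inmem_put(const char *page)
{
    if (inmem_area.npages >= MAX_PAGES)
    {
        fprintf(stderr, "Inmem storage overflow\n");
        return -1;
    }
    memcpy(inmem_area.pages[inmem_area.npages++], page, BLCKSZ);
    return 0;
}

/* Drop everything parked in the overflow area. */
static void
inmem_reset(void)
{
    inmem_area.npages = 0;
}

/* Stand-in for applying a single WAL record: evicts one page per record. */
static int
apply_record(int recno)
{
    char page[BLCKSZ] = {0};

    (void) recno;               /* the record contents don't matter for this sketch */
    return inmem_put(page);
}

int
main(void)
{
    int nrecords = 30650;       /* batch size from the example above */

    for (int i = 0; i < nrecords; i++)
    {
        if (apply_record(i) != 0)
            return 1;
        /*
         * Records are independent of each other, so nothing needs to be
         * carried over; resetting here keeps the overflow area from
         * growing with the batch size.
         */
        inmem_reset();
    }
    printf("replayed %d records without overflow\n", nrecords);
    return 0;
}
```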
Oh, sorry. Now, with 32 pages, the regression tests pass.
Force-pushed from eea5e2b to 81b5c6b
CI failed:
I cannot reproduce that locally, and I believe this doesn't happen consistently on the CI either.
@knizhnik, can you finish this, please? It's confusing that the tip of 'main' on vendor/postgres is not what's actually used in the builds.
Sorry, but what should I do?
Ok, if you're convinced it's not caused by this bug, then let's ignore it and push this. We really need to hunt down that GC bug...
This brings us the WAL redo performance improvements from neondatabase/postgres#144
Force-pushed from 81b5c6b to 7223ed6