G2 #8
Conversation
Previously, although the initial state of a proclist_node is expected to be next == prev == 0, proclist_delete_offset would reset nodes to next == prev == INVALID_PGPROCNO when removing them from a list. This is the same state that a node in a singleton list has, so that it's impossible to distinguish not-in-a-list from in-a-list.

Change proclist_delete_offset to reset removed nodes to next == prev == 0, making it possible to distinguish those cases, and then add Asserts to the list add and delete functions that the supplied node isn't or is in a list at entry. Also tighten assertions about the node being in the particular list (not some other one) where it is possible to check that in O(1) time.

In ConditionVariablePrepareToSleep, since we don't expect the process's cvWaitLink to already be in a list, remove the more-or-less-useless proclist_contains check; we'd rather have proclist_push_tail's new assertion fire if that happens. Improve various comments related to proclists, too.

Patch by me, reviewed by Thomas Munro. This isn't back-patchable, since there could theoretically be inlined copies of proclist_delete_offset in third-party modules. But it's only improving debuggability anyway.

Discussion: https://postgr.es/m/CAEepm=0NWKehYw7NDoUSf8juuKOPRnCyY3vuaSvhrEWsOTAa3w@mail.gmail.com
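A self-contained toy model of the invariant described above (hypothetical names; the real definitions live in src/include/storage/proclist_types.h and proclist.h):

    #include <assert.h>

    #define INVALID_PGPROCNO (-1)           /* link value used inside a list */

    typedef struct toy_node
    {
        int next;
        int prev;                           /* next == prev == 0: in no list */
    } toy_node;

    static void
    toy_push_singleton(toy_node *node)
    {
        assert(node->next == 0 && node->prev == 0);   /* not in a list yet */
        node->next = node->prev = INVALID_PGPROCNO;   /* singleton-list state */
    }

    static void
    toy_delete(toy_node *node)
    {
        assert(node->next != 0 || node->prev != 0);   /* must be in a list */
        /* ... unlink from neighbours ... */
        node->next = node->prev = 0;        /* distinguishable "free" state */
    }

    int
    main(void)
    {
        toy_node n = {0, 0};
        toy_push_singleton(&n);             /* ok: node was free */
        toy_delete(&n);                     /* ok: node was listed */
        return 0;
    }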
Clarify a bunch of comments. Discussion: https://postgr.es/m/CAEepm=0NWKehYw7NDoUSf8juuKOPRnCyY3vuaSvhrEWsOTAa3w@mail.gmail.com
Previously an inaccurate but harmless error was generated when running --check on a live server before reporting the servers as compatible. The fix is to split error reporting and exit control in the exec_prog() API.

Reported-by: Daniel Westermann
Backpatch-through: 10
RELATION_IS_OTHER_TEMP is tested in the caller, not here. Discussion: http://postgr.es/m/5A5438E4.3090709@lab.ntt.co.jp
Added:

- variables may now contain integer, double, boolean, and null values
- functions ln, exp
- logical AND/OR/NOT
- bitwise AND/OR/NOT/XOR
- bit right/left shift
- comparison operators
- IS [NOT] (NULL|TRUE|FALSE)
- conditional choice (in the form of CASE/WHEN/THEN)

The new operations and functions allow implementing more complicated test scenarios.

Author: Fabien Coelho, with minor editing by me
Reviewed-By: Pavel Stehule, Jeevan Ladhe, me
Discussion: https://www.postgresql.org/message-id/flat/alpine.DEB.2.10.1604030742390.31618@sto
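A hypothetical pgbench script exercising the new grammar (variable names invented; pgbench_accounts is the standard pgbench table):

    \set aid random(1, 100000)
    \set even ((:aid % 2) = 0) and (:aid > 10)
    \set mask (:aid << 2) | 3
    \set scale case when :even then ln(:aid) else exp(1.0) end
    \set known :scale is not null
    SELECT abalance FROM pgbench_accounts WHERE aid = :aid;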
There are plans to extend the syntax for ANALYZE, so we need to break the link between VacuumStmt and AnalyzeStmt. But apart from that, the syntax above is undocumented and, if discovered by users, might give the impression that the VERBOSE option for VACUUM differs from the VERBOSE option for ANALYZE, which it does not.

Nathan Bossart, reviewed by Michael Paquier and Masahiko Sawada

Discussion: http://postgr.es/m/D3FC73E2-9B1A-4DB4-8180-55F57D116B4E@amazon.com
The original coding here insisted that callers manually cancel any prepared sleep for one condition variable before starting a sleep on another one. While that's not a huge burden today, it seems like a gotcha that will bite us in future if the use of condition variables increases; anything we can do to make the use of this API simpler and more robust is attractive. Hence, allow these functions to automatically switch their attention to a different CV when required.

This is safe for the same reason it was OK for commit aced5a9 to let a broadcast operation cancel any prepared CV sleep: whenever we return to the other test-and-sleep loop, we will automatically re-prepare that CV, paying at most an extra test of that loop's exit condition.

Back-patch to v10 where condition variables were introduced. Ordinarily we would probably not back-patch a change like this, but since it does not invalidate any coding pattern that was legal before, it seems safe enough. Furthermore, there's an open bug in replorigin_drop() for which the simplest fix requires this. Even if we chose to fix that in some more complicated way, the hazard would remain that we might back-patch some other bug fix that requires this behavior.

Patch by me, reviewed by Thomas Munro.

Discussion: https://postgr.es/m/2437.1515368316@sss.pgh.pa.us
replorigin_drop() misunderstood the API for condition variables: it had ConditionVariablePrepareToSleep and ConditionVariableCancelSleep inside its test-and-sleep loop, rather than outside the loop as intended. The net effect is a narrow race-condition window wherein, if the process using a replication slot releases it immediately after replorigin_drop() releases the ReplicationOriginLock, replorigin_drop() would get into the condition variable's wait list too late and then wait indefinitely for a signal that won't come.

Because there's a different CV for each replication slot, we can't just move the ConditionVariablePrepareToSleep call to above the test-and-sleep loop. What we can do, in the wake of commit 13db3b9, is drop the ConditionVariablePrepareToSleep call entirely. This fix depends on that commit because (at least in principle) the slot matching the target replication origin might move around, so that once in a blue moon successive loop iterations might involve different CVs. We can now cope with such a scenario, at the cost of an extra trip through the retry loop. (There are ways we could fix this bug without depending on that commit, but they're all a lot more complicated than this way.)

While at it, upgrade the rather skimpy comments in this function.

Back-patch to v10 where this code came in.

Discussion: https://postgr.es/m/19947.1515455433@sss.pgh.pa.us
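The intended usage pattern, sketched with the real condition-variable API (the loop body and exit condition are invented, and backend context is assumed; this is not the committed replorigin_drop() code):

    for (;;)
    {
        if (origin_is_free())       /* hypothetical exit condition */
            break;
        /* Implicitly (re)prepares to sleep, even if the relevant CV
         * changed between iterations (the behavior added by 13db3b9). */
        ConditionVariableSleep(cv, wait_event_info);
    }
    ConditionVariableCancelSleep();  /* once, outside the loop */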
Add a function to TestLib that allows us to check pg_config.h and then decide the expected test outcome based on that. Author: Michael Paquier <michael.paquier@gmail.com>
The general assumption for postmaster child processes is that they should just exit(1), reasonably promptly, if the postmaster disappears. condition_variable.c neglected this consideration and could be left waiting forever, if the counterpart process it is waiting for has done the right thing and exited. We had some discussion of adjusting the WaitEventSet API to make it harder to make this type of mistake in future; but for the moment, and for v10, let's make this narrow fix. Discussion: https://postgr.es/m/20412.1515456143@sss.pgh.pa.us
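A sketch of the idea behind the narrow fix (illustrative fragment, not the committed hunk; 'rc' and 'wait_event_info' are assumed to be in scope):

    /* Wake up if the postmaster dies, and honor the general rule that
     * child processes should just exit(1) promptly in that case. */
    rc = WaitLatch(MyLatch, WL_LATCH_SET | WL_POSTMASTER_DEATH, 0,
                   wait_event_info);
    if (rc & WL_POSTMASTER_DEATH)
        exit(1);
    ResetLatch(MyLatch);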
Commit ab72716 introduced code that attempts to order the child scans of a Parallel Append node in a way that will minimize execution time, based on total cost and startup cost. However, it failed to think hard about what to do when estimated costs are exactly equal; a case that's particularly likely to occur when comparing on startup cost. In such a case the ordering of the child paths would be left to the whims of qsort, an algorithm that isn't even stable.

We can improve matters by applying the rule used elsewhere in the planner: if total costs are equal, sort on startup cost, and vice versa. When both cost estimates are exactly equal, rather than letting qsort do something unpredictable, sort based on the child paths' relids, which should typically result in sorting in inheritance order. (The latter provision requires inventing a qsort-style comparator for bitmapsets, but maybe we'll have use for that for other reasons in future.)

This results in a few plan changes in the select_parallel test, but those all look more reasonable than before, when the actual underlying cost numbers are taken into account.

Discussion: https://postgr.es/m/4944.1515446989@sss.pgh.pa.us
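A self-contained toy comparator illustrating the tie-breaking rule (fields and ordering direction simplified; not the planner's code):

    #include <stdio.h>
    #include <stdlib.h>

    typedef struct
    {
        double total_cost;
        double startup_cost;
        int    relid;           /* stand-in for the relids tie-breaker */
    } toy_path;

    /* Total cost first, then startup cost, then a deterministic final
     * tie-breaker, so an unstable qsort() cannot scramble equal-cost
     * children arbitrarily. */
    static int
    cmp_child_path(const void *a, const void *b)
    {
        const toy_path *pa = (const toy_path *) a;
        const toy_path *pb = (const toy_path *) b;

        if (pa->total_cost != pb->total_cost)
            return (pa->total_cost < pb->total_cost) ? -1 : 1;
        if (pa->startup_cost != pb->startup_cost)
            return (pa->startup_cost < pb->startup_cost) ? -1 : 1;
        return (pa->relid > pb->relid) - (pa->relid < pb->relid);
    }

    int
    main(void)
    {
        toy_path paths[] = {{10, 2, 3}, {10, 2, 1}, {10, 1, 2}};

        qsort(paths, 3, sizeof(toy_path), cmp_child_path);
        for (int i = 0; i < 3; i++)
            printf("%d\n", paths[i].relid);   /* prints 2, 1, 3 */
        return 0;
    }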
The initial implementation of list_qsort(), from commit ab72716, re-used the ListCells of the input list while not touching the List header. This meant that anybody who still had a pointer to the original header would now be in possession of a corrupted list, a problem that seems sure to bite us eventually.

One possible solution is to re-use the original List header as well, giving the function the semantics of update-in-place. However, that doesn't seem like a very good idea either given the way that the function is used in the planner: create_path functions aren't normally supposed to modify their input lists. It doesn't look like there would be a problem today, but it's not hard to foresee a time when modifying a list of Paths in-place could have side-effects on some other append path.

On the whole, and in view of the likelihood that this function might be used in other contexts in the future, it seems best to get rid of the micro-optimization of re-using the input list cells. Just build a new list.

Discussion: https://postgr.es/m/16912.1515449066@sss.pgh.pa.us
Rename PortalMemory to TopPortalContext, to avoid confusion with PortalContext and align naming with similar top-level memory contexts. Rename PortalData's "heap" field to portalContext. The "heap" naming seems quite antiquated and confusing. Also get rid of the PortalGetHeapMemory() macro and access the field directly, which we do for other portal fields, so this abstraction doesn't buy anything. Reviewed-by: Andrew Dunstan <andrew.dunstan@2ndquadrant.com> Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org>
After having gotten rid of PortalGetHeapMemory(), there seems little reason to keep one Portal access macro around that offers no actual abstraction and isn't consistently used anyway. Reviewed-by: Andrew Dunstan <andrew.dunstan@2ndquadrant.com> Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org>
These are compatible with Oracle and required for the datetime template language for jsonpath in an upcoming patch. Nikita Glukhov and Andrew Dunstan, reviewed by Pavel Stehule.
As src/backend/access/transam/README says, PageGetLSN may only be called by processes holding either exclusive lock on buffer, or a shared lock on buffer plus buffer header lock. Therefore any place that only holds a shared buffer lock must use BufferGetLSNAtomic instead of PageGetLSN, which internally obtains buffer header lock prior to reading the LSN.

A few callsites failed to comply with this rule. This was detected by running all tests under a new (not committed) assertion that verifies PageGetLSN locking contract. All but one of the callsites that failed the assertion are fixed by this patch. Remaining callsites were inspected manually and determined not to need any change.

The exception (unfixed callsite) is in TestForOldSnapshot, which only has a Page argument, making it impossible to access the corresponding Buffer from it. Fixing that seems a much larger patch that will have to be done separately; and that's just as well, since it was only introduced in 9.6 and other bugs are much older.

Some of these bugs are ancient; backpatch all the way back to 9.3.

Authors: Jacob Champion, Asim Praveen, Ashwin Agrawal
Reviewed-by: Michaël Paquier
Discussion: https://postgr.es/m/CABAq_6GXgQDVu3u12mK9O5Xt5abBZWQ0V40LZCE+oUf95XyNFg@mail.gmail.com
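An illustrative use of the rule with the real buffer-manager API (no specific committed callsite; 'buf' and 'lsn' are assumed locals):

    /* Shared content lock: must use BufferGetLSNAtomic, which takes
     * the buffer header lock itself before reading the LSN. */
    LockBuffer(buf, BUFFER_LOCK_SHARE);
    lsn = BufferGetLSNAtomic(buf);
    LockBuffer(buf, BUFFER_LOCK_UNLOCK);

    /* Exclusive content lock: PageGetLSN on the page is permitted. */
    LockBuffer(buf, BUFFER_LOCK_EXCLUSIVE);
    lsn = PageGetLSN(BufferGetPage(buf));
    LockBuffer(buf, BUFFER_LOCK_UNLOCK);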
Previously aggregate transition and combination functions were invoked by special case code in nodeAgg.c, evaluating input and filters separately using the expression evaluation machinery. That turns out to not be great for performance for several reasons:

- repeated expression evaluations have some cost
- the transition function invocations are poorly predicted, as commonly there are multiple aggregates in a query, resulting in the same call-stack invoking different functions
- filter and input computation had to be done separately
- the special case code made it hard to implement JITing of the whole transition function invocation

Address this by building one large expression that computes input, evaluates filters, and invokes transition functions. This leads to moderate speedups in queries bottlenecked by aggregate computations, and enables large speedups for similar cases once JITing is done.

There's potential for further improvement:

- It'd be nice if we could simplify the somewhat expensive aggstate->all_pergroups lookups.
- Right now there's still an advance_transition_function invocation in nodeAgg.c, leading to some code duplication.

Author: Andres Freund
Discussion: https://postgr.es/m/20170901064131.tazjxwus3k2w3ybh@alap3.anarazel.de
Reported-by: Ashutosh Sharma
The previous code gave the same error message for attempting to drop pinned and active portals, but those are separate states, so give separate error messages.
PL/pgSQL "pins" internally generated (unnamed) portals so that user code cannot close them by guessing their names. This logic is also useful in other languages and really for any code. So move that logic into SPI. An unnamed portal obtained through SPI_cursor_open() and related functions is now automatically pinned, and SPI_cursor_close() automatically unpins a portal that is pinned. In the core distribution, this affects PL/Perl and PL/Python, preventing users from manually closing cursors created by spi_query and plpy.cursor, respectively. (PL/Tcl does not currently offer any cursor functionality.) Reviewed-by: Andrew Dunstan <andrew.dunstan@2ndquadrant.com>
Without this, Parallel Append can end up with extra children. Report by Rajkumar Raghuwanshi. Fix by Amit Khandekar. Brown paper bag bug by me. Discussion: http://postgr.es/m/CAKcux6mBF-NiddyEe9LwymoUC5+wh8bQJ=uk2gGkOE+L8cv=LA@mail.gmail.com
It seems incorrect to assume that the list of CkptSortItems can never contain duplicate page numbers: concurrent activity could result in some page getting dropped from a low-numbered buffer and later loaded into a high-numbered buffer while BufferSync is scanning the buffer pool. If that happened, the comparator would give self-inconsistent results, potentially confusing qsort(). Saving one comparison step is not worth possibly getting the sort wrong.

So far as I can tell, nothing would actually go wrong given our current implementation of qsort(). It might get a bit slower than expected if there were a large number of duplicates of one value, but that's surely a probability-epsilon case. Still, the comment is wrong, and if we ever switched to another sort implementation it might be less forgiving.

In passing, avoid casting away const-ness of the argument pointers; I've not seen any compiler complaints from that, but it seems likely that some compilers would not like it.

Back-patch to 9.6 where this code came in, just in case I've underestimated the possible consequences.

Discussion: https://postgr.es/m/18437.1515607610@sss.pgh.pa.us
This reverts commit b3617cd. This broke returning unnamed cursors from PL/pgSQL functions. Apparently, there are no test cases for this.
Existing tests only covered returning explicitly named portals as refcursor. The unnamed cursor case was recently broken without a test failing.
PL/pgSQL "pins" internally generated portals so that user code cannot close them by guessing their names. Add this functionality to PL/Perl and PL/Python as well, preventing users from manually closing cursors created by spi_query and plpy.cursor, respectively. (PL/Tcl does not currently offer any cursor functionality.)
These functions are stated to be Oracle-compatible, but they weren't. Yugo Nagata noticed that while our code returns zero for a zero or negative fourth parameter (occur_index), Oracle throws an error. Further testing by me showed that there was also a discrepancy in the interpretation of a negative third parameter (beg_index): Oracle thinks that a negative beg_index indicates the last place where the target substring can *begin*, whereas our code thinks it is the last place where the target can *end*.

Adjust the sample code to behave like Oracle in both these respects. Also change it to be a CDATA[] section, simplifying copying-and-pasting out of the documentation source file. And fix minor problems in the introductory comment, which wasn't very complete or accurate.

Back-patch to all supported branches. Although this patch only touches documentation, we should probably call it out as a bug fix in the next minor release notes, since users who have adopted the functions will likely want to update their versions.

Yugo Nagata and Tom Lane

Discussion: https://postgr.es/m/20171229191705.c0b43a8c.nagata@sraoss.co.jp
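Hypothetical calls against the documentation's instr() port; the results shown are worked examples of the Oracle behaviors described above, not output taken from the commit:

    SELECT instr('foo bar foo', 'foo', -1);
    -- => 9: a negative beg_index bounds where the match may *begin*,
    --       so the last occurrence, starting at position 9, is found
    SELECT instr('foo bar foo', 'foo', 1, 0);
    -- => raises an error: occur_index <= 0 is rejected, as in Oracle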
The ~> (cube, int) operator was specifically designed for KNN-GiST search. However, it appears that KNN-GiST search can't work correctly with the current behavior of this operator when the dataset contains cubes of variable dimensionality. In that case, the same value of the second operator argument can point to different dimensions depending on the dimensionality of the particular cube. Such behavior is incompatible with GiST indexing of cubes, and KNN-GiST doesn't work correctly for it.

This patch changes the behavior of the ~> (cube, int) operator by introducing a dimension numbering scheme in which the value of the second argument unambiguously identifies the dimension number. With the new behavior, this operator can be correctly supported by KNN-GiST. Relevant changes to the cube operator class are also included.

Backpatch to v9.6, where the operator was introduced. Since the behavior of the ~> (cube, int) operator is changed, dependent entities must be refreshed after upgrade: expression indexes using this operator must be reindexed, materialized views must be rebuilt, and stored procedures and client code must be revised to use the new behavior correctly. That should be mentioned in the release notes.

Noticed by: Tomas Vondra
Author: Alexander Korotkov
Reviewed by: Tomas Vondra, Andrey Borodin
Discussion: https://www.postgresql.org/message-id/flat/a9657f6a-b497-36ff-e56-482a2c7e3292@2ndquadrant.com
The ~> (cube, int) operator was specifically designed for KNN-GiST search, but KNN-GiST supports only ascending ordering of results. Nevertheless, it would be useful to support descending ordering with ~> (cube, int) as well. We provide a workaround for that: a negative coordinate gives the inverted value of the corresponding cube bound, so a KNN search using a negative coordinate has the effect of descending ordering by that bound.

Author: Alexander Korotkov
Reviewed by: Tomas Vondra, Andrey Borodin
Discussion: https://www.postgresql.org/message-id/flat/a9657f6a-b497-36ff-e56-482a2c7e3292@2ndquadrant.com
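A hypothetical illustration of the workaround (table and data invented; the operator and the negative-coordinate trick are from the commit above):

    CREATE EXTENSION IF NOT EXISTS cube;
    CREATE TABLE points (c cube);
    CREATE INDEX ON points USING gist (c);

    SELECT c FROM points ORDER BY c ~> 1 LIMIT 10;   -- ascending by bound 1
    SELECT c FROM points ORDER BY c ~> -1 LIMIT 10;  -- descending: the
                                                     -- negative coordinate
                                                     -- inverts that bound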
"c.f." should be "cf.".
Per buildfarm
I missed a '0' in the version number string ... Per buildfarm member crake.
There used to be a lot of different *Type and *Kind symbol groups to address objects within different commands, most of which have been replaced by ObjectType, starting with b256f24. But this conversion was never done for the ACL commands until now. This change ends up being just a plain replacement of the types and symbols, without any code restructuring needed, except deleting some now redundant code. Reviewed-by: Michael Paquier <michael.paquier@gmail.com> Reviewed-by: Stephen Frost <sfrost@snowman.net>
AclObjectKind was basically just another enumeration for object types, and we already have a preferred one for that. It's only used in aclcheck_error. By using ObjectType instead, we can also give some more precise error messages, for example "index" instead of "relation". Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
When an index column is an expression, it makes no sense to compare its attribute numbers. This seems to account for remaining buildfarm fallout from 8b08f7d. At least, it solves the issue in my local 32bit VM -- let's see what the rest thinks.
When an UPDATE causes a row to no longer match the partition constraint, try to move it to a different partition where it does match the partition constraint. In essence, the UPDATE is split into a DELETE from the old partition and an INSERT into the new one. This can lead to surprising behavior in concurrency scenarios because EvalPlanQual rechecks won't work as they normally did; the known problems are documented. (There is a pending patch to improve the situation further, but it needs more review.)

Amit Khandekar, reviewed and tested by Amit Langote, David Rowley, Rajkumar Raghuwanshi, Dilip Kumar, Amul Sul, Thomas Munro, Álvaro Herrera, Amit Kapila, and me. A few final revisions by me.

Discussion: http://postgr.es/m/CAJ3gD9do9o2ccQ7j7+tSgiE1REY65XRiMb=yJO3u3QhyP8EEPQ@mail.gmail.com
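A hypothetical illustration (table and values invented); before this commit the UPDATE below would have failed with a partition-constraint violation:

    CREATE TABLE t (a int) PARTITION BY RANGE (a);
    CREATE TABLE t1 PARTITION OF t FOR VALUES FROM (0) TO (10);
    CREATE TABLE t2 PARTITION OF t FOR VALUES FROM (10) TO (20);
    INSERT INTO t VALUES (5);

    -- The row no longer satisfies t1's constraint, so it is moved:
    -- effectively a DELETE from t1 plus an INSERT into t2.
    UPDATE t SET a = 15 WHERE a = 5;
    SELECT tableoid::regclass, a FROM t;   -- now reports t2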
We don't actually need two code paths, one for 32 bits and one for 64 bits. Since the existing code already assumed that "long long" is available, we can just use PyLong_FromLongLong() for 64 bits as well. In Python 2.5 and later, PyLong_FromLong() and PyLong_FromLongLong() use the same code, so there will be no difference for 64-bit platforms. In Python 2.4, the code is different, but performance testing showed no noticeable difference in PL/Python, and that Python version is ancient anyway. Discussion: https://www.postgresql.org/message-id/0a02203c-e157-55b2-464e-6087066a1849@2ndquadrant.com
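A minimal compilable sketch of the unified path (the function name is invented for illustration):

    #include <Python.h>

    /* One conversion path for both 32- and 64-bit values, relying on
     * PyLong_FromLongLong() as described above. */
    static PyObject *
    int64_to_python(long long v)
    {
        return PyLong_FromLongLong(v);
    }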
Apparently, Peter's compiler has faith that the switch test values here could never not be valid values of their enums. Mine does not, and I tend to agree with it.
The previous code converted SPI_processed to a Python float if it didn't fit into a Python int. But Python longs have unlimited precision, so use that instead in all cases. As in eee50a8, we use the Python LongLong API unconditionally for simplicity. Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
x4m pushed a commit that referenced this pull request on Nov 4, 2022:
In a similar effort to f736e18 and 110d817, fixup various usages of string functions where a more appropriate function is available and more fit for purpose.

These changes include:

1. Use cstring_to_text_with_len() instead of cstring_to_text() when working with a StringInfoData and the length can easily be obtained.
2. Use appendStringInfoString() instead of appendStringInfo() when no formatting is required.
3. Use pstrdup(...) instead of psprintf("%s", ...)
4. Use pstrdup(...) instead of psprintf(...) (with no formatting)
5. Use appendPQExpBufferChar() instead of appendPQExpBufferStr() when the length of the string being appended is 1.
6. Use appendStringInfoChar() instead of appendStringInfo() when no formatting is required and the string is 1 char long.
7. Use appendPQExpBufferStr(b, .) instead of appendPQExpBuffer(b, "%s", .)
8. Don't use pstrdup when it's fine to just point to the string constant.

I (David) did find other cases of #8 but opted to use #4 instead, as I wasn't certain enough that applying #8 was OK (e.g. in hba.c).

Author: Ranier Vilela, David Rowley
Discussion: https://postgr.es/m/CAApHDvo2j2+RJBGhNtUz6BxabWWh2Jx16wMUMWKUjv70Ver1vg@mail.gmail.com
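Illustrative before/after pairs for a few of the rules above (assumed StringInfo 'buf' and strings 's', 'str', 'src'; not actual call sites from the commit):

    /* rule 2: no formatting needed */
    appendStringInfo(buf, "%s", str);       /* before */
    appendStringInfoString(buf, str);       /* after */

    /* rule 6: the appended string is one character long */
    appendStringInfoString(buf, ",");       /* before */
    appendStringInfoChar(buf, ',');         /* after */

    /* rule 3: a plain copy needs no format parsing */
    s = psprintf("%s", src);                /* before */
    s = pstrdup(src);                       /* after */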
x4m pushed a commit that referenced this pull request on Jul 5, 2024:
1. TruncateMultiXact() performs the SLRU truncations in a critical section. Deleting the SLRU segments calls ForwardSyncRequest(), which will try to compact the request queue if it's full (CompactCheckpointerRequestQueue()). That in turn allocates memory, which is not allowed in a critical section. Backtrace:

TRAP: failed Assert("CritSectionCount == 0 || (context)->allowInCritSection"), File: "../src/backend/utils/mmgr/mcxt.c", Line: 1353, PID: 920981
postgres: autovacuum worker template0(ExceptionalCondition+0x6e)[0x560a501e866e]
postgres: autovacuum worker template0(+0x5dce3d)[0x560a50217e3d]
postgres: autovacuum worker template0(ForwardSyncRequest+0x8e)[0x560a4ffec95e]
postgres: autovacuum worker template0(RegisterSyncRequest+0x2b)[0x560a50091eeb]
postgres: autovacuum worker template0(+0x187b0a)[0x560a4fdc2b0a]
postgres: autovacuum worker template0(SlruDeleteSegment+0x101)[0x560a4fdc2ab1]
postgres: autovacuum worker template0(TruncateMultiXact+0x2fb)[0x560a4fdbde1b]
postgres: autovacuum worker template0(vac_update_datfrozenxid+0x4b3)[0x560a4febd2f3]
postgres: autovacuum worker template0(+0x3adf66)[0x560a4ffe8f66]
postgres: autovacuum worker template0(AutoVacWorkerMain+0x3ed)[0x560a4ffe7c2d]
postgres: autovacuum worker template0(+0x3b1ead)[0x560a4ffecead]
postgres: autovacuum worker template0(+0x3b620e)[0x560a4fff120e]
postgres: autovacuum worker template0(+0x3b3fbb)[0x560a4ffeefbb]
postgres: autovacuum worker template0(+0x2f724e)[0x560a4ff3224e]
/lib/x86_64-linux-gnu/libc.so.6(+0x27c8a)[0x7f62cc642c8a]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x85)[0x7f62cc642d45]
postgres: autovacuum worker template0(_start+0x21)[0x560a4fd16f31]

To fix, bail out in CompactCheckpointerRequestQueue() without doing anything, if it's called in a critical section. That covers the above call path, as well as any other similar cases where RegisterSyncRequest might be called in a critical section.
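A sketch of that fix (illustrative shape, not the committed hunk):

    static bool
    CompactCheckpointerRequestQueue(void)
    {
        /* Memory allocation is not allowed in a critical section, so
         * give up; the caller will sleep and retry until there is room
         * in the queue. */
        if (CritSectionCount > 0)
            return false;

        /* ... existing compaction logic ... */
        return true;
    }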
2. After fixing that, another problem became apparent: the autovacuum process doing that truncation can deadlock with the checkpointer process. TruncateMultiXact() sets "MyProc->delayChkptFlags |= DELAY_CHKPT_START". If the sync request queue is full and cannot be compacted, the process will repeatedly sleep and retry, until there is room in the queue. However, if the checkpointer is trying to start a checkpoint at the same time, and is waiting for the DELAY_CHKPT_START processes to finish, the queue will never shrink.

More concretely, the autovacuum process is stuck here:

#0 0x00007fc934926dc3 in epoll_wait () from /lib/x86_64-linux-gnu/libc.so.6
#1 0x000056220b24348b in WaitEventSetWaitBlock (set=0x56220c2e4b50, occurred_events=0x7ffe7856d040, nevents=1, cur_timeout=<optimized out>) at ../src/backend/storage/ipc/latch.c:1570
#2 WaitEventSetWait (set=0x56220c2e4b50, timeout=timeout@entry=10, occurred_events=<optimized out>, occurred_events@entry=0x7ffe7856d040, nevents=nevents@entry=1, wait_event_info=wait_event_info@entry=150994949) at ../src/backend/storage/ipc/latch.c:1516
#3 0x000056220b243224 in WaitLatch (latch=<optimized out>, latch@entry=0x0, wakeEvents=wakeEvents@entry=40, timeout=timeout@entry=10, wait_event_info=wait_event_info@entry=150994949) at ../src/backend/storage/ipc/latch.c:538
#4 0x000056220b26cf46 in RegisterSyncRequest (ftag=ftag@entry=0x7ffe7856d0a0, type=type@entry=SYNC_FORGET_REQUEST, retryOnError=true) at ../src/backend/storage/sync/sync.c:614
#5 0x000056220af9db0a in SlruInternalDeleteSegment (ctl=ctl@entry=0x56220b7beb60 <MultiXactMemberCtlData>, segno=segno@entry=11350) at ../src/backend/access/transam/slru.c:1495
#6 0x000056220af9dab1 in SlruDeleteSegment (ctl=ctl@entry=0x56220b7beb60 <MultiXactMemberCtlData>, segno=segno@entry=11350) at ../src/backend/access/transam/slru.c:1566
#7 0x000056220af98e1b in PerformMembersTruncation (oldestOffset=<optimized out>, newOldestOffset=<optimized out>) at ../src/backend/access/transam/multixact.c:3006
#8 TruncateMultiXact (newOldestMulti=newOldestMulti@entry=3221225472, newOldestMultiDB=newOldestMultiDB@entry=4) at ../src/backend/access/transam/multixact.c:3201
#9 0x000056220b098303 in vac_truncate_clog (frozenXID=749, minMulti=<optimized out>, lastSaneFrozenXid=749, lastSaneMinMulti=3221225472) at ../src/backend/commands/vacuum.c:1917
#10 vac_update_datfrozenxid () at ../src/backend/commands/vacuum.c:1760
#11 0x000056220b1c3f76 in do_autovacuum () at ../src/backend/postmaster/autovacuum.c:2550
#12 0x000056220b1c2c3d in AutoVacWorkerMain (startup_data=<optimized out>, startup_data_len=<optimized out>) at ../src/backend/postmaster/autovacuum.c:1569

and the checkpointer is stuck here:

#0 0x00007fc9348ebf93 in clock_nanosleep () from /lib/x86_64-linux-gnu/libc.so.6
#1 0x00007fc9348fe353 in nanosleep () from /lib/x86_64-linux-gnu/libc.so.6
#2 0x000056220b40ecb4 in pg_usleep (microsec=microsec@entry=10000) at ../src/port/pgsleep.c:50
#3 0x000056220afb43c3 in CreateCheckPoint (flags=flags@entry=108) at ../src/backend/access/transam/xlog.c:7098
#4 0x000056220b1c6e86 in CheckpointerMain (startup_data=<optimized out>, startup_data_len=<optimized out>) at ../src/backend/postmaster/checkpointer.c:464

To fix, add AbsorbSyncRequests() to the loops where the checkpointer waits for DELAY_CHKPT_START or DELAY_CHKPT_COMPLETE operations to finish.

Backpatch to v14. Before that, SLRU deletion didn't call RegisterSyncRequest, which avoided this failure. I'm not sure if there are other similar scenarios on older versions, but we haven't had any such reports.

Discussion: https://www.postgresql.org/message-id/ccc66933-31c1-4f6a-bf4b-45fef0d4f22e@iki.fi
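A sketch of the shape of that fix (illustrative; the loop details and the 'vxids'/'nvxids' variables are assumed, not the committed hunk):

    do
    {
        pg_usleep(10000L);      /* 10 ms, as in CreateCheckPoint() */
        AbsorbSyncRequests();   /* keep draining the queue so blocked
                                 * RegisterSyncRequest() callers can
                                 * finish and clear DELAY_CHKPT_START */
    } while (HaveVirtualXIDsDelayingChkpt(vxids, nvxids, DELAY_CHKPT_START));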