WriteBufferManager: proactively initiate flush requests when nearing quota #155
Currently the `WriteBufferManager` is a passive entity: it can be queried for the state of the memory quota, and it is up to the caller to act on the answer. In the case of flushes, this means that the `WriteBufferManager` exposes a `ShouldFlush()` method that writers call during the write flow to trigger flushes for the DB that the writer is writing into. The algorithm for choosing what to flush in that database is pretty rudimentary: it either does an atomic flush (of all CFs) if needed, or chooses the CF that holds the oldest data. This means that in a multi-DB scenario, only the active DB will be the one flushing, even if most of the memtable memory is held by inactive databases. In a multi-CF scenario without atomic flush, the current algorithm may not help much, because the CF that holds the oldest data isn't necessarily using much memory.
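For context, here is a minimal sketch of that passive pattern. Only `ShouldFlush()` mirrors a real `WriteBufferManager` method; the surrounding names and logic are simplified stand-ins, not the actual implementation:

```cpp
#include <cstddef>

// Toy model of the current, passive flow: the WBM only answers a yes/no
// question, and the writer that happens to be active does the flushing.
class WriteBufferManager {
 public:
  explicit WriteBufferManager(size_t buffer_size) : buffer_size_(buffer_size) {}
  void ReserveMem(size_t bytes) { memory_used_ += bytes; }
  // Answers a query; never initiates anything itself.
  bool ShouldFlush() const { return memory_used_ >= buffer_size_; }

 private:
  size_t buffer_size_;
  size_t memory_used_ = 0;
};

// Illustrative stand-in for the check done during the write flow
// (in DBImpl::PreprocessWrite()):
void OnWrite(WriteBufferManager& wbm) {
  if (wbm.ShouldFlush()) {
    // Flush the oldest-data CF (or all CFs under atomic flush) of *this* DB,
    // even if most memtable memory is held by other, inactive DBs.
  }
}
```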
Fix this behavior by letting the `WriteBufferManager` be aware of the databases that depend on it for quota management and by having it proactively initiate flushes as needed. This requires that each database register itself for information extraction and flush requests on DB open, and unregister on closing. The information extraction part is needed so that the `WriteBufferManager` can choose the most suitable DB for flush requests (the one that will release the most memory); the flush triggering is what allows the memory to actually be released.
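One possible shape for the registration interface, as a hedged sketch: none of these types or methods exist in the codebase today, and the real design would likely pass a `DBImpl*` rather than callbacks:

```cpp
#include <cstdint>
#include <functional>
#include <mutex>
#include <unordered_map>
#include <utility>

// Hypothetical per-DB usage report; the mutable/immutable split is what the
// selection logic would consume.
struct WbmMemoryUsage {
  uint64_t mutable_bytes = 0;    // active memtables
  uint64_t immutable_bytes = 0;  // sealed memtables not yet flushed
};

class WriteBufferManager {
 public:
  using UsageQuery = std::function<WbmMemoryUsage()>;
  using FlushRequest = std::function<bool()>;  // false if nothing could be flushed

  // Called on DB open: the DB hands the WBM a way to inspect its memory usage
  // and a way to ask it to flush. The void* handle stands in for whatever key
  // uniquely identifies a registered DB.
  void RegisterDB(void* db_handle, UsageQuery query, FlushRequest flush) {
    std::lock_guard<std::mutex> lock(mu_);
    dbs_[db_handle] = Registered{std::move(query), std::move(flush)};
  }

  // Called on DB close.
  void DeregisterDB(void* db_handle) {
    std::lock_guard<std::mutex> lock(mu_);
    dbs_.erase(db_handle);
  }

 private:
  struct Registered {
    UsageQuery query;
    FlushRequest flush;
  };
  std::mutex mu_;
  std::unordered_map<void*, Registered> dbs_;
};
```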
The triggering of flushes should happen once total memory usage exceeds a certain threshold (the specifics can be determined later; currently we have two options: either trigger flushes once we start delaying writes (#114), or do this periodically, e.g. on every 25% increase in used memory). The `WriteBufferManager` should then iterate over the registered databases, query each for the amount of memory it uses (broken down into immutable and mutable memory), and choose the database with the most potential for memory reduction. In a multi-CF database this may not be ideal: if we have many small CFs, we will not release as much memory as we're hoping to, so we may need to break that information down further by CF in order to let the WBM choose better.
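The selection step might then look roughly like this. Again hypothetical: the trigger policy is left open above, and the naive scoring here is just one of the options discussed:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical selection step: given the usage reports of all registered DBs,
// pick the one with the most reclaimable memory.
struct DbUsage {
  void* db_handle;
  uint64_t mutable_bytes;
  uint64_t immutable_bytes;
};

void* PickDbToFlush(const std::vector<DbUsage>& usages) {
  void* best = nullptr;
  uint64_t best_bytes = 0;
  for (const DbUsage& u : usages) {
    // Naive score: total memtable memory. Refinements discussed in this issue
    // include preferring immutable memory or breaking this down per CF.
    uint64_t reclaimable = u.mutable_bytes + u.immutable_bytes;
    if (reclaimable > best_bytes) {
      best_bytes = reclaimable;
      best = u.db_handle;
    }
  }
  return best;  // nullptr if nothing is reclaimable
}
```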
On the database side, when a flush request is received: in the case of atomic flush, the behaviour will be exactly as it is today with regard to choosing the CFs for flushing. However, if we're not doing an atomic flush, the logic should be changed to choose the CF with the most memory to free (there's no point in choosing the oldest; that is only relevant when the WAL size limit is reached, and is already done there). Additionally, depending on complexity, we may want to skip switching active memtables on the chosen CFs and just request a flush of the immutable ones, if they would free enough memory (if we go that route, we should probably change the logic on the WBM side to first choose the DB with the most potential for freeing immutable memory before we go hunting down mutable memory as well).
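A hypothetical DB-side picker under the "most memory to free" policy could look like this; the types are toys, and real code would operate on `ColumnFamilyData` instances:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical handling of a WBM flush request on the DB side. Without atomic
// flush, pick the CF with the most memory to free, not the oldest-data CF.
struct CfUsage {
  int cf_id;
  uint64_t memtable_bytes;
};

int PickCfForWbmFlush(const std::vector<CfUsage>& cfs) {
  int best_cf = -1;
  uint64_t best_bytes = 0;
  for (const CfUsage& cf : cfs) {
    if (cf.memtable_bytes > best_bytes) {
      best_bytes = cf.memtable_bytes;
      best_cf = cf.cf_id;
    }
  }
  // NOTE: unlike the ShouldFlush() path in DBImpl::PreprocessWrite(), this
  // runs outside the write flow, so the DB mutex is NOT already held and must
  // be taken before switching memtables / scheduling the flush (see below).
  return best_cf;  // -1 if there is nothing worth flushing
}
```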
Note that adding data to memtables (and, by extension, calling into the WBM to reserve memory) isn't done under the DB mutex, so we'll need to lock it when triggering flushes (the current code that queries the WBM and triggers flushes runs as part of `DBImpl::PreprocessWrite()` while the mutex is being held).

Additionally, we may want to expose, in the information query interface, the difference between memory that has already been marked for flush (and cannot be freed by triggering another flush) and immutable memory that hasn't been marked yet (#113), and use that when considering which DB to choose. We may also forgo choosing a DB at all if, e.g., more than 50% of the total memory is already marked for flush, since adding another flush job wouldn't necessarily help.
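If we expose that distinction, the hypothetical usage report from the earlier sketch could be refined along these lines (names invented here; the 50% cutoff is just the example figure from the text above):

```cpp
#include <cstdint>

// Hypothetical refinement of the usage report for the #113 distinction:
// memory already marked for flush can't be freed by issuing another flush.
struct WbmMemoryUsageV2 {
  uint64_t mutable_bytes = 0;
  uint64_t immutable_unmarked_bytes = 0;  // flushable: not yet picked for flush
  uint64_t immutable_marked_bytes = 0;    // already picked; a new request won't help
};

// Example policy from the discussion above: skip initiating altogether when
// most of the quota is already on its way out.
inline bool WorthInitiating(uint64_t marked_total, uint64_t quota) {
  return marked_total < quota / 2;
}
```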
Lastly, we may need a notification mechanism that lets the WBM know when a flush request has actually been processed (picked from the flush request queue, with memtables picked), so that we can mark databases where a flush has already been requested and avoid requesting again until that request has been processed. Under memory pressure we'll then simply choose another database, rather than send another request based on the intermediate state between requesting a flush and it being processed.
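One possible shape for that bookkeeping, again as a hypothetical sketch (no such class exists today):

```cpp
#include <mutex>
#include <unordered_set>

// Hypothetical tracker so the WBM doesn't re-request a flush from a DB whose
// previous request hasn't been processed yet (i.e. not yet picked from the
// flush request queue with memtables chosen).
class FlushRequestTracker {
 public:
  // Returns true if a request may be sent to this DB (none is in flight).
  bool TryMarkPending(void* db_handle) {
    std::lock_guard<std::mutex> lock(mu_);
    return pending_.insert(db_handle).second;
  }

  // Called back by the DB once the request was actually processed.
  void OnRequestProcessed(void* db_handle) {
    std::lock_guard<std::mutex> lock(mu_);
    pending_.erase(db_handle);
  }

 private:
  std::mutex mu_;
  std::unordered_set<void*> pending_;
};
```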
@erez-speedb, please run performance tests.