Generic wear-leveling algorithm #16996
Conversation
Force-pushed 7988c3b to 1b2b23b
__attribute__((weak))
✅
__attribute__((weak))
✔️
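For context, a minimal sketch of the pattern under discussion: tagging a default backing-store hook as a weak symbol so that a concrete driver can supply its own definition at link time. The function name is illustrative, not necessarily the PR's exact API.

```c
#include <stdbool.h>

// Weak default: succeeds without doing anything. A flash or SPI NOR
// driver that needs a real unlock sequence defines a strong version
// of this function, which the linker picks instead of this one.
__attribute__((weak)) bool backing_store_unlock(void) {
    return true;
}
```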
Tested in hardware on the GD32VF103 using #17376
What is the plan with the
I wonder if we could have some pretty basic error detection for the consolidated data. This would work by having the CRC8 or FNV hash of that blob as a header before the write log. At least when the cache is built from the consolidated data + write log, we can then detect corruption in the former. For the log entries this doesn't work, of course.
Added FNV1a_64 that gets written after a consolidation occurs.
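FNV-1a is a small, dependency-free hash that fits this use well. Below is a sketch of the standard 64-bit variant (the constants are the published FNV offset basis and prime; the function name is illustrative, not necessarily what the PR uses):

```c
#include <stddef.h>
#include <stdint.h>

// Standard FNV-1a, 64-bit: XOR each byte into the hash, then
// multiply by the FNV prime.
static uint64_t fnv1a_64(const void *data, size_t length) {
    const uint8_t *bytes = (const uint8_t *)data;
    uint64_t       hash  = 0xCBF29CE484222325ULL; // FNV offset basis
    for (size_t i = 0; i < length; i++) {
        hash ^= bytes[i];
        hash *= 0x00000100000001B3ULL; // FNV prime
    }
    return hash;
}
```

Per the commit list further down, a hash mismatch on the consolidated area clears the cache, while the write log still gets played back on top.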
Thanks for addressing my findings so quickly and picking up the FNV hash idea. The changes look good to me now!
* Initial import of wear-leveling algorithm.
* Alignment.
* Docs tweaks.
* Lock/unlock.
* Update quantum/wear_leveling/wear_leveling_internal.h
  Co-authored-by: Stefan Kerkmann <karlk90@pm.me>
* More tests, fix issue with consolidation when unlocked.
* More tests.
* Review comments.
* Add plumbing for FNV1a.
* Another test checking that checksum mismatch clears the cache.
* Check that the write log still gets played back.

Co-authored-by: Stefan Kerkmann <karlk90@pm.me>
Description
Just getting the ball rolling -- this is an adaptation of the wear leveling algorithms within QMK.
Currently we support 2-byte and 4-byte writes for "emulated EEPROM", but with Keychron (and others) using STM32L4, there was a need for 8-byte writes.
#15050 intended to add support for 8-byte EEPROM emulation; it has not been (and likely will not be) merged as the requested changes and tests were not implemented.
This PR intends to begin rectifying the situation -- decoupling the wear-leveling algorithm from the underlying reads/writes, which also allows it to be adapted for other targets such as external SPI NOR flash (see the sketch below).
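Concretely, decoupling means the core algorithm only ever talks to a small backing-store interface, and each target supplies its own implementation. The declarations below are a hypothetical sketch of what such an interface could look like, not the PR's exact signatures:

```c
#include <stdbool.h>
#include <stdint.h>

// Hypothetical backing-store interface: the wear-leveling core calls
// these, and each target (MCU flash page, external SPI NOR, ...)
// provides its own implementation.
typedef uint32_t backing_store_int_t; // native write granularity of the device

bool backing_store_init(void);
bool backing_store_unlock(void);                                  // prepare device for writes
bool backing_store_write(uint32_t address, backing_store_int_t value);
bool backing_store_lock(void);                                    // re-protect after writes
bool backing_store_erase(void);                                   // wipe the reserved area
```

With this split, adding 8-byte writes for STM32L4 (or any other device) becomes a matter of implementing the backing store, without touching the algorithm itself.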
This PR is inclusive of the algorithm and tests only. No documentation is required as the configuration and the like will be at the driver level.
NOTE: For people wanting persistence support on their Keychron boards, they'll have to wait a while longer. This lays the foundations; it does not implement concrete support for their boards. The same goes for GMMK and their later boards with external flash.
TODO
Better data corruption detection (checksums? rolling codes?) -- Cancelled, no bits to spare.
Types of Changes
Checklist