Implementation guide, or other examples? Particularly for embedded serial flash. #56
There should probably be more documentation about block device geometry. I also agree on the usefulness of a portable test suite (#44). Here's some info for the short term that may help:
This should be what you set your erase size to. littlefs does not buffer a full sector in RAM, and smaller values are always better, since they waste less space to alignment. If the sector size is less than 128 B, problems will happen, but that case should trigger an assert during mount.
littlefs doesn't do read-modify-write; instead it applies modifications to a stream of data as it copies it from one sector to another. This requires more buffers, but the buffers can be much smaller, since they don't need to hold a full sector.
This is entirely a tradeoff of RAM vs speed. Larger read sizes = more RAM = fewer read operations = more speed. I haven't actually seen a use case where it makes sense to use a different read size and prog size. I'm planning to change this to a single "cache_size" variable, but am waiting for a major revision. I put together some benchmarks and found that the optimal read_size is ~64 B for SPI flash (MX25R). But if speed is a high priority it may be worth measuring your application with different values.
Agree with @geky; I also found that 64 B for the read/prog size is optimal, but I use 1024 B for the lookahead on my side.
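A minimal configuration sketch along these lines, assuming a littlefs v1-style struct lfs_config (later versions replace the per-operation caches with a single cache_size and rename the lookahead field), a 512 KiB part with 4 KiB erase sectors, and hypothetical mx25r_* driver callbacks; only the 64 B read/prog figure comes from this thread, everything else is illustrative:

```c
#include "lfs.h"

// Hypothetical driver callbacks: placeholders for your SPI flash glue
// code, not part of littlefs itself.
extern int mx25r_read(const struct lfs_config *c, lfs_block_t block,
                      lfs_off_t off, void *buffer, lfs_size_t size);
extern int mx25r_prog(const struct lfs_config *c, lfs_block_t block,
                      lfs_off_t off, const void *buffer, lfs_size_t size);
extern int mx25r_erase(const struct lfs_config *c, lfs_block_t block);
extern int mx25r_sync(const struct lfs_config *c);

// Assumed geometry: 512 KiB part with 4 KiB erase sectors.
const struct lfs_config cfg = {
    .read  = mx25r_read,
    .prog  = mx25r_prog,
    .erase = mx25r_erase,
    .sync  = mx25r_sync,

    .read_size   = 64,    // 64 B reads, per the benchmark above
    .prog_size   = 64,    // 64 B programs
    .block_size  = 4096,  // one erase sector per littlefs block
    .block_count = 128,   // 512 KiB / 4 KiB
    .lookahead   = 1024,  // v1 lookahead window in blocks, a multiple of 32
};
```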
@geky thanks, this is helpful. However, I have more questions to better understand the requirements of the underlying device.
The measurement I did was on a filesystem with only one directory, so scanning the filesystem for free blocks was very cheap. That's probably why the lookahead size had such a low impact. Would probably be worth more investigation.
Hi @rojer,
LFS will always invoke the device read/write with a multiple of this size. So if a device has a read size of 4 bytes, LFS may read 16 bytes at a time, but it will not read 15 bytes.
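To illustrate that guarantee, a read callback can assume the offsets and sizes it is handed are aligned to read_size. This is only a sketch; spi_flash_read and its error convention are assumed driver details, not littlefs API:

```c
#include <stdint.h>
#include <stddef.h>
#include "lfs.h"

// Hypothetical SPI flash read helper from your board support code;
// assumed to return 0 on success and nonzero on failure.
extern int spi_flash_read(uint32_t addr, void *buf, size_t len);

int bd_read(const struct lfs_config *c, lfs_block_t block,
            lfs_off_t off, void *buffer, lfs_size_t size) {
    // littlefs only issues reads whose offset and size are multiples of
    // read_size, so the driver never sees oddly aligned accesses.
    LFS_ASSERT(off % c->read_size == 0);
    LFS_ASSERT(size % c->read_size == 0);

    uint32_t addr = block * c->block_size + off;
    return spi_flash_read(addr, buffer, size) ? LFS_ERR_IO : 0;
}
```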
Yep! It does sound like we need an implementation guide. Thanks for the questions; feel free to ask more, because this is a good starting point for me to know what isn't clear.
Some more example implementations would be a good starting point, which you could then sound off on. For example, SPI flash chips are a likely and good candidate for use with this library, as are SD cards in SPI mode.
Hi all, first of all, amazing project and thank you all very much for this work. I just started trying littlefs and was successful in going through the initial setup. However, the controller (an STM32 Discovery board with SPI flash, 512 KB total) goes into a hard fault when writing and closing multiple files in the root directory (it crashes after writing 15 files of 6 bytes each). The source of the hard fault is the LFS_ASSERT(entry.d.type == LFS_TYPE_REG); in the lfs_file_sync function. I also have some open questions for better understanding: what should the return value from the user-defined functions be (e.g. 0 for a valid read/write?), and, related to that, how does LFS determine that a block has gone corrupt? Looking forward.
@geky continuing my exploration of LFS, I wonder if its CoW approach would be more friendly to simple block device encryption. For that to work, we need the following:
(2) is a rather big ask and basically precludes RMW operations, but with CoW at the heart of LFS I'm hopeful :) It would be great if device encryption could be added without building a whole translation layer (something necessary for other filesystems, e.g. SPIFFS, which relies a lot on NOR flash semantics and single-bit flipping).
Hi @elekrofuss! That sounds like a tricky bug; maybe opening a separate issue will make it easier to track. I've not seen that assert fail in the wild before, which means:
Which is a bit hard to make happen. It's either a bug in littlefs or some sort of memory corruption on the MCU side. Are you in a multithreaded environment? Are you protecting littlefs with mutexes? If you're able to narrow the bug down to a code snippet and open a separate issue, that would help a lot.
On a lot of the parts I've seen, the "page size" is the size of an internal buffer that caches the data sent over SPI. Smaller programs are still possible, but writing at the page size gets you the best program speed. In these cases it is more efficient to use a multiple of the device's page size, but it may not be a requirement; it's then a tradeoff of RAM vs speed, and a multiple of the page size is a good idea if you have enough RAM. As you note, some devices don't support partial page programming; on those devices your program size must be a multiple of the page size or you will get data corruption.
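A sketch of a prog callback for such a part, splitting each program into page-sized chunks; the page size, spi_flash_page_program helper, and bd_prog name are assumptions for illustration:

```c
#include <stdint.h>
#include <stddef.h>
#include "lfs.h"

#define FLASH_PAGE_SIZE 256  // assumed page size, check your datasheet

// Hypothetical driver helper: programs up to one page, never crossing
// a page boundary; assumed to return 0 on success.
extern int spi_flash_page_program(uint32_t addr, const void *buf, size_t len);

int bd_prog(const struct lfs_config *c, lfs_block_t block,
            lfs_off_t off, const void *buffer, lfs_size_t size) {
    uint32_t addr = block * c->block_size + off;
    const uint8_t *data = buffer;

    // littlefs may hand us several pages at once when prog_size is a
    // multiple of the page size, so split the write at page boundaries.
    while (size > 0) {
        lfs_size_t chunk = FLASH_PAGE_SIZE - (addr % FLASH_PAGE_SIZE);
        if (chunk > size) {
            chunk = size;
        }

        if (spi_flash_page_program(addr, data, chunk)) {
            return LFS_ERR_IO;
        }

        addr += chunk;
        data += chunk;
        size -= chunk;
    }

    return 0;
}
```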
All read offsets and sizes will be a multiple of the "read size", and all program offsets and sizes will be a multiple of the "program size". littlefs additionally requires that the "program size" be a multiple of the "read size", so as a side effect all program offsets and sizes are also multiples of the "read size".
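Expressed as a quick sanity check (a sketch, not a check littlefs asks you to perform), the geometry constraints chain like this:

```c
#include "lfs.h"

// Sanity-check the geometry relationship described above: prog_size is a
// multiple of read_size, and block_size is a multiple of prog_size, so
// every block is also a whole number of read units.
static void check_geometry(const struct lfs_config *cfg) {
    LFS_ASSERT(cfg->prog_size  % cfg->read_size == 0);
    LFS_ASSERT(cfg->block_size % cfg->prog_size == 0);
}
```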
This is a good question to raise. All that the read, prog, erase, and sync functions need to do is return 0 if they think the operation was successful. If a write actually failed to stick on the device, that is ok; after every write littlefs reads back the written block to verify that the write was successful. These functions can also return any negative error code (as long as it doesn't conflict with the littlefs error codes), which will be propagated directly to the user of the library. Optionally (and because of some legacy), these functions can also return LFS_ERROR_CORRUPT. This tells littlefs that the block was corrupted early on, and can be useful if you want to synthetically limit the erases on a block or have some other measure of wear.
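As a sketch of how those conventions might be used to synthetically limit erases, the erase callback below reports a block as corrupt once a hypothetical wear limit is reached. The MAX_ERASES value, erase_counts table, and spi_flash_sector_erase helper are assumptions; the corrupt error constant is spelled LFS_ERR_CORRUPT here, so check the exact name in your version of lfs.h:

```c
#include <stdint.h>
#include "lfs.h"

#define MAX_ERASES 10000  // hypothetical synthetic wear limit

// Hypothetical driver helper and per-block erase counters.
extern int spi_flash_sector_erase(uint32_t addr);
static uint32_t erase_counts[128];  // one counter per block (assumed count)

int bd_erase(const struct lfs_config *c, lfs_block_t block) {
    // Synthetically report the block as corrupt once it has been erased
    // "too many" times, so littlefs stops using it.
    if (erase_counts[block]++ >= MAX_ERASES) {
        return LFS_ERR_CORRUPT;
    }

    if (spi_flash_sector_erase(block * c->block_size)) {
        return LFS_ERR_IO;  // any negative error is propagated to the user
    }

    return 0;
}
```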
@apullin, example implementations are a good idea, though I'm not sure where to put them. I'd like to make sure any code in the repository is tested, but that would be difficult if the examples require hardware. Maybe just inlining the examples in the documentation would work?
@rojer Absolutely! LFS tries to work with as few assumptions about storage as possible, with the goal of supporting these more abstract storage ideas. I'm hoping this gives the most room for littlefs to grow as the technology underneath it is still evolving.
Yep!
It should work without a problem! Rather than using RMW operations, LFS modifies data in a stream-like operation as it copies data from one block to another. This keeps RAM usage low and avoids rewrites. This has some tradeoffs, such as needing two blocks for every metadata block, but opens up a lot of opportunities. And yes, if a file does not end on a 16 byte boundary, it will be copied out to another block even if only appended to. (Actually, right now littlefs doesn't trust that any data block has previously been erased; any opened files will get copied out to another block for safety. This could be improved in the future but would require a concept of the "erase value" and would be opt-in.) Also a quick hint: littlefs doesn't actually need power-of-2 block sizes, so if you needed to store your nonce somewhere, you might be able to slip it into some space in each block. : )
Does this apply to erase, program and read blocks? So a backing store could (for example) use 1024 byte hardware sectors but tell littlefs that erase=1000 bytes, program=read=250 bytes, and then have 6 bytes to itself for metadata for each program/read block? |
Yep, all that is required is that erase is a multiple of program, which is a multiple of read. Though note that littlefs still expects offsets to be contiguous, so if you do this you may need to add a bit of math to map the offset in littlefs space to the offset in block device space.
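A rough sketch of that offset mapping for the layout proposed above (1024-byte hardware sectors exposed to littlefs as 1000-byte blocks made of four 250-byte read/prog chunks, each followed by 6 bytes of driver metadata); the constant and function names are made up for illustration:

```c
#include <stdint.h>
#include "lfs.h"

// Assumed layout constants from the example above.
#define HW_SECTOR_SIZE  1024  // physical sector size
#define LFS_BLOCK_SIZE  1000  // block size reported to littlefs
#define LFS_CHUNK_SIZE   250  // read/prog size reported to littlefs
#define META_PER_CHUNK     6  // driver metadata stored after each chunk

// Map a littlefs (block, offset) pair to a raw device address, skipping
// over the 6 metadata bytes that follow each 250-byte chunk.
static uint32_t map_to_device(lfs_block_t block, lfs_off_t off) {
    uint32_t chunk  = off / LFS_CHUNK_SIZE;  // which 250-byte chunk
    uint32_t within = off % LFS_CHUNK_SIZE;  // offset inside the chunk
    return block * HW_SECTOR_SIZE
         + chunk * (LFS_CHUNK_SIZE + META_PER_CHUNK)
         + within;
}
```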
Great, thanks for the explanation!
Are there any guidelines for how to map the LFS block/read/program sizes onto a serial flash that has well-defined underlying geometry? E.g. align blocks to the sector size (the minimum erase quantum), so the erase command in the driver is very simple to implement. However, sectors are 4K, which is not resource-light to read-modify-rewrite. And then, what are the effective implications of the choice of read size and prog size?
I actually already have it working on a large serial NOR flash, but due to the size and the high-level SDK shims, I am not entirely sure that everything is working as expected, even if files appear to read/write/list/etc correctly.
As I raised in another repo issue, the test battery doesn't map to an embedded context (not yet, I'll work on it), so I can't be sure that all the cases for bad blocks, exhaustion, and other edge cases are really correct on the actual serial-flash block-device-like implementation.