The inconsistency here between the use of block_size vs metadata_max was
suspicious. Turns out there's a bug when metadata_max == prog_size.
We correctly use metadata_max for the block_size/2 check, but we weren't
using it for the block_size-40 check. The second check seems unnecessary
after the first, but it protects against running out of space in a
commit for commit-related metadata (checksums, tail pointers, etc) when
we can't program half-blocks.
Turns out this is also needed when limiting metadata_max to a single
prog, otherwise we risk erroring with LFS_ERR_NOSPC early.
Found by ajheck, dpkristensen, NLLK, and likely others.
- Added METADATA_MAX to test_runner.
- Added METADATA_MAX to bench_runner.
- Added a simple metadata_max test to test_superblocks, for lack of a
  better location.
There have been several issues floating around related to metadata_max
and LFS_ERR_NOSPC, which makes me think there's a bug in our
metadata_max logic.
metadata_max was a quick patch and is relatively untested, so an
undetected bug isn't too surprising. This commit adds at least some
testing over metadata_max.
Sure enough, the new test_superblocks_metadata_max test reveals a
curious LFS_ERR_NAMETOOLONG error that shouldn't be there.
More investigation needed.
In the previous implementation of lfs_file_seek, we calculated the new
offset using signed arithmetic before checking for possible
overflow/underflow conditions. This results in undefined behavior in C.
Fortunately for us, littlefs is now limited to 31-bit file sizes for API
reasons, so we don't have to be too clever here. Doing the arithmetic
with unsigned integers and just checking if we're in a valid range
afterwards should work.
Found by m-kostrzewa and lucic71
With the existing method (-DLFS_MALLOC=my_malloc), users often had to
use compiler options like -include, which are not very portable.
This change introduces another way to provide partial overrides of
lfs_util.h using a user-provided header.
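A sketch of what such an override header might look like (the header name is hypothetical; LFS_MALLOC follows the -DLFS_MALLOC example above, and LFS_FREE is assumed to be its counterpart):

```c
// lfs_user_defines.h -- hypothetical user-provided override header:
// define only what you want to override, everything else keeps the
// lfs_util.h defaults
#ifndef LFS_USER_DEFINES_H
#define LFS_USER_DEFINES_H

#include <stddef.h>

// user-provided allocator, declared here so the macros resolve
void *my_malloc(size_t size);
void my_free(void *p);

#define LFS_MALLOC my_malloc
#define LFS_FREE   my_free

#endif
```

The build then only needs to name this header once, rather than passing -include (or an equivalent flag) on every compiler invocation.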
Looks like cross-workflow downloads have finally been added to the
standard download-artifact action, so we might as well switch to it to
reduce dependencies.
dawidd6's version was also missing the merge-multiple feature which is
necessary to work around breaking changes in download-artifact's v4
bump.
Weirdly it needs GITHUB_TOKEN for some reason? Not sure why this
couldn't be implicit.
Turns out major versions break things.
Old behavior: Artifacts with same name are merged
New behavior: Artifacts with same name error
Using a pattern and merging on download should fix this at least on the
job-side. Though I do wonder if we'll start running into artifact limit
issues with the new way artifacts are handled...
- block_cycles is signed and should use PRId32
- flags is signed (which is a bit weird) and should be cast for %x
Unfortunately exactly what PRI* expands to is dependent on both the
compiler and the underlying architecture, so I don't think it's possible
for us to catch these mistakes with CI...
Found by stefano-zanotti
These were grouped up a bit better at one point, but that sort of
drifted as new projects were added:
1. Official repos (mainly littlefs-fuse)
2. Non-C reimplementations/wrappers
3. Utilities
4. Non-littlefs related projects
Eventually, maybe when these move out of the README.md, these categories
should probably be codified as actual headers or something.
Implemented by earlephilhower, mklittlefs is a command line interface
that seems to be used by the ESP8266 and RP2040 ecosystems. It deserves
a mention.
Also tweaked mklfs's description a bit.
Implemented by oyama, pico-littlefs-usb provides an easy interface to
littlefs by emulating a FAT12 filesystem over USB.
There are some tradeoffs to this, but being able to mount a littlefs
device without installing additional drivers is very nice. Maybe in the
future devices could provide both a FAT and raw endpoint for
easy/advanced filesystem access.
- Prefer "defaults to blablabla when zero" to hint that this is the
  default state both when explicitly set to zero and when implicitly
  zeroed thanks to C's initializers.
- Prefer "disk" when referencing something stored "on disk". Other terms
can quickly get ambiguous. Except maybe "block device"...
The block allocator is an area where inferred block counts (when
cfg.block_count=0) are more likely to cause problems, as shown by the
recent divide-by-zero-exhaustion issue.
Because multiple, out-of-date superblocks can exist in our superblock
chain, we need to be careful to make sure newer superblock entries
override older superblock entries.
If we see an older on-disk minor version in the superblock chain, we
were correctly overriding the on-disk minor version, but we were also
leaving the "needs superblock" bit set in our consistency state.
This isn't a hard error, but would lead to a superblock rewrite every
mount. The rewrite would make no progress, as the out-of-date version is
effectively immutable at this point, and just waste prog cycles.
This should fix that by clearing the "needs superblock" bit if we see a
newer on-disk minor version.
It used to be the case that the entire superblock entry could be found
at specific offsets, but this was only possible while the superblock
entry was immutable. Now that the superblock entry is very mutable
(block-count changes, lfs2.0 -> lfs2.1 version bumps, etc), the correct
superblock entry may end up later in the metadata log.
At the very least, the "littlefs" magic string is still immutable and
resides at a fixed offset, offset=8. This is arguably the most useful
fixed-offset item.
The documentation does not match the implementation here. The intended
behavior of superblock expansion was to duplicate the current superblock
entry into the new superblock:
.--------. .--------.
.|littlefs|->|littlefs|
||bs=4096 | ||bs=4096 |
||bc=256 | ||bc=256 |
||crc32 | ||root dir|
|| | ||crc32 |
|'--------' |'--------'
'--------' '--------'
The main benefit is that we can rely on the magic string "littlefs"
always residing in blocks 0x{0,1}, even if the superblock chain has
multiple superblocks.
The downside is that earlier superblocks in the superblock chain may
contain out-of-date configuration. This is a bit annoying, and risks
hard-to-reach bugs, but in theory shouldn't break anything as long as
the filesystem is aware of this.
Unfortunately this was lost at some point during refactoring in the
early v2-alpha work. A lot of code was moving around in this stage, so
it's a bit hard to track down the change and whether it was intentional. The
result is superblock expansion creates a valid linked-list of
superblocks, but only the last superblock contains a valid superblock
entry:
.--------. .--------.
.|crc32 |->|littlefs|
|| | ||bs=4096 |
|| | ||bc=256 |
|| | ||root dir|
|| | ||crc32 |
|'--------' |'--------'
'--------' '--------'
What's interesting is this isn't invalid as far as lfs_mount is
concerned. lfs_mount is happy as long as a superblock entry exists
anywhere in the superblock chain. This is good for compat flexibility,
but is the main reason this has gone unnoticed for so long.
---
With the benefit of more time to think about the problem, it may have
been more preferable to copy only the "littlefs" magic string and NOT
the superblock entry:
.--------. .--------.
.|littlefs|->|littlefs|
||crc32c | ||bs=4096 |
|| | ||bc=256 |
|| | ||root dir|
|| | ||crc32 |
|'--------' |'--------'
'--------' '--------'
This would allow for simple "littlefs" magic string checks without the
risks associated with out-of-date superblock entries.
Unfortunately the current implementation errors if it finds a "littlefs"
magic string without an associated superblock entry, so such a change
would not be compatible with old drivers.
---
This commit tweaks superblock expansion to duplicate the superblock
entry instead of simply moving it to the new superblock, and adds tests
over the magic string "littlefs" both before and after superblock
expansion.
Found by rojer and Nikola Kosturski
Unlike the heuristic based testing, exhaustive powerloss testing
effectively forks the current test and runs both the interrupted and
uninterrupted test states to completion. But emubd wasn't expecting
bd->cfg->powerloss_cb to return.
The fix here is to keep track of both the old+new out-of-order block
states and unrevert them if bd->cfg->powerloss_cb returns.
This may leak the temporary copy, but powerloss testing is already
inherently leaky.
Long story short, we aren't calling sync correctly in littlefs. This
fixes that.
Some forms of storage (mainly anything with an FTL: eMMC, SD, etc.) do
not guarantee a strict write order for writes to different blocks. In
theory this is what bd sync is for, to tell the bd when it is important
for the writes to be ordered.
Currently, littlefs calls bd sync after committing metadata. This is
useful as it ensures that user code can rely on lfs_file_sync for
ordering external side-effects.
But this is insufficient for handling storage with out-of-order writes.
Consider the simple case of a file with one data block:
1. lfs_file_write(blablabla) => writes data into a new data block
2. lfs_file_sync() => commits metadata to point to the new data block
But with out-of-order writes, the bd is free to reorder things such that
the metadata is updated _before_ the data is written. If we lose power,
that would be bad.
The solution to this is to call bd sync twice: Once before we commit
the metadata to tell the bd that these writes must be ordered, and once
after we commit the metadata to allow ordering with user code.
As a small optimization, we only call bd sync if the current file is not
inlined and has actually been modified (LFS_F_DIRTY). It's possible for
inlined files to be interleaved with writes to other files.
Found by MFaehling and alex31
Some forms of storage (mainly anything with an FTL: eMMC, SD, etc.) do
not guarantee a strict write order for writes to different blocks. It
would be good to test that this doesn't break littlefs.
This adds LFS_EMUBD_POWERLOSS_OOO to lfs_emubd, which tells lfs_emubd to
try to break any order-dependent code on powerloss.
The behavior right now is a bit simple, but does result in test
breakage:
1. Save the state of the block on first write (erase really) after
sync/init.
2. On powerloss, revert the first write to its original state.
This might be a bit confusing when debugging, since the block will
appear to time-travel, but doing anything fancier would make emubd quite
a bit more complicated.
You could also get a bit fancier with which/how many blocks to revert,
but this should at least be sufficient to make sure bd sync calls are in
the right place.
By "luck" the previous code somehow managed to not be broken, though it
was possible to traverse the same file twice in lfs_fs_traverse/size
(which is not an error).
The problem was an underlying assumption in lfs_dir_get that it would
never be called when the requested id is pending removal because of a
powerloss. The assumption was either:
1. lfs_dir_find would need to be called first to find the id, and it
would correctly toss out pending-rms with LFS_ERR_NOENT.
2. lfs_fs_mkconsistent would be implicitly called before any filesystem
traversals, cleaning up any pending-rms. This is at least true for
allocator scans.
But, as noted by andriyndev, both lfs_fs_traverse and lfs_fs_size can
call lfs_dir_get with a pending-rm id if called in a readonly context.
---
By "luck" this somehow manages to not break anything:
1. If the pending-rm id is >0, the id is decremented by 1 in
lfs_dir_get, returning the previous file entry during traversal. Worst
case, this reports any blocks owned by the previous file entry twice.
Note this is not an error, lfs_fs_traverse/size may return the same
block multiple times due to underlying copy-on-write structures.
2. More concerning, if the pending-rm id is 0, the id is decremented by
1 in lfs_dir_get and underflows. This underflow propagates into the
type field of the tag we are searching for, decrementing it from
0x200 (LFS_TYPE_STRUCT) to 0x1ff (LFS_TYPE_INTERNAL(UNUSED)).
Fortunately, since this happens to underflow to the INTERNAL tag
type, the type intended to never exist on disk, we should never find
a matching tag during our lfs_dir_get search. The result? lfs_dir_get
returns LFS_ERR_NOENT, which is actually what we want.
Also note that LFS_ERR_NOENT does not terminate the mdir traversal
early. If it did we would have missed files instead of duplicating
files, which is a slightly worse situation.
---
The fix is to add an explicit check for pending-rms in lfs_dir_get, just
like in lfs_dir_find. This avoids relying on unintended underflow
propagation, and should make the internal API behavior more consistent.
This is especially important for potential future gc extensions.
Found by andriyndev
So instead of lfs_file_rawopencfg, it's now lfs_file_opencfg_.
The "raw" prefix is annoying, doesn't really add meaning ("internal"
would have been better), and gets in the way of finding the relevant
function implementations.
I have been using trailing underscores for unimportant name collisions
in other codebases, and it seems to work well at reducing wasted brain
cycles naming things. Adopting it here avoids the need for "raw"
prefixes.
It's quite a bit like the use of prime symbols to resolve name
collisions in math, e.g. x' = x + 1, which is even supported in Haskell
and is quite nice there.
And the main benefit: Now if you search for the public API name, you get
the internal function first, which is probably what you care about.
Here is the exact script:
sed -i 's/_raw\([a-z0-9_]*\)\>/_\1_/g' $(git ls-tree -r HEAD --name-only | grep '.*\.c')
Inlined files live in metadata and decrease storage requirements, but
may be limited to improve metadata-related performance. This is
especially important given the current plague of metadata performance
issues.
Though decreasing inline_max may make metadata more dense and increase
block usage, so it's important to benchmark if optimizing for speed.
The underlying limits of inlined files haven't changed:
1. Inlined files need to fit in RAM, so <= cache_size
2. Inlined files need to fit in a single attr, so <= attr_max
3. Inlined files need to fit in 1/8 of a block to avoid metadata
overflow issues; this is after limiting by metadata_max,
so <= min(metadata_max, block_size)/8
By default, the largest possible inline_max is used. This preserves
backwards compatibility and is probably a good default for most use
cases.
This does have the awkward effect of requiring inline_max=-1 to
indicate disabled inlined files, but I don't think there's a good
way around this.