Commit Graph

465 Commits

4c9360020e Added ability to bump on-disk minor version
This just means a rewrite of the superblock entry with the new minor
version.

Interestingly, we don't need to rewrite the superblock entry until the
first write operation in the filesystem, an optimization already in use
for fixing orphans and in-flight moves.

To keep track of any outdated minor version found during lfs_mount, we
can carve out a bit from the reserved bits in our gstate. These are
currently used for a counter tracking the number of orphans in the
filesystem, but this is usually a very small number so this hopefully
won't be an issue.

In-device gstate tag:

  [--       32      --]
  [1|- 11 -| 10 |1| 9 ]
   ^----^-----^--^--^-- 1-bit has orphans
        '-----|--|--|-- 11-bit move type
              '--|--|-- 10-bit move id
                 '--|-- 1-bit needs superblock
                    '-- 9-bit orphan count
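
For illustration, unpacking this tag might look something like this
(field positions inferred from the diagram above; not necessarily the
exact internal encoding):

  #include <stdint.h>
  #include <stdbool.h>

  // illustrative accessors for the 32-bit gstate tag diagrammed above
  static inline bool     tag_hasorphans(uint32_t tag)      { return tag >> 31; }
  static inline uint32_t tag_movetype(uint32_t tag)        { return (tag >> 20) & 0x7ff; }
  static inline uint32_t tag_moveid(uint32_t tag)          { return (tag >> 10) & 0x3ff; }
  static inline bool     tag_needssuperblock(uint32_t tag) { return (tag >> 9) & 1; }
  static inline uint32_t tag_orphancount(uint32_t tag)     { return tag & 0x1ff; }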
2023-04-21 00:56:55 -05:00
b33a5b3f85 Fixed issue where seeking to end-of-directory returned LFS_ERR_INVAL
This was just an oversight. Seeking to the end of the directory should
not error, but instead restore the directory to the state where the next
read returns 0.

Note this matches the behavior of lfs_file_tell/lfs_file_seek.
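
The expected behavior, as a sketch (assuming a mounted lfs_t named lfs
and an existing directory "somedir"):

  lfs_dir_t dir;
  struct lfs_info info;
  lfs_dir_open(&lfs, &dir, "somedir");

  // read to the end of the directory
  while (lfs_dir_read(&lfs, &dir, &info) > 0);
  lfs_soff_t end = lfs_dir_tell(&lfs, &dir);

  // rewind, then seek directly back to the end-of-directory position;
  // this now returns 0 instead of LFS_ERR_INVAL...
  lfs_dir_rewind(&lfs, &dir);
  int err = lfs_dir_seek(&lfs, &dir, end);   // -> 0

  // ...and the next read returns 0, no error, just no more entries
  int res = lfs_dir_read(&lfs, &dir, &info); // -> 0

  lfs_dir_close(&lfs, &dir);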

Found by sosthene-nitrokey
2023-04-18 15:10:07 -05:00
b0a4a44e5b Added explicit assert for minimum block size of 128 bytes
There was already an assert for this, but because it included the
underlying equation for the requirement, it was too confusing for
users who had no prior knowledge of why the assert could trigger.

The math works out such that 128 bytes is a reasonable minimum
requirement, so I've added that number as an explicit assert.
Hopefully this makes this sort of situation easier to debug.

Note that this requirement would need to be increased to 512 bytes if
block addresses are ever increased to 64-bits. DESIGN.md goes into more
detail why this is.
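
The new check amounts to something like this (a sketch; the exact form
in lfs.c may differ):

  // a block must fit the worst-case number of CTZ skip-list pointers:
  // 32 pointers * 4 bytes = 128 bytes with 32-bit block addresses
  // (64 pointers * 8 bytes = 512 bytes if addresses were 64-bit)
  LFS_ASSERT(lfs->cfg->block_size >= 128);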
2023-04-17 19:58:09 -05:00
aae897ffd0 Added an assert for truthy-preserving bool conversions
This has caught enough people that an explicit assert is warranted.
How littlefs, a c99 project, should be integrated with c89 projects
is still an open question, but no one deserves to debug this sort of
undetected casting issue.
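
The added check boils down to a one-liner, roughly (a sketch of the
idea):

  // if bool has been redefined to a narrow integer type (a common c89
  // shim), a value with only high bits set truncates to 0 and falsely
  // reads as false; a real C99 bool preserves truthiness
  LFS_ASSERT((bool)0x80000000);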

Found by johnernberg and XinStellaris
2023-04-17 19:19:42 -05:00
e57402c8e9 Added ability to revert to inline file in lfs_file_truncate
Before, once converted to a CTZ skip-list, a file would remain a CTZ
skip-list even if truncated back to a size that could be inlined.

This was just a shortcut in implementation. And since the fix for boundary
truncates needed special handling for size==0, it made sense to extend
this special condition to allow reverting to inline files.

---

The only case I can think of where reverting to an inline file would be
detrimental is a readonly file, where you would otherwise not need to
pay the metadata overhead. But as a tradeoff, inlining the file frees up
the block it was on, so it's unclear if this really is a net loss.

If the truncate is followed by a write, reverting to an inline file will
always be beneficial. We assume writes will change the data, so in the
non-inlined case there's no way to avoid copying the underlying block,
even if we assume padding issues are solved.
2023-04-17 18:18:06 -05:00
6dc18c38c1 Fixed block-boundary truncate issue
There has been a bug in the filesystem for a while where truncating to a
block boundary suffers from an off-by-one mistake that corrupts the
internal representation of the CTZ skip-list.

This mostly appears when the file_size == block_size, as file_size >
block_size includes CTZ skip-list metadata, so the underlying block
boundaries appear at slightly different offsets.

---

The reason for the off-by-one issue is a nuance in lfs_ctz_find that we
sort of abuse to get two different behaviors.

Consider the situation where this bug occurs:

   block 0     block 1
  .--------.  .--------.
  | abcdef |<-| {ptr0} |
  | ghijkl |  | yzabcd |
  | mnopqr |  |        |
  | stuvwx |  |        |
  '--------'  '--------'

With these 24-byte blocks, there's an ambiguity if we want to point to
offset 24. We could point before the block boundary, or we could point
after the block boundary:

Before:

   block 0     block 1
  .--------.  .--------.
  | abcdef |<-| {ptr0} |
  | ghijkl |  | yzabcd |
  | mnopqr |  |        |
  | stuvwx |  |        |
  '-------^'  '--------'
          '-- off=24 is here

After:

   block 0     block 1
  .--------.  .--------.
  | abcdef |<-| {ptr0} |
  | ghijkl |  | yzabcd |
  | mnopqr |  | ^      |
  | stuvwx |  | |      |
  '--------'  '-|------'
                '-- off=24 is here

Which of these two offsets we want depends on the context. We want the
offset to be conservative if it represents a size, but eager if it is
being used to prepare a block for writing.

The workaround/hack is to prefer the eager offset, after the block boundary,
but use `size-1` as the argument if we need the conservative offset.

This finds the correct block, but is off-by-one in the calculated
block-offset. Fortunately we happen to not use the block-offset in the
places we need this workaround/hack.
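
In effect (a simplified sketch that ignores the skip-list pointers,
which shift real offsets slightly):

  uint32_t block_size = 24;

  // eager: past the boundary, ready to prepare the next block for writing
  uint32_t off = 24;
  uint32_t eager_block = off / block_size;    // -> block 1
  uint32_t eager_off   = off % block_size;    // -> offset 0

  // conservative: when the offset represents a size, pass size-1
  // (guarding against underflow at size == 0) to stay before the boundary
  uint32_t size = 24;
  uint32_t cons_block = (size > 0 ? size-1 : 0) / block_size;  // -> block 0
  // note the resulting in-block offset is now off-by-one and must go unused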

---

To get back to the bug, the wrong mode of lfs_ctz_find was used in
lfs_file_truncate, leading to internal corruption of the CTZ skip-list.

The correct behavior is size-1, with care to avoid underflow.

Also I've tweaked the code to make it clear the calculated block-offset
goes unused in these situations.

Thanks to ghost, ajaybhargav, and others for reporting the issue,
colin-foster-advantage for a reproducible test case, and rvanschoren and
hgspbs for the initial solution.
2023-04-17 17:49:57 -05:00
ba1c76435a Fixed issue where deorphan could get stuck circling between two half-orphans
This of course should never happen normally; two half-orphans require
two parents, which is disallowed in littlefs for this reason. But it can
happen if there is an outdated half-orphan later in the metadata
linked-list. The two half-orphans can cause the deorphan step to get
stuck, constantly "fixing" the first half-orphan before it has a chance
to remove the problematic, outdated half-orphan later in the list.

The solution here is to do a full check for half-orphans before
restarting the half-orphan loop. This strategy has the potential to
visit more metadata blocks unnecessarily, but avoids getting stuck when
removing a later half-orphan is what allows an earlier half-orphan to
resolve itself.

Found with heuristic powerloss testing with test_relocations_reentrant_renames
after 192 nested powerlosses.
2022-12-17 12:42:05 -06:00
d1b254da2c Reverted removal of 1-bit counter threaded through tags
Initially I thought the fcrc would be sufficient for all of the
end-of-commit context, since indicating that there is a new commit is as
simple as invalidating the fcrc. But it turns out there are cases that
make this impossible.

The surprising, and actually common, case is that of an fcrc that
will end up containing a full commit. This is common as soon as the
prog_size is big, as small commits are padded to the prog_size at
minimum.

  .------------------. \
  |     metadata     | |
  |                  | |
  |                  | +-.
  |------------------| | |
  |  forward CRC ------------.
  |------------------| / |   |
  |   commit CRC    -----'   |
  |------------------|       |
  |     padding      |       |
  |                  |       |
  |------------------| \   \ |
  |     metadata     | |   | |
  |                  | +-. | |
  |                  | | | +-'
  |------------------| / | |
  |   commit CRC --------' |
  |------------------|     |
  |                  |     /
  '------------------'

When the commit + crc is all contained in the fcrc, something silly
happens with the math behind crcs. Everything in the commit gets
canceled out:

  crc(m) = m(x) x^(|P|-1) mod P(x)

  m ++ crc(m) = m(x) x^(|P|-1) + (m(x) x^(|P|-1) mod P(x))

  crc(m ++ crc(m)) = (m(x) x^(|P|-1) + (m(x) x^(|P|-1) mod P(x))) x^(|P|-1) mod P(x)

  crc(m ++ crc(m)) = (m(x) x^(|P|-1) + m(x) x^(|P|-1)) x^(|P|-1) mod P(x)

  crc(m ++ crc(m)) = 0 * x^(|P|-1) mod P(x)

This is the reason the crc of a message + naive crc is zero. Even with an
initializer/bit-fiddling, the crc of the whole commit ends up as some
constant.

So no manipulation of the commit can change the fcrc...
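
This cancellation is easy to demonstrate outside of littlefs, here with
the common reflected CRC-32 (lfs_crc uses a slightly different
convention, but the same cancellation applies):

  #include <stdint.h>
  #include <stdio.h>

  // standalone demo: bitwise reflected CRC-32
  // (poly 0xedb88320, init/final-xor 0xffffffff)
  static uint32_t crc32(const uint8_t *data, size_t size) {
      uint32_t crc = 0xffffffff;
      for (size_t i = 0; i < size; i++) {
          crc ^= data[i];
          for (int b = 0; b < 8; b++) {
              crc = (crc >> 1) ^ (0xedb88320 & -(crc & 1));
          }
      }
      return ~crc;
  }

  int main(void) {
      uint8_t commit[16] = "any old commit!";
      // append the commit's crc, little-endian, like a commit crc would be
      uint32_t crc = crc32(commit, 12);
      for (int i = 0; i < 4; i++) {
          commit[12+i] = (uint8_t)(crc >> (8*i));
      }
      // the crc of message + crc is always the same constant
      // (0x2144df1c for this variant), no matter the message
      printf("%08x\n", crc32(commit, 16));
      return 0;
  }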

But even if this did work, or we changed this scheme to use two
different checksums, it would still require calculating the fcrc of
the whole commit to know if we need to tweak the first bit to invalidate
the unlikely-but-problematic case where we happen to match the fcrc. This
would add a large amount of complexity to the commit code.

It's much simpler and cheaper to keep the 1-bit counter in the tag, even
if it adds another moving part to the system.
2022-12-17 12:42:05 -06:00
2f26966710 Continued implementation of forward-crcs, adopted new test runners
This fixes most of the remaining bugs (except one with multiple padding
commits + noop erases in test_badblocks), with some other code tweaks.

The biggest change was dropping reliance on end-of-block commits to know
when to stop parsing commits. We can just continue to parse tags and
rely on the crc to catch bad commits, avoiding a backwards-compatibility
hiccup. So no new commit tag.

Also renamed nprogcrc -> fcrc and commitcrc -> ccrc and made naming in
the code a bit more consistent.
2022-12-17 12:42:05 -06:00
b4091c6871 Switched to separate-tag encoding of forward-looking CRCs
Previously forward-looking CRCs were just two new CRC types, one for
commits with forward-looking CRCs, one without. These both contained the
CRC needed to complete the current commit (note that the commit CRC
must come last!).

         [--   32   --|--   32   --|--   32   --|--   32   --]
with:    [  crc3 tag  | nprog size |  nprog crc | commit crc ]
without: [  crc2 tag  | commit crc ]

This meant there had to be several checks for the two possible structure
sizes, cluttering up the implementation.

         [--   32   --|--   32   --|--   32   --|--   32   --|--   32   --]
with:    [nprogcrc tag| nprog size |  nprog crc | commit tag | commit crc ]
without: [ commit tag | commit crc ]

But we already have a mechanism for storing optional metadata: the
different metadata tags! So why not use a separate tag for the
forward-looking CRC, separate from the commit CRC?

I wasn't sure this would actually help that much; there are still
necessary conditions for whether or not a forward-looking CRC is
present. But in the end it simplified the code quite nicely, and
resulted in a ~200 byte code-cost saving.
2022-12-17 12:42:05 -06:00
91ad673c45 Cleaned up a few additional commit corner cases
- General cleanup from integration, including cleaning up some older
  commit code
- Partial-prog tests do not make sense when prog_size == block_size
  (there can't be partial-progs!)
- Fixed signed-comparison issue in modified filebd
2022-12-17 12:42:05 -06:00
52dd83096b Initial implementation of forward-looking erase-state CRCs
This change is necessary to handle out-of-order writes found by pjsg's
fuzzing work.

The problem is that it is possible for (non-NOR) block devices to write
pages in any order, or to even write random data in the case of a
power-loss. This breaks littlefs's use of the first bit in a page to
indicate the erase-state.

pjsg notes this behavior is documented in the W25Q here:
https://community.cypress.com/docs/DOC-10507

---

The basic idea here is to CRC the next page, and use this "erase-state CRC" to
check if the next page is erased and ready to accept programs.

.------------------. \   commit
|     metadata     | |
|                  | +---.
|                  | |   |
|------------------| |   |
| erase-state CRC -----. |
|------------------| | | |
|   commit CRC    ---|-|-'
|------------------| / |
|     padding      |   | padding (doesn't need CRC)
|                  |   |
|------------------| \ | next prog
|     erased?      | +-'
|        |         | |
|        v         | /
|                  |
|                  |
'------------------'

This is made a bit annoying since littlefs doesn't actually store the
page size (prog_size) in the superblock, since it doesn't need to know
the size for any other operation. We can work around this by storing
both the CRC and size of the next page when necessary.

Another interesting note is that we don't need to store any bit-tweaking
information, since we read the next page every time we would need to
know how to clobber the erase-state CRC. And since we only read
prog_size, this works really well with our caching, since the caches
must be a multiple of prog_size.
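
The resulting check is roughly the following (an illustrative sketch;
the names and crc convention here are placeholders, not lfs.c's
internals):

  #include <stdbool.h>
  #include <stdint.h>
  #include <stddef.h>

  // does the next page still match its stored erase-state crc? if yes,
  // it's still erased and safe to program; if no, either a later commit
  // already lives there or a power-loss left it partially programmed
  static bool page_is_erased(const uint8_t *page, size_t prog_size,
          uint32_t stored_fcrc) {
      uint32_t crc = 0xffffffff;
      for (size_t i = 0; i < prog_size; i++) {
          crc ^= page[i];
          for (int b = 0; b < 8; b++) {
              crc = (crc >> 1) ^ (0xedb88320 & -(crc & 1));
          }
      }
      return (~crc) == stored_fcrc;
  }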

This also brings back the internal lfs_bd_crc function, in which we can
use some optimizations added to lfs_bd_cmp.

Needs some cleanup but the idea is passing most relevant tests.
2022-12-17 12:42:05 -06:00
1278ec1d08 Adopted Brent's algorithm for cycle detection
The previous cycle detection algorithm (a naive check against the largest
possible tail list) is simple and gets the job done, but has the potential to
take a very long time on disks with many blocks. Brent's algorithm, on
the other hand, takes at most 2x the number of blocks in the tail list.

Originally naive cycle detection was chosen over Floyd's algorithm to
avoid the extra complexity of managing two desynced traversals for every
traversal of the tail list, but Brent's algorithm is very well suited for our
use case, requiring only we keep track of an additional mdir pointer on the
stack as we traverse.
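
For reference, the shape of Brent's algorithm on a singly-linked list
(a generic sketch, not the mdir-specific code):

  #include <stdbool.h>
  #include <stddef.h>

  struct node { struct node *next; };

  // Brent's cycle detection: the stationary pointer periodically
  // teleports to the moving one, doubling the search window each time;
  // at most ~2n steps, and only one extra pointer of state on the stack
  static bool has_cycle(struct node *head) {
      struct node *tortoise = head;
      struct node *hare = head;
      size_t power = 1, lambda = 0;
      while (hare) {
          if (lambda == power) {
              tortoise = hare;  // teleport and double the window
              power *= 2;
              lambda = 0;
          }
          hare = hare->next;
          lambda += 1;
          if (hare == tortoise) {
              return true;  // the hare caught the tortoise: cycle
          }
      }
      return false;  // fell off the end of the list: no cycle
  }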
2022-12-17 12:41:39 -06:00
0c781dd822 Merge remote-tracking branch 'origin/master' into test-and-bench-runners 2022-12-06 23:08:53 -06:00
0b11ce03b7 Fixed incorrect calculation of extra space needed in mdir blocks
Despite the comment being correct, the calculation is somehow off by a word,
meaning something must have been missed. Maybe the space for the move-delete
was missed since that was added later to avoid losing move-deletes during
relocations.

This was found with the new exhaustive power-loss searching added to the
test framework with -P2. The exact failure was
test_dirs_many_reentrant:2gg2cb:k4o6. This must be the first test that
ends up with all possible extra state in a single mdir block.
2022-11-28 12:51:18 -06:00
eba5553314 Fixed hidden orphans by separating deorphan search into two passes
This happens in rare situations where there is a failed mdir relocation,
interrupted by a power-loss, containing the destination of a directory
rename operation, where the directory being renamed preceded the
relocating mdir in the mdir tail-list. This requires a previous
directory rename to have created a cycle at some point.

If this happens, it's possible for the half-orphan to contain the only
reference to the renamed directory. Since half-orphans contain outdated
state when viewed through the mdir tail-list, the renamed directory
appears to be a full-orphan until we fix the relocating half-orphan.
This causes littlefs to incorrectly remove the renamed directory from
the mdir tail-list, causing catastrophic problems down the line.

The source of the problem is that the two different types of orphans
really operate on two different levels of abstraction: half-orphans fix
failed mdir commits, while full-orphans fix directory removes/renames.
Conflating the two leads to situations where we attempt to fix assumed
problems about the directory tree before we have fixed problems with the
mdir state.

The fix here is to separate out the deorphan search into two passes:
one to fix half-orphans and correct any mdir-commits, restoring the
mdirs and gstate to a known good state, and a second to fix failed
removes/renames.

---

This was found with the -Plinear heuristic powerloss testing, which now
runs on more geometries. The failing case was:

  test_relocations_reentrant_renames:112gg261dk1e3f3:123456789abcdefg1h1i1j1k1
  l1m1n1o1p1q1r1s1t1u1v1g2h2i2j2k2l2m2n2o2p2q2r2s2t2

Also fixed/tweaked some parts of the test framework as a part of finding
this bug:

- Fixed off-by-one in exhaustive powerloss state encoding.

- Added --gdb-powerloss-before and --gdb-powerloss-after to help debug
  state changes through a failing powerloss, maybe this should be
  expanded to any arbitrary powerloss number in the future.

- Added lfs_emubd_crc and lfs_emubd_bdcrc to get block/bd crcs for quick
  state comparisons while debugging.

- Fixed bd read/prog/erase counts not being copied during exhaustive
  powerloss testing.

- Fixed small typo in lfs_emubd trace.
2022-11-28 12:51:18 -06:00
6a53d76e90 Merge pull request #744 from littlefs-project/fix-fetchmatch-err-path
Fix lfs_dir_fetchmatch not propagating bd errors correctly in one case
2022-11-10 10:32:30 -06:00
dfa8abdd2c Merge pull request #740 from cbiffle/fix-invalid-block-size-reporting
Fix invalid block size reporting.
2022-11-10 10:31:58 -06:00
d8c96abf92 Merge pull request #724 from littlefs-project/clang-lint
Fix self-assign warnings discovered by clang, remove some warning flags
2022-11-10 10:31:42 -06:00
d08f949afd Fixed lfs_dir_fetchmatch not propagating bd errors correctly in one case
Found by cbiffle
2022-11-04 13:45:13 -05:00
eb9f4d5d7e Fix invalid block size reporting.
This boilerplate got copied from the stanza just above and incompletely
edited.
2022-10-09 17:45:14 -07:00
47914b925f Fixed self-assign warnings discovered by clang 2022-09-07 12:46:29 -05:00
6c720dc2bb Fix unused function warning with LFS_NO_MALLOC 2022-04-25 12:12:41 +02:00
abbfe8e92e Reduced lfs_dir_traverse's explicit stack to 3 frames
This is possible thanks to invoxiaamo's optimization of compacting
renames to avoid the O(n^3) nested filters. Not only does this
significantly reduce the runtime cost of that operation, but it
reduces the maximum possible depth of recursion to 3 frames.

Deepest lfs_dir_traverse before:

traverse with commit
'-> traverse with filter
    '-> traverse with move
        '-> traverse with filter

Deepest lfs_dir_traverse after:

traverse with commit
'-> traverse with move
    '-> traverse with filter
2022-04-10 23:27:49 -05:00
c60c977c25 Merge pull request #658 from littlefs-project/no-recursion
Restructure littlefs to not use recursion, measure stack usage
2022-04-10 23:23:39 -05:00
3ce64d1ac0 Merge pull request #666 from invoxiaamo/rename-opti2
Optimization of the rename case.
2022-04-10 22:02:04 -05:00
0ced3623d4 Merge pull request #657 from littlefs-project/copyright-update
Update copyright notice
2022-04-10 21:59:27 -05:00
f28ac3ea7d Merge pull request #638 from lmapii/master
Removed invalid overwrite for return value.
2022-04-10 21:52:48 -05:00
a94fbda1cd Merge pull request #632 from robekras/patch-1
Fix lfs_file_rawseek performance issue
2022-04-10 21:52:27 -05:00
cc025653ed Merge pull request #630 from Johnxjj/dev-johnxjj
add the limit, the cursor cannot be set to a negative number
2022-04-10 14:44:47 -05:00
bfb9bd2483 Merge pull request #614 from nnayo/fix_no_malloc_2
don't use lfs_file_open() when LFS_NO_MALLOC is set
2022-04-10 14:44:33 -05:00
f40b854ab5 Merge pull request #584 from colin-foster-in-advantage/block_size_mount_fail
Fail mount when the block size changes
2022-04-10 14:44:24 -05:00
c2fa1bb7df Optimization of the rename case.
Rename can be VERY time consuming. One of the reasons is the 4-level
recursion depth of lfs_dir_traverse() seen if a compaction happens
during the rename.

lfs_dir_compact()
  size computation
    [1] lfs_dir_traverse(cb=lfs_dir_commit_size)
         - do 'duplicates and tag update'
       [2] lfs_dir_traverse(cb=lfs_dir_traverse_filter, data=tag[1])
           - Reaching a LFS_FROM_MOVE tag (here)
         [3] lfs_dir_traverse(cb=lfs_dir_traverse_filter, data=tag[1]) <= on 'from' dir
             - do 'duplicates and tag update'
           [4] lfs_dir_traverse(cb=lfs_dir_traverse_filter, data=tag[3])
  followed by the compaction itself:
    [1] lfs_dir_traverse(cb=lfs_dir_commit_commit)
         - do 'duplicates and tag update'
       [2] lfs_dir_traverse(cb=lfs_dir_traverse_filter, data=tag[1])
           - Reaching a LFS_FROM_MOVE tag (here)
         [3] lfs_dir_traverse(cb=lfs_dir_traverse_filter, data=tag[1]) <= on 'from' dir
             - do 'duplicates and tag update'
           [4] lfs_dir_traverse(cb=lfs_dir_traverse_filter, data=tag[3])

Yet, analysis shows that levels [3] and [4] don't perform any work
if the callback is lfs_dir_traverse_filter...

A practical example:

- format and mount a 4KB block FS
- create 100 files of 256 Bytes named "/dummy_%d"
- create a 1024 Byte file "/test"
- rename "/test" "/test_rename"
- create a 1024 Byte file "/test"
- rename "/test" "/test_rename"
This triggers a compaction where lfs_dir_traverse was called 148393 times,
generating 25e6+ lfs_bd_read calls (~100 MB+ of data).

With the optimization, lfs_dir_traverse is now called 3248 times,
generating 589e3 lfs_bd_read calls (~2.3 MB of data).

=> 43x improvement...
2022-04-10 13:12:45 -05:00
3b62ec1c47 Updated error handling for NOSPC 2022-04-10 13:00:13 -05:00
b898977fd8 Set the limit, the cursor cannot be set to a negative number 2022-04-10 12:57:42 -05:00
cf274e6ec6 Squash of CR changes
- nit: Moving brace to end of if statement line for consistency
- mount: add more debug info per CR
- Fix compiler error from extra parentheses
- Fix superblock typo
2022-04-10 12:53:33 -05:00
425dc810a5 Modified robekras's optimization to avoid flush for all seeks in cache
The basic idea is simple, if we seek to a position in the currently
loaded cache, don't flush the cache. Notably this ensures that seek is
always as fast or faster than just reading the data.

This is a bit tricky since we need to check that our new block and
offset match the cache; fortunately we can skip the block check by
reevaluating the block index for both the current and new positions.

Note this only works when reading; for writing we need to always flush
the cache, or else we will lose the pending write data.
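
The check is roughly the following (an illustrative sketch; block_index
stands in for lfs_ctz_index, which also accounts for the skip-list
pointers):

  // if we're only reading, and the new offset lands in the same block
  // and within the currently loaded cache, just move the position
  if ((file->flags & LFS_F_READING)
          && block_index(lfs, file->pos) == block_index(lfs, off)
          && off % lfs->cfg->block_size >= file->cache.off
          && off % lfs->cfg->block_size
              < file->cache.off + file->cache.size) {
      file->pos = off;
      return off;
  }
  // otherwise fall back to flushing and rereading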
2022-04-10 12:46:51 -05:00
a6f01b7d6e Update lfs.c
This should fix the performance issue when a new seek position belongs
to the currently cached data, avoiding unnecessary rereads of file data.
2022-04-09 02:12:18 -05:00
9c7e232086 Fixed missing definition of lfs_cache_drop in readonly mode
Interestingly this was introduced by two different PRs which were not tested
together until pre-release testing:

- Fix lfs_file_seek doesn't update cache properties correctly
- Fix compiler warnings when LFS_READONLY defined
2022-03-21 20:29:04 -05:00
c676bcee4c Merge branch 'bf_lfs_file_seek_readonly' into HEAD 2022-03-20 23:16:15 -05:00
03f088b92c Tweaked lfs_file_flush to still flush caches when build under LFS_READONLY
A slight variation on the fix from ondrap
2022-03-20 23:14:34 -05:00
e955b9f65d Fix lfs_file_seek doesn't update cache properties correctly in readonly mode. Invalidate cache to fix it. 2022-03-20 23:10:11 -05:00
5801169348 Merge pull request #635 from mikee47/fix/spelling-errors
Fix spelling errors
2022-03-20 23:09:23 -05:00
2d6f4ead13 Merge pull request #620 from XinStellaris/master
fix bug: lfs_alloc will alloc one block repeatedly in multiple split
2022-03-20 23:09:04 -05:00
2db5dc80c2 Update copyright notice 2022-03-20 23:03:52 -05:00
1363c9f9d4 fix bug: lfs_alloc will alloc one block repeatedly in multiple split
BUG CASE: Assume there are 6 blocks in littlefs, blocks 0,1,2,3 already allocated, and 0 has a tail pair of {2, 3}. Now we try to write more into 0.

When writing to block 0, we will split (FIRST SPLIT), thus allocating blocks 4 and 5. Up to now, everything is as expected.

Then we will try to commit in block 4, during which split (SECOND SPLIT) is triggered again (in our case, some files are large, some are small; one split may not be enough). Still as expected.

BUG happens when we try to alloc a new block pair for the second split: as the lookahead buffer reaches the end, a new lookahead buffer is generated from flash content, and blocks 4, 5 appear as unused blocks in the new lookahead buffer because they are not programmed yet. HOWEVER, blocks 4, 5 should be occupied by the first split! The result is that blocks 4, 5 are allocated again (this is where things go wrong).

commit ce2c01f results in this bug. In that commit, a lfs_alloc_ack was inserted in lfs_dir_split, which causes split to reset lfs->free.ack to the block count.

In summary, this problem exists after 2.1.3.

Solution: don't call lfs_alloc_ack in lfs_dir_split.
2022-03-20 20:53:48 -05:00
8109f28266 Removed recursion from lfs_dir_traverse
lfs_dir_traverse is a bit unpleasant in that it is inherently a
recursive function, but with a strict bound of 4 calls (commit -> filter ->
move -> filter), and efforts to unroll the recursion come at a
significant code cost.

It turns out the best solution I've found so far is to simply create an
explicit stack with an explicit bound of 4 calls (or more accurately,
3 pushed frames).
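
The shape of the transformation, as a generic toy sketch (the real
frames carry tag/buffer/filter state rather than these toy fields):

  #include <stdio.h>

  // toy bounded recursion turned into a loop: each level visits a few
  // entries and may "recurse" one level deeper, to at most 4 levels
  // (commit -> filter -> move -> filter), so only 3 frames are pushed
  #define MAX_DEPTH 4

  struct frame {
      int depth;  // level to resume at
      int pos;    // entry to resume from
  };

  void traverse(void) {
      struct frame stack[MAX_DEPTH-1];
      unsigned sp = 0;
      int depth = 0, pos = 0;

      for (;;) {
          if (pos < 2) {
              printf("visit depth=%d pos=%d\n", depth, pos);
              if (depth+1 < MAX_DEPTH) {
                  // "recurse": save where to resume, descend a level
                  stack[sp++] = (struct frame){depth, pos+1};
                  depth += 1;
                  pos = 0;
                  continue;
              }
              pos += 1;
          } else if (sp > 0) {
              // "return": pop the saved frame and resume the caller
              sp -= 1;
              depth = stack[sp].depth;
              pos = stack[sp].pos;
          } else {
              return;
          }
      }
  }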

---

This actually highlights one of the bigger flaws in littlefs right now,
which is that this function, lfs_dir_traverse, takes O(n^2) disk reads
to traverse.

Note that LFS_FROM_MOVE can only occur once per commit, which is why
this code is O(n^2) and not O(n^4).
2022-03-20 04:27:54 -05:00
fedf646c79 Removed recursion in file read/writes
This mostly just required separate functions for "lfs_file_rawwrite" and
"lfs_file_flushedwrite", since lfs_file_flush recursively invokes
lfs_file_rawread and lfs_file_rawwrite.

This comes at a code cost, but gives us bounded and measurable RAM usage
on this code path.
2022-03-20 04:25:24 -05:00
84da4c0b1a Removed recursion from commit/relocate code path
lfs_dir_commit originally relied heavily on tail-recursion, though at
least one path (through relocations) was not tail-recursive, and could
cause unbounded stack usage in extreme cases of bad blocks. (Keep in
mind even extreme cases of bad blocks should be in scope for littlefs.)

In order to remove recursion from this code path, several changes were
required:

- The lfs_dir_compact logic had to be somewhat inverted. Instead of
  first compacting and then resolving issues such as relocations and
  orphans, the overarching lfs_dir_commit now contains a state-machine
  which after committing or compacting handles the extra changes to the
  filesystem in a single, non-recursive loop

- Instead of fixing all relocations recursively, >1 relocation requires
  deferring to a full deorphan step. This step is unfortunately an
  additional n^2 process. It also required some changes to lfs_deorphan
  in order to ignore intentional orphans created as an intermediary in
  lfs_mkdir. Maybe in the future we should remove these.

- Tail recursion normally found in lfs_fs_deorphan had to be rewritten
  as a loop which restarts any time a new commit causes a relocation.
  This does show that the algorithm may not terminate, but only if every
  block is bad, which will eventually cause littlefs to run out of
  blocks to write to.
2022-03-20 04:24:44 -05:00
e334983767 don't use lfs_file_open() when LFS_NO_MALLOC is set 2022-02-18 20:57:20 -06:00