This is the same as the implicit Clang => NO_GCC behavior introduced by
yamt, but with an explicit variable that users of other, non-GCC
compilers can assign:
$ NO_GCC=1 make
Note, stack measurements are currently GCC specific:
$ NO_GCC=1 make stack
... snip ...
FileNotFoundError: [Errno 2] No such file or directory: 'lfs.ci'
make: *** [Makefile:494: lfs.stack.csv] Error 1
This warning is useful for catching the easy mistake of missing the
keyword static on functions intended to be internal-only.
Missing the static keyword risks symbol pollution and misses out on
potential compiler optimizations.
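For example, a purely internal helper like the following (the name is
made up for illustration) is what the warning catches: it has no
declaration in any header, so it was almost certainly meant to be
static:

#include <stdint.h>

// meant to be internal-only, but without the static keyword it becomes
// an exported symbol, and the compiler can no longer assume it sees
// every caller when optimizing
uint32_t lfs_internal_helper(uint32_t a, uint32_t b) {
    return (a > b) ? a - b : b - a;
}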
This is an interesting warning: while useful for libraries such as
littlefs, it's perfectly valid C to not predeclare all functions, and
this is common in final application binaries.
Relatedly, this warning is re-disabled for the test/bench runner. There
may be a better way to organize the CFLAGS, maybe into separate
LIB/RUNNER CFLAGS, but I'll leave this to future work if our CFLAGS grow
more complicated.
This was motivated by non-static internal-only functions leaking into a
release. Found and fixed by DvdGiessen.
This is a bit tricky since we need two different versions of littlefs in
order to test for most compatibility concerns.
Fortunately we already have scripts/changeprefix.py for version-specific
symbols, so it's not that hard to link in the previous version of
littlefs in CI as a separate set of symbols, "lfsp_" in this case.
So that we can at least run the compatibility tests locally, I've added
an ifdef against the expected define "LFSP" to define a set of aliases
mapping "lfsp_" symbols to "lfs_" symbols. This is manual at the moment,
and a bit hacky, but gets the job done.
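A minimal sketch of what the alias block can look like, with only a few
representative symbols shown (the real list in CI covers the full API):

#ifdef LFSP
// map the prefixed previous-version symbols back onto the normal lfs_
// names, so the compatibility tests can also build against a single,
// local littlefs
#define lfsp_t        lfs_t
#define lfsp_format   lfs_format
#define lfsp_mount    lfs_mount
#define lfsp_unmount  lfs_unmount
#endif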
---
Also changed BUILDDIR creation to derive subdirectories from a few
Makefile variables. This makes the subdirectories less manual and more
flexible for things like LFSP. Note this wasn't possible until BUILDDIR
was changed to default to "." when omitted.
- Added support for negative numbers in the leb16 encoding with an
optional 'w' prefix.
- Changed prettyasserts.py rule to .a.c => .c, allowing other .a.c files
in the future.
- Updated .gitignore with missing generated files (tags, .csv).
- Removed suite-namespacing of test symbols, these are no longer needed.
- Changed test define overrides to have higher priority than explicit
defines encoded in test ids. So:
./runners/bench_runner bench_dir_open:0f1g12gg2b8c8dgg4e0 -DREAD_SIZE=16
Behaves as expected.
Otherwise it's not easy to experiment with known failing test cases.
- Fixed issue where the -b flag ignored explicit test/bench ids.
- Renamed struct_.py -> structs.py again.
- Removed lfs.csv, instead preferring script-specific csv files.
- Added *-diff make rules for quick comparison against a previous
result; results are now implicitly written on each run.
For example, `make code` creates lfs.code.csv and prints the summary, which
can be followed by `make code-diff` to compare changes against the saved
lfs.code.csv without overwriting.
- Added nargs=? support for -s and -S, now uses a per-result _sort
attribute to decide sort if fields are unspecified.
Two flags we introduced, -fcallgraph-info=su for stack analysis and
-ftrack-macro-expansion=0 for cleaner prettyasserts.py warnings, are
unfortunately not supported in Clang.
The override vars in the Makefile meant it wasn't actually possible to
remove these flags for Clang testing, so this commit changes those vars
to normal, non-overriding vars. This means `make CFLAGS=-Werror` and
`CFLAGS=-Werror make` behave _very_ differently, but this is just an
unfortunate quirk of make that needs to be worked around.
This is the name expected if you are actually linking against littlefs.
Its use as the default build rule is mostly for linting. Most uses of
littlefs likely compile directly with the sources (it is only several K
of code), or use their own build system, and the previous name would have made
linking a bit of a challenge.
Still, this might cause some breakage for someone...
- Changed --(tool)-tool to --(tool)-path in scripts, this seems to be
a more common name for this sort of flag.
- Changed BUILDDIR to not have implicit slash, makes Makefile internals
a bit more readable.
- Fixed some outdated names hidden in less-often used ifdefs.
Based loosely on Linux's perf tool, perfbd.py uses trace output with
backtraces to aggregate and show the block device usage of all functions
in a program, propagating block device operation cost up the backtrace
for each operation.
This, combined with --trace-period and --trace-freq for
sampling/filtering trace events, allows the bench-runner to very
efficiently record the general cost of block device operations with very
little overhead.
Adopted this as the default side-effect of make bench, replacing
cycle-based performance measurements which are less important for
littlefs.
This adds -P/--propagate and -Z/--depth to perf.py for showing recursive
results, making it easy to narrow down on where spikes in performance
come from.
This ended up being a bit different from stack.py's recursive results,
as we end up with different (diminishing) numbers as we descend.
This provides 2 things:
1. perf integration with the bench/test runners - This is a bit tricky
with perf as it doesn't have its own way to combine perf measurements
across multiple processes. perf.py works around this by writing
everything to a zip file, using flock to synchronize. As a plus, free
compression!
2. Parsing and presentation of perf results in a format consistent with
the other CSV-based tools. This actually ran into a surprising number of
issues:
- We need to process raw events to get the information we want; this
ends up being a lot of data (~16MiB at 100Hz uncompressed), so we
parallelize the parsing of each decompressed perf file.
- perf reports raw addresses post-ASLR. It does provide sym+off which
is very useful, but to find the source of static functions we need to
reverse the ASLR by finding the delta that produces the best
symbol<->addr matches.
- This isn't related to perf, but decoding dwarf line-numbers is
really complicated. You basically need to write a tiny VM.
This also turns on perf measurement by default for the bench-runner, but at a
low frequency (100 Hz). This can be decreased or removed in the future
if it causes any slowdown.
The main change is requiring field names for -b/-f/-s/-S; this
is a bit more powerful and supports hidden extra fields, but
can require a bit more typing in some cases.
- Changed multi-field flags to action=append instead of comma-separated.
- Dropped short-names for geometries/powerlosses
- Renamed -Pexponential -> -Plog
- Allowed omitting the 0 for -W0/-H0/-n0 and made -j0 consistent
- Better handling of --xlim/--ylim
Now both scripts also fall back to guessing what fields to use based on
what fields can be converted to integers. This is more fallible, and
doesn't work for tests/benchmarks, but in those cases explicit fields
can be used (which is what would be needed without guessing anyways).
These are really just different flavors of test.py and test_runner.c
without support for power-loss testing, but with support for measuring
the cumulative number of bytes read, programmed, and erased.
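Conceptually the measurement is just accumulation in the block device
hooks; a simplified sketch (the wrapper and counter names here are made
up, not the actual bench runner code):

#include <stdint.h>
#include "lfs.h"

// hypothetical underlying block device operation
extern int underlying_read(const struct lfs_config *cfg, lfs_block_t block,
        lfs_off_t off, void *buffer, lfs_size_t size);

// running total for the current benchmark
static uint64_t bench_readed;

// count bytes as they pass through to the underlying block device;
// prog and erase are wrapped the same way
static int bench_read(const struct lfs_config *cfg, lfs_block_t block,
        lfs_off_t off, void *buffer, lfs_size_t size) {
    bench_readed += size;
    return underlying_read(cfg, block, off, buffer, size);
}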
Note that the existing define parameterization should work perfectly
fine for running benchmarks across various dimensions:
./scripts/bench.py \
runners/bench_runner \
bench_file_read \
-gnor \
-DSIZE='range(0,131072,1024)'
Also added a couple basic benchmarks as a starting point.
- Added the littlefs license note to the scripts.
- Adopted parse_intermixed_args everywhere for more consistent arg
handling.
- Removed argparse's implicit help text formatting as it does not
work with parse_intermixed_args and breaks sometimes.
- Used string concatenation for argparse text everywhere; relying on
backslashed line continuations only works with argparse because it
strips redundant whitespace.
- Consistent argparse formatting.
- Consistent openio mode handling.
- Consistent color argument handling.
- Adopted functools.lru_cache in tracebd.py.
- Moved unicode printing behind --subscripts in tracebd.py, making all
scripts ASCII by default.
- Renamed pretty_asserts.py -> prettyasserts.py.
- Renamed struct.py -> struct_.py, the original name conflicts with
Python's built in struct module in horrible ways.
With more scripts generating CSV files this moves most CSV manipulation
into summary.py, which can now handle more or less any arbitrary CSV
file with arbitrary names and fields.
This also includes a bunch of additional, probably unnecessary, tweaks:
- summary.py/coverage.py use a custom fractional type for encoding
fractions; this will also be used for test counts.
- Added a smaller diff output for size scripts with the --percent flag.
- Added line and hit info to coverage.py's CSV files.
- Added --tree flag to stack.py to show only the call tree without
other noise.
- Renamed structs.py to struct.py.
- Changed a few flags around for consistency between size/summary scripts.
- Added `make sizes` alias.
- Added `make lfs.code.csv` rules
These are just some minor quality-of-life improvements:
- Added a "make build-test" alias
- Made test runner a positional arg for test.py since it is almost
always required. This shortens the command line invocation most of the
time.
- Added --context to test.py
- Renamed --output in test.py to --stdout, note this still merges
stderr. Maybe at some point these should be split, but it's not really
worth it for now.
- Reworked the test_id parsing code a bit.
- Changed the test runner --step to take a range such as -s0,12,2
- Changed tracebd.py --block and --off to take ranges
Doing this now specifically because clang does not have
-Wjump-misses-init, but I've been looking for an excuse to remove these
for a while.
These warning flags create more annoyance than they add value. There is
probably a reason they aren't included in -Wall + -Wextra.
-Wshadow specifically is potentially harmful as it forces coming up with
new, sometimes less descriptive names for repeated variables.
Dependent projects should use different flags for their dependencies if
this introduces problems.
The main changes here from the previous test framework design are:
1. Powerloss testing remains in-process, speeding up testing.
2. The state of a test, including all powerlosses, is encoded in the
test id + leb16 encoded powerloss string. This means exhaustive
testing can be run in CI, but then easily reproduced locally with
full debugger support.
For example:
./scripts/test.py test_dirs#reentrant_many_dir#10#1248g1g2 --gdb
Will run the suite test_dirs, case reentrant_many_dir, permutation #10,
with powerlosses at 1, 2, 4, 8, 16, and 32 cycles, dropping into gdb
if an assert fails.
The changes to the block-device are a work-in-progress for a
lazily-allocated/copy-on-write block device that I'm hoping will keep
exhaustive testing relatively low-cost.
- Renamed explode_asserts.py -> pretty_asserts.py, as this name is
hopefully a bit more descriptive
- Small cleanup of the parser rules
- Added recognition of memcmp/strcmp => 0 statements, generating the
relevant memory-inspecting assert messages (see the sketch below)
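A rough sketch of that last bullet, with a made-up macro name standing
in for whatever pretty_asserts.py actually generates:

// in a test source:
memcmp(buffer, expected, size) => 0;

// gets rewritten into something along the lines of:
__PRETTY_ASSERT_MEM_EQ(buffer, expected, size);

// so a failing comparison can print the differing bytes rather than
// just a nonzero memcmp result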
I attempted to fix the incorrect column numbers for the generated
asserts, but unfortunately this didn't go anywhere and I don't think
it's actually possible.
There is no column control analogous to the #line directive. I thought
you might be able to intermix #line directives to put arguments at the
right column like so:
assert(a == b);
__PRETTY_ASSERT_INT_EQ(
#line 1
a,
#line 1
b);
But this doesn't work as preprocessor directives are not allowed in
macro arguments in standard C. Unfortunately this is probably not
possible to fix without better support in the language.
Also renamed GCI -> CI; this holds .ci files, though there is a risk
of confusion with continuous integration.
Also added unused but generated .ci files to clean rule.
These scripts can't easily share the common logic, but separating
field details from the print/merge/csv logic should make the common
part of these scripts much easier to create/modify going forward.
This also tweaked the behavior of summary.py slightly.
This also adds coverage support to the new test framework, which due to
reduction in scope, no longer needs aggregation and can be much
simpler. Really all we need to do is pass --coverage to GCC, which
builds its .gcda files during testing in a multi-process-safe manner.
The addition of branch coverage leverages information that was available
in both lcov and gcov.
This was made easier with the addition of --json-format to gcov
in GCC 9.0; however, the lax backwards compatibility for gcov's
intermediate options is a bit concerning. Hopefully --json-format
sticks around for a while.
GCC is a bit annoying here: it can't generate .ci files without
generating the related .o files, though I suppose the alternative risks
duplicating a large amount of compilation work (littlefs is really
a small project).
Previously we rebuilt the .o files anytime we needed .ci files
(callgraph info used for stack.py). This changes it so we always
build .ci files as a side-effect of compilation. This is similar
to the .d file generation, though it may be annoying if the system
cc doesn't support -fcallgraph-info.
This mostly required names for each test case, declarations of
previously-implicit variables since the new test framework is more
conservative with what it declares (the small extra effort to add
declarations is well worth the simplicity and improved readability),
and tweaks to work with not-really-constant defines.
Also renamed test_ -> test, replacing the old ./scripts/test.py;
unfortunately git seems to have had a hard time with this.
- Expanded test defines to allow for lists of configurations
These are useful for changing multi-dimensional test configurations
without leading to extremely large and less useful configuration
combinations.
- Made warnings more visible during test parsing
- Added lfs_testbd.h to implicit test includes
- Fixed issue with not closing files in ./scripts/explode_asserts.py
- Added `make test_runner` and `make test_list` build rules for
convenience
In the test-runner, defines are parameterized constants (limited
to integers) that are generated from the test suite tomls, resulting
in many permutations of each test.
In order to make this efficient, these defines are implemented as
multi-layered lookup tables, using per-layer/per-scope indirect
mappings. This lets the test-runner and test suites define their
own defines with compile-time indexes independently. It also makes
building of the lookup tables very efficient, since they can be
incrementally populated as we expand the test permutations.
The four current define layers and when we need to build them:
layer                    defines       predefine_map  define_map
user-provided overrides  per-run       per-run        per-suite
per-permutation defines  per-perm      per-case       per-perm
per-geometry defines     per-perm      compile-time   -
default defines          compile-time  compile-time   -
This moves defines entirely into the runtime of the test_runner,
simplifying things and reducing the amount of generated code that needs
to be built, at the cost of limiting test-defines to uintmax_t types.
This is implemented using a set of index-based scopes (created by
test.py) that allow different layers to override defines from other
layers, accessible through the global `test_define` function (roughly
sketched after the list of layers below).
layers:
1. command-line overrides
2. per-case defines
3. per-geometry defines
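A rough sketch of the lookup idea (the signature and representation
here are guesses for illustration; the real runner differs in details):

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

// one entry per layer, highest priority first; a define is overridden
// in a layer if its index is in range and marked as defined
typedef struct test_define_layer {
    const uintmax_t *defines;
    const bool *defined;
    size_t count;
} test_define_layer_t;

// layers: command-line overrides, per-case defines, per-geometry defines
static test_define_layer_t test_define_layers[3];

// built-in defaults as the final fallback (bounds checks omitted)
static const uintmax_t test_define_defaults[] = {16, 16, 512, 4096};

uintmax_t test_define(size_t define) {
    for (size_t l = 0; l < 3; l++) {
        const test_define_layer_t *layer = &test_define_layers[l];
        if (define < layer->count && layer->defined[define]) {
            return layer->defines[define];
        }
    }
    return test_define_defaults[define];
}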
This is to try a different design for testing; the goals are to make the
test infrastructure a bit simpler, with clear stages for building and
running, and faster, by avoiding rebuilding lfs.c n-times.
GCC is a bit frustrating here: it really wants to generate every file in
a single command, which _would be_ more efficient if our build system
could leverage this. But -fcallgraph-info is a rather novel flag, so we
can't really rely on it for generally compiling and testing littlefs.
The multi-file output gets in the way when we want an explicitly
separate rule for callgraph-info generation. We can't generate the
callgraph-info without generating the object files.
This becomes a surprising issue when parallel building (make -j) is used!
Suddenly we might end up with both the .o and .ci rules writing to .o
files, which creates a really difficult-to-track-down issue of corrupted
.o files.
The temporary solution is to use an order-only prerequisite. This still
ends up building the .o files twice, but it's an acceptable tradeoff for
not requiring the -fcallgraph-info for all builds.
- Changed `make summary` to show a one line summary
- Added `make lfs.csv` rule, which is useful for finding more info with
other scripts
- Fixed small issue in ./scripts/summary.py
- Added *.ci (callgraph) and *.csv (script output) to CI
A full summary of static measurements (code size, stack usage, etc) can now
be found with:
make summary
This is done through the combination of a new ./scripts/summary.py
script and the ability of existing scripts to merge into existing csv
files, allowing multiple results to be merged either in a pipeline, or
in parallel with a single ./scripts/summary.py call.
The ./scripts/summary.py script can also be used to quickly compare
different builds or configurations. This is a proper implementation
of a similar but hacky shell script that has already been very useful
for making optimization decisions:
$ ./scripts/structs.py new.csv -d old.csv --summary
name (2 added, 0 removed)   code           stack  structs
TOTAL                      28648 (-2.7%)    2448     1012
Also some other small tweaks to scripts:
- Removed state saving diff rules. This isn't the most useful way to
handle comparing changes.
- Added short flags for --summary (-Y) and --files (-F), since these
are quite often used.
This required patching the --diff flag in the scripts to ignore a
missing file. This enables a useful one-liner for making comparisons
with potentially missing previous versions:
./scripts/code.py lfs.o -d lfs.o.code.csv -o lfs.o.code.csv
function (0 added, 0 removed)    old    new  diff
TOTAL                          25476  25476    +0
One downside: these previous files are easy to delete as part of make
clean, which limits their usefulness for comparing configuration
changes...
Note this detects loops (recursion), and renders this as infinity.
Currently littlefs does have a single recursive function and you can see
how this infects the full call graph. Eventually this should be removed.
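For example, a function along these lines has no static bound on its
stack usage, so stack.py can only report it, and anything that calls it,
as infinite:

// the recursion depth depends on a runtime value, so the worst-case
// stack usage can't be derived from the call graph alone
static int example_depth(int n) {
    if (n <= 0) {
        return 0;
    }
    return 1 + example_depth(n-1);
}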
This is to avoid unexpected script behavior even though data.py should
always return 0 bytes for littlefs. Maybe a check for this should be
added to CI?
Now with -s/--sort and -S/--reverse-sort for sorting the functions by
size.
You may wonder why add reverse-sort, since its utility doesn't seem
worth the cost to implement (these are just helper scripts after all).
The reason is that reverse-sort is quite useful on the command-line,
where scrollback may be truncated and you only care about the larger
entries.
Outside of the command-line, normal sort is preferred.
Fortunately the difference is just the sign in the sort key.
Note this conflicts with the short --summary flag, so that has been
removed.
Currently this is just lfs.c and lfs_util.c. Previously this included
the block devices, but this meant all of the scripts needed to
explicitly deselect the block devices to avoid reporting build
size/coverage info on them.
Note that test.py still explicitly adds the block devices for compiling
tests, which is their main purpose. Humorously this means the block
devices will probably be compiled into most builds in this repo anyways.
This was lost in the Travis -> GitHub transition: in serializing some of
the jobs, I missed that we need to clean between tests with different
geometry configurations. Otherwise we end up running outdated binaries,
which explains some of the weird test behavior we were seeing.
Also tweaked a few script things:
- Better subprocess error reporting (dump stderr on failure)
- Fixed a BUILDDIR rule issue in test.py
- Changed test-not-run status to None instead of undefined