This provides 2 things:
1. perf integration with the bench/test runners - This is a bit tricky
with perf as it doesn't provide its own way to combine perf
measurements across multiple processes. perf.py works around this by
writing everything to a zip file, using flock to synchronize (see the
first sketch after this list). As a plus, free compression!
2. Parsing and presentation of perf results in a format consistent with
the other CSV-based tools. This actually ran into a surprising number of
issues:
- We need to process raw events to get the information we want; this
ends up being a lot of data (~16MiB at 100Hz uncompressed), so we
parallelize the parsing of each decompressed perf file (sketched
below).
- perf reports raw addresses post-ASLR. It does provide sym+off, which
is very useful, but to find the source of static functions we need to
reverse the ASLR by finding the delta that produces the best
symbol<->addr matches (sketched below).
- This isn't related to perf, but decoding DWARF line numbers is
really complicated. You basically need to write a tiny VM (a fragment
of such a VM is sketched below).
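
A minimal sketch of the flock+zip idea, assuming a hypothetical
append_perf_record helper; the actual perf.py differs:

    import fcntl
    import os
    import zipfile

    def append_perf_record(zip_path, name, data):
        # O_CREAT so the first writer creates the archive, O_RDWR so
        # zipfile can rewrite the central directory when appending.
        fd = os.open(zip_path, os.O_RDWR | os.O_CREAT, 0o666)
        with os.fdopen(fd, 'r+b') as f:
            # Exclusive lock: concurrent runner processes block here
            # instead of interleaving their writes.
            fcntl.flock(f, fcntl.LOCK_EX)
            with zipfile.ZipFile(f, 'a', zipfile.ZIP_DEFLATED) as z:
                # Each member is deflate-compressed, which is where
                # the free compression comes from.
                z.writestr(name, data)
            # Closing f releases the lock.

Each process writes its own uniquely-named member, so the archive
doubles as the cross-process merge point.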
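The parse step fans out naturally across processes; a minimal sketch,
assuming a hypothetical parse_events function (the actual event
decoding is elided):

    import collections
    import multiprocessing
    import zipfile

    def parse_events(data):
        # Decode one decompressed perf dump into per-symbol sample
        # counts; elided here.
        return collections.Counter()

    def parse_archive(zip_path):
        with zipfile.ZipFile(zip_path) as z:
            dumps = [z.read(n) for n in z.namelist()]
        # ~16MiB per dump adds up quickly, so parse each dump in its
        # own process and merge the per-symbol counts at the end.
        totals = collections.Counter()
        with multiprocessing.Pool() as pool:
            for counts in pool.imap_unordered(parse_events, dumps):
                totals.update(counts)
        return totals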
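Reversing the ASLR can be done by voting: every sample whose symbol is
known statically implies one candidate delta, and the most common
candidate wins. A minimal sketch, assuming hypothetical inputs (syms
maps symbol names to their static addresses, e.g. from nm; samples is
a list of (raw_addr, sym, off) tuples from perf):

    import collections

    def find_aslr_delta(syms, samples):
        # Each matched sample votes for raw_addr - off - static_addr
        # as the ASLR slide.
        votes = collections.Counter()
        for raw_addr, sym, off in samples:
            if sym in syms:
                votes[raw_addr - off - syms[sym]] += 1
        if not votes:
            return 0
        return votes.most_common(1)[0][0]

With the winning delta, any raw address maps back into the ELF's
address space as raw_addr - delta, which also covers the static
functions perf couldn't attribute.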
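And an illustrative fragment of the tiny VM needed for DWARF line
numbers: just the special-opcode arithmetic plus a few standard
opcodes, with header parsing, op_index handling, extended opcodes, and
everything else elided. Not a usable decoder:

    def run_line_program(code, line_base, line_range, opcode_base,
            min_inst_len):
        # The VM's registers (a real decoder tracks more: file,
        # column, is_stmt, ...); each emitted row maps an address to
        # a source line.
        addr, line, rows, i = 0, 1, [], 0

        def uleb():
            # Unsigned LEB128: 7 bits per byte, high bit continues.
            nonlocal i
            result, shift = 0, 0
            while True:
                b = code[i]; i += 1
                result |= (b & 0x7f) << shift
                shift += 7
                if not b & 0x80:
                    return result

        def sleb():
            # Signed LEB128: like uleb, then sign-extend.
            nonlocal i
            result, shift = 0, 0
            while True:
                b = code[i]; i += 1
                result |= (b & 0x7f) << shift
                shift += 7
                if not b & 0x80:
                    if b & 0x40:
                        result -= 1 << shift
                    return result

        while i < len(code):
            op = code[i]; i += 1
            if op >= opcode_base:
                # Special opcode: one byte encodes both an address
                # advance and a line advance, then emits a row.
                adj = op - opcode_base
                addr += (adj // line_range) * min_inst_len
                line += line_base + (adj % line_range)
                rows.append((addr, line))
            elif op == 1:    # DW_LNS_copy
                rows.append((addr, line))
            elif op == 2:    # DW_LNS_advance_pc
                addr += uleb() * min_inst_len
            elif op == 3:    # DW_LNS_advance_line
                line += sleb()
            else:
                raise NotImplementedError(f'opcode {op}')
        return rows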
This also turns on perf measurement by default for the bench-runner, but at a
low frequency (100 Hz). This can be decreased or removed in the future
if it causes any slowdown.
.gitignore:
# Compilation output
*.o
*.d
*.a
*.ci
*.csv
*.t.c
*.b.c
*.a.c
*.gcno
*.gcda
*.perf

# Testing things
blocks/
lfs
test.c
tests/*.toml.*
scripts/__pycache__
.gdb_history
runners/test_runner
runners/bench_runner