License cleanup: add SPDX GPL-2.0 license identifier to files with no license
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boilerplate text.
This patch is based on work done by Thomas Gleixner, Kate Stewart, and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it,
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information.
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and references to licenses
had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX license identifier should be applied
to a file was done in a spreadsheet of side-by-side results from the
output of two independent scanners (ScanCode & Windriver) producing SPDX
tag:value files, created by Philippe Ombredanne. Philippe prepared the
base worksheet and did an initial spot review of a few thousand files.
The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed. Kate Stewart did a file-by-file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
should be applied to each file. She confirmed any determination that was not
immediately clear with lawyers working with the Linux Foundation.
The criteria used to select files for SPDX license identifier tagging were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
lines of source.
- The file already had some variant of a license header in it (even if
<5 lines).
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when both scanners couldn't find any license traces, the file was
considered to have no license information in it, and the top-level
COPYING file license was applied.
For non-*/uapi/* files, that summary was:
SPDX license identifier                             # files
----------------------------------------------------|-------
GPL-2.0                                                11139
and resulted in the first patch in this series.
If that file was a */uapi/* path one, it was "GPL-2.0 WITH
Linux-syscall-note", otherwise it was "GPL-2.0". The results of that were:
SPDX license identifier                             # files
----------------------------------------------------|-------
GPL-2.0 WITH Linux-syscall-note                          930
and resulted in the second patch in this series.
- if a file had some form of licensing information in it, and was one
of the */uapi/* ones, it was denoted with the Linux-syscall-note if
any GPL-family license was found in the file or the file had no
licensing in it (per the prior point). Results summary:
SPDX license identifier                              # files
-----------------------------------------------------|------
GPL-2.0 WITH Linux-syscall-note                          270
GPL-2.0+ WITH Linux-syscall-note                         169
((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause)       21
((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)       17
LGPL-2.1+ WITH Linux-syscall-note                         15
GPL-1.0+ WITH Linux-syscall-note                          14
((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause)       5
LGPL-2.0+ WITH Linux-syscall-note                          4
LGPL-2.1 WITH Linux-syscall-note                           3
((GPL-2.0 WITH Linux-syscall-note) OR MIT)                 3
((GPL-2.0 WITH Linux-syscall-note) AND MIT)                1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
the file was flagged for further research and to be revisited later.
In total, over 70 hours of logged manual review was done on the
spreadsheet to determine the SPDX license identifiers to apply to the
source files by Kate, Philippe, Thomas and, in some cases, confirmation
by lawyers working with the Linux Foundation.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were new insights. The
Windriver scanner is based on an older version of FOSSology in part, so
they are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with the SPDX license identifiers in the
files he inspected. For the non-uapi files Thomas did random spot checks
in about 15000 files.
In the initial set of patches against 4.14-rc6, 3 files were found to have
copy/paste license identifier errors, and have been fixed to reflect the
correct identifier.
Additionally, Philippe spent 10 hours doing a detailed manual inspection
and review of the 12,461 patched files from the initial patch version,
with:
- a full scancode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where there was a license detected (about 500+
files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the csv files and add the proper SPDX tag to the file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types). Finally, Greg ran the script using the .csv files to
generate the patches.
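For illustration, the "different comment types" follow the kernel's
license-rules conventions; a representative sketch (not the script's
exact output rules):

// SPDX-License-Identifier: GPL-2.0         (first line of .c files)
/* SPDX-License-Identifier: GPL-2.0 */      (first line of .h files)
# SPDX-License-Identifier: GPL-2.0          (Makefiles and scripts)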
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

# SPDX-License-Identifier: GPL-2.0
include ../../../build/Build.include
include ../../../scripts/Makefile.arch
include ../../../scripts/Makefile.include

CXX ?= $(CROSS_COMPILE)g++
selftests/bpf: Replace test_progs and test_maps w/ general rule
Define test runner generation meta-rule that codifies dependencies
between test runner, its tests, and its dependent BPF programs. Use that
for defining the test_progs and test_maps test runners. Additionally,
define two flavors of test_progs:
- alu32, which builds BPF programs with 32-bit registers codegen;
- bpf_gcc, which builds BPF programs using GCC, if it supports BPF target.
Overall, this is accomplished through $(eval)'ing a set of generic
rules, which defines Makefile targets dynamically at runtime. See the
comments explaining the need for two $(eval)s.
For each test runner we have (currently test_maps and test_progs) and,
optionally, their flavors, the logic of the build process is modeled as
follows (using test_progs as an example):
- all BPF objects are in progs/:
- BPF object's .o file is built into output directory from
corresponding progs/.c file;
- all BPF objects in progs/*.c depend on all progs/*.h headers;
- all BPF objects depend on bpf_*.h helpers from libbpf (but not the
libbpf archive). There is an extra rule to trigger a bpf_helper_defs.h
(re-)build if it's not present or outdated;
- build recipe for BPF object can be re-defined per test runner/flavor;
- test files are built from prog_tests/*.c:
- all such test file objects are built on an individual file basis;
- currently, every single test file depends on all BPF object files;
this might be improved in follow-up patches to do 1-to-1 dependencies,
while allowing this to be customized per individual test;
- each test runner definition can specify a list of extra .c and .h
files to be built along with the test files and test runner binary; all
such headers become an automatic dependency of each test .c file;
- because test files sometimes embed (using the .incbin assembly
directive) the contents of some BPF objects at compilation time, and
those objects are expected to be in the compiler's CWD, compilation of
a test file object cds into the test runner's output directory; to
support this mode, all include paths are turned into absolute paths
using the $(abspath) make function;
- prog_tests/test.h is automatically (re-)generated with an entry for
each .c file in prog_tests/;
- the final test runner binary is linked from the test object files and
extra object files, together with libbpf's archive;
- it's possible to specify extra "resource" files/targets, which will be
copied into the test runner's output directory if it differs from the
Makefile-wide $(OUTPUT). This is used to ensure btf_dump test cases and
the urandom_read binary are put into a test runner's CWD for tests to
find them at runtime.
For flavored test runners, their output directory is a subdirectory of
common Makefile-wide $(OUTPUT) directory with flavor name used as
subdirectory name.
BPF object targets might be reused between different test runners, so
extra checks are employed to not double-define them. Similarly, we have
redefinition guards for output directories and test headers.
test_verifier follows slightly different patterns and is simple enough
to not justify generalizing TEST_RUNNER_DEFINE/TEST_RUNNER_DEFINE_RULES
further to accommodate these differences. Instead, rules for
test_verifier are minimized and simplified, while preserving correctness
of dependencies.
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191016060051.2024182-6-andriin@fb.com
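To make the two-pass $(eval) approach concrete, here is a minimal sketch
of the pattern; the rule and variable names are illustrative, not the
exact definitions used by this Makefile:

# Pass 1: define per-runner variables.
define TEST_RUNNER_DEFINE
$(1)-runner := $$(OUTPUT)/$(1)$(if $(2),-$(2))
endef

# Pass 2: define rules that reference the variables from pass 1.
define TEST_RUNNER_DEFINE_RULES
$$($(1)-runner): $$(wildcard prog_tests/*.c)
	$$(CC) $$(CFLAGS) $$^ $$(LDLIBS) -o $$@
endef

# Variables must be $(eval)'d before the rules that use them, hence
# the two separate $(eval) calls per runner/flavor.
$(eval $(call TEST_RUNNER_DEFINE,test_progs))
$(eval $(call TEST_RUNNER_DEFINE_RULES,test_progs))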

CURDIR := $(abspath .)
TOOLSDIR := $(abspath ../../..)
LIBDIR := $(TOOLSDIR)/lib
BPFDIR := $(LIBDIR)/bpf
TOOLSINCDIR := $(TOOLSDIR)/include
TOOLSARCHINCDIR := $(TOOLSDIR)/arch/$(SRCARCH)/include
BPFTOOLDIR := $(TOOLSDIR)/bpf/bpftool
APIDIR := $(TOOLSINCDIR)/uapi
ifneq ($(O),)
GENDIR := $(O)/include/generated
else
GENDIR := $(abspath ../../../../include/generated)
endif
GENHDR := $(GENDIR)/autoconf.h
PKG_CONFIG ?= $(CROSS_COMPILE)pkg-config

ifneq ($(wildcard $(GENHDR)),)
GENFLAGS := -DHAVE_GENHDR
endif

selftests/bpf: add bpf-gcc support
Now that binutils and gcc support for BPF is upstream, make use of it in
BPF selftests using an alu32-like approach. Share as much as possible of
CFLAGS calculation with clang.
Fixes only obvious issues, leaving more complex ones for later:
- Use gcc-provided bpf-helpers.h instead of manually defining the
helpers, change bpf_helpers.h include guard to avoid conflict.
- Include <linux/stddef.h> for __always_inline.
- Add $(OUTPUT)/../usr/include to include path in order to use local
kernel headers instead of system kernel headers when building with O=.
In order to activate the bpf-gcc support, one needs to configure
binutils and gcc with --target=bpf and make them available in $PATH. In
particular, gcc must be installed as `bpf-gcc`, which is the default.
Right now with binutils 25a2915e8dba and gcc r275589 only a handful of
tests work:
# ./test_progs_bpf_gcc
# Summary: 7/39 PASSED, 1 SKIPPED, 98 FAILED
The reasons for those failures are as follows:
- Build errors:
- `error: too many function arguments for eBPF` for __always_inline
functions read_str_var and read_map_var - must be an inlining issue,
and for process_l3_headers_v6, which relies on optimizing away
function arguments.
- `error: indirect call in function, which are not supported by eBPF`
where there are no obvious indirect calls in the source code, e.g.
in __encap_ipip_none.
- `error: field 'lock' has incomplete type` for fields of `struct
bpf_spin_lock` type - bpf_spin_lock is re#defined by bpf-helpers.h,
so its usage is sensitive to order of #includes.
- `error: eBPF stack limit exceeded` in sysctl_tcp_mem.
- Load errors:
- Missing object files due to above build errors.
- `libbpf: failed to create map (name: 'test_ver.bss')`.
- `libbpf: object file doesn't contain bpf program`.
- `libbpf: Program '.text' contains unrecognized relo data pointing to
section 0`.
- `libbpf: BTF is required, but is missing or corrupted` - no BTF
support in gcc yet.
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Cc: Jose E. Marchesi <jose.marchesi@oracle.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
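# bpf-gcc is optional: BPF_GCC stays empty when no BPF-targeted GCC is
# found in $PATH, and the gcc-flavored test runner is skipped below.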
BPF_GCC ?= $(shell command -v bpf-gcc;)
selftests/bpf: Add SAN_CFLAGS param to selftests build to allow sanitizers
Add ability to specify extra compiler flags with SAN_CFLAGS for compilation of
all user-space C files. This allows building all of the selftest programs with,
e.g., custom sanitizer flags, without requiring support for such sanitizers
from anyone compiling selftest/bpf.
As an example, to compile everything with AddressSanitizer, one would do:
$ make clean && make SAN_CFLAGS="-fsanitize=address"
For AddressSanitizer to work, one needs the appropriate libasan shared
library installed on the system, with the version of libasan matching what
GCC links against. E.g., GCC8 needs libasan5, while GCC7 uses libasan4.
For CentOS 7, to build everything successfully one would need to:
$ sudo yum install devtoolset-8-gcc devtoolset-libasan-devel
$ scl enable devtoolset-8 bash # set up environment
For Arch Linux to run selftests, one would need to install gcc-libs package to
get libasan.so.5:
$ sudo pacman -S gcc-libs
N.B. The EXTRA_CFLAGS name wasn't used because it's also used by libbpf's
Makefile, and this causes a few issues:
1. the default "-g -Wall" flags are overridden;
2. compiling a shared library with AddressSanitizer generates a bunch of
symbols like "_GLOBAL__sub_D_00099_0_btf_dump.c", "_GLOBAL__sub_D_00099_0_bpf.c",
etc., which screws up the versioned-symbols check.
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Cc: Julia Kartseva <hex@fb.com>
Link: https://lore.kernel.org/bpf/20200429012111.277390-3-andriin@fb.com

SAN_CFLAGS ?=
SAN_LDFLAGS ?= $(SAN_CFLAGS)

RELEASE ?=
OPT_FLAGS ?= $(if $(RELEASE),-O2,-O0)
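As a usage note, the RELEASE knob above selects the optimization level;
any non-empty value enables -O2:

$ make RELEASE=1    # build with -O2 instead of the default -O0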

LIBELF_CFLAGS := $(shell $(PKG_CONFIG) libelf --cflags 2>/dev/null)
LIBELF_LIBS := $(shell $(PKG_CONFIG) libelf --libs 2>/dev/null || echo -lelf)

SKIP_DOCS ?=
SKIP_LLVM ?=
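
# When this Makefile is not reached via the top-level kernel Makefile,
# derive srctree by walking four directory levels up from
# tools/testing/selftests/bpf.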
ifeq ($(srctree),)
srctree := $(patsubst %/,%,$(dir $(CURDIR)))
srctree := $(patsubst %/,%,$(dir $(srctree)))
srctree := $(patsubst %/,%,$(dir $(srctree)))
srctree := $(patsubst %/,%,$(dir $(srctree)))
endif

CFLAGS += -g $(OPT_FLAGS) -rdynamic -std=gnu11 \
	-Wall -Werror -fno-omit-frame-pointer \
	$(GENFLAGS) $(SAN_CFLAGS) $(LIBELF_CFLAGS) \
	-I$(CURDIR) -I$(INCLUDE_DIR) -I$(GENDIR) -I$(LIBDIR) \
	-I$(TOOLSINCDIR) -I$(TOOLSARCHINCDIR) -I$(APIDIR) -I$(OUTPUT)
LDFLAGS += $(SAN_LDFLAGS)
LDLIBS += $(LIBELF_LIBS) -lz -lrt -lpthread

PCAP_CFLAGS := $(shell $(PKG_CONFIG) --cflags libpcap 2>/dev/null && echo "-DTRAFFIC_MONITOR=1")
PCAP_LIBS := $(shell $(PKG_CONFIG) --libs libpcap 2>/dev/null)
LDLIBS += $(PCAP_LIBS)
CFLAGS += $(PCAP_CFLAGS)
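The PCAP_CFLAGS line above doubles as feature detection: pkg-config's
--cflags output is only followed by -DTRAFFIC_MONITOR=1 when the query
succeeds, so a missing libpcap leaves the flag unset. A minimal sketch
of the same trick in isolation (libfoo and HAVE_FOO are hypothetical
names):

# If pkg-config finds the library, its --cflags output is followed by
# our extra define; on failure the whole $(shell ...) result is empty.
FOO_CFLAGS := $(shell pkg-config --cflags libfoo 2>/dev/null && echo "-DHAVE_FOO=1")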
selftests/bpf: Add traffic monitor functions.
Add functions that capture packets and print logs in the background. They
are supposed to be used for debugging flaky network test cases. A monitored
test case should call traffic_monitor_start() to start a thread to capture
packets in the background for a given namespace and call
traffic_monitor_stop() to stop capturing. (Or use option '-m',
implemented by the later patches.)
lo In IPv4 127.0.0.1:40265 > 127.0.0.1:55907: TCP, length 68, SYN
lo In IPv4 127.0.0.1:55907 > 127.0.0.1:40265: TCP, length 60, SYN, ACK
lo In IPv4 127.0.0.1:40265 > 127.0.0.1:55907: TCP, length 60, ACK
lo In IPv4 127.0.0.1:55907 > 127.0.0.1:40265: TCP, length 52, ACK
lo In IPv4 127.0.0.1:40265 > 127.0.0.1:55907: TCP, length 52, FIN, ACK
lo In IPv4 127.0.0.1:55907 > 127.0.0.1:40265: TCP, length 52, RST, ACK
Packet file: packets-2173-86-select_reuseport:sockhash_IPv4_TCP_LOOPBACK_test_detach_bpf-test.log
#280/87 select_reuseport/sockhash IPv4/TCP LOOPBACK test_detach_bpf:OK
The above is example output. It shows the packets of a connection and the
name of the file that contains the captured packets in the directory
/tmp/tmon_pcap. The file can be loaded by tcpdump or wireshark.
This feature only works if libpcap is available (detected via pkg-config).
Acked-by: Stanislav Fomichev <sdf@fomichev.me>
Signed-off-by: Kui-Feng Lee <thinker.li@gmail.com>
Link: https://lore.kernel.org/r/20240815053254.470944-2-thinker.li@gmail.com
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>

# Some utility functions use LLVM libraries
jit_disasm_helpers.c-CFLAGS = $(LLVM_CFLAGS)

selftests/bpf: Silence clang compilation warnings
With clang compiler:
make -j60 LLVM=1 LLVM_IAS=1 <=== compile kernel
make -j60 -C tools/testing/selftests/bpf LLVM=1 LLVM_IAS=1
Some linker flags are not used/effective for some binaries and
we have warnings like:
warning: -lelf: 'linker' input unused [-Wunused-command-line-argument]
We also have warnings like:
.../selftests/bpf/prog_tests/ns_current_pid_tgid.c:74:57: note: treat the string as an argument to avoid this
if (CHECK(waitpid(cpid, &wstatus, 0) == -1, "waitpid", strerror(errno)))
^
"%s",
.../selftests/bpf/test_progs.h:129:35: note: expanded from macro 'CHECK'
_CHECK(condition, tag, duration, format)
^
.../selftests/bpf/test_progs.h:108:21: note: expanded from macro '_CHECK'
fprintf(stdout, ##format); \
^
The first warning can be silenced with the clang option
-Wno-unused-command-line-argument. For the second warning, the source code
is modified as suggested by the compiler to silence the warning. Since gcc
does not support -Wno-unused-command-line-argument and the warning only
happens with the clang compiler, that option is enabled only when clang is
used.
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210413153429.3029377-1-yhs@fb.com

ifneq ($(LLVM),)
# Silence some warnings when compiled with clang
CFLAGS += -Wno-unused-command-line-argument
endif

# Check whether bpf cpu=v4 is supported or not by clang
ifneq ($(shell $(CLANG) --target=bpf -mcpu=help 2>&1 | grep 'v4'),)
CLANG_CPUV4 := 1
endif

# Order corresponds to 'make run_tests' order
TEST_GEN_PROGS = test_verifier test_tag test_maps test_lru_map test_progs \
	test_sockmap \
	test_tcpnotify_user \
	test_progs-no_alu32
TEST_INST_SUBDIRS := no_alu32

# Also test bpf-gcc, if present
ifneq ($(BPF_GCC),)
TEST_GEN_PROGS += test_progs-bpf_gcc
TEST_INST_SUBDIRS += bpf_gcc
bpf: Use -Wno-error in certain tests when building with GCC
Certain BPF selftests contain code that, albeit legal C, triggers
warnings in GCC that cannot be disabled. This is the case, for example,
for the tests
progs/btf_dump_test_case_bitfields.c
progs/btf_dump_test_case_namespacing.c
progs/btf_dump_test_case_packing.c
progs/btf_dump_test_case_padding.c
progs/btf_dump_test_case_syntax.c
which contain struct type declarations inside function parameter
lists. This is problematic, because:
- The BPF selftests are built with -Werror.
- Clang and GCC sometimes differ in their handling of warnings. One
compiler may emit warnings for code that the other compiles silently,
and one compiler may offer the possibility to disable certain warnings
while the other doesn't.
In order to overcome this problem, this patch modifies the
tools/testing/selftests/bpf/Makefile in order to:
1. Enable the possibility of specifying per-source-file extra CFLAGS.
This is done by defining a make variable like:
<source-filename>-CFLAGS := <whateverflags>
And then modifying the proper Make rule in order to use these flags
when compiling <source-filename>.
2. Use the mechanism above to add -Wno-error to CFLAGS for the
following selftests:
progs/btf_dump_test_case_bitfields.c
progs/btf_dump_test_case_namespacing.c
progs/btf_dump_test_case_packing.c
progs/btf_dump_test_case_padding.c
progs/btf_dump_test_case_syntax.c
Note the corresponding -CFLAGS variables for these files are
defined only if the selftests are being built with GCC.
Note that, while compiler pragmas can generally be used to disable
particular warnings per file, this 1) is only possible for warnings that
can actually be disabled on the command line, i.e. that have -Wno-FOO
options, and 2) doesn't apply to -Wno-error.
Tested in bpf-next master branch.
No regressions.
Signed-off-by: Jose E. Marchesi <jose.marchesi@oracle.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20240127100702.21549-1-jose.marchesi@oracle.com
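A minimal sketch of how such a per-source-file variable can be consumed
in a pattern rule; the rule below is illustrative, not the exact recipe
used by this Makefile, and BPF_CFLAGS is a stand-in for the real BPF
compilation flags (flavored variants such as the -bpf_gcc- infix seen
below follow the same naming scheme):

# The variable name is computed from the source path, so a file with no
# <source>-CFLAGS definition simply contributes nothing extra.
$(OUTPUT)/%.bpf.o: progs/%.c
	$(CLANG) $(BPF_CFLAGS) $(progs/$*.c-CFLAGS) -c $< -o $@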

# The following tests contain C code that, although technically legal,
# triggers GCC warnings that cannot be disabled: declaration of
# anonymous struct types in function parameter lists.
progs/btf_dump_test_case_bitfields.c-bpf_gcc-CFLAGS := -Wno-error
progs/btf_dump_test_case_namespacing.c-bpf_gcc-CFLAGS := -Wno-error
progs/btf_dump_test_case_packing.c-bpf_gcc-CFLAGS := -Wno-error
progs/btf_dump_test_case_padding.c-bpf_gcc-CFLAGS := -Wno-error
progs/btf_dump_test_case_syntax.c-bpf_gcc-CFLAGS := -Wno-error
bpf: avoid UB in usages of the __imm_insn macro
[Changes from V2:
- no-strict-aliasing is only applied when building with GCC.
- cpumask_failure.c is excluded, as it doesn't use __imm_insn.]
The __imm_insn macro is defined in bpf_misc.h as:
#define __imm_insn(name, expr) [name]"i"(*(long *)&(expr))
This may lead to type punning and strict-aliasing rule violations in its
typical usage, where the address of a struct bpf_insn is passed as expr,
like in:
__imm_insn(st_mem,
BPF_ST_MEM(BPF_W, BPF_REG_1, offsetof(struct __sk_buff, mark), 42))
Where:
#define BPF_ST_MEM(SIZE, DST, OFF, IMM) \
((struct bpf_insn) { \
.code = BPF_ST | BPF_SIZE(SIZE) | BPF_MEM, \
.dst_reg = DST, \
.src_reg = 0, \
.off = OFF, \
.imm = IMM })
In all the actual instances of this in the BPF selftests the value is
fed to a volatile asm statement as soon as it gets read from memory,
and thus it is unlikely that strict-aliasing rule breakage may lead to
misguided optimizations.
However, GCC detects the potential problem (indirectly) by issuing a
warning stating that a temporary <Uxxxxxx> is used uninitialized,
where the temporary corresponds to the memory read by *(long *).
This patch adds -fno-strict-aliasing to the compilation flags of the
particular selftests that do type punning via __imm_insn, only for
GCC.
Tested in master bpf-next.
No regressions.
Signed-off-by: Jose E. Marchesi <jose.marchesi@oracle.com>
Cc: david.faust@oracle.com
Cc: cupertino.miranda@oracle.com
Cc: Yonghong Song <yonghong.song@linux.dev>
Cc: Eduard Zingerman <eddyz87@gmail.com>
Acked-by: Yonghong Song <yonghong.song@linux.dev>
Link: https://lore.kernel.org/r/20240508103551.14955-1-jose.marchesi@oracle.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
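Mechanically, this reuses the per-source-file CFLAGS hook introduced for
the -Wno-error case above; a hypothetical example (the file name is
illustrative, not one of the actual files patched here, and in the real
Makefile this is additionally guarded to apply only when building with
GCC):

progs/verifier_example.c-CFLAGS := -fno-strict-aliasing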
endif

ifneq ($(CLANG_CPUV4),)
TEST_GEN_PROGS += test_progs-cpuv4
TEST_INST_SUBDIRS += cpuv4
endif

TEST_GEN_FILES = test_tc_edt.bpf.o
TEST_FILES = xsk_prereqs.sh $(wildcard progs/btf_dump_test_case_*.c)
# Order corresponds to 'make run_tests' order
TEST_PROGS := test_kmod.sh \
	test_lirc_mode2.sh \
	test_tc_tunnel.sh \
selftests/bpf: measure RTT from xdp using xdping
xdping allows us to get latency estimates from XDP. Output looks
like this:
./xdping -I eth4 192.168.55.8
Setting up XDP for eth4, please wait...
XDP setup disrupts network connectivity, hit Ctrl+C to quit
Normal ping RTT data
[Ignore final RTT; it is distorted by XDP using the reply]
PING 192.168.55.8 (192.168.55.8) from 192.168.55.7 eth4: 56(84) bytes of data.
64 bytes from 192.168.55.8: icmp_seq=1 ttl=64 time=0.302 ms
64 bytes from 192.168.55.8: icmp_seq=2 ttl=64 time=0.208 ms
64 bytes from 192.168.55.8: icmp_seq=3 ttl=64 time=0.163 ms
64 bytes from 192.168.55.8: icmp_seq=8 ttl=64 time=0.275 ms
4 packets transmitted, 4 received, 0% packet loss, time 3079ms
rtt min/avg/max/mdev = 0.163/0.237/0.302/0.054 ms
XDP RTT data:
64 bytes from 192.168.55.8: icmp_seq=5 ttl=64 time=0.02808 ms
64 bytes from 192.168.55.8: icmp_seq=6 ttl=64 time=0.02804 ms
64 bytes from 192.168.55.8: icmp_seq=7 ttl=64 time=0.02815 ms
64 bytes from 192.168.55.8: icmp_seq=8 ttl=64 time=0.02805 ms
The xdping program loads the associated xdping_kern.o BPF program
and attaches it to the specified interface. If run in client
mode (the default), it will add a map entry keyed by the
target IP address; this map will store RTT measurements, current
sequence number, etc. Finally, in client mode, the ping command
is executed, and the xdping BPF program will use the last ICMP
reply, reformulate it as an ICMP request with the next sequence
number and XDP_TX it. After the reply to that request is received
we can measure RTT and repeat until the desired number of
measurements is made. This is why the sequence numbers in the
normal ping are 1, 2, 3 and 8. We XDP_TX a modified version
of ICMP reply 4 and keep doing this until we get the 4 replies
we need; hence the networking stack only sees reply 8, where
we have XDP_PASSed it upstream since we are done.
In server mode (-s), xdping simply takes ICMP requests and replies
to them in XDP rather than passing the request up to the networking
stack. No map entry is required.
xdping can be run in native XDP mode (the default, or specified
via -N) or in skb mode (-S).
A test program test_xdping.sh exercises some of these options.
Note that native XDP does not seem to XDP_TX for veths, hence -N
is not tested. Looking at the code, it looks like XDP_TX is
supported so I'm not sure if that's expected. Running xdping in
native mode for ixgbe as both client and server works fine.
Changes since v4
- close fds on cleanup (Song Liu)
Changes since v3
- fixed seq to be __be16 (Song Liu)
- fixed fd checks in xdping.c (Song Liu)
Changes since v2
- updated commit message to explain why seq number of last
ICMP reply is 8 not 4 (Song Liu)
- updated types of seq number, raddr and eliminated csum variable
in xdpclient/xdpserver functions as it was not needed (Song Liu)
- added XDPING_DEFAULT_COUNT definition and usage specification of
default/max counts (Song Liu)
Changes since v1
- moved from RFC to PATCH
- removed unused variable in ipv4_csum() (Song Liu)
- refactored ICMP checks into icmp_check() function called by client
and server programs and reworked client and server programs due
to lack of shared code (Song Liu)
- added checks to ensure that SKB and native mode are not requested
together (Song Liu)
Signed-off-by: Alan Maguire <alan.maguire@oracle.com>
Acked-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
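Summarizing the modes described above (the interface and address are the
example values from this commit message):

./xdping -I eth4 192.168.55.8     # client mode (default): measure RTT
./xdping -s -I eth4               # server mode: answer ICMP in XDP
./xdping -S -I eth4 192.168.55.8  # client mode, skb (generic) XDP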
	test_tc_edt.sh \
tools: bpftool: improve and check builds for different make invocations
There are a number of alternative "make" invocations that can be used to
compile bpftool. The following invocations are expected to work:
- through the kbuild system, from the top of the repository
(make tools/bpf)
- by telling make to change to the bpftool directory
(make -C tools/bpf/bpftool)
- by building the BPF tools from tools/
(cd tools && make bpf)
- by running make from bpftool directory
(cd tools/bpf/bpftool && make)
Additionally, setting the O or OUTPUT variables should tell the build
system to use a custom output path, for each of these alternatives.
The following patch fixes the following invocations:
$ make tools/bpf
$ make tools/bpf O=<dir>
$ make -C tools/bpf/bpftool OUTPUT=<dir>
$ make -C tools/bpf/bpftool O=<dir>
$ cd tools/ && make bpf O=<dir>
$ cd tools/bpf/bpftool && make OUTPUT=<dir>
$ cd tools/bpf/bpftool && make O=<dir>
After this commit, the build still fails for two variants when passing
the OUTPUT variable:
$ make tools/bpf OUTPUT=<dir>
$ cd tools/ && make bpf OUTPUT=<dir>
In order to remember and check what make invocations are supposed to
work, and to document the ones which do not, a new script is added to
the BPF selftests. Note that some invocations require the kernel to be
configured, so the script skips them if no .config file is found.
v2:
- In make_and_clean(), set $ERROR to 1 when "make" returns non-zero,
even if the binary was produced.
- Run "make clean" from the correct directory (bpf/ instead of bpftool/,
when relevant).
Reported-by: Lorenz Bauer <lmb@cloudflare.com>
Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
	test_xdping.sh \
	test_bpftool_build.sh \
	test_bpftool.sh \
	test_bpftool_map.sh \
	test_bpftool_metadata.sh \
	test_doc_build.sh \
	test_xsk.sh \
	test_xdp_features.sh

TEST_PROGS_EXTENDED := \
	ima_setup.sh verify_sig_setup.sh \
	test_bpftool.py

TEST_KMODS := bpf_testmod.ko bpf_test_no_cfi.ko bpf_test_modorder_x.ko \
	bpf_test_modorder_y.ko
TEST_KMOD_TARGETS = $(addprefix $(OUTPUT)/,$(TEST_KMODS))

# Compile but not part of 'make run_tests'
TEST_GEN_PROGS_EXTENDED = \
	bench \
	flow_dissector_load \
	runqslower \
	test_cpp \
	test_lirc_mode2_user \
	veristat \
	xdp_features \
	xdp_hw_metadata \
	xdp_synproxy \
	xdping \
	xskxceiver

TEST_GEN_FILES += liburandom_read.so urandom_read sign-file uprobe_multi
|
selftests/bpf: Replace test_progs and test_maps w/ general rule
Define test runner generation meta-rule that codifies dependencies
between test runner, its tests, and its dependent BPF programs. Use that
for defining test_progs and test_maps test-runners. Also additionally define
2 flavors of test_progs:
- alu32, which builds BPF programs with 32-bit registers codegen;
- bpf_gcc, which build BPF programs using GCC, if it supports BPF target.
Overall, this is accomplished through $(eval)'ing a set of generic
rules, which defines Makefile targets dynamically at runtime. See
comments explaining the need for 2 $(evals), though.
For each test runner we have (test_maps and test_progs, currently), and,
optionally, their flavors, the logic of build process is modeled as
follows (using test_progs as an example):
- all BPF objects are in progs/:
- BPF object's .o file is built into output directory from
corresponding progs/.c file;
- all BPF objects in progs/*.c depend on all progs/*.h headers;
- all BPF objects depend on bpf_*.h helpers from libbpf (but not
libbpf archive). There is an extra rule to trigger bpf_helper_defs.h
(re-)build, if it's not present/outdated);
- build recipe for BPF object can be re-defined per test runner/flavor;
- test files are built from prog_tests/*.c:
- all such test file objects are built on an individual file basis;
- currently, every single test file depends on all BPF object files;
this might be improved in follow-up patches to use 1-to-1 dependencies,
allowing customization per individual test;
- each test runner definition can specify a list of extra .c and .h
files to be built along with the test files and test runner binary; all
such headers become an automatic dependency of each test .c file;
- because test files sometimes embed (using the .incbin assembly
directive) contents of some BPF objects at compilation time, and those
objects are expected to be in the compiler's CWD, compilation of a test
file object cds into the test runner's output directory; to support this
mode, all the include paths are turned into absolute paths using the
$(abspath) make function;
- prog_tests/test.h is automatically (re-)generated with an entry for
each .c file in prog_tests/;
- final test runner binary is linked together from test object files and
extra object files, linking together libbpf's archive as well;
- it's possible to specify extra "resource" files/targets, which will be
copied into the test runner output directory if it differs from the
Makefile-wide $(OUTPUT). This is used to ensure btf_dump test cases and
the urandom_read binary are put into a test runner's CWD for tests to
find them at runtime.
For flavored test runners, their output directory is a subdirectory of
common Makefile-wide $(OUTPUT) directory with flavor name used as
subdirectory name.
BPF objects targets might be reused between different test runners, so
extra checks are employed to not double-define them. Similarly, we have
redefinition guards for output directories and test headers.
test_verifier follows slightly different patterns and is simple enough
to not justify generalizing TEST_RUNNER_DEFINE/TEST_RUNNER_DEFINE_RULES
further to accommodate these differences. Instead, rules for
test_verifier are minimized and simplified, while preserving correctness
of dependencies.
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191016060051.2024182-6-andriin@fb.com
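The $(eval)-based meta-rule pattern described above reduces to a small sketch (simplified and hypothetical; the real TRUNNER rules are more involved and need two nested $(eval)s so that variables referenced by a generated rule are expanded before the rule text is parsed):

```
# '$$' defers expansion until the generated text is parsed as Makefile
# syntax; $(1) is substituted at $(call) time.
define RUNNER_RULES
$$(OUTPUT)/$(1): $$(OUTPUT)/$(1).o
	$$(CC) $$^ -o $$@
endef
$(eval $(call RUNNER_RULES,test_progs))
$(eval $(call RUNNER_RULES,test_maps))
```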

ifneq ($(V),1)
submake_extras := feature_display=0
endif

# override lib.mk's default rules
OVERRIDE_TARGETS := 1
override define CLEAN
	$(call msg,CLEAN)
	$(Q)$(RM) -r $(TEST_GEN_PROGS)
	$(Q)$(RM) -r $(TEST_GEN_PROGS_EXTENDED)
	$(Q)$(RM) -r $(TEST_GEN_FILES)
	$(Q)$(RM) -r $(TEST_KMODS)
	$(Q)$(RM) -r $(EXTRA_CLEAN)
	$(Q)$(MAKE) -C test_kmods clean
	$(Q)$(MAKE) docs-clean
endef

include ../lib.mk

NON_CHECK_FEAT_TARGETS := clean docs-clean
CHECK_FEAT := $(filter-out $(NON_CHECK_FEAT_TARGETS),$(or $(MAKECMDGOALS), "none"))
ifneq ($(CHECK_FEAT),)
FEATURE_USER := .selftests
FEATURE_TESTS := llvm
FEATURE_DISPLAY := $(FEATURE_TESTS)

# Makefile.feature expects OUTPUT to end with a slash
ifeq ($(shell expr $(MAKE_VERSION) \>= 4.4), 1)
$(let OUTPUT,$(OUTPUT)/,\
	$(eval include ../../../build/Makefile.feature))
else
override OUTPUT := $(OUTPUT)/
$(eval include ../../../build/Makefile.feature)
override OUTPUT := $(patsubst %/,%,$(OUTPUT))
endif
endif

ifneq ($(SKIP_LLVM),1)
ifeq ($(feature-llvm),1)
  LLVM_CFLAGS += -DHAVE_LLVM_SUPPORT
  LLVM_CONFIG_LIB_COMPONENTS := mcdisassembler all-targets
  # both llvm-config and lib.mk add -D_GNU_SOURCE, which ends up as a conflict
  LLVM_CFLAGS += $(filter-out -D_GNU_SOURCE,$(shell $(LLVM_CONFIG) --cflags))
  # Prefer linking statically if it's available, otherwise fall back to shared
  ifeq ($(shell $(LLVM_CONFIG) --link-static --libs >/dev/null 2>&1 && echo static),static)
    LLVM_LDLIBS += $(shell $(LLVM_CONFIG) --link-static --libs $(LLVM_CONFIG_LIB_COMPONENTS))
    LLVM_LDLIBS += $(filter-out -lxml2,$(shell $(LLVM_CONFIG) --link-static --system-libs $(LLVM_CONFIG_LIB_COMPONENTS)))
    LLVM_LDLIBS += -lstdc++
  else
    LLVM_LDLIBS += $(shell $(LLVM_CONFIG) --link-shared --libs $(LLVM_CONFIG_LIB_COMPONENTS))
  endif
  LLVM_LDFLAGS += $(shell $(LLVM_CONFIG) --ldflags)
endif
endif

SCRATCH_DIR := $(OUTPUT)/tools
BUILD_DIR := $(SCRATCH_DIR)/build
INCLUDE_DIR := $(SCRATCH_DIR)/include
BPFOBJ := $(BUILD_DIR)/libbpf/libbpf.a
ifneq ($(CROSS_COMPILE),)
HOST_BUILD_DIR := $(BUILD_DIR)/host
HOST_SCRATCH_DIR := $(OUTPUT)/host-tools
HOST_INCLUDE_DIR := $(HOST_SCRATCH_DIR)/include
else
HOST_BUILD_DIR := $(BUILD_DIR)
HOST_SCRATCH_DIR := $(SCRATCH_DIR)
HOST_INCLUDE_DIR := $(INCLUDE_DIR)
endif
HOST_BPFOBJ := $(HOST_BUILD_DIR)/libbpf/libbpf.a
RESOLVE_BTFIDS := $(HOST_BUILD_DIR)/resolve_btfids/resolve_btfids

tools/runqslower: Install libbpf headers when building
API headers from libbpf should not be accessed directly from the
library's source directory. Instead, they should be exported with "make
install_headers". Let's make sure that runqslower installs the
headers properly when building.
We use a libbpf_hdrs target to mark the logical dependency on libbpf's
headers export for a number of object files, even though the headers
should have been exported at this time (since bpftool needs them, and is
required to generate the skeleton or the vmlinux.h).
When descending from a parent Makefile, the specific output directories
for building the library and exporting the headers are configurable with
BPFOBJ_OUTPUT and BPF_DESTDIR, respectively. This is in addition to
OUTPUT, on top of which those variables are constructed by default.
Also adjust the Makefile for the BPF selftests. We pass a number of
variables to the "make" invocation, because we want to point runqslower
to the (target) libbpf shared with other tools, instead of building its
own version. In addition, runqslower relies on (target) bpftool, and we
also want to pass the proper variables to its Makefile so that bpftool
itself reuses the same libbpf.
Signed-off-by: Quentin Monnet <quentin@isovalent.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211007194438.34443-6-quentin@isovalent.com
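A hedged sketch of what consuming exported headers looks like (paths illustrative; the selftests' real libbpf invocation appears further down in this file):

```
# Build libbpf into a scratch directory and export its API headers
# there, so consumers compile against the exported copy rather than
# reaching into the library's source tree.
$(BUILD_DIR)/libbpf/libbpf.a:
	$(MAKE) -C $(TOOLSDIR)/lib/bpf OUTPUT=$(BUILD_DIR)/libbpf/ \
		DESTDIR=$(SCRATCH_DIR) prefix= all install_headers
```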

RUNQSLOWER_OUTPUT := $(BUILD_DIR)/runqslower/

VMLINUX_BTF_PATHS ?= $(if $(O),$(O)/vmlinux) \
	$(if $(KBUILD_OUTPUT),$(KBUILD_OUTPUT)/vmlinux) \
	../../../../vmlinux \
	/sys/kernel/btf/vmlinux \
	/boot/vmlinux-$(shell uname -r)
VMLINUX_BTF ?= $(abspath $(firstword $(wildcard $(VMLINUX_BTF_PATHS))))
ifeq ($(VMLINUX_BTF),)
$(error Cannot find a vmlinux for VMLINUX_BTF at any of "$(VMLINUX_BTF_PATHS)")
endif
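Since VMLINUX_BTF is assigned with '?=', it can be overridden on the command line when none of the autodetected paths fit, e.g.:
$ make VMLINUX_BTF=/path/to/vmlinux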

# Define simple and short `make test_progs`, `make test_maps`, etc. targets
# to build individual tests.
# NOTE: Semicolon at the end is critical to override lib.mk's default static
# rule for binaries.
$(notdir $(TEST_GEN_PROGS) $(TEST_KMODS) \
	 $(TEST_GEN_PROGS_EXTENDED)): %: $(OUTPUT)/% ;
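With these aliases in place, for example:
$ make test_progs
builds $(OUTPUT)/test_progs without spelling out the output path.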

# sort removes libbpf duplicates when not cross-building
selftests/bpf: Cross-compile bpftool
When the BPF selftests are cross-compiled, only a host version of
bpftool is built. This version of bpftool is used on the host side to
generate various intermediates, e.g., skeletons.
The test runners also use bpftool, so the Makefile will symlink
bpftool from the selftests/bpf root, where the test runners will look
for the tool:
| $(Q)ln -sf $(if $2,..,.)/tools/build/bpftool/bootstrap/bpftool \
| $(OUTPUT)/$(if $2,$2/)bpftool
There are two problems for cross-compilation builds:
1. There is no native (cross-compilation target) build of bpftool
2. The bootstrap/bpftool is never cross-compiled (by design)
Make sure that a native/cross-compiled version of bpftool is built,
and if CROSS_COMPILE is set, symlink the native/non-bootstrap version.
Acked-by: Quentin Monnet <quentin@isovalent.com>
Signed-off-by: Björn Töpel <bjorn@rivosinc.com>
Link: https://lore.kernel.org/r/20230214161253.183458-1-bjorn@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>

MAKE_DIRS := $(sort $(BUILD_DIR)/libbpf $(HOST_BUILD_DIR)/libbpf \
	       $(BUILD_DIR)/bpftool $(HOST_BUILD_DIR)/bpftool \
	       $(HOST_BUILD_DIR)/resolve_btfids \
	       $(RUNQSLOWER_OUTPUT) $(INCLUDE_DIR))
$(MAKE_DIRS):
	$(call msg,MKDIR,,$@)
	$(Q)mkdir -p $@

$(OUTPUT)/%.o: %.c
	$(call msg,CC,,$@)
	$(Q)$(CC) $(CFLAGS) -c $(filter %.c,$^) $(LDLIBS) -o $@

$(OUTPUT)/%:%.c
	$(call msg,BINARY,,$@)
	$(Q)$(LINK.c) $^ $(LDLIBS) -o $@

# LLVM's ld.lld doesn't support all the architectures, so use it only on
# x86 and riscv
ifeq ($(SRCARCH),$(filter $(SRCARCH),x86 riscv))
LLD := lld
else
LLD := $(shell command -v $(LD))
endif

# Filter out -static for liburandom_read.so and its dependent targets so that
# static builds do not fail. Static builds leave urandom_read relying on
# system-wide shared libraries.
$(OUTPUT)/liburandom_read.so: urandom_read_lib1.c urandom_read_lib2.c liburandom_read.map
	$(call msg,LIB,,$@)
	$(Q)$(CLANG) $(CLANG_TARGET_ARCH) \
		$(filter-out -static,$(CFLAGS) $(LDFLAGS)) \
		$(filter %.c,$^) $(filter-out -static,$(LDLIBS)) \
		-Wno-unused-command-line-argument \
		-fuse-ld=$(LLD) -Wl,-znoseparate-code -Wl,--build-id=sha1 \
		-Wl,--version-script=liburandom_read.map \
		-fPIC -shared -o $@

$(OUTPUT)/urandom_read: urandom_read.c urandom_read_aux.c $(OUTPUT)/liburandom_read.so
	$(call msg,BINARY,,$@)
	$(Q)$(CLANG) $(CLANG_TARGET_ARCH) \
		$(filter-out -static,$(CFLAGS) $(LDFLAGS)) $(filter %.c,$^) \
		-Wno-unused-command-line-argument \
		-lurandom_read $(filter-out -static,$(LDLIBS)) -L$(OUTPUT) \
		-fuse-ld=$(LLD) -Wl,-znoseparate-code -Wl,--build-id=sha1 \
		-Wl,-rpath=. -o $@

$(OUTPUT)/sign-file: ../../../../scripts/sign-file.c
	$(call msg,SIGN-FILE,,$@)
	$(Q)$(CC) $(shell $(PKG_CONFIG) --cflags libcrypto 2> /dev/null) \
		  $< -o $@ \
		  $(shell $(PKG_CONFIG) --libs libcrypto 2> /dev/null || echo -lcrypto)

# This should really be a grouped target, but make versions before 4.3 don't
# support that for regular rules. However, pattern matching rules are
# implicitly treated as grouped even with older versions of make, so as a
# workaround, the subst() turns the rule into a pattern matching rule.
$(addprefix test_kmods/,$(subst .ko,%ko,$(TEST_KMODS))): $(VMLINUX_BTF) $(RESOLVE_BTFIDS) $(wildcard test_kmods/Makefile test_kmods/*.[ch])
	$(Q)$(RM) test_kmods/*.ko test_kmods/*.mod.o # force re-compilation
	$(Q)$(MAKE) $(submake_extras) -C test_kmods \
		RESOLVE_BTFIDS=$(RESOLVE_BTFIDS) \
		EXTRA_CFLAGS='' EXTRA_LDFLAGS=''

$(TEST_KMOD_TARGETS): $(addprefix test_kmods/,$(TEST_KMODS))
	$(call msg,MOD,,$@)
	$(Q)cp test_kmods/$(@F) $@
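Why the subst() trick works can be shown with a minimal, hypothetical example (builder.sh stands in for the real kmod build):

```
# $(subst .ko,%ko,a.ko b.ko) -> "a%ko b%ko". A rule whose targets all
# contain '%' is a pattern rule, and pattern rules are implicitly
# grouped: one recipe run is assumed to produce every matched target.
KMODS := a.ko b.ko
$(subst .ko,%ko,$(KMODS)): builder.sh
	./builder.sh $(KMODS)   # runs once, yields both a.ko and b.ko
```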

DEFAULT_BPFTOOL := $(HOST_SCRATCH_DIR)/sbin/bpftool
ifneq ($(CROSS_COMPILE),)
CROSS_BPFTOOL := $(SCRATCH_DIR)/sbin/bpftool
TRUNNER_BPFTOOL := $(CROSS_BPFTOOL)
USE_BOOTSTRAP := ""
else
TRUNNER_BPFTOOL := $(DEFAULT_BPFTOOL)
USE_BOOTSTRAP := "bootstrap/"
endif

$(OUTPUT)/runqslower: $(BPFOBJ) | $(DEFAULT_BPFTOOL) $(RUNQSLOWER_OUTPUT)
	$(Q)$(MAKE) $(submake_extras) -C $(TOOLSDIR)/bpf/runqslower \
		    OUTPUT=$(RUNQSLOWER_OUTPUT) VMLINUX_BTF=$(VMLINUX_BTF) \
		    BPFTOOL_OUTPUT=$(HOST_BUILD_DIR)/bpftool/ \
		    BPFOBJ_OUTPUT=$(BUILD_DIR)/libbpf/ \
		    BPFOBJ=$(BPFOBJ) BPF_INCLUDE=$(INCLUDE_DIR) \
		    BPF_TARGET_ENDIAN=$(BPF_TARGET_ENDIAN) \
		    EXTRA_CFLAGS='-g $(OPT_FLAGS) $(SAN_CFLAGS) $(EXTRA_CFLAGS)' \
		    EXTRA_LDFLAGS='$(SAN_LDFLAGS) $(EXTRA_LDFLAGS)' && \
	    cp $(RUNQSLOWER_OUTPUT)runqslower $@

TEST_GEN_PROGS_EXTENDED += $(TRUNNER_BPFTOOL)

$(TEST_GEN_PROGS) $(TEST_GEN_PROGS_EXTENDED): $(BPFOBJ)

TESTING_HELPERS := $(OUTPUT)/testing_helpers.o
CGROUP_HELPERS := $(OUTPUT)/cgroup_helpers.o
UNPRIV_HELPERS := $(OUTPUT)/unpriv_helpers.o
TRACE_HELPERS := $(OUTPUT)/trace_helpers.o

selftests/bpf: Add --json-summary option to test_progs
Currently, test_progs outputs all stdout/stderr as it runs, and when it
is done, prints a summary.
It is non-trivial for tooling to parse that output and extract meaningful
information from it.
This change adds a new option, `--json-summary`/`-J`, that lets the caller
specify a file where `test_progs{,-no_alu32}` can write a summary of the
run in a json format that can later be parsed by tooling.
Currently, it creates a summary section with successes/skipped/failures
followed by a list of failed tests and subtests.
A test contains the following fields:
- name: the name of the test
- number: the number of the test
- message: the log message that was printed by the test.
- failed: A boolean indicating whether the test failed or not. Currently
we only output failed tests, but in the future, successful tests could
be added.
- subtests: A list of subtests associated with this test.
A subtest contains the following fields:
- name: same as above
- number: same as above
- message: the log message that was printed by the subtest.
- failed: same as above but for the subtest
An example run and json content below:
```
$ sudo ./test_progs -a $(grep -v '^#' ./DENYLIST.aarch64 | awk '{print
$1","}' | tr -d '\n') -j -J /tmp/test_progs.json
$ jq < /tmp/test_progs.json | head -n 30
{
"success": 29,
"success_subtest": 23,
"skipped": 3,
"failed": 28,
"results": [
{
"name": "bpf_cookie",
"number": 10,
"message": "test_bpf_cookie:PASS:skel_open 0 nsec\n",
"failed": true,
"subtests": [
{
"name": "multi_kprobe_link_api",
"number": 2,
"message": "kprobe_multi_link_api_subtest:PASS:load_kallsyms 0 nsec\nlibbpf: extern 'bpf_testmod_fentry_test1' (strong): not resolved\nlibbpf: failed to load object 'kprobe_multi'\nlibbpf: failed to load BPF skeleton 'kprobe_multi': -3\nkprobe_multi_link_api_subtest:FAIL:fentry_raw_skel_load unexpected error: -3\n",
"failed": true
},
{
"name": "multi_kprobe_attach_api",
"number": 3,
"message": "libbpf: extern 'bpf_testmod_fentry_test1' (strong): not resolved\nlibbpf: failed to load object 'kprobe_multi'\nlibbpf: failed to load BPF skeleton 'kprobe_multi': -3\nkprobe_multi_attach_api_subtest:FAIL:fentry_raw_skel_load unexpected error: -3\n",
"failed": true
},
{
"name": "lsm",
"number": 8,
"message": "lsm_subtest:PASS:lsm.link_create 0 nsec\nlsm_subtest:FAIL:stack_mprotect unexpected stack_mprotect: actual 0 != expected -1\n",
"failed": true
}
```
The file can then be used to print a summary of the test run and list of
failing tests/subtests:
```
$ jq -r < /tmp/test_progs.json '"Success: \(.success)/\(.success_subtest), Skipped: \(.skipped), Failed: \(.failed)"'
Success: 29/23, Skipped: 3, Failed: 28
$ jq -r < /tmp/test_progs.json '.results | map([
if .failed then "#\(.number) \(.name)" else empty end,
(
. as {name: $tname, number: $tnum} | .subtests | map(
if .failed then "#\($tnum)/\(.number) \($tname)/\(.name)" else empty end
)
)
]) | flatten | .[]' | head -n 20
#10 bpf_cookie
#10/2 bpf_cookie/multi_kprobe_link_api
#10/3 bpf_cookie/multi_kprobe_attach_api
#10/8 bpf_cookie/lsm
#15 bpf_mod_race
#15/1 bpf_mod_race/ksym (used_btfs UAF)
#15/2 bpf_mod_race/kfunc (kfunc_btf_tab UAF)
#36 cgroup_hierarchical_stats
#61 deny_namespace
#61/1 deny_namespace/unpriv_userns_create_no_bpf
#73 fexit_stress
#83 get_func_ip_test
#99 kfunc_dynptr_param
#99/1 kfunc_dynptr_param/dynptr_data_null
#99/4 kfunc_dynptr_param/dynptr_data_null
#100 kprobe_multi_bench_attach
#100/1 kprobe_multi_bench_attach/kernel
#100/2 kprobe_multi_bench_attach/modules
#101 kprobe_multi_test
#101/1 kprobe_multi_test/skel_api
```
Signed-off-by: Manu Bretelle <chantr4@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20230317163256.3809328-1-chantr4@gmail.com

JSON_WRITER := $(OUTPUT)/json_writer.o
CAP_HELPERS := $(OUTPUT)/cap_helpers.o
NETWORK_HELPERS := $(OUTPUT)/network_helpers.o

$(OUTPUT)/test_sockmap: $(CGROUP_HELPERS) $(TESTING_HELPERS)
$(OUTPUT)/test_tcpnotify_user: $(CGROUP_HELPERS) $(TESTING_HELPERS) $(TRACE_HELPERS)
$(OUTPUT)/test_sock_fields: $(CGROUP_HELPERS) $(TESTING_HELPERS)
$(OUTPUT)/test_tag: $(TESTING_HELPERS)
$(OUTPUT)/test_lirc_mode2_user: $(TESTING_HELPERS)
$(OUTPUT)/xdping: $(TESTING_HELPERS)
$(OUTPUT)/flow_dissector_load: $(TESTING_HELPERS)
$(OUTPUT)/test_maps: $(TESTING_HELPERS)
$(OUTPUT)/test_verifier: $(TESTING_HELPERS) $(CAP_HELPERS) $(UNPRIV_HELPERS)
$(OUTPUT)/xsk.o: $(BPFOBJ)

BPFTOOL ?= $(DEFAULT_BPFTOOL)
$(DEFAULT_BPFTOOL): $(wildcard $(BPFTOOLDIR)/*.[ch] $(BPFTOOLDIR)/Makefile) \
		    $(HOST_BPFOBJ) | $(HOST_BUILD_DIR)/bpftool
	$(Q)$(MAKE) $(submake_extras) -C $(BPFTOOLDIR) \
		ARCH= CROSS_COMPILE= CC="$(HOSTCC)" LD="$(HOSTLD)" \
		EXTRA_CFLAGS='-g $(OPT_FLAGS) $(EXTRA_CFLAGS)' \
		EXTRA_LDFLAGS='$(EXTRA_LDFLAGS)' \
		OUTPUT=$(HOST_BUILD_DIR)/bpftool/ \
		LIBBPF_OUTPUT=$(HOST_BUILD_DIR)/libbpf/ \
		LIBBPF_DESTDIR=$(HOST_SCRATCH_DIR)/ \
		prefix= DESTDIR=$(HOST_SCRATCH_DIR)/ install-bin

ifneq ($(CROSS_COMPILE),)
$(CROSS_BPFTOOL): $(wildcard $(BPFTOOLDIR)/*.[ch] $(BPFTOOLDIR)/Makefile) \
		  $(BPFOBJ) | $(BUILD_DIR)/bpftool
	$(Q)$(MAKE) $(submake_extras) -C $(BPFTOOLDIR) \
		ARCH=$(ARCH) CROSS_COMPILE=$(CROSS_COMPILE) \
		EXTRA_CFLAGS='-g $(OPT_FLAGS) $(EXTRA_CFLAGS)' \
		EXTRA_LDFLAGS='$(EXTRA_LDFLAGS)' \
		OUTPUT=$(BUILD_DIR)/bpftool/ \
		LIBBPF_OUTPUT=$(BUILD_DIR)/libbpf/ \
		LIBBPF_DESTDIR=$(SCRATCH_DIR)/ \
		prefix= DESTDIR=$(SCRATCH_DIR)/ install-bin
endif

ifneq ($(SKIP_DOCS),1)
all: docs
endif

docs:
	$(Q)RST2MAN_OPTS="--exit-status=1" $(MAKE) $(submake_extras) \
		-f Makefile.docs \
		prefix= OUTPUT=$(OUTPUT)/ DESTDIR=$(OUTPUT)/ $@

docs-clean:
	$(Q)$(MAKE) $(submake_extras) \
		-f Makefile.docs \
		prefix= OUTPUT=$(OUTPUT)/ DESTDIR=$(OUTPUT)/ $@

$(BPFOBJ): $(wildcard $(BPFDIR)/*.[ch] $(BPFDIR)/Makefile) \
	   $(APIDIR)/linux/bpf.h \
	   | $(BUILD_DIR)/libbpf
	$(Q)$(MAKE) $(submake_extras) -C $(BPFDIR) OUTPUT=$(BUILD_DIR)/libbpf/ \
		    EXTRA_CFLAGS='-g $(OPT_FLAGS) $(SAN_CFLAGS) $(EXTRA_CFLAGS)' \
		    EXTRA_LDFLAGS='$(SAN_LDFLAGS) $(EXTRA_LDFLAGS)' \
		    DESTDIR=$(SCRATCH_DIR) prefix= all install_headers

ifneq ($(BPFOBJ),$(HOST_BPFOBJ))
$(HOST_BPFOBJ): $(wildcard $(BPFDIR)/*.[ch] $(BPFDIR)/Makefile) \
		$(APIDIR)/linux/bpf.h \
		| $(HOST_BUILD_DIR)/libbpf
	$(Q)$(MAKE) $(submake_extras) -C $(BPFDIR) \
		    ARCH= CROSS_COMPILE= \
		    EXTRA_CFLAGS='-g $(OPT_FLAGS) $(EXTRA_CFLAGS)' \
		    EXTRA_LDFLAGS='$(EXTRA_LDFLAGS)' \
		    OUTPUT=$(HOST_BUILD_DIR)/libbpf/ \
		    CC="$(HOSTCC)" LD="$(HOSTLD)" \
		    DESTDIR=$(HOST_SCRATCH_DIR)/ prefix= all install_headers
endif
# vmlinux.h is first dumped to a temporary file and then compared to
# the previous version. This helps to avoid unnecessary re-builds of
# $(TRUNNER_BPF_OBJS)
$(INCLUDE_DIR)/vmlinux.h: $(VMLINUX_BTF) $(BPFTOOL) | $(INCLUDE_DIR)
ifeq ($(VMLINUX_H),)
	$(call msg,GEN,,$@)
	$(Q)$(BPFTOOL) btf dump file $(VMLINUX_BTF) format c > $(INCLUDE_DIR)/.vmlinux.h.tmp
	$(Q)cmp -s $(INCLUDE_DIR)/.vmlinux.h.tmp $@ || mv $(INCLUDE_DIR)/.vmlinux.h.tmp $@
else
	$(call msg,CP,,$@)
	$(Q)cp "$(VMLINUX_H)" $@
endif
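If a suitable pre-generated header already exists, the BTF dump can be bypassed by pointing VMLINUX_H at it, e.g.:
$ make VMLINUX_H=/path/to/vmlinux.h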

$(RESOLVE_BTFIDS): $(HOST_BPFOBJ) | $(HOST_BUILD_DIR)/resolve_btfids \
		       $(TOOLSDIR)/bpf/resolve_btfids/main.c \
		       $(TOOLSDIR)/lib/rbtree.c \
		       $(TOOLSDIR)/lib/zalloc.c \
		       $(TOOLSDIR)/lib/string.c \
		       $(TOOLSDIR)/lib/ctype.c \
		       $(TOOLSDIR)/lib/str_error_r.c
	$(Q)$(MAKE) $(submake_extras) -C $(TOOLSDIR)/bpf/resolve_btfids \
		CC="$(HOSTCC)" LD="$(HOSTLD)" AR="$(HOSTAR)" \
		LIBBPF_INCLUDE=$(HOST_INCLUDE_DIR) \
		OUTPUT=$(HOST_BUILD_DIR)/resolve_btfids/ BPFOBJ=$(HOST_BPFOBJ)

# Get Clang's default includes on this system, as opposed to those seen by
# '--target=bpf'. This fixes "missing" files on some architectures/distros,
# such as asm/byteorder.h, asm/socket.h, asm/sockios.h, sys/cdefs.h etc.
#
# Use '-idirafter': Don't interfere with include mechanics except where the
# build would have failed anyway.
selftests/bpf: add bpf-gcc support
Now that binutils and gcc support for BPF is upstream, make use of it in
BPF selftests using alu32-like approach. Share as much as possible of
CFLAGS calculation with clang.
Fixes only obvious issues, leaving more complex ones for later:
- Use gcc-provided bpf-helpers.h instead of manually defining the
helpers, change bpf_helpers.h include guard to avoid conflict.
- Include <linux/stddef.h> for __always_inline.
- Add $(OUTPUT)/../usr/include to include path in order to use local
kernel headers instead of system kernel headers when building with O=.
In order to activate the bpf-gcc support, one needs to configure
binutils and gcc with --target=bpf and make them available in $PATH. In
particular, gcc must be installed as `bpf-gcc`, which is the default.
Right now with binutils 25a2915e8dba and gcc r275589 only a handful of
tests work:
# ./test_progs_bpf_gcc
# Summary: 7/39 PASSED, 1 SKIPPED, 98 FAILED
The reasons for those failures are as follows:
- Build errors:
- `error: too many function arguments for eBPF` for __always_inline
functions read_str_var and read_map_var - must be inlining issue,
and for process_l3_headers_v6, which relies on optimizing away
function arguments.
- `error: indirect call in function, which are not supported by eBPF`
where there are no obvious indirect calls in the source code, e.g.
in __encap_ipip_none.
- `error: field 'lock' has incomplete type` for fields of `struct
bpf_spin_lock` type - bpf_spin_lock is re#defined by bpf-helpers.h,
so its usage is sensitive to order of #includes.
- `error: eBPF stack limit exceeded` in sysctl_tcp_mem.
- Load errors:
- Missing object files due to above build errors.
- `libbpf: failed to create map (name: 'test_ver.bss')`.
- `libbpf: object file doesn't contain bpf program`.
- `libbpf: Program '.text' contains unrecognized relo data pointing to
section 0`.
- `libbpf: BTF is required, but is missing or corrupted` - no BTF
support in gcc yet.
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Cc: Jose E. Marchesi <jose.marchesi@oracle.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>

define get_sys_includes
$(shell $(1) $(2) -v -E - </dev/null 2>&1 \
	| sed -n '/<...> search starts here:/,/End of search list./{ s| \(/.*\)|-idirafter \1|p }') \
$(shell $(1) $(2) -dM -E - </dev/null | grep '__riscv_xlen ' | awk '{printf("-D__riscv_xlen=%d -D__BITS_PER_LONG=%d", $$3, $$3)}') \
$(shell $(1) $(2) -dM -E - </dev/null | grep '__loongarch_grlen ' | awk '{printf("-D__BITS_PER_LONG=%d", $$3)}') \
$(shell $(1) $(2) -dM -E - </dev/null | grep -E 'MIPS(EL|EB)|_MIPS_SZ(PTR|LONG) |_MIPS_SIM |_ABI(O32|N32|64) ' | awk '{printf("-D%s=%s ", $$2, $$3)}')
endef
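A usage illustration of get_sys_includes (the variable name and sample flags below are made up; the Makefile's real use is CLANG_SYS_INCLUDES further down):

```
# Expand the native clang include search path once; on a typical
# x86_64 distro this yields flags along the lines of:
#   -idirafter /usr/lib/clang/17/include -idirafter /usr/include
NATIVE_SYS_INCLUDES := $(call get_sys_includes,clang,)
```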
# Determine target endianness.
IS_LITTLE_ENDIAN := $(shell $(CC) -dM -E - </dev/null | \
			grep 'define __BYTE_ORDER__ __ORDER_LITTLE_ENDIAN__')
MENDIAN := $(if $(IS_LITTLE_ENDIAN),-mlittle-endian,-mbig-endian)
BPF_TARGET_ENDIAN := $(if $(IS_LITTLE_ENDIAN),--target=bpfel,--target=bpfeb)

ifneq ($(CROSS_COMPILE),)
CLANG_TARGET_ARCH = --target=$(notdir $(CROSS_COMPILE:%-=%))
endif

CLANG_SYS_INCLUDES = $(call get_sys_includes,$(CLANG),$(CLANG_TARGET_ARCH))
BPF_CFLAGS = -g -Wall -Werror -D__TARGET_ARCH_$(SRCARCH) $(MENDIAN) \
	     -I$(INCLUDE_DIR) -I$(CURDIR) -I$(APIDIR) \
	     -I$(abspath $(OUTPUT)/../usr/include) \
	     -std=gnu11 \
	     -fno-strict-aliasing \
	     -Wno-compare-distinct-pointer-types
# TODO: enable me -Wsign-compare

CLANG_CFLAGS = $(CLANG_SYS_INCLUDES)
$(OUTPUT)/test_l4lb_noinline.o: BPF_CFLAGS += -fno-inline
$(OUTPUT)/test_xdp_noinline.o: BPF_CFLAGS += -fno-inline

$(OUTPUT)/flow_dissector_load.o: flow_dissector_load.h
$(OUTPUT)/cgroup_getset_retval_hooks.o: cgroup_getset_retval_hooks.h

# Build BPF object using Clang
# $1 - input .c file
# $2 - output .o file
# $3 - CFLAGS
# $4 - binary name
selftests/bpf: Replace test_progs and test_maps w/ general rule
Define a test runner generation meta-rule that codifies dependencies
between a test runner, its tests, and its dependent BPF programs. Use that
for defining the test_progs and test_maps test runners. Additionally, define
two flavors of test_progs:
- alu32, which builds BPF programs with 32-bit registers codegen;
- bpf_gcc, which builds BPF programs using GCC, if it supports the BPF target.
Overall, this is accomplished through $(eval)'ing a set of generic
rules, which defines Makefile targets dynamically at runtime. See the
comments explaining the need for 2 $(eval)s.
For each test runner we have (test_maps and test_progs, currently), and,
optionally, their flavors, the logic of the build process is modeled as
follows (using test_progs as an example):
- all BPF objects are in progs/:
  - a BPF object's .o file is built into the output directory from the
    corresponding progs/.c file;
  - all BPF objects in progs/*.c depend on all progs/*.h headers;
  - all BPF objects depend on bpf_*.h helpers from libbpf (but not the
    libbpf archive). There is an extra rule to trigger a bpf_helper_defs.h
    (re-)build, if it's not present/outdated;
  - the build recipe for a BPF object can be re-defined per test runner/flavor;
- test files are built from prog_tests/*.c:
  - all such test file objects are built on an individual file basis;
  - currently, every single test file depends on all BPF object files;
    this might be improved in follow-up patches to do 1-to-1 dependencies,
    while allowing this to be customized per individual test;
- each test runner definition can specify a list of extra .c and .h
  files to be built along with the test files and test runner binary; all such
  headers become an automatic dependency of each test .c file;
- because test files sometimes embed (using the .incbin assembly
  directive) the contents of some BPF objects at compilation time, which are
  expected to be in the compiler's CWD, compilation of a test file object
  cds into the test runner's output directory; to support this mode, all
  include paths are turned into absolute paths using the $(abspath) make
  function;
- prog_tests/tests.h is automatically (re-)generated with an entry for
  each .c file in prog_tests/;
- the final test runner binary is linked together from test object files and
  extra object files, linking in libbpf's archive as well;
- it's possible to specify extra "resource" files/targets, which will be
  copied into the test runner's output directory, if it differs from the
  Makefile-wide $(OUTPUT). This is used to ensure btf_dump test cases and
  the urandom_read binary are put into a test runner's CWD for tests to find
  them at runtime.
For flavored test runners, their output directory is a subdirectory of the
common Makefile-wide $(OUTPUT) directory, with the flavor name used as the
subdirectory name.
BPF object targets might be reused between different test runners, so
extra checks are employed to not double-define them. Similarly, we have
redefinition guards for output directories and test headers.
test_verifier follows slightly different patterns and is simple enough
to not justify generalizing DEFINE_TEST_RUNNER/DEFINE_TEST_RUNNER_RULES
further to accommodate these differences. Instead, the rules for
test_verifier are minimized and simplified, while preserving correctness
of dependencies.
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191016060051.2024182-6-andriin@fb.com
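Since the commit message leans on the need for two $(eval)s, here is a minimal self-contained sketch of that pattern (names are illustrative, not taken from this Makefile). The point: single-dollar references inside the rules macro are substituted when it is $(call)-ed, so that call must be deferred, via the nested escaped $$(eval), until the outer $(eval) has actually processed the variable assignments:

define DEFINE_DEMO
DEMO_OUT := out-$1
$$(eval $$(call DEFINE_DEMO_RULES,$1))
endef

# $(DEMO_OUT) below uses a single dollar, so it is substituted when
# DEFINE_DEMO_RULES is $(call)-ed -- by which time DEMO_OUT is set.
define DEFINE_DEMO_RULES
$(DEMO_OUT)/$1.stamp: ; mkdir -p $(DEMO_OUT) && touch $$@
endef

$(eval $(call DEFINE_DEMO,progs))  # yields a rule for out-progs/progs.stamp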
define CLANG_BPF_BUILD_RULE
$(call msg,CLNG-BPF,$4,$2)
$(Q)$(CLANG) $3 -O2 $(BPF_TARGET_ENDIAN) -c $1 -mcpu=v3 -o $2
endef
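To make the parameter convention above concrete: used inside a rule's recipe (as the TRUNNER rules further below do), an invocation along these lines (file and variable names here are illustrative),

$(call CLANG_BPF_BUILD_RULE,progs/test_foo.c,$(OUTPUT)/test_foo.bpf.o,$(BPF_CFLAGS),test_progs)

expands to roughly

clang $(BPF_CFLAGS) -O2 $(BPF_TARGET_ENDIAN) -c progs/test_foo.c -mcpu=v3 -o $(OUTPUT)/test_foo.bpf.o

plus the CLNG-BPF progress line emitted by $(call msg,...).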
# Similar to CLANG_BPF_BUILD_RULE, but with disabled alu32
define CLANG_NOALU32_BPF_BUILD_RULE
$(call msg,CLNG-BPF,$4,$2)
$(Q)$(CLANG) $3 -O2 $(BPF_TARGET_ENDIAN) -c $1 -mcpu=v2 -o $2
endef
# Similar to CLANG_BPF_BUILD_RULE, but with cpu-v4
define CLANG_CPUV4_BPF_BUILD_RULE
$(call msg,CLNG-BPF,$4,$2)
$(Q)$(CLANG) $3 -O2 $(BPF_TARGET_ENDIAN) -c $1 -mcpu=v4 -o $2
endef
# Build BPF object using GCC
define GCC_BPF_BUILD_RULE
$(call msg,GCC-BPF,$4,$2)
$(Q)$(BPF_GCC) $3 -DBPF_NO_PRESERVE_ACCESS_INDEX -Wno-attributes -O2 -c $1 -o $2
endef
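Each of the four *_BPF_BUILD_RULE recipes above is selected per test runner/flavor, as the commit message describes. One plausible way a caller wires this up before instantiating a runner (a sketch assuming DEFINE_TEST_RUNNER below is the entry point, not copied from the real file):

TRUNNER_BPF_BUILD_RULE := CLANG_BPF_BUILD_RULE
$(eval $(call DEFINE_TEST_RUNNER,test_progs))

TRUNNER_BPF_BUILD_RULE := GCC_BPF_BUILD_RULE
$(eval $(call DEFINE_TEST_RUNNER,test_progs,bpf_gcc))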
SKEL_BLACKLIST := btf__% test_pinning_invalid.c test_sk_assign.c
LINKED_SKELS := test_static_linked.skel.h linked_funcs.skel.h \
linked_vars.skel.h linked_maps.skel.h \
test_subskeleton.skel.h test_subskeleton_lib.skel.h \
test_usdt.skel.h
LSKELS := fentry_test.c fexit_test.c fexit_sleep.c atomics.c \
trace_printk.c trace_vprintk.c map_ptr_kern.c \
core_kern.c core_kern_overflow.c test_ringbuf.c \
test_ringbuf_n.c test_ringbuf_map_key.c test_ringbuf_write.c
# Generate both light skeleton and libbpf skeleton for these
LSKELS_EXTRA := test_ksyms_module.c test_ksyms_weak.c kfunc_call_test.c \
kfunc_call_test_subprog.c
SKEL_BLACKLIST += $$(LSKELS)
test_static_linked.skel.h-deps := test_static_linked1.bpf.o test_static_linked2.bpf.o
linked_funcs.skel.h-deps := linked_funcs1.bpf.o linked_funcs2.bpf.o
linked_vars.skel.h-deps := linked_vars1.bpf.o linked_vars2.bpf.o
linked_maps.skel.h-deps := linked_maps1.bpf.o linked_maps2.bpf.o
# In the subskeleton case, we want the test_subskeleton_lib.subskel.h file
# but that's created as a side-effect of the skel.h generation.
test_subskeleton.skel.h-deps := test_subskeleton_lib2.bpf.o test_subskeleton_lib.bpf.o test_subskeleton.bpf.o
test_subskeleton_lib.skel.h-deps := test_subskeleton_lib2.bpf.o test_subskeleton_lib.bpf.o
test_usdt.skel.h-deps := test_usdt.bpf.o test_usdt_multispec.bpf.o
xsk_xdp_progs.skel.h-deps := xsk_xdp_progs.bpf.o
xdp_hw_metadata.skel.h-deps := xdp_hw_metadata.bpf.o
xdp_features.skel.h-deps := xdp_features.bpf.o
LINKED_BPF_OBJS := $(foreach skel,$(LINKED_SKELS),$($(skel)-deps))
LINKED_BPF_SRCS := $(patsubst %.bpf.o,%.c,$(LINKED_BPF_OBJS))
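As a worked expansion of the two lines above: for the linked_funcs.skel.h entry of $(LINKED_SKELS), $($(skel)-deps) resolves to $(linked_funcs.skel.h-deps), i.e. linked_funcs1.bpf.o linked_funcs2.bpf.o, so those land in $(LINKED_BPF_OBJS); the patsubst then maps them back to linked_funcs1.c linked_funcs2.c in $(LINKED_BPF_SRCS).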
HEADERS_FOR_BPF_OBJS := $(wildcard $(BPFDIR)/*.bpf.h) \
$(addprefix $(BPFDIR)/, bpf_core_read.h \
bpf_endian.h \
bpf_helpers.h \
bpf_tracing.h)
# Set up extra TRUNNER_XXX "temporary" variables in the environment (relies on
# $(eval)) and pass control to DEFINE_TEST_RUNNER_RULES.
# Parameters:
# $1 - test runner base binary name (e.g., test_progs)
# $2 - test runner extra "flavor" (e.g., no_alu32, cpuv4, bpf_gcc, etc)
define DEFINE_TEST_RUNNER
TRUNNER_OUTPUT := $(OUTPUT)$(if $2,/)$2
TRUNNER_BINARY := $1$(if $2,-)$2
TRUNNER_TEST_OBJS := $$(patsubst %.c,$$(TRUNNER_OUTPUT)/%.test.o, \
$$(notdir $$(wildcard $(TRUNNER_TESTS_DIR)/*.c)))
TRUNNER_EXTRA_OBJS := $$(patsubst %.c,$$(TRUNNER_OUTPUT)/%.o, \
$$(filter %.c,$(TRUNNER_EXTRA_SOURCES)))
TRUNNER_EXTRA_HDRS := $$(filter %.h,$(TRUNNER_EXTRA_SOURCES))
TRUNNER_TESTS_HDR := $(TRUNNER_TESTS_DIR)/tests.h
TRUNNER_BPF_SRCS := $$(notdir $$(wildcard $(TRUNNER_BPF_PROGS_DIR)/*.c))
TRUNNER_BPF_OBJS := $$(patsubst %.c,$$(TRUNNER_OUTPUT)/%.bpf.o, $$(TRUNNER_BPF_SRCS))
TRUNNER_BPF_SKELS := $$(patsubst %.c,$$(TRUNNER_OUTPUT)/%.skel.h, \
$$(filter-out $(SKEL_BLACKLIST) $(LINKED_BPF_SRCS),\
$$(TRUNNER_BPF_SRCS)))
TRUNNER_BPF_LSKELS := $$(patsubst %.c,$$(TRUNNER_OUTPUT)/%.lskel.h, $$(LSKELS) $$(LSKELS_EXTRA))
TRUNNER_BPF_SKELS_LINKED := $$(addprefix $$(TRUNNER_OUTPUT)/,$(LINKED_SKELS))
TEST_GEN_FILES += $$(TRUNNER_BPF_OBJS)
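A note on the dollar-escaping in the assignments above (standard GNU make behavior, not something specific to this file): single-dollar references such as $(TRUNNER_TESTS_DIR) are substituted already when DEFINE_TEST_RUNNER is $(call)-ed, because the caller has set them beforehand; double-dollar references such as $$(TRUNNER_OUTPUT) must survive that first expansion, since TRUNNER_OUTPUT is only assigned once the $(eval) of this very block runs.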
# Evaluate rules now with extra TRUNNER_XXX variables above already defined
$$(eval $$(call DEFINE_TEST_RUNNER_RULES,$1,$2))
endef
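Tracing the first two assignments of this define for a flavored runner: $(call DEFINE_TEST_RUNNER,test_progs,no_alu32) yields TRUNNER_OUTPUT = $(OUTPUT)/no_alu32 and TRUNNER_BINARY = test_progs-no_alu32, while a flavorless $(call DEFINE_TEST_RUNNER,test_maps) leaves them as $(OUTPUT) and test_maps, matching the flavored-subdirectory scheme described in the commit message.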
# Using TRUNNER_XXX variables, provided by callers of DEFINE_TEST_RUNNER and
# set up by DEFINE_TEST_RUNNER itself, create test runner build rules with:
# $1 - test runner base binary name (e.g., test_progs)
# $2 - test runner extra "flavor" (e.g., no_alu32, cpuv4, bpf_gcc, etc)
define DEFINE_TEST_RUNNER_RULES
ifeq ($($(TRUNNER_OUTPUT)-dir),)
$(TRUNNER_OUTPUT)-dir := y
$(TRUNNER_OUTPUT):
$$(call msg,MKDIR,,$$@)
$(Q)mkdir -p $$@
endif
# ensure we set up BPF objects generation rule just once for a given
# input/output directory combination
ifeq ($($(TRUNNER_BPF_PROGS_DIR)$(if $2,-)$2-bpfobjs),)
$(TRUNNER_BPF_PROGS_DIR)$(if $2,-)$2-bpfobjs := y
$(TRUNNER_BPF_OBJS): $(TRUNNER_OUTPUT)/%.bpf.o: \
$(TRUNNER_BPF_PROGS_DIR)/%.c \
$(TRUNNER_BPF_PROGS_DIR)/*.h \
$$(INCLUDE_DIR)/vmlinux.h \
$(HEADERS_FOR_BPF_OBJS) \
| $(TRUNNER_OUTPUT) $$(BPFOBJ)
bpf: Use -Wno-error in certain tests when building with GCC
Certain BPF selftests contain code that, albeit being legal C, triggers
warnings in GCC that cannot be disabled. This is the case for example
for the tests
progs/btf_dump_test_case_bitfields.c
progs/btf_dump_test_case_namespacing.c
progs/btf_dump_test_case_packing.c
progs/btf_dump_test_case_padding.c
progs/btf_dump_test_case_syntax.c
which contain struct type declarations inside function parameter
lists. This is problematic, because:
- The BPF selftests are built with -Werror.
- The Clang and GCC compilers sometimes differ in their handling of
warnings. One compiler may emit warnings for code that the other
compiles silently, and one compiler may offer the possibility to
disable certain warnings, while the other doesn't.
In order to overcome this problem, this patch modifies the
tools/testing/selftests/bpf/Makefile in order to:
1. Enable the possibility of specifying per-source-file extra CFLAGS.
This is done by defining a make variable like:
<source-filename>-CFLAGS := <whateverflags>
and then modifying the proper make rule in order to use these flags
when compiling <source-filename>.
2. Use the mechanism above to add -Wno-error to CFLAGS for the
following selftests:
progs/btf_dump_test_case_bitfields.c
progs/btf_dump_test_case_namespacing.c
progs/btf_dump_test_case_packing.c
progs/btf_dump_test_case_padding.c
progs/btf_dump_test_case_syntax.c
Note that the corresponding -CFLAGS variables for these files are
defined only if the selftests are being built with GCC.
Note that, while compiler pragmas can generally be used to disable
particular warnings per file, this 1) is only possible for warnings
that can actually be disabled on the command line, i.e. that have
-Wno-FOO options, and 2) doesn't apply to -Wno-error.
Tested in bpf-next master branch.
No regressions.
Signed-off-by: Jose E. Marchesi <jose.marchesi@oracle.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20240127100702.21549-1-jose.marchesi@oracle.com
$$(call $(TRUNNER_BPF_BUILD_RULE),$$<,$$@, \
	$(TRUNNER_BPF_CFLAGS) \
	$$($$<-CFLAGS) \
	$$($$<-$2-CFLAGS),$(TRUNNER_BINARY))
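The $$($$<-CFLAGS) and $$($$<-$2-CFLAGS) arguments implement the per-source-file (and, via the $2 variant, apparently per-flavor) extra-CFLAGS hook described in the -Wno-error commit message above. Following that message's <source-filename>-CFLAGS := <whateverflags> convention, a file-specific override would look something like this (illustrative, keyed on the full $< path):

progs/btf_dump_test_case_packing.c-CFLAGS := -Wno-error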
$(TRUNNER_BPF_SKELS): %.skel.h: %.bpf.o $(BPFTOOL) | $(TRUNNER_OUTPUT)
$$(call msg,GEN-SKEL,$(TRUNNER_BINARY),$$@)
$(Q)$$(BPFTOOL) gen object $$(<:.o=.linked1.o) $$<
$(Q)$$(BPFTOOL) gen object $$(<:.o=.linked2.o) $$(<:.o=.linked1.o)
$(Q)$$(BPFTOOL) gen object $$(<:.o=.linked3.o) $$(<:.o=.linked2.o)
$(Q)diff $$(<:.o=.linked2.o) $$(<:.o=.linked3.o)
$(Q)$$(BPFTOOL) gen skeleton $$(<:.o=.linked3.o) name $$(notdir $$(<:.bpf.o=)) > $$@
$(Q)$$(BPFTOOL) gen subskeleton $$(<:.o=.linked3.o) name $$(notdir $$(<:.bpf.o=)) > $$(@:.skel.h=.subskel.h)
$(Q)rm -f $$(<:.o=.linked1.o) $$(<:.o=.linked2.o) $$(<:.o=.linked3.o)
$(TRUNNER_BPF_LSKELS): %.lskel.h: %.bpf.o $(BPFTOOL) | $(TRUNNER_OUTPUT)
$$(call msg,GEN-SKEL,$(TRUNNER_BINARY),$$@)
$(Q)$$(BPFTOOL) gen object $$(<:.o=.llinked1.o) $$<
$(Q)$$(BPFTOOL) gen object $$(<:.o=.llinked2.o) $$(<:.o=.llinked1.o)
$(Q)$$(BPFTOOL) gen object $$(<:.o=.llinked3.o) $$(<:.o=.llinked2.o)
$(Q)diff $$(<:.o=.llinked2.o) $$(<:.o=.llinked3.o)
$(Q)$$(BPFTOOL) gen skeleton -L $$(<:.o=.llinked3.o) name $$(notdir $$(<:.bpf.o=_lskel)) > $$@
$(Q)rm -f $$(<:.o=.llinked1.o) $$(<:.o=.llinked2.o) $$(<:.o=.llinked3.o)
$(LINKED_BPF_OBJS): %: $(TRUNNER_OUTPUT)/%
# .SECONDEXPANSION here allows the %-deps variables to be correctly expanded as prerequisites
.SECONDEXPANSION:
$(TRUNNER_BPF_SKELS_LINKED): $(TRUNNER_OUTPUT)/%: $$$$(%-deps) $(BPFTOOL) | $(TRUNNER_OUTPUT)
$$(call msg,LINK-BPF,$(TRUNNER_BINARY),$$(@:.skel.h=.bpf.o))
$(Q)$$(BPFTOOL) gen object $$(@:.skel.h=.linked1.o) $$(addprefix $(TRUNNER_OUTPUT)/,$$($$(@F)-deps))
$(Q)$$(BPFTOOL) gen object $$(@:.skel.h=.linked2.o) $$(@:.skel.h=.linked1.o)
$(Q)$$(BPFTOOL) gen object $$(@:.skel.h=.linked3.o) $$(@:.skel.h=.linked2.o)
$(Q)diff $$(@:.skel.h=.linked2.o) $$(@:.skel.h=.linked3.o)
$$(call msg,GEN-SKEL,$(TRUNNER_BINARY),$$@)
$(Q)$$(BPFTOOL) gen skeleton $$(@:.skel.h=.linked3.o) name $$(notdir $$(@:.skel.h=)) > $$@
$(Q)$$(BPFTOOL) gen subskeleton $$(@:.skel.h=.linked3.o) name $$(notdir $$(@:.skel.h=)) > $$(@:.skel.h=.subskel.h)
$(Q)rm -f $$(@:.skel.h=.linked1.o) $$(@:.skel.h=.linked2.o) $$(@:.skel.h=.linked3.o)
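Tracing the $$$$(%-deps) prerequisite of the rule above: the $(call) of this define halves it to $$(%-deps); when the static pattern rule is parsed, the stem replaces %, and .SECONDEXPANSION then resolves the surviving reference. So for the target $(TRUNNER_OUTPUT)/linked_funcs.skel.h, the prerequisites effectively become $(linked_funcs.skel.h-deps), i.e. linked_funcs1.bpf.o linked_funcs2.bpf.o (plus $(BPFTOOL) and the order-only $(TRUNNER_OUTPUT)).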
selftests/bpf: Set vpath in Makefile to search for skels
Auto-dependencies generated for %.test.o files refer to skels using
filenames as opposed to full paths. This requires make to be able to
link this name to an actual path, because not all generated skels are
put in the working directory.
In the original patch [1], this was mitigated by this target:
$(notdir %.skel.h): $(TRUNNER_OUTPUT)/%.skel.h
@true
This turned out to be insufficient.
First, %.lskel.h and %.subskel.h were missed, because a typical
selftests/bpf build could find these files in the working directory.
This error was detected by an out-of-tree build [2].
Second, even with the missing rules added, this target causes unnecessary
rebuilds in the out-of-tree case, as X.skel.h is searched for in the
working directory, and not in the $(OUTPUT).
Using the vpath directive [3] is a better solution. Instead of introducing
a separate target (X.skel.h in addition to $(TRUNNER_OUTPUT)/X.skel.h),
make is instructed to search for skels in the output, which allows make
to correctly detect that a skel has already been generated.
[1]: https://lore.kernel.org/bpf/VJihUTnvtwEgv_mOnpfy7EgD9D2MPNoHO-MlANeLIzLJPGhDeyOuGKIYyKgk0O6KPjfM-MuhtvPwZcngN8WFqbTnTRyCSMc2aMZ1ODm1T_g=@pm.me/
[2]: https://lore.kernel.org/bpf/CIjrhJwoIqMc2IhuppVqh4ZtJGbx8kC8rc9PHhAIU6RccnWT4I04F_EIr4GxQwxZe89McuGJlCnUk9UbkdvWtSJjAsd7mHmnTy9F8K2TLZM=@pm.me/
[3]: https://www.gnu.org/software/make/manual/html_node/Selective-Search.html
Reported-by: Björn Töpel <bjorn@kernel.org>
Signed-off-by: Ihor Solodrai <ihor.solodrai@pm.me>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Tested-by: Björn Töpel <bjorn@rivosinc.com>
Link: https://lore.kernel.org/bpf/20240916195919.1872371-2-ihor.solodrai@pm.me
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
# When the compiler generates a %.d file, only skel basenames (not
# full paths) are specified as prerequisites for the corresponding %.o
# file. The vpath directives below instruct make to search for skel files
# in TRUNNER_OUTPUT, if they are not present in the working directory.
vpath %.skel.h $(TRUNNER_OUTPUT)
vpath %.lskel.h $(TRUNNER_OUTPUT)
vpath %.subskel.h $(TRUNNER_OUTPUT)
|
2024-07-18 22:57:43 +00:00
|
|
|
|
selftests/bpf: Replace test_progs and test_maps w/ general rule
Define a test runner generation meta-rule that codifies the dependencies
between a test runner, its tests, and its dependent BPF programs. Use it
to define the test_progs and test_maps test runners. Additionally,
define two flavors of test_progs:
- alu32, which builds BPF programs with 32-bit registers codegen;
- bpf_gcc, which builds BPF programs using GCC, if it supports the BPF
  target.
Overall, this is accomplished by $(eval)'ing a set of generic rules,
which defines Makefile targets dynamically at runtime. See the comments
explaining the need for two $(eval)s.
For each test runner we have (currently test_maps and test_progs) and,
optionally, each of its flavors, the logic of the build process is
modeled as follows (using test_progs as an example):
- all BPF objects are in progs/:
  - a BPF object's .o file is built into the output directory from the
    corresponding progs/.c file;
  - all BPF objects in progs/*.c depend on all progs/*.h headers;
  - all BPF objects depend on the bpf_*.h helpers from libbpf (but not
    the libbpf archive); there is an extra rule to trigger a
    bpf_helper_defs.h (re-)build if it's missing or outdated;
  - the build recipe for a BPF object can be re-defined per test
    runner/flavor;
- test files are built from prog_tests/*.c:
  - all such test file objects are built on an individual file basis;
  - currently, every single test file depends on all BPF object files;
    this might be improved in follow-up patches to a 1-to-1 dependency,
    while allowing this to be customized per individual test;
- each test runner definition can specify a list of extra .c and .h
  files to be built alongside the test files and the test runner binary;
  all such headers become an automatic dependency of each test .c file;
- because test files sometimes embed (using the .incbin assembly
  directive) the contents of some BPF objects at compilation time, and
  those objects are expected to be in the compiler's CWD, compilation of
  a test file object cds into the test runner's output directory; to
  support this mode, all include paths are turned into absolute paths
  using the $(abspath) make function;
- prog_tests/test.h is automatically (re-)generated with an entry for
  each .c file in prog_tests/;
- the final test runner binary is linked together from the test object
  files and extra object files, linking in libbpf's archive as well;
- it's possible to specify extra "resource" files/targets, which will be
  copied into the test runner's output directory if it differs from the
  Makefile-wide $(OUTPUT). This is used to ensure the btf_dump test
  cases and the urandom_read binary are put into a test runner's CWD for
  tests to find them at runtime.
For flavored test runners, the output directory is a subdirectory of the
common Makefile-wide $(OUTPUT) directory, with the flavor name used as
the subdirectory name.
BPF object targets might be reused between different test runners, so
extra checks are employed to avoid double-defining them. Similarly, we
have redefinition guards for output directories and test headers.
test_verifier follows slightly different patterns and is simple enough
not to justify generalizing TEST_RUNNER_DEFINE/TEST_RUNNER_DEFINE_RULES
further to accommodate these differences. Instead, the rules for
test_verifier are minimized and simplified, while preserving correctness
of dependencies.
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191016060051.2024182-6-andriin@fb.com
2019-10-15 23:00:49 -07:00
endif
# ensure we set up tests.h header generation rule just once
ifeq ($($(TRUNNER_TESTS_DIR)-tests-hdr),)
$(TRUNNER_TESTS_DIR)-tests-hdr := y
$(TRUNNER_TESTS_HDR): $(TRUNNER_TESTS_DIR)/*.c
$$(call msg,TEST-HDR,$(TRUNNER_BINARY),$$@)
$$(shell (echo '/* Generated header, do not edit */'; \
sed -n -E 's/^void (serial_)?test_([a-zA-Z0-9_]+)\((void)?\).*/DEFINE_TEST(\2)/p' \
$(TRUNNER_TESTS_DIR)/*.c | sort ; \
) > $$@)
endif
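For example, assuming hypothetical test files prog_tests/foo.c defining void test_foo(void) and prog_tests/bar.c defining void serial_test_bar(void), the sed pipeline above would generate:

  /* Generated header, do not edit */
  DEFINE_TEST(bar)
  DEFINE_TEST(foo)

The serial_ prefix disappears because only the second capture group is substituted into DEFINE_TEST(), and the entries come out sorted.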
# compile individual test files
# Note: we cd into output directory to ensure embedded BPF object is found
$(TRUNNER_TEST_OBJS): $(TRUNNER_OUTPUT)/%.test.o: \
$(TRUNNER_TESTS_DIR)/%.c \
| $(TRUNNER_OUTPUT)/%.test.d
$$(call msg,TEST-OBJ,$(TRUNNER_BINARY),$$@)
$(Q)cd $$(@D) && $$(CC) -I. $$(CFLAGS) -MMD -MT $$@ -c $(CURDIR)/$$< $$(LDLIBS) -o $$(@F)

$(TRUNNER_TEST_OBJS:.o=.d): $(TRUNNER_OUTPUT)/%.test.d: \
$(TRUNNER_TESTS_DIR)/%.c \
$(TRUNNER_EXTRA_HDRS) \
$(TRUNNER_BPF_SKELS) \
$(TRUNNER_BPF_LSKELS) \
$(TRUNNER_BPF_SKELS_LINKED) \
$$(BPFOBJ) | $(TRUNNER_OUTPUT)
ifeq ($(filter clean docs-clean,$(MAKECMDGOALS)),)
include $(wildcard $(TRUNNER_TEST_OBJS:.o=.d))
endif
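To illustrate (all paths hypothetical), a foo.test.d file produced by the -MMD -MT $$@ flags in the compile recipe above would look roughly like:

  /build/selftests/bpf/foo.test.o: /src/selftests/bpf/prog_tests/foo.c foo.skel.h

The skel appears as a bare basename because it is included via -I. while the compiler runs inside the output directory; that bare name is exactly what the vpath directives earlier resolve against TRUNNER_OUTPUT.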
# add per-extra-obj CFLAGS definitions
$(foreach N,$(patsubst $(TRUNNER_OUTPUT)/%.o,%,$(TRUNNER_EXTRA_OBJS)), \
$(eval $(TRUNNER_OUTPUT)/$(N).o: CFLAGS += $($(N).c-CFLAGS)))
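So, for instance, a hypothetical definition placed earlier in the Makefile such as

  trace_helpers.c-CFLAGS := -DTRACE_HELPERS_DEBUG

would be picked up by the $(foreach)/$(eval) above and append that flag only when compiling $(TRUNNER_OUTPUT)/trace_helpers.o, leaving the other extra objects untouched (the flag name here is purely illustrative).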
$(TRUNNER_EXTRA_OBJS): $(TRUNNER_OUTPUT)/%.o: \
%.c \
$(TRUNNER_EXTRA_HDRS) \
$(TRUNNER_TESTS_HDR) \
$$(BPFOBJ) | $(TRUNNER_OUTPUT)
$$(call msg,EXT-OBJ,$(TRUNNER_BINARY),$$@)
$(Q)$$(CC) $$(CFLAGS) -c $$< $$(LDLIBS) -o $$@
# non-flavored in-srctree builds receive special treatment, in particular, we
# do not need to copy extra resources (see e.g. test_btf_dump_case())
$(TRUNNER_BINARY)-extras: $(TRUNNER_EXTRA_FILES) | $(TRUNNER_OUTPUT)
ifneq ($2:$(OUTPUT),:$(shell pwd))
$$(call msg,EXT-COPY,$(TRUNNER_BINARY),$(TRUNNER_EXTRA_FILES))
$(Q)rsync -aq $$^ $(TRUNNER_OUTPUT)/
endif
# some X.test.o files have runtime dependencies on Y.bpf.o files
$(OUTPUT)/$(TRUNNER_BINARY): | $(TRUNNER_BPF_OBJS)
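The | here makes $(TRUNNER_BPF_OBJS) an order-only prerequisite: the BPF objects are brought up to date before the binary is linked, but a newer Y.bpf.o does not by itself force a re-link. A minimal sketch of the distinction (names hypothetical):

  prog: main.o | data.bin

main.o being newer than prog triggers a rebuild; data.bin is still built first if needed, but its timestamp is ignored when deciding whether prog is stale.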
$(OUTPUT)/$(TRUNNER_BINARY): $(TRUNNER_TEST_OBJS) \
$(TRUNNER_EXTRA_OBJS) $$(BPFOBJ) \
$(RESOLVE_BTFIDS) \
selftests/bpf: Cross-compile bpftool
When the BPF selftests are cross-compiled, only a host version of
bpftool is built. This version of bpftool is used on the host side to
generate various intermediates, e.g., skeletons.
The test runners also use bpftool, so the Makefile will symlink
bpftool from the selftest/bpf root, where the test runners will look
for the tool:
| $(Q)ln -sf $(if $2,..,.)/tools/build/bpftool/bootstrap/bpftool \
| $(OUTPUT)/$(if $2,$2/)bpftool
There are two problems for cross-compilation builds:
1. There is no native (cross-compilation target) version of bpftool
2. The bootstrap/bpftool is never cross-compiled (by design)
Make sure that a native/cross-compiled version of bpftool is built,
and if CROSS_COMPILE is set, symlink the native/non-bootstrap version.
Acked-by: Quentin Monnet <quentin@isovalent.com>
Signed-off-by: Björn Töpel <bjorn@rivosinc.com>
Link: https://lore.kernel.org/r/20230214161253.183458-1-bjorn@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2023-02-14 17:12:53 +01:00
$(TRUNNER_BPFTOOL) \
$(OUTPUT)/veristat \
| $(TRUNNER_BINARY)-extras
$$(call msg,BINARY,,$$@)
$(Q)$$(CC) $$(CFLAGS) $$(filter %.a %.o,$$^) $$(LDLIBS) $$(LLVM_LDLIBS) $$(LDFLAGS) $$(LLVM_LDFLAGS) -o $$@
$(Q)$(RESOLVE_BTFIDS) --btf $(TRUNNER_OUTPUT)/btf_data.bpf.o $$@
$(Q)ln -sf $(if $2,..,.)/tools/build/bpftool/$(USE_BOOTSTRAP)bpftool \
$(OUTPUT)/$(if $2,$2/)bpftool
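Note that the link command above passes only $$(filter %.a %.o,$$^) to the compiler, so prerequisites listed purely for ordering (bpftool, veristat, the extras target) are not linked into the binary. A small illustration of the make function involved:

  $(filter %.a %.o,main.o libbpf.a bpftool)   # expands to: main.o libbpf.a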
endef
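Throughout the define block that just ended, double dollars defer expansion: $(VAR) is expanded once when the template is $(eval)'d, while $$(VAR), $$@ and friends survive into the generated rule and are expanded when it runs. A minimal, self-contained sketch of the same pattern (all names hypothetical):

  define HYPOTHETICAL_RUNNER
  # $(1) is substituted at $(call ...) time; $$@ survives into the rule
  $(1)-run: $(1)
  	@echo running $$@
  endef
  $(eval $(call HYPOTHETICAL_RUNNER,test_demo))

Assuming a test_demo target exists, `make test_demo-run` would print "running test_demo-run"; the point is just that $(1) is resolved by $(call) while $$@ is deferred until the generated rule executes (and the recipe line must start with a tab).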
# Define test_progs test runner.
TRUNNER_TESTS_DIR := prog_tests
TRUNNER_BPF_PROGS_DIR := progs
TRUNNER_EXTRA_SOURCES := test_progs.c \
cgroup_helpers.c \
trace_helpers.c \
network_helpers.c \
testing_helpers.c \
btf_helpers.c \
cap_helpers.c \
unpriv_helpers.c \
netlink_helpers.c \
jit_disasm_helpers.c \
io_helpers.c \
test_loader.c \
xsk.c \
disasm.c \
disasm_helpers.c \
json_writer.c \
flow_dissector_load.h \
ip_check_defrag_frags.h
TRUNNER_EXTRA_FILES := $(OUTPUT)/urandom_read \
$(OUTPUT)/liburandom_read.so \
$(OUTPUT)/xdp_synproxy \
$(OUTPUT)/sign-file \
$(OUTPUT)/uprobe_multi \
$(TEST_KMOD_TARGETS) \
ima_setup.sh \
verify_sig_setup.sh \
$(wildcard progs/btf_dump_test_case_*.c) \
$(wildcard progs/*.bpf.o)
TRUNNER_BPF_BUILD_RULE := CLANG_BPF_BUILD_RULE
TRUNNER_BPF_CFLAGS := $(BPF_CFLAGS) $(CLANG_CFLAGS) -DENABLE_ATOMICS_TESTS
$(eval $(call DEFINE_TEST_RUNNER,test_progs))
# Define test_progs-no_alu32 test runner.
TRUNNER_BPF_BUILD_RULE := CLANG_NOALU32_BPF_BUILD_RULE
TRUNNER_BPF_CFLAGS := $(BPF_CFLAGS) $(CLANG_CFLAGS)
$(eval $(call DEFINE_TEST_RUNNER,test_progs,no_alu32))
# Define test_progs-cpuv4 test runner.
ifneq ($(CLANG_CPUV4),)
TRUNNER_BPF_BUILD_RULE := CLANG_CPUV4_BPF_BUILD_RULE
selftests/bpf: Enable tests for atomics with cpuv4
When looking at Alexei's patch ([1]), which added tests for atomics,
I noticed that the tests are skipped with cpuv4. For example, with
the latest llvm19, I see:
[root@arch-fb-vm1 bpf]# ./test_progs -t arena_atomics
#3/1 arena_atomics/add:OK
...
#3/7 arena_atomics/xchg:OK
#3 arena_atomics:OK
Summary: 1/7 PASSED, 0 SKIPPED, 0 FAILED
[root@arch-fb-vm1 bpf]# ./test_progs-cpuv4 -t arena_atomics
#3 arena_atomics:SKIP
Summary: 1/0 PASSED, 1 SKIPPED, 0 FAILED
[root@arch-fb-vm1 bpf]#
It is perfectly fine to enable atomics-related tests for cpuv4.
With this patch, I have
[root@arch-fb-vm1 bpf]# ./test_progs-cpuv4 -t arena_atomics
#3/1 arena_atomics/add:OK
...
#3/7 arena_atomics/xchg:OK
#3 arena_atomics:OK
Summary: 1/7 PASSED, 0 SKIPPED, 0 FAILED
[1] https://lore.kernel.org/r/20240405231134.17274-2-alexei.starovoitov@gmail.com
Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
Link: https://lore.kernel.org/r/20240410153326.1851055-1-yonghong.song@linux.dev
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2024-04-10 08:33:26 -07:00
TRUNNER_BPF_CFLAGS := $(BPF_CFLAGS) $(CLANG_CFLAGS) -DENABLE_ATOMICS_TESTS
$(eval $(call DEFINE_TEST_RUNNER,test_progs,cpuv4))
endif
# Define test_progs BPF-GCC-flavored test runner.
selftests/bpf: add bpf-gcc support
Now that binutils and gcc support for BPF is upstream, make use of it in
the BPF selftests using an alu32-like approach. Share as much of the
CFLAGS calculation as possible with clang.
This fixes only the obvious issues, leaving more complex ones for later:
- Use gcc-provided bpf-helpers.h instead of manually defining the
helpers, change bpf_helpers.h include guard to avoid conflict.
- Include <linux/stddef.h> for __always_inline.
- Add $(OUTPUT)/../usr/include to include path in order to use local
kernel headers instead of system kernel headers when building with O=.
In order to activate the bpf-gcc support, one needs to configure
binutils and gcc with --target=bpf and make them available in $PATH. In
particular, gcc must be installed as `bpf-gcc`, which is the default.
Right now with binutils 25a2915e8dba and gcc r275589 only a handful of
tests work:
# ./test_progs_bpf_gcc
# Summary: 7/39 PASSED, 1 SKIPPED, 98 FAILED
The reasons for those failures are as follows:
- Build errors:
- `error: too many function arguments for eBPF` for __always_inline
functions read_str_var and read_map_var - must be inlining issue,
and for process_l3_headers_v6, which relies on optimizing away
function arguments.
- `error: indirect call in function, which are not supported by eBPF`
where there are no obvious indirect calls in the source code, e.g.
in __encap_ipip_none.
- `error: field 'lock' has incomplete type` for fields of `struct
bpf_spin_lock` type - bpf_spin_lock is re-#defined by bpf-helpers.h,
so its usage is sensitive to the order of #includes.
- `error: eBPF stack limit exceeded` in sysctl_tcp_mem.
- Load errors:
- Missing object files due to above build errors.
- `libbpf: failed to create map (name: 'test_ver.bss')`.
- `libbpf: object file doesn't contain bpf program`.
- `libbpf: Program '.text' contains unrecognized relo data pointing to
section 0`.
- `libbpf: BTF is required, but is missing or corrupted` - no BTF
support in gcc yet.
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Cc: Jose E. Marchesi <jose.marchesi@oracle.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2019-09-12 18:05:43 +02:00
ifneq ($(BPF_GCC),)
TRUNNER_BPF_BUILD_RULE := GCC_BPF_BUILD_RULE
|
2022-11-15 19:20:51 +01:00
|
|
|
TRUNNER_BPF_CFLAGS := $(BPF_CFLAGS) $(call get_sys_includes,gcc,)
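get_sys_includes presumably queries the named compiler for its built-in system include directories, so BPF objects are compiled against the same headers the host compiler would use. A hypothetical way to write such a helper (the real definition may differ):

# Ask compiler $(1) (with extra flags $(2)) for its include search path
# and turn each directory into an -idirafter flag.
define get_sys_includes
$(shell $(1) $(2) -v -E - </dev/null 2>&1 \
	| sed -n '/<...> search starts here:/,/End of search list./{ s| \(/.*\)|-idirafter \1|p }')
endef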
|
selftests/bpf: Replace test_progs and test_maps w/ general rule
2019-10-15 23:00:49 -07:00
|
|
|
$(eval $(call DEFINE_TEST_RUNNER,test_progs,bpf_gcc))
|
selftests/bpf: add bpf-gcc support
2019-09-12 18:05:43 +02:00
|
|
|
endif
|
|
|
|
|
selftests/bpf: Replace test_progs and test_maps w/ general rule
2019-10-15 23:00:49 -07:00
|
|
|
# Define test_maps test runner.
|
|
|
|
TRUNNER_TESTS_DIR := map_tests
|
|
|
|
TRUNNER_BPF_PROGS_DIR := progs
|
|
|
|
TRUNNER_EXTRA_SOURCES := test_maps.c
|
|
|
|
TRUNNER_EXTRA_FILES :=
|
|
|
|
TRUNNER_BPF_BUILD_RULE := $$(error no BPF objects should be built)
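Note the doubled dollar sign in the assignment above: after the single expansion performed by $(call)/$(eval), the variable holds the literal text $(error ...), so make only aborts if some recipe actually expands the BPF build rule, i.e. if a BPF object is unexpectedly built for test_maps. The same idiom in isolation (hypothetical names):

# Stored as the literal text '$(error ...)'; harmless until expanded.
FORBIDDEN := $$(error this variable must never be expanded)
# $(FORBIDDEN)   <- referencing it anywhere would abort make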
|
|
|
|
TRUNNER_BPF_CFLAGS :=
|
|
|
|
$(eval $(call DEFINE_TEST_RUNNER,test_maps))
|
|
|
|
|
|
|
|
# Define test_verifier test runner.
|
|
|
|
# It is much simpler than test_maps/test_progs and sufficiently different from
|
|
|
|
# them (e.g., test.h uses a completely different pattern) that it's worth just
|
|
|
|
# defining all the rules explicitly.
|
|
|
|
verifier/tests.h: verifier/*.c
|
2019-03-06 11:59:26 -08:00
|
|
|
$(shell ( cd verifier/; \
|
2019-01-25 15:24:42 -08:00
|
|
|
echo '/* Generated header, do not edit */'; \
|
|
|
|
echo '#ifdef FILL_ARRAY'; \
|
selftests/bpf: Replace test_progs and test_maps w/ general rule
2019-10-15 23:00:49 -07:00
|
|
|
ls *.c 2> /dev/null | sed -e 's@\(.*\)@#include \"\1\"@'; \
|
2019-01-25 15:24:42 -08:00
|
|
|
echo '#endif' \
|
selftests/bpf: Replace test_progs and test_maps w/ general rule
2019-10-15 23:00:49 -07:00
|
|
|
) > verifier/tests.h)
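With hypothetical test files foo.c and bar.c in verifier/, the pipeline above generates a tests.h of the form:

/* Generated header, do not edit */
#ifdef FILL_ARRAY
#include "foo.c"
#include "bar.c"
#endif

test_verifier.c can then define FILL_ARRAY around the include so that each file's test table entries land in a single array.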
|
|
|
|
$(OUTPUT)/test_verifier: test_verifier.c verifier/tests.h $(BPFOBJ) | $(OUTPUT)
|
2020-01-12 23:31:40 -08:00
|
|
|
$(call msg,BINARY,,$@)
|
2020-08-06 20:30:57 -07:00
|
|
|
$(Q)$(CC) $(CFLAGS) $(filter %.a %.o %.c,$^) $(LDLIBS) -o $@
|
2019-01-25 15:24:42 -08:00
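The $(filter %.a %.o %.c,$^) idiom in the link recipe keeps only compilable and linkable prerequisites. For example:

# Given $^ = test_verifier.c verifier/tests.h libbpf.a (order-only
# prerequisites after '|', such as $(OUTPUT), never appear in $^),
# $(filter %.a %.o %.c,$^) -> test_verifier.c libbpf.a
# so generated headers never reach the compiler command line.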
|
|
|
|
2023-09-27 19:22:37 +05:30
|
|
|
# Include find_bit.c to compile xskxceiver.
|
|
|
|
EXTRA_SRC := $(TOOLSDIR)/lib/find_bit.c
|
2024-04-02 11:45:27 +00:00
|
|
|
$(OUTPUT)/xskxceiver: $(EXTRA_SRC) xskxceiver.c xskxceiver.h $(OUTPUT)/network_helpers.o $(OUTPUT)/xsk.o $(OUTPUT)/xsk_xdp_progs.skel.h $(BPFOBJ) | $(OUTPUT)
|
2023-01-11 10:35:22 +01:00
|
|
|
$(call msg,BINARY,,$@)
|
|
|
|
$(Q)$(CC) $(CFLAGS) $(filter %.a %.o %.c,$^) $(LDLIBS) -o $@
|
|
|
|
|
2023-01-19 14:15:36 -08:00
|
|
|
$(OUTPUT)/xdp_hw_metadata: xdp_hw_metadata.c $(OUTPUT)/network_helpers.o $(OUTPUT)/xsk.o $(OUTPUT)/xdp_hw_metadata.skel.h | $(OUTPUT)
|
|
|
|
$(call msg,BINARY,,$@)
|
|
|
|
$(Q)$(CC) $(CFLAGS) $(filter %.a %.o %.c,$^) $(LDLIBS) -o $@
|
|
|
|
|
2023-02-01 11:24:24 +01:00
|
|
|
$(OUTPUT)/xdp_features: xdp_features.c $(OUTPUT)/network_helpers.o $(OUTPUT)/xdp_features.skel.h | $(OUTPUT)
|
|
|
|
$(call msg,BINARY,,$@)
|
|
|
|
$(Q)$(CC) $(CFLAGS) $(filter %.a %.o %.c,$^) $(LDLIBS) -o $@
|
|
|
|
|
2019-12-02 13:59:31 -08:00
|
|
|
# Make sure we are able to include and link libbpf against C++.
|
2025-01-07 23:58:18 +00:00
|
|
|
CXXFLAGS += $(CFLAGS)
|
|
|
|
CXXFLAGS := $(subst -D_GNU_SOURCE=,,$(CXXFLAGS))
|
|
|
|
CXXFLAGS := $(subst -std=gnu11,-std=gnu++11,$(CXXFLAGS))
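To illustrate the two $(subst) calls: assuming, hypothetically, that CFLAGS contained -g -O2 -std=gnu11 -D_GNU_SOURCE= -Ifoo, the resulting C++ flags would be:

# CXXFLAGS = -g -O2 -std=gnu++11 -Ifoo
# The C-only dialect flag is rewritten for C++, and the _GNU_SOURCE
# define (which g++ typically predefines itself) is dropped.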
|
2019-12-26 13:02:53 -08:00
|
|
|
$(OUTPUT)/test_cpp: test_cpp.cpp $(OUTPUT)/test_core_extern.skel.h $(BPFOBJ)
|
2020-01-12 23:31:40 -08:00
|
|
|
$(call msg,CXX,,$@)
|
2025-01-07 23:58:18 +00:00
|
|
|
$(Q)$(CXX) $(CXXFLAGS) $(filter %.a %.o %.cpp,$^) $(LDLIBS) -o $@
|
2019-12-02 13:59:31 -08:00
|
|
|
|
selftests/bpf: Add benchmark runner infrastructure
While working on BPF ringbuf implementation, testing, and benchmarking, I've
developed a pretty generic and modular benchmark runner, which seems to be
generically useful, as I've already used it for one more purpose (testing
fastest way to trigger BPF program, to minimize overhead of in-kernel code).
This patch adds generic part of benchmark runner and sets up Makefile for
extending it with more sets of benchmarks.
The benchmarker itself operates by spinning up the specified number of producer
and consumer threads and setting up an interval timer that sends a SIGALRM
signal to the application once a second. Every second, a snapshot of the
hits/drops counters is collected and stored in an array. Drops are useful for
producer/consumer benchmarks in which the producer might overwhelm consumers.
Once the test finishes after the given number of warm-up and testing seconds,
mean and stddev are calculated (ignoring warm-up results) and printed out to
stdout. This setup seems to give consistent and accurate results.
To validate behavior, I added two atomic counting tests: global and local.
For the global one, all the producer threads atomically increment the same
counter as fast as possible. This, of course, leads to a huge drop in
performance once there is more than one producer thread due to CPUs fighting
for the same memory location.
Local counting, on the other hand, maintains one counter per producer thread,
incremented independently. Once per second, all counters are read and added
together to form the final "counting throughput" measurement. As expected,
such a setup demonstrates linear scalability with the number of producers (as
long as there are enough physical CPU cores, of course). See example output below.
Also, this setup can nicely demonstrate disastrous effects of false sharing,
if care is not taken to take those per-producer counters apart into
independent cache lines.
Demo output shows global counter first with 1 producer, then with 4. Both
total and per-producer performance significantly drop. The last run is local
counter with 4 producers, demonstrating near-perfect scalability.
$ ./bench -a -w1 -d2 -p1 count-global
Setting up benchmark 'count-global'...
Benchmark 'count-global' started.
Iter 0 ( 24.822us): hits 148.179M/s (148.179M/prod), drops 0.000M/s
Iter 1 ( 37.939us): hits 149.308M/s (149.308M/prod), drops 0.000M/s
Iter 2 (-10.774us): hits 150.717M/s (150.717M/prod), drops 0.000M/s
Iter 3 ( 3.807us): hits 151.435M/s (151.435M/prod), drops 0.000M/s
Summary: hits 150.488 ± 1.079M/s (150.488M/prod), drops 0.000 ± 0.000M/s
$ ./bench -a -w1 -d2 -p4 count-global
Setting up benchmark 'count-global'...
Benchmark 'count-global' started.
Iter 0 ( 60.659us): hits 53.910M/s ( 13.477M/prod), drops 0.000M/s
Iter 1 (-17.658us): hits 53.722M/s ( 13.431M/prod), drops 0.000M/s
Iter 2 ( 5.865us): hits 53.495M/s ( 13.374M/prod), drops 0.000M/s
Iter 3 ( 0.104us): hits 53.606M/s ( 13.402M/prod), drops 0.000M/s
Summary: hits 53.608 ± 0.113M/s ( 13.402M/prod), drops 0.000 ± 0.000M/s
$ ./bench -a -w1 -d2 -p4 count-local
Setting up benchmark 'count-local'...
Benchmark 'count-local' started.
Iter 0 ( 23.388us): hits 640.450M/s (160.113M/prod), drops 0.000M/s
Iter 1 ( 2.291us): hits 605.661M/s (151.415M/prod), drops 0.000M/s
Iter 2 ( -6.415us): hits 607.092M/s (151.773M/prod), drops 0.000M/s
Iter 3 ( -1.361us): hits 601.796M/s (150.449M/prod), drops 0.000M/s
Summary: hits 604.849 ± 2.739M/s (151.212M/prod), drops 0.000 ± 0.000M/s
The benchmark runner supports setting thread affinity for producer and consumer
threads. You can use the -a flag for the default CPU selection scheme, where the
first consumer gets CPU #0, the next one gets CPU #1, and so on. Producer
threads then pick up the next CPU and increment one-by-one as well. The user
can also specify a set of CPUs independently for producers and consumers with
--prod-affinity 1,2-10,15 and --cons-affinity <set-of-cpus>. The latter makes
it possible to force producers and consumers to share the same set of CPUs, if
necessary.
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20200512192445.2351848-3-andriin@fb.com
2020-05-12 12:24:43 -07:00
|
|
|
# Benchmark runner
|
libbpf: Add LIBBPF_DEPRECATED_SINCE macro for scheduling API deprecations
Introduce a macro LIBBPF_DEPRECATED_SINCE(major, minor, message) to prepare
the deprecation of two API functions. This macro marks functions as deprecated
when libbpf's version reaches the values passed as arguments.
As part of this change libbpf_version.h header is added with recorded major
(LIBBPF_MAJOR_VERSION) and minor (LIBBPF_MINOR_VERSION) libbpf version macros.
They are now part of libbpf public API and can be relied upon by user code.
libbpf_version.h is installed system-wide along other libbpf public headers.
Due to this new build-time auto-generated header, in-kernel applications
relying on libbpf (resolve_btfids, bpftool, bpf_preload) are updated to
include libbpf's output directory as part of a list of include search paths.
A better fix would be to use libbpf's make_install target to install public API
headers, but that clean up is left out as a future improvement. The build
changes were tested by building kernel (with KBUILD_OUTPUT and O= specified
explicitly), bpftool, libbpf, selftests/bpf, and resolve_btfids builds. No
problems were detected.
Note that because of the constraints of the C preprocessor we have to write
a few lines of macro magic for each version used to prepare deprecation (0.6
for now).
Also, use LIBBPF_DEPRECATED_SINCE() to schedule deprecation of
btf__get_from_id() and btf__load(), which are replaced by
btf__load_from_kernel_by_id() and btf__load_into_kernel(), respectively,
starting from future libbpf v0.6. This is part of libbpf 1.0 effort ([0]).
[0] Closes: https://github.com/libbpf/libbpf/issues/278
Co-developed-by: Quentin Monnet <quentin@isovalent.com>
Co-developed-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Quentin Monnet <quentin@isovalent.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20210908213226.1871016-1-andrii@kernel.org
2021-09-08 14:32:26 -07:00
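For the in-tree users this boils down to one extra include path, so that the auto-generated libbpf_version.h can be found. Schematically (variable name hypothetical):

# libbpf emits libbpf_version.h into its build output directory, so
# dependent tools add that directory to their include search path:
CFLAGS += -I$(LIBBPF_OUTPUT_DIR)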
|
|
|
$(OUTPUT)/bench_%.o: benchs/bench_%.c bench.h $(BPFOBJ)
|
selftests/bpf: Add benchmark runner infrastructure
2020-05-12 12:24:43 -07:00
|
|
|
$(call msg,CC,,$@)
|
2021-10-27 16:45:03 -07:00
|
|
|
$(Q)$(CC) $(CFLAGS) -O2 -c $(filter %.c,$^) $(LDLIBS) -o $@
|
2020-05-12 12:24:44 -07:00
|
|
|
$(OUTPUT)/bench_rename.o: $(OUTPUT)/test_overhead.skel.h
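This and the following .skel.h prerequisites exist because each bench embeds a generated BPF skeleton header. The generic shape of the generation step (illustrative names; the actual rule is defined elsewhere in this Makefile):

# bpftool emits a skeleton header from a compiled BPF object:
$(OUTPUT)/%.skel.h: $(OUTPUT)/%.bpf.o
	$(Q)$(BPFTOOL) gen skeleton $< > $@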
|
selftest/bpf: Add BPF triggering benchmark
It is sometimes desirable to be able to trigger a BPF program from user-space
with minimal overhead. sys_enter would seem to be a good candidate, yet in
a lot of cases there will be a lot of noise from syscalls triggered by other
processes on the system. So while searching for a low-overhead alternative, I've
stumbled upon the getpgid() syscall, which seems to be specific enough to not
suffer from accidental syscall by other apps.
This set of benchmarks compares tp, raw_tp w/ filtering by syscall ID, kprobe,
fentry and fmod_ret with returning error (so that syscall would not be
executed), to determine the lowest-overhead way. Here are results on my
machine (using benchs/run_bench_trigger.sh script):
base : 9.200 ± 0.319M/s
tp : 6.690 ± 0.125M/s
rawtp : 8.571 ± 0.214M/s
kprobe : 6.431 ± 0.048M/s
fentry : 8.955 ± 0.241M/s
fmodret : 8.903 ± 0.135M/s
So it seems like fmodret doesn't give much benefit for such a lightweight
syscall. Raw tracepoint is pretty decent despite additional filtering logic,
but it will be called for any other syscall in the system, which rules it out.
Fentry, though, seems to add the least amount of overhead and achieves
97.3% of the performance of the baseline no-BPF-attached syscall.
Using getpgid() seems preferable to the set_task_comm() approach from
test_overhead, as it's about 2.35x faster in baseline performance.
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20200512192445.2351848-5-andriin@fb.com
2020-05-12 12:24:45 -07:00
|
|
|
$(OUTPUT)/bench_trigger.o: $(OUTPUT)/trigger_bench.skel.h
|
bpf: Add BPF ringbuf and perf buffer benchmarks
Extend bench framework with ability to have benchmark-provided child argument
parser for custom benchmark-specific parameters. This makes bench generic code
modular and independent from any specific benchmark.
Also implement a set of benchmarks for new BPF ring buffer and existing perf
buffer. 4 benchmarks were implemented: 2 variations for each of BPF ringbuf
and perfbuf:
- rb-libbpf utilizes stock libbpf ring_buffer manager for reading data;
- rb-custom implements custom ring buffer setup and reading code, to
eliminate overheads inherent in generic libbpf code due to callback
functions and the need to update consumer position after each consumed
record, instead of batching updates (due to pessimistic assumption that
user callback might take a long time and thus could unnecessarily hold ring
buffer space for too long);
- pb-libbpf uses stock libbpf perf_buffer code with all the default
settings, though uses higher-performance raw event callback to minimize
unnecessary overhead;
- pb-custom implements its own custom consumer code to minimize any possible
overhead of generic libbpf implementation and indirect function calls.
All of the tests support the default mode (no data notifications skipped), as
well as sampled mode (with the --rb-sampled flag), which triggers epoll
notifications less frequently and reduces overhead. As will be shown, this mode
is especially critical for perf buffer, which suffers from high overhead of
wakeups in kernel.
Otherwise, all benchmarks implement a similar way to generate a batch of records
by using fentry/sys_getpgid BPF program, which pushes a bunch of records in
a tight loop and records number of successful and dropped samples. Each record
is a small 8-byte integer, to minimize the effect of memory copying with
bpf_perf_event_output() and bpf_ringbuf_output().
Benchmarks that have only one producer implement optional back-to-back mode,
in which record production and consumption is alternating on the same CPU.
This is the highest-throughput happy case, showing ultimate performance
achievable with either BPF ringbuf or perfbuf.
All the below scenarios are implemented in a script in
benchs/run_bench_ringbufs.sh. Tests were performed on 28-core/56-thread
Intel Xeon CPU E5-2680 v4 @ 2.40GHz CPU.
Single-producer, parallel producer
==================================
rb-libbpf 12.054 ± 0.320M/s (drops 0.000 ± 0.000M/s)
rb-custom 8.158 ± 0.118M/s (drops 0.001 ± 0.003M/s)
pb-libbpf 0.931 ± 0.007M/s (drops 0.000 ± 0.000M/s)
pb-custom 0.965 ± 0.003M/s (drops 0.000 ± 0.000M/s)
Single-producer, parallel producer, sampled notification
========================================================
rb-libbpf 11.563 ± 0.067M/s (drops 0.000 ± 0.000M/s)
rb-custom 15.895 ± 0.076M/s (drops 0.000 ± 0.000M/s)
pb-libbpf 9.889 ± 0.032M/s (drops 0.000 ± 0.000M/s)
pb-custom 9.866 ± 0.028M/s (drops 0.000 ± 0.000M/s)
Single producer on one CPU, consumer on another one, both running at full
speed. Curiously, rb-libbpf has higher throughput than objectively faster (due
to more lightweight consumer code path) rb-custom. It appears that faster
consumer causes kernel to send notifications more frequently, because consumer
appears to be caught up more frequently. Performance of perfbuf suffers from
default "no sampling" policy and huge overhead that causes.
In sampled mode, rb-custom wins very significantly by eliminating too-frequent
in-kernel wakeups; the gain appears to be more than 2x.
Perf buffer achieves even more impressive wins, compared to stock perfbuf
settings, with 10x improvements in throughput with 1:500 sampling rate. The
trade-off is that with sampling, the application might not get the next X
events until the X+1st arrives, which is not always acceptable. With a steady
influx of events,
though, this shouldn't be a problem.
Overall, single-producer performance of ring buffers seems to be better no
matter the sampled/non-sampled mode, and it especially beats perf buffer
without sampling, thanks to ringbuf's adaptive notification approach.
Single-producer, back-to-back mode
==================================
rb-libbpf 15.507 ± 0.247M/s (drops 0.000 ± 0.000M/s)
rb-libbpf-sampled 14.692 ± 0.195M/s (drops 0.000 ± 0.000M/s)
rb-custom 21.449 ± 0.157M/s (drops 0.000 ± 0.000M/s)
rb-custom-sampled 20.024 ± 0.386M/s (drops 0.000 ± 0.000M/s)
pb-libbpf 1.601 ± 0.015M/s (drops 0.000 ± 0.000M/s)
pb-libbpf-sampled 8.545 ± 0.064M/s (drops 0.000 ± 0.000M/s)
pb-custom 1.607 ± 0.022M/s (drops 0.000 ± 0.000M/s)
pb-custom-sampled 8.988 ± 0.144M/s (drops 0.000 ± 0.000M/s)
Here we test a back-to-back mode, which is arguably best-case scenario both
for BPF ringbuf and perfbuf, because there is no contention and for ringbuf
also no excessive notification, because consumer appears to be behind after
the first record. For ringbuf, custom consumer code clearly wins with 21.5 vs
16 million records per second exchanged between producer and consumer. Sampled
mode actually hurts a bit due to slightly slower producer logic (it needs to
fetch the amount of data available to decide whether to skip or force notification).
Perfbuf with wakeup sampling gets 5.5x throughput increase, compared to
no-sampling version. There also doesn't seem to be noticeable overhead from
generic libbpf handling code.
Perfbuf back-to-back, effect of sample rate
===========================================
pb-sampled-1 1.035 ± 0.012M/s (drops 0.000 ± 0.000M/s)
pb-sampled-5 3.476 ± 0.087M/s (drops 0.000 ± 0.000M/s)
pb-sampled-10 5.094 ± 0.136M/s (drops 0.000 ± 0.000M/s)
pb-sampled-25 7.118 ± 0.153M/s (drops 0.000 ± 0.000M/s)
pb-sampled-50 8.169 ± 0.156M/s (drops 0.000 ± 0.000M/s)
pb-sampled-100 8.887 ± 0.136M/s (drops 0.000 ± 0.000M/s)
pb-sampled-250 9.180 ± 0.209M/s (drops 0.000 ± 0.000M/s)
pb-sampled-500 9.353 ± 0.281M/s (drops 0.000 ± 0.000M/s)
pb-sampled-1000 9.411 ± 0.217M/s (drops 0.000 ± 0.000M/s)
pb-sampled-2000 9.464 ± 0.167M/s (drops 0.000 ± 0.000M/s)
pb-sampled-3000 9.575 ± 0.273M/s (drops 0.000 ± 0.000M/s)
This benchmark shows the effect of event sampling for perfbuf, using
back-to-back mode for highest throughput. Notifying on just every 5th record gives
3.5x speed up. 250-500 appears to be the point of diminishing return, with
almost 9x speed up. Most benchmarks use 500 as the default sampling for pb-raw
and pb-custom.
Ringbuf back-to-back, effect of sample rate
===========================================
rb-sampled-1 1.106 ± 0.010M/s (drops 0.000 ± 0.000M/s)
rb-sampled-5 4.746 ± 0.149M/s (drops 0.000 ± 0.000M/s)
rb-sampled-10 7.706 ± 0.164M/s (drops 0.000 ± 0.000M/s)
rb-sampled-25 12.893 ± 0.273M/s (drops 0.000 ± 0.000M/s)
rb-sampled-50 15.961 ± 0.361M/s (drops 0.000 ± 0.000M/s)
rb-sampled-100 18.203 ± 0.445M/s (drops 0.000 ± 0.000M/s)
rb-sampled-250 19.962 ± 0.786M/s (drops 0.000 ± 0.000M/s)
rb-sampled-500 20.881 ± 0.551M/s (drops 0.000 ± 0.000M/s)
rb-sampled-1000 21.317 ± 0.532M/s (drops 0.000 ± 0.000M/s)
rb-sampled-2000 21.331 ± 0.535M/s (drops 0.000 ± 0.000M/s)
rb-sampled-3000 21.688 ± 0.392M/s (drops 0.000 ± 0.000M/s)
A similar benchmark for the ring buffer also shows a great advantage (in terms
of throughput) of skipping notifications. Skipping every 5th one gives a 4x
boost. Also, similar to the perfbuf case, 250-500 seems to be the point of
diminishing returns, giving roughly 20x better results.
Keep in mind, for this test, notifications are controlled manually with
BPF_RB_NO_WAKEUP and BPF_RB_FORCE_WAKEUP. As can be seen from previous
benchmarks, adaptive notifications based on the consumer's position provide the
same (or even slightly better, due to a simpler load generator on the BPF side)
benefits in the favorable back-to-back scenario. An overzealous and fast
consumer, which is almost always caught up, will make throughput numbers
smaller. That's the case
when manual notification control might prove to be extremely beneficial.
Ringbuf back-to-back, reserve+commit vs output
==============================================
reserve 22.819 ± 0.503M/s (drops 0.000 ± 0.000M/s)
output 18.906 ± 0.433M/s (drops 0.000 ± 0.000M/s)
Ringbuf sampled, reserve+commit vs output
=========================================
reserve-sampled 15.350 ± 0.132M/s (drops 0.000 ± 0.000M/s)
output-sampled 14.195 ± 0.144M/s (drops 0.000 ± 0.000M/s)
BPF ringbuf supports two sets of APIs with various usability and performance
tradeoffs: bpf_ringbuf_reserve()+bpf_ringbuf_commit() vs bpf_ringbuf_output().
This benchmark clearly shows superiority of reserve+commit approach, despite
using a small 8-byte record size.
Single-producer, consumer/producer competing on the same CPU, low batch count
=============================================================================
rb-libbpf 3.045 ± 0.020M/s (drops 3.536 ± 0.148M/s)
rb-custom 3.055 ± 0.022M/s (drops 3.893 ± 0.066M/s)
pb-libbpf 1.393 ± 0.024M/s (drops 0.000 ± 0.000M/s)
pb-custom 1.407 ± 0.016M/s (drops 0.000 ± 0.000M/s)
This benchmark shows one of the worst-case scenarios, in which producer and
consumer do not coordinate *and* fight for the same CPU. No batch count and
sampling settings were able to eliminate drops for the ringbuffer; the producer
is just too fast for the consumer to keep up. But ringbuf and perfbuf are still able to
pass through quite a lot of messages, which is more than enough for a lot of
applications.
Ringbuf, multi-producer contention
==================================
rb-libbpf nr_prod 1 10.916 ± 0.399M/s (drops 0.000 ± 0.000M/s)
rb-libbpf nr_prod 2 4.931 ± 0.030M/s (drops 0.000 ± 0.000M/s)
rb-libbpf nr_prod 3 4.880 ± 0.006M/s (drops 0.000 ± 0.000M/s)
rb-libbpf nr_prod 4 3.926 ± 0.004M/s (drops 0.000 ± 0.000M/s)
rb-libbpf nr_prod 8 4.011 ± 0.004M/s (drops 0.000 ± 0.000M/s)
rb-libbpf nr_prod 12 3.967 ± 0.016M/s (drops 0.000 ± 0.000M/s)
rb-libbpf nr_prod 16 2.604 ± 0.030M/s (drops 0.001 ± 0.002M/s)
rb-libbpf nr_prod 20 2.233 ± 0.003M/s (drops 0.000 ± 0.000M/s)
rb-libbpf nr_prod 24 2.085 ± 0.015M/s (drops 0.000 ± 0.000M/s)
rb-libbpf nr_prod 28 2.055 ± 0.004M/s (drops 0.000 ± 0.000M/s)
rb-libbpf nr_prod 32 1.962 ± 0.004M/s (drops 0.000 ± 0.000M/s)
rb-libbpf nr_prod 36 2.089 ± 0.005M/s (drops 0.000 ± 0.000M/s)
rb-libbpf nr_prod 40 2.118 ± 0.006M/s (drops 0.000 ± 0.000M/s)
rb-libbpf nr_prod 44 2.105 ± 0.004M/s (drops 0.000 ± 0.000M/s)
rb-libbpf nr_prod 48 2.120 ± 0.058M/s (drops 0.000 ± 0.001M/s)
rb-libbpf nr_prod 52 2.074 ± 0.024M/s (drops 0.007 ± 0.014M/s)
Ringbuf uses a very short-duration spinlock during the reservation phase to
check a few invariants, increment the producer count and set the record header.
This is the
biggest point of contention for ringbuf implementation. This benchmark
evaluates the effect of multiple competing writers on overall throughput of
a single shared ringbuffer.
Overall throughput drops almost 2x when going from single to two
highly-contended producers, gradually dropping with additional competing
producers. The performance drop stabilizes at around 20 producers and hovers
around 2 million ops/s even with 50+ fighting producers, which is a 5x drop
compared to the non-contended case. A good implementation in the kernel helps
maintain decent performance here.
Note that in the intended real-world scenarios, it's not expected to get even
close to such high levels of contention. But if contention becomes
a problem, there is always the option of sharding a few ring buffers across
a set of CPUs.
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20200529075424.3139988-5-andriin@fb.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2020-05-29 00:54:23 -07:00
|
|
|
$(OUTPUT)/bench_ringbufs.o: $(OUTPUT)/ringbuf_bench.skel.h \
|
|
|
|
$(OUTPUT)/perfbuf_bench.skel.h
|
2021-10-27 16:45:03 -07:00
|
|
|
$(OUTPUT)/bench_bloom_filter_map.o: $(OUTPUT)/bloom_filter_bench.skel.h
|
selftest/bpf/benchs: Add bpf_loop benchmark
Add a benchmark to measure the throughput and latency of the bpf_loop
call.
Testing this on my dev machine on 1 thread, the data is as follows:
nr_loops: 10
bpf_loop - throughput: 198.519 ± 0.155 M ops/s, latency: 5.037 ns/op
nr_loops: 100
bpf_loop - throughput: 247.448 ± 0.305 M ops/s, latency: 4.041 ns/op
nr_loops: 500
bpf_loop - throughput: 260.839 ± 0.380 M ops/s, latency: 3.834 ns/op
nr_loops: 1000
bpf_loop - throughput: 262.806 ± 0.629 M ops/s, latency: 3.805 ns/op
nr_loops: 5000
bpf_loop - throughput: 264.211 ± 1.508 M ops/s, latency: 3.785 ns/op
nr_loops: 10000
bpf_loop - throughput: 265.366 ± 3.054 M ops/s, latency: 3.768 ns/op
nr_loops: 50000
bpf_loop - throughput: 235.986 ± 20.205 M ops/s, latency: 4.238 ns/op
nr_loops: 100000
bpf_loop - throughput: 264.482 ± 0.279 M ops/s, latency: 3.781 ns/op
nr_loops: 500000
bpf_loop - throughput: 309.773 ± 87.713 M ops/s, latency: 3.228 ns/op
nr_loops: 1000000
bpf_loop - throughput: 262.818 ± 4.143 M ops/s, latency: 3.805 ns/op
>From this data, we can see that the latency per loop decreases as the
number of loops increases. On this particular machine, each loop had an
overhead of about 4 ns, and we were able to run ~250 million loops
per second.
Signed-off-by: Joanne Koong <joannekoong@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211130030622.4131246-5-joannekoong@fb.com
2021-11-29 19:06:22 -08:00
|
|
|
$(OUTPUT)/bench_bpf_loop.o: $(OUTPUT)/bpf_loop_bench.skel.h
|
2021-12-10 22:16:51 +08:00
|
|
|
$(OUTPUT)/bench_strncmp.o: $(OUTPUT)/strncmp_bench.skel.h
|
2022-06-10 10:33:08 +08:00
|
|
|
$(OUTPUT)/bench_bpf_hashmap_full_update.o: $(OUTPUT)/bpf_hashmap_full_update_bench.skel.h
|
selftests/bpf: Add benchmark for local_storage get
Add benchmarks to demonstrate the performance cliff for local_storage
get as the number of local_storage maps increases beyond the current
local_storage implementation's cache size.
"sequential get" and "interleaved get" benchmarks are added, both of
which do many bpf_task_storage_get calls on sets of task local_storage
maps of various counts, while considering a single specific map to be
'important' and counting task_storage_gets to the important map
separately, in addition to the normal 'hits' count of all gets. The goal here
is to mimic a scenario where a particular program using one map - the
important one - is running on a system where many other local_storage
maps exist and are accessed often.
While "sequential get" benchmark does bpf_task_storage_get for map 0, 1,
..., {9, 99, 999} in order, "interleaved" benchmark interleaves 4
bpf_task_storage_gets for the important map for every 10 map gets. This
is meant to highlight performance differences when the important map is
accessed far more frequently than non-important maps.
A "hashmap control" benchmark is also included for easy comparison of
standard bpf hashmap lookup vs local_storage get. The benchmark is
similar to "sequential get", but creates and uses BPF_MAP_TYPE_HASH
instead of local storage. Only one inner map is created - a hashmap
meant to hold tid -> data mapping for all tasks. Size of the hashmap is
hardcoded to my system's PID_MAX_LIMIT (4,194,304). The number of these
keys which are actually fetched as part of the benchmark is
configurable.
Addition of this benchmark is inspired by conversation with Alexei in a
previous patchset's thread [0], which highlighted the need for such a
benchmark to motivate and validate improvements to local_storage
implementation. My approach in that series focused on improving
performance for explicitly-marked 'important' maps and was rejected
with feedback to make more generally-applicable improvements while
avoiding explicitly marking maps as important. Thus the benchmark
reports both general and important-map-focused metrics, so the effect of
future work on both is clear.
Regarding the benchmark results: on a powerful system (Skylake, 20
cores, 256 GB RAM):
Hashmap Control
===============
num keys: 10
hashmap (control) sequential get: hits throughput: 20.900 ± 0.334 M ops/s, hits latency: 47.847 ns/op, important_hits throughput: 20.900 ± 0.334 M ops/s
num keys: 1000
hashmap (control) sequential get: hits throughput: 13.758 ± 0.219 M ops/s, hits latency: 72.683 ns/op, important_hits throughput: 13.758 ± 0.219 M ops/s
num keys: 10000
hashmap (control) sequential get: hits throughput: 6.995 ± 0.034 M ops/s, hits latency: 142.959 ns/op, important_hits throughput: 6.995 ± 0.034 M ops/s
num keys: 100000
hashmap (control) sequential get: hits throughput: 4.452 ± 0.371 M ops/s, hits latency: 224.635 ns/op, important_hits throughput: 4.452 ± 0.371 M ops/s
num keys: 4194304
hashmap (control) sequential get: hits throughput: 3.043 ± 0.033 M ops/s, hits latency: 328.587 ns/op, important_hits throughput: 3.043 ± 0.033 M ops/s
Local Storage
=============
num_maps: 1
local_storage cache sequential get: hits throughput: 47.298 ± 0.180 M ops/s, hits latency: 21.142 ns/op, important_hits throughput: 47.298 ± 0.180 M ops/s
local_storage cache interleaved get: hits throughput: 55.277 ± 0.888 M ops/s, hits latency: 18.091 ns/op, important_hits throughput: 55.277 ± 0.888 M ops/s
num_maps: 10
local_storage cache sequential get: hits throughput: 40.240 ± 0.802 M ops/s, hits latency: 24.851 ns/op, important_hits throughput: 4.024 ± 0.080 M ops/s
local_storage cache interleaved get: hits throughput: 48.701 ± 0.722 M ops/s, hits latency: 20.533 ns/op, important_hits throughput: 17.393 ± 0.258 M ops/s
num_maps: 16
local_storage cache sequential get: hits throughput: 44.515 ± 0.708 M ops/s, hits latency: 22.464 ns/op, important_hits throughput: 2.782 ± 0.044 M ops/s
local_storage cache interleaved get: hits throughput: 49.553 ± 2.260 M ops/s, hits latency: 20.181 ns/op, important_hits throughput: 15.767 ± 0.719 M ops/s
num_maps: 17
local_storage cache sequential get: hits throughput: 38.778 ± 0.302 M ops/s, hits latency: 25.788 ns/op, important_hits throughput: 2.284 ± 0.018 M ops/s
local_storage cache interleaved get: hits throughput: 43.848 ± 1.023 M ops/s, hits latency: 22.806 ns/op, important_hits throughput: 13.349 ± 0.311 M ops/s
num_maps: 24
local_storage cache sequential get: hits throughput: 19.317 ± 0.568 M ops/s, hits latency: 51.769 ns/op, important_hits throughput: 0.806 ± 0.024 M ops/s
local_storage cache interleaved get: hits throughput: 24.397 ± 0.272 M ops/s, hits latency: 40.989 ns/op, important_hits throughput: 6.863 ± 0.077 M ops/s
num_maps: 32
local_storage cache sequential get: hits throughput: 13.333 ± 0.135 M ops/s, hits latency: 75.000 ns/op, important_hits throughput: 0.417 ± 0.004 M ops/s
local_storage cache interleaved get: hits throughput: 16.898 ± 0.383 M ops/s, hits latency: 59.178 ns/op, important_hits throughput: 4.717 ± 0.107 M ops/s
num_maps: 100
local_storage cache sequential get: hits throughput: 6.360 ± 0.107 M ops/s, hits latency: 157.233 ns/op, important_hits throughput: 0.064 ± 0.001 M ops/s
local_storage cache interleaved get: hits throughput: 7.303 ± 0.362 M ops/s, hits latency: 136.930 ns/op, important_hits throughput: 1.907 ± 0.094 M ops/s
num_maps: 1000
local_storage cache sequential get: hits throughput: 0.452 ± 0.010 M ops/s, hits latency: 2214.022 ns/op, important_hits throughput: 0.000 ± 0.000 M ops/s
local_storage cache interleaved get: hits throughput: 0.542 ± 0.007 M ops/s, hits latency: 1843.341 ns/op, important_hits throughput: 0.136 ± 0.002 M ops/s
Looking at the "sequential get" results, it's clear that as the
number of task local_storage maps grows beyond the current cache size
(16), there's a significant reduction in hits throughput. Note that
current local_storage implementation assigns a cache_idx to maps as they
are created. Since "sequential get" is creating maps 0..n in order and
then doing bpf_task_storage_get calls in the same order, the benchmark
is effectively ensuring that a map will not be in cache when the program
tries to access it.
For "interleaved get" results, important-map hits throughput is greatly
increased as the important map is more likely to be in cache by virtue
of being accessed far more frequently. Throughput still decreases as the
number of maps increases, though.
To get a sense of the overhead of the benchmark program, I
commented out bpf_task_storage_get/bpf_map_lookup_elem in
local_storage_bench.c and ran the benchmark on the same host as the
'real' run. Results:
Hashmap Control
===============
num keys: 10
hashmap (control) sequential get: hits throughput: 54.288 ± 0.655 M ops/s, hits latency: 18.420 ns/op, important_hits throughput: 54.288 ± 0.655 M ops/s
num keys: 1000
hashmap (control) sequential get: hits throughput: 52.913 ± 0.519 M ops/s, hits latency: 18.899 ns/op, important_hits throughput: 52.913 ± 0.519 M ops/s
num keys: 10000
hashmap (control) sequential get: hits throughput: 53.480 ± 1.235 M ops/s, hits latency: 18.699 ns/op, important_hits throughput: 53.480 ± 1.235 M ops/s
num keys: 100000
hashmap (control) sequential get: hits throughput: 54.982 ± 1.902 M ops/s, hits latency: 18.188 ns/op, important_hits throughput: 54.982 ± 1.902 M ops/s
num keys: 4194304
hashmap (control) sequential get: hits throughput: 50.858 ± 0.707 M ops/s, hits latency: 19.662 ns/op, important_hits throughput: 50.858 ± 0.707 M ops/s
Local Storage
=============
num_maps: 1
local_storage cache sequential get: hits throughput: 110.990 ± 4.828 M ops/s, hits latency: 9.010 ns/op, important_hits throughput: 110.990 ± 4.828 M ops/s
local_storage cache interleaved get: hits throughput: 161.057 ± 4.090 M ops/s, hits latency: 6.209 ns/op, important_hits throughput: 161.057 ± 4.090 M ops/s
num_maps: 10
local_storage cache sequential get: hits throughput: 112.930 ± 1.079 M ops/s, hits latency: 8.855 ns/op, important_hits throughput: 11.293 ± 0.108 M ops/s
local_storage cache interleaved get: hits throughput: 115.841 ± 2.088 M ops/s, hits latency: 8.633 ns/op, important_hits throughput: 41.372 ± 0.746 M ops/s
num_maps: 16
local_storage cache sequential get: hits throughput: 115.653 ± 0.416 M ops/s, hits latency: 8.647 ns/op, important_hits throughput: 7.228 ± 0.026 M ops/s
local_storage cache interleaved get: hits throughput: 138.717 ± 1.649 M ops/s, hits latency: 7.209 ns/op, important_hits throughput: 44.137 ± 0.525 M ops/s
num_maps: 17
local_storage cache sequential get: hits throughput: 112.020 ± 1.649 M ops/s, hits latency: 8.927 ns/op, important_hits throughput: 6.598 ± 0.097 M ops/s
local_storage cache interleaved get: hits throughput: 128.089 ± 1.960 M ops/s, hits latency: 7.807 ns/op, important_hits throughput: 38.995 ± 0.597 M ops/s
num_maps: 24
local_storage cache sequential get: hits throughput: 92.447 ± 5.170 M ops/s, hits latency: 10.817 ns/op, important_hits throughput: 3.855 ± 0.216 M ops/s
local_storage cache interleaved get: hits throughput: 128.844 ± 2.808 M ops/s, hits latency: 7.761 ns/op, important_hits throughput: 36.245 ± 0.790 M ops/s
num_maps: 32
local_storage cache sequential get: hits throughput: 102.042 ± 1.462 M ops/s, hits latency: 9.800 ns/op, important_hits throughput: 3.194 ± 0.046 M ops/s
local_storage cache interleaved get: hits throughput: 126.577 ± 1.818 M ops/s, hits latency: 7.900 ns/op, important_hits throughput: 35.332 ± 0.507 M ops/s
num_maps: 100
local_storage cache sequential get: hits throughput: 111.327 ± 1.401 M ops/s, hits latency: 8.983 ns/op, important_hits throughput: 1.113 ± 0.014 M ops/s
local_storage cache interleaved get: hits throughput: 131.327 ± 1.339 M ops/s, hits latency: 7.615 ns/op, important_hits throughput: 34.302 ± 0.350 M ops/s
num_maps: 1000
local_storage cache sequential get: hits throughput: 101.978 ± 0.563 M ops/s, hits latency: 9.806 ns/op, important_hits throughput: 0.102 ± 0.001 M ops/s
local_storage cache interleaved get: hits throughput: 141.084 ± 1.098 M ops/s, hits latency: 7.088 ns/op, important_hits throughput: 35.430 ± 0.276 M ops/s
Adjusting for overhead (subtracting the overhead run's latency from the
'real' run's latency for the same configuration, e.g. 72.683 - 18.899 ≈
53.8ns for hashmap_control_1k), latency numbers for "hashmap control" and
"sequential get" are:
hashmap_control_1k: ~53.8ns
hashmap_control_10k: ~124.2ns
hashmap_control_100k: ~206.5ns
sequential_get_1: ~12.1ns
sequential_get_10: ~16.0ns
sequential_get_16: ~13.8ns
sequential_get_17: ~16.8ns
sequential_get_24: ~40.9ns
sequential_get_32: ~65.2ns
sequential_get_100: ~148.2ns
sequential_get_1000: ~2204ns
Clearly demonstrating a cliff.
In the discussion for v1 of this patch, Alexei noted that local_storage
was 2.5x faster than a large hashmap when initially implemented [1]. The
benchmark results show that local_storage is 5-10x faster: a
long-running BPF application putting pid-specific info into a hashmap
for each pid it sees will probably see on the order of 10-100k pids, and
bench numbers for hashmaps of this size are ~10x slower than
sequential_get_16. But as the number of local_storage maps grows far
past the local_storage cache size, the performance advantage shrinks and
eventually reverses.
When running the benchmarks it may be necessary to bump the 'open files'
ulimit for a successful run.
[0]: https://lore.kernel.org/all/20220420002143.1096548-1-davemarchevsky@fb.com
[1]: https://lore.kernel.org/bpf/20220511173305.ftldpn23m4ski3d3@MBP-98dd607d3435.dhcp.thefacebook.com/
Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
Link: https://lore.kernel.org/r/20220620222554.270578-1-davemarchevsky@fb.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2022-06-20 15:25:54 -07:00
|
|
|
$(OUTPUT)/bench_local_storage.o: $(OUTPUT)/local_storage_bench.skel.h
|
2022-07-05 12:00:18 -07:00
|
|
|
$(OUTPUT)/bench_local_storage_rcu_tasks_trace.o: $(OUTPUT)/local_storage_rcu_tasks_trace_bench.skel.h
|
selftests/bpf: Add local-storage-create benchmark
This patch tests how many kmallocs are needed to create and free
a batch of UDP sockets, where each socket has 64 bytes of bpf storage.
It also measures how fast the UDP sockets can be created.
The result is from my qemu setup.
Before bpf_mem_cache_alloc/free:
./bench -p 1 local-storage-create
Setting up benchmark 'local-storage-create'...
Benchmark 'local-storage-create' started.
Iter 0 ( 73.193us): creates 213.552k/s (213.552k/prod), 3.09 kmallocs/create
Iter 1 (-20.724us): creates 211.908k/s (211.908k/prod), 3.09 kmallocs/create
Iter 2 ( 9.280us): creates 212.574k/s (212.574k/prod), 3.12 kmallocs/create
Iter 3 ( 11.039us): creates 213.209k/s (213.209k/prod), 3.12 kmallocs/create
Iter 4 (-11.411us): creates 213.351k/s (213.351k/prod), 3.12 kmallocs/create
Iter 5 ( -7.915us): creates 214.754k/s (214.754k/prod), 3.12 kmallocs/create
Iter 6 ( 11.317us): creates 210.942k/s (210.942k/prod), 3.12 kmallocs/create
Summary: creates 212.789 ± 1.310k/s (212.789k/prod), 3.12 kmallocs/create
After bpf_mem_cache_alloc/free:
./bench -p 1 local-storage-create
Setting up benchmark 'local-storage-create'...
Benchmark 'local-storage-create' started.
Iter 0 ( 68.265us): creates 243.984k/s (243.984k/prod), 1.04 kmallocs/create
Iter 1 ( 30.357us): creates 238.424k/s (238.424k/prod), 1.04 kmallocs/create
Iter 2 (-18.712us): creates 232.963k/s (232.963k/prod), 1.04 kmallocs/create
Iter 3 (-15.885us): creates 238.879k/s (238.879k/prod), 1.04 kmallocs/create
Iter 4 ( 5.590us): creates 237.490k/s (237.490k/prod), 1.04 kmallocs/create
Iter 5 ( 8.577us): creates 237.521k/s (237.521k/prod), 1.04 kmallocs/create
Iter 6 ( -6.263us): creates 238.508k/s (238.508k/prod), 1.04 kmallocs/create
Summary: creates 237.298 ± 2.198k/s (237.298k/prod), 1.04 kmallocs/create
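A hedged sketch of the BPF side being measured (not the actual
bench_local_storage_create.c; the map name, attach point, and struct are
illustrative): a 64-byte socket storage map whose element is created when
the socket comes into existence, which is where the kmallocs/create
counted above come from.

  #include <vmlinux.h>
  #include <bpf/bpf_helpers.h>

  struct stg { char data[64]; };    /* the 64-byte per-socket storage */

  struct {
          __uint(type, BPF_MAP_TYPE_SK_STORAGE);
          __uint(map_flags, BPF_F_NO_PREALLOC);
          __type(key, int);
          __type(value, struct stg);
  } sk_stg SEC(".maps");

  SEC("cgroup/sock_create")
  int create_stg(struct bpf_sock *ctx)
  {
          /* creating the storage element is the allocation whose cost
           * bpf_mem_cache_alloc/free improves */
          struct stg *s = bpf_sk_storage_get(&sk_stg, ctx, NULL,
                                             BPF_SK_STORAGE_GET_F_CREATE);

          return s ? 1 : 0;         /* 1 = allow the socket */
  }

  char _license[] SEC("license") = "GPL";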
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
Link: https://lore.kernel.org/r/20230308065936.1550103-18-martin.lau@linux.dev
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2023-03-07 22:59:36 -08:00
|
|
|
$(OUTPUT)/bench_local_storage_create.o: $(OUTPUT)/bench_local_storage_create.skel.h
|
2023-02-13 09:15:19 +00:00
|
|
|
$(OUTPUT)/bench_bpf_hashmap_lookup.o: $(OUTPUT)/bpf_hashmap_lookup.skel.h
|
selftests/bpf: Add benchmark for bpf memory allocator
The benchmark can be used to compare the performance of hash map
operations and the memory usage between different flavors of bpf memory
allocator (e.g., no bpf ma vs bpf ma vs reuse-after-gp bpf ma). It can
also be used to check the performance improvement or the memory saving
provided by an optimization.
The benchmark creates a non-preallocated hash map which uses the bpf
memory allocator and shows the operation performance and the memory usage
of the hash map under different use cases (see the sketch after this
list):
(1) overwrite
Each CPU overwrites a nonoverlapping part of the hash map. When a CPU
completes overwriting 64 elements in the hash map, it increases the
op_count.
(2) batch_add_batch_del
Each CPU adds then deletes a nonoverlapping part of the hash map in
batch. When a CPU adds and deletes 64 elements in the hash map, it
increases the op_count twice.
(3) add_del_on_diff_cpu
Each pair of CPUs cooperatively adds and deletes a nonoverlapping part
of the map. When a CPU adds or deletes 64 elements in the hash map, it
increases the op_count.
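A rough sketch of the "overwrite" pattern only (an assumed shape; the
actual htab_mem_bench.c program, map, and attach point differ):

  /* Hypothetical sketch: each CPU rewrites its own 64-key slice of a
   * non-preallocated hash map, then bumps a shared op counter. */
  #include <vmlinux.h>
  #include <bpf/bpf_helpers.h>

  struct {
          __uint(type, BPF_MAP_TYPE_HASH);
          __uint(map_flags, BPF_F_NO_PREALLOC);   /* uses bpf ma */
          __uint(max_entries, 65536);
          __type(key, __u32);
          __type(value, __u64);
  } htab SEC(".maps");

  __u64 op_count = 0;

  SEC("fentry/__x64_sys_getpgid")
  int overwrite(void *ctx)
  {
          __u32 base = bpf_get_smp_processor_id() * 64;
          __u64 val = 1;

          for (__u32 i = 0; i < 64; i++) {
                  __u32 key = base + i;

                  bpf_map_update_elem(&htab, &key, &val, BPF_ANY);
          }
          __sync_fetch_and_add(&op_count, 1);
          return 0;
  }

  char _license[] SEC("license") = "GPL";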
The following are the benchmark results when comparing different
flavors of bpf memory allocator. These tests were conducted on a KVM guest
with 8 CPUs and 16 GB of memory. The command line below was used to run all
of the following benchmarks:
./bench htab-mem --use-case $name ${OPTS} -w3 -d10 -a -p8
These results show that the preallocated hash map has both better
performance and a smaller memory footprint.
(1) non-preallocated + no bpf memory allocator (v6.0.19)
use kmalloc() + call_rcu
overwrite per-prod-op: 11.24 ± 0.07k/s, avg mem: 82.64 ± 26.32MiB, peak mem: 119.18MiB
batch_add_batch_del per-prod-op: 18.45 ± 0.10k/s, avg mem: 50.47 ± 14.51MiB, peak mem: 94.96MiB
add_del_on_diff_cpu per-prod-op: 14.50 ± 0.03k/s, avg mem: 4.64 ± 0.73MiB, peak mem: 7.20MiB
(2) preallocated
OPTS=--preallocated
overwrite per-prod-op: 191.42 ± 0.09k/s, avg mem: 1.24 ± 0.00MiB, peak mem: 1.49MiB
batch_add_batch_del per-prod-op: 221.83 ± 0.17k/s, avg mem: 1.23 ± 0.00MiB, peak mem: 1.49MiB
add_del_on_diff_cpu per-prod-op: 39.66 ± 0.31k/s, avg mem: 1.47 ± 0.13MiB, peak mem: 1.75MiB
(3) normal bpf memory allocator
overwrite per-prod-op: 126.59 ± 0.02k/s, avg mem: 2.26 ± 0.00MiB, peak mem: 2.74MiB
batch_add_batch_del per-prod-op: 83.37 ± 0.20k/s, avg mem: 2.14 ± 0.17MiB, peak mem: 2.74MiB
add_del_on_diff_cpu per-prod-op: 21.25 ± 0.24k/s, avg mem: 17.50 ± 3.32MiB, peak mem: 28.87MiB
Acked-by: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Hou Tao <houtao1@huawei.com>
Link: https://lore.kernel.org/r/20230704025039.938914-1-houtao@huaweicloud.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2023-07-04 10:50:39 +08:00
|
|
|
$(OUTPUT)/bench_htab_mem.o: $(OUTPUT)/htab_mem_bench.skel.h
|
2024-04-22 15:50:24 -07:00
|
|
|
$(OUTPUT)/bench_bpf_crypto.o: $(OUTPUT)/crypto_bench.skel.h
|
2025-04-07 22:21:23 +08:00
|
|
|
$(OUTPUT)/bench_sockmap.o: $(OUTPUT)/bench_sockmap_prog.skel.h
|
libbpf: Add LIBBPF_DEPRECATED_SINCE macro for scheduling API deprecations
Introduce a macro LIBBPF_DEPRECATED_SINCE(major, minor, message) to prepare
the deprecation of two API functions. This macro marks functions as deprecated
once libbpf's version reaches the values passed as arguments.
As part of this change, a libbpf_version.h header is added with recorded major
(LIBBPF_MAJOR_VERSION) and minor (LIBBPF_MINOR_VERSION) libbpf version macros.
They are now part of the libbpf public API and can be relied upon by user code.
libbpf_version.h is installed system-wide along with the other libbpf public
headers.
Due to this new build-time auto-generated header, in-kernel applications
relying on libbpf (resolve_btfids, bpftool, bpf_preload) are updated to
include libbpf's output directory in their list of include search paths.
A better fix would be to use libbpf's make_install target to install the
public API headers, but that cleanup is left as a future improvement. The
build changes were tested by building the kernel (with KBUILD_OUTPUT and O=
specified explicitly), bpftool, libbpf, selftests/bpf, and resolve_btfids. No
problems were detected.
Note that because of the constraints of the C preprocessor we have to write
a few lines of macro magic for each version used to prepare deprecation (0.6
for now).
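The mechanics are roughly as follows (a hedged sketch; the exact
helper-macro names and spelling in libbpf_common.h may differ):

  /* Sketch of a version-gated deprecation macro. */
  #define LIBBPF_DEPRECATED(msg) __attribute__((deprecated(msg)))

  /* one such block per version used in deprecations ("macro magic") */
  #if LIBBPF_MAJOR_VERSION > 0 || \
      (LIBBPF_MAJOR_VERSION == 0 && LIBBPF_MINOR_VERSION >= 6)
  #define __LIBBPF_MARK_DEPRECATED_0_6(X) X
  #else
  #define __LIBBPF_MARK_DEPRECATED_0_6(X)
  #endif

  #define LIBBPF_DEPRECATED_SINCE(major, minor, msg)                  \
          __LIBBPF_MARK_DEPRECATED_##major##_##minor(                 \
                  LIBBPF_DEPRECATED("libbpf v" #major "." #minor "+: " msg))

  /* e.g., once v0.6 is reached, this starts warning at compile time: */
  LIBBPF_DEPRECATED_SINCE(0, 6, "use btf__load_from_kernel_by_id instead")
  struct btf *btf__get_from_id(__u32 id);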
Also, use LIBBPF_DEPRECATED_SINCE() to schedule the deprecation of
btf__get_from_id() and btf__load(), which are replaced by
btf__load_from_kernel_by_id() and btf__load_into_kernel(), respectively,
starting from the future libbpf v0.6. This is part of the libbpf 1.0
effort ([0]).
[0] Closes: https://github.com/libbpf/libbpf/issues/278
Co-developed-by: Quentin Monnet <quentin@isovalent.com>
Co-developed-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Quentin Monnet <quentin@isovalent.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20210908213226.1871016-1-andrii@kernel.org
2021-09-08 14:32:26 -07:00
|
|
|
$(OUTPUT)/bench.o: bench.h testing_helpers.h $(BPFOBJ)
|
selftests/bpf: Add benchmark runner infrastructure
While working on BPF ringbuf implementation, testing, and benchmarking, I've
developed a pretty generic and modular benchmark runner, which seems to be
generically useful, as I've already used it for one more purpose (testing the
fastest way to trigger a BPF program, to minimize the overhead of in-kernel
code). This patch adds the generic part of the benchmark runner and sets up
the Makefile for extending it with more sets of benchmarks.
The benchmarker itself operates by spinning up the specified number of
producer and consumer threads and setting up an interval timer that sends a
SIGALRM signal to the application once a second. Every second, a snapshot of
the hits/drops counters is collected and stored in an array. Drops are useful
for producer/consumer benchmarks in which the producer might overwhelm
consumers. Once the test finishes after the given number of warm-up and
testing seconds, mean and stddev are calculated (ignoring warm-up results)
and printed to stdout. This setup seems to give consistent and accurate
results.
To validate the behavior, I added two atomic counting tests: global and
local. For the global one, all the producer threads atomically increment the
same counter as fast as possible. This, of course, leads to a huge drop in
performance once there is more than one producer thread, due to CPUs fighting
over the same memory location.
Local counting, on the other hand, maintains one counter per producer
thread, incremented independently. Once per second, all counters are read and
added together to form the final "counting throughput" measurement. As
expected, such a setup demonstrates linear scalability with the number of
producers (as long as there are enough physical CPU cores, of course). See
the example output below. This setup can also nicely demonstrate the
disastrous effects of false sharing, if care is not taken to keep the
per-producer counters apart on independent cache lines.
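A minimal sketch of that separation (the 128-byte alignment and names are
assumptions for illustration):

  /* Pad each per-producer counter out to its own cache line(s) so
   * that independent increments never false-share. */
  struct counter {
          long value;
  } __attribute__((aligned(128)));

  static struct counter producer_hits[128];   /* one slot per producer */

  static void producer_hit(int prod_id)
  {
          __atomic_fetch_add(&producer_hits[prod_id].value, 1,
                             __ATOMIC_RELAXED);
  }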
Demo output shows global counter first with 1 producer, then with 4. Both
total and per-producer performance significantly drop. The last run is local
counter with 4 producers, demonstrating near-perfect scalability.
$ ./bench -a -w1 -d2 -p1 count-global
Setting up benchmark 'count-global'...
Benchmark 'count-global' started.
Iter 0 ( 24.822us): hits 148.179M/s (148.179M/prod), drops 0.000M/s
Iter 1 ( 37.939us): hits 149.308M/s (149.308M/prod), drops 0.000M/s
Iter 2 (-10.774us): hits 150.717M/s (150.717M/prod), drops 0.000M/s
Iter 3 ( 3.807us): hits 151.435M/s (151.435M/prod), drops 0.000M/s
Summary: hits 150.488 ± 1.079M/s (150.488M/prod), drops 0.000 ± 0.000M/s
$ ./bench -a -w1 -d2 -p4 count-global
Setting up benchmark 'count-global'...
Benchmark 'count-global' started.
Iter 0 ( 60.659us): hits 53.910M/s ( 13.477M/prod), drops 0.000M/s
Iter 1 (-17.658us): hits 53.722M/s ( 13.431M/prod), drops 0.000M/s
Iter 2 ( 5.865us): hits 53.495M/s ( 13.374M/prod), drops 0.000M/s
Iter 3 ( 0.104us): hits 53.606M/s ( 13.402M/prod), drops 0.000M/s
Summary: hits 53.608 ± 0.113M/s ( 13.402M/prod), drops 0.000 ± 0.000M/s
$ ./bench -a -w1 -d2 -p4 count-local
Setting up benchmark 'count-local'...
Benchmark 'count-local' started.
Iter 0 ( 23.388us): hits 640.450M/s (160.113M/prod), drops 0.000M/s
Iter 1 ( 2.291us): hits 605.661M/s (151.415M/prod), drops 0.000M/s
Iter 2 ( -6.415us): hits 607.092M/s (151.773M/prod), drops 0.000M/s
Iter 3 ( -1.361us): hits 601.796M/s (150.449M/prod), drops 0.000M/s
Summary: hits 604.849 ± 2.739M/s (151.212M/prod), drops 0.000 ± 0.000M/s
The benchmark runner supports setting thread affinity for producer and
consumer threads. You can use the -a flag for the default CPU selection
scheme, where the first consumer gets CPU #0, the next one gets CPU #1, and
so on. Producer threads then pick up the next CPU and increment one-by-one
as well. But the user can also specify a set of CPUs independently for
producers and consumers with --prod-affinity 1,2-10,15 and --cons-affinity
<set-of-cpus>. The latter allows forcing producers and consumers to share
the same set of CPUs, if necessary.
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20200512192445.2351848-3-andriin@fb.com
2020-05-12 12:24:43 -07:00
|
|
|
$(OUTPUT)/bench: LDLIBS += -lm
|
2021-11-15 17:30:41 -08:00
|
|
|
$(OUTPUT)/bench: $(OUTPUT)/bench.o \
|
2021-12-01 14:51:02 +00:00
|
|
|
$(TESTING_HELPERS) \
|
|
|
|
$(TRACE_HELPERS) \
|
2023-07-04 10:50:39 +08:00
|
|
|
$(CGROUP_HELPERS) \
|
2020-05-12 12:24:44 -07:00
|
|
|
$(OUTPUT)/bench_count.o \
|
selftest/bpf: Add BPF triggering benchmark
It is sometimes desirable to be able to trigger a BPF program from
user-space with minimal overhead. sys_enter would seem to be a good
candidate, yet in a lot of cases there will be a lot of noise from syscalls
triggered by other processes on the system. So while searching for a
low-overhead alternative, I stumbled upon the getpgid() syscall, which
seems to be specific enough not to suffer from accidental invocation by
other apps.
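A hedged sketch of the pattern (not the actual bench_trigger.c; the
program and variable names are invented, and userspace drives it with a
tight syscall(__NR_getpgid) loop):

  #include <vmlinux.h>
  #include <bpf/bpf_helpers.h>

  long hits = 0;

  /* count every entry into getpgid() */
  SEC("fentry/__x64_sys_getpgid")
  int trigger_count(void *ctx)
  {
          __sync_fetch_and_add(&hits, 1);
          return 0;
  }

  char _license[] SEC("license") = "GPL";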
This set of benchmarks compares tp, raw_tp with filtering by syscall ID,
kprobe, fentry, and fmod_ret returning an error (so that the syscall is not
executed), to determine the lowest-overhead way. Here are the results on my
machine (using the benchs/run_bench_trigger.sh script):
base : 9.200 ± 0.319M/s
tp : 6.690 ± 0.125M/s
rawtp : 8.571 ± 0.214M/s
kprobe : 6.431 ± 0.048M/s
fentry : 8.955 ± 0.241M/s
fmodret : 8.903 ± 0.135M/s
So it seems like fmodret doesn't give much benefit for such a lightweight
syscall. Raw tracepoint is pretty decent despite the additional filtering
logic, but it will be called for every syscall in the system, which rules
it out. Fentry, though, seems to add the least amount of overhead and
achieves 97.3% of the performance of the baseline no-BPF-attached syscall.
Using getpgid() seems preferable to the set_task_comm() approach from
test_overhead, as it's about 2.35x faster in baseline performance.
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20200512192445.2351848-5-andriin@fb.com
2020-05-12 12:24:45 -07:00
|
|
|
$(OUTPUT)/bench_rename.o \
|
bpf: Add BPF ringbuf and perf buffer benchmarks
Extend the bench framework with the ability to have a benchmark-provided
child argument parser for custom benchmark-specific parameters. This makes
the generic bench code modular and independent from any specific benchmark.
Also implement a set of benchmarks for the new BPF ring buffer and the
existing perf buffer. 4 benchmarks were implemented, 2 variations for each
of BPF ringbuf and perfbuf:
- rb-libbpf utilizes the stock libbpf ring_buffer manager for reading data;
- rb-custom implements custom ring buffer setup and reading code, to
eliminate overheads inherent in generic libbpf code due to callback
functions and the need to update the consumer position after each consumed
record instead of batching updates (a consequence of the pessimistic
assumption that the user callback might take a long time and thus could
unnecessarily hold ring buffer space for too long);
- pb-libbpf uses stock libbpf perf_buffer code with all the default
settings, though it uses the higher-performance raw event callback to
minimize unnecessary overhead;
- pb-custom implements its own custom consumer code to minimize any
possible overhead of the generic libbpf implementation and indirect
function calls.
All of the tests support the default mode, in which no data notifications
are skipped, as well as sampled mode (with the --rb-sampled flag), which
triggers epoll notifications less frequently and reduces overhead. As will
be shown, this mode is especially critical for the perf buffer, which
suffers from high overhead of wakeups in the kernel.
Otherwise, all benchmarks generate a batch of records in a similar way: a
fentry/sys_getpgid BPF program pushes a bunch of records in a tight loop
and counts successful and dropped samples. Each record is a small 8-byte
integer, to minimize the effect of memory copying with
bpf_perf_event_output() and bpf_ringbuf_output().
Benchmarks that have only one producer implement an optional back-to-back
mode, in which record production and consumption alternate on the same CPU.
This is the highest-throughput happy case, showing the ultimate performance
achievable with either BPF ringbuf or perfbuf.
All the scenarios below are implemented in the benchs/run_bench_ringbufs.sh
script. Tests were performed on a 28-core/56-thread Intel Xeon CPU E5-2680
v4 @ 2.40GHz.
Single-producer, parallel producer
==================================
rb-libbpf 12.054 ± 0.320M/s (drops 0.000 ± 0.000M/s)
rb-custom 8.158 ± 0.118M/s (drops 0.001 ± 0.003M/s)
pb-libbpf 0.931 ± 0.007M/s (drops 0.000 ± 0.000M/s)
pb-custom 0.965 ± 0.003M/s (drops 0.000 ± 0.000M/s)
Single-producer, parallel producer, sampled notification
========================================================
rb-libbpf 11.563 ± 0.067M/s (drops 0.000 ± 0.000M/s)
rb-custom 15.895 ± 0.076M/s (drops 0.000 ± 0.000M/s)
pb-libbpf 9.889 ± 0.032M/s (drops 0.000 ± 0.000M/s)
pb-custom 9.866 ± 0.028M/s (drops 0.000 ± 0.000M/s)
Single producer on one CPU, consumer on another one, both running at full
speed. Curiously, rb-libbpf has higher throughput than the objectively
faster (due to a more lightweight consumer code path) rb-custom. It appears
that the faster consumer causes the kernel to send notifications more
frequently, because the consumer is caught up more often. Performance of
perfbuf suffers from the default "no sampling" policy and the huge overhead
that it causes.
In sampled mode, rb-custom wins very significantly by eliminating overly
frequent in-kernel wakeups; the gain appears to be more than 2x.
Perf buffer achieves even more impressive wins, compared to stock perfbuf
settings, with a 10x improvement in throughput at a 1:500 sampling rate.
The trade-off is that with sampling, the application might not get the next
X events until the X+1st arrives, which is not always acceptable. With a
steady influx of events, though, this shouldn't be a problem.
Overall, single-producer performance of the ring buffer seems to be better
no matter the sampled/non-sampled mode, but it especially beats perfbuf
without sampling, thanks to ringbuf's adaptive notification approach.
Single-producer, back-to-back mode
==================================
rb-libbpf 15.507 ± 0.247M/s (drops 0.000 ± 0.000M/s)
rb-libbpf-sampled 14.692 ± 0.195M/s (drops 0.000 ± 0.000M/s)
rb-custom 21.449 ± 0.157M/s (drops 0.000 ± 0.000M/s)
rb-custom-sampled 20.024 ± 0.386M/s (drops 0.000 ± 0.000M/s)
pb-libbpf 1.601 ± 0.015M/s (drops 0.000 ± 0.000M/s)
pb-libbpf-sampled 8.545 ± 0.064M/s (drops 0.000 ± 0.000M/s)
pb-custom 1.607 ± 0.022M/s (drops 0.000 ± 0.000M/s)
pb-custom-sampled 8.988 ± 0.144M/s (drops 0.000 ± 0.000M/s)
Here we test back-to-back mode, which is arguably the best-case scenario
both for BPF ringbuf and perfbuf, because there is no contention and, for
ringbuf, also no excessive notification, because the consumer appears to be
behind after the first record. For ringbuf, the custom consumer code
clearly wins, with 21.5 vs 16 million records per second exchanged between
producer and consumer. Sampled mode actually hurts a bit, due to slightly
slower producer logic (it needs to fetch the amount of data available to
decide whether to skip or force a notification).
Perfbuf with wakeup sampling gets a 5.5x throughput increase compared to
the no-sampling version. There also doesn't seem to be noticeable overhead
from the generic libbpf handling code.
Perfbuf back-to-back, effect of sample rate
===========================================
pb-sampled-1 1.035 ± 0.012M/s (drops 0.000 ± 0.000M/s)
pb-sampled-5 3.476 ± 0.087M/s (drops 0.000 ± 0.000M/s)
pb-sampled-10 5.094 ± 0.136M/s (drops 0.000 ± 0.000M/s)
pb-sampled-25 7.118 ± 0.153M/s (drops 0.000 ± 0.000M/s)
pb-sampled-50 8.169 ± 0.156M/s (drops 0.000 ± 0.000M/s)
pb-sampled-100 8.887 ± 0.136M/s (drops 0.000 ± 0.000M/s)
pb-sampled-250 9.180 ± 0.209M/s (drops 0.000 ± 0.000M/s)
pb-sampled-500 9.353 ± 0.281M/s (drops 0.000 ± 0.000M/s)
pb-sampled-1000 9.411 ± 0.217M/s (drops 0.000 ± 0.000M/s)
pb-sampled-2000 9.464 ± 0.167M/s (drops 0.000 ± 0.000M/s)
pb-sampled-3000 9.575 ± 0.273M/s (drops 0.000 ± 0.000M/s)
This benchmark shows the effect of event sampling for perfbuf, using
back-to-back mode for highest throughput. Just notifying on every 5th
record gives a 3.5x speed-up. 250-500 appears to be the point of
diminishing returns, with an almost 9x speed-up. Most benchmarks use 500 as
the default sampling rate for pb-raw and pb-custom.
Ringbuf back-to-back, effect of sample rate
===========================================
rb-sampled-1 1.106 ± 0.010M/s (drops 0.000 ± 0.000M/s)
rb-sampled-5 4.746 ± 0.149M/s (drops 0.000 ± 0.000M/s)
rb-sampled-10 7.706 ± 0.164M/s (drops 0.000 ± 0.000M/s)
rb-sampled-25 12.893 ± 0.273M/s (drops 0.000 ± 0.000M/s)
rb-sampled-50 15.961 ± 0.361M/s (drops 0.000 ± 0.000M/s)
rb-sampled-100 18.203 ± 0.445M/s (drops 0.000 ± 0.000M/s)
rb-sampled-250 19.962 ± 0.786M/s (drops 0.000 ± 0.000M/s)
rb-sampled-500 20.881 ± 0.551M/s (drops 0.000 ± 0.000M/s)
rb-sampled-1000 21.317 ± 0.532M/s (drops 0.000 ± 0.000M/s)
rb-sampled-2000 21.331 ± 0.535M/s (drops 0.000 ± 0.000M/s)
rb-sampled-3000 21.688 ± 0.392M/s (drops 0.000 ± 0.000M/s)
A similar benchmark for the ring buffer also shows a great advantage (in
terms of throughput) of skipping notifications. Notifying only on every 5th
record gives a 4x boost. Also similar to the perfbuf case, 250-500 seems to
be the point of diminishing returns, giving roughly 20x better results.
Keep in mind that for this test, notifications are controlled manually with
BPF_RB_NO_WAKEUP and BPF_RB_FORCE_WAKEUP. As can be seen from the previous
benchmarks, adaptive notification based on the consumer's position provides
the same (or even slightly better, due to a simpler load generator on the
BPF side) benefits in the favorable back-to-back scenario. An overzealous
and fast consumer, which is almost always caught up, will make throughput
numbers smaller. That's the case where manual notification control might
prove to be extremely beneficial.
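A hedged sketch of that manual control (the sampling rate and all names
here are illustrative, not the benchmark's actual code):

  #include <vmlinux.h>
  #include <bpf/bpf_helpers.h>

  struct {
          __uint(type, BPF_MAP_TYPE_RINGBUF);
          __uint(max_entries, 4096);
  } rb SEC(".maps");

  long produced = 0;

  SEC("fentry/__x64_sys_getpgid")
  int produce_sampled(void *ctx)
  {
          long *slot = bpf_ringbuf_reserve(&rb, sizeof(*slot), 0);

          if (!slot)
                  return 0;
          *slot = produced;
          /* suppress the wakeup for most records, force one every 500 */
          if (++produced % 500 == 0)
                  bpf_ringbuf_submit(slot, BPF_RB_FORCE_WAKEUP);
          else
                  bpf_ringbuf_submit(slot, BPF_RB_NO_WAKEUP);
          return 0;
  }

  char _license[] SEC("license") = "GPL";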
Ringbuf back-to-back, reserve+commit vs output
==============================================
reserve 22.819 ± 0.503M/s (drops 0.000 ± 0.000M/s)
output 18.906 ± 0.433M/s (drops 0.000 ± 0.000M/s)
Ringbuf sampled, reserve+commit vs output
=========================================
reserve-sampled 15.350 ± 0.132M/s (drops 0.000 ± 0.000M/s)
output-sampled 14.195 ± 0.144M/s (drops 0.000 ± 0.000M/s)
BPF ringbuf supports two sets of APIs with different usability and
performance trade-offs: bpf_ringbuf_reserve()+bpf_ringbuf_submit() vs
bpf_ringbuf_output(). This benchmark clearly shows the superiority of the
reserve+submit approach, despite using a small 8-byte record size.
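For reference, a sketch of the two producer paths being compared, reusing
the illustrative `rb` map from the earlier sketch:

  SEC("fentry/__x64_sys_getpgid")
  int produce_both_ways(void *ctx)
  {
          long v = 42, *slot;

          /* copy-based API: a single call, but the record is copied
           * from `v` into the ring buffer */
          bpf_ringbuf_output(&rb, &v, sizeof(v), 0);

          /* reserve+submit: write the record in place, no extra copy */
          slot = bpf_ringbuf_reserve(&rb, sizeof(*slot), 0);
          if (!slot)
                  return 0;
          *slot = 42;
          bpf_ringbuf_submit(slot, 0);
          return 0;
  }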
Single-producer, consumer/producer competing on the same CPU, low batch count
=============================================================================
rb-libbpf 3.045 ± 0.020M/s (drops 3.536 ± 0.148M/s)
rb-custom 3.055 ± 0.022M/s (drops 3.893 ± 0.066M/s)
pb-libbpf 1.393 ± 0.024M/s (drops 0.000 ± 0.000M/s)
pb-custom 1.407 ± 0.016M/s (drops 0.000 ± 0.000M/s)
This benchmark shows one of the worst-case scenarios, in which producer and
consumer do not coordinate *and* fight for the same CPU. No batch count or
sampling settings were able to eliminate drops for the ring buffer; the
producer is just too fast for the consumer to keep up. But ringbuf and
perfbuf are still able to pass through quite a lot of messages, which is
more than enough for a lot of applications.
Ringbuf, multi-producer contention
==================================
rb-libbpf nr_prod 1 10.916 ± 0.399M/s (drops 0.000 ± 0.000M/s)
rb-libbpf nr_prod 2 4.931 ± 0.030M/s (drops 0.000 ± 0.000M/s)
rb-libbpf nr_prod 3 4.880 ± 0.006M/s (drops 0.000 ± 0.000M/s)
rb-libbpf nr_prod 4 3.926 ± 0.004M/s (drops 0.000 ± 0.000M/s)
rb-libbpf nr_prod 8 4.011 ± 0.004M/s (drops 0.000 ± 0.000M/s)
rb-libbpf nr_prod 12 3.967 ± 0.016M/s (drops 0.000 ± 0.000M/s)
rb-libbpf nr_prod 16 2.604 ± 0.030M/s (drops 0.001 ± 0.002M/s)
rb-libbpf nr_prod 20 2.233 ± 0.003M/s (drops 0.000 ± 0.000M/s)
rb-libbpf nr_prod 24 2.085 ± 0.015M/s (drops 0.000 ± 0.000M/s)
rb-libbpf nr_prod 28 2.055 ± 0.004M/s (drops 0.000 ± 0.000M/s)
rb-libbpf nr_prod 32 1.962 ± 0.004M/s (drops 0.000 ± 0.000M/s)
rb-libbpf nr_prod 36 2.089 ± 0.005M/s (drops 0.000 ± 0.000M/s)
rb-libbpf nr_prod 40 2.118 ± 0.006M/s (drops 0.000 ± 0.000M/s)
rb-libbpf nr_prod 44 2.105 ± 0.004M/s (drops 0.000 ± 0.000M/s)
rb-libbpf nr_prod 48 2.120 ± 0.058M/s (drops 0.000 ± 0.001M/s)
rb-libbpf nr_prod 52 2.074 ± 0.024M/s (drops 0.007 ± 0.014M/s)
Ringbuf uses a very short-duration spinlock during the reservation phase to
check a few invariants, increment the producer count, and set the record
header. This is the biggest point of contention for the ringbuf
implementation. This benchmark evaluates the effect of multiple competing
writers on the overall throughput of a single shared ringbuffer.
Overall throughput drops almost 2x when going from one to two
highly-contended producers, gradually dropping further with additional
competing producers. The performance drop stabilizes at around 20 producers
and hovers around 2 million ops/s even with 50+ fighting producers, which
is a 5x drop compared to the non-contended case. A good implementation in
the kernel helps maintain decent performance here.
Note that in the intended real-world scenarios, it's not expected to get
even close to such high levels of contention. But if contention becomes a
problem, there is always the option of sharding a few ring buffers across a
set of CPUs.
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20200529075424.3139988-5-andriin@fb.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2020-05-29 00:54:23 -07:00
|
|
|
$(OUTPUT)/bench_trigger.o \
|
2021-10-27 16:45:03 -07:00
|
|
|
$(OUTPUT)/bench_ringbufs.o \
|
selftest/bpf/benchs: Add bpf_loop benchmark
Add benchmark to measure the throughput and latency of the bpf_loop
call.
Testing this on my dev machine on 1 thread, the data is as follows:
nr_loops: 10
bpf_loop - throughput: 198.519 ± 0.155 M ops/s, latency: 5.037 ns/op
nr_loops: 100
bpf_loop - throughput: 247.448 ± 0.305 M ops/s, latency: 4.041 ns/op
nr_loops: 500
bpf_loop - throughput: 260.839 ± 0.380 M ops/s, latency: 3.834 ns/op
nr_loops: 1000
bpf_loop - throughput: 262.806 ± 0.629 M ops/s, latency: 3.805 ns/op
nr_loops: 5000
bpf_loop - throughput: 264.211 ± 1.508 M ops/s, latency: 3.785 ns/op
nr_loops: 10000
bpf_loop - throughput: 265.366 ± 3.054 M ops/s, latency: 3.768 ns/op
nr_loops: 50000
bpf_loop - throughput: 235.986 ± 20.205 M ops/s, latency: 4.238 ns/op
nr_loops: 100000
bpf_loop - throughput: 264.482 ± 0.279 M ops/s, latency: 3.781 ns/op
nr_loops: 500000
bpf_loop - throughput: 309.773 ± 87.713 M ops/s, latency: 3.228 ns/op
nr_loops: 1000000
bpf_loop - throughput: 262.818 ± 4.143 M ops/s, latency: 3.805 ns/op
From this data, we can see that the latency per loop decreases as the
number of loops increases. On this particular machine, each loop had an
overhead of ~4 ns, and we were able to run ~250 million loops
per second.
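A hedged sketch of the measured call (not the actual benchs code; the
program and variable names are invented, and nr_loops would be set by the
userspace runner):

  #include <vmlinux.h>
  #include <bpf/bpf_helpers.h>

  __u32 nr_loops;       /* set from userspace before load */
  long hits = 0;

  static int empty_cb(__u32 index, void *ctx)
  {
          return 0;     /* 0 = continue looping */
  }

  SEC("fentry/__x64_sys_getpgid")
  int benchmark(void *ctx)
  {
          bpf_loop(nr_loops, empty_cb, NULL, 0);
          __sync_fetch_and_add(&hits, nr_loops);
          return 0;
  }

  char _license[] SEC("license") = "GPL";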
Signed-off-by: Joanne Koong <joannekoong@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211130030622.4131246-5-joannekoong@fb.com
2021-11-29 19:06:22 -08:00
|
|
|
$(OUTPUT)/bench_bloom_filter_map.o \
|
2021-12-10 22:16:51 +08:00
|
|
|
$(OUTPUT)/bench_bpf_loop.o \
|
2022-06-10 10:33:08 +08:00
|
|
|
$(OUTPUT)/bench_strncmp.o \
|
2022-06-20 15:25:54 -07:00
|
|
|
$(OUTPUT)/bench_bpf_hashmap_full_update.o \
|
2022-07-05 12:00:18 -07:00
|
|
|
$(OUTPUT)/bench_local_storage.o \
|
2023-02-13 09:15:19 +00:00
|
|
|
$(OUTPUT)/bench_local_storage_rcu_tasks_trace.o \
|
|
|
|
$(OUTPUT)/bench_bpf_hashmap_lookup.o \
|
2023-03-07 22:59:36 -08:00
|
|
|
$(OUTPUT)/bench_local_storage_create.o \
|
2023-07-04 10:50:39 +08:00
|
|
|
$(OUTPUT)/bench_htab_mem.o \
|
2024-04-22 15:50:24 -07:00
|
|
|
$(OUTPUT)/bench_bpf_crypto.o \
|
2025-04-07 22:21:23 +08:00
|
|
|
$(OUTPUT)/bench_sockmap.o \
|
2023-02-13 09:15:19 +00:00
|
|
|
#
|
2020-05-12 12:24:43 -07:00
|
|
|
$(call msg,BINARY,,$@)
|
2021-12-16 16:38:43 +00:00
|
|
|
$(Q)$(CC) $(CFLAGS) $(LDFLAGS) $(filter %.a %.o,$^) $(LDLIBS) -o $@
|
selftests/bpf: Add benchmark runner infrastructure
While working on BPF ringbuf implementation, testing, and benchmarking, I've
developed a pretty generic and modular benchmark runner, which seems to be
generically useful, as I've already used it for one more purpose (testing
fastest way to trigger BPF program, to minimize overhead of in-kernel code).
This patch adds generic part of benchmark runner and sets up Makefile for
extending it with more sets of benchmarks.
Benchmarker itself operates by spinning up specified number of producer and
consumer threads, setting up interval timer sending SIGALARM signal to
application once a second. Every second, current snapshot with hits/drops
counters are collected and stored in an array. Drops are useful for
producer/consumer benchmarks in which producer might overwhelm consumers.
Once test finishes after given amount of warm-up and testing seconds, mean and
stddev are calculated (ignoring warm-up results) and is printed out to stdout.
This setup seems to give consistent and accurate results.
To validate behavior, I added two atomic counting tests: global and local.
For global one, all the producer threads are atomically incrementing same
counter as fast as possible. This, of course, leads to huge drop of
performance once there is more than one producer thread due to CPUs fighting
for the same memory location.
Local counting, on the other hand, maintains one counter per each producer
thread, incremented independently. Once per second, all counters are read and
added together to form final "counting throughput" measurement. As expected,
such setup demonstrates linear scalability with number of producers (as long
as there are enough physical CPU cores, of course). See example output below.
Also, this setup can nicely demonstrate disastrous effects of false sharing,
if care is not taken to take those per-producer counters apart into
independent cache lines.
Demo output shows global counter first with 1 producer, then with 4. Both
total and per-producer performance significantly drop. The last run is local
counter with 4 producers, demonstrating near-perfect scalability.
$ ./bench -a -w1 -d2 -p1 count-global
Setting up benchmark 'count-global'...
Benchmark 'count-global' started.
Iter 0 ( 24.822us): hits 148.179M/s (148.179M/prod), drops 0.000M/s
Iter 1 ( 37.939us): hits 149.308M/s (149.308M/prod), drops 0.000M/s
Iter 2 (-10.774us): hits 150.717M/s (150.717M/prod), drops 0.000M/s
Iter 3 ( 3.807us): hits 151.435M/s (151.435M/prod), drops 0.000M/s
Summary: hits 150.488 ± 1.079M/s (150.488M/prod), drops 0.000 ± 0.000M/s
$ ./bench -a -w1 -d2 -p4 count-global
Setting up benchmark 'count-global'...
Benchmark 'count-global' started.
Iter 0 ( 60.659us): hits 53.910M/s ( 13.477M/prod), drops 0.000M/s
Iter 1 (-17.658us): hits 53.722M/s ( 13.431M/prod), drops 0.000M/s
Iter 2 ( 5.865us): hits 53.495M/s ( 13.374M/prod), drops 0.000M/s
Iter 3 ( 0.104us): hits 53.606M/s ( 13.402M/prod), drops 0.000M/s
Summary: hits 53.608 ± 0.113M/s ( 13.402M/prod), drops 0.000 ± 0.000M/s
$ ./bench -a -w1 -d2 -p4 count-local
Setting up benchmark 'count-local'...
Benchmark 'count-local' started.
Iter 0 ( 23.388us): hits 640.450M/s (160.113M/prod), drops 0.000M/s
Iter 1 ( 2.291us): hits 605.661M/s (151.415M/prod), drops 0.000M/s
Iter 2 ( -6.415us): hits 607.092M/s (151.773M/prod), drops 0.000M/s
Iter 3 ( -1.361us): hits 601.796M/s (150.449M/prod), drops 0.000M/s
Summary: hits 604.849 ± 2.739M/s (151.212M/prod), drops 0.000 ± 0.000M/s
The benchmark runner supports setting thread affinity for producer and
consumer threads. You can use the -a flag for the default CPU selection
scheme, in which the first consumer gets CPU #0, the next one gets CPU #1,
and so on; producer threads then pick up the next CPU and increment
one-by-one as well. But the user can also specify a set of CPUs
independently for producers and consumers with --prod-affinity 1,2-10,15
and --cons-affinity <set-of-cpus>. The latter makes it possible to force
producers and consumers to share the same set of CPUs, if necessary.
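Pinning a thread this way boils down to building a cpu_set_t and calling
pthread_setaffinity_np(); a minimal sketch (the helper name is invented,
and parsing of ranges like 1,2-10,15 is omitted):

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>

/* pin one thread to a single CPU; the real runner walks a whole CPU set */
static int pin_thread_to_cpu(pthread_t thread, int cpu)
{
	cpu_set_t set;

	CPU_ZERO(&set);
	CPU_SET(cpu, &set);
	return pthread_setaffinity_np(thread, sizeof(set), &set);
}

/* e.g., pin_thread_to_cpu(pthread_self(), 0); */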
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20200512192445.2351848-3-andriin@fb.com
2020-05-12 12:24:43 -07:00
# This works around GCC warning about snprintf truncating strings like:
#
# char a[PATH_MAX], b[PATH_MAX];
# snprintf(a, sizeof(a), "%s/foo", b); // triggers -Wformat-truncation
$(OUTPUT)/veristat.o: CFLAGS += -Wno-format-truncation
selftests/bpf: Add veristat tool for mass-verifying BPF object files
Add a small tool, veristat, that allows mass verification of a set of
*libbpf-compatible* BPF ELF object files. For each such object file,
veristat attempts to verify each BPF program *individually*. Regardless
of success or failure, it parses the BPF verifier stats and outputs them
in a human-readable table format. In the future we can also add CSV and
JSON output for more scriptable post-processing, if necessary.
veristat allows specifying the set of stats to output and the ordering
across multiple objects and files (e.g., so that one can easily order by
total instructions processed instead of the default file name, prog name,
verdict, total instructions order).
This tool should be useful for validating various BPF verifier changes
or even validating different kernel versions for regressions.
Here's an example for some of the heaviest selftests/bpf BPF object
files:
$ sudo ./veristat -s insns,file,prog {pyperf,loop,test_verif_scale,strobemeta,test_cls_redirect,profiler}*.linked3.o
File Program Verdict Duration, us Total insns Total states Peak states
------------------------------------ ------------------------------------ ------- ------------ ----------- ------------ -----------
loop3.linked3.o while_true failure 350990 1000001 9663 9663
test_verif_scale3.linked3.o balancer_ingress success 115244 845499 8636 2141
test_verif_scale2.linked3.o balancer_ingress success 77688 773445 3048 788
pyperf600.linked3.o on_event success 2079872 624585 30335 30241
pyperf600_nounroll.linked3.o on_event success 353972 568128 37101 2115
strobemeta.linked3.o on_event success 455230 557149 15915 13537
test_verif_scale1.linked3.o balancer_ingress success 89880 554754 8636 2141
strobemeta_nounroll2.linked3.o on_event success 433906 501725 17087 1912
loop6.linked3.o trace_virtqueue_add_sgs success 282205 398057 8717 919
loop1.linked3.o nested_loops success 125630 361349 5504 5504
pyperf180.linked3.o on_event success 2511740 160398 11470 11446
pyperf100.linked3.o on_event success 744329 87681 6213 6191
test_cls_redirect.linked3.o cls_redirect success 54087 78925 4782 903
strobemeta_subprogs.linked3.o on_event success 57898 65420 1954 403
test_cls_redirect_subprogs.linked3.o cls_redirect success 54522 64965 4619 958
strobemeta_nounroll1.linked3.o on_event success 43313 57240 1757 382
pyperf50.linked3.o on_event success 194355 46378 3263 3241
profiler2.linked3.o tracepoint__syscalls__sys_enter_kill success 23869 43372 1423 542
pyperf_subprogs.linked3.o on_event success 29179 36358 2499 2499
profiler1.linked3.o tracepoint__syscalls__sys_enter_kill success 13052 27036 1946 936
profiler3.linked3.o tracepoint__syscalls__sys_enter_kill success 21023 26016 2186 915
profiler2.linked3.o kprobe__vfs_link success 5255 13896 303 271
profiler1.linked3.o kprobe__vfs_link success 7792 12687 1042 1041
profiler3.linked3.o kprobe__vfs_link success 7332 10601 865 865
profiler2.linked3.o kprobe_ret__do_filp_open success 3417 8900 216 199
profiler2.linked3.o kprobe__vfs_symlink success 3548 8775 203 186
pyperf_global.linked3.o on_event success 10007 7563 520 520
profiler3.linked3.o kprobe_ret__do_filp_open success 4708 6464 532 532
profiler1.linked3.o kprobe_ret__do_filp_open success 3090 6445 508 508
profiler3.linked3.o kprobe__vfs_symlink success 4477 6358 521 521
profiler1.linked3.o kprobe__vfs_symlink success 3381 6347 507 507
profiler2.linked3.o raw_tracepoint__sched_process_exec success 2464 5874 292 189
profiler3.linked3.o raw_tracepoint__sched_process_exec success 2677 4363 397 283
profiler2.linked3.o kprobe__proc_sys_write success 1800 4355 143 138
profiler1.linked3.o raw_tracepoint__sched_process_exec success 1649 4019 333 240
pyperf600_bpf_loop.linked3.o on_event success 2711 3966 306 306
profiler2.linked3.o raw_tracepoint__sched_process_exit success 1234 3138 83 66
profiler3.linked3.o kprobe__proc_sys_write success 1755 2623 223 223
profiler1.linked3.o kprobe__proc_sys_write success 1222 2456 193 193
loop2.linked3.o while_true success 608 1783 57 30
profiler3.linked3.o raw_tracepoint__sched_process_exit success 789 1680 146 146
profiler1.linked3.o raw_tracepoint__sched_process_exit success 592 1526 133 133
strobemeta_bpf_loop.linked3.o on_event success 1015 1512 106 106
loop4.linked3.o combinations success 165 524 18 17
profiler3.linked3.o raw_tracepoint__sched_process_fork success 196 299 25 25
profiler1.linked3.o raw_tracepoint__sched_process_fork success 109 265 19 19
profiler2.linked3.o raw_tracepoint__sched_process_fork success 111 265 19 19
loop5.linked3.o while_true success 47 84 9 9
------------------------------------ ------------------------------------ ------- ------------ ----------- ------------ -----------
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20220909193053.577111-4-andrii@kernel.org
2022-09-09 12:30:53 -07:00
$(OUTPUT)/veristat.o: $(BPFOBJ)
$(OUTPUT)/veristat: $(OUTPUT)/veristat.o
$(call msg,BINARY,,$@)
$(Q)$(CC) $(CFLAGS) $(LDFLAGS) $(filter %.a %.o,$^) $(LDLIBS) -o $@
# Linking uprobe_multi can fail due to relocation overflows on mips.
$(OUTPUT)/uprobe_multi: CFLAGS += $(if $(filter mips, $(ARCH)),-mxgot)
$(OUTPUT)/uprobe_multi: uprobe_multi.c uprobe_multi.ld
$(call msg,BINARY,,$@)
$(Q)$(CC) $(CFLAGS) -Wl,-T,uprobe_multi.ld -O0 $(LDFLAGS) \
$(filter-out %.ld,$^) $(LDLIBS) -o $@
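# Additional files and directories for the kselftest clean rule to remove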
EXTRA_CLEAN := $(SCRATCH_DIR) $(HOST_SCRATCH_DIR) \
selftests/bpf: Replace test_progs and test_maps w/ general rule
Define a test runner generation meta-rule that codifies dependencies
between a test runner, its tests, and its dependent BPF programs. Use it
to define the test_progs and test_maps test runners. Additionally, define
two flavors of test_progs:
- alu32, which builds BPF programs with 32-bit registers codegen;
- bpf_gcc, which builds BPF programs using GCC, if it supports the BPF
target.
Overall, this is accomplished by $(eval)'ing a set of generic rules,
which define Makefile targets dynamically at runtime. See the comments
explaining the need for the two $(eval)s.
For each test runner we have (test_maps and test_progs, currently) and,
optionally, its flavors, the logic of the build process is modeled as
follows (using test_progs as an example):
- all BPF objects are in progs/:
  - a BPF object's .o file is built into the output directory from the
    corresponding progs/*.c file;
  - all BPF objects in progs/*.c depend on all progs/*.h headers;
  - all BPF objects depend on the bpf_*.h helpers from libbpf (but not the
    libbpf archive); there is an extra rule to trigger a bpf_helper_defs.h
    (re-)build if it's not present or outdated;
  - the build recipe for a BPF object can be re-defined per test
    runner/flavor;
- test files are built from prog_tests/*.c:
  - all such test file objects are built on an individual file basis;
  - currently, every single test file depends on all BPF object files;
    this might be improved in follow-up patches to use 1-to-1
    dependencies, allowing this to be customized per individual test;
  - each test runner definition can specify a list of extra .c and .h
    files to be built along with the test files and the test runner
    binary; all such headers become an automatic dependency of each test
    .c file;
- because test files sometimes embed (using the .incbin assembly
  directive) the contents of some BPF objects at compilation time, and
  those are expected to be in the compiler's CWD, compilation of a test
  file object does cd into the test runner's output directory; to support
  this mode, all include paths are turned into absolute paths using the
  $(abspath) make function (see the sketch after this list);
- prog_tests/tests.h is automatically (re-)generated with an entry for
  each .c file in prog_tests/;
- the final test runner binary is linked together from the test object
  files and extra object files, linking in libbpf's archive as well;
- it's possible to specify extra "resource" files/targets, which will be
  copied into the test runner's output directory if it differs from the
  Makefile-wide $(OUTPUT). This is used to ensure the btf_dump test cases
  and the urandom_read binary are put into a test runner's CWD for tests
  to find them at runtime.
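The .incbin embedding mentioned in the list above looks roughly like this
inside a test file (a sketch with hypothetical symbol and object names;
the quoted path is resolved against the compiler's CWD, which is exactly
why the build cds into the output directory):

/* embed the raw bytes of a BPF object file into the test binary */
__asm__(".pushsection .rodata\n"
	".global test_obj_data\n"
	"test_obj_data:\n"
	".incbin \"some_test.o\"\n" /* hypothetical object name */
	".global test_obj_data_end\n"
	"test_obj_data_end:\n"
	".popsection\n");

extern const char test_obj_data[];
extern const char test_obj_data_end[];

/* the embedded blob's size is the distance between the two labels */
static inline long test_obj_size(void)
{
	return test_obj_data_end - test_obj_data;
}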
For flavored test runners, the output directory is a subdirectory of the
common Makefile-wide $(OUTPUT) directory, with the flavor name used as
the subdirectory name.
BPF object targets might be reused between different test runners, so
extra checks are employed to avoid double-defining them. Similarly, we
have redefinition guards for output directories and test headers.
test_verifier follows slightly different patterns and is simple enough
that it does not justify generalizing
TEST_RUNNER_DEFINE/TEST_RUNNER_DEFINE_RULES further to accommodate these
differences. Instead, the rules for test_verifier are minimized and
simplified, while preserving correctness of dependencies.
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191016060051.2024182-6-andriin@fb.com
2019-10-15 23:00:49 -07:00
prog_tests/tests.h map_tests/tests.h verifier/tests.h \
feature bpftool $(TEST_KMOD_TARGETS) \
$(addprefix $(OUTPUT)/,*.o *.d *.skel.h *.lskel.h *.subskel.h \
no_alu32 cpuv4 bpf_gcc \
liburandom_read.so) \
$(OUTPUT)/FEATURE-DUMP.selftests
.PHONY: docs docs-clean
# Delete partially updated (corrupted) files on error
.DELETE_ON_ERROR:
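# Save the default kselftest install rule, then redefine it to also copy
# each test subdirectory's .bpf.o files into the install path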
DEFAULT_INSTALL_RULE := $(INSTALL_RULE)
override define INSTALL_RULE
$(DEFAULT_INSTALL_RULE)
@for DIR in $(TEST_INST_SUBDIRS); do \
mkdir -p $(INSTALL_PATH)/$$DIR; \
rsync -a $(OUTPUT)/$$DIR/*.bpf.o $(INSTALL_PATH)/$$DIR;\
done
endef