linux/tools/testing/selftests/bpf/progs/bpf_misc.h

/* SPDX-License-Identifier: GPL-2.0 */
#ifndef __BPF_MISC_H__
#define __BPF_MISC_H__
#define XSTR(s) STR(s)
#define STR(s) #s
/* Expand a macro and then stringize the expansion */
#define QUOTE(str) #str
#define EXPAND_QUOTE(str) QUOTE(str)
/* This set of attributes controls the behavior of
* test_loader.c:test_loader__run_subtests().
*
* The test_loader sequentially loads each program in a skeleton.
* Programs can be loaded in privileged and unprivileged modes.
* - __success, __failure and __msg imply privileged mode;
* - __success_unpriv, __failure_unpriv and __msg_unpriv
* imply unprivileged mode.
* If a combination of privileged and unprivileged attributes is present,
* both modes are used. If none are present, privileged mode is implied.
*
* See test_loader.c:drop_capabilities() for exact set of capabilities
* that differ between privileged and unprivileged modes.
*
* For test filtering purposes, the name of a program loaded in
* unprivileged mode is derived from the usual program name by adding
* the `@unpriv' suffix.
*
* __msg Message expected to be found in the verifier log.
* Multiple __msg attributes can be specified.
* To match a regular expression, enclose it in "{{" "}}"
* brackets, e.g. "foo{{[0-9]+}}" matches strings like "foo007".
* Extended POSIX regular expression syntax is allowed
* inside the brackets.
* __msg_unpriv Same as __msg but for unprivileged mode.
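*
* For example (a sketch; the message text is illustrative and
* varies across kernel versions):
*
* __failure
* __msg("R2 invalid mem access")
* __msg("processed {{[0-9]+}} insns")
* __naked void invalid_mem_access(void)
* {
* asm volatile (... ::: __clobber_all);
* }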
*
* __xlated Expect a line in the disassembly log after the verifier
* applies rewrites.
* Multiple __xlated attributes can be specified.
* Regular expressions can be specified the same way as in __msg.
* __xlated_unpriv Same as __xlated but for unprivileged mode.
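*
* For example (illustrative; instruction indexes and the exact
* rewritten instructions depend on the program and kernel):
*
* __xlated("1: r0 = &(void __percpu *)(r0)")
* __xlated("2: r0 = *(u32 *)(r0 +0)")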
*
* __jited Match a line in the disassembly of the JITed BPF program.
* Has to be used after an __arch_* macro.
* For example:
*
* __arch_x86_64
* __jited(" endbr64")
* __jited(" nopl (%rax,%rax)")
* __jited(" xorq %rax, %rax")
* ...
* __naked void some_test(void)
* {
* asm volatile (... ::: __clobber_all);
* }
*
* Regular expressions can be included in patterns the same way
* as in __msg.
*
* By default, each pattern has to be matched on the next
* consecutive line of the disassembly, e.g.:
*
* __jited(" endbr64") # matched on line N
* __jited(" nopl (%rax,%rax)") # matched on line N+1
*
* If a match occurs on a wrong line, an error is reported.
* To override this behaviour, use the literal "...", e.g.:
*
* __jited(" endbr64") # matched on line N
* __jited("...") # not matched
* __jited(" nopl (%rax,%rax)") # matched on any line >= N
*
* __jited_unpriv Same as __jited but for unprivileged mode.
*
*
* __success Expect program load success in privileged mode.
* __success_unpriv Expect program load success in unprivileged mode.
*
* __failure Expect program load failure in privileged mode.
* __failure_unpriv Expect program load failure in unprivileged mode.
*
* __retval Execute the program using the BPF_PROG_TEST_RUN command,
* expect the return value to match the passed parameter:
* - a decimal number
* - a hexadecimal number, when it starts with 0x
* - a macro which expands to one of the above
* - the literal _INT_MIN (expands to INT_MIN)
* In addition, two special macros are defined below:
* - POINTER_VALUE
* - TEST_DATA_LEN
* __retval_unpriv Same, but load the program in unprivileged mode.
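*
* For example (a minimal sketch):
*
* SEC("socket")
* __success __retval(42)
* int the_answer(void *ctx)
* {
* return 42;
* }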
*
* __description Text to be used instead of a program name for display
* and filtering purposes.
*
* __log_level Log level to use for the program, numeric value expected.
*
* __flag Adds one flag to use for the program; the following values are valid:
* - BPF_F_STRICT_ALIGNMENT;
* - BPF_F_TEST_RND_HI32;
* - BPF_F_TEST_STATE_FREQ;
* - BPF_F_SLEEPABLE;
* - BPF_F_XDP_HAS_FRAGS;
* - a numeric value.
* Multiple __flag attributes can be specified; the final flags
* value is derived by applying bitwise "or" to all specified values.
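*
* For example, __flag(BPF_F_TEST_STATE_FREQ) __flag(BPF_F_TEST_RND_HI32)
* loads the program with both flags set.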
*
* __auxiliary The annotated program is not a separate test, but is used as an
* auxiliary for some other test cases and should always be loaded.
* __auxiliary_unpriv Same, but load the program in unprivileged mode.
*
* __arch_* Specify on which architectures the test case should be tested.
* Several __arch_* annotations can be specified at once.
* When a test case is not run on the current arch, it is marked as skipped.
* __caps_unpriv Specify the capabilities that should be set when running the
* test in unprivileged mode.
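*
* For example, __caps_unpriv(CAP_BPF|CAP_PERFMON) runs the unprivileged
* variant of the test with CAP_BPF and CAP_PERFMON set.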
*/
#define __msg(msg) __attribute__((btf_decl_tag("comment:test_expect_msg=" XSTR(__COUNTER__) "=" msg)))
#define __xlated(msg) __attribute__((btf_decl_tag("comment:test_expect_xlated=" XSTR(__COUNTER__) "=" msg)))
#define __jited(msg) __attribute__((btf_decl_tag("comment:test_jited=" XSTR(__COUNTER__) "=" msg)))
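/* For example, assuming the first use in a translation unit (so __COUNTER__
 * expands to 0):
 *
 *	__msg("foo")
 * expands to
 *	__attribute__((btf_decl_tag("comment:test_expect_msg=0=foo")))
 *
 * The embedded counter keeps multiple patterns for one program ordered.
 */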
#define __failure __attribute__((btf_decl_tag("comment:test_expect_failure")))
#define __success __attribute__((btf_decl_tag("comment:test_expect_success")))
#define __description(desc) __attribute__((btf_decl_tag("comment:test_description=" desc)))
#define __msg_unpriv(msg) __attribute__((btf_decl_tag("comment:test_expect_msg_unpriv=" XSTR(__COUNTER__) "=" msg)))
#define __xlated_unpriv(msg) __attribute__((btf_decl_tag("comment:test_expect_xlated_unpriv=" XSTR(__COUNTER__) "=" msg)))
#define __jited_unpriv(msg) __attribute__((btf_decl_tag("comment:test_jited_unpriv=" XSTR(__COUNTER__) "=" msg)))
#define __failure_unpriv __attribute__((btf_decl_tag("comment:test_expect_failure_unpriv")))
#define __success_unpriv __attribute__((btf_decl_tag("comment:test_expect_success_unpriv")))
#define __log_level(lvl) __attribute__((btf_decl_tag("comment:test_log_level="#lvl)))
#define __flag(flag) __attribute__((btf_decl_tag("comment:test_prog_flags="#flag)))
#define __retval(val) __attribute__((btf_decl_tag("comment:test_retval="XSTR(val))))
#define __retval_unpriv(val) __attribute__((btf_decl_tag("comment:test_retval_unpriv="XSTR(val))))
#define __auxiliary __attribute__((btf_decl_tag("comment:test_auxiliary")))
#define __auxiliary_unpriv __attribute__((btf_decl_tag("comment:test_auxiliary_unpriv")))
#define __btf_path(path) __attribute__((btf_decl_tag("comment:test_btf_path=" path)))
#define __arch(arch) __attribute__((btf_decl_tag("comment:test_arch=" arch)))
#define __arch_x86_64 __arch("X86_64")
#define __arch_arm64 __arch("ARM64")
#define __arch_riscv64 __arch("RISCV64")
#define __caps_unpriv(caps) __attribute__((btf_decl_tag("comment:test_caps_unpriv=" EXPAND_QUOTE(caps))))
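/* Load the annotated program only when BPF programs are (or are not) JITed */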
#define __load_if_JITed() __attribute__((btf_decl_tag("comment:load_mode=jited")))
#define __load_if_no_JITed() __attribute__((btf_decl_tag("comment:load_mode=no_jited")))
/* Common capability values for tests using __caps_unpriv */
#define CAP_NET_ADMIN 12
#define CAP_SYS_ADMIN 21
#define CAP_PERFMON 38
#define CAP_BPF 39
/* Convenience macros for use with 'asm volatile' blocks */
#define __naked __attribute__((naked))
#define __clobber_all "r0", "r1", "r2", "r3", "r4", "r5", "r6", "r7", "r8", "r9", "memory"
#define __clobber_common "r0", "r1", "r2", "r3", "r4", "r5", "memory"
#define __imm(name) [name]"i"(name)
#define __imm_const(name, expr) [name]"i"(expr)
#define __imm_addr(name) [name]"i"(&name)
/* Use the "r" constraint rather than "p" for __imm_ptr: 'rN = rM' requires a
 * register operand, while "p" may yield a non-register address such as r10-4.
 */
#define __imm_ptr(name) [name]"r"(&name)
#define __imm_insn(name, expr) [name]"i"(*(long *)&(expr))
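/* A minimal usage sketch for the __imm* helpers (map_a and
 * bpf_map_lookup_elem are assumed to be declared elsewhere):
 *
 *	__naked void imm_helpers_example(void)
 *	{
 *		asm volatile (
 *			"r1 = %[map_a] ll;"
 *			"r2 = r10;"
 *			"r2 += -8;"
 *			"call %[bpf_map_lookup_elem];"
 *			"exit;"
 *			:
 *			: __imm_addr(map_a),
 *			  __imm(bpf_map_lookup_elem)
 *			: __clobber_all);
 *	}
 */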
/* Magic constants used with __retval() */
#define POINTER_VALUE 0xbadcafe
#define TEST_DATA_LEN 64
#ifndef __used
#define __used __attribute__((used))
#endif
#if defined(__TARGET_ARCH_x86)
#define SYSCALL_WRAPPER 1
#define SYS_PREFIX "__x64_"
#elif defined(__TARGET_ARCH_s390)
#define SYSCALL_WRAPPER 1
#define SYS_PREFIX "__s390x_"
#elif defined(__TARGET_ARCH_arm64)
#define SYSCALL_WRAPPER 1
#define SYS_PREFIX "__arm64_"
#elif defined(__TARGET_ARCH_riscv)
#define SYSCALL_WRAPPER 1
#define SYS_PREFIX "__riscv_"
#elif defined(__TARGET_ARCH_powerpc)
#define SYSCALL_WRAPPER 1
#define SYS_PREFIX ""
#else
#define SYSCALL_WRAPPER 0
#define SYS_PREFIX "__se_"
#endif
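/* For example, a kprobe on the arch-specific syscall wrapper:
 *
 *	SEC("kprobe/" SYS_PREFIX "sys_nanosleep")
 */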
/* How many arguments are passed to a function in registers */
#if defined(__TARGET_ARCH_x86) || defined(__x86_64__)
#define FUNC_REG_ARG_CNT 6
#elif defined(__i386__)
#define FUNC_REG_ARG_CNT 3
#elif defined(__TARGET_ARCH_s390) || defined(__s390x__)
#define FUNC_REG_ARG_CNT 5
#elif defined(__TARGET_ARCH_arm) || defined(__arm__)
#define FUNC_REG_ARG_CNT 4
#elif defined(__TARGET_ARCH_arm64) || defined(__aarch64__)
#define FUNC_REG_ARG_CNT 8
#elif defined(__TARGET_ARCH_mips) || defined(__mips__)
#define FUNC_REG_ARG_CNT 8
#elif defined(__TARGET_ARCH_powerpc) || defined(__powerpc__) || defined(__powerpc64__)
#define FUNC_REG_ARG_CNT 8
#elif defined(__TARGET_ARCH_sparc) || defined(__sparc__)
#define FUNC_REG_ARG_CNT 6
#elif defined(__TARGET_ARCH_riscv) || defined(__riscv__)
#define FUNC_REG_ARG_CNT 8
#else
/* default to 5 for others */
#define FUNC_REG_ARG_CNT 5
#endif
/* make it look to the compiler like the value is read and written */
#define __sink(expr) asm volatile("" : "+g"(expr))
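/* For example, force an otherwise-unused value to be kept (dst and src
 * assumed to be declared elsewhere):
 *
 *	int ret = bpf_probe_read_kernel(&dst, sizeof(dst), src);
 *	__sink(ret);
 */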
#ifndef ARRAY_SIZE
#define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))
#endif
#if (defined(__TARGET_ARCH_arm64) || defined(__TARGET_ARCH_x86) || \
(defined(__TARGET_ARCH_riscv) && __riscv_xlen == 64) || \
defined(__TARGET_ARCH_arm) || defined(__TARGET_ARCH_s390) || \
defined(__TARGET_ARCH_loongarch)) && \
__clang_major__ >= 18
#define CAN_USE_GOTOL
#endif
#if __clang_major__ >= 18
#define CAN_USE_BPF_ST
#endif
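/* Tests typically gate instruction-set dependent asm on these macros,
 * e.g. (illustrative):
 *
 *	#ifdef CAN_USE_GOTOL
 *	asm volatile ("gotol l0_%=;" ... "l0_%=:" ... ::: __clobber_all);
 *	#endif
 */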
#if __clang_major__ >= 18 && defined(ENABLE_ATOMICS_TESTS) && \
(defined(__TARGET_ARCH_arm64) || defined(__TARGET_ARCH_x86) || \
(defined(__TARGET_ARCH_riscv) && __riscv_xlen == 64)) || \
(defined(__TARGET_ARCH_powerpc))
#define CAN_USE_LOAD_ACQ_STORE_REL
#endif
/* Archs where the Spectre v1/v4 mitigations insert speculation barriers
 * (nospec); used to gate __xlated_unpriv checks for "nospec" instructions.
 */
#if defined(__TARGET_ARCH_arm64) || defined(__TARGET_ARCH_x86)
#define SPEC_V1
#endif
#if defined(__TARGET_ARCH_x86)
#define SPEC_V4
#endif
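/* For example (illustrative):
 *
 *	#ifdef SPEC_V1
 *	__xlated_unpriv("nospec")
 *	#endif
 */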
#endif