hardening updates for v6.17-rc1

Merge tag 'hardening-v6.17-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux

Pull hardening updates from Kees Cook:

 - Introduce and start using TRAILING_OVERLAP() helper for fixing
   embedded flex array instances (Gustavo A. R. Silva)

 - mux: Convert mux_control_ops to a flex array member in mux_chip
   (Thorsten Blum)

 - string: Group str_has_prefix() and strstarts() (Andy Shevchenko)

 - Remove KCOV instrumentation from __init and __head (Ritesh Harjani,
   Kees Cook)

 - Refactor and rename stackleak feature to support Clang

 - Add KUnit test for seq_buf API

 - Fix KUnit fortify test under LTO

* tag 'hardening-v6.17-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux: (22 commits)
  sched/task_stack: Add missing const qualifier to end_of_stack()
  kstack_erase: Support Clang stack depth tracking
  kstack_erase: Add -mgeneral-regs-only to silence Clang warnings
  init.h: Disable sanitizer coverage for __init and __head
  kstack_erase: Disable kstack_erase for all of arm compressed boot code
  x86: Handle KCOV __init vs inline mismatches
  arm64: Handle KCOV __init vs inline mismatches
  s390: Handle KCOV __init vs inline mismatches
  arm: Handle KCOV __init vs inline mismatches
  mips: Handle KCOV __init vs inline mismatch
  powerpc/mm/book3s64: Move kfence and debug_pagealloc related calls to __init section
  configs/hardening: Enable CONFIG_INIT_ON_FREE_DEFAULT_ON
  configs/hardening: Enable CONFIG_KSTACK_ERASE
  stackleak: Split KSTACK_ERASE_CFLAGS from GCC_PLUGINS_CFLAGS
  stackleak: Rename stackleak_track_stack to __sanitizer_cov_stack_depth
  stackleak: Rename STACKLEAK to KSTACK_ERASE
  seq_buf: Introduce KUnit tests
  string: Group str_has_prefix() and strstarts()
  kunit/fortify: Add back "volatile" for sizeof() constants
  acpi: nfit: intel: avoid multiple -Wflex-array-member-not-at-end warnings
  ...
Linus Torvalds 2025-07-28 17:16:12 -07:00
commit 8e736a2eea
79 changed files with 514 additions and 259 deletions

@@ -1465,7 +1465,7 @@ stack_erasing
 =============
 This parameter can be used to control kernel stack erasing at the end
-of syscalls for kernels built with ``CONFIG_GCC_PLUGIN_STACKLEAK``.
+of syscalls for kernels built with ``CONFIG_KSTACK_ERASE``.
 That erasing reduces the information which kernel stack leak bugs
 can reveal and blocks some uninitialized stack variable attacks.
@@ -1473,7 +1473,7 @@ The tradeoff is the performance impact: on a single CPU system kernel
 compilation sees a 1% slowdown, other systems and workloads may vary.
 = ====================================================================
-0 Kernel stack erasing is disabled, STACKLEAK_METRICS are not updated.
+0 Kernel stack erasing is disabled, KSTACK_ERASE_METRICS are not updated.
 1 Kernel stack erasing is enabled (default), it is performed before
 returning to the userspace at the end of syscalls.
 = ====================================================================

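The knob documented above is exposed at /proc/sys/kernel/stack_erasing. A minimal userspace sketch (illustrative only, not part of this series) that reports whether erasing is currently active:

#include <stdio.h>

int main(void)
{
	/* The sysctl documented above: 0 = disabled, 1 = enabled. */
	FILE *f = fopen("/proc/sys/kernel/stack_erasing", "r");
	int enabled;

	if (!f) {
		perror("stack_erasing");
		return 1;
	}
	if (fscanf(f, "%d", &enabled) != 1) {
		fclose(f);
		return 1;
	}
	printf("kernel stack erasing is %s\n", enabled ? "enabled" : "disabled");
	fclose(f);
	return 0;
}
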
@@ -176,5 +176,5 @@ Be very careful vs. KASLR when changing anything here. The KASLR address
 range must not overlap with anything except the KASAN shadow area, which is
 correct as KASAN disables KASLR.
-For both 4- and 5-level layouts, the STACKLEAK_POISON value in the last 2MB
+For both 4- and 5-level layouts, the KSTACK_ERASE_POISON value in the last 2MB
 hole: ffffffffffff4111

@@ -303,7 +303,7 @@ Memory poisoning
 When releasing memory, it is best to poison the contents, to avoid reuse
 attacks that rely on the old contents of memory. E.g., clear stack on a
-syscall return (``CONFIG_GCC_PLUGIN_STACKLEAK``), wipe heap memory on a
+syscall return (``CONFIG_KSTACK_ERASE``), wipe heap memory on a
 free. This frustrates many uninitialized variable attacks, stack content
 exposures, heap content exposures, and use-after-free attacks.

@@ -259,7 +259,7 @@ KALLSYSM则会直接打印原始地址。
 --------
 在释放内存时,最好对内存内容进行清除处理,以防止攻击者重用内存中以前
-的内容。例如在系统调用返回时清除堆栈CONFIG_GCC_PLUGIN_STACKLEAK,
+的内容。例如在系统调用返回时清除堆栈CONFIG_KSTACK_ERASE,
 在释放堆内容是清除其内容。这有助于防止许多未初始化变量攻击、堆栈内容
 泄露、堆内容泄露以及使用后释放攻击user-after-free

@@ -9997,8 +9997,6 @@ L: linux-hardening@vger.kernel.org
 S: Maintained
 T: git git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux.git for-next/hardening
 F: Documentation/kbuild/gcc-plugins.rst
-F: include/linux/stackleak.h
-F: kernel/stackleak.c
 F: scripts/Makefile.gcc-plugins
 F: scripts/gcc-plugins/
@@ -13094,13 +13092,17 @@ T: git git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux.git for-next/har
 F: Documentation/ABI/testing/sysfs-kernel-oops_count
 F: Documentation/ABI/testing/sysfs-kernel-warn_count
 F: arch/*/configs/hardening.config
+F: include/linux/kstack_erase.h
 F: include/linux/overflow.h
 F: include/linux/randomize_kstack.h
 F: include/linux/ucopysize.h
 F: kernel/configs/hardening.config
+F: kernel/kstack_erase.c
 F: lib/tests/randstruct_kunit.c
 F: lib/tests/usercopy_kunit.c
 F: mm/usercopy.c
+F: scripts/Makefile.kstack_erase
+F: scripts/Makefile.randstruct
 F: security/Kconfig.hardening
 K: \b(add|choose)_random_kstack_offset\b
 K: \b__check_(object_size|heap_object)\b

@@ -1086,6 +1086,7 @@ include-$(CONFIG_KMSAN) += scripts/Makefile.kmsan
 include-$(CONFIG_UBSAN) += scripts/Makefile.ubsan
 include-$(CONFIG_KCOV) += scripts/Makefile.kcov
 include-$(CONFIG_RANDSTRUCT) += scripts/Makefile.randstruct
+include-$(CONFIG_KSTACK_ERASE) += scripts/Makefile.kstack_erase
 include-$(CONFIG_AUTOFDO_CLANG) += scripts/Makefile.autofdo
 include-$(CONFIG_PROPELLER_CLANG) += scripts/Makefile.propeller
 include-$(CONFIG_GCC_PLUGINS) += scripts/Makefile.gcc-plugins

@@ -630,11 +630,11 @@ config SECCOMP_CACHE_DEBUG
 If unsure, say N.
-config HAVE_ARCH_STACKLEAK
+config HAVE_ARCH_KSTACK_ERASE
 bool
 help
 An architecture should select this if it has the code which
-fills the used part of the kernel stack with the STACKLEAK_POISON
+fills the used part of the kernel stack with the KSTACK_ERASE_POISON
 value before returning from system calls.
 config HAVE_STACKPROTECTOR

@@ -87,11 +87,11 @@ config ARM
 select HAVE_ARCH_KGDB if !CPU_ENDIAN_BE32 && MMU
 select HAVE_ARCH_KASAN if MMU && !XIP_KERNEL
 select HAVE_ARCH_KASAN_VMALLOC if HAVE_ARCH_KASAN
+select HAVE_ARCH_KSTACK_ERASE
 select HAVE_ARCH_MMAP_RND_BITS if MMU
 select HAVE_ARCH_PFN_VALID
 select HAVE_ARCH_SECCOMP
 select HAVE_ARCH_SECCOMP_FILTER if AEABI && !OABI_COMPAT
-select HAVE_ARCH_STACKLEAK
 select HAVE_ARCH_THREAD_STRUCT_WHITELIST
 select HAVE_ARCH_TRACEHOOK
 select HAVE_ARCH_TRANSPARENT_HUGEPAGE if ARM_LPAE

@@ -9,7 +9,6 @@ OBJS =
 HEAD = head.o
 OBJS += misc.o decompress.o
-CFLAGS_decompress.o += $(DISABLE_STACKLEAK_PLUGIN)
 ifeq ($(CONFIG_DEBUG_UNCOMPRESS),y)
 OBJS += debug.o
 AFLAGS_head.o += -DDEBUG
@@ -96,6 +95,7 @@ KBUILD_CFLAGS += -DDISABLE_BRANCH_PROFILING
 ccflags-y := -fpic $(call cc-option,-mno-single-pic-base,) -fno-builtin \
 -I$(srctree)/scripts/dtc/libfdt -fno-stack-protector \
+$(DISABLE_KSTACK_ERASE) \
 -I$(obj)
 ccflags-remove-$(CONFIG_FUNCTION_TRACER) += -pg
 asflags-y := -DZIMAGE

@@ -119,7 +119,7 @@ no_work_pending:
 ct_user_enter save = 0
-#ifdef CONFIG_GCC_PLUGIN_STACKLEAK
+#ifdef CONFIG_KSTACK_ERASE
 bl stackleak_erase_on_task_stack
 #endif
 restore_user_regs fast = 0, offset = 0

@@ -295,7 +295,7 @@ static inline u32 read_extra_features(void)
 return u;
 }
-static inline void write_extra_features(u32 u)
+static inline void __init write_extra_features(u32 u)
 {
 __asm__("mcr p15, 1, %0, c15, c1, 0" : : "r" (u));
 }

@@ -177,7 +177,7 @@ static inline void __init write_actlr(u32 actlr)
 __asm__("mcr p15, 0, %0, c1, c0, 1\n" : : "r" (actlr));
 }
-static void enable_extra_feature(unsigned int features)
+static void __init enable_extra_feature(unsigned int features)
 {
 u32 u;

@@ -26,7 +26,7 @@ CPPFLAGS_vdso.lds += -P -C -U$(ARCH)
 CFLAGS_REMOVE_vdso.o = -pg
 # Force -O2 to avoid libgcc dependencies
-CFLAGS_REMOVE_vgettimeofday.o = -pg -Os $(RANDSTRUCT_CFLAGS) $(GCC_PLUGINS_CFLAGS)
+CFLAGS_REMOVE_vgettimeofday.o = -pg -Os $(RANDSTRUCT_CFLAGS) $(KSTACK_ERASE_CFLAGS) $(GCC_PLUGINS_CFLAGS)
 ifeq ($(c-gettimeofday-y),)
 CFLAGS_vgettimeofday.o = -O2
 else

@@ -187,12 +187,12 @@ config ARM64
 select HAVE_ARCH_KCSAN if EXPERT
 select HAVE_ARCH_KFENCE
 select HAVE_ARCH_KGDB
+select HAVE_ARCH_KSTACK_ERASE
 select HAVE_ARCH_MMAP_RND_BITS
 select HAVE_ARCH_MMAP_RND_COMPAT_BITS if COMPAT
 select HAVE_ARCH_PREL32_RELOCATIONS
 select HAVE_ARCH_RANDOMIZE_KSTACK_OFFSET
 select HAVE_ARCH_SECCOMP_FILTER
-select HAVE_ARCH_STACKLEAK
 select HAVE_ARCH_THREAD_STRUCT_WHITELIST
 select HAVE_ARCH_TRACEHOOK
 select HAVE_ARCH_TRANSPARENT_HUGEPAGE

@@ -150,7 +150,7 @@ acpi_set_mailbox_entry(int cpu, struct acpi_madt_generic_interrupt *processor)
 {}
 #endif
-static inline const char *acpi_get_enable_method(int cpu)
+static __always_inline const char *acpi_get_enable_method(int cpu)
 {
 if (acpi_psci_present())
 return "psci";

@@ -614,7 +614,7 @@ SYM_CODE_END(ret_to_kernel)
 SYM_CODE_START_LOCAL(ret_to_user)
 ldr x19, [tsk, #TSK_TI_FLAGS] // re-check for single-step
 enable_step_tsk x19, x2
-#ifdef CONFIG_GCC_PLUGIN_STACKLEAK
+#ifdef CONFIG_KSTACK_ERASE
 bl stackleak_erase_on_task_stack
 #endif
 kernel_exit 0

@@ -2,7 +2,7 @@
 # Copyright 2022 Google LLC
 KBUILD_CFLAGS := $(subst $(CC_FLAGS_FTRACE),,$(KBUILD_CFLAGS)) -fpie \
--Os -DDISABLE_BRANCH_PROFILING $(DISABLE_STACKLEAK_PLUGIN) \
+-Os -DDISABLE_BRANCH_PROFILING $(DISABLE_KSTACK_ERASE) \
 $(DISABLE_LATENT_ENTROPY_PLUGIN) \
 $(call cc-option,-mbranch-protection=none) \
 -I$(srctree)/scripts/dtc/libfdt -fno-stack-protector \

@@ -36,7 +36,8 @@ ccflags-y += -DDISABLE_BRANCH_PROFILING -DBUILD_VDSO
 # -Wmissing-prototypes and -Wmissing-declarations are removed from
 # the CFLAGS to make possible to build the kernel with CONFIG_WERROR enabled.
 CC_FLAGS_REMOVE_VDSO := $(CC_FLAGS_FTRACE) -Os $(CC_FLAGS_SCS) \
-$(RANDSTRUCT_CFLAGS) $(GCC_PLUGINS_CFLAGS) \
+$(RANDSTRUCT_CFLAGS) $(KSTACK_ERASE_CFLAGS) \
+$(GCC_PLUGINS_CFLAGS) \
 $(CC_FLAGS_LTO) $(CC_FLAGS_CFI) \
 -Wmissing-prototypes -Wmissing-declarations

@@ -12,7 +12,7 @@ asflags-y := -D__KVM_NVHE_HYPERVISOR__ -D__DISABLE_EXPORTS
 ccflags-y := -D__KVM_NVHE_HYPERVISOR__ -D__DISABLE_EXPORTS -D__DISABLE_TRACE_MMIO__
 ccflags-y += -fno-stack-protector \
 -DDISABLE_BRANCH_PROFILING \
-$(DISABLE_STACKLEAK_PLUGIN)
+$(DISABLE_KSTACK_ERASE)
 hostprogs := gen-hyprel
 HOST_EXTRACFLAGS += -I$(objtree)/include

@@ -120,11 +120,11 @@ config LOONGARCH
 select HAVE_ARCH_KASAN
 select HAVE_ARCH_KFENCE
 select HAVE_ARCH_KGDB if PERF_EVENTS
+select HAVE_ARCH_KSTACK_ERASE
 select HAVE_ARCH_MMAP_RND_BITS if MMU
 select HAVE_ARCH_RANDOMIZE_KSTACK_OFFSET
 select HAVE_ARCH_SECCOMP
 select HAVE_ARCH_SECCOMP_FILTER
-select HAVE_ARCH_STACKLEAK
 select HAVE_ARCH_TRACEHOOK
 select HAVE_ARCH_TRANSPARENT_HUGEPAGE
 select HAVE_ARCH_USERFAULTFD_MINOR if USERFAULTFD

@@ -55,7 +55,7 @@ static inline int mips_clockevent_init(void)
 */
 extern int init_r4k_clocksource(void);
-static inline int init_mips_clocksource(void)
+static inline __init int init_mips_clocksource(void)
 {
 #ifdef CONFIG_CSRC_R4K
 return init_r4k_clocksource();

@@ -343,7 +343,7 @@ static inline bool hash_supports_debug_pagealloc(void)
 static u8 *linear_map_hash_slots;
 static unsigned long linear_map_hash_count;
 static DEFINE_RAW_SPINLOCK(linear_map_hash_lock);
-static void hash_debug_pagealloc_alloc_slots(void)
+static __init void hash_debug_pagealloc_alloc_slots(void)
 {
 if (!hash_supports_debug_pagealloc())
 return;
@@ -409,7 +409,7 @@ static DEFINE_RAW_SPINLOCK(linear_map_kf_hash_lock);
 static phys_addr_t kfence_pool;
-static inline void hash_kfence_alloc_pool(void)
+static __init void hash_kfence_alloc_pool(void)
 {
 if (!kfence_early_init_enabled())
 goto err;
@@ -445,7 +445,7 @@ err:
 disable_kfence();
 }
-static inline void hash_kfence_map_pool(void)
+static __init void hash_kfence_map_pool(void)
 {
 unsigned long kfence_pool_start, kfence_pool_end;
 unsigned long prot = pgprot_val(PAGE_KERNEL);

@@ -363,7 +363,7 @@ static int __meminit create_physical_mapping(unsigned long start,
 }
 #ifdef CONFIG_KFENCE
-static inline phys_addr_t alloc_kfence_pool(void)
+static __init phys_addr_t alloc_kfence_pool(void)
 {
 phys_addr_t kfence_pool;
@@ -393,7 +393,7 @@ no_kfence:
 return 0;
 }
-static inline void map_kfence_pool(phys_addr_t kfence_pool)
+static __init void map_kfence_pool(phys_addr_t kfence_pool)
 {
 if (!kfence_pool)
 return;

@@ -137,13 +137,13 @@ config RISCV
 select HAVE_ARCH_KASAN if MMU && 64BIT
 select HAVE_ARCH_KASAN_VMALLOC if MMU && 64BIT
 select HAVE_ARCH_KFENCE if MMU && 64BIT
+select HAVE_ARCH_KSTACK_ERASE
 select HAVE_ARCH_KGDB if !XIP_KERNEL
 select HAVE_ARCH_KGDB_QXFER_PKT
 select HAVE_ARCH_MMAP_RND_BITS if MMU
 select HAVE_ARCH_MMAP_RND_COMPAT_BITS if COMPAT
 select HAVE_ARCH_RANDOMIZE_KSTACK_OFFSET
 select HAVE_ARCH_SECCOMP_FILTER
-select HAVE_ARCH_STACKLEAK
 select HAVE_ARCH_THREAD_STRUCT_WHITELIST
 select HAVE_ARCH_TRACEHOOK
 select HAVE_ARCH_TRANSPARENT_HUGEPAGE if 64BIT && MMU

@@ -220,7 +220,7 @@ SYM_CODE_START_NOALIGN(ret_from_exception)
 #endif
 bnez s0, 1f
-#ifdef CONFIG_GCC_PLUGIN_STACKLEAK
+#ifdef CONFIG_KSTACK_ERASE
 call stackleak_erase_on_task_stack
 #endif

@@ -2,7 +2,7 @@
 # This file was copied from arm64/kernel/pi/Makefile.
 KBUILD_CFLAGS := $(subst $(CC_FLAGS_FTRACE),,$(KBUILD_CFLAGS)) -fpie \
--Os -DDISABLE_BRANCH_PROFILING $(DISABLE_STACKLEAK_PLUGIN) \
+-Os -DDISABLE_BRANCH_PROFILING $(DISABLE_KSTACK_ERASE) \
 $(call cc-option,-mbranch-protection=none) \
 -I$(srctree)/scripts/dtc/libfdt -fno-stack-protector \
 -include $(srctree)/include/linux/hidden.h \

@@ -53,7 +53,7 @@ targets += purgatory.ro purgatory.chk
 PURGATORY_CFLAGS_REMOVE := -mcmodel=kernel
 PURGATORY_CFLAGS := -mcmodel=medany -ffreestanding -fno-zero-initialized-in-bss
-PURGATORY_CFLAGS += $(DISABLE_STACKLEAK_PLUGIN) -DDISABLE_BRANCH_PROFILING
+PURGATORY_CFLAGS += $(DISABLE_KSTACK_ERASE) -DDISABLE_BRANCH_PROFILING
 PURGATORY_CFLAGS += -fno-stack-protector -g0
 # Default KBUILD_CFLAGS can have -pg option set when FTRACE is enabled. That

@@ -176,10 +176,10 @@ config S390
 select HAVE_ARCH_KCSAN
 select HAVE_ARCH_KMSAN
 select HAVE_ARCH_KFENCE
+select HAVE_ARCH_KSTACK_ERASE
 select HAVE_ARCH_RANDOMIZE_KSTACK_OFFSET
 select HAVE_ARCH_SECCOMP_FILTER
 select HAVE_ARCH_SOFT_DIRTY
-select HAVE_ARCH_STACKLEAK
 select HAVE_ARCH_TRACEHOOK
 select HAVE_ARCH_TRANSPARENT_HUGEPAGE
 select HAVE_ARCH_VMAP_STACK

@@ -48,7 +48,7 @@ void hypfs_sprp_exit(void);
 int __hypfs_fs_init(void);
-static inline int hypfs_fs_init(void)
+static __always_inline int hypfs_fs_init(void)
 {
 if (IS_ENABLED(CONFIG_S390_HYPFS_FS))
 return __hypfs_fs_init();
View file

@ -19,7 +19,7 @@ int diag204_store(void *buf, int pages);
int __hypfs_diag_fs_init(void); int __hypfs_diag_fs_init(void);
void __hypfs_diag_fs_exit(void); void __hypfs_diag_fs_exit(void);
static inline int hypfs_diag_fs_init(void) static __always_inline int hypfs_diag_fs_init(void)
{ {
if (IS_ENABLED(CONFIG_S390_HYPFS_FS)) if (IS_ENABLED(CONFIG_S390_HYPFS_FS))
return __hypfs_diag_fs_init(); return __hypfs_diag_fs_init();

@@ -124,7 +124,7 @@ _LPP_OFFSET = __LC_LPP
 #endif
 .macro STACKLEAK_ERASE
-#ifdef CONFIG_GCC_PLUGIN_STACKLEAK
+#ifdef CONFIG_KSTACK_ERASE
 brasl %r14,stackleak_erase_on_task_stack
 #endif
 .endm

@@ -142,7 +142,7 @@ bool force_dma_unencrypted(struct device *dev)
 }
 /* protected virtualization */
-static void pv_init(void)
+static void __init pv_init(void)
 {
 if (!is_prot_virt_guest())
 return;

@@ -48,7 +48,7 @@ CFL := $(PROFILING) -mcmodel=medlow -fPIC -O2 -fasynchronous-unwind-tables -m64
 SPARC_REG_CFLAGS = -ffixed-g4 -ffixed-g5 $(call cc-option,-fcall-used-g5) $(call cc-option,-fcall-used-g7)
-$(vobjs): KBUILD_CFLAGS := $(filter-out $(RANDSTRUCT_CFLAGS) $(GCC_PLUGINS_CFLAGS) $(SPARC_REG_CFLAGS),$(KBUILD_CFLAGS)) $(CFL)
+$(vobjs): KBUILD_CFLAGS := $(filter-out $(RANDSTRUCT_CFLAGS) $(KSTACK_ERASE_CFLAGS) $(GCC_PLUGINS_CFLAGS) $(SPARC_REG_CFLAGS),$(KBUILD_CFLAGS)) $(CFL)
 #
 # vDSO code runs in userspace and -pg doesn't help with profiling anyway.
@@ -79,6 +79,7 @@ KBUILD_CFLAGS_32 := $(filter-out -m64,$(KBUILD_CFLAGS))
 KBUILD_CFLAGS_32 := $(filter-out -mcmodel=medlow,$(KBUILD_CFLAGS_32))
 KBUILD_CFLAGS_32 := $(filter-out -fno-pic,$(KBUILD_CFLAGS_32))
 KBUILD_CFLAGS_32 := $(filter-out $(RANDSTRUCT_CFLAGS),$(KBUILD_CFLAGS_32))
+KBUILD_CFLAGS_32 := $(filter-out $(KSTACK_ERASE_CFLAGS),$(KBUILD_CFLAGS_32))
 KBUILD_CFLAGS_32 := $(filter-out $(GCC_PLUGINS_CFLAGS),$(KBUILD_CFLAGS_32))
 KBUILD_CFLAGS_32 := $(filter-out $(SPARC_REG_CFLAGS),$(KBUILD_CFLAGS_32))
 KBUILD_CFLAGS_32 += -m32 -msoft-float -fpic

@@ -204,13 +204,13 @@ config X86
 select HAVE_ARCH_KFENCE
 select HAVE_ARCH_KMSAN if X86_64
 select HAVE_ARCH_KGDB
+select HAVE_ARCH_KSTACK_ERASE
 select HAVE_ARCH_MMAP_RND_BITS if MMU
 select HAVE_ARCH_MMAP_RND_COMPAT_BITS if MMU && COMPAT
 select HAVE_ARCH_COMPAT_MMAP_BASES if MMU && COMPAT
 select HAVE_ARCH_PREL32_RELOCATIONS
 select HAVE_ARCH_SECCOMP_FILTER
 select HAVE_ARCH_THREAD_STRUCT_WHITELIST
-select HAVE_ARCH_STACKLEAK
 select HAVE_ARCH_TRACEHOOK
 select HAVE_ARCH_TRANSPARENT_HUGEPAGE
 select HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD if X86_64

@@ -369,7 +369,7 @@ For 32-bit we have the following conventions - kernel is built with
 .endm
 .macro STACKLEAK_ERASE_NOCLOBBER
-#ifdef CONFIG_GCC_PLUGIN_STACKLEAK
+#ifdef CONFIG_KSTACK_ERASE
 PUSH_AND_CLEAR_REGS
 call stackleak_erase
 POP_REGS
@@ -388,7 +388,7 @@ For 32-bit we have the following conventions - kernel is built with
 #endif /* !CONFIG_X86_64 */
 .macro STACKLEAK_ERASE
-#ifdef CONFIG_GCC_PLUGIN_STACKLEAK
+#ifdef CONFIG_KSTACK_ERASE
 call stackleak_erase
 #endif
 .endm

@@ -62,7 +62,7 @@ ifneq ($(RETPOLINE_VDSO_CFLAGS),)
 endif
 endif
-$(vobjs): KBUILD_CFLAGS := $(filter-out $(PADDING_CFLAGS) $(CC_FLAGS_LTO) $(CC_FLAGS_CFI) $(RANDSTRUCT_CFLAGS) $(GCC_PLUGINS_CFLAGS) $(RETPOLINE_CFLAGS),$(KBUILD_CFLAGS)) $(CFL)
+$(vobjs): KBUILD_CFLAGS := $(filter-out $(PADDING_CFLAGS) $(CC_FLAGS_LTO) $(CC_FLAGS_CFI) $(RANDSTRUCT_CFLAGS) $(KSTACK_ERASE_CFLAGS) $(GCC_PLUGINS_CFLAGS) $(RETPOLINE_CFLAGS),$(KBUILD_CFLAGS)) $(CFL)
 $(vobjs): KBUILD_AFLAGS += -DBUILD_VDSO
 #
@@ -123,6 +123,7 @@ KBUILD_CFLAGS_32 := $(filter-out -mcmodel=kernel,$(KBUILD_CFLAGS_32))
 KBUILD_CFLAGS_32 := $(filter-out -fno-pic,$(KBUILD_CFLAGS_32))
 KBUILD_CFLAGS_32 := $(filter-out -mfentry,$(KBUILD_CFLAGS_32))
 KBUILD_CFLAGS_32 := $(filter-out $(RANDSTRUCT_CFLAGS),$(KBUILD_CFLAGS_32))
+KBUILD_CFLAGS_32 := $(filter-out $(KSTACK_ERASE_CFLAGS),$(KBUILD_CFLAGS_32))
 KBUILD_CFLAGS_32 := $(filter-out $(GCC_PLUGINS_CFLAGS),$(KBUILD_CFLAGS_32))
 KBUILD_CFLAGS_32 := $(filter-out $(RETPOLINE_CFLAGS),$(KBUILD_CFLAGS_32))
 KBUILD_CFLAGS_32 := $(filter-out $(CC_FLAGS_LTO),$(KBUILD_CFLAGS_32))

@@ -158,13 +158,13 @@ static inline bool acpi_has_cpu_in_madt(void)
 }
 #define ACPI_HAVE_ARCH_SET_ROOT_POINTER
-static inline void acpi_arch_set_root_pointer(u64 addr)
+static __always_inline void acpi_arch_set_root_pointer(u64 addr)
 {
 x86_init.acpi.set_root_pointer(addr);
 }
 #define ACPI_HAVE_ARCH_GET_ROOT_POINTER
-static inline u64 acpi_arch_get_root_pointer(void)
+static __always_inline u64 acpi_arch_get_root_pointer(void)
 {
 return x86_init.acpi.get_root_pointer();
 }

@@ -5,7 +5,7 @@
 #if defined(CONFIG_CC_IS_CLANG) && CONFIG_CLANG_VERSION < 170000
 #define __head __section(".head.text") __no_sanitize_undefined __no_stack_protector
 #else
-#define __head __section(".head.text") __no_sanitize_undefined
+#define __head __section(".head.text") __no_sanitize_undefined __no_sanitize_coverage
 #endif
 struct x86_mapping_info {

@@ -78,7 +78,7 @@ extern unsigned char secondary_startup_64[];
 extern unsigned char secondary_startup_64_no_verify[];
 #endif
-static inline size_t real_mode_size_needed(void)
+static __always_inline size_t real_mode_size_needed(void)
 {
 if (real_mode_header)
 return 0; /* already allocated. */

@@ -420,7 +420,7 @@ static u64 kvm_steal_clock(int cpu)
 return steal;
 }
-static inline void __set_percpu_decrypted(void *ptr, unsigned long size)
+static inline __init void __set_percpu_decrypted(void *ptr, unsigned long size)
 {
 early_set_memory_decrypted((unsigned long) ptr, size);
 }

@@ -805,7 +805,7 @@ kernel_physical_mapping_change(unsigned long paddr_start,
 }
 #ifndef CONFIG_NUMA
-static inline void x86_numa_init(void)
+static __always_inline void x86_numa_init(void)
 {
 memblock_set_node(0, PHYS_ADDR_MAX, &memblock.memory, 0);
 }

@@ -35,7 +35,7 @@ targets += purgatory.ro purgatory.chk
 PURGATORY_CFLAGS_REMOVE := -mcmodel=kernel
 PURGATORY_CFLAGS := -mcmodel=small -ffreestanding -fno-zero-initialized-in-bss -g0
 PURGATORY_CFLAGS += -fpic -fvisibility=hidden
-PURGATORY_CFLAGS += $(DISABLE_STACKLEAK_PLUGIN) -DDISABLE_BRANCH_PROFILING
+PURGATORY_CFLAGS += $(DISABLE_KSTACK_ERASE) -DDISABLE_BRANCH_PROFILING
 PURGATORY_CFLAGS += -fno-stack-protector
 # Default KBUILD_CFLAGS can have -pg option set when FTRACE is enabled. That

@@ -55,10 +55,9 @@ static unsigned long intel_security_flags(struct nvdimm *nvdimm,
 {
 struct nfit_mem *nfit_mem = nvdimm_provider_data(nvdimm);
 unsigned long security_flags = 0;
-struct {
-struct nd_cmd_pkg pkg;
+TRAILING_OVERLAP(struct nd_cmd_pkg, pkg, nd_payload,
 struct nd_intel_get_security_state cmd;
-} nd_cmd = {
+) nd_cmd = {
 .pkg = {
 .nd_command = NVDIMM_INTEL_GET_SECURITY_STATE,
 .nd_family = NVDIMM_FAMILY_INTEL,
@@ -120,10 +119,9 @@ static unsigned long intel_security_flags(struct nvdimm *nvdimm,
 static int intel_security_freeze(struct nvdimm *nvdimm)
 {
 struct nfit_mem *nfit_mem = nvdimm_provider_data(nvdimm);
-struct {
-struct nd_cmd_pkg pkg;
+TRAILING_OVERLAP(struct nd_cmd_pkg, pkg, nd_payload,
 struct nd_intel_freeze_lock cmd;
-} nd_cmd = {
+) nd_cmd = {
 .pkg = {
 .nd_command = NVDIMM_INTEL_FREEZE_LOCK,
 .nd_family = NVDIMM_FAMILY_INTEL,
@@ -153,10 +151,9 @@ static int intel_security_change_key(struct nvdimm *nvdimm,
 unsigned int cmd = ptype == NVDIMM_MASTER ?
 NVDIMM_INTEL_SET_MASTER_PASSPHRASE :
 NVDIMM_INTEL_SET_PASSPHRASE;
-struct {
-struct nd_cmd_pkg pkg;
+TRAILING_OVERLAP(struct nd_cmd_pkg, pkg, nd_payload,
 struct nd_intel_set_passphrase cmd;
-} nd_cmd = {
+) nd_cmd = {
 .pkg = {
 .nd_family = NVDIMM_FAMILY_INTEL,
 .nd_size_in = ND_INTEL_PASSPHRASE_SIZE * 2,
@@ -195,10 +192,9 @@ static int __maybe_unused intel_security_unlock(struct nvdimm *nvdimm,
 const struct nvdimm_key_data *key_data)
 {
 struct nfit_mem *nfit_mem = nvdimm_provider_data(nvdimm);
-struct {
-struct nd_cmd_pkg pkg;
+TRAILING_OVERLAP(struct nd_cmd_pkg, pkg, nd_payload,
 struct nd_intel_unlock_unit cmd;
-} nd_cmd = {
+) nd_cmd = {
 .pkg = {
 .nd_command = NVDIMM_INTEL_UNLOCK_UNIT,
 .nd_family = NVDIMM_FAMILY_INTEL,
@@ -234,10 +230,9 @@ static int intel_security_disable(struct nvdimm *nvdimm,
 {
 int rc;
 struct nfit_mem *nfit_mem = nvdimm_provider_data(nvdimm);
-struct {
-struct nd_cmd_pkg pkg;
+TRAILING_OVERLAP(struct nd_cmd_pkg, pkg, nd_payload,
 struct nd_intel_disable_passphrase cmd;
-} nd_cmd = {
+) nd_cmd = {
 .pkg = {
 .nd_command = NVDIMM_INTEL_DISABLE_PASSPHRASE,
 .nd_family = NVDIMM_FAMILY_INTEL,
@@ -277,10 +272,9 @@ static int __maybe_unused intel_security_erase(struct nvdimm *nvdimm,
 struct nfit_mem *nfit_mem = nvdimm_provider_data(nvdimm);
 unsigned int cmd = ptype == NVDIMM_MASTER ?
 NVDIMM_INTEL_MASTER_SECURE_ERASE : NVDIMM_INTEL_SECURE_ERASE;
-struct {
-struct nd_cmd_pkg pkg;
+TRAILING_OVERLAP(struct nd_cmd_pkg, pkg, nd_payload,
 struct nd_intel_secure_erase cmd;
-} nd_cmd = {
+) nd_cmd = {
 .pkg = {
 .nd_family = NVDIMM_FAMILY_INTEL,
 .nd_size_in = ND_INTEL_PASSPHRASE_SIZE,
@@ -318,10 +312,9 @@ static int __maybe_unused intel_security_query_overwrite(struct nvdimm *nvdimm)
 {
 int rc;
 struct nfit_mem *nfit_mem = nvdimm_provider_data(nvdimm);
-struct {
-struct nd_cmd_pkg pkg;
+TRAILING_OVERLAP(struct nd_cmd_pkg, pkg, nd_payload,
 struct nd_intel_query_overwrite cmd;
-} nd_cmd = {
+) nd_cmd = {
 .pkg = {
 .nd_command = NVDIMM_INTEL_QUERY_OVERWRITE,
 .nd_family = NVDIMM_FAMILY_INTEL,
@@ -354,10 +347,9 @@ static int __maybe_unused intel_security_overwrite(struct nvdimm *nvdimm,
 {
 int rc;
 struct nfit_mem *nfit_mem = nvdimm_provider_data(nvdimm);
-struct {
-struct nd_cmd_pkg pkg;
+TRAILING_OVERLAP(struct nd_cmd_pkg, pkg, nd_payload,
 struct nd_intel_overwrite cmd;
-} nd_cmd = {
+) nd_cmd = {
 .pkg = {
 .nd_command = NVDIMM_INTEL_OVERWRITE,
 .nd_family = NVDIMM_FAMILY_INTEL,
@@ -407,10 +399,9 @@ const struct nvdimm_security_ops *intel_security_ops = &__intel_security_ops;
 static int intel_bus_fwa_businfo(struct nvdimm_bus_descriptor *nd_desc,
 struct nd_intel_bus_fw_activate_businfo *info)
 {
-struct {
-struct nd_cmd_pkg pkg;
+TRAILING_OVERLAP(struct nd_cmd_pkg, pkg, nd_payload,
 struct nd_intel_bus_fw_activate_businfo cmd;
-} nd_cmd = {
+) nd_cmd = {
 .pkg = {
 .nd_command = NVDIMM_BUS_INTEL_FW_ACTIVATE_BUSINFO,
 .nd_family = NVDIMM_BUS_FAMILY_INTEL,
@@ -518,33 +509,31 @@ static enum nvdimm_fwa_capability intel_bus_fwa_capability(
 static int intel_bus_fwa_activate(struct nvdimm_bus_descriptor *nd_desc)
 {
 struct acpi_nfit_desc *acpi_desc = to_acpi_desc(nd_desc);
-struct {
-struct nd_cmd_pkg pkg;
+TRAILING_OVERLAP(struct nd_cmd_pkg, pkg, nd_payload,
 struct nd_intel_bus_fw_activate cmd;
-} nd_cmd = {
-.pkg = {
-.nd_command = NVDIMM_BUS_INTEL_FW_ACTIVATE,
-.nd_family = NVDIMM_BUS_FAMILY_INTEL,
-.nd_size_in = sizeof(nd_cmd.cmd.iodev_state),
-.nd_size_out =
-sizeof(struct nd_intel_bus_fw_activate),
-.nd_fw_size =
-sizeof(struct nd_intel_bus_fw_activate),
-},
+) nd_cmd;
+int rc;
+nd_cmd.pkg = (struct nd_cmd_pkg) {
+.nd_command = NVDIMM_BUS_INTEL_FW_ACTIVATE,
+.nd_family = NVDIMM_BUS_FAMILY_INTEL,
+.nd_size_in = sizeof(nd_cmd.cmd.iodev_state),
+.nd_size_out =
+sizeof(struct nd_intel_bus_fw_activate),
+.nd_fw_size =
+sizeof(struct nd_intel_bus_fw_activate),
+};
+nd_cmd.cmd = (struct nd_intel_bus_fw_activate) {
 /*
 * Even though activate is run from a suspended context,
 * for safety, still ask platform firmware to force
 * quiesce devices by default. Let a module
 * parameter override that policy.
 */
-.cmd = {
-.iodev_state = acpi_desc->fwa_noidle
-? ND_INTEL_BUS_FWA_IODEV_OS_IDLE
-: ND_INTEL_BUS_FWA_IODEV_FORCE_IDLE,
-},
+.iodev_state = acpi_desc->fwa_noidle
+? ND_INTEL_BUS_FWA_IODEV_OS_IDLE
+: ND_INTEL_BUS_FWA_IODEV_FORCE_IDLE,
 };
-int rc;
 switch (intel_bus_fwa_state(nd_desc)) {
 case NVDIMM_FWA_ARMED:
 case NVDIMM_FWA_ARM_OVERFLOW:
@@ -582,10 +571,9 @@ const struct nvdimm_bus_fw_ops *intel_bus_fw_ops = &__intel_bus_fw_ops;
 static int intel_fwa_dimminfo(struct nvdimm *nvdimm,
 struct nd_intel_fw_activate_dimminfo *info)
 {
-struct {
-struct nd_cmd_pkg pkg;
+TRAILING_OVERLAP(struct nd_cmd_pkg, pkg, nd_payload,
 struct nd_intel_fw_activate_dimminfo cmd;
-} nd_cmd = {
+) nd_cmd = {
 .pkg = {
 .nd_command = NVDIMM_INTEL_FW_ACTIVATE_DIMMINFO,
 .nd_family = NVDIMM_FAMILY_INTEL,
@@ -688,27 +676,24 @@ static int intel_fwa_arm(struct nvdimm *nvdimm, enum nvdimm_fwa_trigger arm)
 {
 struct nfit_mem *nfit_mem = nvdimm_provider_data(nvdimm);
 struct acpi_nfit_desc *acpi_desc = nfit_mem->acpi_desc;
-struct {
-struct nd_cmd_pkg pkg;
+TRAILING_OVERLAP(struct nd_cmd_pkg, pkg, nd_payload,
 struct nd_intel_fw_activate_arm cmd;
-} nd_cmd = {
-.pkg = {
-.nd_command = NVDIMM_INTEL_FW_ACTIVATE_ARM,
-.nd_family = NVDIMM_FAMILY_INTEL,
-.nd_size_in = sizeof(nd_cmd.cmd.activate_arm),
-.nd_size_out =
-sizeof(struct nd_intel_fw_activate_arm),
-.nd_fw_size =
-sizeof(struct nd_intel_fw_activate_arm),
-},
-.cmd = {
-.activate_arm = arm == NVDIMM_FWA_ARM
-? ND_INTEL_DIMM_FWA_ARM
-: ND_INTEL_DIMM_FWA_DISARM,
-},
-};
+) nd_cmd;
 int rc;
+nd_cmd.pkg = (struct nd_cmd_pkg) {
+.nd_command = NVDIMM_INTEL_FW_ACTIVATE_ARM,
+.nd_family = NVDIMM_FAMILY_INTEL,
+.nd_size_in = sizeof(nd_cmd.cmd.activate_arm),
+.nd_size_out = sizeof(struct nd_intel_fw_activate_arm),
+.nd_fw_size = sizeof(struct nd_intel_fw_activate_arm),
+};
+nd_cmd.cmd = (struct nd_intel_fw_activate_arm) {
+.activate_arm = arm == NVDIMM_FWA_ARM ?
+ND_INTEL_DIMM_FWA_ARM :
+ND_INTEL_DIMM_FWA_DISARM,
+};
 switch (intel_fwa_state(nvdimm)) {
 case NVDIMM_FWA_INVALID:
 return -ENXIO;

@@ -43,7 +43,7 @@ static struct delay_timer orion_delay_timer = {
 .read_current_timer = orion_read_timer,
 };
-static void orion_delay_timer_init(unsigned long rate)
+static void __init orion_delay_timer_init(unsigned long rate)
 {
 orion_delay_timer.freq = rate;
 register_current_timer_delay(&orion_delay_timer);

@@ -22,16 +22,16 @@ cflags-$(CONFIG_X86) += -m$(BITS) -D__KERNEL__ -std=gnu11 \
 # arm64 uses the full KBUILD_CFLAGS so it's necessary to explicitly
 # disable the stackleak plugin
-cflags-$(CONFIG_ARM64) += -fpie $(DISABLE_STACKLEAK_PLUGIN) \
+cflags-$(CONFIG_ARM64) += -fpie $(DISABLE_KSTACK_ERASE) \
 -fno-unwind-tables -fno-asynchronous-unwind-tables
 cflags-$(CONFIG_ARM) += -DEFI_HAVE_STRLEN -DEFI_HAVE_STRNLEN \
 -DEFI_HAVE_MEMCHR -DEFI_HAVE_STRRCHR \
 -DEFI_HAVE_STRCMP -fno-builtin -fpic \
 $(call cc-option,-mno-single-pic-base) \
-$(DISABLE_STACKLEAK_PLUGIN)
+$(DISABLE_KSTACK_ERASE)
 cflags-$(CONFIG_RISCV) += -fpic -DNO_ALTERNATIVE -mno-relax \
-$(DISABLE_STACKLEAK_PLUGIN)
-cflags-$(CONFIG_LOONGARCH) += -fpie $(DISABLE_STACKLEAK_PLUGIN)
+$(DISABLE_KSTACK_ERASE)
+cflags-$(CONFIG_LOONGARCH) += -fpie $(DISABLE_KSTACK_ERASE)
 cflags-$(CONFIG_EFI_PARAMS_FROM_FDT) += -I$(srctree)/scripts/dtc/libfdt

@@ -8,7 +8,7 @@ lkdtm-$(CONFIG_LKDTM) += perms.o
 lkdtm-$(CONFIG_LKDTM) += refcount.o
 lkdtm-$(CONFIG_LKDTM) += rodata_objcopy.o
 lkdtm-$(CONFIG_LKDTM) += usercopy.o
-lkdtm-$(CONFIG_LKDTM) += stackleak.o
+lkdtm-$(CONFIG_LKDTM) += kstack_erase.o
 lkdtm-$(CONFIG_LKDTM) += cfi.o
 lkdtm-$(CONFIG_LKDTM) += fortify.o
 lkdtm-$(CONFIG_PPC_64S_HASH_MMU) += powerpc.o

@@ -1,7 +1,7 @@
 // SPDX-License-Identifier: GPL-2.0
 /*
 * This code tests that the current task stack is properly erased (filled
-* with STACKLEAK_POISON).
+* with KSTACK_ERASE_POISON).
 *
 * Authors:
 * Alexander Popov <alex.popov@linux.com>
@@ -9,9 +9,9 @@
 */
 #include "lkdtm.h"
-#include <linux/stackleak.h>
+#include <linux/kstack_erase.h>
-#if defined(CONFIG_GCC_PLUGIN_STACKLEAK)
+#if defined(CONFIG_KSTACK_ERASE)
 /*
 * Check that stackleak tracks the lowest stack pointer and erases the stack
 * below this as expected.
@@ -85,7 +85,7 @@ static void noinstr check_stackleak_irqoff(void)
 while (poison_low > task_stack_low) {
 poison_low -= sizeof(unsigned long);
-if (*(unsigned long *)poison_low == STACKLEAK_POISON)
+if (*(unsigned long *)poison_low == KSTACK_ERASE_POISON)
 continue;
 instrumentation_begin();
@@ -96,7 +96,7 @@ static void noinstr check_stackleak_irqoff(void)
 }
 instrumentation_begin();
-pr_info("stackleak stack usage:\n"
+pr_info("kstack erase stack usage:\n"
 " high offset: %lu bytes\n"
 " current: %lu bytes\n"
 " lowest: %lu bytes\n"
@@ -121,7 +121,7 @@ out:
 instrumentation_end();
 }
-static void lkdtm_STACKLEAK_ERASING(void)
+static void lkdtm_KSTACK_ERASE(void)
 {
 unsigned long flags;
@@ -129,19 +129,19 @@ static void lkdtm_STACKLEAK_ERASING(void)
 check_stackleak_irqoff();
 local_irq_restore(flags);
 }
-#else /* defined(CONFIG_GCC_PLUGIN_STACKLEAK) */
-static void lkdtm_STACKLEAK_ERASING(void)
+#else /* defined(CONFIG_KSTACK_ERASE) */
+static void lkdtm_KSTACK_ERASE(void)
 {
-if (IS_ENABLED(CONFIG_HAVE_ARCH_STACKLEAK)) {
-pr_err("XFAIL: stackleak is not enabled (CONFIG_GCC_PLUGIN_STACKLEAK=n)\n");
+if (IS_ENABLED(CONFIG_HAVE_ARCH_KSTACK_ERASE)) {
+pr_err("XFAIL: stackleak is not enabled (CONFIG_KSTACK_ERASE=n)\n");
 } else {
-pr_err("XFAIL: stackleak is not supported on this arch (HAVE_ARCH_STACKLEAK=n)\n");
+pr_err("XFAIL: stackleak is not supported on this arch (HAVE_ARCH_KSTACK_ERASE=n)\n");
 }
 }
-#endif /* defined(CONFIG_GCC_PLUGIN_STACKLEAK) */
+#endif /* defined(CONFIG_KSTACK_ERASE) */
 static struct crashtype crashtypes[] = {
-CRASHTYPE(STACKLEAK_ERASING),
+CRASHTYPE(KSTACK_ERASE),
 };
 struct crashtype_category stackleak_crashtypes = {

@@ -98,13 +98,12 @@ struct mux_chip *mux_chip_alloc(struct device *dev,
 if (WARN_ON(!dev || !controllers))
 return ERR_PTR(-EINVAL);
-mux_chip = kzalloc(sizeof(*mux_chip) +
-controllers * sizeof(*mux_chip->mux) +
-sizeof_priv, GFP_KERNEL);
+mux_chip = kzalloc(size_add(struct_size(mux_chip, mux, controllers),
+sizeof_priv),
+GFP_KERNEL);
 if (!mux_chip)
 return ERR_PTR(-ENOMEM);
-mux_chip->mux = (struct mux_control *)(mux_chip + 1);
 mux_chip->dev.class = &mux_class;
 mux_chip->dev.type = &mux_type;
 mux_chip->dev.parent = dev;

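For reference, a condensed sketch of the allocation idiom adopted above (the struct and function names here are invented, not from the mux driver): struct_size() accounts for the trailing flexible array and size_add() appends the caller's private area, with both helpers saturating instead of wrapping on overflow.

#include <linux/overflow.h>
#include <linux/slab.h>

struct example_ctrl {
	int state;
};

struct example_chip {
	unsigned int controllers;
	struct example_ctrl ctrl[] __counted_by(controllers);
};

static struct example_chip *example_chip_alloc(unsigned int controllers,
					       size_t sizeof_priv)
{
	struct example_chip *chip;

	/* sizeof(*chip) + controllers * sizeof(chip->ctrl[0]) + sizeof_priv,
	 * computed with saturating helpers. */
	chip = kzalloc(size_add(struct_size(chip, ctrl, controllers),
				sizeof_priv), GFP_KERNEL);
	if (!chip)
		return NULL;

	chip->controllers = controllers;
	return chip;
}
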
@@ -145,7 +145,7 @@ static int am33xx_do_sram_idle(u32 wfi_flags)
 return pm_ops->cpu_suspend(am33xx_do_wfi_sram, wfi_flags);
 }
-static int __init am43xx_map_gic(void)
+static int am43xx_map_gic(void)
 {
 gic_dist_base = ioremap(AM43XX_GIC_DIST_BASE, SZ_4K);

@@ -3290,7 +3290,7 @@ static int proc_pid_ksm_stat(struct seq_file *m, struct pid_namespace *ns,
 }
 #endif /* CONFIG_KSM */
-#ifdef CONFIG_STACKLEAK_METRICS
+#ifdef CONFIG_KSTACK_ERASE_METRICS
 static int proc_stack_depth(struct seq_file *m, struct pid_namespace *ns,
 struct pid *pid, struct task_struct *task)
 {
@@ -3303,7 +3303,7 @@ static int proc_stack_depth(struct seq_file *m, struct pid_namespace *ns,
 prev_depth, depth);
 return 0;
 }
-#endif /* CONFIG_STACKLEAK_METRICS */
+#endif /* CONFIG_KSTACK_ERASE_METRICS */
 /*
 * Thread groups
@@ -3410,7 +3410,7 @@ static const struct pid_entry tgid_base_stuff[] = {
 #ifdef CONFIG_LIVEPATCH
 ONE("patch_state", S_IRUSR, proc_pid_patch_state),
 #endif
-#ifdef CONFIG_STACKLEAK_METRICS
+#ifdef CONFIG_KSTACK_ERASE_METRICS
 ONE("stack_depth", S_IRUGO, proc_stack_depth),
 #endif
 #ifdef CONFIG_PROC_PID_ARCH_STATUS

@@ -759,13 +759,13 @@ int acpi_arch_timer_mem_init(struct arch_timer_mem *timer_mem, int *timer_count)
 #endif
 #ifndef ACPI_HAVE_ARCH_SET_ROOT_POINTER
-static inline void acpi_arch_set_root_pointer(u64 addr)
+static __always_inline void acpi_arch_set_root_pointer(u64 addr)
 {
 }
 #endif
 #ifndef ACPI_HAVE_ARCH_GET_ROOT_POINTER
-static inline u64 acpi_arch_get_root_pointer(void)
+static __always_inline u64 acpi_arch_get_root_pointer(void)
 {
 return 0;
 }

@@ -290,7 +290,7 @@ int __init xbc_get_info(int *node_size, size_t *data_size);
 /* XBC cleanup data structures */
 void __init _xbc_exit(bool early);
-static inline void xbc_exit(void)
+static __always_inline void xbc_exit(void)
 {
 _xbc_exit(false);
 }

@@ -1334,7 +1334,7 @@ struct linux_efi_initrd {
 bool xen_efi_config_table_is_usable(const efi_guid_t *guid, unsigned long table);
-static inline
+static __always_inline
 bool efi_config_table_is_usable(const efi_guid_t *guid, unsigned long table)
 {
 if (!IS_ENABLED(CONFIG_XEN_EFI))

@@ -49,7 +49,9 @@
 /* These are for everybody (although not all archs will actually
 discard it in modules) */
-#define __init __section(".init.text") __cold __latent_entropy __noinitretpoline
+#define __init __section(".init.text") __cold __latent_entropy \
+__noinitretpoline \
+__no_sanitize_coverage
 #define __initdata __section(".init.data")
 #define __initconst __section(".init.rodata")
 #define __exitdata __section(".exit.data")

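For context on the "KCOV __init vs inline mismatches" fixes in this series, a rough sketch of the pattern (function names invented): now that __init code is no longer coverage-instrumented, a small helper used only from __init callers is either forced inline or placed in .init.text itself, so the compiler cannot emit an out-of-line, instrumented copy that conflicts with its callers.

#include <linux/errno.h>
#include <linux/init.h>

extern int detect_hw(void);

/* Option 1: guarantee the helper folds into its (uninstrumented) caller. */
static __always_inline int hw_available(void)
{
	return detect_hw() > 0;
}

/* Option 2: put the helper itself in .init.text next to the caller. */
static int __init platform_setup(void)
{
	if (!hw_available())
		return -ENODEV;
	return 0;
}
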
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: GPL-2.0 */
-#ifndef _LINUX_STACKLEAK_H
-#define _LINUX_STACKLEAK_H
+#ifndef _LINUX_KSTACK_ERASE_H
+#define _LINUX_KSTACK_ERASE_H
 #include <linux/sched.h>
 #include <linux/sched/task_stack.h>
@@ -9,10 +9,10 @@
 * Check that the poison value points to the unused hole in the
 * virtual memory map for your platform.
 */
-#define STACKLEAK_POISON -0xBEEF
-#define STACKLEAK_SEARCH_DEPTH 128
+#define KSTACK_ERASE_POISON -0xBEEF
+#define KSTACK_ERASE_SEARCH_DEPTH 128
-#ifdef CONFIG_GCC_PLUGIN_STACKLEAK
+#ifdef CONFIG_KSTACK_ERASE
 #include <asm/stacktrace.h>
 #include <linux/linkage.h>
@@ -50,7 +50,7 @@ stackleak_task_high_bound(const struct task_struct *tsk)
 static __always_inline unsigned long
 stackleak_find_top_of_poison(const unsigned long low, const unsigned long high)
 {
-const unsigned int depth = STACKLEAK_SEARCH_DEPTH / sizeof(unsigned long);
+const unsigned int depth = KSTACK_ERASE_SEARCH_DEPTH / sizeof(unsigned long);
 unsigned int poison_count = 0;
 unsigned long poison_high = high;
 unsigned long sp = high;
@@ -58,7 +58,7 @@ stackleak_find_top_of_poison(const unsigned long low, const unsigned long high)
 while (sp > low && poison_count < depth) {
 sp -= sizeof(unsigned long);
-if (*(unsigned long *)sp == STACKLEAK_POISON) {
+if (*(unsigned long *)sp == KSTACK_ERASE_POISON) {
 poison_count++;
 } else {
 poison_count = 0;
@@ -72,7 +72,7 @@ stackleak_find_top_of_poison(const unsigned long low, const unsigned long high)
 static inline void stackleak_task_init(struct task_struct *t)
 {
 t->lowest_stack = stackleak_task_low_bound(t);
-# ifdef CONFIG_STACKLEAK_METRICS
+# ifdef CONFIG_KSTACK_ERASE_METRICS
 t->prev_lowest_stack = t->lowest_stack;
 # endif
 }
@@ -80,9 +80,9 @@ static inline void stackleak_task_init(struct task_struct *t)
 asmlinkage void noinstr stackleak_erase(void);
 asmlinkage void noinstr stackleak_erase_on_task_stack(void);
 asmlinkage void noinstr stackleak_erase_off_task_stack(void);
-void __no_caller_saved_registers noinstr stackleak_track_stack(void);
+void __no_caller_saved_registers noinstr __sanitizer_cov_stack_depth(void);
-#else /* !CONFIG_GCC_PLUGIN_STACKLEAK */
+#else /* !CONFIG_KSTACK_ERASE */
 static inline void stackleak_task_init(struct task_struct *t) { }
 #endif

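The stackleak_track_stack to __sanitizer_cov_stack_depth rename above is what lets Clang's stack-depth coverage callback (in addition to the existing GCC plugin) feed the same tracking hook. Roughly, and simplified from kernel/kstack_erase.c rather than quoted verbatim, the callback only records the deepest stack pointer seen for the current task:

void __no_caller_saved_registers noinstr __sanitizer_cov_stack_depth(void)
{
	unsigned long sp = current_stack_pointer;

	/* Keep lowest_stack pointing at the deepest used stack so far;
	 * stackleak_erase() later poisons everything below it. */
	if (sp < current->lowest_stack &&
	    sp >= stackleak_task_low_bound(current))
		current->lowest_stack = sp;
}
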
@@ -463,7 +463,7 @@ static inline void *memblock_alloc_raw(phys_addr_t size,
 NUMA_NO_NODE);
 }
-static inline void *memblock_alloc_from(phys_addr_t size,
+static __always_inline void *memblock_alloc_from(phys_addr_t size,
 phys_addr_t align,
 phys_addr_t min_addr)
 {

@@ -213,7 +213,7 @@ struct prcmu_fw_version {
 #if defined(CONFIG_UX500_SOC_DB8500)
-static inline void prcmu_early_init(void)
+static inline void __init prcmu_early_init(void)
 {
 db8500_prcmu_early_init();
 }

@@ -56,18 +56,18 @@ struct mux_control {
 /**
 * struct mux_chip - Represents a chip holding mux controllers.
 * @controllers: Number of mux controllers handled by the chip.
-* @mux: Array of mux controllers that are handled.
 * @dev: Device structure.
 * @id: Used to identify the device internally.
 * @ops: Mux controller operations.
+* @mux: Array of mux controllers that are handled.
 */
 struct mux_chip {
 unsigned int controllers;
-struct mux_control *mux;
 struct device dev;
 int id;
 const struct mux_control_ops *ops;
+struct mux_control mux[] __counted_by(controllers);
 };
 #define to_mux_chip(x) container_of((x), struct mux_chip, dev)

@@ -1603,8 +1603,10 @@ struct task_struct {
 /* Used by BPF for per-TASK xdp storage */
 struct bpf_net_context *bpf_net_context;
-#ifdef CONFIG_GCC_PLUGIN_STACKLEAK
+#ifdef CONFIG_KSTACK_ERASE
 unsigned long lowest_stack;
+#endif
+#ifdef CONFIG_KSTACK_ERASE_METRICS
 unsigned long prev_lowest_stack;
 #endif

@@ -53,7 +53,7 @@ static inline void setup_thread_stack(struct task_struct *p, struct task_struct
 * When the stack grows up, this is the highest address.
 * Beyond that position, we corrupt data on the next page.
 */
-static inline unsigned long *end_of_stack(struct task_struct *p)
+static inline unsigned long *end_of_stack(const struct task_struct *p)
 {
 #ifdef CONFIG_STACK_GROWSUP
 return (unsigned long *)((unsigned long)task_thread_info(p) + THREAD_SIZE) - 1;


@@ -221,7 +221,7 @@ static inline void wake_up_all_idle_cpus(void) { }
 #ifdef CONFIG_UP_LATE_INIT
 extern void __init up_late_init(void);
-static inline void smp_init(void) { up_late_init(); }
+static __always_inline void smp_init(void) { up_late_init(); }
 #else
 static inline void smp_init(void) { }
 #endif


@@ -93,4 +93,24 @@ enum {
 #define DECLARE_FLEX_ARRAY(TYPE, NAME) \
 __DECLARE_FLEX_ARRAY(TYPE, NAME)
+/**
+ * TRAILING_OVERLAP() - Overlap a flexible-array member with trailing members.
+ *
+ * Creates a union between a flexible-array member (FAM) in a struct and a set
+ * of additional members that would otherwise follow it.
+ *
+ * @TYPE: Flexible structure type name, including "struct" keyword.
+ * @NAME: Name for a variable to define.
+ * @FAM: The flexible-array member within @TYPE
+ * @MEMBERS: Trailing overlapping members.
+ */
+#define TRAILING_OVERLAP(TYPE, NAME, FAM, MEMBERS) \
+union { \
+TYPE NAME; \
+struct { \
+unsigned char __offset_to_##FAM[offsetof(TYPE, FAM)]; \
+MEMBERS \
+}; \
+}
 #endif
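
As an illustration only (not part of the patch): TRAILING_OVERLAP() is meant for containing structures where other members used to follow an embedded flexible structure. A sketch with a made-up struct hdr whose data[] member is the FAM, assuming <linux/stddef.h> is included:

#include <linux/stddef.h>

/* Hypothetical flexible structure ending in a flexible-array member. */
struct hdr {
	unsigned int len;
	unsigned char data[];
};

/*
 * Before: "struct hdr hdr;" followed by more members, which now triggers
 * -Wflex-array-member-not-at-end. After: the trailing members explicitly
 * overlap hdr.data via the union built by TRAILING_OVERLAP().
 */
struct fixed_hdr {
	TRAILING_OVERLAP(struct hdr, hdr, data,
		unsigned char buf[32];
	);
};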


@@ -345,16 +345,6 @@ extern ssize_t memory_read_from_buffer(void *to, size_t count, loff_t *ppos,
 int ptr_to_hashval(const void *ptr, unsigned long *hashval_out);
-/**
- * strstarts - does @str start with @prefix?
- * @str: string to examine
- * @prefix: prefix to look for.
- */
-static inline bool strstarts(const char *str, const char *prefix)
-{
-return strncmp(str, prefix, strlen(prefix)) == 0;
-}
 size_t memweight(const void *ptr, size_t bytes);
 /**
@@ -562,4 +552,14 @@ static __always_inline size_t str_has_prefix(const char *str, const char *prefix
 return strncmp(str, prefix, len) == 0 ? len : 0;
 }
+/**
+ * strstarts - does @str start with @prefix?
+ * @str: string to examine
+ * @prefix: prefix to look for.
+ */
+static inline bool strstarts(const char *str, const char *prefix)
+{
+return strncmp(str, prefix, strlen(prefix)) == 0;
+}
 #endif /* _LINUX_STRING_H_ */
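
The two prefix helpers now sit next to each other; strstarts() answers yes/no, while str_has_prefix() also returns the prefix length so the caller can skip past it. A small illustrative sketch (not from the patch; the function name is made up):

#include <linux/string.h>

/* Hypothetical option parser: accept "mode=" and return what follows it. */
static const char *example_parse_mode(const char *arg)
{
	size_t len = str_has_prefix(arg, "mode=");	/* prefix length, or 0 */

	if (!strstarts(arg, "mode="))			/* plain yes/no check */
		return NULL;

	return arg + len;				/* text after "mode=" */
}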


@@ -139,11 +139,12 @@ obj-$(CONFIG_WATCH_QUEUE) += watch_queue.o
 obj-$(CONFIG_RESOURCE_KUNIT_TEST) += resource_kunit.o
 obj-$(CONFIG_SYSCTL_KUNIT_TEST) += sysctl-test.o
-CFLAGS_stackleak.o += $(DISABLE_STACKLEAK_PLUGIN)
-obj-$(CONFIG_GCC_PLUGIN_STACKLEAK) += stackleak.o
-KASAN_SANITIZE_stackleak.o := n
-KCSAN_SANITIZE_stackleak.o := n
-KCOV_INSTRUMENT_stackleak.o := n
+CFLAGS_kstack_erase.o += $(DISABLE_KSTACK_ERASE)
+CFLAGS_kstack_erase.o += $(call cc-option,-mgeneral-regs-only)
+obj-$(CONFIG_KSTACK_ERASE) += kstack_erase.o
+KASAN_SANITIZE_kstack_erase.o := n
+KCSAN_SANITIZE_kstack_erase.o := n
+KCOV_INSTRUMENT_kstack_erase.o := n
 obj-$(CONFIG_SCF_TORTURE_TEST) += scftorture.o


@@ -60,9 +60,15 @@ CONFIG_LIST_HARDENED=y
 # Initialize all heap variables to zero on allocation.
 CONFIG_INIT_ON_ALLOC_DEFAULT_ON=y
+# Initialize all heap variables to zero on free to reduce stale data lifetime.
+CONFIG_INIT_ON_FREE_DEFAULT_ON=y
 # Initialize all stack variables to zero on function entry.
 CONFIG_INIT_STACK_ALL_ZERO=y
+# Wipe kernel stack after syscall completion to reduce stale data lifetime.
+CONFIG_KSTACK_ERASE=y
 # Wipe RAM at reboot via EFI. For more details, see:
 # https://trustedcomputinggroup.org/resource/pc-client-work-group-platform-reset-attack-mitigation-specification/
 # https://bugzilla.redhat.com/show_bug.cgi?id=1532058


@@ -93,7 +93,7 @@
 #include <linux/kcov.h>
 #include <linux/livepatch.h>
 #include <linux/thread_info.h>
-#include <linux/stackleak.h>
+#include <linux/kstack_erase.h>
 #include <linux/kasan.h>
 #include <linux/scs.h>
 #include <linux/io_uring.h>


@@ -310,8 +310,8 @@ err_free:
 return -ENOMEM;
 }
-static void deserialize_bitmap(unsigned int order,
+static void __init deserialize_bitmap(unsigned int order,
 struct khoser_mem_bitmap_ptr *elm)
 {
 struct kho_mem_phys_bits *bitmap = KHOSER_LOAD_PTR(elm->bitmap);
 unsigned long bit;


@@ -6,14 +6,14 @@
 *
 * Author: Alexander Popov <alex.popov@linux.com>
 *
-* STACKLEAK reduces the information which kernel stack leak bugs can
+* KSTACK_ERASE reduces the information which kernel stack leak bugs can
 * reveal and blocks some uninitialized stack variable attacks.
 */
-#include <linux/stackleak.h>
+#include <linux/kstack_erase.h>
 #include <linux/kprobes.h>
-#ifdef CONFIG_STACKLEAK_RUNTIME_DISABLE
+#ifdef CONFIG_KSTACK_ERASE_RUNTIME_DISABLE
 #include <linux/jump_label.h>
 #include <linux/string_choices.h>
 #include <linux/sysctl.h>
@@ -68,7 +68,7 @@ late_initcall(stackleak_sysctls_init);
 #define skip_erasing() static_branch_unlikely(&stack_erasing_bypass)
 #else
 #define skip_erasing() false
-#endif /* CONFIG_STACKLEAK_RUNTIME_DISABLE */
+#endif /* CONFIG_KSTACK_ERASE_RUNTIME_DISABLE */
 #ifndef __stackleak_poison
 static __always_inline void __stackleak_poison(unsigned long erase_low,
@@ -91,7 +91,7 @@ static __always_inline void __stackleak_erase(bool on_task_stack)
 erase_low = stackleak_find_top_of_poison(task_stack_low,
 current->lowest_stack);
-#ifdef CONFIG_STACKLEAK_METRICS
+#ifdef CONFIG_KSTACK_ERASE_METRICS
 current->prev_lowest_stack = erase_low;
 #endif
@@ -113,7 +113,7 @@ static __always_inline void __stackleak_erase(bool on_task_stack)
 else
 erase_high = task_stack_high;
-__stackleak_poison(erase_low, erase_high, STACKLEAK_POISON);
+__stackleak_poison(erase_low, erase_high, KSTACK_ERASE_POISON);
 /* Reset the 'lowest_stack' value for the next syscall */
 current->lowest_stack = task_stack_high;
@@ -156,16 +156,16 @@ asmlinkage void noinstr stackleak_erase_off_task_stack(void)
 __stackleak_erase(false);
 }
-void __used __no_caller_saved_registers noinstr stackleak_track_stack(void)
+void __used __no_caller_saved_registers noinstr __sanitizer_cov_stack_depth(void)
 {
 unsigned long sp = current_stack_pointer;
 /*
-* Having CONFIG_STACKLEAK_TRACK_MIN_SIZE larger than
-* STACKLEAK_SEARCH_DEPTH makes the poison search in
+* Having CONFIG_KSTACK_ERASE_TRACK_MIN_SIZE larger than
+* KSTACK_ERASE_SEARCH_DEPTH makes the poison search in
 * stackleak_erase() unreliable. Let's prevent that.
 */
-BUILD_BUG_ON(CONFIG_STACKLEAK_TRACK_MIN_SIZE > STACKLEAK_SEARCH_DEPTH);
+BUILD_BUG_ON(CONFIG_KSTACK_ERASE_TRACK_MIN_SIZE > KSTACK_ERASE_SEARCH_DEPTH);
 /* 'lowest_stack' should be aligned on the register width boundary */
 sp = ALIGN(sp, sizeof(unsigned long));
@@ -174,4 +174,4 @@ void __used __no_caller_saved_registers noinstr stackleak_track_stack(void)
 current->lowest_stack = sp;
 }
 }
-EXPORT_SYMBOL(stackleak_track_stack);
+EXPORT_SYMBOL(__sanitizer_cov_stack_depth);


@@ -2460,6 +2460,15 @@ config SCANF_KUNIT_TEST
 If unsure, say N.
+config SEQ_BUF_KUNIT_TEST
+tristate "KUnit test for seq_buf" if !KUNIT_ALL_TESTS
+depends on KUNIT
+default KUNIT_ALL_TESTS
+help
+This builds unit tests for the seq_buf library.
+If unsure, say N.
 config STRING_KUNIT_TEST
 tristate "KUnit test string functions at runtime" if !KUNIT_ALL_TESTS
 depends on KUNIT


@@ -337,7 +337,7 @@ obj-$(CONFIG_UBSAN) += ubsan.o
 UBSAN_SANITIZE_ubsan.o := n
 KASAN_SANITIZE_ubsan.o := n
 KCSAN_SANITIZE_ubsan.o := n
-CFLAGS_ubsan.o := -fno-stack-protector $(DISABLE_STACKLEAK_PLUGIN)
+CFLAGS_ubsan.o := -fno-stack-protector $(DISABLE_KSTACK_ERASE)
 obj-$(CONFIG_SBITMAP) += sbitmap.o


@@ -37,6 +37,7 @@ obj-$(CONFIG_OVERFLOW_KUNIT_TEST) += overflow_kunit.o
 obj-$(CONFIG_PRINTF_KUNIT_TEST) += printf_kunit.o
 obj-$(CONFIG_RANDSTRUCT_KUNIT_TEST) += randstruct_kunit.o
 obj-$(CONFIG_SCANF_KUNIT_TEST) += scanf_kunit.o
+obj-$(CONFIG_SEQ_BUF_KUNIT_TEST) += seq_buf_kunit.o
 obj-$(CONFIG_SIPHASH_KUNIT_TEST) += siphash_kunit.o
 obj-$(CONFIG_SLUB_KUNIT_TEST) += slub_kunit.o
 obj-$(CONFIG_TEST_SORT) += test_sort.o


@@ -1003,8 +1003,8 @@ static void fortify_test_memcmp(struct kunit *test)
 {
 char one[] = "My mind is going ...";
 char two[] = "My mind is going ... I can feel it.";
-size_t one_len = sizeof(one) - 1;
-size_t two_len = sizeof(two) - 1;
+volatile size_t one_len = sizeof(one) - 1;
+volatile size_t two_len = sizeof(two) - 1;
 OPTIMIZER_HIDE_VAR(one_len);
 OPTIMIZER_HIDE_VAR(two_len);

lib/tests/seq_buf_kunit.c (new file, 208 lines)

@@ -0,0 +1,208 @@
// SPDX-License-Identifier: GPL-2.0
/*
* KUnit tests for the seq_buf API
*
* Copyright (C) 2025, Google LLC.
*/
#include <kunit/test.h>
#include <linux/seq_buf.h>
static void seq_buf_init_test(struct kunit *test)
{
char buf[32];
struct seq_buf s;
seq_buf_init(&s, buf, sizeof(buf));
KUNIT_EXPECT_EQ(test, s.size, 32);
KUNIT_EXPECT_EQ(test, s.len, 0);
KUNIT_EXPECT_FALSE(test, seq_buf_has_overflowed(&s));
KUNIT_EXPECT_EQ(test, seq_buf_buffer_left(&s), 32);
KUNIT_EXPECT_EQ(test, seq_buf_used(&s), 0);
KUNIT_EXPECT_STREQ(test, seq_buf_str(&s), "");
}
static void seq_buf_declare_test(struct kunit *test)
{
DECLARE_SEQ_BUF(s, 24);
KUNIT_EXPECT_EQ(test, s.size, 24);
KUNIT_EXPECT_EQ(test, s.len, 0);
KUNIT_EXPECT_FALSE(test, seq_buf_has_overflowed(&s));
KUNIT_EXPECT_EQ(test, seq_buf_buffer_left(&s), 24);
KUNIT_EXPECT_EQ(test, seq_buf_used(&s), 0);
KUNIT_EXPECT_STREQ(test, seq_buf_str(&s), "");
}
static void seq_buf_clear_test(struct kunit *test)
{
DECLARE_SEQ_BUF(s, 128);
seq_buf_puts(&s, "hello");
KUNIT_EXPECT_EQ(test, s.len, 5);
KUNIT_EXPECT_FALSE(test, seq_buf_has_overflowed(&s));
KUNIT_EXPECT_STREQ(test, seq_buf_str(&s), "hello");
seq_buf_clear(&s);
KUNIT_EXPECT_EQ(test, s.len, 0);
KUNIT_EXPECT_FALSE(test, seq_buf_has_overflowed(&s));
KUNIT_EXPECT_STREQ(test, seq_buf_str(&s), "");
}
static void seq_buf_puts_test(struct kunit *test)
{
DECLARE_SEQ_BUF(s, 16);
seq_buf_puts(&s, "hello");
KUNIT_EXPECT_EQ(test, seq_buf_used(&s), 5);
KUNIT_EXPECT_FALSE(test, seq_buf_has_overflowed(&s));
KUNIT_EXPECT_STREQ(test, seq_buf_str(&s), "hello");
seq_buf_puts(&s, " world");
KUNIT_EXPECT_EQ(test, seq_buf_used(&s), 11);
KUNIT_EXPECT_FALSE(test, seq_buf_has_overflowed(&s));
KUNIT_EXPECT_STREQ(test, seq_buf_str(&s), "hello world");
}
static void seq_buf_puts_overflow_test(struct kunit *test)
{
DECLARE_SEQ_BUF(s, 10);
seq_buf_puts(&s, "123456789");
KUNIT_EXPECT_FALSE(test, seq_buf_has_overflowed(&s));
KUNIT_EXPECT_EQ(test, seq_buf_used(&s), 9);
seq_buf_puts(&s, "0");
KUNIT_EXPECT_TRUE(test, seq_buf_has_overflowed(&s));
KUNIT_EXPECT_EQ(test, seq_buf_used(&s), 10);
KUNIT_EXPECT_STREQ(test, seq_buf_str(&s), "123456789");
seq_buf_clear(&s);
KUNIT_EXPECT_EQ(test, s.len, 0);
KUNIT_EXPECT_FALSE(test, seq_buf_has_overflowed(&s));
KUNIT_EXPECT_STREQ(test, seq_buf_str(&s), "");
}
static void seq_buf_putc_test(struct kunit *test)
{
DECLARE_SEQ_BUF(s, 4);
seq_buf_putc(&s, 'a');
seq_buf_putc(&s, 'b');
seq_buf_putc(&s, 'c');
KUNIT_EXPECT_EQ(test, seq_buf_used(&s), 3);
KUNIT_EXPECT_FALSE(test, seq_buf_has_overflowed(&s));
KUNIT_EXPECT_STREQ(test, seq_buf_str(&s), "abc");
seq_buf_putc(&s, 'd');
KUNIT_EXPECT_EQ(test, seq_buf_used(&s), 4);
KUNIT_EXPECT_FALSE(test, seq_buf_has_overflowed(&s));
KUNIT_EXPECT_STREQ(test, seq_buf_str(&s), "abc");
seq_buf_putc(&s, 'e');
KUNIT_EXPECT_EQ(test, seq_buf_used(&s), 4);
KUNIT_EXPECT_TRUE(test, seq_buf_has_overflowed(&s));
KUNIT_EXPECT_STREQ(test, seq_buf_str(&s), "abc");
seq_buf_clear(&s);
KUNIT_EXPECT_EQ(test, s.len, 0);
KUNIT_EXPECT_FALSE(test, seq_buf_has_overflowed(&s));
KUNIT_EXPECT_STREQ(test, seq_buf_str(&s), "");
}
static void seq_buf_printf_test(struct kunit *test)
{
DECLARE_SEQ_BUF(s, 32);
seq_buf_printf(&s, "hello %s", "world");
KUNIT_EXPECT_EQ(test, seq_buf_used(&s), 11);
KUNIT_EXPECT_FALSE(test, seq_buf_has_overflowed(&s));
KUNIT_EXPECT_STREQ(test, seq_buf_str(&s), "hello world");
seq_buf_printf(&s, " %d", 123);
KUNIT_EXPECT_EQ(test, seq_buf_used(&s), 15);
KUNIT_EXPECT_FALSE(test, seq_buf_has_overflowed(&s));
KUNIT_EXPECT_STREQ(test, seq_buf_str(&s), "hello world 123");
}
static void seq_buf_printf_overflow_test(struct kunit *test)
{
DECLARE_SEQ_BUF(s, 16);
seq_buf_printf(&s, "%lu", 1234567890UL);
KUNIT_EXPECT_FALSE(test, seq_buf_has_overflowed(&s));
KUNIT_EXPECT_EQ(test, seq_buf_used(&s), 10);
KUNIT_EXPECT_STREQ(test, seq_buf_str(&s), "1234567890");
seq_buf_printf(&s, "%s", "abcdefghij");
KUNIT_EXPECT_TRUE(test, seq_buf_has_overflowed(&s));
KUNIT_EXPECT_EQ(test, seq_buf_used(&s), 16);
KUNIT_EXPECT_STREQ(test, seq_buf_str(&s), "1234567890abcde");
seq_buf_clear(&s);
KUNIT_EXPECT_EQ(test, s.len, 0);
KUNIT_EXPECT_FALSE(test, seq_buf_has_overflowed(&s));
KUNIT_EXPECT_STREQ(test, seq_buf_str(&s), "");
}
static void seq_buf_get_buf_commit_test(struct kunit *test)
{
DECLARE_SEQ_BUF(s, 16);
char *buf;
size_t len;
len = seq_buf_get_buf(&s, &buf);
KUNIT_EXPECT_EQ(test, len, 16);
KUNIT_EXPECT_PTR_NE(test, buf, NULL);
memcpy(buf, "hello", 5);
seq_buf_commit(&s, 5);
KUNIT_EXPECT_EQ(test, seq_buf_used(&s), 5);
KUNIT_EXPECT_FALSE(test, seq_buf_has_overflowed(&s));
KUNIT_EXPECT_STREQ(test, seq_buf_str(&s), "hello");
len = seq_buf_get_buf(&s, &buf);
KUNIT_EXPECT_EQ(test, len, 11);
KUNIT_EXPECT_PTR_NE(test, buf, NULL);
memcpy(buf, " worlds!", 8);
seq_buf_commit(&s, 6);
KUNIT_EXPECT_EQ(test, seq_buf_used(&s), 11);
KUNIT_EXPECT_FALSE(test, seq_buf_has_overflowed(&s));
KUNIT_EXPECT_STREQ(test, seq_buf_str(&s), "hello world");
len = seq_buf_get_buf(&s, &buf);
KUNIT_EXPECT_EQ(test, len, 5);
KUNIT_EXPECT_PTR_NE(test, buf, NULL);
seq_buf_commit(&s, -1);
KUNIT_EXPECT_TRUE(test, seq_buf_has_overflowed(&s));
}
static struct kunit_case seq_buf_test_cases[] = {
KUNIT_CASE(seq_buf_init_test),
KUNIT_CASE(seq_buf_declare_test),
KUNIT_CASE(seq_buf_clear_test),
KUNIT_CASE(seq_buf_puts_test),
KUNIT_CASE(seq_buf_puts_overflow_test),
KUNIT_CASE(seq_buf_putc_test),
KUNIT_CASE(seq_buf_printf_test),
KUNIT_CASE(seq_buf_printf_overflow_test),
KUNIT_CASE(seq_buf_get_buf_commit_test),
{}
};
static struct kunit_suite seq_buf_test_suite = {
.name = "seq_buf",
.test_cases = seq_buf_test_cases,
};
kunit_test_suite(seq_buf_test_suite);
MODULE_DESCRIPTION("Runtime test cases for seq_buf string API");
MODULE_LICENSE("GPL");
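
For context, a sketch of how the API exercised above is typically used outside of KUnit (illustrative only; the function and message are made up):

#include <linux/printk.h>
#include <linux/seq_buf.h>

/* Hypothetical reporting helper built on the same seq_buf calls as the tests. */
static void example_report(int cpu, unsigned long events)
{
	DECLARE_SEQ_BUF(sb, 64);

	seq_buf_printf(&sb, "cpu%d: %lu events", cpu, events);
	if (seq_buf_has_overflowed(&sb))
		pr_warn("report truncated\n");

	pr_info("%s\n", seq_buf_str(&sb));
}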


@@ -8,20 +8,6 @@ ifdef CONFIG_GCC_PLUGIN_LATENT_ENTROPY
 endif
 export DISABLE_LATENT_ENTROPY_PLUGIN
-gcc-plugin-$(CONFIG_GCC_PLUGIN_STACKLEAK) += stackleak_plugin.so
-gcc-plugin-cflags-$(CONFIG_GCC_PLUGIN_STACKLEAK) \
-+= -DSTACKLEAK_PLUGIN
-gcc-plugin-cflags-$(CONFIG_GCC_PLUGIN_STACKLEAK) \
-+= -fplugin-arg-stackleak_plugin-track-min-size=$(CONFIG_STACKLEAK_TRACK_MIN_SIZE)
-gcc-plugin-cflags-$(CONFIG_GCC_PLUGIN_STACKLEAK) \
-+= -fplugin-arg-stackleak_plugin-arch=$(SRCARCH)
-gcc-plugin-cflags-$(CONFIG_GCC_PLUGIN_STACKLEAK_VERBOSE) \
-+= -fplugin-arg-stackleak_plugin-verbose
-ifdef CONFIG_GCC_PLUGIN_STACKLEAK
-DISABLE_STACKLEAK_PLUGIN += -fplugin-arg-stackleak_plugin-disable
-endif
-export DISABLE_STACKLEAK_PLUGIN
 # All the plugin CFLAGS are collected here in case a build target needs to
 # filter them out of the KBUILD_CFLAGS.
 GCC_PLUGINS_CFLAGS := $(strip $(addprefix -fplugin=$(objtree)/scripts/gcc-plugins/, $(gcc-plugin-y)) $(gcc-plugin-cflags-y)) -DGCC_PLUGINS
@@ -34,6 +20,8 @@ KBUILD_CFLAGS += $(GCC_PLUGINS_CFLAGS)
 # be included in GCC_PLUGIN so they can get built.
 gcc-plugin-external-$(CONFIG_GCC_PLUGIN_RANDSTRUCT) \
 += randomize_layout_plugin.so
+gcc-plugin-external-$(CONFIG_GCC_PLUGIN_STACKLEAK) \
++= stackleak_plugin.so
 # All enabled GCC plugins are collected here for building in
 # scripts/gcc-scripts/Makefile.


@@ -0,0 +1,21 @@
# SPDX-License-Identifier: GPL-2.0
ifdef CONFIG_GCC_PLUGIN_STACKLEAK
kstack-erase-cflags-y += -fplugin=$(objtree)/scripts/gcc-plugins/stackleak_plugin.so
kstack-erase-cflags-y += -fplugin-arg-stackleak_plugin-track-min-size=$(CONFIG_KSTACK_ERASE_TRACK_MIN_SIZE)
kstack-erase-cflags-y += -fplugin-arg-stackleak_plugin-arch=$(SRCARCH)
kstack-erase-cflags-$(CONFIG_GCC_PLUGIN_STACKLEAK_VERBOSE) += -fplugin-arg-stackleak_plugin-verbose
DISABLE_KSTACK_ERASE := -fplugin-arg-stackleak_plugin-disable
endif
ifdef CONFIG_CC_IS_CLANG
kstack-erase-cflags-y += -fsanitize-coverage=stack-depth
kstack-erase-cflags-y += -fsanitize-coverage-stack-depth-callback-min=$(CONFIG_KSTACK_ERASE_TRACK_MIN_SIZE)
DISABLE_KSTACK_ERASE := -fno-sanitize-coverage=stack-depth
endif
KSTACK_ERASE_CFLAGS := $(kstack-erase-cflags-y)
export STACKLEAK_CFLAGS DISABLE_KSTACK_ERASE
KBUILD_CFLAGS += $(KSTACK_ERASE_CFLAGS)


@@ -9,7 +9,7 @@
 * any of the gcc libraries
 *
 * This gcc plugin is needed for tracking the lowest border of the kernel stack.
-* It instruments the kernel code inserting stackleak_track_stack() calls:
+* It instruments the kernel code inserting __sanitizer_cov_stack_depth() calls:
 * - after alloca();
 * - for the functions with a stack frame size greater than or equal
 * to the "track-min-size" plugin parameter.
@@ -33,7 +33,7 @@ __visible int plugin_is_GPL_compatible;
 static int track_frame_size = -1;
 static bool build_for_x86 = false;
-static const char track_function[] = "stackleak_track_stack";
+static const char track_function[] = "__sanitizer_cov_stack_depth";
 static bool disable = false;
 static bool verbose = false;
@@ -58,7 +58,7 @@ static void add_stack_tracking_gcall(gimple_stmt_iterator *gsi, bool after)
 cgraph_node_ptr node;
 basic_block bb;
-/* Insert calling stackleak_track_stack() */
+/* Insert calling __sanitizer_cov_stack_depth() */
 stmt = gimple_build_call(track_function_decl, 0);
 gimple_call = as_a_gcall(stmt);
 if (after)
@@ -120,12 +120,12 @@ static void add_stack_tracking_gasm(gimple_stmt_iterator *gsi, bool after)
 gcc_assert(build_for_x86);
 /*
-* Insert calling stackleak_track_stack() in asm:
-* asm volatile("call stackleak_track_stack"
+* Insert calling __sanitizer_cov_stack_depth() in asm:
+* asm volatile("call __sanitizer_cov_stack_depth"
 * :: "r" (current_stack_pointer))
 * Use ASM_CALL_CONSTRAINT trick from arch/x86/include/asm/asm.h.
 * This constraint is taken into account during gcc shrink-wrapping
-* optimization. It is needed to be sure that stackleak_track_stack()
+* optimization. It is needed to be sure that __sanitizer_cov_stack_depth()
 * call is inserted after the prologue of the containing function,
 * when the stack frame is prepared.
 */
@@ -137,7 +137,7 @@ static void add_stack_tracking_gasm(gimple_stmt_iterator *gsi, bool after)
 input = build_tree_list(NULL_TREE, build_const_char_string(2, "r"));
 input = chainon(NULL_TREE, build_tree_list(input, sp_decl));
 vec_safe_push(inputs, input);
-asm_call = gimple_build_asm_vec("call stackleak_track_stack",
+asm_call = gimple_build_asm_vec("call __sanitizer_cov_stack_depth",
 inputs, NULL, NULL, NULL);
 gimple_asm_set_volatile(asm_call, true);
 if (after)
@@ -151,11 +151,11 @@ static void add_stack_tracking(gimple_stmt_iterator *gsi, bool after)
 {
 /*
 * The 'no_caller_saved_registers' attribute is used for
-* stackleak_track_stack(). If the compiler supports this attribute for
-* the target arch, we can add calling stackleak_track_stack() in asm.
+* __sanitizer_cov_stack_depth(). If the compiler supports this attribute for
+* the target arch, we can add calling __sanitizer_cov_stack_depth() in asm.
 * That improves performance: we avoid useless operations with the
 * caller-saved registers in the functions from which we will remove
-* stackleak_track_stack() call during the stackleak_cleanup pass.
+* __sanitizer_cov_stack_depth() call during the stackleak_cleanup pass.
 */
 if (lookup_attribute_spec(get_identifier("no_caller_saved_registers")))
 add_stack_tracking_gasm(gsi, after);
@@ -165,7 +165,7 @@ static void add_stack_tracking(gimple_stmt_iterator *gsi, bool after)
 /*
 * Work with the GIMPLE representation of the code. Insert the
-* stackleak_track_stack() call after alloca() and into the beginning
+* __sanitizer_cov_stack_depth() call after alloca() and into the beginning
 * of the function if it is not instrumented.
 */
 static unsigned int stackleak_instrument_execute(void)
@@ -205,7 +205,7 @@ static unsigned int stackleak_instrument_execute(void)
 DECL_NAME_POINTER(current_function_decl));
 }
-/* Insert stackleak_track_stack() call after alloca() */
+/* Insert __sanitizer_cov_stack_depth() call after alloca() */
 add_stack_tracking(&gsi, true);
 if (bb == entry_bb)
 prologue_instrumented = true;
@@ -241,7 +241,7 @@ static unsigned int stackleak_instrument_execute(void)
 return 0;
 }
-/* Insert stackleak_track_stack() call at the function beginning */
+/* Insert __sanitizer_cov_stack_depth() call at the function beginning */
 bb = entry_bb;
 if (!single_pred_p(bb)) {
 /* gcc_assert(bb_loop_depth(bb) ||
@@ -270,15 +270,15 @@ static void remove_stack_tracking_gcall(void)
 rtx_insn *insn, *next;
 /*
-* Find stackleak_track_stack() calls. Loop through the chain of insns,
+* Find __sanitizer_cov_stack_depth() calls. Loop through the chain of insns,
 * which is an RTL representation of the code for a function.
 *
 * The example of a matching insn:
-* (call_insn 8 4 10 2 (call (mem (symbol_ref ("stackleak_track_stack")
-* [flags 0x41] <function_decl 0x7f7cd3302a80 stackleak_track_stack>)
-* [0 stackleak_track_stack S1 A8]) (0)) 675 {*call} (expr_list
-* (symbol_ref ("stackleak_track_stack") [flags 0x41] <function_decl
-* 0x7f7cd3302a80 stackleak_track_stack>) (expr_list (0) (nil))) (nil))
+* (call_insn 8 4 10 2 (call (mem (symbol_ref ("__sanitizer_cov_stack_depth")
+* [flags 0x41] <function_decl 0x7f7cd3302a80 __sanitizer_cov_stack_depth>)
+* [0 __sanitizer_cov_stack_depth S1 A8]) (0)) 675 {*call} (expr_list
+* (symbol_ref ("__sanitizer_cov_stack_depth") [flags 0x41] <function_decl
+* 0x7f7cd3302a80 __sanitizer_cov_stack_depth>) (expr_list (0) (nil))) (nil))
 */
 for (insn = get_insns(); insn; insn = next) {
 rtx body;
@@ -318,7 +318,7 @@ static void remove_stack_tracking_gcall(void)
 if (SYMBOL_REF_DECL(body) != track_function_decl)
 continue;
-/* Delete the stackleak_track_stack() call */
+/* Delete the __sanitizer_cov_stack_depth() call */
 delete_insn_and_edges(insn);
 #if BUILDING_GCC_VERSION < 8000
 if (GET_CODE(next) == NOTE &&
@@ -340,12 +340,12 @@ static bool remove_stack_tracking_gasm(void)
 gcc_assert(build_for_x86);
 /*
-* Find stackleak_track_stack() asm calls. Loop through the chain of
+* Find __sanitizer_cov_stack_depth() asm calls. Loop through the chain of
 * insns, which is an RTL representation of the code for a function.
 *
 * The example of a matching insn:
 * (insn 11 5 12 2 (parallel [ (asm_operands/v
-* ("call stackleak_track_stack") ("") 0
+* ("call __sanitizer_cov_stack_depth") ("") 0
 * [ (reg/v:DI 7 sp [ current_stack_pointer ]) ]
 * [ (asm_input:DI ("r")) ] [])
 * (clobber (reg:CC 17 flags)) ]) -1 (nil))
@@ -375,7 +375,7 @@ static bool remove_stack_tracking_gasm(void)
 continue;
 if (strcmp(ASM_OPERANDS_TEMPLATE(body),
-"call stackleak_track_stack")) {
+"call __sanitizer_cov_stack_depth")) {
 continue;
 }
@@ -389,7 +389,7 @@ static bool remove_stack_tracking_gasm(void)
 /*
 * Work with the RTL representation of the code.
-* Remove the unneeded stackleak_track_stack() calls from the functions
+* Remove the unneeded __sanitizer_cov_stack_depth() calls from the functions
 * which don't call alloca() and don't have a large enough stack frame size.
 */
 static unsigned int stackleak_cleanup_execute(void)
@@ -474,13 +474,13 @@ static bool stackleak_gate(void)
 return track_frame_size >= 0;
 }
-/* Build the function declaration for stackleak_track_stack() */
+/* Build the function declaration for __sanitizer_cov_stack_depth() */
 static void stackleak_start_unit(void *gcc_data __unused,
 void *user_data __unused)
 {
 tree fntype;
-/* void stackleak_track_stack(void) */
+/* void __sanitizer_cov_stack_depth(void) */
 fntype = build_function_type_list(void_type_node, NULL_TREE);
 track_function_decl = build_fn_decl(track_function, fntype);
 DECL_ASSEMBLER_NAME(track_function_decl); /* for LTO */
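
Conceptually, both the GCC plugin and Clang's stack-depth callback instrument eligible functions as if the call below had been written by hand; an illustrative sketch, not literal compiler output:

#include <linux/kstack_erase.h>
#include <linux/string.h>

/* What an instrumented function effectively looks like. */
void example_big_frame(void)
{
	char buf[256];	/* frame size >= CONFIG_KSTACK_ERASE_TRACK_MIN_SIZE */

	/* Inserted by the instrumentation; updates current->lowest_stack. */
	__sanitizer_cov_stack_depth();

	memset(buf, 0, sizeof(buf));	/* stand-in for real work using the frame */
}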


@@ -82,10 +82,13 @@ choice
 endchoice
-config GCC_PLUGIN_STACKLEAK
+config CC_HAS_SANCOV_STACK_DEPTH_CALLBACK
+def_bool $(cc-option,-fsanitize-coverage-stack-depth-callback-min=1)
+config KSTACK_ERASE
 bool "Poison kernel stack before returning from syscalls"
-depends on GCC_PLUGINS
-depends on HAVE_ARCH_STACKLEAK
+depends on HAVE_ARCH_KSTACK_ERASE
+depends on GCC_PLUGINS || CC_HAS_SANCOV_STACK_DEPTH_CALLBACK
 help
 This option makes the kernel erase the kernel stack before
 returning from system calls. This has the effect of leaving
@@ -103,6 +106,10 @@ config GCC_PLUGIN_STACKLEAK
 are advised to test this feature on your expected workload before
 deploying it.
+config GCC_PLUGIN_STACKLEAK
+def_bool KSTACK_ERASE
+depends on GCC_PLUGINS
+help
 This plugin was ported from grsecurity/PaX. More information at:
 * https://grsecurity.net/
 * https://pax.grsecurity.net/
@@ -117,37 +124,37 @@ config GCC_PLUGIN_STACKLEAK_VERBOSE
 instrumented. This is useful for comparing coverage between
 builds.
-config STACKLEAK_TRACK_MIN_SIZE
-int "Minimum stack frame size of functions tracked by STACKLEAK"
+config KSTACK_ERASE_TRACK_MIN_SIZE
+int "Minimum stack frame size of functions tracked by KSTACK_ERASE"
 default 100
 range 0 4096
-depends on GCC_PLUGIN_STACKLEAK
+depends on KSTACK_ERASE
 help
-The STACKLEAK gcc plugin instruments the kernel code for tracking
+The KSTACK_ERASE option instruments the kernel code for tracking
 the lowest border of the kernel stack (and for some other purposes).
-It inserts the stackleak_track_stack() call for the functions with
-a stack frame size greater than or equal to this parameter.
+It inserts the __sanitizer_cov_stack_depth() call for the functions
+with a stack frame size greater than or equal to this parameter.
 If unsure, leave the default value 100.
-config STACKLEAK_METRICS
-bool "Show STACKLEAK metrics in the /proc file system"
-depends on GCC_PLUGIN_STACKLEAK
+config KSTACK_ERASE_METRICS
+bool "Show KSTACK_ERASE metrics in the /proc file system"
+depends on KSTACK_ERASE
 depends on PROC_FS
 help
-If this is set, STACKLEAK metrics for every task are available in
-the /proc file system. In particular, /proc/<pid>/stack_depth
+If this is set, KSTACK_ERASE metrics for every task are available
+in the /proc file system. In particular, /proc/<pid>/stack_depth
 shows the maximum kernel stack consumption for the current and
 previous syscalls. Although this information is not precise, it
-can be useful for estimating the STACKLEAK performance impact for
-your workloads.
+can be useful for estimating the KSTACK_ERASE performance impact
+for your workloads.
-config STACKLEAK_RUNTIME_DISABLE
+config KSTACK_ERASE_RUNTIME_DISABLE
 bool "Allow runtime disabling of kernel stack erasing"
-depends on GCC_PLUGIN_STACKLEAK
+depends on KSTACK_ERASE
 help
 This option provides 'stack_erasing' sysctl, which can be used in
 runtime to control kernel stack erasing for kernels built with
-CONFIG_GCC_PLUGIN_STACKLEAK.
+CONFIG_KSTACK_ERASE.
 config INIT_ON_ALLOC_DEFAULT_ON
 bool "Enable heap memory zeroing on allocation by default"


@@ -1193,8 +1193,8 @@ static const char *uaccess_safe_builtin[] = {
 "__ubsan_handle_type_mismatch_v1",
 "__ubsan_handle_shift_out_of_bounds",
 "__ubsan_handle_load_invalid_value",
-/* STACKLEAK */
-"stackleak_track_stack",
+/* KSTACK_ERASE */
+"__sanitizer_cov_stack_depth",
 /* TRACE_BRANCH_PROFILING */
 "ftrace_likely_update",
 /* STACKPROTECTOR */


@@ -2,7 +2,7 @@ CONFIG_LKDTM=y
 CONFIG_DEBUG_LIST=y
 CONFIG_SLAB_FREELIST_HARDENED=y
 CONFIG_FORTIFY_SOURCE=y
-CONFIG_GCC_PLUGIN_STACKLEAK=y
+CONFIG_KSTACK_ERASE=y
 CONFIG_HARDENED_USERCOPY=y
 CONFIG_RANDOMIZE_KSTACK_OFFSET_DEFAULT=y
 CONFIG_INIT_ON_FREE_DEFAULT_ON=y