# SPDX-License-Identifier: GPL-2.0-only
menu "Kernel hardening options"

menu "Memory initialization"

config CC_HAS_AUTO_VAR_INIT_PATTERN
	def_bool $(cc-option,-ftrivial-auto-var-init=pattern)

config CC_HAS_AUTO_VAR_INIT_ZERO_BARE
	def_bool $(cc-option,-ftrivial-auto-var-init=zero)

config CC_HAS_AUTO_VAR_INIT_ZERO_ENABLER
	# Clang 16 and later warn about using the -enable flag, but it
	# is required before then.
	def_bool $(cc-option,-ftrivial-auto-var-init=zero -enable-trivial-auto-var-init-zero-knowing-it-will-be-removed-from-clang)
	depends on !CC_HAS_AUTO_VAR_INIT_ZERO_BARE

config CC_HAS_AUTO_VAR_INIT_ZERO
	def_bool CC_HAS_AUTO_VAR_INIT_ZERO_BARE || CC_HAS_AUTO_VAR_INIT_ZERO_ENABLER

choice
	prompt "Initialize kernel stack variables at function entry"
	default INIT_STACK_ALL_PATTERN if COMPILE_TEST && CC_HAS_AUTO_VAR_INIT_PATTERN
	default INIT_STACK_ALL_ZERO if CC_HAS_AUTO_VAR_INIT_ZERO
	default INIT_STACK_NONE
	help
	  This option enables initialization of stack variables at
	  function entry time. This has the potential for the greatest
	  coverage (since all functions can have their variables
	  initialized), but the performance impact depends on the
	  function calling complexity of a given workload's syscalls.

	  This chooses the level of coverage over classes of potentially
	  uninitialized variables. The selected class of variable will be
	  initialized before use in a function.
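
	# A minimal C sketch (illustrative only; the function is
	# hypothetical) of what the choice below changes for a local
	# declared without an initializer:
	#
	#	long example(void)
	#	{
	#		long val;	/* INIT_STACK_NONE: stale stack data */
	#				/* ..._ALL_PATTERN: a debug pattern  */
	#				/* ..._ALL_ZERO: zero                */
	#		return val;	/* no longer leaks old stack data    */
	#				/* with either *_ALL_* option        */
	#	}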

	config INIT_STACK_NONE
		bool "no automatic stack variable initialization (weakest)"
		help
		  Disable automatic stack variable initialization.
		  This leaves the kernel vulnerable to the standard
		  classes of uninitialized stack variable exploits
		  and information exposures.

	config INIT_STACK_ALL_PATTERN
		bool "pattern-init everything (strongest)"
		depends on CC_HAS_AUTO_VAR_INIT_PATTERN
		depends on !KMSAN
		help
		  Initializes everything on the stack (including padding)
		  with a specific debug value. This is intended to eliminate
		  all classes of uninitialized stack variable exploits and
		  information exposures, even variables that were warned about
		  having been left uninitialized.

		  Pattern initialization is known to provoke many existing bugs
		  related to uninitialized locals, e.g. pointers receive
		  non-NULL values, and buffer sizes and indices are very large.
		  The pattern is situation-specific; Clang on 64-bit uses 0xAA
		  repeating for all types and padding except float and double,
		  which use 0xFF repeating (-NaN). Clang on 32-bit uses 0xFF
		  repeating for all types and padding.
		  GCC uses 0xFE repeating for all types, and zero for padding.
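
	# Illustrative values (taken from the help text above; the
	# function is hypothetical) for pattern-init with Clang on a
	# 64-bit target:
	#
	#	void example(void)
	#	{
	#		void *p;	/* filled with repeating 0xAA */
	#		int idx;	/* filled with repeating 0xAA */
	#		double d;	/* filled with repeating 0xFF */
	#	}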

	config INIT_STACK_ALL_ZERO
		bool "zero-init everything (strongest and safest)"
		depends on CC_HAS_AUTO_VAR_INIT_ZERO
		depends on !KMSAN
		help
		  Initializes everything on the stack (including padding)
		  with a zero value. This is intended to eliminate all
		  classes of uninitialized stack variable exploits and
		  information exposures, even variables that were warned
		  about having been left uninitialized.

		  Zero initialization provides safe defaults for strings
		  (immediately NUL-terminated), pointers (NULL), indices
		  (index 0), and sizes (0 length), so it is therefore more
		  suitable as a production security mitigation than pattern
		  initialization.
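
	# A companion sketch (same hypothetical style) of the zero-init
	# safe defaults described above:
	#
	#	void example(void)
	#	{
	#		char name[16];	/* name[0] == '\0': empty string */
	#		void *q;	/* q == NULL                     */
	#		size_t len;	/* len == 0                      */
	#	}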

endchoice

config CC_HAS_SANCOV_STACK_DEPTH_CALLBACK
	def_bool $(cc-option,-fsanitize-coverage-stack-depth-callback-min=1)

config KSTACK_ERASE
	bool "Poison kernel stack before returning from syscalls"
	depends on HAVE_ARCH_KSTACK_ERASE
	depends on GCC_PLUGINS || CC_HAS_SANCOV_STACK_DEPTH_CALLBACK
	help
	  This option makes the kernel erase the kernel stack before
	  returning from system calls. This has the effect of leaving
	  the stack initialized to the poison value, which both reduces
	  the lifetime of any sensitive stack contents and reduces
	  potential for uninitialized stack variable exploits or
	  information exposures (it does not cover functions reaching the
	  same stack depth as prior functions during the same syscall).
	  This blocks most uninitialized stack variable attacks, with the
	  performance impact being driven by the depth of the stack
	  usage, rather than the function calling complexity.

	  Kernel compilation on a single-CPU system sees a 1% slowdown;
	  other systems and workloads may vary, so you are advised to
	  test this feature on your expected workload before deploying
	  it.
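
# Conceptual effect of KSTACK_ERASE (an approximation, not the exact
# implementation): on syscall exit, the region of the task's kernel
# stack that was used during the syscall is overwritten with a poison
# value, so stale data does not survive into the next syscall.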

config GCC_PLUGIN_STACKLEAK
	def_bool KSTACK_ERASE
	depends on GCC_PLUGINS
	help
	  This plugin was ported from grsecurity/PaX. More information at:
	   * https://grsecurity.net/
	   * https://pax.grsecurity.net/

config GCC_PLUGIN_STACKLEAK_VERBOSE
	bool "Report stack depth analysis instrumentation" if EXPERT
	depends on GCC_PLUGIN_STACKLEAK
	depends on !COMPILE_TEST	# too noisy
	help
	  This option will cause a warning to be printed each time the
	  stackleak plugin finds a function it thinks needs to be
	  instrumented. This is useful for comparing coverage between
	  builds.

config KSTACK_ERASE_TRACK_MIN_SIZE
	int "Minimum stack frame size of functions tracked by KSTACK_ERASE"
	default 100
	range 0 4096
	depends on KSTACK_ERASE
	help
	  The KSTACK_ERASE option instruments the kernel code for tracking
	  the lowest border of the kernel stack (and for some other purposes).
	  It inserts the __sanitizer_cov_stack_depth() call for functions
	  with a stack frame size greater than or equal to this parameter.
	  If unsure, leave the default value 100.
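
# Conceptual view (hand-written approximation, not real compiler or
# plugin output) of the instrumentation described above:
#
#	void big_frame_func(void)
#	{
#		char buf[256];			/* frame size >= threshold */
#		__sanitizer_cov_stack_depth();	/* inserted tracking call  */
#		/* ... */
#	}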

config KSTACK_ERASE_METRICS
	bool "Show KSTACK_ERASE metrics in the /proc file system"
	depends on KSTACK_ERASE
	depends on PROC_FS
	help
	  If this is set, KSTACK_ERASE metrics for every task are available
	  in the /proc file system. In particular, /proc/<pid>/stack_depth
	  shows the maximum kernel stack consumption for the current and
	  previous syscalls. Although this information is not precise, it
	  can be useful for estimating the KSTACK_ERASE performance impact
	  for your workloads.
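
# Example usage (illustrative): inspecting the metric for the current
# shell process:
#
#	cat /proc/self/stack_depth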

config KSTACK_ERASE_RUNTIME_DISABLE
	bool "Allow runtime disabling of kernel stack erasing"
	depends on KSTACK_ERASE
	help
	  This option provides the 'stack_erasing' sysctl, which can be
	  used at runtime to control kernel stack erasing for kernels
	  built with CONFIG_KSTACK_ERASE.
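
# Example usage (illustrative): turning stack erasing off at runtime
# via the sysctl named above:
#
#	sysctl kernel.stack_erasing=0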

config INIT_ON_ALLOC_DEFAULT_ON
	bool "Enable heap memory zeroing on allocation by default"
	depends on !KMSAN
	help
	  This has the effect of setting "init_on_alloc=1" on the kernel
	  command line. This can be disabled with "init_on_alloc=0".
	  When "init_on_alloc" is enabled, all page allocator and slab
	  allocator memory will be zeroed when allocated, eliminating
	  many kinds of "uninitialized heap memory" flaws, especially
	  heap content exposures. The performance impact varies by
	  workload, but most cases see <1% impact. Some synthetic
	  workloads have measured as high as 7%.
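
# Semantics sketch (hypothetical kernel code) when "init_on_alloc" is
# in effect; slab caches with constructors or SLAB_TYPESAFE_BY_RCU are
# exceptions that keep their own semantics:
#
#	u8 *p = kmalloc(64, GFP_KERNEL);	/* no __GFP_ZERO passed */
#	/* p[0..63] already read back as zero here */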

config INIT_ON_FREE_DEFAULT_ON
	bool "Enable heap memory zeroing on free by default"
	depends on !KMSAN
	help
	  This has the effect of setting "init_on_free=1" on the kernel
	  command line. This can be disabled with "init_on_free=0".
	  Similar to "init_on_alloc", when "init_on_free" is enabled,
	  all page allocator and slab allocator memory will be zeroed
	  when freed, eliminating many kinds of "uninitialized heap memory"
	  flaws, especially heap content exposures. The primary difference
	  with "init_on_free" is that data lifetime in memory is reduced,
	  as anything freed is wiped immediately, making live forensics or
	  cold boot memory attacks unable to recover freed memory contents.
	  The performance impact varies by workload, but is more expensive
	  than "init_on_alloc" due to the negative cache effects of
	  touching "cold" memory areas. Most cases see 3-5% impact. Some
	  synthetic workloads have measured as high as 8%.
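
# Semantics sketch (hypothetical kernel code) when "init_on_free" is in
# effect:
#
#	u8 *p = kmalloc(64, GFP_KERNEL);
#	memset(p, 0xA5, 64);	/* sensitive data */
#	kfree(p);		/* object contents zeroed at free time */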

config CC_HAS_ZERO_CALL_USED_REGS
	def_bool $(cc-option,-fzero-call-used-regs=used-gpr)
	# https://github.com/ClangBuiltLinux/linux/issues/1766
	# https://github.com/llvm/llvm-project/issues/59242
	depends on !CC_IS_CLANG || CLANG_VERSION > 150006

config ZERO_CALL_USED_REGS
	bool "Enable register zeroing on function exit"
	depends on CC_HAS_ZERO_CALL_USED_REGS
	help
	  At the end of functions, always zero any caller-used register
	  contents. This helps ensure that temporary values are not
	  leaked beyond the function boundary. This means that register
	  contents are less likely to be available for side channels
	  and information exposures. Additionally, this helps reduce the
	  number of useful ROP gadgets by about 20% (and removes compiler
	  generated "write-what-where" gadgets) in the resulting kernel
	  image. This has a less than 1% performance impact on most
	  workloads. Image size growth depends on architecture, and should
	  be evaluated for suitability. For example, x86_64 grows by less
	  than 1%, and arm64 grows by about 5%.
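
# Conceptual x86_64 epilogue with -fzero-call-used-regs=used-gpr (a
# hand-written approximation, not actual compiler output): each
# general-purpose register the function used is cleared before return:
#
#	xorl	%edx, %edx
#	xorl	%esi, %esi
#	retq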

endmenu

menu "Bounds checking"

config FORTIFY_SOURCE
	bool "Harden common str/mem functions against buffer overflows"
	depends on ARCH_HAS_FORTIFY_SOURCE
	# https://github.com/llvm/llvm-project/issues/53645
	depends on !X86_32 || !CC_IS_CLANG || CLANG_VERSION >= 160000
	help
	  Detect overflows of buffers in common string and memory functions
	  where the compiler can determine and validate the buffer sizes.
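
# Illustrative C example (hypothetical code) of the bug class this
# catches; the compiler knows both sizes, so fortify rejects the call
# at build time or traps at run time:
#
#	char buf[8];
#	memcpy(buf, src, 16);	/* 16 > sizeof(buf): fortify error */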

config HARDENED_USERCOPY
	bool "Harden memory copies between kernel and userspace"
	imply STRICT_DEVMEM
	help
	  This option checks for obviously wrong memory regions when
	  copying memory to/from the kernel (via copy_to_user() and
	  copy_from_user() functions) by rejecting memory ranges that
	  are larger than the specified heap object, span multiple
	  separately allocated pages, are not on the process stack,
	  or are part of the kernel text. This prevents entire classes
	  of heap overflow exploits and similar kernel memory exposures.
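
# Illustrative (hypothetical) example of a copy this option rejects:
#
#	u8 *obj = kmalloc(64, GFP_KERNEL);
#	copy_to_user(ubuf, obj, 128);	/* spans past the 64-byte object */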

config HARDENED_USERCOPY_DEFAULT_ON
	bool "Harden memory copies by default"
	depends on HARDENED_USERCOPY
	default HARDENED_USERCOPY
	help
	  This has the effect of setting "hardened_usercopy=on" on the kernel
	  command line. This can be disabled with "hardened_usercopy=off".

endmenu

menu "Hardening of kernel data structures"

config LIST_HARDENED
	bool "Check integrity of linked list manipulation"
	help
	  Minimal integrity checking in the linked-list manipulation routines
	  to catch memory corruptions that are not guaranteed to result in an
	  immediate access fault.

	  If unsure, say N.
|
|
|
|
|
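The commit message above describes the shape of these checks; roughly
(a simplified rendering of the inline fast path, not the exact in-tree
code):

	static __always_inline bool __list_add_valid(struct list_head *new,
						     struct list_head *prev,
						     struct list_head *next)
	{
		/* The cheap pointer-consistency checks stay inline... */
		if (likely(next->prev == prev && prev->next == next &&
			   new != prev && new != next))
			return true;
		/* ...and only corruption reaches the reporting slow path,
		 * which always returns false (point 3 above). */
		return __list_add_valid_or_report(new, prev, next);
	}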
2023-08-11 17:18:41 +02:00
|
|
|
config BUG_ON_DATA_CORRUPTION
|
|
|
|
bool "Trigger a BUG when data corruption is detected"
|
|
|
|
select LIST_HARDENED
|
|
|
|
help
|
|
|
|
Select this option if the kernel should BUG when it encounters
|
|
|
|
data corruption in kernel memory structures as they are checked
|
|
|
|
for validity.
|
|
|
|
|
|
|
|
If unsure, say N.
|
|
|
|
|
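Schematically, the BUG-versus-WARN decision is made at the report
site; a simplified sketch of the kernel's CHECK_DATA_CORRUPTION()
helper (the real macro in include/linux/bug.h also evaluates to the
condition, so callers can bail out on corruption):

	#define CHECK_DATA_CORRUPTION(condition, fmt, ...)		\
		do {							\
			if (unlikely(condition)) {			\
				if (IS_ENABLED(CONFIG_BUG_ON_DATA_CORRUPTION)) { \
					pr_err(fmt, ##__VA_ARGS__);	\
					BUG();				\
				} else					\
					WARN(1, fmt, ##__VA_ARGS__);	\
			}						\
		} while (0)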
2023-08-11 17:18:40 +02:00
|
|
|
endmenu
|
|
|
|
|
2022-05-03 13:55:03 -07:00
|
|
|
config CC_HAS_RANDSTRUCT
|
|
|
|
def_bool $(cc-option,-frandomize-layout-seed-file=/dev/null)
|
2023-02-07 22:51:33 -08:00
|
|
|
# Randstruct was first added in Clang 15, but it isn't safe to use until
|
|
|
|
# Clang 16 due to https://github.com/llvm/llvm-project/issues/60349
|
|
|
|
depends on !CC_IS_CLANG || CLANG_VERSION >= 160000
|
2022-05-03 13:55:03 -07:00
|
|
|
|
2022-05-03 13:55:00 -07:00
|
|
|
choice
|
|
|
|
prompt "Randomize layout of sensitive kernel structures"
|
2025-04-26 00:37:55 -07:00
|
|
|
default RANDSTRUCT_FULL if COMPILE_TEST && (GCC_PLUGINS || CC_HAS_RANDSTRUCT)
|
2022-05-03 13:55:00 -07:00
|
|
|
default RANDSTRUCT_NONE
|
|
|
|
help
|
|
|
|
If you enable this, the layouts of structures that are entirely
|
|
|
|
function pointers (and have not been manually annotated with
|
|
|
|
__no_randomize_layout), or structures that have been explicitly
|
|
|
|
marked with __randomize_layout, will be randomized at compile-time.
|
|
|
|
As a result, exploits targeting these structure types first
|
|
|
|
require an additional information exposure vulnerability to
|
|
|
|
discover the randomized layouts.
|
|
|
|
|
|
|
|
Enabling this feature will introduce some performance impact,
|
|
|
|
slightly increase memory usage, and prevent the use of forensic
|
|
|
|
tools like Volatility against the system (unless the kernel
|
|
|
|
source tree, which contains the seed, remains after kernel installation).
|
|
|
|
|
2022-05-03 13:55:02 -07:00
|
|
|
The seed used for compilation is in scripts/basic/randomize.seed.
|
|
|
|
It remains after a "make clean" to allow external modules to
|
|
|
|
be compiled with the existing seed and will be removed by a
|
|
|
|
"make mrproper" or "make distclean". This file should not be made
|
|
|
|
public, or the structure layout can be determined.
|
2022-05-03 13:55:00 -07:00
|
|
|
|
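The two annotations mentioned above are applied at structure
definitions; an illustrative sketch (both struct names are made up):

	/* Explicitly opted in to layout randomization: */
	struct sensitive_ops {
		int (*open)(void *priv);
		int (*close)(void *priv);
	} __randomize_layout;

	/* Entirely function pointers, so randomized by default unless
	 * opted out: */
	struct legacy_abi_ops {
		void (*start)(void);
		void (*stop)(void);
	} __no_randomize_layout;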
|
|
|
config RANDSTRUCT_NONE
|
|
|
|
bool "Disable structure layout randomization"
|
|
|
|
help
|
|
|
|
Build normally: no structure layout randomization.
|
|
|
|
|
|
|
|
config RANDSTRUCT_FULL
|
|
|
|
bool "Fully randomize structure layout"
|
2022-05-03 13:55:03 -07:00
|
|
|
depends on CC_HAS_RANDSTRUCT || GCC_PLUGINS
|
2024-09-28 11:13:13 -07:00
|
|
|
select MODVERSIONS if MODULES && !COMPILE_TEST
|
2022-05-03 13:55:00 -07:00
|
|
|
help
|
|
|
|
Fully randomize the member layout of sensitive
|
|
|
|
structures as much as possible, which may have both a
|
|
|
|
memory size and performance impact.
|
|
|
|
|
2022-05-03 13:55:03 -07:00
|
|
|
One difference between the Clang and GCC plugin
|
|
|
|
implementations is the handling of bitfields. The GCC
|
|
|
|
plugin treats them as fully separate variables,
|
|
|
|
introducing sometimes significant padding. Clang tries
|
|
|
|
to keep adjacent bitfields together, but with their bit
|
|
|
|
ordering randomized.
|
|
|
|
|
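For instance, given adjacent bitfields in a randomized struct (a
made-up example):

	struct flags {
		unsigned int a : 1;
		unsigned int b : 3;
		unsigned int c : 4;
	} __randomize_layout;

	/* GCC plugin: a, b and c may each become separate members,
	 * sometimes in different, padded storage units.
	 * Clang: a, b and c stay adjacent, but their bit ordering
	 * within the shared storage unit is randomized. */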
2022-05-03 13:55:00 -07:00
|
|
|
config RANDSTRUCT_PERFORMANCE
|
|
|
|
bool "Limit randomization of structure layout to cache-lines"
|
|
|
|
depends on GCC_PLUGINS
|
2024-09-28 11:13:13 -07:00
|
|
|
select MODVERSIONS if MODULES && !COMPILE_TEST
|
2022-05-03 13:55:00 -07:00
|
|
|
help
|
|
|
|
Randomization of sensitive kernel structures will make a
|
|
|
|
best effort at restricting randomization to cacheline-sized
|
|
|
|
groups of members. It also will not randomize bitfields
|
|
|
|
in structures. This reduces the performance hit of RANDSTRUCT
|
|
|
|
at the cost of weakened randomization.
|
|
|
|
endchoice
|
|
|
|
|
|
|
|
config RANDSTRUCT
|
|
|
|
def_bool !RANDSTRUCT_NONE
|
|
|
|
|
|
|
|
config GCC_PLUGIN_RANDSTRUCT
|
|
|
|
def_bool GCC_PLUGINS && RANDSTRUCT
|
|
|
|
help
|
|
|
|
Use GCC plugin to randomize structure layout.
|
|
|
|
|
|
|
|
This plugin was ported from grsecurity/PaX. More
|
|
|
|
information at:
|
|
|
|
* https://grsecurity.net/
|
|
|
|
* https://pax.grsecurity.net/
|
|
|
|
|
2019-04-10 08:23:44 -07:00
|
|
|
endmenu
|