// SPDX-License-Identifier: GPL-2.0-only
/*
 * ARMv8 single-step debug support and mdscr context switching.
 *
 * Copyright (C) 2012 ARM Limited
 *
 * Author: Will Deacon <will.deacon@arm.com>
 */

#include <linux/cpu.h>
#include <linux/debugfs.h>
#include <linux/hardirq.h>
#include <linux/init.h>
#include <linux/ptrace.h>
#include <linux/kprobes.h>
#include <linux/stat.h>
#include <linux/uaccess.h>
#include <linux/sched/task_stack.h>

#include <asm/cpufeature.h>
#include <asm/cputype.h>
#include <asm/daifflags.h>
#include <asm/debug-monitors.h>
#include <asm/exception.h>
#include <asm/kgdb.h>
#include <asm/kprobes.h>
#include <asm/system_misc.h>
#include <asm/traps.h>
#include <asm/uprobes.h>

/* Determine debug architecture. */
u8 debug_monitors_arch(void)
{
	return cpuid_feature_extract_unsigned_field(read_sanitised_ftr_reg(SYS_ID_AA64DFR0_EL1),
						ID_AA64DFR0_EL1_DebugVer_SHIFT);
}

/*
 * MDSCR access routines.
 */
static void mdscr_write(u64 mdscr)
{
	unsigned long flags;

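	/* Update MDSCR_EL1 with all DAIF exceptions masked on this CPU. */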
	flags = local_daif_save();
	write_sysreg(mdscr, mdscr_el1);
	local_daif_restore(flags);
}
NOKPROBE_SYMBOL(mdscr_write);

static u64 mdscr_read(void)
{
	return read_sysreg(mdscr_el1);
}
NOKPROBE_SYMBOL(mdscr_read);

/*
 * Allow root to disable self-hosted debug from userspace.
 * This is useful if you want to connect an external JTAG debugger.
 */
static bool debug_enabled = true;

static int create_debug_debugfs_entry(void)
{
	debugfs_create_bool("debug_enabled", 0644, NULL, &debug_enabled);
	return 0;
}
fs_initcall(create_debug_debugfs_entry);

static int __init early_debug_disable(char *buf)
{
	debug_enabled = false;
	return 0;
}

early_param("nodebugmon", early_debug_disable);

/*
 * Keep track of debug users on each core.
 * The ref counts are per-cpu, updated with this_cpu ops.
 */
static DEFINE_PER_CPU(int, mde_ref_count);
static DEFINE_PER_CPU(int, kde_ref_count);
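
/*
 * Take a reference on the debug monitors for this CPU. The first user
 * sets MDSCR_EL1.MDE; callers at EL1 additionally take a KDE reference
 * so that MDSCR_EL1.KDE stays set while any kernel-side user remains.
 */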
void enable_debug_monitors(enum dbg_active_el el)
{
	u64 mdscr, enable = 0;

	WARN_ON(preemptible());

	if (this_cpu_inc_return(mde_ref_count) == 1)
		enable = MDSCR_EL1_MDE;

	if (el == DBG_ACTIVE_EL1 &&
	    this_cpu_inc_return(kde_ref_count) == 1)
		enable |= MDSCR_EL1_KDE;

	if (enable && debug_enabled) {
		mdscr = mdscr_read();
		mdscr |= enable;
		mdscr_write(mdscr);
	}
}
NOKPROBE_SYMBOL(enable_debug_monitors);
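
/*
 * Drop a reference on the debug monitors for this CPU, clearing
 * MDSCR_EL1.MDE (and MDSCR_EL1.KDE for EL1 callers) once the last
 * reference goes away.
 */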
void disable_debug_monitors(enum dbg_active_el el)
{
	u64 mdscr, disable = 0;

	WARN_ON(preemptible());

	if (this_cpu_dec_return(mde_ref_count) == 0)
		disable = ~MDSCR_EL1_MDE;

	if (el == DBG_ACTIVE_EL1 &&
	    this_cpu_dec_return(kde_ref_count) == 0)
		disable &= ~MDSCR_EL1_KDE;

	if (disable) {
		mdscr = mdscr_read();
		mdscr &= disable;
		mdscr_write(mdscr);
	}
}
NOKPROBE_SYMBOL(disable_debug_monitors);

/*
 * OS lock clearing.
 */
static int clear_os_lock(unsigned int cpu)
{
	write_sysreg(0, osdlr_el1);
	write_sysreg(0, oslar_el1);
	isb();
	return 0;
}
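
/*
 * Register a hotplug callback so the OS lock is cleared on each CPU
 * as it comes online, keeping self-hosted debug usable.
 */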
static int __init debug_monitors_init(void)
{
	return cpuhp_setup_state(CPUHP_AP_ARM64_DEBUG_MONITORS_STARTING,
				 "arm64/debug_monitors:starting",
				 clear_os_lock, NULL);
}
postcore_initcall(debug_monitors_init);

/*
 * Single step API and exception handling.
 */
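
/*
 * With MDSCR_EL1.SS set, SPSR.SS controls when the next step exception
 * is taken after exception return: setting it lets one instruction run
 * first (active-not-pending), while clearing it leaves the step pending
 * so the exception is taken immediately.
 */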
static void set_user_regs_spsr_ss(struct user_pt_regs *regs)
{
	regs->pstate |= DBG_SPSR_SS;
}
NOKPROBE_SYMBOL(set_user_regs_spsr_ss);

static void clear_user_regs_spsr_ss(struct user_pt_regs *regs)
{
	regs->pstate &= ~DBG_SPSR_SS;
}
NOKPROBE_SYMBOL(clear_user_regs_spsr_ss);

#define set_regs_spsr_ss(r)	set_user_regs_spsr_ss(&(r)->user_regs)
#define clear_regs_spsr_ss(r)	clear_user_regs_spsr_ss(&(r)->user_regs)
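
/*
 * Deliver SIGTRAP to current for a debug trap taken from userspace,
 * reporting the address of the instruction that caused it.
 */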
static void send_user_sigtrap(int si_code)
{
	struct pt_regs *regs = current_pt_regs();

	if (WARN_ON(!user_mode(regs)))
		return;

	if (interrupts_enabled(regs))
		local_irq_enable();

	arm64_force_sig_fault(SIGTRAP, si_code, instruction_pointer(regs),
			      "User debug trap");
}

/*
 * We have already unmasked interrupts and enabled preemption
 * when calling do_el0_softstep() from entry-common.c.
 */
void do_el0_softstep(unsigned long esr, struct pt_regs *regs)
{
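	/* Give uprobes the first chance to consume the step exception. */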
	if (uprobe_single_step_handler(regs, esr) == DBG_HOOK_HANDLED)
		return;

	send_user_sigtrap(TRAP_TRACE);
	/*
	 * ptrace will disable single step unless explicitly
	 * asked to re-enable it. For other clients, it makes
	 * sense to leave it enabled (i.e. rewind the controls
	 * to the active-not-pending state).
	 */
	user_rewind_single_step(current);
}
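/*
 * Only kgdb single-steps kernel code through this path; an unhandled
 * step exception at EL1 is unexpected.
 */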
void do_el1_softstep(unsigned long esr, struct pt_regs *regs)
{
	if (kgdb_single_step_handler(regs, esr) == DBG_HOOK_HANDLED)
		return;

	pr_warn("Unexpected kernel single-step exception at EL1\n");
	/*
	 * Re-enable stepping since we know that we will be
	 * returning to regs.
	 */
	set_regs_spsr_ss(regs);
}
NOKPROBE_SYMBOL(do_el1_softstep);
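/*
 * Dispatch an EL1 BRK to the matching handler based on the ESR comment
 * immediate. Handlers are selected statically here rather than through
 * runtime hook registration; DBG_HOOK_ERROR is returned if nothing
 * claims the break.
 */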
static int call_el1_break_hook(struct pt_regs *regs, unsigned long esr)
{
	if (esr_brk_comment(esr) == BUG_BRK_IMM)
		return bug_brk_handler(regs, esr);

	if (IS_ENABLED(CONFIG_CFI_CLANG) && esr_is_cfi_brk(esr))
		return cfi_brk_handler(regs, esr);

	if (esr_brk_comment(esr) == FAULT_BRK_IMM)
		return reserved_fault_brk_handler(regs, esr);

	if (IS_ENABLED(CONFIG_KASAN_SW_TAGS) &&
	    (esr_brk_comment(esr) & ~KASAN_BRK_MASK) == KASAN_BRK_IMM)
		return kasan_brk_handler(regs, esr);

	if (IS_ENABLED(CONFIG_UBSAN_TRAP) && esr_is_ubsan_brk(esr))
		return ubsan_brk_handler(regs, esr);

	if (IS_ENABLED(CONFIG_KGDB)) {
		if (esr_brk_comment(esr) == KGDB_DYN_DBG_BRK_IMM)
			return kgdb_brk_handler(regs, esr);
		if (esr_brk_comment(esr) == KGDB_COMPILED_DBG_BRK_IMM)
			return kgdb_compiled_brk_handler(regs, esr);
	}

	if (IS_ENABLED(CONFIG_KPROBES)) {
		if (esr_brk_comment(esr) == KPROBES_BRK_IMM)
			return kprobe_brk_handler(regs, esr);
		if (esr_brk_comment(esr) == KPROBES_BRK_SS_IMM)
			return kprobe_ss_brk_handler(regs, esr);
	}

	if (IS_ENABLED(CONFIG_KRETPROBES) &&
	    esr_brk_comment(esr) == KRETPROBES_BRK_IMM)
		return kretprobe_brk_handler(regs, esr);

	return DBG_HOOK_ERROR;
}
NOKPROBE_SYMBOL(call_el1_break_hook);

/*
 * We have already unmasked interrupts and enabled preemption
 * when calling do_el0_brk64() from entry-common.c.
 */
void do_el0_brk64(unsigned long esr, struct pt_regs *regs)
{
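	/* Give uprobes the first chance to handle a BRK from userspace. */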
	if (IS_ENABLED(CONFIG_UPROBES) &&
	    esr_brk_comment(esr) == UPROBES_BRK_IMM &&
	    uprobe_brk_handler(regs, esr) == DBG_HOOK_HANDLED)
		return;
|
2016-11-02 14:40:44 +05:30
|
|
|
|
2025-07-07 12:41:07 +01:00
|
|
|
send_user_sigtrap(TRAP_BRKPT);
|
|
|
|
}
|
2013-03-16 08:48:13 +00:00
|
|
|
|
2025-07-07 12:41:07 +01:00
|
|
|
void do_el1_brk64(unsigned long esr, struct pt_regs *regs)
|
|
|
|
{
|
|
|
|
if (call_el1_break_hook(regs, esr) == DBG_HOOK_HANDLED)
|
|
|
|
return;
|
|
|
|
|
|
|
|
die("Oops - BRK", regs, esr);
|
2013-03-16 08:48:13 +00:00
|
|
|
}
|
2025-07-07 12:41:07 +01:00
|
|
|
NOKPROBE_SYMBOL(do_el1_brk64);
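For illustration, the static dispatch that `call_el1_break_hook()` is
described as performing could be sketched as below. The `example_*`
names are placeholders; only `esr_brk_comment()`, `BUG_BRK_IMM` and the
`DBG_HOOK_*` return values mirror existing code.

/*
 * Sketch of a static break-hook dispatch keyed on the BRK comment
 * immediate. example_* names are hypothetical.
 */
static int example_bug_brk_handler(struct pt_regs *regs, unsigned long esr)
{
	/* stub: a real handler would report the BUG() and fix up the PC */
	return DBG_HOOK_ERROR;
}

static int example_el1_break_dispatch(struct pt_regs *regs, unsigned long esr)
{
	switch (esr_brk_comment(esr)) {
	case BUG_BRK_IMM:
		return example_bug_brk_handler(regs, esr);
	default:
		return DBG_HOOK_ERROR;
	}
}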
|
2013-03-16 08:48:13 +00:00
|
|
|
|
2025-07-07 12:41:08 +01:00
|
|
|
#ifdef CONFIG_COMPAT
|
|
|
|
void do_bkpt32(unsigned long esr, struct pt_regs *regs)
|
|
|
|
{
|
|
|
|
arm64_notify_die("aarch32 BKPT", regs, SIGTRAP, TRAP_BRKPT, regs->pc, esr);
|
2013-03-16 08:48:13 +00:00
|
|
|
}
|
2025-07-07 12:41:08 +01:00
|
|
|
#endif /* CONFIG_COMPAT */
|
2013-03-16 08:48:13 +00:00
|
|
|
|
2025-07-07 12:40:58 +01:00
|
|
|
bool try_handle_aarch32_break(struct pt_regs *regs)
|
2013-03-16 08:48:13 +00:00
|
|
|
{
|
2013-11-28 12:07:23 +00:00
|
|
|
u32 arm_instr;
|
|
|
|
u16 thumb_instr;
|
2013-03-16 08:48:13 +00:00
|
|
|
bool bp = false;
|
|
|
|
void __user *pc = (void __user *)instruction_pointer(regs);
|
|
|
|
|
|
|
|
if (!compat_user_mode(regs))
|
2025-07-07 12:40:58 +01:00
|
|
|
return false;
|
2013-03-16 08:48:13 +00:00
|
|
|
|
|
|
|
if (compat_thumb_mode(regs)) {
|
|
|
|
/* get 16-bit Thumb instruction */
|
2017-06-28 16:55:52 +02:00
|
|
|
__le16 instr;
|
|
|
|
get_user(instr, (__le16 __user *)pc);
|
|
|
|
thumb_instr = le16_to_cpu(instr);
|
2013-11-28 12:07:23 +00:00
|
|
|
if (thumb_instr == AARCH32_BREAK_THUMB2_LO) {
|
2013-03-16 08:48:13 +00:00
|
|
|
/* get second half of 32-bit Thumb-2 instruction */
|
2017-06-28 16:55:52 +02:00
|
|
|
get_user(instr, (__le16 __user *)(pc + 2));
|
|
|
|
thumb_instr = le16_to_cpu(instr);
|
2013-11-28 12:07:23 +00:00
|
|
|
bp = thumb_instr == AARCH32_BREAK_THUMB2_HI;
|
2013-03-16 08:48:13 +00:00
|
|
|
} else {
|
2013-11-28 12:07:23 +00:00
|
|
|
bp = thumb_instr == AARCH32_BREAK_THUMB;
|
2013-03-16 08:48:13 +00:00
|
|
|
}
|
|
|
|
} else {
|
|
|
|
/* 32-bit ARM instruction */
|
2017-06-28 16:55:52 +02:00
|
|
|
__le32 instr;
|
|
|
|
get_user(instr, (__le32 __user *)pc);
|
|
|
|
arm_instr = le32_to_cpu(instr);
|
2013-11-28 12:07:23 +00:00
|
|
|
bp = (arm_instr & ~0xf0000000) == AARCH32_BREAK_ARM;
|
2013-03-16 08:48:13 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
if (!bp)
|
2025-07-07 12:40:58 +01:00
|
|
|
return false;
|
2013-03-16 08:48:13 +00:00
|
|
|
|
2016-02-10 16:05:28 +00:00
|
|
|
send_user_sigtrap(TRAP_BRKPT);
|
2025-07-07 12:40:58 +01:00
|
|
|
return true;
|
2012-03-05 11:49:33 +00:00
|
|
|
}
|
2025-07-07 12:40:58 +01:00
|
|
|
NOKPROBE_SYMBOL(try_handle_aarch32_break);
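For a user-space view of the path above: a compat (AArch32) task that
executes one of the break encodings checked in try_handle_aarch32_break()
receives SIGTRAP/TRAP_BRKPT rather than SIGILL. The program below is an
illustration only and assumes it is built as a 32-bit AArch32 binary.

/* Illustration only: AArch32 user program hitting AARCH32_BREAK_ARM. */
#include <signal.h>
#include <unistd.h>

static void on_trap(int sig)
{
	/* The PC is not advanced, so leave instead of returning. */
	_exit(0);
}

int main(void)
{
	signal(SIGTRAP, on_trap);
	/* A32 breakpoint encoding: matches AARCH32_BREAK_ARM above. */
	__asm__ volatile(".inst 0xe7f001f0");
	return 1;
}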
|
2012-03-05 11:49:33 +00:00
|
|
|
|
|
|
|
/* Re-enable single step for syscall restarting. */
|
|
|
|
void user_rewind_single_step(struct task_struct *task)
|
|
|
|
{
|
|
|
|
/*
|
|
|
|
* If single step is active for this thread, then set SPSR.SS
|
|
|
|
* to 1 to avoid returning to the active-pending state.
|
|
|
|
*/
|
2020-02-13 12:12:26 +00:00
|
|
|
if (test_tsk_thread_flag(task, TIF_SINGLESTEP))
|
2012-03-05 11:49:33 +00:00
|
|
|
set_regs_spsr_ss(task_pt_regs(task));
|
|
|
|
}
|
2016-07-08 12:35:49 -04:00
|
|
|
NOKPROBE_SYMBOL(user_rewind_single_step);
|
2012-03-05 11:49:33 +00:00
|
|
|
|
|
|
|
void user_fastforward_single_step(struct task_struct *task)
|
|
|
|
{
|
2020-02-13 12:12:26 +00:00
|
|
|
if (test_tsk_thread_flag(task, TIF_SINGLESTEP))
|
2012-03-05 11:49:33 +00:00
|
|
|
clear_regs_spsr_ss(task_pt_regs(task));
|
|
|
|
}
|
|
|
|
|
arm64: ptrace: Override SPSR.SS when single-stepping is enabled
Luis reports that, when reverse debugging with GDB, single-step does not
function as expected on arm64:
| I've noticed, under very specific conditions, that a PTRACE_SINGLESTEP
| request by GDB won't execute the underlying instruction. As a consequence,
| the PC doesn't move, but we return a SIGTRAP just like we would for a
| regular successful PTRACE_SINGLESTEP request.
The underlying problem is that when the CPU register state is restored
as part of a reverse step, the SPSR.SS bit is cleared and so the hardware
single-step state can transition to the "active-pending" state, causing
an unexpected step exception to be taken immediately if a step operation
is attempted.
In hindsight, we probably shouldn't have exposed SPSR.SS in the pstate
accessible by the GPR regset, but it's a bit late for that now. Instead,
simply prevent userspace from configuring the bit to a value which is
inconsistent with the TIF_SINGLESTEP state for the task being traced.
Cc: <stable@vger.kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Keno Fischer <keno@juliacomputing.com>
Link: https://lore.kernel.org/r/1eed6d69-d53d-9657-1fc9-c089be07f98c@linaro.org
Reported-by: Luis Machado <luis.machado@linaro.org>
Tested-by: Luis Machado <luis.machado@linaro.org>
Signed-off-by: Will Deacon <will@kernel.org>
2020-02-13 12:06:26 +00:00
|
|
|
void user_regs_reset_single_step(struct user_pt_regs *regs,
|
|
|
|
struct task_struct *task)
|
|
|
|
{
|
|
|
|
if (test_tsk_thread_flag(task, TIF_SINGLESTEP))
|
|
|
|
set_user_regs_spsr_ss(regs);
|
|
|
|
else
|
|
|
|
clear_user_regs_spsr_ss(regs);
|
|
|
|
}
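As a usage illustration (an assumption, not the actual ptrace.c code), a
GPR regset write path could apply the helper above so a tracer cannot
install an SPSR.SS value that contradicts TIF_SINGLESTEP:

/* Sketch only: example_gpr_set() is hypothetical. */
static int example_gpr_set(struct task_struct *target,
			   struct user_pt_regs *newregs)
{
	if (!valid_user_regs(newregs, target))
		return -EINVAL;

	/* Force SPSR.SS to agree with the task's TIF_SINGLESTEP state. */
	user_regs_reset_single_step(newregs, target);
	task_pt_regs(target)->user_regs = *newregs;
	return 0;
}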
|
|
|
|
|
2012-03-05 11:49:33 +00:00
|
|
|
/* Kernel API */
|
|
|
|
void kernel_enable_single_step(struct pt_regs *regs)
|
|
|
|
{
|
|
|
|
WARN_ON(!irqs_disabled());
|
|
|
|
set_regs_spsr_ss(regs);
|
2025-06-13 08:06:45 +05:30
|
|
|
mdscr_write(mdscr_read() | MDSCR_EL1_SS);
|
2012-03-05 11:49:33 +00:00
|
|
|
enable_debug_monitors(DBG_ACTIVE_EL1);
|
|
|
|
}
|
2016-07-08 12:35:49 -04:00
|
|
|
NOKPROBE_SYMBOL(kernel_enable_single_step);
|
2012-03-05 11:49:33 +00:00
|
|
|
|
|
|
|
void kernel_disable_single_step(void)
|
|
|
|
{
|
|
|
|
WARN_ON(!irqs_disabled());
|
2025-06-13 08:06:45 +05:30
|
|
|
mdscr_write(mdscr_read() & ~MDSCR_EL1_SS);
|
2012-03-05 11:49:33 +00:00
|
|
|
disable_debug_monitors(DBG_ACTIVE_EL1);
|
|
|
|
}
|
2016-07-08 12:35:49 -04:00
|
|
|
NOKPROBE_SYMBOL(kernel_disable_single_step);
|
2012-03-05 11:49:33 +00:00
|
|
|
|
|
|
|
int kernel_active_single_step(void)
|
|
|
|
{
|
|
|
|
WARN_ON(!irqs_disabled());
|
2025-06-13 08:06:45 +05:30
|
|
|
return mdscr_read() & MDSCR_EL1_SS;
|
2012-03-05 11:49:33 +00:00
|
|
|
}
|
2016-07-08 12:35:49 -04:00
|
|
|
NOKPROBE_SYMBOL(kernel_active_single_step);
|
2012-03-05 11:49:33 +00:00
|
|
|
|
arm64: kgdb: Set PSTATE.SS to 1 to re-enable single-step
Currently only the first attempt to single-step has any effect. After
that all further stepping remains "stuck" at the same program counter
value.
Refer to the ARM Architecture Reference Manual (ARM DDI 0487E.a) D2.12,
PSTATE.SS=1 should be set at each step before transferring the PE to the
'Active-not-pending' state. The problem here is that PSTATE.SS=1 is not
set from the second single-step onwards.
After the first single-step, the PE transfers to the 'Inactive' state,
with PSTATE.SS=0 and MDSCR.SS=1, thus PSTATE.SS won't be set to 1 because
kernel_active_single_step() is true. Then the PE transfers to the
'Active-pending' state on ERET and returns to the debugger with a step
exception.
Before this patch:
==================
Entering kdb (current=0xffff3376039f0000, pid 1) on processor 0 due to Keyboard Entry
[0]kdb>
[0]kdb>
[0]kdb> bp write_sysrq_trigger
Instruction(i) BP #0 at 0xffffa45c13d09290 (write_sysrq_trigger)
is enabled addr at ffffa45c13d09290, hardtype=0 installed=0
[0]kdb> go
$ echo h > /proc/sysrq-trigger
Entering kdb (current=0xffff4f7e453f8000, pid 175) on processor 1 due to Breakpoint @ 0xffffad651a309290
[1]kdb> ss
Entering kdb (current=0xffff4f7e453f8000, pid 175) on processor 1 due to SS trap @ 0xffffad651a309294
[1]kdb> ss
Entering kdb (current=0xffff4f7e453f8000, pid 175) on processor 1 due to SS trap @ 0xffffad651a309294
[1]kdb>
After this patch:
=================
Entering kdb (current=0xffff6851c39f0000, pid 1) on processor 0 due to Keyboard Entry
[0]kdb> bp write_sysrq_trigger
Instruction(i) BP #0 at 0xffffc02d2dd09290 (write_sysrq_trigger)
is enabled addr at ffffc02d2dd09290, hardtype=0 installed=0
[0]kdb> go
$ echo h > /proc/sysrq-trigger
Entering kdb (current=0xffff6851c53c1840, pid 174) on processor 1 due to Breakpoint @ 0xffffc02d2dd09290
[1]kdb> ss
Entering kdb (current=0xffff6851c53c1840, pid 174) on processor 1 due to SS trap @ 0xffffc02d2dd09294
[1]kdb> ss
Entering kdb (current=0xffff6851c53c1840, pid 174) on processor 1 due to SS trap @ 0xffffc02d2dd09298
[1]kdb> ss
Entering kdb (current=0xffff6851c53c1840, pid 174) on processor 1 due to SS trap @ 0xffffc02d2dd0929c
[1]kdb>
Fixes: 44679a4f142b ("arm64: KGDB: Add step debugging support")
Co-developed-by: Wei Li <liwei391@huawei.com>
Signed-off-by: Wei Li <liwei391@huawei.com>
Signed-off-by: Sumit Garg <sumit.garg@linaro.org>
Tested-by: Douglas Anderson <dianders@chromium.org>
Acked-by: Daniel Thompson <daniel.thompson@linaro.org>
Tested-by: Daniel Thompson <daniel.thompson@linaro.org>
Link: https://lore.kernel.org/r/20230202073148.657746-3-sumit.garg@linaro.org
Signed-off-by: Will Deacon <will@kernel.org>
2023-02-02 13:01:48 +05:30
|
|
|
void kernel_rewind_single_step(struct pt_regs *regs)
|
|
|
|
{
|
|
|
|
set_regs_spsr_ss(regs);
|
|
|
|
}
|
|
|
|
|
2024-09-30 17:10:48 +01:00
|
|
|
void kernel_fastforward_single_step(struct pt_regs *regs)
|
|
|
|
{
|
|
|
|
clear_regs_spsr_ss(regs);
|
|
|
|
}
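An illustrative sketch, based on the kgdb usage described in the commit
message above (not a verbatim copy), of how an in-kernel debugger might
drive this API, re-arming PSTATE.SS on each step:

/* Sketch only: caller must keep interrupts masked, per the WARN_ONs above. */
static void example_start_single_step(struct pt_regs *regs)
{
	lockdep_assert_irqs_disabled();

	if (!kernel_active_single_step())
		kernel_enable_single_step(regs);
	else
		kernel_rewind_single_step(regs);	/* set SPSR.SS again for the next step */
}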
|
|
|
|
|
2012-03-05 11:49:33 +00:00
|
|
|
/* ptrace API */
|
|
|
|
void user_enable_single_step(struct task_struct *task)
|
|
|
|
{
|
2016-08-26 11:36:39 +01:00
|
|
|
struct thread_info *ti = task_thread_info(task);
|
|
|
|
|
|
|
|
if (!test_and_set_ti_thread_flag(ti, TIF_SINGLESTEP))
|
|
|
|
set_regs_spsr_ss(task_pt_regs(task));
|
2012-03-05 11:49:33 +00:00
|
|
|
}
|
2016-07-08 12:35:49 -04:00
|
|
|
NOKPROBE_SYMBOL(user_enable_single_step);
|
2012-03-05 11:49:33 +00:00
|
|
|
|
|
|
|
void user_disable_single_step(struct task_struct *task)
|
|
|
|
{
|
|
|
|
clear_ti_thread_flag(task_thread_info(task), TIF_SINGLESTEP);
|
|
|
|
}
|
2016-07-08 12:35:49 -04:00
|
|
|
NOKPROBE_SYMBOL(user_disable_single_step);
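For completeness, a user-space sketch of the ptrace interface these
helpers back: a tracer issuing PTRACE_SINGLESTEP and waiting for the
resulting SIGTRAP stop (illustrative only).

/* Illustration only: minimal tracer-side single-step helper. */
#include <signal.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/wait.h>

static int step_once(pid_t child)
{
	int status;

	if (ptrace(PTRACE_SINGLESTEP, child, NULL, NULL) == -1)
		return -1;
	if (waitpid(child, &status, 0) == -1)
		return -1;
	/* The child stops with SIGTRAP once the stepped instruction retires. */
	return (WIFSTOPPED(status) && WSTOPSIG(status) == SIGTRAP) ? 0 : -1;
}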
|