/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
 *  PowerPC version
 *    Copyright (C) 1995-1996 Gary Thomas (gdt@linuxppc.org)
 *  Rewritten by Cort Dougan (cort@cs.nmt.edu) for PReP
 *    Copyright (C) 1996 Cort Dougan <cort@cs.nmt.edu>
 *  Adapted for Power Macintosh by Paul Mackerras.
 *  Low-level exception handlers and MMU support
 *  rewritten by Paul Mackerras.
 *    Copyright (C) 1996 Paul Mackerras.
 *  MPC8xx modifications Copyright (C) 1997 Dan Malek (dmalek@jlc.net).
 *
 *  This file contains the system call entry code, context switch
 *  code, and exception/interrupt return code for PowerPC.
 */
#include <linux/errno.h>
#include <linux/err.h>
#include <asm/unistd.h>
#include <asm/processor.h>
#include <asm/page.h>
#include <asm/mmu.h>
#include <asm/thread_info.h>
#include <asm/code-patching-asm.h>
#include <asm/ppc_asm.h>
#include <asm/asm-offsets.h>
#include <asm/cputable.h>
#include <asm/firmware.h>
#include <asm/bug.h>
#include <asm/ptrace.h>
#include <asm/irqflags.h>
#include <asm/hw_irq.h>
#include <asm/context_tracking.h>
#include <asm/tm.h>
#include <asm/ppc-opcode.h>
#include <asm/barrier.h>
#include <asm/export.h>
#include <asm/asm-compat.h>
#ifdef CONFIG_PPC_BOOK3S
#include <asm/exception-64s.h>
#else
#include <asm/exception-64e.h>
#endif
#include <asm/feature-fixups.h>
#include <asm/kup.h>

/*
 * System calls.
 */
	.section	".toc","aw"
SYS_CALL_TABLE:
	.tc sys_call_table[TC],sys_call_table

COMPAT_SYS_CALL_TABLE:
	.tc compat_sys_call_table[TC],compat_sys_call_table

/* This value is used to mark exception frames on the stack. */
exception_marker:
	.tc	ID_EXC_MARKER[TC],STACK_FRAME_REGS_MARKER

	.section	".text"
	.align 7

	.globl system_call_common
system_call_common:
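	/*
	 * Entry state, as implied by the stores below: r0 = syscall
	 * number, r3-r8 = syscall arguments, r11 = caller's NIP, r12 =
	 * caller's MSR (both from the exception prologue), r13 = PACA,
	 * r9 = caller's r13, r10 = scratch, and r1 still points at the
	 * caller's stack until PACAKSAVE is loaded.
	 */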
#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
BEGIN_FTR_SECTION
	extrdi.	r10, r12, 1, (63-MSR_TS_T_LG) /* transaction active? */
	bne	.Ltabort_syscall
END_FTR_SECTION_IFSET(CPU_FTR_TM)
#endif
_ASM_NOKPROBE_SYMBOL(system_call_common)
	mr	r10,r1
	ld	r1,PACAKSAVE(r13)
	std	r10,0(r1)
	std	r11,_NIP(r1)
	std	r12,_MSR(r1)
	std	r0,GPR0(r1)
	std	r10,GPR1(r1)
	std	r2,GPR2(r1)
#ifdef CONFIG_PPC_FSL_BOOK3E
START_BTB_FLUSH_SECTION
	BTB_FLUSH(r10)
END_BTB_FLUSH_SECTION
#endif
	ld	r2,PACATOC(r13)
	mfcr	r12
	li	r11,0
	/* Can we avoid saving r3-r8 in common case? */
	std	r3,GPR3(r1)
	std	r4,GPR4(r1)
	std	r5,GPR5(r1)
	std	r6,GPR6(r1)
	std	r7,GPR7(r1)
	std	r8,GPR8(r1)
	/* Zero r9-r12, this should only be required when restoring all GPRs */
	std	r11,GPR9(r1)
	std	r11,GPR10(r1)
	std	r11,GPR11(r1)
	std	r11,GPR12(r1)
	std	r9,GPR13(r1)
	SAVE_NVGPRS(r1)
	std	r11,_XER(r1)
	std	r11,_CTR(r1)
	mflr	r10

	/*
	 * This clears CR0.SO (bit 28), which is the error indication on
	 * return from this system call.
	 */
	rldimi	r12,r11,28,(63-28)
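	/*
	 * With SH=28 and MB=63-28, the rldimi above inserts exactly one
	 * bit of r11 (zero here) into r12: bit 35 of the 64-bit image,
	 * which is the SO bit of CR0 in the saved CR.
	 */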
	li	r11,0xc00
	std	r10,_LINK(r1)
	std	r11,_TRAP(r1)
	std	r12,_CCR(r1)
	std	r3,ORIG_GPR3(r1)
	addi	r10,r1,STACK_FRAME_OVERHEAD
	ld	r11,exception_marker@toc(r2)
	std	r11,-16(r10)		/* "regshere" marker */
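	/*
	 * The exception_marker word stored just below STACK_FRAME_OVERHEAD
	 * lets stack walkers recognise this frame as one holding a
	 * struct pt_regs (see the marker definition in the .toc above).
	 */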

	/* Calling convention has r9 = orig r0, r10 = regs */
	mr	r9,r0
	bl	system_call_exception

.Lsyscall_exit:
	addi	r4,r1,STACK_FRAME_OVERHEAD
	bl	syscall_exit_prepare

	ld	r2,_CCR(r1)
	ld	r4,_NIP(r1)
	ld	r5,_MSR(r1)
	ld	r6,_LINK(r1)

BEGIN_FTR_SECTION
	stdcx.	r0,0,r1			/* to clear the reservation */
END_FTR_SECTION_IFCLR(CPU_FTR_STCX_CHECKS_ADDRESS)

	mtspr	SPRN_SRR0,r4
	mtspr	SPRN_SRR1,r5
	mtlr	r6

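	/*
	 * Per the test below, a non-zero return from syscall_exit_prepare
	 * selects the slow path that restores the full register state
	 * (e.g. for tracing or signal delivery) instead of only the
	 * registers the C ABI lets us clobber.
	 */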
	cmpdi	r3,0
	bne	.Lsyscall_restore_regs
.Lsyscall_restore_regs_cont:

BEGIN_FTR_SECTION
	HMT_MEDIUM_LOW
END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR)

	/*
	 * We don't need to restore AMR on the way back to userspace for KUAP.
	 * The value of AMR only matters while we're in the kernel.
	 */
	mtcr	r2
	ld	r2,GPR2(r1)
	ld	r3,GPR3(r1)
	ld	r13,GPR13(r1)
	ld	r1,GPR1(r1)
	RFI_TO_USER
	b	.	/* prevent speculative execution */

.Lsyscall_restore_regs:
	ld	r3,_CTR(r1)
	ld	r4,_XER(r1)
	REST_NVGPRS(r1)
	mtctr	r3
	mtspr	SPRN_XER,r4
	ld	r0,GPR0(r1)
	REST_8GPRS(4, r1)
	ld	r12,GPR12(r1)
	b	.Lsyscall_restore_regs_cont

#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
.Ltabort_syscall:
	/* Firstly we need to enable TM in the kernel */
	mfmsr	r10
	li	r9, 1
	rldimi	r10, r9, MSR_TM_LG, 63-MSR_TM_LG
	mtmsrd	r10, 0

	/* tabort, this dooms the transaction, nothing else */
	li	r9, (TM_CAUSE_SYSCALL|TM_CAUSE_PERSISTENT)
	TABORT(R9)

	/*
	 * Return directly to userspace. We have corrupted user register state,
	 * but userspace will never see that register state. Execution will
	 * resume after the tbegin of the aborted transaction with the
	 * checkpointed register state.
	 */
	li	r9, MSR_RI
	andc	r10, r10, r9
	mtmsrd	r10, 1
	mtspr	SPRN_SRR0, r11
	mtspr	SPRN_SRR1, r12
	RFI_TO_USER
	b	.	/* prevent speculative execution */
#endif

_GLOBAL(ret_from_fork)
	bl	schedule_tail
	REST_NVGPRS(r1)
	li	r3,0
	b	.Lsyscall_exit

_GLOBAL(ret_from_kernel_thread)
	bl	schedule_tail
	REST_NVGPRS(r1)
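	/*
	 * For kernel threads, copy_thread() appears to stash the thread
	 * function in r14 and its argument in r15; under the ELFv2 ABI
	 * the callee also expects its own entry address in r12 so it can
	 * establish the TOC pointer.
	 */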
	mtlr	r14
	mr	r3,r15
#ifdef PPC64_ELF_ABI_v2
	mr	r12,r14
#endif
	blrl
	li	r3,0
	b	.Lsyscall_exit

/* Save non-volatile GPRs, if not already saved. */
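/*
 * Convention, per the checks below: the low bit of the trap number in
 * _TRAP(r1) is set while the NVGPRs are unsaved and is cleared once
 * they have been saved.
 */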
_GLOBAL(save_nvgprs)
	ld	r11,_TRAP(r1)
	andi.	r0,r11,1
	beqlr-
	SAVE_NVGPRS(r1)
	clrrdi	r0,r11,1
	std	r0,_TRAP(r1)
	blr
_ASM_NOKPROBE_SYMBOL(save_nvgprs);

#ifdef CONFIG_PPC_BOOK3S_64

#define FLUSH_COUNT_CACHE	\
1:	nop;			\
	patch_site 1b, patch__call_flush_count_cache
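/*
 * The "1: nop" above is a patch site: patch_site records its address
 * under patch__call_flush_count_cache so that, when the mitigation is
 * enabled at boot, the nop can be patched into a branch to
 * flush_count_cache below.
 */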

#define BCCTR_FLUSH	.long 0x4c400420
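/*
 * 0x4c400420 decodes as bcctr with BO=2 (an otherwise-invalid bcctr
 * form), apparently repurposed as the count cache flush trigger; it is
 * emitted as a raw .long presumably because assemblers reject the
 * invalid mnemonic form.
 */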

.macro nops number
	.rept \number
	nop
	.endr
.endm

.balign 32
.global flush_count_cache
flush_count_cache:
	/* Save LR into r9 */
	mflr	r9

	// Flush the link stack
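	// Each bl .+4 below pushes an entry onto the link stack (the
	// return-address predictor), so 64 of them displace any
	// user-influenced entries; LR itself is restored from r9 after.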
	.rept 64
	bl	.+4
	.endr
	b	1f
	nops	6

	.balign 32
	/* Restore LR */
1:	mtlr	r9

	// If we're just flushing the link stack, return here
3:	nop
	patch_site 3b patch__flush_link_stack_return

	li	r9,0x7fff
	mtctr	r9

	BCCTR_FLUSH

2:	nop
	patch_site 2b patch__flush_count_cache_return

	nops	3

	.rept 278
	.balign 32
	BCCTR_FLUSH
	nops	7
	.endr

	blr
#else
#define FLUSH_COUNT_CACHE
#endif /* CONFIG_PPC_BOOK3S_64 */

/*
 * This routine switches between two different tasks.  The process
 * state of one is saved on its kernel stack.  Then the state
 * of the other is restored from its kernel stack.  The memory
 * management hardware is updated to the second process's state.
 * Finally, we can return to the second process, via ret_from_except.
 * On entry, r3 points to the THREAD for the current task, r4
 * points to the THREAD for the new task.
 *
 * Note: there are two ways to get to the "going out" portion
 * of this code; either by coming in via the entry (_switch)
 * or via "fork" which must set up an environment equivalent
 * to the "_switch" path.  If you change this you'll have to change
 * the fork code also.
 *
 * The code which creates the new task context is in 'copy_thread'
 * in arch/powerpc/kernel/process.c
 */
	.align	7
_GLOBAL(_switch)
	mflr	r0
	std	r0,16(r1)
	stdu	r1,-SWITCH_FRAME_SIZE(r1)
	/* r3-r13 are caller saved -- Cort */
	SAVE_NVGPRS(r1)
	std	r0,_NIP(r1)	/* Return to switch caller */
	mfcr	r23
	std	r23,_CCR(r1)
	std	r1,KSP(r3)	/* Set old stack pointer */

	kuap_check_amr r9, r10

	FLUSH_COUNT_CACHE

	/*
	 * On SMP kernels, care must be taken because a task may be
	 * scheduled off CPUx and on to CPUy. Memory ordering must be
	 * considered.
	 *
	 * Cacheable stores on CPUx will be visible when the task is
	 * scheduled on CPUy by virtue of the core scheduler barriers
	 * (see "Notes on Program-Order guarantees on SMP systems." in
	 * kernel/sched/core.c).
	 *
	 * Uncacheable stores in the case of involuntary preemption must
	 * be taken care of. The smp_mb__before_spin_lock() in __schedule()
	 * is implemented as hwsync on powerpc, which orders MMIO too. So
	 * long as there is an hwsync in the context switch path, it will
	 * be executed on the source CPU after the task has performed
	 * all MMIO ops on that CPU, and on the destination CPU before the
	 * task performs any MMIO ops there.
	 */

	/*
	 * The kernel context switch path must contain a spin_lock,
	 * which contains larx/stcx, which will clear any reservation
	 * of the task being switched.
	 */
#ifdef CONFIG_PPC_BOOK3S
/* Cancel all explicit user streams as they will have no use after context
 * switch and will stop the HW from creating streams itself
 */
	DCBT_BOOK3S_STOP_ALL_STREAM_IDS(r6)
#endif

	addi	r6,r4,-THREAD	/* Convert THREAD to 'current' */
	std	r6,PACACURRENT(r13)	/* Set new 'current' */
#if defined(CONFIG_STACKPROTECTOR)
	ld	r6, TASK_CANARY(r6)
	std	r6, PACA_CANARY(r13)
#endif

	ld	r8,KSP(r4)	/* new stack pointer */
#ifdef CONFIG_PPC_BOOK3S_64
BEGIN_MMU_FTR_SECTION
	b	2f
END_MMU_FTR_SECTION_IFSET(MMU_FTR_TYPE_RADIX)
BEGIN_FTR_SECTION
	clrrdi	r6,r8,28	/* get its ESID */
	clrrdi	r9,r1,28	/* get current sp ESID */
FTR_SECTION_ELSE
	clrrdi	r6,r8,40	/* get its 1T ESID */
	clrrdi	r9,r1,40	/* get current sp 1T ESID */
ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_1T_SEGMENT)
	clrldi.	r0,r6,2		/* is new ESID c00000000? */
	cmpd	cr1,r6,r9	/* or is new ESID the same as current ESID? */
	cror	eq,4*cr1+eq,eq
	beq	2f		/* if yes, don't slbie it */

	/* Bolt in the new stack SLB entry */
	ld	r7,KSP_VSID(r4)	/* Get new stack's VSID */
	oris	r0,r6,(SLB_ESID_V)@h
	ori	r0,r0,(SLB_NUM_BOLTED-1)@l
BEGIN_FTR_SECTION
	li	r9,MMU_SEGSIZE_1T	/* insert B field */
	oris	r6,r6,(MMU_SEGSIZE_1T << SLBIE_SSIZE_SHIFT)@h
	rldimi	r7,r9,SLB_VSID_SSIZE_SHIFT,0
END_MMU_FTR_SECTION_IFSET(MMU_FTR_1T_SEGMENT)

	/* Update the last bolted SLB.  No write barriers are needed
	 * here, provided we only update the current CPU's SLB shadow
	 * buffer.
	 */
	ld	r9,PACA_SLBSHADOWPTR(r13)
	li	r12,0
	std	r12,SLBSHADOW_STACKESID(r9)	/* Clear ESID */
	li	r12,SLBSHADOW_STACKVSID
	STDX_BE	r7,r12,r9			/* Save VSID */
	li	r12,SLBSHADOW_STACKESID
	STDX_BE	r0,r12,r9			/* Save ESID */
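	/*
	 * The update order above looks deliberate: the ESID word carries
	 * the valid bit, so clearing it first and rewriting it last
	 * ensures the shadow entry is never valid with a stale VSID.
	 */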

	/* No need to check for MMU_FTR_NO_SLBIE_B here, since when
	 * we have 1TB segments, the only CPUs known to have the errata
	 * only support less than 1TB of system memory and we'll never
	 * actually hit this code path.
	 */
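
	/*
	 * Per the ISA, a context synchronizing instruction is required
	 * before slbie/slbmte so that all preceding instructions are
	 * fetched and executed in the old context and preceding data
	 * accesses have reported any exceptions; hence the isync below.
	 */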
	isync
	slbie	r6
BEGIN_FTR_SECTION
	slbie	r6		/* Workaround POWER5 < DD2.1 issue */
END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_207S)
	slbmte	r7,r0
	isync
2:
#endif /* CONFIG_PPC_BOOK3S_64 */

	clrrdi	r7, r8, THREAD_SHIFT	/* base of new stack */
	/* Note: this uses SWITCH_FRAME_SIZE rather than INT_FRAME_SIZE
	   because we don't need to leave the 288-byte ABI gap at the
	   top of the kernel stack. */
	addi	r7,r7,THREAD_SIZE-SWITCH_FRAME_SIZE

	/*
	 * PMU interrupts in radix may come in here. They will use r1, not
	 * PACAKSAVE, so this stack switch will not cause a problem. They
	 * will store to the process stack, which may then be migrated to
	 * another CPU. However the rq lock release on this CPU paired with
	 * the rq lock acquire on the new CPU before the stack becomes
	 * active on the new CPU, will order those stores.
	 */
	mr	r1,r8		/* start using new stack pointer */
	std	r7,PACAKSAVE(r13)

	ld	r6,_CCR(r1)
	mtcrf	0xFF,r6

	/* r3-r13 are destroyed -- Cort */
	REST_NVGPRS(r1)

	/* convert old thread to its task_struct for return value */
	addi	r3,r3,-THREAD
	ld	r7,_NIP(r1)	/* Return to _switch caller in new task */
	mtlr	r7
	addi	r1,r1,SWITCH_FRAME_SIZE
	blr

	.align	7
_GLOBAL(ret_from_except)
	ld	r11,_TRAP(r1)
	andi.	r0,r11,1
	bne	ret_from_except_lite
	REST_NVGPRS(r1)

_GLOBAL(ret_from_except_lite)
	/*
	 * Disable interrupts so that current_thread_info()->flags
	 * can't change between when we test it and when we return
	 * from the interrupt.
	 */
#ifdef CONFIG_PPC_BOOK3E
	wrteei	0
#else
	li	r10,MSR_RI
	mtmsrd	r10,1		  /* Update machine state */
#endif /* CONFIG_PPC_BOOK3E */

	ld	r9, PACA_THREAD_INFO(r13)
	ld	r3,_MSR(r1)
#ifdef CONFIG_PPC_BOOK3E
	ld	r10,PACACURRENT(r13)
#endif /* CONFIG_PPC_BOOK3E */
	ld	r4,TI_FLAGS(r9)
	andi.	r3,r3,MSR_PR
	beq	resume_kernel
#ifdef CONFIG_PPC_BOOK3E
	lwz	r3,(THREAD+THREAD_DBCR0)(r10)
#endif /* CONFIG_PPC_BOOK3E */

	/* Check current_thread_info()->flags */
	andi.	r0,r4,_TIF_USER_WORK_MASK
	bne	1f
#ifdef CONFIG_PPC_BOOK3E
	/*
	 * Check to see if the dbcr0 register is set up to debug.
	 * Use the internal debug mode bit to do this.
	 */
	andis.	r0,r3,DBCR0_IDM@h
	beq	restore
	mfmsr	r0
	rlwinm	r0,r0,0,~MSR_DE	/* Clear MSR.DE */
	mtmsr	r0
	mtspr	SPRN_DBCR0,r3
	li	r10, -1
	mtspr	SPRN_DBSR,r10
	b	restore
#else
	addi	r3,r1,STACK_FRAME_OVERHEAD
	bl	restore_math
	b	restore
#endif
1:	andi.	r0,r4,_TIF_NEED_RESCHED
	beq	2f
	bl	restore_interrupts
	SCHEDULE_USER
	b	ret_from_except_lite
2:
#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
	andi.	r0,r4,_TIF_USER_WORK_MASK & ~_TIF_RESTORE_TM
	bne	3f		/* only restore TM if nothing else to do */
	addi	r3,r1,STACK_FRAME_OVERHEAD
	bl	restore_tm_state
	b	restore
3:
#endif
	bl	save_nvgprs
	/*
	 * Use a non volatile GPR to save and restore our thread_info flags
	 * across the call to restore_interrupts.
	 */
	mr	r30,r4
	bl	restore_interrupts
	mr	r4,r30
	addi	r3,r1,STACK_FRAME_OVERHEAD
	bl	do_notify_resume
	b	ret_from_except

resume_kernel:
	/* check current_thread_info, _TIF_EMULATE_STACK_STORE */
	andis.	r8,r4,_TIF_EMULATE_STACK_STORE@h
	beq+	1f
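
	/*
	 * An emulated "stdu r1" (e.g. from probe single-step emulation)
	 * cannot safely store to the stack while this exception frame is
	 * live on it, so the emulation sets _TIF_EMULATE_STACK_STORE
	 * instead; here we rebuild the frame lower down and complete the
	 * deferred store.
	 */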
	addi	r8,r1,INT_FRAME_SIZE	/* Get the kprobed function entry */

	ld	r3,GPR1(r1)
	subi	r3,r3,INT_FRAME_SIZE	/* dst: Allocate a trampoline exception frame */
	mr	r4,r1			/* src:  current exception frame */
	mr	r1,r3			/* Reroute the trampoline frame to r1 */

	/* Copy from the original to the trampoline. */
	li	r5,INT_FRAME_SIZE/8	/* size: INT_FRAME_SIZE */
	li	r6,0			/* start offset: 0 */
	mtctr	r5
2:	ldx	r0,r6,r4
	stdx	r0,r6,r3
	addi	r6,r6,8
	bdnz	2b

	/* Do real store operation to complete stdu */
	ld	r5,GPR1(r1)
	std	r8,0(r5)

	/* Clear _TIF_EMULATE_STACK_STORE flag */
	lis	r11,_TIF_EMULATE_STACK_STORE@h
	addi	r5,r9,TI_FLAGS
0:	ldarx	r4,0,r5
	andc	r4,r4,r11
	stdcx.	r4,0,r5
	bne-	0b
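	/*
	 * The ldarx/stdcx. loop above clears the flag atomically, as
	 * TI_FLAGS can be updated concurrently from other contexts.
	 */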
|
|
|
|
1:
|
|
|
|
|
2019-10-24 18:04:58 +02:00
|
|
|
#ifdef CONFIG_PREEMPTION
|
ppc64: fix missing to check all bits of _TIF_USER_WORK_MASK in preempt
In entry_64.S version of ret_from_except_lite, you'll notice that
in the !preempt case, after we've checked MSR_PR we test for any
TIF flag in _TIF_USER_WORK_MASK to decide whether to go to do_work
or not. However, in the preempt case, we do a convoluted trick to
test SIGPENDING only if PR was set and always test NEED_RESCHED ...
but we forget to test any other bit of _TIF_USER_WORK_MASK !!! So
that means that with preempt, we completely fail to test for things
like single step, syscall tracing, etc...
This should be fixed as the following path:
- Test PR. If not set, go to resume_kernel, else continue.
- If go resume_kernel, to do that original do_work.
- If else, then always test for _TIF_USER_WORK_MASK to decide to do
that original user_work, else restore directly.
Signed-off-by: Tiejun Chen <tiejun.chen@windriver.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2012-06-06 20:56:43 +00:00
|
|
|
/* Check if we need to preempt */
|
|
|
|
andi. r0,r4,_TIF_NEED_RESCHED
|
|
|
|
beq+ restore
|
|
|
|
/* Check that preempt_count() == 0 and interrupts are enabled */
|
|
|
|
lwz r8,TI_PREEMPT(r9)
|
powerpc/64: Change soft_enabled from flag to bitmask
"paca->soft_enabled" is used as a flag to mask some of interrupts.
Currently supported flags values and their details:
soft_enabled MSR[EE]
0 0 Disabled (PMI and HMI not masked)
1 1 Enabled
"paca->soft_enabled" is initialized to 1 to make the interripts as
enabled. arch_local_irq_disable() will toggle the value when
interrupts needs to disbled. At this point, the interrupts are not
actually disabled, instead, interrupt vector has code to check for the
flag and mask it when it occurs. By "mask it", it update interrupt
paca->irq_happened and return. arch_local_irq_restore() is called to
re-enable interrupts, which checks and replays interrupts if any
occured.
Now, as mentioned, current logic doesnot mask "performance monitoring
interrupts" and PMIs are implemented as NMI. But this patchset depends
on local_irq_* for a successful local_* update. Meaning, mask all
possible interrupts during local_* update and replay them after the
update.
So the idea here is to reserve the "paca->soft_enabled" logic. New
values and details:
soft_enabled MSR[EE]
1 0 Disabled (PMI and HMI not masked)
0 1 Enabled
Reason for the this change is to create foundation for a third mask
value "0x2" for "soft_enabled" to add support to mask PMIs. When
->soft_enabled is set to a value "3", PMI interrupts are mask and when
set to a value of "1", PMI are not mask. With this patch also extends
soft_enabled as interrupt disable mask.
Current flags are renamed from IRQ_[EN?DIS}ABLED to
IRQS_ENABLED and IRQS_DISABLED.
Patch also fixes the ptrace call to force the user to see the softe
value to be alway 1. Reason being, even though userspace has no
business knowing about softe, it is part of pt_regs. Like-wise in
signal context.
Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2017-12-20 09:25:49 +05:30
|
|
|
cmpwi cr0,r8,0
|
|
|
|
bne restore
|
ppc64: fix missing to check all bits of _TIF_USER_WORK_MASK in preempt
In entry_64.S version of ret_from_except_lite, you'll notice that
in the !preempt case, after we've checked MSR_PR we test for any
TIF flag in _TIF_USER_WORK_MASK to decide whether to go to do_work
or not. However, in the preempt case, we do a convoluted trick to
test SIGPENDING only if PR was set and always test NEED_RESCHED ...
but we forget to test any other bit of _TIF_USER_WORK_MASK !!! So
that means that with preempt, we completely fail to test for things
like single step, syscall tracing, etc...
This should be fixed as the following path:
- Test PR. If not set, go to resume_kernel, else continue.
- If go resume_kernel, to do that original do_work.
- If else, then always test for _TIF_USER_WORK_MASK to decide to do
that original user_work, else restore directly.
Signed-off-by: Tiejun Chen <tiejun.chen@windriver.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2012-06-06 20:56:43 +00:00
|
|
|
ld r0,SOFTE(r1)
|
powerpc/64: Change soft_enabled from flag to bitmask
"paca->soft_enabled" is used as a flag to mask some of interrupts.
Currently supported flags values and their details:
soft_enabled MSR[EE]
0 0 Disabled (PMI and HMI not masked)
1 1 Enabled
"paca->soft_enabled" is initialized to 1 to make the interripts as
enabled. arch_local_irq_disable() will toggle the value when
interrupts needs to disbled. At this point, the interrupts are not
actually disabled, instead, interrupt vector has code to check for the
flag and mask it when it occurs. By "mask it", it update interrupt
paca->irq_happened and return. arch_local_irq_restore() is called to
re-enable interrupts, which checks and replays interrupts if any
occured.
Now, as mentioned, current logic doesnot mask "performance monitoring
interrupts" and PMIs are implemented as NMI. But this patchset depends
on local_irq_* for a successful local_* update. Meaning, mask all
possible interrupts during local_* update and replay them after the
update.
So the idea here is to reserve the "paca->soft_enabled" logic. New
values and details:
soft_enabled MSR[EE]
1 0 Disabled (PMI and HMI not masked)
0 1 Enabled
Reason for the this change is to create foundation for a third mask
value "0x2" for "soft_enabled" to add support to mask PMIs. When
->soft_enabled is set to a value "3", PMI interrupts are mask and when
set to a value of "1", PMI are not mask. With this patch also extends
soft_enabled as interrupt disable mask.
Current flags are renamed from IRQ_[EN?DIS}ABLED to
IRQS_ENABLED and IRQS_DISABLED.
Patch also fixes the ptrace call to force the user to see the softe
value to be alway 1. Reason being, even though userspace has no
business knowing about softe, it is part of pt_regs. Like-wise in
signal context.
Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2017-12-20 09:25:49 +05:30
|
|
|
andi. r0,r0,IRQS_DISABLED
|
ppc64: fix missing to check all bits of _TIF_USER_WORK_MASK in preempt
In entry_64.S version of ret_from_except_lite, you'll notice that
in the !preempt case, after we've checked MSR_PR we test for any
TIF flag in _TIF_USER_WORK_MASK to decide whether to go to do_work
or not. However, in the preempt case, we do a convoluted trick to
test SIGPENDING only if PR was set and always test NEED_RESCHED ...
but we forget to test any other bit of _TIF_USER_WORK_MASK !!! So
that means that with preempt, we completely fail to test for things
like single step, syscall tracing, etc...
This should be fixed as the following path:
- Test PR. If not set, go to resume_kernel, else continue.
- If go resume_kernel, to do that original do_work.
- If else, then always test for _TIF_USER_WORK_MASK to decide to do
that original user_work, else restore directly.
Signed-off-by: Tiejun Chen <tiejun.chen@windriver.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2012-06-06 20:56:43 +00:00
|
|
|
bne restore
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Here we are preempting the current task. We want to make
|
2013-07-16 11:09:30 +08:00
|
|
|
* sure we are soft-disabled first and reconcile irq state.
|
ppc64: fix missing to check all bits of _TIF_USER_WORK_MASK in preempt
In entry_64.S version of ret_from_except_lite, you'll notice that
in the !preempt case, after we've checked MSR_PR we test for any
TIF flag in _TIF_USER_WORK_MASK to decide whether to go to do_work
or not. However, in the preempt case, we do a convoluted trick to
test SIGPENDING only if PR was set and always test NEED_RESCHED ...
but we forget to test any other bit of _TIF_USER_WORK_MASK !!! So
that means that with preempt, we completely fail to test for things
like single step, syscall tracing, etc...
This should be fixed as the following path:
- Test PR. If not set, go to resume_kernel, else continue.
- If go resume_kernel, to do that original do_work.
- If else, then always test for _TIF_USER_WORK_MASK to decide to do
that original user_work, else restore directly.
Signed-off-by: Tiejun Chen <tiejun.chen@windriver.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2012-06-06 20:56:43 +00:00
|
|
|
*/
|
2013-07-16 11:09:30 +08:00
|
|
|
RECONCILE_IRQ_STATE(r3,r4)
|
2019-03-11 22:47:46 +00:00
|
|
|
bl preempt_schedule_irq
|
2013-01-06 00:49:34 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
* arch_local_irq_restore() from preempt_schedule_irq above may
|
|
|
|
* enable hard interrupt but we really should disable interrupts
|
|
|
|
* when we return from the interrupt, and so that we don't get
|
|
|
|
* interrupted after loading SRR0/1.
|
|
|
|
*/
|
|
|
|
#ifdef CONFIG_PPC_BOOK3E
|
|
|
|
wrteei 0
|
|
|
|
#else
|
2016-09-15 19:04:46 +10:00
|
|
|
li r10,MSR_RI
|
2013-01-06 00:49:34 +00:00
|
|
|
mtmsrd r10,1 /* Update machine state */
|
|
|
|
#endif /* CONFIG_PPC_BOOK3E */
|
2019-10-24 18:04:58 +02:00
|
|
|
#endif /* CONFIG_PREEMPTION */
|
2005-10-10 22:36:14 +10:00
|
|
|
|
powerpc: Rework lazy-interrupt handling
The current implementation of lazy interrupt handling has some
issues that this tries to address.
We don't do the various workarounds we need to do when re-enabling
interrupts in some cases, such as when returning from an interrupt,
and thus we may still lose, or get delayed, decrementer or doorbell
interrupts.
The current scheme also makes it much harder to handle the external
"edge" interrupts provided by some BookE processors when using the
EPR facility (External Proxy) and the Freescale Hypervisor.
Additionally, we tend to keep interrupts hard disabled in a number
of cases, such as decrementer interrupts, external interrupts, or
when a masked decrementer interrupt is pending. This is sub-optimal.
This is an attempt at fixing it all in one go by reworking the way
we do the lazy interrupt disabling from the ground up.
The basic idea is to replace the "hard_enabled" field with an
"irq_happened" field in which we store a bit mask of which interrupts
occurred while soft-disabled.
When re-enabling, either via arch_local_irq_restore() or when returning
from an interrupt, we can now decide what to do by testing bits in that
field.
We then implement replaying of the missed interrupts either by
re-using the existing exception frame (in the exception exit case) or
via the creation of a new one from an assembly trampoline (in the
arch_local_irq_enable case).
This removes the need to play with the decrementer to try to create
fake interrupts, among other things.
In addition, this adds a few refinements:
- We no longer hard disable decrementer interrupts that occur
while soft-disabled. We now simply bump the decrementer back to max
(on Book3S) or leave it stopped (on BookE) and continue with hard
interrupts enabled, which means that we'll potentially get better
sample quality from performance monitor interrupts.
- Timer, decrementer and doorbell interrupts now hard-enable
shortly after removing the source of the interrupt, which means
they no longer run entirely hard disabled. Again, this will improve
perf sample quality.
- On Book3E 64-bit, we now make the performance monitor interrupt
act as an NMI like Book3S (the necessary C code for that to work
appears to already be present in the FSL perf code, notably calling
nmi_enter instead of irq_enter). (This also fixes a bug where BookE
perfmon interrupts could clobber r14 ... oops)
- We could make "masked" decrementer interrupts act as NMIs when doing
timer-based perf sampling to improve the sample quality.
Signed-off-by-yet: Benjamin Herrenschmidt <benh@kernel.crashing.org>
---
v2:
- Add hard-enable to decrementer, timer and doorbells
- Fix CR clobber in masked irq handling on BookE
- Make embedded perf interrupt act as an NMI
- Add a PACA_HAPPENED_EE_EDGE for use by FSL if they want
to retrigger an interrupt without preventing hard-enable
v3:
- Fix or vs. ori bug on Book3E
- Fix enabling of interrupts for some exceptions on Book3E
v4:
- Fix resend of doorbells on return from interrupt on Book3E
v5:
- Rebased on top of my latest series, which involves some significant
rework of some aspects of the patch.
v6:
- 32-bit compile fix
- more compile fixes with various .config combos
- factor out the asm code to soft-disable interrupts
- remove the C wrapper around preempt_schedule_irq
v7:
- Fix a bug with hard irq state tracking on native power7
2012-03-06 18:27:59 +11:00
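To make the scheme concrete, here is a small self-contained C model of the irq_happened bookkeeping the message describes. The bit values and function names are assumptions for illustration only; they are not the kernel's actual definitions.

    /* Hedged sketch of lazy interrupt masking via an irq_happened bitmask. */
    #include <stdio.h>

    #define PACA_IRQ_HARD_DIS 0x01  /* illustrative bit values */
    #define PACA_IRQ_DBELL    0x02
    #define PACA_IRQ_DEC      0x04

    static unsigned char irq_happened;  /* stands in for paca->irq_happened */

    /* While soft-disabled, a masked interrupt latches its bit and returns. */
    static void masked_interrupt(unsigned char source)
    {
            irq_happened |= source | PACA_IRQ_HARD_DIS;
    }

    /* On re-enable, inspect the mask and replay whatever was latched. */
    static void irq_restore_model(void)
    {
            if (irq_happened & PACA_IRQ_DEC)
                    printf("replay decrementer\n");
            if (irq_happened & PACA_IRQ_DBELL)
                    printf("replay doorbell\n");
            irq_happened = 0;  /* everything handled, clear the latch */
    }

    int main(void)
    {
            masked_interrupt(PACA_IRQ_DEC);
            masked_interrupt(PACA_IRQ_DBELL);
            irq_restore_model();
            return 0;
    }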
|
|
|
.globl fast_exc_return_irq
|
|
|
|
fast_exc_return_irq:
|
2005-10-10 22:36:14 +10:00
|
|
|
restore:
|
2012-03-06 18:27:59 +11:00
|
|
|
/*
|
2012-05-10 16:12:38 +00:00
|
|
|
* This is the main kernel exit path. First we check if we
|
|
|
|
* are about to re-enable interrupts
|
2012-03-06 18:27:59 +11:00
|
|
|
*/
|
2008-07-16 14:21:34 +10:00
|
|
|
ld r5,SOFTE(r1)
|
2017-12-20 09:25:50 +05:30
|
|
|
lbz r6,PACAIRQSOFTMASK(r13)
|
powerpc/64: Change soft_enabled from flag to bitmask
"paca->soft_enabled" is used as a flag to mask some interrupts.
Currently supported flag values and their details:
soft_enabled    MSR[EE]
0               0       Disabled (PMI and HMI not masked)
1               1       Enabled
"paca->soft_enabled" is initialized to 1 to mark interrupts as
enabled. arch_local_irq_disable() will toggle the value when
interrupts need to be disabled. At this point, the interrupts are not
actually disabled; instead, the interrupt vector has code to check for
the flag and mask the interrupt when it occurs. By "mask it", we mean
it updates paca->irq_happened and returns. arch_local_irq_restore() is
called to re-enable interrupts, which checks and replays interrupts if
any occurred.
Now, as mentioned, the current logic does not mask "performance
monitoring interrupts", and PMIs are implemented as NMIs. But this
patchset depends on local_irq_* for a successful local_* update;
meaning, we must mask all possible interrupts during a local_* update
and replay them after the update.
So the idea here is to reverse the "paca->soft_enabled" logic. New
values and details:
soft_enabled    MSR[EE]
1               0       Disabled (PMI and HMI not masked)
0               1       Enabled
The reason for this change is to create the foundation for a third
mask value "0x2" for "soft_enabled", to add support for masking PMIs.
When ->soft_enabled is set to a value of "3", PMI interrupts are
masked, and when set to a value of "1", PMIs are not masked. This
patch also extends soft_enabled to serve as an interrupt-disable mask.
The current flags are renamed from IRQ_{EN,DIS}ABLED to
IRQS_ENABLED and IRQS_DISABLED.
The patch also fixes the ptrace call to force the user to see the
softe value as always 1. The reason being: even though userspace has
no business knowing about softe, it is part of pt_regs. Likewise in
signal context.
Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2017-12-20 09:25:49 +05:30
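A short C model of the reversed semantics may help; the constants mirror the tables above, but the names and the test are illustrative assumptions rather than kernel code:

    /* Hedged sketch of soft_enabled as an interrupt-disable bitmask. */
    #include <stdio.h>

    #define IRQS_ENABLED      0x00  /* old value 1, now 0 */
    #define IRQS_DISABLED     0x01  /* old value 0, now 1 */
    #define IRQS_PMI_DISABLED 0x02  /* the new third mask bit for PMIs */

    /* The exit path's "andi. r5,r5,IRQS_DISABLED" maps to this test. */
    static int irqs_disabled_model(unsigned char soft_mask)
    {
            return soft_mask & IRQS_DISABLED;
    }

    int main(void)
    {
            unsigned char mask = IRQS_DISABLED | IRQS_PMI_DISABLED; /* "3" */
            printf("disabled=%d\n", irqs_disabled_model(mask));
            printf("pmi masked=%d\n", !!(mask & IRQS_PMI_DISABLED));
            return 0;
    }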
|
|
|
andi. r5,r5,IRQS_DISABLED
|
|
|
|
bne .Lrestore_irq_off
|
2012-03-06 18:27:59 +11:00
|
|
|
|
2012-05-10 16:12:38 +00:00
|
|
|
/* We are enabling, were we already enabled? If so, just return */
|
2017-12-20 09:25:49 +05:30
|
|
|
andi. r6,r6,IRQS_DISABLED
|
2017-06-29 23:19:19 +05:30
|
|
|
beq cr0,.Ldo_restore
|
2005-10-10 22:36:14 +10:00
|
|
|
|
2012-05-10 16:12:38 +00:00
|
|
|
/*
|
2012-03-06 18:27:59 +11:00
|
|
|
* We are about to soft-enable interrupts (we are hard disabled
|
|
|
|
* at this point). We check if there's anything that needs to
|
|
|
|
* be replayed first.
|
|
|
|
*/
|
|
|
|
lbz r0,PACAIRQHAPPENED(r13)
|
|
|
|
cmpwi cr0,r0,0
|
2017-06-29 23:19:19 +05:30
|
|
|
bne- .Lrestore_check_irq_replay
|
2007-02-07 13:13:26 +11:00
|
|
|
|
2012-03-06 18:27:59 +11:00
|
|
|
/*
|
|
|
|
* Get here when nothing happened while soft-disabled, just
|
|
|
|
* soft-enable and move on. We will hard-enable as a side
|
|
|
|
* effect of rfi
|
|
|
|
*/
|
2017-06-29 23:19:19 +05:30
|
|
|
.Lrestore_no_replay:
|
2012-03-06 18:27:59 +11:00
|
|
|
TRACE_ENABLE_INTS
|
2017-12-20 09:25:42 +05:30
|
|
|
li r0,IRQS_ENABLED
|
2017-12-20 09:25:50 +05:30
|
|
|
stb r0,PACAIRQSOFTMASK(r13);
|
2012-03-06 18:27:59 +11:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Final return path. BookE is handled in a different file
|
|
|
|
*/
|
2017-06-29 23:19:19 +05:30
|
|
|
.Ldo_restore:
|
2009-07-23 23:15:59 +00:00
|
|
|
#ifdef CONFIG_PPC_BOOK3E
|
2014-02-04 16:04:35 +11:00
|
|
|
b exception_return_book3e
|
2009-07-23 23:15:59 +00:00
|
|
|
#else
|
2012-03-06 18:27:59 +11:00
|
|
|
/*
|
|
|
|
* Clear the reservation. If we know the CPU tracks the address of
|
|
|
|
* the reservation then we can potentially save some cycles and use
|
|
|
|
* a larx. On POWER6 and POWER7 this is significantly faster.
|
|
|
|
*/
|
|
|
|
BEGIN_FTR_SECTION
|
|
|
|
stdcx. r0,0,r1 /* to clear the reservation */
|
|
|
|
FTR_SECTION_ELSE
|
|
|
|
ldarx r4,0,r1
|
|
|
|
ALT_FTR_SECTION_END_IFCLR(CPU_FTR_STCX_CHECKS_ADDRESS)
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Some code paths such as load_up_fpu or altivec return directly
|
|
|
|
* here. They run entirely hard disabled and do not alter the
|
|
|
|
* interrupt state. They also don't use lwarx/stwcx. and thus
|
|
|
|
* are known not to leave dangling reservations.
|
|
|
|
*/
|
|
|
|
.globl fast_exception_return
|
|
|
|
fast_exception_return:
|
|
|
|
ld r3,_MSR(r1)
|
2007-02-07 13:13:26 +11:00
|
|
|
ld r4,_CTR(r1)
|
|
|
|
ld r0,_LINK(r1)
|
|
|
|
mtctr r4
|
|
|
|
mtlr r0
|
|
|
|
ld r4,_XER(r1)
|
|
|
|
mtspr SPRN_XER,r4
|
|
|
|
|
2019-04-18 16:51:24 +10:00
|
|
|
kuap_check_amr r5, r6
|
|
|
|
|
2007-02-07 13:13:26 +11:00
|
|
|
REST_8GPRS(5, r1)
|
|
|
|
|
2005-10-10 22:36:14 +10:00
|
|
|
andi. r0,r3,MSR_RI
|
2017-06-29 23:19:19 +05:30
|
|
|
beq- .Lunrecov_restore
|
2005-10-10 22:36:14 +10:00
|
|
|
|
2007-02-07 13:13:26 +11:00
|
|
|
/*
|
|
|
|
* Clear RI before restoring r13. If we are returning to
|
|
|
|
* userspace and we take an exception after restoring r13,
|
|
|
|
* we end up corrupting the userspace r13 value.
|
|
|
|
*/
|
2016-09-15 19:04:46 +10:00
|
|
|
li r4,0
|
2007-02-07 13:13:26 +11:00
|
|
|
mtmsrd r4,1
|
2005-10-10 22:36:14 +10:00
|
|
|
|
2013-02-13 16:21:34 +00:00
|
|
|
#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
|
|
|
|
/* TM debug */
|
|
|
|
std r3, PACATMSCRATCH(r13) /* Stash returned-to MSR */
|
|
|
|
#endif
|
2005-10-10 22:36:14 +10:00
|
|
|
/*
|
|
|
|
* r13 is our per cpu area, only restore it if we are returning to
|
2012-03-06 18:27:59 +11:00
|
|
|
* userspace; the value stored in the stack frame may belong to
|
|
|
|
* another CPU.
|
2005-10-10 22:36:14 +10:00
|
|
|
*/
|
2007-02-07 13:13:26 +11:00
|
|
|
andi. r0,r3,MSR_PR
|
2005-10-10 22:36:14 +10:00
|
|
|
beq 1f
|
2013-11-05 16:33:22 +11:00
|
|
|
BEGIN_FTR_SECTION
|
2018-10-13 00:15:16 +11:00
|
|
|
/* Restore PPR */
|
|
|
|
ld r2,_PPR(r1)
|
|
|
|
mtspr SPRN_PPR,r2
|
2013-11-05 16:33:22 +11:00
|
|
|
END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR)
|
2016-05-17 08:33:46 +02:00
|
|
|
ACCOUNT_CPU_USER_EXIT(r13, r2, r4)
|
2005-10-10 22:36:14 +10:00
|
|
|
REST_GPR(13, r1)
|
2018-01-10 03:07:15 +11:00
|
|
|
|
2019-04-18 16:51:24 +10:00
|
|
|
/*
|
|
|
|
* We don't need to restore AMR on the way back to userspace for KUAP.
|
|
|
|
* The value of AMR only matters while we're in the kernel.
|
|
|
|
*/
|
2007-02-07 13:13:26 +11:00
|
|
|
mtspr SPRN_SRR1,r3
|
2005-10-10 22:36:14 +10:00
|
|
|
|
|
|
|
ld r2,_CCR(r1)
|
|
|
|
mtcrf 0xFF,r2
|
|
|
|
ld r2,_NIP(r1)
|
|
|
|
mtspr SPRN_SRR0,r2
|
|
|
|
|
|
|
|
ld r0,GPR0(r1)
|
|
|
|
ld r2,GPR2(r1)
|
|
|
|
ld r3,GPR3(r1)
|
|
|
|
ld r4,GPR4(r1)
|
|
|
|
ld r1,GPR1(r1)
|
2018-01-10 03:07:15 +11:00
|
|
|
RFI_TO_USER
|
|
|
|
b . /* prevent speculative execution */
|
2005-10-10 22:36:14 +10:00
|
|
|
|
2018-01-10 03:07:15 +11:00
|
|
|
1: mtspr SPRN_SRR1,r3
|
|
|
|
|
|
|
|
ld r2,_CCR(r1)
|
|
|
|
mtcrf 0xFF,r2
|
|
|
|
ld r2,_NIP(r1)
|
|
|
|
mtspr SPRN_SRR0,r2
|
2005-10-10 22:36:14 +10:00
|
|
|
|
powerpc/64s: Clear on-stack exception marker upon exception return
The ppc64-specific implementation of the reliable stacktracer,
save_stack_trace_tsk_reliable(), bails out and reports an "unreliable
trace" whenever it finds an exception frame on the stack. Stack frames
are classified as exception frames if the STACK_FRAME_REGS_MARKER
magic, as written by exception prologues, is found at a particular
location.
However, as observed by Joe Lawrence, it is possible in practice that
non-exception stack frames can alias with prior exception frames and
thus that the reliable stacktracer can find a stale
STACK_FRAME_REGS_MARKER on the stack. It then falsely reports an
unreliable stacktrace and blocks any live patching transition from
finishing. This condition lasts until the stack frame is
overwritten/initialized by a function call or other means.
In principle, we could mitigate this by making the exception frame
classification condition in save_stack_trace_tsk_reliable() stronger:
in addition to testing for STACK_FRAME_REGS_MARKER, we could also take
into account that for all exceptions executing on the kernel stack
- their stack frames' backlink pointers always match what is saved
in their pt_regs instance's ->gpr[1] slot, and that
- their exception frame size equals STACK_INT_FRAME_SIZE, a value
uncommonly large for non-exception frames.
However, while these are currently true, relying on them would make
the reliable stacktrace implementation more sensitive to future
changes in the exception entry code. Note that false negatives, i.e.
not detecting exception frames, would silently break the live patching
consistency model.
Furthermore, certain other places (diagnostic stacktraces, perf, xmon)
rely on STACK_FRAME_REGS_MARKER as well.
Make the exception exit code clear the on-stack
STACK_FRAME_REGS_MARKER for those exceptions running on the "normal"
kernel stack and returning to kernelspace: because the topmost frame
is ignored by the reliable stack tracer anyway, returns to userspace
don't need to take care of clearing the marker.
Furthermore, as I don't have the ability to test this on Book 3E or
32 bits, limit the change to Book 3S and 64 bits.
Fixes: df78d3f61480 ("powerpc/livepatch: Implement reliable stack tracing for the consistency model")
Reported-by: Joe Lawrence <joe.lawrence@redhat.com>
Signed-off-by: Nicolai Stange <nstange@suse.de>
Signed-off-by: Joe Lawrence <joe.lawrence@redhat.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2019-01-22 10:57:21 -05:00
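The following self-contained C sketch shows why a stale marker misleads a marker-based frame classifier, and what clearing it on exit prevents. The slot index, helper name, and marker value are illustrative assumptions, not the kernel's definitions.

    /* Hedged sketch: a marker-based exception frame classifier and the
     * effect of clearing the marker on exit. Values are illustrative. */
    #include <stdint.h>
    #include <stdio.h>

    #define FRAME_REGS_MARKER 0x7265677368657265ULL  /* ASCII "regshere" */

    /* In the spirit of save_stack_trace_tsk_reliable(): a frame is
     * classified as an exception frame if the magic word is present. */
    static int frame_is_exception(const uint64_t *frame, int marker_slot)
    {
            return frame[marker_slot] == FRAME_REGS_MARKER;
    }

    int main(void)
    {
            uint64_t frame[4] = { 0 };

            frame[2] = FRAME_REGS_MARKER;  /* stale residue of an old frame */
            printf("stale marker seen: %d\n", frame_is_exception(frame, 2));

            frame[2] = 0;                  /* what the exit path now does */
            printf("after clearing:    %d\n", frame_is_exception(frame, 2));
            return 0;
    }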
|
|
|
/*
|
|
|
|
* Leaving a stale exception_marker on the stack can confuse
|
|
|
|
* the reliable stack unwinder later on. Clear it.
|
|
|
|
*/
|
|
|
|
li r2,0
|
|
|
|
std r2,STACK_FRAME_OVERHEAD-16(r1)
|
|
|
|
|
2018-01-10 03:07:15 +11:00
|
|
|
ld r0,GPR0(r1)
|
|
|
|
ld r2,GPR2(r1)
|
|
|
|
ld r3,GPR3(r1)
|
2019-04-18 16:51:24 +10:00
|
|
|
|
|
|
|
kuap_restore_amr r4
|
|
|
|
|
2018-01-10 03:07:15 +11:00
|
|
|
ld r4,GPR4(r1)
|
|
|
|
ld r1,GPR1(r1)
|
|
|
|
RFI_TO_KERNEL
|
2005-10-10 22:36:14 +10:00
|
|
|
b . /* prevent speculative execution */
|
|
|
|
|
2009-07-23 23:15:59 +00:00
|
|
|
#endif /* CONFIG_PPC_BOOK3E */
|
|
|
|
|
2012-05-10 16:12:38 +00:00
|
|
|
/*
|
|
|
|
* We are returning to a context with interrupts soft disabled.
|
|
|
|
*
|
|
|
|
* However, we may also be about to hard enable, so we need to
|
|
|
|
* make sure that in this case, we also clear PACA_IRQ_HARD_DIS
|
|
|
|
* or that bit can get out of sync and bad things will happen
|
|
|
|
*/
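In C terms, the fixup performed by .Lrestore_irq_off below amounts to something like this sketch (constants and names are illustrative, not the kernel's):

    /* Hedged sketch: keep PACA_IRQ_HARD_DIS in sync with the MSR_EE bit
     * of the context we are returning to. */
    #include <stdio.h>

    #define MSR_EE            0x8000UL  /* illustrative values */
    #define PACA_IRQ_HARD_DIS 0x01

    static unsigned char fixup_hard_dis(unsigned long regs_msr,
                                        unsigned char irq_happened)
    {
            if (regs_msr & MSR_EE)                      /* andi. r0,r3,MSR_EE */
                    irq_happened &= ~PACA_IRQ_HARD_DIS; /* rlwinm/stb below */
            return irq_happened;
    }

    int main(void)
    {
            printf("%#x\n", fixup_hard_dis(MSR_EE, PACA_IRQ_HARD_DIS | 0x04));
            return 0;
    }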
|
2017-06-29 23:19:19 +05:30
|
|
|
.Lrestore_irq_off:
|
2012-05-10 16:12:38 +00:00
|
|
|
ld r3,_MSR(r1)
|
|
|
|
lbz r7,PACAIRQHAPPENED(r13)
|
|
|
|
andi. r0,r3,MSR_EE
|
|
|
|
beq 1f
|
|
|
|
rlwinm r7,r7,0,~PACA_IRQ_HARD_DIS
|
|
|
|
stb r7,PACAIRQHAPPENED(r13)
|
2017-11-17 02:00:50 +10:00
|
|
|
1:
|
2017-12-20 09:25:54 +05:30
|
|
|
#if defined(CONFIG_PPC_IRQ_SOFT_MASK_DEBUG) && defined(CONFIG_BUG)
|
2017-11-17 02:00:50 +10:00
|
|
|
/* The interrupt should not have soft-enabled interrupts. */
|
2017-12-20 09:25:50 +05:30
|
|
|
lbz r7,PACAIRQSOFTMASK(r13)
|
|
|
|
1: tdeqi r7,IRQS_ENABLED
|
2017-11-17 02:00:50 +10:00
|
|
|
EMIT_BUG_ENTRY 1b,__FILE__,__LINE__,BUGFLAG_WARNING
|
|
|
|
#endif
|
2017-06-29 23:19:19 +05:30
|
|
|
b .Ldo_restore
|
2012-05-10 16:12:38 +00:00
|
|
|
|
2012-03-06 18:27:59 +11:00
|
|
|
/*
|
|
|
|
* Something did happen, check if a re-emit is needed
|
|
|
|
* (this also clears paca->irq_happened)
|
|
|
|
*/
|
2017-06-29 23:19:19 +05:30
|
|
|
.Lrestore_check_irq_replay:
|
2012-03-06 18:27:59 +11:00
|
|
|
/* XXX: We could implement a fast path here where we check
|
|
|
|
* for irq_happened being just 0x01, in which case we can
|
|
|
|
* clear it and return. That means that we would potentially
|
|
|
|
* miss a decrementer having wrapped all the way around.
|
|
|
|
*
|
|
|
|
* Still, this might be useful for things like hash_page
|
|
|
|
*/
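For what it's worth, the fast path mooted in the comment above might look roughly like the following C sketch; it is purely speculative, and carries exactly the wrapped-decrementer caveat the comment raises:

    /* Hedged sketch of the XXX fast-path idea: if only the hard-disable
     * bit (0x01) is latched, clear it and skip the full replay check. */
    #include <stdio.h>

    #define PACA_IRQ_HARD_DIS 0x01  /* illustrative value */

    static int restore_fast_path(unsigned char *irq_happened)
    {
            if (*irq_happened == PACA_IRQ_HARD_DIS) {
                    *irq_happened = 0;
                    return 1;  /* nothing to replay (may miss a wrapped DEC) */
            }
            return 0;          /* fall back to __check_irq_replay() */
    }

    int main(void)
    {
            unsigned char happened = PACA_IRQ_HARD_DIS;
            printf("fast path taken: %d\n", restore_fast_path(&happened));
            return 0;
    }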
|
2014-02-04 16:04:35 +11:00
|
|
|
bl __check_irq_replay
|
2012-03-06 18:27:59 +11:00
|
|
|

	cmpwi	cr0,r3,0
	beq	.Lrestore_no_replay

	/*
	 * We need to re-emit an interrupt. We do so by re-using our
	 * existing exception frame. We first change the trap value,
	 * but we need to ensure we preserve the low nibble of it
	 */
	ld	r4,_TRAP(r1)
	clrldi	r4,r4,60
	or	r4,r4,r3
	std	r4,_TRAP(r1)
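	/*
	 * In C terms, the four instructions above amount to the following
	 * (hypothetical helper, shown for clarity): keep the low nibble of
	 * the saved trap word, which carries frame flags, and merge in the
	 * vector of the interrupt being replayed.
	 *
	 *	#include <stdint.h>
	 *
	 *	static uint64_t remit_trap(uint64_t old_trap, uint64_t vector)
	 *	{
	 *		// clrldi r4,r4,60 keeps bits 0..3; or r4,r4,r3 merges
	 *		return (old_trap & 0xfULL) | vector;
	 *	}
	 */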

	/*
	 * PACA_IRQ_HARD_DIS won't always be set here, so set it now
	 * to reconcile the IRQ state. Tracing is already accounted for.
	 */
	lbz	r4,PACAIRQHAPPENED(r13)
	ori	r4,r4,PACA_IRQ_HARD_DIS
	stb	r4,PACAIRQHAPPENED(r13)
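	/*
	 * A C sketch of the three instructions above (paca byte layout
	 * assumed from asm/paca.h):
	 *
	 *	#include <stdint.h>
	 *
	 *	#define PACA_IRQ_HARD_DIS 0x01
	 *
	 *	static void reconcile_hard_dis(uint8_t *irq_happened)
	 *	{
	 *		// lbz / ori / stb: mark interrupts as hard-disabled
	 *		*irq_happened |= PACA_IRQ_HARD_DIS;
	 *	}
	 */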

	/*
	 * Then find the right handler and call it. Interrupts are
	 * still soft-disabled and we keep them that way.
	 */
	cmpwi	cr0,r3,0x500
	bne	1f
	addi	r3,r1,STACK_FRAME_OVERHEAD
	bl	do_IRQ
	b	ret_from_except
1:	cmpwi	cr0,r3,0xf00
	bne	1f
	addi	r3,r1,STACK_FRAME_OVERHEAD
	bl	performance_monitor_exception
	b	ret_from_except
1:	cmpwi	cr0,r3,0xe60
	bne	1f
	addi	r3,r1,STACK_FRAME_OVERHEAD
	bl	handle_hmi_exception
	b	ret_from_except
1:	cmpwi	cr0,r3,0x900
	bne	1f
	addi	r3,r1,STACK_FRAME_OVERHEAD
	bl	timer_interrupt
	b	ret_from_except
#ifdef CONFIG_PPC_DOORBELL
1:
#ifdef CONFIG_PPC_BOOK3E
	cmpwi	cr0,r3,0x280
#else
	cmpwi	cr0,r3,0xa00
#endif /* CONFIG_PPC_BOOK3E */
	bne	1f
	addi	r3,r1,STACK_FRAME_OVERHEAD
	bl	doorbell_exception
#endif /* CONFIG_PPC_DOORBELL */
1:	b	ret_from_except /* What else to do here ? */
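	/*
	 * The whole replay dispatch above, restated as C for readability.
	 * Trap values and handler names are taken from the asm; the switch
	 * itself is only a sketch of the control flow, not kernel code.
	 *
	 *	static void replay_dispatch(unsigned int trap, void *regs)
	 *	{
	 *		switch (trap) {
	 *		case 0x500:	// external interrupt
	 *			do_IRQ(regs);
	 *			break;
	 *		case 0xf00:	// performance monitor
	 *			performance_monitor_exception(regs);
	 *			break;
	 *		case 0xe60:	// hypervisor maintenance
	 *			handle_hmi_exception(regs);
	 *			break;
	 *		case 0x900:	// decrementer
	 *			timer_interrupt(regs);
	 *			break;
	 *	#ifdef CONFIG_PPC_DOORBELL
	 *	#ifdef CONFIG_PPC_BOOK3E
	 *		case 0x280:	// doorbell (Book3E)
	 *	#else
	 *		case 0xa00:	// doorbell (Book3S)
	 *	#endif
	 *			doorbell_exception(regs);
	 *			break;
	 *	#endif
	 *		default:	// nothing else to do here
	 *			break;
	 *		}
	 *	}
	 */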

.Lunrecov_restore:
	addi	r3,r1,STACK_FRAME_OVERHEAD
	bl	unrecoverable_exception
	b	.Lunrecov_restore

_ASM_NOKPROBE_SYMBOL(ret_from_except);
_ASM_NOKPROBE_SYMBOL(ret_from_except_lite);
_ASM_NOKPROBE_SYMBOL(resume_kernel);
_ASM_NOKPROBE_SYMBOL(fast_exc_return_irq);
_ASM_NOKPROBE_SYMBOL(restore);
_ASM_NOKPROBE_SYMBOL(fast_exception_return);

#ifdef CONFIG_PPC_RTAS
/*
 * On CHRP, the Run-Time Abstraction Services (RTAS) have to be
 * called with the MMU off.
 *
 * In addition, we need to be in 32b mode, at least for now.
 *
 * Note: r3 is an input parameter to rtas, so don't trash it...
 */
_GLOBAL(enter_rtas)
	mflr	r0
	std	r0,16(r1)
	stdu	r1,-SWITCH_FRAME_SIZE(r1) /* Save SP and create stack space. */

	/* Because RTAS is running in 32b mode, it clobbers the high order half
	 * of all registers that it saves. We therefore save those registers
	 * RTAS might touch to the stack. (r0, r3-r13 are caller saved)
	 */
	SAVE_GPR(2, r1)			/* Save the TOC */
	SAVE_GPR(13, r1)		/* Save paca */
	SAVE_NVGPRS(r1)			/* Save the non-volatiles */

	mfcr	r4
	std	r4,_CCR(r1)
	mfctr	r5
	std	r5,_CTR(r1)
	mfspr	r6,SPRN_XER
	std	r6,_XER(r1)
	mfdar	r7
	std	r7,_DAR(r1)
	mfdsisr	r8
	std	r8,_DSISR(r1)

	/* Temporary workaround to clear CR until RTAS can be modified to
	 * ignore all bits.
	 */
	li	r0,0
	mtcr	r0

#ifdef CONFIG_BUG
	/* There is no way it is acceptable to get here with interrupts enabled,
	 * check it with the asm equivalent of WARN_ON
	 */
	lbz	r0,PACAIRQSOFTMASK(r13)
1:	tdeqi	r0,IRQS_ENABLED
	EMIT_BUG_ENTRY 1b,__FILE__,__LINE__,BUGFLAG_WARNING
#endif
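	/*
	 * What the tdeqi + EMIT_BUG_ENTRY pair expresses, as freestanding
	 * C. IRQS_ENABLED is assumed to be 0, per the soft-mask scheme
	 * introduced when soft_enabled became a bitmask.
	 *
	 *	#include <stdint.h>
	 *	#include <stdio.h>
	 *
	 *	#define IRQS_ENABLED 0x00
	 *
	 *	static void warn_if_soft_enabled(uint8_t irq_soft_mask)
	 *	{
	 *		// trap (warn) if we got here with IRQs soft-enabled
	 *		if (irq_soft_mask == IRQS_ENABLED)
	 *			fprintf(stderr, "WARN: enter_rtas with IRQs on\n");
	 *	}
	 */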

	/* Hard-disable interrupts */
	mfmsr	r6
	rldicl	r7,r6,48,1
	rotldi	r7,r7,16
	mtmsrd	r7,1
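	/*
	 * The rldicl/rotldi pair is a branch-free way of clearing MSR_EE:
	 * rotate the MSR so that EE (0x8000) becomes the top bit, mask
	 * that bit off, rotate back. In C (MSR_EE value assumed from
	 * asm/reg.h):
	 *
	 *	#include <stdint.h>
	 *
	 *	#define MSR_EE 0x8000ULL
	 *
	 *	static uint64_t hard_disable_msr(uint64_t msr)
	 *	{
	 *		uint64_t r = (msr << 48) | (msr >> 16); // rotate part of rldicl
	 *		r &= ~(1ULL << 63);			// mask: clear top bit (EE)
	 *		return (r << 16) | (r >> 48);		// rotldi back
	 *	}
	 *
	 * The result is simply msr & ~MSR_EE.
	 */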

	/* Unfortunately, the stack pointer and the MSR are also clobbered,
	 * so they are saved in the PACA, which allows us to restore
	 * our original state after RTAS returns.
	 */
	std	r1,PACAR1(r13)
	std	r6,PACASAVEDMSR(r13)

	/* Set up our real return addr */
	LOAD_REG_ADDR(r4,rtas_return_loc)
	clrldi	r4,r4,2			/* convert to realmode address */
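	/*
	 * clrldi rX,rX,2 strips the top two "quadrant" bits of an
	 * effective address, turning the linear-mapped kernel address
	 * (0xc000...) into a real-mode address. A one-line C sketch:
	 *
	 *	static unsigned long to_realmode(unsigned long ea)
	 *	{
	 *		return ea & ~(0x3UL << 62);	// drop EA quadrant bits
	 *	}
	 */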
	mtlr	r4

	li	r0,0
	ori	r0,r0,MSR_EE|MSR_SE|MSR_BE|MSR_RI
	andc	r0,r6,r0

	li	r9,1
	rldicr	r9,r9,MSR_SF_LG,(63-MSR_SF_LG)
	ori	r9,r9,MSR_IR|MSR_DR|MSR_FE0|MSR_FE1|MSR_FP|MSR_RI|MSR_LE
	andc	r6,r0,r9
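	/*
	 * The two MSR values built above, recomputed in C. The first (r0)
	 * is the kernel MSR with EE/SE/BE/RI cleared, used while SRR0/1
	 * are live; the second (r6) additionally drops 64-bit mode,
	 * translation, FP and LE for RTAS itself. Bit values are assumed
	 * from asm/reg.h.
	 *
	 *	#include <stdint.h>
	 *
	 *	#define MSR_SF	(1ULL << 63)
	 *	#define MSR_EE	0x8000ULL
	 *	#define MSR_FP	0x2000ULL
	 *	#define MSR_FE0	0x0800ULL
	 *	#define MSR_SE	0x0400ULL
	 *	#define MSR_BE	0x0200ULL
	 *	#define MSR_FE1	0x0100ULL
	 *	#define MSR_IR	0x0020ULL
	 *	#define MSR_DR	0x0010ULL
	 *	#define MSR_RI	0x0002ULL
	 *	#define MSR_LE	0x0001ULL
	 *
	 *	static void rtas_msrs(uint64_t cur, uint64_t *r0, uint64_t *r6)
	 *	{
	 *		*r0 = cur & ~(MSR_EE | MSR_SE | MSR_BE | MSR_RI);
	 *		*r6 = *r0 & ~(MSR_SF | MSR_IR | MSR_DR | MSR_FE0 |
	 *			      MSR_FE1 | MSR_FP | MSR_RI | MSR_LE);
	 *	}
	 */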

__enter_rtas:
	sync				/* disable interrupts so SRR0/1 */
	mtmsrd	r0			/* don't get trashed */

	LOAD_REG_ADDR(r4, rtas)
	ld	r5,RTASENTRY(r4)	/* get the rtas->entry value */
	ld	r4,RTASBASE(r4)		/* get the rtas->base value */

	mtspr	SPRN_SRR0,r5
	mtspr	SPRN_SRR1,r6
	RFI_TO_KERNEL
	b	.	/* prevent speculative execution */

rtas_return_loc:
	FIXUP_ENDIAN

	/*
	 * Clear RI and set SF before anything.
	 */
	mfmsr	r6
	li	r0,MSR_RI
	andc	r6,r6,r0
	sldi	r0,r0,(MSR_SF_LG - MSR_RI_LG)
	or	r6,r6,r0
	sync
	mtmsrd	r6
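	/*
	 * The same transform in C: drop RI, then reuse the RI constant
	 * shifted up to the SF position (MSR_SF_LG - MSR_RI_LG = 62, by
	 * assumption from asm/reg.h):
	 *
	 *	#include <stdint.h>
	 *
	 *	static uint64_t clear_ri_set_sf(uint64_t msr)
	 *	{
	 *		const uint64_t ri = 0x2ULL;	// MSR_RI
	 *		msr &= ~ri;			// andc r6,r6,r0
	 *		return msr | (ri << 62);	// sldi + or: set SF (bit 63)
	 *	}
	 */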

	/* relocation is off at this point */
	GET_PACA(r4)
	clrldi	r4,r4,2			/* convert to realmode address */

	bcl	20,31,$+4		/* branch-and-link to the next insn to get the PC in LR */
0:	mflr	r3
	ld	r3,(1f-0b)(r3)		/* get &rtas_restore_regs */

	ld	r1,PACAR1(r4)		/* Restore our SP */
	ld	r4,PACASAVEDMSR(r4)	/* Restore our MSR */

	mtspr	SPRN_SRR0,r3
	mtspr	SPRN_SRR1,r4
	RFI_TO_KERNEL
	b	.	/* prevent speculative execution */
_ASM_NOKPROBE_SYMBOL(__enter_rtas)
_ASM_NOKPROBE_SYMBOL(rtas_return_loc)

	.align	3
1:	.8byte	rtas_restore_regs

rtas_restore_regs:
	/* relocation is on at this point */
	REST_GPR(2, r1)			/* Restore the TOC */
	REST_GPR(13, r1)		/* Restore paca */
	REST_NVGPRS(r1)			/* Restore the non-volatiles */

	GET_PACA(r13)

	ld	r4,_CCR(r1)
	mtcr	r4
	ld	r5,_CTR(r1)
	mtctr	r5
	ld	r6,_XER(r1)
	mtspr	SPRN_XER,r6
	ld	r7,_DAR(r1)
	mtdar	r7
	ld	r8,_DSISR(r1)
	mtdsisr	r8

	addi	r1,r1,SWITCH_FRAME_SIZE	/* Unstack our frame */
	ld	r0,16(r1)		/* get return address */

	mtlr	r0
	blr				/* return to caller */
#endif /* CONFIG_PPC_RTAS */
_GLOBAL(enter_prom)
	mflr	r0
	std	r0,16(r1)
	stdu	r1,-SWITCH_FRAME_SIZE(r1) /* Save SP and create stack space */

	/* Because PROM is running in 32b mode, it clobbers the high order half
	 * of all registers that it saves. We therefore save those registers
	 * PROM might touch to the stack. (r0, r3-r13 are caller saved)
	 */
	SAVE_GPR(2, r1)
	SAVE_GPR(13, r1)
	SAVE_NVGPRS(r1)
	mfcr	r10
	mfmsr	r11
	std	r10,_CCR(r1)
	std	r11,_MSR(r1)

	/* Put PROM address in SRR0 */
	mtsrr0	r4

	/* Set up our trampoline return addr in LR */
	bcl	20,31,$+4
0:	mflr	r4
	addi	r4,r4,(1f - 0b)
	mtlr	r4

	/* Prepare a 32-bit mode big endian MSR */
#ifdef CONFIG_PPC_BOOK3E
	rlwinm	r11,r11,0,1,31
	mtsrr1	r11
	rfi
#else /* CONFIG_PPC_BOOK3E */
	LOAD_REG_IMMEDIATE(r12, MSR_SF | MSR_ISF | MSR_LE)
	andc	r11,r11,r12
	mtsrr1	r11
	RFI_TO_KERNEL
#endif /* CONFIG_PPC_BOOK3E */
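	/*
	 * The 32-bit big-endian MSR prepared for Open Firmware, restated
	 * in C (constants assumed from asm/reg.h; on Book3E the rlwinm
	 * keeps only the low 31 bits, clearing the computation-mode bit):
	 *
	 *	#include <stdint.h>
	 *
	 *	#define MSR_SF	(1ULL << 63)	// Book3S 64-bit mode
	 *	#define MSR_ISF	(1ULL << 61)	// Book3S interrupt 64-bit mode
	 *	#define MSR_LE	0x1ULL
	 *
	 *	static uint64_t prom_msr(uint64_t msr, int book3e)
	 *	{
	 *		if (book3e)			// rlwinm r11,r11,0,1,31
	 *			return msr & 0x7fffffffULL;
	 *		return msr & ~(MSR_SF | MSR_ISF | MSR_LE);
	 *	}
	 */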

1:	/* Return from OF */
	FIXUP_ENDIAN

	/* Just make sure that the top 32 bits of r1 didn't get
	 * corrupted by OF
	 */
	rldicl	r1,r1,0,32
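	/*
	 * rldicl r1,r1,0,32 in C: keep only the low 32 bits of the stack
	 * pointer, in case OF corrupted the top half.
	 *
	 *	static unsigned long sanitize_sp(unsigned long sp)
	 *	{
	 *		return sp & 0xffffffffUL;
	 *	}
	 */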

	/* Restore the MSR (back to 64 bits) */
	ld	r0,_MSR(r1)
	MTMSRD(r0)
	isync

	/* Restore other registers */
	REST_GPR(2, r1)
	REST_GPR(13, r1)
	REST_NVGPRS(r1)
	ld	r4,_CCR(r1)
	mtcr	r4

	addi	r1,r1,SWITCH_FRAME_SIZE
	ld	r0,16(r1)
	mtlr	r0
	blr