/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright (C) 2017 Andes Technology Corporation */

#include <linux/init.h>
#include <linux/linkage.h>
#include <linux/export.h>
#include <asm/asm.h>
#include <asm/csr.h>
#include <asm/unistd.h>
#include <asm/thread_info.h>
#include <asm/asm-offsets.h>
#include <asm/ftrace.h>

	.text
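
/*
 * The patched call site at a traced function's entry is two instructions
 * (auipc t0 / jalr t0), so when ftrace_caller is entered, t0 holds the
 * address 8 bytes past the function entry.  Subtracting FENTRY_RA_OFFSET
 * from t0 therefore recovers the traced function's entry address.
 */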
#define FENTRY_RA_OFFSET	8
#define ABI_SIZE_ON_STACK	80
#define ABI_A0			0
#define ABI_A1			8
#define ABI_A2			16
#define ABI_A3			24
#define ABI_A4			32
#define ABI_A5			40
#define ABI_A6			48
#define ABI_A7			56
#define ABI_T0			64
#define ABI_RA			72
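
/*
 * SAVE_ABI/RESTORE_ABI spill and reload only what the ftrace callback may
 * clobber: the argument registers a0-a7, t0 (holding the detour return
 * address inside the traced function) and ra (the parent return address).
 */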
	.macro SAVE_ABI
	addi	sp, sp, -ABI_SIZE_ON_STACK

	REG_S	a0, ABI_A0(sp)
	REG_S	a1, ABI_A1(sp)
	REG_S	a2, ABI_A2(sp)
	REG_S	a3, ABI_A3(sp)
	REG_S	a4, ABI_A4(sp)
	REG_S	a5, ABI_A5(sp)
	REG_S	a6, ABI_A6(sp)
	REG_S	a7, ABI_A7(sp)
	REG_S	t0, ABI_T0(sp)
	REG_S	ra, ABI_RA(sp)
	.endm

	.macro RESTORE_ABI
	REG_L	a0, ABI_A0(sp)
	REG_L	a1, ABI_A1(sp)
	REG_L	a2, ABI_A2(sp)
	REG_L	a3, ABI_A3(sp)
	REG_L	a4, ABI_A4(sp)
	REG_L	a5, ABI_A5(sp)
	REG_L	a6, ABI_A6(sp)
	REG_L	a7, ABI_A7(sp)
	REG_L	t0, ABI_T0(sp)
	REG_L	ra, ABI_RA(sp)

	addi	sp, sp, ABI_SIZE_ON_STACK
	.endm

#ifdef CONFIG_DYNAMIC_FTRACE_WITH_ARGS

/**
 * SAVE_ABI_REGS - save registers into a struct ftrace_regs layout on the stack
 *
 * After the stack is established,
 *
 * 0(sp) stores the PC of the traced function which can be accessed
 * by &(fregs)->epc in the tracing function.  Note that the real
 * function entry address should be computed with -FENTRY_RA_OFFSET.
 *
 * 8(sp) stores the function return address (i.e. parent IP) that
 * can be accessed by &(fregs)->ra in the tracing function.
 *
 * The other regs are saved at their respective locations and can be
 * accessed through the corresponding ftrace_regs members.
 *
 * Here is the layout of the stack for your reference.
 *
 * FREGS_SIZE_ON_STACK ->  +++++++++
 *                         + ..... +
 *                         + a0-a7 + --++++-> ftrace_caller saved
 *                         + t1    + --++++-> direct tramp address
 *                         + s0    + --+     // frame pointer
 *                         + sp    +   +
 *                         + ra    + --+     // parent IP
 *         sp          ->  + epc   + --+     // PC
 *                         +++++++++
 **/
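
/*
 * The x-register numbers used below map to the ABI names ra (x1),
 * t1 (x6), s0/fp (x8) and a0-a7 (x10-x17).
 */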
	.macro SAVE_ABI_REGS
	mv	t4, sp			// Save original SP in T4
	addi	sp, sp, -FREGS_SIZE_ON_STACK

	REG_S	t0, FREGS_EPC(sp)
	REG_S	x1, FREGS_RA(sp)
	REG_S	t4, FREGS_SP(sp)	// Put original SP on stack
#ifdef HAVE_FUNCTION_GRAPH_FP_TEST
	REG_S	x8, FREGS_S0(sp)
#endif
	REG_S	x6, FREGS_T1(sp)

	// save the arguments
	REG_S	x10, FREGS_A0(sp)
	REG_S	x11, FREGS_A1(sp)
	REG_S	x12, FREGS_A2(sp)
	REG_S	x13, FREGS_A3(sp)
	REG_S	x14, FREGS_A4(sp)
	REG_S	x15, FREGS_A5(sp)
	REG_S	x16, FREGS_A6(sp)
	REG_S	x17, FREGS_A7(sp)
	.endm

	.macro RESTORE_ABI_REGS, all=0
	REG_L	t0, FREGS_EPC(sp)
	REG_L	x1, FREGS_RA(sp)
#ifdef HAVE_FUNCTION_GRAPH_FP_TEST
	REG_L	x8, FREGS_S0(sp)
#endif
	REG_L	x6, FREGS_T1(sp)

	// restore the arguments
	REG_L	x10, FREGS_A0(sp)
	REG_L	x11, FREGS_A1(sp)
	REG_L	x12, FREGS_A2(sp)
	REG_L	x13, FREGS_A3(sp)
	REG_L	x14, FREGS_A4(sp)
	REG_L	x15, FREGS_A5(sp)
	REG_L	x16, FREGS_A6(sp)
	REG_L	x17, FREGS_A7(sp)

	addi	sp, sp, FREGS_SIZE_ON_STACK
	.endm
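
/*
 * PREPARE_ARGS sets up the ftrace callback arguments:
 *   a0 - ip of the traced function (t0 minus the 8-byte detour)
 *   a1 - parent ip, i.e. the traced function's return address in ra
 *   a2 - the ftrace_ops pointer loaded from function_trace_op
 *   a3 - sp, pointing at the register frame saved by SAVE_ABI_REGS
 */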
	.macro PREPARE_ARGS
	addi	a0, t0, -FENTRY_RA_OFFSET
	la	a1, function_trace_op
	REG_L	a2, 0(a1)
	mv	a1, ra
	mv	a3, sp
	.endm

#endif /* CONFIG_DYNAMIC_FTRACE_WITH_ARGS */

#ifndef CONFIG_DYNAMIC_FTRACE_WITH_ARGS
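
/*
 * Minimal tracer entry: the callback arguments mirror PREPARE_ARGS
 * (a0 = traced function entry, a1 = parent return address,
 * a2 = function_trace_op, a3 = sp), but only the small SAVE_ABI frame
 * is built rather than a full ftrace_regs layout.
 */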
SYM_FUNC_START(ftrace_caller)
	SAVE_ABI

	addi	a0, t0, -FENTRY_RA_OFFSET
	la	a1, function_trace_op
	REG_L	a2, 0(a1)
	mv	a1, ra
	mv	a3, sp
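
/*
 * ftrace_call is a global inner label so that the "call ftrace_stub"
 * below can be patched at runtime to call the registered handler;
 * ftrace_stub itself simply returns.
 */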
SYM_INNER_LABEL(ftrace_call, SYM_L_GLOBAL)
	call	ftrace_stub

#ifdef CONFIG_FUNCTION_GRAPH_TRACER
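	/*
	 * Arguments for the graph-tracer hook: a0 points at the saved ra
	 * slot (so the return address can be hijacked), a1 is the traced
	 * function's entry address, and a2 is the frame pointer when
	 * HAVE_FUNCTION_GRAPH_FP_TEST is enabled.
	 */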
	addi	a0, sp, ABI_RA
	REG_L	a1, ABI_T0(sp)
	addi	a1, a1, -FENTRY_RA_OFFSET
#ifdef HAVE_FUNCTION_GRAPH_FP_TEST
	mv	a2, s0
#endif
SYM_INNER_LABEL(ftrace_graph_call, SYM_L_GLOBAL)
	call	ftrace_stub
#endif
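
	/*
	 * Restore the scratch registers and return into the traced function:
	 * t0 still points just past the patched entry, i.e. at the first
	 * instruction of the remaining function body.
	 */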
	RESTORE_ABI
	jr	t0
SYM_FUNC_END(ftrace_caller)

#else /* CONFIG_DYNAMIC_FTRACE_WITH_ARGS */

SYM_FUNC_START(ftrace_caller)
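	/*
	 * t1 is cleared before the frame is saved; a direct-call trampoline
	 * may overwrite the saved t1 slot with its target, which is checked
	 * after the callback returns (see .Ldirect below).
	 */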
	mv	t1, zero
	SAVE_ABI_REGS
	PREPARE_ARGS

SYM_INNER_LABEL(ftrace_call, SYM_L_GLOBAL)
	call	ftrace_stub
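
	/*
	 * If the callback installed a direct trampoline, the reloaded t1 is
	 * non-zero and we tail-jump to it; otherwise return to the traced
	 * function through t0 as usual.
	 */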
	RESTORE_ABI_REGS
	bnez	t1, .Ldirect
	jr	t0
.Ldirect:
	jr	t1
SYM_FUNC_END(ftrace_caller)

#endif /* CONFIG_DYNAMIC_FTRACE_WITH_ARGS */

#ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
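/*
 * ftrace_stub_direct_tramp is a no-op direct trampoline: it simply
 * returns to the traced function through t0.
 */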
SYM_CODE_START(ftrace_stub_direct_tramp)
	jr	t0
SYM_CODE_END(ftrace_stub_direct_tramp)
#endif /* CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS */