KVM: VMX: Move VM-Enter + VM-Exit handling to non-inline sub-routines

Transitioning to/from a VMX guest requires KVM to manually save/load
the bulk of CPU state that the guest is allowed to directly access,
e.g. XSAVE state, CR2, GPRs, etc... For obvious reasons, loading the
guest's GPR snapshot prior to VM-Enter and saving the snapshot after
VM-Exit is done via handcoded assembly. The assembly blob is written
as inline asm so that it can easily access KVM-defined structs that
are used to hold guest state, e.g. moving the blob to a standalone
assembly file would require generating defines for struct offsets.

The other relevant aspect of VMX transitions in KVM is the handling of
VM-Exits. KVM doesn't employ a separate VM-Exit handler per se, but
rather treats the VMX transition as a mega instruction (with many side
effects), i.e. sets the VMCS.HOST_RIP to a label immediately following
VMLAUNCH/VMRESUME. The label is then exposed to C code via a global
variable definition in the inline assembly.

Because of the global variable, KVM takes steps to (attempt to) ensure
only a single instance of the owning C function, e.g. vmx_vcpu_run, is
generated by the compiler. The earliest approach placed the inline
assembly in a separate noinline function[1]. Later, the assembly was
folded back into vmx_vcpu_run() and tagged with __noclone[2][3], which
is still used today.

After moving to __noclone, an edge case was encountered where GCC's
-ftracer optimization resulted in the inline assembly blob being
duplicated. This was "fixed" by explicitly disabling -ftracer in the
__noclone definition[4].

Recently, it was found that disabling -ftracer causes build warnings
for unsuspecting users of __noclone[5], and more importantly for KVM,
prevents the compiler from properly optimizing vmx_vcpu_run()[6]. And
perhaps most importantly of all, it was pointed out that there is no
way to prevent duplication of a function with 100% reliability[7],
i.e. more edge cases may be encountered in the future.

So to summarize, the only way to prevent the compiler from duplicating
the global variable definition is to move the variable out of inline
assembly, which has been suggested several times over[1][7][8].

Resolve the aforementioned issues by moving the VMLAUNCH+VMRESUME and
VM-Exit "handler" to standalone assembly sub-routines. Moving only
the core VMX transition code allows the struct indexing to remain as
inline assembly and also allows the sub-routines to be used by
nested_vmx_check_vmentry_hw(). Reusing the sub-routines has a happy
side-effect of eliminating two VMWRITEs in the nested_early_check path
as there is no longer a need to dynamically change VMCS.HOST_RIP.
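
As a rough illustration of the new layout, the standalone sub-routines
might look something like the sketch below (hedged: VM-Fail and fault
handling are omitted, and the labels/macros are illustrative rather
than the verbatim implementation):

	ENTRY(vmx_vmenter)
		/* EFLAGS.ZF is set by the caller if VMCS.LAUNCHED == 0 */
		je 1f
		vmresume
		ret
	1:	vmlaunch
		ret
	ENDPROC(vmx_vmenter)

	/* VMCS.HOST_RIP points here, i.e. VM-Exit "returns" to the caller. */
	ENTRY(vmx_vmexit)
		ret
	ENDPROC(vmx_vmexit)
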
Note that callers to vmx_vmenter() must account for the CALL modifying
RSP, e.g. must subtract op-size from RSP when synchronizing RSP with
VMCS.HOST_RSP and "restore" RSP prior to the CALL. There are no great
alternatives to fudging RSP. Saving RSP in vmx_vmenter() is difficult
because doing so requires a second register (VMWRITE does not provide
an immediate encoding for the VMCS field and KVM supports Hyper-V's
memory-based eVMCS ABI). The other more drastic alternative would be
to eschew VMCS.HOST_RSP and manually save/load RSP using a per-cpu
variable (which can be encoded as e.g. gs:[imm]). But because a valid
stack is needed at the time of VM-Exit (NMIs aren't blocked and a user
could theoretically insert INT3/INT1ICEBRK at the VM-Exit handler), a
dedicated per-cpu VM-Exit stack would be required. A dedicated stack
isn't difficult to implement, but it would require at least one page
per CPU and knowledge of the stack in the dumpstack routines. And in
most cases there is essentially zero overhead in dynamically updating
VMCS.HOST_RSP, e.g. the VMWRITE can be avoided for all but the first
VMLAUNCH unless nested_early_check=1, which is not a fast path. In
other words, avoiding the VMCS.HOST_RSP update by using a dedicated
stack would only make the code marginally less ugly while requiring at
least one page per CPU and forcing the kernel to be aware (and approve) of
the VM-Exit stack shenanigans.
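
To make the RSP fudging concrete, a caller might do something along
the lines of the hedged sketch below (the compare/VMWRITE of
VMCS.HOST_RSP is elided and the exact sequence differs from the real
code):

	/* Temporarily mimic the CALL's push of the return RIP... */
	sub $WORD_SIZE, %_ASM_SP

	/*
	 * ...sync VMCS.HOST_RSP with the adjusted RSP, e.g. via CMP plus a
	 * conditional VMWRITE (or a C helper), only if it has changed...
	 */

	/* ..."restore" RSP prior to the CALL. */
	add $WORD_SIZE, %_ASM_SP

	/*
	 * The CALL's push of the return RIP re-applies the adjustment, i.e.
	 * the actual RSP at VM-Exit matches the value written to HOST_RSP.
	 */
	call vmx_vmenter
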
[1] cea15c24ca39 ("KVM: Move KVM context switch into own function")
[2] a3b5ba49a8c5 ("KVM: VMX: add the __noclone attribute to vmx_vcpu_run")
[3] 104f226bfd0a ("KVM: VMX: Fold __vmx_vcpu_run() into vmx_vcpu_run()")
[4] 95272c29378e ("compiler-gcc: disable -ftracer for __noclone functions")
[5] https://lkml.kernel.org/r/20181218140105.ajuiglkpvstt3qxs@treble
[6] https://patchwork.kernel.org/patch/8707981/#21817015
[7] https://lkml.kernel.org/r/ri6y38lo23g.fsf@suse.cz
[8] https://lkml.kernel.org/r/20181218212042.GE25620@tassilo.jf.intel.com
Suggested-by: Andi Kleen <ak@linux.intel.com>
Suggested-by: Martin Jambor <mjambor@suse.cz>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Nadav Amit <namit@vmware.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Martin Jambor <mjambor@suse.cz>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Miroslav Benes <mbenes@suse.cz>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Reviewed-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

/* SPDX-License-Identifier: GPL-2.0 */
#include <linux/linkage.h>
#include <asm/asm.h>
#include <asm/bitsperlong.h>
#include <asm/kvm_vcpu_regs.h>
#include <asm/nospec-branch.h>
#include <asm/percpu.h>
#include <asm/segment.h>
#include "kvm-asm-offsets.h"
#include "run_flags.h"

#define WORD_SIZE (BITS_PER_LONG / 8)

#define VCPU_RAX __VCPU_REGS_RAX * WORD_SIZE
#define VCPU_RCX __VCPU_REGS_RCX * WORD_SIZE
#define VCPU_RDX __VCPU_REGS_RDX * WORD_SIZE
#define VCPU_RBX __VCPU_REGS_RBX * WORD_SIZE
/* Intentionally omit RSP as it's context switched by hardware */
#define VCPU_RBP __VCPU_REGS_RBP * WORD_SIZE
#define VCPU_RSI __VCPU_REGS_RSI * WORD_SIZE
#define VCPU_RDI __VCPU_REGS_RDI * WORD_SIZE

#ifdef CONFIG_X86_64
#define VCPU_R8  __VCPU_REGS_R8  * WORD_SIZE
#define VCPU_R9  __VCPU_REGS_R9  * WORD_SIZE
#define VCPU_R10 __VCPU_REGS_R10 * WORD_SIZE
#define VCPU_R11 __VCPU_REGS_R11 * WORD_SIZE
#define VCPU_R12 __VCPU_REGS_R12 * WORD_SIZE
#define VCPU_R13 __VCPU_REGS_R13 * WORD_SIZE
#define VCPU_R14 __VCPU_REGS_R14 * WORD_SIZE
#define VCPU_R15 __VCPU_REGS_R15 * WORD_SIZE
#endif

.macro VMX_DO_EVENT_IRQOFF call_insn call_target
	/*
	 * Unconditionally create a stack frame, getting the correct RSP on the
	 * stack (for x86-64) would take two instructions anyways, and RBP can
	 * be used to restore RSP to make objtool happy (see below).
	 */
	push %_ASM_BP
	mov %_ASM_SP, %_ASM_BP

#ifdef CONFIG_X86_64
	/*
	 * Align RSP to a 16-byte boundary (to emulate CPU behavior) before
	 * creating the synthetic interrupt stack frame for the IRQ/NMI.
	 */
	and $-16, %rsp
	push $__KERNEL_DS
	push %rbp
#endif
	pushf
	push $__KERNEL_CS
	\call_insn \call_target

	/*
	 * "Restore" RSP from RBP, even though IRET has already unwound RSP to
	 * the correct value. objtool doesn't know the callee will IRET and,
	 * without the explicit restore, thinks the stack is getting walloped.
	 * Using an unwind hint is problematic due to x86-64's dynamic alignment.
	 */
	leave
	RET
.endm

.section .noinstr.text, "ax"
/**
 * __vmx_vcpu_run - Run a vCPU via a transition to VMX guest mode
 * @vmx:	struct vcpu_vmx *
 * @regs:	unsigned long * (to guest registers)
 * @flags:	VMX_RUN_VMRESUME: use VMRESUME instead of VMLAUNCH
 *		VMX_RUN_SAVE_SPEC_CTRL: save guest SPEC_CTRL into vmx->spec_ctrl
 *
 * Returns:
 *	0 on VM-Exit, 1 on VM-Fail
 */
SYM_FUNC_START(__vmx_vcpu_run)
	push %_ASM_BP
	mov %_ASM_SP, %_ASM_BP
#ifdef CONFIG_X86_64
	push %r15
	push %r14
	push %r13
	push %r12
#else
	push %edi
	push %esi
#endif
	push %_ASM_BX

	/* Save @vmx for SPEC_CTRL handling */
	push %_ASM_ARG1

	/* Save @flags for SPEC_CTRL handling */
	push %_ASM_ARG3

	/*
	 * Save @regs, _ASM_ARG2 may be modified by vmx_update_host_rsp() and
	 * @regs is needed after VM-Exit to save the guest's register values.
	 */
	push %_ASM_ARG2

	/* Copy @flags to EBX, _ASM_ARG3 is volatile. */
	mov %_ASM_ARG3L, %ebx

	lea (%_ASM_SP), %_ASM_ARG2
	call vmx_update_host_rsp

	ALTERNATIVE "jmp .Lspec_ctrl_done", "", X86_FEATURE_MSR_SPEC_CTRL

	/*
	 * SPEC_CTRL handling: if the guest's SPEC_CTRL value differs from the
	 * host's, write the MSR.
	 *
	 * IMPORTANT: To avoid RSB underflow attacks and any other nastiness,
	 * there must not be any returns or indirect branches between this code
	 * and vmentry.
	 */
	mov 2*WORD_SIZE(%_ASM_SP), %_ASM_DI
	movl VMX_spec_ctrl(%_ASM_DI), %edi
	movl PER_CPU_VAR(x86_spec_ctrl_current), %esi
	cmp %edi, %esi
	je .Lspec_ctrl_done
	mov $MSR_IA32_SPEC_CTRL, %ecx
	xor %edx, %edx
	mov %edi, %eax
	wrmsr

.Lspec_ctrl_done:

	/*
	 * Since vmentry is serializing on affected CPUs, there's no need for
	 * an LFENCE to stop speculation from skipping the wrmsr.
	 */

	/* Load @regs to RAX. */
	mov (%_ASM_SP), %_ASM_AX

	/* Check if vmlaunch or vmresume is needed */
	bt $VMX_RUN_VMRESUME_SHIFT, %ebx

	/* Load guest registers. Don't clobber flags. */
	mov VCPU_RCX(%_ASM_AX), %_ASM_CX
	mov VCPU_RDX(%_ASM_AX), %_ASM_DX
	mov VCPU_RBX(%_ASM_AX), %_ASM_BX
	mov VCPU_RBP(%_ASM_AX), %_ASM_BP
	mov VCPU_RSI(%_ASM_AX), %_ASM_SI
	mov VCPU_RDI(%_ASM_AX), %_ASM_DI
#ifdef CONFIG_X86_64
	mov VCPU_R8 (%_ASM_AX), %r8
	mov VCPU_R9 (%_ASM_AX), %r9
	mov VCPU_R10(%_ASM_AX), %r10
	mov VCPU_R11(%_ASM_AX), %r11
	mov VCPU_R12(%_ASM_AX), %r12
	mov VCPU_R13(%_ASM_AX), %r13
	mov VCPU_R14(%_ASM_AX), %r14
	mov VCPU_R15(%_ASM_AX), %r15
#endif
	/* Load guest RAX. This kills the @regs pointer! */
	mov VCPU_RAX(%_ASM_AX), %_ASM_AX

	/* Clobbers EFLAGS.ZF */
	CLEAR_CPU_BUFFERS

	/* Check EFLAGS.CF from the VMX_RUN_VMRESUME bit test above. */
	jnc .Lvmlaunch

	/*
	 * After a successful VMRESUME/VMLAUNCH, control flow "magically"
	 * resumes below at 'vmx_vmexit' due to the VMCS HOST_RIP setting.
	 * So this isn't a typical function and objtool needs to be told to
	 * save the unwind state here and restore it below.
	 */
	UNWIND_HINT_SAVE

	/*
	 * If VMRESUME/VMLAUNCH and corresponding vmexit succeed, execution resumes at
	 * the 'vmx_vmexit' label below.
	 */
.Lvmresume:
	vmresume
	jmp .Lvmfail

.Lvmlaunch:
	vmlaunch
	jmp .Lvmfail

	_ASM_EXTABLE(.Lvmresume, .Lfixup)
	_ASM_EXTABLE(.Lvmlaunch, .Lfixup)

SYM_INNER_LABEL_ALIGN(vmx_vmexit, SYM_L_GLOBAL)

	/* Restore unwind state from before the VMRESUME/VMLAUNCH. */
	UNWIND_HINT_RESTORE
	ENDBR

	/* Temporarily save guest's RAX. */
	push %_ASM_AX

	/* Reload @regs to RAX. */
	mov WORD_SIZE(%_ASM_SP), %_ASM_AX

	/* Save all guest registers, including RAX from the stack */
	pop VCPU_RAX(%_ASM_AX)
	mov %_ASM_CX, VCPU_RCX(%_ASM_AX)
	mov %_ASM_DX, VCPU_RDX(%_ASM_AX)
	mov %_ASM_BX, VCPU_RBX(%_ASM_AX)
	mov %_ASM_BP, VCPU_RBP(%_ASM_AX)
	mov %_ASM_SI, VCPU_RSI(%_ASM_AX)
	mov %_ASM_DI, VCPU_RDI(%_ASM_AX)
#ifdef CONFIG_X86_64
	mov %r8, VCPU_R8 (%_ASM_AX)
	mov %r9, VCPU_R9 (%_ASM_AX)
	mov %r10, VCPU_R10(%_ASM_AX)
	mov %r11, VCPU_R11(%_ASM_AX)
	mov %r12, VCPU_R12(%_ASM_AX)
	mov %r13, VCPU_R13(%_ASM_AX)
	mov %r14, VCPU_R14(%_ASM_AX)
	mov %r15, VCPU_R15(%_ASM_AX)
#endif

	/* Clear return value to indicate VM-Exit (as opposed to VM-Fail). */
	xor %ebx, %ebx

.Lclear_regs:
	/* Discard @regs. The register is irrelevant, it just can't be RBX. */
	pop %_ASM_AX

	/*
	 * Clear all general purpose registers except RSP and RBX to prevent
	 * speculative use of the guest's values, even those that are reloaded
	 * via the stack. In theory, an L1 cache miss when restoring registers
	 * could lead to speculative execution with the guest's values.
	 * Zeroing XORs are dirt cheap, i.e. the extra paranoia is essentially
	 * free. RSP and RBX are exempt as RSP is restored by hardware during
	 * VM-Exit and RBX is explicitly loaded with 0 or 1 to hold the return
	 * value.
	 */
	xor %eax, %eax
	xor %ecx, %ecx
	xor %edx, %edx
	xor %ebp, %ebp
	xor %esi, %esi
	xor %edi, %edi
#ifdef CONFIG_X86_64
	xor %r8d, %r8d
	xor %r9d, %r9d
	xor %r10d, %r10d
	xor %r11d, %r11d
	xor %r12d, %r12d
	xor %r13d, %r13d
	xor %r14d, %r14d
	xor %r15d, %r15d
#endif

	/*
	 * IMPORTANT: RSB filling and SPEC_CTRL handling must be done before
	 * the first unbalanced RET after vmexit!
	 *
	 * For retpoline or IBRS, RSB filling is needed to prevent poisoned RSB
	 * entries and (in some cases) RSB underflow.
	 *
	 * eIBRS has its own protection against poisoned RSB, so it doesn't
	 * need the RSB filling sequence. But it does need to be enabled, and a
	 * single call to retire, before the first unbalanced RET.
	 */
	FILL_RETURN_BUFFER %_ASM_CX, RSB_CLEAR_LOOPS, X86_FEATURE_RSB_VMEXIT,\
			   X86_FEATURE_RSB_VMEXIT_LITE

	pop %_ASM_ARG2	/* @flags */
	pop %_ASM_ARG1	/* @vmx */

	call vmx_spec_ctrl_restore_host

	CLEAR_BRANCH_HISTORY_VMEXIT

	/* Put return value in AX */
	mov %_ASM_BX, %_ASM_AX

	pop %_ASM_BX
#ifdef CONFIG_X86_64
	pop %r12
	pop %r13
	pop %r14
	pop %r15
#else
	pop %esi
	pop %edi
#endif
	pop %_ASM_BP
	RET

.Lfixup:
	cmpb $0, _ASM_RIP(kvm_rebooting)
	jne .Lvmfail
	ud2
.Lvmfail:
	/* VM-Fail: set return value to 1 */
	mov $1, %_ASM_BX
	jmp .Lclear_regs

SYM_FUNC_END(__vmx_vcpu_run)

SYM_FUNC_START(vmx_do_nmi_irqoff)
	VMX_DO_EVENT_IRQOFF call asm_exc_nmi_kvm_vmx
SYM_FUNC_END(vmx_do_nmi_irqoff)

#ifndef CONFIG_CC_HAS_ASM_GOTO_OUTPUT

/**
 * vmread_error_trampoline - Trampoline from inline asm to vmread_error()
 * @field:	VMCS field encoding that failed
 * @fault:	%true if the VMREAD faulted, %false if it failed
 *
 * Save and restore volatile registers across a call to vmread_error(). Note,
 * all parameters are passed on the stack.
 */
SYM_FUNC_START(vmread_error_trampoline)
	push %_ASM_BP
	mov %_ASM_SP, %_ASM_BP

	push %_ASM_AX
	push %_ASM_CX
	push %_ASM_DX
#ifdef CONFIG_X86_64
	push %rdi
	push %rsi
	push %r8
	push %r9
	push %r10
	push %r11
#endif

	/* Load @field and @fault to arg1 and arg2 respectively. */
	mov 3*WORD_SIZE(%_ASM_BP), %_ASM_ARG2
	mov 2*WORD_SIZE(%_ASM_BP), %_ASM_ARG1

	call vmread_error_trampoline2

	/* Zero out @fault, which will be popped into the result register. */
	_ASM_MOV $0, 3*WORD_SIZE(%_ASM_BP)

#ifdef CONFIG_X86_64
	pop %r11
	pop %r10
	pop %r9
	pop %r8
	pop %rsi
	pop %rdi
#endif
	pop %_ASM_DX
	pop %_ASM_CX
	pop %_ASM_AX
	pop %_ASM_BP

	RET
SYM_FUNC_END(vmread_error_trampoline)
#endif

.section .text, "ax"

SYM_FUNC_START(vmx_do_interrupt_irqoff)
	VMX_DO_EVENT_IRQOFF CALL_NOSPEC _ASM_ARG1
SYM_FUNC_END(vmx_do_interrupt_irqoff)