// SPDX-License-Identifier: GPL-2.0
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt

#include <linux/errno.h>
#include <linux/smp.h>

#include "x86.h"
#include "../cpuid.h"
#include "hyperv.h"
#include "nested.h"
#include "vmcs.h"
#include "vmx.h"
#include "trace.h"

#define CC KVM_NESTED_VMENTER_CONSISTENCY_CHECK

u64 nested_get_evmptr(struct kvm_vcpu *vcpu)
{
	struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);

	if (unlikely(kvm_hv_get_assist_page(vcpu)))
		return EVMPTR_INVALID;

	if (unlikely(!hv_vcpu->vp_assist_page.enlighten_vmentry))
		return EVMPTR_INVALID;

	return hv_vcpu->vp_assist_page.current_nested_vmcs;
}

uint16_t nested_get_evmcs_version(struct kvm_vcpu *vcpu)
{
	/*
	 * vmcs_version represents the range of supported Enlightened VMCS
	 * versions: the lower 8 bits are the minimal version, the higher 8
	 * bits are the maximum supported version.  KVM supports versions
	 * from 1 to KVM_EVMCS_VERSION.
	 *
	 * Note, do not check whether Hyper-V is fully enabled in guest
	 * CPUID; this helper is used to _get_ the vCPU's supported CPUID.
	 */
	if (kvm_cpu_cap_get(X86_FEATURE_VMX) &&
	    (!vcpu || to_vmx(vcpu)->nested.enlightened_vmcs_enabled))
		return (KVM_EVMCS_VERSION << 8) | 1;

	return 0;
}

enum evmcs_revision {
	EVMCSv1_LEGACY,
	NR_EVMCS_REVISIONS,
};

enum evmcs_ctrl_type {
	EVMCS_EXIT_CTRLS,
	EVMCS_ENTRY_CTRLS,
	EVMCS_EXEC_CTRL,
	EVMCS_2NDEXEC,
	EVMCS_3RDEXEC,
	EVMCS_PINCTRL,
	EVMCS_VMFUNC,
	NR_EVMCS_CTRLS,
};

static const u32 evmcs_supported_ctrls[NR_EVMCS_CTRLS][NR_EVMCS_REVISIONS] = {
	[EVMCS_EXIT_CTRLS] = {
		[EVMCSv1_LEGACY] = EVMCS1_SUPPORTED_VMEXIT_CTRL,
	},
	[EVMCS_ENTRY_CTRLS] = {
		[EVMCSv1_LEGACY] = EVMCS1_SUPPORTED_VMENTRY_CTRL,
	},
	[EVMCS_EXEC_CTRL] = {
		[EVMCSv1_LEGACY] = EVMCS1_SUPPORTED_EXEC_CTRL,
	},
	[EVMCS_2NDEXEC] = {
		[EVMCSv1_LEGACY] = EVMCS1_SUPPORTED_2NDEXEC & ~SECONDARY_EXEC_TSC_SCALING,
	},
	[EVMCS_3RDEXEC] = {
		[EVMCSv1_LEGACY] = EVMCS1_SUPPORTED_3RDEXEC,
	},
	[EVMCS_PINCTRL] = {
		[EVMCSv1_LEGACY] = EVMCS1_SUPPORTED_PINCTRL,
	},
	[EVMCS_VMFUNC] = {
		[EVMCSv1_LEGACY] = EVMCS1_SUPPORTED_VMFUNC,
	},
};

static u32 evmcs_get_supported_ctls(enum evmcs_ctrl_type ctrl_type)
{
	enum evmcs_revision evmcs_rev = EVMCSv1_LEGACY;

	return evmcs_supported_ctrls[ctrl_type][evmcs_rev];
}

static bool evmcs_has_perf_global_ctrl(struct kvm_vcpu *vcpu)
{
	struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);

	/*
	 * PERF_GLOBAL_CTRL has a quirk where some Windows guests may fail to
	 * boot if a PV CPUID feature flag is not also set.  Treat the fields
	 * as unsupported if the flag is not set in guest CPUID.  This should
	 * be called only for guest accesses, and all guest accesses should be
	 * gated on Hyper-V being enabled and initialized.
	 */
	if (WARN_ON_ONCE(!hv_vcpu))
		return false;

	return hv_vcpu->cpuid_cache.nested_ebx & HV_X64_NESTED_EVMCS1_PERF_GLOBAL_CTRL;
}

void nested_evmcs_filter_control_msr(struct kvm_vcpu *vcpu, u32 msr_index, u64 *pdata)
{
	u32 ctl_low = (u32)*pdata;
	u32 ctl_high = (u32)(*pdata >> 32);
	u32 supported_ctrls;

	/*
	 * Hyper-V 2016 and 2019 try using these features even when eVMCS
	 * is enabled but there are no corresponding fields.
	 */
	switch (msr_index) {
	case MSR_IA32_VMX_EXIT_CTLS:
	case MSR_IA32_VMX_TRUE_EXIT_CTLS:
		supported_ctrls = evmcs_get_supported_ctls(EVMCS_EXIT_CTRLS);
		if (!evmcs_has_perf_global_ctrl(vcpu))
			supported_ctrls &= ~VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL;
		ctl_high &= supported_ctrls;
		break;
	case MSR_IA32_VMX_ENTRY_CTLS:
	case MSR_IA32_VMX_TRUE_ENTRY_CTLS:
		supported_ctrls = evmcs_get_supported_ctls(EVMCS_ENTRY_CTRLS);
		if (!evmcs_has_perf_global_ctrl(vcpu))
			supported_ctrls &= ~VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL;
		ctl_high &= supported_ctrls;
		break;
	case MSR_IA32_VMX_PROCBASED_CTLS:
	case MSR_IA32_VMX_TRUE_PROCBASED_CTLS:
		ctl_high &= evmcs_get_supported_ctls(EVMCS_EXEC_CTRL);
		break;
	case MSR_IA32_VMX_PROCBASED_CTLS2:
		ctl_high &= evmcs_get_supported_ctls(EVMCS_2NDEXEC);
		break;
	case MSR_IA32_VMX_TRUE_PINBASED_CTLS:
	case MSR_IA32_VMX_PINBASED_CTLS:
		ctl_high &= evmcs_get_supported_ctls(EVMCS_PINCTRL);
		break;
	case MSR_IA32_VMX_VMFUNC:
		ctl_low &= evmcs_get_supported_ctls(EVMCS_VMFUNC);
		break;
	}

	*pdata = ctl_low | ((u64)ctl_high << 32);
}

static bool nested_evmcs_is_valid_controls(enum evmcs_ctrl_type ctrl_type,
					   u32 val)
{
	return !(val & ~evmcs_get_supported_ctls(ctrl_type));
}

int nested_evmcs_check_controls(struct vmcs12 *vmcs12)
{
	if (CC(!nested_evmcs_is_valid_controls(EVMCS_PINCTRL,
					       vmcs12->pin_based_vm_exec_control)))
		return -EINVAL;

	if (CC(!nested_evmcs_is_valid_controls(EVMCS_EXEC_CTRL,
					       vmcs12->cpu_based_vm_exec_control)))
		return -EINVAL;

	if (CC(!nested_evmcs_is_valid_controls(EVMCS_2NDEXEC,
					       vmcs12->secondary_vm_exec_control)))
		return -EINVAL;

	if (CC(!nested_evmcs_is_valid_controls(EVMCS_EXIT_CTRLS,
					       vmcs12->vm_exit_controls)))
		return -EINVAL;

	if (CC(!nested_evmcs_is_valid_controls(EVMCS_ENTRY_CTRLS,
					       vmcs12->vm_entry_controls)))
		return -EINVAL;

	/*
	 * VM-Func controls are 64-bit, but KVM currently doesn't support any
	 * controls in bits 63:32, i.e. dropping those bits on the consistency
	 * check is intentional.
	 */
	if (WARN_ON_ONCE(vmcs12->vm_function_control >> 32))
		return -EINVAL;

	if (CC(!nested_evmcs_is_valid_controls(EVMCS_VMFUNC,
					       vmcs12->vm_function_control)))
		return -EINVAL;

	return 0;
}

int nested_enable_evmcs(struct kvm_vcpu *vcpu,
			uint16_t *vmcs_version)
{
	struct vcpu_vmx *vmx = to_vmx(vcpu);

	vmx->nested.enlightened_vmcs_enabled = true;

	if (vmcs_version)
		*vmcs_version = nested_get_evmcs_version(vcpu);

	return 0;
}

bool nested_evmcs_l2_tlb_flush_enabled(struct kvm_vcpu *vcpu)
{
	struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);
	struct vcpu_vmx *vmx = to_vmx(vcpu);
	struct hv_enlightened_vmcs *evmcs = vmx->nested.hv_evmcs;

	if (!hv_vcpu || !evmcs)
		return false;

	if (!evmcs->hv_enlightenments_control.nested_flush_hypercall)
		return false;

	return hv_vcpu->vp_assist_page.nested_control.features.directhypercall;
}

void vmx_hv_inject_synthetic_vmexit_post_tlb_flush(struct kvm_vcpu *vcpu)
{
	nested_vmx_vmexit(vcpu, HV_VMX_SYNTHETIC_EXIT_REASON_TRAP_AFTER_FLUSH, 0, 0);
}