License cleanup: add SPDX GPL-2.0 license identifier to files with no license
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boilerplate text.
This patch is based on work done by Thomas Gleixner, Kate Stewart, and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it,
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information,
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and references to licenses
had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX License Identifier should be applied
to a file was done in a spreadsheet of side-by-side results from the
output of two independent scanners (ScanCode & Windriver) producing SPDX
tag:value files, created by Philippe Ombredanne. Philippe prepared the
base worksheet, and did an initial spot review of a few thousand files.
The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed. Kate Stewart did a file by file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
should be applied to the file. She confirmed any determination that was not
immediately clear with lawyers working with the Linux Foundation.
Criteria used to select files for SPDX license identifier tagging were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
lines of source.
- The file already had some variant of a license header in it (even if <5
lines).
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when neither scanner could find any license traces, the file was
considered to have no license information in it, and the top-level
COPYING file license applied.
For non */uapi/* files that summary was:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 11139
and resulted in the first patch in this series.
If that file was a */uapi/* one, it was tagged "GPL-2.0 WITH
Linux-syscall-note"; otherwise it was "GPL-2.0". The results were:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 WITH Linux-syscall-note 930
and resulted in the second patch in this series.
- if a file had some form of licensing information in it, and was one
of the */uapi/* ones, it was denoted with the Linux-syscall-note if
any GPL-family license was found in the file, or if it had no licensing
in it (per the prior point). Results summary:
SPDX license identifier # files
---------------------------------------------------|------
GPL-2.0 WITH Linux-syscall-note 270
GPL-2.0+ WITH Linux-syscall-note 169
((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) 21
((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) 17
LGPL-2.1+ WITH Linux-syscall-note 15
GPL-1.0+ WITH Linux-syscall-note 14
((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause) 5
LGPL-2.0+ WITH Linux-syscall-note 4
LGPL-2.1 WITH Linux-syscall-note 3
((GPL-2.0 WITH Linux-syscall-note) OR MIT) 3
((GPL-2.0 WITH Linux-syscall-note) AND MIT) 1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
the file was flagged for further research and to be revisited later
in time.
In total, over 70 hours of logged manual review was done on the
spreadsheet to determine the SPDX license identifiers to apply to the
source files by Kate, Philippe, Thomas and, in some cases, confirmation
by lawyers working with the Linux Foundation.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were any new insights. The
Windriver scanner is based on an older version of FOSSology in part, so
they are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with the SPDX license identifiers in the
files he inspected. For the non-uapi files Thomas did random spot checks
in about 15000 files.
In the initial set of patches against 4.14-rc6, 3 files were found to have
copy/paste license identifier errors, and have been fixed to reflect the
correct identifier.
Additionally, Philippe spent 10 hours this week doing a detailed manual
inspection and review of the 12,461 patched files from the initial patch
version early this week, with:
- a full scancode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where there was a license detected (about 500+
files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the csv files and add the proper SPDX tag to the file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types). Finally, Greg ran the script using the .csv files to
generate the patches.
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-01 15:07:57 +01:00
|
|
|
/* SPDX-License-Identifier: GPL-2.0 */
|
2007-12-17 13:59:56 +08:00
|
|
|
#ifndef __KVM_X86_LAPIC_H
|
|
|
|
#define __KVM_X86_LAPIC_H
|
|
|
|
|
2015-03-26 14:39:29 +00:00
|
|
|
#include <kvm/iodev.h>
|
2007-12-17 13:59:56 +08:00
|
|
|
|
2025-07-09 09:02:18 +05:30
|
|
|
#include <asm/apic.h>
|
|
|
|
|
2007-12-17 13:59:56 +08:00
|
|
|
#include <linux/kvm_host.h>
|
|
|
|
|
2021-01-26 14:48:11 +01:00
|
|
|
#include "hyperv.h"
|
2022-09-29 13:20:09 -04:00
|
|
|
#include "smm.h"
|
2021-01-26 14:48:11 +01:00
|
|
|
|
2013-03-13 12:42:34 +01:00
|
|
|
#define KVM_APIC_INIT 0
|
|
|
|
#define KVM_APIC_SIPI 1
|
|
|
|
|
2019-12-04 20:07:19 +01:00
|
|
|
#define APIC_SHORT_MASK 0xc0000
|
|
|
|
#define APIC_DEST_NOSHORT 0x0
|
|
|
|
#define APIC_DEST_MASK 0x800
|
2016-05-04 14:09:48 -05:00
|
|
|
|
2024-04-25 15:07:00 -07:00
|
|
|
#define APIC_BUS_CYCLE_NS_DEFAULT 1
|
2017-07-26 13:32:59 +02:00
|
|
|
|
2020-04-02 16:20:26 +08:00
|
|
|
#define APIC_BROADCAST 0xFF
|
|
|
|
#define X2APIC_BROADCAST 0xFFFFFFFFul
|
|
|
|
|
2025-06-10 15:57:22 -07:00
|
|
|
#define X2APIC_MSR(r) (APIC_BASE_MSR + ((r) >> 4))
|
|
|
|
|
2018-05-09 16:56:04 -04:00
|
|
|
enum lapic_mode {
|
|
|
|
LAPIC_MODE_DISABLED = 0,
|
|
|
|
LAPIC_MODE_INVALID = X2APIC_ENABLE,
|
|
|
|
LAPIC_MODE_XAPIC = MSR_IA32_APICBASE_ENABLE,
|
|
|
|
LAPIC_MODE_X2APIC = MSR_IA32_APICBASE_ENABLE | X2APIC_ENABLE,
|
|
|
|
};
|
|
|
|
|
2022-06-10 10:11:28 -07:00
|
|
|
enum lapic_lvt_entry {
|
|
|
|
LVT_TIMER,
|
|
|
|
LVT_THERMAL_MONITOR,
|
|
|
|
LVT_PERFORMANCE_COUNTER,
|
|
|
|
LVT_LINT0,
|
|
|
|
LVT_LINT1,
|
|
|
|
LVT_ERROR,
|
2022-06-10 10:11:30 -07:00
|
|
|
LVT_CMCI,
|
2022-06-10 10:11:28 -07:00
|
|
|
|
|
|
|
KVM_APIC_MAX_NR_LVT_ENTRIES,
|
|
|
|
};
|
|
|
|
|
2022-06-10 10:11:30 -07:00
|
|
|
#define APIC_LVTx(x) ((x) == LVT_CMCI ? APIC_LVTCMCI : APIC_LVTT + 0x10 * (x))
|
2022-06-10 10:11:29 -07:00
|
|
|
|
2012-07-26 18:01:50 +03:00
|
|
|
struct kvm_timer {
|
|
|
|
struct hrtimer timer;
|
|
|
|
s64 period; /* unit: ns */
|
2016-10-24 18:23:13 +08:00
|
|
|
ktime_t target_expiration;
|
2014-10-30 15:06:46 +01:00
|
|
|
u32 timer_mode;
|
2012-07-26 18:01:50 +03:00
|
|
|
u32 timer_mode_mask;
|
|
|
|
u64 tscdeadline;
|
2014-12-16 09:08:15 -05:00
|
|
|
u64 expired_tscdeadline;
|
2019-04-17 10:15:32 -07:00
|
|
|
u32 timer_advance_ns;
|
2012-07-26 18:01:50 +03:00
|
|
|
atomic_t pending; /* accumulated triggered timers */
|
2016-06-13 14:20:01 -07:00
|
|
|
bool hv_timer_in_use;
|
2012-07-26 18:01:50 +03:00
|
|
|
};
|
|
|
|
|
2007-12-17 13:59:56 +08:00
|
|
|
struct kvm_lapic {
|
|
|
|
unsigned long base_address;
|
|
|
|
struct kvm_io_device dev;
|
2009-02-23 10:57:41 -03:00
|
|
|
struct kvm_timer lapic_timer;
|
|
|
|
u32 divide_count;
|
2007-12-17 13:59:56 +08:00
|
|
|
struct kvm_vcpu *vcpu;
|
2022-06-14 23:05:47 +00:00
|
|
|
bool apicv_active;
|
2014-10-30 15:06:45 +01:00
|
|
|
bool sw_enabled;
|
2009-06-11 11:06:51 +03:00
|
|
|
bool irr_pending;
|
2015-06-30 22:19:16 +02:00
|
|
|
bool lvt0_in_nmi_mode;
|
KVM: TDX: Add support for find pending IRQ in a protected local APIC
Add flag and hook to KVM's local APIC management to support determining
whether or not a TDX guest has a pending IRQ. For TDX vCPUs, the virtual
APIC page is owned by the TDX module and cannot be accessed by KVM. As a
result, registers that are virtualized by the CPU, e.g. PPR, cannot be
read or written by KVM. To deliver interrupts for TDX guests, KVM must
send an IRQ to the CPU on the posted interrupt notification vector. And
to determine if TDX vCPU has a pending interrupt, KVM must check if there
is an outstanding notification.
Return "no interrupt" in kvm_apic_has_interrupt() if the guest APIC is
protected, to short-circuit the various other flows that try to pull an
IRQ out of the vAPIC; the only valid operation is querying _if_ an IRQ is
pending, as KVM can't do anything based on _which_ IRQ is pending.
Intentionally omit sanity checks from other flows, e.g. PPR update, so as
not to degrade non-TDX guests with unnecessary checks. A well-behaved KVM
and userspace will never reach those flows for TDX guests, but reaching
them is not fatal if something does go awry.
For the TD exits not due to HLT TDCALL, skip checking RVI pending in
tdx_protected_apic_has_interrupt(). Except for the guest being stupid
(e.g., non-HLT TDCALL in an interrupt shadow), it's not even possible to
have an interrupt in RVI that is fully unmasked. There are no CPU flows
that modify RVI in the middle of instruction execution. I.e. if RVI is
non-zero, then either the interrupt has been pending since before the TD
exit, or the instruction that caused the TD exit is in an STI/SS shadow. KVM
doesn't care about STI/SS shadows outside of the HALTED case. And if the
interrupt was pending before TD exit, then it _must_ be blocked, otherwise
the interrupt would have been serviced at the instruction boundary.
For the HLT TDCALL case, it will be handled in a future patch when HLT
TDCALL is supported.
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
Message-ID: <20250222014757.897978-2-binbin.wu@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-02-22 09:47:42 +08:00
|
|
|
/* Select registers in the vAPIC cannot be read/written. */
|
|
|
|
bool guest_apic_protected;
|
2012-06-24 19:24:26 +03:00
|
|
|
/* Number of bits set in ISR. */
|
|
|
|
s16 isr_count;
|
|
|
|
/* The highest vector set in ISR; if -1 - invalid, must scan ISR. */
|
|
|
|
int highest_isr_cache;
|
2012-06-24 19:24:19 +03:00
|
|
|
/**
|
|
|
|
* APIC register page. The layout matches the register layout seen by
|
|
|
|
* the guest 1:1, because it is accessed by the vmx microcode.
|
|
|
|
* Note: Only one register, the TPR, is used by the microcode.
|
|
|
|
*/
|
2007-12-17 13:59:56 +08:00
|
|
|
void *regs;
|
2007-10-25 16:52:32 +02:00
|
|
|
gpa_t vapic_addr;
|
2013-11-20 10:23:22 -08:00
|
|
|
struct gfn_to_hva_cache vapic_cache;
|
2013-03-13 12:42:34 +01:00
|
|
|
unsigned long pending_events;
|
|
|
|
unsigned int sipi_vector;
|
2022-06-10 10:11:30 -07:00
|
|
|
int nr_lvt_entries;
|
2007-12-17 13:59:56 +08:00
|
|
|
};
|
2016-02-29 16:04:43 +01:00
|
|
|
|
|
|
|
struct dest_map;
|
|
|
|
|
KVM: x86: Drop support for hand tuning APIC timer advancement from userspace
Remove support for specifying a static local APIC timer advancement value,
and instead present a read-only boolean parameter to let userspace enable
or disable KVM's dynamic APIC timer advancement. Realistically, it's all
but impossible for userspace to specify an advancement that is more
precise than what KVM's adaptive tuning can provide. E.g. a static value
needs to be tuned for the exact hardware and kernel, and if KVM is using
hrtimers, likely requires additional tuning for the exact configuration of
the entire system.
Dropping support for a userspace provided value also fixes several flaws
in the interface. E.g. KVM interprets a negative value other than -1 as a
large advancement, toggling between a negative and positive value yields
unpredictable behavior as vCPUs will switch from dynamic to static
advancement, changing the advancement in the middle of VM creation can
result in different values for vCPUs within a VM, etc. Those flaws are
mostly fixable, but there's almost no justification for taking on yet more
complexity (it's minimal complexity, but still non-zero).
The only arguments against using KVM's adaptive tuning are if a setup needs
a higher maximum, or if the adjustments are too reactive, but those are
arguments for letting userspace control the absolute max advancement and
the granularity of each adjustment, e.g. similar to how KVM provides knobs
for halt polling.
Link: https://lore.kernel.org/all/20240520115334.852510-1-zhoushuling@huawei.com
Cc: Shuling Zhou <zhoushuling@huawei.com>
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-ID: <20240522010304.1650603-1-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-05-21 18:03:04 -07:00
|
|
|
int kvm_create_lapic(struct kvm_vcpu *vcpu);
|
2007-12-17 13:59:56 +08:00
|
|
|
void kvm_free_lapic(struct kvm_vcpu *vcpu);
|
|
|
|
|
|
|
|
int kvm_apic_has_interrupt(struct kvm_vcpu *vcpu);
|
2024-09-05 21:34:07 -07:00
|
|
|
void kvm_apic_ack_interrupt(struct kvm_vcpu *vcpu, int vector);
|
2007-12-17 13:59:56 +08:00
|
|
|
int kvm_apic_accept_pic_intr(struct kvm_vcpu *vcpu);
|
2021-06-04 10:26:04 -07:00
|
|
|
int kvm_apic_accept_events(struct kvm_vcpu *vcpu);
|
KVM: x86: INIT and reset sequences are different
x86 architecture defines differences between the reset and INIT sequences.
INIT does not initialize the FPU (including MMX, XMM, YMM, etc.), TSC, PMU,
MSRs (in general), MTRRs, machine-check, APIC ID, APIC arbitration ID and BSP.
References (from Intel SDM):
"If the MP protocol has completed and a BSP is chosen, subsequent INITs (either
to a specific processor or system wide) do not cause the MP protocol to be
repeated." [8.4.2: MP Initialization Protocol Requirements and Restrictions]
[Table 9-1. IA-32 Processor States Following Power-up, Reset, or INIT]
"If the processor is reset by asserting the INIT# pin, the x87 FPU state is not
changed." [9.2: X87 FPU INITIALIZATION]
"The state of the local APIC following an INIT reset is the same as it is after
a power-up or hardware reset, except that the APIC ID and arbitration ID
registers are not affected." [10.4.7.3: Local APIC State After an INIT Reset
("Wait-for-SIPI" State)]
Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
Message-Id: <1428924848-28212-1-git-send-email-namit@cs.technion.ac.il>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-04-13 14:34:08 +03:00
|
|
|
void kvm_lapic_reset(struct kvm_vcpu *vcpu, bool init_event);
|
2007-12-17 13:59:56 +08:00
|
|
|
u64 kvm_lapic_get_cr8(struct kvm_vcpu *vcpu);
|
|
|
|
void kvm_lapic_set_tpr(struct kvm_vcpu *vcpu, unsigned long cr8);
|
2011-08-30 13:56:17 +03:00
|
|
|
void kvm_lapic_set_eoi(struct kvm_vcpu *vcpu);
|
2009-07-05 17:39:35 +03:00
|
|
|
void kvm_apic_set_version(struct kvm_vcpu *vcpu);
|
2022-07-08 15:48:10 -07:00
|
|
|
void kvm_apic_after_set_mcg_cap(struct kvm_vcpu *vcpu);
|
2016-05-04 14:09:40 -05:00
|
|
|
bool kvm_apic_match_dest(struct kvm_vcpu *vcpu, struct kvm_lapic *source,
|
2019-12-04 20:07:20 +01:00
|
|
|
int shorthand, unsigned int dest, int dest_mode);
|
2019-12-04 20:07:17 +01:00
|
|
|
int kvm_apic_compare_prio(struct kvm_vcpu *vcpu1, struct kvm_vcpu *vcpu2);
|
KVM: nVMX: Morph notification vector IRQ on nested VM-Enter to pending PI
On successful nested VM-Enter, check for pending interrupts and convert
the highest priority interrupt to a pending posted interrupt if it
matches L2's notification vector. If the vCPU receives a notification
interrupt before nested VM-Enter (assuming L1 disables IRQs before doing
VM-Enter), the pending interrupt (for L1) should be recognized and
processed as a posted interrupt when interrupts become unblocked after
VM-Enter to L2.
This fixes a bug where L1/L2 will get stuck in an infinite loop if L1 is
trying to inject an interrupt into L2 by setting the appropriate bit in
L2's PIR and sending a self-IPI prior to VM-Enter (as opposed to KVM's
method of manually moving the vector from PIR->vIRR/RVI). KVM will
observe the IPI while the vCPU is in L1 context and so won't immediately
morph it to a posted interrupt for L2. The pending interrupt will be
seen by vmx_check_nested_events(), cause KVM to force an immediate exit
after nested VM-Enter, and eventually be reflected to L1 as a VM-Exit.
After handling the VM-Exit, L1 will see that L2 has a pending interrupt
in PIR, send another IPI, and repeat until L2 is killed.
Note, posted interrupts require virtual interrupt delivery, and virtual
interrupt delivery requires exit-on-interrupt, ergo interrupts will be
unconditionally unmasked on VM-Enter if posted interrupts are enabled.
Fixes: 705699a13994 ("KVM: nVMX: Enable nested posted interrupt processing")
Cc: stable@vger.kernel.org
Cc: Liran Alon <liran.alon@oracle.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Message-Id: <20200812175129.12172-1-sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-08-12 10:51:29 -07:00
|
|
|
void kvm_apic_clear_irr(struct kvm_vcpu *vcpu, int vec);
|
2025-04-01 09:34:43 -07:00
|
|
|
bool __kvm_apic_update_irr(unsigned long *pir, void *regs, int *max_irr);
|
|
|
|
bool kvm_apic_update_irr(struct kvm_vcpu *vcpu, unsigned long *pir, int *max_irr);
|
2016-12-18 14:02:21 +01:00
|
|
|
void kvm_apic_update_ppr(struct kvm_vcpu *vcpu);
|
2013-04-11 19:21:37 +08:00
|
|
|
int kvm_apic_set_irq(struct kvm_vcpu *vcpu, struct kvm_lapic_irq *irq,
|
2016-02-29 16:04:43 +01:00
|
|
|
struct dest_map *dest_map);
|
2011-11-10 14:57:21 +02:00
|
|
|
int kvm_apic_local_deliver(struct kvm_lapic *apic, int lvt_type);
|
2019-11-14 14:15:04 -06:00
|
|
|
void kvm_apic_update_apicv(struct kvm_vcpu *vcpu);
|
2023-01-06 01:12:42 +00:00
|
|
|
int kvm_alloc_apic_access_page(struct kvm *kvm);
|
KVM: x86: Inhibit APIC memslot if x2APIC and AVIC are enabled
Free the APIC access page memslot if any vCPU enables x2APIC and SVM's
AVIC is enabled to prevent accesses to the virtual APIC on vCPUs with
x2APIC enabled. On AMD, if its "hybrid" mode is enabled (AVIC is enabled
when x2APIC is enabled even without x2AVIC support), keeping the APIC
access page memslot results in the guest being able to access the virtual
APIC page as x2APIC is fully emulated by KVM. I.e. hardware isn't aware
that the guest is operating in x2APIC mode.
Exempt nested SVM's update of APICv state from the new logic as x2APIC
can't be toggled on VM-Exit. In practice, invoking the x2APIC logic
should be harmless precisely because it should be a glorified nop, but
play it safe to avoid latent bugs, e.g. with dropping the vCPU's SRCU
lock.
Intel doesn't suffer from the same issue as APICv has fully independent
VMCS controls for xAPIC vs. x2APIC virtualization. Technically, KVM
should provide bus error semantics and not memory semantics for the APIC
page when x2APIC is enabled, but KVM already provides memory semantics in
other scenarios, e.g. if APICv/AVIC is enabled and the APIC is hardware
disabled (via APIC_BASE MSR).
Note, checking apic_access_memslot_enabled without taking locks relies
on it being set during vCPU creation (before kvm_vcpu_reset()). vCPUs can
race to set the inhibit and delete the memslot, i.e. can get false
positives, but can't get false negatives as apic_access_memslot_enabled
can't be toggled "on" once any vCPU reaches KVM_RUN.
Opportunistically drop the "can" while updating avic_activate_vmcb()'s
comment, i.e. to state that KVM _does_ support the hybrid mode. Move
the "Note:" down a line to conform to preferred kernel/KVM multi-line
comment style.
Opportunistically update the apicv_update_lock comment, as it isn't
actually used to protect apic_access_memslot_enabled (which is protected
by slots_lock).
Fixes: 0e311d33bfbe ("KVM: SVM: Introduce hybrid-AVIC mode")
Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Message-Id: <20230106011306.85230-11-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2023-01-06 01:12:43 +00:00
|
|
|
void kvm_inhibit_apic_access_page(struct kvm_vcpu *vcpu);
|
2007-12-17 13:59:56 +08:00
|
|
|
|
2012-09-13 17:19:24 +03:00
|
|
|
bool kvm_irq_delivery_to_apic_fast(struct kvm *kvm, struct kvm_lapic *src,
|
2016-02-29 16:04:43 +01:00
|
|
|
struct kvm_lapic_irq *irq, int *r, struct dest_map *dest_map);
|
2020-03-26 10:20:02 +08:00
|
|
|
void kvm_apic_send_ipi(struct kvm_lapic *apic, u32 icr_low, u32 icr_high);
|
2012-09-13 17:19:24 +03:00
|
|
|
|
2024-11-01 11:35:54 -07:00
|
|
|
int kvm_apic_set_base(struct kvm_vcpu *vcpu, u64 value, bool host_initiated);
|
2016-07-12 22:09:22 +02:00
|
|
|
int kvm_apic_get_state(struct kvm_vcpu *vcpu, struct kvm_lapic_state *s);
|
|
|
|
int kvm_apic_set_state(struct kvm_vcpu *vcpu, struct kvm_lapic_state *s);
|
KVM: nVMX: Defer SVI update to vmcs01 on EOI when L2 is active w/o VID
If KVM emulates an EOI for L1's virtual APIC while L2 is active, defer
updating GUEST_INTERRUPT_STATUS.SVI, i.e. the VMCS's cache of the highest
in-service IRQ, until L1 is active, as vmcs01, not vmcs02, needs to track
vISR. The missed SVI update for vmcs01 can result in L1 interrupts being
incorrectly blocked, e.g. if there is a pending interrupt with lower
priority than the interrupt that was EOI'd.
This bug only affects use cases where L1's vAPIC is effectively passed
through to L2, e.g. in a pKVM scenario where L2 is L1's deprivileged host,
as KVM will only emulate an EOI for L1's vAPIC if Virtual Interrupt
Delivery (VID) is disabled in vmcs12, and L1 isn't intercepting L2 accesses
to its (virtual) APIC page (or if x2APIC is enabled, the EOI MSR).
WARN() if KVM updates L1's ISR while L2 is active with VID enabled, as an
EOI from L2 is supposed to affect L2's vAPIC, but still defer the update,
to try to keep L1 alive. Specifically, KVM forwards all APICv-related
VM-Exits to L1 via nested_vmx_l1_wants_exit():
case EXIT_REASON_APIC_ACCESS:
case EXIT_REASON_APIC_WRITE:
case EXIT_REASON_EOI_INDUCED:
/*
* The controls for "virtualize APIC accesses," "APIC-
* register virtualization," and "virtual-interrupt
* delivery" only come from vmcs12.
*/
return true;
Fixes: c7c9c56ca26f ("x86, apicv: add virtual interrupt delivery support")
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/kvm/20230312180048.1778187-1-jason.cj.chen@intel.com
Reported-by: Markku Ahvenjärvi <mankku@gmail.com>
Closes: https://lore.kernel.org/all/20240920080012.74405-1-mankku@gmail.com
Cc: Janne Karhunen <janne.karhunen@gmail.com>
Signed-off-by: Chao Gao <chao.gao@intel.com>
[sean: drop request, handle in VMX, write changelog]
Tested-by: Chao Gao <chao.gao@intel.com>
Link: https://lore.kernel.org/r/20241128000010.4051275-3-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
2024-11-27 16:00:10 -08:00
|
|
|
void kvm_apic_update_hwapic_isr(struct kvm_vcpu *vcpu);
|
2007-12-17 13:59:56 +08:00
|
|
|
int kvm_lapic_find_highest_irr(struct kvm_vcpu *vcpu);
|
|
|
|
|
2011-09-22 16:55:52 +08:00
|
|
|
u64 kvm_get_lapic_tscdeadline_msr(struct kvm_vcpu *vcpu);
|
|
|
|
void kvm_set_lapic_tscdeadline_msr(struct kvm_vcpu *vcpu, u64 data);
|
|
|
|
|
2013-01-25 10:18:49 +08:00
|
|
|
void kvm_apic_write_nodecode(struct kvm_vcpu *vcpu, u32 offset);
|
2013-01-25 10:18:51 +08:00
|
|
|
void kvm_apic_set_eoi_accelerated(struct kvm_vcpu *vcpu, int vector);
|
2013-01-25 10:18:49 +08:00
|
|
|
|
2013-11-20 10:23:22 -08:00
|
|
|
int kvm_lapic_set_vapic_addr(struct kvm_vcpu *vcpu, gpa_t vapic_addr);
|
2007-10-25 16:52:32 +02:00
|
|
|
void kvm_lapic_sync_from_vapic(struct kvm_vcpu *vcpu);
|
|
|
|
void kvm_lapic_sync_to_vapic(struct kvm_vcpu *vcpu);
|
|
|
|
|
2022-02-04 21:42:04 +00:00
|
|
|
int kvm_x2apic_icr_write(struct kvm_lapic *apic, u64 data);
|
2009-07-05 17:39:36 +03:00
|
|
|
int kvm_x2apic_msr_write(struct kvm_vcpu *vcpu, u32 msr, u64 data);
|
|
|
|
int kvm_x2apic_msr_read(struct kvm_vcpu *vcpu, u32 msr, u64 *data);
|
2010-01-17 15:51:23 +02:00
|
|
|
|
|
|
|
int kvm_hv_vapic_msr_write(struct kvm_vcpu *vcpu, u32 msr, u64 data);
|
|
|
|
int kvm_hv_vapic_msr_read(struct kvm_vcpu *vcpu, u32 msr, u64 *data);
|
|
|
|
|
2021-11-08 16:28:18 +01:00
|
|
|
int kvm_lapic_set_pv_eoi(struct kvm_vcpu *vcpu, u64 data, unsigned long len);
|
2016-12-16 14:30:36 -08:00
|
|
|
void kvm_lapic_exit(void);
|
2012-08-05 15:58:33 +03:00
|
|
|
|
2023-01-07 01:10:23 +00:00
|
|
|
u64 kvm_lapic_readable_reg_mask(struct kvm_lapic *apic);
|
|
|
|
|
2016-05-04 14:09:40 -05:00
|
|
|
static inline void kvm_lapic_set_irr(int vec, struct kvm_lapic *apic)
|
|
|
|
{
|
2025-07-09 09:02:16 +05:30
|
|
|
apic_set_vector(vec, apic->regs + APIC_IRR);
|
2016-05-04 14:09:40 -05:00
|
|
|
/*
|
|
|
|
* irr_pending must be true if any interrupt is pending; set it after
|
|
|
|
* APIC_IRR to avoid race with apic_clear_irr
|
|
|
|
*/
|
|
|
|
apic->irr_pending = true;
|
|
|
|
}
|
|
|
|
|
2022-02-04 21:42:04 +00:00
|
|
|
static inline u32 kvm_lapic_get_reg(struct kvm_lapic *apic, int reg_off)
|
2016-05-04 14:09:40 -05:00
|
|
|
{
|
2025-07-09 09:02:14 +05:30
|
|
|
return apic_get_reg(apic->regs, reg_off);
|
2016-05-04 14:09:40 -05:00
|
|
|
}
|
|
|
|
|
2021-01-11 23:24:35 +08:00
|
|
|
DECLARE_STATIC_KEY_FALSE(kvm_has_noapic_vcpu);
|
2012-08-05 15:58:33 +03:00
|
|
|
|
2016-01-08 13:48:51 +01:00
|
|
|
static inline bool lapic_in_kernel(struct kvm_vcpu *vcpu)
|
2012-08-05 15:58:33 +03:00
|
|
|
{
|
2021-01-11 23:24:35 +08:00
|
|
|
if (static_branch_unlikely(&kvm_has_noapic_vcpu))
|
2012-08-05 15:58:33 +03:00
|
|
|
return vcpu->arch.apic;
|
|
|
|
return true;
|
|
|
|
}
|
|
|
|
|
2021-01-11 23:24:35 +08:00
|
|
|
extern struct static_key_false_deferred apic_hw_disabled;
|
2012-08-05 15:58:33 +03:00
|
|
|
|
2022-12-06 17:20:15 +08:00
|
|
|
static inline bool kvm_apic_hw_enabled(struct kvm_lapic *apic)
|
2012-08-05 15:58:33 +03:00
|
|
|
{
|
2021-01-11 23:24:35 +08:00
|
|
|
if (static_branch_unlikely(&apic_hw_disabled.key))
|
2012-08-05 15:58:33 +03:00
|
|
|
return apic->vcpu->arch.apic_base & MSR_IA32_APICBASE_ENABLE;
|
2022-12-06 17:20:15 +08:00
|
|
|
return true;
|
2012-08-05 15:58:33 +03:00
|
|
|
}
|
|
|
|
|
2021-01-11 23:24:35 +08:00
|
|
|
extern struct static_key_false_deferred apic_sw_disabled;
|
2012-08-05 15:58:33 +03:00
|
|
|
|
2014-10-30 15:06:47 +01:00
|
|
|
static inline bool kvm_apic_sw_enabled(struct kvm_lapic *apic)
|
2012-08-05 15:58:33 +03:00
|
|
|
{
|
2021-01-11 23:24:35 +08:00
|
|
|
if (static_branch_unlikely(&apic_sw_disabled.key))
|
2014-10-30 15:06:47 +01:00
|
|
|
return apic->sw_enabled;
|
|
|
|
return true;
|
2012-08-05 15:58:33 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
static inline bool kvm_apic_present(struct kvm_vcpu *vcpu)
|
|
|
|
{
|
2016-01-08 13:48:51 +01:00
|
|
|
return lapic_in_kernel(vcpu) && kvm_apic_hw_enabled(vcpu->arch.apic);
|
2012-08-05 15:58:33 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
static inline int kvm_lapic_enabled(struct kvm_vcpu *vcpu)
|
|
|
|
{
|
|
|
|
return kvm_apic_present(vcpu) && kvm_apic_sw_enabled(vcpu->arch.apic);
|
|
|
|
}
|
|
|
|
|
2013-01-25 10:18:50 +08:00
|
|
|
static inline int apic_x2apic_mode(struct kvm_lapic *apic)
|
|
|
|
{
|
|
|
|
return apic->vcpu->arch.apic_base & X2APIC_ENABLE;
|
|
|
|
}
|
|
|
|
|
2015-11-10 15:36:33 +03:00
|
|
|
static inline bool kvm_vcpu_apicv_active(struct kvm_vcpu *vcpu)
|
2013-01-25 10:18:51 +08:00
|
|
|
{
|
2022-06-14 23:05:48 +00:00
|
|
|
return lapic_in_kernel(vcpu) && vcpu->arch.apic->apicv_active;
|
2013-01-25 10:18:51 +08:00
|
|
|
}
|
|
|
|
|
2022-09-21 00:31:53 +00:00
|
|
|
static inline bool kvm_apic_has_pending_init_or_sipi(struct kvm_vcpu *vcpu)
|
2013-03-13 12:42:34 +01:00
|
|
|
{
|
2016-01-08 13:48:51 +01:00
|
|
|
return lapic_in_kernel(vcpu) && vcpu->arch.apic->pending_events;
|
2013-03-13 12:42:34 +01:00
|
|
|
}
|
|
|
|
|
2022-09-21 00:31:52 +00:00
|
|
|
static inline bool kvm_apic_init_sipi_allowed(struct kvm_vcpu *vcpu)
|
|
|
|
{
|
|
|
|
return !is_smm(vcpu) &&
|
2024-05-07 21:31:02 +08:00
|
|
|
!kvm_x86_call(apic_init_signal_blocked)(vcpu);
|
2022-09-21 00:31:52 +00:00
|
|
|
}
|
|
|
|
|
2015-03-18 19:26:04 -06:00
|
|
|
static inline bool kvm_lowest_prio_delivery(struct kvm_lapic_irq *irq)
|
|
|
|
{
|
|
|
|
return (irq->delivery_mode == APIC_DM_LOWEST ||
|
|
|
|
irq->msi_redir_hint);
|
|
|
|
}
|
|
|
|
|
2015-04-01 15:06:40 +02:00
|
|
|
static inline int kvm_lapic_latched_init(struct kvm_vcpu *vcpu)
|
|
|
|
{
|
2016-01-08 13:48:51 +01:00
|
|
|
return lapic_in_kernel(vcpu) && test_bit(KVM_APIC_INIT, &vcpu->arch.apic->pending_events);
|
2015-04-01 15:06:40 +02:00
|
|
|
}
|
|
|
|
|
2013-04-11 19:21:38 +08:00
|
|
|
bool kvm_apic_pending_eoi(struct kvm_vcpu *vcpu, int vector);
|
|
|
|
|
2019-05-20 16:18:09 +08:00
|
|
|
void kvm_wait_lapic_expire(struct kvm_vcpu *vcpu);
|
2014-12-16 09:08:15 -05:00
|
|
|
|
2019-11-07 07:53:43 -05:00
|
|
|
void kvm_bitmap_or_dest_vcpus(struct kvm *kvm, struct kvm_lapic_irq *irq,
|
|
|
|
unsigned long *vcpu_bitmap);
|
|
|
|
|
2015-09-18 22:29:47 +08:00
|
|
|
bool kvm_intr_is_single_vcpu_fast(struct kvm *kvm, struct kvm_lapic_irq *irq,
|
|
|
|
struct kvm_vcpu **dest_vcpu);
|
2016-01-25 16:53:33 +08:00
|
|
|
int kvm_vector_to_index(u32 vector, u32 dest_vcpus,
|
|
|
|
const unsigned long *bitmap, u32 bitmap_size);
|
2016-06-13 14:20:01 -07:00
|
|
|
void kvm_lapic_switch_to_sw_timer(struct kvm_vcpu *vcpu);
|
|
|
|
void kvm_lapic_switch_to_hv_timer(struct kvm_vcpu *vcpu);
|
|
|
|
void kvm_lapic_expired_hv_timer(struct kvm_vcpu *vcpu);
|
|
|
|
bool kvm_lapic_hv_timer_in_use(struct kvm_vcpu *vcpu);
|
2017-06-29 17:14:50 +02:00
|
|
|
void kvm_lapic_restart_hv_timer(struct kvm_vcpu *vcpu);
|
2020-05-05 06:45:35 -04:00
|
|
|
bool kvm_can_use_hv_timer(struct kvm_vcpu *vcpu);
|
2018-05-09 16:56:04 -04:00
|
|
|
|
|
|
|
static inline enum lapic_mode kvm_apic_mode(u64 apic_base)
|
|
|
|
{
|
|
|
|
return apic_base & (MSR_IA32_APICBASE_ENABLE | X2APIC_ENABLE);
|
|
|
|
}
|
|
|
|
|
2024-11-01 11:35:50 -07:00
|
|
|
static inline enum lapic_mode kvm_get_apic_mode(struct kvm_vcpu *vcpu)
|
|
|
|
{
|
|
|
|
return kvm_apic_mode(vcpu->arch.apic_base);
|
|
|
|
}
|
|
|
|
|
2019-10-18 10:50:31 +08:00
|
|
|
static inline u8 kvm_xapic_id(struct kvm_lapic *apic)
|
|
|
|
{
|
|
|
|
return kvm_lapic_get_reg(apic, APIC_ID) >> 24;
|
|
|
|
}
|
|
|
|
|
2007-12-17 13:59:56 +08:00
|
|
|
#endif
|