linux/drivers/firmware/efi/unaccepted_memory.c

// SPDX-License-Identifier: GPL-2.0-only
#include <linux/efi.h>
#include <linux/memblock.h>
#include <linux/spinlock.h>
#include <linux/crash_dump.h>
#include <linux/nmi.h>
#include <asm/unaccepted_memory.h>

/* Protects unaccepted memory bitmap and accepting_list */
static DEFINE_SPINLOCK(unaccepted_memory_lock);
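
/*
 * Each entry on accepting_list describes a range of bitmap units that some
 * CPU is currently accepting. It exists so that concurrent callers never
 * accept the same unit_size block twice; see the retry loop in
 * accept_memory() below.
 */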
struct accept_range {
	struct list_head list;
	unsigned long start;
	unsigned long end;
};

static LIST_HEAD(accepting_list);

/*
 * accept_memory() -- Consult bitmap and accept the memory if needed.
 *
 * Only memory that is explicitly marked as unaccepted in the bitmap requires
 * an action. All the remaining memory is implicitly accepted and doesn't need
 * acceptance.
 *
 * No need to accept:
 *  - anything if the system has no unaccepted table;
 *  - memory that is below phys_base;
 *  - memory that is above the memory addressable by the bitmap.
 */
void accept_memory(phys_addr_t start, unsigned long size)
{
	struct efi_unaccepted_memory *unaccepted;
	unsigned long range_start, range_end;
	struct accept_range range, *entry;
	phys_addr_t end = start + size;
	unsigned long flags;
	u64 unit_size;

	unaccepted = efi_get_unaccepted_table();
	if (!unaccepted)
		return;

	unit_size = unaccepted->unit_size;

	/*
	 * Only care for the part of the range that is represented
	 * in the bitmap.
	 */
	if (start < unaccepted->phys_base)
		start = unaccepted->phys_base;
	if (end < unaccepted->phys_base)
		return;

	/* Translate to offsets from the beginning of the bitmap */
	start -= unaccepted->phys_base;
	end -= unaccepted->phys_base;

	/*
	 * load_unaligned_zeropad() can lead to unwanted loads across page
	 * boundaries. The unwanted loads are typically harmless. But, they
	 * might be made to totally unrelated or even unmapped memory.
	 * load_unaligned_zeropad() relies on exception fixup (#PF, #GP and now
	 * #VE) to recover from these unwanted loads.
	 *
	 * But, this approach does not work for unaccepted memory. For TDX, a
	 * load from unaccepted memory will not lead to a recoverable exception
	 * within the guest. The guest will exit to the VMM where the only
	 * recourse is to terminate the guest.
	 *
	 * There are two parts to fix this issue and comprehensively avoid
	 * access to unaccepted memory. Together these ensure that an extra
	 * "guard" page is accepted in addition to the memory that needs to be
	 * used:
	 *
	 * 1. Implicitly extend the range_contains_unaccepted_memory(start, size)
	 *    checks up to the next unit_size if 'start+size' is aligned on a
	 *    unit_size boundary.
	 *
	 * 2. Implicitly extend accept_memory(start, size) to the next unit_size
	 *    if 'start+size' is aligned on a unit_size boundary. (immediately
	 *    following this comment)
	 */
	if (!(end % unit_size))
		end += unit_size;

	/* Make sure not to overrun the bitmap */
	if (end > unaccepted->size * unit_size * BITS_PER_BYTE)
		end = unaccepted->size * unit_size * BITS_PER_BYTE;

	range.start = start / unit_size;
	range.end = DIV_ROUND_UP(end, unit_size);
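
	/*
	 * Worked example (illustrative only; assumes unit_size == 2MiB and
	 * phys_base == 0): accept_memory(2MiB, 4MiB) gives end == 6MiB, which
	 * the guard extension above bumps to 8MiB, so range.start == 1 and
	 * range.end == 4, i.e. the two requested units plus one guard unit.
	 */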
retry:
	spin_lock_irqsave(&unaccepted_memory_lock, flags);

	/*
	 * Check if anybody is working on accepting the same range of memory.
	 *
	 * The check is done with unit_size granularity. It is crucial to catch
	 * all accept requests to the same unit_size block, even if they don't
	 * overlap at the physical address level.
	 */
	list_for_each_entry(entry, &accepting_list, list) {
		if (entry->end <= range.start)
			continue;
		if (entry->start >= range.end)
			continue;

		/*
		 * Somebody else is accepting the range, or at least part of it.
		 *
		 * Drop the lock and retry until it is complete.
		 */
		spin_unlock_irqrestore(&unaccepted_memory_lock, flags);
		goto retry;
	}
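
	/*
	 * Collisions here are expected to be rare: as long as the page
	 * allocator accepts memory in chunks no smaller than unit_size, it
	 * never races with itself on the same unit, and the remaining
	 * callers (memblock, deferred page init) only run during boot.
	 */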
	/*
	 * Register that the range is about to be accepted.
	 * Make sure nobody else will accept it.
	 */
	list_add(&range.list, &accepting_list);

	range_start = range.start;
	for_each_set_bitrange_from(range_start, range_end, unaccepted->bitmap,
				   range.end) {
		unsigned long phys_start, phys_end;
		unsigned long len = range_end - range_start;

		phys_start = range_start * unit_size + unaccepted->phys_base;
		phys_end = range_end * unit_size + unaccepted->phys_base;
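
		/*
		 * range_start..range_end is one contiguous run of still-set
		 * (i.e. unaccepted) bits; phys_start/phys_end are the physical
		 * addresses it maps to. The bits are cleared below once the
		 * run has been accepted.
		 */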
		/*
		 * Keep interrupts disabled until the accept operation is
		 * complete in order to prevent deadlocks.
		 *
		 * Enabling interrupts before calling arch_accept_memory()
		 * creates an opportunity for an interrupt handler to request
		 * acceptance of the same memory. The handler will then spin
		 * with interrupts disabled, preventing the other task from
		 * making progress with the acceptance process.
		 */
		spin_unlock(&unaccepted_memory_lock);

		arch_accept_memory(phys_start, phys_end);

		spin_lock(&unaccepted_memory_lock);
		bitmap_clear(unaccepted->bitmap, range_start, len);
	}

	list_del(&range.list);
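
	/*
	 * Accepting memory can keep interrupts disabled on this CPU for a
	 * long time, so tell the soft lockup detector to skip its next
	 * timeout check before interrupts are re-enabled below.
	 */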
	touch_softlockup_watchdog();
	spin_unlock_irqrestore(&unaccepted_memory_lock, flags);
}
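
/*
 * range_contains_unaccepted_memory() -- Check the bitmap for any unaccepted
 * unit in [start, start + size), including the implicit guard unit described
 * in accept_memory() above. Returns true if at least one unit still needs
 * acceptance.
 */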
bool range_contains_unaccepted_memory(phys_addr_t start, unsigned long size)
{
	struct efi_unaccepted_memory *unaccepted;
	phys_addr_t end = start + size;
	unsigned long flags;
	bool ret = false;
	u64 unit_size;

	unaccepted = efi_get_unaccepted_table();
	if (!unaccepted)
		return false;

	unit_size = unaccepted->unit_size;

	/*
	 * Only care for the part of the range that is represented
	 * in the bitmap.
	 */
	if (start < unaccepted->phys_base)
		start = unaccepted->phys_base;
	if (end < unaccepted->phys_base)
		return false;

	/* Translate to offsets from the beginning of the bitmap */
	start -= unaccepted->phys_base;
	end -= unaccepted->phys_base;

	/*
	 * Also consider the unaccepted state of the *next* page. See fix #1 in
	 * the comment on load_unaligned_zeropad() in accept_memory().
	 */
	if (!(end % unit_size))
		end += unit_size;

	/* Make sure not to overrun the bitmap */
	if (end > unaccepted->size * unit_size * BITS_PER_BYTE)
		end = unaccepted->size * unit_size * BITS_PER_BYTE;

	spin_lock_irqsave(&unaccepted_memory_lock, flags);
	while (start < end) {
		if (test_bit(start / unit_size, unaccepted->bitmap)) {
			ret = true;
			break;
		}

		start += unit_size;
	}
	spin_unlock_irqrestore(&unaccepted_memory_lock, flags);

	return ret;
}
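
/*
 * Do not let /proc/vmcore touch unaccepted memory: in a confidential guest,
 * accessing it can cause the guest to fail. Unaccepted pfns are reported as
 * "not RAM", so a read or mmap of them through /proc/vmcore returns zeros
 * instead of triggering an access.
 */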
#ifdef CONFIG_PROC_VMCORE
static bool unaccepted_memory_vmcore_pfn_is_ram(struct vmcore_cb *cb,
						unsigned long pfn)
{
	return !pfn_is_unaccepted_memory(pfn);
}

static struct vmcore_cb vmcore_cb = {
	.pfn_is_ram = unaccepted_memory_vmcore_pfn_is_ram,
};

static int __init unaccepted_memory_init_kdump(void)
{
	register_vmcore_cb(&vmcore_cb);
	return 0;
}
core_initcall(unaccepted_memory_init_kdump);
#endif /* CONFIG_PROC_VMCORE */