/*
 * fixmap.h: compile-time virtual memory allocation
 *
 * This file is subject to the terms and conditions of the GNU General Public
 * License. See the file "COPYING" in the main directory of this archive
 * for more details.
 *
 * Copyright (C) 1998 Ingo Molnar
 * Copyright (C) 2013 Mark Salter <msalter@redhat.com>
 *
 * Adapted from arch/x86 version.
 *
 */

#ifndef _ASM_ARM64_FIXMAP_H
#define _ASM_ARM64_FIXMAP_H

#ifndef __ASSEMBLY__
#include <linux/kernel.h>
#include <linux/math.h>
#include <linux/sizes.h>
#include <asm/boot.h>
#include <asm/page.h>
#include <asm/pgtable-prot.h>

/*
 * Here we define all the compile-time 'special' virtual
 * addresses. The point is to have a constant address at
 * compile time, but to set the physical address only
 * in the boot process.
 *
 * Each enum increment in these 'compile-time allocated'
 * memory buffers is page-sized. Use set_fixmap(idx,phys)
 * to associate physical memory with a fixmap index.
 */
enum fixed_addresses {
	FIX_HOLE,

	/*
	 * Reserve a virtual window for the FDT that is a page bigger than the
	 * maximum supported size. The additional space ensures that any FDT
	 * that does not exceed MAX_FDT_SIZE can be mapped regardless of
	 * whether it crosses any page boundary.
	 */
	FIX_FDT_END,
	FIX_FDT = FIX_FDT_END + DIV_ROUND_UP(MAX_FDT_SIZE, PAGE_SIZE) + 1,
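	/*
	 * For example, with 4K pages and a 2M MAX_FDT_SIZE this reserves
	 * 512 + 1 = 513 page slots: enough for a maximally-sized FDT even
	 * when it starts part-way through a page.
	 */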

	FIX_EARLYCON_MEM_BASE,
	FIX_TEXT_POKE0,

#ifdef CONFIG_KVM
	/* One slot per CPU, mapping the guest's VNCR page at EL2. */
	FIX_VNCR_END,
	FIX_VNCR = FIX_VNCR_END + NR_CPUS,
#endif

#ifdef CONFIG_ACPI_APEI_GHES
	/* Used for GHES mapping from assorted contexts */
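	/*
	 * One slot per context (IRQ, SEA, and SDEI normal/critical), so a
	 * report handled in one context does not clobber the mapping used
	 * by another.
	 */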
	FIX_APEI_GHES_IRQ,
	FIX_APEI_GHES_SEA,
#ifdef CONFIG_ARM_SDE_INTERFACE
	FIX_APEI_GHES_SDEI_NORMAL,
	FIX_APEI_GHES_SDEI_CRITICAL,
#endif
#endif /* CONFIG_ACPI_APEI_GHES */

#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
#ifdef CONFIG_RELOCATABLE
	FIX_ENTRY_TRAMP_TEXT4,	/* one extra slot for the data page */
#endif
	FIX_ENTRY_TRAMP_TEXT3,
	FIX_ENTRY_TRAMP_TEXT2,
	FIX_ENTRY_TRAMP_TEXT1,

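	/*
	 * Fixmap virtual addresses decrease as the index increases (see
	 * __fix_to_virt() in asm-generic/fixmap.h), so the highest-numbered
	 * entry, FIX_ENTRY_TRAMP_TEXT1, has the lowest address and acts as
	 * the base of the trampoline alias.
	 */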
#define TRAMP_VALIAS		(__fix_to_virt(FIX_ENTRY_TRAMP_TEXT1))
#endif /* CONFIG_UNMAP_KERNEL_AT_EL0 */
	__end_of_permanent_fixed_addresses,

	/*
	 * Temporary boot-time mappings, used by early_ioremap(),
	 * before ioremap() is functional.
	 */
#define NR_FIX_BTMAPS		(SZ_256K / PAGE_SIZE)
#define FIX_BTMAPS_SLOTS	7
#define TOTAL_FIX_BTMAPS	(NR_FIX_BTMAPS * FIX_BTMAPS_SLOTS)
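	/*
	 * e.g. with 4K pages each slot is 64 pages, giving seven 256K windows
	 * (448 pages in total) for early_ioremap()/early_memremap().
	 */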

	FIX_BTMAP_END = __end_of_permanent_fixed_addresses,
	FIX_BTMAP_BEGIN = FIX_BTMAP_END + TOTAL_FIX_BTMAPS - 1,

	/*
	 * Used for kernel page table creation, so unmapped memory may be used
	 * for tables.
	 */
	FIX_PTE,
	FIX_PMD,
	FIX_PUD,
	FIX_P4D,
	FIX_PGD,

	__end_of_fixed_addresses
};

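/*
 * FIXADDR_{START,SIZE} cover only the permanent fixmap slots, which is what
 * the generic fix_to_virt()/virt_to_fix() helpers expect; FIXADDR_TOT_{START,
 * SIZE} cover the whole fixmap, including the temporary boot-time slots, and
 * are what the fixmap table creation and ptdump code use.
 */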
#define FIXADDR_SIZE		(__end_of_permanent_fixed_addresses << PAGE_SHIFT)
#define FIXADDR_START		(FIXADDR_TOP - FIXADDR_SIZE)
#define FIXADDR_TOT_SIZE	(__end_of_fixed_addresses << PAGE_SHIFT)
#define FIXADDR_TOT_START	(FIXADDR_TOP - FIXADDR_TOT_SIZE)

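/* nGnRE device attributes, used by set_fixmap_io() (asm-generic/fixmap.h). */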
#define FIXMAP_PAGE_IO     __pgprot(PROT_DEVICE_nGnRE)

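/* Called early in boot to wire up the statically-reserved fixmap tables. */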
void __init early_fixmap_init(void);

#define __early_set_fixmap __set_fixmap

#define __late_set_fixmap __set_fixmap
#define __late_clear_fixmap(idx) __set_fixmap((idx), 0, FIXMAP_PAGE_CLEAR)

extern void __set_fixmap(enum fixed_addresses idx, phys_addr_t phys, pgprot_t prot);

#include <asm-generic/fixmap.h>
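
/*
 * A hypothetical usage sketch of the generic helpers pulled in above:
 *
 *	set_fixmap(FIX_TEXT_POKE0, phys);
 *	ptr = (void *)fix_to_virt(FIX_TEXT_POKE0);
 *	...
 *	clear_fixmap(FIX_TEXT_POKE0);
 */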

#endif /* !__ASSEMBLY__ */
#endif /* _ASM_ARM64_FIXMAP_H */