License cleanup: add SPDX GPL-2.0 license identifier to files with no license
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boilerplate text.
This patch is based on work done by Thomas Gleixner, Kate Stewart, and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it,
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information.
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and the license reference
had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX License Identifier to apply to
a file was done in a spreadsheet of side-by-side results from the
output of two independent scanners (ScanCode & Windriver) producing SPDX
tag:value files, created by Philippe Ombredanne. Philippe prepared the
base worksheet and did an initial spot review of a few thousand files.
The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed. Kate Stewart did a file-by-file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
to apply to the file. She confirmed any determination that was not
immediately clear with lawyers working with the Linux Foundation.
The criteria used to select files for SPDX license identifier tagging were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
lines of source.
- The file already had some variant of a license header in it (even if <5
lines).
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when both scanners couldn't find any license traces, the file was
considered to have no license information in it, and the top level
COPYING file license was applied.
For non */uapi/* files that summary was:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 11139
and resulted in the first patch in this series.
If that file was a */uapi/* path one, it was "GPL-2.0 WITH
Linux-syscall-note"; otherwise it was "GPL-2.0". The results of that were:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 WITH Linux-syscall-note 930
and resulted in the second patch in this series.
- if a file had some form of licensing information in it, and was one
of the */uapi/* ones, it was denoted with the Linux-syscall-note if
any GPL-family license was found in the file, or if it had no licensing
in it (per the prior point). Results summary:
SPDX license identifier # files
---------------------------------------------------|------
GPL-2.0 WITH Linux-syscall-note 270
GPL-2.0+ WITH Linux-syscall-note 169
((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) 21
((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) 17
LGPL-2.1+ WITH Linux-syscall-note 15
GPL-1.0+ WITH Linux-syscall-note 14
((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause) 5
LGPL-2.0+ WITH Linux-syscall-note 4
LGPL-2.1 WITH Linux-syscall-note 3
((GPL-2.0 WITH Linux-syscall-note) OR MIT) 3
((GPL-2.0 WITH Linux-syscall-note) AND MIT) 1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
the file was flagged for further research and to be revisited later
in time.
In total, over 70 hours of logged manual review was done on the
spreadsheet by Kate, Philippe and Thomas to determine the SPDX license
identifiers to apply to the source files, with confirmation in some
cases by lawyers working with the Linux Foundation.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were new insights. The
Windriver scanner is based on an older version of FOSSology in part, so
they are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with the SPDX license identifier in the
files he inspected. For the non-uapi files Thomas did random spot checks
in about 15000 files.
In the initial set of patches against 4.14-rc6, 3 files were found to have
copy/paste license identifier errors, and have been fixed to reflect the
correct identifier.
Additionally, Philippe spent 10 hours this week doing a detailed manual
inspection and review of the 12,461 patched files from the initial patch
version, with:
- a full scancode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where there was a license detected (about 500+
files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the csv files and add the proper SPDX tag to the file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types.) Finally Greg ran the script using the .csv files to
generate the patches.
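As an illustration of the two comment types mentioned above (a sketch of the
resulting tag placement, not of the script itself), a .c source file receives
the identifier as a line comment on its first line:

// SPDX-License-Identifier: GPL-2.0

while a header file receives a block comment, as seen at the top of this file:

/* SPDX-License-Identifier: GPL-2.0 */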
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

/* SPDX-License-Identifier: GPL-2.0 */
/*
 *  S390 version
 *    Copyright IBM Corp. 1999, 2000
 *    Author(s): Hartmut Penner (hp@de.ibm.com)
 *               Ulrich Weigand (weigand@de.ibm.com)
 *               Martin Schwidefsky (schwidefsky@de.ibm.com)
 *
 *  Derived from "include/asm-i386/pgtable.h"
 */

#ifndef _ASM_S390_PGTABLE_H
#define _ASM_S390_PGTABLE_H

#include <linux/sched.h>
#include <linux/mm_types.h>
#include <linux/cpufeature.h>
#include <linux/page-flags.h>
#include <linux/radix-tree.h>
#include <linux/atomic.h>
#include <asm/ctlreg.h>
#include <asm/bug.h>
#include <asm/page.h>
#include <asm/uv.h>

extern pgd_t swapper_pg_dir[];
extern pgd_t invalid_pg_dir[];
extern void paging_init(void);
extern struct ctlreg s390_invalid_asce;

enum {
	PG_DIRECT_MAP_4K = 0,
	PG_DIRECT_MAP_1M,
	PG_DIRECT_MAP_2G,
	PG_DIRECT_MAP_MAX
};

extern atomic_long_t direct_pages_count[PG_DIRECT_MAP_MAX];

static inline void update_page_count(int level, long count)
{
	if (IS_ENABLED(CONFIG_PROC_FS))
		atomic_long_add(count, &direct_pages_count[level]);
}

/*
 * The S390 doesn't have any external MMU info: the kernel page
 * tables contain all the necessary information.
 */

MM: Pass a PTE pointer to update_mmu_cache() rather than the PTE itself
On VIVT ARM, when we have multiple shared mappings of the same file
in the same MM, we need to ensure that we have coherency across all
copies. We do this via make_coherent() by making the pages
uncacheable.
This used to work fine, until we allowed highmem with highpte - we
now have a page table which is mapped as required, and is not available
for modification via update_mmu_cache().
Ralf Baechle suggested getting rid of the PTE value passed to
update_mmu_cache():
On MIPS update_mmu_cache() calls __update_tlb() which walks pagetables
to construct a pointer to the pte again. Passing a pte_t * is much
more elegant. Maybe we might even replace the pte argument with the
pte_t?
Ben Herrenschmidt would also like the pte pointer for PowerPC:
Passing the ptep in there is exactly what I want. I want that
-instead- of the PTE value, because I have issue on some ppc cases,
for I$/D$ coherency, where set_pte_at() may decide to mask out the
_PAGE_EXEC.
So, pass in the mapped page table pointer into update_mmu_cache(), and
remove the PTE value, updating all implementations and call sites to
suit.
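As a sketch of the interface change (argument names are illustrative and
per-architecture prototypes may differ in detail), the hook goes from
receiving the PTE value to receiving a pointer to the mapped page table entry:

/* before */
void update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t pte);
/* after */
void update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t *ptep);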
Includes a fix from Stephen Rothwell:
sparc: fix fallout from update_mmu_cache API change
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

#define update_mmu_cache(vma, address, ptep)		do { } while (0)
#define update_mmu_cache_range(vmf, vma, addr, ptep, nr)	do { } while (0)
#define update_mmu_cache_pmd(vma, address, ptep)	do { } while (0)

/*
 * ZERO_PAGE is a global shared page that is always zero; used
 * for zero-mapped memory areas etc..
 */

extern unsigned long empty_zero_page;
extern unsigned long zero_page_mask;

#define ZERO_PAGE(vaddr) \
	(virt_to_page((void *)(empty_zero_page + \
	 (((unsigned long)(vaddr)) & zero_page_mask))))
#define __HAVE_COLOR_ZERO_PAGE

/* TODO: s390 cannot support io_remap_pfn_range... */

#define pte_ERROR(e) \
	pr_err("%s:%d: bad pte %016lx.\n", __FILE__, __LINE__, pte_val(e))
#define pmd_ERROR(e) \
	pr_err("%s:%d: bad pmd %016lx.\n", __FILE__, __LINE__, pmd_val(e))
#define pud_ERROR(e) \
	pr_err("%s:%d: bad pud %016lx.\n", __FILE__, __LINE__, pud_val(e))
#define p4d_ERROR(e) \
	pr_err("%s:%d: bad p4d %016lx.\n", __FILE__, __LINE__, p4d_val(e))
#define pgd_ERROR(e) \
	pr_err("%s:%d: bad pgd %016lx.\n", __FILE__, __LINE__, pgd_val(e))

/*
 * The vmalloc and module area will always be on the topmost area of the
 * kernel mapping. 512GB are reserved for vmalloc by default.
 * At the top of the vmalloc area a 2GB area is reserved where modules
 * will reside. That makes sure that inter module branches always
 * happen without trampolines and in addition the placement within a
 * 2GB frame is branch prediction unit friendly.
 */
extern unsigned long VMALLOC_START;
extern unsigned long VMALLOC_END;
#define VMALLOC_DEFAULT_SIZE	((512UL << 30) - MODULES_LEN)
extern struct page *vmemmap;
extern unsigned long vmemmap_size;

extern unsigned long MODULES_VADDR;
extern unsigned long MODULES_END;
#define MODULES_VADDR	MODULES_VADDR
#define MODULES_END	MODULES_END
#define MODULES_LEN	(1UL << 31)

static inline int is_module_addr(void *addr)
{
	BUILD_BUG_ON(MODULES_LEN > (1UL << 31));
	if (addr < (void *)MODULES_VADDR)
		return 0;
	if (addr > (void *)MODULES_END)
		return 0;
	return 1;
}

#ifdef CONFIG_KMSAN
#define KMSAN_VMALLOC_SIZE		(VMALLOC_END - VMALLOC_START)
#define KMSAN_VMALLOC_SHADOW_START	VMALLOC_END
#define KMSAN_VMALLOC_SHADOW_END	(KMSAN_VMALLOC_SHADOW_START + KMSAN_VMALLOC_SIZE)
#define KMSAN_VMALLOC_ORIGIN_START	KMSAN_VMALLOC_SHADOW_END
#define KMSAN_VMALLOC_ORIGIN_END	(KMSAN_VMALLOC_ORIGIN_START + KMSAN_VMALLOC_SIZE)
#define KMSAN_MODULES_SHADOW_START	KMSAN_VMALLOC_ORIGIN_END
#define KMSAN_MODULES_SHADOW_END	(KMSAN_MODULES_SHADOW_START + MODULES_LEN)
#define KMSAN_MODULES_ORIGIN_START	KMSAN_MODULES_SHADOW_END
#define KMSAN_MODULES_ORIGIN_END	(KMSAN_MODULES_ORIGIN_START + MODULES_LEN)
#endif

s390/mm: Uncouple physical vs virtual address spaces
The uncoupling physical vs virtual address spaces brings
the following benefits to s390:
- virtual memory layout flexibility;
- closes the address gap between kernel and modules, which
caused s390-only problems in the past (e.g. 'perf' bugs);
- allows getting rid of trampolines used for module calls
into the kernel;
- allows simplifying BPF trampoline;
- minor performance improvement in branch prediction;
- kernel randomization entropy is much bigger, as it is
derived from the amount of available virtual, not physical,
memory.
The whole change could be described in two pictures below:
before and after the change.
Some aspects of the virtual memory layout setup are not
clarified (number of page levels, alignment, DMA memory),
since these are not a part of this change or secondary
with regard to how the uncoupling itself is implemented.
The focus of the pictures is to explain why __va() and __pa()
macros are implemented the way they are.
Memory layout in V==R mode:
| Physical | Virtual |
+- 0 --------------+- 0 --------------+ identity mapping start
| | S390_lowcore | Low-address memory
| +- 8 KB -----------+
| | |
| | identity | phys == virt
| | mapping | virt == phys
| | |
+- AMODE31_START --+- AMODE31_START --+ .amode31 rand. phys/virt start
|.amode31 text/data|.amode31 text/data|
+- AMODE31_END ----+- AMODE31_END ----+ .amode31 rand. phys/virt start
| | |
| | |
+- __kaslr_offset, __kaslr_offset_phys| kernel rand. phys/virt start
| | |
| kernel text/data | kernel text/data | phys == kvirt
| | |
+------------------+------------------+ kernel phys/virt end
| | |
| | |
| | |
| | |
+- ident_map_size -+- ident_map_size -+ identity mapping end
| |
| ... unused gap |
| |
+---- vmemmap -----+ 'struct page' array start
| |
| virtually mapped |
| memory map |
| |
+- __abs_lowcore --+
| |
| Absolute Lowcore |
| |
+- __memcpy_real_area
| |
| Real Memory Copy|
| |
+- VMALLOC_START --+ vmalloc area start
| |
| vmalloc area |
| |
+- MODULES_VADDR --+ modules area start
| |
| modules area |
| |
+------------------+ UltraVisor Secure Storage limit
| |
| ... unused gap |
| |
+KASAN_SHADOW_START+ KASAN shadow memory start
| |
| KASAN shadow |
| |
+------------------+ ASCE limit
Memory layout in V!=R mode:
| Physical | Virtual |
+- 0 --------------+- 0 --------------+
| | S390_lowcore | Low-address memory
| +- 8 KB -----------+
| | |
| | |
| | ... unused gap |
| | |
+- AMODE31_START --+- AMODE31_START --+ .amode31 rand. phys/virt start
|.amode31 text/data|.amode31 text/data|
+- AMODE31_END ----+- AMODE31_END ----+ .amode31 rand. phys/virt end (<2GB)
| | |
| | |
+- __kaslr_offset_phys | kernel rand. phys start
| | |
| kernel text/data | |
| | |
+------------------+ | kernel phys end
| | |
| | |
| | |
| | |
+- ident_map_size -+ |
| |
| ... unused gap |
| |
+- __identity_base + identity mapping start (>= 2GB)
| |
| identity | phys == virt - __identity_base
| mapping | virt == phys + __identity_base
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
+---- vmemmap -----+ 'struct page' array start
| |
| virtually mapped |
| memory map |
| |
+- __abs_lowcore --+
| |
| Absolute Lowcore |
| |
+- __memcpy_real_area
| |
| Real Memory Copy|
| |
+- VMALLOC_START --+ vmalloc area start
| |
| vmalloc area |
| |
+- MODULES_VADDR --+ modules area start
| |
| modules area |
| |
+- __kaslr_offset -+ kernel rand. virt start
| |
| kernel text/data | phys == (kvirt - __kaslr_offset) +
| | __kaslr_offset_phys
+- kernel .bss end + kernel rand. virt end
| |
| ... unused gap |
| |
+------------------+ UltraVisor Secure Storage limit
| |
| ... unused gap |
| |
+KASAN_SHADOW_START+ KASAN shadow memory start
| |
| KASAN shadow |
| |
+------------------+ ASCE limit
Unused gaps in the virtual memory layout could be present
or not, depending on how a particular system is configured.
No page tables are created for the unused gaps.
The relative order of vmalloc, modules and kernel image in
virtual memory is defined by following considerations:
- start of the modules area and end of the kernel should reside
within 4GB to accommodate relative 32-bit jumps. The best way
to achieve that is to place kernel next to modules;
- vmalloc and module areas should be located next to each other
to prevent failures and extra reworks in user level tools
(makedumpfile, crash, etc.) which treat vmalloc and module
addresses similarly;
- kernel needs to be the last area in the virtual memory
layout to easily distinguish between kernel and non-kernel
virtual addresses. That is needed to (again) simplify
handling of addresses in user level tools and make __pa()
macro faster (see below);
Concluding the above, the relative order of the considered
virtual areas in memory is: vmalloc - modules - kernel.
Therefore, the only change to the current memory layout is
moving the kernel to the end of the virtual address space.
With that approach the implementation of __pa() macro is
straightforward - all linear virtual addresses less than
kernel base are considered identity mapping:
phys == virt - __identity_base
All addresses greater than kernel base are kernel ones:
phys == (kvirt - __kaslr_offset) + __kaslr_offset_phys
By contrast, __va() macro deals only with identity mapping
addresses:
virt == phys + __identity_base
The .amode31 section is mapped separately and is not covered by
the __pa() macro. In fact, it could have been handled easily by
checking whether a virtual address is within the section or
not, but there is no need for that. Thus, the __pa() code is
kept to as few machine cycles as possible.
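A minimal sketch of the resulting address translation, using the symbols
introduced above (the in-tree macros may differ in casts, typing and debug
checks):

static inline unsigned long __pa_sketch(unsigned long vaddr)
{
	/* linear (identity-mapped) addresses live below the kernel image */
	if (vaddr < __kaslr_offset)
		return vaddr - __identity_base;
	/* everything at or above the kernel base is a kernel image address */
	return vaddr - __kaslr_offset + __kaslr_offset_phys;
}

static inline void *__va_sketch(unsigned long paddr)
{
	/* __va() only ever deals with identity-mapped addresses */
	return (void *)(paddr + __identity_base);
}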
The KASAN shadow memory is located at the very end of the
virtual memory layout, at addresses higher than the kernel.
However, that is not a linear mapping and no code other than
KASAN instrumentation or API is expected to access it.
When KASLR mode is enabled the kernel base address is randomized
within a memory window that spans the whole unused virtual address
space. The size of that window depends on the amount of
physical memory available to the system, the limit imposed by
the UltraVisor (if present) and the vmalloc area size as provided
by the vmalloc= kernel command line parameter.
In case the virtual memory is exhausted the minimum size of
the randomization window is forcefully set to 2GB, which
amounts to 15 bits of entropy if KASAN is enabled, or 17
bits of entropy in the default configuration.
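For reference: 2GB = 2^31 bytes, so 17 bits of entropy correspond to a
placement granularity of 2^31 / 2^17 = 16KB, and 15 bits to 2^31 / 2^15 =
64KB (the granularities are inferred here from the stated figures, they are
not spelled out above).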
The default kernel offset 0x100000 is used as a magic value
both in the decompressor code and vmlinux linker script, but
it will be removed with a follow-up change.
Acked-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>

#ifdef CONFIG_RANDOMIZE_BASE
#define KASLR_LEN	(1UL << 31)
#else
#define KASLR_LEN	0UL
#endif

void setup_protection_map(void);

/*
 * A 64 bit pagetable entry of S390 has following format:
 * |			 PFRA			      |0IPC|  OS  |
 * 0000000000111111111122222222223333333333444444444455555555556666
 * 0123456789012345678901234567890123456789012345678901234567890123
 *
 * I Page-Invalid Bit:    Page is not available for address-translation
 * P Page-Protection Bit: Store access not possible for page
 * C Change-bit override: HW is not required to set change bit
 *
 * A 64 bit segment table entry of S390 has following format:
 * |        P-table origin                              |      TT
 * 0000000000111111111122222222223333333333444444444455555555556666
 * 0123456789012345678901234567890123456789012345678901234567890123
 *
 * I Segment-Invalid Bit:    Segment is not available for address-translation
 * C Common-Segment Bit:     Segment is not private (PoP 3-30)
 * P Page-Protection Bit:    Store access not possible for page
 * TT Type 00
 *
 * A 64 bit region table entry of S390 has following format:
 * |        S-table origin                             |   TF  TTTL
 * 0000000000111111111122222222223333333333444444444455555555556666
 * 0123456789012345678901234567890123456789012345678901234567890123
 *
 * I Segment-Invalid Bit:    Segment is not available for address-translation
 * TT Type 01
 * TF
 * TL Table length
 *
 * The 64 bit regiontable origin of S390 has following format:
 * |      region table origin                          |       DTTL
 * 0000000000111111111122222222223333333333444444444455555555556666
 * 0123456789012345678901234567890123456789012345678901234567890123
 *
 * X Space-Switch event:
 * G Segment-Invalid Bit:
 * P Private-Space Bit:
 * S Storage-Alteration:
 * R Real space
 * TL Table-Length:
 *
 * A storage key has the following format:
 * | ACC |F|R|C|0|
 *  0   3 4 5 6 7
 * ACC: access key
 * F  : fetch protection bit
 * R  : referenced bit
 * C  : changed bit
 */

/* Hardware bits in the page table entry */
#define _PAGE_NOEXEC	0x100		/* HW no-execute bit */
#define _PAGE_PROTECT	0x200		/* HW read-only bit */
#define _PAGE_INVALID	0x400		/* HW invalid bit */
#define _PAGE_LARGE	0x800		/* Bit to mark a large pte */

/* Software bits in the page table entry */
#define _PAGE_PRESENT	0x001		/* SW pte present bit */
#define _PAGE_YOUNG	0x004		/* SW pte young bit */
#define _PAGE_DIRTY	0x008		/* SW pte dirty bit */
#define _PAGE_READ	0x010		/* SW pte read bit */
#define _PAGE_WRITE	0x020		/* SW pte write bit */
#define _PAGE_SPECIAL	0x040		/* SW associated with special page */
#define _PAGE_UNUSED	0x080		/* SW bit for pgste usage state */

#ifdef CONFIG_MEM_SOFT_DIRTY
#define _PAGE_SOFT_DIRTY 0x002		/* SW pte soft dirty bit */
#else
#define _PAGE_SOFT_DIRTY 0x000
#endif

s390/mm: add support for RDP (Reset DAT-Protection)
The RDP instruction allows resetting the DAT-protection bit in a PTE, with
less CPU synchronization overhead than the IPTE instruction. In particular, IPTE
can cause machine-wide synchronization overhead, and excessive IPTE usage
can negatively impact machine performance.
RDP can be used instead of IPTE, if the new PTE only differs in SW bits
and _PAGE_PROTECT HW bit, for PTE protection changes from RO to RW.
SW PTE bit changes are allowed, e.g. for dirty and young tracking, but none
of the other HW-defined part of the PTE must change. This is because the
architecture forbids such changes to an active and valid PTE, which
is why invalidation with IPTE is always used first, before writing a new
entry.
The RDP optimization helps mainly for fault-driven SW dirty-bit tracking.
Writable PTEs are initially always mapped with HW _PAGE_PROTECT bit set,
to allow SW dirty-bit accounting on first write protection fault, where
the DAT-protection would then be reset. The reset is now done with RDP
instead of IPTE, if RDP instruction is available.
RDP cannot always guarantee that the DAT-protection reset is propagated
to all CPUs immediately. This means that spurious TLB protection faults
on other CPUs can now occur. For this, common code provides a
flush_tlb_fix_spurious_fault() handler, which will now be used to do a
CPU-local TLB flush. However, this will clear the whole TLB of a CPU, and
not just the affected entry. For more fine-grained flushing, by simply
doing a (local) RDP again, flush_tlb_fix_spurious_fault() would need to
also provide the PTE pointer.
Note that spurious TLB protection faults cannot really be distinguished
from racing pagetable updates, where another thread already installed the
correct PTE. In such a case, the local TLB flush would be unnecessary
overhead, but overall reduction of CPU synchronization overhead by not
using IPTE is still expected to be beneficial.
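A minimal sketch of the resulting decision, expressed with the _PAGE_RDP_MASK
definition added further down in this header (the helper name and exact check
here are illustrative, not the in-tree code):

static inline int ptep_rdp_allowed(pte_t old, pte_t new)
{
	/* only SW bits and the _PAGE_PROTECT HW bit may differ */
	if ((pte_val(old) & _PAGE_RDP_MASK) != (pte_val(new) & _PAGE_RDP_MASK))
		return 0;
	/* and the change must remove protection (RO -> RW), not add it */
	return (pte_val(old) & _PAGE_PROTECT) && !(pte_val(new) & _PAGE_PROTECT);
}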
Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com>
Signed-off-by: Gerald Schaefer <gerald.schaefer@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>

#define _PAGE_SW_BITS	0xffUL		/* All SW bits */

#define _PAGE_SWP_EXCLUSIVE _PAGE_LARGE	/* SW pte exclusive swap bit */

/* Set of bits not changed in pte_modify */
#define _PAGE_CHG_MASK		(PAGE_MASK | _PAGE_SPECIAL | _PAGE_DIRTY | \
				 _PAGE_YOUNG | _PAGE_SOFT_DIRTY)

/*
 * Mask of bits that must not be changed with RDP. Allow only _PAGE_PROTECT
 * HW bit and all SW bits.
 */
#define _PAGE_RDP_MASK		~(_PAGE_PROTECT | _PAGE_SW_BITS)

/*
 * handle_pte_fault uses pte_present and pte_none to find out the pte type
 * WITHOUT holding the page table lock. The _PAGE_PRESENT bit is used to
 * distinguish present from not-present ptes. It is changed only with the page
 * table lock held.
 *
 * The following table gives the different possible bit combinations for
 * the pte hardware and software bits in the last 12 bits of a pte
 * (. unassigned bit, x don't care, t swap type):
 *
 *				842100000000
 *				000084210000
 *				000000008421
 *				.IR.uswrdy.p
 * empty			.10.00000000
 * swap				.11..ttttt.0
 * prot-none, clean, old	.11.xx0000.1
 * prot-none, clean, young	.11.xx0001.1
 * prot-none, dirty, old	.11.xx0010.1
 * prot-none, dirty, young	.11.xx0011.1
 * read-only, clean, old	.11.xx0100.1
 * read-only, clean, young	.01.xx0101.1
 * read-only, dirty, old	.11.xx0110.1
 * read-only, dirty, young	.01.xx0111.1
 * read-write, clean, old	.11.xx1100.1
 * read-write, clean, young	.01.xx1101.1
 * read-write, dirty, old	.10.xx1110.1
 * read-write, dirty, young	.00.xx1111.1
 * HW-bits: R read-only, I invalid
 * SW-bits: p present, y young, d dirty, r read, w write, s special,
 *	    u unused, l large
 *
 * pte_none    is true for the bit pattern .10.00000000, pte == 0x400
 * pte_swap    is true for the bit pattern .11..ooooo.0, (pte & 0x201) == 0x200
 * pte_present is true for the bit pattern .xx.xxxxxx.1, (pte & 0x001) == 0x001
 */

/* Bits in the segment/region table address-space-control-element */
#define _ASCE_ORIGIN		~0xfffUL/* region/segment table origin	    */
#define _ASCE_PRIVATE_SPACE	0x100	/* private space control	    */
#define _ASCE_ALT_EVENT		0x80	/* storage alteration event control */
#define _ASCE_SPACE_SWITCH	0x40	/* space switch event		    */
#define _ASCE_REAL_SPACE	0x20	/* real space control		    */
#define _ASCE_TYPE_MASK		0x0c	/* asce table type mask		    */
#define _ASCE_TYPE_REGION1	0x0c	/* region first table type	    */
#define _ASCE_TYPE_REGION2	0x08	/* region second table type	    */
#define _ASCE_TYPE_REGION3	0x04	/* region third table type	    */
#define _ASCE_TYPE_SEGMENT	0x00	/* segment table type		    */
#define _ASCE_TABLE_LENGTH	0x03	/* region table length		    */

/* Bits in the region table entry */
#define _REGION_ENTRY_ORIGIN	~0xfffUL/* region/segment table origin	    */
#define _REGION_ENTRY_PROTECT	0x200	/* region protection bit	    */
#define _REGION_ENTRY_NOEXEC	0x100	/* region no-execute bit	    */
#define _REGION_ENTRY_OFFSET	0xc0	/* region table offset		    */
#define _REGION_ENTRY_INVALID	0x20	/* invalid region table entry	    */
#define _REGION_ENTRY_TYPE_MASK	0x0c	/* region table type mask	    */
#define _REGION_ENTRY_TYPE_R1	0x0c	/* region first table type	    */
#define _REGION_ENTRY_TYPE_R2	0x08	/* region second table type	    */
#define _REGION_ENTRY_TYPE_R3	0x04	/* region third table type	    */
#define _REGION_ENTRY_LENGTH	0x03	/* region third length		    */

#define _REGION1_ENTRY		(_REGION_ENTRY_TYPE_R1 | _REGION_ENTRY_LENGTH)
#define _REGION1_ENTRY_EMPTY	(_REGION_ENTRY_TYPE_R1 | _REGION_ENTRY_INVALID)
#define _REGION2_ENTRY		(_REGION_ENTRY_TYPE_R2 | _REGION_ENTRY_LENGTH)
#define _REGION2_ENTRY_EMPTY	(_REGION_ENTRY_TYPE_R2 | _REGION_ENTRY_INVALID)

s390/mm: Introduce region-third and segment table entry present bits
Introduce region-third and segment table entry present SW bits, and adjust
pmd/pud_present() accordingly.
Also add pmd/pud_present() checks to pmd/pud_leaf(), to return false for
future swap entries. Same logic applies to pmd_trans_huge(), make that
return pmd_leaf() instead of duplicating the same check.
huge_pte_offset() also needs to be adjusted, current code would return
NULL for !pud_present(). Use the same logic as in the generic version,
which allows for !pud_present() swap entries.
Similar to PTE, bit 63 can be used for the new SW present bit in region
and segment table entries. For segment-table entries (PMD) the architecture
says that "Bits 62-63 are available for programming", so they are safe to
use. The same is true for large leaf region-third-table entries (PUD).
However, for non-leaf region-third-table entries, bits 62-63 indicate the
TABLE LENGTH and both must be set to 1. But such entries would always be
considered as present, so it is safe to use bit 63 as PRESENT bit for PUD.
They also should not conflict with bit 62 potentially later used for
preserving SOFT_DIRTY in swap entries, because they are not swap entries.
Valid PMDs / PUDs should always have the present bit set, so add it to
the various pgprot defines, and also _SEGMENT_ENTRY which is OR'ed e.g.
in pmd_populate(). _REGION3_ENTRY wouldn't need any change, as the present
bit is already included in the TABLE LENGTH, but also explicitly add it
there, for completeness, and just in case the bit would ever be changed.
gmap code needs some adjustment, to also OR the _SEGMENT_ENTRY, like it
is already done in gmap_shadow_pgt() when creating new PMDs, but not in
__gmap_link(). Otherwise, the gmap PMDs would not be considered present,
e.g. when using pmd_leaf() checks in gmap code. The various WARN_ON
checks in gmap code also need adjustment, to tolerate the new present
bit.
This is a prerequisite for hugetlbfs PTE_MARKER support on s390, which
is needed to fix a regression introduced with commit 8a13897fb0da
("mm: userfaultfd: support UFFDIO_POISON for hugetlbfs"). That commit
depends on the availability of swap entries for hugetlbfs, which were
not available for s390 so far.
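A minimal sketch of the adjusted helpers under this scheme, consistent with
the description above (the in-tree versions may differ in detail):

static inline int pmd_present(pmd_t pmd)
{
	return (pmd_val(pmd) & _SEGMENT_ENTRY_PRESENT) != 0;
}

static inline int pud_present(pud_t pud)
{
	return (pud_val(pud) & _REGION3_ENTRY_PRESENT) != 0;
}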
Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com>
Signed-off-by: Gerald Schaefer <gerald.schaefer@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>

#define _REGION3_ENTRY		(_REGION_ENTRY_TYPE_R3 | _REGION_ENTRY_LENGTH | \
				 _REGION3_ENTRY_PRESENT)
#define _REGION3_ENTRY_EMPTY	(_REGION_ENTRY_TYPE_R3 | _REGION_ENTRY_INVALID)

#define _REGION3_ENTRY_HARDWARE_BITS		0xfffffffffffff6ffUL
#define _REGION3_ENTRY_HARDWARE_BITS_LARGE	0xffffffff8001073cUL
#define _REGION3_ENTRY_ORIGIN_LARGE ~0x7fffffffUL /* large page address */
#define _REGION3_ENTRY_DIRTY	0x2000	/* SW region dirty bit */
#define _REGION3_ENTRY_YOUNG	0x1000	/* SW region young bit */

s390/mm: Introduce region-third and segment table swap entries
Introduce region-third (PUD) and segment table (PMD) swap entries, and
make hugetlbfs RSTE <-> PTE conversion code aware of them, so that they
can be used for hugetlbfs PTE_MARKER entries. Future work could also
build on this to enable THP_SWAP and THP_MIGRATION for s390.
Similar to PTE swap entries, bits 0-51 can be used to store the swap
offset, but bits 57-61 cannot be used for swap type because that overlaps
with the INVALID and TABLE TYPE bits. PMD/PUD swap entries must be invalid,
and have a correct table type so that pud_folded() check still works.
Bits 53-57 can be used for swap type, but those include the PROTECT bit.
So unlike swap PTEs, the PROTECT bit cannot be used to mark the swap entry.
Use the "Common-Segment/Region" bit 59 instead for that.
Also remove the !MACHINE_HAS_NX check in __set_huge_pte_at(). Otherwise,
that would clear the _SEGMENT_ENTRY_NOEXEC bit also for swap entries, where
it is used for encoding the swap type. The architecture only requires this
bit to be 0 for PTEs, with !MACHINE_HAS_NX, not for segment or region-third
entries. And the check is also redundant, because after __pte_to_rste()
conversion, for non-swap PTEs it would only be set if it was already set in
the PTE, which should never be the case for !MACHINE_HAS_NX.
This is a prerequisite for hugetlbfs PTE_MARKER support on s390, which
is needed to fix a regression introduced with commit 8a13897fb0da
("mm: userfaultfd: support UFFDIO_POISON for hugetlbfs"). That commit
depends on the availability of swap entries for hugetlbfs, which were
not available for s390 so far.
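As a sketch of how such an entry can be told apart from an ordinary invalid
entry, based on the encoding described above (the helper name is illustrative,
not the in-tree code):

static inline int pmd_is_swap_entry(pmd_t pmd)
{
	/* a PMD swap entry is invalid and carries the Common-Segment marker;
	 * an ordinary empty PMD (_SEGMENT_ENTRY_EMPTY) is invalid only */
	return (pmd_val(pmd) & (_SEGMENT_ENTRY_INVALID | _SEGMENT_ENTRY_COMM)) ==
	       (_SEGMENT_ENTRY_INVALID | _SEGMENT_ENTRY_COMM);
}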
Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com>
Signed-off-by: Gerald Schaefer <gerald.schaefer@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>

#define _REGION3_ENTRY_COMM	0x0010	/* Common-Region, marks swap entry */
#define _REGION3_ENTRY_LARGE	0x0400	/* RTTE-format control, large page */
#define _REGION3_ENTRY_WRITE	0x8000	/* SW region write bit */
#define _REGION3_ENTRY_READ	0x4000	/* SW region read bit */

#ifdef CONFIG_MEM_SOFT_DIRTY
#define _REGION3_ENTRY_SOFT_DIRTY 0x0002 /* SW region soft dirty bit */
#else
#define _REGION3_ENTRY_SOFT_DIRTY 0x0000 /* SW region soft dirty bit */
#endif

#define _REGION_ENTRY_BITS	0xfffffffffffff22fUL

/*
 * SW region present bit. For non-leaf region-third-table entries, bits 62-63
 * indicate the TABLE LENGTH and both must be set to 1. But such entries
 * would always be considered as present, so it is safe to use bit 63 as
 * PRESENT bit for PUD.
 */
#define _REGION3_ENTRY_PRESENT	0x0001

/* Bits in the segment table entry */
#define _SEGMENT_ENTRY_BITS			0xfffffffffffffe3fUL
#define _SEGMENT_ENTRY_HARDWARE_BITS		0xfffffffffffffe3cUL
#define _SEGMENT_ENTRY_HARDWARE_BITS_LARGE	0xfffffffffff1073cUL
#define _SEGMENT_ENTRY_ORIGIN_LARGE ~0xfffffUL /* large page address */
#define _SEGMENT_ENTRY_ORIGIN	~0x7ffUL/* page table origin		*/
#define _SEGMENT_ENTRY_PROTECT	0x200	/* segment protection bit	*/
#define _SEGMENT_ENTRY_NOEXEC	0x100	/* segment no-execute bit	*/
#define _SEGMENT_ENTRY_INVALID	0x20	/* invalid segment table entry	*/
#define _SEGMENT_ENTRY_TYPE_MASK 0x0c	/* segment table type mask	*/

#define _SEGMENT_ENTRY		(_SEGMENT_ENTRY_PRESENT)
#define _SEGMENT_ENTRY_EMPTY	(_SEGMENT_ENTRY_INVALID)

#define _SEGMENT_ENTRY_DIRTY	0x2000	/* SW segment dirty bit */
#define _SEGMENT_ENTRY_YOUNG	0x1000	/* SW segment young bit */
#define _SEGMENT_ENTRY_COMM	0x0010	/* Common-Segment, marks swap entry */
#define _SEGMENT_ENTRY_LARGE	0x0400	/* STE-format control, large page */
#define _SEGMENT_ENTRY_WRITE	0x8000	/* SW segment write bit */
#define _SEGMENT_ENTRY_READ	0x4000	/* SW segment read bit */

#ifdef CONFIG_MEM_SOFT_DIRTY
#define _SEGMENT_ENTRY_SOFT_DIRTY 0x0002 /* SW segment soft dirty bit */
#else
#define _SEGMENT_ENTRY_SOFT_DIRTY 0x0000 /* SW segment soft dirty bit */
#endif

#define _SEGMENT_ENTRY_PRESENT	0x0001	/* SW segment present bit */

/* Common bits in region and segment table entries, for swap entries */
#define _RST_ENTRY_COMM		0x0010	/* Common-Region/Segment, marks swap entry */
#define _RST_ENTRY_INVALID	0x0020	/* invalid region/segment table entry */

#define _CRST_ENTRIES	2048	/* number of region/segment table entries */
#define _PAGE_ENTRIES	256	/* number of page table entries	*/

#define _CRST_TABLE_SIZE (_CRST_ENTRIES * 8)
#define _PAGE_TABLE_SIZE (_PAGE_ENTRIES * 8)

#define _REGION1_SHIFT	53
#define _REGION2_SHIFT	42
#define _REGION3_SHIFT	31
#define _SEGMENT_SHIFT	20

#define _REGION1_INDEX	(0x7ffUL << _REGION1_SHIFT)
#define _REGION2_INDEX	(0x7ffUL << _REGION2_SHIFT)
#define _REGION3_INDEX	(0x7ffUL << _REGION3_SHIFT)
#define _SEGMENT_INDEX	(0x7ffUL << _SEGMENT_SHIFT)
#define _PAGE_INDEX	(0xffUL << PAGE_SHIFT)

#define _REGION1_SIZE	(1UL << _REGION1_SHIFT)
#define _REGION2_SIZE	(1UL << _REGION2_SHIFT)
#define _REGION3_SIZE	(1UL << _REGION3_SHIFT)
#define _SEGMENT_SIZE	(1UL << _SEGMENT_SHIFT)

#define _REGION1_MASK	(~(_REGION1_SIZE - 1))
#define _REGION2_MASK	(~(_REGION2_SIZE - 1))
#define _REGION3_MASK	(~(_REGION3_SIZE - 1))
#define _SEGMENT_MASK	(~(_SEGMENT_SIZE - 1))

#define PMD_SHIFT	_SEGMENT_SHIFT
#define PUD_SHIFT	_REGION3_SHIFT
#define P4D_SHIFT	_REGION2_SHIFT
#define PGDIR_SHIFT	_REGION1_SHIFT

#define PMD_SIZE	_SEGMENT_SIZE
#define PUD_SIZE	_REGION3_SIZE
#define P4D_SIZE	_REGION2_SIZE
#define PGDIR_SIZE	_REGION1_SIZE

#define PMD_MASK	_SEGMENT_MASK
#define PUD_MASK	_REGION3_MASK
#define P4D_MASK	_REGION2_MASK
#define PGDIR_MASK	_REGION1_MASK

#define PTRS_PER_PTE	_PAGE_ENTRIES
#define PTRS_PER_PMD	_CRST_ENTRIES
#define PTRS_PER_PUD	_CRST_ENTRIES
#define PTRS_PER_P4D	_CRST_ENTRIES
#define PTRS_PER_PGD	_CRST_ENTRIES
2013-07-23 22:11:42 +02:00
|
|
|
/*
|
2016-05-11 10:52:07 +02:00
|
|
|
* Segment table and region3 table entry encoding
|
|
|
|
* (R = read-only, I = invalid, y = young bit):
|
2016-07-18 14:35:13 +02:00
|
|
|
* dy..R...I...wr
|
2014-07-24 11:03:41 +02:00
|
|
|
* prot-none, clean, old 00..1...1...00
|
|
|
|
* prot-none, clean, young 01..1...1...00
|
|
|
|
* prot-none, dirty, old 10..1...1...00
|
|
|
|
* prot-none, dirty, young 11..1...1...00
|
2016-07-18 14:35:13 +02:00
|
|
|
* read-only, clean, old 00..1...1...01
|
|
|
|
* read-only, clean, young 01..1...0...01
|
|
|
|
* read-only, dirty, old 10..1...1...01
|
|
|
|
* read-only, dirty, young 11..1...0...01
|
2014-07-24 11:03:41 +02:00
|
|
|
* read-write, clean, old 00..1...1...11
|
|
|
|
* read-write, clean, young 01..1...0...11
|
|
|
|
* read-write, dirty, old 10..0...1...11
|
|
|
|
* read-write, dirty, young 11..0...0...11
|
2013-07-23 22:11:42 +02:00
|
|
|
* The segment table origin is used to distinguish empty (origin==0) from
|
|
|
|
* read-write, old segment table entries (origin!=0)
|
2015-04-22 13:55:59 +02:00
|
|
|
* HW-bits: R read-only, I invalid
|
|
|
|
* SW-bits: y young, d dirty, r read, w write
|
2013-07-23 22:11:42 +02:00
|
|
|
*/

/* Page status table bits for virtualization */
#define PGSTE_ACC_BITS 0xf000000000000000UL
#define PGSTE_FP_BIT 0x0800000000000000UL
#define PGSTE_PCL_BIT 0x0080000000000000UL
#define PGSTE_HR_BIT 0x0040000000000000UL
#define PGSTE_HC_BIT 0x0020000000000000UL
#define PGSTE_GR_BIT 0x0004000000000000UL
#define PGSTE_GC_BIT 0x0002000000000000UL
#define PGSTE_ST2_MASK 0x0000ffff00000000UL
#define PGSTE_UC_BIT 0x0000000000008000UL /* user dirty (migration) */
#define PGSTE_IN_BIT 0x0000000000004000UL /* IPTE notify bit */
#define PGSTE_VSIE_BIT 0x0000000000002000UL /* ref'd in a shadow table */

/* Guest Page State used for virtualization */
#define _PGSTE_GPS_ZERO 0x0000000080000000UL
#define _PGSTE_GPS_NODAT 0x0000000040000000UL
#define _PGSTE_GPS_USAGE_MASK 0x0000000003000000UL
#define _PGSTE_GPS_USAGE_STABLE 0x0000000000000000UL
#define _PGSTE_GPS_USAGE_UNUSED 0x0000000001000000UL
#define _PGSTE_GPS_USAGE_POT_VOLATILE 0x0000000002000000UL
#define _PGSTE_GPS_USAGE_VOLATILE _PGSTE_GPS_USAGE_MASK

/*
 * A user page table pointer has the space-switch-event bit, the
 * private-space-control bit and the storage-alteration-event-control
 * bit set. A kernel page table pointer doesn't need them.
 */
#define _ASCE_USER_BITS (_ASCE_SPACE_SWITCH | _ASCE_PRIVATE_SPACE | \
			 _ASCE_ALT_EVENT)

/*
 * Page protection definitions.
 */
#define __PAGE_NONE (_PAGE_PRESENT | _PAGE_INVALID | _PAGE_PROTECT)
#define __PAGE_RO (_PAGE_PRESENT | _PAGE_READ | \
		   _PAGE_NOEXEC | _PAGE_INVALID | _PAGE_PROTECT)
#define __PAGE_RX (_PAGE_PRESENT | _PAGE_READ | \
		   _PAGE_INVALID | _PAGE_PROTECT)
#define __PAGE_RW (_PAGE_PRESENT | _PAGE_READ | _PAGE_WRITE | \
		   _PAGE_NOEXEC | _PAGE_INVALID | _PAGE_PROTECT)
#define __PAGE_RWX (_PAGE_PRESENT | _PAGE_READ | _PAGE_WRITE | \
		    _PAGE_INVALID | _PAGE_PROTECT)
#define __PAGE_SHARED (_PAGE_PRESENT | _PAGE_READ | _PAGE_WRITE | \
		       _PAGE_YOUNG | _PAGE_DIRTY | _PAGE_NOEXEC)
#define __PAGE_KERNEL (_PAGE_PRESENT | _PAGE_READ | _PAGE_WRITE | \
		       _PAGE_YOUNG | _PAGE_DIRTY | _PAGE_NOEXEC)
#define __PAGE_KERNEL_RO (_PAGE_PRESENT | _PAGE_READ | _PAGE_YOUNG | \
			  _PAGE_PROTECT | _PAGE_NOEXEC)

extern unsigned long page_noexec_mask;

#define __pgprot_page_mask(x) __pgprot((x) & page_noexec_mask)

#define PAGE_NONE __pgprot_page_mask(__PAGE_NONE)
#define PAGE_RO __pgprot_page_mask(__PAGE_RO)
#define PAGE_RX __pgprot_page_mask(__PAGE_RX)
#define PAGE_RW __pgprot_page_mask(__PAGE_RW)
#define PAGE_RWX __pgprot_page_mask(__PAGE_RWX)
#define PAGE_SHARED __pgprot_page_mask(__PAGE_SHARED)
#define PAGE_KERNEL __pgprot_page_mask(__PAGE_KERNEL)
#define PAGE_KERNEL_RO __pgprot_page_mask(__PAGE_KERNEL_RO)
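
/*
 * Illustrative sketch, not part of the original header: page_noexec_mask is
 * assumed to be initialized elsewhere so that it either keeps or strips
 * _PAGE_NOEXEC, depending on whether the machine supports instruction
 * execution protection. The PAGE_* values above therefore resolve as in this
 * hypothetical helper, which is identical to PAGE_RW after macro expansion.
 */
static inline pgprot_t example_effective_page_rw(void)
{
	/* __PAGE_RW with _PAGE_NOEXEC possibly masked off */
	return __pgprot(__PAGE_RW & page_noexec_mask);
}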

/*
 * Segment entry (large page) protection definitions.
 */
#define __SEGMENT_NONE (_SEGMENT_ENTRY_PRESENT | \
			_SEGMENT_ENTRY_INVALID | \
			_SEGMENT_ENTRY_PROTECT)
#define __SEGMENT_RO (_SEGMENT_ENTRY_PRESENT | \
		      _SEGMENT_ENTRY_PROTECT | \
		      _SEGMENT_ENTRY_READ | \
		      _SEGMENT_ENTRY_NOEXEC)
#define __SEGMENT_RX (_SEGMENT_ENTRY_PRESENT | \
		      _SEGMENT_ENTRY_PROTECT | \
		      _SEGMENT_ENTRY_READ)
#define __SEGMENT_RW (_SEGMENT_ENTRY_PRESENT | \
		      _SEGMENT_ENTRY_READ | \
		      _SEGMENT_ENTRY_WRITE | \
		      _SEGMENT_ENTRY_NOEXEC)
#define __SEGMENT_RWX (_SEGMENT_ENTRY_PRESENT | \
		       _SEGMENT_ENTRY_READ | \
		       _SEGMENT_ENTRY_WRITE)
#define __SEGMENT_KERNEL (_SEGMENT_ENTRY | \
			  _SEGMENT_ENTRY_LARGE | \
			  _SEGMENT_ENTRY_READ | \
			  _SEGMENT_ENTRY_WRITE | \
			  _SEGMENT_ENTRY_YOUNG | \
			  _SEGMENT_ENTRY_DIRTY | \
			  _SEGMENT_ENTRY_NOEXEC)
#define __SEGMENT_KERNEL_RO (_SEGMENT_ENTRY | \
			     _SEGMENT_ENTRY_LARGE | \
			     _SEGMENT_ENTRY_READ | \
			     _SEGMENT_ENTRY_YOUNG | \
			     _SEGMENT_ENTRY_PROTECT | \
			     _SEGMENT_ENTRY_NOEXEC)

extern unsigned long segment_noexec_mask;

#define __pgprot_segment_mask(x) __pgprot((x) & segment_noexec_mask)

#define SEGMENT_NONE __pgprot_segment_mask(__SEGMENT_NONE)
#define SEGMENT_RO __pgprot_segment_mask(__SEGMENT_RO)
#define SEGMENT_RX __pgprot_segment_mask(__SEGMENT_RX)
#define SEGMENT_RW __pgprot_segment_mask(__SEGMENT_RW)
#define SEGMENT_RWX __pgprot_segment_mask(__SEGMENT_RWX)
#define SEGMENT_KERNEL __pgprot_segment_mask(__SEGMENT_KERNEL)
#define SEGMENT_KERNEL_RO __pgprot_segment_mask(__SEGMENT_KERNEL_RO)

/*
 * Region3 entry (large page) protection definitions.
 */
#define __REGION3_KERNEL (_REGION_ENTRY_TYPE_R3 | \
			  _REGION3_ENTRY_PRESENT | \
			  _REGION3_ENTRY_LARGE | \
			  _REGION3_ENTRY_READ | \
			  _REGION3_ENTRY_WRITE | \
			  _REGION3_ENTRY_YOUNG | \
			  _REGION3_ENTRY_DIRTY | \
			  _REGION_ENTRY_NOEXEC)
#define __REGION3_KERNEL_RO (_REGION_ENTRY_TYPE_R3 | \
			     _REGION3_ENTRY_PRESENT | \
			     _REGION3_ENTRY_LARGE | \
			     _REGION3_ENTRY_READ | \
			     _REGION3_ENTRY_YOUNG | \
			     _REGION_ENTRY_PROTECT | \
			     _REGION_ENTRY_NOEXEC)

extern unsigned long region_noexec_mask;

#define __pgprot_region_mask(x) __pgprot((x) & region_noexec_mask)

#define REGION3_KERNEL __pgprot_region_mask(__REGION3_KERNEL)
#define REGION3_KERNEL_RO __pgprot_region_mask(__REGION3_KERNEL_RO)

static inline bool mm_p4d_folded(struct mm_struct *mm)
{
	return mm->context.asce_limit <= _REGION1_SIZE;
}
#define mm_p4d_folded(mm) mm_p4d_folded(mm)

static inline bool mm_pud_folded(struct mm_struct *mm)
{
	return mm->context.asce_limit <= _REGION2_SIZE;
}
#define mm_pud_folded(mm) mm_pud_folded(mm)

static inline bool mm_pmd_folded(struct mm_struct *mm)
{
	return mm->context.asce_limit <= _REGION3_SIZE;
}
#define mm_pmd_folded(mm) mm_pmd_folded(mm)
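
/*
 * Worked example (illustrative, not part of the original header): a task
 * running with a three-level page table, i.e. mm->context.asce_limit ==
 * _REGION2_SIZE, folds the upper levels but not the segment level:
 *
 *	mm_p4d_folded(mm) -> true
 *	mm_pud_folded(mm) -> true
 *	mm_pmd_folded(mm) -> false
 */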

static inline int mm_has_pgste(struct mm_struct *mm)
{
#ifdef CONFIG_PGSTE
	if (unlikely(mm->context.has_pgste))
		return 1;
#endif
	return 0;
}

static inline int mm_is_protected(struct mm_struct *mm)
{
#ifdef CONFIG_PGSTE
	if (unlikely(atomic_read(&mm->context.protected_count)))
		return 1;
#endif
	return 0;
}

static inline pgste_t clear_pgste_bit(pgste_t pgste, unsigned long mask)
{
	return __pgste(pgste_val(pgste) & ~mask);
}

static inline pgste_t set_pgste_bit(pgste_t pgste, unsigned long mask)
{
	return __pgste(pgste_val(pgste) | mask);
}

static inline pte_t clear_pte_bit(pte_t pte, pgprot_t prot)
{
	return __pte(pte_val(pte) & ~pgprot_val(prot));
}

static inline pte_t set_pte_bit(pte_t pte, pgprot_t prot)
{
	return __pte(pte_val(pte) | pgprot_val(prot));
}

static inline pmd_t clear_pmd_bit(pmd_t pmd, pgprot_t prot)
{
	return __pmd(pmd_val(pmd) & ~pgprot_val(prot));
}

static inline pmd_t set_pmd_bit(pmd_t pmd, pgprot_t prot)
{
	return __pmd(pmd_val(pmd) | pgprot_val(prot));
}

static inline pud_t clear_pud_bit(pud_t pud, pgprot_t prot)
{
	return __pud(pud_val(pud) & ~pgprot_val(prot));
}

static inline pud_t set_pud_bit(pud_t pud, pgprot_t prot)
{
	return __pud(pud_val(pud) | pgprot_val(prot));
}
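
/*
 * Usage sketch (illustrative, not part of the original header): the
 * set/clear helpers above are the building blocks for the pte/pmd/pud
 * manipulation functions further down in this file. A write-protect style
 * operation on a pte, for example, can be expressed as:
 */
static inline pte_t example_pte_wrprotect(pte_t pte)
{
	/* drop the SW write bit, set the HW protect bit */
	pte = clear_pte_bit(pte, __pgprot(_PAGE_WRITE));
	return set_pte_bit(pte, __pgprot(_PAGE_PROTECT));
}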

/*
 * As soon as the guest uses storage keys or enables PV, we deduplicate all
 * mapped shared zeropages and prevent new shared zeropages from getting
 * mapped.
 */
#define mm_forbids_zeropage mm_forbids_zeropage
static inline int mm_forbids_zeropage(struct mm_struct *mm)
{
#ifdef CONFIG_PGSTE
	if (!mm->context.allow_cow_sharing)
		return 1;
#endif
	return 0;
}

static inline int mm_uses_skeys(struct mm_struct *mm)
{
#ifdef CONFIG_PGSTE
	if (mm->context.uses_skeys)
		return 1;
#endif
	return 0;
}

static inline void csp(unsigned int *ptr, unsigned int old, unsigned int new)
{
	union register_pair r1 = { .even = old, .odd = new, };
	unsigned long address = (unsigned long)ptr | 1;

	asm volatile(
		"	csp	%[r1],%[address]"
		: [r1] "+&d" (r1.pair), "+m" (*ptr)
		: [address] "d" (address)
		: "cc");
}

/**
 * cspg() - Compare and Swap and Purge (CSPG)
 * @ptr: Pointer to the value to be exchanged
 * @old: The expected old value
 * @new: The new value
 *
 * Return: True if compare and swap was successful, otherwise false.
 */
static inline bool cspg(unsigned long *ptr, unsigned long old, unsigned long new)
{
	union register_pair r1 = { .even = old, .odd = new, };
	unsigned long address = (unsigned long)ptr | 1;

	asm volatile(
		"	cspg	%[r1],%[address]"
		: [r1] "+&d" (r1.pair), "+m" (*ptr)
		: [address] "d" (address)
		: "cc");
	return old == r1.even;
}
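
/*
 * Usage sketch (illustrative, not part of the original header): update a
 * table entry with compare-and-swap-and-purge semantics, retrying until the
 * exchange succeeds.
 */
static inline void example_cspg_update(unsigned long *entry, unsigned long new)
{
	unsigned long old;

	do {
		old = *entry;
	} while (!cspg(entry, old, new));
}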

#define CRDTE_DTT_PAGE 0x00UL
#define CRDTE_DTT_SEGMENT 0x10UL
#define CRDTE_DTT_REGION3 0x14UL
#define CRDTE_DTT_REGION2 0x18UL
#define CRDTE_DTT_REGION1 0x1cUL

/**
 * crdte() - Compare and Replace DAT Table Entry
 * @old: The expected old value
 * @new: The new value
 * @table: Pointer to the value to be exchanged
 * @dtt: Table type of the table to be exchanged
 * @address: The address mapped by the entry to be replaced
 * @asce: The ASCE of this entry
 *
 * Return: True if compare and replace was successful, otherwise false.
 */
static inline bool crdte(unsigned long old, unsigned long new,
			 unsigned long *table, unsigned long dtt,
			 unsigned long address, unsigned long asce)
{
	union register_pair r1 = { .even = old, .odd = new, };
	union register_pair r2 = { .even = __pa(table) | dtt, .odd = address, };

	asm volatile(".insn rrf,0xb98f0000,%[r1],%[r2],%[asce],0"
		     : [r1] "+&d" (r1.pair)
		     : [r2] "d" (r2.pair), [asce] "a" (asce)
		     : "memory", "cc");
	return old == r1.even;
}
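
/*
 * Usage sketch (illustrative, not part of the original header): atomically
 * replace one segment table entry and flush the corresponding TLB entries.
 * The entry pointer, the virtual address mapped by it and the ASCE of the
 * address space are assumed to be supplied by the caller.
 */
static inline bool example_replace_segment_entry(unsigned long *entry,
						 unsigned long new,
						 unsigned long addr,
						 unsigned long asce)
{
	return crdte(*entry, new, entry, CRDTE_DTT_SEGMENT, addr, asce);
}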

/*
 * pgd/p4d/pud/pmd/pte query functions
 */
static inline int pgd_folded(pgd_t pgd)
{
	return (pgd_val(pgd) & _REGION_ENTRY_TYPE_MASK) < _REGION_ENTRY_TYPE_R1;
}

static inline int pgd_present(pgd_t pgd)
{
	if (pgd_folded(pgd))
		return 1;
	return (pgd_val(pgd) & _REGION_ENTRY_ORIGIN) != 0UL;
}

static inline int pgd_none(pgd_t pgd)
{
	if (pgd_folded(pgd))
		return 0;
	return (pgd_val(pgd) & _REGION_ENTRY_INVALID) != 0UL;
}

static inline int pgd_bad(pgd_t pgd)
{
	if ((pgd_val(pgd) & _REGION_ENTRY_TYPE_MASK) < _REGION_ENTRY_TYPE_R1)
		return 0;
	return (pgd_val(pgd) & ~_REGION_ENTRY_BITS) != 0;
}

static inline unsigned long pgd_pfn(pgd_t pgd)
{
	unsigned long origin_mask;

	origin_mask = _REGION_ENTRY_ORIGIN;
	return (pgd_val(pgd) & origin_mask) >> PAGE_SHIFT;
}

static inline int p4d_folded(p4d_t p4d)
{
	return (p4d_val(p4d) & _REGION_ENTRY_TYPE_MASK) < _REGION_ENTRY_TYPE_R2;
}

static inline int p4d_present(p4d_t p4d)
{
	if (p4d_folded(p4d))
		return 1;
	return (p4d_val(p4d) & _REGION_ENTRY_ORIGIN) != 0UL;
}

static inline int p4d_none(p4d_t p4d)
{
	if (p4d_folded(p4d))
		return 0;
	return p4d_val(p4d) == _REGION2_ENTRY_EMPTY;
}

static inline unsigned long p4d_pfn(p4d_t p4d)
{
	unsigned long origin_mask;

	origin_mask = _REGION_ENTRY_ORIGIN;
	return (p4d_val(p4d) & origin_mask) >> PAGE_SHIFT;
}

static inline int pud_folded(pud_t pud)
{
	return (pud_val(pud) & _REGION_ENTRY_TYPE_MASK) < _REGION_ENTRY_TYPE_R3;
}

static inline int pud_present(pud_t pud)
{
	if (pud_folded(pud))
		return 1;
	return (pud_val(pud) & _REGION3_ENTRY_PRESENT) != 0;
}

static inline int pud_none(pud_t pud)
{
	if (pud_folded(pud))
		return 0;
	return pud_val(pud) == _REGION3_ENTRY_EMPTY;
}

#define pud_leaf pud_leaf
static inline bool pud_leaf(pud_t pud)
{
	if ((pud_val(pud) & _REGION_ENTRY_TYPE_MASK) != _REGION_ENTRY_TYPE_R3)
		return 0;
	return (pud_present(pud) && (pud_val(pud) & _REGION3_ENTRY_LARGE) != 0);
}

static inline int pmd_present(pmd_t pmd)
{
	return (pmd_val(pmd) & _SEGMENT_ENTRY_PRESENT) != 0;
}

#define pmd_leaf pmd_leaf
static inline bool pmd_leaf(pmd_t pmd)
{
	return (pmd_present(pmd) && (pmd_val(pmd) & _SEGMENT_ENTRY_LARGE) != 0);
}

static inline int pmd_bad(pmd_t pmd)
{
	if ((pmd_val(pmd) & _SEGMENT_ENTRY_TYPE_MASK) > 0 || pmd_leaf(pmd))
		return 1;
	return (pmd_val(pmd) & ~_SEGMENT_ENTRY_BITS) != 0;
}

static inline int pud_bad(pud_t pud)
{
	unsigned long type = pud_val(pud) & _REGION_ENTRY_TYPE_MASK;

	if (type > _REGION_ENTRY_TYPE_R3 || pud_leaf(pud))
		return 1;
	if (type < _REGION_ENTRY_TYPE_R3)
		return 0;
	return (pud_val(pud) & ~_REGION_ENTRY_BITS) != 0;
}

static inline int p4d_bad(p4d_t p4d)
{
	unsigned long type = p4d_val(p4d) & _REGION_ENTRY_TYPE_MASK;

	if (type > _REGION_ENTRY_TYPE_R2)
		return 1;
	if (type < _REGION_ENTRY_TYPE_R2)
		return 0;
	return (p4d_val(p4d) & ~_REGION_ENTRY_BITS) != 0;
}

static inline int pmd_none(pmd_t pmd)
{
	return pmd_val(pmd) == _SEGMENT_ENTRY_EMPTY;
}

#define pmd_write pmd_write
static inline int pmd_write(pmd_t pmd)
{
	return (pmd_val(pmd) & _SEGMENT_ENTRY_WRITE) != 0;
}

#define pud_write pud_write
static inline int pud_write(pud_t pud)
{
	return (pud_val(pud) & _REGION3_ENTRY_WRITE) != 0;
}

#define pmd_dirty pmd_dirty
static inline int pmd_dirty(pmd_t pmd)
{
	return (pmd_val(pmd) & _SEGMENT_ENTRY_DIRTY) != 0;
}

#define pmd_young pmd_young
static inline int pmd_young(pmd_t pmd)
{
	return (pmd_val(pmd) & _SEGMENT_ENTRY_YOUNG) != 0;
}

static inline int pte_present(pte_t pte)
{
	/* Bit pattern: (pte & 0x001) == 0x001 */
	return (pte_val(pte) & _PAGE_PRESENT) != 0;
}

static inline int pte_none(pte_t pte)
{
	/* Bit pattern: pte == 0x400 */
	return pte_val(pte) == _PAGE_INVALID;
}

static inline int pte_swap(pte_t pte)
{
	/* Bit pattern: (pte & 0x201) == 0x200 */
	return (pte_val(pte) & (_PAGE_PROTECT | _PAGE_PRESENT))
		== _PAGE_PROTECT;
}

static inline int pte_special(pte_t pte)
{
	return (pte_val(pte) & _PAGE_SPECIAL);
}
|
|
|
|
|
[S390] tlb flush fix.
The current tlb flushing code for page table entries violates the
s390 architecture in a small detail. The relevant section from the
principles of operation (SA22-7832-02 page 3-47):
"A valid table entry must not be changed while it is attached
to any CPU and may be used for translation by that CPU except to
(1) invalidate the entry by using INVALIDATE PAGE TABLE ENTRY or
INVALIDATE DAT TABLE ENTRY, (2) alter bits 56-63 of a page-table
entry, or (3) make a change by means of a COMPARE AND SWAP AND
PURGE instruction that purges the TLB."
That means if one thread of a multithreaded application uses a vma
while another thread does an unmap on it, the page table entries of
that vma need to be removed with IPTE, IDTE or CSP. In some strange
and rare situations a cpu could check-stop (die) because an entry has
been pushed out of the TLB that is still needed to complete a
(milli-coded) instruction. I've never seen it happen with the current
code on any of the supported machines, so right now this is a
theoretical problem. But I want to fix it nevertheless, to avoid
headaches in the future.
To get this implemented correctly without changing common code the
primitives ptep_get_and_clear, ptep_get_and_clear_full and
ptep_set_wrprotect need to use the IPTE instruction to invalidate the
pte before the new pte value gets stored. If IPTE is always used for
the three primitives, three important operations will have a performance
hit: fork, mprotect and exit_mmap. Time for some workarounds:
* 1: ptep_get_and_clear_full is used in unmap_vmas to remove page
table entries in a batched tlb gather operation. If the mmu_gather
context passed to unmap_vmas has been started with full_mm_flush==1
or if only one cpu is online or if the only user of a mm_struct is the
current process then the fullmm indication in the mmu_gather context is
set to one. All TLBs for mm_struct are flushed by the tlb_gather_mmu
call. No new TLBs can be created while the unmap is in progress. In
this case ptep_get_and_clear_full clears the ptes with a simple store.
* 2: ptep_get_and_clear is used in change_protection to clear the
ptes from the page tables before they are reentered with the new
access flags. At the end of the update flush_tlb_range clears the
remaining TLBs. In general the ptep_get_and_clear has to issue IPTE
for each pte and flush_tlb_range is a nop. But if there is only one
user of the mm_struct then ptep_get_and_clear uses simple stores
to do the update and flush_tlb_range will flush the TLBs.
* 3: Similar to 2, ptep_set_wrprotect is used in copy_page_range
for a fork to make all ptes of a cow mapping read-only. At the end
of copy_page_range, dup_mmap will flush the TLBs with a call to
flush_tlb_mm. Check for mm->mm_users and if there is only one user
avoid using IPTE in ptep_set_wrprotect and let flush_tlb_mm clear the
TLBs.
Overall, for single-threaded programs the tlb flush code now performs
better; for multi-threaded programs it is slightly worse. In particular
exit_mmap() now does a single IDTE for the mm and then just frees every
page cache reference and every page table page directly without a delay
over the mmu_gather structure.
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2007-10-22 12:52:44 +02:00
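A hedged sketch of the third workaround described above, roughly in the spirit
of the 2007-era s390 code (it is not the current implementation, which appears
further below and uses ptep_xchg_lazy); __ptep_ipte() and IPTE_GLOBAL are
defined later in this header, the function name itself is illustrative only:
static inline void example_ptep_set_wrprotect(struct mm_struct *mm,
					      unsigned long addr, pte_t *ptep)
{
	pte_t pte = *ptep;

	if (!pte_write(pte))
		return;
	if (atomic_read(&mm->mm_users) > 1)
		/* mm may be attached on other CPUs: invalidate with IPTE first */
		__ptep_ipte(addr, ptep, 0, 0, IPTE_GLOBAL);
	/* otherwise a plain store suffices; dup_mmap's flush_tlb_mm flushes later */
	set_pte(ptep, pte_wrprotect(pte));
}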
|
|
|
#define __HAVE_ARCH_PTE_SAME
|
2011-05-23 10:24:40 +02:00
|
|
|
static inline int pte_same(pte_t a, pte_t b)
|
|
|
|
{
|
|
|
|
return pte_val(a) == pte_val(b);
|
|
|
|
}
|
2005-04-16 15:20:36 -07:00
|
|
|
|
2014-09-23 14:01:34 +02:00
|
|
|
#ifdef CONFIG_NUMA_BALANCING
|
|
|
|
static inline int pte_protnone(pte_t pte)
|
|
|
|
{
|
|
|
|
return pte_present(pte) && !(pte_val(pte) & _PAGE_READ);
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline int pmd_protnone(pmd_t pmd)
|
|
|
|
{
|
2024-03-05 12:37:47 +08:00
|
|
|
/* pmd_leaf(pmd) implies pmd_present(pmd) */
|
|
|
|
return pmd_leaf(pmd) && !(pmd_val(pmd) & _SEGMENT_ENTRY_READ);
|
2014-09-23 14:01:34 +02:00
|
|
|
}
|
|
|
|
#endif
|
|
|
|
|
2025-02-18 18:55:14 +01:00
|
|
|
static inline bool pte_swp_exclusive(pte_t pte)
|
2022-05-09 18:20:46 -07:00
|
|
|
{
|
|
|
|
return pte_val(pte) & _PAGE_SWP_EXCLUSIVE;
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline pte_t pte_swp_mkexclusive(pte_t pte)
|
|
|
|
{
|
|
|
|
return set_pte_bit(pte, __pgprot(_PAGE_SWP_EXCLUSIVE));
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline pte_t pte_swp_clear_exclusive(pte_t pte)
|
|
|
|
{
|
|
|
|
return clear_pte_bit(pte, __pgprot(_PAGE_SWP_EXCLUSIVE));
|
|
|
|
}
|
|
|
|
|
2015-04-22 14:47:42 +02:00
|
|
|
static inline int pte_soft_dirty(pte_t pte)
|
|
|
|
{
|
|
|
|
return pte_val(pte) & _PAGE_SOFT_DIRTY;
|
|
|
|
}
|
|
|
|
#define pte_swp_soft_dirty pte_soft_dirty
|
|
|
|
|
|
|
|
static inline pte_t pte_mksoft_dirty(pte_t pte)
|
|
|
|
{
|
2022-02-21 21:24:01 +01:00
|
|
|
return set_pte_bit(pte, __pgprot(_PAGE_SOFT_DIRTY));
|
2015-04-22 14:47:42 +02:00
|
|
|
}
|
|
|
|
#define pte_swp_mksoft_dirty pte_mksoft_dirty
|
|
|
|
|
|
|
|
static inline pte_t pte_clear_soft_dirty(pte_t pte)
|
|
|
|
{
|
2022-02-21 21:24:01 +01:00
|
|
|
return clear_pte_bit(pte, __pgprot(_PAGE_SOFT_DIRTY));
|
2015-04-22 14:47:42 +02:00
|
|
|
}
|
|
|
|
#define pte_swp_clear_soft_dirty pte_clear_soft_dirty
|
|
|
|
|
|
|
|
static inline int pmd_soft_dirty(pmd_t pmd)
|
|
|
|
{
|
|
|
|
return pmd_val(pmd) & _SEGMENT_ENTRY_SOFT_DIRTY;
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline pmd_t pmd_mksoft_dirty(pmd_t pmd)
|
|
|
|
{
|
2022-02-21 21:24:01 +01:00
|
|
|
return set_pmd_bit(pmd, __pgprot(_SEGMENT_ENTRY_SOFT_DIRTY));
|
2015-04-22 14:47:42 +02:00
|
|
|
}
|
|
|
|
|
|
|
|
static inline pmd_t pmd_clear_soft_dirty(pmd_t pmd)
|
|
|
|
{
|
2022-02-21 21:24:01 +01:00
|
|
|
return clear_pmd_bit(pmd, __pgprot(_SEGMENT_ENTRY_SOFT_DIRTY));
|
2015-04-22 14:47:42 +02:00
|
|
|
}
|
|
|
|
|
2005-04-16 15:20:36 -07:00
|
|
|
/*
|
|
|
|
* query functions pte_write/pte_dirty/pte_young only work if
|
|
|
|
* pte_present() is true. Undefined behaviour if not.
|
|
|
|
*/
|
2005-11-08 21:34:42 -08:00
|
|
|
static inline int pte_write(pte_t pte)
|
2005-04-16 15:20:36 -07:00
|
|
|
{
|
2013-07-23 20:57:57 +02:00
|
|
|
return (pte_val(pte) & _PAGE_WRITE) != 0;
|
2005-04-16 15:20:36 -07:00
|
|
|
}
|
|
|
|
|
2005-11-08 21:34:42 -08:00
|
|
|
static inline int pte_dirty(pte_t pte)
|
2005-04-16 15:20:36 -07:00
|
|
|
{
|
2013-07-23 20:57:57 +02:00
|
|
|
return (pte_val(pte) & _PAGE_DIRTY) != 0;
|
2005-04-16 15:20:36 -07:00
|
|
|
}
|
|
|
|
|
2005-11-08 21:34:42 -08:00
|
|
|
static inline int pte_young(pte_t pte)
|
2005-04-16 15:20:36 -07:00
|
|
|
{
|
2013-07-23 22:11:42 +02:00
|
|
|
return (pte_val(pte) & _PAGE_YOUNG) != 0;
|
2005-04-16 15:20:36 -07:00
|
|
|
}
|
|
|
|
|
2013-04-17 17:36:29 +02:00
|
|
|
#define __HAVE_ARCH_PTE_UNUSED
|
|
|
|
static inline int pte_unused(pte_t pte)
|
|
|
|
{
|
|
|
|
return pte_val(pte) & _PAGE_UNUSED;
|
|
|
|
}
|
|
|
|
|
2021-02-19 12:00:52 +01:00
|
|
|
/*
|
|
|
|
* Extract the pgprot value from the given pte while at the same time making it
|
|
|
|
* usable for kernel address space mappings where fault driven dirty and
|
|
|
|
* young/old accounting is not supported, i.e. _PAGE_PROTECT and _PAGE_INVALID
|
|
|
|
* must not be set.
|
|
|
|
*/
|
2024-08-26 16:43:42 -04:00
|
|
|
#define pte_pgprot pte_pgprot
|
2021-02-19 12:00:52 +01:00
|
|
|
static inline pgprot_t pte_pgprot(pte_t pte)
|
|
|
|
{
|
|
|
|
unsigned long pte_flags = pte_val(pte) & _PAGE_CHG_MASK;
|
|
|
|
|
|
|
|
if (pte_write(pte))
|
|
|
|
pte_flags |= pgprot_val(PAGE_KERNEL);
|
|
|
|
else
|
|
|
|
pte_flags |= pgprot_val(PAGE_KERNEL_RO);
|
|
|
|
pte_flags |= pte_val(pte) & mio_wb_bit_mask;
|
|
|
|
|
|
|
|
return __pgprot(pte_flags);
|
|
|
|
}
|
|
|
|
|
2005-04-16 15:20:36 -07:00
|
|
|
/*
|
|
|
|
* pgd/pmd/pte modification functions
|
|
|
|
*/
|
|
|
|
|
2022-02-10 16:08:29 +01:00
|
|
|
static inline void set_pgd(pgd_t *pgdp, pgd_t pgd)
|
|
|
|
{
|
|
|
|
WRITE_ONCE(*pgdp, pgd);
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline void set_p4d(p4d_t *p4dp, p4d_t p4d)
|
|
|
|
{
|
|
|
|
WRITE_ONCE(*p4dp, p4d);
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline void set_pud(pud_t *pudp, pud_t pud)
|
|
|
|
{
|
|
|
|
WRITE_ONCE(*pudp, pud);
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline void set_pmd(pmd_t *pmdp, pmd_t pmd)
|
|
|
|
{
|
|
|
|
WRITE_ONCE(*pmdp, pmd);
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline void set_pte(pte_t *ptep, pte_t pte)
|
|
|
|
{
|
|
|
|
WRITE_ONCE(*ptep, pte);
|
|
|
|
}
|
|
|
|
|
2011-05-23 10:24:40 +02:00
|
|
|
static inline void pgd_clear(pgd_t *pgd)
|
2008-02-09 18:24:36 +01:00
|
|
|
{
|
2017-04-24 18:19:10 +02:00
|
|
|
if ((pgd_val(*pgd) & _REGION_ENTRY_TYPE_MASK) == _REGION_ENTRY_TYPE_R1)
|
2022-02-21 20:50:07 +01:00
|
|
|
set_pgd(pgd, __pgd(_REGION1_ENTRY_EMPTY));
|
2017-04-24 18:19:10 +02:00
|
|
|
}
|
|
|
|
|
|
|
|
static inline void p4d_clear(p4d_t *p4d)
|
|
|
|
{
|
|
|
|
if ((p4d_val(*p4d) & _REGION_ENTRY_TYPE_MASK) == _REGION_ENTRY_TYPE_R2)
|
2022-02-21 20:50:07 +01:00
|
|
|
set_p4d(p4d, __p4d(_REGION2_ENTRY_EMPTY));
|
2008-02-09 18:24:36 +01:00
|
|
|
}
|
|
|
|
|
2011-05-23 10:24:40 +02:00
|
|
|
static inline void pud_clear(pud_t *pud)
|
2005-04-16 15:20:36 -07:00
|
|
|
{
|
2008-02-09 18:24:37 +01:00
|
|
|
if ((pud_val(*pud) & _REGION_ENTRY_TYPE_MASK) == _REGION_ENTRY_TYPE_R3)
|
2022-02-21 20:50:07 +01:00
|
|
|
set_pud(pud, __pud(_REGION3_ENTRY_EMPTY));
|
2005-04-16 15:20:36 -07:00
|
|
|
}
|
|
|
|
|
2011-05-23 10:24:40 +02:00
|
|
|
static inline void pmd_clear(pmd_t *pmdp)
|
2005-04-16 15:20:36 -07:00
|
|
|
{
|
2022-02-21 20:50:07 +01:00
|
|
|
set_pmd(pmdp, __pmd(_SEGMENT_ENTRY_EMPTY));
|
2005-04-16 15:20:36 -07:00
|
|
|
}
|
|
|
|
|
2005-11-08 21:34:42 -08:00
|
|
|
static inline void pte_clear(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
|
2005-04-16 15:20:36 -07:00
|
|
|
{
|
2022-02-21 20:50:07 +01:00
|
|
|
set_pte(ptep, __pte(_PAGE_INVALID));
|
2005-04-16 15:20:36 -07:00
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* The following pte modification functions only work if
|
|
|
|
* pte_present() is true. Undefined behaviour if not.
|
|
|
|
*/
|
2005-11-08 21:34:42 -08:00
|
|
|
static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
|
2005-04-16 15:20:36 -07:00
|
|
|
{
|
2022-02-21 21:24:01 +01:00
|
|
|
pte = clear_pte_bit(pte, __pgprot(~_PAGE_CHG_MASK));
|
|
|
|
pte = set_pte_bit(pte, newprot);
|
2013-07-23 22:11:42 +02:00
|
|
|
/*
|
2016-03-22 10:54:24 +01:00
|
|
|
* newprot for PAGE_NONE, PAGE_RO, PAGE_RX, PAGE_RW and PAGE_RWX
|
|
|
|
* has the invalid bit set, clear it again for readable, young pages
|
2013-07-23 22:11:42 +02:00
|
|
|
*/
|
|
|
|
if ((pte_val(pte) & _PAGE_YOUNG) && (pte_val(pte) & _PAGE_READ))
|
2022-02-21 21:24:01 +01:00
|
|
|
pte = clear_pte_bit(pte, __pgprot(_PAGE_INVALID));
|
2013-07-23 22:11:42 +02:00
|
|
|
/*
|
2016-03-22 10:54:24 +01:00
|
|
|
* newprot for PAGE_RO, PAGE_RX, PAGE_RW and PAGE_RWX has the page
|
|
|
|
* protection bit set, clear it again for writable, dirty pages
|
2013-07-23 22:11:42 +02:00
|
|
|
*/
|
2013-07-23 20:57:57 +02:00
|
|
|
if ((pte_val(pte) & _PAGE_DIRTY) && (pte_val(pte) & _PAGE_WRITE))
|
2022-02-21 21:24:01 +01:00
|
|
|
pte = clear_pte_bit(pte, __pgprot(_PAGE_PROTECT));
|
2005-04-16 15:20:36 -07:00
|
|
|
return pte;
|
|
|
|
}
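A hedged usage sketch (not code from this header): common code such as
mprotect's change_pte_range() applies a new protection roughly as follows,
with the young/dirty fixups in pte_modify() above deciding whether the result
stays valid and writable; ptep_modify_prot_start()/commit() are declared
further below:
	pte_t old = ptep_modify_prot_start(vma, addr, ptep);
	pte_t new = pte_modify(old, newprot);
	ptep_modify_prot_commit(vma, addr, ptep, old, new);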
|
|
|
|
|
2005-11-08 21:34:42 -08:00
|
|
|
static inline pte_t pte_wrprotect(pte_t pte)
|
2005-04-16 15:20:36 -07:00
|
|
|
{
|
2022-02-21 21:24:01 +01:00
|
|
|
pte = clear_pte_bit(pte, __pgprot(_PAGE_WRITE));
|
|
|
|
return set_pte_bit(pte, __pgprot(_PAGE_PROTECT));
|
2005-04-16 15:20:36 -07:00
|
|
|
}
|
|
|
|
|
2023-06-12 17:10:27 -07:00
|
|
|
static inline pte_t pte_mkwrite_novma(pte_t pte)
|
2005-04-16 15:20:36 -07:00
|
|
|
{
|
2022-02-21 21:24:01 +01:00
|
|
|
pte = set_pte_bit(pte, __pgprot(_PAGE_WRITE));
|
2013-07-23 20:57:57 +02:00
|
|
|
if (pte_val(pte) & _PAGE_DIRTY)
|
2022-02-21 21:24:01 +01:00
|
|
|
pte = clear_pte_bit(pte, __pgprot(_PAGE_PROTECT));
|
2005-04-16 15:20:36 -07:00
|
|
|
return pte;
|
|
|
|
}
|
|
|
|
|
2005-11-08 21:34:42 -08:00
|
|
|
static inline pte_t pte_mkclean(pte_t pte)
|
2005-04-16 15:20:36 -07:00
|
|
|
{
|
2022-02-21 21:24:01 +01:00
|
|
|
pte = clear_pte_bit(pte, __pgprot(_PAGE_DIRTY));
|
|
|
|
return set_pte_bit(pte, __pgprot(_PAGE_PROTECT));
|
2005-04-16 15:20:36 -07:00
|
|
|
}
|
|
|
|
|
2005-11-08 21:34:42 -08:00
|
|
|
static inline pte_t pte_mkdirty(pte_t pte)
|
2005-04-16 15:20:36 -07:00
|
|
|
{
|
2022-02-21 21:24:01 +01:00
|
|
|
pte = set_pte_bit(pte, __pgprot(_PAGE_DIRTY | _PAGE_SOFT_DIRTY));
|
2013-07-23 20:57:57 +02:00
|
|
|
if (pte_val(pte) & _PAGE_WRITE)
|
2022-02-21 21:24:01 +01:00
|
|
|
pte = clear_pte_bit(pte, __pgprot(_PAGE_PROTECT));
|
2005-04-16 15:20:36 -07:00
|
|
|
return pte;
|
|
|
|
}
|
|
|
|
|
2005-11-08 21:34:42 -08:00
|
|
|
static inline pte_t pte_mkold(pte_t pte)
|
2005-04-16 15:20:36 -07:00
|
|
|
{
|
2022-02-21 21:24:01 +01:00
|
|
|
pte = clear_pte_bit(pte, __pgprot(_PAGE_YOUNG));
|
|
|
|
return set_pte_bit(pte, __pgprot(_PAGE_INVALID));
|
2005-04-16 15:20:36 -07:00
|
|
|
}
|
|
|
|
|
2005-11-08 21:34:42 -08:00
|
|
|
static inline pte_t pte_mkyoung(pte_t pte)
|
2005-04-16 15:20:36 -07:00
|
|
|
{
|
2022-02-21 21:24:01 +01:00
|
|
|
pte = set_pte_bit(pte, __pgprot(_PAGE_YOUNG));
|
2013-07-23 22:11:42 +02:00
|
|
|
if (pte_val(pte) & _PAGE_READ)
|
2022-02-21 21:24:01 +01:00
|
|
|
pte = clear_pte_bit(pte, __pgprot(_PAGE_INVALID));
|
2005-04-16 15:20:36 -07:00
|
|
|
return pte;
|
|
|
|
}
|
|
|
|
|
2008-04-28 02:13:00 -07:00
|
|
|
static inline pte_t pte_mkspecial(pte_t pte)
|
|
|
|
{
|
2022-02-21 21:24:01 +01:00
|
|
|
return set_pte_bit(pte, __pgprot(_PAGE_SPECIAL));
|
2008-04-28 02:13:00 -07:00
|
|
|
}
|
|
|
|
|
2010-10-25 16:10:36 +02:00
|
|
|
#ifdef CONFIG_HUGETLB_PAGE
|
|
|
|
static inline pte_t pte_mkhuge(pte_t pte)
|
|
|
|
{
|
2022-02-21 21:24:01 +01:00
|
|
|
return set_pte_bit(pte, __pgprot(_PAGE_LARGE));
|
2010-10-25 16:10:36 +02:00
|
|
|
}
|
|
|
|
#endif
|
|
|
|
|
2016-06-14 12:38:40 +02:00
|
|
|
#define IPTE_GLOBAL 0
|
|
|
|
#define IPTE_LOCAL 1
|
2012-09-10 13:00:09 +02:00
|
|
|
|
2016-07-26 16:53:09 +02:00
|
|
|
#define IPTE_NODAT 0x400
|
2016-07-26 16:00:22 +02:00
|
|
|
#define IPTE_GUEST_ASCE 0x800
|
2016-07-26 16:53:09 +02:00
|
|
|
|
s390/mm: add support for RDP (Reset DAT-Protection)
The RDP instruction allows resetting the DAT-protection bit in a PTE, with less
CPU synchronization overhead than the IPTE instruction. In particular, IPTE
can cause machine-wide synchronization overhead, and excessive IPTE usage
can negatively impact machine performance.
RDP can be used instead of IPTE, if the new PTE only differs in SW bits
and _PAGE_PROTECT HW bit, for PTE protection changes from RO to RW.
SW PTE bit changes are allowed, e.g. for dirty and young tracking, but no
other HW-defined part of the PTE may change. This is because the
architecture forbids such changes to an active and valid PTE, which
is why invalidation with IPTE is always used first, before writing a new
entry.
The RDP optimization helps mainly for fault-driven SW dirty-bit tracking.
Writable PTEs are initially always mapped with HW _PAGE_PROTECT bit set,
to allow SW dirty-bit accounting on first write protection fault, where
the DAT-protection would then be reset. The reset is now done with RDP
instead of IPTE, if RDP instruction is available.
RDP cannot always guarantee that the DAT-protection reset is propagated
to all CPUs immediately. This means that spurious TLB protection faults
on other CPUs can now occur. For this, common code provides a
flush_tlb_fix_spurious_fault() handler, which will now be used to do a
CPU-local TLB flush. However, this will clear the whole TLB of a CPU, and
not just the affected entry. For more fine-grained flushing, i.e. simply
doing a (local) RDP again, flush_tlb_fix_spurious_fault() would also need
to be given the PTE pointer.
Note that spurious TLB protection faults cannot really be distinguished
from racing pagetable updates, where another thread already installed the
correct PTE. In such a case, the local TLB flush would be unnecessary
overhead, but overall reduction of CPU synchronization overhead by not
using IPTE is still expected to be beneficial.
Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com>
Signed-off-by: Gerald Schaefer <gerald.schaefer@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
2023-02-06 17:48:21 +01:00
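A hedged sketch of the resulting decision (the real logic lives in s390 mm
code, not in this header): when a pte change only drops DAT-protection, RDP can
replace the IPTE round trip. pte_allow_rdp(), cpu_has_rdp(), __ptep_rdp() and
ptep_xchg_direct() all appear further below in this header; the wrapper name
is illustrative only:
static inline void example_update_pte(struct mm_struct *mm, unsigned long addr,
				      pte_t *ptep, pte_t new)
{
	pte_t old = *ptep;

	if (cpu_has_rdp() && pte_allow_rdp(old, new)) {
		/* RO -> RW with otherwise identical HW bits: cheap RDP is enough */
		__ptep_rdp(addr, ptep, 0, 0, 0);
		/* pte stays valid, only SW bits and _PAGE_PROTECT may differ */
		set_pte(ptep, new);
	} else {
		/* anything else must be invalidated with IPTE before the store */
		ptep_xchg_direct(mm, addr, ptep, new);
	}
}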
|
|
|
static __always_inline void __ptep_rdp(unsigned long addr, pte_t *ptep,
|
|
|
|
unsigned long opt, unsigned long asce,
|
|
|
|
int local)
|
|
|
|
{
|
|
|
|
unsigned long pto;
|
|
|
|
|
|
|
|
pto = __pa(ptep) & ~(PTRS_PER_PTE * sizeof(pte_t) - 1);
|
|
|
|
asm volatile(".insn rrf,0xb98b0000,%[r1],%[r2],%[asce],%[m4]"
|
|
|
|
: "+m" (*ptep)
|
|
|
|
: [r1] "a" (pto), [r2] "a" ((addr & PAGE_MASK) | opt),
|
|
|
|
[asce] "a" (asce), [m4] "i" (local));
|
|
|
|
}
|
|
|
|
|
2019-10-04 12:29:37 +02:00
|
|
|
static __always_inline void __ptep_ipte(unsigned long address, pte_t *ptep,
|
|
|
|
unsigned long opt, unsigned long asce,
|
|
|
|
int local)
|
2014-04-03 13:55:01 +02:00
|
|
|
{
|
2021-01-11 11:01:55 +01:00
|
|
|
unsigned long pto = __pa(ptep);
|
2014-04-03 13:55:01 +02:00
|
|
|
|
2016-07-26 16:53:09 +02:00
|
|
|
if (__builtin_constant_p(opt) && opt == 0) {
|
|
|
|
/* Invalidation + TLB flush for the pte */
|
|
|
|
asm volatile(
|
2022-02-25 10:39:02 +01:00
|
|
|
" ipte %[r1],%[r2],0,%[m4]"
|
2016-07-26 16:53:09 +02:00
|
|
|
: "+m" (*ptep) : [r1] "a" (pto), [r2] "a" (address),
|
|
|
|
[m4] "i" (local));
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Invalidate ptes with options + TLB flush of the ptes */
|
2016-07-26 16:00:22 +02:00
|
|
|
opt = opt | (asce & _ASCE_ORIGIN);
|
2014-04-03 13:55:01 +02:00
|
|
|
asm volatile(
|
2022-02-25 10:39:02 +01:00
|
|
|
" ipte %[r1],%[r2],%[r3],%[m4]"
|
2016-07-26 16:53:09 +02:00
|
|
|
: [r2] "+a" (address), [r3] "+a" (opt)
|
|
|
|
: [r1] "a" (pto), [m4] "i" (local) : "memory");
|
2014-04-03 13:55:01 +02:00
|
|
|
}
|
|
|
|
|
2019-10-04 12:29:37 +02:00
|
|
|
static __always_inline void __ptep_ipte_range(unsigned long address, int nr,
|
|
|
|
pte_t *ptep, int local)
|
2014-09-23 21:29:20 +02:00
|
|
|
{
|
2021-01-11 11:01:55 +01:00
|
|
|
unsigned long pto = __pa(ptep);
|
2014-09-23 21:29:20 +02:00
|
|
|
|
2016-06-14 12:38:40 +02:00
|
|
|
/* Invalidate a range of ptes + TLB flush of the ptes */
|
2014-09-23 21:29:20 +02:00
|
|
|
do {
|
|
|
|
asm volatile(
|
2022-02-25 10:39:02 +01:00
|
|
|
" ipte %[r1],%[r2],%[r3],%[m4]"
|
2016-06-14 12:38:40 +02:00
|
|
|
: [r2] "+a" (address), [r3] "+a" (nr)
|
|
|
|
: [r1] "a" (pto), [m4] "i" (local) : "memory");
|
2014-09-23 21:29:20 +02:00
|
|
|
} while (nr != 255);
|
|
|
|
}
|
|
|
|
|
2013-10-18 12:03:41 +02:00
|
|
|
/*
|
2016-03-08 11:08:09 +01:00
|
|
|
* This is hard to understand. ptep_get_and_clear and ptep_clear_flush
|
|
|
|
* both clear the TLB for the unmapped pte. The reason is that
|
|
|
|
* ptep_get_and_clear is used in common code (e.g. change_pte_range)
|
|
|
|
* to modify an active pte. The sequence is
|
|
|
|
* 1) ptep_get_and_clear
|
|
|
|
* 2) set_pte_at
|
|
|
|
* 3) flush_tlb_range
|
|
|
|
* On s390 the tlb needs to get flushed with the modification of the pte
|
|
|
|
* if the pte is active. The only way this can be implemented is to
|
|
|
|
* have ptep_get_and_clear do the tlb flush. In exchange flush_tlb_range
|
|
|
|
* is a nop.
|
2013-10-18 12:03:41 +02:00
|
|
|
*/
|
2016-03-08 11:08:09 +01:00
|
|
|
pte_t ptep_xchg_direct(struct mm_struct *, unsigned long, pte_t *, pte_t);
|
|
|
|
pte_t ptep_xchg_lazy(struct mm_struct *, unsigned long, pte_t *, pte_t);
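A hedged illustration of the common-code sequence the comment above refers to
(not code from this header); because step 1 already flushed the TLB for the
pte on s390, step 3 can be a nop:
	pte_t pte = ptep_get_and_clear(mm, addr, ptep);		/* 1: clear pte + flush TLB */
	set_pte_at(mm, addr, ptep, pte_modify(pte, newprot));	/* 2: install the new pte */
	...
	flush_tlb_range(vma, start, end);			/* 3: effectively a nop on s390 */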
|
2013-10-18 12:03:41 +02:00
|
|
|
|
2013-07-23 22:11:42 +02:00
|
|
|
#define __HAVE_ARCH_PTEP_TEST_AND_CLEAR_YOUNG
|
|
|
|
static inline int ptep_test_and_clear_young(struct vm_area_struct *vma,
|
|
|
|
unsigned long addr, pte_t *ptep)
|
|
|
|
{
|
2016-03-08 11:08:09 +01:00
|
|
|
pte_t pte = *ptep;
|
2013-07-23 22:11:42 +02:00
|
|
|
|
2016-03-08 11:08:09 +01:00
|
|
|
pte = ptep_xchg_direct(vma->vm_mm, addr, ptep, pte_mkold(pte));
|
|
|
|
return pte_young(pte);
|
2013-07-23 22:11:42 +02:00
|
|
|
}
|
|
|
|
|
|
|
|
#define __HAVE_ARCH_PTEP_CLEAR_YOUNG_FLUSH
|
|
|
|
static inline int ptep_clear_flush_young(struct vm_area_struct *vma,
|
|
|
|
unsigned long address, pte_t *ptep)
|
|
|
|
{
|
|
|
|
return ptep_test_and_clear_young(vma, address, ptep);
|
|
|
|
}
|
|
|
|
|
2007-10-22 12:52:44 +02:00
|
|
|
#define __HAVE_ARCH_PTEP_GET_AND_CLEAR
|
2011-05-23 10:24:40 +02:00
|
|
|
static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
|
2016-03-08 11:08:09 +01:00
|
|
|
unsigned long addr, pte_t *ptep)
|
2011-05-23 10:24:40 +02:00
|
|
|
{
|
2020-01-21 09:48:44 +01:00
|
|
|
pte_t res;
|
|
|
|
|
|
|
|
res = ptep_xchg_lazy(mm, addr, ptep, __pte(_PAGE_INVALID));
|
2021-09-20 15:24:54 +02:00
|
|
|
/* At this point the reference through the mapping is still present */
|
2020-01-21 09:48:44 +01:00
|
|
|
if (mm_is_protected(mm) && pte_present(res))
|
2024-05-08 20:29:53 +02:00
|
|
|
uv_convert_from_secure_pte(res);
|
2020-01-21 09:48:44 +01:00
|
|
|
return res;
|
2011-05-23 10:24:40 +02:00
|
|
|
}
|
|
|
|
|
|
|
|
#define __HAVE_ARCH_PTEP_MODIFY_PROT_TRANSACTION
|
2019-03-05 15:46:26 -08:00
|
|
|
pte_t ptep_modify_prot_start(struct vm_area_struct *, unsigned long, pte_t *);
|
2019-03-05 15:46:29 -08:00
|
|
|
void ptep_modify_prot_commit(struct vm_area_struct *, unsigned long,
|
|
|
|
pte_t *, pte_t, pte_t);
|
2007-10-22 12:52:44 +02:00
|
|
|
|
|
|
|
#define __HAVE_ARCH_PTEP_CLEAR_FLUSH
|
2007-07-17 04:03:03 -07:00
|
|
|
static inline pte_t ptep_clear_flush(struct vm_area_struct *vma,
|
2016-03-08 11:08:09 +01:00
|
|
|
unsigned long addr, pte_t *ptep)
|
2007-07-17 04:03:03 -07:00
|
|
|
{
|
2020-01-21 09:48:44 +01:00
|
|
|
pte_t res;
|
|
|
|
|
|
|
|
res = ptep_xchg_direct(vma->vm_mm, addr, ptep, __pte(_PAGE_INVALID));
|
2021-09-20 15:24:54 +02:00
|
|
|
/* At this point the reference through the mapping is still present */
|
2020-01-21 09:48:44 +01:00
|
|
|
if (mm_is_protected(vma->vm_mm) && pte_present(res))
|
2024-05-08 20:29:53 +02:00
|
|
|
uv_convert_from_secure_pte(res);
|
2020-01-21 09:48:44 +01:00
|
|
|
return res;
|
2005-04-16 15:20:36 -07:00
|
|
|
}
|
|
|
|
|
2007-10-22 12:52:44 +02:00
|
|
|
/*
|
|
|
|
* The batched pte unmap code uses ptep_get_and_clear_full to clear the
|
|
|
|
* ptes. Here an optimization is possible. tlb_gather_mmu flushes all
|
|
|
|
* tlbs of an mm if it can guarantee that the ptes of the mm_struct
|
|
|
|
* cannot be accessed while the batched unmap is running. In this case
|
|
|
|
* full==1 and a simple pte_clear is enough. See tlb.h.
|
|
|
|
*/
|
|
|
|
#define __HAVE_ARCH_PTEP_GET_AND_CLEAR_FULL
|
|
|
|
static inline pte_t ptep_get_and_clear_full(struct mm_struct *mm,
|
2016-03-08 11:08:09 +01:00
|
|
|
unsigned long addr,
|
2007-10-22 12:52:44 +02:00
|
|
|
pte_t *ptep, int full)
|
2005-04-16 15:20:36 -07:00
|
|
|
{
|
2020-01-21 09:48:44 +01:00
|
|
|
pte_t res;
|
|
|
|
|
2016-03-08 11:08:09 +01:00
|
|
|
if (full) {
|
2020-01-21 09:48:44 +01:00
|
|
|
res = *ptep;
|
2022-02-21 20:50:07 +01:00
|
|
|
set_pte(ptep, __pte(_PAGE_INVALID));
|
2020-01-21 09:48:44 +01:00
|
|
|
} else {
|
|
|
|
res = ptep_xchg_lazy(mm, addr, ptep, __pte(_PAGE_INVALID));
|
2011-05-23 10:24:40 +02:00
|
|
|
}
|
2022-06-28 15:56:11 +02:00
|
|
|
/* Nothing to do */
|
|
|
|
if (!mm_is_protected(mm) || !pte_present(res))
|
|
|
|
return res;
|
|
|
|
/*
|
|
|
|
* At this point the reference through the mapping is still present.
|
|
|
|
* The notifier should have destroyed all protected vCPUs at this
|
|
|
|
* point, so the destroy should be successful.
|
|
|
|
*/
|
2024-05-08 20:29:52 +02:00
|
|
|
if (full && !uv_destroy_pte(res))
|
2022-06-28 15:56:11 +02:00
|
|
|
return res;
|
|
|
|
/*
|
|
|
|
* If something went wrong and the page could not be destroyed, or
|
|
|
|
* if this is not a mm teardown, the slower export is used as
|
|
|
|
* fallback instead.
|
|
|
|
*/
|
2024-05-08 20:29:53 +02:00
|
|
|
uv_convert_from_secure_pte(res);
|
2020-01-21 09:48:44 +01:00
|
|
|
return res;
|
2005-04-16 15:20:36 -07:00
|
|
|
}
|
|
|
|
|
2007-10-22 12:52:44 +02:00
|
|
|
#define __HAVE_ARCH_PTEP_SET_WRPROTECT
|
2016-03-08 11:08:09 +01:00
|
|
|
static inline void ptep_set_wrprotect(struct mm_struct *mm,
|
|
|
|
unsigned long addr, pte_t *ptep)
|
2011-05-23 10:24:40 +02:00
|
|
|
{
|
|
|
|
pte_t pte = *ptep;
|
|
|
|
|
2016-03-08 11:08:09 +01:00
|
|
|
if (pte_write(pte))
|
|
|
|
ptep_xchg_lazy(mm, addr, ptep, pte_wrprotect(pte));
|
2011-05-23 10:24:40 +02:00
|
|
|
}
|
2007-10-22 12:52:44 +02:00
|
|
|
|
2023-02-06 17:48:21 +01:00
|
|
|
/*
|
|
|
|
* Check if PTEs only differ in _PAGE_PROTECT HW bit, but also allow SW PTE
|
|
|
|
* bits in the comparison. Those might change e.g. because of dirty and young
|
|
|
|
* tracking.
|
|
|
|
*/
|
|
|
|
static inline int pte_allow_rdp(pte_t old, pte_t new)
|
|
|
|
{
|
|
|
|
/*
|
|
|
|
* Only allow changes from RO to RW
|
|
|
|
*/
|
|
|
|
if (!(pte_val(old) & _PAGE_PROTECT) || pte_val(new) & _PAGE_PROTECT)
|
|
|
|
return 0;
|
|
|
|
|
|
|
|
return (pte_val(old) & _PAGE_RDP_MASK) == (pte_val(new) & _PAGE_RDP_MASK);
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline void flush_tlb_fix_spurious_fault(struct vm_area_struct *vma,
|
2023-03-06 17:15:48 +01:00
|
|
|
unsigned long address,
|
|
|
|
pte_t *ptep)
|
2023-02-06 17:48:21 +01:00
{
	/*
	 * RDP might not have propagated the PTE protection reset to all CPUs,
	 * so there could be spurious TLB protection faults.
	 * NOTE: This will also be called when a racing pagetable update on
	 * another thread already installed the correct PTE. Both cases cannot
	 * really be distinguished.
	 * Therefore, only do the local TLB flush when RDP can be used, and the
	 * PTE does not have _PAGE_PROTECT set, to avoid unnecessary overhead.
	 * A local RDP can be used to do the flush.
	 */
	if (cpu_has_rdp() && !(pte_val(*ptep) & _PAGE_PROTECT))
		__ptep_rdp(address, ptep, 0, 0, 1);
}
#define flush_tlb_fix_spurious_fault flush_tlb_fix_spurious_fault

void ptep_reset_dat_prot(struct mm_struct *mm, unsigned long addr, pte_t *ptep,
			 pte_t new);

[S390] tlb flush fix.
The current tlb flushing code for page table entries violates the
s390 architecture in a small detail. The relevant section from the
principles of operation (SA22-7832-02 page 3-47):
"A valid table entry must not be changed while it is attached
to any CPU and may be used for translation by that CPU except to
(1) invalidate the entry by using INVALIDATE PAGE TABLE ENTRY or
INVALIDATE DAT TABLE ENTRY, (2) alter bits 56-63 of a page-table
entry, or (3) make a change by means of a COMPARE AND SWAP AND
PURGE instruction that purges the TLB."
That means if one thread of a multithreaded application uses a vma
while another thread does an unmap on it, the page table entries of
that vma need to be removed with IPTE, IDTE or CSP. In some strange
and rare situations a CPU could check-stop (die) because an entry has
been pushed out of the TLB that is still needed to complete a
(milli-coded) instruction. I've never seen it happen with the current
code on any of the supported machines, so right now this is a
theoretical problem. But I want to fix it nevertheless, to avoid
headaches in the future.
To get this implemented correctly without changing common code the
primitives ptep_get_and_clear, ptep_get_and_clear_full and
ptep_set_wrprotect need to use the IPTE instruction to invalidate the
pte before the new pte value gets stored. If IPTE is always used for
the three primitives, three important operations will have a performance
hit: fork, mprotect and exit_mmap. Time for some workarounds:
* 1: ptep_get_and_clear_full is used in unmap_vmas to remove page
table entries in a batched tlb gather operation. If the mmu_gather
context passed to unmap_vmas has been started with full_mm_flush==1
or if only one cpu is online or if the only user of a mm_struct is the
current process then the fullmm indication in the mmu_gather context is
set to one. All TLBs for mm_struct are flushed by the tlb_gather_mmu
call. No new TLBs can be created while the unmap is in progress. In
this case ptep_get_and_clear_full clears the ptes with a simple store.
* 2: ptep_get_and_clear is used in change_protection to clear the
ptes from the page tables before they are reentered with the new
access flags. At the end of the update flush_tlb_range clears the
remaining TLBs. In general the ptep_get_and_clear has to issue IPTE
for each pte and flush_tlb_range is a nop. But if there is only one
user of the mm_struct then ptep_get_and_clear uses simple stores
to do the update and flush_tlb_range will flush the TLBs.
* 3: Similar to 2, ptep_set_wrprotect is used in copy_page_range
for a fork to make all ptes of a cow mapping read-only. At the end of
copy_page_range, dup_mmap will flush the TLBs with a call to
flush_tlb_mm. Check for mm->mm_users and if there is only one user
avoid using IPTE in ptep_set_wrprotect and let flush_tlb_mm clear the
TLBs.
Overall, for single-threaded programs the tlb flush code now performs
better; for multi-threaded programs it is slightly worse. In particular,
exit_mmap() now does a single IDTE for the mm and then just frees every
page cache reference and every page table page directly without a delay
over the mmu_gather structure.
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2007-10-22 12:52:44 +02:00
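The three workarounds above all reduce to the same test; a minimal sketch follows (the helper name is hypothetical and not an in-tree primitive, it only illustrates the decision described in the commit message):

/*
 * Hypothetical helper, for illustration only: the per-pte IPTE can be
 * skipped when no other CPU can still hold the pte in its TLB, i.e. when
 * the mmu_gather was started with a full-mm flush or when the current task
 * is the only user of the mm. The caller then relies on the batched flush
 * (mmu_gather teardown, flush_tlb_range() or flush_tlb_mm()) instead.
 */
static inline int example_skip_ipte(struct mm_struct *mm, int fullmm)
{
	return fullmm || atomic_read(&mm->mm_users) <= 1;
}
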
#define __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS
static inline int ptep_set_access_flags(struct vm_area_struct *vma,
					unsigned long addr, pte_t *ptep,
					pte_t entry, int dirty)
{
	if (pte_same(*ptep, entry))
		return 0;
	if (cpu_has_rdp() && !mm_has_pgste(vma->vm_mm) && pte_allow_rdp(*ptep, entry))
		ptep_reset_dat_prot(vma->vm_mm, addr, ptep, entry);
	else
		ptep_xchg_direct(vma->vm_mm, addr, ptep, entry);
	return 1;
}

/*
 * Additional functions to handle KVM guest page tables
 */
void ptep_set_pte_at(struct mm_struct *mm, unsigned long addr,
		     pte_t *ptep, pte_t entry);
void ptep_set_notify(struct mm_struct *mm, unsigned long addr, pte_t *ptep);
void ptep_notify(struct mm_struct *mm, unsigned long addr,
		 pte_t *ptep, unsigned long bits);
int ptep_force_prot(struct mm_struct *mm, unsigned long gaddr,
		    pte_t *ptep, int prot, unsigned long bit);
void ptep_zap_unused(struct mm_struct *mm, unsigned long addr,
		     pte_t *ptep, int reset);
void ptep_zap_key(struct mm_struct *mm, unsigned long addr, pte_t *ptep);
int ptep_shadow_pte(struct mm_struct *mm, unsigned long saddr,
		    pte_t *sptep, pte_t *tptep, pte_t pte);
void ptep_unshadow_pte(struct mm_struct *mm, unsigned long saddr, pte_t *ptep);

bool ptep_test_and_clear_uc(struct mm_struct *mm, unsigned long address,
			    pte_t *ptep);
int set_guest_storage_key(struct mm_struct *mm, unsigned long addr,
			  unsigned char key, bool nq);
int cond_set_guest_storage_key(struct mm_struct *mm, unsigned long addr,
			       unsigned char key, unsigned char *oldkey,
			       bool nq, bool mr, bool mc);
int reset_guest_reference_bit(struct mm_struct *mm, unsigned long addr);
int get_guest_storage_key(struct mm_struct *mm, unsigned long addr,
			  unsigned char *key);

int set_pgste_bits(struct mm_struct *mm, unsigned long addr,
		   unsigned long bits, unsigned long value);
int get_pgste(struct mm_struct *mm, unsigned long hva, unsigned long *pgstep);
int pgste_perform_essa(struct mm_struct *mm, unsigned long hva, int orc,
		       unsigned long *oldpte, unsigned long *oldpgste);
void gmap_pmdp_csp(struct mm_struct *mm, unsigned long vmaddr);
void gmap_pmdp_invalidate(struct mm_struct *mm, unsigned long vmaddr);
void gmap_pmdp_idte_local(struct mm_struct *mm, unsigned long vmaddr);
void gmap_pmdp_idte_global(struct mm_struct *mm, unsigned long vmaddr);

#define pgprot_writecombine pgprot_writecombine
pgprot_t pgprot_writecombine(pgprot_t prot);

#define PFN_PTE_SHIFT	PAGE_SHIFT

/*
 * Set multiple PTEs to consecutive pages with a single call. All PTEs
 * are within the same folio, PMD and VMA.
 */
static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
			    pte_t *ptep, pte_t entry, unsigned int nr)
{
	if (pte_present(entry))
		entry = clear_pte_bit(entry, __pgprot(_PAGE_UNUSED));
	if (mm_has_pgste(mm)) {
		for (;;) {
			ptep_set_pte_at(mm, addr, ptep, entry);
			if (--nr == 0)
				break;
			ptep++;
			entry = __pte(pte_val(entry) + PAGE_SIZE);
			addr += PAGE_SIZE;
		}
	} else {
		for (;;) {
			set_pte(ptep, entry);
			if (--nr == 0)
				break;
			ptep++;
			entry = __pte(pte_val(entry) + PAGE_SIZE);
		}
	}
}
#define set_ptes set_ptes
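A hedged usage sketch (the wrapper below is illustrative, not part of this header): callers pass the pte for the first page only, and set_ptes() advances the page frame, and in the pgste case the address, by PAGE_SIZE for each of the nr entries.

/* Illustration only: install three consecutive PTEs of one folio. */
static inline void example_set_three_ptes(struct mm_struct *mm, unsigned long addr,
					  pte_t *ptep, pte_t first)
{
	set_ptes(mm, addr, ptep, first, 3);
}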

/*
 * Conversion functions: convert a page and protection to a page entry,
 * and a page entry and page directory to the page they refer to.
 */
static inline pte_t mk_pte_phys(unsigned long physpage, pgprot_t pgprot)
{
	pte_t __pte;

	__pte = __pte(physpage | pgprot_val(pgprot));
	return pte_mkyoung(__pte);
}

#define pgd_index(address) (((address) >> PGDIR_SHIFT) & (PTRS_PER_PGD-1))
#define p4d_index(address) (((address) >> P4D_SHIFT) & (PTRS_PER_P4D-1))
#define pud_index(address) (((address) >> PUD_SHIFT) & (PTRS_PER_PUD-1))
#define pmd_index(address) (((address) >> PMD_SHIFT) & (PTRS_PER_PMD-1))

#define p4d_deref(pud) ((unsigned long)__va(p4d_val(pud) & _REGION_ENTRY_ORIGIN))
#define pgd_deref(pgd) ((unsigned long)__va(pgd_val(pgd) & _REGION_ENTRY_ORIGIN))

static inline unsigned long pmd_deref(pmd_t pmd)
{
	unsigned long origin_mask;

	origin_mask = _SEGMENT_ENTRY_ORIGIN;
	if (pmd_leaf(pmd))
		origin_mask = _SEGMENT_ENTRY_ORIGIN_LARGE;
	return (unsigned long)__va(pmd_val(pmd) & origin_mask);
}

static inline unsigned long pmd_pfn(pmd_t pmd)
{
	return __pa(pmd_deref(pmd)) >> PAGE_SHIFT;
}

static inline unsigned long pud_deref(pud_t pud)
{
	unsigned long origin_mask;

	origin_mask = _REGION_ENTRY_ORIGIN;
	if (pud_leaf(pud))
		origin_mask = _REGION3_ENTRY_ORIGIN_LARGE;
	return (unsigned long)__va(pud_val(pud) & origin_mask);
}

#define pud_pfn pud_pfn
static inline unsigned long pud_pfn(pud_t pud)
{
	return __pa(pud_deref(pud)) >> PAGE_SHIFT;
}

s390/mm: make the pxd_offset functions more robust
Change the way pgd_offset, p4d_offset, pud_offset and pmd_offset
walk the page tables. pgd_offset now always calculates the index for
the top-level page table and adds it to the pgd, this is either a
segment table offset for a 2-level setup, a region-3 offset for 3-levels,
region-2 offset for 4-levels, or a region-1 offset for a 5-level setup.
The other three functions p4d_offset, pud_offset and pmd_offset will
only add the respective offset if they dereference the passed pointer.
With the new way of walking the page tables a sequence like this from
mm/gup.c now works:
pgdp = pgd_offset(current->mm, addr);
pgd = READ_ONCE(*pgdp);
p4dp = p4d_offset(&pgd, addr);
p4d = READ_ONCE(*p4dp);
pudp = pud_offset(&p4d, addr);
pud = READ_ONCE(*pudp);
pmdp = pmd_offset(&pud, addr);
pmd = READ_ONCE(*pmdp);
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2019-04-23 10:51:12 +02:00
/*
 * The pgd_offset function *always* adds the index for the top-level
 * region/segment table. This is done to get a sequence like the
 * following to work:
 *	pgdp = pgd_offset(current->mm, addr);
 *	pgd = READ_ONCE(*pgdp);
 *	p4dp = p4d_offset(&pgd, addr);
 *	...
 * The subsequent p4d_offset, pud_offset and pmd_offset functions
 * only add an index if they dereferenced the pointer.
 */
static inline pgd_t *pgd_offset_raw(pgd_t *pgd, unsigned long address)
{
	unsigned long rste;
	unsigned int shift;

	/* Get the first entry of the top level table */
	rste = pgd_val(*pgd);
	/* Pick up the shift from the table type of the first entry */
	shift = ((rste & _REGION_ENTRY_TYPE_MASK) >> 2) * 11 + 20;
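	/*
	 * Worked values for the line above, added here for reference: the
	 * table-type bits 0..3 yield shifts of 20 (segment table), 31
	 * (region third), 42 (region second) and 53 (region first), i.e.
	 * 11 index bits per translation level.
	 */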
	return pgd + ((address >> shift) & (PTRS_PER_PGD - 1));
}

#define pgd_offset(mm, address) pgd_offset_raw(READ_ONCE((mm)->pgd), address)

mm/gup: fix gup_fast with dynamic page table folding
Currently to make sure that every page table entry is read just once
gup_fast walks perform READ_ONCE and pass pXd value down to the next
gup_pXd_range function by value e.g.:
static int gup_pud_range(p4d_t p4d, unsigned long addr, unsigned long end,
unsigned int flags, struct page **pages, int *nr)
...
pudp = pud_offset(&p4d, addr);
This function passes a reference on that local value copy to pXd_offset,
and might get the very same pointer in return. This happens when the
level is folded (on most arches), and that pointer should not be
iterated.
On s390, because each task might use 5-, 4- or 3-level address translation
and hence have different levels folded, the logic is more complex, and a
non-iterable pointer to a local copy leads to severe problems.
Here is an example of what happens with gup_fast on s390, for a task
with 3-level paging, crossing a 2 GB pud boundary:
// addr = 0x1007ffff000, end = 0x10080001000
static int gup_pud_range(p4d_t p4d, unsigned long addr, unsigned long end,
unsigned int flags, struct page **pages, int *nr)
{
unsigned long next;
pud_t *pudp;
// pud_offset returns &p4d itself (a pointer to a value on stack)
pudp = pud_offset(&p4d, addr);
do {
// on second iteration reading "random" stack value
pud_t pud = READ_ONCE(*pudp);
// next = 0x10080000000, due to PUD_SIZE/MASK != PGDIR_SIZE/MASK on s390
next = pud_addr_end(addr, end);
...
} while (pudp++, addr = next, addr != end); // pudp++ iterating over stack
return 1;
}
This happens since s390 moved to common gup code with commit
d1874a0c2805 ("s390/mm: make the pxd_offset functions more robust") and
commit 1a42010cdc26 ("s390/mm: convert to the generic
get_user_pages_fast code").
s390 tried to mimic static level folding by changing pXd_offset
primitives to always calculate top level page table offset in pgd_offset
and just return the value passed when pXd_offset has to act as folded.
What is crucial for gup_fast and what has been overlooked is that
PxD_SIZE/MASK and thus pXd_addr_end should also change correspondingly.
And the latter is not possible with dynamic folding.
To fix the issue in addition to pXd values pass original pXdp pointers
down to gup_pXd_range functions. And introduce pXd_offset_lockless
helpers, which take an additional pXd entry value parameter. This has
already been discussed in
https://lkml.kernel.org/r/20190418100218.0a4afd51@mschwideX1
Fixes: 1a42010cdc26 ("s390/mm: convert to the generic get_user_pages_fast code")
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Gerald Schaefer <gerald.schaefer@linux.ibm.com>
Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Mike Rapoport <rppt@linux.ibm.com>
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Jeff Dike <jdike@addtoit.com>
Cc: Richard Weinberger <richard@nod.at>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Heiko Carstens <hca@linux.ibm.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Claudio Imbrenda <imbrenda@linux.ibm.com>
Cc: <stable@vger.kernel.org> [5.2+]
Link: https://lkml.kernel.org/r/patch.git-943f1e5dcff2.your-ad-here.call-01599856292-ext-8676@work.hours
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2020-09-25 21:19:10 -07:00
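A minimal sketch of the calling convention this fix introduces (the wrapper name is hypothetical; the real users are the gup walkers in mm/gup.c): both the pointer that was read and the value read from it are passed down, so a folded level hands back the original pointer rather than a pointer to the on-stack copy. The p4d_offset_lockless() helper used here is the one defined just below.

/* Illustration only: read the pgd once, then resolve the p4d level. */
static inline p4d_t *example_walk_to_p4d(pgd_t *pgdp, unsigned long addr)
{
	pgd_t pgd = READ_ONCE(*pgdp);

	return p4d_offset_lockless(pgdp, pgd, addr);
}
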
static inline p4d_t *p4d_offset_lockless(pgd_t *pgdp, pgd_t pgd, unsigned long address)
{
	if ((pgd_val(pgd) & _REGION_ENTRY_TYPE_MASK) >= _REGION_ENTRY_TYPE_R1)
		return (p4d_t *) pgd_deref(pgd) + p4d_index(address);
	return (p4d_t *) pgdp;
}
#define p4d_offset_lockless p4d_offset_lockless

static inline p4d_t *p4d_offset(pgd_t *pgdp, unsigned long address)
{
	return p4d_offset_lockless(pgdp, *pgdp, address);
}

static inline pud_t *pud_offset_lockless(p4d_t *p4dp, p4d_t p4d, unsigned long address)
{
	if ((p4d_val(p4d) & _REGION_ENTRY_TYPE_MASK) >= _REGION_ENTRY_TYPE_R2)
		return (pud_t *) p4d_deref(p4d) + pud_index(address);
	return (pud_t *) p4dp;
}
#define pud_offset_lockless pud_offset_lockless

static inline pud_t *pud_offset(p4d_t *p4dp, unsigned long address)
{
	return pud_offset_lockless(p4dp, *p4dp, address);
}
#define pud_offset pud_offset

static inline pmd_t *pmd_offset_lockless(pud_t *pudp, pud_t pud, unsigned long address)
{
	if ((pud_val(pud) & _REGION_ENTRY_TYPE_MASK) >= _REGION_ENTRY_TYPE_R3)
		return (pmd_t *) pud_deref(pud) + pmd_index(address);
	return (pmd_t *) pudp;
}
#define pmd_offset_lockless pmd_offset_lockless

static inline pmd_t *pmd_offset(pud_t *pudp, unsigned long address)
{
	return pmd_offset_lockless(pudp, *pudp, address);
}
#define pmd_offset pmd_offset

static inline unsigned long pmd_page_vaddr(pmd_t pmd)
{
	return (unsigned long) pmd_deref(pmd);
}

static inline bool gup_fast_permitted(unsigned long start, unsigned long end)
{
	return end <= current->mm->context.asce_limit;
}
#define gup_fast_permitted gup_fast_permitted

#define pfn_pte(pfn, pgprot)	mk_pte_phys(((pfn) << PAGE_SHIFT), (pgprot))
#define pte_pfn(x) (pte_val(x) >> PAGE_SHIFT)
#define pte_page(x) pfn_to_page(pte_pfn(x))

#define pmd_page(pmd) pfn_to_page(pmd_pfn(pmd))
#define pud_page(pud) pfn_to_page(pud_pfn(pud))
#define p4d_page(p4d) pfn_to_page(p4d_pfn(p4d))
#define pgd_page(pgd) pfn_to_page(pgd_pfn(pgd))

static inline pmd_t pmd_wrprotect(pmd_t pmd)
{
	pmd = clear_pmd_bit(pmd, __pgprot(_SEGMENT_ENTRY_WRITE));
	return set_pmd_bit(pmd, __pgprot(_SEGMENT_ENTRY_PROTECT));
}

static inline pmd_t pmd_mkwrite_novma(pmd_t pmd)
{
	pmd = set_pmd_bit(pmd, __pgprot(_SEGMENT_ENTRY_WRITE));
s390/mm: simplify page table helpers for large entries
For pmds and puds, there are a couple of page table helper functions that
only make sense for large entries, like pxd_(mk)dirty/young/write etc.
We currently explicitly check if the entries are large, but in practice
those functions must never be used for normal entries, which point to lower
level page tables, so the code can be simplified.
This also fixes a theoretical bug, where common code could use one of the
functions before actually marking a pmd large, like this:
pmd = pmd_mkhuge(pmd_mkdirty(pmd))
With the current implementation, the resulting large pmd would not be dirty
as requested. This could in theory result in the loss of dirty information,
e.g. after collapsing into a transparent hugepage. Common code currently
always marks an entry large before using one of the functions, but there is
no hard requirement for this. The only requirement would be that it never
uses the functions for normal entries pointing to lower level page tables,
but they might be called before marking an entry large during its creation.
In order to avoid issues with future common code, and to simplify the page
table helpers, remove the checks for large entries and rely on common code
never using them for normal entries.
This was found by testing a patch from Anshuman Khandual, which is
currently discussed on LKML ("mm/debug: Add tests validating architecture
page table helpers").
Signed-off-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2019-09-10 19:22:09 +02:00
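The ordering hazard described above is easy to model outside the kernel. In the hedged sketch below (bit values and helper names are invented, not the s390 encodings), the old check-for-large variant silently drops the dirty bit when pmd_mkdirty() runs before pmd_mkhuge(), while the simplified variant keeps it regardless of ordering.
/* Toy model of the pmd_mkhuge(pmd_mkdirty(pmd)) ordering issue. */
#include <stdio.h>
#include <stdint.h>

#define TOY_LARGE 0x1UL
#define TOY_DIRTY 0x2UL

/* Old behaviour: only act on entries that are already marked large. */
static uint64_t toy_mkdirty_checked(uint64_t pmd)
{
	if (pmd & TOY_LARGE)
		pmd |= TOY_DIRTY;
	return pmd;
}

/* New behaviour: trust the caller and always set the bit. */
static uint64_t toy_mkdirty_unchecked(uint64_t pmd)
{
	return pmd | TOY_DIRTY;
}

static uint64_t toy_mkhuge(uint64_t pmd)
{
	return pmd | TOY_LARGE;
}

int main(void)
{
	uint64_t pmd = 0;

	/* mkdirty before mkhuge: the checked variant loses the dirty bit. */
	printf("checked:   dirty=%d\n", !!(toy_mkhuge(toy_mkdirty_checked(pmd)) & TOY_DIRTY));
	printf("unchecked: dirty=%d\n", !!(toy_mkhuge(toy_mkdirty_unchecked(pmd)) & TOY_DIRTY));
	return 0;
}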
|
|
|
if (pmd_val(pmd) & _SEGMENT_ENTRY_DIRTY)
|
2022-02-21 21:24:01 +01:00
|
|
|
pmd = clear_pmd_bit(pmd, __pgprot(_SEGMENT_ENTRY_PROTECT));
|
2014-07-24 11:03:41 +02:00
|
|
|
return pmd;
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline pmd_t pmd_mkclean(pmd_t pmd)
|
|
|
|
{
|
2022-02-21 21:24:01 +01:00
|
|
|
pmd = clear_pmd_bit(pmd, __pgprot(_SEGMENT_ENTRY_DIRTY));
|
|
|
|
return set_pmd_bit(pmd, __pgprot(_SEGMENT_ENTRY_PROTECT));
|
2014-07-24 11:03:41 +02:00
|
|
|
}
|
|
|
|
|
|
|
|
static inline pmd_t pmd_mkdirty(pmd_t pmd)
|
|
|
|
{
|
2022-02-21 21:24:01 +01:00
|
|
|
pmd = set_pmd_bit(pmd, __pgprot(_SEGMENT_ENTRY_DIRTY | _SEGMENT_ENTRY_SOFT_DIRTY));
|
2019-09-10 19:22:09 +02:00
|
|
|
if (pmd_val(pmd) & _SEGMENT_ENTRY_WRITE)
|
2022-02-21 21:24:01 +01:00
|
|
|
pmd = clear_pmd_bit(pmd, __pgprot(_SEGMENT_ENTRY_PROTECT));
|
2014-07-24 11:03:41 +02:00
|
|
|
return pmd;
|
|
|
|
}
|
|
|
|
|
2016-05-10 10:34:47 +02:00
|
|
|
static inline pud_t pud_wrprotect(pud_t pud)
|
|
|
|
{
|
2022-02-21 21:24:01 +01:00
|
|
|
pud = clear_pud_bit(pud, __pgprot(_REGION3_ENTRY_WRITE));
|
|
|
|
return set_pud_bit(pud, __pgprot(_REGION_ENTRY_PROTECT));
|
2016-05-10 10:34:47 +02:00
|
|
|
}
|
|
|
|
|
|
|
|
static inline pud_t pud_mkwrite(pud_t pud)
|
|
|
|
{
|
2022-02-21 21:24:01 +01:00
|
|
|
pud = set_pud_bit(pud, __pgprot(_REGION3_ENTRY_WRITE));
|
2019-09-10 19:22:09 +02:00
|
|
|
if (pud_val(pud) & _REGION3_ENTRY_DIRTY)
|
2022-02-21 21:24:01 +01:00
|
|
|
pud = clear_pud_bit(pud, __pgprot(_REGION_ENTRY_PROTECT));
|
2016-05-10 10:34:47 +02:00
|
|
|
return pud;
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline pud_t pud_mkclean(pud_t pud)
|
|
|
|
{
|
2022-02-21 21:24:01 +01:00
|
|
|
pud = clear_pud_bit(pud, __pgprot(_REGION3_ENTRY_DIRTY));
|
|
|
|
return set_pud_bit(pud, __pgprot(_REGION_ENTRY_PROTECT));
|
2016-05-10 10:34:47 +02:00
|
|
|
}
|
|
|
|
|
|
|
|
static inline pud_t pud_mkdirty(pud_t pud)
|
|
|
|
{
|
2022-02-21 21:24:01 +01:00
|
|
|
pud = set_pud_bit(pud, __pgprot(_REGION3_ENTRY_DIRTY | _REGION3_ENTRY_SOFT_DIRTY));
|
2019-09-10 19:22:09 +02:00
|
|
|
if (pud_val(pud) & _REGION3_ENTRY_WRITE)
|
2022-02-21 21:24:01 +01:00
|
|
|
pud = clear_pud_bit(pud, __pgprot(_REGION_ENTRY_PROTECT));
|
2016-05-10 10:34:47 +02:00
|
|
|
return pud;
|
|
|
|
}
|
|
|
|
|
|
|
|
#if defined(CONFIG_TRANSPARENT_HUGEPAGE) || defined(CONFIG_HUGETLB_PAGE)
|
|
|
|
static inline unsigned long massage_pgprot_pmd(pgprot_t pgprot)
|
|
|
|
{
|
|
|
|
/*
|
2016-03-22 10:54:24 +01:00
|
|
|
* pgprot is PAGE_NONE, PAGE_RO, PAGE_RX, PAGE_RW or PAGE_RWX
|
|
|
|
* (see __Pxxx / __Sxxx). Convert to segment table entry format.
|
2016-05-10 10:34:47 +02:00
|
|
|
*/
|
|
|
|
if (pgprot_val(pgprot) == pgprot_val(PAGE_NONE))
|
|
|
|
return pgprot_val(SEGMENT_NONE);
|
2016-03-22 10:54:24 +01:00
|
|
|
if (pgprot_val(pgprot) == pgprot_val(PAGE_RO))
|
|
|
|
return pgprot_val(SEGMENT_RO);
|
|
|
|
if (pgprot_val(pgprot) == pgprot_val(PAGE_RX))
|
|
|
|
return pgprot_val(SEGMENT_RX);
|
|
|
|
if (pgprot_val(pgprot) == pgprot_val(PAGE_RW))
|
|
|
|
return pgprot_val(SEGMENT_RW);
|
|
|
|
return pgprot_val(SEGMENT_RWX);
|
2016-05-10 10:34:47 +02:00
|
|
|
}
|
|
|
|
|
2014-07-24 11:03:41 +02:00
|
|
|
static inline pmd_t pmd_mkyoung(pmd_t pmd)
|
|
|
|
{
|
2022-02-21 21:24:01 +01:00
|
|
|
pmd = set_pmd_bit(pmd, __pgprot(_SEGMENT_ENTRY_YOUNG));
|
2019-09-10 19:22:09 +02:00
|
|
|
if (pmd_val(pmd) & _SEGMENT_ENTRY_READ)
|
2022-02-21 21:24:01 +01:00
|
|
|
pmd = clear_pmd_bit(pmd, __pgprot(_SEGMENT_ENTRY_INVALID));
|
2013-07-23 22:11:42 +02:00
|
|
|
return pmd;
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline pmd_t pmd_mkold(pmd_t pmd)
|
|
|
|
{
|
2022-02-21 21:24:01 +01:00
|
|
|
pmd = clear_pmd_bit(pmd, __pgprot(_SEGMENT_ENTRY_YOUNG));
|
|
|
|
return set_pmd_bit(pmd, __pgprot(_SEGMENT_ENTRY_INVALID));
|
2013-07-23 22:11:42 +02:00
|
|
|
}
|
|
|
|
|
2012-10-08 16:30:24 -07:00
|
|
|
static inline pmd_t pmd_modify(pmd_t pmd, pgprot_t newprot)
|
|
|
|
{
|
2022-02-21 21:24:01 +01:00
|
|
|
unsigned long mask;
|
|
|
|
|
|
|
|
mask = _SEGMENT_ENTRY_ORIGIN_LARGE;
|
|
|
|
mask |= _SEGMENT_ENTRY_DIRTY;
|
|
|
|
mask |= _SEGMENT_ENTRY_YOUNG;
|
|
|
|
mask |= _SEGMENT_ENTRY_LARGE;
|
|
|
|
mask |= _SEGMENT_ENTRY_SOFT_DIRTY;
|
|
|
|
pmd = __pmd(pmd_val(pmd) & mask);
|
|
|
|
pmd = set_pmd_bit(pmd, __pgprot(massage_pgprot_pmd(newprot)));
|
2019-09-10 19:22:09 +02:00
|
|
|
if (!(pmd_val(pmd) & _SEGMENT_ENTRY_DIRTY))
|
2022-02-21 21:24:01 +01:00
|
|
|
pmd = set_pmd_bit(pmd, __pgprot(_SEGMENT_ENTRY_PROTECT));
|
2019-09-10 19:22:09 +02:00
|
|
|
if (!(pmd_val(pmd) & _SEGMENT_ENTRY_YOUNG))
|
2022-02-21 21:24:01 +01:00
|
|
|
pmd = set_pmd_bit(pmd, __pgprot(_SEGMENT_ENTRY_INVALID));
|
2012-10-08 16:30:24 -07:00
|
|
|
return pmd;
|
|
|
|
}
|
|
|
|
|
2013-04-29 15:07:23 -07:00
|
|
|
static inline pmd_t mk_pmd_phys(unsigned long physpage, pgprot_t pgprot)
|
2012-10-08 16:30:24 -07:00
|
|
|
{
|
2022-02-21 21:24:01 +01:00
|
|
|
return __pmd(physpage + massage_pgprot_pmd(pgprot));
|
2012-10-08 16:30:24 -07:00
|
|
|
}
|
|
|
|
|
2013-04-29 15:07:23 -07:00
|
|
|
#endif /* CONFIG_TRANSPARENT_HUGEPAGE || CONFIG_HUGETLB_PAGE */
|
|
|
|
|
2014-04-03 13:55:01 +02:00
|
|
|
static inline void __pmdp_csp(pmd_t *pmdp)
|
|
|
|
{
|
2016-05-14 10:46:33 +02:00
|
|
|
csp((unsigned int *)pmdp + 1, pmd_val(*pmdp),
|
|
|
|
pmd_val(*pmdp) | _SEGMENT_ENTRY_INVALID);
|
2014-04-03 13:55:01 +02:00
|
|
|
}
|
|
|
|
|
2016-06-14 12:41:35 +02:00
|
|
|
#define IDTE_GLOBAL 0
|
|
|
|
#define IDTE_LOCAL 1
|
2016-07-04 14:47:01 +02:00
|
|
|
|
2016-07-26 16:53:09 +02:00
|
|
|
#define IDTE_PTOA 0x0800
|
|
|
|
#define IDTE_NODAT 0x1000
|
2016-07-26 16:00:22 +02:00
|
|
|
#define IDTE_GUEST_ASCE 0x2000
|
2016-07-26 16:53:09 +02:00
|
|
|
|
2019-10-04 12:29:37 +02:00
|
|
|
static __always_inline void __pmdp_idte(unsigned long addr, pmd_t *pmdp,
|
|
|
|
unsigned long opt, unsigned long asce,
|
|
|
|
int local)
|
2014-04-03 13:55:01 +02:00
|
|
|
{
|
|
|
|
unsigned long sto;
|
|
|
|
|
2021-01-11 11:01:55 +01:00
|
|
|
sto = __pa(pmdp) - pmd_index(addr) * sizeof(pmd_t);
|
2016-07-26 16:00:22 +02:00
|
|
|
if (__builtin_constant_p(opt) && opt == 0) {
|
|
|
|
/* flush without guest asce */
|
|
|
|
asm volatile(
|
2022-02-25 10:39:02 +01:00
|
|
|
" idte %[r1],0,%[r2],%[m4]"
|
2016-07-26 16:00:22 +02:00
|
|
|
: "+m" (*pmdp)
|
|
|
|
: [r1] "a" (sto), [r2] "a" ((addr & HPAGE_MASK)),
|
|
|
|
[m4] "i" (local)
|
|
|
|
: "cc" );
|
|
|
|
} else {
|
|
|
|
/* flush with guest asce */
|
|
|
|
asm volatile(
|
2022-02-25 10:39:02 +01:00
|
|
|
" idte %[r1],%[r3],%[r2],%[m4]"
|
2016-07-26 16:00:22 +02:00
|
|
|
: "+m" (*pmdp)
|
|
|
|
: [r1] "a" (sto), [r2] "a" ((addr & HPAGE_MASK) | opt),
|
|
|
|
[r3] "a" (asce), [m4] "i" (local)
|
|
|
|
: "cc" );
|
|
|
|
}
|
2014-04-03 13:55:01 +02:00
|
|
|
}
|
|
|
|
|
2019-10-04 12:29:37 +02:00
|
|
|
static __always_inline void __pudp_idte(unsigned long addr, pud_t *pudp,
|
|
|
|
unsigned long opt, unsigned long asce,
|
|
|
|
int local)
|
2016-07-04 14:47:01 +02:00
|
|
|
{
|
|
|
|
unsigned long r3o;
|
|
|
|
|
2021-01-11 11:01:55 +01:00
|
|
|
r3o = __pa(pudp) - pud_index(addr) * sizeof(pud_t);
|
2016-07-04 14:47:01 +02:00
|
|
|
r3o |= _ASCE_TYPE_REGION3;
|
2016-07-26 16:00:22 +02:00
|
|
|
if (__builtin_constant_p(opt) && opt == 0) {
|
|
|
|
/* flush without guest asce */
|
|
|
|
asm volatile(
|
2022-02-25 10:39:02 +01:00
|
|
|
" idte %[r1],0,%[r2],%[m4]"
|
2016-07-26 16:00:22 +02:00
|
|
|
: "+m" (*pudp)
|
|
|
|
: [r1] "a" (r3o), [r2] "a" ((addr & PUD_MASK)),
|
|
|
|
[m4] "i" (local)
|
|
|
|
: "cc");
|
|
|
|
} else {
|
|
|
|
/* flush with guest asce */
|
|
|
|
asm volatile(
|
2022-02-25 10:39:02 +01:00
|
|
|
" idte %[r1],%[r3],%[r2],%[m4]"
|
2016-07-26 16:00:22 +02:00
|
|
|
: "+m" (*pudp)
|
|
|
|
: [r1] "a" (r3o), [r2] "a" ((addr & PUD_MASK) | opt),
|
|
|
|
[r3] "a" (asce), [m4] "i" (local)
|
|
|
|
: "cc" );
|
|
|
|
}
|
2016-07-04 14:47:01 +02:00
|
|
|
}
|
|
|
|
|
2016-03-08 11:09:25 +01:00
|
|
|
pmd_t pmdp_xchg_direct(struct mm_struct *, unsigned long, pmd_t *, pmd_t);
|
|
|
|
pmd_t pmdp_xchg_lazy(struct mm_struct *, unsigned long, pmd_t *, pmd_t);
|
2016-07-04 14:47:01 +02:00
|
|
|
pud_t pudp_xchg_direct(struct mm_struct *, unsigned long, pud_t *, pud_t);
|
2014-04-03 13:55:01 +02:00
|
|
|
|
2016-03-08 11:09:25 +01:00
|
|
|
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
|
|
|
|
|
|
|
|
#define __HAVE_ARCH_PGTABLE_DEPOSIT
|
|
|
|
void pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp,
|
|
|
|
pgtable_t pgtable);
|
|
|
|
|
|
|
|
#define __HAVE_ARCH_PGTABLE_WITHDRAW
|
|
|
|
pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp);
|
2014-04-03 13:55:01 +02:00
|
|
|
|
2016-03-08 11:09:25 +01:00
|
|
|
#define __HAVE_ARCH_PMDP_SET_ACCESS_FLAGS
|
|
|
|
static inline int pmdp_set_access_flags(struct vm_area_struct *vma,
|
|
|
|
unsigned long addr, pmd_t *pmdp,
|
|
|
|
pmd_t entry, int dirty)
|
2013-07-26 15:04:02 +02:00
|
|
|
{
|
2016-03-08 11:09:25 +01:00
|
|
|
VM_BUG_ON(addr & ~HPAGE_MASK);
|
2013-07-26 15:04:02 +02:00
|
|
|
|
2016-03-08 11:09:25 +01:00
|
|
|
entry = pmd_mkyoung(entry);
|
|
|
|
if (dirty)
|
|
|
|
entry = pmd_mkdirty(entry);
|
|
|
|
if (pmd_val(*pmdp) == pmd_val(entry))
|
|
|
|
return 0;
|
|
|
|
pmdp_xchg_direct(vma->vm_mm, addr, pmdp, entry);
|
|
|
|
return 1;
|
2013-07-26 15:04:02 +02:00
|
|
|
}
|
|
|
|
|
2016-03-08 11:09:25 +01:00
|
|
|
#define __HAVE_ARCH_PMDP_TEST_AND_CLEAR_YOUNG
|
|
|
|
static inline int pmdp_test_and_clear_young(struct vm_area_struct *vma,
|
|
|
|
unsigned long addr, pmd_t *pmdp)
|
|
|
|
{
|
|
|
|
pmd_t pmd = *pmdp;
|
2013-04-29 15:07:23 -07:00
|
|
|
|
2016-03-08 11:09:25 +01:00
|
|
|
pmd = pmdp_xchg_direct(vma->vm_mm, addr, pmdp, pmd_mkold(pmd));
|
|
|
|
return pmd_young(pmd);
|
|
|
|
}
|
2013-04-29 15:07:23 -07:00
|
|
|
|
2016-03-08 11:09:25 +01:00
|
|
|
#define __HAVE_ARCH_PMDP_CLEAR_YOUNG_FLUSH
|
|
|
|
static inline int pmdp_clear_flush_young(struct vm_area_struct *vma,
|
|
|
|
unsigned long addr, pmd_t *pmdp)
|
|
|
|
{
|
|
|
|
VM_BUG_ON(addr & ~HPAGE_MASK);
|
|
|
|
return pmdp_test_and_clear_young(vma, addr, pmdp);
|
|
|
|
}
|
2013-04-29 15:07:23 -07:00
|
|
|
|
|
|
|
static inline void set_pmd_at(struct mm_struct *mm, unsigned long addr,
|
|
|
|
pmd_t *pmdp, pmd_t entry)
|
|
|
|
{
|
2022-02-21 20:50:07 +01:00
|
|
|
set_pmd(pmdp, entry);
|
2013-04-29 15:07:23 -07:00
|
|
|
}
|
|
|
|
|
|
|
|
static inline pmd_t pmd_mkhuge(pmd_t pmd)
|
|
|
|
{
|
2022-02-21 21:24:01 +01:00
|
|
|
pmd = set_pmd_bit(pmd, __pgprot(_SEGMENT_ENTRY_LARGE));
|
|
|
|
pmd = set_pmd_bit(pmd, __pgprot(_SEGMENT_ENTRY_YOUNG));
|
|
|
|
return set_pmd_bit(pmd, __pgprot(_SEGMENT_ENTRY_PROTECT));
|
2012-10-08 16:30:24 -07:00
|
|
|
}
|
|
|
|
|
2015-06-24 16:57:44 -07:00
|
|
|
#define __HAVE_ARCH_PMDP_HUGE_GET_AND_CLEAR
|
|
|
|
static inline pmd_t pmdp_huge_get_and_clear(struct mm_struct *mm,
|
2016-03-08 11:09:25 +01:00
|
|
|
unsigned long addr, pmd_t *pmdp)
|
2012-10-08 16:30:24 -07:00
|
|
|
{
|
2016-04-27 11:43:07 +02:00
|
|
|
return pmdp_xchg_direct(mm, addr, pmdp, __pmd(_SEGMENT_ENTRY_EMPTY));
|
2012-10-08 16:30:24 -07:00
|
|
|
}
|
|
|
|
|
2015-06-24 16:57:44 -07:00
|
|
|
#define __HAVE_ARCH_PMDP_HUGE_GET_AND_CLEAR_FULL
|
2020-05-05 12:47:28 +05:30
|
|
|
static inline pmd_t pmdp_huge_get_and_clear_full(struct vm_area_struct *vma,
|
2016-03-08 11:09:25 +01:00
|
|
|
unsigned long addr,
|
2015-06-24 16:57:44 -07:00
|
|
|
pmd_t *pmdp, int full)
|
2014-10-24 10:52:29 +02:00
|
|
|
{
|
2016-03-08 11:09:25 +01:00
|
|
|
if (full) {
|
|
|
|
pmd_t pmd = *pmdp;
|
2022-02-21 20:50:07 +01:00
|
|
|
set_pmd(pmdp, __pmd(_SEGMENT_ENTRY_EMPTY));
|
2016-03-08 11:09:25 +01:00
|
|
|
return pmd;
|
|
|
|
}
|
2020-05-05 12:47:28 +05:30
|
|
|
return pmdp_xchg_lazy(vma->vm_mm, addr, pmdp, __pmd(_SEGMENT_ENTRY_EMPTY));
|
2014-10-24 10:52:29 +02:00
|
|
|
}
|
|
|
|
|
2015-06-24 16:57:44 -07:00
|
|
|
#define __HAVE_ARCH_PMDP_HUGE_CLEAR_FLUSH
|
|
|
|
static inline pmd_t pmdp_huge_clear_flush(struct vm_area_struct *vma,
|
2016-03-08 11:09:25 +01:00
|
|
|
unsigned long addr, pmd_t *pmdp)
|
2012-10-08 16:30:24 -07:00
|
|
|
{
|
2016-03-08 11:09:25 +01:00
|
|
|
return pmdp_huge_get_and_clear(vma->vm_mm, addr, pmdp);
|
2012-10-08 16:30:24 -07:00
|
|
|
}
|
|
|
|
|
|
|
|
#define __HAVE_ARCH_PMDP_INVALIDATE
|
2018-01-31 16:18:05 -08:00
|
|
|
static inline pmd_t pmdp_invalidate(struct vm_area_struct *vma,
|
2016-03-08 11:09:25 +01:00
|
|
|
unsigned long addr, pmd_t *pmdp)
|
2012-10-08 16:30:24 -07:00
|
|
|
{
|
2024-05-01 15:33:10 +01:00
|
|
|
pmd_t pmd;
|
2017-09-18 16:10:35 +02:00
|
|
|
|
2024-05-01 15:33:10 +01:00
|
|
|
VM_WARN_ON_ONCE(!pmd_present(*pmdp));
|
|
|
|
pmd = __pmd(pmd_val(*pmdp) | _SEGMENT_ENTRY_INVALID);
|
2018-01-31 16:18:05 -08:00
|
|
|
return pmdp_xchg_direct(vma->vm_mm, addr, pmdp, pmd);
|
2012-10-08 16:30:24 -07:00
|
|
|
}
|
|
|
|
|
2013-01-21 16:48:07 +01:00
|
|
|
#define __HAVE_ARCH_PMDP_SET_WRPROTECT
|
|
|
|
static inline void pmdp_set_wrprotect(struct mm_struct *mm,
|
2016-03-08 11:09:25 +01:00
|
|
|
unsigned long addr, pmd_t *pmdp)
|
2013-01-21 16:48:07 +01:00
|
|
|
{
|
|
|
|
pmd_t pmd = *pmdp;
|
|
|
|
|
2016-03-08 11:09:25 +01:00
|
|
|
if (pmd_write(pmd))
|
|
|
|
pmd = pmdp_xchg_lazy(mm, addr, pmdp, pmd_wrprotect(pmd));
|
2013-01-21 16:48:07 +01:00
|
|
|
}
|
|
|
|
|
2015-06-24 16:57:42 -07:00
|
|
|
static inline pmd_t pmdp_collapse_flush(struct vm_area_struct *vma,
|
|
|
|
unsigned long address,
|
|
|
|
pmd_t *pmdp)
|
|
|
|
{
|
2015-06-24 16:57:44 -07:00
|
|
|
return pmdp_huge_get_and_clear(vma->vm_mm, address, pmdp);
|
2015-06-24 16:57:42 -07:00
|
|
|
}
|
|
|
|
#define pmdp_collapse_flush pmdp_collapse_flush
|
|
|
|
|
2021-02-12 07:43:17 +01:00
|
|
|
#define pfn_pmd(pfn, pgprot) mk_pmd_phys(((pfn) << PAGE_SHIFT), (pgprot))
|
2012-10-08 16:30:24 -07:00
|
|
|
|
|
|
|
static inline int pmd_trans_huge(pmd_t pmd)
|
|
|
|
{
|
s390/mm: Introduce region-third and segment table entry present bits
Introduce region-third and segment table entry present SW bits, and adjust
pmd/pud_present() accordingly.
Also add pmd/pud_present() checks to pmd/pud_leaf(), to return false for
future swap entries. Same logic applies to pmd_trans_huge(), make that
return pmd_leaf() instead of duplicating the same check.
huge_pte_offset() also needs to be adjusted, current code would return
NULL for !pud_present(). Use the same logic as in the generic version,
which allows for !pud_present() swap entries.
Similar to PTE, bit 63 can be used for the new SW present bit in region
and segment table entries. For segment-table entries (PMD) the architecture
says that "Bits 62-63 are available for programming", so they are safe to
use. The same is true for large leaf region-third-table entries (PUD).
However, for non-leaf region-third-table entries, bits 62-63 indicate the
TABLE LENGTH and both must be set to 1. But such entries would always be
considered as present, so it is safe to use bit 63 as PRESENT bit for PUD.
They also should not conflict with bit 62 potentially later used for
preserving SOFT_DIRTY in swap entries, because they are not swap entries.
Valid PMDs / PUDs should always have the present bit set, so add it to
the various pgprot defines, and also _SEGMENT_ENTRY which is OR'ed e.g.
in pmd_populate(). _REGION3_ENTRY wouldn't need any change, as the present
bit is already included in the TABLE LENGTH, but also explicitly add it
there, for completeness, and just in case the bit would ever be changed.
gmap code needs some adjustment, to also OR the _SEGMENT_ENTRY, like it
is already done in gmap_shadow_pgt() when creating new PMDs, but not in
__gmap_link(). Otherwise, the gmap PMDs would not be considered present,
e.g. when using pmd_leaf() checks in gmap code. The various WARN_ON
checks in gmap code also need adjustment, to tolerate the new present
bit.
This is a prerequisite for hugetlbfs PTE_MARKER support on s390, which
is needed to fix a regression introduced with commit 8a13897fb0da
("mm: userfaultfd: support UFFDIO_POISON for hugetlbfs"). That commit
depends on the availability of swap entries for hugetlbfs, which were
not available for s390 so far.
Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com>
Signed-off-by: Gerald Schaefer <gerald.schaefer@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
2024-11-21 18:45:21 +01:00
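A compressed illustration of the scheme described above: a software PRESENT bit distinguishes real (possibly invalid) entries from future swap entries, and the leaf test requires it, so pmd_trans_huge() can simply reuse pmd_leaf(). The bit positions and helper names below are invented for the example and are not the s390 encodings.
/* Toy encoding: PRESENT is a pure software bit, LARGE marks a leaf entry. */
#include <stdio.h>
#include <stdint.h>

#define TOY_PRESENT 0x1UL
#define TOY_LARGE   0x2UL
#define TOY_INVALID 0x4UL

static int toy_pmd_present(uint64_t pmd) { return !!(pmd & TOY_PRESENT); }

static int toy_pmd_leaf(uint64_t pmd)
{
	return toy_pmd_present(pmd) && (pmd & TOY_LARGE);
}

/* As in the patch above: pmd_trans_huge() reuses the leaf test. */
static int toy_pmd_trans_huge(uint64_t pmd) { return toy_pmd_leaf(pmd); }

int main(void)
{
	uint64_t huge = TOY_PRESENT | TOY_LARGE;
	uint64_t swap = TOY_INVALID | TOY_LARGE;	/* no PRESENT bit set */

	printf("huge: leaf=%d thp=%d\n", toy_pmd_leaf(huge), toy_pmd_trans_huge(huge));
	printf("swap: leaf=%d thp=%d\n", toy_pmd_leaf(swap), toy_pmd_trans_huge(swap));
	return 0;
}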
|
|
|
return pmd_leaf(pmd);
|
2012-10-08 16:30:24 -07:00
|
|
|
}
|
|
|
|
|
arch: fix has_transparent_hugepage()
I've just discovered that the useful-sounding has_transparent_hugepage()
is actually an architecture-dependent minefield: on some arches it only
builds if CONFIG_TRANSPARENT_HUGEPAGE=y, on others it's also there when
not, but on some of those (arm and arm64) it then gives the wrong
answer; and on mips alone it's marked __init, which would crash if
called later (but so far it has not been called later).
Straighten this out: make it available to all configs, with a sensible
default in asm-generic/pgtable.h, removing its definitions from those
arches (arc, arm, arm64, sparc, tile) which are served by the default,
adding #define has_transparent_hugepage has_transparent_hugepage to
those (mips, powerpc, s390, x86) which need to override the default at
runtime, and removing the __init from mips (but maybe that kind of code
should be avoided after init: set a static variable the first time it's
called).
Signed-off-by: Hugh Dickins <hughd@google.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Andres Lagar-Cavilla <andreslc@google.com>
Cc: Yang Shi <yang.shi@linaro.org>
Cc: Ning Qu <quning@gmail.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Konstantin Khlebnikov <koct9i@gmail.com>
Acked-by: David S. Miller <davem@davemloft.net>
Acked-by: Vineet Gupta <vgupta@synopsys.com> [arch/arc]
Acked-by: Gerald Schaefer <gerald.schaefer@de.ibm.com> [arch/s390]
Acked-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-05-19 17:13:00 -07:00
|
|
|
#define has_transparent_hugepage has_transparent_hugepage
|
2012-10-08 16:30:24 -07:00
|
|
|
static inline int has_transparent_hugepage(void)
|
|
|
|
{
|
2025-02-07 15:48:53 +01:00
|
|
|
return cpu_has_edat1() ? 1 : 0;
|
2012-10-08 16:30:24 -07:00
|
|
|
}
|
2012-10-08 16:30:15 -07:00
|
|
|
#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
|
|
|
|
|
2005-04-16 15:20:36 -07:00
|
|
|
/*
|
|
|
|
* 64 bit swap entry format:
|
|
|
|
* A page-table entry has some bits we have to treat in a special way.
|
2022-05-09 18:20:46 -07:00
|
|
|
* Bits 54 and 63 are used to indicate the page type. Bit 53 marks the pte
|
|
|
|
* as invalid.
|
2015-04-22 13:55:59 +02:00
|
|
|
* A swap pte is indicated by bit pattern (pte & 0x201) == 0x200
|
2022-05-09 18:20:46 -07:00
|
|
|
* | offset |E11XX|type |S0|
|
2015-04-22 13:55:59 +02:00
|
|
|
* |0000000000111111111122222222223333333333444444444455|55555|55566|66|
|
|
|
|
* |0123456789012345678901234567890123456789012345678901|23456|78901|23|
|
2022-05-09 18:20:46 -07:00
|
|
|
*
|
|
|
|
* Bits 0-51 store the offset.
|
2022-05-09 18:20:46 -07:00
|
|
|
* Bit 52 (E) is used to remember PG_anon_exclusive.
|
2022-05-09 18:20:46 -07:00
|
|
|
* Bits 57-61 store the type.
|
|
|
|
* Bit 62 (S) is used for softdirty tracking.
|
2022-05-09 18:20:46 -07:00
|
|
|
* Bits 55 and 56 (X) are unused.
|
2005-04-16 15:20:36 -07:00
|
|
|
*/
|
2015-02-12 13:08:27 +01:00
|
|
|
|
2015-04-22 13:55:59 +02:00
|
|
|
#define __SWP_OFFSET_MASK ((1UL << 52) - 1)
|
|
|
|
#define __SWP_OFFSET_SHIFT 12
|
|
|
|
#define __SWP_TYPE_MASK ((1UL << 5) - 1)
|
|
|
|
#define __SWP_TYPE_SHIFT 2
|
2015-02-12 13:08:27 +01:00
|
|
|
|
2005-11-08 21:34:42 -08:00
|
|
|
static inline pte_t mk_swap_pte(unsigned long type, unsigned long offset)
|
2005-04-16 15:20:36 -07:00
|
|
|
{
|
2022-02-21 21:24:01 +01:00
|
|
|
unsigned long pteval;
|
2015-04-22 13:55:59 +02:00
|
|
|
|
2022-02-21 21:24:01 +01:00
|
|
|
pteval = _PAGE_INVALID | _PAGE_PROTECT;
|
|
|
|
pteval |= (offset & __SWP_OFFSET_MASK) << __SWP_OFFSET_SHIFT;
|
|
|
|
pteval |= (type & __SWP_TYPE_MASK) << __SWP_TYPE_SHIFT;
|
|
|
|
return __pte(pteval);
|
2005-04-16 15:20:36 -07:00
|
|
|
}
|
|
|
|
|
2015-04-22 13:55:59 +02:00
|
|
|
static inline unsigned long __swp_type(swp_entry_t entry)
|
|
|
|
{
|
|
|
|
return (entry.val >> __SWP_TYPE_SHIFT) & __SWP_TYPE_MASK;
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline unsigned long __swp_offset(swp_entry_t entry)
|
|
|
|
{
|
|
|
|
return (entry.val >> __SWP_OFFSET_SHIFT) & __SWP_OFFSET_MASK;
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline swp_entry_t __swp_entry(unsigned long type, unsigned long offset)
|
|
|
|
{
|
|
|
|
return (swp_entry_t) { pte_val(mk_swap_pte(type, offset)) };
|
|
|
|
}
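As a usage illustration of the encoding above, the standalone snippet below repeats the shift/mask arithmetic from the defines and helpers just shown and round-trips a type/offset pair. It deliberately leaves out the _PAGE_INVALID/_PAGE_PROTECT bits that mk_swap_pte() also sets, and uses plain integers instead of the kernel's pte/swp_entry types.
/* Standalone round-trip of the swap-pte shift/mask arithmetic shown above. */
#include <assert.h>
#include <stdio.h>

#define SWP_OFFSET_MASK  ((1ULL << 52) - 1)
#define SWP_OFFSET_SHIFT 12
#define SWP_TYPE_MASK    ((1ULL << 5) - 1)
#define SWP_TYPE_SHIFT   2

int main(void)
{
	unsigned long long type = 7, offset = 0x12345, val;

	val  = (offset & SWP_OFFSET_MASK) << SWP_OFFSET_SHIFT;
	val |= (type & SWP_TYPE_MASK) << SWP_TYPE_SHIFT;

	assert(((val >> SWP_TYPE_SHIFT) & SWP_TYPE_MASK) == type);
	assert(((val >> SWP_OFFSET_SHIFT) & SWP_OFFSET_MASK) == offset);
	printf("encoded swap value: %#llx\n", val);
	return 0;
}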
|
2005-04-16 15:20:36 -07:00
|
|
|
|
|
|
|
#define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) })
|
|
|
|
#define __swp_entry_to_pte(x) ((pte_t) { (x).val })
|
|
|
|
|
s390/mm: Introduce region-third and segment table swap entries
Introduce region-third (PUD) and segment table (PMD) swap entries, and
make hugetlbfs RSTE <-> PTE conversion code aware of them, so that they
can be used for hugetlbfs PTE_MARKER entries. Future work could also
build on this to enable THP_SWAP and THP_MIGRATION for s390.
Similar to PTE swap entries, bits 0-51 can be used to store the swap
offset, but bits 57-61 cannot be used for swap type because that overlaps
with the INVALID and TABLE TYPE bits. PMD/PUD swap entries must be invalid,
and have a correct table type so that pud_folded() check still works.
Bits 53-57 can be used for swap type, but those include the PROTECT bit.
So unlike swap PTEs, the PROTECT bit cannot be used to mark the swap entry.
Use the "Common-Segment/Region" bit 59 instead for that.
Also remove the !MACHINE_HAS_NX check in __set_huge_pte_at(). Otherwise,
that would clear the _SEGMENT_ENTRY_NOEXEC bit also for swap entries, where
it is used for encoding the swap type. The architecture only requires this
bit to be 0 for PTEs, with !MACHINE_HAS_NX, not for segment or region-third
entries. And the check is also redundant, because after __pte_to_rste()
conversion, for non-swap PTEs it would only be set if it was already set in
the PTE, which should never be the case for !MACHINE_HAS_NX.
This is a prerequisite for hugetlbfs PTE_MARKER support on s390, which
is needed to fix a regression introduced with commit 8a13897fb0da
("mm: userfaultfd: support UFFDIO_POISON for hugetlbfs"). That commit
depends on the availability of swap entries for hugetlbfs, which were
not available for s390 so far.
Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com>
Signed-off-by: Gerald Schaefer <gerald.schaefer@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
2024-11-21 18:45:22 +01:00
|
|
|
/*
|
|
|
|
* 64 bit swap entry format for REGION3 and SEGMENT table entries (RSTE)
|
|
|
|
* Bits 59 and 63 are used to indicate the swap entry. Bit 58 marks the rste
|
|
|
|
* as invalid.
|
|
|
|
* A swap entry is indicated by bit pattern (rste & 0x011) == 0x010
|
|
|
|
* | offset |Xtype |11TT|S0|
|
|
|
|
* |0000000000111111111122222222223333333333444444444455|555555|5566|66|
|
|
|
|
* |0123456789012345678901234567890123456789012345678901|234567|8901|23|
|
|
|
|
*
|
|
|
|
* Bits 0-51 store the offset.
|
|
|
|
* Bits 53-57 store the type.
|
|
|
|
* Bit 62 (S) is used for softdirty tracking.
|
|
|
|
* Bits 60-61 (TT) indicate the table type: 0x01 for REGION3 and 0x00 for SEGMENT.
|
|
|
|
* Bit 52 (X) is unused.
|
|
|
|
*/
|
|
|
|
|
|
|
|
#define __SWP_OFFSET_MASK_RSTE ((1UL << 52) - 1)
|
|
|
|
#define __SWP_OFFSET_SHIFT_RSTE 12
|
|
|
|
#define __SWP_TYPE_MASK_RSTE ((1UL << 5) - 1)
|
|
|
|
#define __SWP_TYPE_SHIFT_RSTE 6
|
|
|
|
|
|
|
|
/*
|
|
|
|
* TT bits set to 0x00 == SEGMENT. For REGION3 entries, caller must add R3
|
|
|
|
* bits 0x01. See also __set_huge_pte_at().
|
|
|
|
*/
|
|
|
|
static inline unsigned long mk_swap_rste(unsigned long type, unsigned long offset)
|
|
|
|
{
|
|
|
|
unsigned long rste;
|
|
|
|
|
|
|
|
rste = _RST_ENTRY_INVALID | _RST_ENTRY_COMM;
|
|
|
|
rste |= (offset & __SWP_OFFSET_MASK_RSTE) << __SWP_OFFSET_SHIFT_RSTE;
|
|
|
|
rste |= (type & __SWP_TYPE_MASK_RSTE) << __SWP_TYPE_SHIFT_RSTE;
|
|
|
|
return rste;
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline unsigned long __swp_type_rste(swp_entry_t entry)
|
|
|
|
{
|
|
|
|
return (entry.val >> __SWP_TYPE_SHIFT_RSTE) & __SWP_TYPE_MASK_RSTE;
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline unsigned long __swp_offset_rste(swp_entry_t entry)
|
|
|
|
{
|
|
|
|
return (entry.val >> __SWP_OFFSET_SHIFT_RSTE) & __SWP_OFFSET_MASK_RSTE;
|
|
|
|
}
|
|
|
|
|
|
|
|
#define __rste_to_swp_entry(rste) ((swp_entry_t) { rste })
|
|
|
|
|
2008-04-30 13:38:47 +02:00
|
|
|
extern int vmem_add_mapping(unsigned long start, unsigned long size);
|
2020-06-25 17:00:29 +02:00
|
|
|
extern void vmem_remove_mapping(unsigned long start, unsigned long size);
|
2022-07-20 08:22:01 +02:00
|
|
|
extern int __vmem_map_4k_page(unsigned long addr, unsigned long phys, pgprot_t prot, bool alloc);
|
|
|
|
extern int vmem_map_4k_page(unsigned long addr, unsigned long phys, pgprot_t prot);
|
|
|
|
extern void vmem_unmap_4k_page(unsigned long addr);
|
2022-07-24 15:02:16 +02:00
|
|
|
extern pte_t *vmem_get_alloc_pte(unsigned long addr, bool alloc);
|
2008-03-25 18:47:10 +01:00
|
|
|
extern int s390_enable_sie(void);
|
2014-10-23 12:09:17 +02:00
|
|
|
extern int s390_enable_skey(void);
|
2014-10-23 12:07:14 +02:00
|
|
|
extern void s390_reset_cmma(struct mm_struct *mm);
|
2006-12-08 15:56:07 +01:00
|
|
|
|
2015-01-14 17:51:17 +01:00
|
|
|
/* s390 has a private copy of get unmapped area to deal with cache synonyms */
|
|
|
|
#define HAVE_ARCH_UNMAPPED_AREA
|
|
|
|
#define HAVE_ARCH_UNMAPPED_AREA_TOPDOWN
|
|
|
|
|
2021-06-30 18:53:59 -07:00
|
|
|
#define pmd_pgtable(pmd) \
|
|
|
|
((pgtable_t)__va(pmd_val(pmd) & -sizeof(pte_t)*PTRS_PER_PTE))
|
|
|
|
|
2025-01-23 15:46:27 +01:00
|
|
|
static inline unsigned long gmap_pgste_get_pgt_addr(unsigned long *pgt)
|
|
|
|
{
|
|
|
|
unsigned long *pgstes, res;
|
|
|
|
|
|
|
|
pgstes = pgt + _PAGE_ENTRIES;
|
|
|
|
|
|
|
|
res = (pgstes[0] & PGSTE_ST2_MASK) << 16;
|
|
|
|
res |= pgstes[1] & PGSTE_ST2_MASK;
|
|
|
|
res |= (pgstes[2] & PGSTE_ST2_MASK) >> 16;
|
|
|
|
res |= (pgstes[3] & PGSTE_ST2_MASK) >> 32;
|
|
|
|
|
|
|
|
return res;
|
|
|
|
}
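The helper above stitches a 64-bit value back together from a 16-bit field in each of four status words. The standalone sketch below shows the same stitch arithmetic; TOY_ST2_MASK is an assumed 16-bit field at bits 32-47 chosen only to make the example run, and the real PGSTE_ST2_MASK value is defined elsewhere in the s390 headers.
/* Standalone illustration of the fragment-stitching done above. */
#include <stdio.h>
#include <stdint.h>

#define TOY_ST2_MASK 0x0000ffff00000000ULL

int main(void)
{
	uint64_t want = 0x123456789abcdef0ULL;
	uint64_t pgstes[4];
	uint64_t res;

	/* Scatter 16-bit slices of 'want' into the assumed ST2 field. */
	pgstes[0] = (want >> 16) & TOY_ST2_MASK;
	pgstes[1] =  want        & TOY_ST2_MASK;
	pgstes[2] = (want << 16) & TOY_ST2_MASK;
	pgstes[3] = (want << 32) & TOY_ST2_MASK;

	/* Same shift pattern as the helper above. */
	res  = (pgstes[0] & TOY_ST2_MASK) << 16;
	res |=  pgstes[1] & TOY_ST2_MASK;
	res |= (pgstes[2] & TOY_ST2_MASK) >> 16;
	res |= (pgstes[3] & TOY_ST2_MASK) >> 32;

	printf("reassembled: %#llx (%s)\n", (unsigned long long)res,
	       res == want ? "matches" : "differs");
	return 0;
}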
|
|
|
|
|
2005-04-16 15:20:36 -07:00
|
|
|
#endif /* _S390_PAGE_H */
|