License cleanup: add SPDX GPL-2.0 license identifier to files with no license
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files that contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boilerplate text.
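In practice the identifier is a single comment on the first line of each
file; for a GPL-2.0 C header such as the one annotated below it reads:

	/* SPDX-License-Identifier: GPL-2.0 */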
This patch is based on work done by Thomas Gleixner, Kate Stewart and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it,
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information.
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and cases where references
to a license had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX License Identifier should be applied
to a file was done in a spreadsheet of side-by-side results of the output
of two independent scanners (ScanCode & Windriver) producing SPDX
tag:value files, created by Philippe Ombredanne. Philippe prepared the
base worksheet and did an initial spot review of a few thousand files.
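For orientation, the scanner output referred to above is the SPDX
tag:value form; a minimal illustration (field names from the SPDX
specification, values invented here for this example):

	FileName: ./arch/powerpc/include/asm/nohash/32/pgtable.h
	LicenseConcluded: GPL-2.0
	LicenseInfoInFile: NONE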
The 4.13 kernel was the starting point of the analysis, with 60,537 files
assessed. Kate Stewart did a file-by-file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
should be applied to each file. She confirmed any determination that was
not immediately clear with lawyers working with the Linux Foundation.
Criteria used to select files for SPDX license identifier tagging were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
  lines of source.
- Files that already had some variant of a license header in them were
  included (even if <5 lines).
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when neither scanner could find any license traces, the file was
  considered to have no license information in it, and the top-level
  COPYING file license was applied.
For non-*/uapi/* files, that summary was:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 11139
and resulted in the first patch in this series.
If the file was a */uapi/* path one, it was "GPL-2.0 WITH
Linux-syscall-note"; otherwise it was "GPL-2.0" (a sketch of this rule
follows the heuristics list below). Results of that were:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 WITH Linux-syscall-note 930
and resulted in the second patch in this series.
- if a file had some form of licensing information in it and was one
  of the */uapi/* ones, it was denoted with the Linux-syscall-note if
  any GPL-family license was found in the file, or if it had no licensing
  in it (per the prior point). Results summary:
SPDX license identifier # files
---------------------------------------------------|------
GPL-2.0 WITH Linux-syscall-note 270
GPL-2.0+ WITH Linux-syscall-note 169
((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) 21
((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) 17
LGPL-2.1+ WITH Linux-syscall-note 15
GPL-1.0+ WITH Linux-syscall-note 14
((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause) 5
LGPL-2.0+ WITH Linux-syscall-note 4
LGPL-2.1 WITH Linux-syscall-note 3
((GPL-2.0 WITH Linux-syscall-note) OR MIT) 3
((GPL-2.0 WITH Linux-syscall-note) AND MIT) 1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
  the file was flagged for further research and to be revisited later.
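The path-based default mentioned above boils down to a simple rule; a
minimal sketch in C (a hypothetical helper, not part of the actual
tooling, which worked from scanner output collected in a spreadsheet):

	#include <string.h>

	/* Default identifier for a file with no detectable license text. */
	static const char *default_spdx(const char *path)
	{
		if (strstr(path, "/uapi/"))
			return "GPL-2.0 WITH Linux-syscall-note";
		return "GPL-2.0";
	}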
In total, over 70 hours of logged manual review of the spreadsheet was
done by Kate, Philippe and Thomas to determine the SPDX license
identifiers to apply to the source files, with confirmation in some cases
by lawyers working with the Linux Foundation.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were new insights. The
Windriver scanner is based in part on an older version of FOSSology, so
the two are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with the SPDX license identifier in the
files he inspected. For the non-uapi files, Thomas did random spot checks
in about 15,000 files.
In the initial set of patches against 4.14-rc6, 3 files were found to
have copy/paste license identifier errors, and they have been fixed to
reflect the correct identifier.
Additionally, Philippe spent 10 hours this week doing a detailed manual
inspection and review of the 12,461 files patched in the initial version
of this series, with:
- a full scancode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where there was a license detected (about 500+
files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the .csv files and add the proper SPDX tag to each file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types). Finally, Greg ran the script using the .csv files to
generate the patches.
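The different comment types referred to above follow the kernel's SPDX
convention (noted here for illustration, not quoted from the script):
headers carry the tag in a C block comment, while .c source files use a
C++-style comment on their first line:

	/* SPDX-License-Identifier: GPL-2.0 */	(in .h files, as in the header below)
	// SPDX-License-Identifier: GPL-2.0	(in .c files)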
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_POWERPC_NOHASH_32_PGTABLE_H
#define _ASM_POWERPC_NOHASH_32_PGTABLE_H

#include <asm-generic/pgtable-nopmd.h>

#ifndef __ASSEMBLY__
#include <linux/sched.h>
#include <linux/threads.h>
#include <asm/mmu.h>		/* For sub-arch specific PPC_PIN_SIZE */

#endif /* __ASSEMBLY__ */

#define PTE_INDEX_SIZE	PTE_SHIFT
#define PMD_INDEX_SIZE	0
#define PUD_INDEX_SIZE	0
#define PGD_INDEX_SIZE	(32 - PGDIR_SHIFT)

#define PMD_CACHE_INDEX	PMD_INDEX_SIZE
#define PUD_CACHE_INDEX	PUD_INDEX_SIZE

#ifndef __ASSEMBLY__
#define PTE_TABLE_SIZE	(sizeof(pte_t) << PTE_INDEX_SIZE)
#define PMD_TABLE_SIZE	0
#define PUD_TABLE_SIZE	0
#define PGD_TABLE_SIZE	(sizeof(pgd_t) << PGD_INDEX_SIZE)

#define PMD_MASKED_BITS	(PTE_TABLE_SIZE - 1)
#endif /* __ASSEMBLY__ */

#define PTRS_PER_PTE	(1 << PTE_INDEX_SIZE)
#define PTRS_PER_PGD	(1 << PGD_INDEX_SIZE)

/*
 * The normal case is that PTEs are 32-bits and we have a 1-page
 * 1024-entry pgdir pointing to 1-page 1024-entry PTE pages. -- paulus
 *
 * For any >32-bit physical address platform, we can use the following
 * two level page table layout where the pgdir is 8KB and the MS 13 bits
 * are an index to the second level table. The combined pgdir/pmd first
 * level has 2048 entries and the second level has 512 64-bit PTE entries.
 * -Matt
 */
/* PGDIR_SHIFT determines what a top-level page table entry can map */
#define PGDIR_SHIFT	(PAGE_SHIFT + PTE_INDEX_SIZE)
#define PGDIR_SIZE	(1UL << PGDIR_SHIFT)
#define PGDIR_MASK	(~(PGDIR_SIZE-1))

/* Bits to mask out from a PGD to get to the PUD page */
#define PGD_MASKED_BITS	0

#define USER_PTRS_PER_PGD	(TASK_SIZE / PGDIR_SIZE)

#define pgd_ERROR(e) \
	pr_err("%s:%d: bad pgd %08llx.\n", __FILE__, __LINE__, (unsigned long long)pgd_val(e))

/*
 * This is the bottom of the PKMAP area with HIGHMEM or an arbitrary
 * value (for now) on others, from where we can start layout kernel
 * virtual space that goes below PKMAP and FIXMAP
 */

#define FIXADDR_SIZE	0
#ifdef CONFIG_KASAN
#include <asm/kasan.h>
#define FIXADDR_TOP	(KASAN_SHADOW_START - PAGE_SIZE)
#else
#define FIXADDR_TOP	((unsigned long)(-PAGE_SIZE))
#endif

/*
 * ioremap_bot starts at that address. Early ioremaps move down from there,
 * until mem_init() at which point this becomes the top of the vmalloc
 * and ioremap space
 */
#ifdef CONFIG_HIGHMEM
#define IOREMAP_TOP	PKMAP_BASE
#else
#define IOREMAP_TOP	FIXADDR_START
#endif

/* PPC32 shares vmalloc area with ioremap */
#define IOREMAP_START	VMALLOC_START
#define IOREMAP_END	VMALLOC_END

/*
 * Just any arbitrary offset to the start of the vmalloc VM area: the
 * current 16MB value just means that there will be a 64MB "hole" after the
 * physical memory until the kernel virtual memory starts. That means that
 * any out-of-bounds memory accesses will hopefully be caught.
 * The vmalloc() routines leaves a hole of 4kB between each vmalloced
 * area for the same reason. ;)
 *
 * We no longer map larger than phys RAM with the BATs so we don't have
 * to worry about the VMALLOC_OFFSET causing problems. We do have to worry
 * about clashes between our early calls to ioremap() that start growing down
 * from IOREMAP_TOP being run into the VM area allocations (growing upwards
 * from VMALLOC_START). For this reason we have ioremap_bot to check when
 * we actually run into our mappings setup in the early boot with the VM
 * system. This really does become a problem for machines with good amounts
 * of RAM. -- Cort
 */
#define VMALLOC_OFFSET (0x1000000) /* 16M */

#ifdef PPC_PIN_SIZE
#define VMALLOC_START (((ALIGN((long)high_memory, PPC_PIN_SIZE) + VMALLOC_OFFSET) & ~(VMALLOC_OFFSET-1)))
#else
#define VMALLOC_START ((((long)high_memory + VMALLOC_OFFSET) & ~(VMALLOC_OFFSET-1)))
#endif

#ifdef CONFIG_KASAN_VMALLOC
#define VMALLOC_END	ALIGN_DOWN(ioremap_bot, PAGE_SIZE << KASAN_SHADOW_SCALE_SHIFT)
#else
#define VMALLOC_END	ioremap_bot
#endif

/*
 * Bits in a linux-style PTE. These match the bits in the
 * (hardware-defined) PowerPC PTE as closely as possible.
 */

#if defined(CONFIG_44x)
#include <asm/nohash/32/pte-44x.h>
#elif defined(CONFIG_PPC_85xx) && defined(CONFIG_PTE_64BIT)
#include <asm/nohash/pte-e500.h>
#elif defined(CONFIG_PPC_85xx)
#include <asm/nohash/32/pte-85xx.h>
#elif defined(CONFIG_PPC_8xx)
#include <asm/nohash/32/pte-8xx.h>
#endif

/*
 * Location of the PFN in the PTE. Most 32-bit platforms use the same
 * as _PAGE_SHIFT here (ie, naturally aligned).
 * Platform who don't just pre-define the value so we don't override it here.
 */
#ifndef PTE_RPN_SHIFT
#define PTE_RPN_SHIFT	(PAGE_SHIFT)
#endif

/*
 * The mask covered by the RPN must be a ULL on 32-bit platforms with
 * 64-bit PTEs.
 */
#ifdef CONFIG_PTE_64BIT
#define PTE_RPN_MASK	(~((1ULL << PTE_RPN_SHIFT) - 1))

arch: pgtable: define MAX_POSSIBLE_PHYSMEM_BITS where needed
Stefan Agner reported a bug when using zsram on 32-bit Arm machines
with RAM above the 4GB address boundary:
Unable to handle kernel NULL pointer dereference at virtual address 00000000
pgd = a27bd01c
[00000000] *pgd=236a0003, *pmd=1ffa64003
Internal error: Oops: 207 [#1] SMP ARM
Modules linked in: mdio_bcm_unimac(+) brcmfmac cfg80211 brcmutil raspberrypi_hwmon hci_uart crc32_arm_ce bcm2711_thermal phy_generic genet
CPU: 0 PID: 123 Comm: mkfs.ext4 Not tainted 5.9.6 #1
Hardware name: BCM2711
PC is at zs_map_object+0x94/0x338
LR is at zram_bvec_rw.constprop.0+0x330/0xa64
pc : [<c0602b38>] lr : [<c0bda6a0>] psr: 60000013
sp : e376bbe0 ip : 00000000 fp : c1e2921c
r10: 00000002 r9 : c1dda730 r8 : 00000000
r7 : e8ff7a00 r6 : 00000000 r5 : 02f9ffa0 r4 : e3710000
r3 : 000fdffe r2 : c1e0ce80 r1 : ebf979a0 r0 : 00000000
Flags: nZCv IRQs on FIQs on Mode SVC_32 ISA ARM Segment user
Control: 30c5383d Table: 235c2a80 DAC: fffffffd
Process mkfs.ext4 (pid: 123, stack limit = 0x495a22e6)
Stack: (0xe376bbe0 to 0xe376c000)
As it turns out, zsram needs to know the maximum memory size, which
is defined in MAX_PHYSMEM_BITS when CONFIG_SPARSEMEM is set, or in
MAX_POSSIBLE_PHYSMEM_BITS on the x86 architecture.
The same problem will be hit on all 32-bit architectures that have a
physical address space larger than 4GB and happen to not enable sparsemem
and include asm/sparsemem.h from asm/pgtable.h.
After the initial discussion, I suggested just always defining
MAX_POSSIBLE_PHYSMEM_BITS whenever CONFIG_PHYS_ADDR_T_64BIT is
set, or provoking a build error otherwise. This addresses all
configurations that can currently have this runtime bug, but
leaves all other configurations unchanged.
I looked up the possible number of bits in source code and
datasheets, here is what I found:
- on ARC, CONFIG_ARC_HAS_PAE40 controls whether 32 or 40 bits are used
- on ARM, CONFIG_LPAE enables 40 bit addressing, without it we never
support more than 32 bits, even though supersections in theory allow
up to 40 bits as well.
- on MIPS, some MIPS32r1 or later chips support 36 bits, and MIPS32r5
XPA supports up to 60 bits in theory, but 40 bits are more than
anyone will ever ship
- On PowerPC, there are three different implementations of 36 bit
addressing, but 32-bit is used without CONFIG_PTE_64BIT
- On RISC-V, the normal page table format can support 34 bit
addressing. There is no highmem support on RISC-V, so anything
above 2GB is unused, but it might be useful to eventually support
CONFIG_ZRAM for high pages.
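For powerpc nohash/32 (the header annotated below), that bound follows
the existing CONFIG_PTE_64BIT split already used for PTE_RPN_MASK; a
simplified excerpt of the resulting lines in the file below:

	#ifdef CONFIG_PTE_64BIT
	#define MAX_POSSIBLE_PHYSMEM_BITS 36
	#else
	#define MAX_POSSIBLE_PHYSMEM_BITS 32
	#endif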
Fixes: 61989a80fb3a ("staging: zsmalloc: zsmalloc memory allocation library")
Fixes: 02390b87a945 ("mm/zsmalloc: Prepare to variable MAX_PHYSMEM_BITS")
Acked-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Reviewed-by: Stefan Agner <stefan@agner.ch>
Tested-by: Stefan Agner <stefan@agner.ch>
Acked-by: Mike Rapoport <rppt@linux.ibm.com>
Link: https://lore.kernel.org/linux-mm/bdfa44bf1c570b05d6c70898e2bbb0acf234ecdf.1604762181.git.stefan@agner.ch/
Signed-off-by: Arnd Bergmann <arnd@arndb.de>

#define MAX_POSSIBLE_PHYSMEM_BITS 36
#else
#define PTE_RPN_MASK	(~((1UL << PTE_RPN_SHIFT) - 1))
#define MAX_POSSIBLE_PHYSMEM_BITS 32
#endif

#ifndef __ASSEMBLY__

#define pmd_none(pmd)		(!pmd_val(pmd))
#define pmd_bad(pmd)		(pmd_val(pmd) & _PMD_BAD)
#define pmd_present(pmd)	(pmd_val(pmd) & _PMD_PRESENT_MASK)
static inline void pmd_clear(pmd_t *pmdp)
{
	*pmdp = __pmd(0);
}

/*
 * Note that on Book E processors, the pmd contains the kernel virtual
 * (lowmem) address of the pte page. The physical address is less useful
 * because everything runs with translation enabled (even the TLB miss
 * handler). On everything else the pmd contains the physical address
 * of the pte page. -- paulus
 */
#ifndef CONFIG_BOOKE
#define pmd_pfn(pmd)		(pmd_val(pmd) >> PAGE_SHIFT)
#else
#define pmd_page_vaddr(pmd)	\
	((const void *)((unsigned long)pmd_val(pmd) & ~(PTE_TABLE_SIZE - 1)))
#define pmd_pfn(pmd)		(__pa(pmd_val(pmd)) >> PAGE_SHIFT)
#endif

#define pmd_page(pmd)		pfn_to_page(pmd_pfn(pmd))

/*
 * Encode/decode swap entries and swap PTEs. Swap PTEs are all PTEs that
 * are !pte_none() && !pte_present().
 *
 * Format of swap PTEs (32bit PTEs):
 *
 *                         1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 3 3
 *   0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
 *   <------------------ offset -------------------> < type -> E 0 0
 *
 *   E is the exclusive marker that is not stored in swap entries.
 *
 * For 64bit PTEs, the offset is extended by 32bit.
 */
#define __swp_type(entry)		((entry).val & 0x1f)
#define __swp_offset(entry)		((entry).val >> 5)
#define __swp_entry(type, offset)	((swp_entry_t) { ((type) & 0x1f) | ((offset) << 5) })
#define __pte_to_swp_entry(pte)		((swp_entry_t) { pte_val(pte) >> 3 })
#define __swp_entry_to_pte(x)		((pte_t) { (x).val << 3 })

/* We borrow LSB 2 to store the exclusive marker in swap PTEs. */
#define _PAGE_SWP_EXCLUSIVE	0x000004

#endif /* !__ASSEMBLY__ */

#endif /* __ASM_POWERPC_NOHASH_32_PGTABLE_H */