License cleanup: add SPDX GPL-2.0 license identifier to files with no license
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boilerplate text.
This patch is based on work done by Thomas Gleixner and Kate Stewart and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it,
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information.
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and references to license
had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX License Identifier to apply to a
file was done in a spreadsheet of side-by-side results from the output
of two independent scanners (ScanCode & Windriver) producing SPDX
tag:value files, created by Philippe Ombredanne. Philippe prepared the
base worksheet, and did an initial spot review of a few 1000 files.
The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed. Kate Stewart did a file by file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
should be applied to the file. She confirmed any determination that was not
immediately clear with lawyers working with the Linux Foundation.
Criteria used to select files for SPDX license identifier tagging were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
lines of source
- File already had some variant of a license header in it (even if <5
lines).
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when both scanners couldn't find any license traces, file was
considered to have no license information in it, and the top level
COPYING file license applied.
For non */uapi/* files that summary was:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 11139
and resulted in the first patch in this series.
If that file was a */uapi/* path one, it was "GPL-2.0 WITH
Linux-syscall-note", otherwise it was "GPL-2.0". The results were:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 WITH Linux-syscall-note 930
and resulted in the second patch in this series.
- if a file had some form of licensing information in it, and was one
of the */uapi/* ones, it was denoted with the Linux-syscall-note if
any GPL family license was found in the file, or if it had no licensing in
it (per prior point). Results summary:
SPDX license identifier # files
---------------------------------------------------|------
GPL-2.0 WITH Linux-syscall-note 270
GPL-2.0+ WITH Linux-syscall-note 169
((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) 21
((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) 17
LGPL-2.1+ WITH Linux-syscall-note 15
GPL-1.0+ WITH Linux-syscall-note 14
((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause) 5
LGPL-2.0+ WITH Linux-syscall-note 4
LGPL-2.1 WITH Linux-syscall-note 3
((GPL-2.0 WITH Linux-syscall-note) OR MIT) 3
((GPL-2.0 WITH Linux-syscall-note) AND MIT) 1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
the file was flagged for further research and to be revisited later
in time.
In total, over 70 hours of logged manual review was done on the
spreadsheet to determine the SPDX license identifiers to apply to the
source files by Kate, Philippe, Thomas and, in some cases, confirmation
by lawyers working with the Linux Foundation.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were new insights. The
Windriver scanner is based on an older version of FOSSology in part, so
they are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with the SPDX license identifier in the
files he inspected. For the non-uapi files Thomas did random spot checks
in about 15000 files.
In the initial set of patches against 4.14-rc6, 3 files were found to have
copy/paste license identifier errors, and have been fixed to reflect the
correct identifier.
Additionally Philippe spent 10 hours this week doing a detailed manual
inspection and review of the 12,461 patched files from the initial patch
version, with:
- a full scancode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where there was a license detected (about 500+
files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the csv files and add the proper SPDX tag to the file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types). Finally Greg ran the script using the .csv files to
generate the patches.
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-01 15:07:57 +01:00
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_POWERPC_BOOK3S_64_HASH_64K_H
#define _ASM_POWERPC_BOOK3S_64_HASH_64K_H

#define H_PTE_INDEX_SIZE   8  // size: 8B <<  8 = 2KB, maps 2^8  x 64KB = 16MB
#define H_PMD_INDEX_SIZE  10  // size: 8B << 10 = 8KB, maps 2^10 x 16MB = 16GB
#define H_PUD_INDEX_SIZE  10  // size: 8B << 10 = 8KB, maps 2^10 x 16GB = 16TB
#define H_PGD_INDEX_SIZE 8 // size: 8B << 8 = 2KB, maps 2^8 x 16TB = 4PB

/*
 * If we store section details in page->flags we can't increase the MAX_PHYSMEM_BITS
 * if we increase SECTIONS_WIDTH we will not store node details in page->flags and
 * page_to_nid does a page->section->node lookup
 * Hence only increase for VMEMMAP. Further depending on SPARSEMEM_EXTREME reduce
 * memory requirements with large number of sections.
 * 51 bits is the max physical real address on POWER9
 */
#if defined(CONFIG_SPARSEMEM_VMEMMAP) && defined(CONFIG_SPARSEMEM_EXTREME)
#define H_MAX_PHYSMEM_BITS	51
#else
#define H_MAX_PHYSMEM_BITS	46
#endif

/*
 * Each context is 512TB size. SLB miss for first context/default context
 * is handled in the hotpath.
 */
#define MAX_EA_BITS_PER_CONTEXT	49
#define REGION_SHIFT		MAX_EA_BITS_PER_CONTEXT

/*
 * We use one context for each MAP area.
 */
#define H_KERN_MAP_SIZE		(1UL << MAX_EA_BITS_PER_CONTEXT)

/*
 * Define the address range of the kernel non-linear virtual area
 * 2PB
 */
#define H_KERN_VIRT_START	ASM_CONST(0xc008000000000000)

/*
 * A 64k aligned address frees up a few of the lower bits of RPN for us
 * We steal that here. For more details look at pte_pfn/pfn_pte()
 */
#define H_PAGE_COMBO	_RPAGE_RPN0	/* this is a combo 4k page */
#define H_PAGE_4K_PFN	_RPAGE_RPN1	/* PFN is for a single 4k page */
#define H_PAGE_BUSY	_RPAGE_RSV1	/* software: PTE & hash are busy */
powerpc: Swizzle around 4K PTE bits to free up bit 5 and bit 6
We need PTE bits 3, 4, 5, 6 and 57 to support protection-keys,
because these are the bits we want to consolidate on across all
configurations to support protection keys.
Bits 3, 4, 5 and 6 are currently used on 4K-pte kernels. But bits 9
and 10 are available. Hence we use the two available bits and
free up bits 5 and 6. We will still not be able to free up bits 3
and 4. In the absence of any other free bits, we will have to
stay satisfied with what we have :-(. This means we will not
be able to support 32 protection keys, but only 8. The bit
numbers are big-endian as defined in the ISA3.0
This patch does the following change to 4K PTE.
H_PAGE_F_SECOND (S) which occupied bit 4 moves to bit 7.
H_PAGE_F_GIX (G,I,X) which occupied bits 5, 6 and 7 also moves
to bits 8, 9 and 10 respectively.
H_PAGE_HASHPTE (H) which occupied bit 8 moves to bit 4.
Before the patch, the 4k PTE format was as follows
0 1 2 3 4 5 6 7 8 9 10....................57.....63
: : : : : : : : : : : : :
v v v v v v v v v v v v v
,-,-,-,-,--,--,--,--,-,-,-,-,-,------------------,-,-,-,
|x|x|x|B|S |G |I |X |H| | |x|x|................| |x|x|x|
'_'_'_'_'__'__'__'__'_'_'_'_'_'________________'_'_'_'_'
After the patch, the 4k PTE format is as follows
0 1 2 3 4 5 6 7 8 9 10....................57.....63
: : : : : : : : : : : : :
v v v v v v v v v v v v v
,-,-,-,-,--,--,--,--,-,-,-,-,-,------------------,-,-,-,
|x|x|x|B|H | | |S |G|I|X|x|x|................| |.|.|.|
'_'_'_'_'__'__'__'__'_'_'_'_'_'________________'_'_'_'_'
The patch has no code changes; just swizzles around bits.
Signed-off-by: Ram Pai <linuxram@us.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2017-11-06 00:50:50 -08:00
#define H_PAGE_HASHPTE	_RPAGE_RPN43	/* PTE has associated HPTE */
powerpc: Free up four 64K PTE bits in 4K backed HPTE pages
Rearrange 64K PTE bits to free up bits 3, 4, 5 and 6,
in the 4K backed HPTE pages. These bits continue to be used
for 64K backed HPTE pages in this patch, but will be freed
up in the next patch. The bit numbers are big-endian as
defined in the ISA3.0
The patch does the following change to the 4k HPTE backed
64K PTE's format.
H_PAGE_BUSY moves from bit 3 to bit 9 (B bit in the figure
below)
V0 which occupied bit 4 is not used anymore.
V1 which occupied bit 5 is not used anymore.
V2 which occupied bit 6 is not used anymore.
V3 which occupied bit 7 is not used anymore.
Before the patch, the 4k backed 64k PTE format was as follows
0 1 2 3 4 5 6 7 8 9 10...........................63
: : : : : : : : : : : :
v v v v v v v v v v v v
,-,-,-,-,--,--,--,--,-,-,-,-,-,------------------,-,-,-,
|x|x|x|B|V0|V1|V2|V3|x| | |x|x|................|x|x|x|x| <- primary pte
'_'_'_'_'__'__'__'__'_'_'_'_'_'________________'_'_'_'_'
|S|G|I|X|S |G |I |X |S|G|I|X|..................|S|G|I|X| <- secondary pte
'_'_'_'_'__'__'__'__'_'_'_'_'__________________'_'_'_'_'
After the patch, the 4k backed 64k PTE format is as follows
0 1 2 3 4 5 6 7 8 9 10...........................63
: : : : : : : : : : : :
v v v v v v v v v v v v
,-,-,-,-,--,--,--,--,-,-,-,-,-,------------------,-,-,-,
|x|x|x| | | | | |x|B| |x|x|................|.|.|.|.| <- primary pte
'_'_'_'_'__'__'__'__'_'_'_'_'_'________________'_'_'_'_'
|S|G|I|X|S |G |I |X |S|G|I|X|..................|S|G|I|X| <- secondary pte
'_'_'_'_'__'__'__'__'_'_'_'_'__________________'_'_'_'_'
the four bits S,G,I,X (one quadruplet per 4k HPTE) that
cache the hash-bucket slot value are initialized to
1,1,1,1, indicating an invalid slot. If a HPTE gets
cached in a 1111 slot (i.e. the 7th slot of the secondary hash
bucket), it is released immediately. In other words,
even though 1111 is a valid slot value in the hash
bucket, we consider it invalid and release the slot and
the HPTE. This gives us the opportunity to determine
the validity of the S,G,I,X bits based on their contents and
not on any of the bits V0,V1,V2 or V3 in the primary PTE.
When we release a HPTE cached in the 1111 slot
we also release a legitimate slot in the primary
hash bucket and unmap its corresponding HPTE. This
is to ensure that we do get a HPTE cached in a slot
of the primary hash bucket, the next time we retry.
Though treating the 1111 slot as invalid reduces the
number of available slots in the hash bucket and may
have an effect on performance, the probability of
hitting a 1111 slot is extremely low.
Compared to the current scheme, the above scheme
reduces the number of false hash table updates
significantly and has the added advantage of releasing
four valuable PTE bits for other purpose.
NOTE: even though bits 3, 4, 5, 6, 7 are not used when
the 64K PTE is backed by a 4k HPTE, they continue to be
used if the PTE gets backed by a 64k HPTE. The next
patch will decouple that as well, and truly release the
bits.
This idea was jointly developed by Paul Mackerras,
Aneesh, Michael Ellerman and myself.
4K PTE format remains unchanged currently.
The patch does the following code changes
a) PTE flags are split between 64k and 4k header files.
b) __hash_page_4K() is reimplemented to reflect the
above logic.
Acked-by: Balbir Singh <bsingharora@gmail.com>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Ram Pai <linuxram@us.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2017-11-06 00:50:47 -08:00
/* memory key bits. */
#define H_PTE_PKEY_BIT4	_RPAGE_PKEY_BIT4
#define H_PTE_PKEY_BIT3	_RPAGE_PKEY_BIT3
#define H_PTE_PKEY_BIT2	_RPAGE_PKEY_BIT2
#define H_PTE_PKEY_BIT1	_RPAGE_PKEY_BIT1
#define H_PTE_PKEY_BIT0	_RPAGE_PKEY_BIT0

/*
 * We need to differentiate between explicit huge page and THP huge
 * page, since a THP huge page also needs to track real subpage details
 */
#define H_PAGE_THP_HUGE	H_PAGE_4K_PFN

/* PTE flags to conserve for HPTE identification */
powerpc: Free up four 64K PTE bits in 64K backed HPTE pages
Rearrange 64K PTE bits to free up bits 3, 4, 5 and 6
in the 64K backed HPTE pages. This along with the earlier
patch will entirely free up the four bits from 64K PTE.
The bit numbers are big-endian as defined in the ISA3.0
This patch does the following change to 64K PTE backed
by 64K HPTE.
H_PAGE_F_SECOND (S) which occupied bit 4 moves to the
second part of the pte to bit 60.
H_PAGE_F_GIX (G,I,X) which occupied bits 5, 6 and 7 also
moves to the second part of the pte, to bits 61,
62 and 63 respectively
since bit 7 is now freed up, we move H_PAGE_BUSY (B) from
bit 9 to bit 7.
The second part of the PTE will hold
(H_PAGE_F_SECOND|H_PAGE_F_GIX) at bit 60,61,62,63.
NOTE: none of the bits in the secondary PTE were used
by the 64k-HPTE backed PTE.
Before the patch, the 64K HPTE backed 64k PTE format was
as follows
0 1 2 3 4 5 6 7 8 9 10...........................63
: : : : : : : : : : : :
v v v v v v v v v v v v
,-,-,-,-,--,--,--,--,-,-,-,-,-,------------------,-,-,-,
|x|x|x| |S |G |I |X |x|B| |x|x|................|x|x|x|x| <- primary pte
'_'_'_'_'__'__'__'__'_'_'_'_'_'________________'_'_'_'_'
| | | | | | | | | | | | |..................| | | | | <- secondary pte
'_'_'_'_'__'__'__'__'_'_'_'_'__________________'_'_'_'_'
After the patch, the 64k HPTE backed 64k PTE format is
as follows
0 1 2 3 4 5 6 7 8 9 10...........................63
: : : : : : : : : : : :
v v v v v v v v v v v v
,-,-,-,-,--,--,--,--,-,-,-,-,-,------------------,-,-,-,
|x|x|x| | | | |B |x| | |x|x|................|.|.|.|.| <- primary pte
'_'_'_'_'__'__'__'__'_'_'_'_'_'________________'_'_'_'_'
| | | | | | | | | | | | |..................|S|G|I|X| <- secondary pte
'_'_'_'_'__'__'__'__'_'_'_'_'__________________'_'_'_'_'
The above PTE changes are applicable to hugetlbpages as well.
The patch does the following code changes:
a) moves the H_PAGE_F_SECOND and H_PAGE_F_GIX to the 4k PTE
header since they are no longer needed by the 64k PTEs.
b) abstracts out __real_pte() and __rpte_to_hidx() so the
caller need not know the bit location of the slot.
c) moves the slot bits to the secondary pte.
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Ram Pai <linuxram@us.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2017-11-06 00:50:48 -08:00
#define _PAGE_HPTEFLAGS	(H_PAGE_BUSY | H_PAGE_HASHPTE | H_PAGE_COMBO)

/*
 * We use a 2K PTE page fragment and another 2K for storing
 * real_pte_t hash index
 * 8 bytes per each pte entry and another 8 bytes for storing
 * slot details.
 */
#define H_PTE_FRAG_SIZE_SHIFT	(H_PTE_INDEX_SIZE + 3 + 1)
#define H_PTE_FRAG_NR (PAGE_SIZE >> H_PTE_FRAG_SIZE_SHIFT)
#if defined(CONFIG_TRANSPARENT_HUGEPAGE) || defined(CONFIG_HUGETLB_PAGE)
#define H_PMD_FRAG_SIZE_SHIFT	(H_PMD_INDEX_SIZE + 3 + 1)
#else
#define H_PMD_FRAG_SIZE_SHIFT	(H_PMD_INDEX_SIZE + 3)
#endif
#define H_PMD_FRAG_NR	(PAGE_SIZE >> H_PMD_FRAG_SIZE_SHIFT)

#ifndef __ASSEMBLY__
#include <asm/errno.h>

/*
 * With 64K pages on hash table, we have a special PTE format that
 * uses a second "half" of the page table to encode sub-page information
 * in order to deal with 64K made of 4K HW pages. Thus we override the
 * generic accessors and iterators here
 */
#define __real_pte __real_pte
static inline real_pte_t __real_pte(pte_t pte, pte_t *ptep, int offset)
{
	real_pte_t rpte;
	unsigned long *hidxp;

	rpte.pte = pte;
	/*
	 * Ensure that we do not read the hidx before we read the PTE. Because
	 * the writer side is expected to finish writing the hidx first followed
	 * by the PTE, by using smp_wmb(). pte_set_hash_slot() ensures that.
	 */
	smp_rmb();

	hidxp = (unsigned long *)(ptep + offset);
rpte.hidx = *hidxp;
	return rpte;
}

/*
 * shift the hidx representation by one-modulo-0xf; i.e hidx 0 is represented
 * as 1, 1 as 2,... , and 0xf as 0. This convention lets us represent an
 * invalid hidx 0xf with a 0x0 bit value. PTEs are anyway zero'd when
 * allocated. We don't have to zero them again; thus save on the initialization.
 */
#define HIDX_UNSHIFT_BY_ONE(x)	((x + 0xfUL) & 0xfUL)	/* shift backward by one */
#define HIDX_SHIFT_BY_ONE(x)	((x + 0x1UL) & 0xfUL)	/* shift forward by one */
#define HIDX_BITS(x, index)	(x << (index << 2))
#define BITS_TO_HIDX(x, index)	((x >> (index << 2)) & 0xfUL)
#define INVALID_RPTE_HIDX	0x0UL

static inline unsigned long __rpte_to_hidx(real_pte_t rpte, unsigned long index)
{
	return HIDX_UNSHIFT_BY_ONE(BITS_TO_HIDX(rpte.hidx, index));
}

/*
 * Commit the hidx and return the PTE bits that need to be modified. The caller
 * is expected to modify the PTE bits accordingly and commit the PTE to memory.
 */
static inline unsigned long pte_set_hidx(pte_t *ptep, real_pte_t rpte,
					 unsigned int subpg_index,
					 unsigned long hidx, int offset)
{
	unsigned long *hidxp = (unsigned long *)(ptep + offset);

	rpte.hidx &= ~HIDX_BITS(0xfUL, subpg_index);
	*hidxp = rpte.hidx | HIDX_BITS(HIDX_SHIFT_BY_ONE(hidx), subpg_index);

	/*
	 * Anyone reading PTE must ensure hidx bits are read after reading the
	 * PTE by using the read-side barrier smp_rmb(). __real_pte() can be
	 * used for that.
	 */
	smp_wmb();

	/* No PTE bits to be modified, return 0x0UL */
	return 0x0UL;
}

#define __rpte_to_pte(r)	((r).pte)
extern bool __rpte_sub_valid(real_pte_t rpte, unsigned long index);

/*
 * Trick: we set __end to va + 64k, which happens to work for
 * a 16M page as well, as we want only one iteration
 */
#define pte_iterate_hashed_subpages(rpte, psize, vpn, index, shift)	\
	do {								\
		unsigned long __end = vpn + (1UL << (PAGE_SHIFT - VPN_SHIFT)); \
		unsigned __split = (psize == MMU_PAGE_4K ||		\
				    psize == MMU_PAGE_64K_AP);		\
		shift = mmu_psize_defs[psize].shift;			\
		for (index = 0; vpn < __end; index++,			\
		     vpn += (1L << (shift - VPN_SHIFT))) {		\
			if (!__split || __rpte_sub_valid(rpte, index))

#define pte_iterate_hashed_end() } } while(0)

#define pte_pagesize_index(mm, addr, pte)	\
	(((pte) & H_PAGE_COMBO) ? MMU_PAGE_4K : MMU_PAGE_64K)

extern int remap_pfn_range(struct vm_area_struct *, unsigned long addr,
			   unsigned long pfn, unsigned long size, pgprot_t);
static inline int hash__remap_4k_pfn(struct vm_area_struct *vma, unsigned long addr,
				     unsigned long pfn, pgprot_t prot)
{
	if (pfn > (PTE_RPN_MASK >> PAGE_SHIFT)) {
		WARN(1, "remap_4k_pfn called with wrong pfn value\n");
		return -EINVAL;
	}
	return remap_pfn_range(vma, addr, pfn, PAGE_SIZE,
			       __pgprot(pgprot_val(prot) | H_PAGE_4K_PFN));
}
#define H_PTE_TABLE_SIZE PTE_FRAG_SIZE
#if defined(CONFIG_TRANSPARENT_HUGEPAGE) || defined (CONFIG_HUGETLB_PAGE)
#define H_PMD_TABLE_SIZE ((sizeof(pmd_t) << PMD_INDEX_SIZE) + \
				 (sizeof(unsigned long) << PMD_INDEX_SIZE))
#else
#define H_PMD_TABLE_SIZE (sizeof(pmd_t) << PMD_INDEX_SIZE)
#endif
#ifdef CONFIG_HUGETLB_PAGE
#define H_PUD_TABLE_SIZE ((sizeof(pud_t) << PUD_INDEX_SIZE) + \
				 (sizeof(unsigned long) << PUD_INDEX_SIZE))
#else
#define H_PUD_TABLE_SIZE (sizeof(pud_t) << PUD_INDEX_SIZE)
#endif
#define H_PGD_TABLE_SIZE (sizeof(pgd_t) << PGD_INDEX_SIZE)
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
static inline char *get_hpte_slot_array(pmd_t *pmdp)
{
	/*
	 * The hpte hindex is stored in the pgtable whose address is in the
	 * second half of the PMD
	 *
	 * Order this load with the test for pmd_trans_huge in the caller
	 */
	smp_rmb();
	return *(char **)(pmdp + PTRS_PER_PMD);
}
/*
 * The linux hugepage PMD now includes the pmd entries followed by the address
 * of the stashed pgtable_t. The stashed pgtable_t contains the hpte bits.
 * [ 000 | 1 bit secondary | 3 bit hidx | 1 bit valid]. We use one byte for
 * each HPTE entry. With a 16MB hugepage and 64K HPTEs we need 256 entries,
 * and with 4K HPTEs we need 4096 entries. Both fit in a 4K pgtable_t.
 *
 * The top three bits are intentionally left as zero. These memory locations
 * are also used as normal page PTE pointers. So if we have any pointers
 * left around while we collapse a hugepage, we need to make sure the
 * _PAGE_PRESENT bit of them is zero when we look at them.
 */
static inline unsigned int hpte_valid(unsigned char *hpte_slot_array, int index)
{
	return hpte_slot_array[index] & 0x1;
}
static inline unsigned int hpte_hash_index(unsigned char *hpte_slot_array,
					   int index)
{
	return hpte_slot_array[index] >> 1;
}
static inline void mark_hpte_slot_valid(unsigned char *hpte_slot_array,
					unsigned int index, unsigned int hidx)
{
	hpte_slot_array[index] = (hidx << 1) | 0x1;
}
/*
 * For core kernel code, by design pmd_trans_huge is never run on any hugetlbfs
 * page. The hugetlbfs page table walking and mangling paths are totally
 * separated from the core VM paths, and they're differentiated by
 * VM_HUGETLB being set on vm_flags well before any pmd_trans_huge could run.
 *
 * pmd_trans_huge() is defined as false at build time if
 * CONFIG_TRANSPARENT_HUGEPAGE=n, to optimize away code blocks at build
 * time in that case.
 *
 * For ppc64 we need to differentiate explicit hugepages from THP, because
 * for THP we also track the subpage details at the pmd level. We don't do
 * that for explicit huge pages.
 */
static inline int hash__pmd_trans_huge(pmd_t pmd)
{
	return !!((pmd_val(pmd) & (_PAGE_PTE | H_PAGE_THP_HUGE)) ==
		  (_PAGE_PTE | H_PAGE_THP_HUGE));
}
static inline pmd_t hash__pmd_mkhuge(pmd_t pmd)
{
	return __pmd(pmd_val(pmd) | (_PAGE_PTE | H_PAGE_THP_HUGE));
}
extern unsigned long hash__pmd_hugepage_update(struct mm_struct *mm,
					       unsigned long addr, pmd_t *pmdp,
					       unsigned long clr, unsigned long set);
extern pmd_t hash__pmdp_collapse_flush(struct vm_area_struct *vma,
				       unsigned long address, pmd_t *pmdp);
extern void hash__pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp,
					     pgtable_t pgtable);
extern pgtable_t hash__pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp);
extern pmd_t hash__pmdp_huge_get_and_clear(struct mm_struct *mm,
					   unsigned long addr, pmd_t *pmdp);
extern int hash__has_transparent_hugepage(void);
#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
#endif /* __ASSEMBLY__ */
#endif /* _ASM_POWERPC_BOOK3S_64_HASH_64K_H */