License cleanup: add SPDX GPL-2.0 license identifier to files with no license
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boilerplate text.
This patch is based on work done by Thomas Gleixner, Kate Stewart and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it,
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information.
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and references to license
had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX License Identifier should be applied
to a file was done in a spreadsheet of side-by-side results from the
output of two independent scanners (ScanCode & Windriver) producing SPDX
tag:value files created by Philippe Ombredanne. Philippe prepared the
base worksheet, and did an initial spot review of a few thousand files.
The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed. Kate Stewart did a file-by-file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
should be applied to each file. She confirmed any determination that was
not immediately clear with lawyers working with the Linux Foundation.
Criteria used to select files for SPDX license identifier tagging were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
lines of source
- Files that already had some variant of a license header in them were
included (even if <5 lines).
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when neither scanner could find any license traces, the file was
considered to have no license information in it, and the top level
COPYING file license applied.
For non-*/uapi/* files, that summary was:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 11139
and resulted in the first patch in this series.
If that file was a */uapi/* path one, it was "GPL-2.0 WITH
Linux-syscall-note" otherwise it was "GPL-2.0". Results of that was:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 WITH Linux-syscall-note 930
and resulted in the second patch in this series.
- if a file had some form of licensing information in it, and was one
of the */uapi/* ones, it was denoted with the Linux-syscall-note if
any GPL-family license was found in the file, or if it had no licensing
in it (per the prior point). Results summary:
SPDX license identifier # files
---------------------------------------------------|------
GPL-2.0 WITH Linux-syscall-note 270
GPL-2.0+ WITH Linux-syscall-note 169
((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) 21
((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) 17
LGPL-2.1+ WITH Linux-syscall-note 15
GPL-1.0+ WITH Linux-syscall-note 14
((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause) 5
LGPL-2.0+ WITH Linux-syscall-note 4
LGPL-2.1 WITH Linux-syscall-note 3
((GPL-2.0 WITH Linux-syscall-note) OR MIT) 3
((GPL-2.0 WITH Linux-syscall-note) AND MIT) 1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
the file was flagged for further research and to be revisited later
in time.
In total, over 70 hours of logged manual review was done on the
spreadsheet to determine the SPDX license identifiers to apply to the
source files by Kate, Philippe, Thomas and, in some cases, confirmation
by lawyers working with the Linux Foundation.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were new insights. The
Windriver scanner is based on an older version of FOSSology in part, so
they are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with the SPDX license identifier in the
files he inspected. For the non-uapi files Thomas did random spot checks
in about 15000 files.
In the initial set of patches against 4.14-rc6, 3 files were found to have
copy/paste license identifier errors, and have been fixed to reflect the
correct identifier.
Additionally, Philippe spent 10 hours doing a detailed manual
inspection and review of the 12,461 patched files from the initial patch
version, with:
- a full scancode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where there was a license detected (about 500+
files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the csv files and add the proper SPDX tag to the file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types). Finally, Greg ran the script using the .csv files to
generate the patches.
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-01 15:07:57 +01:00

/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_POWERPC_NOHASH_64_PGTABLE_H
#define _ASM_POWERPC_NOHASH_64_PGTABLE_H
/*
 * This file contains the functions and defines necessary to modify and use
 * the ppc64 non-hashed page table.
 */

#include <linux/sizes.h>

#include <asm/nohash/64/pgtable-4k.h>
#include <asm/barrier.h>
#include <asm/asm-const.h>

/*
 * Size of EA range mapped by our pagetables.
 */
#define PGTABLE_EADDR_SIZE	(PTE_INDEX_SIZE + PMD_INDEX_SIZE + \
				 PUD_INDEX_SIZE + PGD_INDEX_SIZE + PAGE_SHIFT)
[POWERPC] Rewrite IO allocation & mapping on powerpc64
This rewrites pretty much from scratch the handling of MMIO and PIO
space allocations on powerpc64. The main goals are:
- Get rid of imalloc and use more common code where possible
- Simplify the current mess so that PIO space is allocated and
mapped in a single place for PCI bridges
- Handle allocation constraints of PIO for all bridges including
hot plugged ones within the 2GB space reserved for IO ports,
so that devices on hotplugged busses will now work with drivers
that assume IO ports fit in an int.
- Cleanup and separate tracking of the ISA space in the reserved
low 64K of IO space. No ISA -> Nothing mapped there.
I booted a cell blade with IDE on PIO and MMIO and a dual G5 so
far, that's it :-)
With this patch, all allocations are done using the code in
mm/vmalloc.c, though we use the low level __get_vm_area with
explicit start/stop constraints in order to manage separate
areas for vmalloc/vmap, ioremap, and PCI IOs.
This greatly simplifies a lot of things, as you can see in the
diffstat of that patch :-)
A new pair of functions pcibios_map/unmap_io_space() now replace
all of the previous code that used to manipulate PCI IOs space.
The allocation is done at mapping time, which is now called from
scan_phb's, just before the devices are probed (instead of after,
which is by itself a bug fix). The only other caller is the PCI
hotplug code for hot adding PCI-PCI bridges (slots).
imalloc is gone, as is the "sub-allocation" thing, but I do believe
that hotplug should still work in the sense that the space allocation
is always done by the PHB, but if you unmap a child bus of this PHB
(which seems to be possible), then the code should properly tear
down all the HPTE mappings for that area of the PHB allocated IO space.
I now always reserve the first 64K of IO space for the bridge with
the ISA bus on it. I have moved the code for tracking ISA in a separate
file which should also make it smarter if we ever are capable of
hot unplugging or re-plugging an ISA bridge.
This should have a side effect on platforms like powermac where VGA IOs
will no longer work. This is done on purpose though as they would have
worked semi-randomly before. The idea at this point is to isolate drivers
that might need to access those and fix them by providing a proper
function to obtain an offset to the legacy IOs of a given bus.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-06-04 15:15:36 +10:00
#define PGTABLE_RANGE		(ASM_CONST(1) << PGTABLE_EADDR_SIZE)

#define PMD_CACHE_INDEX		PMD_INDEX_SIZE
#define PUD_CACHE_INDEX		PUD_INDEX_SIZE

/*
 * Define the address range of the kernel non-linear virtual area
 */
#define KERN_VIRT_START		ASM_CONST(0xc000100000000000)
#define KERN_VIRT_SIZE		ASM_CONST(0x0000100000000000)

/*
 * The vmalloc space starts at the beginning of that region, and
 * occupies a quarter of it on Book3E
 * (we keep a quarter for the virtual memmap)
 */
#define VMALLOC_START		KERN_VIRT_START
#define VMALLOC_SIZE		(KERN_VIRT_SIZE >> 2)
#define VMALLOC_END		(VMALLOC_START + VMALLOC_SIZE)

/*
 * The third quarter of the kernel virtual space is used for IO mappings,
 * it's itself carved into the PIO region (ISA and PHB IO space) and
 * the ioremap space
 *
 *  ISA_IO_BASE  = KERN_IO_START, 64K reserved area
 *  PHB_IO_BASE  = ISA_IO_BASE + 64K to ISA_IO_BASE + 2G, PHB IO spaces
 *  IOREMAP_BASE = ISA_IO_BASE + 2G to KERN_IO_START + KERN_IO_SIZE
 */
#define KERN_IO_START		(KERN_VIRT_START + (KERN_VIRT_SIZE >> 1))
#define KERN_IO_SIZE		(KERN_VIRT_SIZE >> 2)
#define FULL_IO_SIZE		0x80000000ul
#define ISA_IO_BASE		(KERN_IO_START)
#define ISA_IO_END		(KERN_IO_START + 0x10000ul)
#define PHB_IO_BASE		(ISA_IO_END)
#define PHB_IO_END		(KERN_IO_START + FULL_IO_SIZE)
#define IOREMAP_BASE		(PHB_IO_END)
#define IOREMAP_START		(ioremap_bot)
#define IOREMAP_END		(KERN_IO_START + KERN_IO_SIZE - FIXADDR_SIZE)
#define FIXADDR_SIZE		SZ_32M
#define FIXADDR_TOP		(IOREMAP_END + FIXADDR_SIZE)

/*
 * Defines the address of the vmemmap area, in its own region
 * after the vmalloc space on Book3E
 */
#define VMEMMAP_BASE		VMALLOC_END
#define VMEMMAP_END		KERN_IO_START
[POWERPC] vmemmap fixes to use smaller pages
This changes vmemmap to use a different region (region 0xf) of the
address space, and to configure the page size of that region
dynamically at boot.
The problem with the current approach of always using 16M pages is that
it's not well suited to machines that have small amounts of memory such
as small partitions on pseries, or PS3's.
In fact, on the PS3, failure to allocate the 16M page backing vmemmap
tends to prevent hotplugging the HV's "additional" memory, thus limiting
the available memory even more, from my experience down to something
like 80M total, which makes it really not very usable.
The logic used by my patch to choose the vmemmap page size is:
- If 16M pages are available and there's 1G or more RAM at boot,
use that size.
- Else if 64K pages are available, use that
- Else use 4K pages
I've tested on a POWER6 (16M pages) and on an iSeries POWER3 (4K pages)
and it seems to work fine.
Note that I intend to change the way we organize the kernel regions &
SLBs so the actual region will change from 0xf back to something else at
one point, as I simplify the SLB miss handler, but that will be for a
later patch.
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-04-30 15:41:48 +10:00
#define vmemmap			((struct page *)VMEMMAP_BASE)


/*
 * Include the PTE bits definitions
 */
#include <asm/nohash/pte-e500.h>

#define PTE_RPN_MASK		(~((1UL << PTE_RPN_SHIFT) - 1))

#define H_PAGE_4K_PFN		0

#ifndef __ASSEMBLY__
/* pte_clear moved to later in this file */

#define PMD_BAD_BITS		(PTE_TABLE_SIZE-1)
#define PUD_BAD_BITS		(PMD_TABLE_SIZE-1)

static inline void pmd_set(pmd_t *pmdp, unsigned long val)
{
	*pmdp = __pmd(val);
}

static inline void pmd_clear(pmd_t *pmdp)
{
	*pmdp = __pmd(0);
}

static inline pte_t pmd_pte(pmd_t pmd)
{
	return __pte(pmd_val(pmd));
}

#define pmd_none(pmd)		(!pmd_val(pmd))
#define pmd_bad(pmd)		(!is_kernel_addr(pmd_val(pmd)) \
				 || (pmd_val(pmd) & PMD_BAD_BITS))
#define pmd_present(pmd)	(!pmd_none(pmd))
#define pmd_page_vaddr(pmd)	((const void *)(pmd_val(pmd) & ~PMD_MASKED_BITS))
extern struct page *pmd_page(pmd_t pmd);
#define pmd_pfn(pmd)		(page_to_pfn(pmd_page(pmd)))

static inline void pud_set(pud_t *pudp, unsigned long val)
{
	*pudp = __pud(val);
}

static inline void pud_clear(pud_t *pudp)
{
	*pudp = __pud(0);
}

#define pud_none(pud)		(!pud_val(pud))
#define pud_bad(pud)		(!is_kernel_addr(pud_val(pud)) \
				 || (pud_val(pud) & PUD_BAD_BITS))
#define pud_present(pud)	(pud_val(pud) != 0)

static inline pmd_t *pud_pgtable(pud_t pud)
{
	return (pmd_t *)(pud_val(pud) & ~PUD_MASKED_BITS);
}

extern struct page *pud_page(pud_t pud);

static inline pte_t pud_pte(pud_t pud)
{
	return __pte(pud_val(pud));
}

static inline pud_t pte_pud(pte_t pte)
{
	return __pud(pte_val(pte));
}
#define pud_write(pud)		pte_write(pud_pte(pud))
#define p4d_write(pgd)		pte_write(p4d_pte(p4d))

static inline void p4d_set(p4d_t *p4dp, unsigned long val)
{
	*p4dp = __p4d(val);
}

#define __HAVE_ARCH_HUGE_PTEP_SET_WRPROTECT
powerpc: Add 64 bit version of huge_ptep_set_wrprotect
The implementation of huge_ptep_set_wrprotect() directly calls
ptep_set_wrprotect() to mark a hugepte write protected. However this
call is not appropriate on ppc64 kernels as this is a small page only
implementation. This can lead to the hash not being flushed correctly
when a mapping is being converted to COW, allowing processes to continue
using the original copy.
Currently huge_ptep_set_wrprotect() unconditionally calls
ptep_set_wrprotect(). This is fine on ppc32 kernels as this call is
generic. On 64 bit this is implemented as:
pte_update(mm, addr, ptep, _PAGE_RW, 0);
On ppc64 this last parameter is the page size and is passed directly on
to hpte_need_flush():
hpte_need_flush(mm, addr, ptep, old, huge);
And this directly affects the page size we pass to flush_hash_page():
flush_hash_page(vaddr, rpte, psize, ssize, 0);
As this changes the way the hash is calculated we will flush the wrong
pages, potentially leaving live hashes to the original page.
Move the definition of huge_ptep_set_wrprotect() to the 32/64 bit specific
headers.
Signed-off-by: Andy Whitcroft <apw@shadowen.org>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-06-26 19:55:58 +10:00
static inline void huge_ptep_set_wrprotect(struct mm_struct *mm,
					   unsigned long addr, pte_t *ptep)
{
	pte_update(mm, addr, ptep, _PAGE_WRITE, 0, 1);
}
#define __HAVE_ARCH_PTEP_CLEAR_YOUNG_FLUSH
#define ptep_clear_flush_young(__vma, __address, __ptep)		\
({									\
	int __young = ptep_test_and_clear_young(__vma, __address, __ptep);\
	__young;							\
})
#define pmd_ERROR(e) \
	pr_err("%s:%d: bad pmd %08lx.\n", __FILE__, __LINE__, pmd_val(e))
#define pgd_ERROR(e) \
	pr_err("%s:%d: bad pgd %08lx.\n", __FILE__, __LINE__, pgd_val(e))
/*
 * Encode/decode swap entries and swap PTEs. Swap PTEs are all PTEs that
 * are !pte_none() && !pte_present().
 *
 * Format of swap PTEs:
 *
 *                         1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 3 3
 *   0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
 *   <-------------------------- offset ----------------------------
 *
 *   3 3 3 3 3 3 3 3 4 4 4 4 4 4 4 4 4 4 5 5 5 5 5 5 5 5 5 5 6 6 6 6
 *   2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3
 *   --------------> <----------- zero ------------> E < type -> 0 0
 *
 * E is the exclusive marker that is not stored in swap entries.
 */
#define MAX_SWAPFILES_CHECK() do { \
	BUILD_BUG_ON(MAX_SWAPFILES_SHIFT > SWP_TYPE_BITS); \
	} while (0)
#define SWP_TYPE_BITS 5
#define __swp_type(x)		(((x).val >> 2) \
				& ((1UL << SWP_TYPE_BITS) - 1))
#define __swp_offset(x)		((x).val >> PTE_RPN_SHIFT)
#define __swp_entry(type, offset)	((swp_entry_t) { \
					(((type) & 0x1f) << 2) \
					| ((offset) << PTE_RPN_SHIFT) })
#define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val((pte)) })
#define __swp_entry_to_pte(x) __pte((x).val)
/* We borrow MSB 56 (LSB 7) to store the exclusive marker in swap PTEs. */
#define _PAGE_SWP_EXCLUSIVE 0x80
extern int __meminit vmemmap_create_mapping(unsigned long start,
					    unsigned long page_size,
					    unsigned long phys);
extern void vmemmap_remove_mapping(unsigned long start,
				   unsigned long page_size);
void __patch_exception(int exc, unsigned long addr);
#define patch_exception(exc, name) do { \
	extern unsigned int name; \
	__patch_exception((exc), (unsigned long)&name); \
} while (0)
#endif /* __ASSEMBLY__ */
#endif /* _ASM_POWERPC_NOHASH_64_PGTABLE_H */