/*
 * PPC Huge TLB Page Support for Kernel.
 *
 * Copyright (C) 2003 David Gibson, IBM Corporation.
 * Copyright (C) 2011 Becky Bruce, Freescale Semiconductor
 *
 * Based on the IA-32 version:
 * Copyright (C) 2002, Rohit Seth <rohit.seth@intel.com>
 */

#include <linux/mm.h>
#include <linux/io.h>
#include <linux/slab.h>
#include <linux/hugetlb.h>
#include <linux/export.h>
#include <linux/of_fdt.h>
#include <linux/memblock.h>
#include <linux/moduleparam.h>
#include <linux/swap.h>
#include <linux/swapops.h>
#include <linux/kmemleak.h>
#include <asm/pgalloc.h>
#include <asm/tlb.h>
#include <asm/setup.h>
#include <asm/hugetlb.h>
#include <asm/pte-walk.h>
#include <asm/firmware.h>

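/* When set, hugetlbpage_init() bails out and hugepage support stays disabled. */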
|
2017-07-27 11:54:53 +05:30
|
|
|
|
2018-04-10 19:11:31 +05:30
|
|
|
bool hugetlb_disabled = false;
|
|
|
|
|
2020-05-19 05:49:06 +00:00
|
|
|
#define PTE_T_ORDER (__builtin_ffs(sizeof(pte_basic_t)) - \
|
|
|
|
__builtin_ffs(sizeof(void *)))
|
2018-11-29 14:07:05 +00:00
|
|
|
|
2017-07-06 15:39:42 -07:00
|
|
|
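/*
 * Look up the page table entry mapping @addr in a hugetlb area.
 * This walks the existing page tables and never allocates.
 */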
pte_t *huge_pte_offset(struct mm_struct *mm, unsigned long addr, unsigned long sz)
{
	/*
	 * Only called for hugetlbfs pages, hence can ignore THP and the
	 * irq disabled walk.
	 */
	return __find_linux_pte(mm->pgd, addr, NULL, NULL);
}

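/*
 * Allocate any missing page table levels needed to map a huge page of size
 * @sz at @addr, and return the entry at the level where the page is mapped:
 * the P4D, PUD or PMD entry for sizes at or above those levels, otherwise a
 * PTE. On 8xx, PMD-sized and larger pages are built from contiguous PTEs,
 * so a PTE table is populated for every PMD entry the page spans.
 */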
pte_t *huge_pte_alloc(struct mm_struct *mm, struct vm_area_struct *vma,
		      unsigned long addr, unsigned long sz)
{
	p4d_t *p4d;
	pud_t *pud;
	pmd_t *pmd;

	addr &= ~(sz - 1);

	p4d = p4d_offset(pgd_offset(mm, addr), addr);
	if (!mm_pud_folded(mm) && sz >= P4D_SIZE)
		return (pte_t *)p4d;

	pud = pud_alloc(mm, p4d, addr);
	if (!pud)
		return NULL;
	if (!mm_pmd_folded(mm) && sz >= PUD_SIZE)
		return (pte_t *)pud;

	pmd = pmd_alloc(mm, pud, addr);
	if (!pmd)
		return NULL;

	if (sz >= PMD_SIZE) {
		/* On 8xx, all hugepages are handled as contiguous PTEs */
		if (IS_ENABLED(CONFIG_PPC_8xx)) {
			int i;

			for (i = 0; i < sz / PMD_SIZE; i++) {
				if (!pte_alloc_huge(mm, pmd + i, addr))
					return NULL;
			}
		}
		return (pte_t *)pmd;
	}

	return pte_alloc_huge(mm, pmd, addr);
}

#ifdef CONFIG_PPC_BOOK3S_64
/*
 * Tracks gpages after the device tree is scanned and before the
 * huge_boot_pages list is ready on pseries.
 */
#define MAX_NUMBER_GPAGES	1024
__initdata static u64 gpage_freearray[MAX_NUMBER_GPAGES];
__initdata static unsigned nr_gpages;

/*
 * Build list of addresses of gigantic pages. This function is used in early
 * boot before the buddy allocator is set up.
 */
void __init pseries_add_gpage(u64 addr, u64 page_size, unsigned long number_of_pages)
{
	if (!addr)
		return;
	while (number_of_pages > 0) {
		gpage_freearray[nr_gpages] = addr;
		nr_gpages++;
		number_of_pages--;
		addr += page_size;
	}
}

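/*
 * Take one gigantic page off the firmware-provided gpage list and queue it
 * on huge_boot_pages for the generic hugetlb boot-time allocator.
 */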
static int __init pseries_alloc_bootmem_huge_page(struct hstate *hstate)
{
	struct huge_bootmem_page *m;
	if (nr_gpages == 0)
		return 0;
	m = phys_to_virt(gpage_freearray[--nr_gpages]);
	gpage_freearray[nr_gpages] = 0;
	list_add(&m->list, &huge_boot_pages[0]);
	m->hstate = hstate;
	m->flags = 0;
	return 1;
}

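/* Node-specific allocation of boot-time gigantic pages is not supported here. */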
bool __init hugetlb_node_alloc_supported(void)
{
	return false;
}
#endif

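/*
 * On hash-MMU LPARs, boot-time gigantic pages come from the gpage list
 * collected from the device tree; otherwise fall back to the generic
 * boot-time allocator.
 */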
int __init alloc_bootmem_huge_page(struct hstate *h, int nid)
{

#ifdef CONFIG_PPC_BOOK3S_64
	if (firmware_has_feature(FW_FEATURE_LPAR) && !radix_enabled())
		return pseries_alloc_bootmem_huge_page(h);
#endif
	return __alloc_bootmem_huge_page(h, nid);
}

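/*
 * Check whether @size is a hugepage size that this MMU configuration
 * actually provides a page size definition for.
 */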
bool __init arch_hugetlb_valid_size(unsigned long size)
{
	int shift = __ffs(size);
	int mmu_psize;

	/* Check that it is a page size supported by the hardware and
	 * that it fits within pagetable and slice limits. */
	if (size <= PAGE_SIZE || !is_power_of_2(size))
		return false;

	mmu_psize = check_and_get_huge_psize(shift);
	if (mmu_psize < 0)
		return false;

	BUG_ON(mmu_psize_defs[mmu_psize].shift != shift);

	return true;
}

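/* Register an hstate for @size if it is a valid hugepage size. */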
static int __init add_huge_page_size(unsigned long long size)
{
	int shift = __ffs(size);

	if (!arch_hugetlb_valid_size((unsigned long)size))
		return -EINVAL;

	hugetlb_add_hstate(shift - PAGE_SHIFT);
	return 0;
}

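/*
 * Register an hstate for every page size in mmu_psize_defs that is usable
 * as a hugepage size.
 */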
static int __init hugetlbpage_init(void)
{
	bool configured = false;
	int psize;

	if (hugetlb_disabled) {
		pr_info("HugeTLB support is disabled!\n");
		return 0;
	}

	if (IS_ENABLED(CONFIG_PPC_BOOK3S_64) && !radix_enabled() &&
	    !mmu_has_feature(MMU_FTR_16M_PAGE))
		return -ENODEV;

	for (psize = 0; psize < MMU_PAGE_COUNT; ++psize) {
		unsigned shift;

		if (!mmu_psize_defs[psize].shift)
			continue;

		shift = mmu_psize_to_shift(psize);

		if (add_huge_page_size(1ULL << shift) < 0)
			continue;

		configured = true;
	}

	if (!configured)
		pr_info("Failed to initialize. Disabling HugeTLB");

	return 0;
}

arch_initcall(hugetlbpage_init);

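/*
 * Reserve CMA for gigantic hugepages: PUD-sized pages when radix is
 * enabled, 16G pages on non-LPAR hash systems.
 */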
void __init gigantic_hugetlb_cma_reserve(void)
{
	unsigned long order = 0;

	if (radix_enabled())
		order = PUD_SHIFT - PAGE_SHIFT;
	else if (!firmware_has_feature(FW_FEATURE_LPAR) && mmu_psize_defs[MMU_PAGE_16G].shift)
		/*
		 * For pseries we do use ibm,expected#pages for reserving 16G pages.
		 */
		order = mmu_psize_to_shift(MMU_PAGE_16G) - PAGE_SHIFT;

	if (order)
		hugetlb_cma_reserve(order);
}