linux/arch/x86/kernel/cpu/cacheinfo.c


// SPDX-License-Identifier: GPL-2.0
/*
* x86 CPU cache detection and configuration
*
* Previous changes
* - Venkatesh Pallipadi: Cache identification through CPUID(0x4)
* - Ashok Raj <ashok.raj@intel.com>: Work with CPU hotplug infrastructure
* - Andi Kleen / Andreas Herrmann: CPUID(0x4) emulation on AMD
*/
#include <linux/cacheinfo.h>
#include <linux/cpu.h>
#include <linux/cpuhotplug.h>
#include <linux/stop_machine.h>
#include <asm/amd/nb.h>
#include <asm/cacheinfo.h>
#include <asm/cpufeature.h>
#include <asm/cpuid/api.h>
#include <asm/mtrr.h>
#include <asm/smp.h>
#include <asm/tlbflush.h>
#include "cpu.h"
/* Shared last level cache maps */
DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_llc_shared_map);
/* Shared L2 cache maps */
DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_l2c_shared_map);
static cpumask_var_t cpu_cacheinfo_mask;
/* Kernel controls MTRR and/or PAT MSRs. */
unsigned int memory_caching_control __ro_after_init;
enum _cache_type {
CTYPE_NULL = 0,
CTYPE_DATA = 1,
CTYPE_INST = 2,
CTYPE_UNIFIED = 3
};
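/*
* The unions below mirror the register layout returned by Intel
* CPUID(0x4) and AMD CPUID(0x8000001d): EAX describes the cache type,
* level and sharing, EBX the line size/partitions/ways (each encoded
* as value - 1), and ECX the number of sets - 1.
*/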
union _cpuid4_leaf_eax {
struct {
enum _cache_type type :5;
unsigned int level :3;
unsigned int is_self_initializing :1;
unsigned int is_fully_associative :1;
unsigned int reserved :4;
unsigned int num_threads_sharing :12;
unsigned int num_cores_on_die :6;
} split;
u32 full;
};
union _cpuid4_leaf_ebx {
struct {
unsigned int coherency_line_size :12;
unsigned int physical_line_partition :10;
unsigned int ways_of_associativity :10;
} split;
u32 full;
};
union _cpuid4_leaf_ecx {
struct {
unsigned int number_of_sets :32;
} split;
u32 full;
};
struct _cpuid4_info {
union _cpuid4_leaf_eax eax;
union _cpuid4_leaf_ebx ebx;
union _cpuid4_leaf_ecx ecx;
unsigned int id;
unsigned long size;
};
/* Map CPUID(0x4) EAX.cache_type to <linux/cacheinfo.h> types */
static const enum cache_type cache_type_map[] = {
[CTYPE_NULL] = CACHE_TYPE_NOCACHE,
[CTYPE_DATA] = CACHE_TYPE_DATA,
[CTYPE_INST] = CACHE_TYPE_INST,
[CTYPE_UNIFIED] = CACHE_TYPE_UNIFIED,
};
/*
* Fallback AMD CPUID(0x4) emulation
* AMD CPUs with TOPOEXT can just use CPUID(0x8000001d)
*
* @AMD_L2_L3_INVALID_ASSOC: cache info for the respective L2/L3 cache should
* be determined from CPUID(0x8000001d) instead of CPUID(0x80000006).
*/
#define AMD_CPUID4_FULLY_ASSOCIATIVE 0xffff
#define AMD_L2_L3_INVALID_ASSOC 0x9
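/*
* Register layouts for the legacy AMD cache leaves, as consumed by
* legacy_amd_cpuid4() below: CPUID(0x80000005) ECX/EDX describe the
* L1D/L1I caches, CPUID(0x80000006) ECX/EDX describe the L2/L3 caches.
*/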
union l1_cache {
struct {
unsigned line_size :8;
unsigned lines_per_tag :8;
unsigned assoc :8;
unsigned size_in_kb :8;
};
unsigned int val;
};
union l2_cache {
struct {
unsigned line_size :8;
unsigned lines_per_tag :4;
unsigned assoc :4;
unsigned size_in_kb :16;
};
unsigned int val;
};
union l3_cache {
struct {
unsigned line_size :8;
unsigned lines_per_tag :4;
unsigned assoc :4;
unsigned res :2;
unsigned size_encoded :14;
};
unsigned int val;
};
/* L2/L3 associativity mapping */
static const unsigned short assocs[] = {
[1] = 1,
[2] = 2,
[3] = 3,
[4] = 4,
[5] = 6,
[6] = 8,
[8] = 16,
[0xa] = 32,
[0xb] = 48,
[0xc] = 64,
[0xd] = 96,
[0xe] = 128,
[0xf] = AMD_CPUID4_FULLY_ASSOCIATIVE
};
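/*
* CPUID(0x4)-style cache level and type values for the legacy AMD cache
* indices 0-3 (L1D, L1I, L2, L3), used by legacy_amd_cpuid4() below.
*/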
static const unsigned char levels[] = { 1, 1, 2, 3 };
static const unsigned char types[] = { 1, 2, 3, 3 };
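/*
* Emulate CPUID(0x4) output for cache index 0-3 (L1D, L1I, L2, L3) from
* the legacy CPUID(0x80000005)/CPUID(0x80000006) leaves, for AMD CPUs
* without TOPOEXT/CPUID(0x8000001d). The output is left zeroed if the
* respective cache is not present or its info must come from
* CPUID(0x8000001d) instead.
*/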
static void legacy_amd_cpuid4(int index, union _cpuid4_leaf_eax *eax,
union _cpuid4_leaf_ebx *ebx, union _cpuid4_leaf_ecx *ecx)
{
unsigned int dummy, line_size, lines_per_tag, assoc, size_in_kb;
union l1_cache l1i, l1d, *l1;
union l2_cache l2;
union l3_cache l3;
eax->full = 0;
ebx->full = 0;
ecx->full = 0;
cpuid(0x80000005, &dummy, &dummy, &l1d.val, &l1i.val);
cpuid(0x80000006, &dummy, &dummy, &l2.val, &l3.val);
l1 = &l1d;
switch (index) {
case 1:
l1 = &l1i;
fallthrough;
case 0:
if (!l1->val)
return;
assoc = (l1->assoc == 0xff) ? AMD_CPUID4_FULLY_ASSOCIATIVE : l1->assoc;
line_size = l1->line_size;
lines_per_tag = l1->lines_per_tag;
size_in_kb = l1->size_in_kb;
break;
case 2:
if (!l2.assoc || l2.assoc == AMD_L2_L3_INVALID_ASSOC)
return;
/* Use x86_cache_size as it might have K7 errata fixes */
assoc = assocs[l2.assoc];
line_size = l2.line_size;
lines_per_tag = l2.lines_per_tag;
size_in_kb = __this_cpu_read(cpu_info.x86_cache_size);
break;
case 3:
if (!l3.assoc || l3.assoc == AMD_L2_L3_INVALID_ASSOC)
return;
assoc = assocs[l3.assoc];
line_size = l3.line_size;
lines_per_tag = l3.lines_per_tag;
size_in_kb = l3.size_encoded * 512;
if (boot_cpu_has(X86_FEATURE_AMD_DCM)) {
size_in_kb = size_in_kb >> 1;
assoc = assoc >> 1;
}
break;
default:
return;
}
eax->split.is_self_initializing = 1;
eax->split.type = types[index];
eax->split.level = levels[index];
eax->split.num_threads_sharing = 0;
eax->split.num_cores_on_die = topology_num_cores_per_package();
if (assoc == AMD_CPUID4_FULLY_ASSOCIATIVE)
eax->split.is_fully_associative = 1;
ebx->split.coherency_line_size = line_size - 1;
ebx->split.ways_of_associativity = assoc - 1;
ebx->split.physical_line_partition = lines_per_tag - 1;
ecx->split.number_of_sets = (size_in_kb * 1024) / line_size /
(ebx->split.ways_of_associativity + 1) - 1;
}
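/*
* Store the raw leaf data and compute the cache size as
* sets * line size * physical line partitions * ways, where each CPUID
* field encodes its value as value - 1. For example:
* 64 sets * 64-byte lines * 1 partition * 8 ways = 32 KB.
*/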
static int cpuid4_info_fill_done(struct _cpuid4_info *id4, union _cpuid4_leaf_eax eax,
union _cpuid4_leaf_ebx ebx, union _cpuid4_leaf_ecx ecx)
{
if (eax.split.type == CTYPE_NULL)
return -EIO;
id4->eax = eax;
id4->ebx = ebx;
id4->ecx = ecx;
id4->size = (ecx.split.number_of_sets + 1) *
(ebx.split.coherency_line_size + 1) *
(ebx.split.physical_line_partition + 1) *
(ebx.split.ways_of_associativity + 1);
return 0;
}
static int amd_fill_cpuid4_info(int index, struct _cpuid4_info *id4)
{
union _cpuid4_leaf_eax eax;
union _cpuid4_leaf_ebx ebx;
union _cpuid4_leaf_ecx ecx;
u32 ignored;
if (boot_cpu_has(X86_FEATURE_TOPOEXT) || boot_cpu_data.x86_vendor == X86_VENDOR_HYGON)
cpuid_count(0x8000001d, index, &eax.full, &ebx.full, &ecx.full, &ignored);
else
legacy_amd_cpuid4(index, &eax, &ebx, &ecx);
return cpuid4_info_fill_done(id4, eax, ebx, ecx);
}
static int intel_fill_cpuid4_info(int index, struct _cpuid4_info *id4)
{
union _cpuid4_leaf_eax eax;
union _cpuid4_leaf_ebx ebx;
union _cpuid4_leaf_ecx ecx;
u32 ignored;
cpuid_count(4, index, &eax.full, &ebx.full, &ecx.full, &ignored);
return cpuid4_info_fill_done(id4, eax, ebx, ecx);
}
static int fill_cpuid4_info(int index, struct _cpuid4_info *id4)
{
u8 cpu_vendor = boot_cpu_data.x86_vendor;
return (cpu_vendor == X86_VENDOR_AMD || cpu_vendor == X86_VENDOR_HYGON) ?
amd_fill_cpuid4_info(index, id4) :
intel_fill_cpuid4_info(index, id4);
}
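/*
* Count the cache leaves by iterating CPUID(0x4) (Intel) or
* CPUID(0x8000001d) (AMD/Hygon) subleaves until one reports CTYPE_NULL.
*/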
static int find_num_cache_leaves(struct cpuinfo_x86 *c)
{
unsigned int eax, ebx, ecx, edx, op;
union _cpuid4_leaf_eax cache_eax;
int i = -1;
/* Do a CPUID(op) loop to calculate num_cache_leaves */
op = (c->x86_vendor == X86_VENDOR_AMD || c->x86_vendor == X86_VENDOR_HYGON) ? 0x8000001d : 4;
do {
++i;
cpuid_count(op, i, &eax, &ebx, &ecx, &edx);
cache_eax.full = eax;
} while (cache_eax.split.type != CTYPE_NULL);
return i;
}
/*
* AMD/Hygon CPUs may have multiple LLCs if L3 caches exist.
*/
void cacheinfo_amd_init_llc_id(struct cpuinfo_x86 *c, u16 die_id)
{
if (!cpuid_amd_hygon_has_l3_cache())
return;
if (c->x86 < 0x17) {
/* Pre-Zen: LLC is at the node level */
c->topo.llc_id = die_id;
} else if (c->x86 == 0x17 && c->x86_model <= 0x1F) {
/*
* Family 17h up to 1F models: LLC is at the core
* complex level. Core complex ID is ApicId[3].
*/
c->topo.llc_id = c->topo.apicid >> 3;
} else {
/*
* Newer families: LLC ID is calculated from the number
* of threads sharing the L3 cache.
*/
u32 eax, ebx, ecx, edx, num_sharing_cache = 0;
u32 llc_index = find_num_cache_leaves(c) - 1;
cpuid_count(0x8000001d, llc_index, &eax, &ebx, &ecx, &edx);
if (eax)
num_sharing_cache = ((eax >> 14) & 0xfff) + 1;
if (num_sharing_cache) {
int index_msb = get_count_order(num_sharing_cache);
c->topo.llc_id = c->topo.apicid >> index_msb;
}
}
}
void cacheinfo_hygon_init_llc_id(struct cpuinfo_x86 *c)
{
if (!cpuid_amd_hygon_has_l3_cache())
return;
/*
* Hygons are similar to AMD Family 17h up to 1F models: LLC is
* at the core complex level. Core complex ID is ApicId[3].
*/
c->topo.llc_id = c->topo.apicid >> 3;
}
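/*
* Without TOPOEXT the leaf count is derived from CPUID(0x80000006):
* a non-zero L3 field (EDX[15:12]) means four cache leaves
* (L1D, L1I, L2, L3), otherwise three.
*/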
void init_amd_cacheinfo(struct cpuinfo_x86 *c)
{
struct cpu_cacheinfo *ci = get_cpu_cacheinfo(c->cpu_index);
if (boot_cpu_has(X86_FEATURE_TOPOEXT))
ci->num_leaves = find_num_cache_leaves(c);
else if (c->extended_cpuid_level >= 0x80000006)
ci->num_leaves = (cpuid_edx(0x80000006) & 0xf000) ? 4 : 3;
}
void init_hygon_cacheinfo(struct cpuinfo_x86 *c)
{
struct cpu_cacheinfo *ci = get_cpu_cacheinfo(c->cpu_index);
ci->num_leaves = find_num_cache_leaves(c);
}
static void intel_cacheinfo_done(struct cpuinfo_x86 *c, unsigned int l3,
unsigned int l2, unsigned int l1i, unsigned int l1d)
{
/*
* If llc_id is still unset, then cpuid_level < 4, which implies
* that the only possibility left is SMT. Since CPUID(0x2) doesn't
* specify any shared caches and SMT shares all caches, we can
* unconditionally set LLC ID to the package ID so that all
* threads share it.
*/
if (c->topo.llc_id == BAD_APICID)
c->topo.llc_id = c->topo.pkg_id;
c->x86_cache_size = l3 ? l3 : (l2 ? l2 : l1i + l1d);
if (!l2)
cpu_detect_cache_sizes(c);
}
/*
* Legacy Intel CPUID(0x2) path if CPUID(0x4) is not available.
*/
static void intel_cacheinfo_0x2(struct cpuinfo_x86 *c)
{
unsigned int l1i = 0, l1d = 0, l2 = 0, l3 = 0;
const struct leaf_0x2_table *desc;
union leaf_0x2_regs regs;
u8 *ptr;
if (c->cpuid_level < 2)
return;
cpuid_leaf_0x2(&regs);
for_each_cpuid_0x2_desc(regs, ptr, desc) {
switch (desc->c_type) {
case CACHE_L1_INST: l1i += desc->c_size; break;
case CACHE_L1_DATA: l1d += desc->c_size; break;
case CACHE_L2: l2 += desc->c_size; break;
case CACHE_L3: l3 += desc->c_size; break;
}
}
intel_cacheinfo_done(c, l3, l2, l1i, l1d);
}
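/*
* The topology ID of the domain sharing this cache is the APIC ID with
* the low-order bits that enumerate the sharing threads masked off
* (num_threads_sharing rounded up to a power of two).
*/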
static unsigned int calc_cache_topo_id(struct cpuinfo_x86 *c, const struct _cpuid4_info *id4)
{
unsigned int num_threads_sharing;
int index_msb;
num_threads_sharing = 1 + id4->eax.split.num_threads_sharing;
index_msb = get_count_order(num_threads_sharing);
return c->topo.apicid & ~((1 << index_msb) - 1);
}
static bool intel_cacheinfo_0x4(struct cpuinfo_x86 *c)
{
struct cpu_cacheinfo *ci = get_cpu_cacheinfo(c->cpu_index);
unsigned int l2_id = BAD_APICID, l3_id = BAD_APICID;
unsigned int l1d = 0, l1i = 0, l2 = 0, l3 = 0;
if (c->cpuid_level < 4)
return false;
/*
* There should be at least one leaf. A non-zero value means
* that the number of leaves has been previously initialized.
*/
if (!ci->num_leaves)
ci->num_leaves = find_num_cache_leaves(c);
if (!ci->num_leaves)
return false;
for (int i = 0; i < ci->num_leaves; i++) {
struct _cpuid4_info id4 = {};
int ret;
ret = intel_fill_cpuid4_info(i, &id4);
if (ret < 0)
continue;
switch (id4.eax.split.level) {
case 1:
if (id4.eax.split.type == CTYPE_DATA)
l1d = id4.size / 1024;
else if (id4.eax.split.type == CTYPE_INST)
l1i = id4.size / 1024;
break;
case 2:
l2 = id4.size / 1024;
l2_id = calc_cache_topo_id(c, &id4);
break;
case 3:
l3 = id4.size / 1024;
l3_id = calc_cache_topo_id(c, &id4);
break;
default:
break;
}
}
c->topo.l2c_id = l2_id;
c->topo.llc_id = (l3_id == BAD_APICID) ? l2_id : l3_id;
intel_cacheinfo_done(c, l3, l2, l1i, l1d);
return true;
}
void init_intel_cacheinfo(struct cpuinfo_x86 *c)
{
/* Don't use CPUID(0x2) if CPUID(0x4) is supported. */
if (intel_cacheinfo_0x4(c))
return;
intel_cacheinfo_0x2(c);
}
/*
* <linux/cacheinfo.h> shared_cpu_map setup, AMD/Hygon
*/
static int __cache_amd_cpumap_setup(unsigned int cpu, int index,
const struct _cpuid4_info *id4)
{
struct cpu_cacheinfo *this_cpu_ci;
struct cacheinfo *ci;
int i, sibling;
/*
* For L3, always use the pre-calculated cpu_llc_shared_mask
* to derive shared_cpu_map.
*/
if (index == 3) {
for_each_cpu(i, cpu_llc_shared_mask(cpu)) {
this_cpu_ci = get_cpu_cacheinfo(i);
if (!this_cpu_ci->info_list)
continue;
ci = this_cpu_ci->info_list + index;
for_each_cpu(sibling, cpu_llc_shared_mask(cpu)) {
if (!cpu_online(sibling))
continue;
cpumask_set_cpu(sibling, &ci->shared_cpu_map);
}
}
} else if (boot_cpu_has(X86_FEATURE_TOPOEXT)) {
unsigned int apicid, nshared, first, last;
nshared = id4->eax.split.num_threads_sharing + 1;
apicid = cpu_data(cpu).topo.apicid;
first = apicid - (apicid % nshared);
last = first + nshared - 1;
for_each_online_cpu(i) {
this_cpu_ci = get_cpu_cacheinfo(i);
if (!this_cpu_ci->info_list)
continue;
apicid = cpu_data(i).topo.apicid;
if ((apicid < first) || (apicid > last))
continue;
ci = this_cpu_ci->info_list + index;
for_each_online_cpu(sibling) {
apicid = cpu_data(sibling).topo.apicid;
if ((apicid < first) || (apicid > last))
continue;
cpumask_set_cpu(sibling, &ci->shared_cpu_map);
}
}
} else
return 0;
return 1;
}
/*
* <linux/cacheinfo.h> shared_cpu_map setup, Intel + fallback AMD/Hygon
*/
static void __cache_cpumap_setup(unsigned int cpu, int index,
const struct _cpuid4_info *id4)
{
struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu);
struct cpuinfo_x86 *c = &cpu_data(cpu);
struct cacheinfo *ci, *sibling_ci;
unsigned long num_threads_sharing;
int index_msb, i;
if (c->x86_vendor == X86_VENDOR_AMD || c->x86_vendor == X86_VENDOR_HYGON) {
if (__cache_amd_cpumap_setup(cpu, index, id4))
return;
}
ci = this_cpu_ci->info_list + index;
num_threads_sharing = 1 + id4->eax.split.num_threads_sharing;
cpumask_set_cpu(cpu, &ci->shared_cpu_map);
if (num_threads_sharing == 1)
return;
index_msb = get_count_order(num_threads_sharing);
for_each_online_cpu(i)
if (cpu_data(i).topo.apicid >> index_msb == c->topo.apicid >> index_msb) {
struct cpu_cacheinfo *sib_cpu_ci = get_cpu_cacheinfo(i);
/* Skip the CPU itself and CPUs without cacheinfo */
if (i == cpu || !sib_cpu_ci->info_list)
continue;
sibling_ci = sib_cpu_ci->info_list + index;
cpumask_set_cpu(i, &ci->shared_cpu_map);
cpumask_set_cpu(cpu, &sibling_ci->shared_cpu_map);
}
}
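/*
* Translate a _cpuid4_info descriptor into the generic
* <linux/cacheinfo.h> representation. The CPUID fields encode line
* size, ways, partitions and sets as value - 1, hence the +1s.
*/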
static void ci_info_init(struct cacheinfo *ci, const struct _cpuid4_info *id4,
struct amd_northbridge *nb)
{
ci->id = id4->id;
ci->attributes = CACHE_ID;
ci->level = id4->eax.split.level;
ci->type = cache_type_map[id4->eax.split.type];
ci->coherency_line_size = id4->ebx.split.coherency_line_size + 1;
ci->ways_of_associativity = id4->ebx.split.ways_of_associativity + 1;
ci->size = id4->size;
ci->number_of_sets = id4->ecx.split.number_of_sets + 1;
ci->physical_line_partition = id4->ebx.split.physical_line_partition + 1;
ci->priv = nb;
}
int init_cache_level(unsigned int cpu)
{
struct cpu_cacheinfo *ci = get_cpu_cacheinfo(cpu);
/* There should be at least one leaf. */
if (!ci->num_leaves)
return -ENOENT;
return 0;
}
/*
* The max shared threads number comes from CPUID(0x4) EAX[25-14] with input
* ECX as cache index. Then right shift apicid by the number's order to get
* cache id for this cache node.
*/
static void get_cache_id(int cpu, struct _cpuid4_info *id4)
{
struct cpuinfo_x86 *c = &cpu_data(cpu);
unsigned long num_threads_sharing;
int index_msb;
num_threads_sharing = 1 + id4->eax.split.num_threads_sharing;
index_msb = get_count_order(num_threads_sharing);
id4->id = c->topo.apicid >> index_msb;
}
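/*
* Generic cacheinfo hook: fill one struct cacheinfo per cache leaf of
* @cpu and set up its shared_cpu_map. On AMD/Hygon, the amd_northbridge
* returned by amd_init_l3_cache() is attached as the leaf's private data.
*/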
int populate_cache_leaves(unsigned int cpu)
{
struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu);
struct cacheinfo *ci = this_cpu_ci->info_list;
u8 cpu_vendor = boot_cpu_data.x86_vendor;
struct amd_northbridge *nb = NULL;
struct _cpuid4_info id4 = {};
int idx, ret;
for (idx = 0; idx < this_cpu_ci->num_leaves; idx++) {
ret = fill_cpuid4_info(idx, &id4);
if (ret)
return ret;
get_cache_id(cpu, &id4);
if (cpu_vendor == X86_VENDOR_AMD || cpu_vendor == X86_VENDOR_HYGON)
nb = amd_init_l3_cache(idx);
ci_info_init(ci++, &id4, nb);
__cache_cpumap_setup(cpu, idx, &id4);
}
this_cpu_ci->cpu_map_populated = true;
return 0;
}
/*
* Disable and enable caches. Needed for changing MTRRs and the PAT MSR.
*
* Since the caches are being disabled, don't allow any interrupts;
* they would run extremely slowly and would only increase the pain.
*
* The caller must ensure that local interrupts are disabled and
* are reenabled after cache_enable() has been called.
*/
static unsigned long saved_cr4;
static DEFINE_RAW_SPINLOCK(cache_disable_lock);
/*
* Cache flushing is the most time-consuming step when programming the
* MTRRs. On many Intel CPUs without known errata, it can be skipped
* if the CPU declares cache self-snooping support.
*/
static void maybe_flush_caches(void)
{
if (!static_cpu_has(X86_FEATURE_SELFSNOOP))
wbinvd();
}
void cache_disable(void) __acquires(cache_disable_lock)
{
unsigned long cr0;
/*
* This is not ideal since the cache is only flushed/disabled
* for this CPU while the MTRRs are changed, but changing this
* requires more invasive changes to the way the kernel boots.
*/
raw_spin_lock(&cache_disable_lock);
/* Enter the no-fill (CD=1, NW=0) cache mode and flush caches. */
cr0 = read_cr0() | X86_CR0_CD;
write_cr0(cr0);
maybe_flush_caches();
/* Save value of CR4 and clear Page Global Enable (bit 7) */
if (cpu_feature_enabled(X86_FEATURE_PGE)) {
saved_cr4 = __read_cr4();
__write_cr4(saved_cr4 & ~X86_CR4_PGE);
}
/* Flush all TLBs via a mov %cr3, %reg; mov %reg, %cr3 */
count_vm_tlb_event(NR_TLB_LOCAL_FLUSH_ALL);
flush_tlb_local();
if (cpu_feature_enabled(X86_FEATURE_MTRR))
mtrr_disable();
maybe_flush_caches();
}
void cache_enable(void) __releases(cache_disable_lock)
{
/* Flush TLBs (no need to flush caches - they are disabled) */
count_vm_tlb_event(NR_TLB_LOCAL_FLUSH_ALL);
flush_tlb_local();
if (cpu_feature_enabled(X86_FEATURE_MTRR))
mtrr_enable();
/* Enable caches */
write_cr0(read_cr0() & ~X86_CR0_CD);
/* Restore value of CR4 */
if (cpu_feature_enabled(X86_FEATURE_PGE))
__write_cr4(saved_cr4);
raw_spin_unlock(&cache_disable_lock);
}
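/*
* Program the MTRRs and/or PAT on the current CPU with interrupts
* disabled, wrapping the MTRR update in cache_disable()/cache_enable().
*/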
static void cache_cpu_init(void)
{
unsigned long flags;
local_irq_save(flags);
if (memory_caching_control & CACHE_MTRR) {
cache_disable();
mtrr_generic_set_state();
cache_enable();
}
if (memory_caching_control & CACHE_PAT)
pat_cpu_init();
local_irq_restore(flags);
}
static bool cache_aps_delayed_init = true;
void set_cache_aps_delayed_init(bool val)
{
cache_aps_delayed_init = val;
}
bool get_cache_aps_delayed_init(void)
{
return cache_aps_delayed_init;
}
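/*
* stop_machine() callback: initialize caching either on all CPUs during
* the delayed AP init, or only on the CPU that is not yet online
* (hotplug/resume path).
*/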
static int cache_rendezvous_handler(void *unused)
{
if (get_cache_aps_delayed_init() || !cpu_online(smp_processor_id()))
cache_cpu_init();
return 0;
}
void __init cache_bp_init(void)
{
mtrr_bp_init();
pat_bp_init();
if (memory_caching_control)
cache_cpu_init();
}
void cache_bp_restore(void)
{
if (memory_caching_control)
cache_cpu_init();
}
static int cache_ap_online(unsigned int cpu)
{
cpumask_set_cpu(cpu, cpu_cacheinfo_mask);
if (!memory_caching_control || get_cache_aps_delayed_init())
return 0;
/*
* Ideally, mtrr_mutex would be held here to prevent concurrent MTRR
* entry changes, but this routine is called during CPU bringup, where
* taking that lock would break things.
*
* This routine is called in two cases:
*
* 1. very early during software resume, when MTRR entries are
*    guaranteed not to change;
*
* 2. during CPU hot-add, where mtrr_add/del_page take the CPU hotplug
*    lock to prevent MTRR entry changes.
*/
stop_machine_from_inactive_cpu(cache_rendezvous_handler, NULL,
cpu_cacheinfo_mask);
return 0;
}
static int cache_ap_offline(unsigned int cpu)
{
cpumask_clear_cpu(cpu, cpu_cacheinfo_mask);
return 0;
}
/*
* Delayed cache initialization for all APs
*/
void cache_aps_init(void)
{
if (!memory_caching_control || !get_cache_aps_delayed_init())
return;
stop_machine(cache_rendezvous_handler, NULL, cpu_online_mask);
set_cache_aps_delayed_init(false);
}
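/*
* Allocate cpu_cacheinfo_mask, seed it with the boot CPU and register
* the hotplug callbacks that keep it in sync with the CPUs that have
* passed the x86/cachectrl:starting step.
*/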
static int __init cache_ap_register(void)
{
zalloc_cpumask_var(&cpu_cacheinfo_mask, GFP_KERNEL);
cpumask_set_cpu(smp_processor_id(), cpu_cacheinfo_mask);
cpuhp_setup_state_nocalls(CPUHP_AP_CACHECTRL_STARTING,
"x86/cachectrl:starting",
cache_ap_online, cache_ap_offline);
return 0;
}
early_initcall(cache_ap_register);