License cleanup: add SPDX GPL-2.0 license identifier to files with no license
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boilerplate text.
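
For example, instead of a multi-line license notice, a tagged C source
file simply carries this as its first line:

    // SPDX-License-Identifier: GPL-2.0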
This patch is based on work done by Thomas Gleixner, Kate Stewart and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it,
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information.
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and references to a license
had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX License Identifier should be applied
to a file was done in a spreadsheet of side-by-side results from the
output of two independent scanners (ScanCode & Windriver) producing SPDX
tag:value files created by Philippe Ombredanne. Philippe prepared the
base worksheet, and did an initial spot review of a few thousand files.
The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed. Kate Stewart did a file-by-file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
should be applied to the file. She confirmed any determination that was
not immediately clear with lawyers working with the Linux Foundation.
Criteria used to select files for SPDX license identifier tagging were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
lines of source.
- File already had some variant of a license header in it (even if <5
lines).
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when neither scanner could find any license traces, the file was
considered to have no license information in it, and the top level
COPYING file license applied.
For non-*/uapi/* files that summary was:

  SPDX license identifier                             # files
  ----------------------------------------------------|-------
  GPL-2.0                                               11139

and resulted in the first patch in this series.
If that file was a */uapi/* path one, it was "GPL-2.0 WITH
Linux-syscall-note"; otherwise it was "GPL-2.0". Results of that were:

  SPDX license identifier                             # files
  ----------------------------------------------------|-------
  GPL-2.0 WITH Linux-syscall-note                         930

and resulted in the second patch in this series.
- if a file had some form of licensing information in it, and was one
of the */uapi/* ones, it was denoted with the Linux-syscall-note if
any GPL family license was found in the file, or if it had no licensing
in it (per the prior point). Results summary:
  SPDX license identifier                             # files
  ----------------------------------------------------|------
  GPL-2.0 WITH Linux-syscall-note                         270
  GPL-2.0+ WITH Linux-syscall-note                        169
  ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause)      21
  ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)      17
  LGPL-2.1+ WITH Linux-syscall-note                        15
  GPL-1.0+ WITH Linux-syscall-note                         14
  ((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause)      5
  LGPL-2.0+ WITH Linux-syscall-note                         4
  LGPL-2.1 WITH Linux-syscall-note                          3
  ((GPL-2.0 WITH Linux-syscall-note) OR MIT)                3
  ((GPL-2.0 WITH Linux-syscall-note) AND MIT)               1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they detected different licenses), a
manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
the file was flagged for further research, to be revisited later.
In total, over 70 hours of logged manual review of the spreadsheet was
done by Kate, Philippe and Thomas to determine the SPDX license
identifiers to apply to the source files, with confirmation in some
cases by lawyers working with the Linux Foundation.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were new insights. The
Windriver scanner is based in part on an older version of FOSSology, so
the two are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with the SPDX license identifiers in the
files he inspected. For the non-uapi files Thomas did random spot checks
in about 15000 files.
In the initial set of patches against 4.14-rc6, 3 files were found to
have copy/paste license identifier errors; they have been fixed to
reflect the correct identifier.
Additionally, Philippe spent 10 hours doing a detailed manual inspection
and review of the 12,461 files patched in the initial patch version
earlier this week, with:
- a full scancode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where there was a license detected (about 500+
files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the csv files and add the proper SPDX tag to each file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types). Finally, Greg ran the script using the .csv files to
generate the patches.
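
For illustration, the two comment styles differ like this (following
kernel convention, C source files take a C++-style comment while headers
take a C-style block comment):

    // SPDX-License-Identifier: GPL-2.0
    /* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
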
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

// SPDX-License-Identifier: GPL-2.0

#include <linux/cpuhotplug.h>
#include <linux/cpumask.h>
#include <linux/slab.h>
#include <linux/mm.h>

#include <asm/apic.h>

#include "local.h"
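
/*
 * The low four bits of the physical APIC ID index a CPU within its
 * x2APIC cluster (up to 16 CPUs per cluster); the remaining bits select
 * the cluster itself.
 */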
#define apic_cluster(apicid) ((apicid) >> 4)

/*
 * __x2apic_send_IPI_mask() possibly needs to read
 * x86_cpu_to_logical_apicid for all online cpus in a sequential way.
 * Using per cpu variable would cost one cache line per cpu.
 */
static u32 *x86_cpu_to_logical_apicid __read_mostly;

static DEFINE_PER_CPU(cpumask_var_t, ipi_mask);
static DEFINE_PER_CPU_READ_MOSTLY(struct cpumask *, cluster_masks);
|
2008-07-10 11:16:54 -07:00
|
|
|
|
2008-10-12 11:44:11 +02:00
|
|
|
static int x2apic_acpi_madt_oem_check(char *oem_id, char *oem_table_id)
|
2008-07-21 22:08:21 -07:00
|
|
|
{
|
2009-02-21 14:23:21 -08:00
|
|
|
return x2apic_enabled();
|
2008-07-21 22:08:21 -07:00
|
|
|
}
|
|
|
|
|
2015-11-04 22:57:00 +00:00
|
|
|
static void x2apic_send_IPI(int cpu, int vector)
|
|
|
|
{
|
2021-10-07 07:35:56 -07:00
|
|
|
u32 dest = x86_cpu_to_logical_apicid[cpu];
|
2015-11-04 22:57:00 +00:00
|
|
|
|
x86/apic: Add extra serialization for non-serializing MSRs
Jan Kiszka reported that the x2apic_wrmsr_fence() function uses a plain
MFENCE while the Intel SDM (10.12.3 MSR Access in x2APIC Mode) calls for
MFENCE; LFENCE.
Short summary: we have special MSRs that have weaker ordering than all
the rest. Add fencing consistent with current SDM recommendations.
This is not known to cause any issues in practice, only in theory.
Longer story below:
The reason the kernel uses a different semantic is that the SDM changed
(roughly in late 2017). The SDM changed because folks at Intel were
auditing all of the recommended fences in the SDM and realized that the
x2apic fences were insufficient.
Why was the plain MFENCE judged insufficient?
WRMSR itself is normally a serializing instruction. No fences are needed
because the instruction itself serializes everything.
But, there are explicit exceptions for this serializing behavior written
into the WRMSR instruction documentation for two classes of MSRs:
IA32_TSC_DEADLINE and the X2APIC MSRs.
Back to x2apic: WRMSR is *not* serializing in this specific case.
But why is MFENCE insufficient? MFENCE makes writes visible, but
only affects load/store instructions. WRMSR is unfortunately not a
load/store instruction and is unaffected by MFENCE. This means that a
non-serializing WRMSR could be reordered by the CPU to execute before
the writes made visible by the MFENCE have even occurred in the first
place.
This means that an x2apic IPI could theoretically be triggered before
there is any (visible) data to process.
Does this affect anything in practice? I honestly don't know. It seems
quite possible that by the time an interrupt gets to consume the (not
yet) MFENCE'd data, it has become visible, mostly by accident.
To be safe, add the SDM-recommended fences for all x2apic WRMSRs.
This also leaves open the question of the _other_ weakly-ordered WRMSR:
MSR_IA32_TSC_DEADLINE. While it has the same ordering architecture as
the x2APIC MSRs, it seems substantially less likely to be a problem in
practice. While writes to the in-memory Local Vector Table (LVT) might
theoretically be reordered with respect to a weakly-ordered WRMSR like
TSC_DEADLINE, the SDM has this to say:
In x2APIC mode, the WRMSR instruction is used to write to the LVT
entry. The processor ensures the ordering of this write and any
subsequent WRMSR to the deadline; no fencing is required.
But, that might still leave xAPIC exposed. The safest thing to do for
now is to add the extra, recommended LFENCE.
[ bp: Massage commit message, fix typos, drop accidentally added
newline to tools/arch/x86/include/asm/barrier.h. ]
Reported-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Cc: <stable@vger.kernel.org>
Link: https://lkml.kernel.org/r/20200305174708.F77040DD@viggo.jf.intel.com
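
The fence the patch adds boils down to the SDM-recommended MFENCE; LFENCE
pair. A minimal sketch of weak_wrmsr_fence(), assuming the form the change
gave it in arch/x86/include/asm/barrier.h:

/*
 * A WRMSR to a weakly-ordered (non-serializing) MSR needs MFENCE;LFENCE
 * ahead of it, per the SDM recommendation discussed above.
 */
static inline void weak_wrmsr_fence(void)
{
        asm volatile("mfence; lfence" : : : "memory");
}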

        /* x2apic MSRs are special and need a special fence: */
        weak_wrmsr_fence();
        __x2apic_send_IPI_dest(dest, vector, APIC_DEST_LOGICAL);
}

static void
__x2apic_send_IPI_mask(const struct cpumask *mask, int vector, int apic_dest)
{
        unsigned int cpu, clustercpu;
        struct cpumask *tmpmsk;
        unsigned long flags;
        u32 dest;

        /* x2apic MSRs are special and need a special fence: */
        weak_wrmsr_fence();
        local_irq_save(flags);

        tmpmsk = this_cpu_cpumask_var_ptr(ipi_mask);
        cpumask_copy(tmpmsk, mask);
        /* If IPI should not be sent to self, clear current CPU */
        if (apic_dest != APIC_DEST_ALLINC)
                __cpumask_clear_cpu(smp_processor_id(), tmpmsk);

        /* Collapse cpus in a cluster so a single IPI per cluster is sent */
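        /*
         * E.g. two CPUs of one cluster with logical IDs 0x20002 and
         * 0x20008 are ORed into dest = 0x2000a and served by one IPI.
         */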
        for_each_cpu(cpu, tmpmsk) {
                struct cpumask *cmsk = per_cpu(cluster_masks, cpu);

                dest = 0;
                for_each_cpu_and(clustercpu, tmpmsk, cmsk)
                        dest |= x86_cpu_to_logical_apicid[clustercpu];

                if (!dest)
                        continue;

                __x2apic_send_IPI_dest(dest, vector, APIC_DEST_LOGICAL);
                /* Remove cluster CPUs from tmpmask */
                cpumask_andnot(tmpmsk, tmpmsk, cmsk);
        }

        local_irq_restore(flags);
}

static void x2apic_send_IPI_mask(const struct cpumask *mask, int vector)
{
        __x2apic_send_IPI_mask(mask, vector, APIC_DEST_ALLINC);
}

static void
x2apic_send_IPI_mask_allbutself(const struct cpumask *mask, int vector)
{
        __x2apic_send_IPI_mask(mask, vector, APIC_DEST_ALLBUT);
}

static u32 x2apic_calc_apicid(unsigned int cpu)
{
        return x86_cpu_to_logical_apicid[cpu];
}

static void init_x2apic_ldr(void)
{
        struct cpumask *cmsk = this_cpu_read(cluster_masks);

        BUG_ON(!cmsk);

        cpumask_set_cpu(smp_processor_id(), cmsk);
}

/*
 * As an optimisation during boot, set the cluster_mask for all present
 * CPUs at once, to prevent each of them having to iterate over the others
 * to find the existing cluster_mask.
 */
static void prefill_clustermask(struct cpumask *cmsk, unsigned int cpu, u32 cluster)
{
        int cpu_i;

        for_each_present_cpu(cpu_i) {
                struct cpumask **cpu_cmsk = &per_cpu(cluster_masks, cpu_i);
                u32 apicid = apic->cpu_present_to_apicid(cpu_i);

                if (apicid == BAD_APICID || cpu_i == cpu || apic_cluster(apicid) != cluster)
                        continue;

                if (WARN_ON_ONCE(*cpu_cmsk == cmsk))
                        continue;

                BUG_ON(*cpu_cmsk);
                *cpu_cmsk = cmsk;
        }
}

static int alloc_clustermask(unsigned int cpu, u32 cluster, int node)
{
        struct cpumask *cmsk = NULL;
        unsigned int cpu_i;

        /*
         * At boot time, the CPU present mask is stable. The cluster mask is
         * allocated for the first CPU in the cluster and propagated to all
         * present siblings in the cluster. If the cluster mask is already set
         * on entry to this function for a given CPU, there is nothing to do.
         */
        if (per_cpu(cluster_masks, cpu))
                return 0;

        if (system_state < SYSTEM_RUNNING)
                goto alloc;

        /*
         * On post boot hotplug for a CPU which was not present at boot time,
         * iterate over all possible CPUs (even those which are not present
         * any more) to find any existing cluster mask.
         */
        for_each_possible_cpu(cpu_i) {
                u32 apicid = apic->cpu_present_to_apicid(cpu_i);

                if (apicid != BAD_APICID && apic_cluster(apicid) == cluster) {
                        cmsk = per_cpu(cluster_masks, cpu_i);
                        /*
                         * If the cluster is already initialized, just store
                         * the mask and return. There's no need to propagate.
                         */
                        if (cmsk) {
                                per_cpu(cluster_masks, cpu) = cmsk;
                                return 0;
                        }
                }
        }
        /*
         * No CPU in the cluster has ever been initialized, so fall through to
         * the boot time code which will also populate the cluster mask for any
         * other CPU in the cluster which is (now) present.
         */
alloc:
        cmsk = kzalloc_node(sizeof(*cmsk), GFP_KERNEL, node);
        if (!cmsk)
                return -ENOMEM;
        per_cpu(cluster_masks, cpu) = cmsk;
        prefill_clustermask(cmsk, cpu, cluster);

        return 0;
}

static int x2apic_prepare_cpu(unsigned int cpu)
{
        u32 phys_apicid = apic->cpu_present_to_apicid(cpu);
        u32 cluster = apic_cluster(phys_apicid);
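        /*
         * Logical x2APIC ID: bits 31:16 carry the cluster number, bits
         * 15:0 a one-bit-per-CPU mask within the cluster. E.g. physical
         * APIC ID 0x23 -> cluster 0x2, logical ID
         * (0x2 << 16) | (1 << 3) = 0x20008.
         */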
        u32 logical_apicid = (cluster << 16) | (1 << (phys_apicid & 0xf));
        int node = cpu_to_node(cpu);

        x86_cpu_to_logical_apicid[cpu] = logical_apicid;

        if (alloc_clustermask(cpu, cluster, node) < 0)
                return -ENOMEM;

        if (!zalloc_cpumask_var_node(&per_cpu(ipi_mask, cpu), GFP_KERNEL, node))
                return -ENOMEM;

        return 0;
}

static int x2apic_dead_cpu(unsigned int dead_cpu)
{
        struct cpumask *cmsk = per_cpu(cluster_masks, dead_cpu);

        if (cmsk)
                cpumask_clear_cpu(dead_cpu, cmsk);
        free_cpumask_var(per_cpu(ipi_mask, dead_cpu));
        return 0;
}

static int x2apic_cluster_probe(void)
{
        u32 slots;

        if (!x2apic_mode)
                return 0;

        slots = max_t(u32, L1_CACHE_BYTES/sizeof(u32), nr_cpu_ids);
        x86_cpu_to_logical_apicid = kcalloc(slots, sizeof(u32), GFP_KERNEL);
        if (!x86_cpu_to_logical_apicid)
                return 0;

        if (cpuhp_setup_state(CPUHP_X2APIC_PREPARE, "x86/x2apic:prepare",
                              x2apic_prepare_cpu, x2apic_dead_cpu) < 0) {
                pr_err("Failed to register X2APIC_PREPARE\n");
                kfree(x86_cpu_to_logical_apicid);
                x86_cpu_to_logical_apicid = NULL;
                return 0;
        }
        init_x2apic_ldr();
        return 1;
}

static struct apic apic_x2apic_cluster __ro_after_init = {

        .name                           = "cluster x2apic",
        .probe                          = x2apic_cluster_probe,
        .acpi_madt_oem_check            = x2apic_acpi_madt_oem_check,

        .dest_mode_logical              = true,

        .disable_esr                    = 0,

        .init_apic_ldr                  = init_x2apic_ldr,
        .cpu_present_to_apicid          = default_cpu_present_to_apicid,

        .max_apic_id                    = UINT_MAX,
        .x2apic_set_max_apicid          = true,
        .get_apic_id                    = x2apic_get_apic_id,

        .calc_dest_apicid               = x2apic_calc_apicid,

        .send_IPI                       = x2apic_send_IPI,
        .send_IPI_mask                  = x2apic_send_IPI_mask,
        .send_IPI_mask_allbutself       = x2apic_send_IPI_mask_allbutself,
        .send_IPI_allbutself            = x2apic_send_IPI_allbutself,
        .send_IPI_all                   = x2apic_send_IPI_all,
        .send_IPI_self                  = x2apic_send_IPI_self,
        .nmi_to_offline_cpu             = true,

        .read                           = native_apic_msr_read,
        .write                          = native_apic_msr_write,
        .eoi                            = native_apic_msr_eoi,
        .icr_read                       = native_x2apic_icr_read,
        .icr_write                      = native_x2apic_icr_write,
};
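
/* Register the driver with the generic x86 APIC probe code (.apicdrivers section). */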

apic_driver(apic_x2apic_cluster);