linux/arch/powerpc/include/asm/paravirt.h

/* SPDX-License-Identifier: GPL-2.0-or-later */
#ifndef _ASM_POWERPC_PARAVIRT_H
#define _ASM_POWERPC_PARAVIRT_H

#include <linux/jump_label.h>

#include <asm/smp.h>
#ifdef CONFIG_PPC64
#include <asm/paca.h>
#include <asm/lppaca.h>
#include <asm/hvcall.h>
#endif

#ifdef CONFIG_PPC_SPLPAR
#include <linux/smp.h>
#include <asm/kvm_guest.h>
#include <asm/cputhreads.h>

DECLARE_STATIC_KEY_FALSE(shared_processor);

static inline bool is_shared_processor(void)
{
	return static_branch_unlikely(&shared_processor);
}
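
/*
 * A note on the mechanism: shared_processor is expected to be enabled
 * early in pseries boot when the platform reports shared-processor LPAR
 * mode, so this check compiles down to a patched branch rather than a
 * memory load on the hot paths below.
 */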

#ifdef CONFIG_PARAVIRT_TIME_ACCOUNTING
extern struct static_key paravirt_steal_enabled;
extern struct static_key paravirt_steal_rq_enabled;

u64 pseries_paravirt_steal_clock(int cpu);

static inline u64 paravirt_steal_clock(int cpu)
{
	return pseries_paravirt_steal_clock(cpu);
}
#endif
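
/*
 * A sketch (simplified; the real call site lives in generic scheduler
 * code, not in this header) of how these hooks are consumed: once the
 * pseries code enables the static keys above, stolen time is sampled
 * roughly like:
 *
 *	if (static_key_false(&paravirt_steal_enabled)) {
 *		u64 steal = paravirt_steal_clock(smp_processor_id());
 *		// account 'steal' nanoseconds as stolen time
 *	}
 */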

/* If bit 0 is set, the cpu has been ceded, conferred, or preempted */
static inline u32 yield_count_of(int cpu)
{
	__be32 yield_count = READ_ONCE(lppaca_of(cpu).yield_count);

	return be32_to_cpu(yield_count);
}

/*
 * Spinlock code confers and prods, so don't trace the hcalls because the
 * tracing code takes spinlocks which can cause recursion deadlocks.
 *
 * These calls are made while the lock is not held: the lock slowpath yields if
 * it cannot acquire the lock, and the unlock slowpath might prod if a waiter
 * has yielded. So this may not be a problem for simple spin locks because the
 * tracing does not technically recurse on the lock, but we avoid it anyway.
 *
 * However the queued spin lock contended path is more strictly ordered: the
 * H_CONFER hcall is made after the task has queued itself on the lock, so then
 * recursing on that lock will cause the task to then queue up again behind the
 * first instance (or worse: queued spinlocks use tricks that assume a context
 * never waits on more than one spinlock, so such recursion may cause random
 * corruption in the lock code).
 */
static inline void yield_to_preempted(int cpu, u32 yield_count)
{
	plpar_hcall_norets_notrace(H_CONFER, get_hard_smp_processor_id(cpu), yield_count);
}

static inline void prod_cpu(int cpu)
{
	plpar_hcall_norets_notrace(H_PROD, get_hard_smp_processor_id(cpu));
}

static inline void yield_to_any(void)
{
	plpar_hcall_norets_notrace(H_CONFER, -1, 0);
}
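
/*
 * A minimal sketch of how a lock slowpath pairs these helpers (names such
 * as still_locked() are hypothetical): a waiter samples the owner's yield
 * count, rechecks the lock state, and confers its time slice to the
 * preempted owner; the unlock path prods a waiter that has yielded:
 *
 *	u32 yc = yield_count_of(owner_cpu);
 *	if (yc & 1) {			// owner is yielded/preempted
 *		smp_rmb();		// order the recheck after the sample
 *		if (still_locked(lock))
 *			yield_to_preempted(owner_cpu, yc);
 *	}
 *	...
 *	prod_cpu(waiter_cpu);		// unlock side: wake a yielded waiter
 */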

static inline bool is_vcpu_idle(int vcpu)
{
	return lppaca_of(vcpu).idle;
}

static inline bool vcpu_is_dispatched(int vcpu)
{
	/*
	 * This is the yield_count. An "odd" value (low bit on) means that
	 * the processor is yielded (either because of an OS yield or a
	 * hypervisor preempt). An even value implies that the processor is
	 * currently executing.
	 */
	return (!(yield_count_of(vcpu) & 1));
}
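
/*
 * For example, a yield count of 4 (even) reads as "currently dispatched",
 * while 5 (odd) reads as "yielded or preempted"; the count advances at
 * each such transition, so the low bit always reflects the current state.
 */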
#else
static inline bool is_shared_processor(void)
{
	return false;
}

static inline u32 yield_count_of(int cpu)
{
	return 0;
}

extern void ___bad_yield_to_preempted(void);
static inline void yield_to_preempted(int cpu, u32 yield_count)
{
	___bad_yield_to_preempted(); /* This would be a bug */
}

extern void ___bad_yield_to_any(void);
static inline void yield_to_any(void)
{
	___bad_yield_to_any(); /* This would be a bug */
}

extern void ___bad_prod_cpu(void);
static inline void prod_cpu(int cpu)
{
	___bad_prod_cpu(); /* This would be a bug */
}

static inline bool is_vcpu_idle(int vcpu)
{
	return false;
}

static inline bool vcpu_is_dispatched(int vcpu)
{
	return true;
}
#endif

#define vcpu_is_preempted vcpu_is_preempted
static inline bool vcpu_is_preempted(int cpu)
{
	/*
	 * The dispatch/yield bit alone is an imperfect indicator of
	 * whether the hypervisor has dispatched @cpu to run on a physical
	 * processor. When it is clear, @cpu is definitely not preempted.
	 * But when it is set, it means only that it *might* be, subject to
	 * other conditions. So we check other properties of the VM and
	 * @cpu first, resorting to the yield count last.
	 */

	/*
	 * Hypervisor preemption isn't possible in dedicated processor
	 * mode by definition.
	 */
	if (!is_shared_processor())
		return false;

	/*
	 * If the hypervisor has dispatched the target CPU on a physical
	 * processor, then the target CPU is definitely not preempted.
	 */
	if (vcpu_is_dispatched(cpu))
		return false;

	/*
	 * If the target CPU is not dispatched and the guest OS has not
	 * marked the CPU idle, then it is hypervisor preempted.
	 */
	if (!is_vcpu_idle(cpu))
		return true;

#ifdef CONFIG_PPC_SPLPAR
	if (!is_kvm_guest()) {
		int first_cpu, i;

		/*
		 * The result of vcpu_is_preempted() is used in a
		 * speculative way, and is always subject to invalidation
		 * by events internal and external to Linux. While we can
		 * be called in preemptable context (in the Linux sense),
		 * we're not accessing per-cpu resources in a way that can
		 * race destructively with Linux scheduler preemption and
		 * migration, and callers can tolerate the potential for
		 * error introduced by sampling the CPU index without
		 * pinning the task to it. So it is permissible to use
		 * raw_smp_processor_id() here to defeat the preempt debug
		 * warnings that can arise from using smp_processor_id()
		 * in arbitrary contexts.
		 */
		first_cpu = cpu_first_thread_sibling(raw_smp_processor_id());

		/*
		 * The PowerVM hypervisor dispatches VMs on a whole core
		 * basis. So we know that a thread sibling of the executing CPU
		 * cannot have been preempted by the hypervisor, even if it
		 * has called H_CONFER, which will set the yield bit.
		 */
		if (cpu_first_thread_sibling(cpu) == first_cpu)
			return false;

		/*
		 * The target CPU was marked idle by the guest OS. Because
		 * PowerVM schedules whole cores, also check the other CPUs
		 * in the core: if any sibling vCPU is dispatched the core is
		 * running, and if a sibling is neither dispatched nor idle
		 * it has been preempted by the hypervisor, which implies the
		 * target CPU is preempted too. For example, with SMT8 and
		 * cpu == 13, this scans CPUs 8..15.
		 */
		first_cpu = cpu_first_thread_sibling(cpu);
		for (i = first_cpu; i < first_cpu + threads_per_core; i++) {
			if (i == cpu)
				continue;
			if (vcpu_is_dispatched(i))
				return false;
			if (!is_vcpu_idle(i))
				return true;
		}
	}
#endif

	/*
	 * None of the threads in the target CPU's core are running, but
	 * none of them were preempted either. Hence assume the target CPU
	 * to be non-preempted.
	 */
	return false;
}
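
/*
 * Usage sketch (illustrative, not part of this header): spin-on-owner
 * loops in generic locking code use vcpu_is_preempted() to give up
 * optimistic spinning once the lock owner's vCPU is not running, along
 * the lines of:
 *
 *	while (owner_running(lock, owner)) {	// hypothetical recheck helper
 *		if (need_resched() || vcpu_is_preempted(task_cpu(owner)))
 *			break;
 *		cpu_relax();
 *	}
 */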

static inline bool pv_is_native_spin_unlock(void)
{
	return !is_shared_processor();
}

#endif /* _ASM_POWERPC_PARAVIRT_H */