License cleanup: add SPDX GPL-2.0 license identifier to files with no license
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand that can be used instead of the full boilerplate text.
This patch is based on work done by Thomas Gleixner and Kate Stewart and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it,
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information.
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and references to license
had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX License Identifier should be
applied to a file was done in a spreadsheet of side-by-side results
from the output of two independent scanners (ScanCode & Windriver)
producing SPDX tag:value files, created by Philippe Ombredanne.
Philippe prepared the base worksheet and did an initial spot review of
a few thousand files.
The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed. Kate Stewart did a file-by-file comparison of the scanner
results in the spreadsheet to determine which SPDX license
identifier(s) should be applied to the file. She confirmed any
determination that was not
immediately clear with lawyers working with the Linux Foundation.
The criteria used to select files for SPDX license identifier tagging
were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
lines of source.
- Files that already had some variant of a license header in them were
included (even if <5 lines).
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when neither scanner could find any license traces, the file was
considered to have no license information in it, and the top-level
COPYING file license was applied.
For non */uapi/* files that summary was:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 11139
and resulted in the first patch in this series.
If the file was a */uapi/* path one, it was tagged "GPL-2.0 WITH
Linux-syscall-note"; otherwise it was "GPL-2.0". The result of that
was:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 WITH Linux-syscall-note 930
and resulted in the second patch in this series.
- if a file had some form of licensing information in it and was one
of the */uapi/* ones, it was denoted with the Linux-syscall-note if
any GPL-family license was found in the file, or if it had no
licensing in it (per the prior point). Results summary:
SPDX license identifier # files
---------------------------------------------------|------
GPL-2.0 WITH Linux-syscall-note 270
GPL-2.0+ WITH Linux-syscall-note 169
((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) 21
((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) 17
LGPL-2.1+ WITH Linux-syscall-note 15
GPL-1.0+ WITH Linux-syscall-note 14
((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause) 5
LGPL-2.0+ WITH Linux-syscall-note 4
LGPL-2.1 WITH Linux-syscall-note 3
((GPL-2.0 WITH Linux-syscall-note) OR MIT) 3
((GPL-2.0 WITH Linux-syscall-note) AND MIT) 1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
the file was flagged for further research and to be revisited later
in time.
In total, over 70 hours of logged manual review was done on the
spreadsheet to determine the SPDX license identifiers to apply to the
source files by Kate, Philippe, Thomas and, in some cases, confirmation
by lawyers working with the Linux Foundation.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were new insights. The
Windriver scanner is based on an older version of FOSSology in part, so
they are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with the SPDX license identifier in the
files he inspected. For the non-uapi files Thomas did random spot checks
in about 15000 files.
In the initial set of patches against 4.14-rc6, 3 files were found to
have copy/paste license identifier errors; they have been fixed to
reflect the
correct identifier.
Additionally, Philippe spent 10 hours this week doing a detailed
manual inspection and review of the 12,461 files patched in the
initial patch version from earlier this week, with:
- a full scancode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where there was a license detected (about 500+
files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the csv files and add the proper SPDX tag to the file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types). Finally, Greg ran the script using the .csv files to
generate the patches.
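For reference, the two comment forms the script emits look like this
(an illustrative sketch, assuming the kernel's usual convention of
placing the tag on the first line of the file):

	/* SPDX-License-Identifier: GPL-2.0 */    <- header files
	// SPDX-License-Identifier: GPL-2.0       <- source (.c) files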
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef ASM_X86_CMPXCHG_H
#define ASM_X86_CMPXCHG_H

#include <linux/compiler.h>
#include <asm/cpufeatures.h>
#include <asm/alternative.h> /* Provides LOCK_PREFIX */

/*
 * Non-existent functions to indicate usage errors at link time
 * (or compile-time if the compiler implements __compiletime_error()).
 */
extern void __xchg_wrong_size(void)
	__compiletime_error("Bad argument size for xchg");
extern void __cmpxchg_wrong_size(void)
	__compiletime_error("Bad argument size for cmpxchg");
extern void __xadd_wrong_size(void)
	__compiletime_error("Bad argument size for xadd");
extern void __add_wrong_size(void)
	__compiletime_error("Bad argument size for add");

/*
 * Constants for operation sizes. On 32-bit, the 64-bit size is set to
 * -1 because sizeof will never return -1, thereby making those switch
 * case statements guaranteed dead code which the compiler will
 * eliminate, and allowing the "missing symbol in the default case" to
 * indicate a usage error.
 */
#define __X86_CASE_B	1
#define __X86_CASE_W	2
#define __X86_CASE_L	4
#ifdef CONFIG_64BIT
#define __X86_CASE_Q	8
#else
#define __X86_CASE_Q	-1	/* sizeof will never return -1 */
#endif
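/*
 * Added explanatory note (not in the original header): on 32-bit
 * kernels sizeof() can never evaluate to __X86_CASE_Q (-1), so the
 * compiler proves the 64-bit switch arm dead and eliminates it; a bad
 * argument size instead reaches the default arm, whose call to a
 * non-existent __*_wrong_size() helper fails the build.
 */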
/*
 * An exchange-type operation, which takes a value and a pointer, and
 * returns the old value.
 */
#define __xchg_op(ptr, arg, op, lock) \
	({ \
		__typeof__ (*(ptr)) __ret = (arg); \
		switch (sizeof(*(ptr))) { \
		case __X86_CASE_B: \
x86/locking/atomic: Improve performance by using asm_inline() for atomic locking instructions
According to:
https://gcc.gnu.org/onlinedocs/gcc/Size-of-an-asm.html
the use of asm pseudo directives in an asm template can confuse the
compiler into wrongly estimating the size of the generated code.
The LOCK_PREFIX macro expands to several asm pseudo directives, so
its usage in atomic locking insns causes instruction length estimates
to fail significantly (the specially instrumented compiler reports
the estimated length of these asm templates to be 6 instructions long).
This incorrect estimate further causes suboptimal inlining decisions,
suboptimal instruction scheduling and suboptimal code block alignments
for functions that use these locking primitives.
Use asm_inline instead:
https://gcc.gnu.org/pipermail/gcc-patches/2018-December/512349.html
which is a feature that makes GCC pretend some inline assembler code
is tiny (while it would otherwise think it is huge), instead of plain
asm. For code size estimation, the size of the asm is then taken as
the minimum size of one instruction, ignoring how many instructions
the compiler thinks it contains.
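As an illustrative sketch (not a literal hunk from this patch), the
change amounts to switching such locking wrappers from asm to
asm_inline, e.g.:

	static __always_inline void arch_atomic_inc(atomic_t *v)
	{
		/* LOCK_PREFIX expands to several pseudo directives */
		asm_inline volatile(LOCK_PREFIX "incl %0"
				    : "+m" (v->counter)
				    : : "memory");
	}

With plain asm, GCC costs the template by counting the lines the
LOCK_PREFIX expansion produces; with asm_inline it is costed as a
single instruction.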
bloat-o-meter reports the following code size increase
(x86_64 defconfig, gcc-14.2.1):
add/remove: 82/283 grow/shrink: 870/372 up/down: 76272/-43618 (32654)
Total: Before=22770320, After=22802974, chg +0.14%
with top grows (>500 bytes):
Function old new delta
----------------------------------------------------------------
copy_process 6465 10191 +3726
balance_dirty_pages_ratelimited_flags 237 2949 +2712
icl_plane_update_noarm 5800 7969 +2169
samsung_input_mapping 3375 5170 +1795
ext4_do_update_inode.isra - 1526 +1526
__schedule 2416 3472 +1056
__i915_vma_resource_unhold - 946 +946
sched_mm_cid_after_execve 175 1097 +922
__do_sys_membarrier - 862 +862
filemap_fault 2666 3462 +796
nl80211_send_wiphy 11185 11874 +689
samsung_input_mapping.cold 900 1500 +600
virtio_gpu_queue_fenced_ctrl_buffer 839 1410 +571
ilk_update_pipe_csc 1201 1735 +534
enable_step - 525 +525
icl_color_commit_noarm 1334 1847 +513
tg3_read_bc_ver - 501 +501
and top shrinks (>500 bytes):
Function old new delta
----------------------------------------------------------------
nl80211_send_iftype_data 580 - -580
samsung_gamepad_input_mapping.isra.cold 604 - -604
virtio_gpu_queue_ctrl_sgs 724 - -724
tg3_get_invariants 9218 8376 -842
__i915_vma_resource_unhold.part 899 - -899
ext4_mark_iloc_dirty 1735 106 -1629
samsung_gamepad_input_mapping.isra 2046 - -2046
icl_program_input_csc 2203 - -2203
copy_mm 2242 - -2242
balance_dirty_pages 2657 - -2657
These code size changes can be grouped into 4 groups:
a) some functions now include once-called functions in full or
in part. These are:
Function old new delta
----------------------------------------------------------------
copy_process 6465 10191 +3726
balance_dirty_pages_ratelimited_flags 237 2949 +2712
icl_plane_update_noarm 5800 7969 +2169
samsung_input_mapping 3375 5170 +1795
ext4_do_update_inode.isra - 1526 +1526
that now include:
Function old new delta
----------------------------------------------------------------
copy_mm 2242 - -2242
balance_dirty_pages 2657 - -2657
icl_program_input_csc 2203 - -2203
samsung_gamepad_input_mapping.isra 2046 - -2046
ext4_mark_iloc_dirty 1735 106 -1629
b) ISRA [interprocedural scalar replacement of aggregates,
interprocedural pass that removes unused function return values
(turning functions returning a value which is never used into void
functions) and removes unused function parameters. It can also
replace an aggregate parameter by a set of other parameters
representing part of the original, turning those passed by reference
into new ones which pass the value directly; see the illustrative
sketch after this list.]
Top grows and shrinks of this group are listed below:
Function old new delta
----------------------------------------------------------------
ext4_do_update_inode.isra - 1526 +1526
nfs4_begin_drain_session.isra - 249 +249
nfs4_end_drain_session.isra - 168 +168
__guc_action_register_multi_lrc_v70.isra 335 500 +165
__i915_gem_free_objects.isra - 144 +144
...
membarrier_register_private_expedited.isra 108 - -108
syncobj_eventfd_entry_func.isra 445 314 -131
__ext4_sb_bread_gfp.isra 140 - -140
class_preempt_notrace_destructor.isra 145 - -145
p9_fid_put.isra 151 - -151
__mm_cid_try_get.isra 238 - -238
membarrier_global_expedited.isra 294 - -294
mm_cid_get.isra 295 - -295
samsung_gamepad_input_mapping.isra.cold 604 - -604
samsung_gamepad_input_mapping.isra 2046 - -2046
c) different split points of hot/cold split that just move code around:
Top grows and shrinks of this group are listed below:
Function old new delta
----------------------------------------------------------------
samsung_input_mapping.cold 900 1500 +600
__i915_request_reset.cold 311 389 +78
nfs_update_inode.cold 77 153 +76
__do_sys_swapon.cold 404 455 +51
copy_process.cold - 45 +45
tg3_get_invariants.cold 73 115 +42
...
hibernate.cold 671 643 -28
copy_mm.cold 31 - -31
software_resume.cold 249 207 -42
io_poll_wake.cold 106 54 -52
samsung_gamepad_input_mapping.isra.cold 604 - -604
d) full inlining of small functions with locking insns (~150 cases).
These bring in most of the code size increase because the removed
function code is now inlined in multiple places. E.g.:
0000000000a50e10 <release_devnum>:
a50e10: 48 63 07 movslq (%rdi),%rax
a50e13: 85 c0 test %eax,%eax
a50e15: 7e 10 jle a50e27 <release_devnum+0x17>
a50e17: 48 8b 4f 50 mov 0x50(%rdi),%rcx
a50e1b: f0 48 0f b3 41 50 lock btr %rax,0x50(%rcx)
a50e21: c7 07 ff ff ff ff movl $0xffffffff,(%rdi)
a50e27: e9 00 00 00 00 jmp a50e2c <release_devnum+0x1c>
a50e28: R_X86_64_PLT32 __x86_return_thunk-0x4
a50e2c: 0f 1f 40 00 nopl 0x0(%rax)
is now fully inlined into the caller function. This is desirable due
to the per function overhead of CPU bug mitigations like retpolines.
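As a hedged, hypothetical C illustration of the ISRA transformation
in group b) (all names made up):

	/* before: aggregate passed by reference, return value unused
	   by all callers */
	static int bump(struct stats *s) { return ++s->hits; }

	/* after ISRA, conceptually (the compiler's .isra clone): the
	   dead return value is dropped and the one member actually
	   used is passed directly */
	static void bump_isra(int *hits) { ++*hits; }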
FTR a) with -Os (where generated code size really matters) the x86_64
defconfig object file decreases by 24,388 bytes, representing a 0.1%
code size decrease:
text data bss dec hex filename
23883860 4617284 814212 29315356 1bf511c vmlinux-old.o
23859472 4615404 814212 29289088 1beea80 vmlinux-new.o
FTR b) clang recognizes "asm inline", but there was no difference in
code sizes:
text data bss dec hex filename
27577163 4503078 807732 32887973 1f5d4a5 vmlinux-clang-patched.o
27577181 4503078 807732 32887991 1f5d4b7 vmlinux-clang-unpatched.o
The performance impact of the patch was assessed by recompiling the
fedora-41 6.13.5 kernel and running lmbench with the old and new
kernels.
The most noticeable improvements were:
Process fork+exit: 270.0952 microseconds
Process fork+execve: 2620.3333 microseconds
Process fork+/bin/sh -c: 6781.0000 microseconds
File /usr/tmp/XXX write bandwidth: 1780350 KB/sec
Pagefaults on /usr/tmp/XXX: 0.3875 microseconds
to:
Process fork+exit: 298.6842 microseconds
Process fork+execve: 1662.7500 microseconds
Process fork+/bin/sh -c: 2127.6667 microseconds
File /usr/tmp/XXX write bandwidth: 1950077 KB/sec
Pagefaults on /usr/tmp/XXX: 0.1958 microseconds
and from:
Socket bandwidth using localhost
0.000001 2.52 MB/sec
0.000064 163.02 MB/sec
0.000128 321.70 MB/sec
0.000256 630.06 MB/sec
0.000512 1207.07 MB/sec
0.001024 2004.06 MB/sec
0.001437 2475.43 MB/sec
10.000000 5817.34 MB/sec
Avg xfer: 3.2KB, 41.8KB in 1.2230 millisecs, 34.15 MB/sec
AF_UNIX sock stream bandwidth: 9850.01 MB/sec
Pipe bandwidth: 4631.28 MB/sec
to:
Socket bandwidth using localhost
0.000001 3.13 MB/sec
0.000064 187.08 MB/sec
0.000128 324.12 MB/sec
0.000256 618.51 MB/sec
0.000512 1137.13 MB/sec
0.001024 1962.95 MB/sec
0.001437 2458.27 MB/sec
10.000000 6168.08 MB/sec
Avg xfer: 3.2KB, 41.8KB in 1.0060 millisecs, 41.52 MB/sec
AF_UNIX sock stream bandwidth: 9921.68 MB/sec
Pipe bandwidth: 4649.96 MB/sec
[ mingo: Prettified the changelog a bit. ]
Signed-off-by: Uros Bizjak <ubizjak@gmail.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Nathan Chancellor <nathan@kernel.org>
Cc: Nick Desaulniers <ndesaulniers@google.com>
Link: https://lore.kernel.org/r/20250309170955.48919-1-ubizjak@gmail.com
			asm_inline volatile (lock #op "b %b0, %1" \
				: "+q" (__ret), "+m" (*(ptr)) \
				: : "memory", "cc"); \
			break; \
		case __X86_CASE_W: \
			asm_inline volatile (lock #op "w %w0, %1" \
				: "+r" (__ret), "+m" (*(ptr)) \
				: : "memory", "cc"); \
			break; \
		case __X86_CASE_L: \
			asm_inline volatile (lock #op "l %0, %1" \
				: "+r" (__ret), "+m" (*(ptr)) \
				: : "memory", "cc"); \
			break; \
		case __X86_CASE_Q: \
			asm_inline volatile (lock #op "q %q0, %1" \
				: "+r" (__ret), "+m" (*(ptr)) \
				: : "memory", "cc"); \
			break; \
		default: \
			__ ## op ## _wrong_size(); \
		} \
		__ret; \
	})

/*
 * Note: no "lock" prefix even on SMP: xchg always implies lock anyway.
 * Since this is generally used to protect other memory information, we
 * use "asm volatile" and "memory" clobbers to prevent gcc from moving
 * information around.
 */
#define arch_xchg(ptr, v)	__xchg_op((ptr), (v), xchg, "")
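/*
 * Illustrative usage (an added note, not part of the original header):
 * atomically store a new value and fetch the previous one, e.g.
 *
 *	old = arch_xchg(&p->state, NEW_STATE);
 *
 * where p->state and NEW_STATE are hypothetical.
 */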
/*
 * Atomic compare and exchange. Compare OLD with MEM, if identical,
 * store NEW in MEM. Return the initial value in MEM. Success is
 * indicated by comparing RETURN with OLD.
 */
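/*
 * Illustrative caller pattern (an added sketch, not part of the
 * original header): a classic compare-and-swap retry loop built on
 * this primitive, with 'v' and the increment being hypothetical:
 *
 *	old = v;
 *	for (;;) {
 *		prev = cmpxchg(&v, old, old + 1);
 *		if (prev == old)
 *			break;
 *		old = prev;
 *	}
 */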
#define __raw_cmpxchg(ptr, old, new, size, lock) \
({ \
	__typeof__(*(ptr)) __ret; \
	__typeof__(*(ptr)) __old = (old); \
	__typeof__(*(ptr)) __new = (new); \
	switch (size) { \
	case __X86_CASE_B: \
	{ \
		volatile u8 *__ptr = (volatile u8 *)(ptr); \
		asm_inline volatile(lock "cmpxchgb %2, %1" \
			: "=a" (__ret), "+m" (*__ptr) \
			: "q" (__new), "0" (__old) \
			: "memory"); \
		break; \
	} \
	case __X86_CASE_W: \
	{ \
		volatile u16 *__ptr = (volatile u16 *)(ptr); \
x86/locking/atomic: Improve performance by using asm_inline() for atomic locking instructions
According to:
https://gcc.gnu.org/onlinedocs/gcc/Size-of-an-asm.html
the usage of asm pseudo directives in the asm template can confuse
the compiler to wrongly estimate the size of the generated
code.
The LOCK_PREFIX macro expands to several asm pseudo directives, so
its usage in atomic locking insns causes instruction length estimates
to fail significantly (the specially instrumented compiler reports
the estimated length of these asm templates to be 6 instructions long).
This incorrect estimate further causes unoptimal inlining decisions,
un-optimal instruction scheduling and un-optimal code block alignments
for functions that use these locking primitives.
Use asm_inline instead:
https://gcc.gnu.org/pipermail/gcc-patches/2018-December/512349.html
which is a feature that makes GCC pretend some inline assembler code
is tiny (while it would think it is huge), instead of just asm.
For code size estimation, the size of the asm is then taken as
the minimum size of one instruction, ignoring how many instructions
compiler thinks it is.
bloat-o-meter reports the following code size increase
(x86_64 defconfig, gcc-14.2.1):
add/remove: 82/283 grow/shrink: 870/372 up/down: 76272/-43618 (32654)
Total: Before=22770320, After=22802974, chg +0.14%
with top grows (>500 bytes):
Function old new delta
----------------------------------------------------------------
copy_process 6465 10191 +3726
balance_dirty_pages_ratelimited_flags 237 2949 +2712
icl_plane_update_noarm 5800 7969 +2169
samsung_input_mapping 3375 5170 +1795
ext4_do_update_inode.isra - 1526 +1526
__schedule 2416 3472 +1056
__i915_vma_resource_unhold - 946 +946
sched_mm_cid_after_execve 175 1097 +922
__do_sys_membarrier - 862 +862
filemap_fault 2666 3462 +796
nl80211_send_wiphy 11185 11874 +689
samsung_input_mapping.cold 900 1500 +600
virtio_gpu_queue_fenced_ctrl_buffer 839 1410 +571
ilk_update_pipe_csc 1201 1735 +534
enable_step - 525 +525
icl_color_commit_noarm 1334 1847 +513
tg3_read_bc_ver - 501 +501
and top shrinks (>500 bytes):
Function old new delta
----------------------------------------------------------------
nl80211_send_iftype_data 580 - -580
samsung_gamepad_input_mapping.isra.cold 604 - -604
virtio_gpu_queue_ctrl_sgs 724 - -724
tg3_get_invariants 9218 8376 -842
__i915_vma_resource_unhold.part 899 - -899
ext4_mark_iloc_dirty 1735 106 -1629
samsung_gamepad_input_mapping.isra 2046 - -2046
icl_program_input_csc 2203 - -2203
copy_mm 2242 - -2242
balance_dirty_pages 2657 - -2657
These code size changes can be grouped into 4 groups:
a) some functions now include once-called functions in full or
in part. These are:
Function old new delta
----------------------------------------------------------------
copy_process 6465 10191 +3726
balance_dirty_pages_ratelimited_flags 237 2949 +2712
icl_plane_update_noarm 5800 7969 +2169
samsung_input_mapping 3375 5170 +1795
ext4_do_update_inode.isra - 1526 +1526
that now include:
Function old new delta
----------------------------------------------------------------
copy_mm 2242 - -2242
balance_dirty_pages 2657 - -2657
icl_program_input_csc 2203 - -2203
samsung_gamepad_input_mapping.isra 2046 - -2046
ext4_mark_iloc_dirty 1735 106 -1629
b) ISRA [interprocedural scalar replacement of aggregates,
interprocedural pass that removes unused function return values
(turning functions returning a value which is never used into void
functions) and removes unused function parameters. It can also
replace an aggregate parameter by a set of other parameters
representing part of the original, turning those passed by reference
into new ones which pass the value directly.]
Top grows and shrinks of this group are listed below:
Function old new delta
----------------------------------------------------------------
ext4_do_update_inode.isra - 1526 +1526
nfs4_begin_drain_session.isra - 249 +249
nfs4_end_drain_session.isra - 168 +168
__guc_action_register_multi_lrc_v70.isra 335 500 +165
__i915_gem_free_objects.isra - 144 +144
...
membarrier_register_private_expedited.isra 108 - -108
syncobj_eventfd_entry_func.isra 445 314 -131
__ext4_sb_bread_gfp.isra 140 - -140
class_preempt_notrace_destructor.isra 145 - -145
p9_fid_put.isra 151 - -151
__mm_cid_try_get.isra 238 - -238
membarrier_global_expedited.isra 294 - -294
mm_cid_get.isra 295 - -295
samsung_gamepad_input_mapping.isra.cold 604 - -604
samsung_gamepad_input_mapping.isra 2046 - -2046
c) different split points of hot/cold split that just move code around:
Top grows and shrinks of this group are listed below:
Function old new delta
----------------------------------------------------------------
samsung_input_mapping.cold 900 1500 +600
__i915_request_reset.cold 311 389 +78
nfs_update_inode.cold 77 153 +76
__do_sys_swapon.cold 404 455 +51
copy_process.cold - 45 +45
tg3_get_invariants.cold 73 115 +42
...
hibernate.cold 671 643 -28
copy_mm.cold 31 - -31
software_resume.cold 249 207 -42
io_poll_wake.cold 106 54 -52
samsung_gamepad_input_mapping.isra.cold 604 - -604
d) full inlining of small functions with a locking insn (~150 cases).
These account for most of the code size increase, because the removed
function's code is now inlined in multiple places. E.g.:
0000000000a50e10 <release_devnum>:
a50e10: 48 63 07 movslq (%rdi),%rax
a50e13: 85 c0 test %eax,%eax
a50e15: 7e 10 jle a50e27 <release_devnum+0x17>
a50e17: 48 8b 4f 50 mov 0x50(%rdi),%rcx
a50e1b: f0 48 0f b3 41 50 lock btr %rax,0x50(%rcx)
a50e21: c7 07 ff ff ff ff movl $0xffffffff,(%rdi)
a50e27: e9 00 00 00 00 jmp a50e2c <release_devnum+0x1c>
a50e28: R_X86_64_PLT32 __x86_return_thunk-0x4
a50e2c: 0f 1f 40 00 nopl 0x0(%rax)
is now fully inlined into its caller. This is desirable due to the
per-function overhead of CPU bug mitigations like retpolines.
FTR a) with -Os (where generated code size really matters) the x86_64
defconfig object file decreases by 24.388 kbytes, a 0.1% code size
decrease:
text data bss dec hex filename
23883860 4617284 814212 29315356 1bf511c vmlinux-old.o
23859472 4615404 814212 29289088 1beea80 vmlinux-new.o
FTR b) clang recognizes "asm inline", but there was no difference in
code sizes:
text data bss dec hex filename
27577163 4503078 807732 32887973 1f5d4a5 vmlinux-clang-patched.o
27577181 4503078 807732 32887991 1f5d4b7 vmlinux-clang-unpatched.o
The performance impact of the patch was assessed by recompiling the
fedora-41 6.13.5 kernel and running lmbench with the old and new
kernels. The most noticeable changes were, from:
Process fork+exit: 270.0952 microseconds
Process fork+execve: 2620.3333 microseconds
Process fork+/bin/sh -c: 6781.0000 microseconds
File /usr/tmp/XXX write bandwidth: 1780350 KB/sec
Pagefaults on /usr/tmp/XXX: 0.3875 microseconds
to:
Process fork+exit: 298.6842 microseconds
Process fork+execve: 1662.7500 microseconds
Process fork+/bin/sh -c: 2127.6667 microseconds
File /usr/tmp/XXX write bandwidth: 1950077 KB/sec
Pagefaults on /usr/tmp/XXX: 0.1958 microseconds
and from:
Socket bandwidth using localhost
0.000001 2.52 MB/sec
0.000064 163.02 MB/sec
0.000128 321.70 MB/sec
0.000256 630.06 MB/sec
0.000512 1207.07 MB/sec
0.001024 2004.06 MB/sec
0.001437 2475.43 MB/sec
10.000000 5817.34 MB/sec
Avg xfer: 3.2KB, 41.8KB in 1.2230 millisecs, 34.15 MB/sec
AF_UNIX sock stream bandwidth: 9850.01 MB/sec
Pipe bandwidth: 4631.28 MB/sec
to:
Socket bandwidth using localhost
0.000001 3.13 MB/sec
0.000064 187.08 MB/sec
0.000128 324.12 MB/sec
0.000256 618.51 MB/sec
0.000512 1137.13 MB/sec
0.001024 1962.95 MB/sec
0.001437 2458.27 MB/sec
10.000000 6168.08 MB/sec
Avg xfer: 3.2KB, 41.8KB in 1.0060 millisecs, 41.52 MB/sec
AF_UNIX sock stream bandwidth: 9921.68 MB/sec
Pipe bandwidth: 4649.96 MB/sec
[ mingo: Prettified the changelog a bit. ]
Signed-off-by: Uros Bizjak <ubizjak@gmail.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Nathan Chancellor <nathan@kernel.org>
Cc: Nick Desaulniers <ndesaulniers@google.com>
Link: https://lore.kernel.org/r/20250309170955.48919-1-ubizjak@gmail.com
2025-03-09 18:09:36 +01:00
		asm_inline volatile(lock "cmpxchgw %2, %1" \
			     : "=a" (__ret), "+m" (*__ptr) \
			     : "r" (__new), "0" (__old) \
			     : "memory"); \
		break; \
	} \
	case __X86_CASE_L: \
	{ \
		volatile u32 *__ptr = (volatile u32 *)(ptr); \
asm_inline volatile(lock "cmpxchgl %2, %1" \
|
2011-08-18 11:48:06 -07:00
|
|
|
: "=a" (__ret), "+m" (*__ptr) \
|
|
|
|
: "r" (__new), "0" (__old) \
|
|
|
|
: "memory"); \
|
|
|
|
break; \
|
|
|
|
} \
|
|
|
|
case __X86_CASE_Q: \
|
|
|
|
{ \
|
|
|
|
volatile u64 *__ptr = (volatile u64 *)(ptr); \
|
asm_inline volatile(lock "cmpxchgq %2, %1" \
|
2011-08-18 11:48:06 -07:00
|
|
|
: "=a" (__ret), "+m" (*__ptr) \
|
|
|
|
: "r" (__new), "0" (__old) \
|
|
|
|
: "memory"); \
|
|
|
|
break; \
|
|
|
|
} \
|
|
|
|
default: \
|
|
|
|
__cmpxchg_wrong_size(); \
|
|
|
|
} \
|
|
|
|
__ret; \
|
|
|
|
})
|
|
|
|
|
|
|
|
#define __cmpxchg(ptr, old, new, size) \
|
|
|
|
__raw_cmpxchg((ptr), (old), (new), (size), LOCK_PREFIX)
|
|
|
|
|
|
|
|
#define __sync_cmpxchg(ptr, old, new, size) \
|
2025-02-28 09:51:15 +01:00
|
|
|
__raw_cmpxchg((ptr), (old), (new), (size), "lock ")
|
2011-08-18 11:48:06 -07:00
|
|
|
|
|
|
|
#define __cmpxchg_local(ptr, old, new, size) \
|
|
|
|
__raw_cmpxchg((ptr), (old), (new), (size), "")
|
|
|
|
|
2007-10-11 11:20:03 +02:00
|
|
|
#ifdef CONFIG_X86_32
|
2012-10-02 18:01:25 +01:00
|
|
|
# include <asm/cmpxchg_32.h>
|
2007-10-11 11:20:03 +02:00
|
|
|
#else
|
2012-10-02 18:01:25 +01:00
|
|
|
# include <asm/cmpxchg_64.h>
|
2007-10-11 11:20:03 +02:00
|
|
|
#endif
|
2011-08-18 11:48:06 -07:00
|
|
|
|
2018-01-29 18:26:05 +01:00
|
|
|
#define arch_cmpxchg(ptr, old, new) \
|
2012-01-26 15:47:37 +00:00
|
|
|
__cmpxchg(ptr, old, new, sizeof(*(ptr)))
|
2011-08-18 11:48:06 -07:00
|
|
|
|
2018-01-29 18:26:05 +01:00
|
|
|
#define arch_sync_cmpxchg(ptr, old, new) \
|
2012-01-26 15:47:37 +00:00
|
|
|
__sync_cmpxchg(ptr, old, new, sizeof(*(ptr)))
|
2011-08-18 11:48:06 -07:00
|
|
|
|
2018-01-29 18:26:05 +01:00
|
|
|
#define arch_cmpxchg_local(ptr, old, new) \
|
2012-01-26 15:47:37 +00:00
|
|
|
__cmpxchg_local(ptr, old, new, sizeof(*(ptr)))
|
2011-08-18 11:48:06 -07:00
|
|
|
|
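For reference, a hedged sketch of the cmpxchg() semantics these
macros implement, using the GCC/Clang builtin in place of the lock
cmpxchg instruction (illustrative only, not kernel code):

	/* returns the value observed in *ptr; the store of 'new'
	 * happened iff that value equals 'old' */
	static int cmpxchg_like(int *ptr, int old, int new)
	{
		int expected = old;

		__atomic_compare_exchange_n(ptr, &expected, new, 0,
					    __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
		return expected;
	}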
locking/atomic: Introduce atomic_try_cmpxchg()
Add a new cmpxchg interface:
bool try_cmpxchg(u{8,16,32,64} *ptr, u{8,16,32,64} *val, u{8,16,32,64} new);
where the returned boolean reports the result of the compare, and
thus whether the exchange happened; in case of failure, the current
value of *ptr is returned in *val.
This allows simplification/improvement of loops like:
	for (;;) {
		new = val $op $imm;
		old = cmpxchg(ptr, val, new);
		if (old == val)
			break;
		val = old;
	}
into:
	do {
	} while (!try_cmpxchg(ptr, &val, val $op $imm));
while also generating better code (GCC 6 and onwards).
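A compilable sketch of the same shape using C11 atomics (hedged:
atomic_compare_exchange_weak() plays the role of try_cmpxchg() here,
updating 'val' with the current value of *ptr on failure):

	#include <stdatomic.h>

	static void atomic_add_imm(atomic_int *ptr, int imm)
	{
		int val = atomic_load(ptr);

		do {
		} while (!atomic_compare_exchange_weak(ptr, &val, val + imm));
	}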
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2017-02-01 16:39:38 +01:00
#define __raw_try_cmpxchg(_ptr, _pold, _new, size, lock) \
({ \
	bool success; \
	__typeof__(_ptr) _old = (__typeof__(_ptr))(_pold); \
	__typeof__(*(_ptr)) __old = *_old; \
	__typeof__(*(_ptr)) __new = (_new); \
	switch (size) { \
	case __X86_CASE_B: \
	{ \
		volatile u8 *__ptr = (volatile u8 *)(_ptr); \
asm_inline volatile(lock "cmpxchgb %[new], %[ptr]" \
|
			     CC_SET(z) \
			     : CC_OUT(z) (success), \
			       [ptr] "+m" (*__ptr), \
			       [old] "+a" (__old) \
			     : [new] "q" (__new) \
			     : "memory"); \
		break; \
	} \
	case __X86_CASE_W: \
	{ \
		volatile u16 *__ptr = (volatile u16 *)(_ptr); \
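(Aside, hedged from memory of arch/x86/include/asm/asm.h -- verify
against the tree: CC_SET()/CC_OUT() wrap the GCC flag-output operand
idiom, falling back to an explicit setcc when the compiler lacks it:)

	#ifdef __GCC_ASM_FLAG_OUTPUTS__
	# define CC_SET(c)	"\n\t/* output condition code " #c "*/\n"
	# define CC_OUT(c)	"=@cc" #c
	#else
	# define CC_SET(c)	"\n\tset" #c " %[_cc_" #c "]\n"
	# define CC_OUT(c)	[_cc_ ## c] "=qm"
	#endif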
asm_inline volatile(lock "cmpxchgw %[new], %[ptr]" \
|
			     CC_SET(z) \
			     : CC_OUT(z) (success), \
			       [ptr] "+m" (*__ptr), \
			       [old] "+a" (__old) \
			     : [new] "r" (__new) \
			     : "memory"); \
		break; \
	} \
	case __X86_CASE_L: \
	{ \
		volatile u32 *__ptr = (volatile u32 *)(_ptr); \
asm_inline volatile(lock "cmpxchgl %[new], %[ptr]" \
|
			     CC_SET(z) \
			     : CC_OUT(z) (success), \
			       [ptr] "+m" (*__ptr), \
			       [old] "+a" (__old) \
			     : [new] "r" (__new) \
			     : "memory"); \
		break; \
	} \
	case __X86_CASE_Q: \
	{ \
		volatile u64 *__ptr = (volatile u64 *)(_ptr); \
x86/locking/atomic: Improve performance by using asm_inline() for atomic locking instructions
According to:
https://gcc.gnu.org/onlinedocs/gcc/Size-of-an-asm.html
the usage of asm pseudo directives in the asm template can confuse
the compiler to wrongly estimate the size of the generated
code.
The LOCK_PREFIX macro expands to several asm pseudo directives, so
its usage in atomic locking insns causes instruction length estimates
to fail significantly (the specially instrumented compiler reports
the estimated length of these asm templates to be 6 instructions long).
This incorrect estimate further causes unoptimal inlining decisions,
un-optimal instruction scheduling and un-optimal code block alignments
for functions that use these locking primitives.
Use asm_inline instead:
https://gcc.gnu.org/pipermail/gcc-patches/2018-December/512349.html
which is a feature that makes GCC pretend some inline assembler code
is tiny (while it would think it is huge), instead of just asm.
For code size estimation, the size of the asm is then taken as
the minimum size of one instruction, ignoring how many instructions
compiler thinks it is.
bloat-o-meter reports the following code size increase
(x86_64 defconfig, gcc-14.2.1):
add/remove: 82/283 grow/shrink: 870/372 up/down: 76272/-43618 (32654)
Total: Before=22770320, After=22802974, chg +0.14%
with top grows (>500 bytes):
Function old new delta
----------------------------------------------------------------
copy_process 6465 10191 +3726
balance_dirty_pages_ratelimited_flags 237 2949 +2712
icl_plane_update_noarm 5800 7969 +2169
samsung_input_mapping 3375 5170 +1795
ext4_do_update_inode.isra - 1526 +1526
__schedule 2416 3472 +1056
__i915_vma_resource_unhold - 946 +946
sched_mm_cid_after_execve 175 1097 +922
__do_sys_membarrier - 862 +862
filemap_fault 2666 3462 +796
nl80211_send_wiphy 11185 11874 +689
samsung_input_mapping.cold 900 1500 +600
virtio_gpu_queue_fenced_ctrl_buffer 839 1410 +571
ilk_update_pipe_csc 1201 1735 +534
enable_step - 525 +525
icl_color_commit_noarm 1334 1847 +513
tg3_read_bc_ver - 501 +501
and top shrinks (>500 bytes):
Function old new delta
----------------------------------------------------------------
nl80211_send_iftype_data 580 - -580
samsung_gamepad_input_mapping.isra.cold 604 - -604
virtio_gpu_queue_ctrl_sgs 724 - -724
tg3_get_invariants 9218 8376 -842
__i915_vma_resource_unhold.part 899 - -899
ext4_mark_iloc_dirty 1735 106 -1629
samsung_gamepad_input_mapping.isra 2046 - -2046
icl_program_input_csc 2203 - -2203
copy_mm 2242 - -2242
balance_dirty_pages 2657 - -2657
These code size changes can be grouped into 4 groups:
a) some functions now include once-called functions in full or
in part. These are:
Function old new delta
----------------------------------------------------------------
copy_process 6465 10191 +3726
balance_dirty_pages_ratelimited_flags 237 2949 +2712
icl_plane_update_noarm 5800 7969 +2169
samsung_input_mapping 3375 5170 +1795
ext4_do_update_inode.isra - 1526 +1526
that now include:
Function old new delta
----------------------------------------------------------------
copy_mm 2242 - -2242
balance_dirty_pages 2657 - -2657
icl_program_input_csc 2203 - -2203
samsung_gamepad_input_mapping.isra 2046 - -2046
ext4_mark_iloc_dirty 1735 106 -1629
b) ISRA [interprocedural scalar replacement of aggregates, an
interprocedural pass that removes unused function return values
(turning functions which return a never-used value into void
functions) and removes unused function parameters. It can also
replace an aggregate parameter with a set of parameters representing
parts of the original, turning arguments passed by reference into
ones that pass the value directly.]
Top grows and shrinks of this group are listed below:
Function old new delta
----------------------------------------------------------------
ext4_do_update_inode.isra - 1526 +1526
nfs4_begin_drain_session.isra - 249 +249
nfs4_end_drain_session.isra - 168 +168
__guc_action_register_multi_lrc_v70.isra 335 500 +165
__i915_gem_free_objects.isra - 144 +144
...
membarrier_register_private_expedited.isra 108 - -108
syncobj_eventfd_entry_func.isra 445 314 -131
__ext4_sb_bread_gfp.isra 140 - -140
class_preempt_notrace_destructor.isra 145 - -145
p9_fid_put.isra 151 - -151
__mm_cid_try_get.isra 238 - -238
membarrier_global_expedited.isra 294 - -294
mm_cid_get.isra 295 - -295
samsung_gamepad_input_mapping.isra.cold 604 - -604
samsung_gamepad_input_mapping.isra 2046 - -2046
c) different split points of hot/cold split that just move code around:
Top grows and shrinks of this group are listed below:
Function old new delta
----------------------------------------------------------------
samsung_input_mapping.cold 900 1500 +600
__i915_request_reset.cold 311 389 +78
nfs_update_inode.cold 77 153 +76
__do_sys_swapon.cold 404 455 +51
copy_process.cold - 45 +45
tg3_get_invariants.cold 73 115 +42
...
hibernate.cold 671 643 -28
copy_mm.cold 31 - -31
software_resume.cold 249 207 -42
io_poll_wake.cold 106 54 -52
samsung_gamepad_input_mapping.isra.cold 604 - -604
d) full inlining of small functions that contain a locking insn (~150
cases). These account for most of the code size increase, because the
removed function body is now inlined in multiple places. E.g.:
0000000000a50e10 <release_devnum>:
a50e10: 48 63 07 movslq (%rdi),%rax
a50e13: 85 c0 test %eax,%eax
a50e15: 7e 10 jle a50e27 <release_devnum+0x17>
a50e17: 48 8b 4f 50 mov 0x50(%rdi),%rcx
a50e1b: f0 48 0f b3 41 50 lock btr %rax,0x50(%rcx)
a50e21: c7 07 ff ff ff ff movl $0xffffffff,(%rdi)
a50e27: e9 00 00 00 00 jmp a50e2c <release_devnum+0x1c>
a50e28: R_X86_64_PLT32 __x86_return_thunk-0x4
a50e2c: 0f 1f 40 00 nopl 0x0(%rax)
is now fully inlined into its callers. This is desirable because of
the per-function overhead of CPU bug mitigations such as retpolines.
FTR a) with -Os (where generated code size really matters) the x86_64
defconfig object file shrinks by 24,388 bytes, a 0.1% code size
decrease:
text data bss dec hex filename
23883860 4617284 814212 29315356 1bf511c vmlinux-old.o
23859472 4615404 814212 29289088 1beea80 vmlinux-new.o
FTR b) clang recognizes "asm inline", but there was no difference in
code sizes:
text data bss dec hex filename
27577163 4503078 807732 32887973 1f5d4a5 vmlinux-clang-patched.o
27577181 4503078 807732 32887991 1f5d4b7 vmlinux-clang-unpatched.o
The performance impact of the patch was assessed by recompiling the
Fedora 41 6.13.5 kernel and running lmbench with the old and the new
kernel. The most noticeable changes were, from:
Process fork+exit: 270.0952 microseconds
Process fork+execve: 2620.3333 microseconds
Process fork+/bin/sh -c: 6781.0000 microseconds
File /usr/tmp/XXX write bandwidth: 1780350 KB/sec
Pagefaults on /usr/tmp/XXX: 0.3875 microseconds
to:
Process fork+exit: 298.6842 microseconds
Process fork+execve: 1662.7500 microseconds
Process fork+/bin/sh -c: 2127.6667 microseconds
File /usr/tmp/XXX write bandwidth: 1950077 KB/sec
Pagefaults on /usr/tmp/XXX: 0.1958 microseconds
and from:
Socket bandwidth using localhost
0.000001 2.52 MB/sec
0.000064 163.02 MB/sec
0.000128 321.70 MB/sec
0.000256 630.06 MB/sec
0.000512 1207.07 MB/sec
0.001024 2004.06 MB/sec
0.001437 2475.43 MB/sec
10.000000 5817.34 MB/sec
Avg xfer: 3.2KB, 41.8KB in 1.2230 millisecs, 34.15 MB/sec
AF_UNIX sock stream bandwidth: 9850.01 MB/sec
Pipe bandwidth: 4631.28 MB/sec
to:
Socket bandwidth using localhost
0.000001 3.13 MB/sec
0.000064 187.08 MB/sec
0.000128 324.12 MB/sec
0.000256 618.51 MB/sec
0.000512 1137.13 MB/sec
0.001024 1962.95 MB/sec
0.001437 2458.27 MB/sec
10.000000 6168.08 MB/sec
Avg xfer: 3.2KB, 41.8KB in 1.0060 millisecs, 41.52 MB/sec
AF_UNIX sock stream bandwidth: 9921.68 MB/sec
Pipe bandwidth: 4649.96 MB/sec
[ mingo: Prettified the changelog a bit. ]
Signed-off-by: Uros Bizjak <ubizjak@gmail.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Nathan Chancellor <nathan@kernel.org>
Cc: Nick Desaulniers <ndesaulniers@google.com>
Link: https://lore.kernel.org/r/20250309170955.48919-1-ubizjak@gmail.com
2025-03-09 18:09:36 +01:00
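[ Illustration: a minimal sketch of the "asm inline" mechanism described
  above. This is not the kernel's actual macro; the function name and
  the GCC version gate are assumptions. ]

/* With plain asm, GCC estimates code size from the template's apparent
 * length, so the multi-line pseudo-directives hidden behind LOCK_PREFIX
 * inflate the estimate; "asm inline" (GCC >= 9) forces the statement to
 * be costed as a single minimum-size instruction instead. */
static inline void lock_inc_sketch(int *v)
{
#if defined(__GNUC__) && __GNUC__ >= 9
	asm inline volatile("lock incl %0" : "+m" (*v) : : "memory");
#else
	asm volatile("lock incl %0" : "+m" (*v) : : "memory");
#endif
}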
		asm_inline volatile(lock "cmpxchgq %[new], %[ptr]"	\
locking/atomic: Introduce atomic_try_cmpxchg()
Add a new cmpxchg interface:
bool try_cmpxchg(u{8,16,32,64} *ptr, u{8,16,32,64} *val, u{8,16,32,64} new);
where the boolean return value reports the result of the compare, and
thus whether the exchange happened; on failure, the current value of
*ptr is returned in *val.
This allows simplification/improvement of loops like:
for (;;) {
new = val $op $imm;
old = cmpxchg(ptr, val, new);
if (old == val)
break;
val = old;
}
into:
do {
} while (!try_cmpxchg(ptr, &val, val $op $imm));
while also generating better code (GCC 6 and onwards).
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2017-02-01 16:39:38 +01:00
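[ Illustration: a concrete, hedged instance of the loop transformation
  from the changelog above, assuming the generic atomic_try_cmpxchg()
  and atomic_read() API; the helper name is made up. ]

/* Atomically OR a mask into *v. On failure, atomic_try_cmpxchg()
 * refreshes 'val' with the value currently in *v, so the loop body
 * needs no explicit reload. */
static inline void atomic_or_sketch(atomic_t *v, int mask)
{
	int val = atomic_read(v);

	do {
	} while (!atomic_try_cmpxchg(v, &val, val | mask));
}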
			     CC_SET(z)					\
			     : CC_OUT(z) (success),			\
			       [ptr] "+m" (*__ptr),			\
			       [old] "+a" (__old)			\
			     : [new] "r" (__new)			\
			     : "memory");				\
		break;							\
	}								\
	default:							\
		/* undefined on purpose: unsupported sizes fail the build */ \
		__cmpxchg_wrong_size();					\
	}								\
	if (unlikely(!success))						\
		*_old = __old;						\
	likely(success);						\
})
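[ Reference: CC_SET()/CC_OUT() in the asm operands above abstract GCC's
  flag-output constraints. A hedged sketch of the idea; the kernel's
  real definitions live in arch/x86/include/asm/asm.h and may differ in
  detail. ]

/* With flag outputs (GCC >= 6), the Z flag lands directly in 'success'
 * through an "=@ccz" constraint; without them, an explicit setz
 * instruction materializes the flag into a register or memory slot. */
#ifdef __GCC_ASM_FLAG_OUTPUTS__
# define CC_SET_SKETCH(c)	"\n\t/* output condition code " #c " */\n"
# define CC_OUT_SKETCH(c)	"=@cc" #c
#else
# define CC_SET_SKETCH(c)	"\n\tset" #c " %[_cc_" #c "]\n"
# define CC_OUT_SKETCH(c)	[_cc_ ## c] "=qm"
#endif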
#define __try_cmpxchg(ptr, pold, new, size)				\
	__raw_try_cmpxchg((ptr), (pold), (new), (size), LOCK_PREFIX)
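[ Reference: a hedged sketch of why LOCK_PREFIX looks "huge" to the
  compiler, as discussed in the asm_inline changelog above. The real
  definition is in arch/x86/include/asm/alternative.h and may differ by
  kernel version. ]

#ifdef CONFIG_SMP
/* On SMP, record each lock prefix's address in the .smp_locks section
 * so a uniprocessor boot can patch it out; these pseudo-directives are
 * the multi-line payload that skews GCC's size estimates. */
# define LOCK_PREFIX_SKETCH				\
	".pushsection .smp_locks,\"a\"\n"		\
	".balign 4\n"					\
	".long 671f - .\n"				\
	".popsection\n"					\
	"671:\n\tlock; "
#else
# define LOCK_PREFIX_SKETCH ""	/* UP build: the bus lock is unnecessary */
#endif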
locking/atomic/x86: Introduce arch_sync_try_cmpxchg()
Introduce the arch_sync_try_cmpxchg() macro to improve code that uses
the sync_try_cmpxchg() locking primitive. The new definitions reuse the
existing __raw_try_cmpxchg() macro, but with their own "lock; " prefix.
The new macros improve the assembly of the cmpxchg loop in
evtchn_fifo_unmask() in drivers/xen/events/events_fifo.c from:
57a: 85 c0 test %eax,%eax
57c: 78 52 js 5d0 <...>
57e: 89 c1 mov %eax,%ecx
580: 25 ff ff ff af and $0xafffffff,%eax
585: c7 04 24 00 00 00 00 movl $0x0,(%rsp)
58c: 81 e1 ff ff ff ef and $0xefffffff,%ecx
592: 89 4c 24 04 mov %ecx,0x4(%rsp)
596: 89 44 24 08 mov %eax,0x8(%rsp)
59a: 8b 74 24 08 mov 0x8(%rsp),%esi
59e: 8b 44 24 04 mov 0x4(%rsp),%eax
5a2: f0 0f b1 32 lock cmpxchg %esi,(%rdx)
5a6: 89 04 24 mov %eax,(%rsp)
5a9: 8b 04 24 mov (%rsp),%eax
5ac: 39 c1 cmp %eax,%ecx
5ae: 74 07 je 5b7 <...>
5b0: a9 00 00 00 40 test $0x40000000,%eax
5b5: 75 c3 jne 57a <...>
<...>
to:
578: a9 00 00 00 40 test $0x40000000,%eax
57d: 74 2b je 5aa <...>
57f: 85 c0 test %eax,%eax
581: 78 40 js 5c3 <...>
583: 89 c1 mov %eax,%ecx
585: 25 ff ff ff af and $0xafffffff,%eax
58a: 81 e1 ff ff ff ef and $0xefffffff,%ecx
590: 89 4c 24 04 mov %ecx,0x4(%rsp)
594: 89 44 24 08 mov %eax,0x8(%rsp)
598: 8b 4c 24 08 mov 0x8(%rsp),%ecx
59c: 8b 44 24 04 mov 0x4(%rsp),%eax
5a0: f0 0f b1 0a lock cmpxchg %ecx,(%rdx)
5a4: 89 44 24 04 mov %eax,0x4(%rsp)
5a8: 75 30 jne 5da <...>
<...>
5da: 8b 44 24 04 mov 0x4(%rsp),%eax
5de: eb 98 jmp 578 <...>
The new code removes the move instructions at 585:, 5a6: and 5a9:,
and the compare at 5ac:. Additionally, the compiler now assumes that
cmpxchg success is the more probable outcome and optimizes the code
flow accordingly.
Signed-off-by: Uros Bizjak <ubizjak@gmail.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: linux-kernel@vger.kernel.org
2023-09-25 16:55:48 +02:00
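[ Illustration: a hedged usage sketch modelled on the
  evtchn_fifo_unmask() loop referenced above; the helper name and the
  shared-word convention are made up, not copied from the Xen driver. ]

/* Clear bits in a word shared with another agent (e.g. a hypervisor),
 * where the "lock" prefix is required even on UP kernels. On failure,
 * sync_try_cmpxchg() refreshes 'old', and the compiler lays the loop
 * out for the likely-success case. */
static void clear_shared_flags_sketch(u32 *word, u32 mask)
{
	u32 old = READ_ONCE(*word);

	do {
	} while (!sync_try_cmpxchg(word, &old, old & ~mask));
}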
#define __sync_try_cmpxchg(ptr, pold, new, size)			\
	__raw_try_cmpxchg((ptr), (pold), (new), (size), "lock ")
#define __try_cmpxchg_local(ptr, pold, new, size)			\
	__raw_try_cmpxchg((ptr), (pold), (new), (size), "")
#define arch_try_cmpxchg(ptr, pold, new) \
	__try_cmpxchg((ptr), (pold), (new), sizeof(*(ptr)))
#define arch_sync_try_cmpxchg(ptr, pold, new)				\
	__sync_try_cmpxchg((ptr), (pold), (new), sizeof(*(ptr)))
#define arch_try_cmpxchg_local(ptr, pold, new)				\
	__try_cmpxchg_local((ptr), (pold), (new), sizeof(*(ptr)))
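[ Illustration: the _local variant plugs an empty prefix into
  __raw_try_cmpxchg(), omitting the bus lock. A hedged sketch of when
  that is legitimate; the helper and its free-slot convention are made
  up. ]

/* No LOCK prefix is emitted, so this is correct only for data no other
 * CPU can touch concurrently, e.g. this CPU's own state with preemption
 * disabled; interrupts are tolerable because cmpxchg is a single
 * instruction and therefore IRQ-atomic on x86. */
static inline bool claim_slot_sketch(u64 *percpu_slot, u64 token)
{
	u64 expected = 0;	/* assume 0 denotes a free slot */

	return arch_try_cmpxchg_local(percpu_slot, &expected, token);
}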
/*
 * xadd() adds "inc" to "*ptr" and atomically returns the previous
 * value of "*ptr".
 *
 * xadd() is locked when multiple CPUs are online
 */
#define __xadd(ptr, inc, lock) __xchg_op((ptr), (inc), xadd, lock)
#define xadd(ptr, inc) __xadd((ptr), (inc), LOCK_PREFIX)
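[ Illustration: a hedged sketch of the fetch-and-add semantics the
  comment above describes, written as a standalone helper rather than
  through the __xchg_op() plumbing; LOCK_PREFIX is assumed to be in
  scope from this header. ]

/* xadd exchanges its register operand with the memory operand and then
 * stores the sum in memory, so 'inc' holds the previous value of
 * *counter once the instruction completes. */
static inline int fetch_add_sketch(int *counter, int inc)
{
	asm volatile(LOCK_PREFIX "xaddl %0, %1"
		     : "+r" (inc), "+m" (*counter)
		     : : "memory");
	return inc;
}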
#endif /* ASM_X86_CMPXCHG_H */