perf-sched(1)
=============
NAME
----
perf-sched - Tool to trace/measure scheduler properties (latencies)
SYNOPSIS
--------
[verse]
'perf sched' {record|latency|map|replay|script|timehist}
DESCRIPTION
-----------
There are several variants of 'perf sched':
'perf sched record <command>' to record the scheduling events
of an arbitrary workload.
'perf sched latency' to report the per task scheduling latencies
and other scheduling properties of the workload.
Example usage:
perf sched record -- sleep 1
perf sched latency
 -----------------------------------------------------------------------------------------------------------------------------------------
  Task                  |  Runtime ms   |  Count | Avg delay ms    | Max delay ms    | Max delay start            | Max delay end            |
 -----------------------------------------------------------------------------------------------------------------------------------------
  perf:(2)              |      2.804 ms |     66 | avg:   0.524 ms | max:   1.069 ms | max start: 254752.314960 s | max end: 254752.316029 s
  NetworkManager:1343   |      0.372 ms |     13 | avg:   0.008 ms | max:   0.013 ms | max start: 254751.551153 s | max end: 254751.551166 s
  kworker/1:2-xfs:4649  |      0.012 ms |      1 | avg:   0.008 ms | max:   0.008 ms | max start: 254751.519807 s | max end: 254751.519815 s
  kworker/3:1-xfs:388   |      0.011 ms |      1 | avg:   0.006 ms | max:   0.006 ms | max start: 254751.519809 s | max end: 254751.519815 s
  sleep:147736          |      0.938 ms |      3 | avg:   0.006 ms | max:   0.007 ms | max start: 254751.313817 s | max end: 254751.313824 s
It shows Runtime (the time that a task spent actually running on the
CPU), Count (the number of times a delay was calculated) and delay
(the time that a task was ready to run but was kept waiting).

Tasks with the same command name are merged and the merge count is
given within (). However, if the -p option is used, the pid is
mentioned instead.
'perf sched script' to see a detailed trace of the workload that
was recorded (aliased to 'perf script' for now).
'perf sched replay' to simulate the workload that was recorded
via perf sched record. (this is done by starting up mockup threads
that mimic the workload based on the events in the trace. These
threads can then replay the timings (CPU runtime and sleep patterns)
of the workload as it occurred when it was recorded - and can repeat
it a number of times, measuring its performance.)
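For example, to replay a previously recorded workload five times (an
illustrative invocation; it assumes a perf.data file produced by
'perf sched record'):

    perf sched replay -r 5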
'perf sched map' to print a textual context-switching outline of
workload captured via perf sched record. Columns stand for
individual CPUs, and the two-letter shortcuts stand for tasks that
are running on a CPU. A '*' denotes the CPU that had the event, and
a dot signals an idle CPU.
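For example, to print the map while hiding CPUs that had no activity
(illustrative; requires a prior 'perf sched record'):

    perf sched map --compact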
'perf sched timehist' provides an analysis of scheduling events.
Example usage:
perf sched record -- sleep 1
perf sched timehist
By default it shows the individual schedule events, including the wait
time (time between sched-out and next sched-in events for the task), the
task scheduling delay (time between runnable and actually running) and
run time for the task:
            time    cpu  task name              wait time  sch delay   run time
                         [tid/pid]                 (msec)     (msec)     (msec)
  --------------- ------ --------------------   ---------  ---------  ---------
     79371.874569 [0011] gcc[31949]                  0.014      0.000      1.148
     79371.874591 [0010] gcc[31951]                  0.000      0.000      0.024
     79371.874603 [0010] migration/10[59]            3.350      0.004      0.011
     79371.874604 [0011] <idle>                      1.148      0.000      0.035
     79371.874723 [0005] <idle>                      0.016      0.000      1.383
     79371.874746 [0005] gcc[31949]                  0.153      0.078      0.022
     ...
Times are in msec.usec.
OPTIONS
-------
-i::
--input=<file>::
Input file name. (default: perf.data unless stdin is a fifo)
-v::
--verbose::
Be more verbose. (show symbol address, etc)
-D::
--dump-raw-trace=::
Display verbose dump of the sched data.
-f::
--force::
Don't complain, do it.
OPTIONS for 'perf sched latency'
-------------------------------
-C::
--CPU <n>::
CPU to profile on.
-p::
--pids::
latency stats per pid instead of per command name.
-s::
--sort <key[,key2...]>::
sort by key(s): runtime, switch, avg, max.
By default it is sorted by "avg, max, switch, runtime".
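For example, to sort the latency report by maximum delay and then
runtime, aggregated per pid (an illustrative invocation; assumes an
existing perf.data):

    perf sched latency --sort max,runtime -p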
OPTIONS for 'perf sched map'
----------------------------
--compact::
Show only CPUs with activity. Helps visualization on systems with a
high core count.
--cpus::
Show just entries with activities for the given CPUs.
--color-cpus::
Highlight the given cpus.
--color-pids::
Highlight the given pids.
--task-name <task>::
Map output only for the given task name(s). Separate the
task names with a comma (without whitespace). The sched-out
time is printed and is represented by '*-' for the given
task name(s).
('-' indicates other tasks while '.' is idle).
--fuzzy-name::
Given task name(s) can be partially matched (fuzzy matching).
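For example, to show the map only for tasks whose name contains "gcc"
(illustrative; "gcc" is just an example task name):

    perf sched map --task-name gcc --fuzzy-name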
OPTIONS for 'perf sched timehist'
---------------------------------
-k::
--vmlinux=<file>::
vmlinux pathname
--kallsyms=<file>::
kallsyms pathname
-g::
--call-graph::
Display call chains if present (default on).
--max-stack::
Maximum number of functions to display in backtrace, default 5.
-C=::
--cpu=::
Only show events for the given CPU(s) (comma separated list).
-p=::
--pid=::
Only show events for given process ID (comma separated list).
-t=::
--tid=::
Only show events for given thread ID (comma separated list).
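For example, to limit the timehist output to CPUs 0 and 2 and one
process (illustrative values; 1234 is a placeholder pid):

    perf sched timehist --cpu 0,2 --pid 1234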
-s::
--summary::
Show only a summary of scheduling by thread with min, max, and average
run times (in sec) and relative stddev.
-S::
--with-summary::
Show all scheduling events followed by a summary by thread with min,
max, and average run times (in sec) and relative stddev.
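For example, to print only the per-thread summary for an existing
recording (illustrative invocation):

    perf sched timehist -s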
--symfs=<directory>::
Look for files with symbols relative to this directory.
-V::
--cpu-visual::
Show visual aid for sched switches by CPU: 'i' marks idle time,
's' are scheduler events.
-w::
--wakeups::
Show wakeup events.
-M::
--migrations::
Show migration events.
-n::
--next::
Show next task.
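These can be combined; for example, to also include wakeup and
migration events in the output (illustrative invocation):

    perf sched timehist --wakeups --migrations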
-I::
--idle-hist::
Show idle-related events only.
--time::
Only analyze samples within given time window: <start>,<stop>. Times
have the format seconds.microseconds. If start is not given (i.e.,
time string is ',x.y') then analysis starts at the beginning of the
file. If stop time is not given (i.e., time string is 'x.y,') then
analysis goes to the end of the file.
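For example, using the timestamps from the sample timehist output
above (illustrative; substitute timestamps from your own trace):

    perf sched timehist --time 79371.874569,79371.874746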
--state::
Show task state when it switched out.
--show-prio::
Show task priority.
--prio::
Only show events for given task priority(ies). Multiple priorities can be
provided as a comma-separated list with no spaces: 0,120. Ranges of
priorities are specified with -: 120-129. A combination of both can also be
provided: 0,120-129.
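For example, to show only events for priority 0 and the range
120-129, with the priority column enabled (illustrative invocation):

    perf sched timehist --show-prio --prio 0,120-129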
-P::
--pre-migrations::
Show pre-migration wait time. Pre-migration wait time is the time a
task spends waiting on a runqueue without getting a chance to run
there, before being migrated to a different runqueue where it finally
runs. This time, between sched_wakeup and migrate_task, is the
pre-migration wait time.
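For example (illustrative invocation; assumes an existing perf.data):

    perf sched timehist -P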
OPTIONS for 'perf sched replay'
------------------------------
-r::
--repeat <n>::
repeat the workload n times (0: infinite). Default is 10.
SEE ALSO
--------
linkperf:perf-record[1]