path: root/tools/perf/util
Age  Commit message  Author
2018-12-17  perf cs-etm: Add support for PTMv1.1 decoding  (Mathieu Poirier)
This patch is re-using the mechanism set forth by ETMv3 to add support for PTM decoding. Configuration for both encoding protocols is similar but the generated stream itself is very different, hence requiring special handling. Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: coresight@lists.linaro.org Cc: linux-arm-kernel@lists.infradead.org Link: http://lkml.kernel.org/r/1543955944-10042-4-git-send-email-mathieu.poirier@linaro.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-12-17  perf cs-etm: Add support for ETMv3 trace decoding  (Mathieu Poirier)
Add support for the creation of packet printer and decoder for the ETMv3 trace architecture. That way traces generated by tracers adhering to that trace protocol can be handled properly by the perf infrastructure. Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: coresight@lists.linaro.org Cc: linux-arm-kernel@lists.infradead.org Link: http://lkml.kernel.org/r/1543955944-10042-3-git-send-email-mathieu.poirier@linaro.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-12-17  perf cs-etm: Add configuration for ETMv3 trace protocol  (Mathieu Poirier)
This patch deals with the proper initialisation of configuration parameters for the ETMv3 trace protocol in order to properly handle packets generated by tracers following this specification. Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: coresight@lists.linaro.org Cc: linux-arm-kernel@lists.infradead.org Link: http://lkml.kernel.org/r/1543955944-10042-2-git-send-email-mathieu.poirier@linaro.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-12-17  perf top: Move perf_top__reset_sample_counters() to after counts display  (Jiri Olsa)
Move the perf_top__reset_sample_counters() call to right after we display the counters so we can see the updated numbers for longer. Signed-off-by: Jiri Olsa <jolsa@kernel.org> Acked-by: David S. Miller <davem@davemloft.net> Acked-by: Namhyung Kim <namhyung@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Link: https://lkml.kernel.org/n/tip-o72pyiwt05f3p2juprwmz2jo@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-12-17  perf top: Save and display the drop count stats  (Jiri Olsa)
Add drop count to 'perf top' headers: # perf top --stdio PerfTop: 3549 irqs/sec kernel:51.8% exact: 100.0% lost: 0/0 drop: 0/0 [4000Hz cycles:ppp], (all, 8 CPUs) # perf top Samples: 0 of event 'cycles:ppp', 4000 Hz, Event count (approx.): 0 lost: 0/0 drop: 0/0 The format is: <current period drop>/<total drop> Signed-off-by: Jiri Olsa <jolsa@kernel.org> Acked-by: David S. Miller <davem@davemloft.net> Acked-by: Namhyung Kim <namhyung@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/n/tip-2lj87zz8tq9ye1ntax3ulw0n@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-12-17  perf top: Use cond variable instead of a lock  (Jiri Olsa)
Use condition variable logic to synchronize the reading and processing threads. Currently this is done by holding a mutex around the rotation code. Use a POSIX condition variable to sync both threads after queue rotation: Process thread: - Detects data - Switches queues - Sets rotate variable - Waits in pthread_cond_wait() Read thread: - Detects rotate is set - Kicks the process thread with a pthread_cond_signal() After this the rotation is safely completed and both threads can continue with the new queue. Signed-off-by: Jiri Olsa <jolsa@kernel.org> Acked-by: David S. Miller <davem@davemloft.net> Acked-by: Namhyung Kim <namhyung@kernel.org> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/n/tip-3rdeg23rv3brvy1pwt3igvyw@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
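As a rough illustration of the handshake described above (a minimal sketch with illustrative names, not the actual 'perf top' code):

#include <pthread.h>
#include <stdbool.h>

/* Illustrative sketch of the rotate handshake described in the entry above. */
static pthread_mutex_t rotate_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  rotate_cond = PTHREAD_COND_INITIALIZER;
static bool rotate;

/* Processing thread: request a rotation, then wait until the reader acks. */
static void process_thread_rotate(void)
{
	pthread_mutex_lock(&rotate_lock);
	rotate = true;			/* ask for a rotation */
	while (rotate)			/* wait for the reader's signal */
		pthread_cond_wait(&rotate_cond, &rotate_lock);
	pthread_mutex_unlock(&rotate_lock);
}

/* Reader thread: once it stops touching the old queue, ack the rotation. */
static void read_thread_ack(void)
{
	pthread_mutex_lock(&rotate_lock);
	if (rotate) {
		rotate = false;
		pthread_cond_signal(&rotate_cond);
	}
	pthread_mutex_unlock(&rotate_lock);
}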
2018-12-17  perf top: Add processing thread  (Jiri Olsa)
Add a new thread that takes care of histogram creation to offload the main reader thread, so it can keep the perf mmaps serviced in time and we reduce the possibility of losing events. The 'perf top' command now spawns 2 extra threads; the data processing is the following: 1) The main thread reads the data from mmaps and queues them to an ordered events object; 2) The processing thread takes the data from the ordered events object and creates the initial histogram; 3) The GUI thread periodically sorts the initial histogram and presents it. Passing the data between threads 1 and 2 is done by having 2 ordered events queues. One is always being filled by thread 1 while the other is flushed out in thread 2. Passing the data between threads 2 and 3 stays the same as it was initially between threads 1 and 3. Signed-off-by: Jiri Olsa <jolsa@kernel.org> Acked-by: David S. Miller <davem@davemloft.net> Acked-by: Namhyung Kim <namhyung@kernel.org> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/n/tip-hhf4hllgkmle9wl1aly1jli0@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-12-17  perf top: Save and display the lost count stats  (Jiri Olsa)
Add a 'lost count' to 'perf top' headers: # perf top --stdio PerfTop: 3850 irqs/sec kernel:49.0% exact: 100.0% lost: 0/0 [4000Hz cycles:ppp], (all, 8 CPUs) # perf top Samples: 0 of event 'cycles:ppp', 4000 Hz, Event count (approx.): 0 lost: 0/0 The format is: <current period lost>/<total lost> Signed-off-by: Jiri Olsa <jolsa@kernel.org> Acked-by: David S. Miller <davem@davemloft.net> Acked-by: Namhyung Kim <namhyung@kernel.org> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Link: https://lkml.kernel.org/n/tip-zo11rn270gij5jtp8fknpf8u@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-12-17  perf ordered_events: Add private data member  (Jiri Olsa)
We will need it in following patch, where we can't use the container_of() trick to get the higher level object. Signed-off-by: Jiri Olsa <jolsa@kernel.org> Acked-by: David S. Miller <davem@davemloft.net> Acked-by: Namhyung Kim <namhyung@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/n/tip-vgs9aoek21v14o3obza586yy@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-12-17  perf ordered_events: Rework show_progress for __ordered_events__flush  (Jiri Olsa)
Decide to use the progress bar one level higher, we will need this in following patch. Signed-off-by: Jiri Olsa <jolsa@kernel.org> Acked-by: David S. Miller <davem@davemloft.net> Acked-by: Namhyung Kim <namhyung@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/n/tip-ocjdukp2a8ujikkmafd0j5zv@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-12-17  perf tools: Support 'srccode' output  (Andi Kleen)
When looking at PT or brstackinsn traces with 'perf script' it can be very useful to see the source code. This adds a simple facility to print them with 'perf script', if the information is available through dwarf % perf record ... % perf script -F insn,ip,sym,srccode ... 4004c6 main 5 for (i = 0; i < 10000000; i++) 4004cd main 5 for (i = 0; i < 10000000; i++) 4004c6 main 5 for (i = 0; i < 10000000; i++) 4004cd main 5 for (i = 0; i < 10000000; i++) 4004cd main 5 for (i = 0; i < 10000000; i++) 4004cd main 5 for (i = 0; i < 10000000; i++) 4004cd main 5 for (i = 0; i < 10000000; i++) 4004cd main 5 for (i = 0; i < 10000000; i++) 4004b3 main 6 v++; % perf record -b ... % perf script -F insn,ip,sym,srccode,brstackinsn ... main+22: 0000000000400543 insn: e8 ca ff ff ff # PRED |18 f1(); f1: 0000000000400512 insn: 55 |10 { 0000000000400513 insn: 48 89 e5 0000000000400516 insn: b8 00 00 00 00 |11 f2(); 000000000040051b insn: e8 d6 ff ff ff # PRED f2: 00000000004004f6 insn: 55 |5 { 00000000004004f7 insn: 48 89 e5 00000000004004fa insn: 8b 05 2c 0b 20 00 |6 c = a / b; 0000000000400500 insn: 8b 0d 2a 0b 20 00 0000000000400506 insn: 99 0000000000400507 insn: f7 f9 0000000000400509 insn: 89 05 29 0b 20 00 000000000040050f insn: 90 |7 } 0000000000400510 insn: 5d 0000000000400511 insn: c3 # PRED f1+14: 0000000000400520 insn: b8 00 00 00 00 |12 f2(); 0000000000400525 insn: e8 cc ff ff ff # PRED f2: 00000000004004f6 insn: 55 |5 { 00000000004004f7 insn: 48 89 e5 00000000004004fa insn: 8b 05 2c 0b 20 00 |6 c = a / b; Not supported for callchains currently, would need some layout changes there. Committer notes: Fixed the build on Alpine Linux (3.4 .. 3.8) by addressing this warning: In file included from util/srccode.c:19:0: /usr/include/sys/fcntl.h:1:2: error: #warning redirecting incorrect #include <sys/fcntl.h> to <fcntl.h> [-Werror=cpp] #warning redirecting incorrect #include <sys/fcntl.h> to <fcntl.h> ^~~~~~~ cc1: all warnings being treated as errors Signed-off-by: Andi Kleen <ak@linux.intel.com> Tested-by: Jiri Olsa <jolsa@kernel.org> Link: http://lkml.kernel.org/r/20181204001848.24769-1-andi@firstfloor.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-12-17  perf tools: Allow specifying proc-map-timeout in config file  (Mark Drayton)
The default timeout of 500ms for parsing /proc/<pid>/maps files is too short for profiling many of our services. This can be overridden by passing --proc-map-timeout to the relevant command but it'd be nice to globally increase our default value. This patch permits setting a different default with the core.proc-map-timeout config file parameter. Signed-off-by: Mark Drayton <mbd@fb.com> Acked-by: Song Liu <songliubraving@fb.com> Acked-by: Namhyung Kim <namhyung@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/20181204203420.1683114-1-mbd@fb.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
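For example, assuming the usual ~/.perfconfig location, the new default could be set like this (the 3000 ms value is only an example, not from the patch):

[core]
	proc-map-timeout = 3000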
2018-12-17  perf tools: Fix diverse comment typos  (Ingo Molnar)
Go over the tools/ files that are maintained in Arnaldo's tree and fix common typos: half of them were in comments, the other half in JSON files. No change in functionality intended. Committer notes: This was split from a larger patch as there are code that is, additionally, maintained outside the kernel tree, so to ease cherry-picking and/or backporting, split this into multiple patches. Just typos in comments, no need to backport, reducing the possibility of possible backporting artifacts. Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/20181203102200.GA104797@gmail.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-12-17  perf bpf-loader: Fix debugging message typo  (Ingo Molnar)
Go over the tools/ files that are maintained in Arnaldo's tree and fix common typos: half of them were in comments, the other half in JSON files. No change in functionality intended. Committer notes: This was split from a larger patch as there are code that is, additionally, maintained outside the kernel tree, so to ease cherry picking and/or backporting, split this into multiple patches. This one has information that is presented to the user, albeit in debug mode. Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Wang Nan <wangnan0@huawei.com> Link: http://lkml.kernel.org/r/20181203102200.GA104797@gmail.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-12-17  perf cs-etm: Support for ARM A32/T32 instruction sets in CoreSight trace  (Robert Walker)
This patch adds support for generating instruction samples from trace of AArch32 programs using the A32 and T32 instruction sets. T32 has variable 2 or 4 byte instruction size, so the conversion between addresses and instruction counts requires extra information from the trace decoder, requiring version 0.10.0 of OpenCSD. A check for the OpenCSD library version has been added to the feature check for OpenCSD. Signed-off-by: Robert Walker <robert.walker@arm.com> Reviewed-by: Mathieu Poirier <mathieu.poirier@linaro.org> Tested-by: Leo Yan <leo.yan@linaro.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: coresight@lists.linaro.org Cc: linux-arm-kernel@lists.infradead.org Link: http://lkml.kernel.org/r/1543839526-30348-1-git-send-email-robert.walker@arm.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-12-17  perf tools: traceevent API cleanup, remove __tep_data2host*()  (Tzvetomir Stoyanov)
In order to make libtraceevent into a proper library, its API should be straightforward. The __tep_data2host*() functions are going to no longer be available as a libtraceevent API, tep_read_number() should be used instead. This patch replaces __tep_data2host*() usage with tep_read_number() in perf. Signed-off-by: Tzvetomir Stoyanov <tstoyanov@vmware.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Link: http://lkml.kernel.org/r/20181130154647.743979275@goodmis.org Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org> Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-12-17  tools lib traceevent, perf tools: Rename 'struct tep_event_format' to 'struct tep_event'  (Tzvetomir Stoyanov)
In order to make libtraceevent into a proper library, variables, data structures and functions require a unique prefix to prevent name space conflicts. This renames 'struct tep_event_format' to 'struct tep_event', which describes more closely the purpose of the struct. Signed-off-by: Tzvetomir Stoyanov <tstoyanov@vmware.com> Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Link: http://lkml.kernel.org/r/20181130154647.436403995@goodmis.org Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org> [ Fixup conflict with 6e33c250a88f ("tools lib traceevent: Fix compile warnings in tools/lib/traceevent/event-parse.c") ] Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-12-17  perf report: Display average IPC and IPC coverage per symbol  (Jin Yao)
Support displaying the average IPC and IPC coverage per symbol in 'perf report' --tui and --stdio modes. For example, $ perf record -b ... $ perf report -s symbol Overhead Symbol IPC [IPC Coverage] 39.60% [.] __random 2.30 [ 54.8%] 18.02% [.] main 0.43 [ 54.3%] 14.21% [.] compute_flag 2.29 [100.0%] 14.16% [.] rand 0.36 [100.0%] 7.06% [.] __random_r 2.57 [ 70.5%] 6.85% [.] rand@plt 0.00 [ 0.0%] Jiri Olsa <jolsa@redhat.com> provided the patch to support the --stdio mode. I merged Jiri's code in this patch. $ perf report -s symbol --stdio # Overhead Symbol IPC [IPC Coverage] # ........ ........................... .................... # 39.60% [.] __random 2.30 [ 54.8%] 18.02% [.] main 0.43 [ 54.3%] 14.21% [.] compute_flag 2.29 [100.0%] 14.16% [.] rand 0.36 [100.0%] 7.06% [.] __random_r 2.57 [ 70.5%] 6.85% [.] rand@plt 0.00 [ 0.0%] 0.02% [k] run_timer_softirq 1.60 [ 57.2%] The columns "IPC" and "[IPC Coverage]" are automatically enabled when the sort-key "symbol" is specified. If the perf.data file doesn't contain timed LBR information, columns are filled with "-". For example, # Overhead Symbol IPC [IPC Coverage] # ........ ........................... .................... # 46.57% [.] main - - 17.60% [.] rand - - 15.84% [.] __random_r - - 11.90% [.] __random - - 6.50% [.] compute_flag - - 1.59% [.] rand@plt - - 0.00% [.] _dl_relocate_object - - 0.00% [k] tlb_flush_mmu - - 0.00% [k] perf_event_mmap - - 0.00% [k] native_sched_clock - - 0.00% [k] intel_pmu_handle_irq_v4 - - 0.00% [k] native_write_msr - - v3: --- Removed the sortkey 'ipc' from command-line. The columns "IPC" and "[IPC Coverage]" are automatically enabled when "symbol" is specified. v2: --- Merge in Jiri's patch to support stdio mode Signed-off-by: Jin Yao <yao.jin@linux.intel.com> Reviewed-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Jiri Olsa <jolsa@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/1543586097-27632-4-git-send-email-yao.jin@linux.intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-12-17  perf annotate: Create a annotate2 flag in struct symbol  (Jin Yao)
We often use the symbol__annotate2() to annotate a specified symbol. While annotating may take some time, so in order to avoid annotating the same symbol repeatedly, the patch creates a new flag to indicate the symbol has been annotated. Signed-off-by: Jin Yao <yao.jin@linux.intel.com> Reviewed-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Jiri Olsa <jolsa@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/1543586097-27632-3-git-send-email-yao.jin@linux.intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-12-17  perf annotate: Compute average IPC and IPC coverage per symbol  (Jin Yao)
Add support to 'perf report' annotate view or 'perf annotate --stdio2' to aggregate the IPC derived from timed LBRs per symbol. We compute the average IPC and the IPC coverage percentage. For example: $ perf annotate --stdio2 Percent IPC Cycle (Average IPC: 2.30, IPC Coverage: 54.8%) Disassembly of section .text: 000000000003aac0 <random@@GLIBC_2.2.5>: 8.32 3.28 sub $0x18,%rsp 3.28 mov $0x1,%esi 3.28 xor %eax,%eax 3.28 cmpl $0x0,argp_program_version_hook@@GLIBC_2.2.5+0x1e0 11.57 3.28 1 ↓ je 20 lock cmpxchg %esi,__abort_msg@@GLIBC_PRIVATE+0x8a0 ↓ jne 29 ↓ jmp 43 11.57 1.10 20: cmpxchg %esi,__abort_msg@@GLIBC_PRIVATE+0x8a0 0.00 1.10 1 ↓ je 43 29: lea __abort_msg@@GLIBC_PRIVATE+0x8a0,%rdi sub $0x80,%rsp → callq __lll_lock_wait_private add $0x80,%rsp 0.00 3.00 43: lea __ctype_b@GLIBC_2.2.5+0x38,%rdi 3.00 lea 0xc(%rsp),%rsi 8.49 3.00 1 → callq __random_r 7.91 1.94 cmpl $0x0,argp_program_version_hook@@GLIBC_2.2.5+0x1e0 0.00 1.94 1 ↓ je 68 lock decl __abort_msg@@GLIBC_PRIVATE+0x8a0 ↓ jne 70 ↓ jmp 8a 0.00 2.00 68: decl __abort_msg@@GLIBC_PRIVATE+0x8a0 21.56 2.00 1 ↓ je 8a 70: lea __abort_msg@@GLIBC_PRIVATE+0x8a0,%rdi sub $0x80,%rsp → callq __lll_unlock_wake_private add $0x80,%rsp 21.56 2.90 8a: movslq 0xc(%rsp),%rax 2.90 add $0x18,%rsp 9.03 2.90 1 ← retq It shows for this symbol the average IPC is 2.30 and the IPC coverage is 54.8%. Signed-off-by: Jin Yao <yao.jin@linux.intel.com> Reviewed-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Jiri Olsa <jolsa@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/1543586097-27632-2-git-send-email-yao.jin@linux.intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-12-17  perf record: Extend trace writing to multi AIO  (Alexey Budankov)
Multi AIO trace writing allows caching more kernel data in userspace memory, postponing trace writing for the sake of an overall profiling data throughput increase. It can be seen as an extension of the kernel data buffer into userspace memory. With an --aio option value different from 0 (the default value is 1) the tool has the capability to cache more and more data into user space along with delegating the spill to AIO. That avoids suspending at record__aio_sync() between calls of record__mmap_read_evlist() and increases profiling data throughput at the cost of userspace memory. Signed-off-by: Alexey Budankov <alexey.budankov@linux.intel.com> Reviewed-by: Jiri Olsa <jolsa@redhat.com> Acked-by: Namhyung Kim <namhyung@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/050bb053-e7f3-aa83-fde7-f27ff90be7f6@linux.intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-12-17  perf record: Enable asynchronous trace writing  (Alexey Budankov)
The trace file offset is read once before mmaps iterating loop and written back after all performance data is enqueued for aio writing. The trace file offset is incremented linearly after every successful aio write operation. record__aio_sync() blocks till completion of the started AIO operation and then proceeds. record__aio_mmap_read_sync() implements a barrier for all incomplete aio write requests. Signed-off-by: Alexey Budankov <alexey.budankov@linux.intel.com> Reviewed-by: Jiri Olsa <jolsa@redhat.com> Acked-by: Namhyung Kim <namhyung@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/ce2d45e9-d236-871c-7c8f-1bed2d37e8ac@linux.intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
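The enqueue-then-barrier pattern can be sketched with plain POSIX AIO as below; this is an illustration of the idea only, not perf's implementation (the record__aio_* names above refer to the real code, everything else here is made up for the example). On older glibc the snippet may need linking with -lrt.

#include <aio.h>
#include <errno.h>
#include <string.h>
#include <sys/types.h>

/* Illustration: queue one write at the current trace-file offset, advance
 * the offset linearly, and later block until the write completes (the role
 * record__aio_sync() plays in the real code). */
static struct aiocb cb;

static int trace_aio_push(int fd, void *buf, size_t size, off_t *off)
{
	memset(&cb, 0, sizeof(cb));
	cb.aio_fildes = fd;
	cb.aio_buf    = buf;
	cb.aio_nbytes = size;
	cb.aio_offset = *off;
	if (aio_write(&cb))
		return -errno;
	*off += size;		/* trace file offset is incremented linearly */
	return 0;
}

static ssize_t trace_aio_sync(void)
{
	const struct aiocb * const list[] = { &cb };

	while (aio_error(&cb) == EINPROGRESS)
		aio_suspend(list, 1, NULL);	/* barrier until completion */
	return aio_return(&cb);
}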
2018-12-17  perf mmap: Map data buffer for preserving collected data  (Alexey Budankov)
The map->data buffer is used to preserve map->base profiling data for writing to disk. AIO map->cblock is used to queue corresponding map->data buffer for asynchronous writing. Signed-off-by: Alexey Budankov <alexey.budankov@linux.intel.com> Reviewed-by: Jiri Olsa <jolsa@redhat.com> Acked-by: Namhyung Kim <namhyung@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/5fcda10c-6c63-68df-383a-c6d9e5d1f918@linux.intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-12-17  perf bpf: Use ERR_CAST instead of ERR_PTR(PTR_ERR())  (Wen Yang)
Use ERR_CAST inlined function instead of ERR_PTR(PTR_ERR(...)). This makes it more readable and also fix this warning detected by err_cast.cocci: tools/perf/util/bpf-loader.c:1606:11-18: WARNING: ERR_CAST can be used with op Signed-off-by: Wen Yang <wen.yang99@zte.com.cn> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Julia Lawall <julia.lawall@lip6.fr> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Wen Yang <yellowriver2010@hotmail.com> Cc: zhong.weidong@zte.com.cn Link: http://lkml.kernel.org/r/20181127090610.28488-1-wen.yang99@zte.com.cn Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
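The transformation itself looks like the sketch below; the helper definitions are simplified userspace stand-ins for the kernel's <linux/err.h> versions, included only so the snippet is self-contained:

#include <stdint.h>

/* Simplified stand-ins for the kernel's ERR_PTR()/PTR_ERR()/ERR_CAST(). */
static inline void *ERR_PTR(long error)       { return (void *)error; }
static inline long PTR_ERR(const void *ptr)   { return (long)(intptr_t)ptr; }
static inline void *ERR_CAST(const void *ptr) { return (void *)ptr; }

struct bpf_map;		/* opaque, for illustration only */

static void *propagate(struct bpf_map *err_map)
{
	/* Before: encode the error pointer to a long and back again. */
	/* return ERR_PTR(PTR_ERR(err_map)); */

	/* After: same value, clearer intent, and keeps err_cast.cocci quiet. */
	return ERR_CAST(err_map);
}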
2018-12-17  perf script: Use fallbacks for branch stacks  (Adrian Hunter)
Branch stacks do not necessarily have the same cpumode as the 'ip'. Use the fallback functions in those cases. This patch depends on patch "perf tools: Add fallback functions for cases where cpumode is insufficient". Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: David S. Miller <davem@davemloft.net> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Leo Yan <leo.yan@linaro.org> Cc: Mathieu Poirier <mathieu.poirier@linaro.org> Cc: stable@vger.kernel.org # 4.19 Link: http://lkml.kernel.org/r/20181106210712.12098-4-adrian.hunter@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-12-17  perf tools: Use fallback for sample_addr_correlates_sym() cases  (Adrian Hunter)
thread__resolve() is used in the sample_addr_correlates_sym() cases where 'addr' is a destination of a branch which does not necessarily have the same cpumode as the 'ip'. Use the fallback function in that case. This patch depends on patch "perf tools: Add fallback functions for cases where cpumode is insufficient". Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: David S. Miller <davem@davemloft.net> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Leo Yan <leo.yan@linaro.org> Cc: Mathieu Poirier <mathieu.poirier@linaro.org> Cc: stable@vger.kernel.org # 4.19 Link: http://lkml.kernel.org/r/20181106210712.12098-3-adrian.hunter@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-12-17  perf thread: Add fallback functions for cases where cpumode is insufficient  (Adrian Hunter)
For branch stacks or branch samples, the sample cpumode might not be correct because it applies only to the sample 'ip' and not necessary to 'addr' or branch stack addresses. Add fallback functions that can be used to deal with those cases Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: David S. Miller <davem@davemloft.net> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Leo Yan <leo.yan@linaro.org> Cc: Mathieu Poirier <mathieu.poirier@linaro.org> Cc: stable@vger.kernel.org # 4.19 Link: http://lkml.kernel.org/r/20181106210712.12098-2-adrian.hunter@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-12-17  perf machine: Record if a arch has a single user/kernel address space  (Adrian Hunter)
Some architectures have a single address space for kernel and user addresses, which makes it possible to determine if an address is in kernel space or user space. Some don't, e.g.: sparc. Cache that info in perf_env so that, for instance, code needing to fallback failed symbol lookups at the kernel space in single address space arches can lookup at userspace. Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: David S. Miller <davem@davemloft.net> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Leo Yan <leo.yan@linaro.org> Cc: Mathieu Poirier <mathieu.poirier@linaro.org> Cc: stable@vger.kernel.org # 4.19 Link: http://lkml.kernel.org/r/20181106210712.12098-2-adrian.hunter@intel.com [ split from a larger patch ] Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-12-17  perf env: Also consider env->arch == NULL as local operation  (Arnaldo Carvalho de Melo)
We'll set a new machine field based on env->arch, which for live mode, like with 'perf top' means we need to use uname() to figure the name of the arch, fix perf_env__arch() to consider both (env == NULL) and (env->arch == NULL) as local operation. Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: David S. Miller <davem@davemloft.net> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Leo Yan <leo.yan@linaro.org> Cc: Mathieu Poirier <mathieu.poirier@linaro.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Wang Nan <wangnan0@huawei.com> Cc: stable@vger.kernel.org # 4.19 Link: https://lkml.kernel.org/n/tip-vcz4ufzdon7cwy8dm2ua53xk@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-12-17  perf map: Remove extra indirection from map__find()  (Eric Saint-Etienne)
A double pointer is used in map__find() where a single pointer is enough because the function doesn't affect the rbtree and the rbtree is locked. Signed-off-by: Eric Saint-Etienne <eric.saint.etienne@oracle.com> Acked-by: Jiri Olsa <jolsa@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Eric Saint-Etienne <eric.saintetienne@gmail.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/1542969759-24346-1-git-send-email-eric.saint.etienne@oracle.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-12-17  perf stat: Fix CSV mode column output for non-cgroup events  (Stephane Eranian)
When using the -x option, perf stat prints CSV-style output with one event per line. For each event, it prints the count, the unit, the event name, the cgroup, and a bunch of other event specific fields (such as insn per cycles). When you use CSV-style mode, you expect a normalized output where each event is printed with the same number of fields regardless of what it is so it can easily be imported into a spreadsheet or parsed. For instance, if an event does not have a unit, then print an empty field for it. Although this approach was implemented for the unit, it was not for the cgroup. When mixing cgroup and non-cgroup events, then non-cgroup events would not show an empty field, instead the next field was printed, make columns not line up correctly. This patch fixes the cgroup output issues by forcing an empty field for non-cgroup events as soon as one event has cgroup. Before: <not counted> @ @cycles @foo @ 0 @100.00@@ 2531614 @ @cycles @6420922@100.00@ @ foo cgroup lines up with time_running! After: <not counted> @ @cycles @foo @0 @100.00@@ 2594834 @ @cycles @ @5287372 @100.00@@ Fields line up. Signed-off-by: Stephane Eranian <eranian@google.com> Acked-by: Jiri Olsa <jolsa@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/1541587845-9150-1-git-send-email-eranian@google.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-12-17  perf stat: Fix shadow stats for clock events  (Ravi Bangoria)
Commit 0aa802a79469 ("perf stat: Get rid of extra clock display function") introduced scale and unit for clock events. Thus, perf_stat__update_shadow_stats() now saves scaled values of clock events in msecs, instead of original nsecs. But while calculating values of shadow stats we still consider clock event values in nsecs. This results in a wrong shadow stat values. Ex, # ./perf stat -e task-clock,cycles ls <SNIP> 2.60 msec task-clock:u # 0.877 CPUs utilized 2,430,564 cycles:u # 1215282.000 GHz Fix this by saving original nsec values for clock events in perf_stat__update_shadow_stats(). After patch: # ./perf stat -e task-clock,cycles ls <SNIP> 3.14 msec task-clock:u # 0.839 CPUs utilized 3,094,528 cycles:u # 0.985 GHz Suggested-by: Jiri Olsa <jolsa@redhat.com> Reported-by: Anton Blanchard <anton@samba.org> Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Reviewed-by: Jiri Olsa <jolsa@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Thomas Richter <tmricht@linux.vnet.ibm.com> Cc: yuzhoujian@didichuxing.com Fixes: 0aa802a79469 ("perf stat: Get rid of extra clock display function") Link: http://lkml.kernel.org/r/20181116042843.24067-1-ravi.bangoria@linux.ibm.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
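The unit mix-up can be seen with a little arithmetic (values rounded and purely illustrative, not meant to reproduce the exact output above): cycles divided by task-clock in nanoseconds is already a value in GHz, so feeding in a millisecond-scaled value inflates the ratio by a factor of 10^6.

#include <stdio.h>

int main(void)
{
	double cycles   = 2430564.0;	/* counted cycles (from the log above)  */
	double clock_ms = 2.60;		/* task-clock as printed, in msec       */
	double clock_ns = clock_ms * 1e6;

	printf("correct: %.3f GHz\n", cycles / clock_ns);	/* ~0.935 GHz   */
	printf("buggy:   %.3f GHz\n", cycles / clock_ms);	/* ~934832 GHz  */
	return 0;
}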
2018-11-21  perf pmu: Move *_cpuid_str() weak functions to header.c  (Kan Liang)
The weak functions, strcmp_cpuid_str() and get_cpuid_str(), are defined in pmu.c. Most of the cpuid related functions, including *_cpuid_str()'s declaration and platform specific definition, are in header.c/h. To make the declaration and definition of all cpuid related functions in a consistent place, move the weak functions to header.c. There is no functional change. Suggested-by: Jiri Olsa <jolsa@kernel.org> Signed-off-by: Kan Liang <kan.liang@linux.intel.com> Link: http://lkml.kernel.org/r/20181121164939.13482-1-kan.liang@linux.intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-11-21  perf symbols: Fix slowness due to -ffunction-section  (Eric Saint-Etienne)
Perf can take minutes to parse an image when -ffunction-sections is used. This is especially true with the kernel image when it is compiled this way, which is the arm64 default since the patchset "Enable deadcode elimination at link time". Perf organizes maps using an rbtree. Whenever perf finds a new symbol, it first searches this rbtree for the map it belongs to, by comparing section names with strcmp(). When it finds the map with the right name, it uses it to add the symbol. With a usual image there aren't so many maps, but when using -ffunction-sections there's basically one map per function. With the kernel image that's north of 40,000 maps. For most symbols perf has to parse the entire rbtree to eventually create a new map and add it. Consequently perf spends most of the time browsing an rbtree that keeps getting larger. This performance fix introduces a secondary rbtree that indexes maps based on the section name. Signed-off-by: Eric Saint-Etienne <eric.saint.etienne@oracle.com> Reviewed-by: Dave Kleikamp <dave.kleikamp@oracle.com> Reviewed-by: David Aldridge <david.aldridge@oracle.com> Reviewed-by: Rob Gardner <rob.gardner@oracle.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/1542822679-25591-1-git-send-email-eric.saint.etienne@oracle.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-11-21  perf vendor events: Add stepping in CPUID string for x86  (Kan Liang)
The perf tools cannot find the proper event list for the Cascadelake server. Because the Cascadelake server and the Skylake server have the same CPU model number, which are used by the perf tools to find the event list. The stepping for Skylake server is up to 4. The stepping for Cascadelake server starts from 5. The stepping can be used to distinguish between them. The stepping is added in get_cpuid_str(). The stepping information for Skylake server is updated in mapfile.csv. A x86 specific strcmp_cpuid_cmp() function is added to handle two CPUID formats in mapfile.csv, "vendor-family-model-stepping" and "vendor-family-model": - If a cpuid-regular-expression from the mapfile.csv using the new stepping format, a cpuid-string generated on the machine must include stepping. Otherwise, it is a mismatch. - If the cpuid-regular-expression using the old non-stepping format, the stepping in the cpuid-string will be ignored. The script, using environment string "PERF_CPUID" without stepping on Skylake server, will be broken. If so, users must fix their scripts. Committer notes: Fixed this build error on centos:6 and debian:7: arch/x86/util/header.c: In function 'is_full_cpuid': arch/x86/util/header.c:82:39: error: declaration of 'cpuid' shadows a global declaration [-Werror=shadow] arch/x86/util/header.c:12:1: error: shadowed declaration is here [-Werror=shadow] arch/x86/util/header.c: In function 'strcmp_cpuid_str': arch/x86/util/header.c:98:56: error: declaration of 'cpuid' shadows a global declaration [-Werror=shadow] arch/x86/util/header.c:12:1: error: shadowed declaration is here [-Werror=shadow] cc1: all warnings being treated as errors Signed-off-by: Kan Liang <kan.liang@linux.intel.com> Reviewed-by: Jiri Olsa <jolsa@kernel.org> Cc: Andi Kleen <ak@linux.intel.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/20181114212416.15665-1-kan.liang@linux.intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-11-21  perf stat: Use perf_evsel__is_clocki() for clock events  (Ravi Bangoria)
We already have function to check if a given event is either SW_CPU_CLOCK or SW_TASK_CLOCK. Utilize it. Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Acked-by: Jiri Olsa <jolsa@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Anton Blanchard <anton@samba.org> Cc: Jin Yao <yao.jin@linux.intel.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Thomas Richter <tmricht@linux.vnet.ibm.com> Cc: yuzhoujian@didichuxing.com Link: http://lkml.kernel.org/r/20181115095533.16930-1-ravi.bangoria@linux.ibm.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-11-21  perf pmu: Suppress potential format-truncation warning  (Ben Hutchings)
Depending on which functions are inlined in util/pmu.c, the snprintf() calls in perf_pmu__parse_{scale,unit,per_pkg,snapshot}() might trigger a warning: util/pmu.c: In function 'pmu_aliases': util/pmu.c:178:31: error: '%s' directive output may be truncated writing up to 255 bytes into a region of size between 0 and 4095 [-Werror=format-truncation=] snprintf(path, PATH_MAX, "%s/%s.unit", dir, name); ^~ I found this when trying to build perf from Linux 3.16 with gcc 8. However I can reproduce the problem in mainline if I force __perf_pmu__new_alias() to be inlined. Suppress this by using scnprintf() as has been done elsewhere in perf. Signed-off-by: Ben Hutchings <ben@decadent.org.uk> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: stable@vger.kernel.org Link: http://lkml.kernel.org/r/20181111184524.fux4taownc6ndbx6@decadent.org.uk Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
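A sketch of the idea, using a simplified scnprintf() along the lines of the tools/lib helper (assumption: the real helper differs in detail): being an ordinary wrapper rather than the compiler-recognized snprintf() builtin, its call sites are not analyzed by -Wformat-truncation, and its return value is clamped to what was actually written.

#include <stdarg.h>
#include <stdio.h>

/* Simplified sketch of a scnprintf()-style wrapper (not the exact perf one). */
static int scnprintf(char *buf, size_t size, const char *fmt, ...)
{
	va_list args;
	int i;

	if (!size)
		return 0;

	va_start(args, fmt);
	i = vsnprintf(buf, size, fmt, args);
	va_end(args);

	if (i < 0)
		return 0;
	return i >= (int)size ? (int)size - 1 : i;	/* clamp, unlike snprintf() */
}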
2018-11-21  perf evlist: Rename perf_evlist__set_filter* to perf_evlist__set_tp_filter*  (Arnaldo Carvalho de Melo)
To better reflect that this is a tracepoint filter, as opposed, for instance to map based BPF filters. Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Wang Nan <wangnan0@huawei.com> Link: https://lkml.kernel.org/n/tip-9138svli6ddcphrr3ymy9oy3@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-11-19  perf tools: Restore proper cwd on return from mnt namespace  (Jiri Olsa)
When reporting on 'record' server we try to retrieve/use the mnt namespace of the profiled tasks. We use following API with cookie to hold the return namespace, roughly: nsinfo__mountns_enter(struct nsinfo *nsi, struct nscookie *nc) setns(newns, 0); ... new ns related open.. ... nsinfo__mountns_exit(struct nscookie *nc) setns(nc->oldns) Once finished we setns to old namespace, which also sets the current working directory (cwd) to "/", trashing the cwd we had. This is mostly fine, because we use absolute paths almost everywhere, but it screws up 'perf diff': # perf diff failed to open perf.data: No such file or directory (try 'perf record' first) ... Adding the current working directory to be part of the cookie and restoring it in the nsinfo__mountns_exit call. Signed-off-by: Jiri Olsa <jolsa@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Krister Johansen <kjlx@templeofstupid.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Fixes: 843ff37bb59e ("perf symbols: Find symbols in different mount namespace") Link: http://lkml.kernel.org/r/20181101170001.30019-1-jolsa@kernel.org [ No need to check for NULL args for free(), use zfree() for struct members ] Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-11-19  tools build feature: Check if get_current_dir_name() is available  (Arnaldo Carvalho de Melo)
As the namespace support code will use this, which is not available in some non _GNU_SOURCE libraries such as Android's bionic used in my container build tests (r12b and r15c at the moment). Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Wang Nan <wangnan0@huawei.com> Link: https://lkml.kernel.org/n/tip-x56ypm940pwclwu45d7jfj47@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-11-12  perf tools: Fix crash on synthesizing the unit  (Jiri Olsa)
Adam reported a record command crash for simple session like: $ perf record -e cpu-clock ls with following backtrace: Program received signal SIGSEGV, Segmentation fault. 3543 ev = event_update_event__new(size + 1, PERF_EVENT_UPDATE__UNIT, evsel->id[0]); (gdb) bt #0 perf_event__synthesize_event_update_unit #1 0x000000000051e469 in perf_event__synthesize_extra_attr #2 0x00000000004445cb in record__synthesize #3 0x0000000000444bc5 in __cmd_record ... We synthesize an update event that needs to touch the evsel id array, which is not defined at that time. Fix this by forcing the id allocation for events with their unit defined. Reflecting possible read_format ID bit in the attr tests. Reported-by: Yongxin Liu <yongxin.liu@outlook.com> Signed-off-by: Jiri Olsa <jolsa@kernel.org> Cc: Adam Lee <leeadamrobert@gmail.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=201477 Fixes: bfd8f72c2778 ("perf record: Synthesize unit/scale/... in event update") Link: http://lkml.kernel.org/r/20181112130012.5424-1-jolsa@kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-11-06  Merge tag 'perf-urgent-for-mingo-4.20-20181106' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux into perf/urgent  (Ingo Molnar)
Pull perf/urgent improvements and fixes from Arnaldo Carvalho de Melo: Intel PT SQL viewer: (Adrian Hunter) - Fall back to /usr/local/lib/libxed.so - Add Selected branches report - Add help window - Fix table find when table re-ordered Intel PT debug log (Adrian Hunter) - Add more event information - Add MTC and CYC timestamps perf record: (Andi Kleen) - Support weak groups, just like with 'perf stat' perf trace: (Arnaldo Carvalho de Melo) - Start augmenting raw_syscalls:{sys_enter,sys_exit}: goal is to have a generic, arch independent eBPF kernel component that is programmed with syscall table details, what to copy, how many bytes, pid, arg filters from the userspace via eBPF maps by the 'perf trace' tool that continues to use all its argument beautifiers, just taking advantage of the extra pointer contents. JVMTI: (Gustavo Romero) - Fix undefined symbol scnprintf in libperf-jvmti.so perf top: (Jin Yao) - Display the LBR stats in callchain entries perf stat: (Thomas Richter) - Handle different PMU names with common prefix arm64: (Will Deacon) - Fix arm64 tools build failure wrt smp_load_{acquire,release}. Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-11-06  perf tools: Do not zero sample_id_all for group members  (Jiri Olsa)
Andi reported following malfunction: # perf record -e '{ref-cycles,cycles}:S' -a sleep 1 # perf script non matching sample_id_all That's because we disable sample_id_all bit for non-sampling group members. We can't do that, because it needs to be the same over the whole event list. This patch keeps it untouched again. Reported-by: Andi Kleen <andi@firstfloor.org> Tested-by: Andi Kleen <andi@firstfloor.org> Signed-off-by: Jiri Olsa <jolsa@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/20180923150420.27327-1-jolsa@kernel.org Fixes: e9add8bac6c6 ("perf evsel: Disable write_backward for leader sampling group events") Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-11-05  perf intel-pt: Add MTC and CYC timestamps to debug log  (Adrian Hunter)
One cause of decoding errors is un-synchronized side-band data. Timestamps are needed to debug such cases. TSC packet timestamps are logged. Log also MTC and CYC timestamps. Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Link: http://lkml.kernel.org/r/20181105073505.8129-3-adrian.hunter@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-11-05  perf intel-pt: Add more event information to debug log  (Adrian Hunter)
More event information is useful for debugging, especially MMAP events. Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Link: http://lkml.kernel.org/r/20181105073505.8129-2-adrian.hunter@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-11-05  perf stat: Handle different PMU names with common prefix  (Thomas Richter)
On s390 the CPU Measurement Facility for counters now supports 2 PMUs named cpum_cf (CPU Measurement Facility for counters) and cpum_cf_diag (CPU Measurement Facility for diagnostic counters) for one and the same CPU. Running command [root@s35lp76 perf]# ./perf stat -e tx_c_tend \ -- ~/mytests/cf-tx-events 1 Measuring transactions TX_C_TABORT_NO_SPECIAL: 0 expected:0 TX_C_TABORT_SPECIAL: 0 expected:0 TX_C_TEND: 1 expected:1 TX_NC_TABORT: 11 expected:11 TX_NC_TEND: 1 expected:1 Performance counter stats for '/root/mytests/cf-tx-events 1': 2 tx_c_tend 0.002120091 seconds time elapsed 0.000121000 seconds user 0.002127000 seconds sys [root@s35lp76 perf]# displays output which is unexpected (and wrong): 2 tx_c_tend The test program definitely triggers only one transaction, as shown in line 'TX_C_TEND: 1 expected:1'. This is caused by the following call sequence: pmu_lookup() scans and installs a PMU. +--> pmu_aliases() parses all aliases in directory .../<pmu-name>/events/* which are file names. +--> pmu_aliases_parse() Read each file in directory and create an new alias entry. This is done with +--> perf_pmu__new_alias() and +--> __perf_pmu__new_alias() which also check for identical alias names. After pmu_aliases() returns, a complete list of event names for this pmu has been created. Now function pmu_add_cpu_aliases() is called to add the events listed in the json | files to the alias list of the cpu. +--> perf_pmu__find_map() Returns a pointer to the json events. Now function pmu_add_cpu_aliases() scans through all events listed in the JSON files for this CPU. Each json event pmu name is compared with the current PMU being built up and if they mismatch, the json event is added to the current PMUs alias list. To avoid duplicate entries the following comparison is done: if (!is_arm_pmu_core(name)) { pname = pe->pmu ? pe->pmu : "cpu"; if (strncmp(pname, name, strlen(pname))) continue; } The culprit is the strncmp() function. Using current s390 PMU naming, the first PMU is 'cpum_cf' and a long list of events is added, among them 'tx_c_tend' When the second PMU named 'cpum_cf_diag' is added, only one event named 'CF_DIAG' is added by the pmu_aliases() function. Now function pmu_add_cpu_aliases() is invoked for PMU 'cpum_cf_diag'. Since the CPUID string is the same for both PMUs, json file events for PMU named 'cpum_cf' are added to the PMU 'cpm_cf_diag' This happens because the strncmp() actually compares: strncmp("cpum_cf", "cpum_cf_diag", 6); The first parameter is the pmu name taken from the event in the json file. The second parameter is the pmu name of the PMU currently being built. They are different, but the length of the compare only tests the common prefix and this returns 0(true) when it should return false. Now all events for PMU cpum_cf are added to the alias list for pmu cpum_cf_diag. Later on in function parse_events_add_pmu() the event 'tx_c_end' is searched in all available PMUs and found twice, adding it two times to the evsel_list global variable which is the root of all events. This results in a counter value of 2 instead of 1. 
Output with this patch: [root@s35lp76 perf]# ./perf stat -e tx_c_tend \ -- ~/mytests/cf-tx-events 1 Measuring transactions TX_C_TABORT_NO_SPECIAL: 0 expected:0 TX_C_TABORT_SPECIAL: 0 expected:0 TX_C_TEND: 1 expected:1 TX_NC_TABORT: 11 expected:11 TX_NC_TEND: 1 expected:1 Performance counter stats for '/root/mytests/cf-tx-events 1': 1 tx_c_tend 0.001815365 seconds time elapsed 0.000123000 seconds user 0.001756000 seconds sys [root@s35lp76 perf]# Signed-off-by: Thomas Richter <tmricht@linux.ibm.com> Reviewed-by: Hendrik Brueckner <brueckner@linux.ibm.com> Reviewed-by: Sebastien Boisvert <sboisvert@gydle.com> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: stable@vger.kernel.org Fixes: 292c34c10249 ("perf pmu: Fix core PMU alias list for X86 platform") Link: http://lkml.kernel.org/r/20181023151616.78193-1-tmricht@linux.ibm.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
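The prefix-compare pitfall described above can be reproduced in isolation (standalone illustration, not the patched pmu.c logic):

#include <stdio.h>
#include <string.h>

int main(void)
{
	const char *json_pmu = "cpum_cf";	/* pmu name from the JSON event */
	const char *cur_pmu  = "cpum_cf_diag";	/* PMU currently being built up */

	/* Compares only the common prefix, so it reports a match (0). */
	printf("strncmp() -> %d\n", strncmp(json_pmu, cur_pmu, strlen(json_pmu)));

	/* A full comparison distinguishes the two PMU names (non-zero). */
	printf("strcmp()  -> %d\n", strcmp(json_pmu, cur_pmu));
	return 0;
}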
2018-11-05  perf evlist: Move perf_evsel__reset_weak_group into evlist  (Andi Kleen)
- Move the function from builtin-stat to evlist for reuse - Rename to evlist to match purpose better - Pass the evlist as first argument. - No functional changes Signed-off-by: Andi Kleen <ak@linux.intel.com> Acked-by: Jiri Olsa <jolsa@kernel.org> Link: http://lkml.kernel.org/r/20181001195927.14211-1-andi@firstfloor.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-11-03  Merge branch 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds)
Pull perf updates and fixes from Ingo Molnar: "These are almost all tooling updates: 'perf top', 'perf trace' and 'perf script' fixes and updates, an UAPI header sync with the merge window versions, license marker updates, much improved Sparc support from David Miller, and a number of fixes" * 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (66 commits) perf intel-pt/bts: Calculate cpumode for synthesized samples perf intel-pt: Insert callchain context into synthesized callchains perf tools: Don't clone maps from parent when synthesizing forks perf top: Start display thread earlier tools headers uapi: Update linux/if_link.h header copy tools headers uapi: Update linux/netlink.h header copy tools headers: Sync the various kvm.h header copies tools include uapi: Update linux/mmap.h copy perf trace beauty: Use the mmap flags table generated from headers perf beauty: Wire up the mmap flags table generator to the Makefile perf beauty: Add a generator for MAP_ mmap's flag constants tools include uapi: Update asound.h copy tools arch uapi: Update asm-generic/unistd.h and arm64 unistd.h copies tools include uapi: Update linux/fs.h copy perf callchain: Honour the ordering of PERF_CONTEXT_{USER,KERNEL,etc} perf cs-etm: Correct CPU mode for samples perf unwind: Take pgoff into account when reporting elf to libdwfl perf top: Do not use overwrite mode by default perf top: Allow disabling the overwrite mode perf trace: Beautify mount's first pathname arg ...
2018-10-31  perf intel-pt/bts: Calculate cpumode for synthesized samples  (Adrian Hunter)
In the absence of a fallback, samples must provide a correct cpumode for the 'ip'. Do that now there is no fallback. Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Reviewed-by: Jiri Olsa <jolsa@kernel.org> Cc: Andi Kleen <ak@linux.intel.com> Cc: David S. Miller <davem@davemloft.net> Cc: Leo Yan <leo.yan@linaro.org> Cc: Mathieu Poirier <mathieu.poirier@linaro.org> Cc: stable@vger.kernel.org # 4.19 Link: http://lkml.kernel.org/r/20181031091043.23465-6-adrian.hunter@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-10-31  perf intel-pt: Insert callchain context into synthesized callchains  (Adrian Hunter)
In the absence of a fallback, callchains must encode also the callchain context. Do that now there is no fallback. Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Reviewed-by: Jiri Olsa <jolsa@kernel.org> Cc: Andi Kleen <ak@linux.intel.com> Cc: David S. Miller <davem@davemloft.net> Cc: Leo Yan <leo.yan@linaro.org> Cc: Mathieu Poirier <mathieu.poirier@linaro.org> Cc: stable@vger.kernel.org # 4.19 Link: http://lkml.kernel.org/r/100ea2ec-ed14-b56d-d810-e0a6d2f4b069@intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>