author    Linus Torvalds <torvalds@linux-foundation.org>  2015-11-03 17:38:09 -0800
committer Linus Torvalds <torvalds@linux-foundation.org>  2015-11-03 17:38:09 -0800
commit    b02ac6b18cd4e2c76bf0a102c20c429b973f5f76 (patch)
tree      87b3648f448627d61cb9ba32511584d6318b7bb6 /include
parent    105ff3cbf225036b75a6a46c96d1ddce8e7bdc66 (diff)
parent    bebd23a2ed31d47e7dd746d3b125068aa2c42d85 (diff)
Merge branch 'perf-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull perf updates from Ingo Molnar:
 "Kernel side changes:

   - Improve accuracy of perf/sched clock on x86. (Adrian Hunter)
   - Intel DS and BTS updates. (Alexander Shishkin)
   - Intel cstate PMU support. (Kan Liang)
   - Add group read support to perf_event_read(). (Peter Zijlstra)
   - Branch call hardware sampling support, implemented on x86 and
     PowerPC. (Stephane Eranian)
   - Event groups transactional interface enhancements. (Sukadev
     Bhattiprolu)
   - Enable proper x86/intel/uncore PMU support on multi-segment PCI
     systems. (Taku Izumi)
   - ... misc fixes and cleanups.

  The perf tooling team was very busy again with 200+ commits, the full
  diff doesn't fit into lkml size limits. Here's an (incomplete) list of
  the tooling highlights:

  New features:

   - Change the default event used in all tools (record/top): use the
     most precise "cycles" hw counter available, i.e. when the user
     doesn't specify any event, it will try using cycles:ppp, cycles:pp,
     etc and fall back transparently until it finds a working counter.
     (Arnaldo Carvalho de Melo)

   - Integration of perf with eBPF that, given an eBPF .c source file
     (or .o file built for the 'bpf' target with clang), will get it
     automatically built, validated and loaded into the kernel via the
     sys_bpf syscall, which can then be used and seen using 'perf trace'
     and other tools. (Wang Nan)

  Various user interface improvements:

   - Automatic pager invocation on long help output. (Namhyung Kim)

   - Search for more options when passing args to -h, e.g.: (Arnaldo
     Carvalho de Melo)

       $ perf report -h interface

       Usage: perf report [<options>]

        --gtk    Use the GTK2 interface
        --stdio  Use the stdio interface
        --tui    Use the TUI interface

   - Show ordered command line options when -h is used or when an
     unknown option is specified. (Arnaldo Carvalho de Melo)

   - If options are passed after -h, show just its descriptions, not all
     options. (Arnaldo Carvalho de Melo)

   - Implement column based horizontal scrolling in the hists browser
     (top, report), making it possible to use the TUI for things like
     'perf mem report' where there are many more columns than can fit in
     a terminal. (Arnaldo Carvalho de Melo)

   - Enhance the error reporting of tracepoint event parsing, e.g.:

       $ oldperf record -e sched:sched_switc usleep 1
       event syntax error: 'sched:sched_switc'
                            \___ unknown tracepoint

       Run 'perf list' for a list of valid events

     Now we get the much nicer:

       $ perf record -e sched:sched_switc ls
       event syntax error: 'sched:sched_switc'
                            \___ can't access trace events

       Error: No permissions to read
       /sys/kernel/debug/tracing/events/sched/sched_switc
       Hint:  Try 'sudo mount -o remount,mode=755 /sys/kernel/debug'

     And after we have those mount point permissions fixed:

       $ perf record -e sched:sched_switc ls
       event syntax error: 'sched:sched_switc'
                            \___ unknown tracepoint

       Error: File /sys/kernel/debug/tracing/events/sched/sched_switc not found.
       Hint:  Perhaps this kernel misses some CONFIG_ setting to enable this feature?

     I.e. basically now the event parsing routine uses the
     strerror_open() routines introduced by and used in the 'perf trace'
     work. (Jiri Olsa)

   - Fail properly when pattern matching fails to find a tracepoint,
     i.e. '-e non:existent' was being correctly handled, with a proper
     error message about that not being a valid event, but
     '-e non:existent*' wasn't, fix it. (Jiri Olsa)

   - Do event name substring search as last resort in 'perf list'.
     (Arnaldo Carvalho de Melo) E.g.:

       # perf list clock

       List of pre-defined events (to be used in -e):

        cpu-clock                           [Software event]
        task-clock                          [Software event]
        uncore_cbox_0/clockticks/           [Kernel PMU event]
        uncore_cbox_1/clockticks/           [Kernel PMU event]
        kvm:kvm_pvclock_update              [Tracepoint event]
        kvm:kvm_update_master_clock         [Tracepoint event]
        power:clock_disable                 [Tracepoint event]
        power:clock_enable                  [Tracepoint event]
        power:clock_set_rate                [Tracepoint event]
        syscalls:sys_enter_clock_adjtime    [Tracepoint event]
        syscalls:sys_enter_clock_getres     [Tracepoint event]
        syscalls:sys_enter_clock_gettime    [Tracepoint event]
        syscalls:sys_enter_clock_nanosleep  [Tracepoint event]
        syscalls:sys_enter_clock_settime    [Tracepoint event]
        syscalls:sys_exit_clock_adjtime     [Tracepoint event]
        syscalls:sys_exit_clock_getres      [Tracepoint event]
        syscalls:sys_exit_clock_gettime     [Tracepoint event]
        syscalls:sys_exit_clock_nanosleep   [Tracepoint event]
        syscalls:sys_exit_clock_settime     [Tracepoint event]

  Intel PT hardware tracing enhancements:

   - Accept a zero --itrace period, meaning "as often as possible". In
     the case of Intel PT that is the same as a period of 1 and a unit
     of 'instructions' (i.e. --itrace=i1i). (Adrian Hunter)

   - Harmonize itrace's synthesized callchains with the existing
     --max-stack tool option. (Adrian Hunter)

   - Allow time to be displayed in nanoseconds in 'perf script'. (Adrian
     Hunter)

   - Fix potential infinite loop when handling Intel PT timestamps.
     (Adrian Hunter)

   - Slightly improve Intel PT debug logging. (Adrian Hunter)

   - Warn when AUX data has been lost, just like when processing
     PERF_RECORD_LOST. (Adrian Hunter)

   - Further document the export-to-postgresql.py script. (Adrian Hunter)

   - Add option to synthesize branch stack from auxtrace data. (Adrian
     Hunter)

  Misc notable changes:

   - Switch the default callchain output mode to 'graph,0.5,caller', to
     make it look like the default for other tools, reducing the
     learning curve for people used to 'caller' based viewing. (Arnaldo
     Carvalho de Melo)

   - Various call chain usability enhancements. (Namhyung Kim)

   - Introduce the 'P' event modifier, meaning 'max precision level,
     please', i.e.:

       $ perf record -e cycles:P usleep 1

     is now similar to:

       $ perf record usleep 1

     Useful, for instance, when specifying multiple events. (Jiri Olsa)

   - Add 'socket' sort entry, to sort by the processor socket in
     'perf top' and 'perf report'. (Kan Liang)

   - Introduce --socket-filter to 'perf report', for filtering by
     processor socket. (Kan Liang)

   - Add new "Zoom into Processor Socket" operation in the perf hists
     browser, used in 'perf top' and 'perf report'. (Kan Liang)

   - Allow probing on kmodules without DWARF. (Masami Hiramatsu)

   - Fix 'perf probe -l' for probes added to kernel module functions.
     (Masami Hiramatsu)

   - Preparatory work for the 'perf stat record' feature that will allow
     generating perf.data files with counting data in addition to the
     sampling mode we have now. (Jiri Olsa)

   - Update libtraceevent KVM plugin. (Paolo Bonzini)

   - ... plus lots of other enhancements that I failed to list properly,
     by: Adrian Hunter, Alexander Shishkin, Andi Kleen, Andrzej Hajda,
     Arnaldo Carvalho de Melo, Dima Kogan, Don Zickus, Geliang Tang, He
     Kuang, Huaitong Han, Ingo Molnar, Jan Stancek, Jiri Olsa, Kan
     Liang, Kirill Tkhai, Masami Hiramatsu, Matt Fleming, Namhyung Kim,
     Paolo Bonzini, Peter Zijlstra, Rabin Vincent, Scott Wood, Stephane
     Eranian, Sukadev Bhattiprolu, Taku Izumi, Vaishali Thakkar, Wang
     Nan, Yang Shi and Yunlong Song"

* 'perf-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (260 commits)
  perf unwind: Pass symbol source to libunwind
  tools build: Fix libiberty feature detection
  perf tools: Compile scriptlets to BPF objects when passing '.c' to --event
  perf record: Add clang options for compiling BPF scripts
  perf bpf: Attach eBPF filter to perf event
  perf tools: Make sure fixdep is built before libbpf
  perf script: Enable printing of branch stack
  perf trace: Add cmd string table to decode sys_bpf first arg
  perf bpf: Collect perf_evsel in BPF object files
  perf tools: Load eBPF object into kernel
  perf tools: Create probe points for BPF programs
  perf tools: Enable passing bpf object file to --event
  perf ebpf: Add the libbpf glue
  perf tools: Make perf depend on libbpf
  perf symbols: Fix endless loop in dso__split_kallsyms_for_kcore
  perf tools: Enable pre-event inherit setting by config terms
  perf symbols: we can now read separate debug-info files based on a build ID
  perf symbols: Fix type error when reading a build-id
  perf tools: Search for more options when passing args to -h
  perf stat: Cache aggregated map entries in extra cpumap
  ...
Diffstat (limited to 'include')
-rw-r--r--  include/linux/perf_event.h       | 120
-rw-r--r--  include/uapi/linux/perf_event.h  |   6
2 files changed, 107 insertions(+), 19 deletions(-)
diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index 092a0e8a479a..d841d33bcdc9 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -140,33 +140,67 @@ struct hw_perf_event {
};
#endif
};
+ /*
+ * If the event is a per-task event, this will point to the task in
+ * question. See the comment in perf_event_alloc().
+ */
struct task_struct *target;
+
+/*
+ * hw_perf_event::state flags; used to track the PERF_EF_* state.
+ */
+#define PERF_HES_STOPPED 0x01 /* the counter is stopped */
+#define PERF_HES_UPTODATE 0x02 /* event->count up-to-date */
+#define PERF_HES_ARCH 0x04
+
int state;
+
+ /*
+ * The last observed hardware counter value, updated with a
+ * local64_cmpxchg() such that pmu::read() can be called nested.
+ */
local64_t prev_count;
+
+ /*
+ * The period to start the next sample with.
+ */
u64 sample_period;
+
+ /*
+ * The period we started this sample with.
+ */
u64 last_period;
+
+ /*
+ * However much is left of the current period; note that this is
+ * a full 64-bit value and allows for generation of periods longer
+ * than hardware might allow.
+ */
local64_t period_left;
+
+ /*
+ * State for throttling the event, see __perf_event_overflow() and
+ * perf_adjust_freq_unthr_context().
+ */
u64 interrupts_seq;
u64 interrupts;
+ /*
+ * State for freq target events, see __perf_event_overflow() and
+ * perf_adjust_freq_unthr_context().
+ */
u64 freq_time_stamp;
u64 freq_count_stamp;
#endif
};
-/*
- * hw_perf_event::state flags
- */
-#define PERF_HES_STOPPED 0x01 /* the counter is stopped */
-#define PERF_HES_UPTODATE 0x02 /* event->count up-to-date */
-#define PERF_HES_ARCH 0x04
-
struct perf_event;
/*
* Common implementation detail of pmu::{start,commit,cancel}_txn
*/
-#define PERF_EVENT_TXN 0x1
+#define PERF_PMU_TXN_ADD 0x1 /* txn to add/schedule event on PMU */
+#define PERF_PMU_TXN_READ 0x2 /* txn to read event group from PMU */
/**
* pmu::capabilities flags
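
The prev_count comment in the hunk above describes the lock-free update pattern most drivers use. As a minimal sketch for a hypothetical driver (example_read_hw_counter() is invented, and counter-width masking is omitted), a pmu::read() implementation following that comment could look like:

    static void example_pmu_read(struct perf_event *event)
    {
            struct hw_perf_event *hwc = &event->hw;
            u64 prev, now;

            /* Retry if a nested pmu::read() updated prev_count under us. */
            do {
                    prev = local64_read(&hwc->prev_count);
                    now = example_read_hw_counter(hwc->idx); /* hypothetical */
            } while (local64_cmpxchg(&hwc->prev_count, prev, now) != prev);

            /* Fold the observed delta into the event count. */
            local64_add(now - prev, &event->count);
    }
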
@@ -210,7 +244,19 @@ struct pmu {
/*
* Try and initialize the event for this PMU.
- * Should return -ENOENT when the @event doesn't match this PMU.
+ *
+ * Returns:
+ * -ENOENT -- @event is not for this PMU
+ *
+ * -ENODEV -- @event is for this PMU but PMU not present
+ * -EBUSY -- @event is for this PMU but PMU temporarily unavailable
+ * -EINVAL -- @event is for this PMU but @event is not valid
+ * -EOPNOTSUPP -- @event is for this PMU, @event is valid, but not supported
+ * -EACCES -- @event is for this PMU, @event is valid, but no privileges
+ *
+ * 0 -- @event is for this PMU and valid
+ *
+ * Other error return values are allowed.
*/
int (*event_init) (struct perf_event *event);
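
Sketching the return-value convention documented above for a hypothetical driver (the example_pmu_has_sampling flag is invented for illustration):

    static int example_pmu_event_init(struct perf_event *event)
    {
            /* Not our event type: let the core try the next PMU. */
            if (event->attr.type != event->pmu->type)
                    return -ENOENT;

            /* Ours, but requesting a mode this PMU cannot service. */
            if (is_sampling_event(event) && !example_pmu_has_sampling)
                    return -EOPNOTSUPP;

            return 0;
    }
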
@@ -221,27 +267,61 @@ struct pmu {
void (*event_mapped) (struct perf_event *event); /*optional*/
void (*event_unmapped) (struct perf_event *event); /*optional*/
+ /*
+ * Flags for ->add()/->del()/->start()/->stop(). There are
+ * matching hw_perf_event::state flags.
+ */
#define PERF_EF_START 0x01 /* start the counter when adding */
#define PERF_EF_RELOAD 0x02 /* reload the counter when starting */
#define PERF_EF_UPDATE 0x04 /* update the counter when stopping */
/*
- * Adds/Removes a counter to/from the PMU, can be done inside
- * a transaction, see the ->*_txn() methods.
+ * Adds/Removes a counter to/from the PMU, can be done inside a
+ * transaction, see the ->*_txn() methods.
+ *
+ * The add/del callbacks will reserve all hardware resources required
+ * to service the event, this includes any counter constraint
+ * scheduling etc.
+ *
+ * Called with IRQs disabled and the PMU disabled on the CPU the event
+ * is on.
+ *
+ * ->add() called without PERF_EF_START should result in the same state
+ * as ->add() followed by ->stop().
+ *
+ * ->del() must always stop the event with the effect of a
+ * PERF_EF_UPDATE ->stop(). If it does so by calling ->stop(), that
+ * ->stop() must cope with the event already being stopped, and with
+ * being called without PERF_EF_UPDATE.
*/
int (*add) (struct perf_event *event, int flags);
void (*del) (struct perf_event *event, int flags);
/*
- * Starts/Stops a counter present on the PMU. The PMI handler
- * should stop the counter when perf_event_overflow() returns
- * !0. ->start() will be used to continue.
+ * Starts/Stops a counter present on the PMU.
+ *
+ * The PMI handler should stop the counter when perf_event_overflow()
+ * returns !0. ->start() will be used to continue.
+ *
+ * Also used to change the sample period.
+ *
+ * Called with IRQs disabled and the PMU disabled on the CPU the event
+ * is on -- will be called from NMI context when the PMU generates
+ * NMIs.
+ *
+ * ->stop() with PERF_EF_UPDATE will read the counter and update
+ * period/count values like ->read() would.
+ *
+ * ->start() with PERF_EF_RELOAD will reprogram the counter
+ * value, must be preceded by a ->stop() with PERF_EF_UPDATE.
*/
void (*start) (struct perf_event *event, int flags);
void (*stop) (struct perf_event *event, int flags);
/*
* Updates the counter value of the event.
+ *
+ * For sampling capable PMUs this will also update the software period
+ * hw_perf_event::period_left field.
*/
void (*read) (struct perf_event *event);
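
To make the PERF_EF_*/PERF_HES_* interplay above concrete, here is a minimal sketch of a hypothetical driver's ->stop()/->start() pair (the example_hw_*() helpers are invented; a real driver would also recompute period_left when reprogramming):

    static void example_pmu_stop(struct perf_event *event, int flags)
    {
            struct hw_perf_event *hwc = &event->hw;

            if (!(hwc->state & PERF_HES_STOPPED)) {
                    example_hw_disable(hwc->idx);   /* hypothetical */
                    hwc->state |= PERF_HES_STOPPED;
            }

            /* PERF_EF_UPDATE: fold the hardware value into the count. */
            if ((flags & PERF_EF_UPDATE) && !(hwc->state & PERF_HES_UPTODATE)) {
                    example_pmu_read(event);        /* as sketched earlier */
                    hwc->state |= PERF_HES_UPTODATE;
            }
    }

    static void example_pmu_start(struct perf_event *event, int flags)
    {
            struct hw_perf_event *hwc = &event->hw;

            /* PERF_EF_RELOAD is only valid after a PERF_EF_UPDATE stop. */
            if (flags & PERF_EF_RELOAD) {
                    WARN_ON_ONCE(!(hwc->state & PERF_HES_UPTODATE));
                    example_hw_write_counter(hwc->idx,      /* hypothetical */
                                    local64_read(&hwc->period_left));
            }

            hwc->state = 0;
            example_hw_enable(hwc->idx);            /* hypothetical */
    }
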
@@ -252,20 +332,26 @@ struct pmu {
*
* Start the transaction, after this ->add() doesn't need to
* do schedulability tests.
+ *
+ * Optional.
*/
- void (*start_txn) (struct pmu *pmu); /* optional */
+ void (*start_txn) (struct pmu *pmu, unsigned int txn_flags);
/*
* If ->start_txn() disabled the ->add() schedulability test
* then ->commit_txn() is required to perform one. On success
* the transaction is closed. On error the transaction is kept
* open until ->cancel_txn() is called.
+ *
+ * Optional.
*/
- int (*commit_txn) (struct pmu *pmu); /* optional */
+ int (*commit_txn) (struct pmu *pmu);
/*
* Will cancel the transaction, assumes ->del() is called
* for each successful ->add() during the transaction.
+ *
+ * Optional.
*/
- void (*cancel_txn) (struct pmu *pmu); /* optional */
+ void (*cancel_txn) (struct pmu *pmu);
/*
* Will return the value for perf_event_mmap_page::index for this event,
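
Core-side, the transactional interface above is used roughly like this when scheduling an event group (simplified sketch, not the exact kernel code; the example_*() helpers stand in for the sibling-list iteration):

    static int example_group_sched_in(struct perf_event *group, struct pmu *pmu)
    {
            /* Defer the per-event schedulability tests to commit time. */
            pmu->start_txn(pmu, PERF_PMU_TXN_ADD);

            if (example_add_group(group))   /* hypothetical: ->add() each member */
                    goto fail;

            /* A single schedulability test for the whole group. */
            if (!pmu->commit_txn(pmu))
                    return 0;
    fail:
            example_del_added(group);       /* hypothetical: ->del() what was added */
            pmu->cancel_txn(pmu);
            return -EAGAIN;
    }
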
diff --git a/include/uapi/linux/perf_event.h b/include/uapi/linux/perf_event.h
index 2881145cda86..651221334f49 100644
--- a/include/uapi/linux/perf_event.h
+++ b/include/uapi/linux/perf_event.h
@@ -168,6 +168,7 @@ enum perf_branch_sample_type_shift {
PERF_SAMPLE_BRANCH_CALL_STACK_SHIFT = 11, /* call/ret stack */
PERF_SAMPLE_BRANCH_IND_JUMP_SHIFT = 12, /* indirect jumps */
+ PERF_SAMPLE_BRANCH_CALL_SHIFT = 13, /* direct call */
PERF_SAMPLE_BRANCH_MAX_SHIFT /* non-ABI */
};
@@ -188,6 +189,7 @@ enum perf_branch_sample_type {
PERF_SAMPLE_BRANCH_CALL_STACK = 1U << PERF_SAMPLE_BRANCH_CALL_STACK_SHIFT,
PERF_SAMPLE_BRANCH_IND_JUMP = 1U << PERF_SAMPLE_BRANCH_IND_JUMP_SHIFT,
+ PERF_SAMPLE_BRANCH_CALL = 1U << PERF_SAMPLE_BRANCH_CALL_SHIFT,
PERF_SAMPLE_BRANCH_MAX = 1U << PERF_SAMPLE_BRANCH_MAX_SHIFT,
};
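
From userspace, the new bit is requested via perf_event_attr::branch_sample_type. A minimal sketch, assuming a kernel with this patch and a PMU that supports branch sampling (e.g. x86 LBR, or the PowerPC support added in this series):

    #include <linux/perf_event.h>
    #include <string.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    static int open_direct_call_sampler(void)
    {
            struct perf_event_attr attr;

            memset(&attr, 0, sizeof(attr));
            attr.size = sizeof(attr);
            attr.type = PERF_TYPE_HARDWARE;
            attr.config = PERF_COUNT_HW_CPU_CYCLES;
            attr.sample_period = 100000;
            attr.sample_type = PERF_SAMPLE_IP | PERF_SAMPLE_BRANCH_STACK;
            attr.branch_sample_type = PERF_SAMPLE_BRANCH_USER |
                                      PERF_SAMPLE_BRANCH_CALL; /* new in this patch */

            /* pid = 0, cpu = -1: measure the calling task on any CPU. */
            return syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
    }
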
@@ -476,7 +478,7 @@ struct perf_event_mmap_page {
* u64 delta;
*
* quot = (cyc >> time_shift);
- * rem = cyc & ((1 << time_shift) - 1);
+ * rem = cyc & (((u64)1 << time_shift) - 1);
* delta = time_offset + quot * time_mult +
* ((rem * time_mult) >> time_shift);
*
@@ -507,7 +509,7 @@ struct perf_event_mmap_page {
* And vice versa:
*
* quot = cyc >> time_shift;
- * rem = cyc & ((1 << time_shift) - 1);
+ * rem = cyc & (((u64)1 << time_shift) - 1);
* timestamp = time_zero + quot * time_mult +
* ((rem * time_mult) >> time_shift);
*/
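
The (u64) cast matters because the untyped constant 1 is a signed 32-bit int, so for time_shift >= 31 the shift would overflow before being widened. A standalone userspace rendering of the documented conversion:

    #include <stdint.h>

    static uint64_t cyc_to_timestamp(uint64_t cyc, uint64_t time_zero,
                                     uint32_t time_mult, uint16_t time_shift)
    {
            uint64_t quot = cyc >> time_shift;
            /* The mask must be computed in 64 bits, hence the cast. */
            uint64_t rem = cyc & (((uint64_t)1 << time_shift) - 1);

            return time_zero + quot * time_mult +
                   ((rem * time_mult) >> time_shift);
    }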