author     Namhyung Kim <namhyung@kernel.org>       2024-02-06 21:05:44 -0800
committer  Ingo Molnar <mingo@kernel.org>           2024-04-10 06:13:57 +0200
commit     0259bf63f71e2accfeca4a4e346ede8edcc86aab (patch)
tree       9af8bdbc67f072d870c7df0de5df98d654f3696b /include/linux/perf_event.h
parent     9794563d4d053b1b46a0cc91901f0a11d8678c19 (diff)
perf/core: Optimize perf_adjust_freq_unthr_context()
perf_adjust_freq_unthr_context() was unnecessarily disabling and enabling
PMUs for each event; this should be done once per PMU instead. Add a
pmu_ctx->nr_freq counter so the check can be made at each PMU. As the PMU
context keeps separate active lists for the pinned and flexible groups,
factor out a new function to do the job (see the sketch after the diff
below).

Another minor optimization: PMUs with CAP_NO_INTERRUPT can be skipped even
when sampling events need to be unthrottled, since such PMUs cannot sample
and therefore never throttle.
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Tested-by: Mingwei Zhang <mizhang@google.com>
Reviewed-by: Ian Rogers <irogers@google.com>
Reviewed-by: Kan Liang <kan.liang@linux.intel.com>
Link: https://lore.kernel.org/r/20240207050545.2727923-1-namhyung@kernel.org
Diffstat (limited to 'include/linux/perf_event.h')
-rw-r--r--   include/linux/perf_event.h | 6 ++++++
1 file changed, 6 insertions(+), 0 deletions(-)
diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index d2a15c0c6f8a..3e33b366347a 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -883,6 +883,7 @@ struct perf_event_pmu_context {
 
 	unsigned int			nr_events;
 	unsigned int			nr_cgroups;
+	unsigned int			nr_freq;
 
 	atomic_t			refcount; /* event <-> epc */
 	struct rcu_head			rcu_head;
@@ -897,6 +898,11 @@ struct perf_event_pmu_context {
 	int				rotate_necessary;
 };
 
+static inline bool perf_pmu_ctx_is_active(struct perf_event_pmu_context *epc)
+{
+	return !list_empty(&epc->flexible_active) || !list_empty(&epc->pinned_active);
+}
+
 struct perf_event_groups {
 	struct rb_root			tree;
 	u64				index;
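
For context (the diffstat above is limited to the header file), the
kernel/events/core.c side of the change roughly takes the shape below. This
is a minimal sketch reconstructed from the commit message, not the verbatim
patch: the helper name perf_adjust_freq_unthr_events() and the exact order
of the checks are assumptions.

/*
 * Sketch only: factored-out per-list helper, assumed name. It walks one
 * active list and adjusts the period / unthrottles each event, without
 * any per-event pmu_disable()/pmu_enable() dance.
 */
static void perf_adjust_freq_unthr_events(struct list_head *event_list)
{
	struct perf_event *event;

	list_for_each_entry(event, event_list, active_list) {
		/* ... period adjustment and unthrottle logic ... */
	}
}

static void perf_adjust_freq_unthr_context(struct perf_event_context *ctx,
					   bool unthrottle)
{
	struct perf_event_pmu_context *pmu_ctx;

	/* Nothing to do: no freq events and no pending unthrottle. */
	if (!(ctx->nr_freq || unthrottle))
		return;

	raw_spin_lock(&ctx->lock);

	list_for_each_entry(pmu_ctx, &ctx->pmu_ctx_list, pmu_ctx_entry) {
		/* Per-PMU check via the new pmu_ctx->nr_freq counter. */
		if (!(pmu_ctx->nr_freq || unthrottle))
			continue;
		/* Skip PMU contexts with nothing on their active lists. */
		if (!perf_pmu_ctx_is_active(pmu_ctx))
			continue;
		/* Non-sampling PMUs never throttle; skip them too. */
		if (pmu_ctx->pmu->capabilities & PERF_PMU_CAP_NO_INTERRUPT)
			continue;

		/* One disable/enable pair per PMU, not per event. */
		perf_pmu_disable(pmu_ctx->pmu);
		perf_adjust_freq_unthr_events(&pmu_ctx->pinned_active);
		perf_adjust_freq_unthr_events(&pmu_ctx->flexible_active);
		perf_pmu_enable(pmu_ctx->pmu);
	}

	raw_spin_unlock(&ctx->lock);
}

The point of the restructuring is that the (potentially expensive)
perf_pmu_disable()/perf_pmu_enable() pair now runs at most once per PMU per
tick rather than once per active event.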