commit d348c22394ad3c8eaf7bc693cb0ca0edc2ec5246
tree 57cbdede16efa659dc64f5ab78b82a8e78596272
parent 959bfe496bbaf3daa5dca32d397e29ea12471779
parent 7cede21e9f04f16a456d3c3c8a9a8899c8d84757
author Linus Torvalds <torvalds@linux-foundation.org> 2025-12-02 17:31:22 -0800
committer Linus Torvalds <torvalds@linux-foundation.org> 2025-12-02 17:31:22 -0800
Merge tag 'pm-6.19-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
Pull power management updates from Rafael Wysocki:
"There are quite a few interesting things here, including new hardware
support, new features, some bug fixes and documentation updates. In
addition, there is the usual bunch of minor fixes and cleanups all
over.
In the new hardware support category, there are intel_pstate and
intel_rapl driver updates to support new processors, Panther Lake,
Wildcat Lake, Nova Lake, and Diamond Rapids in the OOB mode, OPP and
bandwidth allocation support in the tegra186 cpufreq driver, and
JH7110S SoC support in dt-platdev cpufreq.
The new features are the PM QoS CPU latency limit for suspend-to-idle,
the netlink support for the energy model management, support for
terminating system suspend via a wakeup event during the sync of file
systems, configurable number of hibernation compression threads, the
runtime PM auto-cleanup macros, and the "poweroff" PM event that is
expected to be used during system shutdown.
Bugs are mostly fixed in cpuidle governors, but there are also fixes
elsewhere, like in the amd-pstate cpufreq driver.
Documentation updates include, but are not limited to, a new doc on
debugging shutdown hangs, cross-referencing fixes and cleanups in the
intel_pstate documentation, and updates of comments in the core
hibernation code.
Specifics:
- Introduce and document a QoS limit on CPU exit latency during
wakeup from suspend-to-idle (Ulf Hansson; a user-space usage sketch
follows this list)
- Add support for building libcpupower statically (Zuo An)
- Add support for sending netlink notifications to user space on
energy model updates (Changwoo Min, Peng Fan)
- Minor improvements to the Rust OPP interface (Tamir Duberstein)
- Fixes to scope-based pointers in the OPP library (Viresh Kumar)
- Use residency threshold in polling state override decisions in the
menu cpuidle governor (Aboorva Devarajan)
- Add sanity check for exit latency and target residency in the
cpuidle core (Rafael Wysocki)
- Use this_cpu_ptr() where possible in the teo governor (Christian
Loehle)
- Rework the handling of tick wakeups in the teo cpuidle governor to
increase the likelihood of stopping the scheduler tick in the cases
when tick wakeups can be counted as non-timer ones (Rafael Wysocki)
- Fix a reversed condition in the teo cpuidle governor and drop a
misguided target residency check from it (Rafael Wysocki)
- Clean up multiple minor defects in the teo cpuidle governor (Rafael
Wysocki)
- Update header inclusion to make it follow the Include What You Use
principle (Andy Shevchenko)
- Enable MSR-based RAPL PMU support in the intel_rapl power capping
driver and arrange for using it on the Panther Lake and Wildcat
Lake processors (Kuppuswamy Sathyanarayanan)
- Add support for Nova Lake and Wildcat Lake processors to the
intel_rapl power capping driver (Kaushlendra Kumar, Srinivas
Pandruvada)
- Add OPP and bandwidth support for Tegra186 (Aaron Kling)
- Optimizations for parameter array handling in the amd-pstate
cpufreq driver (Mario Limonciello; the string-table sizing pattern is
sketched after this list)
- Fix for mode changes with offline CPUs in the amd-pstate cpufreq
driver (Gautham Shenoy)
- Preserve freq_table_sorted across suspend/hibernate in the cpufreq
core (Zihuan Zhang)
- Adjust energy model rules for Intel hybrid platforms in the
intel_pstate cpufreq driver and improve printing of debug messages
in it (Rafael Wysocki)
- Replace deprecated strcpy() in cpufreq_unregister_governor()
(Thorsten Blum)
- Fix duplicate hyperlink target errors in the intel_pstate cpufreq
driver documentation and use :ref: directive for internal linking
in it (Swaraj Gaikwad, Bagas Sanjaya)
- Add Diamond Rapids OOB mode support to the intel_pstate cpufreq
driver (Kuppuswamy Sathyanarayanan)
- Use mutex guard for driver locking in the intel_pstate driver and
eliminate some code duplication from it (Rafael Wysocki; the guard
pattern is sketched after this list)
- Replace udelay() with usleep_range() in ACPI cpufreq (Kaushlendra
Kumar)
- Minor improvements to various cpufreq drivers (Christian Marangi,
Hal Feng, Jie Zhan, Marco Crivellari, Miaoqian Lin, and Shuhao Fu)
- Replace snprintf() with scnprintf() in show_trace_dev_match()
(Kaushlendra Kumar; see the sketch after this list)
- Fix memory allocation error handling in pm_vt_switch_required()
(Malaya Kumar Rout)
- Introduce CALL_PM_OP() macro and use it to simplify code in generic
PM operations (Kaushlendra Kumar)
- Add module param to backtrace all CPUs in the device power
management watchdog (Sergey Senozhatsky)
- Rework message printing in swsusp_save() (Rafael Wysocki)
- Make it possible to change the number of hibernation compression
threads (Xueqin Luo; see the sysfs sketch after the diffstat below)
- Clarify that only cgroup1 freezer uses PM freezer (Tejun Heo)
- Add document on debugging shutdown hangs to PM documentation and
correct a mistaken configuration option in it (Mario Limonciello)
- Shut down wakeup source timer before removing the wakeup source
from the list (Kaushlendra Kumar, Rafael Wysocki)
- Introduce new PMSG_POWEROFF event for system shutdown handling with
the help of PM device callbacks (Mario Limonciello)
- Make pm_test delay interruptible by wakeup events (Riwen Lu)
- Clean up kernel-doc comment style usage in the core hibernation
code and remove useless comments from it (Sunday Adelodun, Rafael
Wysocki)
- Add support for handling wakeup events and aborting the suspend
process while it is syncing file systems (Samuel Wu, Rafael
Wysocki)
- Add WQ_UNBOUND to pm_wq workqueue (Marco Crivellari)
- Add runtime PM wrapper macros for ACQUIRE()/ACQUIRE_ERR() and use
them in the PCI core and the ACPI TAD driver (Rafael Wysocki)
- Improve runtime PM in the ACPI TAD driver (Rafael Wysocki)
- Update pm_runtime_allow/forbid() documentation (Rafael Wysocki)
- Fix typos in runtime.c comments (Malaya Kumar Rout)
- Move governor.h from devfreq under include/linux/ and rename to
devfreq-governor.h to allow devfreq governor definitions outside of
drivers/devfreq/ (Dmitry Baryshkov)
- Use min() to improve readability in tegra30-devfreq.c (Thorsten
Blum)
- Fix potential use-after-free issue of OPP handling in
hisi_uncore_freq.c (Pengjie Zhang)
- Fix typo in DFSO_DOWNDIFFERENTIAL macro name in
governor_simpleondemand.c in devfreq (Riwen Lu)"
* tag 'pm-6.19-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (96 commits)
PM / devfreq: Fix typo in DFSO_DOWNDIFFERENTIAL macro name
cpuidle: Warn instead of bailing out if target residency check fails
cpuidle: Update header inclusion
Documentation: power/cpuidle: Document the CPU system wakeup latency QoS
cpuidle: Respect the CPU system wakeup QoS limit for cpuidle
sched: idle: Respect the CPU system wakeup QoS limit for s2idle
pmdomain: Respect the CPU system wakeup QoS limit for cpuidle
pmdomain: Respect the CPU system wakeup QoS limit for s2idle
PM: QoS: Introduce a CPU system wakeup QoS limit
cpuidle: governors: teo: Add missing space to the description
PM: hibernate: Extra cleanup of comments in swap handling code
PM / devfreq: tegra30: use min to simplify actmon_cpu_to_emc_rate
PM / devfreq: hisi: Fix potential UAF in OPP handling
PM / devfreq: Move governor.h to a public header location
powercap: intel_rapl: Enable MSR-based RAPL PMU support
powercap: intel_rapl: Prepare read_raw() interface for atomic-context callers
cpufreq: qcom-nvmem: fix compilation warning for qcom_cpufreq_ipq806x_match_list
PM: sleep: Call pm_sleep_fs_sync() instead of ksys_sync_helper()
PM: sleep: Add support for wakeup during filesystem sync
cpufreq: ACPI: Replace udelay() with usleep_range()
...
86 files changed, 2139 insertions, 825 deletions
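The first hunk below adds /sys/power/hibernate_compression_threads
(minimum 1, default 3), which takes effect on the next hibernation or
resume. A minimal sketch of setting it at runtime; the chosen value is
illustrative and the path is taken from the ABI entry:

    #include <stdio.h>

    int main(void)
    {
        /* Path per the sysfs-power ABI entry added below. */
        FILE *f = fopen("/sys/power/hibernate_compression_threads", "w");

        if (!f) {
            perror("fopen");
            return 1;
        }
        fprintf(f, "%d\n", 4);  /* use 4 threads (min 1, default 3) */
        return fclose(f) ? 1 : 0;
    }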
diff --git a/Documentation/ABI/testing/sysfs-power b/Documentation/ABI/testing/sysfs-power index 4d8e1ad020f0..d38da077905a 100644 --- a/Documentation/ABI/testing/sysfs-power +++ b/Documentation/ABI/testing/sysfs-power @@ -454,3 +454,19 @@ Description: disables it. Reads from the file return the current value. The default is "1" if the build-time "SUSPEND_SKIP_SYNC" config flag is unset, or "0" otherwise. + +What: /sys/power/hibernate_compression_threads +Date: October 2025 +Contact: <luoxueqin@kylinos.cn> +Description: + Controls the number of threads used for compression + and decompression of hibernation images. + + The value can be adjusted at runtime to balance + performance and CPU utilization. + + The change takes effect on the next hibernation or + resume operation. + + Minimum value: 1 + Default value: 3 diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt index 8c5636a120ee..2b465eab41a1 100644 --- a/Documentation/admin-guide/kernel-parameters.txt +++ b/Documentation/admin-guide/kernel-parameters.txt @@ -1907,6 +1907,16 @@ /sys/power/pm_test). Only available when CONFIG_PM_DEBUG is set. Default value is 5. + hibernate_compression_threads= + [HIBERNATION] + Set the number of threads used for compressing or decompressing + hibernation images. + + Format: <integer> + Default: 3 + Minimum: 1 + Example: hibernate_compression_threads=4 + highmem=nn[KMG] [KNL,BOOT,EARLY] forces the highmem zone to have an exact size of <nn>. This works even on boxes that have no highmem otherwise. This also works to reduce highmem diff --git a/Documentation/admin-guide/pm/cpuidle.rst b/Documentation/admin-guide/pm/cpuidle.rst index 0c090b076224..be4c1120e3f0 100644 --- a/Documentation/admin-guide/pm/cpuidle.rst +++ b/Documentation/admin-guide/pm/cpuidle.rst @@ -580,6 +580,15 @@ the given CPU as the upper limit for the exit latency of the idle states that they are allowed to select for that CPU. They should never select any idle states with exit latency beyond that limit. +While the above CPU QoS constraints apply to CPU idle time management, user +space may also request a CPU system wakeup latency QoS limit, via the +`cpu_wakeup_latency` file. This QoS constraint is respected when selecting a +suitable idle state for the CPUs, while entering the system-wide suspend-to-idle +sleep state, but also to the regular CPU idle time management. + +Note that, the management of the `cpu_wakeup_latency` file works according to +the 'cpu_dma_latency' file from user space point of view. Moreover, the unit +is also microseconds. Idle States Control Via Kernel Command Line =========================================== diff --git a/Documentation/admin-guide/pm/intel_pstate.rst b/Documentation/admin-guide/pm/intel_pstate.rst index 26e702c7016e..fde967b0c2e0 100644 --- a/Documentation/admin-guide/pm/intel_pstate.rst +++ b/Documentation/admin-guide/pm/intel_pstate.rst @@ -48,8 +48,9 @@ only way to pass early-configuration-time parameters to it is via the kernel command line. However, its configuration can be adjusted via ``sysfs`` to a great extent. In some configurations it even is possible to unregister it via ``sysfs`` which allows another ``CPUFreq`` scaling driver to be loaded and -registered (see `below <status_attr_>`_). +registered (see :ref:`below <status_attr>`). +.. _operation_modes: Operation Modes =============== @@ -62,6 +63,8 @@ a certain performance scaling algorithm. 
Which of them will be in effect depends on what kernel command line options are used and on the capabilities of the processor. +.. _active_mode: + Active Mode ----------- @@ -94,6 +97,8 @@ Which of the P-state selection algorithms is used by default depends on the Namely, if that option is set, the ``performance`` algorithm will be used by default, and the other one will be used by default if it is not set. +.. _active_mode_hwp: + Active Mode With HWP ~~~~~~~~~~~~~~~~~~~~ @@ -123,7 +128,7 @@ Energy-Performance Bias (EPB) knob (otherwise), which means that the processor's internal P-state selection logic is expected to focus entirely on performance. This will override the EPP/EPB setting coming from the ``sysfs`` interface -(see `Energy vs Performance Hints`_ below). Moreover, any attempts to change +(see :ref:`energy_performance_hints` below). Moreover, any attempts to change the EPP/EPB to a value different from 0 ("performance") via ``sysfs`` in this configuration will be rejected. @@ -192,6 +197,8 @@ This is the default P-state selection algorithm if the :c:macro:`CONFIG_CPU_FREQ_DEFAULT_GOV_PERFORMANCE` kernel configuration option is not set. +.. _passive_mode: + Passive Mode ------------ @@ -289,12 +296,12 @@ Unlike ``_PSS`` objects in the ACPI tables, ``intel_pstate`` always exposes the entire range of available P-states, including the whole turbo range, to the ``CPUFreq`` core and (in the passive mode) to generic scaling governors. This generally causes turbo P-states to be set more often when ``intel_pstate`` is -used relative to ACPI-based CPU performance scaling (see `below <acpi-cpufreq_>`_ -for more information). +used relative to ACPI-based CPU performance scaling (see +:ref:`below <acpi-cpufreq>` for more information). Moreover, since ``intel_pstate`` always knows what the real turbo threshold is (even if the Configurable TDP feature is enabled in the processor), its -``no_turbo`` attribute in ``sysfs`` (described `below <no_turbo_attr_>`_) should +``no_turbo`` attribute in ``sysfs`` (described :ref:`below <no_turbo_attr>`) should work as expected in all cases (that is, if set to disable turbo P-states, it always should prevent ``intel_pstate`` from using them). @@ -307,12 +314,12 @@ pieces of information on it to be known, including: * The minimum supported P-state. - * The maximum supported `non-turbo P-state <turbo_>`_. + * The maximum supported :ref:`non-turbo P-state <turbo>`. * Whether or not turbo P-states are supported at all. - * The maximum supported `one-core turbo P-state <turbo_>`_ (if turbo P-states - are supported). + * The maximum supported :ref:`one-core turbo P-state <turbo>` (if turbo + P-states are supported). * The scaling formula to translate the driver's internal representation of P-states into frequencies and the other way around. @@ -400,10 +407,10 @@ Energy-Aware Scheduling Support If ``CONFIG_ENERGY_MODEL`` has been set during kernel configuration and ``intel_pstate`` runs on a hybrid processor without SMT, in addition to enabling -`CAS <CAS_>`_ it registers an Energy Model for the processor. This allows the +:ref:`CAS` it registers an Energy Model for the processor. This allows the Energy-Aware Scheduling (EAS) support to be enabled in the CPU scheduler if ``schedutil`` is used as the ``CPUFreq`` governor which requires ``intel_pstate`` -to operate in the `passive mode <Passive Mode_>`_. +to operate in the :ref:`passive mode <passive_mode>`. 
The Energy Model registered by ``intel_pstate`` is artificial (that is, it is based on abstract cost values and it does not include any real power numbers) @@ -432,6 +439,8 @@ the ``energy_model`` directory in ``debugfs`` (typlically mounted on User Space Interface in ``sysfs`` ================================= +.. _global_attributes: + Global Attributes ----------------- @@ -444,8 +453,8 @@ argument is passed to the kernel in the command line. ``max_perf_pct`` Maximum P-state the driver is allowed to set in percent of the - maximum supported performance level (the highest supported `turbo - P-state <turbo_>`_). + maximum supported performance level (the highest supported :ref:`turbo + P-state <turbo>`). This attribute will not be exposed if the ``intel_pstate=per_cpu_perf_limits`` argument is present in the kernel @@ -453,8 +462,8 @@ argument is passed to the kernel in the command line. ``min_perf_pct`` Minimum P-state the driver is allowed to set in percent of the - maximum supported performance level (the highest supported `turbo - P-state <turbo_>`_). + maximum supported performance level (the highest supported :ref:`turbo + P-state <turbo>`). This attribute will not be exposed if the ``intel_pstate=per_cpu_perf_limits`` argument is present in the kernel @@ -463,18 +472,18 @@ argument is passed to the kernel in the command line. ``num_pstates`` Number of P-states supported by the processor (between 0 and 255 inclusive) including both turbo and non-turbo P-states (see - `Turbo P-states Support`_). + :ref:`turbo`). This attribute is present only if the value exposed by it is the same for all of the CPUs in the system. The value of this attribute is not affected by the ``no_turbo`` - setting described `below <no_turbo_attr_>`_. + setting described :ref:`below <no_turbo_attr>`. This attribute is read-only. ``turbo_pct`` - Ratio of the `turbo range <turbo_>`_ size to the size of the entire + Ratio of the :ref:`turbo range <turbo>` size to the size of the entire range of supported P-states, in percent. This attribute is present only if the value exposed by it is the same @@ -486,7 +495,7 @@ argument is passed to the kernel in the command line. ``no_turbo`` If set (equal to 1), the driver is not allowed to set any turbo P-states - (see `Turbo P-states Support`_). If unset (equal to 0, which is the + (see :ref:`turbo`). If unset (equal to 0, which is the default), turbo P-states can be set by the driver. [Note that ``intel_pstate`` does not support the general ``boost`` attribute (supported by some other scaling drivers) which is replaced @@ -495,11 +504,11 @@ argument is passed to the kernel in the command line. This attribute does not affect the maximum supported frequency value supplied to the ``CPUFreq`` core and exposed via the policy interface, but it affects the maximum possible value of per-policy P-state limits - (see `Interpretation of Policy Attributes`_ below for details). + (see :ref:`policy_attributes_interpretation` below for details). ``hwp_dynamic_boost`` This attribute is only present if ``intel_pstate`` works in the - `active mode with the HWP feature enabled <Active Mode With HWP_>`_ in + :ref:`active mode with the HWP feature enabled <active_mode_hwp>` in the processor. If set (equal to 1), it causes the minimum P-state limit to be increased dynamically for a short time whenever a task previously waiting on I/O is selected to run on a given logical CPU (the purpose @@ -514,12 +523,12 @@ argument is passed to the kernel in the command line. 
Operation mode of the driver: "active", "passive" or "off". "active" - The driver is functional and in the `active mode - <Active Mode_>`_. + The driver is functional and in the :ref:`active mode + <active_mode>`. "passive" - The driver is functional and in the `passive mode - <Passive Mode_>`_. + The driver is functional and in the :ref:`passive mode + <passive_mode>`. "off" The driver is not functional (it is not registered as a scaling @@ -547,13 +556,15 @@ argument is passed to the kernel in the command line. attribute to "1" enables the energy-efficiency optimizations and setting to "0" disables them. +.. _policy_attributes_interpretation: + Interpretation of Policy Attributes ----------------------------------- The interpretation of some ``CPUFreq`` policy attributes described in Documentation/admin-guide/pm/cpufreq.rst is special with ``intel_pstate`` as the current scaling driver and it generally depends on the driver's -`operation mode <Operation Modes_>`_. +:ref:`operation mode <operation_modes>`. First of all, the values of the ``cpuinfo_max_freq``, ``cpuinfo_min_freq`` and ``scaling_cur_freq`` attributes are produced by applying a processor-specific @@ -562,9 +573,10 @@ Also, the values of the ``scaling_max_freq`` and ``scaling_min_freq`` attributes are capped by the frequency corresponding to the maximum P-state that the driver is allowed to set. -If the ``no_turbo`` `global attribute <no_turbo_attr_>`_ is set, the driver is -not allowed to use turbo P-states, so the maximum value of ``scaling_max_freq`` -and ``scaling_min_freq`` is limited to the maximum non-turbo P-state frequency. +If the ``no_turbo`` :ref:`global attribute <no_turbo_attr>` is set, the driver +is not allowed to use turbo P-states, so the maximum value of +``scaling_max_freq`` and ``scaling_min_freq`` is limited to the maximum +non-turbo P-state frequency. Accordingly, setting ``no_turbo`` causes ``scaling_max_freq`` and ``scaling_min_freq`` to go down to that value if they were above it before. However, the old values of ``scaling_max_freq`` and ``scaling_min_freq`` will be @@ -576,7 +588,7 @@ and ``scaling_min_freq`` corresponds to the maximum supported turbo P-state, which also is the value of ``cpuinfo_max_freq`` in either case. Next, the following policy attributes have special meaning if -``intel_pstate`` works in the `active mode <Active Mode_>`_: +``intel_pstate`` works in the :ref:`active mode <active_mode>`: ``scaling_available_governors`` List of P-state selection algorithms provided by ``intel_pstate``. @@ -597,20 +609,22 @@ processor: Shows the base frequency of the CPU. Any frequency above this will be in the turbo frequency range. -The meaning of these attributes in the `passive mode <Passive Mode_>`_ is the +The meaning of these attributes in the :ref:`passive mode <passive_mode>` is the same as for other scaling drivers. Additionally, the value of the ``scaling_driver`` attribute for ``intel_pstate`` depends on the operation mode of the driver. Namely, it is either -"intel_pstate" (in the `active mode <Active Mode_>`_) or "intel_cpufreq" (in the -`passive mode <Passive Mode_>`_). +"intel_pstate" (in the :ref:`active mode <active_mode>`) or "intel_cpufreq" +(in the :ref:`passive mode <passive_mode>`). + +.. 
_pstate_limits_coordination: Coordination of P-State Limits ------------------------------ ``intel_pstate`` allows P-state limits to be set in two ways: with the help of -the ``max_perf_pct`` and ``min_perf_pct`` `global attributes -<Global Attributes_>`_ or via the ``scaling_max_freq`` and ``scaling_min_freq`` +the ``max_perf_pct`` and ``min_perf_pct`` :ref:`global attributes +<global_attributes>` or via the ``scaling_max_freq`` and ``scaling_min_freq`` ``CPUFreq`` policy attributes. The coordination between those limits is based on the following rules, regardless of the current operation mode of the driver: @@ -632,17 +646,18 @@ on the following rules, regardless of the current operation mode of the driver: 3. The global and per-policy limits can be set independently. -In the `active mode with the HWP feature enabled <Active Mode With HWP_>`_, the +In the :ref:`active mode with the HWP feature enabled <active_mode_hwp>`, the resulting effective values are written into hardware registers whenever the limits change in order to request its internal P-state selection logic to always set P-states within these limits. Otherwise, the limits are taken into account -by scaling governors (in the `passive mode <Passive Mode_>`_) and by the driver -every time before setting a new P-state for a CPU. +by scaling governors (in the :ref:`passive mode <passive_mode>`) and by the +driver every time before setting a new P-state for a CPU. Additionally, if the ``intel_pstate=per_cpu_perf_limits`` command line argument is passed to the kernel, ``max_perf_pct`` and ``min_perf_pct`` are not exposed at all and the only way to set the limits is by using the policy attributes. +.. _energy_performance_hints: Energy vs Performance Hints --------------------------- @@ -702,9 +717,9 @@ output. On those systems each ``_PSS`` object returns a list of P-states supported by the corresponding CPU which basically is a subset of the P-states range that can be used by ``intel_pstate`` on the same system, with one exception: the whole -`turbo range <turbo_>`_ is represented by one item in it (the topmost one). By -convention, the frequency returned by ``_PSS`` for that item is greater by 1 MHz -than the frequency of the highest non-turbo P-state listed by it, but the +:ref:`turbo range <turbo>` is represented by one item in it (the topmost one). +By convention, the frequency returned by ``_PSS`` for that item is greater by +1 MHz than the frequency of the highest non-turbo P-state listed by it, but the corresponding P-state representation (following the hardware specification) returned for it matches the maximum supported turbo P-state (or is the special value 255 meaning essentially "go as high as you can get"). @@ -730,18 +745,18 @@ benefit from running at turbo frequencies will be given non-turbo P-states instead. One more issue related to that may appear on systems supporting the -`Configurable TDP feature <turbo_>`_ allowing the platform firmware to set the -turbo threshold. Namely, if that is not coordinated with the lists of P-states -returned by ``_PSS`` properly, there may be more than one item corresponding to -a turbo P-state in those lists and there may be a problem with avoiding the -turbo range (if desirable or necessary). Usually, to avoid using turbo -P-states overall, ``acpi-cpufreq`` simply avoids using the topmost state listed -by ``_PSS``, but that is not sufficient when there are other turbo P-states in -the list returned by it. 
+:ref:`Configurable TDP feature <turbo>` allowing the platform firmware to set +the turbo threshold. Namely, if that is not coordinated with the lists of +P-states returned by ``_PSS`` properly, there may be more than one item +corresponding to a turbo P-state in those lists and there may be a problem with +avoiding the turbo range (if desirable or necessary). Usually, to avoid using +turbo P-states overall, ``acpi-cpufreq`` simply avoids using the topmost state +listed by ``_PSS``, but that is not sufficient when there are other turbo +P-states in the list returned by it. Apart from the above, ``acpi-cpufreq`` works like ``intel_pstate`` in the -`passive mode <Passive Mode_>`_, except that the number of P-states it can set -is limited to the ones listed by the ACPI ``_PSS`` objects. +:ref:`passive mode <passive_mode>`, except that the number of P-states it can +set is limited to the ones listed by the ACPI ``_PSS`` objects. Kernel Command Line Options for ``intel_pstate`` @@ -756,11 +771,11 @@ of them have to be prepended with the ``intel_pstate=`` prefix. processor is supported by it. ``active`` - Register ``intel_pstate`` in the `active mode <Active Mode_>`_ to start - with. + Register ``intel_pstate`` in the :ref:`active mode <active_mode>` to + start with. ``passive`` - Register ``intel_pstate`` in the `passive mode <Passive Mode_>`_ to + Register ``intel_pstate`` in the :ref:`passive mode <passive_mode>` to start with. ``force`` @@ -793,12 +808,12 @@ of them have to be prepended with the ``intel_pstate=`` prefix. and this option has no effect. ``per_cpu_perf_limits`` - Use per-logical-CPU P-State limits (see `Coordination of P-state - Limits`_ for details). + Use per-logical-CPU P-State limits (see + :ref:`pstate_limits_coordination` for details). ``no_cas`` - Do not enable `capacity-aware scheduling <CAS_>`_ which is enabled by - default on hybrid systems without SMT. + Do not enable :ref:`capacity-aware scheduling <CAS>` which is enabled + by default on hybrid systems without SMT. Diagnostics and Tuning ====================== @@ -810,7 +825,7 @@ There are two static trace events that can be used for ``intel_pstate`` diagnostics. One of them is the ``cpu_frequency`` trace event generally used by ``CPUFreq``, and the other one is the ``pstate_sample`` trace event specific to ``intel_pstate``. Both of them are triggered by ``intel_pstate`` only if -it works in the `active mode <Active Mode_>`_. +it works in the :ref:`active mode <active_mode>`. The following sequence of shell commands can be used to enable them and see their output (if the kernel is generally configured to support event tracing):: @@ -822,7 +837,7 @@ their output (if the kernel is generally configured to support event tracing):: gnome-terminal--4510 [001] ..s. 1177.680733: pstate_sample: core_busy=107 scaled=94 from=26 to=26 mperf=1143818 aperf=1230607 tsc=29838618 freq=2474476 cat-5235 [002] ..s. 1177.681723: cpu_frequency: state=2900000 cpu_id=2 -If ``intel_pstate`` works in the `passive mode <Passive Mode_>`_, the +If ``intel_pstate`` works in the :ref:`passive mode <passive_mode>`, the ``cpu_frequency`` trace event will be triggered either by the ``schedutil`` scaling governor (for the policies it is attached to), or by the ``CPUFreq`` core (for the policies with other scaling governors). 
diff --git a/Documentation/netlink/specs/em.yaml b/Documentation/netlink/specs/em.yaml new file mode 100644 index 000000000000..9905ca482325 --- /dev/null +++ b/Documentation/netlink/specs/em.yaml @@ -0,0 +1,113 @@ +# SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) + +name: em + +doc: | + Energy model netlink interface to notify its changes. + +protocol: genetlink + +uapi-header: linux/energy_model.h + +attribute-sets: + - + name: pds + attributes: + - + name: pd + type: nest + nested-attributes: pd + multi-attr: true + - + name: pd + attributes: + - + name: pad + type: pad + - + name: pd-id + type: u32 + - + name: flags + type: u64 + - + name: cpus + type: string + - + name: pd-table + attributes: + - + name: pd-id + type: u32 + - + name: ps + type: nest + nested-attributes: ps + multi-attr: true + - + name: ps + attributes: + - + name: pad + type: pad + - + name: performance + type: u64 + - + name: frequency + type: u64 + - + name: power + type: u64 + - + name: cost + type: u64 + - + name: flags + type: u64 + +operations: + list: + - + name: get-pds + attribute-set: pds + doc: Get the list of information for all performance domains. + do: + reply: + attributes: + - pd + - + name: get-pd-table + attribute-set: pd-table + doc: Get the energy model table of a performance domain. + do: + request: + attributes: + - pd-id + reply: + attributes: + - pd-id + - ps + - + name: pd-created + doc: A performance domain is created. + notify: get-pd-table + mcgrp: event + - + name: pd-updated + doc: A performance domain is updated. + notify: get-pd-table + mcgrp: event + - + name: pd-deleted + doc: A performance domain is deleted. + attribute-set: pd-table + event: + attributes: + - pd-id + mcgrp: event + +mcast-groups: + list: + - + name: event diff --git a/Documentation/power/index.rst b/Documentation/power/index.rst index a0f5244fb427..ea70633d9ce6 100644 --- a/Documentation/power/index.rst +++ b/Documentation/power/index.rst @@ -19,6 +19,7 @@ Power Management power_supply_class runtime_pm s2ram + shutdown-debugging suspend-and-cpuhotplug suspend-and-interrupts swsusp-and-swap-files diff --git a/Documentation/power/pm_qos_interface.rst b/Documentation/power/pm_qos_interface.rst index 5019c79c7710..4c008e2202f0 100644 --- a/Documentation/power/pm_qos_interface.rst +++ b/Documentation/power/pm_qos_interface.rst @@ -55,7 +55,8 @@ int cpu_latency_qos_request_active(handle): From user space: -The infrastructure exposes one device node, /dev/cpu_dma_latency, for the CPU +The infrastructure exposes two separate device nodes, /dev/cpu_dma_latency for +the CPU latency QoS and /dev/cpu_wakeup_latency for the CPU system wakeup latency QoS. Only processes can register a PM QoS request. To provide for automatic @@ -63,15 +64,15 @@ cleanup of a process, the interface requires the process to register its parameter requests as follows. To register the default PM QoS target for the CPU latency QoS, the process must -open /dev/cpu_dma_latency. +open /dev/cpu_dma_latency. To register a CPU system wakeup QoS limit, the +process must open /dev/cpu_wakeup_latency. As long as the device node is held open that process has a registered request on the parameter. To change the requested target value, the process needs to write an s32 value to the open device node. Alternatively, it can write a hex string for the value -using the 10 char long format e.g. "0x12345678". This translates to a -cpu_latency_qos_update_request() call. +using the 10 char long format e.g. "0x12345678". 
To remove the user mode request for a target value simply close the device node. diff --git a/Documentation/power/runtime_pm.rst b/Documentation/power/runtime_pm.rst index c8dbdb8595e5..8246df3cecd7 100644 --- a/Documentation/power/runtime_pm.rst +++ b/Documentation/power/runtime_pm.rst @@ -480,16 +480,6 @@ drivers/base/power/runtime.c and include/linux/pm_runtime.h: `bool pm_runtime_status_suspended(struct device *dev);` - return true if the device's runtime PM status is 'suspended' - `void pm_runtime_allow(struct device *dev);` - - set the power.runtime_auto flag for the device and decrease its usage - counter (used by the /sys/devices/.../power/control interface to - effectively allow the device to be power managed at run time) - - `void pm_runtime_forbid(struct device *dev);` - - unset the power.runtime_auto flag for the device and increase its usage - counter (used by the /sys/devices/.../power/control interface to - effectively prevent the device from being power managed at run time) - `void pm_runtime_no_callbacks(struct device *dev);` - set the power.no_callbacks flag for the device and remove the runtime PM attributes from /sys/devices/.../power (or prevent them from being diff --git a/Documentation/power/shutdown-debugging.rst b/Documentation/power/shutdown-debugging.rst new file mode 100644 index 000000000000..c510122e0bbc --- /dev/null +++ b/Documentation/power/shutdown-debugging.rst @@ -0,0 +1,53 @@ +.. SPDX-License-Identifier: GPL-2.0 + +Debugging Kernel Shutdown Hangs with pstore ++++++++++++++++++++++++++++++++++++++++++++ + +Overview +======== +If the system hangs while shutting down, the kernel logs may need to be +retrieved to debug the issue. + +On systems that have a UART available, it is best to configure the kernel to use +this UART for kernel console output. + +If a UART isn't available, the ``pstore`` subsystem provides a mechanism to +persist this data across a system reset, allowing it to be retrieved on the next +boot. + +Kernel Configuration +==================== +To enable ``pstore`` and enable saving kernel ring buffer logs, set the +following kernel configuration options: + +* ``CONFIG_PSTORE=y`` +* ``CONFIG_PSTORE_CONSOLE=y`` + +Additionally, enable a backend to store the data. Depending upon your platform +some potential options include: + +* ``CONFIG_EFI_VARS_PSTORE=y`` +* ``CONFIG_PSTORE_RAM=y`` +* ``CONFIG_CHROMEOS_PSTORE=y`` +* ``CONFIG_PSTORE_BLK=y`` + +Kernel Command-line Parameters +============================== +Add these parameters to your kernel command line: + +* ``printk.always_kmsg_dump=Y`` + * Forces the kernel to dump the entire message buffer to pstore during + shutdown +* ``efi_pstore.pstore_disable=N`` + * For EFI-based systems, ensures the EFI backend is active + +Userspace Interaction and Log Retrieval +======================================= +On the next boot after a hang, pstore logs will be available in the pstore +filesystem (``/sys/fs/pstore``) and can be retrieved by userspace. + +On systemd systems, the ``systemd-pstore`` service will help do the following: + +#. Locate pstore data in ``/sys/fs/pstore`` +#. Read and save it to ``/var/lib/systemd/pstore`` +#. 
Clear pstore data for the next event diff --git a/MAINTAINERS b/MAINTAINERS index ac5a0874b78b..472dc58de40d 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -9188,6 +9188,9 @@ S: Maintained F: kernel/power/energy_model.c F: include/linux/energy_model.h F: Documentation/power/energy-model.rst +F: Documentation/netlink/specs/em.yaml +F: include/uapi/linux/energy_model.h +F: kernel/power/em_netlink*.* EPAPR HYPERVISOR BYTE CHANNEL DEVICE DRIVER M: Laurentiu Tudor <laurentiu.tudor@nxp.com> diff --git a/drivers/acpi/acpi_tad.c b/drivers/acpi/acpi_tad.c index c9487c5bb7b3..6d870d97ada6 100644 --- a/drivers/acpi/acpi_tad.c +++ b/drivers/acpi/acpi_tad.c @@ -90,8 +90,8 @@ static int acpi_tad_set_real_time(struct device *dev, struct acpi_tad_rt *rt) args[0].buffer.pointer = (u8 *)rt; args[0].buffer.length = sizeof(*rt); - ACQUIRE(pm_runtime_active_try, pm)(dev); - if (ACQUIRE_ERR(pm_runtime_active_try, &pm)) + PM_RUNTIME_ACQUIRE(dev, pm); + if (PM_RUNTIME_ACQUIRE_ERR(&pm)) return -ENXIO; status = acpi_evaluate_integer(handle, "_SRT", &arg_list, &retval); @@ -137,8 +137,8 @@ static int acpi_tad_get_real_time(struct device *dev, struct acpi_tad_rt *rt) { int ret; - ACQUIRE(pm_runtime_active_try, pm)(dev); - if (ACQUIRE_ERR(pm_runtime_active_try, &pm)) + PM_RUNTIME_ACQUIRE(dev, pm); + if (PM_RUNTIME_ACQUIRE_ERR(&pm)) return -ENXIO; ret = acpi_tad_evaluate_grt(dev, rt); @@ -275,8 +275,8 @@ static int acpi_tad_wake_set(struct device *dev, char *method, u32 timer_id, args[0].integer.value = timer_id; args[1].integer.value = value; - ACQUIRE(pm_runtime_active_try, pm)(dev); - if (ACQUIRE_ERR(pm_runtime_active_try, &pm)) + PM_RUNTIME_ACQUIRE(dev, pm); + if (PM_RUNTIME_ACQUIRE_ERR(&pm)) return -ENXIO; status = acpi_evaluate_integer(handle, method, &arg_list, &retval); @@ -322,8 +322,8 @@ static ssize_t acpi_tad_wake_read(struct device *dev, char *buf, char *method, args[0].integer.value = timer_id; - ACQUIRE(pm_runtime_active_try, pm)(dev); - if (ACQUIRE_ERR(pm_runtime_active_try, &pm)) + PM_RUNTIME_ACQUIRE(dev, pm); + if (PM_RUNTIME_ACQUIRE_ERR(&pm)) return -ENXIO; status = acpi_evaluate_integer(handle, method, &arg_list, &retval); @@ -377,8 +377,8 @@ static int acpi_tad_clear_status(struct device *dev, u32 timer_id) args[0].integer.value = timer_id; - ACQUIRE(pm_runtime_active_try, pm)(dev); - if (ACQUIRE_ERR(pm_runtime_active_try, &pm)) + PM_RUNTIME_ACQUIRE(dev, pm); + if (PM_RUNTIME_ACQUIRE_ERR(&pm)) return -ENXIO; status = acpi_evaluate_integer(handle, "_CWS", &arg_list, &retval); @@ -417,8 +417,8 @@ static ssize_t acpi_tad_status_read(struct device *dev, char *buf, u32 timer_id) args[0].integer.value = timer_id; - ACQUIRE(pm_runtime_active_try, pm)(dev); - if (ACQUIRE_ERR(pm_runtime_active_try, &pm)) + PM_RUNTIME_ACQUIRE(dev, pm); + if (PM_RUNTIME_ACQUIRE_ERR(&pm)) return -ENXIO; status = acpi_evaluate_integer(handle, "_GWS", &arg_list, &retval); diff --git a/drivers/base/power/generic_ops.c b/drivers/base/power/generic_ops.c index 6502720bb564..af99bbcf281c 100644 --- a/drivers/base/power/generic_ops.c +++ b/drivers/base/power/generic_ops.c @@ -8,6 +8,13 @@ #include <linux/pm_runtime.h> #include <linux/export.h> +#define CALL_PM_OP(dev, op) \ +({ \ + struct device *_dev = (dev); \ + const struct dev_pm_ops *pm = _dev->driver ? _dev->driver->pm : NULL; \ + pm && pm->op ? pm->op(_dev) : 0; \ +}) + #ifdef CONFIG_PM /** * pm_generic_runtime_suspend - Generic runtime suspend callback for subsystems. 
@@ -19,12 +26,7 @@ */ int pm_generic_runtime_suspend(struct device *dev) { - const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL; - int ret; - - ret = pm && pm->runtime_suspend ? pm->runtime_suspend(dev) : 0; - - return ret; + return CALL_PM_OP(dev, runtime_suspend); } EXPORT_SYMBOL_GPL(pm_generic_runtime_suspend); @@ -38,12 +40,7 @@ EXPORT_SYMBOL_GPL(pm_generic_runtime_suspend); */ int pm_generic_runtime_resume(struct device *dev) { - const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL; - int ret; - - ret = pm && pm->runtime_resume ? pm->runtime_resume(dev) : 0; - - return ret; + return CALL_PM_OP(dev, runtime_resume); } EXPORT_SYMBOL_GPL(pm_generic_runtime_resume); #endif /* CONFIG_PM */ @@ -72,9 +69,7 @@ int pm_generic_prepare(struct device *dev) */ int pm_generic_suspend_noirq(struct device *dev) { - const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL; - - return pm && pm->suspend_noirq ? pm->suspend_noirq(dev) : 0; + return CALL_PM_OP(dev, suspend_noirq); } EXPORT_SYMBOL_GPL(pm_generic_suspend_noirq); @@ -84,9 +79,7 @@ EXPORT_SYMBOL_GPL(pm_generic_suspend_noirq); */ int pm_generic_suspend_late(struct device *dev) { - const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL; - - return pm && pm->suspend_late ? pm->suspend_late(dev) : 0; + return CALL_PM_OP(dev, suspend_late); } EXPORT_SYMBOL_GPL(pm_generic_suspend_late); @@ -96,9 +89,7 @@ EXPORT_SYMBOL_GPL(pm_generic_suspend_late); */ int pm_generic_suspend(struct device *dev) { - const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL; - - return pm && pm->suspend ? pm->suspend(dev) : 0; + return CALL_PM_OP(dev, suspend); } EXPORT_SYMBOL_GPL(pm_generic_suspend); @@ -108,9 +99,7 @@ EXPORT_SYMBOL_GPL(pm_generic_suspend); */ int pm_generic_freeze_noirq(struct device *dev) { - const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL; - - return pm && pm->freeze_noirq ? pm->freeze_noirq(dev) : 0; + return CALL_PM_OP(dev, freeze_noirq); } EXPORT_SYMBOL_GPL(pm_generic_freeze_noirq); @@ -120,9 +109,7 @@ EXPORT_SYMBOL_GPL(pm_generic_freeze_noirq); */ int pm_generic_freeze(struct device *dev) { - const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL; - - return pm && pm->freeze ? pm->freeze(dev) : 0; + return CALL_PM_OP(dev, freeze); } EXPORT_SYMBOL_GPL(pm_generic_freeze); @@ -132,9 +119,7 @@ EXPORT_SYMBOL_GPL(pm_generic_freeze); */ int pm_generic_poweroff_noirq(struct device *dev) { - const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL; - - return pm && pm->poweroff_noirq ? pm->poweroff_noirq(dev) : 0; + return CALL_PM_OP(dev, poweroff_noirq); } EXPORT_SYMBOL_GPL(pm_generic_poweroff_noirq); @@ -144,9 +129,7 @@ EXPORT_SYMBOL_GPL(pm_generic_poweroff_noirq); */ int pm_generic_poweroff_late(struct device *dev) { - const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL; - - return pm && pm->poweroff_late ? pm->poweroff_late(dev) : 0; + return CALL_PM_OP(dev, poweroff_late); } EXPORT_SYMBOL_GPL(pm_generic_poweroff_late); @@ -156,9 +139,7 @@ EXPORT_SYMBOL_GPL(pm_generic_poweroff_late); */ int pm_generic_poweroff(struct device *dev) { - const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL; - - return pm && pm->poweroff ? pm->poweroff(dev) : 0; + return CALL_PM_OP(dev, poweroff); } EXPORT_SYMBOL_GPL(pm_generic_poweroff); @@ -168,9 +149,7 @@ EXPORT_SYMBOL_GPL(pm_generic_poweroff); */ int pm_generic_thaw_noirq(struct device *dev) { - const struct dev_pm_ops *pm = dev->driver ? 
dev->driver->pm : NULL; - - return pm && pm->thaw_noirq ? pm->thaw_noirq(dev) : 0; + return CALL_PM_OP(dev, thaw_noirq); } EXPORT_SYMBOL_GPL(pm_generic_thaw_noirq); @@ -180,9 +159,7 @@ EXPORT_SYMBOL_GPL(pm_generic_thaw_noirq); */ int pm_generic_thaw(struct device *dev) { - const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL; - - return pm && pm->thaw ? pm->thaw(dev) : 0; + return CALL_PM_OP(dev, thaw); } EXPORT_SYMBOL_GPL(pm_generic_thaw); @@ -192,9 +169,7 @@ EXPORT_SYMBOL_GPL(pm_generic_thaw); */ int pm_generic_resume_noirq(struct device *dev) { - const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL; - - return pm && pm->resume_noirq ? pm->resume_noirq(dev) : 0; + return CALL_PM_OP(dev, resume_noirq); } EXPORT_SYMBOL_GPL(pm_generic_resume_noirq); @@ -204,9 +179,7 @@ EXPORT_SYMBOL_GPL(pm_generic_resume_noirq); */ int pm_generic_resume_early(struct device *dev) { - const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL; - - return pm && pm->resume_early ? pm->resume_early(dev) : 0; + return CALL_PM_OP(dev, resume_early); } EXPORT_SYMBOL_GPL(pm_generic_resume_early); @@ -216,9 +189,7 @@ EXPORT_SYMBOL_GPL(pm_generic_resume_early); */ int pm_generic_resume(struct device *dev) { - const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL; - - return pm && pm->resume ? pm->resume(dev) : 0; + return CALL_PM_OP(dev, resume); } EXPORT_SYMBOL_GPL(pm_generic_resume); @@ -228,9 +199,7 @@ EXPORT_SYMBOL_GPL(pm_generic_resume); */ int pm_generic_restore_noirq(struct device *dev) { - const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL; - - return pm && pm->restore_noirq ? pm->restore_noirq(dev) : 0; + return CALL_PM_OP(dev, restore_noirq); } EXPORT_SYMBOL_GPL(pm_generic_restore_noirq); @@ -240,9 +209,7 @@ EXPORT_SYMBOL_GPL(pm_generic_restore_noirq); */ int pm_generic_restore_early(struct device *dev) { - const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL; - - return pm && pm->restore_early ? pm->restore_early(dev) : 0; + return CALL_PM_OP(dev, restore_early); } EXPORT_SYMBOL_GPL(pm_generic_restore_early); @@ -252,9 +219,7 @@ EXPORT_SYMBOL_GPL(pm_generic_restore_early); */ int pm_generic_restore(struct device *dev) { - const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL; - - return pm && pm->restore ? 
pm->restore(dev) : 0; + return CALL_PM_OP(dev, restore); } EXPORT_SYMBOL_GPL(pm_generic_restore); diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c index 1de1cd72b616..a0225a83f50c 100644 --- a/drivers/base/power/main.c +++ b/drivers/base/power/main.c @@ -34,6 +34,7 @@ #include <linux/cpufreq.h> #include <linux/devfreq.h> #include <linux/timer.h> +#include <linux/nmi.h> #include "../base.h" #include "power.h" @@ -95,6 +96,8 @@ static const char *pm_verb(int event) return "restore"; case PM_EVENT_RECOVER: return "recover"; + case PM_EVENT_POWEROFF: + return "poweroff"; default: return "(unknown PM event)"; } @@ -367,6 +370,7 @@ static pm_callback_t pm_op(const struct dev_pm_ops *ops, pm_message_t state) case PM_EVENT_FREEZE: case PM_EVENT_QUIESCE: return ops->freeze; + case PM_EVENT_POWEROFF: case PM_EVENT_HIBERNATE: return ops->poweroff; case PM_EVENT_THAW: @@ -401,6 +405,7 @@ static pm_callback_t pm_late_early_op(const struct dev_pm_ops *ops, case PM_EVENT_FREEZE: case PM_EVENT_QUIESCE: return ops->freeze_late; + case PM_EVENT_POWEROFF: case PM_EVENT_HIBERNATE: return ops->poweroff_late; case PM_EVENT_THAW: @@ -435,6 +440,7 @@ static pm_callback_t pm_noirq_op(const struct dev_pm_ops *ops, pm_message_t stat case PM_EVENT_FREEZE: case PM_EVENT_QUIESCE: return ops->freeze_noirq; + case PM_EVENT_POWEROFF: case PM_EVENT_HIBERNATE: return ops->poweroff_noirq; case PM_EVENT_THAW: @@ -515,6 +521,11 @@ struct dpm_watchdog { #define DECLARE_DPM_WATCHDOG_ON_STACK(wd) \ struct dpm_watchdog wd +static bool __read_mostly dpm_watchdog_all_cpu_backtrace; +module_param(dpm_watchdog_all_cpu_backtrace, bool, 0644); +MODULE_PARM_DESC(dpm_watchdog_all_cpu_backtrace, + "Backtrace all CPUs on DPM watchdog timeout"); + /** * dpm_watchdog_handler - Driver suspend / resume watchdog handler. * @t: The timer that PM watchdog depends on. @@ -530,8 +541,12 @@ static void dpm_watchdog_handler(struct timer_list *t) unsigned int time_left; if (wd->fatal) { + unsigned int this_cpu = smp_processor_id(); + dev_emerg(wd->dev, "**** DPM device timeout ****\n"); show_stack(wd->tsk, NULL, KERN_EMERG); + if (dpm_watchdog_all_cpu_backtrace) + trigger_allbutcpu_cpu_backtrace(this_cpu); panic("%s %s: unrecoverable failure\n", dev_driver_string(wd->dev), dev_name(wd->dev)); } diff --git a/drivers/base/power/runtime.c b/drivers/base/power/runtime.c index 1b11a3cd4acc..62707738caa4 100644 --- a/drivers/base/power/runtime.c +++ b/drivers/base/power/runtime.c @@ -90,7 +90,7 @@ static void update_pm_runtime_accounting(struct device *dev) /* * Because ktime_get_mono_fast_ns() is not monotonic during * timekeeping updates, ensure that 'now' is after the last saved - * timesptamp. + * timestamp. */ if (now < last) return; @@ -217,7 +217,7 @@ static int dev_memalloc_noio(struct device *dev, void *data) * resume/suspend callback of any one of its ancestors(or the * block device itself), the deadlock may be triggered inside the * memory allocation since it might not complete until the block - * device becomes active and the involed page I/O finishes. The + * device becomes active and the involved page I/O finishes. The * situation is pointed out first by Alan Stern. Network device * are involved in iSCSI kind of situation. 
* @@ -1210,7 +1210,7 @@ EXPORT_SYMBOL_GPL(__pm_runtime_resume); * * Otherwise, if its runtime PM status is %RPM_ACTIVE and (1) @ign_usage_count * is set, or (2) @dev is not ignoring children and its active child count is - * nonero, or (3) the runtime PM usage counter of @dev is not zero, increment + * nonzero, or (3) the runtime PM usage counter of @dev is not zero, increment * the usage counter of @dev and return 1. * * Otherwise, return 0 without changing the usage counter. @@ -1664,9 +1664,12 @@ EXPORT_SYMBOL_GPL(devm_pm_runtime_get_noresume); * pm_runtime_forbid - Block runtime PM of a device. * @dev: Device to handle. * - * Increase the device's usage count and clear its power.runtime_auto flag, - * so that it cannot be suspended at run time until pm_runtime_allow() is called - * for it. + * Resume @dev if already suspended and block runtime suspend of @dev in such + * a way that it can be unblocked via the /sys/devices/.../power/control + * interface, or otherwise by calling pm_runtime_allow(). + * + * Calling this function many times in a row has the same effect as calling it + * once. */ void pm_runtime_forbid(struct device *dev) { @@ -1687,7 +1690,13 @@ EXPORT_SYMBOL_GPL(pm_runtime_forbid); * pm_runtime_allow - Unblock runtime PM of a device. * @dev: Device to handle. * - * Decrease the device's usage count and set its power.runtime_auto flag. + * Unblock runtime suspend of @dev after it has been blocked by + * pm_runtime_forbid() (for instance, if it has been blocked via the + * /sys/devices/.../power/control interface), check if @dev can be + * suspended and suspend it in that case. + * + * Calling this function many times in a row has the same effect as calling it + * once. */ void pm_runtime_allow(struct device *dev) { diff --git a/drivers/base/power/trace.c b/drivers/base/power/trace.c index cd6e559648b2..d8da7195bb00 100644 --- a/drivers/base/power/trace.c +++ b/drivers/base/power/trace.c @@ -238,10 +238,8 @@ int show_trace_dev_match(char *buf, size_t size) unsigned int hash = hash_string(DEVSEED, dev_name(dev), DEVHASH); if (hash == value) { - int len = snprintf(buf, size, "%s\n", + int len = scnprintf(buf, size, "%s\n", dev_driver_string(dev)); - if (len > size) - len = size; buf += len; ret += len; size -= len; diff --git a/drivers/base/power/wakeup.c b/drivers/base/power/wakeup.c index d1283ff1080b..1e1a0e7eeac5 100644 --- a/drivers/base/power/wakeup.c +++ b/drivers/base/power/wakeup.c @@ -189,17 +189,16 @@ static void wakeup_source_remove(struct wakeup_source *ws) if (WARN_ON(!ws)) return; + /* + * After shutting down the timer, wakeup_source_activate() will warn if + * the given wakeup source is passed to it. + */ + timer_shutdown_sync(&ws->timer); + raw_spin_lock_irqsave(&events_lock, flags); list_del_rcu(&ws->entry); raw_spin_unlock_irqrestore(&events_lock, flags); synchronize_srcu(&wakeup_srcu); - - timer_delete_sync(&ws->timer); - /* - * Clear timer.function to make wakeup_source_not_registered() treat - * this wakeup source as not registered. - */ - ws->timer.function = NULL; } /** @@ -506,14 +505,14 @@ int device_set_wakeup_enable(struct device *dev, bool enable) EXPORT_SYMBOL_GPL(device_set_wakeup_enable); /** - * wakeup_source_not_registered - validate the given wakeup source. + * wakeup_source_not_usable - validate the given wakeup source. * @ws: Wakeup source to be validated. 
*/ -static bool wakeup_source_not_registered(struct wakeup_source *ws) +static bool wakeup_source_not_usable(struct wakeup_source *ws) { /* - * Use timer struct to check if the given source is initialized - * by wakeup_source_add. + * Use the timer struct to check if the given wakeup source has been + * initialized by wakeup_source_add() and it is not going away. */ return ws->timer.function != pm_wakeup_timer_fn; } @@ -558,8 +557,7 @@ static void wakeup_source_activate(struct wakeup_source *ws) { unsigned int cec; - if (WARN_ONCE(wakeup_source_not_registered(ws), - "unregistered wakeup source\n")) + if (WARN_ONCE(wakeup_source_not_usable(ws), "unusable wakeup source\n")) return; ws->active = true; diff --git a/drivers/cpufreq/acpi-cpufreq.c b/drivers/cpufreq/acpi-cpufreq.c index 083d8369a591..e73a66785d69 100644 --- a/drivers/cpufreq/acpi-cpufreq.c +++ b/drivers/cpufreq/acpi-cpufreq.c @@ -395,7 +395,7 @@ static unsigned int check_freqs(struct cpufreq_policy *policy, cur_freq = extract_freq(policy, get_cur_val(mask, data)); if (cur_freq == freq) return 1; - udelay(10); + usleep_range(10, 15); } return 0; } diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c index b44f0f7a5ba1..c45bc98721d2 100644 --- a/drivers/cpufreq/amd-pstate.c +++ b/drivers/cpufreq/amd-pstate.c @@ -65,13 +65,13 @@ static const char * const amd_pstate_mode_string[] = { [AMD_PSTATE_PASSIVE] = "passive", [AMD_PSTATE_ACTIVE] = "active", [AMD_PSTATE_GUIDED] = "guided", - NULL, }; +static_assert(ARRAY_SIZE(amd_pstate_mode_string) == AMD_PSTATE_MAX); const char *amd_pstate_get_mode_string(enum amd_pstate_mode mode) { - if (mode < 0 || mode >= AMD_PSTATE_MAX) - return NULL; + if (mode < AMD_PSTATE_UNDEFINED || mode >= AMD_PSTATE_MAX) + mode = AMD_PSTATE_UNDEFINED; return amd_pstate_mode_string[mode]; } EXPORT_SYMBOL_GPL(amd_pstate_get_mode_string); @@ -110,6 +110,7 @@ enum energy_perf_value_index { EPP_INDEX_BALANCE_PERFORMANCE, EPP_INDEX_BALANCE_POWERSAVE, EPP_INDEX_POWERSAVE, + EPP_INDEX_MAX, }; static const char * const energy_perf_strings[] = { @@ -118,8 +119,8 @@ static const char * const energy_perf_strings[] = { [EPP_INDEX_BALANCE_PERFORMANCE] = "balance_performance", [EPP_INDEX_BALANCE_POWERSAVE] = "balance_power", [EPP_INDEX_POWERSAVE] = "power", - NULL }; +static_assert(ARRAY_SIZE(energy_perf_strings) == EPP_INDEX_MAX); static unsigned int epp_values[] = { [EPP_INDEX_DEFAULT] = 0, @@ -127,7 +128,8 @@ static unsigned int epp_values[] = { [EPP_INDEX_BALANCE_PERFORMANCE] = AMD_CPPC_EPP_BALANCE_PERFORMANCE, [EPP_INDEX_BALANCE_POWERSAVE] = AMD_CPPC_EPP_BALANCE_POWERSAVE, [EPP_INDEX_POWERSAVE] = AMD_CPPC_EPP_POWERSAVE, - }; +}; +static_assert(ARRAY_SIZE(epp_values) == EPP_INDEX_MAX); typedef int (*cppc_mode_transition_fn)(int); @@ -183,7 +185,7 @@ static inline int get_mode_idx_from_str(const char *str, size_t size) { int i; - for (i=0; i < AMD_PSTATE_MAX; i++) { + for (i = 0; i < AMD_PSTATE_MAX; i++) { if (!strncmp(str, amd_pstate_mode_string[i], size)) return i; } @@ -1137,16 +1139,15 @@ static ssize_t show_amd_pstate_hw_prefcore(struct cpufreq_policy *policy, static ssize_t show_energy_performance_available_preferences( struct cpufreq_policy *policy, char *buf) { - int i = 0; - int offset = 0; + int offset = 0, i; struct amd_cpudata *cpudata = policy->driver_data; if (cpudata->policy == CPUFREQ_POLICY_PERFORMANCE) return sysfs_emit_at(buf, offset, "%s\n", energy_perf_strings[EPP_INDEX_PERFORMANCE]); - while (energy_perf_strings[i] != NULL) - offset += sysfs_emit_at(buf, offset, "%s ", 
energy_perf_strings[i++]); + for (i = 0; i < ARRAY_SIZE(energy_perf_strings); i++) + offset += sysfs_emit_at(buf, offset, "%s ", energy_perf_strings[i]); offset += sysfs_emit_at(buf, offset, "\n"); @@ -1157,15 +1158,10 @@ static ssize_t store_energy_performance_preference( struct cpufreq_policy *policy, const char *buf, size_t count) { struct amd_cpudata *cpudata = policy->driver_data; - char str_preference[21]; ssize_t ret; u8 epp; - ret = sscanf(buf, "%20s", str_preference); - if (ret != 1) - return -EINVAL; - - ret = match_string(energy_perf_strings, -1, str_preference); + ret = sysfs_match_string(energy_perf_strings, buf); if (ret < 0) return -EINVAL; @@ -1282,7 +1278,7 @@ static int amd_pstate_change_mode_without_dvr_change(int mode) if (cpu_feature_enabled(X86_FEATURE_CPPC) || cppc_state == AMD_PSTATE_ACTIVE) return 0; - for_each_present_cpu(cpu) { + for_each_online_cpu(cpu) { cppc_set_auto_sel(cpu, (cppc_state == AMD_PSTATE_PASSIVE) ? 0 : 1); } @@ -1353,9 +1349,8 @@ int amd_pstate_update_status(const char *buf, size_t size) return -EINVAL; mode_idx = get_mode_idx_from_str(buf, size); - - if (mode_idx < 0 || mode_idx >= AMD_PSTATE_MAX) - return -EINVAL; + if (mode_idx < 0) + return mode_idx; if (mode_state_machine[cppc_state][mode_idx]) { guard(mutex)(&amd_pstate_driver_lock); diff --git a/drivers/cpufreq/cppc_cpufreq.c b/drivers/cpufreq/cppc_cpufreq.c index e23d9abea135..9eac77c4f294 100644 --- a/drivers/cpufreq/cppc_cpufreq.c +++ b/drivers/cpufreq/cppc_cpufreq.c @@ -142,16 +142,15 @@ static void cppc_cpufreq_cpu_fie_init(struct cpufreq_policy *policy) init_irq_work(&cppc_fi->irq_work, cppc_irq_work); ret = cppc_get_perf_ctrs(cpu, &cppc_fi->prev_perf_fb_ctrs); - if (ret) { - pr_warn("%s: failed to read perf counters for cpu:%d: %d\n", - __func__, cpu, ret); - /* - * Don't abort if the CPU was offline while the driver - * was getting registered. - */ - if (cpu_online(cpu)) - return; + /* + * Don't abort as the CPU was offline while the driver was + * getting registered. 
+ */ + if (ret && cpu_online(cpu)) { + pr_debug("%s: failed to read perf counters for cpu:%d: %d\n", + __func__, cpu, ret); + return; } } diff --git a/drivers/cpufreq/cpufreq-dt-platdev.c b/drivers/cpufreq/cpufreq-dt-platdev.c index cd1816a12bb9..dc11b62399ad 100644 --- a/drivers/cpufreq/cpufreq-dt-platdev.c +++ b/drivers/cpufreq/cpufreq-dt-platdev.c @@ -87,6 +87,7 @@ static const struct of_device_id allowlist[] __initconst = { { .compatible = "st-ericsson,u9540", }, { .compatible = "starfive,jh7110", }, + { .compatible = "starfive,jh7110s", }, { .compatible = "ti,omap2", }, { .compatible = "ti,omap4", }, diff --git a/drivers/cpufreq/cpufreq-nforce2.c b/drivers/cpufreq/cpufreq-nforce2.c index fedad1081973..fbbbe501cf2d 100644 --- a/drivers/cpufreq/cpufreq-nforce2.c +++ b/drivers/cpufreq/cpufreq-nforce2.c @@ -145,6 +145,8 @@ static unsigned int nforce2_fsb_read(int bootfsb) pci_read_config_dword(nforce2_sub5, NFORCE2_BOOTFSB, &fsb); fsb /= 1000000; + pci_dev_put(nforce2_sub5); + /* Check if PLL register is already set */ pci_read_config_byte(nforce2_dev, NFORCE2_PLLENABLE, (u8 *)&temp); @@ -426,6 +428,7 @@ static int __init nforce2_init(void) static void __exit nforce2_exit(void) { cpufreq_unregister_driver(&nforce2_driver); + pci_dev_put(nforce2_dev); } module_init(nforce2_init); diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c index 852e024facc3..4472bb1ec83c 100644 --- a/drivers/cpufreq/cpufreq.c +++ b/drivers/cpufreq/cpufreq.c @@ -1421,9 +1421,12 @@ static int cpufreq_policy_online(struct cpufreq_policy *policy, * If there is a problem with its frequency table, take it * offline and drop it. */ - ret = cpufreq_table_validate_and_sort(policy); - if (ret) - goto out_offline_policy; + if (policy->freq_table_sorted != CPUFREQ_TABLE_SORTED_ASCENDING && + policy->freq_table_sorted != CPUFREQ_TABLE_SORTED_DESCENDING) { + ret = cpufreq_table_validate_and_sort(policy); + if (ret) + goto out_offline_policy; + } /* related_cpus should at least include policy->cpus. 
*/ cpumask_copy(policy->related_cpus, policy->cpus); @@ -2550,7 +2553,7 @@ void cpufreq_unregister_governor(struct cpufreq_governor *governor) for_each_inactive_policy(policy) { if (!strcmp(policy->last_governor, governor->name)) { policy->governor = NULL; - strcpy(policy->last_governor, "\0"); + policy->last_governor[0] = '\0'; } } read_unlock_irqrestore(&cpufreq_driver_lock, flags); diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c index 492a10f1bdbf..ec4abe374573 100644 --- a/drivers/cpufreq/intel_pstate.c +++ b/drivers/cpufreq/intel_pstate.c @@ -575,13 +575,18 @@ static void intel_pstate_hybrid_hwp_adjust(struct cpudata *cpu) int scaling = cpu->pstate.scaling; int freq; - pr_debug("CPU%d: perf_ctl_max_phys = %d\n", cpu->cpu, perf_ctl_max_phys); - pr_debug("CPU%d: perf_ctl_turbo = %d\n", cpu->cpu, perf_ctl_turbo); - pr_debug("CPU%d: perf_ctl_scaling = %d\n", cpu->cpu, perf_ctl_scaling); + pr_debug("CPU%d: PERF_CTL max_phys = %d\n", cpu->cpu, perf_ctl_max_phys); + pr_debug("CPU%d: PERF_CTL turbo = %d\n", cpu->cpu, perf_ctl_turbo); + pr_debug("CPU%d: PERF_CTL scaling = %d\n", cpu->cpu, perf_ctl_scaling); pr_debug("CPU%d: HWP_CAP guaranteed = %d\n", cpu->cpu, cpu->pstate.max_pstate); pr_debug("CPU%d: HWP_CAP highest = %d\n", cpu->cpu, cpu->pstate.turbo_pstate); pr_debug("CPU%d: HWP-to-frequency scaling factor: %d\n", cpu->cpu, scaling); + if (scaling == perf_ctl_scaling) + return; + + hwp_is_hybrid = true; + cpu->pstate.turbo_freq = rounddown(cpu->pstate.turbo_pstate * scaling, perf_ctl_scaling); cpu->pstate.max_freq = rounddown(cpu->pstate.max_pstate * scaling, @@ -909,6 +914,11 @@ static struct freq_attr *hwp_cpufreq_attrs[] = { [HWP_CPUFREQ_ATTR_COUNT] = NULL, }; +static u8 hybrid_get_cpu_type(unsigned int cpu) +{ + return cpu_data(cpu).topo.intel_type; +} + static bool no_cas __ro_after_init; static struct cpudata *hybrid_max_perf_cpu __read_mostly; @@ -925,11 +935,8 @@ static int hybrid_active_power(struct device *dev, unsigned long *power, unsigned long *freq) { /* - * Create "utilization bins" of 0-40%, 40%-60%, 60%-80%, and 80%-100% - * of the maximum capacity such that two CPUs of the same type will be - * regarded as equally attractive if the utilization of each of them - * falls into the same bin, which should prevent tasks from being - * migrated between them too often. + * Create four "states" corresponding to 40%, 60%, 80%, and 100% of the + * full capacity. * * For this purpose, return the "frequency" of 2 for the first * performance level and otherwise leave the value set by the caller. @@ -943,38 +950,40 @@ static int hybrid_active_power(struct device *dev, unsigned long *power, return 0; } +static bool hybrid_has_l3(unsigned int cpu) +{ + struct cpu_cacheinfo *cacheinfo = get_cpu_cacheinfo(cpu); + unsigned int i; + + if (!cacheinfo) + return false; + + for (i = 0; i < cacheinfo->num_leaves; i++) { + if (cacheinfo->info_list[i].level == 3) + return true; + } + + return false; +} + static int hybrid_get_cost(struct device *dev, unsigned long freq, unsigned long *cost) { - struct pstate_data *pstate = &all_cpu_data[dev->id]->pstate; - struct cpu_cacheinfo *cacheinfo = get_cpu_cacheinfo(dev->id); - + /* Facilitate load balancing between CPUs of the same type. */ + *cost = freq; /* - * The smaller the perf-to-frequency scaling factor, the larger the IPC - * ratio between the given CPU and the least capable CPU in the system. 
- * Regard that IPC ratio as the primary cost component and assume that - * the scaling factors for different CPU types will differ by at least - * 5% and they will not be above INTEL_PSTATE_CORE_SCALING. + * Adjust the cost depending on CPU type. * - * Add the freq value to the cost, so that the cost of running on CPUs - * of the same type in different "utilization bins" is different. - */ - *cost = div_u64(100ULL * INTEL_PSTATE_CORE_SCALING, pstate->scaling) + freq; - /* - * Increase the cost slightly for CPUs able to access L3 to avoid - * touching it in case some other CPUs of the same type can do the work - * without it. + * The idea is to start loading up LPE-cores before E-cores and start + * to populate E-cores when LPE-cores are utilized above 60% of the + * capacity. Similarly, P-cores start to be populated when E-cores are + * utilized above 60% of the capacity. */ - if (cacheinfo) { - unsigned int i; - - /* Check if L3 cache is there. */ - for (i = 0; i < cacheinfo->num_leaves; i++) { - if (cacheinfo->info_list[i].level == 3) { - *cost += 2; - break; - } - } + if (hybrid_get_cpu_type(dev->id) == INTEL_CPU_TYPE_ATOM) { + if (hybrid_has_l3(dev->id)) /* E-core */ + *cost += 1; + } else { /* P-core */ + *cost += 2; } return 0; @@ -1037,9 +1046,9 @@ static void hybrid_set_cpu_capacity(struct cpudata *cpu) topology_set_cpu_scale(cpu->cpu, arch_scale_cpu_capacity(cpu->cpu)); - pr_debug("CPU%d: perf = %u, max. perf = %u, base perf = %d\n", cpu->cpu, - cpu->capacity_perf, hybrid_max_perf_cpu->capacity_perf, - cpu->pstate.max_pstate_physical); + pr_debug("CPU%d: capacity perf = %u, base perf = %u, sys max perf = %u\n", + cpu->cpu, cpu->capacity_perf, cpu->pstate.max_pstate_physical, + hybrid_max_perf_cpu->capacity_perf); } static void hybrid_clear_cpu_capacity(unsigned int cpunum) @@ -1384,7 +1393,8 @@ static void set_power_ctl_ee_state(bool input) { u64 power_ctl; - mutex_lock(&intel_pstate_driver_lock); + guard(mutex)(&intel_pstate_driver_lock); + rdmsrq(MSR_IA32_POWER_CTL, power_ctl); if (input) { power_ctl &= ~BIT(MSR_IA32_POWER_CTL_BIT_EE); @@ -1394,7 +1404,6 @@ static void set_power_ctl_ee_state(bool input) power_ctl_ee_state = POWER_CTL_EE_DISABLE; } wrmsrq(MSR_IA32_POWER_CTL, power_ctl); - mutex_unlock(&intel_pstate_driver_lock); } static void intel_pstate_hwp_enable(struct cpudata *cpudata); @@ -1516,13 +1525,9 @@ static int intel_pstate_update_status(const char *buf, size_t size); static ssize_t show_status(struct kobject *kobj, struct kobj_attribute *attr, char *buf) { - ssize_t ret; - - mutex_lock(&intel_pstate_driver_lock); - ret = intel_pstate_show_status(buf); - mutex_unlock(&intel_pstate_driver_lock); + guard(mutex)(&intel_pstate_driver_lock); - return ret; + return intel_pstate_show_status(buf); } static ssize_t store_status(struct kobject *a, struct kobj_attribute *b, @@ -1531,11 +1536,13 @@ static ssize_t store_status(struct kobject *a, struct kobj_attribute *b, char *p = memchr(buf, '\n', count); int ret; - mutex_lock(&intel_pstate_driver_lock); + guard(mutex)(&intel_pstate_driver_lock); + ret = intel_pstate_update_status(buf, p ? p - buf : count); - mutex_unlock(&intel_pstate_driver_lock); + if (ret < 0) + return ret; - return ret < 0 ? 
ret : count; + return count; } static ssize_t show_turbo_pct(struct kobject *kobj, @@ -1545,12 +1552,10 @@ static ssize_t show_turbo_pct(struct kobject *kobj, int total, no_turbo, turbo_pct; uint32_t turbo_fp; - mutex_lock(&intel_pstate_driver_lock); + guard(mutex)(&intel_pstate_driver_lock); - if (!intel_pstate_driver) { - mutex_unlock(&intel_pstate_driver_lock); + if (!intel_pstate_driver) return -EAGAIN; - } cpu = all_cpu_data[0]; @@ -1559,8 +1564,6 @@ static ssize_t show_turbo_pct(struct kobject *kobj, turbo_fp = div_fp(no_turbo, total); turbo_pct = 100 - fp_toint(mul_fp(turbo_fp, int_tofp(100))); - mutex_unlock(&intel_pstate_driver_lock); - return sprintf(buf, "%u\n", turbo_pct); } @@ -1570,38 +1573,26 @@ static ssize_t show_num_pstates(struct kobject *kobj, struct cpudata *cpu; int total; - mutex_lock(&intel_pstate_driver_lock); + guard(mutex)(&intel_pstate_driver_lock); - if (!intel_pstate_driver) { - mutex_unlock(&intel_pstate_driver_lock); + if (!intel_pstate_driver) return -EAGAIN; - } cpu = all_cpu_data[0]; total = cpu->pstate.turbo_pstate - cpu->pstate.min_pstate + 1; - mutex_unlock(&intel_pstate_driver_lock); - return sprintf(buf, "%u\n", total); } static ssize_t show_no_turbo(struct kobject *kobj, struct kobj_attribute *attr, char *buf) { - ssize_t ret; + guard(mutex)(&intel_pstate_driver_lock); - mutex_lock(&intel_pstate_driver_lock); - - if (!intel_pstate_driver) { - mutex_unlock(&intel_pstate_driver_lock); + if (!intel_pstate_driver) return -EAGAIN; - } - - ret = sprintf(buf, "%u\n", global.no_turbo); - - mutex_unlock(&intel_pstate_driver_lock); - return ret; + return sprintf(buf, "%u\n", global.no_turbo); } static ssize_t store_no_turbo(struct kobject *a, struct kobj_attribute *b, @@ -1613,29 +1604,25 @@ static ssize_t store_no_turbo(struct kobject *a, struct kobj_attribute *b, if (sscanf(buf, "%u", &input) != 1) return -EINVAL; - mutex_lock(&intel_pstate_driver_lock); + guard(mutex)(&intel_pstate_driver_lock); - if (!intel_pstate_driver) { - count = -EAGAIN; - goto unlock_driver; - } + if (!intel_pstate_driver) + return -EAGAIN; no_turbo = !!clamp_t(int, input, 0, 1); WRITE_ONCE(global.turbo_disabled, turbo_is_disabled()); if (global.turbo_disabled && !no_turbo) { pr_notice("Turbo disabled by BIOS or unavailable on processor\n"); - count = -EPERM; if (global.no_turbo) - goto unlock_driver; - else - no_turbo = 1; - } + return -EPERM; - if (no_turbo == global.no_turbo) { - goto unlock_driver; + no_turbo = 1; } + if (no_turbo == global.no_turbo) + return count; + WRITE_ONCE(global.no_turbo, no_turbo); mutex_lock(&intel_pstate_limits_lock); @@ -1654,9 +1641,6 @@ static ssize_t store_no_turbo(struct kobject *a, struct kobj_attribute *b, intel_pstate_update_limits_for_all(); arch_set_max_freq_ratio(no_turbo); -unlock_driver: - mutex_unlock(&intel_pstate_driver_lock); - return count; } @@ -1706,12 +1690,10 @@ static ssize_t store_max_perf_pct(struct kobject *a, struct kobj_attribute *b, if (ret != 1) return -EINVAL; - mutex_lock(&intel_pstate_driver_lock); + guard(mutex)(&intel_pstate_driver_lock); - if (!intel_pstate_driver) { - mutex_unlock(&intel_pstate_driver_lock); + if (!intel_pstate_driver) return -EAGAIN; - } mutex_lock(&intel_pstate_limits_lock); @@ -1724,8 +1706,6 @@ static ssize_t store_max_perf_pct(struct kobject *a, struct kobj_attribute *b, else update_qos_requests(FREQ_QOS_MAX); - mutex_unlock(&intel_pstate_driver_lock); - return count; } @@ -1739,12 +1719,10 @@ static ssize_t store_min_perf_pct(struct kobject *a, struct kobj_attribute *b, if (ret != 1) return 
-EINVAL; - mutex_lock(&intel_pstate_driver_lock); + guard(mutex)(&intel_pstate_driver_lock); - if (!intel_pstate_driver) { - mutex_unlock(&intel_pstate_driver_lock); + if (!intel_pstate_driver) return -EAGAIN; - } mutex_lock(&intel_pstate_limits_lock); @@ -1758,8 +1736,6 @@ static ssize_t store_min_perf_pct(struct kobject *a, struct kobj_attribute *b, else update_qos_requests(FREQ_QOS_MIN); - mutex_unlock(&intel_pstate_driver_lock); - return count; } @@ -1780,10 +1756,10 @@ static ssize_t store_hwp_dynamic_boost(struct kobject *a, if (ret) return ret; - mutex_lock(&intel_pstate_driver_lock); + guard(mutex)(&intel_pstate_driver_lock); + hwp_boost = !!input; intel_pstate_update_policies(); - mutex_unlock(&intel_pstate_driver_lock); return count; } @@ -2072,6 +2048,18 @@ static void intel_pstate_hwp_enable(struct cpudata *cpudata) intel_pstate_update_epp_defaults(cpudata); } +static u64 get_perf_ctl_val(int pstate) +{ + u64 val; + + val = (u64)pstate << 8; + if (READ_ONCE(global.no_turbo) && !READ_ONCE(global.turbo_disabled) && + cpu_feature_enabled(X86_FEATURE_IDA)) + val |= (u64)1 << 32; + + return val; +} + static int atom_get_min_pstate(int not_used) { u64 value; @@ -2098,15 +2086,10 @@ static int atom_get_turbo_pstate(int not_used) static u64 atom_get_val(struct cpudata *cpudata, int pstate) { - u64 val; + u64 val = get_perf_ctl_val(pstate); int32_t vid_fp; u32 vid; - val = (u64)pstate << 8; - if (READ_ONCE(global.no_turbo) && !READ_ONCE(global.turbo_disabled) && - cpu_feature_enabled(X86_FEATURE_IDA)) - val |= (u64)1 << 32; - vid_fp = cpudata->vid.min + mul_fp( int_tofp(pstate - cpudata->pstate.min_pstate), cpudata->vid.ratio); @@ -2266,14 +2249,7 @@ static int core_get_turbo_pstate(int cpu) static u64 core_get_val(struct cpudata *cpudata, int pstate) { - u64 val; - - val = (u64)pstate << 8; - if (READ_ONCE(global.no_turbo) && !READ_ONCE(global.turbo_disabled) && - cpu_feature_enabled(X86_FEATURE_IDA)) - val |= (u64)1 << 32; - - return val; + return get_perf_ctl_val(pstate); } static int knl_get_aperf_mperf_shift(void) @@ -2297,18 +2273,14 @@ static int knl_get_turbo_pstate(int cpu) static int hwp_get_cpu_scaling(int cpu) { if (hybrid_scaling_factor) { - struct cpuinfo_x86 *c = &cpu_data(cpu); - u8 cpu_type = c->topo.intel_type; - /* * Return the hybrid scaling factor for P-cores and use the * default core scaling for E-cores. */ - if (cpu_type == INTEL_CPU_TYPE_CORE) + if (hybrid_get_cpu_type(cpu) == INTEL_CPU_TYPE_CORE) return hybrid_scaling_factor; - if (cpu_type == INTEL_CPU_TYPE_ATOM) - return core_get_scaling(); + return core_get_scaling(); } /* Use core scaling on non-hybrid systems. 
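* (hybrid_scaling_factor is only set via hybrid CPU match data, so it
* is expected to be zero here and plain core scaling applies.)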
*/ @@ -2343,11 +2315,10 @@ static void intel_pstate_set_min_pstate(struct cpudata *cpu) static void intel_pstate_get_cpu_pstates(struct cpudata *cpu) { - int perf_ctl_max_phys = pstate_funcs.get_max_physical(cpu->cpu); int perf_ctl_scaling = pstate_funcs.get_scaling(); + cpu->pstate.max_pstate_physical = pstate_funcs.get_max_physical(cpu->cpu); cpu->pstate.min_pstate = pstate_funcs.get_min(cpu->cpu); - cpu->pstate.max_pstate_physical = perf_ctl_max_phys; cpu->pstate.perf_ctl_scaling = perf_ctl_scaling; if (hwp_active && !hwp_mode_bdw) { @@ -2355,10 +2326,7 @@ static void intel_pstate_get_cpu_pstates(struct cpudata *cpu) if (pstate_funcs.get_cpu_scaling) { cpu->pstate.scaling = pstate_funcs.get_cpu_scaling(cpu->cpu); - if (cpu->pstate.scaling != perf_ctl_scaling) { - intel_pstate_hybrid_hwp_adjust(cpu); - hwp_is_hybrid = true; - } + intel_pstate_hybrid_hwp_adjust(cpu); } else { cpu->pstate.scaling = perf_ctl_scaling; } @@ -2760,6 +2728,7 @@ static const struct x86_cpu_id intel_pstate_cpu_oob_ids[] __initconst = { X86_MATCH(INTEL_ATOM_CRESTMONT, core_funcs), X86_MATCH(INTEL_ATOM_CRESTMONT_X, core_funcs), X86_MATCH(INTEL_ATOM_DARKMONT_X, core_funcs), + X86_MATCH(INTEL_DIAMONDRAPIDS_X, core_funcs), {} }; #endif @@ -3912,9 +3881,9 @@ hwp_cpu_matched: } - mutex_lock(&intel_pstate_driver_lock); - rc = intel_pstate_register_driver(default_driver); - mutex_unlock(&intel_pstate_driver_lock); + scoped_guard(mutex, &intel_pstate_driver_lock) { + rc = intel_pstate_register_driver(default_driver); + } if (rc) { intel_pstate_sysfs_remove(); return rc; diff --git a/drivers/cpufreq/qcom-cpufreq-nvmem.c b/drivers/cpufreq/qcom-cpufreq-nvmem.c index 765a5bb81829..81e16b5a0245 100644 --- a/drivers/cpufreq/qcom-cpufreq-nvmem.c +++ b/drivers/cpufreq/qcom-cpufreq-nvmem.c @@ -256,13 +256,22 @@ len_error: return ret; } +static const struct of_device_id qcom_cpufreq_ipq806x_match_list[] __maybe_unused = { + { .compatible = "qcom,ipq8062", .data = (const void *)QCOM_ID_IPQ8062 }, + { .compatible = "qcom,ipq8064", .data = (const void *)QCOM_ID_IPQ8064 }, + { .compatible = "qcom,ipq8065", .data = (const void *)QCOM_ID_IPQ8065 }, + { .compatible = "qcom,ipq8066", .data = (const void *)QCOM_ID_IPQ8066 }, + { .compatible = "qcom,ipq8068", .data = (const void *)QCOM_ID_IPQ8068 }, + { .compatible = "qcom,ipq8069", .data = (const void *)QCOM_ID_IPQ8069 }, +}; + static int qcom_cpufreq_ipq8064_name_version(struct device *cpu_dev, struct nvmem_cell *speedbin_nvmem, char **pvs_name, struct qcom_cpufreq_drv *drv) { + int msm_id = -1, ret = 0; int speed = 0, pvs = 0; - int msm_id, ret = 0; u8 *speedbin; size_t len; @@ -279,8 +288,30 @@ static int qcom_cpufreq_ipq8064_name_version(struct device *cpu_dev, get_krait_bin_format_a(cpu_dev, &speed, &pvs, speedbin); ret = qcom_smem_get_soc_id(&msm_id); - if (ret) + if (ret == -ENODEV) { + const struct of_device_id *match; + struct device_node *root; + + root = of_find_node_by_path("/"); + if (!root) { + ret = -ENODEV; + goto exit; + } + + /* Fallback to compatible match with no SMEM initialized */ + match = of_match_node(qcom_cpufreq_ipq806x_match_list, root); + of_node_put(root); + if (!match) { + ret = -ENODEV; + goto exit; + } + + /* We found a matching device, get the msm_id from the data entry */ + msm_id = (int)(uintptr_t)match->data; + ret = 0; + } else if (ret) { goto exit; + } switch (msm_id) { case QCOM_ID_IPQ8062: diff --git a/drivers/cpufreq/s5pv210-cpufreq.c b/drivers/cpufreq/s5pv210-cpufreq.c index 4215621deb3f..ba8a1c96427a 100644 --- a/drivers/cpufreq/s5pv210-cpufreq.c 
+++ b/drivers/cpufreq/s5pv210-cpufreq.c @@ -518,7 +518,7 @@ static int s5pv210_cpu_init(struct cpufreq_policy *policy) if (policy->cpu != 0) { ret = -EINVAL; - goto out_dmc1; + goto out; } /* @@ -530,7 +530,7 @@ static int s5pv210_cpu_init(struct cpufreq_policy *policy) if ((mem_type != LPDDR) && (mem_type != LPDDR2)) { pr_err("CPUFreq doesn't support this memory type\n"); ret = -EINVAL; - goto out_dmc1; + goto out; } /* Find current refresh counter and frequency each DMC */ @@ -544,6 +544,8 @@ static int s5pv210_cpu_init(struct cpufreq_policy *policy) cpufreq_generic_init(policy, s5pv210_freq_table, 40000); return 0; +out: + clk_put(dmc1_clk); out_dmc1: clk_put(dmc0_clk); out_dmc0: diff --git a/drivers/cpufreq/tegra186-cpufreq.c b/drivers/cpufreq/tegra186-cpufreq.c index 136ab102f636..34ed943c5f34 100644 --- a/drivers/cpufreq/tegra186-cpufreq.c +++ b/drivers/cpufreq/tegra186-cpufreq.c @@ -8,6 +8,7 @@ #include <linux/module.h> #include <linux/of.h> #include <linux/platform_device.h> +#include <linux/units.h> #include <soc/tegra/bpmp.h> #include <soc/tegra/bpmp-abi.h> @@ -58,7 +59,7 @@ static const struct tegra186_cpufreq_cpu tegra186_cpus[] = { }; struct tegra186_cpufreq_cluster { - struct cpufreq_frequency_table *table; + struct cpufreq_frequency_table *bpmp_lut; u32 ref_clk_khz; u32 div; }; @@ -66,16 +67,119 @@ struct tegra186_cpufreq_cluster { struct tegra186_cpufreq_data { void __iomem *regs; const struct tegra186_cpufreq_cpu *cpus; + bool icc_dram_bw_scaling; struct tegra186_cpufreq_cluster clusters[]; }; +static int tegra_cpufreq_set_bw(struct cpufreq_policy *policy, unsigned long freq_khz) +{ + struct tegra186_cpufreq_data *data = cpufreq_get_driver_data(); + struct device *dev; + int ret; + + dev = get_cpu_device(policy->cpu); + if (!dev) + return -ENODEV; + + struct dev_pm_opp *opp __free(put_opp) = + dev_pm_opp_find_freq_exact(dev, freq_khz * HZ_PER_KHZ, true); + if (IS_ERR(opp)) + return PTR_ERR(opp); + + ret = dev_pm_opp_set_opp(dev, opp); + if (ret) + data->icc_dram_bw_scaling = false; + + return ret; +} + +static int tegra_cpufreq_init_cpufreq_table(struct cpufreq_policy *policy, + struct cpufreq_frequency_table *bpmp_lut, + struct cpufreq_frequency_table **opp_table) +{ + struct tegra186_cpufreq_data *data = cpufreq_get_driver_data(); + struct cpufreq_frequency_table *freq_table = NULL; + struct cpufreq_frequency_table *pos; + struct device *cpu_dev; + unsigned long rate; + int ret, max_opps; + int j = 0; + + cpu_dev = get_cpu_device(policy->cpu); + if (!cpu_dev) { + pr_err("%s: failed to get cpu%d device\n", __func__, policy->cpu); + return -ENODEV; + } + + /* Initialize OPP table mentioned in operating-points-v2 property in DT */ + ret = dev_pm_opp_of_add_table_indexed(cpu_dev, 0); + if (ret) { + dev_err(cpu_dev, "Invalid or empty opp table in device tree\n"); + data->icc_dram_bw_scaling = false; + return ret; + } + + max_opps = dev_pm_opp_get_opp_count(cpu_dev); + if (max_opps <= 0) { + dev_err(cpu_dev, "Failed to add OPPs\n"); + return max_opps; + } + + /* Disable all opps and cross-validate against LUT later */ + for (rate = 0; ; rate++) { + struct dev_pm_opp *opp __free(put_opp) = + dev_pm_opp_find_freq_ceil(cpu_dev, &rate); + if (IS_ERR(opp)) + break; + + dev_pm_opp_disable(cpu_dev, rate); + } + + freq_table = kcalloc((max_opps + 1), sizeof(*freq_table), GFP_KERNEL); + if (!freq_table) + return -ENOMEM; + + /* + * Cross check the frequencies from BPMP-FW LUT against the OPP's present in DT. + * Enable only those DT OPP's which are present in LUT also. 
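+ * Anything missing from either source stays disabled, so the resulting
+ * frequency table is effectively the intersection of the two.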
+ */ + cpufreq_for_each_valid_entry(pos, bpmp_lut) { + struct dev_pm_opp *opp __free(put_opp) = + dev_pm_opp_find_freq_exact(cpu_dev, pos->frequency * HZ_PER_KHZ, false); + if (IS_ERR(opp)) + continue; + + ret = dev_pm_opp_enable(cpu_dev, pos->frequency * HZ_PER_KHZ); + if (ret < 0) + return ret; + + freq_table[j].driver_data = pos->driver_data; + freq_table[j].frequency = pos->frequency; + j++; + } + + freq_table[j].driver_data = pos->driver_data; + freq_table[j].frequency = CPUFREQ_TABLE_END; + + *opp_table = &freq_table[0]; + + dev_pm_opp_set_sharing_cpus(cpu_dev, policy->cpus); + + /* Prime interconnect data */ + tegra_cpufreq_set_bw(policy, freq_table[j - 1].frequency); + + return ret; +} + static int tegra186_cpufreq_init(struct cpufreq_policy *policy) { struct tegra186_cpufreq_data *data = cpufreq_get_driver_data(); unsigned int cluster = data->cpus[policy->cpu].bpmp_cluster_id; + struct cpufreq_frequency_table *freq_table; + struct cpufreq_frequency_table *bpmp_lut; u32 cpu; + int ret; - policy->freq_table = data->clusters[cluster].table; policy->cpuinfo.transition_latency = 300 * 1000; policy->driver_data = NULL; @@ -85,6 +189,20 @@ static int tegra186_cpufreq_init(struct cpufreq_policy *policy) cpumask_set_cpu(cpu, policy->cpus); } + bpmp_lut = data->clusters[cluster].bpmp_lut; + + if (data->icc_dram_bw_scaling) { + ret = tegra_cpufreq_init_cpufreq_table(policy, bpmp_lut, &freq_table); + if (!ret) { + policy->freq_table = freq_table; + return 0; + } + } + + data->icc_dram_bw_scaling = false; + policy->freq_table = bpmp_lut; + pr_info("OPP tables missing from DT, EMC frequency scaling disabled\n"); + return 0; } @@ -102,6 +220,10 @@ static int tegra186_cpufreq_set_target(struct cpufreq_policy *policy, writel(edvd_val, data->regs + edvd_offset); } + if (data->icc_dram_bw_scaling) + tegra_cpufreq_set_bw(policy, tbl->frequency); + + return 0; } @@ -134,7 +256,7 @@ static struct cpufreq_driver tegra186_cpufreq_driver = { .init = tegra186_cpufreq_init, }; -static struct cpufreq_frequency_table *init_vhint_table( +static struct cpufreq_frequency_table *tegra_cpufreq_bpmp_read_lut( struct platform_device *pdev, struct tegra_bpmp *bpmp, struct tegra186_cpufreq_cluster *cluster, unsigned int cluster_id, int *num_rates) @@ -229,6 +351,7 @@ static int tegra186_cpufreq_probe(struct platform_device *pdev) { struct tegra186_cpufreq_data *data; struct tegra_bpmp *bpmp; + struct device *cpu_dev; unsigned int i = 0, err, edvd_offset; int num_rates = 0; u32 edvd_val, cpu; @@ -254,9 +377,9 @@ static int tegra186_cpufreq_probe(struct platform_device *pdev) for (i = 0; i < TEGRA186_NUM_CLUSTERS; i++) { struct tegra186_cpufreq_cluster *cluster = &data->clusters[i]; - cluster->table = init_vhint_table(pdev, bpmp, cluster, i, &num_rates); - if (IS_ERR(cluster->table)) { - err = PTR_ERR(cluster->table); + cluster->bpmp_lut = tegra_cpufreq_bpmp_read_lut(pdev, bpmp, cluster, i, &num_rates); + if (IS_ERR(cluster->bpmp_lut)) { + err = PTR_ERR(cluster->bpmp_lut); goto put_bpmp; } else if (!num_rates) { err = -EINVAL; @@ -265,7 +388,7 @@ static int tegra186_cpufreq_probe(struct platform_device *pdev) for (cpu = 0; cpu < ARRAY_SIZE(tegra186_cpus); cpu++) { if (data->cpus[cpu].bpmp_cluster_id == i) { - edvd_val = cluster->table[num_rates - 1].driver_data; + edvd_val = cluster->bpmp_lut[num_rates - 1].driver_data; edvd_offset = data->cpus[cpu].edvd_offset; writel(edvd_val, data->regs + edvd_offset); } @@ -274,6 +397,19 @@ static int tegra186_cpufreq_probe(struct platform_device *pdev) 
tegra186_cpufreq_driver.driver_data = data; + /* Check for optional OPPv2 and interconnect paths on CPU0 to enable ICC scaling */ + cpu_dev = get_cpu_device(0); + if (!cpu_dev) { + err = -EPROBE_DEFER; + goto put_bpmp; + } + + if (dev_pm_opp_of_get_opp_desc_node(cpu_dev)) { + err = dev_pm_opp_of_find_icc_paths(cpu_dev, NULL); + if (!err) + data->icc_dram_bw_scaling = true; + } + err = cpufreq_register_driver(&tegra186_cpufreq_driver); put_bpmp: diff --git a/drivers/cpufreq/tegra194-cpufreq.c b/drivers/cpufreq/tegra194-cpufreq.c index 9b4f516f313e..695599e1001f 100644 --- a/drivers/cpufreq/tegra194-cpufreq.c +++ b/drivers/cpufreq/tegra194-cpufreq.c @@ -750,7 +750,8 @@ static int tegra194_cpufreq_probe(struct platform_device *pdev) if (IS_ERR(bpmp)) return PTR_ERR(bpmp); - read_counters_wq = alloc_workqueue("read_counters_wq", __WQ_LEGACY, 1); + read_counters_wq = alloc_workqueue("read_counters_wq", + __WQ_LEGACY | WQ_PERCPU, 1); if (!read_counters_wq) { dev_err(&pdev->dev, "fail to create_workqueue\n"); err = -EINVAL; diff --git a/drivers/cpuidle/cpuidle.c b/drivers/cpuidle/cpuidle.c index 56132e843c99..c7876e9e024f 100644 --- a/drivers/cpuidle/cpuidle.c +++ b/drivers/cpuidle/cpuidle.c @@ -184,20 +184,22 @@ static noinstr void enter_s2idle_proper(struct cpuidle_driver *drv, * cpuidle_enter_s2idle - Enter an idle state suitable for suspend-to-idle. * @drv: cpuidle driver for the given CPU. * @dev: cpuidle device for the given CPU. + * @latency_limit_ns: Idle state exit latency limit * * If there are states with the ->enter_s2idle callback, find the deepest of * them and enter it with frozen tick. */ -int cpuidle_enter_s2idle(struct cpuidle_driver *drv, struct cpuidle_device *dev) +int cpuidle_enter_s2idle(struct cpuidle_driver *drv, struct cpuidle_device *dev, + u64 latency_limit_ns) { int index; /* - * Find the deepest state with ->enter_s2idle present, which guarantees - * that interrupts won't be enabled when it exits and allows the tick to - * be frozen safely. + * Find the deepest state with ->enter_s2idle present that meets the + * specified latency limit, which guarantees that interrupts won't be + * enabled when it exits and allows the tick to be frozen safely. */ - index = find_deepest_state(drv, dev, U64_MAX, 0, true); + index = find_deepest_state(drv, dev, latency_limit_ns, 0, true); if (index > 0) { enter_s2idle_proper(drv, dev, index); local_irq_enable(); diff --git a/drivers/cpuidle/driver.c b/drivers/cpuidle/driver.c index 9bbfa594c442..370664c47e65 100644 --- a/drivers/cpuidle/driver.c +++ b/drivers/cpuidle/driver.c @@ -8,6 +8,8 @@ * This code is licenced under the GPL. */ +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt + #include <linux/mutex.h> #include <linux/module.h> #include <linux/sched.h> @@ -193,6 +195,14 @@ static void __cpuidle_driver_init(struct cpuidle_driver *drv) s->exit_latency_ns = 0; else s->exit_latency = div_u64(s->exit_latency_ns, NSEC_PER_USEC); + + /* + * Warn if the exit latency of a CPU idle state exceeds its + * target residency which is assumed to never happen in cpuidle + * in multiple places. 
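+ * Such a state would be inconsistent: exiting it alone would take
+ * longer than the minimum residency it advertises.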
+ */ + if (s->exit_latency_ns > s->target_residency_ns) + pr_warn("Idle state %d target residency too low\n", i); } } diff --git a/drivers/cpuidle/governor.c b/drivers/cpuidle/governor.c index 0d0f9751ff8f..5d0e7f78c6c5 100644 --- a/drivers/cpuidle/governor.c +++ b/drivers/cpuidle/governor.c @@ -111,6 +111,10 @@ s64 cpuidle_governor_latency_req(unsigned int cpu) struct device *device = get_cpu_device(cpu); int device_req = dev_pm_qos_raw_resume_latency(device); int global_req = cpu_latency_qos_limit(); + int global_wake_req = cpu_wakeup_latency_qos_limit(); + + if (global_req > global_wake_req) + global_req = global_wake_req; if (device_req > global_req) device_req = global_req; diff --git a/drivers/cpuidle/governors/menu.c b/drivers/cpuidle/governors/menu.c index 23239b0c04f9..64d6f7a1c776 100644 --- a/drivers/cpuidle/governors/menu.c +++ b/drivers/cpuidle/governors/menu.c @@ -317,12 +317,13 @@ static int menu_select(struct cpuidle_driver *drv, struct cpuidle_device *dev, } /* - * Use a physical idle state, not busy polling, unless a timer - * is going to trigger soon enough or the exit latency of the - * idle state in question is greater than the predicted idle - * duration. + * Use a physical idle state instead of busy polling so long as + * its target residency is below the residency threshold, its + * exit latency is not greater than the predicted idle duration, + * and the next timer doesn't expire soon. */ if ((drv->states[idx].flags & CPUIDLE_FLAG_POLLING) && + s->target_residency_ns < RESIDENCY_THRESHOLD_NS && s->target_residency_ns <= data->next_timer_ns && s->exit_latency_ns <= predicted_ns) { predicted_ns = s->target_residency_ns; diff --git a/drivers/cpuidle/governors/teo.c b/drivers/cpuidle/governors/teo.c index bfa55c1eab5b..81ac5fd58a1c 100644 --- a/drivers/cpuidle/governors/teo.c +++ b/drivers/cpuidle/governors/teo.c @@ -76,7 +76,7 @@ * likely woken up by a non-timer wakeup source). * * 2. If the second sum computed in step 1 is greater than a half of the sum of - * both metrics for the candidate state bin and all subsequent bins(if any), + * both metrics for the candidate state bin and all subsequent bins (if any), * a shallower idle state is likely to be more suitable, so look for it. * * - Traverse the enabled idle states shallower than the candidate one in the @@ -133,21 +133,33 @@ struct teo_bin { * @sleep_length_ns: Time till the closest timer event (at the selection time). * @state_bins: Idle state data bins for this CPU. * @total: Grand total of the "intercepts" and "hits" metrics for all bins. + * @total_tick: Wakeups by the scheduler tick. * @tick_intercepts: "Intercepts" before TICK_NSEC. * @short_idles: Wakeups after short idle periods. - * @artificial_wakeup: Set if the wakeup has been triggered by a safety net. + * @tick_wakeup: Set if the last wakeup was by the scheduler tick. */ struct teo_cpu { s64 sleep_length_ns; struct teo_bin state_bins[CPUIDLE_STATE_MAX]; unsigned int total; + unsigned int total_tick; unsigned int tick_intercepts; unsigned int short_idles; - bool artificial_wakeup; + bool tick_wakeup; }; static DEFINE_PER_CPU(struct teo_cpu, teo_cpus); +static void teo_decay(unsigned int *metric) +{ + unsigned int delta = *metric >> DECAY_SHIFT; + + if (delta) + *metric -= delta; + else + *metric = 0; +} + /** * teo_update - Update CPU metrics after wakeup. * @drv: cpuidle driver containing state data. 
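* @dev: Target CPU.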
@@ -155,21 +167,22 @@ static DEFINE_PER_CPU(struct teo_cpu, teo_cpus); */ static void teo_update(struct cpuidle_driver *drv, struct cpuidle_device *dev) { - struct teo_cpu *cpu_data = per_cpu_ptr(&teo_cpus, dev->cpu); + struct teo_cpu *cpu_data = this_cpu_ptr(&teo_cpus); int i, idx_timer = 0, idx_duration = 0; - s64 target_residency_ns; - u64 measured_ns; + s64 target_residency_ns, measured_ns; + unsigned int total = 0; - cpu_data->short_idles -= cpu_data->short_idles >> DECAY_SHIFT; + teo_decay(&cpu_data->short_idles); - if (cpu_data->artificial_wakeup) { + if (dev->poll_time_limit) { + dev->poll_time_limit = false; /* - * If one of the safety nets has triggered, assume that this + * Polling state timeout has triggered, so assume that this * might have been a long sleep. */ - measured_ns = U64_MAX; + measured_ns = S64_MAX; } else { - u64 lat_ns = drv->states[dev->last_state_idx].exit_latency_ns; + s64 lat_ns = drv->states[dev->last_state_idx].exit_latency_ns; measured_ns = dev->last_residency_ns; /* @@ -196,8 +209,10 @@ static void teo_update(struct cpuidle_driver *drv, struct cpuidle_device *dev) for (i = 0; i < drv->state_count; i++) { struct teo_bin *bin = &cpu_data->state_bins[i]; - bin->hits -= bin->hits >> DECAY_SHIFT; - bin->intercepts -= bin->intercepts >> DECAY_SHIFT; + teo_decay(&bin->hits); + total += bin->hits; + teo_decay(&bin->intercepts); + total += bin->intercepts; target_residency_ns = drv->states[i].target_residency_ns; @@ -208,7 +223,24 @@ static void teo_update(struct cpuidle_driver *drv, struct cpuidle_device *dev) } } - cpu_data->tick_intercepts -= cpu_data->tick_intercepts >> DECAY_SHIFT; + cpu_data->total = total + PULSE; + + teo_decay(&cpu_data->tick_intercepts); + + teo_decay(&cpu_data->total_tick); + if (cpu_data->tick_wakeup) { + cpu_data->total_tick += PULSE; + /* + * If tick wakeups dominate the wakeup pattern, count this one + * as a hit on the deepest available idle state to increase the + * likelihood of stopping the tick. + */ + if (3 * cpu_data->total_tick > 2 * cpu_data->total) { + cpu_data->state_bins[drv->state_count-1].hits += PULSE; + return; + } + } + /* * If the measured idle duration falls into the same bin as the sleep * length, this is a "hit", so update the "hits" metric for that bin. @@ -219,18 +251,9 @@ static void teo_update(struct cpuidle_driver *drv, struct cpuidle_device *dev) cpu_data->state_bins[idx_timer].hits += PULSE; } else { cpu_data->state_bins[idx_duration].intercepts += PULSE; - if (TICK_NSEC <= measured_ns) + if (measured_ns <= TICK_NSEC) cpu_data->tick_intercepts += PULSE; } - - cpu_data->total -= cpu_data->total >> DECAY_SHIFT; - cpu_data->total += PULSE; -} - -static bool teo_state_ok(int i, struct cpuidle_driver *drv) -{ - return !tick_nohz_tick_stopped() || - drv->states[i].target_residency_ns >= TICK_NSEC; } /** @@ -239,17 +262,15 @@ static bool teo_state_ok(int i, struct cpuidle_driver *drv) * @dev: Target CPU. * @state_idx: Index of the capping idle state. * @duration_ns: Idle duration value to match. - * @no_poll: Don't consider polling states. 
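*
* Return: Index of the chosen shallower idle state (or @state_idx if
* no enabled shallower state exists).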
*/ static int teo_find_shallower_state(struct cpuidle_driver *drv, struct cpuidle_device *dev, int state_idx, - s64 duration_ns, bool no_poll) + s64 duration_ns) { int i; for (i = state_idx - 1; i >= 0; i--) { - if (dev->states_usage[i].disable || - (no_poll && drv->states[i].flags & CPUIDLE_FLAG_POLLING)) + if (dev->states_usage[i].disable) continue; state_idx = i; @@ -268,7 +289,7 @@ static int teo_find_shallower_state(struct cpuidle_driver *drv, static int teo_select(struct cpuidle_driver *drv, struct cpuidle_device *dev, bool *stop_tick) { - struct teo_cpu *cpu_data = per_cpu_ptr(&teo_cpus, dev->cpu); + struct teo_cpu *cpu_data = this_cpu_ptr(&teo_cpus); s64 latency_req = cpuidle_governor_latency_req(dev->cpu); ktime_t delta_tick = TICK_NSEC / 2; unsigned int idx_intercept_sum = 0; @@ -356,7 +377,18 @@ static int teo_select(struct cpuidle_driver *drv, struct cpuidle_device *dev, * better choice. */ if (2 * idx_intercept_sum > cpu_data->total - idx_hit_sum) { - int first_suitable_idx = idx; + int min_idx = idx0; + + if (tick_nohz_tick_stopped()) { + /* + * Look for the shallowest idle state below the current + * candidate one whose target residency is at least + * equal to the tick period length. + */ + while (min_idx < idx && + drv->states[min_idx].target_residency_ns < TICK_NSEC) + min_idx++; + } /* * Look for the deepest idle state whose target residency had @@ -366,49 +398,14 @@ static int teo_select(struct cpuidle_driver *drv, struct cpuidle_device *dev, * Take the possible duration limitation present if the tick * has been stopped already into account. */ - intercept_sum = 0; - - for (i = idx - 1; i >= 0; i--) { - struct teo_bin *bin = &cpu_data->state_bins[i]; - - intercept_sum += bin->intercepts; - - if (2 * intercept_sum > idx_intercept_sum) { - /* - * Use the current state unless it is too - * shallow or disabled, in which case take the - * first enabled state that is deep enough. - */ - if (teo_state_ok(i, drv) && - !dev->states_usage[i].disable) { - idx = i; - break; - } - idx = first_suitable_idx; - break; - } + for (i = idx - 1, intercept_sum = 0; i >= min_idx; i--) { + intercept_sum += cpu_data->state_bins[i].intercepts; if (dev->states_usage[i].disable) continue; - if (teo_state_ok(i, drv)) { - /* - * The current state is deep enough, but still - * there may be a better one. - */ - first_suitable_idx = i; - continue; - } - - /* - * The current state is too shallow, so if no suitable - * states other than the initial candidate have been - * found, give up (the remaining states to check are - * shallower still), but otherwise the first suitable - * state other than the initial candidate may turn out - * to be preferable. - */ - if (first_suitable_idx == idx) + idx = i; + if (2 * intercept_sum > idx_intercept_sum) break; } } @@ -458,11 +455,8 @@ static int teo_select(struct cpuidle_driver *drv, struct cpuidle_device *dev, * If the closest expected timer is before the target residency of the * candidate state, a shallower one needs to be found. 
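* Otherwise the timer would cut the idle period short of the state's
* break-even point.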
*/ - if (drv->states[idx].target_residency_ns > duration_ns) { - i = teo_find_shallower_state(drv, dev, idx, duration_ns, false); - if (teo_state_ok(i, drv)) - idx = i; - } + if (drv->states[idx].target_residency_ns > duration_ns) + idx = teo_find_shallower_state(drv, dev, idx, duration_ns); /* * If the selected state's target residency is below the tick length @@ -490,7 +484,7 @@ end: */ if (idx > idx0 && drv->states[idx].target_residency_ns > delta_tick) - idx = teo_find_shallower_state(drv, dev, idx, delta_tick, false); + idx = teo_find_shallower_state(drv, dev, idx, delta_tick); out_tick: *stop_tick = false; @@ -504,20 +498,11 @@ out_tick: */ static void teo_reflect(struct cpuidle_device *dev, int state) { - struct teo_cpu *cpu_data = per_cpu_ptr(&teo_cpus, dev->cpu); + struct teo_cpu *cpu_data = this_cpu_ptr(&teo_cpus); + + cpu_data->tick_wakeup = tick_nohz_idle_got_tick(); dev->last_state_idx = state; - if (dev->poll_time_limit || - (tick_nohz_idle_got_tick() && cpu_data->sleep_length_ns > TICK_NSEC)) { - /* - * The wakeup was not "genuine", but triggered by one of the - * safety nets. - */ - dev->poll_time_limit = false; - cpu_data->artificial_wakeup = true; - } else { - cpu_data->artificial_wakeup = false; - } } /** diff --git a/drivers/cpuidle/poll_state.c b/drivers/cpuidle/poll_state.c index 9b6d90a72601..c7524e4c522a 100644 --- a/drivers/cpuidle/poll_state.c +++ b/drivers/cpuidle/poll_state.c @@ -4,9 +4,13 @@ */ #include <linux/cpuidle.h> +#include <linux/export.h> +#include <linux/irqflags.h> #include <linux/sched.h> #include <linux/sched/clock.h> #include <linux/sched/idle.h> +#include <linux/sprintf.h> +#include <linux/types.h> #define POLL_IDLE_RELAX_COUNT 200 diff --git a/drivers/devfreq/devfreq.c b/drivers/devfreq/devfreq.c index 2e8d01d47f69..00979f2e0e27 100644 --- a/drivers/devfreq/devfreq.c +++ b/drivers/devfreq/devfreq.c @@ -20,6 +20,7 @@ #include <linux/stat.h> #include <linux/pm_opp.h> #include <linux/devfreq.h> +#include <linux/devfreq-governor.h> #include <linux/workqueue.h> #include <linux/platform_device.h> #include <linux/list.h> @@ -28,7 +29,6 @@ #include <linux/of.h> #include <linux/pm_qos.h> #include <linux/units.h> -#include "governor.h" #define CREATE_TRACE_POINTS #include <trace/events/devfreq.h> diff --git a/drivers/devfreq/governor_passive.c b/drivers/devfreq/governor_passive.c index 953cf9a1e9f7..8cd6f9a59f64 100644 --- a/drivers/devfreq/governor_passive.c +++ b/drivers/devfreq/governor_passive.c @@ -14,8 +14,33 @@ #include <linux/slab.h> #include <linux/device.h> #include <linux/devfreq.h> +#include <linux/devfreq-governor.h> #include <linux/units.h> -#include "governor.h" + +/** + * struct devfreq_cpu_data - Hold the per-cpu data + * @node: list node + * @dev: reference to cpu device. + * @first_cpu: the cpumask of the first cpu of a policy. + * @opp_table: reference to cpu opp table. + * @cur_freq: the current frequency of the cpu. + * @min_freq: the min frequency of the cpu. + * @max_freq: the max frequency of the cpu. + * + * This structure stores the required cpu_data of a cpu. + * This is auto-populated by the governor. 
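+ * One instance is expected per cpufreq policy, looked up through the
+ * policy's first CPU (see get_parent_cpu_data()).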
+ */ +struct devfreq_cpu_data { + struct list_head node; + + struct device *dev; + unsigned int first_cpu; + + struct opp_table *opp_table; + unsigned int cur_freq; + unsigned int min_freq; + unsigned int max_freq; +}; static struct devfreq_cpu_data * get_parent_cpu_data(struct devfreq_passive_data *p_data, diff --git a/drivers/devfreq/governor_performance.c b/drivers/devfreq/governor_performance.c index 2e4e981446fa..fdb22bf512cf 100644 --- a/drivers/devfreq/governor_performance.c +++ b/drivers/devfreq/governor_performance.c @@ -7,8 +7,8 @@ */ #include <linux/devfreq.h> +#include <linux/devfreq-governor.h> #include <linux/module.h> -#include "governor.h" static int devfreq_performance_func(struct devfreq *df, unsigned long *freq) diff --git a/drivers/devfreq/governor_powersave.c b/drivers/devfreq/governor_powersave.c index f059e8814804..ee2d6ec8a512 100644 --- a/drivers/devfreq/governor_powersave.c +++ b/drivers/devfreq/governor_powersave.c @@ -7,8 +7,8 @@ */ #include <linux/devfreq.h> +#include <linux/devfreq-governor.h> #include <linux/module.h> -#include "governor.h" static int devfreq_powersave_func(struct devfreq *df, unsigned long *freq) diff --git a/drivers/devfreq/governor_simpleondemand.c b/drivers/devfreq/governor_simpleondemand.c index c23435736367..ac9c5e9e51a4 100644 --- a/drivers/devfreq/governor_simpleondemand.c +++ b/drivers/devfreq/governor_simpleondemand.c @@ -9,12 +9,12 @@ #include <linux/errno.h> #include <linux/module.h> #include <linux/devfreq.h> +#include <linux/devfreq-governor.h> #include <linux/math64.h> -#include "governor.h" /* Default constants for DevFreq-Simple-Ondemand (DFSO) */ #define DFSO_UPTHRESHOLD (90) -#define DFSO_DOWNDIFFERENCTIAL (5) +#define DFSO_DOWNDIFFERENTIAL (5) static int devfreq_simple_ondemand_func(struct devfreq *df, unsigned long *freq) { @@ -22,7 +22,7 @@ static int devfreq_simple_ondemand_func(struct devfreq *df, struct devfreq_dev_status *stat; unsigned long long a, b; unsigned int dfso_upthreshold = DFSO_UPTHRESHOLD; - unsigned int dfso_downdifferential = DFSO_DOWNDIFFERENCTIAL; + unsigned int dfso_downdifferential = DFSO_DOWNDIFFERENTIAL; struct devfreq_simple_ondemand_data *data = df->data; err = devfreq_update_stats(df); diff --git a/drivers/devfreq/governor_userspace.c b/drivers/devfreq/governor_userspace.c index 175de0c0b50e..395174f93960 100644 --- a/drivers/devfreq/governor_userspace.c +++ b/drivers/devfreq/governor_userspace.c @@ -9,11 +9,11 @@ #include <linux/slab.h> #include <linux/device.h> #include <linux/devfreq.h> +#include <linux/devfreq-governor.h> #include <linux/kstrtox.h> #include <linux/pm.h> #include <linux/mutex.h> #include <linux/module.h> -#include "governor.h" struct userspace_data { unsigned long user_frequency; diff --git a/drivers/devfreq/hisi_uncore_freq.c b/drivers/devfreq/hisi_uncore_freq.c index 96d1815059e3..4d00d813c8ac 100644 --- a/drivers/devfreq/hisi_uncore_freq.c +++ b/drivers/devfreq/hisi_uncore_freq.c @@ -9,6 +9,7 @@ #include <linux/bits.h> #include <linux/cleanup.h> #include <linux/devfreq.h> +#include <linux/devfreq-governor.h> #include <linux/device.h> #include <linux/dev_printk.h> #include <linux/errno.h> @@ -26,8 +27,6 @@ #include <linux/units.h> #include <acpi/pcc.h> -#include "governor.h" - struct hisi_uncore_pcc_data { u16 status; u16 resv; @@ -265,10 +264,11 @@ static int hisi_uncore_target(struct device *dev, unsigned long *freq, dev_err(dev, "Failed to get opp for freq %lu hz\n", *freq); return PTR_ERR(opp); } - dev_pm_opp_put(opp); data = (u32)(dev_pm_opp_get_freq(opp) / 
HZ_PER_MHZ); + dev_pm_opp_put(opp); + return hisi_uncore_cmd_send(uncore, HUCF_PCC_CMD_SET_FREQ, &data); } diff --git a/drivers/devfreq/tegra30-devfreq.c b/drivers/devfreq/tegra30-devfreq.c index 4a4f0106ab9d..8b57194ac698 100644 --- a/drivers/devfreq/tegra30-devfreq.c +++ b/drivers/devfreq/tegra30-devfreq.c @@ -9,9 +9,11 @@ #include <linux/clk.h> #include <linux/cpufreq.h> #include <linux/devfreq.h> +#include <linux/devfreq-governor.h> #include <linux/interrupt.h> #include <linux/io.h> #include <linux/irq.h> +#include <linux/minmax.h> #include <linux/module.h> #include <linux/of.h> #include <linux/platform_device.h> @@ -21,8 +23,6 @@ #include <soc/tegra/fuse.h> -#include "governor.h" - #define ACTMON_GLB_STATUS 0x0 #define ACTMON_GLB_PERIOD_CTRL 0x4 @@ -326,14 +326,9 @@ static unsigned long actmon_cpu_to_emc_rate(struct tegra_devfreq *tegra, unsigned int i; const struct tegra_actmon_emc_ratio *ratio = actmon_emc_ratios; - for (i = 0; i < ARRAY_SIZE(actmon_emc_ratios); i++, ratio++) { - if (cpu_freq >= ratio->cpu_freq) { - if (ratio->emc_freq >= tegra->max_freq) - return tegra->max_freq; - else - return ratio->emc_freq; - } - } + for (i = 0; i < ARRAY_SIZE(actmon_emc_ratios); i++, ratio++) + if (cpu_freq >= ratio->cpu_freq) + return min(ratio->emc_freq, tegra->max_freq); return 0; } diff --git a/drivers/opp/core.c b/drivers/opp/core.c index bba4f7daff8c..dbebb8c829bc 100644 --- a/drivers/opp/core.c +++ b/drivers/opp/core.c @@ -309,9 +309,9 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_is_turbo); */ unsigned long dev_pm_opp_get_max_clock_latency(struct device *dev) { - struct opp_table *opp_table __free(put_opp_table); + struct opp_table *opp_table __free(put_opp_table) = + _find_opp_table(dev); - opp_table = _find_opp_table(dev); if (IS_ERR(opp_table)) return 0; @@ -327,7 +327,6 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_get_max_clock_latency); */ unsigned long dev_pm_opp_get_max_volt_latency(struct device *dev) { - struct opp_table *opp_table __free(put_opp_table); struct dev_pm_opp *opp; struct regulator *reg; unsigned long latency_ns = 0; @@ -337,7 +336,9 @@ unsigned long dev_pm_opp_get_max_volt_latency(struct device *dev) unsigned long max; } *uV; - opp_table = _find_opp_table(dev); + struct opp_table *opp_table __free(put_opp_table) = + _find_opp_table(dev); + if (IS_ERR(opp_table)) return 0; @@ -409,10 +410,11 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_get_max_transition_latency); */ unsigned long dev_pm_opp_get_suspend_opp_freq(struct device *dev) { - struct opp_table *opp_table __free(put_opp_table); unsigned long freq = 0; - opp_table = _find_opp_table(dev); + struct opp_table *opp_table __free(put_opp_table) = + _find_opp_table(dev); + if (IS_ERR(opp_table)) return 0; @@ -447,9 +449,9 @@ int _get_opp_count(struct opp_table *opp_table) */ int dev_pm_opp_get_opp_count(struct device *dev) { - struct opp_table *opp_table __free(put_opp_table); + struct opp_table *opp_table __free(put_opp_table) = + _find_opp_table(dev); - opp_table = _find_opp_table(dev); if (IS_ERR(opp_table)) { dev_dbg(dev, "%s: OPP table not found (%ld)\n", __func__, PTR_ERR(opp_table)); @@ -605,9 +607,9 @@ _find_key(struct device *dev, unsigned long *key, int index, bool available, unsigned long opp_key, unsigned long key), bool (*assert)(struct opp_table *opp_table, unsigned int index)) { - struct opp_table *opp_table __free(put_opp_table); + struct opp_table *opp_table __free(put_opp_table) = + _find_opp_table(dev); - opp_table = _find_opp_table(dev); if (IS_ERR(opp_table)) { dev_err(dev, "%s: OPP table not found (%ld)\n", __func__, 
PTR_ERR(opp_table)); @@ -1410,12 +1412,13 @@ static int _set_opp(struct device *dev, struct opp_table *opp_table, */ int dev_pm_opp_set_rate(struct device *dev, unsigned long target_freq) { - struct opp_table *opp_table __free(put_opp_table); struct dev_pm_opp *opp __free(put_opp) = NULL; unsigned long freq = 0, temp_freq; bool forced = false; - opp_table = _find_opp_table(dev); + struct opp_table *opp_table __free(put_opp_table) = + _find_opp_table(dev); + if (IS_ERR(opp_table)) { dev_err(dev, "%s: device's opp table doesn't exist\n", __func__); return PTR_ERR(opp_table); @@ -1477,9 +1480,9 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_set_rate); */ int dev_pm_opp_set_opp(struct device *dev, struct dev_pm_opp *opp) { - struct opp_table *opp_table __free(put_opp_table); + struct opp_table *opp_table __free(put_opp_table) = + _find_opp_table(dev); - opp_table = _find_opp_table(dev); if (IS_ERR(opp_table)) { dev_err(dev, "%s: device opp doesn't exist\n", __func__); return PTR_ERR(opp_table); @@ -1794,10 +1797,11 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_put); */ void dev_pm_opp_remove(struct device *dev, unsigned long freq) { - struct opp_table *opp_table __free(put_opp_table); struct dev_pm_opp *opp = NULL, *iter; - opp_table = _find_opp_table(dev); + struct opp_table *opp_table __free(put_opp_table) = + _find_opp_table(dev); + if (IS_ERR(opp_table)) return; @@ -1885,9 +1889,9 @@ bool _opp_remove_all_static(struct opp_table *opp_table) */ void dev_pm_opp_remove_all_dynamic(struct device *dev) { - struct opp_table *opp_table __free(put_opp_table); + struct opp_table *opp_table __free(put_opp_table) = + _find_opp_table(dev); - opp_table = _find_opp_table(dev); if (IS_ERR(opp_table)) return; @@ -2871,10 +2875,11 @@ static int _opp_set_availability(struct device *dev, unsigned long freq, bool availability_req) { struct dev_pm_opp *opp __free(put_opp) = ERR_PTR(-ENODEV), *tmp_opp; - struct opp_table *opp_table __free(put_opp_table); /* Find the opp_table */ - opp_table = _find_opp_table(dev); + struct opp_table *opp_table __free(put_opp_table) = + _find_opp_table(dev); + if (IS_ERR(opp_table)) { dev_warn(dev, "%s: Device OPP not found (%ld)\n", __func__, PTR_ERR(opp_table)); @@ -2932,11 +2937,12 @@ int dev_pm_opp_adjust_voltage(struct device *dev, unsigned long freq, { struct dev_pm_opp *opp __free(put_opp) = ERR_PTR(-ENODEV), *tmp_opp; - struct opp_table *opp_table __free(put_opp_table); int r; /* Find the opp_table */ - opp_table = _find_opp_table(dev); + struct opp_table *opp_table __free(put_opp_table) = + _find_opp_table(dev); + if (IS_ERR(opp_table)) { r = PTR_ERR(opp_table); dev_warn(dev, "%s: Device OPP not found (%d)\n", __func__, r); @@ -2986,12 +2992,13 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_adjust_voltage); */ int dev_pm_opp_sync_regulators(struct device *dev) { - struct opp_table *opp_table __free(put_opp_table); struct regulator *reg; int ret, i; /* Device may not have OPP table */ - opp_table = _find_opp_table(dev); + struct opp_table *opp_table __free(put_opp_table) = + _find_opp_table(dev); + if (IS_ERR(opp_table)) return 0; @@ -3062,9 +3069,9 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_disable); */ int dev_pm_opp_register_notifier(struct device *dev, struct notifier_block *nb) { - struct opp_table *opp_table __free(put_opp_table); + struct opp_table *opp_table __free(put_opp_table) = + _find_opp_table(dev); - opp_table = _find_opp_table(dev); if (IS_ERR(opp_table)) return PTR_ERR(opp_table); @@ -3082,9 +3089,9 @@ EXPORT_SYMBOL(dev_pm_opp_register_notifier); int dev_pm_opp_unregister_notifier(struct device *dev, struct 
notifier_block *nb) { - struct opp_table *opp_table __free(put_opp_table); + struct opp_table *opp_table __free(put_opp_table) = + _find_opp_table(dev); - opp_table = _find_opp_table(dev); if (IS_ERR(opp_table)) return PTR_ERR(opp_table); @@ -3101,10 +3108,10 @@ EXPORT_SYMBOL(dev_pm_opp_unregister_notifier); */ void dev_pm_opp_remove_table(struct device *dev) { - struct opp_table *opp_table __free(put_opp_table); - /* Check for existing table for 'dev' */ - opp_table = _find_opp_table(dev); + struct opp_table *opp_table __free(put_opp_table) = + _find_opp_table(dev); + if (IS_ERR(opp_table)) { int error = PTR_ERR(opp_table); diff --git a/drivers/opp/cpu.c b/drivers/opp/cpu.c index 97989d4fe336..a6da7ee3ec76 100644 --- a/drivers/opp/cpu.c +++ b/drivers/opp/cpu.c @@ -56,10 +56,10 @@ int dev_pm_opp_init_cpufreq_table(struct device *dev, return -ENOMEM; for (i = 0, rate = 0; i < max_opps; i++, rate++) { - struct dev_pm_opp *opp __free(put_opp); - /* find next rate */ - opp = dev_pm_opp_find_freq_ceil(dev, &rate); + struct dev_pm_opp *opp __free(put_opp) = + dev_pm_opp_find_freq_ceil(dev, &rate); + if (IS_ERR(opp)) { ret = PTR_ERR(opp); goto out; @@ -154,12 +154,13 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_cpumask_remove_table); int dev_pm_opp_set_sharing_cpus(struct device *cpu_dev, const struct cpumask *cpumask) { - struct opp_table *opp_table __free(put_opp_table); struct opp_device *opp_dev; struct device *dev; int cpu; - opp_table = _find_opp_table(cpu_dev); + struct opp_table *opp_table __free(put_opp_table) = + _find_opp_table(cpu_dev); + if (IS_ERR(opp_table)) return PTR_ERR(opp_table); @@ -201,10 +202,11 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_set_sharing_cpus); */ int dev_pm_opp_get_sharing_cpus(struct device *cpu_dev, struct cpumask *cpumask) { - struct opp_table *opp_table __free(put_opp_table); struct opp_device *opp_dev; - opp_table = _find_opp_table(cpu_dev); + struct opp_table *opp_table __free(put_opp_table) = + _find_opp_table(cpu_dev); + if (IS_ERR(opp_table)) return PTR_ERR(opp_table); diff --git a/drivers/opp/of.c b/drivers/opp/of.c index 505d79821584..1e0d0adb18e1 100644 --- a/drivers/opp/of.c +++ b/drivers/opp/of.c @@ -45,9 +45,10 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_of_get_opp_desc_node); struct opp_table *_managed_opp(struct device *dev, int index) { struct opp_table *opp_table, *managed_table = NULL; - struct device_node *np __free(device_node); - np = _opp_of_get_opp_desc_node(dev->of_node, index); + struct device_node *np __free(device_node) = + _opp_of_get_opp_desc_node(dev->of_node, index); + if (!np) return NULL; @@ -95,10 +96,11 @@ static struct device_node *of_parse_required_opp(struct device_node *np, /* The caller must call dev_pm_opp_put_opp_table() after the table is used */ static struct opp_table *_find_table_of_opp_np(struct device_node *opp_np) { - struct device_node *opp_table_np __free(device_node); struct opp_table *opp_table; - opp_table_np = of_get_parent(opp_np); + struct device_node *opp_table_np __free(device_node) = + of_get_parent(opp_np); + if (!opp_table_np) return ERR_PTR(-ENODEV); @@ -146,12 +148,13 @@ static void _opp_table_alloc_required_tables(struct opp_table *opp_table, struct device_node *opp_np) { struct opp_table **required_opp_tables; - struct device_node *np __free(device_node); bool lazy = false; int count, i, size; /* Traversing the first OPP node is all we need */ - np = of_get_next_available_child(opp_np, NULL); + struct device_node *np __free(device_node) = + of_get_next_available_child(opp_np, NULL); + if (!np) { dev_warn(dev, "Empty OPP 
table\n"); return; @@ -171,9 +174,9 @@ static void _opp_table_alloc_required_tables(struct opp_table *opp_table, opp_table->required_opp_count = count; for (i = 0; i < count; i++) { - struct device_node *required_np __free(device_node); + struct device_node *required_np __free(device_node) = + of_parse_required_opp(np, i); - required_np = of_parse_required_opp(np, i); if (!required_np) { _opp_table_free_required_tables(opp_table); return; @@ -199,14 +202,15 @@ static void _opp_table_alloc_required_tables(struct opp_table *opp_table, void _of_init_opp_table(struct opp_table *opp_table, struct device *dev, int index) { - struct device_node *np __free(device_node), *opp_np; + struct device_node *opp_np; u32 val; /* * Only required for backward compatibility with v1 bindings, but isn't * harmful for other cases. And so we do it unconditionally. */ - np = of_node_get(dev->of_node); + struct device_node *np __free(device_node) = of_node_get(dev->of_node); + if (!np) return; @@ -273,9 +277,9 @@ void _of_clear_opp(struct opp_table *opp_table, struct dev_pm_opp *opp) static int _link_required_opps(struct dev_pm_opp *opp, struct opp_table *required_table, int index) { - struct device_node *np __free(device_node); + struct device_node *np __free(device_node) = + of_parse_required_opp(opp->np, index); - np = of_parse_required_opp(opp->np, index); if (unlikely(!np)) return -ENODEV; @@ -349,16 +353,13 @@ static void lazy_link_required_opp_table(struct opp_table *new_table) guard(mutex)(&opp_table_lock); list_for_each_entry_safe(opp_table, temp, &lazy_opp_tables, lazy) { - struct device_node *opp_np __free(device_node); bool lazy = false; /* opp_np can't be invalid here */ - opp_np = of_get_next_available_child(opp_table->np, NULL); + struct device_node *opp_np __free(device_node) = + of_get_next_available_child(opp_table->np, NULL); for (i = 0; i < opp_table->required_opp_count; i++) { - struct device_node *required_np __free(device_node) = NULL; - struct device_node *required_table_np __free(device_node) = NULL; - required_opp_tables = opp_table->required_opp_tables; /* Required opp-table is already parsed */ @@ -366,8 +367,10 @@ static void lazy_link_required_opp_table(struct opp_table *new_table) continue; /* required_np can't be invalid here */ - required_np = of_parse_required_opp(opp_np, i); - required_table_np = of_get_parent(required_np); + struct device_node *required_np __free(device_node) = + of_parse_required_opp(opp_np, i); + struct device_node *required_table_np __free(device_node) = + of_get_parent(required_np); /* * Newly added table isn't the required opp-table for @@ -402,13 +405,12 @@ static void lazy_link_required_opp_table(struct opp_table *new_table) static int _bandwidth_supported(struct device *dev, struct opp_table *opp_table) { struct device_node *opp_np __free(device_node) = NULL; - struct device_node *np __free(device_node) = NULL; struct property *prop; if (!opp_table) { - struct device_node *np __free(device_node); + struct device_node *np __free(device_node) = + of_node_get(dev->of_node); - np = of_node_get(dev->of_node); if (!np) return -ENODEV; @@ -422,7 +424,9 @@ static int _bandwidth_supported(struct device *dev, struct opp_table *opp_table) return 0; /* Checking only first OPP is sufficient */ - np = of_get_next_available_child(opp_np, NULL); + struct device_node *np __free(device_node) = + of_get_next_available_child(opp_np, NULL); + if (!np) { dev_err(dev, "OPP table empty\n"); return -EINVAL; @@ -1269,11 +1273,12 @@ 
EXPORT_SYMBOL_GPL(dev_pm_opp_of_cpumask_add_table); int dev_pm_opp_of_get_sharing_cpus(struct device *cpu_dev, struct cpumask *cpumask) { - struct device_node *np __free(device_node); int cpu; /* Get OPP descriptor node */ - np = dev_pm_opp_of_get_opp_desc_node(cpu_dev); + struct device_node *np __free(device_node) = + dev_pm_opp_of_get_opp_desc_node(cpu_dev); + if (!np) { dev_dbg(cpu_dev, "%s: Couldn't find opp node.\n", __func__); return -ENOENT; @@ -1286,13 +1291,12 @@ int dev_pm_opp_of_get_sharing_cpus(struct device *cpu_dev, return 0; for_each_possible_cpu(cpu) { - struct device_node *cpu_np __free(device_node) = NULL; - struct device_node *tmp_np __free(device_node) = NULL; - if (cpu == cpu_dev->id) continue; - cpu_np = of_cpu_device_node_get(cpu); + struct device_node *cpu_np __free(device_node) = + of_cpu_device_node_get(cpu); + if (!cpu_np) { dev_err(cpu_dev, "%s: failed to get cpu%d node\n", __func__, cpu); @@ -1300,7 +1304,9 @@ int dev_pm_opp_of_get_sharing_cpus(struct device *cpu_dev, } /* Get OPP descriptor node */ - tmp_np = _opp_of_get_opp_desc_node(cpu_np, 0); + struct device_node *tmp_np __free(device_node) = + _opp_of_get_opp_desc_node(cpu_np, 0); + if (!tmp_np) { pr_err("%pOF: Couldn't find opp node\n", cpu_np); return -ENOENT; @@ -1328,16 +1334,17 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_of_get_sharing_cpus); */ int of_get_required_opp_performance_state(struct device_node *np, int index) { - struct device_node *required_np __free(device_node); - struct opp_table *opp_table __free(put_opp_table) = NULL; - struct dev_pm_opp *opp __free(put_opp) = NULL; int pstate = -EINVAL; - required_np = of_parse_required_opp(np, index); + struct device_node *required_np __free(device_node) = + of_parse_required_opp(np, index); + if (!required_np) return -ENODEV; - opp_table = _find_table_of_opp_np(required_np); + struct opp_table *opp_table __free(put_opp_table) = + _find_table_of_opp_np(required_np); + if (IS_ERR(opp_table)) { pr_err("%s: Failed to find required OPP table %pOF: %ld\n", __func__, np, PTR_ERR(opp_table)); @@ -1350,7 +1357,9 @@ int of_get_required_opp_performance_state(struct device_node *np, int index) return -EINVAL; } - opp = _find_opp_of_np(opp_table, required_np); + struct dev_pm_opp *opp __free(put_opp) = + _find_opp_of_np(opp_table, required_np); + if (opp) { if (opp->level == OPP_LEVEL_UNSET) { pr_err("%s: OPP levels aren't available for %pOF\n", @@ -1376,14 +1385,17 @@ EXPORT_SYMBOL_GPL(of_get_required_opp_performance_state); */ bool dev_pm_opp_of_has_required_opp(struct device *dev) { - struct device_node *np __free(device_node) = NULL, *opp_np __free(device_node); int count; - opp_np = _opp_of_get_opp_desc_node(dev->of_node, 0); + struct device_node *opp_np __free(device_node) = + _opp_of_get_opp_desc_node(dev->of_node, 0); + if (!opp_np) return false; - np = of_get_next_available_child(opp_np, NULL); + struct device_node *np __free(device_node) = + of_get_next_available_child(opp_np, NULL); + if (!np) { dev_warn(dev, "Empty OPP table\n"); return false; @@ -1425,12 +1437,14 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_get_of_node); static int __maybe_unused _get_dt_power(struct device *dev, unsigned long *uW, unsigned long *kHz) { - struct dev_pm_opp *opp __free(put_opp); unsigned long opp_freq, opp_power; /* Find the right frequency and related OPP */ opp_freq = *kHz * 1000; - opp = dev_pm_opp_find_freq_ceil(dev, &opp_freq); + + struct dev_pm_opp *opp __free(put_opp) = + dev_pm_opp_find_freq_ceil(dev, &opp_freq); + if (IS_ERR(opp)) return -EINVAL; @@ -1465,14 +1479,13 @@ 
_get_dt_power(struct device *dev, unsigned long *uW, unsigned long *kHz) int dev_pm_opp_calc_power(struct device *dev, unsigned long *uW, unsigned long *kHz) { - struct dev_pm_opp *opp __free(put_opp) = NULL; - struct device_node *np __free(device_node); unsigned long mV, Hz; u32 cap; u64 tmp; int ret; - np = of_node_get(dev->of_node); + struct device_node *np __free(device_node) = of_node_get(dev->of_node); + if (!np) return -EINVAL; @@ -1481,7 +1494,10 @@ int dev_pm_opp_calc_power(struct device *dev, unsigned long *uW, return -EINVAL; Hz = *kHz * 1000; - opp = dev_pm_opp_find_freq_ceil(dev, &Hz); + + struct dev_pm_opp *opp __free(put_opp) = + dev_pm_opp_find_freq_ceil(dev, &Hz); + if (IS_ERR(opp)) return -EINVAL; @@ -1502,11 +1518,12 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_calc_power); static bool _of_has_opp_microwatt_property(struct device *dev) { - struct dev_pm_opp *opp __free(put_opp); unsigned long freq = 0; /* Check if at least one OPP has needed property */ - opp = dev_pm_opp_find_freq_ceil(dev, &freq); + struct dev_pm_opp *opp __free(put_opp) = + dev_pm_opp_find_freq_ceil(dev, &freq); + if (IS_ERR(opp)) return false; @@ -1526,12 +1543,16 @@ static bool _of_has_opp_microwatt_property(struct device *dev) */ int dev_pm_opp_of_register_em(struct device *dev, struct cpumask *cpus) { - struct device_node *np __free(device_node) = NULL; struct em_data_callback em_cb; int ret, nr_opp; u32 cap; - if (IS_ERR_OR_NULL(dev)) { + if (IS_ERR_OR_NULL(dev)) + return -EINVAL; + + struct device_node *np __free(device_node) = of_node_get(dev->of_node); + + if (!np) { ret = -EINVAL; goto failed; } @@ -1548,12 +1569,6 @@ int dev_pm_opp_of_register_em(struct device *dev, struct cpumask *cpus) goto register_em; } - np = of_node_get(dev->of_node); - if (!np) { - ret = -EINVAL; - goto failed; - } - /* * Register an EM only if the 'dynamic-power-coefficient' property is * set in devicetree. It is assumed the voltage values are known if that diff --git a/drivers/pci/pci-sysfs.c b/drivers/pci/pci-sysfs.c index 9d6f74bd95f8..3881359440b1 100644 --- a/drivers/pci/pci-sysfs.c +++ b/drivers/pci/pci-sysfs.c @@ -1517,8 +1517,8 @@ static ssize_t reset_method_store(struct device *dev, return count; } - ACQUIRE(pm_runtime_active_try, pm)(dev); - if (ACQUIRE_ERR(pm_runtime_active_try, &pm)) + PM_RUNTIME_ACQUIRE(dev, pm); + if (PM_RUNTIME_ACQUIRE_ERR(&pm)) return -ENXIO; if (sysfs_streq(buf, "default")) { diff --git a/drivers/pmdomain/core.c b/drivers/pmdomain/core.c index 61c2277c9ce3..4fd546ef0448 100644 --- a/drivers/pmdomain/core.c +++ b/drivers/pmdomain/core.c @@ -1425,8 +1425,14 @@ static void genpd_sync_power_off(struct generic_pm_domain *genpd, bool use_lock, return; } - /* Choose the deepest state when suspending */ - genpd->state_idx = genpd->state_count - 1; + if (genpd->gov && genpd->gov->system_power_down_ok) { + if (!genpd->gov->system_power_down_ok(&genpd->domain)) + return; + } else { + /* Default to the deepest state. 
*/ + genpd->state_idx = genpd->state_count - 1; + } + if (_genpd_power_off(genpd, false)) { genpd->states[genpd->state_idx].rejected++; return; diff --git a/drivers/pmdomain/governor.c b/drivers/pmdomain/governor.c index 39359811a930..05e68680f34b 100644 --- a/drivers/pmdomain/governor.c +++ b/drivers/pmdomain/governor.c @@ -351,7 +351,7 @@ static bool cpu_power_down_ok(struct dev_pm_domain *pd) ktime_t domain_wakeup, next_hrtimer; ktime_t now = ktime_get(); struct device *cpu_dev; - s64 cpu_constraint, global_constraint; + s64 cpu_constraint, global_constraint, wakeup_constraint; s64 idle_duration_ns; int cpu, i; @@ -362,7 +362,11 @@ static bool cpu_power_down_ok(struct dev_pm_domain *pd) if (!(genpd->flags & GENPD_FLAG_CPU_DOMAIN)) return true; + wakeup_constraint = cpu_wakeup_latency_qos_limit(); global_constraint = cpu_latency_qos_limit(); + if (global_constraint > wakeup_constraint) + global_constraint = wakeup_constraint; + /* * Find the next wakeup for any of the online CPUs within the PM domain * and its subdomains. Note, we only need the genpd->cpus, as it already @@ -415,9 +419,36 @@ static bool cpu_power_down_ok(struct dev_pm_domain *pd) return false; } +static bool cpu_system_power_down_ok(struct dev_pm_domain *pd) +{ + s64 constraint_ns = cpu_wakeup_latency_qos_limit() * NSEC_PER_USEC; + struct generic_pm_domain *genpd = pd_to_genpd(pd); + int state_idx = genpd->state_count - 1; + + if (!(genpd->flags & GENPD_FLAG_CPU_DOMAIN)) { + genpd->state_idx = state_idx; + return true; + } + + /* Find the deepest state for the latency constraint. */ + while (state_idx >= 0) { + s64 latency_ns = genpd->states[state_idx].power_off_latency_ns + + genpd->states[state_idx].power_on_latency_ns; + + if (latency_ns <= constraint_ns) { + genpd->state_idx = state_idx; + return true; + } + state_idx--; + } + + return false; +} + struct dev_power_governor pm_domain_cpu_gov = { .suspend_ok = default_suspend_ok, .power_down_ok = cpu_power_down_ok, + .system_power_down_ok = cpu_system_power_down_ok, }; #endif diff --git a/drivers/powercap/intel_rapl_common.c b/drivers/powercap/intel_rapl_common.c index c7e7f9bf5313..b9d87e56cbbc 100644 --- a/drivers/powercap/intel_rapl_common.c +++ b/drivers/powercap/intel_rapl_common.c @@ -253,7 +253,8 @@ struct rapl_primitive_info { static void rapl_init_domains(struct rapl_package *rp); static int rapl_read_data_raw(struct rapl_domain *rd, enum rapl_primitives prim, - bool xlate, u64 *data); + bool xlate, u64 *data, + bool atomic); static int rapl_write_data_raw(struct rapl_domain *rd, enum rapl_primitives prim, unsigned long long value); @@ -289,7 +290,7 @@ static int get_energy_counter(struct powercap_zone *power_zone, cpus_read_lock(); rd = power_zone_to_rapl_domain(power_zone); - if (!rapl_read_data_raw(rd, ENERGY_COUNTER, true, &energy_now)) { + if (!rapl_read_data_raw(rd, ENERGY_COUNTER, true, &energy_now, false)) { *energy_raw = energy_now; cpus_read_unlock(); @@ -830,7 +831,8 @@ prim_fixups(struct rapl_domain *rd, enum rapl_primitives prim) * 63-------------------------- 31--------------------------- 0 */ static int rapl_read_data_raw(struct rapl_domain *rd, - enum rapl_primitives prim, bool xlate, u64 *data) + enum rapl_primitives prim, bool xlate, u64 *data, + bool atomic) { u64 value; enum rapl_primitives prim_fixed = prim_fixups(rd, prim); @@ -852,7 +854,7 @@ static int rapl_read_data_raw(struct rapl_domain *rd, ra.mask = rpi->mask; - if (rd->rp->priv->read_raw(get_rid(rd->rp), &ra)) { + if (rd->rp->priv->read_raw(get_rid(rd->rp), &ra, atomic)) { 
pr_debug("failed to read reg 0x%llx for %s:%s\n", ra.reg.val, rd->rp->name, rd->name); return -EIO; } @@ -904,7 +906,7 @@ static int rapl_read_pl_data(struct rapl_domain *rd, int pl, if (!is_pl_valid(rd, pl)) return -EINVAL; - return rapl_read_data_raw(rd, prim, xlate, data); + return rapl_read_data_raw(rd, prim, xlate, data, false); } static int rapl_write_pl_data(struct rapl_domain *rd, int pl, @@ -941,7 +943,7 @@ static int rapl_check_unit_core(struct rapl_domain *rd) ra.reg = rd->regs[RAPL_DOMAIN_REG_UNIT]; ra.mask = ~0; - if (rd->rp->priv->read_raw(get_rid(rd->rp), &ra)) { + if (rd->rp->priv->read_raw(get_rid(rd->rp), &ra, false)) { pr_err("Failed to read power unit REG 0x%llx on %s:%s, exit.\n", ra.reg.val, rd->rp->name, rd->name); return -ENODEV; @@ -969,7 +971,7 @@ static int rapl_check_unit_atom(struct rapl_domain *rd) ra.reg = rd->regs[RAPL_DOMAIN_REG_UNIT]; ra.mask = ~0; - if (rd->rp->priv->read_raw(get_rid(rd->rp), &ra)) { + if (rd->rp->priv->read_raw(get_rid(rd->rp), &ra, false)) { pr_err("Failed to read power unit REG 0x%llx on %s:%s, exit.\n", ra.reg.val, rd->rp->name, rd->name); return -ENODEV; @@ -1156,7 +1158,7 @@ static int rapl_check_unit_tpmi(struct rapl_domain *rd) ra.reg = rd->regs[RAPL_DOMAIN_REG_UNIT]; ra.mask = ~0; - if (rd->rp->priv->read_raw(get_rid(rd->rp), &ra)) { + if (rd->rp->priv->read_raw(get_rid(rd->rp), &ra, false)) { pr_err("Failed to read power unit REG 0x%llx on %s:%s, exit.\n", ra.reg.val, rd->rp->name, rd->name); return -ENODEV; @@ -1284,6 +1286,9 @@ static const struct x86_cpu_id rapl_ids[] __initconst = { X86_MATCH_VFM(INTEL_EMERALDRAPIDS_X, &rapl_defaults_spr_server), X86_MATCH_VFM(INTEL_LUNARLAKE_M, &rapl_defaults_core), X86_MATCH_VFM(INTEL_PANTHERLAKE_L, &rapl_defaults_core), + X86_MATCH_VFM(INTEL_WILDCATLAKE_L, &rapl_defaults_core), + X86_MATCH_VFM(INTEL_NOVALAKE, &rapl_defaults_core), + X86_MATCH_VFM(INTEL_NOVALAKE_L, &rapl_defaults_core), X86_MATCH_VFM(INTEL_ARROWLAKE_H, &rapl_defaults_core), X86_MATCH_VFM(INTEL_ARROWLAKE, &rapl_defaults_core), X86_MATCH_VFM(INTEL_ARROWLAKE_U, &rapl_defaults_core), @@ -1325,7 +1330,7 @@ static void rapl_update_domain_data(struct rapl_package *rp) struct rapl_primitive_info *rpi = get_rpi(rp, prim); if (!rapl_read_data_raw(&rp->domains[dmn], prim, - rpi->unit, &val)) + rpi->unit, &val, false)) rp->domains[dmn].rdd.primitives[prim] = val; } } @@ -1425,7 +1430,7 @@ static int rapl_check_domain(int domain, struct rapl_package *rp) */ ra.mask = ENERGY_STATUS_MASK; - if (rp->priv->read_raw(get_rid(rp), &ra) || !ra.value) + if (rp->priv->read_raw(get_rid(rp), &ra, false) || !ra.value) return -ENODEV; return 0; @@ -1592,11 +1597,11 @@ static int get_pmu_cpu(struct rapl_package *rp) if (!rp->has_pmu) return nr_cpu_ids; - /* Only TPMI RAPL is supported for now */ - if (rp->priv->type != RAPL_IF_TPMI) + /* Only TPMI & MSR RAPL are supported for now */ + if (rp->priv->type != RAPL_IF_TPMI && rp->priv->type != RAPL_IF_MSR) return nr_cpu_ids; - /* TPMI RAPL uses any CPU in the package for PMU */ + /* TPMI/MSR RAPL uses any CPU in the package for PMU */ for_each_online_cpu(cpu) if (topology_physical_package_id(cpu) == rp->id) return cpu; @@ -1609,11 +1614,11 @@ static bool is_rp_pmu_cpu(struct rapl_package *rp, int cpu) if (!rp->has_pmu) return false; - /* Only TPMI RAPL is supported for now */ - if (rp->priv->type != RAPL_IF_TPMI) + /* Only TPMI & MSR RAPL are supported for now */ + if (rp->priv->type != RAPL_IF_TPMI && rp->priv->type != RAPL_IF_MSR) return false; - /* TPMI RAPL uses any CPU in the package for PMU */ + /* 
TPMI/MSR RAPL uses any CPU in the package for PMU */ return topology_physical_package_id(cpu) == rp->id; } @@ -1636,7 +1641,7 @@ static u64 event_read_counter(struct perf_event *event) if (event->hw.idx < 0) return 0; - ret = rapl_read_data_raw(&rp->domains[event->hw.idx], ENERGY_COUNTER, false, &val); + ret = rapl_read_data_raw(&rp->domains[event->hw.idx], ENERGY_COUNTER, false, &val, true); /* Return 0 for failed read */ if (ret) diff --git a/drivers/powercap/intel_rapl_msr.c b/drivers/powercap/intel_rapl_msr.c index 4ed06c71a3ac..0ce1096b6314 100644 --- a/drivers/powercap/intel_rapl_msr.c +++ b/drivers/powercap/intel_rapl_msr.c @@ -33,6 +33,8 @@ /* private data for RAPL MSR Interface */ static struct rapl_if_priv *rapl_msr_priv; +static bool rapl_msr_pmu __ro_after_init; + static struct rapl_if_priv rapl_msr_priv_intel = { .type = RAPL_IF_MSR, .reg_unit.msr = MSR_RAPL_POWER_UNIT, @@ -79,6 +81,8 @@ static int rapl_cpu_online(unsigned int cpu) rp = rapl_add_package_cpuslocked(cpu, rapl_msr_priv, true); if (IS_ERR(rp)) return PTR_ERR(rp); + if (rapl_msr_pmu) + rapl_package_add_pmu(rp); } cpumask_set_cpu(cpu, &rp->cpumask); return 0; @@ -95,19 +99,37 @@ static int rapl_cpu_down_prep(unsigned int cpu) cpumask_clear_cpu(cpu, &rp->cpumask); lead_cpu = cpumask_first(&rp->cpumask); - if (lead_cpu >= nr_cpu_ids) + if (lead_cpu >= nr_cpu_ids) { + if (rapl_msr_pmu) + rapl_package_remove_pmu(rp); rapl_remove_package_cpuslocked(rp); - else if (rp->lead_cpu == cpu) + } else if (rp->lead_cpu == cpu) { rp->lead_cpu = lead_cpu; + } + return 0; } -static int rapl_msr_read_raw(int cpu, struct reg_action *ra) +static int rapl_msr_read_raw(int cpu, struct reg_action *ra, bool atomic) { + /* + * When called from atomic context (e.g., the PMU event handler), + * perform the MSR read directly using rdmsrq().
+ */ + if (atomic) { + if (unlikely(smp_processor_id() != cpu)) + return -EIO; + + rdmsrq(ra->reg.msr, ra->value); + goto out; + } + if (rdmsrq_safe_on_cpu(cpu, ra->reg.msr, &ra->value)) { pr_debug("failed to read msr 0x%x on cpu %d\n", ra->reg.msr, cpu); return -EIO; } + +out: ra->value &= ra->mask; return 0; } @@ -151,6 +173,16 @@ static const struct x86_cpu_id pl4_support_ids[] = { X86_MATCH_VFM(INTEL_ARROWLAKE_U, NULL), X86_MATCH_VFM(INTEL_ARROWLAKE_H, NULL), X86_MATCH_VFM(INTEL_PANTHERLAKE_L, NULL), + X86_MATCH_VFM(INTEL_WILDCATLAKE_L, NULL), + X86_MATCH_VFM(INTEL_NOVALAKE, NULL), + X86_MATCH_VFM(INTEL_NOVALAKE_L, NULL), + {} +}; + +/* List of MSR-based RAPL PMU support CPUs */ +static const struct x86_cpu_id pmu_support_ids[] = { + X86_MATCH_VFM(INTEL_PANTHERLAKE_L, NULL), + X86_MATCH_VFM(INTEL_WILDCATLAKE_L, NULL), {} }; @@ -181,6 +213,11 @@ static int rapl_msr_probe(struct platform_device *pdev) pr_info("PL4 support detected.\n"); } + if (x86_match_cpu(pmu_support_ids)) { + rapl_msr_pmu = true; + pr_info("MSR-based RAPL PMU support enabled\n"); + } + rapl_msr_priv->control_type = powercap_register_control_type(NULL, "intel-rapl", NULL); if (IS_ERR(rapl_msr_priv->control_type)) { pr_debug("failed to register powercap control_type.\n"); diff --git a/drivers/powercap/intel_rapl_tpmi.c b/drivers/powercap/intel_rapl_tpmi.c index 82201bf4685d..0a0b85f4528b 100644 --- a/drivers/powercap/intel_rapl_tpmi.c +++ b/drivers/powercap/intel_rapl_tpmi.c @@ -60,7 +60,7 @@ static DEFINE_MUTEX(tpmi_rapl_lock); static struct powercap_control_type *tpmi_control_type; -static int tpmi_rapl_read_raw(int id, struct reg_action *ra) +static int tpmi_rapl_read_raw(int id, struct reg_action *ra, bool atomic) { if (!ra->reg.mmio) return -EINVAL; diff --git a/drivers/scsi/mesh.c b/drivers/scsi/mesh.c index 1c15cac41d80..768b85eecc8f 100644 --- a/drivers/scsi/mesh.c +++ b/drivers/scsi/mesh.c @@ -1762,6 +1762,7 @@ static int mesh_suspend(struct macio_dev *mdev, pm_message_t mesg) case PM_EVENT_SUSPEND: case PM_EVENT_HIBERNATE: case PM_EVENT_FREEZE: + case PM_EVENT_POWEROFF: break; default: return 0; diff --git a/drivers/scsi/stex.c b/drivers/scsi/stex.c index d8ad02c29320..e6357bc301cb 100644 --- a/drivers/scsi/stex.c +++ b/drivers/scsi/stex.c @@ -1965,6 +1965,7 @@ static int stex_choice_sleep_mic(struct st_hba *hba, pm_message_t state) case PM_EVENT_SUSPEND: return ST_S3; case PM_EVENT_HIBERNATE: + case PM_EVENT_POWEROFF: hba->msi_lock = 0; return ST_S4; default: diff --git a/drivers/thermal/intel/int340x_thermal/processor_thermal_rapl.c b/drivers/thermal/intel/int340x_thermal/processor_thermal_rapl.c index bde2cc386afd..bf51a17c5be6 100644 --- a/drivers/thermal/intel/int340x_thermal/processor_thermal_rapl.c +++ b/drivers/thermal/intel/int340x_thermal/processor_thermal_rapl.c @@ -19,7 +19,7 @@ static const struct rapl_mmio_regs rapl_mmio_default = { .limits[RAPL_DOMAIN_DRAM] = BIT(POWER_LIMIT2), }; -static int rapl_mmio_read_raw(int cpu, struct reg_action *ra) +static int rapl_mmio_read_raw(int cpu, struct reg_action *ra, bool atomic) { if (!ra->reg.mmio) return -EINVAL; diff --git a/drivers/usb/host/sl811-hcd.c b/drivers/usb/host/sl811-hcd.c index ea3cab99c5d4..5d6dba681e50 100644 --- a/drivers/usb/host/sl811-hcd.c +++ b/drivers/usb/host/sl811-hcd.c @@ -1748,6 +1748,7 @@ sl811h_suspend(struct platform_device *dev, pm_message_t state) break; case PM_EVENT_SUSPEND: case PM_EVENT_HIBERNATE: + case PM_EVENT_POWEROFF: case PM_EVENT_PRETHAW: /* explicitly discard hw state */ port_power(sl811, 0); break; diff --git 
a/include/linux/cpuidle.h b/include/linux/cpuidle.h index a9ee4fe55dcf..4073690504a7 100644 --- a/include/linux/cpuidle.h +++ b/include/linux/cpuidle.h @@ -248,7 +248,8 @@ extern int cpuidle_find_deepest_state(struct cpuidle_driver *drv, struct cpuidle_device *dev, u64 latency_limit_ns); extern int cpuidle_enter_s2idle(struct cpuidle_driver *drv, - struct cpuidle_device *dev); + struct cpuidle_device *dev, + u64 latency_limit_ns); extern void cpuidle_use_deepest_state(u64 latency_limit_ns); #else static inline int cpuidle_find_deepest_state(struct cpuidle_driver *drv, @@ -256,7 +257,8 @@ static inline int cpuidle_find_deepest_state(struct cpuidle_driver *drv, u64 latency_limit_ns) {return -ENODEV; } static inline int cpuidle_enter_s2idle(struct cpuidle_driver *drv, - struct cpuidle_device *dev) + struct cpuidle_device *dev, + u64 latency_limit_ns) {return -ENODEV; } static inline void cpuidle_use_deepest_state(u64 latency_limit_ns) { diff --git a/drivers/devfreq/governor.h b/include/linux/devfreq-governor.h index 0adfebc0467a..dfdd0160a29f 100644 --- a/drivers/devfreq/governor.h +++ b/include/linux/devfreq-governor.h @@ -5,11 +5,11 @@ * Copyright (C) 2011 Samsung Electronics * MyungJoo Ham <myungjoo.ham@samsung.com> * - * This header is for devfreq governors in drivers/devfreq/ + * This header is for devfreq governors */ -#ifndef _GOVERNOR_H -#define _GOVERNOR_H +#ifndef __LINUX_DEVFREQ_DEVFREQ_H__ +#define __LINUX_DEVFREQ_DEVFREQ_H__ #include <linux/devfreq.h> @@ -48,31 +48,6 @@ #define DEVFREQ_GOV_ATTR_TIMER BIT(1) /** - * struct devfreq_cpu_data - Hold the per-cpu data - * @node: list node - * @dev: reference to cpu device. - * @first_cpu: the cpumask of the first cpu of a policy. - * @opp_table: reference to cpu opp table. - * @cur_freq: the current frequency of the cpu. - * @min_freq: the min frequency of the cpu. - * @max_freq: the max frequency of the cpu. - * - * This structure stores the required cpu_data of a cpu. - * This is auto-populated by the governor. 
- */ -struct devfreq_cpu_data { - struct list_head node; - - struct device *dev; - unsigned int first_cpu; - - struct opp_table *opp_table; - unsigned int cur_freq; - unsigned int min_freq; - unsigned int max_freq; -}; - -/** * struct devfreq_governor - Devfreq policy governor * @node: list node - contains registered devfreq governors * @name: Governor's name @@ -124,4 +99,4 @@ static inline int devfreq_update_stats(struct devfreq *df) return df->profile->get_dev_status(df->dev.parent, &df->last_status); } -#endif /* _GOVERNOR_H */ +#endif /* __LINUX_DEVFREQ_DEVFREQ_H__ */ diff --git a/include/linux/energy_model.h b/include/linux/energy_model.h index 61d50571ad88..43aa6153dc57 100644 --- a/include/linux/energy_model.h +++ b/include/linux/energy_model.h @@ -54,6 +54,8 @@ struct em_perf_table { /** * struct em_perf_domain - Performance domain * @em_table: Pointer to the runtime modifiable em_perf_table + * @node: node in em_pd_list (in energy_model.c) + * @id: A unique ID number for each performance domain * @nr_perf_states: Number of performance states * @min_perf_state: Minimum allowed Performance State index * @max_perf_state: Maximum allowed Performance State index @@ -71,6 +73,8 @@ struct em_perf_table { */ struct em_perf_domain { struct em_perf_table __rcu *em_table; + struct list_head node; + int id; int nr_perf_states; int min_perf_state; int max_perf_state; diff --git a/include/linux/freezer.h b/include/linux/freezer.h index 32884c9721e5..0a8c6c4d1a82 100644 --- a/include/linux/freezer.h +++ b/include/linux/freezer.h @@ -22,14 +22,18 @@ extern bool pm_nosig_freezing; /* PM nosig freezing in effect */ extern unsigned int freeze_timeout_msecs; /* - * Check if a process has been frozen + * Check if a process has been frozen for PM or cgroup1 freezer. Note that + * cgroup2 freezer uses the job control mechanism and does not interact with + * the PM freezer. */ extern bool frozen(struct task_struct *p); extern bool freezing_slow_path(struct task_struct *p); /* - * Check if there is a request to freeze a process + * Check if there is a request to freeze a task from PM or cgroup1 freezer. + * Note that cgroup2 freezer uses the job control mechanism and does not + * interact with the PM freezer. 
*/ static inline bool freezing(struct task_struct *p) { @@ -63,9 +67,9 @@ extern bool freeze_task(struct task_struct *p); extern bool set_freezable(void); #ifdef CONFIG_CGROUP_FREEZER -extern bool cgroup_freezing(struct task_struct *task); +extern bool cgroup1_freezing(struct task_struct *task); #else /* !CONFIG_CGROUP_FREEZER */ -static inline bool cgroup_freezing(struct task_struct *task) +static inline bool cgroup1_freezing(struct task_struct *task) { return false; } diff --git a/include/linux/intel_rapl.h b/include/linux/intel_rapl.h index c0397423d3a8..e9ade2ff4af6 100644 --- a/include/linux/intel_rapl.h +++ b/include/linux/intel_rapl.h @@ -152,7 +152,7 @@ struct rapl_if_priv { union rapl_reg reg_unit; union rapl_reg regs[RAPL_DOMAIN_MAX][RAPL_DOMAIN_REG_MAX]; int limits[RAPL_DOMAIN_MAX]; - int (*read_raw)(int id, struct reg_action *ra); + int (*read_raw)(int id, struct reg_action *ra, bool atomic); int (*write_raw)(int id, struct reg_action *ra); void *defaults; void *rpi; diff --git a/include/linux/pm.h b/include/linux/pm.h index cc7b2dc28574..7f69f739f613 100644 --- a/include/linux/pm.h +++ b/include/linux/pm.h @@ -25,11 +25,12 @@ extern void (*pm_power_off)(void); struct device; /* we have a circular dep with device.h */ #ifdef CONFIG_VT_CONSOLE_SLEEP -extern void pm_vt_switch_required(struct device *dev, bool required); +extern int pm_vt_switch_required(struct device *dev, bool required); extern void pm_vt_switch_unregister(struct device *dev); #else -static inline void pm_vt_switch_required(struct device *dev, bool required) +static inline int pm_vt_switch_required(struct device *dev, bool required) { + return 0; } static inline void pm_vt_switch_unregister(struct device *dev) { @@ -507,6 +508,7 @@ const struct dev_pm_ops name = { \ * RECOVER Creation of a hibernation image or restoration of the main * memory contents from a hibernation image has failed, call * ->thaw() and ->complete() for all devices. + * POWEROFF System will poweroff, call ->poweroff() for all devices. * * The following PM_EVENT_ messages are defined for internal use by * kernel subsystems. They are never issued by the PM core. 
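The new PM_EVENT_POWEROFF message documented above extends the pm_message_t event space, and the mesh, stex and sl811 hunks earlier in this diff show the conversion pattern for legacy suspend callbacks. A minimal sketch of that pattern for a hypothetical driver (foo_quiesce_hw() is an invented helper, not part of this series):

static int foo_suspend(struct platform_device *pdev, pm_message_t state)
{
	switch (state.event) {
	case PM_EVENT_SUSPEND:
	case PM_EVENT_HIBERNATE:
	case PM_EVENT_FREEZE:
	case PM_EVENT_POWEROFF:		/* new: system is powering off */
		foo_quiesce_hw(pdev);	/* hypothetical hardware quiesce */
		break;
	default:
		return 0;
	}
	return 0;
}

Drivers that distinguish sleep depths, as the stex hunk does, can instead group PM_EVENT_POWEROFF with PM_EVENT_HIBERNATE to select the deepest device state.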
@@ -537,6 +539,7 @@ const struct dev_pm_ops name = { \ #define PM_EVENT_USER 0x0100 #define PM_EVENT_REMOTE 0x0200 #define PM_EVENT_AUTO 0x0400 +#define PM_EVENT_POWEROFF 0x0800 #define PM_EVENT_SLEEP (PM_EVENT_SUSPEND | PM_EVENT_HIBERNATE) #define PM_EVENT_USER_SUSPEND (PM_EVENT_USER | PM_EVENT_SUSPEND) @@ -551,6 +554,7 @@ const struct dev_pm_ops name = { \ #define PMSG_QUIESCE ((struct pm_message){ .event = PM_EVENT_QUIESCE, }) #define PMSG_SUSPEND ((struct pm_message){ .event = PM_EVENT_SUSPEND, }) #define PMSG_HIBERNATE ((struct pm_message){ .event = PM_EVENT_HIBERNATE, }) +#define PMSG_POWEROFF ((struct pm_message){ .event = PM_EVENT_POWEROFF, }) #define PMSG_RESUME ((struct pm_message){ .event = PM_EVENT_RESUME, }) #define PMSG_THAW ((struct pm_message){ .event = PM_EVENT_THAW, }) #define PMSG_RESTORE ((struct pm_message){ .event = PM_EVENT_RESTORE, }) diff --git a/include/linux/pm_domain.h b/include/linux/pm_domain.h index f67a2cb7d781..93ba0143ca47 100644 --- a/include/linux/pm_domain.h +++ b/include/linux/pm_domain.h @@ -153,6 +153,7 @@ enum genpd_sync_state { }; struct dev_power_governor { + bool (*system_power_down_ok)(struct dev_pm_domain *domain); bool (*power_down_ok)(struct dev_pm_domain *domain); bool (*suspend_ok)(struct device *dev); }; diff --git a/include/linux/pm_qos.h b/include/linux/pm_qos.h index 4a69d4af3ff8..6cea4455f867 100644 --- a/include/linux/pm_qos.h +++ b/include/linux/pm_qos.h @@ -162,6 +162,15 @@ static inline void cpu_latency_qos_update_request(struct pm_qos_request *req, static inline void cpu_latency_qos_remove_request(struct pm_qos_request *req) {} #endif +#ifdef CONFIG_PM_QOS_CPU_SYSTEM_WAKEUP +s32 cpu_wakeup_latency_qos_limit(void); +#else +static inline s32 cpu_wakeup_latency_qos_limit(void) +{ + return PM_QOS_RESUME_LATENCY_NO_CONSTRAINT; +} +#endif + #ifdef CONFIG_PM enum pm_qos_flags_status __dev_pm_qos_flags(struct device *dev, s32 mask); enum pm_qos_flags_status dev_pm_qos_flags(struct device *dev, s32 mask); diff --git a/include/linux/pm_runtime.h b/include/linux/pm_runtime.h index 0b436e15f4cd..911d7a4d32c1 100644 --- a/include/linux/pm_runtime.h +++ b/include/linux/pm_runtime.h @@ -637,6 +637,30 @@ DEFINE_GUARD_COND(pm_runtime_active_auto, _try, DEFINE_GUARD_COND(pm_runtime_active_auto, _try_enabled, pm_runtime_resume_and_get(_T), _RET == 0) +/* ACQUIRE() wrapper macros for the guards defined above. */ + +#define PM_RUNTIME_ACQUIRE(_dev, _var) \ + ACQUIRE(pm_runtime_active_try, _var)(_dev) + +#define PM_RUNTIME_ACQUIRE_AUTOSUSPEND(_dev, _var) \ + ACQUIRE(pm_runtime_active_auto_try, _var)(_dev) + +#define PM_RUNTIME_ACQUIRE_IF_ENABLED(_dev, _var) \ + ACQUIRE(pm_runtime_active_try_enabled, _var)(_dev) + +#define PM_RUNTIME_ACQUIRE_IF_ENABLED_AUTOSUSPEND(_dev, _var) \ + ACQUIRE(pm_runtime_active_auto_try_enabled, _var)(_dev) + +/* + * ACQUIRE_ERR() wrapper macro for guard pm_runtime_active. + * + * Always check PM_RUNTIME_ACQUIRE_ERR() after using one of the + * PM_RUNTIME_ACQUIRE*() macros defined above (yes, it can be used with + * any of them) and if it is nonzero, avoid accessing the given device. + */ +#define PM_RUNTIME_ACQUIRE_ERR(_var_ptr) \ + ACQUIRE_ERR(pm_runtime_active, _var_ptr) + /** * pm_runtime_put_sync - Drop device usage counter and run "idle check" if 0. * @dev: Target device. 
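The PM_RUNTIME_ACQUIRE*() wrappers above are thin spellings of the ACQUIRE() guards defined next to them; the pci-sysfs hunk earlier in this diff is the first user. A sketch of the intended calling pattern in a sysfs store callback (foo_do_reset() is a hypothetical helper):

static ssize_t foo_store(struct device *dev, struct device_attribute *attr,
			 const char *buf, size_t count)
{
	PM_RUNTIME_ACQUIRE(dev, pm);
	if (PM_RUNTIME_ACQUIRE_ERR(&pm))
		return -ENXIO;		/* device not runtime-active */

	foo_do_reset(dev);		/* runs with the device powered */
	return count;			/* reference dropped at end of scope */
}

Because the guard is scope-based, the runtime PM usage counter is dropped automatically on every return path, eliminating the unbalanced-put class of bugs these helpers target.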
diff --git a/include/trace/events/power.h b/include/trace/events/power.h index 82904291c2b8..370f8df2fdb4 100644 --- a/include/trace/events/power.h +++ b/include/trace/events/power.h @@ -179,7 +179,8 @@ TRACE_EVENT(pstate_sample, { PM_EVENT_HIBERNATE, "hibernate" }, \ { PM_EVENT_THAW, "thaw" }, \ { PM_EVENT_RESTORE, "restore" }, \ - { PM_EVENT_RECOVER, "recover" }) + { PM_EVENT_RECOVER, "recover" }, \ + { PM_EVENT_POWEROFF, "poweroff" }) DEFINE_EVENT(cpu, cpu_frequency, diff --git a/include/uapi/linux/energy_model.h b/include/uapi/linux/energy_model.h new file mode 100644 index 000000000000..4ec4c0eabbbb --- /dev/null +++ b/include/uapi/linux/energy_model.h @@ -0,0 +1,62 @@ +/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) */ +/* Do not edit directly, auto-generated from: */ +/* Documentation/netlink/specs/em.yaml */ +/* YNL-GEN uapi header */ + +#ifndef _UAPI_LINUX_ENERGY_MODEL_H +#define _UAPI_LINUX_ENERGY_MODEL_H + +#define EM_FAMILY_NAME "em" +#define EM_FAMILY_VERSION 1 + +enum { + EM_A_PDS_PD = 1, + + __EM_A_PDS_MAX, + EM_A_PDS_MAX = (__EM_A_PDS_MAX - 1) +}; + +enum { + EM_A_PD_PAD = 1, + EM_A_PD_PD_ID, + EM_A_PD_FLAGS, + EM_A_PD_CPUS, + + __EM_A_PD_MAX, + EM_A_PD_MAX = (__EM_A_PD_MAX - 1) +}; + +enum { + EM_A_PD_TABLE_PD_ID = 1, + EM_A_PD_TABLE_PS, + + __EM_A_PD_TABLE_MAX, + EM_A_PD_TABLE_MAX = (__EM_A_PD_TABLE_MAX - 1) +}; + +enum { + EM_A_PS_PAD = 1, + EM_A_PS_PERFORMANCE, + EM_A_PS_FREQUENCY, + EM_A_PS_POWER, + EM_A_PS_COST, + EM_A_PS_FLAGS, + + __EM_A_PS_MAX, + EM_A_PS_MAX = (__EM_A_PS_MAX - 1) +}; + +enum { + EM_CMD_GET_PDS = 1, + EM_CMD_GET_PD_TABLE, + EM_CMD_PD_CREATED, + EM_CMD_PD_UPDATED, + EM_CMD_PD_DELETED, + + __EM_CMD_MAX, + EM_CMD_MAX = (__EM_CMD_MAX - 1) +}; + +#define EM_MCGRP_EVENT "event" + +#endif /* _UAPI_LINUX_ENERGY_MODEL_H */ diff --git a/kernel/cgroup/legacy_freezer.c b/kernel/cgroup/legacy_freezer.c index dd9417425d92..915b02f65980 100644 --- a/kernel/cgroup/legacy_freezer.c +++ b/kernel/cgroup/legacy_freezer.c @@ -63,7 +63,7 @@ static struct freezer *parent_freezer(struct freezer *freezer) return css_freezer(freezer->css.parent); } -bool cgroup_freezing(struct task_struct *task) +bool cgroup1_freezing(struct task_struct *task) { bool ret; diff --git a/kernel/freezer.c b/kernel/freezer.c index ddc11a8bd2ea..a76bf957fb32 100644 --- a/kernel/freezer.c +++ b/kernel/freezer.c @@ -44,7 +44,7 @@ bool freezing_slow_path(struct task_struct *p) if (tsk_is_oom_victim(p)) return false; - if (pm_nosig_freezing || cgroup_freezing(p)) + if (pm_nosig_freezing || cgroup1_freezing(p)) return true; if (pm_freezing && !(p->flags & PF_KTHREAD)) diff --git a/kernel/power/Kconfig b/kernel/power/Kconfig index 54a623680019..05337f437cca 100644 --- a/kernel/power/Kconfig +++ b/kernel/power/Kconfig @@ -202,6 +202,17 @@ config PM_WAKELOCKS_GC depends on PM_WAKELOCKS default y +config PM_QOS_CPU_SYSTEM_WAKEUP + bool "User space interface for CPU system wakeup QoS" + depends on CPU_IDLE + help + Enable this to allow user space via the cpu_wakeup_latency file to + specify a CPU system wakeup latency limit. + + This may be particularly useful for platforms supporting multiple low + power states for CPUs during system-wide suspend and s2idle in + particular. 
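The control file this option enables is registered later in this diff (kernel/power/qos.c) and follows the existing cpu_dma_latency model: each open file descriptor holds one request, a write of exactly sizeof(s32) is taken as a binary value, string writes are parsed as hex, and the request is dropped on close. A user-space sketch under those assumptions (the /dev node name and the microsecond unit are inferred by analogy with the other CPU latency QoS interfaces), holding a limit of 100 for the life of the process:

#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>

int main(void)
{
	int32_t limit_us = 100;
	int fd = open("/dev/cpu_wakeup_latency", O_RDWR);

	if (fd < 0)
		return 1;
	if (write(fd, &limit_us, sizeof(limit_us)) != sizeof(limit_us))
		return 1;
	pause();	/* the limit is dropped when this fd is closed */
	return 0;
}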
+ config PM bool "Device power management core functionality" help diff --git a/kernel/power/Makefile b/kernel/power/Makefile index 874ad834dc8d..773e2789412b 100644 --- a/kernel/power/Makefile +++ b/kernel/power/Makefile @@ -21,4 +21,6 @@ obj-$(CONFIG_PM_WAKELOCKS) += wakelock.o obj-$(CONFIG_MAGIC_SYSRQ) += poweroff.o -obj-$(CONFIG_ENERGY_MODEL) += energy_model.o +obj-$(CONFIG_ENERGY_MODEL) += em.o +em-y := energy_model.o +em-$(CONFIG_NET) += em_netlink_autogen.o em_netlink.o diff --git a/kernel/power/console.c b/kernel/power/console.c index 19c48aa5355d..a906a0ac0f9b 100644 --- a/kernel/power/console.c +++ b/kernel/power/console.c @@ -44,9 +44,10 @@ static LIST_HEAD(pm_vt_switch_list); * no_console_suspend argument has been passed on the command line, VT * switches will occur. */ -void pm_vt_switch_required(struct device *dev, bool required) +int pm_vt_switch_required(struct device *dev, bool required) { struct pm_vt_switch *entry, *tmp; + int ret = 0; mutex_lock(&vt_switch_mutex); list_for_each_entry(tmp, &pm_vt_switch_list, head) { @@ -58,8 +59,10 @@ void pm_vt_switch_required(struct device *dev, bool required) } entry = kmalloc(sizeof(*entry), GFP_KERNEL); - if (!entry) + if (!entry) { + ret = -ENOMEM; goto out; + } entry->required = required; entry->dev = dev; @@ -67,6 +70,7 @@ void pm_vt_switch_required(struct device *dev, bool required) list_add(&entry->head, &pm_vt_switch_list); out: mutex_unlock(&vt_switch_mutex); + return ret; } EXPORT_SYMBOL(pm_vt_switch_required); diff --git a/kernel/power/em_netlink.c b/kernel/power/em_netlink.c new file mode 100644 index 000000000000..4b85da138a06 --- /dev/null +++ b/kernel/power/em_netlink.c @@ -0,0 +1,308 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * + * Generic netlink for energy model. + * + * Copyright (c) 2025 Valve Corporation. 
+ * Author: Changwoo Min <changwoo@igalia.com> + */ + +#define pr_fmt(fmt) "energy_model: " fmt + +#include <linux/energy_model.h> +#include <net/sock.h> +#include <net/genetlink.h> +#include <uapi/linux/energy_model.h> + +#include "em_netlink.h" +#include "em_netlink_autogen.h" + +#define EM_A_PD_CPUS_LEN 256 + +/*************************** Command encoding ********************************/ +static int __em_nl_get_pd_size(struct em_perf_domain *pd, void *data) +{ + char cpus_buf[EM_A_PD_CPUS_LEN]; + int *tot_msg_sz = data; + int msg_sz, cpus_sz; + + cpus_sz = snprintf(cpus_buf, sizeof(cpus_buf), "%*pb", + cpumask_pr_args(to_cpumask(pd->cpus))); + + msg_sz = nla_total_size(0) + /* EM_A_PDS_PD */ + nla_total_size(sizeof(u32)) + /* EM_A_PD_PD_ID */ + nla_total_size_64bit(sizeof(u64)) + /* EM_A_PD_FLAGS */ + nla_total_size(cpus_sz); /* EM_A_PD_CPUS */ + + *tot_msg_sz += nlmsg_total_size(genlmsg_msg_size(msg_sz)); + return 0; +} + +static int __em_nl_get_pd(struct em_perf_domain *pd, void *data) +{ + char cpus_buf[EM_A_PD_CPUS_LEN]; + struct sk_buff *msg = data; + struct nlattr *entry; + + entry = nla_nest_start(msg, EM_A_PDS_PD); + if (!entry) + goto out_cancel_nest; + + if (nla_put_u32(msg, EM_A_PD_PD_ID, pd->id)) + goto out_cancel_nest; + + if (nla_put_u64_64bit(msg, EM_A_PD_FLAGS, pd->flags, EM_A_PD_PAD)) + goto out_cancel_nest; + + snprintf(cpus_buf, sizeof(cpus_buf), "%*pb", + cpumask_pr_args(to_cpumask(pd->cpus))); + if (nla_put_string(msg, EM_A_PD_CPUS, cpus_buf)) + goto out_cancel_nest; + + nla_nest_end(msg, entry); + + return 0; + +out_cancel_nest: + nla_nest_cancel(msg, entry); + + return -EMSGSIZE; +} + +int em_nl_get_pds_doit(struct sk_buff *skb, struct genl_info *info) +{ + struct sk_buff *msg; + void *hdr; + int cmd = info->genlhdr->cmd; + int ret = -EMSGSIZE, msg_sz = 0; + + for_each_em_perf_domain(__em_nl_get_pd_size, &msg_sz); + + msg = genlmsg_new(msg_sz, GFP_KERNEL); + if (!msg) + return -ENOMEM; + + hdr = genlmsg_put_reply(msg, info, &em_nl_family, 0, cmd); + if (!hdr) + goto out_free_msg; + + ret = for_each_em_perf_domain(__em_nl_get_pd, msg); + if (ret) + goto out_cancel_msg; + + genlmsg_end(msg, hdr); + + return genlmsg_reply(msg, info); + +out_cancel_msg: + genlmsg_cancel(msg, hdr); +out_free_msg: + nlmsg_free(msg); + + return ret; +} + +static struct em_perf_domain *__em_nl_get_pd_table_id(struct nlattr **attrs) +{ + struct em_perf_domain *pd; + int id; + + if (!attrs[EM_A_PD_TABLE_PD_ID]) + return NULL; + + id = nla_get_u32(attrs[EM_A_PD_TABLE_PD_ID]); + pd = em_perf_domain_get_by_id(id); + return pd; +} + +static int __em_nl_get_pd_table_size(const struct em_perf_domain *pd) +{ + int id_sz, ps_sz; + + id_sz = nla_total_size(sizeof(u32)); /* EM_A_PD_TABLE_PD_ID */ + ps_sz = nla_total_size(0) + /* EM_A_PD_TABLE_PS */ + nla_total_size_64bit(sizeof(u64)) + /* EM_A_PS_PERFORMANCE */ + nla_total_size_64bit(sizeof(u64)) + /* EM_A_PS_FREQUENCY */ + nla_total_size_64bit(sizeof(u64)) + /* EM_A_PS_POWER */ + nla_total_size_64bit(sizeof(u64)) + /* EM_A_PS_COST */ + nla_total_size_64bit(sizeof(u64)); /* EM_A_PS_FLAGS */ + ps_sz *= pd->nr_perf_states; + + return nlmsg_total_size(genlmsg_msg_size(id_sz + ps_sz)); +} + +static int __em_nl_get_pd_table(struct sk_buff *msg, const struct em_perf_domain *pd) +{ + struct em_perf_state *table, *ps; + struct nlattr *entry; + int i; + + if (nla_put_u32(msg, EM_A_PD_TABLE_PD_ID, pd->id)) + goto out_err; + + rcu_read_lock(); + table = em_perf_state_from_pd((struct em_perf_domain *)pd); + + for (i = 0; i < pd->nr_perf_states; i++) { + ps = 
&table[i]; + + entry = nla_nest_start(msg, EM_A_PD_TABLE_PS); + if (!entry) + goto out_unlock_ps; + + if (nla_put_u64_64bit(msg, EM_A_PS_PERFORMANCE, + ps->performance, EM_A_PS_PAD)) + goto out_cancel_ps_nest; + if (nla_put_u64_64bit(msg, EM_A_PS_FREQUENCY, + ps->frequency, EM_A_PS_PAD)) + goto out_cancel_ps_nest; + if (nla_put_u64_64bit(msg, EM_A_PS_POWER, + ps->power, EM_A_PS_PAD)) + goto out_cancel_ps_nest; + if (nla_put_u64_64bit(msg, EM_A_PS_COST, + ps->cost, EM_A_PS_PAD)) + goto out_cancel_ps_nest; + if (nla_put_u64_64bit(msg, EM_A_PS_FLAGS, + ps->flags, EM_A_PS_PAD)) + goto out_cancel_ps_nest; + + nla_nest_end(msg, entry); + } + rcu_read_unlock(); + return 0; + +out_cancel_ps_nest: + nla_nest_cancel(msg, entry); +out_unlock_ps: + rcu_read_unlock(); +out_err: + return -EMSGSIZE; +} + +int em_nl_get_pd_table_doit(struct sk_buff *skb, struct genl_info *info) +{ + int cmd = info->genlhdr->cmd; + int msg_sz, ret = -EMSGSIZE; + struct em_perf_domain *pd; + struct sk_buff *msg; + void *hdr; + + pd = __em_nl_get_pd_table_id(info->attrs); + if (!pd) + return -EINVAL; + + msg_sz = __em_nl_get_pd_table_size(pd); + + msg = genlmsg_new(msg_sz, GFP_KERNEL); + if (!msg) + return -ENOMEM; + + hdr = genlmsg_put_reply(msg, info, &em_nl_family, 0, cmd); + if (!hdr) + goto out_free_msg; + + ret = __em_nl_get_pd_table(msg, pd); + if (ret) + goto out_free_msg; + + genlmsg_end(msg, hdr); + return genlmsg_reply(msg, info); + +out_free_msg: + nlmsg_free(msg); + return ret; +} + + +/**************************** Event encoding *********************************/ +static void __em_notify_pd_table(const struct em_perf_domain *pd, int ntf_type) +{ + struct sk_buff *msg; + int msg_sz, ret = -EMSGSIZE; + void *hdr; + + if (!genl_has_listeners(&em_nl_family, &init_net, EM_NLGRP_EVENT)) + return; + + msg_sz = __em_nl_get_pd_table_size(pd); + + msg = genlmsg_new(msg_sz, GFP_KERNEL); + if (!msg) + return; + + hdr = genlmsg_put(msg, 0, 0, &em_nl_family, 0, ntf_type); + if (!hdr) + goto out_free_msg; + + ret = __em_nl_get_pd_table(msg, pd); + if (ret) + goto out_free_msg; + + genlmsg_end(msg, hdr); + + genlmsg_multicast(&em_nl_family, msg, 0, EM_NLGRP_EVENT, GFP_KERNEL); + + return; + +out_free_msg: + nlmsg_free(msg); + return; +} + +void em_notify_pd_created(const struct em_perf_domain *pd) +{ + __em_notify_pd_table(pd, EM_CMD_PD_CREATED); +} + +void em_notify_pd_updated(const struct em_perf_domain *pd) +{ + __em_notify_pd_table(pd, EM_CMD_PD_UPDATED); +} + +static int __em_notify_pd_deleted_size(const struct em_perf_domain *pd) +{ + int id_sz = nla_total_size(sizeof(u32)); /* EM_A_PD_TABLE_PD_ID */ + + return nlmsg_total_size(genlmsg_msg_size(id_sz)); +} + +void em_notify_pd_deleted(const struct em_perf_domain *pd) +{ + struct sk_buff *msg; + void *hdr; + int msg_sz; + + if (!genl_has_listeners(&em_nl_family, &init_net, EM_NLGRP_EVENT)) + return; + + msg_sz = __em_notify_pd_deleted_size(pd); + + msg = genlmsg_new(msg_sz, GFP_KERNEL); + if (!msg) + return; + + hdr = genlmsg_put(msg, 0, 0, &em_nl_family, 0, EM_CMD_PD_DELETED); + if (!hdr) + goto out_free_msg; + + if (nla_put_u32(msg, EM_A_PD_TABLE_PD_ID, pd->id)) { + goto out_free_msg; + } + + genlmsg_end(msg, hdr); + + genlmsg_multicast(&em_nl_family, msg, 0, EM_NLGRP_EVENT, GFP_KERNEL); + + return; + +out_free_msg: + nlmsg_free(msg); + return; +} + +/**************************** Initialization *********************************/ +static int __init em_netlink_init(void) +{ + return genl_register_family(&em_nl_family); +} +postcore_initcall(em_netlink_init); diff --git 
a/kernel/power/em_netlink.h b/kernel/power/em_netlink.h new file mode 100644 index 000000000000..583d7f1c3939 --- /dev/null +++ b/kernel/power/em_netlink.h @@ -0,0 +1,39 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * + * Generic netlink for energy model. + * + * Copyright (c) 2025 Valve Corporation. + * Author: Changwoo Min <changwoo@igalia.com> + */ +#ifndef _EM_NETLINK_H +#define _EM_NETLINK_H + +#if defined(CONFIG_ENERGY_MODEL) && defined(CONFIG_NET) +int for_each_em_perf_domain(int (*cb)(struct em_perf_domain*, void *), + void *data); +struct em_perf_domain *em_perf_domain_get_by_id(int id); +void em_notify_pd_created(const struct em_perf_domain *pd); +void em_notify_pd_deleted(const struct em_perf_domain *pd); +void em_notify_pd_updated(const struct em_perf_domain *pd); +#else +static inline +int for_each_em_perf_domain(int (*cb)(struct em_perf_domain*, void *), + void *data) +{ + return -EINVAL; +} +static inline +struct em_perf_domain *em_perf_domain_get_by_id(int id) +{ + return NULL; +} + +static inline void em_notify_pd_created(const struct em_perf_domain *pd) {} + +static inline void em_notify_pd_deleted(const struct em_perf_domain *pd) {} + +static inline void em_notify_pd_updated(const struct em_perf_domain *pd) {} +#endif + +#endif /* _EM_NETLINK_H */ diff --git a/kernel/power/em_netlink_autogen.c b/kernel/power/em_netlink_autogen.c new file mode 100644 index 000000000000..a7a09ab1d1c2 --- /dev/null +++ b/kernel/power/em_netlink_autogen.c @@ -0,0 +1,48 @@ +// SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) +/* Do not edit directly, auto-generated from: */ +/* Documentation/netlink/specs/em.yaml */ +/* YNL-GEN kernel source */ + +#include <net/netlink.h> +#include <net/genetlink.h> + +#include "em_netlink_autogen.h" + +#include <uapi/linux/energy_model.h> + +/* EM_CMD_GET_PD_TABLE - do */ +static const struct nla_policy em_get_pd_table_nl_policy[EM_A_PD_TABLE_PD_ID + 1] = { + [EM_A_PD_TABLE_PD_ID] = { .type = NLA_U32, }, +}; + +/* Ops table for em */ +static const struct genl_split_ops em_nl_ops[] = { + { + .cmd = EM_CMD_GET_PDS, + .doit = em_nl_get_pds_doit, + .flags = GENL_CMD_CAP_DO, + }, + { + .cmd = EM_CMD_GET_PD_TABLE, + .doit = em_nl_get_pd_table_doit, + .policy = em_get_pd_table_nl_policy, + .maxattr = EM_A_PD_TABLE_PD_ID, + .flags = GENL_CMD_CAP_DO, + }, +}; + +static const struct genl_multicast_group em_nl_mcgrps[] = { + [EM_NLGRP_EVENT] = { "event", }, +}; + +struct genl_family em_nl_family __ro_after_init = { + .name = EM_FAMILY_NAME, + .version = EM_FAMILY_VERSION, + .netnsok = true, + .parallel_ops = true, + .module = THIS_MODULE, + .split_ops = em_nl_ops, + .n_split_ops = ARRAY_SIZE(em_nl_ops), + .mcgrps = em_nl_mcgrps, + .n_mcgrps = ARRAY_SIZE(em_nl_mcgrps), +}; diff --git a/kernel/power/em_netlink_autogen.h b/kernel/power/em_netlink_autogen.h new file mode 100644 index 000000000000..78ce609641f1 --- /dev/null +++ b/kernel/power/em_netlink_autogen.h @@ -0,0 +1,23 @@ +/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) */ +/* Do not edit directly, auto-generated from: */ +/* Documentation/netlink/specs/em.yaml */ +/* YNL-GEN kernel header */ + +#ifndef _LINUX_EM_GEN_H +#define _LINUX_EM_GEN_H + +#include <net/netlink.h> +#include <net/genetlink.h> + +#include <uapi/linux/energy_model.h> + +int em_nl_get_pds_doit(struct sk_buff *skb, struct genl_info *info); +int em_nl_get_pd_table_doit(struct sk_buff *skb, struct genl_info *info); + +enum { + EM_NLGRP_EVENT, +}; + +extern struct genl_family 
em_nl_family; + +#endif /* _LINUX_EM_GEN_H */ diff --git a/kernel/power/energy_model.c b/kernel/power/energy_model.c index 5f17d2e8e954..11af9f64aa82 100644 --- a/kernel/power/energy_model.c +++ b/kernel/power/energy_model.c @@ -17,12 +17,24 @@ #include <linux/sched/topology.h> #include <linux/slab.h> +#include "em_netlink.h" + /* * Mutex serializing the registrations of performance domains and letting * callbacks defined by drivers sleep. */ static DEFINE_MUTEX(em_pd_mutex); +/* + * Manage performance domains with IDs. One can iterate the performance domains + * through the list and pick one with their associated ID. The mutex serializes + * the list access. When holding em_pd_list_mutex, em_pd_mutex should not be + * taken to avoid potential deadlock. + */ +static DEFINE_IDA(em_pd_ida); +static LIST_HEAD(em_pd_list); +static DEFINE_MUTEX(em_pd_list_mutex); + static void em_cpufreq_update_efficiencies(struct device *dev, struct em_perf_state *table); static void em_check_capacity_update(void); @@ -116,6 +128,16 @@ static int em_debug_flags_show(struct seq_file *s, void *unused) } DEFINE_SHOW_ATTRIBUTE(em_debug_flags); +static int em_debug_id_show(struct seq_file *s, void *unused) +{ + struct em_perf_domain *pd = s->private; + + seq_printf(s, "%d\n", pd->id); + + return 0; +} +DEFINE_SHOW_ATTRIBUTE(em_debug_id); + static void em_debug_create_pd(struct device *dev) { struct em_dbg_info *em_dbg; @@ -132,6 +154,8 @@ static void em_debug_create_pd(struct device *dev) debugfs_create_file("flags", 0444, d, dev->em_pd, &em_debug_flags_fops); + debugfs_create_file("id", 0444, d, dev->em_pd, &em_debug_id_fops); + em_dbg = devm_kcalloc(dev, dev->em_pd->nr_perf_states, sizeof(*em_dbg), GFP_KERNEL); if (!em_dbg) @@ -328,6 +352,8 @@ int em_dev_update_perf_domain(struct device *dev, em_table_free(old_table); mutex_unlock(&em_pd_mutex); + + em_notify_pd_updated(pd); return 0; } EXPORT_SYMBOL_GPL(em_dev_update_perf_domain); @@ -396,7 +422,7 @@ static int em_create_pd(struct device *dev, int nr_states, struct em_perf_table *em_table; struct em_perf_domain *pd; struct device *cpu_dev; - int cpu, ret, num_cpus; + int cpu, ret, num_cpus, id; if (_is_cpu_device(dev)) { num_cpus = cpumask_weight(cpus); @@ -420,6 +446,13 @@ static int em_create_pd(struct device *dev, int nr_states, pd->nr_perf_states = nr_states; + INIT_LIST_HEAD(&pd->node); + + id = ida_alloc(&em_pd_ida, GFP_KERNEL); + if (id < 0) + return -ENOMEM; + pd->id = id; + em_table = em_table_alloc(pd); if (!em_table) goto free_pd; @@ -444,6 +477,7 @@ free_pd_table: kfree(em_table); free_pd: kfree(pd); + ida_free(&em_pd_ida, id); return -EINVAL; } @@ -659,8 +693,16 @@ int em_dev_register_pd_no_update(struct device *dev, unsigned int nr_states, unlock: mutex_unlock(&em_pd_mutex); + if (ret) + return ret; - return ret; + mutex_lock(&em_pd_list_mutex); + list_add_tail(&dev->em_pd->node, &em_pd_list); + mutex_unlock(&em_pd_list_mutex); + + em_notify_pd_created(dev->em_pd); + + return 0; } EXPORT_SYMBOL_GPL(em_dev_register_pd_no_update); @@ -678,6 +720,12 @@ void em_dev_unregister_perf_domain(struct device *dev) if (_is_cpu_device(dev)) return; + mutex_lock(&em_pd_list_mutex); + list_del_init(&dev->em_pd->node); + mutex_unlock(&em_pd_list_mutex); + + em_notify_pd_deleted(dev->em_pd); + /* * The mutex separates all register/unregister requests and protects * from potential clean-up/setup issues in the debugfs directories. 
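With performance domains now kept on em_pd_list under their own mutex, in-kernel users iterate them through the callback-based walker whose definition follows in the final hunk of energy_model.c below; em_nl_get_pds_doit() earlier in the diff is the first caller. A sketch of a hypothetical caller counting domains; note the callback runs with em_pd_list_mutex held, so per the deadlock comment above it must not take em_pd_mutex, and a nonzero return stops the walk:

static int count_pd(struct em_perf_domain *pd, void *data)
{
	(*(int *)data)++;

	return 0;	/* zero: continue iterating */
}

static int em_count_domains(void)
{
	int nr = 0;

	for_each_em_perf_domain(count_pd, &nr);

	return nr;
}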
@@ -689,6 +737,8 @@ void em_dev_unregister_perf_domain(struct device *dev) em_table_free(rcu_dereference_protected(dev->em_pd->em_table, lockdep_is_held(&em_pd_mutex))); + ida_free(&em_pd_ida, dev->em_pd->id); + kfree(dev->em_pd); dev->em_pd = NULL; mutex_unlock(&em_pd_mutex); @@ -958,3 +1008,39 @@ void em_rebuild_sched_domains(void) */ schedule_work(&rebuild_sd_work); } + +#if defined(CONFIG_ENERGY_MODEL) && defined(CONFIG_NET) +int for_each_em_perf_domain(int (*cb)(struct em_perf_domain*, void *), + void *data) +{ + struct em_perf_domain *pd; + + lockdep_assert_not_held(&em_pd_mutex); + guard(mutex)(&em_pd_list_mutex); + + list_for_each_entry(pd, &em_pd_list, node) { + int ret; + + ret = cb(pd, data); + if (ret) + return ret; + } + + return 0; +} + +struct em_perf_domain *em_perf_domain_get_by_id(int id) +{ + struct em_perf_domain *pd; + + lockdep_assert_not_held(&em_pd_mutex); + guard(mutex)(&em_pd_list_mutex); + + list_for_each_entry(pd, &em_pd_list, node) { + if (pd->id == id) + return pd; + } + + return NULL; +} +#endif diff --git a/kernel/power/hibernate.c b/kernel/power/hibernate.c index 26e45f86b955..af8d07bafe02 100644 --- a/kernel/power/hibernate.c +++ b/kernel/power/hibernate.c @@ -820,7 +820,10 @@ int hibernate(void) if (error) goto Restore; - ksys_sync_helper(); + error = pm_sleep_fs_sync(); + if (error) + goto Notify; + filesystems_freeze(filesystem_freeze_enabled); error = freeze_processes(); @@ -891,6 +894,7 @@ int hibernate(void) freezer_test_done = false; Exit: filesystems_thaw(); + Notify: pm_notifier_call_chain(PM_POST_HIBERNATION); Restore: pm_restore_console(); diff --git a/kernel/power/main.c b/kernel/power/main.c index 549f51ca3a1e..03b2c5495c77 100644 --- a/kernel/power/main.c +++ b/kernel/power/main.c @@ -18,6 +18,8 @@ #include <linux/suspend.h> #include <linux/syscalls.h> #include <linux/pm_runtime.h> +#include <linux/atomic.h> +#include <linux/wait.h> #include "power.h" @@ -92,6 +94,61 @@ void ksys_sync_helper(void) } EXPORT_SYMBOL_GPL(ksys_sync_helper); +#if defined(CONFIG_SUSPEND) || defined(CONFIG_HIBERNATION) +/* Wakeup events handling resolution while syncing file systems in jiffies */ +#define PM_FS_SYNC_WAKEUP_RESOLUTION 5 + +static atomic_t pm_fs_sync_count = ATOMIC_INIT(0); +static struct workqueue_struct *pm_fs_sync_wq; +static DECLARE_WAIT_QUEUE_HEAD(pm_fs_sync_wait); + +static bool pm_fs_sync_completed(void) +{ + return atomic_read(&pm_fs_sync_count) == 0; +} + +static void pm_fs_sync_work_fn(struct work_struct *work) +{ + ksys_sync_helper(); + + if (atomic_dec_and_test(&pm_fs_sync_count)) + wake_up(&pm_fs_sync_wait); +} +static DECLARE_WORK(pm_fs_sync_work, pm_fs_sync_work_fn); + +/** + * pm_sleep_fs_sync() - Sync file systems in an interruptible way + * + * Return: 0 on successful file system sync, or -EBUSY if the file system sync + * was aborted. + */ +int pm_sleep_fs_sync(void) +{ + pm_wakeup_clear(0); + + /* + * Take back-to-back sleeps into account by queuing a subsequent fs sync + * only if the previous fs sync is running or is not queued. Multiple fs + * syncs increase the likelihood of saving the latest files immediately + * before sleep. 
+ */ + if (!work_pending(&pm_fs_sync_work)) { + atomic_inc(&pm_fs_sync_count); + queue_work(pm_fs_sync_wq, &pm_fs_sync_work); + } + + while (!pm_fs_sync_completed()) { + if (pm_wakeup_pending()) + return -EBUSY; + + wait_event_timeout(pm_fs_sync_wait, pm_fs_sync_completed(), + PM_FS_SYNC_WAKEUP_RESOLUTION); + } + + return 0; +} +#endif /* CONFIG_SUSPEND || CONFIG_HIBERNATION */ + /* Routines for PM-transition notifications */ static BLOCKING_NOTIFIER_HEAD(pm_chain_head); @@ -231,10 +288,10 @@ static ssize_t mem_sleep_store(struct kobject *kobj, struct kobj_attribute *attr power_attr(mem_sleep); /* - * sync_on_suspend: invoke ksys_sync_helper() before suspend. + * sync_on_suspend: Sync file systems before suspend. * - * show() returns whether ksys_sync_helper() is invoked before suspend. - * store() accepts 0 or 1. 0 disables ksys_sync_helper() and 1 enables it. + * show() returns whether file systems sync before suspend is enabled. + * store() accepts 0 or 1. 0 disables file systems sync and 1 enables it. */ bool sync_on_suspend_enabled = !IS_ENABLED(CONFIG_SUSPEND_SKIP_SYNC); @@ -1066,16 +1123,26 @@ static const struct attribute_group *attr_groups[] = { struct workqueue_struct *pm_wq; EXPORT_SYMBOL_GPL(pm_wq); -static int __init pm_start_workqueue(void) +static int __init pm_start_workqueues(void) { - pm_wq = alloc_workqueue("pm", WQ_FREEZABLE, 0); + pm_wq = alloc_workqueue("pm", WQ_FREEZABLE | WQ_UNBOUND, 0); + if (!pm_wq) + return -ENOMEM; - return pm_wq ? 0 : -ENOMEM; +#if defined(CONFIG_SUSPEND) || defined(CONFIG_HIBERNATION) + pm_fs_sync_wq = alloc_ordered_workqueue("pm_fs_sync", 0); + if (!pm_fs_sync_wq) { + destroy_workqueue(pm_wq); + return -ENOMEM; + } +#endif + + return 0; } static int __init pm_init(void) { - int error = pm_start_workqueue(); + int error = pm_start_workqueues(); if (error) return error; hibernate_image_size_init(); diff --git a/kernel/power/power.h b/kernel/power/power.h index 7ccd709af93f..75b63843886e 100644 --- a/kernel/power/power.h +++ b/kernel/power/power.h @@ -19,6 +19,7 @@ struct swsusp_info { } __aligned(PAGE_SIZE); #if defined(CONFIG_SUSPEND) || defined(CONFIG_HIBERNATION) +extern int pm_sleep_fs_sync(void); extern bool filesystem_freeze_enabled; #endif diff --git a/kernel/power/qos.c b/kernel/power/qos.c index 4244b069442e..f7d8064e9adc 100644 --- a/kernel/power/qos.c +++ b/kernel/power/qos.c @@ -415,6 +415,105 @@ static struct miscdevice cpu_latency_qos_miscdev = { .fops = &cpu_latency_qos_fops, }; +#ifdef CONFIG_PM_QOS_CPU_SYSTEM_WAKEUP +/* The CPU system wakeup latency QoS. */ +static struct pm_qos_constraints cpu_wakeup_latency_constraints = { + .list = PLIST_HEAD_INIT(cpu_wakeup_latency_constraints.list), + .target_value = PM_QOS_RESUME_LATENCY_NO_CONSTRAINT, + .default_value = PM_QOS_RESUME_LATENCY_NO_CONSTRAINT, + .no_constraint_value = PM_QOS_RESUME_LATENCY_NO_CONSTRAINT, + .type = PM_QOS_MIN, +}; + +/** + * cpu_wakeup_latency_qos_limit - Current CPU system wakeup latency QoS limit. + * + * Returns the current CPU system wakeup latency QoS limit that may have been + * requested by user space. 
+ */ +s32 cpu_wakeup_latency_qos_limit(void) +{ + return pm_qos_read_value(&cpu_wakeup_latency_constraints); +} + +static int cpu_wakeup_latency_qos_open(struct inode *inode, struct file *filp) +{ + struct pm_qos_request *req; + + req = kzalloc(sizeof(*req), GFP_KERNEL); + if (!req) + return -ENOMEM; + + req->qos = &cpu_wakeup_latency_constraints; + pm_qos_update_target(req->qos, &req->node, PM_QOS_ADD_REQ, + PM_QOS_RESUME_LATENCY_NO_CONSTRAINT); + filp->private_data = req; + + return 0; +} + +static int cpu_wakeup_latency_qos_release(struct inode *inode, + struct file *filp) +{ + struct pm_qos_request *req = filp->private_data; + + filp->private_data = NULL; + pm_qos_update_target(req->qos, &req->node, PM_QOS_REMOVE_REQ, + PM_QOS_RESUME_LATENCY_NO_CONSTRAINT); + kfree(req); + + return 0; +} + +static ssize_t cpu_wakeup_latency_qos_read(struct file *filp, char __user *buf, + size_t count, loff_t *f_pos) +{ + s32 value = pm_qos_read_value(&cpu_wakeup_latency_constraints); + + return simple_read_from_buffer(buf, count, f_pos, &value, sizeof(s32)); +} + +static ssize_t cpu_wakeup_latency_qos_write(struct file *filp, + const char __user *buf, + size_t count, loff_t *f_pos) +{ + struct pm_qos_request *req = filp->private_data; + s32 value; + + if (count == sizeof(s32)) { + if (copy_from_user(&value, buf, sizeof(s32))) + return -EFAULT; + } else { + int ret; + + ret = kstrtos32_from_user(buf, count, 16, &value); + if (ret) + return ret; + } + + if (value < 0) + return -EINVAL; + + pm_qos_update_target(req->qos, &req->node, PM_QOS_UPDATE_REQ, value); + + return count; +} + +static const struct file_operations cpu_wakeup_latency_qos_fops = { + .open = cpu_wakeup_latency_qos_open, + .release = cpu_wakeup_latency_qos_release, + .read = cpu_wakeup_latency_qos_read, + .write = cpu_wakeup_latency_qos_write, + .llseek = noop_llseek, +}; + +static struct miscdevice cpu_wakeup_latency_qos_miscdev = { + .minor = MISC_DYNAMIC_MINOR, + .name = "cpu_wakeup_latency", + .fops = &cpu_wakeup_latency_qos_fops, +}; +#endif /* CONFIG_PM_QOS_CPU_SYSTEM_WAKEUP */ + static int __init cpu_latency_qos_init(void) { int ret; @@ -424,6 +523,13 @@ static int __init cpu_latency_qos_init(void) pr_err("%s: %s setup failed\n", __func__, cpu_latency_qos_miscdev.name); +#ifdef CONFIG_PM_QOS_CPU_SYSTEM_WAKEUP + ret = misc_register(&cpu_wakeup_latency_qos_miscdev); + if (ret < 0) + pr_err("%s: %s setup failed\n", __func__, + cpu_wakeup_latency_qos_miscdev.name); +#endif + return ret; } late_initcall(cpu_latency_qos_init); diff --git a/kernel/power/snapshot.c b/kernel/power/snapshot.c index 645f42e40478..0a946932d5c1 100644 --- a/kernel/power/snapshot.c +++ b/kernel/power/snapshot.c @@ -2110,22 +2110,20 @@ asmlinkage __visible int swsusp_save(void) { unsigned int nr_pages, nr_highmem; - pr_info("Creating image:\n"); + pm_deferred_pr_dbg("Creating image\n"); drain_local_pages(NULL); nr_pages = count_data_pages(); nr_highmem = count_highmem_pages(); - pr_info("Need to copy %u pages\n", nr_pages + nr_highmem); + pm_deferred_pr_dbg("Need to copy %u pages\n", nr_pages + nr_highmem); if (!enough_free_mem(nr_pages, nr_highmem)) { - pr_err("Not enough free memory\n"); + pm_deferred_pr_dbg("Not enough free memory for image creation\n"); return -ENOMEM; } - if (swsusp_alloc(©_bm, nr_pages, nr_highmem)) { - pr_err("Memory allocation failed\n"); + if (swsusp_alloc(©_bm, nr_pages, nr_highmem)) return -ENOMEM; - } /* * During allocating of suspend pagedir, new cold pages may appear. 
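Both pm_sleep_fs_sync() above and the suspend_test() change just below replace long uninterruptible delays with the same shape of loop: sleep in short slices and abort as soon as pm_wakeup_pending() reports an event. A generic sketch of that pattern (foo_done() stands in for the real completion condition):

static int foo_wait_abortable(void)
{
	while (!foo_done()) {
		if (pm_wakeup_pending())
			return -EBUSY;	/* wakeup event: abort the wait */

		msleep(100);		/* re-check at coarse resolution */
	}

	return 0;
}

pm_sleep_fs_sync() refines this by parking on a waitqueue with wait_event_timeout() rather than a plain sleep, so the common case still completes as soon as the sync worker finishes.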
@@ -2144,7 +2142,8 @@ asmlinkage __visible int swsusp_save(void) nr_zero_pages = nr_pages - nr_copy_pages; nr_meta_pages = DIV_ROUND_UP(nr_pages * sizeof(long), PAGE_SIZE); - pr_info("Image created (%d pages copied, %d zero pages)\n", nr_copy_pages, nr_zero_pages); + pm_deferred_pr_dbg("Image created (%d pages copied, %d zero pages)\n", + nr_copy_pages, nr_zero_pages); return 0; } diff --git a/kernel/power/suspend.c b/kernel/power/suspend.c index 3d4ebedad69f..2da4482bb6eb 100644 --- a/kernel/power/suspend.c +++ b/kernel/power/suspend.c @@ -344,10 +344,14 @@ MODULE_PARM_DESC(pm_test_delay, static int suspend_test(int level) { #ifdef CONFIG_PM_DEBUG + int i; + if (pm_test_level == level) { pr_info("suspend debug: Waiting for %d second(s).\n", pm_test_delay); - mdelay(pm_test_delay * 1000); + for (i = 0; i < pm_test_delay && !pm_wakeup_pending(); i++) + msleep(1000); + return 1; } #endif /* !CONFIG_PM_DEBUG */ @@ -589,7 +593,11 @@ static int enter_state(suspend_state_t state) if (sync_on_suspend_enabled) { trace_suspend_resume(TPS("sync_filesystems"), 0, true); - ksys_sync_helper(); + + error = pm_sleep_fs_sync(); + if (error) + goto Unlock; + trace_suspend_resume(TPS("sync_filesystems"), 0, false); } diff --git a/kernel/power/swap.c b/kernel/power/swap.c index 70ae21f7370d..33a186373bef 100644 --- a/kernel/power/swap.c +++ b/kernel/power/swap.c @@ -46,19 +46,18 @@ static bool clean_pages_on_read; static bool clean_pages_on_decompress; /* - * The swap map is a data structure used for keeping track of each page - * written to a swap partition. It consists of many swap_map_page - * structures that contain each an array of MAP_PAGE_ENTRIES swap entries. - * These structures are stored on the swap and linked together with the - * help of the .next_swap member. + * The swap map is a data structure used for keeping track of each page + * written to a swap partition. It consists of many swap_map_page structures + * that contain each an array of MAP_PAGE_ENTRIES swap entries. These + * structures are stored on the swap and linked together with the help of the + * .next_swap member. * - * The swap map is created during suspend. The swap map pages are - * allocated and populated one at a time, so we only need one memory - * page to set up the entire structure. + * The swap map is created during suspend. The swap map pages are allocated and + * populated one at a time, so we only need one memory page to set up the entire + * structure. * - * During resume we pick up all swap_map_page structures into a list. + * During resume we pick up all swap_map_page structures into a list. */ - #define MAP_PAGE_ENTRIES (PAGE_SIZE / sizeof(sector_t) - 1) /* @@ -89,10 +88,8 @@ struct swap_map_page_list { }; /* - * The swap_map_handle structure is used for handling swap in - * a file-alike way + * The swap_map_handle structure is used for handling swap in a file-alike way. */ - struct swap_map_handle { struct swap_map_page *cur; struct swap_map_page_list *maps; @@ -117,10 +114,9 @@ struct swsusp_header { static struct swsusp_header *swsusp_header; /* - * The following functions are used for tracing the allocated - * swap pages, so that they can be freed in case of an error. + * The following functions are used for tracing the allocated swap pages, so + * that they can be freed in case of an error. 
diff --git a/kernel/power/swap.c b/kernel/power/swap.c
index 70ae21f7370d..33a186373bef 100644
--- a/kernel/power/swap.c
+++ b/kernel/power/swap.c
@@ -46,19 +46,18 @@ static bool clean_pages_on_read;
 static bool clean_pages_on_decompress;
 
 /*
- * The swap map is a data structure used for keeping track of each page
- * written to a swap partition. It consists of many swap_map_page
- * structures that contain each an array of MAP_PAGE_ENTRIES swap entries.
- * These structures are stored on the swap and linked together with the
- * help of the .next_swap member.
+ * The swap map is a data structure used for keeping track of each page
+ * written to a swap partition. It consists of many swap_map_page structures
+ * that contain each an array of MAP_PAGE_ENTRIES swap entries. These
+ * structures are stored on the swap and linked together with the help of the
+ * .next_swap member.
  *
- * The swap map is created during suspend. The swap map pages are
- * allocated and populated one at a time, so we only need one memory
- * page to set up the entire structure.
+ * The swap map is created during suspend. The swap map pages are allocated and
+ * populated one at a time, so we only need one memory page to set up the entire
+ * structure.
  *
- * During resume we pick up all swap_map_page structures into a list.
+ * During resume we pick up all swap_map_page structures into a list.
  */
-
 #define MAP_PAGE_ENTRIES (PAGE_SIZE / sizeof(sector_t) - 1)
 
 /*
@@ -89,10 +88,8 @@ struct swap_map_page_list {
 };
 
 /*
- * The swap_map_handle structure is used for handling swap in
- * a file-alike way
+ * The swap_map_handle structure is used for handling swap in a file-alike way.
  */
-
 struct swap_map_handle {
     struct swap_map_page *cur;
     struct swap_map_page_list *maps;
@@ -117,10 +114,9 @@ struct swsusp_header {
 static struct swsusp_header *swsusp_header;
 
 /*
- * The following functions are used for tracing the allocated
- * swap pages, so that they can be freed in case of an error.
+ * The following functions are used for tracing the allocated swap pages, so
+ * that they can be freed in case of an error.
  */
-
 struct swsusp_extent {
     struct rb_node node;
     unsigned long start;
@@ -170,15 +166,14 @@ static int swsusp_extents_insert(unsigned long swap_offset)
     return 0;
 }
 
-/*
- * alloc_swapdev_block - allocate a swap page and register that it has
- * been allocated, so that it can be freed in case of an error.
- */
-
 sector_t alloc_swapdev_block(int swap)
 {
     unsigned long offset;
 
+    /*
+     * Allocate a swap page and register that it has been allocated, so that
+     * it can be freed in case of an error.
+     */
     offset = swp_offset(get_swap_page_of_type(swap));
     if (offset) {
         if (swsusp_extents_insert(offset))
@@ -189,16 +184,14 @@ sector_t alloc_swapdev_block(int swap)
     return 0;
 }
 
-/*
- * free_all_swap_pages - free swap pages allocated for saving image data.
- * It also frees the extents used to register which swap entries had been
- * allocated.
- */
-
 void free_all_swap_pages(int swap)
 {
     struct rb_node *node;
 
+    /*
+     * Free swap pages allocated for saving image data. It also frees the
+     * extents used to register which swap entries had been allocated.
+     */
     while ((node = swsusp_extents.rb_node)) {
         struct swsusp_extent *ext;
@@ -303,6 +296,7 @@ static int hib_wait_io(struct hib_bio_batch *hb)
 /*
  * Saving part
  */
+
 static int mark_swapfiles(struct swap_map_handle *handle, unsigned int flags)
 {
     int error;
@@ -336,16 +330,14 @@ static int mark_swapfiles(struct swap_map_handle *handle, unsigned int flags)
  */
 unsigned int swsusp_header_flags;
 
-/**
- * swsusp_swap_check - check if the resume device is a swap device
- * and get its index (if so)
- *
- * This is called before saving image
- */
 static int swsusp_swap_check(void)
 {
     int res;
 
+    /*
+     * Check if the resume device is a swap device and get its index (if so).
+     * This is called before saving the image.
+     */
     if (swsusp_resume_device)
         res = swap_type_of(swsusp_resume_device, swsusp_resume_block);
     else
@@ -362,13 +354,6 @@ static int swsusp_swap_check(void)
     return 0;
 }
 
-/**
- * write_page - Write one page to given swap location.
- * @buf: Address we're writing.
- * @offset: Offset of the swap page we're writing to.
- * @hb: bio completion batch
- */
-
 static int write_page(void *buf, sector_t offset, struct hib_bio_batch *hb)
 {
     gfp_t gfp = GFP_NOIO | __GFP_NOWARN | __GFP_NORETRY;
@@ -519,17 +504,14 @@ static int swap_writer_finish(struct swap_map_handle *handle,
                               CMP_HEADER, PAGE_SIZE)
 #define CMP_SIZE (CMP_PAGES * PAGE_SIZE)
 
-/* Maximum number of threads for compression/decompression. */
-#define CMP_THREADS 3
+/* Default number of threads for compression/decompression. */
+#define CMP_THREADS 3
+static unsigned int hibernate_compression_threads = CMP_THREADS;
 
 /* Minimum/maximum number of pages for read buffering. */
 #define CMP_MIN_RD_PAGES 1024
 #define CMP_MAX_RD_PAGES 8192
 
-/**
- * save_image - save the suspend image data
- */
-
 static int save_image(struct swap_map_handle *handle,
                       struct snapshot_handle *snapshot,
                       unsigned int nr_to_write)
@@ -585,13 +567,48 @@ struct crc_data {
     wait_queue_head_t go;             /* start crc update */
     wait_queue_head_t done;           /* crc update done */
     u32 *crc32;                       /* points to handle's crc32 */
-    size_t *unc_len[CMP_THREADS];     /* uncompressed lengths */
-    unsigned char *unc[CMP_THREADS];  /* uncompressed data */
+    size_t **unc_len;                 /* uncompressed lengths */
+    unsigned char **unc;              /* uncompressed data */
 };
 
-/*
- * CRC32 update function that runs in its own thread.
- */
+static struct crc_data *alloc_crc_data(int nr_threads)
+{
+    struct crc_data *crc;
+
+    crc = kzalloc(sizeof(*crc), GFP_KERNEL);
+    if (!crc)
+        return NULL;
+
+    crc->unc = kcalloc(nr_threads, sizeof(*crc->unc), GFP_KERNEL);
+    if (!crc->unc)
+        goto err_free_crc;
+
+    crc->unc_len = kcalloc(nr_threads, sizeof(*crc->unc_len), GFP_KERNEL);
+    if (!crc->unc_len)
+        goto err_free_unc;
+
+    return crc;
+
+err_free_unc:
+    kfree(crc->unc);
+err_free_crc:
+    kfree(crc);
+    return NULL;
+}
+
+static void free_crc_data(struct crc_data *crc)
+{
+    if (!crc)
+        return;
+
+    if (crc->thr)
+        kthread_stop(crc->thr);
+
+    kfree(crc->unc_len);
+    kfree(crc->unc);
+    kfree(crc);
+}
+
 static int crc32_threadfn(void *data)
 {
     struct crc_data *d = data;
@@ -616,6 +633,7 @@ static int crc32_threadfn(void *data)
     }
     return 0;
 }
+
 /*
  * Structure used for data compression.
  */
@@ -637,9 +655,6 @@ struct cmp_data {
 /* Indicates the image size after compression */
 static atomic64_t compressed_size = ATOMIC_INIT(0);
 
-/*
- * Compression function that runs in its own thread.
- */
 static int compress_threadfn(void *data)
 {
     struct cmp_data *d = data;
@@ -671,12 +686,6 @@ static int compress_threadfn(void *data)
     return 0;
 }
 
-/**
- * save_compressed_image - Save the suspend image data after compression.
- * @handle: Swap map handle to use for saving the image.
- * @snapshot: Image to read data from.
- * @nr_to_write: Number of pages to save.
- */
 static int save_compressed_image(struct swap_map_handle *handle,
                                  struct snapshot_handle *snapshot,
                                  unsigned int nr_to_write)
@@ -703,7 +712,7 @@ static int save_compressed_image(struct swap_map_handle *handle,
      * footprint.
      */
     nr_threads = num_online_cpus() - 1;
-    nr_threads = clamp_val(nr_threads, 1, CMP_THREADS);
+    nr_threads = clamp_val(nr_threads, 1, hibernate_compression_threads);
 
     page = (void *)__get_free_page(GFP_NOIO | __GFP_HIGH);
     if (!page) {
@@ -719,7 +728,7 @@ static int save_compressed_image(struct swap_map_handle *handle,
         goto out_clean;
     }
 
-    crc = kzalloc(sizeof(*crc), GFP_KERNEL);
+    crc = alloc_crc_data(nr_threads);
     if (!crc) {
         pr_err("Failed to allocate crc\n");
         ret = -ENOMEM;
@@ -888,11 +897,7 @@ out_finish:
 
 out_clean:
     hib_finish_batch(&hb);
-    if (crc) {
-        if (crc->thr)
-            kthread_stop(crc->thr);
-        kfree(crc);
-    }
+    free_crc_data(crc);
     if (data) {
         for (thr = 0; thr < nr_threads; thr++) {
             if (data[thr].thr)
@@ -908,13 +913,6 @@ out_clean:
     return ret;
 }
 
-/**
- * enough_swap - Make sure we have enough swap to save the image.
- *
- * Returns TRUE or FALSE after checking the total amount of swap
- * space available from the resume partition.
- */
-
 static int enough_swap(unsigned int nr_pages)
 {
     unsigned int free_swap = count_swap_pages(root_swap, 1);
@@ -927,15 +925,16 @@ static int enough_swap(unsigned int nr_pages)
 }
 
 /**
- * swsusp_write - Write entire image and metadata.
- * @flags: flags to pass to the "boot" kernel in the image header
+ * swsusp_write - Write entire image and metadata.
+ * @flags: flags to pass to the "boot" kernel in the image header
+ *
+ * It is important _NOT_ to umount filesystems at this point. We want them
+ * synced (in case something goes wrong) but we DO not want to mark filesystem
+ * clean: it is not. (And it does not matter, if we resume correctly, we'll mark
+ * system clean, anyway.)
  *
- * It is important _NOT_ to umount filesystems at this point. We want
- * them synced (in case something goes wrong) but we DO not want to mark
- * filesystem clean: it is not. (And it does not matter, if we resume
- * correctly, we'll mark system clean, anyway.)
+ * Return: 0 on success, negative error code on failure.
  */
-
 int swsusp_write(unsigned int flags)
 {
     struct swap_map_handle handle;
@@ -978,8 +977,8 @@ out_finish:
 }
 
 /*
- * The following functions allow us to read data using a swap map
- * in a file-like way.
+ * The following functions allow us to read data using a swap map in a file-like
+ * way.
  */
 
 static void release_swap_reader(struct swap_map_handle *handle)
@@ -1081,12 +1080,6 @@ static int swap_reader_finish(struct swap_map_handle *handle)
     return 0;
 }
 
-/**
- * load_image - load the image using the swap map handle
- * @handle and the snapshot handle @snapshot
- * (assume there are @nr_pages pages to load)
- */
-
 static int load_image(struct swap_map_handle *handle,
                       struct snapshot_handle *snapshot,
                       unsigned int nr_to_read)
@@ -1157,9 +1150,6 @@ struct dec_data {
     unsigned char cmp[CMP_SIZE];      /* compressed buffer */
 };
 
-/*
- * Decompression function that runs in its own thread.
- */
static int decompress_threadfn(void *data)
 {
     struct dec_data *d = data;
@@ -1194,12 +1184,6 @@ static int decompress_threadfn(void *data)
     return 0;
 }
 
-/**
- * load_compressed_image - Load compressed image data and decompress it.
- * @handle: Swap map handle to use for loading data.
- * @snapshot: Image to copy uncompressed data into.
- * @nr_to_read: Number of pages to load.
- */
 static int load_compressed_image(struct swap_map_handle *handle,
                                  struct snapshot_handle *snapshot,
                                  unsigned int nr_to_read)
@@ -1227,7 +1211,7 @@ static int load_compressed_image(struct swap_map_handle *handle,
      * footprint.
      */
     nr_threads = num_online_cpus() - 1;
-    nr_threads = clamp_val(nr_threads, 1, CMP_THREADS);
+    nr_threads = clamp_val(nr_threads, 1, hibernate_compression_threads);
 
     page = vmalloc_array(CMP_MAX_RD_PAGES, sizeof(*page));
     if (!page) {
@@ -1243,7 +1227,7 @@ static int load_compressed_image(struct swap_map_handle *handle,
         goto out_clean;
     }
 
-    crc = kzalloc(sizeof(*crc), GFP_KERNEL);
+    crc = alloc_crc_data(nr_threads);
     if (!crc) {
         pr_err("Failed to allocate crc\n");
         ret = -ENOMEM;
@@ -1510,11 +1494,7 @@ out_clean:
     hib_finish_batch(&hb);
     for (i = 0; i < ring_size; i++)
         free_page((unsigned long)page[i]);
-    if (crc) {
-        if (crc->thr)
-            kthread_stop(crc->thr);
-        kfree(crc);
-    }
+    free_crc_data(crc);
     if (data) {
         for (thr = 0; thr < nr_threads; thr++) {
             if (data[thr].thr)
@@ -1533,8 +1513,9 @@ out_clean:
  * swsusp_read - read the hibernation image.
  * @flags_p: flags passed by the "frozen" kernel in the image header should
  *      be written into this memory location
+ *
+ * Return: 0 on success, negative error code on failure.
  */
-
 int swsusp_read(unsigned int *flags_p)
 {
     int error;
@@ -1571,8 +1552,9 @@ static void *swsusp_holder;
 /**
  * swsusp_check - Open the resume device and check for the swsusp signature.
  * @exclusive: Open the resume device exclusively.
+ *
+ * Return: 0 if a valid image is found, negative error code otherwise.
  */
-
 int swsusp_check(bool exclusive)
 {
     void *holder = exclusive ? &swsusp_holder : NULL;
@@ -1622,7 +1604,6 @@ put:
 /**
  * swsusp_close - close resume device.
  */
-
 void swsusp_close(void)
 {
     if (IS_ERR(hib_resume_bdev_file)) {
@@ -1634,9 +1615,10 @@ void swsusp_close(void)
 }
 
 /**
- * swsusp_unmark - Unmark swsusp signature in the resume device
+ * swsusp_unmark - Unmark swsusp signature in the resume device
+ *
+ * Return: 0 on success, negative error code on failure.
  */
-
 #ifdef CONFIG_SUSPEND
 int swsusp_unmark(void)
 {
@@ -1662,8 +1644,46 @@ int swsusp_unmark(void)
 }
 #endif
 
+static ssize_t hibernate_compression_threads_show(struct kobject *kobj,
+                                                  struct kobj_attribute *attr,
+                                                  char *buf)
+{
+    return sysfs_emit(buf, "%d\n", hibernate_compression_threads);
+}
+
+static ssize_t hibernate_compression_threads_store(struct kobject *kobj,
+                                                   struct kobj_attribute *attr,
+                                                   const char *buf, size_t n)
+{
+    unsigned long val;
+
+    if (kstrtoul(buf, 0, &val))
+        return -EINVAL;
+
+    if (val < 1)
+        return -EINVAL;
+
+    hibernate_compression_threads = val;
+    return n;
+}
+power_attr(hibernate_compression_threads);
+
+static struct attribute *g[] = {
+    &hibernate_compression_threads_attr.attr,
+    NULL,
+};
+
+static const struct attribute_group attr_group = {
+    .attrs = g,
+};
+
 static int __init swsusp_header_init(void)
 {
+    int error;
+
+    error = sysfs_create_group(power_kobj, &attr_group);
+    if (error)
+        return -ENOMEM;
+
     swsusp_header = (struct swsusp_header*) __get_free_page(GFP_KERNEL);
     if (!swsusp_header)
         panic("Could not allocate memory for swsusp_header\n");
@@ -1671,3 +1691,19 @@ static int __init swsusp_header_init(void)
 }
 
 core_initcall(swsusp_header_init);
+
+static int __init hibernate_compression_threads_setup(char *str)
+{
+    int rc = kstrtouint(str, 0, &hibernate_compression_threads);
+
+    if (rc)
+        return rc;
+
+    if (hibernate_compression_threads < 1)
+        hibernate_compression_threads = CMP_THREADS;
+
+    return 1;
+
+}
+
+__setup("hibernate_compression_threads=", hibernate_compression_threads_setup);
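The thread count is now tunable at boot via the hibernate_compression_threads= command-line parameter registered above, or at run time through sysfs; since the attribute group is created on power_kobj, the knob should surface as /sys/power/hibernate_compression_threads. A small sketch of the run-time path (note the effective count is still clamped to num_online_cpus() - 1 in save/load_compressed_image()):

    #include <stdio.h>

    /* Request four compression threads for the next hibernation cycle. */
    int main(void)
    {
        FILE *f = fopen("/sys/power/hibernate_compression_threads", "w");

        if (!f)
            return 1;

        fprintf(f, "4\n");  /* the store handler parses with kstrtoul(..., 0, ...) */
        return fclose(f) ? 1 : 0;
    }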
diff --git a/kernel/power/user.c b/kernel/power/user.c
index 3f9e3efb9f6e..4401cfe26e5c 100644
--- a/kernel/power/user.c
+++ b/kernel/power/user.c
@@ -278,7 +278,9 @@ static long snapshot_ioctl(struct file *filp, unsigned int cmd,
         if (data->frozen)
             break;
 
-        ksys_sync_helper();
+        error = pm_sleep_fs_sync();
+        if (error)
+            break;
 
         error = freeze_processes();
         if (error)
diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
index 1cb7a3d70e65..c174afe1dd17 100644
--- a/kernel/sched/idle.c
+++ b/kernel/sched/idle.c
@@ -131,12 +131,13 @@ void __cpuidle default_idle_call(void)
 }
 
 static int call_cpuidle_s2idle(struct cpuidle_driver *drv,
-                               struct cpuidle_device *dev)
+                               struct cpuidle_device *dev,
+                               u64 max_latency_ns)
 {
     if (current_clr_polling_and_test())
         return -EBUSY;
 
-    return cpuidle_enter_s2idle(drv, dev);
+    return cpuidle_enter_s2idle(drv, dev, max_latency_ns);
 }
 
 static int call_cpuidle(struct cpuidle_driver *drv, struct cpuidle_device *dev,
@@ -205,12 +206,13 @@ static void cpuidle_idle_call(void)
         u64 max_latency_ns;
 
         if (idle_should_enter_s2idle()) {
+            max_latency_ns = cpu_wakeup_latency_qos_limit() *
+                             NSEC_PER_USEC;
 
-            entered_state = call_cpuidle_s2idle(drv, dev);
+            entered_state = call_cpuidle_s2idle(drv, dev,
+                                                max_latency_ns);
             if (entered_state > 0)
                 goto exit_idle;
-
-            max_latency_ns = U64_MAX;
         } else {
             max_latency_ns = dev->forced_idle_latency_limit_ns;
         }
diff --git a/rust/kernel/opp.rs b/rust/kernel/opp.rs
index 2c763fa9276d..f9641c639fff 100644
--- a/rust/kernel/opp.rs
+++ b/rust/kernel/opp.rs
@@ -87,7 +87,7 @@ use core::{marker::PhantomData, ptr};
 
 use macros::vtable;
 
-/// Creates a null-terminated slice of pointers to [`Cstring`]s.
+/// Creates a null-terminated slice of pointers to [`CString`]s.
 fn to_c_str_array(names: &[CString]) -> Result<KVec<*const u8>> {
     // Allocated a null-terminated vector of pointers.
     let mut list = KVec::with_capacity(names.len() + 1, GFP_KERNEL)?;
@@ -443,66 +443,70 @@ impl<T: ConfigOps + Default> Config<T> {
     ///
     /// The returned [`ConfigToken`] will remove the configuration when dropped.
     pub fn set(self, dev: &Device) -> Result<ConfigToken> {
-        let (_clk_list, clk_names) = match &self.clk_names {
-            Some(x) => {
-                let list = to_c_str_array(x)?;
-                let ptr = list.as_ptr();
-                (Some(list), ptr)
-            }
-            None => (None, ptr::null()),
-        };
+        let clk_names = self.clk_names.as_deref().map(to_c_str_array).transpose()?;
+        let regulator_names = self
+            .regulator_names
+            .as_deref()
+            .map(to_c_str_array)
+            .transpose()?;
+
+        let set_config = || {
+            let clk_names = clk_names.as_ref().map_or(ptr::null(), |c| c.as_ptr());
+            let regulator_names = regulator_names.as_ref().map_or(ptr::null(), |c| c.as_ptr());
+
+            let prop_name = self
+                .prop_name
+                .as_ref()
+                .map_or(ptr::null(), |p| p.as_char_ptr());
+
+            let (supported_hw, supported_hw_count) = self
+                .supported_hw
+                .as_ref()
+                .map_or((ptr::null(), 0), |hw| (hw.as_ptr(), hw.len() as u32));
+
+            let (required_dev, required_dev_index) = self
+                .required_dev
+                .as_ref()
+                .map_or((ptr::null_mut(), 0), |(dev, idx)| (dev.as_raw(), *idx));
+
+            let mut config = bindings::dev_pm_opp_config {
+                clk_names,
+                config_clks: if T::HAS_CONFIG_CLKS {
+                    Some(Self::config_clks)
+                } else {
+                    None
+                },
+                prop_name,
+                regulator_names,
+                config_regulators: if T::HAS_CONFIG_REGULATORS {
+                    Some(Self::config_regulators)
+                } else {
+                    None
+                },
+                supported_hw,
+                supported_hw_count,
-        let (_regulator_list, regulator_names) = match &self.regulator_names {
-            Some(x) => {
-                let list = to_c_str_array(x)?;
-                let ptr = list.as_ptr();
-                (Some(list), ptr)
-            }
-            None => (None, ptr::null()),
-        };
+                required_dev,
+                required_dev_index,
+            };
-        let prop_name = self
-            .prop_name
-            .as_ref()
-            .map_or(ptr::null(), |p| p.as_char_ptr());
-
-        let (supported_hw, supported_hw_count) = self
-            .supported_hw
-            .as_ref()
-            .map_or((ptr::null(), 0), |hw| (hw.as_ptr(), hw.len() as u32));
-
-        let (required_dev, required_dev_index) = self
-            .required_dev
-            .as_ref()
-            .map_or((ptr::null_mut(), 0), |(dev, idx)| (dev.as_raw(), *idx));
-
-        let mut config = bindings::dev_pm_opp_config {
-            clk_names,
-            config_clks: if T::HAS_CONFIG_CLKS {
-                Some(Self::config_clks)
-            } else {
-                None
-            },
-            prop_name,
-            regulator_names,
-            config_regulators: if T::HAS_CONFIG_REGULATORS {
-                Some(Self::config_regulators)
-            } else {
-                None
-            },
-            supported_hw,
-            supported_hw_count,
+            // SAFETY: The requirements are satisfied by the existence of [`Device`] and its safety
+            // requirements. The OPP core guarantees not to access fields of [`Config`] after this
+            // call and so we don't need to save a copy of them for future use.
+            let ret = unsafe { bindings::dev_pm_opp_set_config(dev.as_raw(), &mut config) };
 
-            required_dev,
-            required_dev_index,
+            to_result(ret).map(|()| ConfigToken(ret))
         };
 
-        // SAFETY: The requirements are satisfied by the existence of [`Device`] and its safety
-        // requirements. The OPP core guarantees not to access fields of [`Config`] after this call
-        // and so we don't need to save a copy of them for future use.
-        let ret = unsafe { bindings::dev_pm_opp_set_config(dev.as_raw(), &mut config) };
+        // Ensure the closure does not accidentally drop owned data; if violated, the compiler
+        // produces E0525 with e.g.:
+        //
+        // ```
+        // closure is `FnOnce` because it moves the variable `clk_names` out of its environment
+        // ```
+        let _: &dyn Fn() -> _ = &set_config;
 
-        to_result(ret).map(|()| ConfigToken(ret))
+        set_config()
     }
 
     /// Config's clk callback.
diff --git a/tools/power/cpupower/Makefile b/tools/power/cpupower/Makefile
index c43db1c41205..a1df9196dc45 100644
--- a/tools/power/cpupower/Makefile
+++ b/tools/power/cpupower/Makefile
@@ -37,9 +37,7 @@ NLS ?= true
 # cpufreq-bench benchmarking tool
 CPUFREQ_BENCH ?= true
 
-# Do not build libraries, but build the code in statically
-# Libraries are still built, otherwise the Makefile code would
-# be rather ugly.
+# Build the code, including libraries, statically.
 export STATIC ?= false
 
 # Prefix to the directories we're installing to
@@ -207,14 +205,25 @@ $(OUTPUT)lib/%.o: $(LIB_SRC) $(LIB_HEADERS)
     $(ECHO) "  CC      " $@
     $(QUIET) $(CC) $(CFLAGS) -fPIC -o $@ -c lib/$*.c
 
-$(OUTPUT)libcpupower.so.$(LIB_VER): $(LIB_OBJS)
+ifeq ($(strip $(STATIC)),true)
+LIBCPUPOWER := libcpupower.a
+else
+LIBCPUPOWER := libcpupower.so.$(LIB_VER)
+endif
+
+$(OUTPUT)$(LIBCPUPOWER): $(LIB_OBJS)
+ifeq ($(strip $(STATIC)),true)
+    $(ECHO) "  AR      " $@
+    $(QUIET) $(AR) rcs $@ $(LIB_OBJS)
+else
     $(ECHO) "  LD      " $@
     $(QUIET) $(CC) -shared $(CFLAGS) $(LDFLAGS) -o $@ \
         -Wl,-soname,libcpupower.so.$(LIB_MAJ) $(LIB_OBJS)
     @ln -sf $(@F) $(OUTPUT)libcpupower.so
     @ln -sf $(@F) $(OUTPUT)libcpupower.so.$(LIB_MAJ)
+endif
 
-libcpupower: $(OUTPUT)libcpupower.so.$(LIB_VER)
+libcpupower: $(OUTPUT)$(LIBCPUPOWER)
 
 # Let all .o files depend on its .c file and all headers
 # Might be worth to put this into utils/Makefile at some point of time
@@ -224,7 +233,7 @@ $(OUTPUT)%.o: %.c
     $(ECHO) "  CC      " $@
     $(QUIET) $(CC) $(CFLAGS) -I./lib -I ./utils -o $@ -c $*.c
 
-$(OUTPUT)cpupower: $(UTIL_OBJS) $(OUTPUT)libcpupower.so.$(LIB_VER)
+$(OUTPUT)cpupower: $(UTIL_OBJS) $(OUTPUT)$(LIBCPUPOWER)
     $(ECHO) "  CC      " $@
 ifeq ($(strip $(STATIC)),true)
     $(QUIET) $(CC) $(CFLAGS) $(LDFLAGS) $(UTIL_OBJS) -lrt -lpci -L$(OUTPUT) -o $@
@@ -269,7 +278,7 @@ update-po: $(OUTPUT)po/$(PACKAGE).pot
     done;
 endif
 
-compile-bench: $(OUTPUT)libcpupower.so.$(LIB_VER)
+compile-bench: $(OUTPUT)$(LIBCPUPOWER)
     @V=$(V) confdir=$(confdir) $(MAKE) -C bench O=$(OUTPUT)
 
 # we compile into subdirectories. if the target directory is not the
@@ -287,6 +296,7 @@ clean:
     -find $(OUTPUT) \( -not -type d \) -and \( -name '*~' -o -name '*.[oas]' \) -type f -print \
      | xargs rm -f
     -rm -f $(OUTPUT)cpupower
+   -rm -f $(OUTPUT)libcpupower.a
    -rm -f $(OUTPUT)libcpupower.so*
    -rm -rf $(OUTPUT)po/*.gmo
    -rm -rf $(OUTPUT)po/*.pot
@@ -295,7 +305,11 @@ clean:
 
 install-lib: libcpupower
     $(INSTALL) -d $(DESTDIR)${libdir}
+ifeq ($(strip $(STATIC)),true)
+    $(CP) $(OUTPUT)libcpupower.a $(DESTDIR)${libdir}/
+else
     $(CP) $(OUTPUT)libcpupower.so* $(DESTDIR)${libdir}/
+endif
     $(INSTALL) -d $(DESTDIR)${includedir}
     $(INSTALL_DATA) lib/cpufreq.h $(DESTDIR)${includedir}/cpufreq.h
     $(INSTALL_DATA) lib/cpuidle.h $(DESTDIR)${includedir}/cpuidle.h
@@ -336,11 +350,7 @@ install-bench: compile-bench
     @#DESTDIR must be set from outside to survive
     @sbindir=$(sbindir) bindir=$(bindir) docdir=$(docdir) confdir=$(confdir) $(MAKE) -C bench O=$(OUTPUT) install
 
-ifeq ($(strip $(STATIC)),true)
-install: all install-tools install-man $(INSTALL_NLS) $(INSTALL_BENCH)
-else
 install: all install-lib install-tools install-man $(INSTALL_NLS) $(INSTALL_BENCH)
-endif
 
 uninstall:
     - rm -f $(DESTDIR)${libdir}/libcpupower.*
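With these Makefile changes the STATIC switch now covers the library itself rather than only the tool linkage: "make STATIC=true" should produce and link against libcpupower.a, and "make STATIC=true install-lib" would then install the archive in place of the shared objects. Note that install-lib is presumably part of the default install target in both modes now, since the static library is a real installable artifact instead of a build intermediate.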
