|
Patch series "Use kmem_cache for memcg alloc", v3.
(willy tldr: "you've gone from allocating 8 objects per 32KiB to
allocating 13 objects per 32KiB, a 62% improvement in memory consumption"
[1])
The mem_cgroup_alloc function creates the mem_cgroup struct and its
associated structures, including mem_cgroup_per_node. Through detailed
analysis on our test machine (Arm64, 16GB RAM, 6.6 kernel, 1 NUMA node,
memcgv2 with nokmem,nosocket,cgroup_disable=pressure), we can observe the
memory allocation for these structures using the following shell commands:
# Enable tracing
echo 1 > /sys/kernel/tracing/events/kmem/kmalloc/enable
echo 1 > /sys/kernel/tracing/tracing_on
cat /sys/kernel/tracing/trace_pipe | grep kmalloc | grep mem_cgroup
# Trigger allocation if the cgroup subtree does not have memcg enabled
echo +memory > /sys/fs/cgroup/cgroup.subtree_control
Ftrace Output:
# mem_cgroup struct allocation
sh-6312 [000] ..... 58015.698365: kmalloc:
call_site=mem_cgroup_css_alloc+0xd8/0x5b4
ptr=000000003e4c3799 bytes_req=2312 bytes_alloc=4096
gfp_flags=GFP_KERNEL|__GFP_ZERO node=-1 accounted=false
# mem_cgroup_per_node allocation
sh-6312 [000] ..... 58015.698389: kmalloc:
call_site=mem_cgroup_css_alloc+0x1d8/0x5b4
ptr=00000000d798700c bytes_req=2896 bytes_alloc=4096
gfp_flags=GFP_KERNEL|__GFP_ZERO node=0 accounted=false
Key Observations:
1. Both structures use kmalloc with requested sizes between 2KB and 4KB
2. Allocation alignment forces 4KB slab usage due to pre-defined sizes
(64B, 128B,..., 2KB, 4KB, 8KB)
3. Memory waste per memcg instance:
Base struct: 4096 - 2312 = 1784 bytes
Per-node struct: 4096 - 2896 = 1200 bytes
Total waste: 2984 bytes (1-node system)
NUMA scaling: (1200 + 8) * nr_node_ids bytes
So there is a modest but real amount of waste per memcg instance.
This patchset introduces dedicated kmem_cache:
Patch2 - mem_cgroup kmem_cache - memcg_cachep
Patch3 - mem_cgroup_per_node kmem_cache - memcg_pn_cachep
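As an illustration, the dedicated caches could be created along these
lines (a minimal sketch, not the literal patch; the helper name and the
use of struct_size_t() for the trailing nodeinfo[] array are assumptions):
static struct kmem_cache *memcg_cachep;
static struct kmem_cache *memcg_pn_cachep;
static void __init memcg_create_caches(void)
{
	/* struct mem_cgroup ends with a per-node pointer array, so size the
	 * cache for nr_node_ids entries instead of a plain sizeof(). */
	memcg_cachep = kmem_cache_create("mem_cgroup",
			struct_size_t(struct mem_cgroup, nodeinfo, nr_node_ids),
			0, SLAB_HWCACHE_ALIGN, NULL);
	memcg_pn_cachep = kmem_cache_create("mem_cgroup_per_node",
			sizeof(struct mem_cgroup_per_node),
			0, SLAB_HWCACHE_ALIGN, NULL);
}
With caches sized exactly for the structures, the slab allocator can pack
several objects per slab instead of rounding every allocation up to the
4KB kmalloc bucket.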
The benefits of this change can be observed with the following tracing
commands:
# Enable tracing
echo 1 > /sys/kernel/tracing/events/kmem/kmem_cache_alloc/enable
echo 1 > /sys/kernel/tracing/tracing_on
cat /sys/kernel/tracing/trace_pipe | grep kmem_cache_alloc | grep mem_cgroup
# In another terminal:
echo +memory > /sys/fs/cgroup/cgroup.subtree_control
The output might now look like this:
# mem_cgroup struct allocation
sh-9827 [000] ..... 289.513598: kmem_cache_alloc:
call_site=mem_cgroup_css_alloc+0xbc/0x5d4 ptr=00000000695c1806
bytes_req=2312 bytes_alloc=2368 gfp_flags=GFP_KERNEL|__GFP_ZERO node=-1
accounted=false
# mem_cgroup_per_node allocation
sh-9827 [000] ..... 289.513602: kmem_cache_alloc:
call_site=mem_cgroup_css_alloc+0x1b8/0x5d4 ptr=000000002989e63a
bytes_req=2896 bytes_alloc=2944 gfp_flags=GFP_KERNEL|__GFP_ZERO node=0
accounted=false
This indicates that the `mem_cgroup` struct now requests 2312 bytes and is
allocated 2368 bytes, while `mem_cgroup_per_node` requests 2896 bytes and
is allocated 2944 bytes. The slight increase in allocated size is due to
`SLAB_HWCACHE_ALIGN` in the `kmem_cache`.
Without `SLAB_HWCACHE_ALIGN`, the allocation might appear as:
# mem_cgroup struct allocation
sh-9269 [003] ..... 80.396366: kmem_cache_alloc:
call_site=mem_cgroup_css_alloc+0xbc/0x5d4 ptr=000000005b12b475
bytes_req=2312 bytes_alloc=2312 gfp_flags=GFP_KERNEL|__GFP_ZERO node=-1
accounted=false
# mem_cgroup_per_node allocation
sh-9269 [003] ..... 80.396411: kmem_cache_alloc:
call_site=mem_cgroup_css_alloc+0x1b8/0x5d4 ptr=00000000f347adc6
bytes_req=2896 bytes_alloc=2896 gfp_flags=GFP_KERNEL|__GFP_ZERO node=0
accounted=false
While the `bytes_alloc` now matches the `bytes_req`, this patchset
defaults to using `SLAB_HWCACHE_ALIGN` as it is generally considered more
beneficial for performance. Please let me know if there are any issues or
if I've misunderstood anything.
This patchset also moves mem_cgroup_init() ahead of cgroup_init(), because
cgroup_init() allocates root_mem_cgroup while initcalls only run after
cgroup_init(); if the kmem_caches were not prepared by then, every use
would need a NULL check.
This patch (of 3):
When cgroup_init() creates root_mem_cgroup through css_alloc callback,
some critical resources might not be fully initialized, forcing later
operations to perform conditional checks for resource availability.
This patch moves mem_cgroup_init() to address that init order: it is now
invoked before cgroup_init(), so, compared to a subsys_initcall, it can be
used to prepare key resources before root_mem_cgroup is allocated.
Link: https://lkml.kernel.org/r/aAsRCj-niMMTtmK8@casper.infradead.org [1]
Link: https://lkml.kernel.org/r/20250425031935.76411-1-link@vivo.com
Link: https://lkml.kernel.org/r/20250425031935.76411-2-link@vivo.com
Signed-off-by: Huan Yang <link@vivo.com>
Suggested-by: Shakeel Butt <shakeel.butt@linux.dev>
Acked-by: Shakeel Butt <shakeel.butt@linux.dev>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: Francesco Valla <francesco@valla.it>
Cc: guoweikang <guoweikang.kernel@gmail.com>
Cc: Huang Shijie <shijie@os.amperecomputing.com>
Cc: KP Singh <kpsingh@kernel.org>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Muchun Song <muchun.song@linux.dev>
Cc: "Paul E . McKenney" <paulmck@kernel.org>
Cc: Petr Mladek <pmladek@suse.com>
Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Cc: Raul E Rangel <rrangel@chromium.org>
Cc: Roman Gushchin <roman.gushchin@linux.dev>
Cc: "Uladzislau Rezki (Sony)" <urezki@gmail.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Matthew Wilcox <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
|
|
Since the previous commit "mm/huge_memory: Adjust try_to_migrate_one() and
split_huge_pmd_locked()" has simplified the logic by leveraging the folio
verification in page_vma_mapped_walk(), this patch removes the
now-unnecessary passing of folio pointers.
Link: https://lkml.kernel.org/r/20250425103859.825879-3-gavinguo@igalia.com
Link: https://lore.kernel.org/all/98d1d195-7821-4627-b518-83103ade56c0@redhat.com/
Link: https://lore.kernel.org/all/91599a3c-e69e-4d79-bac5-5013c96203d7@redhat.com/
Signed-off-by: Gavin Guo <gavinguo@igalia.com>
Suggested-by: David Hildenbrand <david@redhat.com>
Acked-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Zi Yan <ziy@nvidia.com>
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: Florent Revest <revest@google.com>
Cc: Gavin Shan <gshan@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Miaohe Lin <linmiaohe@huawei.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
|
|
It is possible for a reclaimer to cause demotions of an lruvec belonging
to a cgroup with cpuset.mems set to exclude some nodes. Attempt to apply
this limitation based on the lruvec's memcg and prevent demotion.
Notably, this may still allow demotion of shared libraries or any memory
first instantiated in another cgroup. This means cpusets still cannot
guarantee complete isolation when demotion is enabled, and the docs
have been updated to reflect this.
This is useful for isolating workloads on a multi-tenant system from
certain classes of memory more consistently - with the noted exceptions.
Note on locking:
The cgroup_get_e_css reference protects the css->effective_mems, and calls
of this interface would be subject to the same race conditions associated
with a non-atomic access to cs->effective_mems.
So while this interface cannot make strong guarantees of correctness, it
avoids taking a global lock or rcu_read_lock for performance.
Link: https://lkml.kernel.org/r/20250424202806.52632-3-gourry@gourry.net
Signed-off-by: Gregory Price <gourry@gourry.net>
Suggested-by: Shakeel Butt <shakeel.butt@linux.dev>
Suggested-by: Waiman Long <longman@redhat.com>
Acked-by: Tejun Heo <tj@kernel.org>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Shakeel Butt <shakeel.butt@linux.dev>
Reviewed-by: Waiman Long <longman@redhat.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Michal Koutný <mkoutny@suse.com>
Cc: Muchun Song <muchun.song@linux.dev>
Cc: Roman Gushchin <roman.gushchin@linux.dev>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
|
|
Patch series "vmscan: enforce mems_effective during demotion", v5.
Change reclaim to respect cpuset.mems_effective during demotion when
possible. Presently, reclaim explicitly ignores cpuset.mems_effective
when demoting, which may cause the cpuset settings to be violated.
Implement cpuset_node_allowed() to check the cpuset.mems_effective
associated with the mem_cgroup of the lruvec being scanned. This only
applies to cgroup/cpuset v2, as cpuset exists in a different hierarchy
than mem_cgroup in v1.
This requires renaming the existing cpuset_node_allowed() to be
cpuset_current_node_allowed() - which is more descriptive anyway - to
implement the new cpuset_node_allowed() which takes a target cgroup.
This patch (of 2):
Rename cpuset_node_allowed to reflect that the function checks the current
task's cpuset.mems. This allows us to make a new cpuset_node_allowed
function that checks a target cgroup's cpuset.mems.
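For illustration, the resulting split could look roughly like this (the
signatures are assumptions based on the description above, not the exact
upstream prototypes):
/* Checks current's cpuset.mems; previously named cpuset_node_allowed(). */
bool cpuset_current_node_allowed(int node, gfp_t gfp_mask);
/* New helper added later in the series: checks a target cgroup's
 * effective cpuset.mems instead of the current task's. */
bool cpuset_node_allowed(struct cgroup *cgroup, int nid);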
Link: https://lkml.kernel.org/r/20250424202806.52632-1-gourry@gourry.net
Link: https://lkml.kernel.org/r/20250424202806.52632-2-gourry@gourry.net
Signed-off-by: Gregory Price <gourry@gourry.net>
Acked-by: Waiman Long <longman@redhat.com>
Acked-by: Tejun Heo <tj@kernel.org>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Shakeel Butt <shakeel.butt@linux.dev>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Michal Koutný <mkoutny@suse.com>
Cc: Muchun Song <muchun.song@linux.dev>
Cc: Roman Gushchin <roman.gushchin@linux.dev>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
|
|
Use cl@gentwo.org throughout and remove the old email addresses.
Link: https://lkml.kernel.org/r/8b962f57-4d98-cbb0-cd82-b6ba456733e8@gentwo.org
Signed-off-by: Christoph Lameter <cl@gentwo.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
|
|
Patch series "mm/damon: auto-tune DAMOS for NUMA setups including tiered
memory".
Utilizing DAMON for memory tiering usually requires manual tuning and/or
tedious controls. Let it self-tune hotness and coldness thresholds for
promotion and demotion aiming high utilization of high memory tiers, by
introducing new DAMOS quota goal metrics representing the used and the
free memory ratios of specific NUMA nodes. And introduce a sample DAMON
module that demonstrates how the new feature can be used for memory
tiering use cases.
Backgrounds
===========
A type of tiered memory system exposes the memory tiers as NUMA nodes. A
straightforward pages placement strategy for such systems is placing
access-hot and cold pages on upper and lower tiers, respectively,
pursuing higher utilization of upper tiers. Since access temperature can
be dynamic, periodically finding and migrating hot pages and cold pages to
proper tiers (promoting and demoting) is also required. Linux kernel
provides several features for such dynamic and transparent pages
placement.
Page Faults and LRU
-------------------
One widely known way is using NUMA balancing in tiering mode (a.k.a
NUMAB-2) and reclaim-based demotion features. In the setup, NUMAB-2 finds
hot pages using access check-purpose page faults (a.k.a prot_none) and
promotes those inside each process' context, until there are no more pages
to promote, or the upper tier is filled up and memory pressure happens.
In the latter case, LRU-based reclaim logic wakes up as a response to the
memory pressure and demotes cold pages to lower tiers in asynchronous
(kswapd) and/or synchronous ways (direct reclaim).
DAMON
-----
Yet another available solution is using DAMOS with migrate_hot and
migrate_cold DAMOS actions for promotions and demotions, respectively. To
make it optimum, users need to specify aggressiveness and access
temperature thresholds for promotions and demotions in a good balance that
results in high utilization of upper tiers. The number of parameters is
not small, and optimum parameter values depend on characteristics of the
underlying hardware and the workload. As a result, it often requires
manual, time consuming and repetitive tuning of the DAMOS schemes for
given workloads and systems combinations.
Self-tuned DAMON-based Memory Tiering
=====================================
To solve such manual tuning problems, DAMOS provides aim-oriented
feedback-driven quotas self-tuning. Using the feature, we design a
self-tuned DAMON-based memory tiering for general multi-tier memory
systems.
For each memory tier node, if it has a lower tier, run a DAMOS scheme that
demotes cold pages of the node, auto-tuning the aggressiveness aiming at an
amount of free space on the node. The free space is for keeping the
headroom that avoids significant memory pressure during upper tier memory
usage spike, and promoting hot pages from the lower tier.
For each memory tier node, if it has an upper tier, run a DAMOS scheme
that promotes hot pages of the current node to the upper tier, auto-tuning
the aggressiveness aiming at a high utilization ratio of the upper tier. The
target ratio is to ensure higher tiers are utilized as much as possible.
It should match with the headroom for demotion scheme, but have slight
overlap, to ensure promotion and demotion are not entirely stopped.
The aim-oriented aggressiveness auto-tuning of DAMOS is already available.
Hence, to implement such a tiering solution, only new quota goal
metrics for the utilization and free space ratio of a specific NUMA node
need to be developed.
Discussions
===========
The design raises the discussion points below.
Expected Behaviors
------------------
The system will let the upper tier memory node accommodate as much hot
data as possible. If the total amount of data is less than the top tier
memory's promotion/demotion target utilization, the entire data set will
simply be placed on the top tier. The promotion scheme will do nothing
since there is no data to promote. The demotion scheme will also do
nothing since the free
space ratio of the top tier is higher than the goal.
Only if the amount of data is larger than the top tier's utilization
ratio will the demotion scheme demote cold pages and ensure the headroom
of free space. Since the promotion and demotion schemes for a single node
have a small overlap at their target utilization and free space goals,
promotions and demotions will continue working with a moderate
aggressiveness level. It will keep data placed according to access hotness
under dynamic access patterns, while minimizing the migration overhead.
In any case, each node will keep headroom free space, and upper tiers will
be utilized as much as possible.
Ease of Use
-----------
Users still need to set the target utilization and free space ratio, but
it will be easier to set. We argue 99.7 % utilization and 0.5 % free
space ratios can be good default values. They can be easily adjusted based
on the desired headroom size of a given use case. Users are also still
required to answer the minimum coldness and hotness thresholds. Together
with the monitoring intervals auto-tuning[2], DAMON will always show a
meaningful amount of hot and cold memory. And DAMOS quota's prioritization
mechanism will make good decisions as long as the source information is
that colorful. Hence, users can very naively set the minimum criteria. We
believe that any observed access within the last aggregation interval is
enough as the minimum hot-region criterion, and no observed access within
the interval as the minimum cold-region criterion.
General Tiered Memory Setup Applicability
-----------------------------------------
The design can be applied to any number of tiers having any performance
characteristics, as long as they can be hierarchical. Hence, applying the
system to a different tiered memory system will be straightforward. Note
that this assumes a single CPU NUMA node case. Because today's DAMON
is not aware of which CPU made each access, applying this on systems
having multiple CPU NUMA nodes can be complicated. We are planning to
extend DAMON for the use case, but that's out of the scope of this patch
series.
How To Use
----------
Users can implement the auto-tuned DAMON-based memory tiering using the
DAMON sysfs interface. It can easily be done using a DAMON user-space
tool such as damo. The evaluation results section below shows an example
DAMON user-space tool command for that.
For wider and simpler deployment, having a kernel module that sets up and
runs the DAMOS schemes via the DAMON kernel API can be useful. The module
can enable the memory tiering at boot time via a kernel command line
parameter, or at run time with a single command. This patch series
implements a sample DAMON kernel module that shows how such a module can
be implemented.
Comparison To Page Faults and LRU-based Approaches
--------------------------------------------------
The existing page fault based promotion (NUMAB-2) does hot page
detection and migration in the process context. When there are many pages
to promote, it can block the progress of the application's real work.
DAMOS works in an asynchronous worker thread, so it doesn't block the real
work.
NUMAB-2 doesn't provide a way to control aggressiveness of promotion other
than the maximum amount of pages to promote per given time window. If hot
pages are found, promotions can happen in the upper-bound speed,
regardless of upper tier's memory pressure. If the maximum speed is not
well set for the given workload, it can result in slow promotion or
unnecessary memory pressure. Self-tuned DAMON-based memory tiering
alleviates the problem by adjusting the speed based on current utilization
of the upper tier.
LRU-based demotion can be triggered in both asynchronous (kswapd) and
synchronous (direct reclaim) ways. Other than the way of finding cold
pages, asynchronous LRU-based demotion and DAMON-based demotion have no big
difference. DAMON-based demotion can make a better balancing with
DAMON-based promotion, though. The LRU-based demotion can do better than
DAMON-based demotion when the tier is under significant memory pressure.
It would be wise to use DAMON-based demotion as a proactive and primary
one, but utilizing LRU-based demotions together as a fast backup solution.
Evaluation
==========
In short, under a setup that requires fast and frequent promotions,
self-tuned DAMON-based memory tiering's hot pages promotion improves
performance about 4.42 %. We believe this shows self-tuned DAMON-based
promotion's effectiveness. Meanwhile, NUMAB-2's hot pages promotion
degrades the performance about 7.34 %. We suspect the degradation is
mostly due to NUMAB-2's synchronous nature that can block the
application's progress, which highlights the advantage of DAMON-based
solution's asynchronous nature.
Note that the test was done with the RFC version of this patch series. We
didn't run it again since this patch series got no meaningful changes after
the RFC, while the test takes a pretty long time.
Setup
-----
Hardware. Use a machine equipped with a 250 GiB DRAM memory tier and a 50
GiB CXL memory tier. The tiers are exposed as NUMA nodes 0 and 1,
respectively.
Kernel. Use Linux kernel v6.13, modified as follows. Add all DAMON
patches that were available on the mm tree as of 2025-03-15, and this patch
series. Also modify it to ignore mempolicy() system calls, to avoid bad
effects from the application's optimizations that assume traditional NUMA
systems.
Workload. Use a modified version of the Taobench benchmark[3] that is
available in the DCPerf benchmark suite. It represents an in-memory caching
workload. We set its 'memsize', 'warmup_time', and 'test_time' parameters
to 340 GiB, 2,500 seconds and 1,440 seconds. The parameters are chosen to
ensure the workload uses more than the DRAM memory tier. Its RSS under
these parameters grows to 270 GiB within the warmup time.
It turned out the workload has a very static access pattern. Only about 13
% of the RSS is frequently accessed from the beginning to the end. Hence
promotion shows no meaningful performance difference regardless of
different designs and implementations. We therefore modify the kernel to
periodically demote up to 10 GiB of hot pages and promote up to 10 GiB of
cold pages once per minute. The intention is to simulate periodic access
pattern changes. The hotness and coldness thresholds are very naively set
so that it is more like a random access pattern change rather than a strict
hot/cold pages exchange. This is why we call the workload "modified".
It is implemented as two DAMOS schemes each running on an asynchronous
thread. It can be reproduced with DAMON user-space tool like below.
# ./damo start \
--ops paddr --numa_node 0 --monitoring_intervals 10s 200s 200s \
--damos_action migrate_hot 1 \
--damos_quota_interval 60s --damos_quota_space 10G \
--ops paddr --numa_node 1 --monitoring_intervals 10s 200s 200s \
--damos_action migrate_cold 0 \
--damos_quota_interval 60s --damos_quota_space 10G \
--nr_schemes 1 1 --nr_targets 1 1 --nr_ctxs 1 1
System configurations. Use the following variant system configurations.
- Baseline. No memory tiering features are turned on.
- Numab_tiering. On the baseline, enable NUMAB-2 and reclaim-based
demotion. In detail, the following commands are executed:
echo 2 > /proc/sys/kernel/numa_balancing;
echo 1 > /sys/kernel/mm/numa/demotion_enabled;
echo 7 > /proc/sys/vm/zone_reclaim_mode
- DAMON_tiering. On the baseline, utilize the self-tuned DAMON-based memory
tiering implementation via the DAMON user-space tool. It utilizes two
kernel threads, namely a promotion thread and a demotion thread. The
demotion thread monitors the access pattern of the DRAM node using DAMON
with auto-tuned monitoring intervals aiming at a 4% DAMON-observed access
ratio, and demotes the coldest pages at up to 200 MiB per second, aiming
at 0.5% free space on the DRAM node. The promotion thread monitors the
CXL node using the same intervals auto-tuning, and promotes hot pages in
the same way but aiming for 99.7% utilization of the DRAM node. Because
DAMON provides only best-effort accuracy, add young-page DAMOS filters
that allow only young pages when promoting and reject all young pages
when demoting. It can be reproduced with the DAMON user-space tool like
below.
# ./damo start \
--numa_node 0 --monitoring_intervals_goal 4% 3 5ms 10s \
--damos_action migrate_cold 1 --damos_access_rate 0% 0% \
--damos_apply_interval 1s \
--damos_quota_interval 1s --damos_quota_space 200MB \
--damos_quota_goal node_mem_free_bp 0.5% 0 \
--damos_filter reject young \
--numa_node 1 --monitoring_intervals_goal 4% 3 5ms 10s \
--damos_action migrate_hot 0 --damos_access_rate 5% max \
--damos_apply_interval 1s \
--damos_quota_interval 1s --damos_quota_space 200MB \
--damos_quota_goal node_mem_used_bp 99.7% 0 \
--damos_filter allow young \
--damos_nr_quota_goals 1 1 --damos_nr_filters 1 1 \
--nr_targets 1 1 --nr_schemes 1 1 --nr_ctxs 1 1
Measurement Results
------------------
On each system configuration, run the modified version of Taobench and
collect 'score'. 'score' is a metric that is calculated and provided by
Taobench to represent the performance of the run on the system. To
handle the measurement errors, repeat the measurement five times. The
results are as below.
Config         Score    Stdev   (%)     Normalized
Baseline       1.6165   0.0319  1.9764  1.0000
Numab_tiering  1.4976   0.0452  3.0209  0.9264
DAMON_tiering  1.6881   0.0249  1.4767  1.0443
'Config' column shows the system config of the measurement. 'Score'
column shows the 'score' measurement in average of the five runs on the
system config. 'Stdev' column shows the standard deviation of the five
measurements of the scores. '(%)' column shows the 'Stdev' to 'Score'
ratio in percentage. Finally, 'Normalized' column shows the averaged
score values of the configs that normalized to that of 'Baseline'.
The periodic hot pages demotion and cold pages promotion that were
conducted to simulate dynamic access patterns were started from the
beginning of the workload. It resulted in the DRAM tier utilization
always staying under the watermark, and hence no real demotion happened in
any test run. This means the above results show no difference between
LRU-based and DAMON-based demotions. Only the difference between NUMAB-2
and DAMON-based promotions is represented in the results.
Numab_tiering config degraded the performance by about 7.36 %. We suspect
this happened because NUMAB-2's synchronous promotion was blocking
Taobench's real work progress.
DAMON_tiering config improved the performance by about 4.43 %. We believe
this shows the effectiveness of DAMON-based promotion, which didn't block
Taobench's real work progress due to its asynchronous nature. Also, this
means DAMON's monitoring results are accurate enough to provide a visible
amount of improvement.
Evaluation Limitations
----------------------
As mentioned above, this evaluation shows only a comparison of promotion
mechanisms. DAMON-based tiering is recommended to be used together with
reclaim-based demotion as a faster backup under significant memory
pressure, though.
From some perspective, the modified version of Taobench may seem to distort
the picture too much. It would be better to evaluate with a more realistic
workload, or with more finely tuned micro benchmarks.
Patch Sequence
==============
The first patch (patch 1) implements the two new quota goal metrics in the
core layer and exposes them through the DAMON core kernel API. The second
and third ones (patches 2 and 3) further link them to the DAMON sysfs
interface. The three following patches (patches 4-6) document the new
feature and sysfs file in the design, usage, and ABI documents. The final
one (patch 7) implements a working version of a self-tuned DAMON-based
memory tiering solution in an incomplete but easy to understand form as a
kernel module under the samples/damon/ directory.
References
==========
[1] https://lore.kernel.org/20231112195602.61525-1-sj@kernel.org/
[2] https://lore.kernel.org/20250303221726.484227-1-sj@kernel.org
[3] https://github.com/facebookresearch/DCPerf/blob/main/packages/tao_bench/README.md
This patch (of 7):
Used and free space ratios for specific NUMA nodes can be useful inputs
for NUMA-specific DAMOS schemes' aggressiveness self-tuning feedback loop.
Implement DAMOS quota goal metrics for such self-tuned schemes.
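A sketch of what the new metrics could look like in the core layer (the
enum names mirror the node_mem_used_bp/node_mem_free_bp keywords used in
the damo examples above; the exact enum layout and the helper below are
assumptions, not the literal patch):
enum damos_quota_goal_metric {
	DAMOS_QUOTA_USER_INPUT,
	DAMOS_QUOTA_SOME_MEM_PSI_US,
	DAMOS_QUOTA_NODE_MEM_USED_BP,	/* used memory ratio of a node, in bp */
	DAMOS_QUOTA_NODE_MEM_FREE_BP,	/* free memory ratio of a node, in bp */
	NR_DAMOS_QUOTA_GOAL_METRICS,
};
/* One possible way to read the current value of the node metrics. */
static unsigned long damos_node_mem_bp(int nid, bool used)
{
	struct sysinfo i;
	unsigned long val;
	si_meminfo_node(&i, nid);
	val = used ? i.totalram - i.freeram : i.freeram;
	return val * 10000 / i.totalram;
}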
Link: https://lkml.kernel.org/r/20250420194030.75838-1-sj@kernel.org
Link: https://lkml.kernel.org/r/20250420194030.75838-2-sj@kernel.org
Signed-off-by: SeongJae Park <sj@kernel.org>
Cc: Yunjeong Mun <yunjeong.mun@sk.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
|
|
We need the USB fixes in here as well.
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
We need the iio/hyperv fixes in here as well.
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
Merge measurement-register infrastructure for v6.16. Resolve conflicts
with the establishment of drivers/virt/coco/guest/ for cross-vendor
common TSM functionality.
Address a mis-merge with a fixup from Lukas:
Link: http://lore.kernel.org/20250509134031.70559-1-lukas.bulwahn@redhat.com
|
|
The REPORT ZONES buffer size is currently limited by the HBA's maximum
segment count to ensure the buffer can be mapped. However, the block
layer further limits the number of iovec entries to 1024 when allocating
a bio.
To avoid allocation of buffers too large to be mapped, further restrict
the maximum buffer size to BIO_MAX_INLINE_VECS.
Replace the UIO_MAXIOV symbolic name with the more contextually
appropriate BIO_MAX_INLINE_VECS.
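A rough sketch of the resulting clamp (the helper name and variable names
are illustrative, not the exact sd_zbc.c code):
static size_t sd_zbc_clamp_report_bufsize(struct request_queue *q,
					  size_t bufsize)
{
	unsigned int nr_vecs = bufsize >> PAGE_SHIFT;
	/* the HBA segment limit keeps the buffer mappable ... */
	nr_vecs = min_t(unsigned int, nr_vecs, queue_max_segments(q));
	/* ... and the block layer caps the vecs of a single bio allocation */
	nr_vecs = min_t(unsigned int, nr_vecs, BIO_MAX_INLINE_VECS);
	return min_t(size_t, bufsize, (size_t)nr_vecs << PAGE_SHIFT);
}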
Fixes: b091ac616846 ("sd_zbc: Fix report zones buffer allocation")
Cc: stable@vger.kernel.org
Signed-off-by: Steve Siwinski <ssiwinski@atto.com>
Link: https://lore.kernel.org/r/20250508200122.243129-1-ssiwinski@atto.com
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
|
|
Having a variable length array at the end of scsi_stream_status_header
only causes problems. Remove it and switch sd_is_perm_stream(), which is
the only place that currently uses it, to use the scsi_stream_status
directly following it in the local buf structure.
Besides being a much better data structure design, this also avoids a
-Wflex-array-member-not-at-end warning.
Reported-by: Gustavo A. R. Silva <gustavoars@kernel.org>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20250505060640.3398500-1-hch@lst.de
Reviewed-by: John Garry <john.g.garry@oracle.com>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
|
|
A recent devlink change added validation of an integer value
via NLA_POLICY_VALIDATE_FN, for sparse enums. Handle this
in the policy dump. We can't extract any info out of the callback,
so report only the type.
Fixes: 429ac6211494 ("devlink: define enum for attr types of dynamic attributes")
Reported-by: syzbot+01eb26848144516e7f0a@syzkaller.appspotmail.com
Link: https://patch.msgid.link/20250509212751.1905149-1-kuba@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/linux
Tony Nguyen says:
====================
Prepare for Intel IPU E2000 (GEN3)
This is the first part in introducing RDMA support for idpf.
----------------------------------------------------------------
Tatyana Nikolova says:
To align with review comments, the patch series introducing RDMA
RoCEv2 support for the Intel Infrastructure Processing Unit (IPU)
E2000 line of products is going to be submitted in three parts:
1. Modify ice to use specific and common IIDC definitions and
pass a core device info to irdma.
2. Add RDMA support to idpf and modify idpf to use specific and
common IIDC definitions and pass a core device info to irdma.
3. Add RDMA RoCEv2 support for the E2000 products, referred to as
GEN3 to irdma.
This first part is a 5 patch series based on the original
"iidc/ice/irdma: Update IDC to support multiple consumers" patch
to allow for multiple CORE PCI drivers, using the auxbus.
Patches:
1) Move header file to new name for clarity and replace ice
specific DSCP define with a kernel equivalent one in irdma
2) Unify naming convention
3) Separate header file into common and driver specific info
4) Replace ice specific DSCP define with a kernel equivalent
one in ice
5) Implement core device info struct and update drivers to use it
----------------------------------------------------------------
v1: https://lore.kernel.org/20250505212037.2092288-1-anthony.l.nguyen@intel.com
IWL reviews:
[v5] https://lore.kernel.org/20250416021549.606-1-tatyana.e.nikolova@intel.com
[v4] https://lore.kernel.org/20250225050428.2166-1-tatyana.e.nikolova@intel.com
[v3] https://lore.kernel.org/20250207194931.1569-1-tatyana.e.nikolova@intel.com
[v2] https://lore.kernel.org/20240824031924.421-1-tatyana.e.nikolova@intel.com
[v1] https://lore.kernel.org/20240724233917.704-1-tatyana.e.nikolova@intel.com
* 'for-next' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/linux:
iidc/ice/irdma: Update IDC to support multiple consumers
ice: Replace ice specific DSCP mapping num with a kernel define
iidc/ice/irdma: Break iidc.h into two headers
iidc/ice/irdma: Rename to iidc_* convention
iidc/ice/irdma: Rename IDC header file
====================
Link: https://patch.msgid.link/20250509200712.2911060-1-anthony.l.nguyen@intel.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
Make bpf_dynptr_slice_rdwr, bpf_dynptr_check_off_len and
__bpf_dynptr_write available outside of helpers.c by
adding their prototypes to include/linux/bpf.h.
The bpf_dynptr_check_off_len() implementation is moved to the header and
made explicitly inline, as such a small function should typically be inlined.
These functions are going to be used from bpf_trace.c in the next
patch of this series.
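The moved helper is small enough that its header version could look
roughly like this (a sketch of the shape described above, not necessarily
the exact code):
static inline int bpf_dynptr_check_off_len(const struct bpf_dynptr_kern *ptr,
					   u32 offset, u32 len)
{
	u32 size = __bpf_dynptr_size(ptr);
	if (len > size || offset > size - len)
		return -E2BIG;
	return 0;
}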
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
Link: https://lore.kernel.org/r/20250512205348.191079-2-mykyta.yatsenko5@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
|
Add system cache table and configs for SM8750 SoCs.
Signed-off-by: Melody Olvera <melody.olvera@oss.qualcomm.com>
Reviewed-by: Konrad Dybcio <konrad.dybcio@oss.qualcomm.com>
Link: https://lore.kernel.org/r/20250512-sm8750_llcc_master-v5-3-d78dca6282a5@oss.qualcomm.com
Signed-off-by: Bjorn Andersson <andersson@kernel.org>
|
|
Add the initial nova-drm driver skeleton.
nova-drm is connected to nova-core through the auxiliary bus and
implements the DRM parts of the nova driver stack.
For now, it implements the fundamental DRM abstractions, i.e. creates a
DRM device and registers it, exposing three sample IOCTLs.
DRM_IOCTL_NOVA_GETPARAM
- provides the PCI bar size from the bar that maps the GPU's VRAM
from nova-core
DRM_IOCTL_NOVA_GEM_CREATE
- creates a new dummy DRM GEM object and returns a handle
DRM_IOCTL_NOVA_GEM_INFO
- provides metadata for the DRM GEM object behind a given handle
I implemented a small userspace test suite [1] that utilizes this
interface.
Link: https://gitlab.freedesktop.org/dakr/drm-test [1]
Reviewed-by: Maxime Ripard <mripard@kernel.org>
Acked-by: Dave Airlie <airlied@redhat.com>
Link: https://lore.kernel.org/r/20250424160452.8070-3-dakr@kernel.org
[ Kconfig: depend on DRM=y rather than just DRM. - Danilo ]
Signed-off-by: Danilo Krummrich <dakr@kernel.org>
|
|
Don't rely on CQ sequence numbers for draining, as it has become messy
and needs cq_extra adjustments. Instead, base it on the number of
allocated requests and only allow flushing when all requests are in the
drain list.
As a result, cq_extra is gone, no overhead for its accounting in aux cqe
posting, less bloating as it was inlined before, and it's in general
simpler than trying to track where we should bump it and where it should
be put back like in cases of overflow. Also, it'll likely help with
cleaning and unifying some of the CQ posting helpers.
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/46ece1e34320b046c06fee2498d6b4cd12a700f2.1746788718.git.asml.silence@gmail.com
Link: https://lore.kernel.org/r/24497b04b004bceada496033d3c9d09ff8e81ae9.1746944903.git.asml.silence@gmail.com
[axboe: fold in fix from link2]
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
|
The resctrl file system code needs to know how many region tags
are supported. Parse the ACPI MRRM table and save the max_mem_region
value.
Provide a function for resctrl to collect that value.
Signed-off-by: Tony Luck <tony.luck@intel.com>
Link: https://patch.msgid.link/20250505173819.419271-2-tony.luck@intel.com
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
ACPICA commit 45253be18b3f37d46cd0072aa3f8a0a21a70e0a4
Changes needed by acpisrc to update copyright year when building for
release.
Link: https://github.com/acpica/acpica/commit/45253be1
Signed-off-by: Saket Dumbre <saket.dumbre@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
ACPICA commit 52de9040740c562a35244de19d21c0313964c874
Link: https://github.com/acpica/acpica/commit/52de9040
Signed-off-by: Saket Dumbre <saket.dumbre@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Link: https://patch.msgid.link/2251077.Icojqenx9y@rjwysocki.net
|
|
ACPICA commit 83019b471e1902151e67c588014ba2d09fa099a3
strncpy() is deprecated for NUL-terminated destination buffers[1].
Use memcpy() for length-bounded destinations.
Link: https://github.com/KSPP/linux/issues/90 [1]
Link: https://github.com/acpica/acpica/commit/83019b47
Signed-off-by: Ahmed Salem <x0rw3ll@gmail.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Link: https://patch.msgid.link/1910878.atdPhlSkOF@rjwysocki.net
|
|
ACPICA commit 1035a3d453f7dd49a235a59ee84ebda9d2d2f41b
Add ACPI_NONSTRING for destination char arrays without a terminating NUL
character. This is a follow-up to commit 35ad99236f3a ("ACPICA: Apply
ACPI_NONSTRING") where not all instances received the same treatment, in
preparation for replacing strncpy() calls with memcpy().
Link: https://github.com/acpica/acpica/commit/1035a3d4
Signed-off-by: Ahmed Salem <x0rw3ll@gmail.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Link: https://patch.msgid.link/3833065.MHq7AAxBmi@rjwysocki.net
|
|
ACPICA commit 8b83a8d88dfec59ea147fad35fc6deea8859c58c
ap_get_table_length() checks if tables are valid by
calling ap_is_valid_header(). The latter then calls
ACPI_VALIDATE_RSDP_SIG(Table->Signature).
ap_is_valid_header() accepts struct acpi_table_header as an argument, so
the signature size is always fixed to 4 bytes.
The problem is when the string comparison is between ACPI-defined table
signature and ACPI_SIG_RSDP. Common ACPI table header specifies the
Signature field to be 4 bytes long[1], with the exception of the RSDP
structure whose signature is 8 bytes long "RSD PTR " (including the
trailing blank character)[2]. Calling strncmp(sig, rsdp_sig, 8) would
then result in a sequence overread[3] as sig would be smaller (4 bytes)
than the specified bound (8 bytes).
As a workaround, pass the bound conditionally based on the size of the
signature being passed.
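Illustratively, the conditional bound amounts to something like the
following (the helper name and exact form are for illustration only, not
the actual ACPICA change):
#include <stdbool.h>
#include <string.h>
static bool ap_sig_matches_rsdp(const char *sig, size_t sig_size)
{
	static const char rsdp_sig[] = "RSD PTR ";	/* 8 chars + NUL */
	size_t bound = sig_size < sizeof(rsdp_sig) - 1 ?
		       sig_size : sizeof(rsdp_sig) - 1;
	/* never read past a 4-byte table signature while comparing */
	return strncmp(sig, rsdp_sig, bound) == 0;
}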
Link: https://uefi.org/specs/ACPI/6.5_A/05_ACPI_Software_Programming_Model.html#system-description-table-header [1]
Link: https://uefi.org/specs/ACPI/6.5_A/05_ACPI_Software_Programming_Model.html#root-system-description-pointer-rsdp-structure [2]
Link: https://gcc.gnu.org/onlinedocs/gcc/Warning-Options.html#index-Wstringop-overread [3]
Link: https://github.com/acpica/acpica/commit/8b83a8d8
Signed-off-by: Ahmed Salem <x0rw3ll@gmail.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Link: https://patch.msgid.link/2248233.Mh6RI2rZIc@rjwysocki.net
|
|
RAS2 table
ACPICA commit 2c8a38f747de9d977491a494faf0dfaf799b777b
Rename the structure and field names of the RAS2 table to shorten them and
avoid long lines in the ACPI RAS2 drivers.
1. struct acpi_ras2_shared_memory to struct acpi_ras2_shmem
2. In struct acpi_ras2_shared_memory: fields,
- set_capabilities[16] to set_caps[16]
- num_parameter_blocks to num_param_blks
- set_capabilities_status to set_caps_status
3. struct acpi_ras2_patrol_scrub_parameter to
struct acpi_ras2_patrol_scrub_param
4. In struct acpi_ras2_patrol_scrub_parameter: fields,
- patrol_scrub_command to command
- requested_address_range to req_addr_range
- actual_address_range to actl_addr_range
Link: https://github.com/acpica/acpica/commit/2c8a38f7
Signed-off-by: Shiju Jose <shiju.jose@huawei.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Link: https://patch.msgid.link/1942053.CQOukoFCf9@rjwysocki.net
|
|
ACPICA commit 878823ca20f1987cba0c9d4c1056be0d117ea4fe
In order to distinguish character arrays from C Strings (i.e. strings with
a terminating NUL character), add support for the "nonstring" attribute
provided by GCC. (A better name might be "ACPI_NONCSTRING", but that's
the attribute name, so stick to the existing naming convention.)
GCC 15's -Wunterminated-string-initialization will warn about truncation
of the NUL byte for string initializers unless the destination is marked
with "nonstring". Prepare for applying this attribute to the project.
Link: https://github.com/acpica/acpica/commit/878823ca
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Link: https://patch.msgid.link/1841930.VLH7GnMWUR@rjwysocki.net
Signed-off-by: Kees Cook <kees@kernel.org>
[ rjw: Pick up the tag from Kees ]
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
ACPICA commit dddd9270531d74af523afa68515d8aae6a18bbe0
The ERDT table (and its many subtables) enumerate capabilities
and methods for Intel Resource Director Technology to monitor
and control L3 cache allocation and memory bandwidth by CPU
cores and IO devices.
Structure defined in the Intel Resource Director Technology (RDT)
Architecture specification downloadable from www.intel.com/sdm
Link: https://github.com/acpica/acpica/commit/dddd9270
Signed-off-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Link: https://patch.msgid.link/3296755.5fSG56mABF@rjwysocki.net
|
|
ACPICA commit b8713f71b4023a0396fe61503bbbf5226e5eed1b
Some ERDT subtables have 11 and 24 byte reserved fields.
Add the ACPI_DMT_BUF11 and ACPI_DMT_BUF24 types to describe these reserved
fields in struct acpi_dmtable_info structures.
Shorten the ACPI_SUBTABLE_HEADER_16 name to ACPI_SUBTBL_HDR
Link: https://github.com/acpica/acpica/commit/b8713f71
Signed-off-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Link: https://patch.msgid.link/3643286.iIbC2pHGDl@rjwysocki.net
|
|
ACPICA commit 022e2e4169841f429dbda677a4780830bf4c2177
1) Added source specification to MRRM table comment in actbl2.h
2) Shorten typedef from ACPI_TABLE_MRRM_MEM_RANGE_ENTRY to
struct acpi_mrrm_mem_range_entry
3) Add new typedefs to source/tools/acpisrc/astable.c
4) Fix cut and paste errors in acpi_dm_table_info_mrrm0[] definition
5) Fix indent and source code style errors in actbl2.h
6) The base/length fields in the memory range structure are system
memory addresses, not "MMIO". Update the acpi_dm_table_info_mrrm0[]
strings.
7) Add main/sub table comments to acpi_dm_dump_mrrm() and dt_compile_mrrm()
Link: https://github.com/acpica/acpica/commit/022e2e41
Signed-off-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Link: https://patch.msgid.link/2018955.PYKUYFuaPT@rjwysocki.net
|
|
ACPICA commit 73c32bc89cad64ab19c1231a202361e917e6823c
RISC-V IO Mapping Table (RIMT) is a new static table defined for RISC-V
to communicate IOMMU information to the OS. The specification for RIMT
is available at [1]. Add structure definitions for RIMT.
Link: https://github.com/riscv-non-isa/riscv-acpi-rimt [1]
Link: https://github.com/acpica/acpica/commit/73c32bc8
Signed-off-by: Sunil V L <sunilvl@ventanamicro.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Link: https://patch.msgid.link/10665648.nUPlyArG6x@rjwysocki.net
|
|
ACPICA commit 04fd53b2647b9f6f98cfca551383689cb3b59362
The MRRM table describes association between physical address ranges
and "region numbers".
Structure defined in the Intel Resource Director Technology (RDT)
Architecture specification downloadable from www.intel.com/sdm
Link: https://github.com/acpica/acpica/commit/04fd53b2
Signed-off-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Link: https://patch.msgid.link/3372188.44csPzL39Z@rjwysocki.net
|
|
ACPICA commit 52840d3826bd7e183fcb555e044e190aea0b5021
New MRRM tables can have subtables that are larger than 255 bytes.
Add a new header typedef that uses u16 for Length. Could be
backported to acpi_aspt_header, struct acpi_dmar_header, struct acpi_nfit_header,
struct acpi_prmt_module_header, struct acpi_prmt_module_info. Will be used for
upcoming ERDT table.
MRRM table has a 26-byte reserved section in header. Add ACPI_DMT_BUF26
to describe this in struct acpi_dmtable_info.
Link: https://github.com/acpica/acpica/commit/52840d38
Signed-off-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Link: https://patch.msgid.link/3005638.e9J7NaK4W3@rjwysocki.net
|
|
ACPICA commit af51f730e0bccf789686cea68e116d5f0b27aacb
Added in revision 3.4 of the VT-d spec. To support SIDP, part of the
previously reserved field in the device scope structure was used to
create a 1-byte "Flags" field.
Link: https://github.com/acpica/acpica/commit/af51f730
Signed-off-by: Alexey Neyman <aneyman@google.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Link: https://patch.msgid.link/2239745.irdbgypaU6@rjwysocki.net
|
|
We need the driver core fix in here as well for testing
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
When restarting a CPU powered by the PCA9450 power management IC, it
is beneficial to use the PCA9450 to power cycle the CPU and all its
connected peripherals to start up in a known state. The PCA9450 features
a cold start procedure initiated by an I2C command.
Add a restart handler so that the PCA9450 is used to restart the CPU.
The restart handler sends command 0x14 to the SW_RST register,
initiating a cold reset (power-cycle all regulators except LDO1, LDO2
and CLK_32K_OUT).
As the PCA9450 is a PMIC specific to the i.MX8M family of CPUs, the restart
handler priority is set just slightly higher than imx2_wdt and the PSCI
restart handler. This makes sure this restart handler takes precedence.
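A sketch of such a handler (the register name, priority constant, and
driver-data plumbing here are assumptions for illustration, not the exact
driver code):
static int pca9450_restart_handler(struct sys_off_data *data)
{
	struct regmap *regmap = data->cb_data;
	/* 0x14: cold reset, power-cycles everything but LDO1, LDO2, CLK_32K_OUT */
	regmap_write(regmap, PCA9450_REG_SWRST, 0x14);
	return NOTIFY_DONE;
}
static int pca9450_register_restart(struct device *dev, struct regmap *regmap)
{
	/* priority above imx2_wdt/PSCI; the exact value is an assumption */
	return devm_register_sys_off_handler(dev, SYS_OFF_MODE_RESTART,
					     SYS_OFF_PRIO_HIGH,
					     pca9450_restart_handler, regmap);
}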
Signed-off-by: Paul Geurts <paul.geurts@prodrive-technologies.com>
Link: https://patch.msgid.link/20250505115936.1946891-1-paul.geurts@prodrive-technologies.com
Signed-off-by: Mark Brown <broonie@kernel.org>
|
|
snd_soc_unregister_component_by_driver()
We have the 2 functions below, but they are very similar
(A) snd_soc_unregister_component_by_driver()
(B) snd_soc_unregister_component()
(A)	void snd_soc_unregister_component_by_driver(...)
	{
		...
(a)		mutex_lock(&client_mutex);
(X)		component = snd_soc_lookup_component_nolocked(dev, component_driver->name);
		if (!component)
			goto out;
(b)		snd_soc_del_component_unlocked(component);
out:
(c)		mutex_unlock(&client_mutex);
	}
(B)	void snd_soc_unregister_component(...)
	{
(a)		mutex_lock(&client_mutex);
		while (1) {
(X)			struct snd_soc_component *component = snd_soc_lookup_component_nolocked(dev, NULL);
(b)			if (!component)
				break;
			snd_soc_del_component_unlocked(component);
		}
(c)		mutex_unlock(&client_mutex);
	}
Both take the lock (a), look up the component and remove it (b), and
unlock (c). The big difference is whether the driver name is used for the
lookup (X).
Merge these into snd_soc_unregister_component_by_driver() (A), and let
snd_soc_unregister_component() (B) become a macro.
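One plausible shape of the result, assuming the merged function treats a
NULL driver as "remove every component of this device":
#define snd_soc_unregister_component(dev) \
	snd_soc_unregister_component_by_driver(dev, NULL)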
Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
Link: https://patch.msgid.link/87h61qy2vn.wl-kuninori.morimoto.gx@renesas.com
Signed-off-by: Mark Brown <broonie@kernel.org>
|
|
https://git.kernel.org/pub/scm/linux/kernel/git/soc/soc into soc/drivers
These are updates from Marek Behún for the cznic platform drivers:
This series adds support for generating ECDSA signatures with hardware
stored private key on Turris Omnia and Turris MOX.
This ability is exposed via the keyctl() syscall.
* 'cznic/platform' of https://git.kernel.org/pub/scm/linux/kernel/git/soc/soc:
platform: cznic: use ffs() instead of __bf_shf()
firmware: turris-mox-rwtm: fix building without CONFIG_KEYS
platform: cznic: fix function parameter names
firmware: turris-mox-rwtm: Add support for ECDSA signatures with HW private key
firmware: turris-mox-rwtm: Drop ECDSA signatures via debugfs
platform: cznic: turris-omnia-mcu: Add support for digital message signing with HW private key
platform: cznic: Add keyctl helpers for Turris platform
platform: cznic: turris-omnia-mcu: Refactor requesting MCU interrupt
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
|
|
Handle soc servicing events which require the rdma auxiliary device resources to
be cleaned up during a suspend, and re-initialized during a resume.
Signed-off-by: Shiraz Saleem <shirazsaleem@microsoft.com>
Signed-off-by: Konstantin Taranov <kotaranov@microsoft.com>
Link: https://patch.msgid.link/1746633545-17653-5-git-send-email-kotaranov@linux.microsoft.com
Signed-off-by: Leon Romanovsky <leon@kernel.org>
|
|
Initialize gdma device for rdma inside mana module.
For each gdma device, initialize an auxiliary ib device.
Signed-off-by: Konstantin Taranov <kotaranov@microsoft.com>
Link: https://patch.msgid.link/1746633545-17653-2-git-send-email-kotaranov@linux.microsoft.com
Reviewed-by: Long Li <longli@microsoft.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
|
|
Some batteries can signal when an internal fuse was blown. In such a
case POWER_SUPPLY_HEALTH_DEAD is too vague for userspace applications
to perform meaningful diagnostics.
Additionally some batteries can also signal when some of their
internal cells are imbalanced. In such a case returning
POWER_SUPPLY_HEALTH_UNSPEC_FAILURE is again too vague for userspace
applications to perform meaningful diagnostics.
Add new health status values for both cases.
Signed-off-by: Armin Wolf <W_Armin@gmx.de>
Reviewed-by: Sebastian Reichel <sebastian.reichel@collabora.com>
Link: https://lore.kernel.org/r/20250429003606.303870-1-W_Armin@gmx.de
Reviewed-by: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
|
|
Reuse newly added DMA API to cache IOVA and only link/unlink pages
in fast path for UMEM ODP flow.
Tested-by: Jens Axboe <axboe@kernel.dk>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
|
|
As a preparation to remove dma_list, store access mask in PFN pointer
and not in dma_addr_t.
Tested-by: Jens Axboe <axboe@kernel.dk>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
|
|
HMM callers use a PFN list to populate the range while calling
hmm_range_fault(); the conversion from PFN to DMA address
is done by the callers with the help of another DMA list. However,
it is wasteful on any modern platform, and by doing the right
logic that DMA list can be avoided.
Provide generic logic to manage these lists and give an interface
to map/unmap PFNs to DMA addresses, without requiring the callers
to be experts in the DMA core API.
Tested-by: Jens Axboe <axboe@kernel.dk>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
|
|
Introduce a new sticky flag (HMM_PFN_DMA_MAPPED), which isn't overwritten
by HMM range fault. Such a flag allows users to tag specific PFNs with
information about whether the PFN was already DMA mapped.
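A sketch of how a caller could use the sticky flag to skip already-mapped
entries across repeated hmm_range_fault() rounds (the loop shape and
helper name are illustrative):
static void dma_map_new_pfns(unsigned long *pfns, unsigned long npfns)
{
	unsigned long i;
	for (i = 0; i < npfns; i++) {
		if (!(pfns[i] & HMM_PFN_VALID))
			continue;
		if (pfns[i] & HMM_PFN_DMA_MAPPED)
			continue;	/* sticky: survived re-fault, already mapped */
		/* ... dma-map the page behind hmm_pfn_to_page(pfns[i]) ... */
		pfns[i] |= HMM_PFN_DMA_MAPPED;
	}
}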
Tested-by: Jens Axboe <axboe@kernel.dk>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
|
|
Currently the only efficient way to map a complex memory description through
the DMA API is by using the scatter list APIs. The SG APIs are unique in that
they efficiently combine the two fundamental operations of sizing and allocating
a large IOVA window from the IOMMU and processing all the per-address
swiotlb/flushing/p2p/map details.
This uniqueness has been a long standing pain point as the scatter list API
is mandatory, but expensive to use. It prevents any kind of optimization or
feature improvement (such as avoiding struct page for P2P) due to the
impossibility of improving the scatter list.
Several approaches have been explored to expand the DMA API with additional
scatterlist-like structures (BIO, rlist); instead, this splits up the DMA
API to allow callers to bring their own data structure.
The API is split up into parts:
- Allocate IOVA space:
To do any pre-allocation required. This is done based on the caller
supplying some details about how much IOMMU address space it would need
in worst case.
- Map and unmap relevant structures to pre-allocated IOVA space:
Perform the actual mapping into the pre-allocated IOVA. This is very
similar to dma_map_page().
Thanks
Signed-off-by: Leon Romanovsky <leon@kernel.org>
|
|
Currently the full set of crypto self-tests requires
CONFIG_CRYPTO_MANAGER_EXTRA_TESTS=y. This is problematic in two ways.
First, developers regularly overlook this option. Second, the
description of the tests as "extra" sometimes gives the impression that
it is not required that all algorithms pass these tests.
Given that the main use case for the crypto self-tests is for
developers, make enabling CONFIG_CRYPTO_SELFTESTS=y just enable the full
set of crypto self-tests by default.
The slow tests can still be disabled by adding the command-line
parameter cryptomgr.noextratests=1, soon to be renamed to
cryptomgr.noslowtests=1. The only known use case for doing this is for
people trying to use the crypto self-tests to satisfy the FIPS 140-3
pre-operational self-testing requirements when the kernel is being
validated as a FIPS 140-3 cryptographic module.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
crypto_get_default_null_skcipher() and
crypto_put_default_null_skcipher() are no longer used, so remove them.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
For copying data between two scatterlists, just use memcpy_sglist()
instead of the so-called "null skcipher". This is much simpler.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Add explicit array bounds to the function prototypes for the parameters
that didn't already get handled by the conversion to use chacha_state:
- chacha_block_*():
Change 'u8 *out' or 'u8 *stream' to u8 out[CHACHA_BLOCK_SIZE].
- hchacha_block_*():
Change 'u32 *out' or 'u32 *stream' to u32 out[HCHACHA_OUT_WORDS].
- chacha_init():
Change 'const u32 *key' to 'const u32 key[CHACHA_KEY_WORDS]'.
Change 'const u8 *iv' to 'const u8 iv[CHACHA_IV_SIZE]'.
No functional changes. This just makes it clear when fixed-size arrays
are expected.
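Concretely, the touched prototypes end up looking roughly like below (one
plausible rendering of the list above, not necessarily the exact headers):
void chacha_block_generic(struct chacha_state *state,
			  u8 out[CHACHA_BLOCK_SIZE], int nrounds);
void hchacha_block_generic(const struct chacha_state *state,
			   u32 out[HCHACHA_OUT_WORDS], int nrounds);
void chacha_init(struct chacha_state *state,
		 const u32 key[CHACHA_KEY_WORDS],
		 const u8 iv[CHACHA_IV_SIZE]);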
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Now that the ChaCha state matrix is strongly-typed, add a helper
function chacha_zeroize_state() which zeroizes it. Then convert all
applicable callers to use it instead of direct memzero_explicit. No
functional changes.
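The helper is essentially a thin, self-documenting wrapper (a sketch
consistent with the description above):
static inline void chacha_zeroize_state(struct chacha_state *state)
{
	memzero_explicit(state, sizeof(*state));
}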
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The ChaCha state matrix is 16 32-bit words. Currently it is represented
in the code as a raw u32 array, or even just a pointer to u32. This
weak typing is error-prone. Instead, introduce struct chacha_state:
struct chacha_state {
u32 x[16];
};
Convert all ChaCha and HChaCha functions to use struct chacha_state.
No functional changes.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Kent Overstreet <kent.overstreet@linux.dev>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|