|
FW attestation was disabled on MP0_14_0_{2/3}.
V2:
Move check into is_fw_attestation_support func. (Frank)
Remove DRM_WARN log info. (Alex)
Fix format. (Christian)
Signed-off-by: Gui Chengming <Jack.Gui@amd.com>
Reviewed-by: Frank.Min <Frank.Min@amd.com>
Reviewed-by: Christian König <Christian.Koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
(cherry picked from commit 62952a38d9bcf357d5ffc97615c48b12c9cd627c)
Cc: stable@vger.kernel.org # 6.12.x
|
|
That is needed to enforce isolation between contexts.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
(cherry picked from commit def59436fb0d3ca0f211d14873d0273d69ebb405)
Cc: stable@vger.kernel.org
|
|
Disable gfxoff with the compute workload on gfx12. This is a
workaround for the opencl test failure.
Signed-off-by: Kenneth Feng <kenneth.feng@amd.com>
Acked-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
(cherry picked from commit 2affe2bbc997b3920045c2c434e480c81a5f9707)
Cc: stable@vger.kernel.org # 6.12.x
|
|
Jann reports he can trigger a UAF if the target ring unregisters
buffers before the clone operation is fully done, and additionally
an issue related to node allocation failures. Both of those stem
from the fact that the cleanup logic puts the buffers manually,
rather than just relying on io_rsrc_data_free() doing it. Hence kill
the manual cleanup code and just let io_rsrc_data_free() handle it;
it'll put the nodes appropriately.
Reported-by: Jann Horn <jannh@google.com>
Fixes: 3597f2786b68 ("io_uring/rsrc: unify file and buffer resource tables")
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
|
This commit addresses a circular locking dependency issue within the GFX
isolation mechanism. The problem was identified by a warning indicating
a potential deadlock due to inconsistent lock acquisition order.
- The `amdgpu_gfx_enforce_isolation_ring_begin_use` and
`amdgpu_gfx_enforce_isolation_ring_end_use` functions previously
acquired `enforce_isolation_mutex` and called `amdgpu_gfx_kfd_sch_ctrl`,
leading to potential deadlocks, i.e., if `amdgpu_gfx_kfd_sch_ctrl` is
called while `enforce_isolation_mutex` is held, and
`amdgpu_gfx_enforce_isolation_handler` is called while `kfd_sch_mutex` is
held, it can create a circular dependency.
By ensuring consistent lock usage, this fix resolves the issue:
[ 606.297333] ======================================================
[ 606.297343] WARNING: possible circular locking dependency detected
[ 606.297353] 6.10.0-amd-mlkd-610-311224-lof #19 Tainted: G OE
[ 606.297365] ------------------------------------------------------
[ 606.297375] kworker/u96:3/3825 is trying to acquire lock:
[ 606.297385] ffff9aa64e431cb8 ((work_completion)(&(&adev->gfx.enforce_isolation[i].work)->work)){+.+.}-{0:0}, at: __flush_work+0x232/0x610
[ 606.297413]
but task is already holding lock:
[ 606.297423] ffff9aa64e432338 (&adev->gfx.kfd_sch_mutex){+.+.}-{3:3}, at: amdgpu_gfx_kfd_sch_ctrl+0x51/0x4d0 [amdgpu]
[ 606.297725]
which lock already depends on the new lock.
[ 606.297738]
the existing dependency chain (in reverse order) is:
[ 606.297749]
-> #2 (&adev->gfx.kfd_sch_mutex){+.+.}-{3:3}:
[ 606.297765] __mutex_lock+0x85/0x930
[ 606.297776] mutex_lock_nested+0x1b/0x30
[ 606.297786] amdgpu_gfx_kfd_sch_ctrl+0x51/0x4d0 [amdgpu]
[ 606.298007] amdgpu_gfx_enforce_isolation_ring_begin_use+0x2a4/0x5d0 [amdgpu]
[ 606.298225] amdgpu_ring_alloc+0x48/0x70 [amdgpu]
[ 606.298412] amdgpu_ib_schedule+0x176/0x8a0 [amdgpu]
[ 606.298603] amdgpu_job_run+0xac/0x1e0 [amdgpu]
[ 606.298866] drm_sched_run_job_work+0x24f/0x430 [gpu_sched]
[ 606.298880] process_one_work+0x21e/0x680
[ 606.298890] worker_thread+0x190/0x350
[ 606.298899] kthread+0xe7/0x120
[ 606.298908] ret_from_fork+0x3c/0x60
[ 606.298919] ret_from_fork_asm+0x1a/0x30
[ 606.298929]
-> #1 (&adev->enforce_isolation_mutex){+.+.}-{3:3}:
[ 606.298947] __mutex_lock+0x85/0x930
[ 606.298956] mutex_lock_nested+0x1b/0x30
[ 606.298966] amdgpu_gfx_enforce_isolation_handler+0x87/0x370 [amdgpu]
[ 606.299190] process_one_work+0x21e/0x680
[ 606.299199] worker_thread+0x190/0x350
[ 606.299208] kthread+0xe7/0x120
[ 606.299217] ret_from_fork+0x3c/0x60
[ 606.299227] ret_from_fork_asm+0x1a/0x30
[ 606.299236]
-> #0 ((work_completion)(&(&adev->gfx.enforce_isolation[i].work)->work)){+.+.}-{0:0}:
[ 606.299257] __lock_acquire+0x16f9/0x2810
[ 606.299267] lock_acquire+0xd1/0x300
[ 606.299276] __flush_work+0x250/0x610
[ 606.299286] cancel_delayed_work_sync+0x71/0x80
[ 606.299296] amdgpu_gfx_kfd_sch_ctrl+0x287/0x4d0 [amdgpu]
[ 606.299509] amdgpu_gfx_enforce_isolation_ring_begin_use+0x2a4/0x5d0 [amdgpu]
[ 606.299723] amdgpu_ring_alloc+0x48/0x70 [amdgpu]
[ 606.299909] amdgpu_ib_schedule+0x176/0x8a0 [amdgpu]
[ 606.300101] amdgpu_job_run+0xac/0x1e0 [amdgpu]
[ 606.300355] drm_sched_run_job_work+0x24f/0x430 [gpu_sched]
[ 606.300369] process_one_work+0x21e/0x680
[ 606.300378] worker_thread+0x190/0x350
[ 606.300387] kthread+0xe7/0x120
[ 606.300396] ret_from_fork+0x3c/0x60
[ 606.300406] ret_from_fork_asm+0x1a/0x30
[ 606.300416]
other info that might help us debug this:
[ 606.300428] Chain exists of:
(work_completion)(&(&adev->gfx.enforce_isolation[i].work)->work) --> &adev->enforce_isolation_mutex --> &adev->gfx.kfd_sch_mutex
[ 606.300458] Possible unsafe locking scenario:
[ 606.300468] CPU0 CPU1
[ 606.300476] ---- ----
[ 606.300484] lock(&adev->gfx.kfd_sch_mutex);
[ 606.300494] lock(&adev->enforce_isolation_mutex);
[ 606.300508] lock(&adev->gfx.kfd_sch_mutex);
[ 606.300521] lock((work_completion)(&(&adev->gfx.enforce_isolation[i].work)->work));
[ 606.300536]
*** DEADLOCK ***
[ 606.300546] 5 locks held by kworker/u96:3/3825:
[ 606.300555] #0: ffff9aa5aa1f5d58 ((wq_completion)comp_1.1.0){+.+.}-{0:0}, at: process_one_work+0x3f5/0x680
[ 606.300577] #1: ffffaa53c3c97e40 ((work_completion)(&sched->work_run_job)){+.+.}-{0:0}, at: process_one_work+0x1d6/0x680
[ 606.300600] #2: ffff9aa64e463c98 (&adev->enforce_isolation_mutex){+.+.}-{3:3}, at: amdgpu_gfx_enforce_isolation_ring_begin_use+0x1c3/0x5d0 [amdgpu]
[ 606.300837] #3: ffff9aa64e432338 (&adev->gfx.kfd_sch_mutex){+.+.}-{3:3}, at: amdgpu_gfx_kfd_sch_ctrl+0x51/0x4d0 [amdgpu]
[ 606.301062] #4: ffffffff8c1a5660 (rcu_read_lock){....}-{1:2}, at: __flush_work+0x70/0x610
[ 606.301083]
stack backtrace:
[ 606.301092] CPU: 14 PID: 3825 Comm: kworker/u96:3 Tainted: G OE 6.10.0-amd-mlkd-610-311224-lof #19
[ 606.301109] Hardware name: Gigabyte Technology Co., Ltd. X570S GAMING X/X570S GAMING X, BIOS F7 03/22/2024
[ 606.301124] Workqueue: comp_1.1.0 drm_sched_run_job_work [gpu_sched]
[ 606.301140] Call Trace:
[ 606.301146] <TASK>
[ 606.301154] dump_stack_lvl+0x9b/0xf0
[ 606.301166] dump_stack+0x10/0x20
[ 606.301175] print_circular_bug+0x26c/0x340
[ 606.301187] check_noncircular+0x157/0x170
[ 606.301197] ? register_lock_class+0x48/0x490
[ 606.301213] __lock_acquire+0x16f9/0x2810
[ 606.301230] lock_acquire+0xd1/0x300
[ 606.301239] ? __flush_work+0x232/0x610
[ 606.301250] ? srso_alias_return_thunk+0x5/0xfbef5
[ 606.301261] ? mark_held_locks+0x54/0x90
[ 606.301274] ? __flush_work+0x232/0x610
[ 606.301284] __flush_work+0x250/0x610
[ 606.301293] ? __flush_work+0x232/0x610
[ 606.301305] ? __pfx_wq_barrier_func+0x10/0x10
[ 606.301318] ? mark_held_locks+0x54/0x90
[ 606.301331] ? srso_alias_return_thunk+0x5/0xfbef5
[ 606.301345] cancel_delayed_work_sync+0x71/0x80
[ 606.301356] amdgpu_gfx_kfd_sch_ctrl+0x287/0x4d0 [amdgpu]
[ 606.301661] amdgpu_gfx_enforce_isolation_ring_begin_use+0x2a4/0x5d0 [amdgpu]
[ 606.302050] ? srso_alias_return_thunk+0x5/0xfbef5
[ 606.302069] amdgpu_ring_alloc+0x48/0x70 [amdgpu]
[ 606.302452] amdgpu_ib_schedule+0x176/0x8a0 [amdgpu]
[ 606.302862] ? drm_sched_entity_error+0x82/0x190 [gpu_sched]
[ 606.302890] amdgpu_job_run+0xac/0x1e0 [amdgpu]
[ 606.303366] drm_sched_run_job_work+0x24f/0x430 [gpu_sched]
[ 606.303388] process_one_work+0x21e/0x680
[ 606.303409] worker_thread+0x190/0x350
[ 606.303424] ? __pfx_worker_thread+0x10/0x10
[ 606.303437] kthread+0xe7/0x120
[ 606.303449] ? __pfx_kthread+0x10/0x10
[ 606.303463] ret_from_fork+0x3c/0x60
[ 606.303476] ? __pfx_kthread+0x10/0x10
[ 606.303489] ret_from_fork_asm+0x1a/0x30
[ 606.303512] </TASK>
v2: Refactor lock handling to resolve circular dependency (Alex)
- Introduced a `sched_work` flag to defer the call to
`amdgpu_gfx_kfd_sch_ctrl` until after releasing
`enforce_isolation_mutex`.
- This change ensures that `amdgpu_gfx_kfd_sch_ctrl` is called outside
the critical section, preventing the circular dependency and deadlock.
- The `sched_work` flag is set within the mutex-protected section if
conditions are met, and the actual function call is made afterward.
- This approach ensures consistent lock acquisition order; a sketch of
the pattern follows below.
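As an illustration of the v2 pattern (a minimal sketch, assuming the
decision can be recorded under enforce_isolation_mutex and acted upon
afterwards; the condition and the `idx` parameter are placeholders and
this is not the actual amdgpu code):

  bool sched_work = false;

  mutex_lock(&adev->enforce_isolation_mutex);
  if (adev->enforce_isolation[idx])     /* placeholder condition */
          sched_work = true;            /* only record the decision */
  mutex_unlock(&adev->enforce_isolation_mutex);

  /* kfd_sch_mutex (and the delayed-work flush behind it) is now taken
   * without enforce_isolation_mutex held, breaking the lock cycle */
  if (sched_work)
          amdgpu_gfx_kfd_sch_ctrl(adev, idx, false);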
Fixes: afefd6f24502 ("drm/amdgpu: Implement Enforce Isolation Handler for KGD/KFD serialization")
Cc: Christian König <christian.koenig@amd.com>
Cc: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Srinivasan Shanmugam <srinivasan.shanmugam@amd.com>
Suggested-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
(cherry picked from commit 0b6b2dd38336d5fd49214f0e4e6495e658e3ab44)
Cc: stable@vger.kernel.org
|
|
The yt2_1380_fc_serdev_probe() function calls devm_serdev_device_open()
before setting the client ops via serdev_device_set_client_ops(). This
ordering can trigger a NULL pointer dereference in the serdev controller's
receive_buf handler, as it assumes serdev->ops is valid when
SERPORT_ACTIVE is set.
This is similar to the issue fixed in commit 5e700b384ec1
("platform/chrome: cros_ec_uart: properly fix race condition") where
devm_serdev_device_open() was called before fully initializing the
device.
Fix the race by ensuring client ops are set before enabling the port via
devm_serdev_device_open().
Note, serdev_device_set_baudrate() and serdev_device_set_flow_control()
calls should be after the devm_serdev_device_open() call.
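A minimal sketch of the corrected ordering (illustrative only; the ops
table name is made up, error handling is simplified, and the baud rate
is a placeholder rather than the driver's actual value):

  serdev_device_set_client_ops(serdev, &yt2_1380_fc_serdev_ops);

  ret = devm_serdev_device_open(dev, serdev);
  if (ret)
          return ret;

  /* port configuration only after the port has been opened */
  serdev_device_set_baudrate(serdev, baudrate);
  serdev_device_set_flow_control(serdev, false);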
Fixes: b2ed33e8d486 ("platform/x86: Add lenovo-yoga-tab2-pro-1380-fastcharger driver")
Signed-off-by: Chenyuan Yang <chenyuan0y@gmail.com>
Reviewed-by: Hans de Goede <hdegoede@redhat.com>
Link: https://lore.kernel.org/r/20250111180951.2277757-1-chenyuan0y@gmail.com
Reviewed-by: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
|
|
The dell_uart_bl_serdev_probe() function calls devm_serdev_device_open()
before setting the client ops via serdev_device_set_client_ops(). This
ordering can trigger a NULL pointer dereference in the serdev controller's
receive_buf handler, as it assumes serdev->ops is valid when
SERPORT_ACTIVE is set.
This is similar to the issue fixed in commit 5e700b384ec1
("platform/chrome: cros_ec_uart: properly fix race condition") where
devm_serdev_device_open() was called before fully initializing the
device.
Fix the race by ensuring client ops are set before enabling the port via
devm_serdev_device_open().
Note, serdev_device_set_baudrate() and serdev_device_set_flow_control()
calls should be after the devm_serdev_device_open() call.
Fixes: 484bae9e4d6a ("platform/x86: Add new Dell UART backlight driver")
Signed-off-by: Chenyuan Yang <chenyuan0y@gmail.com>
Reviewed-by: Hans de Goede <hdegoede@redhat.com>
Link: https://lore.kernel.org/r/20250111180118.2274516-1-chenyuan0y@gmail.com
Reviewed-by: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
|
|
We've got a problem with bch_stripe that is going to take an on-disk
format rev to fix - we can't access the block sector counts if the
checksum type is unknown.
Document it for now; there are a few other things to fix as well.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
|
We were incorrectly checking if there'd been an io error.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
|
The transaction is going to abort, so there will be no cycle involving
this transaction anymore.
Signed-off-by: Alan Huang <mmpgouride@gmail.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
|
The function graph tracer now calculates the calltime internally and for
each instance. If there are two instances running the function graph
tracer and tracing the same functions, the reported durations of those
functions may be slightly different:
# trace-cmd record -B foo -p function_graph -B bar -p function_graph sleep 5
# trace-cmd report
[..]
bar: sleep-981 [000] ...1. 1101.109027: funcgraph_entry: 0.764 us | mutex_unlock(); (ret=0xffff8abcc256c300)
foo: sleep-981 [000] ...1. 1101.109028: funcgraph_entry: 0.748 us | mutex_unlock(); (ret=0xffff8abcc256c300)
bar: sleep-981 [000] ..... 1101.109029: funcgraph_exit: 2.456 us | } (ret=0xffff8abcc256c300)
foo: sleep-981 [000] ..... 1101.109029: funcgraph_exit: 2.403 us | } (ret=0xffff8abcc256c300)
bar: sleep-981 [000] d..1. 1101.109031: funcgraph_entry: 0.844 us | fpregs_assert_state_consistent(); (ret=0x0)
foo: sleep-981 [000] d..1. 1101.109032: funcgraph_entry: 0.803 us | fpregs_assert_state_consistent(); (ret=0x0)
Link: https://lore.kernel.org/all/20250114101806.b2778cb01f34f5be9d23ad98@kernel.org/
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Suggested-by: Masami Hiramatsu <mhiramat@kernel.org>
Link: https://lore.kernel.org/20250114101202.02e7bc68@gandalf.local.home
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
|
|
When the cycle doesn't involve the initiator of the cycle detection,
we might choose a transaction that is not involved in the cycle to abort.
That shouldn't happen, since aborting it won't break the cycle; this
patch therefore chooses a transaction within the cycle to abort.
Signed-off-by: Alan Huang <mmpgouride@gmail.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
|
This patch introduces a helper function called lock_graph_pop_from();
it pops the lock graph entries from position i onwards.
Signed-off-by: Alan Huang <mmpgouride@gmail.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
|
Signed-off-by: Alan Huang <mmpgouride@gmail.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
|
If the transaction chose itself as a victim before and restarted, it
might issue a no-fail lock request this time. But it might be added to
others' lock graphs and be chosen as the victim again, which is no longer
safe without an additional check. We could also convert the cycle detector
to be fully RCU-based to solve that unsoundness, but the latency added to
trans_put and the additional memory required may not be worth it.
Signed-off-by: Alan Huang <mmpgouride@gmail.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
|
If the lock has been acquired and unlocked, we don't have to do the
clear and wakeup again, though doing so is harmless since we hold the
intent lock. Merging the conditions might be clearer.
Signed-off-by: Alan Huang <mmpgouride@gmail.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
|
This reverts commit 62448afee714354a26db8a0f3c644f58628f0792.
six_lock_tryupgrade() fails only if an intent lock is held; it won't
fail no matter how many read locks are held.
Signed-off-by: Alan Huang <mmpgouride@gmail.com>
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
|
|
Merge series from Miquel Raynal <miquel.raynal@bootlin.com>:
Here is a (big) series intended to bring DTR support to SPI-NAND.
|
|
Use syscon_regmap_lookup_by_phandle_args(), which is a wrapper over
syscon_regmap_lookup_by_phandle() combined with getting the syscon
argument. Besides simpler code, this annotates in one line that the given
phandle has arguments, so grepping the code is easier.
There is also no real benefit in printing errors on a missing syscon
argument, because such a runtime check on static/build-time data comes
just too late: dtschema and the Devicetree bindings already offer the
static/build-time check for this.
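For illustration, the lookup then becomes a single call (the property
name and argument count below are made up for the example, not taken
from a specific binding):

  unsigned int arg;
  struct regmap *map;

  map = syscon_regmap_lookup_by_phandle_args(np, "example,syscon",
                                             1, &arg);
  if (IS_ERR(map))
          return PTR_ERR(map);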
Signed-off-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
Link: https://patch.msgid.link/20250111185400.183760-1-krzysztof.kozlowski@linaro.org
Signed-off-by: Mark Brown <broonie@kernel.org>
|
|
After commit e6204f39fe3a ("spi: amd: Drop redundant check"), clang warns (or
errors with CONFIG_WERROR=y):
drivers/spi/spi-amd.c:695:9: error: variable 'ret' is uninitialized when used here [-Werror,-Wuninitialized]
695 | return ret;
| ^~~
drivers/spi/spi-amd.c:673:9: note: initialize the variable 'ret' to silence this warning
673 | int ret;
| ^
| = 0
1 error generated.
ret is no longer set on anything other than the default switch path.
Replace ret with a direct return of 0 at the end of the function and
-EOPNOTSUPP in the default case to resolve the warning.
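The shape of the resulting code, roughly (a simplified sketch, not the
actual spi-amd function or its real case labels):

  static int amd_spi_example_op(int op)
  {
          switch (op) {
          case 0:                 /* placeholder for the supported ops */
                  /* ... perform the operation ... */
                  break;
          default:
                  return -EOPNOTSUPP;
          }

          return 0;               /* no uninitialized 'ret' anymore */
  }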
Fixes: e6204f39fe3a ("spi: amd: Drop redundant check")
Reported-by: kernel test robot <lkp@intel.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202501112315.ugYQ7Ce7-lkp@intel.com/
Signed-off-by: Nathan Chancellor <nathan@kernel.org>
Reviewed-by: Miguel Ojeda <ojeda@kernel.org>
Reviewed-by: Miquel Raynal <miquel.raynal@bootlin.com>
Link: https://patch.msgid.link/20250111-spi-amd-fix-uninitialized-ret-v1-1-c66ab9f6a23d@kernel.org
Signed-off-by: Mark Brown <broonie@kernel.org>
|
|
Add a selftest creating three extents and then deleting two out of the
three extents.
Reviewed-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: David Sterba <dsterba@suse.com>
|
|
Test creating a range of three RAID stripe-extents and then punching a
hole in the middle, deleting the whole middle extent and partially
deleting the "book ends".
Reviewed-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: David Sterba <dsterba@suse.com>
|
|
Add a selftest for punching a hole into a RAID stripe extent. The test
creates a 1M extent and punches a 64k hole at an offset of 32k from the
start of the extent, which leaves a 32k "left" extent and a 928k "right"
extent starting at 96k.
Afterwards it verifies the start and length of both resulting new extents
as well as the absence of any extent in the punched range.
Reviewed-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: David Sterba <dsterba@suse.com>
|
|
Add a selftest for RAID stripe-tree deletion with a delete range spanning
two items, so that we're punching a hole into two adjacent RAID stripe
extents, truncating the first and "moving" the second to the right.
The following diagram illustrates the operation:
|--- RAID Stripe Extent ---||--- RAID Stripe Extent ---|
|----- keep -----|--- drop ---|----- keep ----|
Reviewed-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: David Sterba <dsterba@suse.com>
|
|
The selftests for partially deleting the start or tail of RAID
stripe-extents split these extents in half.
This can hide errors in the calculation, so don't split the RAID
stripe-extents in half but delete the first or last 16K of the 64K
extents.
Reviewed-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: David Sterba <dsterba@suse.com>
|
|
Commit 5e72aabc1fff ("btrfs: return ENODATA in case RST lookup fails")
changed btrfs_get_raid_extent_offset()'s return value to ENODATA in case
the RAID stripe-tree lookup failed.
Adjust the test cases which check for absence of a given range to check
for ENODATA as return value in this case.
Reviewed-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: David Sterba <dsterba@suse.com>
|
|
Don't use btrfs_set_item_key_safe() to modify the keys in the RAID
stripe-tree, as this can lead to corruption of the tree, which is caught
by the checks in btrfs_set_item_key_safe():
BTRFS info (device nvme1n1): leaf 49168384 gen 15 total ptrs 194 free space 8329 owner 12
BTRFS info (device nvme1n1): refs 2 lock_owner 1030 current 1030
[ snip ]
item 105 key (354549760 230 20480) itemoff 14587 itemsize 16
stride 0 devid 5 physical 67502080
item 106 key (354631680 230 4096) itemoff 14571 itemsize 16
stride 0 devid 1 physical 88559616
item 107 key (354631680 230 32768) itemoff 14555 itemsize 16
stride 0 devid 1 physical 88555520
item 108 key (354717696 230 28672) itemoff 14539 itemsize 16
stride 0 devid 2 physical 67604480
[ snip ]
BTRFS critical (device nvme1n1): slot 106 key (354631680 230 32768) new key (354635776 230 4096)
------------[ cut here ]------------
kernel BUG at fs/btrfs/ctree.c:2602!
Oops: invalid opcode: 0000 [#1] PREEMPT SMP PTI
CPU: 1 UID: 0 PID: 1055 Comm: fsstress Not tainted 6.13.0-rc1+ #1464
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.16.2-3-gd478f380-rebuilt.opensuse.org 04/01/2014
RIP: 0010:btrfs_set_item_key_safe+0xf7/0x270
Code: <snip>
RSP: 0018:ffffc90001337ab0 EFLAGS: 00010287
RAX: 0000000000000000 RBX: ffff8881115fd000 RCX: 0000000000000000
RDX: 0000000000000001 RSI: 0000000000000001 RDI: 00000000ffffffff
RBP: ffff888110ed6f50 R08: 00000000ffffefff R09: ffffffff8244c500
R10: 00000000ffffefff R11: 00000000ffffffff R12: ffff888100586000
R13: 00000000000000c9 R14: ffffc90001337b1f R15: ffff888110f23b58
FS: 00007f7d75c72740(0000) GS:ffff88813bd00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007fa811652c60 CR3: 0000000111398001 CR4: 0000000000370eb0
Call Trace:
<TASK>
? __die_body.cold+0x14/0x1a
? die+0x2e/0x50
? do_trap+0xca/0x110
? do_error_trap+0x65/0x80
? btrfs_set_item_key_safe+0xf7/0x270
? exc_invalid_op+0x50/0x70
? btrfs_set_item_key_safe+0xf7/0x270
? asm_exc_invalid_op+0x1a/0x20
? btrfs_set_item_key_safe+0xf7/0x270
btrfs_partially_delete_raid_extent+0xc4/0xe0
btrfs_delete_raid_extent+0x227/0x240
__btrfs_free_extent.isra.0+0x57f/0x9c0
? exc_coproc_segment_overrun+0x40/0x40
__btrfs_run_delayed_refs+0x2fa/0xe80
btrfs_run_delayed_refs+0x81/0xe0
btrfs_commit_transaction+0x2dd/0xbe0
? preempt_count_add+0x52/0xb0
btrfs_sync_file+0x375/0x4c0
do_fsync+0x39/0x70
__x64_sys_fsync+0x13/0x20
do_syscall_64+0x54/0x110
entry_SYSCALL_64_after_hwframe+0x76/0x7e
RIP: 0033:0x7f7d7550ef90
Code: <snip>
RSP: 002b:00007ffd70237248 EFLAGS: 00000202 ORIG_RAX: 000000000000004a
RAX: ffffffffffffffda RBX: 0000000000000004 RCX: 00007f7d7550ef90
RDX: 000000000000013a RSI: 000000000040eb28 RDI: 0000000000000004
RBP: 000000000000001b R08: 0000000000000078 R09: 00007ffd7023725c
R10: 00007f7d75400390 R11: 0000000000000202 R12: 028f5c28f5c28f5c
R13: 8f5c28f5c28f5c29 R14: 000000000040b520 R15: 00007f7d75c726c8
</TASK>
While the root cause of the tree order corruption isn't clear, using
btrfs_duplicate_item() to copy the item and then adjusting both the key
and the per-device physical addresses is a safe way to counter this
problem.
Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: David Sterba <dsterba@suse.com>
|
|
If the stripe extent starts before the range we want to delete and ends
after it, we're punching a hole in the stripe extent:
|--- RAID Stripe Extent ---|
| keep |--- drop ---| keep |
This means we need to a) truncate the existing item and b)
create a second item for the remaining range.
Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: David Sterba <dsterba@suse.com>
|
|
When a user requests the deletion of a range that spans multiple stripe
extents and btrfs_search_slot() returns us the second RAID stripe extent,
we need to pick the previous item and truncate it; if there's still a
range left to delete, move on to the next item.
The following diagram illustrates the operation:
|--- RAID Stripe Extent ---||--- RAID Stripe Extent ---|
|--- keep ---|--- drop ---|
While at it, comment the trivial case of a whole item delete as well.
Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: David Sterba <dsterba@suse.com>
|
|
Fix tail delete of RAID stripe-extents for the case where a range still
remains to be deleted after the tail delete of the extent.
Reviewed-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: David Sterba <dsterba@suse.com>
|
|
When deleting the front of a RAID stripe-extent, the delete code
miscalculates how much to pad the remaining front part of the extent.
Fix the calculation so we always get the sizes we expect.
Reviewed-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: David Sterba <dsterba@suse.com>
|
|
When modifying a RAID stripe-extent, ASSERT() that the length of the new
RAID stripe-extent is always greater than 0.
Reviewed-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: David Sterba <dsterba@suse.com>
|
|
Even if the RAID stripe-tree is not enabled in the filesystem,
do_free_extent_accounting() still calls into btrfs_delete_raid_extent().
Check if the extent in question is on a block-group that has a profile
which is used by RAID stripe-tree before attempting to delete a stripe
extent. Return early if it doesn't; otherwise we're doing an unnecessary
search.
Reviewed-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: David Sterba <dsterba@suse.com>
|
|
RAID stripe-tree is an incompat feature, not a read-only compat one, so
set the incompat flag rather than a compat_ro flag in the selftest code.
Subsequent changes in btrfs_delete_raid_extent() will start checking for
this flag.
Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: David Sterba <dsterba@suse.com>
|
|
Print lazy preemption model in ftrace header when latency-format=1.
# cat /sys/kernel/debug/sched/preempt
none voluntary full (lazy)
Without patch:
latency: 0 us, #232946/232946, CPU#40 | (M:unknown VP:0, KP:0, SP:0 HP:0 #P:80)
^^^^^^^
With Patch:
latency: 0 us, #1897938/25566788, CPU#16 | (M:lazy VP:0, KP:0, SP:0 HP:0 #P:80)
^^^^
Now that lazy preemption is part of the kernel, make sure the tracing
infrastructure reflects that.
Link: https://lore.kernel.org/20250103093647.575919-1-sshegde@linux.ibm.com
Signed-off-by: Shrikanth Hegde <sshegde@linux.ibm.com>
Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
|
|
The function graph tracer has become generic so that kretprobes and BPF
can use it along with function graph tracing itself. Some of the
infrastructure was specific for function graph tracing such as recording
the calltime and return time of the functions. Calling the clock code on a
high volume function does add overhead. The calculation of the calltime
was removed from the generic code and placed into the function graph
tracer itself so that the other users did not incur this overhead as they
did not need that timestamp.
The calltime field was still kept in the generic return entry structure
and the function graph return entry callback filled it as that structure
was passed to other code.
But this broke both the irqsoff and wakeup latency tracers, as they still
depended on the trace structure containing the calltime when the
display-graph option is set, since they use some of the same functions
that the function graph tracer uses. Now the calltime was not set and was
just zero. This caused the calculated function time to be the absolute
value of the return timestamp and not the length of the function.
# cd /sys/kernel/tracing
# echo 1 > options/display-graph
# echo irqsoff > current_tracer
The tracers went from:
# REL TIME CPU TASK/PID |||| DURATION FUNCTION CALLS
# | | | | |||| | | | | | |
0 us | 4) <idle>-0 | d..1. | 0.000 us | irqentry_enter();
3 us | 4) <idle>-0 | d..2. | | irq_enter_rcu() {
4 us | 4) <idle>-0 | d..2. | 0.431 us | preempt_count_add();
5 us | 4) <idle>-0 | d.h2. | | tick_irq_enter() {
5 us | 4) <idle>-0 | d.h2. | 0.433 us | tick_check_oneshot_broadcast_this_cpu();
6 us | 4) <idle>-0 | d.h2. | 2.426 us | ktime_get();
9 us | 4) <idle>-0 | d.h2. | | tick_nohz_stop_idle() {
10 us | 4) <idle>-0 | d.h2. | 0.398 us | nr_iowait_cpu();
11 us | 4) <idle>-0 | d.h1. | 1.903 us | }
11 us | 4) <idle>-0 | d.h2. | | tick_do_update_jiffies64() {
12 us | 4) <idle>-0 | d.h2. | | _raw_spin_lock() {
12 us | 4) <idle>-0 | d.h2. | 0.360 us | preempt_count_add();
13 us | 4) <idle>-0 | d.h3. | 0.354 us | do_raw_spin_lock();
14 us | 4) <idle>-0 | d.h2. | 2.207 us | }
15 us | 4) <idle>-0 | d.h3. | 0.428 us | calc_global_load();
16 us | 4) <idle>-0 | d.h3. | | _raw_spin_unlock() {
16 us | 4) <idle>-0 | d.h3. | 0.380 us | do_raw_spin_unlock();
17 us | 4) <idle>-0 | d.h3. | 0.334 us | preempt_count_sub();
18 us | 4) <idle>-0 | d.h1. | 1.768 us | }
18 us | 4) <idle>-0 | d.h2. | | update_wall_time() {
[..]
To:
# REL TIME CPU TASK/PID |||| DURATION FUNCTION CALLS
# | | | | |||| | | | | | |
0 us | 5) <idle>-0 | d.s2. | 0.000 us | _raw_spin_lock_irqsave();
0 us | 5) <idle>-0 | d.s3. | 312159583 us | preempt_count_add();
2 us | 5) <idle>-0 | d.s4. | 312159585 us | do_raw_spin_lock();
3 us | 5) <idle>-0 | d.s4. | | _raw_spin_unlock() {
3 us | 5) <idle>-0 | d.s4. | 312159586 us | do_raw_spin_unlock();
4 us | 5) <idle>-0 | d.s4. | 312159587 us | preempt_count_sub();
4 us | 5) <idle>-0 | d.s2. | 312159587 us | }
5 us | 5) <idle>-0 | d.s3. | | _raw_spin_lock() {
5 us | 5) <idle>-0 | d.s3. | 312159588 us | preempt_count_add();
6 us | 5) <idle>-0 | d.s4. | 312159589 us | do_raw_spin_lock();
7 us | 5) <idle>-0 | d.s3. | 312159590 us | }
8 us | 5) <idle>-0 | d.s4. | 312159591 us | calc_wheel_index();
9 us | 5) <idle>-0 | d.s4. | | enqueue_timer() {
9 us | 5) <idle>-0 | d.s4. | | wake_up_nohz_cpu() {
11 us | 5) <idle>-0 | d.s4. | | native_smp_send_reschedule() {
11 us | 5) <idle>-0 | d.s4. | 312171987 us | default_send_IPI_single_phys();
12408 us | 5) <idle>-0 | d.s3. | 312171990 us | }
12408 us | 5) <idle>-0 | d.s3. | 312171991 us | }
12409 us | 5) <idle>-0 | d.s3. | 312171991 us | }
Where the calculated time for each function was the return timestamp
minus zero, and not how long the function actually ran.
Have these tracers also save the calltime in the fgraph data section and
retrieve it again on the return to get the correct timings again.
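A minimal sketch of that approach, assuming the fgraph storage helpers
behave the way the function graph tracer itself uses them (the callback
signatures and tracer plumbing are elided, and `duration` is just a
local for illustration):

  /* in the tracer's graph entry callback */
  u64 *calltime = fgraph_reserve_data(gops->idx, sizeof(*calltime));

  if (calltime)
          *calltime = trace_clock_local();

  /* in the tracer's graph return callback */
  int size;
  u64 *calltime = fgraph_retrieve_data(gops->idx, &size);
  u64 duration;

  if (calltime)
          duration = trace_clock_local() - *calltime;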
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/20250113183124.61767419@gandalf.local.home
Fixes: f1f36e22bee9 ("ftrace: Have calltime be saved in the fgraph storage")
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
|
|
DEFAULT_SYMBOL_NAMESPACE must already be defined when <linux/export.h>
is included. So move the define above the include block.
Fixes: fd57a3325a77 ("i2c: designware: Move exports to I2C_DW namespaces")
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@baylibre.com>
Acked-by: Jarkko Nikula <jarkko.nikula@linux.intel.com>
Acked-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Wolfram Sang <wsa+renesas@sang-engineering.com>
|
|
irq_chip functions may be called in raw spinlock context. Therefore, we
must also use a raw spinlock for our own internal locking.
This fixes the following lockdep splat:
[ 5.349336] =============================
[ 5.353349] [ BUG: Invalid wait context ]
[ 5.357361] 6.13.0-rc5+ #69 Tainted: G W
[ 5.363031] -----------------------------
[ 5.367045] kworker/u17:1/44 is trying to lock:
[ 5.371587] ffffff88018b02c0 (&chip->gpio_lock){....}-{3:3}, at: xgpio_irq_unmask (drivers/gpio/gpio-xilinx.c:433 (discriminator 8))
[ 5.380079] other info that might help us debug this:
[ 5.385138] context-{5:5}
[ 5.387762] 5 locks held by kworker/u17:1/44:
[ 5.392123] #0: ffffff8800014958 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work (kernel/workqueue.c:3204)
[ 5.402260] #1: ffffffc082fcbdd8 (deferred_probe_work){+.+.}-{0:0}, at: process_one_work (kernel/workqueue.c:3205)
[ 5.411528] #2: ffffff880172c900 (&dev->mutex){....}-{4:4}, at: __device_attach (drivers/base/dd.c:1006)
[ 5.419929] #3: ffffff88039c8268 (request_class#2){+.+.}-{4:4}, at: __setup_irq (kernel/irq/internals.h:156 kernel/irq/manage.c:1596)
[ 5.428331] #4: ffffff88039c80c8 (lock_class#2){....}-{2:2}, at: __setup_irq (kernel/irq/manage.c:1614)
[ 5.436472] stack backtrace:
[ 5.439359] CPU: 2 UID: 0 PID: 44 Comm: kworker/u17:1 Tainted: G W 6.13.0-rc5+ #69
[ 5.448690] Tainted: [W]=WARN
[ 5.451656] Hardware name: xlnx,zynqmp (DT)
[ 5.455845] Workqueue: events_unbound deferred_probe_work_func
[ 5.461699] Call trace:
[ 5.464147] show_stack+0x18/0x24 C
[ 5.467821] dump_stack_lvl (lib/dump_stack.c:123)
[ 5.471501] dump_stack (lib/dump_stack.c:130)
[ 5.474824] __lock_acquire (kernel/locking/lockdep.c:4828 kernel/locking/lockdep.c:4898 kernel/locking/lockdep.c:5176)
[ 5.478758] lock_acquire (arch/arm64/include/asm/percpu.h:40 kernel/locking/lockdep.c:467 kernel/locking/lockdep.c:5851 kernel/locking/lockdep.c:5814)
[ 5.482429] _raw_spin_lock_irqsave (include/linux/spinlock_api_smp.h:111 kernel/locking/spinlock.c:162)
[ 5.486797] xgpio_irq_unmask (drivers/gpio/gpio-xilinx.c:433 (discriminator 8))
[ 5.490737] irq_enable (kernel/irq/internals.h:236 kernel/irq/chip.c:170 kernel/irq/chip.c:439 kernel/irq/chip.c:432 kernel/irq/chip.c:345)
[ 5.494060] __irq_startup (kernel/irq/internals.h:241 kernel/irq/chip.c:180 kernel/irq/chip.c:250)
[ 5.497645] irq_startup (kernel/irq/chip.c:270)
[ 5.501143] __setup_irq (kernel/irq/manage.c:1807)
[ 5.504728] request_threaded_irq (kernel/irq/manage.c:2208)
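The shape of the change, roughly (a sketch using the lock named in the
splat above; the register update in the middle is elided and the
function name is illustrative, not the exact gpio-xilinx source):

  /* spinlock_t gpio_lock  ->  raw_spinlock_t gpio_lock */

  static void xgpio_irq_unmask_example(struct xgpio_instance *chip)
  {
          unsigned long flags;

          raw_spin_lock_irqsave(&chip->gpio_lock, flags);
          /* ... update the interrupt enable register ... */
          raw_spin_unlock_irqrestore(&chip->gpio_lock, flags);
  }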
Fixes: a32c7caea292 ("gpio: gpio-xilinx: Add interrupt support")
Signed-off-by: Sean Anderson <sean.anderson@linux.dev>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/20250110163354.2012654-1-sean.anderson@linux.dev
Signed-off-by: Bartosz Golaszewski <bartosz.golaszewski@linaro.org>
|
|
Kory Maincent says:
====================
Arrange PSE core and update TPS23881 driver
This patch includes several improvements to the PSE core for better
implementation and maintainability:
- Move the conversion between current limit and power limit from the driver
to the PSE core.
- Update power and current limit checks.
- Split the ethtool_get_status callback into multiple callbacks.
- Fix PSE PI of_node detection.
- Clean ethtool header of PSE structures.
Additionally, the TPS23881 driver has been updated to support power
limit and measurement features, aligning with the new PSE core
functionalities.
This patch series is the first part of the budget evaluation strategy
support patch series sent earlier:
https://lore.kernel.org/netdev/20250104161622.7b82dfdf@kmaincent-XPS-13-7390/T/#t
Signed-off-by: Kory Maincent <kory.maincent@bootlin.com>
====================
Link: https://patch.msgid.link/20250110-b4-feature_poe_arrange-v3-0-142279aedb94@bootlin.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
|
|
Remove PSE-specific structures from the ethtool header to improve code
modularity, maintain independent headers, and reduce incremental build
time.
Signed-off-by: Kory Maincent <kory.maincent@bootlin.com>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
|
|
The PI of_node was not assigned in the regulator_config structure, leading
to failures in resolving the correct supply when different power supplies
are assigned to multiple PIs of a PSE controller. This fix ensures that the
of_node is properly set in the regulator_config, allowing accurate supply
resolution for each PI.
Acked-by: Oleksij Rempel <o.rempel@pengutronix.de>
Signed-off-by: Kory Maincent <kory.maincent@bootlin.com>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
|
|
Expand PSE callbacks to support the newly introduced
pi_get/set_pw_limit() and pi_get_voltage() functions. These callbacks
allow for power limit configuration in the TPS23881 controller.
Additionally, the patch includes the pi_get_pw_class(),
pi_get_actual_pw(), and pi_get_pw_limit_ranges() callbacks, providing
more comprehensive PoE status reporting.
Acked-by: Oleksij Rempel <o.rempel@pengutronix.de>
Signed-off-by: Kory Maincent <kory.maincent@bootlin.com>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
|
|
The is_enabled callback is now redundant as the admin_state can be obtained
directly from the driver and provides the same information.
To simplify functionality, the core will handle this internally, making
the is_enabled callback unnecessary at the driver level. Remove the
callback from all drivers.
Acked-by: Oleksij Rempel <o.rempel@pengutronix.de>
Signed-off-by: Kory Maincent <kory.maincent@bootlin.com>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
|
|
The ethtool_get_status callback currently handles all status and PSE
information within a single function. This approach has two key
drawbacks:
1. If the core requires some information for purposes other than
ethtool_get_status, redundant code will be needed to fetch the same
data from the driver (like is_enabled).
2. Drivers currently have access to all information passed to ethtool.
New variables will soon be added to ethtool status, such as PSE ID,
power domain IDs, and budget evaluation strategies, which are meant
to be managed solely by the core. Drivers should not have the ability
to modify these variables.
To resolve these issues, ethtool_get_status has been split into multiple
callbacks, with each handling a specific piece of information required
by ethtool or the core.
Signed-off-by: Kory Maincent <kory.maincent@bootlin.com>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
|
|
The regulator framework uses current limits, but the PSE standard and
known PSE controllers rely on power limits. Instead of converting
current to power within each driver, perform the conversion in the PSE
core. This avoids redundancy in driver implementation and aligns better
with the standard, simplifying driver development.
Remove at the same time the _pse_ethtool_get_status() function which is
not needed anymore.
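As a rough illustration of the conversion now done in the core (the
helper name and unit choices are assumptions for the example, not the
actual pse_core function):

  #include <linux/math64.h>

  /* uV * uA gives pW; divide by 10^9 to express the result in mW */
  static u32 example_uV_uA_to_mW(u64 uV, u64 uA)
  {
          return div64_u64(uV * uA, 1000000000ULL);
  }

For instance, a 52 V supply (52,000,000 uV) with a 500 mA limit
(500,000 uA) maps to a 26,000 mW (26 W) power limit.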
Acked-by: Oleksij Rempel <o.rempel@pengutronix.de>
Signed-off-by: Kory Maincent <kory.maincent@bootlin.com>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
|
|
When setting the PWOFF register, the controller resets multiple
configuration registers. This patch ensures these registers are
reconfigured as needed following a disable operation.
Acked-by: Oleksij Rempel <o.rempel@pengutronix.de>
Signed-off-by: Kory Maincent <kory.maincent@bootlin.com>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
|
|
This driver frequently follows a pattern where two registers are read or
written in a single operation, followed by calculating the bit offset for
a specific channel.
Introduce helpers to streamline this process and reduce code redundancy,
making the codebase cleaner and more maintainable.
Acked-by: Oleksij Rempel <o.rempel@pengutronix.de>
Signed-off-by: Kory Maincent <kory.maincent@bootlin.com>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
|
|
Clean up several functions in tps23881 by removing redundant checks on
return values at the end of functions; the return statement now directly
returns the function result, reducing the code's complexity and making it
more concise.
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Acked-by: Oleksij Rempel <o.rempel@pengutronix.de>
Reviewed-by: Kyle Swenson <kyle.swenson@est.tech>
Signed-off-by: Kory Maincent <kory.maincent@bootlin.com>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
|
|
Checking only the current limit is not sufficient. According to the
standard, voltage can reach up to 57V and current up to 1.92A, whose
product (57 V * 1.92 A = 109.44 W) exceeds the power limit described in
the standard (99.9W). Add a power limit check to prevent this.
Acked-by: Oleksij Rempel <o.rempel@pengutronix.de>
Signed-off-by: Kory Maincent <kory.maincent@bootlin.com>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
|
|
Setting the max_uA constraint in the regulator API imposes a current
limit during the regulator registration process. This behavior conflicts
with preserving the maximum PI power budget configuration across reboots.
Instead, compare the desired current limit to MAX_PI_CURRENT in the
pse_pi_set_current_limit() function to ensure proper handling of the
power budget.
Acked-by: Oleksij Rempel <o.rempel@pengutronix.de>
Signed-off-by: Kory Maincent <kory.maincent@bootlin.com>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
|