author     Jens Axboe <axboe@kernel.dk>  2023-08-09 12:59:40 -0600
committer  Jens Axboe <axboe@kernel.dk>  2023-08-11 10:36:17 -0600
commit     de36a15f9a3842be24ca220060b77925f2f5435b
tree       439bd272f185945a9c393210b755a3f322407f28
parent     78848b9b05623cfddb790d23b0dc38a275eb0763
io_uring/io-wq: reduce frequency of acct->lock acquisitions
When we check if we have work to run, we grab the acct lock, check,
drop it, and then return the result. If we do have work to run, then
running the work will again grab acct->lock and get the work item.
This causes us to grab acct->lock more frequently than we need to.
If we have work to do, have io_acct_run_queue() return with the acct
lock still acquired. io_worker_handle_work() is then always invoked
with the acct lock already held.
In a simple test case that stats files (IORING_OP_STATX always hits
io-wq), we see a nice reduction in locking overhead with this change:
19.32% -12.55% [kernel.kallsyms] [k] __cmpwait_case_32
20.90% -12.07% [kernel.kallsyms] [k] queued_spin_lock_slowpath
Reviewed-by: Hao Xu <howeyxu@tencent.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>