2022-03-10cpufreq: unify show() and store() naming and use __ATTR_XXLianjie Zhang
Usually, sysfs attributes have .show and .store and their naming convention is filename_show() and filename_store(). But in cpufreq the naming convention of these functions is show_filename() and store_filename() which prevents __ATTR_RW() and __ATTR_RO() from being used in there to simplify code. Accordingly, change the naming convention of the sysfs .show and .store methods in cpufreq to follow the one expected by __ATTR_RW() and __ATTR_RO() and use these macros in that code. Signed-off-by: Lianjie Zhang <zhanglianjie@uniontech.com> [ rjw: Subject and changelog edits ] Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
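As a rough illustration of the convention (a generic sysfs sketch, not the cpufreq code itself), __ATTR_RW(name) wires an attribute to name_show()/name_store():

#include <linux/kobject.h>
#include <linux/sysfs.h>

/* __ATTR_RW(example) expands to an attribute using example_show() and
 * example_store(), hence the <filename>_show / <filename>_store naming. */
static ssize_t example_show(struct kobject *kobj, struct kobj_attribute *attr,
			    char *buf)
{
	return sysfs_emit(buf, "%d\n", 42);
}

static ssize_t example_store(struct kobject *kobj, struct kobj_attribute *attr,
			     const char *buf, size_t count)
{
	return count;
}

static struct kobj_attribute example_attr = __ATTR_RW(example);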
2022-03-10mm: gup: make fault_in_safe_writeable() use fixup_user_fault()Linus Torvalds
Instead of using GUP, make fault_in_safe_writeable() actually force a 'handle_mm_fault()' using the same fixup_user_fault() machinery that futexes already use. Using the GUP machinery meant that fault_in_safe_writeable() did not do everything that a real fault would do, ranging from not auto-expanding the stack segment, to not updating accessed or dirty flags in the page tables (GUP sets those flags on the pages themselves). The latter causes problems on architectures (like s390) that do accessed bit handling in software, which meant that fault_in_safe_writeable() didn't actually do all the fault handling it needed to, and trying to access the user address afterwards would still cause faults. Reported-and-tested-by: Andreas Gruenbacher <agruenba@redhat.com> Fixes: cdd591fc86e3 ("iov_iter: Introduce fault_in_iov_iter_writeable") Link: https://lore.kernel.org/all/CAHc6FU5nP+nziNGG0JAF1FUx-GV7kKFvM7aZuU_XD2_1v4vnvg@mail.gmail.com/ Acked-by: David Hildenbrand <david@redhat.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2022-03-10PM: core: keep irq flags in device_pm_check_callbacks()Dmitry Baryshkov
The function device_pm_check_callbacks() can be called under the spin lock (in the reported case it happens from genpd_add_device() -> dev_pm_domain_set(), when the genpd uses spinlocks rather than mutexes). However, this function unconditionally uses spin_lock_irq() / spin_unlock_irq(), thus not preserving the CPU flags. Use the irqsave/irqrestore variants instead. The backtrace, for reference: [ 2.752010] ------------[ cut here ]------------ [ 2.756769] raw_local_irq_restore() called with IRQs enabled [ 2.762596] WARNING: CPU: 4 PID: 1 at kernel/locking/irqflag-debug.c:10 warn_bogus_irq_restore+0x34/0x50 [ 2.772338] Modules linked in: [ 2.775487] CPU: 4 PID: 1 Comm: swapper/0 Tainted: G S 5.17.0-rc6-00384-ge330d0d82eff-dirty #684 [ 2.781384] Freeing initrd memory: 46024K [ 2.785839] pstate: 60400005 (nZCv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--) [ 2.785841] pc : warn_bogus_irq_restore+0x34/0x50 [ 2.785844] lr : warn_bogus_irq_restore+0x34/0x50 [ 2.785846] sp : ffff80000805b7d0 [ 2.785847] x29: ffff80000805b7d0 x28: 0000000000000000 x27: 0000000000000002 [ 2.785850] x26: ffffd40e80930b18 x25: ffff7ee2329192b8 x24: ffff7edfc9f60800 [ 2.785853] x23: ffffd40e80930b18 x22: ffffd40e80930d30 x21: ffff7edfc0dffa00 [ 2.785856] x20: ffff7edfc09e3768 x19: 0000000000000000 x18: ffffffffffffffff [ 2.845775] x17: 6572206f74206465 x16: 6c696166203a3030 x15: ffff80008805b4f7 [ 2.853108] x14: 0000000000000000 x13: ffffd40e809550b0 x12: 00000000000003d8 [ 2.860441] x11: 0000000000000148 x10: ffffd40e809550b0 x9 : ffffd40e809550b0 [ 2.867774] x8 : 00000000ffffefff x7 : ffffd40e809ad0b0 x6 : ffffd40e809ad0b0 [ 2.875107] x5 : 000000000000bff4 x4 : 0000000000000000 x3 : 0000000000000000 [ 2.882440] x2 : 0000000000000000 x1 : 0000000000000000 x0 : ffff7edfc03a8000 [ 2.889774] Call trace: [ 2.892290] warn_bogus_irq_restore+0x34/0x50 [ 2.896770] _raw_spin_unlock_irqrestore+0x94/0xa0 [ 2.901690] genpd_unlock_spin+0x20/0x30 [ 2.905724] genpd_add_device+0x100/0x2d0 [ 2.909850] __genpd_dev_pm_attach+0xa8/0x23c [ 2.914329] genpd_dev_pm_attach_by_id+0xc4/0x190 [ 2.919167] genpd_dev_pm_attach_by_name+0x3c/0xd0 [ 2.924086] dev_pm_domain_attach_by_name+0x24/0x30 [ 2.929102] psci_dt_attach_cpu+0x24/0x90 [ 2.933230] psci_cpuidle_probe+0x2d4/0x46c [ 2.937534] platform_probe+0x68/0xe0 [ 2.941304] really_probe.part.0+0x9c/0x2fc [ 2.945605] __driver_probe_device+0x98/0x144 [ 2.950085] driver_probe_device+0x44/0x15c [ 2.954385] __device_attach_driver+0xb8/0x120 [ 2.958950] bus_for_each_drv+0x78/0xd0 [ 2.962896] __device_attach+0xd8/0x180 [ 2.966843] device_initial_probe+0x14/0x20 [ 2.971144] bus_probe_device+0x9c/0xa4 [ 2.975092] device_add+0x380/0x88c [ 2.978679] platform_device_add+0x114/0x234 [ 2.983067] platform_device_register_full+0x100/0x190 [ 2.988344] psci_idle_init+0x6c/0xb0 [ 2.992113] do_one_initcall+0x74/0x3a0 [ 2.996060] kernel_init_freeable+0x2fc/0x384 [ 3.000543] kernel_init+0x28/0x130 [ 3.004132] ret_from_fork+0x10/0x20 [ 3.007817] irq event stamp: 319826 [ 3.011404] hardirqs last enabled at (319825): [<ffffd40e7eda0268>] __up_console_sem+0x78/0x84 [ 3.020332] hardirqs last disabled at (319826): [<ffffd40e7fd6d9d8>] el1_dbg+0x24/0x8c [ 3.028458] softirqs last enabled at (318312): [<ffffd40e7ec90410>] _stext+0x410/0x588 [ 3.036678] softirqs last disabled at (318299): [<ffffd40e7ed1bf68>] __irq_exit_rcu+0x158/0x174 [ 3.045607] ---[ end trace 0000000000000000 ]--- Signed-off-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
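A minimal sketch of the change in locking style (illustrative, not the full patched function):

/* spin_lock_irq()/spin_unlock_irq() unconditionally re-enable interrupts on
 * unlock, which is wrong if the caller already runs with IRQs disabled (e.g.
 * under another spinlock). The irqsave/irqrestore variants preserve the
 * caller's IRQ state instead. */
unsigned long flags;

spin_lock_irqsave(&dev->power.lock, flags);
/* ... inspect the PM callbacks ... */
spin_unlock_irqrestore(&dev->power.lock, flags);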
2022-03-10riscv: alternative only works on !XIP_KERNELJisheng Zhang
The alternative mechanism needs runtime code patching, so it can't work with XIP_KERNEL. The errata workarounds are implemented via the alternative mechanism, so add a !XIP_KERNEL dependency for the alternative framework and the errata. Signed-off-by: Jisheng Zhang <jszhang@kernel.org> Fixes: 44c922572952 ("RISC-V: enable XIP") Cc: stable@vger.kernel.org Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2022-03-10arm64: Add gcc Shadow Call Stack supportDan Li
Shadow call stacks will be available in GCC >= 12; this patch makes the corresponding kernel configuration available when compiling the kernel with GCC. Note that the implementation in GCC is slightly different from Clang's. With SCS enabled, functions will only pop x30 once in the epilogue, like: str x30, [x18], #8 stp x29, x30, [sp, #-16]! ...... - ldp x29, x30, [sp], #16 //clang + ldr x29, [sp], #16 //GCC ldr x30, [x18, #-8]! Link: https://gcc.gnu.org/git/?p=gcc.git;a=commit;h=ce09ab17ddd21f73ff2caf6eec3b0ee9b0e1a11e Reviewed-by: Nathan Chancellor <nathan@kernel.org> Reviewed-by: Kees Cook <keescook@chromium.org> Reviewed-by: Nick Desaulniers <ndesaulniers@google.com> Signed-off-by: Dan Li <ashimida@linux.alibaba.com> Signed-off-by: Kees Cook <keescook@chromium.org> Link: https://lore.kernel.org/r/20220303074323.86282-1-ashimida@linux.alibaba.com
2022-03-10io_uring: recycle provided buffers if request goes asyncJens Axboe
If we are using provided buffers, it's less than useful to have a buffer selected and pinned if a request needs to go async or arms poll to get notified of when we can process it. Recycle the buffer in those cases, so we don't pin it for the duration of the request. Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-03-10io_uring: ensure reads re-import for selected buffersJens Axboe
If we drop buffers between scheduling a retry, then we need to re-import when we start the request again. Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-03-10io_uring: retry early for reads if we can pollJens Axboe
Most of the logic in io_read() deals with regular files, and in some ways it would make sense to split the handling into S_IFREG and others. But at least for retry, we don't need to bother setting up a bunch of state just to abort in the loop later. In particular, don't bother forcing setup of async data for a normal non-vectored read when we don't need it. Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-03-10io-uring: Make statx API stableStefan Roesch
One of the key architectural tenets of io-uring is to keep request parameters stable: after the call has been submitted, the application may change their values without affecting the request. Unfortunately this is not the case for the current statx implementation. IO-Uring change: This change replaces the const char * filename pointer in the io_statx structure with a struct filename *. In addition it also creates the filename object during the prepare phase. With this change, the opcode also needs to invoke cleanup, so the filename object gets freed after processing the request. fs change: This replaces the const char * __user filename parameter in the two functions do_statx and vfs_statx with a struct filename *. In addition, to be able to correctly construct a filename object, a new helper function getname_statx_lookup_flags is introduced. It makes sure that do_statx and vfs_statx are invoked with the correct lookup flags. Signed-off-by: Stefan Roesch <shr@fb.com> Link: https://lore.kernel.org/r/20220225185326.1373304-2-shr@fb.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-03-10io_uring: Add support for napi_busy_pollOlivier Langlois
The sqpoll thread can be used for performing the napi busy poll in a similar way to how it does io polling for file systems supporting direct access bypassing the page cache. The other way that io_uring can be used for napi busy poll is by calling io_uring_enter() to get events. If the user specifies a timeout value, it is distributed between polling and sleeping by using the systemwide setting /proc/sys/net/core/busy_poll. The changes have been tested with this program: https://github.com/lano1106/io_uring_udp_ping and the result is: Without sqpoll: NAPI busy loop disabled: rtt min/avg/max/mdev = 40.631/42.050/58.667/1.547 us NAPI busy loop enabled: rtt min/avg/max/mdev = 30.619/31.753/61.433/1.456 us With sqpoll: NAPI busy loop disabled: rtt min/avg/max/mdev = 42.087/44.438/59.508/1.533 us NAPI busy loop enabled: rtt min/avg/max/mdev = 35.779/37.347/52.201/0.924 us Co-developed-by: Hao Xu <haoxu@linux.alibaba.com> Signed-off-by: Hao Xu <haoxu@linux.alibaba.com> Signed-off-by: Olivier Langlois <olivier@trillion01.com> Link: https://lore.kernel.org/r/810bd9408ffc510ff08269e78dca9df4af0b9e4e.1646777484.git.olivier@trillion01.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-03-10io_uring: minor io_cqring_wait() optimizationOlivier Langlois
Move up the block manipulating the sig variable so that the code which may encounter an error and exit early runs first, before the rest of the function executes, avoiding useless computations. Signed-off-by: Olivier Langlois <olivier@trillion01.com> Link: https://lore.kernel.org/r/84513f7cc1b1fb31d8f4cb910aee033391d036b4.1646777484.git.olivier@trillion01.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-03-10io_uring: add support for IORING_OP_MSG_RING commandJens Axboe
This adds support for IORING_OP_MSG_RING, which allows an SQE to signal another ring. That allows either waking up someone waiting on the ring, or even passing a 64-bit value via the user_data field in the CQE. sqe->fd must contain the fd of a ring that should receive the CQE. sqe->off will be propagated to the cqe->user_data on the target ring, and sqe->len will be propagated to cqe->res. The resulting CQE will have IORING_CQE_F_MSG set in its flags, to indicate that this CQE was generated from a messaging request rather than an SQE issued locally on that ring. This effectively allows passing a 64-bit and a 32-bit quantity between the two rings. This request type has the following request specific error cases: - -EBADFD. Set if the sqe->fd doesn't point to a file descriptor that is of the io_uring type. - -EOVERFLOW. Set if we were not able to deliver a request to the target ring. Signed-off-by: Jens Axboe <axboe@kernel.dk>
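A hedged userspace sketch of filling such an SQE, based only on the field mapping described above (raw SQE setup; the helper name and values are illustrative):

#include <string.h>
#include <linux/io_uring.h>

/* Illustrative only: target_ring_fd will receive a CQE whose user_data is
 * 0xdeadbeef and whose res is 42, per the sqe->off / sqe->len mapping above. */
static void prep_msg_ring(struct io_uring_sqe *sqe, int target_ring_fd)
{
	memset(sqe, 0, sizeof(*sqe));
	sqe->opcode = IORING_OP_MSG_RING;
	sqe->fd     = target_ring_fd;	/* fd of the ring to signal */
	sqe->off    = 0xdeadbeef;	/* becomes cqe->user_data on the target */
	sqe->len    = 42;		/* becomes cqe->res on the target */
}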
2022-03-10a.out: Stop building a.out/osf1 support on alpha and m68kEric W. Biederman
There has been repeated discussion on removing a.out support, the most recent being [1]. Having read through a bunch of the discussion it looks like no one has seen any reason why we need to keep a.out support. The m68k maintainer has even come out in favor of removing a.out support [2]. At a practical level, with only two rarely used architectures building a.out support, it gets increasingly hard to test and to care about, which means the code will almost certainly bit-rot. Let's see if anyone cares about a.out support on the last two architectures that build it, by disabling the build of the support in Kconfig. If anyone cares, this can be easily reverted, and we can then have a discussion about what it is going to take to support a.out binaries in the long term. [1] https://lkml.kernel.org/r/20220113160115.5375-1-bp@alien8.de [2] https://lkml.kernel.org/r/CAMuHMdUbTNNr16YY1TFe=-uRLjg6yGzgw_RqtAFpyhnOMM5Pvw@mail.gmail.com Cc: Richard Henderson <rth@twiddle.net> Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru> Cc: Matt Turner <mattst88@gmail.com> Cc: linux-alpha@vger.kernel.org Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: linux-m68k@lists.linux-m68k.org Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com> Signed-off-by: Kees Cook <keescook@chromium.org> Link: https://lore.kernel.org/r/87ilsmdhb5.fsf_-_@email.froward.int.ebiederm.org Acked-by: Geert Uytterhoeven <geert@linux-m68k.org> Link: https://lore.kernel.org/lkml/CAMuHMdVLyu6LNONJa1QcMGv__bWSCRvVq9haD7=fOm1k5O3Pnw@mail.gmail.com
2022-03-10Merge tag 'mvebu-fixes-5.17-2' of ↵Arnd Bergmann
git://git.kernel.org/pub/scm/linux/kernel/git/gclement/mvebu into arm/fixes mvebu fixes for 5.17 (part 2) Allow using old PCIe card on Armada 37xx * tag 'mvebu-fixes-5.17-2' of git://git.kernel.org/pub/scm/linux/kernel/git/gclement/mvebu: arm64: dts: marvell: armada-37xx: Remap IO space to bus address 0x0 Link: https://lore.kernel.org/r/87bkydj4fn.fsf@BL-laptop Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2022-03-10arm64: dts: marvell: armada-37xx: Remap IO space to bus address 0x0Pali Rohár
Legacy and old PCI I/O based cards do not support 32-bit I/O addressing. Since commit 64f160e19e92 ("PCI: aardvark: Configure PCIe resources from 'ranges' DT property") the kernel can set one PCIe address on the CPU side and a different one on the bus side for the one A37xx address mapping, without any firmware support, as long as the bus address does not conflict with another A37xx mapping. So remap I/O space to the bus address 0x0 to enable support for old legacy I/O port based cards which have hardcoded I/O ports in the low address space. Note that DDR on A37xx is mapped to bus address 0x0. The mapping of I/O space can be set to address 0x0 too because MEM space and I/O space are separate and so do not conflict. Remapping IO space on Turris Mox to a different address is not possible due to a bootloader bug. Signed-off-by: Pali Rohár <pali@kernel.org> Reported-by: Arnd Bergmann <arnd@arndb.de> Fixes: 76f6386b25cc ("arm64: dts: marvell: Add Aardvark PCIe support for Armada 3700") Cc: stable@vger.kernel.org # 64f160e19e92 ("PCI: aardvark: Configure PCIe resources from 'ranges' DT property") Cc: stable@vger.kernel.org # 514ef1e62d65 ("arm64: dts: marvell: armada-37xx: Extend PCIe MEM space") Reviewed-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Gregory CLEMENT <gregory.clement@bootlin.com>
2022-03-10io_uring: speedup provided buffer handlingJens Axboe
In testing high frequency workloads with provided buffers, we spend a lot of time in allocating and freeing the buffer units themselves. Rather than repeatedly free and alloc them, add a recycling cache instead. There are two caches: - ctx->io_buffers_cache. This is the one we grab from in the submission path, and it's protected by ctx->uring_lock. For inline completions, we can recycle straight back to this cache and not need any extra locking. - ctx->io_buffers_comp. If we're not under uring_lock, then we use this list to recycle buffers. It's protected by the completion_lock. On adding a new buffer, check io_buffers_cache. If it's empty, check if we can splice entries from the io_buffers_comp_cache. This reduces about 5-10% of overhead from provided buffers, bringing it pretty close to the non-provided path. Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-03-10io_uring: add support for registering ring file descriptorsJens Axboe
Lots of workloads use multiple threads, in which case the file table is shared between them. This makes getting and putting the ring file descriptor for each io_uring_enter(2) system call more expensive, as it involves an atomic get and put for each call. Similarly to how we allow registering normal file descriptors to avoid this overhead, add support for an io_uring_register(2) API that allows registering the ring fds themselves: 1) IORING_REGISTER_RING_FDS - takes an array of io_uring_rsrc_update structs, and registers them with the task. 2) IORING_UNREGISTER_RING_FDS - takes an array of io_uring_rsrc_update structs, and unregisters them. When a ring fd is registered, it is internally represented by an offset. This offset is returned to the application, and the application then uses this offset and sets IORING_ENTER_REGISTERED_RING for the io_uring_enter(2) system call. This works just like using a registered file descriptor, rather than a real one, in an SQE, where IOSQE_FIXED_FILE gets set to tell io_uring that we're using an internal offset/descriptor rather than a real file descriptor. In initial testing, this provides a nice bump in performance for threaded applications in real world cases where the batch count (e.g. the number of requests submitted per io_uring_enter(2) invocation) is low. In a microbenchmark, submitting NOP requests, we see the following increases in performance: Requests per syscall Baseline Registered Increase ---------------------------------------------------------------- 1 ~7030K ~8080K +15% 2 ~13120K ~14800K +13% 4 ~22740K ~25300K +11% Co-developed-by: Xiaoguang Wang <xiaoguang.wang@linux.alibaba.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
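A hedged userspace sketch of the flow described above, using raw syscalls (error handling omitted; ring_fd, to_submit, min_complete and enter_flags are placeholders, and the -1U "pick any slot" convention is an assumption here):

#include <unistd.h>
#include <sys/syscall.h>
#include <linux/io_uring.h>

/* Illustrative only: register the ring fd, then use the returned offset
 * together with IORING_ENTER_REGISTERED_RING instead of the real fd. */
struct io_uring_rsrc_update reg = {
	.offset = -1U,				/* assumed: let the kernel pick a slot */
	.data   = (unsigned long long)ring_fd,	/* the real ring fd to register */
};

syscall(__NR_io_uring_register, ring_fd, IORING_REGISTER_RING_FDS, &reg, 1);

syscall(__NR_io_uring_enter, reg.offset, to_submit, min_complete,
	enter_flags | IORING_ENTER_REGISTERED_RING, NULL, 0);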
2022-03-10io_uring: documentation fixupDylan Yudaken
Fix incorrect name reference in comment. ki_filp does not exist in the struct, but file does. Signed-off-by: Dylan Yudaken <dylany@fb.com> Link: https://lore.kernel.org/r/20220224105157.1332353-1-dylany@fb.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-03-10io_uring: do not recalculate ppos unnecessarilyDylan Yudaken
There is a slight optimisation to be had by calculating the correct pos pointer inside io_kiocb_update_pos and then using that later. It seems code size drops by a bit: 000000000000a1b0 0000000000000400 t io_read 000000000000a5b0 0000000000000319 t io_write vs 000000000000a1b0 00000000000003f6 t io_read 000000000000a5b0 0000000000000310 t io_write Signed-off-by: Dylan Yudaken <dylany@fb.com> Reviewed-by: Pavel Begunkov <asml.silence@gmail.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-03-10io_uring: update kiocb->ki_pos at execution timeDylan Yudaken
Update kiocb->ki_pos at execution time rather than in io_prep_rw(). io_prep_rw() happens before the job is enqueued to a worker and so the offset might be read multiple times before being executed once. Ensures that the file position in a set of _linked_ SQEs will be only obtained after earlier SQEs have completed, and so will include their incremented file position. Signed-off-by: Dylan Yudaken <dylany@fb.com> Reviewed-by: Pavel Begunkov <asml.silence@gmail.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-03-10io_uring: remove duplicated calls to io_kiocb_pposDylan Yudaken
io_kiocb_ppos is called in both branches, and it seems that the compiler does not fuse this. Fusing removes a few bytes from loop_rw_iter. Before: $ nm -S fs/io_uring.o | grep loop_rw_iter 0000000000002430 0000000000000124 t loop_rw_iter After: $ nm -S fs/io_uring.o | grep loop_rw_iter 0000000000002430 000000000000010d t loop_rw_iter Signed-off-by: Dylan Yudaken <dylany@fb.com> Reviewed-by: Pavel Begunkov <asml.silence@gmail.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-03-10io_uring: Remove unneeded test in io_run_task_work_sig()Olivier Langlois
Avoid testing TIF_NOTIFY_SIGNAL twice by calling task_sigpending() directly from io_run_task_work_sig() Signed-off-by: Olivier Langlois <olivier@trillion01.com> Link: https://lore.kernel.org/r/bd7c0495f7656e803e5736708591bb665e6eaacd.1645041650.git.olivier@trillion01.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
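A rough sketch of the resulting shape (not necessarily the exact upstream code):

/* io_run_task_work() already processes TIF_NOTIFY_SIGNAL work, so afterwards
 * a direct task_sigpending() check is enough; no need to test the flag twice. */
static inline int io_run_task_work_sig(void)
{
	if (io_run_task_work())
		return 1;
	if (task_sigpending(current))
		return -EINTR;
	return 0;
}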
2022-03-10io-uring: Make tracepoints consistent.Stefan Roesch
This makes the io-uring tracepoints consistent. Where it makes sense the tracepoints start with the following four fields: - context (ring) - request - user_data - opcode. Signed-off-by: Stefan Roesch <shr@fb.com> Link: https://lore.kernel.org/r/20220214180430.70572-3-shr@fb.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-03-10io-uring: add __fill_cqe functionStefan Roesch
This introduces the __fill_cqe function. This is necessary to correctly issue the io_uring_complete tracepoint. Signed-off-by: Stefan Roesch <shr@fb.com> Link: https://lore.kernel.org/r/20220214180430.70572-2-shr@fb.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-03-10io-wq: use IO_WQ_ACCT_NR rather than hardcoded numberHao Xu
It's better to use the defined enum value rather than a hardcoded number to size the array. Signed-off-by: Hao Xu <haoxu@linux.alibaba.com> Link: https://lore.kernel.org/r/20220206095241.121485-4-haoxu@linux.alibaba.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
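The idea in miniature (a sketch; struct layout abbreviated):

enum {
	IO_WQ_ACCT_BOUND,
	IO_WQ_ACCT_UNBOUND,
	IO_WQ_ACCT_NR,
};

struct io_wqe {
	/* was: struct io_wqe_acct acct[2]; */
	struct io_wqe_acct acct[IO_WQ_ACCT_NR];
	/* ... */
};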
2022-03-10io-wq: reduce acct->lock crossing functions lock/unlockHao Xu
Reduce taking and releasing acct->lock across different functions to make the code clearer. Signed-off-by: Hao Xu <haoxu@linux.alibaba.com> Link: https://lore.kernel.org/r/20220206095241.121485-3-haoxu@linux.alibaba.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-03-10io-wq: decouple work_list protection from the big wqe->lockHao Xu
wqe->lock is abused: it now protects acct->work_list, hash stuff, nr_workers, wqe->free_list and so on. Let's first get the work_list out of the wqe->lock mess by introducing a specific lock for the work list. This is the first step to solving the huge contention between work insertion and work consumption. Good things: - split locking for bound and unbound work lists - reduced contention between work_list visits and the (worker's) free_list. For the hash stuff, since there won't be a work with the same file in both the bound and unbound work lists, they won't visit the same hash entry, so it works well to use the new lock to protect the hash stuff. Results: set max_unbound_worker = 4, test with echo-server: nice -n -15 ./io_uring_echo_server -p 8081 -f -n 1000 -l 16 (-n connection, -l workload) before this patch: Samples: 2M of event 'cycles:ppp', Event count (approx.): 1239982111074 Overhead Command Shared Object Symbol 28.59% iou-wrk-10021 [kernel.vmlinux] [k] native_queued_spin_lock_slowpath 8.89% io_uring_echo_s [kernel.vmlinux] [k] native_queued_spin_lock_slowpath 6.20% iou-wrk-10021 [kernel.vmlinux] [k] _raw_spin_lock 2.45% io_uring_echo_s [kernel.vmlinux] [k] io_prep_async_work 2.36% iou-wrk-10021 [kernel.vmlinux] [k] _raw_spin_lock_irqsave 2.29% iou-wrk-10021 [kernel.vmlinux] [k] io_worker_handle_work 1.29% io_uring_echo_s [kernel.vmlinux] [k] io_wqe_enqueue 1.06% iou-wrk-10021 [kernel.vmlinux] [k] io_wqe_worker 1.06% io_uring_echo_s [kernel.vmlinux] [k] _raw_spin_lock 1.03% iou-wrk-10021 [kernel.vmlinux] [k] __schedule 0.99% iou-wrk-10021 [kernel.vmlinux] [k] tcp_sendmsg_locked with this patch: Samples: 1M of event 'cycles:ppp', Event count (approx.): 708446691943 Overhead Command Shared Object Symbol 16.86% iou-wrk-10893 [kernel.vmlinux] [k] native_queued_spin_lock_slowpat 9.10% iou-wrk-10893 [kernel.vmlinux] [k] _raw_spin_lock 4.53% io_uring_echo_s [kernel.vmlinux] [k] native_queued_spin_lock_slowpat 2.87% iou-wrk-10893 [kernel.vmlinux] [k] io_worker_handle_work 2.57% iou-wrk-10893 [kernel.vmlinux] [k] _raw_spin_lock_irqsave 2.56% io_uring_echo_s [kernel.vmlinux] [k] io_prep_async_work 1.82% io_uring_echo_s [kernel.vmlinux] [k] _raw_spin_lock 1.33% iou-wrk-10893 [kernel.vmlinux] [k] io_wqe_worker 1.26% io_uring_echo_s [kernel.vmlinux] [k] try_to_wake_up spin_lock failure drops from 25.59% + 8.89% = 34.48% to 16.86% + 4.53% = 21.39%. TPS is similar, while CPU usage drops from almost 400% to 350%. Signed-off-by: Hao Xu <haoxu@linux.alibaba.com> Link: https://lore.kernel.org/r/20220206095241.121485-2-haoxu@linux.alibaba.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-03-10io_uring: Fix use of uninitialized ret in io_eventfd_register()Nathan Chancellor
Clang warns: fs/io_uring.c:9396:9: warning: variable 'ret' is uninitialized when used here [-Wuninitialized] return ret; ^~~ fs/io_uring.c:9373:13: note: initialize the variable 'ret' to silence this warning int fd, ret; ^ = 0 1 warning generated. Just return 0 directly and reduce the scope of ret to the if statement, as that is the only place that it is used, which is how the function was before the fixes commit. Fixes: 1a75fac9a0f9 ("io_uring: avoid ring quiesce while registering/unregistering eventfd") Link: https://github.com/ClangBuiltLinux/linux/issues/1579 Signed-off-by: Nathan Chancellor <nathan@kernel.org> Reviewed-by: Nick Desaulniers <ndesaulniers@google.com> Link: https://lore.kernel.org/r/20220207162410.1013466-1-nathan@kernel.org Signed-off-by: Jens Axboe <axboe@kernel.dk>
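In sketch form, the pattern behind the warning and the fix (names are hypothetical, not the actual io_eventfd_register() code):

/* Before: 'ret' was assigned only inside the branch, yet returned at the end,
 * so one path returned an uninitialized value. Fix: narrow the variable's
 * scope to the branch that uses it and return 0 on the success path. */
if (needs_unregister) {
	int ret = do_unregister(fd);	/* hypothetical helper */

	if (ret)
		return ret;
}
return 0;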
2022-03-10io_uring: remove ring quiesce for io_uring_registerUsama Arif
None of the opcodes in io_uring_register use ring quiesce anymore. Hence io_register_op_must_quiesce always returns false and io_ctx_quiesce is never called. Signed-off-by: Usama Arif <usama.arif@bytedance.com> Link: https://lore.kernel.org/r/20220204145117.1186568-6-usama.arif@bytedance.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-03-10io_uring: avoid ring quiesce while registering restrictions and enabling ringsUsama Arif
IORING_SETUP_R_DISABLED prevents submitting requests and so there will be no requests until IORING_REGISTER_ENABLE_RINGS is called. And IORING_REGISTER_RESTRICTIONS works only before IORING_REGISTER_ENABLE_RINGS is called. Hence ring quiesce is not needed for these opcodes. Signed-off-by: Usama Arif <usama.arif@bytedance.com> Link: https://lore.kernel.org/r/20220204145117.1186568-5-usama.arif@bytedance.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-03-10io_uring: avoid ring quiesce while registering async eventfdUsama Arif
This is done using the RCU data structure (io_ev_fd). eventfd_async is moved from io_ring_ctx to io_ev_fd which is RCU protected hence avoiding ring quiesce which is much more expensive than an RCU lock. The place where eventfd_async is read is already under rcu_read_lock so there is no extra RCU read-side critical section needed. Signed-off-by: Usama Arif <usama.arif@bytedance.com> Link: https://lore.kernel.org/r/20220204145117.1186568-4-usama.arif@bytedance.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-03-10io_uring: avoid ring quiesce while registering/unregistering eventfdUsama Arif
This is done by creating a new RCU data structure (io_ev_fd) as part of io_ring_ctx that holds the eventfd_ctx. The function io_eventfd_signal is executed under rcu_read_lock with a single rcu_dereference to io_ev_fd so that if another thread unregisters the eventfd while io_eventfd_signal is still being executed, the eventfd_signal for which io_eventfd_signal was called completes successfully. The process of registering/unregistering eventfd is already done under uring_lock so multiple threads won't enter a race condition while registering/unregistering eventfd. With the above approach ring quiesce, which is much more expensive than using an RCU lock, can be avoided. On the system tested, io_uring_register with IORING_REGISTER_EVENTFD takes less than 1ms with RCU lock, compared to 15ms before with ring quiesce. Signed-off-by: Usama Arif <usama.arif@bytedance.com> Link: https://lore.kernel.org/r/20220204145117.1186568-3-usama.arif@bytedance.com [axboe: long line fixups] Signed-off-by: Jens Axboe <axboe@kernel.dk>
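A hedged sketch of the RCU pattern described above (struct and field names are approximations, not a verbatim copy of the upstream code):

struct io_ev_fd {
	struct eventfd_ctx *cq_ev_fd;
	struct rcu_head rcu;
};

static void io_eventfd_signal(struct io_ring_ctx *ctx)
{
	struct io_ev_fd *ev_fd;

	rcu_read_lock();
	ev_fd = rcu_dereference(ctx->io_ev_fd);	/* published with rcu_assign_pointer() */
	if (ev_fd)
		eventfd_signal(ev_fd->cq_ev_fd, 1);
	rcu_read_unlock();
}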
2022-03-10io_uring: remove trace for eventfdUsama Arif
The information on whether eventfd is registered is not very useful and would result in the tracepoint being enclosed in an rcu_read_lock() section in a later patch that tries to avoid ring quiesce for registering eventfd. Suggested-by: Jens Axboe <axboe@kernel.dk> Signed-off-by: Usama Arif <usama.arif@bytedance.com> Link: https://lore.kernel.org/r/20220204145117.1186568-2-usama.arif@bytedance.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2022-03-10vhost: allow batching hint without sizeJason Wang
Commit e2ae38cf3d91 ("vhost: fix hung thread due to erroneous iotlb entries") tries to reject IOTLB messages whose size is zero. But the size is not necessarily meaningful, one example being the batching hint, so the commit breaks that. Fix this by rejecting zero-size messages only if the message is used to update/invalidate the IOTLB. Fixes: e2ae38cf3d91 ("vhost: fix hung thread due to erroneous iotlb entries") Reported-by: Eli Cohen <elic@nvidia.com> Cc: Anirudh Rayabharam <mail@anirudhrb.com> Signed-off-by: Jason Wang <jasowang@redhat.com> Link: https://lore.kernel.org/r/20220310075211.4801-1-jasowang@redhat.com Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Tested-by: Eli Cohen <elic@nvidia.com>
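The shape of the described check, in isolation (a sketch, not a verbatim diff):

/* Only reject a zero-sized message when it actually updates or invalidates
 * IOTLB entries; batching hints carry no meaningful size and must pass. */
if (msg->size == 0 && (msg->type == VHOST_IOTLB_UPDATE ||
		       msg->type == VHOST_IOTLB_INVALIDATE))
	return -EINVAL;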
2022-03-10spi: rockchip-sfc: fix platform_get_irq.cocci warningYihao Han
Remove dev_err() messages after platform_get_irq*() failures. platform_get_irq() already prints an error. Generated by: scripts/coccinelle/api/platform_get_irq.cocci Signed-off-by: Yihao Han <hanyihao@vivo.com> Link: https://lore.kernel.org/r/20220310094806.13734-1-hanyihao@vivo.com Signed-off-by: Mark Brown <broonie@kernel.org>
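The generic pattern the coccinelle script removes, in sketch form (not the rockchip-sfc code verbatim):

irq = platform_get_irq(pdev, 0);
if (irq < 0) {
	dev_err(dev, "failed to get irq\n");	/* redundant: platform_get_irq() already logs */
	return irq;
}

/* after the cleanup, simply propagate the error */
irq = platform_get_irq(pdev, 0);
if (irq < 0)
	return irq;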
2022-03-10Merge tag 'spi-fix-v5.17-rc7' of ↵Linus Torvalds
git://git.kernel.org/pub/scm/linux/kernel/git/broonie/spi Pull spi fix from Mark Brown: "One fix for type conversion issues when working out maximum scatter/gather segment sizes. It caused problems for some systems where the limits overflow due to the type conversion" * tag 'spi-fix-v5.17-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/spi: spi: Fix invalid sgs value
2022-03-10ARM: fix build warning in proc-v7-bugs.cRussell King (Oracle)
The kernel test robot discovered that building without HARDEN_BRANCH_PREDICTOR issues a warning due to a missing argument to pr_info(). Add the missing argument. Reported-by: kernel test robot <lkp@intel.com> Fixes: 9dd78194a372 ("ARM: report Spectre v2 status through sysfs") Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2022-03-10Merge tag 'gpio-fixes-for-v5.17' of ↵Linus Torvalds
git://git.kernel.org/pub/scm/linux/kernel/git/brgl/linux Pull gpio fixes from Bartosz Golaszewski: - fix a probe failure for Tegra241 GPIO controller in gpio-tegra186 - revert changes that caused a regression in the sysfs user-space interface - correct the debounce time conversion in GPIO ACPI - statify a struct in gpio-sim and fix a typo - update registers in correct order (hardware quirk) in gpio-ts4900 * tag 'gpio-fixes-for-v5.17' of git://git.kernel.org/pub/scm/linux/kernel/git/brgl/linux: gpio: sim: fix a typo gpio: ts4900: Do not set DAT and OE together gpio: sim: Declare gpio_sim_hog_config_item_ops static gpiolib: acpi: Convert ACPI value of debounce to microseconds gpio: Revert regression in sysfs-gpio (gpiolib.c) gpio: tegra186: Add IRQ per bank for Tegra241
2022-03-10docs: sysfs-devices-system-cpu: document "asymm" value for mte_tcf_preferredEvgenii Stepanov
It was added in commit 766121ba5de3 ("arm64/mte: Add userspace interface for enabling asymmetric mode"). Signed-off-by: Evgenii Stepanov <eugenis@google.com> Reviewed-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20220309215943.87831-1-eugenis@google.com Signed-off-by: Will Deacon <will@kernel.org>
2022-03-10PCI: apple: Change MSI handling to handle 4-cell AIC fwspec formHector Martin
AIC2 changes the IRQ fwspec to add a cell. Always use the second-to-last cell for the MSI handling, so it will work for both AIC1 and AIC2 devices. Signed-off-by: Hector Martin <marcan@marcan.st> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20220309192123.152028-2-marcan@marcan.st
2022-03-10gpio: sim: fix a typoBartosz Golaszewski
Just noticed this when applying Andy's patch. s/childred/children/ Fixes: cb8c474e79be ("gpio: sim: new testing module") Signed-off-by: Bartosz Golaszewski <brgl@bgdev.pl> Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
2022-03-10gpio: ts4900: Do not set DAT and OE togetherMark Featherston
This works around an issue with the hardware where both OE and DAT are exposed in the same register. If both are updated simultaneously, the hardware makes no guarantees that OE or DAT will actually change in any given order, and this may result in a glitch of a few ns on a GPIO pin when changing direction and value in a single write. Setting direction to input now only affects the OE bit. Setting direction to output updates DAT first, then OE. Fixes: 9c6686322d74 ("gpio: add Technologic I2C-FPGA gpio support") Signed-off-by: Mark Featherston <mark@embeddedTS.com> Signed-off-by: Kris Bahnsen <kris@embeddedTS.com> Signed-off-by: Bartosz Golaszewski <brgl@bgdev.pl>
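A hedged sketch of the ordering described above; the regmap calls and bit names are illustrative, not taken verbatim from the driver:

/* For output: write the data bit first, then enable the output driver, so the
 * pin never briefly drives a stale value. Input only touches the OE bit. */
if (value)
	regmap_set_bits(priv->regmap, offset, TS4900_GPIO_OUT);
else
	regmap_clear_bits(priv->regmap, offset, TS4900_GPIO_OUT);
regmap_set_bits(priv->regmap, offset, TS4900_GPIO_OE);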
2022-03-10Merge branch irq/aic-pmu into irq/irqchip-nextMarc Zyngier
* irq/aic-pmu: : . : Prefix branch for the M1 PMU support, adding the required : irqchip changes. Shared with the arm64 tree. : . irqchip/apple-aic: Fix cpumask allocation for FIQs irqchip/apple-aic: Move PMU-specific registers to their own include file arm64: dts: apple: Add t8303 PMU nodes arm64: dts: apple: Add t8103 PMU interrupt affinities irqchip/apple-aic: Wire PMU interrupts irqchip/apple-aic: Parse FIQ affinities from device-tree dt-bindings: apple,aic: Add affinity description for per-cpu pseudo-interrupts dt-bindings: apple,aic: Add CPU PMU per-cpu pseudo-interrupts dt-bindings: arm-pmu: Document Apple PMU compatible strings Signed-off-by: Marc Zyngier <maz@kernel.org>
2022-03-10irqchip/apple-aic: Fix cpumask allocation for FIQsMarc Zyngier
An embarrassing typo: allocating a pointer instead of the object pointed to. No harm done, as the pointer is large enough for what we are using the object for, but still... Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20220310050238.4478-1-guozhengkui@vivo.com
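The general class of typo being fixed, in miniature (illustrative, not the AIC code):

cpumask_t *mask;

mask = kzalloc(sizeof(mask), GFP_KERNEL);	/* wrong: size of the pointer */
mask = kzalloc(sizeof(*mask), GFP_KERNEL);	/* right: size of the cpumask */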
2022-03-10af_key: add __GFP_ZERO flag for compose_sadb_supported in function ↵Haimin Zhang
pfkey_register Add the __GFP_ZERO flag for compose_sadb_supported in function pfkey_register to initialize the buffer of supp_skb and fix a kernel-info-leak issue. 1) Function pfkey_register calls compose_sadb_supported to request a sk_buff. 2) compose_sadb_supported calls alloc_skb to allocate a sk_buff, but it doesn't zero it. 3) If auth_len is greater than 0, then compose_sadb_supported treats the memory as a struct sadb_supported and begins to initialize. But it just initializes the fields sadb_supported_len and sadb_supported_exttype without the field sadb_supported_reserved. Reported-by: TCS Robot <tcs_robot@tencent.com> Signed-off-by: Haimin Zhang <tcs_kernel@tencent.com> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
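In sketch form, the fix amounts to passing __GFP_ZERO down to the skb allocation (the call shape is an assumption based on the description above):

/* Zero the allocated buffer so sadb_supported_reserved and any other bytes
 * not explicitly initialized cannot leak stale kernel memory to userspace. */
supp_skb = compose_sadb_supported(hdr, GFP_KERNEL | __GFP_ZERO);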
2022-03-09Merge tag 'clk-fixes-for-linus' of ↵Linus Torvalds
git://git.kernel.org/pub/scm/linux/kernel/git/clk/linux Pull clk fixes from Stephen Boyd: "One more small batch of clk driver fixes: - A fix for the Qualcomm GDSC power domain delays that avoids black screens at boot on some more recent SoCs that use a different delay than the hard-coded delays in the driver. - A build fix LAN966X clk driver that let it be built on architectures that didn't have IOMEM" * tag 'clk-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/clk/linux: clk: lan966x: Fix linking error clk: qcom: dispcc: Update the transition delay for MDSS GDSC clk: qcom: gdsc: Add support to update GDSC transition delay
2022-03-09Merge tag 'xsa396-5.17-tag' of ↵Linus Torvalds
git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip Pull xen fixes from Juergen Gross: "Several Linux PV device frontends are using the grant table interfaces for removing access rights of the backends in ways being subject to race conditions, resulting in potential data leaks, data corruption by malicious backends, and denial of service triggered by malicious backends: - blkfront, netfront, scsifront and the gntalloc driver are testing whether a grant reference is still in use. If this is not the case, they assume that a following removal of the granted access will always succeed, which is not true in case the backend has mapped the granted page between those two operations. As a result the backend can keep access to the memory page of the guest no matter how the page will be used after the frontend I/O has finished. The xenbus driver has a similar problem, as it doesn't check the success of removing the granted access of a shared ring buffer. - blkfront, netfront, scsifront, usbfront, dmabuf, xenbus, 9p, kbdfront, and pvcalls are using a functionality to delay freeing a grant reference until it is no longer in use, but the freeing of the related data page is not synchronized with dropping the granted access. As a result the backend can keep access to the memory page even after it has been freed and then re-used for a different purpose. - netfront will fail a BUG_ON() assertion if it fails to revoke access in the rx path. This will result in a Denial of Service (DoS) situation of the guest which can be triggered by the backend" * tag 'xsa396-5.17-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip: xen/netfront: react properly to failing gnttab_end_foreign_access_ref() xen/gnttab: fix gnttab_end_foreign_access() without page specified xen/pvcalls: use alloc/free_pages_exact() xen/9p: use alloc/free_pages_exact() xen/usb: don't use gnttab_end_foreign_access() in xenhcd_gnttab_done() xen: remove gnttab_query_foreign_access() xen/gntalloc: don't use gnttab_query_foreign_access() xen/scsifront: don't use gnttab_query_foreign_access() for mapped status xen/netfront: don't use gnttab_query_foreign_access() for mapped status xen/blkfront: don't use gnttab_query_foreign_access() for mapped status xen/grant-table: add gnttab_try_end_foreign_access() xen/xenbus: don't let xenbus_grant_ring() remove grants in error case
2022-03-09Merge branch 'selftests-pmtu-sh-fix-cleanup-of-processes-launched-in-subshell'Jakub Kicinski
Guillaume Nault says: ==================== selftests: pmtu.sh: Fix cleanup of processes launched in subshell. Depending on the options used, pmtu.sh may launch tcpdump and nettest processes in the background. However it fails to clean them up after the tests complete. Patch 1 allows the cleanup() function to read the list of PIDs launched by the tests. Patch 2 fixes the way the nettest PIDs are retrieved. ==================== Link: https://lore.kernel.org/r/cover.1646776561.git.gnault@redhat.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-03-09selftests: pmtu.sh: Kill nettest processes launched in subshell.Guillaume Nault
When using "run_cmd <command> &", then "$!" refers to the PID of the subshell used to run <command>, not the command itself. Therefore nettest_pids actually doesn't contain the list of the nettest commands running in the background. So cleanup() can't kill them and the nettest processes run until completion (fortunately they have a 5s timeout). Fix this by defining a new command for running processes in the background, for which "$!" really refers to the PID of the command run. Also, double quote variables on the modified lines, to avoid shellcheck warnings. Fixes: ece1278a9b81 ("selftests: net: add ESP-in-UDP PMTU test") Signed-off-by: Guillaume Nault <gnault@redhat.com> Reviewed-by: Shuah Khan <skhan@linuxfoundation.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-03-09selftests: pmtu.sh: Kill tcpdump processes launched by subshell.Guillaume Nault
The cleanup() function takes care of killing processes launched by the test functions. It relies on variables like ${tcpdump_pids} to get the relevant PIDs. But tests are run in their own subshell, so updated *_pids values are invisible to other shells. Therefore cleanup() never sees any process to kill: $ ./tools/testing/selftests/net/pmtu.sh -t pmtu_ipv4_exception TEST: ipv4: PMTU exceptions [ OK ] TEST: ipv4: PMTU exceptions - nexthop objects [ OK ] $ pgrep -af tcpdump 6084 tcpdump -s 0 -i veth_A-R1 -w pmtu_ipv4_exception_veth_A-R1.pcap 6085 tcpdump -s 0 -i veth_R1-A -w pmtu_ipv4_exception_veth_R1-A.pcap 6086 tcpdump -s 0 -i veth_R1-B -w pmtu_ipv4_exception_veth_R1-B.pcap 6087 tcpdump -s 0 -i veth_B-R1 -w pmtu_ipv4_exception_veth_B-R1.pcap 6088 tcpdump -s 0 -i veth_A-R2 -w pmtu_ipv4_exception_veth_A-R2.pcap 6089 tcpdump -s 0 -i veth_R2-A -w pmtu_ipv4_exception_veth_R2-A.pcap 6090 tcpdump -s 0 -i veth_R2-B -w pmtu_ipv4_exception_veth_R2-B.pcap 6091 tcpdump -s 0 -i veth_B-R2 -w pmtu_ipv4_exception_veth_B-R2.pcap 6228 tcpdump -s 0 -i veth_A-R1 -w pmtu_ipv4_exception_veth_A-R1.pcap 6229 tcpdump -s 0 -i veth_R1-A -w pmtu_ipv4_exception_veth_R1-A.pcap 6230 tcpdump -s 0 -i veth_R1-B -w pmtu_ipv4_exception_veth_R1-B.pcap 6231 tcpdump -s 0 -i veth_B-R1 -w pmtu_ipv4_exception_veth_B-R1.pcap 6232 tcpdump -s 0 -i veth_A-R2 -w pmtu_ipv4_exception_veth_A-R2.pcap 6233 tcpdump -s 0 -i veth_R2-A -w pmtu_ipv4_exception_veth_R2-A.pcap 6234 tcpdump -s 0 -i veth_R2-B -w pmtu_ipv4_exception_veth_R2-B.pcap 6235 tcpdump -s 0 -i veth_B-R2 -w pmtu_ipv4_exception_veth_B-R2.pcap Fix this by running cleanup() in the context of the test subshell. Now that each test cleans the environment after completion, there's no need for calling cleanup() again when the next test starts. So let's drop it from the setup() function. This is okay because cleanup() is also called when pmtu.sh starts, so even the first test starts in a clean environment. Also, use tcpdump's immediate mode. Otherwise it might not have time to process buffered packets, resulting in missing packets or even empty pcap files for short tests. Note: PAUSE_ON_FAIL is still evaluated before cleanup(), so one can still inspect the test environment upon failure when using -p. Fixes: a92a0a7b8e7c ("selftests: pmtu: Simplify cleanup and namespace names") Signed-off-by: Guillaume Nault <gnault@redhat.com> Reviewed-by: Shuah Khan <skhan@linuxfoundation.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>