Age  Commit message  Author
2023-02-14  Merge branch 'net-make-kobj_type-structures-constant'  (Jakub Kicinski)
Thomas Weißschuh says: ==================== net: make kobj_type structures constant Since commit ee6d3dd4ed48 ("driver core: make kobj_type constant.") the driver core allows the usage of const struct kobj_type. Take advantage of this to constify the structure definitions to prevent modification at runtime. ==================== Link: https://lore.kernel.org/r/20230211-kobj_type-net-v2-0-013b59e59bf3@weissschuh.net Signed-off-by: Jakub Kicinski <kuba@kernel.org>
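A minimal sketch of the change pattern these patches apply (the identifiers below are illustrative examples, not taken from a specific patched file): making the kobj_type instance const lets it be placed in read-only data.

        #include <linux/kobject.h>

        static void example_release(struct kobject *kobj)
        {
                /* free the object embedding this kobject */
        }

        /* Previously "static struct kobj_type ..."; const is allowed since ee6d3dd4ed48. */
        static const struct kobj_type example_ktype = {
                .release        = example_release,
                .sysfs_ops      = &kobj_sysfs_ops,
        };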
2023-02-14  net-sysfs: make kobj_type structures constant  (Thomas Weißschuh)
Since commit ee6d3dd4ed48 ("driver core: make kobj_type constant.") the driver core allows the usage of const struct kobj_type. Take advantage of this to constify the structure definitions to prevent modification at runtime. Signed-off-by: Thomas Weißschuh <linux@weissschuh.net> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-02-14  net: bridge: make kobj_type structure constant  (Thomas Weißschuh)
Since commit ee6d3dd4ed48 ("driver core: make kobj_type constant.") the driver core allows the usage of const struct kobj_type. Take advantage of this to constify the structure definition to prevent modification at runtime. Signed-off-by: Thomas Weißschuh <linux@weissschuh.net> Acked-by: Nikolay Aleksandrov <razor@blackwall.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-02-14  tipc: fix kernel warning when sending SYN message  (Tung Nguyen)
When sending a SYN message, this kernel stack trace is observed: ... [ 13.396352] RIP: 0010:_copy_from_iter+0xb4/0x550 ... [ 13.398494] Call Trace: [ 13.398630] <TASK> [ 13.398630] ? __alloc_skb+0xed/0x1a0 [ 13.398630] tipc_msg_build+0x12c/0x670 [tipc] [ 13.398630] ? shmem_add_to_page_cache.isra.71+0x151/0x290 [ 13.398630] __tipc_sendmsg+0x2d1/0x710 [tipc] [ 13.398630] ? tipc_connect+0x1d9/0x230 [tipc] [ 13.398630] ? __local_bh_enable_ip+0x37/0x80 [ 13.398630] tipc_connect+0x1d9/0x230 [tipc] [ 13.398630] ? __sys_connect+0x9f/0xd0 [ 13.398630] __sys_connect+0x9f/0xd0 [ 13.398630] ? preempt_count_add+0x4d/0xa0 [ 13.398630] ? fpregs_assert_state_consistent+0x22/0x50 [ 13.398630] __x64_sys_connect+0x16/0x20 [ 13.398630] do_syscall_64+0x42/0x90 [ 13.398630] entry_SYSCALL_64_after_hwframe+0x63/0xcd It is because commit a41dad905e5a ("iov_iter: saner checks for attempt to copy to/from iterator") introduced sanity checks for copying from/to an iov iterator. A missing copy direction, from the iterator's viewpoint, leads to the kernel stack trace above. This commit fixes the issue by initializing the iov iterator with the correct copy direction when sending a SYN or ACK without data. Fixes: f25dcc7687d4 ("tipc: tipc ->sendmsg() conversion") Reported-by: syzbot+d43608d061e8847ec9f3@syzkaller.appspotmail.com Acked-by: Jon Maloy <jmaloy@redhat.com> Signed-off-by: Tung Nguyen <tung.q.nguyen@dektech.com.au> Link: https://lore.kernel.org/r/20230214012606.5804-1-tung.q.nguyen@dektech.com.au Signed-off-by: Jakub Kicinski <kuba@kernel.org>
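A hedged sketch of the fix's idea (the exact call sites in net/tipc/socket.c may differ): give the empty message iterator an explicit copy direction before tipc_msg_build() runs, so the new iov_iter sanity checks are satisfied for dataless SYN/ACK messages.

        struct msghdr m = { };

        /* Dataless SYN/ACK: no kvec, zero length, but the direction must be set. */
        iov_iter_kvec(&m.msg_iter, ITER_SOURCE, NULL, 0, 0);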
2023-02-14  igb: Fix PPS input and output using 3rd and 4th SDP  (Miroslav Lichvar)
Fix handling of the tsync interrupt to compare the pin number with IGB_N_SDP instead of IGB_N_EXTTS/IGB_N_PEROUT and fix the indexing to the perout array. Fixes: cf99c1dd7b77 ("igb: move PEROUT and EXTTS isr logic to separate functions") Reported-by: Matt Corallo <ntp-lists@mattcorallo.com> Signed-off-by: Miroslav Lichvar <mlichvar@redhat.com> Reviewed-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Gurucharan G <gurucharanx.g@intel.com> (A Contingent worker at Intel) Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com> Link: https://lore.kernel.org/r/20230213185822.3960072-1-anthony.l.nguyen@intel.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
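A hedged illustration of the check described above (the helper name is hypothetical; the constants are the ones named in the commit): validate the software-defined pin against all SDP pins rather than against the number of EXTTS/PEROUT slots.

        /* Hypothetical helper; IGB_N_SDP covers all four SDP pins. */
        static bool igb_sdp_pin_valid(int pin)
        {
                return pin >= 0 && pin < IGB_N_SDP;     /* not IGB_N_EXTTS / IGB_N_PEROUT */
        }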
2023-02-14  Merge branch '100GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/net-queue  (Jakub Kicinski)
Tony Nguyen says: ==================== Intel Wired LAN Driver Updates 2023-02-13 (ice) This series contains updates to ice driver only. Michal fixes check of scheduling node weight and priority to be done against desired value, not current value. Jesse adds setting of all multicast when adding promiscuous mode to resolve traffic being lost due to filter settings. * '100GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/net-queue: ice: fix lost multicast packets in promisc mode ice: Fix check for weight and priority of a scheduling node ==================== Link: https://lore.kernel.org/r/20230213185259.3959224-1-anthony.l.nguyen@intel.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-02-14  Merge branch 'net-ipa-define-gsi-register-fields-differently'  (Jakub Kicinski)
Alex Elder says: ==================== net: ipa: define GSI register fields differently Now that we have "reg" definitions in place to define GSI register offsets, add the definitions for the fields of GSI registers that have them. There aren't many differences between versions, but a few fields are present only in some versions of IPA, so additional "gsi_reg-vX.Y.c" files are created to capture such differences. As in the previous series, these files are created as near-copies of existing files just before they're needed to represent these differences. The first patch adds files for IPA v4.0, v4.5, and v4.9; the fifth patch adds a file for IPA v4.11. Note that the first and fifth patches cause some checkpatch warnings because they align some continued lines with an open parenthesis at the fourth column. ==================== Link: https://lore.kernel.org/r/20230213162229.604438-1-elder@linaro.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-02-14  net: ipa: define fields for remaining GSI registers  (Alex Elder)
Define field IDs for the remaining GSI registers, and populate the register definition files accordingly. Use the reg_*() functions to access field values for those registers, and get rid of the previous field definition constants. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-02-14  net: ipa: add "gsi_v4.11.c"  (Alex Elder)
The next patch adds a GSI register field that is only valid starting at IPA v4.11. Create "gsi_v4.11.c" from "gsi_v4.9.c", changing only the name of the public regs structure it defines. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-02-14  net: ipa: define fields for event-ring related registers  (Alex Elder)
Define field IDs for the EV_CH_E_CNTXT_0 and EV_CH_E_CNTXT_8 GSI registers, and populate the register definition files accordingly. Use the reg_*() functions to access field values for those registers, and get rid of the previous field definition constants. The remaining EV_CH_E_CNTXT_* registers are written with full 32-bit values (and have no fields). Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-02-14  net: ipa: define more fields for GSI registers  (Alex Elder)
Beyond the CH_C_QOS register, two other registers whose offset is related to channel number have fields within them. Define the fields within the CH_C_CNTXT_0 GSI register, using an enumerated type to identify the register's fields, and define an array of field masks to use for that register's reg structure. For the CH_C_CNTXT_1 GSI register, ch_c_cntxt_1_length_encode() previously hid the difference in bit width in the channel ring length field. Instead, define a new field CH_R_LENGTH and encode the ring size with reg_encode(). Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-02-14  net: ipa: define GSI CH_C_QOS register fields  (Alex Elder)
Define the fields within the CH_C_QOS GSI register using an array of field masks in that register's reg structure. Use the reg functions for encoding values in those fields. One field in the register is present for IPA v4.0-4.2 only, two others are present starting at IPA v4.5, and one more is present starting at IPA v4.9. Drop the "GSI_" prefix in symbols defined in the gsi_prefetch_mode enumerated type, and define their values using decimal rather than hexadecimal values. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
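A hedged sketch of the pattern described here; the field names follow the commit text and the driver's "reg" layer, but the exact masks and helper signatures are approximations, not the merged code.

        #include <linux/bits.h>
        #include <linux/types.h>

        enum gsi_reg_ch_c_qos_field_id {
                WRR_WEIGHT,
                MAX_PREFETCH,           /* present on some IPA versions only */
                USE_DB_ENG,
                /* ... further fields appear at IPA v4.5 / v4.9 ... */
        };

        /* Field masks registered in the CH_C_QOS reg structure (values illustrative). */
        static const u32 reg_ch_c_qos_fmask[] = {
                [WRR_WEIGHT]    = GENMASK(3, 0),
                [MAX_PREFETCH]  = BIT(8),
                [USE_DB_ENG]    = BIT(9),
        };

        /* Callers then encode values with the generic helpers, e.g.:
         *      val |= reg_encode(reg, WRR_WEIGHT, weight);
         */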
2023-02-14  net: ipa: populate more GSI register files  (Alex Elder)
Create "gsi_v4.0.c", "gsi_v4.5.c", and "gsi_v4.9.c" as essentially identical copies of "gsi_v3.5.1.c". The only difference is the name of the exported "gsi_regs_vX_Y" structure. The next patch will start differentiating them. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2023-02-14  net: use a bounce buffer for copying skb->mark  (Eric Dumazet)
syzbot found arm64 builds would crash in sock_recv_mark() when CONFIG_HARDENED_USERCOPY=y x86 and powerpc are not detecting the issue because they define user_access_begin. This will be handled in a different patch, because a check_object_size() is missing. Only data from skb->cb[] can be copied directly to/from user space, as explained in commit 79a8a642bf05 ("net: Whitelist the skbuff_head_cache "cb" field") syzbot report was: usercopy: Kernel memory exposure attempt detected from SLUB object 'skbuff_head_cache' (offset 168, size 4)! ------------[ cut here ]------------ kernel BUG at mm/usercopy.c:102 ! Internal error: Oops - BUG: 00000000f2000800 [#1] PREEMPT SMP Modules linked in: CPU: 0 PID: 4410 Comm: syz-executor533 Not tainted 6.2.0-rc7-syzkaller-17907-g2d3827b3f393 #0 Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/21/2023 pstate: 60400005 (nZCv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--) pc : usercopy_abort+0x90/0x94 mm/usercopy.c:90 lr : usercopy_abort+0x90/0x94 mm/usercopy.c:90 sp : ffff80000fb9b9a0 x29: ffff80000fb9b9b0 x28: ffff0000c6073400 x27: 0000000020001a00 x26: 0000000000000014 x25: ffff80000cf52000 x24: fffffc0000000000 x23: 05ffc00000000200 x22: fffffc000324bf80 x21: ffff0000c92fe1a8 x20: 0000000000000001 x19: 0000000000000004 x18: 0000000000000000 x17: 656a626f2042554c x16: ffff0000c6073dd0 x15: ffff80000dbd2118 x14: ffff0000c6073400 x13: 00000000ffffffff x12: ffff0000c6073400 x11: ff808000081bbb4c x10: 0000000000000000 x9 : 7b0572d7cc0ccf00 x8 : 7b0572d7cc0ccf00 x7 : ffff80000bf650d4 x6 : 0000000000000000 x5 : 0000000000000001 x4 : 0000000000000001 x3 : 0000000000000000 x2 : ffff0001fefbff08 x1 : 0000000100000000 x0 : 000000000000006c Call trace: usercopy_abort+0x90/0x94 mm/usercopy.c:90 __check_heap_object+0xa8/0x100 mm/slub.c:4761 check_heap_object mm/usercopy.c:196 [inline] __check_object_size+0x208/0x6b8 mm/usercopy.c:251 check_object_size include/linux/thread_info.h:199 [inline] __copy_to_user include/linux/uaccess.h:115 [inline] put_cmsg+0x408/0x464 net/core/scm.c:238 sock_recv_mark net/socket.c:975 [inline] __sock_recv_cmsgs+0x1fc/0x248 net/socket.c:984 sock_recv_cmsgs include/net/sock.h:2728 [inline] packet_recvmsg+0x2d8/0x678 net/packet/af_packet.c:3482 ____sys_recvmsg+0x110/0x3a0 ___sys_recvmsg net/socket.c:2737 [inline] __sys_recvmsg+0x194/0x210 net/socket.c:2767 __do_sys_recvmsg net/socket.c:2777 [inline] __se_sys_recvmsg net/socket.c:2774 [inline] __arm64_sys_recvmsg+0x2c/0x3c net/socket.c:2774 __invoke_syscall arch/arm64/kernel/syscall.c:38 [inline] invoke_syscall+0x64/0x178 arch/arm64/kernel/syscall.c:52 el0_svc_common+0xbc/0x180 arch/arm64/kernel/syscall.c:142 do_el0_svc+0x48/0x110 arch/arm64/kernel/syscall.c:193 el0_svc+0x58/0x14c arch/arm64/kernel/entry-common.c:637 el0t_64_sync_handler+0x84/0xf0 arch/arm64/kernel/entry-common.c:655 el0t_64_sync+0x190/0x194 arch/arm64/kernel/entry.S:591 Code: 91388800 aa0903e1 f90003e8 94e6d752 (d4210000) Fixes: 6fd1d51cfa25 ("net: SO_RCVMARK socket option for SO_MARK with recvmsg()") Reported-by: syzbot <syzkaller@googlegroups.com> Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Erin MacNeil <lnx.erin@gmail.com> Reviewed-by: Alexander Lobakin <alexandr.lobakin@intel.com> Link: https://lore.kernel.org/r/20230213160059.3829741-1-edumazet@google.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
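A sketch of the bounce-buffer approach described above, close to (but not guaranteed identical to) the final net/socket.c change: copy skb->mark to the stack before put_cmsg(), so hardened usercopy never sees a pointer into skbuff_head_cache.

        static void sock_recv_mark(struct msghdr *msg, struct sock *sk,
                                   struct sk_buff *skb)
        {
                if (sock_flag(sk, SOCK_RCVMARK) && skb) {
                        /* skb->mark lives outside the whitelisted cb[] area, so bounce it. */
                        __u32 mark = skb->mark;

                        put_cmsg(msg, SOL_SOCKET, SO_MARK, sizeof(__u32), &mark);
                }
        }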
2023-02-14  rpmsg: glink: Avoid infinite loop on intent for missing channel  (Bjorn Andersson)
In the event that an intent advertisement arrives on an unknown channel the fifo is not advanced, resulting in the same message being handled over and over. Fixes: dacbb35e930f ("rpmsg: glink: Receive and store the remote intent buffers") Signed-off-by: Bjorn Andersson <quic_bjorande@quicinc.com> Reviewed-by: Chris Lew <quic_clew@quicinc.com> Signed-off-by: Bjorn Andersson <andersson@kernel.org> Link: https://lore.kernel.org/r/20230214234231.2069751-1-quic_bjorande@quicinc.com
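A hedged sketch of the fix's idea, placed inside the intent-advertisement handler right after a failed channel lookup (details of the real qcom_glink_native.c code may differ): when the channel is unknown, still advance the rx FIFO past the advertisement so the same message is not processed forever.

        if (!channel) {
                dev_err(glink->dev, "intents for non-existing channel\n");
                /* Advance past the message instead of returning without consuming it. */
                qcom_glink_rx_advance(glink, ALIGN(msglen, 8));
                return;
        }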
2023-02-14  rpmsg: glink: Fix GLINK command prefix  (Bjorn Andersson)
The upstream GLINK driver was first introduced to communicate with the RPM on MSM8996; presumably as an artifact from that era, the command defines were prefixed RPM_CMD, while they actually are GLINK_CMDs. Let's rename these, to keep things tidy. No functional change. Signed-off-by: Bjorn Andersson <quic_bjorande@quicinc.com> Reviewed-by: Chris Lew <quic_clew@quicinc.com> Signed-off-by: Bjorn Andersson <andersson@kernel.org> Link: https://lore.kernel.org/r/20230214225933.2025595-1-quic_bjorande@quicinc.com
2023-02-14  rpmsg: glink: Fix spelling of peek  (Bjorn Andersson)
The code is peeking into the buffers, not peaking. Fix this throughout the glink drivers. Signed-off-by: Bjorn Andersson <quic_bjorande@quicinc.com> Reviewed-by: Chris Lew <quic_clew@quicinc.com> Signed-off-by: Bjorn Andersson <andersson@kernel.org> Link: https://lore.kernel.org/r/20230214224746.1996130-1-quic_bjorande@quicinc.com
2023-02-14  drm/vmwgfx: Do not drop the reference to the handle too soon  (Zack Rusin)
v3: Fix vmw_user_bo_lookup, which was also dropping the gem reference before the kernel was done with the buffer, depending on userspace doing the right thing. Same bug, different spot. It is possible for userspace to predict the next buffer handle and to destroy the buffer while it's still used by the kernel. Delay dropping the internal reference on the buffers until the kernel is done with them. Instead of immediately dropping the gem reference in vmw_user_bo_lookup and vmw_gem_object_create_with_handle, let the callers decide when they're ready to give control back to userspace. Also fixes the second usage of vmw_gem_object_create_with_handle in vmwgfx_surface.c, which wasn't grabbing an explicit reference to the gem object, which userspace could have destroyed via the owning surface at any point. Signed-off-by: Zack Rusin <zackr@vmware.com> Fixes: 8afa13a0583f ("drm/vmwgfx: Implement DRIVER_GEM") Reviewed-by: Martin Krastev <krastevm@vmware.com> Reviewed-by: Maaz Mombasawala <mombasawalam@vmware.com> Link: https://patchwork.freedesktop.org/patch/msgid/20230211050514.2431155-1-zack@kde.org (cherry picked from commit 9ef8d83e8e25d5f1811b3a38eb1484f85f64296c) Cc: <stable@vger.kernel.org> # v5.17+
2023-02-14  Merge patch series "dt-bindings: Add a cpu-capacity property for RISC-V"  (Palmer Dabbelt)
Conor Dooley <conor@kernel.org> says: From: Conor Dooley <conor.dooley@microchip.com> Ever since RISC-V started using the generic arch topology code, the code paths for cpu-capacity have been there, but there's no binding defined to actually convey the information. Defining the same property as used on arm seems to be the only logical thing to do, so do it. [Palmer: This is on top of the fix required to make it work, which itself wasn't merged until late in the 6.2 cycle and thus pulls in various other fixes.] * b4-shazam-merge: dt-bindings: riscv: add a capacity-dmips-mhz cpu property dt-bindings: arm: move cpu-capacity to a shared location riscv: Move call to init_cpu_topology() to later initialization stage riscv/kprobe: Fix instruction simulation of JALR riscv: fix -Wundef warning for CONFIG_RISCV_BOOT_SPINWAIT MAINTAINERS: add an IRC entry for RISC-V RISC-V: fix compile error from deduplicated __ALTERNATIVE_CFG_2 dt-bindings: riscv: fix single letter canonical order dt-bindings: riscv: fix underscore requirement for multi-letter extensions riscv: uaccess: fix type of 0 variable on error in get_user() riscv, kprobes: Stricter c.jr/c.jalr decoding Link: https://lore.kernel.org/r/20230104180513.1379453-1-conor@kernel.org Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2023-02-14  dt-bindings: riscv: add a capacity-dmips-mhz cpu property  (Conor Dooley)
Since commit 03f11f03dbfe ("RISC-V: Parse cpu topology during boot.") RISC-V has used the generic arch topology code, which provides for disparate CPU capacities. We never defined a binding to acquire this information from the DT though, so document the one already used by the generic arch topology code: "capacity-dmips-mhz". Signed-off-by: Conor Dooley <conor.dooley@microchip.com> Reviewed-by: Ley Foon Tan <leyfoon.tan@starfivetech.com> Acked-by: Rob Herring <robh@kernel.org> Link: https://lore.kernel.org/r/20230104180513.1379453-3-conor@kernel.org Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2023-02-14  dt-bindings: arm: move cpu-capacity to a shared location  (Conor Dooley)
RISC-V uses the same generic topology code as arm64 & while there currently exists no binding for cpu-capacity on RISC-V, the code paths can be hit if the property is present. Move the documentation of cpu-capacity to a shared location, ahead of defining a binding for capacity-dmips-mhz on RISC-V. Update some references to this document in the process. Signed-off-by: Conor Dooley <conor.dooley@microchip.com> Reviewed-by: Ley Foon Tan <leyfoon.tan@starfivetech.com> Acked-by: Rob Herring <robh@kernel.org> Reviewed-by: Yanteng Si <siyanteng@loongson.cn> Link: https://lore.kernel.org/r/20230104180513.1379453-2-conor@kernel.org Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2023-02-14  drm/vmwgfx: Stop accessing buffer objects which failed init  (Zack Rusin)
ttm_bo_init_reserved on failure puts the buffer object back, which causes it to be deleted, but kfree was still being called on the same buffer in vmw_bo_create, leading to a double free. After the double free, vmw_gem_object_create_with_handle was setting the gem function objects before checking the return status of vmw_bo_create, leading to a null pointer access. Fix the entire path by relying on ttm_bo_init_reserved to delete the buffer objects on failure and making sure the return status is checked before setting the gem function objects on the buffer object. Signed-off-by: Zack Rusin <zackr@vmware.com> Fixes: 8afa13a0583f ("drm/vmwgfx: Implement DRIVER_GEM") Reviewed-by: Maaz Mombasawala <mombasawalam@vmware.com> Reviewed-by: Martin Krastev <krastevm@vmware.com> Link: https://patchwork.freedesktop.org/patch/msgid/20230208180050.2093426-1-zack@kde.org (cherry picked from commit 36d421e632e9a0e8375eaed0143551a34d81a7e3) Cc: <stable@vger.kernel.org> # v5.17+
2023-02-14  Merge tag 'renesas-clk-for-v6.3-tag3' of git://git.kernel.org/pub/scm/linux/kernel/git/geert/renesas-drivers into clk-renesas  (Stephen Boyd)
Pull one more Renesas clk driver update from Geert Uytterhoeven: - Disable R-Car H3 ES1.*, as it was only available to an internal development group and needed a lot of quirks and workarounds. * tag 'renesas-clk-for-v6.3-tag3' of git://git.kernel.org/pub/scm/linux/kernel/git/geert/renesas-drivers: clk: renesas: rcar-gen3: Disable R-Car H3 ES1.*
2023-02-14  ext4: dio take shared inode lock when overwriting preallocated blocks  (Zhang Yi)
In the dio write path, we only take the shared inode lock for the case of aligned overwrites of initialized blocks inside EOF. But overwriting preallocated blocks may only need to split unwritten extents, and since this procedure is already protected by the i_data_sem lock, it's safe to release the exclusive inode lock and take the shared inode lock instead. This can give a significant speed-up for multi-threaded writes. Tested on an Intel Xeon Gold 6140 and an NVMe SSD with the following fio parameters: direct=1 ioengine=libaio iodepth=10 numjobs=10 runtime=60 rw=randwrite size=100G. The test results are: Before: bs=4k IOPS=11.1k, BW=43.2MiB/s; bs=16k IOPS=11.1k, BW=173MiB/s; bs=64k IOPS=11.2k, BW=697MiB/s. After: bs=4k IOPS=41.4k, BW=162MiB/s; bs=16k IOPS=41.3k, BW=646MiB/s; bs=64k IOPS=13.5k, BW=843MiB/s. Signed-off-by: Zhang Yi <yi.zhang@huawei.com> Reviewed-by: Jan Kara <jack@suse.cz> Link: https://lore.kernel.org/r/20221226062015.3479416-1-yi.zhang@huaweicloud.com Signed-off-by: Theodore Ts'o <tytso@mit.edu>
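An illustrative-only sketch of the lock-mode decision described above (the helper and variable names are hypothetical, not the actual ext4 ones):

        /* Shared ilock is enough when we only overwrite already-allocated
         * blocks inside EOF: written extents need no metadata change, and
         * unwritten (preallocated) extents are split/converted under
         * i_data_sem.
         */
        if (overwrite_inside_eof && (extents_written || extents_unwritten))
                inode_lock_shared(inode);
        else
                inode_lock(inode);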
2023-02-14  xfs: fix uninitialized variable access  (Darrick J. Wong)
If the end position of a GETFSMAP query overlaps an allocated space and we're using the free space info to generate fsmap info, the akeys information gets fed into the fsmap formatter with bad results. Zero-init the space. Reported-by: syzbot+090ae72d552e6bd93cfe@syzkaller.appspotmail.com Signed-off-by: Darrick J. Wong <djwong@kernel.org>
2023-02-14  Merge tag 'xfs-alloc-perag-conversion' of git://git.kernel.org/pub/scm/linux/kernel/git/dgc/linux-xfs into xfs-6.3-merge-A  (Darrick J. Wong)
xfs: per-ag centric allocation algorithms

This series continues the work towards making shrinking a filesystem possible. We need to be able to stop operations from taking place on AGs that need to be removed by a shrink, so before shrink can be implemented we need to have the infrastructure in place to prevent incursion into AGs that are going to be, or are in the process of being, removed from active duty. The focus of this is making operations that depend on access to AGs use the perag to access and pin the AG in active use, thereby creating a barrier we can use to delay shrink until all active uses of an AG have been drained and new uses are prevented.

This series starts by fixing some existing issues that are exposed by changes later in the series. They stand alone, so can be picked up independently of the rest of this patchset. The most complex of these fixes is cleaning up the mess that is the AGF deadlock avoidance algorithm. This algorithm stores the first block that is allocated in a transaction in tp->t_firstblock, then uses this to try to limit future allocations within the transaction to AGs at or higher than the filesystem block stored in tp->t_firstblock. This depends on one of the initial bug fixes in the series to move the deadlock avoidance checks to xfs_alloc_vextent(), and then builds on it to relax the constraints of the avoidance algorithm to only be active when a deadlock is possible.

We also update the algorithm to record allocations from higher AGs that are allocated from, because when we need to lock more than two AGs we still have to ensure lock order is correct. Therefore we can't lock AGs in the order 1, 3, 2, even though tp->t_firstblock indicates that we've allocated from AG 1 and so AG 2 is valid to lock. It's not valid, because we already hold AG 3 locked, and so tp->t_firstblock should actually point at AG 3, not AG 1 in this situation. It should now be obvious that the deadlock avoidance algorithm should record AGs, not filesystem blocks. So the series then changes the transaction to store the highest AG we've allocated in rather than a filesystem block we allocated. This makes it obvious what the constraints are, and trivial to update as we lock and allocate from various AGs.

With all the bug fixes out of the way, the series then starts converting the code to use active references. Active reference counts are used by high level code that needs to prevent the AG from being taken out from under it by a shrink operation. The high level code needs to be able to handle not getting an active reference gracefully, and the shrink code will need to wait for active references to drain before continuing. Active references are implemented just as reference counts right now - an active reference is taken at perag init during mount, and all other active references are dependent on the active reference count being greater than zero. This gives us an initial method of stopping new active references without needing other infrastructure; just drop the reference taken at filesystem mount time and when the refcount then falls to zero no new references can be taken. In future, this will need to take into account AG control state (e.g. offline, no alloc, etc) as well as the reference count, but right now we can implement a basic barrier for shrink with just reference count manipulations.
As such, patches to convert the perag state to atomic opstate fields similar to the xfs_mount and xlog opstate fields follow the initial active perag reference counting patches.

The first target for active reference conversion is the for_each_perag*() iterators. This captures a lot of high level code that should skip offline AGs, and introduces the ability to differentiate between a lookup that didn't have an online AG and the end of the AG iteration range. From there, the inode allocation AG selection is converted to active references, and the perag is driven deeper into the inode allocation and btree code to replace the xfs_mount. Most of the inode allocation code operates on a single AG once it is selected, hence it should pass the perag as the primary referenced object around for allocation, not the xfs_mount. There is a bit of churn here, but it emphasises that inode allocation is inherently an allocation group based operation.

Next the bmap/alloc interface undergoes a major untangling, reworking xfs_bmap_btalloc() into separate allocation operations for different contexts and failure handling behaviours. This then allows us to completely remove the xfs_alloc_vextent() layer via restructuring the xfs_alloc_vextent/xfs_alloc_ag_vextent() into a set of relatively simple helper functions that describe the allocation that they are doing, e.g. xfs_alloc_vextent_exact_bno(). This allows the requirements for accessing AGs to be allocation context dependent. The allocations that require operation on a single AG generally can't tolerate failure after the allocation method and AG has been decided on, and hence the caller needs to manage the active references to ensure the allocation does not race with shrink removing the selected AG for the duration of the operation that requires access to that allocation group. Other allocations iterate AGs and so the first AG is just a hint - these do not need to pin a perag first as they can tolerate not being able to access an AG by simply skipping over it. These require new perag iteration functions that can start at arbitrary AGs and wrap around at arbitrary AGs, hence a new set of for_each_perag_wrap*() helpers to do this.

Next is the rework of the filestreams allocator. This doesn't change any functionality, but gets rid of the unnecessary multi-pass selection algorithm when the selected AG is not available. It currently does a lookup pass which might iterate all AGs to select an AG, then checks if the AG is acceptable and if not does a "new AG" pass that is essentially identical to the lookup pass. Both of these scans also do the same "longest extent in AG" check before selecting an AG as is done after the AG is selected. IOWs, the filestreams algorithm can be greatly simplified into a single new AG selection pass if there is no current association or the currently associated AG doesn't have enough contiguous free space for the allocation to proceed. With this simplification of the filestreams allocator, it's then trivial to convert it to use for_each_perag_wrap() for the AG scan algorithm.

Signed-off-by: Dave Chinner <dchinner@redhat.com> Signed-off-by: Darrick J. Wong <djwong@kernel.org>

* tag 'xfs-alloc-perag-conversion' of git://git.kernel.org/pub/scm/linux/kernel/git/dgc/linux-xfs: (42 commits) xfs: refactor the filestreams allocator pick functions xfs: return a referenced perag from filestreams allocator xfs: pass perag to filestreams tracing xfs: use for_each_perag_wrap in xfs_filestream_pick_ag xfs: track an active perag reference in filestreams xfs: factor out MRU hit case in xfs_filestream_select_ag xfs: remove xfs_filestream_select_ag() longest extent check xfs: merge new filestream AG selection into xfs_filestream_select_ag() xfs: merge filestream AG lookup into xfs_filestream_select_ag() xfs: move xfs_bmap_btalloc_filestreams() to xfs_filestreams.c xfs: use xfs_bmap_longest_free_extent() in filestreams xfs: get rid of notinit from xfs_bmap_longest_free_extent xfs: factor out filestreams from xfs_bmap_btalloc_nullfb xfs: convert trim to use for_each_perag_range xfs: convert xfs_alloc_vextent_iterate_ags() to use perag walker xfs: move the minimum agno checks into xfs_alloc_vextent_check_args xfs: fold xfs_alloc_ag_vextent() into callers xfs: move allocation accounting to xfs_alloc_vextent_set_fsbno() xfs: introduce xfs_alloc_vextent_prepare() xfs: introduce xfs_alloc_vextent_exact_bno() ...
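A hedged sketch of the active-reference pattern the series introduces; the helper names (xfs_perag_grab()/xfs_perag_rele()) follow the cover letter's description and may differ in detail from the merged code.

        struct xfs_perag *pag;

        pag = xfs_perag_grab(mp, agno); /* fails once shrink starts draining the AG */
        if (!pag)
                return -EAGAIN;         /* caller skips this AG or tries another */

        /* ... operate on the AG; shrink waits for active references to drain ... */

        xfs_perag_rele(pag);            /* drop the active reference */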
2023-02-15  erofs: unify anonymous inodes for blob  (Jingbo Xu)
Currently there are two anonymous inodes (inode and anon_inode in struct erofs_fscache) for each blob. The former was introduced as the address_space of page cache for the bootstrap. The latter was initially introduced as both the address_space of page cache and also a sentinel in the shared domain. Since the management of cookies in the share domain has now been decoupled from the anonymous inode, there's no need to maintain an extra anonymous inode. Let's unify these two anonymous inodes. Besides, in non-share-domain mode only the bootstrap will allocate an anonymous inode. To simplify the implementation, always allocate an anonymous inode for both bootstrap and data blobs. Similarly, release anonymous inodes for data blobs when .put_super() is called, or we'll get a "VFS: Busy inodes after unmount." warning. Also remove the redundant set_nlink() when initializing the anonymous inode, since i_nlink has already been initialized to 1 when the inode gets allocated. Signed-off-by: Jingbo Xu <jefflexu@linux.alibaba.com> Reviewed-by: Jia Zhu <zhujia.zj@bytedance.com> Link: https://lore.kernel.org/r/20230209063913.46341-5-jefflexu@linux.alibaba.com Signed-off-by: Gao Xiang <hsiangkao@linux.alibaba.com>
2023-02-15  erofs: relinquish volume with mutex held  (Jingbo Xu)
Relinquish the fscache volume with the mutex held. Otherwise, if a new domain is registered while the old domain with the same name has been removed from the list but not yet relinquished, fscache may complain about the collision. Fixes: 8b7adf1dff3d ("erofs: introduce fscache-based domain") Signed-off-by: Jingbo Xu <jefflexu@linux.alibaba.com> Reviewed-by: Jia Zhu <zhujia.zj@bytedance.com> Link: https://lore.kernel.org/r/20230209063913.46341-4-jefflexu@linux.alibaba.com Signed-off-by: Gao Xiang <hsiangkao@linux.alibaba.com>
2023-02-15  erofs: maintain cookies of share domain in self-contained list  (Jingbo Xu)
We'd better not touch the sb->s_inodes list and inode->i_count directly. Let's maintain cookies of the share domain in a self-contained list in erofs. Besides, relinquish the cookie with the mutex held. Otherwise, if a cookie is registered while the old cookie with the same name in the same domain has been removed from the list but not yet relinquished, fscache may complain "Duplicate cookie detected". Signed-off-by: Jingbo Xu <jefflexu@linux.alibaba.com> Reviewed-by: Jia Zhu <zhujia.zj@bytedance.com> Link: https://lore.kernel.org/r/20230209063913.46341-3-jefflexu@linux.alibaba.com Signed-off-by: Gao Xiang <hsiangkao@linux.alibaba.com>
2023-02-15  erofs: remove unused device mapping in meta routine  (Jingbo Xu)
Currently metadata is always on bootstrap, and thus device mapping is not needed so far. Remove the redundant device mapping in the meta routine. Signed-off-by: Jingbo Xu <jefflexu@linux.alibaba.com> Reviewed-by: Jia Zhu <zhujia.zj@bytedance.com> Reviewed-by: Chao Yu <chao@kernel.org> Link: https://lore.kernel.org/r/20230209063913.46341-2-jefflexu@linux.alibaba.com Signed-off-by: Gao Xiang <hsiangkao@linux.alibaba.com>
2023-02-15  MAINTAINERS: erofs: Add Documentation/ABI/testing/sysfs-fs-erofs  (Yangtao Li)
Add this doc to the erofs maintainers entry. Signed-off-by: Yangtao Li <frank.li@vivo.com> Reviewed-by: Gao Xiang <hsiangkao@linux.alibaba.com> Reviewed-by: Yue Hu <huyue2@coolpad.com> Reviewed-by: Chao Yu <chao@kernel.org> Link: https://lore.kernel.org/r/20230209052013.34952-1-frank.li@vivo.com Signed-off-by: Gao Xiang <hsiangkao@linux.alibaba.com>
2023-02-15  Documentation/ABI: sysfs-fs-erofs: update supported features  (Yue Hu)
Add missing features to the sysfs-fs-erofs feature doc. Signed-off-by: Yue Hu <huyue2@coolpad.com> Reviewed-by: Gao Xiang <hsiangkao@linux.alibaba.com> Reviewed-by: Chao Yu <chao@kernel.org> Link: https://lore.kernel.org/r/20230209051128.10571-1-zbestahu@gmail.com Signed-off-by: Gao Xiang <hsiangkao@linux.alibaba.com>
2023-02-15  erofs: remove unused EROFS_GET_BLOCKS_RAW flag  (Jingbo Xu)
For erofs_map_blocks() and erofs_map_blocks_flatmode(), the flags argument is always EROFS_GET_BLOCKS_RAW. Thus remove the unused flags parameter for these two functions. Besides, EROFS_GET_BLOCKS_RAW was originally introduced for reading compressed (raw) data of compressed files. However, it's never actually used, so let's remove it now. Signed-off-by: Jingbo Xu <jefflexu@linux.alibaba.com> Reviewed-by: Yue Hu <huyue2@coolpad.com> Reviewed-by: Gao Xiang <hsiangkao@linux.alibaba.com> Reviewed-by: Chao Yu <chao@kernel.org> Link: https://lore.kernel.org/r/20230209024825.17335-2-jefflexu@linux.alibaba.com Signed-off-by: Gao Xiang <hsiangkao@linux.alibaba.com>
2023-02-15  erofs: update print symbols for various flags in trace  (Jingbo Xu)
As new flags were introduced, the corresponding print symbols for tracing were not added accordingly. Add the missing print symbols for these flags. Signed-off-by: Jingbo Xu <jefflexu@linux.alibaba.com> Reviewed-by: Yue Hu <huyue2@coolpad.com> Reviewed-by: Gao Xiang <hsiangkao@linux.alibaba.com> Reviewed-by: Chao Yu <chao@kernel.org> Link: https://lore.kernel.org/r/20230209024825.17335-1-jefflexu@linux.alibaba.com Signed-off-by: Gao Xiang <hsiangkao@linux.alibaba.com>
2023-02-15  erofs: make kobj_type structures constant  (Thomas Weißschuh)
Since commit ee6d3dd4ed48 ("driver core: make kobj_type constant.") the driver core allows the usage of const struct kobj_type. Take advantage of this to constify the structure definitions to prevent modification at runtime. Signed-off-by: Thomas Weißschuh <linux@weissschuh.net> Acked-by: Gao Xiang <hsiangkao@linux.alibaba.com> Reviewed-by: Jingbo Xu <jefflexu@linux.alibaba.com> Reviewed-by: Yue Hu <huyue2@coolpad.com> Reviewed-by: Chao Yu <chao@kernel.org> Link: https://lore.kernel.org/r/20230209-kobj_type-erofs-v1-1-078c945e2c4b@weissschuh.net Signed-off-by: Gao Xiang <hsiangkao@linux.alibaba.com>
2023-02-15  erofs: add per-cpu threads for decompression as an option  (Sandeep Dhavale)
Using a per-cpu thread pool, we can reduce the scheduling latency compared to the workqueue implementation. With this patch, scheduling latency and variation are reduced, as the per-cpu threads are high-priority kthread_workers. The results were evaluated on arm64 Android devices running a 5.10 kernel. The table below shows the resulting improvements of total scheduling latency for the same app launch benchmark runs with 50 iterations. Scheduling latency is the latency between when the task (workqueue kworker vs kthread_worker) became eligible to run and when it actually started running.
+-------------------------+-----------+----------------+---------+
|                         | workqueue | kthread_worker | diff    |
+-------------------------+-----------+----------------+---------+
| Average (us)            | 15253     | 2914           | -80.89% |
| Median (us)             | 14001     | 2912           | -79.20% |
| Minimum (us)            | 3117      | 1027           | -67.05% |
| Maximum (us)            | 30170     | 3805           | -87.39% |
| Standard deviation (us) | 7166      | 359            |         |
+-------------------------+-----------+----------------+---------+
Background: Boot times and cold app launch benchmarks are very important to the Android ecosystem as they directly translate to responsiveness from the user's point of view. While EROFS provides a lot of important features like space savings, we saw some performance penalty in cold app launch benchmarks in a few scenarios. Analysis showed that the significant variance was coming from the scheduling cost while the decompression cost was more or less the same. With the per-cpu thread pool, we can see from the above table that this variation is reduced by ~80% on average. This problem was discussed at LPC 2022. Link to the LPC 2022 slides and talk at [1]. [1] https://lpc.events/event/16/contributions/1338/ [ Gao Xiang: At least, we have to add this until the WQ_UNBOUND workqueue issue [2] on many arm64 devices is resolved. ] [2] https://lore.kernel.org/r/CAJkfWY490-m6wNubkxiTPsW59sfsQs37Wey279LmiRxKt7aQYg@mail.gmail.com Signed-off-by: Sandeep Dhavale <dhavale@google.com> Signed-off-by: Gao Xiang <hsiangkao@linux.alibaba.com> Link: https://lore.kernel.org/r/20230208093322.75816-1-hsiangkao@linux.alibaba.com
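A hedged sketch of the per-CPU worker idea (symbol names are illustrative, not the exact erofs ones): one high-priority kthread_worker per CPU, so queued decompression work is not subject to kworker scheduling latency.

        #include <linux/err.h>
        #include <linux/kthread.h>
        #include <linux/sched.h>

        static struct kthread_worker *erofs_create_percpu_worker(int cpu)
        {
                struct kthread_worker *worker;

                worker = kthread_create_worker_on_cpu(cpu, 0, "erofs_worker/%u", cpu);
                if (IS_ERR(worker))
                        return worker;

                /* High-priority worker: reduces the delay before decompression runs. */
                sched_set_fifo_low(worker->task);
                return worker;
        }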
2023-02-15  erofs: tidy up internal.h  (Gao Xiang)
Reorder internal.h code, removing unneeded macros and more. No logic changes. Reviewed-by: Yue Hu <huyue2@coolpad.com> Reviewed-by: Jingbo Xu <jefflexu@linux.alibaba.com> Reviewed-by: Chao Yu <chao@kernel.org> Signed-off-by: Gao Xiang <hsiangkao@linux.alibaba.com> Link: https://lore.kernel.org/r/20230204093040.97967-6-hsiangkao@linux.alibaba.com
2023-02-15  erofs: get rid of z_erofs_do_map_blocks() forward declaration  (Gao Xiang)
The code can be neater without forward declarations. Let's get rid of z_erofs_do_map_blocks() forward declaration. Reviewed-by: Yue Hu <huyue2@coolpad.com> Reviewed-by: Chao Yu <chao@kernel.org> Signed-off-by: Gao Xiang <hsiangkao@linux.alibaba.com> Link: https://lore.kernel.org/r/20230204093040.97967-5-hsiangkao@linux.alibaba.com
2023-02-15  erofs: move zdata.h into zdata.c  (Gao Xiang)
Definitions in zdata.h are only used in zdata.c and for internal use only. No logic changes. Reviewed-by: Yue Hu <huyue2@coolpad.com> Reviewed-by: Chao Yu <chao@kernel.org> Signed-off-by: Gao Xiang <hsiangkao@linux.alibaba.com> Link: https://lore.kernel.org/r/20230204093040.97967-4-hsiangkao@linux.alibaba.com
2023-02-15  erofs: remove tagged pointer helpers  (Gao Xiang)
Just open-code the remaining one to simplify the code. Reviewed-by: Yue Hu <huyue2@coolpad.com> Reviewed-by: Chao Yu <chao@kernel.org> Signed-off-by: Gao Xiang <hsiangkao@linux.alibaba.com> Link: https://lore.kernel.org/r/20230204093040.97967-3-hsiangkao@linux.alibaba.com
2023-02-15  erofs: avoid tagged pointers to mark sync decompression  (Gao Xiang)
We could just use a boolean in z_erofs_decompressqueue for sync decompression to simplify the code. Reviewed-by: Yue Hu <huyue2@coolpad.com> Reviewed-by: Chao Yu <chao@kernel.org> Signed-off-by: Gao Xiang <hsiangkao@linux.alibaba.com> Link: https://lore.kernel.org/r/20230204093040.97967-2-hsiangkao@linux.alibaba.com
2023-02-15  erofs: get rid of erofs_inode_datablocks()  (Gao Xiang)
erofs_inode_datablocks() has only one caller, so let's just get rid of it entirely. No logic changes. Reviewed-by: Yue Hu <huyue2@coolpad.com> Reviewed-by: Jingbo Xu <jefflexu@linux.alibaba.com> Reviewed-by: Chao Yu <chao@kernel.org> Signed-off-by: Gao Xiang <hsiangkao@linux.alibaba.com> Link: https://lore.kernel.org/r/20230204093040.97967-1-hsiangkao@linux.alibaba.com
2023-02-15  erofs: simplify iloc()  (Gao Xiang)
Actually we could pass in inodes directly to clean up all callers. Also rename iloc() as erofs_iloc(). Link: https://lore.kernel.org/r/20230114150823.432069-1-xiang@kernel.org Reviewed-by: Yue Hu <huyue2@coolpad.com> Reviewed-by: Jingbo Xu <jefflexu@linux.alibaba.com> Reviewed-by: Chao Yu <chao@kernel.org> Signed-off-by: Gao Xiang <hsiangkao@linux.alibaba.com>
2023-02-15  erofs: get rid of debug_one_dentry()  (Gao Xiang)
Since erofsdump is available, no need to keep this debugging functionality at all. Also drop a useless comment since it's the VFS behavior. Link: https://lore.kernel.org/r/20230114125746.399253-1-xiang@kernel.org Reviewed-by: Yue Hu <huyue2@coolpad.com> Reviewed-by: Jingbo Xu <jefflexu@linux.alibaba.com> Reviewed-by: Chao Yu <chao@kernel.org> Signed-off-by: Gao Xiang <hsiangkao@linux.alibaba.com>
2023-02-15  erofs: remove linux/buffer_head.h dependency  (Gao Xiang)
EROFS actually never uses buffer heads, therefore just get rid of BH_xxx definitions and linux/buffer_head.h inclusive. Link: https://lore.kernel.org/r/20230113065226.68801-2-hsiangkao@linux.alibaba.com Reviewed-by: Yue Hu <huyue2@coolpad.com> Reviewed-by: Jingbo Xu <jefflexu@linux.alibaba.com> Reviewed-by: Chao Yu <chao@kernel.org> Signed-off-by: Gao Xiang <hsiangkao@linux.alibaba.com>
2023-02-15  erofs: clean up erofs_iget()  (Gao Xiang)
Move inode hash function into inode.c and simplify erofs_iget(). Link: https://lore.kernel.org/r/20230113065226.68801-1-hsiangkao@linux.alibaba.com Reviewed-by: Yue Hu <huyue2@coolpad.com> Reviewed-by: Jingbo Xu <jefflexu@linux.alibaba.com> Reviewed-by: Chao Yu <chao@kernel.org> Signed-off-by: Gao Xiang <hsiangkao@linux.alibaba.com>
2023-02-14  riscv: Fix Zbb alternative IDs  (Samuel Holland)
Commit 4bf8860760d9 ("riscv: cpufeature: extend riscv_cpufeature_patch_func to all ISA extensions") switched ISA extension alternatives to use the RISCV_ISA_EXT_* macros instead of CPUFEATURE_*. This was mismerged when applied on top of the Zbb series, so the Zbb alternatives referenced the wrong errata ID values. Fixes: 9daca9a5b9ac ("Merge patch series "riscv: improve boot time isa extensions handling"") Signed-off-by: Samuel Holland <samuel@sholland.org> Reviewed-by: Conor Dooley <conor.dooley@microchip.com> Reviewed-by: Andrew Jones <ajones@ventanamicro.com> Reviewed-by: Guo Ren <guoren@kernel.org> Tested-by: Conor Dooley <conor.dooley@microchip.com> Link: https://lore.kernel.org/r/20230212021534.59121-3-samuel@sholland.org Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2023-02-14  riscv: Fix early alternative patching  (Samuel Holland)
Now that the text to patch is located using a relative offset from the alternative entry, the text address should be computed without applying the kernel mapping offset, both before and after VM setup. Fixes: 8d23e94a4433 ("riscv: switch to relative alternative entries") Signed-off-by: Samuel Holland <samuel@sholland.org> Reviewed-by: Conor Dooley <conor.dooley@microchip.com> Reviewed-by: Guo Ren <guoren@kernel.org> Reviewed-by: Jisheng Zhang <jszhang@kernel.org> Tested-by: Conor Dooley <conor.dooley@microchip.com> Link: https://lore.kernel.org/r/20230212021534.59121-2-samuel@sholland.org Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2023-02-14  Merge branch 'for-6.3/cxl-rr-emu' into cxl/next  (Dan Williams)
Pick up the CXL DVSEC range register emulation for v6.3, and resolve conflicts with the cxl_port_probe() split (from for-6.3/cxl-ram-region) and event handling (from for-6.3/cxl-events).
2023-02-14  RISC-V: re-order Kconfig selects alphanumerically  (Conor Dooley)
Selects should be sorted alphanumerically, and were originally tidied up by Palmer in commit e8c7ef7d5819 ("RISC-V: Sort select statements alphanumerically"); since then, things have gotten out of order again. Fish RMK's original script out of commit b1b3f49ce460 ("ARM: config: sort select statements alphanumerically") and do some spring cleaning. Signed-off-by: Conor Dooley <conor.dooley@microchip.com> Acked-by: Björn Töpel <bjorn@rivosinc.com> Link: https://lore.kernel.org/r/20221219172836.134709-1-conor@kernel.org Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>