2021-09-05  net: dsa: b53: Fix IMP port setup on BCM5301x  (Rafał Miłecki)
Broadcom's b53 switches have one IMP (Inband Management Port) that needs to be programmed using its own designated register. The IMP port may be different from the CPU port - especially on devices with multiple CPU ports. For that reason it's required to explicitly note the IMP port index and check for it when choosing a register to use. This commit fixes BCM5301x support. Those switches use CPU port 5 while their IMP port is 8. Before this patch b53 was trying to program port 5 with B53_PORT_OVERRIDE_CTRL instead of B53_GMII_PORT_OVERRIDE_CTRL(5). It may be possible to also replace "cpu_port" usages with dsa_is_cpu_port() but that is out of the scope of this BCM5301x fix. Fixes: 967dd82ffc52 ("net: dsa: b53: Add support for Broadcom RoboSwitch") Signed-off-by: Rafał Miłecki <rafal@milecki.pl> Signed-off-by: David S. Miller <davem@davemloft.net>
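The fix boils down to keying the register choice off the IMP port rather than the CPU port. A minimal sketch of that selection logic, assuming a driver-private imp_port field and a hypothetical helper (the register macros are the ones named in the message):

  /* On BCM5301x cpu_port is 5 but imp_port is 8, so only the IMP
   * port may use the dedicated override register. */
  static u8 b53_port_override_reg(struct b53_device *dev, int port)
  {
      if (port == dev->imp_port)
          return B53_PORT_OVERRIDE_CTRL;
      return B53_GMII_PORT_OVERRIDE_CTRL(port);
  }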
2021-09-05  ip_gre: validate csum_start only on pull  (Willem de Bruijn)
The GRE tunnel device can pull existing outer headers in ipgre_xmit. This is a rare path, apparently unique to this device. The below commit ensured that pulling does not move skb->data beyond csum_start. But it has a false positive if ip_summed is not CHECKSUM_PARTIAL, in which case csum_start is irrelevant. Refine the test to exclude this case. At the same time simplify and strengthen the test. Simplify, by moving the check next to the offending pull, making it more self-documenting and removing an unnecessary branch from other code paths. Strengthen, by also ensuring that the transport header is correct and therefore the inner headers will be after skb_reset_inner_headers. The transport header is set to csum_start in skb_partial_csum_set. Link: https://lore.kernel.org/netdev/YS+h%2FtqCJJiQei+W@shredder/ Fixes: 1d011c4803c7 ("ip_gre: add validation for csum_start") Reported-by: Ido Schimmel <idosch@idosch.org> Suggested-by: Alexander Duyck <alexander.duyck@gmail.com> Signed-off-by: Willem de Bruijn <willemb@google.com> Reviewed-by: Alexander Duyck <alexanderduyck@fb.com> Signed-off-by: David S. Miller <davem@davemloft.net>
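Per the description above, the refined check sits right after the pull and only fires for CHECKSUM_PARTIAL skbs; its shape is roughly the following (a sketch from the commit text, not the verbatim patch):

  /* Pulling must not move skb->data past the checksum start;
   * the bound only matters when the skb uses CHECKSUM_PARTIAL. */
  if (skb->ip_summed == CHECKSUM_PARTIAL &&
      skb_checksum_start(skb) < skb->data)
      goto free_skb;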
2021-09-05  Merge tag 'mtd/for-5.15' of git://git.kernel.org/pub/scm/linux/kernel/git/mtd/linux  (Linus Torvalds)
Pull MTD updates from Miquel Raynal:

"MTD changes:
 - blkdevs:
   - Simplify the refcounting in blktrans_{open, release}
   - Simplify blktrans_getgeo
   - Remove blktrans_ref_mutex
   - Simplify blktrans_dev_get
   - Use lockdep_assert_held
   - Don't hold del_mtd_blktrans_dev in blktrans_{open, release}
 - ftl:
   - Don't cast away the type when calling add_mtd_blktrans_dev
   - Don't cast away the type when calling add_mtd_blktrans_dev
   - Use container_of() rather than cast
   - Fix use-after-free
   - Add discard support
   - Allow use of MTD_RAM for testing purposes
 - concat:
   - Check _read, _write callbacks existence before assignment
   - Judge callback existence based on the master
 - maps:
   - Remove dead MTD map driver for PMC-Sierra MSP boards
 - mtdblock:
   - Warn if added for a NAND device
   - Add comment about UBI block devices
   - Update old JFFS2 mention in Kconfig
 - partitions:
   - Redboot: convert to YAML

NAND core changes:
 - Repair Miquel Raynal's email address in MAINTAINERS
 - Fix a couple of spelling mistakes in Kconfig
 - bbt: Skip bad blocks when searching for the BBT in NAND
 - Remove never changed ret variable

Raw NAND changes:
 - cafe: Fix a resource leak in the error handling path of 'cafe_nand_probe()'
 - intel: Fix error handling in probe
 - omap: Fix kernel doc warning on 'calcuate' typo
 - gpmc: Fix the ECC bytes vs. OOB bytes equation

SPI-NAND core changes:
 - Properly fill the OOB area.
 - Fix comment

SPI-NAND drivers changes:
 - macronix: Add Quad support for serial NAND flash"

* tag 'mtd/for-5.15' of git://git.kernel.org/pub/scm/linux/kernel/git/mtd/linux: (30 commits)
  mtd: rawnand: cafe: Fix a resource leak in the error handling path of 'cafe_nand_probe()'
  mtd_blkdevs: simplify the refcounting in blktrans_{open, release}
  mtd_blkdevs: simplify blktrans_getgeo
  mtd_blkdevs: remove blktrans_ref_mutex
  mtd_blkdevs: simplify blktrans_dev_get
  mtd/rfd_ftl: don't cast away the type when calling add_mtd_blktrans_dev
  mtd/ftl: don't cast away the type when calling add_mtd_blktrans_dev
  mtd_blkdevs: use lockdep_assert_held
  mtd_blkdevs: don't hold del_mtd_blktrans_dev in blktrans_{open, release}
  mtd: rawnand: intel: Fix error handling in probe
  mtd: mtdconcat: Check _read, _write callbacks existence before assignment
  mtd: mtdconcat: Judge callback existence based on the master
  mtd: maps: remove dead MTD map driver for PMC-Sierra MSP boards
  mtd: rfd_ftl: use container_of() rather than cast
  mtd: rfd_ftl: fix use-after-free
  mtd: rfd_ftl: add discard support
  mtd: rfd_ftl: allow use of MTD_RAM for testing purposes
  mtdblock: Warn if added for a NAND device
  mtd: spinand: macronix: Add Quad support for serial NAND flash
  mtdblock: Add comment about UBI block devices
  ...
2021-09-05  binfmt: a.out: Fix bogus semicolon  (Geert Uytterhoeven)
fs/binfmt_aout.c: In function ‘load_aout_library’:
fs/binfmt_aout.c:311:27: error: expected ‘)’ before ‘;’ token
  311 |    MAP_FIXED | MAP_PRIVATE;
      |                           ^
fs/binfmt_aout.c:309:10: error: too few arguments to function ‘vm_mmap’
  309 |  error = vm_mmap(file, start_addr, ex.a_text + ex.a_data,
      |          ^~~~~~~
In file included from fs/binfmt_aout.c:12:
include/linux/mm.h:2626:35: note: declared here
 2626 | extern unsigned long __must_check vm_mmap(struct file *, unsigned long,
      |                                   ^~~~~~~

Fix this by reverting the accidental replacement of a comma by a semicolon.

Fixes: 42be8b42535183f8 ("binfmt: don't use MAP_DENYWRITE when loading shared libraries via uselib()")
Reported-by: noreply@ellerman.id.au
Reported-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
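With the comma restored, the statement is again a single six-argument vm_mmap() call (reconstructed from the diagnostics above; the trailing offset argument is inferred from the reported context):

  error = vm_mmap(file, start_addr, ex.a_text + ex.a_data,
                  PROT_READ | PROT_WRITE | PROT_EXEC,
                  MAP_FIXED | MAP_PRIVATE,  /* was ';', truncating the call */
                  N_TXTOFF(ex));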
2021-09-05  bonding: complain about missing route only once for A/B ARP probes  (David Decotigny)
On configs where there is no configured direct route to the target of the ARP probes, these probes are still sent and may be replied to properly, so there is no need to repeatedly complain about the missing route. Signed-off-by: David Decotigny <ddecotig@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-09-05  ip/ip6_gre: use the same logic as SIT interfaces when computing v6LL address  (Antonio Quartulli)
GRE interfaces are not Ether-like and therefore it is not possible to generate the v6LL address the same way as (for example) GRETAP devices. With default settings, a GRE interface will attempt generating its v6LL address using the EUI64 approach, but this will fail when the local endpoint of the GRE tunnel is set to "any". In this case the GRE interface will end up with no v6LL address, thus violating RFC4291. SIT interfaces already implement a different logic to ensure that a v6LL address is always computed. Change the GRE v6LL generation logic to follow the same approach as SIT. This way GRE interfaces will always have a v6LL address as well. Behaviour of GRETAP interfaces has not been changed as they behave like classic Ether-like interfaces. To avoid code duplication sit_add_v4_addrs() has been renamed to add_v4_addrs() and adapted to handle also the IP6GRE/GRE cases. Signed-off-by: Antonio Quartulli <a@unstable.cc> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-09-05  net: stmmac: Fix overall budget calculation for rxtx_napi  (Song Yoong Siang)
tx_done is not used for napi_complete_done(). Thus, the NAPI busy-polling mechanism driven by gro_flush_timeout and napi_defer_hard_irqs cannot be triggered after a packet is transmitted when there is no received packet. Fix this by taking the maximum of tx_done and rx_done as the overall budget completed by the rxtx NAPI poll, to ensure the XDP Tx ZC operation keeps polling for the next Tx frame. This gives the benefit of lower packet-submission processing latency and jitter under XDP Tx ZC mode. Performance of tx-only using xdp-sock on an Intel ADL-S platform is the same with and without this patch.

root@intel-corei7-64:~# ./xdpsock -i enp0s30f4 -t -z -q 1 -n 10
 sock0@enp0s30f4:1 txonly xdp-drv
                pps         pkts        10.00
rx              0           0
tx              511630      8659520

 sock0@enp0s30f4:1 txonly xdp-drv
                pps         pkts        10.00
rx              0           0
tx              511625      13775808

 sock0@enp0s30f4:1 txonly xdp-drv
                pps         pkts        10.00
rx              0           0
tx              511619      18892032

Fixes: 132c32ee5bc0 ("net: stmmac: Add TX via XDP zero-copy socket")
Cc: <stable@vger.kernel.org> # 5.13.x
Co-developed-by: Ong Boon Leong <boon.leong.ong@intel.com>
Signed-off-by: Ong Boon Leong <boon.leong.ong@intel.com>
Signed-off-by: Song Yoong Siang <yoong.siang.song@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
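In code terms the fix amounts to the following in the rxtx NAPI poll routine (a sketch; names follow the description rather than the driver source):

  /* Report the larger of the two sub-budgets so NAPI keeps polling
   * while either direction still has work pending. */
  rxtx_done = max(rx_done, tx_done);
  rxtx_done = min(rxtx_done, budget);
  if (rxtx_done < budget && napi_complete_done(napi, rxtx_done)) {
      /* re-enable the channel's DMA interrupts here */
  }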
2021-09-05  iwlwifi: pnvm: Fix a memory leak in 'iwl_pnvm_get_from_fs()'  (Christophe JAILLET)
A firmware is requested but never released in this function. This leads to a memory leak in the normal execution path. Add the missing 'release_firmware()' call. Also introduce a temp variable (new_len) in order to keep the value of 'pnvm->size' after the firmware has been released. Fixes: cdda18fbbefa ("iwlwifi: pnvm: move file loading code to a separate function") Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Reviewed-by: Dan Carpenter <dan.carpenter@oracle.com> Acked-by: Luca Coelho <luca@coelho.fi> Signed-off-by: Kalle Valo <kvalo@codeaurora.org> Link: https://lore.kernel.org/r/1b5d80f54c1dbf85710fd285243932943b498fe7.1630614969.git.christophe.jaillet@wanadoo.fr
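The resulting pattern looks roughly like this (a sketch assembled from the description; the surrounding variables are assumptions):

  ret = request_firmware(&pnvm, pnvm_name, trans->dev);
  if (ret)
      return ret;

  new_len = pnvm->size;
  *data = kmemdup(pnvm->data, pnvm->size, GFP_KERNEL);
  release_firmware(pnvm);          /* the previously missing call */
  if (!*data)
      return -ENOMEM;

  *len = new_len;                  /* pnvm is gone, use the saved size */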
2021-09-05  net/9p: increase default msize to 128k  (Christian Schoenebeck)
Let's raise the default msize value to 128k. The 'msize' option defines the maximum message size allowed for any message being transmitted (in both directions) between 9p server and 9p client during a 9p session. Currently the default 'msize' is just 8k, which is way too conservative. Such a small 'msize' value has quite a negative performance impact, because individual 9p messages have to be split up far too often into numerous smaller messages to fit into this message size limitation. A default value of just 8k also has a much higher probability of hitting short-read issues like: https://gitlab.com/qemu-project/qemu/-/issues/409 Unfortunately user feedback showed that many 9p users are not aware that this option even exists, nor of the negative impact it might have if set too low. Link: http://lkml.kernel.org/r/61ea0f0faaaaf26dd3c762eabe4420306ced21b9.1630770829.git.linux_oss@crudebyte.com Link: https://lists.gnu.org/archive/html/qemu-devel/2021-03/msg01003.html Signed-off-by: Christian Schoenebeck <linux_oss@crudebyte.com> Signed-off-by: Dominique Martinet <asmadeus@codewreck.org>
2021-09-05  net/9p: use macro to define default msize  (Christian Schoenebeck)
Use a macro to define the default value for the 'msize' option at one place instead of using two separate integer literals. Link: http://lkml.kernel.org/r/28bb651ae0349a7d57e8ddc92c1bd5e62924a912.1630770829.git.linux_oss@crudebyte.com Signed-off-by: Christian Schoenebeck <linux_oss@crudebyte.com> Signed-off-by: Dominique Martinet <asmadeus@codewreck.org>
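Taken together with the previous entry, the change reduces to a single definition along these lines (the macro name is illustrative; the value is the new 128k default):

  /* net/9p: one authoritative default instead of two bare literals */
  #define DEFAULT_MSIZE  (128 * 1024)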
2021-09-05  net/9p: increase tcp max msize to 1MB  (Dominique Martinet)
Historically TCP has been limited to 64K buffers, but increasing msize provides huge performance benefits especially as latency increases, so allow for bigger buffers. Ideally further improvements could change the allocation from the current contiguous chunk in slab (kmem_cache) to some scatter-gather compatible API... Note this only increases the max possible setting, not the default value. Link: http://lkml.kernel.org/r/YTQB5jCbvhmCWzNd@codewreck.org Signed-off-by: Dominique Martinet <asmadeus@codewreck.org>
2021-09-04  NTB: perf: Fix an error code in perf_setup_inbuf()  (Yang Li)
When IS_ALIGNED() returns false, ret still holds 0, so the function would report success despite the failed check. Set ret to -EINVAL to indicate the error. Clean up smatch warning: drivers/ntb/test/ntb_perf.c:602 perf_setup_inbuf() warn: missing error code 'ret'. Reported-by: Abaci Robot <abaci@linux.alibaba.com> Signed-off-by: Yang Li <yang.lee@linux.alibaba.com> Reviewed-by: Serge Semin <fancer.lancer@gmail.com> Signed-off-by: Jon Mason <jdmason@kudzu.us>
2021-09-04  NTB: Fix an error code in ntb_msit_probe()  (Yang Li)
When nm->isr_ctx is NULL (the allocation failed), ret still holds 0, so the function would report success. Set ret to -ENOMEM to indicate the error. Clean up smatch warning: drivers/ntb/test/ntb_msi_test.c:373 ntb_msit_probe() warn: missing error code 'ret'. Reported-by: Abaci Robot <abaci@linux.alibaba.com> Signed-off-by: Yang Li <yang.lee@linux.alibaba.com> Reviewed-by: Logan Gunthorpe <logang@deltatee.com> Signed-off-by: Jon Mason <jdmason@kudzu.us>
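Both NTB fixes above share one shape: a failed check must assign an error code before jumping to cleanup, otherwise the function returns 0 as if it had succeeded. A generic sketch (identifiers are illustrative):

  if (!IS_ALIGNED(buf_xlat, xlat_align)) {
      ret = -EINVAL;   /* previously left at 0, masking the failure */
      goto err_free_buf;
  }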
2021-09-04  ntb: intel: remove invalid email address in header comment  (Dave Jiang)
Remove Jon's old email address. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Signed-off-by: Jon Mason <jdmason@kudzu.us>
2021-09-04  Merge tag 'denywrite-for-5.15' of git://github.com/davidhildenbrand/linux  (Linus Torvalds)
Pull MAP_DENYWRITE removal from David Hildenbrand:
"Remove all in-tree usage of MAP_DENYWRITE from the kernel and remove VM_DENYWRITE.

There are some (minor) user-visible changes:

 - We no longer deny write access to shared libraries loaded via legacy uselib(); this behavior matches modern user space, e.g. dlopen().

 - We no longer deny write access to the elf interpreter after exec completed, treating it just like shared libraries (which it often is).

 - We always deny write access to the file linked via /proc/pid/exe: sys_prctl(PR_SET_MM_MAP/EXE_FILE) will fail if write access to the file cannot be denied, and write access to the file will remain denied until the link is effectively gone (exec, termination, sys_prctl(PR_SET_MM_MAP/EXE_FILE)) -- just as if exec'ing the file.

Cross-compiled for a bunch of architectures (alpha, microblaze, i386, s390x, ...) and verified via ltp that especially the relevant tests (i.e., creat07 and execve04) continue working as expected"

* tag 'denywrite-for-5.15' of git://github.com/davidhildenbrand/linux:
  fs: update documentation of get_write_access() and friends
  mm: ignore MAP_DENYWRITE in ksys_mmap_pgoff()
  mm: remove VM_DENYWRITE
  binfmt: remove in-tree usage of MAP_DENYWRITE
  kernel/fork: always deny write access to current MM exe_file
  kernel/fork: factor out replacing the current MM exe_file
  binfmt: don't use MAP_DENYWRITE when loading shared libraries via uselib()
2021-09-04  Merge git://github.com/Paragon-Software-Group/linux-ntfs3  (Linus Torvalds)
Merge NTFSv3 filesystem from Konstantin Komarov:
"This patch adds an NTFS Read-Write driver to fs/ntfs3.

Having decades of expertise in commercial file systems development and huge test coverage, we at Paragon Software GmbH want to make our contribution to the Open Source Community by providing an implementation of an NTFS Read-Write driver for the Linux Kernel.

This is a fully functional NTFS Read-Write driver. The current version works with NTFS (including v3.1) and normal/compressed/sparse files and supports journal replaying.

We plan to support this version once the codebase is merged, and to add new features and fix bugs. For example, full journaling support over JBD will be added in later updates"

Link: https://lore.kernel.org/lkml/20210729134943.778917-1-almaz.alexandrovich@paragon-software.com/
Link: https://lore.kernel.org/lkml/aa4aa155-b9b2-9099-b7a2-349d8d9d8fbd@paragon-software.com/

* git://github.com/Paragon-Software-Group/linux-ntfs3: (35 commits)
  fs/ntfs3: Change how module init/info messages are displayed
  fs/ntfs3: Remove GPL boilerplates from decompress lib files
  fs/ntfs3: Remove unnecessary condition checking from ntfs_file_read_iter
  fs/ntfs3: Fix integer overflow in ni_fiemap with fiemap_prep()
  fs/ntfs3: Restyle comments to better align with kernel-doc
  fs/ntfs3: Rework file operations
  fs/ntfs3: Remove fat ioctl's from ntfs3 driver for now
  fs/ntfs3: Restyle comments to better align with kernel-doc
  fs/ntfs3: Fix error handling in indx_insert_into_root()
  fs/ntfs3: Potential NULL dereference in hdr_find_split()
  fs/ntfs3: Fix error code in indx_add_allocate()
  fs/ntfs3: fix an error code in ntfs_get_acl_ex()
  fs/ntfs3: add checks for allocation failure
  fs/ntfs3: Use kcalloc/kmalloc_array over kzalloc/kmalloc
  fs/ntfs3: Do not use driver own alloc wrappers
  fs/ntfs3: Use kernel ALIGN macros over driver specific
  fs/ntfs3: Restyle comment block in ni_parse_reparse()
  fs/ntfs3: Remove unused including <linux/version.h>
  fs/ntfs3: Fix fall-through warnings for Clang
  fs/ntfs3: Fix one none utf8 char in source file
  ...
2021-09-04  Merge tag 'f2fs-for-5.15-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs  (Linus Torvalds)
Pull f2fs updates from Jaegeuk Kim:
"In this cycle, we've addressed some performance issues such as lock contention, misbehaving compress_cache, allowing extent_cache for compressed files, and new sysfs to adjust ra_size for fadvise. In order to diagnose the performance issues quickly, we also added an iostat which shows the IO latencies periodically. On the stability side, we've found two memory leakage cases in the error path in compression flow. And, we've also fixed various corner cases in fiemap, quota, checkpoint=disable, zstd, and so on.

Enhancements:
 - avoid long checkpoint latency by releasing nat_tree_lock
 - collect and show iostats periodically
 - support extent_cache for compressed files
 - add a sysfs entry to manage ra_size given fadvise(POSIX_FADV_SEQUENTIAL)
 - report f2fs GC status via sysfs
 - add discard_unit=%s in mount option to handle zoned device

Bug fixes:
 - fix two memory leakages when an error happens in the compressed IO flow
 - fix compress_cache to get the right LBA
 - fix fiemap to deal with compressed case correctly
 - fix wrong EIO returns due to SBI_NEED_FSCK
 - fix missing writes when enabling checkpoint back
 - fix quota deadlock
 - fix zstd level mount option

In addition to the above major updates, we've cleaned up several code paths such as dio, unnecessary operations, debugfs/f2fs/status, sanity check, and typos"

* tag 'f2fs-for-5.15-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs: (46 commits)
  f2fs: should put a page beyond EOF when preparing a write
  f2fs: deallocate compressed pages when error happens
  f2fs: enable realtime discard iff device supports discard
  f2fs: guarantee to write dirty data when enabling checkpoint back
  f2fs: fix to unmap pages from userspace process in punch_hole()
  f2fs: fix unexpected ENOENT comes from f2fs_map_blocks()
  f2fs: fix to account missing .skipped_gc_rwsem
  f2fs: adjust unlock order for cleanup
  f2fs: Don't create discard thread when device doesn't support realtime discard
  f2fs: rebuild nat_bits during umount
  f2fs: introduce periodic iostat io latency traces
  f2fs: separate out iostat feature
  f2fs: compress: do sanity check on cluster
  f2fs: fix description about main_blkaddr node
  f2fs: convert S_IRUGO to 0444
  f2fs: fix to keep compatibility of fault injection interface
  f2fs: support fault injection for f2fs_kmem_cache_alloc()
  f2fs: compress: allow write compress released file after truncate to zero
  f2fs: correct comment in segment.h
  f2fs: improve sbi status info in debugfs/f2fs/status
  ...
2021-09-04  cdrom: update uniform CD-ROM maintainership in MAINTAINERS file  (Phillip Potter)
Update maintainership for the uniform CD-ROM driver from Jens Axboe to Phillip Potter in MAINTAINERS file, to reflect the attempt to pass on maintainership of this driver to a different individual. Also remove URL to site which is no longer active. Suggested-by: Jens Axboe <axboe@kernel.dk> Signed-off-by: Phillip Potter <phil@philpotter.co.uk> Link: https://lore.kernel.org/r/20210904174030.1103-1-phil@philpotter.co.uk Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-09-04  Merge tag 'nfs-for-5.15-1' of git://git.linux-nfs.org/projects/anna/linux-nfs  (Linus Torvalds)
Pull NFS client updates from Anna Schumaker:
"New Features:
 - Better client responsiveness when server isn't replying
 - Use refcount_t in sunrpc rpc_client refcount tracking
 - Add srcaddr and dst_port to the sunrpc sysfs info files
 - Add basic support for connection sharing between servers with multiple NICs

Bugfixes and Cleanups:
 - Sunrpc tracepoint cleanups
 - Disconnect after ib_post_send() errors to avoid deadlocks
 - Fix for tearing down rpcrdma_reps
 - Fix a potential pNFS layoutget livelock loop
 - pNFS layout barrier fixes
 - Fix a potential memory corruption in rpc_wake_up_queued_task_set_status()
 - Fix reconnection locking
 - Fix return value of get_srcport()
 - Remove rpcrdma_post_sends()
 - Remove pNFS dead code
 - Remove copy size restriction for inter-server copies
 - Overhaul the NFS callback service
 - Clean up sunrpc TCP socket shutdowns
 - Always provide aligned buffers to RPC read layers"

* tag 'nfs-for-5.15-1' of git://git.linux-nfs.org/projects/anna/linux-nfs: (39 commits)
  NFS: Always provide aligned buffers to the RPC read layers
  NFSv4.1 add network transport when session trunking is detected
  SUNRPC enforce creation of no more than max_connect xprts
  NFSv4 introduce max_connect mount options
  SUNRPC add xps_nunique_destaddr_xprts to xprt_switch_info in sysfs
  SUNRPC keep track of number of transports to unique addresses
  NFSv3: Delete duplicate judgement in nfs3_async_handle_jukebox
  SUNRPC: Tweak TCP socket shutdown in the RPC client
  SUNRPC: Simplify socket shutdown when not reusing TCP ports
  NFSv4.2: remove restriction of copy size for inter-server copy.
  NFS: Clean up the synopsis of callback process_op()
  NFS: Extract the xdr_init_encode/decode() calls from decode_compound
  NFS: Remove unused callback void decoder
  NFS: Add a private local dispatcher for NFSv4 callback operations
  SUNRPC: Eliminate the RQ_AUTHERR flag
  SUNRPC: Set rq_auth_stat in the pg_authenticate() callout
  SUNRPC: Add svc_rqst::rq_auth_stat
  SUNRPC: Add dst_port to the sysfs xprt info file
  SUNRPC: Add srcaddr as a file in sysfs
  sunrpc: Fix return value of get_srcport()
  ...
2021-09-04  octeontx2-af: Fix some memory leaks in the error handling path of 'cgx_lmac_init()'  (Christophe JAILLET)
Memory allocated before 'lmac' is stored in 'cgx->lmac_idmap[]' must be freed explicitly. Otherwise, in case of error, it will leak. Rename the 'err_irq' label to better describe what is done at this place in the error handling path. Fixes: 6f14078e3ee5 ("octeontx2-af: DMAC filter support in MAC block") Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-09-04  octeontx2-af: Add a 'rvu_free_bitmap()' function  (Christophe JAILLET)
In order to match 'rvu_alloc_bitmap()', add a 'rvu_free_bitmap()' function. Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-09-04  ethtool: Fix an error code in cxgb2.c  (Yang Li)
When adapter->registered_device_map is NULL, the value of err is uncertain, we set err to -EINVAL to avoid ambiguity. Clean up smatch warning: drivers/net/ethernet/chelsio/cxgb/cxgb2.c:1114 init_one() warn: missing error code 'err' Reported-by: Abaci Robot <abaci@linux.alibaba.com> Signed-off-by: Yang Li <yang.lee@linux.alibaba.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-09-04  qlcnic: Remove redundant unlock in qlcnic_pinit_from_rom  (Dinghao Liu)
Previous commit 68233c583ab4 removed the qlcnic_rom_lock() call in qlcnic_pinit_from_rom(), but left its corresponding unlock in place, which is odd. I'm not very sure whether the lock is missing, or the unlock is redundant. This bug is suggested by a static analysis tool, please advise. Fixes: 68233c583ab4 ("qlcnic: updated reset sequence") Signed-off-by: Dinghao Liu <dinghao.liu@zju.edu.cn> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-09-04  fq_codel: reject silly quantum parameters  (Eric Dumazet)
syzbot found that forcing a big quantum attribute would crash hosts fast, essentially using this:

  tc qd replace dev eth0 root fq_codel quantum 4294967295

This is because fq_codel_dequeue() would have to loop ~2^31 times in:

  if (flow->deficit <= 0) {
      flow->deficit += q->quantum;
      list_move_tail(&flow->flowchain, &q->old_flows);
      goto begin;
  }

SFQ's max quantum is 2^19 (half a megabyte). Let's adopt a max quantum of one megabyte for FQ_CODEL.

Fixes: 4b549a2ef4be ("fq_codel: Fair Queue Codel AQM")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: syzbot <syzkaller@googlegroups.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
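With a cap in place, the attribute parser can reject oversized values up front; a sketch where FQ_CODEL_QUANTUM_MAX is assumed to encode the one-megabyte limit:

  if (tb[TCA_FQ_CODEL_QUANTUM]) {
      u32 quantum = nla_get_u32(tb[TCA_FQ_CODEL_QUANTUM]);

      /* reject silly values before they reach the dequeue loop */
      if (quantum > FQ_CODEL_QUANTUM_MAX)
          return -EINVAL;
      q->quantum = max(256U, quantum);
  }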
2021-09-04  mm, slub: convert kmem_cpu_slab protection to local_lock  (Vlastimil Babka)
Embed local_lock into struct kmem_cache_cpu and use the irq-safe versions of local_lock instead of plain local_irq_save/restore. On !PREEMPT_RT that's equivalent, with better lockdep visibility. On PREEMPT_RT that means better preemption. However, the cost on PREEMPT_RT is the loss of lockless fast paths which only work with cpu freelist. Those are designed to detect and recover from being preempted by other conflicting operations (both fast or slow path), but the slow path operations assume they cannot be preempted by a fast path operation, which is guaranteed naturally with disabled irqs. With local locks on PREEMPT_RT, the fast paths now also need to take the local lock to avoid races. In the allocation fastpath slab_alloc_node() we can just defer to the slowpath __slab_alloc() which also works with cpu freelist, but under the local lock. In the free fastpath do_slab_free() we have to add a new local lock protected version of freeing to the cpu freelist, as the existing slowpath only works with the page freelist. Also update the comment about locking scheme in SLUB to reflect changes done by this series. [ Mike Galbraith <efault@gmx.de>: use local_lock() without irq in PREEMPT_RT scope; debugging of RT crashes resulting in put_cpu_partial() locking changes ] Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
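Structurally, the conversion embeds the lock next to the data it protects and swaps the bare irq toggles for lock operations (a sketch; the field layout is abbreviated):

  struct kmem_cache_cpu {
      local_lock_t lock;   /* protects the fields below */
      void **freelist;     /* next available object */
      struct page *page;   /* slab page the objects come from */
      /* ... */
  };

  /* instead of local_irq_save(flags): */
  local_lock_irqsave(&s->cpu_slab->lock, flags);
  /* ... manipulate c->freelist / c->page ... */
  local_unlock_irqrestore(&s->cpu_slab->lock, flags);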
2021-09-04  mm, slub: use migrate_disable() on PREEMPT_RT  (Vlastimil Babka)
We currently use preempt_disable() (directly or via get_cpu_ptr()) to stabilize the pointer to kmem_cache_cpu. On PREEMPT_RT this would be incompatible with the list_lock spinlock. We can use migrate_disable() instead, but that increases overhead on !PREEMPT_RT as it's an unconditional function call. In order to get the best available mechanism on both PREEMPT_RT and !PREEMPT_RT, introduce private slub_get_cpu_ptr() and slub_put_cpu_ptr() wrappers and use them. Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
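The wrappers pick the cheapest safe primitive for each configuration; they plausibly look like this (a sketch of the described approach):

  #ifdef CONFIG_PREEMPT_RT
  /* migrate_disable() pins the task to this CPU but stays preemptible */
  #define slub_get_cpu_ptr(var)   ({ migrate_disable(); this_cpu_ptr(var); })
  #define slub_put_cpu_ptr(var)   do { (void)(var); migrate_enable(); } while (0)
  #else
  /* get_cpu_ptr() is preempt_disable() plus this_cpu_ptr(), no extra call */
  #define slub_get_cpu_ptr(var)   get_cpu_ptr(var)
  #define slub_put_cpu_ptr(var)   put_cpu_ptr(var)
  #endif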
2021-09-04  mm, slub: protect put_cpu_partial() with disabled irqs instead of cmpxchg  (Vlastimil Babka)
Jann Horn reported [1] the following theoretically possible race:

  task A: put_cpu_partial() calls preempt_disable()
  task A: oldpage = this_cpu_read(s->cpu_slab->partial)
  interrupt: kfree() reaches unfreeze_partials() and discards the page
  task B (on another CPU): reallocates page as page cache
  task A: reads page->pages and page->pobjects, which are actually halves of the pointer page->lru.prev
  task B (on another CPU): frees page
  interrupt: allocates page as SLUB page and places it on the percpu partial list
  task A: this_cpu_cmpxchg() succeeds

  which would cause page->pages and page->pobjects to end up containing halves of pointers that would then influence when put_cpu_partial() happens and show up in root-only sysfs files. Maybe that's acceptable, I don't know. But there should probably at least be a comment for now to point out that we're reading union fields of a page that might be in a completely different state.

Additionally, the this_cpu_cmpxchg() approach in put_cpu_partial() is only safe against s->cpu_slab->partial manipulation in ___slab_alloc() if the latter disables irqs, otherwise a __slab_free() in an irq handler could call put_cpu_partial() in the middle of ___slab_alloc() manipulating ->partial and corrupt it. This becomes an issue on RT after a local_lock is introduced in a later patch. The fix means taking the local_lock also in put_cpu_partial() on RT.

After debugging this issue, Mike Galbraith suggested [2] that to avoid different locking schemes on RT and !RT, we can just protect put_cpu_partial() with disabled irqs (to be converted to local_lock_irqsave() later) everywhere. This should be acceptable as it's not a fast path, and moving the actual partial unfreezing outside of the irq-disabled section makes it short, and with the retry loop gone the code can also be simplified. In addition, the race reported by Jann should no longer be possible.

[1] https://lore.kernel.org/lkml/CAG48ez1mvUuXwg0YPH5ANzhQLpbphqk-ZS+jbRz+H66fvm4FcA@mail.gmail.com/
[2] https://lore.kernel.org/linux-rt-users/e3470ab357b48bccfbd1f5133b982178a7d2befb.camel@gmx.de/

Reported-by: Jann Horn <jannh@google.com>
Suggested-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
2021-09-04  mm, slub: make slab_lock() disable irqs with PREEMPT_RT  (Vlastimil Babka)
We need to disable irqs around slab_lock() (a bit spinlock) to make it irq-safe. Most calls to slab_lock() are nested under spin_lock_irqsave() which doesn't disable irqs on PREEMPT_RT, so add explicit disabling with PREEMPT_RT. The exception is cmpxchg_double_slab() which already disables irqs, so use a __slab_[un]lock() variant without irq disable there. slab_[un]lock() thus needs a flags pointer parameter, which is unused on !RT. free_debug_processing() now has two flags variables, which looks odd, but only one is actually used - the one used in spin_lock_irqsave() on !RT and the one used in slab_lock() on RT. As a result, __cmpxchg_double_slab() and cmpxchg_double_slab() become effectively identical on RT, as both will disable irqs, which is necessary on RT as most callers of this function also rely on irqsaving lock operations. Thus, assert that irqs are already disabled in __cmpxchg_double_slab() only on !RT and also change the VM_BUG_ON assertion to the more standard lockdep_assert one. Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
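A sketch of the split described above; __slab_lock() is the plain bit-spinlock helper the message mentions, and the flags parameter is only consumed on PREEMPT_RT:

  static inline void slab_lock(struct page *page, unsigned long *flags)
  {
      if (IS_ENABLED(CONFIG_PREEMPT_RT))
          local_irq_save(*flags);  /* compiled out on !RT */
      __slab_lock(page);
  }

  static inline void slab_unlock(struct page *page, unsigned long *flags)
  {
      __slab_unlock(page);
      if (IS_ENABLED(CONFIG_PREEMPT_RT))
          local_irq_restore(*flags);
  }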
2021-09-04  mm: slub: make object_map_lock a raw_spinlock_t  (Sebastian Andrzej Siewior)
The variable object_map is protected by object_map_lock. The lock is always acquired in debug code and within an already atomic context. Make object_map_lock a raw_spinlock_t. Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
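The change itself should amount to a one-line type swap (sketched below); a raw_spinlock_t keeps spinning on PREEMPT_RT rather than becoming a sleeping lock, which is what an already-atomic debug context requires:

  /* was: static DEFINE_SPINLOCK(object_map_lock); */
  static DEFINE_RAW_SPINLOCK(object_map_lock);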
2021-09-03  loop: reduce the loop_ctl_mutex scope  (Tetsuo Handa)
syzbot is reporting circular locking problem at __loop_clr_fd() [1], for commit a160c6159d4a0cf8 ("block: add an optional probe callback to major_names") is calling the module's probe function with major_names_lock held. Fortunately, since commit 990e78116d38059c ("block: loop: fix deadlock between open and remove") stopped holding loop_ctl_mutex in lo_open(), current role of loop_ctl_mutex is to serialize access to loop_index_idr and loop_add()/loop_remove(); in other words, management of id for IDR. To avoid holding loop_ctl_mutex during whole add/remove operation, use a bool flag to indicate whether the loop device is ready for use. loop_unregister_transfer() which is called from cleanup_cryptoloop() currently has possibility of use-after-free problem due to lack of serialization between kfree() from loop_remove() from loop_control_remove() and mutex_lock() from unregister_transfer_cb(). But since lo->lo_encryption should be already NULL when this function is called due to module unload, and commit 222013f9ac30b9ce ("cryptoloop: add a deprecation warning") indicates that we will remove this function shortly, this patch updates this function to emit warning instead of checking lo->lo_encryption. Holding loop_ctl_mutex in loop_exit() is pointless, for all users must close /dev/loop-control and /dev/loop$num (in order to drop module's refcount to 0) before loop_exit() starts, and nobody can open /dev/loop-control or /dev/loop$num afterwards. Link: https://syzkaller.appspot.com/bug?id=7bb10e8b62f83e4d445cdf4c13d69e407e629558 [1] Reported-by: syzbot <syzbot+f61766d5763f9e7a118f@syzkaller.appspotmail.com> Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp> Reviewed-by: Christoph Hellwig <hch@lst.de> Link: https://lore.kernel.org/r/adb1e792-fc0e-ee81-7ea0-0906fc36419d@i-love.sakura.ne.jp Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-09-03  Merge git://git.kernel.org/pub/scm/linux/kernel/git/pablo/nf  (Jakub Kicinski)
Pablo Neira Ayuso says:

====================
Netfilter fixes for net

1) Protect nft_ct template with global mutex, from Pavel Skripkin.

2) Two recent commits switched inet rt and nexthop exception hashes from jhash to siphash. If those two spots are problematic then conntrack is affected as well, so switch over to siphash too. While at it, add a hard upper limit on chain lengths and reject insertion if this is hit. Patches from Florian Westphal.

3) Fix use-after-scope in nf_socket_ipv6 reported by KASAN, from Benjamin Hesmans.
====================

* git://git.kernel.org/pub/scm/linux/kernel/git/pablo/nf:
  netfilter: socket: icmp6: fix use-after-scope
  netfilter: refuse insertion if chain has grown too large
  netfilter: conntrack: switch to siphash
  netfilter: conntrack: sanitize table size default settings
  netfilter: nft_ct: protect nft_ct_pcpu_template_refcnt with mutex

Link: https://lore.kernel.org/r/20210903163020.13741-1-pablo@netfilter.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2021-09-03  ionic: fix a sleeping in atomic bug  (Dan Carpenter)
This code is holding spin_lock_bh(&lif->rx_filters.lock); so the allocation needs to be atomic. Fixes: 969f84394604 ("ionic: sync the filters in the work task") Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Shannon Nelson <snelson@pensando.io> Link: https://lore.kernel.org/r/20210903131856.GA25934@kili Signed-off-by: Jakub Kicinski <kuba@kernel.org>
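Under spin_lock_bh() a sleeping allocation is a bug, so the fix is the usual GFP swap. A sketch with an illustrative allocation site (the exact call in the driver may differ):

  spin_lock_bh(&lif->rx_filters.lock);
  /* GFP_KERNEL may sleep; only GFP_ATOMIC is allowed under the lock */
  f = devm_kzalloc(dev, sizeof(*f), GFP_ATOMIC);
  if (!f) {
      spin_unlock_bh(&lif->rx_filters.lock);
      return -ENOMEM;
  }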
2021-09-04  mm: slub: move flush_cpu_slab() invocations __free_slab() invocations out of IRQ context  (Sebastian Andrzej Siewior)
flush_all() flushes a specific SLAB cache on each CPU (where the cache is present). The deactivate_slab()/__free_slab() invocation happens within the IPI handler and is problematic for PREEMPT_RT. The flush operation is not a frequent operation or a hot path. The per-CPU flush operation can be moved to within a workqueue. Because a workqueue handler, unlike an IPI handler, does not disable irqs, flush_slab() now has to disable them for working with the kmem_cache_cpu fields. deactivate_slab() is safe to call with irqs enabled. [vbabka@suse.cz: adapt to new SLUB changes] Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
2021-09-04  mm, slab: split out the cpu offline variant of flush_slab()  (Vlastimil Babka)
flush_slab() is called either as part of an IPI handler on a given live cpu, or as a cleanup on behalf of another cpu that went offline. The first case needs to protect updating the kmem_cache_cpu fields with disabled irqs. Currently the whole call happens with irqs disabled by the IPI handler, but the following patch will change from IPI to workqueue, and flush_slab() will have to disable irqs (to be replaced with a local lock later) in the critical part. To prepare for this change, replace the call to flush_slab() for the dead cpu handling with an opencoded variant that will not disable irqs nor take a local lock. Suggested-by: Mike Galbraith <efault@gmx.de> Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
2021-09-04  mm, slub: don't disable irqs in slub_cpu_dead()  (Vlastimil Babka)
slub_cpu_dead() cleans up for an offlined cpu from another cpu and calls only functions that are now irq safe, so we don't need to disable irqs anymore. Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
2021-09-04  mm, slub: only disable irq with spin_lock in __unfreeze_partials()  (Vlastimil Babka)
__unfreeze_partials() no longer needs to have irqs disabled, except for making the spin_lock operations irq-safe, so convert the spin_lock operations and remove the separate irq handling. Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
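The conversion is the standard fold of irq handling into the lock operation itself (a sketch; n is the kmem_cache_node whose list_lock is taken):

  unsigned long flags;

  /* was: local_irq_save(flags); ... spin_lock(&n->list_lock); */
  spin_lock_irqsave(&n->list_lock, flags);
  /* ... move the detached pages onto the node partial list ... */
  spin_unlock_irqrestore(&n->list_lock, flags);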
2021-09-04  mm, slub: separate detaching of partial list in unfreeze_partials() from unfreezing  (Vlastimil Babka)
Unfreezing the partial list can be split into two phases - detaching the list from struct kmem_cache_cpu, and processing the list. The whole operation does not need to be protected by disabled irqs. Restructure the code to separate the detaching (with disabled irqs) from the unfreezing (with irq disabling to be reduced in the next patch).

Also, unfreeze_partials() can be called from another cpu on behalf of a cpu that is being offlined, where disabling irqs on the local cpu makes no sense, so restructure the code as follows:

 - __unfreeze_partials() is the bulk of unfreeze_partials() that processes the detached percpu partial list
 - unfreeze_partials() detaches the list from the current cpu with irqs disabled and calls __unfreeze_partials()
 - unfreeze_partials_cpu() is to be called for the offlined cpu so it needs no irq disabling, and is called from __flush_cpu_slab()
 - flush_cpu_slab() is for the local cpu thus it needs to call unfreeze_partials(). So it can't simply call __flush_cpu_slab(smp_processor_id()) anymore and we have to open-code the proper calls.

Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
2021-09-04  mm, slub: detach whole partial list at once in unfreeze_partials()  (Vlastimil Babka)
Instead of iterating through the live percpu partial list, detach it from the kmem_cache_cpu at once. This is simpler and will allow further optimization. Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
2021-09-04  mm, slub: discard slabs in unfreeze_partials() without irqs disabled  (Vlastimil Babka)
No need for disabled irqs when discarding slabs, so restore them before discarding. Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
2021-09-04  mm, slub: move irq control into unfreeze_partials()  (Vlastimil Babka)
unfreeze_partials() can be optimized so that it doesn't need irqs disabled for the whole time. As the first step, move irq control into the function and remove it from the put_cpu_partial() caller. Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
2021-09-04  mm, slub: call deactivate_slab() without disabling irqs  (Vlastimil Babka)
The function is now safe to be called with irqs enabled, so move the calls outside of irq disabled sections. When called from ___slab_alloc() -> flush_slab() we have irqs disabled, so to reenable them before deactivate_slab() we need to open-code flush_slab() in ___slab_alloc() and reenable irqs after modifying the kmem_cache_cpu fields. But that means an IRQ handler meanwhile might have assigned a new page to kmem_cache_cpu.page so we have to retry the whole check. The remaining callers of flush_slab() are the IPI handler which has disabled irqs anyway, and slub_cpu_dead() which will be dealt with in the following patch. Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
2021-09-04  mm, slub: make locking in deactivate_slab() irq-safe  (Vlastimil Babka)
deactivate_slab() now no longer touches the kmem_cache_cpu structure, so it will be possible to call it with irqs enabled. Just convert the spin_lock calls to their irq saving/restoring variants to make it irq-safe. Note we now have to use cmpxchg_double_slab() for irq-safe slab_lock(), because in some situations we don't take the list_lock, which would disable irqs. Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
2021-09-04  mm, slub: move reset of c->page and freelist out of deactivate_slab()  (Vlastimil Babka)
deactivate_slab() removes the cpu slab by merging the cpu freelist with slab's freelist and putting the slab on the proper node's list. It also sets the respective kmem_cache_cpu pointers to NULL. By extracting the kmem_cache_cpu operations from the function, we can make it not dependent on disabled irqs. Also if we return a single free pointer from ___slab_alloc, we no longer have to assign kmem_cache_cpu.page before deactivation or care if somebody preempted us and assigned a different page to our kmem_cache_cpu in the process. Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
2021-09-04  mm, slub: stop disabling irqs around get_partial()  (Vlastimil Babka)
The function get_partial() does not need to have irqs disabled as a whole. It's sufficient to convert spin_lock operations to their irq saving/restoring versions. As a result, it's now possible to reach the page allocator from the slab allocator without disabling and re-enabling interrupts on the way. Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
2021-09-04  mm, slub: check new pages with restored irqs  (Vlastimil Babka)
Building on top of the previous patch, re-enable irqs before checking new pages. alloc_debug_processing() is now called with enabled irqs so we need to remove VM_BUG_ON(!irqs_disabled()); in check_slab() - there doesn't seem to be a need for it anyway. Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
2021-09-04  mm, slub: validate slab from partial list or page allocator before making it cpu slab  (Vlastimil Babka)
When we obtain a new slab page from node partial list or page allocator, we assign it to kmem_cache_cpu, perform some checks, and if they fail, we undo the assignment. In order to allow doing the checks without irq disabled, restructure the code so that the checks are done first, and kmem_cache_cpu.page assignment only after they pass. Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
2021-09-04  mm, slub: restore irqs around calling new_slab()  (Vlastimil Babka)
allocate_slab() currently re-enables irqs before calling to the page allocator. It depends on gfpflags_allow_blocking() to determine if it's safe to do so. Now we can instead simply restore irq before calling it through new_slab(). The other caller early_kmem_cache_node_alloc() is unaffected by this. Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
2021-09-04  mm, slub: move disabling irqs closer to get_partial() in ___slab_alloc()  (Vlastimil Babka)
Continue reducing the irq disabled scope. Check for per-cpu partial slabs first with irqs enabled, then recheck with irqs disabled before grabbing the slab page. Mostly preparatory for the following patches. Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
2021-09-04  mm, slub: do initial checks in ___slab_alloc() with irqs enabled  (Vlastimil Babka)
As another step of shortening irq disabled sections in ___slab_alloc(), delay disabling irqs until we pass the initial checks if there is a cached percpu slab and it's suitable for our allocation. Now we have to recheck c->page after actually disabling irqs as an allocation in irq handler might have replaced it. Because we call pfmemalloc_match() as one of the checks, we might hit VM_BUG_ON_PAGE(!PageSlab(page)) in PageSlabPfmemalloc in case we get interrupted and the page is freed. Thus introduce a pfmemalloc_match_unsafe() variant that lacks the PageSlab check. Signed-off-by: Vlastimil Babka <vbabka@suse.cz> Acked-by: Mel Gorman <mgorman@techsingularity.net>
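The unsafe variant plausibly reduces to the same test minus the PageSlab assertion; here __PageSlabPfmemalloc() stands for a non-checking accessor (an assumption based on the description above):

  /* Safe on a page that may have been freed and reused concurrently,
   * because it does not assert PageSlab(page). */
  static inline bool pfmemalloc_match_unsafe(struct page *page, gfp_t gfpflags)
  {
      if (unlikely(__PageSlabPfmemalloc(page)))
          return gfp_pfmemalloc_allowed(gfpflags);
      return true;
  }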
2021-09-04  mm, slub: move disabling/enabling irqs to ___slab_alloc()  (Vlastimil Babka)
Currently __slab_alloc() disables irqs around the whole ___slab_alloc(). This includes cases where this is not needed, such as when the allocation ends up in the page allocator and has to awkwardly enable irqs back based on gfp flags. Also the whole kmem_cache_alloc_bulk() is executed with irqs disabled even when it hits the __slab_alloc() slow path, and long periods with disabled interrupts are undesirable. As a first step towards reducing irq disabled periods, move irq handling into ___slab_alloc(). Callers will instead prevent the s->cpu_slab percpu pointer from becoming invalid via get_cpu_ptr(), thus preempt_disable(). This does not protect against modification by an irq handler, which is still done by disabled irq for most of ___slab_alloc(). As a small immediate benefit, slab_out_of_memory() from ___slab_alloc() is now called with irqs enabled. kmem_cache_alloc_bulk() disables irqs for its fastpath and then re-enables them before calling ___slab_alloc(), which then disables them at its discretion. The whole kmem_cache_alloc_bulk() operation also disables preemption. When ___slab_alloc() calls new_slab() to allocate a new page, re-enable preemption, because new_slab() will re-enable interrupts in contexts that allow blocking (this will be improved by later patches). The patch itself will thus increase overhead a bit due to disabled preemption (on configs where it matters) and increased disabling/enabling irqs in kmem_cache_alloc_bulk(), but that will be gradually improved in the following patches. Note in __slab_alloc() we need to change the #ifdef CONFIG_PREEMPT guard to CONFIG_PREEMPT_COUNT to make sure preempt disable/enable is properly paired in all configurations. On configs without involuntary preemption and debugging the re-read of kmem_cache_cpu pointer is still compiled out as it was before. [ Mike Galbraith <efault@gmx.de>: Fix kmem_cache_alloc_bulk() error path ] Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
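After the move, the slow-path wrapper only pins the percpu pointer, roughly as below (a sketch of the scheme described above; note the CONFIG_PREEMPT_COUNT guard the message calls out):

  static void *__slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
                            unsigned long addr, struct kmem_cache_cpu *c)
  {
      void *p;

  #ifdef CONFIG_PREEMPT_COUNT
      /* get_cpu_ptr() implies preempt_disable(); it keeps c valid but
       * does not exclude irq handlers - ___slab_alloc() handles those */
      c = get_cpu_ptr(s->cpu_slab);
  #endif
      p = ___slab_alloc(s, gfpflags, node, addr, c);
  #ifdef CONFIG_PREEMPT_COUNT
      put_cpu_ptr(s->cpu_slab);
  #endif
      return p;
  }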