Age | Commit message | Author

2022-05-05 | arm64: mm: Cleanup useless parameters in zone_sizes_init() | Kefeng Wang

Directly use max_pfn for max; since no one uses min, kill both parameters.

Reviewed-by: Vijay Balakrishna <vijayb@linux.microsoft.com>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Link: https://lore.kernel.org/r/20220411092455.1461-4-wangkefeng.wang@huawei.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

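For reference, a minimal sketch of what the function reduces to (abridged; zone setup details vary by configuration):

    static void __init zone_sizes_init(void)
    {
            unsigned long max_zone_pfns[MAX_NR_ZONES] = {0};

            /* DMA zones elided; ZONE_NORMAL is now derived directly
             * from max_pfn instead of the dropped 'max' parameter. */
            max_zone_pfns[ZONE_NORMAL] = max_pfn;

            free_area_init(max_zone_pfns);
    }
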
2022-05-05 | mm/readahead: Fix readahead with large folios | Matthew Wilcox (Oracle)

Reading 100KB chunks from a big file (e.g. dd bs=100K) leads to poor readahead behaviour. Studying the traces in detail, I noticed two problems.

The first is that we were setting the readahead flag on the folio which contains the last byte read from the block. This is wrong because we will trigger readahead at the end of the read without waiting to see if a subsequent read is going to use the pages we just read. Instead, we need to set the readahead flag on the first folio _after_ the one which contains the last byte that we're reading.

The second is that we were looking for the index of the folio with the readahead flag set to exactly match start + size - async_size. If we've rounded this, either down (as previously) or up (as now), we'll think we hit a folio marked as readahead by a different read, and try to read the wrong pages. So round the expected index to the order of the folio we hit.

Reported-by: Guo Xuenan <guoxuenan@huawei.com>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>

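A sketch of the second fix (hypothetical helper; raw_expected corresponds to start + size - async_size): compare against an index rounded to the folio's order, since a large folio spans a range of indices rather than a single page:

    /* Sketch: is 'folio' the one this readahead window marked? */
    static bool folio_is_ra_mark(struct folio *folio, pgoff_t raw_expected)
    {
            pgoff_t expected = round_down(raw_expected,
                                          1UL << folio_order(folio));

            return folio->index == expected;
    }
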
2022-05-05 | block: Do not call folio_next() on an unreferenced folio | Matthew Wilcox (Oracle)

It is unsafe to call folio_next() on a folio unless you hold a reference on it that prevents it from being split or freed. After returning from the iterator, iomap calls folio_end_writeback() which may drop the last reference to the page, or allow the page to be split. If that happens, the iterator will not advance far enough through the bio_vec, leading to assertion failures like the BUG() in folio_end_writeback() that checks we're not trying to end writeback on a page not currently under writeback. Other assertion failures were also seen, but they're all explained by this one bug.

Fix the bug by remembering where the next folio starts before returning from the iterator. There are other ways of fixing this bug, but this seems the simplest.

Reported-by: Darrick J. Wong <djwong@kernel.org>
Tested-by: Darrick J. Wong <djwong@kernel.org>
Reported-by: Brian Foster <bfoster@redhat.com>
Tested-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>

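The safe iteration pattern, as a sketch (the accessor name is hypothetical; the real fix lives inside the bio folio iterator):

    struct folio *folio = bio_first_folio_all(bio);  /* hypothetical */
    struct folio *next;

    while (folio) {
            next = folio_next(folio);    /* safe: reference still held */
            folio_end_writeback(folio);  /* may free or split 'folio'  */
            folio = next;                /* never touch 'folio' again  */
    }
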
2022-05-04 | selftests: ocelot: tc_flower_chains: specify conform-exceed action for policer | Vladimir Oltean

As discussed here with Ido Schimmel:
https://patchwork.kernel.org/project/netdevbpf/patch/20220224102908.5255-2-jianbol@nvidia.com/
the default conform-exceed action is "reclassify", for a reason we don't really understand. The point is that hardware can't offload that police action, so not specifying "conform-exceed" was always wrong, even though the command used to work in hardware (but not in software) until the kernel started adding validation for it.

Fix the command used by the selftest by making the policer drop on exceed, and pass the packet to the next action (goto) on conform.

Fixes: 8cd6b020b644 ("selftests: ocelot: add some example VCAP IS1, IS2 and ES0 tc offloads")
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Reviewed-by: Ido Schimmel <idosch@nvidia.com>
Link: https://lore.kernel.org/r/20220503121428.842906-1-vladimir.oltean@nxp.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

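The fixed policer clause looks roughly like this (a sketch with placeholder device, rate, and chain values; "drop/pipe" means drop on exceed and continue to the next action on conform):

    tc filter add dev swp0 ingress protocol ip flower skip_sw \
            action police rate 50mbit burst 64k conform-exceed drop/pipe \
            action goto chain 10
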
2022-05-04 | Merge branch 'insufficient-tcp-source-port-randomness' | Jakub Kicinski

Willy Tarreau says:

====================
insufficient TCP source port randomness

In a not-yet published paper, Moshe Kol, Amit Klein, and Yossi Gilad report being able to accurately identify a client by forcing it to emit only 40 times more connections than the number of entries in the table_perturb[] table, which is indexed by hashing the connection tuple. The current 2^8 setting allows them to perform that attack with only 10k connections, which is not hard to achieve in a few seconds.

Eric, Amit and I have been working on this for a few weeks now, imagining, testing and eliminating a number of approaches that Amit and his team were still able to break, or that were found to be too risky or too expensive, and ended up with the simple improvements in this series. It resists the attack, doesn't degrade performance, and preserves a reliable port selection algorithm to avoid connection failures, including the odd/even port selection preference that allows bind() to always find a port quickly even under strong connect() stress.

The approach relies on several factors:

  - Resalting the hash secret that's used to choose the table_perturb[] entry every 10 seconds, to eliminate slow attacks and force the attacker to forget everything that was learned after this delay. This already eliminates most of the problem, because if a client stays silent for more than 10 seconds there's no link between the previous and the next patterns, while 10s isn't frequent enough to cause repetitions of the same port that could induce a connection failure.

  - Adding small random increments to the source port. Previously, a random 0 or 1 was added every 16 ports. Now a random 0 to 7 is added after each port. This means that with the default 32768-60999 range, a worst-case rollover happens after 1764 connections, and on average after 3137. This doesn't stop statistical attacks but requires significantly more iterations of the same attack to confirm a guess.

  - Increasing the table_perturb[] size from 2^8 to 2^16, which Amit says will require 2.6 million connections to attack with the changes above, making it pointless to get a fingerprint that will only last 10 seconds. Due to the size, the table was made dynamic.

  - A few minor improvements on the bits used from the hash, to eliminate some unfortunate correlations that may possibly have been exploited to design future attack models.

These changes were tested under the most extreme conditions, up to 1.1 million connections per second to one and a few targets, showing no performance regression, and only 2 connection failures within 13 billion, which is less than 2^-32 and perfectly within usual values.

The series is split into small reviewable changes and was already reviewed by Amit and Eric.
====================

Link: https://lore.kernel.org/r/20220502084614.24123-1-w@1wt.eu
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2022-05-04 | tcp: drop the hash_32() part from the index calculation | Willy Tarreau

In commit 190cc82489f4 ("tcp: change source port randomizarion at connect() time"), the table_perturb[] array was introduced and an index was taken from the port_offset via hash_32(). But it turns out that hash_32() performs a multiplication, while the input here comes from the output of SipHash in secure_seq, which is well distributed enough to avoid the need for yet another hash.

Suggested-by: Amit Klein <aksecurity@gmail.com>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

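The change itself is small; roughly, as a before/after fragment from __inet_hash_connect() (names per this series):

    /* Before: an extra multiplicative hash on SipHash output. */
    u32 index = hash_32(port_offset >> 32, INET_TABLE_PERTURB_SHIFT);

    /* After: the SipHash bits are already uniformly distributed,
     * so just mask off what's needed to index the table. */
    u32 index = port_offset & (INET_TABLE_PERTURB_SIZE - 1);
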
2022-05-04 | tcp: increase source port perturb table to 2^16 | Willy Tarreau

Moshe Kol, Amit Klein, and Yossi Gilad reported being able to accurately identify a client by forcing it to emit only 40 times more connections than there are entries in the table_perturb[] table. The previous two improvements, consisting of resalting the secret every 10s and adding randomness to each port selection, only slightly improved the situation, and the current value of 2^8 was too small as it's not very difficult to make a client emit 10k connections in less than 10 seconds.

Thus we're increasing the perturb table from 2^8 to 2^16 so that the same precision now requires 2.6M connections, which is more difficult in this time frame and harder to hide as a background activity. The impact is that the table now uses 256 kB instead of 1 kB, which could mostly affect devices making frequent outgoing connections. However, such components usually target a small set of destinations (load balancers, database clients, perf assessment tools), and in practice only a few entries will be visited, as before.

A live test at 1 million connections per second showed no performance difference from the previous value.

Reported-by: Moshe Kol <moshe.kol@mail.huji.ac.il>
Reported-by: Yossi Gilad <yossi.gilad@mail.huji.ac.il>
Reported-by: Amit Klein <aksecurity@gmail.com>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

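The sizing arithmetic, as a sketch (constant names per this series):

    #define INET_TABLE_PERTURB_SHIFT 16
    #define INET_TABLE_PERTURB_SIZE  (1 << INET_TABLE_PERTURB_SHIFT)

    /* 2^16 entries * sizeof(u32) = 256 kB, up from 2^8 * 4 = 1 kB. */
    static u32 *table_perturb;
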
2022-05-04 | tcp: dynamically allocate the perturb table used by source ports | Willy Tarreau

We'll need to further increase the size of this table and it's likely that at some point its size will not be suitable anymore for a static table. Let's allocate it on boot from inet_hashinfo2_init(), which is called from tcp_init().

Cc: Moshe Kol <moshe.kol@mail.huji.ac.il>
Cc: Yossi Gilad <yossi.gilad@mail.huji.ac.il>
Cc: Amit Klein <aksecurity@gmail.com>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

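A sketch of the boot-time allocation (a must-succeed allocation, hence the panic(), following kernel convention for core networking boot tables):

    /* Called from inet_hashinfo2_init(), which tcp_init() runs at boot. */
    table_perturb = kmalloc_array(INET_TABLE_PERTURB_SIZE,
                                  sizeof(*table_perturb),
                                  GFP_KERNEL | __GFP_ZERO);
    if (!table_perturb)
            panic("TCP: failed to alloc table_perturb");
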
2022-05-04 | tcp: add small random increments to the source port | Willy Tarreau

Here we add a random increment between 0 and 7 to the selected source port in order to add some noise to the selection and make the next port less predictable. With the default port range of 32768-60999 this means a worst-case reuse scenario of 14116/8=1764 connections between two consecutive uses of the same port, with an average of 14116/4.5=3137.

This code was stressed at more than 800000 connections per second to a fixed target with all connections closed by the client using RSTs (the worst condition), and only 2 connections failed among 13 billion, despite the hash being reseeded every 10 seconds, indicating a perfectly safe situation.

Cc: Moshe Kol <moshe.kol@mail.huji.ac.il>
Cc: Yossi Gilad <yossi.gilad@mail.huji.ac.il>
Cc: Amit Klein <aksecurity@gmail.com>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

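Conceptually (a sketch of the idea, not the exact diff): after each candidate port, the search position advances by a random step:

    /* prandom_u32_max(8) returns 0..7; a random 0..7 on top of the
     * normal +1 step averages 4.5 ports per try, which is where the
     * 14116/4.5 = 3137 average figure above comes from. */
    i += prandom_u32_max(8);
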
2022-05-04 | tcp: resalt the secret every 10 seconds | Eric Dumazet

In order to limit the ability of an observer to recognize the source port sequence used to contact a set of destinations, we should periodically shuffle the secret. 10 seconds looks effective enough without causing particular issues.

Cc: Moshe Kol <moshe.kol@mail.huji.ac.il>
Cc: Yossi Gilad <yossi.gilad@mail.huji.ac.il>
Cc: Amit Klein <aksecurity@gmail.com>
Cc: Jason A. Donenfeld <Jason@zx2c4.com>
Tested-by: Willy Tarreau <w@1wt.eu>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

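One way to picture the mechanism, as a sketch (the constant name follows this series; the exact mixing is illustrative): fold a coarse timestamp into the hash input so the whole port mapping changes every 10 seconds without needing a timer to rewrite the secret itself:

    #define EPHEMERAL_PORT_SHUFFLE_PERIOD (10 * HZ)

    /* Including jiffies / PERIOD in the SipHash input re-salts the
     * port-offset hash every 10 seconds. */
    u64 hash = siphash_4u32((__force u32)saddr, (__force u32)daddr,
                            (__force u32)dport,
                            (u32)(jiffies / EPHEMERAL_PORT_SHUFFLE_PERIOD),
                            &net_secret);
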
2022-05-04 | tcp: use different parts of the port_offset for index and offset | Willy Tarreau

Amit Klein suggests that we use different parts of port_offset for the table's index and the port offset so that there is no direct relation between them.

Cc: Jason A. Donenfeld <Jason@zx2c4.com>
Cc: Moshe Kol <moshe.kol@mail.huji.ac.il>
Cc: Yossi Gilad <yossi.gilad@mail.huji.ac.il>
Cc: Amit Klein <aksecurity@gmail.com>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

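After the full series, the split looks roughly like this (a sketch): the table index and the port offset come from disjoint halves of the 64-bit hash, so observing selected ports reveals nothing about which perturb-table entry was used:

    u32 index  = port_offset & (INET_TABLE_PERTURB_SIZE - 1); /* low bits  */
    u32 offset = (port_offset >> 32) % remaining;             /* high bits */
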
2022-05-04 | secure_seq: use the 64 bits of the siphash for port offset calculation | Willy Tarreau

SipHash replaced MD5 in secure_ipv{4,6}_port_ephemeral() via commit 7cd23e5300c1 ("secure_seq: use SipHash in place of MD5"), but the output remained truncated to 32 bits only. In order to exploit more bits from the hash, let's make the functions return the full 64 bits of siphash_3u32(). We also make sure the port offset calculation in __inet_hash_connect() remains done on 32 bits, to avoid the need for div_u64_rem() and an extra cost on 32-bit systems.

Cc: Jason A. Donenfeld <Jason@zx2c4.com>
Cc: Moshe Kol <moshe.kol@mail.huji.ac.il>
Cc: Yossi Gilad <yossi.gilad@mail.huji.ac.il>
Cc: Amit Klein <aksecurity@gmail.com>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

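The shape of the change, roughly (siphash_3u32() already returns a u64; the old u32 return type silently truncated it):

    -u32 secure_ipv4_port_ephemeral(__be32 saddr, __be32 daddr, __be16 dport)
    +u64 secure_ipv4_port_ephemeral(__be32 saddr, __be32 daddr, __be16 dport)
     {
            net_secret_init();
            return siphash_3u32((__force u32)saddr, (__force u32)daddr,
                                (__force u16)dport, &net_secret);
     }
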
2022-05-04 | Merge branch 'wireguard-patches-for-5-18-rc6' | Jakub Kicinski

Jason A. Donenfeld says:

====================
wireguard patches for 5.18-rc6

In working on some other problems, I wound up leaning on the WireGuard CI more than usual and uncovered a few small issues with reliability. These are fairly low-key changes, since they don't impact kernel code itself.

One change does stick out in particular, though, which is the "make routing loop test non-fatal" commit. I'm not thrilled about doing this, but currently [1] remains unsolved, and I'm still working on a real solution to that (hopefully for 5.19 or 5.20 if I can come up with a good idea...), so for now that test just prints a big red warning instead.

[1] https://lore.kernel.org/netdev/YmszSXueTxYOC41G@zx2c4.com/
====================

Link: https://lore.kernel.org/r/20220504202920.72908-1-Jason@zx2c4.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2022-05-04 | wireguard: selftests: set panic_on_warn=1 from cmdline | Jason A. Donenfeld

Rather than setting this once init is running, set panic_on_warn from the kernel command line, so that it catches splats from WireGuard initialization code and the various crypto selftests.

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2022-05-04 | wireguard: selftests: bump package deps | Jason A. Donenfeld

Use newer, more reliable package dependencies. These should hopefully reduce flakes. However, we keep the old iputils package, as newer versions accumulated bugs resulting in flakes on slow machines.

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2022-05-04 | wireguard: selftests: restore support for ccache | Jason A. Donenfeld

When moving to non-system toolchains, we inadvertently killed the ability to use ccache. So instead, build ccache support into the test harness directly.

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2022-05-04 | wireguard: selftests: use newer toolchains to fill out architectures | Jason A. Donenfeld

Rather than relying on the system to have cross toolchains available, simply download musl.cc's toolchains and use their libc.so, and then use them to fill in a few missing platforms, such as riscv64, powerpc64, and s390x.

Since riscv doesn't have a second serial port in its device description, we have to use virtio's vport. This is actually the same situation on ARM, but we were previously hacking QEMU up to work around this, which required a custom QEMU. Instead just do the vport trick on ARM too.

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2022-05-04 | wireguard: selftests: limit parallelism to $(nproc) tests at once | Jason A. Donenfeld

The parallel tests were added to catch queueing issues from multiple cores. But what happens in reality when testing tons of processes is that these separate threads wind up fighting with the scheduler, and we wind up with contention in places we don't care about, which decreases the chances of hitting a bug. So just run as many tests as there are CPU cores, rather than trying to scale up arbitrarily.

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2022-05-04 | wireguard: selftests: make routing loop test non-fatal | Jason A. Donenfeld

I hate to do this, but I still do not have a good solution to actually fix this bug across architectures. So just disable it for now, so that the CI can still deliver actionable results. This commit adds a large red warning, so that at least the failure isn't lost forever, and hopefully this can be revisited down the line.

Link: https://lore.kernel.org/netdev/CAHmME9pv1x6C4TNdL6648HydD8r+txpV4hTUXOBVkrapBXH4QQ@mail.gmail.com/
Link: https://lore.kernel.org/netdev/YmszSXueTxYOC41G@zx2c4.com/
Link: https://lore.kernel.org/wireguard/CAHmME9rNnBiNvBstb7MPwK-7AmAN0sOfnhdR=eeLrowWcKxaaQ@mail.gmail.com/
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2022-05-05 | x86/fpu: Prevent FPU state corruption | Thomas Gleixner

The FPU usage related to task FPU management is either protected by disabling interrupts (switch_to, return to user) or via fpregs_lock(), which is a wrapper around local_bh_disable(). When kernel code wants to use the FPU it has to check whether that is possible by calling irq_fpu_usable().

But the condition in irq_fpu_usable() is wrong. It allows FPU to be used when:

   !in_interrupt() || interrupted_user_mode() || interrupted_kernel_fpu_idle()

The latter checks whether some other context already uses FPU in the kernel, but if that's not the case then it allows FPU to be used unconditionally, even if the calling context interrupted an fpregs_lock() critical region. If that happens then the FPU state of the interrupted context becomes corrupted.

Allow in-kernel FPU usage only when no other context has in-kernel FPU usage and either the calling context is not hard interrupt context or the hard interrupt did not interrupt a local bottom-half-disabled region.

It's hard to find a proper Fixes tag as the condition was broken in one way or the other for a very long time and the eager/lazy FPU changes caused a lot of churn. Picked something remotely connected from the history.

This survived undetected for quite some time as FPU usage in interrupt context is rare, but the recent changes to the random code unearthed it at least on a kernel which had FPU debugging enabled. There is probably a higher rate of silent corruption as not all issues can be detected by the FPU debugging code. This will be addressed in a subsequent change.

Fixes: 5d2bd7009f30 ("x86, fpu: decouple non-lazy/eager fpu restore from xsave")
Reported-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: Borislav Petkov <bp@suse.de>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/20220501193102.588689270@linutronix.de

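The corrected check, roughly (a sketch of the reworked logic; the real code lives in arch/x86/kernel/fpu/core.c):

    bool irq_fpu_usable(void)
    {
            if (WARN_ON_ONCE(in_nmi()))
                    return false;

            /* Another in-kernel FPU user already active on this CPU? */
            if (this_cpu_read(in_kernel_fpu))
                    return false;

            /* Task or soft interrupt context: fpregs_lock() regions
             * cannot be interrupted by softirqs, so this is safe. */
            if (!in_hardirq())
                    return true;

            /* Hard interrupt context: only safe if soft interrupts were
             * enabled, i.e. we did not hit a local_bh_disable() region
             * such as fpregs_lock(). */
            return !softirq_count();
    }
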
2022-05-04 | block: improve the error message from bio_check_eod | Christoph Hellwig

Print the start sector and length separately instead of the combined value to help with debugging.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Damien Le Moal <damien.lemoal@opensource.wdc.com>
Link: https://lore.kernel.org/r/20220504143355.568660-1-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>

2022-05-04 | block: allow passing a NULL bdev to bio_alloc_clone/bio_init_clone | Christoph Hellwig

Device mapper wants to allocate a bio before knowing the device it gets sent to, so add explicit support for that.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Mike Snitzer <snitzer@kernel.org>
Link: https://lore.kernel.org/r/20220504142950.567582-3-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>

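Usage then looks like this (a sketch of the device-mapper pattern this enables; the target variable is a placeholder):

    /* Clone first, pick the target device later. */
    struct bio *clone = bio_alloc_clone(NULL, bio_src, GFP_NOIO, &bs);

    /* ... mapping logic chooses a target ... */
    bio_set_dev(clone, target_bdev);
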
2022-05-04 | block: remove superfluous calls to blkcg_bio_issue_init | Christoph Hellwig

blkcg_bio_issue_init is called in submit_bio. There is no need for extra calls that just get overridden in __bio_clone and in the two places that copied and pasted from it.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Mike Snitzer <snitzer@kernel.org>
Link: https://lore.kernel.org/r/20220504142950.567582-2-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>

2022-05-04 | RDMA/rxe: Change mcg_lock to a _bh lock | Bob Pearson

rxe_mcast.c currently uses _irqsave spinlocks for rxe->mcg_lock while rxe_recv.c uses _bh spinlocks for the same lock. As there is no case where the mcg_lock can be taken from an IRQ, change these all to _bh locks so we don't have confusing mismatched lock types on the same spinlock.

Fixes: 6090a0c4c7c6 ("RDMA/rxe: Cleanup rxe_mcast.c")
Link: https://lore.kernel.org/r/20220504202817.98247-1-rpearsonhpe@gmail.com
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

2022-05-04 | RDMA/rxe: Do not call dev_mc_add/del() under a spinlock | Bob Pearson

These routines were not intended to be called under a spinlock and will throw debugging warnings:

  raw_local_irq_restore() called with IRQs enabled
  WARNING: CPU: 13 PID: 3107 at kernel/locking/irqflag-debug.c:10 warn_bogus_irq_restore+0x2f/0x50
  CPU: 13 PID: 3107 Comm: python3 Tainted: G E 5.18.0-rc1+ #7
  Hardware name: innotek GmbH VirtualBox/VirtualBox, BIOS VirtualBox 12/01/2006
  RIP: 0010:warn_bogus_irq_restore+0x2f/0x50
  Call Trace:
   <TASK>
   _raw_spin_unlock_irqrestore+0x75/0x80
   rxe_attach_mcast+0x304/0x480 [rdma_rxe]
   ib_attach_mcast+0x88/0xa0 [ib_core]
   ib_uverbs_attach_mcast+0x186/0x1e0 [ib_uverbs]
   ib_uverbs_handler_UVERBS_METHOD_INVOKE_WRITE+0xcd/0x140 [ib_uverbs]
   ib_uverbs_cmd_verbs+0xdb0/0xea0 [ib_uverbs]
   ib_uverbs_ioctl+0xd2/0x160 [ib_uverbs]
   do_syscall_64+0x5c/0x80
   entry_SYSCALL_64_after_hwframe+0x44/0xae

Move them out of the spinlock; it is OK if there are some races setting up the MC reception at the ethernet layer with rbtree lookups.

Fixes: 6090a0c4c7c6 ("RDMA/rxe: Cleanup rxe_mcast.c")
Link: https://lore.kernel.org/r/20220504202817.98247-1-rpearsonhpe@gmail.com
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

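The shape of the fix, as a sketch (the tree-insert helper is hypothetical): do the potentially sleeping netdev call unlocked, and take the lock only for the rbtree bookkeeping:

    /* dev_mc_add() may sleep, so call it outside the lock. */
    err = dev_mc_add(rxe->ndev, ll_addr);
    if (err)
            return err;

    spin_lock_bh(&rxe->mcg_lock);
    __rxe_insert_mcg(mcg);          /* hypothetical tree-insert helper */
    spin_unlock_bh(&rxe->mcg_lock);
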
2022-05-04 | RDMA/siw: Fix a condition race issue in MPA request processing | Cheng Xu

The calling of siw_cm_upcall() and the detaching of new_cep from its listen_cep should have atomic semantics. Otherwise siw_reject() may be called in a transient state, e.g., siw_cm_upcall() has been called but new_cep->listen_cep has not yet been cleared.

This fixes a WARN:

  WARNING: CPU: 7 PID: 201 at drivers/infiniband/sw/siw/siw_cm.c:255 siw_cep_put+0x125/0x130 [siw]
  CPU: 2 PID: 201 Comm: kworker/u16:22 Kdump: loaded Tainted: G E 5.17.0-rc7 #1
  Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.15.0-1 04/01/2014
  Workqueue: iw_cm_wq cm_work_handler [iw_cm]
  RIP: 0010:siw_cep_put+0x125/0x130 [siw]
  Call Trace:
   <TASK>
   siw_reject+0xac/0x180 [siw]
   iw_cm_reject+0x68/0xc0 [iw_cm]
   cm_work_handler+0x59d/0xe20 [iw_cm]
   process_one_work+0x1e2/0x3b0
   worker_thread+0x50/0x3a0
   ? rescuer_thread+0x390/0x390
   kthread+0xe5/0x110
   ? kthread_complete_and_exit+0x20/0x20
   ret_from_fork+0x1f/0x30
   </TASK>

Fixes: 6c52fdc244b5 ("rdma/siw: connection management")
Link: https://lore.kernel.org/r/d528d83466c44687f3872eadcb8c184528b2e2d4.1650526554.git.chengyou@linux.alibaba.com
Reported-by: Luis Chamberlain <mcgrof@kernel.org>
Reviewed-by: Bernard Metzler <bmt@zurich.ibm.com>
Signed-off-by: Cheng Xu <chengyou@linux.alibaba.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

2022-05-04 | dt-bindings: pci: apple,pcie: Drop max-link-speed from example | Hector Martin

We no longer use these since 111659c2a570 (and they never worked anyway); drop them from the example to avoid confusion.

Fixes: 111659c2a570 ("arm64: dts: apple: t8103: Remove PCIe max-link-speed properties")
Signed-off-by: Hector Martin <marcan@marcan.st>
Reviewed-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Signed-off-by: Rob Herring <robh@kernel.org>
Link: https://lore.kernel.org/r/20220502091308.28233-1-marcan@marcan.st

2022-05-04 | dt-bindings: Drop redundant 'maxItems/minItems' in if/then schemas | Rob Herring

Another round of removing redundant minItems/maxItems when an 'items' list is specified. This time it is in if/then schemas, as the meta-schema was failing to check this case.

If a property has an 'items' list, then a 'minItems' or 'maxItems' with the same size as the list is redundant and can be dropped. Note that this is DT-schema-specific behavior and not standard json-schema behavior. The tooling will fix up the final schema, adding any unspecified minItems/maxItems.

Signed-off-by: Rob Herring <robh@kernel.org>
Acked-By: Vinod Koul <vkoul@kernel.org>
Acked-by: Marc Kleine-Budde <mkl@pengutronix.de>
Acked-by: Mark Brown <broonie@kernel.org>
Acked-by: Ulf Hansson <ulf.hansson@linaro.org> # For MMC
Acked-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> # for IIO
Link: https://lore.kernel.org/r/20220503162738.3827041-1-robh@kernel.org

2022-05-04 | dt-bindings: pinctrl: Allow values for drive-push-pull and drive-open-drain | Rob Herring

A few platforms, at91 and tegra, use drive-push-pull and drive-open-drain with a 0 or 1 value. There's not really a need for values, as '1' should be equivalent to no value (though it wasn't treated that way) and drive-push-pull disabled is equivalent to drive-open-drain. So dropping the value can't be done without breaking existing OSs. As we don't want new cases, mark the case with values as deprecated.

Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Thierry Reding <thierry.reding@gmail.com>
Cc: Jonathan Hunter <jonathanh@nvidia.com>
Cc: Nicolas Ferre <nicolas.ferre@microchip.com>
Cc: Claudiu Beznea <claudiu.beznea@microchip.com>
Signed-off-by: Rob Herring <robh@kernel.org>
Link: https://lore.kernel.org/r/20220429194610.2741437-1-robh@kernel.org

2022-05-04 | regulator: dt-bindings: qcom,rpmh: minor cleanups and extend supplies | Mark Brown

Merge series from Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>:

Extend the RPMH regulator bindings with minor fixes and adding narrow supply matching.

2022-05-04 | selftests/seccomp: Fix spelling mistake "Coud" -> "Could" | Colin Ian King

There is a spelling mistake in an error message. Fix it.

Signed-off-by: Colin Ian King <colin.i.king@gmail.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20220504155535.239180-1-colin.i.king@gmail.com

2022-05-04 | arm64: fix types in copy_highpage() | Tong Tiangen

In copy_highpage() the `kto` and `kfrom` local variables are pointers to struct page, but these are used to hold arbitrary pointers to kernel memory. Each call to page_address() returns a void pointer to memory associated with the relevant page, and copy_page() expects void pointers to this memory.

This inconsistency was introduced in commit 2563776b41c3 ("arm64: mte: Tags-aware copy_{user_,}highpage() implementations") and while this doesn't appear to be harmful in practice, it is clearly wrong. Correct this by making `kto` and `kfrom` void pointers.

Fixes: 2563776b41c3 ("arm64: mte: Tags-aware copy_{user_,}highpage() implementations")
Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Link: https://lore.kernel.org/r/20220420030418.3189040-3-tongtiangen@huawei.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

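The corrected function, roughly (MTE tag copying elided):

    void copy_highpage(struct page *to, struct page *from)
    {
            void *kto = page_address(to);     /* was: struct page *kto   */
            void *kfrom = page_address(from); /* was: struct page *kfrom */

            copy_page(kto, kfrom);

            /* MTE: copy tags from 'from' when present (elided). */
    }
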
2022-05-04 | MAINTAINERS: Update Josh Poimboeuf's email address | Josh Poimboeuf

Change to my kernel.org email address.

Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lore.kernel.org/r/1abc3de4b00dc6f915ac975a2ec29ed545d96dc4.1651687652.git.jpoimboe@redhat.com

2022-05-04 | Merge tag 'iomm-fixes-v5.18-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu | Linus Torvalds

Pull iommu fixes from Joerg Roedel:

 "IOMMU core:
   - Fix for a regression which could cause NULL-ptr dereferences

  Arm SMMU:
   - Fix off-by-one in SMMUv3 SVA TLB invalidation
   - Disable large mappings to workaround nvidia erratum

  Intel VT-d:
   - Handle PCI stop marker messages in IOMMU driver to meet the requirement of I/O page fault handling framework.
   - Calculate a feasible mask for non-aligned page-selective IOTLB invalidation.

  Apple DART IOMMU:
   - Fix potential NULL-ptr dereference
   - Set module owner"

* tag 'iomm-fixes-v5.18-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu:
  iommu: Make sysfs robust for non-API groups
  iommu/dart: Add missing module owner to ops structure
  iommu/dart: check return value after calling platform_get_resource()
  iommu/vt-d: Drop stop marker messages
  iommu/vt-d: Calculate mask for non-aligned flushes
  iommu: arm-smmu: disable large page mappings for Nvidia arm-smmu
  iommu/arm-smmu-v3: Fix size calculation in arm_smmu_mm_invalidate_range()

2022-05-04 | Merge tag 'for-linus-5.17-2' of https://github.com/cminyard/linux-ipmi | Linus Torvalds

Pull IPMI fixes from Corey Minyard:

 "Fix some issues that were reported. This has been in for-next for a bit (longer than the times would indicate, I had to rebase to add some text to the headers) and these are fixes that need to go in"

* tag 'for-linus-5.17-2' of https://github.com/cminyard/linux-ipmi:
  ipmi:ipmi_ipmb: Fix null-ptr-deref in ipmi_unregister_smi()
  ipmi: When handling send message responses, don't process the message

2022-05-04 | arm64/sysreg: Generate definitions for SCTLR_EL1 | Mark Brown

Automatically generate register definitions for SCTLR_EL1. No functional change.

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20220503170233.507788-13-broonie@kernel.org
[catalin.marinas@arm.com: fix the SCTLR_EL1 encoding]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

2022-05-04 | drm/amd/display: Avoid reading audio pattern past AUDIO_CHANNELS_COUNT | Harry Wentland

A faulty receiver might report an erroneous channel count. We should guard against reading beyond AUDIO_CHANNELS_COUNT as that would overflow the dpcd_pattern_period array.

Signed-off-by: Harry Wentland <harry.wentland@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: stable@vger.kernel.org

2022-05-04 | x86/mm: Cleanup the control_va_addr_alignment() __setup handler | Randy Dunlap

Clean up control_va_addr_alignment():

a. Make '=' required instead of optional (as documented).
b. Print a warning if an invalid option value is used.
c. Return 1 from the __setup handler when an invalid option value is used. This prevents the kernel from polluting init's (limited) environment space with the entire string.

Fixes: dfb09f9b7ab0 ("x86, amd: Avoid cache aliasing penalties on AMD family 15h")
Reported-by: Igor Zhbanov <i.zhbanov@omprussia.ru>
Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lore.kernel.org/r/64644a2f-4a20-bab3-1e15-3b2cdd0defe3@omprussia.ru
Link: https://lore.kernel.org/r/20220315001045.7680-1-rdunlap@infradead.org

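A sketch of the handler after the cleanup (abridged; the flag values follow the existing align_va_addr options as I recall them):

    static int __init control_va_addr_alignment(char *str)
    {
            /* '=' is now required, so an empty value is invalid. */
            if (!str || !*str)
                    return 1;

            if (!strcmp(str, "32"))
                    va_align.flags = ALIGN_VA_32;
            else if (!strcmp(str, "64"))
                    va_align.flags = ALIGN_VA_64;
            else if (!strcmp(str, "off"))
                    va_align.flags = 0;
            else if (!strcmp(str, "on"))
                    va_align.flags = ALIGN_VA_32 | ALIGN_VA_64;
            else
                    pr_warn("invalid option value: 'align_va_addr=%s'\n", str);

            /* Return 1 even on bad input so the string is consumed and
             * never leaks into init's environment. */
            return 1;
    }
    __setup("align_va_addr=", control_va_addr_alignment);
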
2022-05-04 | drm/amdgpu: do not use passthrough mode in Xen dom0 | Marek Marczykowski-Górecki

While technically Xen dom0 is a virtual machine too, it does have access to most of the hardware, so it doesn't need to be considered a "passthrough". Commit b818a5d37454 ("drm/amdgpu/gmc: use PCI BARs for APUs in passthrough") changed how FB is accessed based on passthrough mode. This breaks amdgpu in Xen dom0 with a message like this:

  [drm:dc_dmub_srv_wait_idle [amdgpu]] *ERROR* Error waiting for DMUB idle: status=3

While the reason for this failure is unclear, the passthrough mode is not really necessary in Xen dom0 anyway. So, to unbreak booting affected kernels, disable passthrough mode in this case.

Link: https://gitlab.freedesktop.org/drm/amd/-/issues/1985
Fixes: b818a5d37454 ("drm/amdgpu/gmc: use PCI BARs for APUs in passthrough")
Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: stable@vger.kernel.org

2022-05-04 | irqchip/gic: Improved warning about incorrect type | Florian Fainelli

Issue the warning for interrupt lines that have an incorrect interrupt type and also print the hardware interrupt number to facilitate the resolution of such problems.

Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220308201117.3870678-1-f.fainelli@gmail.com

2022-05-04 | irqchip/csky: Return true/false (not 1/0) from bool functions | Haowen Bai

Return boolean values ("true" or "false") instead of 1 or 0 from bool functions.

Signed-off-by: Haowen Bai <baihaowen@meizu.com>
Acked-by: Guo Ren <guoren@kernel.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/1647487284-30088-1-git-send-email-baihaowen@meizu.com

2022-05-04 | irqchip/imx-irqsteer: Add runtime PM support | Lucas Stach

There are now SoCs that integrate the irqsteer controller within a separate power domain. In order to allow this domain to be powered down when not needed, add runtime PM support to the driver.

Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220406163701.1277930-2-l.stach@pengutronix.de

2022-05-04 | irqchip/imx-irqsteer: Constify irq_chip struct | Lucas Stach

The imx_irqsteer_irq_chip struct is constant data.

Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220406163701.1277930-1-l.stach@pengutronix.de

2022-05-04 | irqchip/armada-370-xp: Enable MSI affinity configuration | Nathan Rossi

With multiple devices attached via PCIe to an Armada 385 it is possible to overwhelm a single CPU with MSI interrupts. Under certain scenarios, configuring the interrupts to be handled by more than one CPU would prevent the system from being overwhelmed. However, the irqchip-armada-370-xp driver is configured to only handle MSIs on the boot CPU, and provides no affinity configuration.

This change adds support to the armada-370-xp driver for configuring the affinity of specific MSI IRQs and for generating the interrupts on secondary CPUs. This is done by enabling the private doorbell for all online CPUs and configuring all CPUs to unmask the MSI-specific private doorbell bits.

The CPU affinity of an interrupt is selected via the target list of the software-triggered interrupt value, which is provided as the MSI message: the message has the bit of the target CPU set. For private doorbell interrupts only one bit can be set, otherwise all CPUs would receive the interrupt, so the lowest CPU in the affinity mask is used. This means that by default the first CPU will handle all the interrupts, as was the case before.

Signed-off-by: Nathan Rossi <nathan.rossi@digi.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220422043532.146946-1-nathan@nathanrossi.com

2022-05-04 | irqchip/aspeed-scu-ic: Fix irq_of_parse_and_map() return value | Krzysztof Kozlowski

irq_of_parse_and_map() returns 0 on failure, not a negative ERRNO.

Fixes: 04f605906ff0 ("irqchip: Add Aspeed SCU interrupt controller")
Signed-off-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220423094227.33148-2-krzysztof.kozlowski@linaro.org

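The check shape for this fix, and equally for the aspeed-i2c-ic fix below (a sketch; the cleanup label is hypothetical):

    irq = irq_of_parse_and_map(node, 0);
    if (!irq) {
            /* 0 means "no mapping", not a negative errno. */
            ret = -EINVAL;
            goto err_free;  /* hypothetical cleanup label */
    }
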
2022-05-04 | irqchip/aspeed-i2c-ic: Fix irq_of_parse_and_map() return value | Krzysztof Kozlowski

irq_of_parse_and_map() returns 0 on failure, not a negative ERRNO.

Fixes: f48e699ddf70 ("irqchip/aspeed-i2c-ic: Add I2C IRQ controller for Aspeed")
Signed-off-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220423094227.33148-1-krzysztof.kozlowski@linaro.org

2022-05-04 | irqchip/sun6i-r: Use NULL for chip_data | Samuel Holland

sparse complains about using an integer as a NULL pointer.

Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Samuel Holland <samuel@sholland.org>
Reviewed-by: Jernej Skrabec <jernej.skrabec@gmail.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220424173952.36591-1-samuel@sholland.org

2022-05-04 | irqchip/xtensa-mx: Fix initial IRQ affinity in non-SMP setup | Max Filippov

When the irq-xtensa-mx chip is used in a non-SMP configuration, its irq_set_affinity callback is not called, leaving the IRQ affinity set empty. As a result, IRQ delivery does not work in that configuration. Initialize the IRQ affinity of the xtensa MX interrupt distributor to CPU 0 for all external IRQ lines.

Cc: stable@vger.kernel.org
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220426161912.1113784-1-jcmvbkbc@gmail.com

2022-05-04 | arm64: Set ARCH_NR_GPIO to 2048 for ARCH_APPLE | Hector Martin

We're already running into the 512 GPIO limit on t600[01] depending on how many SMC GPIOs we allocate, and a 2-die version could double that. Let's make it 2K to be safe for now.

Signed-off-by: Hector Martin <marcan@marcan.st>
Acked-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220502091427.28416-1-marcan@marcan.st
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

2022-05-04 | irqchip/exiu: Fix acknowledgment of edge triggered interrupts | Daniel Thompson

Currently the EXIU uses the fasteoi interrupt flow that is configured by its parent (irq-gic-v3.c). With this flow the only chance to clear the interrupt request happens during .irq_eoi() and (obviously) this happens after the interrupt handler has run.

EXIU requires edge-triggered interrupts to be acked prior to interrupt handling. Without this we risk incorrect interrupt dismissal when a new interrupt is delivered after the handler reads and acknowledges the peripheral but before the irq_eoi() takes place.

Fix this by clearing the interrupt request from .irq_ack() if we are configured for edge-triggered interrupts. This requires adopting the fasteoi-ack flow instead of fasteoi to ensure the ack gets called.

These changes have been tested using the power button on a Developerbox/SC2A11 combined with some hackery in gpio-keys so I could play with the different trigger modes [and an mdelay(500) so I could check what happens on a double click in both modes].

Fixes: 706cffc1b912 ("irqchip/exiu: Add support for Socionext Synquacer EXIU controller")
Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220503134541.2566457-1-daniel.thompson@linaro.org

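A sketch of the added ack callback (field and register names follow the existing driver layout as I recall it; treat as illustrative):

    static void exiu_irq_ack(struct irq_data *d)
    {
            struct exiu_irq_data *data = irq_data_get_irq_chip_data(d);

            /* Clear the latched request before the handler runs, so an
             * edge arriving during handling isn't dismissed at EOI. */
            writel(BIT(d->hwirq - data->spi_base), data->base + EIREQCLR);
    }
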