Age  Commit message  Author
2021-06-23  spi: spi-rockchip: add description for rv1126  Jon Lin
The description below will be used for rv1126.dtsi or a compatible one in the future. Signed-off-by: Jon Lin <jon.lin@rock-chips.com> Link: https://lore.kernel.org/r/20210621104800.19088-2-jon.lin@rock-chips.com Signed-off-by: Mark Brown <broonie@kernel.org>
2021-06-23  spi: rockchip: Support SPI_CS_HIGH  Jon Lin
1. Add standard spi-cs-high support. 2. Refer to spi-controller.yaml for details. Signed-off-by: Jon Lin <jon.lin@rock-chips.com> Link: https://lore.kernel.org/r/20210621104848.19539-2-jon.lin@rock-chips.com Signed-off-by: Mark Brown <broonie@kernel.org>
2021-06-23  spi: rockchip: Support cs-gpio  Jon Lin
1. Add standard cs-gpio support. 2. Refer to spi-controller.yaml for details. Signed-off-by: Jon Lin <jon.lin@rock-chips.com> Link: https://lore.kernel.org/r/20210621104848.19539-1-jon.lin@rock-chips.com Signed-off-by: Mark Brown <broonie@kernel.org>
2021-06-23  spi: rockchip: Wait for STB status in slave mode tx_xfer  Jon Lin
After ROCKCHIP_SPI_VER2_TYPE2, SR->STB is a more accurate status bit to check for SPI slave transmission. Signed-off-by: Jon Lin <jon.lin@rock-chips.com> Link: https://lore.kernel.org/r/20210621104800.19088-5-jon.lin@rock-chips.com Signed-off-by: Mark Brown <broonie@kernel.org>
2021-06-23  spi: rockchip: Set rx_fifo interrupt waterline based on transfer item  Jon Lin
The error here is that the width is always calculated as 8 bits; in fact, 16-bit frames should also be considered. Signed-off-by: Jon Lin <jon.lin@rock-chips.com> Link: https://lore.kernel.org/r/20210621104800.19088-4-jon.lin@rock-chips.com Signed-off-by: Mark Brown <broonie@kernel.org>
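A minimal sketch of the idea (illustrative helper only, not the rockchip driver's actual code): the number of RX FIFO entries a transfer produces depends on the frame width, so the waterline has to be derived from bytes per word rather than assuming 8-bit frames.

    /*
     * Illustrative sketch: derive the RX interrupt waterline from the
     * number of FIFO entries the transfer will produce, honouring
     * 16-bit frames instead of assuming 8-bit ones.
     */
    static unsigned int rx_waterline(unsigned int len_bytes,
                                     unsigned int bits_per_word,
                                     unsigned int fifo_depth)
    {
            unsigned int bytes_per_word = bits_per_word <= 8 ? 1 : 2;
            unsigned int entries = len_bytes / bytes_per_word;

            /* Never program a waterline deeper than the FIFO itself. */
            return entries < fifo_depth ? entries : fifo_depth / 2;
    }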
2021-06-23  spi: rockchip: add compatible string for rv1126  Jon Lin
Add compatible string for rv1126 for potential applications. Signed-off-by: Jon Lin <jon.lin@rock-chips.com> Link: https://lore.kernel.org/r/20210621104800.19088-3-jon.lin@rock-chips.com Signed-off-by: Mark Brown <broonie@kernel.org>
2021-06-23  regulator: bd9576: Fix uninitialized variable may_have_irqs  Colin Ian King
The boolean variable may_have_irqs is not initialized and is only set to true in the case where the chip is ROHM_CHIP_TYPE_BD9576. Fix this by initializing may_have_irqs to false. Addresses-Coverity: ("Uninitialized scalar variable") Fixes: e7bf1fa58c46 ("regulator: bd9576: Support error reporting") Signed-off-by: Colin Ian King <colin.king@canonical.com> Acked-by: Matti Vaittinen <matti.vaittinen@fi.rohmeurope.com> Link: https://lore.kernel.org/r/20210622144730.22821-1-colin.king@canonical.com Signed-off-by: Mark Brown <broonie@kernel.org>
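A minimal sketch of the fix's pattern (illustrative names, not the bd9576 driver code):

    #include <stdbool.h>

    enum chip { CHIP_BD9573, CHIP_BD9576 };   /* stand-ins for the ROHM chip type enum */

    static bool chip_may_have_irqs(enum chip chip)
    {
            bool may_have_irqs = false;       /* previously left uninitialized */

            if (chip == CHIP_BD9576)          /* stands in for ROHM_CHIP_TYPE_BD9576 */
                    may_have_irqs = true;

            return may_have_irqs;
    }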
2021-06-23  regulator: max8893: Select REGMAP_I2C to fix build error  Axel Lin
Fix build error if REGMAP_I2C is not set. Signed-off-by: Axel Lin <axel.lin@ingics.com> Link: https://lore.kernel.org/r/20210622141526.472175-1-axel.lin@ingics.com Signed-off-by: Mark Brown <broonie@kernel.org>
2021-06-23  regulator: da9052: Ensure enough delay time for .set_voltage_time_sel  Axel Lin
Use DIV_ROUND_UP to prevent truncation by integer division. This ensures we return enough delay time. Also fix returning a negative value when new_sel < old_sel. Signed-off-by: Axel Lin <axel.lin@ingics.com> Link: https://lore.kernel.org/r/20210618141412.4014912-1-axel.lin@ingics.com Signed-off-by: Mark Brown <broonie@kernel.org>
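A small sketch of the rounding issue (illustrative numbers and helper name, not the da9052 driver code):

    /*
     * Ramping 10000 uV at 6250 uV/us really needs 2 us, but plain integer
     * division yields 1. DIV_ROUND_UP (as in the kernel's include/linux/math.h)
     * rounds up, and a non-positive delta returns 0 instead of a bogus
     * negative delay.
     */
    #define DIV_ROUND_UP(n, d)      (((n) + (d) - 1) / (d))

    static int set_voltage_time_sel_sketch(int old_uV, int new_uV, int ramp_uV_per_us)
    {
            if (new_uV <= old_uV)
                    return 0;
            return DIV_ROUND_UP(new_uV - old_uV, ramp_uV_per_us);
    }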
2021-06-23  Merge branch 'topic/ppc-kvm' of https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux into HEAD  Paolo Bonzini
- Support for the H_RPT_INVALIDATE hypercall
- Conversion of Book3S entry/exit to C
- Bug fixes
2021-06-23  spi: spi-sun6i: Fix chipselect/clock bug  Mirko Vogt
The current sun6i SPI implementation initializes the transfer too early, resulting in SCK going high before the transfer. When using an additional (gpio) chipselect with sun6i, the chipselect is asserted at a time when clock is high, making the SPI transfer fail. This is due to SUN6I_GBL_CTL_BUS_ENABLE being written into SUN6I_GBL_CTL_REG at an early stage. Moving that to the transfer function, hence, right before the transfer starts, mitigates that problem. Fixes: 3558fe900e8af (spi: sunxi: Add Allwinner A31 SPI controller driver) Signed-off-by: Mirko Vogt <mirko-dev|linux@nanl.de> Signed-off-by: Ralf Schlatterbeck <rsc@runtux.com> Link: https://lore.kernel.org/r/20210614144507.y3udezjfbko7eavv@runtux.com Signed-off-by: Mark Brown <broonie@kernel.org>
2021-06-23  regulator: mt6358: Fix vdram2 .vsel_mask  Hsin-Hsiung Wang
The valid vsel values are 0 and 12, so the .vsel_mask should be 0xf. Signed-off-by: Hsin-Hsiung Wang <hsin-hsiung.wang@mediatek.com> Reviewed-by: Axel Lin <axel.lin@ingics.com> Link: https://lore.kernel.org/r/1624424169-510-1-git-send-email-hsin-hsiung.wang@mediatek.com Signed-off-by: Mark Brown <broonie@kernel.org>
2021-06-23  x86/sev: Use "SEV: " prefix for messages from sev.c  Joerg Roedel
The source file has been renamed from sev-es.c to sev.c, but the messages are still prefixed with "SEV-ES: ". Change that to "SEV: " to make it consistent. Fixes: e759959fe3b8 ("x86/sev-es: Rename sev-es.{ch} to sev.{ch}") Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20210622144825.27588-4-joro@8bytes.org
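For context, the usual kernel idiom for such a per-file message prefix is pr_fmt(); an illustrative sketch (the real sev.c may differ in detail):

    /* pr_fmt() must be defined before the printk headers are included. */
    #define pr_fmt(fmt)     "SEV: " fmt       /* previously "SEV-ES: " */

    #include <linux/printk.h>

    static void sev_report_example(void)
    {
            pr_info("example message\n");     /* logs as "SEV: example message" */
    }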
2021-06-23  x86/sev: Add defines for GHCB version 2 MSR protocol requests  Brijesh Singh
Add the necessary defines for supporting the GHCB version 2 protocol. This includes defines for:
- MSR-based AP hlt request/response
- Hypervisor Feature request/response
This is the bare minimum of requests that need to be supported by a GHCB version 2 implementation. There are more requests in the specification, but those depend on Secure Nested Paging support being available. These defines are shared between SEV host and guest support. [ bp: Fold in https://lkml.kernel.org/r/20210622144825.27588-2-joro@8bytes.org too. Simplify the brewing macro maze into readability. ] Co-developed-by: Tom Lendacky <thomas.lendacky@amd.com> Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com> Signed-off-by: Brijesh Singh <brijesh.singh@amd.com> Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/YNLXQIZ5e1wjkshG@8bytes.org
2021-06-23  KVM: s390: allow facility 192 (vector-packed-decimal-enhancement facility 2)  Christian Borntraeger
Pass through newer vector instructions if vector support is enabled. Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Reviewed-by: Janosch Frank <frankja@linux.ibm.com> Acked-by: Cornelia Huck <cohuck@redhat.com> Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
2021-06-23  KVM: s390: gen_facilities: allow facilities 165, 193, 194 and 196  Christian Borntraeger
This enables the NNPA, BEAR enhancement, reset DAT protection and processor activity counter facilities via the cpu model. Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Reviewed-by: Janosch Frank <frankja@linux.ibm.com> Acked-by: Cornelia Huck <cohuck@redhat.com> Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
2021-06-23  KVM: s390: get rid of register asm usage  Heiko Carstens
Using register asm statements has been proven to be very error prone, especially when using code instrumentation where gcc may add function calls, which clobbers register contents in an unexpected way. Therefore get rid of register asm statements in kvm code, even though there is currently nothing wrong with them. This way we know for sure that this bug class won't be introduced here. Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com> Reviewed-by: Thomas Huth <thuth@redhat.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Link: https://lore.kernel.org/r/20210621140356.1210771-1-hca@linux.ibm.com [borntraeger@de.ibm.com: checkpatch strict fix] Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
2021-06-22  scsi: sd: Call sd_revalidate_disk() for ioctl(BLKRRPART)  Christoph Hellwig
While the disk state has nothing to do with partitions, BLKRRPART is used to force a full revalidate after things like a disk format for historical reasons. Restore that behavior. Link: https://lore.kernel.org/r/20210617115504.1732350-1-hch@lst.de Fixes: 471bd0af544b ("sd: use bdev_check_media_change") Reported-by: Xiang Chen <chenxiang66@hisilicon.com> Tested-by: Xiang Chen <chenxiang66@hisilicon.com> Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
2021-06-22  module: limit enabling module.sig_enforce  Mimi Zohar
Irrespective of whether CONFIG_MODULE_SIG is configured, specifying "module.sig_enforce=1" on the boot command line sets "sig_enforce". Only allow "sig_enforce" to be set when CONFIG_MODULE_SIG is configured. This patch makes the presence of /sys/module/module/parameters/sig_enforce dependent on CONFIG_MODULE_SIG=y. Fixes: fda784e50aac ("module: export module signature enforcement status") Reported-by: Nayna Jain <nayna@linux.ibm.com> Tested-by: Mimi Zohar <zohar@linux.ibm.com> Tested-by: Jessica Yu <jeyu@kernel.org> Signed-off-by: Mimi Zohar <zohar@linux.ibm.com> Signed-off-by: Jessica Yu <jeyu@kernel.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
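A hedged sketch of the gating described above (close to, but not literally, the patch): the parameter only exists when CONFIG_MODULE_SIG is set, so the sysfs entry disappears otherwise.

    #include <linux/module.h>
    #include <linux/moduleparam.h>

    #ifdef CONFIG_MODULE_SIG
    static bool sig_enforce = IS_ENABLED(CONFIG_MODULE_SIG_FORCE);
    module_param(sig_enforce, bool_enable_only, 0644);
    #else
    #define sig_enforce false                 /* no knob without CONFIG_MODULE_SIG */
    #endif

    bool is_module_sig_enforced_sketch(void)
    {
            return sig_enforce;
    }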
2021-06-22  drm/kmb: Fix error return code in kmb_hw_init()  Zhen Lei
When the call to platform_get_irq() to obtain the IRQ of the lcd fails, the returned error code should be propagated. However, we currently do not explicitly assign this error code to 'ret', so 0 is incorrectly returned. Fixes: 7f7b96a8a0a1 ("drm/kmb: Add support for KeemBay Display") Reported-by: Hulk Robot <hulkci@huawei.com> Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com> Signed-off-by: Anitha Chrisanthus <anitha.chrisanthus@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210513134639.6541-1-thunder.leizhen@huawei.com
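The error-propagation pattern being fixed looks roughly like this (simplified sketch, not the literal kmb_hw_init() code):

    #include <linux/platform_device.h>

    static int hw_init_sketch(struct platform_device *pdev)
    {
            int irq, ret = 0;

            irq = platform_get_irq(pdev, 0);  /* returns a negative errno on failure */
            if (irq < 0) {
                    ret = irq;                /* propagate instead of returning 0 */
                    return ret;
            }

            return ret;
    }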
2021-06-22Revert "PCI: PM: Do not read power state in pci_enable_device_flags()"Rafael J. Wysocki
Revert commit 4514d991d992 ("PCI: PM: Do not read power state in pci_enable_device_flags()") that is reported to cause PCI device initialization issues on some systems. BugLink: https://bugzilla.kernel.org/show_bug.cgi?id=213481 Link: https://lore.kernel.org/linux-acpi/YNDoGICcg0V8HhpQ@eldamar.lan Reported-by: Michael <phyre@rogers.com> Reported-by: Salvatore Bonaccorso <carnil@debian.org> Fixes: 4514d991d992 ("PCI: PM: Do not read power state in pci_enable_device_flags()") Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2021-06-22  locking/lockdep: Correct the description error for check_redundant()  Xiongwei Song
If there is no matched result, check_redundant() will return BFS_RNOMATCH. Signed-off-by: Xiongwei Song <sxwjean@gmail.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Boqun Feng <boqun.feng@gmail.com> Link: https://lkml.kernel.org/r/20210618130230.123249-1-sxwjean@me.com
2021-06-22  futex: Provide FUTEX_LOCK_PI2 to support clock selection  Thomas Gleixner
The FUTEX_LOCK_PI futex operand has used a CLOCK_REALTIME based absolute timeout since it was implemented, but it does not require that the FUTEX_CLOCK_REALTIME flag is set, because that was introduced later. In theory, as none of the user space implementations can set the FUTEX_CLOCK_REALTIME flag on this operand, it would be possible to creatively abuse it and invert the meaning, i.e. select CLOCK_REALTIME when not set and CLOCK_MONOTONIC when set. But that's nasty hackery. Another option would be to have a new FUTEX_CLOCK_MONOTONIC flag only for FUTEX_LOCK_PI, but that's also awkward because it does not allow libraries to handle the timeout clock selection consistently. So provide a new FUTEX_LOCK_PI2 operand which implements the timeout semantics which the other operands use and leave FUTEX_LOCK_PI alone. Reported-by: Kurt Kanzenbach <kurt@linutronix.de> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/r/20210422194705.440773992@linutronix.de
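A hedged userspace sketch of the new operand's intended use. The FUTEX_LOCK_PI2 value is guarded in case older headers lack it and should be treated as an assumption taken from the uapi header of this series.

    #include <linux/futex.h>
    #include <stdint.h>
    #include <sys/syscall.h>
    #include <time.h>
    #include <unistd.h>

    #ifndef FUTEX_LOCK_PI2
    #define FUTEX_LOCK_PI2 13                 /* assumed value from the series' uapi header */
    #endif

    /* Acquire a PI futex with a CLOCK_MONOTONIC absolute timeout. */
    static int futex_lock_pi2(uint32_t *uaddr, const struct timespec *abs_timeout)
    {
            return syscall(SYS_futex, uaddr, FUTEX_LOCK_PI2, 0, abs_timeout, NULL, 0);
    }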
2021-06-22  futex: Prepare futex_lock_pi() for runtime clock selection  Thomas Gleixner
futex_lock_pi() is the only futex operation which cannot select the clock for timeouts (CLOCK_MONOTONIC/CLOCK_REALTIME). That's inconsistent and there is no particular reason why this cannot be supported. This was overlooked when the FUTEX_CLOCK_REALTIME flag was introduced and unfortunately not reported when the inconsistency was discovered in glibc. Prepare the function and enforce the FUTEX_CLOCK_REALTIME flag on FUTEX_LOCK_PI so that a new FUTEX_LOCK_PI2 can implement it correctly. Reported-by: Kurt Kanzenbach <kurt@linutronix.de> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/r/20210422194705.338657741@linutronix.de
2021-06-22  lockdep/selftest: Remove wait-type RCU_CALLBACK tests  Peter Zijlstra
The problem is that rcu_callback_map doesn't have wait_types defined, and doing so would make it indistinguishable from SOFTIRQ in any case. Remove it. Fixes: 9271a40d2a14 ("lockdep/selftest: Add wait context selftests") Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Tested-by: Joerg Roedel <jroedel@suse.de> Link: https://lore.kernel.org/r/20210617190313.384290291@infradead.org
2021-06-22  lockdep/selftests: Fix selftests vs PROVE_RAW_LOCK_NESTING  Peter Zijlstra
When PROVE_RAW_LOCK_NESTING=y many of the selftests FAILED because HARDIRQ context is out-of-bounds for spinlocks. Instead make the default hardware context the threaded hardirq context, which preserves the old locking rules. The wait-type specific locking selftests will have a non-threaded HARDIRQ variant. Fixes: de8f5e4f2dc1 ("lockdep: Introduce wait-type checks") Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Tested-by: Joerg Roedel <jroedel@suse.de> Link: https://lore.kernel.org/r/20210617190313.322096283@infradead.org
2021-06-22  lockdep: Fix wait-type for empty stack  Peter Zijlstra
Even the very first lock can violate the wait-context check, consider the various IRQ contexts. Fixes: de8f5e4f2dc1 ("lockdep: Introduce wait-type checks") Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Tested-by: Joerg Roedel <jroedel@suse.de> Link: https://lore.kernel.org/r/20210617190313.256987481@infradead.org
2021-06-22  locking/selftests: Add a selftest for check_irq_usage()  Boqun Feng
Johannes Berg reported a lockdep problem which could be reproduced by the special test case introduced in this patch, so add it. Signed-off-by: Boqun Feng <boqun.feng@gmail.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/r/20210618170110.3699115-5-boqun.feng@gmail.com
2021-06-22  locking/lockdep: Avoid finding a wrong lock dep path in check_irq_usage()  Boqun Feng
In step #3 of check_irq_usage(), we search backwards to find a lock whose usage conflicts with the usage of @target_entry1 on safe/unsafe. However, we should only take the irq-unsafe usage of @target_entry1 into consideration, because there could be a case where a lock is hardirq-unsafe but softirq-safe, and check_irq_usage() finds it because its hardirq-unsafe usage could result in a hardirq-safe-unsafe deadlock. Since we currently don't filter out the other usage bits, we may find a lock dependency path softirq-unsafe -> softirq-safe, which in fact doesn't cause a deadlock, and this may produce misleading lockdep splats. Fix this by only keeping the LOCKF_ENABLED_IRQ_ALL bits when we do the backwards search. Reported-by: Johannes Berg <johannes@sipsolutions.net> Signed-off-by: Boqun Feng <boqun.feng@gmail.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/r/20210618170110.3699115-4-boqun.feng@gmail.com
2021-06-22  locking/lockdep: Remove the unnecessary trace saving  Boqun Feng
In print_bad_irq_dependency(), save_trace() is called to set the ->trace for @prev_root to the current call trace. However, @prev_root corresponds to the held lock, which may not have been acquired in the current call trace; therefore it's wrong to use save_trace() to set the ->trace of @prev_root. Moreover, with our adjustment of printing the backwards dependency path, the ->trace of @prev_root is unnecessary, so remove it. Reported-by: Johannes Berg <johannes@sipsolutions.net> Signed-off-by: Boqun Feng <boqun.feng@gmail.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/r/20210618170110.3699115-3-boqun.feng@gmail.com
2021-06-22  locking/lockdep: Fix the dep path printing for backwards BFS  Boqun Feng
We use the same code to print the backwards lock dependency path as the forwards one, and this could result in incorrect printing because, for a backwards lock_list, ->trace is not the call trace where the lock of ->class is acquired. Fix this by introducing a separate function for printing the backwards dependency path. Also add a few comments about the printing while we are at it. Reported-by: Johannes Berg <johannes@sipsolutions.net> Signed-off-by: Boqun Feng <boqun.feng@gmail.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/r/20210618170110.3699115-2-boqun.feng@gmail.com
2021-06-22  sched/uclamp: Fix uclamp_tg_restrict()  Qais Yousef
Now that cpu.uclamp.min acts as a protection, we need to make sure that the uclamp request of the task is within the allowed range of the cgroup, that is, it is clamp()'ed correctly by tg->uclamp[UCLAMP_MIN] and tg->uclamp[UCLAMP_MAX]. As reported by Xuewen [1] we can have some corner cases where there's inversion between the uclamp requested by the task (p) and the uclamp values of the taskgroup it's attached to (tg). The following table demonstrates 2 corner cases:

           |  p  |  tg  | effective
-----------+-----+------+-----------
CASE 1
-----------+-----+------+-----------
uclamp_min | 60% |  0%  |   60%
-----------+-----+------+-----------
uclamp_max | 80% | 50%  |   50%
-----------+-----+------+-----------
CASE 2
-----------+-----+------+-----------
uclamp_min |  0% | 30%  |   30%
-----------+-----+------+-----------
uclamp_max | 20% | 50%  |   20%
-----------+-----+------+-----------

With this fix we get:

           |  p  |  tg  | effective
-----------+-----+------+-----------
CASE 1
-----------+-----+------+-----------
uclamp_min | 60% |  0%  |   50%
-----------+-----+------+-----------
uclamp_max | 80% | 50%  |   50%
-----------+-----+------+-----------
CASE 2
-----------+-----+------+-----------
uclamp_min |  0% | 30%  |   30%
-----------+-----+------+-----------
uclamp_max | 20% | 50%  |   30%
-----------+-----+------+-----------

Additionally uclamp_update_active_tasks() must now unconditionally update both UCLAMP_MIN/MAX because changing the tg's UCLAMP_MAX, for instance, could have an impact on the effective UCLAMP_MIN of the tasks:

           |  p  |  tg  | effective
-----------+-----+------+-----------
    old
-----------+-----+------+-----------
uclamp_min | 60% |  0%  |   50%
-----------+-----+------+-----------
uclamp_max | 80% | 50%  |   50%
-----------+-----+------+-----------
   *new*
-----------+-----+------+-----------
uclamp_min | 60% |  0%  |  *60%*
-----------+-----+------+-----------
uclamp_max | 80% |*70%* |  *70%*
-----------+-----+------+-----------

[1] https://lore.kernel.org/lkml/CAB8ipk_a6VFNjiEnHRHkUMBKbA+qzPQvhtNjJ_YNzQhqV_o8Zw@mail.gmail.com/

Fixes: 0c18f2ecfcc2 ("sched/uclamp: Fix wrong implementation of cpu.uclamp.min") Reported-by: Xuewen Yan <xuewen.yan94@gmail.com> Signed-off-by: Qais Yousef <qais.yousef@arm.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/20210617165155.3774110-1-qais.yousef@arm.com
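The clamping rule behind the tables above can be stated in a few lines; this is a minimal illustrative sketch (hypothetical helper names), not the scheduler's uclamp_tg_restrict() itself:

    /* The task's request is clamped into the cgroup's [min, max] window. */
    static unsigned int clamp_sketch(unsigned int v, unsigned int lo, unsigned int hi)
    {
            return v < lo ? lo : (v > hi ? hi : v);
    }

    static unsigned int effective_uclamp(unsigned int p_req,
                                         unsigned int tg_min, unsigned int tg_max)
    {
            return clamp_sketch(p_req, tg_min, tg_max);
    }

    /* CASE 1, uclamp_min row: effective_uclamp(60, 0, 50) == 50 after the fix. */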
2021-06-22  sched/rt: Fix Deadline utilization tracking during policy change  Vincent Donnefort
DL keeps track of the utilization on a per-rq basis with the structure avg_dl. This utilization is updated during task_tick_dl(), put_prev_task_dl() and set_next_task_dl(). However, when the current running task changes its policy, set_next_task_dl(), which would usually take care of updating the utilization when the rq starts running DL tasks, will not see such a change, leaving the avg_dl structure outdated. When that very same task is dequeued later, put_prev_task_dl() will then update the utilization based on a wrong last_update_time, leading to a huge spike in the DL utilization signal. The signal would eventually recover from this issue after a few ms. Even if no DL tasks are run, avg_dl is also updated in __update_blocked_others(), but as the CPU capacity depends partly on avg_dl, this issue nonetheless has a significant impact on the scheduler. Fix this issue by ensuring a load update when a running task changes its policy to DL. Fixes: 3727e0e ("sched/dl: Add dl_rq utilization tracking") Signed-off-by: Vincent Donnefort <vincent.donnefort@arm.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org> Link: https://lore.kernel.org/r/1624271872-211872-3-git-send-email-vincent.donnefort@arm.com
2021-06-22  sched/rt: Fix RT utilization tracking during policy change  Vincent Donnefort
RT keeps track of the utilization on a per-rq basis with the structure avg_rt. This utilization is updated during task_tick_rt(), put_prev_task_rt() and set_next_task_rt(). However, when the current running task changes its policy, set_next_task_rt(), which would usually take care of updating the utilization when the rq starts running RT tasks, will not see such a change, leaving the avg_rt structure outdated. When that very same task is dequeued later, put_prev_task_rt() will then update the utilization based on a wrong last_update_time, leading to a huge spike in the RT utilization signal. The signal would eventually recover from this issue after a few ms. Even if no RT tasks are run, avg_rt is also updated in __update_blocked_others(), but as the CPU capacity depends partly on avg_rt, this issue nonetheless has a significant impact on the scheduler. Fix this issue by ensuring a load update when a running task changes its policy to RT. Fixes: 371bf427 ("sched/rt: Add rt_rq utilization tracking") Signed-off-by: Vincent Donnefort <vincent.donnefort@arm.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org> Link: https://lore.kernel.org/r/1624271872-211872-2-git-send-email-vincent.donnefort@arm.com
2021-06-23  KVM: PPC: Book3S HV: Workaround high stack usage with clang  Nathan Chancellor
LLVM does not emit optimal byteswap assembly, which results in high stack usage in kvmhv_enter_nested_guest() due to the inlining of byteswap_pt_regs(). With LLVM 12.0.0:

arch/powerpc/kvm/book3s_hv_nested.c:289:6: error: stack frame size of 2512 bytes in function 'kvmhv_enter_nested_guest' [-Werror,-Wframe-larger-than=]
long kvmhv_enter_nested_guest(struct kvm_vcpu *vcpu)
     ^
1 error generated.

While this gets fixed in LLVM, mark byteswap_pt_regs() as noinline_for_stack so that it does not get inlined and break the build due to -Werror by default in arch/powerpc/. Not inlining saves approximately 800 bytes with LLVM 12.0.0:

arch/powerpc/kvm/book3s_hv_nested.c:290:6: warning: stack frame size of 1728 bytes in function 'kvmhv_enter_nested_guest' [-Wframe-larger-than=]
long kvmhv_enter_nested_guest(struct kvm_vcpu *vcpu)
     ^
1 warning generated.

Cc: stable@vger.kernel.org # v4.20+ Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Nathan Chancellor <nathan@kernel.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://github.com/ClangBuiltLinux/linux/issues/1292 Link: https://bugs.llvm.org/show_bug.cgi?id=49610 Link: https://lore.kernel.org/r/202104031853.vDT0Qjqj-lkp@intel.com/ Link: https://gist.github.com/ba710e3703bf45043a31e2806c843ffd Link: https://lore.kernel.org/r/20210621182440.990242-1-nathan@kernel.org
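For reference, noinline_for_stack is a self-documenting alias for noinline; a hedged, prototype-level sketch of the kind of annotation the patch describes (not the actual book3s_hv_nested.c code):

    #include <linux/compiler.h>

    struct pt_regs;

    /* Keep the helper out of the caller's stack frame. */
    static noinline_for_stack void byteswap_pt_regs_sketch(struct pt_regs *regs)
    {
            /* byte-swap each register field here */
    }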
2021-06-22  Merge branch kvm-arm64/mmu/mte into kvmarm-master/next  Marc Zyngier
KVM/arm64 support for MTE, courtesy of Steven Price. It allows the guest to use memory tagging, and offers a new userspace API to save/restore the tags.

* kvm-arm64/mmu/mte:
  KVM: arm64: Document MTE capability and ioctl
  KVM: arm64: Add ioctl to fetch/store tags in a guest
  KVM: arm64: Expose KVM_ARM_CAP_MTE
  KVM: arm64: Save/restore MTE registers
  KVM: arm64: Introduce MTE VM feature
  arm64: mte: Sync tags for pages where PTE is untagged

Signed-off-by: Marc Zyngier <maz@kernel.org>
2021-06-22  signal: Prevent sigqueue caching after task got released  Thomas Gleixner
syzbot reported a memory leak related to sigqueue caching. The assumption that a task cannot cache a sigqueue after the signal handler has been dropped and exit_task_sigqueue_cache() has been invoked turns out to be wrong. Such a task can still invoke release_task(other_task), which cleans up the signals of 'other_task' and ends up in sigqueue_cache_or_free(), which in turn will cache the signal because task->sigqueue_cache is NULL. That's obviously bogus because nothing will free the cached signal of that task anymore, so the cached item is leaked. This happens when e.g. the last non-leader thread exits and reaps the zombie leader. Prevent this by setting tsk::sigqueue_cache to an error pointer value in exit_task_sigqueue_cache() which forces any subsequent invocation of sigqueue_cache_or_free() from that task to hand the sigqueue back to the kmemcache. Add comments to all relevant places. Fixes: 4bad58ebc8bc ("signal: Allow tasks to cache one sigqueue struct") Reported-by: syzbot+0bac5fec63d4f399ba98@syzkaller.appspotmail.com Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Oleg Nesterov <oleg@redhat.com> Acked-by: Christian Brauner <christian.brauner@ubuntu.com> Link: https://lore.kernel.org/r/878s32g6j5.ffs@nanos.tec.linutronix.de
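A heavily simplified sketch of the idea described above, using the field and helper names from the commit text; treat the details as assumptions, this is not the actual patch:

    #include <linux/err.h>
    #include <linux/sched.h>
    #include <linux/slab.h>

    extern struct kmem_cache *sigqueue_cachep;        /* the sigqueue cache in kernel/signal.c */

    static void exit_task_sigqueue_cache_sketch(struct task_struct *tsk)
    {
            struct sigqueue *q = tsk->sigqueue_cache;

            /* Poison the slot so sigqueue_cache_or_free() hands queues back to the kmemcache. */
            tsk->sigqueue_cache = ERR_PTR(-1);
            if (q)
                    kmem_cache_free(sigqueue_cachep, q);
    }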
2021-06-22  KVM: PPC: Book3S HV: Use H_RPT_INVALIDATE in nested KVM  Bharata B Rao
In the nested KVM case, replace H_TLB_INVALIDATE by the new hcall H_RPT_INVALIDATE if available. The availability of this hcall is determined from "hcall-rpt-invalidate" string in ibm,hypertas-functions DT property. Signed-off-by: Bharata B Rao <bharata@linux.ibm.com> Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com> Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210621085003.904767-7-bharata@linux.ibm.com
2021-06-22  KVM: PPC: Book3S HV: Add KVM_CAP_PPC_RPT_INVALIDATE capability  Bharata B Rao
Now that we have H_RPT_INVALIDATE fully implemented, enable support for it via the KVM_CAP_PPC_RPT_INVALIDATE KVM capability. Signed-off-by: Bharata B Rao <bharata@linux.ibm.com> Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210621085003.904767-6-bharata@linux.ibm.com
2021-06-22  KVM: PPC: Book3S HV: Nested support in H_RPT_INVALIDATE  Bharata B Rao
Enable support for process-scoped invalidations from nested guests and partition-scoped invalidations for nested guests. Process-scoped invalidations for any level of nested guests are handled by implementing H_RPT_INVALIDATE handler in the nested guest exit path in L0. Partition-scoped invalidation requests are forwarded to the right nested guest, handled there and passed down to L0 for eventual handling. Signed-off-by: Bharata B Rao <bharata@linux.ibm.com> [aneesh: Nested guest partition-scoped invalidation changes] Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> [mpe: Squash in fixup patch] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210621085003.904767-5-bharata@linux.ibm.com
2021-06-22  drm/amdgpu: wait for moving fence after pinning  Christian König
We actually need to wait for the moving fence after pinning the BO to make sure that the pin is completed. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> References: https://lore.kernel.org/dri-devel/20210621151758.2347474-1-daniel.vetter@ffwll.ch/ CC: stable@kernel.org Link: https://patchwork.freedesktop.org/patch/msgid/20210622114506.106349-3-christian.koenig@amd.com
2021-06-22  drm/radeon: wait for moving fence after pinning  Christian König
We actually need to wait for the moving fence after pinning the BO to make sure that the pin is completed. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> References: https://lore.kernel.org/dri-devel/20210621151758.2347474-1-daniel.vetter@ffwll.ch/ CC: stable@kernel.org Link: https://patchwork.freedesktop.org/patch/msgid/20210622114506.106349-2-christian.koenig@amd.com
2021-06-22  drm/nouveau: wait for moving fence after pinning v2  Christian König
We actually need to wait for the moving fence after pinning the BO to make sure that the pin is completed. v2: grab the lock while waiting Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> References: https://lore.kernel.org/dri-devel/20210621151758.2347474-1-daniel.vetter@ffwll.ch/ CC: stable@kernel.org Link: https://patchwork.freedesktop.org/patch/msgid/20210622114506.106349-1-christian.koenig@amd.com
2021-06-22  KVM: arm64: Document MTE capability and ioctl  Steven Price
A new capability (KVM_CAP_ARM_MTE) identifies that the kernel supports granting a guest access to the tags, and provides a mechanism for the VMM to enable it. A new ioctl (KVM_ARM_MTE_COPY_TAGS) provides a simple way for a VMM to access the tags of a guest without having to maintain a PROT_MTE mapping in userspace. The above capability gates access to the ioctl. Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Steven Price <steven.price@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20210621111716.37157-7-steven.price@arm.com
2021-06-22  KVM: arm64: Add ioctl to fetch/store tags in a guest  Steven Price
The VMM may not wish to have its own mapping of guest memory mapped with PROT_MTE because this causes problems if the VMM has tag checking enabled (the guest controls the tags in physical RAM and it's unlikely the tags are correct for the VMM). Instead add a new ioctl which allows the VMM to easily read/write the tags from guest memory, allowing the VMM's mapping to be non-PROT_MTE while the VMM can still read/write the tags for the purpose of migration. Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Steven Price <steven.price@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20210621111716.37157-6-steven.price@arm.com
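A hedged userspace sketch of how a VMM might use the new ioctl; the struct layout and flag names (kvm_arm_copy_mte_tags, KVM_ARM_TAGS_FROM_GUEST) are taken from this series' uapi and should be treated as assumptions if your headers differ:

    #include <linux/kvm.h>
    #include <stdint.h>
    #include <sys/ioctl.h>

    /* Read the MTE tags for a page-aligned range of guest IPA space. */
    static int read_guest_tags(int vm_fd, uint64_t guest_ipa, uint64_t len, void *buf)
    {
            struct kvm_arm_copy_mte_tags copy = {
                    .guest_ipa = guest_ipa,   /* must be page aligned */
                    .length    = len,         /* multiple of the page size */
                    .addr      = buf,         /* receives one tag byte per 16-byte granule */
                    .flags     = KVM_ARM_TAGS_FROM_GUEST,
            };

            return ioctl(vm_fd, KVM_ARM_MTE_COPY_TAGS, &copy);
    }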
2021-06-22  KVM: arm64: Expose KVM_ARM_CAP_MTE  Steven Price
It's now safe for the VMM to enable MTE in a guest, so expose the capability to user space. Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Steven Price <steven.price@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20210621111716.37157-5-steven.price@arm.com
2021-06-22  KVM: arm64: Save/restore MTE registers  Steven Price
Define the new system registers that MTE introduces and context switch them. The MTE feature is still hidden from the ID register as it isn't supported in a VM yet. Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Steven Price <steven.price@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20210621111716.37157-4-steven.price@arm.com
2021-06-22  KVM: arm64: Introduce MTE VM feature  Steven Price
Add a new VM feature 'KVM_ARM_CAP_MTE' which enables memory tagging for a VM. This will expose the feature to the guest and automatically tag memory pages touched by the VM as PG_mte_tagged (and clear the tag storage) to ensure that the guest cannot see stale tags, and so that the tags are correctly saved/restored across swap. Actually exposing the new capability to user space happens in a later patch. Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Steven Price <steven.price@arm.com> [maz: move VM_SHARED sampling into the critical section] Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20210621111716.37157-3-steven.price@arm.com
2021-06-22  sched/fair: Ensure that the CFS parent is added after unthrottling  Rik van Riel
Ensure that a CFS parent will be in the list whenever one of its children is also in the list. A warning on rq->tmp_alone_branch != &rq->leaf_cfs_rq_list has been reported while running LTP test cfs_bandwidth01. Odin Ugedal found the root cause:

$ tree /sys/fs/cgroup/ltp/ -d --charset=ascii
/sys/fs/cgroup/ltp/
|-- drain
`-- test-6851
    `-- level2
        |-- level3a
        |   |-- worker1
        |   `-- worker2
        `-- level3b
            `-- worker3

Timeline (ish):
- worker3 gets throttled
- level3b is decayed, since it has no more load
- level2 gets throttled
- worker3 gets unthrottled
- level2 gets unthrottled
- worker3 is added to the list
- level3b is not added to the list, since nr_running==0 and it is decayed

[ Vincent Guittot: Rebased and updated to fix the reported warning. ]

Fixes: a7b359fc6a37 ("sched/fair: Correctly insert cfs_rq's to list on unthrottle") Reported-by: Sachin Sant <sachinp@linux.vnet.ibm.com> Suggested-by: Vincent Guittot <vincent.guittot@linaro.org> Signed-off-by: Rik van Riel <riel@surriel.com> Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Tested-by: Sachin Sant <sachinp@linux.vnet.ibm.com> Acked-by: Odin Ugedal <odin@uged.al> Link: https://lore.kernel.org/r/20210621174330.11258-1-vincent.guittot@linaro.org
2021-06-22  locking/lockdep: Improve noinstr vs errors  Peter Zijlstra
Better handle the failure paths.

vmlinux.o: warning: objtool: debug_locks_off()+0x23: call to console_verbose() leaves .noinstr.text section
vmlinux.o: warning: objtool: debug_locks_off()+0x19: call to __kasan_check_write() leaves .noinstr.text section

debug_locks_off+0x19/0x40:
instrument_atomic_write at include/linux/instrumented.h:86
(inlined by) __debug_locks_off at include/linux/debug_locks.h:17
(inlined by) debug_locks_off at lib/debug_locks.c:41

Fixes: 6eebad1ad303 ("lockdep: __always_inline more for noinstr") Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20210621120120.784404944@infradead.org