2020-07-30  ixgbevf: use generic power management  (Vaibhav Gupta)
With legacy PM, drivers themselves were responsible for managing the device's power states and taking care of register states. After upgrading to the generic structure, the PCI core takes care of the required tasks and drivers should do only device-specific operations. The driver was invoking PCI helper functions like pci_save/restore_state() and pci_enable/disable_device(), which is not recommended. Compile-tested only. Signed-off-by: Vaibhav Gupta <vaibhavgupta40@gmail.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
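For reference, the conversion pattern these generic power management patches follow looks roughly like the sketch below; the function bodies and quiescing steps are illustrative rather than the exact driver code, but the key points are that the callbacks take a struct device, that pci_save_state()/pci_set_power_state() are left to the PCI core, and that the ops table is wired up via pci_driver.driver.pm:

    static int __maybe_unused ixgbevf_suspend(struct device *dev_d)
    {
            struct net_device *netdev = dev_get_drvdata(dev_d);

            /* Device-specific quiescing only; no pci_save_state(),
             * pci_disable_device() or pci_set_power_state() here --
             * the PCI core performs those around this callback.
             */
            netif_device_detach(netdev);
            return 0;
    }

    static int __maybe_unused ixgbevf_resume(struct device *dev_d)
    {
            struct net_device *netdev = dev_get_drvdata(dev_d);

            netif_device_attach(netdev);
            return 0;
    }

    static SIMPLE_DEV_PM_OPS(ixgbevf_pm_ops, ixgbevf_suspend, ixgbevf_resume);

    static struct pci_driver ixgbevf_driver = {
            .name      = "ixgbevf",
            /* .id_table, .probe, .remove, ... */
            .driver.pm = &ixgbevf_pm_ops,
    };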
2020-07-30  ixgbe: use generic power management  (Vaibhav Gupta)
With legacy PM hooks, it was the responsibility of the driver to manage PCI states and also the device's power state. The generic approach is to let the PCI core handle the work. ixgbe_suspend() calls __ixgbe_shutdown() to perform intermediate tasks. __ixgbe_shutdown() modifies the value of "wake" (whether the device should be wakeup-enabled or not), which controls the flow of legacy PM. Since the PCI core has no idea about the value of "wake", new code for generic PM may produce unexpected results. Thus, use device_set_wakeup_enable() to wakeup-enable the device accordingly. Compile-tested only. Signed-off-by: Vaibhav Gupta <vaibhavgupta40@gmail.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2020-07-30  intel_idle: Customize IceLake server support  (Chen Yu)
On the ICX platform, C1E auto-promotion is enabled by default. As a result, the CPU might fall into C1E more often than on previous platforms. Besides, C1E is not exposed to sysfs on ICX, which is inconsistent with previous server platforms. So disable C1E auto-promotion and expose C1E as a separate idle state, so that C1E and C6 can be disabled via sysfs when necessary. Besides C1 and C1E, the exit latency of C6 was measured by a dedicated tool. However, the exit latency (41us) exposed by _CST is much smaller than the one we measured (128us). This is probably because _CST reports the exit latency when waking up from PC0+C6, whereas C6 was measured from PC6+C6. Choose the latter, as we need the longest latency in theory. Reported-by: kernel test robot <lkp@intel.com> Tested-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com> Acked-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com> Reviewed-by: Zhang Rui <rui.zhang@intel.com> Signed-off-by: Chen Yu <yu.c.chen@intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
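Disabling C1E auto-promotion comes down to clearing a bit in MSR_IA32_POWER_CTL on each CPU; the following is a hedged sketch of that step (the helper name and bit mask mirror what intel_idle commonly uses, shown here only for illustration):

    static void c1e_promotion_disable(void)
    {
            unsigned long long msr_bits;

            rdmsrl(MSR_IA32_POWER_CTL, msr_bits);
            msr_bits &= ~0x2;       /* clear the C1E auto-promotion enable bit */
            wrmsrl(MSR_IA32_POWER_CTL, msr_bits);
    }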
2020-07-30  igbvf: use generic power management  (Vaibhav Gupta)
Remove the legacy PM callbacks and use generic operations. With the legacy code, drivers were responsible for handling PCI PM operations like pci_save_state(). In the generic code, all of these are handled by the PCI core. The generic suspend() and resume() are called at the same points the legacy ones were called, so this does not affect the normal functioning of the driver. The __maybe_unused attribute is used with .resume() but not with .suspend(), as .suspend() is also called by .shutdown(). Compile-tested only. Signed-off-by: Vaibhav Gupta <vaibhavgupta40@gmail.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2020-07-30  iavf: use generic power management  (Vaibhav Gupta)
With the support of generic PM callbacks, drivers no longer need to use the legacy .suspend() and .resume() in which they had to maintain PCI state changes and the device's power state themselves. The required operations are done by the PCI core. PCI drivers are not expected to invoke PCI helper functions like pci_save/restore_state(), pci_enable/disable_device(), pci_set_power_state(), etc.; those tasks are completed by the PCI core itself. Compile-tested only. Signed-off-by: Vaibhav Gupta <vaibhavgupta40@gmail.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2020-07-30  virtio-mem: Fix build error due to improper use 'select'  (Weilong Chen)
As noted in: https://www.kernel.org/doc/Documentation/kbuild/kconfig-language.txt "select should be used with care. select will force a symbol to a value without visiting the dependencies." Config VIRTIO_MEM should not select CONTIG_ALLOC directly. Otherwise it will cause an error: https://bugzilla.kernel.org/show_bug.cgi?id=208245 Signed-off-by: Weilong Chen <chenweilong@huawei.com> Link: https://lore.kernel.org/r/20200619080333.194753-1-chenweilong@huawei.com Acked-by: Randy Dunlap <rdunlap@infradead.org> # build-tested Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Acked-by: Jason Wang <jasowang@redhat.com> Acked-by: David Hildenbrand <david@redhat.com>
2020-07-30  Merge branch 'opp/linux-next' of git://git.kernel.org/pub/scm/linux/kernel/git/vireshk/pm  (Rafael J. Wysocki)
Pull operating performance points (OPP) framework updates for v5.9 from Viresh Kumar: "This contains the following changes:
 - Fix HTTP links (Alexander A. Klimov).
 - Allow disabled OPPs in dev_pm_opp_get_freq() (Andrew-sh.Cheng).
 - Add missing export (Valdis Kletnieks)."
* 'opp/linux-next' of git://git.kernel.org/pub/scm/linux/kernel/git/vireshk/pm:
  opp: Allow disabled OPPs in dev_pm_opp_get_freq()
  opp: ti-opp-supply: Replace HTTP links with HTTPS ones
  opp: core: Add missing export for dev_pm_opp_adjust_voltage
2020-07-30  PCI/P2PDMA: Allow P2PDMA on AMD Zen and newer CPUs  (Logan Gunthorpe)
Allow P2PDMA if the CPU vendor is AMD and family is 0x17 (Zen) or greater. [bhelgaas: commit log, simplify #if/#else/#endif] Link: https://lore.kernel.org/r/20200729231844.4653-1-logang@deltatee.com Signed-off-by: Logan Gunthorpe <logang@deltatee.com> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Cc: Christian König <christian.koenig@amd.com> Cc: Huang Rui <ray.huang@amd.com>
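The vendor/family check described above amounts to something like this sketch (the helper name follows the patch description; details may differ):

    static bool cpu_supports_p2pdma(void)
    {
    #ifdef CONFIG_X86
            struct cpuinfo_x86 *c = &cpu_data(0);

            /* AMD CPUs of family 0x17 (Zen) or newer are allowed */
            if (c->x86_vendor == X86_VENDOR_AMD && c->x86 >= 0x17)
                    return true;
    #endif
            return false;
    }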
2020-07-30  Merge branch 'kvm-arm64/misc-5.9' into kvmarm-master/next  (Marc Zyngier)
Signed-off-by: Marc Zyngier <maz@kernel.org>
2020-07-30Revert "drm/amdgpu: Fix NULL dereference in dpm sysfs handlers"Alex Deucher
This regressed some working configurations so revert it. Will fix this properly for 5.9 and backport then. This reverts commit 38e0c89a19fd13f28d2b4721035160a3e66e270b. Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Cc: stable@vger.kernel.org
2020-07-30  KVM: arm64: Move S1PTW S2 fault logic out of io_mem_abort()  (Will Deacon)
To allow for re-injection of stage-2 faults on stage-1 page-table walks due to either a missing or read-only memslot, move the triage logic out of io_mem_abort() and into kvm_handle_guest_abort(), where these aborts can be handled before anything else. Signed-off-by: Will Deacon <will@kernel.org> Signed-off-by: Marc Zyngier <maz@kernel.org> Cc: Marc Zyngier <maz@kernel.org> Cc: Quentin Perret <qperret@google.com> Link: https://lore.kernel.org/r/20200729102821.23392-5-will@kernel.org
2020-07-30  KVM: arm64: Don't skip cache maintenance for read-only memslots  (Will Deacon)
If a guest performs cache maintenance on a read-only memslot, we should inform userspace rather than skip the instruction altogether. Signed-off-by: Will Deacon <will@kernel.org> Signed-off-by: Marc Zyngier <maz@kernel.org> Cc: Marc Zyngier <maz@kernel.org> Cc: Quentin Perret <qperret@google.com> Link: https://lore.kernel.org/r/20200729102821.23392-4-will@kernel.org
2020-07-30  KVM: arm64: Handle data and instruction external aborts the same way  (Will Deacon)
If the guest generates a synchronous external abort which is not handled by the host, we inject it back into the guest as a virtual SError, but only if the original fault was reported on the data side. Instruction faults are reported as "Unsupported FSC", causing the vCPU run loop to bail with -EFAULT. Although synchronous external aborts from a guest are pretty unusual, treat them the same regardless of whether they are taken as data or instruction aborts by EL2. Signed-off-by: Will Deacon <will@kernel.org> Signed-off-by: Marc Zyngier <maz@kernel.org> Cc: Marc Zyngier <maz@kernel.org> Cc: Quentin Perret <qperret@google.com> Link: https://lore.kernel.org/r/20200729102821.23392-3-will@kernel.org
2020-07-30  drm/amd/display: Clear dm_state for fast updates  (Mazin Rezk)
This patch fixes a race condition that causes a use-after-free during amdgpu_dm_atomic_commit_tail. This can occur when 2 non-blocking commits are requested and the second one finishes before the first. Essentially, this bug occurs when the following sequence of events happens:
 1. Non-blocking commit #1 is requested w/ a new dm_state #1 and is deferred to the workqueue.
 2. Non-blocking commit #2 is requested w/ a new dm_state #2 and is deferred to the workqueue.
 3. Commit #2 starts before commit #1, dm_state #1 is used in the commit_tail and commit #2 completes, freeing dm_state #1.
 4. Commit #1 starts after commit #2 completes, uses the freed dm_state #1 and dereferences a freelist pointer while setting the context.
Since this bug has only been spotted with fast commits, this patch fixes it by clearing the dm_state instead of using the old dc_state for fast updates. In addition, since dm_state is only used for its dc_state and amdgpu_dm_atomic_commit_tail will retain the dc_state if none is found, removing the dm_state should not have any consequences in fast updates. This use-after-free bug has existed for a while now, but only caused a noticeable issue starting from 5.7-rc1 due to 3202fa62f ("slub: relocate freelist pointer to middle of object") moving the freelist pointer from dm_state->base (which was unused) to dm_state->context (which is dereferenced). Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=207383 Fixes: bd200d190f45 ("drm/amd/display: Don't replace the dc_state for fast updates") Reported-by: Duncan <1i5t5.duncan@cox.net> Signed-off-by: Mazin Rezk <mnrzk@protonmail.com> Reviewed-by: Nicholas Kazlauskas <nicholas.kazlauskas@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Cc: stable@vger.kernel.org
2020-07-30  drm/amdgpu: Prevent kernel-infoleak in amdgpu_info_ioctl()  (Peilin Ye)
Compiler leaves a 4-byte hole near the end of `dev_info`, causing amdgpu_info_ioctl() to copy uninitialized kernel stack memory to userspace when `size` is greater than 356. In 2015 we tried to fix this issue by doing `= {};` on `dev_info`, which unfortunately does not initialize that 4-byte hole. Fix it by using memset() instead. Cc: stable@vger.kernel.org Fixes: c193fa91b918 ("drm/amdgpu: information leak in amdgpu_info_ioctl()") Fixes: d38ceaf99ed0 ("drm/amdgpu: add core driver (v4)") Suggested-by: Dan Carpenter <dan.carpenter@oracle.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Peilin Ye <yepeilin.cs@gmail.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
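The distinction matters because "= {};" only guarantees that named members are zero-initialized; compiler-inserted padding may be left untouched. A hedged sketch of the fix follows, where "out" and "size" stand in for the ioctl's destination pointer and requested length:

    struct drm_amdgpu_info_device dev_info;

    /* memset() clears the whole object, padding holes included,
     * before any of it can reach userspace.
     */
    memset(&dev_info, 0, sizeof(dev_info));

    /* ... fill in the dev_info fields ... */

    return copy_to_user(out, &dev_info,
                        min((size_t)size, sizeof(dev_info))) ? -EFAULT : 0;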
2020-07-30  KVM: arm64: Rename kvm_vcpu_dabt_isextabt()  (Will Deacon)
kvm_vcpu_dabt_isextabt() is not specific to data aborts and, unlike kvm_vcpu_dabt_issext(), has nothing to do with sign extension. Rename it to 'kvm_vcpu_abt_issea()'. Signed-off-by: Will Deacon <will@kernel.org> Signed-off-by: Marc Zyngier <maz@kernel.org> Cc: Marc Zyngier <maz@kernel.org> Cc: Quentin Perret <qperret@google.com> Link: https://lore.kernel.org/r/20200729102821.23392-2-will@kernel.org
2020-07-30  Merge branch 'kvm-arm64/el2-obj-v4.1' into kvmarm-master/next  (Marc Zyngier)
Signed-off-by: Marc Zyngier <maz@kernel.org>
2020-07-30  RDMA/hns: Remove redundant parameters in set_rc_wqe()  (Weihang Li)
Some of the functions called by set_rc_wqe() take two parameters, "void *wqe" and "struct hns_roce_v2_rc_send_wqe *rc_sq_wqe", but the first can be obtained from the second. So remove the redundant wqe parameter from the related functions. Link: https://lore.kernel.org/r/1595932941-40613-5-git-send-email-liweihang@huawei.com Signed-off-by: Weihang Li <liweihang@huawei.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2020-07-30  RDMA/hns: Remove support for HIP08_A  (Lang Cheng)
HIP08_A is a temporary version and all of its features are supported by HIP08_B, so remove the related code. Link: https://lore.kernel.org/r/1595932941-40613-4-git-send-email-liweihang@huawei.com Signed-off-by: Lang Cheng <chenglang@huawei.com> Signed-off-by: Yangyang Li <liyangyang20@huawei.com> Signed-off-by: Weihang Li <liweihang@huawei.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2020-07-30  RDMA/hns: Refactor hns_roce_v2_set_hem()  (Weihang Li)
The code that prepares and sends the mailbox to hardware is not strongly related to the rest of hns_roce_v2_set_hem() and can be encapsulated into a separate function. Link: https://lore.kernel.org/r/1595932941-40613-3-git-send-email-liweihang@huawei.com Signed-off-by: Weihang Li <liweihang@huawei.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2020-07-30  RDMA/hns: Remove redundant hardware opcode definitions  (Lang Cheng)
HNS_ROCE_SQ_OPCODE_XXXs and HNS_ROCE_V2_WQE_OP_XXXs have the same values, so remove one redundant set of definitions. In addition, remove the suffix of HNS_ROCE_V2_WQE_OP_BIND_MW_TYPE. Link: https://lore.kernel.org/r/1595932941-40613-2-git-send-email-liweihang@huawei.com Signed-off-by: Lang Cheng <chenglang@huawei.com> Signed-off-by: Weihang Li <liweihang@huawei.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2020-07-30  RDMA/core: Free DIM memory in error unwind  (Leon Romanovsky)
The memory allocated for the DIM wasn't freed in the error unwind path; fix it by calling rdma_dim_destroy(). Fixes: da6629793aa6 ("RDMA/core: Provide RDMA DIM support for ULPs") Link: https://lore.kernel.org/r/20200730082719.1582397-4-leon@kernel.org Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Reviewed-by: Max Gurtovoy <maxg@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2020-07-30  RDMA/core: Stop DIM before destroying CQ  (Leon Romanovsky)
The HW destroy operation should be the last operation, after all possible CQ users have completed their work, so move the DIM work cancellation before that destroy call. Fixes: da6629793aa6 ("RDMA/core: Provide RDMA DIM support for ULPs") Link: https://lore.kernel.org/r/20200730082719.1582397-3-leon@kernel.org Reviewed-by: Max Gurtovoy <maxg@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
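Taken together with the previous patch, the intended CQ teardown ordering is roughly the following sketch (exact call sites and signatures in the CQ free path may differ):

    /* Cancel the DIM work while the CQ is still fully alive ... */
    rdma_dim_destroy(cq);

    /* ... and only then let the driver destroy the hardware CQ. */
    cq->device->ops.destroy_cq(cq, udata);
    kfree(cq->wc);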
2020-07-30  RDMA/mlx5: Initialize QP mutex for the debug kernels  (Leon Romanovsky)
In the DCT and RSS RAW QP creation flows, the QP mutex wasn't initialized and the magic field inside the lock was missing. This caused the following kernel warning on kernels built with CONFIG_DEBUG_MUTEXES.
 DEBUG_LOCKS_WARN_ON(lock->magic != lock)
 WARNING: CPU: 3 PID: 16261 at kernel/locking/mutex.c:938 __mutex_lock+0x60e/0x940
 Modules linked in: bonding nf_tables ipip tunnel4 geneve ip6_udp_tunnel udp_tunnel ip6_gre ip6_tunnel tunnel6 ip_gre gre ip_tunnel mlx5_ib mlx5_core mlxfw ptp pps_core rdma_ucm ib_uverbs ib_ipoib ib_umad openvswitch nsh xt_MASQUERADE nf_conntrack_netlink nfnetlink iptable_nat xt_addrtype xt_conntrack nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 br_netfilter overlay ib_srp scsi_transport_srp rpcrdma ib_iser libiscsi scsi_transport_iscsi rdma_cm iw_cm ib_cm ib_core [last unloaded: mlxfw]
 CPU: 3 PID: 16261 Comm: ib_send_bw Not tainted 5.8.0-rc4_for_upstream_min_debug_2020_07_08_22_04 #1
 Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.12.1-0-ga5cab58e9a3f-prebuilt.qemu.org 04/01/2014
 RIP: 0010:__mutex_lock+0x60e/0x940
 Code: c0 0f 84 6d fa ff ff 44 8b 15 4e 9d ba 00 45 85 d2 0f 85 5d fa ff ff 48 c7 c6 f2 de 2b 82 48 c7 c7 f1 8a 2b 82 e8 d2 4d 72 ff <0f> 0b 4c 8b 4d 88 e9 3f fa ff ff f6 c2 04 0f 84 37 fe ff ff 48 89
 RSP: 0018:ffff88810bb8b870 EFLAGS: 00010286
 RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
 RDX: ffff88829f1dd880 RSI: 0000000000000000 RDI: ffffffff81192afa
 RBP: ffff88810bb8b910 R08: 0000000000000000 R09: 0000000000000028
 R10: 0000000000000000 R11: 0000000000003f85 R12: 0000000000000002
 R13: ffff88827d8d3ce0 R14: ffffffffa059f615 R15: ffff8882a4d02610
 FS:  00007f3f6988e740(0000) GS:ffff8882f5b80000(0000) knlGS:0000000000000000
 CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
 CR2: 0000556556158000 CR3: 000000010a63c005 CR4: 0000000000360ea0
 DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
 DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
 Call Trace:
  ? cmd_exec+0x947/0xe60 [mlx5_core]
  ? __mutex_lock+0x76/0x940
  ? mlx5_ib_qp_set_counter+0x25/0xa0 [mlx5_ib]
  mlx5_ib_qp_set_counter+0x25/0xa0 [mlx5_ib]
  mlx5_ib_counter_bind_qp+0x9b/0xe0 [mlx5_ib]
  __rdma_counter_bind_qp+0x6b/0xa0 [ib_core]
  rdma_counter_bind_qp_auto+0x363/0x520 [ib_core]
  _ib_modify_qp+0x316/0x580 [ib_core]
  ib_modify_qp_with_udata+0x19/0x30 [ib_core]
  modify_qp+0x4c4/0x600 [ib_uverbs]
  ib_uverbs_ex_modify_qp+0x87/0xe0 [ib_uverbs]
  ib_uverbs_handler_UVERBS_METHOD_INVOKE_WRITE+0x129/0x1c0 [ib_uverbs]
  ib_uverbs_cmd_verbs.isra.5+0x5d5/0x11f0 [ib_uverbs]
  ? ib_uverbs_handler_UVERBS_METHOD_QUERY_CONTEXT+0x120/0x120 [ib_uverbs]
  ? lock_acquire+0xb9/0x3a0
  ? ib_uverbs_ioctl+0xd0/0x210 [ib_uverbs]
  ? ib_uverbs_ioctl+0x175/0x210 [ib_uverbs]
  ib_uverbs_ioctl+0x14b/0x210 [ib_uverbs]
  ? ib_uverbs_ioctl+0xd0/0x210 [ib_uverbs]
  ksys_ioctl+0x234/0x7d0
  ? exc_page_fault+0x202/0x640
  ? do_syscall_64+0x1f/0x2e0
  __x64_sys_ioctl+0x16/0x20
  do_syscall_64+0x59/0x2e0
  ? asm_exc_page_fault+0x8/0x30
  ? rcu_read_lock_sched_held+0x52/0x60
  entry_SYSCALL_64_after_hwframe+0x44/0xa9
Fixes: b4aaa1f0b415 ("IB/mlx5: Handle type IB_QPT_DRIVER when creating a QP") Link: https://lore.kernel.org/r/20200730082719.1582397-2-leon@kernel.org Reviewed-by: Maor Gottlieb <maorg@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
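The fix itself is small: CONFIG_DEBUG_MUTEXES stores a magic pointer in the lock when mutex_init() runs and __mutex_lock() later checks it, so every QP creation flow (including the DCT and RSS RAW ones) must execute something like the line below before the lock can be taken (field name as used by the mlx5 driver, shown for illustration):

    mutex_init(&qp->mutex);     /* sets lock->magic for CONFIG_DEBUG_MUTEXES */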
2020-07-30  KVM: arm: Add trace name for ARM_NISV  (Alexander Graf)
Commit c726200dd106d ("KVM: arm/arm64: Allow reporting non-ISV data aborts to userspace") introduced a mechanism to deflect MMIO traffic the kernel can not handle to user space. For that, it introduced a new exit reason. However, it did not update the trace point array that gives human readable names to these exit reasons inside the trace log. Let's fix that up after the fact, so that trace logs are pretty even when we get user space MMIO traps on ARM. Fixes: c726200dd106d ("KVM: arm/arm64: Allow reporting non-ISV data aborts to userspace") Signed-off-by: Alexander Graf <graf@amazon.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20200730094441.18231-1-graf@amazon.com
2020-07-30  KVM: arm64: Ensure that all nVHE hyp code is in .hyp.text  (David Brazdil)
Some compilers may put a subset of generated functions into '.text.*' ELF sections and the linker may leverage this division to optimize ELF layout. Unfortunately, the recently introduced HYPCOPY command assumes that all executable code (with the exception of specialized sections such as '.hyp.idmap.text') is in the '.text' section. If this assumption is broken, code in '.text.*' will be merged into kernel proper '.text' instead of the '.hyp.text' that is mapped in EL2. To ensure that this cannot happen, insert an OBJDUMP assertion into HYPCOPY. The command dumps a list of ELF sections in the input object file and greps for '.text.'. If found, compilation fails. Tested with both binutils' and LLVM's objdump (the output format is different). GCC offers '-fno-reorder-functions' to disable this behaviour. Select the flag if it is available. From inspection of GCC source (latest Git in July 2020), this flag does force all code into '.text'. By default, GCC uses profile data, heuristics and attributes to select a subsection. LLVM/Clang currently does not have a similar optimization pass. It can place static constructors into '.text.startup' and its optimizer can be provided with profile data to reorder hot/cold functions. Neither of these is applicable to nVHE hyp code. If this changes in the future, the OBJDUMP assertion should alert users to the problem. Signed-off-by: David Brazdil <dbrazdil@google.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20200730132519.48787-1-dbrazdil@google.com
2020-07-30  cpuidle: pseries: Fixup exit latency for CEDE(0)  (Gautham R. Shenoy)
We are currently assuming that CEDE(0) has an exit latency of 10us, since there is no way for us to query it from the platform. However, if the wakeup latency of an Extended CEDE state is smaller than 10us, then we can be sure that the exit latency of CEDE(0) cannot be more than that. In this patch, we fix the exit latency of CEDE(0) if we discover an Extended CEDE state with a wakeup latency smaller than 10us.
Benchmark results:
On POWER8, this patch does not have any impact since the advertised latency of Extended CEDE (1) is 30us, which is higher than the default latency of CEDE (0), which is 10us. On POWER9 we see an improvement in the single-threaded performance of ebizzy, and no regression in the wakeup latency or the number of context-switches.
ebizzy: 2 ebizzy threads bound to the same big-core. 25% improvement in the avg records/s with the patch.
   x without_patch   * with_patch
     N      Min      Max   Median        Avg     Stddev
 x  10  2491089  5834307  5398375    4244335  1596244.9
 *  10  2893813  5834474  5832448  5327281.3  1055941.4
context_switch2: There is no major regression observed with this patch as seen from the context_switch2 benchmark.
context_switch2 across CPU0 CPU1 (both belong to the same big-core, but different small cores). We observe a minor 0.14% regression in the number of context-switches (higher is better).
   x without_patch   * with_patch
     N     Min     Max  Median        Avg     Stddev
 x 500  348872  362236  354712  354745.69   2711.827
 * 500  349422  361452  353942   354215.4  2576.9258
 Difference at 99.0% confidence
   -530.288 +/- 430.963
   -0.149484% +/- 0.121485%
   (Student's t, pooled s = 2645.24)
context_switch2 across CPU0 CPU8 (different big-cores). We observe a 0.37% improvement in the number of context-switches (higher is better).
   x without_patch   * with_patch
     N     Min     Max  Median        Avg     Stddev
 x 500  287956  294940  288896  288977.23  646.59295
 * 500  288300  294646  289582  290064.76  1161.9992
 Difference at 99.0% confidence
   1087.53 +/- 153.194
   0.376337% +/- 0.0530125%
   (Student's t, pooled s = 940.299)
schbench: No major difference could be seen until the 99.9th percentile.
Without-patch: Latency percentiles (usec)
   50.0th: 29
   75.0th: 39
   90.0th: 49
   95.0th: 59
   *99.0th: 13104
   99.5th: 14672
   99.9th: 15824
   min=0, max=17993
With-patch: Latency percentiles (usec)
   50.0th: 29
   75.0th: 40
   90.0th: 50
   95.0th: 61
   *99.0th: 13648
   99.5th: 14768
   99.9th: 15664
   min=0, max=29812
Signed-off-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com> [mpe: Minor formatting] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/1596087177-30329-4-git-send-email-ego@linux.vnet.ibm.com
2020-07-30  cpuidle: pseries: Add function to parse extended CEDE records  (Gautham R. Shenoy)
Currently we use CEDE with latency-hint 0 as the only other idle state on a dedicated LPAR apart from the polling "snooze" state. The platform might support additional extended CEDE idle states, which can be discovered through the "ibm,get-system-parameter" rtas-call made with CEDE_LATENCY_TOKEN. This patch adds a function to obtain information about the extended CEDE idle states from the platform and parse the contents to populate an array of extended CEDE states. The idle states thus discovered will be added to the cpuidle framework in the next patch. dmesg on a POWER8 and a POWER9 LPAR, demonstrating the output of parsing the extended CEDE latency parameters, is as follows:
POWER8
 [   10.093279] xcede : xcede_record_size = 10
 [   10.093285] xcede : Record 0 : hint = 1, latency = 0x3c00 tb ticks, Wake-on-irq = 1
 [   10.093291] xcede : Record 1 : hint = 2, latency = 0x4e2000 tb ticks, Wake-on-irq = 0
 [   10.093297] cpuidle : Skipping the 2 Extended CEDE idle states
POWER9
 [    5.913180] xcede : xcede_record_size = 10
 [    5.913183] xcede : Record 0 : hint = 1, latency = 0x400 tb ticks, Wake-on-irq = 1
 [    5.913188] xcede : Record 1 : hint = 2, latency = 0x3e8000 tb ticks, Wake-on-irq = 0
 [    5.913193] cpuidle : Skipping the 2 Extended CEDE idle states
Signed-off-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com> [mpe: Make space for 16 records, drop memset, minor cleanup & formatting] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/1596087177-30329-3-git-send-email-ego@linux.vnet.ibm.com
2020-07-30  cpuidle: pseries: Set the latency-hint before entering CEDE  (Gautham R. Shenoy)
As per the PAPR, each H_CEDE call is associated with a latency-hint to be passed in the VPA field "cede_latency_hint". The CEDE state that we have been implicitly entering so far is CEDE with latency-hint = 0. This patch explicitly sets the latency hint corresponding to the CEDE state that we are currently entering. While at it, we save the previous hint, to be restored once we wake up from CEDE. This will be required in the future when we expose extended-cede states through the cpuidle framework, where each of them will have a different cede-latency hint. Signed-off-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com> [mpe: Make cede_latency_hint static] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/1596087177-30329-2-git-send-email-ego@linux.vnet.ibm.com
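On pseries the hint lives in the VPA (struct lppaca), so the idle-entry path ends up looking roughly like this sketch (the wrapper name is illustrative):

    static inline void cede_with_hint(u8 hint)
    {
            u8 old_hint = get_lppaca()->cede_latency_hint;

            get_lppaca()->cede_latency_hint = hint; /* consumed by H_CEDE */
            cede_processor();                       /* issues the H_CEDE hcall */
            get_lppaca()->cede_latency_hint = old_hint;
    }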
2020-07-30  selftests/powerpc: Fix online CPU selection  (Sandipan Das)
The size of the CPU affinity mask must be large enough for systems with a very large number of CPUs. Otherwise, tests which try to determine the first online CPU by calling sched_getaffinity() will fail. This makes sure that the size of the allocated affinity mask is dependent on the number of CPUs as reported by get_nprocs_conf(). Fixes: 3752e453f6ba ("selftests/powerpc: Add tests of PMU EBBs") Reported-by: Shirisha Ganta <shiganta@in.ibm.com> Signed-off-by: Sandipan Das <sandipan@linux.ibm.com> Reviewed-by: Kamalesh Babulal <kamalesh@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/a408c4b8e9a23bb39b539417a21eb0ff47bb5127.1596084858.git.sandipan@linux.ibm.com
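The usual way to size the mask dynamically is CPU_ALLOC() driven by get_nprocs_conf(); the standalone sketch below shows the approach (it is not the exact selftest helper):

    #define _GNU_SOURCE
    #include <sched.h>
    #include <sys/sysinfo.h>

    static int pick_online_cpu(void)
    {
            int ncpus = get_nprocs_conf();
            size_t setsize = CPU_ALLOC_SIZE(ncpus);
            cpu_set_t *mask = CPU_ALLOC(ncpus);
            int cpu = -1, i;

            if (!mask)
                    return -1;

            CPU_ZERO_S(setsize, mask);
            if (sched_getaffinity(0, setsize, mask) == 0) {
                    for (i = 0; i < ncpus; i++) {
                            if (CPU_ISSET_S(i, setsize, mask)) {
                                    cpu = i;        /* first CPU we may run on */
                                    break;
                            }
                    }
            }

            CPU_FREE(mask);
            return cpu;
    }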
2020-07-30  powerpc/perf: Consolidate perf_callchain_user_[64|32]()  (Michal Suchanek)
perf_callchain_user_64() and perf_callchain_user_32() are nearly identical. Consolidate into one function with thin wrappers. Suggested-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michal Suchanek <msuchanek@suse.de> [mpe: Adapt to copy_from_user_nofault(), minor formatting] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200406210022.32265-1-msuchanek@suse.de
2020-07-30  powerpc/pseries/hotplug-cpu: Remove double free in error path  (Nathan Lynch)
In the unlikely event that the device tree lacks a /cpus node, find_dlpar_cpus_to_add() oddly frees the cpu_drcs buffer it has been passed before returning an error. Its only caller also frees the buffer on error. Remove the less conventional kfree() of a caller-supplied buffer from find_dlpar_cpus_to_add(). Fixes: 90edf184b9b7 ("powerpc/pseries: Add CPU dlpar add functionality") Signed-off-by: Nathan Lynch <nathanl@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20190919231633.1344-1-nathanl@linux.ibm.com
2020-07-30  powerpc/pseries/mobility: Add pr_debug() for device tree changes  (Nathan Lynch)
When investigating issues with partition migration or resource reassignments it is helpful to have a log of which nodes and properties in the device tree have changed. Use pr_debug() so it's easy to enable these at runtime with the dynamic debug facility. Signed-off-by: Nathan Lynch <nathanl@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20190627053044.9238-3-nathanl@linux.ibm.com
2020-07-30  powerpc/pseries/mobility: Set pr_fmt()  (Nathan Lynch)
The pr_err() callsites in mobility.c already manually include a "mobility:" prefix, let's make it official for the benefit of messages to be added later. Signed-off-by: Nathan Lynch <nathanl@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20190627053044.9238-2-nathanl@linux.ibm.com
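The idiom is a single define placed before the first #include in the file; everything printed through pr_err()/pr_debug() then picks up the prefix automatically:

    /* Must come before any #include that pulls in printk.h */
    #define pr_fmt(fmt) "mobility: " fmt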
2020-07-30  powerpc/cacheinfo: Warn if cache object chain becomes unordered  (Nathan Lynch)
This can catch cases where the device tree has gotten mishandled into an inconsistent state at runtime, e.g. the cache nodes for both the source and the destination are present after a migration. Signed-off-by: Nathan Lynch <nathanl@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20190627051537.7298-5-nathanl@linux.ibm.com
2020-07-30  powerpc/cacheinfo: Improve diagnostics about malformed cache lists  (Nathan Lynch)
If we have a bug which causes us to start with the wrong kind of OF node when linking up the cache tree, it's helpful for debugging to print information about what we found vs what we expected. So replace uses of WARN_ON_ONCE with WARN_ONCE, which lets us include an informative message instead of a contentless backtrace. Signed-off-by: Nathan Lynch <nathanl@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20190627051537.7298-4-nathanl@linux.ibm.com
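The difference in practice looks like the sketch below; the condition and node variable are hypothetical and only illustrate the WARN_ONCE form:

    /* Before: one-shot backtrace, but no hint about what was found */
    WARN_ON_ONCE(!node_is_cache_node(node));

    /* After: same one-shot behaviour, plus a message describing the node */
    WARN_ONCE(!node_is_cache_node(node),
              "cacheinfo: unexpected non-cache node %pOF in cache list\n", node);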
2020-07-30  powerpc/cacheinfo: Use name@unit instead of full DT path in debug messages  (Nathan Lynch)
We know that every OF node we deal with in this code is under /cpus, so we can make the debug messages a little less verbose without losing information. E.g.
 cacheinfo: creating L1 dcache and icache for /cpus/PowerPC,POWER8@0
 cacheinfo: creating L2 ucache for /cpus/l2-cache@2006
 cacheinfo: creating L3 ucache for /cpus/l3-cache@3106
becomes
 cacheinfo: creating L1 dcache and icache for PowerPC,POWER8@0
 cacheinfo: creating L2 ucache for l2-cache@2006
 cacheinfo: creating L3 ucache for l3-cache@3106
Replace all '%pOF' specifiers with '%pOFP'. Signed-off-by: Nathan Lynch <nathanl@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20190627051537.7298-3-nathanl@linux.ibm.com
2020-07-30  arm64/mm: save memory access in check_and_switch_context() fast switch path  (Pingfan Liu)
On arm64, smp_processor_id() reads a per-cpu `cpu_number` variable, using the per-cpu offset stored in the tpidr_el1 system register. In some cases we generate a per-cpu address with a sequence like:
 cpu_ptr = &per_cpu(ptr, smp_processor_id());
which potentially incurs a cache miss for both `cpu_number` and the in-memory `__per_cpu_offset` array. This can be written more optimally as:
 cpu_ptr = this_cpu_ptr(ptr);
which only needs the offset from tpidr_el1 and does not need to load from memory. The following two test cases show a small performance improvement measured on a 46-CPU Qualcomm machine with a 5.8.0-rc4 kernel.
Test 1: (about 0.3% improvement)
 #cat b.sh
 make clean && make all -j138
 #perf stat --repeat 10 --null --sync sh b.sh
 - before this patch: Performance counter stats for 'sh b.sh' (10 runs): 298.62 +- 1.86 seconds time elapsed ( +- 0.62% )
 - after this patch: Performance counter stats for 'sh b.sh' (10 runs): 297.734 +- 0.954 seconds time elapsed ( +- 0.32% )
Test 2: (about 1.69% improvement)
 'perf stat -r 10 perf bench sched messaging', then sum the total time of 'sched/messaging' manually.
 - before this patch: total 0.707 sec for 10 times
 - after this patch: total 0.695 sec for 10 times
Signed-off-by: Pingfan Liu <kernelfans@gmail.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Cc: Will Deacon <will@kernel.org> Cc: Steve Capper <steve.capper@arm.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Vladimir Murzin <vladimir.murzin@arm.com> Cc: Jean-Philippe Brucker <jean-philippe@linaro.org> Link: https://lore.kernel.org/r/1594389852-19949-1-git-send-email-kernelfans@gmail.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-07-30  irqchip/loongson-pch-pic: Fix the misused irq flow handler  (Huacai Chen)
Loongson PCH PIC is a standard level-triggered PIC, and it needs to clear the interrupt during unmask. Fixes: ef8c01eb64ca6719da449dab0 ("irqchip: Add Loongson PCH PIC controller") Signed-off-by: Huacai Chen <chenhc@lemote.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Tested-by: Jiaxun Yang <jiaxun.yang@flygoat.com> Link: https://lore.kernel.org/r/1596099090-23516-6-git-send-email-chenhc@lemote.com
2020-07-30  irqchip/loongson-htvec: Support 8 groups of HT vectors  (Huacai Chen)
The original version can only be used by old Loongson-3, which uses only 4 groups of HT vectors. Now Loongson-3A R4 can use 8 groups, so improve the driver to support all 8 groups. Fixes: 818e915fbac518e8c78e1877a ("irqchip: Add Loongson HyperTransport Vector support") Signed-off-by: Huacai Chen <chenhc@lemote.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Reviewed-by: Jiaxun Yang <jiaxun.yang@flygoat.com> Link: https://lore.kernel.org/r/1596099090-23516-5-git-send-email-chenhc@lemote.com
2020-07-30  irqchip/loongson-liointc: Fix misuse of gc->mask_cache  (Huacai Chen)
In gc->mask_cache bits, 1 means enabled and 0 means disabled, but the loongson-liointc driver misuses mask_cache by inverting its meaning. This patch fixes the bug and updates the comments as well. Fixes: dbb152267908c4b2c3639492a ("irqchip: Add driver for Loongson I/O Local Interrupt Controller") Signed-off-by: Huacai Chen <chenhc@lemote.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Reviewed-by: Jiaxun Yang <jiaxun.yang@flygoat.com> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/1596099090-23516-4-git-send-email-chenhc@lemote.com
2020-07-30  dt-bindings: interrupt-controller: Update Loongson HTVEC description  (Huacai Chen)
Loongson HTVEC supports a maximum of 8 parent interrupts, so update the maxItems description. Signed-off-by: Huacai Chen <chenhc@lemote.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Reviewed-by: Jiaxun Yang <jiaxun.yang@flygoat.com> Link: https://lore.kernel.org/r/1596099090-23516-2-git-send-email-chenhc@lemote.com
2020-07-30  arm64: sigcontext.h: delete duplicated word  (Randy Dunlap)
Drop the repeated word "the". Signed-off-by: Randy Dunlap <rdunlap@infradead.org> Cc: Will Deacon <will@kernel.org> Cc: linux-arm-kernel@lists.infradead.org Link: https://lore.kernel.org/r/20200726003207.20253-4-rdunlap@infradead.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-07-30  arm64: ptrace.h: delete duplicated word  (Randy Dunlap)
Drop the repeated word "the". Signed-off-by: Randy Dunlap <rdunlap@infradead.org> Cc: Will Deacon <will@kernel.org> Cc: linux-arm-kernel@lists.infradead.org Link: https://lore.kernel.org/r/20200726003207.20253-3-rdunlap@infradead.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-07-30  arm64: pgtable-hwdef.h: delete duplicated words  (Randy Dunlap)
Drop the repeated words "at" and "the". Signed-off-by: Randy Dunlap <rdunlap@infradead.org> Cc: Will Deacon <will@kernel.org> Cc: linux-arm-kernel@lists.infradead.org Link: https://lore.kernel.org/r/20200726003207.20253-2-rdunlap@infradead.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-07-30  x86/kvm: Use __xfer_to_guest_mode_work_pending() in kvm_run_vcpu()  (Thomas Gleixner)
The comments explicitly explain that the work-flags check and handling in kvm_run_vcpu() is done with preemption and interrupts enabled, as KVM invokes the check again right before entering guest mode with interrupts disabled, which guarantees that the work flags are observed and handled before VMENTER. Nevertheless the flag-pending check in kvm_run_vcpu() uses the helper variant which requires interrupts to be disabled, triggering an instant lockdep splat. This was caught in testing before and then not fixed up in the patch before applying. :( Use the relaxed and intentionally racy __xfer_to_guest_mode_work_pending() instead. Fixes: 72c3c0fe54a3 ("x86/kvm: Use generic xfer to guest work function") Reported-by: Qian Cai <cai@lca.pw> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lkml.kernel.org/r/87bljxa2sa.fsf@nanos.tec.linutronix.de
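With the fix, the preemptible check in the run loop uses the double-underscore variant and leaves the authoritative, interrupts-disabled check to the entry code; roughly (a sketch of the call pattern, not the exact surrounding loop):

    if (__xfer_to_guest_mode_work_pending()) {
            r = xfer_to_guest_mode_handle_work(vcpu);
            if (r)
                    return r;
    }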
2020-07-30Revert "Bluetooth: btusb: Disable runtime suspend on Realtek devices"Abhishek Pandit-Subedi
This reverts commit 7ecacafc240638148567742cca41aa7144b4fe1e. Testing this change on a board with RTL8822CE, I found that enabling autosuspend has no effect on the stability of the system. The board continued working after autosuspend, suspend and reboot. The original commit makes it impossible to enable autosuspend on working systems so it should be reverted. Disabling autosuspend should be done via module param or udev in userspace instead. Signed-off-by: Abhishek Pandit-Subedi <abhishekpandit@chromium.org> Acked-by: Kai-Heng Feng <kai.heng.feng@canonical.com> Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
2020-07-30  Bluetooth: Enable controller RPA resolution using Experimental feature  (Sathish Narasimman)
This patch adds support for enabling the use of RPA address resolution via the experimental feature mgmt command. Signed-off-by: Sathish Narasimman <sathish.narasimman@intel.com> Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
2020-07-30  mac80211: remove STA txq pending airtime underflow warning  (Felix Fietkau)
This warning can trigger if there is a mismatch between frames that were sent with the sta pointer set vs tx status frames reported for the sta address. This can happen due to race conditions on re-creating stations, or even in the case of .sta_add/remove being used instead of .sta_state, which can cause frames to be sent to a station that has not been uploaded yet. If there is an actual underflow issue, it should show up in the device airtime warning below, so it is better to remove this one. Signed-off-by: Felix Fietkau <nbd@nbd.name> Link: https://lore.kernel.org/r/20200725084533.13829-1-nbd@nbd.name Signed-off-by: Johannes Berg <johannes.berg@intel.com>
2020-07-30  mac80211: Fix bug in Tx ack status reporting in 802.3 xmit path  (Vasanthakumar Thiagarajan)
The ack_frame id allocated from local->ack_status_frames is not actually stored in the tx_info in the 802.3 Tx path. Due to this, the tx ack status is not reported and the ack_frame id is not freed for buffers requiring tx ack status. Also, move the memset-to-0 of tx_info before the IEEE80211_TX_CTL_REQ_TX_STATUS flag assignment. Fixes: 50ff477a8639 ("mac80211: add 802.11 encapsulation offloading support") Signed-off-by: Vasanthakumar Thiagarajan <vthiagar@codeaurora.org> Link: https://lore.kernel.org/r/1595427617-1713-1-git-send-email-vthiagar@codeaurora.org Signed-off-by: Johannes Berg <johannes.berg@intel.com>