2019-07-05RDMA/counter: Add set/clear per-port auto mode supportMark Zhang
Add an API to support set/clear per-port auto mode. Signed-off-by: Mark Zhang <markz@mellanox.com> Reviewed-by: Majd Dibbiny <majd@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2019-07-05RDMA/restrack: Make is_visible_in_pid_ns() as an APIMark Zhang
Remove is_visible_in_pid_ns() from nldev.c and make it a restrack API, so that it can be taken advantage of by other parts, like the counter. Signed-off-by: Mark Zhang <markz@mellanox.com> Reviewed-by: Majd Dibbiny <majd@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2019-07-05RDMA/restrack: Add an API to attach a task to a resourceMark Zhang
Add rdma_restrack_attach_task(), which is able to attach a task other than "current" to a resource. Signed-off-by: Mark Zhang <markz@mellanox.com> Reviewed-by: Majd Dibbiny <majd@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2019-07-05RDMA/restrack: Introduce statistic counterMark Zhang
Introduce a statistic counter as a new resource. It allows a user to monitor specific objects (e.g., QPs) by binding them to a counter. In some cases a user counter resource is created with a task other than "current", because its creation is done as part of an rdmatool call. Signed-off-by: Mark Zhang <markz@mellanox.com> Reviewed-by: Majd Dibbiny <majd@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2019-07-05Merge mlx5-next into rdma for-nextJason Gunthorpe
From git://git.kernel.org/pub/scm/linux/kernel/git/mellanox/linux Required for dependencies in the next patches. * mlx5-next: net/mlx5: Add rts2rts_qp_counters_set_id field in hca cap net/mlx5: Properly name the generic WQE control field net/mlx5: Introduce TLS TX offload hardware bits and structures net/mlx5: Refactor mlx5_esw_query_functions for modularity net/mlx5: E-Switch prepare functions change handler to be modular net/mlx5: Introduce and use mlx5_eswitch_get_total_vports()
2019-07-05PCI: tegra: Enable Relaxed Ordering only for Tegra20 & Tegra30Vidya Sagar
The PCI Tegra controller conversion to a device tree configurable driver in commit d1523b52bff3 ("PCI: tegra: Move PCIe driver to drivers/pci/host") implied that code for the driver can be compiled in for a kernel supporting multiple platforms. Unfortunately, a blind move of the code did not check that some of the quirks that were applied in arch/arm (e.g. enabling Relaxed Ordering on all PCI devices - since the quirk hook erroneously matches PCI_ANY_ID for both Vendor-ID and Device-ID) are now applied in all kernels that compile the PCI Tegra controller driver, DT and ACPI alike. This is completely wrong, in that enablement of Relaxed Ordering is only required by default on Tegra20 platforms, as described in the Tegra20 Technical Reference Manual (available at https://developer.nvidia.com/embedded/downloads#?search=tegra%202 in Section 34.1, where it is mentioned that the Relaxed Ordering bit needs to be enabled in its root ports to avoid a hardware deadlock), and on Tegra30 platforms for the same reasons (unfortunately not documented in the TRM). There is no other strict requirement to enable Relaxed Ordering on PCI devices on any other Tegra platform or PCI host bridge driver. Fix this quite upsetting situation by limiting the vendor and device IDs to which the Relaxed Ordering quirk applies to the root ports in question, reported above. Signed-off-by: Vidya Sagar <vidyas@nvidia.com> [lorenzo.pieralisi@arm.com: completely rewrote the commit log/fixes tag] Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Acked-by: Thierry Reding <treding@nvidia.com>
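A minimal sketch of the device-specific fixup this describes; the NVIDIA device IDs below are illustrative placeholders rather than the exact IDs used by the patch:
    #include <linux/pci.h>

    /* Enable PCIe Relaxed Ordering on one device (sketch only). */
    static void tegra_pcie_relax_enable(struct pci_dev *dev)
    {
            pcie_capability_set_word(dev, PCI_EXP_DEVCTL, PCI_EXP_DEVCTL_RELAX_EN);
    }

    /* Instead of matching PCI_ANY_ID/PCI_ANY_ID, match only the Tegra20/Tegra30
     * root ports; 0x0bf0/0x0bf1 are assumed IDs for illustration. */
    DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_NVIDIA, 0x0bf0, tegra_pcie_relax_enable);
    DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_NVIDIA, 0x0bf1, tegra_pcie_relax_enable);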
2019-07-05PCI: tegra: Change link retry log level to debugManikanta Maddireddy
The driver checks for link up three times before giving up, and each retry attempt is printed as an error. Letting users know that the PCIe link is down and in the process of being brought up again is debug information, not an error condition. Signed-off-by: Manikanta Maddireddy <mmaddireddy@nvidia.com> Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Acked-by: Thierry Reding <treding@nvidia.com>
2019-07-05PCI: tegra: Add support for GPIO based PERST#Manikanta Maddireddy
Tegra PCIe has a fixed per-port SFIO line to signal PERST#, which can be controlled by the AFI port register. However, if a platform routes a different GPIO to the PCIe slot, the port register cannot control it. Add support for a GPIO based PERST# signal for such platforms. The GPIO number comes from the per-port PCIe device tree node. PCIe driver probe doesn't fail if the per-port "reset-gpios" property is not populated, so platforms that require this workaround must make sure that the DT property is not missed in the corresponding device tree. Link: https://lore.kernel.org/linux-pci/20190705084850.30777-1-jonathanh@nvidia.com/ Signed-off-by: Manikanta Maddireddy <mmaddireddy@nvidia.com> [lorenzo.pieralisi@arm.com: squashed in fix in Link] Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Acked-by: Thierry Reding <treding@nvidia.com>
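A rough sketch of the GPIO side of this, simplifying the optional per-port "reset-gpios" lookup to a device-level devm_gpiod_get_optional() call (variable names are illustrative):
    #include <linux/gpio/consumer.h>
    #include <linux/delay.h>

    struct gpio_desc *pex_rst;      /* hypothetical per-port reset line */

    /* "reset" maps to the DT property "reset-gpios"; it is optional, so boards
     * without the property keep using the AFI port register for PERST#. */
    pex_rst = devm_gpiod_get_optional(dev, "reset", GPIOD_OUT_LOW);
    if (IS_ERR(pex_rst))
            return PTR_ERR(pex_rst);

    if (pex_rst) {
            gpiod_set_value_cansleep(pex_rst, 1);   /* assert PERST# */
            usleep_range(1000, 2000);
            gpiod_set_value_cansleep(pex_rst, 0);   /* deassert PERST# */
    }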
2019-07-05KVM: arm64: Migrate _elx sysreg accessors to msr_s/mrs_sDave Martin
Currently, the {read,write}_sysreg_el*() accessors for accessing particular ELs' sysregs in the presence of VHE rely on some local hacks and define their system register encodings in a way that is inconsistent with the core definitions in <asm/sysreg.h>. As a result, it is necessary to add duplicate definitions for any system register that already needs a definition in sysreg.h for other reasons. This is a bit of a maintenance headache, and the reasons for the _el*() accessors working the way they do are a bit historical. This patch gets rid of the shadow sysreg definitions in <asm/kvm_hyp.h>, converts the _el*() accessors to use the core __msr_s/__mrs_s interface, and converts all call sites to use the standard sysreg #define names (i.e., upper case, with SYS_ prefix). This patch will conflict heavily anyway, so the opportunity to clean up some bad whitespace in the context of the changes is taken. The change exposes a few system registers that have no sysreg.h definition, due to msr_s/mrs_s being used in place of msr/mrs: additions are made in order to fill in the gaps. Signed-off-by: Dave Martin <Dave.Martin@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Christoffer Dall <christoffer.dall@arm.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Will Deacon <will.deacon@arm.com> Link: https://www.spinics.net/lists/kvm-arm/msg31717.html [Rebased to v4.21-rc1] Signed-off-by: Sudeep Holla <sudeep.holla@arm.com> [Rebased to v5.2-rc5, changelog updates] Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2019-07-05KVM: doc: Add API documentation on the KVM_REG_ARM_WORKAROUNDS registerAndre Przywara
Add documentation for the newly defined firmware registers to save and restore any vulnerability mitigation status. Signed-off-by: Andre Przywara <andre.przywara@arm.com> Reviewed-by: Steven Price <steven.price@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2019-07-05KVM: arm/arm64: Add save/restore support for firmware workaround stateAndre Przywara
KVM implements the firmware interface for mitigating cache speculation vulnerabilities. Guests may use this interface to ensure mitigation is active. If we want to migrate such a guest to a host with a different support level for those workarounds, migration might need to fail, to ensure that critical guests don't lose their protection. Introduce a way for userland to save and restore the workarounds state. On restoring we do checks that make sure we don't downgrade our mitigation level. Signed-off-by: Andre Przywara <andre.przywara@arm.com> Reviewed-by: Eric Auger <eric.auger@redhat.com> Reviewed-by: Steven Price <steven.price@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
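From userspace, saving and restoring such a firmware pseudo-register goes through the usual one-reg ioctls; a hedged sketch, where the register ID macro and the vCPU file descriptors are assumed names:
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    /* Sketch: read the workaround state on the source vCPU, write it back on
     * the destination vCPU; the restore side rejects downgrades. */
    uint64_t val;
    struct kvm_one_reg reg = {
            .id   = KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1,    /* assumed register ID */
            .addr = (uint64_t)&val,
    };

    if (ioctl(src_vcpu_fd, KVM_GET_ONE_REG, &reg))
            perror("KVM_GET_ONE_REG");
    if (ioctl(dst_vcpu_fd, KVM_SET_ONE_REG, &reg))
            perror("KVM_SET_ONE_REG");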
2019-07-05arm64: KVM: Propagate full Spectre v2 workaround state to KVM guestsAndre Przywara
Recent commits added the explicit notion of "workaround not required" to the state of the Spectre v2 (aka. BP_HARDENING) workaround, where we just had "needed" and "unknown" before. Export this knowledge to the rest of the kernel and enhance the existing kvm_arm_harden_branch_predictor() to report this new state as well. Export this new state to guests when they use KVM's firmware interface emulation. Signed-off-by: Andre Przywara <andre.przywara@arm.com> Reviewed-by: Steven Price <steven.price@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2019-07-05KVM: arm/arm64: Support chained PMU countersAndrew Murray
ARMv8 provides support for chained PMU counters: when an event type of 0x001E is set for an odd-numbered counter, that counter will increment by one for each overflow of the preceding even-numbered counter. Let's emulate this in KVM by creating a 64-bit perf counter when a user chains two emulated counters together. For chained events we only support generating an overflow interrupt on the high counter. We use the attributes of the low counter to determine the attributes of the perf event. Suggested-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Andrew Murray <andrew.murray@arm.com> Reviewed-by: Julien Thierry <julien.thierry@arm.com> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
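Conceptually the chained pair can be backed by a single 64-bit perf event; a hedged sketch, where low_counter_eventsel, counter_value, chained_overflow_handler and pmc are illustrative names:
    #include <linux/perf_event.h>

    /* One 64-bit perf event backs the {even, odd} chained pair: the event
     * type comes from the low (even) counter, and the overflow interrupt is
     * only delivered for the high (odd) counter. */
    struct perf_event_attr attr = { };
    struct perf_event *event;

    attr.type          = PERF_TYPE_RAW;
    attr.size          = sizeof(attr);
    attr.config        = low_counter_eventsel;      /* low counter's event type */
    attr.sample_period = (~0ULL) - counter_value;   /* full 64-bit period */

    event = perf_event_create_kernel_counter(&attr, -1, current,
                                             chained_overflow_handler, pmc);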
2019-07-05KVM: arm/arm64: Remove pmc->bitmaskAndrew Murray
We currently use pmc->bitmask to determine the width of the pmc - however it's superfluous as the pmc index already describes if the pmc is a cycle counter or event counter. The architecture clearly describes the widths of these counters. Let's remove the bitmask to simplify the code. Signed-off-by: Andrew Murray <andrew.murray@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2019-07-05KVM: arm/arm64: Re-create event when setting counter valueAndrew Murray
The perf event sample_period is currently set based upon the current counter value, when PMXEVTYPER is written to and the perf event is created. However the user may choose to write the type before the counter value, in which case sample_period will be set incorrectly. Let's instead decouple event creation from PMXEVTYPER and (re)create the event in either situation. Signed-off-by: Andrew Murray <andrew.murray@arm.com> Reviewed-by: Julien Thierry <julien.thierry@arm.com> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2019-07-05KVM: arm/arm64: Extract duplicated code to own functionAndrew Murray
Let's reduce code duplication by extracting common code to its own function. Signed-off-by: Andrew Murray <andrew.murray@arm.com> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2019-07-05KVM: arm/arm64: Rename kvm_pmu_{enable/disable}_counter functionsAndrew Murray
The kvm_pmu_{enable/disable}_counter functions can enable/disable multiple counters at once as they operate on a bitmask. Let's make this clearer by renaming the function. Suggested-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Andrew Murray <andrew.murray@arm.com> Reviewed-by: Julien Thierry <julien.thierry@arm.com> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2019-07-05KVM: LAPIC: ARBPRI is a reserved register for x2APICPaolo Bonzini
kvm-unit-tests were adjusted to match bare metal behavior, but KVM itself was not doing what bare metal does; fix that. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-07-05KVM: arm64: Skip more of the SError vaxorcismJames Morse
During __guest_exit() we need to consume any SError left pending by the guest so it doesn't contaminate the host. With v8.2 we use the ESB-instruction. For systems without v8.2, we use dsb+isb and unmask SError. We do this on every guest exit. Use the same dsb+isr_el1 trick: this lets us know if an SError is pending after the dsb, allowing us to skip the isb and self-synchronising PSTATE write if it's not. This means SError remains masked during KVM's world-switch, so any SError that occurs during this time is reported by the host, instead of causing a hyp-panic. As we're benchmarking this code let's polish the layout. If you give gcc likely()/unlikely() hints in an if() condition, it shuffles the generated assembly so that the likely case is immediately after the branch. Let's do the same here. Signed-off-by: James Morse <james.morse@arm.com> Changes since v2: * Added isb after the dsb to prevent an early read Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2019-07-05KVM: arm64: Re-mask SError after the one instruction windowJames Morse
KVM consumes any SErrors that were pending during guest exit with a dsb/isb and by unmasking SError. It currently leaves SError unmasked for the rest of world-switch. This means any SError that occurs during this part of world-switch will cause a hyp-panic. We'd much prefer it to remain pending until we return to the host. Signed-off-by: James Morse <james.morse@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2019-07-05arm64: Update silicon-errata.txt for Neoverse-N1 #1349291James Morse
A Neoverse-N1 affected by #1349291 may report an Uncontained RAS Error as Unrecoverable. The kernel's architecture code already considers Unrecoverable errors as fatal, as without kernel-first support no further error handling is possible. Now that KVM attributes SErrors to the host/guest more precisely, the host's architecture code will always handle host errors that become pending during world-switch. Errors misclassified by this erratum that affected the guest will be re-injected to the guest as an implementation-defined SError, which can be uncontained. Until kernel-first support is implemented, no workaround is needed for this issue. Signed-off-by: James Morse <james.morse@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2019-07-05KVM: arm64: Defer guest entry when an asynchronous exception is pendingJames Morse
SErrors that occur during world-switch's entry to the guest will be accounted to the guest, as the exception is masked until we enter the guest... but we want to attribute the SError as precisely as possible. Reading DISR_EL1 before guest entry requires free registers, and using ESB+DISR_EL1 to consume and read back the ESR would leave KVM holding a host SError... We would rather leave the SError pending and let the host take it once we exit world-switch. To do this, we need to defer guest entry if an SError is pending. Read the ISR to see if an SError (or an IRQ) is pending. If so, fake an exit. Place this check between __guest_enter()'s save of the host registers and its restore of the guest's. SErrors that occur between here and the eret into the guest must have affected the guest's registers, which we can naturally attribute to the guest. The dsb is needed to ensure any previous writes have been done before we read ISR_EL1. On systems without the v8.2 RAS extensions this doesn't give us anything, as we can't contain errors and the ESR bits that describe the severity are all implementation-defined. Replace this with a nop for those systems. Signed-off-by: James Morse <james.morse@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
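The idea, sketched in C with inline assembly (the real change lives in the assembly world-switch path; the wrapper function is purely illustrative, and ARM_EXCEPTION_IRQ is used only to show the faked exit):
    #include <linux/kvm_host.h>

    /* Sketch: check for a pending asynchronous exception just before entering
     * the guest, and fake an exit if one is pending so the host takes it. */
    static inline bool async_exception_pending(void)
    {
            unsigned long isr;

            asm volatile("dsb sy" ::: "memory");            /* order prior writes */
            asm volatile("mrs %0, isr_el1" : "=r" (isr));   /* SError/IRQ/FIQ bits */
            return isr != 0;
    }

    static int enter_guest_checked(struct kvm_vcpu *vcpu)   /* illustrative wrapper */
    {
            if (async_exception_pending())
                    return ARM_EXCEPTION_IRQ;               /* defer guest entry */
            /* ... proceed with the real __guest_enter() sequence ... */
            return 0;
    }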
2019-07-05KVM: arm64: Consume pending SError as early as possibleJames Morse
On systems with v8.2 we replace the 'vaxorcism' of guest SErrors with an alternative sequence that uses the ESB-instruction, then reads DISR_EL1. This saves the unmasking and remasking of asynchronous exceptions. We do this after we've saved the guest registers and restored the host's. Any SError that becomes pending due to this will be accounted to the guest, even though it actually occurred during host execution. Move the ESB-instruction as early as possible. Any guest SError will become pending due to this ESB-instruction and then be consumed into DISR_EL1 before the host touches anything. This lets us account for host/guest SErrors precisely on the guest exit exception boundary. Because the ESB-instruction now lands in the preamble section of the vectors, we need to add it to the unpatched indirect vectors too, and to any sequence that may be patched in over the top. The ESB-instruction always lives in the head of the vectors, to be before any memory write, whereas the register-store always lives in the tail. Signed-off-by: James Morse <james.morse@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2019-07-05KVM: arm64: Make indirect vectors preamble behaviour symmetricJames Morse
The KVM indirect vectors support is a little complicated. Different CPUs may use different exception vectors for KVM that are generated at boot. Adding new instructions involves checking that all the possible combinations do the right thing. To make changes here easier to review, let's state what we expect of the preamble: 1. The first vector run must always run the preamble. 2. Patching the head or tail of the vector shouldn't remove preamble instructions. Today this is easy, as we only have one instruction in the preamble. Change the unpatched tail of the indirect vector so that it always runs this, regardless of patching. Signed-off-by: James Morse <james.morse@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2019-07-05KVM: arm64: Abstract the size of the HYP vectors pre-ambleJames Morse
The EL2 vector hardening feature causes KVM to generate vectors for each type of CPU present in the system. The generated sequences already do some of the early guest-exit work (i.e. saving registers). To avoid duplication the generated vectors branch to the original vector just after the preamble. This size is hard coded. Adding new instructions to the HYP vector causes strange side effects, which are difficult to debug as the affected code is patched in at runtime. Add KVM_VECTOR_PREAMBLE to tell kvm_patch_vector_branch() how big the preamble is. The valid_vect macro can then validate this at build time. Reviewed-by: Julien Thierry <julien.thierry@arm.com> Signed-off-by: James Morse <james.morse@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2019-07-05arm64: assembler: Switch ESB-instruction with a vanilla nop if !ARM64_HAS_RASJames Morse
The ESB-instruction is a nop on CPUs that don't implement the RAS extensions. This lets us use it in places like the vectors without having to use alternatives. If someone disables CONFIG_ARM64_RAS_EXTN, this instruction still has its RAS extensions behaviour, but we no longer read DISR_EL1 as this register does depend on alternatives. This could go wrong if we want to synchronize an SError from a KVM guest. On a CPU that has the RAS extensions, but the KConfig option was disabled, we consume the pending SError with no chance of ever reading it. Hide the ESB-instruction behind the CONFIG_ARM64_RAS_EXTN option, outputting a regular nop if the feature has been disabled. Reported-by: Julien Thierry <julien.thierry@arm.com> Signed-off-by: James Morse <james.morse@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2019-07-05KVM nVMX: Check Host Segment Registers and Descriptor Tables on vmentry of nested guestsKrish Sadhukhan
According to section "Checks on Host Segment and Descriptor-Table Registers" in Intel SDM vol 3C, the following checks are performed on vmentry of nested guests: - In the selector field for each of CS, SS, DS, ES, FS, GS and TR, the RPL (bits 1:0) and the TI flag (bit 2) must be 0. - The selector fields for CS and TR cannot be 0000H. - The selector field for SS cannot be 0000H if the "host address-space size" VM-exit control is 0. - On processors that support Intel 64 architecture, the base-address fields for FS, GS and TR must contain canonical addresses. Signed-off-by: Krish Sadhukhan <krish.sadhukhan@oracle.com> Reviewed-by: Karl Heubaum <karl.heubaum@oracle.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
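A hedged sketch of the selector checks described above (the helper name is made up; the vmcs12 field and mask names follow the usual KVM/x86 conventions, and the SS check against the "host address-space size" control is left out):
    /* Host selector checks on vmentry of a nested guest (sketch). */
    static bool nested_host_selectors_valid(struct vmcs12 *vmcs12)
    {
            u16 sels[] = { vmcs12->host_cs_selector, vmcs12->host_ss_selector,
                           vmcs12->host_ds_selector, vmcs12->host_es_selector,
                           vmcs12->host_fs_selector, vmcs12->host_gs_selector,
                           vmcs12->host_tr_selector };
            int i;

            for (i = 0; i < ARRAY_SIZE(sels); i++)
                    if (sels[i] & (SEGMENT_RPL_MASK | SEGMENT_TI_MASK))
                            return false;   /* RPL (bits 1:0) and TI (bit 2) must be 0 */

            if (!vmcs12->host_cs_selector || !vmcs12->host_tr_selector)
                    return false;           /* CS and TR cannot be 0000H */

            return true;
    }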
2019-07-05KVM: nVMX: Stash L1's CR3 in vmcs01.GUEST_CR3 on nested entry w/o EPTSean Christopherson
KVM does not have 100% coverage of VMX consistency checks, i.e. some checks that cause VM-Fail may only be detected by hardware during a nested VM-Entry. In such a case, KVM must restore L1's state to the pre-VM-Enter state as L2's state has already been loaded into KVM's software model. L1's CR3 and PDPTRs in particular are loaded from vmcs01.GUEST_*. But when EPT is disabled, the associated fields hold KVM's shadow values, not L1's "real" values. Fortunately, when EPT is disabled the PDPTRs come from memory, i.e. are not cached in the VMCS. Which leaves CR3 as the sole anomaly. A previously applied workaround to handle CR3 was to force nested early checks if EPT is disabled: commit 2b27924bb1d48 ("KVM: nVMX: always use early vmcs check when EPT is disabled") Forcing nested early checks is undesirable as doing so adds hundreds of cycles to every nested VM-Entry. Rather than take this performance hit, handle CR3 by overwriting vmcs01.GUEST_CR3 with L1's CR3 during nested VM-Entry when EPT is disabled *and* nested early checks are disabled. By stuffing vmcs01.GUEST_CR3, nested_vmx_restore_host_state() will naturally restore the correct vcpu->arch.cr3 from vmcs01.GUEST_CR3. These shenanigans work because nested_vmx_restore_host_state() does a full kvm_mmu_reset_context(), i.e. unloads the current MMU, which guarantees vmcs01.GUEST_CR3 will be rewritten with a new shadow CR3 prior to re-entering L1. vcpu->arch.root_mmu.root_hpa is set to INVALID_PAGE via: nested_vmx_restore_host_state() -> kvm_mmu_reset_context() -> kvm_mmu_unload() -> kvm_mmu_free_roots() kvm_mmu_unload() has WARN_ON(root_hpa != INVALID_PAGE), i.e. we can bank on 'root_hpa == INVALID_PAGE' unless the implementation of kvm_mmu_reset_context() is changed. On the way into L1, VMCS.GUEST_CR3 is guaranteed to be written (on a successful entry) via: vcpu_enter_guest() -> kvm_mmu_reload() -> kvm_mmu_load() -> kvm_mmu_load_cr3() -> vmx_set_cr3() Stuff vmcs01.GUEST_CR3 if and only if nested early checks are disabled as a "late" VM-Fail should never happen in that case (KVM WARNs), and the conditional write avoids the need to restore the correct GUEST_CR3 when nested_vmx_check_vmentry_hw() fails. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Message-Id: <20190607185534.24368-1-sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
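The gist of the change, as a short hedged sketch (the condition and field names approximate the VMX code, not the exact diff):
    /* When EPT is off and nested early checks are off, mirror L1's CR3 into
     * vmcs01.GUEST_CR3 so that a late VM-Fail restores the right value via
     * nested_vmx_restore_host_state(). */
    if (!enable_ept && !nested_early_check)
            vmcs_writel(GUEST_CR3, vcpu->arch.cr3);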
2019-07-05KVM: x86: add tracepoints around __direct_map and FNAME(fetch)Paolo Bonzini
These are useful in debugging shadow paging. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-07-05KVM: x86: change kvm_mmu_page_get_gfn BUG_ON to WARN_ONPaolo Bonzini
Note that in such a case it is quite likely that KVM will BUG_ON in __pte_list_remove when the VM is closed. However, there is no immediate risk of memory corruption in the host so a WARN_ON is enough and it lets you gather traces for debugging. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-07-05KVM: x86: remove now unneeded hugepage gfn adjustmentPaolo Bonzini
After the previous patch, the low bits of the gfn are masked in both FNAME(fetch) and __direct_map, so we do not need to clear them in transparent_hugepage_adjust. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-07-05KVM: x86: make FNAME(fetch) and __direct_map more similarPaolo Bonzini
These two functions are basically doing the same thing through kvm_mmu_get_page, link_shadow_page and mmu_set_spte; yet, for historical reasons, their code looks very different. This patch tries to take the best of each and make them very similar, so that it is easy to understand changes that apply to both of them. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-07-05kvm: x86: Do not release the page inside mmu_set_spte()Junaid Shahid
Release the page at the call-site where it was originally acquired. This makes the exit code cleaner for most call sites, since they do not need to duplicate code between success and the failure label. Signed-off-by: Junaid Shahid <junaids@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-07-05KVM: cpuid: remove has_leaf_count from struct kvm_cpuid_paramPaolo Bonzini
The has_leaf_count member was originally added for KVM's paravirtualization CPUID leaves. However, since then the leaf count _has_ been added to those leaves as well, so we can drop that special case. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-07-05KVM: cpuid: rename do_cpuid_1_entPaolo Bonzini
do_cpuid_1_ent does not do the entire processing for a CPUID entry, it only retrieves the host's values. Rename it to match reality. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-07-05KVM: cpuid: set struct kvm_cpuid_entry2 flags in do_cpuid_1_entPaolo Bonzini
do_cpuid_1_ent is typically called in two places by __do_cpuid_func for CPUID functions that have subleafs. Both places have to set the KVM_CPUID_FLAG_SIGNIFCANT_INDEX. Set that flag, and KVM_CPUID_FLAG_STATEFUL_FUNC as well, directly in do_cpuid_1_ent. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
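Roughly, the helper ends up owning the flag handling itself; a sketch, where cpuid_function_has_subleafs() is a made-up helper standing in for the real per-function logic:
    /* Fill one CPUID entry from the host and set the flags here rather than
     * at every call site (sketch). */
    static void do_cpuid_1_ent(struct kvm_cpuid_entry2 *entry, u32 function, u32 index)
    {
            entry->function = function;
            entry->index = index;
            cpuid_count(entry->function, entry->index,
                        &entry->eax, &entry->ebx, &entry->ecx, &entry->edx);
            entry->flags = 0;

            if (cpuid_function_has_subleafs(function))      /* hypothetical helper */
                    entry->flags |= KVM_CPUID_FLAG_SIGNIFCANT_INDEX;
    }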
2019-07-05KVM: cpuid: extract do_cpuid_7_mask and support multiple subleafsPaolo Bonzini
CPUID function 7 has multiple subleafs. Instead of having nested switch statements, move the logic to filter supported features to a separate function, and call it for each subleaf. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-07-05KVM: cpuid: do_cpuid_ent works on a whole CPUID functionPaolo Bonzini
Rename it as well as __do_cpuid_ent and __do_cpuid_ent_emulated to have "func" in its name, and drop the index parameter which is always 0. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-07-05docs: s390: s390dbf: typos and formatting, update crash commandSteffen Maier
Signed-off-by: Steffen Maier <maier@linux.ibm.com> Acked-by: Christian Borntraeger <borntraeger@de.ibm.com> Message-Id: <1562149189-1417-4-git-send-email-maier@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2019-07-05docs: s390: unify and update s390dbf kdocs at debug.cSteffen Maier
For non-static-inlines, debug.c already had non-compliant function header docs. So move the pure prototype kdocs of ("s390: include/asm/debug.h add kerneldoc markups") from debug.h to debug.c and merge them with the old function docs. Also, I had the impression that kdoc typically is at the implementation in the compile unit rather than at the prototype in the header file. While at it, update the short kdoc description to distinguish the different functions. And a few more consistency cleanups. Added a new kdoc for debug_set_critical() since debug.h comments it as part of the API. Signed-off-by: Steffen Maier <maier@linux.ibm.com> Acked-by: Christian Borntraeger <borntraeger@de.ibm.com> Message-Id: <1562149189-1417-3-git-send-email-maier@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2019-07-05docs: s390: restore important non-kdoc parts of s390dbf.rstSteffen Maier
Complements previous ("s390: include/asm/debug.h add kerneldoc markups") which seemed to have dropped important non-kdoc parts such as user space interface (level, size, flush) as well as views and caution regarding strings in the sprintf view. Signed-off-by: Steffen Maier <maier@linux.ibm.com> Acked-by: Christian Borntraeger <borntraeger@de.ibm.com> Message-Id: <1562149189-1417-2-git-send-email-maier@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2019-07-05Merge tag 'vfio-ccw-20190705' of https://git.kernel.org/pub/scm/linux/kernel/git/kvms390/vfio-ccw into featuresVasily Gorbik
Fix a bug introduced in the refactoring. * tag 'vfio-ccw-20190705' of https://git.kernel.org/pub/scm/linux/kernel/git/kvms390/vfio-ccw: vfio-ccw: Fix the conversion of Format-0 CCWs to Format-1 Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2019-07-05ALSA: cs4281: remove redundant assignment to variable val and remove a gotoColin Ian King
The variable val is assigned a value that is never read, and it is updated later with a new value. The assignment is redundant and can be removed. Also remove a goto statement and a label, replacing them with a break statement. Addresses-Coverity: ("Unused value") Signed-off-by: Colin Ian King <colin.king@canonical.com> Signed-off-by: Takashi Iwai <tiwai@suse.de>
2019-07-05KVM: arm64/sve: Fix vq_present() macro to yield a boolZhang Lei
The original implementation of vq_present() relied on aggressive inlining in order for the compiler to know that the code is correct, due to some const-casting issues. This was causing sparse and clang to complain, while GCC compiled cleanly. Commit 0c529ff789bc addressed this problem, but since vq_present() is no longer a function, there is now no implicit casting of the returned value to the return type (bool). In set_sve_vls(), this uncast bit value is compared against a bool, and so may spuriously compare as unequal when both are nonzero. As a result, KVM may reject valid SVE vector length configurations as invalid, and vice versa. Fix it by forcing the returned value to a bool. Signed-off-by: Zhang Lei <zhang.lei@jp.fujitsu.com> Fixes: 0c529ff789bc ("KVM: arm64: Implement vq_present() as a macro") Signed-off-by: Dave Martin <Dave.Martin@arm.com> [commit message rewrite] Cc: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
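The fix boils down to forcing the macro's result to 0/1; a sketch, where vq_word()/vq_mask() are the assumed helper names that pick the word and bit for a given vector length:
    /* Forcing the result with !! yields a proper bool (0 or 1), so comparing
     * it against another bool in set_sve_vls() behaves as intended. */
    #define vq_present(vqs, vq)     (!!((vqs)[vq_word(vq)] & vq_mask(vq)))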
2019-07-05ALSA: hda: Simplify snd_hdac_refresh_widgets()Takashi Iwai
Along with the recent fix for the races in snd_hdac_refresh_widgets(), it turned out that the instantiation of the widget sysfs entries in snd_hdac_sysfs_reinit() could cause a race. The race itself was already covered later by extending the mutex protection range, in commit 98482377dc72 ("ALSA: hda: Fix widget_mutex incomplete protection"), but this also indicated that the call of *_reinit() is basically superfluous, as the widgets shall be created sooner or later from snd_hdac_device_register(). This patch removes the redundant call of snd_hdac_sysfs_reinit() at first. With this removal, the sysfs argument itself in snd_hdac_refresh_widgets() becomes superfluous, too, because the only sysfs=false case is always with codec->widgets=NULL. So, we drop this redundant argument as well. Signed-off-by: Takashi Iwai <tiwai@suse.de>
2019-07-05dmaengine: qcom: bam_dma: Fix completed descriptors countSricharan R
One space is left unused in the circular FIFO to differentiate the 'full' and 'empty' cases. So take that into account while counting the completed descriptors. Fixes the issue reported here, https://lkml.org/lkml/2019/6/18/669 Cc: stable@vger.kernel.org Reported-by: Srinivas Kandagatla <srinivas.kandagatla@linaro.org> Signed-off-by: Sricharan R <sricharan@codeaurora.org> Tested-by: Srinivas Kandagatla <srinivas.kandagatla@linaro.org> Signed-off-by: Vinod Koul <vkoul@kernel.org>
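The underlying ring-buffer convention, as a generic sketch rather than the driver's exact code:
    /* In a circular FIFO of ring_size slots where one slot is always kept
     * empty to tell "full" apart from "empty", at most ring_size - 1
     * descriptors can be outstanding; the number completed between the
     * consumer's tail and the hardware's head is: */
    static unsigned int fifo_completed(unsigned int head, unsigned int tail,
                                       unsigned int ring_size)
    {
            return (head - tail + ring_size) % ring_size;   /* 0 .. ring_size - 1 */
    }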
2019-07-05dmaengine: imx-sdma: remove BD_INTR for channel0Robin Gong
It is possible for an IRQ triggered by channel0 to be received late, after the clocks have been disabled once the firmware is loaded during sdma probe. If that happens, clearing the interrupt by writing to SDMA_H_INTR won't work and the kernel will hang processing infinite interrupts. Actually, there is no need for an interrupt to be triggered on channel0, since the current code polls SDMA_H_STATSTOP to know when channel0 is done rather than relying on the interrupt; just clear BD_INTR to disable the channel0 interrupt and avoid the above case. This issue was brought in by commit 1d069bfa3c78 ("dmaengine: imx-sdma: ack channel 0 IRQ in the interrupt handler") which didn't take care of the above case. Fixes: 1d069bfa3c78 ("dmaengine: imx-sdma: ack channel 0 IRQ in the interrupt handler") Cc: stable@vger.kernel.org #5.0+ Signed-off-by: Robin Gong <yibin.gong@nxp.com> Reported-by: Sven Van Asbroeck <thesven73@gmail.com> Tested-by: Sven Van Asbroeck <thesven73@gmail.com> Reviewed-by: Michael Olbrich <m.olbrich@pengutronix.de> Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-07-05dmaengine: imx-sdma: fix use-after-free on probe error pathSven Van Asbroeck
If probe() fails anywhere beyond the point where sdma_get_firmware() is called, then a kernel oops may occur. Problematic sequence of events: 1. probe() calls sdma_get_firmware(), which schedules the firmware callback to run when firmware becomes available, using the sdma instance structure as the context 2. probe() encounters an error, which deallocates the sdma instance structure 3. firmware becomes available, firmware callback is called with deallocated sdma instance structure 4. use after free - kernel oops ! Solution: only attempt to load firmware when we're certain that probe() will succeed. This guarantees that the firmware callback's context will remain valid. Note that the remove() path is unaffected by this issue: the firmware loader will increment the driver module's use count, ensuring that the module cannot be unloaded while the firmware callback is pending or running. Signed-off-by: Sven Van Asbroeck <TheSven73@gmail.com> Reviewed-by: Robin Gong <yibin.gong@nxp.com> [vkoul: fixed braces for if condition] Signed-off-by: Vinod Koul <vkoul@kernel.org>
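The shape of the fix, as a hedged sketch: the asynchronous firmware request becomes the very last step of probe(), after which nothing can fail (the sdma/fw_name variables and the elided setup are illustrative, not the full driver code):
    static int sdma_probe(struct platform_device *pdev)
    {
            int ret;

            /* ... allocate and fully initialise the sdma instance here; any
             * failure up to this point simply returns an error, with no
             * firmware callback left pending ... */

            /* Last step: probe() can no longer fail, so the callback's context
             * (the sdma instance) is guaranteed to stay valid. */
            ret = sdma_get_firmware(sdma, fw_name);
            if (ret)
                    dev_warn(&pdev->dev, "failed to get firmware\n");

            return 0;
    }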
2019-07-05dmaengine: jz4780: Fix an endian bug in IRQ handlerDan Carpenter
The "pending" variable was a u32 but we cast it to an unsigned long pointer when we do the for_each_set_bit() loop. The problem is that on big endian 64bit systems that results in an out of bounds read. Fixes: 4e4106f5e942 ("dmaengine: jz4780: Fix transfers being ACKed too soon") Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Vinod Koul <vkoul@kernel.org>
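The essence of the bug and the fix, sketched generically (the register read and its offset are illustrative, not the driver's actual accessor):
    #include <linux/bitops.h>
    #include <linux/io.h>

    /* Buggy pattern: 'pending' is a u32, but casting its address to
     * (unsigned long *) makes for_each_set_bit() read 8 bytes on 64-bit
     * kernels; on a big-endian 64-bit system the valid bits then land in
     * the wrong half of the word. */
    u32 pending = readl(base + IRQ_PENDING_REG);    /* illustrative register */

    /* Fix: give for_each_set_bit() a real unsigned long to walk. */
    unsigned long pending_bits = pending;
    unsigned int i;

    for_each_set_bit(i, &pending_bits, 32) {
            /* handle the interrupt for DMA channel i */
    }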
2019-07-05ALSA: asihpi: Remove unneeded variable changeHariprasad Kelam
This patch fixes the below issue reported by coccicheck: sound/pci/asihpi/asihpi.c:1558:5-11: Unneeded variable: "change". Return "1" on line 1564 Signed-off-by: Hariprasad Kelam <hariprasad.kelam@gmail.com> Signed-off-by: Takashi Iwai <tiwai@suse.de>