path: root/arch
2020-05-13  KVM: x86: Save L1 TSC offset in 'struct kvm_vcpu_arch'  (Sean Christopherson)
Save L1's TSC offset in 'struct kvm_vcpu_arch' and drop the kvm_x86_ops hook read_l1_tsc_offset(). This avoids a retpoline (when configured) when reading L1's effective TSC, which is done at least once on every VM-Exit.
No functional change intended.
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Message-Id: <20200502043234.12481-2-sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

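A minimal sketch of the idea (structure and helper names here are illustrative, not the exact upstream diff): cache L1's TSC offset as a plain field so that the hot path reads memory directly instead of bouncing through a retpoline-guarded kvm_x86_ops callback.

    /* Illustrative only: field cached by the vendor code whenever it
     * updates the TSC offset for L1. */
    struct vcpu_arch_sketch {
            u64 l1_tsc_offset;
    };

    /* Hot path: a direct field read instead of an indirect callback. */
    static u64 read_l1_tsc_sketch(struct vcpu_arch_sketch *arch, u64 host_tsc)
    {
            return host_tsc + arch->l1_tsc_offset;
    }
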
2020-05-13  KVM: nVMX: Skip IBPB when temporarily switching between vmcs01 and vmcs02  (Sean Christopherson)
Skip the Indirect Branch Prediction Barrier that is triggered on a VMCS switch when temporarily loading vmcs02 to synchronize it to vmcs12, i.e. give copy_vmcs02_to_vmcs12_rare() the same treatment as vmx_switch_vmcs().
Make vmx_vcpu_load() static now that it's only referenced within vmx.c.
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Message-Id: <20200506235850.22600-3-sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2020-05-13  KVM: nVMX: Skip IBPB when switching between vmcs01 and vmcs02  (Sean Christopherson)
Skip the Indirect Branch Prediction Barrier that is triggered on a VMCS switch when running with spectre_v2_user=on/auto if the switch is between two VMCSes in the same guest, i.e. between vmcs01 and vmcs02. The IBPB is intended to prevent one guest from attacking another, which is unnecessary in the nested case as it's the same guest from KVM's perspective.
This all but eliminates the overhead observed for nested VMX transitions when running with CONFIG_RETPOLINE=y and spectre_v2_user=on/auto, which can be significant, e.g. roughly 3x on current systems.
Reported-by: Alexander Graf <graf@amazon.com>
Cc: KarimAllah Raslan <karahmed@amazon.de>
Cc: stable@vger.kernel.org
Fixes: 15d45071523d ("KVM/x86: Add IBPB support")
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Message-Id: <20200501163117.4655-1-sean.j.christopherson@intel.com>
[Invert direction of bool argument. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

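A simplified sketch of the guard described above (the upstream change actually threads a "buddy" VMCS through the load path; the same_guest flag below is an assumption for illustration):

    static void load_vmcs_sketch(struct loaded_vmcs *prev,
                                 struct loaded_vmcs *next, bool same_guest)
    {
            if (prev == next)
                    return;
            vmcs_load(next->vmcs);
            /* The IBPB guards against guest-vs-guest attacks; switching
             * between vmcs01 and vmcs02 stays within one guest, so skip it. */
            if (!same_guest)
                    indirect_branch_prediction_barrier();
    }
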
2020-05-13  KVM: VMX: Use accessor to read vmcs.INTR_INFO when handling exception  (Sean Christopherson)
Use vmx_get_intr_info() when grabbing the cached vmcs.INTR_INFO in handle_exception_nmi() to ensure the cache isn't stale. Bypassing the caching accessor doesn't cause any known issues as the cache is always refreshed by handle_exception_nmi_irqoff(), but the whole point of adding the proper caching mechanism was to avoid such dependencies.
Fixes: 8791585837f6 ("KVM: VMX: Cache vmcs.EXIT_INTR_INFO using arch avail_reg flags")
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Message-Id: <20200427171837.22613-1-sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2020-05-13  KVM: x86: handle wrap around 32-bit address space  (Paolo Bonzini)
KVM is not handling the case where EIP wraps around the 32-bit address space (that is, outside long mode). The fix is needed both in vmx.c and in emulate.c. SVM with NRIPS is okay, but it can still print an error to dmesg due to integer overflow.
Reported-by: Nick Peterson <everdox@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2020-05-13  kvm: Replace vcpu->swait with rcuwait  (Davidlohr Bueso)
The use of any sort of waitqueue (simple or regular) for wait/waking vcpus has always been overkill and semantically wrong. Because this is per-vcpu (which is blocked) there is only ever a single waiting vcpu, thus no need for any sort of queue. As such, make use of the rcuwait primitive, with the following considerations:
- rcuwait already provides the proper barriers that serialize concurrent waiter and waker.
- Task wakeup is done in an RCU read-side critical section, with a stable task pointer.
- Because there is no concurrency among waiters, we need not worry about rcuwait_wait_event() calls corrupting the wait->task. As a consequence, this saves the locking done in swait when modifying the queue.
This also applies to the per-vcore wait for powerpc kvm-hv. The x86 tscdeadline_latency test mentioned in 8577370fb0cb ("KVM: Use simple waitqueue for vcpu->wq") shows that, on avg, latency is reduced by around 15-20% with this change.
Cc: Paul Mackerras <paulus@ozlabs.org>
Cc: kvmarm@lists.cs.columbia.edu
Cc: linux-mips@vger.kernel.org
Reviewed-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
Message-Id: <20200424054837.5138-6-dave@stgolabs.net>
[Avoid extra logic changes. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

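A sketch of the single-waiter wait/wake pattern described above (the per-vcpu structure is a stand-in for illustration; rcuwait_wait_event() and rcuwait_wake_up() are the kernel primitives the message refers to):

    #include <linux/rcuwait.h>
    #include <linux/sched.h>

    struct demo_vcpu {
            struct rcuwait wait;            /* replaces the swait queue */
            bool runnable;
    };

    static void demo_vcpu_block(struct demo_vcpu *v)
    {
            /* Only the vcpu task itself ever waits here, so no queue is
             * needed; rcuwait supplies the waiter/waker ordering. */
            rcuwait_wait_event(&v->wait, READ_ONCE(v->runnable),
                               TASK_INTERRUPTIBLE);
    }

    static void demo_vcpu_kick(struct demo_vcpu *v)
    {
            WRITE_ONCE(v->runnable, true);
            /* Wakes the (at most one) waiter from an RCU read section. */
            rcuwait_wake_up(&v->wait);
    }
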
2020-05-13  KVM: x86: Replace late check_nested_events() hack with more precise fix  (Sean Christopherson)
Add an argument to interrupt_allowed and nmi_allowed to check whether interrupt injection is blocked. Use the hook to handle the case where an interrupt arrives between check_nested_events() and the injection logic. Drop the retry of check_nested_events() that hack-a-fixed the same condition.
Blocking injection is also a bit of a hack, e.g. KVM should do exiting and non-exiting interrupt processing in a single pass, but it's a more precise hack. The old comment is also misleading, e.g. KVM_REQ_EVENT is purely an optimization; setting it on every run loop (which KVM doesn't do) would not affect functionality, only performance.
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Message-Id: <20200423022550.15113-13-sean.j.christopherson@intel.com>
[Extend to SVM, add SMI and NMI. Even though NMI and SMI cannot come asynchronously right now, making the fix generic is easy and removes a special case. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

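The shape of the change, as a sketch (the hook names come from the message; the struct name and exact signatures are assumptions, and a later patch in this series converts the return types to bool):

    struct kvm_vcpu;

    /* "for_injection" distinguishes "can this event be injected right
     * now" from "should KVM merely open an exit window for it". */
    struct kvm_x86_event_hooks_sketch {
            bool (*interrupt_allowed)(struct kvm_vcpu *vcpu, bool for_injection);
            bool (*nmi_allowed)(struct kvm_vcpu *vcpu, bool for_injection);
            bool (*smi_allowed)(struct kvm_vcpu *vcpu, bool for_injection);
    };
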
2020-05-13  KVM: VMX: Use vmx_get_rflags() to query RFLAGS in vmx_interrupt_blocked()  (Sean Christopherson)
Use vmx_get_rflags() instead of manually reading vmcs.GUEST_RFLAGS when querying RFLAGS.IF so that multiple checks against interrupt blocking in a single run loop only require a single VMREAD.
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Message-Id: <20200423022550.15113-14-sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2020-05-13  KVM: VMX: Use vmx_interrupt_blocked() directly from vmx_handle_exit()  (Sean Christopherson)
Use vmx_interrupt_blocked() instead of bouncing through vmx_interrupt_allowed() when handling edge cases in vmx_handle_exit(). The nested_run_pending check in vmx_interrupt_allowed() should never evaluate true in the VM-Exit path.
Hoist the WARN in handle_invalid_guest_state() up to vmx_handle_exit() to enforce the above assumption for the !enable_vnmi case, and to detect any other potential bugs with nested VM-Enter.
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Message-Id: <20200423022550.15113-12-sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2020-05-13  KVM: x86: WARN on injected+pending exception even in nested case  (Sean Christopherson)
WARN if a pending exception is coincident with an injected exception before calling check_nested_events() so that the WARN will fire even if inject_pending_event() bails early because check_nested_events() detects the conflict. Bailing early isn't problematic (quite the opposite), but suppressing the WARN is undesirable as it could mask a bug elsewhere in KVM.
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Message-Id: <20200423022550.15113-11-sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2020-05-13  KVM: nSVM: Preserve IRQ/NMI/SMI priority irrespective of exiting behavior  (Paolo Bonzini)
Short circuit svm_check_nested_events() if an unblocked IRQ/NMI/SMI is pending and needs to be injected into L2; priority between coincident events is not dependent on exiting behavior.
Fixes: b518ba9fa691 ("KVM: nSVM: implement check_nested_events for interrupts")
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2020-05-13  KVM: nSVM: Report interrupts as allowed when in L2 and exit-on-interrupt is set  (Paolo Bonzini)
Report interrupts as allowed when the vCPU is in L2 and L2 is being run with exit-on-interrupts enabled and EFLAGS.IF=1 (either on the host or on the guest according to VINTR). Interrupts are always unblocked from L1's perspective in this case.
While moving nested_exit_on_intr to svm.h, use INTERCEPT_INTR properly instead of assuming it's zero (which it is of course).
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2020-05-13  KVM: nVMX: Prioritize SMI over nested IRQ/NMI  (Sean Christopherson)
Check for an unblocked SMI in vmx_check_nested_events() so that pending SMIs are correctly prioritized over IRQs and NMIs when the latter events will trigger VM-Exit. This also fixes an issue where an SMI that was marked pending while processing a nested VM-Enter wouldn't trigger an immediate exit, i.e. would be incorrectly delayed until L2 happened to take a VM-Exit.
Fixes: 64d6067057d96 ("KVM: x86: stubs for SMM support")
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Message-Id: <20200423022550.15113-10-sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2020-05-13  KVM: nVMX: Preserve IRQ/NMI priority irrespective of exiting behavior  (Sean Christopherson)
Short circuit vmx_check_nested_events() if an unblocked IRQ/NMI is pending and needs to be injected into L2; priority between coincident events is not dependent on exiting behavior.
Fixes: b6b8a1451fc4 ("KVM: nVMX: Rework interception of IRQs and NMIs")
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Message-Id: <20200423022550.15113-9-sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2020-05-13  KVM: SVM: Split out architectural interrupt/NMI/SMI blocking checks  (Paolo Bonzini)
Move the architectural (non-KVM specific) interrupt/NMI/SMI blocking checks to a separate helper so that they can be used in a future patch by svm_check_nested_events().
No functional change intended.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2020-05-13  KVM: VMX: Split out architectural interrupt/NMI blocking checks  (Sean Christopherson)
Move the architectural (non-KVM specific) interrupt/NMI blocking checks to a separate helper so that they can be used in a future patch by vmx_check_nested_events().
No functional change intended.
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Message-Id: <20200423022550.15113-8-sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2020-05-13  KVM: nSVM: Move SMI vmexit handling to svm_check_nested_events()  (Paolo Bonzini)
Unlike VMX, SVM allows a hypervisor to take a SMI vmexit without having any special SMM-monitor enablement sequence. Therefore, it has to be handled like interrupts and NMIs. Check for an unblocked SMI in svm_check_nested_events() so that pending SMIs are correctly prioritized over IRQs and NMIs when the latter events will trigger VM-Exit.
Note that there is no need to test explicitly for SMI vmexits, because guests always run outside SMM and therefore can never get an SMI while SMIs are blocked.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2020-05-13  KVM: nSVM: Report NMIs as allowed when in L2 and Exit-on-NMI is set  (Paolo Bonzini)
Report NMIs as allowed when the vCPU is in L2 and L2 is being run with Exit-on-NMI enabled, as NMIs are always unblocked from L1's perspective in this case.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2020-05-13  KVM: nVMX: Report NMIs as allowed when in L2 and Exit-on-NMI is set  (Sean Christopherson)
Report NMIs as allowed when the vCPU is in L2 and L2 is being run with Exit-on-NMI enabled, as NMIs are always unblocked from L1's perspective in this case.
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Message-Id: <20200423022550.15113-7-sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2020-05-13  KVM: x86: replace is_smm checks with kvm_x86_ops.smi_allowed  (Paolo Bonzini)
Do not hardcode is_smm so that all the architectural conditions for blocking SMIs are listed in a single place. Well, in two places because this introduces some code duplication between Intel and AMD.
This ensures that nested SVM obeys GIF in kvm_vcpu_has_events.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2020-05-13  KVM: x86: Make return for {interrupt,nmi,smi}_allowed() a bool instead of int  (Sean Christopherson)
Return an actual bool for kvm_x86_ops' {interrupt,nmi,smi}_allowed() hooks to better reflect the return semantics, and to avoid creating an even bigger mess when the related VMX code is refactored in upcoming patches.
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Message-Id: <20200423022550.15113-5-sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2020-05-13  KVM: x86: Set KVM_REQ_EVENT if run is canceled with req_immediate_exit set  (Sean Christopherson)
Re-request KVM_REQ_EVENT if vcpu_enter_guest() bails after processing pending requests and an immediate exit was requested. This fixes a bug where a pending event, e.g. VMX preemption timer, is delayed and/or lost if the exit was deferred due to something other than a higher priority _injected_ event, e.g. due to a pending nested VM-Enter. This bug only affects the !injected case as kvm_x86_ops.cancel_injection() sets KVM_REQ_EVENT to redo the injection, but that's purely serendipitous behavior with respect to the deferred event.
Note, emulated preemption timer isn't the only event that can be affected, it simply happens to be the only event where not re-requesting KVM_REQ_EVENT is blatantly visible to the guest.
Fixes: f4124500c2c13 ("KVM: nVMX: Fully emulate preemption timer")
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Message-Id: <20200423022550.15113-4-sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

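A sketch of the fix (simplified from the cancellation path in vcpu_enter_guest(); the helper below is illustrative, not the upstream hunk):

    static void cancel_enter_guest_sketch(struct kvm_vcpu *vcpu,
                                          bool req_immediate_exit)
    {
            /* An immediate exit implies a pending event that was never
             * injected; make sure it is re-evaluated on the next loop. */
            if (req_immediate_exit)
                    kvm_make_request(KVM_REQ_EVENT, vcpu);
            /* ...followed by the existing cancel_injection() teardown,
             * which already re-requests KVM_REQ_EVENT for injected events. */
    }
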
2020-05-13  KVM: nVMX: Open a window for pending nested VMX preemption timer  (Sean Christopherson)
Add a kvm_x86_ops hook to detect a nested pending "hypervisor timer" and use it to effectively open a window for servicing the expired timer. Like pending SMIs on VMX, opening a window simply means requesting an immediate exit.
This fixes a bug where an expired VMX preemption timer (for L2) will be delayed and/or lost if a pending exception is injected into L2. The pending exception is rightly prioritized by vmx_check_nested_events() and injected into L2, with the preemption timer left pending. Because no window opened, L2 is free to run uninterrupted.
Fixes: f4124500c2c13 ("KVM: nVMX: Fully emulate preemption timer")
Reported-by: Jim Mattson <jmattson@google.com>
Cc: Oliver Upton <oupton@google.com>
Cc: Peter Shier <pshier@google.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Message-Id: <20200423022550.15113-3-sean.j.christopherson@intel.com>
[Check it in kvm_vcpu_has_events too, to ensure that the preemption timer is serviced promptly even if the vCPU is halted and L1 is not intercepting HLT. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2020-05-13  KVM: nVMX: Preserve exception priority irrespective of exiting behavior  (Sean Christopherson)
Short circuit vmx_check_nested_events() if an exception is pending and needs to be injected into L2; priority between coincident events is not dependent on exiting behavior. This fixes a bug where a single-step #DB that is not intercepted by L1 is incorrectly dropped due to servicing a VMX Preemption Timer VM-Exit.
Injected exceptions also need to be blocked if nested VM-Enter is pending or an exception was already injected, otherwise injecting the exception could overwrite an existing event injection from L1. Technically, this scenario should be impossible, i.e. KVM shouldn't inject its own exception during nested VM-Enter. This will be addressed in a future patch.
Note, event priority between SMI, NMI and INTR is incorrect for L2, e.g. SMI should take priority over VM-Exit on NMI/INTR, and NMI that is injected into L2 should take priority over VM-Exit INTR. This will also be addressed in a future patch.
Fixes: b6b8a1451fc4 ("KVM: nVMX: Rework interception of IRQs and NMIs")
Reported-by: Jim Mattson <jmattson@google.com>
Cc: Oliver Upton <oupton@google.com>
Cc: Peter Shier <pshier@google.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Message-Id: <20200423022550.15113-2-sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2020-05-13  KVM: SVM: Implement check_nested_events for NMI  (Cathy Avery)
Migrate nested guest NMI intercept processing to the new check_nested_events.
Signed-off-by: Cathy Avery <cavery@redhat.com>
Message-Id: <20200414201107.22952-2-cavery@redhat.com>
[Reorder clauses as NMIs have higher priority than IRQs; inject immediate vmexit as is now done for IRQ vmexits. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2020-05-13  KVM: SVM: immediately inject INTR vmexit  (Paolo Bonzini)
We can immediately leave SVM guest mode in svm_check_nested_events now that we have the nested_run_pending mechanism. This makes things easier because we can run the rest of inject_pending_event with GIF=0, and KVM will naturally end up requesting the next interrupt window.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2020-05-13  KVM: SVM: leave halted state on vmexit  (Paolo Bonzini)
Similar to VMX, we need to leave the halted state when performing a vmexit. Failure to do so will cause a hang after vmexit.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2020-05-13  KVM: SVM: introduce nested_run_pending  (Paolo Bonzini)
We want to inject vmexits immediately from svm_check_nested_events, so that the interrupt/NMI window requests happen in inject_pending_event right after it returns. This however has the same issue as in vmx_check_nested_events, so introduce a nested_run_pending flag with the exact same purpose of delaying vmexit injection after the vmentry.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

2020-05-13  Merge branch 'kvm-amd-fixes' into HEAD  (Paolo Bonzini)

2020-05-13  KVM: x86: Fix pkru save/restore when guest CR4.PKE=0, move it to x86.c  (Babu Moger)
Though rdpkru and wrpkru are contingent upon CR4.PKE, the PKRU resource isn't. It can be read with XSAVE and written with XRSTOR. So, if we don't set the guest PKRU value here (kvm_load_guest_xsave_state), the guest can read the host value. In the case of kvm_load_host_xsave_state, a guest with CR4.PKE clear could potentially use XRSTOR to change the host PKRU value.
While at it, move the pkru state save/restore to common code and the host_pkru field to kvm_vcpu_arch. This will let SVM support protection keys.
Cc: stable@vger.kernel.org
Reported-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Babu Moger <babu.moger@amd.com>
Message-Id: <158932794619.44260.14508381096663848853.stgit@naples-babu.amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

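A sketch of the swap logic described above (the helpers cpu_has_pku(), guest_can_use_pkru(), read_pkru() and write_pkru() are placeholders; the point is that PKRU must be switched whenever the guest can reach it via XSAVE/XRSTOR, not only when CR4.PKE is set):

    static void load_guest_pkru_sketch(struct kvm_vcpu *vcpu)
    {
            if (!cpu_has_pku())                     /* placeholder check */
                    return;
            vcpu->arch.host_pkru = read_pkru();     /* remember host value */
            if (guest_can_use_pkru(vcpu) &&         /* CR4.PKE or XCR0.PKRU */
                vcpu->arch.pkru != vcpu->arch.host_pkru)
                    write_pkru(vcpu->arch.pkru);
    }

    static void load_host_pkru_sketch(struct kvm_vcpu *vcpu)
    {
            if (!cpu_has_pku())
                    return;
            if (guest_can_use_pkru(vcpu)) {
                    vcpu->arch.pkru = read_pkru();  /* guest may have changed it */
                    if (vcpu->arch.pkru != vcpu->arch.host_pkru)
                            write_pkru(vcpu->arch.host_pkru);
            }
    }
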
2020-05-13  x86/hyperv: Properly suspend/resume reenlightenment notifications  (Vitaly Kuznetsov)
Errors during hibernation with reenlightenment notifications enabled were reported:
[ 51.730435] PM: hibernation entry
[ 51.737435] PM: Syncing filesystems ...
...
[ 54.102216] Disabling non-boot CPUs ...
[ 54.106633] smpboot: CPU 1 is now offline
[ 54.110006] unchecked MSR access error: WRMSR to 0x40000106 (tried to write 0x47c72780000100ee) at rIP: 0xffffffff90062f24 (native_write_msr+0x4/0x20)
[ 54.110006] Call Trace:
[ 54.110006]  hv_cpu_die+0xd9/0xf0
...
Normally, hv_cpu_die() just reassigns reenlightenment notifications to some other CPU when the CPU receiving them goes offline. Upon hibernation, there is no other CPU which is still online, so cpumask_any_but(cpu_online_mask) returns >= nr_cpu_ids and using it as an index into hv_vp_index is incorrect. Disable the feature when cpumask_any_but() fails.
Also, as we now disable reenlightenment notifications upon hibernation we need to restore them on resume. Check if hv_reenlightenment_cb was previously set and restore from hv_resume().
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Reviewed-by: Dexuan Cui <decui@microsoft.com>
Reviewed-by: Tianyu Lan <Tianyu.Lan@microsoft.com>
Link: https://lore.kernel.org/r/20200512160153.134467-1-vkuznets@redhat.com
Signed-off-by: Wei Liu <wei.liu@kernel.org>

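A sketch of the hv_cpu_die() side of the fix (simplified; the reenlightenment control MSR and its field layout come from the Hyper-V TLFS definitions, the surrounding code is illustrative):

    static void retarget_reenlightenment_sketch(unsigned int dying_cpu)
    {
            struct hv_reenlightenment_control re_ctrl;
            unsigned int new_cpu;

            rdmsrl(HV_X64_MSR_REENLIGHTENMENT_CONTROL, *((u64 *)&re_ctrl));
            new_cpu = cpumask_any_but(cpu_online_mask, dying_cpu);
            if (new_cpu < nr_cpu_ids) {
                    /* Another CPU is still online: retarget the notification. */
                    re_ctrl.target_vp = hv_vp_index[new_cpu];
            } else {
                    /* Hibernation: no CPU left, disable for now and let
                     * hv_resume() re-enable if hv_reenlightenment_cb is set. */
                    re_ctrl.enabled = 0;
            }
            wrmsrl(HV_X64_MSR_REENLIGHTENMENT_CONTROL, *((u64 *)&re_ctrl));
    }
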
2020-05-13  arm64: dts: allwinner: h6: Enable CPU opp tables for Tanix TX6  (Clément Péron)
Enable CPU opp tables for Tanix TX6. Also add the fixed regulator that provides the vdd-cpu-gpu supply required for the CPU opp tables. This voltage has been found using a voltmeter and could be wrong.
Tested-by: Jernej Škrabec <jernej.skrabec@gmail.com>
Signed-off-by: Clément Péron <peron.clem@gmail.com>
Signed-off-by: Maxime Ripard <maxime@cerno.tech>

2020-05-13  arm64: dts: allwinner: h6: add voltage range to OPP table  (Clément Péron)
Some boards have a fixed regulator and can't reach the voltage set by the OPP table. Add a range where the minimum voltage is the target and the maximum voltage is 1.2V.
Suggested-by: Ondřej Jirman <megous@megous.com>
Signed-off-by: Clément Péron <peron.clem@gmail.com>
Signed-off-by: Maxime Ripard <maxime@cerno.tech>

2020-05-13  x86/fpu/xstate: Define new functions for clearing fpregs and xstates  (Fenghua Yu)
Currently, fpu__clear() clears all fpregs and xstates. Once XSAVES supervisor states are introduced, supervisor settings (e.g. CET xstates) must remain active for signals; it is necessary to have separate functions:
- Create fpu__clear_user_states(): clear only user settings for signals;
- Create fpu__clear_all(): clear both user and supervisor settings in flush_thread().
Also modify copy_init_fpstate_to_fpregs() to take a mask from the above two functions. Remove the obvious side-comment in fpu__clear(), while at it.
[ bp: Make the second argument of fpu__clear() bool after requesting it a bunch of times during review.
  - Add a comment about copy_init_fpstate_to_fpregs() locking needs. ]
Co-developed-by: Yu-cheng Yu <yu-cheng.yu@intel.com>
Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
Signed-off-by: Yu-cheng Yu <yu-cheng.yu@intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Dave Hansen <dave.hansen@linux.intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Link: https://lkml.kernel.org/r/20200512145444.15483-6-yu-cheng.yu@intel.com

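A sketch of the split described above (the two exported function names come from the message; the internal helper and its body are simplified assumptions):

    static void fpu_clear_sketch(struct fpu *fpu, bool user_only)
    {
            if (user_only) {
                    /* Signal path: reinitialize only user xstates, leaving
                     * supervisor states (e.g. CET) untouched. */
            } else {
                    /* flush_thread() path: reinitialize user and supervisor
                     * xstates alike. */
            }
    }

    void fpu__clear_user_states(struct fpu *fpu) { fpu_clear_sketch(fpu, true);  }
    void fpu__clear_all(struct fpu *fpu)         { fpu_clear_sketch(fpu, false); }
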
2020-05-13  MIPS: Fix "make clean" error due to recent changes  (Huacai Chen)
Commit 26bff9eb49201aeb ("MIPS: Only include the platform file needed") moves platform-(CONFIG_XYZ) from arch/mips/xyz/Platform to arch/mips/Kbuild.platforms. This change causes an error when running "make clean":
./scripts/Makefile.clean:15: arch/mips/vr41xx/Makefile: No such file or directory
make[3]: *** No rule to make target `arch/mips/vr41xx/Makefile'. Stop.
make[2]: *** [arch/mips/vr41xx] Error 2
make[1]: *** [_clean_arch/mips] Error 2
make: *** [sub-make] Error 2
Clean-files are defined in arch/mips/Kbuild:
obj- := $(platform-)
Due to the movement of platform-(CONFIG_XYZ), "make clean" will enter arch/mips/vr41xx/ whether CONFIG_MACH_VR41XX is defined or not. Because there is no Makefile in arch/mips/vr41xx/, "make clean" fails. I don't know what the best way to fix it is, but it seems we can avoid this error by changing the obj- definition:
obj- := $(platform-y)
Fixes: 26bff9eb49201aeb ("MIPS: Only include the platform file needed")
Signed-off-by: Huacai Chen <chenhc@lemote.com>
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>

2020-05-13  MIPS: Fix typos in arch/mips/Kbuild.platforms  (Huacai Chen)
Commit 26bff9eb49201aeb ("MIPS: Only include the platform file needed") misspelled "txx9" as "tx99", so fix it.
Fixes: 26bff9eb49201aeb ("MIPS: Only include the platform file needed")
Signed-off-by: Huacai Chen <chenhc@lemote.com>
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>

2020-05-13  arm64: tegra: Add XUDC node on Tegra194  (Nagarjuna Kristam)
Tegra194 has one XUSB device mode controller which can be operated in HS and SS modes. Add a DT node for this XUSB device mode controller.
Signed-off-by: Nagarjuna Kristam <nkristam@nvidia.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>

2020-05-13  arm64: tegra: Kill off "simple-panel" compatibles  (Rob Herring)
"simple-panel" is a Linux driver and has never been an accepted upstream compatible string, so remove it.
Cc: Thierry Reding <thierry.reding@gmail.com>
Cc: Jonathan Hunter <jonathanh@nvidia.com>
Cc: linux-tegra@vger.kernel.org
Signed-off-by: Rob Herring <robh@kernel.org>
Signed-off-by: Thierry Reding <treding@nvidia.com>

2020-05-13  x86/fpu/xstate: Introduce XSAVES supervisor states  (Yu-cheng Yu)
Enable XSAVES supervisor states by setting MSR_IA32_XSS bits according to CPUID enumeration results. Also revise comments at various places.
Co-developed-by: Fenghua Yu <fenghua.yu@intel.com>
Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
Signed-off-by: Yu-cheng Yu <yu-cheng.yu@intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Dave Hansen <dave.hansen@linux.intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Link: https://lkml.kernel.org/r/20200512145444.15483-5-yu-cheng.yu@intel.com

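A sketch of the per-CPU enablement step described above (simplified; the mask accessors are the ones introduced elsewhere in this series, and the surrounding init code is omitted):

    static void enable_xstate_features_sketch(void)
    {
            cr4_set_bits(X86_CR4_OSXSAVE);
            /* User xfeatures are enabled through XCR0... */
            xsetbv(XCR_XFEATURE_ENABLED_MASK, xfeatures_mask_user());
            /* ...supervisor xfeatures through the IA32_XSS MSR. */
            if (boot_cpu_has(X86_FEATURE_XSAVES))
                    wrmsrl(MSR_IA32_XSS, xfeatures_mask_supervisor());
    }
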
2020-05-13  csky: Fixup remove unnecessary save/restore PSR code  (Guo Ren)
Every process inherits its PSR from SETUP_MMU, so there is no need to set it again in INIT_THREAD. Also use a3 instead of r7 in __switch_to, to follow the coding convention.
Signed-off-by: Guo Ren <guoren@linux.alibaba.com>

2020-05-13  csky: Fixup remove duplicate irq_disable  (Liu Yibin)
Interrupts have already been disabled in __schedule() with local_irq_disable() and are enabled in finish_task_switch()->finish_lock_switch() with local_irq_enable(), so there is no need to disable irqs here.
Signed-off-by: Liu Yibin <jiulong@linux.alibaba.com>
Signed-off-by: Guo Ren <guoren@linux.alibaba.com>

2020-05-13  csky: Fixup calltrace panic  (Guo Ren)
The implementation of show_stack will panic with a wrong fp in "addr = *fp++" because fp isn't checked properly. The current implementations of show_stack, wchan and stack_trace haven't been designed properly, so just deprecate them.
This patch follows riscv's approach; all the code is adapted from arm's. The patch has been verified with:
- cat /proc/<pid>/stack
- cat /proc/<pid>/wchan
- echo c > /proc/sysrq-trigger
Signed-off-by: Guo Ren <guoren@linux.alibaba.com>

2020-05-13  csky: Fixup perf callchain unwind  (Mao Han)
[ 5221.974084] Unable to handle kernel paging request at virtual address 0xfffff000, pc: 0x8002c18e
[ 5221.985929] Oops: 00000000
[ 5221.989488]
[ 5221.989488] CURRENT PROCESS:
[ 5221.989488]
[ 5221.992877] COMM=callchain_test PID=11962
[ 5221.995213] TEXT=00008000-000087e0 DATA=00009f1c-0000a018 BSS=0000a018-0000b000
[ 5221.999037] USER-STACK=7fc18e20 KERNEL-STACK=be204680
[ 5221.999037]
[ 5222.003292] PC: 0x8002c18e (perf_callchain_kernel+0x3e/0xd4)
[ 5222.007957] LR: 0x8002c198 (perf_callchain_kernel+0x48/0xd4)
[ 5222.074873] Call Trace:
[ 5222.074873] [<800a248e>] get_perf_callchain+0x20a/0x29c
[ 5222.074873] [<8009d964>] perf_callchain+0x64/0x80
[ 5222.074873] [<8009dc1c>] perf_prepare_sample+0x29c/0x4b8
[ 5222.074873] [<8009de6e>] perf_event_output_forward+0x36/0x98
[ 5222.074873] [<800497e0>] search_exception_tables+0x20/0x44
[ 5222.074873] [<8002cbb6>] do_page_fault+0x92/0x378
[ 5222.074873] [<80098608>] __perf_event_overflow+0x54/0xdc
[ 5222.074873] [<80098778>] perf_swevent_hrtimer+0xe8/0x164
[ 5222.074873] [<8002ddd0>] update_mmu_cache+0x0/0xd8
[ 5222.074873] [<8002c014>] user_backtrace+0x58/0xc4
[ 5222.074873] [<8002c0b4>] perf_callchain_user+0x34/0xd0
[ 5222.074873] [<800a2442>] get_perf_callchain+0x1be/0x29c
[ 5222.074873] [<8009d964>] perf_callchain+0x64/0x80
[ 5222.074873] [<8009d834>] perf_output_sample+0x78c/0x858
[ 5222.074873] [<8009dc1c>] perf_prepare_sample+0x29c/0x4b8
[ 5222.074873] [<8009de94>] perf_event_output_forward+0x5c/0x98
[ 5222.097846]
[ 5222.097846] [<800a0300>] perf_event_exit_task+0x58/0x43c
[ 5222.097846] [<8006c874>] hrtimer_interrupt+0x104/0x2ec
[ 5222.097846] [<800a0300>] perf_event_exit_task+0x58/0x43c
[ 5222.097846] [<80437bb6>] dw_apb_clockevent_irq+0x2a/0x4c
[ 5222.097846] [<8006c770>] hrtimer_interrupt+0x0/0x2ec
[ 5222.097846] [<8005f2e4>] __handle_irq_event_percpu+0xac/0x19c
[ 5222.097846] [<80437bb6>] dw_apb_clockevent_irq+0x2a/0x4c
[ 5222.097846] [<8005f408>] handle_irq_event_percpu+0x34/0x88
[ 5222.097846] [<8005f480>] handle_irq_event+0x24/0x64
[ 5222.097846] [<8006218c>] handle_level_irq+0x68/0xdc
[ 5222.097846] [<8005ec76>] __handle_domain_irq+0x56/0xa8
[ 5222.097846] [<80450e90>] ck_irq_handler+0xac/0xe4
[ 5222.097846] [<80029012>] csky_do_IRQ+0x12/0x24
[ 5222.097846] [<8002a3a0>] csky_irq+0x70/0x80
[ 5222.097846] [<800ca612>] alloc_set_pte+0xd2/0x238
[ 5222.097846] [<8002ddd0>] update_mmu_cache+0x0/0xd8
[ 5222.097846] [<800a0340>] perf_event_exit_task+0x98/0x43c
The original fp check isn't based on the real kernel stack region; an invalid fp address can cause a kernel panic.
Signed-off-by: Mao Han <han_mao@linux.alibaba.com>
Signed-off-by: Guo Ren <guoren@linux.alibaba.com>

2020-05-13  csky: Fixup msa highest 3 bits mask  (Liu Yibin)
Just as the comment mentioned, the MSA register (cr<30/31, 15>) format is:
 31 - 29 | 28 - 9 | 8 | 7 | 6 | 5 | 4 | 3 | 2 | 1 | 0
   BA      Reserved  SH  WA   B   SO  SEC   C   D   V
So we should shift by 29 bits, not 28, for the mask.
Signed-off-by: Liu Yibin <jiulong@linux.alibaba.com>
Signed-off-by: Guo Ren <guoren@linux.alibaba.com>

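The fix itself is in csky assembly; as a plain-C illustration of the mask implied by the layout above (macro names assumed), BA occupies bits [31:29], so the mask needs a 29-bit shift rather than 28:

    #define MSA_BA_SHIFT    29
    #define MSA_BA_MASK     (0x7u << MSA_BA_SHIFT)  /* == 0xe0000000 */
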
2020-05-13  csky: Fixup perf probe -x hungup  (Guo Ren)
Test case:
# perf probe -x /lib/libc-2.28.9000.so memcpy
# perf record -e probe_libc:memcpy -aR sleep 1
The system hangs up and the CPU gets stuck in a trap_c loop, because our hardware single-step state can still receive interrupt signals. When we are in the uprobe_xol single-step slot, we should disable irqs in pt_regs->psr. Also, is_swbp_insn() needs a csky arch implementation with a low 16-bit mask.
Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
Cc: Steven Rostedt (VMware) <rostedt@goodmis.org>

2020-05-13  csky: Fixup compile error for abiv1 entry.S  (Guo Ren)
This bug comes from the uprobe signal definition in thread_info.h. The immediate range of the abiv1 instruction (andi) is smaller than abiv2's, which causes:
  AS      arch/csky/kernel/entry.o
arch/csky/kernel/entry.S: Assembler messages:
arch/csky/kernel/entry.S:224: Error: Operand 2 immediate is overflow.
Signed-off-by: Guo Ren <guoren@linux.alibaba.com>

2020-05-13  csky/ftrace: Fixup error when disable CONFIG_DYNAMIC_FTRACE  (Guo Ren)
When CONFIG_DYNAMIC_FTRACE is disabled, static ftrace fails to compile and boot. This was an oversight while developing "dynamic ftrace" and "ftrace with regs".
Signed-off-by: Guo Ren <guoren@linux.alibaba.com>

2020-05-13  arm64: dts: imx8qxp-mek: Do not use underscore in node name  (Fabio Estevam)
Underscores are not recommended in node names, so change the pinctrl IO expander node name. This change also makes the pinctrl node names follow the convention used by other pinctrl group names.
Signed-off-by: Fabio Estevam <festevam@gmail.com>
Reviewed-by: Abel Vesa <abel.vesa@nxp.com>
Signed-off-by: Shawn Guo <shawnguo@kernel.org>

2020-05-13  x86/fpu/xstate: Separate user and supervisor xfeatures mask  (Yu-cheng Yu)
Before the introduction of XSAVES supervisor states, 'xfeatures_mask' is used at various places to determine XSAVE buffer components and XCR0 bits. It contains only user xstates. To support supervisor xstates, it is necessary to separate user and supervisor xstates:
- First, change 'xfeatures_mask' to 'xfeatures_mask_all', which represents the full set of bits that should ever be set in a kernel XSAVE buffer.
- Introduce xfeatures_mask_supervisor() and xfeatures_mask_user() to extract relevant xfeatures from xfeatures_mask_all.
Co-developed-by: Fenghua Yu <fenghua.yu@intel.com>
Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
Signed-off-by: Yu-cheng Yu <yu-cheng.yu@intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Dave Hansen <dave.hansen@linux.intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Link: https://lkml.kernel.org/r/20200512145444.15483-4-yu-cheng.yu@intel.com

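A sketch of the two accessors described above (the "supported" filter masks are illustrative constants, not necessarily the exact kernel definitions):

    extern u64 xfeatures_mask_all;  /* all bits ever set in a kernel XSAVE buffer */

    static inline u64 xfeatures_mask_supervisor(void)
    {
            return xfeatures_mask_all & XFEATURE_MASK_SUPERVISOR_SUPPORTED;
    }

    static inline u64 xfeatures_mask_user(void)
    {
            return xfeatures_mask_all & XFEATURE_MASK_USER_SUPPORTED;
    }
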
2020-05-13  ARM: dts: imx6qdl-gw552x: add USB OTG support  (Tim Harvey)
The GW552x-B board revision adds USB OTG support. Enable the device-tree node and configure the OTG_ID pin.
Signed-off-by: Tim Harvey <tharvey@gateworks.com>
Signed-off-by: Shawn Guo <shawnguo@kernel.org>