path: root/arch/x86
2021-02-10  x86/apic: Split out spurious handling code  (Thomas Gleixner)

sysvec_spurious_apic_interrupt() calls into the handling body of __spurious_interrupt(), which is not obvious as that function is declared inside the DEFINE_IDTENTRY_IRQ(spurious_interrupt) macro.

As __spurious_interrupt() is currently always inlined, this ends up with two copies of the same code for no reason.

Split the handling function out and invoke it from both entry points.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20210210002512.469379641@linutronix.de
2021-02-10  x86/irq/64: Adjust the per CPU irq stack pointer by 8  (Thomas Gleixner)

The per CPU hardirq_stack_ptr contains the pointer to the irq stack in the form that it is ready to be assigned to [ER]SP so that the first push ends up on the top entry of the stack.

But the stack switching on 64 bit has the following rules:

1) Store the current stack pointer (RSP) in the top most stack entry to allow the unwinder to link back to the previous stack

2) Set RSP to the top most stack entry

3) Invoke functions on the irq stack

4) Pop RSP from the top most stack entry (stored in #1) so it's back to the original stack.

That requires all stack switching code to decrement the stored pointer by 8 in order to be able to store the current RSP and then set RSP to that location. That's a pointless exercise.

Do the -8 adjustment right when storing the pointer and make the data type a void pointer to avoid confusion vs. the struct irq_stack data type which is on 64bit only used to declare the backing store. Move the definition next to the inuse flag so they likely end up in the same cache line. Sticking them into a struct to enforce it is a separate change.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20210210002512.354260928@linutronix.de
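A minimal sketch of the resulting layout, with illustrative names rather than the exact kernel symbols:

    /* Sketch only: illustrative names, not the actual kernel code. */
    #include <stdint.h>

    #define IRQ_STACK_SIZE 16384

    struct irq_stack {
            char stack[IRQ_STACK_SIZE];
    };

    static struct irq_stack backing_store;
    static void *hardirq_stack_ptr;    /* void *, not struct irq_stack * */

    static void init_irq_stack(void)
    {
            /*
             * Pre-adjust by 8: the pointer now names the slot where the
             * stack switching code stores the current RSP, so no run-time
             * "sub $8" is needed before the store.
             */
            hardirq_stack_ptr = backing_store.stack + IRQ_STACK_SIZE - 8;
    }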
2021-02-10  x86/irq: Sanitize irq stack tracking  (Thomas Gleixner)

The recursion protection for hard interrupt stacks is an unsigned int per CPU variable initialized to -1 named __irq_count.

The irq stack switching is only done when the variable is -1, which creates worse code than just checking for 0. When the stack switching happens it uses this_cpu_add/sub(1), but there is no reason to do so. It simply can use straight writes. This is a historical leftover from the low level ASM code which used inc and jz to make a decision.

Rename it to hardirq_stack_inuse, make it a bool and use plain stores.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20210210002512.228830141@linutronix.de
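A sketch of the resulting control flow, assuming a simplified run-on-irq-stack helper (the real kernel does the stack switch in inline asm):

    /* Sketch only: the real switch happens in inline asm. */
    #include <stdbool.h>

    static bool hardirq_stack_inuse;    /* was: unsigned int __irq_count = -1 */

    static void run_sysvec_on_irqstack(void (*func)(void))
    {
            if (!hardirq_stack_inuse) {
                    hardirq_stack_inuse = true;   /* plain store, no this_cpu_add() */
                    func();                       /* ... runs on the irq stack ... */
                    hardirq_stack_inuse = false;  /* plain store, no this_cpu_sub() */
            } else {
                    func();    /* recursion: already on the irq stack */
            }
    }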
2021-02-10  x86/entry: Fix instrumentation annotation  (Thomas Gleixner)

Embracing a callout into instrumentation_begin() / instrumentation_begin() does not really make sense. Make the latter instrumentation_end().

Fixes: 2f6474e4636b ("x86/entry: Switch XEN/PV hypercall entry to IDTENTRY")
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Kees Cook <keescook@chromium.org>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/20210210002512.106502464@linutronix.de
2021-02-10  Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  (David S. Miller)
2021-02-10  x86/pci: Create PCI/MSI irqdomain after x86_init.pci.arch_init()  (Thomas Gleixner)

Invoking x86_init.irqs.create_pci_msi_domain() before x86_init.pci.arch_init() breaks XEN PV.

The XEN_PV specific pci.arch_init() function overrides the default create_pci_msi_domain() which is obviously too late.

As a consequence the XEN PV PCI/MSI allocation goes through the native path which runs out of vectors and causes malfunction.

Invoke it after x86_init.pci.arch_init().

Fixes: 6b15ffa07dc3 ("x86/irq: Initialize PCI/MSI domain at PCI init time")
Reported-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Juergen Gross <jgross@suse.com>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/87pn18djte.fsf@nanos.tec.linutronix.de
2021-02-10  Merge branch 'objtool/core' into x86/entry  (Thomas Gleixner)

to base the irq stack modifications on.
2021-02-10  x86/unwind/orc: Change REG_SP_INDIRECT  (Peter Zijlstra)

Currently REG_SP_INDIRECT is unused but means (%rsp + offset); change it to mean (%rsp) + offset.

The reason is that we're going to swizzle stack in the middle of a C function with non-trivial stack footprint. This means that when the unwinder finds the ToS, it needs to dereference it (%rsp) and then add the offset to the next frame, resulting in: (%rsp) + offset

This is somewhat unfortunate, since REG_BP_INDIRECT is used (by DRAP) and thus needs to retain the current (%rbp + offset).

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Miroslav Benes <mbenes@suse.cz>
Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
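A sketch of the two interpretations (simplified; not the real ORC entry handling):

    /* Sketch only: simplified unwinder address calculation. */
    #include <stdint.h>

    enum reg_base { REG_SP_INDIRECT, REG_BP_INDIRECT };

    static uintptr_t next_frame(enum reg_base base, uintptr_t reg, intptr_t off)
    {
            switch (base) {
            case REG_SP_INDIRECT:
                    /* new meaning: (%rsp) + offset -- dereference, then add */
                    return *(uintptr_t *)reg + off;
            case REG_BP_INDIRECT:
                    /* unchanged, used by DRAP: (%rbp + offset) */
                    return *(uintptr_t *)(reg + off);
            }
            return 0;
    }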
2021-02-10  x86/{fault,efi}: Fix and rename efi_recover_from_page_fault()  (Andy Lutomirski)

efi_recover_from_page_fault() doesn't recover -- it does a special EFI mini-oops. Rename it to make it clear that it crashes.

While renaming it, I noticed a blatant bug: a page fault oops in a different thread happening concurrently with an EFI runtime service call would be misinterpreted as an EFI page fault. Fix that.

This isn't quite exact. The situation could be improved by using a special CS for calls into EFI.

[ bp: Massage commit message and simplify the in-interrupt check. ]

Signed-off-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lkml.kernel.org/r/f43b1e80830dc78ed60ed8b0826f4f189254570c.1612924255.git.luto@kernel.org
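A sketch of the concurrency check (kernel context; hedged, since the exact condition in the renamed function may differ, but efi_rts_work and current_work() are the real kernel facilities):

    /* Sketch only: attribute the oops to EFI only for the EFI task. */
    #include <linux/efi.h>
    #include <linux/workqueue.h>

    static bool fault_is_from_efi_call(void)
    {
            /*
             * Only the worker actually executing the EFI runtime service
             * may treat this fault as an EFI failure; a concurrent oops
             * in another thread must take the normal oops path.
             */
            return efi_rts_work.efi_rts_id != EFI_NONE &&
                   current_work() == &efi_rts_work.work;
    }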
2021-02-10  x86/fault: Don't run fixups for SMAP violations  (Andy Lutomirski)

A SMAP-violating kernel access is not a recoverable condition. Imagine kernel code that, outside of a uaccess region, dereferences a pointer to the user range by accident. If SMAP is on, this will reliably be handled as if it were an intentional user access. This makes it easy for bugs to be overlooked if code is inadequately tested both with and without SMAP.

This was discovered because BPF can generate invalid accesses to user memory, but those warnings only got printed if SMAP was off. Make it so that this type of error will be discovered with SMAP on as well.

[ bp: Massage commit message. ]

Signed-off-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lkml.kernel.org/r/66a02343624b1ff46f02a838c497fc05c1a871b3.1612924255.git.luto@kernel.org
2021-02-10  x86/fault: Don't look for extable entries for SMEP violations  (Andy Lutomirski)

If the kernel gets a SMEP violation or a fault that would have been a SMEP violation if it had SMEP support, it shouldn't run fixups. Just OOPS.

[ bp: Massage commit message. ]

Signed-off-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lkml.kernel.org/r/46160d8babce2abf1d6daa052146002efa24ac56.1612924255.git.luto@kernel.org
2021-02-10  perf/x86/rapl: Fix psys-energy event on Intel SPR platform  (Zhang Rui)

There are several things special for the RAPL Psys energy counter on the Intel Sapphire Rapids platform:

1. It contains one Psys master package, and only CPUs on the master package can read a valid value of the Psys energy counter; reading the MSR on CPUs in a slave package returns 0.

2. The master package does not have to be physical package 0. And when all the CPUs on the Psys master package are offlined, we lose the Psys energy counter, at runtime.

3. The Psys energy counter can be disabled by BIOS, while all the other energy counters are not affected.

It is not easy to handle all of these in the current RAPL PMU design because

a) perf_msr_probe() validates the MSR on some random CPU, which may either be in the Psys master package or in a Psys slave package.

b) all the RAPL events share the same PMU, and there is no API to remove the psys-energy event cleanly, without affecting the other events in the same PMU.

This patch addresses the problems in a simple way. First, by setting the .no_check bit for the RAPL Psys MSR, the psys-energy event is always added, so we don't have to check the Psys ENERGY_STATUS MSR on the master package. Then, by removing rapl_not_visible(), the psys-energy event is always available in sysfs. This does not affect the previous code because, for the RAPL MSRs with .no_check cleared, the .is_visible() callback is always overridden in the perf_msr_probe() function.

Note, although the RAPL PMU is die-based, and the Psys energy counter MSR on Intel SPR is package scope, this is not a problem because there is only one die in each package on SPR.

Signed-off-by: Zhang Rui <rui.zhang@intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Andi Kleen <ak@linux.intel.com>
Link: https://lkml.kernel.org/r/20210204161816.12649-3-rui.zhang@intel.com
2021-02-10  perf/x86/rapl: Only check lower 32bits for RAPL energy counters  (Zhang Rui)

In the RAPL ENERGY_COUNTER MSR, only the lower 32 bits represent the energy counter. On previous platforms, the higher 32 bits are reserved and always return zero. But on the Intel Sapphire Rapids platform, the higher 32 bits are reused for other purposes and return a non-zero value.

Thus check the lower 32 bits only for these ENERGY_COUNTER MSRs, to make sure the RAPL PMU events are not added erroneously when the higher 32 bits contain a non-zero value.

Signed-off-by: Zhang Rui <rui.zhang@intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Andi Kleen <ak@linux.intel.com>
Link: https://lkml.kernel.org/r/20210204161816.12649-2-rui.zhang@intel.com
2021-02-10  perf/x86/rapl: Add msr mask support  (Zhang Rui)

In some cases, when probing a perf MSR, we're probing certain bits of the MSR instead of the whole register, thus only these bits should be checked.

For example, for the RAPL ENERGY_STATUS MSR, only the lower 32 bits represent the energy counter, and the higher 32 bits are reserved.

Introduce a new mask field in struct perf_msr to allow probing certain bits of an MSR. This change is transparent to the current perf_msr_probe() users.

Signed-off-by: Zhang Rui <rui.zhang@intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Andi Kleen <ak@linux.intel.com>
Link: https://lkml.kernel.org/r/20210204161816.12649-1-rui.zhang@intel.com
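A sketch of how such a mask would be applied during probing (field and helper names are illustrative; the real struct perf_msr and perf_msr_probe() differ in detail):

    /* Sketch only: illustrative probing logic. */
    #include <stdbool.h>
    #include <stdint.h>

    struct perf_msr_sketch {
            uint32_t msr;
            uint64_t mask;    /* 0 means "check the whole register" */
    };

    static bool msr_value_valid(const struct perf_msr_sketch *m, uint64_t val)
    {
            if (m->mask)
                    val &= m->mask;    /* e.g. lower 32 bits for RAPL */
            return val != 0;           /* non-zero masked value => event usable */
    }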
2021-02-10  perf/x86/kvm: Add Cascade Lake Xeon steppings to isolation_ucodes[]  (Jim Mattson)

Cascade Lake Xeon parts have the same model number as Skylake Xeon parts, so they are tagged with the intel_pebs_isolation quirk. However, as with Skylake Xeon H0 stepping parts, the PEBS isolation issue is fixed in all microcode versions.

Add the Cascade Lake Xeon steppings (5, 6, and 7) to the isolation_ucodes[] table so that these parts benefit from Andi's optimization in commit 9b545c04abd4f ("perf/x86/kvm: Avoid unnecessary work in guest filtering").

Signed-off-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Andi Kleen <ak@linux.intel.com>
Link: https://lkml.kernel.org/r/20210205191324.2889006-1-jmattson@google.com
2021-02-10  x86/fault: Rename no_context() to kernelmode_fixup_or_oops()  (Andy Lutomirski)

The name no_context() has never been very clear. It's only called for faults from kernel mode, so rename it and change the no-longer-useful user_mode(regs) check to a WARN_ON_ONCE.

Signed-off-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lkml.kernel.org/r/c21940efe676024bb4bc721f7d70c29c420e127e.1612924255.git.luto@kernel.org
2021-02-10  x86/fault: Bypass no_context() for implicit kernel faults from usermode  (Andy Lutomirski)

Drop an indentation level and remove the last user_mode(regs) == true caller of no_context() by directly OOPSing for implicit kernel faults from usermode.

Signed-off-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lkml.kernel.org/r/6e3d1129494a8de1e59d28012286e3a292a2296e.1612924255.git.luto@kernel.org
2021-02-10  x86/fault: Split the OOPS code out from no_context()  (Andy Lutomirski)

Not all callers of no_context() want to run exception fixups. Separate the OOPS code out from the fixup code in no_context().

Signed-off-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lkml.kernel.org/r/450f8d8eabafb83a5df349108c8e5ea83a2f939d.1612924255.git.luto@kernel.org
2021-02-10  x86/fault: Improve kernel-executing-user-memory handling  (Andy Lutomirski)

Right now, the case of the kernel trying to execute from user memory is treated more or less just like the kernel getting a page fault on a user access. In the failure path, it checks for erratum #93, tries to otherwise fix up the error, and then oopses.

If it manages to jump to the user address space, with or without SMEP, it should not try to resolve the page fault. This is an error, pure and simple. Rearrange the code so that this case is caught early, check for erratum #93, and bail out.

[ bp: Massage commit message. ]

Signed-off-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lkml.kernel.org/r/ab8719c7afb8bd501c4eee0e36493150fbbe5f6a.1612924255.git.luto@kernel.org
2021-02-10  x86/fault: Correct a few user vs kernel checks wrt WRUSS  (Andy Lutomirski)

In general, page fault errors for WRUSS should be just like get_user(), etc. Fix three bugs in this area:

1. There is a comment that says that, if the kernel can't handle a page fault on a user address due to OOM, the OOM-kill-and-retry logic would be skipped. The code checked kernel *privilege*, not kernel *mode*, so it missed WRUSS. This means that the kernel would malfunction if it got OOM on a WRUSS fault -- this would be a kernel-mode, user-privilege fault, and the OOM killer would be invoked and the handler would retry the faulting instruction.

2. A failed user access from kernel mode while a fatal signal is pending should fail even if the instruction in question was WRUSS.

3. do_sigbus() should not send SIGBUS for WRUSS -- it should handle it like any other kernel-mode failure.

Signed-off-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lkml.kernel.org/r/a7b7bcea730bd4069e6b7e629236bb2cf526c2fb.1612924255.git.luto@kernel.org
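A sketch of the mode-vs-privilege distinction that matters for WRUSS (simplified; the constant mirrors the x86 page fault error code, the rest is illustrative):

    /* Sketch only: mode vs. privilege for a WRUSS fault. */
    #include <stdbool.h>

    #define X86_PF_USER 0x4UL    /* error code bit: user-*privilege* access */

    struct pt_regs { unsigned long cs; };

    static bool user_mode(const struct pt_regs *regs)
    {
            return (regs->cs & 3) == 3;    /* CPL 3 => user mode */
    }

    /*
     * WRUSS executed by the kernel faults with X86_PF_USER set while
     * regs still show kernel mode. "Did the kernel fault?" must test
     * the mode, not the privilege bit, or WRUSS is misclassified.
     */
    static bool fault_from_kernel_mode(const struct pt_regs *regs,
                                       unsigned long error_code)
    {
            (void)error_code;    /* X86_PF_USER alone would mislead here */
            return !user_mode(regs);
    }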
2021-02-10  x86/fault: Document the locking in the fault_signal_pending() path  (Andy Lutomirski)

If fault_signal_pending() returns true, then the core mm has unlocked the mm for us. Add a comment to help future readers of this code.

Signed-off-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lkml.kernel.org/r/c56de3d103f40e6304437b150aa7b215530d23f7.1612924255.git.luto@kernel.org
2021-02-10  x86/fault/32: Move is_f00f_bug() to do_kern_addr_fault()  (Andy Lutomirski)

bad_area() and its relatives are called from many places in fault.c, and exactly one of them wants the F00F workaround.

__bad_area_nosemaphore() no longer contains any kernel fault code, which prepares for further cleanups.

Signed-off-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lkml.kernel.org/r/e9668729a48ce6754022b0a4415631e8ebdd00e7.1612924255.git.luto@kernel.org
2021-02-10  x86/fault: Fold mm_fault_error() into do_user_addr_fault()  (Andy Lutomirski)

mm_fault_error() is logically just the end of do_user_addr_fault(). Combine the functions. This makes the code easier to read.

Most of the churn here is from renaming hw_error_code to error_code in do_user_addr_fault().

This makes no difference at all to the generated code (objdump -dr) as compared to changing noinline to __always_inline in the definition of mm_fault_error().

Signed-off-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lkml.kernel.org/r/dedc4d9c9b047e51ce38b991bd23971a28af4e7b.1612924255.git.luto@kernel.org
2021-02-10  x86/fault: Skip the AMD erratum #91 workaround on unaffected CPUs  (Andy Lutomirski)

According to the Revision Guide for AMD Athlon™ 64 and AMD Opteron™ Processors, only early revisions of family 0xF are affected. This will avoid unnecessarily fetching instruction bytes before sending SIGSEGV to user programs.

Signed-off-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lkml.kernel.org/r/477173b7784bc28afb3e53d76ae5ef143917e8dd.1612924255.git.luto@kernel.org
2021-02-10  x86/fault: Fix AMD erratum #91 errata fixup for user code  (Andy Lutomirski)

The recent rework of probe_kernel_address() and its conversion to get_kernel_nofault() inadvertently broke is_prefetch(). Before this change, probe_kernel_address() was used as a sloppy "read user or kernel memory" helper, but it doesn't do that any more. The new get_kernel_nofault() reads *kernel* memory only, which completely broke is_prefetch() for user access.

Adjust the code to use the correct accessor based on access mode. The manual address bounds check is no longer necessary, since the accessor helpers (get_user() / get_kernel_nofault()) do the right thing all by themselves.

[ bp: Massage commit message. ]

Fixes: eab0c6089b68 ("maccess: unify the probe kernel arch hooks")
Signed-off-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/b91f7f92f3367d2d3a88eec3b09c6aab1b2dc8ef.1612924255.git.luto@kernel.org
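A sketch of the accessor selection (kernel context; simplified from the real is_prefetch() flow, with an illustrative helper name):

    /* Sketch only: simplified from the actual fix. */
    #include <linux/uaccess.h>
    #include <asm/ptrace.h>

    static int read_insn_byte(struct pt_regs *regs, unsigned long ip,
                              unsigned char *byte)
    {
            if (user_mode(regs))
                    /* get_user() bounds-checks the address itself */
                    return get_user(*byte, (unsigned char __user *)ip);
            return get_kernel_nofault(*byte, (unsigned char *)ip);
    }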
2021-02-10  x86/pv: Rework arch_local_irq_restore() to not use popf  (Juergen Gross)

POPF is a rather expensive operation, so don't use it for restoring irq flags. Instead, test whether interrupts are enabled in the flags parameter and enable interrupts via STI in that case.

This means the restore_fl paravirt op is no longer needed.

Suggested-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lkml.kernel.org/r/20210120135555.32594-7-jgross@suse.com
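A sketch of the resulting native restore path (kernel context; simplified, helper name is illustrative):

    /* Sketch only: simplified restore path. */
    #include <asm/processor-flags.h>    /* X86_EFLAGS_IF */

    static inline void irq_restore_sketch(unsigned long flags)
    {
            /*
             * The caller disabled interrupts via irq_save(); if IF was
             * set in the saved flags, re-enable with STI instead of
             * restoring the whole flags register with POPF.
             */
            if (flags & X86_EFLAGS_IF)
                    asm volatile("sti" ::: "memory");
    }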
2021-02-10  x86/xen: Drop USERGS_SYSRET64 paravirt call  (Juergen Gross)

USERGS_SYSRET64 is used to return from a syscall via SYSRET, but a Xen PV guest will nevertheless use the IRET hypercall, as there is no sysret PV hypercall defined.

So instead of testing all the prerequisites for doing a sysret and then mangling the stack for Xen PV again for doing an iret just use the iret exit from the beginning.

This can easily be done via an ALTERNATIVE like it is done for the sysenter compat case already.

It should be noted that this drops the optimization in Xen for not restoring a few registers when returning to user mode, but it seems as if the saved instructions in the kernel more than compensate for this drop (a kernel build in a Xen PV guest was slightly faster with this patch applied).

While at it remove the stale sysret32 remnants.

Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lkml.kernel.org/r/20210120135555.32594-6-jgross@suse.com
2021-02-10  x86/pv: Switch SWAPGS to ALTERNATIVE  (Juergen Gross)

SWAPGS is used only for interrupts coming from user mode or for returning to user mode. So there is no reason to use the PARAVIRT framework, as it can easily be replaced by an ALTERNATIVE depending on X86_FEATURE_XENPV.

There are several instances using the PV-aware SWAPGS macro in paths which are never executed in a Xen PV guest. Replace those with the plain swapgs instruction. For SWAPGS_UNSAFE_STACK the same applies.

Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Andy Lutomirski <luto@kernel.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20210120135555.32594-5-jgross@suse.com
2021-02-10  x86/xen: Use specific Xen pv interrupt entry for DF  (Juergen Gross)

Xen PV guests don't use IST. For double fault interrupts, switch to the same model as NMI.

Correct a typo in a comment while copying it.

Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20210120135555.32594-4-jgross@suse.com
2021-02-10  x86/xen: Use specific Xen pv interrupt entry for MCE  (Juergen Gross)

Xen PV guests don't use IST. For machine check interrupts, switch to the same model as debug interrupts.

Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20210120135555.32594-3-jgross@suse.com
2021-02-09  mfd: intel_msic: Remove driver for deprecated platform  (Andy Shevchenko)

Intel Moorestown and Medfield are quite old Intel Atom based 32-bit platforms, which were in limited use in some Android phones, tablets and consumer electronics more than eight years ago.

No bugs or problems have ever been reported from outside Intel about breaking any of these platforms, and it seems no real users exist who run a more or less fresh kernel on them. Commit 05f4434bc130 ("ASoC: Intel: remove mfld_machine") is also in line with this theory.

Due to the above, and to reduce the burden of supporting outdated drivers, remove the support for these platforms completely.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Acked-by: Lee Jones <lee.jones@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2021-02-09  x86/apb_timer: Remove driver for deprecated platform  (Andy Shevchenko)

Intel Moorestown and Medfield are quite old Intel Atom based 32-bit platforms, which were in limited use in some Android phones, tablets and consumer electronics more than eight years ago.

No bugs or problems have ever been reported from outside Intel about breaking any of these platforms, and it seems no real users exist who run a more or less fresh kernel on them. Commit 05f4434bc130 ("ASoC: Intel: remove mfld_machine") is also in line with this theory.

Due to the above, and to reduce the burden of supporting outdated drivers, remove the support for these platforms completely.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2021-02-09  x86/platform/intel-mid: Remove unused leftovers (vRTC)  (Andy Shevchenko)

There is no driver present, remove the device creation and other leftovers.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2021-02-09  x86/platform/intel-mid: Remove unused leftovers (msic)  (Andy Shevchenko)

There is no driver present, remove the device creation and other leftovers.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2021-02-09  x86/platform/intel-mid: Remove unused leftovers (msic_thermal)  (Andy Shevchenko)

There is no driver present, remove the device creation and other leftovers.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2021-02-09  x86/platform/intel-mid: Remove unused leftovers (msic_power_btn)  (Andy Shevchenko)

There is no driver present, remove the device creation and other leftovers.

Note, for Intel Merrifield there is another driver which is instantiated by a certain MFD one and does not need any support from device_libs.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2021-02-09  x86/platform/intel-mid: Remove unused leftovers (msic_gpio)  (Andy Shevchenko)

There is no driver present, remove the device creation and other leftovers.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2021-02-09  x86/platform/intel-mid: Remove unused leftovers (msic_battery)  (Andy Shevchenko)

It seems the msic_battery driver was never upstreamed. Why should we have dead code in the kernel?

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2021-02-09  x86/platform/intel-mid: Remove unused leftovers (msic_ocd)  (Andy Shevchenko)

It seems the msic_ocd driver was never upstreamed. Why should we have dead code in the kernel?

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2021-02-09  x86/platform/intel-mid: Remove unused leftovers (msic_audio)  (Andy Shevchenko)

Commit 05f4434bc130 ("ASoC: Intel: remove mfld_machine") removed the driver, no need to have support files for it.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2021-02-09  Merge tag 'ib-drm-gpio-pdx86-rtc-wdt-v5.12-2' of git://git.infradead.org/linux-platform-drivers-x86  (Rafael J. Wysocki)

Pull the first part of Intel MID outdated platforms removal from Andy Shevchenko:

"The following is an automated git shortlog grouped by driver:

drm/gma500:
 - Get rid of duplicate NULL checks
 - Convert to use new SCU IPC API

gpio:
 - msic: Remove driver for deprecated platform
 - intel-mid: Remove driver for deprecated platform

intel_mid_powerbtn:
 - Remove driver for deprecated platform

intel_mid_thermal:
 - Remove driver for deprecated platform

intel_scu_wdt:
 - Drop mistakenly added const
 - Get rid of custom x86 model comparison
 - Drop SCU notification
 - Move driver from arch/x86

rtc:
 - mrst: Remove driver for deprecated platform

watchdog:
 - intel-mid_wdt: Postpone IRQ handler registration till SCU is ready
 - intel_scu_watchdog: Remove driver for deprecated platform"

* tag 'ib-drm-gpio-pdx86-rtc-wdt-v5.12-2' of git://git.infradead.org/linux-platform-drivers-x86:
  platform/x86: intel_scu_wdt: Drop mistakenly added const
  platform/x86: intel_scu_wdt: Get rid of custom x86 model comparison
  platform/x86: intel_scu_wdt: Drop SCU notification
  platform/x86: intel_scu_wdt: Move driver from arch/x86
  watchdog: intel-mid_wdt: Postpone IRQ handler registration till SCU is ready
  watchdog: intel_scu_watchdog: Remove driver for deprecated platform
  rtc: mrst: Remove driver for deprecated platform
  platform/x86: intel_mid_powerbtn: Remove driver for deprecated platform
  platform/x86: intel_mid_thermal: Remove driver for deprecated platform
  gpio: msic: Remove driver for deprecated platform
  gpio: intel-mid: Remove driver for deprecated platform
  drm/gma500: Get rid of duplicate NULL checks
  drm/gma500: Convert to use new SCU IPC API
2021-02-09  KVM: x86/xen: Allow reset of Xen attributes  (David Woodhouse)

In order to support Xen SHUTDOWN_soft_reset (for guest kexec, etc.) the VMM needs to be able to tear everything down and return the Xen features to a clean slate.

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Message-Id: <20210208232326.1830370-1-dwmw2@infradead.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-02-09  KVM: x86/mmu: Make HVA handler retpoline-friendly  (Maciej S. Szmigiero)

When retpolines are enabled they have high overhead in the inner loop inside kvm_handle_hva_range() that iterates over the provided memory area.

Let's mark this function and its TDP MMU equivalent __always_inline so the compiler will be able to change the call to the actual handler function inside each of them into a direct one.

This significantly improves performance on the unmap test on the existing kernel memslot code (tested on a Xeon 8167M machine):

30 slots in use:
Test        Before    After      Improvement
Unmap       0.0353s   0.0334s    5%
Unmap 2M    0.00104s  0.000407s  61%

509 slots in use:
Test        Before    After      Improvement
Unmap       0.0742s   0.0740s    None
Unmap 2M    0.00221s  0.00159s   28%

Looks like having an indirect call in these functions (and, so, a retpoline) might have interfered with unrolling of the whole loop in the CPU.

Signed-off-by: Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
Message-Id: <732d3fe9eb68aa08402a638ab0309199fa89ae56.1612810129.git.maciej.szmigiero@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
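A sketch of the pattern (illustrative types and names; not the actual kvm_handle_hva_range() signature):

    /* Sketch only: illustrative, not the real KVM iterator. */
    #define __always_inline inline __attribute__((always_inline))

    typedef int (*hva_handler_t)(unsigned long hva);

    static __always_inline int for_each_hva(unsigned long start,
                                            unsigned long end,
                                            hva_handler_t handler)
    {
            unsigned long hva;
            int ret = 0;

            /*
             * Once this function is inlined into a caller that passes a
             * known handler, the compiler can turn the indirect call into
             * a direct one, avoiding a retpoline thunk per iteration.
             */
            for (hva = start; hva < end; hva += 4096)
                    ret |= handler(hva);

            return ret;
    }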
2021-02-09  KVM: x86: hyper-v: Drop hv_vcpu_to_vcpu() helper  (Vitaly Kuznetsov)

The hv_vcpu_to_vcpu() helper is only used by other helpers and is not very complex, so we can drop it without much regret.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20210126134816.1880136-16-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-02-09  KVM: x86: hyper-v: Allocate Hyper-V context lazily  (Vitaly Kuznetsov)

Hyper-V context is only needed for guests which use Hyper-V emulation in KVM (e.g. Windows/Hyper-V guests) so we don't actually need to allocate it in kvm_arch_vcpu_create(), we can postpone the action until Hyper-V specific MSRs are accessed or SynIC is enabled.

Once allocated, let's keep the context alive for the lifetime of the vCPU as an attempt to free it would require additional synchronization with other vCPUs and normally it is not supposed to happen.

Note, Hyper-V style hypercall enablement is done by writing to HV_X64_MSR_GUEST_OS_ID so we don't need to worry about allocating Hyper-V context from kvm_hv_hypercall().

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20210126134816.1880136-15-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
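A sketch of the lazy allocation (kernel context; the real helper name, struct layout and synchronization differ):

    /* Sketch only: simplified lazy allocation. */
    #include <linux/kvm_host.h>
    #include <linux/slab.h>

    static int kvm_hv_vcpu_init_sketch(struct kvm_vcpu *vcpu)
    {
            struct kvm_vcpu_hv *hv_vcpu;

            if (vcpu->arch.hyperv)    /* keep for the vCPU's lifetime */
                    return 0;

            hv_vcpu = kzalloc(sizeof(*hv_vcpu), GFP_KERNEL_ACCOUNT);
            if (!hv_vcpu)
                    return -ENOMEM;

            vcpu->arch.hyperv = hv_vcpu;
            return 0;
    }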
2021-02-09  KVM: x86: hyper-v: Make Hyper-V emulation enablement conditional  (Vitaly Kuznetsov)

Hyper-V emulation is enabled in KVM unconditionally. This is bad at least from the security standpoint as it is an extra attack surface.

Ideally, there should be a per-VM capability explicitly enabled by the VMM but currently it is not the case and we can't mandate one without breaking backwards compatibility. We can, however, check guest visible CPUIDs and only enable Hyper-V emulation when the "Hv#1" interface was exposed in HYPERV_CPUID_INTERFACE.

Note, VMMs are free to act in any sequence they like, e.g. they can try to set MSRs first and CPUIDs later so we still need to allow the host to read/write Hyper-V specific MSRs unconditionally.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20210126134816.1880136-14-vkuznets@redhat.com>
[Add selftest vcpu_set_hv_cpuid API to avoid breaking xen_vmcall_test. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
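A sketch of the CPUID check (kernel context; the helper name guest_exposes_hv1 is illustrative, and the signature constant is spelled out for clarity):

    /* Sketch only: simplified guest CPUID check. */
    #include <linux/kvm_host.h>

    #define HYPERV_CPUID_INTERFACE      0x40000001
    #define HYPERV_CPUID_SIGNATURE_EAX  0x31237648    /* "Hv#1" */

    static bool guest_exposes_hv1(struct kvm_vcpu *vcpu)
    {
            struct kvm_cpuid_entry2 *entry;

            entry = kvm_find_cpuid_entry(vcpu, HYPERV_CPUID_INTERFACE, 0);
            return entry && entry->eax == HYPERV_CPUID_SIGNATURE_EAX;
    }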
2021-02-09  KVM: x86: hyper-v: Allocate 'struct kvm_vcpu_hv' dynamically  (Vitaly Kuznetsov)

Hyper-V context is only needed for guests which use Hyper-V emulation in KVM (e.g. Windows/Hyper-V guests). 'struct kvm_vcpu_hv' is, however, quite big, it accounts for more than 1/4 of the total 'struct kvm_vcpu_arch' which is also quite big already. This all looks like a waste.

Allocate 'struct kvm_vcpu_hv' dynamically. This patch does not bring any (intentional) functional change as we still allocate the context unconditionally but it paves the way to doing that only when needed.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20210126134816.1880136-13-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-02-09  KVM: x86: hyper-v: Prepare to meet unallocated Hyper-V context  (Vitaly Kuznetsov)

Currently, Hyper-V context is part of 'struct kvm_vcpu_arch' and is always available. As a preparation to allocating it dynamically, check that it is not NULL at call sites which can normally proceed without it, i.e. the behavior is identical to the situation when Hyper-V emulation is not being used by the guest.

When Hyper-V context for a particular vCPU is not allocated, we may still need to get 'vp_index' from there. E.g. in a hypothetical situation when Hyper-V emulation was enabled on one CPU and wasn't on another, Hyper-V style send-IPI hypercall may still be used. Luckily, vp_index is always initialized to kvm_vcpu_get_idx() and can only be changed when Hyper-V context is present. Introduce kvm_hv_get_vpindex() helper for simplification.

No functional change intended.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20210126134816.1880136-12-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
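A sketch of the helper, following the description above (kernel context; may differ in detail from the real kvm_hv_get_vpindex()):

    /* Sketch only: may differ from the actual kernel helper. */
    static inline u32 kvm_hv_get_vpindex(struct kvm_vcpu *vcpu)
    {
            struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);

            /* without a context, vp_index still equals the creation index */
            return hv_vcpu ? hv_vcpu->vp_index : kvm_vcpu_get_idx(vcpu);
    }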
2021-02-09  KVM: x86: hyper-v: Always use to_hv_vcpu() accessor to get to 'struct kvm_vcpu_hv'  (Vitaly Kuznetsov)

As a preparation to allocating Hyper-V context dynamically, make it clear who's the user of the said context.

No functional change intended.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20210126134816.1880136-11-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-02-09  KVM: x86: hyper-v: Stop shadowing global 'current_vcpu' variable  (Vitaly Kuznetsov)

The 'current_vcpu' variable in KVM is a per-cpu pointer to the currently scheduled vcpu. The kvm_hv_flush_tlb()/kvm_hv_send_ipi() functions used to have a local 'vcpu' variable to iterate over vCPUs but it's gone now and there's no need to use anything but the standard 'vcpu' as an argument.

Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20210126134816.1880136-10-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>