path: root/arch/x86
Age | Commit message | Author
2015-08-12perf/x86: Fix MSR PMU driverPeter Zijlstra
Currently we only update the sysfs event files per available MSR; we don't actually disallow creating unlisted events. Rework things such that the detection, sysfs listing and event creation are better coordinated. Sadly it appears it's impossible to probe R/O MSRs under virt. This means we have to do the full model table to avoid listing all MSRs all the time. Tested-by: Kan Liang <kan.liang@intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Andy Lutomirski <luto@amacapital.net> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Ingo Molnar <mingo@kernel.org>
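A minimal sketch of why write-free probing falls short under virt, assuming rdmsrl_safe() semantics (returns 0 if the RDMSR did not fault); the function name is illustrative, not the driver's actual code:

    #include <asm/msr.h>

    static bool msr_event_readable(unsigned int msr)
    {
            u64 val;

            /* A faulting RDMSR means the MSR is not implemented. */
            if (rdmsrl_safe(msr, &val))
                    return false;

            /*
             * Under virt, an unimplemented R/O MSR may read back 0
             * without faulting, so a successful read alone is not
             * enough -- hence the full CPU model table in the driver.
             */
            return true;
    }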
2015-08-12Merge branch 'perf/urgent' into perf/core, to pick up fixes before applying new changesIngo Molnar
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-08-12perf/x86/intel/cqm: Do not access cpu_data() from CPU_UP_PREPARE handlerMatt Fleming
Tony reports that booting his 144-cpu machine with maxcpus=10 triggers the following WARN_ON(): [ 21.045727] WARNING: CPU: 8 PID: 647 at arch/x86/kernel/cpu/perf_event_intel_cqm.c:1267 intel_cqm_cpu_prepare+0x75/0x90() [ 21.045744] CPU: 8 PID: 647 Comm: systemd-udevd Not tainted 4.2.0-rc4 #1 [ 21.045745] Hardware name: Intel Corporation BRICKLAND/BRICKLAND, BIOS BRHSXSD1.86B.0066.R00.1506021730 06/02/2015 [ 21.045747] 0000000000000000 0000000082771b09 ffff880856333ba8 ffffffff81669b67 [ 21.045748] 0000000000000000 0000000000000000 ffff880856333be8 ffffffff8107b02a [ 21.045750] ffff88085b789800 ffff88085f68a020 ffffffff819e2470 000000000000000a [ 21.045750] Call Trace: [ 21.045757] [<ffffffff81669b67>] dump_stack+0x45/0x57 [ 21.045759] [<ffffffff8107b02a>] warn_slowpath_common+0x8a/0xc0 [ 21.045761] [<ffffffff8107b15a>] warn_slowpath_null+0x1a/0x20 [ 21.045762] [<ffffffff81036725>] intel_cqm_cpu_prepare+0x75/0x90 [ 21.045764] [<ffffffff81036872>] intel_cqm_cpu_notifier+0x42/0x160 [ 21.045767] [<ffffffff8109a33d>] notifier_call_chain+0x4d/0x80 [ 21.045769] [<ffffffff8109a44e>] __raw_notifier_call_chain+0xe/0x10 [ 21.045770] [<ffffffff8107b538>] _cpu_up+0xe8/0x190 [ 21.045771] [<ffffffff8107b65a>] cpu_up+0x7a/0xa0 [ 21.045774] [<ffffffff8165e920>] cpu_subsys_online+0x40/0x90 [ 21.045777] [<ffffffff81433b37>] device_online+0x67/0x90 [ 21.045778] [<ffffffff81433bea>] online_store+0x8a/0xa0 [ 21.045782] [<ffffffff81430e78>] dev_attr_store+0x18/0x30 [ 21.045785] [<ffffffff8126b6ba>] sysfs_kf_write+0x3a/0x50 [ 21.045786] [<ffffffff8126ad40>] kernfs_fop_write+0x120/0x170 [ 21.045789] [<ffffffff811f0b77>] __vfs_write+0x37/0x100 [ 21.045791] [<ffffffff811f38b8>] ? __sb_start_write+0x58/0x110 [ 21.045795] [<ffffffff81296d2d>] ? security_file_permission+0x3d/0xc0 [ 21.045796] [<ffffffff811f1279>] vfs_write+0xa9/0x190 [ 21.045797] [<ffffffff811f2075>] SyS_write+0x55/0xc0 [ 21.045800] [<ffffffff81067300>] ? do_page_fault+0x30/0x80 [ 21.045804] [<ffffffff816709ae>] entry_SYSCALL_64_fastpath+0x12/0x71 [ 21.045805] ---[ end trace fe228b836d8af405 ]--- The root cause is that CPU_UP_PREPARE is completely the wrong notifier action from which to access cpu_data(), because smp_store_cpu_info() won't have been executed by the target CPU at that point, which in turn means that ->x86_cache_max_rmid and ->x86_cache_occ_scale haven't been filled out. Instead let's invoke our handler from CPU_STARTING and rename it appropriately. Reported-by: Tony Luck <tony.luck@intel.com> Signed-off-by: Matt Fleming <matt.fleming@intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Ashok Raj <ashok.raj@intel.com> Cc: Kanaka Juvva <kanaka.d.juvva@intel.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vikas Shivappa <vikas.shivappa@intel.com> Link: http://lkml.kernel.org/r/1438863163-14083-1-git-send-email-matt@codeblueprint.co.uk Signed-off-by: Ingo Molnar <mingo@kernel.org>
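The shape of the fix, sketched against the 4.2-era hotplug notifier API (handler names follow the commit's description; bodies elided):

    #include <linux/cpu.h>
    #include <linux/notifier.h>

    static int intel_cqm_cpu_notifier(struct notifier_block *nb,
                                      unsigned long action, void *hcpu)
    {
            unsigned int cpu = (unsigned long)hcpu;

            switch (action & ~CPU_TASKS_FROZEN) {
            case CPU_STARTING:      /* was: CPU_UP_PREPARE */
                    /* runs on the target CPU, after smp_store_cpu_info() */
                    intel_cqm_cpu_starting(cpu);
                    break;
            case CPU_DOWN_PREPARE:
                    intel_cqm_cpu_exit(cpu);
                    break;
            }
            return NOTIFY_OK;
    }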
2015-08-12perf/x86/intel: Fix memory leak on hot-plug allocation failPeter Zijlstra
We fail to free the shared_regs allocation if the constraint_list allocation fails. Cure this and be more consistent in NULL-ing the pointers after free. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Cc: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Ingo Molnar <mingo@kernel.org>
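A sketch of the corrected error path (allocation size and function name are illustrative; the pattern -- free the earlier allocation and NULL both pointers -- is the point):

    #include <linux/slab.h>

    static int intel_pmu_cpu_prepare_sketch(struct cpu_hw_events *cpuc, int cpu)
    {
            cpuc->shared_regs = allocate_shared_regs(cpu);
            if (!cpuc->shared_regs)
                    return NOTIFY_BAD;

            cpuc->constraint_list = kzalloc(sizeof(struct event_constraint) *
                                            X86_PMC_IDX_MAX, GFP_KERNEL);
            if (!cpuc->constraint_list) {
                    kfree(cpuc->shared_regs);       /* previously leaked */
                    cpuc->shared_regs = NULL;
                    return NOTIFY_BAD;
            }
            return NOTIFY_OK;
    }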
2015-08-11KVM: x86/vPMU: Fix unnecessary signed extension for AMD PERFCTRnWei Huang
According to the AMD programmer's manual, AMD PERFCTRn is a 64-bit MSR which, unlike Intel perf counters, doesn't require sign extension. This patch removes the unnecessary conversion in the SVM vPMU code when PERFCTRn is being updated. Signed-off-by: Wei Huang <wei@redhat.com> Reviewed-by: Andrew Jones <drjones@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
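The shape of the change in the SVM vPMU MSR handler, sketched as a fragment (helper names per the 4.2-era KVM pmu code):

    /* in amd_pmu_set_msr(), for a guest write to PERFCTRn: */
    pmc = get_gp_pmc(pmu, msr, MSR_K7_PERFCTR0);
    if (pmc) {
            /* was: pmc->counter += (s64)(s32)data - pmc_read_counter(pmc); */
            pmc->counter += data - pmc_read_counter(pmc);
            return 0;
    }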
2015-08-11kvm: x86: Fix error handling in the function kvm_lapic_sync_from_vapicNicholas Krause
This fixes error handling in kvm_lapic_sync_from_vapic() by checking whether the call to kvm_read_guest_cached() has returned an error code. If it has, we must return immediately to the caller to avoid incorrectly calling apic_set_tpr() after a failed read. Signed-off-by: Nicholas Krause <xerofoify@gmail.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
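The resulting function, sketched from the description above (4.2-era helpers):

    static void kvm_lapic_sync_from_vapic(struct kvm_vcpu *vcpu)
    {
            u32 data;

            if (!test_bit(KVM_APIC_CHECK_VAPIC, &vcpu->arch.apic_attention))
                    return;

            /* the added check: bail out instead of using stale 'data' */
            if (kvm_read_guest_cached(vcpu->kvm, &vcpu->arch.apic->vapic_cache,
                                      &data, sizeof(u32)))
                    return;

            apic_set_tpr(vcpu->arch.apic, data & 0xff);
    }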
2015-08-10x86/xen: build "Xen PV" APIC driver for domU as wellJason A. Donenfeld
It turns out that a PV domU also requires the "Xen PV" APIC driver. Otherwise, the flat driver is used and we get stuck in busy loops that never exit, such as in this stack trace: (gdb) target remote localhost:9999 Remote debugging using localhost:9999 __xapic_wait_icr_idle () at ./arch/x86/include/asm/ipi.h:56 56 while (native_apic_mem_read(APIC_ICR) & APIC_ICR_BUSY) (gdb) bt #0 __xapic_wait_icr_idle () at ./arch/x86/include/asm/ipi.h:56 #1 __default_send_IPI_shortcut (shortcut=<optimized out>, dest=<optimized out>, vector=<optimized out>) at ./arch/x86/include/asm/ipi.h:75 #2 apic_send_IPI_self (vector=246) at arch/x86/kernel/apic/probe_64.c:54 #3 0xffffffff81011336 in arch_irq_work_raise () at arch/x86/kernel/irq_work.c:47 #4 0xffffffff8114990c in irq_work_queue (work=0xffff88000fc0e400) at kernel/irq_work.c:100 #5 0xffffffff8110c29d in wake_up_klogd () at kernel/printk/printk.c:2633 #6 0xffffffff8110ca60 in vprintk_emit (facility=0, level=<optimized out>, dict=0x0 <irq_stack_union>, dictlen=<optimized out>, fmt=<optimized out>, args=<optimized out>) at kernel/printk/printk.c:1778 #7 0xffffffff816010c8 in printk (fmt=<optimized out>) at kernel/printk/printk.c:1868 #8 0xffffffffc00013ea in ?? () #9 0x0000000000000000 in ?? () Mailing-list-thread: https://lkml.org/lkml/2015/8/4/755 Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com> Cc: <stable@vger.kernel.org> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
2015-08-10clockevents/drivers/i8253: Migrate to new 'set-state' interfaceViresh Kumar
Migrate i8253 driver to the new 'set-state' interface provided by clockevents core, the earlier 'set-mode' interface is marked obsolete now. This also enables us to implement callbacks for new states of clockevent devices, for example: ONESHOT_STOPPED. Cc: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
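For reference, the new interface looks roughly like this (a sketch; callback bodies elided -- note the state callbacks return int, unlike the old void set_mode):

    #include <linux/clockchips.h>

    static int pit_shutdown(struct clock_event_device *evt)
    {
            /* stop channel 0 of the 8253 */
            return 0;
    }

    static int pit_set_periodic(struct clock_event_device *evt)
    {
            /* program channel 0 as a rate generator */
            return 0;
    }

    static struct clock_event_device i8253_clockevent = {
            .name                   = "pit",
            .features               = CLOCK_EVT_FEAT_PERIODIC,
            /* new 'set-state' callbacks replacing .set_mode: */
            .set_state_shutdown     = pit_shutdown,
            .set_state_periodic     = pit_set_periodic,
    };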
2015-08-09bpf: Make the bpf_prog_array_map more genericWang Nan
All the map backends are of generic nature. In order to avoid adding much special code into the eBPF core, rewrite part of the bpf_prog_array map code and make it more generic, so that the new perf_event_array map type can reuse most of the code with bpf_prog_array map and add only a few lines of special code. Signed-off-by: Wang Nan <wangnan0@huawei.com> Signed-off-by: Kaixu Xia <xiakaixu@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2015-08-09Merge 4.2-rc6 into char-misc-nextGreg Kroah-Hartman
We want the fixes in Linus's tree in here as well. Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2015-08-08x86/vdso: Emit a GNU hashAndy Lutomirski
Some dynamic loaders may be slightly faster if a GNU hash is available. Strangely, this seems to have no effect at all on the vdso size. This is unlikely to have any measurable effect on the time it takes to resolve vdso symbols (since there are so few of them). In some contexts, it can be a win for a different reason: if every DSO has a GNU hash section, then libc can avoid calculating SysV hashes at all. Both musl and glibc appear to have this optimization. It's plausible that this breaks some ancient glibc version. If so, then, depending on what glibc versions break, we could either require COMPAT_VDSO for them or consider reverting. Signed-off-by: Andy Lutomirski <luto@amacapital.net> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Isaac Dunham <ibid.ag@gmail.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Nathan Lynch <nathan_lynch@mentor.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Rich Felker <dalias@libc.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: musl@lists.openwall.com <musl@lists.openwall.com> Link: http://lkml.kernel.org/r/fd56cc057a2d62ab31c56a48d04fccb435b3fd4f.1438897382.git.luto@kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-08-08acpi, x86: Implement arch_apei_get_mem_attributes()Jonathan (Zhixiong) Zhang
... to allow an arch-specific implementation of getting the page protection type associated with a physical address. On x86, we currently have no way to look up the EFI memory map attributes for a region in a consistent way, because the memmap is discarded after efi_free_boot_services(). So if you call efi_mem_attributes() during boot and at runtime, you could theoretically see different attributes. Since we have yet to see any x86 platforms that require anything other than PAGE_KERNEL (some arm64 platforms require the equivalent of PAGE_KERNEL_NOCACHE), return that until we know differently. Signed-off-by: Jonathan (Zhixiong) Zhang <zjzhang@codeaurora.org> Signed-off-by: Matt Fleming <matt.fleming@intel.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/1438936621-5215-5-git-send-email-matt@codeblueprint.co.uk [ Small fixes to spelling. ] Signed-off-by: Ingo Molnar <mingo@kernel.org>
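A minimal sketch of the x86 hook as described (function name per the commit title):

    #include <linux/types.h>
    #include <asm/pgtable_types.h>

    pgprot_t arch_apei_get_mem_attributes(phys_addr_t addr)
    {
            /*
             * No consistent way to consult the EFI memmap at runtime
             * on x86 yet, and no known x86 platform needs anything
             * other than normal cached kernel mappings.
             */
            return PAGE_KERNEL;
    }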
2015-08-08efi, x86: Rearrange efi_mem_attributes()Jonathan (Zhixiong) Zhang
x86 and ia64 implement efi_mem_attributes() differently. This function needs to be available for other architectures (such as arm64) as well, for example for the purposes of ACPI/APEI. ia64 EFI does not set up a 'memmap' variable and does not set the EFI_MEMMAP flag, so it needs its own unique implementation of efi_mem_attributes(). Move the efi_mem_attributes() implementation from x86 to the core EFI code, and declare it with __weak. It is recommended that other architectures should not override the default implementation. Signed-off-by: Jonathan (Zhixiong) Zhang <zjzhang@codeaurora.org> Signed-off-by: Matt Fleming <matt.fleming@intel.com> Reviewed-by: Matt Fleming <matt.fleming@intel.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/1438936621-5215-4-git-send-email-matt@codeblueprint.co.uk Signed-off-by: Ingo Molnar <mingo@kernel.org>
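Sketched from the description, the now-shared default (4.2-era memmap API), overridable because it is declared __weak:

    #include <linux/efi.h>

    u64 __weak efi_mem_attributes(unsigned long phys_addr)
    {
            efi_memory_desc_t *md;

            if (!efi_enabled(EFI_MEMMAP))
                    return 0;

            for_each_efi_memory_desc(&memmap, md) {
                    if ((md->phys_addr <= phys_addr) &&
                        (phys_addr < (md->phys_addr +
                                      (md->num_pages << EFI_PAGE_SHIFT))))
                            return md->attribute;
            }
            return 0;
    }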
2015-08-08Revert "x86/efi: Request desired alignment via the PE/COFF headers"Matt Fleming
This reverts commit: aeffc4928ea2 ("x86/efi: Request desired alignment via the PE/COFF headers") Linn reports that Signtool complains that kernels built with CONFIG_EFI_STUB=y are violating the PE/COFF specification because the 'SizeOfImage' field is not a multiple of 'SectionAlignment'. This violation was introduced as an optimisation to skip having the kernel relocate itself during boot and instead have the firmware place it at a correctly aligned address. No one else has complained and I'm not aware of any firmware implementations that refuse to boot with commit aeffc4928ea2, but it's a real bug, so revert the offending commit. Reported-by: Linn Crosetto <linn@hp.com> Signed-off-by: Matt Fleming <matt.fleming@intel.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Michael Brown <mbrown@fensystems.co.uk> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/1438936621-5215-3-git-send-email-matt@codeblueprint.co.uk Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-08-08x86/efi-bgrt: Switch pr_err() to pr_debug() for invalid BGRTMatt Fleming
It's totally legitimate, per the ACPI spec, for the firmware to set the BGRT 'status' field to zero to indicate that the BGRT image isn't being displayed, and we shouldn't be printing an error message in that case because it's just noise for users. So swap pr_err() for pr_debug(). However, Josh points out that it still makes sense to test the validity of the upper 7 bits of the 'status' field, since they're marked as "reserved" in the spec and must be zero. If firmware violates this it really *is* an error. Reported-by: Tom Yan <tom.ty89@gmail.com> Tested-by: Tom Yan <tom.ty89@gmail.com> Signed-off-by: Matt Fleming <matt.fleming@intel.com> Reviewed-by: Josh Triplett <josh@joshtriplett.org> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Matthew Garrett <mjg59@srcf.ucam.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/1438936621-5215-2-git-send-email-matt@codeblueprint.co.uk Signed-off-by: Ingo Molnar <mingo@kernel.org>
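The resulting checks, sketched as a fragment (BGRT 'status' layout per the ACPI spec: bit 0 = displayed, bits 7:1 reserved):

    /* reserved bits set: genuinely invalid firmware, keep pr_err() */
    if (bgrt_tab->status & 0xfe) {
            pr_err("Ignoring BGRT: reserved status bits are non-zero %u\n",
                   bgrt_tab->status);
            return;
    }
    /* image merely not displayed: legitimate, demoted to pr_debug() */
    if (!(bgrt_tab->status & 0x01))
            pr_debug("BGRT image not currently displayed by firmware\n");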
2015-08-08x86/ldt: Correct FPU emulation access to LDTJuergen Gross
Commit 37868fe113ff ("x86/ldt: Make modify_ldt synchronous") introduced a new struct ldt_struct anchored at mm->context.ldt. Adapt the x86 fpu emulation code to use that new structure. Signed-off-by: Juergen Gross <jgross@suse.com> Reviewed-by: Andy Lutomirski <luto@kernel.org> Cc: <stable@vger.kernel.org> # On top of: 37868fe113ff: x86/ldt: Make modify_ldt synchronous Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: billm@melbpc.org.au Link: http://lkml.kernel.org/r/1438883674-1240-1-git-send-email-jgross@suse.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-08-08x86/ldt: Correct LDT access in single stepping logicJuergen Gross
Commit 37868fe113ff ("x86/ldt: Make modify_ldt synchronous") introduced a new struct ldt_struct anchored at mm->context.ldt. convert_ip_to_linear() was changed to reflect this, but indexing into the ldt has to be changed as the pointer is no longer void *. Signed-off-by: Juergen Gross <jgross@suse.com> Reviewed-by: Andy Lutomirski <luto@kernel.org> Cc: <stable@vger.kernel.org> # On top of: 37868fe113ff: x86/ldt: Make modify_ldt synchronous Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: bp@suse.de Link: http://lkml.kernel.org/r/1438848278-12906-1-git-send-email-jgross@suse.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
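The gist of the indexing fix in convert_ip_to_linear(), sketched (ldt->entries is now a struct desc_struct *, so index by descriptor, not by byte offset; surrounding locking elided):

    /* seg is the segment selector; seg >> 3 is the descriptor index */
    unsigned int idx = seg >> 3;
    struct desc_struct *desc;

    if (!ldt || idx >= ldt->size)
            return -1L;             /* bogus selector, access would fault */

    desc = &ldt->entries[idx];      /* was: byte arithmetic on a void * */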
2015-08-07KVM: x86: Use adjustment in guest cycles when handling MSR_IA32_TSC_ADJUSTHaozhong Zhang
When kvm_set_msr_common() handles a guest's write to MSR_IA32_TSC_ADJUST, it will calculate an adjustment based on the data written by the guest and then use it to adjust the TSC offset by calling a call-back adjust_tsc_offset(). The 3rd parameter of adjust_tsc_offset() indicates whether the adjustment is in host TSC cycles or in guest TSC cycles. If SVM TSC scaling is enabled, adjust_tsc_offset() [i.e. svm_adjust_tsc_offset()] will first scale the adjustment; otherwise, it will just use the unscaled one. As the MSR write here comes from the guest, the adjustment is in guest TSC cycles. However, the current kvm_set_msr_common() uses it as a value in host TSC cycles (by using true as the 3rd parameter of adjust_tsc_offset()), which can result in an incorrect adjustment of the TSC offset if SVM TSC scaling is enabled. This patch fixes this problem. Signed-off-by: Haozhong Zhang <haozhong.zhang@intel.com> Cc: stable@vger.kernel.org Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
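The shape of the fix in kvm_set_msr_common(), sketched as a fragment (helper names per the 4.2-era KVM code; a guest-cycles wrapper lets SVM TSC scaling be applied):

    /* inside kvm_set_msr_common()'s switch (msr): */
    case MSR_IA32_TSC_ADJUST:
            if (guest_cpuid_has_tsc_adjust(vcpu)) {
                    if (!msr_info->host_initiated) {
                            s64 adj = data - vcpu->arch.ia32_tsc_adjust_msr;
                            /* was: a host-cycles adjustment */
                            adjust_tsc_offset_guest(vcpu, adj);
                    }
                    vcpu->arch.ia32_tsc_adjust_msr = data;
            }
            break;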
2015-08-07KVM: x86: zero IDT limit on entry to SMMPaolo Bonzini
The recent BlackHat 2015 presentation "The Memory Sinkhole" mentions that the IDT limit is zeroed on entry to SMM. This is not documented, and must have changed some time after 2010 (see http://www.ssi.gouv.fr/uploads/IMG/pdf/IT_Defense_2010_final.pdf). KVM was not doing it, but the fix is easy. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
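The fix is indeed small; a sketch using KVM's existing descriptor-table hook:

    /* on entry to SMM, alongside the other CPU state resets: */
    struct desc_ptr dt;

    dt.address = dt.size = 0;
    kvm_x86_ops->set_idt(vcpu, &dt);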
2015-08-05bus: subsys: update return type of ->remove_dev() to voidViresh Kumar
Its return value is not used by the subsys core, and nothing meaningful could be done with it even if we wanted to use it; the subsys device is getting removed anyway. Update the prototype of ->remove_dev() to make its return type void, and fix all usage sites as well. Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
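The resulting prototype in struct subsys_interface, sketched (see include/linux/device.h):

    struct subsys_interface {
            const char *name;
            struct bus_type *subsys;
            struct list_head node;
            int (*add_dev)(struct device *dev, struct subsys_interface *sif);
            /* was: int (*remove_dev)(...) -- the value was never consumed */
            void (*remove_dev)(struct device *dev, struct subsys_interface *sif);
    };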
2015-08-06x86/irq: Store irq descriptor in vector arrayThomas Gleixner
We can spare the irq_desc lookup in the interrupt entry code if we store the descriptor pointer in the vector array instead of the interrupt number. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: Jiang Liu <jiang.liu@linux.intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Rusty Russell <rusty@rustcorp.com.au> Cc: Bjorn Helgaas <bhelgaas@google.com> Link: http://lkml.kernel.org/r/20150802203609.717724106@linutronix.de Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
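Sketch of the resulting arrangement (sentinel values illustrative): the per-CPU array holds descriptor pointers, so the entry code can skip irq_to_desc():

    /* arch/x86/include/asm/hw_irq.h (sketch) */
    typedef struct irq_desc *vector_irq_t[NR_VECTORS];
    DECLARE_PER_CPU(vector_irq_t, vector_irq);

    #define VECTOR_UNUSED           NULL
    #define VECTOR_RETRIGGERED      ((void *)~0UL)

    /* entry code (sketch): no irq_desc lookup needed any more */
    struct irq_desc *desc = __this_cpu_read(vector_irq[vector]);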
2015-08-06x86/irq: Get rid of an indentation levelThomas Gleixner
Make the code simpler to read. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: Jiang Liu <jiang.liu@linux.intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Rusty Russell <rusty@rustcorp.com.au> Cc: Bjorn Helgaas <bhelgaas@google.com> Link: http://lkml.kernel.org/r/20150802203609.555253675@linutronix.de Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2015-08-06x86/irq: Rename VECTOR_UNDEFINED to VECTOR_UNUSEDThomas Gleixner
VECTOR_UNDEFINED is a misnomer. The vector is defined, but unused. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: Jiang Liu <jiang.liu@linux.intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Rusty Russell <rusty@rustcorp.com.au> Cc: Bjorn Helgaas <bhelgaas@google.com> Link: http://lkml.kernel.org/r/20150802203609.477282494@linutronix.de Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2015-08-06x86/irq: Replace numeric constantThomas Gleixner
Use the proper define instead of 0. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: Jiang Liu <jiang.liu@linux.intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Rusty Russell <rusty@rustcorp.com.au> Cc: Bjorn Helgaas <bhelgaas@google.com> Link: http://lkml.kernel.org/r/20150802203609.385495420@linutronix.de Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2015-08-06x86/irq: Protect smp_cleanup_moveThomas Gleixner
smp_cleanup_move() fiddles with the interrupt descriptors and the vector array without protection. A concurrent irq setup/teardown or affinity setting can pull the rug out from under that operation. Add proper locking. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: Jiang Liu <jiang.liu@linux.intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Rusty Russell <rusty@rustcorp.com.au> Cc: Bjorn Helgaas <bhelgaas@google.com> Link: http://lkml.kernel.org/r/20150802203609.222975294@linutronix.de Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2015-08-06x86/lguest: Do not setup unused irq vectorsThomas Gleixner
No point in assigning the interrupt vectors if there is no interrupt chip installed. Move it to lguest_setup_irq() and call it from lguest_enable_irq. [ rusty: Typo fix and error handling ] Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Link: http://lkml.kernel.org/r/1438662776-4823-2-git-send-email-rusty@rustcorp.com.au Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2015-08-06x86/lguest: Clean up lguest_setup_irqRusty Russell
We make it static and hoist it higher in the file for the next patch. We also give a nice panic if it fails during boot. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Link: http://lkml.kernel.org/r/1438662776-4823-1-git-send-email-rusty@rustcorp.com.au Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2015-08-06Merge branch 'linus' into x86/apicThomas Gleixner
Pull in upstream changes to avoid conflicts
2015-08-05Drivers: hv: vmbus: Implement a clocksource based on the TSC pageK. Y. Srinivasan
The current Hyper-V clock source is based on the per-partition reference counter, and this counter is being accessed via a synthetic MSR - HV_X64_MSR_TIME_REF_COUNT. Hyper-V has a more efficient way of computing the per-partition reference counter value that does not involve reading a synthetic MSR. We implement a time source based on this mechanism. Tested-by: Vivek Yadav <vyadav@microsoft.com> Signed-off-by: K. Y. Srinivasan <kys@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
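A sketch of the TSC-page read protocol this implies (page layout and sequence semantics per the Hyper-V TLFS; helper names illustrative -- the sequence field guards against the host updating the page mid-read):

    #include <linux/math64.h>

    struct ms_hyperv_tsc_page {
            volatile u32 tsc_sequence;      /* 0 => page invalid, use MSR */
            u32 reserved1;
            volatile u64 tsc_scale;
            volatile s64 tsc_offset;
    };

    static u64 read_hv_clock_tsc(const struct ms_hyperv_tsc_page *tsc_pg)
    {
            u64 scale, offset, tsc;
            u32 seq;

            do {
                    seq = READ_ONCE(tsc_pg->tsc_sequence);
                    if (!seq)
                            return U64_MAX; /* fall back to the reference MSR */
                    tsc = rdtsc();
                    scale = READ_ONCE(tsc_pg->tsc_scale);
                    offset = READ_ONCE(tsc_pg->tsc_offset);
            } while (READ_ONCE(tsc_pg->tsc_sequence) != seq);

            /* reference time = ((tsc * scale) >> 64) + offset */
            return mul_u64_u64_shr(tsc, scale, 64) + offset;
    }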
2015-08-05Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvmLinus Torvalds
Pull KVM fixes from Paolo Bonzini: "Just two very small & simple patches" * tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: KVM: MTRR: Use default type for non-MTRR-covered gfn before WARN_ON KVM: s390: Fix hang VCPU hang/loop regression
2015-08-05KVM: VMX: drop ept misconfig checkXiao Guangrong
The logic used to check ept misconfig is completely contained in the common reserved bits check for sptes, so it can be removed. Signed-off-by: Xiao Guangrong <guangrong.xiao@linux.intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-08-05KVM: MMU: fully check zero bits for sptesXiao Guangrong
The #PF with PFEC.RSV = 1 is designed to speed up MMIO emulation; however, it is possible that the RSV #PF is caused by a real BUG, via mis-configured shadow page table entries. This patch enables a full check for the zero bits on shadow page table entries (which include not only bits reserved by the hardware, but also bits that will never be set in the SPTE), and then dumps the shadow page table hierarchy. Signed-off-by: Xiao Guangrong <guangrong.xiao@linux.intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-08-05KVM: MMU: introduce is_shadow_zero_bits_set()Xiao Guangrong
We have the same data struct to check reserved bits on guest page tables and shadow page tables, split is_rsvd_bits_set() so that the logic can be shared between these two paths Signed-off-by: Xiao Guangrong <guangrong.xiao@linux.intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-08-05KVM: MMU: introduce the framework to check zero bits on sptesXiao Guangrong
We have abstracted the data struct and functions which are used to check reserved bits on guest page tables; now we extend the logic to check zero bits on shadow page tables. The zero bits on sptes include not only bits reserved by the hardware, but also bits that SPTEs will never use. For example, shadow pages will never use GB pages unless the guest uses them too. Signed-off-by: Xiao Guangrong <guangrong.xiao@linux.intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-08-05KVM: MMU: split reset_rsvds_bits_mask_eptXiao Guangrong
Since shadow ept page tables and Intel nested guest page tables have the same format, split reset_rsvds_bits_mask_ept so that the logic can be reused by later patches which check zero bits on sptes Signed-off-by: Xiao Guangrong <guangrong.xiao@linux.intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-08-05KVM: MMU: split reset_rsvds_bits_maskXiao Guangrong
Since softmmu & AMD nested shadow page tables and guest page tables have the same format, split reset_rsvds_bits_mask so that the logic can be reused by later patches which check zero bits on sptes Signed-off-by: Xiao Guangrong <guangrong.xiao@linux.intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-08-05KVM: MMU: introduce rsvd_bits_validateXiao Guangrong
These two fields, rsvd_bits_mask and bad_mt_xwr, in "struct kvm_mmu" are used to check whether reserved bits are set on guest ptes; move them to a data struct so that the approach can be applied to check host shadow page table entries as well. Signed-off-by: Xiao Guangrong <guangrong.xiao@linux.intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
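The new struct, per the description above (a per-level mask keyed on bit 7 of the pte, plus the EPT memtype/XWR check mask):

    struct rsvd_bits_validate {
            u64 rsvd_bits_mask[2][4];       /* [bit7 of pte][level - 1] */
            u64 bad_mt_xwr;
    };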
2015-08-05KVM: MMU: move FNAME(is_rsvd_bits_set) to mmu.cXiao Guangrong
FNAME(is_rsvd_bits_set) does not depend on guest mmu mode, move it to mmu.c to stop being compiled multiple times Signed-off-by: Xiao Guangrong <guangrong.xiao@linux.intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-08-05KVM: MMU: fix validation of mmio page faultXiao Guangrong
We got a bug where qemu complained with "KVM: unknown exit, hardware reason 31" and KVM showed this info: [84245.284948] EPT: Misconfiguration. [84245.285056] EPT: GPA: 0xfeda848 [84245.285154] ept_misconfig_inspect_spte: spte 0x5eaef50107 level 4 [84245.285344] ept_misconfig_inspect_spte: spte 0x5f5fadc107 level 3 [84245.285532] ept_misconfig_inspect_spte: spte 0x5141d18107 level 2 [84245.285723] ept_misconfig_inspect_spte: spte 0x52e40dad77 level 1 This is because we got an mmio #PF and the handler saw the mmio spte become normal (pointing to a ram page). This is a valid situation after the introduction of fast mmio spte invalidation, which increases the generation-number instead of zapping mmio sptes. An example is as follows: 1. QEMU drops an mmio region by adding a new memslot. 2. All mmio sptes are invalidated. 3. VCPU 0 accesses the now-invalid mmio spte, while VCPU 1 accesses the region that was originally MMIO and sets the spte to the normal ram map; VCPU 0 then takes an mmio #PF, checks the spte and sees it has become a normal ram mapping !!! This patch fixes the bug just by dropping the check in the mmio handler; it's good for backport. A full check will be introduced in later patches. Reported-by: Pavel Shirshov <ru.pchel@gmail.com> Tested-by: Pavel Shirshov <ru.pchel@gmail.com> Signed-off-by: Xiao Guangrong <guangrong.xiao@linux.intel.com> Cc: stable@vger.kernel.org Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-08-05KVM: MTRR: Use default type for non-MTRR-covered gfn before WARN_ONAlex Williamson
The patch was munged on commit to re-order these tests resulting in excessive warnings when trying to do device assignment. Return to original ordering: https://lkml.org/lkml/2015/7/15/769 Fixes: 3e5d2fdceda1 ("KVM: MTRR: simplify kvm_mtrr_get_guest_memory_type") Signed-off-by: Alex Williamson <alex.williamson@redhat.com> Reviewed-by: Xiao Guangrong <guangrong.xiao@linux.intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-08-05x86/entry: Remove do_notify_resume(), syscall_trace_leave(), and their TIF masksAndy Lutomirski
They are no longer used. Good riddance! Deleting the TIF_ macros is really nice. It was never clear why there were so many variants. Signed-off-by: Andy Lutomirski <luto@kernel.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: Eric Paris <eparis@parisplace.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/22c61682f446628573dde0f1d573ab821677e06da.1438378274.git.luto@kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-08-05x86/entry/32: Migrate to C exit pathAndy Lutomirski
This removes the hybrid asm-and-C implementation of exit work. Signed-off-by: Andy Lutomirski <luto@kernel.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: Eric Paris <eparis@parisplace.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/2baa438619ea6c027b40ec9fceacca52f09c74d09.1438378274.git.luto@kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-08-05x86/entry/32: Remove 32-bit syscall audit optimizationsAndy Lutomirski
The asm audit optimizations are ugly and obfuscate the code too much. Remove them. This will regress performance if syscall auditing is enabled on 32-bit kernels and SYSENTER is in use. If this becomes a problem, interested parties are encouraged to implement the equivalent of the 64-bit opportunistic SYSRET optimization. Alternatively, a case could be made that, on 32-bit kernels, a less messy asm audit optimization could be done. 32-bit kernels don't have the complicated partial register saving tricks that 64-bit kernels have, so the SYSENTER post-syscall path could just call the audit hooks directly. Any reimplementation of this ought to demonstrate that it only calls the audit hook once per syscall, though, which does not currently appear to be true. Someone would have to make the case that doing so would be better than implementing opportunistic SYSEXIT, though. Signed-off-by: Andy Lutomirski <luto@kernel.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: Eric Paris <eparis@parisplace.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/212be39dd8c90b44c4b7bbc678128d6b88bdb9912.1438378274.git.luto@kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-08-05x86/hweight: Force inlining of __arch_hweight{32,64}()Denys Vlasenko
With this config: http://busybox.net/~vda/kernel_config_OPTIMIZE_INLINING_and_Os gcc-4.7.2 generates many copies of these tiny functions: __arch_hweight32 (35 copies): 55 push %rbp e8 66 9b 4a 00 callq __sw_hweight32 48 89 e5 mov %rsp,%rbp 5d pop %rbp c3 retq __arch_hweight64 (8 copies): 55 push %rbp e8 5e c2 8a 00 callq __sw_hweight64 48 89 e5 mov %rsp,%rbp 5d pop %rbp c3 retq See https://gcc.gnu.org/bugzilla/show_bug.cgi?id=66122 This patch fixes this via s/inline/__always_inline/ To avoid touching 32-bit case where such change was not tested to be a win, reformat __arch_hweight64() to have completely disjoint 64-bit and 32-bit implementations. IOW: made #ifdef / 32 bits and 64 bits instead of having #ifdef / #else / #endif inside a single function body. Only 64-bit __arch_hweight64() is __always_inline'd. text data bss dec filename 86971120 17195912 36659200 140826232 vmlinux.before 86970954 17195912 36659200 140826066 vmlinux Signed-off-by: Denys Vlasenko <dvlasenk@redhat.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: David Rientjes <rientjes@google.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Thomas Graf <tgraf@suug.ch> Cc: linux-kernel@vger.kernel.org Link: http://lkml.kernel.org/r/1438697716-28121-2-git-send-email-dvlasenk@redhat.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
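The gist of the change, sketched (the POPCNT alternative() is elided for brevity):

    /* was: static inline ... -- gcc was free to emit out-of-line copies */
    static __always_inline unsigned int __arch_hweight32(unsigned int w)
    {
            return __sw_hweight32(w);       /* real code uses alternative() */
    }

    #ifdef CONFIG_X86_64
    /* 64-bit implementation now fully disjoint from the 32-bit one */
    static __always_inline unsigned long __arch_hweight64(__u64 w)
    {
            return __sw_hweight64(w);       /* real code uses alternative() */
    }
    #endif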
2015-08-04mshyperv: fix recognition of Hyper-V guest crash MSR'sDenis V. Lunev
Hypervisor Top Level Functional Specification v3.1/4.0 notes that cpuid (0x40000003) EDX's 10th bit should be used to check that the Hyper-V guest crash MSR functionality is available. This patch fixes this recognition: currently the code checks the EAX register instead of EDX. Signed-off-by: Andrey Smetanin <asmetanin@virtuozzo.com> Signed-off-by: Denis V. Lunev <den@openvz.org> Signed-off-by: K. Y. Srinivasan <kys@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
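Sketch of the corrected check (bit position per the TLFS; misc_features is the word populated from EDX of CPUID 0x40000003, and the helper name is illustrative):

    #define HV_FEATURE_GUEST_CRASH_MSR_AVAILABLE    (1 << 10)

    static bool hv_crash_msrs_available(void)
    {
            /* ms_hyperv.misc_features is filled from EDX, not EAX */
            return ms_hyperv.misc_features &
                   HV_FEATURE_GUEST_CRASH_MSR_AVAILABLE;
    }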
2015-08-04Drivers: hv: vmbus: add special crash handlerVitaly Kuznetsov
Full kernel hang is observed when the kdump kernel starts after a crash. This hang happens in the vmbus_negotiate_version() function on wait_for_completion(), as the Hyper-V host (Win2012R2 in my testing) never responds to CHANNELMSG_INITIATE_CONTACT since it thinks the connection is already established. We need to perform some mandatory minimalistic cleanup before we start the new kernel. Reported-by: K. Y. Srinivasan <kys@microsoft.com> Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: K. Y. Srinivasan <kys@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2015-08-04Drivers: hv: vmbus: add special kexec handlerVitaly Kuznetsov
When general-purpose kexec (not kdump) is being performed in a Hyper-V guest, the newly booted kernel fails with an MCE error coming from the host. It is the same error which was fixed in the "Drivers: hv: vmbus: Implement the protocol for tearing down vmbus state" commit - monitor pages remain special, and when they're written to (as the new kernel doesn't know these pages are special) bad things happen. We need to perform some minimalistic cleanup before booting a new kernel on kexec. To do so we need to register a special machine_ops.shutdown handler to be executed before native_machine_shutdown(). Registering a shutdown notification handler via the register_reboot_notifier() call is not sufficient, as it happens too early for our purposes. machine_ops is not being exported to modules (and I don't think we want to export it) so let's do this in mshyperv.c. The minimalistic cleanup consists of cleaning up clockevents, synic MSRs, the guest os id MSR, and the hypercall MSR. Kdump doesn't require all this stuff as it lives in a separate memory space. Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: K. Y. Srinivasan <kys@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
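The shape of the hook, sketched (the vmbus module supplies the actual cleanup callback via a registration function; names follow the commit's description):

    #include <asm/reboot.h>

    static void (*hv_kexec_handler)(void);

    static void hv_machine_shutdown(void)
    {
            /* clean up clockevents, synic, guest os id, hypercall MSRs */
            if (kexec_in_progress && hv_kexec_handler)
                    hv_kexec_handler();
            native_machine_shutdown();
    }

    /* from ms_hyperv_init_platform(): */
    machine_ops.shutdown = hv_machine_shutdown;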
2015-08-04Merge branches 'pci/irq', 'pci/misc', 'pci/resource' and 'pci/virtualization' into nextBjorn Helgaas
* pci/irq: PCI/MSI: Free legacy IRQ when enabling MSI/MSI-X PCI: Add helpers to manage pci_dev->irq and pci_dev->irq_managed PCI, x86: Implement pcibios_alloc_irq() and pcibios_free_irq() PCI: Add pcibios_alloc_irq() and pcibios_free_irq() * pci/misc: PCI: Remove unused "pci_probe" flags PCI: Add VPD function 0 quirk for Intel Ethernet devices PCI: Add dev_flags bit to access VPD through function 0 PCI / ACPI: Fix pci_acpi_optimize_delay() comment PCI: Remove a broken link in quirks.c PCI: Remove useless redundant code PCI: Simplify pci_find_(ext_)capability() return value checks PCI: Move PCI_FIND_CAP_TTL to pci.h and use it in quirks PCI: Add pcie_downstream_port() (true for Root and Switch Downstream Ports) PCI: Fix pcie_port_device_resume() comment PCI: Shift PCI_CLASS_NOT_DEFINED consistently with other classes PCI: Revert aeb30016fec3 ("PCI: add Intel USB specific reset method") PCI: Fix TI816X class code quirk PCI: Fix generic NCR 53c810 class code quirk PCI: Use PCI_CLASS_SERIAL_USB instead of bare number PCI: Add quirk for Intersil/Techwell TW686[4589] AV capture cards PCI: Remove Intel Cherrytrail D3 delays * pci/resource: PCI: Call pci_read_bridge_bases() from core instead of arch code * pci/virtualization: PCI: Restore ACS configuration as part of pci_restore_state()
2015-08-04perf/x86/intel/pebs: Robustify PEBS buffer drainPeter Zijlstra
Vince Weaver and Stephane Eranian reported warnings in the PEBS code when running the perf fuzzer. Stephane wrote: > I can reproduce the problem on my HSW running the fuzzer. > > I can see why this could be happening if you are mixing PEBS and non PEBS events > in the bottom 4 counters. I suspect: > for (bit = 0; bit < x86_pmu.max_pebs_events; bit++) { > if ((counts[bit] == 0) && (error[bit] == 0)) > continue; > > This test is not correct when you have non-PEBS events mixed with > PEBS events and they overflow at the same time. They will have > counts[i] != 0 but error[i] == 0, and thus you fall thru the loop > and hit the assert. Or it is something along those lines. The only way I can make this work is if ->status only has !PEBS events set, because if it has both set we'll take that slow path which masks out the !PEBS bits. After masking there are 3 options: - there is one bit set, and its @bit, we increment counts[bit]. - there are multiple bits set, we increment error[] for each set bit, we do not increment counts[]. - there are no bits set, we do nothing. The intent was to never increment counts[] for !PEBS events. Now if we start out with only a single !PEBS event set, we'll pass the test and increment counts[] for a !PEBS and hit the warn. Reported-by: Vince Weaver <vincent.weaver@maine.edu> Reported-by: Stephane Eranian <eranian@google.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Arnaldo Carvalho de Melo <acme@kernel.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: kan.liang@intel.com Signed-off-by: Ingo Molnar <mingo@kernel.org>