path: root/arch/x86/kernel
Age  Commit message  Author
2014-03-04x86: hyperv: Fix brown paperbag typos reported by Fengguang's build robotThomas Gleixner
Reported-by: fengguang.wu@intel.com Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: K. Y. Srinivasan <kys@microsoft.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: linuxdrivers <devel@linuxdriverproject.org> Cc: x86 <x86@kernel.org>
2014-03-04x86: hyperv: Make it build with CONFIG_HYPERV=m againThomas Gleixner
Commit 1aec16967 (x86: Hyperv: Cleanup the irq mess) removed the ability to build the hyperv stuff as a module. Bring it back. Reported-by: fengguang.wu@intel.com Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: K. Y. Srinivasan <kys@microsoft.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: linuxdrivers <devel@linuxdriverproject.org> Cc: x86 <x86@kernel.org>
2014-03-04x86: Remove deprecated IRQF_DISABLEDMichael Opdenacker
This patch removes the IRQF_DISABLED flag from x86 architecture code. It has been a NOOP since 2.6.35 and will be removed one day. Signed-off-by: Michael Opdenacker <michael.opdenacker@free-electrons.com> Cc: venki@google.com Link: http://lkml.kernel.org/r/1393965305-17248-1-git-send-email-michael.opdenacker@free-electrons.com Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2014-03-04x86: Hyperv: Cleanup the irq messThomas Gleixner
The vmbus/hyperv interrupt handling is another complete trainwreck and probably the worst of all currently in tree. If CONFIG_HYPERV=y then the interrupt delivery to the vmbus happens via the direct HYPERVISOR_CALLBACK_VECTOR. So far so good, but: The driver first requests a normal device interrupt. The only reason to do so is to increment the interrupt stats of that device interrupt. For no reason it also installs a private flow handler. We have proper accounting mechanisms for direct vectors, but of course it's too much effort to add those 5 lines of code. Aside from that, the alloc_intr_gate() is not protected against reallocation, which makes module reload impossible. The solution to the problem is simple: rip out the whole mess and implement it correctly. First of all, move all that code to arch/x86/kernel/cpu/mshyperv.c and merely install the HYPERVISOR_CALLBACK_VECTOR with proper reallocation protection and use the proper direct vector accounting mechanism. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: K. Y. Srinivasan <kys@microsoft.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: linuxdrivers <devel@linuxdriverproject.org> Cc: x86 <x86@kernel.org> Link: http://lkml.kernel.org/r/20140223212739.028307673@linutronix.de Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
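A minimal sketch of what the reworked delivery path in arch/x86/kernel/cpu/mshyperv.c can look like, with the accounting done directly in the vector handler (the identifier names hyperv_vector_handler, hv_setup_vmbus_irq and irq_hv_callback_count follow the description above and are not verified against the tree):

        static void (*vmbus_handler)(void);

        void hyperv_vector_handler(struct pt_regs *regs)
        {
                struct pt_regs *old_regs = set_irq_regs(regs);

                irq_enter();
                /* direct vector accounting instead of a faked device interrupt */
                inc_irq_stat(irq_hv_callback_count);
                if (vmbus_handler)
                        vmbus_handler();
                irq_exit();
                set_irq_regs(old_regs);
        }

        void hv_setup_vmbus_irq(void (*handler)(void))
        {
                vmbus_handler = handler;
                /* install the direct vector (reallocation protection omitted in this sketch) */
                alloc_intr_gate(HYPERVISOR_CALLBACK_VECTOR, hyperv_callback_vector);
        }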
2014-03-04x86: Add proper vector accounting for HYPERVISOR_CALLBACK_VECTORThomas Gleixner
HyperV abuses a device interrupt to account for the HYPERVISOR_CALLBACK_VECTOR. Provide proper accounting as we have for the other vectors as well. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Cc: K. Y. Srinivasan <kys@microsoft.com> Cc: x86 <x86@kernel.org> Link: http://lkml.kernel.org/r/20140223212738.681855582@linutronix.de Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2014-03-04efi: Move facility flags to struct efiMatt Fleming
As we grow support for more EFI architectures they're going to want the ability to query which EFI features are available on the running system. Instead of storing this information in an architecture-specific place, stick it in the global 'struct efi', which is already the central location for EFI state. While we're at it, let's change the return value of efi_enabled() to be bool and replace all references to 'facility' with 'feature', which is the usual word used to describe the attributes of the running system. Signed-off-by: Matt Fleming <matt.fleming@intel.com>
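For illustration, a hedged example of what a caller looks like after the change (EFI_RUNTIME_SERVICES is one of the existing feature flags; the surrounding code is made up):

        efi_time_t tm;
        efi_time_cap_t cap;
        efi_status_t status;

        /* efi_enabled() now returns bool; the flag lives in the global struct efi */
        if (efi_enabled(EFI_RUNTIME_SERVICES))
                status = efi.get_time(&tm, &cap);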
2014-03-04Merge tag 'kvm-for-3.15-1' of git://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms into kvm-nextPaolo Bonzini
2014-03-03ftrace/x86: One more missing sync after fixup of function modification failurePetr Mladek
If a failure occurs while modifying an ftrace function, it bails out and removes the tracepoints to get back to what the code originally was. What is missing is the final sync run across the CPUs after the fix up is done and before the ftrace int3 handler flag is reset. Here's the description of the problem: CPU0 CPU1 ---- ---- remove_breakpoint(); modifying_ftrace_code = 0; [still sees breakpoint] <takes trap> [sees modifying_ftrace_code as zero] [no breakpoint handler] [goto failed case] [trap exception - kernel breakpoint, no handler] BUG() Link: http://lkml.kernel.org/r/1393258342-29978-2-git-send-email-pmladek@suse.cz Fixes: 8a4d0a687a5 "ftrace: Use breakpoint method to update ftrace caller" Acked-by: Frederic Weisbecker <fweisbec@gmail.com> Acked-by: H. Peter Anvin <hpa@linux.intel.com> Signed-off-by: Petr Mladek <pmladek@suse.cz> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2014-03-03ftrace/x86: Run a sync after fixup on failureSteven Rostedt (Red Hat)
If a failure occurs while enabling a trace, it bails out and will remove the tracepoints to get back to what the code originally was. But the fix up had some bugs in it. By injecting a failure in the code, the fix up ran to completion, but shortly afterward the system rebooted. There were two bugs here. The first was that there was no final sync run across the CPUs after the fix up was done, and before the ftrace int3 handler flag was reset. That means that other CPUs could still see the breakpoint and trigger on it long after the flag was cleared, and the int3 handler would think it was a spurious interrupt. Worse yet, the int3 handler could hit other breakpoints because the ftrace int3 handler flag would have prevented the int3 handler from going further. Here's a description of the issue: CPU0 CPU1 ---- ---- remove_breakpoint(); modifying_ftrace_code = 0; [still sees breakpoint] <takes trap> [sees modifying_ftrace_code as zero] [no breakpoint handler] [goto failed case] [trap exception - kernel breakpoint, no handler] BUG() The second bug was that the removal of the breakpoints required the "within()" logic instead of accessing the ip address directly: the kernel text is mapped read-only when CONFIG_DEBUG_RODATA is set, and the removal of the breakpoint is a modification of the kernel text. ftrace_write() includes the "within()" logic, whereas probe_kernel_write() does not. This prevented the breakpoint from being removed at all. Link: http://lkml.kernel.org/r/1392650573-3390-1-git-send-email-pmladek@suse.cz Reported-by: Petr Mladek <pmladek@suse.cz> Tested-by: Petr Mladek <pmladek@suse.cz> Acked-by: H. Peter Anvin <hpa@linux.intel.com> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
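A sketch of the fixed fail path (the helpers are the ones this log entry names; the loop and declarations are simplified assumptions):

        struct ftrace_rec_iter *iter;
        struct dyn_ftrace *rec;

        /* roll back every record that may still carry an int3 */
        for_ftrace_rec_iter(iter) {
                rec = ftrace_rec_iter_record(iter);
                /*
                 * ftrace_write() honours the within()/DEBUG_RODATA mapping,
                 * probe_kernel_write() does not, so the breakpoint byte is
                 * really restored even on read-only kernel text.
                 */
                remove_breakpoint(rec);
        }
        /*
         * One more sync across all CPUs before modifying_ftrace_code is
         * cleared, so no CPU can trap on a stale int3 after the ftrace
         * handler stops claiming breakpoints.
         */
        run_sync();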
2014-03-02Merge 3.14-rc5 into driver-core-nextGreg Kroah-Hartman
We want the fixes in here.
2014-03-02Merge branch 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tipLinus Torvalds
Pull perf fixes from Ingo Molnar: "Misc fixes, most of them on the tooling side" * 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: perf tools: Fix strict alias issue for find_first_bit perf tools: fix BFD detection on opensuse perf: Fix hotplug splat perf/x86: Fix event scheduling perf symbols: Destroy unused symsrcs perf annotate: Check availability of annotate when processing samples
2014-02-27amd64_edac: Add support for newer F16h modelsAravind Gopalakrishnan
Extend ECC decoding support for F16h M30h. Tested on F16h M30h with ECC turned on using mce_amd_inj module and the patch works fine. Signed-off-by: Aravind Gopalakrishnan <Aravind.Gopalakrishnan@amd.com> Link: http://lkml.kernel.org/r/1392913726-16961-1-git-send-email-Aravind.Gopalakrishnan@amd.com Tested-by: Arindam Nath <Arindam.Nath@amd.com> Acked-by: H. Peter Anvin <hpa@zytor.com> Signed-off-by: Borislav Petkov <bp@suse.de>
2014-02-27x86, cpufeature: If we disable CLFLUSH, we should disable CLFLUSHOPTH. Peter Anvin
If we explicitly disable the use of CLFLUSH, we should disable the use of CLFLUSHOPT as well. Cc: Ross Zwisler <ross.zwisler@linux.intel.com> Signed-off-by: H. Peter Anvin <hpa@linux.intel.com> Link: http://lkml.kernel.org/n/tip-jtdv7btppr4jgzxm3sxx1e74@git.kernel.org
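As a sketch, the "noclflush" command line handler then clears both feature bits (assuming the usual setup_clear_cpu_cap() mechanism):

        static __init int setup_noclflush(char *arg)
        {
                /* disabling CLFLUSH implies CLFLUSHOPT must not be used either */
                setup_clear_cpu_cap(X86_FEATURE_CLFLUSH);
                setup_clear_cpu_cap(X86_FEATURE_CLFLUSHOPT);
                return 1;
        }
        __setup("noclflush", setup_noclflush);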
2014-02-27x86, cpufeature: Rename X86_FEATURE_CLFLSH to X86_FEATURE_CLFLUSHH. Peter Anvin
We call this "clflush" in /proc/cpuinfo, and have cpu_has_clflush()... let's be consistent and just call it that. Cc: Gleb Natapov <gleb@kernel.org> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Alan Cox <alan@linux.intel.com> Link: http://lkml.kernel.org/n/tip-mlytfzjkvuf739okyn40p8a5@git.kernel.org
2014-02-27x86, platforms: Remove NUMAQH. Peter Anvin
The NUMAQ support seems to be unmaintained, remove it. Cc: Paul Gortmaker <paul.gortmaker@windriver.com> Cc: David Rientjes <rientjes@google.com> Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Signed-off-by: H. Peter Anvin <hpa@linux.intel.com> Link: http://lkml.kernel.org/r/n/530CFD6C.7040705@zytor.com
2014-02-27x86, platforms: Remove SGI Visual WorkstationH. Peter Anvin
The SGI Visual Workstation seems to be dead; remove support so we don't have to continue maintaining it. Cc: Andrey Panin <pazke@donpac.ru> Cc: Michael Reed <mdr@sgi.com> Link: http://lkml.kernel.org/r/530CFD6C.7040705@zytor.com Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
2014-02-27perf/x86: Add a few more commentsPeter Zijlstra
Add a few comments on the ->add(), ->del() and ->*_txn() implementation. Requested-by: Vince Weaver <vincent.weaver@maine.edu> Signed-off-by: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/n/tip-he3819318c245j7t5e1e22tr@git.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
2014-02-27Merge branch 'perf/urgent' into perf/coreIngo Molnar
Merge the latest fixes before queueing up new changes. Signed-off-by: Ingo Molnar <mingo@kernel.org>
2014-02-27perf/x86: Fix event schedulingPeter Zijlstra
Vince "Super Tester" Weaver reported a new round of syscall fuzzing (Trinity) failures, with perf WARN_ON()s triggering. He also provided traces of the failures. This is I think the relevant bit: > pec_1076_warn-2804 [000] d... 147.926153: x86_pmu_disable: x86_pmu_disable > pec_1076_warn-2804 [000] d... 147.926153: x86_pmu_state: Events: { > pec_1076_warn-2804 [000] d... 147.926156: x86_pmu_state: 0: state: .R config: ffffffffffffffff ( (null)) > pec_1076_warn-2804 [000] d... 147.926158: x86_pmu_state: 33: state: AR config: 0 (ffff88011ac99800) > pec_1076_warn-2804 [000] d... 147.926159: x86_pmu_state: } > pec_1076_warn-2804 [000] d... 147.926160: x86_pmu_state: n_events: 1, n_added: 0, n_txn: 1 > pec_1076_warn-2804 [000] d... 147.926161: x86_pmu_state: Assignment: { > pec_1076_warn-2804 [000] d... 147.926162: x86_pmu_state: 0->33 tag: 1 config: 0 (ffff88011ac99800) > pec_1076_warn-2804 [000] d... 147.926163: x86_pmu_state: } > pec_1076_warn-2804 [000] d... 147.926166: collect_events: Adding event: 1 (ffff880119ec8800) So we add the insn:p event (fd[23]). At this point we should have: n_events = 2, n_added = 1, n_txn = 1 > pec_1076_warn-2804 [000] d... 147.926170: collect_events: Adding event: 0 (ffff8800c9e01800) > pec_1076_warn-2804 [000] d... 147.926172: collect_events: Adding event: 4 (ffff8800cbab2c00) We try and add the {BP,cycles,br_insn} group (fd[3], fd[4], fd[15]). These events are 0:cycles and 4:br_insn, the BP event isn't x86_pmu so that's not visible. group_sched_in() pmu->start_txn() /* nop - BP pmu */ event_sched_in() event->pmu->add() So here we should end up with: 0: n_events = 3, n_added = 2, n_txn = 2 4: n_events = 4, n_added = 3, n_txn = 3 But seeing the below state on x86_pmu_enable(), the must have failed, because the 0 and 4 events aren't there anymore. Looking at group_sched_in(), since the BP is the leader, its event_sched_in() must have succeeded, for otherwise we would not have seen the sibling adds. But since neither 0 or 4 are in the below state; their event_sched_in() must have failed; but I don't see why, the complete state: 0,0,1:p,4 fits perfectly fine on a core2. However, since we try and schedule 4 it means the 0 event must have succeeded! Therefore the 4 event must have failed, its failure will have put group_sched_in() into the fail path, which will call: event_sched_out() event->pmu->del() on 0 and the BP event. Now x86_pmu_del() will reduce n_events; but it will not reduce n_added; giving what we see below: n_event = 2, n_added = 2, n_txn = 2 > pec_1076_warn-2804 [000] d... 147.926177: x86_pmu_enable: x86_pmu_enable > pec_1076_warn-2804 [000] d... 147.926177: x86_pmu_state: Events: { > pec_1076_warn-2804 [000] d... 147.926179: x86_pmu_state: 0: state: .R config: ffffffffffffffff ( (null)) > pec_1076_warn-2804 [000] d... 147.926181: x86_pmu_state: 33: state: AR config: 0 (ffff88011ac99800) > pec_1076_warn-2804 [000] d... 147.926182: x86_pmu_state: } > pec_1076_warn-2804 [000] d... 147.926184: x86_pmu_state: n_events: 2, n_added: 2, n_txn: 2 > pec_1076_warn-2804 [000] d... 147.926184: x86_pmu_state: Assignment: { > pec_1076_warn-2804 [000] d... 147.926186: x86_pmu_state: 0->33 tag: 1 config: 0 (ffff88011ac99800) > pec_1076_warn-2804 [000] d... 147.926188: x86_pmu_state: 1->0 tag: 1 config: 1 (ffff880119ec8800) > pec_1076_warn-2804 [000] d... 147.926188: x86_pmu_state: } > pec_1076_warn-2804 [000] d... 
147.926190: x86_pmu_enable: S0: hwc->idx: 33, hwc->last_cpu: 0, hwc->last_tag: 1 hwc->state: 0 So the problem is that x86_pmu_del(), when called from a group_sched_in() that fails (for whatever reason), and without x86_pmu TXN support (because the leader is !x86_pmu), will corrupt the n_added state. Reported-and-Tested-by: Vince Weaver <vincent.weaver@maine.edu> Signed-off-by: Peter Zijlstra <peterz@infradead.org> Cc: Paul Mackerras <paulus@samba.org> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Stephane Eranian <eranian@google.com> Cc: Dave Jones <davej@redhat.com> Cc: <stable@vger.kernel.org> Link: http://lkml.kernel.org/r/20140221150312.GF3104@twins.programming.kicks-ass.net Signed-off-by: Ingo Molnar <mingo@kernel.org>
2014-02-27Merge back earlier 'acpi-processor' material.Rafael J. Wysocki
2014-02-25x86, kaslr: randomize module base load addressKees Cook
Randomize the load address of modules in the kernel to make kASLR effective for modules. Modules can only be loaded within a particular range of virtual address space. This patch adds 10 bits of entropy to the load address by adding 1-1024 * PAGE_SIZE to the beginning range where modules are loaded. The single base offset was chosen because randomizing each module load ends up wasting/fragmenting memory too much. Prior approaches to minimizing fragmentation while doing randomization tend to result in worse entropy than just doing a single base address offset. Example kASLR boot without this change, with a single module loaded: ---[ Modules ]--- 0xffffffffc0000000-0xffffffffc0001000 4K ro GLB x pte 0xffffffffc0001000-0xffffffffc0002000 4K ro GLB NX pte 0xffffffffc0002000-0xffffffffc0004000 8K RW GLB NX pte 0xffffffffc0004000-0xffffffffc0200000 2032K pte 0xffffffffc0200000-0xffffffffff000000 1006M pmd ---[ End Modules ]--- Example kASLR boot after this change, same module loaded: ---[ Modules ]--- 0xffffffffc0000000-0xffffffffc0200000 2M pmd 0xffffffffc0200000-0xffffffffc03bf000 1788K pte 0xffffffffc03bf000-0xffffffffc03c0000 4K ro GLB x pte 0xffffffffc03c0000-0xffffffffc03c1000 4K ro GLB NX pte 0xffffffffc03c1000-0xffffffffc03c3000 8K RW GLB NX pte 0xffffffffc03c3000-0xffffffffc0400000 244K pte 0xffffffffc0400000-0xffffffffff000000 1004M pmd ---[ End Modules ]--- Signed-off-by: Andy Honig <ahonig@google.com> Link: http://lkml.kernel.org/r/20140226005916.GA27083@www.outflux.net Signed-off-by: Kees Cook <keescook@chromium.org> Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
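A minimal sketch of the single shared offset described above, computed once per boot and then added to the module area base for every module load (names are illustrative; the real code also checks whether kASLR is active):

        static unsigned long module_load_offset;
        static DEFINE_MUTEX(module_kaslr_mutex);

        static unsigned long get_module_load_offset(void)
        {
                mutex_lock(&module_kaslr_mutex);
                /*
                 * Pick the offset once per boot: 1-1024 pages above the start
                 * of the module area, i.e. 10 bits of entropy for the base.
                 */
                if (module_load_offset == 0)
                        module_load_offset =
                                (get_random_int() % 1024 + 1) * PAGE_SIZE;
                mutex_unlock(&module_kaslr_mutex);
                return module_load_offset;
        }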
2014-02-25x86, kaslr: export offset in VMCOREINFO ELF notesEugene Surovegin
Include kASLR offset in VMCOREINFO ELF notes to assist in debugging. [ hpa: pushing this for v3.14 to avoid having a kernel version with kASLR where we can't debug output. ] Signed-off-by: Eugene Surovegin <surovegin@google.com> Link: http://lkml.kernel.org/r/20140123173120.GA25474@www.outflux.net Signed-off-by: Kees Cook <keescook@chromium.org> Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
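A hedged sketch of the export (the note name and the exact expression are assumptions; the point is simply that the randomization offset ends up in the VMCOREINFO note):

        void arch_crash_save_vmcoreinfo(void)
        {
                /* ... existing VMCOREINFO entries ... */

                /* let crash/makedumpfile undo the kASLR displacement */
                vmcoreinfo_append_str("KERNELOFFSET=%lx\n",
                                      (unsigned long)&_text - __START_KERNEL);
        }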
2014-02-23Merge branch 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tipLinus Torvalds
Pull x86 fixes from Thomas Gleixner: - a bugfix which prevents a divide by 0 panic when the newly introduced try_msr_calibrate_tsc() fails - enablement of the Baytrail platform to utilize the newfangled msr based calibration * 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: x86: tsc: Add missing Baytrail frequency to the table x86, tsc: Fallback to normal calibration if fast MSR calibration fails
2014-02-22Merge branch 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tipLinus Torvalds
Pull perf fixes from Ingo Molnar: "Misc fixlets from all around the place" * 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: perf/x86/uncore: Fix IVT/SNB-EP uncore CBOX NID filter table perf/x86: Correctly use FEATURE_PDCM perf, nmi: Fix unknown NMI warning perf trace: Fix ioctl 'request' beautifier build problems on !(i386 || x86_64) arches perf trace: Add fallback definition of EFD_SEMAPHORE perf list: Fix checking for supported events on older kernels perf tools: Handle PERF_RECORD_HEADER_EVENT_TYPE properly perf probe: Do not add offset twice to uprobe address perf/x86: Fix Userspace RDPMC switch perf/x86/intel/p6: Add userspace RDPMC quirk for PPro
2014-02-22kvm: remove redundant registration of BSP's hv_clock areaFernando Luis Vázquez Cao
These days hv_clock allocation is memblock based (i.e. the percpu allocator is not involved), which means that the physical address of each of the per-cpu hv_clock areas is guaranteed to remain unchanged throughout its lifetime, so we do not need to update its location after CPU bring-up. Signed-off-by: Fernando Luis Vazquez Cao <fernando@oss.ntt.co.jp> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-02-21perf/x86/uncore: Fix IVT/SNB-EP uncore CBOX NID filter tableStephane Eranian
This patch updates the CBOX PMU filters mapping tables for SNB-EP and IVT (model 45 and 62 respectively). The NID umask always comes in addition to another umask. When set, the NID filter is applied. The current mapping tables were missing some code/umask combinations to account for the NID umask. This patch fixes that. Cc: mingo@elte.hu Cc: ak@linux.intel.com Reviewed-by: Yan, Zheng <zheng.z.yan@intel.com> Signed-off-by: Stephane Eranian <eranian@google.com> Signed-off-by: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/20140219131018.GA24475@quad Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2014-02-21perf/x86: Correctly use FEATURE_PDCMPeter Zijlstra
The current code simply assumes Intel Arch PerfMon v2+ to have the IA32_PERF_CAPABILITIES MSR; the SDM specifies that we should check CPUID[1].ECX[15] (aka, FEATURE_PDCM) instead. This was found by KVM which implements v2+ but didn't provide the capabilities MSR. Change the code to DTRT; KVM will also implement the MSR and return 0. Cc: pbonzini@redhat.com Reported-by: "Michael S. Tsirkin" <mst@redhat.com> Suggested-by: Eduardo Habkost <ehabkost@redhat.com> Signed-off-by: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/20140203132903.GI8874@twins.programming.kicks-ass.net Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
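The resulting check is essentially the following sketch (the x86_pmu field name is recalled from memory, treat it as an assumption):

        u64 capabilities = 0;

        /* CPUID.1:ECX[15] (PDCM) advertises IA32_PERF_CAPABILITIES */
        if (boot_cpu_has(X86_FEATURE_PDCM))
                rdmsrl(MSR_IA32_PERF_CAPABILITIES, capabilities);

        x86_pmu.intel_cap.capabilities = capabilities;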
2014-02-21perf, nmi: Fix unknown NMI warningMarkus Metzger
When using BTS on Core i7-4*, I get the below kernel warning. $ perf record -c 1 -e branches:u ls Message from syslogd@labpc1501 at Nov 11 15:49:25 ... kernel:[ 438.317893] Uhhuh. NMI received for unknown reason 31 on CPU 2. Message from syslogd@labpc1501 at Nov 11 15:49:25 ... kernel:[ 438.317920] Do you have a strange power saving mode enabled? Message from syslogd@labpc1501 at Nov 11 15:49:25 ... kernel:[ 438.317945] Dazed and confused, but trying to continue Make intel_pmu_handle_irq() take the full exit path when returning early. Cc: eranian@google.com Cc: peterz@infradead.org Cc: mingo@kernel.org Signed-off-by: Markus Metzger <markus.t.metzger@intel.com> Signed-off-by: Andi Kleen <ak@linux.intel.com> Signed-off-by: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/1392425048-5309-1-git-send-email-andi@firstfloor.org Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2014-02-21perf/x86/uncore: use MiB unit for events for SNB/IVB/HSW IMCStephane Eranian
This patch makes perf use Mebibytes to display the counts of uncore_imc/data_reads/ and uncore_imc/data_writes. 1MiB = 1024*1024 bytes. Cc: mingo@elte.hu Cc: acme@redhat.com Cc: ak@linux.intel.com Cc: zheng.z.yan@intel.com Cc: peterz@infradead.org Signed-off-by: Stephane Eranian <eranian@google.com> Signed-off-by: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/1392132015-14521-9-git-send-email-eranian@google.com Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2014-02-21perf/x86/uncore: add hrtimer to SNB uncore IMC PMUStephane Eranian
This patch is needed because that PMU uses 32-bit free running counters with no interrupt capabilities. On SNB/IVB/HSW, we used 20GB/s theoretical peak to calculate the hrtimer timeout necessary to avoid missing an overflow. That delay is set to 5s to be on the cautious side. The SNB IMC uses free running counters, which are handled via pseudo fixed counters. The SNB IMC PMU implementation supports an arbitrary number of events, because the counters are read-only. Therefore it is not possible to track active counters. Instead we put active events on a linked list which is then used by the hrtimer handler to update the SW counts. Cc: mingo@elte.hu Cc: acme@redhat.com Cc: ak@linux.intel.com Cc: zheng.z.yan@intel.com Cc: peterz@infradead.org Signed-off-by: Stephane Eranian <eranian@google.com> Signed-off-by: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/1392132015-14521-8-git-send-email-eranian@google.com Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
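Back-of-the-envelope numbers behind the 5s choice, assuming the 32-bit counters count 64-byte cachelines (as the IMC data_reads/data_writes events suggest):

        2^32 counts x 64 bytes ~= 274 GB of traffic before a counter wraps
        274 GB / 20 GB/s       ~= 13.7 s worst case until an overflow
        -> polling every 5 s keeps a comfortable safety margin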
2014-02-21perf/x86/uncore: add SNB/IVB/HSW client uncore memory controller supportStephane Eranian
This patch adds a new uncore PMU for Intel SNB/IVB/HSW client CPUs. It adds the Integrated Memory Controller (IMC) PMU. This new PMU provides a set of events to measure memory bandwidth utilization. The IMC on those processors is PCI-space based. This patch exposes a new uncore PMU on those processors: uncore_imc. Two new events are defined: - name: data_reads - code: 0x1 - unit: 64 bytes - number of full cacheline read requests to the IMC - name: data_writes - code: 0x2 - unit: 64 bytes - number of full cacheline write requests to the IMC Documentation available at: http://software.intel.com/en-us/articles/monitoring-integrated-memory-controller-requests-in-the-2nd-3rd-and-4th-generation-intel Cc: mingo@elte.hu Cc: acme@redhat.com Cc: ak@linux.intel.com Cc: zheng.z.yan@intel.com Cc: peterz@infradead.org Signed-off-by: Stephane Eranian <eranian@google.com> Signed-off-by: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/1392132015-14521-7-git-send-email-eranian@google.com Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2014-02-21perf/x86/uncore: move uncore_event_to_box() and uncore_pmu_to_box()Stephane Eranian
Move a couple of functions around to avoid forward declarations when we add code later on. Cc: mingo@elte.hu Cc: acme@redhat.com Cc: ak@linux.intel.com Cc: zheng.z.yan@intel.com Cc: peterz@infradead.org Signed-off-by: Stephane Eranian <eranian@google.com> Signed-off-by: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/1392132015-14521-6-git-send-email-eranian@google.com Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2014-02-21perf/x86/uncore: make hrtimer timeout configurable per boxStephane Eranian
This patch makes the hrtimer timeout configurable per PMU box. Not all counters necessarily have the same width and rate, thus the default timeout of 60s may need to be adjusted. This patch adds box->hrtimer_duration. It is set to the default when the box is allocated and can be overridden when the box is initialized. Cc: mingo@elte.hu Cc: acme@redhat.com Cc: ak@linux.intel.com Cc: zheng.z.yan@intel.com Cc: peterz@infradead.org Signed-off-by: Stephane Eranian <eranian@google.com> Signed-off-by: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/1392132015-14521-5-git-send-email-eranian@google.com Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2014-02-21perf/x86/uncore: add ability to customize pmu callbacksStephane Eranian
This patch enables custom struct pmu callbacks per uncore PMU types. This feature may be used to simplify counter setup for certain uncore PMUs which have free running counters for instance. It becomes possible to bypass the event scheduling phase of the configuration. Cc: mingo@elte.hu Cc: acme@redhat.com Cc: ak@linux.intel.com Cc: zheng.z.yan@intel.com Cc: peterz@infradead.org Signed-off-by: Stephane Eranian <eranian@google.com> Signed-off-by: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/1392132015-14521-3-git-send-email-eranian@google.com Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2014-02-21perf/x86/uncore: fix initialization of cpumaskStephane Eranian
On certain processors, the uncore PMU boxes may only be MSR-based or PCI-based. But in both cases, the cpumask indicating which CPUs to monitor to get full coverage of the particular PMU must be created. However, with the current code base the cpumask was only created on processors which had at least one MSR-based uncore PMU. This patch removes that restriction and ensures the cpumask is created even when there is no MSR-based PMU, for instance on the SNB client where only a PCI-based memory controller PMU is supported. Cc: mingo@elte.hu Cc: acme@redhat.com Cc: ak@linux.intel.com Cc: zheng.z.yan@intel.com Cc: peterz@infradead.org Signed-off-by: Stephane Eranian <eranian@google.com> Signed-off-by: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/1392132015-14521-2-git-send-email-eranian@google.com Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2014-02-21Merge branch 'linus' into sched/coreThomas Gleixner
Reason: Bring back upstream modifications to resolve conflicts Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2014-02-20x86, acpi: Fix bug in associating hot-added CPUs with corresponding NUMA nodeJiang Liu
Current ACPI cpu hotplug driver fails to associate hot-added CPUs with corresponding NUMA node when doing socket online. The code path to associate CPU with NUMA node is as below: acpi_processor_add() ->acpi_processor_get_info() ->acpi_processor_hotadd_init() ->acpi_map_lsapic() ->_acpi_map_lsapic() ->acpi_map_cpu2node() cpu_subsys_online() ->try_online_node() ->node_set_online() When doing socket online, a new NUMA node is introduced in addition to hot-added CPU and memory device. And the new NUMA node is marked as online when onlining hot-added CPUs through sysfs interface /sys/devices/system/cpu/cpuxx/online. On the other hand, acpi_map_cpu2node() will only build the CPU to node map if corresponding NUMA node is already online, so it always fails to associate hot-added CPUs with corresponding NUMA node because the NUMA node is still in offline state. For the fix, we could safely remove the "node_online(node)" check in function acpi_map_cpu2node() because it's only called for hot-added CPUs by acpi_processor_hotadd_init(). Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com> Link: http://lkml.kernel.org/r/1390185115-26850-1-git-send-email-jiang.liu@linux.intel.com Acked-by: Rafael J. Wysocki <rjw@rjwysocki.net> Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
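A sketch of acpi_map_cpu2node() with the node_online() check dropped (the NUMA helper names are assumptions for illustration):

        static void acpi_map_cpu2node(acpi_handle handle, int cpu, int physid)
        {
        #ifdef CONFIG_ACPI_NUMA
                int nid = acpi_get_node(handle);

                /* no node_online(nid) check: the hot-added node is still offline here */
                if (nid != -1) {
                        set_apicid_to_node(physid, nid);
                        numa_set_node(cpu, nid);
                }
        #endif
        }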
2014-02-20Merge branch 'fixes-for-v3.14' of git://git.linaro.org/people/mszyprowski/linux-dma-mappingLinus Torvalds
Pull DMA-mapping fixes from Marek Szyprowski: "This contains fixes for incorrect atomic test in dma-mapping subsystem for ARM and x86 architecture" * 'fixes-for-v3.14' of git://git.linaro.org/people/mszyprowski/linux-dma-mapping: x86: dma-mapping: fix GFP_ATOMIC macro usage ARM: dma-mapping: fix GFP_ATOMIC macro usage
2014-02-19ACPI idle: permit sparse C-state sub-state numbersLen Brown
Linux uses CPUID.MWAIT.EDX to validate the C-states reported by ACPI, silently discarding states which are not supported by the HW. This test is too restrictive, as some HW now uses sparse sub-state numbering, so the sub-state number may be higher than the number of sub-states... Also, rather than silently ignoring an invalid state, we should complain about a firmware bug. In practice... Bay Trail systems originally supported C6-no-shrink as MWAIT sub-state 0x58, and in CPUID.MWAIT.EDX 0x03000000 indicated that there were 3 MWAIT-C6 sub-states. So acpi_idle would discard that C-state because 8 >= 3. Upon discovering this issue, the ucode was updated so that C6-no-shrink was also exported as 0x51, and the BIOS was updated to match. However, systems shipped with 0x58, will never get a BIOS update, and this patch allows Linux to see C6-no-shrink on early Bay Trail. Signed-off-by: Len Brown <len.brown@intel.com>
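Worked example for the Bay Trail case above (nibble layout of the MWAIT hint: high nibble = C-state - 1, low nibble = sub-state):

        MWAIT hint 0x58:  C-state   = (0x58 >> 4) + 1 = 6    (a C6 flavour)
                          sub-state =  0x58 & 0xf      = 8
        CPUID.MWAIT.EDX = 0x03000000: bits [27:24] = 3, i.e. only 3 C6
                          sub-states are enumerated
        old check:        sub-state 8 >= 3 enumerated  -> C-state discarded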
2014-02-19x86: tsc: Add missing Baytrail frequency to the tableMika Westerberg
Intel Baytrail is based on Silvermont core so MSR_FSB_FREQ[2:0] == 0 means that the CPU reference clock runs at 83.3MHz. Add this missing frequency to the table. Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com> Cc: Bin Gao <bin.gao@linux.intel.com> Cc: One Thousand Gnomes <gnomes@lxorguk.ukuu.org.uk> Cc: Ingo Molnar <mingo@kernel.org> Cc: H. Peter Anvin <hpa@zytor.com> Link: http://lkml.kernel.org/r/1392810750-18660-2-git-send-email-mika.westerberg@linux.intel.com Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2014-02-19x86, tsc: Fallback to normal calibration if fast MSR calibration failsThomas Gleixner
If we cannot calibrate TSC via MSR based calibration try_msr_calibrate_tsc() stores zero to fast_calibrate and returns that to the caller. This value gets then propagated further to clockevents code resulting division by zero oops like the one below: divide error: 0000 [#1] PREEMPT SMP Modules linked in: CPU: 0 PID: 1 Comm: swapper/0 Tainted: G W 3.13.0+ #47 task: ffff880075508000 ti: ffff880075506000 task.ti: ffff880075506000 RIP: 0010:[<ffffffff810aec14>] [<ffffffff810aec14>] clockevents_config.part.3+0x24/0xa0 RSP: 0000:ffff880075507e58 EFLAGS: 00010246 RAX: ffffffffffffffff RBX: ffff880079c0cd80 RCX: 0000000000000000 RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffffffffffffffff RBP: ffff880075507e70 R08: 0000000000000001 R09: 00000000000000be R10: 00000000000000bd R11: 0000000000000003 R12: 000000000000b008 R13: 0000000000000008 R14: 000000000000b010 R15: 0000000000000000 FS: 0000000000000000(0000) GS:ffff880079c00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b CR2: ffff880079fff000 CR3: 0000000001c0b000 CR4: 00000000001006f0 Stack: ffff880079c0cd80 000000000000b008 0000000000000008 ffff880075507e88 ffffffff810aecb0 ffff880079c0cd80 ffff880075507e98 ffffffff81030168 ffff880075507ed8 ffffffff81d1104f 00000000000000c3 0000000000000000 Call Trace: [<ffffffff810aecb0>] clockevents_config_and_register+0x20/0x30 [<ffffffff81030168>] setup_APIC_timer+0xc8/0xd0 [<ffffffff81d1104f>] setup_boot_APIC_clock+0x4cc/0x4d8 [<ffffffff81d0f5de>] native_smp_prepare_cpus+0x3dd/0x3f0 [<ffffffff81d02ee9>] kernel_init_freeable+0xc3/0x205 [<ffffffff8177c910>] ? rest_init+0x90/0x90 [<ffffffff8177c91e>] kernel_init+0xe/0x120 [<ffffffff8178deec>] ret_from_fork+0x7c/0xb0 [<ffffffff8177c910>] ? rest_init+0x90/0x90 Prevent this from happening by: 1) Modifying try_msr_calibrate_tsc() to return calibration value or zero if it fails. 2) Check this return value in native_calibrate_tsc() and in case of zero fallback to use normal non-MSR based calibration. [mw: Added subject and changelog] Reported-and-tested-by: Mika Westerberg <mika.westerberg@linux.intel.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: Bin Gao <bin.gao@linux.intel.com> Cc: One Thousand Gnomes <gnomes@lxorguk.ukuu.org.uk> Cc: Ingo Molnar <mingo@kernel.org> Cc: H. Peter Anvin <hpa@zytor.com> Link: http://lkml.kernel.org/r/1392810750-18660-1-git-send-email-mika.westerberg@linux.intel.com Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
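A simplified sketch of the fixed flow (pit_hpet_calibrate() here is just a placeholder for the existing slow path):

        unsigned long native_calibrate_tsc(void)
        {
                /* 0 means the MSR based fast calibration was not possible */
                unsigned long fast_calibrate = try_msr_calibrate_tsc();

                if (fast_calibrate)
                        return fast_calibrate;

                /* fall back to the normal PIT/HPET/PMTIMER based calibration */
                return pit_hpet_calibrate();
        }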
2014-02-19ACPI: Move BAD_MADT_ENTRY() to linux/acpi.hHanjun Guo
BAD_MADT_ENTRY() is arch independent and will be used for all architectures which parse MADT, so move it to linux/acpi.h to reduce code duplication. Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2014-02-18x86: align x86 arch with generic CPU modalias handlingArd Biesheuvel
The x86 CPU feature modalias handling existed before it was reimplemented generically. This patch aligns the x86 handling so that it (a) reuses some more code that is now generic; (b) uses the generic format for the modalias module metadata entry, i.e., it now uses 'cpu:type:x86,venVVVVfamFFFFmodMMMM:feature:,XXXX,YYYY' instead of the 'x86cpu:vendor:VVVV:family:FFFF:model:MMMM:feature:,XXXX,YYYY' that was used before. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: H. Peter Anvin <hpa@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2014-02-15Merge tag 'trace-fixes-v3.14-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-traceLinus Torvalds
Pull two tracing fixes from Steven Rostedt: "Two urgent fixes in the tracing utility. The first is a fix for the way the ring buffer stores timestamps. After a restructure of the code was done, the ring buffer timestamp logic missed the fact that the first event on a sub buffer is to have a zero delta, as the full timestamp is stored on the sub buffer itself. But because the delta was not cleared to zero, the timestamp for that event will be calculated as the real timestamp + the delta from the last timestamp. This can skew the timestamps of the events and have them say they happened when they didn't really happen. That's bad. The second fix is for modifying the function graph caller site. When the stop machine was removed from updating the function tracing code, it missed updating the function graph call site location. It is still modified as if it is being done via stop machine. But it's not. This can lead to a GPF and kernel crash if the function graph call site happens to lie between cache lines and one CPU is executing it while another CPU is doing the update. It would be a very hard condition to hit, but the result is severe enough to have it fixed ASAP" * tag 'trace-fixes-v3.14-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace: ftrace/x86: Use breakpoints for converting function graph caller ring-buffer: Fix first commit on sub-buffer having non-zero delta
2014-02-13asmlinkage: Make jiffies visibleAndi Kleen
Jiffies is referenced by the linker script, so it has to be visible. This handles both the generic and the x86 version. Signed-off-by: Andi Kleen <ak@linux.intel.com> Link: http://lkml.kernel.org/r/1391845930-28580-3-git-send-email-ak@linux.intel.com Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
2014-02-13x86, smap: Don't enable SMAP if CONFIG_X86_SMAP is disabledH. Peter Anvin
If SMAP support is not compiled into the kernel, don't enable SMAP in CR4 -- in fact, we should clear it, because the kernel doesn't contain the proper STAC/CLAC instructions for SMAP support. Found by Fengguang Wu's test system. Reported-by: Fengguang Wu <fengguang.wu@intel.com> Link: http://lkml.kernel.org/r/20140213124550.GA30497@localhost Signed-off-by: H. Peter Anvin <hpa@linux.intel.com> Cc: <stable@vger.kernel.org> # v3.7+
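A sketch of the resulting logic (set_in_cr4()/clear_in_cr4() are the CR4 helpers of that era as recalled; treat the exact names as assumptions):

        static __init void setup_smap(struct cpuinfo_x86 *c)
        {
                if (cpu_has(c, X86_FEATURE_SMAP)) {
        #ifdef CONFIG_X86_SMAP
                        set_in_cr4(X86_CR4_SMAP);
        #else
                        /*
                         * Without CONFIG_X86_SMAP the kernel carries no STAC/CLAC
                         * annotations, so SMAP must be off in CR4 or legitimate
                         * user accesses from the kernel would fault.
                         */
                        clear_in_cr4(X86_CR4_SMAP);
        #endif
                }
        }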
2014-02-11x86, apic: Remove support for IBM Summit/EXA chipsetDavid Rientjes
There should no longer be any IBM x440 systems or those using the Summit/EXA chipset out in the wild, so remove support for it. We've done our due diligence in reaching out to any contact information listed for this chipset and no indication was given that it should be kept around. Signed-off-by: David Rientjes <rientjes@google.com>
2014-02-11x86, apic: Remove support for ia32-based Unisys ES7000David Rientjes
There should no longer be any ia32-based Unisys ES7000 systems out in the wild, so remove support for it. We've done our due diligence in reaching out to any contact information listed for this system and no indication was given that it should be kept around. Signed-off-by: David Rientjes <rientjes@google.com>
2014-02-11ftrace/x86: Use breakpoints for converting function graph callerSteven Rostedt (Red Hat)
When the conversion was made to remove stop machine and use the breakpoint logic instead, the modification of the function graph caller was still done directly as though it was being done under stop machine. As it is not converted via stop machine anymore, there is a possibility that the code could be laid across cache lines, and if another CPU is accessing that function graph call when it is being updated, it could cause a General Protection Fault. Convert the update of the function graph caller to use the breakpoint method as well. Cc: H. Peter Anvin <hpa@zytor.com> Cc: stable@vger.kernel.org # 3.5+ Fixes: 08d636b6d4fb "ftrace/x86: Have arch x86_64 use breakpoints instead of stop machine" Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2014-02-11sched/idle, x86: Remove redundant cpuidle_idle_call()Nicolas Pitre
The core idle loop now takes care of it. Signed-off-by: Nicolas Pitre <nico@linaro.org> Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org> Cc: Preeti U Murthy <preeti@linux.vnet.ibm.com> Cc: Paul Mundt <lethal@linux-sh.org> Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net> Cc: linux-arm-kernel@lists.infradead.org Cc: linuxppc-dev@lists.ozlabs.org Cc: linux-sh@vger.kernel.org Cc: linux-pm@vger.kernel.org Cc: Russell King <linux@arm.linux.org.uk> Cc: linaro-kernel@lists.linaro.org Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/n/tip-3ioazimg4j5iq6kdefks04i8@git.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>