path: root/arch/arm/kernel
2011-07-13  ARM: kprobes: Make str_pc_offset a constant on ARMv7  (Jon Medhurst)
The str_pc_offset value is architecturally defined on ARMv7 onwards, so we can make it a compile-time constant. This means that on Thumb kernels the runtime checking code isn't needed, which saves us from having to fix it to work for Thumb.
Signed-off-by: Jon Medhurst <tixy@yxit.co.uk>
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>

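For background: str_pc_offset is the distance between the address of a "str pc, [...]" instruction and the PC value it actually stores. Pre-v7 this is implementation defined, which is why older kernels probe it at boot. A minimal sketch of such a runtime probe (illustrative, not the kernel's exact code):

    static int str_pc_offset;

    static void __init find_str_pc_offset(void)
    {
        int addr, scratch, ret;

        __asm__ (
            "sub  %[ret], pc, #4      \n\t"  /* ret = address of the str below */
            "str  pc, %[addr]         \n\t"  /* stores str address + offset */
            "ldr  %[scr], %[addr]     \n\t"  /* read the stored value back */
            "sub  %[ret], %[scr], %[ret]"    /* offset = stored - str address */
            : [ret] "=r" (ret), [scr] "=r" (scratch), [addr] "+m" (addr));

        str_pc_offset = ret;
    }

On ARMv7 the architecture fixes this offset (8 in ARM state), so the whole probe collapses to a constant.
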
2011-07-13  ARM: kprobes: Move find_str_pc_offset into kprobes-common.c  (Jon Medhurst)
Move str_pc_offset into kprobes-common.c as it will be needed by common code later.
Signed-off-by: Jon Medhurst <tixy@yxit.co.uk>
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>

2011-07-13  ARM: kprobes: Move is_writeback define to header file.  (Jon Medhurst)
This will be used later in other files.
Signed-off-by: Jon Medhurst <tixy@yxit.co.uk>
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>

2011-07-13  ARM: kprobes: Add kprobes-common.c  (Jon Medhurst)
This file will contain the instruction decoding and emulation code which is common to both the ARM and Thumb instruction sets. For now, we just move over condition_checks from kprobes-arm.c. This table is also renamed to kprobe_condition_checks to avoid polluting the public namespace with too generic a name.
Signed-off-by: Jon Medhurst <tixy@yxit.co.uk>
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>

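The table in question maps each ARM condition code (instruction bits [31:28]) to a predicate over the saved CPSR flags. A hedged sketch of its shape (PSR bit names as in asm/ptrace.h; the table is abridged here):

    typedef unsigned long (kprobe_check_cc)(unsigned long);

    static unsigned long __kprobes __check_eq(unsigned long cpsr)
    {
        return cpsr & PSR_Z_BIT;        /* EQ: Z flag set */
    }

    static unsigned long __kprobes __check_ne(unsigned long cpsr)
    {
        return (~cpsr) & PSR_Z_BIT;     /* NE: Z flag clear */
    }

    static unsigned long __kprobes __check_al(unsigned long cpsr)
    {
        return true;                    /* AL: always execute */
    }

    kprobe_check_cc * const kprobe_condition_checks[16] = {
        &__check_eq, &__check_ne,
        /* ... one entry per condition code ... */
        &__check_al, &__check_al
    };
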
2011-07-13  ARM: kprobes: Split out internal parts of kprobes.h  (Jon Medhurst)
Later, we will be adding a considerable amount of internal implementation definitions to the kprobes header files, and it would be good to have these in a local header file alongside the source code, rather than pollute the existing header which is included by all users of kprobes. To this end, we add arch/arm/kernel/kprobes.h and move into it the existing internal definitions from arch/arm/include/asm/kprobes.h.
Signed-off-by: Jon Medhurst <tixy@yxit.co.uk>
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>

2011-07-13  ARM: kprobes: Rename kprobes-decode.c to kprobes-arm.c  (Jon Medhurst)
This file contains decoding and emulation functions for the ARM instruction set. As we will later be adding a file for Thumb and a file with common decoding functions, this renaming makes things clearer.
Signed-off-by: Jon Medhurst <tixy@yxit.co.uk>
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>

2011-07-13  ARM: Thumb-2: Support Thumb-2 in undefined instruction handler  (Jon Medhurst)
This patch allows undef_hooks to be specified for 32-bit Thumb instructions, and also allows them to be used for Thumb kernel-side code. 32-bit Thumb instructions are specified in the form ((first_half << 16) | second_half), which matches the layout used by the ARM ARM. ptrace was handling 32-bit Thumb instructions by hooking the first halfword and manually checking the second half. This method would be broken by this patch, so it is migrated to make use of the new Thumb-2 support.
Signed-off-by: Jon Medhurst <tixy@yxit.co.uk>
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>

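With this in place, a hook for a 32-bit Thumb instruction can be registered in one go. A hedged example (the encoding and handler are made up; the undef_hook fields are those of arch/arm/include/asm/traps.h):

    #include <asm/traps.h>
    #include <asm/ptrace.h>

    static int my_t32_trap(struct pt_regs *regs, unsigned int instr)
    {
        /* instr == (first_half << 16) | second_half */
        return 0;    /* 0 = instruction handled */
    }

    static struct undef_hook my_t32_hook = {
        .instr_mask = 0xffffffff,
        .instr_val  = (0xf7f0 << 16) | 0xa000,  /* hypothetical encoding */
        .cpsr_mask  = PSR_T_BIT,
        .cpsr_val   = PSR_T_BIT,                /* match Thumb state only */
        .fn         = my_t32_trap,
    };

    register_undef_hook(&my_t32_hook);
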
2011-07-13  ARM: Thumb-2: Fix exception return sequence to restore stack correctly  (Jon Medhurst)
The implementation of svc_exit didn't take into account any stack hole created by svc_entry, as happens with the undef handler when kprobes are configured. The fix is to read the saved value of SP rather than trying to calculate it.
Signed-off-by: Jon Medhurst <tixy@yxit.co.uk>
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>

2011-07-12  ARM: introduce handle_IRQ() not to dump exception stack  (Russell King - ARM Linux)
On Mon, Jul 11, 2011 at 3:52 PM, Russell King - ARM Linux <linux@arm.linux.org.uk> wrote:
...
> The __exception annotation on a function causes this to happen:
>
> [<c002406c>] (asm_do_IRQ+0x6c/0x8c) from [<c0024b84>] (__irq_svc+0x44/0xcc)
> Exception stack(0xc3897c78 to 0xc3897cc0)
> 7c60:                                                       4022d320 4022e000
> 7c80: 08000075 00001000 c32273c0 c03ce1c0 c2b49b78 4022d000 c2b420b4 00000001
> 7ca0: 00000000 c3897cfc 00000000 c3897cc0 c00afc54 c002edd8 00000013 ffffffff
>
> where that stack dump represents the pt_regs for the exception which
> happened. Any function found while unwinding will cause this to be printed.
>
> If you insert a C function between the IRQ assembly and asm_do_IRQ, the
> dump you get from asm_do_IRQ will be the stack for your function, not
> the pt_regs. That makes the feature useless.

When __irq_svc - or any of the other exception handling assembly code - calls the C code, the stack pointer will be pointing at the pt_regs structure. All the entry points into C code from the exception handling code are marked with __exception or __exception_irq_enter to indicate that they are one of the functions which has pt_regs above them.

Normally, when you've entered asm_do_IRQ() you will have this stack layout (higher address towards top):

    pt_regs
    asm_do_IRQ frame

If you insert a C function between the exception assembly code and asm_do_IRQ, you end up with this stack layout instead:

    pt_regs
    your function's frame
    asm_do_IRQ frame

This means when we unwind, we'll get to asm_do_IRQ and, rather than dumping out the pt_regs, we'll dump out your function's stack frame instead, because that is what sits above the asm_do_IRQ stack frame rather than the expected pt_regs structure.

The fix is to introduce handle_IRQ(), which does not dump the exception stack, so that it can be called when MULTI_IRQ_HANDLER is selected and a C function sits between the assembly code and the actual IRQ handling code.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Eric Miao <eric.y.miao@gmail.com>

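The pattern this enables, sketched for a MULTI_IRQ_HANDLER platform (the my_soc_* names and the ack-register layout are made up; handle_IRQ() is the helper this patch introduces):

    #include <linux/io.h>
    #include <asm/mach/irq.h>   /* assumed home of the handle_IRQ() declaration */

    static void __iomem *my_soc_intc_base;   /* hypothetical controller base */

    /* C entry point called directly from the IRQ assembly. Marking it
     * __exception_irq_entry keeps the unwinder's pt_regs dump pointing at
     * the right frame; handle_IRQ() itself carries no such annotation, so
     * the intermediate call does no harm. */
    asmlinkage void __exception_irq_entry my_soc_handle_irq(struct pt_regs *regs)
    {
        unsigned int irqnr = readl_relaxed(my_soc_intc_base + 0x0c); /* hypothetical ack */

        handle_IRQ(irqnr, regs);
    }
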
2011-07-09  ARM: vfp: fix a hole in VFP thread migration  (Russell King)
Fix a hole in the VFP thread migration. Let's define two threads.

Thread 1, which we'll call 'interesting_thread', is a thread which is running on CPU0, using VFP (so vfp_current_hw_state[0] = &interesting_thread->vfpstate) and gets migrated off to CPU1, where it continues execution of VFP instructions.

Thread 2, which we'll call 'new_cpu0_thread', is the thread which takes over on CPU0. This has also been using VFP, and last used VFP on CPU0, but doesn't use it again.

The following code will be executed twice:

    cpu = thread->cpu;

    /*
     * On SMP, if VFP is enabled, save the old state in
     * case the thread migrates to a different CPU. The
     * restoring is done lazily.
     */
    if ((fpexc & FPEXC_EN) && vfp_current_hw_state[cpu]) {
        vfp_save_state(vfp_current_hw_state[cpu], fpexc);
        vfp_current_hw_state[cpu]->hard.cpu = cpu;
    }

    /*
     * Thread migration, just force the reloading of the
     * state on the new CPU in case the VFP registers
     * contain stale data.
     */
    if (thread->vfpstate.hard.cpu != cpu)
        vfp_current_hw_state[cpu] = NULL;

The first execution will be on CPU0 to switch away from 'interesting_thread'. interesting_thread->cpu will be 0, so vfp_current_hw_state[0] points at interesting_thread->vfpstate. The hardware state will be saved, along with the CPU number (0) that it was executing on.

'thread' will be 'new_cpu0_thread' with new_cpu0_thread->cpu = 0. Also, because it was executing on CPU0, new_cpu0_thread->vfpstate.hard.cpu = 0, and so the thread migration check is not triggered. This means that vfp_current_hw_state[0] remains pointing at interesting_thread.

The second execution will be on CPU1 to switch _to_ 'interesting_thread'. So, 'thread' will be 'interesting_thread' and interesting_thread->cpu will now be 1. The previous thread executing on CPU1 is not relevant here, so we shall ignore it.

We get to the thread migration check. Here, we discover that interesting_thread->vfpstate.hard.cpu = 0, yet interesting_thread->cpu is now 1, indicating thread migration. We set vfp_current_hw_state[1] to NULL.

So, at this point vfp_current_hw_state[] contains the following:

    [0] = &interesting_thread->vfpstate
    [1] = NULL

Our interesting thread now executes a VFP instruction and takes a fault, which loads the state into the VFP hardware. Now we have:

    [0] = &interesting_thread->vfpstate
    [1] = &interesting_thread->vfpstate

CPU1 stops due to ptrace (and so saves its VFP state using the thread switch code above), and CPU0 calls vfp_sync_hwstate():

    if (vfp_current_hw_state[cpu] == &thread->vfpstate) {
        vfp_save_state(&thread->vfpstate, fpexc | FPEXC_EN);

BANG, we corrupt interesting_thread's VFP state by overwriting the more up-to-date state saved by CPU1 with the old VFP state from CPU0.

Fix this by ensuring that we have sane semantics for the various state-describing variables:

1. vfp_current_hw_state[] points to the current owner of the context information stored in each CPU's hardware, or NULL if that state information is invalid.
2. thread->vfpstate.hard.cpu always contains the most recent CPU number which the state was loaded into, or NR_CPUS if no CPU owns the state.

So, for a particular CPU to be a valid owner of the VFP state for a particular thread t, two things must be true:

    vfp_current_hw_state[cpu] == &t->vfpstate && t->vfpstate.hard.cpu == cpu

and that is valid from the moment a CPU loads the saved VFP context into the hardware. This gives clear and consistent semantics for interpreting these variables.

This patch also fixes thread copying, ensuring that t->vfpstate.hard.cpu is invalidated; otherwise CPU0 may believe it was the last owner. The hole can happen thus:

- thread1 runs on CPU2 using VFP, migrates to CPU3, exits and has its thread_info freed.
- A new thread is allocated from a thread previously running on CPU2, reusing the memory for thread1 and copying vfp.hard.cpu.

At this point, the following are true:

    new_thread1->vfpstate.hard.cpu == 2
    &new_thread1->vfpstate == vfp_current_hw_state[2]

Lastly, this also addresses thread flushing in a similar way to thread copying. The hole is:

- thread runs on CPU0, using VFP, and migrates to CPU1 but does not use VFP there.
- thread calls execve(), so a thread flush happens, leaving vfp_current_hw_state[0] intact. This vfpstate is memset to 0, causing thread->vfpstate.hard.cpu = 0.
- thread migrates back to CPU0 before using VFP.

At this point, the following are true:

    thread->vfpstate.hard.cpu == 0
    &thread->vfpstate == vfp_current_hw_state[0]

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

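The resulting invariant lends itself to a single predicate. A hedged sketch (the kernel encodes this check along these lines; treat the exact helper name and form as illustrative):

    /* CPU `cpu` holds thread `t`'s VFP state in hardware only if both
     * links agree; either side alone can be stale. */
    static bool vfp_state_owned_by(unsigned int cpu, struct thread_info *t)
    {
        return vfp_current_hw_state[cpu] == &t->vfpstate &&
               t->vfpstate.hard.cpu == cpu;
    }

Thread flush and copy then only need to break one side of the link (e.g. setting hard.cpu to NR_CPUS) to invalidate ownership, without touching another CPU's vfp_current_hw_state entry.
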
2011-07-08  Merge branch 'for-rmk' of git://linux-arm.org/linux-2.6-wd into devel-stable  (Russell King)

2011-07-07  ARM: vmlinux.lds: use _text and _stext the same way as x86  (Russell King)
x86 uses _text to mark the start of the kernel image, including the head text, and _stext to mark the start of the .text section. Change our vmlinux.lds to conform. An audit of the places which use _stext and _text in arch/arm indicates no users of either symbol are impacted by this change. It does mean a slight change to /proc/iomem output.
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Tested-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

2011-07-07  ARM: vmlinux.lds: move init sections between text and data sections  (Russell King)
Place the init sections between the text and data sections. This means all code is grouped together at the beginning of the kernel image, and all data is at the end of the image. This avoids problems with the 24-bit branch instruction relocations becoming invalid with large initramfs images.
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Tested-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

2011-07-07  ARM: vmlinux.lds: remove .rodata/.rodata1 from main .text segment  (Russell King)
RODATA() already handles these sections, so allow it to take care of them for us.
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Tested-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

2011-07-07  ARM: vmlinux.lds: rearrange .init output section  (Russell King)
Keep the various linker tables as separate output sections rather than combining them together into one big .init section. This makes it easier to see from 'vmlinux' what is placed where.
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Tested-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

2011-07-07  ARM: vmlinux.lds: move discarded sections to beginning  (Russell King)
Rather than scattering the discarded sections throughout the linker file, move them to the start.
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Tested-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

2011-07-07  ARM: perf: add support for the Cortex-A15 PMU  (Will Deacon)
This patch adds support for the Cortex-A15 PMU to the ARMv7 perf-event backend.
Signed-off-by: Will Deacon <will.deacon@arm.com>

2011-07-07  ARM: perf: add support for the Cortex-A5 PMU  (Will Deacon)
This patch adds support for the Cortex-A5 PMU to the ARMv7 perf-event backend.
Signed-off-by: Will Deacon <will.deacon@arm.com>

2011-07-07  ARM: perf: add PMUv2 common event definitions  (Will Deacon)
The PMUv2 specification reserves a number of event encodings for common events. This patch adds these events to the common event enumeration, in preparation for PMUv2 cores such as Cortex-A15.
Acked-by: Jean Pihet <j-pihet@ti.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>

2011-07-07  ARM: perf: remove confusing comment from v7 perf events backend  (Will Deacon)
The comment about measuring TLB misses and refills in the ARMv7 perf backend makes little sense, and refers loosely to raw counters that should be used instead. This patch removes the comment to avoid any confusion.
Signed-off-by: Will Deacon <will.deacon@arm.com>

2011-07-07  ARM: hwcaps: add new HWCAP defines for ARMv7-A  (Will Deacon)
Modern ARMv7-A cores can optionally implement these new hardware features:

- VFPv4: The latest version of the ARMv7 vector floating-point extensions, including hardware support for fused multiply-accumulate. D16 or D32 variants may be implemented.
- Integer divide: The SDIV and UDIV instructions provide signed and unsigned integer division in hardware. When implemented, these instructions may be available in both ARM and Thumb, or in Thumb only.

This patch adds new HWCAP defines to describe these new features. The integer divide capabilities are split into two bits for ARM and Thumb respectively. Whilst HWCAP_IDIVA should never be set if HWCAP_IDIVT is clear, separating the bits makes it easier to interpret from userspace.
Signed-off-by: Will Deacon <will.deacon@arm.com>

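Userspace consumes these bits through the ELF auxiliary vector. A hedged example (the bit values shown are what this patch adds to asm/hwcap.h, stated here as assumptions; getauxval() requires glibc >= 2.16):

    #include <stdio.h>
    #include <sys/auxv.h>            /* getauxval(), AT_HWCAP */

    /* Assumed bit positions from arch/arm/include/asm/hwcap.h: */
    #define HWCAP_VFPv4 (1 << 16)
    #define HWCAP_IDIVA (1 << 17)    /* SDIV/UDIV in ARM state */
    #define HWCAP_IDIVT (1 << 18)    /* SDIV/UDIV in Thumb state */

    int main(void)
    {
        unsigned long hwcap = getauxval(AT_HWCAP);

        printf("vfpv4=%d idiva=%d idivt=%d\n",
               !!(hwcap & HWCAP_VFPv4),
               !!(hwcap & HWCAP_IDIVA),
               !!(hwcap & HWCAP_IDIVT));
        return 0;
    }
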
2011-07-07  ARM: 6994/1: smp_twd: Fix typo in 'twd_timer_rate' printing  (Vitaly Kuzmichev)
To get hundredths of MHz, the rate needs to be divided by 10,000. Here is an example with twd_timer_rate = 123456789.

Before the patch:

    twd_timer_rate / 1000000 = 123
    (twd_timer_rate / 1000000) % 100 = 23
    Result: 123.23MHz.

After the fix:

    twd_timer_rate / 1000000 = 123
    (twd_timer_rate / 10000) % 100 = 45
    Result: 123.45MHz.

Signed-off-by: Vitaly Kuzmichev <vkuzmichev@mvista.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

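In code form the fix is a one-divisor change (message text approximated; see arch/arm/kernel/smp_twd.c for the exact line):

    printk(KERN_INFO "smp_twd: clock %lu.%02luMHz\n",
           twd_timer_rate / 1000000,
           (twd_timer_rate / 10000) % 100);   /* was: / 1000000 */
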
2011-07-07  ARM: 6993/1: platsmp: Allow secondary cpu hotplug with maxcpus=1  (Stephen Boyd)
If an ARM system has multiple cpus in the same socket and the kernel is booted with maxcpus=1, secondary cpus are possible but not present due to how platform_smp_prepare_cpus() is called. Since most typical ARM processors don't actually support physical hotplug, initialize the present map to be equal to the possible map in generic ARM SMP code. Also, always call platform_smp_prepare_cpus() as long as max_cpus is non-zero (0 means no SMP) to allow platform code to do any SMP setup.

After applying this patch it's possible to boot an ARM system with maxcpus=1 on the command line and then hotplug in secondary cpus via sysfs. This is more in line with how x86 does things.
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Kukjin Kim <kgene.kim@samsung.com>
Cc: David Brown <davidb@codeaurora.org>
Cc: Tony Lindgren <tony@atomide.com>
Cc: Srinidhi Kasagar <srinidhi.kasagar@stericsson.com>
Cc: Linus Walleij <linus.walleij@stericsson.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

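The generic part amounts to two things in smp_prepare_cpus(); a hedged sketch with surrounding code omitted:

    void __init smp_prepare_cpus(unsigned int max_cpus)
    {
        /* ... per-cpu setup ... */

        /* present == possible, so CPUs held back by maxcpus= can
         * still be brought online via sysfs later */
        init_cpu_present(cpu_possible_mask);

        if (max_cpus)   /* 0 means SMP is disabled entirely */
            platform_smp_prepare_cpus(max_cpus);
    }
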
2011-07-06  ARM: 6960/1: allow enabling SCU code on UP  (Rob Herring)
The scu_power_mode function can be used on UP builds, as it drives signals to an SoC power controller. So make it selectable for !SMP.
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

2011-07-06  ARM: 6985/1: export functions to determine the presence of I/DTCM  (Linus Walleij)
By allowing code to detect whether DTCM or ITCM is present, code paths involving TCM can be avoided when running on platforms that lack it. This is good for creating single kernels across several archs, if some of them utilize TCM but others don't.
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

2011-07-06  ARM: 6984/1: enhance TCM robustness  (Linus Walleij)
The PB11MPCore reports "3" DTCM banks, but anything above 2 is an "undefined" value, so force this to 0. Further, add some checks for the case where code is compiled to TCM even though no D/ITCM is present in the system, and check whether the compiled code can really fit. We don't BUG() since that's not helpful; it's better to deal with non-present TCM dynamically. If nothing is compiled to TCM and no TCM is detected, it will now just stay silent even if TCM support is enabled.
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

2011-07-05  ARM: move memory layout sanity checking before meminfo initialization  (Russell King)
Ensure that the meminfo array is sanity checked before we pass the memory to memblock. This helps to ensure that memblock and meminfo agree on the dimensions of memory, especially when more memory is passed than the kernel can deal with.
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

2011-07-05  ARM: 6989/1: perf: do not start the PMU when no events are present  (Will Deacon)
armpmu_enable can be called in situations where no events are present (for example, from the event rotation tick after a profiled task has exited). In this case, we currently start the PMU anyway, which may leave it active indefinitely without any events being monitored. This patch adds a simple check to the enabling code so that we avoid starting the PMU when no events are present.
Cc: <stable@kernel.org>
Reported-by: Ashwin Chaugle <ashwinc@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

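The shape of the guard, as a hedged sketch against the ARM perf backend of the era (field and callback names may differ slightly by tree):

    static void armpmu_enable(struct pmu *pmu)
    {
        struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events);
        int enabled = bitmap_weight(cpuc->used_mask, armpmu->num_events);

        /* only kick the hardware if at least one event is scheduled in */
        if (enabled)
            armpmu->start();
    }
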
2011-07-02  ARM: entry: no need to reload the SPSR value from struct pt_regs  (Russell King)
The SVC IRQ, prefetch and data abort handlers preserve the SPSR value via r5 across the exception. Rather than re-loading it from pt_regs, use the preserved value instead.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

2011-07-02  ARM: entry: data abort: tail-call the main data abort handler  (Russell King)
Tail-call the main C data abort handler code from the per-CPU helper code. Update the comments in the code with respect to the new calling and return register state.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

2011-07-02  ARM: entry: data abort: arrange for CPU abort helpers to take pc/psr in r4/r5  (Russell King)
Re-jig the CPU abort helpers to take the PC/PSR in r4/r5 rather than r2/r3.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

2011-07-02  ARM: entry: prefetch abort: tail-call the main prefetch abort handler  (Russell King)
Tail-call the main C prefetch abort handler code from the per-CPU helper code. Also note that the helper function becomes ABI compliant in terms of the registers preserved.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

2011-07-02  ARM: entry: re-allocate registers in irq entry assembly macros  (Russell King)
This avoids the irq entry assembly corrupting r5, thereby allowing it to be preserved through to the svc exit code.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

2011-07-02  ARM: entry: consolidate trace_hardirqs_off into (svc|usr)_entry macros  (Russell King)
All handlers now call trace_hardirqs_off, so move this common code into the (svc|usr)_entry assembler macros.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

2011-07-02  ARM: entry: instrument usr exception handlers with irqsoff tracing  (Russell King)
As we no longer re-enable interrupts in these exception handlers, add the irqsoff tracing calls to them so that the kernel tracks the state more accurately. Note that these calls are conditional on IRQSOFF_TRACER:

    kernel ----------> user ----------> kernel
    ^ irqs enabled             ^ irqs disabled

No kernel code can run on the local CPU until we've re-entered the kernel through one of the exception handlers - and userspace cannot take any locks etc. So, the kernel doesn't care about the IRQ mask state while userspace is running, unless we're doing IRQ-off latency tracing. So, we can (and do) avoid the overhead of updating the IRQ mask state on every kernel->user and user->kernel transition.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

2011-07-02  ARM: entry: instrument svc undefined exception handler with irqtrace  (Russell King)
Add irqtrace function calls to the undefined exception handler, so that we get sane lockdep traces from locking problems in undefined exception handlers.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

2011-07-02  ARM: entry: avoid enabling interrupts in prefetch/data abort handlers  (Russell King)
Avoid enabling interrupts in the abort handler assembly code when the parent context had interrupts enabled, and move this into the breakpoint/page/alignment fault handlers instead. This gets rid of some special-casing for the breakpoint fault handlers in the low level abort handler path.
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

2011-07-02  ARM: pm: allow suspend finisher to return error codes  (Russell King)
There are SoCs where attempting to enter a low power state is ignored, and the CPU continues executing instructions with all state preserved. It is over-complex at that point to disable the MMU just to call the resume path. Instead, allow the suspend finisher to return error codes to abort suspend in this circumstance, where the cpu_suspend internals will then unwind the saved state on the stack. Also omit the TLB flush, as no changes to the page tables will have happened.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

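From a platform's point of view, the finisher can now bail out. A hedged sketch (the my_soc_* hook is hypothetical, and cpu_suspend's exact prototype in this series is assumed):

    static int my_soc_suspend_finisher(unsigned long arg)
    {
        if (my_soc_enter_low_power())    /* hypothetical: may be ignored */
            return -EBUSY;               /* unwound by cpu_suspend */

        return 0;   /* not reached if the power-down really happened */
    }

    /* cpu_suspend() then propagates the finisher's error code: */
    ret = cpu_suspend(0, my_soc_suspend_finisher);
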
2011-07-01  perf: Add context field to perf_event  (Avi Kivity)
The perf_event overflow handler does not receive any caller-derived argument, so many callers need to resort to looking up the perf_event in their local data structures. This is ugly and doesn't scale if a single callback services many perf_events.

Fix this by adding a context parameter to perf_event_create_kernel_counter() (and the derived hardware breakpoint APIs) and storing it in the perf_event. The field can be accessed from the callback as event->overflow_handler_context. All callers are updated.
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1309362157-6596-2-git-send-email-avi@redhat.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>

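In use, the context pointer travels from counter creation to the callback. A hedged sketch (my_watch and its fields are made up; the handler is shown without the nmi argument, matching the companion patch below):

    struct my_watch {
        atomic_t hits;
        /* ... caller-private state ... */
    };

    static void my_overflow(struct perf_event *event,
                            struct perf_sample_data *data,
                            struct pt_regs *regs)
    {
        struct my_watch *w = event->overflow_handler_context;

        atomic_inc(&w->hits);   /* no global lookup table needed */
    }

    /* last argument is the new context pointer: */
    event = perf_event_create_kernel_counter(&attr, cpu, task,
                                             my_overflow, watch);
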
2011-07-01  perf, arch: Add generic NODE cache events  (Peter Zijlstra)
Add a NODE level to the generic cache events, which is used to measure local vs remote memory accesses. Like all other cache events, an ACCESS is HIT+MISS; if there is no way to distinguish between reads and writes, do reads only, etc.

The below needs filling out for !x86 (which I filled out with unsupported events). I'm fairly sure ARM can leave it like that, since it doesn't strike me as an architecture that even has NUMA support. SH might have something since it does appear to have some NUMA bits. Sparc64, PowerPC and MIPS certainly want a good look there since they clearly are NUMA capable.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: David Miller <davem@davemloft.net>
Cc: Anton Blanchard <anton@samba.org>
Cc: David Daney <ddaney@caviumnetworks.com>
Cc: Deng-Cheng Zhu <dengcheng.zhu@gmail.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Robert Richter <robert.richter@amd.com>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/1303508226.4865.8.camel@laptop
Signed-off-by: Ingo Molnar <mingo@elte.hu>

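For reference, the generic cache-event space this extends looks as follows (values per include/linux/perf_event.h of the period; treat as a sketch):

    enum perf_hw_cache_id {
        PERF_COUNT_HW_CACHE_L1D  = 0,
        PERF_COUNT_HW_CACHE_L1I  = 1,
        PERF_COUNT_HW_CACHE_LL   = 2,
        PERF_COUNT_HW_CACHE_DTLB = 3,
        PERF_COUNT_HW_CACHE_ITLB = 4,
        PERF_COUNT_HW_CACHE_BPU  = 5,
        PERF_COUNT_HW_CACHE_NODE = 6,   /* new: local vs remote accesses */

        PERF_COUNT_HW_CACHE_MAX,
    };

Each id is further qualified by an op (READ/WRITE/PREFETCH) and a result (ACCESS/MISS), so NODE events slot into the existing (id, op, result) triple.
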
2011-07-01  perf: Remove the nmi parameter from the swevent and overflow interface  (Peter Zijlstra)
The nmi parameter indicated whether we could do wakeups from the current context; if not, we would set some state and self-IPI, and let the resulting interrupt do the wakeup. For the various event classes:

- hardware: nmi=0; the PMI is in fact an NMI, or we run irq_work_run from the PMI tail (ARM etc.)
- tracepoint: nmi=0; since a tracepoint could be from NMI context.
- software: nmi=[0,1]; some, like the schedule thing, cannot perform wakeups and hence need 0.

As one can see, there is very little nmi=1 usage, and the downside of not using it is that on some platforms some software events can have a jiffy delay in wakeup (when arch_irq_work_raise isn't implemented). The upside, however, is that we can remove the nmi parameter and save a bunch of conditionals in fast paths.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Michael Cree <mcree@orcon.net.nz>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Deng-Cheng Zhu <dengcheng.zhu@gmail.com>
Cc: Anton Blanchard <anton@samba.org>
Cc: Eric B Munson <emunson@mgebm.net>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: David S. Miller <davem@davemloft.net>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jason Wessel <jason.wessel@windriver.com>
Cc: Don Zickus <dzickus@redhat.com>
Link: http://lkml.kernel.org/n/tip-agjev8eu666tvknpb3iaj0fg@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@elte.hu>

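The mechanical effect on callers is a one-argument diet; a hedged before/after sketch:

    /* before: callers had to say whether they were in NMI context */
    if (perf_event_overflow(event, 1 /* nmi */, &data, regs))
        /* ... stop the counter ... */;

    /* after: the flag is gone from the overflow and swevent paths */
    if (perf_event_overflow(event, &data, regs))
        /* ... stop the counter ... */;
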
2011-06-30  ARM: entry: prefetch abort helper: pass aborted pc in r4 rather than r0  (Russell King)
This avoids unnecessary instructions for CPUs which implement the IFAR (instruction fault address register).
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

2011-06-30  ARM: entry: rejig register allocation in exception entry handlers  (Russell King)
This allows us to avoid moving registers twice to work around the clobbered registers when we add calls to trace_hardirqs_{on,off}. Ensure that all SVC handlers return with SPSR in r5 for consistency.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

2011-06-29  ARM: 6977/1: pmu: add platform_device_id table support  (Mark Rutland)
This patch adds support for platform_device_id tables, allowing new PMU types to be registered with the correct type, without requiring new platform_driver shims to provide the type. A single entry for existing devices is provided. Macros matching the functionality of the of_device_id table macros are provided for convenience.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Jamie Iles <jamie@jamieiles.com>
Cc: Rob Herring <rob.herring@calxeda.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

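The table shape, sketched (ARM_PMU_DEVICE_CPU is the existing CPU PMU type; the driver wiring is abbreviated and approximate):

    static struct platform_device_id armpmu_plat_device_ids[] = {
        { .name = "arm-pmu", .driver_data = ARM_PMU_DEVICE_CPU },
        { },
    };

    static struct platform_driver armpmu_driver = {
        .driver   = { .name = "arm-pmu" },
        .probe    = armpmu_device_probe,
        .id_table = armpmu_plat_device_ids,   /* match by name, carry the type */
    };
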
2011-06-29  ARM: 6976/1: pmu: add OF probing support  (Mark Rutland)
This is based on an earlier patch from Rob Herring <rob.herring@calxeda.com>:

> Add OF match table to enable OF style driver binding. The dts entry is like
> this:
>
>     pmu {
>         compatible = "arm,cortex-a9-pmu";
>         interrupts = <100 101>;
>     };
>
> The use of pdev->id as an index breaks with OF device binding, so set the
> type based on the OF compatible string.

This modification sets the PMU hardware type based on data embedded in the binding, allowing easy addition of new PMU types in the future. Support for new PMU types not provided by devicetree can be added later using platform_device_id tables in a similar fashion.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Jamie Iles <jamie@jamieiles.com>
Acked-by: Rob Herring <rob.herring@calxeda.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

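The corresponding match table might look like this (a hedged sketch; any compatible string beyond the one quoted above is an assumption):

    static const struct of_device_id armpmu_of_device_ids[] = {
        /* .data carries the PMU type the probe routine would otherwise
         * have derived from pdev->id */
        { .compatible = "arm,cortex-a9-pmu", .data = (void *)ARM_PMU_DEVICE_CPU },
        { },
    };
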
2011-06-29  ARM: 6975/1: pmu: reject duplicate PMU registrations  (Mark Rutland)
Currently, the PMU reservation framework allows for multiple PMUs of the same type to register themselves. This can lead to a bug with the sequence:

    register_pmu(pmu1);
    reserve_pmu(pmu_type);
    register_pmu(pmu2);
    release_pmu(pmu1);

Here, pmu1 cannot be released, and pmu2 cannot be reserved. This patch modifies register_pmu to reject registrations where a PMU is already present, preventing this problem. PMUs which can have multiple instances should not use the PMU reservation framework.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Jamie Iles <jamie@jamieiles.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

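The guard itself is small; a hedged sketch of the idea (the real code lives in arch/arm/kernel/pmu.c and differs in detail):

    static struct platform_device *pmu_devices[ARM_NUM_PMU_DEVICES];

    static int register_pmu(enum arm_pmu_type type,
                            struct platform_device *pdev)
    {
        if (pmu_devices[type])
            return -ENOSPC;    /* first registration of a type wins */

        pmu_devices[type] = pdev;
        return 0;
    }
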
2011-06-29  ARM: 6974/1: pmu: refactor reservation  (Mark Rutland)
Currently, PMU platform_device reservation relies on some minor abuse of the platform_device::id field for determining the type of PMU. This is problematic for device tree based probing, where the ID cannot be controlled. This patch removes the reliance on the id field, and depends on each PMU's platform driver to figure out which type it is. As all PMUs handled by the current platform_driver name "arm-pmu" are CPU PMUs, this convention is hardcoded. New PMU types can be supported through the use of {of,platform}_device_id tables.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Jamie Iles <jamie@jamieiles.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Rob Herring <rob.herring@calxeda.com>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

2011-06-29  ARM: entry: no need to check parent IRQ mask in IRQ handler return  (Russell King)
There's no point checking to see whether IRQs were masked in the parent context when returning from IRQ handling - the fact that we're handling an IRQ means that the parent context must have had IRQs unmasked.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

2011-06-29  ARM: entry: no need to increase preempt count for IRQ handlers  (Russell King)
irq_enter() and irq_exit() already take care of the preempt_count handling for interrupts, incrementing and decrementing the hardirq bits of the preempt count. So we can remove the preempt count handling in our IRQ entry/exit assembly, as x86 did some 9 years ago.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

2011-06-29  ARM: entry: prefetch/data abort helpers: avoid corrupting r4  (Russell King)
Replace r4 with ip for calling abort helpers - ip is allowed to be corrupted by called functions in the ABI, so it makes more sense to use such a register.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
