2010-12-19  ARM: clockevents: fix IOP clock events initialization  (Russell King)
Ensure that no interrupt is pending before registering the clock event device, and properly initialize the periodic tick in the ->set_mode callback. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-12-19  sched: Fix interactivity bug by charging unaccounted run-time on entity re-weight  (Paul Turner)
Mike Galbraith reported poor interactivity[*] when the new shares distribution code was combined with autogroups. The root cause turns out to be a mis-ordering of accounting accrued execution time and shares updates. Since update_curr() is issued hierarchically, updating the parent entity weights to reflect child enqueue/dequeue results in the parent's unaccounted execution time then being accrued (vs vruntime) at the new weight as opposed to the weight present at accumulation. While this doesn't have much effect on processes with timeslices that cross a tick, it is particularly problematic for an interactive process (e.g. Xorg) which incurs many (tiny) timeslices. In this scenario almost all updates are at dequeue which can result in significant fairness perturbation (especially if it is the only thread, resulting in potential {tg->shares, MIN_SHARES} transitions). Correct this by ensuring unaccounted time is accumulated prior to manipulating an entity's weight. [*] http://xkcd.com/619/ is perversely Nostradamian here. Signed-off-by: Paul Turner <pjt@google.com> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Mike Galbraith <efault@gmx.de> Cc: Linus Torvalds <torvalds@linux-foundation.org> LKML-Reference: <20101216031038.159704378@google.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
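The ordering fix can be illustrated with a minimal sketch (simplified; update_curr() is the accounting step named above, the re-weight helper and its callee names are illustrative, not the actual kernel/sched_fair.c code):

    /* charge any runtime accrued at the old weight before re-weighting,
     * so vruntime is never scaled by a weight the entity did not yet have */
    static void reweight_entity_sketch(struct cfs_rq *cfs_rq,
                                       struct sched_entity *se,
                                       unsigned long new_weight)
    {
            update_curr(cfs_rq);                     /* accrue pending exec time at the old weight */
            update_load_set(&se->load, new_weight);  /* only then switch to the new weight */
    }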
2010-12-19  sched: Move periodic share updates to entity_tick()  (Paul Turner)
Long running entities that do not block (dequeue) require periodic updates to maintain accurate share values. (Note: group entities with several threads are quite likely to be non-blocking in many circumstances). By virtue of being long-running however, we will see entity ticks (otherwise the required update occurs in dequeue/put and we are done). Thus we can move the detection (and associated work) for these updates into the periodic path. This restores the 'atomicity' of update_curr() with respect to accounting. Signed-off-by: Paul Turner <pjt@google.com> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> LKML-Reference: <20101216031038.067028969@google.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2010-12-19  Merge commit 'v2.6.37-rc6' into sched/core  (Ingo Molnar)
Merge reason: Update to the latest -rc. Signed-off-by: Ingo Molnar <mingo@elte.hu>
2010-12-19  oprofile, x86: Add support for 6 counters (AMD family 15h)  (Robert Richter)
This patch adds support for up to 6 hardware counters for AMD family 15h cpus. There is a new MSR range for hardware counters beginning at MSRC001_0200 Performance Event Select (PERF_CTL0). Signed-off-by: Robert Richter <robert.richter@amd.com>
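As a rough illustration of that layout (the 0xc0010200 base follows from MSRC001_0200 above; the macro names and the assumption that select/count registers come in pairs are mine, not quoted from the patch):

    #define MSR_F15H_PERF_CTL  0xc0010200   /* PERF_CTL0: event select for counter 0 */
    #define MSR_F15H_PERF_CTR  0xc0010201   /* PERF_CTR0: event count for counter 0 */

    /* with (select, count) pairs, counter i lives at base + 2 * i */
    static inline unsigned int f15h_ctl_msr(int i) { return MSR_F15H_PERF_CTL + 2 * i; }
    static inline unsigned int f15h_ctr_msr(int i) { return MSR_F15H_PERF_CTR + 2 * i; }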
2010-12-19  oprofile, x86: Add support for AMD family 15h  (Robert Richter)
This patch adds support for AMD family 15h (Interlagos/Valencia/Zambezi) cpus. Signed-off-by: Robert Richter <robert.richter@amd.com>
2010-12-18  ipv6: remove duplicate neigh_ifdown  (stephen hemminger)
When a device is being set down, neigh_ifdown was being called twice: once from the addrconf notifier and once from the ndisc notifier. Signed-off-by: Stephen Hemminger <shemminger@vyatta.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2010-12-18  ipv6: fib6_ifdown cleanup  (stephen hemminger)
Remove (unnecessary) casts to make code cleaner. Signed-off-by: Stephen Hemminger <shemminger@vyatta.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2010-12-18  Merge git://git.kernel.org/pub/scm/linux/kernel/git/cmetcalf/linux-tile  (Linus Torvalds)
* git://git.kernel.org/pub/scm/linux/kernel/git/cmetcalf/linux-tile:
  arch/tile: handle rt_sigreturn() more cleanly
  arch/tile: handle CLONE_SETTLS in copy_thread(), not user space
2010-12-18  Merge branch 'upstream' of git://git.linux-mips.org/pub/scm/upstream-linus  (Linus Torvalds)
* 'upstream' of git://git.linux-mips.org/pub/scm/upstream-linus:
  MIPS: Fix build errors in sc-mips.c
2010-12-18  Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jbarnes/pci-2.6  (Linus Torvalds)
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jbarnes/pci-2.6:
  x86: avoid high BIOS area when allocating address space
  x86: avoid E820 regions when allocating address space
  x86: avoid low BIOS area when allocating address space
  resources: add arch hook for preventing allocation in reserved areas
  Revert "resources: support allocating space within a region from the top down"
  Revert "PCI: allocate bus resources from the top down"
  Revert "x86/PCI: allocate space from the end of a region, not the beginning"
  Revert "x86: allocate space within a region top-down"
  Revert "PCI: fix pci_bus_alloc_resource() hang, prefer positive decode"
  PCI: Update MCP55 quirk to not affect non HyperTransport variants
2010-12-18  omap4: l2x0: Enable early BRESP bit  (Santosh Shilimkar)
The AXI protocol specifies that the write response can only be sent back to an AXI master when the last write data has been accepted. This optimization enables the PL310 to send the write response of certain write transactions as soon as the store buffer accepts the write address. This behavior is not compatible with the AXI protocol and is disabled by default. You enable this optimization by setting the Early BRESP Enable bit in the Auxiliary Control Register (bit [30]). Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com> Signed-off-by: Mans Rullgard <mans@mansr.com> Tested-by: Nishanth Menon <nm@ti.com> Signed-off-by: Tony Lindgren <tony@atomide.com>
2010-12-18  omap4: l2x0: Set share override bit  (Santosh Shilimkar)
Clearing bit 22 in the PL310 Auxiliary Control register (shared attribute override enable) has the side effect of transforming Normal Shared Non-cacheable reads into Cacheable no-allocate reads. Coherent DMA buffers in Linux always have a Cacheable alias via the kernel linear mapping and the processor can speculatively load cache lines into the PL310 controller. With bit 22 cleared, Non-cacheable reads would unexpectedly hit such cache lines, leading to buffer corruption. Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com> Tested-by: Nishanth Menon <nm@ti.com> Signed-off-by: Tony Lindgren <tony@atomide.com>
2010-12-18  omap4: l2x0: enable instruction and data prefetching  (Mans Rullgard)
Enabling L2 prefetching improves performance, as shown on a Panda ES2.1 board with a memory test; the impact is measurable. I think we should consider it, even though it hurts "writes" a bit. (rebased to k.org) Usually prefetch is used at both levels together (L1 + L2); however, the CP15 prefetch engines are under security control and cannot be enabled on GP devices (e.g. on PandaBoard). However, just enabling PL310 prefetch seems to provide a performance improvement, as shown in the data below (from Ubuntu), and would be a great thing to pull in. What prefetch does is enable automatic next-line prefetching: with this enabled, whenever the PL310 receives a cacheable read request, it automatically prefetches the following cache line as well.

Measurement Data:

== STOCK 10.10 WITHOUT PATCH ========================
~# ./memspeed
size 8388608 8192k 8M
offset 8388608, 0
buffers 0x2aaad000 0x2b2ad000
copy libc          133 MB/s
copy Android v5    273 MB/s
copy Android NEON  235 MB/s
copy INT32         116 MB/s
copy ASM ARM       187 MB/s
copy ASM VLDM 64   204 MB/s
copy ASM VLDM 128  173 MB/s
copy ASM VLD1      216 MB/s
read ASM ARM       286 MB/s
read ASM VLDM      242 MB/s
read ASM VLD1      286 MB/s
write libc        1947 MB/s
write ASM ARM     1943 MB/s
write ASM VSTM    1942 MB/s
write ASM VST1    1935 MB/s

10.10 + PATCH =============
~# ./memspeed
size 8388608 8192k 8M
offset 8388608, 0
buffers 0x2ab17000 0x2b317000
copy libc          129 MB/s
copy Android v5    256 MB/s
copy Android NEON  356 MB/s
copy INT32         127 MB/s
copy ASM ARM       321 MB/s
copy ASM VLDM 64   337 MB/s
copy ASM VLDM 128  321 MB/s
copy ASM VLD1      350 MB/s
read ASM ARM       496 MB/s
read ASM VLDM      470 MB/s
read ASM VLD1      488 MB/s
write libc        1701 MB/s
write ASM ARM     1682 MB/s
write ASM VSTM    1693 MB/s
write ASM VST1    1681 MB/s

Signed-off-by: Mans Rullgard <mans@mansr.com> Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com> Tested-by: Nishanth Menon <nm@ti.com> Signed-off-by: Tony Lindgren <tony@atomide.com>
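A minimal sketch of what enabling PL310 prefetch amounts to (bit positions are from the PL310 documentation; the define and function names below are illustrative, not the actual OMAP code):

    #define PL310_AUX_INSTR_PREFETCH  (1 << 29)  /* instruction prefetch enable */
    #define PL310_AUX_DATA_PREFETCH   (1 << 28)  /* data prefetch enable */

    static void __init omap_l2_prefetch_sketch(void __iomem *l2cache_base)
    {
            u32 aux = PL310_AUX_INSTR_PREFETCH | PL310_AUX_DATA_PREFETCH;

            /* an aux_mask of ~0 keeps the controller's other settings and
             * only ORs in the two prefetch enables requested above */
            l2x0_init(l2cache_base, aux, ~0U);
    }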
2010-12-18  omap4: l2x0: Construct the AUXCTRL value using defines  (Santosh Shilimkar)
This patch removes the hardcoded auxctrl value and constructs it using bitfield defines. Bit 25 is reserved and is always set to 1; the same value of this bit is retained in this patch. Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com> Tested-by: Nishanth Menon <nm@ti.com> Signed-off-by: Tony Lindgren <tony@atomide.com>
2010-12-18  ARM: l2x0: Add aux control register bitfields  (Santosh Shilimkar)
This patch adds the PL310 Auxiliary Control Register bitfields so that SoCs can use these bitfields to construct the AUXCTRL value to be passed/programmed instead of hardcoding it. Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Tony Lindgren <tony@atomide.com>
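A sketch of how a SoC can then compose the value from named bitfields instead of a magic constant (the shift names are illustrative; the bit positions follow the commits above):

    #define L2X0_AUX_SHARE_OVERRIDE_SHIFT  22  /* shared attribute override enable */
    #define L2X0_AUX_RESERVED_BIT25_SHIFT  25  /* reserved, must stay set */
    #define L2X0_AUX_EARLY_BRESP_SHIFT     30  /* early BRESP enable */

    static u32 omap4_l2x0_auxctrl_sketch(void)
    {
            return (1 << L2X0_AUX_SHARE_OVERRIDE_SHIFT) |
                   (1 << L2X0_AUX_RESERVED_BIT25_SHIFT) |
                   (1 << L2X0_AUX_EARLY_BRESP_SHIFT);
    }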
2010-12-18  vmstat: Use per cpu atomics to avoid interrupt disable / enable  (Christoph Lameter)
Currently the operations to increment vm counters must disable interrupts in order to not mess up their housekeeping of counters. So use this_cpu_cmpxchg() to avoid the overhead. Since we can no longer count on preemption being disabled, we still have some minor issues. The fetching of the counter thresholds is racy: a threshold from another cpu may be applied if we happen to be rescheduled on another cpu. However, the following vmstat operation will then bring the counter again under the threshold limit. The operations for __xxx_zone_state are not changed since the caller has taken care of the synchronization needs (and therefore the cycle count is even less than the optimized version for the irq disable case provided here). The optimization using this_cpu_cmpxchg will only be used if the arch supports efficient this_cpu_ops (must have CONFIG_CMPXCHG_LOCAL set!). The use of this_cpu_cmpxchg reduces the cycle count for the counter operations by 80% (inc_zone_page_state goes from 170 cycles to 32). Signed-off-by: Christoph Lameter <cl@linux.com>
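The lockless retry pattern can be sketched in plain user-space C; this is only an analogy, with a C11 atomic standing in for this_cpu_cmpxchg() and ordinary variables standing in for the per cpu counter and its threshold:

    #include <stdatomic.h>

    static _Atomic long vm_stat_diff;     /* stands in for the per cpu delta */
    static long stat_threshold = 32;      /* stands in for the (racily read) threshold */

    void mod_state_sketch(long delta)
    {
            long old, new;

            do {
                    old = atomic_load_explicit(&vm_stat_diff, memory_order_relaxed);
                    new = old + delta;
                    /* on a threshold crossing the real code folds 'new' into the
                     * global counter and restarts the per cpu delta from zero */
                    if (new > stat_threshold || new < -stat_threshold)
                            new = 0;
            } while (!atomic_compare_exchange_weak(&vm_stat_diff, &old, new));
    }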
2010-12-18  irq_work: Use per cpu atomics instead of regular atomics  (Christoph Lameter)
The irq work queue is a per cpu object and it is sufficient for synchronization if per cpu atomics are used. Doing so simplifies the code and reduces the overhead of the code.

Before:
christoph@linux-2.6$ size kernel/irq_work.o
   text    data     bss     dec     hex filename
    451       8       1     460     1cc kernel/irq_work.o

After:
christoph@linux-2.6$ size kernel/irq_work.o
   text    data     bss     dec     hex filename
    438       8       1     447     1bf kernel/irq_work.o

Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Signed-off-by: Christoph Lameter <cl@linux.com>
2010-12-18  Merge branch 'this_cpu_ops' into for-2.6.38  (Tejun Heo)
2010-12-18  cpuops: Use cmpxchg for xchg to avoid lock semantics  (Christoph Lameter)
Use cmpxchg instead of xchg to realize this_cpu_xchg. xchg will cause LOCK overhead since LOCK is always implied, but cmpxchg will not.

Baselines:
  xchg()          = 18 cycles (no segment prefix, LOCK semantics)
  __this_cpu_xchg =  1 cycle  (simulated using this_cpu_read/write, two prefixes; it looks like the cpu can use loop optimization to get rid of most of the overhead)

Cycles before:
  this_cpu_xchg = 37 cycles (segment prefix and LOCK, implied by xchg)

After:
  this_cpu_xchg = 11 cycles (using cmpxchg without lock semantics)

Signed-off-by: Christoph Lameter <cl@linux.com> Signed-off-by: Tejun Heo <tj@kernel.org>
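The replacement is essentially an unlocked cmpxchg retry loop; a sketch in the style of the percpu macros (approximate, not quoted from the patch):

    #define this_cpu_xchg_sketch(pcp, nval)                         \
    ({                                                              \
            typeof(pcp) old__;                                      \
            do {                                                    \
                    old__ = this_cpu_read(pcp);                     \
            } while (this_cpu_cmpxchg(pcp, old__, nval) != old__);  \
            old__;                                                  \
    })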
2010-12-18  x86: this_cpu_cmpxchg and this_cpu_xchg operations  (Christoph Lameter)
Provide support as far as the hardware capabilities of the x86 cpus allow. Define CONFIG_CMPXCHG_LOCAL in Kconfig.cpu to allow core code to test for fast cpuops implementations.

V1->V2:
- Take out the definition for this_cpu_cmpxchg_8 and move it into a separate patch.

tj:
- Reordered ops to better follow this_cpu_* organization.
- Renamed macro temp variables similar to their existing neighbours.

Signed-off-by: Christoph Lameter <cl@linux.com> Signed-off-by: Tejun Heo <tj@kernel.org>
2010-12-18  percpu: Generic this_cpu_cmpxchg() and this_cpu_xchg support  (Christoph Lameter)
Generic code to provide the new per cpu atomic operations this_cpu_cmpxchg and this_cpu_xchg. The fallback uses interrupt disable/enable to ensure correct per cpu atomicity. Falling back to regular cmpxchg and xchg is not possible, since per cpu atomic semantics guarantee that the current cpu's per cpu data is accessed atomically; regular cmpxchg and xchg require the address of the per cpu data to be determined first, and that address calculation cannot be included atomically in the xchg or cmpxchg without a segment override.

tj:
- Relocated new ops to conform better to the general organization.
- This patch contains a trivial comment fix.

Signed-off-by: Christoph Lameter <cl@linux.com> Signed-off-by: Tejun Heo <tj@kernel.org>
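On architectures without a cheap cpu-local cmpxchg the fallback is the usual interrupt-off critical section, roughly (a sketch, not the exact generic macro):

    #define this_cpu_generic_xchg_sketch(pcp, nval)        \
    ({                                                     \
            typeof(pcp) ret__;                             \
            unsigned long flags__;                         \
            local_irq_save(flags__);                       \
            ret__ = __this_cpu_read(pcp);                  \
            __this_cpu_write(pcp, nval);                   \
            local_irq_restore(flags__);                    \
            ret__;                                         \
    })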
2010-12-18  Merge git://git.kernel.org/pub/scm/linux/kernel/git/nico/orion into devel-stable  (Russell King)
2010-12-18  Merge branch 'hw-breakpoint' of git://repo.or.cz/linux-2.6/linux-wd into devel-stable  (Russell King)
2010-12-18  ARM: smp: avoid incrementing mm_users on CPU startup  (Russell King)
We should not be incrementing mm_users when we start up a secondary CPU: doing so results in mm_users incrementing by one each time we hotplug a CPU, which will eventually wrap and cause problems. Other architectures such as x86 do not increment mm_users, but only mm_count, so we follow that pattern. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
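Concretely, the secondary CPU only needs to pin the structure, not register as a user of it; a simplified sketch of the bring-up path:

    struct mm_struct *mm = &init_mm;

    /* the idle thread merely borrows init_mm: take an mm_count reference
     * to keep the struct alive, but do not bump mm_users */
    atomic_inc(&mm->mm_count);
    current->active_mm = mm;
    cpumask_set_cpu(cpu, mm_cpumask(mm));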
2010-12-18  ARM: pxa: sanitize IRQ registers access based on offset  (Haojian Zhuang)
Signed-off-by: Eric Miao <eric.y.miao@gmail.com> Signed-off-by: Haojian Zhuang <haojian.zhuang@marvell.com>
2010-12-18  ARM: mmp: select CPU_PJ4  (Haojian Zhuang)
Since CPU_PJ4 is shared between PXA95x and MMP2, select CPU_PJ4 in MMP2 configuration. Signed-off-by: Haojian Zhuang <haojian.zhuang@marvell.com> Signed-off-by: Eric Miao <eric.y.miao@gmail.com>
2010-12-18  ARM: pxa: support saarb platform  (Haojian Zhuang)
Saarb is a handheld platform based on the Marvell PXA955 silicon. Signed-off-by: Haojian Zhuang <haojian.zhuang@marvell.com> Signed-off-by: Eric Miao <eric.y.miao@gmail.com>
2010-12-18  ARM: pxa: support pxa95x  (Haojian Zhuang)
The core of PXA955 is PJ4. Add new PJ4 support, and add the new macro CONFIG_PXA95x. Signed-off-by: Haojian Zhuang <haojian.zhuang@marvell.com> Signed-off-by: Eric Miao <eric.y.miao@gmail.com>
2010-12-18  ARM: pxa: introduce pxa3xx_clock_sysclass for clock suspend/resume  (Eric Miao)
Signed-off-by: Haojian Zhuang <haojian.zhuang@marvell.com> Signed-off-by: Eric Miao <eric.y.miao@gmail.com>
2010-12-18  hid: egalax: Add support for Wetab (726b)  (Andy Ross)
This patch adds support for another Wetab device (726b), and grabs it accordingly in hid-core. [rydberg@euromail.se: rename and log message changes] Signed-off-by: Andy Ross <andy@plausible.org> Signed-off-by: Jiri Kosina <jkosina@suse.cz> Signed-off-by: Henrik Rydberg <rydberg@euromail.se>
2010-12-17  Merge branch 'for_2.6.38' of git://gitorious.org/iommu_mailbox/iommu_mailbox into devel-iommu-mailbox  (Tony Lindgren)
2010-12-17  omap: remove dead wdt code in plat-omap/devices.c  (Anand Gadiyar)
Commit f2ce62312650 (OMAP: WDT: Split OMAP1 and OMAP2PLUS device registration) removed omap_init_wdt and related structures from plat-omap/devices.c. However a subsequent commit or merge seems to have reintroduced these by accident. The caller of omap_init_wdt was also removed by that commit, and this did not get restored. So we have the following build warning now:

  CC      arch/arm/plat-omap/devices.o
  arch/arm/plat-omap/devices.c:252: warning: 'omap_init_wdt' defined but not used

Fix this by removing this dead code. Signed-off-by: Anand Gadiyar <gadiyar@ti.com> Cc: Tony Lindgren <tony@atomide.com> Signed-off-by: Tony Lindgren <tony@atomide.com>
2010-12-17  OMAP4: enable smc instruction in new assembler versions  (John Rigby)
New assemblers need -march=armv7-a+sec on command line or .arch_extension sec inline to enable use of the smc instruction. This patch uses as-instr to check the latter to conditionally enable the former in AFLAGS for files that use smc. Checked on both old and new binutils to verify that it does not break old versions. Signed-off-by: John Rigby <john.rigby@linaro.org> Signed-off-by: Tony Lindgren <tony@atomide.com>
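For reference, the inline-directive variant mentioned above looks roughly like this (illustrative wrapper; the OMAP code keeps its smc helpers in .S files and adjusts AFLAGS instead):

    static inline void smc_sketch(void)
    {
            /* declare use of the security extension so new assemblers
             * accept the smc instruction */
            asm volatile(".arch_extension sec\n\t"
                         "dsb\n\t"
                         "smc #0"
                         : : : "memory");
    }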
2010-12-17  omap: kill all section mismatch warnings for omap2plus_defconfig  (Bryan Wu)
This patch will kill the following section mismatch warnings:

WARNING: vmlinux.o(.text+0x24a00): Section mismatch in reference from the function zoom_twl_gpio_setup() to the (unknown reference) .init.data:(unknown)
The function zoom_twl_gpio_setup() references the (unknown reference) __initdata (unknown). This is often because zoom_twl_gpio_setup lacks a __initdata annotation or the annotation of (unknown) is wrong.

WARNING: vmlinux.o(.text+0x24bfc): Section mismatch in reference from the function cm_t35_twl_gpio_setup() to the (unknown reference) .init.data:(unknown)
The function cm_t35_twl_gpio_setup() references the (unknown reference) __initdata (unknown). This is often because cm_t35_twl_gpio_setup lacks a __initdata annotation or the annotation of (unknown) is wrong.

WARNING: vmlinux.o(.data+0x1d3e0): Section mismatch in reference from the variable h4_config to the (unknown reference) .init.data:(unknown)
The variable h4_config references the (unknown reference) __initdata (unknown)
If the reference is valid then annotate the variable with __init* or __refdata (see linux/init.h) or name the variable: *_template, *_timer, *_sht, *_ops, *_probe, *_probe_one, *_console,

WARNING: vmlinux.o(.data+0x1dc08): Section mismatch in reference from the variable sdp2430_config to the (unknown reference) .init.data:(unknown)
The variable sdp2430_config references the (unknown reference) __initdata (unknown)
If the reference is valid then annotate the variable with __init* or __refdata (see linux/init.h) or name the variable: *_template, *_timer, *_sht, *_ops, *_probe, *_probe_one, *_console,

WARNING: vmlinux.o(.data+0x1e1d8): Section mismatch in reference from the variable apollon_config to the (unknown reference) .init.data:(unknown)
The variable apollon_config references the (unknown reference) __initdata (unknown)
If the reference is valid then annotate the variable with __init* or __refdata (see linux/init.h) or name the variable: *_template, *_timer, *_sht, *_ops, *_probe, *_probe_one, *_console,

Signed-off-by: Bryan Wu <bryan.wu@canonical.com> Cc: Paul Walmsley <paul@pwsan.com> Signed-off-by: Tony Lindgren <tony@atomide.com>
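The usual shape of such a fix is to make the annotations agree on both sides of the reference, for example (generic illustration only, not the actual hunks of this patch):

    /* either both the referencing code and the referenced object are
     * init-only ... */
    static struct spi_board_info h4_spi_board_info[] __initdata = { /* ... */ };

    static void __init h4_init(void)
    {
            spi_register_board_info(h4_spi_board_info,
                                    ARRAY_SIZE(h4_spi_board_info));
    }

    /* ... or neither is: drop __initdata when the object outlives boot */
    static struct omap_board_config_kernel h4_config[] = { /* ... */ };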
2010-12-17  omap: boards w/ wl12xx should select REGULATOR_FIXED_VOLTAGE  (Ohad Ben-Cohen)
Power to the wl12xx wlan device is controlled by a fixed regulator. Boards that have the wl12xx should select REGULATOR_FIXED_VOLTAGE so users will not be baffled. Signed-off-by: Ohad Ben-Cohen <ohad@wizery.com> Signed-off-by: Tony Lindgren <tony@atomide.com>
2010-12-17  OMAP3: add comments for errata i540 and i478 workarounds  (Jean Pihet)
Add comments and IDs for the following errata:
- i540: MPU cannot exit from Standby
- i478: Unexpected Cold-Reset is generated when the device is coming back from OFF mode
Signed-off-by: Jean Pihet <j-pihet@ti.com> Signed-off-by: Tony Lindgren <tony@atomide.com>
2010-12-17  Merge branch 'devel-board' into omap-for-linus  (Tony Lindgren)
2010-12-17  arm: omap: add minimal support for RM-680  (Aaro Koskinen)
Add minimal support for the Nokia RM-680 board. Tested with omap2plus_defconfig. Signed-off-by: Aaro Koskinen <aaro.koskinen@nokia.com> [tony@atomide.com: updated to remove omap_gpio_init] Signed-off-by: Tony Lindgren <tony@atomide.com>
2010-12-17  arm: omap: sdram-nokia: add 97.6/195.2 MHz timing data  (Aaro Koskinen)
Introduce 97.6/195.2 MHz memory timing data. Based on patches by Eduardo Valentin, Igor Dmitriev and Juha Keski-Saari. Signed-off-by: Aaro Koskinen <aaro.koskinen@nokia.com> Cc: Eduardo Valentin <eduardo.valentin@nokia.com> Cc: Igor Dmitriev <ext-dmitriev.igor@nokia.com> Signed-off-by: Tony Lindgren <tony@atomide.com>
2010-12-17  arm: omap: sdram-nokia: delete redundant timing data  (Aaro Koskinen)
41.5 MHz SDRAM clock is not usable. Signed-off-by: Aaro Koskinen <aaro.koskinen@nokia.com> Signed-off-by: Tony Lindgren <tony@atomide.com>
2010-12-17  arm: omap: sdram-nokia: improve error handling  (Aaro Koskinen)
Actually check for errors: print an error log and return NULL. Signed-off-by: Aaro Koskinen <aaro.koskinen@nokia.com> Signed-off-by: Tony Lindgren <tony@atomide.com>
2010-12-17  arm: omap: sdram-nokia: use array to list timings  (Aaro Koskinen)
Use an array to make it easier to add new values. Signed-off-by: Aaro Koskinen <aaro.koskinen@nokia.com> Signed-off-by: Tony Lindgren <tony@atomide.com>
2010-12-17  arm: omap: sdram-nokia: prepare for new memory timings  (Aaro Koskinen)
Rename the current timings to indicate they're for 166 MHz. Based on patches by Eduardo Valentin and Juha Keski-Saari. Signed-off-by: Aaro Koskinen <aaro.koskinen@nokia.com> Cc: Eduardo Valentin <eduardo.valentin@nokia.com> Signed-off-by: Tony Lindgren <tony@atomide.com>
2010-12-17  arm: omap: add sdram-nokia.h  (Aaro Koskinen)
Add a header file for Nokia SDRAM functions. Based on patches by Juha Keski-Saari. Signed-off-by: Aaro Koskinen <aaro.koskinen@nokia.com> Signed-off-by: Tony Lindgren <tony@atomide.com>
2010-12-17  arm: omap: rename board-rx51-sdram.c to sdram-nokia.c  (Aaro Koskinen)
Rename the file and functions so that it can be reused by future Nokia boards. Based on patches by Juha Keski-Saari. Signed-off-by: Aaro Koskinen <aaro.koskinen@nokia.com> Signed-off-by: Tony Lindgren <tony@atomide.com>
2010-12-17  x86, kexec: Limit the crashkernel address appropriately  (H. Peter Anvin)
Keep the crash kernel address below 512 MiB for 32 bits and 896 MiB for 64 bits. For 32 bits, this retains compatibility with earlier kernel releases, and makes it work even if the vmalloc= setting is adjusted. For 64 bits, we should be able to increase this substantially once a hard-coded limit in kexec-tools is fixed. Signed-off-by: H. Peter Anvin <hpa@linux.intel.com> Cc: Vivek Goyal <vgoyal@redhat.com> Cc: Stanislaw Gruszka <sgruszka@redhat.com> Cc: Yinghai Lu <yinghai@kernel.org> LKML-Reference: <20101217195035.GE14502@redhat.com>
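A sketch of the cap described above, as it could appear in the reservation path (the macro name and surrounding variables are assumptions, not quoted from the patch):

    #ifdef CONFIG_X86_32
    # define CRASH_KERNEL_ADDR_MAX  (512 << 20)   /* stay below 512 MiB */
    #else
    # define CRASH_KERNEL_ADDR_MAX  (896 << 20)   /* stay below 896 MiB for now */
    #endif

    /* inside the crashkernel reservation: when no base was given on the
     * command line, search for crash_size bytes below the cap */
    crash_base = memblock_find_in_range(alignment, CRASH_KERNEL_ADDR_MAX,
                                        crash_size, alignment);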
2010-12-17  arch/tile: handle rt_sigreturn() more cleanly  (Chris Metcalf)
The current tile rt_sigreturn() syscall pattern uses the common idiom of loading up pt_regs with all the saved registers from the time of the signal, then anticipating the fact that we will clobber the ABI "return value" register (r0) as we return from the syscall by setting the rt_sigreturn return value to whatever random value was in the pt_regs for r0. However, this breaks in our 64-bit kernel when running "compat" tasks, since we always sign-extend the "return value" register to properly handle returned pointers that are in the upper 2GB of the 32-bit compat address space. Doing this to the sigreturn path then causes occasional random corruption of the 64-bit r0 register. Instead, we stop doing the crazy "load the return-value register" hack in sigreturn. We already have some sigreturn-specific assembly code that we use to pass the pt_regs pointer to C code. We extend that code to also set the link register to point to a spot a few instructions after the usual syscall return address so we don't clobber the saved r0. Now it no longer matters what the rt_sigreturn syscall returns, and the pt_regs structure can be cleanly and completely reloaded. Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
2010-12-17  arch/tile: handle CLONE_SETTLS in copy_thread(), not user space  (Chris Metcalf)
Previously we were just setting up the "tp" register in the new task as started by clone() in libc. However, this is not quite right, since in principle a signal might be delivered to the new task before it had its TLS set up. (Of course, this race window still exists for resetting the libc getpid() cached value in the new task, in principle. But in any case, we are now doing this exactly the way all other architectures do it.) This change is important for 2.6.37 since the tile glibc we will be submitting upstream will not set TLS in user space any more, so it will only work on a kernel that has this fix. It should also be taken for 2.6.36.x in the stable tree if possible. Signed-off-by: Chris Metcalf <cmetcalf@tilera.com> Cc: stable <stable@kernel.org>
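Conceptually, the kernel-side handling is only a couple of lines in copy_thread(); a hedged sketch (the register carrying the TLS argument is an assumption about the tile ABI):

    /* in copy_thread(): seed the child's thread-pointer register from the
     * clone() TLS argument instead of leaving it to user space */
    if (clone_flags & CLONE_SETTLS)
            childregs->tp = regs->regs[4];   /* assumed argument register */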
2010-12-17  ACPI: Execute _PRW for devices reported as inactive or not present  (Rafael J. Wysocki)
If a device is reported as inactive or not present by its _STA control method, acpi_bus_check_add() skips it without evaluating its _PRW method. This leads to a problem when the device's _PRW method points to a GPE, because in that case the GPE may be enabled by ACPICA during the subsequent acpi_update_gpes() call which, in turn, may cause a GPE storm to appear. To avoid this issue, make acpi_bus_check_add() evaluate _PRW for inactive or not present devices and register the wakeup GPE information returned by them, so that acpi_update_gpes() does not enable their GPEs unnecessarily. Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl> Reported-by: Matthew Garrett <mjg@redhat.com> Signed-off-by: Len Brown <len.brown@intel.com>