path: root/arch/arm/mm/proc-v7.S
2017-04-09ARM: Update cpu_v7_reset documentationMarc Zyngier
cpu_v7_reset() now takes a second parameter indicating whether we should reboot in HYP or not. Update the documentation to reflect this. Tested-by: Keerthy <j-keerthy@ti.com> Acked-by: Russell King <rmk+kernel@armlinux.org.uk> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <cdall@linaro.org>
2017-04-09ARM: soft-reboot into same mode that we entered the kernelRussell King
When we soft-reboot (eg, kexec) from one kernel into the next, we need to ensure that we enter the new kernel in the same processor mode as when we were entered, so that (eg) the new kernel can install its own hypervisor - the old kernel's hypervisor will have been overwritten. In order to do this, we need to pass a flag to cpu_reset() so it knows what to do, and we need to modify the kernel's own hypervisor stub to allow it to handle a soft-reboot. As we are always guaranteed to install our own hypervisor if we're entered in HYP32 mode, and KVM will have moved itself out of the way on kexec/normal reboot, we can assume that our hypervisor is in place when we want to kexec, so changing our hypervisor API should not be a problem. Tested-by: Keerthy <j-keerthy@ti.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <cdall@linaro.org>
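As a rough illustration of the call contract this creates — a hedged sketch only, assuming the branch target arrives in r0 and the new mode flag in r1; the actual HYP re-entry path through the hyp-stub is elided:

ENTRY(cpu_v7_reset)
        mrc     p15, 0, r2, c1, c0, 0   @ read SCTLR
        bic     r2, r2, #0x1            @ clear the M bit (MMU enable)
        mcr     p15, 0, r2, c1, c0, 0   @ MMU off before the jump
        isb
        @ r1 != 0 would first drop back into HYP via the hyp-stub (elided)
        bx      r0                      @ branch to the new kernel
ENDPROC(cpu_v7_reset)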
2016-08-23ARM: 8599/1: mm: pull asm/memory.h explicitlyVladimir Murzin
Commit d78114554939a ("ARM: 8512/1: proc-v7.S: Adjust stack address when XIP_KERNEL") introduced a macro which lives under asm/memory.h. Unfortunately, for MMU-less systems (like R-class) it leads to a build failure:

arch/arm/mm/proc-v7.S: Assembler messages:
arch/arm/mm/proc-v7.S:538: Error: unrecognised relocation suffix
make[1]: *** [arch/arm/mm/proc-v7.o] Error 1
make: *** [arch/arm/mm] Error 2

since asm/memory.h is pulled in implicitly via asm/pgtable.h for MMU-capable systems only. To fix it, include asm/memory.h explicitly. Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2016-07-14ARM: 8560/1: errata: Workaround errata A12 825619 / A17 852421Doug Anderson
The workaround for both errata is to set bit 24 in the diagnostic register. There are no known end-user bugs solved by fixing these errata, but the fix is trivial and it seems sane to apply it. The arguments for why this needs to be in the kernel are similar to the arguments made in the patch "Workaround errata A12 818325/852422 A17 852423" (see the sketch of the read-modify-write pattern under that entry below). Signed-off-by: Douglas Anderson <dianders@chromium.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2016-07-14ARM: 8559/1: errata: Workaround erratum A12 821420Doug Anderson
This erratum has a very simple workaround (set a bit in a register), so let's apply it. Apparently the workaround's only downside is a very slight power impact. Note that applying this workaround fixes deadlocks that are easy to reproduce with real-world applications. The arguments for why this needs to be in the kernel are similar to the arguments made in the patch "Workaround errata A12 818325/852422 A17 852423". Signed-off-by: Douglas Anderson <dianders@chromium.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2016-07-14ARM: 8558/1: errata: Workaround errata A12 818325/852422 A17 852423Doug Anderson
There are several similar errata on Cortex-A12 and A17 that all have the same workaround: setting bit[12] of the Feature Register. Technically, the errata are:

- A12 818325: Execution of an UNPREDICTABLE STR or STM instruction might deadlock. Fixed in r0p1.
- A12 852422: Execution of a sequence of instructions might lead to either a data corruption or a CPU deadlock. Not fixed in any A12s yet.
- A17 852423: Execution of a sequence of instructions might lead to either a data corruption or a CPU deadlock. Not fixed in any A17s yet.

Since A12 got renamed to A17 it seems likely that there won't be any future Cortex-A12 cores, so we'll enable the workaround for all Cortex-A12 revisions. For Cortex-A17, all known revisions (<= r1p2) are believed to be affected; presumably a newly released A17 would have this problem fixed. Note that in <https://patchwork.kernel.org/patch/4735341/> folks previously expressed opposition to this change because: A) it was thought to only apply to r0p0, and there were no known r0p0 boards supported in mainline; B) it was argued that such a workaround belonged in firmware. Now that this same fix solves other errata on real boards (like rk3288), point A) is addressed. Point B) is impossible to address on boards like rk3288: the firmware doesn't stay resident in RAM and isn't involved at all in the suspend/resume process nor in the SMP bringup process, so the most the firmware could do would be to set the bit on "core 0", and this bit would be lost at suspend/resume time. It is true that we could write a "generic" solution that saved the boot-time "core 0" value of this register and applied it at SMP bringup / resume time. However, since this register (described as the "Feature Register" in the errata) appears to be undocumented and is only modified for these errata, that "generic" solution seems questionably cleaner. The generic solution also won't fix existing users that haven't happened to do a FW update. Note that on ARM64 presumably PSCI will be universal and fixes like this will end up in ATF. Hopefully we are nearing the end of this style of errata workaround. Signed-off-by: Douglas Anderson <dianders@chromium.org> Signed-off-by: Huang Tao <huangtao@rock-chips.com> Signed-off-by: Kever Yang <kever.yang@rock-chips.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
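These workarounds are simple CP15 read-modify-write sequences in the proc-v7.S errata path. A minimal sketch of the pattern (assuming the usual diagnostic register encoding, c15 0 c0 1, used by the other A-class workarounds; the sibling 825619/852421 fix above sets bit 24 of the same register):

        mrc     p15, 0, r10, c15, c0, 1         @ read diagnostic register
        orr     r10, r10, #1 << 12              @ set bit #12
        mcr     p15, 0, r10, c15, c0, 1         @ write diagnostic register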
2016-04-01ARM: SMP enable of cache maintenance broadcastRussell King
Masahiro Yamada reports that we can fail to set the FW bit in the auxiliary control register, which enables broadcasting of the cache maintenance operations. This occurs because we only check that the SMP/nAMP bit is set, rather than checking whether all the bits we want to be set are set. Rearrange the code to ensure that all desired bits are set, and only update the register if we discover some required bits are not set. Tested-by: Masahiro Yamada <yamada.masahiro@socionext.com>
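A hedged sketch of the resulting logic (register choice assumed; the real code also handles the UP case via ALT_SMP/ALT_UP): OR in every required bit and write back only when something is missing, since the write needs secure privileges:

        mrc     p15, 0, r0, c1, c0, 1           @ read ACTLR
        orr     r10, r0, #(1 << 6) | (1 << 0)   @ SMP/nAMP and FW bits we want
        teq     r0, r10                         @ already all set?
        mcrne   p15, 0, r10, c1, c0, 1          @ write back only if not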
2016-02-17ARM: make the physical-relative calculation more obviousRussell King
The physical-relative calculation between the XIP text and data sections introduced by the previous patch was far from obvious. Let's simplify it by turning it into a macro which takes the two (virtual) addresses. This allows us to arrange the calculation in a more obvious manner - we can make it two sub-expressions which calculate the physical address for each symbol, and then takes the difference of those physical addresses. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
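A sketch of the shape such a macro can take — hedged; treat this as an illustration of the two sub-expressions rather than the exact mainline definition. Each operand is converted to a physical address, then the difference is taken:

#define PHYS_RELATIVE(v_data, v_text)                           \
        (((v_data) - PAGE_OFFSET + PLAT_PHYS_OFFSET) -          \
         ((v_text) - XIP_VIRT_ADDR(CONFIG_XIP_PHYS_ADDR) +      \
          CONFIG_XIP_PHYS_ADDR))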
2016-02-16ARM: 8512/1: proc-v7.S: Adjust stack address when XIP_KERNELNicolas Pitre
When XIP_KERNEL is enabled, the virt-to-phys address translation for RAM is not the same as the virt-to-phys address translation for .text. The only way to know where physical RAM is located is to use PLAT_PHYS_OFFSET. The macro introduced here will be useful in other places with a similar problem. Signed-off-by: Nicolas Pitre <nico@linaro.org> Signed-off-by: Chris Brandt <chris.brandt@renesas.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2016-01-05Merge branches 'misc' and 'misc-rc6' into for-linusRussell King
2015-12-17ARM: 8453/2: proc-v7.S: don't locate temporary stack space in .text sectionNicolas Pitre
The proc-v7.S code uses a small temporary stack to preserve register content in its setup code. This stack is located in the .text section, which is normally meant to be read-only. Move that temporary stack to the .bss section and get its address in a position-independent way, similarly to what we do in other parts of the kernel. While at it, one comment was updated to reflect reality, and the list of saved registers in the proc-v7.S case is updated to match the comment next to it for coherency. Signed-off-by: Nicolas Pitre <nico@linaro.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
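The position-independent idiom sketched below (symbol names assumed for illustration) stores a link-time offset next to the code and rebases it at run time, so it works before the MMU is on:

        adr     r12, __v7_setup_stack_ptr       @ run-time address of the word
        ldr     r0, [r12]                       @ link-time offset to the stack
        add     r12, r12, r0                    @ r12 = usable stack in .bss
        ...
__v7_setup_stack_ptr:
        .word   __v7_setup_stack - .            @ constant offset, no relocation

        .bss
        .align  2
__v7_setup_stack:
        .space  4 * 7                           @ room for the saved registers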
2015-12-15ARM: 8471/1: need to save/restore arm register(r11) when it is corruptedAnson Huang
In the cpu_v7_do_suspend routine, r11 is used but NOT saved/restored. Different compilers may use the ARM general-purpose registers differently, so this can cause issues when calling cpu_v7_do_suspend. We observed a kernel fault with GCC 4.8.3: r11 contains a valid value before the call into cpu_v7_do_suspend, but on return r11 is corrupted, leading to a kernel fault. Saving/restoring every register the assembly code clobbers is a must. Signed-off-by: Anson Huang <Anson.Huang@freescale.com> Reviewed-by: Nicolas Pitre <nico@linaro.org> Cc: <stable@vger.kernel.org> # v3.3+ Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
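The shape of the fix is simply to widen the callee-saved register list; a sketch, not the exact diff:

ENTRY(cpu_v7_do_suspend)
        stmfd   sp!, {r4 - r11, lr}     @ was {r4 - r10, lr}; r11 now covered
        ...                             @ body may clobber r4-r11
        ldmfd   sp!, {r4 - r11, pc}     @ restore and return
ENDPROC(cpu_v7_do_suspend)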
2015-07-17ARM: invalidate L1 before enabling coherencyRussell King
We must invalidate the L1 cache before enabling coherency, otherwise secondary CPUs can inject invalid cache lines into the coherent CPU cluster, which could then be migrated to other CPUs. This fixes a recent regression with SoCFPGA randomly failing to boot. Fixes: 02b4e2756e01 ("ARM: v7 setup function should invalidate L1 cache") Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2015-06-12Merge branch 'for-arm-soc' into for-nextRussell King
2015-06-01ARM: proc-v7: sanitise and document registers around errataRussell King
Document that r13 is not a stack in the initialisation function, in case anyone gets other ideas. Document the registers available for the errata workarounds, and specifically which registers contain parts of the MIDR register, as well as which registers must be preserved. Lastly, use the lowest numbered available register (r0) rather than r10 for temporary storage. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2015-06-01ARM: proc-v7: clean up MIDR accessRussell King
We already have the main ID register available in r9, there's no need to refetch it. Use the saved value. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2015-06-01ARM: proc-v7: move CPU errata out of lineRussell King
Rather than having a long sprawling __v7_setup function, which is hard to maintain properly, move the CPU errata out of line. While doing this, it was discovered that the Cortex-A15 errata had been incorrectly added:

        ldr     r10, =0x00000c08        @ Cortex-A8 primary part number
        teq     r0, r10
        bne     2f
        /* Cortex-A8 errata */
        b       3f
2:      ldr     r10, =0x00000c09        @ Cortex-A9 primary part number
        teq     r0, r10
        bne     3f
        /* Cortex-A9 errata */
3:      ldr     r10, =0x00000c0f        @ Cortex-A15 primary part number
        teq     r0, r10
        bne     4f
        /* Cortex-A15 errata */
4:

This results in the Cortex-A15 test always being executed after the Cortex-A8 and Cortex-A9 errata, which is obviously not what is intended. The 'b 3f' labels should have been updated to 'b 4f'. The new structure of:

        /* Cortex-A8 Errata */
        ldr     r10, =0x00000c08        @ Cortex-A8 primary part number
        teq     r0, r10
        beq     __ca8_errata
        /* Cortex-A9 Errata */
        ldr     r10, =0x00000c09        @ Cortex-A9 primary part number
        teq     r0, r10
        beq     __ca9_errata
        /* Cortex-A15 Errata */
        ldr     r10, =0x00000c0f        @ Cortex-A15 primary part number
        teq     r0, r10
        beq     __ca15_errata
__errata_finish:

is much cleaner and easier to see that this kind of thing doesn't happen. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2015-06-01ARM: redo TTBR setup code for LPAERussell King
Re-engineer the LPAE TTBR setup code. Rather than passing some shifted address in order to fit in a CPU register, pass either a full physical address (in the case of r4, r5 for TTBR0) or a PFN (for TTBR1). This removes the ARCH_PGD_SHIFT hack, and the last dangerous user of cpu_set_ttbr() in the secondary CPU startup code path (which was there to re-set TTBR1 to the appropriate high physical address space on Keystone2.) Tested-by: Murali Karicheri <m-karicheri2@ti.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
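Since the 64-bit TTBRs are written as register pairs, the full address fits without any pre-shifting; a hedged sketch of the writes (register assignments assumed):

        @ TTBR0: 64-bit write, full physical address in r4 (low) / r5 (high)
        mcrr    p15, 0, r4, r5, c2
        @ TTBR1: same CRm with opc1 = 1; built from the PFN that was passed in
        mcrr    p15, 1, r6, r7, c2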
2015-06-01ARM: v7 setup function should invalidate L1 cacheRussell King
All ARMv5 and older CPUs invalidate their caches in the early assembly setup function, prior to enabling the MMU. This is because the L1 cache should not contain any data relevant to the execution of the kernel at this point; all data should have been flushed out to memory. This requirement should also be true for ARMv6 and ARMv7 CPUs - indeed, these typically do not search their caches when caching is disabled (as it needs to be when the MMU is disabled) so this change should be safe. ARMv7 allows there to be CPUs which search their caches while caching is disabled, and it's permitted that the cache is uninitialised at boot; for these, the architecture reference manual requires that an implementation specific code sequence is used immediately after reset to ensure that the cache is placed into a sane state. Such functionality is definitely outside the remit of the Linux kernel, and must be done by the SoC's firmware before _any_ CPU gets to the Linux kernel. Changing the data cache clean+invalidate to a mere invalidate allows us to get rid of a lot of platform specific hacks around this issue for their secondary CPU bringup paths - some of which were buggy. Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Tested-by: Florian Fainelli <f.fainelli@gmail.com> Tested-by: Heiko Stuebner <heiko@sntech.de> Tested-by: Dinh Nguyen <dinguyen@opensource.altera.com> Acked-by: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com> Tested-by: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com> Acked-by: Shawn Guo <shawn.guo@linaro.org> Tested-by: Thierry Reding <treding@nvidia.com> Acked-by: Thierry Reding <treding@nvidia.com> Tested-by: Geert Uytterhoeven <geert+renesas@glider.be> Tested-by: Michal Simek <michal.simek@xilinx.com> Tested-by: Wei Xu <xuwei5@hisilicon.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
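In the setup path this boils down to one call before the rest of the CP15 setup, roughly (register save/restore around the call elided):

__v7_setup:
        ...
        bl      v7_invalidate_l1        @ invalidate only, never clean: RAM is
                                        @ the only valid data source here
        ...                             @ then TLB invalidate, errata, SCTLR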
2015-04-14ARM: proc-v7: avoid errata 430973 workaround for non-Cortex A8 CPUsRussell King
Avoid the errata 430973 workaround for non-Cortex A8 CPUs. Having this workaround enabled introduces an additional branch target buffer flush into the context switching path, something we wish to avoid. To allow this errata to be enabled in multiplatform kernels while reducing its impact, rearrange the Cortex-A8 CPU support to avoid impacting on other Version 7 CPUs. Tested-by: Tony Lindgren <tony@atomide.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2015-03-28ARM: 8314/1: replace PROCINFO embedded branch with relative offsetArd Biesheuvel
This patch replaces the 'branch to setup()' instructions embedded in the PROCINFO structs with the offset to that setup function relative to the base of the struct. This preserves the position independent nature of that field, but uses a data item rather than an instruction. This is mainly done to prevent linker failures on large kernels, where the setup function is out of reach for the branch. Acked-by: Nicolas Pitre <nico@linaro.org> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
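A sketch of the before/after (field and label names assumed close to, but not necessarily identical to, the mainline ones):

        @ procinfo entry: a data word replaces the old "b <setup>" instruction
        .long   \initfunc - __v7_proc_entry     @ offset from the struct base

        @ call site, with r10 pointing at the matched procinfo entry
        ldr     r12, [r10, #PROCINFO_INITFUNC]  @ load the relative offset
        add     r12, r12, r10                   @ rebase to an absolute address
        ret     r12                             @ tail-call the setup function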
2014-12-05Merge branches 'fixes', 'misc', 'pm' and 'sa1100' into for-nextRussell King
2014-11-27ARM: 8222/1: mvebu: enable strex backoff delayThomas Petazzoni
Under extremely rare conditions, in an MPCore node consisting of at least 3 CPUs, two CPUs trying to perform a STREX to data on the same shared cache line can enter a livelock situation. This patch enables the HW mechanism that overcomes the bug. It fixes the incorrect setup of the STREX backoff delay bit, which stemmed from a wrong description in the specification: enabling the STREX backoff delay mechanism is done by leaving the bit *cleared*, while the proc-v7.S code was setting it. [Thomas: adapt to latest mainline, slightly reword the commit log, add stable markers.] Fixes: de4901933f6d ("arm: mm: Add support for PJ4B cpu and init routines") Cc: <stable@vger.kernel.org> # v3.8+ Signed-off-by: Nadav Haklai <nadavh@marvell.com> Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com> Acked-by: Gregory CLEMENT <gregory.clement@free-electrons.com> Acked-by: Jason Cooper <jason@lakedaemon.net> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-11-21ARM: 8196/1: vfp: Workaround bad MVFR1 register on some KraitsStephen Boyd
Certain versions of the Krait processor don't report that they support the fused multiply accumulate instruction via the MVFR1 register despite the fact that they actually do. Unfortunately we use this register to identify support for VFPv4. Override the hwcap on all Krait processors to indicate support for VFPv4 to workaround this. Tested-by: Rob Clark <robdclark@gmail.com> Signed-off-by: Stephen Boyd <sboyd@codeaurora.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-09-12ARM: 8138/1: drop ISAR0 workaround for B15Brian Norris
The Brahma-B15's ISAR0 correctly advertises UDIV/SDIV support in both ARM and Thumb2 modes (CPUID_EXT_ISAR0=02101110), so we don't need to manually apply this hwcap. The code in question actually predates the following commit, which made our hwcaps unnecessary:

commit 8164f7af88d9ad3a757bd14f634b23997ee77f6b
Author: Stephen Boyd <sboyd@codeaurora.org>
Date:   Mon Mar 18 19:44:15 2013 +0100

    ARM: 7680/1: Detect support for SDIV/UDIV from ISAR0 register

Signed-off-by: Brian Norris <computersforpeace@gmail.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-07-24ARM: 8110/1: do CPU-specific init for Broadcom Brahma15 coresMarc Carino
Perform any CPU-specific initialization required on the Broadcom Brahma-15 core. Signed-off-by: Marc Carino <marc.ceeeee@gmail.com> Acked-by: Florian Fainelli <f.fainelli@gmail.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Brian Norris <computersforpeace@gmail.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-07-18ARM: 8103/1: save/restore Cortex-A9 CP15 registers on suspend/resumeShawn Guo
The CP15 diagnostic register holds ARM errata bits on Cortex-A9, so it needs to be saved/restored on suspend/resume; otherwise the effectiveness of the errata workarounds is lost together with the diagnostic register bits across a suspend/resume cycle. The CP15 power control register of Cortex-A9 has the same problem. The patch adds a couple of Cortex-A9 specific suspend/resume functions to save/restore these two CP15 registers across the suspend/resume cycle. Signed-off-by: Shawn Guo <shawn.guo@freescale.com> Acked-by: Nicolas Pitre <nico@linaro.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
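The added suspend hook just prepends the two extra registers to the generic v7 state; a sketch close to that shape (the resume side reverses it):

ENTRY(cpu_ca9mp_do_suspend)
        stmfd   sp!, {r4 - r5}
        mrc     p15, 0, r4, c15, c0, 1          @ Diagnostic register
        mrc     p15, 0, r5, c15, c0, 0          @ Power register
        stmia   r0!, {r4 - r5}
        ldmfd   sp!, {r4 - r5}
        b       cpu_v7_do_suspend
ENDPROC(cpu_ca9mp_do_suspend)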
2014-07-18ARM: 8089/1: cpu_pj4b_suspend_size should base on cpu_v7_suspend_sizeShawn Guo
Since the pj4b suspend/resume routines are implemented on top of the generic ARMv7 ones, cpu_pj4b_suspend_size should be cpu_v7_suspend_size plus the pj4b-specific bytes instead of being hard-coded. Otherwise, if cpu_v7_suspend_size gets updated alone, the pj4b suspend/resume will likely be broken. While at it, fix the comments in cpu_pj4b_do_resume, as we're restoring CP15 registers there rather than saving them. Signed-off-by: Shawn Guo <shawn.guo@freescale.com> Acked-by: Gregory CLEMENT <gregory.clement@free-electrons.com> Tested-by: Gregory CLEMENT <gregory.clement@free-electrons.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-07-18ARM: convert all "mov.* pc, reg" to "bx reg" for ARMv6+Russell King
ARMv6 and greater introduced a new instruction ("bx") which can be used to return from function calls. Recent CPUs perform better when the "bx lr" instruction is used rather than the "mov pc, lr" instruction, and this sequence is strongly recommended to be used by the ARM architecture manual (section A.4.1.1). We provide a new macro "ret" with all its variants for the condition code which will resolve to the appropriate instruction. Rather than doing this piecemeal, and miss some instances, change all the "mov pc" instances to use the new macro, with the exception of the "movs" instruction and the kprobes code. This allows us to detect the "mov pc, lr" case and fix it up - and also gives us the possibility of deploying this for other registers depending on the CPU selection. Reported-by: Will Deacon <will.deacon@arm.com> Tested-by: Stephen Warren <swarren@nvidia.com> # Tegra Jetson TK1 Tested-by: Robert Jarzmik <robert.jarzmik@free.fr> # mioa701_bootresume.S Tested-by: Andrew Lunn <andrew@lunn.ch> # Kirkwood Tested-by: Shawn Guo <shawn.guo@freescale.com> Tested-by: Tony Lindgren <tony@atomide.com> # OMAPs Tested-by: Gregory CLEMENT <gregory.clement@free-electrons.com> # Armada XP, 375, 385 Acked-by: Sekhar Nori <nsekhar@ti.com> # DaVinci Acked-by: Christoffer Dall <christoffer.dall@linaro.org> # kvm/hyp Acked-by: Haojian Zhuang <haojian.zhuang@gmail.com> # PXA3xx Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com> # Xen Tested-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de> # ARMv7M Tested-by: Simon Horman <horms+renesas@verge.net.au> # Shmobile Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
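A condensed sketch of the macro's idea (the mainline asm/assembler.h version generates all condition-code variants of this with .irp):

        .macro  ret\c, reg
#if __LINUX_ARM_ARCH__ < 6
        mov\c   pc, \reg                @ pre-v6: no usable bx
#else
        .ifeqs  "\reg", "lr"
        bx\c    \reg                    @ v6+: better return prediction
        .else
        mov\c   pc, \reg                @ non-lr registers keep old behaviour
        .endif
#endif
        .endm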
2014-05-25ARM: 8046/1: proc: add support for the Cortex-A17 processorWill Deacon
Cortex-A17 has identical initialisation requirements to Cortex-A12, so hook it up in proc-v7.S in the same way. Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-04-23ARM: 8013/1: PJ4B: Add cpu_suspend/cpu_resume hooks for PJ4BGregory CLEMENT
PJ4B needs extra instructions for suspend and resume, so instead of using the armv7 version, this commit introduces specific versions for PJ4B. Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-04-04Merge branches 'amba', 'fixes', 'misc', 'mmci', 'unstable/omap-dma' and 'unstable/sa11x0' into for-nextRussell King
2014-02-10ARM: 7940/1: add support for the Cortex-A12 processorJonathan Austin
The A12 behaves as the A7/A15 do with respect to setting the SMP bit, and doesn't require TLB ops broadcasting to be explicitly enabled like the A9 does. Note that as the ACTLR cannot (usually) be written from non-secure, it is the responsibility of the bootloader/firmware to set this bit per core; it is done here in Linux only as a last resort in case of bad firmware. Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Jonathan Austin <jonathan.austin@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-02-10ARM: 7953/1: mm: ensure TLB invalidation is complete before enabling MMUWill Deacon
During __v{6,7}_setup, we invalidate the TLBs since we are about to enable the MMU on return to head.S. Unfortunately, without a subsequent dsb instruction, the invalidation is not guaranteed to have completed by the time we write to the sctlr, potentially exposing us to junk/stale translations cached in the TLB. This patch reworks the init functions so that the dsb used to ensure completion of cache/predictor maintenance is also used to ensure completion of the TLB invalidation. Cc: <stable@vger.kernel.org> Reported-by: Albin Tonnerre <Albin.Tonnerre@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
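The ordering problem and its fix, sketched (the dsb at the end of the setup function now covers the TLB operation as well):

        mcr     p15, 0, r10, c8, c7, 0  @ invalidate I + D TLBs
        ...                             @ remaining setup
        dsb                             @ completes cache, predictor *and*
                                        @ TLB maintenance before returning
        ...                             @ head.S then writes SCTLR (MMU on)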
2013-11-14ARM: 7885/1: Save/Restore 64-bit TTBR registers on LPAE suspend/resumeMahesh Sivasubramanian
LPAE enabled kernels use the 64-bit version of TTBR0 and TTBR1 registers. If we're running an LPAE kernel, fill the upper half of TTBR0 with 0 because we're setting it to the idmap here (the idmap is guaranteed to be < 4Gb) and fully restore TTBR1 instead of just restoring the lower 32 bits. Failure to do so can cause failures on resume from suspend when these registers are only half restored. Signed-off-by: Mahesh Sivasubramanian <msivasub@codeaurora.org> Signed-off-by: Stephen Boyd <sboyd@codeaurora.org> Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-10-19ARM: asm: Add ARM_BE8() assembly helperBen Dooks
Add the ARM_BE8() helper to wrap any code that should only be compiled in when CONFIG_ARM_ENDIAN_BE8 is selected, and convert the existing places doing this by hand to use it. Acked-by: Nicolas Pitre <nico@linaro.org> Reviewed-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
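The helper's shape is a one-line conditional macro, gated here on CONFIG_CPU_ENDIAN_BE8 (the config symbol the mainline helper actually tests), followed by a usage example:

#ifdef CONFIG_CPU_ENDIAN_BE8
#define ARM_BE8(code...)        code
#else
#define ARM_BE8(code...)
#endif

        @ usage: byte-swap a descriptor only on BE8 kernels
ARM_BE8(rev     r3, r3)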
2013-09-05Merge branches 'debug-choice', 'devel-stable' and 'misc' into for-linusRussell King
2013-09-02ARM: 7823/1: errata: workaround Cortex-A15 erratum 773022Will Deacon
On Cortex-A15 CPUs up to and including r0p4, in certain rare sequences of code, the loop buffer may deliver incorrect instructions. This workaround disables the loop buffer to avoid the erratum. Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
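The workaround is another auxiliary control register read-modify-write, applied only to the affected revisions; a sketch, with the loop-buffer-disable bit assumed to be bit 1 as in the mainline workaround:

#ifdef CONFIG_ARM_ERRATA_773022
        @ Cortex-A15 <= r0p4 only
        mrc     p15, 0, r10, c1, c0, 1  @ read aux control register
        orr     r10, r10, #1 << 1       @ disable loop buffer
        mcr     p15, 0, r10, c1, c0, 1  @ write aux control register
#endif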
2013-08-12ARM: mm: use inner-shareable barriers for TLB and user cache operationsWill Deacon
System-wide barriers aren't required for situations where we only need to make visibility and ordering guarantees in the inner-shareable domain (i.e. we are not dealing with devices or potentially incoherent CPUs). This patch changes the v7 TLB operations, coherent_user_range and dcache_clean_area functions to use inner-shareable barriers. For cache maintenance, only the store access type is required to ensure completion. Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
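Concretely, that means scoping the barriers; a sketch (ish/ishst are the inner-shareable DSB variants):

        mcr     p15, 0, r0, c8, c3, 1   @ TLB invalidate by MVA, inner-shareable
        dsb     ish                     @ loads + stores, inner-shareable domain
        ...
        mcr     p15, 0, r0, c7, c10, 1  @ clean D line to point of coherency
        dsb     ishst                   @ stores only: enough to complete
                                        @ cache maintenance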
2013-07-22ARM: 7784/1: mm: ensure SMP alternates assemble to exactly 4 bytes with Thumb-2Will Deacon
Commit ae8a8b9553bd ("ARM: 7691/1: mm: kill unused TLB_CAN_READ_FROM_L1_CACHE and use ALT_SMP instead") added early function returns for page table cache flushing operations on ARMv7 SMP CPUs. Unfortunately, when targeting Thumb-2, these `mov pc, lr' sequences assemble to 2 bytes, which can lead to corruption of the instruction stream after code patching. This patch fixes the alternates to use wide (32-bit) instructions for Thumb-2, thereby ensuring that the patching code works correctly. Cc: <stable@vger.kernel.org> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
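The fix forces the wide encoding with the W() macro so both alternates occupy exactly 4 bytes; sketched:

        @ before: 16-bit Thumb encoding, corrupting ALT_* runtime patching
 ALT_SMP(mov    pc, lr)
        @ after: W() selects the 32-bit encoding (a no-op when building ARM)
 ALT_SMP(W(mov) pc, lr)
 ALT_UP(mcr     p15, 0, r0, c7, c10, 1) @ UP alternate: clean D entry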
2013-07-14arm: delete __cpuinit/__CPUINIT usage from all ARM usersPaul Gortmaker
The __cpuinit type of throwaway sections might have made sense some time ago when RAM was more constrained, but now the savings do not offset the cost and complications. For example, the fix in commit 5e427ec2d0 ("x86: Fix bit corruption at CPU resume time") is a good example of the nasty type of bugs that can be created with improper use of the various __init prefixes. After a discussion on LKML[1] it was decided that cpuinit should go the way of devinit and be phased out. Once all the users are gone, we can then finally remove the macros themselves from linux/init.h. Note that some harmless section mismatch warnings may result, since notify_cpu_starting() and cpu_up() are arch independent (kernel/cpu.c) and are flagged as __cpuinit -- so if we remove the __cpuinit from the arch specific callers, we will also get section mismatch warnings. As an intermediate step, we intend to turn the linux/init.h cpuinit related content into no-ops as early as possible, since that will get rid of these warnings. In any case, they are temporary and harmless. This removes all the ARM uses of the __cpuinit macros from C code, and all __CPUINIT from assembly code. It also had two ".previous" section statements that were paired off against __CPUINIT (aka .section ".cpuinit.text") that also get removed here. [1] https://lkml.org/lkml/2013/5/20/589 Cc: Russell King <linux@arm.linux.org.uk> Cc: Will Deacon <will.deacon@arm.com> Cc: linux-arm-kernel@lists.infradead.org Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2013-06-29Merge branch 'devel-stable' into for-nextRussell King
Conflicts:
        arch/arm/Makefile
        arch/arm/include/asm/glue-proc.h
2013-06-24ARM: 7773/1: PJ4B: Add support for errata 4742Gregory CLEMENT
This commit fixes the regression on Armada 370 (a kernel hang during boot) introduced by the commit "ARM: 7691/1: mm: kill unused TLB_CAN_READ_FROM_L1_CACHE and use ALT_SMP instead". When coming out of either a Wait for Interrupt (WFI) or a Wait for Event (WFE) IDLE state, a specific timing sensitivity exists between the retiring WFI/WFE instructions and the newly issued subsequent instructions. This sensitivity can result in a CPU hang scenario. The workaround is to insert either a Data Synchronization Barrier (DSB) or Data Memory Barrier (DMB) command immediately after the WFI/WFE instruction. This commit was based on the work of Lior Amsalem, but heavily modified to apply the errata fix dynamically according to the processor type, thanks to the suggestions of Russell King and Nicolas Pitre. Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com> Reviewed-by: Will Deacon <will.deacon@arm.com> Acked-by: Nicolas Pitre <nico@linaro.org> Tested-by: Willy Tarreau <w@1wt.eu> Cc: <stable@vger.kernel.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
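The resulting PJ4B idle routine in proc-v7.S brackets the WFI with barriers; a sketch close to its mainline shape:

ENTRY(cpu_pj4b_do_idle)
        dsb                             @ WFI may enter a low-power mode
        wfi
        dsb                             @ erratum 4742: barrier after WFI/WFE
        mov     pc, lr
ENDPROC(cpu_pj4b_do_idle)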
2013-06-17ARM: 7754/1: Fix the CPU ID and the mask associated to the PJ4BGregory CLEMENT
This commit fixes the ID and mask for the PJ4B which was too restrictive and didn't match the CPU of the Armada 370 SoC. Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com> Reviewed-by: Will Deacon <will.deacon@arm.com> Cc: <stable@vger.kernel.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-06-07ARM: add Cortex-R7 Processor InfoJonathan Austin
This patch adds processor info for the ARM Ltd. Cortex-R7. The R7 has many similarities to the A9, and though the ACTLR layout is not identical, the bits associated with cache operations broadcasting and SMP modes are the same for the A9, A5 and R7 (though in the A-class processors the same bits toggle TLB-ops broadcasting as well as cache-ops). Signed-off-by: Jonathan Austin <jonathan.austin@arm.com> Reviewed-by: Will Deacon <will.deacon@arm.com> CC: Catalin Marinas <catalin.marinas@arm.com> CC: Stephen Boyd <sboyd@codeaurora.org>
2013-06-07ARM: suspend: fix CPU suspend code for !CONFIG_MMU configurationsWill Deacon
The ARM CPU suspend code can be selected even for a !CONFIG_MMU configuration. The resulting kernel will not compile and, even if it did, would access undefined co-processor registers when executing. This patch fixes the v6 and v7 CPU suspend code for the nommu case. Signed-off-by: Will Deacon <will.deacon@arm.com> Tested-by: Jonathan Austin <jonathan.austin@arm.com> CC: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> (commit_signer:1/3=33%) CC: Santosh Shilimkar <santosh.shilimkar@ti.com> (commit_signer:1/3=33%) CC: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
2013-05-02Merge branches 'devel-stable', 'entry', 'fixes', 'mach-types', 'misc' and 'smp-hotplug' into for-linusRussell King
2013-04-17ARM: 7695/1: mvebu: Enable pj4b on LPAE compilationsGregory CLEMENT
pj4b cpus are LPAE capable, so enable them in LPAE compilations. Signed-off-by: Lior Amsalem <alior@marvell.com> Tested-by: Franklin <flin@marvell.com> Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-04-03ARM: 7691/1: mm: kill unused TLB_CAN_READ_FROM_L1_CACHE and use ALT_SMP insteadWill Deacon
Many ARMv7 cores have hardware page table walkers that can read the L1 cache. This is discoverable from the ID_MMFR3 register, although this can be expensive to access from the low-level set_pte functions and is a pain to cache, particularly with multi-cluster systems. A useful observation is that the multi-processing extensions for ARMv7 require coherent table walks, meaning that we can make use of ALT_SMP patching in proc-v7-* to patch away the cache flush safely for these cores. Reported-by: Albin Tonnerre <Albin.Tonnerre@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
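In the proc-v7 paths this pairs an SMP no-op with the UP cache clean, selected by boot-time patching; the pattern looks like:

        ALT_SMP(W(nop))                         @ coherent walks: flush not needed
        ALT_UP(mcr      p15, 0, r0, c7, c10, 1) @ UP: clean D entry for the walker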
2013-03-22ARM: 7678/1: Work around faulty ISAR0 register in some Krait CPUsStepan Moskovchenko
Some early versions of the Krait CPU design incorrectly indicate that they only support the UDIV and SDIV instructions in Thumb mode when they actually support them in ARM and Thumb mode. It seems that these CPUs follow the DDI0406B ARM ARM which has two possible values for the divide instructions field, instead of the DDI0406C document which has three possible values. Work around this problem by checking the MIDR against Krait CPUs with this faulty ISAR0 register and force the hwcaps to indicate support in both modes. [sboyd: Rewrote commit text to reflect real reasoning now that we autodetect udiv/sdiv] Signed-off-by: Stepan Moskovchenko <stepanm@codeaurora.org> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Stephen Boyd <sboyd@codeaurora.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>