From 8a4e3a9ead7e37ce1505602b564c15da09ac039f Mon Sep 17 00:00:00 2001
From: Will Deacon
Date: Thu, 28 Feb 2013 17:47:36 +0100
Subject: ARM: 7659/1: mm: make mm->context.id an atomic64_t variable

mm->context.id is updated under asid_lock when a new ASID is allocated
to an mm_struct. However, it is also read without the lock when a task
is being scheduled and checking whether or not the current ASID
generation is up-to-date.

If two threads of the same process are being scheduled in parallel and
the bottom bits of the generation in their mm->context.id match the
current generation (that is, the mm_struct has not been used for
~2^24 rollovers), then the non-atomic, lockless access to mm->context.id
may yield the incorrect ASID.

This patch fixes this issue by making mm->context.id an atomic64_t,
ensuring that the generation is always read consistently. For code that
only requires access to the ASID bits (e.g. TLB flushing by mm), the
value is accessed directly, which GCC converts to an ldrb.

Cc: stable@vger.kernel.org # 3.8
Reviewed-by: Catalin Marinas
Signed-off-by: Will Deacon
Signed-off-by: Russell King
---
 arch/arm/kernel/asm-offsets.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

(limited to 'arch/arm/kernel/asm-offsets.c')

diff --git a/arch/arm/kernel/asm-offsets.c b/arch/arm/kernel/asm-offsets.c
index 5ce738b43508..923eec7105cf 100644
--- a/arch/arm/kernel/asm-offsets.c
+++ b/arch/arm/kernel/asm-offsets.c
@@ -110,7 +110,7 @@ int main(void)
 	BLANK();
 #endif
 #ifdef CONFIG_CPU_HAS_ASID
-	DEFINE(MM_CONTEXT_ID,	offsetof(struct mm_struct, context.id));
+	DEFINE(MM_CONTEXT_ID,	offsetof(struct mm_struct, context.id.counter));
 	BLANK();
 #endif
 	DEFINE(VMA_VM_MM,	offsetof(struct vm_area_struct, vm_mm));
-- cgit
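To illustrate the race this patch closes, here is a minimal user-space
sketch (illustrative only, not the kernel's code: the type names and the
ASID_BITS value are stand-ins). The point is that the generation and the
ASID share one 64-bit word, so the staleness check must load that word
atomically:

    #include <stdatomic.h>
    #include <stdint.h>

    #define ASID_BITS 8    /* stand-in; the real width is per-architecture */

    struct mm_context {
        /* generation in the upper bits, hardware ASID in the low bits */
        _Atomic uint64_t id;
    };

    static _Atomic uint64_t asid_generation;

    /* Nonzero when ctx's generation no longer matches the global one. */
    static int asid_is_stale(struct mm_context *ctx)
    {
        /*
         * One 64-bit atomic load: a concurrent writer can no longer be
         * observed halfway through updating the word.
         */
        uint64_t id = atomic_load(&ctx->id);

        return ((id ^ atomic_load(&asid_generation)) >> ASID_BITS) != 0;
    }

Code that needs only the ASID bits can still read the low byte of the
counter directly, which is why the asm-offsets entry above now points at
context.id.counter rather than context.id.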
From 7fe31d28e839f9565c8176ec584676a045970802 Mon Sep 17 00:00:00 2001
From: Dave Martin
Date: Tue, 17 Jul 2012 14:25:42 +0100
Subject: ARM: mcpm: introduce helpers for platform coherency exit/setup

This provides helper methods to coordinate between CPUs coming down
and CPUs going up, as well as documentation on the used algorithms,
so that cluster teardown and setup operations are not done for a
cluster simultaneously.

For use in the power_down() implementation:
 * __mcpm_cpu_going_down(unsigned int cluster, unsigned int cpu)
 * __mcpm_outbound_enter_critical(unsigned int cluster)
 * __mcpm_outbound_leave_critical(unsigned int cluster)
 * __mcpm_cpu_down(unsigned int cluster, unsigned int cpu)

The power_up_setup() helper should do platform-specific setup in
preparation for turning the CPU on, such as invalidating local caches
or entering coherency. It must be assembler for now, since it must run
before the MMU can be switched on. It is passed the affinity level for
which initialization should be performed.

Because the mcpm_sync_struct content is looked up and modified with the
cache enabled or disabled depending on the code path, it is crucial to
always ensure proper cache maintenance to update main memory right
away. The sync_cache_*() helpers are used to that end.

Also, in order to prevent a cached writer from interfering with an
adjacent non-cached writer, we ensure each state variable is located
on a separate cache line.

Thanks to Nicolas Pitre and Achin Gupta for the help with this patch.

Signed-off-by: Dave Martin
Signed-off-by: Nicolas Pitre
Reviewed-by: Will Deacon
---
 arch/arm/kernel/asm-offsets.c | 3 +++
 1 file changed, 3 insertions(+)

(limited to 'arch/arm/kernel/asm-offsets.c')

diff --git a/arch/arm/kernel/asm-offsets.c b/arch/arm/kernel/asm-offsets.c
index 923eec7105cf..1bed82a0a9e0 100644
--- a/arch/arm/kernel/asm-offsets.c
+++ b/arch/arm/kernel/asm-offsets.c
@@ -149,6 +149,9 @@ int main(void)
 	DEFINE(DMA_BIDIRECTIONAL,	DMA_BIDIRECTIONAL);
 	DEFINE(DMA_TO_DEVICE,		DMA_TO_DEVICE);
 	DEFINE(DMA_FROM_DEVICE,		DMA_FROM_DEVICE);
+	BLANK();
+	DEFINE(CACHE_WRITEBACK_GRANULE, __CACHE_WRITEBACK_GRANULE);
+	BLANK();
 #ifdef CONFIG_KVM_ARM_HOST
 	DEFINE(VCPU_KVM,		offsetof(struct kvm_vcpu, kvm));
 	DEFINE(VCPU_MIDR,		offsetof(struct kvm_vcpu, arch.midr));
-- cgit

From 1ae98561b16f305e43151405f226727c00ee52bc Mon Sep 17 00:00:00 2001
From: Dave Martin
Date: Fri, 17 Aug 2012 16:07:02 +0100
Subject: ARM: mcpm_head.S: vlock-based first man election

Instead of requiring the first man to be elected in advance (which can
be suboptimal in some situations), this patch uses a per-cluster mutex
to co-ordinate selection of the first man.

This should also make it more feasible to reuse this code path for
asynchronous cluster resume (as in CPUidle scenarios).

We must ensure that the vlock data doesn't share a cacheline with
anything else, or dirty cache eviction could corrupt it.

Signed-off-by: Dave Martin
Signed-off-by: Nicolas Pitre
Reviewed-by: Santosh Shilimkar
Reviewed-by: Will Deacon
---
 arch/arm/kernel/asm-offsets.c | 1 +
 1 file changed, 1 insertion(+)

(limited to 'arch/arm/kernel/asm-offsets.c')

diff --git a/arch/arm/kernel/asm-offsets.c b/arch/arm/kernel/asm-offsets.c
index 1bed82a0a9e0..3f088225e71c 100644
--- a/arch/arm/kernel/asm-offsets.c
+++ b/arch/arm/kernel/asm-offsets.c
@@ -150,6 +150,7 @@ int main(void)
 	DEFINE(DMA_TO_DEVICE,		DMA_TO_DEVICE);
 	DEFINE(DMA_FROM_DEVICE,		DMA_FROM_DEVICE);
 	BLANK();
+	DEFINE(CACHE_WRITEBACK_ORDER, __CACHE_WRITEBACK_ORDER);
 	DEFINE(CACHE_WRITEBACK_GRANULE, __CACHE_WRITEBACK_GRANULE);
 	BLANK();
 #ifdef CONFIG_KVM_ARM_HOST
-- cgit
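The cache-line separation that both mcpm patches above depend on can be
sketched as follows (a sketch only: the field names and the granule
value are stand-ins for the real mcpm sync structure and
__CACHE_WRITEBACK_GRANULE):

    #include <stdint.h>

    /*
     * Stand-in for __CACHE_WRITEBACK_GRANULE, which the kernel derives
     * from the largest cache writeback granule the configuration may
     * run on.
     */
    #define CACHE_WRITEBACK_GRANULE 64

    /*
     * Each state variable gets its own writeback granule, so a cached
     * writer cleaning or evicting its dirty line can never overwrite
     * the update a non-cached writer just made to a neighbouring
     * variable in main memory.
     */
    struct sync_sketch {
        int8_t cpu     __attribute__((aligned(CACHE_WRITEBACK_GRANULE)));
        int8_t cluster __attribute__((aligned(CACHE_WRITEBACK_GRANULE)));
        int8_t inbound __attribute__((aligned(CACHE_WRITEBACK_GRANULE)));
    };

Exporting CACHE_WRITEBACK_GRANULE (and, in the vlock patch, the matching
CACHE_WRITEBACK_ORDER) through asm-offsets.c makes the same padding
constants available to the assembler side, which must reserve identical
space by hand.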
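For the first-man election itself, the voting-mutex ("vlock") idea can
be rendered roughly in C as below. This is a sketch, not the kernel's
implementation: the real code is assembler because it must run with the
MMU and caches off, and it adds the barriers and cache maintenance this
sketch omits; the names and CPU count here are illustrative.

    #define NR_CPUS_PER_CLUSTER 4
    #define VLOCK_UNLOCKED (-1)

    static volatile int currently_voting[NR_CPUS_PER_CLUSTER];
    static volatile int last_vote = VLOCK_UNLOCKED;

    /* Returns 1 if this_cpu won the election, 0 otherwise. */
    static int vlock_trylock(int this_cpu)
    {
        int i;

        /* Signal that we are voting. */
        currently_voting[this_cpu] = 1;

        if (last_vote != VLOCK_UNLOCKED) {
            /* Somebody already stood for election: stand down. */
            currently_voting[this_cpu] = 0;
            return 0;
        }

        /* Nominate ourselves, then stop voting. */
        last_vote = this_cpu;
        currently_voting[this_cpu] = 0;

        /* Wait until every other CPU has finished voting... */
        for (i = 0; i < NR_CPUS_PER_CLUSTER; i++)
            while (currently_voting[i])
                /* spin */;

        /* ...then check whether our nomination stood. */
        return last_vote == this_cpu;
    }

Exactly one caller sees a nonzero return and becomes the first man that
performs cluster setup; the others take the "cluster already up" path.
As the commit message notes, the lock's state words must sit on their
own cache line, or dirty-line eviction could corrupt the election.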