author     Ard Biesheuvel <ardb@kernel.org>    2022-01-10 09:54:22 +0100
committer  Ard Biesheuvel <ardb@kernel.org>    2022-01-25 09:53:52 +0100
commit     d31e23aff011d96278f4dbc22f2ec5db433eabaf (patch)
tree       7da6315ac0b3016db684d3be4caac44f45ebd802 /arch/arm/kernel
parent     aa0a20f521516ba83ea29b510fcc12fb35920b48 (diff)
ARM: mm: make vmalloc_seq handling SMP safe
Rework the vmalloc_seq handling so it can be used safely under SMP, as we started using it to ensure that vmap'ed stacks are guaranteed to be mapped by the active mm before switching to a task, and here we need to ensure that changes to the page tables are visible to other CPUs when they observe a change in the sequence count.

Since LPAE needs none of this, fold a check against it into the vmalloc_seq counter check after breaking it out into a separate static inline helper. Given that vmap'ed stacks are now also supported on !SMP configurations, let's drop the WARN() that could potentially now fire spuriously.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
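For illustration, the ordering guarantee the message relies on reduces to a release/acquire pairing on the sequence counter: the writer publishes the page-table update, then bumps the counter with release semantics; a reader that observes the bumped value with acquire semantics is guaranteed to also observe the update. Below is a minimal user-space sketch of that pairing, assuming C11 atomics as a stand-in for the kernel's atomic_t helpers; every name in it (vmalloc_seq, pmd_entry, publish_mapping, observe_mapping) is a hypothetical stand-in, not kernel code.

#include <stdatomic.h>

static _Atomic unsigned int vmalloc_seq;	/* stand-in for init_mm.context.vmalloc_seq */
static int pmd_entry;				/* stand-in for a PMD entry covering vmalloc space */

/* Writer: modify the "page tables" first, then bump the counter with
 * release semantics, ordering the pmd_entry store before the increment. */
static void publish_mapping(int new_entry)
{
	pmd_entry = new_entry;
	atomic_fetch_add_explicit(&vmalloc_seq, 1, memory_order_release);
}

/* Reader: load the counter with acquire semantics; observing the bumped
 * value guarantees the matching pmd_entry store is visible as well. */
static int observe_mapping(unsigned int *cached_seq)
{
	unsigned int seq = atomic_load_explicit(&vmalloc_seq, memory_order_acquire);

	if (seq == *cached_seq)
		return -1;		/* counter unchanged: nothing to sync */
	*cached_seq = seq;
	return pmd_entry;		/* ordered after the release increment */
}

The diff below implements the writer half of this pairing with atomic_inc_return_release(); the acquire half sits on the MM switch path, outside this file.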
Diffstat (limited to 'arch/arm/kernel')
-rw-r--r--  arch/arm/kernel/traps.c | 25
1 file changed, 7 insertions(+), 18 deletions(-)
diff --git a/arch/arm/kernel/traps.c b/arch/arm/kernel/traps.c
index 3f38357efc46..08612032aefe 100644
--- a/arch/arm/kernel/traps.c
+++ b/arch/arm/kernel/traps.c
@@ -885,6 +885,7 @@ asmlinkage void handle_bad_stack(struct pt_regs *regs)
 	die("kernel stack overflow", regs, 0);
 }
 
+#ifndef CONFIG_ARM_LPAE
 /*
  * Normally, we rely on the logic in do_translation_fault() to update stale PMD
  * entries covering the vmalloc space in a task's page tables when it first
@@ -895,26 +896,14 @@ asmlinkage void handle_bad_stack(struct pt_regs *regs)
  * So we need to ensure that these PMD entries are up to date *before* the MM
  * switch. As we already have some logic in the MM switch path that takes care
  * of this, let's trigger it by bumping the counter every time the core vmalloc
- * code modifies a PMD entry in the vmalloc region.
+ * code modifies a PMD entry in the vmalloc region. Use release semantics on
+ * the store so that other CPUs observing the counter's new value are
+ * guaranteed to see the updated page table entries as well.
  */
 void arch_sync_kernel_mappings(unsigned long start, unsigned long end)
 {
-	if (start > VMALLOC_END || end < VMALLOC_START)
-		return;
-
-	/*
-	 * This hooks into the core vmalloc code to receive notifications of
-	 * any PMD level changes that have been made to the kernel page tables.
-	 * This means it should only be triggered once for every MiB worth of
-	 * vmalloc space, given that we don't support huge vmalloc/vmap on ARM,
-	 * and that kernel PMD level table entries are rarely (if ever)
-	 * updated.
-	 *
-	 * This means that the counter is going to max out at ~250 for the
-	 * typical case. If it overflows, something entirely unexpected has
-	 * occurred so let's throw a warning if that happens.
-	 */
-	WARN_ON(++init_mm.context.vmalloc_seq == UINT_MAX);
+	if (start < VMALLOC_END && end > VMALLOC_START)
+		atomic_inc_return_release(&init_mm.context.vmalloc_seq);
 }
-
+#endif
 #endif
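The commit message also mentions breaking the counter check on the MM switch path out into a separate static inline helper, with the LPAE test folded in. That helper lives outside arch/arm/kernel, so it does not appear in this diffstat; below is a rough, self-contained sketch of the pattern, reusing the C11 stand-ins from the sketch above. The helper name and the retry loop are illustrative assumptions, not the kernel's actual code, and the plain memcpy() stands in for copying the vmalloc PMD entries from init_mm.

#include <stdatomic.h>
#include <string.h>

static _Atomic unsigned int vmalloc_seq;	/* global counter, as in the sketch above */
static int global_pmds[16];			/* stand-in for init_mm's vmalloc PMD entries */

struct mm_sketch {
	unsigned int cached_seq;		/* stand-in for mm->context.vmalloc_seq */
	int pmds[16];				/* this mm's copy of the vmalloc PMD entries */
};

/* Called before switching to a task that uses this mm: if the global
 * counter has moved, copy the entries, and retry if it moved again
 * while we were copying. */
static void sync_vmalloc_mappings(struct mm_sketch *mm)
{
	unsigned int seq;

	do {
		seq = atomic_load_explicit(&vmalloc_seq, memory_order_acquire);
		if (seq == mm->cached_seq)
			return;			/* already up to date */
		memcpy(mm->pmds, global_pmds, sizeof(mm->pmds));
		mm->cached_seq = seq;
	} while (seq != atomic_load_explicit(&vmalloc_seq, memory_order_acquire));
}

The re-check of the counter after the copy handles a writer racing with the copy: if the counter advanced again in the meantime, the copy is repeated until a stable value is observed.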