author | Johannes Berg <johannes.berg@intel.com> | 2021-03-15 23:38:04 +0100
---|---|---
committer | Richard Weinberger <richard@nod.at> | 2021-06-17 22:04:40 +0200
commit | 80f849bf541ef9b633a9c08ac208f9c9afd14eb9 (patch)
tree | 4da4171a7d84405ecdbecfc246301eb9e176f9da /arch/um/include/asm/cacheflush.h
parent | dd3035a21ba7ccaa883d7107d357ad06320d78fc (diff)
um: implement flush_cache_vmap/flush_cache_vunmap
vmalloc()-heavy workloads in UML are extremely slow because the
entire kernel VM space is flushed (flush_tlb_kernel_vm()) on the
first segfault.
Implement flush_cache_vmap() to avoid that, and while at it
also add flush_cache_vunmap() since it's trivial.
This speeds up my vmalloc()-heavy test of copying files out
from /sys/kernel/debug/gcov/ by 30x (from 30s to 1s).
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Acked-By: Anton Ivanov <anton.ivanov@cambridgegreys.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
Diffstat (limited to 'arch/um/include/asm/cacheflush.h')
-rw-r--r-- | arch/um/include/asm/cacheflush.h | 9
1 file changed, 9 insertions(+), 0 deletions(-)
diff --git a/arch/um/include/asm/cacheflush.h b/arch/um/include/asm/cacheflush.h
new file mode 100644
index 000000000000..4c9858cd36ec
--- /dev/null
+++ b/arch/um/include/asm/cacheflush.h
@@ -0,0 +1,9 @@
+#ifndef __UM_ASM_CACHEFLUSH_H
+#define __UM_ASM_CACHEFLUSH_H
+
+#include <asm/tlbflush.h>
+#define flush_cache_vmap flush_tlb_kernel_range
+#define flush_cache_vunmap flush_tlb_kernel_range
+
+#include <asm-generic/cacheflush.h>
+#endif /* __UM_ASM_CACHEFLUSH_H */
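To illustrate the effect of the new header, the sketch below (hypothetical kernel-style C, not part of the patch; the caller name `example_vmap_path` is invented) shows what changes for a caller of the generic vmalloc machinery: with the macros defined before `<asm-generic/cacheflush.h>` is included, the generic header keeps them instead of supplying its no-op defaults, so the flush is scoped to the freshly mapped range rather than the whole kernel VM space.

```c
/*
 * Hedged sketch, not the actual kernel sources.
 *
 * Before this patch UML had no asm/cacheflush.h of its own, so
 * asm-generic/cacheflush.h defined flush_cache_vmap()/vunmap()
 * as no-ops and the first fault on a new mapping fell back to
 * flush_tlb_kernel_vm(), which flushes the entire kernel VM space.
 */
#include <asm/cacheflush.h>

static void example_vmap_path(unsigned long start, unsigned long end)
{
	/* ... page tables for [start, end) are populated here ... */

	/*
	 * With the new header this expands to
	 * flush_tlb_kernel_range(start, end): only the just-mapped
	 * range is flushed, avoiding the full-space flush that made
	 * vmalloc()-heavy workloads so slow.
	 */
	flush_cache_vmap(start, end);
}
```

Because both macros simply alias `flush_tlb_kernel_range`, the vunmap side gets the same ranged behavior for free, which is why the commit message calls adding it trivial.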