author    | Jisheng Zhang <jszhang@kernel.org>    | 2024-03-25 19:00:36 +0800
committer | Palmer Dabbelt <palmer@rivosinc.com>  | 2024-04-30 10:35:45 -0700
commit    | dcb2743d1e701fc1a986c187adc11f6148316d21 (patch)
tree      | 33263065e01dc2f7af6fda2665cabb2cf903399c /arch/riscv/include/asm
parent    | 0fdbb06379b1126a8c69ceec28e6e506088614a2 (diff)
riscv: mm: still create swiotlb buffer for kmalloc() bouncing if required
After commit f51f7a0fc2f4 ("riscv: enable DMA_BOUNCE_UNALIGNED_KMALLOC
for !dma_coherent"), non-coherent platforms with less than 4GB of
memory rely on users passing the "swiotlb=mmnn,force" kernel parameter
to enable DMA bouncing for unaligned kmalloc() buffers. Now let's go
further: if no bouncing is needed for ZONE_DMA, let the kernel
automatically allocate 1MB of swiotlb buffer per 1GB of RAM for
kmalloc() bouncing on non-coherent platforms, so that passing
"swiotlb=mmnn,force" is no longer necessary.
The math of "1MB swiotlb buffer per 1GB of RAM for kmalloc() bouncing"
is taken from arm64. Users can still force a smaller swiotlb buffer by
passing "swiotlb=mmnn".
Signed-off-by: Jisheng Zhang <jszhang@kernel.org>
Reviewed-by: Alexandre Ghiti <alexghiti@rivosinc.com>
Link: https://lore.kernel.org/r/20240325110036.1564-1-jszhang@kernel.org
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
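[Editor's note] The sizing described in the commit message lives in arch/riscv/mm/init.c, which is outside this diffstat. The following is a minimal sketch of the 1MB-per-1GB arithmetic, modelled on the arm64 logic the message refers to; the placement in mem_init(), the dma32_phys_limit/max_pfn test, and the dma_cache_alignment != 1 check are assumptions for illustration, not a quote of the patch.

/*
 * Sketch only: computing "1MB of swiotlb per 1GB of RAM" for kmalloc()
 * bouncing, following the arm64 approach the commit message cites.
 * Assumes the generic swiotlb API; the surrounding mem_init() context
 * is illustrative.
 */
void __init mem_init(void)
{
	/* ZONE_DMA already needs bouncing if RAM extends past the 32-bit DMA limit */
	bool swiotlb = max_pfn > PFN_DOWN(dma32_phys_limit);

	if (IS_ENABLED(CONFIG_DMA_BOUNCE_UNALIGNED_KMALLOC) && !swiotlb &&
	    dma_cache_alignment != 1) {
		/*
		 * No bouncing needed for ZONE_DMA: reserve 1/1024 of RAM
		 * (1MB per 1GB) for kmalloc() bouncing, but never more
		 * than the default or a user-supplied "swiotlb=mmnn" size.
		 */
		unsigned long size =
			DIV_ROUND_UP(memblock_phys_mem_size(), 1024);

		swiotlb_adjust_size(min(swiotlb_size_or_default(), size));
		swiotlb = true;
	}

	swiotlb_init(swiotlb, SWIOTLB_VERBOSE);
	memblock_free_all();
}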
Diffstat (limited to 'arch/riscv/include/asm')
-rw-r--r-- | arch/riscv/include/asm/cache.h | 2 |
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/riscv/include/asm/cache.h b/arch/riscv/include/asm/cache.h
index 2174fe7bac9a..570e9d8acad1 100644
--- a/arch/riscv/include/asm/cache.h
+++ b/arch/riscv/include/asm/cache.h
@@ -26,8 +26,8 @@
 
 #ifndef __ASSEMBLY__
 
-#ifdef CONFIG_RISCV_DMA_NONCOHERENT
 extern int dma_cache_alignment;
+#ifdef CONFIG_RISCV_DMA_NONCOHERENT
 #define dma_get_cache_alignment dma_get_cache_alignment
 static inline int dma_get_cache_alignment(void)
 {
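[Editor's note] On the header hunk above: moving the extern declaration of dma_cache_alignment out of the CONFIG_RISCV_DMA_NONCOHERENT block keeps the variable declared in all configurations, presumably so that IS_ENABLED()-guarded code such as the sizing check sketched earlier can reference it without a build failure when that option is disabled; the inline dma_get_cache_alignment() helper itself remains conditional.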