path: root/lib/raid6/rvv.c
2025-06-12  raid6: riscv: Fix NULL pointer dereference caused by a missing clobber  (Chunyan Zhang)
When running the raid6 user-space test program on RISC-V QEMU, there is a
segmentation fault that seems to be caused by dereferencing an invalid (NULL)
pointer: the pointer variables p/q in raid6_rvv*_gen/xor_syndrome_real()
should be equal to dptr[x], but inspecting them in GDB shows a value of 0x10:

    Program received signal SIGSEGV, Segmentation fault.
    0x0000000000011062 in raid6_rvv2_xor_syndrome_real (disks=<optimized out>,
        start=0, stop=<optimized out>, bytes=4096, ptrs=<optimized out>)
        at rvv.c:386
    (gdb) p p
    $1 = (u8 *) 0x10 <error: Cannot access memory at address 0x10>

The issue was found to be related to:

1) Compiler optimization: there is no segmentation fault when the raid6test
   program is built with -O0.
2) The RISC-V vector instruction vsetvli: there is no segmentation fault
   either when t0 is not used as the first (destination) operand of vsetvli.

This patch selects the second solution to fix the issue.

[Palmer: The actual issue here is a missing clobber in the vsetvli code.
It's a little tricky: we've already probed for VLENB, so we don't need to
look at the output register, we just need to have an X register in the
instruction as that's the form required to actually set VL.  Thus we clobber
a register, and without describing that we end up breaking compilers.]

Fixes: 6093faaf9593 ("raid6: Add RISC-V SIMD syndrome and recovery calculations")
Signed-off-by: Chunyan Zhang <zhangchunyan@iscas.ac.cn>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/20250610101234.1100660-3-zhangchunyan@iscas.ac.cn
Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
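As a rough illustration of the clobber problem described above (a minimal
sketch assuming the general code shape in rvv.c, not the exact hunk from the
patch; the helper name is made up), the setup asm writes a scratch X register
as the vsetvli destination, and that register must be declared as clobbered
so the compiler does not keep a live value, such as the p/q pointers, in it
across the statement:

    /*
     * Hypothetical helper sketching the fix.  vsetvli needs a non-x0
     * destination register to set VL = VLMAX, so a scratch register (t0
     * here) is written even though its value is never read.  Without the
     * "t0" clobber, the compiler may keep a live value (e.g. the p/q
     * pointers) in t0 across the asm statement, causing the crash above.
     */
    static inline void rvv_setup_vl(void)
    {
    	asm volatile (".option push\n"
    		      ".option arch,+v\n"
    		      "vsetvli t0, x0, e8, m1, ta, ma\n"
    		      ".option pop\n"
    		      : : : "t0");	/* declare t0 as overwritten */
    }

With -O0 the compiler happens not to allocate anything live into t0, which is
why the crash only shows up at higher optimization levels.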
2025-06-05  raid6: Add RISC-V SIMD syndrome and recovery calculations  (Chunyan Zhang)
The assembly is based on the ARM NEON and int.uc implementations, but uses
RISC-V vector instructions to implement the RAID6 syndrome and recovery
calculations.  The functions were tested on QEMU running with the option
"-icount shift=0":

    raid6: rvvx1   gen() 1008 MB/s
    raid6: rvvx2   gen() 1395 MB/s
    raid6: rvvx4   gen() 1584 MB/s
    raid6: rvvx8   gen() 1694 MB/s
    raid6: int64x8 gen()  113 MB/s
    raid6: int64x4 gen()  116 MB/s
    raid6: int64x2 gen()  272 MB/s
    raid6: int64x1 gen()  229 MB/s
    raid6: using algorithm rvvx8 gen() 1694 MB/s
    raid6: .... xor() 1000 MB/s, rmw enabled
    raid6: using rvv recovery algorithm

[Charlie: - Fixup vector options]

Signed-off-by: Charlie Jenkins <charlie@rivosinc.com>
Signed-off-by: Chunyan Zhang <zhangchunyan@iscas.ac.cn>
Reviewed-by: Charlie Jenkins <charlie@rivosinc.com>
Tested-by: Charlie Jenkins <charlie@rivosinc.com>
Link: https://lore.kernel.org/r/20250305083707.74218-1-zhangchunyan@iscas.ac.cn
Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com>
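For reference, below is a minimal scalar sketch of the P/Q syndrome
computation that the vector code parallelizes, loosely in the style of
int.uc; it is not the kernel's actual implementation, and the function and
variable names are illustrative only:

    #include <stddef.h>
    #include <stdint.h>

    /*
     * Scalar sketch of RAID6 syndrome generation.  P is the plain XOR of
     * all data blocks; Q is accumulated with Horner's rule, multiplying by
     * x (0x02) in GF(2^8) with reduction polynomial 0x1d between disks.
     * dptr[0..disks-3] are data blocks, dptr[disks-2] is P, dptr[disks-1] is Q.
     */
    static void raid6_gen_syndrome_sketch(int disks, size_t bytes, void **ptrs)
    {
    	uint8_t **dptr = (uint8_t **)ptrs;
    	uint8_t *p = dptr[disks - 2];	/* P parity block */
    	uint8_t *q = dptr[disks - 1];	/* Q parity block */
    	int z0 = disks - 3;		/* highest-numbered data disk */

    	for (size_t d = 0; d < bytes; d++) {
    		uint8_t wp = dptr[z0][d];	/* running P */
    		uint8_t wq = wp;		/* running Q */

    		for (int z = z0 - 1; z >= 0; z--) {
    			uint8_t wd = dptr[z][d];

    			/* wq *= 0x02 in GF(2^8), then fold in the next disk */
    			wq = (uint8_t)((wq << 1) ^ ((wq & 0x80) ? 0x1d : 0));
    			wq ^= wd;
    			wp ^= wd;
    		}
    		p[d] = wp;
    		q[d] = wq;
    	}
    }

The rvvx1/x2/x4/x8 variants apply the same recurrence to whole vector
registers at a time (and to multiple register groups per loop iteration),
which is where the speedups in the benchmark output above come from.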