author    Palmer Dabbelt <palmer@rivosinc.com>  2024-01-17 18:07:11 -0800
committer Palmer Dabbelt <palmer@rivosinc.com>  2024-01-17 18:07:11 -0800
commit    c640868491105d53899f9f8e613acd4aa06cef68 (patch)
tree      1399e589b73645b72a468c603eb1280e625bb035 /arch/riscv/Kconfig
parent    0de65288d75ff96c30e216557d979fb9342c4323 (diff)
parent    6f4c45cbcb00d649475a3099235e5b4fce569b4b (diff)
Merge patch series "riscv: Add fine-tuned checksum functions"
Charlie Jenkins <charlie@rivosinc.com> says:

Each architecture generally implements fine-tuned checksum functions to
leverage the instruction set. This patch series adds the main checksum
functions that are used in networking.

Tested on QEMU, this series allows the CHECKSUM_KUNIT tests to complete
an average of 50.9% faster.

This series makes heavy use of the Zbb extension via alternatives
patching. To test it, enable the configs for KUNIT, then CHECKSUM_KUNIT.

I have attempted to make these functions as optimal as possible, but I
have not run anything on actual riscv hardware. My performance testing
has been limited to inspecting the assembly, running the algorithms on
x86 hardware, and running in QEMU.

ip_fast_csum is a relatively small function, so even though it is
possible to read 64 bits at a time on compatible hardware, the
bottleneck becomes the cleanup and setup code, and loading 32 bits at a
time is actually faster.

* b4-shazam-merge:
  kunit: Add tests for csum_ipv6_magic and ip_fast_csum
  riscv: Add checksum library
  riscv: Add checksum header
  riscv: Add static key for misaligned accesses
  asm-generic: Improve csum_fold

Link: https://lore.kernel.org/r/20240108-optimize_checksum-v15-0-1c50de5f2167@rivosinc.com
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
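For readers unfamiliar with what ip_fast_csum computes, here is a
minimal portable C sketch of the one's-complement sum over the IP
header, accumulating 32-bit words as discussed above. The function name
and structure are illustrative only, not the riscv implementation from
this series:

  #include <stdint.h>

  /* Sum the IP header as 32-bit words into a 64-bit accumulator,
   * then fold carries back in until 16 bits remain.  ihl counts
   * 32-bit words, as in the IP header's IHL field. */
  static uint16_t ip_fast_csum_sketch(const void *iph, unsigned int ihl)
  {
          const uint32_t *p = iph;
          uint64_t sum = 0;
          unsigned int i;

          for (i = 0; i < ihl; i++)
                  sum += p[i];

          /* Fold 64 -> 32 bits, then 32 -> 16 bits, re-adding carries. */
          sum = (sum & 0xffffffffu) + (sum >> 32);
          sum = (sum & 0xffffffffu) + (sum >> 32);
          sum = (sum & 0xffffu) + (sum >> 16);
          sum = (sum & 0xffffu) + (sum >> 16);
          return (uint16_t)~sum;
  }

Because the header is at most 15 words, the loop body is cheap and the
setup/fold code dominates, which is the bottleneck the cover letter
describes.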
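The "asm-generic: Improve csum_fold" patch in the shortlog replaces the
traditional two-step carry fold with a rotate-and-subtract. A sketch of
that identity in plain C, where ror32_sketch stands in for the kernel's
ror32(); treat this as an illustration of the technique rather than the
exact patched code:

  #include <stdint.h>

  /* Rotate right by n bits (0 < n < 32). */
  static inline uint32_t ror32_sketch(uint32_t x, unsigned int n)
  {
          return (x >> n) | (x << (32 - n));
  }

  /* Fold a 32-bit partial checksum to 16 bits with one subtract
   * instead of two dependent add-and-fold steps.  For any csum this
   * produces the same result as the classic fold-twice-and-invert. */
  static inline uint16_t csum_fold_sketch(uint32_t csum)
  {
          return (uint16_t)((~csum - ror32_sketch(csum, 16)) >> 16);
  }

For example, csum = 0x12345678 gives ~csum - ror32(csum, 16) =
0xedcba987 - 0x56781234 = 0x97539753, whose top half 0x9753 matches the
classic fold of 0x1234 + 0x5678 = 0x68ac, inverted.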
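The "riscv: Add static key for misaligned accesses" patch lets the
checksum code choose a fast path at runtime. A minimal sketch of how
such a gate is typically built with the kernel's jump-label API; the
key name below is hypothetical, not the identifier the series
introduces:

  #include <linux/jump_label.h>

  /* Hypothetical name: a key enabled during boot once misaligned
   * loads are known to be fast on this hardware. */
  DEFINE_STATIC_KEY_FALSE(fast_misaligned_access_key);

  static inline bool misaligned_access_is_fast(void)
  {
          /* Compiles to a patched direct branch, so the check costs
           * nothing on the hot path once the key is enabled. */
          return static_branch_likely(&fast_misaligned_access_key);
  }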
Diffstat (limited to 'arch/riscv/Kconfig')
0 files changed, 0 insertions, 0 deletions