author    Robin Murphy <robin.murphy@arm.com>    2020-01-15 16:42:39 +0000
committer Will Deacon <will@kernel.org>          2020-01-16 15:23:29 +0000
commit    5777eaed566a1d63e344d3dd8f2b5e33be20643e (patch)
tree      9bf4f13c0209f26135e66073760b297ba0cac0d9 /arch/arm64/include/asm/checksum.h
parent    46cf053efec6a3a5f343fead837777efe8252a46 (diff)
arm64: Implement optimised checksum routine
Apparently there exist certain workloads which rely heavily on software
checksumming, for which the generic do_csum() implementation becomes a
significant bottleneck. Therefore let's give arm64 its own optimised
version - for ease of maintenance this foregoes assembly or intrinsics,
and is thus not actually arm64-specific, but does rely heavily on C
idioms that translate well to the A64 ISA and the typical load/store
capabilities of most ARMv8 CPU cores. The resulting increase in checksum
throughput scales nicely with buffer size, tending towards 4x for a
small in-order core (Cortex-A53), and up to 6x or more for an aggressive
big core (Ampere eMAG).

Reported-by: Lingyan Huang <huanglingyan2@huawei.com>
Tested-by: Lingyan Huang <huanglingyan2@huawei.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
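As an aside on the approach: "C idioms that translate well to the A64 ISA"
amounts to summing the buffer in 64-bit chunks so the compiler can emit plain
64-bit loads and carry-propagating adds. Below is a minimal sketch of that
idea - illustrative only, not the routine this commit adds; csum_sketch is a
made-up name, little-endian byte order is assumed, and unsigned __int128 is a
GCC/Clang extension available on AArch64. Alignment handling, short-buffer
special cases and big-endian support are deliberately omitted.

        /*
         * Illustrative sketch of a 64-bit-at-a-time one's-complement sum,
         * folded down to 16 bits at the end. Not the kernel's do_csum().
         */
        #include <stdint.h>
        #include <string.h>

        static uint16_t csum_sketch(const unsigned char *buff, size_t len)
        {
                unsigned __int128 sum = 0;  /* wide accumulator keeps all carries */
                uint64_t word;

                /*
                 * Main loop: one 64-bit load per iteration. memcpy() keeps
                 * unaligned accesses well-defined in C and compiles to a
                 * single ldr on arm64.
                 */
                while (len >= 8) {
                        memcpy(&word, buff, 8);
                        sum += word;
                        buff += 8;
                        len -= 8;
                }

                /* Zero-padded partial word for the tail (little-endian only). */
                if (len) {
                        word = 0;
                        memcpy(&word, buff, len);
                        sum += word;
                }

                /*
                 * Fold the wide sum down: 128 -> 64 -> 32 -> 16 bits, adding
                 * each carry back in (end-around carry).
                 */
                uint64_t s = (uint64_t)sum + (uint64_t)(sum >> 64);
                if (s < (uint64_t)sum)
                        s++;
                s = (s & 0xffffffff) + (s >> 32);
                s = (s & 0xffffffff) + (s >> 32);
                s = (s & 0xffff) + (s >> 16);
                s = (s & 0xffff) + (s >> 16);
                return (uint16_t)s;
        }

The real routine additionally deals with alignment, very short buffers and
big-endian kernels, all of which this sketch ignores.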
Diffstat (limited to 'arch/arm64/include/asm/checksum.h')
-rw-r--r--  arch/arm64/include/asm/checksum.h | 3 +++
1 file changed, 3 insertions(+), 0 deletions(-)
diff --git a/arch/arm64/include/asm/checksum.h b/arch/arm64/include/asm/checksum.h
index d064a50deb5f..8d2a7de39744 100644
--- a/arch/arm64/include/asm/checksum.h
+++ b/arch/arm64/include/asm/checksum.h
@@ -35,6 +35,9 @@ static inline __sum16 ip_fast_csum(const void *iph, unsigned int ihl)
 }
 #define ip_fast_csum ip_fast_csum
 
+extern unsigned int do_csum(const unsigned char *buff, int len);
+#define do_csum do_csum
+
 #include <asm-generic/checksum.h>
 
 #endif	/* __ASM_CHECKSUM_H */
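For context on the "#define do_csum do_csum" idiom in the hunk above: the
generic checksum code compiles its fallback only when no macro of that name
exists, so defining the macro (to the function's own name) is what makes the
extern declaration the do_csum the rest of the tree sees. A rough sketch of
that guard pattern - illustrative, not the verbatim generic source; the loop
body is a simplified stand-in:

        /*
         * Sketch of the guard around the generic fallback: it drops out
         * entirely once an architecture has defined a do_csum macro.
         */
        #ifndef do_csum
        static unsigned int do_csum(const unsigned char *buff, int len)
        {
                unsigned int sum = 0;
                int odd = 0;

                /* Sum 16-bit words a byte at a time (little-endian layout). */
                while (len-- > 0) {
                        unsigned int b = *buff++;
                        sum += odd ? (b << 8) : b;
                        odd = !odd;
                }

                /* Fold carries back into 16 bits. */
                sum = (sum & 0xffff) + (sum >> 16);
                sum = (sum & 0xffff) + (sum >> 16);
                return sum;
        }
        #endif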