path: root/arch/arc/lib
Age | Commit message | Author
2023-08-17 | ARCv2: memset: don't prefetch for len == 0, which happens a lot | Vineet Gupta
This avoids potential "bleeding" when size == 0, as the cache line would otherwise be dirtied (and possibly fetched from other cores), and for the same reasons it is more optimal too.
Signed-off-by: Vineet Gupta <vgupta@kernel.org>
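A minimal C sketch of the fix, with assumed helper names (the real routine is hand-written ARC assembly):

    #include <stddef.h>

    /* Stand-in for ARC's PREFETCHW/PREALLOC: prefetch a line for writing. */
    static inline void prefetchw(const void *p)
    {
        __builtin_prefetch(p, 1 /* write */, 3 /* high locality */);
    }

    /* Hypothetical shape of the routine: bail out before the prefetch,
     * so a len == 0 call never takes ownership of (or dirties) a line. */
    void *memset_sketch(void *s, int c, size_t n)
    {
        unsigned char *p = s;

        if (n == 0)        /* the fix: nothing to write, touch no cache line */
            return s;

        prefetchw(p);      /* safe now: the buffer has at least one byte */
        while (n--)
            *p++ = (unsigned char)c;
        return s;
    }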
2019-06-19 | treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 500 | Thomas Gleixner
Based on 2 normalized pattern(s):

  this program is free software you can redistribute it and or modify it under the terms of the gnu general public license version 2 as published by the free software foundation

  this program is free software you can redistribute it and or modify it under the terms of the gnu general public license version 2 as published by the free software foundation

extracted by the scancode license scanner, the SPDX license identifier GPL-2.0-only has been chosen to replace the boilerplate/reference in 4122 file(s).
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Enrico Weigelt <info@metux.net>
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Allison Randal <allison@lohutok.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190604081206.933168790@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
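For reference, the tag that replaces the boilerplate is a single line at the top of each C source file:

    // SPDX-License-Identifier: GPL-2.0-only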
2019-04-08 | ARC: memset: fix build with L1_CACHE_SHIFT != 6 | Eugeniy Paltsev
In the 'L1_CACHE_SHIFT != 6' case we define dummy assembly macros PREALLOC_INSTR and PREFETCHW_INSTR without arguments. However, we pass arguments to them in code, which causes build errors. Fix that.
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Cc: <stable@vger.kernel.org> [5.0]
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
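A C-preprocessor analog of the bug (the real macros are GNU as ".macro" definitions; this sketch, with a prefetch stand-in, only mirrors the arity mismatch and its fix):

    #ifndef L1_CACHE_SHIFT
    #define L1_CACHE_SHIFT 6                 /* assumed default */
    #endif

    #if L1_CACHE_SHIFT == 6
    /* Real variants: expand to an actual prefetch of the given address. */
    #define PREALLOC_INSTR(addr)   __builtin_prefetch((addr), 1, 3)
    #define PREFETCHW_INSTR(addr)  __builtin_prefetch((addr), 1, 3)
    #else
    /* Dummy variants: must keep the same one-argument signature even though
     * they expand to nothing. Defining them with no parameter at all (the
     * bug) broke every call site that passes an argument. */
    #define PREALLOC_INSTR(addr)   ((void)0)
    #define PREFETCHW_INSTR(addr)  ((void)0)
    #endif

    void touch(char *buf)
    {
        PREFETCHW_INSTR(buf);                /* builds in both configurations */
    }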
2019-02-25 | ARCv2: Add explicit unaligned access support (and ability to disable it too) | Eugeniy Paltsev
As of today we enable unaligned access unconditionally on ARCv2. Do this under a Kconfig option to allow disabling it for testing, benchmarking etc. Also while at it:
 - Select HAVE_EFFICIENT_UNALIGNED_ACCESS
 - Although gcc defaults to unaligned access (since GNU 2018.03), add the right toggles for enabling or disabling it as appropriate
 - Update the bootlog to print both the HW feature status (exists, enabled/disabled) and the SW status (used / not used)
 - Wire up the relaxed memcpy for unaligned access
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
[vgupta: squashed patches, handled the gcc -mno-unaligned-access quirk]
2019-02-25 | ARCv2: lib: introduce memcpy optimized for unaligned access | Eugeniy Paltsev
Optimise the code to use efficient unaligned memory access, which is available on ARCv2. This allows us to really simplify the memcpy code and speed it up one and a half times (in the case of an unaligned source or destination).
Don't wire it up yet!
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
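A C sketch of why relaxed unaligned access simplifies the code: no alignment fixup loops, just word-sized chunks from wherever src/dst point (illustrative only; the kernel routine is ARC assembly):

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    void *memcpy_unaligned_sketch(void *dst, const void *src, size_t n)
    {
        unsigned char *d = dst;
        const unsigned char *s = src;

        while (n >= sizeof(uint32_t)) {
            uint32_t w;
            memcpy(&w, s, sizeof(w));  /* becomes a plain (unaligned) load */
            memcpy(d, &w, sizeof(w));  /* ... and a plain (unaligned) store */
            s += sizeof(w);
            d += sizeof(w);
            n -= sizeof(w);
        }
        while (n--)                    /* sub-word tail */
            *d++ = *s++;
        return dst;
    }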
2019-02-21 | ARCv2: lib: memcpy: fix doing prefetchw outside of buffer | Eugeniy Paltsev
ARCv2 optimized memcpy uses the PREFETCHW instruction for prefetching the next cache line, but doesn't ensure that the line is not past the end of the buffer. PREFETCHW changes the line ownership and marks it dirty, which can cause data corruption if this area is used for DMA IO.
Fix the issue by avoiding the PREFETCHW. This leads to performance degradation, but it is OK as we'll introduce a new memcpy implementation optimized for unaligned memory access.
We also cut out all PREFETCH instructions, as they are quite useless here:
 * we issue PREFETCH right before the LOAD instruction.
 * we copy 16 or 32 bytes of data (depending on CONFIG_ARC_HAS_LL64) in the main logical loop, so we issue PREFETCH 4 times (or 2 times) for each L1 cache line (in the case of a 64B L1 cache line, which is the default). Obviously this is not optimal.
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
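The missing bound check, sketched in C (helper and macro names are assumptions; the real fix simply drops the instruction from the assembly):

    #define L1_CACHE_BYTES 64   /* assumed default ARC line size */

    /* Prefetch-for-write the next line only if it lies entirely inside
     * [buf, buf_end): a PREFETCHW past the end takes ownership of -- and
     * dirties -- a line that may belong to an in-flight DMA buffer. */
    static inline void prefetchw_within(const void *next_line,
                                        const void *buf_end)
    {
        if ((const char *)next_line + L1_CACHE_BYTES <= (const char *)buf_end)
            __builtin_prefetch(next_line, 1, 3);
    }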
2019-01-17 | ARCv2: lib: memset: fix doing prefetchw outside of buffer | Eugeniy Paltsev
ARCv2 optimized memset uses the PREFETCHW instruction for prefetching the next cache line, but doesn't ensure that the line is not past the end of the buffer. PREFETCHW changes the line ownership and marks it dirty, which can cause issues in SMP configs when the next line was already owned by another core.
Fix the issue by avoiding the PREFETCHW.
Some more details: the current code has 3 logical loops (ignoring the unaligned part):
 (a) a big loop doing 64 aligned bytes per iteration, with PREALLOC
 (b) a loop for 32 x 2 bytes, with PREFETCHW
 (c) a loop for any leftover bytes
Loop (a) was already eliding the last 64 bytes, so PREALLOC was safe. The fix is removing PREFETCHW from (b). See the sketch of the loop structure below.
Another potential issue (applicable to configs with 32 or 128 byte L1 cache lines) is that PREALLOC assumes a 64 byte cache line and may not do the right thing, especially for 32B. While it would be easy to adapt, there are no known configs with those line sizes, so for now just compile out PREALLOC in such cases.
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Cc: stable@vger.kernel.org #4.4+
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
[vgupta: rewrote changelog, used asm .macro vs. "C" macro]
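The three-loop structure, sketched in C with assumed helper names (the real routine is assembly; prealloc() stands in for ARC's PREALLOC instruction):

    #include <stddef.h>

    #define LINE 64                      /* PREALLOC's assumed line size */

    static inline void prealloc(void *p) /* stand-in for the PREALLOC insn */
    {
        __builtin_prefetch(p, 1, 3);
    }

    void *memset_shape(void *s, int c, size_t n)
    {
        unsigned char *p = s, *end = p + n;
        size_t i;

        /* (a) one full line per iteration, PREALLOC-ing the *next* line.
         * Stopping a full line early keeps the prealloc inside the buffer. */
        while ((size_t)(end - p) >= 2 * LINE) {
            prealloc(p + LINE);
            for (i = 0; i < LINE; i++)
                p[i] = (unsigned char)c;
            p += LINE;
        }

        /* (b) 32 bytes per iteration; the PREFETCHW that used to live here
         * is exactly what this fix removes. */
        while ((size_t)(end - p) >= 32) {
            for (i = 0; i < 32; i++)
                p[i] = (unsigned char)c;
            p += 32;
        }

        /* (c) leftover bytes. */
        while (p < end)
            *p++ = (unsigned char)c;
        return s;
    }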
2016-09-30 | ARC: dw2 unwind: enable cfi pseudo ops in string lib | Vineet Gupta
This uses a new set of annotations, viz. ENTRY_CFI/END_CFI, to enable cfi ops generation.
Note that we didn't change the normal ENTRY/EXIT, as we don't actually want unwind info in the trap/exception/interrupt handlers which use these; the unwinder otherwise gets confused (it keeps recursing vs. stopping). Semantically these are leaf routines, and unwinding should stop when it hits those routines.

Before
------
 28.52%  1.19%   9929  hackbench  libuClibc-1.0.17.so  [.] __write_nocancel
            |
            ---__write_nocancel
               |--8.95%--EV_Trap
               |          --8.25%--sys_write
               |                    |--3.93%--sock_write_iter
               ...
               |--2.62%--memset    <==== [LEAF entry as no unwind info]
                         ^^^^^^

After
-----
 29.46%  1.24%  13622  hackbench  libuClibc-1.0.17.so  [.] __write_nocancel
            |
            ---__write_nocancel
               |--9.31%--EV_Trap
               |          --8.62%--sys_write
               |                    |--4.17%--sock_write_iter
               ...
               |--6.19%--sys_write
                          --6.19%--sock_write_iter
                                    unix_stream_sendmsg
                                    |--1.62%--sock_alloc_send_pskb
                                    |--0.89%--sock_def_readable
                                    |--0.88%--_raw_spin_unlock_irqrestore
                                    |--0.69%--memset
                                    |         ^^^^^^   <==== [now in proper callframe]
                                    |
                                     --0.52%--skb_copy_datagram_from_iter

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-11-03 | ARCv2: lib: memcpy: use local symbols | Vineet Gupta
Otherwise perf profiles don't charge time to memcpy.
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-07-20 | ARCv2: lib: memset: Don't assume 64-bit load/stores | Vineet Gupta
There are configurations which may not have LDD/STD.
Signed-off-by: Claudiu Zissulescu <claziss@synopsys.com>
Signed-off-by: Alexey Brodkin <abrodkin@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
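The shape of that constraint in C: gate the chunk width on CONFIG_ARC_HAS_LL64 (the real kernel symbol for LDD/STD support) rather than assuming 64-bit stores; the function body is an illustrative stand-in for the assembly:

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    #ifdef CONFIG_ARC_HAS_LL64
    typedef uint64_t chunk_t;   /* LDD/STD available: 8 bytes per store */
    #else
    typedef uint32_t chunk_t;   /* fall back to 32-bit LD/ST */
    #endif

    void *memset_chunks(void *s, int c, size_t n)
    {
        chunk_t pattern = (unsigned char)c;
        unsigned char *p = s;
        size_t i;

        for (i = 1; i < sizeof(chunk_t); i++)      /* replicate the byte */
            pattern = (chunk_t)(pattern << 8) | (unsigned char)c;

        while (n >= sizeof(chunk_t)) {
            memcpy(p, &pattern, sizeof(chunk_t));  /* alignment-safe store */
            p += sizeof(chunk_t);
            n -= sizeof(chunk_t);
        }
        while (n--)
            *p++ = (unsigned char)c;
        return s;
    }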
2015-07-20 | ARCv2: lib: memcpy: Missing PREFETCHW | Vineet Gupta
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-06-22 | ARCv2: Adhere to Zero Delay loop restriction | Vineet Gupta
A branch insn can't be scheduled as the last insn of a Zero Overhead Loop.
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-06-22 | ARCv2: optimised string/mem lib routines | Claudiu Zissulescu
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2014-03-26 | ARC: switch to generic ENTRY/END assembler annotations | Vineet Gupta
With commit 9df62f054406 "arch: use ASM_NL instead of ';'" the generic macros can handle the arch-specific newline quirk. Hence we can get rid of the ARC asm macros and use the "C" style macros.
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
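Paraphrased from include/linux/linkage.h of that era: the generic annotations are built from ASM_NL, so an arch whose assembler uses a different statement separator (as ARC's does) only has to override that one macro:

    /* generic fallbacks, paraphrased -- an arch may pre-define ASM_NL */
    #ifndef ASM_NL
    #define ASM_NL   ;          /* usual assembler statement separator */
    #endif

    #ifndef ENTRY
    #define ENTRY(name) \
        .globl name ASM_NL \
        ALIGN ASM_NL \
        name:
    #endif

    #ifndef END
    #define END(name) \
        .size name, .-name
    #endif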
2013-08-24 | ARC: [lib] strchr breakage in Big-endian configuration | Joern Rennecke
For a search buffer, 2 byte aligned, strchr() was returning a pointer outside of the buffer (buf - 1):

------------->8----------------
// Input buffer (default 4 byte aligned)
char *buffer = "1AA_";

// actual search start (to mimic 2 byte alignment)
char *current_line = &(buffer[2]);

// Character to search for
char c = 'A';

char *c_pos = strchr(current_line, c);
printf("%s\n", c_pos);   // --> 'AA_' as opposed to 'A_'
------------->8----------------

Reported-by: Anton Kolesov <Anton.Kolesov@synopsys.com>
Debugged-by: Anton Kolesov <Anton.Kolesov@synopsys.com>
Cc: <stable@vger.kernel.org> # [3.9 and 3.10]
Cc: Noam Camus <noamc@ezchip.com>
Signed-off-by: Joern Rennecke <joern.rennecke@embecosm.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-02-11 | ARC: String library | Vineet Gupta
Hand-optimised asm code for the ARC700 pipeline. Originally written/optimized by Joern Rennecke.
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Cc: Joern Rennecke <joern.rennecke@embecosm.com>
2013-02-11 | ARC: Build system: Makefiles, Kconfig, Linker script | Vineet Gupta
Arnd in his review pointed out that the arch Kconfig organisation has several deficiencies:
 * Build-time entries for things which can be runtime extracted from DT (e.g. SDRAM size, core clk frequency..)
 * Not multi-platform-image-build friendly (choice .. endchoice constructs)
 * CPU variants support (750/770) is exclusive
The first 2 have been fixed in subsequent patches. Due to the nature of the 750 and 770, it is not possible to build for both together w/o special runtime glue code, which would hurt performance.
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Sam Ravnborg <sam@ravnborg.org>
Acked-by: Sam Ravnborg <sam@ravnborg.org>