author    Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>    2019-01-14 18:16:48 +0300
committer Vineet Gupta <vgupta@synopsys.com>    2019-01-17 16:24:39 -0800
commit    e6a72b7daeeb521753803550f0ed711152bb2555 (patch)
tree      f3b3b4f0b0be9df45faff0feec1b5a1656a3bffa /lib/decompress.c
parent    4d447455e73b47c43dd35fcc38ed823d3182a474 (diff)
ARCv2: lib: memset: fix doing prefetchw outside of buffer
The ARCv2 optimized memset uses the PREFETCHW instruction to prefetch the next cache line, but doesn't ensure that the line is not past the end of the buffer. PREFETCHW changes the line's ownership and marks it dirty, which can cause issues in SMP configs when the next line was already owned by another core. Fix the issue by avoiding the PREFETCHW.

Some more details: the current code has 3 logical loops (ignoring the unaligned part):
(a) big loop doing aligned 64 bytes per iteration, with PREALLOC
(b) loop doing 32 x 2 bytes, with PREFETCHW
(c) any leftover bytes

Loop (a) was already eliding the last 64 bytes, so PREALLOC was safe there. The fix is removing PREFETCHW from (b).

Another potential issue (applicable to configs with 32- or 128-byte L1 cache lines) is that PREALLOC assumes a 64-byte cache line and may not do the right thing, especially for 32-byte lines. While it would be easy to adapt, there are no known configs with those line sizes, so for now just compile out PREALLOC in such cases.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Cc: stable@vger.kernel.org #4.4+
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
[vgupta: rewrote changelog, used asm .macro vs. "C" macro]
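To make the hazard concrete, here is a minimal C sketch of the loop structure described in the changelog, assuming 64-byte L1 cache lines. It is illustrative only: the real fix is in ARC assembly (memset-archs.S), and memset_sketch and the use of GCC's __builtin_prefetch write hint stand in for the actual PREALLOC/PREFETCHW instructions.

```c
#include <stddef.h>
#include <stdint.h>

#define CACHE_LINE 64 /* assumption: 64-byte L1 lines, as PREALLOC expects */

/*
 * Hypothetical sketch of the loop structure above. The key invariant:
 * a write-intent prefetch of the *next* line is only issued while that
 * line is still known to lie entirely inside the buffer.
 */
static void *memset_sketch(void *s, int c, size_t n)
{
	uint8_t *p = s;

	/*
	 * (a) big loop, one cache line per iteration. Requiring at
	 * least two lines of remaining data "elides the last 64 bytes",
	 * so the write-hint prefetch of p + CACHE_LINE never reaches
	 * past the end of the buffer.
	 */
	while (n >= 2 * CACHE_LINE) {
		__builtin_prefetch(p + CACHE_LINE, 1 /* write intent */);
		for (int i = 0; i < CACHE_LINE; i++)
			p[i] = (uint8_t)c;
		p += CACHE_LINE;
		n -= CACHE_LINE;
	}

	/*
	 * (b) and (c): tail bytes with no prefetch at all, mirroring
	 * the fix, which dropped PREFETCHW from loop (b). Prefetching
	 * here could touch a line beyond the buffer and dirty a line
	 * another core already owns.
	 */
	while (n--)
		*p++ = (uint8_t)c;

	return s;
}
```

The design point the sketch captures is that the unsafe prefetch is not fixed by clamping the address but by restructuring which loop is allowed to prefetch at all: only the loop that provably has a full extra line of buffer ahead of it.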
Diffstat (limited to 'lib/decompress.c')
0 files changed, 0 insertions, 0 deletions