path: root/include/crypto/scatterwalk.h
2025-04-28  crypto: scatterwalk - Move skcipher walk and use it for memcpy_sglist  (Herbert Xu)
Move the generic part of skcipher walk into scatterwalk, and use it to implement memcpy_sglist. This makes memcpy_sglist do the right thing when two distinct SG lists contain identical subsets (e.g., the AD part of AEAD).
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-03-21  crypto: scatterwalk - Use nth_page instead of doing it by hand  (Herbert Xu)
Curiously, the Crypto API scatterwalk incremented pages by hand rather than using nth_page. Possibly because scatterwalk predates nth_page (the following commit is from the history tree):

    commit 3957f2b34960d85b63e814262a8be7d5ad91444d
    Author: James Morris <jmorris@intercode.com.au>
    Date:   Sun Feb 2 07:35:32 2003 -0800

        [CRYPTO]: in/out scatterlist support for ciphers.

Fix this by using nth_page.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
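For context, a hedged sketch of the two idioms; base_page and offset are illustrative names, not the walker's actual fields. nth_page() matters on memory models where struct pages are not virtually contiguous:

    /* By hand: plain pointer arithmetic, valid only when struct pages
     * are contiguous (e.g. not with SPARSEMEM without VMEMMAP): */
    struct page *page = base_page + (offset >> PAGE_SHIFT);

    /* With the helper: correct on all memory models: */
    struct page *page = nth_page(base_page, offset >> PAGE_SHIFT);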
2025-03-21  crypto: scatterwalk - simplify map and unmap calling convention  (Eric Biggers)
Now that the address returned by scatterwalk_map() is always being stored into the same struct scatter_walk that is passed in, make scatterwalk_map() do so itself and return void. Similarly, now that scatterwalk_unmap() is always being passed the address field within a struct scatter_walk, make scatterwalk_unmap() take a pointer to struct scatter_walk instead of the address directly.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
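A minimal before/after sketch of the convention change described above; the caller body is illustrative, not lifted from a real conversion:

    /* Before: map returned the address, unmap took it back. */
    u8 *vaddr = scatterwalk_map(&walk);
    memcpy(buf, vaddr, n);
    scatterwalk_unmap(vaddr);

    /* After: the address lives in the walk itself. */
    scatterwalk_map(&walk);
    memcpy(buf, walk.addr, n);
    scatterwalk_unmap(&walk);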
2025-03-15  crypto: scatterwalk - Add memcpy_sglist  (Herbert Xu)
Add memcpy_sglist which copies one SG list to another.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
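A hedged usage sketch; the prototype follows the description, and the AEAD framing is only an example:

    void memcpy_sglist(struct scatterlist *dst, struct scatterlist *src,
                       unsigned int nbytes);

    /* e.g. mirror the associated data from an AEAD request's source
     * scatterlist into its destination: */
    memcpy_sglist(req->dst, req->src, req->assoclen);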
2025-03-15  crypto: scatterwalk - Change scatterwalk_next calling convention  (Herbert Xu)
Rather than returning the address and storing the length into an argument pointer, add an address field to the walk struct and use that to store the address. The length is returned directly. Change the done functions to use this stored address instead of getting it from the caller. Split the address into two using a union. The user should only access the const version so that it is never changed.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
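In sketch form, the union split and the resulting caller shape look roughly like this; the member layout is an assumption based on the description above:

    struct scatter_walk {
        /* Mapped address of the current position.  Users read the
         * const alias; the walk code writes the private one. */
        union {
            void *const addr;
            void *__addr;
        };
        struct scatterlist *sg;
        unsigned int offset;
    };

    /* Length is now the return value; the address is stored: */
    unsigned int n = scatterwalk_next(&walk, total);
    memcpy(buf, walk.addr, n);
    scatterwalk_done_src(&walk, n);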
2025-03-02  crypto: scatterwalk - don't split at page boundaries when !HIGHMEM  (Eric Biggers)
When !HIGHMEM, the kmap_local_page() in the scatterlist walker does not actually map anything, and the address it returns is just the address from the kernel's direct map, where each sg entry's data is virtually contiguous. To improve performance, stop unnecessarily clamping data segments to page boundaries in this case.

For now, still limit segments to PAGE_SIZE. This is needed to prevent preemption from being disabled for too long when SIMD is used, and to support the alignmask case which still uses a page-sized bounce buffer.

Even so, this change still helps a lot in cases where messages cross a page boundary. For example, testing IPsec with AES-GCM on x86_64, the messages are 1424 bytes which is less than PAGE_SIZE, but on the Rx side over a third cross a page boundary. These ended up being processed in three parts, with the middle part going through skcipher_next_slow which uses a 16-byte bounce buffer. That was causing a significant amount of overhead which unnecessarily reduced the performance benefit of the new x86_64 AES-GCM assembly code. This change solves the problem; all these messages now get passed to the assembly code in one part.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
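The resulting clamp policy, as a sketch; the function shape and variable names are assumptions, not the exact patch:

    static inline unsigned int scatterwalk_clamp(struct scatter_walk *walk,
                                                 unsigned int nbytes)
    {
        unsigned int len_this_sg, limit;

        /* Bytes remaining in the current sg entry: */
        len_this_sg = walk->sg->offset + walk->sg->length - walk->offset;

        if (IS_ENABLED(CONFIG_HIGHMEM)) {
            /* kmap_local_page() maps one page at a time, so a
             * segment must not cross a page boundary. */
            limit = PAGE_SIZE - offset_in_page(walk->offset);
        } else {
            /* The direct map is virtually contiguous; still cap at
             * PAGE_SIZE to bound preempt-off time and the alignmask
             * bounce buffer. */
            limit = PAGE_SIZE;
        }
        return min3(nbytes, len_this_sg, limit);
    }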
2025-03-02  crypto: scatterwalk - remove obsolete functions  (Eric Biggers)
Remove various functions that are no longer used.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-03-02  crypto: scatterwalk - add scatterwalk_get_sglist()  (Eric Biggers)
Add a function that creates a scatterlist that represents the remaining data in a walk. This will be used to replace chain_to_walk() in net/tls/tls_device_fallback.c so that it will no longer need to reach into the internals of struct scatter_walk.
Cc: Boris Pismenny <borisp@nvidia.com>
Cc: Jakub Kicinski <kuba@kernel.org>
Cc: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
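A sketch of how such a function can work: one entry for the remainder of the sg entry the walk stopped in, chained to the untouched tail of the original list. The signature and body here are assumptions derived from the description, not the actual patch:

    static inline void scatterwalk_get_sglist(struct scatter_walk *walk,
                                              struct scatterlist sg_out[2])
    {
        sg_init_table(sg_out, 2);
        /* Remainder of the current entry... */
        sg_set_page(&sg_out[0], sg_page(walk->sg),
                    walk->sg->offset + walk->sg->length - walk->offset,
                    walk->offset);
        /* ...chained to the rest of the original list. */
        scatterwalk_crypto_chain(sg_out, sg_next(walk->sg), 2);
    }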
2025-03-02  crypto: scatterwalk - add new functions for copying data  (Eric Biggers)
Add memcpy_from_sglist() and memcpy_to_sglist() which are more readable versions of scatterwalk_map_and_copy() with the 'out' argument 0 and 1 respectively. They follow the same argument order as memcpy_from_page() and memcpy_to_page() from <linux/highmem.h>. Note that in the case of memcpy_from_sglist(), this also happens to be the same argument order that scatterwalk_map_and_copy() uses.

The new code is also faster, mainly because it builds the scatter_walk directly without creating a temporary scatterlist. E.g., a 20% performance improvement is seen for copying the AES-GCM auth tag.

Make scatterwalk_map_and_copy() be a wrapper around memcpy_from_sglist() and memcpy_to_sglist(). Callers of scatterwalk_map_and_copy() should be updated to call memcpy_from_sglist() or memcpy_to_sglist() directly, but there are a lot of them so they aren't all being updated right away.

Also add functions memcpy_from_scatterwalk() and memcpy_to_scatterwalk() which are similar but operate on a scatter_walk instead of a scatterlist. These will replace scatterwalk_copychunks() with the 'out' argument 0 and 1 respectively. Their behavior differs slightly from scatterwalk_copychunks() in that they automatically take care of flushing the dcache when needed, making them easier to use.

scatterwalk_copychunks() itself is left unchanged for now. It will be removed after its callers are updated to use other functions instead.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
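Given the description, the wrapper relationship is essentially the following; treat it as a sketch, since the exact prototypes may differ:

    static inline void scatterwalk_map_and_copy(void *buf,
                                                struct scatterlist *sg,
                                                unsigned int start,
                                                unsigned int nbytes, int out)
    {
        if (out)
            memcpy_to_sglist(sg, start, buf, nbytes);
        else
            memcpy_from_sglist(buf, sg, start, nbytes);
    }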
2025-03-02  crypto: scatterwalk - add new functions for iterating through data  (Eric Biggers)
Add scatterwalk_next() which consolidates scatterwalk_clamp() and scatterwalk_map(). Also add scatterwalk_done_src() and scatterwalk_done_dst() which consolidate scatterwalk_unmap(), scatterwalk_advance(), and scatterwalk_done() or scatterwalk_pagedone(). A later patch will remove scatterwalk_done() and scatterwalk_pagedone().

The new code eliminates the error-prone 'more' parameter. Advancing to the next sg entry now only happens just-in-time in scatterwalk_next().

The new code also pairs the dcache flush more closely with the actual write, similar to memcpy_to_page(). Previously it was paired with advancing to the next page. This is currently causing bugs where the dcache flush is incorrectly being skipped, usually due to scatterwalk_copychunks() being called without a following scatterwalk_done(). The dcache flush may have been placed where it was in order to not call flush_dcache_page() redundantly when visiting a page more than once. However, that case is rare in practice, and most architectures either do not implement flush_dcache_page() anyway or implement it lazily where it just clears a page flag.

Another limitation of the old code was that by the time the flush happened, there was no way to tell if more than one page needed to be flushed. That has been sufficient because the code goes page by page, but I would like to optimize that on !HIGHMEM platforms. The new code makes this possible, and a later patch will implement this optimization.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
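Put together, the consolidated iteration pattern reads roughly like this sketch; process() is a hypothetical per-segment consumer:

    struct scatter_walk walk;

    scatterwalk_start(&walk, sg);
    while (nbytes) {
        unsigned int n = scatterwalk_next(&walk, nbytes);

        process(walk.addr, n);           /* hypothetical work */
        scatterwalk_done_src(&walk, n);  /* unmap + advance; no 'more' */
        nbytes -= n;
    }

For a destination walk, scatterwalk_done_dst() would be used instead, so the dcache flush is paired with the actual write.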
2025-03-02  crypto: scatterwalk - add new functions for skipping data  (Eric Biggers)
Add scatterwalk_skip() to skip the given number of bytes in a scatter_walk. Previously support for skipping was provided through scatterwalk_copychunks(..., 2) followed by scatterwalk_done(), which was confusing and less efficient.

Also add scatterwalk_start_at_pos() which starts a scatter_walk at the given position, equivalent to scatterwalk_start() + scatterwalk_skip(). This addresses another common need in a more streamlined way.

Later patches will convert various users to use these functions.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
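Per the stated equivalence, a sketch (the AEAD use at the end is illustrative, not one named by the commit):

    static inline void scatterwalk_start_at_pos(struct scatter_walk *walk,
                                                struct scatterlist *sg,
                                                unsigned int pos)
    {
        scatterwalk_start(walk, sg);
        scatterwalk_skip(walk, pos);
    }

    /* e.g. start walking an AEAD destination after the AD: */
    scatterwalk_start_at_pos(&walk, req->dst, req->assoclen);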
2025-03-02  crypto: scatterwalk - move to next sg entry just in time  (Eric Biggers)
The scatterwalk_* functions are designed to advance to the next sg entry only when there is more data from the request to process. Compared to the alternative of advancing after each step if !sg_is_last(sg), this has the advantage that it doesn't cause problems if users accidentally don't terminate their scatterlist with the end marker (which is an easy mistake to make, and there are examples of this).

Currently, the advance to the next sg entry happens in scatterwalk_done(), which is called after each "step" of the walk. It requires the caller to pass in a boolean 'more' that indicates whether there is more data. This works when the caller immediately knows whether there is more data, though it adds some complexity. However in the case of scatterwalk_copychunks() it's not immediately known whether there is more data, so the call to scatterwalk_done() has to happen higher up the stack. This is error-prone, and indeed the needed call to scatterwalk_done() is not always made, e.g. scatterwalk_copychunks() is sometimes called multiple times in a row. This causes a zero-length step to get added in some cases, which is unexpected and seems to work only by accident.

This patch begins the switch to a less error-prone approach where the advance to the next sg entry happens just in time instead. For now, that means just doing the advance in scatterwalk_clamp() if it's needed there. Initially this is redundant, but it's needed to keep the tree in a working state as later patches change things to the final state. Later patches will similarly move the dcache flushing logic out of scatterwalk_done() and then remove scatterwalk_done() entirely.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
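The just-in-time advance amounts to a check at the top of scatterwalk_clamp(); a sketch, under the assumption that scatterwalk_pagelen() computes the bytes left in the current page:

    static inline unsigned int scatterwalk_clamp(struct scatter_walk *walk,
                                                 unsigned int nbytes)
    {
        /* Advance only now, when more data is actually wanted and
         * the current sg entry is exhausted: */
        if (walk->offset >= walk->sg->offset + walk->sg->length)
            scatterwalk_start(walk, sg_next(walk->sg));
        return min(nbytes, scatterwalk_pagelen(walk));
    }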
2022-12-30  crypto: scatterwalk - use kmap_local() not kmap_atomic()  (Ard Biesheuvel)
kmap_atomic() is used to create short-lived mappings of pages that may not be accessible via the kernel direct map. This is only needed on 32-bit architectures that implement CONFIG_HIGHMEM, but it can be used on other architectures too, where the returned mapping is simply the kernel direct address of the page.

However, kmap_atomic() does not support migration on CONFIG_HIGHMEM configurations, due to the use of per-CPU kmap slots, and so it disables preemption on all architectures, not just the 32-bit ones. This implies that all scatterwalk based crypto routines essentially execute with preemption disabled all the time, which is less than ideal.

So let's switch scatterwalk_map/_unmap and the shash/ahash routines to kmap_local() instead, which serves a similar purpose, but without the resulting impact on preemption on architectures that have no need for CONFIG_HIGHMEM.
Cc: Eric Biggers <ebiggers@kernel.org>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: "Elliott, Robert (Servers)" <elliott@hpe.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
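The switch itself is mechanical; a sketch of the map helper after the change (the shape is an assumption, not the exact patch):

    static inline void *scatterwalk_map(struct scatter_walk *walk)
    {
        /* kmap_local_page() keeps the task preemptible on !HIGHMEM,
         * unlike kmap_atomic(), which always disabled preemption. */
        return kmap_local_page(scatterwalk_page(walk)) +
               offset_in_page(walk->offset);
    }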
2022-10-21  crypto: scatterwalk - remove duplicate function declarations  (Tianjia Zhang)
scatterwalk_map() is already defined as an inline function in this header file, so delete the duplicate declaration that an earlier modification left behind at the same location.
Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-09-30  crypto: scatterwalk - Remove unused inline function scatterwalk_aligned()  (Gaosheng Cui)
scatterwalk_aligned() is no longer used: its last users went away with the removal of blkcipher and ablkcipher support in commit d63007eb954e ("crypto: ablkcipher - remove deprecated and unused ablkcipher support"). Remove it.
Signed-off-by: Gaosheng Cui <cuigaosheng1@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-12-17  crypto: api - Replace kernel.h with the necessary inclusions  (Andy Shevchenko)
When kernel.h is used in headers it adds a lot to dependency hell, especially when circular dependencies are involved. Replace the kernel.h inclusion with the list of headers that are really being used.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-06-28  crypto: scatterwalk - Remove obsolete PageSlab check  (Herbert Xu)
As it is now legal to call flush_dcache_page on slab pages, we no longer need to do the check in the Crypto API.
Reported-by: Ira Weiny <ira.weiny@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-05-30  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 152  (Thomas Gleixner)
Based on 1 normalized pattern(s): "this program is free software you can redistribute it and or modify it under the terms of the gnu general public license as published by the free software foundation either version 2 of the license or at your option any later version", extracted by the scancode license scanner, the SPDX license identifier GPL-2.0-or-later has been chosen to replace the boilerplate/reference in 3029 file(s).
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Allison Randal <allison@lohutok.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190527070032.746973796@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-08-03  crypto: scatterwalk - remove scatterwalk_samebuf()  (Eric Biggers)
scatterwalk_samebuf() is never used. Remove it.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-08-03  crypto: scatterwalk - remove 'chain' argument from scatterwalk_crypto_chain()  (Eric Biggers)
All callers pass chain=0 to scatterwalk_crypto_chain(). Remove this unneeded parameter.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
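The call-site conversion is the trivial drop of the constant argument; a sketch:

    /* Before: */
    scatterwalk_crypto_chain(head, sg, 0, num);
    /* After: */
    scatterwalk_crypto_chain(head, sg, num);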
2016-07-18  crypto: scatterwalk - Inline start/map/done  (Herbert Xu)
This patch inlines the functions scatterwalk_start, scatterwalk_map and scatterwalk_done as they're all tiny and mostly used by the block cipher walker.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-07-18  crypto: scatterwalk - Remove scatterwalk_bytes_sglen  (Herbert Xu)
This patch removes the now unused scatterwalk_bytes_sglen. Anyone using this out-of-tree should switch over to sg_nents_for_len.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2015-08-17  crypto: replace scatterwalk_sg_chain with sg_chain  (Dan Williams)
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
[hch: split from a larger patch by Dan]
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Jens Axboe <axboe@fb.com>
2015-05-22  crypto: scatterwalk - Add scatterwalk_ffwd helper  (Herbert Xu)
This patch adds the scatterwalk_ffwd helper which can create an SG list that starts in the middle of an existing SG list. The new list may either be part of the existing list or be a chain that latches onto part of the existing list.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
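A usage sketch under the stated semantics; the caller supplies two spare entries so the result can chain back into the source list, and the AEAD framing is illustrative:

    struct scatterlist tmp[2];
    struct scatterlist *sg;

    /* Get an SG list starting assoclen bytes into req->src, e.g. to
     * point a cipher at the payload after the associated data: */
    sg = scatterwalk_ffwd(tmp, req->src, req->assoclen);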
2015-01-26  crypto: replace scatterwalk_sg_next with sg_next  (Cristian Stoica)
Modify crypto drivers to use the generic SG helper, since the two are equivalent and the crypto-specific one is redundant.

See also: 468577abe37ff7b453a9ac613e0ea155349203ae, reverted in b2ab4a57b018aafbba35bff088218f5cc3d2142e.
Signed-off-by: Cristian Stoica <cristian.stoica@freescale.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2014-06-20  crypto: api - Move crypto_yield() to algapi.h  (Marek Vasut)
It makes no sense for crypto_yield() to be defined in scatterwalk.h; move it into algapi.h, as it is an internal function of the crypto API.
Signed-off-by: Marek Vasut <marex@denx.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2013-12-09  crypto: scatterwalk - Use sg_chain_ptr on chain entries  (Tom Lendacky)
Now that scatterwalk_sg_chain sets the chain pointer bit, the sg_page call in scatterwalk_sg_next hits a BUG_ON when CONFIG_DEBUG_SG is enabled. Use sg_chain_ptr instead of sg_page on a chain entry.
Cc: stable@vger.kernel.org
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2013-11-28  crypto: scatterwalk - Set the chain pointer indication bit  (Tom Lendacky)
The scatterwalk_crypto_chain function invokes the scatterwalk_sg_chain function to chain two scatterlists, but the chain pointer indication bit is not set. When the resulting scatterlist is used, for example, by sg_nents to count the number of scatterlist entries, a segfault occurs because sg_nents does not follow the chain pointer to the chained scatterlist.

Update scatterwalk_sg_chain to set the chain pointer indication bit as is done by the sg_chain function.
Cc: stable@vger.kernel.org
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
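In scatterlist terms, a chain entry stores a pointer to the next array in its page_link and flags it with the chain bit, so that walkers dereference it as a pointer rather than as a page. A hedged sketch of the essence of the fix (the bit value follows <linux/scatterlist.h>'s SG_CHAIN; the surrounding code is illustrative):

    /* Make the last entry of sg1 point at sg2, as sg_chain() does: */
    sg_set_page(&sg1[num - 1], (struct page *)sg2, 0, 0);
    sg1[num - 1].page_link |= 0x01;    /* the missing chain bit */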
2013-08-21  crypto: scatterwalk - Add support for calculating number of SG elements  (Joel Fernandes)
The crypto layer only passes nbytes to encrypt, but in the omap-aes driver we need to know the number of SG elements to pass to the dmaengine slave API. Add a function for this to the scatterwalk library.
Signed-off-by: Joel Fernandes <joelf@ti.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
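Judging by the 2016 removal entry above, the helper in question is scatterwalk_bytes_sglen; a sketch of the counting logic it needs (the name sg_count_for_len is made up for this sketch, and in-tree code today would use sg_nents_for_len() instead):

    /* Count how many sg entries cover num_bytes; -1 if the list is
     * too short. */
    static int sg_count_for_len(struct scatterlist *sg, int num_bytes)
    {
        int bytes = 0, n = 0;

        while (sg && bytes < num_bytes) {
            bytes += sg->length;
            n++;
            sg = sg_next(sg);
        }
        return bytes >= num_bytes ? n : -1;
    }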
2012-03-20  crypto: remove the second argument of k[un]map_atomic()  (Cong Wang)
Signed-off-by: Cong Wang <amwang@redhat.com>
2010-12-02  crypto: scatterwalk - Add scatterwalk_crypto_chain helper  (Steffen Klassert)
A lot of crypto algorithms implement their own chaining function. So add a generic one that can be used from all the algorithms that need scatterlist chaining.
Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
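As a sketch, the helper either chains sg onto head or marks head as the end of the list when there is nothing to append; the 'chain' parameter is the one later removed by the 2018 entry above, and the body is an assumption:

    static inline void scatterwalk_crypto_chain(struct scatterlist *head,
                                                struct scatterlist *sg,
                                                int chain, int num)
    {
        if (chain) {
            head->length += sg->length;
            sg = sg_next(sg);
        }

        if (sg)
            scatterwalk_sg_chain(head, num, sg);
        else
            sg_mark_end(head);
    }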
2008-05-01  [CRYPTO] api: Fix scatterwalk_sg_chain  (Herbert Xu)
When I backed out of using the generic sg chaining (as it isn't currently portable) and introduced scatterwalk_sg_chain/scatterwalk_sg_next, I left out the sg_is_last check in the latter. This causes it to potentially dereference beyond the end of the sg array. As most uses of scatterwalk_sg_next are bound by an overall length, this only affected the chaining code in authenc and eseqiv. Thanks to Patrick McHardy for identifying this problem.

This patch also clears the "last" bit on the head of the chained list as it's no longer last. This also went missing in scatterwalk_sg_chain and is present in sg_chain.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
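The repaired walker, in sketch form, stops at the end marker before stepping past the array (using sg_chain_ptr() as the 2013 entry above later does; the historical code read the pointer differently):

    static inline struct scatterlist *scatterwalk_sg_next(struct scatterlist *sg)
    {
        if (sg_is_last(sg))
            return NULL;    /* the fix: stop at the end marker */

        sg++;
        /* A zero-length entry is a chain pointer to another list. */
        return sg->length ? sg : sg_chain_ptr(sg);
    }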
2008-01-11  [CRYPTO] api: Include sched.h for cond_resched in scatterwalk.h  (Herbert Xu)
As Andrew Morton correctly points out, we need to explicitly include sched.h as we use the function cond_resched in crypto/scatterwalk.h.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11  [CRYPTO] scatterwalk: Restore custom sg chaining for now  (Herbert Xu)
Unfortunately the generic chaining hasn't been ported to all architectures yet, and notably not s390. So this patch restores the chaining that we've been using previously, which does work everywhere.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2008-01-11  [CRYPTO] scatterwalk: Move scatterwalk.h to linux/crypto  (Herbert Xu)
The scatterwalk infrastructure is used by algorithms, so it needs to move out of crypto for future users that may live in drivers/crypto or asm/*/crypto.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>