path: root/drivers/spi
author	Uwe Kleine-König <u.kleine-koenig@pengutronix.de>	2024-02-10 17:40:07 +0100
committer	Mark Brown <broonie@kernel.org>	2024-02-27 15:49:33 +0000
commit	d748b48eeba8e1a10c822109252b75ca30288ca0 (patch)
tree	a7c0d28692127bda82d410282a0d7b2e185c2f5b /drivers/spi
parent	786115655f4d9167bd855872bd55b0a37321806e (diff)
spi: ppc4xx: Fix fallout from rename in struct spi_bitbang
I failed to adapt this driver because it's not enabled in a powerpc
allmodconfig build and also wasn't hit by my grep expertise. Fix
accordingly.

Reported-by: kernel test robot <lkp@intel.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202402100815.XQXw9XCF-lkp@intel.com/
Fixes: 2259233110d9 ("spi: bitbang: Follow renaming of SPI "master" to "controller"")
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Link: https://msgid.link/r/20240210164006.208149-7-u.kleine-koenig@pengutronix.de
Signed-off-by: Mark Brown <broonie@kernel.org>
Diffstat (limited to 'drivers/spi')
-rw-r--r--	drivers/spi/spi-ppc4xx.c	14
1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/drivers/spi/spi-ppc4xx.c b/drivers/spi/spi-ppc4xx.c
index 03aab661be9d..b07bb5e5811e 100644
--- a/drivers/spi/spi-ppc4xx.c
+++ b/drivers/spi/spi-ppc4xx.c
@@ -362,22 +362,22 @@ static int spi_ppc4xx_of_probe(struct platform_device *op)
/* Setup the state for the bitbang driver */
bbp = &hw->bitbang;
- bbp->master = hw->host;
+ bbp->ctlr = hw->host;
bbp->setup_transfer = spi_ppc4xx_setupxfer;
bbp->txrx_bufs = spi_ppc4xx_txrx;
bbp->use_dma = 0;
- bbp->master->setup = spi_ppc4xx_setup;
- bbp->master->cleanup = spi_ppc4xx_cleanup;
- bbp->master->bits_per_word_mask = SPI_BPW_MASK(8);
- bbp->master->use_gpio_descriptors = true;
+ bbp->ctlr->setup = spi_ppc4xx_setup;
+ bbp->ctlr->cleanup = spi_ppc4xx_cleanup;
+ bbp->ctlr->bits_per_word_mask = SPI_BPW_MASK(8);
+ bbp->ctlr->use_gpio_descriptors = true;
/*
* The SPI core will count the number of GPIO descriptors to figure
* out the number of chip selects available on the platform.
*/
- bbp->master->num_chipselect = 0;
+ bbp->ctlr->num_chipselect = 0;
/* the spi->mode bits understood by this driver: */
- bbp->master->mode_bits =
+ bbp->ctlr->mode_bits =
SPI_CPHA | SPI_CPOL | SPI_CS_HIGH | SPI_LSB_FIRST;
/* Get the clock for the OPB */
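
For context, below is a minimal sketch of the post-rename wiring in a bitbang-based SPI driver. The struct spi_bitbang field name ctlr follows the upstream API as renamed by commit 2259233110d9 and as used in the diff above; the example_spi structure and example_bitbang_init() helper are hypothetical names, shown only to illustrate the pattern this fix restores in spi-ppc4xx.c:

#include <linux/spi/spi.h>
#include <linux/spi/spi_bitbang.h>

/* Hypothetical per-device state, mirroring the shape the driver uses. */
struct example_spi {
	struct spi_controller	*host;
	struct spi_bitbang	bitbang;
};

static void example_bitbang_init(struct example_spi *hw)
{
	struct spi_bitbang *bbp = &hw->bitbang;

	/* The spi_bitbang member formerly named "master" is now "ctlr". */
	bbp->ctlr = hw->host;
	bbp->use_dma = 0;
	bbp->ctlr->bits_per_word_mask = SPI_BPW_MASK(8);
	bbp->ctlr->mode_bits =
		SPI_CPHA | SPI_CPOL | SPI_CS_HIGH | SPI_LSB_FIRST;
}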