path: root/lib/mpi/mpicoder.c
2017-08-22  lib/mpi: kunmap after finishing accessing buffer  (Stephan Mueller)
Using sg_miter_start and sg_miter_next, the buffer of an SG is kmap'ed to *buff. The current code calls sg_miter_stop (and thus kunmap) on the SG entry before the last access of *buff. The patch moves the sg_miter_stop call after the last access to *buff to ensure that the memory pointed to by *buff is still mapped. Fixes: 4816c9406430 ("lib/mpi: Fix SG miter leak") Cc: <stable@vger.kernel.org> Signed-off-by: Stephan Mueller <smueller@chronox.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
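For context, a minimal sketch of the sg_miter pattern involved (names are illustrative, not the exact mpicoder.c code): the mapping set up by sg_miter_next() is only valid until sg_miter_stop(), so the stop call has to come after the last access to the mapped buffer.

  struct sg_mapping_iter miter;
  u8 *buff;

  sg_miter_start(&miter, sgl, sg_nents(sgl), SG_MITER_ATOMIC | SG_MITER_FROM_SG);
  while (sg_miter_next(&miter)) {
          buff = miter.addr;   /* kmap'ed page; valid only until the next sg_miter_next()/sg_miter_stop() */
          /* ... read from buff ... */
  }
  sg_miter_stop(&miter);       /* kunmap happens here, after the last access to buff */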
2016-07-29  lib/mpi: Fix SG miter leak  (Herbert Xu)
In mpi_read_raw_from_sgl we may leak the SG miter resources after reading the leading zeroes. This patch fixes this by stopping the iteration once the leading zeroes have been read.
Fixes: 127827b9c295 ("lib/mpi: Do not do sg_virt")
Reported-by: Nicolai Stange <nicstange@gmail.com>
Tested-by: Nicolai Stange <nicstange@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-07-01  lib/mpi: Do not do sg_virt  (Herbert Xu)
Currently the mpi SG helpers use sg_virt which is completely broken. It happens to work with normal kernel memory but will fail with anything that is not linearly mapped. This patch fixes this by using the SG iterator helpers. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-07-01  crypto: rsa - Generate fixed-length output  (Herbert Xu)
Every implementation of RSA that we have naturally generates output with leading zeroes. The one and only user of RSA, pkcs1pad, wants to have those leading zeroes in place; in fact, because they are currently absent, it has to write those zeroes itself. So we shouldn't be stripping leading zeroes in the first place. In fact this patch makes rsa-generic produce output with fixed length so that pkcs1pad does not need to do any extra work. This patch also changes DH to use the new interface.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-05-31  lib/mpi: refactor mpi_read_from_buffer() in terms of mpi_read_raw_data()  (Nicolai Stange)
mpi_read_from_buffer() and mpi_read_raw_data() do basically the same thing except that the former extracts the number of payload bits from the first two bytes of the input buffer. Besides that, the data copying logic is exactly the same. Replace the open coded buffer to MPI instance conversion by a call to mpi_read_raw_data(). Signed-off-by: Nicolai Stange <nicstange@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
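A rough sketch of the refactored shape described above (error handling elided; the exact checks are the subject of the neighbouring patches in this series):

  MPI mpi_read_from_buffer(const void *xbuffer, unsigned *ret_nread)
  {
          const uint8_t *buffer = xbuffer;
          unsigned int nbits, nbytes;
          MPI val;

          /* the first two bytes encode the number of payload bits, big endian */
          nbits = buffer[0] << 8 | buffer[1];
          nbytes = DIV_ROUND_UP(nbits, 8);

          /* the data copying itself is delegated to mpi_read_raw_data() */
          val = mpi_read_raw_data(buffer + 2, nbytes);
          if (val)
                  *ret_nread = nbytes + 2;
          return val;
  }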
2016-05-31  lib/mpi: mpi_read_from_buffer(): sanitize short buffer printk  (Nicolai Stange)
The first two bytes of the input buffer encode its expected length and mpi_read_from_buffer() prints a console message if the given buffer is too short. However, there are some oddities with how this message is printed:
- It is printed at the default loglevel. This is different from the one used in the case that the first two bytes' value is unsupportedly large, i.e. KERN_INFO.
- The format specifier '%d' is used for unsigned ints.
- It prints the values of nread and *ret_nread. This is redundant since the former is always the latter + 1.
Clean this up as follows:
- Use pr_info() rather than printk() with no loglevel.
- Use the format specifier '%u' in place of '%d'.
- Do not print the redundant 'nread' but the more helpful 'nbytes' value.
Signed-off-by: Nicolai Stange <nicstange@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-05-31  lib/mpi: mpi_read_from_buffer(): return -EINVAL upon too short buffer  (Nicolai Stange)
Currently, if the input buffer is shorter than the expected length as indicated by its first two bytes, an MPI instance of this expected length will be allocated and filled with as much data as is available. The rest will remain uninitialized. Instead of leaving this condition undetected, an error code should be reported to the caller. Since this situation indicates that the input buffer's first two bytes, encoding the number of expected bits, are garbled, -EINVAL is appropriate here. If the input buffer is shorter than indicated by its first two bytes, make mpi_read_from_buffer() return -EINVAL. Get rid of the 'nread' variable: with the new semantics, the total number of bytes read from the input buffer is known in advance. Signed-off-by: Nicolai Stange <nicstange@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-05-31  lib/mpi: mpi_read_from_buffer(): return error code  (Nicolai Stange)
mpi_read_from_buffer() reads an MPI from a buffer into a newly allocated MPI instance. It expects the buffer's leading two bytes to contain the number of bits, followed by the actual payload. On failure, it returns NULL and updates the in/out argument ret_nread somewhat inconsistently:
- If the given buffer is too short to contain the leading two bytes encoding the number of bits or their value is unsupported, then ret_nread will be cleared.
- If the allocation of the resulting MPI instance fails, ret_nread is left as is.
The only user of mpi_read_from_buffer(), digsig_verify_rsa(), simply checks for a return value of NULL and returns -ENOMEM if that happens. While this is all of cosmetic nature only, there is another error condition which currently isn't detectable by the caller of mpi_read_from_buffer(): if the given buffer is too small to hold the number of bits as encoded in its first two bytes, the return value will be non-NULL and *ret_nread > 0.
In preparation of communicating this condition to the caller, let mpi_read_from_buffer() return error values by means of the ERR_PTR() mechanism. Make the sole caller of mpi_read_from_buffer(), digsig_verify_rsa(), check the return value for IS_ERR() rather than == NULL. If IS_ERR() is true, return the associated error value rather than the fixed -ENOMEM.
Signed-off-by: Nicolai Stange <nicstange@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
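On the caller side, the new convention looks roughly like this (a sketch; variable names are illustrative, not the exact digsig code):

  mpi = mpi_read_from_buffer(sig_data, &nread);
  if (IS_ERR(mpi))
          return PTR_ERR(mpi);    /* propagate e.g. -ENOMEM instead of a fixed error */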
2016-05-31  lib/mpi: mpi_read_raw_data(): fix nbits calculation  (Nicolai Stange)
The number of bits, nbits, is calculated in mpi_read_raw_data() as follows:

  nbits = nbytes * 8;

Afterwards, the number of leading zero bits of the first byte gets subtracted:

  nbits -= count_leading_zeros(buffer[0]);

However, count_leading_zeros() takes an unsigned long and thus, the u8 gets promoted to an unsigned long. Thus, the above doesn't subtract the number of leading zeros in the most significant nonzero input byte from nbits, but the number of leading zeros of the most significant nonzero input byte promoted to unsigned long, i.e. BITS_PER_LONG - 8 too many.
Fix this by subtracting

  count_leading_zeros(...) - (BITS_PER_LONG - 8)

from nbits only.
Fixes: e1045992949 ("MPILIB: Provide a function to read raw data into an MPI")
Signed-off-by: Nicolai Stange <nicstange@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
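Put together, the corrected calculation reads roughly as follows (a sketch; buffer[0] is the most significant nonzero input byte here):

  nbits = nbytes * 8;
  if (nbytes > 0)
          /* clz of a u8 promoted to unsigned long overcounts by BITS_PER_LONG - 8 */
          nbits -= count_leading_zeros(buffer[0]) - (BITS_PER_LONG - 8);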
2016-05-31  lib/mpi: mpi_read_raw_data(): purge redundant clearing of nbits  (Nicolai Stange)
In mpi_read_raw_data(), unsigned nbits is calculated as follows:

  nbits = nbytes * 8;

and redundantly cleared later on if nbytes == 0:

  if (nbytes > 0)
          ...
  else
          nbits = 0;

Purge this redundant clearing for the sake of clarity.
Signed-off-by: Nicolai Stange <nicstange@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-05-31  lib/mpi: purge mpi_set_buffer()  (Nicolai Stange)
mpi_set_buffer() has no in-tree users and similar functionality is provided by mpi_read_raw_data(). Remove mpi_set_buffer(). Signed-off-by: Nicolai Stange <nicstange@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-04-05  lib/mpi: mpi_read_raw_from_sgl(): fix out-of-bounds buffer access  (Nicolai Stange)
Within the copying loop in mpi_read_raw_from_sgl(), the last input SGE's byte count gets artificially extended as follows:

  if (sg_is_last(sg) && (len % BYTES_PER_MPI_LIMB))
          len += BYTES_PER_MPI_LIMB - (len % BYTES_PER_MPI_LIMB);

Within the following byte copying loop, this causes reads beyond that SGE's allocated buffer:

  BUG: KASAN: slab-out-of-bounds in mpi_read_raw_from_sgl+0x331/0x650 at addr ffff8801e168d4d8
  Read of size 1 by task systemd-udevd/721
  [...]
  Call Trace:
   [<ffffffff818c4d35>] dump_stack+0xbc/0x117
   [<ffffffff818c4c79>] ? _atomic_dec_and_lock+0x169/0x169
   [<ffffffff814af5d1>] ? print_section+0x61/0xb0
   [<ffffffff814b1109>] print_trailer+0x179/0x2c0
   [<ffffffff814bc524>] object_err+0x34/0x40
   [<ffffffff814bfdc7>] kasan_report_error+0x307/0x8c0
   [<ffffffff814bf315>] ? kasan_unpoison_shadow+0x35/0x50
   [<ffffffff814bf38e>] ? kasan_kmalloc+0x5e/0x70
   [<ffffffff814c0ad1>] kasan_report+0x71/0xa0
   [<ffffffff81938171>] ? mpi_read_raw_from_sgl+0x331/0x650
   [<ffffffff814bf1a6>] __asan_load1+0x46/0x50
   [<ffffffff81938171>] mpi_read_raw_from_sgl+0x331/0x650
   [<ffffffff817f41b6>] rsa_verify+0x106/0x260
   [<ffffffff817f40b0>] ? rsa_set_pub_key+0xf0/0xf0
   [<ffffffff818edc79>] ? sg_init_table+0x29/0x50
   [<ffffffff817f4d22>] ? pkcs1pad_sg_set_buf+0xb2/0x2e0
   [<ffffffff817f5b74>] pkcs1pad_verify+0x1f4/0x2b0
   [<ffffffff81831057>] public_key_verify_signature+0x3a7/0x5e0
   [<ffffffff81830cb0>] ? public_key_describe+0x80/0x80
   [<ffffffff817830f0>] ? keyring_search_aux+0x150/0x150
   [<ffffffff818334a4>] ? x509_request_asymmetric_key+0x114/0x370
   [<ffffffff814b83f0>] ? kfree+0x220/0x370
   [<ffffffff818312c2>] public_key_verify_signature_2+0x32/0x50
   [<ffffffff81830b5c>] verify_signature+0x7c/0xb0
   [<ffffffff81835d0c>] pkcs7_validate_trust+0x42c/0x5f0
   [<ffffffff813c391a>] system_verify_data+0xca/0x170
   [<ffffffff813c3850>] ? top_trace_array+0x9b/0x9b
   [<ffffffff81510b29>] ? __vfs_read+0x279/0x3d0
   [<ffffffff8129372f>] mod_verify_sig+0x1ff/0x290
   [...]

The exact purpose of the len extension isn't clear to me, but due to its form, I suspect that it's a leftover somehow accounting for leading zero bytes within the most significant output limb.
Note however that without that len adjustment, the total number of bytes ever processed by the inner loop equals nbytes and thus, the last output limb gets written at this point. Thus the net effect of the len adjustment cited above is just to keep the inner loop running for some more iterations, namely < BYTES_PER_MPI_LIMB ones, reading some extra bytes from beyond the last SGE's buffer and discarding them afterwards.
Fix this issue by purging the extension of len beyond the last input SGE's buffer length.
Fixes: 2d4d1eea540b ("lib/mpi: Add mpi sgl helpers")
Signed-off-by: Nicolai Stange <nicstange@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-04-05  lib/mpi: mpi_read_raw_from_sgl(): sanitize meaning of indices  (Nicolai Stange)
Within the byte reading loop in mpi_read_raw_sgl(), there are two housekeeping indices used, z and x. At all times, the index z represents the number of output bytes covered by the input SGEs for which processing has completed so far. This includes any leading zero bytes within the most significant limb. The index x changes its meaning after the first outer loop's iteration though: while processing the first input SGE, it represents "number of leading zero bytes in most significant output limb" + "current position within current SGE" For the remaining SGEs OTOH, x corresponds just to "current position within current SGE" After all, it is only the sum of z and x that has any meaning for the output buffer and thus, the "number of leading zero bytes in most significant output limb" part can be moved away from x into z from the beginning, opening up the opportunity for cleaner code. Before the outer loop iterating over the SGEs, don't initialize z with zero, but with the number of leading zero bytes in the most significant output limb. For the inner loop iterating over a single SGE's bytes, get rid of the buf_shift offset to x' bounds and let x run from zero to sg->length - 1. Signed-off-by: Nicolai Stange <nicstange@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-04-05  lib/mpi: mpi_read_raw_from_sgl(): fix nbits calculation  (Nicolai Stange)
The number of bits, nbits, is calculated in mpi_read_raw_from_sgl() as follows:

  nbits = nbytes * 8;

Afterwards, the number of leading zero bits of the first byte gets subtracted:

  nbits -= count_leading_zeros(*(u8 *)(sg_virt(sgl) + lzeros));

However, count_leading_zeros() takes an unsigned long and thus, the u8 gets promoted to an unsigned long. Thus, the above doesn't subtract the number of leading zeros in the most significant nonzero input byte from nbits, but the number of leading zeros of the most significant nonzero input byte promoted to unsigned long, i.e. BITS_PER_LONG - 8 too many.
Fix this by subtracting

  count_leading_zeros(...) - (BITS_PER_LONG - 8)

from nbits only.
Fixes: 2d4d1eea540b ("lib/mpi: Add mpi sgl helpers")
Signed-off-by: Nicolai Stange <nicstange@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-04-05  lib/mpi: mpi_read_raw_from_sgl(): purge redundant clearing of nbits  (Nicolai Stange)
In mpi_read_raw_from_sgl(), unsigned nbits is calculated as follows:

  nbits = nbytes * 8;

and redundantly cleared later on if nbytes == 0:

  if (nbytes > 0)
          ...
  else
          nbits = 0;

Purge this redundant clearing for the sake of clarity.
Signed-off-by: Nicolai Stange <nicstange@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-04-05  lib/mpi: mpi_read_raw_from_sgl(): don't include leading zero SGEs in nbytes  (Nicolai Stange)
At the very beginning of mpi_read_raw_from_sgl(), the leading zeros of the input scatterlist are counted:

  lzeros = 0;
  for_each_sg(sgl, sg, ents, i) {
          ...
          if (/* sg contains nonzero bytes */)
                  break;

          /* sg contains nothing but zeros here */
          ents--;
          lzeros = 0;
  }

Later on, the total number of trailing nonzero bytes is calculated by subtracting the number of leading zero bytes from the total number of input bytes:

  nbytes -= lzeros;

However, since lzeros gets reset to zero for each completely zero leading sg in the loop above, it doesn't include those. Besides wasting resources by allocating a too large output buffer, this mistake propagates into the calculation of x, the number of leading zeros within the most significant output limb:

  x = BYTES_PER_MPI_LIMB - nbytes % BYTES_PER_MPI_LIMB;

What's more, the low order bytes of the output, equal in number to the extra bytes in nbytes, are left uninitialized.
Fix this by adjusting nbytes for each completely zero leading scatterlist entry.
Fixes: 2d4d1eea540b ("lib/mpi: Add mpi sgl helpers")
Signed-off-by: Nicolai Stange <nicstange@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-04-05  lib/mpi: mpi_read_raw_from_sgl(): replace len argument by nbytes  (Nicolai Stange)
Currently, the nbytes local variable is calculated from the len argument as follows:

  ... mpi_read_raw_from_sgl(..., unsigned int len)
  {
          unsigned nbytes;
          ...
          if (!ents)
                  nbytes = 0;
          else
                  nbytes = len - lzeros;
          ...
  }

Given that nbytes is derived from len in a trivial way and that the len argument is shadowed by a local len variable in several loops, this is just confusing. Rename the len argument to nbytes and get rid of the nbytes local variable. Do the nbytes calculation in place.
Signed-off-by: Nicolai Stange <nicstange@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-04-05  lib/mpi: mpi_read_buffer(): fix buffer overflow  (Nicolai Stange)
Currently, mpi_read_buffer() writes full limbs to the output buffer and moves memory around to purge leading zero limbs afterwards. However, with commit 9cbe21d8f89d ("lib/mpi: only require buffers as big as needed for the integer") the caller is only required to provide a buffer large enough to hold the result without the leading zeros. This might result in a buffer overflow for small MP numbers with leading zeros.
Fix this by copying the result to its final destination within the output buffer and not copying the leading zeros at all.
Fixes: 9cbe21d8f89d ("lib/mpi: only require buffers as big as needed for the integer")
Signed-off-by: Nicolai Stange <nicstange@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-04-05  lib/mpi: mpi_read_buffer(): replace open coded endian conversion  (Nicolai Stange)
Currently, the endian conversion from CPU order to BE is open coded in mpi_read_buffer(). Replace this by the centrally provided cpu_to_be*() macros. Copy from the temporary storage on stack to the destination buffer by means of memcpy(). Signed-off-by: Nicolai Stange <nicstange@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
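The resulting pattern is roughly the following (a sketch modelled on the commit description; the names and the exact memcpy bounds are illustrative):

  mpi_limb_t alimb;

  #if BYTES_PER_MPI_LIMB == 4
          alimb = cpu_to_be32(a->d[i]);
  #elif BYTES_PER_MPI_LIMB == 8
          alimb = cpu_to_be64(a->d[i]);
  #else
  #error "Please implement for this limb size"
  #endif
          /* the limb is now big endian on the stack; copy it out as-is */
          memcpy(p, &alimb, BYTES_PER_MPI_LIMB);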
2016-04-05  lib/mpi: mpi_read_buffer(): optimize skipping of leading zero limbs  (Nicolai Stange)
Currently, if the number of leading zeros is greater than fits into a complete limb, mpi_read_buffer() skips them by iterating over them limb-wise. Instead of skipping the high order zero limbs within the copying loop, adjust the copying loop's bounds.
Signed-off-by: Nicolai Stange <nicstange@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-04-05  lib/mpi: mpi_write_sgl(): replace open coded endian conversion  (Nicolai Stange)
Currently, the endian conversion from CPU order to BE is open coded in mpi_write_sgl(). Replace this by the centrally provided cpu_to_be*() macros. Signed-off-by: Nicolai Stange <nicstange@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-04-05  lib/mpi: mpi_write_sgl(): fix out-of-bounds stack access  (Nicolai Stange)
Within the copying loop in mpi_write_sgl(), we have

  if (lzeros) {
          mpi_limb_t *limb1 = (void *)p - sizeof(alimb);
          mpi_limb_t *limb2 = (void *)p - sizeof(alimb) + lzeros;
          *limb1 = *limb2;
          ...
  }

where p points past the end of alimb2 which lives on the stack and contains the current limb in BE order.
The purpose of the above is to shift the non-zero bytes of alimb2 to its beginning in memory, i.e. to skip its leading zero bytes.
However, limb2 points somewhere into the middle of alimb2 and thus, reading *limb2 pulls in lzero bytes from somewhere.
Indeed, KASAN splats:

  BUG: KASAN: stack-out-of-bounds in mpi_write_to_sgl+0x4e3/0x6f0 at addr ffff8800cb04f601
  Read of size 8 by task systemd-udevd/391
  page:ffffea00032c13c0 count:0 mapcount:0 mapping:          (null) index:0x0
  flags: 0x3fff8000000000()
  page dumped because: kasan: bad access detected
  CPU: 3 PID: 391 Comm: systemd-udevd Tainted: G    B   L  4.5.0-next-20160316+ #12
  [...]
  Call Trace:
   [<ffffffff8194889e>] dump_stack+0xdc/0x15e
   [<ffffffff819487c2>] ? _atomic_dec_and_lock+0xa2/0xa2
   [<ffffffff814892b5>] ? __dump_page+0x185/0x330
   [<ffffffff8150ffd6>] kasan_report_error+0x5e6/0x8b0
   [<ffffffff814724cd>] ? kzfree+0x2d/0x40
   [<ffffffff819c5bce>] ? mpi_free_limb_space+0xe/0x20
   [<ffffffff819c469e>] ? mpi_powm+0x37e/0x16f0
   [<ffffffff815109f1>] kasan_report+0x71/0xa0
   [<ffffffff819c0353>] ? mpi_write_to_sgl+0x4e3/0x6f0
   [<ffffffff8150ed34>] __asan_load8+0x64/0x70
   [<ffffffff819c0353>] mpi_write_to_sgl+0x4e3/0x6f0
   [<ffffffff819bfe70>] ? mpi_set_buffer+0x620/0x620
   [<ffffffff819c0e6f>] ? mpi_cmp+0xbf/0x180
   [<ffffffff8186e282>] rsa_verify+0x202/0x260

What's more, since lzeros can be anything from 1 to sizeof(mpi_limb_t)-1, the above will cause unaligned accesses which is bad on non-x86 archs.
Fix the issue, by preparing the starting point p for the upcoming copy operation instead of shifting the source memory, i.e. alimb2.
Fixes: 2d4d1eea540b ("lib/mpi: Add mpi sgl helpers")
Signed-off-by: Nicolai Stange <nicstange@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-04-05  lib/mpi: mpi_write_sgl(): purge redundant pointer arithmetic  (Nicolai Stange)
Within the copying loop in mpi_write_sgl(), we have

  if (lzeros) {
          ...
          p -= lzeros;
          y = lzeros;
  }
  p = p - (sizeof(alimb) - y);

If lzeros == 0, then y == 0, too. Thus, lzeros gets subtracted and added back again to p.
Purge this redundancy.
Signed-off-by: Nicolai Stange <nicstange@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-04-05  lib/mpi: mpi_write_sgl(): fix style issue with lzero decrement  (Nicolai Stange)
Within the copying loop in mpi_write_sgl(), we have

  if (lzeros > 0) {
          ...
          lzeros -= sizeof(alimb);
  }

However, at this point, lzeros < sizeof(alimb) holds. Make this fact explicit by rewriting the above to

  if (lzeros) {
          ...
          lzeros = 0;
  }

Signed-off-by: Nicolai Stange <nicstange@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-04-05  lib/mpi: mpi_write_sgl(): fix skipping of leading zero limbs  (Nicolai Stange)
Currently, if the number of leading zeros is greater than fits into a complete limb, mpi_write_sgl() skips them by iterating over them limb-wise.
However, it fails to adjust its internal leading zeros tracking variable, lzeros, accordingly: it does a

  p -= sizeof(alimb);
  continue;

which should really have been a

  lzeros -= sizeof(alimb);
  continue;

Since lzeros never decreases if its initial value >= sizeof(alimb), nothing gets copied by mpi_write_sgl() in that case.
Instead of skipping the high order zero limbs within the loop as shown above, fix the issue by adjusting the copying loop's bounds.
Fixes: 2d4d1eea540b ("lib/mpi: Add mpi sgl helpers")
Signed-off-by: Nicolai Stange <nicstange@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2016-02-28  lib/mpi: Endianness fix  (Michal Marek)
The limbs are integers in the host endianness, so we can't simply iterate over the individual bytes. The current code happens to work on little-endian, because the order of the limbs in the MPI array is the same as the order of the bytes in each limb, but it breaks on big-endian. Fixes: 0f74fbf77d45 ("MPI: Fix mpi_read_buffer") Signed-off-by: Michal Marek <mmarek@suse.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
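One endianness-independent way to emit a limb's bytes most-significant-byte first is to shift the limb value instead of addressing its bytes in memory (an illustrative sketch, not necessarily the exact fix that was applied):

  int j;

  /* works on both little- and big-endian hosts: byte order comes from the shifts */
  for (j = BYTES_PER_MPI_LIMB - 1; j >= 0; j--)
          *p++ = (alimb >> (j * 8)) & 0xff;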
2015-11-17  lib/mpi: only require buffers as big as needed for the integer  (Andrzej Zaborowski)
Since mpi_write_to_sgl and mpi_read_buffer explicitly left-align the integers being written, it makes no sense to require a buffer big enough for the number plus the leading zero bytes which are not written. The error returned also doesn't convey any information. So instead require only the size needed and return -EOVERFLOW to signal when the buffer is too short.
Signed-off-by: Andrew Zaborowski <andrew.zaborowski@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
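The contract after this change, roughly (a sketch; mpi_get_size() gives the byte count of the integer without leading zeros, per the 2015-06-16 entry below):

  unsigned int need = mpi_get_size(a);    /* significant bytes only */

  if (buf_len < need)
          return -EOVERFLOW;              /* caller's buffer is too small */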
2015-11-06  Merge tag 'asm-generic-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/arnd/asm-generic  (Linus Torvalds)
Pull asm-generic cleanups from Arnd Bergmann:
 "The asm-generic changes for 4.4 are mostly a series from Christoph Hellwig to clean up various abuses of headers in there. The patch to rename the io-64-nonatomic-*.h headers caused some conflicts with new users, so I added a workaround that we can remove in the next merge window.
  The only other patch is a warning fix from Marek Vasut"

* tag 'asm-generic-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/arnd/asm-generic:
  asm-generic: temporarily add back asm-generic/io-64-nonatomic*.h
  asm-generic: cmpxchg: avoid warnings from macro-ized cmpxchg() implementations
  gpio-mxc: stop including <asm-generic/bug>
  n_tracesink: stop including <asm-generic/bug>
  n_tracerouter: stop including <asm-generic/bug>
  mlx5: stop including <asm-generic/kmap_types.h>
  hifn_795x: stop including <asm-generic/kmap_types.h>
  drbd: stop including <asm-generic/kmap_types.h>
  move count_zeroes.h out of asm-generic
  move io-64-nonatomic*.h out of asm-generic
2015-10-20  lib/mpi: fix off by one in mpi_read_raw_from_sgl  (Stephan Mueller)
The patch fixes the analysis of the input data which contains an off by one. The issue is visible when the SGL contains one byte per SG entry. The code for checking for zero bytes does not operate on the data byte. Signed-off-by: Stephan Mueller <smueller@chronox.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2015-10-15  move count_zeroes.h out of asm-generic  (Christoph Hellwig)
This header contains a few helpers that are currently used only by the mpi implementation and are not a default implementation of architecture code.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2015-10-14  lib/mpi: Add mpi sgl helpers  (Tadeusz Struk)
Add mpi_read_raw_from_sgl and mpi_write_to_sgl helpers. Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2015-08-25  MPI: Fix mpi_read_buffer  (Tadeusz Struk)
Change mpi_read_buffer to return a number without leading zeros so that mpi_read_buffer and mpi_get_buffer return the same thing. Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2015-06-16  MPILIB: add mpi_read_buf() and mpi_get_size() helpers  (Tadeusz Struk)
Added an mpi_read_buf() helper function to export an MPI to a buf provided by the user, and an mpi_get_size() helper that tells the user how big the buf is. Changed mpi_free to use kzfree instead of kfree because it is used to free crypto keys.
Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2013-06-12  lib/mpi/mpicoder.c: looping issue, need stop when equal to zero, found by 'EXTRA_FLAGS=-W'  (Chen Gang)
The 'while' loop needs to stop when 'nbytes == 0', or it will cause an issue ('nbytes' is a size_t, which is always greater than or equal to zero). The related warning (with EXTRA_CFLAGS=-W):

  lib/mpi/mpicoder.c:40:2: warning: comparison of unsigned expression >= 0 is always true [-Wtype-limits]

Signed-off-by: Chen Gang <gang.chen@asianux.com>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: David Howells <dhowells@redhat.com>
Cc: James Morris <james.l.morris@oracle.com>
Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: David Howells <dhowells@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
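The underlying point, as a minimal generic sketch (not the literal mpicoder.c hunk): with an unsigned type such as size_t, a '>= 0' test can never terminate the loop, so the condition must test for zero.

  size_t nbytes = len;

  /* wrong: always true for an unsigned type, as -Wtype-limits warns */
  /* while (nbytes >= 0) { ... } */

  while (nbytes > 0) {    /* stops exactly when the input is consumed */
          /* ... process one byte ... */
          nbytes--;
  }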
2013-02-01  mpilib: use DIV_ROUND_UP and remove unused macros  (Andy Shevchenko)
Remove the MIN, MAX and ABS macros, which duplicate the kernel's native implementations.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: James Morris <james.l.morris@oracle.com>
2012-10-08  MPILIB: Provide a function to read raw data into an MPI  (David Howells)
Provide a function to read raw data of a predetermined size into an MPI rather than expecting the size to be encoded within the data. The data is assumed to represent an unsigned integer, and the resulting MPI will be positive. The function looks like this:

  MPI mpi_read_raw_data(const void *, size_t);

This is useful for reading ASN.1 integer primitives where the length is encoded in the ASN.1 metadata.
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
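Typical use when the length is already known from, say, the ASN.1 metadata (a sketch; the variable names are illustrative):

  MPI n;

  n = mpi_read_raw_data(value, vlen);     /* value/vlen come from the ASN.1 parser */
  if (!n)
          return -ENOMEM;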
2012-05-26  Remove unused code from MPI library  (Dmitry Kasatkin)
The MPI library is used by the RSA verification implementation. A few files contain functions which are never called. James Morris has asked to remove all of them.
Signed-off-by: Dmitry Kasatkin <dmitry.kasatkin@intel.com>
Requested-by: James Morris <james.l.morris@oracle.com>
Signed-off-by: James Morris <james.l.morris@oracle.com>
2012-02-02  lib/mpi: removed unused functions  (Dmitry Kasatkin)
do_encode_md() and mpi_get_keyid() are not part of the mpi library. They were used by early versions of gnupg and by the digsig project, but they are no longer used in either place.
Signed-off-by: Dmitry Kasatkin <dmitry.kasatkin@intel.com>
Reviewed-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Signed-off-by: James Morris <jmorris@namei.org>
2012-02-02  lib/mpi: replaced MPI_NULL with normal NULL  (Dmitry Kasatkin)
MPI_NULL is replaced with normal NULL. Signed-off-by: Dmitry Kasatkin <dmitry.kasatkin@intel.com> Reviewed-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp> Signed-off-by: James Morris <jmorris@namei.org>
2012-02-02  lib/mpi: added missing NULL check  (Dmitry Kasatkin)
Added missing NULL check after mpi_alloc(). Signed-off-by: Dmitry Kasatkin <dmitry.kasatkin@intel.com> Reviewed-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp> Signed-off-by: James Morris <jmorris@namei.org>
2012-01-19  MPILIB: Add a missing ENOMEM check  (David Howells)
Add a missing ENOMEM check. Signed-off-by: David Howells <dhowells@redhat.com> Acked-by: Mimi Zohar <zohar@us.ibm.com> Signed-off-by: James Morris <jmorris@namei.org>
2011-11-09  crypto: GnuPG based MPI lib - source files (part 1)  (Dmitry Kasatkin)
Adds the multi-precision-integer maths library which was originally taken from GnuPG and ported to the kernel by (among others) David Howells. This version is taken from Fedora kernel 2.6.32-71.14.1.el6. The difference is that checkpatch reported errors and warnings have been fixed. This library is used to implement RSA digital signature verification used in the IMA/EVM integrity protection subsystem. Due to patch size limitation, the patch is divided into 4 parts.
Signed-off-by: Dmitry Kasatkin <dmitry.kasatkin@intel.com>