path: root/drivers/char
2022-05-16Merge 5.18-rc7 into usb-nextGreg Kroah-Hartman
We need the tty fixes in here as well, as we need to revert one of them :( Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2022-05-16random: order timer entropy functions below interrupt functionsJason A. Donenfeld
There are no code changes here; this is just a reordering of functions, so that in subsequent commits, the timer entropy functions can call into the interrupt ones. Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2022-05-15random: do not pretend to handle premature next security modelJason A. Donenfeld
Per the thread linked below, "premature next" is not considered to be a realistic threat model, and leads to more serious security problems. "Premature next" is the scenario in which: - Attacker compromises the current state of a fully initialized RNG via some kind of infoleak. - New bits of entropy are added directly to the key used to generate the /dev/urandom stream, without any buffering or pooling. - Attacker then, somehow having read access to /dev/urandom, samples RNG output and brute forces the individual new bits that were added. - Result: the RNG never "recovers" from the initial compromise, a so-called violation of what academics term "post-compromise security". The usual solutions to this involve some form of delaying when entropy gets mixed into the crng. With Fortuna, this involves multiple input buckets. With what the Linux RNG was trying to do prior, this involves entropy estimation. However, by delaying when entropy gets mixed in, it also means that RNG compromises are extremely dangerous during the window of time before the RNG has gathered enough entropy, during which time nonces may become predictable (or repeated), ephemeral keys may not be secret, and so forth. Moreover, it's unclear how realistic "premature next" is from an attack perspective, if these attacks even make sense in practice. Put together -- and discussed in more detail in the thread below -- these constitute grounds for just doing away with the current code that pretends to handle premature next. I say "pretends" because it wasn't doing an especially great job at it either; should we change our mind about this direction, we would probably implement Fortuna to "fix" the "problem", in which case, removing the pretend solution still makes sense. This also reduces the crng reseed period from 5 minutes down to 1 minute. The rationale from the thread might lead us toward reducing that even further in the future (or even eliminating it), but that remains a topic of a future commit. At a high level, this patch changes semantics from: Before: Seed for the first time after 256 "bits" of estimated entropy have been accumulated since the system booted. Thereafter, reseed once every five minutes, but only if 256 new "bits" have been accumulated since the last reseeding. After: Seed for the first time after 256 "bits" of estimated entropy have been accumulated since the system booted. Thereafter, reseed once every minute. Most of this patch is renaming and removing: POOL_MIN_BITS becomes POOL_INIT_BITS, credit_entropy_bits() becomes credit_init_bits(), crng_reseed() loses its "force" parameter since it's now always true, the drain_entropy() function no longer has any use so it's removed, entropy estimation is skipped if we've already init'd, the various notifiers for "low on entropy" are now only active prior to init, and finally, some documentation comments are cleaned up here and there. Link: https://lore.kernel.org/lkml/YmlMGx6+uigkGiZ0@zx2c4.com/ Cc: Theodore Ts'o <tytso@mit.edu> Cc: Nadia Heninger <nadiah@cs.ucsd.edu> Cc: Tom Ristenpart <ristenpart@cornell.edu> Reviewed-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
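The resulting reseed policy reduces to a simple time check. A minimal sketch, assuming this era's random.c symbol names (the real guard lives in a crng_has_old_seed()-style helper and handles the pre-init case separately):

  #include <linux/jiffies.h>

  /* Sketch: after initialization, reseeding is purely time-based. */
  #define CRNG_RESEED_INTERVAL (60 * HZ)

  static bool crng_seed_is_stale(unsigned long last_reseed)
  {
          /* Before: also required 256 fresh "bits" credited since the
           * last reseed, on a 5 minute interval. Now time alone decides. */
          return time_after(jiffies, last_reseed + CRNG_RESEED_INTERVAL);
  }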
2022-05-13random: use first 128 bits of input as fast initJason A. Donenfeld
Before, the first 64 bytes of input, regardless of how entropic it was, would be used to mutate the crng base key directly, and none of those bytes would be credited as having entropy. Then 256 bits of credited input would be accumulated, and only then would the rng transition from the earlier "fast init" phase into being actually initialized. The thinking was that by mixing and matching fast init and real init, an attacker who compromised the fast init state, considered easy to do given how little entropy might be in those first 64 bytes, would then be able to bruteforce bits from the actual initialization. By keeping these separate, bruteforcing became impossible. However, by not crediting potentially creditable bits from those first 64 bytes of input, we delay initialization, and actually make the problem worse, because it means the user is drawing worse random numbers for a longer period of time. Instead, we can take the first 128 bits as fast init, and allow them to be credited, and then hold off on the next 128 bits until they've accumulated. This is still a wide enough margin to prevent bruteforcing the rng state, while still initializing much faster. Then, rather than trying to piecemeal inject into the base crng key at various points, instead just extract from the pool when we need it, for the crng_init==0 phase. Performance may even be better for the various inputs here, since there are likely more calls to mix_pool_bytes() than there are to get_random_bytes() during this phase of system execution. Since the preinit injection code is gone, bootloader randomness can then do something significantly more straightforward, removing the weird system_wq hack in hwgenerator randomness. Cc: Theodore Ts'o <tytso@mit.edu> Cc: Dominik Brodowski <linux@dominikbrodowski.net> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2022-05-13random: do not use batches when !crng_ready()Jason A. Donenfeld
It's too hard to keep the batches synchronized, and pointless anyway, since in !crng_ready(), we're updating the base_crng key really often, where batching only hurts. So instead, if the crng isn't ready, just call into get_random_bytes(). At this stage nothing is performance critical anyhow. Cc: Theodore Ts'o <tytso@mit.edu> Reviewed-by: Dominik Brodowski <linux@dominikbrodowski.net> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
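The shape of the change, as a sketch (function name suffixed to mark it illustrative; the batched path is elided):

  /* Sketch: while the crng isn't ready, bypass the per-cpu batches
   * entirely and draw straight from get_random_bytes(). */
  u32 get_random_u32_sketch(void)
  {
          u32 ret;

          if (!crng_ready()) {
                  get_random_bytes(&ret, sizeof(ret));
                  return ret;
          }
          /* ... otherwise consume the next word from the per-cpu batch ... */
          return 0; /* placeholder for the batched path */
  }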
2022-05-13random: mix in timestamps and reseed on system restoreJason A. Donenfeld
Since the RNG loses freshness with system suspend/hibernation, when we resume, immediately reseed using whatever data we can, which for this particular case is the various timestamps regarding system suspend time, in addition to more generally the RDSEED/RDRAND/RDTSC values that happen whenever the crng reseeds. On systems that suspend and resume automatically all the time -- such as Android -- we skip the reseeding on suspend resumption, since that could wind up being far too busy. This is the same trade-off made in WireGuard. In addition to reseeding upon resumption, always mix these various stamps into the pool on every power notification event. Cc: Theodore Ts'o <tytso@mit.edu> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
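A condensed sketch of the power-management notifier this describes; the real callback also takes the pool lock and guards the reseed against autosleep/Android configurations, so treat the details as approximate:

  #include <linux/suspend.h>

  static int random_pm_notification_sketch(struct notifier_block *nb,
                                           unsigned long action, void *data)
  {
          /* Mix the clocks into the pool on every PM event. */
          ktime_t stamps[] = { ktime_get(), ktime_get_boottime(),
                               ktime_get_real() };

          mix_pool_bytes(stamps, sizeof(stamps));

          /* On resume, additionally force a reseed -- skipped on
           * systems that autosleep constantly. */
          if (crng_ready() && !IS_ENABLED(CONFIG_PM_AUTOSLEEP) &&
              (action == PM_POST_SUSPEND || action == PM_POST_HIBERNATION))
                  crng_reseed();
          return NOTIFY_DONE;
  }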
2022-05-13random: vary jitter iterations based on cycle counter speedJason A. Donenfeld
Currently, we do the jitter dance if two consecutive reads to the cycle counter return different values. If they do, then we consider the cycle counter to be fast enough that one trip through the scheduler will yield one "bit" of credited entropy. If those two reads return the same value, then we assume the cycle counter is too slow to show meaningful differences. This methodology is flawed for a variety of reasons, one of which Eric posted a patch to fix in [1]. The issue that patch solves is that on a system with a slow counter, you might be [un]lucky and read the counter _just_ before it changes, so that the second cycle counter you read differs from the first, even though there's usually quite a large period of time in between the two. For example:

  | real time | cycle counter |
  | --------- | ------------- |
  | 3         | 5             |
  | 4         | 5             |
  | 5         | 5             |
  | 6         | 5             |
  | 7         | 5             | <--- a
  | 8         | 6             | <--- b
  | 9         | 6             | <--- c

If we read the counter at (a) and compare it to (b), we might be fooled into thinking that it's a fast counter, when in reality it is not. The solution in [1] is to also compare counter (b) to counter (c), on the theory that if the counter is _actually_ slow, and (a)!=(b), then certainly (b)==(c). This helps solve this particular issue, in one sense, but in another sense, it mostly functions to disallow jitter entropy on these systems, rather than simply taking more samples in that case. Instead, this patch takes a different approach. Right now we assume that a difference in one set of consecutive samples means one "bit" of credited entropy per scheduler trip. We can extend this so that a difference in two sets of consecutive samples means one "bit" of credited entropy per /two/ scheduler trips, and three for three, and four for four. In other words, we can increase the amount of jitter "work" we require for each "bit", depending on how slow the cycle counter is. So this patch takes a whole bunch of samples, sees how many of them are different, and divides to find the amount of work required per "bit", and also requires that at least some minimum of them are different in order to attempt any jitter entropy. Note that this approach is still far from perfect. It's not a real statistical estimate on how much these samples vary; it's not a real-time analysis of the relevant input data. That remains a project for another time. However, it makes the same (partly flawed) assumptions as the code that's there now, so it's probably not worse than the status quo, and it handles the issue Eric mentioned in [1]. But, again, it's probably a far cry from whatever a really robust version of this would be.

[1] https://lore.kernel.org/lkml/20220421233152.58522-1-ebiggers@kernel.org/
    https://lore.kernel.org/lkml/20220421192939.250680-1-ebiggers@kernel.org/

Cc: Eric Biggers <ebiggers@google.com> Cc: Theodore Ts'o <tytso@mit.edu> Cc: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
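The sampling idea in miniature, with illustrative thresholds (the actual constants and credit plumbing differ):

  /* Sketch: sample the counter repeatedly, count differing consecutive
   * pairs, and charge more scheduler trips per "bit" on slow counters. */
  enum { NUM_SAMPLES = 32, MIN_CHANGES = 5 }; /* illustrative */

  static int jitter_trips_per_bit(void)
  {
          unsigned long prev = random_get_entropy(), cur;
          int i, changes = 0;

          for (i = 1; i < NUM_SAMPLES; ++i) {
                  cur = random_get_entropy();
                  if (cur != prev)
                          ++changes;
                  prev = cur;
          }
          if (changes < MIN_CHANGES)
                  return -1; /* too uniform: don't attempt jitter entropy */
          return DIV_ROUND_UP(NUM_SAMPLES - 1, changes);
  }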
2022-05-13random: insist on random_get_entropy() existing in order to simplifyJason A. Donenfeld
All platforms are now guaranteed to provide some value for random_get_entropy(). In case some bug leads to this not being so, we print a warning, because that indicates that something is really very wrong (and likely other things are impacted too). This should never be hit, but it's a good and cheap way of finding out if something ever is problematic. Since we now have viable fallback code for random_get_entropy() on all platforms, which is, in the worst case, not worse than jiffies, we can count on getting the best possible value out of it. That means there's no longer a use for using jiffies as entropy input. It also means we no longer have a reason for doing the round-robin register flow in the IRQ handler, which was always of fairly dubious value. Instead we can greatly simplify the IRQ handler inputs and also unify the construction between 64-bits and 32-bits. We now collect the cycle counter and the return address, since those are the two things that matter. Because the return address and the irq number are likely related, to the extent we mix in the irq number, we can just xor it into the top unchanging bytes of the return address, rather than the bottom changing bytes of the cycle counter as before. Then, we can do a fixed 2 rounds of SipHash/HSipHash. Finally, we use the same construction of hashing only half of the [H]SipHash state on 32-bit and 64-bit. We're not actually discarding any entropy, since that entropy is carried through until the next time. And more importantly, it lets us do the same sponge-like construction everywhere. Cc: Theodore Ts'o <tytso@mit.edu> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2022-05-12ipmi:ipmb: Fix refcount leak in ipmi_ipmb_probeMiaoqian Lin
of_parse_phandle() returns a node pointer with its refcount incremented; we should call of_node_put() on it when done. Add the missing of_node_put() to avoid a refcount leak. Fixes: 00d93611f002 ("ipmi:ipmb: Add the ability to have a separate slave and master device") Signed-off-by: Miaoqian Lin <linmq006@gmail.com> Message-Id: <20220512044445.3102-1-linmq006@gmail.com> Cc: stable@vger.kernel.org # v5.17+ Signed-off-by: Corey Minyard <cminyard@mvista.com>
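The general shape of this class of fix (the property name follows the ipmi_ipmb binding, but treat it as illustrative):

  struct device_node *np = of_parse_phandle(dev->of_node, "slave-dev", 0);

  if (np) {
          /* ... look up the slave device from np ... */
          of_node_put(np); /* balance the reference of_parse_phandle() took */
  }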
2022-05-12ipmi: remove unnecessary type castingsYu Zhe
Remove unnecessary void * type casts. Signed-off-by: Yu Zhe <yuzhe@nfschina.com> Message-Id: <20220421150941.7659-1-yuzhe@nfschina.com> Signed-off-by: Corey Minyard <cminyard@mvista.com>
2022-05-12ipmi: Make two logs uniqueCorey Minyard
There were two identical logs in two different places, so you couldn't tell which one was being logged. Make them unique. Signed-off-by: Corey Minyard <cminyard@mvista.com>
2022-05-12ipmi:si: Convert pr_debug() to dev_dbg()Corey Minyard
A device is available, use it. Signed-off-by: Corey Minyard <cminyard@mvista.com>
2022-05-12ipmi: Convert pr_debug() to dev_dbg()Corey Minyard
A device is available at all debug points, use the right interface. Signed-off-by: Corey Minyard <cminyard@mvista.com>
2022-05-12ipmi: Fix pr_fmt to avoid compilation issuesCorey Minyard
The way it was, it wouldn't work in some situations; simplify it. What was there was unnecessary complexity. Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Corey Minyard <cminyard@mvista.com>
2022-05-12ipmi: Add an initializer for ipmi_recv_msg structCorey Minyard
Don't hand-initialize the struct here; create a macro to initialize it so newly added fields don't get forgotten anywhere. Signed-off-by: Corey Minyard <cminyard@mvista.com>
2022-05-12ipmi: Add an initializer for ipmi_smi_msg structCorey Minyard
There was a "type" element added to this structure, but some static values were missed. The default value will be zero, which is correct, but create an initializer for the type and initialize the type properly in the initializer to avoid future issues. Reported-by: Joe Wiese <jwiese@rackspace.com> Signed-off-by: Corey Minyard <cminyard@mvista.com>
2022-05-12ipmi:ssif: Check for NULL msg when handling events and messagesCorey Minyard
Even though it's not possible to get into the SSIF_GETTING_MESSAGES and SSIF_GETTING_EVENTS states without a valid message in the msg field, it's probably best to be defensive here and check and print a log, since that means something else went wrong. Also add a default clause to that switch statement to release the lock and print a log, in case the state variable gets messed up somehow. Reported-by: Haowen Bai <baihaowen@meizu.com> Signed-off-by: Corey Minyard <cminyard@mvista.com>
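Sketched, with state names per ipmi_ssif.c and the surrounding locking elided:

  switch (ssif_info->ssif_state) {
  case SSIF_GETTING_MESSAGES:
  case SSIF_GETTING_EVENTS:
          if (!msg) {
                  /* Should be impossible; log rather than oops. */
                  dev_warn(dev, "No msg set in state %d\n",
                           ssif_info->ssif_state);
                  break;
          }
          /* ... deliver the message ... */
          break;
  default:
          /* The state variable got messed up somehow: log and bail. */
          dev_warn(dev, "Unexpected ssif state %d\n", ssif_info->ssif_state);
          break;
  }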
2022-05-12ipmi: use simple i2c probe functionStephen Kitt
The i2c probe functions here don't use the id information provided in their second argument, so the single-parameter i2c probe function ("probe_new") can be used instead. This avoids scanning the identifier tables during probes. Signed-off-by: Stephen Kitt <steve@sk2.org> Message-Id: <20220324171159.544565-1-steve@sk2.org> Signed-off-by: Corey Minyard <cminyard@mvista.com> Reviewed-by: Wolfram Sang <wsa+renesas@sang-engineering.com>
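Roughly what the conversion looks like (driver and function names illustrative):

  /* The probe callback loses its unused i2c_device_id argument and is
   * registered through .probe_new instead of .probe. */
  static int ssif_probe(struct i2c_client *client)
  {
          /* ... the old 'const struct i2c_device_id *id' arg was unused ... */
          return 0;
  }

  static struct i2c_driver ssif_i2c_driver = {
          .driver    = { .name = "ipmi_ssif" },
          .probe_new = ssif_probe,
  };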
2022-05-12ipmi: Add a sysfs count of total outstanding messages for an interfaceCorey Minyard
Go through each user and add its message count to a total and print the total. It would be nice to have a per-user file, but there's no user sysfs entity at this point to hang it off of. Probably not worth the effort. Based on work by Chen Guanqiao <chen.chenchacha@foxmail.com> Cc: Chen Guanqiao <chen.chenchacha@foxmail.com> Signed-off-by: Corey Minyard <cminyard@mvista.com>
2022-05-12ipmi: Add a sysfs interface to view the number of usersCorey Minyard
A count of users is kept for each interface, allow it to be viewed. Based on work by Chen Guanqiao <chen.chenchacha@foxmail.com> Cc: Chen Guanqiao <chen.chenchacha@foxmail.com> Signed-off-by: Corey Minyard <cminyard@mvista.com>
2022-05-12ipmi: Limit the number of messages a user may have outstandingCorey Minyard
This way a rogue application can't use up a bunch of memory. Based on work by Chen Guanqiao <chen.chenchacha@foxmail.com> Cc: Chen Guanqiao <chen.chenchacha@foxmail.com> Signed-off-by: Corey Minyard <cminyard@mvista.com>
2022-05-12ipmi: Add a limit on the number of users that may use IPMICorey Minyard
Each user consumes memory, so we need limits to keep a rogue program from running the system out of memory. Based on work by Chen Guanqiao <chen.chenchacha@foxmail.com> Cc: Chen Guanqiao <chen.chenchacha@foxmail.com> Signed-off-by: Corey Minyard <cminyard@mvista.com>
2022-05-06hwrng: cn10k - Enable compile testingHerbert Xu
This patch enables COMPILE_TEST for cn10k. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-05-04Merge tag 'for-linus-5.17-2' of https://github.com/cminyard/linux-ipmiLinus Torvalds
Pull IPMI fixes from Corey Minyard: "Fix some issues that were reported. This has been in for-next for a bit (longer than the times would indicate, I had to rebase to add some text to the headers) and these are fixes that need to go in" * tag 'for-linus-5.17-2' of https://github.com/cminyard/linux-ipmi: ipmi:ipmi_ipmb: Fix null-ptr-deref in ipmi_unregister_smi() ipmi: When handling send message responses, don't process the message
2022-05-02Merge 5.18-rc5 into tty-nextGreg Kroah-Hartman
We need the tty/serial fixes in here as well. Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2022-05-02Merge 5.18-rc5 into char-misc-nextGreg Kroah-Hartman
We need the char-misc fixes in here as well. Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2022-04-29ipmi:ipmi_ipmb: Fix null-ptr-deref in ipmi_unregister_smi()Corey Minyard
KASAN reports a null-ptr-deref as follows:

  KASAN: null-ptr-deref in range [0x0000000000000008-0x000000000000000f]
  Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.13.0-1ubuntu1.1 04/01/2014
  RIP: 0010:ipmi_unregister_smi+0x7d/0xd50 drivers/char/ipmi/ipmi_msghandler.c:3680
  Call Trace:
   ipmi_ipmb_remove+0x138/0x1a0 drivers/char/ipmi/ipmi_ipmb.c:443
   ipmi_ipmb_probe+0x409/0xda1 drivers/char/ipmi/ipmi_ipmb.c:548
   i2c_device_probe+0x959/0xac0 drivers/i2c/i2c-core-base.c:563
   really_probe+0x3f3/0xa70 drivers/base/dd.c:541

In ipmi_ipmb_probe(), 'iidev->intf' is not set until ipmi_register_smi() succeeds. In the error handling case, ipmi_ipmb_remove() is called to release resources, and ipmi_unregister_smi() is called without checking 'iidev->intf', which causes the KASAN null-ptr-deref. General kernel style is to allow NULL to be passed into unregister calls, so fix it that way. This allows a NULL check to be removed in other code. Fixes: 57c9e3c9a374 ("ipmi:ipmi_ipmb: Unregister the SMI on remove") Reported-by: Hulk Robot <hulkci@huawei.com> Cc: stable@vger.kernel.org # v5.17+ Cc: Wei Yongjun <weiyongjun1@huawei.com> Signed-off-by: Corey Minyard <cminyard@mvista.com>
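The fix in miniature:

  /* Tolerate NULL, so error paths can call unregister unconditionally. */
  void ipmi_unregister_smi(struct ipmi_smi *intf)
  {
          if (!intf)
                  return; /* probe failed before intf was recorded */
          /* ... normal teardown ... */
  }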
2022-04-29ipmi: When handling send message responses, don't process the messageCorey Minyard
A chunk was dropped when the code handling send messages was rewritten. Those messages shouldn't be processed normally, they are just an indication that the message was successfully sent and the timers should be started for the real response that should be coming later. Add back in the missing chunk to just discard the message and go on. Fixes: 059747c245f0 ("ipmi: Add support for IPMB direct messages") Reported-by: Joe Wiese <jwiese@rackspace.com> Cc: stable@vger.kernel.org # v5.16+ Signed-off-by: Corey Minyard <cminyard@mvista.com> Tested-by: Joe Wiese <jwiese@rackspace.com>
2022-04-29hwrng: optee - remove redundant initialization to variable rng_sizeColin Ian King
Variable rng_size is being initialized with a value that is never read, the variable is being re-assigned later on. The initialization is redundant and can be removed. Cleans up cppcheck warning: Variable 'rng_size' is assigned a value that is never used. Signed-off-by: Colin Ian King <colin.i.king@gmail.com> Reviewed-by: Sumit Garg <sumit.garg@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-04-26Revert "hwrng: mpfs - Enable COMPILE_TEST"Herbert Xu
This reverts commit 6a71277ce91e4766ebe9a5f6725089c80d043ba2. The underlying option POLARFIRE_SOC_SYS_CTRL already supports COMPILE_TEST so there is no need for this. What's more, if we force this option on without the underlying option it fails to build. Reported-by: kernel test robot <lkp@intel.com> Reported-by: Randy Dunlap <rdunlap@infradead.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-04-25random: document crng_fast_key_erasure() destination possibilityJason A. Donenfeld
This reverts 35a33ff3807d ("random: use memmove instead of memcpy for remaining 32 bytes"), which was made on a totally bogus basis. The thing it was worried about overlapping came from the stack, not from one of its arguments, as Eric pointed out. But the fact that this confusion even happened draws attention to the fact that it's a bit non-obvious that the random_data parameter can alias chacha_state, and in fact should do so when the caller can't rely on the stack being cleared in a timely manner. So this commit documents that. Reported-by: Eric Biggers <ebiggers@kernel.org> Reviewed-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2022-04-24/dev/mem: make reads and writes interruptibleJason A. Donenfeld
In 8619e5bdeee8 ("/dev/mem: Bail out upon SIGKILL."), /dev/mem became killable, and that commit noted: Theoretically, reading/writing /dev/mem and /dev/kmem can become "interruptible". But this patch chose "killable". Future patch will make them "interruptible" so that we can revert to "killable" if some program regressed. So now we take the next step in making it "interruptible", by changing fatal_signal_pending() into signal_pending(). Cc: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com> Link: https://lore.kernel.org/r/20220407122638.490660-1-Jason@zx2c4.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
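The essence of the change inside the /dev/mem copy loop, sketched (error value illustrative):

  while (count > 0) {
          /* ... copy the next chunk to/from userspace ... */
          if (signal_pending(current)) /* was: fatal_signal_pending() */
                  return written ? written : -ERESTARTSYS;
          cond_resched();
  }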
2022-04-24char: xillybus: fix a refcount leak in cleanup_dev()Hangyu Hua
usb_get_dev() is called in xillyusb_probe(), so usb_put_dev() should be called before xdev is released. Acked-by: Eli Billauer <eli.billauer@gmail.com> Signed-off-by: Hangyu Hua <hbh25y@gmail.com> Link: https://lore.kernel.org/r/20220406075703.23464-1-hbh25y@gmail.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2022-04-24char: xillybus: replace usage of found with dedicated list iterator variableJakob Koschel
To move the list iterator variable into the list_for_each_entry_*() macros in the future, use of the list iterator variable after the loop body should be avoided. To *never* use the list iterator variable after the loop, it was concluded that a separate iterator variable should be used instead of a found boolean [1]. This removes the need for a found variable; simply checking whether the dedicated variable was set determines whether the break/goto was hit. Link: https://lore.kernel.org/all/CAHk-=wgRr_D8CB-D9Kg-c=EHreAsk5SqXPwr9Y7k9sA6cWXJ6w@mail.gmail.com/ Acked-by: Eli Billauer <eli.billauer@gmail.com> Signed-off-by: Jakob Koschel <jakobkoschel@gmail.com> Link: https://lore.kernel.org/r/20220324070939.59297-1-jakobkoschel@gmail.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
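The resulting pattern, sketched with hypothetical types:

  struct xilly_channel *chan, *found = NULL;

  list_for_each_entry(chan, &channel_list, list_node) {
          if (chan->index == wanted_index) {
                  found = chan;
                  break;
          }
  }
  if (!found)
          return -ENODEV;
  /* operate on 'found' from here on, never on 'chan' */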
2022-04-24char: misc: remove usage of list iterator past the loop bodyJakob Koschel
In preparation to limit the scope of the list iterator to the list traversal loop, use a dedicated pointer pointing to the found element [1]. Link: https://lore.kernel.org/all/YhdfEIwI4EdtHdym@kroah.com/ Signed-off-by: Jakob Koschel <jakobkoschel@gmail.com> Link: https://lore.kernel.org/r/20220319201454.2511733-1-jakobkoschel@gmail.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2022-04-22char: ttyprintk: register consoleVincent Whitchurch
Register a console in the ttyprintk driver so that it can be selected for /dev/console with console=ttyprintk on the kernel command line, similar to other console drivers. Signed-off-by: Vincent Whitchurch <vincent.whitchurch@axis.com> Link: https://lore.kernel.org/r/20220215141750.92808-1-vincent.whitchurch@axis.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
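Roughly what the registration involves; the write callback body and console flags are elided, so treat this as a sketch rather than the driver's exact code:

  #include <linux/console.h>

  static void ttyprintk_console_write(struct console *co,
                                      const char *buf, unsigned int count)
  {
          /* feed buf into the same printk-wrapping path the tty uses */
  }

  static struct console ttyprintk_console = {
          .name  = "ttyprintk",
          .write = ttyprintk_console_write,
  };

  static int __init ttyprintk_init_console(void)
  {
          register_console(&ttyprintk_console);
          return 0;
  }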
2022-04-21hwrng: mpfs - Enable COMPILE_TESTHerbert Xu
The dependency on HW_RANDOM is redundant so this patch removes it. As this driver seems to cross-compile just fine we could also enable COMPILE_TEST. Reviewed-by: Conor Dooley <conor.dooley@microchip.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-04-21hwrng: cn10k - Make check_rng_health() return an error codeVladis Dronov
Currently check_rng_health() returns zero unconditionally. Make it output an error code and return it. Fixes: 38e9791a0209 ("hwrng: cn10k - Add random number generator support") Signed-off-by: Vladis Dronov <vdronov@redhat.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-04-21hwrng: cn10k - Optimize cn10k_rng_read()Vladis Dronov
This function assumes that sizeof(void) is 1 and that arithmetic works for void pointers. This is a GNU C extension and may not work with other compilers. Change this by using a u8 pointer. Also move cn10k_read_trng() out of a loop, thus saving some cycles. Fixes: 38e9791a0209 ("hwrng: cn10k - Add random number generator support") Signed-off-by: Vladis Dronov <vdronov@redhat.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
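The pointer fix in miniature (helper name hypothetical):

  /* Arithmetic on void * is a GNU C extension, so step through the
   * buffer with a u8 pointer, which is standard C. */
  static size_t rng_fill_sketch(void *data, size_t max)
  {
          u8 *pos = data;
          size_t copied;

          for (copied = 0; copied < max; copied++)
                  pos[copied] = read_trng_byte(); /* hypothetical helper */
          return copied;
  }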
2022-04-20tty: synclink_cs: Use bitwise instead of arithmetic operator for flagsHaowen Bai
This silences the following coccinelle warning:

  drivers/s390/char/tape_34xx.c:360:38-39: WARNING: sum of probable bitmasks, consider |

We will try to make the code cleaner. Reviewed-by: Jiri Slaby <jirislaby@kernel.org> Signed-off-by: Haowen Bai <baihaowen@meizu.com> Link: https://lore.kernel.org/r/1647846757-946-1-git-send-email-baihaowen@meizu.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2022-04-16random: use memmove instead of memcpy for remaining 32 bytesJason A. Donenfeld
In order to immediately overwrite the old key on the stack, before servicing a userspace request for bytes, we use the remaining 32 bytes of block 0 as the key. This means moving indices 8,9,a,b,c,d,e,f -> 4,5,6,7,8,9,a,b. Since 4 < 8, for the kernel implementations of memcpy(), this doesn't actually appear to be a problem in practice. But relying on that characteristic seems a bit brittle. So let's change that to a proper memmove(), which is the by-the-books way of handling overlapping memory copies. Reviewed-by: Dominik Brodowski <linux@dominikbrodowski.net> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2022-04-15hwrng: mpfs - add polarfire soc hwrng supportConor Dooley
Add a driver to access the hardware random number generator on the Polarfire SoC. The hwrng can only be accessed via the system controller, so use the mailbox interface the system controller exposes to access the hwrng. Signed-off-by: Conor Dooley <conor.dooley@microchip.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-04-13random: make random_get_entropy() return an unsigned longJason A. Donenfeld
Some implementations were returning type `unsigned long`, while others that fell back to get_cycles() were implicitly returning a `cycles_t` or an untyped constant int literal. That makes for weird and confusing code, and basically all code in the kernel already handled it like it was an `unsigned long`. I recently tried to handle it as the largest type it could be, a `cycles_t`, but doing so doesn't really help with much. Instead let's just make random_get_entropy() return an unsigned long all the time. This also matches the commonly used `arch_get_random_long()` function, so now RDRAND and RDTSC return the same sized integer, which means one can fallback to the other more gracefully. Cc: Dominik Brodowski <linux@dominikbrodowski.net> Cc: Theodore Ts'o <tytso@mit.edu> Acked-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2022-04-13random: allow partial reads if later user copies failJason A. Donenfeld
Rather than failing entirely if a copy_to_user() fails at some point, instead we should return a partial read for the amount that succeeded prior, unless none succeeded at all, in which case we return -EFAULT as before. This makes it consistent with other reader interfaces. For example, the following snippet for /dev/zero outputs "4" followed by "1":

  int fd;
  void *x = mmap(NULL, 4096, PROT_WRITE, MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
  assert(x != MAP_FAILED);
  fd = open("/dev/zero", O_RDONLY);
  assert(fd >= 0);
  printf("%zd\n", read(fd, x, 4));
  printf("%zd\n", read(fd, x + 4095, 4));
  close(fd);

This brings that same standard behavior to the various RNG reader interfaces. While we're at it, we can streamline the loop logic a little bit. Suggested-by: Linus Torvalds <torvalds@linux-foundation.org> Cc: Jann Horn <jannh@google.com> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
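The loop shape this yields, sketched:

  /* On a failed copy_to_user(), return the bytes already delivered;
   * -EFAULT only if none were. */
  ssize_t ret = 0;

  while (nbytes) {
          size_t chunk = min_t(size_t, nbytes, sizeof(block));

          /* ... fill 'block' with chunk bytes of RNG output ... */
          if (copy_to_user(ubuf, block, chunk)) {
                  ret = ret ? ret : -EFAULT;
                  break;
          }
          ret += chunk;
          ubuf += chunk;
          nbytes -= chunk;
  }
  return ret;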
2022-04-07random: check for signals every PAGE_SIZE chunk of /dev/[u]randomJason A. Donenfeld
In 1448769c9cdb ("random: check for signal_pending() outside of need_resched() check"), Jann pointed out that we previously were only checking the TIF_NOTIFY_SIGNAL and TIF_SIGPENDING flags if the process had TIF_NEED_RESCHED set, which meant in practice, super long reads to /dev/[u]random would delay signal handling by a long time. I tried this using the below program, and indeed I wasn't able to interrupt a /dev/urandom read until after several megabytes had been read. The bug he fixed has always been there, and so code that reads from /dev/urandom without checking the return value of read() has mostly worked for a long time, for most sizes, not just for <= 256. Maybe it makes sense to keep that code working. The reason it was so small prior, ignoring the fact that it didn't work anyway, was likely because /dev/random used to block, and that could happen for pretty large lengths of time while entropy was gathered. But now, it's just a chacha20 call, which is extremely fast and is just operating on pure data, without having to wait for some external event. In that sense, /dev/[u]random is a lot more like /dev/zero. Taking a page out of /dev/zero's read_zero() function, it always returns at least one chunk, and then checks for signals after each chunk. Chunk sizes there are of length PAGE_SIZE. Let's just copy the same thing for /dev/[u]random, and check for signals and cond_resched() for every PAGE_SIZE amount of data. This makes the behavior more consistent with expectations, and should mitigate the impact of Jann's fix for the age-old signal check bug.

---- test program ----

  #include <unistd.h>
  #include <signal.h>
  #include <stdio.h>
  #include <sys/random.h>

  static unsigned char x[~0U];

  static void handle(int sig) { }

  int main(int argc, char *argv[])
  {
          pid_t pid = getpid(), child;

          signal(SIGUSR1, handle);
          if (!(child = fork())) {
                  for (;;)
                          kill(pid, SIGUSR1);
          }
          pause();
          printf("interrupted after reading %zd bytes\n",
                 getrandom(x, sizeof(x), 0));
          kill(child, SIGTERM);
          return 0;
  }

Cc: Jann Horn <jannh@google.com> Cc: Theodore Ts'o <tytso@mit.edu> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
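The kernel-side counterpart, sketched:

  /* Always emit at least one chunk, then check for signals and
   * reschedule between PAGE_SIZE-sized chunks. */
  while (nbytes) {
          size_t chunk = min_t(size_t, nbytes, PAGE_SIZE);

          /* ... generate 'chunk' bytes and copy them to userspace ... */
          ret += chunk;
          nbytes -= chunk;
          if (!nbytes)
                  break;
          if (signal_pending(current)) /* never before the first chunk */
                  break;
          cond_resched();
  }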
2022-04-06random: check for signal_pending() outside of need_resched() checkJann Horn
signal_pending() checks TIF_NOTIFY_SIGNAL and TIF_SIGPENDING, which signal that the task should bail out of the syscall when possible. This is a separate concept from need_resched(), which checks TIF_NEED_RESCHED, signaling that the task should preempt. In particular, with the current code, the signal_pending() bailout probably won't work reliably. Change this to look like other functions that read lots of data, such as read_zero(). Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Signed-off-by: Jann Horn <jannh@google.com> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2022-04-06random: do not allow user to keep crng key around on stackJason A. Donenfeld
The fast key erasure RNG design relies on the key being used once and then discarded. We do this, making judicious use of memzero_explicit(). However, reads to /dev/urandom and calls to getrandom() involve a copy_to_user(), and userspace can use FUSE or userfaultfd, or make a massive call, dynamically remap memory addresses as it goes, and set the process priority to idle, in order to keep a kernel stack alive indefinitely. By probing /proc/sys/kernel/random/entropy_avail to learn when the crng key is refreshed, a malicious userspace could mount this attack every 5 minutes thereafter, breaking the crng's forward secrecy. In order to fix this, we just overwrite the stack's key with the first 32 bytes of the "free" fast key erasure output. If we're returning <= 32 bytes to the user, then we can still return those bytes directly, so that short reads don't become slower. And for long reads, the difference is hopefully lost in the amortization, so it doesn't change much, with that amortization helping variously for medium reads. We don't need to do this for get_random_bytes() and the various kernel-space callers, and later, if we ever switch to always batching, this won't be necessary either, so there's no need to change the API of these functions. Cc: Theodore Ts'o <tytso@mit.edu> Reviewed-by: Jann Horn <jannh@google.com> Fixes: c92e040d575a ("random: add backtracking protection to the CRNG") Fixes: 186873c549df ("random: use simpler fast key erasure flow on per-cpu keys") Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
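The defense reduces to overwriting the on-stack key before any copy_to_user() can stall. A sketch of the aliasing trick, close to but not verbatim the resulting code (see also the crng_fast_key_erasure() documentation commit above):

  u32 chacha_state[CHACHA_STATE_WORDS];

  /* The output destination deliberately aliases the key words of
   * chacha_state, so the old key is gone before userspace is served. */
  crng_make_state(chacha_state, (u8 *)&chacha_state[4], CHACHA_KEY_SIZE);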
2022-04-05x86/amd_nb: Unexport amd_cache_northbridges()Muralidhara M K
amd_cache_northbridges() is exported by amd_nb.c and is called by amd64-agp.c and amd64_edac.c modules at module_init() time so that NB descriptors are properly cached before those drivers can use them. However, the init_amd_nbs() initcall already does call amd_cache_northbridges() unconditionally and thus makes sure the NB descriptors are enumerated. That initcall is a fs_initcall type which is on the 5th group (starting from 0) of initcalls that gets run in increasing numerical order by the init code. The module_init() call is turned into an __initcall() in the MODULE=n case and those are device-level initcalls, i.e., group 6. Therefore, the northbridges caching is already finished by the time module initialization starts and thus the correct initialization order is retained. Unexport amd_cache_northbridges(), update dependent modules to call amd_nb_num() instead. While at it, simplify the checks in amd_cache_northbridges(). [ bp: Heavily massage and *actually* explain why the change is ok. ] Signed-off-by: Muralidhara M K <muralimk@amd.com> Signed-off-by: Naveen Krishna Chatradhi <nchatrad@amd.com> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lore.kernel.org/r/20220324122729.221765-1-nchatrad@amd.com
2022-04-05random: opportunistically initialize on /dev/urandom readsJason A. Donenfeld
In 6f98a4bfee72 ("random: block in /dev/urandom"), we tried to make a successful try_to_generate_entropy() call *required* if the RNG was not already initialized. Unfortunately, weird architectures and old userspaces combined in TCG test harnesses, making that change still not realistic, so it was reverted in 0313bc278dac ("Revert "random: block in /dev/urandom""). However, rather than making a successful try_to_generate_entropy() call *required*, we can instead make it *best-effort*. If try_to_generate_entropy() fails, it fails, and nothing changes from the current behavior. If it succeeds, then /dev/urandom becomes safe to use for free. This way, we don't risk the regression potential that led to us reverting the required-try_to_generate_entropy() call before. Practically speaking, this means that at least on x86, /dev/urandom becomes safe. Probably other architectures with working cycle counters will also become safe. And architectures with slow or broken cycle counters at least won't be affected at all by this change. So it may not be the glorious "all things are unified!" change we were hoping for initially, but practically speaking, it makes a positive impact. Cc: Theodore Ts'o <tytso@mit.edu> Cc: Dominik Brodowski <linux@dominikbrodowski.net> Cc: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2022-04-04random: do not split fast init input in add_hwgenerator_randomness()Jan Varho
add_hwgenerator_randomness() tries to only use the required amount of input for fast init, but credits all the entropy, rather than a fraction of it. Since it's hard to determine how much entropy is left over out of a non-uniformly random sample, either give it all to fast init or credit it, but don't attempt to do both. In the process, we can clean up the injection code to no longer need to return a value. Signed-off-by: Jan Varho <jan.varho@gmail.com> [Jason: expanded commit message] Fixes: 73c7733f122e ("random: do not throw away excess input to crng_fast_load") Cc: stable@vger.kernel.org # 5.17+, requires af704c856e88 Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>