2022-03-14btrfs: match stale devices by dev_tAnand Jain
After the commit "btrfs: harden identification of a stale device", we don't have to match the device path anymore. Instead, we match the dev_t. So pass in the dev_t instead of the device path in the call chain btrfs_forget_devices() -> btrfs_free_stale_devices(). Signed-off-by: Anand Jain <anand.jain@oracle.com> Signed-off-by: David Sterba <dsterba@suse.com>
2022-03-14btrfs: harden identification of a stale deviceAnand Jain
Identifying and removing the stale device from the fs_uuids list is done by btrfs_free_stale_devices(), which in turn depends on device_path_matched() to check if the device appears in more than one btrfs_device structure. The matching of the device happens by its path, the device path. However, when device mapper is in use, the dm device paths are nothing but a link to the actual block device, which leads to device_path_matched() failing to match. Fix this by matching the dev_t as provided by lookup_bdev() instead of a plain string compare of the device paths. Reported-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: Anand Jain <anand.jain@oracle.com> Signed-off-by: David Sterba <dsterba@suse.com>
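A minimal sketch of the idea, with the helper name and context chosen for illustration (the real patch reworks device_path_matched() and its caller):

	/* Compare dev_t numbers rather than path strings, so a dm symlink
	 * and the underlying block device still match. */
	static bool device_matched(const struct btrfs_device *device, dev_t dev_new)
	{
		dev_t dev_old;

		/* lookup_bdev() resolves a path, including dm symlinks, to a dev_t */
		if (lookup_bdev(rcu_str_deref(device->name), &dev_old))
			return false;

		return dev_old == dev_new;
	}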
2022-03-14btrfs: simplify fs_devices member access in btrfs_init_dev_replace_tgtdevAnand Jain
In btrfs_init_dev_replace_tgtdev() we dereference fs_info to get fs_devices many times; instead, save a pointer to the fs_devices. Signed-off-by: Anand Jain <anand.jain@oracle.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2022-03-14btrfs: reuse existing inode from btrfs_ioctlSahil Kang
btrfs_ioctl() extracts the inode from the file, so we can pass that into the callbacks. Signed-off-by: Sahil Kang <sahil.kang@asilaycomputing.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2022-03-14btrfs: move missing device handling into a dedicated functionNikolay Borisov
This simplifies the code flow in read_one_chunk and makes the error handling for missing devices a bit simpler by reducing it to a single check for whether something went wrong. No functional changes. Reviewed-by: Su Yue <l@damenly.su> Signed-off-by: Nikolay Borisov <nborisov@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2022-03-14btrfs: stop trying to log subdirectories created in past transactionsFilipe Manana
When logging a directory we are trying to log subdirectories that were changed in the current transaction and created in a past transaction. This behaviour was introduced by commit 2f2ff0ee5e4303 ("Btrfs: fix metadata inconsistencies after directory fsync") to fix metadata inconsistencies that, due to numerous other changes made throughout the years, no longer require it. Besides no longer being needed, this behaviour is also undesirable because:

1) It's not reliable, because it's only triggered for the directories of dentries (dir items) that happen to be present on a leaf that was changed in the current transaction. If a dentry that points to a directory resides on a leaf that was not changed in the current transaction, then it's not logged, as at log_dir_items() and log_new_dir_dentries() we use btrfs_search_forward();

2) It's not required by POSIX or any standard; it's undefined territory. The only way to guarantee a subdirectory is logged is to explicitly fsync it.

Making the behaviour guaranteed would require scanning all directory items, checking which ones point to a directory, and then fsyncing each subdirectory that was modified in the current transaction. This could be very expensive for large directories with many subdirectories and/or large subdirectories. So remove that obsolete logic. Signed-off-by: Filipe Manana <fdmanana@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2022-03-14btrfs: stop copying old dir items when logging a directoryFilipe Manana
When logging a directory, we go over every leaf of the subvolume tree that was changed in the current transaction and copy all its dir index keys to the log tree. That includes copying dir index keys created in past transactions. This is done mostly for simplicity, as after logging the keys we log an item that specifies the start and end ranges of the keys we logged. That item is then used during log replay to figure out which keys need to be deleted - every key in that range that we find in the subvolume tree and is not in the log tree needs to be deleted. Now that we log only dir index keys, and not dir item keys anymore, when we remove dentries from a directory (due to unlink and rename operations), we can get entire leaves that we changed only to delete old dir index keys, or leaves that have only a few new dir index keys - this is because the offset for new index keys comes from a monotonically increasing counter. We can avoid logging dir index keys from past transactions, and in order to track the deletions, only log range items (BTRFS_DIR_LOG_INDEX_KEY key type) when we find gaps between consecutive index keys. This massively reduces the amount of logged metadata when we have deleted directory entries, even if it's a small percentage of the total number of entries. The reduction comes both from logging fewer items and from the fact that, instead of logging many dir index items (struct btrfs_dir_item), which have a size of 30 bytes plus a file name, we typically log just a few range items (struct btrfs_dir_log_item), which take only 8 bytes each. Even if no entries were deleted from a directory and only new entries were added, we typically still get a reduction in the amount of logged metadata, because it's very likely the first leaf that got the new dir index entries also has several old dir index entries. So change the logging logic to not log dir index keys created in past transactions and to log a range item for every gap found between each pair of consecutive index keys, ensuring deletions are tracked and replayed on log replay.
This patch is part of a patchset comprised of the following patches:

  1/4 btrfs: don't log unnecessary boundary keys when logging directory
  2/4 btrfs: put initial index value of a directory in a constant
  3/4 btrfs: stop copying old dir items when logging a directory
  4/4 btrfs: stop trying to log subdirectories created in past transactions

The following test was run on a branch without this patchset and on a branch with the first three patches applied:

  $ cat test.sh
  #!/bin/bash

  DEV=/dev/nvme0n1
  MNT=/mnt/nvme0n1

  NUM_FILES=1000000
  NUM_FILE_DELETES=10000

  MKFS_OPTIONS="-O no-holes -R free-space-tree"
  MOUNT_OPTIONS="-o ssd"

  mkfs.btrfs -f $MKFS_OPTIONS $DEV
  mount $MOUNT_OPTIONS $DEV $MNT

  mkdir $MNT/testdir
  for ((i = 1; i <= $NUM_FILES; i++)); do
      echo -n > $MNT/testdir/file_$i
  done
  sync

  del_inc=$(( $NUM_FILES / $NUM_FILE_DELETES ))
  for ((i = 1; i <= $NUM_FILES; i += $del_inc)); do
      rm -f $MNT/testdir/file_$i
  done

  start=$(date +%s%N)
  xfs_io -c "fsync" $MNT/testdir
  end=$(date +%s%N)
  dur=$(( (end - start) / 1000000 ))
  echo "dir fsync took $dur ms after deleting $NUM_FILE_DELETES files"
  echo

  umount $MNT

The test was run on a non-debug kernel (Debian's default kernel config), and the results were the following for various values of NUM_FILES and NUM_FILE_DELETES:

  ** before, NUM_FILES = 1 000 000, NUM_FILE_DELETES = 10 000 **
  dir fsync took 585 ms after deleting 10000 files

  ** after, NUM_FILES = 1 000 000, NUM_FILE_DELETES = 10 000 **
  dir fsync took 34 ms after deleting 10000 files (-94.2%)

  ** before, NUM_FILES = 100 000, NUM_FILE_DELETES = 1 000 **
  dir fsync took 50 ms after deleting 1000 files

  ** after, NUM_FILES = 100 000, NUM_FILE_DELETES = 1 000 **
  dir fsync took 7 ms after deleting 1000 files (-86.0%)

  ** before, NUM_FILES = 10 000, NUM_FILE_DELETES = 100 **
  dir fsync took 9 ms after deleting 100 files

  ** after, NUM_FILES = 10 000, NUM_FILE_DELETES = 100 **
  dir fsync took 5 ms after deleting 100 files (-44.4%)

Signed-off-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
2022-03-14btrfs: put initial index value of a directory in a constantFilipe Manana
At btrfs_set_inode_index_count() we refer twice to the number 2 as the initial index value for a directory (when it's empty), with a proper comment explaining the reason for that value. In the next patch I'll have to use that magic value in the directory logging code, so put the value in a #define at btrfs_inode.h, to avoid hardcoding the magic value again at tree-log.c. Signed-off-by: Filipe Manana <fdmanana@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
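A sketch of the resulting define (the macro name is an assumption; the message only says it lives in btrfs_inode.h):

	/*
	 * Indexes 0 and 1 are reserved for the "." and ".." dir entries, so
	 * the first entry added to a directory gets index number 2.
	 */
	#define BTRFS_DIR_START_INDEX 2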
2022-03-14btrfs: don't log unnecessary boundary keys when logging directoryFilipe Manana
Before we start to log dir index keys from a leaf, we check if there is a previous index key, which normally is at the end of a leaf that was not changed in the current transaction. Then we log that key and set the start of the logged range (item of type BTRFS_DIR_LOG_INDEX_KEY) to the offset of that key. This is to ensure that if there were deleted index keys between that key and the first key we are going to log, those deletions are replayed in case we need to replay the log after a power failure. However we really don't need to log that previous key: we can just set the start of the logged range to that key's offset plus 1. This achieves the same and avoids logging one dir index key. The same logic is performed when we finish logging the index keys of a leaf and we find that the next leaf has index keys and was not changed in the current transaction. We log the first key of that next leaf and use its offset as the end of the range we log. This is just to ensure that if there were deleted index keys between the last index key we logged and the first key of that next leaf, those index keys are deleted if we end up replaying the log. However that is not necessary: we can avoid logging that first index key of the next leaf and instead set the end of the logged range to the offset of that index key minus 1. So avoid logging those index keys at the boundaries and adjust the start and end offsets of the logged ranges as described above.

This patch is part of a patchset comprised of the following patches:

  1/4 btrfs: don't log unnecessary boundary keys when logging directory
  2/4 btrfs: put initial index value of a directory in a constant
  3/4 btrfs: stop copying old dir items when logging a directory
  4/4 btrfs: stop trying to log subdirectories created in past transactions

Performance test results are listed in the changelog of patch 3/4. Signed-off-by: Filipe Manana <fdmanana@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2022-03-14btrfs: reuse existing pointers from btrfs_ioctlSahil Kang
btrfs_ioctl already contains pointers to the inode and btrfs_root structs, so we can pass them into the subfunctions instead of the toplevel struct file. Signed-off-by: Sahil Kang <sahil.kang@asilaycomputing.com> Signed-off-by: David Sterba <dsterba@suse.com>
2022-03-14btrfs: remove write and wait of struct walk_controlFilipe Manana
The ->write and ->wait fields of struct walk_control, used for log trees, have not been used since 2008, more specifically since commit d0c803c4049c5c ("Btrfs: Record dirty pages tree-log pages in an extent_io tree"). So just remove them, along with the function btrfs_write_tree_block(), which is also no longer used after removing the ->write member. Signed-off-by: Filipe Manana <fdmanana@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2022-03-14esp6: fix check on ipv6_skip_exthdr's return valueSabrina Dubroca
Commit 5f9c55c8066b ("ipv6: check return value of ipv6_skip_exthdr") introduced an incorrect check, which leads to all ESP packets over either TCPv6 or UDPv6 encapsulation being dropped. In this particular case, offset is negative, since skb->data points to the ESP header in the following chain of headers, while skb->network_header points to the IPv6 header: IPv6 | ext | ... | ext | UDP | ESP | ... That doesn't seem to be a problem, especially considering that if we reach esp6_input_done2, we're guaranteed to have a full set of headers available (otherwise the packet would have been dropped earlier in the stack). However, it means that the return value will (intentionally) be negative. We can make the test more specific, as the expected return value of ipv6_skip_exthdr will be the (negated) size of either a UDP header, or a TCP header with possible options. In the future, we should probably either make ipv6_skip_exthdr explicitly accept negative offsets (and adjust its return value for error cases), or make ipv6_skip_exthdr only take non-negative offsets (and audit all callers). Fixes: 5f9c55c8066b ("ipv6: check return value of ipv6_skip_exthdr") Reported-by: Xiumei Mu <xmu@redhat.com> Signed-off-by: Sabrina Dubroca <sd@queasysnail.net> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
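A hedged sketch of what such a tightened check could look like (not the verbatim patch; the bounds follow from the header sizes named above: 8 bytes for UDP, 20 to 60 bytes for TCP with options):

	offset = ipv6_skip_exthdr(skb, offset, &nexthdr, &frag_off);
	/* offset is legitimately negative here: it should be minus the size
	 * of a UDP header, or minus a TCP header with possible options. */
	if (!(offset == -(int)sizeof(struct udphdr) ||
	      (offset >= -60 && offset <= -(int)sizeof(struct tcphdr))))
		goto out;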
2022-03-14net: dsa: microchip: add spi_device_id tablesClaudiu Beznea
Add spi_device_id tables to avoid logs like "SPI driver ksz9477-switch has no spi_device_id". Signed-off-by: Claudiu Beznea <claudiu.beznea@microchip.com> Signed-off-by: David S. Miller <davem@davemloft.net>
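The shape of such a table, with illustrative entries (the real patch lists an ID for each supported switch):

	static const struct spi_device_id ksz_spi_ids[] = {
		{ "ksz9477" },		/* example IDs; the real table is longer */
		{ "ksz9897" },
		{ /* sentinel */ }
	};
	MODULE_DEVICE_TABLE(spi, ksz_spi_ids);

The driver's spi_driver .id_table member is then pointed at this array so the SPI core can match devices without emitting the warning.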
2022-03-14Merge tag 'irqchip-5.18' of ↵Thomas Gleixner
git://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms into irq/core

Pull irqchip updates from Marc Zyngier:

 - Add support for the STM32MP13 variant
 - Move parent device away from struct irq_chip
 - Remove all instances of non-const strings assigned to struct irq_chip::name, enabling a nice cleanup for VIC and GIC
 - Simplify the Qualcomm PDC driver
 - A bunch of SiFive PLIC cleanups
 - Add support for a new variant of the Meson GPIO block
 - Add support for the irqchip side of the Apple M1 PMU
 - Add support for the Apple M1 Pro/Max AICv2 irqchip
 - Add support for the Qualcomm MPM wakeup gadget
 - Move the Xilinx driver over to the generic irqdomain handling
 - Tiny speedup for IPIs on GICv3 systems
 - The usual odd cleanups

Link: https://lore.kernel.org/all/20220313105142.704579-1-maz@kernel.org
2022-03-14Merge tag 'timers-v5.18-rc1' of ↵Thomas Gleixner
https://git.linaro.org/people/daniel.lezcano/linux into timers/core

Pull clocksource/events updates from Daniel Lezcano:

 - Fix return error code check for the timer-of layer when getting the base address (Guillaume Ranquet)
 - Remove MMIO dependency, add notrace annotation for sched_clock and increase the timer resolution for the Microchip PIT64b (Claudiu Beznea)
 - Convert DT bindings to yaml for the Tegra timer (David Heidelberg)
 - Fix compilation error on architecture other than ARM for the i.MX TPM (Nathan Chancellor)
 - Add support for the event stream scaling for 1GHz counter on the arch ARM timer (Marc Zyngier)
 - Support a higher number of interrupts by the Exynos MCT timer driver (Alim Akhtar)
 - Detect and prevent memory corruption when the specified number of interrupts in the DTS is greater than the array size in the code for the Exynos MCT timer (Krzysztof Kozlowski)
 - Fix regression from a previous errata fix on the TI DM timer (Drew Fustini)
 - Several fixes and code improvements for the i.MX TPM driver (Peng Fan)

Link: https://lore.kernel.org/all/a8cd9be9-7d70-80df-2b74-1a8226a215e1@linaro.org
2022-03-14Merge branch 'timers/core' of ↵Thomas Gleixner
git://git.kernel.org/pub/scm/linux/kernel/git/frederic/linux-dynticks into timers/core

Pull tick/NOHZ updates from Frederic Weisbecker:

 - A fix for rare jiffies update stalls that were reported by Paul McKenney
 - Tick side cleanups after RCU_FAST_NO_HZ removal
 - Handle softirqs on idle more gracefully

Link: https://lore.kernel.org/all/20220307233034.34550-1-frederic@kernel.org
2022-03-14nvmet: use snprintf() with PAGE_SIZE in configfsChaitanya Kulkarni
Instead of using sprintf, use snprintf with buffer size limited to PAGE_SIZE just like what we have for the rest of the file. Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
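The pattern in question, sketched with a hypothetical attribute name and value:

	static ssize_t nvmet_example_attr_show(struct config_item *item, char *page)
	{
		/* page is a PAGE_SIZE buffer, so bound the format accordingly */
		return snprintf(page, PAGE_SIZE, "%d\n", example_value);
	}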
2022-03-14nvmet: don't fold linesChaitanya Kulkarni
Don't fold lines that can fit within the 80 character limit. No functional change in this patch. Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
2022-03-14nvmet-rdma: fix kernel-doc warning for nvmet_rdma_device_removalChaitanya Kulkarni
This fixes the following kernel-doc warning:

  drivers/nvme/target/rdma.c:1722: warning: expecting prototype for nvme_rdma_device_removal(). Prototype was for nvmet_rdma_device_removal() instead

Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
2022-03-14nvmet-fc: fix kernel-doc warning for nvmet_fc_unregister_targetportChaitanya Kulkarni
This fixes the following kernel-doc warning:

  drivers/nvme/target/fc.c:1619: warning: expecting prototype for nvme_fc_unregister_targetport(). Prototype was for nvmet_fc_unregister_targetport() instead

Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
2022-03-14nvmet-fc: fix kernel-doc warning for nvmet_fc_register_targetportChaitanya Kulkarni
This fixes the following kernel-doc warning:

  drivers/nvme/target/fc.c:1365: warning: expecting prototype for nvme_fc_register_targetport(). Prototype was for nvmet_fc_register_targetport() instead

Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
2022-03-14nvme-tcp: lockdep: annotate in-kernel socketsChris Leech
Put NVMe/TCP sockets in their own class to avoid some lockdep warnings. Sockets created by nvme-tcp are not exposed to user-space, and will not trigger certain code paths that the general socket API exposes. Lockdep complains about a circular dependency between the socket and filesystem locks, because setsockopt can trigger a page fault with a socket lock held, but nvme-tcp sends requests on the socket while file system locks are held.

  ======================================================
  WARNING: possible circular locking dependency detected
  5.15.0-rc3 #1 Not tainted
  ------------------------------------------------------
  fio/1496 is trying to acquire lock:
  (sk_lock-AF_INET){+.+.}-{0:0}, at: tcp_sendpage+0x23/0x80

  but task is already holding lock:
  (&xfs_dir_ilock_class/5){+.+.}-{3:3}, at: xfs_ilock+0xcf/0x290 [xfs]

  which lock already depends on the new lock.

  other info that might help us debug this:

  chain exists of:
    sk_lock-AF_INET --> sb_internal --> &xfs_dir_ilock_class/5

  Possible unsafe locking scenario:

        CPU0                    CPU1
        ----                    ----
   lock(&xfs_dir_ilock_class/5);
                                lock(sb_internal);
                                lock(&xfs_dir_ilock_class/5);
   lock(sk_lock-AF_INET);

   *** DEADLOCK ***

  6 locks held by fio/1496:
   #0: (sb_writers#13){.+.+}-{0:0}, at: path_openat+0x9fc/0xa20
   #1: (&inode->i_sb->s_type->i_mutex_dir_key){++++}-{3:3}, at: path_openat+0x296/0xa20
   #2: (sb_internal){.+.+}-{0:0}, at: xfs_trans_alloc_icreate+0x41/0xd0 [xfs]
   #3: (&xfs_dir_ilock_class/5){+.+.}-{3:3}, at: xfs_ilock+0xcf/0x290 [xfs]
   #4: (hctx->srcu){....}-{0:0}, at: hctx_lock+0x51/0xd0
   #5: (&queue->send_mutex){+.+.}-{3:3}, at: nvme_tcp_queue_rq+0x33e/0x380 [nvme_tcp]

This annotation lets lockdep analyze nvme-tcp controlled sockets independently of what the user-space sockets API does.

Link: https://lore.kernel.org/linux-nvme/CAHj4cs9MDYLJ+q+2_GXUK9HxFizv2pxUryUR0toX974M040z7g@mail.gmail.com/
Signed-off-by: Chris Leech <cleech@redhat.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
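Such an annotation can be made with the kernel's sock_lock_init_class_and_name() helper; a minimal sketch, with the class-name strings and function name chosen for illustration:

	static struct lock_class_key nvme_tcp_sk_key;
	static struct lock_class_key nvme_tcp_slock_key;

	static void nvme_tcp_set_queue_io_class(struct socket *sock)
	{
		/* Give nvme-tcp sockets a lockdep class distinct from
		 * user-space AF_INET sockets. */
		sock_lock_init_class_and_name(sock->sk,
				"slock-AF_INET-NVMe", &nvme_tcp_slock_key,
				"sk_lock-AF_INET-NVMe", &nvme_tcp_sk_key);
	}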
2022-03-14nvme-tcp: don't fold the lineChaitanya Kulkarni
The call to nvme_tcp_alloc_queue() fits perfectly on one line without exceeding the 80 character limit. Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com> Reviewed-by: Sagi Grimberg <sagi@grimberg.me> Signed-off-by: Christoph Hellwig <hch@lst.de>
2022-03-14nvme-tcp: don't initialize ret variableChaitanya Kulkarni
No point in initializing the ret variable to 0 in nvme_tcp_start_io_queue() since it gets overwritten by a call to nvme_tcp_start_queue(). Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com> Reviewed-by: Sagi Grimberg <sagi@grimberg.me> Signed-off-by: Christoph Hellwig <hch@lst.de>
2022-03-14nvme-multipath: call bio_io_error in nvme_ns_head_submit_bioGuoqing Jiang
Use the bio_io_error() helper here, since the open-coded error path does exactly what bio_io_error() already does. Signed-off-by: Guoqing Jiang <guoqing.jiang@linux.dev> Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
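For reference, bio_io_error() is a thin helper in include/linux/bio.h that performs exactly this sequence:

	static inline void bio_io_error(struct bio *bio)
	{
		bio->bi_status = BLK_STS_IOERR;
		bio_endio(bio);
	}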
2022-03-14nvme-multipath: use vmalloc for ANA log bufferHannes Reinecke
The ANA log buffer can get really large, as it depends on the controller configuration. So, to avoid an out-of-memory issue during scanning, use kvmalloc() instead of kmalloc(). Signed-off-by: Hannes Reinecke <hare@suse.de> Tested-by: Daniel Wagner <dwagner@suse.de> Signed-off-by: Daniel Wagner <dwagner@suse.de> Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
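A sketch of the shape of the change (ana_log_buf and ana_log_size follow the nvme-multipath field names, but treat the snippet as illustrative):

	/* kvmalloc() falls back to vmalloc when the contiguous kmalloc
	 * allocation would fail for a large ANA log. */
	ctrl->ana_log_buf = kvmalloc(ctrl->ana_log_size, GFP_KERNEL);
	if (!ctrl->ana_log_buf)
		return -ENOMEM;

	/* ... and the matching free must become: */
	kvfree(ctrl->ana_log_buf);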
2022-03-14crypto: xilinx - Turn SHA into a tristate and allow COMPILE_TESTHerbert Xu
This patch turns the new SHA driver into a tristate and also allows compile testing. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-03-14MAINTAINERS: update HPRE/SEC2/TRNG driver maintainers listLongfang Liu
Zaibo has moved to other projects and is no longer looking into crypto; I am now responsible for reviewing the patches for these modules, so the maintainers list needs to be updated. I take care of HPRE, Weili Qian takes care of TRNG, and Kai Ye and I take care of SEC2. Signed-off-by: Longfang Liu <liulongfang@huawei.com> Signed-off-by: Kai Ye <yekai12@huawei.com> Signed-off-by: Weili Qian <qianweili@huawei.com> Signed-off-by: Zaibo Xu <xuzaibo@huawei.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-03-14crypto: dh - Remove the unused function dh_safe_prime_dh_alg()Jiapeng Chong
Fix the following W=1 kernel warnings: crypto/dh.c:311:31: warning: unused function 'dh_safe_prime_dh_alg' [-Wunused-function] Reported-by: Abaci Robot <abaci@linux.alibaba.com> Signed-off-by: Jiapeng Chong <jiapeng.chong@linux.alibaba.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-03-14hwrng: nomadik - Change clk_disable to clk_disable_unprepareMiaoqian Lin
The counterpart of clk_prepare_enable() is clk_disable_unprepare(), not clk_disable(). Fix this by changing clk_disable() to clk_disable_unprepare(). Fixes: beca35d05cc2 ("hwrng: nomadik - use clk_prepare_enable()") Signed-off-by: Miaoqian Lin <linmq006@gmail.com> Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
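The two APIs must pair up; a minimal sketch with an illustrative clock variable:

	ret = clk_prepare_enable(rng_clk);	/* prepare + enable in one call */
	if (ret)
		return ret;
	/* ... use the clock ... */
	clk_disable_unprepare(rng_clk);		/* so teardown must also unprepare */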
2022-03-14crypto: qcom-rng - ensure buffer for generate is completely filledBrian Masney
The generate function in struct rng_alg expects that the destination buffer is completely filled if the function returns 0. qcom_rng_read() can run into a situation where the buffer is partially filled with randomness and the remaining part of the buffer is zeroed, since qcom_rng_generate() doesn't check the return value. This issue can be reproduced by running the following from libkcapi:

  kcapi-rng -b 9000000 > OUTFILE

The generated OUTFILE will have three huge sections that contain all zeros, and this is caused by the code where the test 'val & PRNG_STATUS_DATA_AVAIL' fails. Let's fix this issue by ensuring that qcom_rng_read() always returns with a full buffer if the function returns success. Let's also have qcom_rng_generate() return the correct value. Here are some statistics from the ent project (https://www.fourmilab.ch/random/) that show the quality of the generated numbers:

  $ ent -c qcom-random-before
  Value  Occurrences  Fraction
    0       606748    0.067416
    1        33104    0.003678
    2        33001    0.003667
  ...
  253       32883    0.003654
  254       33035    0.003671
  255       33239    0.003693
  Total:  9000000    1.000000

  Entropy = 7.811590 bits per byte.
  Optimum compression would reduce the size of this 9000000 byte file by 2 percent.
  Chi square distribution for 9000000 samples is 9329962.81, and randomly would exceed this value less than 0.01 percent of the times.
  Arithmetic mean value of data bytes is 119.3731 (127.5 = random).
  Monte Carlo value for Pi is 3.197293333 (error 1.77 percent).
  Serial correlation coefficient is 0.159130 (totally uncorrelated = 0.0).

Without this patch, the result of the chi-square test is 0.01%, and the numbers are certainly not random according to ent's project page. The results improve with this patch:

  $ ent -c qcom-random-after
  Value  Occurrences  Fraction
    0        35432    0.003937
    1        35127    0.003903
    2        35424    0.003936
  ...
  253       35201    0.003911
  254       34835    0.003871
  255       35368    0.003930
  Total:  9000000    1.000000

  Entropy = 7.999979 bits per byte.
  Optimum compression would reduce the size of this 9000000 byte file by 0 percent.
  Chi square distribution for 9000000 samples is 258.77, and randomly would exceed this value 42.24 percent of the times.
  Arithmetic mean value of data bytes is 127.5006 (127.5 = random).
  Monte Carlo value for Pi is 3.141277333 (error 0.01 percent).
  Serial correlation coefficient is 0.000468 (totally uncorrelated = 0.0).

This change was tested on a Nexus 5 phone (msm8974 SoC). Signed-off-by: Brian Masney <bmasney@redhat.com> Fixes: ceec5f5b5988 ("crypto: qcom-rng - Add Qcom prng driver") Cc: stable@vger.kernel.org # 4.19+ Reviewed-by: Bjorn Andersson <bjorn.andersson@linaro.org> Reviewed-by: Andrew Halaney <ahalaney@redhat.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
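A hedged sketch of the fix's shape, assuming qcom_rng_read() may return fewer bytes than requested and waits/retries internally when no data is available:

	/* Keep reading until the caller's buffer is completely filled, so a
	 * transient empty-FIFO pass can no longer leave a zeroed tail. */
	while (dlen > 0) {
		int got = qcom_rng_read(rng, buf, dlen);

		if (got < 0)
			return got;
		buf += got;
		dlen -= got;
	}
	return 0;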
2022-03-13Linux 5.17-rc8v5.17-rc8Linus Torvalds
2022-03-13drm/mgag200: Fix PLL setup for g200wb and g200ewJocelyn Falempe
commit f86c3ed55920 ("drm/mgag200: Split PLL setup into compute and update functions") introduced a regression for g200wb and g200ew. The PLLs are not set up properly, and the VGA screen stays black or displays an "out of range" message. MGA1064_WB_PIX_PLLC_N/M/P was mistakenly replaced with MGA1064_PIX_PLLC_N/M/P, which have different addresses. Patch tested on a Dell T310 with g200wb. Fixes: f86c3ed55920 ("drm/mgag200: Split PLL setup into compute and update functions") Cc: stable@vger.kernel.org Signed-off-by: Jocelyn Falempe <jfalempe@redhat.com> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de> Link: https://patchwork.freedesktop.org/patch/msgid/20220308174321.225606-1-jfalempe@redhat.com
2022-03-13Merge tag 'x86_urgent_for_v5.17_rc8' of ↵Linus Torvalds
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull x86 fixes from Borislav Petkov:

 - Free shmem backing storage for SGX enclave pages when those are swapped back into EPC memory
 - Prevent do_int3() from being kprobed, to avoid recursion
 - Remap setup_data and setup_indirect structures properly when accessing their members
 - Correct the alternatives patching order for modules too

* tag 'x86_urgent_for_v5.17_rc8' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/sgx: Free backing memory after faulting the enclave page
  x86/traps: Mark do_int3() NOKPROBE_SYMBOL
  x86/boot: Add setup_indirect support in early_memremap_is_setup_data()
  x86/boot: Fix memremap of setup_indirect structures
  x86/module: Fix the paravirt vs alternative order
2022-03-12random: check for signal and try earlier when generating entropyJason A. Donenfeld
Rather than waiting a full second in an interruptable waiter before trying to generate entropy, try to generate entropy first and wait second. While waiting one second might give an extra second for getting entropy from elsewhere, we're already pretty late in the init process here, and whatever else is generating entropy will still continue to contribute. This has implications on signal handling: we call try_to_generate_entropy() from wait_for_random_bytes(), and wait_for_random_bytes() always uses wait_event_interruptible_timeout() when waiting, since it's called by userspace code in restartable contexts, where signals can pend. Since try_to_generate_entropy() now runs first, if a signal is pending, it's necessary for try_to_generate_entropy() to check for signals, since it won't hit the wait until after try_to_generate_entropy() has returned. And even before this change, when entering a busy loop in try_to_generate_entropy(), we should have been checking to see if any signals are pending, so that a process doesn't get stuck in that loop longer than expected. Cc: Theodore Ts'o <tytso@mit.edu> Reviewed-by: Dominik Brodowski <linux@dominikbrodowski.net> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2022-03-12random: reseed more often immediately after bootingJason A. Donenfeld
In order to chip away at the "premature first" problem, we augment our existing entropy accounting with more frequent reseedings at boot. The idea is that at boot, we're getting entropy from various places, and we're not very sure which of early boot entropy is good and which isn't. Even when we're crediting the entropy, we're still not totally certain that it's any good. Since boot is the one time (aside from a compromise) that we have zero entropy, it's important that we shepherd entropy into the crng fairly often. At the same time, we don't want a "premature next" problem, whereby an attacker can brute force individual bits of added entropy. In lieu of going full-on Fortuna (for now), we can pick a simpler strategy of just reseeding more often during the first 5 minutes after boot. This is still bounded by the 256-bit entropy credit requirement, so we'll skip a reseeding if we haven't reached that, but in case entropy /is/ coming in, this ensures that it makes its way into the crng rather rapidly during these early stages. Ordinarily we reseed if the previous reseeding is 300 seconds old. This commit changes things so that for the first 600 seconds of boot time, we reseed if the previous reseeding is uptime / 2 seconds old. That means that we'll reseed at the very least double the uptime of the previous reseeding. Cc: Theodore Ts'o <tytso@mit.edu> Reviewed-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
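A sketch of that schedule; the names approximate the random.c internals and should be read as illustrative rather than exact:

	static bool crng_has_old_seed(void)
	{
		unsigned long interval = CRNG_RESEED_INTERVAL;	/* normally 300 * HZ */
		time64_t uptime = ktime_get_seconds();

		/* For the first 600 seconds of boot, consider the seed old
		 * once it is uptime / 2 seconds stale (never less than 5 s). */
		if (uptime < 600)
			interval = max_t(unsigned long, 5 * HZ,
					 (unsigned long)uptime * HZ / 2);

		return time_after(jiffies, READ_ONCE(base_crng.birth) + interval);
	}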
2022-03-12ext4: do not call FC trace event in ext4_fc_commit() if FS does not support FCRitesh Harjani
This just moves the trace_ext4_fc_commit_start(sb) and ktime_get() calls, used for measuring FC commit time, to after the check of whether sb supports JOURNAL_FAST_COMMIT or not. Signed-off-by: Ritesh Harjani <riteshh@linux.ibm.com> Reviewed-by: Harshad Shirwadkar <harshadshirwadkar@gmail.com> Reviewed-by: Jan Kara <jack@suse.cz> Link: https://lore.kernel.org/r/d53cf3e535924ec0a1eb41a560e96561b0727e7a.1647057583.git.riteshh@linux.ibm.com Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2022-03-12ext4: convert ext4_fc_track_dentry type events to use event classRitesh Harjani
One should use DECLARE_EVENT_CLASS for similar event types instead of defining TRACE_EVENT for each event type. This is helpful in reducing the text section footprint; see e.g. [1]. [1]: https://lwn.net/Articles/381064/ Signed-off-by: Ritesh Harjani <riteshh@linux.ibm.com> Reviewed-by: Harshad Shirwadkar <harshadshirwadkar@gmail.com> Reviewed-by: Jan Kara <jack@suse.cz> Link: https://lore.kernel.org/r/a019cb46219ef4b30e4d98d7ced7d8819a2fc61d.1647057583.git.riteshh@linux.ibm.com Signed-off-by: Theodore Ts'o <tytso@mit.edu>
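The tracepoint pattern being applied, sketched with an illustrative field set rather than the exact ext4 trace points:

	DECLARE_EVENT_CLASS(ext4_fc_track_dentry,
		TP_PROTO(struct inode *inode, struct dentry *dentry, int ret),
		TP_ARGS(inode, dentry, ret),
		TP_STRUCT__entry(
			__field(dev_t, dev)
			__field(ino_t, ino)
			__field(int, error)
		),
		TP_fast_assign(
			__entry->dev = inode->i_sb->s_dev;
			__entry->ino = inode->i_ino;
			__entry->error = ret;
		),
		TP_printk("dev %d,%d ino %lu error %d",
			  MAJOR(__entry->dev), MINOR(__entry->dev),
			  (unsigned long)__entry->ino, __entry->error)
	);

	/* Each concrete event then reuses the class instead of a full TRACE_EVENT: */
	DEFINE_EVENT(ext4_fc_track_dentry, ext4_fc_track_create,
		TP_PROTO(struct inode *inode, struct dentry *dentry, int ret),
		TP_ARGS(inode, dentry, ret)
	);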
2022-03-12ext4: fix ext4_fc_stats trace pointRitesh Harjani
ftrace's __print_symbolic() requires that any enum values used in the symbol to string translation table be wrapped in a TRACE_DEFINE_ENUM so that the enum value can be decoded from the ftrace ring buffer by user space tooling. This patch also fixes a few other problems found in this trace point, e.g. dereferencing structures in TP_printk, which should never be done. Also, to avoid checkpatch warnings, this patch fixes some whitespace/tab stop issues. Cc: stable@kernel.org Fixes: aa75f4d3daae ("ext4: main fast-commit commit path") Reported-by: Steven Rostedt <rostedt@goodmis.org> Signed-off-by: Ritesh Harjani <riteshh@linux.ibm.com> Reviewed-by: Jan Kara <jack@suse.cz> Reviewed-by: Steven Rostedt (Google) <rostedt@goodmis.org> Reviewed-by: Harshad Shirwadkar <harshadshirwadkar@gmail.com> Link: https://lore.kernel.org/r/b4b9691414c35c62e570b723e661c80674169f9a.1647057583.git.riteshh@linux.ibm.com Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2022-03-12ext4: remove unused enum EXT4_FC_COMMIT_FAILEDRitesh Harjani
The commit below removed all references to EXT4_FC_COMMIT_FAILED: commit 0915e464cb274 ("ext4: simplify updating of fast commit stats"). Just remove the enum since it is not used anymore. Signed-off-by: Ritesh Harjani <riteshh@linux.ibm.com> Reviewed-by: Jan Kara <jack@suse.cz> Reviewed-by: Harshad Shirwadkar <harshadshirwadkar@gmail.com> Link: https://lore.kernel.org/r/c941357e476be07a1138c7319ca5faab7fb80fc6.1647057583.git.riteshh@linux.ibm.com Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2022-03-12ext4: warn when dirtying page w/o buffers in data=journal modeJan Kara
Recently I got a report of a BUG_ON triggering during transaction commit in ext4_journalled_writepage_callback() because we spotted a dirty page without buffers. Add WARN_ON_ONCE to ext4_journalled_set_page_dirty() to catch the problematic condition earlier, where we have a better chance of understanding which code path is creating dirty data without preparing the page properly. Also update the comment with current information while we are at it. Signed-off-by: Jan Kara <jack@suse.cz> Link: https://lore.kernel.org/r/20220310101832.5645-1-jack@suse.cz Signed-off-by: Theodore Ts'o <tytso@mit.edu>
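A sketch of the resulting hook (close to the ext4 code, though the exact body may differ):

	static int ext4_journalled_set_page_dirty(struct page *page)
	{
		/* In data=journal mode a dirty page must have buffers attached;
		 * warn early so the offending call path can be identified. */
		WARN_ON_ONCE(!page_has_buffers(page));
		SetPageChecked(page);
		return __set_page_dirty_nobuffers(page);
	}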
2022-03-12doc: fixed a typo in ext4 documentationlianzhi chang
The unit of file system size should be TiB, not PiB Signed-off-by: lianzhi chang <changlianzhi@uniontech.com> Link: https://lore.kernel.org/r/20220310014415.29937-1-changlianzhi@uniontech.com Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2022-03-12ext4: make mb_optimize_scan performance mount option work with extentsOjaswin Mujoo
Currently the mb_optimize_scan feature, which heavily improves filesystem performance when the FS is fragmented, does not seem to work for files with extents (ext4 creates extent-based files by default). This patch fixes that and makes the mb_optimize_scan feature work for files with extents. Below are some performance numbers obtained when allocating a 10M and a 100M file, with and without this patch, on a filesystem with no 1M contiguous block.

  <perf numbers>
  ===============
  Workload: dd if=/dev/urandom of=test conv=fsync bs=1M count=10/100

  Time taken
  =====================================================
  no.  Size   without-patch   with-patch   Diff(%)
  1    10M    0m8.401s        0m5.623s     33.06%
  2    100M   1m40.465s       1m14.737s    25.6%

  <debug stats>
  =============
  w/o patch:
    mballoc:
      reqs: 17056
      success: 11407
      groups_scanned: 13643
      cr0_stats:
        hits: 37
        groups_considered: 9472
        useless_loops: 36
        bad_suggestions: 0
      cr1_stats:
        hits: 11418
        groups_considered: 908560
        useless_loops: 1894
        bad_suggestions: 0
      cr2_stats:
        hits: 1873
        groups_considered: 6913
        useless_loops: 21
      cr3_stats:
        hits: 21
        groups_considered: 5040
        useless_loops: 21
      extents_scanned: 417364
        goal_hits: 3707
        2^n_hits: 37
        breaks: 1873
        lost: 0
      buddies_generated: 239/240
      buddies_time_used: 651080
      preallocated: 705
      discarded: 478

  with patch:
    mballoc:
      reqs: 12768
      success: 11305
      groups_scanned: 12768
      cr0_stats:
        hits: 1
        groups_considered: 18
        useless_loops: 0
        bad_suggestions: 0
      cr1_stats:
        hits: 5829
        groups_considered: 50626
        useless_loops: 0
        bad_suggestions: 0
      cr2_stats:
        hits: 6938
        groups_considered: 580363
        useless_loops: 0
      cr3_stats:
        hits: 0
        groups_considered: 0
        useless_loops: 0
      extents_scanned: 309059
        goal_hits: 0
        2^n_hits: 1
        breaks: 1463
        lost: 0
      buddies_generated: 239/240
      buddies_time_used: 791392
      preallocated: 673
      discarded: 446

Fixes: 196e402 (ext4: improve cr 0 / cr 1 group scanning) Cc: stable@kernel.org Reported-by: Geetika Moolchandani <Geetika.Moolchandani1@ibm.com> Reported-by: Nageswara R Sastry <rnsastry@linux.ibm.com> Suggested-by: Ritesh Harjani <riteshh@linux.ibm.com> Signed-off-by: Ojaswin Mujoo <ojaswin@linux.ibm.com> Link: https://lore.kernel.org/r/fc9a48f7f8dcfc83891a8b21f6dd8cdf056ed810.1646732698.git.ojaswin@linux.ibm.com Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2022-03-12ext4: make mb_optimize_scan option work with set/unset mount cmdOjaswin Mujoo
After moving to the new mount API, mb_optimize_scan mount option handling was not working as expected because the parsed value was always being overwritten by the default. Refactor and fix this to the expected behavior described below:

  * mb_optimize_scan=1 - On
  * mb_optimize_scan=0 - Off
  * mb_optimize_scan not passed - On if no. of BGs > threshold, else off
  * Remounts retain the previous value unless we explicitly pass the option with a new value

Fixes: cebe85d570cf ("ext4: switch to the new mount api") Cc: stable@kernel.org Reported-by: Ritesh Harjani <riteshh@linux.ibm.com> Signed-off-by: Ojaswin Mujoo <ojaswin@linux.ibm.com> Reviewed-by: Ritesh Harjani <riteshh@linux.ibm.com> Link: https://lore.kernel.org/r/c98970fe99f26718586d02e942f293300fb48ef3.1646732698.git.ojaswin@linux.ibm.com Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2022-03-12random: make consistent usage of crng_ready()Jason A. Donenfeld
Rather than sometimes checking `crng_init < 2`, we should always use the crng_ready() macro, so that should we change anything later, it's consistent. Additionally, that macro already has a likely() around it, which means we don't need to open code our own likely() and unlikely() annotations. Cc: Theodore Ts'o <tytso@mit.edu> Reviewed-by: Dominik Brodowski <linux@dominikbrodowski.net> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
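For reference, at this point in the series the macro is roughly:

	#define crng_ready() (likely(crng_init > 1))

so callers need no extra likely()/unlikely() annotations of their own.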
2022-03-12random: use SipHash as interrupt entropy accumulatorJason A. Donenfeld
The current fast_mix() function is a piece of classic mailing list crypto, where it just sort of sprung up by an anonymous author without a lot of real analysis of what precisely it was accomplishing. As an ARX permutation alone, there are some easily searchable differential trails in it, and as a means of preventing malicious interrupts, it completely fails, since it xors new data into the entire state every time. It can't really be analyzed as a random permutation, because it clearly isn't, and it can't be analyzed as an interesting linear algebraic structure either, because it's also not that. There really is very little one can say about it in terms of entropy accumulation. It might diffuse bits, some of the time, maybe, we hope, I guess. But for the most part, it fails to accomplish anything concrete. As a reminder, the simple goal of add_interrupt_randomness() is to simply accumulate entropy until ~64 interrupts have elapsed, and then dump it into the main input pool, which uses a cryptographic hash. It would be nice to have something cryptographically strong in the interrupt handler itself, in case a malicious interrupt compromises a per-cpu fast pool within the 64 interrupts / 1 second window, and then inside of that same window somehow can control its return address and cycle counter, even if that's a bit far fetched. However, with a very CPU-limited budget, actually doing that remains an active research project (and perhaps there'll be something useful for Linux to come out of it). And while the abundance of caution would be nice, this isn't *currently* the security model, and we don't yet have a fast enough solution to make it our security model. Plus there's not exactly a pressing need to do that. (And for the avoidance of doubt, the actual cluster of 64 accumulated interrupts still gets dumped into our cryptographically secure input pool.) So, for now we are going to stick with the existing interrupt security model, which assumes that each cluster of 64 interrupt data samples is mostly non-malicious and not colluding with an infoleaker. With this as our goal, we have a few more choices, simply aiming to accumulate entropy, while discarding the least amount of it. We know from <https://eprint.iacr.org/2019/198> that random oracles, instantiated as computational hash functions, make good entropy accumulators and extractors, which is the justification for using BLAKE2s in the main input pool. As mentioned, we don't have that luxury here, but we also don't have the same security model requirements, because we're assuming that there aren't malicious inputs. A pseudorandom function instance can approximately behave like a random oracle, provided that the key is uniformly random. But since we're not concerned with malicious inputs, we can pick a fixed key, which is not secret, knowing that "nature" won't interact with a sufficiently chosen fixed key by accident. So we pick a PRF with a fixed initial key, and accumulate into it continuously, dumping the result every 64 interrupts into our cryptographically secure input pool. For this, we make use of SipHash-1-x on 64-bit and HalfSipHash-1-x on 32-bit, which are already in use in the kernel's hsiphash family of functions and achieve the same performance as the function they replace. It would be nice to do two rounds, but we don't exactly have the CPU budget handy for that, and one round alone is already sufficient. 
As mentioned, we start with a fixed initial key (zeros is fine), and allow SipHash's symmetry breaking constants to turn that into a useful starting point. Also, since we're dumping the result (or half of it on 64-bit so as to tax our hash function the same amount on all platforms) into the cryptographically secure input pool, there's no point in finalizing SipHash's output, since it'll wind up being finalized by something much stronger. This means that all we need to do is use the ordinary round function word-by-word, as normal SipHash does. Simplified, the flow is as follows:

  Initialize:
      siphash_state_t state;
      siphash_init(&state, key={0, 0, 0, 0});

  Update (accumulate) on interrupt:
      siphash_update(&state, interrupt_data_and_timing);

  Dump into input pool after 64 interrupts:
      blake2s_update(&input_pool, &state, sizeof(state) / 2);

The result of all of this is that the security model is unchanged from before -- we assume non-malicious inputs -- yet we now implement that model with a stronger argument. I would like to emphasize, again, that the purpose of this commit is to improve the existing design, by making it analyzable, without changing any fundamental assumptions. There may well be value down the road in changing up the existing design, using something cryptographically strong, or simply using a ring buffer of samples rather than having a fast_mix() at all, or changing which and how much data we collect each interrupt so that we can use something linear, or a variety of other ideas. This commit does not invalidate the potential for those in the future. For example, in the future, if we're able to characterize the data we're collecting on each interrupt, we may be able to inch toward information theoretic accumulators. <https://eprint.iacr.org/2021/523> shows that `s = ror32(s, 7) ^ x` and `s = ror64(s, 19) ^ x` make very good accumulators for 2-monotone distributions, which would apply to timestamp counters, like random_get_entropy() or jiffies, but would not apply to our current combination of the two values, or to the various function addresses and register values we mix in. Alternatively, <https://eprint.iacr.org/2021/1002> shows that max-period linear functions with no non-trivial invariant subspace make good extractors, used in the form `s = f(s) ^ x`. However, this only works if the input data is both identical and independent, and obviously a collection of address values and counters fails; so it goes with theoretical papers. Future directions here may involve trying to characterize more precisely what we actually need to collect in the interrupt handler, and building something specific around that. However, as mentioned, the morass of data we're gathering at the interrupt handler presently defies characterization, and so we use SipHash for now, which works well and performs well. Cc: Theodore Ts'o <tytso@mit.edu> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Reviewed-by: Jean-Philippe Aumasson <jeanphilippe.aumasson@gmail.com> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2022-03-12wireguard: device: clear keys on VM forkJason A. Donenfeld
When a virtual machine forks, it's important that WireGuard clear existing sessions so that different plaintexts are not transmitted using the same key+nonce, which can result in catastrophic cryptographic failure. To accomplish this, we simply hook into the newly added vmfork notifier. As a bonus, it turns out that, like the vmfork registration function, the PM registration function is stubbed out when CONFIG_PM_SLEEP is not set, so we can actually just remove the maze of ifdefs, which makes it really quite clean to support both notifiers at once. Cc: Dominik Brodowski <linux@dominikbrodowski.net> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Theodore Ts'o <tytso@mit.edu> Acked-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2022-03-12random: provide notifier for VM forkJason A. Donenfeld
Drivers such as WireGuard need to learn when VMs fork in order to clear sessions. This commit provides a simple notifier_block for that, with a register and unregister function. When no VM fork detection is compiled in, this turns into a no-op, similar to how the power notifier works. Cc: Dominik Brodowski <linux@dominikbrodowski.net> Cc: Theodore Ts'o <tytso@mit.edu> Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
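A sketch of the consumer side, as a driver like WireGuard might use it (the callback body and helper are hypothetical):

	static int wg_vm_notification(struct notifier_block *nb,
				      unsigned long action, void *data)
	{
		/* clear all sessions so a key+nonce pair is never reused */
		wg_clear_all_keypairs();	/* hypothetical helper */
		return 0;
	}

	static struct notifier_block vm_notifier = {
		.notifier_call = wg_vm_notification
	};

	/* at init / exit: */
	register_random_vmfork_notifier(&vm_notifier);
	unregister_random_vmfork_notifier(&vm_notifier);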
2022-03-12random: replace custom notifier chain with standard oneJason A. Donenfeld
We previously rolled our own randomness readiness notifier, which only has two users in the whole kernel. Replace this with a more standard atomic notifier block that serves the same purpose with less code. Also unexport the symbols, because no modules use it, only unconditional builtins. The only drawback is that it's possible for a notification handler returning the "stop" code to prevent further processing, but given that there are only two users, and that we're unexporting this anyway, that doesn't seem like a significant drawback for the simplification we receive here. Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Theodore Ts'o <tytso@mit.edu> Reviewed-by: Dominik Brodowski <linux@dominikbrodowski.net> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
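The standard mechanism it switches to, sketched (the handler wiring is illustrative):

	static ATOMIC_NOTIFIER_HEAD(random_ready_chain);

	int register_random_ready_notifier(struct notifier_block *nb)
	{
		return atomic_notifier_chain_register(&random_ready_chain, nb);
	}

	/* ... and when the crng becomes ready: */
	atomic_notifier_call_chain(&random_ready_chain, 0, NULL);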
2022-03-12random: do not export add_vmfork_randomness() unless neededJason A. Donenfeld
Since add_vmfork_randomness() is only called from vmgenid.o, we can guard it in CONFIG_VMGENID, similarly to how we do with add_disk_randomness() and CONFIG_BLOCK. If we ever have multiple things calling into add_vmfork_randomness(), we can add another shared Kconfig symbol for that, but for now, this is good enough. Even though add_vmfork_randomness() is a pretty small function, removing it means that there are only calls to crng_reseed(false) and none to crng_reseed(true), which means the compiler can constant propagate the false, removing branches from crng_reseed() and its descendants. Additionally, we don't even need the symbol to be exported if CONFIG_VMGENID is not a module, so conditionalize that too. Cc: Dominik Brodowski <linux@dominikbrodowski.net> Cc: Theodore Ts'o <tytso@mit.edu> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>