2019-11-07 selftests: vm: Build/Run 64bit tests only on 64bit arch (Masami Hiramatsu)
Some virtual address range tests require a 64bit address space and can not be built or run on 32bit machines. Filter for the 64bit architectures in the Makefile and run_vmtests, so that those tests are built/run only on 64bit archs. Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org> Cc: Anshuman Khandual <khandual@linux.vnet.ibm.com> Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2019-11-07 selftests: proc: Make va_max 1MB (Masami Hiramatsu)
Currently proc-self-map-files-002.c sets va_max (the maximum user virtual address to test) to 4GB, but that is too big for 32bit archs, and 1UL << 32 overflows a 32bit long. Since this value only needs to be comfortably larger than vm.mmap_min_addr (64KB or 32KB by default), 1MB is enough. Make va_max 1MB unconditionally. Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org> Cc: Alexey Dobriyan <adobriyan@gmail.com> Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
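The overflow is easy to reproduce in isolation; a minimal userspace sketch of the before/after values (not the selftest itself):

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
    #if ULONG_MAX == 0xffffffffUL
        /* Old value: 1UL << 32 shifts by the full width of a 32bit
         * long, which is undefined behavior, not 4GB. */
        printf("32bit long: 1UL << 32 would be undefined here\n");
    #endif
        /* New value: 1MB unconditionally, still far above the default
         * vm.mmap_min_addr of 64KB or 32KB. */
        unsigned long va_max = 1UL << 20;

        printf("va_max = %#lx\n", va_max);
        return 0;
    }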
2019-11-07 kselftest: Fix NULL INSTALL_PATH for TARGETS runlist (Prabhakar Kushwaha)
Commit 131b30c94fbc ("kselftest: exclude failed TARGETS from runlist") excluded failed targets from the runlist, but the value $$INSTALL_PATH is always NULL; it should be $INSTALL_PATH instead of $$INSTALL_PATH. Fix the Makefile to use $INSTALL_PATH. Fixes: 131b30c94fbc ("kselftest: exclude failed TARGETS from runlist") Signed-off-by: Prabhakar Kushwaha <pkushwaha@marvell.com> Reviewed-by: Cristian Marussi <cristian.marussi@arm.com> Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2019-11-07 selftests: Move kselftest_module.sh into kselftest/ (Kees Cook)
The kselftest_module.sh file was not being installed by the Makefile "install" target, rendering the lib/*.sh tests nonfunctional. This fixes that and takes the opportunity to move it into the kselftest/ subdirectory, which is where the kselftest infrastructure bits are being collected. Reported-by: Naresh Kamboju <naresh.kamboju@linaro.org> Link: https://lore.kernel.org/lkml/CA+G9fYsfJpXQvOvHdjtg8z4a89dSStOQZOKa9zMjjQgWKng1aw@mail.gmail.com Fixes: d3460527706e ("kselftest: Add test runner creation script") Signed-off-by: Kees Cook <keescook@chromium.org> Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2019-11-07 selftests: gen_kselftest_tar.sh: Do not clobber kselftest/ (Kees Cook)
The default installation location for gen_kselftest_tar.sh was still "kselftest/" which collides with the existing directory. Instead, this moves the installation target into "kselftest_install/kselftest/" and adjusts the tar creation accordingly. This also adjusts indentation and logic to be consistent. Fixes: 42d46e57ec97 ("selftests: Extract single-test shell logic from lib.mk") Signed-off-by: Kees Cook <keescook@chromium.org> Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2019-11-07 selftests: breakpoints: Fix a typo of function name (Masami Hiramatsu)
Since commit 5821ba969511 ("selftests: Add test plan API to kselftest.h and adjust callers") accidentally introduced a stray 'a' in front of the run_test() function, breakpoint_test_arm64.c no longer compiles. Remove the 'a' from arun_test(). Fixes: 5821ba969511 ("selftests: Add test plan API to kselftest.h and adjust callers") Reported-by: Jun Takahashi <takahashi.jun_s@aa.socionext.com> Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org> Cc: Kees Cook <keescook@chromium.org> Reviewed-by: Kees Cook <keescook@chromium.org> Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2019-11-07 Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/hid/hid (Linus Torvalds)
Pull HID fixes from Jiri Kosina:
 "Two fixes for the HID subsystem:
  - regression fix for i2c-hid power management (Hans de Goede)
  - signed vs unsigned API fix for Wacom driver (Jason Gerecke)"
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/hid/hid:
  HID: wacom: generic: Treat serial number and related fields as unsigned
  HID: i2c-hid: Send power-on command after reset
2019-11-07 blk-cgroup: separate out blkg_rwstat under CONFIG_BLK_CGROUP_RWSTAT (Tejun Heo)
blkg_rwstat is now only used by bfq-iosched and blk-throtl when on cgroup1. Let's move it into its own files and gate it behind a config option. Signed-off-by: Tejun Heo <tj@kernel.org> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-11-07 blk-cgroup: reimplement basic IO stats using cgroup rstat (Tejun Heo)
blk-cgroup has been using blkg_rwstat to track basic IO stats. Unfortunately, reading recursive stats scales badly as it involves walking all descendants. On systems with a huge number of cgroups (dead or alive), this can lead to substantial CPU cost when reading IO stats. This patch reimplements basic IO stats using cgroup rstat, which uses more memory but makes recursive stat reading O(# descendants which have been active since last reading) instead of O(# descendants).
* blk-cgroup core no longer uses sync/async stats. Introduce new stat enums - BLKG_IOSTAT_{READ|WRITE|DISCARD}.
* Add blkg_iostat[_set] which encapsulates byte and io stats, last values for propagation delta calculation and u64_stats_sync for correctness on 32bit archs.
* Update the new percpu stat counters directly and implement blkcg_rstat_flush() to implement propagation.
* blkg_print_stat() can now bring the stats up to date by calling cgroup_rstat_flush() and print them instead of directly summing up all descendants.
* It now allocates 96 bytes per cpu. It used to be 40 bytes.
Signed-off-by: Tejun Heo <tj@kernel.org> Cc: Dan Schatzberg <dschatzberg@fb.com> Cc: Daniel Xu <dlxu@fb.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
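For reference, the shape of the new counters is roughly the following (a sketch based on the description above; field layout may differ in detail):

    enum {
        BLKG_IOSTAT_READ,
        BLKG_IOSTAT_WRITE,
        BLKG_IOSTAT_DISCARD,
        BLKG_IOSTAT_NR,
    };

    struct blkg_iostat {
        u64                     bytes[BLKG_IOSTAT_NR];
        u64                     ios[BLKG_IOSTAT_NR];
    };

    struct blkg_iostat_set {
        struct u64_stats_sync   sync;   /* correctness on 32bit archs */
        struct blkg_iostat      cur;    /* updated directly per cpu */
        struct blkg_iostat      last;   /* for propagation deltas */
    };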
2019-11-07 blk-cgroup: remove now unused blkg_print_stat_{bytes|ios}_recursive() (Tejun Heo)
These don't have users anymore. Remove them. Signed-off-by: Tejun Heo <tj@kernel.org> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-11-07 blk-throtl: stop using blkg->stat_bytes and ->stat_ios (Tejun Heo)
When used on cgroup1, blk-throtl uses the blkg->stat_bytes and ->stat_ios from blk-cgroup core to populate four stat knobs. blk-cgroup core is moving away from blkg_rwstat to improve scalability and won't be able to support this usage. It isn't like the sharing gains all that much. Let's break them out to dedicated rwstat counters which are updated when on cgroup1. Signed-off-by: Tejun Heo <tj@kernel.org> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-11-07 bfq-iosched: stop using blkg->stat_bytes and ->stat_ios (Tejun Heo)
When used on cgroup1, bfq uses the blkg->stat_bytes and ->stat_ios from blk-cgroup core to populate six stat knobs. blk-cgroup core is moving away from blkg_rwstat to improve scalability and won't be able to support this usage. It isn't like the sharing gains all that much. Let's break it out to dedicated rwstat counters which are updated when on cgroup1. This makes use of bfqg_*rwstat*() helpers outside of CONFIG_BFQ_CGROUP_DEBUG. Move them out. v2: Compile fix when !CONFIG_BFQ_CGROUP_DEBUG. Signed-off-by: Tejun Heo <tj@kernel.org> Cc: Paolo Valente <paolo.valente@linaro.org> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-11-07 bfq-iosched: relocate bfqg_*rwstat*() helpers (Tejun Heo)
Collect them right under #ifdef CONFIG_BFQ_CGROUP_DEBUG. The next patch will use them from !DEBUG path and this makes it easy to move them out of the ifdef block. This is pure code reorganization. Signed-off-by: Tejun Heo <tj@kernel.org> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-11-07 Merge branch 'for-linus' into for-5.5/block (Jens Axboe)
Pull on for-linus to resolve what otherwise would have been a conflict with the cgroups rstat patchset from Tejun.
* for-linus: (942 commits)
  blkcg: make blkcg_print_stat() print stats only for online blkgs
  nvme: change nvme_passthru_cmd64 to explicitly mark rsvd
  nvme-multipath: fix crash in nvme_mpath_clear_ctrl_paths
  nvme-rdma: fix a segmentation fault during module unload
  iocost: don't nest spin_lock_irq in ioc_weight_write()
  io_uring: ensure we clear io_kiocb->result before each issue
  um-ubd: Entrust re-queue to the upper layers
  nvme-multipath: remove unused groups_only mode in ana log
  nvme-multipath: fix possible io hang after ctrl reconnect
  io_uring: don't touch ctx in setup after ring fd install
  io_uring: Fix leaked shadow_req
  Linux 5.4-rc5
  riscv: cleanup do_trap_break
  nbd: verify socket is supported during setup
  ata: libahci_platform: Fix regulator_get_optional() misuse
  nbd: handle racing with error'ed out commands
  nbd: protect cmd->status with cmd->lock
  io_uring: fix bad inflight accounting for SETUP_IOPOLL|SETUP_SQTHREAD
  io_uring: used cached copies of sq->dropped and cq->overflow
  ARM: dts: stm32: relax qspi pins slew-rate for stm32mp157
  ...
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-11-07 bpf: Add cb access in kfree_skb test (Martin KaFai Lau)
Access the skb->cb[] in the kfree_skb test. Signed-off-by: Martin KaFai Lau <kafai@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20191107180905.4097871-1-kafai@fb.com
2019-11-07 bpf: Add array support to btf_struct_access (Martin KaFai Lau)
This patch adds array support to btf_struct_access(). It supports array of int, array of struct and multidimensional array. It also allows using u8[] as a scratch space. For example, it allows accessing the "char cb[48]" with a size larger than the array's element type "char". Another potential use case is "u64 icsk_ca_priv[]" in the tcp congestion control. btf_resolve_size() is added to resolve the size of any type. It will follow the modifier if there is any. Please see the function comment for details. This patch also adds the "off < moff" check at the beginning of the for loop. It is to reject cases when "off" is pointing to a "hole" in a struct. Signed-off-by: Martin KaFai Lau <kafai@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20191107180903.4097702-1-kafai@fb.com
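A hedged BPF-side sketch of what the array support enables (section name and argument struct are modeled on the kfree_skb selftest; names and details are illustrative, and kernel struct layouts such as struct sk_buff are assumed available to the compiler):

    /* Argument struct mirrors the tracepoint's arguments. */
    struct kfree_skb_args {
        struct sk_buff *skb;
        void *location;
    };

    SEC("tp_btf/kfree_skb")
    int trace_kfree_skb(struct kfree_skb_args *args)
    {
        struct sk_buff *skb = args->skb;

        /* char cb[48] used as scratch space: an 8-byte load at cb[0]
         * is wider than the element type but now passes the verifier. */
        __u64 first8 = *(__u64 *)&skb->cb[0];

        return first8 != 0;
    }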
2019-11-07 io_uring: properly mark async work as bounded vs unbounded (Jens Axboe)
Now that io-wq supports separating the two request lifetime types, mark the following IO as having unbounded runtimes:
- Any read/write to a non-regular file
- Any specific networked IO
- Any poll command
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-11-07 Merge branch 'cxgb4-add-support-for-TC-MQPRIO-Qdisc-Offload' (David S. Miller)
Rahul Lakkireddy says:
====================
cxgb4: add support for TC-MQPRIO Qdisc Offload

This series of patches adds support for offloading TC-MQPRIO Qdisc to Chelsio T5/T6 NICs. Offloading QoS traffic shaping and pacing requires using Ethernet Offload (ETHOFLD) resources available on Chelsio NICs. The ETHOFLD resources are configured by firmware and taken from the resource pool shared with other Chelsio Upper Layer Drivers. Traffic flowing through the ETHOFLD region requires a software netdev Tx queue (EOSW_TXQ) exposed to the networking stack, and an underlying hardware Tx queue (EOHW_TXQ) used for sending packets through hardware.

The ETHOFLD region is addressed using EOTIDs, which are a per-connection resource. Hence, EOTIDs are capable of storing only a very small number of packets in flight. To allow more connections to share the QoS rate limiting configuration, multiple EOTIDs must be allocated to reduce packet drops. EOTIDs are 1-to-1 mapped with software EOSW_TXQs. Several software EOSW_TXQs can post packets to a single hardware EOHW_TXQ.

The series is broken down as follows:

Patch 1 queries firmware for the maximum available traffic classes, as well as the start and maximum available indices (EOTID) into the ETHOFLD region, supported by the underlying device.

Patch 2 reworks queue configuration and simplifies MSI-X allocation logic in preparation for ETHOFLD queues support.

Patch 3 adds a skeleton for validating and configuring TC-MQPRIO Qdisc offload. Also, adds support for software EOSW_TXQs and exposes them to the network stack. Updates Tx queue selection to use the fallback NIC Tx path for unsupported traffic that can't go through the ETHOFLD queues.

Patch 4 adds support for managing hardware queues to rate limit traffic flowing through them. The queues are allocated/removed based on enabling/disabling TC-MQPRIO Qdisc offload, respectively.

Patch 5 adds the Tx path for traffic flowing through software EOSW_TXQ and EOHW_TXQ. Also, adds the Rx path to handle Tx completions.

Patch 6 updates the existing SCHED API to configure FLOWC based QoS offload. In the existing QUEUE based rate limiting, multiple queues sharing a traffic class get the aggregated max rate limit value. On the other hand, in FLOWC based rate limiting, multiple queues sharing a traffic class get their own individual max rate limit value. For example, if 2 queues are bound to class 0, which is rate limited to 1 Gbps, then in QUEUE based rate limiting, both queues get the aggregate max output of 1 Gbps only. In FLOWC based rate limiting, each queue gets its own output of max 1 Gbps each; i.e. 2 queues * 1 Gbps rate limit = 2 Gbps max output.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-11-07 cxgb4: add FLOWC based QoS offload (Rahul Lakkireddy)
Rework the SCHED API to allow offloading TC-MQPRIO QoS configuration. The existing QUEUE based rate limiting throttles all queues sharing a traffic class to the specified max rate limit value. So, if multiple queues share a traffic class, then all the queues get the aggregate specified max rate limit. So, introduce the new FLOWC based rate limiting, where multiple queues can share a traffic class with each queue getting its own individual specified max rate limit. For example, if 2 queues are bound to class 0, which is rate limited to 1 Gbps, then with QUEUE based rate limiting, the 2 queues get the aggregate output of 1 Gbps only. With FLOWC based rate limiting, each queue gets its own output of max 1 Gbps; i.e. 2 queues * 1 Gbps rate limit = 2 Gbps. Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-11-07 cxgb4: add Tx and Rx path for ETHOFLD traffic (Rahul Lakkireddy)
Implement Tx path for traffic flowing through software EOSW_TXQ and EOHW_TXQ. Since multiple EOSW_TXQ can post packets to a single EOHW_TXQ, protect the hardware queue with necessary spinlock. Also, move common code used to generate TSO work request to a common function. Implement Rx path to handle Tx completions for successfully transmitted packets. Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-11-07 cxgb4: add ETHOFLD hardware queue support (Rahul Lakkireddy)
Add support for configuring and managing ETHOFLD hardware queues. Keep the queue count and MSI-X allocation scheme the same as for NIC queues. ETHOFLD hardware queues are dynamically allocated/destroyed as TC-MQPRIO Qdisc offload is enabled/disabled on the corresponding interface, respectively. Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-11-07 cxgb4: parse and configure TC-MQPRIO offload (Rahul Lakkireddy)
Add logic for validation and configuration of TC-MQPRIO Qdisc offload. Also, add support to manage EOSW_TXQs, which have a 1-to-1 mapping with EOTIDs, and expose them to the network stack. Move common skb validation in the Tx path to a separate function and add a minimal Tx path for ETHOFLD. Update Tx queue selection to return the normal NIC Txq for traffic that can't go through the ETHOFLD Tx path. Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-11-07 cxgb4: rework queue config and MSI-X allocation (Rahul Lakkireddy)
Simplify queue configuration and MSI-X allocation logic. Use a single MSI-X information table for both NIC and ULDs. Remove hard-coded MSI-X indices for the firmware event queue and non-data interrupts. Instead, use the MSI-X bitmap to obtain a free MSI-X index dynamically. Save each Rxq's index into the MSI-X information table, within the Rxq structures themselves, for easier cleanup. Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-11-07 cxgb4: query firmware for QoS offload resources (Rahul Lakkireddy)
QoS offload needs Ethernet Offload (ETHOFLD) resources present in the NIC. These resources are shared with other ULDs. So, query firmware for the available number of traffic classes, as well as the start and end indices (EOTID) of the ETHOFLD region. Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-11-07 io-wq: add support for bounded vs unbounded work (Jens Axboe)
io_uring supports request types that basically have two different lifetimes:
1) Bounded completion time. These are requests like disk reads or writes, which we know will finish in a finite amount of time.
2) Unbounded completion time. These are generally networked IO, where we have no idea how long they will take to complete. Another example is POLL commands.
This patch provides support for io-wq to handle these differently, so we don't starve bounded requests by tying up workers for too long. By default all work is bounded, unless otherwise specified in the work item. Signed-off-by: Jens Axboe <axboe@kernel.dk>
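A rough sketch of how the distinction appears on the work item (flag value and surrounding fields are illustrative, not the exact io-wq definitions):

    enum {
        IO_WQ_WORK_UNBOUND = 32,    /* completion time is not bounded */
    };

    struct io_wq_work {
        struct list_head list;
        void (*func)(struct io_wq_work **);
        unsigned flags;     /* bounded by default; callers set
                             * IO_WQ_WORK_UNBOUND for networked IO,
                             * poll, and non-regular files */
    };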
2019-11-09 io-wq: io_wqe_run_queue() doesn't need to use list_empty_careful() (Jens Axboe)
We hold the wqe lock at this point (which is also annotated), so there's no need to use the careful variant of list_empty(). Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-11-09 io_uring: add support for backlogged CQ ring (Jens Axboe)
Currently we drop completion events, if the CQ ring is full. That's fine for requests with bounded completion times, but it may make it harder or impossible to use io_uring with networked IO where request completion times are generally unbounded. Or with POLL, for example, which is also unbounded. After this patch, we never overflow the ring, we simply store requests in a backlog for later flushing. This flushing is done automatically by the kernel. To prevent the backlog from growing indefinitely, if the backlog is non-empty, we apply back pressure on IO submissions. Any attempt to submit new IO with a non-empty backlog will get an -EBUSY return from the kernel. This is a signal to the application that it has backlogged CQ events, and that it must reap those before being allowed to submit more IO. Note that if we do return -EBUSY, we will have filled whatever backlogged events into the CQ ring first, if there's room. This means the application can safely reap events WITHOUT entering the kernel and waiting for them, they are already available in the CQ ring. Signed-off-by: Jens Axboe <axboe@kernel.dk>
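From an application's point of view, the new back pressure can be handled with a reap-and-retry loop; a sketch using liburing (error handling trimmed):

    #include <errno.h>
    #include <liburing.h>

    static int submit_with_backpressure(struct io_uring *ring)
    {
        struct io_uring_cqe *cqe;
        int ret;

        while ((ret = io_uring_submit(ring)) == -EBUSY) {
            /* Backlogged CQEs were already flushed into the CQ ring
             * where room allowed; reap them without entering the
             * kernel, then try submitting again. */
            while (io_uring_peek_cqe(ring, &cqe) == 0) {
                /* ... consume cqe->res ... */
                io_uring_cqe_seen(ring, cqe);
            }
        }
        return ret;
    }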
2019-11-08 io_uring: pass in io_kiocb to fill/add CQ handlers (Jens Axboe)
This is in preparation for handling CQ ring overflow a bit smarter. We should not have any functional changes in this patch. Most of the changes are fairly straightforward; the only ones that stick out a bit are the ones that change __io_free_req() to take the reference count into account. If the request hasn't been submitted yet, we know it's safe to simply ignore references and free it. But let's clean these up too, as later patches will depend on the caller doing the right thing if the completion logging grabs a reference to the request. Reviewed-by: Pavel Begunkov <asml.silence@gmail.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-11-08 io_uring: make io_cqring_events() take 'ctx' as argument (Jens Axboe)
The rings can be derived from the ctx, and we need the ctx there for a future change. No functional changes in this patch. Reviewed-by: Pavel Begunkov <asml.silence@gmail.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-11-07 io_uring: add support for linked SQE timeouts (Jens Axboe)
While we have support for generic timeouts, we don't have a way to tie a timeout to a specific SQE. The generic timeouts simply trigger wakeups on the CQ ring. This adds support for IORING_OP_LINK_TIMEOUT. This command is only valid as a link to a previous command. The timeout can be either relative or absolute, following the same rules as IORING_OP_TIMEOUT. If the timeout triggers before the dependent command completes, it will attempt to cancel that command. Likewise, if the dependent command completes before the timeout triggers, it will cancel the timeout. Signed-off-by: Jens Axboe <axboe@kernel.dk>
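A usage sketch with liburing, assuming a liburing recent enough to expose io_uring_prep_link_timeout(); otherwise the second SQE can be filled by hand with IORING_OP_LINK_TIMEOUT:

    #include <liburing.h>

    static void read_with_timeout(struct io_uring *ring, int fd,
                                  struct iovec *iov)
    {
        struct __kernel_timespec ts = { .tv_sec = 1, .tv_nsec = 0 };
        struct io_uring_sqe *sqe;

        sqe = io_uring_get_sqe(ring);
        io_uring_prep_readv(sqe, fd, iov, 1, 0);
        sqe->flags |= IOSQE_IO_LINK;    /* the next SQE links to this one */

        /* Relative 1s timeout: cancels the read if it fires first, and
         * is itself cancelled if the read completes first. */
        sqe = io_uring_get_sqe(ring);
        io_uring_prep_link_timeout(sqe, &ts, 0);

        io_uring_submit(ring);  /* ts may go out of scope after this */
    }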
2019-11-07 io_uring: abstract out io_async_cancel_one() helper (Jens Axboe)
We're going to need this helper in a future patch, so move it out of io_async_cancel() and into its own separate function. No functional changes in this patch. Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-11-07 ceph: return -EINVAL if given fsc mount option on kernel w/o support (Jeff Layton)
If someone requests fscache on the mount, and the kernel doesn't support it, it should fail the mount. [ Drop ceph prefix -- it's provided by pr_err. ] Signed-off-by: Jeff Layton <jlayton@kernel.org> Reviewed-by: Ilya Dryomov <idryomov@gmail.com> Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
2019-11-07 dm raid: Remove unnecessary negation of a shift in raid10_format_to_md_layout (Nathan Chancellor)
When building with Clang + -Wtautological-constant-compare:
drivers/md/dm-raid.c:619:8: warning: converting the result of '<<' to a boolean always evaluates to true [-Wtautological-constant-compare]
        r = !RAID10_OFFSET;
            ^
drivers/md/dm-raid.c:517:28: note: expanded from macro 'RAID10_OFFSET'
#define RAID10_OFFSET (1 << 16) /* stripes with data copies area adjacent on devices */
                       ^
1 warning generated.
Negating a non-zero number will always make it zero, which is the default value of r in this function so this statement is unnecessary; remove it so that clang no longer warns. Link: https://github.com/ClangBuiltLinux/linux/issues/753 Signed-off-by: Nathan Chancellor <natechancellor@gmail.com> Acked-by: Heinz Mauelshagen <heinzm@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
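The warning distills to a few lines of standalone C:

    #define RAID10_OFFSET (1 << 16)

    int main(void)
    {
        int r = 0;

        /* The removed statement: !(1 << 16) is always 0, so this never
         * changed r from its default. */
        r = !RAID10_OFFSET;
        return r;   /* always 0 */
    }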
2019-11-07 KVM: arm/arm64: Let the timer expire in hardirq context on RT (Thomas Gleixner)
The timers are canceled from a preempt-notifier which is invoked with preemption disabled, which is not allowed on PREEMPT_RT. The timer callback is short, so it could be invoked in hard-IRQ context on -RT. Let the timer expire in hard-IRQ context even on -RT. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Marc Zyngier <maz@kernel.org> Tested-by: Julien Grall <julien.grall@arm.com> Acked-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20191107095424.16647-1-bigeasy@linutronix.de
2019-11-07 cgroup: freezer: don't change task and cgroups status unnecessarily (Honglei Wang)
It's not necessary to adjust the task state and revisit the state of the source and destination cgroups if the cgroups are not in the freeze state and the task itself is not frozen. In this scenario, doing so wakes up a task that's not supposed to be ready to run. Skipping the unnecessary task state adjustment helps avoid waking up the task without a reason. Signed-off-by: Honglei Wang <honglei.wang@oracle.com> Acked-by: Roman Gushchin <guro@fb.com> Signed-off-by: Tejun Heo <tj@kernel.org>
2019-11-07 staging: Fix error return code in vboxsf_fill_super() (Wei Yongjun)
Fix to return negative error code -ENOMEM from the error handling case instead of 0, as done elsewhere in this function. Fixes: df4028658f9d ("staging: Add VirtualBox guest shared folder (vboxsf) support") Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com> Reviewed-by: Hans de Goede <hdegoede@redhat.com> Link: https://lore.kernel.org/r/20191106115954.114678-1-weiyongjun1@huawei.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-11-07 staging: vboxsf: fix dereference of pointer dentry before it is null checked (Colin Ian King)
Currently the pointer dentry is being dereferenced before it is being null checked. Fix this by only dereferencing dentry once we know it is not null. Addresses-Coverity: ("Dereference before null check") Fixes: df4028658f9d ("staging: Add VirtualBox guest shared folder (vboxsf) support") Signed-off-by: Colin Ian King <colin.king@canonical.com> Link: https://lore.kernel.org/r/20191105175108.79824-1-colin.king@canonical.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-11-07 staging: vboxsf: Remove unused include of <linux/version.h> (YueHaibing)
Remove the include of <linux/version.h>, which is not needed. Signed-off-by: YueHaibing <yuehaibing@huawei.com> Link: https://lore.kernel.org/r/20191107015923.100013-1-yuehaibing@huawei.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-11-07 Merge branch 'bpf-libbpf-fixes' (Daniel Borkmann)
Andrii Nakryiko says:
====================
Github's mirror of libbpf got LGTM and Coverity static analysis running against it, which spotted a few real bugs and a few potential issues. This patch series fixes the found issues.
====================
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2019-11-07 libbpf: Improve handling of corrupted ELF during map initialization (Andrii Nakryiko)
If we get an ELF file with a "maps" section but no symbols pointing into it, we'll end up with a division by zero. Add a check against this situation and exit early with an error. Found by a Coverity scan against the Github libbpf sources. Fixes: bf82927125dd ("libbpf: refactor map initialization") Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20191107020855.3834758-6-andriin@fb.com
2019-11-07 libbpf: Make btf__resolve_size logic always check size error condition (Andrii Nakryiko)
Always perform the size check in btf__resolve_size. This makes the logic a bit more robust against corrupted BTF and silences LGTM/Coverity complaints about an always-true (size < 0) check. Fixes: 69eaab04c675 ("btf: extract BTF type size calculation") Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20191107020855.3834758-5-andriin@fb.com
2019-11-07 libbpf: Fix another potential overflow issue in bpf_prog_linfo (Andrii Nakryiko)
Fix a few issues found by Coverity and LGTM. Fixes: b053b439b72a ("bpf: libbpf: bpftool: Print bpf_line_info during prog dump") Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20191107020855.3834758-4-andriin@fb.com
2019-11-07 libbpf: Fix potential overflow issue (Andrii Nakryiko)
Fix a potential overflow issue found by LGTM analysis, based on Github libbpf source code. Fixes: 3d65014146c6 ("bpf: libbpf: Add btf_line_info support to libbpf") Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20191107020855.3834758-3-andriin@fb.com
2019-11-07 libbpf: Fix memory leak/double free issue (Andrii Nakryiko)
Coverity scan against Github libbpf code found the issue of not freeing memory and leaving already freed memory still referenced from bpf_program. Fix it by re-assigning successfully reallocated memory sooner. Fixes: 2993e0515bb4 ("tools/bpf: add support to read .BTF.ext sections") Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20191107020855.3834758-2-andriin@fb.com
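The general shape of the fix, as a generic illustration rather than the libbpf code itself: assign the reallocated pointer to its final owner immediately, so no stale copy is left behind for later error paths to free or dereference again.

    #include <stdlib.h>
    #include <string.h>

    struct prog {
        int *insns;
        size_t cnt;
    };

    static int grow_insns(struct prog *p, const int *extra, size_t n)
    {
        int *insns = realloc(p->insns, (p->cnt + n) * sizeof(*insns));

        if (!insns)
            return -1;      /* p->insns is still valid and owned by p */
        p->insns = insns;   /* re-assign before anything else can fail */
        memcpy(p->insns + p->cnt, extra, n * sizeof(*insns));
        p->cnt += n;
        return 0;
    }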
2019-11-07 MAINTAINERS: Add VSPRINTF (Petr Mladek)
printk maintainers have been reviewing patches against the vsprintf code for the last few years, and most changes have been committed via printk.git over the last two years. A new group is used because printk() is not the only vsprintf() user, and the group of interested people is not the same either. Link: http://lkml.kernel.org/r/20191031133337.9306-1-pmladek@suse.com Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk> Cc: Sergey Senozhatsky <sergey.senozhatsky.work@gmail.com> Cc: linux-kernel@vger.kernel.org Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Uwe Kleine-König <uwe@kleine-koenig.org> Cc: Joe Perches <joe@perches.com> Cc: Sakari Ailus <sakari.ailus@linux.intel.com> Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org> Acked-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Acked-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com> Signed-off-by: Petr Mladek <pmladek@suse.com>
2019-11-07 libbpf: Fix negative FD close() in xsk_setup_xdp_prog() (Andrii Nakryiko)
Fix issue reported by static analysis (Coverity). If bpf_prog_get_fd_by_id() fails, xsk_lookup_bpf_maps() will fail as well and clean-up code will attempt close() with fd=-1. Fix by checking bpf_prog_get_fd_by_id() return result and exiting early. Fixes: 10a13bb40e54 ("libbpf: remove qidconf and better support external bpf programs.") Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20191107054059.313884-1-andriin@fb.com
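The failure mode, distilled (an illustrative shape, not the exact libbpf function):

    #include <unistd.h>
    #include <bpf/bpf.h>

    static int setup_from_prog_id(__u32 prog_id)
    {
        int fd = bpf_prog_get_fd_by_id(prog_id);

        if (fd < 0)
            return fd;  /* previously fell through to close(-1) */

        /* ... look up maps, etc.; on success or later error: ... */
        close(fd);
        return 0;
    }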
2019-11-07 s390/bpf: Remove unused SEEN_RET0, SEEN_REG_AX and ret0_ip (Ilya Leoshkevich)
We don't need them since commit e1cf4befa297 ("bpf, s390x: remove ld_abs/ld_ind") and commit a3212b8f15d8 ("bpf, s390x: remove obsolete exception handling from div/mod"). Also, use BIT(n) instead of 1 << n, because checkpatch says so. Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20191107114033.90505-1-iii@linux.ibm.com
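For reference, BIT() is just the shift wrapped in a macro (as in include/linux/bits.h), so the remaining SEEN_* flags end up looking like this (flag names from the s390 JIT; comments illustrative):

    #define BIT(nr)         (1UL << (nr))

    #define SEEN_MEM        BIT(0)  /* use mem[] for temporary storage */
    #define SEEN_LITERAL    BIT(1)  /* code uses literal pool */
    #define SEEN_FUNC       BIT(2)  /* calls C functions */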
2019-11-07 dm zoned: reduce overhead of backing device checks (Dmitry Fomichev)
Commit 75d66ffb48efb3 added backing device health checks and as a part of these checks, check_events() block ops template call is invoked in dm-zoned mapping path as well as in reclaim and flush path. Calling check_events() with ATA or SCSI backing devices introduces a blocking scsi_test_unit_ready() call being made in sd_check_events(). Even though the overhead of calling scsi_test_unit_ready() is small for ATA zoned devices, it is much larger for SCSI and it affects performance in a very negative way. Fix this performance regression by executing check_events() only in case of any I/O errors. The function dmz_bdev_is_dying() is modified to call only blk_queue_dying(), while calls to check_events() are made in a new helper function, dmz_check_bdev(). Reported-by: zhangxiaoxu <zhangxiaoxu5@huawei.com> Fixes: 75d66ffb48efb3 ("dm zoned: properly handle backing device failure") Cc: stable@vger.kernel.org Signed-off-by: Dmitry Fomichev <dmitry.fomichev@wdc.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
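A hedged sketch of the resulting split (details approximate, not the exact patch):

    /* Hot path: a cheap queue-flag check, no block ops call. */
    bool dmz_bdev_is_dying(struct dmz_dev *dmz_dev)
    {
        return blk_queue_dying(bdev_get_queue(dmz_dev->bdev));
    }

    /* Called only from error paths, where the cost of check_events()
     * (potentially a TEST UNIT READY on SCSI devices) is justified. */
    bool dmz_check_bdev(struct dmz_dev *dmz_dev)
    {
        struct gendisk *disk = dmz_dev->bdev->bd_disk;

        if (disk->fops->check_events)
            disk->fops->check_events(disk, 0);

        return !dmz_bdev_is_dying(dmz_dev);
    }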
2019-11-07 s390/bpf: Wrap JIT macro parameter usages in parentheses (Ilya Leoshkevich)
This change does not alter JIT behavior; it only makes it possible to safely invoke JIT macros with complex arguments in the future. Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20191107113211.90105-1-iii@linux.ibm.com
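The bug class this guards against is easy to show in standalone C (opcode constant illustrative): without parentheses, operator precedence can split a complex macro argument.

    #include <stdio.h>

    #define EMIT_BAD(op)    (0xe3000000u | op << 4)     /* unparenthesized */
    #define EMIT_GOOD(op)   (0xe3000000u | (op) << 4)

    int main(void)
    {
        /* '<<' binds tighter than '|', so the bad expansion becomes
         * 0xe3000000 | 1 | (2 << 4) instead of (1 | 2) << 4. */
        printf("%#x\n", EMIT_BAD(1 | 2));   /* 0xe3000021 */
        printf("%#x\n", EMIT_GOOD(1 | 2));  /* 0xe3000030 */
        return 0;
    }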
2019-11-07 x86/speculation/taa: Fix printing of TAA_MSG_SMT on IBRS_ALL CPUs (Josh Poimboeuf)
For new IBRS_ALL CPUs, the Enhanced IBRS check at the beginning of cpu_bugs_smt_update() causes the function to return early, unintentionally skipping the MDS and TAA logic. This is not a problem for MDS, because there appears to be no overlap between IBRS_ALL and MDS-affected CPUs. So the MDS mitigation would be disabled and nothing would need to be done in this function anyway. But for TAA, the TAA_MSG_SMT string will never get printed on Cascade Lake and newer. The check is superfluous anyway: when 'spectre_v2_enabled' is SPECTRE_V2_IBRS_ENHANCED, 'spectre_v2_user' is always SPECTRE_V2_USER_NONE, and so the 'spectre_v2_user' switch statement handles it appropriately by doing nothing. So just remove the check. Fixes: 1b42f017415b ("x86/speculation/taa: Add mitigation for TSX Async Abort") Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Tyler Hicks <tyhicks@canonical.com> Reviewed-by: Borislav Petkov <bp@suse.de>