Age         Commit message                                              Author
2020-01-20  io_uring: remove 'fname' from io_open structure  (Jens Axboe)
We only use it internally in the prep functions for both statx and openat, so we don't need it to be persistent across the request. Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-01-20  io_uring: add 'struct open_how' to the openat request context  (Jens Axboe)
We'll need this for openat2(2) support, so remove flags and mode from the existing io_open struct. Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-01-20  io_uring: enable option to only trigger eventfd for async completions  (Jens Axboe)
If an application is using eventfd notifications with poll to know when new SQEs can be issued, it's expecting the following read/writes to complete inline. And with that, it knows that there are events available, and doesn't want spurious wakeups on the eventfd for those requests. This adds IORING_REGISTER_EVENTFD_ASYNC, which works just like IORING_REGISTER_EVENTFD, except it only triggers notifications for events that happen from async completions (IRQ, or io-wq worker completions). Any completions inline from the submission itself will not trigger notifications. Suggested-by: Mark Papadakis <markuspapadakis@icloud.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
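As an illustration, a minimal userspace sketch of using the new registration, assuming liburing's io_uring_register_eventfd_async() helper (helper name and availability depend on the liburing version; the raw io_uring_register(2) opcode can be used instead):

    #include <unistd.h>
    #include <sys/eventfd.h>
    #include <liburing.h>

    /* Register an eventfd that is only signalled for async completions
     * (IRQ or io-wq); inline completions from the submitter stay silent. */
    static int setup_async_eventfd(struct io_uring *ring)
    {
        int efd = eventfd(0, EFD_CLOEXEC | EFD_NONBLOCK);

        if (efd < 0)
            return -1;
        if (io_uring_register_eventfd_async(ring, efd) < 0) {
            close(efd);
            return -1;
        }
        return efd;
    }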
2020-01-20  io_uring: change io_ring_ctx bool fields into bit fields  (Jens Axboe)
In preparation for adding another one, which would make us spill into another long (and hence bump the size of the ctx), change them to bit fields. Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-01-20  io_uring: file set registration should use interruptible waits  (Jens Axboe)
If an application attempts to register a set with unbounded requests pending, we can be stuck here forever if they don't complete. We can make this wait interruptible, and just abort if we get signaled. Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-01-20  io_uring: Remove unnecessary null check  (YueHaibing)
A NULL check before kfree() is redundant, so remove it. This is detected by coccinelle. Signed-off-by: YueHaibing <yuehaibing@huawei.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-01-20  io_uring: add support for send(2) and recv(2)  (Jens Axboe)
This adds IORING_OP_SEND for send(2) support, and IORING_OP_RECV for recv(2) support. Signed-off-by: Jens Axboe <axboe@kernel.dk>
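A hedged userspace sketch of queueing one of the new opcodes via liburing's prep helper (io_uring_prep_send(); the exact signature may vary by liburing release):

    #include <liburing.h>

    /* Queue a send(2)-style request (IORING_OP_SEND) on a connected socket;
     * the number of bytes sent comes back in the CQE's res field. */
    static int queue_send(struct io_uring *ring, int sockfd,
                          const void *buf, size_t len)
    {
        struct io_uring_sqe *sqe = io_uring_get_sqe(ring);

        if (!sqe)
            return -1;
        io_uring_prep_send(sqe, sockfd, buf, len, 0);
        return io_uring_submit(ring);
    }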
2020-01-20  io_uring: remove extra io_wq_current_is_worker()  (Pavel Begunkov)
io_wq workers use io_issue_sqe() to forward sqes and never io_queue_sqe(). Remove the extra check for io_wq_current_is_worker(). Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-01-20  io_uring: optimise commit_sqring() for common case  (Pavel Begunkov)
It should be pretty rare not to submit anything when there is something in the ring. No need to keep heuristics for this case. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-01-20  io_uring: optimise head checks in io_get_sqring()  (Pavel Begunkov)
A user may ask to submit more than there is in the ring, and then io_uring will submit as much as it can. However, in the last iteration it will allocate an io_kiocb and immediately free it. It could do better and adjust @to_submit to what is in the ring. And since the ring's head is already checked here, there is no need to do it in the loop, which spams smp_load_acquire() barriers. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-01-20  io_uring: clamp to_submit in io_submit_sqes()  (Pavel Begunkov)
Make io_submit_sqes() clamp @to_submit itself. This removes duplicated code and prepares for the following changes. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-01-20  io_uring: add support for IORING_SETUP_CLAMP  (Jens Axboe)
Some applications like to start small in terms of ring size, and then ramp up as needed. This is a bit tricky to do currently, since we don't advertise the max ring size. This adds IORING_SETUP_CLAMP. If set, and the values for SQ or CQ ring size exceed what we support, then clamp them at the max values instead of returning -EINVAL. Since we return the chosen ring sizes after setup, no further changes are needed on the application side. io_uring already changes the ring sizes if the application doesn't ask for power-of-two sizes, for example. Signed-off-by: Jens Axboe <axboe@kernel.dk>
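A minimal sketch of requesting an oversized ring with the clamp flag, assuming liburing's io_uring_queue_init_params() wrapper:

    #include <liburing.h>

    /* Request an oversized ring; with IORING_SETUP_CLAMP the kernel clamps
     * the SQ/CQ sizes to its maximum instead of failing with -EINVAL. */
    static int setup_clamped_ring(struct io_uring *ring)
    {
        struct io_uring_params p = { .flags = IORING_SETUP_CLAMP };
        int ret = io_uring_queue_init_params(1 << 20, ring, &p);

        if (ret < 0)
            return ret;
        /* p.sq_entries and p.cq_entries now hold the sizes actually used. */
        return 0;
    }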
2020-01-20  io_uring: extend batch freeing to cover more cases  (Jens Axboe)
Currently we only batch free if fixed files are used, no links, no aux data, etc. This extends the batch freeing to only exclude the linked case and fallback case, and makes io_free_req_many() handle the other cases just fine. Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-01-20  io_uring: wrap multi-req freeing in struct req_batch  (Jens Axboe)
This cleans up the code a bit, and it allows us to build on top of the multi-req freeing. Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-01-20  io_uring: batch getting pcpu references  (Pavel Begunkov)
percpu_ref_tryget() has its own overhead. Instead of getting a reference for each request, grab a bunch once per io_submit_sqes(). ~5% throughput boost for a "submit and wait 128 nops" benchmark. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> __io_req_free_empty() -> __io_req_do_free() Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-01-20  pcpu_ref: add percpu_ref_tryget_many()  (Pavel Begunkov)
Add percpu_ref_tryget_many(), which works the same way as percpu_ref_tryget(), but grabs specified number of refs. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Acked-by: Tejun Heo <tj@kernel.org> Acked-by: Dennis Zhou <dennis@kernel.org> Cc: Christoph Lameter <cl@linux.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
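A simplified kernel-side sketch of the batching pattern this enables; apart from percpu_ref_tryget_many()/percpu_ref_put_many(), the surrounding function is illustrative only:

    #include <linux/errno.h>
    #include <linux/percpu-refcount.h>

    /* Take one ref per SQE up front, then return whatever wasn't used,
     * instead of calling percpu_ref_tryget() once per request. */
    static int submit_batch(struct percpu_ref *refs, unsigned int nr_sqes)
    {
        unsigned int submitted = 0;

        if (!percpu_ref_tryget_many(refs, nr_sqes))
            return -EAGAIN;        /* refcount already dying, e.g. ring teardown */

        /* ... issue up to nr_sqes requests, incrementing 'submitted' ... */

        if (submitted != nr_sqes)
            percpu_ref_put_many(refs, nr_sqes - submitted);
        return submitted;
    }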
2020-01-20  io_uring: add IORING_OP_MADVISE  (Jens Axboe)
This adds support for doing madvise(2) through io_uring. We assume that any operation can block, and hence punt everything async. This could be improved, but it's hard to make bulletproof. The async punt ensures it's safe. Reviewed-by: Pavel Begunkov <asml.silence@gmail.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
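A hedged userspace sketch of issuing the new opcode, assuming liburing grew an io_uring_prep_madvise() helper (argument types may differ between versions):

    #include <sys/mman.h>
    #include <liburing.h>

    /* Async madvise(2): here, tell the kernel it may reclaim a region. */
    static int queue_madvise(struct io_uring *ring, void *addr, off_t len)
    {
        struct io_uring_sqe *sqe = io_uring_get_sqe(ring);

        if (!sqe)
            return -1;
        io_uring_prep_madvise(sqe, addr, len, MADV_DONTNEED);
        return io_uring_submit(ring);
    }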
2020-01-20  mm: make do_madvise() available internally  (Jens Axboe)
This is in preparation for enabling this functionality through io_uring. Add a helper that is just exporting what sys_madvise() does, and have the system call use it. No functional changes in this patch. Reviewed-by: Pavel Begunkov <asml.silence@gmail.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-01-20  io_uring: add IORING_OP_FADVISE  (Jens Axboe)
This adds support for doing fadvise through io_uring. We assume that WILLNEED doesn't block, but that DONTNEED may block. Reviewed-by: Pavel Begunkov <asml.silence@gmail.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-01-20  io_uring: allow use of offset == -1 to mean file position  (Jens Axboe)
This behaves like preadv2/pwritev2 with offset == -1: it'll use (and update) the current file position. This obviously comes with the caveat that if the application has multiple read/writes in flight, then the end result will not be as expected. This is similar to threads sharing a file descriptor and doing IO using the current file position. Since this feature isn't easily detectable by doing a read or write, add a feature flag, IORING_FEAT_RW_CUR_POS, to allow applications to detect presence of this feature. Reported-by: 李通洲 <carter.li@eoitek.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
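A sketch of detecting and using the feature from userspace; it assumes liburing's io_uring_prep_read() helper for the non-vectored read added later in this same series:

    #include <errno.h>
    #include <liburing.h>

    /* Read at the current file position, read(2)-style, by passing offset -1.
     * Only valid if the kernel advertises IORING_FEAT_RW_CUR_POS. */
    static int queue_read_curpos(struct io_uring *ring, struct io_uring_params *p,
                                 int fd, void *buf, unsigned len)
    {
        struct io_uring_sqe *sqe;

        if (!(p->features & IORING_FEAT_RW_CUR_POS))
            return -ENOTSUP;
        sqe = io_uring_get_sqe(ring);
        if (!sqe)
            return -1;
        io_uring_prep_read(sqe, fd, buf, len, -1);  /* -1: use and update f_pos */
        return io_uring_submit(ring);
    }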
2020-01-20  io_uring: add non-vectored read/write commands  (Jens Axboe)
For use cases that don't already naturally have an iovec, it's easier (or more convenient) to just use a buffer address + length. This is particularly true if the use case is from languages that want to create a memory safe abstraction on top of io_uring, and where introducing the need for the iovec may impose an ownership issue. For those cases, they currently need an indirection buffer, which means allocating data just for this purpose. Add basic read/write commands that don't require the iovec. Signed-off-by: Jens Axboe <axboe@kernel.dk>
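For example, a minimal sketch of the non-vectored write, assuming liburing's io_uring_prep_write() helper:

    #include <liburing.h>

    /* IORING_OP_WRITE: plain buffer + length, no iovec indirection needed. */
    static int queue_write(struct io_uring *ring, int fd,
                           const void *buf, unsigned len, off_t offset)
    {
        struct io_uring_sqe *sqe = io_uring_get_sqe(ring);

        if (!sqe)
            return -1;
        io_uring_prep_write(sqe, fd, buf, len, offset);
        return io_uring_submit(ring);
    }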
2020-01-20  io_uring: improve poll completion performance  (Jens Axboe)
For busy IORING_OP_POLL_ADD workloads, we can have enough contention on the completion lock that we fail the inline completion path quite often as we fail the trylock on that lock. Add a list for deferred completions that we can use in that case. This helps reduce the number of async offloads we have to do, as if we get multiple completions in a row, we'll piggy back on to the poll_llist instead of having to queue our own offload. Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-01-20  io_uring: split overflow state into SQ and CQ side  (Jens Axboe)
We currently check ->cq_overflow_list from both SQ and CQ context, which causes some bouncing of that cache line. Add separate bits of state for this instead, so that the SQ side can check using its own state, and likewise for the CQ side. This adds ->sq_check_overflow with the SQ state, and ->cq_check_overflow with the CQ state. If we hit an overflow condition, both of these bits are set. Likewise for overflow flush clear, we clear both bits. For the fast path of just checking if there's an overflow condition on either the SQ or CQ side, we can use our own private bit for this. Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-01-20  io_uring: add lookup table for various opcode needs  (Jens Axboe)
We currently have various switch statements that check if an opcode needs a file, mm, etc. These are hard to keep in sync as opcodes are added. Add a struct io_op_def that holds all of this information, so we have just one spot to update when opcodes are added. This also enables us to NOT allocate req->io if a deferred command doesn't need it, and corrects some mistakes we had in terms of what commands need mm context. Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-01-20  io_uring: remove two unnecessary function declarations  (Jens Axboe)
__io_free_req() and io_double_put_req() aren't used before they are defined, so we can kill these two forwards. Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-01-20  io_uring: move *queue_link_head() from common path  (Pavel Begunkov)
Move io_queue_link_head() to links handling code in io_submit_sqe(), so it wouldn't need extra checks and would have better data locality. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-01-20  io_uring: rename prev to head  (Pavel Begunkov)
Calling "prev" a head of a link is a bit misleading. Rename it. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-01-20  io_uring: add IOSQE_ASYNC  (Jens Axboe)
io_uring defaults to always doing inline submissions, if at all possible. But for larger copies, even if the data is fully cached, that can take a long time. Add an IOSQE_ASYNC flag that the application can set on the SQE - if set, it'll ensure that we always go async for those kinds of requests. Use the io-wq IO_WQ_WORK_CONCURRENT flag to ensure we get the concurrency we desire for this case. Signed-off-by: Jens Axboe <axboe@kernel.dk>
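A short sketch of opting a single request into forced-async behaviour, assuming liburing's io_uring_sqe_set_flags() and io_uring_prep_read() helpers:

    #include <liburing.h>

    /* Mark a request so it is always punted to io-wq instead of being
     * attempted inline; useful for large copies of cached data that would
     * otherwise stall the submitting task. */
    static void prep_forced_async_read(struct io_uring_sqe *sqe, int fd,
                                       void *buf, unsigned len, off_t off)
    {
        io_uring_prep_read(sqe, fd, buf, len, off);
        io_uring_sqe_set_flags(sqe, IOSQE_ASYNC);
    }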
2020-01-20  io-wq: support concurrent non-blocking work  (Jens Axboe)
io-wq assumes that work will complete fast (and not block), so it doesn't create a new worker when work is enqueued, if we already have at least one worker running. This is done on the assumption that if work is running, then it will complete fast. Add an option to force io-wq to fork a new worker for work queued. This is signaled by setting IO_WQ_WORK_CONCURRENT on the work item. For that case, io-wq will create a new worker, even though workers are already running. Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-01-20  io_uring: add support for IORING_OP_STATX  (Jens Axboe)
This provides support for async statx(2) through io_uring. Signed-off-by: Jens Axboe <axboe@kernel.dk>
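A hedged userspace sketch, assuming liburing's io_uring_prep_statx() helper and a libc that exposes struct statx:

    #include <fcntl.h>
    #include <sys/stat.h>
    #include <liburing.h>

    /* Async statx(2): the result is written into *stx when the CQE completes,
     * so stx must stay valid until then. */
    static int queue_statx(struct io_uring *ring, const char *path,
                           struct statx *stx)
    {
        struct io_uring_sqe *sqe = io_uring_get_sqe(ring);

        if (!sqe)
            return -1;
        io_uring_prep_statx(sqe, AT_FDCWD, path, 0, STATX_BASIC_STATS, stx);
        return io_uring_submit(ring);
    }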
2020-01-20  fs: make two stat prep helpers available  (Jens Axboe)
To implement an async stat, we need to provide the flags mapping and the statx user copy. Make them available internally, through fs/internal.h. Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-01-20  io_uring: avoid ring quiesce for fixed file set unregister and update  (Jens Axboe)
We currently fully quiesce the ring before an unregister or update of the fixed fileset. This is very expensive, and we can be a bit smarter about this. Add a percpu refcount for the file tables as a whole. Grab a percpu ref when we use a registered file, and put it on completion. This is cheap to do. Upon removal of a file from a set, switch the ref count to atomic mode. When we hit zero refs on the completion side, then we know we can drop the previously registered files. When the old files have been dropped, switch the ref back to percpu mode for normal operation. Since there's a period between doing the update and the kernel being done with it, add an IORING_OP_FILES_UPDATE opcode that can perform the same action. The application knows the update has completed when it gets the CQE for it. Between doing the update and receiving this completion, the application must continue to use the unregistered fd if submitting IO on this particular file. This takes the runtime of test/file-register from liburing from 14s to about 0.7s. Signed-off-by: Jens Axboe <axboe@kernel.dk>
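A sketch of driving the new opcode from userspace, assuming liburing's io_uring_prep_files_update() helper:

    #include <liburing.h>

    /* Swap registered-file slots starting at 'offset' for the fds in fds[];
     * fds[] must stay valid until this request's CQE has been seen, and the
     * application keeps using the plain fd for new IO until then. */
    static int queue_files_update(struct io_uring *ring, int *fds,
                                  unsigned nr, int offset)
    {
        struct io_uring_sqe *sqe = io_uring_get_sqe(ring);

        if (!sqe)
            return -1;
        io_uring_prep_files_update(sqe, fds, nr, offset);
        return io_uring_submit(ring);
    }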
2020-01-20  io_uring: add support for IORING_OP_CLOSE  (Jens Axboe)
This works just like close(2), unsurprisingly. We remove the file descriptor and post the completion inline, then offload the actual (potential) last file put to async context. Mark the async part of this work as uncancellable, as we really must guarantee that the latter part of the close is run. Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-01-20  io-wq: add support for uncancellable work  (Jens Axboe)
Not all work can be cancelled, some of it we may need to guarantee that it runs to completion. Allow the caller to set IO_WQ_WORK_NO_CANCEL on work that must not be cancelled. Note that the caller work function must also check for IO_WQ_WORK_NO_CANCEL on work that is marked IO_WQ_WORK_CANCEL. Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-01-20  fs: move filp_close() outside of __close_fd_get_file()  (Jens Axboe)
There's just one caller of this, so use filp_close() there manually. This is important to allow async close/removal of the fd. Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-01-20  io_uring: add support for IORING_OP_OPENAT  (Jens Axboe)
This works just like openat(2), except it can be performed async. For the normal case of a non-blocking path lookup this will complete inline. If we have to do IO to perform the open, it'll be done from async context. Signed-off-by: Jens Axboe <axboe@kernel.dk>
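A minimal userspace sketch, assuming liburing's io_uring_prep_openat() helper:

    #include <fcntl.h>
    #include <liburing.h>

    /* Async openat(2): on success the CQE's res field holds the new fd. */
    static int queue_openat(struct io_uring *ring, const char *path)
    {
        struct io_uring_sqe *sqe = io_uring_get_sqe(ring);

        if (!sqe)
            return -1;
        io_uring_prep_openat(sqe, AT_FDCWD, path, O_RDONLY, 0);
        return io_uring_submit(ring);
    }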
2020-01-20  fs: make build_open_flags() available internally  (Jens Axboe)
This is a prep patch for supporting non-blocking open from io_uring. Signed-off-by: Jens Axboe <axboe@kernel.dk>
2020-01-20  io_uring: add support for fallocate()  (Jens Axboe)
This exposes fallocate(2) through io_uring. Signed-off-by: Jens Axboe <axboe@kernel.dk>
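A minimal sketch, assuming liburing's io_uring_prep_fallocate() helper (releases differ on the exact offset/length types):

    #include <liburing.h>

    /* Async fallocate(2): preallocate len bytes at offset (mode 0). */
    static int queue_fallocate(struct io_uring *ring, int fd,
                               off_t offset, off_t len)
    {
        struct io_uring_sqe *sqe = io_uring_get_sqe(ring);

        if (!sqe)
            return -1;
        io_uring_prep_fallocate(sqe, fd, 0, offset, len);
        return io_uring_submit(ring);
    }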
2020-01-20  Merge branch 'io_uring-5.5' into for-5.6/io_uring-vfs  (Jens Axboe)
Pull in the compatibility fix for the files_update command.
* io_uring-5.5:
  io_uring: fix compat for IORING_REGISTER_FILES_UPDATE
2020-01-20  io_uring: fix compat for IORING_REGISTER_FILES_UPDATE  (Eugene Syromiatnikov)
The fds field of struct io_uring_files_update is problematic with regard to compat user space, as pointer size is different in 32-bit, 32-on-64-bit, and 64-bit user space. In order to avoid custom handling of compat in the syscall implementation, make fds a __u64 and use u64_to_user_ptr in order to retrieve it. Also, align the field naturally and check that no garbage is passed there. Fixes: c3a31e605620c279 ("io_uring: add support for IORING_REGISTER_FILES_UPDATE") Signed-off-by: Eugene Syromiatnikov <esyr@redhat.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
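A sketch of the resulting UAPI shape; the comments are illustrative, and <linux/io_uring.h> remains the authoritative definition:

    #include <linux/types.h>

    /* Roughly the layout after this fix: the fds pointer travels as a
     * fixed-width, naturally aligned 64-bit integer, so 32-bit, compat,
     * and 64-bit callers all look the same to the kernel. */
    struct io_uring_files_update {
        __u32 offset;
        __u32 resv;        /* must be zero */
        __aligned_u64 fds; /* userspace pointer to an array of __s32 fds */
    };

    /* Kernel side then needs no compat shim:
     *   __s32 __user *fds = u64_to_user_ptr(up->fds);
     */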
2020-01-20  MIPS: syscalls: fix indentation of the 'SYSNR' message  (Alexander Lobakin)
It lacks a whitespace (copy'n'paste error?), which messes up the output:

      SYSHDR  arch/mips/include/generated/uapi/asm/unistd_n32.h
      SYSHDR  arch/mips/include/generated/uapi/asm/unistd_n64.h
      SYSHDR  arch/mips/include/generated/uapi/asm/unistd_o32.h
     SYSNR  arch/mips/include/generated/uapi/asm/unistd_nr_n32.h
     SYSNR  arch/mips/include/generated/uapi/asm/unistd_nr_n64.h
     SYSNR  arch/mips/include/generated/uapi/asm/unistd_nr_o32.h
      WRAP    arch/mips/include/generated/uapi/asm/bpf_perf_event.h
      WRAP    arch/mips/include/generated/uapi/asm/ipcbuf.h

After:

      SYSHDR  arch/mips/include/generated/uapi/asm/unistd_n32.h
      SYSHDR  arch/mips/include/generated/uapi/asm/unistd_n64.h
      SYSHDR  arch/mips/include/generated/uapi/asm/unistd_o32.h
      SYSNR   arch/mips/include/generated/uapi/asm/unistd_nr_n32.h
      SYSNR   arch/mips/include/generated/uapi/asm/unistd_nr_n64.h
      SYSNR   arch/mips/include/generated/uapi/asm/unistd_nr_o32.h
      WRAP    arch/mips/include/generated/uapi/asm/bpf_perf_event.h
      WRAP    arch/mips/include/generated/uapi/asm/ipcbuf.h

It has been present since day 0 of syscall table generation support for MIPS.
Fixes: 9bcbf97c6293 ("mips: add system call table generation support")
Cc: <stable@vger.kernel.org> # v5.0+
Signed-off-by: Alexander Lobakin <alobakin@dlink.ru>
Signed-off-by: Paul Burton <paulburton@kernel.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: James Hogan <jhogan@kernel.org>
Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
Cc: Rob Herring <robh@kernel.org>
Cc: linux-mips@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
2020-01-20  MIPS: boot: fix typo in 'vmlinux.lzma.its' target  (Alexander Lobakin)
Commit 92b34a976348 ("MIPS: boot: add missing targets for vmlinux.*.its") fixed the constant rebuild of *.its files on every make invocation, but due to a typo ("lzmo") it had no effect for vmlinux.lzma.its. Fixes: 92b34a976348 ("MIPS: boot: add missing targets for vmlinux.*.its") Cc: <stable@vger.kernel.org> # v4.19+ Signed-off-by: Alexander Lobakin <alobakin@dlink.ru> [paulburton@kernel.org: s/invokation/invocation/] Signed-off-by: Paul Burton <paulburton@kernel.org> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: James Hogan <jhogan@kernel.org> Cc: Masahiro Yamada <yamada.masahiro@socionext.com> Cc: Rob Herring <robh@kernel.org> Cc: linux-mips@vger.kernel.org Cc: linux-kernel@vger.kernel.org
2020-01-20  scsi: qla2xxx: Fix a NULL pointer dereference in an error path  (Bart Van Assche)
This patch fixes the following Coverity complaint:

    FORWARD_NULL
    qla_init.c: 5275 in qla2x00_configure_local_loop()
    5269
    5270             if (fcport->scan_state == QLA_FCPORT_FOUND)
    5271                 qla24xx_fcport_handle_login(vha, fcport);
    5272         }
    5273
    5274 cleanup_allocation:
    >>>  CID 353340: (FORWARD_NULL)
    >>>  Passing null pointer "new_fcport" to "qla2x00_free_fcport", which dereferences it.
    5275     qla2x00_free_fcport(new_fcport);
    5276
    5277     if (rval != QLA_SUCCESS) {
    5278         ql_dbg(ql_dbg_disc, vha, 0x2098,
    5279             "Configure local loop error exit: rval=%x.\n", rval);
    5280     }

Fixes: 3dae220595ba ("scsi: qla2xxx: Use common routine to free fcport struct")
Cc: Himanshu Madhani <hmadhani@marvell.com>
Cc: Quinn Tran <qutran@marvell.com>
Cc: Martin Wilck <mwilck@suse.com>
Cc: Daniel Wagner <dwagner@suse.de>
Cc: Roman Bolshakov <r.bolshakov@yadro.com>
Link: https://lore.kernel.org/r/20200118042056.32232-1-bvanassche@acm.org
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Reviewed-by: Ewan D. Milne <emilne@redhat.com>
Reviewed-by: Daniel Wagner <dwagner@suse.de>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
2020-01-20  MIPS: fix indentation of the 'RELOCS' message  (Alexander Lobakin)
quiet_cmd_relocs lacks a whitespace, which results in:

      LD       vmlinux
      SORTEX   vmlinux
      SYSMAP   System.map
     RELOCS  vmlinux
      Building modules, stage 2.
      MODPOST 64 modules

After this patch:

      LD       vmlinux
      SORTEX   vmlinux
      SYSMAP   System.map
      RELOCS   vmlinux
      Building modules, stage 2.
      MODPOST 64 modules

The typo has been present in the kernel tree since the introduction of relocatable kernel support in commit e818fac595ab ("MIPS: Generate relocation table when CONFIG_RELOCATABLE"), but the relocation scripts were moved to Makefile.postlink later with commit 44079d3509ae ("MIPS: Use Makefile.postlink to insert relocations into vmlinux").
Fixes: 44079d3509ae ("MIPS: Use Makefile.postlink to insert relocations into vmlinux")
Cc: <stable@vger.kernel.org> # v4.11+
Signed-off-by: Alexander Lobakin <alobakin@dlink.ru>
[paulburton@kernel.org: Fixup commit references in commit message.]
Signed-off-by: Paul Burton <paulburton@kernel.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: James Hogan <jhogan@kernel.org>
Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
Cc: Rob Herring <robh@kernel.org>
Cc: linux-mips@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
2020-01-20  scsi: qla1280: Make checking for 64bit support consistent  (Thomas Bogendoerfer)
Use #ifdef QLA_64BIT_PTR to check if 64bit support is enabled. This fixes ("scsi: qla1280: Fix dma firmware download, if dma address is 64bit"). Link: https://lore.kernel.org/r/20200117115628.13219-1-tbogendoerfer@suse.de Signed-off-by: Thomas Bogendoerfer <tbogendoerfer@suse.de> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
2020-01-20  xfs: change return value of xfs_inode_need_cow to int  (zhengbin)
Fixes coccicheck warning: fs/xfs/xfs_reflink.c:236:9-10: WARNING: return of 0/1 in function 'xfs_inode_need_cow' with return type bool Reported-by: Hulk Robot <hulkci@huawei.com> Signed-off-by: zhengbin <zhengbin13@huawei.com> [darrick: rename the function so it doesn't sound like a predicate] Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
2020-01-20  selftests/bpf: Skip perf hw events test if the setup disabled it  (Hangbin Liu)
As with commit 4e59afbbed96 ("selftests/bpf: skip nmi test when perf hw events are disabled"), it makes more sense to skip the test_stacktrace_build_id_nmi test if the setup (e.g. virtual machines) has disabled hardware perf events. Fixes: 13790d1cc72c ("bpf: add selftest for stackmap with build_id in NMI context") Signed-off-by: Hangbin Liu <liuhangbin@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: John Fastabend <john.fastabend@gmail.com> Link: https://lore.kernel.org/bpf/20200117100656.10359-1-liuhangbin@gmail.com
2020-01-20  selftests/bpf: Don't check for btf fd in test_btf  (Stanislav Fomichev)
After commit 0d13bfce023a ("libbpf: Don't require root for bpf_object__open()") we no longer load BTF during bpf_object__open(), so let's remove the expectation from test_btf that the fd is not -1. The test currently fails.

Before:
    BTF libbpf test[1] (test_btf_haskv.o): do_test_file:4152:FAIL bpf_object__btf_fd: -1
    BTF libbpf test[2] (test_btf_newkv.o): do_test_file:4152:FAIL bpf_object__btf_fd: -1
    BTF libbpf test[3] (test_btf_nokv.o): do_test_file:4152:FAIL bpf_object__btf_fd: -1

After:
    BTF libbpf test[1] (test_btf_haskv.o): OK
    BTF libbpf test[2] (test_btf_newkv.o): OK
    BTF libbpf test[3] (test_btf_nokv.o): OK

Fixes: 0d13bfce023a ("libbpf: Don't require root for bpf_object__open()")
Signed-off-by: Stanislav Fomichev <sdf@google.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20200118010546.74279-1-sdf@google.com
2020-01-20  bpf: Fix memory leaks in generic update/delete batch ops  (Brian Vazquez)
Generic update/delete batch ops functions were using __bpf_copy_key without properly freeing the memory. Handle the memory allocation and copy_from_user separately. Fixes: aa2e93b8e58e ("bpf: Add generic support for update and delete batch ops") Reported-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Brian Vazquez <brianvv@google.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20200119194040.128369-1-brianvv@google.com
2020-01-20  tracing: Do not set trace clock if tracefs lockdown is in effect  (Masami Ichikawa)
When the trace_clock option is not set and an unstable clock is detected, tracing_set_default_clock() sets the trace clock (a ThinkPad A285 is one such case). In that case, if lockdown is in effect, a null pointer dereference error happens in ring_buffer_set_clock(). Link: http://lkml.kernel.org/r/20200116131236.3866925-1-masami256@gmail.com Cc: stable@vger.kernel.org Fixes: 17911ff38aa58 ("tracing: Add locked_down checks to the open calls of files created for tracefs") Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=1788488 Signed-off-by: Masami Ichikawa <masami256@gmail.com> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>