|
For consistency and clarity, rename all symbols and strings from
ux500 to db8500 in the driver.
Cc: Lee Jones <lee.jones@linaro.org>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Reviewed-by: Guenter Roeck <linux@roeck-us.net>
Acked-by: Lee Jones <lee.jones@linaro.org>
Link: https://lore.kernel.org/r/20210922230947.1864357-3-linus.walleij@linaro.org
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Wim Van Sebroeck <wim@linux-watchdog.org>
|
|
This driver is named after the ambition to support more SoCs than
the DB8500. Those were never produced, so cut down the scope and
rename the driver accordingly. Since the Kconfig for the watchdog
defaults to y, this will still be built by default.
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Reviewed-by: Guenter Roeck <linux@roeck-us.net>
Link: https://lore.kernel.org/r/20210922230947.1864357-2-linus.walleij@linaro.org
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Wim Van Sebroeck <wim@linux-watchdog.org>
|
|
Drop the platform data passing from the PRCMU driver. This platform
data was part of the ambition to support more SoCs, which in turn
were never mass produced.
Only a name remains of the MFD cell, so switch to MFD_CELL_NAME().
Cc: Lee Jones <lee.jones@linaro.org>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Reviewed-by: Guenter Roeck <linux@roeck-us.net>
Link: https://lore.kernel.org/r/20210922230947.1864357-1-linus.walleij@linaro.org
Acked-by: Lee Jones <lee.jones@linaro.org>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Wim Van Sebroeck <wim@linux-watchdog.org>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/linkinjeon/exfat
Pull exfat fix from Namjae Jeon:
"Fix ->i_blocks truncation issue caused by wrong 32bit mask"
* tag 'exfat-for-5.16-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/linkinjeon/exfat:
exfat: fix incorrect loading of i_blocks for large files
|
|
In the 1394 OHCI specification, the Isochronous Receive DMA context has
several modes. One of the modes is 'buffer-fill', and the Linux FireWire
stack uses it to receive isochronous packets for multiple isochronous
channels as FW_ISO_CONTEXT_RECEIVE_MULTICHANNEL.
The mode is not used by any in-kernel driver, but it is available to
userspace. The character device driver in firewire-core casts the callback
function for this mode, since its type differs from the callback type of
the other modes. The cast gets in the way of Control Flow Integrity builds
because it triggers a -Wcast-function-type warning.
This commit removes the cast. A new static helper function initializes an
isochronous context for the mode: it assigns the mode-specific callback
function after calling the existing kernel API. Note that the isochronous
channel number, speed, and header size are not required for this mode. The
character device driver now uses the helper function for this mode instead
of calling the existing kernel API directly.
The same goal could be achieved (in the ioctl_create_iso_context function)
without this helper function as follows:
- Call the fw_iso_context_create function, passing NULL as the callback
parameter.
- Then set the context->callback.sc or context->callback.mc variable
based on the a->type value.
However, using the helper function created in this patch makes the code
clearer and more declarative, and avoids calling a function meant for one
purpose in order to achieve another.
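A minimal sketch of such a helper, assuming the helper name used here (the
commit text above does not name it):

/* Sketch only: create the context with a NULL single-channel callback,
 * then install the multichannel callback, so no function-type cast is
 * needed. Channel, speed and header size are unused in buffer-fill mode. */
static struct fw_iso_context *
fw_iso_mc_context_create_sketch(struct fw_card *card,
				fw_iso_mc_callback_t callback,
				void *callback_data)
{
	struct fw_iso_context *ctx;

	ctx = fw_iso_context_create(card, FW_ISO_CONTEXT_RECEIVE_MULTICHANNEL,
				    0, 0, 0, NULL, callback_data);
	if (!IS_ERR(ctx))
		ctx->callback.mc = callback;

	return ctx;
}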
Co-developed-by: Takashi Sakamoto <o-takashi@sakamocchi.jp>
Signed-off-by: Takashi Sakamoto <o-takashi@sakamocchi.jp>
Co-developed-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
Signed-off-by: Oscar Carter <oscar.carter@gmx.com>
Reviewed-by: Takashi Sakamoto <o-takashi@sakamocchi.jp>
Tested-by: Takashi Sakamoto <o-takashi@sakamocchi.jp>
Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/xiang/erofs
Pull erofs updates from Gao Xiang:
"There are some new features available for this cycle. Firstly, EROFS
LZMA algorithm support, specifically called MicroLZMA, is available as
an option for embedded devices, LiveCDs and/or as the secondary
auxiliary compression algorithm besides the primary algorithm in one
file.
In order to better support the LZMA fixed-sized output compression,
especially for 4KiB pcluster size (which has the lowest memory pressure
and is thus useful for memory-sensitive scenarios), Lasse introduced a new
LZMA header/container format called MicroLZMA to minimize the original
LZMA1 header (for example, we don't need to waste 4-byte dictionary
size and another 8-byte uncompressed size, which can be calculated by
fs directly, for each pcluster) and enable EROFS fixed-sized output
compression.
Note that MicroLZMA can later also be used by things other than EROFS
where keeping header overhead minimal is important, and it is only
compiled when XZ_DEC_MICROLZMA is enabled.
MicroLZMA is supported by the latest upstream XZ Embedded [1] and
XZ Utils [2]; the latest related XZ Embedded upstream patches by the
XZ author Lasse are applied here.
Secondly, multiple-device support is also added in this cycle; it is
designed for multi-layer container images. By working together with
inter-layer data deduplication and compression, we can achieve the
next high-performance container image solution. Our team will announce
the new Nydus container image service [3] implementation with new RAFS
v6 (EROFS-compatible) format in Open Source Summit 2021 China [4]
soon.
Besides, the secondary compression head support and readmore
decompression strategy are also included in this cycle. There are also
some minor bugfixes and cleanups, as always.
Summary:
- support multiple devices for multi-layer container images;
- support the secondary compression head;
- support readmore decompression strategy;
- support new LZMA algorithm (specifically called MicroLZMA);
- some bugfixes & cleanups"
* tag 'erofs-for-5.16-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/xiang/erofs:
erofs: don't trigger WARN() when decompression fails
erofs: get rid of ->lru usage
erofs: lzma compression support
erofs: rename some generic methods in decompressor
lib/xz, lib/decompress_unxz.c: Fix spelling in comments
lib/xz: Add MicroLZMA decoder
lib/xz: Move s->lzma.len = 0 initialization to lzma_reset()
lib/xz: Validate the value before assigning it to an enum variable
lib/xz: Avoid overlapping memcpy() with invalid input with in-place decompression
erofs: introduce readmore decompression strategy
erofs: introduce the secondary compression head
erofs: get compression algorithms directly on mapping
erofs: add multiple device support
erofs: decouple basic mount options from fs_context
erofs: remove the fast path of per-CPU buffer decompression
|
|
Pull fscrypt updates from Eric Biggers:
"Some cleanups for fs/crypto/:
- Allow 256-bit master keys with AES-256-XTS
- Improve documentation and comments
- Remove unneeded field fscrypt_operations::max_namelen"
* tag 'fscrypt-for-linus' of git://git.kernel.org/pub/scm/fs/fscrypt/fscrypt:
fscrypt: improve a few comments
fscrypt: allow 256-bit master keys with AES-256-XTS
fscrypt: improve documentation for inline encryption
fscrypt: clean up comments in bio.c
fscrypt: remove fscrypt_operations::max_namelen
|
|
Patches held over for a possible rc8.
* for-rc:
RDMA/qedr: Fix NULL deref for query_qp on the GSI QP
RDMA/hns: Modify the value of MAX_LP_MSG_LEN to meet hardware compatibility
RDMA/hns: Fix initial arm_st of CQ
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
|
|
Pull in the accepted for-rc patches as the next merge needs a newer base.
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
|
|
In the function irdma_post_recv, the function irdma_copy_sg_list is
not needed since struct irdma_sge and struct ib_sge have similar
member variables. struct irdma_sge can be replaced entirely with
struct ib_sge.
This can increase the RX performance of irdma.
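For context, a hedged comparison of the two layouts (the irdma_sge field
names are assumed here, not quoted from the driver headers):

#include <linux/types.h>

/* Both carry a 64-bit address, a 32-bit length and a 32-bit key, just
 * under different names, which is why the extra copy step adds no value. */
struct ib_sge_sketch {		/* mirrors struct ib_sge */
	__u64 addr;
	__u32 length;
	__u32 lkey;
};

struct irdma_sge_sketch {	/* assumed driver-local equivalent */
	__u64 tag_off;
	__u32 len;
	__u32 stag;
};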
Link: https://lore.kernel.org/r/20211030104226.253346-1-yanjun.zhu@linux.dev
Signed-off-by: Zhu Yanjun <yanjun.zhu@linux.dev>
Reviewed-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
|
|
Help debug table creation errors by adding the error name to the log.
Signed-off-by: Michał Mirosław <mirq-linux@rere.qmqm.pl>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
|
|
Add the device number to the kdmflush workqueue name to help debug CPU usage.
Resulting `ps axfu` snippet:
root 3791 0.0 0.0 0 0 ? I< paź19 0:00 \_ [kdmflush/253:7]
root 3792 0.0 0.0 0 0 ? I< paź19 0:00 \_ [kcryptd_io/253:7]
root 3793 0.0 0.0 0 0 ? I< paź19 0:00 \_ [kcryptd/253:7]
root 3794 0.0 0.0 0 0 ? S paź19 0:00 \_ [dmcrypt_write/253:7]
root 3814 0.0 0.0 0 0 ? I< paź19 0:00 \_ [kdmflush/253:8]
root 3815 0.0 0.0 0 0 ? I< paź19 0:00 \_ [kdmflush/253:9]
root 3816 0.0 0.0 0 0 ? I< paź19 0:00 \_ [kdmflush/253:10]
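A hedged sketch of the kind of change involved (the exact dm.c hunk and the
source of the device name are assumptions, not quoted from the patch):

#include <linux/workqueue.h>

/* Sketch only: fold a per-device identifier (e.g. "253:7") into the
 * workqueue name so it shows up in ps/top output as kdmflush/<num>. */
static struct workqueue_struct *alloc_kdmflush_wq(const char *dev_name)
{
	return alloc_workqueue("kdmflush/%s", WQ_MEM_RECLAIM, 0, dev_name);
}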
Signed-off-by: Michał Mirosław <mirq-linux@rere.qmqm.pl>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
|
|
Replace kthread_create/wake_up_process() with kthread_run()
to simplify the code.
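A hedged before/after sketch of the simplification (the thread function and
name below are placeholders, not taken from the patch):

#include <linux/kthread.h>

static int start_worker_sketch(int (*fn)(void *), void *data)
{
	struct task_struct *t;

	/* before:
	 *   t = kthread_create(fn, data, "dm-worker");
	 *   if (IS_ERR(t))
	 *           return PTR_ERR(t);
	 *   wake_up_process(t);
	 */
	t = kthread_run(fn, data, "dm-worker");	/* create + wake in one call */
	return IS_ERR(t) ? PTR_ERR(t) : 0;
}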
Signed-off-by: Cai Huoqing <caihuoqing@baidu.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
|
|
Replace kthread_create/wake_up_process() with kthread_run()
to simplify the code.
Signed-off-by: Cai Huoqing <caihuoqing@baidu.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
|
|
Using local kmaps slightly reduces the chances of stray writes, and
the bvec interface cleans up the code a little bit.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
|
|
Use memcpy_from_bvec instead of open coding the logic.
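A hedged sketch of the pattern being replaced (not the actual driver hunk):

#include <linux/highmem.h>
#include <linux/bvec.h>

/* memcpy_from_bvec() maps the page with a local kmap and copies bv_len
 * bytes starting at bv_offset, so the open-coded version is unnecessary. */
static void copy_from_bvec_sketch(char *dst, struct bio_vec *bv)
{
	/* before:
	 *   void *p = kmap_local_page(bv->bv_page);
	 *   memcpy(dst, p + bv->bv_offset, bv->bv_len);
	 *   kunmap_local(p);
	 */
	memcpy_from_bvec(dst, bv);
}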
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
|
|
Using local kmaps slightly reduces the chances of stray writes, and
the bvec interface cleans up the code a little bit.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
|
|
Using local kmaps slightly reduces the chances of stray writes, and
the bvec interface cleans up the code a little bit.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
|
|
We never checked for errors on add_disk() as this function returned
void. Now that this is fixed, use the shiny new error handling.
There are two calls to dm_setup_md_queue() which can now fail: one in
dm_early_create(), where the error path already calls dm_destroy(); the
other in the ioctl table_load case. If that fails, userspace needs to
call DM_DEV_REMOVE_CMD to clean up the state, similar to any other
failure.
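A hedged sketch of the new pattern (the function name is illustrative, not
the actual dm.c hunk):

#include <linux/genhd.h>

static int dm_add_disk_sketch(struct gendisk *disk)
{
	int r;

	r = add_disk(disk);	/* add_disk() now returns an error code */
	if (r)
		return r;	/* caller unwinds, e.g. via dm_destroy() */

	return 0;
}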
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
|
|
destroy_workqueue() already drains the queue before destroying it, so
there is no need to flush it explicitly.
Remove the redundant flush_workqueue() calls.
This was generated with coccinelle:
@@
expression E;
@@
- flush_workqueue(E);
destroy_workqueue(E);
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
|
|
Support the DECLARE_PHY_INTERFACE_MASK() macro, which is used to declare
a bitmap, by converting the macro to DECLARE_BITMAP(), as was done
for the __ETHTOOL_DECLARE_LINK_MODE_MASK() macro.
This fixes a 'make htmldocs' warning:
include/linux/phylink.h:82: warning: Function parameter or member 'DECLARE_PHY_INTERFACE_MASK(supported_interfaces' not described in 'phylink_config'
that was introduced by commit
38c310eb46f5 ("net: phylink: add MAC phy_interface_t bitmap")
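For context, a sketch of the macro relationship the script now understands
(the definition is assumed to follow include/linux/phylink.h; the member
name is taken from the warning above):

#include <linux/bitops.h>
#include <linux/phy.h>

/* Assumed expansion: a DECLARE_BITMAP() covering every phy_interface_t
 * value, which is why kernel-doc can treat the member like one declared
 * with __ETHTOOL_DECLARE_LINK_MODE_MASK(). */
#define DECLARE_PHY_INTERFACE_MASK_SKETCH(name) \
	DECLARE_BITMAP(name, PHY_INTERFACE_MODE_MAX)

struct phylink_config_sketch {
	DECLARE_PHY_INTERFACE_MASK_SKETCH(supported_interfaces);
};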
Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: Russell King (Oracle) <linux@armlinux.org.uk>
Link: https://lore.kernel.org/r/45934225-7942-4326-f883-a15378939db9@infradead.org
Signed-off-by: Jonathan Corbet <corbet@lwn.net>
|
|
Translate Documentation/core-api/xarray.rst into Chinese
Signed-off-by: Yanteng Si <siyanteng@loongson.cn>
Reviewed-by: Alex Shi <alexs@kernel.org>
Link: https://lore.kernel.org/r/2a125bcb3220e7c1b72ae87bcad1b225dd950338.1634358018.git.siyanteng@loongson.cn
Signed-off-by: Jonathan Corbet <corbet@lwn.net>
|
|
Translate Documentation/core-api/assoc_array.rst into Chinese.
Signed-off-by: Yanteng Si <siyanteng@loongson.cn>
Reviewed-by: Alex Shi <alexs@kernel.org>
Link: https://lore.kernel.org/r/860ac85d9a2a83c2b63eb8d1be929ad64280d7b2.1634358018.git.siyanteng@loongson.cn
Signed-off-by: Jonathan Corbet <corbet@lwn.net>
|
|
Pull block inode sync updates from Jens Axboe:
"This contains improvements to how bdev inode syncing is handled,
unifying the API"
* tag 'for-5.16/inode-sync-2021-10-29' of git://git.kernel.dk/linux-block:
block: simplify the block device syncing code
ntfs3: use sync_blockdev_nowait
fat: use sync_blockdev_nowait
btrfs: use sync_blockdev
xen-blkback: use sync_blockdev
block: remove __sync_blockdev
fs: remove __sync_filesystem
|
|
There is a typo in the speakup documentation. Fix it.
Signed-off-by: Colin Ian King <colin.i.king@gmail.com>
Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
Link: https://lore.kernel.org/r/20211028182319.613315-1-colin.i.king@gmail.com
Signed-off-by: Jonathan Corbet <corbet@lwn.net>
|
|
Pull kiocb->ki_complete() cleanup from Jens Axboe:
"This removes the res2 argument from kiocb->ki_complete().
Only the USB gadget code used it, everybody else passes 0. The USB
guys checked the user gadget code they could find, and everybody just
uses res as expected for the async interface"
* tag 'for-5.16/ki_complete-2021-10-29' of git://git.kernel.dk/linux-block:
fs: get rid of the res2 iocb->ki_complete argument
usb: remove res2 argument from gadget code completions
|
|
git://git.kernel.dk/linux-block
Pull QUEUE_FLAG_SCSI_PASSTHROUGH removal from Jens Axboe:
"This contains a series leading to the removal of the
QUEUE_FLAG_SCSI_PASSTHROUGH queue flag"
* tag 'for-5.16/passthrough-flag-2021-10-29' of git://git.kernel.dk/linux-block:
block: remove blk_{get,put}_request
block: remove QUEUE_FLAG_SCSI_PASSTHROUGH
block: remove the initialize_rq_fn blk_mq_ops method
scsi: add a scsi_alloc_request helper
bsg-lib: initialize the bsg_job in bsg_transport_sg_io_fn
nfsd/blocklayout: use ->get_unique_id instead of sending SCSI commands
sd: implement ->get_unique_id
block: add a ->get_unique_id method
|
|
Pull CDROM updates from Jens Axboe:
"On behalf of Phillip, here are the CDROM updates for the 5.16-rc1
merge window:
- Add ioctl for improved media change detection (Lukas)
- Reformat some documentation (Phillip)
- Redundant variable removal (luo)"
* tag 'for-5.16/cdrom-2021-10-29' of git://git.kernel.dk/linux-block:
cdrom: Remove redundant variable and its assignment
cdrom: docs: reformat table in Documentation/userspace-api/ioctl/cdrom.rst
drivers/cdrom: improved ioctl for media change detection
|
|
Pull SCSI multi-actuator support from Jens Axboe:
"This adds SCSI support for the recently merged block multi-actuator
support. Since this was sitting on top of the block tree, the SCSI
side asked me to queue it up."
* tag 'for-5.16/scsi-ma-2021-10-29' of git://git.kernel.dk/linux-block:
doc: Fix typo in request queue sysfs documentation
doc: document sysfs queue/independent_access_ranges attributes
libata: support concurrent positioning ranges log
scsi: sd: add concurrent positioning ranges support
|
|
Pull bdev size cleanups from Jens Axboe:
"Clean up the bdev size handling with new bdev_nr_bytes() helper"
* tag 'for-5.16/bdev-size-2021-10-29' of git://git.kernel.dk/linux-block: (34 commits)
partitions/ibm: use bdev_nr_sectors instead of open coding it
partitions/efi: use bdev_nr_bytes instead of open coding it
block/ioctl: use bdev_nr_sectors and bdev_nr_bytes
block: cache inode size in bdev
udf: use sb_bdev_nr_blocks
reiserfs: use sb_bdev_nr_blocks
ntfs: use sb_bdev_nr_blocks
jfs: use sb_bdev_nr_blocks
ext4: use sb_bdev_nr_blocks
block: add a sb_bdev_nr_blocks helper
block: use bdev_nr_bytes instead of open coding it in blkdev_fallocate
squashfs: use bdev_nr_bytes instead of open coding it
reiserfs: use bdev_nr_bytes instead of open coding it
pstore/blk: use bdev_nr_bytes instead of open coding it
ntfs3: use bdev_nr_bytes instead of open coding it
nilfs2: use bdev_nr_bytes instead of open coding it
nfs/blocklayout: use bdev_nr_bytes instead of open coding it
jfs: use bdev_nr_bytes instead of open coding it
hfsplus: use bdev_nr_sectors instead of open coding it
hfs: use bdev_nr_sectors instead of open coding it
...
|
|
In commit 324bda9e6c5a ("bpf: multi program support for cgroup+bpf"),
the cgroup_bpf_*() functions were called from kernel/bpf/syscall.c, but now
they are only used in kernel/bpf/cgroup.c, so move these functions to
kernel/bpf/cgroup.c, like cgroup_bpf_replace().
Signed-off-by: He Fengqing <hefengqing@huawei.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
|
|
In account_guest_time in kernel/sched/cputime.c guest time is
attributed to both CPUTIME_NICE and CPUTIME_USER in addition to
CPUTIME_GUEST_NICE and CPUTIME_GUEST respectively. Therefore, adding
both to calculate usage results in double counting any guest time at
the rootcg.
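A hedged illustration of the accounting relationship (indices from
enum cpu_usage_stat; this is not the actual cgroup rstat code):

#include <linux/kernel_stat.h>

/* account_guest_time() already folds guest time into CPUTIME_USER and
 * CPUTIME_NICE, so a root-cgroup usage sum must not add CPUTIME_GUEST and
 * CPUTIME_GUEST_NICE on top of them, or guest time is counted twice. */
static u64 rootcg_usage_sketch(const u64 *cpustat)
{
	return cpustat[CPUTIME_USER] +	/* includes CPUTIME_GUEST */
	       cpustat[CPUTIME_NICE] +	/* includes CPUTIME_GUEST_NICE */
	       cpustat[CPUTIME_SYSTEM];
}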
Fixes: 936f2a70f207 ("cgroup: add cpu.stat file to root cgroup")
Signed-off-by: Dan Schatzberg <schatzberg.dan@gmail.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
|
|
Pull io_uring updates from Jens Axboe:
"Light on new features - basically just the hybrid mode support.
Outside of that it's just fixes, cleanups, and performance
improvements.
In detail:
- Add ring related information to the fdinfo output (Hao)
- Hybrid async mode (Hao)
- Support for batched issue on block (me)
- sqe error trace improvement (me)
- IOPOLL efficiency improvements (Pavel)
- submit state cleanups and improvements (Pavel)
- Completion side improvements (Pavel)
- Drain improvements (Pavel)
- Buffer selection cleanups (Pavel)
- Fixed file node improvements (Pavel)
- io-wq setup cancelation fix (Pavel)
- Various other performance improvements and cleanups (Pavel)
- Misc fixes (Arnd, Bixuan, Changcheng, Hao, me, Noah)"
* tag 'for-5.16/io_uring-2021-10-29' of git://git.kernel.dk/linux-block: (97 commits)
io-wq: remove worker to owner tw dependency
io_uring: harder fdinfo sq/cq ring iterating
io_uring: don't assign write hint in the read path
io_uring: clusterise ki_flags access in rw_prep
io_uring: kill unused param from io_file_supports_nowait
io_uring: clean up timeout async_data allocation
io_uring: don't try io-wq polling if not supported
io_uring: check if opcode needs poll first on arming
io_uring: clean iowq submit work cancellation
io_uring: clean io_wq_submit_work()'s main loop
io-wq: use helper for worker refcounting
io_uring: implement async hybrid mode for pollable requests
io_uring: Use ERR_CAST() instead of ERR_PTR(PTR_ERR())
io_uring: split logic of force_nonblock
io_uring: warning about unused-but-set parameter
io_uring: inform block layer of how many requests we are submitting
io_uring: simplify io_file_supports_nowait()
io_uring: combine REQ_F_NOWAIT_{READ,WRITE} flags
io_uring: arm poll for non-nowait files
fs/io_uring: Prioritise checking faster conditions first in io_write
...
|
|
Pull block driver updates from Jens Axboe:
- paride driver cleanups (Christoph)
- Remove cryptoloop support (Christoph)
- null_blk poll support (me)
- Now that add_disk() supports proper error handling, add it to various
drivers (Luis)
- Make ataflop actually work again (Michael)
- s390 dasd fixes (Stefan, Heiko)
- nbd fixes (Yu, Ye)
- Remove redundant wq flush in mtip32xx (Christophe)
- NVMe updates
- fix a multipath partition scanning deadlock (Hannes Reinecke)
- generate uevent once a multipath namespace is operational again
(Hannes Reinecke)
- support unique discovery controller NQNs (Hannes Reinecke)
- fix use-after-free when a port is removed (Israel Rukshin)
- clear shadow doorbell memory on resets (Keith Busch)
- use struct_size (Len Baker)
- add error handling support for add_disk (Luis Chamberlain)
- limit the maximal queue size for RDMA controllers (Max Gurtovoy)
- use a few more symbolic names (Max Gurtovoy)
- fix error code in nvme_rdma_setup_ctrl (Max Gurtovoy)
- add support for ->map_queues on FC (Saurav Kashyap)
- support the current discovery subsystem entry (Hannes Reinecke)
- use flex_array_size and struct_size (Len Baker)
- bcache fixes (Christoph, Coly, Chao, Lin, Qing)
- MD updates (Christoph, Guoqing, Xiao)
- Misc fixes (Dan, Ding, Jiapeng, Shin'ichiro, Ye)
* tag 'for-5.16/drivers-2021-10-29' of git://git.kernel.dk/linux-block: (117 commits)
null_blk: Fix handling of submit_queues and poll_queues attributes
block: ataflop: Fix warning comparing pointer to 0
bcache: replace snprintf in show functions with sysfs_emit
bcache: move uapi header bcache.h to bcache code directory
nvmet: use flex_array_size and struct_size
nvmet: register discovery subsystem as 'current'
nvmet: switch check for subsystem type
nvme: add new discovery log page entry definitions
block: ataflop: more blk-mq refactoring fixes
block: remove support for cryptoloop and the xor transfer
mtd: add add_disk() error handling
rnbd: add error handling support for add_disk()
um/drivers/ubd_kern: add error handling support for add_disk()
m68k/emu/nfblock: add error handling support for add_disk()
xen-blkfront: add error handling support for add_disk()
bcache: add error handling support for add_disk()
dm: add add_disk() error handling
block: aoe: fixup coccinelle warnings
nvmet: use struct_size over open coded arithmetic
nvme: drop scan_lock and always kick requeue list when removing namespaces
...
|
|
Pull block updates from Jens Axboe:
- mq-deadline accounting improvements (Bart)
- blk-wbt timer fix (Andrea)
- Untangle the block layer includes (Christoph)
- Rework the poll support to be bio based, which will enable adding
support for polling for bio based drivers (Christoph)
- Block layer core support for multi-actuator drives (Damien)
- blk-crypto improvements (Eric)
- Batched tag allocation support (me)
- Request completion batching support (me)
- Plugging improvements (me)
- Shared tag set improvements (John)
- Concurrent queue quiesce support (Ming)
- Cache bdev in ->private_data for block devices (Pavel)
- bdev dio improvements (Pavel)
- Block device invalidation and block size improvements (Xie)
- Various cleanups, fixes, and improvements (Christoph, Jackie,
Masahira, Tejun, Yu, Pavel, Zheng, me)
* tag 'for-5.16/block-2021-10-29' of git://git.kernel.dk/linux-block: (174 commits)
blk-mq-debugfs: Show active requests per queue for shared tags
block: improve readability of blk_mq_end_request_batch()
virtio-blk: Use blk_validate_block_size() to validate block size
loop: Use blk_validate_block_size() to validate block size
nbd: Use blk_validate_block_size() to validate block size
block: Add a helper to validate the block size
block: re-flow blk_mq_rq_ctx_init()
block: prefetch request to be initialized
block: pass in blk_mq_tags to blk_mq_rq_ctx_init()
block: add rq_flags to struct blk_mq_alloc_data
block: add async version of bio_set_polled
block: kill DIO_MULTI_BIO
block: kill unused polling bits in __blkdev_direct_IO()
block: avoid extra iter advance with async iocb
block: Add independent access ranges support
blk-mq: don't issue request directly in case that current is to be blocked
sbitmap: silence data race warning
blk-cgroup: synchronize blkg creation against policy deactivation
block: refactor bio_iov_bvec_set()
block: add single bio async direct IO helper
...
|
|
This patch is closely related to commit 6016df8fe874 ("selftests/bpf:
Fix broken riscv build"). When clang includes the system include
directories but targets BPF programs, __BITS_PER_LONG defaults to 32
unless explicitly set. Work around this problem by explicitly setting
__BITS_PER_LONG to __riscv_xlen.
Signed-off-by: Björn Töpel <bjorn@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20211028161057.520552-5-bjorn@kernel.org
|
|
Add macros for 64-bit RISC-V PT_REGS to bpf_tracing.h.
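A hedged sketch of the kind of macros involved (the register-to-field
mapping is assumed from the RISC-V struct pt_regs layout; see bpf_tracing.h
for the exact additions):

/* Sketch only, with a _SKETCH suffix to mark the names as illustrative:
 * a0-a7 are argument registers, a0 the return value, ra the return
 * address, sp the stack pointer and epc the program counter. */
#define PT_REGS_PARM1_SKETCH(x)	((x)->a0)
#define PT_REGS_PARM2_SKETCH(x)	((x)->a1)
#define PT_REGS_RC_SKETCH(x)	((x)->a0)
#define PT_REGS_RET_SKETCH(x)	((x)->ra)
#define PT_REGS_SP_SKETCH(x)	((x)->sp)
#define PT_REGS_IP_SKETCH(x)	((x)->epc)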
Signed-off-by: Björn Töpel <bjorn@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20211028161057.520552-4-bjorn@kernel.org
|
|
Add RISC-V to the HOSTARCH parsing, so that ARCH is "riscv", and not
"riscv32" or "riscv64".
This affects the perf and libbpf builds, so that arch specific
includes are correctly picked up for RISC-V.
Signed-off-by: Björn Töpel <bjorn@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20211028161057.520552-3-bjorn@kernel.org
|
|
Now that BPF programs can be up to 1M instructions, it is not uncommon
that a program requires more than the current 16 iterations to
converge.
Bump it to 32, which is enough for selftests/bpf and test_bpf.ko.
Signed-off-by: Björn Töpel <bjorn@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20211028161057.520552-2-bjorn@kernel.org
|
|
Add a test to check that sockmap with strparser is working well.
Signed-off-by: Liu Jian <liujian56@huawei.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20211029141216.211899-3-liujian56@huawei.com
|
|
After "skmsg: lose offset info in sk_psock_skb_ingress", the test case
with ktls failed. This because ktls parser(tls_read_size) return value
is 285 not 256.
The case like this:
tls_sk1 --> redir_sk --> tls_sk2
tls_sk1 sent out 512 bytes data, after tls related processing redir_sk
recved 570 btyes data, and redirect 512 (skb_use_parser) bytes data to
tls_sk2; but tls_sk2 needs 285 * 2 bytes data, receive timeout occurred.
Signed-off-by: Liu Jian <liujian56@huawei.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20211029141216.211899-2-liujian56@huawei.com
|
|
If sockmap enables strparser, the offset info is lost in
sk_psock_skb_ingress(). If the length determined by the parse_msg function
is not skb->len, the skb will be converted to sk_msg multiple times, and the
userspace app will get the data multiple times.
Fix this by getting the offset and length from strp_msg. And as Cong
suggested, add one bit in skb->_sk_redir to distinguish whether strparser is
enabled or disabled.
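A hedged sketch of where the offset and length come from when strparser is
enabled (not the actual skmsg.c hunk):

#include <net/strparser.h>

static void ingress_bounds_sketch(struct sk_buff *skb, u32 *off, u32 *len)
{
	/* strparser stores the payload offset and full length in the skb's
	 * control buffer; use them instead of assuming offset 0, skb->len. */
	struct strp_msg *stm = strp_msg(skb);

	*off = stm->offset;
	*len = stm->full_len;
}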
Fixes: 604326b41a6fb ("bpf, sockmap: convert to generic sk_msg interface")
Signed-off-by: Liu Jian <liujian56@huawei.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Cong Wang <cong.wang@bytedance.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20211029141216.211899-1-liujian56@huawei.com
|
|
After the most recent nightly Clang update, strobemeta selftests started
failing with the following error (relevant portion of assembly included):
1624: (85) call bpf_probe_read_user_str#114
1625: (bf) r1 = r0
1626: (18) r2 = 0xfffffffe
1628: (5f) r1 &= r2
1629: (55) if r1 != 0x0 goto pc+7
1630: (07) r9 += 104
1631: (6b) *(u16 *)(r9 +0) = r0
1632: (67) r0 <<= 32
1633: (77) r0 >>= 32
1634: (79) r1 = *(u64 *)(r10 -456)
1635: (0f) r1 += r0
1636: (7b) *(u64 *)(r10 -456) = r1
1637: (79) r1 = *(u64 *)(r10 -368)
1638: (c5) if r1 s< 0x1 goto pc+778
1639: (bf) r6 = r8
1640: (0f) r6 += r7
1641: (b4) w1 = 0
1642: (6b) *(u16 *)(r6 +108) = r1
1643: (79) r3 = *(u64 *)(r10 -352)
1644: (79) r9 = *(u64 *)(r10 -456)
1645: (bf) r1 = r9
1646: (b4) w2 = 1
1647: (85) call bpf_probe_read_user_str#114
R1 unbounded memory access, make sure to bounds check any such access
In the above code, r0 and r1 are implicitly related. Clang knows that,
but the verifier isn't able to infer this relationship.
Yonghong Song narrowed down this "regression" in code generation to
a recent Clang optimization change ([0]), which for the BPF target
generates a code pattern that the BPF verifier can't handle, losing track
of register boundaries.
This patch works around the issue by adding a BPF assembly-based helper
that proves to the verifier that the upper bound of the register is
a given constant, by controlling the exact shape of the generated BPF
instruction sequence. This fixes the immediate issue for the strobemeta
selftest.
[0] https://github.com/llvm/llvm-project/commit/acabad9ff6bf13e00305d9d8621ee8eafc1f8b08
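A hedged sketch of the general technique (not the exact helper added to the
selftest): emit an explicit bounding instruction through inline BPF assembly
so the verifier sees a hard upper bound on the register:

/* Sketch only: the inline asm forces a literal "rX &= 0xff" instruction
 * into the generated program, so the verifier knows the value cannot
 * exceed 0xff regardless of what clang inferred elsewhere. */
static inline __attribute__((always_inline))
unsigned long bound_to_0xff_sketch(unsigned long len)
{
	asm volatile("%[len] &= 0xff" : [len] "+r"(len));
	return len;
}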
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20211029182907.166910-1-andrii@kernel.org
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/jlayton/linux
Pull file locking updates from Jeff Layton:
"Most of this is just follow-on cleanup work of documentation and
comments from the mandatory locking removal in v5.15.
The only real functional change is that LOCK_MAND flock() support is
also being removed, as it has basically been non-functional since the
v2.5 days"
* tag 'locks-v5.16' of git://git.kernel.org/pub/scm/linux/kernel/git/jlayton/linux:
fs: remove leftover comments from mandatory locking removal
locks: remove changelog comments
docs: fs: locks.rst: update comment about mandatory file locking
Documentation: remove reference to now removed mandatory-locking doc
locks: remove LOCK_MAND flock lock support
|
|
Disabling unprivileged BPF would help prevent unprivileged users from
creating certain conditions required for potential speculative execution
side-channel attacks on unmitigated affected hardware.
A deep dive on such attacks and current mitigations is available here [0].
Sync with what many distros are already applying, and disable
unprivileged BPF by default. An admin can enable this at runtime, if
necessary, as described in 08389d888287 ("bpf: Add kconfig knob for
disabling unpriv bpf by default").
[0] "BPF and Spectre: Mitigating transient execution attacks", Daniel Borkmann, eBPF Summit '21
https://ebpf.io/summit-2021-slides/eBPF_Summit_2021-Keynote-Daniel_Borkmann-BPF_and_Spectre.pdf
Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/bpf/0ace9ce3f97656d5f62d11093ad7ee81190c3c25.1635535215.git.pawan.kumar.gupta@linux.intel.com
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/jarkko/linux-tpmdd
Pull tpm updates from Jarkko Sakkinen:
"Only bug fixes"
* tag 'tpmdd-next-v5.16' of git://git.kernel.org/pub/scm/linux/kernel/git/jarkko/linux-tpmdd:
tpm_tis_spi: Add missing SPI ID
tpm: fix Atmel TPM crash caused by too frequent queries
tpm: Check for integer overflow in tpm2_map_response_body()
tpm: tis: Kconfig: Add helper dependency on COMPILE_TEST
|
|
https://git.kernel.org/pub/scm/linux/kernel/git/broonie/sound into for-linus
ASoC: Updates for v5.16
This is an unusually large set of updates, mostly a large crop of
unusually big drivers coupled with extensive overhauls of existing code.
There's a SH change here for the DAI format terminology; the change is
straightforward and the SH maintainers don't seem very active.
- A new version of the audio graph card which supports a wider range of
systems.
- Move of the Cirrus DSP framework into drivers/firmware to allow for
future use by non-audio DSPs.
- Several conversions to YAML DT bindings.
- Continuing cleanups to the SOF and Intel code.
- A very big overhaul of the cs42l42 driver, correcting many problems.
- Support for AMD Vangogh and Yellow Carp, Cirrus CS35L41, Maxim
MAX98520 and MAX98360A, Mediatek MT8195, Nuvoton NAU8821, nVidia
Tegra210, NXP i.MX8ULP, Qualcomm AudioReach, Realtek ALC5682I-VS,
RT5682S, and RT9120 and Rockchip RV1126 and RK3568
|
|
Pull memory folios from Matthew Wilcox:
"Add memory folios, a new type to represent either order-0 pages or the
head page of a compound page. This should be enough infrastructure to
support filesystems converting from pages to folios.
The point of all this churn is to allow filesystems and the page cache
to manage memory in larger chunks than PAGE_SIZE. The original plan
was to use compound pages like THP does, but I ran into problems with
some functions expecting only a head page while others expect the
precise page containing a particular byte.
The folio type allows a function to declare that it's expecting only a
head page. Almost incidentally, this allows us to remove various calls
to VM_BUG_ON(PageTail(page)) and compound_head().
This converts just parts of the core MM and the page cache. For 5.17,
we intend to convert various filesystems (XFS and AFS are ready; other
filesystems may make it) and also convert more of the MM and page
cache to folios. For 5.18, multi-page folios should be ready.
The multi-page folios offer some improvement to some workloads. The
80% win is real, but appears to be an artificial benchmark (postgres
startup, which isn't a serious workload). Real workloads (eg building
the kernel, running postgres in a steady state, etc) seem to benefit
between 0-10%. I haven't heard of any performance losses as a result
of this series. Nobody has done any serious performance tuning; I
imagine that tweaking the readahead algorithm could provide some more
interesting wins. There are also other places where we could choose to
create large folios and currently do not, such as writes that are
larger than PAGE_SIZE.
I'd like to thank all my reviewers who've offered review/ack tags:
Christoph Hellwig, David Howells, Jan Kara, Jeff Layton, Johannes
Weiner, Kirill A. Shutemov, Michal Hocko, Mike Rapoport, Vlastimil
Babka, William Kucharski, Yu Zhao and Zi Yan.
I'd also like to thank those who gave feedback I incorporated but
haven't offered up review tags for this part of the series: Nick
Piggin, Mel Gorman, Ming Lei, Darrick Wong, Ted Ts'o, John Hubbard,
Hugh Dickins, and probably a few others who I forget"
* tag 'folio-5.16' of git://git.infradead.org/users/willy/pagecache: (90 commits)
mm/writeback: Add folio_write_one
mm/filemap: Add FGP_STABLE
mm/filemap: Add filemap_get_folio
mm/filemap: Convert mapping_get_entry to return a folio
mm/filemap: Add filemap_add_folio()
mm/filemap: Add filemap_alloc_folio
mm/page_alloc: Add folio allocation functions
mm/lru: Add folio_add_lru()
mm/lru: Convert __pagevec_lru_add_fn to take a folio
mm: Add folio_evictable()
mm/workingset: Convert workingset_refault() to take a folio
mm/filemap: Add readahead_folio()
mm/filemap: Add folio_mkwrite_check_truncate()
mm/filemap: Add i_blocks_per_folio()
mm/writeback: Add folio_redirty_for_writepage()
mm/writeback: Add folio_account_redirty()
mm/writeback: Add folio_clear_dirty_for_io()
mm/writeback: Add folio_cancel_dirty()
mm/writeback: Add folio_account_cleaned()
mm/writeback: Add filemap_dirty_folio()
...
|
|
Move the error handling into a single switch statement for clarity.
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
|
|
Since we do things like setting flags, etc., it really is more appropriate
to use sock_lock().
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
|