2024-03-01  block: add a queue_limits_stack_bdev helper  (Christoph Hellwig)
Add a small wrapper around blk_stack_limits() that allows passing a bdev for the bottom device and prints an error in case of a misaligned device. The name fits into the new queue limits API, and the intent is to eventually replace disk_stack_limits.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20240228225653.947152-3-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
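A minimal sketch of what such a wrapper looks like, assuming the existing blk_stack_limits(), bdev_get_queue(), and get_start_sect() interfaces; illustrative, not the committed diff:

    void queue_limits_stack_bdev(struct queue_limits *t, struct block_device *bdev,
                                 sector_t offset, const char *pfx)
    {
        /* blk_stack_limits() returns non-zero when the bottom device's
         * limits are misaligned relative to the top device */
        if (blk_stack_limits(t, &bdev_get_queue(bdev)->limits,
                             get_start_sect(bdev) + offset))
            pr_notice("%s: Warning: Device %pg is misaligned\n", pfx, bdev);
    }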
2024-03-01  block: add a queue_limits_set helper  (Christoph Hellwig)
Add a small wrapper around queue_limits_commit_update for stacking drivers that don't want to update existing limits, but instead set an entirely new set.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20240228225653.947152-2-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
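A plausible shape for the helper, assuming the queue_limits_commit_update() API (which drops q->limits_lock, so the wrapper takes it); a sketch, not the committed diff:

    int queue_limits_set(struct request_queue *q, struct queue_limits *lim)
    {
        /* caller supplies a complete new set of limits in *lim */
        mutex_lock(&q->limits_lock);
        return queue_limits_commit_update(q, lim);
    }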
2024-03-01  mm, slab: remove the corner case of inc_slabs_node()  (Chengming Zhou)
We already have the inc_slabs_node() call after kmem_cache_node->node[node] is initialized in early_kmem_cache_node_alloc(), so this special case in inc_slabs_node() can be removed. Then we don't need to consider the existence of kmem_cache_node in inc_slabs_node() anymore.

Signed-off-by: Chengming Zhou <chengming.zhou@linux.dev>
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
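For context, a sketch of the corner case being removed (field names follow SLUB's mm/slub.c but are illustrative, not quoted from the patch):

    static inline void inc_slabs_node(struct kmem_cache *s, int node, int objects)
    {
        struct kmem_cache_node *n = get_node(s, node);

        /* The NULL check existed only for the bootstrap path in
         * early_kmem_cache_node_alloc(); with the counting done after the
         * node is initialized, the check can go away. */
        if (likely(n)) {
            atomic_long_inc(&n->nr_slabs);
            atomic_long_add(objects, &n->total_objects);
        }
    }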
2024-03-01  mm/slab: Fix a kmemleak in kmem_cache_destroy()  (Xiaolei Wang)
For early kmem cache creation, slab_sysfs_init() has not yet been called. Consequently, kmem_cache_destroy() cannot use kobj_type::release to release the kmem_cache structure. Therefore, tweak kmem_cache_release() to use slab_kmem_cache_release() for releasing the kmem_cache when slab_state isn't FULL. This fixes memory leaks like the following:

unreferenced object 0xffff0000c2d87080 (size 128):
  comm "swapper/0", pid 1, jiffies 4294893428
  hex dump (first 32 bytes):
    00 00 00 00 ad 4e ad de ff ff ff ff 6b 6b 6b 6b  .....N......kkkk
    ff ff ff ff ff ff ff ff b8 ab 48 89 00 80 ff ff  .....H.....
  backtrace (crc 8819d0f6):
    [<ffff80008317a298>] kmemleak_alloc+0xb0/0xc4
    [<ffff8000807e553c>] kmem_cache_alloc_node+0x288/0x3a8
    [<ffff8000807e95f0>] __kmem_cache_create+0x1e4/0x64c
    [<ffff8000807216bc>] kmem_cache_create_usercopy+0x1c4/0x2cc
    [<ffff8000807217e0>] kmem_cache_create+0x1c/0x28
    [<ffff8000819f6278>] arm_v7s_alloc_pgtable+0x1c0/0x6d4
    [<ffff8000819f53a0>] alloc_io_pgtable_ops+0xe8/0x2d0
    [<ffff800084b2d2c4>] arm_v7s_do_selftests+0xe0/0x73c
    [<ffff800080016b68>] do_one_initcall+0x11c/0x7ac
    [<ffff800084a71ddc>] kernel_init_freeable+0x53c/0xbb8
    [<ffff8000831728d8>] kernel_init+0x24/0x144
    [<ffff800080018e98>] ret_from_fork+0x10/0x20

Signed-off-by: Xiaolei Wang <xiaolei.wang@windriver.com>
Reviewed-by: Chengming Zhou <chengming.zhou@linux.dev>
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
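A sketch of the fix's shape, assuming SLUB's slab_state and sysfs helpers; illustrative only:

    static void kmem_cache_release(struct kmem_cache *s)
    {
        if (slab_state >= FULL) {
            sysfs_slab_unlink(s);
            sysfs_slab_release(s);      /* kobj_type::release frees s */
        } else {
            slab_kmem_cache_release(s); /* sysfs not up yet: free directly */
        }
    }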
2024-03-01  arm: dts: marvell: clearfog-gtr-l8: align port numbers with enclosure  (Josua Mayer)
Clearfog GTR has an official enclosure with labels for all interfaces. The "lan" ports on the 8-port switch were numbered in the device tree in reverse order with respect to the enclosure. Update all device-tree labels to match.

Signed-off-by: Josua Mayer <josua@solid-run.com>
Signed-off-by: Gregory CLEMENT <gregory.clement@bootlin.com>
2024-03-01  arm: dts: marvell: clearfog-gtr-l8: add support for second sfp connector  (Josua Mayer)
Clearfog GTR L8 has an extra SFP connector on the managed switch port 9. Add descriptions for both entities along with pinctrl.

Signed-off-by: Josua Mayer <josua@solid-run.com>
Signed-off-by: Gregory CLEMENT <gregory.clement@bootlin.com>
2024-03-01  arm64/mm: Avoid ID mapping of kpti flag if it is no longer needed  (Ard Biesheuvel)
arm64_use_ng_mappings will be set to 'true' by the early boot code if it decides to use non-global (nG) attributes for all kernel mappings, typically when enabling KASLR on a system that does not implement E0PD. In this case, the G-to-nG update routines are never called, and so there is no reason to create the writable mapping of the associated status flag in the ID map.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20240301104046.1234309-6-ardb+git@google.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2024-03-01  arm64/mm: Use generic __pud_free() helper in pud_free() implementation  (Ard Biesheuvel)
Commit 0dd4f60a2c76 ("arm64: mm: Add support for folding PUDs at runtime") implements specialized PUD alloc/free helpers to allow the decision whether or not to fold PUDs to be made at runtime when the number of paging levels is 4 or higher.

Its implementation of pud_free() is based on the generic version that existed when the patch was first written, but in the meantime, the freeing of a PUD has become a bit more involved. So instead of simply freeing the page, we should invoke the generic __pud_free() that encapsulates whatever needs doing at this point.

This fixes a reported warning emitted by the page flags self-diagnostics.

Reported-by: Ryan Roberts <ryan.roberts@arm.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Tested-by: Ryan Roberts <ryan.roberts@arm.com>
Link: https://lore.kernel.org/r/20240301104046.1234309-5-ardb+git@google.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2024-03-01  arm64: dts: qcom: sc8280xp-x13s: limit pcie4 link speed  (Johan Hovold)
Limit the WiFi PCIe link speed to Gen2 speed (500 MB/s), which is the speed that the boot firmware has brought up the link at (and that Windows uses).

This is specifically needed to avoid a large number of link errors when restarting the link during boot (which are currently not reported). This also appears to fix intermittent failures to download the ath11k firmware during boot, which can be seen when there is a longer delay between restarting the link and loading the WiFi driver (e.g. when using full disk encryption).

Fixes: 123b30a75623 ("arm64: dts: qcom: sc8280xp-x13s: enable WiFi controller")
Cc: stable@vger.kernel.org # 6.2
Signed-off-by: Johan Hovold <johan+linaro@kernel.org>
Reviewed-by: Konrad Dybcio <konrad.dybcio@linaro.org>
Reviewed-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
Link: https://lore.kernel.org/r/20240223152124.20042-8-johan+linaro@kernel.org
Signed-off-by: Bjorn Andersson <andersson@kernel.org>
2024-03-01  arm64: dts: qcom: sc8280xp-crd: limit pcie4 link speed  (Johan Hovold)
Limit the WiFi PCIe link speed to Gen2 speed (500 MB/s), which is the speed that Windows uses.

Signed-off-by: Johan Hovold <johan+linaro@kernel.org>
Reviewed-by: Konrad Dybcio <konrad.dybcio@linaro.org>
Reviewed-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
Link: https://lore.kernel.org/r/20240223152124.20042-7-johan+linaro@kernel.org
Signed-off-by: Bjorn Andersson <andersson@kernel.org>
2024-03-01  dt-bindings: soc: renesas: renesas-soc: Add pattern for gray-hawk  (Lad Prabhakar)
Add a pattern for the Renesas Gray Hawk Single board (based on the R-Car V4M SoC).

Signed-off-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com>
Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be>
Link: https://lore.kernel.org/r/20240227220930.213703-1-prabhakar.mahadev-lad.rj@bp.renesas.com
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
2024-03-01  dm vdo thread-device: rename all methods to reflect vdo-only use  (Mike Snitzer)
Also move vdo_init()'s call to vdo_initialize_thread_device_registry so that it sits next to the other registry initialization.

Signed-off-by: Mike Snitzer <snitzer@kernel.org>
Signed-off-by: Matthew Sakai <msakai@redhat.com>
2024-03-01  dm vdo thread-registry: rename all methods to reflect vdo-only use  (Mike Snitzer)
Otherwise, the uds_ prefix is misleading (vdo_ is the new catch-all prefix for code that is used by vdo only, or by _both_ vdo and the indexer code).

Signed-off-by: Mike Snitzer <snitzer@kernel.org>
Signed-off-by: Matthew Sakai <msakai@redhat.com>
2024-03-01  dm vdo thread-utils: cleanup included headers  (Mike Snitzer)
Signed-off-by: Mike Snitzer <snitzer@kernel.org>
Signed-off-by: Matthew Sakai <msakai@redhat.com>
2024-03-01  dm vdo thread-utils: further cleanup of thread functions  (Mike Snitzer)
Change the thread function prefix from "uds_" to "vdo_" and fix vdo_join_threads() to return void.

Signed-off-by: Mike Snitzer <snitzer@kernel.org>
Signed-off-by: Matthew Sakai <msakai@redhat.com>
2024-03-01  dm vdo thread-utils: remove all uds_*_mutex wrappers  (Mike Snitzer)
Just use mutex_init, mutex_lock and mutex_unlock.

Signed-off-by: Mike Snitzer <snitzer@kernel.org>
Signed-off-by: Matthew Sakai <msakai@redhat.com>
2024-03-01  dm vdo thread-utils: push uds_*_cond interface down to indexer  (Mike Snitzer)
These are only used by indexer components. Also return void from uds_init_cond(), remove uds_destroy_cond(), and fix up all callers.

Signed-off-by: Mike Snitzer <snitzer@kernel.org>
Signed-off-by: Matthew Sakai <msakai@redhat.com>
2024-03-01  dm vdo: fold thread-cond-var.c into thread-utils  (Mike Snitzer)
Further cleanup of the thread-utils interfaces is needed, given that many functions should return void or be removed entirely because they amount to obfuscation via wrappers.

Signed-off-by: Mike Snitzer <snitzer@kernel.org>
Signed-off-by: Matthew Sakai <msakai@redhat.com>
2024-03-01  dm vdo indexer: rename uds.h to indexer.h  (Mike Snitzer)
Also remove an unnecessary include from funnel-queue.c.

Signed-off-by: Mike Snitzer <snitzer@kernel.org>
Signed-off-by: Matthew Sakai <msakai@redhat.com>
2024-03-01  dm vdo: rename uds-threads.[ch] to thread-utils.[ch]  (Mike Snitzer)
Signed-off-by: Mike Snitzer <snitzer@kernel.org>
Signed-off-by: Matthew Sakai <msakai@redhat.com>
2024-03-01  dm vdo indexer sparse-cache: cleanup threads_barrier code  (Mike Snitzer)
Rename 'barrier' to 'threads_barrier', remove the useless uds_destroy_barrier(), return void from the remaining methods, and clean up uds_make_sparse_cache() accordingly. Also remove the uds_ prefix from the two remaining threads_barrier functions.

Signed-off-by: Mike Snitzer <snitzer@kernel.org>
Signed-off-by: Matthew Sakai <msakai@redhat.com>
2024-03-01  dm vdo uds-threads: push 'barrier' down to sparse-cache  (Mike Snitzer)
The sparse-cache is the only user of the 'barrier' data structure, so move it into sparse-cache and make it private.

Signed-off-by: Mike Snitzer <snitzer@kernel.org>
Signed-off-by: Matthew Sakai <msakai@redhat.com>
2024-03-01  dm vdo uds-threads: eliminate uds_*_semaphore interfaces  (Mike Snitzer)
The implementation of the thread 'barrier' data structure does not require overdone private semaphore wrappers. Also rename the barrier structure's 'mutex' member (a semaphore) to 'lock'.

Signed-off-by: Mike Snitzer <snitzer@kernel.org>
Signed-off-by: Matthew Sakai <msakai@redhat.com>
2024-03-01  dm vdo: make uds_*_semaphore interface private to uds-threads.c  (Mike Snitzer)
Signed-off-by: Mike Snitzer <snitzer@kernel.org>
Signed-off-by: Matthew Sakai <msakai@redhat.com>
2024-03-01  dm vdo block-map: rename page state name from "UDS_FREE" to "FREE"  (Mike Snitzer)
The state name is only used in a log message, and there is no need for the "UDS_" prefix.

Signed-off-by: Mike Snitzer <snitzer@kernel.org>
Signed-off-by: Susan LeGendre-McGhee <slegendr@redhat.com>
Signed-off-by: Matthew Sakai <msakai@redhat.com>
2024-03-01  dm vdo volume-index: fix an assert statement in start_restoring_volume_sub_index()  (Harshit Mogalapalli)
Use "==" instead of "=" in an ASSERT() statement.

Fixes: ef074a31e88e ("dm vdo: implement the volume index")
Signed-off-by: Harshit Mogalapalli <harshit.m.mogalapalli@oracle.com>
Signed-off-by: Susan LeGendre-McGhee <slegendr@redhat.com>
Signed-off-by: Matthew Sakai <msakai@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@kernel.org>
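The bug pattern, with placeholder names rather than the vdo source:

    /* before: "=" assigns, and the assertion tests the assigned value */
    result = ASSERT(count = expected, "record count matches");
    /* after: "==" actually performs the comparison */
    result = ASSERT(count == expected, "record count matches");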
2024-03-01  nfsd: Fix NFSv3 atomicity bugs in nfsd_setattr()  (Trond Myklebust)
The main point of the guarded SETATTR is to prevent races with other WRITE and SETATTR calls. That requires that the check of the guard time against the inode ctime be done after taking the inode lock. Furthermore, we need to take into account the 32-bit nature of timestamps in NFSv3, and the possibility that files may change at a faster rate than once a second.

Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
Reviewed-by: Jeff Layton <jlayton@kernel.org>
Reviewed-by: NeilBrown <neilb@suse.de>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
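A hedged sketch of the ordering this requires (names illustrative, not the nfsd diff): take the inode lock first, then compare the guard time against ctime, truncating seconds to 32 bits as NFSv3 timestamps demand:

    inode_lock(inode);
    if (guarded) {
        struct timespec64 ctime = inode_get_ctime(inode);

        /* NFSv3 carries only 32 bits of seconds in the guard time */
        if ((u32)guardtime.tv_sec != (u32)ctime.tv_sec ||
            guardtime.tv_nsec != ctime.tv_nsec)
            err = nfserr_notsync;   /* raced with another change */
    }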
2024-03-01  nfsd: Fix a regression in nfsd_setattr()  (Trond Myklebust)
Commit bb4d53d66e4b ("NFSD: use (un)lock_inode instead of fh_(un)lock for file operations") broke the NFSv3 pre/post op attributes behaviour when doing a SETATTR rpc call by stripping out the calls to fh_fill_pre_attrs() and fh_fill_post_attrs().

Fixes: bb4d53d66e4b ("NFSD: use (un)lock_inode instead of fh_(un)lock for file operations")
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
Reviewed-by: Jeff Layton <jlayton@kernel.org>
Reviewed-by: NeilBrown <neilb@suse.de>
Message-ID: <20240216012451.22725-1-trondmy@kernel.org>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2024-03-01  NFSD: OP_CB_RECALL_ANY should recall both read and write delegations  (Dai Ngo)
Add RCA4_TYPE_MASK_WDATA_DLG to the ra_bmval bitmask of OP_CB_RECALL_ANY.

Signed-off-by: Dai Ngo <dai.ngo@oracle.com>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2024-03-01  NFSD: handle GETATTR conflict with write delegation  (Dai Ngo)
If a GETATTR request is for a file with a write delegation in effect, and the requested attributes include the change info and size attributes, then the request is handled as follows:

The server sends CB_GETATTR to the client to get the latest change info and file size. If these values are the same as the server's cached values, then the GETATTR proceeds as normal.

If either the change info or the file size differs from the server's cached values, or the file was already marked as modified, then:

. update time_modify and time_metadata in the file's metadata with the current time
. encode the GETATTR as normal, except the file size is encoded with the value returned from CB_GETATTR
. mark the file as modified

If the CB_GETATTR fails for any reason, the delegation is recalled and NFS4ERR_DELAY is returned for the GETATTR.

Signed-off-by: Dai Ngo <dai.ngo@oracle.com>
Reviewed-by: Jeff Layton <jlayton@kernel.org>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
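The decision flow above, as pseudocode (all names are placeholders, not the nfsd source):

    if (cb_getattr(deleg, &cb_change, &cb_size) != 0) {
        recall_delegation(deleg);
        return nfserr_jukebox;          /* NFS4ERR_DELAY */
    }
    if (cb_change != cached_change || cb_size != cached_size ||
        deleg->modified) {
        update_mtime_and_ctime(inode);  /* set both to the current time */
        encoded_size = cb_size;         /* report the client's size */
        deleg->modified = true;
    }
    /* otherwise encode the GETATTR from the server's own attributes */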
2024-03-01  NFSD: add support for CB_GETATTR callback  (Dai Ngo)
Includes:

. a CB_GETATTR proc for nfs4_cb_procedures[]
. XDR encoding and decoding functions for the CB_GETATTR request/reply
. adding nfs4_cb_fattr to nfs4_delegation for sending CB_GETATTR and storing the file attributes from the client's reply

Signed-off-by: Dai Ngo <dai.ngo@oracle.com>
Reviewed-by: Jeff Layton <jlayton@kernel.org>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2024-03-01  NFSD: Document the phases of CREATE_SESSION  (Chuck Lever)
As described in RFC 8881 Section 18.36.4, CREATE_SESSION can be split into four phases. NFSD's implementation now follows that description.

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2024-03-01  NFSD: Fix the NFSv4.1 CREATE_SESSION operation  (Chuck Lever)
RFC 8881 Section 18.36.4 discusses the implementation of the NFSv4.1 CREATE_SESSION operation. The section defines four phases of operation. Phase 2 processes the CREATE_SESSION sequence ID. As a separate step, Phase 3 evaluates the CREATE_SESSION arguments.

The problem we are concerned with is when phase 2 is successful but phase 3 fails. The spec language in this case is "No changes are made to any client records on the server." RFC 8881 Section 18.35.4 defines a "client record", and it does /not/ contain any details related to the special CREATE_SESSION slot. Therefore NFSD is incorrect to skip incrementing the CREATE_SESSION sequence id when phase 3 (see Section 18.36.4) of CREATE_SESSION processing fails. In other words, even though NFSD happens to store the cs_slot in a client record, in terms of the protocol the slot is logically separate from the client record.

Three complications:

1. The world has moved on since commit 86c3e16cc7aa ("nfsd4: confirm only on succesful create_session") broke this, so we can't simply revert that commit.
2. NFSD's CREATE_SESSION implementation does not cleanly delineate the logic of phases 2 and 3, so this won't be a surgical fix.
3. Because of the way it currently handles the CREATE_SESSION slot sequence number, nfsd4_create_session() isn't caching error responses in the CREATE_SESSION slot. Instead of replaying the response cache in those cases, it's executing the transaction again.

Reorganize the CREATE_SESSION slot sequence number accounting. This requires that error responses be appropriately cached in the CREATE_SESSION slot (once it is found).

Reported-by: Connor Smith <connor.smith@hitachivantara.com>
Closes: https://bugzilla.kernel.org/show_bug.cgi?id=218382
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2024-03-01  nfsd: clean up comments over nfs4_client definition  (Chen Hanxiao)
nfsd fault injection has been deprecated since commit 9d60d93198c6 ("Deprecate nfsd fault injection") and was removed by commit e56dc9e2949e ("nfsd: remove fault injection code"), so remove the outdated parts of the comments about fault injection.

Signed-off-by: Chen Hanxiao <chenhx.fnst@fujitsu.com>
Reviewed-by: Jeff Layton <jlayton@kernel.org>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2024-03-01  svcrdma: Add Write chunk WRs to the RPC's Send WR chain  (Chuck Lever)
Chain RDMA Writes that convey Write chunks onto the local Send chain. This means all WRs for an RPC Reply are now posted with a single ib_post_send() call, and there is a single Send completion when all of these are done. That reduces both the per-transport doorbell rate and completion rate.

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2024-03-01  svcrdma: Post WRs for Write chunks in svc_rdma_sendto()  (Chuck Lever)
Refactor to eventually enable svcrdma to post the Write WRs for each RPC response using the same ib_post_send() as the Send WR (ie, as a single WR chain).

svc_rdma_result_payload (originally svc_rdma_read_payload) was added so that the upper-layer XDR encoder could identify a range of bytes to be possibly conveyed by RDMA (if a Write chunk was provided by the client).

The purpose of commit f6ad77590a5d ("svcrdma: Post RDMA Writes while XDR encoding replies") was to post as much of the result payload outside of svc_rdma_sendto() as possible, because svc_rdma_sendto() used to be called with the xpt_mutex held. However, since commit ca4faf543a33 ("SUNRPC: Move xpt_mutex into socket xpo_sendto methods"), the xpt_mutex is no longer held when calling svc_rdma_sendto(). Thus, that benefit no longer applies.

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2024-03-01  svcrdma: Post the Reply chunk and Send WR together  (Chuck Lever)
Reduce the doorbell and Send completion rates when sending RPC/RDMA replies that have Reply chunks. NFS READDIR procedures typically return their result in a Reply chunk, for example.

Instead of calling ib_post_send() to post the Write WRs for the Reply chunk, and then calling it again to post the Send WR that conveys the transport header, chain the Write WRs to the Send WR and call ib_post_send() only once.

Thanks to the Send Queue completion ordering rules, when the Send WR completes, that guarantees that the Write WRs posted before it have also completed successfully. Thus all Write WRs for the Reply chunk can remain unsignaled. Instead of handling a Write completion and then a Send completion, only the Send completion is seen, and it handles clean-up for both the Writes and the Send.

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
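Schematically, WR chaining works by linking ib_send_wr structures through their next pointers before a single ib_post_send(); a sketch, not the svcrdma code:

    /* Writes first, Send last; only the Send is signaled */
    write_wr->next = &send_wr;
    send_wr.next = NULL;
    send_wr.send_flags = IB_SEND_SIGNALED;  /* one completion for the chain */
    ret = ib_post_send(qp, write_wr, NULL);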
2024-03-01  svcrdma: Move write_info for Reply chunks into struct svc_rdma_send_ctxt  (Chuck Lever)
Since the RPC transaction's svc_rdma_send_ctxt will stay around for the duration of the RDMA Write operation, the write_info structure for the Reply chunk can reside in the request's svc_rdma_send_ctxt instead of being allocated separately.

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2024-03-01  svcrdma: Post Send WR chain  (Chuck Lever)
Eventually I'd like the server to post the reply's Send WR along with any Write WRs using only a single call to ib_post_send(), in order to reduce the NIC's doorbell rate.

To do this, add an anchor for a WR chain to svc_rdma_send_ctxt, and refactor svc_rdma_send() to post this WR chain to the Send Queue. For the moment, the posted chain will continue to contain a single Send WR.

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2024-03-01  svcrdma: Fix retry loop in svc_rdma_send()  (Chuck Lever)
Don't call ib_post_send() at all if the transport is already shutting down.

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2024-03-01  svcrdma: Prevent a UAF in svc_rdma_send()  (Chuck Lever)
In some error flow cases, svc_rdma_wc_send() releases @ctxt. Copy the sc_cid field in @ctxt to a stack variable in order to guarantee that the value is available after the ib_post_send() call. In case the new comment looks a little strange, this will be done with at least one more field in a subsequent patch.

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
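The shape of the fix, with field and tracepoint usage shown illustratively rather than quoted from the patch:

    /* svc_rdma_wc_send() may free ctxt as soon as the WR is posted,
     * so copy everything the error path needs onto the stack first */
    struct rpc_rdma_cid cid = ctxt->sc_cid;

    ret = ib_post_send(rdma->sc_qp, &ctxt->sc_send_wr, NULL);
    if (ret)
        trace_svcrdma_sq_post_err(rdma, &cid, ret); /* never touch ctxt here */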
2024-03-01  svcrdma: Fix SQ wake-ups  (Chuck Lever)
Ensure there is a wake-up when increasing sc_sq_avail. Likewise, if a wake-up is done, sc_sq_avail needs to be updated, otherwise the wait_event() conditional is never going to be met.

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2024-03-01  svcrdma: Increase the per-transport rw_ctx count  (Chuck Lever)
rdma_rw_mr_factor() returns the smallest number of MRs needed to move a particular number of pages. svcrdma currently asks for the number of MRs needed to move RPCSVC_MAXPAGES (a little over one megabyte), as that is the number of pages in the largest r/wsize the server supports.

This call assumes that the client's NIC can bundle a full one megabyte payload in a single rdma_segment. In fact, most NICs cannot handle a full megabyte with a single rkey / rdma_segment. Clients will typically split even a single Read chunk into many segments.

The server needs one MR to read each rdma_segment in a Read chunk, and thus each one needs an rw_ctx. svcrdma has been vastly underestimating the number of rw_ctxs needed to handle 64 RPC requests with large Read chunks using small rdma_segments.

Unfortunately there doesn't seem to be a good way to estimate this number without knowing the client NIC's capabilities. Even then, the client RPC/RDMA implementation is still free to split a chunk into smaller segments (for example, it might be using physical registration, which needs an rdma_segment per page). The best we can do for now is choose a number that will guarantee forward progress in the worst case (one page per segment).

At some later point, we could add some mechanisms to make this much less of a problem:

- Add a core API to add more rw_ctxs to an already-established QP
- svcrdma could treat rw_ctx exhaustion as a temporary error and try again
- Limit the number of Reads in flight

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
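The worst-case estimate sketched as a one-liner, assuming the rdma_rw_mr_factor() interface from the RDMA R/W API (field names assumed from svcrdma):

    /* one rdma_segment per page: MRs needed per single-page segment,
     * scaled by the largest payload the server supports */
    ctxts = rdma_rw_mr_factor(dev, newxprt->sc_port_num, 1) * RPCSVC_MAXPAGES;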
2024-03-01  svcrdma: Update max_send_sges after QP is created  (Chuck Lever)
rdma_create_qp() can modify cap.max_send_sges. Copy the new value to the svcrdma transport so it is bound by the new limit instead of the requested one.

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
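A sketch of the pattern (struct and field names assumed from svcrdma, not quoted from the patch):

    ret = rdma_create_qp(newxprt->sc_cm_id, newxprt->sc_pd, &qp_attr);
    if (ret)
        goto errout;
    /* the core may have granted fewer SGEs than requested */
    newxprt->sc_max_send_sges = qp_attr.cap.max_send_sge;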
2024-03-01  svcrdma: Report CQ depths in debugging output  (Chuck Lever)
Check that svc_rdma_accept() is allocating an appropriate number of CQEs.

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2024-03-01  svcrdma: Reserve an extra WQE for ib_drain_rq()  (Chuck Lever)
Do as other ULPs already do: ensure there is an extra Receive WQE reserved for the tear-down drain WR. I haven't heard reports of problems but it can't hurt.

Note that rq_depth is used to compute the Send Queue depth as well, so this fix should affect both the SQ and RQ.

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2024-03-01  MAINTAINERS: add Alex Aring as Reviewer for file locking code  (Jeff Layton)
Alex helps co-maintain the DLM code and did some recent work to fix up how lockd and GFS2 work together. Add him as a Reviewer for file locking changes.

Acked-by: Alexander Aring <aahringo@redhat.com>
Acked-by: Chuck Lever <chuck.lever@oracle.com>
Signed-off-by: Jeff Layton <jlayton@kernel.org>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2024-03-01  nfsd: Simplify the allocation of slab caches in nfsd4_init_slabs  (Kunwu Chan)
Use the new KMEM_CACHE() macro instead of calling kmem_cache_create() directly, to simplify the creation of SLAB caches and make the code cleaner and more readable.

Signed-off-by: Kunwu Chan <chentao@kylinos.cn>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
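The conversion pattern, using one of nfsd's stateid caches as a hypothetical example (the same shape applies to the two entries that follow):

    /* before: name, size, and flags spelled out by hand */
    slab = kmem_cache_create("nfsd4_stateids",
                             sizeof(struct nfs4_ol_stateid), 0, 0, NULL);
    /* after: KMEM_CACHE() derives the name and size from the struct,
     * so the cache is named after the struct itself */
    slab = KMEM_CACHE(nfs4_ol_stateid, 0);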
2024-03-01  nfsd: Simplify the allocation of slab caches in nfsd_drc_slab_create  (Kunwu Chan)
Use the new KMEM_CACHE() macro instead of calling kmem_cache_create() directly, to simplify the creation of SLAB caches. Also change the cache name from 'nfsd_drc' to 'nfsd_cacherep'.

Signed-off-by: Kunwu Chan <chentao@kylinos.cn>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
2024-03-01  nfsd: Simplify the allocation of slab caches in nfsd_file_cache_init  (Kunwu Chan)
Use the new KMEM_CACHE() macro instead of calling kmem_cache_create() directly, to simplify the creation of SLAB caches.

Signed-off-by: Kunwu Chan <chentao@kylinos.cn>
Acked-by: Jeff Layton <jlayton@kernel.org>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>