| author | Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com> | 2021-05-06 18:51:35 -0700 | 
|---|---|---|
| committer | Christoph Hellwig <hch@lst.de> | 2021-05-11 18:30:45 +0200 | 
| commit | 608a969046e6e0567d05a166be66c77d2dd8220b (patch) | |
| tree | c0148214146e4fce7cbf544fc72f633272e757da | |
| parent | 5e1f689913a4498e3081093670ef9d85b2c60920 (diff) | |
nvmet: fix inline bio check for bdev-ns
When handling rw commands, in the inline bio case we only consider the
transfer size. This works well when req->sg_cnt fits into
req->inline_bvec, but it results in a warning in __bio_add_page()
when req->sg_cnt > NVMET_MAX_INLINE_BIOVEC.
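For context, the two limits compared here are defined in drivers/nvme/target/nvmet.h along these lines (a sketch consistent with the numbers in the example below; see the tree for the exact definitions):

```c
/* Sketch of the inline I/O limits from drivers/nvme/target/nvmet.h. */
#define NVMET_MAX_INLINE_BIOVEC		8
#define NVMET_MAX_INLINE_DATA_LEN	(NVMET_MAX_INLINE_BIOVEC * PAGE_SIZE)
```

With a 4096-byte PAGE_SIZE this allows up to 32768 bytes of inline data, but only if those bytes map to at most 8 bio vectors.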
Consider an I/O of size 32768 whose buffer does not start on a page
boundary; the I/O is then split in the following manner :-
[ 2206.256140] nvmet: sg->length 3440 sg->offset 656
[ 2206.256144] nvmet: sg->length 4096 sg->offset 0
[ 2206.256148] nvmet: sg->length 4096 sg->offset 0
[ 2206.256152] nvmet: sg->length 4096 sg->offset 0
[ 2206.256155] nvmet: sg->length 4096 sg->offset 0
[ 2206.256159] nvmet: sg->length 4096 sg->offset 0
[ 2206.256163] nvmet: sg->length 4096 sg->offset 0
[ 2206.256166] nvmet: sg->length 4096 sg->offset 0
[ 2206.256170] nvmet: sg->length 656 sg->offset 0
Now req->transfer_len == NVMET_MAX_INLINE_DATA_LEN, i.e. 32768, but
req->sg_cnt (9) exceeds NVMET_MAX_INLINE_BIOVEC (8); the arithmetic is
sketched after the call path below.
This triggers the WARN_ON_ONCE() in the following call path :-
nvmet_bdev_execute_rw()
	bio_add_page()
		__bio_add_page()
			WARN_ON_ONCE(bio_full(bio, len));
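A quick way to see why the unaligned buffer needs nine SG entries even though the transfer fits in 32768 bytes is the arithmetic below (a hypothetical userspace helper, not kernel code, assuming 4096-byte pages):

```c
#include <stdio.h>

#define PAGE_SIZE 4096

/*
 * Hypothetical illustration: count how many page-sized SG segments an
 * I/O of 'len' bytes needs when the buffer starts at 'offset' within
 * its first page. Not kernel code, just the arithmetic behind the
 * sg->length/sg->offset log above.
 */
static unsigned int sg_segments(unsigned int offset, unsigned int len)
{
	unsigned int nr = 0;

	while (len) {
		unsigned int chunk = PAGE_SIZE - offset;

		if (chunk > len)
			chunk = len;
		len -= chunk;
		offset = 0;
		nr++;
	}
	return nr;
}

int main(void)
{
	/* 32768 bytes starting at offset 656: 3440 + 7 * 4096 + 656. */
	printf("segments = %u\n", sg_segments(656, 32768));	/* prints 9 */
	return 0;
}
```

The first segment only covers 4096 - 656 = 3440 bytes, so the tail of the transfer spills into an extra ninth segment while the total length still equals 32768.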
This scenario is very hard to reproduce: on the nvme-loop transport it
only occurs with rw commands issued through the passthru IOCTL interface
from a host application whose data buffer is allocated with malloc()
rather than posix_memalign().
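As a rough illustration (hypothetical userspace snippet, not part of the patch): glibc's malloc() typically returns only 16-byte-aligned memory, so a 32768-byte buffer usually starts mid-page, while posix_memalign() with 4096-byte alignment starts on a page boundary and maps to exactly eight full pages:

```c
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

int main(void)
{
	void *m = malloc(32768);		/* usually not page aligned */
	void *p = NULL;

	if (posix_memalign(&p, 4096, 32768))	/* page aligned by request */
		return 1;

	printf("malloc offset in page:         %lu\n",
	       (unsigned long)((uintptr_t)m % 4096));
	printf("posix_memalign offset in page: %lu\n",
	       (unsigned long)((uintptr_t)p % 4096));	/* always 0 */

	free(m);
	free(p);
	return 0;
}
```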
Fixes: 73383adfad24 ("nvmet: don't split large I/Os unconditionally")
Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
| -rw-r--r-- | drivers/nvme/target/io-cmd-bdev.c | 2 |
| -rw-r--r-- | drivers/nvme/target/nvmet.h | 6 |

2 files changed, 7 insertions(+), 1 deletion(-)
diff --git a/drivers/nvme/target/io-cmd-bdev.c b/drivers/nvme/target/io-cmd-bdev.c
index 9a8b3726a37c..429263ca9b97 100644
--- a/drivers/nvme/target/io-cmd-bdev.c
+++ b/drivers/nvme/target/io-cmd-bdev.c
@@ -258,7 +258,7 @@ static void nvmet_bdev_execute_rw(struct nvmet_req *req)
 
 	sector = nvmet_lba_to_sect(req->ns, req->cmd->rw.slba);
 
-	if (req->transfer_len <= NVMET_MAX_INLINE_DATA_LEN) {
+	if (nvmet_use_inline_bvec(req)) {
 		bio = &req->b.inline_bio;
 		bio_init(bio, req->inline_bvec, ARRAY_SIZE(req->inline_bvec));
 	} else {
diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index 5566ed403576..d69a409515d6 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -616,4 +616,10 @@ static inline sector_t nvmet_lba_to_sect(struct nvmet_ns *ns, __le64 lba)
 	return le64_to_cpu(lba) << (ns->blksize_shift - SECTOR_SHIFT);
 }
 
+static inline bool nvmet_use_inline_bvec(struct nvmet_req *req)
+{
+	return req->transfer_len <= NVMET_MAX_INLINE_DATA_LEN &&
+	       req->sg_cnt <= NVMET_MAX_INLINE_BIOVEC;
+}
+
 #endif /* _NVMET_H */
