author     Jeffle Xu <jefflexu@linux.alibaba.com>   2020-11-26 17:18:52 +0800
committer  Jens Axboe <axboe@kernel.dk>             2020-12-07 20:29:15 -0700
commit     cc29e1bf0d63f728a5bd60ef22638bbf77369552 (patch)
tree       091dbb8f872ae7180064eb054fdbb46318797f40 /block
parent     2afdeb23e4750acb4ff16fd86f566c9074708691 (diff)
block: disable iopoll for split bio
iopoll was originally intended for small, latency-sensitive IO. It doesn't work well for big IO, especially when the IO has to be split into multiple bios. In that case, the cookie returned by __submit_bio_noacct_mq() is actually the cookie of the *last* split bio, and completion of that last split bio via iopoll does not mean the whole original bio has completed. Callers of iopoll still need to wait for the completion of the other split bios. Besides, bio splitting can cause further trouble for iopoll, which isn't supposed to be used for big IO in the first place.

iopoll for a split bio can race if CPU migration happens during bio submission. Since the returned cookie is that of the last split bio, polling the corresponding hardware queue does not help complete the other split bios if they were enqueued on different hardware queues. Because interrupts are disabled for polling queues, completion of those other split bios then depends on the timeout mechanism, which can cause a hang.

iopoll for a split bio can also hang sync polling. Currently both the blkdev and the iomap-based filesystems (ext4/xfs, etc.) support sync polling in their direct IO routines. These routines submit bios without the REQ_NOWAIT flag set and then start sync polling in the current process context. The process can hang in blk_mq_get_tag() if the submitted bio has to be split into multiple bios and rapidly exhausts the queue depth: it waits for the completion of the previously allocated requests, but those would only be reaped by the polling it has not yet started, hence a deadlock.

To avoid the subtle trouble described above, simply disable iopoll for a split bio and return BLK_QC_T_NONE in that case. The side effect is that non-HIPRI IO now also returns BLK_QC_T_NONE, which should be acceptable since the returned cookie is never used for non-HIPRI IO.

Suggested-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jeffle Xu <jefflexu@linux.alibaba.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
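For context, the sync polling the message refers to follows a loop like the sketch below. This is a simplified, hedged sketch modeled on __blkdev_direct_IO() from this kernel era, not the upstream code; the function name and the "done" flag are placeholders for illustration. It shows why a cookie identifying only the last split bio is insufficient: blk_poll() only reaps completions for that one cookie's hardware queue.

/*
 * Simplified sketch of a sync-polling direct IO wait loop, modeled on
 * __blkdev_direct_IO() from this kernel era; not the exact upstream code.
 * The "done" flag is a placeholder set by the bio's bi_end_io callback.
 */
static void dio_wait_for_completion(struct block_device *bdev,
				    struct kiocb *iocb, blk_qc_t qc,
				    bool *done)
{
	for (;;) {
		set_current_state(TASK_UNINTERRUPTIBLE);
		if (READ_ONCE(*done))
			break;
		/*
		 * If the bio was split, qc identifies only the last
		 * fragment, so polling here cannot reap the other
		 * fragments; with interrupts off on polled queues their
		 * completion has to come from somewhere else.
		 */
		if (!(iocb->ki_flags & IOCB_HIPRI) ||
		    !blk_poll(bdev_get_queue(bdev), qc, true))
			blk_io_schedule();
	}
	__set_current_state(TASK_RUNNING);
}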
Diffstat (limited to 'block')
-rw-r--r--	block/blk-merge.c	8
-rw-r--r--	block/blk-mq.c	5
2 files changed, 13 insertions, 0 deletions
diff --git a/block/blk-merge.c b/block/blk-merge.c
index 7497d86fff38..c3399bf29e9c 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -279,6 +279,14 @@ static struct bio *blk_bio_segment_split(struct request_queue *q,
return NULL;
split:
*segs = nsegs;
+
+ /*
+ * Bio splitting may cause subtle trouble such as hang when doing sync
+ * iopoll in direct IO routine. Given performance gain of iopoll for
+ * big IO can be trivial, disable iopoll when a split is needed.
+ */
+ bio->bi_opf &= ~REQ_HIPRI;
+
return bio_split(bio, sectors, GFP_NOIO, bs);
}
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 2881a457de83..95ecc4c69969 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2159,6 +2159,7 @@ blk_qc_t blk_mq_submit_bio(struct bio *bio)
unsigned int nr_segs;
blk_qc_t cookie;
blk_status_t ret;
+ bool hipri;
blk_queue_bounce(q, &bio);
__blk_queue_split(&bio, &nr_segs);
@@ -2175,6 +2176,8 @@ blk_qc_t blk_mq_submit_bio(struct bio *bio)
rq_qos_throttle(q, bio);
+ hipri = bio->bi_opf & REQ_HIPRI;
+
data.cmd_flags = bio->bi_opf;
rq = __blk_mq_alloc_request(&data);
if (unlikely(!rq)) {
@@ -2267,6 +2270,8 @@ blk_qc_t blk_mq_submit_bio(struct bio *bio)
blk_mq_sched_insert_request(rq, false, true, true);
}
+ if (!hipri)
+ return BLK_QC_T_NONE;
return cookie;
queue_exit:
blk_queue_exit(q);
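With both hunks applied, a HIPRI bio that had to be split makes blk_mq_submit_bio() return BLK_QC_T_NONE, and callers already treat an invalid cookie as "nothing to poll". The following is an illustrative sketch of that caller-side fallback; the helper name and completion variable are assumptions for illustration, not upstream code.

/*
 * Illustrative sketch only (not upstream code).  After this patch a
 * HIPRI bio that was split yields BLK_QC_T_NONE, so blk_qc_t_valid()
 * fails and we never spin on a cookie covering just the last fragment.
 */
static void dio_wait(struct request_queue *q, blk_qc_t qc,
		     struct completion *done)
{
	if (!blk_qc_t_valid(qc)) {
		/* split (or non-HIPRI) bio: rely on IRQ-driven completion */
		wait_for_completion_io(done);
		return;
	}
	/* single, unsplit HIPRI bio: spin-poll its hardware queue */
	while (!try_wait_for_completion(done))
		blk_poll(q, qc, true);
}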