author    Paolo Valente <paolo.valente@linaro.org>    2017-04-12 18:23:14 +0200
committer Jens Axboe <axboe@fb.com>                   2017-04-19 08:30:26 -0600
commit    bcd5642607ab9195e22a1617d92fb82698d44448 (patch)
tree      99a1fded0a351990ac6a5601252a7374076831d4
parent    77b7dcead36d15d7af9159f2a5f91149c5887634 (diff)
block, bfq: preserve a low latency also with NCQ-capable drives
I/O schedulers typically allow NCQ-capable drives to prefetch I/O requests, as NCQ boosts the throughput exactly by prefetching and internally reordering requests.

Unfortunately, as discussed in detail and shown experimentally in [1], this may cause fairness and latency guarantees to be violated. The main problem is that the internal scheduler of an NCQ-capable drive may postpone the service of some unlucky (prefetched) requests as long as it deems serving other requests more appropriate to boost the throughput.

This patch addresses this issue by not disabling device idling for weight-raised queues, even if the device supports NCQ. This allows BFQ to start serving a new queue, and therefore allows the drive to prefetch new requests, only after the idling timeout expires. At that time, all the outstanding requests of the expired queue have been most certainly served.

[1] P. Valente and M. Andreolini, "Improving Application Responsiveness with the BFQ Disk I/O Scheduler", Proceedings of the 5th Annual International Systems and Storage Conference (SYSTOR '12), June 2012.
Slightly extended version:
http://algogroup.unimore.it/people/paolo/disk_sched/bfq-v1-suite-results.pdf

Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
Signed-off-by: Arianna Avanzini <avanzini.arianna@gmail.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
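The hunk below tightens the single condition under which BFQ gives up device idling for a queue. As a rough illustration of that decision, here is a minimal, user-space C sketch: the structs, field layout and function name (bfq_data_sketch, bfq_queue_sketch, keep_idling) are hypothetical stand-ins, not kernel symbols; only the names hw_tag, bfq_slice_idle, wr_coeff and the "seeky" notion mirror the actual bfq-iosched.c fields touched by this patch, and the wr_coeff value of 30 in the example is merely an assumed weight-raising coefficient.

#include <stdbool.h>
#include <stdio.h>

struct bfq_data_sketch {
	bool hw_tag;                 /* drive supports NCQ-style queueing */
	unsigned int bfq_slice_idle; /* idling window; 0 disables idling */
};

struct bfq_queue_sketch {
	bool seeky;             /* queue issues mostly random I/O */
	unsigned int wr_coeff;  /* > 1 means the queue is weight-raised */
	bool has_active_ref;    /* the owning process is still around */
};

/* Simplified version of the condition changed by this patch. */
static bool keep_idling(const struct bfq_data_sketch *bfqd,
			const struct bfq_queue_sketch *bfqq)
{
	if (!bfqq->has_active_ref || bfqd->bfq_slice_idle == 0)
		return false;

	/*
	 * Before the patch: any seeky queue on an NCQ-capable drive lost
	 * idling.  After the patch: only non-weight-raised (wr_coeff == 1)
	 * seeky queues do, so weight-raised, latency-sensitive queues keep
	 * the device idle until their timeout expires, instead of letting
	 * the drive prefetch and reorder requests from other queues.
	 */
	if (bfqd->hw_tag && bfqq->seeky && bfqq->wr_coeff == 1)
		return false;

	return true;
}

int main(void)
{
	struct bfq_data_sketch d = { .hw_tag = true, .bfq_slice_idle = 8 };
	struct bfq_queue_sketch raised = { .seeky = true, .wr_coeff = 30,
					   .has_active_ref = true };
	struct bfq_queue_sketch plain  = { .seeky = true, .wr_coeff = 1,
					   .has_active_ref = true };

	printf("weight-raised seeky queue keeps idling: %d\n",
	       keep_idling(&d, &raised));
	printf("plain seeky queue keeps idling:         %d\n",
	       keep_idling(&d, &plain));
	return 0;
}

In this sketch, as in the patch, disabling idling remains the behavior for ordinary seeky queues on NCQ-capable drives (to preserve throughput), while weight-raised queues retain idling so their latency guarantees are not undone by the drive's internal scheduler.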
-rw-r--r--    block/bfq-iosched.c    3
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index 7f94ad3fa9f8..574a5f6a2370 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -6233,7 +6233,8 @@ static void bfq_update_idle_window(struct bfq_data *bfqd,
 	if (atomic_read(&bic->icq.ioc->active_ref) == 0 ||
 	    bfqd->bfq_slice_idle == 0 ||
-		(bfqd->hw_tag && BFQQ_SEEKY(bfqq)))
+		(bfqd->hw_tag && BFQQ_SEEKY(bfqq) &&
+			bfqq->wr_coeff == 1))
 		enable_idle = 0;
 	else if (bfq_sample_valid(bfqq->ttime.ttime_samples)) {
 		if (bfqq->ttime.ttime_mean > bfqd->bfq_slice_idle &&