Commit 13369816 authored by Dennis Zhou, committed by Jens Axboe

block: fix blk-iolatency accounting underflow

The blk-iolatency controller measures the time from rq_qos_throttle() to
rq_qos_done_bio() and attributes this time to the first bio that needs
to create the request. As a result, a bio that is plug-mergeable or
bio-mergeable bypasses the blk-iolatency controller entirely.
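
To make the intended pairing concrete, here is a minimal userspace sketch
(toy names, not the kernel code) of the accounting window: the per-group
inflight count goes up in the throttle hook and comes back down on bio
completion, and a merged bio is meant to skip both ends of the window.

#include <stdatomic.h>

/* Toy stand-in for the per-group rq_wait inflight counter. */
struct toy_group {
	atomic_int inflight;
};

/* Models rq_qos_throttle(): the accounting window opens here. */
static void toy_throttle(struct toy_group *grp)
{
	atomic_fetch_add(&grp->inflight, 1);
}

/* Models rq_qos_done_bio(): the window closes and latency is charged. */
static void toy_done_bio(struct toy_group *grp)
{
	atomic_fetch_sub(&grp->inflight, 1);
}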

The recent series [1], which tags all bios with blkgs, undermined how
iolatency determined which bios it was charging and should process in
rq_qos_done_bio(). Because every bio is now tagged, the atomic_t inflight
count in struct rq_wait underflowed, resulting in a stall.
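
As a hypothetical sequence (reusing the toy helpers above, not code from
the patch), the underflow looks like this once merged bios also carry a
blkg and therefore reach the completion hook:

	struct toy_group grp = { 0 };

	toy_throttle(&grp);   /* bio A creates the request: inflight == 1          */
	                      /* bio B is merged into A's request, never throttled */
	toy_done_bio(&grp);   /* bio A completes: inflight == 0                    */
	toy_done_bio(&grp);   /* bio B also reaches done_bio: inflight == -1       */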

This patch adds a new flag BIO_TRACKED to let controllers know that a
bio is going through the rq_qos path. blk-iolatency now checks if this
flag is set to see if it should process the bio in rq_qos_done_bio().

Overloading BIO_QUEUE_ENTERED works, but makes the flag rules confusing.
BIO_THROTTLED was another candidate, but the flag is set for all bios
that have gone through blk-throttle code. Overloading a flag comes with
the burden of making sure that when either implementation changes, a
change in setting rules for one doesn't cause a bug in the other. So
here, we unfortunately opt for adding a new flag.

[1] https://lore.kernel.org/lkml/20181205171039.73066-1-dennis@kernel.org/

Fixes: 5cdf2e3f ("blkcg: associate blkg when associating a device")
Signed-off-by: Dennis Zhou <dennis@kernel.org>
Cc: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
parent c16d6b5a
block/blk-iolatency.c
@@ -593,7 +593,7 @@ static void blkcg_iolatency_done_bio(struct rq_qos *rqos, struct bio *bio)
 	bool enabled = false;
 
 	blkg = bio->bi_blkg;
-	if (!blkg)
+	if (!blkg || !bio_flagged(bio, BIO_TRACKED))
 		return;
 
 	iolat = blkg_to_lat(bio->bi_blkg);
block/blk-rq-qos.h
@@ -168,6 +168,11 @@ static inline void rq_qos_done_bio(struct request_queue *q, struct bio *bio)
 
 static inline void rq_qos_throttle(struct request_queue *q, struct bio *bio)
 {
+	/*
+	 * BIO_TRACKED lets controllers know that a bio went through the
+	 * normal rq_qos path.
+	 */
+	bio_set_flag(bio, BIO_TRACKED);
 	if (q->rq_qos)
 		__rq_qos_throttle(q->rq_qos, bio);
 }
include/linux/blk_types.h
@@ -228,6 +228,7 @@ struct bio {
 #define BIO_TRACE_COMPLETION 10	/* bio_endio() should trace the final completion
 				 * of this bio. */
 #define BIO_QUEUE_ENTERED 11	/* can use blk_queue_enter_live() */
+#define BIO_TRACKED 12		/* set if bio goes through the rq_qos path */
 /* See BVEC_POOL_OFFSET below before adding new flags */