Commit 3deaa719 authored by Shaohua Li, committed by Linus Torvalds

readahead: fix pipeline break caused by block plug

Herbert Poetzl reported a performance regression since 2.6.39.  The test
is a simple dd read, but with a big block size.  The reason is:

T1: ra (A, A+128k), (A+128k, A+256k)
T2: lock_page for page A, submit the 256k
T3: hit page A+128k, ra (A+256k, A+384k).  The range isn't submitted
because of the plug, and there isn't any lock_page till we hit page A+256k
because all pages from A to A+256k are in memory
T4: hit page A+256k, ra (A+384k, A+512k).  Because of the plug, the range
isn't submitted again.
T5: lock_page A+256k, so (A+256k, A+512k) will be submitted.  The task is
waiting for (A+256k, A+512k) to finish.

No request goes to disk in T3 and T4, so the readahead pipeline breaks.
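
For reference, a rough sketch of the nested-plug interaction (an illustrative
sketch, assuming the 3.x plug semantics where blk_start_plug() only attaches
the outermost plug to the task, so requests queued while an inner plug is
nominally active actually sit on the outer plug's list):

	struct blk_plug outer_plug, inner_plug;

	blk_start_plug(&outer_plug);	/* taken in generic_file_aio_read() */

	/* ... readahead path of the buffered read ... */
	blk_start_plug(&inner_plug);	/* nested: the task keeps the outer plug */
	/* readahead bios are submitted here; the requests pile up on the
	 * outer plug's list instead of going to the device */
	blk_finish_plug(&inner_plug);	/* inner list is empty, nothing is dispatched */

	/* the queued requests only reach the device when the task blocks
	 * (e.g. in lock_page_killable(), which flushes the plug) or at the
	 * outer blk_finish_plug() below */
	blk_finish_plug(&outer_plug);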

We really don't need the block plug in generic_file_aio_read() for buffered
I/O.  The readahead code already has its own plug and fine-grained control
over when I/O should be submitted.  Deleting the plug for buffered I/O fixes
the regression.

One side effect is that the plug makes the request size 256k; without it the
size is 128k.  This is because the default readahead size is 128k, and it is
not a reason to keep the plug here.
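
In other words, after this change the plug is only taken around the O_DIRECT
path; buffered reads rely on readahead's own plugging.  A simplified sketch
of the structure the diff below produces (not the full function):

	if (filp->f_flags & O_DIRECT) {
		struct blk_plug plug;

		blk_start_plug(&plug);
		retval = mapping->a_ops->direct_IO(READ, iocb,
						   iov, pos, nr_segs);
		blk_finish_plug(&plug);
	}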

Vivek said:

: We submit some readahead IO to the device request queue, but because of the
: nested plug, the queue never gets unplugged.  When the read logic reaches a
: page which is not in the page cache, it waits for the page to be read from
: disk (lock_page_killable()), and at that time we flush the plug list.
:
: So effectively the readahead logic is partially broken because of nested
: plugging.  Removing the top-level plug (in generic_file_aio_read()) for
: buffered reads will allow unplugging the queue earlier for readahead.

Signed-off-by: Shaohua Li <shaohua.li@intel.com>
Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
Reported-by: Herbert Poetzl <herbert@13thfloor.at>
Tested-by: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Vivek Goyal <vgoyal@redhat.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 55ca6140
@@ -1400,15 +1400,12 @@ generic_file_aio_read(struct kiocb *iocb, const struct iovec *iov,
 	unsigned long seg = 0;
 	size_t count;
 	loff_t *ppos = &iocb->ki_pos;
-	struct blk_plug plug;
 
 	count = 0;
 	retval = generic_segment_checks(iov, &nr_segs, &count, VERIFY_WRITE);
 	if (retval)
 		return retval;
 
-	blk_start_plug(&plug);
-
 	/* coalesce the iovecs and go direct-to-BIO for O_DIRECT */
 	if (filp->f_flags & O_DIRECT) {
 		loff_t size;
@@ -1424,8 +1421,12 @@ generic_file_aio_read(struct kiocb *iocb, const struct iovec *iov,
 		retval = filemap_write_and_wait_range(mapping, pos,
 					pos + iov_length(iov, nr_segs) - 1);
 		if (!retval) {
+			struct blk_plug plug;
+
+			blk_start_plug(&plug);
 			retval = mapping->a_ops->direct_IO(READ, iocb,
 							iov, pos, nr_segs);
+			blk_finish_plug(&plug);
 		}
 		if (retval > 0) {
 			*ppos = pos + retval;
@@ -1481,7 +1482,6 @@ generic_file_aio_read(struct kiocb *iocb, const struct iovec *iov,
 			break;
 		}
 out:
-	blk_finish_plug(&plug);
 	return retval;
 }
 EXPORT_SYMBOL(generic_file_aio_read);