Oleg Nesterov authored
1. The current code can't always detect sequential reading when the read
   size is not PAGE_CACHE_SIZE aligned. If an application reads the file
   in 4096+512 byte chunks, we have:

       1st read: first read detected, prev_page = 2.
       2nd read: offset == 2, the read is considered random.

   page_cache_readahead() should treat prev_page == offset as sequential
   access. In this case it is better to ++offset, because of
   blockable_page_cache_readahead(offset, size).

2. If an application reads 4096 bytes with *ppos == 512, we have to read
   2 pages, but req_size == 1 in do_generic_mapping_read(). Usually this
   is not a problem, but in the random read case it results in
   unnecessary page cache misses.

   ~$ time dd conv=notrunc if=/tmp/GIG of=/tmp/dummy bs=$((4096+512))

       2.6.11-clean:   real=370.35  user=0.16  sys=14.66
       2.6.11-patched: real=234.49  user=0.19  sys=12.41

Signed-off-by: Oleg Nesterov <oleg@tv-sign.ru>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
671ccb4b
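
For illustration only, here is a small user-space C sketch of the arithmetic
behind the two points above. pages_spanned() and looks_sequential() are
hypothetical helpers written for this note, not code from the patch, and the
PAGE_CACHE_* constants simply assume 4096-byte pages:

    #include <stdio.h>

    #define PAGE_CACHE_SHIFT 12                       /* assume 4096-byte pages */
    #define PAGE_CACHE_SIZE  (1UL << PAGE_CACHE_SHIFT)

    /* Point 1: with reads that are not PAGE_CACHE_SIZE aligned, the next
     * request can arrive with offset == prev_page instead of prev_page + 1;
     * the commit argues that case should also count as sequential. */
    static int looks_sequential(unsigned long offset, unsigned long prev_page)
    {
        return offset == prev_page || offset == prev_page + 1;
    }

    /* Point 2: number of page-cache pages a read of 'count' bytes starting
     * at byte offset 'ppos' touches: round [ppos, ppos + count) out to page
     * boundaries and count the pages. */
    static unsigned long pages_spanned(unsigned long long ppos, unsigned long count)
    {
        unsigned long long first = ppos >> PAGE_CACHE_SHIFT;
        unsigned long long last  = (ppos + count - 1) >> PAGE_CACHE_SHIFT;
        return (unsigned long)(last - first + 1);
    }

    int main(void)
    {
        /* 4096 bytes at *ppos == 512 cross a page boundary: 2 pages,
         * even though count / PAGE_CACHE_SIZE == 1. */
        printf("ppos=512,  count=4096 -> %lu pages\n", pages_spanned(512, 4096));
        printf("ppos=0,    count=4096 -> %lu pages\n", pages_spanned(0, 4096));

        /* the second 4096+512 read in the dd example starts at byte 4608 */
        printf("ppos=4608, count=4608 -> %lu pages\n", pages_spanned(4608, 4608));

        printf("offset=2, prev_page=2 sequential? %d\n", looks_sequential(2, 2));
        return 0;
    }

Compiled and run, the sketch prints 2, 1, 2 pages for the three reads and
reports the offset == prev_page case as sequential, which is the behaviour
the commit message argues page_cache_readahead() should adopt.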