Commit fe683ada authored by Dan Williams, committed by Linus Torvalds

dax: guarantee page aligned results from bdev_direct_access()

If a ->direct_access() implementation ever returns a map count that is not
a multiple of PAGE_SIZE, catch the error in bdev_direct_access().  This
simplifies error checking in upper layers.
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Reported-by: Ross Zwisler <ross.zwisler@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 0e749e54
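
For illustration only (not part of the commit): with the new -ENXIO check below
in place, a DAX-side caller can treat a successful return from
bdev_direct_access() as a whole number of pages instead of re-validating the
alignment itself.  A minimal sketch, assuming the prototype of this era,
bdev_direct_access(bdev, sector, &addr, &pfn, size), a page-aligned size
argument, and hypothetical local variable names:

	void __pmem *addr;
	unsigned long pfn;
	long len, nr_pages;

	len = bdev_direct_access(bdev, sector, &addr, &pfn, size);
	if (len < 0)
		return len;	/* -ERANGE, -ENXIO, ... */
	/* len is a positive multiple of PAGE_SIZE; no extra alignment test */
	nr_pages = len >> PAGE_SHIFT;
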
fs/block_dev.c
@@ -494,6 +494,8 @@ long bdev_direct_access(struct block_device *bdev, sector_t sector,
 	avail = ops->direct_access(bdev, sector, addr, pfn);
 	if (!avail)
 		return -ERANGE;
+	if (avail > 0 && avail & ~PAGE_MASK)
+		return -ENXIO;
 	return min(avail, size);
 }
 EXPORT_SYMBOL_GPL(bdev_direct_access);
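
The new test relies on ~PAGE_MASK being exactly the low PAGE_SHIFT bits: a byte
count with any of those bits set is not a whole number of pages.  A standalone
user-space sketch of the same predicate (illustration only; the 4K page size
and the example_* names are assumptions):

	#include <stdbool.h>
	#include <stdio.h>

	#define EXAMPLE_PAGE_SIZE	4096UL			/* assumed 4K pages */
	#define EXAMPLE_PAGE_MASK	(~(EXAMPLE_PAGE_SIZE - 1))

	/* True when "bytes" is a positive whole number of pages. */
	static bool example_page_aligned(long bytes)
	{
		return bytes > 0 && !(bytes & ~EXAMPLE_PAGE_MASK);
	}

	int main(void)
	{
		/* 8192 = two pages -> ok; 4608 = 4K + 512 -> would now get -ENXIO */
		printf("%d %d\n", example_page_aligned(8192), example_page_aligned(4608));
		return 0;
	}
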
fs/dax.c
@@ -52,7 +52,6 @@ int dax_clear_blocks(struct inode *inode, sector_t block, long size)
 		sz = min_t(long, count, SZ_128K);
 		clear_pmem(addr, sz);
 		size -= sz;
-		BUG_ON(sz & 511);
 		sector += sz / 512;
 		cond_resched();
 	} while (size);
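
Why the BUG_ON is safe to drop (a reasoning sketch, not stated in the commit
text): sz is derived from the map count returned by bdev_direct_access(),
which this patch keeps page aligned, capped at SZ_128K; assuming the caller's
size is itself sector aligned, every operand is a multiple of 512 and the
sz / 512 conversion is exact.  A tiny standalone check of that arithmetic
(illustration only, with assumed sizes):

	#include <assert.h>
	#include <stdio.h>

	int main(void)
	{
		const long sector = 512, page = 4096, cap = 128 * 1024;	/* assumed sizes */

		long count = 3 * page;			/* hypothetical page-aligned map count */
		long sz = count < cap ? count : cap;	/* min_t(long, count, SZ_128K) */

		assert(sz % sector == 0);		/* the dropped BUG_ON(sz & 511) */
		printf("advance by %ld sectors\n", sz / sector);
		return 0;
	}
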