    f2fs: fix compressed file start atomic write may cause data corruption · 9b56adcf
    Fengnan Chang authored
    When a compressed file already has blocks, f2fs_ioc_start_atomic_write
    succeeds, but the compressed flag remains set on the inode. Writing a
    partial compressed cluster and then committing the atomic write causes
    data corruption.
    
    This is the reproduction process (a minimal reproducer sketch follows
    the steps):
    Step 1:
    create a compressed file, write 64K of data, and call fsync(); the
    blocks are written as a compressed cluster.
    Step 2:
    ioctl(F2FS_IOC_START_ATOMIC_WRITE)  -- this should fail, but it does not.
    write page 0 and page 3.
    ioctl(F2FS_IOC_COMMIT_ATOMIC_WRITE) -- pages 0 and 3 are written as in
    a normal (non-compressed) file.
    Step 3:
    drop caches.
    read pages 0-4  -- since page 0 has a valid block address, the cluster
    is read as non-compressed, so pages 1 and 2 are filled with compressed
    data or zeros.
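    
    Below is a minimal userspace reproducer sketch of the steps above; it is
    not part of the original report. Assumptions: 4KB pages, the default
    4-page compress cluster, an f2fs mount with compression enabled (mkfs
    and mount options are up to the tester), and the placeholder path
    /mnt/f2fs/testfile. The ioctl numbers match include/uapi/linux/f2fs.h
    and are defined locally in case that header is not installed.
    
    #include <fcntl.h>
    #include <linux/fs.h>	/* FS_IOC_GETFLAGS/SETFLAGS, FS_COMPR_FL */
    #include <linux/ioctl.h>	/* _IO() */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    
    #ifndef F2FS_IOC_START_ATOMIC_WRITE
    #define F2FS_IOCTL_MAGIC		0xf5
    #define F2FS_IOC_START_ATOMIC_WRITE	_IO(F2FS_IOCTL_MAGIC, 1)
    #define F2FS_IOC_COMMIT_ATOMIC_WRITE	_IO(F2FS_IOCTL_MAGIC, 2)
    #endif
    
    #define PAGE_SZ 4096
    
    int main(void)
    {
    	char buf[PAGE_SZ];
    	int attrs = 0;
    	int fd = open("/mnt/f2fs/testfile", O_RDWR | O_CREAT | O_TRUNC, 0644);
    
    	if (fd < 0) {
    		perror("open");
    		return 1;
    	}
    
    	/* Step 1: mark the file compressed, write 64K, then fsync so the
    	 * blocks reach disk as compressed clusters. */
    	if (ioctl(fd, FS_IOC_GETFLAGS, &attrs) < 0)
    		perror("FS_IOC_GETFLAGS");
    	attrs |= FS_COMPR_FL;
    	if (ioctl(fd, FS_IOC_SETFLAGS, &attrs) < 0)
    		perror("FS_IOC_SETFLAGS");
    	memset(buf, 'a', sizeof(buf));
    	for (int i = 0; i < 16; i++)
    		pwrite(fd, buf, sizeof(buf), (off_t)i * PAGE_SZ);
    	fsync(fd);
    
    	/* Step 2: start an atomic write (succeeds on a buggy kernel even
    	 * though the file already has compressed blocks), dirty only pages
    	 * 0 and 3 of the first cluster, then commit. */
    	if (ioctl(fd, F2FS_IOC_START_ATOMIC_WRITE) < 0)
    		perror("F2FS_IOC_START_ATOMIC_WRITE");
    	memset(buf, 'b', sizeof(buf));
    	pwrite(fd, buf, sizeof(buf), 0);
    	pwrite(fd, buf, sizeof(buf), 3 * PAGE_SZ);
    	if (ioctl(fd, F2FS_IOC_COMMIT_ATOMIC_WRITE) < 0)
    		perror("F2FS_IOC_COMMIT_ATOMIC_WRITE");
    
    	/* Step 3: drop the page cache (needs root), then read back pages
    	 * 0-4; on a buggy kernel pages 1 and 2 come back corrupted. */
    	if (system("echo 3 > /proc/sys/vm/drop_caches") != 0)
    		fprintf(stderr, "drop_caches failed (need root?)\n");
    	for (int i = 0; i < 5; i++)
    		if (pread(fd, buf, sizeof(buf), (off_t)i * PAGE_SZ) < 0)
    			perror("pread");
    
    	close(fd);
    	return 0;
    }
    
    On a kernel without this fix the final reads return compressed or stale
    bytes for pages 1 and 2; with the fix, F2FS_IOC_START_ATOMIC_WRITE is
    expected to fail for a compressed file that already has blocks.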
    
    The root cause is that, after commit 7eab7a69 ("f2fs: compress: remove
    unneeded read when rewrite whole cluster"), f2fs_write_begin() in step 2
    only sets the target pages dirty, and f2fs_commit_inmem_pages() then
    writes those partial raw pages into the compressed cluster, corrupting
    the compressed cluster layout.
    
    Fixes: 4c8ff709 ("f2fs: support data compression")
    Fixes: 7eab7a69 ("f2fs: compress: remove unneeded read when rewrite whole cluster")
    Reported-by: kernel test robot <lkp@intel.com>
    Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
    Signed-off-by: Fengnan Chang <changfengnan@vivo.com>
    Reviewed-by: Chao Yu <chao@kernel.org>
    Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>