    f2fs: compress: fix deadloop in f2fs_write_cache_pages() · c5d3f9b7
    Chao Yu authored
    With below mount option and testcase, it hangs kernel.
    
    1. mount -t f2fs -o compress_log_size=5 /dev/vdb /mnt/f2fs
    2. touch /mnt/f2fs/file
    3. chattr +c /mnt/f2fs/file
    4. dd if=/dev/zero of=/mnt/f2fs/file bs=1MB count=1
    5. sync
    6. dd if=/dev/zero of=/mnt/f2fs/file bs=111 count=11 conv=notrunc
    7. sync
    
    INFO: task sync:4788 blocked for more than 120 seconds.
          Not tainted 6.5.0-rc1+ #322
    "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
    task:sync            state:D stack:0     pid:4788  ppid:509    flags:0x00000002
    Call Trace:
     <TASK>
     __schedule+0x335/0xf80
     schedule+0x6f/0xf0
     wb_wait_for_completion+0x5e/0x90
     sync_inodes_sb+0xd8/0x2a0
     sync_inodes_one_sb+0x1d/0x30
     iterate_supers+0x99/0xf0
     ksys_sync+0x46/0xb0
     __do_sys_sync+0x12/0x20
     do_syscall_64+0x3f/0x90
     entry_SYSCALL_64_after_hwframe+0x6e/0xd8
    
    The reason is that f2fs_all_cluster_page_ready() assumes the pages array
    covers at least one cluster; otherwise it always returns false, which causes
    f2fs_write_cache_pages() to loop forever.
    
    By default, the pages array size is 16, which covers any cluster_size less
    than or equal to 16. For the case where cluster_size is larger than 16,
    let's allocate the pages array dynamically.
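    The fix described above can be sketched as follows. This is a minimal
    user-space illustration of the idea, not the actual patch: the helper name
    (pick_pages_array) and the ONSTACK_PAGES constant are hypothetical stand-ins
    for the f2fs internals; only the sizing rule (the array must cover at least
    one full cluster) comes from the commit message.

    ```c
    #include <stdio.h>
    #include <stdlib.h>

    #define ONSTACK_PAGES 16  /* default pages array size, per the commit message */

    /* Use the default on-stack array when one cluster fits in it; otherwise
     * allocate a pages array large enough to cover a whole cluster, so
     * f2fs_all_cluster_page_ready() can eventually return true. */
    static void **pick_pages_array(void **onstack, unsigned int cluster_size,
                                   unsigned int *nr_pages)
    {
        if (cluster_size <= ONSTACK_PAGES) {
            *nr_pages = ONSTACK_PAGES;
            return onstack;           /* default array already covers a cluster */
        }
        *nr_pages = cluster_size;     /* must cover at least one full cluster */
        return calloc(cluster_size, sizeof(void *));
    }

    int main(void)
    {
        void *onstack[ONSTACK_PAGES];
        unsigned int nr;

        /* compress_log_size=5 -> cluster_size = 32: needs a dynamic array */
        void **pages = pick_pages_array(onstack, 1U << 5, &nr);
        printf("%u %s\n", nr, pages == onstack ? "onstack" : "heap");
        if (pages != onstack)
            free(pages);

        /* cluster_size = 4: the default on-stack array suffices */
        pages = pick_pages_array(onstack, 1U << 2, &nr);
        printf("%u %s\n", nr, pages == onstack ? "onstack" : "heap");
        return 0;
    }
    ```

    With compress_log_size=5 (as in the repro steps), the cluster spans 32
    pages, so the 16-entry default array can never hold a full cluster and a
    heap allocation is required.
    
    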
    
    Fixes: 4c8ff709 ("f2fs: support data compression")
    Signed-off-by: Chao Yu <chao@kernel.org>
    Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>