Commit 274c0e74 authored by Linus Torvalds

Merge tag 'f2fs-for-4.17' of git://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs

Pull f2fs update from Jaegeuk Kim:
 "In this round, we've mainly focused on performance tuning and critical
  bug fixes occurred in low-end devices. Sheng Yong introduced
  lost_found feature to keep missing files during recovery instead of
  thrashing them. We're preparing coming fsverity implementation. And,
  we've got more features to communicate with users for better
  performance. In low-end devices, some memory-related issues were
  fixed, and subtle race condtions and corner cases were addressed as
  well.

  Enhancements:
   - large NAT bitmaps for more free node ids
   - add three block allocation policies to pass down write hints given by users
   - expose the extension list to users and introduce hot file extensions
   - tune f2fs seamlessly for small, low-end devices
   - set readdir_ra by default
   - give more resources under gc_urgent mode regarding discard and cleaning
   - introduce fsync_mode to enforce POSIX semantics or not
   - nowait aio support
   - add the lost_found feature to keep dangling inodes
   - reserve bits for the future fsverity feature
   - add test_dummy_encryption for FBE

  Bug fixes:
   - don't use highmem for dentry pages
   - align memory boundary for bitops
   - truncate preallocated blocks in write errors
   - guarantee i_times on fsync call
   - clear CP_TRIMMED_FLAG correctly
   - prevent node chain loop during recovery
   - avoid data race between atomic write and background cleaning
   - avoid unnecessary selinux violation warnings on resgid option
   - use GFP_NOFS to avoid deadlocks in quota and read paths
   - fix f2fs_skip_inode_update to allow i_size recovery

  In addition to the above, there are several minor bug fixes and clean-ups"

* tag 'f2fs-for-4.17' of git://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs: (50 commits)
  f2fs: remain written times to update inode during fsync
  f2fs: make assignment of t->dentry_bitmap more readable
  f2fs: truncate preallocated blocks in error case
  f2fs: fix a wrong condition in f2fs_skip_inode_update
  f2fs: reserve bits for fs-verity
  f2fs: Add a segment type check in inplace write
  f2fs: no need to initialize zero value for GFP_F2FS_ZERO
  f2fs: don't track new nat entry in nat set
  f2fs: clean up with F2FS_BLK_ALIGN
  f2fs: check blkaddr more accuratly before issue a bio
  f2fs: Set GF_NOFS in read_cache_page_gfp while doing f2fs_quota_read
  f2fs: introduce a new mount option test_dummy_encryption
  f2fs: introduce F2FS_FEATURE_LOST_FOUND feature
  f2fs: release locks before return in f2fs_ioc_gc_range()
  f2fs: align memory boundary for bitops
  f2fs: remove unneeded set_cold_node()
  f2fs: add nowait aio support
  f2fs: wrap all options with f2fs_sb_info.mount_opt
  f2fs: Don't overwrite all types of node to keep node chain
  f2fs: introduce mount option for fsync mode
  ...
parents 052c220d 214c2461
@@ -192,3 +192,14 @@ Date: November 2017
Contact:	"Sheng Yong" <shengyong1@huawei.com>
Description:
		Controls readahead inode block in readdir.
What: /sys/fs/f2fs/<disk>/extension_list
Date: February 2018
Contact: "Chao Yu" <yuchao0@huawei.com>
Description:
Used to configure the extension list:
- Query: cat /sys/fs/f2fs/<disk>/extension_list
- Add: echo '[h/c]extension' > /sys/fs/f2fs/<disk>/extension_list
- Del: echo '[h/c]!extension' > /sys/fs/f2fs/<disk>/extension_list
- [h] means add/del hot file extension
- [c] means add/del cold file extension
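
The same interface can be driven from a userspace program by writing the documented strings to the sysfs file. The sketch below is only an illustration of the interface described above; the disk name "sdb1" and the "db" extension are hypothetical placeholders.

```c
#include <stdio.h>

int main(void)
{
	/* "sdb1" is a placeholder device name; use the actual <disk> name. */
	FILE *f = fopen("/sys/fs/f2fs/sdb1/extension_list", "w");

	if (!f) {
		perror("fopen");
		return 1;
	}
	/* Register "db" as a hot file extension, as described above. */
	fprintf(f, "[h]db\n");
	fclose(f);
	return 0;
}
```

Deleting the same entry would write "[h]!db" instead, per the description above.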
@@ -174,6 +174,23 @@ offgrpjquota		Turn off group journalled quota.
offprjjquota		Turn off project journalled quota.
quota			Enable plain user disk quota accounting.
noquota			Disable all plain disk quota option.
whint_mode=%s		Control which write hints are passed down to the block
			layer. This supports "off", "user-based", and "fs-based".
			In "off" mode (default), f2fs does not pass down hints.
			In "user-based" mode, f2fs tries to pass down hints given
			by users. In "fs-based" mode, f2fs passes down hints
			according to its own policy.
alloc_mode=%s		Adjust the block allocation policy, which supports
			"reuse" and "default".
fsync_mode=%s		Control the policy of fsync. Currently supports "posix"
			and "strict". In "posix" mode, which is the default,
			fsync follows POSIX semantics and performs a lighter
			operation to improve filesystem performance. In "strict"
			mode, fsync is heavier and behaves in line with xfs, ext4
			and btrfs, where xfstests generic/342 passes, but
			performance regresses.
test_dummy_encryption	Enable dummy encryption, which provides a fake fscrypt
			context. The fake fscrypt context is used by xfstests.
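
As a usage illustration of the options above, the snippet below mounts an f2fs partition with the new write-hint, allocation, and fsync policies via mount(2). It is only a sketch: the device and mount-point paths are hypothetical placeholders.

```c
#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
	/* Device and mount point are placeholders for illustration only. */
	const char *dev = "/dev/sdb1";
	const char *dir = "/mnt/f2fs";

	/* Pass the options documented above as the mount data string. */
	if (mount(dev, dir, "f2fs", 0,
		  "whint_mode=user-based,alloc_mode=reuse,fsync_mode=strict") != 0) {
		perror("mount");
		return 1;
	}
	return 0;
}
```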
================================================================================
DEBUGFS ENTRIES
@@ -611,3 +628,63 @@ algorithm.
In order to identify whether the data in the victim segment are valid or not,
F2FS manages a bitmap. Each bit represents the validity of a block, and the
bitmap is composed of a bit stream covering whole blocks in main area.
Write-hint Policy
-----------------
1) whint_mode=off.  F2FS only passes down WRITE_LIFE_NOT_SET.

2) whint_mode=user-based.  F2FS tries to pass down hints given by users.

User                  F2FS                     Block
----                  ----                     -----
                      META                     WRITE_LIFE_NOT_SET
                      HOT_NODE                 "
                      WARM_NODE                "
                      COLD_NODE                "
*ioctl(COLD)          COLD_DATA                WRITE_LIFE_EXTREME
*extension list       "                        "

-- buffered io
WRITE_LIFE_EXTREME    COLD_DATA                WRITE_LIFE_EXTREME
WRITE_LIFE_SHORT      HOT_DATA                 WRITE_LIFE_SHORT
WRITE_LIFE_NOT_SET    WARM_DATA                WRITE_LIFE_NOT_SET
WRITE_LIFE_NONE       "                        "
WRITE_LIFE_MEDIUM     "                        "
WRITE_LIFE_LONG       "                        "

-- direct io
WRITE_LIFE_EXTREME    COLD_DATA                WRITE_LIFE_EXTREME
WRITE_LIFE_SHORT      HOT_DATA                 WRITE_LIFE_SHORT
WRITE_LIFE_NOT_SET    WARM_DATA                WRITE_LIFE_NOT_SET
WRITE_LIFE_NONE       "                        WRITE_LIFE_NONE
WRITE_LIFE_MEDIUM     "                        WRITE_LIFE_MEDIUM
WRITE_LIFE_LONG       "                        WRITE_LIFE_LONG

3) whint_mode=fs-based.  F2FS passes down hints according to its own policy.

User                  F2FS                     Block
----                  ----                     -----
                      META                     WRITE_LIFE_MEDIUM
                      HOT_NODE                 WRITE_LIFE_NOT_SET
                      WARM_NODE                "
                      COLD_NODE                WRITE_LIFE_NONE
ioctl(COLD)           COLD_DATA                WRITE_LIFE_EXTREME
extension list        "                        "

-- buffered io
WRITE_LIFE_EXTREME    COLD_DATA                WRITE_LIFE_EXTREME
WRITE_LIFE_SHORT      HOT_DATA                 WRITE_LIFE_SHORT
WRITE_LIFE_NOT_SET    WARM_DATA                WRITE_LIFE_LONG
WRITE_LIFE_NONE       "                        "
WRITE_LIFE_MEDIUM     "                        "
WRITE_LIFE_LONG       "                        "

-- direct io
WRITE_LIFE_EXTREME    COLD_DATA                WRITE_LIFE_EXTREME
WRITE_LIFE_SHORT      HOT_DATA                 WRITE_LIFE_SHORT
WRITE_LIFE_NOT_SET    WARM_DATA                WRITE_LIFE_NOT_SET
WRITE_LIFE_NONE       "                        WRITE_LIFE_NONE
WRITE_LIFE_MEDIUM     "                        WRITE_LIFE_MEDIUM
WRITE_LIFE_LONG       "                        WRITE_LIFE_LONG
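
Each table reads as a mapping from (who issued the I/O, how F2FS classifies it) to the hint handed to the block layer. The sketch below is only an illustration of the user-based buffered-io rows above; it is not the kernel's actual io_type_to_rw_hint() helper, and it assumes a kernel build context where enum rw_hint from <linux/fs.h> is available.

```c
#include <linux/fs.h>	/* enum rw_hint: WRITE_LIFE_* (kernel build context assumed) */

/*
 * Illustration of the "user-based" buffered-io rows above:
 * EXTREME stays EXTREME (cold data), SHORT stays SHORT (hot data),
 * and every other user hint is treated as warm data with no hint
 * passed down to the block layer.
 */
static enum rw_hint userbased_buffered_hint(enum rw_hint user_hint)
{
	switch (user_hint) {
	case WRITE_LIFE_EXTREME:
		return WRITE_LIFE_EXTREME;	/* COLD_DATA */
	case WRITE_LIFE_SHORT:
		return WRITE_LIFE_SHORT;	/* HOT_DATA */
	default:
		return WRITE_LIFE_NOT_SET;	/* WARM_DATA */
	}
}
```

The fs-based rows differ only in that warm data is sent down as WRITE_LIFE_LONG and the node/meta classes receive their own hints.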
@@ -68,6 +68,7 @@ static struct page *__get_meta_page(struct f2fs_sb_info *sbi, pgoff_t index,
		.old_blkaddr = index,
		.new_blkaddr = index,
		.encrypted_page = NULL,
+		.is_meta = is_meta,
	};

	if (unlikely(!is_meta))
@@ -162,6 +163,7 @@ int ra_meta_pages(struct f2fs_sb_info *sbi, block_t start, int nrpages,
		.op_flags = sync ? (REQ_META | REQ_PRIO) : REQ_RAHEAD,
		.encrypted_page = NULL,
		.in_list = false,
+		.is_meta = (type != META_POR),
	};
	struct blk_plug plug;
@@ -569,13 +571,8 @@ static int recover_orphan_inode(struct f2fs_sb_info *sbi, nid_t ino)
	struct node_info ni;
	int err = acquire_orphan_inode(sbi);

-	if (err) {
-		set_sbi_flag(sbi, SBI_NEED_FSCK);
-		f2fs_msg(sbi->sb, KERN_WARNING,
-			"%s: orphan failed (ino=%x), run fsck to fix.",
-				__func__, ino);
-		return err;
-	}
+	if (err)
+		goto err_out;

	__add_ino_entry(sbi, ino, 0, ORPHAN_INO);
@@ -589,6 +586,11 @@ static int recover_orphan_inode(struct f2fs_sb_info *sbi, nid_t ino)
		return PTR_ERR(inode);
	}

+	err = dquot_initialize(inode);
+	if (err)
+		goto err_out;
-	dquot_initialize(inode);
	clear_nlink(inode);

	/* truncate all the data during iput */
@@ -598,14 +600,18 @@ static int recover_orphan_inode(struct f2fs_sb_info *sbi, nid_t ino)
	/* ENOMEM was fully retried in f2fs_evict_inode. */
	if (ni.blk_addr != NULL_ADDR) {
-		set_sbi_flag(sbi, SBI_NEED_FSCK);
-		f2fs_msg(sbi->sb, KERN_WARNING,
-			"%s: orphan failed (ino=%x) by kernel, retry mount.",
-				__func__, ino);
-		return -EIO;
+		err = -EIO;
+		goto err_out;
	}
	__remove_ino_entry(sbi, ino, ORPHAN_INO);
	return 0;

+err_out:
+	set_sbi_flag(sbi, SBI_NEED_FSCK);
+	f2fs_msg(sbi->sb, KERN_WARNING,
+			"%s: orphan failed (ino=%x), run fsck to fix.",
+			__func__, ino);
+	return err;
}

int recover_orphan_inodes(struct f2fs_sb_info *sbi)
@@ -1136,6 +1142,8 @@ static void update_ckpt_flags(struct f2fs_sb_info *sbi, struct cp_control *cpc)
	if (cpc->reason & CP_TRIMMED)
		__set_ckpt_flags(ckpt, CP_TRIMMED_FLAG);
+	else
+		__clear_ckpt_flags(ckpt, CP_TRIMMED_FLAG);

	if (cpc->reason & CP_UMOUNT)
		__set_ckpt_flags(ckpt, CP_UMOUNT_FLAG);
@@ -1162,6 +1170,39 @@ static void update_ckpt_flags(struct f2fs_sb_info *sbi, struct cp_control *cpc)
	spin_unlock_irqrestore(&sbi->cp_lock, flags);
}
+static void commit_checkpoint(struct f2fs_sb_info *sbi,
+	void *src, block_t blk_addr)
+{
+	struct writeback_control wbc = {
+		.for_reclaim = 0,
+	};
+
+	/*
+	 * pagevec_lookup_tag and lock_page again will take
+	 * some extra time. Therefore, update_meta_pages and
+	 * sync_meta_pages are combined in this function.
+	 */
+	struct page *page = grab_meta_page(sbi, blk_addr);
+	int err;
+
+	memcpy(page_address(page), src, PAGE_SIZE);
+	set_page_dirty(page);
+
+	f2fs_wait_on_page_writeback(page, META, true);
+	f2fs_bug_on(sbi, PageWriteback(page));
+	if (unlikely(!clear_page_dirty_for_io(page)))
+		f2fs_bug_on(sbi, 1);
+
+	/* writeout cp pack 2 page */
+	err = __f2fs_write_meta_page(page, &wbc, FS_CP_META_IO);
+	f2fs_bug_on(sbi, err);
+
+	f2fs_put_page(page, 0);
+
+	/* submit checkpoint (with barrier if NOBARRIER is not set) */
+	f2fs_submit_merged_write(sbi, META_FLUSH);
+}
+
static int do_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc)
{
	struct f2fs_checkpoint *ckpt = F2FS_CKPT(sbi);
@@ -1264,16 +1305,6 @@ static int do_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc)
		}
	}

-	/* need to wait for end_io results */
-	wait_on_all_pages_writeback(sbi);
-	if (unlikely(f2fs_cp_error(sbi)))
-		return -EIO;
-
-	/* flush all device cache */
-	err = f2fs_flush_device_cache(sbi);
-	if (err)
-		return err;
-
	/* write out checkpoint buffer at block 0 */
	update_meta_page(sbi, ckpt, start_blk++);
@@ -1301,26 +1332,26 @@ static int do_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc)
		start_blk += NR_CURSEG_NODE_TYPE;
	}

-	/* writeout checkpoint block */
-	update_meta_page(sbi, ckpt, start_blk);
+	/* update user_block_counts */
+	sbi->last_valid_block_count = sbi->total_valid_block_count;
+	percpu_counter_set(&sbi->alloc_valid_block_count, 0);
+
+	/* Here, we have one bio having CP pack except cp pack 2 page */
+	sync_meta_pages(sbi, META, LONG_MAX, FS_CP_META_IO);

-	/* wait for previous submitted node/meta pages writeback */
+	/* wait for previous submitted meta pages writeback */
	wait_on_all_pages_writeback(sbi);

	if (unlikely(f2fs_cp_error(sbi)))
		return -EIO;

-	filemap_fdatawait_range(NODE_MAPPING(sbi), 0, LLONG_MAX);
-	filemap_fdatawait_range(META_MAPPING(sbi), 0, LLONG_MAX);
-
-	/* update user_block_counts */
-	sbi->last_valid_block_count = sbi->total_valid_block_count;
-	percpu_counter_set(&sbi->alloc_valid_block_count, 0);
-
-	/* Here, we only have one bio having CP pack */
-	sync_meta_pages(sbi, META_FLUSH, LONG_MAX, FS_CP_META_IO);
-
-	/* wait for previous submitted meta pages writeback */
+	/* flush all device cache */
+	err = f2fs_flush_device_cache(sbi);
+	if (err)
+		return err;
+
+	/* barrier and flush checkpoint cp pack 2 page if it can */
+	commit_checkpoint(sbi, ckpt, start_blk);
	wait_on_all_pages_writeback(sbi);

	release_ino_entry(sbi, false);
......
@@ -175,15 +175,22 @@ static bool __same_bdev(struct f2fs_sb_info *sbi,
 */
static struct bio *__bio_alloc(struct f2fs_sb_info *sbi, block_t blk_addr,
				struct writeback_control *wbc,
-				int npages, bool is_read)
+				int npages, bool is_read,
+				enum page_type type, enum temp_type temp)
{
	struct bio *bio;

	bio = f2fs_bio_alloc(sbi, npages, true);

	f2fs_target_device(sbi, blk_addr, bio);
-	bio->bi_end_io = is_read ? f2fs_read_end_io : f2fs_write_end_io;
-	bio->bi_private = is_read ? NULL : sbi;
+	if (is_read) {
+		bio->bi_end_io = f2fs_read_end_io;
+		bio->bi_private = NULL;
+	} else {
+		bio->bi_end_io = f2fs_write_end_io;
+		bio->bi_private = sbi;
+		bio->bi_write_hint = io_type_to_rw_hint(sbi, type, temp);
+	}
	if (wbc)
		wbc_init_bio(wbc, bio);
@@ -196,13 +203,12 @@ static inline void __submit_bio(struct f2fs_sb_info *sbi,
	if (!is_read_io(bio_op(bio))) {
		unsigned int start;

-		if (f2fs_sb_mounted_blkzoned(sbi->sb) &&
-			current->plug && (type == DATA || type == NODE))
-			blk_finish_plug(current->plug);
-
		if (type != DATA && type != NODE)
			goto submit_io;

+		if (f2fs_sb_has_blkzoned(sbi->sb) && current->plug)
+			blk_finish_plug(current->plug);
+
		start = bio->bi_iter.bi_size >> F2FS_BLKSIZE_BITS;
		start %= F2FS_IO_SIZE(sbi);
@@ -377,12 +383,13 @@ int f2fs_submit_page_bio(struct f2fs_io_info *fio)
	struct page *page = fio->encrypted_page ?
			fio->encrypted_page : fio->page;

+	verify_block_addr(fio, fio->new_blkaddr);
	trace_f2fs_submit_page_bio(page, fio);
	f2fs_trace_ios(fio, 0);

	/* Allocate a new bio */
	bio = __bio_alloc(fio->sbi, fio->new_blkaddr, fio->io_wbc,
-				1, is_read_io(fio->op));
+				1, is_read_io(fio->op), fio->type, fio->temp);

	if (bio_add_page(bio, page, PAGE_SIZE, 0) < PAGE_SIZE) {
		bio_put(bio);
@@ -422,8 +429,8 @@ int f2fs_submit_page_write(struct f2fs_io_info *fio)
	}

	if (fio->old_blkaddr != NEW_ADDR)
-		verify_block_addr(sbi, fio->old_blkaddr);
-	verify_block_addr(sbi, fio->new_blkaddr);
+		verify_block_addr(fio, fio->old_blkaddr);
+	verify_block_addr(fio, fio->new_blkaddr);

	bio_page = fio->encrypted_page ? fio->encrypted_page : fio->page;
@@ -445,7 +452,8 @@ int f2fs_submit_page_write(struct f2fs_io_info *fio)
			goto out_fail;
		}
		io->bio = __bio_alloc(sbi, fio->new_blkaddr, fio->io_wbc,
-						BIO_MAX_PAGES, false);
+						BIO_MAX_PAGES, false,
+						fio->type, fio->temp);
		io->fio = *fio;
	}
@@ -832,13 +840,6 @@ static int __allocate_data_block(struct dnode_of_data *dn, int seg_type)
	return 0;
}

-static inline bool __force_buffered_io(struct inode *inode, int rw)
-{
-	return (f2fs_encrypted_file(inode) ||
-			(rw == WRITE && test_opt(F2FS_I_SB(inode), LFS)) ||
-			F2FS_I_SB(inode)->s_ndevs);
-}
-
int f2fs_preallocate_blocks(struct kiocb *iocb, struct iov_iter *from)
{
	struct inode *inode = file_inode(iocb->ki_filp);
@@ -870,7 +871,7 @@ int f2fs_preallocate_blocks(struct kiocb *iocb, struct iov_iter *from)
	if (direct_io) {
		map.m_seg_type = rw_hint_to_seg_type(iocb->ki_hint);
-		flag = __force_buffered_io(inode, WRITE) ?
+		flag = f2fs_force_buffered_io(inode, WRITE) ?
				F2FS_GET_BLOCK_PRE_AIO :
				F2FS_GET_BLOCK_PRE_DIO;
		goto map_blocks;
@@ -1114,6 +1115,31 @@ int f2fs_map_blocks(struct inode *inode, struct f2fs_map_blocks *map,
	return err;
}
+bool f2fs_overwrite_io(struct inode *inode, loff_t pos, size_t len)
+{
+	struct f2fs_map_blocks map;
+	block_t last_lblk;
+	int err;
+
+	if (pos + len > i_size_read(inode))
+		return false;
+
+	map.m_lblk = F2FS_BYTES_TO_BLK(pos);
+	map.m_next_pgofs = NULL;
+	map.m_next_extent = NULL;
+	map.m_seg_type = NO_CHECK_TYPE;
+	last_lblk = F2FS_BLK_ALIGN(pos + len);
+
+	while (map.m_lblk < last_lblk) {
+		map.m_len = last_lblk - map.m_lblk;
+		err = f2fs_map_blocks(inode, &map, 0, F2FS_GET_BLOCK_DEFAULT);
+		if (err || map.m_len == 0)
+			return false;
+		map.m_lblk += map.m_len;
+	}
+	return true;
+}
+
static int __get_data_block(struct inode *inode, sector_t iblock,
			struct buffer_head *bh, int create, int flag,
			pgoff_t *next_pgofs, int seg_type)
@@ -2287,25 +2313,41 @@ static ssize_t f2fs_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
{
	struct address_space *mapping = iocb->ki_filp->f_mapping;
	struct inode *inode = mapping->host;
+	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
	size_t count = iov_iter_count(iter);
	loff_t offset = iocb->ki_pos;
	int rw = iov_iter_rw(iter);
	int err;
+	enum rw_hint hint = iocb->ki_hint;
+	int whint_mode = F2FS_OPTION(sbi).whint_mode;

	err = check_direct_IO(inode, iter, offset);
	if (err)
		return err;

-	if (__force_buffered_io(inode, rw))
+	if (f2fs_force_buffered_io(inode, rw))
		return 0;

	trace_f2fs_direct_IO_enter(inode, offset, count, rw);

-	down_read(&F2FS_I(inode)->dio_rwsem[rw]);
+	if (rw == WRITE && whint_mode == WHINT_MODE_OFF)
+		iocb->ki_hint = WRITE_LIFE_NOT_SET;
+
+	if (!down_read_trylock(&F2FS_I(inode)->dio_rwsem[rw])) {
+		if (iocb->ki_flags & IOCB_NOWAIT) {
+			iocb->ki_hint = hint;
+			err = -EAGAIN;
+			goto out;
+		}
+		down_read(&F2FS_I(inode)->dio_rwsem[rw]);
+	}
+
	err = blockdev_direct_IO(iocb, inode, iter, get_data_block_dio);
	up_read(&F2FS_I(inode)->dio_rwsem[rw]);

	if (rw == WRITE) {
+		if (whint_mode == WHINT_MODE_OFF)
+			iocb->ki_hint = hint;
		if (err > 0) {
			f2fs_update_iostat(F2FS_I_SB(inode), APP_DIRECT_IO,
									err);
@@ -2315,6 +2357,7 @@ static ssize_t f2fs_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
		}
	}

+out:
	trace_f2fs_direct_IO_exit(inode, offset, count, rw, err);

	return err;
......
@@ -94,14 +94,12 @@ static struct f2fs_dir_entry *find_in_block(struct page *dentry_page,
	struct f2fs_dir_entry *de;
	struct f2fs_dentry_ptr d;

-	dentry_blk = (struct f2fs_dentry_block *)kmap(dentry_page);
+	dentry_blk = (struct f2fs_dentry_block *)page_address(dentry_page);

	make_dentry_ptr_block(NULL, &d, dentry_blk);
	de = find_target_dentry(fname, namehash, max_slots, &d);
	if (de)
		*res_page = dentry_page;
-	else
-		kunmap(dentry_page);

	return de;
}
...@@ -287,7 +285,6 @@ ino_t f2fs_inode_by_name(struct inode *dir, const struct qstr *qstr, ...@@ -287,7 +285,6 @@ ino_t f2fs_inode_by_name(struct inode *dir, const struct qstr *qstr,
de = f2fs_find_entry(dir, qstr, page); de = f2fs_find_entry(dir, qstr, page);
if (de) { if (de) {
res = le32_to_cpu(de->ino); res = le32_to_cpu(de->ino);
f2fs_dentry_kunmap(dir, *page);
f2fs_put_page(*page, 0); f2fs_put_page(*page, 0);
} }
...@@ -302,7 +299,6 @@ void f2fs_set_link(struct inode *dir, struct f2fs_dir_entry *de, ...@@ -302,7 +299,6 @@ void f2fs_set_link(struct inode *dir, struct f2fs_dir_entry *de,
f2fs_wait_on_page_writeback(page, type, true); f2fs_wait_on_page_writeback(page, type, true);
de->ino = cpu_to_le32(inode->i_ino); de->ino = cpu_to_le32(inode->i_ino);
set_de_type(de, inode->i_mode); set_de_type(de, inode->i_mode);
f2fs_dentry_kunmap(dir, page);
set_page_dirty(page); set_page_dirty(page);
dir->i_mtime = dir->i_ctime = current_time(dir); dir->i_mtime = dir->i_ctime = current_time(dir);
...@@ -350,13 +346,11 @@ static int make_empty_dir(struct inode *inode, ...@@ -350,13 +346,11 @@ static int make_empty_dir(struct inode *inode,
if (IS_ERR(dentry_page)) if (IS_ERR(dentry_page))
return PTR_ERR(dentry_page); return PTR_ERR(dentry_page);
dentry_blk = kmap_atomic(dentry_page); dentry_blk = page_address(dentry_page);
make_dentry_ptr_block(NULL, &d, dentry_blk); make_dentry_ptr_block(NULL, &d, dentry_blk);
do_make_empty_dir(inode, parent, &d); do_make_empty_dir(inode, parent, &d);
kunmap_atomic(dentry_blk);
set_page_dirty(dentry_page); set_page_dirty(dentry_page);
f2fs_put_page(dentry_page, 1); f2fs_put_page(dentry_page, 1);
return 0; return 0;
...@@ -367,6 +361,7 @@ struct page *init_inode_metadata(struct inode *inode, struct inode *dir, ...@@ -367,6 +361,7 @@ struct page *init_inode_metadata(struct inode *inode, struct inode *dir,
struct page *dpage) struct page *dpage)
{ {
struct page *page; struct page *page;
int dummy_encrypt = DUMMY_ENCRYPTION_ENABLED(F2FS_I_SB(dir));
int err; int err;
if (is_inode_flag_set(inode, FI_NEW_INODE)) { if (is_inode_flag_set(inode, FI_NEW_INODE)) {
...@@ -393,7 +388,8 @@ struct page *init_inode_metadata(struct inode *inode, struct inode *dir, ...@@ -393,7 +388,8 @@ struct page *init_inode_metadata(struct inode *inode, struct inode *dir,
if (err) if (err)
goto put_error; goto put_error;
if (f2fs_encrypted_inode(dir) && f2fs_may_encrypt(inode)) { if ((f2fs_encrypted_inode(dir) || dummy_encrypt) &&
f2fs_may_encrypt(inode)) {
err = fscrypt_inherit_context(dir, inode, page, false); err = fscrypt_inherit_context(dir, inode, page, false);
if (err) if (err)
goto put_error; goto put_error;
...@@ -402,8 +398,6 @@ struct page *init_inode_metadata(struct inode *inode, struct inode *dir, ...@@ -402,8 +398,6 @@ struct page *init_inode_metadata(struct inode *inode, struct inode *dir,
page = get_node_page(F2FS_I_SB(dir), inode->i_ino); page = get_node_page(F2FS_I_SB(dir), inode->i_ino);
if (IS_ERR(page)) if (IS_ERR(page))
return page; return page;
set_cold_node(inode, page);
} }
if (new_name) { if (new_name) {
...@@ -547,13 +541,12 @@ int f2fs_add_regular_entry(struct inode *dir, const struct qstr *new_name, ...@@ -547,13 +541,12 @@ int f2fs_add_regular_entry(struct inode *dir, const struct qstr *new_name,
if (IS_ERR(dentry_page)) if (IS_ERR(dentry_page))
return PTR_ERR(dentry_page); return PTR_ERR(dentry_page);
dentry_blk = kmap(dentry_page); dentry_blk = page_address(dentry_page);
bit_pos = room_for_filename(&dentry_blk->dentry_bitmap, bit_pos = room_for_filename(&dentry_blk->dentry_bitmap,
slots, NR_DENTRY_IN_BLOCK); slots, NR_DENTRY_IN_BLOCK);
if (bit_pos < NR_DENTRY_IN_BLOCK) if (bit_pos < NR_DENTRY_IN_BLOCK)
goto add_dentry; goto add_dentry;
kunmap(dentry_page);
f2fs_put_page(dentry_page, 1); f2fs_put_page(dentry_page, 1);
} }
...@@ -588,7 +581,6 @@ int f2fs_add_regular_entry(struct inode *dir, const struct qstr *new_name, ...@@ -588,7 +581,6 @@ int f2fs_add_regular_entry(struct inode *dir, const struct qstr *new_name,
if (inode) if (inode)
up_write(&F2FS_I(inode)->i_sem); up_write(&F2FS_I(inode)->i_sem);
kunmap(dentry_page);
f2fs_put_page(dentry_page, 1); f2fs_put_page(dentry_page, 1);
return err; return err;
...@@ -642,7 +634,6 @@ int __f2fs_add_link(struct inode *dir, const struct qstr *name, ...@@ -642,7 +634,6 @@ int __f2fs_add_link(struct inode *dir, const struct qstr *name,
F2FS_I(dir)->task = NULL; F2FS_I(dir)->task = NULL;
} }
if (de) { if (de) {
f2fs_dentry_kunmap(dir, page);
f2fs_put_page(page, 0); f2fs_put_page(page, 0);
err = -EEXIST; err = -EEXIST;
} else if (IS_ERR(page)) { } else if (IS_ERR(page)) {
...@@ -713,7 +704,8 @@ void f2fs_delete_entry(struct f2fs_dir_entry *dentry, struct page *page, ...@@ -713,7 +704,8 @@ void f2fs_delete_entry(struct f2fs_dir_entry *dentry, struct page *page,
f2fs_update_time(F2FS_I_SB(dir), REQ_TIME); f2fs_update_time(F2FS_I_SB(dir), REQ_TIME);
add_ino_entry(F2FS_I_SB(dir), dir->i_ino, TRANS_DIR_INO); if (F2FS_OPTION(F2FS_I_SB(dir)).fsync_mode == FSYNC_MODE_STRICT)
add_ino_entry(F2FS_I_SB(dir), dir->i_ino, TRANS_DIR_INO);
if (f2fs_has_inline_dentry(dir)) if (f2fs_has_inline_dentry(dir))
return f2fs_delete_inline_entry(dentry, page, dir, inode); return f2fs_delete_inline_entry(dentry, page, dir, inode);
...@@ -730,7 +722,6 @@ void f2fs_delete_entry(struct f2fs_dir_entry *dentry, struct page *page, ...@@ -730,7 +722,6 @@ void f2fs_delete_entry(struct f2fs_dir_entry *dentry, struct page *page,
bit_pos = find_next_bit_le(&dentry_blk->dentry_bitmap, bit_pos = find_next_bit_le(&dentry_blk->dentry_bitmap,
NR_DENTRY_IN_BLOCK, NR_DENTRY_IN_BLOCK,
0); 0);
kunmap(page); /* kunmap - pair of f2fs_find_entry */
set_page_dirty(page); set_page_dirty(page);
dir->i_ctime = dir->i_mtime = current_time(dir); dir->i_ctime = dir->i_mtime = current_time(dir);
...@@ -775,7 +766,7 @@ bool f2fs_empty_dir(struct inode *dir) ...@@ -775,7 +766,7 @@ bool f2fs_empty_dir(struct inode *dir)
return false; return false;
} }
dentry_blk = kmap_atomic(dentry_page); dentry_blk = page_address(dentry_page);
if (bidx == 0) if (bidx == 0)
bit_pos = 2; bit_pos = 2;
else else
...@@ -783,7 +774,6 @@ bool f2fs_empty_dir(struct inode *dir) ...@@ -783,7 +774,6 @@ bool f2fs_empty_dir(struct inode *dir)
bit_pos = find_next_bit_le(&dentry_blk->dentry_bitmap, bit_pos = find_next_bit_le(&dentry_blk->dentry_bitmap,
NR_DENTRY_IN_BLOCK, NR_DENTRY_IN_BLOCK,
bit_pos); bit_pos);
kunmap_atomic(dentry_blk);
f2fs_put_page(dentry_page, 1); f2fs_put_page(dentry_page, 1);
...@@ -901,19 +891,17 @@ static int f2fs_readdir(struct file *file, struct dir_context *ctx) ...@@ -901,19 +891,17 @@ static int f2fs_readdir(struct file *file, struct dir_context *ctx)
} }
} }
dentry_blk = kmap(dentry_page); dentry_blk = page_address(dentry_page);
make_dentry_ptr_block(inode, &d, dentry_blk); make_dentry_ptr_block(inode, &d, dentry_blk);
err = f2fs_fill_dentries(ctx, &d, err = f2fs_fill_dentries(ctx, &d,
n * NR_DENTRY_IN_BLOCK, &fstr); n * NR_DENTRY_IN_BLOCK, &fstr);
if (err) { if (err) {
kunmap(dentry_page);
f2fs_put_page(dentry_page, 1); f2fs_put_page(dentry_page, 1);
break; break;
} }
kunmap(dentry_page);
f2fs_put_page(dentry_page, 1); f2fs_put_page(dentry_page, 1);
} }
out_free: out_free:
......
@@ -460,7 +460,7 @@ static struct extent_node *__insert_extent_tree(struct inode *inode,
				struct rb_node *insert_parent)
{
	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
-	struct rb_node **p = &et->root.rb_node;
+	struct rb_node **p;
	struct rb_node *parent = NULL;
	struct extent_node *en = NULL;
@@ -706,6 +706,9 @@ void f2fs_drop_extent_tree(struct inode *inode)
	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
	struct extent_tree *et = F2FS_I(inode)->extent_tree;

+	if (!f2fs_may_extent_tree(inode))
+		return;
+
	set_inode_flag(inode, FI_NO_EXTENT);

	write_lock(&et->lock);
......
@@ -163,9 +163,10 @@ static inline enum cp_reason_type need_do_checkpoint(struct inode *inode)
		cp_reason = CP_NODE_NEED_CP;
	else if (test_opt(sbi, FASTBOOT))
		cp_reason = CP_FASTBOOT_MODE;
-	else if (sbi->active_logs == 2)
+	else if (F2FS_OPTION(sbi).active_logs == 2)
		cp_reason = CP_SPEC_LOG_NUM;
-	else if (need_dentry_mark(sbi, inode->i_ino) &&
+	else if (F2FS_OPTION(sbi).fsync_mode == FSYNC_MODE_STRICT &&
+			need_dentry_mark(sbi, inode->i_ino) &&
			exist_written_data(sbi, F2FS_I(inode)->i_pino, TRANS_DIR_INO))
		cp_reason = CP_RECOVER_DIR;
@@ -479,6 +480,9 @@ static int f2fs_file_open(struct inode *inode, struct file *filp)
	if (err)
		return err;

+	filp->f_mode |= FMODE_NOWAIT;
+
	return dquot_file_open(inode, filp);
}
@@ -569,7 +573,6 @@ static int truncate_partial_data_page(struct inode *inode, u64 from,
int truncate_blocks(struct inode *inode, u64 from, bool lock)
{
	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
-	unsigned int blocksize = inode->i_sb->s_blocksize;
	struct dnode_of_data dn;
	pgoff_t free_from;
	int count = 0, err = 0;
@@ -578,7 +581,7 @@ int truncate_blocks(struct inode *inode, u64 from, bool lock)
	trace_f2fs_truncate_blocks_enter(inode, from);

-	free_from = (pgoff_t)F2FS_BYTES_TO_BLK(from + blocksize - 1);
+	free_from = (pgoff_t)F2FS_BLK_ALIGN(from);

	if (free_from >= sbi->max_file_blocks)
		goto free_partial;
@@ -1348,8 +1351,12 @@ static int f2fs_zero_range(struct inode *inode, loff_t offset, loff_t len,
	}

out:
-	if (!(mode & FALLOC_FL_KEEP_SIZE) && i_size_read(inode) < new_size)
-		f2fs_i_size_write(inode, new_size);
+	if (new_size > i_size_read(inode)) {
+		if (mode & FALLOC_FL_KEEP_SIZE)
+			file_set_keep_isize(inode);
+		else
+			f2fs_i_size_write(inode, new_size);
+	}
out_sem:
	up_write(&F2FS_I(inode)->i_mmap_sem);
@@ -1711,6 +1718,8 @@ static int f2fs_ioc_commit_atomic_write(struct file *filp)
	inode_lock(inode);

+	down_write(&F2FS_I(inode)->dio_rwsem[WRITE]);
+
	if (f2fs_is_volatile_file(inode))
		goto err_out;
@@ -1729,6 +1738,7 @@ static int f2fs_ioc_commit_atomic_write(struct file *filp)
		ret = f2fs_do_sync_file(filp, 0, LLONG_MAX, 1, false);
	}
err_out:
+	up_write(&F2FS_I(inode)->dio_rwsem[WRITE]);
	inode_unlock(inode);
	mnt_drop_write_file(filp);
	return ret;
@@ -1938,7 +1948,7 @@ static int f2fs_ioc_set_encryption_policy(struct file *filp, unsigned long arg)
{
	struct inode *inode = file_inode(filp);

-	if (!f2fs_sb_has_crypto(inode->i_sb))
+	if (!f2fs_sb_has_encrypt(inode->i_sb))
		return -EOPNOTSUPP;

	f2fs_update_time(F2FS_I_SB(inode), REQ_TIME);
@@ -1948,7 +1958,7 @@ static int f2fs_ioc_set_encryption_policy(struct file *filp, unsigned long arg)
static int f2fs_ioc_get_encryption_policy(struct file *filp, unsigned long arg)
{
-	if (!f2fs_sb_has_crypto(file_inode(filp)->i_sb))
+	if (!f2fs_sb_has_encrypt(file_inode(filp)->i_sb))
		return -EOPNOTSUPP;

	return fscrypt_ioctl_get_policy(filp, (void __user *)arg);
}
@@ -1959,16 +1969,18 @@ static int f2fs_ioc_get_encryption_pwsalt(struct file *filp, unsigned long arg)
	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
	int err;

-	if (!f2fs_sb_has_crypto(inode->i_sb))
+	if (!f2fs_sb_has_encrypt(inode->i_sb))
		return -EOPNOTSUPP;

-	if (uuid_is_nonzero(sbi->raw_super->encrypt_pw_salt))
-		goto got_it;
-
	err = mnt_want_write_file(filp);
	if (err)
		return err;

+	down_write(&sbi->sb_lock);
+
+	if (uuid_is_nonzero(sbi->raw_super->encrypt_pw_salt))
+		goto got_it;
+
	/* update superblock with uuid */
	generate_random_uuid(sbi->raw_super->encrypt_pw_salt);
@@ -1976,15 +1988,16 @@ static int f2fs_ioc_get_encryption_pwsalt(struct file *filp, unsigned long arg)
	if (err) {
		/* undo new data */
		memset(sbi->raw_super->encrypt_pw_salt, 0, 16);
-		mnt_drop_write_file(filp);
-		return err;
+		goto out_err;
	}
-	mnt_drop_write_file(filp);
got_it:
	if (copy_to_user((__u8 __user *)arg, sbi->raw_super->encrypt_pw_salt,
									16))
-		return -EFAULT;
-	return 0;
+		err = -EFAULT;
+out_err:
+	up_write(&sbi->sb_lock);
+	mnt_drop_write_file(filp);
+	return err;
}

static int f2fs_ioc_gc(struct file *filp, unsigned long arg)
@@ -2045,8 +2058,10 @@ static int f2fs_ioc_gc_range(struct file *filp, unsigned long arg)
		return ret;

	end = range.start + range.len;
-	if (range.start < MAIN_BLKADDR(sbi) || end >= MAX_BLKADDR(sbi))
-		return -EINVAL;
+	if (range.start < MAIN_BLKADDR(sbi) || end >= MAX_BLKADDR(sbi)) {
+		ret = -EINVAL;
+		goto out;
+	}
do_more:
	if (!range.sync) {
		if (!mutex_trylock(&sbi->gc_mutex)) {
@@ -2885,25 +2900,54 @@ static ssize_t f2fs_file_write_iter(struct kiocb *iocb, struct iov_iter *from)
	if (unlikely(f2fs_cp_error(F2FS_I_SB(inode))))
		return -EIO;

-	inode_lock(inode);
+	if ((iocb->ki_flags & IOCB_NOWAIT) && !(iocb->ki_flags & IOCB_DIRECT))
+		return -EINVAL;
+
+	if (!inode_trylock(inode)) {
+		if (iocb->ki_flags & IOCB_NOWAIT)
+			return -EAGAIN;
+		inode_lock(inode);
+	}
+
	ret = generic_write_checks(iocb, from);
	if (ret > 0) {
+		bool preallocated = false;
+		size_t target_size = 0;
		int err;

		if (iov_iter_fault_in_readable(from, iov_iter_count(from)))
			set_inode_flag(inode, FI_NO_PREALLOC);

-		err = f2fs_preallocate_blocks(iocb, from);
-		if (err) {
-			clear_inode_flag(inode, FI_NO_PREALLOC);
-			inode_unlock(inode);
-			return err;
+		if ((iocb->ki_flags & IOCB_NOWAIT) &&
+				(iocb->ki_flags & IOCB_DIRECT)) {
+			if (!f2fs_overwrite_io(inode, iocb->ki_pos,
						iov_iter_count(from)) ||
					f2fs_has_inline_data(inode) ||
					f2fs_force_buffered_io(inode, WRITE)) {
				inode_unlock(inode);
				return -EAGAIN;
			}
+		} else {
+			preallocated = true;
+			target_size = iocb->ki_pos + iov_iter_count(from);
+			err = f2fs_preallocate_blocks(iocb, from);
+			if (err) {
+				clear_inode_flag(inode, FI_NO_PREALLOC);
+				inode_unlock(inode);
+				return err;
+			}
		}
		blk_start_plug(&plug);
		ret = __generic_file_write_iter(iocb, from);
		blk_finish_plug(&plug);
		clear_inode_flag(inode, FI_NO_PREALLOC);

+		/* if we couldn't write data, we should deallocate blocks. */
+		if (preallocated && i_size_read(inode) < target_size)
+			f2fs_truncate(inode);
+
		if (ret > 0)
			f2fs_update_iostat(F2FS_I_SB(inode), APP_WRITE_IO, ret);
	}
......
@@ -76,14 +76,15 @@ static int gc_thread_func(void *data)
		 * invalidated soon after by user update or deletion.
		 * So, I'd like to wait some time to collect dirty segments.
		 */
+		if (!mutex_trylock(&sbi->gc_mutex))
+			goto next;
+
		if (gc_th->gc_urgent) {
			wait_ms = gc_th->urgent_sleep_time;
-			mutex_lock(&sbi->gc_mutex);
			goto do_gc;
		}

-		if (!mutex_trylock(&sbi->gc_mutex))
-			goto next;
-
		if (!is_idle(sbi)) {
			increase_sleep_time(gc_th, &wait_ms);
			mutex_unlock(&sbi->gc_mutex);
@@ -161,12 +162,17 @@ static int select_gc_type(struct f2fs_gc_kthread *gc_th, int gc_type)
{
	int gc_mode = (gc_type == BG_GC) ? GC_CB : GC_GREEDY;

-	if (gc_th && gc_th->gc_idle) {
+	if (!gc_th)
+		return gc_mode;
+
+	if (gc_th->gc_idle) {
		if (gc_th->gc_idle == 1)
			gc_mode = GC_CB;
		else if (gc_th->gc_idle == 2)
			gc_mode = GC_GREEDY;
	}
+
+	if (gc_th->gc_urgent)
+		gc_mode = GC_GREEDY;
+
	return gc_mode;
}
@@ -188,11 +194,14 @@ static void select_policy(struct f2fs_sb_info *sbi, int gc_type,
	}

	/* we need to check every dirty segments in the FG_GC case */
-	if (gc_type != FG_GC && p->max_search > sbi->max_victim_search)
+	if (gc_type != FG_GC &&
+			(sbi->gc_thread && !sbi->gc_thread->gc_urgent) &&
+			p->max_search > sbi->max_victim_search)
		p->max_search = sbi->max_victim_search;

-	/* let's select beginning hot/small space first */
-	if (type == CURSEG_HOT_DATA || IS_NODESEG(type))
+	/* let's select beginning hot/small space first in no_heap mode */
+	if (test_opt(sbi, NOHEAP) &&
+		(type == CURSEG_HOT_DATA || IS_NODESEG(type)))
		p->offset = 0;
	else
		p->offset = SIT_I(sbi)->last_victim[p->gc_mode];
......
@@ -369,7 +369,7 @@ static int f2fs_move_inline_dirents(struct inode *dir, struct page *ipage,
	f2fs_wait_on_page_writeback(page, DATA, true);
	zero_user_segment(page, MAX_INLINE_DATA(dir), PAGE_SIZE);

-	dentry_blk = kmap_atomic(page);
+	dentry_blk = page_address(page);

	make_dentry_ptr_inline(dir, &src, inline_dentry);
	make_dentry_ptr_block(dir, &dst, dentry_blk);
@@ -386,7 +386,6 @@ static int f2fs_move_inline_dirents(struct inode *dir, struct page *ipage,
	memcpy(dst.dentry, src.dentry, SIZE_OF_DIR_ENTRY * src.max);
	memcpy(dst.filename, src.filename, src.max * F2FS_SLOT_LEN);
-	kunmap_atomic(dentry_blk);

	if (!PageUptodate(page))
		SetPageUptodate(page);
	set_page_dirty(page);
......
@@ -284,6 +284,10 @@ static int do_read_inode(struct inode *inode)
		fi->i_crtime.tv_nsec = le32_to_cpu(ri->i_crtime_nsec);
	}

+	F2FS_I(inode)->i_disk_time[0] = inode->i_atime;
+	F2FS_I(inode)->i_disk_time[1] = inode->i_ctime;
+	F2FS_I(inode)->i_disk_time[2] = inode->i_mtime;
+	F2FS_I(inode)->i_disk_time[3] = F2FS_I(inode)->i_crtime;
	f2fs_put_page(node_page, 1);

	stat_inc_inline_xattr(inode);
@@ -328,7 +332,7 @@ struct inode *f2fs_iget(struct super_block *sb, unsigned long ino)
		inode->i_op = &f2fs_dir_inode_operations;
		inode->i_fop = &f2fs_dir_operations;
		inode->i_mapping->a_ops = &f2fs_dblock_aops;
-		mapping_set_gfp_mask(inode->i_mapping, GFP_F2FS_HIGH_ZERO);
+		inode_nohighmem(inode);
	} else if (S_ISLNK(inode->i_mode)) {
		if (f2fs_encrypted_inode(inode))
			inode->i_op = &f2fs_encrypted_symlink_inode_operations;
@@ -439,12 +443,15 @@ void update_inode(struct inode *inode, struct page *node_page)
	}

	__set_inode_rdev(inode, ri);
-	set_cold_node(inode, node_page);

	/* deleted inode */
	if (inode->i_nlink == 0)
		clear_inline_node(node_page);

+	F2FS_I(inode)->i_disk_time[0] = inode->i_atime;
+	F2FS_I(inode)->i_disk_time[1] = inode->i_ctime;
+	F2FS_I(inode)->i_disk_time[2] = inode->i_mtime;
+	F2FS_I(inode)->i_disk_time[3] = F2FS_I(inode)->i_crtime;
}

void update_inode_page(struct inode *inode)
......
@@ -78,7 +78,8 @@ static struct inode *f2fs_new_inode(struct inode *dir, umode_t mode)
	set_inode_flag(inode, FI_NEW_INODE);

	/* If the directory encrypted, then we should encrypt the inode. */
-	if (f2fs_encrypted_inode(dir) && f2fs_may_encrypt(inode))
+	if ((f2fs_encrypted_inode(dir) || DUMMY_ENCRYPTION_ENABLED(sbi)) &&
+				f2fs_may_encrypt(inode))
		f2fs_set_encrypted_inode(inode);

	if (f2fs_sb_has_extra_attr(sbi->sb)) {
@@ -97,7 +98,7 @@ static struct inode *f2fs_new_inode(struct inode *dir, umode_t mode)
	if (f2fs_sb_has_flexible_inline_xattr(sbi->sb)) {
		f2fs_bug_on(sbi, !f2fs_has_extra_attr(inode));
		if (f2fs_has_inline_xattr(inode))
-			xattr_size = sbi->inline_xattr_size;
+			xattr_size = F2FS_OPTION(sbi).inline_xattr_size;
		/* Otherwise, will be 0 */
	} else if (f2fs_has_inline_xattr(inode) ||
				f2fs_has_inline_dentry(inode)) {
@@ -142,7 +143,7 @@ static struct inode *f2fs_new_inode(struct inode *dir, umode_t mode)
		return ERR_PTR(err);
}

-static int is_multimedia_file(const unsigned char *s, const char *sub)
+static int is_extension_exist(const unsigned char *s, const char *sub)
{
	size_t slen = strlen(s);
	size_t sublen = strlen(sub);
@@ -168,19 +169,94 @@ static int is_multimedia_file(const unsigned char *s, const char *sub)
/*
 * Set multimedia files as cold files for hot/cold data separation
 */
-static inline void set_cold_files(struct f2fs_sb_info *sbi, struct inode *inode,
+static inline void set_file_temperature(struct f2fs_sb_info *sbi, struct inode *inode,
		const unsigned char *name)
{
-	int i;
-	__u8 (*extlist)[8] = sbi->raw_super->extension_list;
+	__u8 (*extlist)[F2FS_EXTENSION_LEN] = sbi->raw_super->extension_list;
+	int i, cold_count, hot_count;
+
+	down_read(&sbi->sb_lock);
+
+	cold_count = le32_to_cpu(sbi->raw_super->extension_count);
+	hot_count = sbi->raw_super->hot_ext_count;

-	int count = le32_to_cpu(sbi->raw_super->extension_count);
-	for (i = 0; i < count; i++) {
-		if (is_multimedia_file(name, extlist[i])) {
+	for (i = 0; i < cold_count + hot_count; i++) {
+		if (!is_extension_exist(name, extlist[i]))
+			continue;
+		if (i < cold_count)
			file_set_cold(inode);
-			break;
-		}
+		else
+			file_set_hot(inode);
+		break;
	}
+
+	up_read(&sbi->sb_lock);
}

int update_extension_list(struct f2fs_sb_info *sbi, const char *name,
bool hot, bool set)
{
__u8 (*extlist)[F2FS_EXTENSION_LEN] = sbi->raw_super->extension_list;
int cold_count = le32_to_cpu(sbi->raw_super->extension_count);
int hot_count = sbi->raw_super->hot_ext_count;
int total_count = cold_count + hot_count;
int start, count;
int i;
if (set) {
if (total_count == F2FS_MAX_EXTENSION)
return -EINVAL;
} else {
if (!hot && !cold_count)
return -EINVAL;
if (hot && !hot_count)
return -EINVAL;
}
if (hot) {
start = cold_count;
count = total_count;
} else {
start = 0;
count = cold_count;
}
for (i = start; i < count; i++) {
if (strcmp(name, extlist[i]))
continue;
if (set)
return -EINVAL;
memcpy(extlist[i], extlist[i + 1],
F2FS_EXTENSION_LEN * (total_count - i - 1));
memset(extlist[total_count - 1], 0, F2FS_EXTENSION_LEN);
if (hot)
sbi->raw_super->hot_ext_count = hot_count - 1;
else
sbi->raw_super->extension_count =
cpu_to_le32(cold_count - 1);
return 0;
}
if (!set)
return -EINVAL;
if (hot) {
strncpy(extlist[count], name, strlen(name));
sbi->raw_super->hot_ext_count = hot_count + 1;
} else {
char buf[F2FS_MAX_EXTENSION][F2FS_EXTENSION_LEN];
memcpy(buf, &extlist[cold_count],
F2FS_EXTENSION_LEN * hot_count);
memset(extlist[cold_count], 0, F2FS_EXTENSION_LEN);
strncpy(extlist[cold_count], name, strlen(name));
memcpy(&extlist[cold_count + 1], buf,
F2FS_EXTENSION_LEN * hot_count);
sbi->raw_super->extension_count = cpu_to_le32(cold_count + 1);
}
return 0;
}

static int f2fs_create(struct inode *dir, struct dentry *dentry, umode_t mode,
@@ -203,7 +279,7 @@ static int f2fs_create(struct inode *dir, struct dentry *dentry, umode_t mode,
		return PTR_ERR(inode);

	if (!test_opt(sbi, DISABLE_EXT_IDENTIFY))
-		set_cold_files(sbi, inode, dentry->d_name.name);
+		set_file_temperature(sbi, inode, dentry->d_name.name);

	inode->i_op = &f2fs_file_inode_operations;
	inode->i_fop = &f2fs_file_operations;
...@@ -317,7 +393,6 @@ static int __recover_dot_dentries(struct inode *dir, nid_t pino) ...@@ -317,7 +393,6 @@ static int __recover_dot_dentries(struct inode *dir, nid_t pino)
de = f2fs_find_entry(dir, &dot, &page); de = f2fs_find_entry(dir, &dot, &page);
if (de) { if (de) {
f2fs_dentry_kunmap(dir, page);
f2fs_put_page(page, 0); f2fs_put_page(page, 0);
} else if (IS_ERR(page)) { } else if (IS_ERR(page)) {
err = PTR_ERR(page); err = PTR_ERR(page);
...@@ -329,14 +404,12 @@ static int __recover_dot_dentries(struct inode *dir, nid_t pino) ...@@ -329,14 +404,12 @@ static int __recover_dot_dentries(struct inode *dir, nid_t pino)
} }
de = f2fs_find_entry(dir, &dotdot, &page); de = f2fs_find_entry(dir, &dotdot, &page);
if (de) { if (de)
f2fs_dentry_kunmap(dir, page);
f2fs_put_page(page, 0); f2fs_put_page(page, 0);
} else if (IS_ERR(page)) { else if (IS_ERR(page))
err = PTR_ERR(page); err = PTR_ERR(page);
} else { else
err = __f2fs_add_link(dir, &dotdot, NULL, pino, S_IFDIR); err = __f2fs_add_link(dir, &dotdot, NULL, pino, S_IFDIR);
}
out: out:
if (!err) if (!err)
clear_inode_flag(dir, FI_INLINE_DOTS); clear_inode_flag(dir, FI_INLINE_DOTS);
...@@ -377,7 +450,6 @@ static struct dentry *f2fs_lookup(struct inode *dir, struct dentry *dentry, ...@@ -377,7 +450,6 @@ static struct dentry *f2fs_lookup(struct inode *dir, struct dentry *dentry,
} }
ino = le32_to_cpu(de->ino); ino = le32_to_cpu(de->ino);
f2fs_dentry_kunmap(dir, page);
f2fs_put_page(page, 0); f2fs_put_page(page, 0);
inode = f2fs_iget(dir->i_sb, ino); inode = f2fs_iget(dir->i_sb, ino);
...@@ -452,7 +524,6 @@ static int f2fs_unlink(struct inode *dir, struct dentry *dentry) ...@@ -452,7 +524,6 @@ static int f2fs_unlink(struct inode *dir, struct dentry *dentry)
err = acquire_orphan_inode(sbi); err = acquire_orphan_inode(sbi);
if (err) { if (err) {
f2fs_unlock_op(sbi); f2fs_unlock_op(sbi);
f2fs_dentry_kunmap(dir, page);
f2fs_put_page(page, 0); f2fs_put_page(page, 0);
goto fail; goto fail;
} }
...@@ -579,7 +650,7 @@ static int f2fs_mkdir(struct inode *dir, struct dentry *dentry, umode_t mode) ...@@ -579,7 +650,7 @@ static int f2fs_mkdir(struct inode *dir, struct dentry *dentry, umode_t mode)
inode->i_op = &f2fs_dir_inode_operations; inode->i_op = &f2fs_dir_inode_operations;
inode->i_fop = &f2fs_dir_operations; inode->i_fop = &f2fs_dir_operations;
inode->i_mapping->a_ops = &f2fs_dblock_aops; inode->i_mapping->a_ops = &f2fs_dblock_aops;
mapping_set_gfp_mask(inode->i_mapping, GFP_F2FS_HIGH_ZERO); inode_nohighmem(inode);
set_inode_flag(inode, FI_INC_LINK); set_inode_flag(inode, FI_INC_LINK);
f2fs_lock_op(sbi); f2fs_lock_op(sbi);
...@@ -717,10 +788,12 @@ static int __f2fs_tmpfile(struct inode *dir, struct dentry *dentry, ...@@ -717,10 +788,12 @@ static int __f2fs_tmpfile(struct inode *dir, struct dentry *dentry,
static int f2fs_tmpfile(struct inode *dir, struct dentry *dentry, umode_t mode) static int f2fs_tmpfile(struct inode *dir, struct dentry *dentry, umode_t mode)
{ {
if (unlikely(f2fs_cp_error(F2FS_I_SB(dir)))) struct f2fs_sb_info *sbi = F2FS_I_SB(dir);
if (unlikely(f2fs_cp_error(sbi)))
return -EIO; return -EIO;
if (f2fs_encrypted_inode(dir)) { if (f2fs_encrypted_inode(dir) || DUMMY_ENCRYPTION_ENABLED(sbi)) {
int err = fscrypt_get_encryption_info(dir); int err = fscrypt_get_encryption_info(dir);
if (err) if (err)
return err; return err;
...@@ -893,16 +966,15 @@ static int f2fs_rename(struct inode *old_dir, struct dentry *old_dentry, ...@@ -893,16 +966,15 @@ static int f2fs_rename(struct inode *old_dir, struct dentry *old_dentry,
} }
if (old_dir_entry) { if (old_dir_entry) {
if (old_dir != new_dir && !whiteout) { if (old_dir != new_dir && !whiteout)
f2fs_set_link(old_inode, old_dir_entry, f2fs_set_link(old_inode, old_dir_entry,
old_dir_page, new_dir); old_dir_page, new_dir);
} else { else
f2fs_dentry_kunmap(old_inode, old_dir_page);
f2fs_put_page(old_dir_page, 0); f2fs_put_page(old_dir_page, 0);
}
f2fs_i_links_write(old_dir, false); f2fs_i_links_write(old_dir, false);
} }
add_ino_entry(sbi, new_dir->i_ino, TRANS_DIR_INO); if (F2FS_OPTION(sbi).fsync_mode == FSYNC_MODE_STRICT)
add_ino_entry(sbi, new_dir->i_ino, TRANS_DIR_INO);
f2fs_unlock_op(sbi); f2fs_unlock_op(sbi);
...@@ -912,20 +984,15 @@ static int f2fs_rename(struct inode *old_dir, struct dentry *old_dentry, ...@@ -912,20 +984,15 @@ static int f2fs_rename(struct inode *old_dir, struct dentry *old_dentry,
put_out_dir: put_out_dir:
f2fs_unlock_op(sbi); f2fs_unlock_op(sbi);
if (new_page) { if (new_page)
f2fs_dentry_kunmap(new_dir, new_page);
f2fs_put_page(new_page, 0); f2fs_put_page(new_page, 0);
}
out_whiteout: out_whiteout:
if (whiteout) if (whiteout)
iput(whiteout); iput(whiteout);
out_dir: out_dir:
if (old_dir_entry) { if (old_dir_entry)
f2fs_dentry_kunmap(old_inode, old_dir_page);
f2fs_put_page(old_dir_page, 0); f2fs_put_page(old_dir_page, 0);
}
out_old: out_old:
f2fs_dentry_kunmap(old_dir, old_page);
f2fs_put_page(old_page, 0); f2fs_put_page(old_page, 0);
out: out:
return err; return err;
...@@ -1057,8 +1124,10 @@ static int f2fs_cross_rename(struct inode *old_dir, struct dentry *old_dentry, ...@@ -1057,8 +1124,10 @@ static int f2fs_cross_rename(struct inode *old_dir, struct dentry *old_dentry,
} }
f2fs_mark_inode_dirty_sync(new_dir, false); f2fs_mark_inode_dirty_sync(new_dir, false);
add_ino_entry(sbi, old_dir->i_ino, TRANS_DIR_INO); if (F2FS_OPTION(sbi).fsync_mode == FSYNC_MODE_STRICT) {
add_ino_entry(sbi, new_dir->i_ino, TRANS_DIR_INO); add_ino_entry(sbi, old_dir->i_ino, TRANS_DIR_INO);
add_ino_entry(sbi, new_dir->i_ino, TRANS_DIR_INO);
}
f2fs_unlock_op(sbi); f2fs_unlock_op(sbi);
...@@ -1067,19 +1136,15 @@ static int f2fs_cross_rename(struct inode *old_dir, struct dentry *old_dentry, ...@@ -1067,19 +1136,15 @@ static int f2fs_cross_rename(struct inode *old_dir, struct dentry *old_dentry,
return 0; return 0;
out_new_dir: out_new_dir:
if (new_dir_entry) { if (new_dir_entry) {
f2fs_dentry_kunmap(new_inode, new_dir_page);
f2fs_put_page(new_dir_page, 0); f2fs_put_page(new_dir_page, 0);
} }
out_old_dir: out_old_dir:
if (old_dir_entry) { if (old_dir_entry) {
f2fs_dentry_kunmap(old_inode, old_dir_page);
f2fs_put_page(old_dir_page, 0); f2fs_put_page(old_dir_page, 0);
} }
out_new: out_new:
f2fs_dentry_kunmap(new_dir, new_page);
f2fs_put_page(new_page, 0); f2fs_put_page(new_page, 0);
out_old: out_old:
f2fs_dentry_kunmap(old_dir, old_page);
f2fs_put_page(old_page, 0); f2fs_put_page(old_page, 0);
out: out:
return err; return err;
......
...@@ -193,8 +193,8 @@ static void __del_from_nat_cache(struct f2fs_nm_info *nm_i, struct nat_entry *e) ...@@ -193,8 +193,8 @@ static void __del_from_nat_cache(struct f2fs_nm_info *nm_i, struct nat_entry *e)
__free_nat_entry(e); __free_nat_entry(e);
} }
static void __set_nat_cache_dirty(struct f2fs_nm_info *nm_i, static struct nat_entry_set *__grab_nat_entry_set(struct f2fs_nm_info *nm_i,
struct nat_entry *ne) struct nat_entry *ne)
{ {
nid_t set = NAT_BLOCK_OFFSET(ne->ni.nid); nid_t set = NAT_BLOCK_OFFSET(ne->ni.nid);
struct nat_entry_set *head; struct nat_entry_set *head;
...@@ -209,15 +209,36 @@ static void __set_nat_cache_dirty(struct f2fs_nm_info *nm_i, ...@@ -209,15 +209,36 @@ static void __set_nat_cache_dirty(struct f2fs_nm_info *nm_i,
head->entry_cnt = 0; head->entry_cnt = 0;
f2fs_radix_tree_insert(&nm_i->nat_set_root, set, head); f2fs_radix_tree_insert(&nm_i->nat_set_root, set, head);
} }
return head;
}
static void __set_nat_cache_dirty(struct f2fs_nm_info *nm_i,
struct nat_entry *ne)
{
struct nat_entry_set *head;
bool new_ne = nat_get_blkaddr(ne) == NEW_ADDR;
if (!new_ne)
head = __grab_nat_entry_set(nm_i, ne);
/*
* update entry_cnt in either of the conditions below:
* 1. update NEW_ADDR to valid block address;
* 2. update old block address to new one;
*/
if (!new_ne && (get_nat_flag(ne, IS_PREALLOC) ||
!get_nat_flag(ne, IS_DIRTY)))
head->entry_cnt++;
set_nat_flag(ne, IS_PREALLOC, new_ne);
if (get_nat_flag(ne, IS_DIRTY)) if (get_nat_flag(ne, IS_DIRTY))
goto refresh_list; goto refresh_list;
nm_i->dirty_nat_cnt++; nm_i->dirty_nat_cnt++;
head->entry_cnt++;
set_nat_flag(ne, IS_DIRTY, true); set_nat_flag(ne, IS_DIRTY, true);
refresh_list: refresh_list:
if (nat_get_blkaddr(ne) == NEW_ADDR) if (new_ne)
list_del_init(&ne->list); list_del_init(&ne->list);
else else
list_move_tail(&ne->list, &head->entry_list); list_move_tail(&ne->list, &head->entry_list);
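Read together with the new IS_PREALLOC flag, head->entry_cnt now counts only dirty entries that carry a valid block address. An illustrative lifecycle of a single nat entry (not part of the patch):

/* 1. preallocation: blkaddr == NEW_ADDR
 *      -> IS_PREALLOC set, entry removed from any set list, entry_cnt untouched
 * 2. writeback assigns a real block address
 *      -> IS_PREALLOC cleared, entry_cnt++ (the entry is now worth flushing)
 * 3. later valid -> valid updates while still dirty
 *      -> already counted, entry_cnt unchanged
 */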
...@@ -1076,7 +1097,7 @@ struct page *new_node_page(struct dnode_of_data *dn, unsigned int ofs) ...@@ -1076,7 +1097,7 @@ struct page *new_node_page(struct dnode_of_data *dn, unsigned int ofs)
f2fs_wait_on_page_writeback(page, NODE, true); f2fs_wait_on_page_writeback(page, NODE, true);
fill_node_footer(page, dn->nid, dn->inode->i_ino, ofs, true); fill_node_footer(page, dn->nid, dn->inode->i_ino, ofs, true);
set_cold_node(dn->inode, page); set_cold_node(page, S_ISDIR(dn->inode->i_mode));
if (!PageUptodate(page)) if (!PageUptodate(page))
SetPageUptodate(page); SetPageUptodate(page);
if (set_page_dirty(page)) if (set_page_dirty(page))
...@@ -2291,6 +2312,7 @@ int recover_inode_page(struct f2fs_sb_info *sbi, struct page *page) ...@@ -2291,6 +2312,7 @@ int recover_inode_page(struct f2fs_sb_info *sbi, struct page *page)
if (!PageUptodate(ipage)) if (!PageUptodate(ipage))
SetPageUptodate(ipage); SetPageUptodate(ipage);
fill_node_footer(ipage, ino, ino, 0, true); fill_node_footer(ipage, ino, ino, 0, true);
set_cold_node(page, false);
src = F2FS_INODE(page); src = F2FS_INODE(page);
dst = F2FS_INODE(ipage); dst = F2FS_INODE(ipage);
...@@ -2580,8 +2602,7 @@ static int __get_nat_bitmaps(struct f2fs_sb_info *sbi) ...@@ -2580,8 +2602,7 @@ static int __get_nat_bitmaps(struct f2fs_sb_info *sbi)
if (!enabled_nat_bits(sbi, NULL)) if (!enabled_nat_bits(sbi, NULL))
return 0; return 0;
nm_i->nat_bits_blocks = F2FS_BYTES_TO_BLK((nat_bits_bytes << 1) + 8 + nm_i->nat_bits_blocks = F2FS_BLK_ALIGN((nat_bits_bytes << 1) + 8);
F2FS_BLKSIZE - 1);
nm_i->nat_bits = f2fs_kzalloc(sbi, nm_i->nat_bits = f2fs_kzalloc(sbi,
nm_i->nat_bits_blocks << F2FS_BLKSIZE_BITS, GFP_KERNEL); nm_i->nat_bits_blocks << F2FS_BLKSIZE_BITS, GFP_KERNEL);
if (!nm_i->nat_bits) if (!nm_i->nat_bits)
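The F2FS_BLK_ALIGN() form is the same round-up the old expression spelled out by hand; a quick worked example with made-up numbers, assuming the fixed 4KB block size:

/* Illustrative: nat_bits_bytes = 64
 *   (nat_bits_bytes << 1) + 8 = 136 bytes
 *   F2FS_BLK_ALIGN(136)       = (136 + 4095) >> 12 = 1 block
 * identical to the previous F2FS_BYTES_TO_BLK(136 + F2FS_BLKSIZE - 1).
 */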
...@@ -2707,12 +2728,20 @@ static int init_node_manager(struct f2fs_sb_info *sbi) ...@@ -2707,12 +2728,20 @@ static int init_node_manager(struct f2fs_sb_info *sbi)
static int init_free_nid_cache(struct f2fs_sb_info *sbi) static int init_free_nid_cache(struct f2fs_sb_info *sbi)
{ {
struct f2fs_nm_info *nm_i = NM_I(sbi); struct f2fs_nm_info *nm_i = NM_I(sbi);
int i;
nm_i->free_nid_bitmap = f2fs_kvzalloc(sbi, nm_i->nat_blocks * nm_i->free_nid_bitmap = f2fs_kzalloc(sbi, nm_i->nat_blocks *
NAT_ENTRY_BITMAP_SIZE, GFP_KERNEL); sizeof(unsigned char *), GFP_KERNEL);
if (!nm_i->free_nid_bitmap) if (!nm_i->free_nid_bitmap)
return -ENOMEM; return -ENOMEM;
for (i = 0; i < nm_i->nat_blocks; i++) {
nm_i->free_nid_bitmap[i] = f2fs_kvzalloc(sbi,
NAT_ENTRY_BITMAP_SIZE_ALIGNED, GFP_KERNEL);
if (!nm_i->free_nid_bitmap[i])
return -ENOMEM;
}
nm_i->nat_block_bitmap = f2fs_kvzalloc(sbi, nm_i->nat_blocks / 8, nm_i->nat_block_bitmap = f2fs_kvzalloc(sbi, nm_i->nat_blocks / 8,
GFP_KERNEL); GFP_KERNEL);
if (!nm_i->nat_block_bitmap) if (!nm_i->nat_block_bitmap)
...@@ -2803,7 +2832,13 @@ void destroy_node_manager(struct f2fs_sb_info *sbi) ...@@ -2803,7 +2832,13 @@ void destroy_node_manager(struct f2fs_sb_info *sbi)
up_write(&nm_i->nat_tree_lock); up_write(&nm_i->nat_tree_lock);
kvfree(nm_i->nat_block_bitmap); kvfree(nm_i->nat_block_bitmap);
kvfree(nm_i->free_nid_bitmap); if (nm_i->free_nid_bitmap) {
int i;
for (i = 0; i < nm_i->nat_blocks; i++)
kvfree(nm_i->free_nid_bitmap[i]);
kfree(nm_i->free_nid_bitmap);
}
kvfree(nm_i->free_nid_count); kvfree(nm_i->free_nid_count);
kfree(nm_i->nat_bitmap); kfree(nm_i->nat_bitmap);
......
...@@ -44,6 +44,7 @@ enum { ...@@ -44,6 +44,7 @@ enum {
HAS_FSYNCED_INODE, /* is the inode fsynced before? */ HAS_FSYNCED_INODE, /* is the inode fsynced before? */
HAS_LAST_FSYNC, /* has the latest node fsync mark? */ HAS_LAST_FSYNC, /* has the latest node fsync mark? */
IS_DIRTY, /* this nat entry is dirty? */ IS_DIRTY, /* this nat entry is dirty? */
IS_PREALLOC, /* nat entry is preallocated */
}; };
/* /*
...@@ -422,12 +423,12 @@ static inline void clear_inline_node(struct page *page) ...@@ -422,12 +423,12 @@ static inline void clear_inline_node(struct page *page)
ClearPageChecked(page); ClearPageChecked(page);
} }
static inline void set_cold_node(struct inode *inode, struct page *page) static inline void set_cold_node(struct page *page, bool is_dir)
{ {
struct f2fs_node *rn = F2FS_NODE(page); struct f2fs_node *rn = F2FS_NODE(page);
unsigned int flag = le32_to_cpu(rn->footer.flag); unsigned int flag = le32_to_cpu(rn->footer.flag);
if (S_ISDIR(inode->i_mode)) if (is_dir)
flag &= ~(0x1 << COLD_BIT_SHIFT); flag &= ~(0x1 << COLD_BIT_SHIFT);
else else
flag |= (0x1 << COLD_BIT_SHIFT); flag |= (0x1 << COLD_BIT_SHIFT);
......
...@@ -144,7 +144,7 @@ static int recover_dentry(struct inode *inode, struct page *ipage, ...@@ -144,7 +144,7 @@ static int recover_dentry(struct inode *inode, struct page *ipage,
retry: retry:
de = __f2fs_find_entry(dir, &fname, &page); de = __f2fs_find_entry(dir, &fname, &page);
if (de && inode->i_ino == le32_to_cpu(de->ino)) if (de && inode->i_ino == le32_to_cpu(de->ino))
goto out_unmap_put; goto out_put;
if (de) { if (de) {
einode = f2fs_iget_retry(inode->i_sb, le32_to_cpu(de->ino)); einode = f2fs_iget_retry(inode->i_sb, le32_to_cpu(de->ino));
...@@ -153,19 +153,19 @@ static int recover_dentry(struct inode *inode, struct page *ipage, ...@@ -153,19 +153,19 @@ static int recover_dentry(struct inode *inode, struct page *ipage,
err = PTR_ERR(einode); err = PTR_ERR(einode);
if (err == -ENOENT) if (err == -ENOENT)
err = -EEXIST; err = -EEXIST;
goto out_unmap_put; goto out_put;
} }
err = dquot_initialize(einode); err = dquot_initialize(einode);
if (err) { if (err) {
iput(einode); iput(einode);
goto out_unmap_put; goto out_put;
} }
err = acquire_orphan_inode(F2FS_I_SB(inode)); err = acquire_orphan_inode(F2FS_I_SB(inode));
if (err) { if (err) {
iput(einode); iput(einode);
goto out_unmap_put; goto out_put;
} }
f2fs_delete_entry(de, page, dir, einode); f2fs_delete_entry(de, page, dir, einode);
iput(einode); iput(einode);
...@@ -180,8 +180,7 @@ static int recover_dentry(struct inode *inode, struct page *ipage, ...@@ -180,8 +180,7 @@ static int recover_dentry(struct inode *inode, struct page *ipage,
goto retry; goto retry;
goto out; goto out;
out_unmap_put: out_put:
f2fs_dentry_kunmap(dir, page);
f2fs_put_page(page, 0); f2fs_put_page(page, 0);
out: out:
if (file_enc_name(inode)) if (file_enc_name(inode))
...@@ -243,6 +242,9 @@ static int find_fsync_dnodes(struct f2fs_sb_info *sbi, struct list_head *head, ...@@ -243,6 +242,9 @@ static int find_fsync_dnodes(struct f2fs_sb_info *sbi, struct list_head *head,
struct curseg_info *curseg; struct curseg_info *curseg;
struct page *page = NULL; struct page *page = NULL;
block_t blkaddr; block_t blkaddr;
unsigned int loop_cnt = 0;
unsigned int free_blocks = sbi->user_block_count -
valid_user_blocks(sbi);
int err = 0; int err = 0;
/* get node pages in the current segment */ /* get node pages in the current segment */
...@@ -295,6 +297,17 @@ static int find_fsync_dnodes(struct f2fs_sb_info *sbi, struct list_head *head, ...@@ -295,6 +297,17 @@ static int find_fsync_dnodes(struct f2fs_sb_info *sbi, struct list_head *head,
if (IS_INODE(page) && is_dent_dnode(page)) if (IS_INODE(page) && is_dent_dnode(page))
entry->last_dentry = blkaddr; entry->last_dentry = blkaddr;
next: next:
/* sanity check in order to detect looped node chain */
if (++loop_cnt >= free_blocks ||
blkaddr == next_blkaddr_of_node(page)) {
f2fs_msg(sbi->sb, KERN_NOTICE,
"%s: detect looped node chain, "
"blkaddr:%u, next:%u",
__func__, blkaddr, next_blkaddr_of_node(page));
err = -EINVAL;
break;
}
/* check next segment */ /* check next segment */
blkaddr = next_blkaddr_of_node(page); blkaddr = next_blkaddr_of_node(page);
f2fs_put_page(page, 1); f2fs_put_page(page, 1);
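The added guard bounds the recovery walk; roughly, a legitimate fsync node chain cannot hold more blocks than were free at the last checkpoint, so the two abort conditions can be read as:

/* loop_cnt >= free_blocks           -> chain is longer than physically possible,
 *                                      so it must contain a cycle
 * blkaddr == next_blkaddr_of_node() -> trivial self-referencing block
 * either way recovery stops with -EINVAL instead of spinning forever
 */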
......
...@@ -1411,12 +1411,11 @@ static int issue_discard_thread(void *data) ...@@ -1411,12 +1411,11 @@ static int issue_discard_thread(void *data)
if (kthread_should_stop()) if (kthread_should_stop())
return 0; return 0;
if (dcc->discard_wake) { if (dcc->discard_wake)
dcc->discard_wake = 0; dcc->discard_wake = 0;
if (sbi->gc_thread && sbi->gc_thread->gc_urgent)
init_discard_policy(&dpolicy, if (sbi->gc_thread && sbi->gc_thread->gc_urgent)
DPOLICY_FORCE, 1); init_discard_policy(&dpolicy, DPOLICY_FORCE, 1);
}
sb_start_intwrite(sbi->sb); sb_start_intwrite(sbi->sb);
...@@ -1485,7 +1484,7 @@ static int __issue_discard_async(struct f2fs_sb_info *sbi, ...@@ -1485,7 +1484,7 @@ static int __issue_discard_async(struct f2fs_sb_info *sbi,
struct block_device *bdev, block_t blkstart, block_t blklen) struct block_device *bdev, block_t blkstart, block_t blklen)
{ {
#ifdef CONFIG_BLK_DEV_ZONED #ifdef CONFIG_BLK_DEV_ZONED
if (f2fs_sb_mounted_blkzoned(sbi->sb) && if (f2fs_sb_has_blkzoned(sbi->sb) &&
bdev_zoned_model(bdev) != BLK_ZONED_NONE) bdev_zoned_model(bdev) != BLK_ZONED_NONE)
return __f2fs_issue_discard_zone(sbi, bdev, blkstart, blklen); return __f2fs_issue_discard_zone(sbi, bdev, blkstart, blklen);
#endif #endif
...@@ -1683,7 +1682,7 @@ void clear_prefree_segments(struct f2fs_sb_info *sbi, struct cp_control *cpc) ...@@ -1683,7 +1682,7 @@ void clear_prefree_segments(struct f2fs_sb_info *sbi, struct cp_control *cpc)
sbi->blocks_per_seg, cur_pos); sbi->blocks_per_seg, cur_pos);
len = next_pos - cur_pos; len = next_pos - cur_pos;
if (f2fs_sb_mounted_blkzoned(sbi->sb) || if (f2fs_sb_has_blkzoned(sbi->sb) ||
(force && len < cpc->trim_minlen)) (force && len < cpc->trim_minlen))
goto skip; goto skip;
...@@ -1727,7 +1726,7 @@ void init_discard_policy(struct discard_policy *dpolicy, ...@@ -1727,7 +1726,7 @@ void init_discard_policy(struct discard_policy *dpolicy,
} else if (discard_type == DPOLICY_FORCE) { } else if (discard_type == DPOLICY_FORCE) {
dpolicy->min_interval = DEF_MIN_DISCARD_ISSUE_TIME; dpolicy->min_interval = DEF_MIN_DISCARD_ISSUE_TIME;
dpolicy->max_interval = DEF_MAX_DISCARD_ISSUE_TIME; dpolicy->max_interval = DEF_MAX_DISCARD_ISSUE_TIME;
dpolicy->io_aware = true; dpolicy->io_aware = false;
} else if (discard_type == DPOLICY_FSTRIM) { } else if (discard_type == DPOLICY_FSTRIM) {
dpolicy->io_aware = false; dpolicy->io_aware = false;
} else if (discard_type == DPOLICY_UMOUNT) { } else if (discard_type == DPOLICY_UMOUNT) {
...@@ -1863,7 +1862,7 @@ static void update_sit_entry(struct f2fs_sb_info *sbi, block_t blkaddr, int del) ...@@ -1863,7 +1862,7 @@ static void update_sit_entry(struct f2fs_sb_info *sbi, block_t blkaddr, int del)
sbi->discard_blks--; sbi->discard_blks--;
/* don't overwrite by SSR to keep node chain */ /* don't overwrite by SSR to keep node chain */
if (se->type == CURSEG_WARM_NODE) { if (IS_NODESEG(se->type)) {
if (!f2fs_test_and_set_bit(offset, se->ckpt_valid_map)) if (!f2fs_test_and_set_bit(offset, se->ckpt_valid_map))
se->ckpt_valid_blocks++; se->ckpt_valid_blocks++;
} }
...@@ -2164,11 +2163,17 @@ static unsigned int __get_next_segno(struct f2fs_sb_info *sbi, int type) ...@@ -2164,11 +2163,17 @@ static unsigned int __get_next_segno(struct f2fs_sb_info *sbi, int type)
if (sbi->segs_per_sec != 1) if (sbi->segs_per_sec != 1)
return CURSEG_I(sbi, type)->segno; return CURSEG_I(sbi, type)->segno;
if (type == CURSEG_HOT_DATA || IS_NODESEG(type)) if (test_opt(sbi, NOHEAP) &&
(type == CURSEG_HOT_DATA || IS_NODESEG(type)))
return 0; return 0;
if (SIT_I(sbi)->last_victim[ALLOC_NEXT]) if (SIT_I(sbi)->last_victim[ALLOC_NEXT])
return SIT_I(sbi)->last_victim[ALLOC_NEXT]; return SIT_I(sbi)->last_victim[ALLOC_NEXT];
/* find segments from 0 to reuse freed segments */
if (F2FS_OPTION(sbi).alloc_mode == ALLOC_MODE_REUSE)
return 0;
return CURSEG_I(sbi, type)->segno; return CURSEG_I(sbi, type)->segno;
} }
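An illustrative summary of the fall-through order in __get_next_segno() after this change:

/* 1. sections larger than one segment -> keep the current segment
 * 2. noheap + hot data / node logs    -> start searching from segment 0
 * 3. a pending ALLOC_NEXT victim      -> continue from that segment
 * 4. alloc_mode=reuse                 -> start from segment 0 to reuse freed segments
 * 5. otherwise                        -> continue from the current segment
 */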
...@@ -2455,6 +2460,101 @@ int rw_hint_to_seg_type(enum rw_hint hint) ...@@ -2455,6 +2460,101 @@ int rw_hint_to_seg_type(enum rw_hint hint)
} }
} }
/* This returns a write hint for each segment type. These hints will be
* passed down to the block layer. There are mapping tables which depend on
* the mount option 'whint_mode'.
*
* 1) whint_mode=off. F2FS only passes down WRITE_LIFE_NOT_SET.
*
* 2) whint_mode=user-based. F2FS tries to pass down hints given by users.
*
* User F2FS Block
* ---- ---- -----
* META WRITE_LIFE_NOT_SET
* HOT_NODE "
* WARM_NODE "
* COLD_NODE "
* ioctl(COLD) COLD_DATA WRITE_LIFE_EXTREME
* extension list " "
*
* -- buffered io
* WRITE_LIFE_EXTREME COLD_DATA WRITE_LIFE_EXTREME
* WRITE_LIFE_SHORT HOT_DATA WRITE_LIFE_SHORT
* WRITE_LIFE_NOT_SET WARM_DATA WRITE_LIFE_NOT_SET
* WRITE_LIFE_NONE " "
* WRITE_LIFE_MEDIUM " "
* WRITE_LIFE_LONG " "
*
* -- direct io
* WRITE_LIFE_EXTREME COLD_DATA WRITE_LIFE_EXTREME
* WRITE_LIFE_SHORT HOT_DATA WRITE_LIFE_SHORT
* WRITE_LIFE_NOT_SET WARM_DATA WRITE_LIFE_NOT_SET
* WRITE_LIFE_NONE " WRITE_LIFE_NONE
* WRITE_LIFE_MEDIUM " WRITE_LIFE_MEDIUM
* WRITE_LIFE_LONG " WRITE_LIFE_LONG
*
* 3) whint_mode=fs-based. F2FS passes down hints with its policy.
*
* User F2FS Block
* ---- ---- -----
* META WRITE_LIFE_MEDIUM;
* HOT_NODE WRITE_LIFE_NOT_SET
* WARM_NODE "
* COLD_NODE WRITE_LIFE_NONE
* ioctl(COLD) COLD_DATA WRITE_LIFE_EXTREME
* extension list " "
*
* -- buffered io
* WRITE_LIFE_EXTREME COLD_DATA WRITE_LIFE_EXTREME
* WRITE_LIFE_SHORT HOT_DATA WRITE_LIFE_SHORT
* WRITE_LIFE_NOT_SET WARM_DATA WRITE_LIFE_LONG
* WRITE_LIFE_NONE " "
* WRITE_LIFE_MEDIUM " "
* WRITE_LIFE_LONG " "
*
* -- direct io
* WRITE_LIFE_EXTREME COLD_DATA WRITE_LIFE_EXTREME
* WRITE_LIFE_SHORT HOT_DATA WRITE_LIFE_SHORT
* WRITE_LIFE_NOT_SET WARM_DATA WRITE_LIFE_NOT_SET
* WRITE_LIFE_NONE " WRITE_LIFE_NONE
* WRITE_LIFE_MEDIUM " WRITE_LIFE_MEDIUM
* WRITE_LIFE_LONG " WRITE_LIFE_LONG
*/
enum rw_hint io_type_to_rw_hint(struct f2fs_sb_info *sbi,
enum page_type type, enum temp_type temp)
{
if (F2FS_OPTION(sbi).whint_mode == WHINT_MODE_USER) {
if (type == DATA) {
if (temp == WARM)
return WRITE_LIFE_NOT_SET;
else if (temp == HOT)
return WRITE_LIFE_SHORT;
else if (temp == COLD)
return WRITE_LIFE_EXTREME;
} else {
return WRITE_LIFE_NOT_SET;
}
} else if (F2FS_OPTION(sbi).whint_mode == WHINT_MODE_FS) {
if (type == DATA) {
if (temp == WARM)
return WRITE_LIFE_LONG;
else if (temp == HOT)
return WRITE_LIFE_SHORT;
else if (temp == COLD)
return WRITE_LIFE_EXTREME;
} else if (type == NODE) {
if (temp == WARM || temp == HOT)
return WRITE_LIFE_NOT_SET;
else if (temp == COLD)
return WRITE_LIFE_NONE;
} else if (type == META) {
return WRITE_LIFE_MEDIUM;
}
}
return WRITE_LIFE_NOT_SET;
}
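The helper only computes a hint; a hedged sketch of how a submit path could hand it to the block layer (the helper name below is invented, and the real f2fs call site may differ):

/* Hypothetical helper, assuming the block layer's bio->bi_write_hint field. */
static void f2fs_attach_write_hint(struct f2fs_sb_info *sbi, struct bio *bio,
				   enum page_type type, enum temp_type temp)
{
	bio->bi_write_hint = io_type_to_rw_hint(sbi, type, temp);
}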
static int __get_segment_type_2(struct f2fs_io_info *fio) static int __get_segment_type_2(struct f2fs_io_info *fio)
{ {
if (fio->type == DATA) if (fio->type == DATA)
...@@ -2487,7 +2587,8 @@ static int __get_segment_type_6(struct f2fs_io_info *fio) ...@@ -2487,7 +2587,8 @@ static int __get_segment_type_6(struct f2fs_io_info *fio)
if (is_cold_data(fio->page) || file_is_cold(inode)) if (is_cold_data(fio->page) || file_is_cold(inode))
return CURSEG_COLD_DATA; return CURSEG_COLD_DATA;
if (is_inode_flag_set(inode, FI_HOT_DATA)) if (file_is_hot(inode) ||
is_inode_flag_set(inode, FI_HOT_DATA))
return CURSEG_HOT_DATA; return CURSEG_HOT_DATA;
return rw_hint_to_seg_type(inode->i_write_hint); return rw_hint_to_seg_type(inode->i_write_hint);
} else { } else {
...@@ -2502,7 +2603,7 @@ static int __get_segment_type(struct f2fs_io_info *fio) ...@@ -2502,7 +2603,7 @@ static int __get_segment_type(struct f2fs_io_info *fio)
{ {
int type = 0; int type = 0;
switch (fio->sbi->active_logs) { switch (F2FS_OPTION(fio->sbi).active_logs) {
case 2: case 2:
type = __get_segment_type_2(fio); type = __get_segment_type_2(fio);
break; break;
...@@ -2642,6 +2743,7 @@ void write_meta_page(struct f2fs_sb_info *sbi, struct page *page, ...@@ -2642,6 +2743,7 @@ void write_meta_page(struct f2fs_sb_info *sbi, struct page *page,
struct f2fs_io_info fio = { struct f2fs_io_info fio = {
.sbi = sbi, .sbi = sbi,
.type = META, .type = META,
.temp = HOT,
.op = REQ_OP_WRITE, .op = REQ_OP_WRITE,
.op_flags = REQ_SYNC | REQ_META | REQ_PRIO, .op_flags = REQ_SYNC | REQ_META | REQ_PRIO,
.old_blkaddr = page->index, .old_blkaddr = page->index,
...@@ -2688,8 +2790,15 @@ void write_data_page(struct dnode_of_data *dn, struct f2fs_io_info *fio) ...@@ -2688,8 +2790,15 @@ void write_data_page(struct dnode_of_data *dn, struct f2fs_io_info *fio)
int rewrite_data_page(struct f2fs_io_info *fio) int rewrite_data_page(struct f2fs_io_info *fio)
{ {
int err; int err;
struct f2fs_sb_info *sbi = fio->sbi;
fio->new_blkaddr = fio->old_blkaddr; fio->new_blkaddr = fio->old_blkaddr;
/* i/o temperature is needed for passing down write hints */
__get_segment_type(fio);
f2fs_bug_on(sbi, !IS_DATASEG(get_seg_entry(sbi,
GET_SEGNO(sbi, fio->new_blkaddr))->type));
stat_inc_inplace_blocks(fio->sbi); stat_inc_inplace_blocks(fio->sbi);
err = f2fs_submit_page_bio(fio); err = f2fs_submit_page_bio(fio);
......
...@@ -53,13 +53,19 @@ ...@@ -53,13 +53,19 @@
((secno) == CURSEG_I(sbi, CURSEG_COLD_NODE)->segno / \ ((secno) == CURSEG_I(sbi, CURSEG_COLD_NODE)->segno / \
(sbi)->segs_per_sec)) \ (sbi)->segs_per_sec)) \
#define MAIN_BLKADDR(sbi) (SM_I(sbi)->main_blkaddr) #define MAIN_BLKADDR(sbi) \
#define SEG0_BLKADDR(sbi) (SM_I(sbi)->seg0_blkaddr) (SM_I(sbi) ? SM_I(sbi)->main_blkaddr : \
le32_to_cpu(F2FS_RAW_SUPER(sbi)->main_blkaddr))
#define SEG0_BLKADDR(sbi) \
(SM_I(sbi) ? SM_I(sbi)->seg0_blkaddr : \
le32_to_cpu(F2FS_RAW_SUPER(sbi)->segment0_blkaddr))
#define MAIN_SEGS(sbi) (SM_I(sbi)->main_segments) #define MAIN_SEGS(sbi) (SM_I(sbi)->main_segments)
#define MAIN_SECS(sbi) ((sbi)->total_sections) #define MAIN_SECS(sbi) ((sbi)->total_sections)
#define TOTAL_SEGS(sbi) (SM_I(sbi)->segment_count) #define TOTAL_SEGS(sbi) \
(SM_I(sbi) ? SM_I(sbi)->segment_count : \
le32_to_cpu(F2FS_RAW_SUPER(sbi)->segment_count))
#define TOTAL_BLKS(sbi) (TOTAL_SEGS(sbi) << (sbi)->log_blocks_per_seg) #define TOTAL_BLKS(sbi) (TOTAL_SEGS(sbi) << (sbi)->log_blocks_per_seg)
#define MAX_BLKADDR(sbi) (SEG0_BLKADDR(sbi) + TOTAL_BLKS(sbi)) #define MAX_BLKADDR(sbi) (SEG0_BLKADDR(sbi) + TOTAL_BLKS(sbi))
...@@ -596,6 +602,8 @@ static inline int utilization(struct f2fs_sb_info *sbi) ...@@ -596,6 +602,8 @@ static inline int utilization(struct f2fs_sb_info *sbi)
#define DEF_MIN_FSYNC_BLOCKS 8 #define DEF_MIN_FSYNC_BLOCKS 8
#define DEF_MIN_HOT_BLOCKS 16 #define DEF_MIN_HOT_BLOCKS 16
#define SMALL_VOLUME_SEGMENTS (16 * 512) /* 16GB */
enum { enum {
F2FS_IPU_FORCE, F2FS_IPU_FORCE,
F2FS_IPU_SSR, F2FS_IPU_SSR,
...@@ -630,10 +638,17 @@ static inline void check_seg_range(struct f2fs_sb_info *sbi, unsigned int segno) ...@@ -630,10 +638,17 @@ static inline void check_seg_range(struct f2fs_sb_info *sbi, unsigned int segno)
f2fs_bug_on(sbi, segno > TOTAL_SEGS(sbi) - 1); f2fs_bug_on(sbi, segno > TOTAL_SEGS(sbi) - 1);
} }
static inline void verify_block_addr(struct f2fs_sb_info *sbi, block_t blk_addr) static inline void verify_block_addr(struct f2fs_io_info *fio, block_t blk_addr)
{ {
BUG_ON(blk_addr < SEG0_BLKADDR(sbi) struct f2fs_sb_info *sbi = fio->sbi;
|| blk_addr >= MAX_BLKADDR(sbi));
if (PAGE_TYPE_OF_BIO(fio->type) == META &&
(!is_read_io(fio->op) || fio->is_meta))
BUG_ON(blk_addr < SEG0_BLKADDR(sbi) ||
blk_addr >= MAIN_BLKADDR(sbi));
else
BUG_ON(blk_addr < MAIN_BLKADDR(sbi) ||
blk_addr >= MAX_BLKADDR(sbi));
} }
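The split check encodes the usual on-disk layout: meta blocks live in [SEG0_BLKADDR, MAIN_BLKADDR) and data/node blocks in [MAIN_BLKADDR, MAX_BLKADDR). Two illustrative predicates (not in the patch) expressing the same ranges:

static inline bool blk_in_meta_area(struct f2fs_sb_info *sbi, block_t blk_addr)
{
	return blk_addr >= SEG0_BLKADDR(sbi) && blk_addr < MAIN_BLKADDR(sbi);
}

static inline bool blk_in_main_area(struct f2fs_sb_info *sbi, block_t blk_addr)
{
	return blk_addr >= MAIN_BLKADDR(sbi) && blk_addr < MAX_BLKADDR(sbi);
}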
/* /*
......
...@@ -58,7 +58,7 @@ static unsigned char *__struct_ptr(struct f2fs_sb_info *sbi, int struct_type) ...@@ -58,7 +58,7 @@ static unsigned char *__struct_ptr(struct f2fs_sb_info *sbi, int struct_type)
#ifdef CONFIG_F2FS_FAULT_INJECTION #ifdef CONFIG_F2FS_FAULT_INJECTION
else if (struct_type == FAULT_INFO_RATE || else if (struct_type == FAULT_INFO_RATE ||
struct_type == FAULT_INFO_TYPE) struct_type == FAULT_INFO_TYPE)
return (unsigned char *)&sbi->fault_info; return (unsigned char *)&F2FS_OPTION(sbi).fault_info;
#endif #endif
return NULL; return NULL;
} }
...@@ -92,10 +92,10 @@ static ssize_t features_show(struct f2fs_attr *a, ...@@ -92,10 +92,10 @@ static ssize_t features_show(struct f2fs_attr *a,
if (!sb->s_bdev->bd_part) if (!sb->s_bdev->bd_part)
return snprintf(buf, PAGE_SIZE, "0\n"); return snprintf(buf, PAGE_SIZE, "0\n");
if (f2fs_sb_has_crypto(sb)) if (f2fs_sb_has_encrypt(sb))
len += snprintf(buf, PAGE_SIZE - len, "%s", len += snprintf(buf, PAGE_SIZE - len, "%s",
"encryption"); "encryption");
if (f2fs_sb_mounted_blkzoned(sb)) if (f2fs_sb_has_blkzoned(sb))
len += snprintf(buf + len, PAGE_SIZE - len, "%s%s", len += snprintf(buf + len, PAGE_SIZE - len, "%s%s",
len ? ", " : "", "blkzoned"); len ? ", " : "", "blkzoned");
if (f2fs_sb_has_extra_attr(sb)) if (f2fs_sb_has_extra_attr(sb))
...@@ -116,6 +116,9 @@ static ssize_t features_show(struct f2fs_attr *a, ...@@ -116,6 +116,9 @@ static ssize_t features_show(struct f2fs_attr *a,
if (f2fs_sb_has_inode_crtime(sb)) if (f2fs_sb_has_inode_crtime(sb))
len += snprintf(buf + len, PAGE_SIZE - len, "%s%s", len += snprintf(buf + len, PAGE_SIZE - len, "%s%s",
len ? ", " : "", "inode_crtime"); len ? ", " : "", "inode_crtime");
if (f2fs_sb_has_lost_found(sb))
len += snprintf(buf + len, PAGE_SIZE - len, "%s%s",
len ? ", " : "", "lost_found");
len += snprintf(buf + len, PAGE_SIZE - len, "\n"); len += snprintf(buf + len, PAGE_SIZE - len, "\n");
return len; return len;
} }
...@@ -136,6 +139,27 @@ static ssize_t f2fs_sbi_show(struct f2fs_attr *a, ...@@ -136,6 +139,27 @@ static ssize_t f2fs_sbi_show(struct f2fs_attr *a,
if (!ptr) if (!ptr)
return -EINVAL; return -EINVAL;
if (!strcmp(a->attr.name, "extension_list")) {
__u8 (*extlist)[F2FS_EXTENSION_LEN] =
sbi->raw_super->extension_list;
int cold_count = le32_to_cpu(sbi->raw_super->extension_count);
int hot_count = sbi->raw_super->hot_ext_count;
int len = 0, i;
len += snprintf(buf + len, PAGE_SIZE - len,
"cold file extenstion:\n");
for (i = 0; i < cold_count; i++)
len += snprintf(buf + len, PAGE_SIZE - len, "%s\n",
extlist[i]);
len += snprintf(buf + len, PAGE_SIZE - len,
"hot file extenstion:\n");
for (i = cold_count; i < cold_count + hot_count; i++)
len += snprintf(buf + len, PAGE_SIZE - len, "%s\n",
extlist[i]);
return len;
}
ui = (unsigned int *)(ptr + a->offset); ui = (unsigned int *)(ptr + a->offset);
return snprintf(buf, PAGE_SIZE, "%u\n", *ui); return snprintf(buf, PAGE_SIZE, "%u\n", *ui);
...@@ -154,6 +178,41 @@ static ssize_t f2fs_sbi_store(struct f2fs_attr *a, ...@@ -154,6 +178,41 @@ static ssize_t f2fs_sbi_store(struct f2fs_attr *a,
if (!ptr) if (!ptr)
return -EINVAL; return -EINVAL;
if (!strcmp(a->attr.name, "extension_list")) {
const char *name = strim((char *)buf);
bool set = true, hot;
if (!strncmp(name, "[h]", 3))
hot = true;
else if (!strncmp(name, "[c]", 3))
hot = false;
else
return -EINVAL;
name += 3;
if (*name == '!') {
name++;
set = false;
}
if (strlen(name) >= F2FS_EXTENSION_LEN)
return -EINVAL;
down_write(&sbi->sb_lock);
ret = update_extension_list(sbi, name, hot, set);
if (ret)
goto out;
ret = f2fs_commit_super(sbi, false);
if (ret)
update_extension_list(sbi, name, hot, !set);
out:
up_write(&sbi->sb_lock);
return ret ? ret : count;
}
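A hedged userspace sketch of the syntax the handler above accepts; the device name sda1 and the helper are made up for illustration:

#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* write one extension rule to the sysfs attribute */
static int f2fs_set_extension(const char *attr, const char *rule)
{
	int fd = open(attr, O_WRONLY);
	ssize_t n;

	if (fd < 0)
		return -1;
	n = write(fd, rule, strlen(rule));
	close(fd);
	return n < 0 ? -1 : 0;
}

/* f2fs_set_extension("/sys/fs/f2fs/sda1/extension_list", "[h]mp3");   add hot
 * f2fs_set_extension("/sys/fs/f2fs/sda1/extension_list", "[c]log");   add cold
 * f2fs_set_extension("/sys/fs/f2fs/sda1/extension_list", "[h]!mp3");  remove again
 */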
ui = (unsigned int *)(ptr + a->offset); ui = (unsigned int *)(ptr + a->offset);
ret = kstrtoul(skip_spaces(buf), 0, &t); ret = kstrtoul(skip_spaces(buf), 0, &t);
...@@ -166,7 +225,7 @@ static ssize_t f2fs_sbi_store(struct f2fs_attr *a, ...@@ -166,7 +225,7 @@ static ssize_t f2fs_sbi_store(struct f2fs_attr *a,
if (a->struct_type == RESERVED_BLOCKS) { if (a->struct_type == RESERVED_BLOCKS) {
spin_lock(&sbi->stat_lock); spin_lock(&sbi->stat_lock);
if (t > (unsigned long)(sbi->user_block_count - if (t > (unsigned long)(sbi->user_block_count -
sbi->root_reserved_blocks)) { F2FS_OPTION(sbi).root_reserved_blocks)) {
spin_unlock(&sbi->stat_lock); spin_unlock(&sbi->stat_lock);
return -EINVAL; return -EINVAL;
} }
...@@ -236,6 +295,7 @@ enum feat_id { ...@@ -236,6 +295,7 @@ enum feat_id {
FEAT_FLEXIBLE_INLINE_XATTR, FEAT_FLEXIBLE_INLINE_XATTR,
FEAT_QUOTA_INO, FEAT_QUOTA_INO,
FEAT_INODE_CRTIME, FEAT_INODE_CRTIME,
FEAT_LOST_FOUND,
}; };
static ssize_t f2fs_feature_show(struct f2fs_attr *a, static ssize_t f2fs_feature_show(struct f2fs_attr *a,
...@@ -251,6 +311,7 @@ static ssize_t f2fs_feature_show(struct f2fs_attr *a, ...@@ -251,6 +311,7 @@ static ssize_t f2fs_feature_show(struct f2fs_attr *a,
case FEAT_FLEXIBLE_INLINE_XATTR: case FEAT_FLEXIBLE_INLINE_XATTR:
case FEAT_QUOTA_INO: case FEAT_QUOTA_INO:
case FEAT_INODE_CRTIME: case FEAT_INODE_CRTIME:
case FEAT_LOST_FOUND:
return snprintf(buf, PAGE_SIZE, "supported\n"); return snprintf(buf, PAGE_SIZE, "supported\n");
} }
return 0; return 0;
...@@ -307,6 +368,7 @@ F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, idle_interval, interval_time[REQ_TIME]); ...@@ -307,6 +368,7 @@ F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, idle_interval, interval_time[REQ_TIME]);
F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, iostat_enable, iostat_enable); F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, iostat_enable, iostat_enable);
F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, readdir_ra, readdir_ra); F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, readdir_ra, readdir_ra);
F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, gc_pin_file_thresh, gc_pin_file_threshold); F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, gc_pin_file_thresh, gc_pin_file_threshold);
F2FS_RW_ATTR(F2FS_SBI, f2fs_super_block, extension_list, extension_list);
#ifdef CONFIG_F2FS_FAULT_INJECTION #ifdef CONFIG_F2FS_FAULT_INJECTION
F2FS_RW_ATTR(FAULT_INFO_RATE, f2fs_fault_info, inject_rate, inject_rate); F2FS_RW_ATTR(FAULT_INFO_RATE, f2fs_fault_info, inject_rate, inject_rate);
F2FS_RW_ATTR(FAULT_INFO_TYPE, f2fs_fault_info, inject_type, inject_type); F2FS_RW_ATTR(FAULT_INFO_TYPE, f2fs_fault_info, inject_type, inject_type);
...@@ -329,6 +391,7 @@ F2FS_FEATURE_RO_ATTR(inode_checksum, FEAT_INODE_CHECKSUM); ...@@ -329,6 +391,7 @@ F2FS_FEATURE_RO_ATTR(inode_checksum, FEAT_INODE_CHECKSUM);
F2FS_FEATURE_RO_ATTR(flexible_inline_xattr, FEAT_FLEXIBLE_INLINE_XATTR); F2FS_FEATURE_RO_ATTR(flexible_inline_xattr, FEAT_FLEXIBLE_INLINE_XATTR);
F2FS_FEATURE_RO_ATTR(quota_ino, FEAT_QUOTA_INO); F2FS_FEATURE_RO_ATTR(quota_ino, FEAT_QUOTA_INO);
F2FS_FEATURE_RO_ATTR(inode_crtime, FEAT_INODE_CRTIME); F2FS_FEATURE_RO_ATTR(inode_crtime, FEAT_INODE_CRTIME);
F2FS_FEATURE_RO_ATTR(lost_found, FEAT_LOST_FOUND);
#define ATTR_LIST(name) (&f2fs_attr_##name.attr) #define ATTR_LIST(name) (&f2fs_attr_##name.attr)
static struct attribute *f2fs_attrs[] = { static struct attribute *f2fs_attrs[] = {
...@@ -357,6 +420,7 @@ static struct attribute *f2fs_attrs[] = { ...@@ -357,6 +420,7 @@ static struct attribute *f2fs_attrs[] = {
ATTR_LIST(iostat_enable), ATTR_LIST(iostat_enable),
ATTR_LIST(readdir_ra), ATTR_LIST(readdir_ra),
ATTR_LIST(gc_pin_file_thresh), ATTR_LIST(gc_pin_file_thresh),
ATTR_LIST(extension_list),
#ifdef CONFIG_F2FS_FAULT_INJECTION #ifdef CONFIG_F2FS_FAULT_INJECTION
ATTR_LIST(inject_rate), ATTR_LIST(inject_rate),
ATTR_LIST(inject_type), ATTR_LIST(inject_type),
...@@ -383,6 +447,7 @@ static struct attribute *f2fs_feat_attrs[] = { ...@@ -383,6 +447,7 @@ static struct attribute *f2fs_feat_attrs[] = {
ATTR_LIST(flexible_inline_xattr), ATTR_LIST(flexible_inline_xattr),
ATTR_LIST(quota_ino), ATTR_LIST(quota_ino),
ATTR_LIST(inode_crtime), ATTR_LIST(inode_crtime),
ATTR_LIST(lost_found),
NULL, NULL,
}; };
......
...@@ -21,6 +21,7 @@ ...@@ -21,6 +21,7 @@
#define F2FS_BLKSIZE 4096 /* support only 4KB block */ #define F2FS_BLKSIZE 4096 /* support only 4KB block */
#define F2FS_BLKSIZE_BITS 12 /* bits for F2FS_BLKSIZE */ #define F2FS_BLKSIZE_BITS 12 /* bits for F2FS_BLKSIZE */
#define F2FS_MAX_EXTENSION 64 /* # of extension entries */ #define F2FS_MAX_EXTENSION 64 /* # of extension entries */
#define F2FS_EXTENSION_LEN 8 /* max size of extension */
#define F2FS_BLK_ALIGN(x) (((x) + F2FS_BLKSIZE - 1) >> F2FS_BLKSIZE_BITS) #define F2FS_BLK_ALIGN(x) (((x) + F2FS_BLKSIZE - 1) >> F2FS_BLKSIZE_BITS)
#define NULL_ADDR ((block_t)0) /* used as block_t addresses */ #define NULL_ADDR ((block_t)0) /* used as block_t addresses */
...@@ -38,15 +39,14 @@ ...@@ -38,15 +39,14 @@
#define F2FS_MAX_QUOTAS 3 #define F2FS_MAX_QUOTAS 3
#define F2FS_IO_SIZE(sbi) (1 << (sbi)->write_io_size_bits) /* Blocks */ #define F2FS_IO_SIZE(sbi) (1 << F2FS_OPTION(sbi).write_io_size_bits) /* Blocks */
#define F2FS_IO_SIZE_KB(sbi) (1 << ((sbi)->write_io_size_bits + 2)) /* KB */ #define F2FS_IO_SIZE_KB(sbi) (1 << (F2FS_OPTION(sbi).write_io_size_bits + 2)) /* KB */
#define F2FS_IO_SIZE_BYTES(sbi) (1 << ((sbi)->write_io_size_bits + 12)) /* B */ #define F2FS_IO_SIZE_BYTES(sbi) (1 << (F2FS_OPTION(sbi).write_io_size_bits + 12)) /* B */
#define F2FS_IO_SIZE_BITS(sbi) ((sbi)->write_io_size_bits) /* power of 2 */ #define F2FS_IO_SIZE_BITS(sbi) (F2FS_OPTION(sbi).write_io_size_bits) /* power of 2 */
#define F2FS_IO_SIZE_MASK(sbi) (F2FS_IO_SIZE(sbi) - 1) #define F2FS_IO_SIZE_MASK(sbi) (F2FS_IO_SIZE(sbi) - 1)
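A worked example of the units, assuming write_io_size_bits == 2 (e.g. an io_bits=2 mount):

/* F2FS_IO_SIZE(sbi)       = 1 << 2        = 4 blocks
 * F2FS_IO_SIZE_KB(sbi)    = 1 << (2 + 2)  = 16 KB
 * F2FS_IO_SIZE_BYTES(sbi) = 1 << (2 + 12) = 16384 bytes
 * F2FS_IO_SIZE_MASK(sbi)  = 4 - 1         = 3
 */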
/* This flag is used by node and meta inodes, and by recovery */ /* This flag is used by node and meta inodes, and by recovery */
#define GFP_F2FS_ZERO (GFP_NOFS | __GFP_ZERO) #define GFP_F2FS_ZERO (GFP_NOFS | __GFP_ZERO)
#define GFP_F2FS_HIGH_ZERO (GFP_NOFS | __GFP_ZERO | __GFP_HIGHMEM)
/* /*
* For further optimization on multi-head logs, on-disk layout supports maximum * For further optimization on multi-head logs, on-disk layout supports maximum
...@@ -102,7 +102,7 @@ struct f2fs_super_block { ...@@ -102,7 +102,7 @@ struct f2fs_super_block {
__u8 uuid[16]; /* 128-bit uuid for volume */ __u8 uuid[16]; /* 128-bit uuid for volume */
__le16 volume_name[MAX_VOLUME_NAME]; /* volume name */ __le16 volume_name[MAX_VOLUME_NAME]; /* volume name */
__le32 extension_count; /* # of extensions below */ __le32 extension_count; /* # of extensions below */
__u8 extension_list[F2FS_MAX_EXTENSION][8]; /* extension array */ __u8 extension_list[F2FS_MAX_EXTENSION][F2FS_EXTENSION_LEN];/* extension array */
__le32 cp_payload; __le32 cp_payload;
__u8 version[VERSION_LEN]; /* the kernel version */ __u8 version[VERSION_LEN]; /* the kernel version */
__u8 init_version[VERSION_LEN]; /* the initial kernel version */ __u8 init_version[VERSION_LEN]; /* the initial kernel version */
...@@ -111,12 +111,14 @@ struct f2fs_super_block { ...@@ -111,12 +111,14 @@ struct f2fs_super_block {
__u8 encrypt_pw_salt[16]; /* Salt used for string2key algorithm */ __u8 encrypt_pw_salt[16]; /* Salt used for string2key algorithm */
struct f2fs_device devs[MAX_DEVICES]; /* device list */ struct f2fs_device devs[MAX_DEVICES]; /* device list */
__le32 qf_ino[F2FS_MAX_QUOTAS]; /* quota inode numbers */ __le32 qf_ino[F2FS_MAX_QUOTAS]; /* quota inode numbers */
__u8 reserved[315]; /* valid reserved region */ __u8 hot_ext_count; /* # of hot file extension */
__u8 reserved[314]; /* valid reserved region */
} __packed; } __packed;
/* /*
* For checkpoint * For checkpoint
*/ */
#define CP_LARGE_NAT_BITMAP_FLAG 0x00000400
#define CP_NOCRC_RECOVERY_FLAG 0x00000200 #define CP_NOCRC_RECOVERY_FLAG 0x00000200
#define CP_TRIMMED_FLAG 0x00000100 #define CP_TRIMMED_FLAG 0x00000100
#define CP_NAT_BITS_FLAG 0x00000080 #define CP_NAT_BITS_FLAG 0x00000080
...@@ -303,6 +305,10 @@ struct f2fs_node { ...@@ -303,6 +305,10 @@ struct f2fs_node {
*/ */
#define NAT_ENTRY_PER_BLOCK (PAGE_SIZE / sizeof(struct f2fs_nat_entry)) #define NAT_ENTRY_PER_BLOCK (PAGE_SIZE / sizeof(struct f2fs_nat_entry))
#define NAT_ENTRY_BITMAP_SIZE ((NAT_ENTRY_PER_BLOCK + 7) / 8) #define NAT_ENTRY_BITMAP_SIZE ((NAT_ENTRY_PER_BLOCK + 7) / 8)
#define NAT_ENTRY_BITMAP_SIZE_ALIGNED \
((NAT_ENTRY_BITMAP_SIZE + BITS_PER_LONG - 1) / \
BITS_PER_LONG * BITS_PER_LONG)
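A worked example of the alignment, assuming the 9-byte packed f2fs_nat_entry below and BITS_PER_LONG == 64:

/* NAT_ENTRY_PER_BLOCK           = 4096 / 9            = 455
 * NAT_ENTRY_BITMAP_SIZE         = (455 + 7) / 8       = 57 bytes
 * NAT_ENTRY_BITMAP_SIZE_ALIGNED = (57 + 63) / 64 * 64 = 64 bytes
 * i.e. each per-NAT-block free nid bitmap is padded to a whole number of
 * unsigned longs, which is what the word-wide bitops expect.
 */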
struct f2fs_nat_entry { struct f2fs_nat_entry {
__u8 version; /* latest version of cached nat entry */ __u8 version; /* latest version of cached nat entry */
......