Commit 274c0e74 authored by Linus Torvalds

Merge tag 'f2fs-for-4.17' of git://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs

Pull f2fs update from Jaegeuk Kim:
 "In this round, we've mainly focused on performance tuning and critical
  bug fixes found in low-end devices. Sheng Yong introduced the
  lost_found feature to keep missing files during recovery instead of
  trashing them. We're preparing for the coming fsverity implementation.
  And we've got more features to communicate with users for better
  performance. In low-end devices, some memory-related issues were
  fixed, and subtle race conditions and corner cases were addressed as
  well.

  Enhancements:
   - large nat bitmaps for more free node ids
   - add three block allocation policies to pass down write hints given by user
   - expose extension list to user and introduce hot file extension
   - tune small devices seamlessly for low-end devices
   - set readdir_ra by default
   - give more resources to discard and cleaning under gc_urgent mode
   - introduce fsync_mode to choose between posix and strict semantics
   - nowait aio support
   - add lost_found feature to keep dangling inodes
   - reserve bits for future fsverity feature
   - add test_dummy_encryption for FBE

  Bug fixes:
   - don't use highmem for dentry pages
   - align memory boundary for bitops
   - truncate preallocated blocks in write errors
   - guarantee i_times on fsync call
   - clear CP_TRIMMED_FLAG correctly
   - prevent node chain loop during recovery
   - avoid data race between atomic write and background cleaning
   - avoid unnecessary selinux violation warnings on resgid option
   - GFP_NOFS to avoid deadlock in quota and read paths
   - fix f2fs_skip_inode_update to allow i_size recovery

  In addition to the above, there are several minor bug fixes and clean-ups"

* tag 'f2fs-for-4.17' of git://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs: (50 commits)
  f2fs: remain written times to update inode during fsync
  f2fs: make assignment of t->dentry_bitmap more readable
  f2fs: truncate preallocated blocks in error case
  f2fs: fix a wrong condition in f2fs_skip_inode_update
  f2fs: reserve bits for fs-verity
  f2fs: Add a segment type check in inplace write
  f2fs: no need to initialize zero value for GFP_F2FS_ZERO
  f2fs: don't track new nat entry in nat set
  f2fs: clean up with F2FS_BLK_ALIGN
  f2fs: check blkaddr more accuratly before issue a bio
  f2fs: Set GF_NOFS in read_cache_page_gfp while doing f2fs_quota_read
  f2fs: introduce a new mount option test_dummy_encryption
  f2fs: introduce F2FS_FEATURE_LOST_FOUND feature
  f2fs: release locks before return in f2fs_ioc_gc_range()
  f2fs: align memory boundary for bitops
  f2fs: remove unneeded set_cold_node()
  f2fs: add nowait aio support
  f2fs: wrap all options with f2fs_sb_info.mount_opt
  f2fs: Don't overwrite all types of node to keep node chain
  f2fs: introduce mount option for fsync mode
  ...
parents 052c220d 214c2461
diff --git a/Documentation/ABI/testing/sysfs-fs-f2fs b/Documentation/ABI/testing/sysfs-fs-f2fs
@@ -192,3 +192,14 @@ Date:		November 2017
 Contact:	"Sheng Yong" <shengyong1@huawei.com>
 Description:
 		Controls readahead inode block in readdir.
+
+What:		/sys/fs/f2fs/<disk>/extension_list
+Date:		February 2018
+Contact:	"Chao Yu" <yuchao0@huawei.com>
+Description:
+		Used to configure the extension list:
+		- Query: cat /sys/fs/f2fs/<disk>/extension_list
+		- Add: echo '[h/c]extension' > /sys/fs/f2fs/<disk>/extension_list
+		- Del: echo '[h/c]!extension' > /sys/fs/f2fs/<disk>/extension_list
+		- [h] means add/del hot file extension
+		- [c] means add/del cold file extension
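The `[h/c]extension` and `[h/c]!extension` command strings above are compact. As an illustration of the syntax only, here is a minimal user-space parser sketch; the struct and function names are invented for this example and are not part of the kernel's implementation, which does its own parsing on the sysfs store path.

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Parsed form of one extension_list command (hypothetical helper). */
struct ext_cmd {
	bool hot;           /* true for [h], false for [c] */
	bool del;           /* true when '!' requests deletion */
	const char *name;   /* the extension itself */
};

/* Parse "[h]mp4" or "[c]!log" into cmd; false on malformed input. */
static bool parse_ext_cmd(const char *s, struct ext_cmd *cmd)
{
	if (s[0] != '[' || (s[1] != 'h' && s[1] != 'c') || s[2] != ']')
		return false;
	cmd->hot = (s[1] == 'h');
	cmd->del = (s[3] == '!');
	cmd->name = s + (cmd->del ? 4 : 3);
	return *cmd->name != '\0';
}
```

So `echo '[h]mp4'` would add mp4 as a hot-file extension, and `echo '[c]!log'` would delete log from the cold-file extensions.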
diff --git a/Documentation/filesystems/f2fs.txt b/Documentation/filesystems/f2fs.txt
@@ -174,6 +174,23 @@ offgrpjquota		 Turn off group journalled quota.
 offprjjquota		 Turn off project journalled quota.
 quota			 Enable plain user disk quota accounting.
 noquota			 Disable all plain disk quota option.
+whint_mode=%s		 Control which write hints are passed down to block
+			 layer. This supports "off", "user-based", and
+			 "fs-based". In "off" mode (default), f2fs does not pass
+			 down hints. In "user-based" mode, f2fs tries to pass
+			 down hints given by users. And in "fs-based" mode, f2fs
+			 passes down hints with its policy.
+alloc_mode=%s		 Adjust block allocation policy, which supports "reuse"
+			 and "default".
+fsync_mode=%s		 Control the policy of fsync. Currently supports "posix"
+			 and "strict". In "posix" mode, which is the default,
+			 fsync will follow POSIX semantics and do a light
+			 operation to improve filesystem performance. In
+			 "strict" mode, fsync will be heavy and behave in line
+			 with xfs, ext4 and btrfs, where xfstest generic/342
+			 will pass, but performance will regress.
+test_dummy_encryption	 Enable dummy encryption, which provides a fake fscrypt
+			 context. The fake fscrypt context is used by xfstests.
 ================================================================================
 DEBUGFS ENTRIES
@@ -611,3 +628,63 @@ algorithm.
 In order to identify whether the data in the victim segment are valid or not,
 F2FS manages a bitmap. Each bit represents the validity of a block, and the
 bitmap is composed of a bit stream covering whole blocks in main area.
+
+Write-hint Policy
+-----------------
+
+1) whint_mode=off. F2FS only passes down WRITE_LIFE_NOT_SET.
+
+2) whint_mode=user-based. F2FS tries to pass down hints given by
+users.
+
+User                  F2FS                     Block
+----                  ----                     -----
+                      META                     WRITE_LIFE_NOT_SET
+                      HOT_NODE                 "
+                      WARM_NODE                "
+                      COLD_NODE                "
+*ioctl(COLD)          COLD_DATA                WRITE_LIFE_EXTREME
+*extension list       "                        "
+
+-- buffered io
+WRITE_LIFE_EXTREME    COLD_DATA                WRITE_LIFE_EXTREME
+WRITE_LIFE_SHORT      HOT_DATA                 WRITE_LIFE_SHORT
+WRITE_LIFE_NOT_SET    WARM_DATA                WRITE_LIFE_NOT_SET
+WRITE_LIFE_NONE       "                        "
+WRITE_LIFE_MEDIUM     "                        "
+WRITE_LIFE_LONG       "                        "
+
+-- direct io
+WRITE_LIFE_EXTREME    COLD_DATA                WRITE_LIFE_EXTREME
+WRITE_LIFE_SHORT      HOT_DATA                 WRITE_LIFE_SHORT
+WRITE_LIFE_NOT_SET    WARM_DATA                WRITE_LIFE_NOT_SET
+WRITE_LIFE_NONE       "                        WRITE_LIFE_NONE
+WRITE_LIFE_MEDIUM     "                        WRITE_LIFE_MEDIUM
+WRITE_LIFE_LONG       "                        WRITE_LIFE_LONG
+
+3) whint_mode=fs-based. F2FS passes down hints with its policy.
+
+User                  F2FS                     Block
+----                  ----                     -----
+                      META                     WRITE_LIFE_MEDIUM;
+                      HOT_NODE                 WRITE_LIFE_NOT_SET
+                      WARM_NODE                "
+                      COLD_NODE                WRITE_LIFE_NONE
+ioctl(COLD)           COLD_DATA                WRITE_LIFE_EXTREME
+extension list        "                        "
+
+-- buffered io
+WRITE_LIFE_EXTREME    COLD_DATA                WRITE_LIFE_EXTREME
+WRITE_LIFE_SHORT      HOT_DATA                 WRITE_LIFE_SHORT
+WRITE_LIFE_NOT_SET    WARM_DATA                WRITE_LIFE_LONG
+WRITE_LIFE_NONE       "                        "
+WRITE_LIFE_MEDIUM     "                        "
+WRITE_LIFE_LONG       "                        "
+
+-- direct io
+WRITE_LIFE_EXTREME    COLD_DATA                WRITE_LIFE_EXTREME
+WRITE_LIFE_SHORT      HOT_DATA                 WRITE_LIFE_SHORT
+WRITE_LIFE_NOT_SET    WARM_DATA                WRITE_LIFE_NOT_SET
+WRITE_LIFE_NONE       "                        WRITE_LIFE_NONE
+WRITE_LIFE_MEDIUM     "                        WRITE_LIFE_MEDIUM
+WRITE_LIFE_LONG       "                        WRITE_LIFE_LONG
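For data pages, the user-based tables above reduce to a small clamp: WRITE_LIFE_EXTREME and WRITE_LIFE_SHORT pass through, and everything else lands in WARM_DATA, where buffered io collapses the hint to WRITE_LIFE_NOT_SET while direct io forwards it unchanged. A hedged user-space sketch of that reading; the local enum and the helper name are stand-ins for the kernel's internals, not its actual code:

```c
#include <assert.h>
#include <stdbool.h>

/* Local stand-ins for the kernel's enum rw_hint values (sketch only). */
enum rw_hint {
	WRITE_LIFE_NOT_SET,
	WRITE_LIFE_NONE,
	WRITE_LIFE_SHORT,
	WRITE_LIFE_MEDIUM,
	WRITE_LIFE_LONG,
	WRITE_LIFE_EXTREME,
};

/*
 * whint_mode=user-based, data pages: EXTREME (COLD_DATA) and SHORT
 * (HOT_DATA) are honored as-is; every other hint maps the page to
 * WARM_DATA, where buffered io sends WRITE_LIFE_NOT_SET down while
 * direct io forwards the user's hint unchanged (see tables above).
 */
static enum rw_hint user_based_data_hint(enum rw_hint user_hint, bool direct_io)
{
	switch (user_hint) {
	case WRITE_LIFE_EXTREME:
	case WRITE_LIFE_SHORT:
		return user_hint;
	default:
		return direct_io ? user_hint : WRITE_LIFE_NOT_SET;
	}
}
```

In fs-based mode the mapping differs mainly in that WARM_DATA buffered writes go down as WRITE_LIFE_LONG and node/meta writes get their own fixed hints, per the tables above.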
diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c
@@ -68,6 +68,7 @@ static struct page *__get_meta_page(struct f2fs_sb_info *sbi, pgoff_t index,
 		.old_blkaddr = index,
 		.new_blkaddr = index,
 		.encrypted_page = NULL,
+		.is_meta = is_meta,
 	};
 
 	if (unlikely(!is_meta))
@@ -162,6 +163,7 @@ int ra_meta_pages(struct f2fs_sb_info *sbi, block_t start, int nrpages,
 		.op_flags = sync ? (REQ_META | REQ_PRIO) : REQ_RAHEAD,
 		.encrypted_page = NULL,
 		.in_list = false,
+		.is_meta = (type != META_POR),
 	};
 	struct blk_plug plug;
@@ -569,13 +571,8 @@ static int recover_orphan_inode(struct f2fs_sb_info *sbi, nid_t ino)
 	struct node_info ni;
 	int err = acquire_orphan_inode(sbi);
 
-	if (err) {
-		set_sbi_flag(sbi, SBI_NEED_FSCK);
-		f2fs_msg(sbi->sb, KERN_WARNING,
-			"%s: orphan failed (ino=%x), run fsck to fix.",
-			__func__, ino);
-		return err;
-	}
+	if (err)
+		goto err_out;
 
 	__add_ino_entry(sbi, ino, 0, ORPHAN_INO);
@@ -589,6 +586,11 @@ static int recover_orphan_inode(struct f2fs_sb_info *sbi, nid_t ino)
 			return PTR_ERR(inode);
 		}
 
-		dquot_initialize(inode);
+		err = dquot_initialize(inode);
+		if (err)
+			goto err_out;
+
 		clear_nlink(inode);
 
 		/* truncate all the data during iput */
@@ -598,14 +600,18 @@ static int recover_orphan_inode(struct f2fs_sb_info *sbi, nid_t ino)
 	/* ENOMEM was fully retried in f2fs_evict_inode. */
 	if (ni.blk_addr != NULL_ADDR) {
-		set_sbi_flag(sbi, SBI_NEED_FSCK);
-		f2fs_msg(sbi->sb, KERN_WARNING,
-			"%s: orphan failed (ino=%x) by kernel, retry mount.",
-			__func__, ino);
-		return -EIO;
+		err = -EIO;
+		goto err_out;
 	}
 
 	__remove_ino_entry(sbi, ino, ORPHAN_INO);
 	return 0;
+
+err_out:
+	set_sbi_flag(sbi, SBI_NEED_FSCK);
+	f2fs_msg(sbi->sb, KERN_WARNING,
+			"%s: orphan failed (ino=%x), run fsck to fix.",
+			__func__, ino);
+	return err;
 }
 
 int recover_orphan_inodes(struct f2fs_sb_info *sbi)
@@ -1136,6 +1142,8 @@ static void update_ckpt_flags(struct f2fs_sb_info *sbi, struct cp_control *cpc)
 
 	if (cpc->reason & CP_TRIMMED)
 		__set_ckpt_flags(ckpt, CP_TRIMMED_FLAG);
+	else
+		__clear_ckpt_flags(ckpt, CP_TRIMMED_FLAG);
 
 	if (cpc->reason & CP_UMOUNT)
 		__set_ckpt_flags(ckpt, CP_UMOUNT_FLAG);
@@ -1162,6 +1170,39 @@ static void update_ckpt_flags(struct f2fs_sb_info *sbi, struct cp_control *cpc)
 	spin_unlock_irqrestore(&sbi->cp_lock, flags);
 }
 
+static void commit_checkpoint(struct f2fs_sb_info *sbi,
+	void *src, block_t blk_addr)
+{
+	struct writeback_control wbc = {
+		.for_reclaim = 0,
+	};
+
+	/*
+	 * pagevec_lookup_tag and lock_page again will take
+	 * some extra time. Therefore, update_meta_pages and
+	 * sync_meta_pages are combined in this function.
+	 */
+	struct page *page = grab_meta_page(sbi, blk_addr);
+	int err;
+
+	memcpy(page_address(page), src, PAGE_SIZE);
+	set_page_dirty(page);
+
+	f2fs_wait_on_page_writeback(page, META, true);
+	f2fs_bug_on(sbi, PageWriteback(page));
+	if (unlikely(!clear_page_dirty_for_io(page)))
+		f2fs_bug_on(sbi, 1);
+
+	/* writeout cp pack 2 page */
+	err = __f2fs_write_meta_page(page, &wbc, FS_CP_META_IO);
+	f2fs_bug_on(sbi, err);
+
+	f2fs_put_page(page, 0);
+
+	/* submit checkpoint (with barrier if NOBARRIER is not set) */
+	f2fs_submit_merged_write(sbi, META_FLUSH);
+}
+
 static int do_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc)
 {
 	struct f2fs_checkpoint *ckpt = F2FS_CKPT(sbi);
@@ -1264,16 +1305,6 @@ static int do_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc)
 		}
 	}
 
-	/* need to wait for end_io results */
-	wait_on_all_pages_writeback(sbi);
-	if (unlikely(f2fs_cp_error(sbi)))
-		return -EIO;
-
-	/* flush all device cache */
-	err = f2fs_flush_device_cache(sbi);
-	if (err)
-		return err;
-
 	/* write out checkpoint buffer at block 0 */
 	update_meta_page(sbi, ckpt, start_blk++);
@@ -1301,26 +1332,26 @@ static int do_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc)
 		start_blk += NR_CURSEG_NODE_TYPE;
 	}
 
-	/* writeout checkpoint block */
-	update_meta_page(sbi, ckpt, start_blk);
+	/* update user_block_counts */
+	sbi->last_valid_block_count = sbi->total_valid_block_count;
+	percpu_counter_set(&sbi->alloc_valid_block_count, 0);
+
+	/* Here, we have one bio having CP pack except cp pack 2 page */
+	sync_meta_pages(sbi, META, LONG_MAX, FS_CP_META_IO);
 
-	/* wait for previous submitted node/meta pages writeback */
+	/* wait for previous submitted meta pages writeback */
 	wait_on_all_pages_writeback(sbi);
 
 	if (unlikely(f2fs_cp_error(sbi)))
 		return -EIO;
 
-	filemap_fdatawait_range(NODE_MAPPING(sbi), 0, LLONG_MAX);
-	filemap_fdatawait_range(META_MAPPING(sbi), 0, LLONG_MAX);
-
-	/* update user_block_counts */
-	sbi->last_valid_block_count = sbi->total_valid_block_count;
-	percpu_counter_set(&sbi->alloc_valid_block_count, 0);
-
-	/* Here, we only have one bio having CP pack */
-	sync_meta_pages(sbi, META_FLUSH, LONG_MAX, FS_CP_META_IO);
+	/* flush all device cache */
+	err = f2fs_flush_device_cache(sbi);
+	if (err)
+		return err;
 
-	/* wait for previous submitted meta pages writeback */
+	/* barrier and flush checkpoint cp pack 2 page if it can */
+	commit_checkpoint(sbi, ckpt, start_blk);
 	wait_on_all_pages_writeback(sbi);
 
 	release_ino_entry(sbi, false);
diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
@@ -175,15 +175,22 @@ static bool __same_bdev(struct f2fs_sb_info *sbi,
  */
 static struct bio *__bio_alloc(struct f2fs_sb_info *sbi, block_t blk_addr,
 				struct writeback_control *wbc,
-				int npages, bool is_read)
+				int npages, bool is_read,
+				enum page_type type, enum temp_type temp)
 {
 	struct bio *bio;
 
 	bio = f2fs_bio_alloc(sbi, npages, true);
 
 	f2fs_target_device(sbi, blk_addr, bio);
-	bio->bi_end_io = is_read ? f2fs_read_end_io : f2fs_write_end_io;
-	bio->bi_private = is_read ? NULL : sbi;
+	if (is_read) {
+		bio->bi_end_io = f2fs_read_end_io;
+		bio->bi_private = NULL;
+	} else {
+		bio->bi_end_io = f2fs_write_end_io;
+		bio->bi_private = sbi;
+		bio->bi_write_hint = io_type_to_rw_hint(sbi, type, temp);
+	}
 	if (wbc)
 		wbc_init_bio(wbc, bio);
@@ -196,13 +203,12 @@ static inline void __submit_bio(struct f2fs_sb_info *sbi,
 	if (!is_read_io(bio_op(bio))) {
 		unsigned int start;
 
-		if (f2fs_sb_mounted_blkzoned(sbi->sb) &&
-			current->plug && (type == DATA || type == NODE))
-			blk_finish_plug(current->plug);
-
 		if (type != DATA && type != NODE)
 			goto submit_io;
 
+		if (f2fs_sb_has_blkzoned(sbi->sb) && current->plug)
+			blk_finish_plug(current->plug);
+
 		start = bio->bi_iter.bi_size >> F2FS_BLKSIZE_BITS;
 		start %= F2FS_IO_SIZE(sbi);
@@ -377,12 +383,13 @@ int f2fs_submit_page_bio(struct f2fs_io_info *fio)
 	struct page *page = fio->encrypted_page ?
 			fio->encrypted_page : fio->page;
 
+	verify_block_addr(fio, fio->new_blkaddr);
+
 	trace_f2fs_submit_page_bio(page, fio);
 	f2fs_trace_ios(fio, 0);
 
 	/* Allocate a new bio */
 	bio = __bio_alloc(fio->sbi, fio->new_blkaddr, fio->io_wbc,
-				1, is_read_io(fio->op));
+				1, is_read_io(fio->op), fio->type, fio->temp);
 
 	if (bio_add_page(bio, page, PAGE_SIZE, 0) < PAGE_SIZE) {
 		bio_put(bio);
@@ -422,8 +429,8 @@ int f2fs_submit_page_write(struct f2fs_io_info *fio)
 	}
 
 	if (fio->old_blkaddr != NEW_ADDR)
-		verify_block_addr(sbi, fio->old_blkaddr);
-	verify_block_addr(sbi, fio->new_blkaddr);
+		verify_block_addr(fio, fio->old_blkaddr);
+	verify_block_addr(fio, fio->new_blkaddr);
 
 	bio_page = fio->encrypted_page ? fio->encrypted_page : fio->page;
@@ -445,7 +452,8 @@ int f2fs_submit_page_write(struct f2fs_io_info *fio)
 			goto out_fail;
 		}
 		io->bio = __bio_alloc(sbi, fio->new_blkaddr, fio->io_wbc,
-						BIO_MAX_PAGES, false);
+						BIO_MAX_PAGES, false,
+						fio->type, fio->temp);
 		io->fio = *fio;
 	}
@@ -832,13 +840,6 @@ static int __allocate_data_block(struct dnode_of_data *dn, int seg_type)
 	return 0;
 }
 
-static inline bool __force_buffered_io(struct inode *inode, int rw)
-{
-	return (f2fs_encrypted_file(inode) ||
-			(rw == WRITE && test_opt(F2FS_I_SB(inode), LFS)) ||
-			F2FS_I_SB(inode)->s_ndevs);
-}
-
 int f2fs_preallocate_blocks(struct kiocb *iocb, struct iov_iter *from)
 {
 	struct inode *inode = file_inode(iocb->ki_filp);
@@ -870,7 +871,7 @@ int f2fs_preallocate_blocks(struct kiocb *iocb, struct iov_iter *from)
 	if (direct_io) {
 		map.m_seg_type = rw_hint_to_seg_type(iocb->ki_hint);
-		flag = __force_buffered_io(inode, WRITE) ?
+		flag = f2fs_force_buffered_io(inode, WRITE) ?
 				F2FS_GET_BLOCK_PRE_AIO :
 				F2FS_GET_BLOCK_PRE_DIO;
 		goto map_blocks;
@@ -1114,6 +1115,31 @@ int f2fs_map_blocks(struct inode *inode, struct f2fs_map_blocks *map,
 	return err;
 }
 
+bool f2fs_overwrite_io(struct inode *inode, loff_t pos, size_t len)
+{
+	struct f2fs_map_blocks map;
+	block_t last_lblk;
+	int err;
+
+	if (pos + len > i_size_read(inode))
+		return false;
+
+	map.m_lblk = F2FS_BYTES_TO_BLK(pos);
+	map.m_next_pgofs = NULL;
+	map.m_next_extent = NULL;
+	map.m_seg_type = NO_CHECK_TYPE;
+	last_lblk = F2FS_BLK_ALIGN(pos + len);
+
+	while (map.m_lblk < last_lblk) {
+		map.m_len = last_lblk - map.m_lblk;
+		err = f2fs_map_blocks(inode, &map, 0, F2FS_GET_BLOCK_DEFAULT);
+		if (err || map.m_len == 0)
+			return false;
+		map.m_lblk += map.m_len;
+	}
+	return true;
+}
+
 static int __get_data_block(struct inode *inode, sector_t iblock,
 			struct buffer_head *bh, int create, int flag,
 			pgoff_t *next_pgofs, int seg_type)
@@ -2287,25 +2313,41 @@ static ssize_t f2fs_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
 {
 	struct address_space *mapping = iocb->ki_filp->f_mapping;
 	struct inode *inode = mapping->host;
+	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
 	size_t count = iov_iter_count(iter);
 	loff_t offset = iocb->ki_pos;
 	int rw = iov_iter_rw(iter);
 	int err;
+	enum rw_hint hint = iocb->ki_hint;
+	int whint_mode = F2FS_OPTION(sbi).whint_mode;
 
 	err = check_direct_IO(inode, iter, offset);
 	if (err)
 		return err;
 
-	if (__force_buffered_io(inode, rw))
+	if (f2fs_force_buffered_io(inode, rw))
 		return 0;
 
 	trace_f2fs_direct_IO_enter(inode, offset, count, rw);
 
+	if (rw == WRITE && whint_mode == WHINT_MODE_OFF)
+		iocb->ki_hint = WRITE_LIFE_NOT_SET;
+
+	if (!down_read_trylock(&F2FS_I(inode)->dio_rwsem[rw])) {
+		if (iocb->ki_flags & IOCB_NOWAIT) {
+			iocb->ki_hint = hint;
+			err = -EAGAIN;
+			goto out;
+		}
-	down_read(&F2FS_I(inode)->dio_rwsem[rw]);
+		down_read(&F2FS_I(inode)->dio_rwsem[rw]);
+	}
+
 	err = blockdev_direct_IO(iocb, inode, iter, get_data_block_dio);
 	up_read(&F2FS_I(inode)->dio_rwsem[rw]);
 
 	if (rw == WRITE) {
+		if (whint_mode == WHINT_MODE_OFF)
+			iocb->ki_hint = hint;
 		if (err > 0) {
 			f2fs_update_iostat(F2FS_I_SB(inode), APP_DIRECT_IO,
 									err);
@@ -2315,6 +2357,7 @@ static ssize_t f2fs_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
 		}
 	}
 
+out:
 	trace_f2fs_direct_IO_exit(inode, offset, count, rw, err);
 
 	return err;
diff --git a/fs/f2fs/dir.c b/fs/f2fs/dir.c
@@ -94,14 +94,12 @@ static struct f2fs_dir_entry *find_in_block(struct page *dentry_page,
 	struct f2fs_dir_entry *de;
 	struct f2fs_dentry_ptr d;
 
-	dentry_blk = (struct f2fs_dentry_block *)kmap(dentry_page);
+	dentry_blk = (struct f2fs_dentry_block *)page_address(dentry_page);
 
 	make_dentry_ptr_block(NULL, &d, dentry_blk);
 	de = find_target_dentry(fname, namehash, max_slots, &d);
 	if (de)
 		*res_page = dentry_page;
-	else
-		kunmap(dentry_page);
 
 	return de;
 }
@@ -287,7 +285,6 @@ ino_t f2fs_inode_by_name(struct inode *dir, const struct qstr *qstr,
 	de = f2fs_find_entry(dir, qstr, page);
 	if (de) {
 		res = le32_to_cpu(de->ino);
-		f2fs_dentry_kunmap(dir, *page);
 		f2fs_put_page(*page, 0);
 	}
@@ -302,7 +299,6 @@ void f2fs_set_link(struct inode *dir, struct f2fs_dir_entry *de,
 	f2fs_wait_on_page_writeback(page, type, true);
 	de->ino = cpu_to_le32(inode->i_ino);
 	set_de_type(de, inode->i_mode);
-	f2fs_dentry_kunmap(dir, page);
 	set_page_dirty(page);
 
 	dir->i_mtime = dir->i_ctime = current_time(dir);
@@ -350,13 +346,11 @@ static int make_empty_dir(struct inode *inode,
 	if (IS_ERR(dentry_page))
 		return PTR_ERR(dentry_page);
 
-	dentry_blk = kmap_atomic(dentry_page);
+	dentry_blk = page_address(dentry_page);
 
 	make_dentry_ptr_block(NULL, &d, dentry_blk);
 	do_make_empty_dir(inode, parent, &d);
 
-	kunmap_atomic(dentry_blk);
-
 	set_page_dirty(dentry_page);
 	f2fs_put_page(dentry_page, 1);
 	return 0;
@@ -367,6 +361,7 @@ struct page *init_inode_metadata(struct inode *inode, struct inode *dir,
 			struct page *dpage)
 {
 	struct page *page;
+	int dummy_encrypt = DUMMY_ENCRYPTION_ENABLED(F2FS_I_SB(dir));
 	int err;
 
 	if (is_inode_flag_set(inode, FI_NEW_INODE)) {
@@ -393,7 +388,8 @@ struct page *init_inode_metadata(struct inode *inode, struct inode *dir,
 		if (err)
 			goto put_error;
 
-		if (f2fs_encrypted_inode(dir) && f2fs_may_encrypt(inode)) {
+		if ((f2fs_encrypted_inode(dir) || dummy_encrypt) &&
+					f2fs_may_encrypt(inode)) {
 			err = fscrypt_inherit_context(dir, inode, page, false);
 			if (err)
 				goto put_error;
@@ -402,8 +398,6 @@ struct page *init_inode_metadata(struct inode *inode, struct inode *dir,
 		page = get_node_page(F2FS_I_SB(dir), inode->i_ino);
 		if (IS_ERR(page))
 			return page;
-
-		set_cold_node(inode, page);
 	}
 
 	if (new_name) {
@@ -547,13 +541,12 @@ int f2fs_add_regular_entry(struct inode *dir, const struct qstr *new_name,
 		if (IS_ERR(dentry_page))
 			return PTR_ERR(dentry_page);
 
-		dentry_blk = kmap(dentry_page);
+		dentry_blk = page_address(dentry_page);
 		bit_pos = room_for_filename(&dentry_blk->dentry_bitmap,
 						slots, NR_DENTRY_IN_BLOCK);
 		if (bit_pos < NR_DENTRY_IN_BLOCK)
 			goto add_dentry;
 
-		kunmap(dentry_page);
 		f2fs_put_page(dentry_page, 1);
 	}
@@ -588,7 +581,6 @@ int f2fs_add_regular_entry(struct inode *dir, const struct qstr *new_name,
 	if (inode)
 		up_write(&F2FS_I(inode)->i_sem);
 
-	kunmap(dentry_page);
 	f2fs_put_page(dentry_page, 1);
 
 	return err;
@@ -642,7 +634,6 @@ int __f2fs_add_link(struct inode *dir, const struct qstr *name,
 		F2FS_I(dir)->task = NULL;
 	}
 	if (de) {
-		f2fs_dentry_kunmap(dir, page);
 		f2fs_put_page(page, 0);
 		err = -EEXIST;
 	} else if (IS_ERR(page)) {
@@ -713,6 +704,7 @@ void f2fs_delete_entry(struct f2fs_dir_entry *dentry, struct page *page,
 	f2fs_update_time(F2FS_I_SB(dir), REQ_TIME);
 
-	add_ino_entry(F2FS_I_SB(dir), dir->i_ino, TRANS_DIR_INO);
+	if (F2FS_OPTION(F2FS_I_SB(dir)).fsync_mode == FSYNC_MODE_STRICT)
+		add_ino_entry(F2FS_I_SB(dir), dir->i_ino, TRANS_DIR_INO);
 
 	if (f2fs_has_inline_dentry(dir))
@@ -730,7 +722,6 @@ void f2fs_delete_entry(struct f2fs_dir_entry *dentry, struct page *page,
 	bit_pos = find_next_bit_le(&dentry_blk->dentry_bitmap,
 			NR_DENTRY_IN_BLOCK,
 			0);
-	kunmap(page); /* kunmap - pair of f2fs_find_entry */
 
 	set_page_dirty(page);
 
 	dir->i_ctime = dir->i_mtime = current_time(dir);
@@ -775,7 +766,7 @@ bool f2fs_empty_dir(struct inode *dir)
 			return false;
 		}
 
-		dentry_blk = kmap_atomic(dentry_page);
+		dentry_blk = page_address(dentry_page);
 		if (bidx == 0)
 			bit_pos = 2;
 		else
@@ -783,7 +774,6 @@ bool f2fs_empty_dir(struct inode *dir)
 		bit_pos = find_next_bit_le(&dentry_blk->dentry_bitmap,
 						NR_DENTRY_IN_BLOCK,
 						bit_pos);
-		kunmap_atomic(dentry_blk);
 
 		f2fs_put_page(dentry_page, 1);
@@ -901,19 +891,17 @@ static int f2fs_readdir(struct file *file, struct dir_context *ctx)
 			}
 		}
 
-		dentry_blk = kmap(dentry_page);
+		dentry_blk = page_address(dentry_page);
 
 		make_dentry_ptr_block(inode, &d, dentry_blk);
 
 		err = f2fs_fill_dentries(ctx, &d,
 				n * NR_DENTRY_IN_BLOCK, &fstr);
 		if (err) {
-			kunmap(dentry_page);
 			f2fs_put_page(dentry_page, 1);
 			break;
 		}
 
-		kunmap(dentry_page);
 		f2fs_put_page(dentry_page, 1);
 	}
 out_free:
diff --git a/fs/f2fs/extent_cache.c b/fs/f2fs/extent_cache.c
@@ -460,7 +460,7 @@ static struct extent_node *__insert_extent_tree(struct inode *inode,
 				struct rb_node *insert_parent)
 {
 	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
-	struct rb_node **p = &et->root.rb_node;
+	struct rb_node **p;
 	struct rb_node *parent = NULL;
 	struct extent_node *en = NULL;
@@ -706,6 +706,9 @@ void f2fs_drop_extent_tree(struct inode *inode)
 	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
 	struct extent_tree *et = F2FS_I(inode)->extent_tree;
 
+	if (!f2fs_may_extent_tree(inode))
+		return;
+
 	set_inode_flag(inode, FI_NO_EXTENT);
 
 	write_lock(&et->lock);
...@@ -98,9 +98,10 @@ extern char *fault_name[FAULT_MAX]; ...@@ -98,9 +98,10 @@ extern char *fault_name[FAULT_MAX];
#define F2FS_MOUNT_INLINE_XATTR_SIZE 0x00800000 #define F2FS_MOUNT_INLINE_XATTR_SIZE 0x00800000
#define F2FS_MOUNT_RESERVE_ROOT 0x01000000 #define F2FS_MOUNT_RESERVE_ROOT 0x01000000
#define clear_opt(sbi, option) ((sbi)->mount_opt.opt &= ~F2FS_MOUNT_##option) #define F2FS_OPTION(sbi) ((sbi)->mount_opt)
#define set_opt(sbi, option) ((sbi)->mount_opt.opt |= F2FS_MOUNT_##option) #define clear_opt(sbi, option) (F2FS_OPTION(sbi).opt &= ~F2FS_MOUNT_##option)
#define test_opt(sbi, option) ((sbi)->mount_opt.opt & F2FS_MOUNT_##option) #define set_opt(sbi, option) (F2FS_OPTION(sbi).opt |= F2FS_MOUNT_##option)
#define test_opt(sbi, option) (F2FS_OPTION(sbi).opt & F2FS_MOUNT_##option)
#define ver_after(a, b) (typecheck(unsigned long long, a) && \ #define ver_after(a, b) (typecheck(unsigned long long, a) && \
typecheck(unsigned long long, b) && \ typecheck(unsigned long long, b) && \
...@@ -114,6 +115,25 @@ typedef u32 nid_t; ...@@ -114,6 +115,25 @@ typedef u32 nid_t;
struct f2fs_mount_info { struct f2fs_mount_info {
unsigned int opt; unsigned int opt;
int write_io_size_bits; /* Write IO size bits */
block_t root_reserved_blocks; /* root reserved blocks */
kuid_t s_resuid; /* reserved blocks for uid */
kgid_t s_resgid; /* reserved blocks for gid */
int active_logs; /* # of active logs */
int inline_xattr_size; /* inline xattr size */
#ifdef CONFIG_F2FS_FAULT_INJECTION
struct f2fs_fault_info fault_info; /* For fault injection */
#endif
#ifdef CONFIG_QUOTA
/* Names of quota files with journalled quota */
char *s_qf_names[MAXQUOTAS];
int s_jquota_fmt; /* Format of quota to use */
#endif
/* For which write hints are passed down to block layer */
int whint_mode;
int alloc_mode; /* segment allocation policy */
int fsync_mode; /* fsync policy */
bool test_dummy_encryption; /* test dummy encryption */
}; };
 #define F2FS_FEATURE_ENCRYPT		0x0001
@@ -125,6 +145,8 @@ struct f2fs_mount_info {
 #define F2FS_FEATURE_FLEXIBLE_INLINE_XATTR	0x0040
 #define F2FS_FEATURE_QUOTA_INO		0x0080
 #define F2FS_FEATURE_INODE_CRTIME	0x0100
+#define F2FS_FEATURE_LOST_FOUND		0x0200
+#define F2FS_FEATURE_VERITY		0x0400	/* reserved */
 
 #define F2FS_HAS_FEATURE(sb, mask)				\
 	((F2FS_SB(sb)->raw_super->feature & cpu_to_le32(mask)) != 0)
@@ -450,7 +472,7 @@ static inline void make_dentry_ptr_block(struct inode *inode,
 	d->inode = inode;
 	d->max = NR_DENTRY_IN_BLOCK;
 	d->nr_bitmap = SIZE_OF_DENTRY_BITMAP;
-	d->bitmap = &t->dentry_bitmap;
+	d->bitmap = t->dentry_bitmap;
 	d->dentry = t->dentry;
 	d->filename = t->filename;
 }
@@ -576,6 +598,8 @@ enum {
 #define FADVISE_ENCRYPT_BIT	0x04
 #define FADVISE_ENC_NAME_BIT	0x08
 #define FADVISE_KEEP_SIZE_BIT	0x10
+#define FADVISE_HOT_BIT		0x20
+#define FADVISE_VERITY_BIT	0x40	/* reserved */
 
 #define file_is_cold(inode)	is_file(inode, FADVISE_COLD_BIT)
 #define file_wrong_pino(inode)	is_file(inode, FADVISE_LOST_PINO_BIT)
@@ -590,6 +614,9 @@ enum {
 #define file_set_enc_name(inode) set_file(inode, FADVISE_ENC_NAME_BIT)
 #define file_keep_isize(inode)	is_file(inode, FADVISE_KEEP_SIZE_BIT)
 #define file_set_keep_isize(inode) set_file(inode, FADVISE_KEEP_SIZE_BIT)
+#define file_is_hot(inode)	is_file(inode, FADVISE_HOT_BIT)
+#define file_set_hot(inode)	set_file(inode, FADVISE_HOT_BIT)
+#define file_clear_hot(inode)	clear_file(inode, FADVISE_HOT_BIT)
 
 #define DEF_DIR_LEVEL		0
@@ -637,6 +664,7 @@ struct f2fs_inode_info {
 	kprojid_t i_projid;		/* id for project quota */
 	int i_inline_xattr_size;	/* inline xattr size */
 	struct timespec i_crtime;	/* inode creation time */
+	struct timespec i_disk_time[4];	/* inode disk times */
 };
 
 static inline void get_extent_info(struct extent_info *ext,
@@ -743,7 +771,7 @@ struct f2fs_nm_info {
 	unsigned int nid_cnt[MAX_NID_STATE];	/* the number of free node id */
 	spinlock_t nid_list_lock;	/* protect nid lists ops */
 	struct mutex build_lock;	/* lock for build free nids */
-	unsigned char (*free_nid_bitmap)[NAT_ENTRY_BITMAP_SIZE];
+	unsigned char **free_nid_bitmap;
 	unsigned char *nat_block_bitmap;
 	unsigned short *free_nid_count;	/* free nid count of NAT block */
@@ -976,6 +1004,7 @@ struct f2fs_io_info {
 	bool submitted;		/* indicate IO submission */
 	int need_lock;		/* indicate we need to lock cp_rwsem */
 	bool in_list;		/* indicate fio is in io_list */
+	bool is_meta;		/* indicate borrow meta inode mapping or not */
 	enum iostat_type io_type;	/* io type */
 	struct writeback_control *io_wbc; /* writeback control */
 };
@@ -1037,10 +1066,34 @@ enum {
 	MAX_TIME,
 };
 
+enum {
+	WHINT_MODE_OFF,		/* not pass down write hints */
+	WHINT_MODE_USER,	/* try to pass down hints given by users */
+	WHINT_MODE_FS,		/* pass down hints with F2FS policy */
+};
+
+enum {
+	ALLOC_MODE_DEFAULT,	/* stay default */
+	ALLOC_MODE_REUSE,	/* reuse segments as much as possible */
+};
+
+enum fsync_mode {
+	FSYNC_MODE_POSIX,	/* fsync follows posix semantics */
+	FSYNC_MODE_STRICT,	/* fsync behaves in line with ext4 */
+};
+
+#ifdef CONFIG_F2FS_FS_ENCRYPTION
+#define DUMMY_ENCRYPTION_ENABLED(sbi) \
+			(unlikely(F2FS_OPTION(sbi).test_dummy_encryption))
+#else
+#define DUMMY_ENCRYPTION_ENABLED(sbi) (0)
+#endif
+
 struct f2fs_sb_info {
 	struct super_block *sb;			/* pointer to VFS super block */
 	struct proc_dir_entry *s_proc;		/* proc entry */
 	struct f2fs_super_block *raw_super;	/* raw super block pointer */
+	struct rw_semaphore sb_lock;		/* lock for raw super block */
 	int valid_super_block;			/* valid super block no */
 	unsigned long s_flag;			/* flags for sbi */
@@ -1060,7 +1113,6 @@ struct f2fs_sb_info {
 	struct f2fs_bio_info *write_io[NR_PAGE_TYPE];	/* for write bios */
 	struct mutex wio_mutex[NR_PAGE_TYPE - 1][NR_TEMP_TYPE];
 						/* bio ordering for NODE/DATA */
-	int write_io_size_bits;			/* Write IO size bits */
 	mempool_t *write_io_dummy;		/* Dummy pages */
 
 	/* for checkpoint */
@@ -1110,9 +1162,7 @@ struct f2fs_sb_info {
 	unsigned int total_node_count;		/* total node block count */
 	unsigned int total_valid_node_count;	/* valid node block count */
 	loff_t max_file_blocks;			/* max block index of file */
-	int active_logs;			/* # of active logs */
 	int dir_level;				/* directory level */
-	int inline_xattr_size;			/* inline xattr size */
 	unsigned int trigger_ssr_threshold;	/* threshold to trigger ssr */
 	int readdir_ra;				/* readahead inode in readdir */
@@ -1122,9 +1172,6 @@ struct f2fs_sb_info {
 	block_t last_valid_block_count;		/* for recovery */
 	block_t reserved_blocks;		/* configurable reserved blocks */
 	block_t current_reserved_blocks;	/* current reserved blocks */
-	block_t root_reserved_blocks;		/* root reserved blocks */
-	kuid_t s_resuid;			/* reserved blocks for uid */
-	kgid_t s_resgid;			/* reserved blocks for gid */
 
 	unsigned int nquota_files;		/* # of quota sysfile */
@@ -1209,17 +1256,6 @@ struct f2fs_sb_info {
 	/* Precomputed FS UUID checksum for seeding other checksums */
 	__u32 s_chksum_seed;
-
-	/* For fault injection */
-#ifdef CONFIG_F2FS_FAULT_INJECTION
-	struct f2fs_fault_info fault_info;
-#endif
-
-#ifdef CONFIG_QUOTA
-	/* Names of quota files with journalled quota */
-	char *s_qf_names[MAXQUOTAS];
-	int s_jquota_fmt;			/* Format of quota to use */
-#endif
 };
 
 #ifdef CONFIG_F2FS_FAULT_INJECTION
@@ -1229,7 +1265,7 @@ struct f2fs_sb_info {
 		__func__, __builtin_return_address(0))
 static inline bool time_to_inject(struct f2fs_sb_info *sbi, int type)
 {
-	struct f2fs_fault_info *ffi = &sbi->fault_info;
+	struct f2fs_fault_info *ffi = &F2FS_OPTION(sbi).fault_info;
 
 	if (!ffi->inject_rate)
 		return false;
@@ -1586,12 +1622,12 @@ static inline bool __allow_reserved_blocks(struct f2fs_sb_info *sbi,
 		return false;
 	if (IS_NOQUOTA(inode))
 		return true;
-	if (capable(CAP_SYS_RESOURCE))
+	if (uid_eq(F2FS_OPTION(sbi).s_resuid, current_fsuid()))
 		return true;
-	if (uid_eq(sbi->s_resuid, current_fsuid()))
+	if (!gid_eq(F2FS_OPTION(sbi).s_resgid, GLOBAL_ROOT_GID) &&
+					in_group_p(F2FS_OPTION(sbi).s_resgid))
 		return true;
-	if (!gid_eq(sbi->s_resgid, GLOBAL_ROOT_GID) &&
-					in_group_p(sbi->s_resgid))
+	if (capable(CAP_SYS_RESOURCE))
 		return true;
 	return false;
 }
@@ -1627,7 +1663,7 @@ static inline int inc_valid_block_count(struct f2fs_sb_info *sbi,
 						sbi->current_reserved_blocks;
 
 	if (!__allow_reserved_blocks(sbi, inode))
-		avail_user_block_count -= sbi->root_reserved_blocks;
+		avail_user_block_count -= F2FS_OPTION(sbi).root_reserved_blocks;
 
 	if (unlikely(sbi->total_valid_block_count > avail_user_block_count)) {
 		diff = sbi->total_valid_block_count - avail_user_block_count;
@@ -1762,6 +1798,12 @@ static inline void *__bitmap_ptr(struct f2fs_sb_info *sbi, int flag)
 	struct f2fs_checkpoint *ckpt = F2FS_CKPT(sbi);
 	int offset;
 
+	if (is_set_ckpt_flags(sbi, CP_LARGE_NAT_BITMAP_FLAG)) {
+		offset = (flag == SIT_BITMAP) ?
+			le32_to_cpu(ckpt->nat_ver_bitmap_bytesize) : 0;
+		return &ckpt->sit_nat_version_bitmap + offset;
+	}
+
 	if (__cp_payload(sbi) > 0) {
 		if (flag == NAT_BITMAP)
 			return &ckpt->sit_nat_version_bitmap;
@@ -1828,7 +1870,7 @@ static inline int inc_valid_node_count(struct f2fs_sb_info *sbi,
 					sbi->current_reserved_blocks + 1;
 
 	if (!__allow_reserved_blocks(sbi, inode))
-		valid_block_count += sbi->root_reserved_blocks;
+		valid_block_count += F2FS_OPTION(sbi).root_reserved_blocks;
 
 	if (unlikely(valid_block_count > sbi->user_block_count)) {
 		spin_unlock(&sbi->stat_lock);
@@ -2399,12 +2441,6 @@ static inline int f2fs_has_inline_dentry(struct inode *inode)
 	return is_inode_flag_set(inode, FI_INLINE_DENTRY);
 }
 
-static inline void f2fs_dentry_kunmap(struct inode *dir, struct page *page)
-{
-	if (!f2fs_has_inline_dentry(dir))
-		kunmap(page);
-}
-
 static inline int is_file(struct inode *inode, int type)
 {
 	return F2FS_I(inode)->i_advise & type;
@@ -2436,7 +2472,17 @@ static inline bool f2fs_skip_inode_update(struct inode *inode, int dsync)
 	}
 	if (!is_inode_flag_set(inode, FI_AUTO_RECOVER) ||
 			file_keep_isize(inode) ||
-			i_size_read(inode) & PAGE_MASK)
+			i_size_read(inode) & ~PAGE_MASK)
+		return false;
+
+	if (!timespec_equal(F2FS_I(inode)->i_disk_time, &inode->i_atime))
+		return false;
+	if (!timespec_equal(F2FS_I(inode)->i_disk_time + 1, &inode->i_ctime))
+		return false;
+	if (!timespec_equal(F2FS_I(inode)->i_disk_time + 2, &inode->i_mtime))
+		return false;
+	if (!timespec_equal(F2FS_I(inode)->i_disk_time + 3,
+						&F2FS_I(inode)->i_crtime))
 		return false;
 
 	down_read(&F2FS_I(inode)->i_sem);
@@ -2446,9 +2492,9 @@ static inline bool f2fs_skip_inode_update(struct inode *inode, int dsync)
 	return ret;
 }
 
-static inline int f2fs_readonly(struct super_block *sb)
+static inline bool f2fs_readonly(struct super_block *sb)
 {
-	return sb->s_flags & SB_RDONLY;
+	return sb_rdonly(sb);
 }
 
 static inline bool f2fs_cp_error(struct f2fs_sb_info *sbi)
@@ -2596,6 +2642,8 @@ void handle_failed_inode(struct inode *inode);
 /*
  * namei.c
  */
+int update_extension_list(struct f2fs_sb_info *sbi, const char *name,
+							bool hot, bool set);
 struct dentry *f2fs_get_parent(struct dentry *child);
 
 /*
@@ -2768,6 +2816,8 @@ void destroy_segment_manager(struct f2fs_sb_info *sbi);
 int __init create_segment_manager_caches(void);
 void destroy_segment_manager_caches(void);
 int rw_hint_to_seg_type(enum rw_hint hint);
+enum rw_hint io_type_to_rw_hint(struct f2fs_sb_info *sbi, enum page_type type,
+				enum temp_type temp);
 
 /*
  * checkpoint.c
@@ -2850,6 +2900,7 @@ int f2fs_release_page(struct page *page, gfp_t wait);
 int f2fs_migrate_page(struct address_space *mapping, struct page *newpage,
 			struct page *page, enum migrate_mode mode);
 #endif
+bool f2fs_overwrite_io(struct inode *inode, loff_t pos, size_t len);
 
 /*
  * gc.c
@@ -3172,45 +3223,21 @@ static inline bool f2fs_bio_encrypted(struct bio *bio)
 	return bio->bi_private != NULL;
 }
 
-static inline int f2fs_sb_has_crypto(struct super_block *sb)
-{
-	return F2FS_HAS_FEATURE(sb, F2FS_FEATURE_ENCRYPT);
-}
-
-static inline int f2fs_sb_mounted_blkzoned(struct super_block *sb)
-{
-	return F2FS_HAS_FEATURE(sb, F2FS_FEATURE_BLKZONED);
-}
-
-static inline int f2fs_sb_has_extra_attr(struct super_block *sb)
-{
-	return F2FS_HAS_FEATURE(sb, F2FS_FEATURE_EXTRA_ATTR);
-}
-
-static inline int f2fs_sb_has_project_quota(struct super_block *sb)
-{
-	return F2FS_HAS_FEATURE(sb, F2FS_FEATURE_PRJQUOTA);
-}
-
-static inline int f2fs_sb_has_inode_chksum(struct super_block *sb)
-{
-	return F2FS_HAS_FEATURE(sb, F2FS_FEATURE_INODE_CHKSUM);
-}
-
-static inline int f2fs_sb_has_flexible_inline_xattr(struct super_block *sb)
-{
-	return F2FS_HAS_FEATURE(sb, F2FS_FEATURE_FLEXIBLE_INLINE_XATTR);
-}
-
-static inline int f2fs_sb_has_quota_ino(struct super_block *sb)
-{
-	return F2FS_HAS_FEATURE(sb, F2FS_FEATURE_QUOTA_INO);
-}
-
-static inline int f2fs_sb_has_inode_crtime(struct super_block *sb)
-{
-	return F2FS_HAS_FEATURE(sb, F2FS_FEATURE_INODE_CRTIME);
-}
+#define F2FS_FEATURE_FUNCS(name, flagname) \
+static inline int f2fs_sb_has_##name(struct super_block *sb) \
+{ \
+	return F2FS_HAS_FEATURE(sb, F2FS_FEATURE_##flagname); \
+}
+
+F2FS_FEATURE_FUNCS(encrypt, ENCRYPT);
+F2FS_FEATURE_FUNCS(blkzoned, BLKZONED);
+F2FS_FEATURE_FUNCS(extra_attr, EXTRA_ATTR);
+F2FS_FEATURE_FUNCS(project_quota, PRJQUOTA);
+F2FS_FEATURE_FUNCS(inode_chksum, INODE_CHKSUM);
+F2FS_FEATURE_FUNCS(flexible_inline_xattr, FLEXIBLE_INLINE_XATTR);
+F2FS_FEATURE_FUNCS(quota_ino, QUOTA_INO);
+F2FS_FEATURE_FUNCS(inode_crtime, INODE_CRTIME);
+F2FS_FEATURE_FUNCS(lost_found, LOST_FOUND);
 
 #ifdef CONFIG_BLK_DEV_ZONED
 static inline int get_blkz_type(struct f2fs_sb_info *sbi,
@@ -3230,7 +3257,7 @@ static inline bool f2fs_discard_en(struct f2fs_sb_info *sbi)
 {
 	struct request_queue *q = bdev_get_queue(sbi->sb->s_bdev);
 
-	return blk_queue_discard(q) || f2fs_sb_mounted_blkzoned(sbi->sb);
+	return blk_queue_discard(q) || f2fs_sb_has_blkzoned(sbi->sb);
 }
 
 static inline void set_opt_mode(struct f2fs_sb_info *sbi, unsigned int mt)
@@ -3259,4 +3286,11 @@ static inline bool f2fs_may_encrypt(struct inode *inode)
 #endif
 }
 
+static inline bool f2fs_force_buffered_io(struct inode *inode, int rw)
+{
+	return (f2fs_encrypted_file(inode) ||
+			(rw == WRITE && test_opt(F2FS_I_SB(inode), LFS)) ||
+			F2FS_I_SB(inode)->s_ndevs);
+}
+
 #endif
@@ -163,9 +163,10 @@ static inline enum cp_reason_type need_do_checkpoint(struct inode *inode)
 		cp_reason = CP_NODE_NEED_CP;
 	else if (test_opt(sbi, FASTBOOT))
 		cp_reason = CP_FASTBOOT_MODE;
-	else if (sbi->active_logs == 2)
+	else if (F2FS_OPTION(sbi).active_logs == 2)
 		cp_reason = CP_SPEC_LOG_NUM;
-	else if (need_dentry_mark(sbi, inode->i_ino) &&
+	else if (F2FS_OPTION(sbi).fsync_mode == FSYNC_MODE_STRICT &&
+		need_dentry_mark(sbi, inode->i_ino) &&
 		exist_written_data(sbi, F2FS_I(inode)->i_pino, TRANS_DIR_INO))
 		cp_reason = CP_RECOVER_DIR;
@@ -479,6 +480,9 @@ static int f2fs_file_open(struct inode *inode, struct file *filp)
 	if (err)
 		return err;
+
+	filp->f_mode |= FMODE_NOWAIT;
+
 	return dquot_file_open(inode, filp);
 }
@@ -569,7 +573,6 @@ static int truncate_partial_data_page(struct inode *inode, u64 from,
 int truncate_blocks(struct inode *inode, u64 from, bool lock)
 {
 	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
-	unsigned int blocksize = inode->i_sb->s_blocksize;
 	struct dnode_of_data dn;
 	pgoff_t free_from;
 	int count = 0, err = 0;
@@ -578,7 +581,7 @@ int truncate_blocks(struct inode *inode, u64 from, bool lock)
 	trace_f2fs_truncate_blocks_enter(inode, from);
 
-	free_from = (pgoff_t)F2FS_BYTES_TO_BLK(from + blocksize - 1);
+	free_from = (pgoff_t)F2FS_BLK_ALIGN(from);
 
 	if (free_from >= sbi->max_file_blocks)
 		goto free_partial;
@@ -1348,8 +1351,12 @@ static int f2fs_zero_range(struct inode *inode, loff_t offset, loff_t len,
 	}
 
 out:
-	if (!(mode & FALLOC_FL_KEEP_SIZE) && i_size_read(inode) < new_size)
+	if (new_size > i_size_read(inode)) {
+		if (mode & FALLOC_FL_KEEP_SIZE)
+			file_set_keep_isize(inode);
+		else
 			f2fs_i_size_write(inode, new_size);
+	}
 out_sem:
 	up_write(&F2FS_I(inode)->i_mmap_sem);
@@ -1711,6 +1718,8 @@ static int f2fs_ioc_commit_atomic_write(struct file *filp)
 	inode_lock(inode);
 
+	down_write(&F2FS_I(inode)->dio_rwsem[WRITE]);
+
 	if (f2fs_is_volatile_file(inode))
 		goto err_out;
@@ -1729,6 +1738,7 @@ static int f2fs_ioc_commit_atomic_write(struct file *filp)
 		ret = f2fs_do_sync_file(filp, 0, LLONG_MAX, 1, false);
 	}
 err_out:
+	up_write(&F2FS_I(inode)->dio_rwsem[WRITE]);
 	inode_unlock(inode);
 	mnt_drop_write_file(filp);
 	return ret;
@@ -1938,7 +1948,7 @@ static int f2fs_ioc_set_encryption_policy(struct file *filp, unsigned long arg)
 {
 	struct inode *inode = file_inode(filp);
 
-	if (!f2fs_sb_has_crypto(inode->i_sb))
+	if (!f2fs_sb_has_encrypt(inode->i_sb))
 		return -EOPNOTSUPP;
 
 	f2fs_update_time(F2FS_I_SB(inode), REQ_TIME);
@@ -1948,7 +1958,7 @@ static int f2fs_ioc_set_encryption_policy(struct file *filp, unsigned long arg)
 static int f2fs_ioc_get_encryption_policy(struct file *filp, unsigned long arg)
 {
-	if (!f2fs_sb_has_crypto(file_inode(filp)->i_sb))
+	if (!f2fs_sb_has_encrypt(file_inode(filp)->i_sb))
 		return -EOPNOTSUPP;
 
 	return fscrypt_ioctl_get_policy(filp, (void __user *)arg);
 }
@@ -1959,16 +1969,18 @@ static int f2fs_ioc_get_encryption_pwsalt(struct file *filp, unsigned long arg)
 	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
 	int err;
 
-	if (!f2fs_sb_has_crypto(inode->i_sb))
+	if (!f2fs_sb_has_encrypt(inode->i_sb))
 		return -EOPNOTSUPP;
 
-	if (uuid_is_nonzero(sbi->raw_super->encrypt_pw_salt))
-		goto got_it;
-
 	err = mnt_want_write_file(filp);
 	if (err)
 		return err;
 
+	down_write(&sbi->sb_lock);
+
+	if (uuid_is_nonzero(sbi->raw_super->encrypt_pw_salt))
+		goto got_it;
+
 	/* update superblock with uuid */
 	generate_random_uuid(sbi->raw_super->encrypt_pw_salt);
@@ -1976,15 +1988,16 @@ static int f2fs_ioc_get_encryption_pwsalt(struct file *filp, unsigned long arg)
 	if (err) {
 		/* undo new data */
 		memset(sbi->raw_super->encrypt_pw_salt, 0, 16);
-		mnt_drop_write_file(filp);
-		return err;
+		goto out_err;
 	}
-	mnt_drop_write_file(filp);
 got_it:
 	if (copy_to_user((__u8 __user *)arg, sbi->raw_super->encrypt_pw_salt,
 									16))
-		return -EFAULT;
-	return 0;
+		err = -EFAULT;
+out_err:
+	up_write(&sbi->sb_lock);
+	mnt_drop_write_file(filp);
+	return err;
 }
 
 static int f2fs_ioc_gc(struct file *filp, unsigned long arg)
@@ -2045,8 +2058,10 @@ static int f2fs_ioc_gc_range(struct file *filp, unsigned long arg)
 		return ret;
 
 	end = range.start + range.len;
-	if (range.start < MAIN_BLKADDR(sbi) || end >= MAX_BLKADDR(sbi))
-		return -EINVAL;
+	if (range.start < MAIN_BLKADDR(sbi) || end >= MAX_BLKADDR(sbi)) {
+		ret = -EINVAL;
+		goto out;
+	}
 do_more:
 	if (!range.sync) {
 		if (!mutex_trylock(&sbi->gc_mutex)) {
@@ -2885,25 +2900,54 @@ static ssize_t f2fs_file_write_iter(struct kiocb *iocb, struct iov_iter *from)
 	if (unlikely(f2fs_cp_error(F2FS_I_SB(inode))))
 		return -EIO;
 
+	if ((iocb->ki_flags & IOCB_NOWAIT) && !(iocb->ki_flags & IOCB_DIRECT))
+		return -EINVAL;
+
+	if (!inode_trylock(inode)) {
+		if (iocb->ki_flags & IOCB_NOWAIT)
+			return -EAGAIN;
 		inode_lock(inode);
+	}
+
 	ret = generic_write_checks(iocb, from);
 	if (ret > 0) {
+		bool preallocated = false;
+		size_t target_size = 0;
 		int err;
 
 		if (iov_iter_fault_in_readable(from, iov_iter_count(from)))
 			set_inode_flag(inode, FI_NO_PREALLOC);
 
+		if ((iocb->ki_flags & IOCB_NOWAIT) &&
+				(iocb->ki_flags & IOCB_DIRECT)) {
+			if (!f2fs_overwrite_io(inode, iocb->ki_pos,
+						iov_iter_count(from)) ||
+					f2fs_has_inline_data(inode) ||
+					f2fs_force_buffered_io(inode, WRITE)) {
+				inode_unlock(inode);
+				return -EAGAIN;
+			}
+
+		} else {
+			preallocated = true;
+			target_size = iocb->ki_pos + iov_iter_count(from);
+
 			err = f2fs_preallocate_blocks(iocb, from);
 			if (err) {
 				clear_inode_flag(inode, FI_NO_PREALLOC);
 				inode_unlock(inode);
 				return err;
 			}
+		}
 		blk_start_plug(&plug);
 		ret = __generic_file_write_iter(iocb, from);
 		blk_finish_plug(&plug);
 		clear_inode_flag(inode, FI_NO_PREALLOC);
+
+		/* if we couldn't write data, we should deallocate blocks. */
+		if (preallocated && i_size_read(inode) < target_size)
+			f2fs_truncate(inode);
+
 		if (ret > 0)
 			f2fs_update_iostat(F2FS_I_SB(inode), APP_WRITE_IO, ret);
 	}
...
@@ -76,14 +76,15 @@ static int gc_thread_func(void *data)
 		 * invalidated soon after by user update or deletion.
 		 * So, I'd like to wait some time to collect dirty segments.
 		 */
-		if (!mutex_trylock(&sbi->gc_mutex))
-			goto next;
-
 		if (gc_th->gc_urgent) {
 			wait_ms = gc_th->urgent_sleep_time;
+			mutex_lock(&sbi->gc_mutex);
 			goto do_gc;
 		}
 
+		if (!mutex_trylock(&sbi->gc_mutex))
+			goto next;
+
 		if (!is_idle(sbi)) {
 			increase_sleep_time(gc_th, &wait_ms);
 			mutex_unlock(&sbi->gc_mutex);
@@ -161,12 +162,17 @@ static int select_gc_type(struct f2fs_gc_kthread *gc_th, int gc_type)
 {
 	int gc_mode = (gc_type == BG_GC) ? GC_CB : GC_GREEDY;
 
-	if (gc_th && gc_th->gc_idle) {
+	if (!gc_th)
+		return gc_mode;
+
+	if (gc_th->gc_idle) {
 		if (gc_th->gc_idle == 1)
 			gc_mode = GC_CB;
 		else if (gc_th->gc_idle == 2)
 			gc_mode = GC_GREEDY;
 	}
+
+	if (gc_th->gc_urgent)
+		gc_mode = GC_GREEDY;
+
 	return gc_mode;
 }
...@@ -188,11 +194,14 @@ static void select_policy(struct f2fs_sb_info *sbi, int gc_type, ...@@ -188,11 +194,14 @@ static void select_policy(struct f2fs_sb_info *sbi, int gc_type,
} }
/* we need to check every dirty segments in the FG_GC case */ /* we need to check every dirty segments in the FG_GC case */
if (gc_type != FG_GC && p->max_search > sbi->max_victim_search) if (gc_type != FG_GC &&
(sbi->gc_thread && !sbi->gc_thread->gc_urgent) &&
p->max_search > sbi->max_victim_search)
p->max_search = sbi->max_victim_search; p->max_search = sbi->max_victim_search;
/* let's select beginning hot/small space first */ /* let's select beginning hot/small space first in no_heap mode*/
if (type == CURSEG_HOT_DATA || IS_NODESEG(type)) if (test_opt(sbi, NOHEAP) &&
(type == CURSEG_HOT_DATA || IS_NODESEG(type)))
p->offset = 0; p->offset = 0;
else else
p->offset = SIT_I(sbi)->last_victim[p->gc_mode]; p->offset = SIT_I(sbi)->last_victim[p->gc_mode];
......
...@@ -369,7 +369,7 @@ static int f2fs_move_inline_dirents(struct inode *dir, struct page *ipage, ...@@ -369,7 +369,7 @@ static int f2fs_move_inline_dirents(struct inode *dir, struct page *ipage,
f2fs_wait_on_page_writeback(page, DATA, true); f2fs_wait_on_page_writeback(page, DATA, true);
zero_user_segment(page, MAX_INLINE_DATA(dir), PAGE_SIZE); zero_user_segment(page, MAX_INLINE_DATA(dir), PAGE_SIZE);
dentry_blk = kmap_atomic(page); dentry_blk = page_address(page);
make_dentry_ptr_inline(dir, &src, inline_dentry); make_dentry_ptr_inline(dir, &src, inline_dentry);
make_dentry_ptr_block(dir, &dst, dentry_blk); make_dentry_ptr_block(dir, &dst, dentry_blk);
...@@ -386,7 +386,6 @@ static int f2fs_move_inline_dirents(struct inode *dir, struct page *ipage, ...@@ -386,7 +386,6 @@ static int f2fs_move_inline_dirents(struct inode *dir, struct page *ipage,
memcpy(dst.dentry, src.dentry, SIZE_OF_DIR_ENTRY * src.max); memcpy(dst.dentry, src.dentry, SIZE_OF_DIR_ENTRY * src.max);
memcpy(dst.filename, src.filename, src.max * F2FS_SLOT_LEN); memcpy(dst.filename, src.filename, src.max * F2FS_SLOT_LEN);
kunmap_atomic(dentry_blk);
if (!PageUptodate(page)) if (!PageUptodate(page))
SetPageUptodate(page); SetPageUptodate(page);
set_page_dirty(page); set_page_dirty(page);
......
@@ -284,6 +284,10 @@ static int do_read_inode(struct inode *inode)
 		fi->i_crtime.tv_nsec = le32_to_cpu(ri->i_crtime_nsec);
 	}
 
+	F2FS_I(inode)->i_disk_time[0] = inode->i_atime;
+	F2FS_I(inode)->i_disk_time[1] = inode->i_ctime;
+	F2FS_I(inode)->i_disk_time[2] = inode->i_mtime;
+	F2FS_I(inode)->i_disk_time[3] = F2FS_I(inode)->i_crtime;
 	f2fs_put_page(node_page, 1);
 
 	stat_inc_inline_xattr(inode);
@@ -328,7 +332,7 @@ struct inode *f2fs_iget(struct super_block *sb, unsigned long ino)
 		inode->i_op = &f2fs_dir_inode_operations;
 		inode->i_fop = &f2fs_dir_operations;
 		inode->i_mapping->a_ops = &f2fs_dblock_aops;
-		mapping_set_gfp_mask(inode->i_mapping, GFP_F2FS_HIGH_ZERO);
+		inode_nohighmem(inode);
 	} else if (S_ISLNK(inode->i_mode)) {
 		if (f2fs_encrypted_inode(inode))
 			inode->i_op = &f2fs_encrypted_symlink_inode_operations;
@@ -439,12 +443,15 @@ void update_inode(struct inode *inode, struct page *node_page)
 	}
 
 	__set_inode_rdev(inode, ri);
-	set_cold_node(inode, node_page);
 
 	/* deleted inode */
 	if (inode->i_nlink == 0)
 		clear_inline_node(node_page);
+
+	F2FS_I(inode)->i_disk_time[0] = inode->i_atime;
+	F2FS_I(inode)->i_disk_time[1] = inode->i_ctime;
+	F2FS_I(inode)->i_disk_time[2] = inode->i_mtime;
+	F2FS_I(inode)->i_disk_time[3] = F2FS_I(inode)->i_crtime;
 }
 
 void update_inode_page(struct inode *inode)
......
@@ -78,7 +78,8 @@ static struct inode *f2fs_new_inode(struct inode *dir, umode_t mode)
 	set_inode_flag(inode, FI_NEW_INODE);
 
 	/* If the directory encrypted, then we should encrypt the inode. */
-	if (f2fs_encrypted_inode(dir) && f2fs_may_encrypt(inode))
+	if ((f2fs_encrypted_inode(dir) || DUMMY_ENCRYPTION_ENABLED(sbi)) &&
+				f2fs_may_encrypt(inode))
 		f2fs_set_encrypted_inode(inode);
 
 	if (f2fs_sb_has_extra_attr(sbi->sb)) {
@@ -97,7 +98,7 @@ static struct inode *f2fs_new_inode(struct inode *dir, umode_t mode)
 	if (f2fs_sb_has_flexible_inline_xattr(sbi->sb)) {
 		f2fs_bug_on(sbi, !f2fs_has_extra_attr(inode));
 		if (f2fs_has_inline_xattr(inode))
-			xattr_size = sbi->inline_xattr_size;
+			xattr_size = F2FS_OPTION(sbi).inline_xattr_size;
 		/* Otherwise, will be 0 */
 	} else if (f2fs_has_inline_xattr(inode) ||
 			f2fs_has_inline_dentry(inode)) {
@@ -142,7 +143,7 @@ static struct inode *f2fs_new_inode(struct inode *dir, umode_t mode)
 	return ERR_PTR(err);
 }
-static int is_multimedia_file(const unsigned char *s, const char *sub)
+static int is_extension_exist(const unsigned char *s, const char *sub)
 {
 	size_t slen = strlen(s);
 	size_t sublen = strlen(sub);
@@ -168,19 +169,94 @@ static int is_extension_exist(const unsigned char *s, const char *sub)
 /*
  * Set multimedia files as cold files for hot/cold data separation
  */
-static inline void set_cold_files(struct f2fs_sb_info *sbi, struct inode *inode,
+static inline void set_file_temperature(struct f2fs_sb_info *sbi, struct inode *inode,
 		const unsigned char *name)
 {
-	int i;
-	__u8 (*extlist)[8] = sbi->raw_super->extension_list;
+	__u8 (*extlist)[F2FS_EXTENSION_LEN] = sbi->raw_super->extension_list;
+	int i, cold_count, hot_count;
 
-	int count = le32_to_cpu(sbi->raw_super->extension_count);
-	for (i = 0; i < count; i++) {
-		if (is_multimedia_file(name, extlist[i])) {
+	down_read(&sbi->sb_lock);
+
+	cold_count = le32_to_cpu(sbi->raw_super->extension_count);
+	hot_count = sbi->raw_super->hot_ext_count;
+
+	for (i = 0; i < cold_count + hot_count; i++) {
+		if (!is_extension_exist(name, extlist[i]))
+			continue;
+		if (i < cold_count)
 			file_set_cold(inode);
-			break;
-		}
-	}
+		else
+			file_set_hot(inode);
+		break;
+	}
+
+	up_read(&sbi->sb_lock);
+}
+
+int update_extension_list(struct f2fs_sb_info *sbi, const char *name,
+							bool hot, bool set)
+{
+	__u8 (*extlist)[F2FS_EXTENSION_LEN] = sbi->raw_super->extension_list;
+	int cold_count = le32_to_cpu(sbi->raw_super->extension_count);
+	int hot_count = sbi->raw_super->hot_ext_count;
+	int total_count = cold_count + hot_count;
+	int start, count;
+	int i;
+
+	if (set) {
+		if (total_count == F2FS_MAX_EXTENSION)
+			return -EINVAL;
+	} else {
+		if (!hot && !cold_count)
+			return -EINVAL;
+		if (hot && !hot_count)
+			return -EINVAL;
+	}
+
+	if (hot) {
+		start = cold_count;
+		count = total_count;
+	} else {
+		start = 0;
+		count = cold_count;
+	}
+
+	for (i = start; i < count; i++) {
+		if (strcmp(name, extlist[i]))
+			continue;
+
+		if (set)
+			return -EINVAL;
+
+		memcpy(extlist[i], extlist[i + 1],
+				F2FS_EXTENSION_LEN * (total_count - i - 1));
+		memset(extlist[total_count - 1], 0, F2FS_EXTENSION_LEN);
+		if (hot)
+			sbi->raw_super->hot_ext_count = hot_count - 1;
+		else
+			sbi->raw_super->extension_count =
+						cpu_to_le32(cold_count - 1);
+		return 0;
+	}
+
+	if (!set)
+		return -EINVAL;
+
+	if (hot) {
+		strncpy(extlist[count], name, strlen(name));
+		sbi->raw_super->hot_ext_count = hot_count + 1;
+	} else {
+		char buf[F2FS_MAX_EXTENSION][F2FS_EXTENSION_LEN];
+
+		memcpy(buf, &extlist[cold_count],
+				F2FS_EXTENSION_LEN * hot_count);
+		memset(extlist[cold_count], 0, F2FS_EXTENSION_LEN);
+		strncpy(extlist[cold_count], name, strlen(name));
+		memcpy(&extlist[cold_count + 1], buf,
+				F2FS_EXTENSION_LEN * hot_count);
+		sbi->raw_super->extension_count = cpu_to_le32(cold_count + 1);
+	}
+	return 0;
 }
@@ -203,7 +279,7 @@ static int f2fs_create(struct inode *dir, struct dentry *dentry, umode_t mode,
 		return PTR_ERR(inode);
 
 	if (!test_opt(sbi, DISABLE_EXT_IDENTIFY))
-		set_cold_files(sbi, inode, dentry->d_name.name);
+		set_file_temperature(sbi, inode, dentry->d_name.name);
 
 	inode->i_op = &f2fs_file_inode_operations;
 	inode->i_fop = &f2fs_file_operations;
@@ -317,7 +393,6 @@ static int __recover_dot_dentries(struct inode *dir, nid_t pino)
 
 	de = f2fs_find_entry(dir, &dot, &page);
 	if (de) {
-		f2fs_dentry_kunmap(dir, page);
 		f2fs_put_page(page, 0);
 	} else if (IS_ERR(page)) {
 		err = PTR_ERR(page);
@@ -329,14 +404,12 @@ static int __recover_dot_dentries(struct inode *dir, nid_t pino)
 	}
 
 	de = f2fs_find_entry(dir, &dotdot, &page);
-	if (de) {
-		f2fs_dentry_kunmap(dir, page);
+	if (de)
 		f2fs_put_page(page, 0);
-	} else if (IS_ERR(page)) {
+	else if (IS_ERR(page))
 		err = PTR_ERR(page);
-	} else {
+	else
 		err = __f2fs_add_link(dir, &dotdot, NULL, pino, S_IFDIR);
-	}
 out:
 	if (!err)
 		clear_inode_flag(dir, FI_INLINE_DOTS);
@@ -377,7 +450,6 @@ static struct dentry *f2fs_lookup(struct inode *dir, struct dentry *dentry,
 	}
 
 	ino = le32_to_cpu(de->ino);
-	f2fs_dentry_kunmap(dir, page);
 	f2fs_put_page(page, 0);
 
 	inode = f2fs_iget(dir->i_sb, ino);
@@ -452,7 +524,6 @@ static int f2fs_unlink(struct inode *dir, struct dentry *dentry)
 	err = acquire_orphan_inode(sbi);
 	if (err) {
 		f2fs_unlock_op(sbi);
-		f2fs_dentry_kunmap(dir, page);
 		f2fs_put_page(page, 0);
 		goto fail;
 	}
@@ -579,7 +650,7 @@ static int f2fs_mkdir(struct inode *dir, struct dentry *dentry, umode_t mode)
 	inode->i_op = &f2fs_dir_inode_operations;
 	inode->i_fop = &f2fs_dir_operations;
 	inode->i_mapping->a_ops = &f2fs_dblock_aops;
-	mapping_set_gfp_mask(inode->i_mapping, GFP_F2FS_HIGH_ZERO);
+	inode_nohighmem(inode);
 
 	set_inode_flag(inode, FI_INC_LINK);
 	f2fs_lock_op(sbi);
@@ -717,10 +788,12 @@ static int __f2fs_tmpfile(struct inode *dir, struct dentry *dentry,
 
 static int f2fs_tmpfile(struct inode *dir, struct dentry *dentry, umode_t mode)
 {
-	if (unlikely(f2fs_cp_error(F2FS_I_SB(dir))))
+	struct f2fs_sb_info *sbi = F2FS_I_SB(dir);
+
+	if (unlikely(f2fs_cp_error(sbi)))
 		return -EIO;
 
-	if (f2fs_encrypted_inode(dir)) {
+	if (f2fs_encrypted_inode(dir) || DUMMY_ENCRYPTION_ENABLED(sbi)) {
 		int err = fscrypt_get_encryption_info(dir);
 		if (err)
 			return err;
@@ -893,15 +966,14 @@ static int f2fs_rename(struct inode *old_dir, struct dentry *old_dentry,
 	}
 
 	if (old_dir_entry) {
-		if (old_dir != new_dir && !whiteout) {
+		if (old_dir != new_dir && !whiteout)
 			f2fs_set_link(old_inode, old_dir_entry,
 						old_dir_page, new_dir);
-		} else {
-			f2fs_dentry_kunmap(old_inode, old_dir_page);
+		else
 			f2fs_put_page(old_dir_page, 0);
-		}
 		f2fs_i_links_write(old_dir, false);
 	}
-	add_ino_entry(sbi, new_dir->i_ino, TRANS_DIR_INO);
+	if (F2FS_OPTION(sbi).fsync_mode == FSYNC_MODE_STRICT)
+		add_ino_entry(sbi, new_dir->i_ino, TRANS_DIR_INO);
 
 	f2fs_unlock_op(sbi);
 
@@ -912,20 +984,15 @@ static int f2fs_rename(struct inode *old_dir, struct dentry *old_dentry,
 put_out_dir:
 	f2fs_unlock_op(sbi);
-	if (new_page) {
-		f2fs_dentry_kunmap(new_dir, new_page);
+	if (new_page)
 		f2fs_put_page(new_page, 0);
-	}
 out_whiteout:
 	if (whiteout)
 		iput(whiteout);
 out_dir:
-	if (old_dir_entry) {
-		f2fs_dentry_kunmap(old_inode, old_dir_page);
+	if (old_dir_entry)
 		f2fs_put_page(old_dir_page, 0);
-	}
 out_old:
-	f2fs_dentry_kunmap(old_dir, old_page);
 	f2fs_put_page(old_page, 0);
 out:
 	return err;
@@ -1057,8 +1124,10 @@ static int f2fs_cross_rename(struct inode *old_dir, struct dentry *old_dentry,
 	}
 	f2fs_mark_inode_dirty_sync(new_dir, false);
 
-	add_ino_entry(sbi, old_dir->i_ino, TRANS_DIR_INO);
-	add_ino_entry(sbi, new_dir->i_ino, TRANS_DIR_INO);
+	if (F2FS_OPTION(sbi).fsync_mode == FSYNC_MODE_STRICT) {
+		add_ino_entry(sbi, old_dir->i_ino, TRANS_DIR_INO);
+		add_ino_entry(sbi, new_dir->i_ino, TRANS_DIR_INO);
+	}
 
 	f2fs_unlock_op(sbi);
 
@@ -1067,19 +1136,15 @@ static int f2fs_cross_rename(struct inode *old_dir, struct dentry *old_dentry,
 	return 0;
 out_new_dir:
 	if (new_dir_entry) {
-		f2fs_dentry_kunmap(new_inode, new_dir_page);
 		f2fs_put_page(new_dir_page, 0);
 	}
 out_old_dir:
 	if (old_dir_entry) {
-		f2fs_dentry_kunmap(old_inode, old_dir_page);
 		f2fs_put_page(old_dir_page, 0);
 	}
 out_new:
-	f2fs_dentry_kunmap(new_dir, new_page);
 	f2fs_put_page(new_page, 0);
 out_old:
-	f2fs_dentry_kunmap(old_dir, old_page);
 	f2fs_put_page(old_page, 0);
 out:
 	return err;
......
@@ -193,7 +193,7 @@ static void __del_from_nat_cache(struct f2fs_nm_info *nm_i, struct nat_entry *e)
 	__free_nat_entry(e);
 }
 
-static void __set_nat_cache_dirty(struct f2fs_nm_info *nm_i,
+static struct nat_entry_set *__grab_nat_entry_set(struct f2fs_nm_info *nm_i,
 						struct nat_entry *ne)
 {
 	nid_t set = NAT_BLOCK_OFFSET(ne->ni.nid);
@@ -209,15 +209,36 @@ static void __set_nat_cache_dirty(struct f2fs_nm_info *nm_i,
 		head->entry_cnt = 0;
 		f2fs_radix_tree_insert(&nm_i->nat_set_root, set, head);
 	}
+	return head;
+}
+
+static void __set_nat_cache_dirty(struct f2fs_nm_info *nm_i,
+						struct nat_entry *ne)
+{
+	struct nat_entry_set *head;
+	bool new_ne = nat_get_blkaddr(ne) == NEW_ADDR;
+
+	if (!new_ne)
+		head = __grab_nat_entry_set(nm_i, ne);
+
+	/*
+	 * update entry_cnt in below condition:
+	 * 1. update NEW_ADDR to valid block address;
+	 * 2. update old block address to new one;
+	 */
+	if (!new_ne && (get_nat_flag(ne, IS_PREALLOC) ||
+				!get_nat_flag(ne, IS_DIRTY)))
+		head->entry_cnt++;
+
+	set_nat_flag(ne, IS_PREALLOC, new_ne);
 
 	if (get_nat_flag(ne, IS_DIRTY))
 		goto refresh_list;
 
 	nm_i->dirty_nat_cnt++;
-	head->entry_cnt++;
 	set_nat_flag(ne, IS_DIRTY, true);
 refresh_list:
-	if (nat_get_blkaddr(ne) == NEW_ADDR)
+	if (new_ne)
 		list_del_init(&ne->list);
 	else
 		list_move_tail(&ne->list, &head->entry_list);
@@ -1076,7 +1097,7 @@ struct page *new_node_page(struct dnode_of_data *dn, unsigned int ofs)
 
 	f2fs_wait_on_page_writeback(page, NODE, true);
 	fill_node_footer(page, dn->nid, dn->inode->i_ino, ofs, true);
-	set_cold_node(dn->inode, page);
+	set_cold_node(page, S_ISDIR(dn->inode->i_mode));
 	if (!PageUptodate(page))
 		SetPageUptodate(page);
 	if (set_page_dirty(page))
@@ -2291,6 +2312,7 @@ int recover_inode_page(struct f2fs_sb_info *sbi, struct page *page)
 	if (!PageUptodate(ipage))
 		SetPageUptodate(ipage);
 	fill_node_footer(ipage, ino, ino, 0, true);
+	set_cold_node(page, false);
 
 	src = F2FS_INODE(page);
 	dst = F2FS_INODE(ipage);
@@ -2580,8 +2602,7 @@ static int __get_nat_bitmaps(struct f2fs_sb_info *sbi)
 	if (!enabled_nat_bits(sbi, NULL))
 		return 0;
 
-	nm_i->nat_bits_blocks = F2FS_BYTES_TO_BLK((nat_bits_bytes << 1) + 8 +
-						F2FS_BLKSIZE - 1);
+	nm_i->nat_bits_blocks = F2FS_BLK_ALIGN((nat_bits_bytes << 1) + 8);
 	nm_i->nat_bits = f2fs_kzalloc(sbi,
 			nm_i->nat_bits_blocks << F2FS_BLKSIZE_BITS, GFP_KERNEL);
 	if (!nm_i->nat_bits)
@@ -2707,12 +2728,20 @@ static int init_node_manager(struct f2fs_sb_info *sbi)
 static int init_free_nid_cache(struct f2fs_sb_info *sbi)
 {
 	struct f2fs_nm_info *nm_i = NM_I(sbi);
+	int i;
 
-	nm_i->free_nid_bitmap = f2fs_kvzalloc(sbi, nm_i->nat_blocks *
-					NAT_ENTRY_BITMAP_SIZE, GFP_KERNEL);
+	nm_i->free_nid_bitmap = f2fs_kzalloc(sbi, nm_i->nat_blocks *
+				sizeof(unsigned char *), GFP_KERNEL);
 	if (!nm_i->free_nid_bitmap)
 		return -ENOMEM;
 
+	for (i = 0; i < nm_i->nat_blocks; i++) {
+		nm_i->free_nid_bitmap[i] = f2fs_kvzalloc(sbi,
+				NAT_ENTRY_BITMAP_SIZE_ALIGNED, GFP_KERNEL);
+		if (!nm_i->free_nid_bitmap[i])
+			return -ENOMEM;
+	}
+
 	nm_i->nat_block_bitmap = f2fs_kvzalloc(sbi, nm_i->nat_blocks / 8,
 								GFP_KERNEL);
 	if (!nm_i->nat_block_bitmap)
@@ -2803,7 +2832,13 @@ void destroy_node_manager(struct f2fs_sb_info *sbi)
 	up_write(&nm_i->nat_tree_lock);
 
 	kvfree(nm_i->nat_block_bitmap);
-	kvfree(nm_i->free_nid_bitmap);
+	if (nm_i->free_nid_bitmap) {
+		int i;
+
+		for (i = 0; i < nm_i->nat_blocks; i++)
+			kvfree(nm_i->free_nid_bitmap[i]);
+		kfree(nm_i->free_nid_bitmap);
+	}
 	kvfree(nm_i->free_nid_count);
 
 	kfree(nm_i->nat_bitmap);
......
@@ -44,6 +44,7 @@ enum {
 	HAS_FSYNCED_INODE,	/* is the inode fsynced before? */
 	HAS_LAST_FSYNC,		/* has the latest node fsync mark? */
 	IS_DIRTY,		/* this nat entry is dirty? */
+	IS_PREALLOC,		/* nat entry is preallocated */
 };
 
 /*
@@ -422,12 +423,12 @@ static inline void clear_inline_node(struct page *page)
 	ClearPageChecked(page);
 }
 
-static inline void set_cold_node(struct inode *inode, struct page *page)
+static inline void set_cold_node(struct page *page, bool is_dir)
 {
 	struct f2fs_node *rn = F2FS_NODE(page);
 	unsigned int flag = le32_to_cpu(rn->footer.flag);
 
-	if (S_ISDIR(inode->i_mode))
+	if (is_dir)
 		flag &= ~(0x1 << COLD_BIT_SHIFT);
 	else
 		flag |= (0x1 << COLD_BIT_SHIFT);
......
@@ -144,7 +144,7 @@ static int recover_dentry(struct inode *inode, struct page *ipage,
 retry:
 	de = __f2fs_find_entry(dir, &fname, &page);
 	if (de && inode->i_ino == le32_to_cpu(de->ino))
-		goto out_unmap_put;
+		goto out_put;
 
 	if (de) {
 		einode = f2fs_iget_retry(inode->i_sb, le32_to_cpu(de->ino));
@@ -153,19 +153,19 @@ static int recover_dentry(struct inode *inode, struct page *ipage,
 			err = PTR_ERR(einode);
 			if (err == -ENOENT)
 				err = -EEXIST;
-			goto out_unmap_put;
+			goto out_put;
 		}
 
 		err = dquot_initialize(einode);
 		if (err) {
 			iput(einode);
-			goto out_unmap_put;
+			goto out_put;
 		}
 
 		err = acquire_orphan_inode(F2FS_I_SB(inode));
 		if (err) {
 			iput(einode);
-			goto out_unmap_put;
+			goto out_put;
 		}
 		f2fs_delete_entry(de, page, dir, einode);
 		iput(einode);
@@ -180,8 +180,7 @@ static int recover_dentry(struct inode *inode, struct page *ipage,
 		goto retry;
 	goto out;
 
-out_unmap_put:
-	f2fs_dentry_kunmap(dir, page);
+out_put:
 	f2fs_put_page(page, 0);
 out:
 	if (file_enc_name(inode))
@@ -243,6 +242,9 @@ static int find_fsync_dnodes(struct f2fs_sb_info *sbi, struct list_head *head,
 	struct curseg_info *curseg;
 	struct page *page = NULL;
 	block_t blkaddr;
+	unsigned int loop_cnt = 0;
+	unsigned int free_blocks = sbi->user_block_count -
+					valid_user_blocks(sbi);
 	int err = 0;
 
 	/* get node pages in the current segment */
@@ -295,6 +297,17 @@ static int find_fsync_dnodes(struct f2fs_sb_info *sbi, struct list_head *head,
 		if (IS_INODE(page) && is_dent_dnode(page))
 			entry->last_dentry = blkaddr;
 next:
+		/* sanity check in order to detect looped node chain */
+		if (++loop_cnt >= free_blocks ||
+			blkaddr == next_blkaddr_of_node(page)) {
+			f2fs_msg(sbi->sb, KERN_NOTICE,
+				"%s: detect looped node chain, "
+				"blkaddr:%u, next:%u",
+				__func__, blkaddr, next_blkaddr_of_node(page));
+			err = -EINVAL;
+			break;
+		}
+
 		/* check next segment */
 		blkaddr = next_blkaddr_of_node(page);
 		f2fs_put_page(page, 1);
......
@@ -1411,12 +1411,11 @@ static int issue_discard_thread(void *data)
 		if (kthread_should_stop())
 			return 0;
 
-		if (dcc->discard_wake) {
+		if (dcc->discard_wake)
 			dcc->discard_wake = 0;
-			if (sbi->gc_thread && sbi->gc_thread->gc_urgent)
-				init_discard_policy(&dpolicy,
-							DPOLICY_FORCE, 1);
-		}
+
+		if (sbi->gc_thread && sbi->gc_thread->gc_urgent)
+			init_discard_policy(&dpolicy, DPOLICY_FORCE, 1);
 
 		sb_start_intwrite(sbi->sb);
 
@@ -1485,7 +1484,7 @@ static int __issue_discard_async(struct f2fs_sb_info *sbi,
 		struct block_device *bdev, block_t blkstart, block_t blklen)
 {
 #ifdef CONFIG_BLK_DEV_ZONED
-	if (f2fs_sb_mounted_blkzoned(sbi->sb) &&
+	if (f2fs_sb_has_blkzoned(sbi->sb) &&
 				bdev_zoned_model(bdev) != BLK_ZONED_NONE)
 		return __f2fs_issue_discard_zone(sbi, bdev, blkstart, blklen);
 #endif
@@ -1683,7 +1682,7 @@ void clear_prefree_segments(struct f2fs_sb_info *sbi, struct cp_control *cpc)
 					sbi->blocks_per_seg, cur_pos);
 			len = next_pos - cur_pos;
 
-			if (f2fs_sb_mounted_blkzoned(sbi->sb) ||
+			if (f2fs_sb_has_blkzoned(sbi->sb) ||
 			    (force && len < cpc->trim_minlen))
 				goto skip;
 
@@ -1727,7 +1726,7 @@ void init_discard_policy(struct discard_policy *dpolicy,
 	} else if (discard_type == DPOLICY_FORCE) {
 		dpolicy->min_interval = DEF_MIN_DISCARD_ISSUE_TIME;
 		dpolicy->max_interval = DEF_MAX_DISCARD_ISSUE_TIME;
-		dpolicy->io_aware = true;
+		dpolicy->io_aware = false;
 	} else if (discard_type == DPOLICY_FSTRIM) {
 		dpolicy->io_aware = false;
 	} else if (discard_type == DPOLICY_UMOUNT) {
@@ -1863,7 +1862,7 @@ static void update_sit_entry(struct f2fs_sb_info *sbi, block_t blkaddr, int del)
 			sbi->discard_blks--;
 
 		/* don't overwrite by SSR to keep node chain */
-		if (se->type == CURSEG_WARM_NODE) {
+		if (IS_NODESEG(se->type)) {
 			if (!f2fs_test_and_set_bit(offset, se->ckpt_valid_map))
 				se->ckpt_valid_blocks++;
 		}
@@ -2164,11 +2163,17 @@ static unsigned int __get_next_segno(struct f2fs_sb_info *sbi, int type)
 	if (sbi->segs_per_sec != 1)
 		return CURSEG_I(sbi, type)->segno;
 
-	if (type == CURSEG_HOT_DATA || IS_NODESEG(type))
+	if (test_opt(sbi, NOHEAP) &&
+		(type == CURSEG_HOT_DATA || IS_NODESEG(type)))
 		return 0;
 
 	if (SIT_I(sbi)->last_victim[ALLOC_NEXT])
 		return SIT_I(sbi)->last_victim[ALLOC_NEXT];
+
+	/* find segments from 0 to reuse freed segments */
+	if (F2FS_OPTION(sbi).alloc_mode == ALLOC_MODE_REUSE)
+		return 0;
+
 	return CURSEG_I(sbi, type)->segno;
 }
@@ -2455,6 +2460,101 @@ int rw_hint_to_seg_type(enum rw_hint hint)
 	}
 }
 
+/* This returns write hints for each segment type. This hints will be
+ * passed down to block layer. There are mapping tables which depend on
+ * the mount option 'whint_mode'.
+ *
+ * 1) whint_mode=off. F2FS only passes down WRITE_LIFE_NOT_SET.
+ *
+ * 2) whint_mode=user-based. F2FS tries to pass down hints given by users.
+ *
+ * User                  F2FS                     Block
+ * ----                  ----                     -----
+ *                       META                     WRITE_LIFE_NOT_SET
+ *                       HOT_NODE                 "
+ *                       WARM_NODE                "
+ *                       COLD_NODE                "
+ * ioctl(COLD)           COLD_DATA                WRITE_LIFE_EXTREME
+ * extension list        "                        "
+ *
+ * -- buffered io
+ * WRITE_LIFE_EXTREME    COLD_DATA                WRITE_LIFE_EXTREME
+ * WRITE_LIFE_SHORT      HOT_DATA                 WRITE_LIFE_SHORT
+ * WRITE_LIFE_NOT_SET    WARM_DATA                WRITE_LIFE_NOT_SET
+ * WRITE_LIFE_NONE       "                        "
+ * WRITE_LIFE_MEDIUM     "                        "
+ * WRITE_LIFE_LONG       "                        "
+ *
+ * -- direct io
+ * WRITE_LIFE_EXTREME    COLD_DATA                WRITE_LIFE_EXTREME
+ * WRITE_LIFE_SHORT      HOT_DATA                 WRITE_LIFE_SHORT
+ * WRITE_LIFE_NOT_SET    WARM_DATA                WRITE_LIFE_NOT_SET
+ * WRITE_LIFE_NONE       "                        WRITE_LIFE_NONE
+ * WRITE_LIFE_MEDIUM     "                        WRITE_LIFE_MEDIUM
+ * WRITE_LIFE_LONG       "                        WRITE_LIFE_LONG
+ *
+ * 3) whint_mode=fs-based. F2FS passes down hints with its policy.
+ *
+ * User                  F2FS                     Block
+ * ----                  ----                     -----
+ *                       META                     WRITE_LIFE_MEDIUM;
+ *                       HOT_NODE                 WRITE_LIFE_NOT_SET
+ *                       WARM_NODE                "
+ *                       COLD_NODE                WRITE_LIFE_NONE
+ * ioctl(COLD)           COLD_DATA                WRITE_LIFE_EXTREME
+ * extension list        "                        "
+ *
+ * -- buffered io
+ * WRITE_LIFE_EXTREME    COLD_DATA                WRITE_LIFE_EXTREME
+ * WRITE_LIFE_SHORT      HOT_DATA                 WRITE_LIFE_SHORT
+ * WRITE_LIFE_NOT_SET    WARM_DATA                WRITE_LIFE_LONG
+ * WRITE_LIFE_NONE       "                        "
+ * WRITE_LIFE_MEDIUM     "                        "
+ * WRITE_LIFE_LONG       "                        "
+ *
+ * -- direct io
+ * WRITE_LIFE_EXTREME    COLD_DATA                WRITE_LIFE_EXTREME
+ * WRITE_LIFE_SHORT      HOT_DATA                 WRITE_LIFE_SHORT
+ * WRITE_LIFE_NOT_SET    WARM_DATA                WRITE_LIFE_NOT_SET
+ * WRITE_LIFE_NONE       "                        WRITE_LIFE_NONE
+ * WRITE_LIFE_MEDIUM     "                        WRITE_LIFE_MEDIUM
+ * WRITE_LIFE_LONG       "                        WRITE_LIFE_LONG
+ */
+
+enum rw_hint io_type_to_rw_hint(struct f2fs_sb_info *sbi,
+				enum page_type type, enum temp_type temp)
+{
+	if (F2FS_OPTION(sbi).whint_mode == WHINT_MODE_USER) {
+		if (type == DATA) {
+			if (temp == WARM)
+				return WRITE_LIFE_NOT_SET;
+			else if (temp == HOT)
+				return WRITE_LIFE_SHORT;
+			else if (temp == COLD)
+				return WRITE_LIFE_EXTREME;
+		} else {
+			return WRITE_LIFE_NOT_SET;
+		}
+	} else if (F2FS_OPTION(sbi).whint_mode == WHINT_MODE_FS) {
+		if (type == DATA) {
+			if (temp == WARM)
+				return WRITE_LIFE_LONG;
+			else if (temp == HOT)
+				return WRITE_LIFE_SHORT;
+			else if (temp == COLD)
+				return WRITE_LIFE_EXTREME;
+		} else if (type == NODE) {
+			if (temp == WARM || temp == HOT)
+				return WRITE_LIFE_NOT_SET;
+			else if (temp == COLD)
+				return WRITE_LIFE_NONE;
+		} else if (type == META) {
+			return WRITE_LIFE_MEDIUM;
+		}
+	}
+	return WRITE_LIFE_NOT_SET;
+}
+
 static int __get_segment_type_2(struct f2fs_io_info *fio)
 {
 	if (fio->type == DATA)
@@ -2487,7 +2587,8 @@ static int __get_segment_type_6(struct f2fs_io_info *fio)
 		if (is_cold_data(fio->page) || file_is_cold(inode))
 			return CURSEG_COLD_DATA;
-		if (is_inode_flag_set(inode, FI_HOT_DATA))
+		if (file_is_hot(inode) ||
+				is_inode_flag_set(inode, FI_HOT_DATA))
 			return CURSEG_HOT_DATA;
 		return rw_hint_to_seg_type(inode->i_write_hint);
 	} else {
@@ -2502,7 +2603,7 @@ static int __get_segment_type(struct f2fs_io_info *fio)
 {
 	int type = 0;
 
-	switch (fio->sbi->active_logs) {
+	switch (F2FS_OPTION(fio->sbi).active_logs) {
 	case 2:
 		type = __get_segment_type_2(fio);
 		break;
@@ -2642,6 +2743,7 @@ void write_meta_page(struct f2fs_sb_info *sbi, struct page *page,
 	struct f2fs_io_info fio = {
 		.sbi = sbi,
 		.type = META,
+		.temp = HOT,
 		.op = REQ_OP_WRITE,
 		.op_flags = REQ_SYNC | REQ_META | REQ_PRIO,
 		.old_blkaddr = page->index,
@@ -2688,8 +2790,15 @@ void write_data_page(struct dnode_of_data *dn, struct f2fs_io_info *fio)
 int rewrite_data_page(struct f2fs_io_info *fio)
 {
 	int err;
+	struct f2fs_sb_info *sbi = fio->sbi;
 
 	fio->new_blkaddr = fio->old_blkaddr;
+
+	/* i/o temperature is needed for passing down write hints */
+	__get_segment_type(fio);
+
+	f2fs_bug_on(sbi, !IS_DATASEG(get_seg_entry(sbi,
+			GET_SEGNO(sbi, fio->new_blkaddr))->type));
+
 	stat_inc_inplace_blocks(fio->sbi);
 
 	err = f2fs_submit_page_bio(fio);
......
@@ -53,13 +53,19 @@
 	 ((secno) == CURSEG_I(sbi, CURSEG_COLD_NODE)->segno /		\
 	  (sbi)->segs_per_sec))	\
-#define MAIN_BLKADDR(sbi)	(SM_I(sbi)->main_blkaddr)
-#define SEG0_BLKADDR(sbi)	(SM_I(sbi)->seg0_blkaddr)
+#define MAIN_BLKADDR(sbi)						\
+	(SM_I(sbi) ? SM_I(sbi)->main_blkaddr :				\
+		le32_to_cpu(F2FS_RAW_SUPER(sbi)->main_blkaddr))
+#define SEG0_BLKADDR(sbi)						\
+	(SM_I(sbi) ? SM_I(sbi)->seg0_blkaddr :				\
+		le32_to_cpu(F2FS_RAW_SUPER(sbi)->segment0_blkaddr))
 #define MAIN_SEGS(sbi)	(SM_I(sbi)->main_segments)
 #define MAIN_SECS(sbi)	((sbi)->total_sections)
-#define TOTAL_SEGS(sbi)	(SM_I(sbi)->segment_count)
+#define TOTAL_SEGS(sbi)							\
+	(SM_I(sbi) ? SM_I(sbi)->segment_count :				\
+		le32_to_cpu(F2FS_RAW_SUPER(sbi)->segment_count))
 #define TOTAL_BLKS(sbi)	(TOTAL_SEGS(sbi) << (sbi)->log_blocks_per_seg)
 #define MAX_BLKADDR(sbi)	(SEG0_BLKADDR(sbi) + TOTAL_BLKS(sbi))
@@ -596,6 +602,8 @@ static inline int utilization(struct f2fs_sb_info *sbi)
 #define DEF_MIN_FSYNC_BLOCKS	8
 #define DEF_MIN_HOT_BLOCKS	16
+#define SMALL_VOLUME_SEGMENTS	(16 * 512)	/* 16GB */
 enum {
 	F2FS_IPU_FORCE,
 	F2FS_IPU_SSR,
@@ -630,10 +638,17 @@ static inline void check_seg_range(struct f2fs_sb_info *sbi, unsigned int segno)
 	f2fs_bug_on(sbi, segno > TOTAL_SEGS(sbi) - 1);
 }
-static inline void verify_block_addr(struct f2fs_sb_info *sbi, block_t blk_addr)
+static inline void verify_block_addr(struct f2fs_io_info *fio, block_t blk_addr)
 {
-	BUG_ON(blk_addr < SEG0_BLKADDR(sbi)
-			|| blk_addr >= MAX_BLKADDR(sbi));
+	struct f2fs_sb_info *sbi = fio->sbi;
+
+	if (PAGE_TYPE_OF_BIO(fio->type) == META &&
+				(!is_read_io(fio->op) || fio->is_meta))
+		BUG_ON(blk_addr < SEG0_BLKADDR(sbi) ||
+				blk_addr >= MAIN_BLKADDR(sbi));
+	else
+		BUG_ON(blk_addr < MAIN_BLKADDR(sbi) ||
+				blk_addr >= MAX_BLKADDR(sbi));
 }
 /*
...
@@ -60,7 +60,7 @@ char *fault_name[FAULT_MAX] = {
 static void f2fs_build_fault_attr(struct f2fs_sb_info *sbi,
 						unsigned int rate)
 {
-	struct f2fs_fault_info *ffi = &sbi->fault_info;
+	struct f2fs_fault_info *ffi = &F2FS_OPTION(sbi).fault_info;
 	if (rate) {
 		atomic_set(&ffi->inject_ops, 0);
@@ -129,6 +129,10 @@ enum {
 	Opt_jqfmt_vfsold,
 	Opt_jqfmt_vfsv0,
 	Opt_jqfmt_vfsv1,
+	Opt_whint,
+	Opt_alloc,
+	Opt_fsync,
+	Opt_test_dummy_encryption,
 	Opt_err,
 };
@@ -182,6 +186,10 @@ static match_table_t f2fs_tokens = {
 	{Opt_jqfmt_vfsold, "jqfmt=vfsold"},
 	{Opt_jqfmt_vfsv0, "jqfmt=vfsv0"},
 	{Opt_jqfmt_vfsv1, "jqfmt=vfsv1"},
+	{Opt_whint, "whint_mode=%s"},
+	{Opt_alloc, "alloc_mode=%s"},
+	{Opt_fsync, "fsync_mode=%s"},
+	{Opt_test_dummy_encryption, "test_dummy_encryption"},
 	{Opt_err, NULL},
 };
@@ -202,21 +210,24 @@ static inline void limit_reserve_root(struct f2fs_sb_info *sbi)
 	block_t limit = (sbi->user_block_count << 1) / 1000;
 	/* limit is 0.2% */
-	if (test_opt(sbi, RESERVE_ROOT) && sbi->root_reserved_blocks > limit) {
-		sbi->root_reserved_blocks = limit;
+	if (test_opt(sbi, RESERVE_ROOT) &&
+			F2FS_OPTION(sbi).root_reserved_blocks > limit) {
+		F2FS_OPTION(sbi).root_reserved_blocks = limit;
 		f2fs_msg(sbi->sb, KERN_INFO,
 			"Reduce reserved blocks for root = %u",
-				sbi->root_reserved_blocks);
+			F2FS_OPTION(sbi).root_reserved_blocks);
 	}
 	if (!test_opt(sbi, RESERVE_ROOT) &&
-		(!uid_eq(sbi->s_resuid,
+		(!uid_eq(F2FS_OPTION(sbi).s_resuid,
 				make_kuid(&init_user_ns, F2FS_DEF_RESUID)) ||
-		!gid_eq(sbi->s_resgid,
+		!gid_eq(F2FS_OPTION(sbi).s_resgid,
 				make_kgid(&init_user_ns, F2FS_DEF_RESGID))))
 		f2fs_msg(sbi->sb, KERN_INFO,
 			"Ignore s_resuid=%u, s_resgid=%u w/o reserve_root",
-			from_kuid_munged(&init_user_ns, sbi->s_resuid),
-			from_kgid_munged(&init_user_ns, sbi->s_resgid));
+				from_kuid_munged(&init_user_ns,
+					F2FS_OPTION(sbi).s_resuid),
+				from_kgid_munged(&init_user_ns,
+					F2FS_OPTION(sbi).s_resgid));
 }
 static void init_once(void *foo)
@@ -236,7 +247,7 @@ static int f2fs_set_qf_name(struct super_block *sb, int qtype,
 	char *qname;
 	int ret = -EINVAL;
-	if (sb_any_quota_loaded(sb) && !sbi->s_qf_names[qtype]) {
+	if (sb_any_quota_loaded(sb) && !F2FS_OPTION(sbi).s_qf_names[qtype]) {
 		f2fs_msg(sb, KERN_ERR,
 			"Cannot change journaled "
 			"quota options when quota turned on");
@@ -254,8 +265,8 @@ static int f2fs_set_qf_name(struct super_block *sb, int qtype,
 			"Not enough memory for storing quotafile name");
 		return -EINVAL;
 	}
-	if (sbi->s_qf_names[qtype]) {
-		if (strcmp(sbi->s_qf_names[qtype], qname) == 0)
+	if (F2FS_OPTION(sbi).s_qf_names[qtype]) {
+		if (strcmp(F2FS_OPTION(sbi).s_qf_names[qtype], qname) == 0)
 			ret = 0;
 		else
 			f2fs_msg(sb, KERN_ERR,
@@ -268,7 +279,7 @@ static int f2fs_set_qf_name(struct super_block *sb, int qtype,
 			"quotafile must be on filesystem root");
 		goto errout;
 	}
-	sbi->s_qf_names[qtype] = qname;
+	F2FS_OPTION(sbi).s_qf_names[qtype] = qname;
 	set_opt(sbi, QUOTA);
 	return 0;
 errout:
@@ -280,13 +291,13 @@ static int f2fs_clear_qf_name(struct super_block *sb, int qtype)
 {
 	struct f2fs_sb_info *sbi = F2FS_SB(sb);
-	if (sb_any_quota_loaded(sb) && sbi->s_qf_names[qtype]) {
+	if (sb_any_quota_loaded(sb) && F2FS_OPTION(sbi).s_qf_names[qtype]) {
 		f2fs_msg(sb, KERN_ERR, "Cannot change journaled quota options"
 			" when quota turned on");
 		return -EINVAL;
 	}
-	kfree(sbi->s_qf_names[qtype]);
-	sbi->s_qf_names[qtype] = NULL;
+	kfree(F2FS_OPTION(sbi).s_qf_names[qtype]);
+	F2FS_OPTION(sbi).s_qf_names[qtype] = NULL;
 	return 0;
 }
@@ -302,15 +313,19 @@ static int f2fs_check_quota_options(struct f2fs_sb_info *sbi)
 			"Cannot enable project quota enforcement.");
 		return -1;
 	}
-	if (sbi->s_qf_names[USRQUOTA] || sbi->s_qf_names[GRPQUOTA] ||
-			sbi->s_qf_names[PRJQUOTA]) {
-		if (test_opt(sbi, USRQUOTA) && sbi->s_qf_names[USRQUOTA])
+	if (F2FS_OPTION(sbi).s_qf_names[USRQUOTA] ||
+			F2FS_OPTION(sbi).s_qf_names[GRPQUOTA] ||
+			F2FS_OPTION(sbi).s_qf_names[PRJQUOTA]) {
+		if (test_opt(sbi, USRQUOTA) &&
+				F2FS_OPTION(sbi).s_qf_names[USRQUOTA])
 			clear_opt(sbi, USRQUOTA);
-		if (test_opt(sbi, GRPQUOTA) && sbi->s_qf_names[GRPQUOTA])
+		if (test_opt(sbi, GRPQUOTA) &&
+				F2FS_OPTION(sbi).s_qf_names[GRPQUOTA])
 			clear_opt(sbi, GRPQUOTA);
-		if (test_opt(sbi, PRJQUOTA) && sbi->s_qf_names[PRJQUOTA])
+		if (test_opt(sbi, PRJQUOTA) &&
+				F2FS_OPTION(sbi).s_qf_names[PRJQUOTA])
 			clear_opt(sbi, PRJQUOTA);
 		if (test_opt(sbi, GRPQUOTA) || test_opt(sbi, USRQUOTA) ||
@@ -320,19 +335,19 @@ static int f2fs_check_quota_options(struct f2fs_sb_info *sbi)
 			return -1;
 		}
-		if (!sbi->s_jquota_fmt) {
+		if (!F2FS_OPTION(sbi).s_jquota_fmt) {
 			f2fs_msg(sbi->sb, KERN_ERR, "journaled quota format "
 					"not specified");
 			return -1;
 		}
 	}
-	if (f2fs_sb_has_quota_ino(sbi->sb) && sbi->s_jquota_fmt) {
+	if (f2fs_sb_has_quota_ino(sbi->sb) && F2FS_OPTION(sbi).s_jquota_fmt) {
 		f2fs_msg(sbi->sb, KERN_INFO,
 			"QUOTA feature is enabled, so ignore jquota_fmt");
-		sbi->s_jquota_fmt = 0;
+		F2FS_OPTION(sbi).s_jquota_fmt = 0;
 	}
-	if (f2fs_sb_has_quota_ino(sbi->sb) && sb_rdonly(sbi->sb)) {
+	if (f2fs_sb_has_quota_ino(sbi->sb) && f2fs_readonly(sbi->sb)) {
 		f2fs_msg(sbi->sb, KERN_INFO,
 			"Filesystem with quota feature cannot be mounted RDWR "
 			"without CONFIG_QUOTA");
@@ -403,14 +418,14 @@ static int parse_options(struct super_block *sb, char *options)
 			q = bdev_get_queue(sb->s_bdev);
 			if (blk_queue_discard(q)) {
 				set_opt(sbi, DISCARD);
-			} else if (!f2fs_sb_mounted_blkzoned(sb)) {
+			} else if (!f2fs_sb_has_blkzoned(sb)) {
 				f2fs_msg(sb, KERN_WARNING,
 					"mounting with \"discard\" option, but "
 					"the device does not support discard");
 			}
 			break;
 		case Opt_nodiscard:
-			if (f2fs_sb_mounted_blkzoned(sb)) {
+			if (f2fs_sb_has_blkzoned(sb)) {
 				f2fs_msg(sb, KERN_WARNING,
 					"discard is required for zoned block devices");
 				return -EINVAL;
@@ -440,7 +455,7 @@ static int parse_options(struct super_block *sb, char *options)
 			if (args->from && match_int(args, &arg))
 				return -EINVAL;
 			set_opt(sbi, INLINE_XATTR_SIZE);
-			sbi->inline_xattr_size = arg;
+			F2FS_OPTION(sbi).inline_xattr_size = arg;
 			break;
 #else
 		case Opt_user_xattr:
@@ -480,7 +495,7 @@ static int parse_options(struct super_block *sb, char *options)
 				return -EINVAL;
 			if (arg != 2 && arg != 4 && arg != NR_CURSEG_TYPE)
 				return -EINVAL;
-			sbi->active_logs = arg;
+			F2FS_OPTION(sbi).active_logs = arg;
 			break;
 		case Opt_disable_ext_identify:
 			set_opt(sbi, DISABLE_EXT_IDENTIFY);
@@ -524,9 +539,9 @@ static int parse_options(struct super_block *sb, char *options)
 			if (test_opt(sbi, RESERVE_ROOT)) {
 				f2fs_msg(sb, KERN_INFO,
 					"Preserve previous reserve_root=%u",
-					sbi->root_reserved_blocks);
+					F2FS_OPTION(sbi).root_reserved_blocks);
 			} else {
-				sbi->root_reserved_blocks = arg;
+				F2FS_OPTION(sbi).root_reserved_blocks = arg;
 				set_opt(sbi, RESERVE_ROOT);
 			}
 			break;
@@ -539,7 +554,7 @@ static int parse_options(struct super_block *sb, char *options)
 					"Invalid uid value %d", arg);
 				return -EINVAL;
 			}
-			sbi->s_resuid = uid;
+			F2FS_OPTION(sbi).s_resuid = uid;
 			break;
 		case Opt_resgid:
 			if (args->from && match_int(args, &arg))
@@ -550,7 +565,7 @@ static int parse_options(struct super_block *sb, char *options)
 					"Invalid gid value %d", arg);
 				return -EINVAL;
 			}
-			sbi->s_resgid = gid;
+			F2FS_OPTION(sbi).s_resgid = gid;
 			break;
 		case Opt_mode:
 			name = match_strdup(&args[0]);
@@ -559,7 +574,7 @@ static int parse_options(struct super_block *sb, char *options)
 				return -ENOMEM;
 			if (strlen(name) == 8 &&
 					!strncmp(name, "adaptive", 8)) {
-				if (f2fs_sb_mounted_blkzoned(sb)) {
+				if (f2fs_sb_has_blkzoned(sb)) {
 					f2fs_msg(sb, KERN_WARNING,
 						"adaptive mode is not allowed with "
 						"zoned block device feature");
@@ -585,7 +600,7 @@ static int parse_options(struct super_block *sb, char *options)
 					1 << arg, BIO_MAX_PAGES);
 				return -EINVAL;
 			}
-			sbi->write_io_size_bits = arg;
+			F2FS_OPTION(sbi).write_io_size_bits = arg;
 			break;
 		case Opt_fault_injection:
 			if (args->from && match_int(args, &arg))
@@ -646,13 +661,13 @@ static int parse_options(struct super_block *sb, char *options)
 				return ret;
 			break;
 		case Opt_jqfmt_vfsold:
-			sbi->s_jquota_fmt = QFMT_VFS_OLD;
+			F2FS_OPTION(sbi).s_jquota_fmt = QFMT_VFS_OLD;
 			break;
 		case Opt_jqfmt_vfsv0:
-			sbi->s_jquota_fmt = QFMT_VFS_V0;
+			F2FS_OPTION(sbi).s_jquota_fmt = QFMT_VFS_V0;
 			break;
 		case Opt_jqfmt_vfsv1:
-			sbi->s_jquota_fmt = QFMT_VFS_V1;
+			F2FS_OPTION(sbi).s_jquota_fmt = QFMT_VFS_V1;
 			break;
 		case Opt_noquota:
 			clear_opt(sbi, QUOTA);
@@ -679,6 +694,73 @@ static int parse_options(struct super_block *sb, char *options)
 				"quota operations not supported");
 			break;
 #endif
+		case Opt_whint:
+			name = match_strdup(&args[0]);
+			if (!name)
+				return -ENOMEM;
+			if (strlen(name) == 10 &&
+					!strncmp(name, "user-based", 10)) {
+				F2FS_OPTION(sbi).whint_mode = WHINT_MODE_USER;
+			} else if (strlen(name) == 3 &&
+					!strncmp(name, "off", 3)) {
+				F2FS_OPTION(sbi).whint_mode = WHINT_MODE_OFF;
+			} else if (strlen(name) == 8 &&
+					!strncmp(name, "fs-based", 8)) {
+				F2FS_OPTION(sbi).whint_mode = WHINT_MODE_FS;
+			} else {
+				kfree(name);
+				return -EINVAL;
+			}
+			kfree(name);
+			break;
+		case Opt_alloc:
+			name = match_strdup(&args[0]);
+			if (!name)
+				return -ENOMEM;
+			if (strlen(name) == 7 &&
+					!strncmp(name, "default", 7)) {
+				F2FS_OPTION(sbi).alloc_mode = ALLOC_MODE_DEFAULT;
+			} else if (strlen(name) == 5 &&
+					!strncmp(name, "reuse", 5)) {
+				F2FS_OPTION(sbi).alloc_mode = ALLOC_MODE_REUSE;
+			} else {
+				kfree(name);
+				return -EINVAL;
+			}
+			kfree(name);
+			break;
+		case Opt_fsync:
+			name = match_strdup(&args[0]);
+			if (!name)
+				return -ENOMEM;
+			if (strlen(name) == 5 &&
+					!strncmp(name, "posix", 5)) {
+				F2FS_OPTION(sbi).fsync_mode = FSYNC_MODE_POSIX;
+			} else if (strlen(name) == 6 &&
+					!strncmp(name, "strict", 6)) {
+				F2FS_OPTION(sbi).fsync_mode = FSYNC_MODE_STRICT;
+			} else {
+				kfree(name);
+				return -EINVAL;
+			}
+			kfree(name);
+			break;
+		case Opt_test_dummy_encryption:
+#ifdef CONFIG_F2FS_FS_ENCRYPTION
+			if (!f2fs_sb_has_encrypt(sb)) {
+				f2fs_msg(sb, KERN_ERR, "Encrypt feature is off");
+				return -EINVAL;
+			}
+			F2FS_OPTION(sbi).test_dummy_encryption = true;
+			f2fs_msg(sb, KERN_INFO,
+					"Test dummy encryption mode enabled");
+#else
+			f2fs_msg(sb, KERN_INFO,
+					"Test dummy encryption mount option ignored");
+#endif
+			break;
 		default:
 			f2fs_msg(sb, KERN_ERR,
 				"Unrecognized mount option \"%s\" or missing value",
@@ -699,14 +781,22 @@ static int parse_options(struct super_block *sb, char *options)
 	}
 	if (test_opt(sbi, INLINE_XATTR_SIZE)) {
+		if (!f2fs_sb_has_extra_attr(sb) ||
+			!f2fs_sb_has_flexible_inline_xattr(sb)) {
+			f2fs_msg(sb, KERN_ERR,
+					"extra_attr or flexible_inline_xattr "
+					"feature is off");
+			return -EINVAL;
+		}
 		if (!test_opt(sbi, INLINE_XATTR)) {
 			f2fs_msg(sb, KERN_ERR,
 					"inline_xattr_size option should be "
 					"set with inline_xattr option");
 			return -EINVAL;
 		}
-		if (!sbi->inline_xattr_size ||
-			sbi->inline_xattr_size >= DEF_ADDRS_PER_INODE -
+		if (!F2FS_OPTION(sbi).inline_xattr_size ||
+			F2FS_OPTION(sbi).inline_xattr_size >=
+					DEF_ADDRS_PER_INODE -
 					F2FS_TOTAL_EXTRA_ATTR_SIZE -
 					DEF_INLINE_RESERVED_SIZE -
 					DEF_MIN_INLINE_SIZE) {
@@ -715,6 +805,12 @@ static int parse_options(struct super_block *sb, char *options)
 			return -EINVAL;
 		}
 	}
+
+	/* Not pass down write hints if the number of active logs is lesser
+	 * than NR_CURSEG_TYPE.
+	 */
+	if (F2FS_OPTION(sbi).active_logs != NR_CURSEG_TYPE)
+		F2FS_OPTION(sbi).whint_mode = WHINT_MODE_OFF;
 	return 0;
 }
@@ -731,7 +827,6 @@ static struct inode *f2fs_alloc_inode(struct super_block *sb)
 	/* Initialize f2fs-specific inode info */
 	atomic_set(&fi->dirty_pages, 0);
 	fi->i_current_depth = 1;
-	fi->i_advise = 0;
 	init_rwsem(&fi->i_sem);
 	INIT_LIST_HEAD(&fi->dirty_list);
 	INIT_LIST_HEAD(&fi->gdirty_list);
@@ -743,10 +838,6 @@ static struct inode *f2fs_alloc_inode(struct super_block *sb)
 	init_rwsem(&fi->i_mmap_sem);
 	init_rwsem(&fi->i_xattr_sem);
-#ifdef CONFIG_QUOTA
-	memset(&fi->i_dquot, 0, sizeof(fi->i_dquot));
-	fi->i_reserved_quota = 0;
-#endif
 	/* Will be used by directory only */
 	fi->i_dir_level = F2FS_SB(sb)->dir_level;
@@ -956,7 +1047,7 @@ static void f2fs_put_super(struct super_block *sb)
 	mempool_destroy(sbi->write_io_dummy);
 #ifdef CONFIG_QUOTA
 	for (i = 0; i < MAXQUOTAS; i++)
-		kfree(sbi->s_qf_names[i]);
+		kfree(F2FS_OPTION(sbi).s_qf_names[i]);
 #endif
 	destroy_percpu_info(sbi);
 	for (i = 0; i < NR_PAGE_TYPE; i++)
@@ -1070,8 +1161,9 @@ static int f2fs_statfs(struct dentry *dentry, struct kstatfs *buf)
 	buf->f_blocks = total_count - start_count;
 	buf->f_bfree = user_block_count - valid_user_blocks(sbi) -
 						sbi->current_reserved_blocks;
-	if (buf->f_bfree > sbi->root_reserved_blocks)
-		buf->f_bavail = buf->f_bfree - sbi->root_reserved_blocks;
+	if (buf->f_bfree > F2FS_OPTION(sbi).root_reserved_blocks)
+		buf->f_bavail = buf->f_bfree -
+				F2FS_OPTION(sbi).root_reserved_blocks;
 	else
 		buf->f_bavail = 0;
@@ -1106,10 +1198,10 @@ static inline void f2fs_show_quota_options(struct seq_file *seq,
 #ifdef CONFIG_QUOTA
 	struct f2fs_sb_info *sbi = F2FS_SB(sb);
-	if (sbi->s_jquota_fmt) {
+	if (F2FS_OPTION(sbi).s_jquota_fmt) {
 		char *fmtname = "";
-		switch (sbi->s_jquota_fmt) {
+		switch (F2FS_OPTION(sbi).s_jquota_fmt) {
 		case QFMT_VFS_OLD:
 			fmtname = "vfsold";
 			break;
@@ -1123,14 +1215,17 @@ static inline void f2fs_show_quota_options(struct seq_file *seq,
 		seq_printf(seq, ",jqfmt=%s", fmtname);
 	}
-	if (sbi->s_qf_names[USRQUOTA])
-		seq_show_option(seq, "usrjquota", sbi->s_qf_names[USRQUOTA]);
+	if (F2FS_OPTION(sbi).s_qf_names[USRQUOTA])
+		seq_show_option(seq, "usrjquota",
+			F2FS_OPTION(sbi).s_qf_names[USRQUOTA]);
-	if (sbi->s_qf_names[GRPQUOTA])
-		seq_show_option(seq, "grpjquota", sbi->s_qf_names[GRPQUOTA]);
+	if (F2FS_OPTION(sbi).s_qf_names[GRPQUOTA])
+		seq_show_option(seq, "grpjquota",
+			F2FS_OPTION(sbi).s_qf_names[GRPQUOTA]);
-	if (sbi->s_qf_names[PRJQUOTA])
-		seq_show_option(seq, "prjjquota", sbi->s_qf_names[PRJQUOTA]);
+	if (F2FS_OPTION(sbi).s_qf_names[PRJQUOTA])
+		seq_show_option(seq, "prjjquota",
+			F2FS_OPTION(sbi).s_qf_names[PRJQUOTA]);
 #endif
 }
@@ -1165,7 +1260,7 @@ static int f2fs_show_options(struct seq_file *seq, struct dentry *root)
 		seq_puts(seq, ",noinline_xattr");
 	if (test_opt(sbi, INLINE_XATTR_SIZE))
 		seq_printf(seq, ",inline_xattr_size=%u",
-			sbi->inline_xattr_size);
+			F2FS_OPTION(sbi).inline_xattr_size);
 #endif
 #ifdef CONFIG_F2FS_FS_POSIX_ACL
 	if (test_opt(sbi, POSIX_ACL))
@@ -1201,18 +1296,20 @@ static int f2fs_show_options(struct seq_file *seq, struct dentry *root)
 		seq_puts(seq, "adaptive");
 	else if (test_opt(sbi, LFS))
 		seq_puts(seq, "lfs");
-	seq_printf(seq, ",active_logs=%u", sbi->active_logs);
+	seq_printf(seq, ",active_logs=%u", F2FS_OPTION(sbi).active_logs);
 	if (test_opt(sbi, RESERVE_ROOT))
 		seq_printf(seq, ",reserve_root=%u,resuid=%u,resgid=%u",
-				sbi->root_reserved_blocks,
-				from_kuid_munged(&init_user_ns, sbi->s_resuid),
-				from_kgid_munged(&init_user_ns, sbi->s_resgid));
+				F2FS_OPTION(sbi).root_reserved_blocks,
+				from_kuid_munged(&init_user_ns,
+					F2FS_OPTION(sbi).s_resuid),
+				from_kgid_munged(&init_user_ns,
+					F2FS_OPTION(sbi).s_resgid));
 	if (F2FS_IO_SIZE_BITS(sbi))
 		seq_printf(seq, ",io_size=%uKB", F2FS_IO_SIZE_KB(sbi));
 #ifdef CONFIG_F2FS_FAULT_INJECTION
 	if (test_opt(sbi, FAULT_INJECTION))
 		seq_printf(seq, ",fault_injection=%u",
-				sbi->fault_info.inject_rate);
+				F2FS_OPTION(sbi).fault_info.inject_rate);
 #endif
 #ifdef CONFIG_QUOTA
 	if (test_opt(sbi, QUOTA))
@@ -1225,15 +1322,37 @@ static int f2fs_show_options(struct seq_file *seq, struct dentry *root)
 		seq_puts(seq, ",prjquota");
 #endif
 	f2fs_show_quota_options(seq, sbi->sb);
+	if (F2FS_OPTION(sbi).whint_mode == WHINT_MODE_USER)
+		seq_printf(seq, ",whint_mode=%s", "user-based");
+	else if (F2FS_OPTION(sbi).whint_mode == WHINT_MODE_FS)
+		seq_printf(seq, ",whint_mode=%s", "fs-based");
+#ifdef CONFIG_F2FS_FS_ENCRYPTION
+	if (F2FS_OPTION(sbi).test_dummy_encryption)
+		seq_puts(seq, ",test_dummy_encryption");
+#endif
+	if (F2FS_OPTION(sbi).alloc_mode == ALLOC_MODE_DEFAULT)
+		seq_printf(seq, ",alloc_mode=%s", "default");
+	else if (F2FS_OPTION(sbi).alloc_mode == ALLOC_MODE_REUSE)
+		seq_printf(seq, ",alloc_mode=%s", "reuse");
+	if (F2FS_OPTION(sbi).fsync_mode == FSYNC_MODE_POSIX)
+		seq_printf(seq, ",fsync_mode=%s", "posix");
+	else if (F2FS_OPTION(sbi).fsync_mode == FSYNC_MODE_STRICT)
+		seq_printf(seq, ",fsync_mode=%s", "strict");
 	return 0;
 }
 static void default_options(struct f2fs_sb_info *sbi)
 {
 	/* init some FS parameters */
-	sbi->active_logs = NR_CURSEG_TYPE;
-	sbi->inline_xattr_size = DEFAULT_INLINE_XATTR_ADDRS;
+	F2FS_OPTION(sbi).active_logs = NR_CURSEG_TYPE;
+	F2FS_OPTION(sbi).inline_xattr_size = DEFAULT_INLINE_XATTR_ADDRS;
+	F2FS_OPTION(sbi).whint_mode = WHINT_MODE_OFF;
+	F2FS_OPTION(sbi).alloc_mode = ALLOC_MODE_DEFAULT;
+	F2FS_OPTION(sbi).fsync_mode = FSYNC_MODE_POSIX;
+	F2FS_OPTION(sbi).test_dummy_encryption = false;
+	sbi->readdir_ra = 1;
 	set_opt(sbi, BG_GC);
 	set_opt(sbi, INLINE_XATTR);
@@ -1243,7 +1362,7 @@ static void default_options(struct f2fs_sb_info *sbi)
 	set_opt(sbi, NOHEAP);
 	sbi->sb->s_flags |= SB_LAZYTIME;
 	set_opt(sbi, FLUSH_MERGE);
-	if (f2fs_sb_mounted_blkzoned(sbi->sb)) {
+	if (f2fs_sb_has_blkzoned(sbi->sb)) {
 		set_opt_mode(sbi, F2FS_MOUNT_LFS);
 		set_opt(sbi, DISCARD);
 	} else {
@@ -1270,16 +1389,11 @@ static int f2fs_remount(struct super_block *sb, int *flags, char *data)
 	struct f2fs_sb_info *sbi = F2FS_SB(sb);
 	struct f2fs_mount_info org_mount_opt;
 	unsigned long old_sb_flags;
-	int err, active_logs;
+	int err;
 	bool need_restart_gc = false;
 	bool need_stop_gc = false;
 	bool no_extent_cache = !test_opt(sbi, EXTENT_CACHE);
-#ifdef CONFIG_F2FS_FAULT_INJECTION
-	struct f2fs_fault_info ffi = sbi->fault_info;
-#endif
 #ifdef CONFIG_QUOTA
-	int s_jquota_fmt;
-	char *s_qf_names[MAXQUOTAS];
 	int i, j;
 #endif
@@ -1289,21 +1403,21 @@ static int f2fs_remount(struct super_block *sb, int *flags, char *data)
 	 */
 	org_mount_opt = sbi->mount_opt;
 	old_sb_flags = sb->s_flags;
-	active_logs = sbi->active_logs;
 #ifdef CONFIG_QUOTA
-	s_jquota_fmt = sbi->s_jquota_fmt;
+	org_mount_opt.s_jquota_fmt = F2FS_OPTION(sbi).s_jquota_fmt;
 	for (i = 0; i < MAXQUOTAS; i++) {
-		if (sbi->s_qf_names[i]) {
-			s_qf_names[i] = kstrdup(sbi->s_qf_names[i],
+		if (F2FS_OPTION(sbi).s_qf_names[i]) {
+			org_mount_opt.s_qf_names[i] =
+				kstrdup(F2FS_OPTION(sbi).s_qf_names[i],
 				GFP_KERNEL);
-			if (!s_qf_names[i]) {
+			if (!org_mount_opt.s_qf_names[i]) {
 				for (j = 0; j < i; j++)
-					kfree(s_qf_names[j]);
+					kfree(org_mount_opt.s_qf_names[j]);
 				return -ENOMEM;
 			}
 		} else {
-			s_qf_names[i] = NULL;
+			org_mount_opt.s_qf_names[i] = NULL;
 		}
 	}
 #endif
@@ -1373,7 +1487,8 @@ static int f2fs_remount(struct super_block *sb, int *flags, char *data)
 		need_stop_gc = true;
 	}
-	if (*flags & SB_RDONLY) {
+	if (*flags & SB_RDONLY ||
+		F2FS_OPTION(sbi).whint_mode != org_mount_opt.whint_mode) {
 		writeback_inodes_sb(sb, WB_REASON_SYNC);
 		sync_inodes_sb(sb);
@@ -1399,7 +1514,7 @@ static int f2fs_remount(struct super_block *sb, int *flags, char *data)
 #ifdef CONFIG_QUOTA
 	/* Release old quota file names */
 	for (i = 0; i < MAXQUOTAS; i++)
-		kfree(s_qf_names[i]);
+		kfree(org_mount_opt.s_qf_names[i]);
 #endif
 	/* Update the POSIXACL Flag */
 	sb->s_flags = (sb->s_flags & ~SB_POSIXACL) |
@@ -1417,18 +1532,14 @@ static int f2fs_remount(struct super_block *sb, int *flags, char *data)
 	}
 restore_opts:
 #ifdef CONFIG_QUOTA
-	sbi->s_jquota_fmt = s_jquota_fmt;
+	F2FS_OPTION(sbi).s_jquota_fmt = org_mount_opt.s_jquota_fmt;
 	for (i = 0; i < MAXQUOTAS; i++) {
-		kfree(sbi->s_qf_names[i]);
-		sbi->s_qf_names[i] = s_qf_names[i];
+		kfree(F2FS_OPTION(sbi).s_qf_names[i]);
+		F2FS_OPTION(sbi).s_qf_names[i] = org_mount_opt.s_qf_names[i];
 	}
 #endif
 	sbi->mount_opt = org_mount_opt;
-	sbi->active_logs = active_logs;
 	sb->s_flags = old_sb_flags;
-#ifdef CONFIG_F2FS_FAULT_INJECTION
-	sbi->fault_info = ffi;
-#endif
 	return err;
 }
@@ -1456,7 +1567,7 @@ static ssize_t f2fs_quota_read(struct super_block *sb, int type, char *data,
 	while (toread > 0) {
 		tocopy = min_t(unsigned long, sb->s_blocksize - offset, toread);
 repeat:
-		page = read_mapping_page(mapping, blkidx, NULL);
+		page = read_cache_page_gfp(mapping, blkidx, GFP_NOFS);
 		if (IS_ERR(page)) {
 			if (PTR_ERR(page) == -ENOMEM) {
 				congestion_wait(BLK_RW_ASYNC, HZ/50);
@@ -1550,8 +1661,8 @@ static qsize_t *f2fs_get_reserved_space(struct inode *inode)
 static int f2fs_quota_on_mount(struct f2fs_sb_info *sbi, int type)
 {
-	return dquot_quota_on_mount(sbi->sb, sbi->s_qf_names[type],
-					sbi->s_jquota_fmt, type);
+	return dquot_quota_on_mount(sbi->sb, F2FS_OPTION(sbi).s_qf_names[type],
+					F2FS_OPTION(sbi).s_jquota_fmt, type);
 }
 int f2fs_enable_quota_files(struct f2fs_sb_info *sbi, bool rdonly)
@@ -1570,7 +1681,7 @@ int f2fs_enable_quota_files(struct f2fs_sb_info *sbi, bool rdonly)
 	}
 	for (i = 0; i < MAXQUOTAS; i++) {
-		if (sbi->s_qf_names[i]) {
+		if (F2FS_OPTION(sbi).s_qf_names[i]) {
 			err = f2fs_quota_on_mount(sbi, i);
 			if (!err) {
 				enabled = 1;
@@ -1797,11 +1908,28 @@ static int f2fs_get_context(struct inode *inode, void *ctx, size_t len)
 static int f2fs_set_context(struct inode *inode, const void *ctx, size_t len,
 							void *fs_data)
 {
+	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+
+	/*
+	 * Encrypting the root directory is not allowed because fsck
+	 * expects lost+found directory to exist and remain unencrypted
+	 * if LOST_FOUND feature is enabled.
+	 */
+	if (f2fs_sb_has_lost_found(sbi->sb) &&
+			inode->i_ino == F2FS_ROOT_INO(sbi))
+		return -EPERM;
+
 	return f2fs_setxattr(inode, F2FS_XATTR_INDEX_ENCRYPTION,
 				F2FS_XATTR_NAME_ENCRYPTION_CONTEXT,
 				ctx, len, fs_data, XATTR_CREATE);
 }
+
+static bool f2fs_dummy_context(struct inode *inode)
+{
+	return DUMMY_ENCRYPTION_ENABLED(F2FS_I_SB(inode));
+}
 static unsigned f2fs_max_namelen(struct inode *inode)
 {
 	return S_ISLNK(inode->i_mode) ?
@@ -1812,6 +1940,7 @@ static const struct fscrypt_operations f2fs_cryptops = {
 	.key_prefix	= "f2fs:",
 	.get_context	= f2fs_get_context,
 	.set_context	= f2fs_set_context,
+	.dummy_context	= f2fs_dummy_context,
 	.empty_dir	= f2fs_empty_dir,
 	.max_namelen	= f2fs_max_namelen,
 };
@@ -1894,7 +2023,6 @@ static int __f2fs_commit_super(struct buffer_head *bh,
 	lock_buffer(bh);
 	if (super)
 		memcpy(bh->b_data + F2FS_SUPER_OFFSET, super, sizeof(*super));
-	set_buffer_uptodate(bh);
 	set_buffer_dirty(bh);
 	unlock_buffer(bh);
@@ -2181,6 +2309,8 @@ static void init_sb_info(struct f2fs_sb_info *sbi)
 	sbi->dirty_device = 0;
 	spin_lock_init(&sbi->dev_lock);
+
+	init_rwsem(&sbi->sb_lock);
 }
 static int init_percpu_info(struct f2fs_sb_info *sbi)
@@ -2206,7 +2336,7 @@ static int init_blkz_info(struct f2fs_sb_info *sbi, int devi)
 	unsigned int n = 0;
 	int err = -EIO;
 
-	if (!f2fs_sb_mounted_blkzoned(sbi->sb))
+	if (!f2fs_sb_has_blkzoned(sbi->sb))
 		return 0;
 
 	if (sbi->blocks_per_blkz && sbi->blocks_per_blkz !=
@@ -2334,7 +2464,7 @@ int f2fs_commit_super(struct f2fs_sb_info *sbi, bool recover)
 	}
 
 	/* write back-up superblock first */
-	bh = sb_getblk(sbi->sb, sbi->valid_super_block ? 0: 1);
+	bh = sb_bread(sbi->sb, sbi->valid_super_block ? 0 : 1);
 	if (!bh)
 		return -EIO;
 	err = __f2fs_commit_super(bh, F2FS_RAW_SUPER(sbi));
@@ -2345,7 +2475,7 @@ int f2fs_commit_super(struct f2fs_sb_info *sbi, bool recover)
 		return err;
 
 	/* write current valid superblock */
-	bh = sb_getblk(sbi->sb, sbi->valid_super_block);
+	bh = sb_bread(sbi->sb, sbi->valid_super_block);
 	if (!bh)
 		return -EIO;
 	err = __f2fs_commit_super(bh, F2FS_RAW_SUPER(sbi));
@@ -2413,7 +2543,7 @@ static int f2fs_scan_devices(struct f2fs_sb_info *sbi)
 #ifdef CONFIG_BLK_DEV_ZONED
 		if (bdev_zoned_model(FDEV(i).bdev) == BLK_ZONED_HM &&
-				!f2fs_sb_mounted_blkzoned(sbi->sb)) {
+				!f2fs_sb_has_blkzoned(sbi->sb)) {
 			f2fs_msg(sbi->sb, KERN_ERR,
 				"Zoned block device feature not enabled\n");
 			return -EINVAL;
@@ -2447,6 +2577,18 @@ static int f2fs_scan_devices(struct f2fs_sb_info *sbi)
 	return 0;
 }
+static void f2fs_tuning_parameters(struct f2fs_sb_info *sbi)
+{
+	struct f2fs_sm_info *sm_i = SM_I(sbi);
+
+	/* adjust parameters according to the volume size */
+	if (sm_i->main_segments <= SMALL_VOLUME_SEGMENTS) {
+		F2FS_OPTION(sbi).alloc_mode = ALLOC_MODE_REUSE;
+		sm_i->dcc_info->discard_granularity = 1;
+		sm_i->ipu_policy = 1 << F2FS_IPU_FORCE;
+	}
+}
 static int f2fs_fill_super(struct super_block *sb, void *data, int silent)
 {
 	struct f2fs_sb_info *sbi;
@@ -2494,8 +2636,8 @@ static int f2fs_fill_super(struct super_block *sb, void *data, int silent)
 	sb->s_fs_info = sbi;
 	sbi->raw_super = raw_super;
 
-	sbi->s_resuid = make_kuid(&init_user_ns, F2FS_DEF_RESUID);
-	sbi->s_resgid = make_kgid(&init_user_ns, F2FS_DEF_RESGID);
+	F2FS_OPTION(sbi).s_resuid = make_kuid(&init_user_ns, F2FS_DEF_RESUID);
+	F2FS_OPTION(sbi).s_resgid = make_kgid(&init_user_ns, F2FS_DEF_RESGID);
 
 	/* precompute checksum seed for metadata */
 	if (f2fs_sb_has_inode_chksum(sb))
@@ -2508,7 +2650,7 @@ static int f2fs_fill_super(struct super_block *sb, void *data, int silent)
 	 * devices, but mandatory for host-managed zoned block devices.
 	 */
 #ifndef CONFIG_BLK_DEV_ZONED
-	if (f2fs_sb_mounted_blkzoned(sb)) {
+	if (f2fs_sb_has_blkzoned(sb)) {
 		f2fs_msg(sb, KERN_ERR,
 			 "Zoned block device support is not enabled\n");
 		err = -EOPNOTSUPP;
@@ -2724,7 +2866,7 @@ static int f2fs_fill_super(struct super_block *sb, void *data, int silent)
 	 * Turn on quotas which were not enabled for read-only mounts if
 	 * filesystem has quota feature, so that they are updated correctly.
 	 */
-	if (f2fs_sb_has_quota_ino(sb) && !sb_rdonly(sb)) {
+	if (f2fs_sb_has_quota_ino(sb) && !f2fs_readonly(sb)) {
 		err = f2fs_enable_quotas(sb);
 		if (err) {
 			f2fs_msg(sb, KERN_ERR,
@@ -2799,6 +2941,8 @@ static int f2fs_fill_super(struct super_block *sb, void *data, int silent)
 	f2fs_join_shrinker(sbi);
 
+	f2fs_tuning_parameters(sbi);
+
 	f2fs_msg(sbi->sb, KERN_NOTICE, "Mounted with checkpoint version = %llx",
 				cur_cp_version(F2FS_CKPT(sbi)));
 	f2fs_update_time(sbi, CP_TIME);
@@ -2807,7 +2951,7 @@ static int f2fs_fill_super(struct super_block *sb, void *data, int silent)
 free_meta:
 #ifdef CONFIG_QUOTA
-	if (f2fs_sb_has_quota_ino(sb) && !sb_rdonly(sb))
+	if (f2fs_sb_has_quota_ino(sb) && !f2fs_readonly(sb))
 		f2fs_quota_off_umount(sbi->sb);
 #endif
 	f2fs_sync_inode_meta(sbi);
@@ -2851,7 +2995,7 @@ static int f2fs_fill_super(struct super_block *sb, void *data, int silent)
 free_options:
 #ifdef CONFIG_QUOTA
 	for (i = 0; i < MAXQUOTAS; i++)
-		kfree(sbi->s_qf_names[i]);
+		kfree(F2FS_OPTION(sbi).s_qf_names[i]);
 #endif
 	kfree(options);
 free_sb_buf:
...
@@ -58,7 +58,7 @@ static unsigned char *__struct_ptr(struct f2fs_sb_info *sbi, int struct_type)
 #ifdef CONFIG_F2FS_FAULT_INJECTION
 	else if (struct_type == FAULT_INFO_RATE ||
 					struct_type == FAULT_INFO_TYPE)
-		return (unsigned char *)&sbi->fault_info;
+		return (unsigned char *)&F2FS_OPTION(sbi).fault_info;
 #endif
 	return NULL;
 }
@@ -92,10 +92,10 @@ static ssize_t features_show(struct f2fs_attr *a,
 	if (!sb->s_bdev->bd_part)
 		return snprintf(buf, PAGE_SIZE, "0\n");
 
-	if (f2fs_sb_has_crypto(sb))
+	if (f2fs_sb_has_encrypt(sb))
 		len += snprintf(buf, PAGE_SIZE - len, "%s",
 						"encryption");
-	if (f2fs_sb_mounted_blkzoned(sb))
+	if (f2fs_sb_has_blkzoned(sb))
 		len += snprintf(buf + len, PAGE_SIZE - len, "%s%s",
 				len ? ", " : "", "blkzoned");
 	if (f2fs_sb_has_extra_attr(sb))
@@ -116,6 +116,9 @@ static ssize_t features_show(struct f2fs_attr *a,
 	if (f2fs_sb_has_inode_crtime(sb))
 		len += snprintf(buf + len, PAGE_SIZE - len, "%s%s",
 				len ? ", " : "", "inode_crtime");
+	if (f2fs_sb_has_lost_found(sb))
+		len += snprintf(buf + len, PAGE_SIZE - len, "%s%s",
+				len ? ", " : "", "lost_found");
 	len += snprintf(buf + len, PAGE_SIZE - len, "\n");
 	return len;
 }
@@ -136,6 +139,27 @@ static ssize_t f2fs_sbi_show(struct f2fs_attr *a,
 	if (!ptr)
 		return -EINVAL;
 
+	if (!strcmp(a->attr.name, "extension_list")) {
+		__u8 (*extlist)[F2FS_EXTENSION_LEN] =
+					sbi->raw_super->extension_list;
+		int cold_count = le32_to_cpu(sbi->raw_super->extension_count);
+		int hot_count = sbi->raw_super->hot_ext_count;
+		int len = 0, i;
+
+		len += snprintf(buf + len, PAGE_SIZE - len,
+						"cold file extension:\n");
+		for (i = 0; i < cold_count; i++)
+			len += snprintf(buf + len, PAGE_SIZE - len, "%s\n",
+								extlist[i]);
+		len += snprintf(buf + len, PAGE_SIZE - len,
+						"hot file extension:\n");
+		for (i = cold_count; i < cold_count + hot_count; i++)
+			len += snprintf(buf + len, PAGE_SIZE - len, "%s\n",
+								extlist[i]);
+		return len;
+	}
+
 	ui = (unsigned int *)(ptr + a->offset);
 
 	return snprintf(buf, PAGE_SIZE, "%u\n", *ui);
@@ -154,6 +178,41 @@ static ssize_t f2fs_sbi_store(struct f2fs_attr *a,
 	if (!ptr)
 		return -EINVAL;
 
+	if (!strcmp(a->attr.name, "extension_list")) {
+		const char *name = strim((char *)buf);
+		bool set = true, hot;
+
+		if (!strncmp(name, "[h]", 3))
+			hot = true;
+		else if (!strncmp(name, "[c]", 3))
+			hot = false;
+		else
+			return -EINVAL;
+
+		name += 3;
+
+		if (*name == '!') {
+			name++;
+			set = false;
+		}
+
+		if (strlen(name) >= F2FS_EXTENSION_LEN)
+			return -EINVAL;
+
+		down_write(&sbi->sb_lock);
+
+		ret = update_extension_list(sbi, name, hot, set);
+		if (ret)
+			goto out;
+
+		ret = f2fs_commit_super(sbi, false);
+		if (ret)
+			update_extension_list(sbi, name, hot, !set);
+out:
+		up_write(&sbi->sb_lock);
+		return ret ? ret : count;
+	}
 	ui = (unsigned int *)(ptr + a->offset);
 
 	ret = kstrtoul(skip_spaces(buf), 0, &t);
@@ -166,7 +225,7 @@ static ssize_t f2fs_sbi_store(struct f2fs_attr *a,
 	if (a->struct_type == RESERVED_BLOCKS) {
 		spin_lock(&sbi->stat_lock);
 		if (t > (unsigned long)(sbi->user_block_count -
-				sbi->root_reserved_blocks)) {
+				F2FS_OPTION(sbi).root_reserved_blocks)) {
 			spin_unlock(&sbi->stat_lock);
 			return -EINVAL;
 		}
@@ -236,6 +295,7 @@ enum feat_id {
 	FEAT_FLEXIBLE_INLINE_XATTR,
 	FEAT_QUOTA_INO,
 	FEAT_INODE_CRTIME,
+	FEAT_LOST_FOUND,
 };
 
 static ssize_t f2fs_feature_show(struct f2fs_attr *a,
@@ -251,6 +311,7 @@ static ssize_t f2fs_feature_show(struct f2fs_attr *a,
 	case FEAT_FLEXIBLE_INLINE_XATTR:
 	case FEAT_QUOTA_INO:
 	case FEAT_INODE_CRTIME:
+	case FEAT_LOST_FOUND:
 		return snprintf(buf, PAGE_SIZE, "supported\n");
 	}
 	return 0;
@@ -307,6 +368,7 @@ F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, idle_interval, interval_time[REQ_TIME]);
 F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, iostat_enable, iostat_enable);
 F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, readdir_ra, readdir_ra);
 F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, gc_pin_file_thresh, gc_pin_file_threshold);
+F2FS_RW_ATTR(F2FS_SBI, f2fs_super_block, extension_list, extension_list);
 #ifdef CONFIG_F2FS_FAULT_INJECTION
 F2FS_RW_ATTR(FAULT_INFO_RATE, f2fs_fault_info, inject_rate, inject_rate);
 F2FS_RW_ATTR(FAULT_INFO_TYPE, f2fs_fault_info, inject_type, inject_type);
@@ -329,6 +391,7 @@ F2FS_FEATURE_RO_ATTR(inode_checksum, FEAT_INODE_CHECKSUM);
 F2FS_FEATURE_RO_ATTR(flexible_inline_xattr, FEAT_FLEXIBLE_INLINE_XATTR);
 F2FS_FEATURE_RO_ATTR(quota_ino, FEAT_QUOTA_INO);
 F2FS_FEATURE_RO_ATTR(inode_crtime, FEAT_INODE_CRTIME);
+F2FS_FEATURE_RO_ATTR(lost_found, FEAT_LOST_FOUND);
 
 #define ATTR_LIST(name) (&f2fs_attr_##name.attr)
 static struct attribute *f2fs_attrs[] = {
@@ -357,6 +420,7 @@ static struct attribute *f2fs_attrs[] = {
 	ATTR_LIST(iostat_enable),
 	ATTR_LIST(readdir_ra),
 	ATTR_LIST(gc_pin_file_thresh),
+	ATTR_LIST(extension_list),
 #ifdef CONFIG_F2FS_FAULT_INJECTION
 	ATTR_LIST(inject_rate),
 	ATTR_LIST(inject_type),
@@ -383,6 +447,7 @@ static struct attribute *f2fs_feat_attrs[] = {
 	ATTR_LIST(flexible_inline_xattr),
 	ATTR_LIST(quota_ino),
 	ATTR_LIST(inode_crtime),
+	ATTR_LIST(lost_found),
 	NULL,
 };
...
@@ -21,6 +21,7 @@
 #define F2FS_BLKSIZE			4096	/* support only 4KB block */
 #define F2FS_BLKSIZE_BITS		12	/* bits for F2FS_BLKSIZE */
 #define F2FS_MAX_EXTENSION		64	/* # of extension entries */
+#define F2FS_EXTENSION_LEN		8	/* max size of extension */
 #define F2FS_BLK_ALIGN(x)	(((x) + F2FS_BLKSIZE - 1) >> F2FS_BLKSIZE_BITS)
 
 #define NULL_ADDR		((block_t)0)	/* used as block_t addresses */
@@ -38,15 +39,14 @@
 #define F2FS_MAX_QUOTAS		3
 
-#define F2FS_IO_SIZE(sbi)	(1 << (sbi)->write_io_size_bits) /* Blocks */
-#define F2FS_IO_SIZE_KB(sbi)	(1 << ((sbi)->write_io_size_bits + 2)) /* KB */
-#define F2FS_IO_SIZE_BYTES(sbi)	(1 << ((sbi)->write_io_size_bits + 12)) /* B */
-#define F2FS_IO_SIZE_BITS(sbi)	((sbi)->write_io_size_bits) /* power of 2 */
+#define F2FS_IO_SIZE(sbi)	(1 << F2FS_OPTION(sbi).write_io_size_bits) /* Blocks */
+#define F2FS_IO_SIZE_KB(sbi)	(1 << (F2FS_OPTION(sbi).write_io_size_bits + 2)) /* KB */
+#define F2FS_IO_SIZE_BYTES(sbi)	(1 << (F2FS_OPTION(sbi).write_io_size_bits + 12)) /* B */
+#define F2FS_IO_SIZE_BITS(sbi)	(F2FS_OPTION(sbi).write_io_size_bits) /* power of 2 */
 #define F2FS_IO_SIZE_MASK(sbi)	(F2FS_IO_SIZE(sbi) - 1)
 
 /* This flag is used by node and meta inodes, and by recovery */
 #define GFP_F2FS_ZERO		(GFP_NOFS | __GFP_ZERO)
-#define GFP_F2FS_HIGH_ZERO	(GFP_NOFS | __GFP_ZERO | __GFP_HIGHMEM)
 
 /*
  * For further optimization on multi-head logs, on-disk layout supports maximum
@@ -102,7 +102,7 @@ struct f2fs_super_block {
 	__u8 uuid[16];			/* 128-bit uuid for volume */
 	__le16 volume_name[MAX_VOLUME_NAME];	/* volume name */
 	__le32 extension_count;		/* # of extensions below */
-	__u8 extension_list[F2FS_MAX_EXTENSION][8];	/* extension array */
+	__u8 extension_list[F2FS_MAX_EXTENSION][F2FS_EXTENSION_LEN];/* extension array */
 	__le32 cp_payload;
 	__u8 version[VERSION_LEN];	/* the kernel version */
 	__u8 init_version[VERSION_LEN];	/* the initial kernel version */
@@ -111,12 +111,14 @@ struct f2fs_super_block {
 	__u8 encrypt_pw_salt[16];	/* Salt used for string2key algorithm */
 	struct f2fs_device devs[MAX_DEVICES];	/* device list */
 	__le32 qf_ino[F2FS_MAX_QUOTAS];	/* quota inode numbers */
-	__u8 reserved[315];		/* valid reserved region */
+	__u8 hot_ext_count;		/* # of hot file extension */
+	__u8 reserved[314];		/* valid reserved region */
 } __packed;
 /*
  * For checkpoint
  */
+#define CP_LARGE_NAT_BITMAP_FLAG	0x00000400
 #define CP_NOCRC_RECOVERY_FLAG	0x00000200
 #define CP_TRIMMED_FLAG		0x00000100
 #define CP_NAT_BITS_FLAG	0x00000080
@@ -303,6 +305,10 @@ struct f2fs_node {
  */
 #define NAT_ENTRY_PER_BLOCK (PAGE_SIZE / sizeof(struct f2fs_nat_entry))
 #define NAT_ENTRY_BITMAP_SIZE	((NAT_ENTRY_PER_BLOCK + 7) / 8)
+#define NAT_ENTRY_BITMAP_SIZE_ALIGNED				\
+	((NAT_ENTRY_BITMAP_SIZE + BITS_PER_LONG - 1) /		\
+	BITS_PER_LONG * BITS_PER_LONG)
 
 struct f2fs_nat_entry {
 	__u8 version;		/* latest version of cached nat entry */
...