Commit 6d8ef53e authored by Linus Torvalds

Merge tag 'f2fs-for-4.14' of git://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs

Pull f2fs updates from Jaegeuk Kim:
 "In this round, we've mostly tuned f2fs to provide better user
  experience for Android. Especially, we've worked on atomic write
  feature again with SQLite community in order to support it officially.
  And we added or modified several facilities to analyze and enhance IO
  behaviors.

  Major changes include:
   - add app/fs io stat
   - add inode checksum feature
   - support project/journalled quota
   - enhance atomic write with new ioctl() which exposes feature set
   - enhance background gc/discard/fstrim flows with new gc_urgent mode
   - add F2FS_IOC_FS{GET,SET}XATTR
   - fix some quota flows"

* tag 'f2fs-for-4.14' of git://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs: (63 commits)
  f2fs: hurry up to issue discard after io interruption
  f2fs: fix to show correct discard_granularity in sysfs
  f2fs: detect dirty inode in evict_inode
  f2fs: clear radix tree dirty tag of pages whose dirty flag is cleared
  f2fs: speed up gc_urgent mode with SSR
  f2fs: better to wait for fstrim completion
  f2fs: avoid race in between read xattr & write xattr
  f2fs: make get_lock_data_page to handle encrypted inode
  f2fs: use generic terms used for encrypted block management
  f2fs: introduce f2fs_encrypted_file for clean-up
  Revert "f2fs: add a new function get_ssr_cost"
  f2fs: constify super_operations
  f2fs: fix to wake up all sleeping flusher
  f2fs: avoid race in between atomic_read & atomic_inc
  f2fs: remove unneeded parameter of change_curseg
  f2fs: update i_flags correctly
  f2fs: don't check inode's checksum if it was dirtied or writebacked
  f2fs: don't need to update inode checksum for recovery
  f2fs: trigger fdatasync for non-atomic_write file
  f2fs: fix to avoid race in between aio and gc
  ...
parents cdb897e3 e6c6de18
@@ -57,6 +57,15 @@ Contact: "Jaegeuk Kim" <jaegeuk.kim@samsung.com>
Description:
	Controls the issue rate of small discard commands.
+What: /sys/fs/f2fs/<disk>/discard_granularity
+Date: July 2017
+Contact: "Chao Yu" <yuchao0@huawei.com>
+Description:
+	Controls the discard granularity of the inner discard thread; the
+	thread will not issue discards smaller than this granularity. The
+	unit is one block, and currently only values in the range [1, 512]
+	are supported.
What: /sys/fs/f2fs/<disk>/max_victim_search
Date: January 2014
Contact: "Jaegeuk Kim" <jaegeuk.kim@samsung.com>
@@ -130,3 +139,15 @@ Date: June 2017
Contact: "Chao Yu" <yuchao0@huawei.com>
Description:
	Controls current reserved blocks in system.
+What: /sys/fs/f2fs/<disk>/gc_urgent
+Date: August 2017
+Contact: "Jaegeuk Kim" <jaegeuk@kernel.org>
+Description:
+	Do background GC aggressively
+What: /sys/fs/f2fs/<disk>/gc_urgent_sleep_time
+Date: August 2017
+Contact: "Jaegeuk Kim" <jaegeuk@kernel.org>
+Description:
+	Controls sleep time of GC urgent mode
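For reference, a minimal userspace sketch of driving the two gc_urgent knobs documented above; the device name "sda1" and the values written are example choices, not taken from the patch:

	/* Sketch: enable urgent background GC with a 50 ms interval. */
	#include <fcntl.h>
	#include <stdio.h>
	#include <string.h>
	#include <unistd.h>

	static int write_knob(const char *path, const char *val)
	{
		int fd = open(path, O_WRONLY);

		if (fd < 0 || write(fd, val, strlen(val)) < 0) {
			perror(path);
			if (fd >= 0)
				close(fd);
			return -1;
		}
		return close(fd);
	}

	int main(void)
	{
		write_knob("/sys/fs/f2fs/sda1/gc_urgent_sleep_time", "50");
		return write_knob("/sys/fs/f2fs/sda1/gc_urgent", "1");
	}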
@@ -164,6 +164,16 @@ io_bits=%u Set the bit size of write IO requests. It should be set
	with "mode=lfs".
usrquota Enable plain user disk quota accounting.
grpquota Enable plain group disk quota accounting.
+prjquota Enable plain project quota accounting.
+usrjquota=<file> Appoint specified file and type during mount, so that quota
+grpjquota=<file> information can be properly updated during recovery flow,
+prjjquota=<file> <quota file>: must be in root directory;
+jqfmt=<quota type> <quota type>: [vfsold,vfsv0,vfsv1].
+offusrjquota Turn off user journalled quota.
+offgrpjquota Turn off group journalled quota.
+offprjjquota Turn off project journalled quota.
+quota Enable plain user disk quota accounting.
+noquota Disable all plain disk quota options.
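To illustrate the new options (the device, mount point and quota file names below are hypothetical; only the option strings come from the table above), journalled quota could be requested at mount time roughly like this:

	/* Sketch: mount f2fs with project quota plus journalled user/group quota. */
	#include <stdio.h>
	#include <sys/mount.h>

	int main(void)
	{
		const char *opts = "prjquota,usrjquota=aquota.user,"
				   "grpjquota=aquota.group,jqfmt=vfsv1";

		if (mount("/dev/sdb1", "/mnt/data", "f2fs", 0, opts)) {
			perror("mount");
			return 1;
		}
		return 0;
	}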
================================================================================
DEBUGFS ENTRIES
@@ -209,6 +219,15 @@ Files in /sys/fs/f2fs/<devname>
	gc_idle = 1 will select the Cost Benefit approach
	& setting gc_idle = 2 will select the greedy approach.
+gc_urgent This parameter controls whether background GCs are triggered
+	urgently or not. Setting gc_urgent = 0 [default] restores the
+	default behavior, while setting it to 1 makes the background
+	thread start doing GC at the given gc_urgent_sleep_time
+	interval.
+gc_urgent_sleep_time This parameter controls the sleep time for gc_urgent.
+	500 ms is set by default. See above gc_urgent.
reclaim_segments This parameter controls the number of prefree
	segments to be reclaimed. If the number of prefree
	segments is larger than the number of segments
......
@@ -207,15 +207,16 @@ static int __f2fs_set_acl(struct inode *inode, int type,
void *value = NULL;
size_t size = 0;
int error;
+umode_t mode = inode->i_mode;
switch (type) {
case ACL_TYPE_ACCESS:
name_index = F2FS_XATTR_INDEX_POSIX_ACL_ACCESS;
if (acl && !ipage) {
-error = posix_acl_update_mode(inode, &inode->i_mode, &acl);
+error = posix_acl_update_mode(inode, &mode, &acl);
if (error)
return error;
-set_acl_inode(inode, inode->i_mode);
+set_acl_inode(inode, mode);
}
break;
......
@@ -230,8 +230,9 @@ void ra_meta_pages_cond(struct f2fs_sb_info *sbi, pgoff_t index)
ra_meta_pages(sbi, index, BIO_MAX_PAGES, META_POR, true);
}
-static int f2fs_write_meta_page(struct page *page,
-struct writeback_control *wbc)
+static int __f2fs_write_meta_page(struct page *page,
+struct writeback_control *wbc,
+enum iostat_type io_type)
{
struct f2fs_sb_info *sbi = F2FS_P_SB(page);
@@ -244,7 +245,7 @@ static int f2fs_write_meta_page(struct page *page,
if (unlikely(f2fs_cp_error(sbi)))
goto redirty_out;
-write_meta_page(sbi, page);
+write_meta_page(sbi, page, io_type);
dec_page_count(sbi, F2FS_DIRTY_META);
if (wbc->for_reclaim)
@@ -263,6 +264,12 @@ static int f2fs_write_meta_page(struct page *page,
return AOP_WRITEPAGE_ACTIVATE;
}
+static int f2fs_write_meta_page(struct page *page,
+struct writeback_control *wbc)
+{
+return __f2fs_write_meta_page(page, wbc, FS_META_IO);
+}
static int f2fs_write_meta_pages(struct address_space *mapping,
struct writeback_control *wbc)
{
@@ -283,7 +290,7 @@ static int f2fs_write_meta_pages(struct address_space *mapping,
trace_f2fs_writepages(mapping->host, wbc, META);
diff = nr_pages_to_write(sbi, META, wbc);
-written = sync_meta_pages(sbi, META, wbc->nr_to_write);
+written = sync_meta_pages(sbi, META, wbc->nr_to_write, FS_META_IO);
mutex_unlock(&sbi->cp_mutex);
wbc->nr_to_write = max((long)0, wbc->nr_to_write - written - diff);
return 0;
@@ -295,7 +302,7 @@ static int f2fs_write_meta_pages(struct address_space *mapping,
}
long sync_meta_pages(struct f2fs_sb_info *sbi, enum page_type type,
-long nr_to_write)
+long nr_to_write, enum iostat_type io_type)
{
struct address_space *mapping = META_MAPPING(sbi);
pgoff_t index = 0, end = ULONG_MAX, prev = ULONG_MAX;
@@ -346,7 +353,7 @@ long sync_meta_pages(struct f2fs_sb_info *sbi, enum page_type type,
if (!clear_page_dirty_for_io(page))
goto continue_unlock;
-if (mapping->a_ops->writepage(page, &wbc)) {
+if (__f2fs_write_meta_page(page, &wbc, io_type)) {
unlock_page(page);
break;
}
@@ -581,11 +588,24 @@ static int recover_orphan_inode(struct f2fs_sb_info *sbi, nid_t ino)
int recover_orphan_inodes(struct f2fs_sb_info *sbi)
{
block_t start_blk, orphan_blocks, i, j;
-int err;
+unsigned int s_flags = sbi->sb->s_flags;
+int err = 0;
if (!is_set_ckpt_flags(sbi, CP_ORPHAN_PRESENT_FLAG))
return 0;
+if (s_flags & MS_RDONLY) {
+f2fs_msg(sbi->sb, KERN_INFO, "orphan cleanup on readonly fs");
+sbi->sb->s_flags &= ~MS_RDONLY;
+}
+#ifdef CONFIG_QUOTA
+/* Needed for iput() to work correctly and not trash data */
+sbi->sb->s_flags |= MS_ACTIVE;
+/* Turn on quotas so that they are updated correctly */
+f2fs_enable_quota_files(sbi);
+#endif
start_blk = __start_cp_addr(sbi) + 1 + __cp_payload(sbi);
orphan_blocks = __start_sum_addr(sbi) - 1 - __cp_payload(sbi);
@@ -601,14 +621,21 @@ int recover_orphan_inodes(struct f2fs_sb_info *sbi)
err = recover_orphan_inode(sbi, ino);
if (err) {
f2fs_put_page(page, 1);
-return err;
+goto out;
}
}
f2fs_put_page(page, 1);
}
/* clear Orphan Flag */
clear_ckpt_flags(sbi, CP_ORPHAN_PRESENT_FLAG);
-return 0;
+out:
+#ifdef CONFIG_QUOTA
+/* Turn quotas off */
+f2fs_quota_off_umount(sbi->sb);
+#endif
+sbi->sb->s_flags = s_flags; /* Restore MS_RDONLY status */
+return err;
}
static void write_orphan_inodes(struct f2fs_sb_info *sbi, block_t start_blk)
@@ -904,7 +931,14 @@ int sync_dirty_inodes(struct f2fs_sb_info *sbi, enum inode_type type)
if (inode) {
unsigned long cur_ino = inode->i_ino;
+if (is_dir)
+F2FS_I(inode)->cp_task = current;
filemap_fdatawrite(inode->i_mapping);
+if (is_dir)
+F2FS_I(inode)->cp_task = NULL;
iput(inode);
/* We need to give cpu to another writers. */
if (ino == cur_ino) {
@@ -1017,7 +1051,7 @@ static int block_operations(struct f2fs_sb_info *sbi)
if (get_pages(sbi, F2FS_DIRTY_NODES)) {
up_write(&sbi->node_write);
-err = sync_node_pages(sbi, &wbc);
+err = sync_node_pages(sbi, &wbc, false, FS_CP_NODE_IO);
if (err) {
up_write(&sbi->node_change);
f2fs_unlock_all(sbi);
@@ -1115,7 +1149,7 @@ static int do_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc)
/* Flush all the NAT/SIT pages */
while (get_pages(sbi, F2FS_DIRTY_META)) {
-sync_meta_pages(sbi, META, LONG_MAX);
+sync_meta_pages(sbi, META, LONG_MAX, FS_CP_META_IO);
if (unlikely(f2fs_cp_error(sbi)))
return -EIO;
}
@@ -1194,7 +1228,7 @@ static int do_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc)
/* Flush all the NAT BITS pages */
while (get_pages(sbi, F2FS_DIRTY_META)) {
-sync_meta_pages(sbi, META, LONG_MAX);
+sync_meta_pages(sbi, META, LONG_MAX, FS_CP_META_IO);
if (unlikely(f2fs_cp_error(sbi)))
return -EIO;
}
@@ -1249,7 +1283,7 @@ static int do_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc)
percpu_counter_set(&sbi->alloc_valid_block_count, 0);
/* Here, we only have one bio having CP pack */
-sync_meta_pages(sbi, META_FLUSH, LONG_MAX);
+sync_meta_pages(sbi, META_FLUSH, LONG_MAX, FS_CP_META_IO);
/* wait for previous submitted meta pages writeback */
wait_on_all_pages_writeback(sbi);
......
@@ -457,14 +457,65 @@ int f2fs_submit_page_write(struct f2fs_io_info *fio)
return err;
}
+static struct bio *f2fs_grab_read_bio(struct inode *inode, block_t blkaddr,
+unsigned nr_pages)
+{
+struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+struct fscrypt_ctx *ctx = NULL;
+struct bio *bio;
+if (f2fs_encrypted_file(inode)) {
+ctx = fscrypt_get_ctx(inode, GFP_NOFS);
+if (IS_ERR(ctx))
+return ERR_CAST(ctx);
+/* wait the page to be moved by cleaning */
+f2fs_wait_on_block_writeback(sbi, blkaddr);
+}
+bio = bio_alloc(GFP_KERNEL, min_t(int, nr_pages, BIO_MAX_PAGES));
+if (!bio) {
+if (ctx)
+fscrypt_release_ctx(ctx);
+return ERR_PTR(-ENOMEM);
+}
+f2fs_target_device(sbi, blkaddr, bio);
+bio->bi_end_io = f2fs_read_end_io;
+bio->bi_private = ctx;
+bio_set_op_attrs(bio, REQ_OP_READ, 0);
+return bio;
+}
+/* This can handle encryption stuffs */
+static int f2fs_submit_page_read(struct inode *inode, struct page *page,
+block_t blkaddr)
+{
+struct bio *bio = f2fs_grab_read_bio(inode, blkaddr, 1);
+if (IS_ERR(bio))
+return PTR_ERR(bio);
+if (bio_add_page(bio, page, PAGE_SIZE, 0) < PAGE_SIZE) {
+bio_put(bio);
+return -EFAULT;
+}
+__submit_bio(F2FS_I_SB(inode), bio, DATA);
+return 0;
+}
static void __set_data_blkaddr(struct dnode_of_data *dn)
{
struct f2fs_node *rn = F2FS_NODE(dn->node_page);
__le32 *addr_array;
+int base = 0;
+if (IS_INODE(dn->node_page) && f2fs_has_extra_attr(dn->inode))
+base = get_extra_isize(dn->inode);
/* Get physical address of data block */
addr_array = blkaddr_in_node(rn);
-addr_array[dn->ofs_in_node] = cpu_to_le32(dn->data_blkaddr);
+addr_array[base + dn->ofs_in_node] = cpu_to_le32(dn->data_blkaddr);
}
/*
@@ -508,8 +559,8 @@ int reserve_new_blocks(struct dnode_of_data *dn, blkcnt_t count)
f2fs_wait_on_page_writeback(dn->node_page, NODE, true);
for (; count > 0; dn->ofs_in_node++) {
-block_t blkaddr =
-datablock_addr(dn->node_page, dn->ofs_in_node);
+block_t blkaddr = datablock_addr(dn->inode,
+dn->node_page, dn->ofs_in_node);
if (blkaddr == NULL_ADDR) {
dn->data_blkaddr = NEW_ADDR;
__set_data_blkaddr(dn);
@@ -570,16 +621,6 @@ struct page *get_read_data_page(struct inode *inode, pgoff_t index,
struct page *page;
struct extent_info ei = {0,0,0};
int err;
-struct f2fs_io_info fio = {
-.sbi = F2FS_I_SB(inode),
-.type = DATA,
-.op = REQ_OP_READ,
-.op_flags = op_flags,
-.encrypted_page = NULL,
-};
-if (f2fs_encrypted_inode(inode) && S_ISREG(inode->i_mode))
-return read_mapping_page(mapping, index, NULL);
page = f2fs_grab_cache_page(mapping, index, for_write);
if (!page)
@@ -620,9 +661,7 @@ struct page *get_read_data_page(struct inode *inode, pgoff_t index,
return page;
}
-fio.new_blkaddr = fio.old_blkaddr = dn.data_blkaddr;
-fio.page = page;
-err = f2fs_submit_page_bio(&fio);
+err = f2fs_submit_page_read(inode, page, dn.data_blkaddr);
if (err)
goto put_err;
return page;
@@ -756,7 +795,8 @@ static int __allocate_data_block(struct dnode_of_data *dn)
if (unlikely(is_inode_flag_set(dn->inode, FI_NO_ALLOC)))
return -EPERM;
-dn->data_blkaddr = datablock_addr(dn->node_page, dn->ofs_in_node);
+dn->data_blkaddr = datablock_addr(dn->inode,
+dn->node_page, dn->ofs_in_node);
if (dn->data_blkaddr == NEW_ADDR)
goto alloc;
@@ -782,7 +822,7 @@ static int __allocate_data_block(struct dnode_of_data *dn)
static inline bool __force_buffered_io(struct inode *inode, int rw)
{
-return ((f2fs_encrypted_inode(inode) && S_ISREG(inode->i_mode)) ||
+return (f2fs_encrypted_file(inode) ||
(rw == WRITE && test_opt(F2FS_I_SB(inode), LFS)) ||
F2FS_I_SB(inode)->s_ndevs);
}
@@ -814,7 +854,7 @@ int f2fs_preallocate_blocks(struct kiocb *iocb, struct iov_iter *from)
F2FS_GET_BLOCK_PRE_AIO :
F2FS_GET_BLOCK_PRE_DIO);
}
-if (iocb->ki_pos + iov_iter_count(from) > MAX_INLINE_DATA) {
+if (iocb->ki_pos + iov_iter_count(from) > MAX_INLINE_DATA(inode)) {
err = f2fs_convert_inline_inode(inode);
if (err)
return err;
@@ -903,7 +943,7 @@ int f2fs_map_blocks(struct inode *inode, struct f2fs_map_blocks *map,
end_offset = ADDRS_PER_PAGE(dn.node_page, inode);
next_block:
-blkaddr = datablock_addr(dn.node_page, dn.ofs_in_node);
+blkaddr = datablock_addr(dn.inode, dn.node_page, dn.ofs_in_node);
if (blkaddr == NEW_ADDR || blkaddr == NULL_ADDR) {
if (create) {
@@ -1040,7 +1080,7 @@ static int get_data_block_dio(struct inode *inode, sector_t iblock,
struct buffer_head *bh_result, int create)
{
return __get_data_block(inode, iblock, bh_result, create,
-F2FS_GET_BLOCK_DIO, NULL);
+F2FS_GET_BLOCK_DEFAULT, NULL);
}
static int get_data_block_bmap(struct inode *inode, sector_t iblock,
@@ -1146,35 +1186,6 @@ int f2fs_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
return ret;
}
-static struct bio *f2fs_grab_bio(struct inode *inode, block_t blkaddr,
-unsigned nr_pages)
-{
-struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
-struct fscrypt_ctx *ctx = NULL;
-struct bio *bio;
-if (f2fs_encrypted_inode(inode) && S_ISREG(inode->i_mode)) {
-ctx = fscrypt_get_ctx(inode, GFP_NOFS);
-if (IS_ERR(ctx))
-return ERR_CAST(ctx);
-/* wait the page to be moved by cleaning */
-f2fs_wait_on_encrypted_page_writeback(sbi, blkaddr);
-}
-bio = bio_alloc(GFP_KERNEL, min_t(int, nr_pages, BIO_MAX_PAGES));
-if (!bio) {
-if (ctx)
-fscrypt_release_ctx(ctx);
-return ERR_PTR(-ENOMEM);
-}
-f2fs_target_device(sbi, blkaddr, bio);
-bio->bi_end_io = f2fs_read_end_io;
-bio->bi_private = ctx;
-return bio;
-}
/*
* This function was originally taken from fs/mpage.c, and customized for f2fs.
* Major change was from block_size == page_size in f2fs by default.
@@ -1240,7 +1251,7 @@ static int f2fs_mpage_readpages(struct address_space *mapping,
map.m_len = last_block - block_in_file;
if (f2fs_map_blocks(inode, &map, 0,
-F2FS_GET_BLOCK_READ))
+F2FS_GET_BLOCK_DEFAULT))
goto set_error_page;
}
got_it:
@@ -1271,12 +1282,11 @@ static int f2fs_mpage_readpages(struct address_space *mapping,
bio = NULL;
}
if (bio == NULL) {
-bio = f2fs_grab_bio(inode, block_nr, nr_pages);
+bio = f2fs_grab_read_bio(inode, block_nr, nr_pages);
if (IS_ERR(bio)) {
bio = NULL;
goto set_error_page;
}
-bio_set_op_attrs(bio, REQ_OP_READ, 0);
}
if (bio_add_page(bio, page, blocksize, 0) < blocksize)
@@ -1341,11 +1351,11 @@ static int encrypt_one_page(struct f2fs_io_info *fio)
struct inode *inode = fio->page->mapping->host;
gfp_t gfp_flags = GFP_NOFS;
-if (!f2fs_encrypted_inode(inode) || !S_ISREG(inode->i_mode))
+if (!f2fs_encrypted_file(inode))
return 0;
/* wait for GCed encrypted page writeback */
-f2fs_wait_on_encrypted_page_writeback(fio->sbi, fio->old_blkaddr);
+f2fs_wait_on_block_writeback(fio->sbi, fio->old_blkaddr);
retry_encrypt:
fio->encrypted_page = fscrypt_encrypt_page(inode, fio->page,
@@ -1471,7 +1481,8 @@ int do_write_data_page(struct f2fs_io_info *fio)
}
static int __write_data_page(struct page *page, bool *submitted,
-struct writeback_control *wbc)
+struct writeback_control *wbc,
+enum iostat_type io_type)
{
struct inode *inode = page->mapping->host;
struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
@@ -1492,6 +1503,7 @@ static int __write_data_page(struct page *page, bool *submitted,
.encrypted_page = NULL,
.submitted = false,
.need_lock = LOCK_RETRY,
+.io_type = io_type,
};
trace_f2fs_writepage(page, DATA);
@@ -1598,7 +1610,7 @@ static int __write_data_page(struct page *page, bool *submitted,
static int f2fs_write_data_page(struct page *page,
struct writeback_control *wbc)
{
-return __write_data_page(page, NULL, wbc);
+return __write_data_page(page, NULL, wbc, FS_DATA_IO);
}
/*
@@ -1607,7 +1619,8 @@ static int f2fs_write_data_page(struct page *page,
* warm/hot data page.
*/
static int f2fs_write_cache_pages(struct address_space *mapping,
-struct writeback_control *wbc)
+struct writeback_control *wbc,
+enum iostat_type io_type)
{
int ret = 0;
int done = 0;
@@ -1697,7 +1710,7 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
if (!clear_page_dirty_for_io(page))
goto continue_unlock;
-ret = __write_data_page(page, &submitted, wbc);
+ret = __write_data_page(page, &submitted, wbc, io_type);
if (unlikely(ret)) {
/*
* keep nr_to_write, since vfs uses this to
@@ -1752,8 +1765,9 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
return ret;
}
-static int f2fs_write_data_pages(struct address_space *mapping,
-struct writeback_control *wbc)
+int __f2fs_write_data_pages(struct address_space *mapping,
+struct writeback_control *wbc,
+enum iostat_type io_type)
{
struct inode *inode = mapping->host;
struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
@@ -1790,7 +1804,7 @@ static int f2fs_write_data_pages(struct address_space *mapping,
goto skip_write;
blk_start_plug(&plug);
-ret = f2fs_write_cache_pages(mapping, wbc);
+ret = f2fs_write_cache_pages(mapping, wbc, io_type);
blk_finish_plug(&plug);
if (wbc->sync_mode == WB_SYNC_ALL)
@@ -1809,6 +1823,16 @@ static int f2fs_write_data_pages(struct address_space *mapping,
return 0;
}
+static int f2fs_write_data_pages(struct address_space *mapping,
+struct writeback_control *wbc)
+{
+struct inode *inode = mapping->host;
+return __f2fs_write_data_pages(mapping, wbc,
+F2FS_I(inode)->cp_task == current ?
+FS_CP_DATA_IO : FS_DATA_IO);
+}
static void f2fs_write_failed(struct address_space *mapping, loff_t to)
{
struct inode *inode = mapping->host;
@@ -1858,7 +1882,7 @@ static int prepare_write_begin(struct f2fs_sb_info *sbi,
set_new_dnode(&dn, inode, ipage, ipage, 0);
if (f2fs_has_inline_data(inode)) {
-if (pos + len <= MAX_INLINE_DATA) {
+if (pos + len <= MAX_INLINE_DATA(inode)) {
read_inline_data(page, ipage);
set_inode_flag(inode, FI_DATA_EXIST);
if (inode->i_nlink)
@@ -1956,8 +1980,8 @@ static int f2fs_write_begin(struct file *file, struct address_space *mapping,
f2fs_wait_on_page_writeback(page, DATA, false);
/* wait for GCed encrypted page writeback */
-if (f2fs_encrypted_inode(inode) && S_ISREG(inode->i_mode))
-f2fs_wait_on_encrypted_page_writeback(sbi, blkaddr);
+if (f2fs_encrypted_file(inode))
+f2fs_wait_on_block_writeback(sbi, blkaddr);
if (len == PAGE_SIZE || PageUptodate(page))
return 0;
@@ -1971,21 +1995,9 @@ static int f2fs_write_begin(struct file *file, struct address_space *mapping,
zero_user_segment(page, 0, PAGE_SIZE);
SetPageUptodate(page);
} else {
-struct bio *bio;
-bio = f2fs_grab_bio(inode, blkaddr, 1);
-if (IS_ERR(bio)) {
-err = PTR_ERR(bio);
-goto fail;
-}
-bio->bi_opf = REQ_OP_READ;
-if (bio_add_page(bio, page, PAGE_SIZE, 0) < PAGE_SIZE) {
-bio_put(bio);
-err = -EFAULT;
-goto fail;
-}
-__submit_bio(sbi, bio, DATA);
+err = f2fs_submit_page_read(inode, page, blkaddr);
+if (err)
+goto fail;
lock_page(page);
if (unlikely(page->mapping != mapping)) {
@@ -2075,10 +2087,13 @@ static ssize_t f2fs_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
up_read(&F2FS_I(inode)->dio_rwsem[rw]);
if (rw == WRITE) {
-if (err > 0)
+if (err > 0) {
+f2fs_update_iostat(F2FS_I_SB(inode), APP_DIRECT_IO,
+err);
set_inode_flag(inode, FI_UPDATE_WRITE);
-else if (err < 0)
+} else if (err < 0) {
f2fs_write_failed(mapping, offset + count);
+}
}
trace_f2fs_direct_IO_exit(inode, offset, count, rw, err);
......
@@ -705,6 +705,8 @@ void f2fs_delete_entry(struct f2fs_dir_entry *dentry, struct page *page,
struct f2fs_dentry_block *dentry_blk;
unsigned int bit_pos;
int slots = GET_DENTRY_SLOTS(le16_to_cpu(dentry->name_len));
+struct address_space *mapping = page_mapping(page);
+unsigned long flags;
int i;
f2fs_update_time(F2FS_I_SB(dir), REQ_TIME);
@@ -735,6 +737,11 @@ void f2fs_delete_entry(struct f2fs_dir_entry *dentry, struct page *page,
if (bit_pos == NR_DENTRY_IN_BLOCK &&
!truncate_hole(dir, page->index, page->index + 1)) {
+spin_lock_irqsave(&mapping->tree_lock, flags);
+radix_tree_tag_clear(&mapping->page_tree, page_index(page),
+PAGECACHE_TAG_DIRTY);
+spin_unlock_irqrestore(&mapping->tree_lock, flags);
clear_page_dirty_for_io(page);
ClearPagePrivate(page);
ClearPageUptodate(page);
......
@@ -91,6 +91,8 @@ extern char *fault_name[FAULT_MAX];
#define F2FS_MOUNT_LFS 0x00040000
#define F2FS_MOUNT_USRQUOTA 0x00080000
#define F2FS_MOUNT_GRPQUOTA 0x00100000
+#define F2FS_MOUNT_PRJQUOTA 0x00200000
+#define F2FS_MOUNT_QUOTA 0x00400000
#define clear_opt(sbi, option) ((sbi)->mount_opt.opt &= ~F2FS_MOUNT_##option)
#define set_opt(sbi, option) ((sbi)->mount_opt.opt |= F2FS_MOUNT_##option)
@@ -110,8 +112,12 @@ struct f2fs_mount_info {
unsigned int opt;
};
#define F2FS_FEATURE_ENCRYPT 0x0001
#define F2FS_FEATURE_BLKZONED 0x0002
+#define F2FS_FEATURE_ATOMIC_WRITE 0x0004
+#define F2FS_FEATURE_EXTRA_ATTR 0x0008
+#define F2FS_FEATURE_PRJQUOTA 0x0010
+#define F2FS_FEATURE_INODE_CHKSUM 0x0020
#define F2FS_HAS_FEATURE(sb, mask) \
((F2FS_SB(sb)->raw_super->feature & cpu_to_le32(mask)) != 0)
@@ -142,6 +148,8 @@ enum {
(BATCHED_TRIM_SEGMENTS(sbi) << (sbi)->log_blocks_per_seg)
#define MAX_DISCARD_BLOCKS(sbi) BLKS_PER_SEC(sbi)
#define DISCARD_ISSUE_RATE 8
+#define DEF_MIN_DISCARD_ISSUE_TIME 50 /* 50 ms, if exists */
+#define DEF_MAX_DISCARD_ISSUE_TIME 60000 /* 60 s, if no candidates */
#define DEF_CP_INTERVAL 60 /* 60 secs */
#define DEF_IDLE_INTERVAL 5 /* 5 secs */
@@ -190,11 +198,18 @@ struct discard_entry {
unsigned char discard_map[SIT_VBLOCK_MAP_SIZE]; /* segment discard bitmap */
};
+/* default discard granularity of inner discard thread, unit: block count */
+#define DEFAULT_DISCARD_GRANULARITY 16
/* max discard pend list number */
#define MAX_PLIST_NUM 512
#define plist_idx(blk_num) ((blk_num) >= MAX_PLIST_NUM ? \
(MAX_PLIST_NUM - 1) : (blk_num - 1))
+#define P_ACTIVE 0x01
+#define P_TRIM 0x02
+#define plist_issue(tag) (((tag) & P_ACTIVE) || ((tag) & P_TRIM))
enum {
D_PREP,
D_SUBMIT,
@@ -230,11 +245,14 @@ struct discard_cmd_control {
struct task_struct *f2fs_issue_discard; /* discard thread */
struct list_head entry_list; /* 4KB discard entry list */
struct list_head pend_list[MAX_PLIST_NUM];/* store pending entries */
+unsigned char pend_list_tag[MAX_PLIST_NUM];/* tag for pending entries */
struct list_head wait_list; /* store on-flushing entries */
wait_queue_head_t discard_wait_queue; /* waiting queue for wake-up */
+unsigned int discard_wake; /* to wake up discard thread */
struct mutex cmd_lock;
unsigned int nr_discards; /* # of discards in the list */
unsigned int max_discards; /* max. discards to be issued */
+unsigned int discard_granularity; /* discard granularity */
unsigned int undiscard_blks; /* # of undiscard blocks */
atomic_t issued_discard; /* # of issued discard */
atomic_t issing_discard; /* # of issing discard */
@@ -308,6 +326,7 @@ static inline bool __has_cursum_space(struct f2fs_journal *journal,
struct f2fs_flush_device)
#define F2FS_IOC_GARBAGE_COLLECT_RANGE _IOW(F2FS_IOCTL_MAGIC, 11, \
struct f2fs_gc_range)
+#define F2FS_IOC_GET_FEATURES _IOR(F2FS_IOCTL_MAGIC, 12, __u32)
#define F2FS_IOC_SET_ENCRYPTION_POLICY FS_IOC_SET_ENCRYPTION_POLICY
#define F2FS_IOC_GET_ENCRYPTION_POLICY FS_IOC_GET_ENCRYPTION_POLICY
@@ -332,6 +351,9 @@ static inline bool __has_cursum_space(struct f2fs_journal *journal,
#define F2FS_IOC32_GETVERSION FS_IOC32_GETVERSION
#endif
+#define F2FS_IOC_FSGETXATTR FS_IOC_FSGETXATTR
+#define F2FS_IOC_FSSETXATTR FS_IOC_FSSETXATTR
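For orientation, a rough userspace sketch of querying the new feature mask; the ioctl number is rebuilt here from F2FS_IOCTL_MAGIC (assumed to be 0xf5, defined elsewhere in this header) and the feature bits shown earlier, and the file path is only an example:

	#include <stdio.h>
	#include <fcntl.h>
	#include <unistd.h>
	#include <sys/ioctl.h>
	#include <linux/types.h>

	#define F2FS_IOCTL_MAGIC		0xf5
	#define F2FS_IOC_GET_FEATURES		_IOR(F2FS_IOCTL_MAGIC, 12, __u32)
	#define F2FS_FEATURE_EXTRA_ATTR		0x0008
	#define F2FS_FEATURE_PRJQUOTA		0x0010
	#define F2FS_FEATURE_INODE_CHKSUM	0x0020

	int main(int argc, char **argv)
	{
		__u32 feat = 0;
		int fd = open(argc > 1 ? argv[1] : "/mnt/f2fs/file", O_RDONLY);

		if (fd < 0 || ioctl(fd, F2FS_IOC_GET_FEATURES, &feat) < 0) {
			perror("F2FS_IOC_GET_FEATURES");
			return 1;
		}
		printf("extra_attr:%d prjquota:%d inode_chksum:%d\n",
		       !!(feat & F2FS_FEATURE_EXTRA_ATTR),
		       !!(feat & F2FS_FEATURE_PRJQUOTA),
		       !!(feat & F2FS_FEATURE_INODE_CHKSUM));
		close(fd);
		return 0;
	}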
struct f2fs_gc_range {
u32 sync;
u64 start;
@@ -355,16 +377,36 @@ struct f2fs_flush_device {
u32 segments; /* # of segments to flush */
};
+/* for inline stuff */
+#define DEF_INLINE_RESERVED_SIZE 1
+static inline int get_extra_isize(struct inode *inode);
+#define MAX_INLINE_DATA(inode) (sizeof(__le32) * \
+(CUR_ADDRS_PER_INODE(inode) - \
+DEF_INLINE_RESERVED_SIZE - \
+F2FS_INLINE_XATTR_ADDRS))
+/* for inline dir */
+#define NR_INLINE_DENTRY(inode) (MAX_INLINE_DATA(inode) * BITS_PER_BYTE / \
+((SIZE_OF_DIR_ENTRY + F2FS_SLOT_LEN) * \
+BITS_PER_BYTE + 1))
+#define INLINE_DENTRY_BITMAP_SIZE(inode) ((NR_INLINE_DENTRY(inode) + \
+BITS_PER_BYTE - 1) / BITS_PER_BYTE)
+#define INLINE_RESERVED_SIZE(inode) (MAX_INLINE_DATA(inode) - \
+((SIZE_OF_DIR_ENTRY + F2FS_SLOT_LEN) * \
+NR_INLINE_DENTRY(inode) + \
+INLINE_DENTRY_BITMAP_SIZE(inode)))
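A worked sizing example may help here; it assumes the usual constants DEF_ADDRS_PER_INODE = 923, F2FS_INLINE_XATTR_ADDRS = 50, SIZE_OF_DIR_ENTRY = 11 and F2FS_SLOT_LEN = 8, none of which are shown in this hunk:

	/*
	 * With no extra attribute area, CUR_ADDRS_PER_INODE(inode) == 923, so
	 *   MAX_INLINE_DATA  = 4 * (923 - 1 - 50)           = 3488 bytes
	 *   NR_INLINE_DENTRY = 3488 * 8 / ((11 + 8) * 8 + 1) = 182 entries
	 * With i_extra_isize == 36 bytes (9 address slots), the same formulas
	 * give MAX_INLINE_DATA = 4 * (914 - 1 - 50) = 3452 bytes.
	 */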
/*
* For INODE and NODE manager
*/
/* for directory operations */
struct f2fs_dentry_ptr {
struct inode *inode;
-const void *bitmap;
+void *bitmap;
struct f2fs_dir_entry *dentry;
__u8 (*filename)[F2FS_SLOT_LEN];
int max;
+int nr_bitmap;
};
static inline void make_dentry_ptr_block(struct inode *inode,
@@ -372,19 +414,26 @@ static inline void make_dentry_ptr_block(struct inode *inode,
{
d->inode = inode;
d->max = NR_DENTRY_IN_BLOCK;
+d->nr_bitmap = SIZE_OF_DENTRY_BITMAP;
d->bitmap = &t->dentry_bitmap;
d->dentry = t->dentry;
d->filename = t->filename;
}
static inline void make_dentry_ptr_inline(struct inode *inode,
-struct f2fs_dentry_ptr *d, struct f2fs_inline_dentry *t)
+struct f2fs_dentry_ptr *d, void *t)
{
+int entry_cnt = NR_INLINE_DENTRY(inode);
+int bitmap_size = INLINE_DENTRY_BITMAP_SIZE(inode);
+int reserved_size = INLINE_RESERVED_SIZE(inode);
d->inode = inode;
-d->max = NR_INLINE_DENTRY;
-d->bitmap = &t->dentry_bitmap;
-d->dentry = t->dentry;
-d->filename = t->filename;
+d->max = entry_cnt;
+d->nr_bitmap = bitmap_size;
+d->bitmap = t;
+d->dentry = t + bitmap_size + reserved_size;
+d->filename = t + bitmap_size + reserved_size +
+SIZE_OF_DIR_ENTRY * entry_cnt;
}
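For readability, the layout that make_dentry_ptr_inline() now derives from the raw inline area can be sketched as follows (editorial sketch; offsets follow the macros above):

	/*
	 *  t                                      dentry bitmap, INLINE_DENTRY_BITMAP_SIZE bytes
	 *  t + bitmap_size                        reserved area, INLINE_RESERVED_SIZE bytes
	 *  t + bitmap_size + reserved_size        dentry array, NR_INLINE_DENTRY * SIZE_OF_DIR_ENTRY
	 *  ... + SIZE_OF_DIR_ENTRY * entry_cnt    filename slots, NR_INLINE_DENTRY * F2FS_SLOT_LEN
	 */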
/*
@@ -473,12 +522,13 @@ struct f2fs_map_blocks {
};
/* for flag in get_data_block */
-#define F2FS_GET_BLOCK_READ 0
-#define F2FS_GET_BLOCK_DIO 1
-#define F2FS_GET_BLOCK_FIEMAP 2
-#define F2FS_GET_BLOCK_BMAP 3
-#define F2FS_GET_BLOCK_PRE_DIO 4
-#define F2FS_GET_BLOCK_PRE_AIO 5
+enum {
+F2FS_GET_BLOCK_DEFAULT,
+F2FS_GET_BLOCK_FIEMAP,
+F2FS_GET_BLOCK_BMAP,
+F2FS_GET_BLOCK_PRE_DIO,
+F2FS_GET_BLOCK_PRE_AIO,
+};
/*
* i_advise uses FADVISE_XXX_BIT. We can add additional hints later.
@@ -521,6 +571,7 @@ struct f2fs_inode_info {
f2fs_hash_t chash; /* hash value of given file name */
unsigned int clevel; /* maximum level of given file name */
struct task_struct *task; /* lookup and create consistency */
+struct task_struct *cp_task; /* separate cp/wb IO stats */
nid_t i_xattr_nid; /* node id that contains xattrs */
loff_t last_disk_size; /* lastly written file size */
@@ -533,10 +584,15 @@ struct f2fs_inode_info {
struct list_head dirty_list; /* dirty list for dirs and files */
struct list_head gdirty_list; /* linked in global dirty list */
struct list_head inmem_pages; /* inmemory pages managed by f2fs */
+struct task_struct *inmem_task; /* store inmemory task */
struct mutex inmem_lock; /* lock for inmemory pages */
struct extent_tree *extent_tree; /* cached extent_tree entry */
struct rw_semaphore dio_rwsem[2];/* avoid racing between dio and gc */
struct rw_semaphore i_mmap_sem;
+struct rw_semaphore i_xattr_sem; /* avoid racing between reading and changing EAs */
+int i_extra_isize; /* size of extra space located in i_addr */
+kprojid_t i_projid; /* id for project quota */
};
static inline void get_extent_info(struct extent_info *ext,
@@ -823,6 +879,23 @@ enum need_lock_type {
LOCK_RETRY,
};
+enum iostat_type {
+APP_DIRECT_IO, /* app direct IOs */
+APP_BUFFERED_IO, /* app buffered IOs */
+APP_WRITE_IO, /* app write IOs */
+APP_MAPPED_IO, /* app mapped IOs */
+FS_DATA_IO, /* data IOs from kworker/fsync/reclaimer */
+FS_NODE_IO, /* node IOs from kworker/fsync/reclaimer */
+FS_META_IO, /* meta IOs from kworker/reclaimer */
+FS_GC_DATA_IO, /* data IOs from foreground gc */
+FS_GC_NODE_IO, /* node IOs from foreground gc */
+FS_CP_DATA_IO, /* data IOs from checkpoint */
+FS_CP_NODE_IO, /* node IOs from checkpoint */
+FS_CP_META_IO, /* meta IOs from checkpoint */
+FS_DISCARD, /* discard */
+NR_IO_TYPE,
+};
struct f2fs_io_info {
struct f2fs_sb_info *sbi; /* f2fs_sb_info pointer */
enum page_type type; /* contains DATA/NODE/META/META_FLUSH */
@@ -837,6 +910,7 @@ struct f2fs_io_info {
bool submitted; /* indicate IO submission */
int need_lock; /* indicate we need to lock cp_rwsem */
bool in_list; /* indicate fio is in io_list */
+enum iostat_type io_type; /* io type */
};
#define is_read_io(rw) ((rw) == READ)
@@ -1028,6 +1102,11 @@ struct f2fs_sb_info {
#endif
spinlock_t stat_lock; /* lock for stat operations */
+/* For app/fs IO statistics */
+spinlock_t iostat_lock;
+unsigned long long write_iostat[NR_IO_TYPE];
+bool iostat_enable;
/* For sysfs suppport */
struct kobject s_kobj;
struct completion s_kobj_unregister;
@@ -1046,10 +1125,19 @@ struct f2fs_sb_info {
/* Reference to checksum algorithm driver via cryptoapi */
struct crypto_shash *s_chksum_driver;
+/* Precomputed FS UUID checksum for seeding other checksums */
+__u32 s_chksum_seed;
/* For fault injection */
#ifdef CONFIG_F2FS_FAULT_INJECTION
struct f2fs_fault_info fault_info;
#endif
+#ifdef CONFIG_QUOTA
+/* Names of quota files with journalled quota */
+char *s_qf_names[MAXQUOTAS];
+int s_jquota_fmt; /* Format of quota to use */
+#endif
};
#ifdef CONFIG_F2FS_FAULT_INJECTION
@@ -1137,6 +1225,27 @@ static inline bool f2fs_crc_valid(struct f2fs_sb_info *sbi, __u32 blk_crc,
return f2fs_crc32(sbi, buf, buf_size) == blk_crc;
}
+static inline u32 f2fs_chksum(struct f2fs_sb_info *sbi, u32 crc,
+const void *address, unsigned int length)
+{
+struct {
+struct shash_desc shash;
+char ctx[4];
+} desc;
+int err;
+BUG_ON(crypto_shash_descsize(sbi->s_chksum_driver) != sizeof(desc.ctx));
+desc.shash.tfm = sbi->s_chksum_driver;
+desc.shash.flags = 0;
+*(u32 *)desc.ctx = crc;
+err = crypto_shash_update(&desc.shash, address, length);
+BUG_ON(err);
+return *(u32 *)desc.ctx;
+}
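One plausible caller, suggested by the s_chksum_seed comment earlier in this hunk, is seeding checksums from the filesystem UUID at mount time; the call below is a sketch of that idea, not code copied from the patch:

	/* Sketch: fold the 16-byte superblock UUID into an all-ones seed once. */
	sbi->s_chksum_seed = f2fs_chksum(sbi, ~0, raw_super->uuid,
					 sizeof(raw_super->uuid));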
static inline struct f2fs_inode_info *F2FS_I(struct inode *inode)
{
return container_of(inode, struct f2fs_inode_info, vfs_inode);
@@ -1760,20 +1869,38 @@ static inline bool IS_INODE(struct page *page)
return RAW_IS_INODE(p);
}
+static inline int offset_in_addr(struct f2fs_inode *i)
+{
+return (i->i_inline & F2FS_EXTRA_ATTR) ?
+(le16_to_cpu(i->i_extra_isize) / sizeof(__le32)) : 0;
+}
static inline __le32 *blkaddr_in_node(struct f2fs_node *node)
{
return RAW_IS_INODE(node) ? node->i.i_addr : node->dn.addr;
}
-static inline block_t datablock_addr(struct page *node_page,
-unsigned int offset)
+static inline int f2fs_has_extra_attr(struct inode *inode);
+static inline block_t datablock_addr(struct inode *inode,
+struct page *node_page, unsigned int offset)
{
struct f2fs_node *raw_node;
__le32 *addr_array;
+int base = 0;
+bool is_inode = IS_INODE(node_page);
raw_node = F2FS_NODE(node_page);
+/* from GC path only */
+if (!inode) {
+if (is_inode)
+base = offset_in_addr(&raw_node->i);
+} else if (f2fs_has_extra_attr(inode) && is_inode) {
+base = get_extra_isize(inode);
+}
addr_array = blkaddr_in_node(raw_node);
-return le32_to_cpu(addr_array[offset]);
+return le32_to_cpu(addr_array[base + offset]);
}
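A small worked example of the new base offset (values assumed for illustration): an inode carrying the extra attribute area with i_extra_isize == 36 bytes yields base = 36 / 4 = 9, so the block address for logical slot N of that inode now lives at i_addr[9 + N], while an inode without F2FS_EXTRA_ATTR keeps base = 0 and the old i_addr[N] layout.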
static inline int f2fs_test_bit(unsigned int nr, char *addr)
@@ -1836,6 +1963,20 @@ static inline void f2fs_change_bit(unsigned int nr, char *addr)
*addr ^= mask;
}
+#define F2FS_REG_FLMASK (~(FS_DIRSYNC_FL | FS_TOPDIR_FL))
+#define F2FS_OTHER_FLMASK (FS_NODUMP_FL | FS_NOATIME_FL)
+#define F2FS_FL_INHERITED (FS_PROJINHERIT_FL)
+static inline __u32 f2fs_mask_flags(umode_t mode, __u32 flags)
+{
+if (S_ISDIR(mode))
+return flags;
+else if (S_ISREG(mode))
+return flags & F2FS_REG_FLMASK;
+else
+return flags & F2FS_OTHER_FLMASK;
+}
/* used for f2fs_inode_info->flags */
enum {
FI_NEW_INODE, /* indicate newly allocated inode */
@@ -1864,6 +2005,8 @@ enum {
FI_DIRTY_FILE, /* indicate regular/symlink has dirty pages */
FI_NO_PREALLOC, /* indicate skipped preallocated blocks */
FI_HOT_DATA, /* indicate file is hot */
+FI_EXTRA_ATTR, /* indicate file has extra attribute */
+FI_PROJ_INHERIT, /* indicate file inherits projectid */
};
static inline void __mark_inode_dirty_flag(struct inode *inode,
@@ -1983,6 +2126,8 @@ static inline void get_inline_info(struct inode *inode, struct f2fs_inode *ri)
set_bit(FI_DATA_EXIST, &fi->flags);
if (ri->i_inline & F2FS_INLINE_DOTS)
set_bit(FI_INLINE_DOTS, &fi->flags);
+if (ri->i_inline & F2FS_EXTRA_ATTR)
+set_bit(FI_EXTRA_ATTR, &fi->flags);
}
static inline void set_raw_inline(struct inode *inode, struct f2fs_inode *ri)
@@ -1999,6 +2144,13 @@ static inline void set_raw_inline(struct inode *inode, struct f2fs_inode *ri)
ri->i_inline |= F2FS_DATA_EXIST;
if (is_inode_flag_set(inode, FI_INLINE_DOTS))
ri->i_inline |= F2FS_INLINE_DOTS;
+if (is_inode_flag_set(inode, FI_EXTRA_ATTR))
+ri->i_inline |= F2FS_EXTRA_ATTR;
+}
+static inline int f2fs_has_extra_attr(struct inode *inode)
+{
+return is_inode_flag_set(inode, FI_EXTRA_ATTR);
}
static inline int f2fs_has_inline_xattr(struct inode *inode)
@@ -2009,8 +2161,8 @@ static inline int f2fs_has_inline_xattr(struct inode *inode)
static inline unsigned int addrs_per_inode(struct inode *inode)
{
if (f2fs_has_inline_xattr(inode))
-return DEF_ADDRS_PER_INODE - F2FS_INLINE_XATTR_ADDRS;
-return DEF_ADDRS_PER_INODE;
+return CUR_ADDRS_PER_INODE(inode) - F2FS_INLINE_XATTR_ADDRS;
+return CUR_ADDRS_PER_INODE(inode);
}
static inline void *inline_xattr_addr(struct page *page)
@@ -2069,11 +2221,12 @@ static inline bool f2fs_is_drop_cache(struct inode *inode)
return is_inode_flag_set(inode, FI_DROP_CACHE);
}
-static inline void *inline_data_addr(struct page *page)
+static inline void *inline_data_addr(struct inode *inode, struct page *page)
{
struct f2fs_inode *ri = F2FS_INODE(page);
+int extra_size = get_extra_isize(inode);
-return (void *)&(ri->i_addr[1]);
+return (void *)&(ri->i_addr[extra_size + DEF_INLINE_RESERVED_SIZE]);
}
static inline int f2fs_has_inline_dentry(struct inode *inode)
@@ -2164,10 +2317,50 @@ static inline void *f2fs_kmalloc(struct f2fs_sb_info *sbi,
return kmalloc(size, flags);
}
+static inline int get_extra_isize(struct inode *inode)
+{
+return F2FS_I(inode)->i_extra_isize / sizeof(__le32);
+}
#define get_inode_mode(i) \
((is_inode_flag_set(i, FI_ACL_MODE)) ? \
(F2FS_I(i)->i_acl_mode) : ((i)->i_mode))
+#define F2FS_TOTAL_EXTRA_ATTR_SIZE \
+(offsetof(struct f2fs_inode, i_extra_end) - \
+offsetof(struct f2fs_inode, i_extra_isize)) \
+#define F2FS_OLD_ATTRIBUTE_SIZE (offsetof(struct f2fs_inode, i_addr))
+#define F2FS_FITS_IN_INODE(f2fs_inode, extra_isize, field) \
+((offsetof(typeof(*f2fs_inode), field) + \
+sizeof((f2fs_inode)->field)) \
+<= (F2FS_OLD_ATTRIBUTE_SIZE + extra_isize)) \
+static inline void f2fs_reset_iostat(struct f2fs_sb_info *sbi)
+{
+int i;
+spin_lock(&sbi->iostat_lock);
+for (i = 0; i < NR_IO_TYPE; i++)
+sbi->write_iostat[i] = 0;
+spin_unlock(&sbi->iostat_lock);
+}
+static inline void f2fs_update_iostat(struct f2fs_sb_info *sbi,
+enum iostat_type type, unsigned long long io_bytes)
+{
+if (!sbi->iostat_enable)
+return;
+spin_lock(&sbi->iostat_lock);
+sbi->write_iostat[type] += io_bytes;
+if (type == APP_WRITE_IO || type == APP_DIRECT_IO)
+sbi->write_iostat[APP_BUFFERED_IO] =
+sbi->write_iostat[APP_WRITE_IO] -
+sbi->write_iostat[APP_DIRECT_IO];
+spin_unlock(&sbi->iostat_lock);
+}
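A quick numeric illustration of the bookkeeping above (values made up): if an application has written 100 MB in total and 30 MB of that through direct IO, the two updates below leave the derived buffered counter at 70 MB without needing a counter of its own.

	/* Illustrative sequence, assuming iostat_enable is set: */
	f2fs_update_iostat(sbi, APP_WRITE_IO, 100ULL << 20);
	f2fs_update_iostat(sbi, APP_DIRECT_IO, 30ULL << 20);
	/* now write_iostat[APP_BUFFERED_IO] == 70ULL << 20 */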
/*
* file.c
*/
@@ -2187,6 +2380,8 @@ long f2fs_compat_ioctl(struct file *file, unsigned int cmd, unsigned long arg);
* inode.c
*/
void f2fs_set_inode_flags(struct inode *inode);
+bool f2fs_inode_chksum_verify(struct f2fs_sb_info *sbi, struct page *page);
+void f2fs_inode_chksum_set(struct f2fs_sb_info *sbi, struct page *page);
struct inode *f2fs_iget(struct super_block *sb, unsigned long ino);
struct inode *f2fs_iget_retry(struct super_block *sb, unsigned long ino);
int try_to_free_nats(struct f2fs_sb_info *sbi, int nr_shrink);
@@ -2255,6 +2450,8 @@ static inline int f2fs_add_link(struct dentry *dentry, struct inode *inode)
*/
int f2fs_inode_dirtied(struct inode *inode, bool sync);
void f2fs_inode_synced(struct inode *inode);
+void f2fs_enable_quota_files(struct f2fs_sb_info *sbi);
+void f2fs_quota_off_umount(struct super_block *sb);
int f2fs_commit_super(struct f2fs_sb_info *sbi, bool recover);
int f2fs_sync_fs(struct super_block *sb, int sync);
extern __printf(3, 4)
@@ -2285,15 +2482,15 @@ int truncate_xattr_node(struct inode *inode, struct page *page);
int wait_on_node_pages_writeback(struct f2fs_sb_info *sbi, nid_t ino);
int remove_inode_page(struct inode *inode);
struct page *new_inode_page(struct inode *inode);
-struct page *new_node_page(struct dnode_of_data *dn,
-unsigned int ofs, struct page *ipage);
+struct page *new_node_page(struct dnode_of_data *dn, unsigned int ofs);
void ra_node_page(struct f2fs_sb_info *sbi, nid_t nid);
struct page *get_node_page(struct f2fs_sb_info *sbi, pgoff_t nid);
struct page *get_node_page_ra(struct page *parent, int start);
void move_node_page(struct page *node_page, int gc_type);
int fsync_node_pages(struct f2fs_sb_info *sbi, struct inode *inode,
struct writeback_control *wbc, bool atomic);
-int sync_node_pages(struct f2fs_sb_info *sbi, struct writeback_control *wbc);
+int sync_node_pages(struct f2fs_sb_info *sbi, struct writeback_control *wbc,
+bool do_balance, enum iostat_type io_type);
void build_free_nids(struct f2fs_sb_info *sbi, bool sync, bool mount);
bool alloc_nid(struct f2fs_sb_info *sbi, nid_t *nid);
void alloc_nid_done(struct f2fs_sb_info *sbi, nid_t nid);
@@ -2314,6 +2511,7 @@ void destroy_node_manager_caches(void);
/*
* segment.c
*/
+bool need_SSR(struct f2fs_sb_info *sbi);
void register_inmem_page(struct inode *inode, struct page *page);
void drop_inmem_pages(struct inode *inode);
void drop_inmem_page(struct inode *inode, struct page *page);
@@ -2336,7 +2534,8 @@ int f2fs_trim_fs(struct f2fs_sb_info *sbi, struct fstrim_range *range);
bool exist_trim_candidates(struct f2fs_sb_info *sbi, struct cp_control *cpc);
struct page *get_sum_page(struct f2fs_sb_info *sbi, unsigned int segno);
void update_meta_page(struct f2fs_sb_info *sbi, void *src, block_t blk_addr);
-void write_meta_page(struct f2fs_sb_info *sbi, struct page *page);
+void write_meta_page(struct f2fs_sb_info *sbi, struct page *page,
+enum iostat_type io_type);
void write_node_page(unsigned int nid, struct f2fs_io_info *fio);
void write_data_page(struct dnode_of_data *dn, struct f2fs_io_info *fio);
int rewrite_data_page(struct f2fs_io_info *fio);
@@ -2353,8 +2552,7 @@ void allocate_data_block(struct f2fs_sb_info *sbi, struct page *page,
struct f2fs_io_info *fio, bool add_list);
void f2fs_wait_on_page_writeback(struct page *page,
enum page_type type, bool ordered);
-void f2fs_wait_on_encrypted_page_writeback(struct f2fs_sb_info *sbi,
-block_t blkaddr);
+void f2fs_wait_on_block_writeback(struct f2fs_sb_info *sbi, block_t blkaddr);
void write_data_summaries(struct f2fs_sb_info *sbi, block_t start_blk);
void write_node_summaries(struct f2fs_sb_info *sbi, block_t start_blk);
int lookup_journal_in_cursum(struct f2fs_journal *journal, int type, int lookup_journal_in_cursum(struct f2fs_journal *journal, int type,
...@@ -2377,7 +2575,7 @@ int ra_meta_pages(struct f2fs_sb_info *sbi, block_t start, int nrpages, ...@@ -2377,7 +2575,7 @@ int ra_meta_pages(struct f2fs_sb_info *sbi, block_t start, int nrpages,
int type, bool sync); int type, bool sync);
void ra_meta_pages_cond(struct f2fs_sb_info *sbi, pgoff_t index); void ra_meta_pages_cond(struct f2fs_sb_info *sbi, pgoff_t index);
long sync_meta_pages(struct f2fs_sb_info *sbi, enum page_type type, long sync_meta_pages(struct f2fs_sb_info *sbi, enum page_type type,
long nr_to_write); long nr_to_write, enum iostat_type io_type);
void add_ino_entry(struct f2fs_sb_info *sbi, nid_t ino, int type); void add_ino_entry(struct f2fs_sb_info *sbi, nid_t ino, int type);
void remove_ino_entry(struct f2fs_sb_info *sbi, nid_t ino, int type); void remove_ino_entry(struct f2fs_sb_info *sbi, nid_t ino, int type);
void release_ino_entry(struct f2fs_sb_info *sbi, bool all); void release_ino_entry(struct f2fs_sb_info *sbi, bool all);
...@@ -2430,6 +2628,9 @@ int f2fs_map_blocks(struct inode *inode, struct f2fs_map_blocks *map, ...@@ -2430,6 +2628,9 @@ int f2fs_map_blocks(struct inode *inode, struct f2fs_map_blocks *map,
int f2fs_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo, int f2fs_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
u64 start, u64 len); u64 start, u64 len);
void f2fs_set_page_dirty_nobuffers(struct page *page); void f2fs_set_page_dirty_nobuffers(struct page *page);
int __f2fs_write_data_pages(struct address_space *mapping,
struct writeback_control *wbc,
enum iostat_type io_type);
void f2fs_invalidate_page(struct page *page, unsigned int offset, void f2fs_invalidate_page(struct page *page, unsigned int offset,
unsigned int length); unsigned int length);
int f2fs_release_page(struct page *page, gfp_t wait); int f2fs_release_page(struct page *page, gfp_t wait);
...@@ -2726,10 +2927,10 @@ void destroy_extent_cache(void); ...@@ -2726,10 +2927,10 @@ void destroy_extent_cache(void);
/* /*
* sysfs.c * sysfs.c
*/ */
int __init f2fs_register_sysfs(void); int __init f2fs_init_sysfs(void);
void f2fs_unregister_sysfs(void); void f2fs_exit_sysfs(void);
int f2fs_init_sysfs(struct f2fs_sb_info *sbi); int f2fs_register_sysfs(struct f2fs_sb_info *sbi);
void f2fs_exit_sysfs(struct f2fs_sb_info *sbi); void f2fs_unregister_sysfs(struct f2fs_sb_info *sbi);
/* /*
* crypto support * crypto support
...@@ -2739,6 +2940,11 @@ static inline bool f2fs_encrypted_inode(struct inode *inode) ...@@ -2739,6 +2940,11 @@ static inline bool f2fs_encrypted_inode(struct inode *inode)
return file_is_encrypt(inode); return file_is_encrypt(inode);
} }
static inline bool f2fs_encrypted_file(struct inode *inode)
{
return f2fs_encrypted_inode(inode) && S_ISREG(inode->i_mode);
}
static inline void f2fs_set_encrypted_inode(struct inode *inode) static inline void f2fs_set_encrypted_inode(struct inode *inode)
{ {
#ifdef CONFIG_F2FS_FS_ENCRYPTION #ifdef CONFIG_F2FS_FS_ENCRYPTION
...@@ -2761,6 +2967,21 @@ static inline int f2fs_sb_mounted_blkzoned(struct super_block *sb) ...@@ -2761,6 +2967,21 @@ static inline int f2fs_sb_mounted_blkzoned(struct super_block *sb)
return F2FS_HAS_FEATURE(sb, F2FS_FEATURE_BLKZONED); return F2FS_HAS_FEATURE(sb, F2FS_FEATURE_BLKZONED);
} }
static inline int f2fs_sb_has_extra_attr(struct super_block *sb)
{
return F2FS_HAS_FEATURE(sb, F2FS_FEATURE_EXTRA_ATTR);
}
static inline int f2fs_sb_has_project_quota(struct super_block *sb)
{
return F2FS_HAS_FEATURE(sb, F2FS_FEATURE_PRJQUOTA);
}
static inline int f2fs_sb_has_inode_chksum(struct super_block *sb)
{
return F2FS_HAS_FEATURE(sb, F2FS_FEATURE_INODE_CHKSUM);
}
#ifdef CONFIG_BLK_DEV_ZONED #ifdef CONFIG_BLK_DEV_ZONED
static inline int get_blkz_type(struct f2fs_sb_info *sbi, static inline int get_blkz_type(struct f2fs_sb_info *sbi,
struct block_device *bdev, block_t blkaddr) struct block_device *bdev, block_t blkaddr)
......
...@@ -98,14 +98,16 @@ static int f2fs_vm_page_mkwrite(struct vm_fault *vmf) ...@@ -98,14 +98,16 @@ static int f2fs_vm_page_mkwrite(struct vm_fault *vmf)
if (!PageUptodate(page)) if (!PageUptodate(page))
SetPageUptodate(page); SetPageUptodate(page);
f2fs_update_iostat(sbi, APP_MAPPED_IO, F2FS_BLKSIZE);
trace_f2fs_vm_page_mkwrite(page, DATA); trace_f2fs_vm_page_mkwrite(page, DATA);
mapped: mapped:
/* fill the page */ /* fill the page */
f2fs_wait_on_page_writeback(page, DATA, false); f2fs_wait_on_page_writeback(page, DATA, false);
/* wait for GCed encrypted page writeback */ /* wait for GCed encrypted page writeback */
if (f2fs_encrypted_inode(inode) && S_ISREG(inode->i_mode)) if (f2fs_encrypted_file(inode))
f2fs_wait_on_encrypted_page_writeback(sbi, dn.data_blkaddr); f2fs_wait_on_block_writeback(sbi, dn.data_blkaddr);
out_sem: out_sem:
up_read(&F2FS_I(inode)->i_mmap_sem); up_read(&F2FS_I(inode)->i_mmap_sem);
...@@ -274,9 +276,19 @@ static int f2fs_do_sync_file(struct file *file, loff_t start, loff_t end, ...@@ -274,9 +276,19 @@ static int f2fs_do_sync_file(struct file *file, loff_t start, loff_t end,
goto sync_nodes; goto sync_nodes;
} }
ret = wait_on_node_pages_writeback(sbi, ino); /*
if (ret) * If it's atomic_write, it's just fine to keep write ordering. So
goto out; * here we don't need to wait for node write completion, since we use
* node chain which serializes node blocks. If one of the node writes is
* reordered, we simply see a broken chain, which stops
* roll-forward recovery. It means we'll recover all or none of the node blocks
* given the fsync mark.
*/
if (!atomic) {
ret = wait_on_node_pages_writeback(sbi, ino);
if (ret)
goto out;
}
/* once recovery info is written, don't need to track this */ /* once recovery info is written, don't need to track this */
remove_ino_entry(sbi, ino, APPEND_INO); remove_ino_entry(sbi, ino, APPEND_INO);
...@@ -382,7 +394,8 @@ static loff_t f2fs_seek_block(struct file *file, loff_t offset, int whence) ...@@ -382,7 +394,8 @@ static loff_t f2fs_seek_block(struct file *file, loff_t offset, int whence)
dn.ofs_in_node++, pgofs++, dn.ofs_in_node++, pgofs++,
data_ofs = (loff_t)pgofs << PAGE_SHIFT) { data_ofs = (loff_t)pgofs << PAGE_SHIFT) {
block_t blkaddr; block_t blkaddr;
blkaddr = datablock_addr(dn.node_page, dn.ofs_in_node); blkaddr = datablock_addr(dn.inode,
dn.node_page, dn.ofs_in_node);
if (__found_offset(blkaddr, dirty, pgofs, whence)) { if (__found_offset(blkaddr, dirty, pgofs, whence)) {
f2fs_put_dnode(&dn); f2fs_put_dnode(&dn);
...@@ -467,9 +480,13 @@ int truncate_data_blocks_range(struct dnode_of_data *dn, int count) ...@@ -467,9 +480,13 @@ int truncate_data_blocks_range(struct dnode_of_data *dn, int count)
struct f2fs_node *raw_node; struct f2fs_node *raw_node;
int nr_free = 0, ofs = dn->ofs_in_node, len = count; int nr_free = 0, ofs = dn->ofs_in_node, len = count;
__le32 *addr; __le32 *addr;
int base = 0;
if (IS_INODE(dn->node_page) && f2fs_has_extra_attr(dn->inode))
base = get_extra_isize(dn->inode);
raw_node = F2FS_NODE(dn->node_page); raw_node = F2FS_NODE(dn->node_page);
addr = blkaddr_in_node(raw_node) + ofs; addr = blkaddr_in_node(raw_node) + base + ofs;
for (; count > 0; count--, addr++, dn->ofs_in_node++) { for (; count > 0; count--, addr++, dn->ofs_in_node++) {
block_t blkaddr = le32_to_cpu(*addr); block_t blkaddr = le32_to_cpu(*addr);
...@@ -647,7 +664,7 @@ int f2fs_getattr(const struct path *path, struct kstat *stat, ...@@ -647,7 +664,7 @@ int f2fs_getattr(const struct path *path, struct kstat *stat,
struct f2fs_inode_info *fi = F2FS_I(inode); struct f2fs_inode_info *fi = F2FS_I(inode);
unsigned int flags; unsigned int flags;
flags = fi->i_flags & FS_FL_USER_VISIBLE; flags = fi->i_flags & (FS_FL_USER_VISIBLE | FS_PROJINHERIT_FL);
if (flags & FS_APPEND_FL) if (flags & FS_APPEND_FL)
stat->attributes |= STATX_ATTR_APPEND; stat->attributes |= STATX_ATTR_APPEND;
if (flags & FS_COMPR_FL) if (flags & FS_COMPR_FL)
...@@ -927,7 +944,8 @@ static int __read_out_blkaddrs(struct inode *inode, block_t *blkaddr, ...@@ -927,7 +944,8 @@ static int __read_out_blkaddrs(struct inode *inode, block_t *blkaddr,
done = min((pgoff_t)ADDRS_PER_PAGE(dn.node_page, inode) - done = min((pgoff_t)ADDRS_PER_PAGE(dn.node_page, inode) -
dn.ofs_in_node, len); dn.ofs_in_node, len);
for (i = 0; i < done; i++, blkaddr++, do_replace++, dn.ofs_in_node++) { for (i = 0; i < done; i++, blkaddr++, do_replace++, dn.ofs_in_node++) {
*blkaddr = datablock_addr(dn.node_page, dn.ofs_in_node); *blkaddr = datablock_addr(dn.inode,
dn.node_page, dn.ofs_in_node);
if (!is_checkpointed_data(sbi, *blkaddr)) { if (!is_checkpointed_data(sbi, *blkaddr)) {
if (test_opt(sbi, LFS)) { if (test_opt(sbi, LFS)) {
...@@ -1003,8 +1021,8 @@ static int __clone_blkaddrs(struct inode *src_inode, struct inode *dst_inode, ...@@ -1003,8 +1021,8 @@ static int __clone_blkaddrs(struct inode *src_inode, struct inode *dst_inode,
ADDRS_PER_PAGE(dn.node_page, dst_inode) - ADDRS_PER_PAGE(dn.node_page, dst_inode) -
dn.ofs_in_node, len - i); dn.ofs_in_node, len - i);
do { do {
dn.data_blkaddr = datablock_addr(dn.node_page, dn.data_blkaddr = datablock_addr(dn.inode,
dn.ofs_in_node); dn.node_page, dn.ofs_in_node);
truncate_data_blocks_range(&dn, 1); truncate_data_blocks_range(&dn, 1);
if (do_replace[i]) { if (do_replace[i]) {
...@@ -1173,7 +1191,8 @@ static int f2fs_do_zero_range(struct dnode_of_data *dn, pgoff_t start, ...@@ -1173,7 +1191,8 @@ static int f2fs_do_zero_range(struct dnode_of_data *dn, pgoff_t start,
int ret; int ret;
for (; index < end; index++, dn->ofs_in_node++) { for (; index < end; index++, dn->ofs_in_node++) {
if (datablock_addr(dn->node_page, dn->ofs_in_node) == NULL_ADDR) if (datablock_addr(dn->inode, dn->node_page,
dn->ofs_in_node) == NULL_ADDR)
count++; count++;
} }
...@@ -1184,8 +1203,8 @@ static int f2fs_do_zero_range(struct dnode_of_data *dn, pgoff_t start, ...@@ -1184,8 +1203,8 @@ static int f2fs_do_zero_range(struct dnode_of_data *dn, pgoff_t start,
dn->ofs_in_node = ofs_in_node; dn->ofs_in_node = ofs_in_node;
for (index = start; index < end; index++, dn->ofs_in_node++) { for (index = start; index < end; index++, dn->ofs_in_node++) {
dn->data_blkaddr = dn->data_blkaddr = datablock_addr(dn->inode,
datablock_addr(dn->node_page, dn->ofs_in_node); dn->node_page, dn->ofs_in_node);
/* /*
* reserve_new_blocks will not guarantee entire block * reserve_new_blocks will not guarantee entire block
* allocation. * allocation.
...@@ -1495,33 +1514,67 @@ static int f2fs_release_file(struct inode *inode, struct file *filp) ...@@ -1495,33 +1514,67 @@ static int f2fs_release_file(struct inode *inode, struct file *filp)
return 0; return 0;
} }
#define F2FS_REG_FLMASK (~(FS_DIRSYNC_FL | FS_TOPDIR_FL)) static int f2fs_file_flush(struct file *file, fl_owner_t id)
#define F2FS_OTHER_FLMASK (FS_NODUMP_FL | FS_NOATIME_FL)
static inline __u32 f2fs_mask_flags(umode_t mode, __u32 flags)
{ {
if (S_ISDIR(mode)) struct inode *inode = file_inode(file);
return flags;
else if (S_ISREG(mode)) /*
return flags & F2FS_REG_FLMASK; * If the process doing a transaction crashes, we should do a
else * roll-back. Otherwise, other readers/writers can see a corrupted database
return flags & F2FS_OTHER_FLMASK; * until all the writers close the file. Since this should be done
* before dropping the file lock, it needs to be done in ->flush.
*/
if (f2fs_is_atomic_file(inode) &&
F2FS_I(inode)->inmem_task == current)
drop_inmem_pages(inode);
return 0;
} }
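The roll-back comment above describes the atomic-write transaction model worked out with SQLite: a writer issues F2FS_IOC_START_ATOMIC_WRITE, buffers its updates, and publishes them with F2FS_IOC_COMMIT_ATOMIC_WRITE, while ->flush now drops a crashed writer's pending pages. A userspace sketch of that sequence (illustrative only, not part of the patch; the ioctl values and the file path are assumptions copied from fs/f2fs/f2fs.h of this era, so verify them against your headers):
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
/* Assumed ioctl numbers mirroring fs/f2fs/f2fs.h; verify before use. */
#define F2FS_IOCTL_MAGIC                0xf5
#define F2FS_IOC_START_ATOMIC_WRITE     _IO(F2FS_IOCTL_MAGIC, 1)
#define F2FS_IOC_COMMIT_ATOMIC_WRITE    _IO(F2FS_IOCTL_MAGIC, 2)
int main(int argc, char **argv)
{
        const char *path = argc > 1 ? argv[1] : "test.db";      /* hypothetical file */
        int fd = open(path, O_RDWR | O_CREAT, 0600);
        if (fd < 0) {
                perror("open");
                return 1;
        }
        if (ioctl(fd, F2FS_IOC_START_ATOMIC_WRITE) < 0) {
                perror("F2FS_IOC_START_ATOMIC_WRITE");
                return 1;
        }
        /* Everything written between start and commit forms one transaction;
         * the pages stay in memory until the commit below. */
        if (pwrite(fd, "page update", 11, 0) != 11) {
                perror("pwrite");
                return 1;
        }
        /* Commit publishes the transaction atomically.  If this process dies
         * before committing, ->flush (f2fs_file_flush) drops the in-memory
         * pages so other openers never see a half-written file. */
        if (ioctl(fd, F2FS_IOC_COMMIT_ATOMIC_WRITE) < 0) {
                perror("F2FS_IOC_COMMIT_ATOMIC_WRITE");
                return 1;
        }
        close(fd);
        return 0;
}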
static int f2fs_ioc_getflags(struct file *filp, unsigned long arg) static int f2fs_ioc_getflags(struct file *filp, unsigned long arg)
{ {
struct inode *inode = file_inode(filp); struct inode *inode = file_inode(filp);
struct f2fs_inode_info *fi = F2FS_I(inode); struct f2fs_inode_info *fi = F2FS_I(inode);
unsigned int flags = fi->i_flags & FS_FL_USER_VISIBLE; unsigned int flags = fi->i_flags &
(FS_FL_USER_VISIBLE | FS_PROJINHERIT_FL);
return put_user(flags, (int __user *)arg); return put_user(flags, (int __user *)arg);
} }
static int __f2fs_ioc_setflags(struct inode *inode, unsigned int flags)
{
struct f2fs_inode_info *fi = F2FS_I(inode);
unsigned int oldflags;
/* Is it quota file? Do not allow user to mess with it */
if (IS_NOQUOTA(inode))
return -EPERM;
flags = f2fs_mask_flags(inode->i_mode, flags);
oldflags = fi->i_flags;
if ((flags ^ oldflags) & (FS_APPEND_FL | FS_IMMUTABLE_FL))
if (!capable(CAP_LINUX_IMMUTABLE))
return -EPERM;
flags = flags & (FS_FL_USER_MODIFIABLE | FS_PROJINHERIT_FL);
flags |= oldflags & ~(FS_FL_USER_MODIFIABLE | FS_PROJINHERIT_FL);
fi->i_flags = flags;
if (fi->i_flags & FS_PROJINHERIT_FL)
set_inode_flag(inode, FI_PROJ_INHERIT);
else
clear_inode_flag(inode, FI_PROJ_INHERIT);
inode->i_ctime = current_time(inode);
f2fs_set_inode_flags(inode);
f2fs_mark_inode_dirty_sync(inode, false);
return 0;
}
static int f2fs_ioc_setflags(struct file *filp, unsigned long arg) static int f2fs_ioc_setflags(struct file *filp, unsigned long arg)
{ {
struct inode *inode = file_inode(filp); struct inode *inode = file_inode(filp);
struct f2fs_inode_info *fi = F2FS_I(inode);
unsigned int flags; unsigned int flags;
unsigned int oldflags;
int ret; int ret;
if (!inode_owner_or_capable(inode)) if (!inode_owner_or_capable(inode))
...@@ -1536,31 +1589,8 @@ static int f2fs_ioc_setflags(struct file *filp, unsigned long arg) ...@@ -1536,31 +1589,8 @@ static int f2fs_ioc_setflags(struct file *filp, unsigned long arg)
inode_lock(inode); inode_lock(inode);
/* Is it quota file? Do not allow user to mess with it */ ret = __f2fs_ioc_setflags(inode, flags);
if (IS_NOQUOTA(inode)) {
ret = -EPERM;
goto unlock_out;
}
flags = f2fs_mask_flags(inode->i_mode, flags);
oldflags = fi->i_flags;
if ((flags ^ oldflags) & (FS_APPEND_FL | FS_IMMUTABLE_FL)) {
if (!capable(CAP_LINUX_IMMUTABLE)) {
ret = -EPERM;
goto unlock_out;
}
}
flags = flags & FS_FL_USER_MODIFIABLE;
flags |= oldflags & ~FS_FL_USER_MODIFIABLE;
fi->i_flags = flags;
inode->i_ctime = current_time(inode);
f2fs_set_inode_flags(inode);
f2fs_mark_inode_dirty_sync(inode, false);
unlock_out:
inode_unlock(inode); inode_unlock(inode);
mnt_drop_write_file(filp); mnt_drop_write_file(filp);
return ret; return ret;
...@@ -1610,10 +1640,12 @@ static int f2fs_ioc_start_atomic_write(struct file *filp) ...@@ -1610,10 +1640,12 @@ static int f2fs_ioc_start_atomic_write(struct file *filp)
ret = filemap_write_and_wait_range(inode->i_mapping, 0, LLONG_MAX); ret = filemap_write_and_wait_range(inode->i_mapping, 0, LLONG_MAX);
if (ret) { if (ret) {
clear_inode_flag(inode, FI_ATOMIC_FILE); clear_inode_flag(inode, FI_ATOMIC_FILE);
clear_inode_flag(inode, FI_HOT_DATA);
goto out; goto out;
} }
inc_stat: inc_stat:
F2FS_I(inode)->inmem_task = current;
stat_inc_atomic_write(inode); stat_inc_atomic_write(inode);
stat_update_max_atomic_write(inode); stat_update_max_atomic_write(inode);
out: out:
...@@ -1647,10 +1679,11 @@ static int f2fs_ioc_commit_atomic_write(struct file *filp) ...@@ -1647,10 +1679,11 @@ static int f2fs_ioc_commit_atomic_write(struct file *filp)
ret = f2fs_do_sync_file(filp, 0, LLONG_MAX, 0, true); ret = f2fs_do_sync_file(filp, 0, LLONG_MAX, 0, true);
if (!ret) { if (!ret) {
clear_inode_flag(inode, FI_ATOMIC_FILE); clear_inode_flag(inode, FI_ATOMIC_FILE);
clear_inode_flag(inode, FI_HOT_DATA);
stat_dec_atomic_write(inode); stat_dec_atomic_write(inode);
} }
} else { } else {
ret = f2fs_do_sync_file(filp, 0, LLONG_MAX, 0, true); ret = f2fs_do_sync_file(filp, 0, LLONG_MAX, 1, false);
} }
err_out: err_out:
inode_unlock(inode); inode_unlock(inode);
...@@ -1786,7 +1819,7 @@ static int f2fs_ioc_shutdown(struct file *filp, unsigned long arg) ...@@ -1786,7 +1819,7 @@ static int f2fs_ioc_shutdown(struct file *filp, unsigned long arg)
f2fs_stop_checkpoint(sbi, false); f2fs_stop_checkpoint(sbi, false);
break; break;
case F2FS_GOING_DOWN_METAFLUSH: case F2FS_GOING_DOWN_METAFLUSH:
sync_meta_pages(sbi, META, LONG_MAX); sync_meta_pages(sbi, META, LONG_MAX, FS_META_IO);
f2fs_stop_checkpoint(sbi, false); f2fs_stop_checkpoint(sbi, false);
break; break;
default: default:
...@@ -2043,7 +2076,7 @@ static int f2fs_defragment_range(struct f2fs_sb_info *sbi, ...@@ -2043,7 +2076,7 @@ static int f2fs_defragment_range(struct f2fs_sb_info *sbi,
*/ */
while (map.m_lblk < pg_end) { while (map.m_lblk < pg_end) {
map.m_len = pg_end - map.m_lblk; map.m_len = pg_end - map.m_lblk;
err = f2fs_map_blocks(inode, &map, 0, F2FS_GET_BLOCK_READ); err = f2fs_map_blocks(inode, &map, 0, F2FS_GET_BLOCK_DEFAULT);
if (err) if (err)
goto out; goto out;
...@@ -2085,7 +2118,7 @@ static int f2fs_defragment_range(struct f2fs_sb_info *sbi, ...@@ -2085,7 +2118,7 @@ static int f2fs_defragment_range(struct f2fs_sb_info *sbi,
do_map: do_map:
map.m_len = pg_end - map.m_lblk; map.m_len = pg_end - map.m_lblk;
err = f2fs_map_blocks(inode, &map, 0, F2FS_GET_BLOCK_READ); err = f2fs_map_blocks(inode, &map, 0, F2FS_GET_BLOCK_DEFAULT);
if (err) if (err)
goto clear_out; goto clear_out;
...@@ -2384,6 +2417,210 @@ static int f2fs_ioc_flush_device(struct file *filp, unsigned long arg) ...@@ -2384,6 +2417,210 @@ static int f2fs_ioc_flush_device(struct file *filp, unsigned long arg)
return ret; return ret;
} }
static int f2fs_ioc_get_features(struct file *filp, unsigned long arg)
{
struct inode *inode = file_inode(filp);
u32 sb_feature = le32_to_cpu(F2FS_I_SB(inode)->raw_super->feature);
/* Must validate to set it with SQLite behavior in Android. */
sb_feature |= F2FS_FEATURE_ATOMIC_WRITE;
return put_user(sb_feature, (u32 __user *)arg);
}
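f2fs_ioc_get_features() exports the superblock feature word, with atomic write always reported, so applications can probe support before relying on it. A hedged probe from userspace (not part of the patch; the ioctl number and feature bit are assumptions mirroring fs/f2fs/f2fs.h, so check your headers):
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
/* Assumed definitions mirroring fs/f2fs/f2fs.h; verify before use. */
#define F2FS_IOCTL_MAGIC                0xf5
#define F2FS_IOC_GET_FEATURES           _IOR(F2FS_IOCTL_MAGIC, 12, uint32_t)
#define F2FS_FEATURE_ATOMIC_WRITE       0x0004
int main(int argc, char **argv)
{
        uint32_t features = 0;
        int fd = open(argc > 1 ? argv[1] : ".", O_RDONLY);      /* any file on the f2fs mount */
        if (fd < 0) {
                perror("open");
                return 1;
        }
        if (ioctl(fd, F2FS_IOC_GET_FEATURES, &features) < 0) {
                perror("F2FS_IOC_GET_FEATURES");
                return 1;
        }
        printf("features=0x%x, atomic write %ssupported\n", features,
                (features & F2FS_FEATURE_ATOMIC_WRITE) ? "" : "not ");
        close(fd);
        return 0;
}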
#ifdef CONFIG_QUOTA
static int f2fs_ioc_setproject(struct file *filp, __u32 projid)
{
struct inode *inode = file_inode(filp);
struct f2fs_inode_info *fi = F2FS_I(inode);
struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
struct super_block *sb = sbi->sb;
struct dquot *transfer_to[MAXQUOTAS] = {};
struct page *ipage;
kprojid_t kprojid;
int err;
if (!f2fs_sb_has_project_quota(sb)) {
if (projid != F2FS_DEF_PROJID)
return -EOPNOTSUPP;
else
return 0;
}
if (!f2fs_has_extra_attr(inode))
return -EOPNOTSUPP;
kprojid = make_kprojid(&init_user_ns, (projid_t)projid);
if (projid_eq(kprojid, F2FS_I(inode)->i_projid))
return 0;
err = mnt_want_write_file(filp);
if (err)
return err;
err = -EPERM;
inode_lock(inode);
/* Is it quota file? Do not allow user to mess with it */
if (IS_NOQUOTA(inode))
goto out_unlock;
ipage = get_node_page(sbi, inode->i_ino);
if (IS_ERR(ipage)) {
err = PTR_ERR(ipage);
goto out_unlock;
}
if (!F2FS_FITS_IN_INODE(F2FS_INODE(ipage), fi->i_extra_isize,
i_projid)) {
err = -EOVERFLOW;
f2fs_put_page(ipage, 1);
goto out_unlock;
}
f2fs_put_page(ipage, 1);
dquot_initialize(inode);
transfer_to[PRJQUOTA] = dqget(sb, make_kqid_projid(kprojid));
if (!IS_ERR(transfer_to[PRJQUOTA])) {
err = __dquot_transfer(inode, transfer_to);
dqput(transfer_to[PRJQUOTA]);
if (err)
goto out_dirty;
}
F2FS_I(inode)->i_projid = kprojid;
inode->i_ctime = current_time(inode);
out_dirty:
f2fs_mark_inode_dirty_sync(inode, true);
out_unlock:
inode_unlock(inode);
mnt_drop_write_file(filp);
return err;
}
#else
static int f2fs_ioc_setproject(struct file *filp, __u32 projid)
{
if (projid != F2FS_DEF_PROJID)
return -EOPNOTSUPP;
return 0;
}
#endif
/* Transfer internal flags to xflags */
static inline __u32 f2fs_iflags_to_xflags(unsigned long iflags)
{
__u32 xflags = 0;
if (iflags & FS_SYNC_FL)
xflags |= FS_XFLAG_SYNC;
if (iflags & FS_IMMUTABLE_FL)
xflags |= FS_XFLAG_IMMUTABLE;
if (iflags & FS_APPEND_FL)
xflags |= FS_XFLAG_APPEND;
if (iflags & FS_NODUMP_FL)
xflags |= FS_XFLAG_NODUMP;
if (iflags & FS_NOATIME_FL)
xflags |= FS_XFLAG_NOATIME;
if (iflags & FS_PROJINHERIT_FL)
xflags |= FS_XFLAG_PROJINHERIT;
return xflags;
}
#define F2FS_SUPPORTED_FS_XFLAGS (FS_XFLAG_SYNC | FS_XFLAG_IMMUTABLE | \
FS_XFLAG_APPEND | FS_XFLAG_NODUMP | \
FS_XFLAG_NOATIME | FS_XFLAG_PROJINHERIT)
/* Flags we can manipulate through F2FS_IOC_FSSETXATTR */
#define F2FS_FL_XFLAG_VISIBLE (FS_SYNC_FL | \
FS_IMMUTABLE_FL | \
FS_APPEND_FL | \
FS_NODUMP_FL | \
FS_NOATIME_FL | \
FS_PROJINHERIT_FL)
/* Transfer xflags flags to internal */
static inline unsigned long f2fs_xflags_to_iflags(__u32 xflags)
{
unsigned long iflags = 0;
if (xflags & FS_XFLAG_SYNC)
iflags |= FS_SYNC_FL;
if (xflags & FS_XFLAG_IMMUTABLE)
iflags |= FS_IMMUTABLE_FL;
if (xflags & FS_XFLAG_APPEND)
iflags |= FS_APPEND_FL;
if (xflags & FS_XFLAG_NODUMP)
iflags |= FS_NODUMP_FL;
if (xflags & FS_XFLAG_NOATIME)
iflags |= FS_NOATIME_FL;
if (xflags & FS_XFLAG_PROJINHERIT)
iflags |= FS_PROJINHERIT_FL;
return iflags;
}
static int f2fs_ioc_fsgetxattr(struct file *filp, unsigned long arg)
{
struct inode *inode = file_inode(filp);
struct f2fs_inode_info *fi = F2FS_I(inode);
struct fsxattr fa;
memset(&fa, 0, sizeof(struct fsxattr));
fa.fsx_xflags = f2fs_iflags_to_xflags(fi->i_flags &
(FS_FL_USER_VISIBLE | FS_PROJINHERIT_FL));
if (f2fs_sb_has_project_quota(inode->i_sb))
fa.fsx_projid = (__u32)from_kprojid(&init_user_ns,
fi->i_projid);
if (copy_to_user((struct fsxattr __user *)arg, &fa, sizeof(fa)))
return -EFAULT;
return 0;
}
static int f2fs_ioc_fssetxattr(struct file *filp, unsigned long arg)
{
struct inode *inode = file_inode(filp);
struct f2fs_inode_info *fi = F2FS_I(inode);
struct fsxattr fa;
unsigned int flags;
int err;
if (copy_from_user(&fa, (struct fsxattr __user *)arg, sizeof(fa)))
return -EFAULT;
/* Make sure caller has proper permission */
if (!inode_owner_or_capable(inode))
return -EACCES;
if (fa.fsx_xflags & ~F2FS_SUPPORTED_FS_XFLAGS)
return -EOPNOTSUPP;
flags = f2fs_xflags_to_iflags(fa.fsx_xflags);
if (f2fs_mask_flags(inode->i_mode, flags) != flags)
return -EOPNOTSUPP;
err = mnt_want_write_file(filp);
if (err)
return err;
inode_lock(inode);
flags = (fi->i_flags & ~F2FS_FL_XFLAG_VISIBLE) |
(flags & F2FS_FL_XFLAG_VISIBLE);
err = __f2fs_ioc_setflags(inode, flags);
inode_unlock(inode);
mnt_drop_write_file(filp);
if (err)
return err;
err = f2fs_ioc_setproject(filp, fa.fsx_projid);
if (err)
return err;
return 0;
}
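The two handlers above plug f2fs into the generic FS_IOC_FSGETXATTR / FS_IOC_FSSETXATTR interface, which is how quota tooling assigns a project ID and the inheritable flags. A userspace sketch of tagging a directory for project quota (illustrative only, not part of the patch; it assumes <linux/fs.h> exports struct fsxattr, that the filesystem was created with the project quota feature, and a hypothetical mount point):
#include <fcntl.h>
#include <linux/fs.h>           /* struct fsxattr, FS_IOC_FSGETXATTR/FSSETXATTR */
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
int main(void)
{
        struct fsxattr fa;
        int fd = open("/mnt/f2fs/projdir", O_RDONLY);   /* hypothetical directory */
        if (fd < 0) {
                perror("open");
                return 1;
        }
        if (ioctl(fd, FS_IOC_FSGETXATTR, &fa) < 0) {
                perror("FS_IOC_FSGETXATTR");
                return 1;
        }
        /* Files created below this directory inherit project ID 42;
         * FS_XFLAG_PROJINHERIT requests the inheritance. */
        fa.fsx_projid = 42;
        fa.fsx_xflags |= FS_XFLAG_PROJINHERIT;
        if (ioctl(fd, FS_IOC_FSSETXATTR, &fa) < 0) {
                perror("FS_IOC_FSSETXATTR");
                return 1;
        }
        close(fd);
        return 0;
}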
long f2fs_ioctl(struct file *filp, unsigned int cmd, unsigned long arg) long f2fs_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{ {
...@@ -2426,6 +2663,12 @@ long f2fs_ioctl(struct file *filp, unsigned int cmd, unsigned long arg) ...@@ -2426,6 +2663,12 @@ long f2fs_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
return f2fs_ioc_move_range(filp, arg); return f2fs_ioc_move_range(filp, arg);
case F2FS_IOC_FLUSH_DEVICE: case F2FS_IOC_FLUSH_DEVICE:
return f2fs_ioc_flush_device(filp, arg); return f2fs_ioc_flush_device(filp, arg);
case F2FS_IOC_GET_FEATURES:
return f2fs_ioc_get_features(filp, arg);
case F2FS_IOC_FSGETXATTR:
return f2fs_ioc_fsgetxattr(filp, arg);
case F2FS_IOC_FSSETXATTR:
return f2fs_ioc_fssetxattr(filp, arg);
default: default:
return -ENOTTY; return -ENOTTY;
} }
...@@ -2455,6 +2698,9 @@ static ssize_t f2fs_file_write_iter(struct kiocb *iocb, struct iov_iter *from) ...@@ -2455,6 +2698,9 @@ static ssize_t f2fs_file_write_iter(struct kiocb *iocb, struct iov_iter *from)
ret = __generic_file_write_iter(iocb, from); ret = __generic_file_write_iter(iocb, from);
blk_finish_plug(&plug); blk_finish_plug(&plug);
clear_inode_flag(inode, FI_NO_PREALLOC); clear_inode_flag(inode, FI_NO_PREALLOC);
if (ret > 0)
f2fs_update_iostat(F2FS_I_SB(inode), APP_WRITE_IO, ret);
} }
inode_unlock(inode); inode_unlock(inode);
...@@ -2491,6 +2737,9 @@ long f2fs_compat_ioctl(struct file *file, unsigned int cmd, unsigned long arg) ...@@ -2491,6 +2737,9 @@ long f2fs_compat_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
case F2FS_IOC_DEFRAGMENT: case F2FS_IOC_DEFRAGMENT:
case F2FS_IOC_MOVE_RANGE: case F2FS_IOC_MOVE_RANGE:
case F2FS_IOC_FLUSH_DEVICE: case F2FS_IOC_FLUSH_DEVICE:
case F2FS_IOC_GET_FEATURES:
case F2FS_IOC_FSGETXATTR:
case F2FS_IOC_FSSETXATTR:
break; break;
default: default:
return -ENOIOCTLCMD; return -ENOIOCTLCMD;
...@@ -2506,6 +2755,7 @@ const struct file_operations f2fs_file_operations = { ...@@ -2506,6 +2755,7 @@ const struct file_operations f2fs_file_operations = {
.open = f2fs_file_open, .open = f2fs_file_open,
.release = f2fs_release_file, .release = f2fs_release_file,
.mmap = f2fs_file_mmap, .mmap = f2fs_file_mmap,
.flush = f2fs_file_flush,
.fsync = f2fs_sync_file, .fsync = f2fs_sync_file,
.fallocate = f2fs_fallocate, .fallocate = f2fs_fallocate,
.unlocked_ioctl = f2fs_ioctl, .unlocked_ioctl = f2fs_ioctl,
......
...@@ -28,16 +28,21 @@ static int gc_thread_func(void *data) ...@@ -28,16 +28,21 @@ static int gc_thread_func(void *data)
struct f2fs_sb_info *sbi = data; struct f2fs_sb_info *sbi = data;
struct f2fs_gc_kthread *gc_th = sbi->gc_thread; struct f2fs_gc_kthread *gc_th = sbi->gc_thread;
wait_queue_head_t *wq = &sbi->gc_thread->gc_wait_queue_head; wait_queue_head_t *wq = &sbi->gc_thread->gc_wait_queue_head;
long wait_ms; unsigned int wait_ms;
wait_ms = gc_th->min_sleep_time; wait_ms = gc_th->min_sleep_time;
set_freezable(); set_freezable();
do { do {
wait_event_interruptible_timeout(*wq, wait_event_interruptible_timeout(*wq,
kthread_should_stop() || freezing(current), kthread_should_stop() || freezing(current) ||
gc_th->gc_wake,
msecs_to_jiffies(wait_ms)); msecs_to_jiffies(wait_ms));
/* give it a try one time */
if (gc_th->gc_wake)
gc_th->gc_wake = 0;
if (try_to_freeze()) if (try_to_freeze())
continue; continue;
if (kthread_should_stop()) if (kthread_should_stop())
...@@ -55,6 +60,9 @@ static int gc_thread_func(void *data) ...@@ -55,6 +60,9 @@ static int gc_thread_func(void *data)
} }
#endif #endif
if (!sb_start_write_trylock(sbi->sb))
continue;
/* /*
* [GC triggering condition] * [GC triggering condition]
* 0. GC is not conducted currently. * 0. GC is not conducted currently.
...@@ -69,19 +77,24 @@ static int gc_thread_func(void *data) ...@@ -69,19 +77,24 @@ static int gc_thread_func(void *data)
* So, I'd like to wait some time to collect dirty segments. * So, I'd like to wait some time to collect dirty segments.
*/ */
if (!mutex_trylock(&sbi->gc_mutex)) if (!mutex_trylock(&sbi->gc_mutex))
continue; goto next;
if (gc_th->gc_urgent) {
wait_ms = gc_th->urgent_sleep_time;
goto do_gc;
}
if (!is_idle(sbi)) { if (!is_idle(sbi)) {
increase_sleep_time(gc_th, &wait_ms); increase_sleep_time(gc_th, &wait_ms);
mutex_unlock(&sbi->gc_mutex); mutex_unlock(&sbi->gc_mutex);
continue; goto next;
} }
if (has_enough_invalid_blocks(sbi)) if (has_enough_invalid_blocks(sbi))
decrease_sleep_time(gc_th, &wait_ms); decrease_sleep_time(gc_th, &wait_ms);
else else
increase_sleep_time(gc_th, &wait_ms); increase_sleep_time(gc_th, &wait_ms);
do_gc:
stat_inc_bggc_count(sbi); stat_inc_bggc_count(sbi);
/* if return value is not zero, no victim was selected */ /* if return value is not zero, no victim was selected */
...@@ -93,6 +106,8 @@ static int gc_thread_func(void *data) ...@@ -93,6 +106,8 @@ static int gc_thread_func(void *data)
/* balancing f2fs's metadata periodically */ /* balancing f2fs's metadata periodically */
f2fs_balance_fs_bg(sbi); f2fs_balance_fs_bg(sbi);
next:
sb_end_write(sbi->sb);
} while (!kthread_should_stop()); } while (!kthread_should_stop());
return 0; return 0;
...@@ -110,11 +125,14 @@ int start_gc_thread(struct f2fs_sb_info *sbi) ...@@ -110,11 +125,14 @@ int start_gc_thread(struct f2fs_sb_info *sbi)
goto out; goto out;
} }
gc_th->urgent_sleep_time = DEF_GC_THREAD_URGENT_SLEEP_TIME;
gc_th->min_sleep_time = DEF_GC_THREAD_MIN_SLEEP_TIME; gc_th->min_sleep_time = DEF_GC_THREAD_MIN_SLEEP_TIME;
gc_th->max_sleep_time = DEF_GC_THREAD_MAX_SLEEP_TIME; gc_th->max_sleep_time = DEF_GC_THREAD_MAX_SLEEP_TIME;
gc_th->no_gc_sleep_time = DEF_GC_THREAD_NOGC_SLEEP_TIME; gc_th->no_gc_sleep_time = DEF_GC_THREAD_NOGC_SLEEP_TIME;
gc_th->gc_idle = 0; gc_th->gc_idle = 0;
gc_th->gc_urgent = 0;
gc_th->gc_wake = 0;
sbi->gc_thread = gc_th; sbi->gc_thread = gc_th;
init_waitqueue_head(&sbi->gc_thread->gc_wait_queue_head); init_waitqueue_head(&sbi->gc_thread->gc_wait_queue_head);
...@@ -259,20 +277,11 @@ static unsigned int get_greedy_cost(struct f2fs_sb_info *sbi, ...@@ -259,20 +277,11 @@ static unsigned int get_greedy_cost(struct f2fs_sb_info *sbi,
valid_blocks * 2 : valid_blocks; valid_blocks * 2 : valid_blocks;
} }
static unsigned int get_ssr_cost(struct f2fs_sb_info *sbi,
unsigned int segno)
{
struct seg_entry *se = get_seg_entry(sbi, segno);
return se->ckpt_valid_blocks > se->valid_blocks ?
se->ckpt_valid_blocks : se->valid_blocks;
}
static inline unsigned int get_gc_cost(struct f2fs_sb_info *sbi, static inline unsigned int get_gc_cost(struct f2fs_sb_info *sbi,
unsigned int segno, struct victim_sel_policy *p) unsigned int segno, struct victim_sel_policy *p)
{ {
if (p->alloc_mode == SSR) if (p->alloc_mode == SSR)
return get_ssr_cost(sbi, segno); return get_seg_entry(sbi, segno)->ckpt_valid_blocks;
/* alloc_mode == LFS */ /* alloc_mode == LFS */
if (p->gc_mode == GC_GREEDY) if (p->gc_mode == GC_GREEDY)
...@@ -582,7 +591,7 @@ static bool is_alive(struct f2fs_sb_info *sbi, struct f2fs_summary *sum, ...@@ -582,7 +591,7 @@ static bool is_alive(struct f2fs_sb_info *sbi, struct f2fs_summary *sum,
} }
*nofs = ofs_of_node(node_page); *nofs = ofs_of_node(node_page);
source_blkaddr = datablock_addr(node_page, ofs_in_node); source_blkaddr = datablock_addr(NULL, node_page, ofs_in_node);
f2fs_put_page(node_page, 1); f2fs_put_page(node_page, 1);
if (source_blkaddr != blkaddr) if (source_blkaddr != blkaddr)
...@@ -590,8 +599,12 @@ static bool is_alive(struct f2fs_sb_info *sbi, struct f2fs_summary *sum, ...@@ -590,8 +599,12 @@ static bool is_alive(struct f2fs_sb_info *sbi, struct f2fs_summary *sum,
return true; return true;
} }
static void move_encrypted_block(struct inode *inode, block_t bidx, /*
unsigned int segno, int off) * Move data block via META_MAPPING while keeping locked data page.
* This can be used to move blocks, aka LBAs, directly on disk.
*/
static void move_data_block(struct inode *inode, block_t bidx,
unsigned int segno, int off)
{ {
struct f2fs_io_info fio = { struct f2fs_io_info fio = {
.sbi = F2FS_I_SB(inode), .sbi = F2FS_I_SB(inode),
...@@ -684,6 +697,8 @@ static void move_encrypted_block(struct inode *inode, block_t bidx, ...@@ -684,6 +697,8 @@ static void move_encrypted_block(struct inode *inode, block_t bidx,
fio.new_blkaddr = newaddr; fio.new_blkaddr = newaddr;
f2fs_submit_page_write(&fio); f2fs_submit_page_write(&fio);
f2fs_update_iostat(fio.sbi, FS_GC_DATA_IO, F2FS_BLKSIZE);
f2fs_update_data_blkaddr(&dn, newaddr); f2fs_update_data_blkaddr(&dn, newaddr);
set_inode_flag(inode, FI_APPEND_WRITE); set_inode_flag(inode, FI_APPEND_WRITE);
if (page->index == 0) if (page->index == 0)
...@@ -731,6 +746,7 @@ static void move_data_page(struct inode *inode, block_t bidx, int gc_type, ...@@ -731,6 +746,7 @@ static void move_data_page(struct inode *inode, block_t bidx, int gc_type,
.page = page, .page = page,
.encrypted_page = NULL, .encrypted_page = NULL,
.need_lock = LOCK_REQ, .need_lock = LOCK_REQ,
.io_type = FS_GC_DATA_IO,
}; };
bool is_dirty = PageDirty(page); bool is_dirty = PageDirty(page);
int err; int err;
...@@ -819,8 +835,7 @@ static void gc_data_segment(struct f2fs_sb_info *sbi, struct f2fs_summary *sum, ...@@ -819,8 +835,7 @@ static void gc_data_segment(struct f2fs_sb_info *sbi, struct f2fs_summary *sum,
continue; continue;
/* if encrypted inode, let's go phase 3 */ /* if encrypted inode, let's go phase 3 */
if (f2fs_encrypted_inode(inode) && if (f2fs_encrypted_file(inode)) {
S_ISREG(inode->i_mode)) {
add_gc_inode(gc_list, inode); add_gc_inode(gc_list, inode);
continue; continue;
} }
...@@ -854,14 +869,18 @@ static void gc_data_segment(struct f2fs_sb_info *sbi, struct f2fs_summary *sum, ...@@ -854,14 +869,18 @@ static void gc_data_segment(struct f2fs_sb_info *sbi, struct f2fs_summary *sum,
continue; continue;
} }
locked = true; locked = true;
/* wait for all inflight aio data */
inode_dio_wait(inode);
} }
start_bidx = start_bidx_of_node(nofs, inode) start_bidx = start_bidx_of_node(nofs, inode)
+ ofs_in_node; + ofs_in_node;
if (f2fs_encrypted_inode(inode) && S_ISREG(inode->i_mode)) if (f2fs_encrypted_file(inode))
move_encrypted_block(inode, start_bidx, segno, off); move_data_block(inode, start_bidx, segno, off);
else else
move_data_page(inode, start_bidx, gc_type, segno, off); move_data_page(inode, start_bidx, gc_type,
segno, off);
if (locked) { if (locked) {
up_write(&fi->dio_rwsem[WRITE]); up_write(&fi->dio_rwsem[WRITE]);
...@@ -898,7 +917,7 @@ static int do_garbage_collect(struct f2fs_sb_info *sbi, ...@@ -898,7 +917,7 @@ static int do_garbage_collect(struct f2fs_sb_info *sbi,
struct blk_plug plug; struct blk_plug plug;
unsigned int segno = start_segno; unsigned int segno = start_segno;
unsigned int end_segno = start_segno + sbi->segs_per_sec; unsigned int end_segno = start_segno + sbi->segs_per_sec;
int sec_freed = 0; int seg_freed = 0;
unsigned char type = IS_DATASEG(get_seg_entry(sbi, segno)->type) ? unsigned char type = IS_DATASEG(get_seg_entry(sbi, segno)->type) ?
SUM_TYPE_DATA : SUM_TYPE_NODE; SUM_TYPE_DATA : SUM_TYPE_NODE;
...@@ -944,6 +963,10 @@ static int do_garbage_collect(struct f2fs_sb_info *sbi, ...@@ -944,6 +963,10 @@ static int do_garbage_collect(struct f2fs_sb_info *sbi,
gc_type); gc_type);
stat_inc_seg_count(sbi, type, gc_type); stat_inc_seg_count(sbi, type, gc_type);
if (gc_type == FG_GC &&
get_valid_blocks(sbi, segno, false) == 0)
seg_freed++;
next: next:
f2fs_put_page(sum_page, 0); f2fs_put_page(sum_page, 0);
} }
...@@ -954,21 +977,17 @@ static int do_garbage_collect(struct f2fs_sb_info *sbi, ...@@ -954,21 +977,17 @@ static int do_garbage_collect(struct f2fs_sb_info *sbi,
blk_finish_plug(&plug); blk_finish_plug(&plug);
if (gc_type == FG_GC &&
get_valid_blocks(sbi, start_segno, true) == 0)
sec_freed = 1;
stat_inc_call_count(sbi->stat_info); stat_inc_call_count(sbi->stat_info);
return sec_freed; return seg_freed;
} }
int f2fs_gc(struct f2fs_sb_info *sbi, bool sync, int f2fs_gc(struct f2fs_sb_info *sbi, bool sync,
bool background, unsigned int segno) bool background, unsigned int segno)
{ {
int gc_type = sync ? FG_GC : BG_GC; int gc_type = sync ? FG_GC : BG_GC;
int sec_freed = 0; int sec_freed = 0, seg_freed = 0, total_freed = 0;
int ret; int ret = 0;
struct cp_control cpc; struct cp_control cpc;
unsigned int init_segno = segno; unsigned int init_segno = segno;
struct gc_inode_list gc_list = { struct gc_inode_list gc_list = {
...@@ -976,6 +995,15 @@ int f2fs_gc(struct f2fs_sb_info *sbi, bool sync, ...@@ -976,6 +995,15 @@ int f2fs_gc(struct f2fs_sb_info *sbi, bool sync,
.iroot = RADIX_TREE_INIT(GFP_NOFS), .iroot = RADIX_TREE_INIT(GFP_NOFS),
}; };
trace_f2fs_gc_begin(sbi->sb, sync, background,
get_pages(sbi, F2FS_DIRTY_NODES),
get_pages(sbi, F2FS_DIRTY_DENTS),
get_pages(sbi, F2FS_DIRTY_IMETA),
free_sections(sbi),
free_segments(sbi),
reserved_segments(sbi),
prefree_segments(sbi));
cpc.reason = __get_cp_reason(sbi); cpc.reason = __get_cp_reason(sbi);
gc_more: gc_more:
if (unlikely(!(sbi->sb->s_flags & MS_ACTIVE))) { if (unlikely(!(sbi->sb->s_flags & MS_ACTIVE))) {
...@@ -1002,17 +1030,20 @@ int f2fs_gc(struct f2fs_sb_info *sbi, bool sync, ...@@ -1002,17 +1030,20 @@ int f2fs_gc(struct f2fs_sb_info *sbi, bool sync,
gc_type = FG_GC; gc_type = FG_GC;
} }
ret = -EINVAL;
/* f2fs_balance_fs doesn't need to do BG_GC in critical path. */ /* f2fs_balance_fs doesn't need to do BG_GC in critical path. */
if (gc_type == BG_GC && !background) if (gc_type == BG_GC && !background) {
ret = -EINVAL;
goto stop; goto stop;
if (!__get_victim(sbi, &segno, gc_type)) }
if (!__get_victim(sbi, &segno, gc_type)) {
ret = -ENODATA;
goto stop; goto stop;
ret = 0; }
if (do_garbage_collect(sbi, segno, &gc_list, gc_type) && seg_freed = do_garbage_collect(sbi, segno, &gc_list, gc_type);
gc_type == FG_GC) if (gc_type == FG_GC && seg_freed == sbi->segs_per_sec)
sec_freed++; sec_freed++;
total_freed += seg_freed;
if (gc_type == FG_GC) if (gc_type == FG_GC)
sbi->cur_victim_sec = NULL_SEGNO; sbi->cur_victim_sec = NULL_SEGNO;
...@@ -1029,6 +1060,16 @@ int f2fs_gc(struct f2fs_sb_info *sbi, bool sync, ...@@ -1029,6 +1060,16 @@ int f2fs_gc(struct f2fs_sb_info *sbi, bool sync,
stop: stop:
SIT_I(sbi)->last_victim[ALLOC_NEXT] = 0; SIT_I(sbi)->last_victim[ALLOC_NEXT] = 0;
SIT_I(sbi)->last_victim[FLUSH_DEVICE] = init_segno; SIT_I(sbi)->last_victim[FLUSH_DEVICE] = init_segno;
trace_f2fs_gc_end(sbi->sb, ret, total_freed, sec_freed,
get_pages(sbi, F2FS_DIRTY_NODES),
get_pages(sbi, F2FS_DIRTY_DENTS),
get_pages(sbi, F2FS_DIRTY_IMETA),
free_sections(sbi),
free_segments(sbi),
reserved_segments(sbi),
prefree_segments(sbi));
mutex_unlock(&sbi->gc_mutex); mutex_unlock(&sbi->gc_mutex);
put_gc_inode(&gc_list); put_gc_inode(&gc_list);
......
...@@ -13,6 +13,7 @@ ...@@ -13,6 +13,7 @@
* whether IO subsystem is idle * whether IO subsystem is idle
* or not * or not
*/ */
#define DEF_GC_THREAD_URGENT_SLEEP_TIME 500 /* 500 ms */
#define DEF_GC_THREAD_MIN_SLEEP_TIME 30000 /* milliseconds */ #define DEF_GC_THREAD_MIN_SLEEP_TIME 30000 /* milliseconds */
#define DEF_GC_THREAD_MAX_SLEEP_TIME 60000 #define DEF_GC_THREAD_MAX_SLEEP_TIME 60000
#define DEF_GC_THREAD_NOGC_SLEEP_TIME 300000 /* wait 5 min */ #define DEF_GC_THREAD_NOGC_SLEEP_TIME 300000 /* wait 5 min */
...@@ -27,12 +28,15 @@ struct f2fs_gc_kthread { ...@@ -27,12 +28,15 @@ struct f2fs_gc_kthread {
wait_queue_head_t gc_wait_queue_head; wait_queue_head_t gc_wait_queue_head;
/* for gc sleep time */ /* for gc sleep time */
unsigned int urgent_sleep_time;
unsigned int min_sleep_time; unsigned int min_sleep_time;
unsigned int max_sleep_time; unsigned int max_sleep_time;
unsigned int no_gc_sleep_time; unsigned int no_gc_sleep_time;
/* for changing gc mode */ /* for changing gc mode */
unsigned int gc_idle; unsigned int gc_idle;
unsigned int gc_urgent;
unsigned int gc_wake;
}; };
struct gc_inode_list { struct gc_inode_list {
...@@ -65,25 +69,32 @@ static inline block_t limit_free_user_blocks(struct f2fs_sb_info *sbi) ...@@ -65,25 +69,32 @@ static inline block_t limit_free_user_blocks(struct f2fs_sb_info *sbi)
} }
static inline void increase_sleep_time(struct f2fs_gc_kthread *gc_th, static inline void increase_sleep_time(struct f2fs_gc_kthread *gc_th,
long *wait) unsigned int *wait)
{ {
unsigned int min_time = gc_th->min_sleep_time;
unsigned int max_time = gc_th->max_sleep_time;
if (*wait == gc_th->no_gc_sleep_time) if (*wait == gc_th->no_gc_sleep_time)
return; return;
*wait += gc_th->min_sleep_time; if ((long long)*wait + (long long)min_time > (long long)max_time)
if (*wait > gc_th->max_sleep_time) *wait = max_time;
*wait = gc_th->max_sleep_time; else
*wait += min_time;
} }
static inline void decrease_sleep_time(struct f2fs_gc_kthread *gc_th, static inline void decrease_sleep_time(struct f2fs_gc_kthread *gc_th,
long *wait) unsigned int *wait)
{ {
unsigned int min_time = gc_th->min_sleep_time;
if (*wait == gc_th->no_gc_sleep_time) if (*wait == gc_th->no_gc_sleep_time)
*wait = gc_th->max_sleep_time; *wait = gc_th->max_sleep_time;
*wait -= gc_th->min_sleep_time; if ((long long)*wait - (long long)min_time < (long long)min_time)
if (*wait <= gc_th->min_sleep_time) *wait = min_time;
*wait = gc_th->min_sleep_time; else
*wait -= min_time;
} }
static inline bool has_enough_invalid_blocks(struct f2fs_sb_info *sbi) static inline bool has_enough_invalid_blocks(struct f2fs_sb_info *sbi)
......
...@@ -22,10 +22,10 @@ bool f2fs_may_inline_data(struct inode *inode) ...@@ -22,10 +22,10 @@ bool f2fs_may_inline_data(struct inode *inode)
if (!S_ISREG(inode->i_mode) && !S_ISLNK(inode->i_mode)) if (!S_ISREG(inode->i_mode) && !S_ISLNK(inode->i_mode))
return false; return false;
if (i_size_read(inode) > MAX_INLINE_DATA) if (i_size_read(inode) > MAX_INLINE_DATA(inode))
return false; return false;
if (f2fs_encrypted_inode(inode) && S_ISREG(inode->i_mode)) if (f2fs_encrypted_file(inode))
return false; return false;
return true; return true;
...@@ -44,6 +44,7 @@ bool f2fs_may_inline_dentry(struct inode *inode) ...@@ -44,6 +44,7 @@ bool f2fs_may_inline_dentry(struct inode *inode)
void read_inline_data(struct page *page, struct page *ipage) void read_inline_data(struct page *page, struct page *ipage)
{ {
struct inode *inode = page->mapping->host;
void *src_addr, *dst_addr; void *src_addr, *dst_addr;
if (PageUptodate(page)) if (PageUptodate(page))
...@@ -51,12 +52,12 @@ void read_inline_data(struct page *page, struct page *ipage) ...@@ -51,12 +52,12 @@ void read_inline_data(struct page *page, struct page *ipage)
f2fs_bug_on(F2FS_P_SB(page), page->index); f2fs_bug_on(F2FS_P_SB(page), page->index);
zero_user_segment(page, MAX_INLINE_DATA, PAGE_SIZE); zero_user_segment(page, MAX_INLINE_DATA(inode), PAGE_SIZE);
/* Copy the whole inline data block */ /* Copy the whole inline data block */
src_addr = inline_data_addr(ipage); src_addr = inline_data_addr(inode, ipage);
dst_addr = kmap_atomic(page); dst_addr = kmap_atomic(page);
memcpy(dst_addr, src_addr, MAX_INLINE_DATA); memcpy(dst_addr, src_addr, MAX_INLINE_DATA(inode));
flush_dcache_page(page); flush_dcache_page(page);
kunmap_atomic(dst_addr); kunmap_atomic(dst_addr);
if (!PageUptodate(page)) if (!PageUptodate(page))
...@@ -67,13 +68,13 @@ void truncate_inline_inode(struct inode *inode, struct page *ipage, u64 from) ...@@ -67,13 +68,13 @@ void truncate_inline_inode(struct inode *inode, struct page *ipage, u64 from)
{ {
void *addr; void *addr;
if (from >= MAX_INLINE_DATA) if (from >= MAX_INLINE_DATA(inode))
return; return;
addr = inline_data_addr(ipage); addr = inline_data_addr(inode, ipage);
f2fs_wait_on_page_writeback(ipage, NODE, true); f2fs_wait_on_page_writeback(ipage, NODE, true);
memset(addr + from, 0, MAX_INLINE_DATA - from); memset(addr + from, 0, MAX_INLINE_DATA(inode) - from);
set_page_dirty(ipage); set_page_dirty(ipage);
if (from == 0) if (from == 0)
...@@ -116,6 +117,7 @@ int f2fs_convert_inline_page(struct dnode_of_data *dn, struct page *page) ...@@ -116,6 +117,7 @@ int f2fs_convert_inline_page(struct dnode_of_data *dn, struct page *page)
.op_flags = REQ_SYNC | REQ_PRIO, .op_flags = REQ_SYNC | REQ_PRIO,
.page = page, .page = page,
.encrypted_page = NULL, .encrypted_page = NULL,
.io_type = FS_DATA_IO,
}; };
int dirty, err; int dirty, err;
...@@ -200,6 +202,8 @@ int f2fs_write_inline_data(struct inode *inode, struct page *page) ...@@ -200,6 +202,8 @@ int f2fs_write_inline_data(struct inode *inode, struct page *page)
{ {
void *src_addr, *dst_addr; void *src_addr, *dst_addr;
struct dnode_of_data dn; struct dnode_of_data dn;
struct address_space *mapping = page_mapping(page);
unsigned long flags;
int err; int err;
set_new_dnode(&dn, inode, NULL, NULL, 0); set_new_dnode(&dn, inode, NULL, NULL, 0);
...@@ -216,11 +220,16 @@ int f2fs_write_inline_data(struct inode *inode, struct page *page) ...@@ -216,11 +220,16 @@ int f2fs_write_inline_data(struct inode *inode, struct page *page)
f2fs_wait_on_page_writeback(dn.inode_page, NODE, true); f2fs_wait_on_page_writeback(dn.inode_page, NODE, true);
src_addr = kmap_atomic(page); src_addr = kmap_atomic(page);
dst_addr = inline_data_addr(dn.inode_page); dst_addr = inline_data_addr(inode, dn.inode_page);
memcpy(dst_addr, src_addr, MAX_INLINE_DATA); memcpy(dst_addr, src_addr, MAX_INLINE_DATA(inode));
kunmap_atomic(src_addr); kunmap_atomic(src_addr);
set_page_dirty(dn.inode_page); set_page_dirty(dn.inode_page);
spin_lock_irqsave(&mapping->tree_lock, flags);
radix_tree_tag_clear(&mapping->page_tree, page_index(page),
PAGECACHE_TAG_DIRTY);
spin_unlock_irqrestore(&mapping->tree_lock, flags);
set_inode_flag(inode, FI_APPEND_WRITE); set_inode_flag(inode, FI_APPEND_WRITE);
set_inode_flag(inode, FI_DATA_EXIST); set_inode_flag(inode, FI_DATA_EXIST);
...@@ -255,9 +264,9 @@ bool recover_inline_data(struct inode *inode, struct page *npage) ...@@ -255,9 +264,9 @@ bool recover_inline_data(struct inode *inode, struct page *npage)
f2fs_wait_on_page_writeback(ipage, NODE, true); f2fs_wait_on_page_writeback(ipage, NODE, true);
src_addr = inline_data_addr(npage); src_addr = inline_data_addr(inode, npage);
dst_addr = inline_data_addr(ipage); dst_addr = inline_data_addr(inode, ipage);
memcpy(dst_addr, src_addr, MAX_INLINE_DATA); memcpy(dst_addr, src_addr, MAX_INLINE_DATA(inode));
set_inode_flag(inode, FI_INLINE_DATA); set_inode_flag(inode, FI_INLINE_DATA);
set_inode_flag(inode, FI_DATA_EXIST); set_inode_flag(inode, FI_DATA_EXIST);
...@@ -285,11 +294,11 @@ struct f2fs_dir_entry *find_in_inline_dir(struct inode *dir, ...@@ -285,11 +294,11 @@ struct f2fs_dir_entry *find_in_inline_dir(struct inode *dir,
struct fscrypt_name *fname, struct page **res_page) struct fscrypt_name *fname, struct page **res_page)
{ {
struct f2fs_sb_info *sbi = F2FS_SB(dir->i_sb); struct f2fs_sb_info *sbi = F2FS_SB(dir->i_sb);
struct f2fs_inline_dentry *inline_dentry;
struct qstr name = FSTR_TO_QSTR(&fname->disk_name); struct qstr name = FSTR_TO_QSTR(&fname->disk_name);
struct f2fs_dir_entry *de; struct f2fs_dir_entry *de;
struct f2fs_dentry_ptr d; struct f2fs_dentry_ptr d;
struct page *ipage; struct page *ipage;
void *inline_dentry;
f2fs_hash_t namehash; f2fs_hash_t namehash;
ipage = get_node_page(sbi, dir->i_ino); ipage = get_node_page(sbi, dir->i_ino);
...@@ -300,9 +309,9 @@ struct f2fs_dir_entry *find_in_inline_dir(struct inode *dir, ...@@ -300,9 +309,9 @@ struct f2fs_dir_entry *find_in_inline_dir(struct inode *dir,
namehash = f2fs_dentry_hash(&name, fname); namehash = f2fs_dentry_hash(&name, fname);
inline_dentry = inline_data_addr(ipage); inline_dentry = inline_data_addr(dir, ipage);
make_dentry_ptr_inline(NULL, &d, inline_dentry); make_dentry_ptr_inline(dir, &d, inline_dentry);
de = find_target_dentry(fname, namehash, NULL, &d); de = find_target_dentry(fname, namehash, NULL, &d);
unlock_page(ipage); unlock_page(ipage);
if (de) if (de)
...@@ -316,19 +325,19 @@ struct f2fs_dir_entry *find_in_inline_dir(struct inode *dir, ...@@ -316,19 +325,19 @@ struct f2fs_dir_entry *find_in_inline_dir(struct inode *dir,
int make_empty_inline_dir(struct inode *inode, struct inode *parent, int make_empty_inline_dir(struct inode *inode, struct inode *parent,
struct page *ipage) struct page *ipage)
{ {
struct f2fs_inline_dentry *inline_dentry;
struct f2fs_dentry_ptr d; struct f2fs_dentry_ptr d;
void *inline_dentry;
inline_dentry = inline_data_addr(ipage); inline_dentry = inline_data_addr(inode, ipage);
make_dentry_ptr_inline(NULL, &d, inline_dentry); make_dentry_ptr_inline(inode, &d, inline_dentry);
do_make_empty_dir(inode, parent, &d); do_make_empty_dir(inode, parent, &d);
set_page_dirty(ipage); set_page_dirty(ipage);
/* update i_size to MAX_INLINE_DATA */ /* update i_size to MAX_INLINE_DATA */
if (i_size_read(inode) < MAX_INLINE_DATA) if (i_size_read(inode) < MAX_INLINE_DATA(inode))
f2fs_i_size_write(inode, MAX_INLINE_DATA); f2fs_i_size_write(inode, MAX_INLINE_DATA(inode));
return 0; return 0;
} }
...@@ -337,11 +346,12 @@ int make_empty_inline_dir(struct inode *inode, struct inode *parent, ...@@ -337,11 +346,12 @@ int make_empty_inline_dir(struct inode *inode, struct inode *parent,
* release ipage in this function. * release ipage in this function.
*/ */
static int f2fs_move_inline_dirents(struct inode *dir, struct page *ipage, static int f2fs_move_inline_dirents(struct inode *dir, struct page *ipage,
struct f2fs_inline_dentry *inline_dentry) void *inline_dentry)
{ {
struct page *page; struct page *page;
struct dnode_of_data dn; struct dnode_of_data dn;
struct f2fs_dentry_block *dentry_blk; struct f2fs_dentry_block *dentry_blk;
struct f2fs_dentry_ptr src, dst;
int err; int err;
page = f2fs_grab_cache_page(dir->i_mapping, 0, false); page = f2fs_grab_cache_page(dir->i_mapping, 0, false);
...@@ -356,25 +366,24 @@ static int f2fs_move_inline_dirents(struct inode *dir, struct page *ipage, ...@@ -356,25 +366,24 @@ static int f2fs_move_inline_dirents(struct inode *dir, struct page *ipage,
goto out; goto out;
f2fs_wait_on_page_writeback(page, DATA, true); f2fs_wait_on_page_writeback(page, DATA, true);
zero_user_segment(page, MAX_INLINE_DATA, PAGE_SIZE); zero_user_segment(page, MAX_INLINE_DATA(dir), PAGE_SIZE);
dentry_blk = kmap_atomic(page); dentry_blk = kmap_atomic(page);
make_dentry_ptr_inline(dir, &src, inline_dentry);
make_dentry_ptr_block(dir, &dst, dentry_blk);
/* copy data from inline dentry block to new dentry block */ /* copy data from inline dentry block to new dentry block */
memcpy(dentry_blk->dentry_bitmap, inline_dentry->dentry_bitmap, memcpy(dst.bitmap, src.bitmap, src.nr_bitmap);
INLINE_DENTRY_BITMAP_SIZE); memset(dst.bitmap + src.nr_bitmap, 0, dst.nr_bitmap - src.nr_bitmap);
memset(dentry_blk->dentry_bitmap + INLINE_DENTRY_BITMAP_SIZE, 0,
SIZE_OF_DENTRY_BITMAP - INLINE_DENTRY_BITMAP_SIZE);
/* /*
* we do not need to zero out remainder part of dentry and filename * we do not need to zero out remainder part of dentry and filename
* field, since we have used bitmap for marking the usage status of * field, since we have used bitmap for marking the usage status of
* them, besides, we can also ignore copying/zeroing reserved space * them, besides, we can also ignore copying/zeroing reserved space
* of dentry block, because them haven't been used so far. * of dentry block, because them haven't been used so far.
*/ */
memcpy(dentry_blk->dentry, inline_dentry->dentry, memcpy(dst.dentry, src.dentry, SIZE_OF_DIR_ENTRY * src.max);
sizeof(struct f2fs_dir_entry) * NR_INLINE_DENTRY); memcpy(dst.filename, src.filename, src.max * F2FS_SLOT_LEN);
memcpy(dentry_blk->filename, inline_dentry->filename,
NR_INLINE_DENTRY * F2FS_SLOT_LEN);
kunmap_atomic(dentry_blk); kunmap_atomic(dentry_blk);
if (!PageUptodate(page)) if (!PageUptodate(page))
...@@ -395,14 +404,13 @@ static int f2fs_move_inline_dirents(struct inode *dir, struct page *ipage, ...@@ -395,14 +404,13 @@ static int f2fs_move_inline_dirents(struct inode *dir, struct page *ipage,
return err; return err;
} }
static int f2fs_add_inline_entries(struct inode *dir, static int f2fs_add_inline_entries(struct inode *dir, void *inline_dentry)
struct f2fs_inline_dentry *inline_dentry)
{ {
struct f2fs_dentry_ptr d; struct f2fs_dentry_ptr d;
unsigned long bit_pos = 0; unsigned long bit_pos = 0;
int err = 0; int err = 0;
make_dentry_ptr_inline(NULL, &d, inline_dentry); make_dentry_ptr_inline(dir, &d, inline_dentry);
while (bit_pos < d.max) { while (bit_pos < d.max) {
struct f2fs_dir_entry *de; struct f2fs_dir_entry *de;
...@@ -444,19 +452,19 @@ static int f2fs_add_inline_entries(struct inode *dir, ...@@ -444,19 +452,19 @@ static int f2fs_add_inline_entries(struct inode *dir,
} }
static int f2fs_move_rehashed_dirents(struct inode *dir, struct page *ipage, static int f2fs_move_rehashed_dirents(struct inode *dir, struct page *ipage,
struct f2fs_inline_dentry *inline_dentry) void *inline_dentry)
{ {
struct f2fs_inline_dentry *backup_dentry; void *backup_dentry;
int err; int err;
backup_dentry = f2fs_kmalloc(F2FS_I_SB(dir), backup_dentry = f2fs_kmalloc(F2FS_I_SB(dir),
sizeof(struct f2fs_inline_dentry), GFP_F2FS_ZERO); MAX_INLINE_DATA(dir), GFP_F2FS_ZERO);
if (!backup_dentry) { if (!backup_dentry) {
f2fs_put_page(ipage, 1); f2fs_put_page(ipage, 1);
return -ENOMEM; return -ENOMEM;
} }
memcpy(backup_dentry, inline_dentry, MAX_INLINE_DATA); memcpy(backup_dentry, inline_dentry, MAX_INLINE_DATA(dir));
truncate_inline_inode(dir, ipage, 0); truncate_inline_inode(dir, ipage, 0);
unlock_page(ipage); unlock_page(ipage);
...@@ -473,9 +481,9 @@ static int f2fs_move_rehashed_dirents(struct inode *dir, struct page *ipage, ...@@ -473,9 +481,9 @@ static int f2fs_move_rehashed_dirents(struct inode *dir, struct page *ipage,
return 0; return 0;
recover: recover:
lock_page(ipage); lock_page(ipage);
memcpy(inline_dentry, backup_dentry, MAX_INLINE_DATA); memcpy(inline_dentry, backup_dentry, MAX_INLINE_DATA(dir));
f2fs_i_depth_write(dir, 0); f2fs_i_depth_write(dir, 0);
f2fs_i_size_write(dir, MAX_INLINE_DATA); f2fs_i_size_write(dir, MAX_INLINE_DATA(dir));
set_page_dirty(ipage); set_page_dirty(ipage);
f2fs_put_page(ipage, 1); f2fs_put_page(ipage, 1);
...@@ -484,7 +492,7 @@ static int f2fs_move_rehashed_dirents(struct inode *dir, struct page *ipage, ...@@ -484,7 +492,7 @@ static int f2fs_move_rehashed_dirents(struct inode *dir, struct page *ipage,
} }
static int f2fs_convert_inline_dir(struct inode *dir, struct page *ipage, static int f2fs_convert_inline_dir(struct inode *dir, struct page *ipage,
struct f2fs_inline_dentry *inline_dentry) void *inline_dentry)
{ {
if (!F2FS_I(dir)->i_dir_level) if (!F2FS_I(dir)->i_dir_level)
return f2fs_move_inline_dirents(dir, ipage, inline_dentry); return f2fs_move_inline_dirents(dir, ipage, inline_dentry);
...@@ -500,7 +508,7 @@ int f2fs_add_inline_entry(struct inode *dir, const struct qstr *new_name, ...@@ -500,7 +508,7 @@ int f2fs_add_inline_entry(struct inode *dir, const struct qstr *new_name,
struct page *ipage; struct page *ipage;
unsigned int bit_pos; unsigned int bit_pos;
f2fs_hash_t name_hash; f2fs_hash_t name_hash;
struct f2fs_inline_dentry *inline_dentry = NULL; void *inline_dentry = NULL;
struct f2fs_dentry_ptr d; struct f2fs_dentry_ptr d;
int slots = GET_DENTRY_SLOTS(new_name->len); int slots = GET_DENTRY_SLOTS(new_name->len);
struct page *page = NULL; struct page *page = NULL;
...@@ -510,10 +518,11 @@ int f2fs_add_inline_entry(struct inode *dir, const struct qstr *new_name, ...@@ -510,10 +518,11 @@ int f2fs_add_inline_entry(struct inode *dir, const struct qstr *new_name,
if (IS_ERR(ipage)) if (IS_ERR(ipage))
return PTR_ERR(ipage); return PTR_ERR(ipage);
inline_dentry = inline_data_addr(ipage); inline_dentry = inline_data_addr(dir, ipage);
bit_pos = room_for_filename(&inline_dentry->dentry_bitmap, make_dentry_ptr_inline(dir, &d, inline_dentry);
slots, NR_INLINE_DENTRY);
if (bit_pos >= NR_INLINE_DENTRY) { bit_pos = room_for_filename(d.bitmap, slots, d.max);
if (bit_pos >= d.max) {
err = f2fs_convert_inline_dir(dir, ipage, inline_dentry); err = f2fs_convert_inline_dir(dir, ipage, inline_dentry);
if (err) if (err)
return err; return err;
...@@ -534,7 +543,6 @@ int f2fs_add_inline_entry(struct inode *dir, const struct qstr *new_name, ...@@ -534,7 +543,6 @@ int f2fs_add_inline_entry(struct inode *dir, const struct qstr *new_name,
f2fs_wait_on_page_writeback(ipage, NODE, true); f2fs_wait_on_page_writeback(ipage, NODE, true);
name_hash = f2fs_dentry_hash(new_name, NULL); name_hash = f2fs_dentry_hash(new_name, NULL);
make_dentry_ptr_inline(NULL, &d, inline_dentry);
f2fs_update_dentry(ino, mode, &d, new_name, name_hash, bit_pos); f2fs_update_dentry(ino, mode, &d, new_name, name_hash, bit_pos);
set_page_dirty(ipage); set_page_dirty(ipage);
...@@ -557,7 +565,8 @@ int f2fs_add_inline_entry(struct inode *dir, const struct qstr *new_name, ...@@ -557,7 +565,8 @@ int f2fs_add_inline_entry(struct inode *dir, const struct qstr *new_name,
void f2fs_delete_inline_entry(struct f2fs_dir_entry *dentry, struct page *page, void f2fs_delete_inline_entry(struct f2fs_dir_entry *dentry, struct page *page,
struct inode *dir, struct inode *inode) struct inode *dir, struct inode *inode)
{ {
struct f2fs_inline_dentry *inline_dentry; struct f2fs_dentry_ptr d;
void *inline_dentry;
int slots = GET_DENTRY_SLOTS(le16_to_cpu(dentry->name_len)); int slots = GET_DENTRY_SLOTS(le16_to_cpu(dentry->name_len));
unsigned int bit_pos; unsigned int bit_pos;
int i; int i;
...@@ -565,11 +574,12 @@ void f2fs_delete_inline_entry(struct f2fs_dir_entry *dentry, struct page *page, ...@@ -565,11 +574,12 @@ void f2fs_delete_inline_entry(struct f2fs_dir_entry *dentry, struct page *page,
lock_page(page); lock_page(page);
f2fs_wait_on_page_writeback(page, NODE, true); f2fs_wait_on_page_writeback(page, NODE, true);
inline_dentry = inline_data_addr(page); inline_dentry = inline_data_addr(dir, page);
bit_pos = dentry - inline_dentry->dentry; make_dentry_ptr_inline(dir, &d, inline_dentry);
bit_pos = dentry - d.dentry;
for (i = 0; i < slots; i++) for (i = 0; i < slots; i++)
__clear_bit_le(bit_pos + i, __clear_bit_le(bit_pos + i, d.bitmap);
&inline_dentry->dentry_bitmap);
set_page_dirty(page); set_page_dirty(page);
f2fs_put_page(page, 1); f2fs_put_page(page, 1);
...@@ -586,20 +596,21 @@ bool f2fs_empty_inline_dir(struct inode *dir) ...@@ -586,20 +596,21 @@ bool f2fs_empty_inline_dir(struct inode *dir)
struct f2fs_sb_info *sbi = F2FS_I_SB(dir); struct f2fs_sb_info *sbi = F2FS_I_SB(dir);
struct page *ipage; struct page *ipage;
unsigned int bit_pos = 2; unsigned int bit_pos = 2;
struct f2fs_inline_dentry *inline_dentry; void *inline_dentry;
struct f2fs_dentry_ptr d;
ipage = get_node_page(sbi, dir->i_ino); ipage = get_node_page(sbi, dir->i_ino);
if (IS_ERR(ipage)) if (IS_ERR(ipage))
return false; return false;
inline_dentry = inline_data_addr(ipage); inline_dentry = inline_data_addr(dir, ipage);
bit_pos = find_next_bit_le(&inline_dentry->dentry_bitmap, make_dentry_ptr_inline(dir, &d, inline_dentry);
NR_INLINE_DENTRY,
bit_pos); bit_pos = find_next_bit_le(d.bitmap, d.max, bit_pos);
f2fs_put_page(ipage, 1); f2fs_put_page(ipage, 1);
if (bit_pos < NR_INLINE_DENTRY) if (bit_pos < d.max)
return false; return false;
return true; return true;
...@@ -609,25 +620,27 @@ int f2fs_read_inline_dir(struct file *file, struct dir_context *ctx, ...@@ -609,25 +620,27 @@ int f2fs_read_inline_dir(struct file *file, struct dir_context *ctx,
struct fscrypt_str *fstr) struct fscrypt_str *fstr)
{ {
struct inode *inode = file_inode(file); struct inode *inode = file_inode(file);
struct f2fs_inline_dentry *inline_dentry = NULL;
struct page *ipage = NULL; struct page *ipage = NULL;
struct f2fs_dentry_ptr d; struct f2fs_dentry_ptr d;
void *inline_dentry = NULL;
int err; int err;
if (ctx->pos == NR_INLINE_DENTRY) make_dentry_ptr_inline(inode, &d, inline_dentry);
if (ctx->pos == d.max)
return 0; return 0;
ipage = get_node_page(F2FS_I_SB(inode), inode->i_ino); ipage = get_node_page(F2FS_I_SB(inode), inode->i_ino);
if (IS_ERR(ipage)) if (IS_ERR(ipage))
return PTR_ERR(ipage); return PTR_ERR(ipage);
inline_dentry = inline_data_addr(ipage); inline_dentry = inline_data_addr(inode, ipage);
make_dentry_ptr_inline(inode, &d, inline_dentry); make_dentry_ptr_inline(inode, &d, inline_dentry);
err = f2fs_fill_dentries(ctx, &d, 0, fstr); err = f2fs_fill_dentries(ctx, &d, 0, fstr);
if (!err) if (!err)
ctx->pos = NR_INLINE_DENTRY; ctx->pos = d.max;
f2fs_put_page(ipage, 1); f2fs_put_page(ipage, 1);
return err < 0 ? err : 0; return err < 0 ? err : 0;
...@@ -652,7 +665,7 @@ int f2fs_inline_data_fiemap(struct inode *inode, ...@@ -652,7 +665,7 @@ int f2fs_inline_data_fiemap(struct inode *inode,
goto out; goto out;
} }
ilen = min_t(size_t, MAX_INLINE_DATA, i_size_read(inode)); ilen = min_t(size_t, MAX_INLINE_DATA(inode), i_size_read(inode));
if (start >= ilen) if (start >= ilen)
goto out; goto out;
if (start + len < ilen) if (start + len < ilen)
...@@ -661,7 +674,8 @@ int f2fs_inline_data_fiemap(struct inode *inode, ...@@ -661,7 +674,8 @@ int f2fs_inline_data_fiemap(struct inode *inode,
get_node_info(F2FS_I_SB(inode), inode->i_ino, &ni); get_node_info(F2FS_I_SB(inode), inode->i_ino, &ni);
byteaddr = (__u64)ni.blk_addr << inode->i_sb->s_blocksize_bits; byteaddr = (__u64)ni.blk_addr << inode->i_sb->s_blocksize_bits;
byteaddr += (char *)inline_data_addr(ipage) - (char *)F2FS_INODE(ipage); byteaddr += (char *)inline_data_addr(inode, ipage) -
(char *)F2FS_INODE(ipage);
err = fiemap_fill_next_extent(fieinfo, start, byteaddr, ilen, flags); err = fiemap_fill_next_extent(fieinfo, start, byteaddr, ilen, flags);
out: out:
f2fs_put_page(ipage, 1); f2fs_put_page(ipage, 1);
......
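With the extra attribute area, the inline dentry block no longer has one fixed geometry, which is why the hunks above replace struct f2fs_inline_dentry with a void pointer plus make_dentry_ptr_inline(dir, ...): the entry count and bitmap length must be derived from the owning inode rather than taken from NR_INLINE_DENTRY. A rough user-space sketch of that derivation follows; dentry_view, make_view and the constants are invented for the illustration (the real values live in include/linux/f2fs_fs.h).

#include <stdio.h>

/* Hypothetical layout constants for this sketch only. */
#define BLOCK_SIZE   4096
#define DENTRY_SIZE  11     /* bytes per directory entry record */
#define NAME_SLOT    8      /* bytes per filename slot */

struct dentry_view {
    int max;            /* entries that fit in this inode's inline area */
    int bitmap_len;     /* bytes of validity bitmap */
};

/*
 * The inline area shrinks when an inode reserves extra attribute
 * space, so every caller computes the geometry from the owning inode
 * instead of relying on a global constant.
 */
static void make_view(struct dentry_view *d, int inline_bytes)
{
    /* each entry needs one record, one name slot and one bitmap bit */
    d->max = inline_bytes * 8 / ((DENTRY_SIZE + NAME_SLOT) * 8 + 1);
    d->bitmap_len = (d->max + 7) / 8;
}

int main(void)
{
    struct dentry_view plain, shrunk;

    make_view(&plain, BLOCK_SIZE - 200);    /* no extra attributes */
    make_view(&shrunk, BLOCK_SIZE - 400);   /* 200 more bytes reserved */
    printf("max entries: %d vs %d\n", plain.max, shrunk.max);
    return 0;
}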
...@@ -49,20 +49,22 @@ void f2fs_set_inode_flags(struct inode *inode) ...@@ -49,20 +49,22 @@ void f2fs_set_inode_flags(struct inode *inode)
static void __get_inode_rdev(struct inode *inode, struct f2fs_inode *ri) static void __get_inode_rdev(struct inode *inode, struct f2fs_inode *ri)
{ {
int extra_size = get_extra_isize(inode);
if (S_ISCHR(inode->i_mode) || S_ISBLK(inode->i_mode) || if (S_ISCHR(inode->i_mode) || S_ISBLK(inode->i_mode) ||
S_ISFIFO(inode->i_mode) || S_ISSOCK(inode->i_mode)) { S_ISFIFO(inode->i_mode) || S_ISSOCK(inode->i_mode)) {
if (ri->i_addr[0]) if (ri->i_addr[extra_size])
inode->i_rdev = inode->i_rdev = old_decode_dev(
old_decode_dev(le32_to_cpu(ri->i_addr[0])); le32_to_cpu(ri->i_addr[extra_size]));
else else
inode->i_rdev = inode->i_rdev = new_decode_dev(
new_decode_dev(le32_to_cpu(ri->i_addr[1])); le32_to_cpu(ri->i_addr[extra_size + 1]));
} }
} }
static bool __written_first_block(struct f2fs_inode *ri) static bool __written_first_block(struct f2fs_inode *ri)
{ {
block_t addr = le32_to_cpu(ri->i_addr[0]); block_t addr = le32_to_cpu(ri->i_addr[offset_in_addr(ri)]);
if (addr != NEW_ADDR && addr != NULL_ADDR) if (addr != NEW_ADDR && addr != NULL_ADDR)
return true; return true;
...@@ -71,25 +73,27 @@ static bool __written_first_block(struct f2fs_inode *ri) ...@@ -71,25 +73,27 @@ static bool __written_first_block(struct f2fs_inode *ri)
static void __set_inode_rdev(struct inode *inode, struct f2fs_inode *ri) static void __set_inode_rdev(struct inode *inode, struct f2fs_inode *ri)
{ {
int extra_size = get_extra_isize(inode);
if (S_ISCHR(inode->i_mode) || S_ISBLK(inode->i_mode)) { if (S_ISCHR(inode->i_mode) || S_ISBLK(inode->i_mode)) {
if (old_valid_dev(inode->i_rdev)) { if (old_valid_dev(inode->i_rdev)) {
ri->i_addr[0] = ri->i_addr[extra_size] =
cpu_to_le32(old_encode_dev(inode->i_rdev)); cpu_to_le32(old_encode_dev(inode->i_rdev));
ri->i_addr[1] = 0; ri->i_addr[extra_size + 1] = 0;
} else { } else {
ri->i_addr[0] = 0; ri->i_addr[extra_size] = 0;
ri->i_addr[1] = ri->i_addr[extra_size + 1] =
cpu_to_le32(new_encode_dev(inode->i_rdev)); cpu_to_le32(new_encode_dev(inode->i_rdev));
ri->i_addr[2] = 0; ri->i_addr[extra_size + 2] = 0;
} }
} }
} }
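Because the extra attribute bytes occupy the front of i_addr[], the raw device numbers move from fixed slots 0/1 to slots shifted by get_extra_isize()/offset_in_addr(), as the __get_inode_rdev()/__set_inode_rdev() hunks above show. A small illustration of that base offset, using simplified stand-ins rather than the on-disk struct f2fs_inode:

#include <stdint.h>
#include <stdio.h>

/* Sketch only: sizes are illustrative, not the on-disk layout. */
#define ADDRS_PER_INODE 923

struct raw_inode {
    uint16_t extra_isize;               /* bytes reserved for extra attrs */
    uint32_t addr[ADDRS_PER_INODE];
};

/* Number of address slots consumed by the extra attribute area. */
static int offset_in_addr(const struct raw_inode *ri)
{
    return ri->extra_isize / sizeof(uint32_t);
}

static void set_rdev(struct raw_inode *ri, uint32_t old_dev)
{
    int base = offset_in_addr(ri);

    ri->addr[base] = old_dev;           /* was addr[0] before extra attrs */
    ri->addr[base + 1] = 0;
}

int main(void)
{
    struct raw_inode ri = { .extra_isize = 36 };

    set_rdev(&ri, 0x0801);
    printf("rdev stored at slot %d\n", offset_in_addr(&ri));
    return 0;
}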
static void __recover_inline_status(struct inode *inode, struct page *ipage) static void __recover_inline_status(struct inode *inode, struct page *ipage)
{ {
void *inline_data = inline_data_addr(ipage); void *inline_data = inline_data_addr(inode, ipage);
__le32 *start = inline_data; __le32 *start = inline_data;
__le32 *end = start + MAX_INLINE_DATA / sizeof(__le32); __le32 *end = start + MAX_INLINE_DATA(inode) / sizeof(__le32);
while (start < end) { while (start < end) {
if (*start++) { if (*start++) {
...@@ -104,12 +108,84 @@ static void __recover_inline_status(struct inode *inode, struct page *ipage) ...@@ -104,12 +108,84 @@ static void __recover_inline_status(struct inode *inode, struct page *ipage)
return; return;
} }
static bool f2fs_enable_inode_chksum(struct f2fs_sb_info *sbi, struct page *page)
{
struct f2fs_inode *ri = &F2FS_NODE(page)->i;
int extra_isize = le32_to_cpu(ri->i_extra_isize);
if (!f2fs_sb_has_inode_chksum(sbi->sb))
return false;
if (!RAW_IS_INODE(F2FS_NODE(page)) || !(ri->i_inline & F2FS_EXTRA_ATTR))
return false;
if (!F2FS_FITS_IN_INODE(ri, extra_isize, i_inode_checksum))
return false;
return true;
}
static __u32 f2fs_inode_chksum(struct f2fs_sb_info *sbi, struct page *page)
{
struct f2fs_node *node = F2FS_NODE(page);
struct f2fs_inode *ri = &node->i;
__le32 ino = node->footer.ino;
__le32 gen = ri->i_generation;
__u32 chksum, chksum_seed;
__u32 dummy_cs = 0;
unsigned int offset = offsetof(struct f2fs_inode, i_inode_checksum);
unsigned int cs_size = sizeof(dummy_cs);
chksum = f2fs_chksum(sbi, sbi->s_chksum_seed, (__u8 *)&ino,
sizeof(ino));
chksum_seed = f2fs_chksum(sbi, chksum, (__u8 *)&gen, sizeof(gen));
chksum = f2fs_chksum(sbi, chksum_seed, (__u8 *)ri, offset);
chksum = f2fs_chksum(sbi, chksum, (__u8 *)&dummy_cs, cs_size);
offset += cs_size;
chksum = f2fs_chksum(sbi, chksum, (__u8 *)ri + offset,
F2FS_BLKSIZE - offset);
return chksum;
}
bool f2fs_inode_chksum_verify(struct f2fs_sb_info *sbi, struct page *page)
{
struct f2fs_inode *ri;
__u32 provided, calculated;
if (!f2fs_enable_inode_chksum(sbi, page) ||
PageDirty(page) || PageWriteback(page))
return true;
ri = &F2FS_NODE(page)->i;
provided = le32_to_cpu(ri->i_inode_checksum);
calculated = f2fs_inode_chksum(sbi, page);
if (provided != calculated)
f2fs_msg(sbi->sb, KERN_WARNING,
"checksum invalid, ino = %x, %x vs. %x",
ino_of_node(page), provided, calculated);
return provided == calculated;
}
void f2fs_inode_chksum_set(struct f2fs_sb_info *sbi, struct page *page)
{
struct f2fs_inode *ri = &F2FS_NODE(page)->i;
if (!f2fs_enable_inode_chksum(sbi, page))
return;
ri->i_inode_checksum = cpu_to_le32(f2fs_inode_chksum(sbi, page));
}
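The new inode checksum above seeds the crc with the inode number and generation, hashes the inode bytes up to the i_inode_checksum field, substitutes a zero placeholder for the field itself, and then hashes the remainder of the 4KB block, so the stored value never feeds into its own computation. A minimal user-space sketch of that layout, with toy_csum() standing in for the crc32-based f2fs_chksum() and a cut-down toy_inode instead of the on-disk struct:

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define BLKSIZE 4096

struct toy_inode {
    uint32_t i_generation;
    uint8_t  payload_before[64];
    uint32_t i_inode_checksum;          /* field skipped during hashing */
    uint8_t  payload_after[BLKSIZE - 64 - 2 * sizeof(uint32_t)];
};

static uint32_t toy_csum(uint32_t seed, const void *buf, size_t len)
{
    const uint8_t *p = buf;

    while (len--)
        seed = (seed << 5) + seed + *p++;   /* djb2-ish, illustration only */
    return seed;
}

static uint32_t inode_csum(uint32_t ino, const struct toy_inode *ri)
{
    uint32_t dummy = 0, cs;
    size_t off = offsetof(struct toy_inode, i_inode_checksum);

    /* seed from inode number and generation */
    cs = toy_csum(0, &ino, sizeof(ino));
    cs = toy_csum(cs, &ri->i_generation, sizeof(ri->i_generation));

    /* bytes before the checksum field, a zero placeholder, then the rest */
    cs = toy_csum(cs, ri, off);
    cs = toy_csum(cs, &dummy, sizeof(dummy));
    cs = toy_csum(cs, (const uint8_t *)ri + off + sizeof(dummy),
                  sizeof(*ri) - off - sizeof(dummy));
    return cs;
}

int main(void)
{
    static struct toy_inode ri = { .i_generation = 7 };

    ri.i_inode_checksum = inode_csum(42, &ri);
    printf("stored checksum: %u, verify: %s\n", ri.i_inode_checksum,
           ri.i_inode_checksum == inode_csum(42, &ri) ? "ok" : "bad");
    return 0;
}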
static int do_read_inode(struct inode *inode) static int do_read_inode(struct inode *inode)
{ {
struct f2fs_sb_info *sbi = F2FS_I_SB(inode); struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
struct f2fs_inode_info *fi = F2FS_I(inode); struct f2fs_inode_info *fi = F2FS_I(inode);
struct page *node_page; struct page *node_page;
struct f2fs_inode *ri; struct f2fs_inode *ri;
projid_t i_projid;
/* Check if ino is within scope */ /* Check if ino is within scope */
if (check_nid_range(sbi, inode->i_ino)) { if (check_nid_range(sbi, inode->i_ino)) {
...@@ -153,6 +229,9 @@ static int do_read_inode(struct inode *inode) ...@@ -153,6 +229,9 @@ static int do_read_inode(struct inode *inode)
get_inline_info(inode, ri); get_inline_info(inode, ri);
fi->i_extra_isize = f2fs_has_extra_attr(inode) ?
le16_to_cpu(ri->i_extra_isize) : 0;
/* check data exist */ /* check data exist */
if (f2fs_has_inline_data(inode) && !f2fs_exist_data(inode)) if (f2fs_has_inline_data(inode) && !f2fs_exist_data(inode))
__recover_inline_status(inode, node_page); __recover_inline_status(inode, node_page);
...@@ -166,6 +245,16 @@ static int do_read_inode(struct inode *inode) ...@@ -166,6 +245,16 @@ static int do_read_inode(struct inode *inode)
if (!need_inode_block_update(sbi, inode->i_ino)) if (!need_inode_block_update(sbi, inode->i_ino))
fi->last_disk_size = inode->i_size; fi->last_disk_size = inode->i_size;
if (fi->i_flags & FS_PROJINHERIT_FL)
set_inode_flag(inode, FI_PROJ_INHERIT);
if (f2fs_has_extra_attr(inode) && f2fs_sb_has_project_quota(sbi->sb) &&
F2FS_FITS_IN_INODE(ri, fi->i_extra_isize, i_projid))
i_projid = (projid_t)le32_to_cpu(ri->i_projid);
else
i_projid = F2FS_DEF_PROJID;
fi->i_projid = make_kprojid(&init_user_ns, i_projid);
f2fs_put_page(node_page, 1); f2fs_put_page(node_page, 1);
stat_inc_inline_xattr(inode); stat_inc_inline_xattr(inode);
...@@ -292,6 +381,20 @@ int update_inode(struct inode *inode, struct page *node_page) ...@@ -292,6 +381,20 @@ int update_inode(struct inode *inode, struct page *node_page)
ri->i_generation = cpu_to_le32(inode->i_generation); ri->i_generation = cpu_to_le32(inode->i_generation);
ri->i_dir_level = F2FS_I(inode)->i_dir_level; ri->i_dir_level = F2FS_I(inode)->i_dir_level;
if (f2fs_has_extra_attr(inode)) {
ri->i_extra_isize = cpu_to_le16(F2FS_I(inode)->i_extra_isize);
if (f2fs_sb_has_project_quota(F2FS_I_SB(inode)->sb) &&
F2FS_FITS_IN_INODE(ri, F2FS_I(inode)->i_extra_isize,
i_projid)) {
projid_t i_projid;
i_projid = from_kprojid(&init_user_ns,
F2FS_I(inode)->i_projid);
ri->i_projid = cpu_to_le32(i_projid);
}
}
__set_inode_rdev(inode, ri); __set_inode_rdev(inode, ri);
set_cold_node(inode, node_page); set_cold_node(inode, node_page);
...@@ -416,6 +519,9 @@ void f2fs_evict_inode(struct inode *inode) ...@@ -416,6 +519,9 @@ void f2fs_evict_inode(struct inode *inode)
stat_dec_inline_dir(inode); stat_dec_inline_dir(inode);
stat_dec_inline_inode(inode); stat_dec_inline_inode(inode);
if (!is_set_ckpt_flags(sbi, CP_ERROR_FLAG))
f2fs_bug_on(sbi, is_inode_flag_set(inode, FI_DIRTY_INODE));
/* ino == 0, if f2fs_new_inode() was failed t*/ /* ino == 0, if f2fs_new_inode() was failed t*/
if (inode->i_ino) if (inode->i_ino)
invalidate_mapping_pages(NODE_MAPPING(sbi), inode->i_ino, invalidate_mapping_pages(NODE_MAPPING(sbi), inode->i_ino,
......
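do_read_inode() and update_inode() above only touch i_projid after F2FS_FITS_IN_INODE() confirms the field lies inside the extra bytes the inode actually reserved, so images formatted with a smaller i_extra_isize stay readable. A hedged sketch of that bounds check, with a made-up toy_extra layout rather than the real struct f2fs_inode:

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative layout of an extra attribute area. */
struct toy_extra {
    uint16_t i_extra_isize;
    uint32_t i_projid;
    uint32_t i_inode_checksum;
};

/* Does the wanted field fall entirely within the reserved extra bytes? */
#define FITS_IN_INODE(extra_isize, field) \
    ((extra_isize) >= offsetof(struct toy_extra, field) + \
                      sizeof(((struct toy_extra *)0)->field))

int main(void)
{
    /* An older image may have reserved too few bytes for i_projid. */
    printf("projid fits with 4 extra bytes?  %d\n",
           (int)FITS_IN_INODE(4, i_projid));
    printf("projid fits with 12 extra bytes? %d\n",
           (int)FITS_IN_INODE(12, i_projid));
    return 0;
}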
...@@ -58,6 +58,13 @@ static struct inode *f2fs_new_inode(struct inode *dir, umode_t mode) ...@@ -58,6 +58,13 @@ static struct inode *f2fs_new_inode(struct inode *dir, umode_t mode)
goto fail; goto fail;
} }
if (f2fs_sb_has_project_quota(sbi->sb) &&
(F2FS_I(dir)->i_flags & FS_PROJINHERIT_FL))
F2FS_I(inode)->i_projid = F2FS_I(dir)->i_projid;
else
F2FS_I(inode)->i_projid = make_kprojid(&init_user_ns,
F2FS_DEF_PROJID);
err = dquot_initialize(inode); err = dquot_initialize(inode);
if (err) if (err)
goto fail_drop; goto fail_drop;
...@@ -72,6 +79,11 @@ static struct inode *f2fs_new_inode(struct inode *dir, umode_t mode) ...@@ -72,6 +79,11 @@ static struct inode *f2fs_new_inode(struct inode *dir, umode_t mode)
set_inode_flag(inode, FI_NEW_INODE); set_inode_flag(inode, FI_NEW_INODE);
if (f2fs_sb_has_extra_attr(sbi->sb)) {
set_inode_flag(inode, FI_EXTRA_ATTR);
F2FS_I(inode)->i_extra_isize = F2FS_TOTAL_EXTRA_ATTR_SIZE;
}
if (test_opt(sbi, INLINE_XATTR)) if (test_opt(sbi, INLINE_XATTR))
set_inode_flag(inode, FI_INLINE_XATTR); set_inode_flag(inode, FI_INLINE_XATTR);
if (test_opt(sbi, INLINE_DATA) && f2fs_may_inline_data(inode)) if (test_opt(sbi, INLINE_DATA) && f2fs_may_inline_data(inode))
...@@ -85,6 +97,15 @@ static struct inode *f2fs_new_inode(struct inode *dir, umode_t mode) ...@@ -85,6 +97,15 @@ static struct inode *f2fs_new_inode(struct inode *dir, umode_t mode)
stat_inc_inline_inode(inode); stat_inc_inline_inode(inode);
stat_inc_inline_dir(inode); stat_inc_inline_dir(inode);
F2FS_I(inode)->i_flags =
f2fs_mask_flags(mode, F2FS_I(dir)->i_flags & F2FS_FL_INHERITED);
if (S_ISDIR(inode->i_mode))
F2FS_I(inode)->i_flags |= FS_INDEX_FL;
if (F2FS_I(inode)->i_flags & FS_PROJINHERIT_FL)
set_inode_flag(inode, FI_PROJ_INHERIT);
trace_f2fs_new_inode(inode, 0); trace_f2fs_new_inode(inode, 0);
return inode; return inode;
...@@ -204,6 +225,11 @@ static int f2fs_link(struct dentry *old_dentry, struct inode *dir, ...@@ -204,6 +225,11 @@ static int f2fs_link(struct dentry *old_dentry, struct inode *dir,
!fscrypt_has_permitted_context(dir, inode)) !fscrypt_has_permitted_context(dir, inode))
return -EPERM; return -EPERM;
if (is_inode_flag_set(dir, FI_PROJ_INHERIT) &&
(!projid_eq(F2FS_I(dir)->i_projid,
F2FS_I(old_dentry->d_inode)->i_projid)))
return -EXDEV;
err = dquot_initialize(dir); err = dquot_initialize(dir);
if (err) if (err)
return err; return err;
...@@ -261,6 +287,10 @@ static int __recover_dot_dentries(struct inode *dir, nid_t pino) ...@@ -261,6 +287,10 @@ static int __recover_dot_dentries(struct inode *dir, nid_t pino)
return 0; return 0;
} }
err = dquot_initialize(dir);
if (err)
return err;
f2fs_balance_fs(sbi, true); f2fs_balance_fs(sbi, true);
f2fs_lock_op(sbi); f2fs_lock_op(sbi);
...@@ -724,6 +754,11 @@ static int f2fs_rename(struct inode *old_dir, struct dentry *old_dentry, ...@@ -724,6 +754,11 @@ static int f2fs_rename(struct inode *old_dir, struct dentry *old_dentry,
goto out; goto out;
} }
if (is_inode_flag_set(new_dir, FI_PROJ_INHERIT) &&
(!projid_eq(F2FS_I(new_dir)->i_projid,
F2FS_I(old_dentry->d_inode)->i_projid)))
return -EXDEV;
err = dquot_initialize(old_dir); err = dquot_initialize(old_dir);
if (err) if (err)
goto out; goto out;
...@@ -912,6 +947,14 @@ static int f2fs_cross_rename(struct inode *old_dir, struct dentry *old_dentry, ...@@ -912,6 +947,14 @@ static int f2fs_cross_rename(struct inode *old_dir, struct dentry *old_dentry,
!fscrypt_has_permitted_context(old_dir, new_inode))) !fscrypt_has_permitted_context(old_dir, new_inode)))
return -EPERM; return -EPERM;
if ((is_inode_flag_set(new_dir, FI_PROJ_INHERIT) &&
!projid_eq(F2FS_I(new_dir)->i_projid,
F2FS_I(old_dentry->d_inode)->i_projid)) ||
(is_inode_flag_set(new_dir, FI_PROJ_INHERIT) &&
!projid_eq(F2FS_I(old_dir)->i_projid,
F2FS_I(new_dentry->d_inode)->i_projid)))
return -EXDEV;
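The new checks in f2fs_link(), f2fs_rename() and f2fs_cross_rename() return -EXDEV when a link or rename would move an inode into a directory whose FI_PROJ_INHERIT project differs from the inode's, so project quota accounting cannot be bypassed by metadata-only moves. A toy illustration of the rule (simplified types and a hypothetical may_move_into() helper, not the kernel's kprojid handling):

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

/* Simplified stand-ins for the kernel inode/project types. */
struct toy_inode {
    bool proj_inherit;          /* directory pins its project on children */
    unsigned int projid;
};

/*
 * A hard link or rename into dir is refused when it would carry an
 * inode across a project boundary; userspace then falls back to a
 * copy, keeping per-project block/inode accounting exact.
 */
static int may_move_into(const struct toy_inode *dir,
                         const struct toy_inode *inode)
{
    if (dir->proj_inherit && dir->projid != inode->projid)
        return -EXDEV;
    return 0;
}

int main(void)
{
    struct toy_inode dir   = { .proj_inherit = true, .projid = 5 };
    struct toy_inode same  = { .projid = 5 };
    struct toy_inode other = { .projid = 9 };

    printf("same project: %d, other project: %d\n",
           may_move_into(&dir, &same), may_move_into(&dir, &other));
    return 0;
}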
err = dquot_initialize(old_dir); err = dquot_initialize(old_dir);
if (err) if (err)
goto out; goto out;
......
...@@ -19,6 +19,7 @@ ...@@ -19,6 +19,7 @@
#include "f2fs.h" #include "f2fs.h"
#include "node.h" #include "node.h"
#include "segment.h" #include "segment.h"
#include "xattr.h"
#include "trace.h" #include "trace.h"
#include <trace/events/f2fs.h> #include <trace/events/f2fs.h>
...@@ -554,7 +555,7 @@ static int get_node_path(struct inode *inode, long block, ...@@ -554,7 +555,7 @@ static int get_node_path(struct inode *inode, long block,
level = 3; level = 3;
goto got; goto got;
} else { } else {
BUG(); return -E2BIG;
} }
got: got:
return level; return level;
...@@ -578,6 +579,8 @@ int get_dnode_of_data(struct dnode_of_data *dn, pgoff_t index, int mode) ...@@ -578,6 +579,8 @@ int get_dnode_of_data(struct dnode_of_data *dn, pgoff_t index, int mode)
int err = 0; int err = 0;
level = get_node_path(dn->inode, index, offset, noffset); level = get_node_path(dn->inode, index, offset, noffset);
if (level < 0)
return level;
nids[0] = dn->inode->i_ino; nids[0] = dn->inode->i_ino;
npage[0] = dn->inode_page; npage[0] = dn->inode_page;
...@@ -613,7 +616,7 @@ int get_dnode_of_data(struct dnode_of_data *dn, pgoff_t index, int mode) ...@@ -613,7 +616,7 @@ int get_dnode_of_data(struct dnode_of_data *dn, pgoff_t index, int mode)
} }
dn->nid = nids[i]; dn->nid = nids[i];
npage[i] = new_node_page(dn, noffset[i], NULL); npage[i] = new_node_page(dn, noffset[i]);
if (IS_ERR(npage[i])) { if (IS_ERR(npage[i])) {
alloc_nid_failed(sbi, nids[i]); alloc_nid_failed(sbi, nids[i]);
err = PTR_ERR(npage[i]); err = PTR_ERR(npage[i]);
...@@ -654,7 +657,8 @@ int get_dnode_of_data(struct dnode_of_data *dn, pgoff_t index, int mode) ...@@ -654,7 +657,8 @@ int get_dnode_of_data(struct dnode_of_data *dn, pgoff_t index, int mode)
dn->nid = nids[level]; dn->nid = nids[level];
dn->ofs_in_node = offset[level]; dn->ofs_in_node = offset[level];
dn->node_page = npage[level]; dn->node_page = npage[level];
dn->data_blkaddr = datablock_addr(dn->node_page, dn->ofs_in_node); dn->data_blkaddr = datablock_addr(dn->inode,
dn->node_page, dn->ofs_in_node);
return 0; return 0;
release_pages: release_pages:
...@@ -876,6 +880,8 @@ int truncate_inode_blocks(struct inode *inode, pgoff_t from) ...@@ -876,6 +880,8 @@ int truncate_inode_blocks(struct inode *inode, pgoff_t from)
trace_f2fs_truncate_inode_blocks_enter(inode, from); trace_f2fs_truncate_inode_blocks_enter(inode, from);
level = get_node_path(inode, from, offset, noffset); level = get_node_path(inode, from, offset, noffset);
if (level < 0)
return level;
page = get_node_page(sbi, inode->i_ino); page = get_node_page(sbi, inode->i_ino);
if (IS_ERR(page)) { if (IS_ERR(page)) {
...@@ -1022,11 +1028,10 @@ struct page *new_inode_page(struct inode *inode) ...@@ -1022,11 +1028,10 @@ struct page *new_inode_page(struct inode *inode)
set_new_dnode(&dn, inode, NULL, NULL, inode->i_ino); set_new_dnode(&dn, inode, NULL, NULL, inode->i_ino);
/* caller should f2fs_put_page(page, 1); */ /* caller should f2fs_put_page(page, 1); */
return new_node_page(&dn, 0, NULL); return new_node_page(&dn, 0);
} }
struct page *new_node_page(struct dnode_of_data *dn, struct page *new_node_page(struct dnode_of_data *dn, unsigned int ofs)
unsigned int ofs, struct page *ipage)
{ {
struct f2fs_sb_info *sbi = F2FS_I_SB(dn->inode); struct f2fs_sb_info *sbi = F2FS_I_SB(dn->inode);
struct node_info new_ni; struct node_info new_ni;
...@@ -1170,6 +1175,11 @@ static struct page *__get_node_page(struct f2fs_sb_info *sbi, pgoff_t nid, ...@@ -1170,6 +1175,11 @@ static struct page *__get_node_page(struct f2fs_sb_info *sbi, pgoff_t nid,
err = -EIO; err = -EIO;
goto out_err; goto out_err;
} }
if (!f2fs_inode_chksum_verify(sbi, page)) {
err = -EBADMSG;
goto out_err;
}
page_hit: page_hit:
if(unlikely(nid != nid_of_node(page))) { if(unlikely(nid != nid_of_node(page))) {
f2fs_msg(sbi->sb, KERN_WARNING, "inconsistent node block, " f2fs_msg(sbi->sb, KERN_WARNING, "inconsistent node block, "
...@@ -1177,9 +1187,9 @@ static struct page *__get_node_page(struct f2fs_sb_info *sbi, pgoff_t nid, ...@@ -1177,9 +1187,9 @@ static struct page *__get_node_page(struct f2fs_sb_info *sbi, pgoff_t nid,
nid, nid_of_node(page), ino_of_node(page), nid, nid_of_node(page), ino_of_node(page),
ofs_of_node(page), cpver_of_node(page), ofs_of_node(page), cpver_of_node(page),
next_blkaddr_of_node(page)); next_blkaddr_of_node(page));
ClearPageUptodate(page);
err = -EINVAL; err = -EINVAL;
out_err: out_err:
ClearPageUptodate(page);
f2fs_put_page(page, 1); f2fs_put_page(page, 1);
return ERR_PTR(err); return ERR_PTR(err);
} }
...@@ -1326,7 +1336,8 @@ static struct page *last_fsync_dnode(struct f2fs_sb_info *sbi, nid_t ino) ...@@ -1326,7 +1336,8 @@ static struct page *last_fsync_dnode(struct f2fs_sb_info *sbi, nid_t ino)
} }
static int __write_node_page(struct page *page, bool atomic, bool *submitted, static int __write_node_page(struct page *page, bool atomic, bool *submitted,
struct writeback_control *wbc) struct writeback_control *wbc, bool do_balance,
enum iostat_type io_type)
{ {
struct f2fs_sb_info *sbi = F2FS_P_SB(page); struct f2fs_sb_info *sbi = F2FS_P_SB(page);
nid_t nid; nid_t nid;
...@@ -1339,6 +1350,7 @@ static int __write_node_page(struct page *page, bool atomic, bool *submitted, ...@@ -1339,6 +1350,7 @@ static int __write_node_page(struct page *page, bool atomic, bool *submitted,
.page = page, .page = page,
.encrypted_page = NULL, .encrypted_page = NULL,
.submitted = false, .submitted = false,
.io_type = io_type,
}; };
trace_f2fs_writepage(page, NODE); trace_f2fs_writepage(page, NODE);
...@@ -1395,6 +1407,8 @@ static int __write_node_page(struct page *page, bool atomic, bool *submitted, ...@@ -1395,6 +1407,8 @@ static int __write_node_page(struct page *page, bool atomic, bool *submitted,
if (submitted) if (submitted)
*submitted = fio.submitted; *submitted = fio.submitted;
if (do_balance)
f2fs_balance_fs(sbi, false);
return 0; return 0;
redirty_out: redirty_out:
...@@ -1405,7 +1419,7 @@ static int __write_node_page(struct page *page, bool atomic, bool *submitted, ...@@ -1405,7 +1419,7 @@ static int __write_node_page(struct page *page, bool atomic, bool *submitted,
static int f2fs_write_node_page(struct page *page, static int f2fs_write_node_page(struct page *page,
struct writeback_control *wbc) struct writeback_control *wbc)
{ {
return __write_node_page(page, false, NULL, wbc); return __write_node_page(page, false, NULL, wbc, false, FS_NODE_IO);
} }
int fsync_node_pages(struct f2fs_sb_info *sbi, struct inode *inode, int fsync_node_pages(struct f2fs_sb_info *sbi, struct inode *inode,
...@@ -1493,7 +1507,8 @@ int fsync_node_pages(struct f2fs_sb_info *sbi, struct inode *inode, ...@@ -1493,7 +1507,8 @@ int fsync_node_pages(struct f2fs_sb_info *sbi, struct inode *inode,
ret = __write_node_page(page, atomic && ret = __write_node_page(page, atomic &&
page == last_page, page == last_page,
&submitted, wbc); &submitted, wbc, true,
FS_NODE_IO);
if (ret) { if (ret) {
unlock_page(page); unlock_page(page);
f2fs_put_page(last_page, 0); f2fs_put_page(last_page, 0);
...@@ -1530,7 +1545,8 @@ int fsync_node_pages(struct f2fs_sb_info *sbi, struct inode *inode, ...@@ -1530,7 +1545,8 @@ int fsync_node_pages(struct f2fs_sb_info *sbi, struct inode *inode,
return ret ? -EIO: 0; return ret ? -EIO: 0;
} }
int sync_node_pages(struct f2fs_sb_info *sbi, struct writeback_control *wbc) int sync_node_pages(struct f2fs_sb_info *sbi, struct writeback_control *wbc,
bool do_balance, enum iostat_type io_type)
{ {
pgoff_t index, end; pgoff_t index, end;
struct pagevec pvec; struct pagevec pvec;
...@@ -1608,7 +1624,8 @@ int sync_node_pages(struct f2fs_sb_info *sbi, struct writeback_control *wbc) ...@@ -1608,7 +1624,8 @@ int sync_node_pages(struct f2fs_sb_info *sbi, struct writeback_control *wbc)
set_fsync_mark(page, 0); set_fsync_mark(page, 0);
set_dentry_mark(page, 0); set_dentry_mark(page, 0);
ret = __write_node_page(page, false, &submitted, wbc); ret = __write_node_page(page, false, &submitted,
wbc, do_balance, io_type);
if (ret) if (ret)
unlock_page(page); unlock_page(page);
else if (submitted) else if (submitted)
...@@ -1697,7 +1714,7 @@ static int f2fs_write_node_pages(struct address_space *mapping, ...@@ -1697,7 +1714,7 @@ static int f2fs_write_node_pages(struct address_space *mapping,
diff = nr_pages_to_write(sbi, NODE, wbc); diff = nr_pages_to_write(sbi, NODE, wbc);
wbc->sync_mode = WB_SYNC_NONE; wbc->sync_mode = WB_SYNC_NONE;
blk_start_plug(&plug); blk_start_plug(&plug);
sync_node_pages(sbi, wbc); sync_node_pages(sbi, wbc, true, FS_NODE_IO);
blk_finish_plug(&plug); blk_finish_plug(&plug);
wbc->nr_to_write = max((long)0, wbc->nr_to_write - diff); wbc->nr_to_write = max((long)0, wbc->nr_to_write - diff);
return 0; return 0;
...@@ -2191,7 +2208,8 @@ int recover_xattr_data(struct inode *inode, struct page *page, block_t blkaddr) ...@@ -2191,7 +2208,8 @@ int recover_xattr_data(struct inode *inode, struct page *page, block_t blkaddr)
{ {
struct f2fs_sb_info *sbi = F2FS_I_SB(inode); struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
nid_t prev_xnid = F2FS_I(inode)->i_xattr_nid; nid_t prev_xnid = F2FS_I(inode)->i_xattr_nid;
nid_t new_xnid = nid_of_node(page); nid_t new_xnid;
struct dnode_of_data dn;
struct node_info ni; struct node_info ni;
struct page *xpage; struct page *xpage;
...@@ -2207,22 +2225,22 @@ int recover_xattr_data(struct inode *inode, struct page *page, block_t blkaddr) ...@@ -2207,22 +2225,22 @@ int recover_xattr_data(struct inode *inode, struct page *page, block_t blkaddr)
recover_xnid: recover_xnid:
/* 2: update xattr nid in inode */ /* 2: update xattr nid in inode */
remove_free_nid(sbi, new_xnid); if (!alloc_nid(sbi, &new_xnid))
f2fs_i_xnid_write(inode, new_xnid); return -ENOSPC;
if (unlikely(inc_valid_node_count(sbi, inode, false)))
f2fs_bug_on(sbi, 1); set_new_dnode(&dn, inode, NULL, NULL, new_xnid);
xpage = new_node_page(&dn, XATTR_NODE_OFFSET);
if (IS_ERR(xpage)) {
alloc_nid_failed(sbi, new_xnid);
return PTR_ERR(xpage);
}
alloc_nid_done(sbi, new_xnid);
update_inode_page(inode); update_inode_page(inode);
/* 3: update and set xattr node page dirty */ /* 3: update and set xattr node page dirty */
xpage = grab_cache_page(NODE_MAPPING(sbi), new_xnid); memcpy(F2FS_NODE(xpage), F2FS_NODE(page), VALID_XATTR_BLOCK_SIZE);
if (!xpage)
return -ENOMEM;
memcpy(F2FS_NODE(xpage), F2FS_NODE(page), PAGE_SIZE);
get_node_info(sbi, new_xnid, &ni);
ni.ino = inode->i_ino;
set_node_addr(sbi, &ni, NEW_ADDR, false);
set_page_dirty(xpage); set_page_dirty(xpage);
f2fs_put_page(xpage, 1); f2fs_put_page(xpage, 1);
...@@ -2262,7 +2280,14 @@ int recover_inode_page(struct f2fs_sb_info *sbi, struct page *page) ...@@ -2262,7 +2280,14 @@ int recover_inode_page(struct f2fs_sb_info *sbi, struct page *page)
dst->i_blocks = cpu_to_le64(1); dst->i_blocks = cpu_to_le64(1);
dst->i_links = cpu_to_le32(1); dst->i_links = cpu_to_le32(1);
dst->i_xattr_nid = 0; dst->i_xattr_nid = 0;
dst->i_inline = src->i_inline & F2FS_INLINE_XATTR; dst->i_inline = src->i_inline & (F2FS_INLINE_XATTR | F2FS_EXTRA_ATTR);
if (dst->i_inline & F2FS_EXTRA_ATTR) {
dst->i_extra_isize = src->i_extra_isize;
if (f2fs_sb_has_project_quota(sbi->sb) &&
F2FS_FITS_IN_INODE(src, le16_to_cpu(src->i_extra_isize),
i_projid))
dst->i_projid = src->i_projid;
}
new_ni = old_ni; new_ni = old_ni;
new_ni.ino = ino; new_ni.ino = ino;
......
...@@ -69,20 +69,34 @@ static struct fsync_inode_entry *get_fsync_inode(struct list_head *head, ...@@ -69,20 +69,34 @@ static struct fsync_inode_entry *get_fsync_inode(struct list_head *head,
} }
static struct fsync_inode_entry *add_fsync_inode(struct f2fs_sb_info *sbi, static struct fsync_inode_entry *add_fsync_inode(struct f2fs_sb_info *sbi,
struct list_head *head, nid_t ino) struct list_head *head, nid_t ino, bool quota_inode)
{ {
struct inode *inode; struct inode *inode;
struct fsync_inode_entry *entry; struct fsync_inode_entry *entry;
int err;
inode = f2fs_iget_retry(sbi->sb, ino); inode = f2fs_iget_retry(sbi->sb, ino);
if (IS_ERR(inode)) if (IS_ERR(inode))
return ERR_CAST(inode); return ERR_CAST(inode);
err = dquot_initialize(inode);
if (err)
goto err_out;
if (quota_inode) {
err = dquot_alloc_inode(inode);
if (err)
goto err_out;
}
entry = f2fs_kmem_cache_alloc(fsync_entry_slab, GFP_F2FS_ZERO); entry = f2fs_kmem_cache_alloc(fsync_entry_slab, GFP_F2FS_ZERO);
entry->inode = inode; entry->inode = inode;
list_add_tail(&entry->list, head); list_add_tail(&entry->list, head);
return entry; return entry;
err_out:
iput(inode);
return ERR_PTR(err);
} }
static void del_fsync_inode(struct fsync_inode_entry *entry) static void del_fsync_inode(struct fsync_inode_entry *entry)
...@@ -107,7 +121,8 @@ static int recover_dentry(struct inode *inode, struct page *ipage, ...@@ -107,7 +121,8 @@ static int recover_dentry(struct inode *inode, struct page *ipage,
entry = get_fsync_inode(dir_list, pino); entry = get_fsync_inode(dir_list, pino);
if (!entry) { if (!entry) {
entry = add_fsync_inode(F2FS_I_SB(inode), dir_list, pino); entry = add_fsync_inode(F2FS_I_SB(inode), dir_list,
pino, false);
if (IS_ERR(entry)) { if (IS_ERR(entry)) {
dir = ERR_CAST(entry); dir = ERR_CAST(entry);
err = PTR_ERR(entry); err = PTR_ERR(entry);
...@@ -140,6 +155,13 @@ static int recover_dentry(struct inode *inode, struct page *ipage, ...@@ -140,6 +155,13 @@ static int recover_dentry(struct inode *inode, struct page *ipage,
err = -EEXIST; err = -EEXIST;
goto out_unmap_put; goto out_unmap_put;
} }
err = dquot_initialize(einode);
if (err) {
iput(einode);
goto out_unmap_put;
}
err = acquire_orphan_inode(F2FS_I_SB(inode)); err = acquire_orphan_inode(F2FS_I_SB(inode));
if (err) { if (err) {
iput(einode); iput(einode);
...@@ -226,18 +248,22 @@ static int find_fsync_dnodes(struct f2fs_sb_info *sbi, struct list_head *head, ...@@ -226,18 +248,22 @@ static int find_fsync_dnodes(struct f2fs_sb_info *sbi, struct list_head *head,
entry = get_fsync_inode(head, ino_of_node(page)); entry = get_fsync_inode(head, ino_of_node(page));
if (!entry) { if (!entry) {
bool quota_inode = false;
if (!check_only && if (!check_only &&
IS_INODE(page) && is_dent_dnode(page)) { IS_INODE(page) && is_dent_dnode(page)) {
err = recover_inode_page(sbi, page); err = recover_inode_page(sbi, page);
if (err) if (err)
break; break;
quota_inode = true;
} }
/* /*
* CP | dnode(F) | inode(DF) * CP | dnode(F) | inode(DF)
* For this case, we should not give up now. * For this case, we should not give up now.
*/ */
entry = add_fsync_inode(sbi, head, ino_of_node(page)); entry = add_fsync_inode(sbi, head, ino_of_node(page),
quota_inode);
if (IS_ERR(entry)) { if (IS_ERR(entry)) {
err = PTR_ERR(entry); err = PTR_ERR(entry);
if (err == -ENOENT) { if (err == -ENOENT) {
...@@ -291,7 +317,7 @@ static int check_index_in_prev_nodes(struct f2fs_sb_info *sbi, ...@@ -291,7 +317,7 @@ static int check_index_in_prev_nodes(struct f2fs_sb_info *sbi,
return 0; return 0;
/* Get the previous summary */ /* Get the previous summary */
for (i = CURSEG_WARM_DATA; i <= CURSEG_COLD_DATA; i++) { for (i = CURSEG_HOT_DATA; i <= CURSEG_COLD_DATA; i++) {
struct curseg_info *curseg = CURSEG_I(sbi, i); struct curseg_info *curseg = CURSEG_I(sbi, i);
if (curseg->segno == segno) { if (curseg->segno == segno) {
sum = curseg->sum_blk->entries[blkoff]; sum = curseg->sum_blk->entries[blkoff];
...@@ -328,10 +354,18 @@ static int check_index_in_prev_nodes(struct f2fs_sb_info *sbi, ...@@ -328,10 +354,18 @@ static int check_index_in_prev_nodes(struct f2fs_sb_info *sbi,
f2fs_put_page(node_page, 1); f2fs_put_page(node_page, 1);
if (ino != dn->inode->i_ino) { if (ino != dn->inode->i_ino) {
int ret;
/* Deallocate previous index in the node page */ /* Deallocate previous index in the node page */
inode = f2fs_iget_retry(sbi->sb, ino); inode = f2fs_iget_retry(sbi->sb, ino);
if (IS_ERR(inode)) if (IS_ERR(inode))
return PTR_ERR(inode); return PTR_ERR(inode);
ret = dquot_initialize(inode);
if (ret) {
iput(inode);
return ret;
}
} else { } else {
inode = dn->inode; inode = dn->inode;
} }
...@@ -361,7 +395,8 @@ static int check_index_in_prev_nodes(struct f2fs_sb_info *sbi, ...@@ -361,7 +395,8 @@ static int check_index_in_prev_nodes(struct f2fs_sb_info *sbi,
return 0; return 0;
truncate_out: truncate_out:
if (datablock_addr(tdn.node_page, tdn.ofs_in_node) == blkaddr) if (datablock_addr(tdn.inode, tdn.node_page,
tdn.ofs_in_node) == blkaddr)
truncate_data_blocks_range(&tdn, 1); truncate_data_blocks_range(&tdn, 1);
if (dn->inode->i_ino == nid && !dn->inode_page_locked) if (dn->inode->i_ino == nid && !dn->inode_page_locked)
unlock_page(dn->inode_page); unlock_page(dn->inode_page);
...@@ -414,8 +449,8 @@ static int do_recover_data(struct f2fs_sb_info *sbi, struct inode *inode, ...@@ -414,8 +449,8 @@ static int do_recover_data(struct f2fs_sb_info *sbi, struct inode *inode,
for (; start < end; start++, dn.ofs_in_node++) { for (; start < end; start++, dn.ofs_in_node++) {
block_t src, dest; block_t src, dest;
src = datablock_addr(dn.node_page, dn.ofs_in_node); src = datablock_addr(dn.inode, dn.node_page, dn.ofs_in_node);
dest = datablock_addr(page, dn.ofs_in_node); dest = datablock_addr(dn.inode, page, dn.ofs_in_node);
/* skip recovering if dest is the same as src */ /* skip recovering if dest is the same as src */
if (src == dest) if (src == dest)
...@@ -557,12 +592,27 @@ int recover_fsync_data(struct f2fs_sb_info *sbi, bool check_only) ...@@ -557,12 +592,27 @@ int recover_fsync_data(struct f2fs_sb_info *sbi, bool check_only)
struct list_head dir_list; struct list_head dir_list;
int err; int err;
int ret = 0; int ret = 0;
unsigned long s_flags = sbi->sb->s_flags;
bool need_writecp = false; bool need_writecp = false;
if (s_flags & MS_RDONLY) {
f2fs_msg(sbi->sb, KERN_INFO, "orphan cleanup on readonly fs");
sbi->sb->s_flags &= ~MS_RDONLY;
}
#ifdef CONFIG_QUOTA
/* Needed for iput() to work correctly and not trash data */
sbi->sb->s_flags |= MS_ACTIVE;
/* Turn on quotas so that they are updated correctly */
f2fs_enable_quota_files(sbi);
#endif
fsync_entry_slab = f2fs_kmem_cache_create("f2fs_fsync_inode_entry", fsync_entry_slab = f2fs_kmem_cache_create("f2fs_fsync_inode_entry",
sizeof(struct fsync_inode_entry)); sizeof(struct fsync_inode_entry));
if (!fsync_entry_slab) if (!fsync_entry_slab) {
return -ENOMEM; err = -ENOMEM;
goto out;
}
INIT_LIST_HEAD(&inode_list); INIT_LIST_HEAD(&inode_list);
INIT_LIST_HEAD(&dir_list); INIT_LIST_HEAD(&dir_list);
...@@ -573,11 +623,11 @@ int recover_fsync_data(struct f2fs_sb_info *sbi, bool check_only) ...@@ -573,11 +623,11 @@ int recover_fsync_data(struct f2fs_sb_info *sbi, bool check_only)
/* step #1: find fsynced inode numbers */ /* step #1: find fsynced inode numbers */
err = find_fsync_dnodes(sbi, &inode_list, check_only); err = find_fsync_dnodes(sbi, &inode_list, check_only);
if (err || list_empty(&inode_list)) if (err || list_empty(&inode_list))
goto out; goto skip;
if (check_only) { if (check_only) {
ret = 1; ret = 1;
goto out; goto skip;
} }
need_writecp = true; need_writecp = true;
...@@ -586,7 +636,7 @@ int recover_fsync_data(struct f2fs_sb_info *sbi, bool check_only) ...@@ -586,7 +636,7 @@ int recover_fsync_data(struct f2fs_sb_info *sbi, bool check_only)
err = recover_data(sbi, &inode_list, &dir_list); err = recover_data(sbi, &inode_list, &dir_list);
if (!err) if (!err)
f2fs_bug_on(sbi, !list_empty(&inode_list)); f2fs_bug_on(sbi, !list_empty(&inode_list));
out: skip:
destroy_fsync_dnodes(&inode_list); destroy_fsync_dnodes(&inode_list);
/* truncate meta pages to be used by the recovery */ /* truncate meta pages to be used by the recovery */
...@@ -599,8 +649,6 @@ int recover_fsync_data(struct f2fs_sb_info *sbi, bool check_only) ...@@ -599,8 +649,6 @@ int recover_fsync_data(struct f2fs_sb_info *sbi, bool check_only)
} }
clear_sbi_flag(sbi, SBI_POR_DOING); clear_sbi_flag(sbi, SBI_POR_DOING);
if (err)
set_ckpt_flags(sbi, CP_ERROR_FLAG);
mutex_unlock(&sbi->cp_mutex); mutex_unlock(&sbi->cp_mutex);
/* let's drop all the directory inodes for clean checkpoint */ /* let's drop all the directory inodes for clean checkpoint */
...@@ -614,5 +662,12 @@ int recover_fsync_data(struct f2fs_sb_info *sbi, bool check_only) ...@@ -614,5 +662,12 @@ int recover_fsync_data(struct f2fs_sb_info *sbi, bool check_only)
} }
kmem_cache_destroy(fsync_entry_slab); kmem_cache_destroy(fsync_entry_slab);
out:
#ifdef CONFIG_QUOTA
/* Turn quotas off */
f2fs_quota_off_umount(sbi->sb);
#endif
sbi->sb->s_flags = s_flags; /* Restore MS_RDONLY status */
return ret ? ret: err; return ret ? ret: err;
} }
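recover_fsync_data() above now temporarily clears MS_RDONLY, marks the superblock MS_ACTIVE and turns quota files on so that iput() and dquot updates behave normally during replay, then restores the saved s_flags on the way out. A sketch of that save/modify/restore discipline; the flag values and do_recovery() name are illustrative only:

#include <stdio.h>

/* Illustrative bit positions, not the kernel's MS_* values. */
#define MS_RDONLY 0x01
#define MS_ACTIVE 0x40

static int do_recovery(unsigned long *s_flags)
{
    unsigned long saved = *s_flags;
    int err = 0;

    /* Recovery must be able to write and to iput() inodes for real. */
    *s_flags &= ~MS_RDONLY;
    *s_flags |= MS_ACTIVE;

    /* ... replay fsynced data, charging quota as inodes are touched ... */

    *s_flags = saved;                   /* restore the mount's view */
    return err;
}

int main(void)
{
    unsigned long flags = MS_RDONLY;

    do_recovery(&flags);
    printf("flags restored: %s\n", flags == MS_RDONLY ? "yes" : "no");
    return 0;
}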
...@@ -17,10 +17,12 @@ ...@@ -17,10 +17,12 @@
#include <linux/swap.h> #include <linux/swap.h>
#include <linux/timer.h> #include <linux/timer.h>
#include <linux/freezer.h> #include <linux/freezer.h>
#include <linux/sched/signal.h>
#include "f2fs.h" #include "f2fs.h"
#include "segment.h" #include "segment.h"
#include "node.h" #include "node.h"
#include "gc.h"
#include "trace.h" #include "trace.h"
#include <trace/events/f2fs.h> #include <trace/events/f2fs.h>
...@@ -167,6 +169,21 @@ static unsigned long __find_rev_next_zero_bit(const unsigned long *addr, ...@@ -167,6 +169,21 @@ static unsigned long __find_rev_next_zero_bit(const unsigned long *addr,
return result - size + __reverse_ffz(tmp); return result - size + __reverse_ffz(tmp);
} }
bool need_SSR(struct f2fs_sb_info *sbi)
{
int node_secs = get_blocktype_secs(sbi, F2FS_DIRTY_NODES);
int dent_secs = get_blocktype_secs(sbi, F2FS_DIRTY_DENTS);
int imeta_secs = get_blocktype_secs(sbi, F2FS_DIRTY_IMETA);
if (test_opt(sbi, LFS))
return false;
if (sbi->gc_thread && sbi->gc_thread->gc_urgent)
return true;
return free_sections(sbi) <= (node_secs + 2 * dent_secs + imeta_secs +
2 * reserved_sections(sbi));
}
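need_SSR() above gates slack-space reuse: never in LFS mode, always when the GC thread is in gc_urgent mode, and otherwise only when free sections fall under a threshold built from dirty node/dentry/imeta sections plus twice the reserved sections. A stand-alone restatement of that predicate (plain ints instead of the kernel's per-type page counters):

#include <stdbool.h>
#include <stdio.h>

static bool need_ssr(bool lfs_mode, bool gc_urgent, int free_secs,
                     int node_secs, int dent_secs, int imeta_secs,
                     int reserved_secs)
{
    if (lfs_mode)               /* pure log-structured mode never reuses */
        return false;
    if (gc_urgent)              /* urgent GC wants SSR to reclaim quickly */
        return true;
    return free_secs <= node_secs + 2 * dent_secs + imeta_secs +
                        2 * reserved_secs;
}

int main(void)
{
    printf("tight space: %d, plenty of space: %d\n",
           need_ssr(false, false, 10, 2, 3, 1, 2),
           need_ssr(false, false, 100, 2, 3, 1, 2));
    return 0;
}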
void register_inmem_page(struct inode *inode, struct page *page) void register_inmem_page(struct inode *inode, struct page *page)
{ {
struct f2fs_inode_info *fi = F2FS_I(inode); struct f2fs_inode_info *fi = F2FS_I(inode);
...@@ -213,9 +230,15 @@ static int __revoke_inmem_pages(struct inode *inode, ...@@ -213,9 +230,15 @@ static int __revoke_inmem_pages(struct inode *inode,
struct node_info ni; struct node_info ni;
trace_f2fs_commit_inmem_page(page, INMEM_REVOKE); trace_f2fs_commit_inmem_page(page, INMEM_REVOKE);
retry:
set_new_dnode(&dn, inode, NULL, NULL, 0); set_new_dnode(&dn, inode, NULL, NULL, 0);
if (get_dnode_of_data(&dn, page->index, LOOKUP_NODE)) { err = get_dnode_of_data(&dn, page->index, LOOKUP_NODE);
if (err) {
if (err == -ENOMEM) {
congestion_wait(BLK_RW_ASYNC, HZ/50);
cond_resched();
goto retry;
}
err = -EAGAIN; err = -EAGAIN;
goto next; goto next;
} }
...@@ -248,6 +271,7 @@ void drop_inmem_pages(struct inode *inode) ...@@ -248,6 +271,7 @@ void drop_inmem_pages(struct inode *inode)
mutex_unlock(&fi->inmem_lock); mutex_unlock(&fi->inmem_lock);
clear_inode_flag(inode, FI_ATOMIC_FILE); clear_inode_flag(inode, FI_ATOMIC_FILE);
clear_inode_flag(inode, FI_HOT_DATA);
stat_dec_atomic_write(inode); stat_dec_atomic_write(inode);
} }
...@@ -292,6 +316,7 @@ static int __commit_inmem_pages(struct inode *inode, ...@@ -292,6 +316,7 @@ static int __commit_inmem_pages(struct inode *inode,
.type = DATA, .type = DATA,
.op = REQ_OP_WRITE, .op = REQ_OP_WRITE,
.op_flags = REQ_SYNC | REQ_PRIO, .op_flags = REQ_SYNC | REQ_PRIO,
.io_type = FS_DATA_IO,
}; };
pgoff_t last_idx = ULONG_MAX; pgoff_t last_idx = ULONG_MAX;
int err = 0; int err = 0;
...@@ -309,17 +334,21 @@ static int __commit_inmem_pages(struct inode *inode, ...@@ -309,17 +334,21 @@ static int __commit_inmem_pages(struct inode *inode,
inode_dec_dirty_pages(inode); inode_dec_dirty_pages(inode);
remove_dirty_inode(inode); remove_dirty_inode(inode);
} }
retry:
fio.page = page; fio.page = page;
fio.old_blkaddr = NULL_ADDR; fio.old_blkaddr = NULL_ADDR;
fio.encrypted_page = NULL; fio.encrypted_page = NULL;
fio.need_lock = LOCK_DONE; fio.need_lock = LOCK_DONE;
err = do_write_data_page(&fio); err = do_write_data_page(&fio);
if (err) { if (err) {
if (err == -ENOMEM) {
congestion_wait(BLK_RW_ASYNC, HZ/50);
cond_resched();
goto retry;
}
unlock_page(page); unlock_page(page);
break; break;
} }
/* record old blkaddr for revoking */ /* record old blkaddr for revoking */
cur->old_addr = fio.old_blkaddr; cur->old_addr = fio.old_blkaddr;
last_idx = page->index; last_idx = page->index;
...@@ -481,6 +510,8 @@ static int issue_flush_thread(void *data) ...@@ -481,6 +510,8 @@ static int issue_flush_thread(void *data)
if (kthread_should_stop()) if (kthread_should_stop())
return 0; return 0;
sb_start_intwrite(sbi->sb);
if (!llist_empty(&fcc->issue_list)) { if (!llist_empty(&fcc->issue_list)) {
struct flush_cmd *cmd, *next; struct flush_cmd *cmd, *next;
int ret; int ret;
...@@ -499,6 +530,8 @@ static int issue_flush_thread(void *data) ...@@ -499,6 +530,8 @@ static int issue_flush_thread(void *data)
fcc->dispatch_list = NULL; fcc->dispatch_list = NULL;
} }
sb_end_intwrite(sbi->sb);
wait_event_interruptible(*q, wait_event_interruptible(*q,
kthread_should_stop() || !llist_empty(&fcc->issue_list)); kthread_should_stop() || !llist_empty(&fcc->issue_list));
goto repeat; goto repeat;
...@@ -519,8 +552,7 @@ int f2fs_issue_flush(struct f2fs_sb_info *sbi) ...@@ -519,8 +552,7 @@ int f2fs_issue_flush(struct f2fs_sb_info *sbi)
return ret; return ret;
} }
if (!atomic_read(&fcc->issing_flush)) { if (atomic_inc_return(&fcc->issing_flush) == 1) {
atomic_inc(&fcc->issing_flush);
ret = submit_flush_wait(sbi); ret = submit_flush_wait(sbi);
atomic_dec(&fcc->issing_flush); atomic_dec(&fcc->issing_flush);
...@@ -530,18 +562,39 @@ int f2fs_issue_flush(struct f2fs_sb_info *sbi) ...@@ -530,18 +562,39 @@ int f2fs_issue_flush(struct f2fs_sb_info *sbi)
init_completion(&cmd.wait); init_completion(&cmd.wait);
atomic_inc(&fcc->issing_flush);
llist_add(&cmd.llnode, &fcc->issue_list); llist_add(&cmd.llnode, &fcc->issue_list);
if (!fcc->dispatch_list) /* update issue_list before we wake up issue_flush thread */
smp_mb();
if (waitqueue_active(&fcc->flush_wait_queue))
wake_up(&fcc->flush_wait_queue); wake_up(&fcc->flush_wait_queue);
if (fcc->f2fs_issue_flush) { if (fcc->f2fs_issue_flush) {
wait_for_completion(&cmd.wait); wait_for_completion(&cmd.wait);
atomic_dec(&fcc->issing_flush); atomic_dec(&fcc->issing_flush);
} else { } else {
llist_del_all(&fcc->issue_list); struct llist_node *list;
atomic_set(&fcc->issing_flush, 0);
list = llist_del_all(&fcc->issue_list);
if (!list) {
wait_for_completion(&cmd.wait);
atomic_dec(&fcc->issing_flush);
} else {
struct flush_cmd *tmp, *next;
ret = submit_flush_wait(sbi);
llist_for_each_entry_safe(tmp, next, list, llnode) {
if (tmp == &cmd) {
cmd.ret = ret;
atomic_dec(&fcc->issing_flush);
continue;
}
tmp->ret = ret;
complete(&tmp->wait);
}
}
} }
return cmd.ret; return cmd.ret;
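The reworked f2fs_issue_flush() merges concurrent flushers: only the caller that takes issing_flush from 0 to 1 submits a flush directly; everyone else queues on the lock-free issue_list, and if the flush thread has gone away, whoever grabs the list issues a single flush and completes every waiter on it. A C11 sketch of just the fast-path decision (the list handling is omitted, and issue_flush() here is a stand-in, not the kernel function):

#include <stdatomic.h>
#include <stdio.h>

static atomic_int issing_flush;

static const char *issue_flush(void)
{
    if (atomic_fetch_add(&issing_flush, 1) + 1 == 1) {
        /* ... submit_flush_wait() equivalent would run here ... */
        atomic_fetch_sub(&issing_flush, 1);
        return "submitted directly";
    }
    /*
     * ... otherwise add a command to the shared list and sleep; the
     * counter is dropped once the merged flush completes ...
     */
    return "queued behind an in-flight flush";
}

int main(void)
{
    printf("first caller:  %s\n", issue_flush());
    atomic_store(&issing_flush, 1);     /* pretend a flush is in flight */
    printf("second caller: %s\n", issue_flush());
    return 0;
}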
...@@ -778,11 +831,14 @@ void __check_sit_bitmap(struct f2fs_sb_info *sbi, ...@@ -778,11 +831,14 @@ void __check_sit_bitmap(struct f2fs_sb_info *sbi,
sentry = get_seg_entry(sbi, segno); sentry = get_seg_entry(sbi, segno);
offset = GET_BLKOFF_FROM_SEG0(sbi, blk); offset = GET_BLKOFF_FROM_SEG0(sbi, blk);
size = min((unsigned long)(end - blk), max_blocks); if (end < START_BLOCK(sbi, segno + 1))
size = GET_BLKOFF_FROM_SEG0(sbi, end);
else
size = max_blocks;
map = (unsigned long *)(sentry->cur_valid_map); map = (unsigned long *)(sentry->cur_valid_map);
offset = __find_rev_next_bit(map, size, offset); offset = __find_rev_next_bit(map, size, offset);
f2fs_bug_on(sbi, offset != size); f2fs_bug_on(sbi, offset != size);
blk += size; blk = START_BLOCK(sbi, segno + 1);
} }
#endif #endif
} }
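__check_sit_bitmap() used to size its check with min(end - blk, max_blocks), which let a discard spanning segments read a segment bitmap past its end; the fix above clamps the check to the current segment and restarts at the next segment's first block. A user-space walk showing the same clamping, assuming 512 blocks per segment and modelling START_BLOCK()/GET_BLKOFF_FROM_SEG0() as plain arithmetic:

#include <stdio.h>

#define BLKS_PER_SEG 512

static void check_range(unsigned int start, unsigned int end)
{
    unsigned int blk = start;

    while (blk < end) {
        unsigned int segno = blk / BLKS_PER_SEG;
        unsigned int offset = blk % BLKS_PER_SEG;
        unsigned int seg_end = (segno + 1) * BLKS_PER_SEG;
        /* only check up to the end of this segment's bitmap ... */
        unsigned int size = end < seg_end ? end % BLKS_PER_SEG
                                          : BLKS_PER_SEG;

        printf("segment %u: verify bits [%u, %u)\n", segno, offset, size);

        /* ... then continue from the next segment boundary */
        blk = seg_end;
    }
}

int main(void)
{
    check_range(500, 1100);     /* spans three segments */
    return 0;
}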
...@@ -815,6 +871,8 @@ static void __submit_discard_cmd(struct f2fs_sb_info *sbi, ...@@ -815,6 +871,8 @@ static void __submit_discard_cmd(struct f2fs_sb_info *sbi,
submit_bio(bio); submit_bio(bio);
list_move_tail(&dc->list, &dcc->wait_list); list_move_tail(&dc->list, &dcc->wait_list);
__check_sit_bitmap(sbi, dc->start, dc->start + dc->len); __check_sit_bitmap(sbi, dc->start, dc->start + dc->len);
f2fs_update_iostat(sbi, FS_DISCARD, 1);
} }
} else { } else {
__remove_discard_cmd(sbi, dc); __remove_discard_cmd(sbi, dc);
...@@ -996,32 +1054,81 @@ static int __queue_discard_cmd(struct f2fs_sb_info *sbi, ...@@ -996,32 +1054,81 @@ static int __queue_discard_cmd(struct f2fs_sb_info *sbi,
return 0; return 0;
} }
static void __issue_discard_cmd(struct f2fs_sb_info *sbi, bool issue_cond) static int __issue_discard_cmd(struct f2fs_sb_info *sbi, bool issue_cond)
{ {
struct discard_cmd_control *dcc = SM_I(sbi)->dcc_info; struct discard_cmd_control *dcc = SM_I(sbi)->dcc_info;
struct list_head *pend_list; struct list_head *pend_list;
struct discard_cmd *dc, *tmp; struct discard_cmd *dc, *tmp;
struct blk_plug plug; struct blk_plug plug;
int i, iter = 0; int iter = 0, issued = 0;
int i;
bool io_interrupted = false;
mutex_lock(&dcc->cmd_lock); mutex_lock(&dcc->cmd_lock);
f2fs_bug_on(sbi, f2fs_bug_on(sbi,
!__check_rb_tree_consistence(sbi, &dcc->root)); !__check_rb_tree_consistence(sbi, &dcc->root));
blk_start_plug(&plug); blk_start_plug(&plug);
for (i = MAX_PLIST_NUM - 1; i >= 0; i--) { for (i = MAX_PLIST_NUM - 1;
i >= 0 && plist_issue(dcc->pend_list_tag[i]); i--) {
pend_list = &dcc->pend_list[i]; pend_list = &dcc->pend_list[i];
list_for_each_entry_safe(dc, tmp, pend_list, list) { list_for_each_entry_safe(dc, tmp, pend_list, list) {
f2fs_bug_on(sbi, dc->state != D_PREP); f2fs_bug_on(sbi, dc->state != D_PREP);
if (!issue_cond || is_idle(sbi)) /* Hurry up to finish fstrim */
if (dcc->pend_list_tag[i] & P_TRIM) {
__submit_discard_cmd(sbi, dc);
issued++;
if (fatal_signal_pending(current))
break;
continue;
}
if (!issue_cond) {
__submit_discard_cmd(sbi, dc); __submit_discard_cmd(sbi, dc);
if (issue_cond && iter++ > DISCARD_ISSUE_RATE) issued++;
continue;
}
if (is_idle(sbi)) {
__submit_discard_cmd(sbi, dc);
issued++;
} else {
io_interrupted = true;
}
if (++iter >= DISCARD_ISSUE_RATE)
goto out; goto out;
} }
if (list_empty(pend_list) && dcc->pend_list_tag[i] & P_TRIM)
dcc->pend_list_tag[i] &= (~P_TRIM);
} }
out: out:
blk_finish_plug(&plug); blk_finish_plug(&plug);
mutex_unlock(&dcc->cmd_lock); mutex_unlock(&dcc->cmd_lock);
if (!issued && io_interrupted)
issued = -1;
return issued;
}
static void __drop_discard_cmd(struct f2fs_sb_info *sbi)
{
struct discard_cmd_control *dcc = SM_I(sbi)->dcc_info;
struct list_head *pend_list;
struct discard_cmd *dc, *tmp;
int i;
mutex_lock(&dcc->cmd_lock);
for (i = MAX_PLIST_NUM - 1; i >= 0; i--) {
pend_list = &dcc->pend_list[i];
list_for_each_entry_safe(dc, tmp, pend_list, list) {
f2fs_bug_on(sbi, dc->state != D_PREP);
__remove_discard_cmd(sbi, dc);
}
}
mutex_unlock(&dcc->cmd_lock);
} }
static void __wait_one_discard_bio(struct f2fs_sb_info *sbi, static void __wait_one_discard_bio(struct f2fs_sb_info *sbi,
...@@ -1102,34 +1209,63 @@ void stop_discard_thread(struct f2fs_sb_info *sbi) ...@@ -1102,34 +1209,63 @@ void stop_discard_thread(struct f2fs_sb_info *sbi)
} }
} }
/* This comes from f2fs_put_super */ /* This comes from f2fs_put_super and f2fs_trim_fs */
void f2fs_wait_discard_bios(struct f2fs_sb_info *sbi) void f2fs_wait_discard_bios(struct f2fs_sb_info *sbi)
{ {
__issue_discard_cmd(sbi, false); __issue_discard_cmd(sbi, false);
__drop_discard_cmd(sbi);
__wait_discard_cmd(sbi, false); __wait_discard_cmd(sbi, false);
} }
static void mark_discard_range_all(struct f2fs_sb_info *sbi)
{
struct discard_cmd_control *dcc = SM_I(sbi)->dcc_info;
int i;
mutex_lock(&dcc->cmd_lock);
for (i = 0; i < MAX_PLIST_NUM; i++)
dcc->pend_list_tag[i] |= P_TRIM;
mutex_unlock(&dcc->cmd_lock);
}
static int issue_discard_thread(void *data) static int issue_discard_thread(void *data)
{ {
struct f2fs_sb_info *sbi = data; struct f2fs_sb_info *sbi = data;
struct discard_cmd_control *dcc = SM_I(sbi)->dcc_info; struct discard_cmd_control *dcc = SM_I(sbi)->dcc_info;
wait_queue_head_t *q = &dcc->discard_wait_queue; wait_queue_head_t *q = &dcc->discard_wait_queue;
unsigned int wait_ms = DEF_MIN_DISCARD_ISSUE_TIME;
int issued;
set_freezable(); set_freezable();
do { do {
wait_event_interruptible(*q, kthread_should_stop() || wait_event_interruptible_timeout(*q,
freezing(current) || kthread_should_stop() || freezing(current) ||
atomic_read(&dcc->discard_cmd_cnt)); dcc->discard_wake,
msecs_to_jiffies(wait_ms));
if (try_to_freeze()) if (try_to_freeze())
continue; continue;
if (kthread_should_stop()) if (kthread_should_stop())
return 0; return 0;
__issue_discard_cmd(sbi, true); if (dcc->discard_wake) {
__wait_discard_cmd(sbi, true); dcc->discard_wake = 0;
if (sbi->gc_thread && sbi->gc_thread->gc_urgent)
mark_discard_range_all(sbi);
}
sb_start_intwrite(sbi->sb);
issued = __issue_discard_cmd(sbi, true);
if (issued) {
__wait_discard_cmd(sbi, true);
wait_ms = DEF_MIN_DISCARD_ISSUE_TIME;
} else {
wait_ms = DEF_MAX_DISCARD_ISSUE_TIME;
}
sb_end_intwrite(sbi->sb);
congestion_wait(BLK_RW_SYNC, HZ/50);
} while (!kthread_should_stop()); } while (!kthread_should_stop());
return 0; return 0;
} }
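issue_discard_thread() now sleeps with a timeout and adapts it: a short interval while commands are going out (or were skipped only because the device was busy) and a long one once the pending lists drain; discard_wake plus gc_urgent can additionally tag every pending list with P_TRIM via mark_discard_range_all() to flush everything out. A sketch of the pacing decision; next_wait() is invented for the illustration, the intervals mirror the kernel defaults:

#include <stdio.h>

#define MIN_DISCARD_ISSUE_TIME  50      /* ms */
#define MAX_DISCARD_ISSUE_TIME  60000   /* ms */

static unsigned int next_wait(int issued)
{
    /*
     * issued != 0: commands were issued, or pending work was skipped
     *              because IO was busy (-1), so poll again soon.
     * issued == 0: nothing is pending, sleep long until woken.
     */
    return issued ? MIN_DISCARD_ISSUE_TIME : MAX_DISCARD_ISSUE_TIME;
}

int main(void)
{
    printf("after issuing:  wait %u ms\n", next_wait(8));
    printf("device busy:    wait %u ms\n", next_wait(-1));
    printf("nothing queued: wait %u ms\n", next_wait(0));
    return 0;
}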
...@@ -1320,7 +1456,8 @@ static void set_prefree_as_free_segments(struct f2fs_sb_info *sbi) ...@@ -1320,7 +1456,8 @@ static void set_prefree_as_free_segments(struct f2fs_sb_info *sbi)
void clear_prefree_segments(struct f2fs_sb_info *sbi, struct cp_control *cpc) void clear_prefree_segments(struct f2fs_sb_info *sbi, struct cp_control *cpc)
{ {
struct list_head *head = &(SM_I(sbi)->dcc_info->entry_list); struct discard_cmd_control *dcc = SM_I(sbi)->dcc_info;
struct list_head *head = &dcc->entry_list;
struct discard_entry *entry, *this; struct discard_entry *entry, *this;
struct dirty_seglist_info *dirty_i = DIRTY_I(sbi); struct dirty_seglist_info *dirty_i = DIRTY_I(sbi);
unsigned long *prefree_map = dirty_i->dirty_segmap[PRE]; unsigned long *prefree_map = dirty_i->dirty_segmap[PRE];
...@@ -1402,11 +1539,11 @@ void clear_prefree_segments(struct f2fs_sb_info *sbi, struct cp_control *cpc) ...@@ -1402,11 +1539,11 @@ void clear_prefree_segments(struct f2fs_sb_info *sbi, struct cp_control *cpc)
goto find_next; goto find_next;
list_del(&entry->list); list_del(&entry->list);
SM_I(sbi)->dcc_info->nr_discards -= total_len; dcc->nr_discards -= total_len;
kmem_cache_free(discard_entry_slab, entry); kmem_cache_free(discard_entry_slab, entry);
} }
wake_up(&SM_I(sbi)->dcc_info->discard_wait_queue); wake_up_discard_thread(sbi, false);
} }
static int create_discard_cmd_control(struct f2fs_sb_info *sbi) static int create_discard_cmd_control(struct f2fs_sb_info *sbi)
...@@ -1424,9 +1561,13 @@ static int create_discard_cmd_control(struct f2fs_sb_info *sbi) ...@@ -1424,9 +1561,13 @@ static int create_discard_cmd_control(struct f2fs_sb_info *sbi)
if (!dcc) if (!dcc)
return -ENOMEM; return -ENOMEM;
dcc->discard_granularity = DEFAULT_DISCARD_GRANULARITY;
INIT_LIST_HEAD(&dcc->entry_list); INIT_LIST_HEAD(&dcc->entry_list);
for (i = 0; i < MAX_PLIST_NUM; i++) for (i = 0; i < MAX_PLIST_NUM; i++) {
INIT_LIST_HEAD(&dcc->pend_list[i]); INIT_LIST_HEAD(&dcc->pend_list[i]);
if (i >= dcc->discard_granularity - 1)
dcc->pend_list_tag[i] |= P_ACTIVE;
}
INIT_LIST_HEAD(&dcc->wait_list); INIT_LIST_HEAD(&dcc->wait_list);
mutex_init(&dcc->cmd_lock); mutex_init(&dcc->cmd_lock);
atomic_set(&dcc->issued_discard, 0); atomic_set(&dcc->issued_discard, 0);
...@@ -1491,6 +1632,10 @@ static void update_sit_entry(struct f2fs_sb_info *sbi, block_t blkaddr, int del) ...@@ -1491,6 +1632,10 @@ static void update_sit_entry(struct f2fs_sb_info *sbi, block_t blkaddr, int del)
struct seg_entry *se; struct seg_entry *se;
unsigned int segno, offset; unsigned int segno, offset;
long int new_vblocks; long int new_vblocks;
bool exist;
#ifdef CONFIG_F2FS_CHECK_FS
bool mir_exist;
#endif
segno = GET_SEGNO(sbi, blkaddr); segno = GET_SEGNO(sbi, blkaddr);
...@@ -1507,17 +1652,25 @@ static void update_sit_entry(struct f2fs_sb_info *sbi, block_t blkaddr, int del) ...@@ -1507,17 +1652,25 @@ static void update_sit_entry(struct f2fs_sb_info *sbi, block_t blkaddr, int del)
/* Update valid block bitmap */ /* Update valid block bitmap */
if (del > 0) { if (del > 0) {
if (f2fs_test_and_set_bit(offset, se->cur_valid_map)) { exist = f2fs_test_and_set_bit(offset, se->cur_valid_map);
#ifdef CONFIG_F2FS_CHECK_FS #ifdef CONFIG_F2FS_CHECK_FS
if (f2fs_test_and_set_bit(offset, mir_exist = f2fs_test_and_set_bit(offset,
se->cur_valid_map_mir)) se->cur_valid_map_mir);
f2fs_bug_on(sbi, 1); if (unlikely(exist != mir_exist)) {
else f2fs_msg(sbi->sb, KERN_ERR, "Inconsistent error "
WARN_ON(1); "when setting bitmap, blk:%u, old bit:%d",
#else blkaddr, exist);
f2fs_bug_on(sbi, 1); f2fs_bug_on(sbi, 1);
}
#endif #endif
if (unlikely(exist)) {
f2fs_msg(sbi->sb, KERN_ERR,
"Bitmap was wrongly set, blk:%u", blkaddr);
f2fs_bug_on(sbi, 1);
se->valid_blocks--;
del = 0;
} }
if (f2fs_discard_en(sbi) && if (f2fs_discard_en(sbi) &&
!f2fs_test_and_set_bit(offset, se->discard_map)) !f2fs_test_and_set_bit(offset, se->discard_map))
sbi->discard_blks--; sbi->discard_blks--;
...@@ -1528,17 +1681,25 @@ static void update_sit_entry(struct f2fs_sb_info *sbi, block_t blkaddr, int del) ...@@ -1528,17 +1681,25 @@ static void update_sit_entry(struct f2fs_sb_info *sbi, block_t blkaddr, int del)
se->ckpt_valid_blocks++; se->ckpt_valid_blocks++;
} }
} else { } else {
if (!f2fs_test_and_clear_bit(offset, se->cur_valid_map)) { exist = f2fs_test_and_clear_bit(offset, se->cur_valid_map);
#ifdef CONFIG_F2FS_CHECK_FS #ifdef CONFIG_F2FS_CHECK_FS
if (!f2fs_test_and_clear_bit(offset, mir_exist = f2fs_test_and_clear_bit(offset,
se->cur_valid_map_mir)) se->cur_valid_map_mir);
f2fs_bug_on(sbi, 1); if (unlikely(exist != mir_exist)) {
else f2fs_msg(sbi->sb, KERN_ERR, "Inconsistent error "
WARN_ON(1); "when clearing bitmap, blk:%u, old bit:%d",
#else blkaddr, exist);
f2fs_bug_on(sbi, 1); f2fs_bug_on(sbi, 1);
}
#endif #endif
if (unlikely(!exist)) {
f2fs_msg(sbi->sb, KERN_ERR,
"Bitmap was wrongly cleared, blk:%u", blkaddr);
f2fs_bug_on(sbi, 1);
se->valid_blocks++;
del = 0;
} }
if (f2fs_discard_en(sbi) && if (f2fs_discard_en(sbi) &&
f2fs_test_and_clear_bit(offset, se->discard_map)) f2fs_test_and_clear_bit(offset, se->discard_map))
sbi->discard_blks++; sbi->discard_blks++;
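update_sit_entry() no longer just asserts on an inconsistent valid-block bitmap: it records whether the bit was already set or cleared, cross-checks the mirror bitmap under CONFIG_F2FS_CHECK_FS, logs the offending block address, and neutralizes the accounting delta so the counters stay sane on production kernels. A simplified illustration of the set path, with a plain helper instead of f2fs_test_and_set_bit():

#include <stdbool.h>
#include <stdio.h>

static bool test_and_set(unsigned char *map, int nr)
{
    unsigned char mask = 1u << (nr & 7);
    bool old = map[nr >> 3] & mask;

    map[nr >> 3] |= mask;
    return old;
}

/* Returns the block-count delta that is safe to apply. */
static int mark_block_valid(unsigned char *map, int nr, int del)
{
    bool exist = test_and_set(map, nr);

    if (exist) {
        /* double allocation: complain and drop the delta instead of
         * corrupting the valid-block counter */
        fprintf(stderr, "bitmap was wrongly set, blk:%d\n", nr);
        return 0;
    }
    return del;
}

int main(void)
{
    unsigned char map[64] = { 0 };
    int valid_blocks = 0;

    valid_blocks += mark_block_valid(map, 10, 1);
    valid_blocks += mark_block_valid(map, 10, 1);   /* inconsistent update */
    printf("valid_blocks = %d\n", valid_blocks);
    return 0;
}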
...@@ -1900,7 +2061,7 @@ static void __refresh_next_blkoff(struct f2fs_sb_info *sbi,
  * This function always allocates a used segment(from dirty seglist) by SSR
  * manner, so it should recover the existing segment information of valid blocks
  */
-static void change_curseg(struct f2fs_sb_info *sbi, int type, bool reuse)
+static void change_curseg(struct f2fs_sb_info *sbi, int type)
 {
 	struct dirty_seglist_info *dirty_i = DIRTY_I(sbi);
 	struct curseg_info *curseg = CURSEG_I(sbi, type);
...@@ -1921,12 +2082,10 @@ static void change_curseg(struct f2fs_sb_info *sbi, int type, bool reuse)
 	curseg->alloc_type = SSR;
 	__next_free_blkoff(sbi, curseg, 0);
-	if (reuse) {
-		sum_page = get_sum_page(sbi, new_segno);
-		sum_node = (struct f2fs_summary_block *)page_address(sum_page);
-		memcpy(curseg->sum_blk, sum_node, SUM_ENTRY_SIZE);
-		f2fs_put_page(sum_page, 1);
-	}
+	sum_page = get_sum_page(sbi, new_segno);
+	sum_node = (struct f2fs_summary_block *)page_address(sum_page);
+	memcpy(curseg->sum_blk, sum_node, SUM_ENTRY_SIZE);
+	f2fs_put_page(sum_page, 1);
 }
 static int get_ssr_segment(struct f2fs_sb_info *sbi, int type)
...@@ -1990,7 +2149,7 @@ static void allocate_segment_by_default(struct f2fs_sb_info *sbi,
 	else if (curseg->alloc_type == LFS && is_next_segment_free(sbi, type))
 		new_curseg(sbi, type, false);
 	else if (need_SSR(sbi) && get_ssr_segment(sbi, type))
-		change_curseg(sbi, type, true);
+		change_curseg(sbi, type);
 	else
 		new_curseg(sbi, type, false);
...@@ -2083,6 +2242,9 @@ int f2fs_trim_fs(struct f2fs_sb_info *sbi, struct fstrim_range *range)
 		schedule();
 	}
+	/* It's time to issue all the filed discards */
+	mark_discard_range_all(sbi);
+	f2fs_wait_discard_bios(sbi);
 out:
 	range->len = F2FS_BLK_TO_BYTES(cpc.trimmed);
 	return err;
...@@ -2202,9 +2364,12 @@ void allocate_data_block(struct f2fs_sb_info *sbi, struct page *page,
 	mutex_unlock(&sit_i->sentry_lock);
-	if (page && IS_NODESEG(type))
+	if (page && IS_NODESEG(type)) {
 		fill_node_footer_blkaddr(page, NEXT_FREE_BLKADDR(sbi, curseg));
+		f2fs_inode_chksum_set(sbi, page);
+	}
+
 	if (add_list) {
 		struct f2fs_bio_info *io;
...@@ -2236,7 +2401,8 @@ static void do_write_page(struct f2fs_summary *sum, struct f2fs_io_info *fio) ...@@ -2236,7 +2401,8 @@ static void do_write_page(struct f2fs_summary *sum, struct f2fs_io_info *fio)
} }
} }
-void write_meta_page(struct f2fs_sb_info *sbi, struct page *page)
+void write_meta_page(struct f2fs_sb_info *sbi, struct page *page,
+						enum iostat_type io_type)
{ {
struct f2fs_io_info fio = { struct f2fs_io_info fio = {
.sbi = sbi, .sbi = sbi,
...@@ -2255,6 +2421,8 @@ void write_meta_page(struct f2fs_sb_info *sbi, struct page *page) ...@@ -2255,6 +2421,8 @@ void write_meta_page(struct f2fs_sb_info *sbi, struct page *page)
set_page_writeback(page); set_page_writeback(page);
f2fs_submit_page_write(&fio); f2fs_submit_page_write(&fio);
f2fs_update_iostat(sbi, io_type, F2FS_BLKSIZE);
} }
void write_node_page(unsigned int nid, struct f2fs_io_info *fio) void write_node_page(unsigned int nid, struct f2fs_io_info *fio)
...@@ -2263,6 +2431,8 @@ void write_node_page(unsigned int nid, struct f2fs_io_info *fio) ...@@ -2263,6 +2431,8 @@ void write_node_page(unsigned int nid, struct f2fs_io_info *fio)
set_summary(&sum, nid, 0, 0); set_summary(&sum, nid, 0, 0);
do_write_page(&sum, fio); do_write_page(&sum, fio);
f2fs_update_iostat(fio->sbi, fio->io_type, F2FS_BLKSIZE);
} }
void write_data_page(struct dnode_of_data *dn, struct f2fs_io_info *fio) void write_data_page(struct dnode_of_data *dn, struct f2fs_io_info *fio)
...@@ -2276,13 +2446,22 @@ void write_data_page(struct dnode_of_data *dn, struct f2fs_io_info *fio) ...@@ -2276,13 +2446,22 @@ void write_data_page(struct dnode_of_data *dn, struct f2fs_io_info *fio)
set_summary(&sum, dn->nid, dn->ofs_in_node, ni.version); set_summary(&sum, dn->nid, dn->ofs_in_node, ni.version);
do_write_page(&sum, fio); do_write_page(&sum, fio);
f2fs_update_data_blkaddr(dn, fio->new_blkaddr); f2fs_update_data_blkaddr(dn, fio->new_blkaddr);
f2fs_update_iostat(sbi, fio->io_type, F2FS_BLKSIZE);
} }
 int rewrite_data_page(struct f2fs_io_info *fio)
 {
+	int err;
 	fio->new_blkaddr = fio->old_blkaddr;
 	stat_inc_inplace_blocks(fio->sbi);
-	return f2fs_submit_page_bio(fio);
+	err = f2fs_submit_page_bio(fio);
+	f2fs_update_iostat(fio->sbi, fio->io_type, F2FS_BLKSIZE);
+	return err;
 }
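
The write paths above now report each submitted block to f2fs_update_iostat() with its iostat_type and F2FS_BLKSIZE. The helper itself is not part of these hunks; going only by what this patch shows elsewhere (an iostat_enable switch, an iostat_lock, and a per-type write_iostat[] array), a plausible userspace model of that accounting is sketched below — the io_stats struct, the enum values and the update_iostat() name are stand-ins, not the f2fs implementation.

/*
 * Sketch only: per-type IO byte accounting in the spirit of the
 * f2fs_update_iostat() calls above.  Updates are skipped while the
 * enable switch is off and serialized by a lock, matching the fields
 * this patch adds to f2fs_sb_info.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

enum iostat_type { APP_BUFFERED_IO, APP_DIRECT_IO, FS_DATA_IO, FS_NODE_IO, NR_IO_TYPE };

struct io_stats {
	bool enable;
	pthread_mutex_t lock;
	unsigned long long write_iostat[NR_IO_TYPE];
};

static void update_iostat(struct io_stats *st, enum iostat_type type,
			  unsigned int io_bytes)
{
	if (!st->enable)
		return;
	pthread_mutex_lock(&st->lock);
	st->write_iostat[type] += io_bytes;
	pthread_mutex_unlock(&st->lock);
}

int main(void)
{
	struct io_stats st = { .enable = true,
			       .lock = PTHREAD_MUTEX_INITIALIZER };

	update_iostat(&st, FS_DATA_IO, 4096);	/* one data block written */
	update_iostat(&st, FS_NODE_IO, 4096);	/* one node block written */
	printf("fs data: %llu\n", st.write_iostat[FS_DATA_IO]);
	return 0;
}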
void __f2fs_replace_block(struct f2fs_sb_info *sbi, struct f2fs_summary *sum, void __f2fs_replace_block(struct f2fs_sb_info *sbi, struct f2fs_summary *sum,
...@@ -2324,7 +2503,7 @@ void __f2fs_replace_block(struct f2fs_sb_info *sbi, struct f2fs_summary *sum,
 	/* change the current segment */
 	if (segno != curseg->segno) {
 		curseg->next_segno = segno;
-		change_curseg(sbi, type, true);
+		change_curseg(sbi, type);
 	}
 	curseg->next_blkoff = GET_BLKOFF_FROM_SEG0(sbi, new_blkaddr);
...@@ -2343,7 +2522,7 @@ void __f2fs_replace_block(struct f2fs_sb_info *sbi, struct f2fs_summary *sum,
 	if (recover_curseg) {
 		if (old_cursegno != curseg->segno) {
 			curseg->next_segno = old_cursegno;
-			change_curseg(sbi, type, true);
+			change_curseg(sbi, type);
 		}
 		curseg->next_blkoff = old_blkoff;
 	}
...@@ -2382,8 +2561,7 @@ void f2fs_wait_on_page_writeback(struct page *page,
 	}
 }
-void f2fs_wait_on_encrypted_page_writeback(struct f2fs_sb_info *sbi,
-						block_t blkaddr)
+void f2fs_wait_on_block_writeback(struct f2fs_sb_info *sbi, block_t blkaddr)
 {
 	struct page *cpage;
......
...@@ -492,29 +492,11 @@ static inline int overprovision_segments(struct f2fs_sb_info *sbi) ...@@ -492,29 +492,11 @@ static inline int overprovision_segments(struct f2fs_sb_info *sbi)
return SM_I(sbi)->ovp_segments; return SM_I(sbi)->ovp_segments;
} }
static inline int overprovision_sections(struct f2fs_sb_info *sbi)
{
return GET_SEC_FROM_SEG(sbi, (unsigned int)overprovision_segments(sbi));
}
static inline int reserved_sections(struct f2fs_sb_info *sbi) static inline int reserved_sections(struct f2fs_sb_info *sbi)
{ {
return GET_SEC_FROM_SEG(sbi, (unsigned int)reserved_segments(sbi)); return GET_SEC_FROM_SEG(sbi, (unsigned int)reserved_segments(sbi));
} }
static inline bool need_SSR(struct f2fs_sb_info *sbi)
{
int node_secs = get_blocktype_secs(sbi, F2FS_DIRTY_NODES);
int dent_secs = get_blocktype_secs(sbi, F2FS_DIRTY_DENTS);
int imeta_secs = get_blocktype_secs(sbi, F2FS_DIRTY_IMETA);
if (test_opt(sbi, LFS))
return false;
return free_sections(sbi) <= (node_secs + 2 * dent_secs + imeta_secs +
2 * reserved_sections(sbi));
}
static inline bool has_not_enough_free_secs(struct f2fs_sb_info *sbi, static inline bool has_not_enough_free_secs(struct f2fs_sb_info *sbi,
int freed, int needed) int freed, int needed)
{ {
...@@ -577,6 +559,10 @@ static inline bool need_inplace_update_policy(struct inode *inode, ...@@ -577,6 +559,10 @@ static inline bool need_inplace_update_policy(struct inode *inode,
if (test_opt(sbi, LFS)) if (test_opt(sbi, LFS))
return false; return false;
/* if this is cold file, we should overwrite to avoid fragmentation */
if (file_is_cold(inode))
return true;
if (policy & (0x1 << F2FS_IPU_FORCE)) if (policy & (0x1 << F2FS_IPU_FORCE))
return true; return true;
if (policy & (0x1 << F2FS_IPU_SSR) && need_SSR(sbi)) if (policy & (0x1 << F2FS_IPU_SSR) && need_SSR(sbi))
...@@ -799,3 +785,28 @@ static inline long nr_pages_to_write(struct f2fs_sb_info *sbi, int type, ...@@ -799,3 +785,28 @@ static inline long nr_pages_to_write(struct f2fs_sb_info *sbi, int type,
wbc->nr_to_write = desired; wbc->nr_to_write = desired;
return desired - nr_to_write; return desired - nr_to_write;
} }
static inline void wake_up_discard_thread(struct f2fs_sb_info *sbi, bool force)
{
struct discard_cmd_control *dcc = SM_I(sbi)->dcc_info;
bool wakeup = false;
int i;
if (force)
goto wake_up;
mutex_lock(&dcc->cmd_lock);
for (i = MAX_PLIST_NUM - 1;
i >= 0 && plist_issue(dcc->pend_list_tag[i]); i--) {
if (!list_empty(&dcc->pend_list[i])) {
wakeup = true;
break;
}
}
mutex_unlock(&dcc->cmd_lock);
if (!wakeup)
return;
wake_up:
dcc->discard_wake = 1;
wake_up_interruptible_all(&dcc->discard_wait_queue);
}
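
wake_up_discard_thread() above wakes the discard thread unconditionally when force is set, and otherwise only when one of the pending lists whose tag still allows issuing has discards queued (the gc_urgent sysfs store later in this patch uses the forced form). A small userspace model of that wake-up decision follows; MAX_PLIST_NUM, P_ACTIVE and the list lengths are stand-in values, and should_wake() is not an f2fs symbol.

/*
 * Sketch only: the wake-up policy of wake_up_discard_thread().
 * Scan the pending lists from the largest granularity down, stopping
 * at the first list that is not tagged issuable, and wake only if a
 * scanned list has work queued.
 */
#include <stdbool.h>
#include <stdio.h>

#define MAX_PLIST_NUM	512
#define P_ACTIVE	0x01

static bool should_wake(const int pend_list_len[MAX_PLIST_NUM],
			const unsigned char pend_list_tag[MAX_PLIST_NUM],
			bool force)
{
	int i;

	if (force)
		return true;
	for (i = MAX_PLIST_NUM - 1; i >= 0 && (pend_list_tag[i] & P_ACTIVE); i--)
		if (pend_list_len[i] > 0)
			return true;
	return false;
}

int main(void)
{
	int len[MAX_PLIST_NUM] = { 0 };
	unsigned char tag[MAX_PLIST_NUM];
	int i;

	for (i = 0; i < MAX_PLIST_NUM; i++)
		tag[i] = P_ACTIVE;
	len[63] = 4;	/* pretend 4 discards of that size are pending */

	printf("wake: %d\n", should_wake(len, tag, false));	/* prints 1 */
	return 0;
}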
...@@ -25,6 +25,7 @@ ...@@ -25,6 +25,7 @@
#include <linux/quotaops.h> #include <linux/quotaops.h>
#include <linux/f2fs_fs.h> #include <linux/f2fs_fs.h>
#include <linux/sysfs.h> #include <linux/sysfs.h>
#include <linux/quota.h>
#include "f2fs.h" #include "f2fs.h"
#include "node.h" #include "node.h"
...@@ -107,8 +108,20 @@ enum { ...@@ -107,8 +108,20 @@ enum {
Opt_fault_injection, Opt_fault_injection,
Opt_lazytime, Opt_lazytime,
Opt_nolazytime, Opt_nolazytime,
Opt_quota,
Opt_noquota,
Opt_usrquota, Opt_usrquota,
Opt_grpquota, Opt_grpquota,
Opt_prjquota,
Opt_usrjquota,
Opt_grpjquota,
Opt_prjjquota,
Opt_offusrjquota,
Opt_offgrpjquota,
Opt_offprjjquota,
Opt_jqfmt_vfsold,
Opt_jqfmt_vfsv0,
Opt_jqfmt_vfsv1,
Opt_err, Opt_err,
}; };
...@@ -144,8 +157,20 @@ static match_table_t f2fs_tokens = { ...@@ -144,8 +157,20 @@ static match_table_t f2fs_tokens = {
{Opt_fault_injection, "fault_injection=%u"}, {Opt_fault_injection, "fault_injection=%u"},
{Opt_lazytime, "lazytime"}, {Opt_lazytime, "lazytime"},
{Opt_nolazytime, "nolazytime"}, {Opt_nolazytime, "nolazytime"},
{Opt_quota, "quota"},
{Opt_noquota, "noquota"},
{Opt_usrquota, "usrquota"}, {Opt_usrquota, "usrquota"},
{Opt_grpquota, "grpquota"}, {Opt_grpquota, "grpquota"},
{Opt_prjquota, "prjquota"},
{Opt_usrjquota, "usrjquota=%s"},
{Opt_grpjquota, "grpjquota=%s"},
{Opt_prjjquota, "prjjquota=%s"},
{Opt_offusrjquota, "usrjquota="},
{Opt_offgrpjquota, "grpjquota="},
{Opt_offprjjquota, "prjjquota="},
{Opt_jqfmt_vfsold, "jqfmt=vfsold"},
{Opt_jqfmt_vfsv0, "jqfmt=vfsv0"},
{Opt_jqfmt_vfsv1, "jqfmt=vfsv1"},
{Opt_err, NULL}, {Opt_err, NULL},
}; };
...@@ -157,7 +182,7 @@ void f2fs_msg(struct super_block *sb, const char *level, const char *fmt, ...) ...@@ -157,7 +182,7 @@ void f2fs_msg(struct super_block *sb, const char *level, const char *fmt, ...)
va_start(args, fmt); va_start(args, fmt);
vaf.fmt = fmt; vaf.fmt = fmt;
vaf.va = &args; vaf.va = &args;
printk("%sF2FS-fs (%s): %pV\n", level, sb->s_id, &vaf); printk_ratelimited("%sF2FS-fs (%s): %pV\n", level, sb->s_id, &vaf);
va_end(args); va_end(args);
} }
...@@ -168,6 +193,104 @@ static void init_once(void *foo) ...@@ -168,6 +193,104 @@ static void init_once(void *foo)
inode_init_once(&fi->vfs_inode); inode_init_once(&fi->vfs_inode);
} }
#ifdef CONFIG_QUOTA
static const char * const quotatypes[] = INITQFNAMES;
#define QTYPE2NAME(t) (quotatypes[t])
static int f2fs_set_qf_name(struct super_block *sb, int qtype,
substring_t *args)
{
struct f2fs_sb_info *sbi = F2FS_SB(sb);
char *qname;
int ret = -EINVAL;
if (sb_any_quota_loaded(sb) && !sbi->s_qf_names[qtype]) {
f2fs_msg(sb, KERN_ERR,
"Cannot change journaled "
"quota options when quota turned on");
return -EINVAL;
}
qname = match_strdup(args);
if (!qname) {
f2fs_msg(sb, KERN_ERR,
"Not enough memory for storing quotafile name");
return -EINVAL;
}
if (sbi->s_qf_names[qtype]) {
if (strcmp(sbi->s_qf_names[qtype], qname) == 0)
ret = 0;
else
f2fs_msg(sb, KERN_ERR,
"%s quota file already specified",
QTYPE2NAME(qtype));
goto errout;
}
if (strchr(qname, '/')) {
f2fs_msg(sb, KERN_ERR,
"quotafile must be on filesystem root");
goto errout;
}
sbi->s_qf_names[qtype] = qname;
set_opt(sbi, QUOTA);
return 0;
errout:
kfree(qname);
return ret;
}
static int f2fs_clear_qf_name(struct super_block *sb, int qtype)
{
struct f2fs_sb_info *sbi = F2FS_SB(sb);
if (sb_any_quota_loaded(sb) && sbi->s_qf_names[qtype]) {
f2fs_msg(sb, KERN_ERR, "Cannot change journaled quota options"
" when quota turned on");
return -EINVAL;
}
kfree(sbi->s_qf_names[qtype]);
sbi->s_qf_names[qtype] = NULL;
return 0;
}
static int f2fs_check_quota_options(struct f2fs_sb_info *sbi)
{
/*
* We do the test below only for project quotas. 'usrquota' and
* 'grpquota' mount options are allowed even without quota feature
* to support legacy quotas in quota files.
*/
if (test_opt(sbi, PRJQUOTA) && !f2fs_sb_has_project_quota(sbi->sb)) {
f2fs_msg(sbi->sb, KERN_ERR, "Project quota feature not enabled. "
"Cannot enable project quota enforcement.");
return -1;
}
if (sbi->s_qf_names[USRQUOTA] || sbi->s_qf_names[GRPQUOTA] ||
sbi->s_qf_names[PRJQUOTA]) {
if (test_opt(sbi, USRQUOTA) && sbi->s_qf_names[USRQUOTA])
clear_opt(sbi, USRQUOTA);
if (test_opt(sbi, GRPQUOTA) && sbi->s_qf_names[GRPQUOTA])
clear_opt(sbi, GRPQUOTA);
if (test_opt(sbi, PRJQUOTA) && sbi->s_qf_names[PRJQUOTA])
clear_opt(sbi, PRJQUOTA);
if (test_opt(sbi, GRPQUOTA) || test_opt(sbi, USRQUOTA) ||
test_opt(sbi, PRJQUOTA)) {
f2fs_msg(sbi->sb, KERN_ERR, "old and new quota "
"format mixing");
return -1;
}
if (!sbi->s_jquota_fmt) {
f2fs_msg(sbi->sb, KERN_ERR, "journaled quota format "
"not specified");
return -1;
}
}
return 0;
}
#endif
static int parse_options(struct super_block *sb, char *options) static int parse_options(struct super_block *sb, char *options)
{ {
struct f2fs_sb_info *sbi = F2FS_SB(sb); struct f2fs_sb_info *sbi = F2FS_SB(sb);
...@@ -175,6 +298,9 @@ static int parse_options(struct super_block *sb, char *options) ...@@ -175,6 +298,9 @@ static int parse_options(struct super_block *sb, char *options)
substring_t args[MAX_OPT_ARGS]; substring_t args[MAX_OPT_ARGS];
char *p, *name; char *p, *name;
int arg = 0; int arg = 0;
#ifdef CONFIG_QUOTA
int ret;
#endif
if (!options) if (!options)
return 0; return 0;
...@@ -386,15 +512,76 @@ static int parse_options(struct super_block *sb, char *options) ...@@ -386,15 +512,76 @@ static int parse_options(struct super_block *sb, char *options)
sb->s_flags &= ~MS_LAZYTIME; sb->s_flags &= ~MS_LAZYTIME;
break; break;
#ifdef CONFIG_QUOTA #ifdef CONFIG_QUOTA
case Opt_quota:
case Opt_usrquota: case Opt_usrquota:
set_opt(sbi, USRQUOTA); set_opt(sbi, USRQUOTA);
break; break;
case Opt_grpquota: case Opt_grpquota:
set_opt(sbi, GRPQUOTA); set_opt(sbi, GRPQUOTA);
break; break;
case Opt_prjquota:
set_opt(sbi, PRJQUOTA);
break;
case Opt_usrjquota:
ret = f2fs_set_qf_name(sb, USRQUOTA, &args[0]);
if (ret)
return ret;
break;
case Opt_grpjquota:
ret = f2fs_set_qf_name(sb, GRPQUOTA, &args[0]);
if (ret)
return ret;
break;
case Opt_prjjquota:
ret = f2fs_set_qf_name(sb, PRJQUOTA, &args[0]);
if (ret)
return ret;
break;
case Opt_offusrjquota:
ret = f2fs_clear_qf_name(sb, USRQUOTA);
if (ret)
return ret;
break;
case Opt_offgrpjquota:
ret = f2fs_clear_qf_name(sb, GRPQUOTA);
if (ret)
return ret;
break;
case Opt_offprjjquota:
ret = f2fs_clear_qf_name(sb, PRJQUOTA);
if (ret)
return ret;
break;
case Opt_jqfmt_vfsold:
sbi->s_jquota_fmt = QFMT_VFS_OLD;
break;
case Opt_jqfmt_vfsv0:
sbi->s_jquota_fmt = QFMT_VFS_V0;
break;
case Opt_jqfmt_vfsv1:
sbi->s_jquota_fmt = QFMT_VFS_V1;
break;
case Opt_noquota:
clear_opt(sbi, QUOTA);
clear_opt(sbi, USRQUOTA);
clear_opt(sbi, GRPQUOTA);
clear_opt(sbi, PRJQUOTA);
break;
#else #else
case Opt_quota:
case Opt_usrquota: case Opt_usrquota:
case Opt_grpquota: case Opt_grpquota:
case Opt_prjquota:
case Opt_usrjquota:
case Opt_grpjquota:
case Opt_prjjquota:
case Opt_offusrjquota:
case Opt_offgrpjquota:
case Opt_offprjjquota:
case Opt_jqfmt_vfsold:
case Opt_jqfmt_vfsv0:
case Opt_jqfmt_vfsv1:
case Opt_noquota:
f2fs_msg(sb, KERN_INFO, f2fs_msg(sb, KERN_INFO,
"quota operations not supported"); "quota operations not supported");
break; break;
...@@ -406,6 +593,10 @@ static int parse_options(struct super_block *sb, char *options) ...@@ -406,6 +593,10 @@ static int parse_options(struct super_block *sb, char *options)
return -EINVAL; return -EINVAL;
} }
} }
#ifdef CONFIG_QUOTA
if (f2fs_check_quota_options(sbi))
return -EINVAL;
#endif
if (F2FS_IO_SIZE_BITS(sbi) && !test_opt(sbi, LFS)) { if (F2FS_IO_SIZE_BITS(sbi) && !test_opt(sbi, LFS)) {
f2fs_msg(sb, KERN_ERR, f2fs_msg(sb, KERN_ERR,
...@@ -439,6 +630,7 @@ static struct inode *f2fs_alloc_inode(struct super_block *sb) ...@@ -439,6 +630,7 @@ static struct inode *f2fs_alloc_inode(struct super_block *sb)
init_rwsem(&fi->dio_rwsem[READ]); init_rwsem(&fi->dio_rwsem[READ]);
init_rwsem(&fi->dio_rwsem[WRITE]); init_rwsem(&fi->dio_rwsem[WRITE]);
init_rwsem(&fi->i_mmap_sem); init_rwsem(&fi->i_mmap_sem);
init_rwsem(&fi->i_xattr_sem);
#ifdef CONFIG_QUOTA #ifdef CONFIG_QUOTA
memset(&fi->i_dquot, 0, sizeof(fi->i_dquot)); memset(&fi->i_dquot, 0, sizeof(fi->i_dquot));
...@@ -446,6 +638,7 @@ static struct inode *f2fs_alloc_inode(struct super_block *sb) ...@@ -446,6 +638,7 @@ static struct inode *f2fs_alloc_inode(struct super_block *sb)
#endif #endif
/* Will be used by directory only */ /* Will be used by directory only */
fi->i_dir_level = F2FS_SB(sb)->dir_level; fi->i_dir_level = F2FS_SB(sb)->dir_level;
return &fi->vfs_inode; return &fi->vfs_inode;
} }
...@@ -584,7 +777,6 @@ static void destroy_device_list(struct f2fs_sb_info *sbi) ...@@ -584,7 +777,6 @@ static void destroy_device_list(struct f2fs_sb_info *sbi)
kfree(sbi->devs); kfree(sbi->devs);
} }
static void f2fs_quota_off_umount(struct super_block *sb);
static void f2fs_put_super(struct super_block *sb) static void f2fs_put_super(struct super_block *sb)
{ {
struct f2fs_sb_info *sbi = F2FS_SB(sb); struct f2fs_sb_info *sbi = F2FS_SB(sb);
...@@ -642,7 +834,7 @@ static void f2fs_put_super(struct super_block *sb) ...@@ -642,7 +834,7 @@ static void f2fs_put_super(struct super_block *sb)
kfree(sbi->ckpt); kfree(sbi->ckpt);
-	f2fs_exit_sysfs(sbi);
+	f2fs_unregister_sysfs(sbi);
sb->s_fs_info = NULL; sb->s_fs_info = NULL;
if (sbi->s_chksum_driver) if (sbi->s_chksum_driver)
...@@ -651,6 +843,10 @@ static void f2fs_put_super(struct super_block *sb) ...@@ -651,6 +843,10 @@ static void f2fs_put_super(struct super_block *sb)
destroy_device_list(sbi); destroy_device_list(sbi);
mempool_destroy(sbi->write_io_dummy); mempool_destroy(sbi->write_io_dummy);
#ifdef CONFIG_QUOTA
for (i = 0; i < MAXQUOTAS; i++)
kfree(sbi->s_qf_names[i]);
#endif
destroy_percpu_info(sbi); destroy_percpu_info(sbi);
for (i = 0; i < NR_PAGE_TYPE; i++) for (i = 0; i < NR_PAGE_TYPE; i++)
kfree(sbi->write_io[i]); kfree(sbi->write_io[i]);
...@@ -664,6 +860,9 @@ int f2fs_sync_fs(struct super_block *sb, int sync) ...@@ -664,6 +860,9 @@ int f2fs_sync_fs(struct super_block *sb, int sync)
trace_f2fs_sync_fs(sb, sync); trace_f2fs_sync_fs(sb, sync);
if (unlikely(is_sbi_flag_set(sbi, SBI_POR_DOING)))
return -EAGAIN;
if (sync) { if (sync) {
struct cp_control cpc; struct cp_control cpc;
...@@ -698,6 +897,48 @@ static int f2fs_unfreeze(struct super_block *sb) ...@@ -698,6 +897,48 @@ static int f2fs_unfreeze(struct super_block *sb)
return 0; return 0;
} }
#ifdef CONFIG_QUOTA
static int f2fs_statfs_project(struct super_block *sb,
kprojid_t projid, struct kstatfs *buf)
{
struct kqid qid;
struct dquot *dquot;
u64 limit;
u64 curblock;
qid = make_kqid_projid(projid);
dquot = dqget(sb, qid);
if (IS_ERR(dquot))
return PTR_ERR(dquot);
spin_lock(&dq_data_lock);
limit = (dquot->dq_dqb.dqb_bsoftlimit ?
dquot->dq_dqb.dqb_bsoftlimit :
dquot->dq_dqb.dqb_bhardlimit) >> sb->s_blocksize_bits;
if (limit && buf->f_blocks > limit) {
curblock = dquot->dq_dqb.dqb_curspace >> sb->s_blocksize_bits;
buf->f_blocks = limit;
buf->f_bfree = buf->f_bavail =
(buf->f_blocks > curblock) ?
(buf->f_blocks - curblock) : 0;
}
limit = dquot->dq_dqb.dqb_isoftlimit ?
dquot->dq_dqb.dqb_isoftlimit :
dquot->dq_dqb.dqb_ihardlimit;
if (limit && buf->f_files > limit) {
buf->f_files = limit;
buf->f_ffree =
(buf->f_files > dquot->dq_dqb.dqb_curinodes) ?
(buf->f_files - dquot->dq_dqb.dqb_curinodes) : 0;
}
spin_unlock(&dq_data_lock);
dqput(dquot);
return 0;
}
#endif
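
f2fs_statfs_project() above clamps the statfs block and inode counts to the project quota's soft limit (falling back to the hard limit) and recomputes the free counts against current usage. A worked example of that clamping arithmetic follows; the 40 MiB limit, 16 MiB usage and 4KB block size are made-up numbers standing in for the dquot fields.

/*
 * Sketch only: the block-count clamping done by f2fs_statfs_project().
 * limit and curblock play the roles of dqb_bsoftlimit and
 * dqb_curspace after shifting by s_blocksize_bits.
 */
#include <stdio.h>

int main(void)
{
	unsigned long long f_blocks = 1000000;			/* fs-wide blocks */
	unsigned long long f_bfree  = 800000;
	int blocksize_bits = 12;				/* 4096-byte blocks */

	unsigned long long bsoftlimit_bytes = 40ULL << 20;	/* 40 MiB quota */
	unsigned long long curspace_bytes   = 16ULL << 20;	/* 16 MiB used */

	unsigned long long limit = bsoftlimit_bytes >> blocksize_bits;	/* 10240 blocks */
	unsigned long long curblock = curspace_bytes >> blocksize_bits;	/* 4096 blocks */

	if (limit && f_blocks > limit) {
		f_blocks = limit;
		f_bfree = (f_blocks > curblock) ? f_blocks - curblock : 0;
	}
	printf("f_blocks=%llu f_bfree=%llu\n", f_blocks, f_bfree);	/* 10240 6144 */
	return 0;
}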
static int f2fs_statfs(struct dentry *dentry, struct kstatfs *buf) static int f2fs_statfs(struct dentry *dentry, struct kstatfs *buf)
{ {
struct super_block *sb = dentry->d_sb; struct super_block *sb = dentry->d_sb;
...@@ -733,9 +974,49 @@ static int f2fs_statfs(struct dentry *dentry, struct kstatfs *buf) ...@@ -733,9 +974,49 @@ static int f2fs_statfs(struct dentry *dentry, struct kstatfs *buf)
buf->f_fsid.val[0] = (u32)id; buf->f_fsid.val[0] = (u32)id;
buf->f_fsid.val[1] = (u32)(id >> 32); buf->f_fsid.val[1] = (u32)(id >> 32);
#ifdef CONFIG_QUOTA
if (is_inode_flag_set(dentry->d_inode, FI_PROJ_INHERIT) &&
sb_has_quota_limits_enabled(sb, PRJQUOTA)) {
f2fs_statfs_project(sb, F2FS_I(dentry->d_inode)->i_projid, buf);
}
#endif
return 0; return 0;
} }
static inline void f2fs_show_quota_options(struct seq_file *seq,
struct super_block *sb)
{
#ifdef CONFIG_QUOTA
struct f2fs_sb_info *sbi = F2FS_SB(sb);
if (sbi->s_jquota_fmt) {
char *fmtname = "";
switch (sbi->s_jquota_fmt) {
case QFMT_VFS_OLD:
fmtname = "vfsold";
break;
case QFMT_VFS_V0:
fmtname = "vfsv0";
break;
case QFMT_VFS_V1:
fmtname = "vfsv1";
break;
}
seq_printf(seq, ",jqfmt=%s", fmtname);
}
if (sbi->s_qf_names[USRQUOTA])
seq_show_option(seq, "usrjquota", sbi->s_qf_names[USRQUOTA]);
if (sbi->s_qf_names[GRPQUOTA])
seq_show_option(seq, "grpjquota", sbi->s_qf_names[GRPQUOTA]);
if (sbi->s_qf_names[PRJQUOTA])
seq_show_option(seq, "prjjquota", sbi->s_qf_names[PRJQUOTA]);
#endif
}
static int f2fs_show_options(struct seq_file *seq, struct dentry *root) static int f2fs_show_options(struct seq_file *seq, struct dentry *root)
{ {
struct f2fs_sb_info *sbi = F2FS_SB(root->d_sb); struct f2fs_sb_info *sbi = F2FS_SB(root->d_sb);
...@@ -809,11 +1090,16 @@ static int f2fs_show_options(struct seq_file *seq, struct dentry *root) ...@@ -809,11 +1090,16 @@ static int f2fs_show_options(struct seq_file *seq, struct dentry *root)
sbi->fault_info.inject_rate); sbi->fault_info.inject_rate);
#endif #endif
#ifdef CONFIG_QUOTA #ifdef CONFIG_QUOTA
if (test_opt(sbi, QUOTA))
seq_puts(seq, ",quota");
if (test_opt(sbi, USRQUOTA)) if (test_opt(sbi, USRQUOTA))
seq_puts(seq, ",usrquota"); seq_puts(seq, ",usrquota");
if (test_opt(sbi, GRPQUOTA)) if (test_opt(sbi, GRPQUOTA))
seq_puts(seq, ",grpquota"); seq_puts(seq, ",grpquota");
if (test_opt(sbi, PRJQUOTA))
seq_puts(seq, ",prjquota");
#endif #endif
f2fs_show_quota_options(seq, sbi->sb);
return 0; return 0;
} }
...@@ -862,6 +1148,11 @@ static int f2fs_remount(struct super_block *sb, int *flags, char *data) ...@@ -862,6 +1148,11 @@ static int f2fs_remount(struct super_block *sb, int *flags, char *data)
#ifdef CONFIG_F2FS_FAULT_INJECTION #ifdef CONFIG_F2FS_FAULT_INJECTION
struct f2fs_fault_info ffi = sbi->fault_info; struct f2fs_fault_info ffi = sbi->fault_info;
#endif #endif
#ifdef CONFIG_QUOTA
int s_jquota_fmt;
char *s_qf_names[MAXQUOTAS];
int i, j;
#endif
/* /*
* Save the old mount options in case we * Save the old mount options in case we
...@@ -871,6 +1162,23 @@ static int f2fs_remount(struct super_block *sb, int *flags, char *data) ...@@ -871,6 +1162,23 @@ static int f2fs_remount(struct super_block *sb, int *flags, char *data)
old_sb_flags = sb->s_flags; old_sb_flags = sb->s_flags;
active_logs = sbi->active_logs; active_logs = sbi->active_logs;
#ifdef CONFIG_QUOTA
s_jquota_fmt = sbi->s_jquota_fmt;
for (i = 0; i < MAXQUOTAS; i++) {
if (sbi->s_qf_names[i]) {
s_qf_names[i] = kstrdup(sbi->s_qf_names[i],
GFP_KERNEL);
if (!s_qf_names[i]) {
for (j = 0; j < i; j++)
kfree(s_qf_names[j]);
return -ENOMEM;
}
} else {
s_qf_names[i] = NULL;
}
}
#endif
/* recover superblocks we couldn't write due to previous RO mount */ /* recover superblocks we couldn't write due to previous RO mount */
if (!(*flags & MS_RDONLY) && is_sbi_flag_set(sbi, SBI_NEED_SB_WRITE)) { if (!(*flags & MS_RDONLY) && is_sbi_flag_set(sbi, SBI_NEED_SB_WRITE)) {
err = f2fs_commit_super(sbi, false); err = f2fs_commit_super(sbi, false);
...@@ -952,6 +1260,11 @@ static int f2fs_remount(struct super_block *sb, int *flags, char *data) ...@@ -952,6 +1260,11 @@ static int f2fs_remount(struct super_block *sb, int *flags, char *data)
goto restore_gc; goto restore_gc;
} }
skip: skip:
#ifdef CONFIG_QUOTA
/* Release old quota file names */
for (i = 0; i < MAXQUOTAS; i++)
kfree(s_qf_names[i]);
#endif
/* Update the POSIXACL Flag */ /* Update the POSIXACL Flag */
sb->s_flags = (sb->s_flags & ~MS_POSIXACL) | sb->s_flags = (sb->s_flags & ~MS_POSIXACL) |
(test_opt(sbi, POSIX_ACL) ? MS_POSIXACL : 0); (test_opt(sbi, POSIX_ACL) ? MS_POSIXACL : 0);
...@@ -966,6 +1279,13 @@ static int f2fs_remount(struct super_block *sb, int *flags, char *data) ...@@ -966,6 +1279,13 @@ static int f2fs_remount(struct super_block *sb, int *flags, char *data)
stop_gc_thread(sbi); stop_gc_thread(sbi);
} }
restore_opts: restore_opts:
#ifdef CONFIG_QUOTA
sbi->s_jquota_fmt = s_jquota_fmt;
for (i = 0; i < MAXQUOTAS; i++) {
kfree(sbi->s_qf_names[i]);
sbi->s_qf_names[i] = s_qf_names[i];
}
#endif
sbi->mount_opt = org_mount_opt; sbi->mount_opt = org_mount_opt;
sbi->active_logs = active_logs; sbi->active_logs = active_logs;
sb->s_flags = old_sb_flags; sb->s_flags = old_sb_flags;
...@@ -1065,7 +1385,7 @@ static ssize_t f2fs_quota_write(struct super_block *sb, int type, ...@@ -1065,7 +1385,7 @@ static ssize_t f2fs_quota_write(struct super_block *sb, int type,
} }
if (len == towrite) if (len == towrite)
-		return err;
+		return 0;
inode->i_version++; inode->i_version++;
inode->i_mtime = inode->i_ctime = current_time(inode); inode->i_mtime = inode->i_ctime = current_time(inode);
f2fs_mark_inode_dirty_sync(inode, false); f2fs_mark_inode_dirty_sync(inode, false);
...@@ -1082,6 +1402,27 @@ static qsize_t *f2fs_get_reserved_space(struct inode *inode) ...@@ -1082,6 +1402,27 @@ static qsize_t *f2fs_get_reserved_space(struct inode *inode)
return &F2FS_I(inode)->i_reserved_quota; return &F2FS_I(inode)->i_reserved_quota;
} }
static int f2fs_quota_on_mount(struct f2fs_sb_info *sbi, int type)
{
return dquot_quota_on_mount(sbi->sb, sbi->s_qf_names[type],
sbi->s_jquota_fmt, type);
}
void f2fs_enable_quota_files(struct f2fs_sb_info *sbi)
{
int i, ret;
for (i = 0; i < MAXQUOTAS; i++) {
if (sbi->s_qf_names[i]) {
ret = f2fs_quota_on_mount(sbi, i);
if (ret < 0)
f2fs_msg(sbi->sb, KERN_ERR,
"Cannot turn on journaled "
"quota: error %d", ret);
}
}
}
static int f2fs_quota_sync(struct super_block *sb, int type) static int f2fs_quota_sync(struct super_block *sb, int type)
{ {
struct quota_info *dqopt = sb_dqopt(sb); struct quota_info *dqopt = sb_dqopt(sb);
...@@ -1119,7 +1460,7 @@ static int f2fs_quota_on(struct super_block *sb, int type, int format_id, ...@@ -1119,7 +1460,7 @@ static int f2fs_quota_on(struct super_block *sb, int type, int format_id,
struct inode *inode; struct inode *inode;
int err; int err;
-	err = f2fs_quota_sync(sb, -1);
+	err = f2fs_quota_sync(sb, type);
if (err) if (err)
return err; return err;
...@@ -1147,7 +1488,7 @@ static int f2fs_quota_off(struct super_block *sb, int type) ...@@ -1147,7 +1488,7 @@ static int f2fs_quota_off(struct super_block *sb, int type)
if (!inode || !igrab(inode)) if (!inode || !igrab(inode))
return dquot_quota_off(sb, type); return dquot_quota_off(sb, type);
-	f2fs_quota_sync(sb, -1);
+	f2fs_quota_sync(sb, type);
err = dquot_quota_off(sb, type); err = dquot_quota_off(sb, type);
if (err) if (err)
...@@ -1163,7 +1504,7 @@ static int f2fs_quota_off(struct super_block *sb, int type) ...@@ -1163,7 +1504,7 @@ static int f2fs_quota_off(struct super_block *sb, int type)
return err; return err;
} }
-static void f2fs_quota_off_umount(struct super_block *sb)
+void f2fs_quota_off_umount(struct super_block *sb)
{ {
int type; int type;
...@@ -1171,6 +1512,12 @@ static void f2fs_quota_off_umount(struct super_block *sb) ...@@ -1171,6 +1512,12 @@ static void f2fs_quota_off_umount(struct super_block *sb)
f2fs_quota_off(sb, type); f2fs_quota_off(sb, type);
} }
int f2fs_get_projid(struct inode *inode, kprojid_t *projid)
{
*projid = F2FS_I(inode)->i_projid;
return 0;
}
static const struct dquot_operations f2fs_quota_operations = { static const struct dquot_operations f2fs_quota_operations = {
.get_reserved_space = f2fs_get_reserved_space, .get_reserved_space = f2fs_get_reserved_space,
.write_dquot = dquot_commit, .write_dquot = dquot_commit,
...@@ -1180,6 +1527,7 @@ static const struct dquot_operations f2fs_quota_operations = { ...@@ -1180,6 +1527,7 @@ static const struct dquot_operations f2fs_quota_operations = {
.write_info = dquot_commit_info, .write_info = dquot_commit_info,
.alloc_dquot = dquot_alloc, .alloc_dquot = dquot_alloc,
.destroy_dquot = dquot_destroy, .destroy_dquot = dquot_destroy,
.get_projid = f2fs_get_projid,
.get_next_id = dquot_get_next_id, .get_next_id = dquot_get_next_id,
}; };
...@@ -1194,12 +1542,12 @@ static const struct quotactl_ops f2fs_quotactl_ops = { ...@@ -1194,12 +1542,12 @@ static const struct quotactl_ops f2fs_quotactl_ops = {
.get_nextdqblk = dquot_get_next_dqblk, .get_nextdqblk = dquot_get_next_dqblk,
}; };
#else #else
-static inline void f2fs_quota_off_umount(struct super_block *sb)
+void f2fs_quota_off_umount(struct super_block *sb)
{ {
} }
#endif #endif
-static struct super_operations f2fs_sops = {
+static const struct super_operations f2fs_sops = {
.alloc_inode = f2fs_alloc_inode, .alloc_inode = f2fs_alloc_inode,
.drop_inode = f2fs_drop_inode, .drop_inode = f2fs_drop_inode,
.destroy_inode = f2fs_destroy_inode, .destroy_inode = f2fs_destroy_inode,
...@@ -1303,9 +1651,16 @@ static const struct export_operations f2fs_export_ops = { ...@@ -1303,9 +1651,16 @@ static const struct export_operations f2fs_export_ops = {
static loff_t max_file_blocks(void) static loff_t max_file_blocks(void)
{ {
-	loff_t result = (DEF_ADDRS_PER_INODE - F2FS_INLINE_XATTR_ADDRS);
+	loff_t result = 0;
loff_t leaf_count = ADDRS_PER_BLOCK; loff_t leaf_count = ADDRS_PER_BLOCK;
/*
* note: previously, result is equal to (DEF_ADDRS_PER_INODE -
* F2FS_INLINE_XATTR_ADDRS), but now f2fs try to reserve more
* space in inode.i_addr, it will be more safe to reassign
* result as zero.
*/
/* two direct node blocks */ /* two direct node blocks */
result += (leaf_count * 2); result += (leaf_count * 2);
...@@ -1922,6 +2277,11 @@ static int f2fs_fill_super(struct super_block *sb, void *data, int silent) ...@@ -1922,6 +2277,11 @@ static int f2fs_fill_super(struct super_block *sb, void *data, int silent)
sb->s_fs_info = sbi; sb->s_fs_info = sbi;
sbi->raw_super = raw_super; sbi->raw_super = raw_super;
/* precompute checksum seed for metadata */
if (f2fs_sb_has_inode_chksum(sb))
sbi->s_chksum_seed = f2fs_chksum(sbi, ~0, raw_super->uuid,
sizeof(raw_super->uuid));
/* /*
* The BLKZONED feature indicates that the drive was formatted with * The BLKZONED feature indicates that the drive was formatted with
* zone alignment optimization. This is optional for host-aware * zone alignment optimization. This is optional for host-aware
...@@ -1956,7 +2316,7 @@ static int f2fs_fill_super(struct super_block *sb, void *data, int silent) ...@@ -1956,7 +2316,7 @@ static int f2fs_fill_super(struct super_block *sb, void *data, int silent)
#ifdef CONFIG_QUOTA #ifdef CONFIG_QUOTA
sb->dq_op = &f2fs_quota_operations; sb->dq_op = &f2fs_quota_operations;
sb->s_qcop = &f2fs_quotactl_ops; sb->s_qcop = &f2fs_quotactl_ops;
-	sb->s_quota_types = QTYPE_MASK_USR | QTYPE_MASK_GRP;
+	sb->s_quota_types = QTYPE_MASK_USR | QTYPE_MASK_GRP | QTYPE_MASK_PRJ;
#endif #endif
sb->s_op = &f2fs_sops; sb->s_op = &f2fs_sops;
...@@ -1980,6 +2340,10 @@ static int f2fs_fill_super(struct super_block *sb, void *data, int silent) ...@@ -1980,6 +2340,10 @@ static int f2fs_fill_super(struct super_block *sb, void *data, int silent)
set_sbi_flag(sbi, SBI_POR_DOING); set_sbi_flag(sbi, SBI_POR_DOING);
spin_lock_init(&sbi->stat_lock); spin_lock_init(&sbi->stat_lock);
/* init iostat info */
spin_lock_init(&sbi->iostat_lock);
sbi->iostat_enable = false;
for (i = 0; i < NR_PAGE_TYPE; i++) { for (i = 0; i < NR_PAGE_TYPE; i++) {
int n = (i == META) ? 1: NR_TEMP_TYPE; int n = (i == META) ? 1: NR_TEMP_TYPE;
int j; int j;
...@@ -2098,11 +2462,6 @@ static int f2fs_fill_super(struct super_block *sb, void *data, int silent) ...@@ -2098,11 +2462,6 @@ static int f2fs_fill_super(struct super_block *sb, void *data, int silent)
if (err) if (err)
goto free_nm; goto free_nm;
	/* if there are any orphan nodes, free them */
err = recover_orphan_inodes(sbi);
if (err)
goto free_node_inode;
/* read root inode and dentry */ /* read root inode and dentry */
root = f2fs_iget(sb, F2FS_ROOT_INO(sbi)); root = f2fs_iget(sb, F2FS_ROOT_INO(sbi));
if (IS_ERR(root)) { if (IS_ERR(root)) {
...@@ -2122,10 +2481,15 @@ static int f2fs_fill_super(struct super_block *sb, void *data, int silent) ...@@ -2122,10 +2481,15 @@ static int f2fs_fill_super(struct super_block *sb, void *data, int silent)
goto free_root_inode; goto free_root_inode;
} }
-	err = f2fs_init_sysfs(sbi);
+	err = f2fs_register_sysfs(sbi);
if (err) if (err)
goto free_root_inode; goto free_root_inode;
	/* if there are any orphan nodes, free them */
err = recover_orphan_inodes(sbi);
if (err)
goto free_sysfs;
/* recover fsynced data */ /* recover fsynced data */
if (!test_opt(sbi, DISABLE_ROLL_FORWARD)) { if (!test_opt(sbi, DISABLE_ROLL_FORWARD)) {
/* /*
...@@ -2135,7 +2499,7 @@ static int f2fs_fill_super(struct super_block *sb, void *data, int silent) ...@@ -2135,7 +2499,7 @@ static int f2fs_fill_super(struct super_block *sb, void *data, int silent)
if (bdev_read_only(sb->s_bdev) && if (bdev_read_only(sb->s_bdev) &&
!is_set_ckpt_flags(sbi, CP_UMOUNT_FLAG)) { !is_set_ckpt_flags(sbi, CP_UMOUNT_FLAG)) {
err = -EROFS; err = -EROFS;
-			goto free_sysfs;
+			goto free_meta;
} }
if (need_fsck) if (need_fsck)
...@@ -2149,7 +2513,7 @@ static int f2fs_fill_super(struct super_block *sb, void *data, int silent) ...@@ -2149,7 +2513,7 @@ static int f2fs_fill_super(struct super_block *sb, void *data, int silent)
need_fsck = true; need_fsck = true;
f2fs_msg(sb, KERN_ERR, f2fs_msg(sb, KERN_ERR,
"Cannot recover all fsync data errno=%d", err); "Cannot recover all fsync data errno=%d", err);
-			goto free_sysfs;
+			goto free_meta;
} }
} else { } else {
err = recover_fsync_data(sbi, true); err = recover_fsync_data(sbi, true);
...@@ -2173,7 +2537,7 @@ static int f2fs_fill_super(struct super_block *sb, void *data, int silent) ...@@ -2173,7 +2537,7 @@ static int f2fs_fill_super(struct super_block *sb, void *data, int silent)
/* After POR, we can run background GC thread.*/ /* After POR, we can run background GC thread.*/
err = start_gc_thread(sbi); err = start_gc_thread(sbi);
if (err) if (err)
-			goto free_sysfs;
+			goto free_meta;
} }
kfree(options); kfree(options);
...@@ -2191,9 +2555,17 @@ static int f2fs_fill_super(struct super_block *sb, void *data, int silent) ...@@ -2191,9 +2555,17 @@ static int f2fs_fill_super(struct super_block *sb, void *data, int silent)
f2fs_update_time(sbi, REQ_TIME); f2fs_update_time(sbi, REQ_TIME);
return 0; return 0;
-free_sysfs:
+free_meta:
 	f2fs_sync_inode_meta(sbi);
-	f2fs_exit_sysfs(sbi);
+	/*
+	 * Some dirty meta pages can be produced by recover_orphan_inodes()
+	 * failed by EIO. Then, iput(node_inode) can trigger balance_fs_bg()
+	 * followed by write_checkpoint() through f2fs_write_node_pages(), which
+	 * falls into an infinite loop in sync_meta_pages().
+	 */
+	truncate_inode_pages_final(META_MAPPING(sbi));
+free_sysfs:
+	f2fs_unregister_sysfs(sbi);
free_root_inode: free_root_inode:
dput(sb->s_root); dput(sb->s_root);
sb->s_root = NULL; sb->s_root = NULL;
...@@ -2202,13 +2574,6 @@ static int f2fs_fill_super(struct super_block *sb, void *data, int silent) ...@@ -2202,13 +2574,6 @@ static int f2fs_fill_super(struct super_block *sb, void *data, int silent)
mutex_lock(&sbi->umount_mutex); mutex_lock(&sbi->umount_mutex);
release_ino_entry(sbi, true); release_ino_entry(sbi, true);
f2fs_leave_shrinker(sbi); f2fs_leave_shrinker(sbi);
/*
* Some dirty meta pages can be produced by recover_orphan_inodes()
* failed by EIO. Then, iput(node_inode) can trigger balance_fs_bg()
* followed by write_checkpoint() through f2fs_write_node_pages(), which
* falls into an infinite loop in sync_meta_pages().
*/
truncate_inode_pages_final(META_MAPPING(sbi));
iput(sbi->node_inode); iput(sbi->node_inode);
mutex_unlock(&sbi->umount_mutex); mutex_unlock(&sbi->umount_mutex);
f2fs_destroy_stats(sbi); f2fs_destroy_stats(sbi);
...@@ -2228,6 +2593,10 @@ static int f2fs_fill_super(struct super_block *sb, void *data, int silent) ...@@ -2228,6 +2593,10 @@ static int f2fs_fill_super(struct super_block *sb, void *data, int silent)
for (i = 0; i < NR_PAGE_TYPE; i++) for (i = 0; i < NR_PAGE_TYPE; i++)
kfree(sbi->write_io[i]); kfree(sbi->write_io[i]);
destroy_percpu_info(sbi); destroy_percpu_info(sbi);
#ifdef CONFIG_QUOTA
for (i = 0; i < MAXQUOTAS; i++)
kfree(sbi->s_qf_names[i]);
#endif
kfree(options); kfree(options);
free_sb_buf: free_sb_buf:
kfree(raw_super); kfree(raw_super);
...@@ -2311,7 +2680,7 @@ static int __init init_f2fs_fs(void) ...@@ -2311,7 +2680,7 @@ static int __init init_f2fs_fs(void)
err = create_extent_cache(); err = create_extent_cache();
if (err) if (err)
goto free_checkpoint_caches; goto free_checkpoint_caches;
-	err = f2fs_register_sysfs();
+	err = f2fs_init_sysfs();
if (err) if (err)
goto free_extent_cache; goto free_extent_cache;
err = register_shrinker(&f2fs_shrinker_info); err = register_shrinker(&f2fs_shrinker_info);
...@@ -2330,7 +2699,7 @@ static int __init init_f2fs_fs(void) ...@@ -2330,7 +2699,7 @@ static int __init init_f2fs_fs(void)
free_shrinker: free_shrinker:
unregister_shrinker(&f2fs_shrinker_info); unregister_shrinker(&f2fs_shrinker_info);
free_sysfs: free_sysfs:
-	f2fs_unregister_sysfs();
+	f2fs_exit_sysfs();
free_extent_cache: free_extent_cache:
destroy_extent_cache(); destroy_extent_cache();
free_checkpoint_caches: free_checkpoint_caches:
...@@ -2350,7 +2719,7 @@ static void __exit exit_f2fs_fs(void) ...@@ -2350,7 +2719,7 @@ static void __exit exit_f2fs_fs(void)
f2fs_destroy_root_stats(); f2fs_destroy_root_stats();
unregister_filesystem(&f2fs_fs_type); unregister_filesystem(&f2fs_fs_type);
unregister_shrinker(&f2fs_shrinker_info); unregister_shrinker(&f2fs_shrinker_info);
-	f2fs_unregister_sysfs();
+	f2fs_exit_sysfs();
destroy_extent_cache(); destroy_extent_cache();
destroy_checkpoint_caches(); destroy_checkpoint_caches();
destroy_segment_manager_caches(); destroy_segment_manager_caches();
......
...@@ -18,7 +18,6 @@ ...@@ -18,7 +18,6 @@
#include "gc.h" #include "gc.h"
static struct proc_dir_entry *f2fs_proc_root; static struct proc_dir_entry *f2fs_proc_root;
static struct kset *f2fs_kset;
/* Sysfs support for f2fs */ /* Sysfs support for f2fs */
enum { enum {
...@@ -41,6 +40,7 @@ struct f2fs_attr { ...@@ -41,6 +40,7 @@ struct f2fs_attr {
const char *, size_t); const char *, size_t);
int struct_type; int struct_type;
int offset; int offset;
int id;
}; };
static unsigned char *__struct_ptr(struct f2fs_sb_info *sbi, int struct_type) static unsigned char *__struct_ptr(struct f2fs_sb_info *sbi, int struct_type)
...@@ -76,6 +76,34 @@ static ssize_t lifetime_write_kbytes_show(struct f2fs_attr *a, ...@@ -76,6 +76,34 @@ static ssize_t lifetime_write_kbytes_show(struct f2fs_attr *a,
BD_PART_WRITTEN(sbi))); BD_PART_WRITTEN(sbi)));
} }
static ssize_t features_show(struct f2fs_attr *a,
struct f2fs_sb_info *sbi, char *buf)
{
struct super_block *sb = sbi->sb;
int len = 0;
if (!sb->s_bdev->bd_part)
return snprintf(buf, PAGE_SIZE, "0\n");
if (f2fs_sb_has_crypto(sb))
len += snprintf(buf, PAGE_SIZE - len, "%s",
"encryption");
if (f2fs_sb_mounted_blkzoned(sb))
len += snprintf(buf + len, PAGE_SIZE - len, "%s%s",
len ? ", " : "", "blkzoned");
if (f2fs_sb_has_extra_attr(sb))
len += snprintf(buf + len, PAGE_SIZE - len, "%s%s",
len ? ", " : "", "extra_attr");
if (f2fs_sb_has_project_quota(sb))
len += snprintf(buf + len, PAGE_SIZE - len, "%s%s",
len ? ", " : "", "projquota");
if (f2fs_sb_has_inode_chksum(sb))
len += snprintf(buf + len, PAGE_SIZE - len, "%s%s",
len ? ", " : "", "inode_checksum");
len += snprintf(buf + len, PAGE_SIZE - len, "\n");
return len;
}
static ssize_t f2fs_sbi_show(struct f2fs_attr *a, static ssize_t f2fs_sbi_show(struct f2fs_attr *a,
struct f2fs_sb_info *sbi, char *buf) struct f2fs_sb_info *sbi, char *buf)
{ {
...@@ -124,7 +152,39 @@ static ssize_t f2fs_sbi_store(struct f2fs_attr *a, ...@@ -124,7 +152,39 @@ static ssize_t f2fs_sbi_store(struct f2fs_attr *a,
spin_unlock(&sbi->stat_lock); spin_unlock(&sbi->stat_lock);
return count; return count;
} }
if (!strcmp(a->attr.name, "discard_granularity")) {
struct discard_cmd_control *dcc = SM_I(sbi)->dcc_info;
int i;
if (t == 0 || t > MAX_PLIST_NUM)
return -EINVAL;
if (t == *ui)
return count;
mutex_lock(&dcc->cmd_lock);
for (i = 0; i < MAX_PLIST_NUM; i++) {
if (i >= t - 1)
dcc->pend_list_tag[i] |= P_ACTIVE;
else
dcc->pend_list_tag[i] &= (~P_ACTIVE);
}
mutex_unlock(&dcc->cmd_lock);
*ui = t;
return count;
}
*ui = t; *ui = t;
if (!strcmp(a->attr.name, "iostat_enable") && *ui == 0)
f2fs_reset_iostat(sbi);
if (!strcmp(a->attr.name, "gc_urgent") && t == 1 && sbi->gc_thread) {
sbi->gc_thread->gc_wake = 1;
wake_up_interruptible_all(&sbi->gc_thread->gc_wait_queue_head);
wake_up_discard_thread(sbi, true);
}
return count; return count;
} }
...@@ -155,6 +215,30 @@ static void f2fs_sb_release(struct kobject *kobj) ...@@ -155,6 +215,30 @@ static void f2fs_sb_release(struct kobject *kobj)
complete(&sbi->s_kobj_unregister); complete(&sbi->s_kobj_unregister);
} }
enum feat_id {
FEAT_CRYPTO = 0,
FEAT_BLKZONED,
FEAT_ATOMIC_WRITE,
FEAT_EXTRA_ATTR,
FEAT_PROJECT_QUOTA,
FEAT_INODE_CHECKSUM,
};
static ssize_t f2fs_feature_show(struct f2fs_attr *a,
struct f2fs_sb_info *sbi, char *buf)
{
switch (a->id) {
case FEAT_CRYPTO:
case FEAT_BLKZONED:
case FEAT_ATOMIC_WRITE:
case FEAT_EXTRA_ATTR:
case FEAT_PROJECT_QUOTA:
case FEAT_INODE_CHECKSUM:
return snprintf(buf, PAGE_SIZE, "supported\n");
}
return 0;
}
#define F2FS_ATTR_OFFSET(_struct_type, _name, _mode, _show, _store, _offset) \ #define F2FS_ATTR_OFFSET(_struct_type, _name, _mode, _show, _store, _offset) \
static struct f2fs_attr f2fs_attr_##_name = { \ static struct f2fs_attr f2fs_attr_##_name = { \
.attr = {.name = __stringify(_name), .mode = _mode }, \ .attr = {.name = __stringify(_name), .mode = _mode }, \
...@@ -172,12 +256,23 @@ static struct f2fs_attr f2fs_attr_##_name = { \ ...@@ -172,12 +256,23 @@ static struct f2fs_attr f2fs_attr_##_name = { \
#define F2FS_GENERAL_RO_ATTR(name) \ #define F2FS_GENERAL_RO_ATTR(name) \
static struct f2fs_attr f2fs_attr_##name = __ATTR(name, 0444, name##_show, NULL) static struct f2fs_attr f2fs_attr_##name = __ATTR(name, 0444, name##_show, NULL)
#define F2FS_FEATURE_RO_ATTR(_name, _id) \
static struct f2fs_attr f2fs_attr_##_name = { \
.attr = {.name = __stringify(_name), .mode = 0444 }, \
.show = f2fs_feature_show, \
.id = _id, \
}
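
F2FS_FEATURE_RO_ATTR() above stamps out one read-only sysfs attribute per compiled-in feature, all backed by f2fs_feature_show(), which simply answers "supported". The following self-contained illustration shows the same macro pattern with stub types; struct attr and feat_attr below are stand-ins for the kernel's struct attribute and struct f2fs_attr, not real f2fs definitions.

/*
 * Sketch only: generating one named read-only attribute per feature
 * from a single macro, in the spirit of F2FS_FEATURE_RO_ATTR().
 */
#include <stdio.h>

struct attr { const char *name; int mode; };

struct feat_attr {
	struct attr attr;
	const char *(*show)(const struct feat_attr *);
	int id;
};

static const char *feature_show(const struct feat_attr *a)
{
	(void)a;			/* every defined feature reports "supported" */
	return "supported\n";
}

#define FEATURE_RO_ATTR(_name, _id)					\
static struct feat_attr feat_attr_##_name = {				\
	.attr = { .name = #_name, .mode = 0444 },			\
	.show = feature_show,						\
	.id = _id,							\
}

FEATURE_RO_ATTR(encryption, 0);		/* mirrors FEAT_CRYPTO */

int main(void)
{
	printf("%s: %s", feat_attr_encryption.attr.name,
	       feat_attr_encryption.show(&feat_attr_encryption));
	return 0;
}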
F2FS_RW_ATTR(GC_THREAD, f2fs_gc_kthread, gc_urgent_sleep_time,
urgent_sleep_time);
F2FS_RW_ATTR(GC_THREAD, f2fs_gc_kthread, gc_min_sleep_time, min_sleep_time); F2FS_RW_ATTR(GC_THREAD, f2fs_gc_kthread, gc_min_sleep_time, min_sleep_time);
F2FS_RW_ATTR(GC_THREAD, f2fs_gc_kthread, gc_max_sleep_time, max_sleep_time); F2FS_RW_ATTR(GC_THREAD, f2fs_gc_kthread, gc_max_sleep_time, max_sleep_time);
F2FS_RW_ATTR(GC_THREAD, f2fs_gc_kthread, gc_no_gc_sleep_time, no_gc_sleep_time); F2FS_RW_ATTR(GC_THREAD, f2fs_gc_kthread, gc_no_gc_sleep_time, no_gc_sleep_time);
F2FS_RW_ATTR(GC_THREAD, f2fs_gc_kthread, gc_idle, gc_idle); F2FS_RW_ATTR(GC_THREAD, f2fs_gc_kthread, gc_idle, gc_idle);
F2FS_RW_ATTR(GC_THREAD, f2fs_gc_kthread, gc_urgent, gc_urgent);
F2FS_RW_ATTR(SM_INFO, f2fs_sm_info, reclaim_segments, rec_prefree_segments); F2FS_RW_ATTR(SM_INFO, f2fs_sm_info, reclaim_segments, rec_prefree_segments);
F2FS_RW_ATTR(DCC_INFO, discard_cmd_control, max_small_discards, max_discards); F2FS_RW_ATTR(DCC_INFO, discard_cmd_control, max_small_discards, max_discards);
F2FS_RW_ATTR(DCC_INFO, discard_cmd_control, discard_granularity, discard_granularity);
F2FS_RW_ATTR(RESERVED_BLOCKS, f2fs_sb_info, reserved_blocks, reserved_blocks); F2FS_RW_ATTR(RESERVED_BLOCKS, f2fs_sb_info, reserved_blocks, reserved_blocks);
F2FS_RW_ATTR(SM_INFO, f2fs_sm_info, batched_trim_sections, trim_sections); F2FS_RW_ATTR(SM_INFO, f2fs_sm_info, batched_trim_sections, trim_sections);
F2FS_RW_ATTR(SM_INFO, f2fs_sm_info, ipu_policy, ipu_policy); F2FS_RW_ATTR(SM_INFO, f2fs_sm_info, ipu_policy, ipu_policy);
...@@ -191,20 +286,36 @@ F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, max_victim_search, max_victim_search); ...@@ -191,20 +286,36 @@ F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, max_victim_search, max_victim_search);
F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, dir_level, dir_level); F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, dir_level, dir_level);
F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, cp_interval, interval_time[CP_TIME]); F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, cp_interval, interval_time[CP_TIME]);
F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, idle_interval, interval_time[REQ_TIME]); F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, idle_interval, interval_time[REQ_TIME]);
F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, iostat_enable, iostat_enable);
#ifdef CONFIG_F2FS_FAULT_INJECTION #ifdef CONFIG_F2FS_FAULT_INJECTION
F2FS_RW_ATTR(FAULT_INFO_RATE, f2fs_fault_info, inject_rate, inject_rate); F2FS_RW_ATTR(FAULT_INFO_RATE, f2fs_fault_info, inject_rate, inject_rate);
F2FS_RW_ATTR(FAULT_INFO_TYPE, f2fs_fault_info, inject_type, inject_type); F2FS_RW_ATTR(FAULT_INFO_TYPE, f2fs_fault_info, inject_type, inject_type);
#endif #endif
F2FS_GENERAL_RO_ATTR(lifetime_write_kbytes); F2FS_GENERAL_RO_ATTR(lifetime_write_kbytes);
F2FS_GENERAL_RO_ATTR(features);
#ifdef CONFIG_F2FS_FS_ENCRYPTION
F2FS_FEATURE_RO_ATTR(encryption, FEAT_CRYPTO);
#endif
#ifdef CONFIG_BLK_DEV_ZONED
F2FS_FEATURE_RO_ATTR(block_zoned, FEAT_BLKZONED);
#endif
F2FS_FEATURE_RO_ATTR(atomic_write, FEAT_ATOMIC_WRITE);
F2FS_FEATURE_RO_ATTR(extra_attr, FEAT_EXTRA_ATTR);
F2FS_FEATURE_RO_ATTR(project_quota, FEAT_PROJECT_QUOTA);
F2FS_FEATURE_RO_ATTR(inode_checksum, FEAT_INODE_CHECKSUM);
#define ATTR_LIST(name) (&f2fs_attr_##name.attr) #define ATTR_LIST(name) (&f2fs_attr_##name.attr)
static struct attribute *f2fs_attrs[] = { static struct attribute *f2fs_attrs[] = {
ATTR_LIST(gc_urgent_sleep_time),
ATTR_LIST(gc_min_sleep_time), ATTR_LIST(gc_min_sleep_time),
ATTR_LIST(gc_max_sleep_time), ATTR_LIST(gc_max_sleep_time),
ATTR_LIST(gc_no_gc_sleep_time), ATTR_LIST(gc_no_gc_sleep_time),
ATTR_LIST(gc_idle), ATTR_LIST(gc_idle),
ATTR_LIST(gc_urgent),
ATTR_LIST(reclaim_segments), ATTR_LIST(reclaim_segments),
ATTR_LIST(max_small_discards), ATTR_LIST(max_small_discards),
ATTR_LIST(discard_granularity),
ATTR_LIST(batched_trim_sections), ATTR_LIST(batched_trim_sections),
ATTR_LIST(ipu_policy), ATTR_LIST(ipu_policy),
ATTR_LIST(min_ipu_util), ATTR_LIST(min_ipu_util),
...@@ -217,26 +328,59 @@ static struct attribute *f2fs_attrs[] = { ...@@ -217,26 +328,59 @@ static struct attribute *f2fs_attrs[] = {
ATTR_LIST(dirty_nats_ratio), ATTR_LIST(dirty_nats_ratio),
ATTR_LIST(cp_interval), ATTR_LIST(cp_interval),
ATTR_LIST(idle_interval), ATTR_LIST(idle_interval),
ATTR_LIST(iostat_enable),
#ifdef CONFIG_F2FS_FAULT_INJECTION #ifdef CONFIG_F2FS_FAULT_INJECTION
ATTR_LIST(inject_rate), ATTR_LIST(inject_rate),
ATTR_LIST(inject_type), ATTR_LIST(inject_type),
#endif #endif
ATTR_LIST(lifetime_write_kbytes), ATTR_LIST(lifetime_write_kbytes),
ATTR_LIST(features),
ATTR_LIST(reserved_blocks), ATTR_LIST(reserved_blocks),
NULL, NULL,
}; };
static struct attribute *f2fs_feat_attrs[] = {
#ifdef CONFIG_F2FS_FS_ENCRYPTION
ATTR_LIST(encryption),
#endif
#ifdef CONFIG_BLK_DEV_ZONED
ATTR_LIST(block_zoned),
#endif
ATTR_LIST(atomic_write),
ATTR_LIST(extra_attr),
ATTR_LIST(project_quota),
ATTR_LIST(inode_checksum),
NULL,
};
static const struct sysfs_ops f2fs_attr_ops = { static const struct sysfs_ops f2fs_attr_ops = {
.show = f2fs_attr_show, .show = f2fs_attr_show,
.store = f2fs_attr_store, .store = f2fs_attr_store,
}; };
-static struct kobj_type f2fs_ktype = {
+static struct kobj_type f2fs_sb_ktype = {
.default_attrs = f2fs_attrs, .default_attrs = f2fs_attrs,
.sysfs_ops = &f2fs_attr_ops, .sysfs_ops = &f2fs_attr_ops,
.release = f2fs_sb_release, .release = f2fs_sb_release,
}; };
static struct kobj_type f2fs_ktype = {
.sysfs_ops = &f2fs_attr_ops,
};
static struct kset f2fs_kset = {
.kobj = {.ktype = &f2fs_ktype},
};
static struct kobj_type f2fs_feat_ktype = {
.default_attrs = f2fs_feat_attrs,
.sysfs_ops = &f2fs_attr_ops,
};
static struct kobject f2fs_feat = {
.kset = &f2fs_kset,
};
static int segment_info_seq_show(struct seq_file *seq, void *offset) static int segment_info_seq_show(struct seq_file *seq, void *offset)
{ {
struct super_block *sb = seq->private; struct super_block *sb = seq->private;
...@@ -288,6 +432,48 @@ static int segment_bits_seq_show(struct seq_file *seq, void *offset) ...@@ -288,6 +432,48 @@ static int segment_bits_seq_show(struct seq_file *seq, void *offset)
return 0; return 0;
} }
static int iostat_info_seq_show(struct seq_file *seq, void *offset)
{
struct super_block *sb = seq->private;
struct f2fs_sb_info *sbi = F2FS_SB(sb);
time64_t now = ktime_get_real_seconds();
if (!sbi->iostat_enable)
return 0;
seq_printf(seq, "time: %-16llu\n", now);
/* print app IOs */
seq_printf(seq, "app buffered: %-16llu\n",
sbi->write_iostat[APP_BUFFERED_IO]);
seq_printf(seq, "app direct: %-16llu\n",
sbi->write_iostat[APP_DIRECT_IO]);
seq_printf(seq, "app mapped: %-16llu\n",
sbi->write_iostat[APP_MAPPED_IO]);
/* print fs IOs */
seq_printf(seq, "fs data: %-16llu\n",
sbi->write_iostat[FS_DATA_IO]);
seq_printf(seq, "fs node: %-16llu\n",
sbi->write_iostat[FS_NODE_IO]);
seq_printf(seq, "fs meta: %-16llu\n",
sbi->write_iostat[FS_META_IO]);
seq_printf(seq, "fs gc data: %-16llu\n",
sbi->write_iostat[FS_GC_DATA_IO]);
seq_printf(seq, "fs gc node: %-16llu\n",
sbi->write_iostat[FS_GC_NODE_IO]);
seq_printf(seq, "fs cp data: %-16llu\n",
sbi->write_iostat[FS_CP_DATA_IO]);
seq_printf(seq, "fs cp node: %-16llu\n",
sbi->write_iostat[FS_CP_NODE_IO]);
seq_printf(seq, "fs cp meta: %-16llu\n",
sbi->write_iostat[FS_CP_META_IO]);
seq_printf(seq, "fs discard: %-16llu\n",
sbi->write_iostat[FS_DISCARD]);
return 0;
}
#define F2FS_PROC_FILE_DEF(_name) \ #define F2FS_PROC_FILE_DEF(_name) \
static int _name##_open_fs(struct inode *inode, struct file *file) \ static int _name##_open_fs(struct inode *inode, struct file *file) \
{ \ { \
...@@ -303,28 +489,47 @@ static const struct file_operations f2fs_seq_##_name##_fops = { \ ...@@ -303,28 +489,47 @@ static const struct file_operations f2fs_seq_##_name##_fops = { \
F2FS_PROC_FILE_DEF(segment_info); F2FS_PROC_FILE_DEF(segment_info);
F2FS_PROC_FILE_DEF(segment_bits); F2FS_PROC_FILE_DEF(segment_bits);
F2FS_PROC_FILE_DEF(iostat_info);
-int __init f2fs_register_sysfs(void)
+int __init f2fs_init_sysfs(void)
 {
-	f2fs_proc_root = proc_mkdir("fs/f2fs", NULL);
-	f2fs_kset = kset_create_and_add("f2fs", NULL, fs_kobj);
-	if (!f2fs_kset)
-		return -ENOMEM;
-	return 0;
+	int ret;
+
+	kobject_set_name(&f2fs_kset.kobj, "f2fs");
+	f2fs_kset.kobj.parent = fs_kobj;
+	ret = kset_register(&f2fs_kset);
+	if (ret)
+		return ret;
+
+	ret = kobject_init_and_add(&f2fs_feat, &f2fs_feat_ktype,
+				   NULL, "features");
+	if (ret)
+		kset_unregister(&f2fs_kset);
+	else
+		f2fs_proc_root = proc_mkdir("fs/f2fs", NULL);
+	return ret;
 }
-void f2fs_unregister_sysfs(void)
+void f2fs_exit_sysfs(void)
 {
-	kset_unregister(f2fs_kset);
+	kobject_put(&f2fs_feat);
+	kset_unregister(&f2fs_kset);
 	remove_proc_entry("fs/f2fs", NULL);
+	f2fs_proc_root = NULL;
 }
-int f2fs_init_sysfs(struct f2fs_sb_info *sbi)
+int f2fs_register_sysfs(struct f2fs_sb_info *sbi)
 {
 	struct super_block *sb = sbi->sb;
 	int err;
+	sbi->s_kobj.kset = &f2fs_kset;
+	init_completion(&sbi->s_kobj_unregister);
+	err = kobject_init_and_add(&sbi->s_kobj, &f2fs_sb_ktype, NULL,
+				"%s", sb->s_id);
+	if (err)
+		return err;
+
 	if (f2fs_proc_root)
 		sbi->s_proc = proc_mkdir(sb->s_id, f2fs_proc_root);
...@@ -333,33 +538,19 @@ int f2fs_init_sysfs(struct f2fs_sb_info *sbi)
 					&f2fs_seq_segment_info_fops, sb);
 		proc_create_data("segment_bits", S_IRUGO, sbi->s_proc,
 					&f2fs_seq_segment_bits_fops, sb);
+		proc_create_data("iostat_info", S_IRUGO, sbi->s_proc,
+					&f2fs_seq_iostat_info_fops, sb);
 	}
-	sbi->s_kobj.kset = f2fs_kset;
-	init_completion(&sbi->s_kobj_unregister);
-	err = kobject_init_and_add(&sbi->s_kobj, &f2fs_ktype, NULL,
-							"%s", sb->s_id);
-	if (err)
-		goto err_out;
 	return 0;
-err_out:
-	if (sbi->s_proc) {
-		remove_proc_entry("segment_info", sbi->s_proc);
-		remove_proc_entry("segment_bits", sbi->s_proc);
-		remove_proc_entry(sb->s_id, f2fs_proc_root);
-	}
-	return err;
 }
-void f2fs_exit_sysfs(struct f2fs_sb_info *sbi)
+void f2fs_unregister_sysfs(struct f2fs_sb_info *sbi)
 {
-	kobject_del(&sbi->s_kobj);
-	kobject_put(&sbi->s_kobj);
-	wait_for_completion(&sbi->s_kobj_unregister);
 	if (sbi->s_proc) {
+		remove_proc_entry("iostat_info", sbi->s_proc);
 		remove_proc_entry("segment_info", sbi->s_proc);
 		remove_proc_entry("segment_bits", sbi->s_proc);
 		remove_proc_entry(sbi->sb->s_id, f2fs_proc_root);
 	}
+	kobject_del(&sbi->s_kobj);
 }
...@@ -442,7 +442,7 @@ static inline int write_all_xattrs(struct inode *inode, __u32 hsize, ...@@ -442,7 +442,7 @@ static inline int write_all_xattrs(struct inode *inode, __u32 hsize,
} else { } else {
struct dnode_of_data dn; struct dnode_of_data dn;
set_new_dnode(&dn, inode, NULL, NULL, new_nid); set_new_dnode(&dn, inode, NULL, NULL, new_nid);
-		xpage = new_node_page(&dn, XATTR_NODE_OFFSET, ipage);
+		xpage = new_node_page(&dn, XATTR_NODE_OFFSET);
if (IS_ERR(xpage)) { if (IS_ERR(xpage)) {
alloc_nid_failed(sbi, new_nid); alloc_nid_failed(sbi, new_nid);
return PTR_ERR(xpage); return PTR_ERR(xpage);
...@@ -473,8 +473,10 @@ int f2fs_getxattr(struct inode *inode, int index, const char *name, ...@@ -473,8 +473,10 @@ int f2fs_getxattr(struct inode *inode, int index, const char *name,
if (len > F2FS_NAME_LEN) if (len > F2FS_NAME_LEN)
return -ERANGE; return -ERANGE;
down_read(&F2FS_I(inode)->i_xattr_sem);
error = lookup_all_xattrs(inode, ipage, index, len, name, error = lookup_all_xattrs(inode, ipage, index, len, name,
&entry, &base_addr); &entry, &base_addr);
up_read(&F2FS_I(inode)->i_xattr_sem);
if (error) if (error)
return error; return error;
...@@ -503,7 +505,9 @@ ssize_t f2fs_listxattr(struct dentry *dentry, char *buffer, size_t buffer_size) ...@@ -503,7 +505,9 @@ ssize_t f2fs_listxattr(struct dentry *dentry, char *buffer, size_t buffer_size)
int error = 0; int error = 0;
size_t rest = buffer_size; size_t rest = buffer_size;
down_read(&F2FS_I(inode)->i_xattr_sem);
error = read_all_xattrs(inode, NULL, &base_addr); error = read_all_xattrs(inode, NULL, &base_addr);
up_read(&F2FS_I(inode)->i_xattr_sem);
if (error) if (error)
return error; return error;
...@@ -686,7 +690,9 @@ int f2fs_setxattr(struct inode *inode, int index, const char *name, ...@@ -686,7 +690,9 @@ int f2fs_setxattr(struct inode *inode, int index, const char *name,
f2fs_lock_op(sbi); f2fs_lock_op(sbi);
/* protect xattr_ver */ /* protect xattr_ver */
down_write(&F2FS_I(inode)->i_sem); down_write(&F2FS_I(inode)->i_sem);
down_write(&F2FS_I(inode)->i_xattr_sem);
err = __f2fs_setxattr(inode, index, name, value, size, ipage, flags); err = __f2fs_setxattr(inode, index, name, value, size, ipage, flags);
up_write(&F2FS_I(inode)->i_xattr_sem);
up_write(&F2FS_I(inode)->i_sem); up_write(&F2FS_I(inode)->i_sem);
f2fs_unlock_op(sbi); f2fs_unlock_op(sbi);
......
...@@ -186,6 +186,8 @@ struct f2fs_extent { ...@@ -186,6 +186,8 @@ struct f2fs_extent {
#define F2FS_NAME_LEN 255 #define F2FS_NAME_LEN 255
#define F2FS_INLINE_XATTR_ADDRS 50 /* 200 bytes for inline xattrs */ #define F2FS_INLINE_XATTR_ADDRS 50 /* 200 bytes for inline xattrs */
#define DEF_ADDRS_PER_INODE 923 /* Address Pointers in an Inode */ #define DEF_ADDRS_PER_INODE 923 /* Address Pointers in an Inode */
#define CUR_ADDRS_PER_INODE(inode) (DEF_ADDRS_PER_INODE - \
get_extra_isize(inode))
#define DEF_NIDS_PER_INODE 5 /* Node IDs in an Inode */
#define ADDRS_PER_INODE(inode) addrs_per_inode(inode)
#define ADDRS_PER_BLOCK 1018 /* Address Pointers in a Direct Block */
...@@ -205,9 +207,7 @@ struct f2fs_extent {
#define F2FS_INLINE_DENTRY 0x04 /* file inline dentry flag */
#define F2FS_DATA_EXIST 0x08 /* file inline data exist flag */
#define F2FS_INLINE_DOTS 0x10 /* file having implicit dot dentries */
#define F2FS_EXTRA_ATTR 0x20 /* file having extra attribute */
#define MAX_INLINE_DATA (sizeof(__le32) * (DEF_ADDRS_PER_INODE - \
F2FS_INLINE_XATTR_ADDRS - 1))
struct f2fs_inode {
__le16 i_mode; /* file mode */
...@@ -235,8 +235,16 @@ struct f2fs_inode {
struct f2fs_extent i_ext; /* caching a largest extent */
__le32 i_addr[DEF_ADDRS_PER_INODE]; /* Pointers to data blocks */
union {
struct {
__le16 i_extra_isize; /* extra inode attribute size */
__le16 i_padding; /* padding */
__le32 i_projid; /* project id */
__le32 i_inode_checksum;/* inode meta checksum */
__le32 i_extra_end[0]; /* for attribute size calculation */
};
__le32 i_addr[DEF_ADDRS_PER_INODE]; /* Pointers to data blocks */
};
__le32 i_nid[DEF_NIDS_PER_INODE]; /* direct(2), indirect(2),
double_indirect(1) node id */
} __packed;
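The union above overlays the optional fields (i_extra_isize, i_padding, i_projid, i_inode_checksum) on the head of the former i_addr[] area, so an inode flagged F2FS_EXTRA_ATTR gives up i_extra_isize bytes of block pointers; CUR_ADDRS_PER_INODE() earlier in this header encodes exactly that. A small illustrative sketch of the arithmetic (the helper names below are hypothetical, only the constant and field names come from the diff):

#include <linux/types.h>

/* extra attribute bytes are carved out of i_addr[], 4 bytes per pointer slot */
#define SKETCH_DEF_ADDRS_PER_INODE	923

static inline unsigned int sketch_extra_slots(unsigned int extra_isize_bytes)
{
	return extra_isize_bytes / sizeof(__le32);
}

static inline unsigned int sketch_cur_addrs_per_inode(unsigned int extra_isize_bytes)
{
	/* e.g. the 12 bytes shown above (2 + 2 + 4 + 4) cost three slots: 923 - 3 = 920 */
	return SKETCH_DEF_ADDRS_PER_INODE - sketch_extra_slots(extra_isize_bytes);
}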
...@@ -465,7 +473,7 @@ typedef __le32 f2fs_hash_t;
#define MAX_DIR_BUCKETS (1 << ((MAX_DIR_HASH_DEPTH / 2) - 1))
/*
* space utilization of regular dentry and inline dentry
* space utilization of regular dentry and inline dentry (w/o extra reservation)
* regular dentry inline dentry
* bitmap 1 * 27 = 27 1 * 23 = 23
* reserved 1 * 3 = 3 1 * 7 = 7
...@@ -501,24 +509,6 @@ struct f2fs_dentry_block {
__u8 filename[NR_DENTRY_IN_BLOCK][F2FS_SLOT_LEN];
} __packed;
/* for inline dir */
#define NR_INLINE_DENTRY (MAX_INLINE_DATA * BITS_PER_BYTE / \
((SIZE_OF_DIR_ENTRY + F2FS_SLOT_LEN) * \
BITS_PER_BYTE + 1))
#define INLINE_DENTRY_BITMAP_SIZE ((NR_INLINE_DENTRY + \
BITS_PER_BYTE - 1) / BITS_PER_BYTE)
#define INLINE_RESERVED_SIZE (MAX_INLINE_DATA - \
((SIZE_OF_DIR_ENTRY + F2FS_SLOT_LEN) * \
NR_INLINE_DENTRY + INLINE_DENTRY_BITMAP_SIZE))
/* inline directory entry structure */
struct f2fs_inline_dentry {
__u8 dentry_bitmap[INLINE_DENTRY_BITMAP_SIZE];
__u8 reserved[INLINE_RESERVED_SIZE];
struct f2fs_dir_entry dentry[NR_INLINE_DENTRY];
__u8 filename[NR_INLINE_DENTRY][F2FS_SLOT_LEN];
} __packed;
/* file types used in inode_info->flags */
enum {
F2FS_FT_UNKNOWN,
...@@ -534,4 +524,6 @@ enum {
#define S_SHIFT 12
#define F2FS_DEF_PROJID 0 /* default project ID */
#endif /* _LINUX_F2FS_FS_H */
...@@ -543,14 +543,14 @@ TRACE_EVENT(f2fs_map_blocks,
TRACE_EVENT(f2fs_background_gc,
TP_PROTO(struct super_block *sb, long wait_ms,
TP_PROTO(struct super_block *sb, unsigned int wait_ms,
unsigned int prefree, unsigned int free),
TP_ARGS(sb, wait_ms, prefree, free),
TP_STRUCT__entry(
__field(dev_t, dev)
__field(long, wait_ms)
__field(unsigned int, wait_ms)
__field(unsigned int, prefree)
__field(unsigned int, free)
),
...@@ -562,13 +562,120 @@ TRACE_EVENT(f2fs_background_gc,
__entry->free = free;
),
TP_printk("dev = (%d,%d), wait_ms = %ld, prefree = %u, free = %u",
TP_printk("dev = (%d,%d), wait_ms = %u, prefree = %u, free = %u",
show_dev(__entry->dev),
__entry->wait_ms,
__entry->prefree,
__entry->free)
);
TRACE_EVENT(f2fs_gc_begin,
TP_PROTO(struct super_block *sb, bool sync, bool background,
long long dirty_nodes, long long dirty_dents,
long long dirty_imeta, unsigned int free_sec,
unsigned int free_seg, int reserved_seg,
unsigned int prefree_seg),
TP_ARGS(sb, sync, background, dirty_nodes, dirty_dents, dirty_imeta,
free_sec, free_seg, reserved_seg, prefree_seg),
TP_STRUCT__entry(
__field(dev_t, dev)
__field(bool, sync)
__field(bool, background)
__field(long long, dirty_nodes)
__field(long long, dirty_dents)
__field(long long, dirty_imeta)
__field(unsigned int, free_sec)
__field(unsigned int, free_seg)
__field(int, reserved_seg)
__field(unsigned int, prefree_seg)
),
TP_fast_assign(
__entry->dev = sb->s_dev;
__entry->sync = sync;
__entry->background = background;
__entry->dirty_nodes = dirty_nodes;
__entry->dirty_dents = dirty_dents;
__entry->dirty_imeta = dirty_imeta;
__entry->free_sec = free_sec;
__entry->free_seg = free_seg;
__entry->reserved_seg = reserved_seg;
__entry->prefree_seg = prefree_seg;
),
TP_printk("dev = (%d,%d), sync = %d, background = %d, nodes = %lld, "
"dents = %lld, imeta = %lld, free_sec:%u, free_seg:%u, "
"rsv_seg:%d, prefree_seg:%u",
show_dev(__entry->dev),
__entry->sync,
__entry->background,
__entry->dirty_nodes,
__entry->dirty_dents,
__entry->dirty_imeta,
__entry->free_sec,
__entry->free_seg,
__entry->reserved_seg,
__entry->prefree_seg)
);
TRACE_EVENT(f2fs_gc_end,
TP_PROTO(struct super_block *sb, int ret, int seg_freed,
int sec_freed, long long dirty_nodes,
long long dirty_dents, long long dirty_imeta,
unsigned int free_sec, unsigned int free_seg,
int reserved_seg, unsigned int prefree_seg),
TP_ARGS(sb, ret, seg_freed, sec_freed, dirty_nodes, dirty_dents,
dirty_imeta, free_sec, free_seg, reserved_seg, prefree_seg),
TP_STRUCT__entry(
__field(dev_t, dev)
__field(int, ret)
__field(int, seg_freed)
__field(int, sec_freed)
__field(long long, dirty_nodes)
__field(long long, dirty_dents)
__field(long long, dirty_imeta)
__field(unsigned int, free_sec)
__field(unsigned int, free_seg)
__field(int, reserved_seg)
__field(unsigned int, prefree_seg)
),
TP_fast_assign(
__entry->dev = sb->s_dev;
__entry->ret = ret;
__entry->seg_freed = seg_freed;
__entry->sec_freed = sec_freed;
__entry->dirty_nodes = dirty_nodes;
__entry->dirty_dents = dirty_dents;
__entry->dirty_imeta = dirty_imeta;
__entry->free_sec = free_sec;
__entry->free_seg = free_seg;
__entry->reserved_seg = reserved_seg;
__entry->prefree_seg = prefree_seg;
),
TP_printk("dev = (%d,%d), ret = %d, seg_freed = %d, sec_freed = %d, "
"nodes = %lld, dents = %lld, imeta = %lld, free_sec:%u, "
"free_seg:%u, rsv_seg:%d, prefree_seg:%u",
show_dev(__entry->dev),
__entry->ret,
__entry->seg_freed,
__entry->sec_freed,
__entry->dirty_nodes,
__entry->dirty_dents,
__entry->dirty_imeta,
__entry->free_sec,
__entry->free_seg,
__entry->reserved_seg,
__entry->prefree_seg)
);
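f2fs_gc_begin and f2fs_gc_end are meant to bracket one garbage-collection pass so the before/after dirty-page counts and free-space figures can be compared from a trace. A condensed sketch of the expected call pattern (the counter helpers are the usual f2fs segment accessors; treat the exact gathering as illustrative rather than a copy of fs/f2fs/gc.c):

static int f2fs_gc_sketch(struct f2fs_sb_info *sbi, bool sync, bool background)
{
	int ret = 0, seg_freed = 0, sec_freed = 0;

	trace_f2fs_gc_begin(sbi->sb, sync, background,
				get_pages(sbi, F2FS_DIRTY_NODES),
				get_pages(sbi, F2FS_DIRTY_DENTS),
				get_pages(sbi, F2FS_DIRTY_IMETA),
				free_sections(sbi),
				free_segments(sbi),
				reserved_segments(sbi),
				prefree_segments(sbi));

	/* ... pick victim segments and migrate their blocks,
	 *     updating ret, seg_freed and sec_freed ... */

	trace_f2fs_gc_end(sbi->sb, ret, seg_freed, sec_freed,
				get_pages(sbi, F2FS_DIRTY_NODES),
				get_pages(sbi, F2FS_DIRTY_DENTS),
				get_pages(sbi, F2FS_DIRTY_IMETA),
				free_sections(sbi),
				free_segments(sbi),
				reserved_segments(sbi),
				prefree_segments(sbi));
	return ret;
}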
TRACE_EVENT(f2fs_get_victim,
TP_PROTO(struct super_block *sb, int type, int gc_type,
...