Commit ff49c86f authored by Linus Torvalds

Merge tag 'f2fs-for-5.11-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs

Pull f2fs updates from Jaegeuk Kim:
 "In this round, we've made more work into per-file compression support.

  For example, F2FS_IOC_GET_COMPRESS_OPTION and
  F2FS_IOC_SET_COMPRESS_OPTION provide a way to change the algorithm or
  cluster size per file, while F2FS_IOC_COMPRESS_FILE and
  F2FS_IOC_DECOMPRESS_FILE provide a way to compress and decompress
  existing normal files manually.

  There is also a new mount option, compress_mode=fs|user, which can
  control who compresses the data.

  Chao also added a checksum feature, guarded by a mount option, so that
  we can detect any corrupted cluster.

  In addition, Daniel contributed a patch that makes casefolding work
  with encryption, which will be used for Android devices.

  Summary:

  Enhancements:
   - add ioctls and mount option to manage per-file compression feature
   - support casefolding with encryption
   - support checksum for compressed cluster
   - avoid IO starvation by replacing mutex with rwsem
   - add a sysfs entry, max_io_bytes, to control the max bio size

  Bug fixes:
   - fix use-after-free issue when compression and fsverity are enabled
   - fix consistency corruption during fault injection test
   - fix data offset for lseek
   - get rid of buffer_head which has 32bits limit in fiemap
   - fix some bugs in multi-partitions support
   - fix nat entry count calculation in shrinker
   - fix some stat information

  And we've refactored some logic and fixed minor bugs as well"

* tag 'f2fs-for-5.11-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs: (36 commits)
  f2fs: compress: fix compression chksum
  f2fs: fix shift-out-of-bounds in sanity_check_raw_super()
  f2fs: fix race of pending_pages in decompression
  f2fs: fix to account inline xattr correctly during recovery
  f2fs: inline: fix wrong inline inode stat
  f2fs: inline: correct comment in f2fs_recover_inline_data
  f2fs: don't check PAGE_SIZE again in sanity_check_raw_super()
  f2fs: convert to F2FS_*_INO macro
  f2fs: introduce max_io_bytes, a sysfs entry, to limit bio size
  f2fs: don't allow any writes on readonly mount
  f2fs: avoid race condition for shrinker count
  f2fs: add F2FS_IOC_DECOMPRESS_FILE and F2FS_IOC_COMPRESS_FILE
  f2fs: add compress_mode mount option
  f2fs: Remove unnecessary unlikely()
  f2fs: init dirty_secmap incorrectly
  f2fs: remove buffer_head which has 32bits limit
  f2fs: fix wrong block count instead of bytes
  f2fs: use new conversion functions between blks and bytes
  f2fs: rename logical_to_blk and blk_to_logical
  f2fs: fix kbytes written stat for multi-device case
  ...
parents b97d4c42 75e91c88
@@ -370,3 +370,10 @@ Date: April 2020
 Contact:	"Daeho Jeong" <daehojeong@google.com>
 Description:	Give a way to change iostat_period time. 3secs by default.
 		The new iostat trace gives stats gap given the period.
+
+What:		/sys/fs/f2fs/<disk>/max_io_bytes
+Date:		December 2020
+Contact:	"Jaegeuk Kim" <jaegeuk@kernel.org>
+Description:	This gives a control to limit the bio size in f2fs.
+		Default is zero, which will follow underlying block layer limit,
+		whereas, if it has a certain bytes value, f2fs won't submit a
+		bio larger than that size.
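
For userspace, the new attribute is driven like any other f2fs sysfs knob. A
minimal C sketch that caps f2fs bios at 512 KiB (the device name "sdb1" and the
value are hypothetical; writing "0" restores the block-layer default):

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        /* assumed path; substitute the real <disk> name */
        const char *attr = "/sys/fs/f2fs/sdb1/max_io_bytes";
        const char *val = "524288";     /* 512 KiB */
        int fd = open(attr, O_WRONLY);

        if (fd < 0) {
            perror("open");
            return 1;
        }
        if (write(fd, val, strlen(val)) < 0)
            perror("write");
        close(fd);
        return 0;
    }
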
@@ -260,6 +260,14 @@ compress_extension=%s Support adding specified extension, so that f2fs can enab
			 For other files, we can still enable compression via ioctl.
			 Note that, there is one reserved special extension '*', it
			 can be set to enable compression for all files.
+compress_chksum		 Support verifying chksum of raw data in compressed cluster.
+compress_mode=%s	 Control file compression mode. This supports "fs" and "user"
+			 modes. In "fs" mode (default), f2fs does automatic compression
+			 on the compression enabled files. In "user" mode, f2fs disables
+			 the automatic compression and gives the user discretion of
+			 choosing the target file and the timing. The user can do manual
+			 compression/decompression on the compression enabled files using
+			 ioctls.
 inlinecrypt		 When possible, encrypt/decrypt the contents of encrypted
			 files using the blk-crypto framework rather than
			 filesystem-layer encryption. This allows the use of
@@ -810,6 +818,34 @@ Compress metadata layout::
 	| data length | data chksum | reserved |      compressed data       |
 	+-------------+-------------+----------+----------------------------+

+Compression mode
+--------------------------
+
+f2fs supports "fs" and "user" compression modes with the "compress_mode" mount
+option. With this option, f2fs provides a choice of how to compress the
+compression enabled files (refer to the "Compression implementation" section
+for how to enable compression on a regular inode).
+
+1) compress_mode=fs
+This is the default option. f2fs does automatic compression in the writeback of
+the compression enabled files.
+
+2) compress_mode=user
+This disables the automatic compression and gives the user discretion of
+choosing the target file and the timing. The user can do manual
+compression/decompression on the compression enabled files using the
+F2FS_IOC_DECOMPRESS_FILE and F2FS_IOC_COMPRESS_FILE ioctls, as below.
+
+To decompress a file,
+
+fd = open(filename, O_WRONLY, 0);
+ret = ioctl(fd, F2FS_IOC_DECOMPRESS_FILE);
+
+To compress a file,
+
+fd = open(filename, O_WRONLY, 0);
+ret = ioctl(fd, F2FS_IOC_COMPRESS_FILE);
+
 NVMe Zoned Namespace devices
 ----------------------------
...
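
The per-file tuning ioctls called out in the pull message,
F2FS_IOC_GET_COMPRESS_OPTION and F2FS_IOC_SET_COMPRESS_OPTION, are not shown in
the hunk above. A minimal sketch, assuming the struct f2fs_comp_option layout
(algorithm, log_cluster_size) that this series exports through <linux/f2fs.h>,
and with the algorithm number chosen arbitrarily:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <linux/f2fs.h>

    int main(int argc, char **argv)
    {
        struct f2fs_comp_option opt;
        int fd;

        if (argc < 2)
            return 1;
        fd = open(argv[1], O_RDWR);
        if (fd < 0 || ioctl(fd, F2FS_IOC_GET_COMPRESS_OPTION, &opt) < 0) {
            perror("get option");
            return 1;
        }
        printf("algorithm=%u log_cluster_size=%u\n",
               opt.algorithm, opt.log_cluster_size);

        opt.algorithm = 2;  /* assumed to be zstd in this series' numbering */
        if (ioctl(fd, F2FS_IOC_SET_COMPRESS_OPTION, &opt) < 0)
            perror("set option");
        close(fd);
        return 0;
    }

In this series the set path is only accepted on compression-enabled files that
do not yet have data blocks, so in practice it is issued right after creating
the file.
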
@@ -6746,6 +6746,7 @@ F: Documentation/filesystems/f2fs.rst
 F:	fs/f2fs/
 F:	include/linux/f2fs_fs.h
 F:	include/trace/events/f2fs.h
+F:	include/uapi/linux/f2fs.h

 F71805F HARDWARE MONITORING DRIVER
 M:	Jean Delvare <jdelvare@suse.com>
...
@@ -574,7 +574,3 @@ int fscrypt_d_revalidate(struct dentry *dentry, unsigned int flags)
 	return valid;
 }
 EXPORT_SYMBOL_GPL(fscrypt_d_revalidate);
-
-const struct dentry_operations fscrypt_d_ops = {
-	.d_revalidate = fscrypt_d_revalidate,
-};
@@ -297,7 +297,6 @@ int fscrypt_fname_encrypt(const struct inode *inode, const struct qstr *iname,
 bool fscrypt_fname_encrypted_size(const union fscrypt_policy *policy,
 				  u32 orig_len, u32 max_len,
 				  u32 *encrypted_len_ret);
-extern const struct dentry_operations fscrypt_d_ops;

 /* hkdf.c */
...
@@ -108,7 +108,6 @@ int __fscrypt_prepare_lookup(struct inode *dir, struct dentry *dentry,
 		spin_lock(&dentry->d_lock);
 		dentry->d_flags |= DCACHE_NOKEY_NAME;
 		spin_unlock(&dentry->d_lock);
-		d_set_d_op(dentry, &fscrypt_d_ops);
 	}
 	return err;
 }
...
@@ -657,10 +657,3 @@ const struct file_operations ext4_dir_operations = {
 	.fsync		= ext4_sync_file,
 	.release	= ext4_release_dir,
 };
-
-#ifdef CONFIG_UNICODE
-const struct dentry_operations ext4_dentry_ops = {
-	.d_hash = generic_ci_d_hash,
-	.d_compare = generic_ci_d_compare,
-};
-#endif
@@ -3381,10 +3381,6 @@ static inline void ext4_unlock_group(struct super_block *sb,
 /* dir.c */
 extern const struct file_operations ext4_dir_operations;

-#ifdef CONFIG_UNICODE
-extern const struct dentry_operations ext4_dentry_ops;
-#endif
-
 /* file.c */
 extern const struct inode_operations ext4_file_inode_operations;
 extern const struct file_operations ext4_file_operations;
...
@@ -1608,6 +1608,7 @@ static struct buffer_head *ext4_lookup_entry(struct inode *dir,
 	struct buffer_head *bh;

 	err = ext4_fname_prepare_lookup(dir, dentry, &fname);
+	generic_set_encrypted_ci_d_ops(dentry);
 	if (err == -ENOENT)
 		return NULL;
 	if (err)
...
@@ -4963,11 +4963,6 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent)
 		goto failed_mount4;
 	}

-#ifdef CONFIG_UNICODE
-	if (sb->s_encoding)
-		sb->s_d_op = &ext4_dentry_ops;
-#endif
-
 	sb->s_root = d_make_root(root);
 	if (!sb->s_root) {
 		ext4_msg(sb, KERN_ERR, "get root dentry failed");
...
@@ -384,7 +384,7 @@ int f2fs_init_acl(struct inode *inode, struct inode *dir, struct page *ipage,
 		   struct page *dpage)
 {
 	struct posix_acl *default_acl = NULL, *acl = NULL;
-	int error = 0;
+	int error;

 	error = f2fs_acl_create(dir, &inode->i_mode, &default_acl, &acl, dpage);
 	if (error)
...
@@ -37,7 +37,7 @@ void f2fs_stop_checkpoint(struct f2fs_sb_info *sbi, bool end_io)
 struct page *f2fs_grab_meta_page(struct f2fs_sb_info *sbi, pgoff_t index)
 {
 	struct address_space *mapping = META_MAPPING(sbi);
-	struct page *page = NULL;
+	struct page *page;
 repeat:
 	page = f2fs_grab_cache_page(mapping, index, false);
 	if (!page) {

@@ -348,13 +348,13 @@ static int f2fs_write_meta_pages(struct address_space *mapping,
 		goto skip_write;

 	/* if locked failed, cp will flush dirty pages instead */
-	if (!mutex_trylock(&sbi->cp_mutex))
+	if (!down_write_trylock(&sbi->cp_global_sem))
 		goto skip_write;

 	trace_f2fs_writepages(mapping->host, wbc, META);
 	diff = nr_pages_to_write(sbi, META, wbc);
 	written = f2fs_sync_meta_pages(sbi, META, wbc->nr_to_write, FS_META_IO);
-	mutex_unlock(&sbi->cp_mutex);
+	up_write(&sbi->cp_global_sem);
 	wbc->nr_to_write = max((long)0, wbc->nr_to_write - written - diff);
 	return 0;

@@ -1385,6 +1385,26 @@ static void commit_checkpoint(struct f2fs_sb_info *sbi,
 	f2fs_submit_merged_write(sbi, META_FLUSH);
 }

+static inline u64 get_sectors_written(struct block_device *bdev)
+{
+	return (u64)part_stat_read(bdev, sectors[STAT_WRITE]);
+}
+
+u64 f2fs_get_sectors_written(struct f2fs_sb_info *sbi)
+{
+	if (f2fs_is_multi_device(sbi)) {
+		u64 sectors = 0;
+		int i;
+
+		for (i = 0; i < sbi->s_ndevs; i++)
+			sectors += get_sectors_written(FDEV(i).bdev);
+
+		return sectors;
+	}
+
+	return get_sectors_written(sbi->sb->s_bdev);
+}
+
 static int do_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc)
 {
 	struct f2fs_checkpoint *ckpt = F2FS_CKPT(sbi);

@@ -1488,8 +1508,9 @@ static int do_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc)
 	start_blk += data_sum_blocks;

 	/* Record write statistics in the hot node summary */
-	kbytes_written = sbi->kbytes_written + BD_PART_WRITTEN(sbi);
+	kbytes_written = sbi->kbytes_written;
+	kbytes_written += (f2fs_get_sectors_written(sbi) -
+			sbi->sectors_written_start) >> 1;
 	seg_i->journal->info.kbytes_written = cpu_to_le64(kbytes_written);

 	if (__remain_node_summaries(cpc->reason)) {

@@ -1569,7 +1590,7 @@ int f2fs_write_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc)
 		f2fs_warn(sbi, "Start checkpoint disabled!");
 	}
 	if (cpc->reason != CP_RESIZE)
-		mutex_lock(&sbi->cp_mutex);
+		down_write(&sbi->cp_global_sem);

 	if (!is_sbi_flag_set(sbi, SBI_IS_DIRTY) &&
 		((cpc->reason & CP_FASTBOOT) || (cpc->reason & CP_SYNC) ||

@@ -1597,7 +1618,7 @@ int f2fs_write_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc)
 		goto out;
 	}

-	if (NM_I(sbi)->dirty_nat_cnt == 0 &&
+	if (NM_I(sbi)->nat_cnt[DIRTY_NAT] == 0 &&
 			SIT_I(sbi)->dirty_sentries == 0 &&
 			prefree_segments(sbi) == 0) {
 		f2fs_flush_sit_entries(sbi, cpc);

@@ -1644,7 +1665,7 @@ int f2fs_write_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc)
 	trace_f2fs_write_checkpoint(sbi->sb, cpc->reason, "finish checkpoint");
 out:
 	if (cpc->reason != CP_RESIZE)
-		mutex_unlock(&sbi->cp_mutex);
+		up_write(&sbi->cp_global_sem);
 	return err;
 }
...
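
One detail worth spelling out in the statistics code above: part_stat_read()
counts 512-byte sectors, so the right shift by one is the sectors-to-kilobytes
conversion. A trivial userspace check of that identity:

    #include <assert.h>

    int main(void)
    {
        unsigned long long sectors = 123456;    /* 512-byte units */

        /* two sectors per KiB, hence the >> 1 replacing BD_PART_WRITTEN */
        assert((sectors >> 1) == sectors * 512 / 1024);
        return 0;
    }
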
@@ -602,6 +602,7 @@ static int f2fs_compress_pages(struct compress_ctx *cc)
 					f2fs_cops[fi->i_compress_algorithm];
 	unsigned int max_len, new_nr_cpages;
 	struct page **new_cpages;
+	u32 chksum = 0;
 	int i, ret;

 	trace_f2fs_compress_pages_start(cc->inode, cc->cluster_idx,

@@ -655,6 +656,11 @@ static int f2fs_compress_pages(struct compress_ctx *cc)

 	cc->cbuf->clen = cpu_to_le32(cc->clen);

+	if (fi->i_compress_flag & 1 << COMPRESS_CHKSUM)
+		chksum = f2fs_crc32(F2FS_I_SB(cc->inode),
+					cc->cbuf->cdata, cc->clen);
+	cc->cbuf->chksum = cpu_to_le32(chksum);
+
 	for (i = 0; i < COMPRESS_DATA_RESERVED_SIZE; i++)
 		cc->cbuf->reserved[i] = cpu_to_le32(0);

@@ -790,6 +796,22 @@ void f2fs_decompress_pages(struct bio *bio, struct page *page, bool verity)

 	ret = cops->decompress_pages(dic);

+	if (!ret && (fi->i_compress_flag & 1 << COMPRESS_CHKSUM)) {
+		u32 provided = le32_to_cpu(dic->cbuf->chksum);
+		u32 calculated = f2fs_crc32(sbi, dic->cbuf->cdata, dic->clen);
+
+		if (provided != calculated) {
+			if (!is_inode_flag_set(dic->inode, FI_COMPRESS_CORRUPT)) {
+				set_inode_flag(dic->inode, FI_COMPRESS_CORRUPT);
+				printk_ratelimited(
+					"%sF2FS-fs (%s): checksum invalid, nid = %lu, %x vs %x",
+					KERN_INFO, sbi->sb->s_id, dic->inode->i_ino,
+					provided, calculated);
+			}
+			set_sbi_flag(sbi, SBI_NEED_FSCK);
+		}
+	}
+
 out_vunmap_cbuf:
 	vm_unmap_ram(dic->cbuf, dic->nr_cpages);
 out_vunmap_rbuf:

@@ -798,8 +820,6 @@ void f2fs_decompress_pages(struct bio *bio, struct page *page, bool verity)
 	if (cops->destroy_decompress_ctx)
 		cops->destroy_decompress_ctx(dic);
 out_free_dic:
-	if (verity)
-		atomic_set(&dic->pending_pages, dic->nr_cpages);
 	if (!verity)
 		f2fs_decompress_end_io(dic->rpages, dic->cluster_size,
 								ret, false);

@@ -921,7 +941,7 @@ int f2fs_is_compressed_cluster(struct inode *inode, pgoff_t index)
 static bool cluster_may_compress(struct compress_ctx *cc)
 {
-	if (!f2fs_compressed_file(cc->inode))
+	if (!f2fs_need_compress_data(cc->inode))
 		return false;
 	if (f2fs_is_atomic_file(cc->inode))
 		return false;
...
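
The chksum flow above is a plain store-then-verify scheme: the CRC of the
compressed payload is stamped into the cluster metadata at compress time and
recomputed after decompression, with a one-shot warning per inode and an fsck
request on mismatch. A self-contained toy of the same pattern (toy CRC-32 and
cluster layout; f2fs uses the kernel crc32 with its own seed, so the values
here are illustrative only):

    #include <stdint.h>
    #include <stdio.h>

    /* bitwise CRC-32 (reflected, poly 0xEDB88320), for illustration */
    static uint32_t crc32_buf(const uint8_t *p, size_t len)
    {
        uint32_t crc = ~0u;

        while (len--) {
            crc ^= *p++;
            for (int i = 0; i < 8; i++)
                crc = (crc >> 1) ^ (0xEDB88320u & -(crc & 1));
        }
        return ~crc;
    }

    struct cluster {
        uint32_t chksum;
        uint8_t cdata[16];
    };

    int main(void)
    {
        struct cluster c = { .cdata = "compressed-data" };

        c.chksum = crc32_buf(c.cdata, sizeof(c.cdata));  /* compress side */

        c.cdata[3] ^= 0x40;                              /* bit flip on media */
        if (crc32_buf(c.cdata, sizeof(c.cdata)) != c.chksum)
            fprintf(stderr, "checksum invalid: cluster corrupted\n");
        return 0;
    }
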
...
@@ -145,8 +145,8 @@ static void update_general_status(struct f2fs_sb_info *sbi)
 	si->node_pages = NODE_MAPPING(sbi)->nrpages;
 	if (sbi->meta_inode)
 		si->meta_pages = META_MAPPING(sbi)->nrpages;
-	si->nats = NM_I(sbi)->nat_cnt;
-	si->dirty_nats = NM_I(sbi)->dirty_nat_cnt;
+	si->nats = NM_I(sbi)->nat_cnt[TOTAL_NAT];
+	si->dirty_nats = NM_I(sbi)->nat_cnt[DIRTY_NAT];
 	si->sits = MAIN_SEGS(sbi);
 	si->dirty_sits = SIT_I(sbi)->dirty_sentries;
 	si->free_nids = NM_I(sbi)->nid_cnt[FREE_NID];

@@ -278,9 +278,10 @@ static void update_mem_info(struct f2fs_sb_info *sbi)
 	si->cache_mem += (NM_I(sbi)->nid_cnt[FREE_NID] +
 				NM_I(sbi)->nid_cnt[PREALLOC_NID]) *
 				sizeof(struct free_nid);
-	si->cache_mem += NM_I(sbi)->nat_cnt * sizeof(struct nat_entry);
-	si->cache_mem += NM_I(sbi)->dirty_nat_cnt *
-					sizeof(struct nat_entry_set);
+	si->cache_mem += NM_I(sbi)->nat_cnt[TOTAL_NAT] *
+				sizeof(struct nat_entry);
+	si->cache_mem += NM_I(sbi)->nat_cnt[DIRTY_NAT] *
+				sizeof(struct nat_entry_set);
 	si->cache_mem += si->inmem_pages * sizeof(struct inmem_pages);
 	for (i = 0; i < MAX_INO_ENTRY; i++)
 		si->cache_mem += sbi->im[i].ino_num * sizeof(struct ino_entry);
...
@@ -5,6 +5,7 @@
  * Copyright (c) 2012 Samsung Electronics Co., Ltd.
  *             http://www.samsung.com/
  */
+#include <asm/unaligned.h>
 #include <linux/fs.h>
 #include <linux/f2fs_fs.h>
 #include <linux/sched/signal.h>

@@ -206,30 +207,55 @@ static struct f2fs_dir_entry *find_in_block(struct inode *dir,
 /*
  * Test whether a case-insensitive directory entry matches the filename
  * being searched for.
+ *
+ * Returns 1 for a match, 0 for no match, and -errno on an error.
  */
-static bool f2fs_match_ci_name(const struct inode *dir, const struct qstr *name,
-			       const u8 *de_name, u32 de_name_len)
+static int f2fs_match_ci_name(const struct inode *dir, const struct qstr *name,
+			      const u8 *de_name, u32 de_name_len)
 {
 	const struct super_block *sb = dir->i_sb;
 	const struct unicode_map *um = sb->s_encoding;
+	struct fscrypt_str decrypted_name = FSTR_INIT(NULL, de_name_len);
 	struct qstr entry = QSTR_INIT(de_name, de_name_len);
 	int res;

+	if (IS_ENCRYPTED(dir)) {
+		const struct fscrypt_str encrypted_name =
+			FSTR_INIT((u8 *)de_name, de_name_len);
+
+		if (WARN_ON_ONCE(!fscrypt_has_encryption_key(dir)))
+			return -EINVAL;
+
+		decrypted_name.name = kmalloc(de_name_len, GFP_KERNEL);
+		if (!decrypted_name.name)
+			return -ENOMEM;
+		res = fscrypt_fname_disk_to_usr(dir, 0, 0, &encrypted_name,
+						&decrypted_name);
+		if (res < 0)
+			goto out;
+		entry.name = decrypted_name.name;
+		entry.len = decrypted_name.len;
+	}
+
 	res = utf8_strncasecmp_folded(um, name, &entry);
-	if (res < 0) {
-		/*
-		 * In strict mode, ignore invalid names. In non-strict mode,
-		 * fall back to treating them as opaque byte sequences.
-		 */
-		if (sb_has_strict_encoding(sb) || name->len != entry.len)
-			return false;
-		return !memcmp(name->name, entry.name, name->len);
+	/*
+	 * In strict mode, ignore invalid names. In non-strict mode,
+	 * fall back to treating them as opaque byte sequences.
+	 */
+	if (res < 0 && !sb_has_strict_encoding(sb)) {
+		res = name->len == entry.len &&
+				memcmp(name->name, entry.name, name->len) == 0;
+	} else {
+		/* utf8_strncasecmp_folded returns 0 on match */
+		res = (res == 0);
 	}
-	return res == 0;
+out:
+	kfree(decrypted_name.name);
+	return res;
 }
 #endif /* CONFIG_UNICODE */

-static inline bool f2fs_match_name(const struct inode *dir,
+static inline int f2fs_match_name(const struct inode *dir,
 				   const struct f2fs_filename *fname,
 				   const u8 *de_name, u32 de_name_len)
 {

@@ -256,6 +282,7 @@ struct f2fs_dir_entry *f2fs_find_target_dentry(const struct f2fs_dentry_ptr *d,
 	struct f2fs_dir_entry *de;
 	unsigned long bit_pos = 0;
 	int max_len = 0;
+	int res = 0;

 	if (max_slots)
 		*max_slots = 0;

@@ -273,10 +300,15 @@ struct f2fs_dir_entry *f2fs_find_target_dentry(const struct f2fs_dentry_ptr *d,
 			continue;
 		}

-		if (de->hash_code == fname->hash &&
-		    f2fs_match_name(d->inode, fname, d->filename[bit_pos],
-				    le16_to_cpu(de->name_len)))
-			goto found;
+		if (de->hash_code == fname->hash) {
+			res = f2fs_match_name(d->inode, fname,
+					      d->filename[bit_pos],
+					      le16_to_cpu(de->name_len));
+			if (res < 0)
+				return ERR_PTR(res);
+			if (res)
+				goto found;
+		}

 		if (max_slots && max_len > *max_slots)
 			*max_slots = max_len;

@@ -326,7 +358,11 @@ static struct f2fs_dir_entry *find_in_level(struct inode *dir,
 		}

 		de = find_in_block(dir, dentry_page, fname, &max_slots);
-		if (de) {
+		if (IS_ERR(de)) {
+			*res_page = ERR_CAST(de);
+			de = NULL;
+			break;
+		} else if (de) {
 			*res_page = dentry_page;
 			break;
 		}

@@ -448,17 +484,39 @@ void f2fs_set_link(struct inode *dir, struct f2fs_dir_entry *de,
 	f2fs_put_page(page, 1);
 }

-static void init_dent_inode(const struct f2fs_filename *fname,
+static void init_dent_inode(struct inode *dir, struct inode *inode,
+			    const struct f2fs_filename *fname,
 			    struct page *ipage)
 {
 	struct f2fs_inode *ri;

+	if (!fname) /* tmpfile case? */
+		return;
+
 	f2fs_wait_on_page_writeback(ipage, NODE, true, true);

 	/* copy name info. to this inode page */
 	ri = F2FS_INODE(ipage);
 	ri->i_namelen = cpu_to_le32(fname->disk_name.len);
 	memcpy(ri->i_name, fname->disk_name.name, fname->disk_name.len);
+	if (IS_ENCRYPTED(dir)) {
+		file_set_enc_name(inode);
+		/*
+		 * Roll-forward recovery doesn't have encryption keys available,
+		 * so it can't compute the dirhash for encrypted+casefolded
+		 * filenames. Append it to i_name if possible. Else, disable
+		 * roll-forward recovery of the dentry (i.e., make fsync'ing the
+		 * file force a checkpoint) by setting LOST_PINO.
+		 */
+		if (IS_CASEFOLDED(dir)) {
+			if (fname->disk_name.len + sizeof(f2fs_hash_t) <=
+			    F2FS_NAME_LEN)
+				put_unaligned(fname->hash, (f2fs_hash_t *)
+					&ri->i_name[fname->disk_name.len]);
+			else
+				file_lost_pino(inode);
+		}
+	}
 	set_page_dirty(ipage);
 }

@@ -541,11 +599,7 @@ struct page *f2fs_init_inode_metadata(struct inode *inode, struct inode *dir,
 			return page;
 	}

-	if (fname) {
-		init_dent_inode(fname, page);
-		if (IS_ENCRYPTED(dir))
-			file_set_enc_name(inode);
-	}
+	init_dent_inode(dir, inode, fname, page);

 	/*
 	 * This file should be checkpointed during fsync.

@@ -1091,10 +1145,3 @@ const struct file_operations f2fs_dir_operations = {
 	.compat_ioctl	= f2fs_compat_ioctl,
 #endif
 };
-
-#ifdef CONFIG_UNICODE
-const struct dentry_operations f2fs_dentry_ops = {
-	.d_hash = generic_ci_d_hash,
-	.d_compare = generic_ci_d_compare,
-};
-#endif
...
@@ -1986,7 +1986,7 @@ int f2fs_resize_fs(struct f2fs_sb_info *sbi, __u64 block_count)

 	freeze_super(sbi->sb);
 	down_write(&sbi->gc_lock);
-	mutex_lock(&sbi->cp_mutex);
+	down_write(&sbi->cp_global_sem);

 	spin_lock(&sbi->stat_lock);
 	if (shrunk_blocks + valid_user_blocks(sbi) +

@@ -2031,7 +2031,7 @@ int f2fs_resize_fs(struct f2fs_sb_info *sbi, __u64 block_count)
 		spin_unlock(&sbi->stat_lock);
 	}
 out_err:
-	mutex_unlock(&sbi->cp_mutex);
+	up_write(&sbi->cp_global_sem);
 	up_write(&sbi->gc_lock);
 	thaw_super(sbi->sb);
 	clear_sbi_flag(sbi, SBI_IS_RESIZEFS);
...
@@ -111,7 +111,9 @@ void f2fs_hash_filename(const struct inode *dir, struct f2fs_filename *fname)
 		 * If the casefolded name is provided, hash it instead of the
 		 * on-disk name. If the casefolded name is *not* provided, that
 		 * should only be because the name wasn't valid Unicode, so fall
-		 * back to treating the name as an opaque byte sequence.
+		 * back to treating the name as an opaque byte sequence. Note
+		 * that to handle encrypted directories, the fallback must use
+		 * usr_fname (plaintext) rather than disk_name (ciphertext).
 		 */
 		WARN_ON_ONCE(!fname->usr_fname->name);
 		if (fname->cf_name.name) {

@@ -121,6 +123,13 @@ void f2fs_hash_filename(const struct inode *dir, struct f2fs_filename *fname)
 			name = fname->usr_fname->name;
 			len = fname->usr_fname->len;
 		}
+		if (IS_ENCRYPTED(dir)) {
+			struct qstr tmp = QSTR_INIT(name, len);
+
+			fname->hash =
+				cpu_to_le32(fscrypt_fname_siphash(dir, &tmp));
+			return;
+		}
 	}
 #endif
 	fname->hash = cpu_to_le32(TEA_hash_name(name, len));
...
@@ -188,7 +188,8 @@ int f2fs_convert_inline_inode(struct inode *inode)
 	struct page *ipage, *page;
 	int err = 0;

-	if (!f2fs_has_inline_data(inode))
+	if (!f2fs_has_inline_data(inode) ||
+			f2fs_hw_is_readonly(sbi) || f2fs_readonly(sbi->sb))
 		return 0;

 	page = f2fs_grab_cache_page(inode->i_mapping, 0, false);

@@ -266,7 +267,7 @@ int f2fs_recover_inline_data(struct inode *inode, struct page *npage)
 	 * [prev.] [next] of inline_data flag
 	 *    o       o  -> recover inline_data
 	 *    o       x  -> remove inline_data, and then recover data blocks
-	 *    x       o  -> remove inline_data, and then recover inline_data
+	 *    x       o  -> remove data blocks, and then recover inline_data
 	 *    x       x  -> recover data blocks
 	 */
 	if (IS_INODE(npage))

@@ -298,6 +299,7 @@ int f2fs_recover_inline_data(struct inode *inode, struct page *npage)
 		if (IS_ERR(ipage))
 			return PTR_ERR(ipage);
 		f2fs_truncate_inline_inode(inode, ipage, 0);
+		stat_dec_inline_inode(inode);
 		clear_inode_flag(inode, FI_INLINE_DATA);
 		f2fs_put_page(ipage, 1);
 	} else if (ri && (ri->i_inline & F2FS_INLINE_DATA)) {

@@ -306,6 +308,7 @@ int f2fs_recover_inline_data(struct inode *inode, struct page *npage)
 		ret = f2fs_truncate_blocks(inode, 0, false);
 		if (ret)
 			return ret;
+		stat_inc_inline_inode(inode);
 		goto process_inline;
 	}
 	return 0;

@@ -332,6 +335,10 @@ struct f2fs_dir_entry *f2fs_find_in_inline_dir(struct inode *dir,
 	make_dentry_ptr_inline(dir, &d, inline_dentry);
 	de = f2fs_find_target_dentry(&d, fname, NULL);
 	unlock_page(ipage);
+	if (IS_ERR(de)) {
+		*res_page = ERR_CAST(de);
+		de = NULL;
+	}
 	if (de)
 		*res_page = ipage;
 	else
...
@@ -456,6 +456,7 @@ static int do_read_inode(struct inode *inode)
 					le64_to_cpu(ri->i_compr_blocks));
 		fi->i_compress_algorithm = ri->i_compress_algorithm;
 		fi->i_log_cluster_size = ri->i_log_cluster_size;
+		fi->i_compress_flag = le16_to_cpu(ri->i_compress_flag);
 		fi->i_cluster_size = 1 << fi->i_log_cluster_size;
 		set_inode_flag(inode, FI_COMPRESSED_FILE);
 	}

@@ -634,6 +635,8 @@ void f2fs_update_inode(struct inode *inode, struct page *node_page)
 					&F2FS_I(inode)->i_compr_blocks));
 			ri->i_compress_algorithm =
 				F2FS_I(inode)->i_compress_algorithm;
+			ri->i_compress_flag =
+				cpu_to_le16(F2FS_I(inode)->i_compress_flag);
 			ri->i_log_cluster_size =
 				F2FS_I(inode)->i_log_cluster_size;
 		}
...
@@ -497,6 +497,7 @@ static struct dentry *f2fs_lookup(struct inode *dir, struct dentry *dentry,
 	}

 	err = f2fs_prepare_lookup(dir, dentry, &fname);
+	generic_set_encrypted_ci_d_ops(dentry);
 	if (err == -ENOENT)
 		goto out_splice;
 	if (err)
...
@@ -62,8 +62,8 @@ bool f2fs_available_free_memory(struct f2fs_sb_info *sbi, int type)
 				sizeof(struct free_nid)) >> PAGE_SHIFT;
 		res = mem_size < ((avail_ram * nm_i->ram_thresh / 100) >> 2);
 	} else if (type == NAT_ENTRIES) {
-		mem_size = (nm_i->nat_cnt * sizeof(struct nat_entry)) >>
-				PAGE_SHIFT;
+		mem_size = (nm_i->nat_cnt[TOTAL_NAT] *
+				sizeof(struct nat_entry)) >> PAGE_SHIFT;
 		res = mem_size < ((avail_ram * nm_i->ram_thresh / 100) >> 2);
 		if (excess_cached_nats(sbi))
 			res = false;

@@ -109,7 +109,7 @@ static void clear_node_page_dirty(struct page *page)

 static struct page *get_current_nat_page(struct f2fs_sb_info *sbi, nid_t nid)
 {
-	return f2fs_get_meta_page(sbi, current_nat_addr(sbi, nid));
+	return f2fs_get_meta_page_retry(sbi, current_nat_addr(sbi, nid));
 }

 static struct page *get_next_nat_page(struct f2fs_sb_info *sbi, nid_t nid)

@@ -177,7 +177,8 @@ static struct nat_entry *__init_nat_entry(struct f2fs_nm_info *nm_i,
 	list_add_tail(&ne->list, &nm_i->nat_entries);
 	spin_unlock(&nm_i->nat_list_lock);

-	nm_i->nat_cnt++;
+	nm_i->nat_cnt[TOTAL_NAT]++;
+	nm_i->nat_cnt[RECLAIMABLE_NAT]++;
 	return ne;
 }

@@ -207,7 +208,8 @@ static unsigned int __gang_lookup_nat_cache(struct f2fs_nm_info *nm_i,
 static void __del_from_nat_cache(struct f2fs_nm_info *nm_i, struct nat_entry *e)
 {
 	radix_tree_delete(&nm_i->nat_root, nat_get_nid(e));
-	nm_i->nat_cnt--;
+	nm_i->nat_cnt[TOTAL_NAT]--;
+	nm_i->nat_cnt[RECLAIMABLE_NAT]--;
 	__free_nat_entry(e);
 }

@@ -253,7 +255,8 @@ static void __set_nat_cache_dirty(struct f2fs_nm_info *nm_i,
 	if (get_nat_flag(ne, IS_DIRTY))
 		goto refresh_list;

-	nm_i->dirty_nat_cnt++;
+	nm_i->nat_cnt[DIRTY_NAT]++;
+	nm_i->nat_cnt[RECLAIMABLE_NAT]--;
 	set_nat_flag(ne, IS_DIRTY, true);
 refresh_list:
 	spin_lock(&nm_i->nat_list_lock);

@@ -273,7 +276,8 @@ static void __clear_nat_cache_dirty(struct f2fs_nm_info *nm_i,

 	set_nat_flag(ne, IS_DIRTY, false);
 	set->entry_cnt--;
-	nm_i->dirty_nat_cnt--;
+	nm_i->nat_cnt[DIRTY_NAT]--;
+	nm_i->nat_cnt[RECLAIMABLE_NAT]++;
 }

 static unsigned int __gang_lookup_nat_set(struct f2fs_nm_info *nm_i,

@@ -2590,9 +2594,15 @@ int f2fs_recover_inline_xattr(struct inode *inode, struct page *page)

 	ri = F2FS_INODE(page);
 	if (ri->i_inline & F2FS_INLINE_XATTR) {
-		set_inode_flag(inode, FI_INLINE_XATTR);
+		if (!f2fs_has_inline_xattr(inode)) {
+			set_inode_flag(inode, FI_INLINE_XATTR);
+			stat_inc_inline_xattr(inode);
+		}
 	} else {
-		clear_inode_flag(inode, FI_INLINE_XATTR);
+		if (f2fs_has_inline_xattr(inode)) {
+			stat_dec_inline_xattr(inode);
+			clear_inode_flag(inode, FI_INLINE_XATTR);
+		}
 		goto update_inode;
 	}

@@ -2944,14 +2954,17 @@ int f2fs_flush_nat_entries(struct f2fs_sb_info *sbi, struct cp_control *cpc)
 	LIST_HEAD(sets);
 	int err = 0;

-	/* during unmount, let's flush nat_bits before checking dirty_nat_cnt */
+	/*
+	 * during unmount, let's flush nat_bits before checking
+	 * nat_cnt[DIRTY_NAT].
+	 */
 	if (enabled_nat_bits(sbi, cpc)) {
 		down_write(&nm_i->nat_tree_lock);
 		remove_nats_in_journal(sbi);
 		up_write(&nm_i->nat_tree_lock);
 	}

-	if (!nm_i->dirty_nat_cnt)
+	if (!nm_i->nat_cnt[DIRTY_NAT])
 		return 0;

 	down_write(&nm_i->nat_tree_lock);

@@ -2962,7 +2975,8 @@ int f2fs_flush_nat_entries(struct f2fs_sb_info *sbi, struct cp_control *cpc)
 	 * into nat entry set.
 	 */
 	if (enabled_nat_bits(sbi, cpc) ||
-		!__has_cursum_space(journal, nm_i->dirty_nat_cnt, NAT_JOURNAL))
+		!__has_cursum_space(journal,
+			nm_i->nat_cnt[DIRTY_NAT], NAT_JOURNAL))
 		remove_nats_in_journal(sbi);

 	while ((found = __gang_lookup_nat_set(nm_i,

@@ -3086,7 +3100,6 @@ static int init_node_manager(struct f2fs_sb_info *sbi)
 							F2FS_RESERVED_NODE_NUM;
 	nm_i->nid_cnt[FREE_NID] = 0;
 	nm_i->nid_cnt[PREALLOC_NID] = 0;
-	nm_i->nat_cnt = 0;
 	nm_i->ram_thresh = DEF_RAM_THRESHOLD;
 	nm_i->ra_nid_pages = DEF_RA_NID_PAGES;
 	nm_i->dirty_nats_ratio = DEF_DIRTY_NAT_RATIO_THRESHOLD;

@@ -3220,7 +3233,7 @@ void f2fs_destroy_node_manager(struct f2fs_sb_info *sbi)
 			__del_from_nat_cache(nm_i, natvec[idx]);
 		}
 	}
-	f2fs_bug_on(sbi, nm_i->nat_cnt);
+	f2fs_bug_on(sbi, nm_i->nat_cnt[TOTAL_NAT]);

 	/* destroy nat set cache */
 	nid = 0;
...
@@ -126,13 +126,13 @@ static inline void raw_nat_from_node_info(struct f2fs_nat_entry *raw_ne,

 static inline bool excess_dirty_nats(struct f2fs_sb_info *sbi)
 {
-	return NM_I(sbi)->dirty_nat_cnt >= NM_I(sbi)->max_nid *
+	return NM_I(sbi)->nat_cnt[DIRTY_NAT] >= NM_I(sbi)->max_nid *
 					NM_I(sbi)->dirty_nats_ratio / 100;
 }

 static inline bool excess_cached_nats(struct f2fs_sb_info *sbi)
 {
-	return NM_I(sbi)->nat_cnt >= DEF_NAT_CACHE_THRESHOLD;
+	return NM_I(sbi)->nat_cnt[TOTAL_NAT] >= DEF_NAT_CACHE_THRESHOLD;
 }

 static inline bool excess_dirty_nodes(struct f2fs_sb_info *sbi)
...
@@ -5,6 +5,7 @@
  * Copyright (c) 2012 Samsung Electronics Co., Ltd.
  *             http://www.samsung.com/
  */
+#include <asm/unaligned.h>
 #include <linux/fs.h>
 #include <linux/f2fs_fs.h>
 #include "f2fs.h"

@@ -128,7 +129,16 @@ static int init_recovered_filename(const struct inode *dir,
 	}

 	/* Compute the hash of the filename */
-	if (IS_CASEFOLDED(dir)) {
+	if (IS_ENCRYPTED(dir) && IS_CASEFOLDED(dir)) {
+		/*
+		 * In this case the hash isn't computable without the key, so it
+		 * was saved on-disk.
+		 */
+		if (fname->disk_name.len + sizeof(f2fs_hash_t) > F2FS_NAME_LEN)
+			return -EINVAL;
+		fname->hash = get_unaligned((f2fs_hash_t *)
+				&raw_inode->i_name[fname->disk_name.len]);
+	} else if (IS_CASEFOLDED(dir)) {
 		err = f2fs_init_casefolded_name(dir, fname);
 		if (err)
 			return err;

@@ -789,7 +799,7 @@ int f2fs_recover_fsync_data(struct f2fs_sb_info *sbi, bool check_only)
 	INIT_LIST_HEAD(&dir_list);

 	/* prevent checkpoint */
-	mutex_lock(&sbi->cp_mutex);
+	down_write(&sbi->cp_global_sem);

 	/* step #1: find fsynced inode numbers */
 	err = find_fsync_dnodes(sbi, &inode_list, check_only);

@@ -840,7 +850,7 @@ int f2fs_recover_fsync_data(struct f2fs_sb_info *sbi, bool check_only)
 	if (!err)
 		clear_sbi_flag(sbi, SBI_POR_DOING);

-	mutex_unlock(&sbi->cp_mutex);
+	up_write(&sbi->cp_global_sem);

 	/* let's drop all the directory inodes for clean checkpoint */
 	destroy_fsync_dnodes(&dir_list, err);
...
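
The recovery hunk above is the read side of the trick introduced in dir.c: when
a directory is both encrypted and casefolded, the dirhash cannot be recomputed
without the key, so it was appended verbatim after the name bytes at creation
time and is pulled back out with get_unaligned() here. A toy round-trip with
memcpy standing in for put_unaligned()/get_unaligned() (sizes and names made up):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define NAME_LEN 255            /* stand-in for F2FS_NAME_LEN */
    typedef uint32_t hash_t;        /* stand-in for f2fs_hash_t */

    int main(void)
    {
        uint8_t i_name[NAME_LEN] = "ciphertext-name";   /* on-disk name bytes */
        size_t name_len = strlen((char *)i_name);
        hash_t hash = 0xdeadbeef, recovered;

        /* creation side: stash the hash right after the name, if it fits */
        if (name_len + sizeof(hash_t) <= NAME_LEN)
            memcpy(&i_name[name_len], &hash, sizeof(hash));

        /* recovery side: no key available, so read the saved hash back */
        memcpy(&recovered, &i_name[name_len], sizeof(recovered));
        printf("recovered dirhash: 0x%x\n", recovered);
        return 0;
    }
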
@@ -529,31 +529,38 @@ void f2fs_balance_fs_bg(struct f2fs_sb_info *sbi, bool from_bg)
 	else
 		f2fs_build_free_nids(sbi, false, false);

-	if (!is_idle(sbi, REQ_TIME) &&
-		(!excess_dirty_nats(sbi) && !excess_dirty_nodes(sbi)))
+	if (excess_dirty_nats(sbi) || excess_dirty_nodes(sbi) ||
+		excess_prefree_segs(sbi))
+		goto do_sync;
+
+	/* there is background inflight IO or foreground operation recently */
+	if (is_inflight_io(sbi, REQ_TIME) ||
+		(!f2fs_time_over(sbi, REQ_TIME) && rwsem_is_locked(&sbi->cp_rwsem)))
 		return;

+	/* exceed periodical checkpoint timeout threshold */
+	if (f2fs_time_over(sbi, CP_TIME))
+		goto do_sync;
+
 	/* checkpoint is the only way to shrink partial cached entries */
-	if (!f2fs_available_free_memory(sbi, NAT_ENTRIES) ||
-			!f2fs_available_free_memory(sbi, INO_ENTRIES) ||
-			excess_prefree_segs(sbi) ||
-			excess_dirty_nats(sbi) ||
-			excess_dirty_nodes(sbi) ||
-			f2fs_time_over(sbi, CP_TIME)) {
-		if (test_opt(sbi, DATA_FLUSH) && from_bg) {
-			struct blk_plug plug;
-
-			mutex_lock(&sbi->flush_lock);
-
-			blk_start_plug(&plug);
-			f2fs_sync_dirty_inodes(sbi, FILE_INODE);
-			blk_finish_plug(&plug);
-
-			mutex_unlock(&sbi->flush_lock);
-		}
-		f2fs_sync_fs(sbi->sb, true);
-		stat_inc_bg_cp_count(sbi->stat_info);
+	if (f2fs_available_free_memory(sbi, NAT_ENTRIES) ||
+		f2fs_available_free_memory(sbi, INO_ENTRIES))
+		return;
+
+do_sync:
+	if (test_opt(sbi, DATA_FLUSH) && from_bg) {
+		struct blk_plug plug;
+
+		mutex_lock(&sbi->flush_lock);
+
+		blk_start_plug(&plug);
+		f2fs_sync_dirty_inodes(sbi, FILE_INODE);
+		blk_finish_plug(&plug);
+
+		mutex_unlock(&sbi->flush_lock);
 	}
+	f2fs_sync_fs(sbi->sb, true);
+	stat_inc_bg_cp_count(sbi->stat_info);
 }

@@ -3254,7 +3261,7 @@ static int __get_segment_type_6(struct f2fs_io_info *fio)
 		else
 			return CURSEG_COLD_DATA;
 	}
-	if (file_is_cold(inode) || f2fs_compressed_file(inode))
+	if (file_is_cold(inode) || f2fs_need_compress_data(inode))
 		return CURSEG_COLD_DATA;
 	if (file_is_hot(inode) ||
 			is_inode_flag_set(inode, FI_HOT_DATA) ||

@@ -4544,7 +4551,7 @@ static void init_dirty_segmap(struct f2fs_sb_info *sbi)
 		return;

 	mutex_lock(&dirty_i->seglist_lock);
-	for (segno = 0; segno < MAIN_SECS(sbi); segno += blks_per_sec) {
+	for (segno = 0; segno < MAIN_SEGS(sbi); segno += sbi->segs_per_sec) {
 		valid_blocks = get_valid_blocks(sbi, segno, true);
 		secno = GET_SEC_FROM_SEG(sbi, segno);
...
@@ -18,9 +18,7 @@ static unsigned int shrinker_run_no;

 static unsigned long __count_nat_entries(struct f2fs_sb_info *sbi)
 {
-	long count = NM_I(sbi)->nat_cnt - NM_I(sbi)->dirty_nat_cnt;
-
-	return count > 0 ? count : 0;
+	return NM_I(sbi)->nat_cnt[RECLAIMABLE_NAT];
 }

 static unsigned long __count_free_nids(struct f2fs_sb_info *sbi)
...
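
Before this change the shrinker derived its count as nat_cnt - dirty_nat_cnt,
two loads that could race with concurrent updates and momentarily go negative.
Keeping one counter per state turns the read into a single load. A sketch of
the bookkeeping invariant (names mirror the patch; locking omitted):

    #include <assert.h>

    enum { TOTAL_NAT, DIRTY_NAT, RECLAIMABLE_NAT, MAX_NAT_STATE };

    static unsigned int nat_cnt[MAX_NAT_STATE];

    static void nat_add(void)
    {
        nat_cnt[TOTAL_NAT]++;
        nat_cnt[RECLAIMABLE_NAT]++;
    }

    static void nat_set_dirty(void)
    {
        nat_cnt[DIRTY_NAT]++;
        nat_cnt[RECLAIMABLE_NAT]--;
    }

    static void nat_clear_dirty(void)
    {
        nat_cnt[DIRTY_NAT]--;
        nat_cnt[RECLAIMABLE_NAT]++;
    }

    int main(void)
    {
        nat_add();
        nat_add();
        nat_set_dirty();
        /* the shrinker now reads one counter instead of total - dirty */
        assert(nat_cnt[RECLAIMABLE_NAT] ==
               nat_cnt[TOTAL_NAT] - nat_cnt[DIRTY_NAT]);
        nat_clear_dirty();
        return 0;
    }
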
@@ -146,6 +146,8 @@ enum {
 	Opt_compress_algorithm,
 	Opt_compress_log_size,
 	Opt_compress_extension,
+	Opt_compress_chksum,
+	Opt_compress_mode,
 	Opt_atgc,
 	Opt_err,
 };

@@ -214,6 +216,8 @@ static match_table_t f2fs_tokens = {
 	{Opt_compress_algorithm, "compress_algorithm=%s"},
 	{Opt_compress_log_size, "compress_log_size=%u"},
 	{Opt_compress_extension, "compress_extension=%s"},
+	{Opt_compress_chksum, "compress_chksum"},
+	{Opt_compress_mode, "compress_mode=%s"},
 	{Opt_atgc, "atgc"},
 	{Opt_err, NULL},
 };

@@ -934,10 +938,29 @@ static int parse_options(struct super_block *sb, char *options, bool is_remount)
 			F2FS_OPTION(sbi).compress_ext_cnt++;
 			kfree(name);
 			break;
+		case Opt_compress_chksum:
+			F2FS_OPTION(sbi).compress_chksum = true;
+			break;
+		case Opt_compress_mode:
+			name = match_strdup(&args[0]);
+			if (!name)
+				return -ENOMEM;
+			if (!strcmp(name, "fs")) {
+				F2FS_OPTION(sbi).compress_mode = COMPR_MODE_FS;
+			} else if (!strcmp(name, "user")) {
+				F2FS_OPTION(sbi).compress_mode = COMPR_MODE_USER;
+			} else {
+				kfree(name);
+				return -EINVAL;
+			}
+			kfree(name);
+			break;
 #else
 		case Opt_compress_algorithm:
 		case Opt_compress_log_size:
 		case Opt_compress_extension:
+		case Opt_compress_chksum:
+		case Opt_compress_mode:
 			f2fs_info(sbi, "compression options not supported");
 			break;
 #endif

@@ -1523,6 +1546,14 @@ static inline void f2fs_show_compress_options(struct seq_file *seq,
 		seq_printf(seq, ",compress_extension=%s",
 			F2FS_OPTION(sbi).extensions[i]);
 	}
+
+	if (F2FS_OPTION(sbi).compress_chksum)
+		seq_puts(seq, ",compress_chksum");
+
+	if (F2FS_OPTION(sbi).compress_mode == COMPR_MODE_FS)
+		seq_printf(seq, ",compress_mode=%s", "fs");
+	else if (F2FS_OPTION(sbi).compress_mode == COMPR_MODE_USER)
+		seq_printf(seq, ",compress_mode=%s", "user");
 }

 static int f2fs_show_options(struct seq_file *seq, struct dentry *root)

@@ -1672,6 +1703,7 @@ static void default_options(struct f2fs_sb_info *sbi)
 	F2FS_OPTION(sbi).compress_algorithm = COMPRESS_LZ4;
 	F2FS_OPTION(sbi).compress_log_size = MIN_COMPRESS_LOG_SIZE;
 	F2FS_OPTION(sbi).compress_ext_cnt = 0;
+	F2FS_OPTION(sbi).compress_mode = COMPR_MODE_FS;
 	F2FS_OPTION(sbi).bggc_mode = BGGC_MODE_ON;

 	sbi->sb->s_flags &= ~SB_INLINECRYPT;

@@ -1904,7 +1936,6 @@ static int f2fs_remount(struct super_block *sb, int *flags, char *data)
 	if (*flags & SB_RDONLY ||
 		F2FS_OPTION(sbi).whint_mode != org_mount_opt.whint_mode) {
-		writeback_inodes_sb(sb, WB_REASON_SYNC);
 		sync_inodes_sb(sb);

 		set_sbi_flag(sbi, SBI_IS_DIRTY);

@@ -2744,7 +2775,6 @@ static int sanity_check_raw_super(struct f2fs_sb_info *sbi,
 	block_t total_sections, blocks_per_seg;
 	struct f2fs_super_block *raw_super = (struct f2fs_super_block *)
 					(bh->b_data + F2FS_SUPER_OFFSET);
-	unsigned int blocksize;
 	size_t crc_offset = 0;
 	__u32 crc = 0;

@@ -2770,18 +2800,11 @@ static int sanity_check_raw_super(struct f2fs_sb_info *sbi,
 		}
 	}

-	/* Currently, support only 4KB page cache size */
-	if (F2FS_BLKSIZE != PAGE_SIZE) {
-		f2fs_info(sbi, "Invalid page_cache_size (%lu), supports only 4KB",
-			  PAGE_SIZE);
-		return -EFSCORRUPTED;
-	}
-
 	/* Currently, support only 4KB block size */
-	blocksize = 1 << le32_to_cpu(raw_super->log_blocksize);
-	if (blocksize != F2FS_BLKSIZE) {
-		f2fs_info(sbi, "Invalid blocksize (%u), supports only 4KB",
-			  blocksize);
+	if (le32_to_cpu(raw_super->log_blocksize) != F2FS_BLKSIZE_BITS) {
+		f2fs_info(sbi, "Invalid log_blocksize (%u), supports only %u",
+			  le32_to_cpu(raw_super->log_blocksize),
+			  F2FS_BLKSIZE_BITS);
 		return -EFSCORRUPTED;
 	}

@@ -3071,9 +3094,9 @@ static void init_sb_info(struct f2fs_sb_info *sbi)
 	sbi->total_node_count =
 		(le32_to_cpu(raw_super->segment_count_nat) / 2)
 			* sbi->blocks_per_seg * NAT_ENTRY_PER_BLOCK;
-	sbi->root_ino_num = le32_to_cpu(raw_super->root_ino);
-	sbi->node_ino_num = le32_to_cpu(raw_super->node_ino);
-	sbi->meta_ino_num = le32_to_cpu(raw_super->meta_ino);
+	F2FS_ROOT_INO(sbi) = le32_to_cpu(raw_super->root_ino);
+	F2FS_NODE_INO(sbi) = le32_to_cpu(raw_super->node_ino);
+	F2FS_META_INO(sbi) = le32_to_cpu(raw_super->meta_ino);
 	sbi->cur_victim_sec = NULL_SECNO;
 	sbi->next_victim_seg[BG_GC] = NULL_SEGNO;
 	sbi->next_victim_seg[FG_GC] = NULL_SEGNO;

@@ -3399,12 +3422,6 @@ static int f2fs_setup_casefold(struct f2fs_sb_info *sbi)
 		struct unicode_map *encoding;
 		__u16 encoding_flags;

-		if (f2fs_sb_has_encrypt(sbi)) {
-			f2fs_err(sbi,
-				"Can't mount with encoding and encryption");
-			return -EINVAL;
-		}
-
 		if (f2fs_sb_read_encoding(sbi->raw_super, &encoding_info,
 					  &encoding_flags)) {
 			f2fs_err(sbi,

@@ -3427,7 +3444,6 @@ static int f2fs_setup_casefold(struct f2fs_sb_info *sbi)

 		sbi->sb->s_encoding = encoding;
 		sbi->sb->s_encoding_flags = encoding_flags;
-		sbi->sb->s_d_op = &f2fs_dentry_ops;
 	}
 #else
 	if (f2fs_sb_has_casefold(sbi)) {

@@ -3559,7 +3575,7 @@ static int f2fs_fill_super(struct super_block *sb, void *data, int silent)
 	sbi->valid_super_block = valid_super_block;
 	init_rwsem(&sbi->gc_lock);
 	mutex_init(&sbi->writepages);
-	mutex_init(&sbi->cp_mutex);
+	init_rwsem(&sbi->cp_global_sem);
 	init_rwsem(&sbi->node_write);
 	init_rwsem(&sbi->node_change);

@@ -3700,8 +3716,7 @@ static int f2fs_fill_super(struct super_block *sb, void *data, int silent)
 	}

 	/* For write statistics */
-	sbi->sectors_written_start =
-		(u64)part_stat_read(sb->s_bdev, sectors[STAT_WRITE]);
+	sbi->sectors_written_start = f2fs_get_sectors_written(sbi);

 	/* Read accumulated write IO statistics if exists */
 	seg_i = CURSEG_I(sbi, CURSEG_HOT_NODE);

@@ -3916,6 +3931,7 @@ static int f2fs_fill_super(struct super_block *sb, void *data, int silent)
 #ifdef CONFIG_UNICODE
 	utf8_unload(sb->s_encoding);
+	sb->s_encoding = NULL;
 #endif
 free_options:
 #ifdef CONFIG_QUOTA
...
@@ -92,7 +92,8 @@ static ssize_t lifetime_write_kbytes_show(struct f2fs_attr *a,
 {
 	return sprintf(buf, "%llu\n",
 			(unsigned long long)(sbi->kbytes_written +
-			BD_PART_WRITTEN(sbi)));
+			((f2fs_get_sectors_written(sbi) -
+				sbi->sectors_written_start) >> 1)));
 }
 
 static ssize_t features_show(struct f2fs_attr *a,
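Both the mount-time snapshot above and this sysfs reader now go through f2fs_get_sectors_written(), which per the "fix kbytes written stat for multi-device case" patch sums the write counters of every backing device; the >> 1 converts 512-byte sectors to kilobytes. A minimal sketch of the helper's shape, assuming the existing f2fs_is_multi_device()/FDEV() helpers (illustrative, not the verbatim kernel code):

	static inline u64 f2fs_get_sectors_written(struct f2fs_sb_info *sbi)
	{
		u64 total = 0;
		int i;

		if (!f2fs_is_multi_device(sbi))
			return (u64)part_stat_read(sbi->sb->s_bdev,
						   sectors[STAT_WRITE]);

		/* Sum sectors written across all backing devices. */
		for (i = 0; i < sbi->s_ndevs; i++)
			total += (u64)part_stat_read(FDEV(i).bdev,
						     sectors[STAT_WRITE]);
		return total;
	}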
...@@ -557,6 +558,7 @@ F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, ...@@ -557,6 +558,7 @@ F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info,
F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, iostat_enable, iostat_enable); F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, iostat_enable, iostat_enable);
F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, iostat_period_ms, iostat_period_ms); F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, iostat_period_ms, iostat_period_ms);
F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, readdir_ra, readdir_ra); F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, readdir_ra, readdir_ra);
F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, max_io_bytes, max_io_bytes);
F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, gc_pin_file_thresh, gc_pin_file_threshold); F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, gc_pin_file_thresh, gc_pin_file_threshold);
F2FS_RW_ATTR(F2FS_SBI, f2fs_super_block, extension_list, extension_list); F2FS_RW_ATTR(F2FS_SBI, f2fs_super_block, extension_list, extension_list);
#ifdef CONFIG_F2FS_FAULT_INJECTION #ifdef CONFIG_F2FS_FAULT_INJECTION
@@ -641,6 +643,7 @@ static struct attribute *f2fs_attrs[] = {
 	ATTR_LIST(iostat_enable),
 	ATTR_LIST(iostat_period_ms),
 	ATTR_LIST(readdir_ra),
+	ATTR_LIST(max_io_bytes),
 	ATTR_LIST(gc_pin_file_thresh),
 	ATTR_LIST(extension_list),
 #ifdef CONFIG_F2FS_FAULT_INJECTION
...
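The new max_io_bytes knob takes effect on the bio merge path: once a pending bio reaches the configured size, f2fs stops merging pages into it and submits it. A sketch of the size check, assuming it sits in the existing io_is_mergeable() helper (exact placement is illustrative):

	/* A zero max_io_bytes keeps the underlying block layer's limit. */
	if (unlikely(sbi->max_io_bytes &&
		     bio->bi_iter.bi_size >= sbi->max_io_bytes))
		return false;	/* caller submits this bio and opens a new one */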
@@ -1451,4 +1451,74 @@ int generic_ci_d_hash(const struct dentry *dentry, struct qstr *str)
 	return 0;
 }
 EXPORT_SYMBOL(generic_ci_d_hash);
+
+static const struct dentry_operations generic_ci_dentry_ops = {
+	.d_hash = generic_ci_d_hash,
+	.d_compare = generic_ci_d_compare,
+};
+#endif
+
+#ifdef CONFIG_FS_ENCRYPTION
+static const struct dentry_operations generic_encrypted_dentry_ops = {
+	.d_revalidate = fscrypt_d_revalidate,
+};
+#endif
+
+#if defined(CONFIG_FS_ENCRYPTION) && defined(CONFIG_UNICODE)
+static const struct dentry_operations generic_encrypted_ci_dentry_ops = {
+	.d_hash = generic_ci_d_hash,
+	.d_compare = generic_ci_d_compare,
+	.d_revalidate = fscrypt_d_revalidate,
+};
+#endif
+
+/**
+ * generic_set_encrypted_ci_d_ops - helper for setting d_ops for given dentry
+ * @dentry:	dentry to set ops on
+ *
+ * Casefolded directories need d_hash and d_compare set, so that the dentries
+ * contained in them are handled case-insensitively. Note that these operations
+ * are needed on the parent directory rather than on the dentries in it, and
+ * while the casefolding flag can be toggled on and off on an empty directory,
+ * dentry_operations can't be changed later. As a result, if the filesystem has
+ * casefolding support enabled at all, we have to give all dentries the
+ * casefolding operations even if their inode doesn't have the casefolding flag
+ * currently (and thus the casefolding ops would be no-ops for now).
+ *
+ * Encryption works differently in that the only dentry operation it needs is
+ * d_revalidate, which it only needs on dentries that have the no-key name flag.
+ * The no-key flag can't be set "later", so we don't have to worry about that.
+ *
+ * Finally, to maximize compatibility with overlayfs (which isn't compatible
+ * with certain dentry operations) and to avoid taking an unnecessary
+ * performance hit, we use custom dentry_operations for each possible
+ * combination rather than always installing all operations.
+ */
+void generic_set_encrypted_ci_d_ops(struct dentry *dentry)
+{
+#ifdef CONFIG_FS_ENCRYPTION
+	bool needs_encrypt_ops = dentry->d_flags & DCACHE_NOKEY_NAME;
+#endif
+#ifdef CONFIG_UNICODE
+	bool needs_ci_ops = dentry->d_sb->s_encoding;
+#endif
+#if defined(CONFIG_FS_ENCRYPTION) && defined(CONFIG_UNICODE)
+	if (needs_encrypt_ops && needs_ci_ops) {
+		d_set_d_op(dentry, &generic_encrypted_ci_dentry_ops);
+		return;
+	}
 #endif
+#ifdef CONFIG_FS_ENCRYPTION
+	if (needs_encrypt_ops) {
+		d_set_d_op(dentry, &generic_encrypted_dentry_ops);
+		return;
+	}
+#endif
+#ifdef CONFIG_UNICODE
+	if (needs_ci_ops) {
+		d_set_d_op(dentry, &generic_ci_dentry_ops);
+		return;
+	}
+#endif
+}
+EXPORT_SYMBOL(generic_set_encrypted_ci_d_ops);
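A filesystem opts in by calling the helper from its ->lookup() before the dentry is spliced in, since dentry_operations can't change afterwards; the ubifs hunk below shows the real call site. A hypothetical caller, for shape only (myfs_lookup and the elided lookup logic are illustrative):

	static struct dentry *myfs_lookup(struct inode *dir, struct dentry *dentry,
					  unsigned int flags)
	{
		/* Must run before d_splice_alias()/d_add(). */
		generic_set_encrypted_ci_d_ops(dentry);

		/* ... filesystem-specific lookup, then splice the inode ... */
		return NULL;
	}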
@@ -203,6 +203,7 @@ static struct dentry *ubifs_lookup(struct inode *dir, struct dentry *dentry,
 	dbg_gen("'%pd' in dir ino %lu", dentry, dir->i_ino);
 
 	err = fscrypt_prepare_lookup(dir, dentry, &nm);
+	generic_set_encrypted_ci_d_ops(dentry);
 	if (err == -ENOENT)
 		return d_splice_alias(NULL, dentry);
 	if (err)
...
@@ -273,7 +273,7 @@ struct f2fs_inode {
 			__le64 i_compr_blocks;	/* # of compressed blocks */
 			__u8 i_compress_algorithm;	/* compress algorithm */
 			__u8 i_log_cluster_size;	/* log of cluster size */
-			__le16 i_padding;		/* padding */
+			__le16 i_compress_flag;		/* compress flag */
 			__le32 i_extra_end[0];	/* for attribute size calculation */
 		} __packed;
 	__le32 i_addr[DEF_ADDRS_PER_INODE];	/* Pointers to data blocks */
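This change repurposes the old padding as a per-inode compression flag word; in this series its low bit records whether the file's compressed clusters carry a checksum (the "checksum for compressed cluster" item in the merge summary). A sketch of the related definitions from the f2fs compression code, reproduced from memory and best treated as illustrative:

	enum compress_flag {
		COMPRESS_CHKSUM,	/* bit 0: per-cluster checksum enabled */
		COMPRESS_MAX_FLAG,
	};

	struct compress_data {
		__le32 clen;		/* compressed data size */
		__le32 chksum;		/* checksum of compressed data */
		__le32 reserved[4];	/* reserved */
		u8 cdata[];		/* compressed data */
	};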
...
@@ -3198,6 +3198,7 @@ extern int generic_ci_d_hash(const struct dentry *dentry, struct qstr *str);
 extern int generic_ci_d_compare(const struct dentry *dentry, unsigned int len,
 				const char *str, const struct qstr *name);
 #endif
+extern void generic_set_encrypted_ci_d_ops(struct dentry *dentry);
 
 #ifdef CONFIG_MIGRATION
 extern int buffer_migrate_page(struct address_space *,
...
@@ -757,8 +757,11 @@ static inline int fscrypt_prepare_rename(struct inode *old_dir,
  * key is available, then the lookup is assumed to be by plaintext name;
  * otherwise, it is assumed to be by no-key name.
  *
- * This also installs a custom ->d_revalidate() method which will invalidate the
- * dentry if it was created without the key and the key is later added.
+ * This will set DCACHE_NOKEY_NAME on the dentry if the lookup is by no-key
+ * name. In this case the filesystem must assign the dentry a dentry_operations
+ * which contains fscrypt_d_revalidate (or contains a d_revalidate method that
+ * calls fscrypt_d_revalidate), so that the dentry will be invalidated if the
+ * directory's encryption key is later added.
  *
  * Return: 0 on success; -ENOENT if the directory's key is unavailable but the
  * filename isn't a valid no-key name, so a negative dentry should be created;
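For filesystems that don't use the new libfs helper, honoring this contract can be as simple as including fscrypt_d_revalidate in their own operations table; a hypothetical example (myfs_dentry_ops is illustrative):

	static const struct dentry_operations myfs_dentry_ops = {
		.d_revalidate	= fscrypt_d_revalidate,
		/* the filesystem's other dentry ops, if any */
	};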
...
@@ -6,6 +6,7 @@
 #define _TRACE_F2FS_H
 
 #include <linux/tracepoint.h>
+#include <uapi/linux/f2fs.h>
 
 #define show_dev(dev)		MAJOR(dev), MINOR(dev)
 #define show_dev_ino(entry)	show_dev(entry->dev), (unsigned long)entry->ino
...
/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
#ifndef _UAPI_LINUX_F2FS_H
#define _UAPI_LINUX_F2FS_H
#include <linux/types.h>
#include <linux/ioctl.h>
/*
* f2fs-specific ioctl commands
*/
#define F2FS_IOCTL_MAGIC 0xf5
#define F2FS_IOC_START_ATOMIC_WRITE _IO(F2FS_IOCTL_MAGIC, 1)
#define F2FS_IOC_COMMIT_ATOMIC_WRITE _IO(F2FS_IOCTL_MAGIC, 2)
#define F2FS_IOC_START_VOLATILE_WRITE _IO(F2FS_IOCTL_MAGIC, 3)
#define F2FS_IOC_RELEASE_VOLATILE_WRITE _IO(F2FS_IOCTL_MAGIC, 4)
#define F2FS_IOC_ABORT_VOLATILE_WRITE _IO(F2FS_IOCTL_MAGIC, 5)
#define F2FS_IOC_GARBAGE_COLLECT _IOW(F2FS_IOCTL_MAGIC, 6, __u32)
#define F2FS_IOC_WRITE_CHECKPOINT _IO(F2FS_IOCTL_MAGIC, 7)
#define F2FS_IOC_DEFRAGMENT _IOWR(F2FS_IOCTL_MAGIC, 8, \
struct f2fs_defragment)
#define F2FS_IOC_MOVE_RANGE _IOWR(F2FS_IOCTL_MAGIC, 9, \
struct f2fs_move_range)
#define F2FS_IOC_FLUSH_DEVICE _IOW(F2FS_IOCTL_MAGIC, 10, \
struct f2fs_flush_device)
#define F2FS_IOC_GARBAGE_COLLECT_RANGE _IOW(F2FS_IOCTL_MAGIC, 11, \
struct f2fs_gc_range)
#define F2FS_IOC_GET_FEATURES _IOR(F2FS_IOCTL_MAGIC, 12, __u32)
#define F2FS_IOC_SET_PIN_FILE _IOW(F2FS_IOCTL_MAGIC, 13, __u32)
#define F2FS_IOC_GET_PIN_FILE _IOR(F2FS_IOCTL_MAGIC, 14, __u32)
#define F2FS_IOC_PRECACHE_EXTENTS _IO(F2FS_IOCTL_MAGIC, 15)
#define F2FS_IOC_RESIZE_FS _IOW(F2FS_IOCTL_MAGIC, 16, __u64)
#define F2FS_IOC_GET_COMPRESS_BLOCKS _IOR(F2FS_IOCTL_MAGIC, 17, __u64)
#define F2FS_IOC_RELEASE_COMPRESS_BLOCKS \
_IOR(F2FS_IOCTL_MAGIC, 18, __u64)
#define F2FS_IOC_RESERVE_COMPRESS_BLOCKS \
_IOR(F2FS_IOCTL_MAGIC, 19, __u64)
#define F2FS_IOC_SEC_TRIM_FILE _IOW(F2FS_IOCTL_MAGIC, 20, \
struct f2fs_sectrim_range)
#define F2FS_IOC_GET_COMPRESS_OPTION _IOR(F2FS_IOCTL_MAGIC, 21, \
struct f2fs_comp_option)
#define F2FS_IOC_SET_COMPRESS_OPTION _IOW(F2FS_IOCTL_MAGIC, 22, \
struct f2fs_comp_option)
#define F2FS_IOC_DECOMPRESS_FILE _IO(F2FS_IOCTL_MAGIC, 23)
#define F2FS_IOC_COMPRESS_FILE _IO(F2FS_IOCTL_MAGIC, 24)
/*
* should be same as XFS_IOC_GOINGDOWN.
* Flags for going down operation used by FS_IOC_GOINGDOWN
*/
#define F2FS_IOC_SHUTDOWN _IOR('X', 125, __u32) /* Shutdown */
#define F2FS_GOING_DOWN_FULLSYNC 0x0 /* going down with full sync */
#define F2FS_GOING_DOWN_METASYNC 0x1 /* going down with metadata */
#define F2FS_GOING_DOWN_NOSYNC 0x2 /* going down */
#define F2FS_GOING_DOWN_METAFLUSH 0x3 /* going down with meta flush */
#define F2FS_GOING_DOWN_NEED_FSCK 0x4 /* going down to trigger fsck */
/*
* Flags used by F2FS_IOC_SEC_TRIM_FILE
*/
#define F2FS_TRIM_FILE_DISCARD 0x1 /* send discard command */
#define F2FS_TRIM_FILE_ZEROOUT 0x2 /* zero out */
#define F2FS_TRIM_FILE_MASK 0x3
struct f2fs_gc_range {
__u32 sync;
__u64 start;
__u64 len;
};
struct f2fs_defragment {
__u64 start;
__u64 len;
};
struct f2fs_move_range {
__u32 dst_fd; /* destination fd */
__u64 pos_in; /* start position in src_fd */
__u64 pos_out; /* start position in dst_fd */
__u64 len; /* size to move */
};
struct f2fs_flush_device {
__u32 dev_num; /* device number to flush */
__u32 segments; /* # of segments to flush */
};
struct f2fs_sectrim_range {
__u64 start;
__u64 len;
__u64 flags;
};
struct f2fs_comp_option {
__u8 algorithm;
__u8 log_cluster_size;
};
#endif /* _UAPI_LINUX_F2FS_H */
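The new header makes the f2fs ioctl numbers available to userspace (and to the tracepoint header above). A hedged sketch of driving the per-file compression interface from userspace follows; note the kernel may reject F2FS_IOC_SET_COMPRESS_OPTION on files that already have blocks, and the compression requested by F2FS_IOC_COMPRESS_FILE matters mainly under compress_mode=user:

	#include <fcntl.h>
	#include <stdio.h>
	#include <sys/ioctl.h>
	#include <unistd.h>
	#include <linux/f2fs.h>

	int main(int argc, char **argv)
	{
		struct f2fs_comp_option opt;
		int fd;

		if (argc != 2)
			return 1;
		fd = open(argv[1], O_RDWR);
		if (fd < 0)
			return 1;

		/* Read back this file's per-file compression options. */
		if (ioctl(fd, F2FS_IOC_GET_COMPRESS_OPTION, &opt) == 0)
			printf("algorithm=%u log_cluster_size=%u\n",
			       opt.algorithm, opt.log_cluster_size);

		/* Ask the kernel to compress the file's clusters in place. */
		if (ioctl(fd, F2FS_IOC_COMPRESS_FILE) < 0)
			perror("F2FS_IOC_COMPRESS_FILE");

		close(fd);
		return 0;
	}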