Commit 8171acb8 authored by Linus Torvalds

Merge tag 'erofs-for-5.19-rc1-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/xiang/erofs

Pull more erofs updates from Gao Xiang:
 "This is a follow-up to the main updates, including some fixes for
  fscache mode related to compressed inodes and a cachefiles tracepoint.
  There is also a patch to fix an unexpected decompression strategy
  change caused by a past cleanup. All the fixes are quite small.

  Apart from these, documentation is also updated for a better
  description of recent new features.

  In addition, this has some trivial cleanups without actual code logic
  changes, so I can work from a more recent codebase on folios and on
  avoiding the PG_error page flag in the next cycle.

  Summary:

   - Leave compressed inodes unsupported in fscache mode for now

   - Avoid crash when using tracepoint cachefiles_prep_read

   - Fix `backmost' behavior due to a recent cleanup

   - Update documentation for better description of recent new features

   - Several decompression cleanups w/o logical change"

* tag 'erofs-for-5.19-rc1-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/xiang/erofs:
  erofs: fix 'backmost' member of z_erofs_decompress_frontend
  erofs: simplify z_erofs_pcluster_readmore()
  erofs: get rid of label `restart_now'
  erofs: get rid of `struct z_erofs_collection'
  erofs: update documentation
  erofs: fix crash when enable tracepoint cachefiles_prep_read
  erofs: leave compressed inodes unsupported in fscache mode for now
parents e5b02087 4398d3c3
 .. SPDX-License-Identifier: GPL-2.0

 ======================================
-Enhanced Read-Only File System - EROFS
+EROFS - Enhanced Read-Only File System
 ======================================

 Overview
 ========

-EROFS file-system stands for Enhanced Read-Only File System. Different
-from other read-only file systems, it aims to be designed for flexibility,
-scalability, but be kept simple and high performance.
+EROFS filesystem stands for Enhanced Read-Only File System. It aims to form a
+generic read-only filesystem solution for various read-only use cases instead
+of just focusing on storage space saving without considering any side effects
+of runtime performance.

-It is designed as a better filesystem solution for the following scenarios:
+It is designed to meet the needs of flexibility, feature extendability and user
+payload friendly, etc. Apart from those, it is still kept as a simple
+random-access friendly high-performance filesystem to get rid of unneeded I/O
+amplification and memory-resident overhead compared to similar approaches.
+
+It is implemented to be a better choice for the following scenarios:

 - read-only storage media or

 - part of a fully trusted read-only solution, which means it needs to be
   immutable and bit-for-bit identical to the official golden image for
-  their releases due to security and other considerations and
+  their releases due to security or other considerations and

 - hope to minimize extra storage space with guaranteed end-to-end performance
   by using compact layout, transparent file compression and direct access,
   especially for those embedded devices with limited memory and high-density
-  hosts with numerous containers;
+  hosts with numerous containers.

 Here is the main features of EROFS:

 - Little endian on-disk design;

-- Currently 4KB block size (nobh) and therefore maximum 16TB address space;
+- 4KiB block size and 32-bit block addresses, therefore 16TiB address space
+  at most for now;

-- Metadata & data could be mixed by design;
-
-- 2 inode versions for different requirements:
+- Two inode layouts for different requirements:

-  =====================  ============  =====================================
+  =====================  ============  ======================================
                          compact (v1)  extended (v2)
-  =====================  ============  =====================================
+  =====================  ============  ======================================
   Inode metadata size    32 bytes      64 bytes
-  Max file size          4 GB          16 EB (also limited by max. vol size)
+  Max file size          4 GiB         16 EiB (also limited by max. vol size)
   Max uids/gids          65536         4294967296
   Per-inode timestamp    no            yes (64 + 32-bit timestamp)
   Max hardlinks          65536         4294967296
-  Metadata reserved      4 bytes       14 bytes
-  =====================  ============  =====================================
+  Metadata reserved      8 bytes       18 bytes
+  =====================  ============  ======================================
+
+- Metadata and data could be mixed as an option;

 - Support extended attributes (xattrs) as an option;

-- Support xattr inline and tail-end data inline for all files;
+- Support tailpacking data and xattr inline compared to byte-addressed
+  unaligned metadata or smaller block size alternatives;

 - Support POSIX.1e ACLs by using xattrs;

 - Support transparent data compression as an option:
-  LZ4 algorithm with the fixed-sized output compression for high performance;
+  LZ4 and MicroLZMA algorithms can be used on a per-file basis; In addition,
+  inplace decompression is also supported to avoid bounce compressed buffers
+  and page cache thrashing.
+
+- Support direct I/O on uncompressed files to avoid double caching for loop
+  devices;

-- Multiple device support for multi-layer container images.
+- Support FSDAX on uncompressed images for secure containers and ramdisks in
+  order to get rid of unnecessary page cache.
+
+- Support multiple devices for multi blob container images;
+
+- Support file-based on-demand loading with the Fscache infrastructure.

 The following git tree provides the file system user-space tools under
-development (ex, formatting tool mkfs.erofs):
+development, such as a formatting tool (mkfs.erofs), an on-disk consistency &
+compatibility checking tool (fsck.erofs), and a debugging tool (dump.erofs):

 - git://git.kernel.org/pub/scm/linux/kernel/git/xiang/erofs-utils.git
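As a back-of-the-envelope check on the limits quoted in the documentation diff above, the figures follow directly from the on-disk field widths. This is illustrative Python only, not part of the kernel tree or the erofs-utils tools:

```python
# Illustrative arithmetic only (not kernel code): the limits in the
# documentation diff above follow from the on-disk field widths.

BLOCK_SIZE = 4096          # 4KiB blocks
BLOCK_ADDR_BITS = 32       # 32-bit block addresses

# 4KiB blocks addressed by 32-bit block numbers give a 16TiB address space.
address_space = BLOCK_SIZE * 2**BLOCK_ADDR_BITS
assert address_space == 16 * 2**40       # 16 TiB

# Compact (v1) inodes keep 32-bit sizes/counters while extended (v2) inodes
# widen them, matching the 4 GiB / 16 EiB max file sizes and the
# 65536 / 4294967296 uid/gid and hardlink limits in the table.
assert 2**32 == 4 * 2**30                # 4 GiB
assert 2**64 == 16 * 2**60               # 16 EiB
assert 2**16 == 65536
assert 2**32 == 4294967296
```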
@@ -91,6 +110,7 @@ dax={always,never}    Use direct access (no page cache). See
                       Documentation/filesystems/dax.rst.
 dax                   A legacy option which is an alias for ``dax=always``.
 device=%s             Specify a path to an extra device to be used together.
+fsid=%s               Specify a filesystem image ID for Fscache back-end.
 ===================   =========================================================

 Sysfs Entries
@@ -226,8 +246,8 @@ Note that apart from the offset of the first filename, nameoff0 also indicates
 the total number of directory entries in this block since it is no need to
 introduce another on-disk field at all.

-Chunk-based file
-----------------
+Chunk-based files
+-----------------

 In order to support chunk-based data deduplication, a new inode data layout has
 been supported since Linux v5.15: Files are split in equal-sized data chunks
 with ``extents`` area of the inode metadata indicating how to get the chunk
......
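The chunk-based deduplication described above can be sketched at a conceptual level: equal-sized chunks are stored once and per-file extents reference shared chunks. This is a toy Python model of the idea, not the EROFS on-disk format (field names and the hash-indexed store are invented for illustration):

```python
# Conceptual sketch of chunk-based deduplication (NOT the EROFS on-disk
# format): files are split into equal-sized chunks, identical chunks are
# stored only once, and each file's extents reference the shared chunks.
import hashlib

CHUNK_SIZE = 16  # toy chunk size; real layouts use power-of-two block multiples

def split_chunks(data: bytes):
    """Split data into equal-sized chunks (last one may be short)."""
    return [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]

store = {}  # chunk digest -> chunk data, each unique chunk stored once

def add_file(data: bytes):
    """Record a file; return its extent list (chunk references, not copies)."""
    extents = []
    for chunk in split_chunks(data):
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)   # deduplicate identical chunks
        extents.append(digest)
    return extents

a = add_file(b"A" * 16 + b"B" * 16)
b = add_file(b"B" * 16 + b"C" * 16)   # shares the "B" chunk with the first file
assert len(store) == 3                # A, B and C chunks each stored once
assert a[1] == b[0]                   # both files reference the same chunk
```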
@@ -17,6 +17,7 @@ static struct netfs_io_request *erofs_fscache_alloc_request(struct address_space
 	rreq->start = start;
 	rreq->len = len;
 	rreq->mapping = mapping;
+	rreq->inode = mapping->host;
 	INIT_LIST_HEAD(&rreq->subrequests);
 	refcount_set(&rreq->ref, 1);
 	return rreq;
......
@@ -288,7 +288,10 @@ static int erofs_fill_inode(struct inode *inode, int isdir)
 	}

 	if (erofs_inode_is_data_compressed(vi->datalayout)) {
-		err = z_erofs_fill_inode(inode);
+		if (!erofs_is_fscache_mode(inode->i_sb))
+			err = z_erofs_fill_inode(inode);
+		else
+			err = -EOPNOTSUPP;
 		goto out_unlock;
 	}

 	inode->i_mapping->a_ops = &erofs_raw_access_aops;
......
(a large diff, collapsed in the original view, is omitted here)
@@ -12,21 +12,40 @@
 #define Z_EROFS_PCLUSTER_MAX_PAGES	(Z_EROFS_PCLUSTER_MAX_SIZE / PAGE_SIZE)
 #define Z_EROFS_NR_INLINE_PAGEVECS	3

+#define Z_EROFS_PCLUSTER_FULL_LENGTH    0x00000001
+#define Z_EROFS_PCLUSTER_LENGTH_BIT     1
+
+/*
+ * let's leave a type here in case of introducing
+ * another tagged pointer later.
+ */
+typedef void *z_erofs_next_pcluster_t;
+
 /*
  * Structure fields follow one of the following exclusion rules.
  *
  * I: Modifiable by initialization/destruction paths and read-only
  *    for everyone else;
  *
- * L: Field should be protected by pageset lock;
+ * L: Field should be protected by the pcluster lock;
  *
  * A: Field should be accessed / updated in atomic for parallelized code.
  */
-struct z_erofs_collection {
+struct z_erofs_pcluster {
+	struct erofs_workgroup obj;
 	struct mutex lock;

+	/* A: point to next chained pcluster or TAILs */
+	z_erofs_next_pcluster_t next;
+
+	/* A: lower limit of decompressed length and if full length or not */
+	unsigned int length;
+
 	/* I: page offset of start position of decompression */
-	unsigned short pageofs;
+	unsigned short pageofs_out;
+
+	/* I: page offset of inline compressed data */
+	unsigned short pageofs_in;

 	/* L: maximum relative page index in pagevec[] */
 	unsigned short nr_pages;
@@ -41,29 +60,6 @@ struct z_erofs_collection {
 	/* I: can be used to free the pcluster by RCU. */
 	struct rcu_head rcu;
 };
-};
-
-#define Z_EROFS_PCLUSTER_FULL_LENGTH    0x00000001
-#define Z_EROFS_PCLUSTER_LENGTH_BIT     1
-
-/*
- * let's leave a type here in case of introducing
- * another tagged pointer later.
- */
-typedef void *z_erofs_next_pcluster_t;
-
-struct z_erofs_pcluster {
-	struct erofs_workgroup obj;
-	struct z_erofs_collection primary_collection;
-
-	/* A: point to next chained pcluster or TAILs */
-	z_erofs_next_pcluster_t next;
-
-	/* A: lower limit of decompressed length and if full length or not */
-	unsigned int length;
-
-	/* I: page offset of inline compressed data */
-	unsigned short pageofs_in;
-
 	union {
 		/* I: physical cluster size in pages */
@@ -80,8 +76,6 @@ struct z_erofs_pcluster {
 	struct page *compressed_pages[];
 };

-#define z_erofs_primarycollection(pcluster)	(&(pcluster)->primary_collection)
-
 /* let's avoid the valid 32-bit kernel addresses */

 /* the chained workgroup has't submitted io (still open) */