Commit f008b1d6 authored by Linus Torvalds

Merge tag 'netfs-prep-20220318' of git://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs

Pull netfs updates from David Howells:
 "Netfs prep for write helpers.

  Having had a go at implementing write helpers and content encryption
  support in netfslib, it seems that the netfs_read_{,sub}request
  structs and the equivalent write request structs were almost the same
  and so should be merged, thereby requiring only one set of
  alloc/get/put functions and a common set of tracepoints.

  Merging the structs also has the advantage that if a bounce buffer is
  added to the request struct, a read operation can be performed to fill
  the bounce buffer, the contents of the buffer can be modified and then
  a write operation can be performed on it to send the data wherever it
  needs to go using the same request structure all the way through. The
  I/O handlers would then transparently perform any required crypto.
  This should make it easier to perform RMW cycles if needed.

  The potentially common functions and structs, however, by their names
  all proclaim themselves to be associated with the read side of things.

  The bulk of these changes alter this in the following ways:

   - Rename struct netfs_read_{,sub}request to netfs_io_{,sub}request.

   - Rename some enums, members and flags to make them more appropriate.

   - Adjust some comments to match.

   - Drop "read"/"rreq" from the names of common functions. For
     instance, netfs_get_read_request() becomes netfs_get_request().

   - The ->init_rreq() and ->issue_op() methods become ->init_request()
     and ->issue_read(). I've kept the latter as a read-specific
     function and, in another branch, added an ->issue_write() method.
     A sketch of the resulting ops table follows this list.
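
  To make the renaming concrete, a converted filesystem's ops table ends
  up looking roughly like the 9p one further down in this diff (the hook
  implementations keep their filesystem-specific names):

        static int v9fs_init_request(struct netfs_io_request *rreq, struct file *file)
        {
                struct p9_fid *fid = file->private_data;

                refcount_inc(&fid->count);
                rreq->netfs_priv = fid;
                return 0;       /* ->init_request() can now return an error */
        }

        const struct netfs_request_ops v9fs_req_ops = {
                .init_request           = v9fs_init_request,    /* was .init_rreq */
                .begin_cache_operation  = v9fs_begin_cache_operation,
                .issue_read             = v9fs_issue_read,      /* was .issue_op */
                .cleanup                = v9fs_req_cleanup,
        };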

  The driver source is then reorganised into a number of files:

        fs/netfs/buffered_read.c        Create read reqs to the pagecache
        fs/netfs/io.c                   Dispatchers for read and write reqs
        fs/netfs/main.c                 Some general miscellaneous bits
        fs/netfs/objects.c              Alloc, get and put functions
        fs/netfs/stats.c                Optional procfs statistics

  and future development can be fitted into this scheme, e.g.:

        fs/netfs/buffered_write.c       Modify the pagecache
        fs/netfs/buffered_flush.c       Writeback from the pagecache
        fs/netfs/direct_read.c          DIO read support
        fs/netfs/direct_write.c         DIO write support
        fs/netfs/unbuffered_write.c     Write modifications directly back

  Beyond the above changes, there are also some changes that affect how
  things work:

   - Make fscache_end_operation() generally available.

   - In the netfs tracing header, generate enums from the symbol ->
     string mapping tables rather than manually coding them.

   - Add a struct for filesystems that use netfslib to embed in their
     inode wrapper structs to hold extra state that netfslib is
     interested in, such as the fscache cookie. This allows netfslib
     functions to be set in filesystem operation tables and jumped to
     directly without needing a filesystem wrapper (see the sketch
     after this list).

   - Add a member to the struct added above to track the remote inode
     length as that may differ if local modifications are buffered. We
     may need to supply an appropriate EOF pointer when storing data (in
     AFS for example).

   - Pass extra information to netfs_alloc_request() so that the
     ->init_request() hook can access it and retain information to
     indicate the origin of the operation.

   - Make the ->init_request() hook return an error, thereby allowing a
     filesystem that isn't allowed to cache an inode (ceph or cifs, for
     example) to skip readahead.

   - Switch to using refcount_t for subrequests and add tracepoints to
     log refcount changes for the request and subrequest structs.

   - Add a function to consolidate dispatching a read request. Similar
     code is used in three places and another couple are likely to be
     added in the future.
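
  To make the new per-inode context concrete, here is a minimal sketch
  mirroring the example added to the netfs library documentation by this
  series; 'my_inode', 'my_set_netfs_context' and 'my_req_ops' are
  placeholder names standing in for a filesystem's own types and ops
  table:

        struct my_inode {
                struct {
                        /* These must be contiguous */
                        struct inode            vfs_inode;
                        struct netfs_i_context  netfs_ctx;
                };
                /* filesystem-private fields follow */
        };

        static void my_set_netfs_context(struct inode *inode)
        {
                /* Point netfslib at the filesystem's netfs_request_ops table */
                netfs_i_context_init(inode, &my_req_ops);
        }

  The fscache cookie and the remote_i_size tracking mentioned above live
  in this shared context rather than in each filesystem's private inode
  struct."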

Link: https://lore.kernel.org/all/2639515.1648483225@warthog.procyon.org.uk/

* tag 'netfs-prep-20220318' of git://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs:
  afs: Maintain netfs_i_context::remote_i_size
  netfs: Keep track of the actual remote file size
  netfs: Split some core bits out into their own file
  netfs: Split fs/netfs/read_helper.c
  netfs: Rename read_helper.c to io.c
  netfs: Prepare to split read_helper.c
  netfs: Add a function to consolidate beginning a read
  netfs: Add a netfs inode context
  ceph: Make ceph_init_request() check caps on readahead
  netfs: Change ->init_request() to return an error code
  netfs: Refactor arguments for netfs_alloc_read_request
  netfs: Adjust the netfs_failure tracepoint to indicate non-subreq lines
  netfs: Trace refcounting on the netfs_io_subrequest struct
  netfs: Trace refcounting on the netfs_io_request struct
  netfs: Adjust the netfs_rreq tracepoint slightly
  netfs: Split netfs_io_* object handling out
  netfs: Finish off rename of netfs_read_request to netfs_io_request
  netfs: Rename netfs_read_*request to netfs_io_*request
  netfs: Generate enums from trace symbol mapping lists
  fscache: export fscache_end_operation()
parents 478f74a3 ab487a4c
@@ -7,6 +7,8 @@ Network Filesystem Helper Library
 .. Contents:
 
  - Overview.
+ - Per-inode context.
+ - Inode context helper functions.
  - Buffered read helpers.
  - Read helper functions.
  - Read helper structures.
@@ -28,6 +30,69 @@ Note that the library module doesn't link against local caching directly, so
 access must be provided by the netfs.
 
 
+Per-Inode Context
+=================
+
+The network filesystem helper library needs a place to store a bit of state for
+its use on each netfs inode it is helping to manage. To this end, a context
+structure is defined::
+
+        struct netfs_i_context {
+                const struct netfs_request_ops *ops;
+                struct fscache_cookie *cache;
+        };
+
+A network filesystem that wants to use netfs lib must place one of these
+directly after the VFS ``struct inode`` it allocates, usually as part of its
+own struct. This can be done in a way similar to the following::
+
+        struct my_inode {
+                struct {
+                        /* These must be contiguous */
+                        struct inode vfs_inode;
+                        struct netfs_i_context netfs_ctx;
+                };
+                ...
+        };
+
+This allows netfslib to find its state by simple offset from the inode pointer,
+thereby allowing the netfslib helper functions to be pointed to directly by the
+VFS/VM operation tables.
+
+The structure contains the following fields:
+
+ * ``ops``
+
+   The set of operations provided by the network filesystem to netfslib.
+
+ * ``cache``
+
+   Local caching cookie, or NULL if no caching is enabled. This field does not
+   exist if fscache is disabled.
+
+
+Inode Context Helper Functions
+------------------------------
+
+To help deal with the per-inode context, a number helper functions are
+provided. Firstly, a function to perform basic initialisation on a context and
+set the operations table pointer::
+
+        void netfs_i_context_init(struct inode *inode,
+                                  const struct netfs_request_ops *ops);
+
+then two functions to cast between the VFS inode structure and the netfs
+context::
+
+        struct netfs_i_context *netfs_i_context(struct inode *inode);
+        struct inode *netfs_inode(struct netfs_i_context *ctx);
+
+and finally, a function to get the cache cookie pointer from the context
+attached to an inode (or NULL if fscache is disabled)::
+
+        struct fscache_cookie *netfs_i_cookie(struct inode *inode);
+
+
 Buffered Read Helpers
 =====================
@@ -70,38 +135,22 @@ Read Helper Functions
 
 Three read helpers are provided::
 
-        void netfs_readahead(struct readahead_control *ractl,
-                             const struct netfs_read_request_ops *ops,
-                             void *netfs_priv);
+        void netfs_readahead(struct readahead_control *ractl);
         int netfs_readpage(struct file *file,
-                           struct folio *folio,
-                           const struct netfs_read_request_ops *ops,
-                           void *netfs_priv);
+                           struct page *page);
         int netfs_write_begin(struct file *file,
                               struct address_space *mapping,
                               loff_t pos,
                               unsigned int len,
                               unsigned int flags,
                               struct folio **_folio,
-                              void **_fsdata,
-                              const struct netfs_read_request_ops *ops,
-                              void *netfs_priv);
+                              void **_fsdata);
 
-Each corresponds to a VM operation, with the addition of a couple of parameters
-for the use of the read helpers:
+Each corresponds to a VM address space operation. These operations use the
+state in the per-inode context.
 
- * ``ops``
-
-   A table of operations through which the helpers can talk to the filesystem.
-
- * ``netfs_priv``
-
-   Filesystem private data (can be NULL).
-
-Both of these values will be stored into the read request structure.
-
-For ->readahead() and ->readpage(), the network filesystem should just jump
-into the corresponding read helper; whereas for ->write_begin(), it may be a
+For ->readahead() and ->readpage(), the network filesystem just point directly
+at the corresponding read helper; whereas for ->write_begin(), it may be a
 little more complicated as the network filesystem might want to flush
 conflicting writes or track dirty data and needs to put the acquired folio if
 an error occurs after calling the helper.
@@ -116,7 +165,7 @@ occurs, the request will get partially completed if sufficient data is read.
 
 Additionally, there is::
 
-  * void netfs_subreq_terminated(struct netfs_read_subrequest *subreq,
+  * void netfs_subreq_terminated(struct netfs_io_subrequest *subreq,
                                  ssize_t transferred_or_error,
                                  bool was_async);
@@ -132,7 +181,7 @@ Read Helper Structures
 
 The read helpers make use of a couple of structures to maintain the state of
 the read. The first is a structure that manages a read request as a whole::
 
-        struct netfs_read_request {
+        struct netfs_io_request {
                 struct inode *inode;
                 struct address_space *mapping;
                 struct netfs_cache_resources cache_resources;
@@ -140,7 +189,7 @@ the read. The first is a structure that manages a read request as a whole::
                 loff_t start;
                 size_t len;
                 loff_t i_size;
-                const struct netfs_read_request_ops *netfs_ops;
+                const struct netfs_request_ops *netfs_ops;
                 unsigned int debug_id;
                 ...
         };
@@ -187,8 +236,8 @@ The above fields are the ones the netfs can use. They are:
 The second structure is used to manage individual slices of the overall read
 request::
 
-        struct netfs_read_subrequest {
-                struct netfs_read_request *rreq;
+        struct netfs_io_subrequest {
+                struct netfs_io_request *rreq;
                 loff_t start;
                 size_t len;
                 size_t transferred;
@@ -244,32 +293,26 @@ Read Helper Operations
 The network filesystem must provide the read helpers with a table of operations
 through which it can issue requests and negotiate::
 
-        struct netfs_read_request_ops {
-                void (*init_rreq)(struct netfs_read_request *rreq, struct file *file);
-                bool (*is_cache_enabled)(struct inode *inode);
-                int (*begin_cache_operation)(struct netfs_read_request *rreq);
-                void (*expand_readahead)(struct netfs_read_request *rreq);
-                bool (*clamp_length)(struct netfs_read_subrequest *subreq);
-                void (*issue_op)(struct netfs_read_subrequest *subreq);
-                bool (*is_still_valid)(struct netfs_read_request *rreq);
+        struct netfs_request_ops {
+                void (*init_request)(struct netfs_io_request *rreq, struct file *file);
+                int (*begin_cache_operation)(struct netfs_io_request *rreq);
+                void (*expand_readahead)(struct netfs_io_request *rreq);
+                bool (*clamp_length)(struct netfs_io_subrequest *subreq);
+                void (*issue_read)(struct netfs_io_subrequest *subreq);
+                bool (*is_still_valid)(struct netfs_io_request *rreq);
                 int (*check_write_begin)(struct file *file, loff_t pos, unsigned len,
                                          struct folio *folio, void **_fsdata);
-                void (*done)(struct netfs_read_request *rreq);
+                void (*done)(struct netfs_io_request *rreq);
                 void (*cleanup)(struct address_space *mapping, void *netfs_priv);
         };
 
 The operations are as follows:
 
- * ``init_rreq()``
+ * ``init_request()``
 
   [Optional] This is called to initialise the request structure. It is given
   the file for reference and can modify the ->netfs_priv value.
 
- * ``is_cache_enabled()``
-
-   [Required] This is called by netfs_write_begin() to ask if the file is being
-   cached. It should return true if it is being cached and false otherwise.
-
 * ``begin_cache_operation()``
 
   [Optional] This is called to ask the network filesystem to call into the
@@ -305,7 +348,7 @@ The operations are as follows:
 
   This should return 0 on success and an error code on error.
 
- * ``issue_op()``
+ * ``issue_read()``
 
   [Required] The helpers use this to dispatch a subrequest to the server for
   reading. In the subrequest, ->start, ->len and ->transferred indicate what
@@ -420,12 +463,12 @@ The network filesystem's ->begin_cache_operation() method is called to set up a
 cache and this must call into the cache to do the work. If using fscache, for
 example, the cache would call::
 
-        int fscache_begin_read_operation(struct netfs_read_request *rreq,
+        int fscache_begin_read_operation(struct netfs_io_request *rreq,
                                          struct fscache_cookie *cookie);
 
 passing in the request pointer and the cookie corresponding to the file.
 
-The netfs_read_request object contains a place for the cache to hang its
+The netfs_io_request object contains a place for the cache to hang its
 state::
 
         struct netfs_cache_resources {
@@ -443,7 +486,7 @@ operation table looks like the following::
 
         void (*expand_readahead)(struct netfs_cache_resources *cres,
                                  loff_t *_start, size_t *_len, loff_t i_size);
 
-        enum netfs_read_source (*prepare_read)(struct netfs_read_subrequest *subreq,
+        enum netfs_io_source (*prepare_read)(struct netfs_io_subrequest *subreq,
                                                loff_t i_size);
 
         int (*read)(struct netfs_cache_resources *cres,
@@ -562,4 +605,5 @@ API Function Reference
 ======================
 
 .. kernel-doc:: include/linux/netfs.h
-.. kernel-doc:: fs/netfs/read_helper.c
+.. kernel-doc:: fs/netfs/buffered_read.c
+.. kernel-doc:: fs/netfs/io.c
...@@ -49,22 +49,20 @@ int v9fs_cache_session_get_cookie(struct v9fs_session_info *v9ses, ...@@ -49,22 +49,20 @@ int v9fs_cache_session_get_cookie(struct v9fs_session_info *v9ses,
void v9fs_cache_inode_get_cookie(struct inode *inode) void v9fs_cache_inode_get_cookie(struct inode *inode)
{ {
struct v9fs_inode *v9inode; struct v9fs_inode *v9inode = V9FS_I(inode);
struct v9fs_session_info *v9ses; struct v9fs_session_info *v9ses;
__le32 version; __le32 version;
__le64 path; __le64 path;
if (!S_ISREG(inode->i_mode)) if (!S_ISREG(inode->i_mode))
return; return;
if (WARN_ON(v9fs_inode_cookie(v9inode)))
v9inode = V9FS_I(inode);
if (WARN_ON(v9inode->fscache))
return; return;
version = cpu_to_le32(v9inode->qid.version); version = cpu_to_le32(v9inode->qid.version);
path = cpu_to_le64(v9inode->qid.path); path = cpu_to_le64(v9inode->qid.path);
v9ses = v9fs_inode2v9ses(inode); v9ses = v9fs_inode2v9ses(inode);
v9inode->fscache = v9inode->netfs_ctx.cache =
fscache_acquire_cookie(v9fs_session_cache(v9ses), fscache_acquire_cookie(v9fs_session_cache(v9ses),
0, 0,
&path, sizeof(path), &path, sizeof(path),
...@@ -72,5 +70,5 @@ void v9fs_cache_inode_get_cookie(struct inode *inode) ...@@ -72,5 +70,5 @@ void v9fs_cache_inode_get_cookie(struct inode *inode)
i_size_read(&v9inode->vfs_inode)); i_size_read(&v9inode->vfs_inode));
p9_debug(P9_DEBUG_FSC, "inode %p get cookie %p\n", p9_debug(P9_DEBUG_FSC, "inode %p get cookie %p\n",
inode, v9inode->fscache); inode, v9fs_inode_cookie(v9inode));
} }
...@@ -623,9 +623,7 @@ static void v9fs_sysfs_cleanup(void) ...@@ -623,9 +623,7 @@ static void v9fs_sysfs_cleanup(void)
static void v9fs_inode_init_once(void *foo) static void v9fs_inode_init_once(void *foo)
{ {
struct v9fs_inode *v9inode = (struct v9fs_inode *)foo; struct v9fs_inode *v9inode = (struct v9fs_inode *)foo;
#ifdef CONFIG_9P_FSCACHE
v9inode->fscache = NULL;
#endif
memset(&v9inode->qid, 0, sizeof(v9inode->qid)); memset(&v9inode->qid, 0, sizeof(v9inode->qid));
inode_init_once(&v9inode->vfs_inode); inode_init_once(&v9inode->vfs_inode);
} }
......
...@@ -9,6 +9,7 @@ ...@@ -9,6 +9,7 @@
#define FS_9P_V9FS_H #define FS_9P_V9FS_H
#include <linux/backing-dev.h> #include <linux/backing-dev.h>
#include <linux/netfs.h>
/** /**
* enum p9_session_flags - option flags for each 9P session * enum p9_session_flags - option flags for each 9P session
...@@ -108,14 +109,15 @@ struct v9fs_session_info { ...@@ -108,14 +109,15 @@ struct v9fs_session_info {
#define V9FS_INO_INVALID_ATTR 0x01 #define V9FS_INO_INVALID_ATTR 0x01
struct v9fs_inode { struct v9fs_inode {
#ifdef CONFIG_9P_FSCACHE struct {
struct fscache_cookie *fscache; /* These must be contiguous */
#endif struct inode vfs_inode; /* the VFS's inode record */
struct netfs_i_context netfs_ctx; /* Netfslib context */
};
struct p9_qid qid; struct p9_qid qid;
unsigned int cache_validity; unsigned int cache_validity;
struct p9_fid *writeback_fid; struct p9_fid *writeback_fid;
struct mutex v_mutex; struct mutex v_mutex;
struct inode vfs_inode;
}; };
static inline struct v9fs_inode *V9FS_I(const struct inode *inode) static inline struct v9fs_inode *V9FS_I(const struct inode *inode)
...@@ -126,7 +128,7 @@ static inline struct v9fs_inode *V9FS_I(const struct inode *inode) ...@@ -126,7 +128,7 @@ static inline struct v9fs_inode *V9FS_I(const struct inode *inode)
static inline struct fscache_cookie *v9fs_inode_cookie(struct v9fs_inode *v9inode) static inline struct fscache_cookie *v9fs_inode_cookie(struct v9fs_inode *v9inode)
{ {
#ifdef CONFIG_9P_FSCACHE #ifdef CONFIG_9P_FSCACHE
return v9inode->fscache; return netfs_i_cookie(&v9inode->vfs_inode);
#else #else
return NULL; return NULL;
#endif #endif
...@@ -163,6 +165,7 @@ extern struct inode *v9fs_inode_from_fid(struct v9fs_session_info *v9ses, ...@@ -163,6 +165,7 @@ extern struct inode *v9fs_inode_from_fid(struct v9fs_session_info *v9ses,
extern const struct inode_operations v9fs_dir_inode_operations_dotl; extern const struct inode_operations v9fs_dir_inode_operations_dotl;
extern const struct inode_operations v9fs_file_inode_operations_dotl; extern const struct inode_operations v9fs_file_inode_operations_dotl;
extern const struct inode_operations v9fs_symlink_inode_operations_dotl; extern const struct inode_operations v9fs_symlink_inode_operations_dotl;
extern const struct netfs_request_ops v9fs_req_ops;
extern struct inode *v9fs_inode_from_fid_dotl(struct v9fs_session_info *v9ses, extern struct inode *v9fs_inode_from_fid_dotl(struct v9fs_session_info *v9ses,
struct p9_fid *fid, struct p9_fid *fid,
struct super_block *sb, int new); struct super_block *sb, int new);
......
...@@ -28,12 +28,12 @@ ...@@ -28,12 +28,12 @@
#include "fid.h" #include "fid.h"
/** /**
* v9fs_req_issue_op - Issue a read from 9P * v9fs_issue_read - Issue a read from 9P
* @subreq: The read to make * @subreq: The read to make
*/ */
static void v9fs_req_issue_op(struct netfs_read_subrequest *subreq) static void v9fs_issue_read(struct netfs_io_subrequest *subreq)
{ {
struct netfs_read_request *rreq = subreq->rreq; struct netfs_io_request *rreq = subreq->rreq;
struct p9_fid *fid = rreq->netfs_priv; struct p9_fid *fid = rreq->netfs_priv;
struct iov_iter to; struct iov_iter to;
loff_t pos = subreq->start + subreq->transferred; loff_t pos = subreq->start + subreq->transferred;
...@@ -52,20 +52,21 @@ static void v9fs_req_issue_op(struct netfs_read_subrequest *subreq) ...@@ -52,20 +52,21 @@ static void v9fs_req_issue_op(struct netfs_read_subrequest *subreq)
} }
/** /**
* v9fs_init_rreq - Initialise a read request * v9fs_init_request - Initialise a read request
* @rreq: The read request * @rreq: The read request
* @file: The file being read from * @file: The file being read from
*/ */
static void v9fs_init_rreq(struct netfs_read_request *rreq, struct file *file) static int v9fs_init_request(struct netfs_io_request *rreq, struct file *file)
{ {
struct p9_fid *fid = file->private_data; struct p9_fid *fid = file->private_data;
refcount_inc(&fid->count); refcount_inc(&fid->count);
rreq->netfs_priv = fid; rreq->netfs_priv = fid;
return 0;
} }
/** /**
* v9fs_req_cleanup - Cleanup request initialized by v9fs_init_rreq * v9fs_req_cleanup - Cleanup request initialized by v9fs_init_request
* @mapping: unused mapping of request to cleanup * @mapping: unused mapping of request to cleanup
* @priv: private data to cleanup, a fid, guaranted non-null. * @priv: private data to cleanup, a fid, guaranted non-null.
*/ */
...@@ -76,22 +77,11 @@ static void v9fs_req_cleanup(struct address_space *mapping, void *priv) ...@@ -76,22 +77,11 @@ static void v9fs_req_cleanup(struct address_space *mapping, void *priv)
p9_client_clunk(fid); p9_client_clunk(fid);
} }
/**
* v9fs_is_cache_enabled - Determine if caching is enabled for an inode
* @inode: The inode to check
*/
static bool v9fs_is_cache_enabled(struct inode *inode)
{
struct fscache_cookie *cookie = v9fs_inode_cookie(V9FS_I(inode));
return fscache_cookie_enabled(cookie) && cookie->cache_priv;
}
/** /**
* v9fs_begin_cache_operation - Begin a cache operation for a read * v9fs_begin_cache_operation - Begin a cache operation for a read
* @rreq: The read request * @rreq: The read request
*/ */
static int v9fs_begin_cache_operation(struct netfs_read_request *rreq) static int v9fs_begin_cache_operation(struct netfs_io_request *rreq)
{ {
#ifdef CONFIG_9P_FSCACHE #ifdef CONFIG_9P_FSCACHE
struct fscache_cookie *cookie = v9fs_inode_cookie(V9FS_I(rreq->inode)); struct fscache_cookie *cookie = v9fs_inode_cookie(V9FS_I(rreq->inode));
...@@ -102,36 +92,13 @@ static int v9fs_begin_cache_operation(struct netfs_read_request *rreq) ...@@ -102,36 +92,13 @@ static int v9fs_begin_cache_operation(struct netfs_read_request *rreq)
#endif #endif
} }
static const struct netfs_read_request_ops v9fs_req_ops = { const struct netfs_request_ops v9fs_req_ops = {
.init_rreq = v9fs_init_rreq, .init_request = v9fs_init_request,
.is_cache_enabled = v9fs_is_cache_enabled,
.begin_cache_operation = v9fs_begin_cache_operation, .begin_cache_operation = v9fs_begin_cache_operation,
.issue_op = v9fs_req_issue_op, .issue_read = v9fs_issue_read,
.cleanup = v9fs_req_cleanup, .cleanup = v9fs_req_cleanup,
}; };
/**
* v9fs_vfs_readpage - read an entire page in from 9P
* @file: file being read
* @page: structure to page
*
*/
static int v9fs_vfs_readpage(struct file *file, struct page *page)
{
struct folio *folio = page_folio(page);
return netfs_readpage(file, folio, &v9fs_req_ops, NULL);
}
/**
* v9fs_vfs_readahead - read a set of pages from 9P
* @ractl: The readahead parameters
*/
static void v9fs_vfs_readahead(struct readahead_control *ractl)
{
netfs_readahead(ractl, &v9fs_req_ops, NULL);
}
/** /**
* v9fs_release_page - release the private state associated with a page * v9fs_release_page - release the private state associated with a page
* @page: The page to be released * @page: The page to be released
...@@ -308,8 +275,7 @@ static int v9fs_write_begin(struct file *filp, struct address_space *mapping, ...@@ -308,8 +275,7 @@ static int v9fs_write_begin(struct file *filp, struct address_space *mapping,
* file. We need to do this before we get a lock on the page in case * file. We need to do this before we get a lock on the page in case
* there's more than one writer competing for the same cache block. * there's more than one writer competing for the same cache block.
*/ */
retval = netfs_write_begin(filp, mapping, pos, len, flags, &folio, fsdata, retval = netfs_write_begin(filp, mapping, pos, len, flags, &folio, fsdata);
&v9fs_req_ops, NULL);
if (retval < 0) if (retval < 0)
return retval; return retval;
...@@ -370,8 +336,8 @@ static bool v9fs_dirty_folio(struct address_space *mapping, struct folio *folio) ...@@ -370,8 +336,8 @@ static bool v9fs_dirty_folio(struct address_space *mapping, struct folio *folio)
#endif #endif
const struct address_space_operations v9fs_addr_operations = { const struct address_space_operations v9fs_addr_operations = {
.readpage = v9fs_vfs_readpage, .readpage = netfs_readpage,
.readahead = v9fs_vfs_readahead, .readahead = netfs_readahead,
.dirty_folio = v9fs_dirty_folio, .dirty_folio = v9fs_dirty_folio,
.writepage = v9fs_vfs_writepage, .writepage = v9fs_vfs_writepage,
.write_begin = v9fs_write_begin, .write_begin = v9fs_write_begin,
......
...@@ -231,9 +231,6 @@ struct inode *v9fs_alloc_inode(struct super_block *sb) ...@@ -231,9 +231,6 @@ struct inode *v9fs_alloc_inode(struct super_block *sb)
v9inode = alloc_inode_sb(sb, v9fs_inode_cache, GFP_KERNEL); v9inode = alloc_inode_sb(sb, v9fs_inode_cache, GFP_KERNEL);
if (!v9inode) if (!v9inode)
return NULL; return NULL;
#ifdef CONFIG_9P_FSCACHE
v9inode->fscache = NULL;
#endif
v9inode->writeback_fid = NULL; v9inode->writeback_fid = NULL;
v9inode->cache_validity = 0; v9inode->cache_validity = 0;
mutex_init(&v9inode->v_mutex); mutex_init(&v9inode->v_mutex);
...@@ -250,6 +247,14 @@ void v9fs_free_inode(struct inode *inode) ...@@ -250,6 +247,14 @@ void v9fs_free_inode(struct inode *inode)
kmem_cache_free(v9fs_inode_cache, V9FS_I(inode)); kmem_cache_free(v9fs_inode_cache, V9FS_I(inode));
} }
/*
* Set parameters for the netfs library
*/
static void v9fs_set_netfs_context(struct inode *inode)
{
netfs_i_context_init(inode, &v9fs_req_ops);
}
int v9fs_init_inode(struct v9fs_session_info *v9ses, int v9fs_init_inode(struct v9fs_session_info *v9ses,
struct inode *inode, umode_t mode, dev_t rdev) struct inode *inode, umode_t mode, dev_t rdev)
{ {
...@@ -338,6 +343,8 @@ int v9fs_init_inode(struct v9fs_session_info *v9ses, ...@@ -338,6 +343,8 @@ int v9fs_init_inode(struct v9fs_session_info *v9ses,
err = -EINVAL; err = -EINVAL;
goto error; goto error;
} }
v9fs_set_netfs_context(inode);
error: error:
return err; return err;
......
...@@ -76,6 +76,7 @@ struct inode *afs_iget_pseudo_dir(struct super_block *sb, bool root) ...@@ -76,6 +76,7 @@ struct inode *afs_iget_pseudo_dir(struct super_block *sb, bool root)
/* there shouldn't be an existing inode */ /* there shouldn't be an existing inode */
BUG_ON(!(inode->i_state & I_NEW)); BUG_ON(!(inode->i_state & I_NEW));
netfs_i_context_init(inode, NULL);
inode->i_size = 0; inode->i_size = 0;
inode->i_mode = S_IFDIR | S_IRUGO | S_IXUGO; inode->i_mode = S_IFDIR | S_IRUGO | S_IXUGO;
if (root) { if (root) {
......
...@@ -19,13 +19,11 @@ ...@@ -19,13 +19,11 @@
#include "internal.h" #include "internal.h"
static int afs_file_mmap(struct file *file, struct vm_area_struct *vma); static int afs_file_mmap(struct file *file, struct vm_area_struct *vma);
static int afs_readpage(struct file *file, struct page *page);
static int afs_symlink_readpage(struct file *file, struct page *page); static int afs_symlink_readpage(struct file *file, struct page *page);
static void afs_invalidate_folio(struct folio *folio, size_t offset, static void afs_invalidate_folio(struct folio *folio, size_t offset,
size_t length); size_t length);
static int afs_releasepage(struct page *page, gfp_t gfp_flags); static int afs_releasepage(struct page *page, gfp_t gfp_flags);
static void afs_readahead(struct readahead_control *ractl);
static ssize_t afs_file_read_iter(struct kiocb *iocb, struct iov_iter *iter); static ssize_t afs_file_read_iter(struct kiocb *iocb, struct iov_iter *iter);
static void afs_vm_open(struct vm_area_struct *area); static void afs_vm_open(struct vm_area_struct *area);
static void afs_vm_close(struct vm_area_struct *area); static void afs_vm_close(struct vm_area_struct *area);
...@@ -52,8 +50,8 @@ const struct inode_operations afs_file_inode_operations = { ...@@ -52,8 +50,8 @@ const struct inode_operations afs_file_inode_operations = {
}; };
const struct address_space_operations afs_file_aops = { const struct address_space_operations afs_file_aops = {
.readpage = afs_readpage, .readpage = netfs_readpage,
.readahead = afs_readahead, .readahead = netfs_readahead,
.dirty_folio = afs_dirty_folio, .dirty_folio = afs_dirty_folio,
.launder_folio = afs_launder_folio, .launder_folio = afs_launder_folio,
.releasepage = afs_releasepage, .releasepage = afs_releasepage,
...@@ -240,7 +238,7 @@ void afs_put_read(struct afs_read *req) ...@@ -240,7 +238,7 @@ void afs_put_read(struct afs_read *req)
static void afs_fetch_data_notify(struct afs_operation *op) static void afs_fetch_data_notify(struct afs_operation *op)
{ {
struct afs_read *req = op->fetch.req; struct afs_read *req = op->fetch.req;
struct netfs_read_subrequest *subreq = req->subreq; struct netfs_io_subrequest *subreq = req->subreq;
int error = op->error; int error = op->error;
if (error == -ECONNABORTED) if (error == -ECONNABORTED)
...@@ -310,7 +308,7 @@ int afs_fetch_data(struct afs_vnode *vnode, struct afs_read *req) ...@@ -310,7 +308,7 @@ int afs_fetch_data(struct afs_vnode *vnode, struct afs_read *req)
return afs_do_sync_operation(op); return afs_do_sync_operation(op);
} }
static void afs_req_issue_op(struct netfs_read_subrequest *subreq) static void afs_issue_read(struct netfs_io_subrequest *subreq)
{ {
struct afs_vnode *vnode = AFS_FS_I(subreq->rreq->inode); struct afs_vnode *vnode = AFS_FS_I(subreq->rreq->inode);
struct afs_read *fsreq; struct afs_read *fsreq;
...@@ -359,19 +357,13 @@ static int afs_symlink_readpage(struct file *file, struct page *page) ...@@ -359,19 +357,13 @@ static int afs_symlink_readpage(struct file *file, struct page *page)
return ret; return ret;
} }
static void afs_init_rreq(struct netfs_read_request *rreq, struct file *file) static int afs_init_request(struct netfs_io_request *rreq, struct file *file)
{ {
rreq->netfs_priv = key_get(afs_file_key(file)); rreq->netfs_priv = key_get(afs_file_key(file));
return 0;
} }
static bool afs_is_cache_enabled(struct inode *inode) static int afs_begin_cache_operation(struct netfs_io_request *rreq)
{
struct fscache_cookie *cookie = afs_vnode_cache(AFS_FS_I(inode));
return fscache_cookie_enabled(cookie) && cookie->cache_priv;
}
static int afs_begin_cache_operation(struct netfs_read_request *rreq)
{ {
#ifdef CONFIG_AFS_FSCACHE #ifdef CONFIG_AFS_FSCACHE
struct afs_vnode *vnode = AFS_FS_I(rreq->inode); struct afs_vnode *vnode = AFS_FS_I(rreq->inode);
...@@ -396,27 +388,14 @@ static void afs_priv_cleanup(struct address_space *mapping, void *netfs_priv) ...@@ -396,27 +388,14 @@ static void afs_priv_cleanup(struct address_space *mapping, void *netfs_priv)
key_put(netfs_priv); key_put(netfs_priv);
} }
const struct netfs_read_request_ops afs_req_ops = { const struct netfs_request_ops afs_req_ops = {
.init_rreq = afs_init_rreq, .init_request = afs_init_request,
.is_cache_enabled = afs_is_cache_enabled,
.begin_cache_operation = afs_begin_cache_operation, .begin_cache_operation = afs_begin_cache_operation,
.check_write_begin = afs_check_write_begin, .check_write_begin = afs_check_write_begin,
.issue_op = afs_req_issue_op, .issue_read = afs_issue_read,
.cleanup = afs_priv_cleanup, .cleanup = afs_priv_cleanup,
}; };
static int afs_readpage(struct file *file, struct page *page)
{
struct folio *folio = page_folio(page);
return netfs_readpage(file, folio, &afs_req_ops, NULL);
}
static void afs_readahead(struct readahead_control *ractl)
{
netfs_readahead(ractl, &afs_req_ops, NULL);
}
int afs_write_inode(struct inode *inode, struct writeback_control *wbc) int afs_write_inode(struct inode *inode, struct writeback_control *wbc)
{ {
fscache_unpin_writeback(wbc, afs_vnode_cache(AFS_FS_I(inode))); fscache_unpin_writeback(wbc, afs_vnode_cache(AFS_FS_I(inode)));
......
...@@ -53,6 +53,14 @@ static noinline void dump_vnode(struct afs_vnode *vnode, struct afs_vnode *paren ...@@ -53,6 +53,14 @@ static noinline void dump_vnode(struct afs_vnode *vnode, struct afs_vnode *paren
dump_stack(); dump_stack();
} }
/*
* Set parameters for the netfs library
*/
static void afs_set_netfs_context(struct afs_vnode *vnode)
{
netfs_i_context_init(&vnode->vfs_inode, &afs_req_ops);
}
/* /*
* Initialise an inode from the vnode status. * Initialise an inode from the vnode status.
*/ */
...@@ -128,6 +136,7 @@ static int afs_inode_init_from_status(struct afs_operation *op, ...@@ -128,6 +136,7 @@ static int afs_inode_init_from_status(struct afs_operation *op,
} }
afs_set_i_size(vnode, status->size); afs_set_i_size(vnode, status->size);
afs_set_netfs_context(vnode);
vnode->invalid_before = status->data_version; vnode->invalid_before = status->data_version;
inode_set_iversion_raw(&vnode->vfs_inode, status->data_version); inode_set_iversion_raw(&vnode->vfs_inode, status->data_version);
...@@ -237,6 +246,7 @@ static void afs_apply_status(struct afs_operation *op, ...@@ -237,6 +246,7 @@ static void afs_apply_status(struct afs_operation *op,
* idea of what the size should be that's not the same as * idea of what the size should be that's not the same as
* what's on the server. * what's on the server.
*/ */
vnode->netfs_ctx.remote_i_size = status->size;
if (change_size) { if (change_size) {
afs_set_i_size(vnode, status->size); afs_set_i_size(vnode, status->size);
inode->i_ctime = t; inode->i_ctime = t;
...@@ -420,7 +430,7 @@ static void afs_get_inode_cache(struct afs_vnode *vnode) ...@@ -420,7 +430,7 @@ static void afs_get_inode_cache(struct afs_vnode *vnode)
struct afs_vnode_cache_aux aux; struct afs_vnode_cache_aux aux;
if (vnode->status.type != AFS_FTYPE_FILE) { if (vnode->status.type != AFS_FTYPE_FILE) {
vnode->cache = NULL; vnode->netfs_ctx.cache = NULL;
return; return;
} }
...@@ -430,12 +440,14 @@ static void afs_get_inode_cache(struct afs_vnode *vnode) ...@@ -430,12 +440,14 @@ static void afs_get_inode_cache(struct afs_vnode *vnode)
key.vnode_id_ext[1] = htonl(vnode->fid.vnode_hi); key.vnode_id_ext[1] = htonl(vnode->fid.vnode_hi);
afs_set_cache_aux(vnode, &aux); afs_set_cache_aux(vnode, &aux);
vnode->cache = fscache_acquire_cookie( afs_vnode_set_cache(vnode,
vnode->volume->cache, fscache_acquire_cookie(
vnode->status.type == AFS_FTYPE_FILE ? 0 : FSCACHE_ADV_SINGLE_CHUNK, vnode->volume->cache,
&key, sizeof(key), vnode->status.type == AFS_FTYPE_FILE ?
&aux, sizeof(aux), 0 : FSCACHE_ADV_SINGLE_CHUNK,
vnode->status.size); &key, sizeof(key),
&aux, sizeof(aux),
vnode->status.size));
#endif #endif
} }
...@@ -528,6 +540,7 @@ struct inode *afs_root_iget(struct super_block *sb, struct key *key) ...@@ -528,6 +540,7 @@ struct inode *afs_root_iget(struct super_block *sb, struct key *key)
vnode = AFS_FS_I(inode); vnode = AFS_FS_I(inode);
vnode->cb_v_break = as->volume->cb_v_break, vnode->cb_v_break = as->volume->cb_v_break,
afs_set_netfs_context(vnode);
op = afs_alloc_operation(key, as->volume); op = afs_alloc_operation(key, as->volume);
if (IS_ERR(op)) { if (IS_ERR(op)) {
...@@ -786,11 +799,8 @@ void afs_evict_inode(struct inode *inode) ...@@ -786,11 +799,8 @@ void afs_evict_inode(struct inode *inode)
afs_put_wb_key(wbk); afs_put_wb_key(wbk);
} }
#ifdef CONFIG_AFS_FSCACHE fscache_relinquish_cookie(afs_vnode_cache(vnode),
fscache_relinquish_cookie(vnode->cache,
test_bit(AFS_VNODE_DELETED, &vnode->flags)); test_bit(AFS_VNODE_DELETED, &vnode->flags));
vnode->cache = NULL;
#endif
afs_prune_wb_keys(vnode); afs_prune_wb_keys(vnode);
afs_put_permits(rcu_access_pointer(vnode->permit_cache)); afs_put_permits(rcu_access_pointer(vnode->permit_cache));
......
...@@ -207,7 +207,7 @@ struct afs_read { ...@@ -207,7 +207,7 @@ struct afs_read {
loff_t file_size; /* File size returned by server */ loff_t file_size; /* File size returned by server */
struct key *key; /* The key to use to reissue the read */ struct key *key; /* The key to use to reissue the read */
struct afs_vnode *vnode; /* The file being read into. */ struct afs_vnode *vnode; /* The file being read into. */
struct netfs_read_subrequest *subreq; /* Fscache helper read request this belongs to */ struct netfs_io_subrequest *subreq; /* Fscache helper read request this belongs to */
afs_dataversion_t data_version; /* Version number returned by server */ afs_dataversion_t data_version; /* Version number returned by server */
refcount_t usage; refcount_t usage;
unsigned int call_debug_id; unsigned int call_debug_id;
...@@ -619,15 +619,16 @@ enum afs_lock_state { ...@@ -619,15 +619,16 @@ enum afs_lock_state {
* leak from one inode to another. * leak from one inode to another.
*/ */
struct afs_vnode { struct afs_vnode {
struct inode vfs_inode; /* the VFS's inode record */ struct {
/* These must be contiguous */
struct inode vfs_inode; /* the VFS's inode record */
struct netfs_i_context netfs_ctx; /* Netfslib context */
};
struct afs_volume *volume; /* volume on which vnode resides */ struct afs_volume *volume; /* volume on which vnode resides */
struct afs_fid fid; /* the file identifier for this inode */ struct afs_fid fid; /* the file identifier for this inode */
struct afs_file_status status; /* AFS status info for this file */ struct afs_file_status status; /* AFS status info for this file */
afs_dataversion_t invalid_before; /* Child dentries are invalid before this */ afs_dataversion_t invalid_before; /* Child dentries are invalid before this */
#ifdef CONFIG_AFS_FSCACHE
struct fscache_cookie *cache; /* caching cookie */
#endif
struct afs_permits __rcu *permit_cache; /* cache of permits so far obtained */ struct afs_permits __rcu *permit_cache; /* cache of permits so far obtained */
struct mutex io_lock; /* Lock for serialising I/O on this mutex */ struct mutex io_lock; /* Lock for serialising I/O on this mutex */
struct rw_semaphore validate_lock; /* lock for validating this vnode */ struct rw_semaphore validate_lock; /* lock for validating this vnode */
...@@ -674,12 +675,20 @@ struct afs_vnode { ...@@ -674,12 +675,20 @@ struct afs_vnode {
static inline struct fscache_cookie *afs_vnode_cache(struct afs_vnode *vnode) static inline struct fscache_cookie *afs_vnode_cache(struct afs_vnode *vnode)
{ {
#ifdef CONFIG_AFS_FSCACHE #ifdef CONFIG_AFS_FSCACHE
return vnode->cache; return netfs_i_cookie(&vnode->vfs_inode);
#else #else
return NULL; return NULL;
#endif #endif
} }
static inline void afs_vnode_set_cache(struct afs_vnode *vnode,
struct fscache_cookie *cookie)
{
#ifdef CONFIG_AFS_FSCACHE
vnode->netfs_ctx.cache = cookie;
#endif
}
/* /*
* cached security record for one user's attempt to access a vnode * cached security record for one user's attempt to access a vnode
*/ */
...@@ -1063,7 +1072,7 @@ extern const struct address_space_operations afs_file_aops; ...@@ -1063,7 +1072,7 @@ extern const struct address_space_operations afs_file_aops;
extern const struct address_space_operations afs_symlink_aops; extern const struct address_space_operations afs_symlink_aops;
extern const struct inode_operations afs_file_inode_operations; extern const struct inode_operations afs_file_inode_operations;
extern const struct file_operations afs_file_operations; extern const struct file_operations afs_file_operations;
extern const struct netfs_read_request_ops afs_req_ops; extern const struct netfs_request_ops afs_req_ops;
extern int afs_cache_wb_key(struct afs_vnode *, struct afs_file *); extern int afs_cache_wb_key(struct afs_vnode *, struct afs_file *);
extern void afs_put_wb_key(struct afs_wb_key *); extern void afs_put_wb_key(struct afs_wb_key *);
......
...@@ -688,13 +688,11 @@ static struct inode *afs_alloc_inode(struct super_block *sb) ...@@ -688,13 +688,11 @@ static struct inode *afs_alloc_inode(struct super_block *sb)
/* Reset anything that shouldn't leak from one inode to the next. */ /* Reset anything that shouldn't leak from one inode to the next. */
memset(&vnode->fid, 0, sizeof(vnode->fid)); memset(&vnode->fid, 0, sizeof(vnode->fid));
memset(&vnode->status, 0, sizeof(vnode->status)); memset(&vnode->status, 0, sizeof(vnode->status));
afs_vnode_set_cache(vnode, NULL);
vnode->volume = NULL; vnode->volume = NULL;
vnode->lock_key = NULL; vnode->lock_key = NULL;
vnode->permit_cache = NULL; vnode->permit_cache = NULL;
#ifdef CONFIG_AFS_FSCACHE
vnode->cache = NULL;
#endif
vnode->flags = 1 << AFS_VNODE_UNSET; vnode->flags = 1 << AFS_VNODE_UNSET;
vnode->lock_state = AFS_VNODE_LOCK_NONE; vnode->lock_state = AFS_VNODE_LOCK_NONE;
......
...@@ -60,8 +60,7 @@ int afs_write_begin(struct file *file, struct address_space *mapping, ...@@ -60,8 +60,7 @@ int afs_write_begin(struct file *file, struct address_space *mapping,
* file. We need to do this before we get a lock on the page in case * file. We need to do this before we get a lock on the page in case
* there's more than one writer competing for the same cache block. * there's more than one writer competing for the same cache block.
*/ */
ret = netfs_write_begin(file, mapping, pos, len, flags, &folio, fsdata, ret = netfs_write_begin(file, mapping, pos, len, flags, &folio, fsdata);
&afs_req_ops, NULL);
if (ret < 0) if (ret < 0)
return ret; return ret;
...@@ -355,9 +354,10 @@ static const struct afs_operation_ops afs_store_data_operation = { ...@@ -355,9 +354,10 @@ static const struct afs_operation_ops afs_store_data_operation = {
static int afs_store_data(struct afs_vnode *vnode, struct iov_iter *iter, loff_t pos, static int afs_store_data(struct afs_vnode *vnode, struct iov_iter *iter, loff_t pos,
bool laundering) bool laundering)
{ {
struct netfs_i_context *ictx = &vnode->netfs_ctx;
struct afs_operation *op; struct afs_operation *op;
struct afs_wb_key *wbk = NULL; struct afs_wb_key *wbk = NULL;
loff_t size = iov_iter_count(iter), i_size; loff_t size = iov_iter_count(iter);
int ret = -ENOKEY; int ret = -ENOKEY;
_enter("%s{%llx:%llu.%u},%llx,%llx", _enter("%s{%llx:%llu.%u},%llx,%llx",
...@@ -379,15 +379,13 @@ static int afs_store_data(struct afs_vnode *vnode, struct iov_iter *iter, loff_t ...@@ -379,15 +379,13 @@ static int afs_store_data(struct afs_vnode *vnode, struct iov_iter *iter, loff_t
return -ENOMEM; return -ENOMEM;
} }
i_size = i_size_read(&vnode->vfs_inode);
afs_op_set_vnode(op, 0, vnode); afs_op_set_vnode(op, 0, vnode);
op->file[0].dv_delta = 1; op->file[0].dv_delta = 1;
op->file[0].modification = true; op->file[0].modification = true;
op->store.write_iter = iter; op->store.write_iter = iter;
op->store.pos = pos; op->store.pos = pos;
op->store.size = size; op->store.size = size;
op->store.i_size = max(pos + size, i_size); op->store.i_size = max(pos + size, ictx->remote_i_size);
op->store.laundering = laundering; op->store.laundering = laundering;
op->mtime = vnode->vfs_inode.i_mtime; op->mtime = vnode->vfs_inode.i_mtime;
op->flags |= AFS_OPERATION_UNINTR; op->flags |= AFS_OPERATION_UNINTR;
......
...@@ -380,18 +380,18 @@ static int cachefiles_write(struct netfs_cache_resources *cres, ...@@ -380,18 +380,18 @@ static int cachefiles_write(struct netfs_cache_resources *cres,
* Prepare a read operation, shortening it to a cached/uncached * Prepare a read operation, shortening it to a cached/uncached
* boundary as appropriate. * boundary as appropriate.
*/ */
static enum netfs_read_source cachefiles_prepare_read(struct netfs_read_subrequest *subreq, static enum netfs_io_source cachefiles_prepare_read(struct netfs_io_subrequest *subreq,
loff_t i_size) loff_t i_size)
{ {
enum cachefiles_prepare_read_trace why; enum cachefiles_prepare_read_trace why;
struct netfs_read_request *rreq = subreq->rreq; struct netfs_io_request *rreq = subreq->rreq;
struct netfs_cache_resources *cres = &rreq->cache_resources; struct netfs_cache_resources *cres = &rreq->cache_resources;
struct cachefiles_object *object; struct cachefiles_object *object;
struct cachefiles_cache *cache; struct cachefiles_cache *cache;
struct fscache_cookie *cookie = fscache_cres_cookie(cres); struct fscache_cookie *cookie = fscache_cres_cookie(cres);
const struct cred *saved_cred; const struct cred *saved_cred;
struct file *file = cachefiles_cres_file(cres); struct file *file = cachefiles_cres_file(cres);
enum netfs_read_source ret = NETFS_DOWNLOAD_FROM_SERVER; enum netfs_io_source ret = NETFS_DOWNLOAD_FROM_SERVER;
loff_t off, to; loff_t off, to;
ino_t ino = file ? file_inode(file)->i_ino : 0; ino_t ino = file ? file_inode(file)->i_ino : 0;
...@@ -404,7 +404,7 @@ static enum netfs_read_source cachefiles_prepare_read(struct netfs_read_subreque ...@@ -404,7 +404,7 @@ static enum netfs_read_source cachefiles_prepare_read(struct netfs_read_subreque
} }
if (test_bit(FSCACHE_COOKIE_NO_DATA_TO_READ, &cookie->flags)) { if (test_bit(FSCACHE_COOKIE_NO_DATA_TO_READ, &cookie->flags)) {
__set_bit(NETFS_SREQ_WRITE_TO_CACHE, &subreq->flags); __set_bit(NETFS_SREQ_COPY_TO_CACHE, &subreq->flags);
why = cachefiles_trace_read_no_data; why = cachefiles_trace_read_no_data;
goto out_no_object; goto out_no_object;
} }
...@@ -473,7 +473,7 @@ static enum netfs_read_source cachefiles_prepare_read(struct netfs_read_subreque ...@@ -473,7 +473,7 @@ static enum netfs_read_source cachefiles_prepare_read(struct netfs_read_subreque
goto out; goto out;
download_and_store: download_and_store:
__set_bit(NETFS_SREQ_WRITE_TO_CACHE, &subreq->flags); __set_bit(NETFS_SREQ_COPY_TO_CACHE, &subreq->flags);
out: out:
cachefiles_end_secure(cache, saved_cred); cachefiles_end_secure(cache, saved_cred);
out_no_object: out_no_object:
......
...@@ -182,7 +182,7 @@ static int ceph_releasepage(struct page *page, gfp_t gfp) ...@@ -182,7 +182,7 @@ static int ceph_releasepage(struct page *page, gfp_t gfp)
return 1; return 1;
} }
static void ceph_netfs_expand_readahead(struct netfs_read_request *rreq) static void ceph_netfs_expand_readahead(struct netfs_io_request *rreq)
{ {
struct inode *inode = rreq->inode; struct inode *inode = rreq->inode;
struct ceph_inode_info *ci = ceph_inode(inode); struct ceph_inode_info *ci = ceph_inode(inode);
...@@ -199,7 +199,7 @@ static void ceph_netfs_expand_readahead(struct netfs_read_request *rreq) ...@@ -199,7 +199,7 @@ static void ceph_netfs_expand_readahead(struct netfs_read_request *rreq)
rreq->len = roundup(rreq->len, lo->stripe_unit); rreq->len = roundup(rreq->len, lo->stripe_unit);
} }
static bool ceph_netfs_clamp_length(struct netfs_read_subrequest *subreq) static bool ceph_netfs_clamp_length(struct netfs_io_subrequest *subreq)
{ {
struct inode *inode = subreq->rreq->inode; struct inode *inode = subreq->rreq->inode;
struct ceph_fs_client *fsc = ceph_inode_to_client(inode); struct ceph_fs_client *fsc = ceph_inode_to_client(inode);
...@@ -218,7 +218,7 @@ static void finish_netfs_read(struct ceph_osd_request *req) ...@@ -218,7 +218,7 @@ static void finish_netfs_read(struct ceph_osd_request *req)
{ {
struct ceph_fs_client *fsc = ceph_inode_to_client(req->r_inode); struct ceph_fs_client *fsc = ceph_inode_to_client(req->r_inode);
struct ceph_osd_data *osd_data = osd_req_op_extent_osd_data(req, 0); struct ceph_osd_data *osd_data = osd_req_op_extent_osd_data(req, 0);
struct netfs_read_subrequest *subreq = req->r_priv; struct netfs_io_subrequest *subreq = req->r_priv;
int num_pages; int num_pages;
int err = req->r_result; int err = req->r_result;
...@@ -244,9 +244,9 @@ static void finish_netfs_read(struct ceph_osd_request *req) ...@@ -244,9 +244,9 @@ static void finish_netfs_read(struct ceph_osd_request *req)
iput(req->r_inode); iput(req->r_inode);
} }
static bool ceph_netfs_issue_op_inline(struct netfs_read_subrequest *subreq) static bool ceph_netfs_issue_op_inline(struct netfs_io_subrequest *subreq)
{ {
struct netfs_read_request *rreq = subreq->rreq; struct netfs_io_request *rreq = subreq->rreq;
struct inode *inode = rreq->inode; struct inode *inode = rreq->inode;
struct ceph_mds_reply_info_parsed *rinfo; struct ceph_mds_reply_info_parsed *rinfo;
struct ceph_mds_reply_info_in *iinfo; struct ceph_mds_reply_info_in *iinfo;
...@@ -258,7 +258,7 @@ static bool ceph_netfs_issue_op_inline(struct netfs_read_subrequest *subreq) ...@@ -258,7 +258,7 @@ static bool ceph_netfs_issue_op_inline(struct netfs_read_subrequest *subreq)
size_t len; size_t len;
__set_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags); __set_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags);
__clear_bit(NETFS_SREQ_WRITE_TO_CACHE, &subreq->flags); __clear_bit(NETFS_SREQ_COPY_TO_CACHE, &subreq->flags);
if (subreq->start >= inode->i_size) if (subreq->start >= inode->i_size)
goto out; goto out;
...@@ -297,9 +297,9 @@ static bool ceph_netfs_issue_op_inline(struct netfs_read_subrequest *subreq) ...@@ -297,9 +297,9 @@ static bool ceph_netfs_issue_op_inline(struct netfs_read_subrequest *subreq)
return true; return true;
} }
static void ceph_netfs_issue_op(struct netfs_read_subrequest *subreq) static void ceph_netfs_issue_read(struct netfs_io_subrequest *subreq)
{ {
struct netfs_read_request *rreq = subreq->rreq; struct netfs_io_request *rreq = subreq->rreq;
struct inode *inode = rreq->inode; struct inode *inode = rreq->inode;
struct ceph_inode_info *ci = ceph_inode(inode); struct ceph_inode_info *ci = ceph_inode(inode);
struct ceph_fs_client *fsc = ceph_inode_to_client(inode); struct ceph_fs_client *fsc = ceph_inode_to_client(inode);
...@@ -353,6 +353,45 @@ static void ceph_netfs_issue_op(struct netfs_read_subrequest *subreq) ...@@ -353,6 +353,45 @@ static void ceph_netfs_issue_op(struct netfs_read_subrequest *subreq)
dout("%s: result %d\n", __func__, err); dout("%s: result %d\n", __func__, err);
} }
static int ceph_init_request(struct netfs_io_request *rreq, struct file *file)
{
struct inode *inode = rreq->inode;
int got = 0, want = CEPH_CAP_FILE_CACHE;
int ret = 0;
if (rreq->origin != NETFS_READAHEAD)
return 0;
if (file) {
struct ceph_rw_context *rw_ctx;
struct ceph_file_info *fi = file->private_data;
rw_ctx = ceph_find_rw_context(fi);
if (rw_ctx)
return 0;
}
/*
* readahead callers do not necessarily hold Fcb caps
* (e.g. fadvise, madvise).
*/
ret = ceph_try_get_caps(inode, CEPH_CAP_FILE_RD, want, true, &got);
if (ret < 0) {
dout("start_read %p, error getting cap\n", inode);
return ret;
}
if (!(got & want)) {
dout("start_read %p, no cache cap\n", inode);
return -EACCES;
}
if (ret == 0)
return -EACCES;
rreq->netfs_priv = (void *)(uintptr_t)got;
return 0;
}
static void ceph_readahead_cleanup(struct address_space *mapping, void *priv) static void ceph_readahead_cleanup(struct address_space *mapping, void *priv)
{ {
struct inode *inode = mapping->host; struct inode *inode = mapping->host;
...@@ -363,64 +402,16 @@ static void ceph_readahead_cleanup(struct address_space *mapping, void *priv) ...@@ -363,64 +402,16 @@ static void ceph_readahead_cleanup(struct address_space *mapping, void *priv)
ceph_put_cap_refs(ci, got); ceph_put_cap_refs(ci, got);
} }
static const struct netfs_read_request_ops ceph_netfs_read_ops = { const struct netfs_request_ops ceph_netfs_ops = {
.is_cache_enabled = ceph_is_cache_enabled, .init_request = ceph_init_request,
.begin_cache_operation = ceph_begin_cache_operation, .begin_cache_operation = ceph_begin_cache_operation,
.issue_op = ceph_netfs_issue_op, .issue_read = ceph_netfs_issue_read,
.expand_readahead = ceph_netfs_expand_readahead, .expand_readahead = ceph_netfs_expand_readahead,
.clamp_length = ceph_netfs_clamp_length, .clamp_length = ceph_netfs_clamp_length,
.check_write_begin = ceph_netfs_check_write_begin, .check_write_begin = ceph_netfs_check_write_begin,
.cleanup = ceph_readahead_cleanup, .cleanup = ceph_readahead_cleanup,
}; };
/* read a single page, without unlocking it. */
static int ceph_readpage(struct file *file, struct page *subpage)
{
struct folio *folio = page_folio(subpage);
struct inode *inode = file_inode(file);
struct ceph_inode_info *ci = ceph_inode(inode);
struct ceph_vino vino = ceph_vino(inode);
size_t len = folio_size(folio);
u64 off = folio_file_pos(folio);
dout("readpage ino %llx.%llx file %p off %llu len %zu folio %p index %lu\n inline %d",
vino.ino, vino.snap, file, off, len, folio, folio_index(folio),
ci->i_inline_version != CEPH_INLINE_NONE);
return netfs_readpage(file, folio, &ceph_netfs_read_ops, NULL);
}
static void ceph_readahead(struct readahead_control *ractl)
{
struct inode *inode = file_inode(ractl->file);
struct ceph_file_info *fi = ractl->file->private_data;
struct ceph_rw_context *rw_ctx;
int got = 0;
int ret = 0;
if (ceph_inode(inode)->i_inline_version != CEPH_INLINE_NONE)
return;
rw_ctx = ceph_find_rw_context(fi);
if (!rw_ctx) {
/*
* readahead callers do not necessarily hold Fcb caps
* (e.g. fadvise, madvise).
*/
int want = CEPH_CAP_FILE_CACHE;
ret = ceph_try_get_caps(inode, CEPH_CAP_FILE_RD, want, true, &got);
if (ret < 0)
dout("start_read %p, error getting cap\n", inode);
else if (!(got & want))
dout("start_read %p, no cache cap\n", inode);
if (ret <= 0)
return;
}
netfs_readahead(ractl, &ceph_netfs_read_ops, (void *)(uintptr_t)got);
}
#ifdef CONFIG_CEPH_FSCACHE #ifdef CONFIG_CEPH_FSCACHE
static void ceph_set_page_fscache(struct page *page) static void ceph_set_page_fscache(struct page *page)
{ {
...@@ -1327,8 +1318,7 @@ static int ceph_write_begin(struct file *file, struct address_space *mapping, ...@@ -1327,8 +1318,7 @@ static int ceph_write_begin(struct file *file, struct address_space *mapping,
struct folio *folio = NULL; struct folio *folio = NULL;
int r; int r;
r = netfs_write_begin(file, inode->i_mapping, pos, len, 0, &folio, NULL, r = netfs_write_begin(file, inode->i_mapping, pos, len, 0, &folio, NULL);
&ceph_netfs_read_ops, NULL);
if (r == 0) if (r == 0)
folio_wait_fscache(folio); folio_wait_fscache(folio);
if (r < 0) { if (r < 0) {
...@@ -1382,8 +1372,8 @@ static int ceph_write_end(struct file *file, struct address_space *mapping, ...@@ -1382,8 +1372,8 @@ static int ceph_write_end(struct file *file, struct address_space *mapping,
} }
const struct address_space_operations ceph_aops = { const struct address_space_operations ceph_aops = {
.readpage = ceph_readpage, .readpage = netfs_readpage,
.readahead = ceph_readahead, .readahead = netfs_readahead,
.writepage = ceph_writepage, .writepage = ceph_writepage,
.writepages = ceph_writepages_start, .writepages = ceph_writepages_start,
.write_begin = ceph_write_begin, .write_begin = ceph_write_begin,
......
@@ -29,26 +29,25 @@ void ceph_fscache_register_inode_cookie(struct inode *inode)
if (!(inode->i_state & I_NEW))
return;

-WARN_ON_ONCE(ci->fscache);
+WARN_ON_ONCE(ci->netfs_ctx.cache);

-ci->fscache = fscache_acquire_cookie(fsc->fscache, 0,
-&ci->i_vino, sizeof(ci->i_vino),
-&ci->i_version, sizeof(ci->i_version),
-i_size_read(inode));
+ci->netfs_ctx.cache =
+fscache_acquire_cookie(fsc->fscache, 0,
+&ci->i_vino, sizeof(ci->i_vino),
+&ci->i_version, sizeof(ci->i_version),
+i_size_read(inode));
}

-void ceph_fscache_unregister_inode_cookie(struct ceph_inode_info* ci)
+void ceph_fscache_unregister_inode_cookie(struct ceph_inode_info *ci)
{
-struct fscache_cookie *cookie = ci->fscache;
-
-fscache_relinquish_cookie(cookie, false);
+fscache_relinquish_cookie(ceph_fscache_cookie(ci), false);
}

void ceph_fscache_use_cookie(struct inode *inode, bool will_modify)
{
struct ceph_inode_info *ci = ceph_inode(inode);

-fscache_use_cookie(ci->fscache, will_modify);
+fscache_use_cookie(ceph_fscache_cookie(ci), will_modify);
}

void ceph_fscache_unuse_cookie(struct inode *inode, bool update)
@@ -58,9 +57,10 @@ void ceph_fscache_unuse_cookie(struct inode *inode, bool update)
if (update) {
loff_t i_size = i_size_read(inode);

-fscache_unuse_cookie(ci->fscache, &ci->i_version, &i_size);
+fscache_unuse_cookie(ceph_fscache_cookie(ci),
+&ci->i_version, &i_size);
} else {
-fscache_unuse_cookie(ci->fscache, NULL, NULL);
+fscache_unuse_cookie(ceph_fscache_cookie(ci), NULL, NULL);
}
}
@@ -69,14 +69,14 @@ void ceph_fscache_update(struct inode *inode)
struct ceph_inode_info *ci = ceph_inode(inode);
loff_t i_size = i_size_read(inode);

-fscache_update_cookie(ci->fscache, &ci->i_version, &i_size);
+fscache_update_cookie(ceph_fscache_cookie(ci), &ci->i_version, &i_size);
}

void ceph_fscache_invalidate(struct inode *inode, bool dio_write)
{
struct ceph_inode_info *ci = ceph_inode(inode);

-fscache_invalidate(ceph_inode(inode)->fscache,
+fscache_invalidate(ceph_fscache_cookie(ci),
&ci->i_version, i_size_read(inode),
dio_write ? FSCACHE_INVAL_DIO_WRITE : 0);
}
...
@@ -26,14 +26,9 @@ void ceph_fscache_unuse_cookie(struct inode *inode, bool update);
void ceph_fscache_update(struct inode *inode);
void ceph_fscache_invalidate(struct inode *inode, bool dio_write);

-static inline void ceph_fscache_inode_init(struct ceph_inode_info *ci)
-{
-ci->fscache = NULL;
-}
static inline struct fscache_cookie *ceph_fscache_cookie(struct ceph_inode_info *ci)
{
-return ci->fscache;
+return netfs_i_cookie(&ci->vfs_inode);
}

static inline void ceph_fscache_resize(struct inode *inode, loff_t to)
@@ -62,7 +57,7 @@ static inline int ceph_fscache_dirty_folio(struct address_space *mapping,
return fscache_dirty_folio(mapping, folio, ceph_fscache_cookie(ci));
}

-static inline int ceph_begin_cache_operation(struct netfs_read_request *rreq)
+static inline int ceph_begin_cache_operation(struct netfs_io_request *rreq)
{
struct fscache_cookie *cookie = ceph_fscache_cookie(ceph_inode(rreq->inode));
@@ -91,10 +86,6 @@ static inline void ceph_fscache_unregister_fs(struct ceph_fs_client* fsc)
{
}

-static inline void ceph_fscache_inode_init(struct ceph_inode_info *ci)
-{
-}
static inline void ceph_fscache_register_inode_cookie(struct inode *inode)
{
}
@@ -144,7 +135,7 @@ static inline bool ceph_is_cache_enabled(struct inode *inode)
return false;
}

-static inline int ceph_begin_cache_operation(struct netfs_read_request *rreq)
+static inline int ceph_begin_cache_operation(struct netfs_io_request *rreq)
{
return -ENOBUFS;
}
...
@@ -459,6 +459,9 @@ struct inode *ceph_alloc_inode(struct super_block *sb)
dout("alloc_inode %p\n", &ci->vfs_inode);

+/* Set parameters for the netfs library */
+netfs_i_context_init(&ci->vfs_inode, &ceph_netfs_ops);
+
spin_lock_init(&ci->i_ceph_lock);
ci->i_version = 0;
@@ -544,9 +547,6 @@ struct inode *ceph_alloc_inode(struct super_block *sb)
INIT_WORK(&ci->i_work, ceph_inode_work);
ci->i_work_mask = 0;
memset(&ci->i_btime, '\0', sizeof(ci->i_btime));

-ceph_fscache_inode_init(ci);
-
return &ci->vfs_inode;
}
...
@@ -17,13 +17,11 @@
#include <linux/posix_acl.h>
#include <linux/refcount.h>
#include <linux/security.h>
+#include <linux/netfs.h>
+#include <linux/fscache.h>
#include <linux/ceph/libceph.h>

-#ifdef CONFIG_CEPH_FSCACHE
-#include <linux/fscache.h>
-#endif
-
/* large granularity for statfs utilization stats to facilitate
* large volume sizes on 32-bit machines. */
#define CEPH_BLOCK_SHIFT 22 /* 4 MB */
@@ -318,6 +316,11 @@ struct ceph_inode_xattrs_info {
* Ceph inode.
*/
struct ceph_inode_info {
+struct {
+/* These must be contiguous */
+struct inode vfs_inode;
+struct netfs_i_context netfs_ctx; /* Netfslib context */
+};
struct ceph_vino i_vino; /* ceph ino + snap */

spinlock_t i_ceph_lock;
@@ -428,11 +431,6 @@ struct ceph_inode_info {
struct work_struct i_work;
unsigned long i_work_mask;

-#ifdef CONFIG_CEPH_FSCACHE
-struct fscache_cookie *fscache;
-#endif
-struct inode vfs_inode; /* at end */
};

static inline struct ceph_inode_info *
@@ -1216,6 +1214,7 @@ extern void __ceph_touch_fmode(struct ceph_inode_info *ci,

/* addr.c */
extern const struct address_space_operations ceph_aops;
+extern const struct netfs_request_ops ceph_netfs_ops;
extern int ceph_mmap(struct file *file, struct vm_area_struct *vma);
extern int ceph_uninline_data(struct file *file);
extern int ceph_pool_perm_check(struct inode *inode, int need);
...
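The anonymous struct added to ceph_inode_info above (and to cifsInodeInfo below) encodes the layout contract that the new netfs_i_context() helper in include/linux/netfs.h relies on: the netfs context must sit immediately after the VFS inode, because the helper finds it with pointer arithmetic. A minimal sketch of what another netfs-using filesystem would do to opt in; the names myfs_inode, myfs_netfs_ops and myfs_alloc_inode are hypothetical and not part of this series:

#include <linux/fs.h>
#include <linux/slab.h>
#include <linux/netfs.h>

/* Hypothetical request ops; a real filesystem wires up ->issue_read() etc. */
static const struct netfs_request_ops myfs_netfs_ops = {
	/* ->init_request, ->issue_read, ... */
};

struct myfs_inode {
	struct {
		/* These must be contiguous: netfs_i_context() does (inode + 1) */
		struct inode		vfs_inode;
		struct netfs_i_context	netfs_ctx;
	};
	/* filesystem-private fields follow */
};

static struct inode *myfs_alloc_inode(struct super_block *sb)
{
	struct myfs_inode *mi = kzalloc(sizeof(*mi), GFP_KERNEL);

	if (!mi)
		return NULL;
	inode_init_once(&mi->vfs_inode);	/* real code would use a kmem_cache ctor */
	/* Mirrors what ceph_alloc_inode() does in the inode.c hunk above */
	netfs_i_context_init(&mi->vfs_inode, &myfs_netfs_ops);
	return &mi->vfs_inode;
}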
@@ -16,6 +16,7 @@
#include <linux/mempool.h>
#include <linux/workqueue.h>
#include <linux/utsname.h>
+#include <linux/netfs.h>
#include "cifs_fs_sb.h"
#include "cifsacl.h"
#include <crypto/internal/hash.h>
@@ -1402,6 +1403,11 @@ void cifsFileInfo_put(struct cifsFileInfo *cifs_file);
*/
struct cifsInodeInfo {
+struct {
+/* These must be contiguous */
+struct inode vfs_inode; /* the VFS's inode record */
+struct netfs_i_context netfs_ctx; /* Netfslib context */
+};
bool can_cache_brlcks;
struct list_head llist; /* locks helb by this inode */
/*
@@ -1432,10 +1438,6 @@ struct cifsInodeInfo {
u64 uniqueid; /* server inode number */
u64 createtime; /* creation time on server */
__u8 lease_key[SMB2_LEASE_KEY_SIZE]; /* lease key for this inode */
-#ifdef CONFIG_CIFS_FSCACHE
-struct fscache_cookie *fscache;
-#endif
-struct inode vfs_inode;
struct list_head deferred_closes; /* list of deferred closes */
spinlock_t deferred_lock; /* protection on deferred list */
bool lease_granted; /* Flag to indicate whether lease or oplock is granted. */
...
@@ -103,7 +103,7 @@ void cifs_fscache_get_inode_cookie(struct inode *inode)
cifs_fscache_fill_coherency(&cifsi->vfs_inode, &cd);

-cifsi->fscache =
+cifsi->netfs_ctx.cache =
fscache_acquire_cookie(tcon->fscache, 0,
&cifsi->uniqueid, sizeof(cifsi->uniqueid),
&cd, sizeof(cd),
@@ -126,22 +126,15 @@ void cifs_fscache_unuse_inode_cookie(struct inode *inode, bool update)
void cifs_fscache_release_inode_cookie(struct inode *inode)
{
struct cifsInodeInfo *cifsi = CIFS_I(inode);
+struct fscache_cookie *cookie = cifs_inode_cookie(inode);

-if (cifsi->fscache) {
-cifs_dbg(FYI, "%s: (0x%p)\n", __func__, cifsi->fscache);
-fscache_relinquish_cookie(cifsi->fscache, false);
-cifsi->fscache = NULL;
+if (cookie) {
+cifs_dbg(FYI, "%s: (0x%p)\n", __func__, cookie);
+fscache_relinquish_cookie(cookie, false);
+cifsi->netfs_ctx.cache = NULL;
}
}

-static inline void fscache_end_operation(struct netfs_cache_resources *cres)
-{
-const struct netfs_cache_ops *ops = fscache_operation_valid(cres);
-if (ops)
-ops->end_operation(cres);
-}
/*
* Fallback page reading interface.
*/
...
@@ -61,7 +61,7 @@ void cifs_fscache_fill_coherency(struct inode *inode,
static inline struct fscache_cookie *cifs_inode_cookie(struct inode *inode)
{
-return CIFS_I(inode)->fscache;
+return netfs_i_cookie(inode);
}

static inline void cifs_invalidate_cache(struct inode *inode, unsigned int flags)
...
@@ -70,17 +70,6 @@ static inline void fscache_see_cookie(struct fscache_cookie *cookie,
where);
}

-/*
-* io.c
-*/
-static inline void fscache_end_operation(struct netfs_cache_resources *cres)
-{
-const struct netfs_cache_ops *ops = fscache_operation_valid(cres);
-if (ops)
-ops->end_operation(cres);
-}
-
/*
* main.c
*/
...
# SPDX-License-Identifier: GPL-2.0
-netfs-y := read_helper.o stats.o
+netfs-y := \
+buffered_read.o \
+io.o \
+main.o \
+objects.o
+
+netfs-$(CONFIG_NETFS_STATS) += stats.o

obj-$(CONFIG_NETFS_SUPPORT) := netfs.o
@@ -5,6 +5,10 @@
* Written by David Howells (dhowells@redhat.com)
*/

+#include <linux/netfs.h>
+#include <linux/fscache.h>
+#include <trace/events/netfs.h>
+
#ifdef pr_fmt
#undef pr_fmt
#endif
@@ -12,10 +16,39 @@
#define pr_fmt(fmt) "netfs: " fmt

/*
-* read_helper.c
+* buffered_read.c
+*/
+void netfs_rreq_unlock_folios(struct netfs_io_request *rreq);
+
+/*
+* io.c
+*/
+int netfs_begin_read(struct netfs_io_request *rreq, bool sync);
+
+/*
+* main.c
*/
extern unsigned int netfs_debug;
+
+/*
+* objects.c
+*/
+struct netfs_io_request *netfs_alloc_request(struct address_space *mapping,
+struct file *file,
+loff_t start, size_t len,
+enum netfs_io_origin origin);
+void netfs_get_request(struct netfs_io_request *rreq, enum netfs_rreq_ref_trace what);
+void netfs_clear_subrequests(struct netfs_io_request *rreq, bool was_async);
+void netfs_put_request(struct netfs_io_request *rreq, bool was_async,
+enum netfs_rreq_ref_trace what);
+struct netfs_io_subrequest *netfs_alloc_subrequest(struct netfs_io_request *rreq);
+
+static inline void netfs_see_request(struct netfs_io_request *rreq,
+enum netfs_rreq_ref_trace what)
+{
+trace_netfs_rreq_ref(rreq->debug_id, refcount_read(&rreq->ref), what);
+}

/*
* stats.c
*/
@@ -55,6 +88,21 @@ static inline void netfs_stat_d(atomic_t *stat)
#define netfs_stat_d(x) do {} while(0)
#endif

+/*
+* Miscellaneous functions.
+*/
+static inline bool netfs_is_cache_enabled(struct netfs_i_context *ctx)
+{
+#if IS_ENABLED(CONFIG_FSCACHE)
+struct fscache_cookie *cookie = ctx->cache;
+
+return fscache_cookie_valid(cookie) && cookie->cache_priv &&
+fscache_cookie_enabled(cookie);
+#else
+return false;
+#endif
+}
+
/*****************************************************************************/
/*
* debug tracing
...
// SPDX-License-Identifier: GPL-2.0-or-later
/* Miscellaneous bits for the netfs support library.
*
* Copyright (C) 2022 Red Hat, Inc. All Rights Reserved.
* Written by David Howells (dhowells@redhat.com)
*/
#include <linux/module.h>
#include <linux/export.h>
#include "internal.h"
#define CREATE_TRACE_POINTS
#include <trace/events/netfs.h>
MODULE_DESCRIPTION("Network fs support");
MODULE_AUTHOR("Red Hat, Inc.");
MODULE_LICENSE("GPL");
unsigned netfs_debug;
module_param_named(debug, netfs_debug, uint, S_IWUSR | S_IRUGO);
MODULE_PARM_DESC(netfs_debug, "Netfs support debugging mask");
// SPDX-License-Identifier: GPL-2.0-only
/* Object lifetime handling and tracing.
*
* Copyright (C) 2022 Red Hat, Inc. All Rights Reserved.
* Written by David Howells (dhowells@redhat.com)
*/
#include <linux/slab.h>
#include "internal.h"
/*
* Allocate an I/O request and initialise it.
*/
struct netfs_io_request *netfs_alloc_request(struct address_space *mapping,
struct file *file,
loff_t start, size_t len,
enum netfs_io_origin origin)
{
static atomic_t debug_ids;
struct inode *inode = file ? file_inode(file) : mapping->host;
struct netfs_i_context *ctx = netfs_i_context(inode);
struct netfs_io_request *rreq;
int ret;
rreq = kzalloc(sizeof(struct netfs_io_request), GFP_KERNEL);
if (!rreq)
return ERR_PTR(-ENOMEM);
rreq->start = start;
rreq->len = len;
rreq->origin = origin;
rreq->netfs_ops = ctx->ops;
rreq->mapping = mapping;
rreq->inode = inode;
rreq->i_size = i_size_read(inode);
rreq->debug_id = atomic_inc_return(&debug_ids);
INIT_LIST_HEAD(&rreq->subrequests);
refcount_set(&rreq->ref, 1);
__set_bit(NETFS_RREQ_IN_PROGRESS, &rreq->flags);
if (rreq->netfs_ops->init_request) {
ret = rreq->netfs_ops->init_request(rreq, file);
if (ret < 0) {
kfree(rreq);
return ERR_PTR(ret);
}
}
netfs_stat(&netfs_n_rh_rreq);
return rreq;
}
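As the allocator above shows, ->init_request() is now the single per-request hook (replacing the old ->init_rreq() and ->is_cache_enabled() methods), and a non-zero return aborts the whole allocation. A minimal sketch of such a hook for a hypothetical filesystem; the myfs_* names and the use of file->private_data as the private pointer are illustrative only, not taken from the patch:

#include <linux/fs.h>
#include <linux/netfs.h>

/* Hypothetical hook: remember per-open-file state for the I/O phase. */
static int myfs_init_request(struct netfs_io_request *rreq, struct file *file)
{
	/* The read and write_begin paths pass the file being operated on. */
	rreq->netfs_priv = file ? file->private_data : NULL;
	return 0;			/* a negative return aborts the request */
}

static const struct netfs_request_ops myfs_netfs_ops = {
	.init_request	= myfs_init_request,
	/* .issue_read, .begin_cache_operation, ... */
};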
void netfs_get_request(struct netfs_io_request *rreq, enum netfs_rreq_ref_trace what)
{
int r;
__refcount_inc(&rreq->ref, &r);
trace_netfs_rreq_ref(rreq->debug_id, r + 1, what);
}
void netfs_clear_subrequests(struct netfs_io_request *rreq, bool was_async)
{
struct netfs_io_subrequest *subreq;
while (!list_empty(&rreq->subrequests)) {
subreq = list_first_entry(&rreq->subrequests,
struct netfs_io_subrequest, rreq_link);
list_del(&subreq->rreq_link);
netfs_put_subrequest(subreq, was_async,
netfs_sreq_trace_put_clear);
}
}
static void netfs_free_request(struct work_struct *work)
{
struct netfs_io_request *rreq =
container_of(work, struct netfs_io_request, work);
netfs_clear_subrequests(rreq, false);
if (rreq->netfs_priv)
rreq->netfs_ops->cleanup(rreq->mapping, rreq->netfs_priv);
trace_netfs_rreq(rreq, netfs_rreq_trace_free);
if (rreq->cache_resources.ops)
rreq->cache_resources.ops->end_operation(&rreq->cache_resources);
kfree(rreq);
netfs_stat_d(&netfs_n_rh_rreq);
}
void netfs_put_request(struct netfs_io_request *rreq, bool was_async,
enum netfs_rreq_ref_trace what)
{
unsigned int debug_id = rreq->debug_id;
bool dead;
int r;
dead = __refcount_dec_and_test(&rreq->ref, &r);
trace_netfs_rreq_ref(debug_id, r - 1, what);
if (dead) {
if (was_async) {
rreq->work.func = netfs_free_request;
if (!queue_work(system_unbound_wq, &rreq->work))
BUG();
} else {
netfs_free_request(&rreq->work);
}
}
}
/*
* Allocate and partially initialise an I/O subrequest structure.
*/
struct netfs_io_subrequest *netfs_alloc_subrequest(struct netfs_io_request *rreq)
{
struct netfs_io_subrequest *subreq;
subreq = kzalloc(sizeof(struct netfs_io_subrequest), GFP_KERNEL);
if (subreq) {
INIT_LIST_HEAD(&subreq->rreq_link);
refcount_set(&subreq->ref, 2);
subreq->rreq = rreq;
netfs_get_request(rreq, netfs_rreq_trace_get_subreq);
netfs_stat(&netfs_n_rh_sreq);
}
return subreq;
}
void netfs_get_subrequest(struct netfs_io_subrequest *subreq,
enum netfs_sreq_ref_trace what)
{
int r;
__refcount_inc(&subreq->ref, &r);
trace_netfs_sreq_ref(subreq->rreq->debug_id, subreq->debug_index, r + 1,
what);
}
static void netfs_free_subrequest(struct netfs_io_subrequest *subreq,
bool was_async)
{
struct netfs_io_request *rreq = subreq->rreq;
trace_netfs_sreq(subreq, netfs_sreq_trace_free);
kfree(subreq);
netfs_stat_d(&netfs_n_rh_sreq);
netfs_put_request(rreq, was_async, netfs_rreq_trace_put_subreq);
}
void netfs_put_subrequest(struct netfs_io_subrequest *subreq, bool was_async,
enum netfs_sreq_ref_trace what)
{
unsigned int debug_index = subreq->debug_index;
unsigned int debug_id = subreq->rreq->debug_id;
bool dead;
int r;
dead = __refcount_dec_and_test(&subreq->ref, &r);
trace_netfs_sreq_ref(debug_id, debug_index, r - 1, what);
if (dead)
netfs_free_subrequest(subreq, was_async);
}
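Every get/put above takes an enum netfs_rreq_ref_trace or netfs_sreq_ref_trace label so that the new netfs_rreq_ref/netfs_sreq_ref tracepoints (added to the trace header near the end of this series) can record why a reference count moved. A hedged sketch of the calling pattern, using the netfs_rreq_trace_get_hold/put_hold labels defined in those tables; myfs_hold_and_process is an illustrative caller, not code from the patch:

#include <linux/netfs.h>
#include "internal.h"	/* the fs/netfs/internal.h declarations shown earlier */

static void myfs_hold_and_process(struct netfs_io_request *rreq)
{
	/* Pin the request while we work on it, and log why. */
	netfs_get_request(rreq, netfs_rreq_trace_get_hold);

	/* ... submit subrequests, collect results ... */

	/* Drop the pin; the final put frees the request via netfs_free_request(). */
	netfs_put_request(rreq, false, netfs_rreq_trace_put_hold);
}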
@@ -7,7 +7,6 @@
#include <linux/export.h>
#include <linux/seq_file.h>
-#include <linux/netfs.h>
#include "internal.h"

atomic_t netfs_n_rh_readahead;
...
@@ -238,14 +238,6 @@ void nfs_fscache_release_file(struct inode *inode, struct file *filp)
}
}

-static inline void fscache_end_operation(struct netfs_cache_resources *cres)
-{
-const struct netfs_cache_ops *ops = fscache_operation_valid(cres);
-if (ops)
-ops->end_operation(cres);
-}
-
/*
* Fallback page reading interface.
*/
...
@@ -456,6 +456,20 @@ int fscache_begin_read_operation(struct netfs_cache_resources *cres,
return -ENOBUFS;
}
/**
* fscache_end_operation - End the read operation for the netfs lib
* @cres: The cache resources for the read operation
*
* Clean up the resources at the end of the read request.
*/
static inline void fscache_end_operation(struct netfs_cache_resources *cres)
{
const struct netfs_cache_ops *ops = fscache_operation_valid(cres);
if (ops)
ops->end_operation(cres);
}
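With fscache_end_operation() now exported here, the private copies deleted from the cifs and nfs hunks above become redundant. A rough sketch of the fallback read pattern those callers follow, assuming a synchronous read (NULL termination handler) and the NETFS_READ_HOLE_IGNORE policy; myfs_fallback_read is an illustrative name, not code from this series:

#include <linux/fscache.h>
#include <linux/netfs.h>
#include <linux/uio.h>

static int myfs_fallback_read(struct fscache_cookie *cookie, loff_t pos,
			      struct iov_iter *iter)
{
	struct netfs_cache_resources cres;
	int ret;

	memset(&cres, 0, sizeof(cres));
	ret = fscache_begin_read_operation(&cres, cookie);
	if (ret < 0)
		return ret;

	/* A NULL term_func makes the cache back end run synchronously. */
	ret = fscache_read(&cres, pos, iter, NETFS_READ_HOLE_IGNORE, NULL, NULL);
	fscache_end_operation(&cres);
	return ret;
}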
/**
* fscache_read - Start a read from the cache.
* @cres: The cache resources to use
...
@@ -18,6 +18,8 @@
#include <linux/fs.h>
#include <linux/pagemap.h>

+enum netfs_sreq_ref_trace;
+
/*
* Overload PG_private_2 to give us PG_fscache - this is used to indicate that
* a page is currently backed by a local disk cache
@@ -106,7 +108,7 @@ static inline int wait_on_page_fscache_killable(struct page *page)
return folio_wait_private_2_killable(page_folio(page));
}

-enum netfs_read_source {
+enum netfs_io_source {
NETFS_FILL_WITH_ZEROES,
NETFS_DOWNLOAD_FROM_SERVER,
NETFS_READ_FROM_CACHE,
@@ -116,6 +118,17 @@ enum netfs_read_source {
typedef void (*netfs_io_terminated_t)(void *priv, ssize_t transferred_or_error,
bool was_async);
/*
* Per-inode description. This must be directly after the inode struct.
*/
struct netfs_i_context {
const struct netfs_request_ops *ops;
#if IS_ENABLED(CONFIG_FSCACHE)
struct fscache_cookie *cache;
#endif
loff_t remote_i_size; /* Size of the remote file */
};
/*
* Resources required to do operations on a cache.
*/
@@ -130,69 +143,75 @@ struct netfs_cache_resources {
/*
* Descriptor for a single component subrequest.
*/
-struct netfs_read_subrequest {
-struct netfs_read_request *rreq; /* Supervising read request */
+struct netfs_io_subrequest {
+struct netfs_io_request *rreq; /* Supervising I/O request */
struct list_head rreq_link; /* Link in rreq->subrequests */
loff_t start; /* Where to start the I/O */
size_t len; /* Size of the I/O */
size_t transferred; /* Amount of data transferred */
-refcount_t usage;
+refcount_t ref;
short error; /* 0 or error that occurred */
unsigned short debug_index; /* Index in list (for debugging output) */
-enum netfs_read_source source; /* Where to read from */
+enum netfs_io_source source; /* Where to read from/write to */
unsigned long flags;
-#define NETFS_SREQ_WRITE_TO_CACHE 0 /* Set if should write to cache */
+#define NETFS_SREQ_COPY_TO_CACHE 0 /* Set if should copy the data to the cache */
#define NETFS_SREQ_CLEAR_TAIL 1 /* Set if the rest of the read should be cleared */
-#define NETFS_SREQ_SHORT_READ 2 /* Set if there was a short read from the cache */
+#define NETFS_SREQ_SHORT_IO 2 /* Set if the I/O was short */
#define NETFS_SREQ_SEEK_DATA_READ 3 /* Set if ->read() should SEEK_DATA first */
#define NETFS_SREQ_NO_PROGRESS 4 /* Set if we didn't manage to read any data */
};
enum netfs_io_origin {
NETFS_READAHEAD, /* This read was triggered by readahead */
NETFS_READPAGE, /* This read is a synchronous read */
NETFS_READ_FOR_WRITE, /* This read is to prepare a write */
} __mode(byte);
/*
-* Descriptor for a read helper request. This is used to make multiple I/O
-* requests on a variety of sources and then stitch the result together.
+* Descriptor for an I/O helper request. This is used to make multiple I/O
+* operations to a variety of data stores and then stitch the result together.
*/
-struct netfs_read_request {
+struct netfs_io_request {
struct work_struct work;
struct inode *inode; /* The file being accessed */
struct address_space *mapping; /* The mapping being accessed */
struct netfs_cache_resources cache_resources;
-struct list_head subrequests; /* Requests to fetch I/O from disk or net */
+struct list_head subrequests; /* Contributory I/O operations */
void *netfs_priv; /* Private data for the netfs */
unsigned int debug_id;
-atomic_t nr_rd_ops; /* Number of read ops in progress */
-atomic_t nr_wr_ops; /* Number of write ops in progress */
+atomic_t nr_outstanding; /* Number of ops in progress */
+atomic_t nr_copy_ops; /* Number of copy-to-cache ops in progress */
size_t submitted; /* Amount submitted for I/O so far */
size_t len; /* Length of the request */
short error; /* 0 or error that occurred */
+enum netfs_io_origin origin; /* Origin of the request */
loff_t i_size; /* Size of the file */
loff_t start; /* Start position */
pgoff_t no_unlock_folio; /* Don't unlock this folio after read */
-refcount_t usage;
+refcount_t ref;
unsigned long flags;
#define NETFS_RREQ_INCOMPLETE_IO 0 /* Some ioreqs terminated short or with error */
-#define NETFS_RREQ_WRITE_TO_CACHE 1 /* Need to write to the cache */
+#define NETFS_RREQ_COPY_TO_CACHE 1 /* Need to write to the cache */
#define NETFS_RREQ_NO_UNLOCK_FOLIO 2 /* Don't unlock no_unlock_folio on completion */
#define NETFS_RREQ_DONT_UNLOCK_FOLIOS 3 /* Don't unlock the folios on completion */
#define NETFS_RREQ_FAILED 4 /* The request failed */
#define NETFS_RREQ_IN_PROGRESS 5 /* Unlocked when the request completes */
-const struct netfs_read_request_ops *netfs_ops;
+const struct netfs_request_ops *netfs_ops;
};

/*
* Operations the network filesystem can/must provide to the helpers.
*/
-struct netfs_read_request_ops {
-bool (*is_cache_enabled)(struct inode *inode);
-void (*init_rreq)(struct netfs_read_request *rreq, struct file *file);
-int (*begin_cache_operation)(struct netfs_read_request *rreq);
-void (*expand_readahead)(struct netfs_read_request *rreq);
-bool (*clamp_length)(struct netfs_read_subrequest *subreq);
-void (*issue_op)(struct netfs_read_subrequest *subreq);
-bool (*is_still_valid)(struct netfs_read_request *rreq);
+struct netfs_request_ops {
+int (*init_request)(struct netfs_io_request *rreq, struct file *file);
+int (*begin_cache_operation)(struct netfs_io_request *rreq);
+void (*expand_readahead)(struct netfs_io_request *rreq);
+bool (*clamp_length)(struct netfs_io_subrequest *subreq);
+void (*issue_read)(struct netfs_io_subrequest *subreq);
+bool (*is_still_valid)(struct netfs_io_request *rreq);
int (*check_write_begin)(struct file *file, loff_t pos, unsigned len,
struct folio *folio, void **_fsdata);
-void (*done)(struct netfs_read_request *rreq);
+void (*done)(struct netfs_io_request *rreq);
void (*cleanup)(struct address_space *mapping, void *netfs_priv);
};
@@ -235,7 +254,7 @@ struct netfs_cache_ops {
/* Prepare a read operation, shortening it to a cached/uncached
* boundary as appropriate.
*/
-enum netfs_read_source (*prepare_read)(struct netfs_read_subrequest *subreq,
+enum netfs_io_source (*prepare_read)(struct netfs_io_subrequest *subreq,
loff_t i_size);

/* Prepare a write operation, working out what part of the write we can
@@ -254,20 +273,89 @@ struct netfs_cache_ops {
};

struct readahead_control;
-extern void netfs_readahead(struct readahead_control *,
-const struct netfs_read_request_ops *,
-void *);
-extern int netfs_readpage(struct file *,
-struct folio *,
-const struct netfs_read_request_ops *,
-void *);
+extern void netfs_readahead(struct readahead_control *);
+extern int netfs_readpage(struct file *, struct page *);
extern int netfs_write_begin(struct file *, struct address_space *,
loff_t, unsigned int, unsigned int, struct folio **,
-void **,
-const struct netfs_read_request_ops *,
-void *);
+void **);

-extern void netfs_subreq_terminated(struct netfs_read_subrequest *, ssize_t, bool);
+extern void netfs_subreq_terminated(struct netfs_io_subrequest *, ssize_t, bool);
+extern void netfs_get_subrequest(struct netfs_io_subrequest *subreq,
+enum netfs_sreq_ref_trace what);
+extern void netfs_put_subrequest(struct netfs_io_subrequest *subreq,
+bool was_async, enum netfs_sreq_ref_trace what);
extern void netfs_stats_show(struct seq_file *);
/**
* netfs_i_context - Get the netfs inode context from the inode
* @inode: The inode to query
*
* Get the netfs lib inode context from the network filesystem's inode. The
* context struct is expected to directly follow on from the VFS inode struct.
*/
static inline struct netfs_i_context *netfs_i_context(struct inode *inode)
{
return (struct netfs_i_context *)(inode + 1);
}
/**
* netfs_inode - Get the netfs inode from the inode context
* @ctx: The context to query
*
* Get the netfs inode from the netfs library's inode context. The VFS inode
* is expected to directly precede the context struct.
*/
static inline struct inode *netfs_inode(struct netfs_i_context *ctx)
{
return ((struct inode *)ctx) - 1;
}
/**
* netfs_i_context_init - Initialise a netfs lib context
* @inode: The inode with which the context is associated
* @ops: The netfs's operations list
*
* Initialise the netfs library context struct. This is expected to follow on
* directly from the VFS inode struct.
*/
static inline void netfs_i_context_init(struct inode *inode,
const struct netfs_request_ops *ops)
{
struct netfs_i_context *ctx = netfs_i_context(inode);
memset(ctx, 0, sizeof(*ctx));
ctx->ops = ops;
ctx->remote_i_size = i_size_read(inode);
}
/**
* netfs_resize_file - Note that a file got resized
* @inode: The inode being resized
* @new_i_size: The new file size
*
* Inform the netfs lib that a file got resized so that it can adjust its state.
*/
static inline void netfs_resize_file(struct inode *inode, loff_t new_i_size)
{
struct netfs_i_context *ctx = netfs_i_context(inode);
ctx->remote_i_size = new_i_size;
}
/**
* netfs_i_cookie - Get the cache cookie from the inode
* @inode: The inode to query
*
* Get the caching cookie (if enabled) from the network filesystem's inode.
*/
static inline struct fscache_cookie *netfs_i_cookie(struct inode *inode)
{
#if IS_ENABLED(CONFIG_FSCACHE)
struct netfs_i_context *ctx = netfs_i_context(inode);
return ctx->cache;
#else
return NULL;
#endif
}
#endif /* _LINUX_NETFS_H */

@@ -426,8 +426,8 @@ TRACE_EVENT(cachefiles_vol_coherency,
);

TRACE_EVENT(cachefiles_prep_read,
-TP_PROTO(struct netfs_read_subrequest *sreq,
-enum netfs_read_source source,
+TP_PROTO(struct netfs_io_subrequest *sreq,
+enum netfs_io_source source,
enum cachefiles_prepare_read_trace why,
ino_t cache_inode),
@@ -437,7 +437,7 @@ TRACE_EVENT(cachefiles_prep_read,
__field(unsigned int, rreq )
__field(unsigned short, index )
__field(unsigned short, flags )
-__field(enum netfs_read_source, source )
+__field(enum netfs_io_source, source )
__field(enum cachefiles_prepare_read_trace, why )
__field(size_t, len )
__field(loff_t, start )
...
@@ -15,63 +15,25 @@
/*
* Define enums for tracing information.
*/
#ifndef __NETFS_DECLARE_TRACE_ENUMS_ONCE_ONLY
#define __NETFS_DECLARE_TRACE_ENUMS_ONCE_ONLY
enum netfs_read_trace {
netfs_read_trace_expanded,
netfs_read_trace_readahead,
netfs_read_trace_readpage,
netfs_read_trace_write_begin,
};
enum netfs_rreq_trace {
netfs_rreq_trace_assess,
netfs_rreq_trace_done,
netfs_rreq_trace_free,
netfs_rreq_trace_resubmit,
netfs_rreq_trace_unlock,
netfs_rreq_trace_unmark,
netfs_rreq_trace_write,
};
enum netfs_sreq_trace {
netfs_sreq_trace_download_instead,
netfs_sreq_trace_free,
netfs_sreq_trace_prepare,
netfs_sreq_trace_resubmit_short,
netfs_sreq_trace_submit,
netfs_sreq_trace_terminated,
netfs_sreq_trace_write,
netfs_sreq_trace_write_skip,
netfs_sreq_trace_write_term,
};
enum netfs_failure {
netfs_fail_check_write_begin,
netfs_fail_copy_to_cache,
netfs_fail_read,
netfs_fail_short_readpage,
netfs_fail_short_write_begin,
netfs_fail_prepare_write,
};
#endif
#define netfs_read_traces \
EM(netfs_read_trace_expanded, "EXPANDED ") \
EM(netfs_read_trace_readahead, "READAHEAD") \
EM(netfs_read_trace_readpage, "READPAGE ") \
E_(netfs_read_trace_write_begin, "WRITEBEGN")

+#define netfs_rreq_origins \
+EM(NETFS_READAHEAD, "RA") \
+EM(NETFS_READPAGE, "RP") \
+E_(NETFS_READ_FOR_WRITE, "RW")
+
#define netfs_rreq_traces \
-EM(netfs_rreq_trace_assess, "ASSESS") \
-EM(netfs_rreq_trace_done, "DONE ") \
-EM(netfs_rreq_trace_free, "FREE ") \
-EM(netfs_rreq_trace_resubmit, "RESUBM") \
-EM(netfs_rreq_trace_unlock, "UNLOCK") \
-EM(netfs_rreq_trace_unmark, "UNMARK") \
-E_(netfs_rreq_trace_write, "WRITE ")
+EM(netfs_rreq_trace_assess, "ASSESS ") \
+EM(netfs_rreq_trace_copy, "COPY ") \
+EM(netfs_rreq_trace_done, "DONE ") \
+EM(netfs_rreq_trace_free, "FREE ") \
+EM(netfs_rreq_trace_resubmit, "RESUBMT") \
+EM(netfs_rreq_trace_unlock, "UNLOCK ") \
+E_(netfs_rreq_trace_unmark, "UNMARK ")

#define netfs_sreq_sources \
EM(NETFS_FILL_WITH_ZEROES, "ZERO") \
@@ -94,10 +56,47 @@ enum netfs_failure {
EM(netfs_fail_check_write_begin, "check-write-begin") \
EM(netfs_fail_copy_to_cache, "copy-to-cache") \
EM(netfs_fail_read, "read") \
-EM(netfs_fail_short_readpage, "short-readpage") \
-EM(netfs_fail_short_write_begin, "short-write-begin") \
+EM(netfs_fail_short_read, "short-read") \
E_(netfs_fail_prepare_write, "prep-write")
#define netfs_rreq_ref_traces \
EM(netfs_rreq_trace_get_hold, "GET HOLD ") \
EM(netfs_rreq_trace_get_subreq, "GET SUBREQ ") \
EM(netfs_rreq_trace_put_complete, "PUT COMPLT ") \
EM(netfs_rreq_trace_put_discard, "PUT DISCARD") \
EM(netfs_rreq_trace_put_failed, "PUT FAILED ") \
EM(netfs_rreq_trace_put_hold, "PUT HOLD ") \
EM(netfs_rreq_trace_put_subreq, "PUT SUBREQ ") \
EM(netfs_rreq_trace_put_zero_len, "PUT ZEROLEN") \
E_(netfs_rreq_trace_new, "NEW ")
#define netfs_sreq_ref_traces \
EM(netfs_sreq_trace_get_copy_to_cache, "GET COPY2C ") \
EM(netfs_sreq_trace_get_resubmit, "GET RESUBMIT") \
EM(netfs_sreq_trace_get_short_read, "GET SHORTRD") \
EM(netfs_sreq_trace_new, "NEW ") \
EM(netfs_sreq_trace_put_clear, "PUT CLEAR ") \
EM(netfs_sreq_trace_put_failed, "PUT FAILED ") \
EM(netfs_sreq_trace_put_merged, "PUT MERGED ") \
EM(netfs_sreq_trace_put_no_copy, "PUT NO COPY") \
E_(netfs_sreq_trace_put_terminated, "PUT TERM ")
#ifndef __NETFS_DECLARE_TRACE_ENUMS_ONCE_ONLY
#define __NETFS_DECLARE_TRACE_ENUMS_ONCE_ONLY
#undef EM
#undef E_
#define EM(a, b) a,
#define E_(a, b) a
enum netfs_read_trace { netfs_read_traces } __mode(byte);
enum netfs_rreq_trace { netfs_rreq_traces } __mode(byte);
enum netfs_sreq_trace { netfs_sreq_traces } __mode(byte);
enum netfs_failure { netfs_failures } __mode(byte);
enum netfs_rreq_ref_trace { netfs_rreq_ref_traces } __mode(byte);
enum netfs_sreq_ref_trace { netfs_sreq_ref_traces } __mode(byte);
#endif
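The block above is the mechanism the merge description calls "generate enums from the symbol -> string mapping tables": each table is written once with EM()/E_() entries and then expanded twice, first into enum constants and then into symbol/string pairs. A stand-alone illustration of the same preprocessor technique (the real header additionally feeds the tables to TRACE_DEFINE_ENUM() and __print_symbolic(); the demo_* names here are invented for the example):

#include <stdio.h>

/* One table, written once: symbol -> string. */
#define demo_traces \
	EM(demo_trace_alloc, "ALLOC ") \
	EM(demo_trace_submit, "SUBMIT") \
	E_(demo_trace_free, "FREE  ")

/* First expansion: emit the enum constants. */
#define EM(a, b) a,
#define E_(a, b) a
enum demo_trace { demo_traces };
#undef EM
#undef E_

/* Second expansion: emit a matching symbol/string map. */
#define EM(a, b) { a, b },
#define E_(a, b) { a, b }
static const struct { enum demo_trace sym; const char *str; } demo_map[] = { demo_traces };
#undef EM
#undef E_

int main(void)
{
	for (unsigned int i = 0; i < sizeof(demo_map) / sizeof(demo_map[0]); i++)
		printf("%d = %s\n", demo_map[i].sym, demo_map[i].str);
	return 0;
}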
/*
* Export enum symbols via userspace.
@@ -108,10 +107,13 @@ enum netfs_failure {
#define E_(a, b) TRACE_DEFINE_ENUM(a);

netfs_read_traces;
+netfs_rreq_origins;
netfs_rreq_traces;
netfs_sreq_sources;
netfs_sreq_traces;
netfs_failures;
+netfs_rreq_ref_traces;
+netfs_sreq_ref_traces;

/*
* Now redefine the EM() and E_() macros to map the enums to the strings that
@@ -123,7 +125,7 @@ netfs_failures;
#define E_(a, b) { a, b }

TRACE_EVENT(netfs_read,
-TP_PROTO(struct netfs_read_request *rreq,
+TP_PROTO(struct netfs_io_request *rreq,
loff_t start, size_t len,
enum netfs_read_trace what),
@@ -156,31 +158,34 @@ TRACE_EVENT(netfs_read,
);

TRACE_EVENT(netfs_rreq,
-TP_PROTO(struct netfs_read_request *rreq,
+TP_PROTO(struct netfs_io_request *rreq,
enum netfs_rreq_trace what),

TP_ARGS(rreq, what),

TP_STRUCT__entry(
__field(unsigned int, rreq )
-__field(unsigned short, flags )
+__field(unsigned int, flags )
+__field(enum netfs_io_origin, origin )
__field(enum netfs_rreq_trace, what )
),

TP_fast_assign(
__entry->rreq = rreq->debug_id;
__entry->flags = rreq->flags;
+__entry->origin = rreq->origin;
__entry->what = what;
),

-TP_printk("R=%08x %s f=%02x",
+TP_printk("R=%08x %s %s f=%02x",
__entry->rreq,
+__print_symbolic(__entry->origin, netfs_rreq_origins),
__print_symbolic(__entry->what, netfs_rreq_traces),
__entry->flags)
);

TRACE_EVENT(netfs_sreq,
-TP_PROTO(struct netfs_read_subrequest *sreq,
+TP_PROTO(struct netfs_io_subrequest *sreq,
enum netfs_sreq_trace what),

TP_ARGS(sreq, what),
@@ -190,7 +195,7 @@ TRACE_EVENT(netfs_sreq,
__field(unsigned short, index )
__field(short, error )
__field(unsigned short, flags )
-__field(enum netfs_read_source, source )
+__field(enum netfs_io_source, source )
__field(enum netfs_sreq_trace, what )
__field(size_t, len )
__field(size_t, transferred )
@@ -211,26 +216,26 @@ TRACE_EVENT(netfs_sreq,
TP_printk("R=%08x[%u] %s %s f=%02x s=%llx %zx/%zx e=%d",
__entry->rreq, __entry->index,
+__print_symbolic(__entry->what, netfs_sreq_traces),
__print_symbolic(__entry->source, netfs_sreq_sources),
-__print_symbolic(__entry->what, netfs_sreq_traces),
__entry->flags,
__entry->start, __entry->transferred, __entry->len,
__entry->error)
);

TRACE_EVENT(netfs_failure,
-TP_PROTO(struct netfs_read_request *rreq,
-struct netfs_read_subrequest *sreq,
+TP_PROTO(struct netfs_io_request *rreq,
+struct netfs_io_subrequest *sreq,
int error, enum netfs_failure what),

TP_ARGS(rreq, sreq, error, what),

TP_STRUCT__entry(
__field(unsigned int, rreq )
-__field(unsigned short, index )
+__field(short, index )
__field(short, error )
__field(unsigned short, flags )
-__field(enum netfs_read_source, source )
+__field(enum netfs_io_source, source )
__field(enum netfs_failure, what )
__field(size_t, len )
__field(size_t, transferred )
@@ -239,17 +244,17 @@ TRACE_EVENT(netfs_failure,
TP_fast_assign(
__entry->rreq = rreq->debug_id;
-__entry->index = sreq ? sreq->debug_index : 0;
+__entry->index = sreq ? sreq->debug_index : -1;
__entry->error = error;
__entry->flags = sreq ? sreq->flags : 0;
__entry->source = sreq ? sreq->source : NETFS_INVALID_READ;
__entry->what = what;
-__entry->len = sreq ? sreq->len : 0;
+__entry->len = sreq ? sreq->len : rreq->len;
__entry->transferred = sreq ? sreq->transferred : 0;
__entry->start = sreq ? sreq->start : 0;
),

-TP_printk("R=%08x[%u] %s f=%02x s=%llx %zx/%zx %s e=%d",
+TP_printk("R=%08x[%d] %s f=%02x s=%llx %zx/%zx %s e=%d",
__entry->rreq, __entry->index,
__print_symbolic(__entry->source, netfs_sreq_sources),
__entry->flags,
@@ -258,6 +263,59 @@ TRACE_EVENT(netfs_failure,
__entry->error)
);
TRACE_EVENT(netfs_rreq_ref,
TP_PROTO(unsigned int rreq_debug_id, int ref,
enum netfs_rreq_ref_trace what),
TP_ARGS(rreq_debug_id, ref, what),
TP_STRUCT__entry(
__field(unsigned int, rreq )
__field(int, ref )
__field(enum netfs_rreq_ref_trace, what )
),
TP_fast_assign(
__entry->rreq = rreq_debug_id;
__entry->ref = ref;
__entry->what = what;
),
TP_printk("R=%08x %s r=%u",
__entry->rreq,
__print_symbolic(__entry->what, netfs_rreq_ref_traces),
__entry->ref)
);
TRACE_EVENT(netfs_sreq_ref,
TP_PROTO(unsigned int rreq_debug_id, unsigned int subreq_debug_index,
int ref, enum netfs_sreq_ref_trace what),
TP_ARGS(rreq_debug_id, subreq_debug_index, ref, what),
TP_STRUCT__entry(
__field(unsigned int, rreq )
__field(unsigned int, subreq )
__field(int, ref )
__field(enum netfs_sreq_ref_trace, what )
),
TP_fast_assign(
__entry->rreq = rreq_debug_id;
__entry->subreq = subreq_debug_index;
__entry->ref = ref;
__entry->what = what;
),
TP_printk("R=%08x[%x] %s r=%u",
__entry->rreq,
__entry->subreq,
__print_symbolic(__entry->what, netfs_sreq_ref_traces),
__entry->ref)
);
#undef EM
#undef E_
#endif /* _TRACE_NETFS_H */

/* This part must be outside protection */
...