Commit 8834147f authored by Linus Torvalds

Merge tag 'fscache-rewrite-20220111' of git://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs

Pull fscache rewrite from David Howells:
 "This is a set of patches that rewrites the fscache driver and the
  cachefiles driver, significantly simplifying the code compared to
  what's upstream, removing the complex operation scheduling and object
  state machine in favour of something much smaller and simpler.

  The series is structured such that the first few patches disable
  fscache use by the network filesystems using it, remove the cachefiles
  driver entirely and as much of the fscache driver as can be got away
  with without causing build failures in the network filesystems.

  The patches after that recreate fscache and then cachefiles,
  attempting to add the pieces in a logical order. Finally, the
  filesystems are reenabled and then the very last patch changes the
  documentation.

  [!] Note: I have dropped the cifs patch for the moment, leaving local
      caching in cifs disabled. I've been having trouble getting that
      working. I think I have it done, but it needs more testing (there
      seem to be some test failures occurring with v5.16 also from
      xfstests), so I propose deferring that patch to the end of the
      merge window.

  WHY REWRITE?
  ============

  Fscache's operation scheduling API was intended to handle sequencing
  of cache operations, which were all required (where possible) to run
  asynchronously in parallel with the operations being done by the
  network filesystem, whilst allowing the cache to be brought online and
  offline and to interrupt service for invalidation.

  With the advent of the tmpfile capability in the VFS, however, an
  opportunity arises to do invalidation much more simply, without having
  to wait for I/O that's actually in progress: Cachefiles can simply
  create a tmpfile, cut over the file pointer for the backing object
  attached to a cookie and abandon the in-progress I/O, dismissing it
  upon completion.

  Future work here would involve using Omar Sandoval's vfs_link() with
  AT_LINK_REPLACE[1] to allow an extant file to be displaced by a new
  hard link from a tmpfile, as currently I have to unlink the old file
  first.
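
  A rough userspace sketch of the same trick, for illustration only -
  cachefiles does this in-kernel via vfs_tmpfile() - showing how the
  lack of AT_LINK_REPLACE forces the unlink-then-link dance:

      #define _GNU_SOURCE
      #include <fcntl.h>
      #include <limits.h>
      #include <stdio.h>
      #include <unistd.h>

      /* Build replacement content in an anonymous file, then give it
       * the cache file's name.  linkat() cannot displace an existing
       * link, so the old file has to be unlinked first.
       */
      int replace_cache_file(const char *dir, const char *name)
      {
              char path[PATH_MAX], proc[64];
              int fd;

              fd = open(dir, O_TMPFILE | O_RDWR, 0600);
              if (fd < 0)
                      return -1;

              /* ... write the fresh data to fd; reads still running
               * against the old file are simply abandoned ... */

              snprintf(path, sizeof(path), "%s/%s", dir, name);
              snprintf(proc, sizeof(proc), "/proc/self/fd/%d", fd);
              unlink(path);
              if (linkat(AT_FDCWD, proc, AT_FDCWD, path,
                         AT_SYMLINK_FOLLOW) == -1) {
                      close(fd);
                      return -1;
              }
              return fd;
      }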

  These patches can also simplify the object state handling as I/O
  operations to the cache don't all have to be brought to a stop in
  order to invalidate a file. To that end, and with an eye to writing
  a new backing cache model in the future, I've taken the opportunity to
  simplify the indexing structure.

  I've separated the index cookie concept from the file cookie concept
  by C type now. The former is now called a "volume cookie" (struct
  fscache_volume) and acts as a container for file cookies. There are
  then just the two levels. All the index cookie levels are collapsed
  into a single volume cookie, and this has a single printable string as
  a key. For instance, an AFS volume would have a key of something like
  "afs,example.com,1000555", combining the filesystem name, cell name
  and volume ID. This is freeform, but must not have '/' chars in it.
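
  The 9p conversion later in this series builds its volume key exactly
  this way; condensed from the patch below:

      /* Build "9p,<dev>,<tag>" and squash the disallowed '/' chars. */
      name = kasprintf(GFP_KERNEL, "9p,%s,%s",
                       dev_name, v9ses->cachetag ?: v9ses->aname);
      if (!name)
              return -ENOMEM;
      for (p = name; *p; p++)
              if (*p == '/')
                      *p = ';';
      vcookie = fscache_acquire_volume(name, NULL, NULL, 0);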

  I've also eliminated all pointers back from fscache into the network
  filesystem. This required the duplication of a little bit of data in
  the cookie (cookie key, coherency data and file size), but it's not
  actually that much. This gets rid of problems with making sure we keep
  netfs data structures around so that the cache can access them.
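
  In consequence, everything the cache needs is handed over by value
  when the cookie is acquired; the new call, as used by the conversions
  below, looks like:

      struct fscache_cookie *
      fscache_acquire_cookie(struct fscache_volume *volume,
                             u8 advice,
                             const void *index_key, size_t index_key_len,
                             const void *aux_data, size_t aux_data_len,
                             loff_t object_size);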

  These patches mean that most of the code that was in the drivers
  before is simply gone and those drivers are now almost entirely new
  code. That being the case, there doesn't seem to be any particular
  reason to try to maintain bisectability across it. Further, there has
  to be a
  point in the middle where things are cut over as there's a single
  point everything has to go through (ie. /dev/cachefiles) and it can't
  be in use by two drivers at once.

  ISSUES YET OUTSTANDING
  ======================

  There are some issues still outstanding, unaddressed by this patchset,
  that will need fixing in future patchsets, but that don't stop this
  series from being usable:

  (1) The cachefiles driver needs to stop using the backing filesystem's
      metadata to store information about what parts of the cache are
      populated. This is not reliable with modern extent-based
      filesystems.

      Fixing this is deferred to a separate patchset as it involves
      negotiation with the network filesystem and the VM as to how much
      data to download to fulfil a read - which brings me on to (2)...

  (2) NFS (and CIFS with the dropped patch) do not take account of how
      the cache would like I/O to be structured to meet its granularity
      requirements. Previously, the cache used page granularity, which
      was fine as the network filesystems also dealt in page
      granularity, and the backing filesystem (ext4, xfs or whatever)
      did whatever it did out of sight. However, we now have folios to
      deal with, and the cache will have to store its own metadata to
      track its contents.

      The change I'm looking at making for cachefiles is to store
      content bitmaps in one or more xattrs, with each bit in a map
      corresponding to something like a 256KiB block. However, the size
      of an xattr and the fact that it has to be read/updated in one go
      mean that I'm looking at covering 1GiB of data per 512-byte map
      and storing each map in an xattr. Cachefiles has the potential to
      grow into a fully fledged filesystem of its very own if I'm not
      careful.
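
      The arithmetic, as a sketch (constant and function names invented
      for illustration, not taken from the patchset):

          #define GRANULE_SIZE    (256 * 1024)    /* one bit per granule */
          #define MAP_BYTES       512             /* one xattr-sized map */
          #define MAP_COVERAGE    ((loff_t)MAP_BYTES * 8 * GRANULE_SIZE)

          /* 512 bytes = 4096 bits; 4096 * 256KiB = 1GiB per map. */
          static unsigned int map_index(loff_t pos)
          {
                  return pos / MAP_COVERAGE;      /* which xattr to read */
          }

          static unsigned int map_bit(loff_t pos)
          {
                  return (pos % MAP_COVERAGE) / GRANULE_SIZE;
          }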

      However, I'm also looking at changing things even more radically
      and going to a different model of how the cache is arranged and
      managed - one that's more akin to the way, say, openafs does
      things - which brings me on to (3)...

  (3) The way cachefilesd does culling is very inefficient for large
      caches, and it would be better to move culling into the kernel if
      I can, as cachefilesd has to keep asking the kernel whether it can
      cull a file. Changing the way the backend works would allow this
      to be addressed.

  BITS THAT MAY BE CONTROVERSIAL
  ==============================

  There are some bits I've added that may be controversial:

  (1) I've provided a flag, S_KERNEL_FILE, that cachefiles uses to check
      whether a file is already being used by some other kernel service
      (e.g. a duplicate cachefiles cache in the same directory) and to
      reject it if it is. This isn't entirely necessary, but it helps
      prevent accidental data corruption.

      I don't want to use S_SWAPFILE as that has other effects, but
      quite possibly swapon() should set S_KERNEL_FILE too.

      Note that it doesn't prevent userspace from interfering, though
      perhaps it should. (I have made it prevent a marked directory from
      being removed with rmdir.)
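
      In outline, the claim is just a test-and-set on the inode flags,
      made under the inode lock (a sketch of the idea, not the actual
      cachefiles code):

          /* Refuse a backing file that another kernel service has
           * already claimed.
           */
          static bool mark_inode_in_use(struct inode *inode)
          {
                  if (inode->i_flags & S_KERNEL_FILE)
                          return false;   /* already claimed */
                  inode->i_flags |= S_KERNEL_FILE;
                  return true;
          }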

  (2) Cachefiles wants to keep the backing file for a cookie open whilst
      we might need to write to it from network filesystem writeback.
      The problem is that the network filesystem unuses its cookie when
      its file is closed, and so we have nothing pinning the cachefiles
      file open and it will get closed automatically after a short time
      to avoid EMFILE/ENFILE problems.

      Reopening the cache file, however, is a problem if this is being
      done due to writeback triggered by exit(). Some filesystems will
      oops if we try to open a file in that context because they want to
      access current->fs or suchlike.

      To get around this, I added the following:

      (A) An inode flag, I_PINNING_FSCACHE_WB, to be set on a network
          filesystem inode to indicate that we have a usage count on the
          cookie caching that inode.

      (B) A flag in struct writeback_control, unpinned_fscache_wb, that
          is set when __writeback_single_inode() clears the last dirty
          page from i_pages - at which point it clears
          I_PINNING_FSCACHE_WB and sets this flag.

          This has to be done here so that clearing I_PINNING_FSCACHE_WB
          can be done atomically with the check of PAGECACHE_TAG_DIRTY
          that clears I_DIRTY_PAGES.

      (C) A function, fscache_set_page_dirty(), which, if
          I_PINNING_FSCACHE_WB is not already set, sets it and calls
          fscache_use_cookie() to pin the cache resources.

      (D) A function, fscache_unpin_writeback(), to be called by
          ->write_inode() to unuse the cookie.

      (E) A function, fscache_clear_inode_writeback(), to be called when
          the inode is evicted, before clear_inode() is called. This
          cleans up any lingering I_PINNING_FSCACHE_WB.

      The network filesystem can then use these tools to make sure that
      fscache_write_to_cache() can write locally modified data to the
      cache as well as to the server.
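
  Condensed from the 9p conversion below, the wiring amounts to:

      /* (C) Pin the cookie when a page is first dirtied. */
      static int v9fs_set_page_dirty(struct page *page)
      {
              struct v9fs_inode *v9inode = V9FS_I(page->mapping->host);

              return fscache_set_page_dirty(page, v9fs_inode_cookie(v9inode));
      }

      /* (D) Unpin from ->write_inode() once writeback has cleaned up. */
      static int v9fs_write_inode(struct inode *inode,
                                  struct writeback_control *wbc)
      {
              fscache_unpin_writeback(wbc, v9fs_inode_cookie(V9FS_I(inode)));
              return 0;
      }

      /* (E) Clear any lingering pin when the inode is evicted. */
      void v9fs_evict_inode(struct inode *inode)
      {
              struct v9fs_inode *v9inode = V9FS_I(inode);
              __le32 version = cpu_to_le32(v9inode->qid.version);

              truncate_inode_pages_final(&inode->i_data);
              fscache_clear_inode_writeback(v9fs_inode_cookie(v9inode),
                                            inode, &version);
              clear_inode(inode);
      }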

      For the future, I'm working on write helpers for netfs lib that
      should allow this facility to be removed by keeping track of the
      dirty regions separately - but that's incomplete at the moment and
      is also going to be affected by folios, one way or another, since
      it deals with pages"

Link: https://lore.kernel.org/all/510611.1641942444@warthog.procyon.org.uk/
Tested-by: Dominique Martinet <asmadeus@codewreck.org> # 9p
Tested-by: kafs-testing@auristor.com # afs
Tested-by: Jeff Layton <jlayton@kernel.org> # ceph
Tested-by: Dave Wysochanski <dwysocha@redhat.com> # nfs
Tested-by: Daire Byrne <daire@dneg.com> # nfs

* tag 'fscache-rewrite-20220111' of git://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs: (67 commits)
  9p, afs, ceph, nfs: Use current_is_kswapd() rather than gfpflags_allow_blocking()
  fscache: Add a tracepoint for cookie use/unuse
  fscache: Rewrite documentation
  ceph: add fscache writeback support
  ceph: conversion to new fscache API
  nfs: Implement cache I/O by accessing the cache directly
  nfs: Convert to new fscache volume/cookie API
  9p: Copy local writes to the cache when writing to the server
  9p: Use fscache indexing rewrite and reenable caching
  afs: Skip truncation on the server of data we haven't written yet
  afs: Copy local writes to the cache when writing to the server
  afs: Convert afs to use the new fscache API
  fscache, cachefiles: Display stat of culling events
  fscache, cachefiles: Display stats of no-space events
  cachefiles: Allow cachefiles to actually function
  fscache, cachefiles: Store the volume coherency data
  cachefiles: Implement the I/O routines
  cachefiles: Implement cookie resize for truncate
  cachefiles: Implement begin and end I/O operation
  cachefiles: Implement backing file wrangling
  ...
parents 8975f897 d7bdba1c
.. SPDX-License-Identifier: GPL-2.0
===============================================
CacheFiles: CACHE ON ALREADY MOUNTED FILESYSTEM
===============================================
===================================
Cache on Already Mounted Filesystem
===================================
.. Contents:
@@ -7,8 +7,6 @@ Filesystem Caching
:maxdepth: 2
fscache
object
netfs-api
backend-api
cachefiles
netfs-api
operations
.. SPDX-License-Identifier: GPL-2.0
================================
Asynchronous Operations Handling
================================
By: David Howells <dhowells@redhat.com>
.. Contents:
(*) Overview.
(*) Operation record initialisation.
(*) Parameters.
(*) Procedure.
(*) Asynchronous callback.
Overview
========
FS-Cache has an asynchronous operations handling facility that it uses for its
data storage and retrieval routines. Its operations are represented by
fscache_operation structs, though these are usually embedded into some other
structure.
This facility is available to and expected to be used by the cache backends,
and FS-Cache will create operations and pass them off to the appropriate cache
backend for completion.
To make use of this facility, <linux/fscache-cache.h> should be #included.
Operation Record Initialisation
===============================
An operation is recorded in an fscache_operation struct::
struct fscache_operation {
union {
struct work_struct fast_work;
struct slow_work slow_work;
};
unsigned long flags;
fscache_operation_processor_t processor;
...
};
Someone wanting to issue an operation should allocate something with this
struct embedded in it. They should initialise it by calling::
void fscache_operation_init(struct fscache_operation *op,
fscache_operation_release_t release);
with the operation to be initialised and the release function to use.
The op->flags parameter should be set to indicate the CPU time provision and
the exclusivity (see the Parameters section).
The op->fast_work, op->slow_work and op->processor fields should be set as
appropriate for the CPU time provision (see the Parameters section).
FSCACHE_OP_WAITING may be set in op->flags prior to each submission of the
operation and waited for afterwards.
Parameters
==========
There are a number of parameters that can be set in the operation record's flag
parameter. There are three options for the provision of CPU time in these
operations:
(1) The operation may be done synchronously (FSCACHE_OP_MYTHREAD). A thread
may decide it wants to handle an operation itself without deferring it to
another thread.
This is, for example, used in read operations for calling readpages() on
the backing filesystem in CacheFiles. Although readpages() does an
asynchronous data fetch, the determination of whether pages exist is done
synchronously - and the netfs does not proceed until this has been
determined.
If this option is to be used, FSCACHE_OP_WAITING must be set in op->flags
before submitting the operation, and the operating thread must wait for it
to be cleared before proceeding::
wait_on_bit(&op->flags, FSCACHE_OP_WAITING,
TASK_UNINTERRUPTIBLE);
(2) The operation may be fast asynchronous (FSCACHE_OP_FAST), in which case it
will be given to keventd to process. Such an operation is not permitted
to sleep on I/O.
This is, for example, used by CacheFiles to copy data from a backing fs
page to a netfs page after the backing fs has read the page in.
If this option is used, op->fast_work and op->processor must be
initialised before submitting the operation::
INIT_WORK(&op->fast_work, do_some_work);
(3) The operation may be slow asynchronous (FSCACHE_OP_SLOW), in which case it
will be given to the slow work facility to process. Such an operation is
permitted to sleep on I/O.
This is, for example, used by FS-Cache to handle background writes of
pages that have just been fetched from a remote server.
If this option is used, op->slow_work and op->processor must be
initialised before submitting the operation::
fscache_operation_init_slow(op, processor)
Furthermore, operations may be one of two types:
(1) Exclusive (FSCACHE_OP_EXCLUSIVE). Operations of this type may not run in
conjunction with any other operation on the object being operated upon.
An example of this is the attribute change operation, in which the file
being written to may need truncation.
(2) Shareable. Operations of this type may be running simultaneously. It's
up to the operation implementation to prevent interference between other
operations running at the same time.
Procedure
=========
Operations are used through the following procedure:
(1) The submitting thread must allocate the operation and initialise it
itself. Normally this would be part of a more specific structure with the
generic op embedded within.
(2) The submitting thread must then submit the operation for processing using
one of the following two functions::
int fscache_submit_op(struct fscache_object *object,
struct fscache_operation *op);
int fscache_submit_exclusive_op(struct fscache_object *object,
struct fscache_operation *op);
The first function should be used to submit non-exclusive ops and the
second to submit exclusive ones. The caller must still set the
FSCACHE_OP_EXCLUSIVE flag.
If successful, both functions will assign the operation to the specified
object and return 0. -ENOBUFS will be returned if the object specified is
permanently unavailable.
The operation manager will defer operations on an object that is still
undergoing lookup or creation. The operation will also be deferred if an
operation of conflicting exclusivity is in progress on the object.
If the operation is asynchronous, the manager will retain a reference to
it, so the caller should put their reference to it by passing it to::
void fscache_put_operation(struct fscache_operation *op);
(3) If the submitting thread wants to do the work itself, and has marked the
operation with FSCACHE_OP_MYTHREAD, then it should monitor
FSCACHE_OP_WAITING as described above and check the state of the object if
necessary (the object might have died while the thread was waiting).
When it has finished doing its processing, it should call
fscache_op_complete() and fscache_put_operation() on it.
(4) The operation holds an effective lock upon the object, preventing other
exclusive ops conflicting until it is released. The operation can be
enqueued for further immediate asynchronous processing by adjusting the
CPU time provisioning option if necessary, eg::
op->flags &= ~FSCACHE_OP_TYPE;
op->flags |= FSCACHE_OP_FAST;
and calling::
void fscache_enqueue_operation(struct fscache_operation *op)
This can be used to allow other things to have use of the worker thread
pools.
Asynchronous Callback
=====================
When used in asynchronous mode, the worker thread pool will invoke the
processor method with a pointer to the operation. This should then get at the
container struct by using container_of()::
static void fscache_write_op(struct fscache_operation *_op)
{
struct fscache_storage *op =
container_of(_op, struct fscache_storage, op);
...
}
The caller holds a reference on the operation, and will invoke
fscache_put_operation() when the processor function returns. The processor
function is at liberty to call fscache_enqueue_operation() or to take extra
references.
@@ -454,7 +454,8 @@ operation table looks like the following::
void *term_func_priv);
int (*prepare_write)(struct netfs_cache_resources *cres,
loff_t *_start, size_t *_len, loff_t i_size);
loff_t *_start, size_t *_len, loff_t i_size,
bool no_space_allocated_yet);
int (*write)(struct netfs_cache_resources *cres,
loff_t start_pos,
@@ -515,11 +516,14 @@ The methods defined in the table are:
* ``prepare_write()``
[Required] Called to adjust a write to the cache and check that there is
sufficient space in the cache. The start and length values indicate the
size of the write that netfslib is proposing, and this can be adjusted by
the cache to respect DIO boundaries. The file size is passed for
information.
[Required] Called to prepare a write to the cache to take place. This
involves checking to see whether the cache has sufficient space to honour
the write. ``*_start`` and ``*_len`` indicate the region to be written; the
region can be shrunk or it can be expanded to a page boundary either way as
necessary to align for direct I/O. i_size holds the size of the object and
is provided for reference. no_space_allocated_yet is set to true if the
caller is certain that no data has been written to that region - for example
if it tried to do a read from there already.
* ``write()``
@@ -16,186 +16,61 @@
#include "v9fs.h"
#include "cache.h"
#define CACHETAG_LEN 11
struct fscache_netfs v9fs_cache_netfs = {
.name = "9p",
.version = 0,
};
/*
* v9fs_random_cachetag - Generate a random tag to be associated
* with a new cache session.
*
* The value of jiffies is used for a fairly randomly cache tag.
*/
static
int v9fs_random_cachetag(struct v9fs_session_info *v9ses)
int v9fs_cache_session_get_cookie(struct v9fs_session_info *v9ses,
const char *dev_name)
{
v9ses->cachetag = kmalloc(CACHETAG_LEN, GFP_KERNEL);
if (!v9ses->cachetag)
return -ENOMEM;
struct fscache_volume *vcookie;
char *name, *p;
return scnprintf(v9ses->cachetag, CACHETAG_LEN, "%lu", jiffies);
}
const struct fscache_cookie_def v9fs_cache_session_index_def = {
.name = "9P.session",
.type = FSCACHE_COOKIE_TYPE_INDEX,
};
name = kasprintf(GFP_KERNEL, "9p,%s,%s",
dev_name, v9ses->cachetag ?: v9ses->aname);
if (!name)
return -ENOMEM;
void v9fs_cache_session_get_cookie(struct v9fs_session_info *v9ses)
{
/* If no cache session tag was specified, we generate a random one. */
if (!v9ses->cachetag) {
if (v9fs_random_cachetag(v9ses) < 0) {
v9ses->fscache = NULL;
kfree(v9ses->cachetag);
v9ses->cachetag = NULL;
return;
for (p = name; *p; p++)
if (*p == '/')
*p = ';';
vcookie = fscache_acquire_volume(name, NULL, NULL, 0);
p9_debug(P9_DEBUG_FSC, "session %p get volume %p (%s)\n",
v9ses, vcookie, name);
if (IS_ERR(vcookie)) {
if (vcookie != ERR_PTR(-EBUSY)) {
kfree(name);
return PTR_ERR(vcookie);
}
pr_err("Cache volume key already in use (%s)\n", name);
vcookie = NULL;
}
v9ses->fscache = fscache_acquire_cookie(v9fs_cache_netfs.primary_index,
&v9fs_cache_session_index_def,
v9ses->cachetag,
strlen(v9ses->cachetag),
NULL, 0,
v9ses, 0, true);
p9_debug(P9_DEBUG_FSC, "session %p get cookie %p\n",
v9ses, v9ses->fscache);
}
void v9fs_cache_session_put_cookie(struct v9fs_session_info *v9ses)
{
p9_debug(P9_DEBUG_FSC, "session %p put cookie %p\n",
v9ses, v9ses->fscache);
fscache_relinquish_cookie(v9ses->fscache, NULL, false);
v9ses->fscache = NULL;
}
static enum
fscache_checkaux v9fs_cache_inode_check_aux(void *cookie_netfs_data,
const void *buffer,
uint16_t buflen,
loff_t object_size)
{
const struct v9fs_inode *v9inode = cookie_netfs_data;
if (buflen != sizeof(v9inode->qid.version))
return FSCACHE_CHECKAUX_OBSOLETE;
if (memcmp(buffer, &v9inode->qid.version,
sizeof(v9inode->qid.version)))
return FSCACHE_CHECKAUX_OBSOLETE;
return FSCACHE_CHECKAUX_OKAY;
v9ses->fscache = vcookie;
kfree(name);
return 0;
}
const struct fscache_cookie_def v9fs_cache_inode_index_def = {
.name = "9p.inode",
.type = FSCACHE_COOKIE_TYPE_DATAFILE,
.check_aux = v9fs_cache_inode_check_aux,
};
void v9fs_cache_inode_get_cookie(struct inode *inode)
{
struct v9fs_inode *v9inode;
struct v9fs_session_info *v9ses;
__le32 version;
__le64 path;
if (!S_ISREG(inode->i_mode))
return;
v9inode = V9FS_I(inode);
if (v9inode->fscache)
if (WARN_ON(v9inode->fscache))
return;
version = cpu_to_le32(v9inode->qid.version);
path = cpu_to_le64(v9inode->qid.path);
v9ses = v9fs_inode2v9ses(inode);
v9inode->fscache = fscache_acquire_cookie(v9ses->fscache,
&v9fs_cache_inode_index_def,
&v9inode->qid.path,
sizeof(v9inode->qid.path),
&v9inode->qid.version,
sizeof(v9inode->qid.version),
v9inode,
i_size_read(&v9inode->vfs_inode),
true);
v9inode->fscache =
fscache_acquire_cookie(v9fs_session_cache(v9ses),
0,
&path, sizeof(path),
&version, sizeof(version),
i_size_read(&v9inode->vfs_inode));
p9_debug(P9_DEBUG_FSC, "inode %p get cookie %p\n",
inode, v9inode->fscache);
}
void v9fs_cache_inode_put_cookie(struct inode *inode)
{
struct v9fs_inode *v9inode = V9FS_I(inode);
if (!v9inode->fscache)
return;
p9_debug(P9_DEBUG_FSC, "inode %p put cookie %p\n",
inode, v9inode->fscache);
fscache_relinquish_cookie(v9inode->fscache, &v9inode->qid.version,
false);
v9inode->fscache = NULL;
}
void v9fs_cache_inode_flush_cookie(struct inode *inode)
{
struct v9fs_inode *v9inode = V9FS_I(inode);
if (!v9inode->fscache)
return;
p9_debug(P9_DEBUG_FSC, "inode %p flush cookie %p\n",
inode, v9inode->fscache);
fscache_relinquish_cookie(v9inode->fscache, NULL, true);
v9inode->fscache = NULL;
}
void v9fs_cache_inode_set_cookie(struct inode *inode, struct file *filp)
{
struct v9fs_inode *v9inode = V9FS_I(inode);
if (!v9inode->fscache)
return;
mutex_lock(&v9inode->fscache_lock);
if ((filp->f_flags & O_ACCMODE) != O_RDONLY)
v9fs_cache_inode_flush_cookie(inode);
else
v9fs_cache_inode_get_cookie(inode);
mutex_unlock(&v9inode->fscache_lock);
}
void v9fs_cache_inode_reset_cookie(struct inode *inode)
{
struct v9fs_inode *v9inode = V9FS_I(inode);
struct v9fs_session_info *v9ses;
struct fscache_cookie *old;
if (!v9inode->fscache)
return;
old = v9inode->fscache;
mutex_lock(&v9inode->fscache_lock);
fscache_relinquish_cookie(v9inode->fscache, NULL, true);
v9ses = v9fs_inode2v9ses(inode);
v9inode->fscache = fscache_acquire_cookie(v9ses->fscache,
&v9fs_cache_inode_index_def,
&v9inode->qid.path,
sizeof(v9inode->qid.path),
&v9inode->qid.version,
sizeof(v9inode->qid.version),
v9inode,
i_size_read(&v9inode->vfs_inode),
true);
p9_debug(P9_DEBUG_FSC, "inode %p revalidating cookie old %p new %p\n",
inode, old, v9inode->fscache);
mutex_unlock(&v9inode->fscache_lock);
}
@@ -7,26 +7,15 @@
#ifndef _9P_CACHE_H
#define _9P_CACHE_H
#define FSCACHE_USE_NEW_IO_API
#include <linux/fscache.h>
#ifdef CONFIG_9P_FSCACHE
extern struct fscache_netfs v9fs_cache_netfs;
extern const struct fscache_cookie_def v9fs_cache_session_index_def;
extern const struct fscache_cookie_def v9fs_cache_inode_index_def;
extern void v9fs_cache_session_get_cookie(struct v9fs_session_info *v9ses);
extern void v9fs_cache_session_put_cookie(struct v9fs_session_info *v9ses);
extern int v9fs_cache_session_get_cookie(struct v9fs_session_info *v9ses,
const char *dev_name);
extern void v9fs_cache_inode_get_cookie(struct inode *inode);
extern void v9fs_cache_inode_put_cookie(struct inode *inode);
extern void v9fs_cache_inode_flush_cookie(struct inode *inode);
extern void v9fs_cache_inode_set_cookie(struct inode *inode, struct file *filp);
extern void v9fs_cache_inode_reset_cookie(struct inode *inode);
extern int __v9fs_cache_register(void);
extern void __v9fs_cache_unregister(void);
#else /* CONFIG_9P_FSCACHE */
@@ -34,13 +23,5 @@ static inline void v9fs_cache_inode_get_cookie(struct inode *inode)
{
}
static inline void v9fs_cache_inode_put_cookie(struct inode *inode)
{
}
static inline void v9fs_cache_inode_set_cookie(struct inode *inode, struct file *file)
{
}
#endif /* CONFIG_9P_FSCACHE */
#endif /* _9P_CACHE_H */
@@ -469,7 +469,11 @@ struct p9_fid *v9fs_session_init(struct v9fs_session_info *v9ses,
#ifdef CONFIG_9P_FSCACHE
/* register the session for caching */
v9fs_cache_session_get_cookie(v9ses);
if (v9ses->cache == CACHE_LOOSE || v9ses->cache == CACHE_FSCACHE) {
rc = v9fs_cache_session_get_cookie(v9ses, dev_name);
if (rc < 0)
goto err_clnt;
}
#endif
spin_lock(&v9fs_sessionlist_lock);
list_add(&v9ses->slist, &v9fs_sessionlist);
@@ -502,8 +506,7 @@ void v9fs_session_close(struct v9fs_session_info *v9ses)
}
#ifdef CONFIG_9P_FSCACHE
if (v9ses->fscache)
v9fs_cache_session_put_cookie(v9ses);
fscache_relinquish_volume(v9fs_session_cache(v9ses), NULL, false);
kfree(v9ses->cachetag);
#endif
kfree(v9ses->uname);
@@ -665,20 +668,12 @@ static int v9fs_cache_register(void)
ret = v9fs_init_inode_cache();
if (ret < 0)
return ret;
#ifdef CONFIG_9P_FSCACHE
ret = fscache_register_netfs(&v9fs_cache_netfs);
if (ret < 0)
v9fs_destroy_inode_cache();
#endif
return ret;
}
static void v9fs_cache_unregister(void)
{
v9fs_destroy_inode_cache();
#ifdef CONFIG_9P_FSCACHE
fscache_unregister_netfs(&v9fs_cache_netfs);
#endif
}
/**
@@ -89,7 +89,7 @@ struct v9fs_session_info {
unsigned int cache;
#ifdef CONFIG_9P_FSCACHE
char *cachetag;
struct fscache_cookie *fscache;
struct fscache_volume *fscache;
#endif
char *uname; /* user name to mount as */
@@ -109,7 +109,6 @@ struct v9fs_session_info {
struct v9fs_inode {
#ifdef CONFIG_9P_FSCACHE
struct mutex fscache_lock;
struct fscache_cookie *fscache;
#endif
struct p9_qid qid;
@@ -133,6 +132,16 @@ static inline struct fscache_cookie *v9fs_inode_cookie(struct v9fs_inode *v9inod
#endif
}
static inline struct fscache_volume *v9fs_session_cache(struct v9fs_session_info *v9ses)
{
#ifdef CONFIG_9P_FSCACHE
return v9ses->fscache;
#else
return NULL;
#endif
}
extern int v9fs_show_options(struct seq_file *m, struct dentry *root);
struct p9_fid *v9fs_session_init(struct v9fs_session_info *v9ses,
@@ -16,6 +16,7 @@
#include <linux/pagemap.h>
#include <linux/idr.h>
#include <linux/sched.h>
#include <linux/swap.h>
#include <linux/uio.h>
#include <linux/netfs.h>
#include <net/9p/9p.h>
@@ -78,7 +79,7 @@ static bool v9fs_is_cache_enabled(struct inode *inode)
{
struct fscache_cookie *cookie = v9fs_inode_cookie(V9FS_I(inode));
return fscache_cookie_enabled(cookie) && !hlist_empty(&cookie->backing_objects);
return fscache_cookie_enabled(cookie) && cookie->cache_priv;
}
/**
@@ -87,9 +88,13 @@ static bool v9fs_is_cache_enabled(struct inode *inode)
*/
static int v9fs_begin_cache_operation(struct netfs_read_request *rreq)
{
#ifdef CONFIG_9P_FSCACHE
struct fscache_cookie *cookie = v9fs_inode_cookie(V9FS_I(rreq->inode));
return fscache_begin_read_operation(rreq, cookie);
return fscache_begin_read_operation(&rreq->cache_resources, cookie);
#else
return -ENOBUFS;
#endif
}
static const struct netfs_read_request_ops v9fs_req_ops = {
@@ -133,16 +138,18 @@ static void v9fs_vfs_readahead(struct readahead_control *ractl)
static int v9fs_release_page(struct page *page, gfp_t gfp)
{
struct folio *folio = page_folio(page);
struct inode *inode = folio_inode(folio);
if (folio_test_private(folio))
return 0;
#ifdef CONFIG_9P_FSCACHE
if (folio_test_fscache(folio)) {
if (!(gfp & __GFP_DIRECT_RECLAIM) || !(gfp & __GFP_FS))
if (current_is_kswapd() || !(gfp & __GFP_FS))
return 0;
folio_wait_fscache(folio);
}
#endif
fscache_note_page_release(v9fs_inode_cookie(V9FS_I(inode)));
return 1;
}
@@ -161,10 +168,25 @@ static void v9fs_invalidate_page(struct page *page, unsigned int offset,
folio_wait_fscache(folio);
}
static void v9fs_write_to_cache_done(void *priv, ssize_t transferred_or_error,
bool was_async)
{
struct v9fs_inode *v9inode = priv;
__le32 version;
if (IS_ERR_VALUE(transferred_or_error) &&
transferred_or_error != -ENOBUFS) {
version = cpu_to_le32(v9inode->qid.version);
fscache_invalidate(v9fs_inode_cookie(v9inode), &version,
i_size_read(&v9inode->vfs_inode), 0);
}
}
static int v9fs_vfs_write_folio_locked(struct folio *folio)
{
struct inode *inode = folio_inode(folio);
struct v9fs_inode *v9inode = V9FS_I(inode);
struct fscache_cookie *cookie = v9fs_inode_cookie(v9inode);
loff_t start = folio_pos(folio);
loff_t i_size = i_size_read(inode);
struct iov_iter from;
@@ -181,10 +203,21 @@ static int v9fs_vfs_write_folio_locked(struct folio *folio)
/* We should have writeback_fid always set */
BUG_ON(!v9inode->writeback_fid);
folio_wait_fscache(folio);
folio_start_writeback(folio);
p9_client_write(v9inode->writeback_fid, start, &from, &err);
if (err == 0 &&
fscache_cookie_enabled(cookie) &&
test_bit(FSCACHE_COOKIE_IS_CACHING, &cookie->flags)) {
folio_start_fscache(folio);
fscache_write_to_cache(v9fs_inode_cookie(v9inode),
folio_mapping(folio), start, len, i_size,
v9fs_write_to_cache_done, v9inode,
true);
}
folio_end_writeback(folio);
return err;
}
@@ -303,6 +336,7 @@ static int v9fs_write_end(struct file *filp, struct address_space *mapping,
loff_t last_pos = pos + copied;
struct folio *folio = page_folio(subpage);
struct inode *inode = mapping->host;
struct v9fs_inode *v9inode = V9FS_I(inode);
p9_debug(P9_DEBUG_VFS, "filp %p, mapping %p\n", filp, mapping);
@@ -322,6 +356,7 @@ static int v9fs_write_end(struct file *filp, struct address_space *mapping,
if (last_pos > inode->i_size) {
inode_add_bytes(inode, last_pos - inode->i_size);
i_size_write(inode, last_pos);
fscache_update_cookie(v9fs_inode_cookie(v9inode), NULL, &last_pos);
}
folio_mark_dirty(folio);
out:
@@ -331,11 +366,25 @@ static int v9fs_write_end(struct file *filp, struct address_space *mapping,
return copied;
}
#ifdef CONFIG_9P_FSCACHE
/*
* Mark a page as having been made dirty and thus needing writeback. We also
* need to pin the cache object to write back to.
*/
static int v9fs_set_page_dirty(struct page *page)
{
struct v9fs_inode *v9inode = V9FS_I(page->mapping->host);
return fscache_set_page_dirty(page, v9fs_inode_cookie(v9inode));
}
#else
#define v9fs_set_page_dirty __set_page_dirty_nobuffers
#endif
const struct address_space_operations v9fs_addr_operations = {
.readpage = v9fs_vfs_readpage,
.readahead = v9fs_vfs_readahead,
.set_page_dirty = __set_page_dirty_nobuffers,
.set_page_dirty = v9fs_set_page_dirty,
.writepage = v9fs_vfs_writepage,
.write_begin = v9fs_write_begin,
.write_end = v9fs_write_end,
@@ -17,6 +17,7 @@
#include <linux/idr.h>
#include <linux/slab.h>
#include <linux/uio.h>
#include <linux/fscache.h>
#include <net/9p/9p.h>
#include <net/9p/client.h>
@@ -205,7 +206,10 @@ static int v9fs_dir_readdir_dotl(struct file *file, struct dir_context *ctx)
int v9fs_dir_release(struct inode *inode, struct file *filp)
{
struct v9fs_inode *v9inode = V9FS_I(inode);
struct p9_fid *fid;
__le32 version;
loff_t i_size;
fid = filp->private_data;
p9_debug(P9_DEBUG_VFS, "inode: %p filp: %p fid: %d\n",
@@ -216,6 +220,15 @@ int v9fs_dir_release(struct inode *inode, struct file *filp)
spin_unlock(&inode->i_lock);
p9_client_clunk(fid);
}
if ((filp->f_mode & FMODE_WRITE)) {
version = cpu_to_le32(v9inode->qid.version);
i_size = i_size_read(inode);
fscache_unuse_cookie(v9fs_inode_cookie(v9inode),
&version, &i_size);
} else {
fscache_unuse_cookie(v9fs_inode_cookie(v9inode), NULL, NULL);
}
return 0;
}
@@ -93,7 +93,8 @@ int v9fs_file_open(struct inode *inode, struct file *file)
}
mutex_unlock(&v9inode->v_mutex);
if (v9ses->cache == CACHE_LOOSE || v9ses->cache == CACHE_FSCACHE)
v9fs_cache_inode_set_cookie(inode, file);
fscache_use_cookie(v9fs_inode_cookie(v9inode),
file->f_mode & FMODE_WRITE);
v9fs_open_fid_add(inode, fid);
return 0;
out_error:
@@ -233,7 +233,6 @@ struct inode *v9fs_alloc_inode(struct super_block *sb)
return NULL;
#ifdef CONFIG_9P_FSCACHE
v9inode->fscache = NULL;
mutex_init(&v9inode->fscache_lock);
#endif
v9inode->writeback_fid = NULL;
v9inode->cache_validity = 0;
@@ -381,12 +380,16 @@ struct inode *v9fs_get_inode(struct super_block *sb, umode_t mode, dev_t rdev)
void v9fs_evict_inode(struct inode *inode)
{
struct v9fs_inode *v9inode = V9FS_I(inode);
__le32 version;
truncate_inode_pages_final(&inode->i_data);
version = cpu_to_le32(v9inode->qid.version);
fscache_clear_inode_writeback(v9fs_inode_cookie(v9inode), inode,
&version);
clear_inode(inode);
filemap_fdatawrite(&inode->i_data);
v9fs_cache_inode_put_cookie(inode);
fscache_relinquish_cookie(v9fs_inode_cookie(v9inode), false);
/* clunk the fid stashed in writeback_fid */
if (v9inode->writeback_fid) {
p9_client_clunk(v9inode->writeback_fid);
@@ -869,7 +872,8 @@ v9fs_vfs_atomic_open(struct inode *dir, struct dentry *dentry,
file->private_data = fid;
if (v9ses->cache == CACHE_LOOSE || v9ses->cache == CACHE_FSCACHE)
v9fs_cache_inode_set_cookie(d_inode(dentry), file);
fscache_use_cookie(v9fs_inode_cookie(v9inode),
file->f_mode & FMODE_WRITE);
v9fs_open_fid_add(inode, fid);
file->f_mode |= FMODE_CREATED;
@@ -1072,6 +1076,8 @@ static int v9fs_vfs_setattr(struct user_namespace *mnt_userns,
struct dentry *dentry, struct iattr *iattr)
{
int retval, use_dentry = 0;
struct inode *inode = d_inode(dentry);
struct v9fs_inode *v9inode = V9FS_I(inode);
struct v9fs_session_info *v9ses;
struct p9_fid *fid = NULL;
struct p9_wstat wstat;
@@ -1117,7 +1123,7 @@ static int v9fs_vfs_setattr(struct user_namespace *mnt_userns,
/* Write all dirty data */
if (d_is_reg(dentry))
filemap_write_and_wait(d_inode(dentry)->i_mapping);
filemap_write_and_wait(inode->i_mapping);
retval = p9_client_wstat(fid, &wstat);
@@ -1128,13 +1134,15 @@ static int v9fs_vfs_setattr(struct user_namespace *mnt_userns,
return retval;
if ((iattr->ia_valid & ATTR_SIZE) &&
iattr->ia_size != i_size_read(d_inode(dentry)))
truncate_setsize(d_inode(dentry), iattr->ia_size);
iattr->ia_size != i_size_read(inode)) {
truncate_setsize(inode, iattr->ia_size);
fscache_resize_cookie(v9fs_inode_cookie(v9inode), iattr->ia_size);
}
v9fs_invalidate_inode_attr(d_inode(dentry));
v9fs_invalidate_inode_attr(inode);
setattr_copy(&init_user_ns, d_inode(dentry), iattr);
mark_inode_dirty(d_inode(dentry));
setattr_copy(&init_user_ns, inode, iattr);
mark_inode_dirty(inode);
return 0;
}
@@ -344,7 +344,8 @@ v9fs_vfs_atomic_open_dotl(struct inode *dir, struct dentry *dentry,
goto err_clunk_old_fid;
file->private_data = ofid;
if (v9ses->cache == CACHE_LOOSE || v9ses->cache == CACHE_FSCACHE)
v9fs_cache_inode_set_cookie(inode, file);
fscache_use_cookie(v9fs_inode_cookie(v9inode),
file->f_mode & FMODE_WRITE);
v9fs_open_fid_add(inode, ofid);
file->f_mode |= FMODE_CREATED;
out:
@@ -20,6 +20,7 @@
#include <linux/slab.h>
#include <linux/statfs.h>
#include <linux/magic.h>
#include <linux/fscache.h>
#include <net/9p/9p.h>
#include <net/9p/client.h>
@@ -309,6 +310,7 @@ static int v9fs_write_inode(struct inode *inode,
__mark_inode_dirty(inode, I_DIRTY_DATASYNC);
return ret;
}
fscache_unpin_writeback(wbc, v9fs_inode_cookie(v9inode));
return 0;
}
@@ -332,6 +334,7 @@ static int v9fs_write_inode_dotl(struct inode *inode,
__mark_inode_dirty(inode, I_DIRTY_DATASYNC);
return ret;
}
fscache_unpin_writeback(wbc, v9fs_inode_cookie(v9inode));
return 0;
}
@@ -3,10 +3,7 @@
# Makefile for Red Hat Linux AFS client.
#
afs-cache-$(CONFIG_AFS_FSCACHE) := cache.o
kafs-y := \
$(afs-cache-y) \
addr_list.o \
callback.o \
cell.o \
// SPDX-License-Identifier: GPL-2.0-or-later
/* AFS caching stuff
*
* Copyright (C) 2008 Red Hat, Inc. All Rights Reserved.
* Written by David Howells (dhowells@redhat.com)
*/
#include <linux/sched.h>
#include "internal.h"
static enum fscache_checkaux afs_vnode_cache_check_aux(void *cookie_netfs_data,
const void *buffer,
uint16_t buflen,
loff_t object_size);
struct fscache_netfs afs_cache_netfs = {
.name = "afs",
.version = 2,
};
struct fscache_cookie_def afs_cell_cache_index_def = {
.name = "AFS.cell",
.type = FSCACHE_COOKIE_TYPE_INDEX,
};
struct fscache_cookie_def afs_volume_cache_index_def = {
.name = "AFS.volume",
.type = FSCACHE_COOKIE_TYPE_INDEX,
};
struct fscache_cookie_def afs_vnode_cache_index_def = {
.name = "AFS.vnode",
.type = FSCACHE_COOKIE_TYPE_DATAFILE,
.check_aux = afs_vnode_cache_check_aux,
};
/*
* check that the auxiliary data indicates that the entry is still valid
*/
static enum fscache_checkaux afs_vnode_cache_check_aux(void *cookie_netfs_data,
const void *buffer,
uint16_t buflen,
loff_t object_size)
{
struct afs_vnode *vnode = cookie_netfs_data;
struct afs_vnode_cache_aux aux;
_enter("{%llx,%x,%llx},%p,%u",
vnode->fid.vnode, vnode->fid.unique, vnode->status.data_version,
buffer, buflen);
memcpy(&aux, buffer, sizeof(aux));
/* check the size of the data is what we're expecting */
if (buflen != sizeof(aux)) {
_leave(" = OBSOLETE [len %hx != %zx]", buflen, sizeof(aux));
return FSCACHE_CHECKAUX_OBSOLETE;
}
if (vnode->status.data_version != aux.data_version) {
_leave(" = OBSOLETE [vers %llx != %llx]",
aux.data_version, vnode->status.data_version);
return FSCACHE_CHECKAUX_OBSOLETE;
}
_leave(" = SUCCESS");
return FSCACHE_CHECKAUX_OKAY;
}
@@ -680,13 +680,6 @@ static int afs_activate_cell(struct afs_net *net, struct afs_cell *cell)
return ret;
}
#ifdef CONFIG_AFS_FSCACHE
cell->cache = fscache_acquire_cookie(afs_cache_netfs.primary_index,
&afs_cell_cache_index_def,
cell->name, strlen(cell->name),
NULL, 0,
cell, 0, true);
#endif
ret = afs_proc_cell_setup(cell);
if (ret < 0)
return ret;
@@ -723,11 +716,6 @@ static void afs_deactivate_cell(struct afs_net *net, struct afs_cell *cell)
afs_dynroot_rmdir(net, cell);
mutex_unlock(&net->proc_cells_lock);
#ifdef CONFIG_AFS_FSCACHE
fscache_relinquish_cookie(cell->cache, NULL, false);
cell->cache = NULL;
#endif
_leave("");
}
@@ -14,6 +14,7 @@
#include <linux/gfp.h>
#include <linux/task_io_accounting_ops.h>
#include <linux/mm.h>
#include <linux/swap.h>
#include <linux/netfs.h>
#include "internal.h"
@@ -159,6 +160,8 @@ int afs_open(struct inode *inode, struct file *file)
if (file->f_flags & O_TRUNC)
set_bit(AFS_VNODE_NEW_CONTENT, &vnode->flags);
fscache_use_cookie(afs_vnode_cache(vnode), file->f_mode & FMODE_WRITE);
file->private_data = af;
_leave(" = 0");
return 0;
@@ -177,8 +180,10 @@ int afs_open(struct inode *inode, struct file *file)
*/
int afs_release(struct inode *inode, struct file *file)
{
struct afs_vnode_cache_aux aux;
struct afs_vnode *vnode = AFS_FS_I(inode);
struct afs_file *af = file->private_data;
loff_t i_size;
int ret = 0;
_enter("{%llx:%llu},", vnode->fid.vid, vnode->fid.vnode);
@@ -189,6 +194,15 @@ int afs_release(struct inode *inode, struct file *file)
file->private_data = NULL;
if (af->wb)
afs_put_wb_key(af->wb);
if ((file->f_mode & FMODE_WRITE)) {
i_size = i_size_read(&vnode->vfs_inode);
afs_set_cache_aux(vnode, &aux);
fscache_unuse_cookie(afs_vnode_cache(vnode), &aux, &i_size);
} else {
fscache_unuse_cookie(afs_vnode_cache(vnode), NULL, NULL);
}
key_put(af->key);
kfree(af);
afs_prune_wb_keys(vnode);
@@ -354,14 +368,19 @@ static bool afs_is_cache_enabled(struct inode *inode)
{
struct fscache_cookie *cookie = afs_vnode_cache(AFS_FS_I(inode));
return fscache_cookie_enabled(cookie) && !hlist_empty(&cookie->backing_objects);
return fscache_cookie_enabled(cookie) && cookie->cache_priv;
}
static int afs_begin_cache_operation(struct netfs_read_request *rreq)
{
#ifdef CONFIG_AFS_FSCACHE
struct afs_vnode *vnode = AFS_FS_I(rreq->inode);
return fscache_begin_read_operation(rreq, afs_vnode_cache(vnode));
return fscache_begin_read_operation(&rreq->cache_resources,
afs_vnode_cache(vnode));
#else
return -ENOBUFS;
#endif
}
static int afs_check_write_begin(struct file *file, loff_t pos, unsigned len,
@@ -398,6 +417,12 @@ static void afs_readahead(struct readahead_control *ractl)
netfs_readahead(ractl, &afs_req_ops, NULL);
}
int afs_write_inode(struct inode *inode, struct writeback_control *wbc)
{
fscache_unpin_writeback(wbc, afs_vnode_cache(AFS_FS_I(inode)));
return 0;
}
/*
* Adjust the dirty region of the page on truncation or full invalidation,
* getting rid of the markers altogether if the region is entirely invalidated.
@@ -480,23 +505,24 @@ static void afs_invalidatepage(struct page *page, unsigned int offset,
* release a page and clean up its private state if it's not busy
* - return true if the page can now be released, false if not
*/
static int afs_releasepage(struct page *page, gfp_t gfp_flags)
static int afs_releasepage(struct page *page, gfp_t gfp)
{
struct folio *folio = page_folio(page);
struct afs_vnode *vnode = AFS_FS_I(folio_inode(folio));
_enter("{{%llx:%llu}[%lu],%lx},%x",
vnode->fid.vid, vnode->fid.vnode, folio_index(folio), folio->flags,
gfp_flags);
gfp);
/* deny if page is being written to the cache and the caller hasn't
* elected to wait */
#ifdef CONFIG_AFS_FSCACHE
if (folio_test_fscache(folio)) {
if (!(gfp_flags & __GFP_DIRECT_RECLAIM) || !(gfp_flags & __GFP_FS))
if (current_is_kswapd() || !(gfp & __GFP_FS))
return false;
folio_wait_fscache(folio);
}
fscache_note_page_release(afs_vnode_cache(vnode));
#endif
if (folio_test_private(folio)) {
@@ -413,9 +413,9 @@ static void afs_get_inode_cache(struct afs_vnode *vnode)
{
#ifdef CONFIG_AFS_FSCACHE
struct {
u32 vnode_id;
u32 unique;
u32 vnode_id_ext[2]; /* Allow for a 96-bit key */
__be32 vnode_id;
__be32 unique;
__be32 vnode_id_ext[2]; /* Allow for a 96-bit key */
} __packed key;
struct afs_vnode_cache_aux aux;
@@ -424,17 +424,18 @@ static void afs_get_inode_cache(struct afs_vnode *vnode)
return;
}
key.vnode_id = vnode->fid.vnode;
key.unique = vnode->fid.unique;
key.vnode_id_ext[0] = vnode->fid.vnode >> 32;
key.vnode_id_ext[1] = vnode->fid.vnode_hi;
aux.data_version = vnode->status.data_version;
key.vnode_id = htonl(vnode->fid.vnode);
key.unique = htonl(vnode->fid.unique);
key.vnode_id_ext[0] = htonl(vnode->fid.vnode >> 32);
key.vnode_id_ext[1] = htonl(vnode->fid.vnode_hi);
afs_set_cache_aux(vnode, &aux);
vnode->cache = fscache_acquire_cookie(vnode->volume->cache,
&afs_vnode_cache_index_def,
vnode->cache = fscache_acquire_cookie(
vnode->volume->cache,
vnode->status.type == AFS_FTYPE_FILE ? 0 : FSCACHE_ADV_SINGLE_CHUNK,
&key, sizeof(key),
&aux, sizeof(aux),
vnode, vnode->status.size, true);
vnode->status.size);
#endif
}
@@ -563,9 +564,7 @@ static void afs_zap_data(struct afs_vnode *vnode)
{
_enter("{%llx:%llu}", vnode->fid.vid, vnode->fid.vnode);
#ifdef CONFIG_AFS_FSCACHE
fscache_invalidate(vnode->cache);
#endif
afs_invalidate_cache(vnode, 0);
/* nuke all the non-dirty pages that aren't locked, mapped or being
* written back in a regular file and completely discard the pages in a
@@ -762,9 +761,8 @@ int afs_drop_inode(struct inode *inode)
*/
void afs_evict_inode(struct inode *inode)
{
struct afs_vnode *vnode;
vnode = AFS_FS_I(inode);
struct afs_vnode_cache_aux aux;
struct afs_vnode *vnode = AFS_FS_I(inode);
_enter("{%llx:%llu.%d}",
vnode->fid.vid,
@@ -776,6 +774,9 @@ void afs_evict_inode(struct inode *inode)
ASSERTCMP(inode->i_ino, ==, vnode->fid.vnode);
truncate_inode_pages_final(&inode->i_data);
afs_set_cache_aux(vnode, &aux);
fscache_clear_inode_writeback(afs_vnode_cache(vnode), inode, &aux);
clear_inode(inode);
while (!list_empty(&vnode->wb_keys)) {
@@ -786,14 +787,9 @@ void afs_evict_inode(struct inode *inode)
}
#ifdef CONFIG_AFS_FSCACHE
{
struct afs_vnode_cache_aux aux;
aux.data_version = vnode->status.data_version;
fscache_relinquish_cookie(vnode->cache, &aux,
fscache_relinquish_cookie(vnode->cache,
test_bit(AFS_VNODE_DELETED, &vnode->flags));
vnode->cache = NULL;
}
#endif
afs_prune_wb_keys(vnode);
@@ -833,6 +829,9 @@ static void afs_setattr_edit_file(struct afs_operation *op)
if (size < i_size)
truncate_pagecache(inode, size);
if (size != i_size)
fscache_resize_cookie(afs_vnode_cache(vp->vnode),
vp->scb.status.size);
}
}
@@ -849,40 +848,67 @@ static const struct afs_operation_ops afs_setattr_operation = {
int afs_setattr(struct user_namespace *mnt_userns, struct dentry *dentry,
struct iattr *attr)
{
const unsigned int supported =
ATTR_SIZE | ATTR_MODE | ATTR_UID | ATTR_GID |
ATTR_MTIME | ATTR_MTIME_SET | ATTR_TIMES_SET | ATTR_TOUCH;
struct afs_operation *op;
struct afs_vnode *vnode = AFS_FS_I(d_inode(dentry));
struct inode *inode = &vnode->vfs_inode;
loff_t i_size;
int ret;
_enter("{%llx:%llu},{n=%pd},%x",
vnode->fid.vid, vnode->fid.vnode, dentry,
attr->ia_valid);
if (!(attr->ia_valid & (ATTR_SIZE | ATTR_MODE | ATTR_UID | ATTR_GID |
ATTR_MTIME | ATTR_MTIME_SET | ATTR_TIMES_SET |
ATTR_TOUCH))) {
if (!(attr->ia_valid & supported)) {
_leave(" = 0 [unsupported]");
return 0;
}
i_size = i_size_read(inode);
if (attr->ia_valid & ATTR_SIZE) {
if (!S_ISREG(vnode->vfs_inode.i_mode))
if (!S_ISREG(inode->i_mode))
return -EISDIR;
ret = inode_newsize_ok(&vnode->vfs_inode, attr->ia_size);
ret = inode_newsize_ok(inode, attr->ia_size);
if (ret)
return ret;
if (attr->ia_size == i_size_read(&vnode->vfs_inode))
if (attr->ia_size == i_size)
attr->ia_valid &= ~ATTR_SIZE;
}
/* flush any dirty data outstanding on a regular file */
if (S_ISREG(vnode->vfs_inode.i_mode))
filemap_write_and_wait(vnode->vfs_inode.i_mapping);
fscache_use_cookie(afs_vnode_cache(vnode), true);
/* Prevent any new writebacks from starting whilst we do this. */
down_write(&vnode->validate_lock);
if ((attr->ia_valid & ATTR_SIZE) && S_ISREG(inode->i_mode)) {
loff_t size = attr->ia_size;
/* Wait for any outstanding writes to the server to complete */
loff_t from = min(size, i_size);
loff_t to = max(size, i_size);
ret = filemap_fdatawait_range(inode->i_mapping, from, to);
if (ret < 0)
goto out_unlock;
/* Don't talk to the server if we're just shortening in-memory
* writes that haven't gone to the server yet.
*/
if (!(attr->ia_valid & (supported & ~ATTR_SIZE & ~ATTR_MTIME)) &&
attr->ia_size < i_size &&
attr->ia_size > vnode->status.size) {
truncate_pagecache(inode, attr->ia_size);
fscache_resize_cookie(afs_vnode_cache(vnode),
attr->ia_size);
i_size_write(inode, attr->ia_size);
ret = 0;
goto out_unlock;
}
}
op = afs_alloc_operation(((attr->ia_valid & ATTR_FILE) ?
afs_file_key(attr->ia_file) : NULL),
vnode->volume);
@@ -907,6 +933,7 @@ int afs_setattr(struct user_namespace *mnt_userns, struct dentry *dentry,
out_unlock:
up_write(&vnode->validate_lock);
fscache_unuse_cookie(afs_vnode_cache(vnode), NULL, NULL);
_leave(" = %d", ret);
return ret;
}
@@ -14,7 +14,6 @@
#include <linux/key.h>
#include <linux/workqueue.h>
#include <linux/sched.h>
#define FSCACHE_USE_NEW_IO_API
#include <linux/fscache.h>
#include <linux/backing-dev.h>
#include <linux/uuid.h>
@@ -364,9 +363,6 @@ struct afs_cell {
struct key *anonymous_key; /* anonymous user key for this cell */
struct work_struct manager; /* Manager for init/deinit/dns */
struct hlist_node proc_link; /* /proc cell list link */
#ifdef CONFIG_AFS_FSCACHE
struct fscache_cookie *cache; /* caching cookie */
#endif
time64_t dns_expiry; /* Time AFSDB/SRV record expires */
time64_t last_inactive; /* Time of last drop of usage count */
atomic_t ref; /* Struct refcount */
@@ -590,7 +586,7 @@ struct afs_volume {
#define AFS_VOLUME_BUSY 5 /* - T if volume busy notice given */
#define AFS_VOLUME_MAYBE_NO_IBULK 6 /* - T if some servers don't have InlineBulkStatus */
#ifdef CONFIG_AFS_FSCACHE
struct fscache_cookie *cache; /* caching cookie */
struct fscache_volume *cache; /* Caching cookie */
#endif
struct afs_server_list __rcu *servers; /* List of servers on which volume resides */
rwlock_t servers_lock; /* Lock for ->servers */
@@ -872,9 +868,24 @@ struct afs_operation {
* Cache auxiliary data.
*/
struct afs_vnode_cache_aux {
u64 data_version;
__be64 data_version;
} __packed;
static inline void afs_set_cache_aux(struct afs_vnode *vnode,
struct afs_vnode_cache_aux *aux)
{
aux->data_version = cpu_to_be64(vnode->status.data_version);
}
static inline void afs_invalidate_cache(struct afs_vnode *vnode, unsigned int flags)
{
struct afs_vnode_cache_aux aux;
afs_set_cache_aux(vnode, &aux);
fscache_invalidate(afs_vnode_cache(vnode), &aux,
i_size_read(&vnode->vfs_inode), flags);
}
/*
* We use folio->private to hold the amount of the folio that we've written to,
* splitting the field into two parts. However, we need to represent a range
@@ -962,13 +973,6 @@ extern void afs_merge_fs_addr6(struct afs_addr_list *, __be32 *, u16);
*/
#ifdef CONFIG_AFS_FSCACHE
extern struct fscache_netfs afs_cache_netfs;
extern struct fscache_cookie_def afs_cell_cache_index_def;
extern struct fscache_cookie_def afs_volume_cache_index_def;
extern struct fscache_cookie_def afs_vnode_cache_index_def;
#else
#define afs_cell_cache_index_def (*(struct fscache_cookie_def *) NULL)
#define afs_volume_cache_index_def (*(struct fscache_cookie_def *) NULL)
#define afs_vnode_cache_index_def (*(struct fscache_cookie_def *) NULL)
#endif
/*
@@ -1068,6 +1072,7 @@ extern int afs_release(struct inode *, struct file *);
extern int afs_fetch_data(struct afs_vnode *, struct afs_read *);
extern struct afs_read *afs_alloc_read(gfp_t);
extern void afs_put_read(struct afs_read *);
extern int afs_write_inode(struct inode *, struct writeback_control *);
static inline struct afs_read *afs_get_read(struct afs_read *req)
{
@@ -1506,7 +1511,7 @@ extern struct afs_vlserver_list *afs_extract_vlserver_list(struct afs_cell *,
* volume.c
*/
extern struct afs_volume *afs_create_volume(struct afs_fs_context *);
extern void afs_activate_volume(struct afs_volume *);
extern int afs_activate_volume(struct afs_volume *);
extern void afs_deactivate_volume(struct afs_volume *);
extern struct afs_volume *afs_get_volume(struct afs_volume *, enum afs_volume_trace);
extern void afs_put_volume(struct afs_net *, struct afs_volume *, enum afs_volume_trace);
@@ -1515,7 +1520,11 @@ extern int afs_check_volume_status(struct afs_volume *, struct afs_operation *);
/*
* write.c
*/
#ifdef CONFIG_AFS_FSCACHE
extern int afs_set_page_dirty(struct page *);
#else
#define afs_set_page_dirty __set_page_dirty_nobuffers
#endif
extern int afs_write_begin(struct file *file, struct address_space *mapping,
loff_t pos, unsigned len, unsigned flags,
struct page **pagep, void **fsdata);
@@ -186,13 +186,6 @@ static int __init afs_init(void)
if (!afs_lock_manager)
goto error_lockmgr;
#ifdef CONFIG_AFS_FSCACHE
/* we want to be able to cache */
ret = fscache_register_netfs(&afs_cache_netfs);
if (ret < 0)
goto error_cache;
#endif
ret = register_pernet_device(&afs_net_ops);
if (ret < 0)
goto error_net;
@@ -215,10 +208,6 @@ static int __init afs_init(void)
error_fs:
unregister_pernet_device(&afs_net_ops);
error_net:
#ifdef CONFIG_AFS_FSCACHE
fscache_unregister_netfs(&afs_cache_netfs);
error_cache:
#endif
destroy_workqueue(afs_lock_manager);
error_lockmgr:
destroy_workqueue(afs_async_calls);
@@ -245,9 +234,6 @@ static void __exit afs_exit(void)
proc_remove(afs_proc_symlink);
afs_fs_exit();
unregister_pernet_device(&afs_net_ops);
#ifdef CONFIG_AFS_FSCACHE
fscache_unregister_netfs(&afs_cache_netfs);
#endif
destroy_workqueue(afs_lock_manager);
destroy_workqueue(afs_async_calls);
destroy_workqueue(afs_wq);
@@ -55,6 +55,7 @@ int afs_net_id;
static const struct super_operations afs_super_ops = {
.statfs = afs_statfs,
.alloc_inode = afs_alloc_inode,
.write_inode = afs_write_inode,
.drop_inode = afs_drop_inode,
.destroy_inode = afs_destroy_inode,
.free_inode = afs_free_inode,
@@ -268,15 +268,30 @@ void afs_put_volume(struct afs_net *net, struct afs_volume *volume,
/*
* Activate a volume.
*/
void afs_activate_volume(struct afs_volume *volume)
int afs_activate_volume(struct afs_volume *volume)
{
#ifdef CONFIG_AFS_FSCACHE
volume->cache = fscache_acquire_cookie(volume->cell->cache,
&afs_volume_cache_index_def,
&volume->vid, sizeof(volume->vid),
NULL, 0,
volume, 0, true);
struct fscache_volume *vcookie;
char *name;
name = kasprintf(GFP_KERNEL, "afs,%s,%llx",
volume->cell->name, volume->vid);
if (!name)
return -ENOMEM;
vcookie = fscache_acquire_volume(name, NULL, NULL, 0);
if (IS_ERR(vcookie)) {
if (vcookie != ERR_PTR(-EBUSY)) {
kfree(name);
return PTR_ERR(vcookie);
}
pr_err("AFS: Cache volume key already in use (%s)\n", name);
vcookie = NULL;
}
volume->cache = vcookie;
kfree(name);
#endif
return 0;
}
/*
@@ -287,7 +302,7 @@ void afs_deactivate_volume(struct afs_volume *volume)
_enter("%s", volume->name);
#ifdef CONFIG_AFS_FSCACHE
fscache_relinquish_cookie(volume->cache, NULL,
fscache_relinquish_volume(volume->cache, NULL,
test_bit(AFS_VOLUME_DELETED, &volume->flags));
volume->cache = NULL;
#endif
@@ -12,17 +12,30 @@
#include <linux/writeback.h>
#include <linux/pagevec.h>
#include <linux/netfs.h>
#include <linux/fscache.h>
#include "internal.h"
static void afs_write_to_cache(struct afs_vnode *vnode, loff_t start, size_t len,
loff_t i_size, bool caching);
#ifdef CONFIG_AFS_FSCACHE
/*
* mark a page as having been made dirty and thus needing writeback
* Mark a page as having been made dirty and thus needing writeback. We also
* need to pin the cache object to write back to.
*/
int afs_set_page_dirty(struct page *page)
{
_enter("");
return __set_page_dirty_nobuffers(page);
return fscache_set_page_dirty(page, afs_vnode_cache(AFS_FS_I(page->mapping->host)));
}
static void afs_folio_start_fscache(bool caching, struct folio *folio)
{
if (caching)
folio_start_fscache(folio);
}
#else
static void afs_folio_start_fscache(bool caching, struct folio *folio)
{
}
#endif
/*
* prepare to perform part of a write to a page
......@@ -114,7 +127,7 @@ int afs_write_end(struct file *file, struct address_space *mapping,
unsigned long priv;
unsigned int f, from = offset_in_folio(folio, pos);
unsigned int t, to = from + copied;
loff_t i_size, maybe_i_size;
loff_t i_size, write_end_pos;
_enter("{%llx:%llu},{%lx}",
vnode->fid.vid, vnode->fid.vnode, folio_index(folio));
@@ -131,15 +144,16 @@ int afs_write_end(struct file *file, struct address_space *mapping,
if (copied == 0)
goto out;
maybe_i_size = pos + copied;
write_end_pos = pos + copied;
i_size = i_size_read(&vnode->vfs_inode);
if (maybe_i_size > i_size) {
if (write_end_pos > i_size) {
write_seqlock(&vnode->cb_lock);
i_size = i_size_read(&vnode->vfs_inode);
if (maybe_i_size > i_size)
afs_set_i_size(vnode, maybe_i_size);
if (write_end_pos > i_size)
afs_set_i_size(vnode, write_end_pos);
write_sequnlock(&vnode->cb_lock);
fscache_update_cookie(afs_vnode_cache(vnode), NULL, &write_end_pos);
}
if (folio_test_private(folio)) {
@@ -418,6 +432,7 @@ static void afs_extend_writeback(struct address_space *mapping,
loff_t start,
loff_t max_len,
bool new_content,
bool caching,
unsigned int *_len)
{
struct pagevec pvec;
@@ -464,7 +479,9 @@ static void afs_extend_writeback(struct address_space *mapping,
folio_put(folio);
break;
}
if (!folio_test_dirty(folio) || folio_test_writeback(folio)) {
if (!folio_test_dirty(folio) ||
folio_test_writeback(folio) ||
folio_test_fscache(folio)) {
folio_unlock(folio);
folio_put(folio);
break;
@@ -512,6 +529,7 @@ static void afs_extend_writeback(struct address_space *mapping,
BUG();
if (folio_start_writeback(folio))
BUG();
afs_folio_start_fscache(caching, folio);
*_count -= folio_nr_pages(folio);
folio_unlock(folio);
@@ -539,6 +557,7 @@ static ssize_t afs_write_back_from_locked_folio(struct address_space *mapping,
 	unsigned int offset, to, len, max_len;
 	loff_t i_size = i_size_read(&vnode->vfs_inode);
 	bool new_content = test_bit(AFS_VNODE_NEW_CONTENT, &vnode->flags);
+	bool caching = fscache_cookie_enabled(afs_vnode_cache(vnode));
 	long count = wbc->nr_to_write;
 	int ret;
@@ -546,6 +565,7 @@ static ssize_t afs_write_back_from_locked_folio(struct address_space *mapping,
 	if (folio_start_writeback(folio))
 		BUG();
+	afs_folio_start_fscache(caching, folio);
 
 	count -= folio_nr_pages(folio);
@@ -572,7 +592,8 @@ static ssize_t afs_write_back_from_locked_folio(struct address_space *mapping,
 		if (len < max_len &&
 		    (to == folio_size(folio) || new_content))
 			afs_extend_writeback(mapping, vnode, &count,
-					     start, max_len, new_content, &len);
+					     start, max_len, new_content,
+					     caching, &len);
 		len = min_t(loff_t, len, max_len);
 	}
@@ -585,12 +606,19 @@ static ssize_t afs_write_back_from_locked_folio(struct address_space *mapping,
 	if (start < i_size) {
 		_debug("write back %x @%llx [%llx]", len, start, i_size);
 
+		/* Speculatively write to the cache.  We have to fix this up
+		 * later if the store fails.
+		 */
+		afs_write_to_cache(vnode, start, len, i_size, caching);
+
 		iov_iter_xarray(&iter, WRITE, &mapping->i_pages, start, len);
 		ret = afs_store_data(vnode, &iter, start, false);
 	} else {
 		_debug("write discard %x @%llx [%llx]", len, start, i_size);
 
+		/* The dirty region was entirely beyond the EOF. */
+		fscache_clear_page_bits(afs_vnode_cache(vnode),
+					mapping, start, len, caching);
 		afs_pages_written_back(vnode, start, len);
 		ret = 0;
 	}
@@ -649,6 +677,10 @@ int afs_writepage(struct page *subpage, struct writeback_control *wbc)
 	_enter("{%lx},", folio_index(folio));
 
+#ifdef CONFIG_AFS_FSCACHE
+	folio_wait_fscache(folio);
+#endif
+
 	start = folio_index(folio) * PAGE_SIZE;
 	ret = afs_write_back_from_locked_folio(folio_mapping(folio), wbc,
 					       folio, start, LLONG_MAX - start);
@@ -714,10 +746,15 @@ static int afs_writepages_region(struct address_space *mapping,
 			continue;
 		}
 
-		if (folio_test_writeback(folio)) {
+		if (folio_test_writeback(folio) ||
+		    folio_test_fscache(folio)) {
 			folio_unlock(folio);
-			if (wbc->sync_mode != WB_SYNC_NONE)
+			if (wbc->sync_mode != WB_SYNC_NONE) {
 				folio_wait_writeback(folio);
+#ifdef CONFIG_AFS_FSCACHE
+				folio_wait_fscache(folio);
+#endif
+			}
 			folio_put(folio);
 			continue;
 		}
@@ -970,3 +1007,28 @@ int afs_launder_page(struct page *subpage)
 	folio_wait_fscache(folio);
 	return ret;
 }
+
+/*
+ * Deal with the completion of writing the data to the cache.
+ */
+static void afs_write_to_cache_done(void *priv, ssize_t transferred_or_error,
+				    bool was_async)
+{
+	struct afs_vnode *vnode = priv;
+
+	if (IS_ERR_VALUE(transferred_or_error) &&
+	    transferred_or_error != -ENOBUFS)
+		afs_invalidate_cache(vnode, 0);
+}
+
+/*
+ * Save the write to the cache also.
+ */
+static void afs_write_to_cache(struct afs_vnode *vnode,
+			       loff_t start, size_t len, loff_t i_size,
+			       bool caching)
+{
+	fscache_write_to_cache(afs_vnode_cache(vnode),
+			       vnode->vfs_inode.i_mapping, start, len, i_size,
+			       afs_write_to_cache_done, vnode, caching);
+}
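Taken together, the write-back hunks above follow a simple pattern: folios get PG_fscache set via afs_folio_start_fscache() before the server write starts, fscache_write_to_cache() copies the same region into the cache and clears the page bits when the I/O completes (or immediately if caching is off), and the termination handler invalidates the cache object on any failure other than -ENOBUFS (cache disabled). Below is a minimal sketch of that pattern for a hypothetical netfs, assuming only the fscache_write_to_cache() signature used in the diff; the myfs_* names are illustrative.

/* Sketch only: write-through caching for a hypothetical netfs "myfs". */
#include <linux/fs.h>
#include <linux/fscache.h>

static void myfs_write_done(void *priv, ssize_t transferred_or_error,
			    bool was_async)
{
	/* Anything other than success or -ENOBUFS (cache disabled) leaves
	 * the cached copy suspect; a real netfs would invalidate it here. */
	if (IS_ERR_VALUE(transferred_or_error) &&
	    transferred_or_error != -ENOBUFS)
		pr_warn("myfs: cache write failed (%zd)\n",
			transferred_or_error);
}

static void myfs_store_region(struct inode *inode,
			      struct fscache_cookie *cookie,
			      loff_t start, size_t len, loff_t i_size,
			      bool caching)
{
	/* Writes [start, start + len) to the cache and clears PG_fscache on
	 * the affected folios on completion (immediately if !caching). */
	fscache_write_to_cache(cookie, inode->i_mapping, start, len, i_size,
			       myfs_write_done, inode, caching);
}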
@@ -19,3 +19,10 @@ config CACHEFILES_DEBUG
 	  caching on files module.  If this is set, the debugging output may be
 	  enabled by setting bits in /sys/modules/cachefiles/parameter/debug or
 	  by including a debugging specifier in /etc/cachefilesd.conf.
+
+config CACHEFILES_ERROR_INJECTION
+	bool "Provide error injection for cachefiles"
+	depends on CACHEFILES && SYSCTL
+	help
+	  This permits error injection to be enabled in cachefiles whilst a
+	  cache is in service.
@@ -4,15 +4,17 @@
 #
 
 cachefiles-y := \
-	bind.o \
+	cache.o \
 	daemon.o \
 	interface.o \
+	io.o \
 	key.o \
 	main.o \
 	namei.o \
-	rdwr.o \
 	security.o \
+	volume.o \
 	xattr.o
 
+cachefiles-$(CONFIG_CACHEFILES_ERROR_INJECTION) += error_inject.o
+
 obj-$(CONFIG_CACHEFILES) := cachefiles.o
// SPDX-License-Identifier: GPL-2.0-or-later
/* Error injection handling.
 *
 * Copyright (C) 2021 Red Hat, Inc. All Rights Reserved.
 * Written by David Howells (dhowells@redhat.com)
 */

#include <linux/sysctl.h>
#include "internal.h"

unsigned int cachefiles_error_injection_state;

static struct ctl_table_header *cachefiles_sysctl;
static struct ctl_table cachefiles_sysctls[] = {
	{
		.procname	= "error_injection",
		.data		= &cachefiles_error_injection_state,
		.maxlen		= sizeof(unsigned int),
		.mode		= 0644,
		.proc_handler	= proc_douintvec,
	},
	{}
};

static struct ctl_table cachefiles_sysctls_root[] = {
	{
		.procname	= "cachefiles",
		.mode		= 0555,
		.child		= cachefiles_sysctls,
	},
	{}
};

int __init cachefiles_register_error_injection(void)
{
	cachefiles_sysctl = register_sysctl_table(cachefiles_sysctls_root);
	if (!cachefiles_sysctl)
		return -ENOMEM;
	return 0;
}

void cachefiles_unregister_error_injection(void)
{
	unregister_sysctl_table(cachefiles_sysctl);
}
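Given the two sysctl tables above, the control file lands at /proc/sys/cachefiles/error_injection, and writing a non-zero value there turns injection on while a cache is live (e.g. echo 1 > /proc/sys/cachefiles/error_injection). A sketch of how a call site might translate the state into an error follows; the real consumers live in parts of this diff not shown here, so the helper name and the bit layout below are assumptions for illustration only.

/* Sketch only: a hypothetical consumer of the injection state. */
#include <linux/errno.h>

extern unsigned int cachefiles_error_injection_state;

static inline int cachefiles_maybe_inject_write_error(void)
{
	/* Assumed layout: bit 1 injects -EIO, bit 0 injects -ENOSPC. */
	return cachefiles_error_injection_state & 2 ? -EIO :
	       cachefiles_error_injection_state & 1 ? -ENOSPC :
	       0;
}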
@@ -2,7 +2,7 @@
 /* Network filesystem caching backend to use cache files on a premounted
  * filesystem
  *
- * Copyright (C) 2007 Red Hat, Inc. All Rights Reserved.
+ * Copyright (C) 2021 Red Hat, Inc. All Rights Reserved.
  * Written by David Howells (dhowells@redhat.com)
  */
@@ -18,6 +18,8 @@
 #include <linux/statfs.h>
 #include <linux/sysctl.h>
 #include <linux/miscdevice.h>
+#include <linux/netfs.h>
+#include <trace/events/netfs.h>
 #define CREATE_TRACE_POINTS
 #include "internal.h"
@@ -37,14 +39,6 @@ static struct miscdevice cachefiles_dev = {
 	.fops	= &cachefiles_daemon_fops,
 };
 
-static void cachefiles_object_init_once(void *_object)
-{
-	struct cachefiles_object *object = _object;
-
-	memset(object, 0, sizeof(*object));
-	spin_lock_init(&object->work_lock);
-}
-
 /*
  * initialise the fs caching module
  */
@@ -52,6 +46,9 @@ static int __init cachefiles_init(void)
 {
 	int ret;
 
+	ret = cachefiles_register_error_injection();
+	if (ret < 0)
+		goto error_einj;
+
 	ret = misc_register(&cachefiles_dev);
 	if (ret < 0)
 		goto error_dev;
@@ -61,9 +58,7 @@ static int __init cachefiles_init(void)
 	cachefiles_object_jar =
 		kmem_cache_create("cachefiles_object_jar",
 				  sizeof(struct cachefiles_object),
-				  0,
-				  SLAB_HWCACHE_ALIGN,
-				  cachefiles_object_init_once);
+				  0, SLAB_HWCACHE_ALIGN, NULL);
 	if (!cachefiles_object_jar) {
 		pr_notice("Failed to allocate an object jar\n");
 		goto error_object_jar;
@@ -75,6 +70,8 @@ static int __init cachefiles_init(void)
 error_object_jar:
 	misc_deregister(&cachefiles_dev);
 error_dev:
+	cachefiles_unregister_error_injection();
+error_einj:
 	pr_err("failed to register: %d\n", ret);
 	return ret;
 }
@@ -90,6 +87,7 @@ static void __exit cachefiles_exit(void)
 	kmem_cache_destroy(cachefiles_object_jar);
 	misc_deregister(&cachefiles_dev);
+	cachefiles_unregister_error_injection();
 }
 
 module_exit(cachefiles_exit);
 // SPDX-License-Identifier: GPL-2.0-or-later
 /* CacheFiles security management
  *
- * Copyright (C) 2007 Red Hat, Inc. All Rights Reserved.
+ * Copyright (C) 2007, 2021 Red Hat, Inc. All Rights Reserved.
  * Written by David Howells (dhowells@redhat.com)
  */
......
@@ -1856,7 +1856,7 @@ static int try_nonblocking_invalidate(struct inode *inode)
 	u32 invalidating_gen = ci->i_rdcache_gen;
 
 	spin_unlock(&ci->i_ceph_lock);
-	ceph_fscache_invalidate(inode);
+	ceph_fscache_invalidate(inode, false);
 	invalidate_mapping_pages(&inode->i_data, 0, -1);
 	spin_lock(&ci->i_ceph_lock);
@@ -2388,6 +2388,7 @@ int ceph_write_inode(struct inode *inode, struct writeback_control *wbc)
 	int wait = (wbc->sync_mode == WB_SYNC_ALL && !wbc->for_sync);
 
 	dout("write_inode %p wait=%d\n", inode, wait);
+	ceph_fscache_unpin_writeback(inode, wbc);
 	if (wait) {
 		dirty = try_flush_caps(inode, &flush_tid);
 		if (dirty)
......
@@ -787,16 +787,10 @@ static int __init init_caches(void)
 	if (!ceph_wb_pagevec_pool)
 		goto bad_pagevec_pool;
 
-	error = ceph_fscache_register();
-	if (error)
-		goto bad_fscache;
-
 	return 0;
 
-bad_fscache:
-	kmem_cache_destroy(ceph_mds_request_cachep);
 bad_pagevec_pool:
-	mempool_destroy(ceph_wb_pagevec_pool);
+	kmem_cache_destroy(ceph_mds_request_cachep);
 bad_mds_req:
 	kmem_cache_destroy(ceph_dir_file_cachep);
 bad_dir_file:
@@ -828,8 +822,6 @@ static void destroy_caches(void)
 	kmem_cache_destroy(ceph_dir_file_cachep);
 	kmem_cache_destroy(ceph_mds_request_cachep);
 	mempool_destroy(ceph_wb_pagevec_pool);
-
-	ceph_fscache_unregister();
 }
 
 static void __ceph_umount_begin(struct ceph_fs_client *fsc)
......
@@ -188,7 +188,7 @@ config CIFS_SMB_DIRECT
 
 config CIFS_FSCACHE
 	bool "Provide CIFS client caching support"
-	depends on CIFS=m && FSCACHE || CIFS=y && FSCACHE=y
+	depends on CIFS=m && FSCACHE_OLD_API || CIFS=y && FSCACHE_OLD_API=y
 	help
 	  Makes CIFS FS-Cache capable. Say Y here if you want your CIFS data
 	  to be cached locally on disk through the general filesystem cache
......