1. 21 Jul, 2011 8 commits
    • fat: remove i_alloc_sem abuse · 58268691
      Christoph Hellwig authored
      Add a new rw_semaphore to protect bmap against truncate.  Previously,
      i_alloc_sem was abused for this, but it is going away in this series.
      
      Note that we can't simply use i_mutex, given that the swapon code
      calls ->bmap under it.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
    • VFS: Fixup kerneldoc for generic_permission() · 8c5dc70a
      Tobias Klauser authored
      The flags parameter went away in commit
      d749519b444db985e40b897f73ce1898b11f997e.
      Signed-off-by: Tobias Klauser <tklauser@distanz.ch>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
    • anonfd: fix missing declaration · e46ebd27
      Tomasz Stanislawski authored
      The forward declaration of struct file_operations is
      added to avoid compilation warnings.
      Signed-off-by: Tomasz Stanislawski <t.stanislaws@samsung.com>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
    • xfs: make use of new shrinker callout for the inode cache · 8daaa831
      Dave Chinner authored
      Convert the inode reclaim shrinker to use the new per-sb shrinker
      operations. This allows much bigger reclaim batches to be used, and
      allows the XFS inode cache to be shrunk in proportion with the VFS
      dentry and inode caches. This avoids the problem of the VFS caches
      being shrunk significantly before the XFS inode cache is shrunk
      resulting in imbalances in the caches during reclaim.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
    • vfs: increase shrinker batch size · 8ab47664
      Dave Chinner authored
      Now that the per-sb shrinker is responsible for shrinking two or
      more caches, increase the shrinker batch size to 1024 objects to
      keep economies of scale when shrinking each cache.
      
      To allow for a large increase in batch size, add a conditional
      reschedule to prune_icache_sb() so that we don't hold the LRU spin
      lock for too long. This mirrors the behaviour of the
      __shrink_dcache_sb(), and allows us to increase the batch size
      without needing to worry about problems caused by long lock hold
      times.
      
      To ensure that filesystems using the per-sb shrinker callouts don't
      cause problems, document that the object freeing method must
      reschedule appropriately inside loops.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
    • superblock: add filesystem shrinker operations · 0e1fdafd
      Dave Chinner authored
      Now we have a per-superblock shrinker implementation, we can add a
      filesystem specific callout to it to allow filesystem internal
      caches to be shrunk by the superblock shrinker.
      
      Rather than perpetuate the multipurpose shrinker callback API (i.e.
      nr_to_scan == 0 meaning "tell me how many objects are freeable in
      the cache"), two operations will be added. The first will return
      the number of objects that are freeable; the second is the actual
      shrinker call.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
    • inode: remove iprune_sem · 4f8c19fd
      Dave Chinner authored
      Now that we have per-sb shrinkers with a lifecycle that is a subset
      of the superblock lifecycle and can reliably detect a filesystem
      being unmounted, there is no longer any race condition for
      iprune_sem to protect against. Hence we can remove it.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
    • superblock: introduce per-sb cache shrinker infrastructure · b0d40c92
      Dave Chinner authored
      With context based shrinkers, we can implement a per-superblock
      shrinker that shrinks the caches attached to the superblock. We
      currently have global shrinkers for the inode and dentry caches that
      split up into per-superblock operations via a coarse proportioning
      method that does not batch very well.  The global shrinkers also
      have a dependency - dentries pin inodes - so we have to be very
      careful about how we register the global shrinkers so that the
      implicit call order is always correct.
      
      With a per-sb shrinker callout, we can encode this dependency
      directly into the per-sb shrinker, hence avoiding the need for
      strictly ordering shrinker registrations. We also have no need for
      any proportioning code, as the shrinker subsystem already provides
      this functionality across all shrinkers. Allowing the shrinker to
      operate on a single superblock at a time means that we do less
      superblock list traversals and locking and reclaim should batch more
      effectively. This should result in less CPU overhead for reclaim and
      potentially faster reclaim of items from each filesystem.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
  2. 20 Jul, 2011 32 commits