1. 26 Sep, 2014 4 commits
  2. 17 Sep, 2014 11 commits
    • nfsd4: clarify how grace period ends · 70b28235
      J. Bruce Fields authored
      The grace period is ended in two steps--first, userland is notified that
      the grace period has run long enough that any clients who have not yet
      reclaimed can safely be forgotten; then we flip the switch that forbids
      reclaims and allows new opens.  I had to think a bit to convince myself
      that the ordering was right here.  Document it.
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
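      As a minimal sketch of that ordering (the notify helper name here is
      invented for illustration; locks_end_grace() is the generic grace-period
      helper):

          /* Sketch only: the two-step end of the v4 grace period. */
          static void sketch_end_grace(struct nfsd_net *nn)
          {
                  /* 1: tell userland (e.g. nfsdcltrack) that the grace period
                   *    is over, so clients that never reclaimed can be
                   *    forgotten from stable storage. */
                  record_grace_done(nn);              /* illustrative name */

                  /* 2: only then flip the switch: further reclaims are
                   *    refused and normal opens are allowed. */
                  locks_end_grace(&nn->nfsd4_manager);
          }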
    • nfsd4: stop grace_time update at end of grace period · bea57fe4
      J. Bruce Fields authored
      The attempt to automatically set a new grace period time at the end of
      the grace period isn't really helpful.  We'll probably shut down and
      reboot before we actually make use of the new grace period time anyway.
      So we may as well leave it up to the init system to get this right.
      
      This just confuses people when they see /proc/fs/nfsd/nfsv4gracetime
      change from what they set it to.
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
    • nfsd: skip subsequent UMH "create" operations after the first one for v4.0 clients · 65decb65
      Jeff Layton authored
      In the case of v4.0 clients, we may call into the "create" client
      tracking operation multiple times (once for each openowner). Upcalling
      for each one of those, however, is wasteful and slow. We can skip doing
      further "create" operations after the first one if we know that one has
      already been done.
      
      v4.1+ clients generally only call into this function once (on
      RECLAIM_COMPLETE), and we can't skip upcalling on the create even if the
      STABLE bit is set. Doing so would make it impossible for nfsdcltrack to
      lift the grace period early since the timestamp has a different meaning
      in the case where the client is expected to issue a RECLAIM_COMPLETE.
      Signed-off-by: Jeff Layton <jlayton@primarydata.com>
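      A sketch of the short-circuit described above, assuming the v4.0 skip
      keys off cl_minorversion and the NFSD4_CLIENT_STABLE bit (the upcall
      helper is a hypothetical stand-in):

          /* Sketch: avoid redundant "create" upcalls for v4.0 clients. */
          static void sketch_cltrack_create(struct nfs4_client *clp)
          {
                  /* v4.0: a second "create" adds nothing once one has already
                   * succeeded for this client, so skip the upcall. */
                  if (clp->cl_minorversion == 0 &&
                      test_bit(NFSD4_CLIENT_STABLE, &clp->cl_flags))
                          return;

                  /* v4.1+: always upcall; the recorded timestamp marks the
                   * RECLAIM_COMPLETE and must not be skipped. */
                  do_create_upcall(clp);              /* hypothetical helper */
          }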
    • nfsd: set and test NFSD4_CLIENT_STABLE bit to reduce nfsdcltrack upcalls · 788a7914
      Jeff Layton authored
      The nfsdcltrack upcall doesn't utilize the NFSD4_CLIENT_STABLE flag,
      which basically results in an upcall every time we call into the client
      tracking ops.
      
      Change it to set this bit on a successful "check" or "create" request,
      and clear it on a "remove" request.  Also, check to see if that bit is
      set before upcalling on a "check" or "remove" request, and skip
      upcalling appropriately, depending on its state.
      Signed-off-by: Jeff Layton <jlayton@primarydata.com>
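      Roughly, the intended pattern, as a sketch (the upcall helpers are
      hypothetical stand-ins for the real nfsdcltrack upcalls):

          /* Sketch: let NFSD4_CLIENT_STABLE gate the upcalls. */
          static void sketch_cltrack_check(struct nfs4_client *clp)
          {
                  if (test_bit(NFSD4_CLIENT_STABLE, &clp->cl_flags))
                          return;                     /* record known good, skip */
                  if (do_check_upcall(clp) == 0)      /* hypothetical helper */
                          set_bit(NFSD4_CLIENT_STABLE, &clp->cl_flags);
          }

          static void sketch_cltrack_remove(struct nfs4_client *clp)
          {
                  if (!test_bit(NFSD4_CLIENT_STABLE, &clp->cl_flags))
                          return;                     /* nothing recorded, skip */
                  do_remove_upcall(clp);              /* hypothetical helper */
                  clear_bit(NFSD4_CLIENT_STABLE, &clp->cl_flags);
          }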
    • nfsd: serialize nfsdcltrack upcalls for a particular client · d682e750
      Jeff Layton authored
      In a later patch, we want to add a flag that will allow us to reduce the
      need for upcalls. In order to handle that correctly, we'll need to
      ensure that racing upcalls for the same client can't occur. In practice
      it should be rare for this to occur with a well-behaved client, but it
      is possible.
      
      Convert one of the bits in the cl_flags field to be an upcall bitlock,
      and use it to ensure that upcalls for the same client are serialized.
      Signed-off-by: Jeff Layton <jlayton@primarydata.com>
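      The bit-lock idea in sketch form, assuming a flag bit such as
      NFSD4_CLIENT_UPCALL_LOCK is carved out of cl_flags for this purpose:

          /* Sketch: serialize per-client upcalls on one cl_flags bit. */
          static void sketch_get_upcall_lock(struct nfs4_client *clp)
          {
                  /* sleep until the bit can be set atomically */
                  wait_on_bit_lock(&clp->cl_flags, NFSD4_CLIENT_UPCALL_LOCK,
                                   TASK_UNINTERRUPTIBLE);
          }

          static void sketch_put_upcall_lock(struct nfs4_client *clp)
          {
                  smp_mb__before_atomic();
                  clear_bit(NFSD4_CLIENT_UPCALL_LOCK, &clp->cl_flags);
                  smp_mb__after_atomic();
                  wake_up_bit(&clp->cl_flags, NFSD4_CLIENT_UPCALL_LOCK);
          }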
    • nfsd: pass extra info in env vars to upcalls to allow for early grace period end · d4318acd
      Jeff Layton authored
      In order to support lifting the grace period early, we must tell
      nfsdcltrack what sort of client the "create" upcall is for. We can't
      reliably tell if a v4.0 client has completed reclaiming, so we can only
      lift the grace period once all the v4.1+ clients have issued a
      RECLAIM_COMPLETE and if there are no v4.0 clients.
      
      Also, in order to lift the grace period, we have to tell userland when
      the grace period started so that it can tell whether a RECLAIM_COMPLETE
      has been issued for each client since then.
      
      Since this is all optional info, we pass it along in environment
      variables to the "init" and "create" upcalls. By doing this, we don't
      need to revise the upcall format. The UMH upcall can simply make use of
      this info if it happens to be present. If it's not then it can just
      avoid lifting the grace period early.
      Signed-off-by: Jeff Layton <jlayton@primarydata.com>
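      A sketch of how the optional data can ride along in the usermode-helper
      environment without changing the argv format (the variable names and
      helper below are illustrative assumptions):

          /* Sketch: optional info goes in envp; argv stays as before. */
          static int sketch_cltrack_upcall(char *cmd, char *arg, char **envp)
          {
                  char *argv[] = { "/sbin/nfsdcltrack", cmd, arg, NULL };

                  /* envp is NULL-terminated, e.g.
                   *   { "GRACE_START=1411700000", "HAS_SESSION=Y", NULL }
                   * An upcall program that doesn't know these variables simply
                   * ignores them and cannot lift the grace period early. */
                  return call_usermodehelper(argv[0], argv, envp, UMH_WAIT_PROC);
          }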
    • nfsd: add a v4_end_grace file to /proc/fs/nfsd · 7f5ef2e9
      Jeff Layton authored
      Allow a privileged userland process to end the v4 grace period early.
      Writing "Y", "y", or "1" to the file will cause the v4 grace period to
      be lifted.  The basic idea with this will be to allow the userland
      client tracking program to lift the grace period once it knows that no
      more clients will be reclaiming state.
      Signed-off-by: Jeff Layton <jlayton@primarydata.com>
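      For example, a privileged tracking daemon could lift the grace period
      with a user-space program along these lines (a minimal sketch; error
      handling kept short):

          #include <fcntl.h>
          #include <stdio.h>
          #include <unistd.h>

          int main(void)
          {
                  int fd = open("/proc/fs/nfsd/v4_end_grace", O_WRONLY);

                  if (fd < 0) {
                          perror("open");
                          return 1;
                  }
                  /* any of "Y", "y" or "1" lifts the v4 grace period */
                  if (write(fd, "Y", 1) != 1) {
                          perror("write");
                          close(fd);
                          return 1;
                  }
                  close(fd);
                  return 0;
          }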
    • lockd: add a /proc/fs/lockd/nlm_end_grace file · d68e3c4a
      Jeff Layton authored
      Add a new procfile that will allow a (privileged) userland process to
      end the NLM grace period early. The basic idea here will be to have
      sm-notify write to this file if it sends out no NOTIFY requests when
      it runs. In that situation, we can generally expect that there will be
      no reclaim requests, so the grace period can be lifted early.
      Signed-off-by: Jeff Layton <jlayton@primarydata.com>
    • nfsd: reject reclaim request when client has already sent RECLAIM_COMPLETE · 3b3e7b72
      Jeff Layton authored
      As stated in RFC 5661, section 18.51.3:
      
          Once a RECLAIM_COMPLETE is done, there can be no further reclaim
          operations for locks whose scope is defined as having completed
          recovery.  Once the client sends RECLAIM_COMPLETE, the server will
          not allow the client to do subsequent reclaims of locking state for
          that scope and, if these are attempted, will return
          NFS4ERR_NO_GRACE.
      
      Ensure that we enforce that requirement.
      Signed-off-by: Jeff Layton <jlayton@primarydata.com>
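      In sketch form, the server-side check amounts to something like this,
      assuming RECLAIM_COMPLETE is tracked in a per-client flag such as
      NFSD4_CLIENT_RECLAIM_COMPLETE:

          /* Sketch: refuse reclaims once RECLAIM_COMPLETE has been sent. */
          static __be32 sketch_check_open_reclaim(struct nfs4_client *clp)
          {
                  /* RFC 5661, 18.51.3: later reclaims for that scope must
                   * fail with NFS4ERR_NO_GRACE. */
                  if (test_bit(NFSD4_CLIENT_RECLAIM_COMPLETE, &clp->cl_flags))
                          return nfserr_no_grace;
                  return nfs_ok;
          }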
    • nfsd: remove redundant boot_time parm from grace_done client tracking op · 919b8049
      Jeff Layton authored
      Since it's stored in nfsd_net, we don't need to pass it in separately.
      Signed-off-by: Jeff Layton <jlayton@primarydata.com>
    • lockd: move lockd's grace period handling into its own module · f7790029
      Jeff Layton authored
      Currently, all of the grace period handling is part of lockd. Eventually
      though we'd like to be able to build v4-only servers, at which point
      we'll need to put all of this elsewhere.
      
      Move the code itself into fs/nfs_common and have it build a grace.ko
      module. Then, rejigger the Kconfig options so that both nfsd and lockd
      enable it automatically.
      Signed-off-by: Jeff Layton <jlayton@primarydata.com>
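      For reference, the interface that moves is small; both nfsd and lockd
      end up consuming roughly this API (a sketch based on the existing lockd
      grace helpers, signatures give or take):

          /* Sketch: the per-net grace-period API exported by grace.ko. */
          struct lock_manager {
                  struct list_head list;      /* entry on the per-net grace list */
          };

          void locks_start_grace(struct net *net, struct lock_manager *lm);
          void locks_end_grace(struct lock_manager *lm);
          int  locks_in_grace(struct net *net);   /* nonzero while in grace */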
  3. 11 Sep, 2014 1 commit
  4. 03 Sep, 2014 6 commits
  5. 02 Sep, 2014 2 commits
  6. 28 Aug, 2014 5 commits
  7. 18 Aug, 2014 1 commit
    • nfsd: allow turning off nfsv3 readdir_plus · 18c01ab3
      Rajesh Ghanekar authored
      One of our customers' applications needs only file names, not file
      attributes. With directories containing 10K+ inodes (assuming the buffer
      cache holds the directory blocks, and hence the file names, but the
      inode cache is limited and older cached inodes must be evicted), older
      inodes are evicted periodically. So if the application keeps doing
      readdir(2) from an NFS client on multiple directories, some directories'
      inodes are periodically dropped from the inode cache, and a new
      readdir(2) on the same directory requires disk access to bring those
      inodes back into the cache.
      
      Because a READDIRPLUS request also fetches attributes, doing a getattr
      on each file on the server, it causes unnecessary disk accesses. If
      READDIRPLUS is answered with -ENOTSUPP, the NFS client falls back to a
      READDIR request, which returns just the names of the files in a
      directory, not their attributes, avoiding those disk accesses on the
      server.
      
      There's already a corresponding client-side mount option, but an export
      option reduces the need for configuration across multiple clients.
      
      This flag affects NFSv3 only.  If it turns out it's needed for NFSv4 as
      well, then we may have to figure out how to extend the behavior to
      NFSv4, but it's not currently obvious how to do that.
      Signed-off-by: Rajesh Ghanekar <rajesh_ghanekar@symantec.com>
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
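      Server-side, the change comes down to a check along these lines in the
      NFSv3 READDIRPLUS handler (the export-flag name here is an assumption
      for illustration):

          /* Sketch: fail READDIRPLUS on opted-out exports so the client
           * falls back to plain READDIR. */
          static __be32 sketch_readdirplus_allowed(struct svc_export *exp)
          {
                  if (exp->ex_flags & NFSEXP_NOREADDIRPLUS)   /* assumed name */
                          return nfserr_notsupp;
                  return nfs_ok;
          }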
  8. 17 Aug, 2014 10 commits