- 13 Oct, 2014 2 commits
-
-
Trond Myklebust authored
It is OK for pageused == pagecount in the loop, as long as we don't add another entry to the *pages array. Move the test so that it only triggers in that case. Reported-by: Steve Dickson <SteveD@redhat.com> Fixes: bba5c188 (nfs: disallow duplicate pages in pgio page vectors) Cc: Weston Andros Adamson <dros@primarydata.com> Cc: stable@vger.kernel.org # 3.16.x Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
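The corrected logic can be pictured with a minimal standalone sketch (the function and argument names below are illustrative, not the actual fs/nfs/pagelist.c code): the bounds check only fires when a genuinely new page must be appended, not when the current request's page merely duplicates the previous entry.

```c
#include <stdbool.h>
#include <stddef.h>

struct page;	/* opaque for this sketch */

/*
 * Hitting pageused == pagecount is fine as long as the current page
 * duplicates the last entry and therefore needs no new slot.  Only
 * error out when a new slot is actually required and none is left.
 */
static bool fill_page_vector(struct page **pages, size_t pagecount,
			     struct page *req_page, size_t *pageused)
{
	if (*pageused == 0 || pages[*pageused - 1] != req_page) {
		if (*pageused >= pagecount)
			return false;		/* would overflow the vector */
		pages[(*pageused)++] = req_page;
	}
	return true;
}
```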
-
Trond Myklebust authored
SteveD reports the following Oops:
  RIP: 0010:[<ffffffffa053461d>] [<ffffffffa053461d>] __put_nfs_open_context+0x1d/0x100 [nfs]
  RSP: 0018:ffff880fed687b90 EFLAGS: 00010286
  RAX: 0000000000000024 RBX: 0000000000000000 RCX: 0000000000000006
  RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
  RBP: ffff880fed687bc0 R08: 0000000000000092 R09: 000000000000047a
  R10: 0000000000000000 R11: ffff880fed6878d6 R12: ffff880fed687d20
  R13: ffff880fed687d20 R14: 0000000000000070 R15: ffffea000aa33ec0
  FS: 00007fce290f0740(0000) GS:ffff8807ffc60000(0000) knlGS:0000000000000000
  CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
  CR2: 0000000000000070 CR3: 00000007f2e79000 CR4: 00000000000007e0
  DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
  DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
  Stack:
   0000000000000000 ffff880036c5e510 ffff880fed687d20 ffff880fed687d20
   ffff880036c5e200 ffffea000aa33ec0 ffff880fed687bd0 ffffffffa0534710
   ffff880fed687be8 ffffffffa053d5f0 ffff880036c5e200 ffff880fed687c08
  Call Trace:
   [<ffffffffa0534710>] put_nfs_open_context+0x10/0x20 [nfs]
   [<ffffffffa053d5f0>] nfs_pgio_data_destroy+0x20/0x40 [nfs]
   [<ffffffffa053d672>] nfs_pgio_error+0x22/0x40 [nfs]
   [<ffffffffa053d8f4>] nfs_generic_pgio+0x74/0x2e0 [nfs]
   [<ffffffffa06b18c3>] pnfs_generic_pg_writepages+0x63/0x210 [nfsv4]
   [<ffffffffa053d579>] nfs_pageio_doio+0x19/0x50 [nfs]
   [<ffffffffa053eb84>] nfs_pageio_complete+0x24/0x30 [nfs]
   [<ffffffffa053cb25>] nfs_direct_write_schedule_iovec+0x115/0x1f0 [nfs]
   [<ffffffffa053675f>] ? nfs_get_lock_context+0x4f/0x120 [nfs]
   [<ffffffffa053d252>] nfs_file_direct_write+0x262/0x420 [nfs]
   [<ffffffffa0532d91>] nfs_file_write+0x131/0x1d0 [nfs]
   [<ffffffffa0532c60>] ? nfs_need_sync_write.isra.17+0x40/0x40 [nfs]
   [<ffffffff812127b8>] do_io_submit+0x3b8/0x840
   [<ffffffff81212c50>] SyS_io_submit+0x10/0x20
   [<ffffffff81610f29>] system_call_fastpath+0x16/0x1b
This is due to the calls to nfs_pgio_error() in nfs_generic_pgio(), which happen before the nfs_pgio_header's open context is referenced in nfs_pgio_rpcsetup(). Reported-by: Steve Dickson <SteveD@redhat.com> Cc: stable@vger.kernel.org # 3.16.x Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
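In essence, the failure is a cleanup path dropping a reference the setup path never took. A hedged, standalone sketch of that pattern (illustrative names only, not the real NFS code):

```c
#include <assert.h>
#include <stddef.h>

struct open_context { int refcount; };

struct pgio_header {
	struct open_context *ctx;	/* assigned only during rpcsetup */
};

static void put_open_context(struct open_context *ctx)
{
	assert(ctx != NULL);		/* a NULL here is the reported Oops */
	ctx->refcount--;
}

/* error path: runs before rpcsetup ever assigned hdr->ctx */
static void pgio_error(struct pgio_header *hdr)
{
	put_open_context(hdr->ctx);	/* BUG: puts a reference never taken */
}
```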
-
- 08 Oct, 2014 2 commits
-
-
Trond Myklebust authored
You cannot call pnfs_put_lseg_async() more than once per lseg, so it is really an inappropriate way to deal with a refcount issue. Instead, replace it with a function that decrements the refcount, and puts the final 'free' operation (which is incompatible with locks) on the workqueue. Cc: Weston Andros Adamson <dros@primarydata.com> Fixes: e6cf82d1 (pnfs: add pnfs_put_lseg_async) Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
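A minimal sketch of the replacement pattern described here, using C11 atomics and a stand-in for the workqueue (all names are illustrative): every holder simply drops its reference, and only the final put defers the lock-incompatible free.

```c
#include <stdatomic.h>
#include <stdlib.h>

struct layout_segment {
	atomic_int refcount;
	/* ... layout data ... */
};

/* stand-in for queueing work on a workqueue */
static void schedule_deferred_free(struct layout_segment *lseg)
{
	/* in the kernel this would be queue_work(); here we free directly */
	free(lseg);
}

/* safe to call once per reference held, even with a spinlock taken */
static void put_lseg_locked(struct layout_segment *lseg)
{
	if (atomic_fetch_sub(&lseg->refcount, 1) == 1)
		schedule_deferred_free(lseg);	/* last reference: defer the free */
}
```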
-
Tom Haynes authored
nfs4_insert_deviceid_node() was removed in 661373b1. Signed-off-by: Tom Haynes <loghyr@primarydata.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
-
- 30 Sep, 2014 5 commits
-
-
Trond Myklebust authored
Merge NFSv4.2 client SEEK implementation from Anna * client-4.2: (55 commits) NFS: Implement SEEK NFSD: Implement SEEK NFSD: Add generic v4.2 infrastructure svcrdma: advertise the correct max payload nfsd: introduce nfsd4_callback_ops nfsd: split nfsd4_callback initialization and use nfsd: introduce a generic nfsd4_cb nfsd: remove nfsd4_callback.cb_op nfsd: do not clear rpc_resp in nfsd4_cb_done_sequence nfsd: fix nfsd4_cb_recall_done error handling nfsd4: clarify how grace period ends nfsd4: stop grace_time update at end of grace period nfsd: skip subsequent UMH "create" operations after the first one for v4.0 clients nfsd: set and test NFSD4_CLIENT_STABLE bit to reduce nfsdcltrack upcalls nfsd: serialize nfsdcltrack upcalls for a particular client nfsd: pass extra info in env vars to upcalls to allow for early grace period end nfsd: add a v4_end_grace file to /proc/fs/nfsd lockd: add a /proc/fs/lockd/nlm_end_grace file nfsd: reject reclaim request when client has already sent RECLAIM_COMPLETE nfsd: remove redundant boot_time parm from grace_done client tracking op ...
-
Trond Myklebust authored
* bugfixes: NFSv4.1: Fix an NFSv4.1 state renewal regression NFSv4: fix open/lock state recovery error handling NFSv4: Fix lock recovery when CREATE_SESSION/SETCLIENTID_CONFIRM fails NFS: Fabricate fscache server index key correctly SUNRPC: Add missing support for RPC_CLNT_CREATE_NO_RETRANS_TIMEOUT nfs: fix duplicate proc entries
-
Andy Adamson authored
Commit 2f60ea6b ("NFSv4: The NFSv4.0 client must send RENEW calls if it holds a delegation") set the NFS4_RENEW_TIMEOUT flag in nfs4_renew_state, and does not put an nfs41_proc_async_sequence call, the NFSv4.1 lease renewal heartbeat call, on the wire to renew the NFSv4.1 state if the flag was not set. The NFS4_RENEW_TIMEOUT flag is set when "now" is after the last renewal (cl_last_renewal) plus the lease time divided by 3. This is arbitrary and sometimes does the following: In normal operation, the only way a future state renewal call is put on the wire is via a call to nfs4_schedule_state_renewal, which schedules a nfs4_renew_state workqueue task. nfs4_renew_state determines if the NFS4_RENEW_TIMEOUT should be set, and the calls nfs41_proc_async_sequence, which only gets sent if the NFS4_RENEW_TIMEOUT flag is set. Then the nfs41_proc_async_sequence rpc_release function schedules another state remewal via nfs4_schedule_state_renewal. Without this change we can get into a state where an application stops accessing the NFSv4.1 share, state renewal calls stop due to the NFS4_RENEW_TIMEOUT flag _not_ being set. The only way to recover from this situation is with a clientid re-establishment, once the application resumes and the server has timed out the lease and so returns NFS4ERR_BAD_SESSION on the subsequent SEQUENCE operation. An example application: open, lock, write a file. sleep for 6 * lease (could be less) ulock, close. In the above example with NFSv4.1 delegations enabled, without this change, there are no OP_SEQUENCE state renewal calls during the sleep, and the clientid is recovered due to lease expiration on the close. This issue does not occur with NFSv4.1 delegations disabled, nor with NFSv4.0, with or without delegations enabled. Signed-off-by: Andy Adamson <andros@netapp.com> Link: http://lkml.kernel.org/r/1411486536-23401-1-git-send-email-andros@netapp.com Fixes: 2f60ea6b (NFSv4: The NFSv4.0 client must send RENEW calls...) Cc: stable@vger.kernel.org # 3.2.x Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
-
Anna Schumaker authored
The SEEK operation is used when an application makes an lseek call with either the SEEK_HOLE or SEEK_DATA flags set. I fall back on nfs_file_llseek() if the server does not have SEEK support. Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com> Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
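From userspace this surfaces through the existing lseek(2) flags; a small example that asks for the first hole in a file (SEEK_HOLE requires _GNU_SOURCE on Linux):

```c
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	if (argc < 2)
		return 1;

	int fd = open(argv[1], O_RDONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* Find the first hole; ENXIO would mean no hole before end of file. */
	off_t hole = lseek(fd, 0, SEEK_HOLE);
	if (hole < 0)
		perror("lseek(SEEK_HOLE)");
	else
		printf("first hole at offset %lld\n", (long long)hole);

	close(fd);
	return 0;
}
```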
-
Trond Myklebust authored
- Pull in patch 'NFSD: Implement SEEK' from Bruce's nfsd-next tree for dependencies.
-
- 29 Sep, 2014 3 commits
-
-
Anna Schumaker authored
This patch adds server support for the NFS v4.2 operation SEEK, which returns the position of the next hole or data segment in a file. Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Anna Schumaker authored
It's cleaner to introduce everything at once and have the server reply with "not supported" than it would be to introduce extra operations when implementing a specific one in the middle of the list. Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Steve Wise authored
Svcrdma currently advertises 1MB, which is too large. The correct value is the minimum of RPCSVC_MAXPAYLOAD and the max scatter-gather allowed in an NFSRDMA IO chunk * the host page size. This bug is usually benign because the Linux X64 NFSRDMA client correctly limits the payload size to the correct value (64*4096 = 256KB). But if the Linux client is PPC64 with a 64KB page size, then the client will indeed use a payload size that will overflow the server. Signed-off-by: Steve Wise <swise@opengridcomputing.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
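The corrected limit is just the minimum of the generic RPC cap and the scatter-gather capacity of one chunk; a sketch with illustrative constants (64 SGEs of 4K pages gives the 256KB figure quoted above):

```c
#include <stdio.h>

#define RPCSVC_MAXPAYLOAD	(1 * 1024 * 1024)	/* illustrative: generic RPC cap */
#define RDMA_MAX_SGE		64			/* illustrative: max SGEs per chunk */
#define HOST_PAGE_SIZE		4096			/* 4K pages; 64K on some PPC64 configs */

int main(void)
{
	unsigned long sg_limit = (unsigned long)RDMA_MAX_SGE * HOST_PAGE_SIZE;
	unsigned long max_payload = sg_limit < RPCSVC_MAXPAYLOAD ?
				    sg_limit : RPCSVC_MAXPAYLOAD;

	/* prints 262144, i.e. the 256KB limit mentioned in the commit */
	printf("advertised max payload: %lu bytes\n", max_payload);
	return 0;
}
```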
-
- 28 Sep, 2014 2 commits
-
-
Trond Myklebust authored
The current open/lock state recovery unfortunately does not handle errors such as NFS4ERR_CONN_NOT_BOUND_TO_SESSION correctly. Instead of looping, it just proceeds as if the state manager has finished recovering. This patch ensures that we loop back, handle higher priority errors, and complete the open/lock state recovery. Cc: stable@vger.kernel.org Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
-
Trond Myklebust authored
If an NFSv4.x server returns NFS4ERR_STALE_CLIENTID in response to a CREATE_SESSION or SETCLIENTID_CONFIRM in order to tell us that it rebooted a second time, then the client will currently take this to mean that it must declare all locks to be stale, and hence ineligible for reboot recovery. RFC3530 and RFC5661 both suggest that the client should instead rely on the server to respond to ineligible open share, lock and delegation reclaim requests with NFS4ERR_NO_GRACE in this situation. Cc: stable@vger.kernel.org Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
-
- 26 Sep, 2014 8 commits
-
-
Christoph Hellwig authored
Add a higher level abstraction than the rpc_ops for callback operations. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Jeff Layton <jlayton@primarydata.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Christoph Hellwig authored
Split out initializing the nfs4_callback structure from using it. For the NULL callback this gets rid of tons of pointless re-initializations. Note that I don't quite understand what protects us from running multiple NULL callbacks at the same time, but at least this change doesn't make it worse. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Jeff Layton <jlayton@primarydata.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Christoph Hellwig authored
Add a helper to queue up a callback. CB_NULL has a bit of special casing because it is special in the specification, but all other new callback operations will be able to share code with this and a few more changes to refactor the callback code. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Jeff Layton <jlayton@primarydata.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Christoph Hellwig authored
We can always get at the private data by using container_of, no need for a void pointer. Also introduce a little to_delegation helper to avoid opencoding the container_of everywhere. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Jeff Layton <jlayton@primarydata.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
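container_of() is the standard kernel idiom for this; a self-contained sketch of the kind of to_delegation() helper described here (the struct layouts are illustrative stand-ins):

```c
#include <stddef.h>

/* Userspace stand-in for the kernel's container_of() macro. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct nfsd4_callback { int op; };

struct nfs4_delegation {
	int dl_retries;
	struct nfsd4_callback dl_recall;	/* embedded callback */
};

/* Recover the enclosing delegation from its embedded callback. */
static inline struct nfs4_delegation *to_delegation(struct nfsd4_callback *cb)
{
	return container_of(cb, struct nfs4_delegation, dl_recall);
}
```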
-
Benny Halevy authored
This is incorrect when a callback has to be restarted, in which case the XDR decoding of the second iteration will see a NULL cb argument. [hch: updated description] Signed-off-by: Benny Halevy <bhalevy@panasas.com> Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
Christoph Hellwig authored
For any error that is not EBADHANDLE or NFS4ERR_BAD_STATEID, nfsd4_cb_recall_done first marks the connection down, then retries until dl_retries hits zero, then marks the connection down again and sets cb_done. This changes the code to only retry for EBADHANDLE or NFS4ERR_BAD_STATEID, and factors setting cb_done into a single point in the function. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-
David Howells authored
When fabricating a server index key for fscache, we should clear the index key buffer before starting to fill it in, not in the middle. Reported-by: James Pearson <james-p@moving-picture.com> Signed-off-by: David Howells <dhowells@redhat.com> Acked-by: Steve Dickson <steved@redhat.com> Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
-
Trond Myklebust authored
The flag RPC_CLNT_CREATE_NO_RETRANS_TIMEOUT was introduced in order to allow NFSv4 clients to disable resend timeouts. Since those cause the RPC layer to break the connection, they mess up the duplicate reply caches that remain indexed on the port number in NFSv4. This patch includes the code that was missing in the original to set the appropriate flag in struct rpc_clnt, when the caller of rpc_create() sets RPC_CLNT_CREATE_NO_RETRANS_TIMEOUT. Fixes: 8a19a0b6 (SUNRPC: Add RPC task and client level options to...) Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
-
- 25 Sep, 2014 14 commits
-
-
Trond Myklebust authored
Silence a few warnings about missing symbols that are due to missing includes of nfs3_fs.h. Fixes: 00a36a10 (NFS: Move v3 declarations out of internal.h) Fixes: cb8c20fa (NFS: Move NFS v3 acl functions to nfs3_fs.h) Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
-
NeilBrown authored
Now that nfs_release_page() doesn't block indefinitely, other deadlock avoidance mechanisms aren't needed. - it doesn't hurt for kswapd to block occasionally. If it doesn't want to block it would clear __GFP_WAIT. The current_is_kswapd() was only added to avoid deadlocks and we have a new approach for that. - memory allocation in the SUNRPC layer can very rarely try to ->releasepage() a page it is trying to handle. The deadlock is removed as nfs_release_page() doesn't block indefinitely. So we don't need to set PF_FSTRANS for sunrpc network operations any more. Signed-off-by: NeilBrown <neilb@suse.de> Acked-by: Jeff Layton <jlayton@primarydata.com> Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
-
NeilBrown authored
If nfs_release_page() is called on a sequence of pages which are all in the same file which is blocked on COMMIT, each page could contribute a 1 second delay, which could become excessive. I have seen delays of as much as 208 seconds. To keep the delay to one second, mark the bdi as write-congested if the commit didn't finish. Once it does finish, the write-congested flag will be cleared by nfs_commit_release_pages(). With this, the longest total delay in try_to_free_pages that I have seen is under 3 seconds. With no waiting in nfs_release_page at all I have seen delays of nearly 1.5 seconds. Signed-off-by: NeilBrown <neilb@suse.de> Acked-by: Jeff Layton <jlayton@primarydata.com> Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
-
NeilBrown authored
Support for loop-back mounted NFS filesystems is useful when NFS is used to access shared storage in a high-availability cluster. If the node running the NFS server fails, some other node can mount the filesystem and start providing NFS service. If that node already had the filesystem NFS mounted, it will now have it loop-back mounted. nfsd can suffer a deadlock when allocating memory and entering direct reclaim. While direct reclaim does not write to the NFS filesystem it can send and wait for a COMMIT through nfs_release_page(). This patch modifies nfs_release_page() to wait a limited time for the commit to complete - one second. If the commit doesn't complete in this time, nfs_release_page() will fail. This means it might now fail in some cases where it wouldn't before. These cases are only when 'gfp' includes '__GFP_WAIT'. nfs_release_page() is only called by try_to_release_page(), and that can only be called on an NFS page with required 'gfp' flags from - page_cache_pipe_buf_steal() in splice.c - shrink_page_list() in vmscan.c - invalidate_inode_pages2_range() in truncate.c The first two handle failure quite safely. The last is only called after ->launder_page() has been called, and that will have waited for the commit to finish already. So aborting if the commit takes longer than 1 second is perfectly safe. Signed-off-by: NeilBrown <neilb@suse.de> Acked-by: Jeff Layton <jlayton@primarydata.com> Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
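The behaviour described, waiting at most a second for the COMMIT and then failing the release rather than blocking indefinitely, can be sketched in standalone form as a bounded poll (the real code uses the timed page-bit wait added elsewhere in this series; everything below is an illustrative stand-in):

```c
#define _POSIX_C_SOURCE 199309L
#include <stdatomic.h>
#include <stdbool.h>
#include <time.h>

/* Illustrative stand-in for "a COMMIT for this page is still outstanding". */
static atomic_bool commit_outstanding;

/*
 * Give the in-flight COMMIT up to one second to complete, then fail the
 * release instead of blocking indefinitely (which is what could deadlock
 * a loop-back mounted nfsd in direct reclaim).
 */
static bool release_page_bounded_wait(void)
{
	struct timespec delay = { .tv_sec = 0, .tv_nsec = 10 * 1000 * 1000 };
	int i;

	for (i = 0; i < 100; i++) {		/* 100 * 10ms = 1 second cap */
		if (!atomic_load(&commit_outstanding))
			return true;		/* commit finished, page can go */
		nanosleep(&delay, NULL);
	}
	return false;				/* still busy: let reclaim move on */
}
```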
-
NeilBrown authored
This will allow NFS to wait for PG_private to be cleared and, particularly, to send a wake-up when it is. Signed-off-by: NeilBrown <neilb@suse.de> Acked-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
-
NeilBrown authored
In commit c1221321 sched: Allow wait_on_bit_action() functions to support a timeout I suggested that a "wait_on_bit_timeout()" interface would not meet my need. This isn't true - I was just over-engineering. Including a 'private' field in wait_bit_key instead of a focused "timeout" field was just premature generalization. If some other use is ever found, it can be generalized or added later. So this patch renames "private" to "timeout" with a meaning "stop waiting when "jiffies" reaches or passes "timeout", and adds two of the many possible wait..bit..timeout() interfaces: wait_on_page_bit_killable_timeout(), which is the one I want to use, and out_of_line_wait_on_bit_timeout() which is a reasonably general example. Others can be added as needed. Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: NeilBrown <neilb@suse.de> Acked-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
-
NeilBrown authored
commit b31268ac ("FS: Use stable writes when not doing a bulk flush") was a bit heavy-handed. The particular problem that led to this patch was that small writes to an O_SYNC file were being written as UNSTABLE writes followed by a commit. This is appropriate for large writes (which require multiple NFS requests) but for small writes (single NFS request), using NFS_FILE_SYNC is more efficient. So that patch causes the code to select between the two methods depending on how many nfs requests get generated. Unfortunately this ends up applying to non O_SYNC writes as well. In particular if you memory-map a file and update random pages, then when they are eventually written out by writeback they will go as NFS_FILE_SYNC. This is inefficient and slows down the application. So: only set FLUSH_COND_STABLE when wbc->sync_mode is WB_SYNC_ALL. With this patch: O_SYNC writes are NFS_FILE_SYNC for single requests, and NFS_UNSTABLE followed by COMMIT for multiple requests. Writes immediately before close or fsync follow the same pattern. Non-O_SYNC writes without an fsync or close eventually get flushed out as UNSTABLE and a commit follows eventually as appropriate. Signed-off-by: NeilBrown <neilb@suse.de> Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
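The resulting decision is simply to request the stable-write heuristic only for data-integrity writeback; a hedged sketch (the flag and mode names mirror the commit text, the surrounding type is an illustrative stand-in):

```c
#include <stdbool.h>

enum sync_mode { WB_SYNC_NONE, WB_SYNC_ALL };

struct writeback_control_sketch {
	enum sync_mode sync_mode;
};

/*
 * Only ask the pageio layer to consider NFS_FILE_SYNC ("stable") writes
 * when this is a data-integrity flush (O_SYNC, fsync, close).  Background
 * writeback of mmap'ed pages stays UNSTABLE + COMMIT.
 */
static bool use_cond_stable(const struct writeback_control_sketch *wbc)
{
	return wbc->sync_mode == WB_SYNC_ALL;
}
```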
-
NeilBrown authored
Currently synchronous NFSv4 requests will be retried with an exponential timeout (from 1/10 to 15 seconds), but async requests will always use a 15 second retry. Some "async" requests are really synchronous though. The async mechanism is used to allow the request to continue if the requesting process is killed. In those cases, an exponential retry is appropriate. For example, if two different clients both open a file and get a READ delegation, and one client then unlinks the file (while still holding an open file descriptor), that unlink will use the "silly-rename" handling, which is async. The first rename will result in NFS4ERR_DELAY while the delegation is reclaimed from the other client. The rename will not be retried for 15 seconds, causing the unlink to take 15 seconds rather than 100msec. This patch only adds an exponential timeout for async unlink and async rename. Other async calls, such as 'close', are sometimes waited for, so they might benefit from an exponential timeout too. Signed-off-by: NeilBrown <neilb@suse.de> Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
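The exponential window mentioned here (1/10 second up to 15 seconds) looks roughly like the following standalone sketch; the in-kernel version works in jiffies with its own retry constants:

```c
/*
 * Sketch of the exponential NFS4ERR_DELAY backoff described above:
 * start at 100ms, double on each retry, and cap at 15 seconds.
 */
static long next_retry_timeout_ms(long current_timeout_ms)
{
	const long retry_min_ms = 100;		/* 1/10 second */
	const long retry_max_ms = 15000;	/* 15 seconds  */

	if (current_timeout_ms < retry_min_ms)
		return retry_min_ms;
	if (current_timeout_ms >= retry_max_ms)
		return retry_max_ms;
	return current_timeout_ms * 2 > retry_max_ms ?
	       retry_max_ms : current_timeout_ms * 2;
}
```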
-
Jason Baron authored
If an iptables drop rule is added for an nfs server, the client can end up in a softlockup. Because of the way that xs_sendpages() is structured, the -EPERM is ignored since the prior bits of the packet may have been successfully queued and thus xs_sendpages() returns a non-zero value. Then, xs_udp_send_request() thinks that because some bits were queued it should return -EAGAIN. We then try the request again and again, resulting in cpu spinning. Reproducer: 1) open a file on the nfs server '/nfs/foo' (mounted using udp) 2) iptables -A OUTPUT -d <nfs server ip> -j DROP 3) write to /nfs/foo 4) close /nfs/foo 5) iptables -D OUTPUT -d <nfs server ip> -j DROP The softlockup occurs in step 4 above. The previous patch allows xs_sendpages() to return both a sent count and any error values that may have occurred. Thus, if we get an -EPERM, return that to the higher level code. With this patch in place we can successfully abort the above sequence and avoid the softlockup. I also tried the above test case on an nfs mount on tcp and although the system does not softlockup, I still ended up with the 'hung_task' firing after 120 seconds, due to the i/o being stuck. The tcp case appears a bit harder to fix, since -EPERM appears to get ignored much lower down in the stack and does not propagate up to xs_sendpages(). This case is not quite as insidious as the softlockup and it is not addressed here. Reported-by: Yigong Lou <ylou@akamai.com> Signed-off-by: Jason Baron <jbaron@akamai.com> Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
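Together with the previous patch, the idea is that the transport reports both how much was queued and the last error, so callers can distinguish a fatal -EPERM from a transient partial send; the names below are illustrative, not the real xs_sendpages() signature:

```c
#include <errno.h>
#include <stddef.h>

/* Illustrative result pair: how much was queued, and the last error seen. */
struct send_result {
	size_t	sent;
	int	err;	/* 0 or a negative errno */
};

/*
 * Caller-side decision sketch: a partial send ending in -EPERM (e.g. an
 * iptables DROP rule) must be surfaced as a hard error rather than being
 * mistaken for "socket full, try again", which caused the softlockup.
 */
static int handle_send_result(struct send_result res, size_t want)
{
	if (res.err == -EPERM)
		return -EPERM;			/* fatal: don't spin retrying   */
	if (res.sent == want)
		return 0;			/* everything queued            */
	return -EAGAIN;				/* genuinely wait for the queue */
}
```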
-
Jason Baron authored
If an error is returned after the first bits of a packet have already been successfully queued, xs_sendpages() will return a positive 'int' value indicating success. Callers seem to treat this as -EAGAIN. However, there are cases where it's not a question of waiting for the write queue to drain. For example, when there is an iptables rule dropping packets to the destination, the lower level code can return -EPERM only after parts of the packet have been successfully queued. In this case, we can end up continuously retrying, resulting in a kernel softlockup. This patch is intended to make no changes in behavior but is in preparation for subsequent patches that can make decisions based both on the number of bytes sent by xs_sendpages() and on any errors that may have been returned. Signed-off-by: Jason Baron <jbaron@akamai.com> Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
-
Benjamin Coddington authored
If rpc.statd is restarted, upcalls to monitor hosts can fail with ECONNREFUSED. In that case force a lookup of statd's new port and retry the upcall. Signed-off-by: Benjamin Coddington <bcodding@redhat.com> Cc: stable@vger.kernel.org Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
-
Benjamin Coddington authored
When aborting a connection to preserve source ports, don't wake the task in xs_error_report. This allows tasks with RPC_TASK_SOFTCONN to succeed if the connection needs to be re-established since it preserves the task's status instead of setting it to the status of the aborting kernel_connect(). This may also avoid a potential conflict on the socket's lock. Signed-off-by: Benjamin Coddington <bcodding@redhat.com> Cc: stable@vger.kernel.org # 3.14+ Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
-
Olga Kornievskaia authored
Commit c9fdeb28 removed a 'continue' after checking if the lease needs to be renewed. However, if the client hasn't moved, the code erroneously falls through to starting reboot recovery (i.e., it sends an open reclaim and gets back a stale_clientid error) before recovering from getting stale_clientid on the renew operation. Signed-off-by: Olga Kornievskaia <kolga@netapp.com> Fixes: c9fdeb28 (NFS: Add basic migration support to state manager thread) Cc: stable@vger.kernel.org # 3.13+ Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
-
Fabian Frederick authored
Commit 65b38851 ("NFS: Fix /proc/fs/nfsfs/servers and /proc/fs/nfsfs/volumes") updated the following function: static int nfs_volume_list_open(struct inode *inode, struct file *file) it used &nfs_server_list_ops instead of &nfs_volume_list_ops which means cat /proc/fs/nfsfs/volumes = /proc/fs/nfsfs/servers Signed-off-by: Fabian Frederick <fabf@skynet.be> Fixes: 65b38851 (NFS: Fix /proc/fs/nfsfs/servers and...) Cc: stable@vger.kernel.org # 3.4.x+ Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
-
- 21 Sep, 2014 1 commit
-
-
Trond Myklebust authored
kbuild test robot reports: fs/built-in.o: In function `bl_map_stripe': >> :(.text+0x965b4): undefined reference to `__aeabi_uldivmod' >> :(.text+0x965cc): undefined reference to `__aeabi_uldivmod' >> :(.text+0x96604): undefined reference to `__aeabi_uldivmod' Fixes: 5c83746a (pnfs/blocklayout: in-kernel GETDEVICEINFO XDR parsing) Cc: Stephen Rothwell <sfr@canb.auug.org.au> Cc: Christoph Hellwig <hch@lst.de> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
-
- 18 Sep, 2014 2 commits
-
-
Trond Myklebust authored
James Drews reports another bug whereby the NFS client is now sending an OPEN_DOWNGRADE in a situation where it should really have sent a CLOSE: the client is opening the file for O_RDWR, but then trying to do a downgrade to O_RDONLY, which is not allowed by the NFSv4 spec. Reported-by: James Drews <drews@engr.wisc.edu> Link: http://lkml.kernel.org/r/541AD7E5.8020409@engr.wisc.edu Fixes: aee7af35 (NFSv4: Fix problems with close in the presence...) Cc: stable@vger.kernel.org # 2.6.33+ Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
-
Steve Dickson authored
There is a race between nfs4_state_manager() and nfs_server_remove_lists() that happens during an nfsv3 mount. The v3 mount notices there is already a super block, so nfs_server_remove_lists() is called, which uses the nfs_client_lock spin lock to synchronize access to the client list. At the same time nfs4_state_manager() is running through the client list looking for work to do, using the same lock. When nfs4_state_manager() wins the race to the list, a v3 client pointer is found and is not properly ignored, which causes the panic. Moving some protocol checks before the state checking avoids the panic. CC: Stable Tree <stable@vger.kernel.org> Signed-off-by: Steve Dickson <steved@redhat.com> Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
-
- 17 Sep, 2014 1 commit
-
-
J. Bruce Fields authored
The grace period is ended in two steps--first userland is notified that the grace period is now long enough that any clients who have not yet reclaimed can be safely forgotten, then we flip the switch that forbids reclaims and allows new opens. I had to think a bit to convince myself that the ordering was right here. Document it. Signed-off-by: J. Bruce Fields <bfields@redhat.com>
-