- 01 Mar, 2012 6 commits
-
-
Weston Andros Adamson authored
server_scope would never be freed if nfs4_check_cl_exchange_flags() returned non-zero. Signed-off-by: Weston Andros Adamson <dros@netapp.com> Cc: stable@vger.kernel.org Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Weston Andros Adamson authored
Send the nfs implementation id in EXCHANGE_ID requests unless the module parameter nfs.send_implementation_id is 0. This adds a CONFIG variable for the nii_domain that defaults to "kernel.org". Signed-off-by: Weston Andros Adamson <dros@netapp.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
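For illustration, a hedged sketch of how a module parameter of this shape is typically declared; the parameter name follows the commit text, while the default, permission bits, and file placement are assumptions, and the nii_domain default would come from the Kconfig symbol rather than a parameter.

```c
/* Sketch only, not the actual fs/nfs source: a boolean module parameter that
 * gates sending the implementation id.  Loaded as part of the nfs module, it
 * is toggled with nfs.send_implementation_id=0 on the kernel command line or
 * via a modprobe option. */
#include <linux/module.h>
#include <linux/moduleparam.h>

static bool send_implementation_id = true;
module_param(send_implementation_id, bool, 0644);
MODULE_PARM_DESC(send_implementation_id,
                 "Send implementation ID with NFSv4.1 EXCHANGE_ID");
```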
-
Bryan Schumaker authored
This patch removes the old hashmap-based caching and instead uses a "request key actor" to place an upcall to the legacy idmapper rather than going through /sbin/request-key. This will only be used as a fallback if /etc/request-key.conf isn't configured to use nfsidmap. Signed-off-by: Bryan Schumaker <bjschuma@netapp.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Bryan Schumaker authored
The keyctl_set_timeout function isn't exported to other parts of the kernel, but I want to use it for the NFS idmapper. I already have the key, but I wanted a generic way to set the timeout. Signed-off-by: Bryan Schumaker <bjschuma@netapp.com> Acked-by: David Howells <dhowells@redhat.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
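A hedged sketch of the kind of in-kernel call site this enables, assuming the generic helper ended up as key_set_timeout(); key_type_user is used here only to keep the example self-contained, since the idmapper defines its own key type and timeout value.

```c
#include <linux/err.h>
#include <linux/key.h>
#include <keys/user-type.h>

/* Request a key and give it a finite lifetime instead of caching it forever. */
static struct key *example_request_mapping(const char *desc)
{
        struct key *rkey;

        rkey = request_key(&key_type_user, desc, "");
        if (IS_ERR(rkey))
                return rkey;
        key_set_timeout(rkey, 600);     /* expire the cached key after 10 minutes */
        return rkey;
}
```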
-
Trond Myklebust authored
The NFS4CLNT_LAYOUTRECALL bit is a long-term impediment to scalability. It basically stops all other recalls by a given server once any layout recall is requested. If the recall is for a different file, then we don't care. If the recall applies to the same file, then we're in one of two situations: Either we are in the case of a replay of an existing request, in which case the session is supposed to deal with matters, or we are dealing with a completely different request, in which case we should just try to process it. Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Trond Myklebust authored
The NFS4CLNT_LAYOUTRECALL tests in pnfs_layout_process and pnfs_update_layout are redundant. In the case of a bulk layout recall, we're always testing for the NFS_LAYOUT_BULK_RECALL flag anyway. In the case of a file or segment recall, the call to pnfs_set_layout_stateid() updates the layout header's 'barrier' sequence id, which triggers the test in pnfs_layoutgets_blocked() and is less race-prone than NFS4CLNT_LAYOUTRECALL anyway. Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
- 27 Feb, 2012 4 commits
-
-
Stanislav Kinsbursky authored
Currently, the wait queue used for polling RPC pipe changes from user space is part of the RPC pipe data. But the pipe data itself can be released on NFS umount before the dentry-inode pair connected to it (in case that pair is held open by some process). This is not a problem for almost all pipe users, because all PipeFS file operations check the pipe reference before using it. The exception is eventfd: it registers itself through the "poll" file operation and thus holds a reference to the pipe wait queue. This leads to oopses when the eventfd is destroyed after NFS umount (as rpc.idmapd does), since no pipe data is left by that point. The solution is to move the wait queue from the pipe data to the internal RPC inode data. This also looks more logical, because the wait queue is used only by user-space processes, which already hold an inode reference. Note: upcalls have to take pipe->dentry before dereferencing the wait queue, to make sure that the mount point won't disappear from underneath us. Signed-off-by: Stanislav Kinsbursky <skinsbursky@parallels.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
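A schematic of the data-structure move being described; all names and the field layout here are assumptions for illustration, not the real net/sunrpc definitions.

```c
#include <linux/fs.h>
#include <linux/list.h>
#include <linux/wait.h>

/* Before: the wait queue lived in the pipe data, which NFS umount can free
 * while a process still holds the PipeFS file open. */
struct example_rpc_pipe {
        struct list_head pipeline;      /* queued upcall messages */
        struct dentry *dentry;          /* may go away on NFS umount */
};

/* After: the wait queue lives in the inode-private data, whose lifetime
 * follows the open PipeFS file, so poll() always has something to wait on. */
struct example_rpc_inode {
        struct inode vfs_inode;
        struct example_rpc_pipe *pipe;  /* NULL once the pipe data is released */
        wait_queue_head_t waitq;        /* moved here from the pipe data */
};
```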
-
Stanislav Kinsbursky authored
There are two tightly bound objects: the pipe data (created for kernel needs; holds a reference to the dentry, which depends on PipeFS mount/umount) and the PipeFS dentry/inode pair (created on mount for user-space needs). Each may or may not hold a valid reference to the other. This means we have to make sure that the pipe->dentry reference is valid on upcalls and that the dentry->pipe reference is valid on downcalls. The latter check is absent (my fault). IOW, a PipeFS dentry can be held open by some process (rpc.idmapd for example) while its pipe data belongs to an NFS mount that has already been unmounted, so the pipe data has been destroyed. To fix this, the pipe reference has to be set to NULL in rpc_unlink() and checked in the PipeFS file operations instead of checking pipe->dentry. Note: the PipeFS "poll" file operation will be updated in the next patch, because its logic is more complicated. Signed-off-by: Stanislav Kinsbursky <skinsbursky@parallels.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Stanislav Kinsbursky authored
v3: 1) The client lookup is performed from the beginning of the list on each PipeFS event-handling operation. Lockdep complains otherwise, because the inode mutex is taken on PipeFS dentry creation, which can be called from the mount notification, where this per-net client lock is taken while walking the clients list. Signed-off-by: Stanislav Kinsbursky <skinsbursky@parallels.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Stanislav Kinsbursky authored
v3: 1) The client lookup is performed from the beginning of the list on each PipeFS event-handling operation. Lockdep complains otherwise, because the inode mutex is taken on PipeFS dentry creation, which can be called from the mount notification, where this per-net client lock is taken while walking the clients list. Signed-off-by: Stanislav Kinsbursky <skinsbursky@parallels.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
- 26 Feb, 2012 1 commit
-
-
Trond Myklebust authored
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
- 19 Feb, 2012 2 commits
-
-
Trond Myklebust authored
Otherwise we have no guarantee that the net namespace won't just disappear from underneath us once the task that created it is destroyed. Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com> Cc: Stanislav Kinsbursky <skinsbursky@parallels.com>
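A minimal sketch of the reference-counting pattern this implies; the object and function names are made up for illustration.

```c
#include <net/net_namespace.h>

struct example_object {
        struct net *net;
};

/* Pin the namespace while the object holds a pointer to it ... */
static void example_bind_net(struct example_object *obj, struct net *net)
{
        obj->net = get_net(net);
}

/* ... and drop the reference when the object is destroyed, so the namespace
 * cannot disappear from underneath us in between. */
static void example_release(struct example_object *obj)
{
        put_net(obj->net);
        obj->net = NULL;
}
```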
-
Trond Myklebust authored
Currently, the nfs_parsed_mount_data->net field is initialised in the nfs_parse_mount_options() function, which means that it only gets set if we're using text-based mounts. The legacy binary mount interface is therefore broken. The fix is to initialise the ->net field in nfs_alloc_parsed_mount_data(). Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com> Cc: Stanislav Kinsbursky <skinsbursky@parallels.com>
-
- 17 Feb, 2012 2 commits
-
-
Weston Andros Adamson authored
Include RPC statistics from all data servers in /proc/self/mountstats for pNFS filelayout mounts. Signed-off-by: Weston Andros Adamson <dros@netapp.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Andy Adamson authored
Signed-off-by: Andy Adamson <andros@netapp.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
- 16 Feb, 2012 4 commits
-
-
Chuck Lever authored
Clean up: Fix a debugging message which had an obsolete function name in it (nfs_follow_mountpoint). Introduced by commit 36d43a43 "NFS: Use d_automount() rather than abusing follow_link()" (January 14, 2011) Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Chuck Lever authored
Our dprintk() debugging facility doesn't specify any verbosity level for its printk() calls, but it should. The default verbosity for printk() is KERN_DEFAULT. You might argue that these are debugging printks and thus the verbosity should be KERN_DEBUG. That would mean that to see NFS and SUNRPC debugging output an admin would also have to boost the syslog verbosity, which would be insufferably noisy. Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
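A hedged sketch of the shape of the change; the real macro lives in the sunrpc debug header and is gated on per-facility debug flags, so the details below are simplified assumptions.

```c
#include <linux/printk.h>

static bool example_debug_enabled;

/* The debug wrapper now passes an explicit KERN_DEFAULT level to printk()
 * instead of relying on the implicit default level. */
#define example_dprintk(fmt, ...)                                       \
        do {                                                            \
                if (example_debug_enabled)                              \
                        printk(KERN_DEFAULT fmt, ##__VA_ARGS__);        \
        } while (0)
```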
-
Andy Adamson authored
With static RPC slots, the xprt backlog queue stats were useful in showing when the transport (TCP) was starved by lack of RPC slots. The new dynamic RPC slot code, commit d9ba131d, always provides an RPC slot and so only uses the xprt backlog queue when the tcp_max_slot_table_entries value has been hit or when an allocation error occurs. All requests are now placed on the xprt sending or pending queues, which need to be monitored for debugging. The max_slot stat shows the maximum number of dynamic RPC slots reached, which is useful when debugging performance issues. Add the new fields at the end of the mountstats xprt stanza so that existing mountstats tools output the previous values correctly and ignore the new fields. Bump NFS_IOSTATS_VERS. Signed-off-by: Andy Adamson <andros@netapp.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Stanislav Kinsbursky authored
Signed-off-by: Stanislav Kinsbursky <skinsbursky@parallels.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
- 15 Feb, 2012 21 commits
-
-
Vitaliy Gusev authored
ca_maxoperations: For the backchannel, the server MUST NOT change the value the client offers. For the fore channel, the server MAY change the requested value. ca_maxrequests: For the backchannel, the server MUST NOT change the value the client offers. For the fore channel, the server MAY change the requested value. Signed-off-by: Vitaliy Gusev <gusev.vitaliy@nexenta.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
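A sketch of the client-side check this spec language calls for, with made-up structure and field names rather than the real session code.

```c
#include <linux/errno.h>

struct example_channel_attrs {
        unsigned int max_ops;
        unsigned int max_reqs;
};

/* For the back channel the server MUST NOT change what the client offered,
 * so refuse a CREATE_SESSION reply in which it did. */
static int example_verify_back_channel(const struct example_channel_attrs *sent,
                                       const struct example_channel_attrs *rcvd)
{
        if (rcvd->max_ops != sent->max_ops || rcvd->max_reqs != sent->max_reqs)
                return -EINVAL;
        return 0;
}
```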
-
Weston Andros Adamson authored
Don't allow invalid 'vers' and 'minorversion' combinations in mount options, such as "vers=3,minorversion=1". Signed-off-by: Weston Andros Adamson <dros@netapp.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
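An illustrative version of the kind of check described, with assumed field names rather than the real parsed-mount-data layout.

```c
#include <linux/errno.h>

struct example_mount_data {
        unsigned int version;           /* from vers= */
        unsigned int minorversion;      /* from minorversion= */
};

/* Reject combinations such as "vers=3,minorversion=1": a non-zero minor
 * version only makes sense together with vers=4. */
static int example_validate_version(const struct example_mount_data *data)
{
        if (data->version != 4 && data->minorversion != 0)
                return -EINVAL;
        return 0;
}
```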
-
Trond Myklebust authored
The tracepoint code relies on the queue->name being defined in order to be able to display the name of the waitqueue on which an RPC task is sleeping. Reported-by: Randy Dunlap <rdunlap@xenotime.net> Reported-by: Steven Rostedt <rostedt@goodmis.org> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com> Acked-by: Steven Rostedt <rostedt@goodmis.org> Acked-by: Randy Dunlap <rdunlap@xenotime.net>
-
Trond Myklebust authored
Don't allocate the legacy idmapper tables until we actually need them. Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com> Reviewed-by: Jeff Layton <jlayton@redhat.com>
-
Trond Myklebust authored
Add the appropriate 'select KEYS' to the NFSv4 Kconfig entry. Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Trond Myklebust authored
Instead of pre-allocating the storage for all the strings, we can significantly reduce the size of that table by doing the allocation when we do the downcall. Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com> Reviewed-by: Jeff Layton <jlayton@redhat.com>
-
Weston Andros Adamson authored
Signed-off-by: Weston Andros Adamson <dros@netapp.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Stanislav Kinsbursky authored
It does not compile when CONFIG_NFS_V4_1 is not set. Signed-off-by: Stanislav Kinsbursky <skinsbursky@parallels.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Trond Myklebust authored
Ensure that we initialise the nfs_net->nfs_client_lock spinlock. Also ensure that nfs_server_remove_lists() doesn't try to dereference server->nfs_client before that is initialised. Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com> Cc: Stanislav Kinsbursky <skinsbursky@parallels.com>
-
Stanislav Kinsbursky authored
Lockd is now managed in a network namespace context. This patch introduces network-namespace-aware NLM host shutdown when per-net Lockd resources are released. Signed-off-by: Stanislav Kinsbursky <skinsbursky@parallels.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Stanislav Kinsbursky authored
The NLM host is network namespace aware now, so NSM has to take that into account. Signed-off-by: Stanislav Kinsbursky <skinsbursky@parallels.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Stanislav Kinsbursky authored
This object depends on the RPC client, and thus on the network namespace, so perform its allocation and lookup in a network namespace context. Signed-off-by: Stanislav Kinsbursky <skinsbursky@parallels.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Stanislav Kinsbursky authored
This patch introduces per-net Lockd initialization and destruction routines. The logic is the same as in the global Lockd up and down routines. The solution is probably not the best one, but at least it is clear. The per-net "up" routine is called only if lockd is already running; if per-net resources are not allocated yet, the service is registered with the local portmapper and the lockd sockets are created. The per-net "down" routine is called on every lockd_down() call while the global users counter is non-zero. Signed-off-by: Stanislav Kinsbursky <skinsbursky@parallels.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Stanislav Kinsbursky authored
Lockd is going to be shared between network namespaces, i.e. it will be able to handle lock requests from different network namespaces. This means that network-namespace-related resources have to be allocated not once (as now), but for every network namespace context from which the service is asked to operate. This patch implements per-net Lockd users accounting. A new per-net counter is used to determine when per-net resources have to be freed; see the sketch below. Signed-off-by: Stanislav Kinsbursky <skinsbursky@parallels.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
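A sketch of per-net user accounting along these lines, using the generic per-net data API; all names are placeholders for illustration, not the real fs/lockd code.

```c
#include <linux/printk.h>
#include <net/net_namespace.h>
#include <net/netns/generic.h>

static int example_lockd_net_id;

struct example_lockd_net {
        unsigned int users;             /* Lockd users in this namespace */
};

/* Called from the per-net "down" path: release per-net resources only when
 * the last user in this namespace goes away. */
static void example_lockd_down_net(struct net *net)
{
        struct example_lockd_net *ln = net_generic(net, example_lockd_net_id);

        if (ln->users == 0)
                return;
        if (--ln->users == 0)
                pr_debug("lockd: last user in this netns, releasing per-net state\n");
}
```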
-
Stanislav Kinsbursky authored
This patch parametrizes the Lockd permanent socket creation routine by network namespace context. It also replaces the hard-coded init_net with the current network namespace in the Lockd socket creation routines. This approach looks safe, because Lockd is created during NFS mount (or NFS server start), so the socket is required in exactly the current network namespace. At the same time, it means that the Lockd sockets inherit the network namespace of the first Lockd requester. This issue will be fixed in further patches of the series. Signed-off-by: Stanislav Kinsbursky <skinsbursky@parallels.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Stanislav Kinsbursky authored
This function is sufficient for releasing the resources allocated for a network namespace context when the service is shared between namespaces. IOW, each service "user" (LockD, NFSd, etc.) that wants to share a service between network namespaces has to release the related resources with the function introduced in this patch, instead of performing a full service shutdown (provided, of course, that the service is already shared at the moment of release). Signed-off-by: Stanislav Kinsbursky <skinsbursky@parallels.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Stanislav Kinsbursky authored
v2: Added comments to the BUG_ON()s in svc_destroy() to make the code clearer. This patch introduces a network namespace filter for the service destruction function. Nothing special here - just do exactly the same operations, but only for transports in the passed network namespace context. BTW, the BUG_ON() checks for empty service transport lists were returned to svc_destroy(). This is because of the switch from the generic svc_close_all() to the network-namespace-dependent svc_close_net(). Signed-off-by: Stanislav Kinsbursky <skinsbursky@parallels.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Stanislav Kinsbursky authored
This patch moves the deletion of service transports from the service socket lists into a separate function. This is a precursor patch, which will be useful for service shutdown in a network namespace context, introduced later in the series. Signed-off-by: Stanislav Kinsbursky <skinsbursky@parallels.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Stanislav Kinsbursky authored
This patch moves the removal of service transports from their pools' ready lists into a separate function. This cleanup is now also done with the list_for_each_entry_safe() helper. This is a precursor patch, which will be useful for service shutdown in a network namespace context, introduced later in the series. Signed-off-by: Stanislav Kinsbursky <skinsbursky@parallels.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
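A minimal sketch of that list_for_each_entry_safe() pattern; the struct and field names are assumptions modeled on the service transport code.

```c
#include <linux/list.h>

struct example_xprt {
        struct list_head xpt_ready;
};

/* The _safe variant is required because each entry is unlinked from the
 * pool's ready list while we are still walking that list. */
static void example_clear_pool(struct list_head *ready_list)
{
        struct example_xprt *xprt, *tmp;

        list_for_each_entry_safe(xprt, tmp, ready_list, xpt_ready)
                list_del_init(&xprt->xpt_ready);
}
```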
-
Trond Myklebust authored
Add the module parameter 'max_session_slots' to set the initial number of slots that the NFSv4.1 client will attempt to negotiate with the server. Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
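A hedged sketch of how such a parameter is typically declared; the name follows the commit text, while the type, default, and permission bits are assumptions.

```c
#include <linux/module.h>
#include <linux/moduleparam.h>

static unsigned short max_session_slots = 64;   /* illustrative default only */
module_param(max_session_slots, ushort, 0644);
MODULE_PARM_DESC(max_session_slots,
                 "Maximum number of outstanding NFSv4.1 requests the client will negotiate");
```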
-
Trond Myklebust authored
It is perfectly legal to negotiate up to 2^32-1 slots in the protocol, and with 10GigE, we are already seeing that 255 slots is far too limiting. Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-