Commit 31e6b01f authored by Nick Piggin

fs: rcu-walk for path lookup

Perform common cases of path lookups without any stores or locking in the
ancestor dentry elements. This is called rcu-walk, as opposed to the current
algorithm which is a refcount based walk, or ref-walk.

This results in far fewer atomic operations on every path element,
significantly improving path lookup performance. It also avoids cacheline
bouncing on common dentries, significantly improving scalability.

The overall design is like this:
* LOOKUP_RCU is set in nd->flags, which distinguishes rcu-walk from ref-walk.
* Take the RCU lock for the entire path walk, starting with the acquiring
  of the starting path (eg. root/cwd/fd-path). So now dentry refcounts are
  not required for dentry persistence.
* synchronize_rcu is called when unregistering a filesystem, so we can
  access d_ops and i_ops during rcu-walk.
* Similarly take the vfsmount lock for the entire path walk. So now mnt
  refcounts are not required for persistence. Also we are free to perform mount
  lookups, and to assume dentry mount points and mount roots are stable up and
  down the path.
* Have a per-dentry seqlock to protect the dentry name, parent, and inode,
  so we can load this tuple atomically, and also check whether any of its
  members have changed.
* Dentry lookups (based on parent, candidate string tuple) recheck the parent
  sequence after the child is found in case anything changed in the parent
  during the path walk.
* inode is also RCU protected so we can load d_inode and use the inode for
  limited things.
* i_mode, i_uid, i_gid can be tested for exec permissions during path walk.
* i_op can be loaded.

When we reach the destination dentry, we lock it, recheck lookup sequence,
and increment its refcount and mountpoint refcount. RCU and vfsmount locks
are dropped. This is termed "dropping rcu-walk". If the sequence check fails,
we cannot drop rcu-walk gracefully at the current point in the
lookup, so instead return -ECHILD (for want of a better errno). This signals the
path walking code to re-do the entire lookup with a ref-walk.

Aside from the final dentry, there are other situations that may be encountered
where we cannot continue rcu-walk. In that case, we drop rcu-walk (ie. take
a reference on the last good dentry) and continue with a ref-walk. Again, if
we cannot drop rcu-walk gracefully, we return -ECHILD and do the whole lookup
using ref-walk. But it is very important that we can continue with ref-walk
for most cases, particularly to avoid the overhead of double lookups, and to
gain the scalability advantages on common path elements (like cwd and root).

The cases where rcu-walk cannot continue are:
* NULL dentry (ie. any uncached path element)
* parent with d_inode->i_op->permission or ACLs
* dentries with d_revalidate
* Following links

In future patches, permission checks and d_revalidate become rcu-walk aware. It
may be possible eventually to make following links rcu-walk aware.

Uncached path elements will always require dropping to ref-walk mode, at the
very least because i_mutex needs to be grabbed, and objects allocated.
Signed-off-by: Nick Piggin <npiggin@kernel.dk>
parent 3c22cd57
RCU-based dcache locking model
==============================
On many workloads, the most common operation on dcache is to look up a
dentry, given a parent dentry and the name of the child. Typically,
for every open(), stat() etc., the dentry corresponding to the
pathname will be looked up by walking the tree starting with the first
component of the pathname and using that dentry along with the next
component to look up the next level and so on. Since it is a frequent
operation for workloads like multiuser environments and web servers,
it is important to optimize this path.
Prior to 2.5.10, dcache_lock was acquired in d_lookup and thus in
every component during path look-up. Since 2.5.10 onwards, the fast-walk
algorithm changed this by holding the dcache_lock at the beginning and
walking as many cached path component dentries as possible. This
significantly decreases the number of acquisitions of
dcache_lock. However it also increases the lock hold time
significantly and affects performance on large SMP machines. Since
the 2.5.62 kernel, dcache has been using a new locking model that uses RCU
to make dcache look-up lock-free.
The current dcache locking model is not very different from the
previous dcache locking model. Prior to the 2.5.62 kernel, dcache_lock
protected the hash chain, the d_child, d_alias and d_lru lists, as well as
d_inode and several other things like mount look-up. The RCU-based changes
affect only the way the hash chain is protected. For everything else
the dcache_lock must be taken for both traversing and
updating. Hash chain updates also take the dcache_lock. The
significant change is the way d_lookup traverses the hash chain: it
doesn't acquire the dcache_lock for this and relies on RCU to ensure
that the dentry has not been *freed*.
(dcache_lock no longer exists; dentry locking is now explained in fs/dcache.c.)
Dcache locking details
======================
For many multi-user workloads, open() and stat() on files are very
frequently occurring operations. Both involve walking of path names to
find the dentry corresponding to the concerned file. In the 2.4 kernel,
dcache_lock was held during look-up of each path component. Contention
and cache-line bouncing of this global lock caused significant
scalability problems. With the introduction of RCU in Linux kernel,
this was worked around by making the look-up of path components during
path walking lock-free.
Safe lock-free look-up of dcache hash table
===========================================
Dcache is a complex data structure with the hash table entries also
linked together in other lists. In the 2.4 kernel, dcache_lock protected
all the lists. RCU dentry hash walking works like this:
1. Deletion from the hash chain is done using the hlist_del_rcu() macro,
which doesn't reinitialize the next pointer of the deleted dentry; this
allows us to walk the chain safely, lock-free, while a deletion is
happening. This is standard hlist RCU iteration.
2. Insertion of a dentry into the hash table is done using
hlist_add_head_rcu(), which takes care of ordering the writes - the
writes to the dentry must be visible before the dentry is
inserted. This works in conjunction with hlist_for_each_rcu(),
which has since been replaced by hlist_for_each_entry_rcu(), while
walking the hash chain. The only requirement is that all
initialization of the dentry must be done before
hlist_add_head_rcu() since we don't have lock protection
while traversing the hash chain.
3. A dentry looked up without holding locks cannot be returned for
walking if it is unhashed. It may then have a NULL d_inode or other
bogosity, since RCU doesn't protect the other fields in the dentry. We
therefore use the DCACHE_UNHASHED flag to indicate unhashed dentries,
in conjunction with a per-dentry lock (d_lock). Once a candidate is
looked up without locks, we acquire the per-dentry lock (d_lock) and
check whether the dentry is unhashed. If so, the look-up fails. If not,
the reference count of the dentry is increased and the dentry is
returned.
4. Once a dentry is looked up, we must ensure that it doesn't go away
for the duration of the path walk over that component. In pre-2.5.10
code, this was done by holding a reference to the dentry. dcache_rcu
does the same. In some sense, dcache_rcu path walking looks like the
pre-2.5.10 version.
5. All dentry hash chain updates must take the per-dentry lock (see
fs/dcache.c). This excludes dput() to ensure that a dentry that has
been looked up concurrently does not get deleted before dget() can
take a ref.
6. There are several ways to do reference counting of RCU protected
objects. One such example is the ipv4 route cache, where deferred
freeing (using call_rcu()) is done as soon as the reference count
goes to zero. This cannot be done in the case of dentries because
tearing down a dentry requires blocking (dentry_iput()), which
isn't supported from RCU callbacks. Instead, tearing down of
dentries happens synchronously in dput(), but the actual freeing happens
later, when the RCU grace period is over. This allows safe lock-free
walking of the hash chains, but a matched dentry may have been
partially torn down. Checking the DCACHE_UNHASHED flag with
d_lock held detects such dentries and prevents them from being
returned from look-up.
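
Points 3 and 6 above boil down to "find locklessly, then validate under
the dentry's own lock". The following is a minimal userspace-style sketch
of that validation step; the struct, flag and field names mirror the
kernel's, but the definitions here are simplified assumptions, not the
real ones:

    #include <pthread.h>

    #define DCACHE_UNHASHED 0x1

    struct dentry {
        pthread_mutex_t d_lock;  /* per-dentry lock */
        unsigned int    d_flags; /* DCACHE_UNHASHED set when unhashed */
        int             d_count; /* reference count */
    };

    /* Called on a candidate found by the lock-free RCU hash-chain walk. */
    static struct dentry *validate_candidate(struct dentry *dentry)
    {
        pthread_mutex_lock(&dentry->d_lock);
        if (dentry->d_flags & DCACHE_UNHASHED) {
            /* Concurrently unhashed: may be partially torn down. */
            pthread_mutex_unlock(&dentry->d_lock);
            return NULL;                /* fail the look-up */
        }
        dentry->d_count++;              /* pin it before dropping d_lock */
        pthread_mutex_unlock(&dentry->d_lock);
        return dentry;
    }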
Maintaining POSIX rename semantics
==================================
Since look-up of dentries is lock-free, it can race against a
concurrent rename operation. For example, during rename of file A to
B, look-up of either A or B must succeed. So, if look-up of B happens
after A has been removed from the hash chain but not added to the new
hash chain, it may fail. Also, a comparison while the name is being
written concurrently by a rename may result in false positive matches
violating rename semantics. Issues related to race with rename are
handled as described below:
1. Look-up can be done in two ways - d_lookup() which is safe from
simultaneous renames and __d_lookup() which is not. If
__d_lookup() fails, it must be followed up by a d_lookup() to
correctly determine whether a dentry is in the hash table or
not. d_lookup() protects look-ups using a sequence lock
(rename_lock).
2. The name associated with a dentry (d_name) may be changed if a
rename is allowed to happen simultaneously. To avoid the memcmp() in
__d_lookup() going out of bounds due to a rename, or producing a false
positive comparison, the name comparison is done while holding the
per-dentry lock. This prevents concurrent renames during this
operation.
3. Hash table walking during look-up may move to a different bucket as
the current dentry is moved to a different bucket due to rename.
But we use hlists in dcache hash table and they are
null-terminated. So, even if a dentry moves to a different bucket,
hash chain walk will terminate. [with a list_head list, it may not
since termination is when the list_head in the original bucket is
reached]. Since we redo the d_parent check and compare name while
holding d_lock, lock-free look-up will not race against d_move().
4. There can be a theoretical race when a dentry keeps coming back to
its original bucket due to double moves. Because of this, the look-up may
consider that it has never moved and can end up in an infinite loop.
But this is not any worse than the theoretical livelocks we already
have in the kernel.
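
Point 1 above works by wrapping the unlocked lookup in a read section of
the rename_lock sequence lock. In outline (a paraphrase of the d_lookup()
of this era, whose updated signature appears in the diff further down),
the retry loop looks like this:

    struct dentry *d_lookup(struct dentry *parent, struct qstr *name)
    {
        struct dentry *dentry = NULL;
        unsigned seq;

        do {
            seq = read_seqbegin(&rename_lock);
            dentry = __d_lookup(parent, name);
            if (dentry)
                break;      /* a positive result is always safe to use */
        } while (read_seqretry(&rename_lock, seq));
        /* a negative result is trusted only if no rename was in flight */
        return dentry;
    }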
Important guidelines for filesystem developers related to dcache_rcu
====================================================================
1. Existing dcache interfaces (pre-2.5.62) exported to filesystems
don't change. Only the dcache internal implementation changes. However,
filesystems *must not* delete from the dentry hash chains directly
using the list macros, as was allowed earlier. They must use dcache
APIs like d_drop() or __d_drop(), depending on the situation.
2. d_flags is now protected by a per-dentry lock (d_lock). All access
to d_flags must be protected by it.
3. For a hashed dentry, checking of d_count needs to be protected by
d_lock.
Papers and other documentation on dcache locking
================================================
1. Scaling dcache with RCU (http://linuxjournal.com/article.php?sid=7124).
2. http://lse.sourceforge.net/locking/dcache/dcache.html
Path walking and name lookup locking
====================================
Path resolution is finding the dentry corresponding to a path name string, by
performing a path walk. Typically, for every open(), stat() etc., the path name
will be resolved. Paths are resolved by walking the namespace tree, starting
with the first component of the pathname (eg. root or cwd) with a known dentry,
then finding the child of that dentry whose name is the next component in the
path string, then repeating the lookup from the child dentry with the next
element, and so on.
Since it is a frequent operation for workloads like multiuser environments and
web servers, it is important to optimize this code.
Path walking synchronisation history:
Prior to 2.5.10, dcache_lock was acquired in d_lookup (dcache hash lookup) and
thus in every component during path look-up. Since 2.5.10 onwards, the fast-walk
algorithm changed this by holding the dcache_lock at the beginning and walking
as many cached path component dentries as possible. This significantly
decreases the number of acquisitions of dcache_lock. However it also increases
the lock hold time significantly and affects performance on large SMP machines.
Since the 2.5.62 kernel, dcache has been using a new locking model that uses RCU
to make dcache look-up lock-free.
All the above algorithms required taking a lock and a reference count on the
dentry that was looked up, so that it may be used as the basis for walking the
next path element. This is inefficient and unscalable. It is inefficient
because the locks and atomic operations required for every dentry element
slow things down. It is not scalable because many parallel applications that
are path-walk intensive tend to do path lookups starting from a common dentry
(usually, the root "/" or current working directory). So contention on these
common path elements causes lock and cacheline queueing.
Since 2.6.38, RCU is used to make a significant part of the entire path walk
(including dcache look-up) completely "store-free" (so, no locks, atomics, or
even stores into cachelines of common dentries). This is known as "rcu-walk"
path walking.
Path walking overview
=====================
A name string specifies a start (root directory, cwd, fd-relative) and a
sequence of elements (directory entry names), which together refer to a path in
the namespace. A path is represented as a (dentry, vfsmount) tuple. The name
elements are sub-strings, separated by '/'.
Name lookups will want to find a particular path that a name string refers to
(usually the final element, or parent of final element). This is done by taking
the path given by the name's starting point (which we know in advance -- eg.
current->fs->cwd or current->fs->root) as the first parent of the lookup. Then
iteratively for each subsequent name element, look up the child of the current
parent with the given name and if it is not the desired entry, make it the
parent for the next lookup.
A parent, of course, must be a directory, and we must have appropriate
permissions on the parent inode to be able to walk into it.
Turning the child into a parent for the next lookup requires more checks and
procedures. Symlinks essentially substitute the symlink name for the target
name in the name string, and require some recursive path walking. Mount points
must be followed into (thus changing the vfsmount that subsequent path elements
refer to), switching from the mount point path to the root of the particular
mounted vfsmount. These behaviours are variously modified depending on the
exact path walking flags.
Path walking then must, broadly, do several particular things:
- find the start point of the walk;
- perform permissions and validity checks on inodes;
- perform dcache hash name lookups on (parent, name element) tuples;
- traverse mount points;
- traverse symlinks;
- lookup and create missing parts of the path on demand.
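
The iteration this implies can be sketched in a few lines of userspace C.
This is purely illustrative: lookup_child() is a hypothetical stand-in for
the dcache lookup plus the permission, mount and symlink handling listed
above, and error handling beyond a missing entry is elided:

    #include <string.h>

    struct dentry;                                   /* opaque here */
    struct dentry *lookup_child(struct dentry *parent,
                                const char *name, size_t len); /* stand-in */

    struct dentry *walk(struct dentry *start, const char *path)
    {
        struct dentry *parent = start;

        while (*path) {
            const char *end;
            size_t len;

            while (*path == '/')                     /* skip separators */
                path++;
            if (!*path)
                break;
            end = strchr(path, '/');
            len = end ? (size_t)(end - path) : strlen(path);

            parent = lookup_child(parent, path, len); /* child -> new parent */
            if (!parent)
                return NULL;                         /* uncached or missing */
            path += len;
        }
        return parent;
    }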
Safe store-free look-up of dcache hash table
============================================
Dcache name lookup
------------------
In order to look up a dcache (parent, name) tuple, we take a hash of the tuple
and use that to select a bucket in the dcache-hash table. The list of entries
in that bucket is then walked, and we do a full comparison of each entry
against our (parent, name) tuple.
The hash lists are RCU protected, so list walking is not serialised with
concurrent updates (insertion, deletion from the hash). This is a standard RCU
list application with the exception of renames, which will be covered below.
Parent and name members of a dentry, as well as its membership in the dcache
hash, and its inode are protected by the per-dentry d_lock spinlock. A
reference is taken on the dentry (while the fields are verified under d_lock),
and this stabilises its d_inode pointer and actual inode. This gives a stable
point to perform the next step of our path walk against.
These members are also protected by d_seq seqlock, although this offers
read-only protection and no durability of results, so care must be taken when
using d_seq for synchronisation (see seqcount based lookups, below).
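
A simplified, self-contained sketch of that bucket selection and full
comparison (the hash mix and structures here are illustrative assumptions,
not the kernel's d_hash() or struct dentry):

    #include <stddef.h>
    #include <string.h>

    #define HASH_BUCKETS 1024

    struct sdentry {
        struct sdentry *d_next;       /* hash-chain link */
        struct sdentry *d_parent;
        const char     *d_name;
        size_t          d_len;
    };

    static struct sdentry *hash_table[HASH_BUCKETS];

    static unsigned bucket_of(struct sdentry *parent, const char *name,
                              size_t len)
    {
        unsigned long h = (unsigned long)parent;     /* mix in the parent */

        while (len--)
            h = h * 31 + (unsigned char)*name++;     /* ... and the name */
        return h % HASH_BUCKETS;
    }

    static struct sdentry *lookup(struct sdentry *parent, const char *name,
                                  size_t len)
    {
        struct sdentry *d = hash_table[bucket_of(parent, name, len)];

        for (; d; d = d->d_next) {
            if (d->d_parent != parent)  /* full tuple check, not just hash */
                continue;
            if (d->d_len != len || memcmp(d->d_name, name, len))
                continue;
            return d;
        }
        return NULL;
    }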
Renames
-------
Back to the rename case. In usual RCU protected lists, the only operations that
will happen to an object are insertion into the list and, eventually, removal
from it. The object will not be reused until an RCU grace period is complete.
This ensures the RCU list traversal primitives can run over the object without
problems (see RCU documentation for how this works).
However when a dentry is renamed, its hash value can change, requiring it to be
moved to a new hash list. Allocating and inserting a new alias would be
expensive and also problematic for directory dentries. Latency would be far too
high to wait for a grace period after removing the dentry and before inserting
it in the new hash bucket. So what is done is to insert the dentry into the
new list immediately.
However, when the dentry's list pointers are updated to point to objects in the
new list before waiting for a grace period, this can result in a concurrent RCU
lookup of the old list veering off into the new (incorrect) list and missing
the remaining dentries on the list.
There is no fundamental problem with walking down the wrong list, because the
dentry comparisons will never match. However it is fatal to miss a matching
dentry. So a seqlock is used to detect when a rename has occurred, and so the
lookup can be retried.
1 2 3
+---+ +---+ +---+
hlist-->| N-+->| N-+->| N-+->
head <--+-P |<-+-P |<-+-P |
+---+ +---+ +---+
Rename of dentry 2 may require it to be deleted from the above list, and inserted
into a new list. Deleting 2 gives the following list.
1 3
+---+ +---+ (don't worry, the longer pointers do not
hlist-->| N-+-------->| N-+-> impose a measurable performance overhead
head <--+-P |<--------+-P | on modern CPUs)
+---+ +---+
^ 2 ^
| +---+ |
| | N-+----+
+----+-P |
+---+
This is a standard RCU-list deletion, which leaves the deleted object's
pointers intact, so a concurrent list walker that is currently looking at
object 2 will correctly continue to object 3 when it is time to traverse the
next object.
However, when inserting object 2 onto a new list, we end up with this:
1 3
+---+ +---+
hlist-->| N-+-------->| N-+->
head <--+-P |<--------+-P |
+---+ +---+
2
+---+
| N-+---->
<----+-P |
+---+
Because we didn't wait for a grace period, there may be a concurrent lookup
still at 2. Now when it follows 2's 'next' pointer, it will walk off into
another list without ever having checked object 3.
A related, but distinctly different, issue is that of rename atomicity versus
lookup operations. If a file is renamed from 'A' to 'B', a lookup must only
find either 'A' or 'B'. So if a lookup of 'A' returns NULL, a subsequent lookup
of 'B' must succeed (note the reverse is not true).
Between deleting the dentry from the old hash list, and inserting it on the new
hash list, a lookup may find neither 'A' nor 'B' matching the dentry. The same
rename seqlock is also used to cover this race in much the same way, by
retrying a negative lookup result if a rename was in progress.
Seqcount based lookups
----------------------
In refcount based dcache lookups, d_lock is used to serialise access to
the dentry, stabilising it while comparing its name and parent and then
taking a reference count (the reference count then gives a stable place to
start the next part of the path walk from).
As explained above, we would like to do path walking without taking locks or
reference counts on intermediate dentries along the path. To do this, a per
dentry seqlock (d_seq) is used to take a "coherent snapshot" of what the dentry
looks like (its name, parent, and inode). That snapshot is then used to start
the next part of the path walk. When loading the coherent snapshot under d_seq,
care must be taken to load the members up-front, and use those pointers rather
than reloading from the dentry later on (otherwise we'd have interesting things
like d_inode going NULL underneath us, if the name was unlinked).
Also important is to avoid performing any destructive operations (pretty much:
no non-atomic stores to shared data), and to recheck the seqcount when we are
"done" with the operation. Retry or abort if the seqcount does not match.
Avoiding destructive or changing operations means we can easily unwind from
failure.
What this means is that a caller, provided it is holding the RCU lock to
protect the dentry object from disappearing, can perform a seqcount-based
lookup which does not increment the refcount on the dentry or write to
it in any way. The returned dentry can be used for subsequent operations,
provided that d_seq is rechecked after each such operation is complete.
Inodes are also RCU-freed, so the inode of a dentry found by a seqcount
lookup may also be queried for permissions.
With these two parts of the puzzle, we can do path lookups without taking
locks or refcounts on dentry elements.
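
A minimal sketch of the snapshot pattern follows, using a userspace model
of a seqcount (an odd value means a writer is mid-update). The helper names
imitate read_seqcount_begin()/read_seqcount_retry(), but everything here is
a simplified assumption, not the kernel implementation:

    #include <stdatomic.h>

    struct seqcount { atomic_uint sequence; };

    static unsigned snap_begin(struct seqcount *s)
    {
        unsigned seq;

        do {
            seq = atomic_load_explicit(&s->sequence, memory_order_acquire);
        } while (seq & 1);              /* writer active: wait for even */
        return seq;
    }

    static int snap_retry(struct seqcount *s, unsigned seq)
    {
        atomic_thread_fence(memory_order_acquire);
        return atomic_load_explicit(&s->sequence,
                                    memory_order_relaxed) != seq;
    }

    struct snap_dentry {
        struct seqcount     d_seq;
        const char         *d_name;
        struct snap_dentry *d_parent;
        void               *d_inode;
    };

    /* Load name, parent and inode as one coherent tuple, retry on change. */
    static void snapshot(struct snap_dentry *d, const char **name,
                         struct snap_dentry **parent, void **inode)
    {
        unsigned seq;

        do {
            seq = snap_begin(&d->d_seq);
            *name   = d->d_name;        /* use these copies afterwards ... */
            *parent = d->d_parent;      /* ... never reload from *d */
            *inode  = d->d_inode;
        } while (snap_retry(&d->d_seq, seq));
    }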
RCU-walk path walking design
============================
Path walking code now has two distinct modes, ref-walk and rcu-walk. ref-walk
is the traditional[*] way of performing dcache lookups using d_lock to
serialise concurrent modifications to the dentry and take a reference count on
it. ref-walk is simple and obvious, and may sleep, take locks, etc while path
walking is operating on each dentry. rcu-walk uses seqcount based dentry
lookups, and can perform lookup of intermediate elements without any stores to
shared data in the dentry or inode. rcu-walk cannot be applied to all cases;
eg. if the filesystem must sleep or perform non-trivial operations, rcu-walk
must be switched to ref-walk mode.
[*] RCU is still used for the dentry hash lookup in ref-walk, but not the full
path walk.
Where ref-walk uses a stable, refcounted ``parent'' to walk the remaining
path string, rcu-walk uses a d_seq protected snapshot. When looking up a
child of this parent snapshot, we open d_seq critical section on the child
before closing d_seq critical section on the parent. This gives an interlocking
ladder of snapshots to walk down.
proc 101
/----------------\
/ comm: "vi" \
/ fs.root: dentry0 \
\ fs.cwd: dentry2 /
\ /
\----------------/
So when vi wants to open("/home/npiggin/test.c", O_RDWR), then it will
start from current->fs->root, which is a pinned dentry. Alternatively,
"./test.c" would start from cwd; both names refer to the same path in
the context of proc101.
dentry 0
+---------------------+ rcu-walk begins here, we note d_seq, check the
| name: "/" | inode's permission, and then look up the next
| inode: 10 | path element which is "home"...
| children:"home", ...|
+---------------------+
|
dentry 1 V
+---------------------+ ... which brings us here. We find dentry1 via
| name: "home" | hash lookup, then note d_seq and compare name
| inode: 678 | string and parent pointer. When we have a match,
| children:"npiggin" | we now recheck the d_seq of dentry0. Then we
+---------------------+ check inode and look up the next element.
|
dentry2 V
+---------------------+ Note: if dentry0 is now modified, lookup is
| name: "npiggin" | not necessarily invalid, so we need only keep a
| inode: 543 | parent for d_seq verification, and grandparents
| children:"a.c", ... | can be forgotten.
+---------------------+
|
dentry3 V
+---------------------+ At this point we have our destination dentry.
| name: "a.c" | We now take its d_lock, verify d_seq of this
| inode: 14221 | dentry. If that checks out, we can increment
| children:NULL | its refcount because we're holding d_lock.
+---------------------+
Taking a refcount on a dentry from rcu-walk mode, by taking its d_lock,
re-checking its d_seq, and then incrementing its refcount is called
"dropping rcu" or dropping from rcu-walk into ref-walk mode.
It is, in some sense, a bit of a house of cards. If the seqcount check of the
parent snapshot fails, the house comes down, because we had closed the d_seq
section on the grandparent, so we have nothing left to stand on. In that case,
the path walk must be fully restarted (which we do in ref-walk mode, to avoid
livelocks). A full restart is costly, but fortunately restarts are
quite rare.
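
One rung of that ladder, condensed from the do_lookup() hunk later in this
commit (d_revalidate handling and mount following elided): the child's
d_seq is sampled inside __d_lookup_rcu() before the parent's d_seq
(nd->seq) is re-verified, so there is never a point where we hold no valid
snapshot at all.

    if (nd->flags & LOOKUP_RCU) {
        unsigned seq;

        *inode = nd->inode;
        dentry = __d_lookup_rcu(parent, name, &seq, inode);
        if (!dentry) {
            if (nameidata_drop_rcu(nd))
                return -ECHILD;         /* full restart in ref-walk mode */
            goto need_lookup;           /* uncached: do the real lookup */
        }
        /* Memory barrier in read_seqcount_begin of child is enough */
        if (__read_seqcount_retry(&parent->d_seq, nd->seq))
            return -ECHILD;
        nd->seq = seq;                  /* child snapshot is the new rung */
    }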
When we reach a point where sleeping is required, or a filesystem callout
requires ref-walk, then instead of restarting the walk, we attempt to drop rcu
at the last known good dentry we have. Avoiding a full restart in ref-walk in
these cases is fundamental for performance and scalability because blocking
operations such as creates and unlinks are not uncommon.
The detailed design for rcu-walk is like this:
* LOOKUP_RCU is set in nd->flags, which distinguishes rcu-walk from ref-walk.
* Take the RCU lock for the entire path walk, starting with the acquiring
of the starting path (eg. root/cwd/fd-path). So now dentry refcounts are
not required for dentry persistence.
* synchronize_rcu is called when unregistering a filesystem, so we can
access d_ops and i_ops during rcu-walk.
* Similarly take the vfsmount lock for the entire path walk. So now mnt
refcounts are not required for persistence. Also we are free to perform mount
lookups, and to assume dentry mount points and mount roots are stable up and
down the path.
* Have a per-dentry seqlock to protect the dentry name, parent, and inode,
so we can load this tuple atomically, and also check whether any of its
members have changed.
* Dentry lookups (based on parent, candidate string tuple) recheck the parent
sequence after the child is found in case anything changed in the parent
during the path walk.
* inode is also RCU protected so we can load d_inode and use the inode for
limited things.
* i_mode, i_uid, i_gid can be tested for exec permissions during path walk.
* i_op can be loaded.
* When the destination dentry is reached, drop rcu there (ie. take d_lock,
verify d_seq, increment refcount).
* If seqlock verification fails anywhere along the path, do a full restart
of the path lookup in ref-walk mode. -ECHILD tends to be used (for want of
a better errno) to signal an rcu-walk failure.
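
The "drop rcu there" step above is carried out by a small helper,
__d_rcu_to_refcount(), which the nameidata_drop_rcu*() functions in this
patch call with d_lock already held. Roughly (sketched here from its
callers; the exact body in include/linux/dcache.h may differ in detail):

    static inline int __d_rcu_to_refcount(struct dentry *dentry, unsigned seq)
    {
        int ret = 0;

        assert_spin_locked(&dentry->d_lock);
        if (!read_seqcount_retry(&dentry->d_seq, seq)) {
            ret = 1;                    /* snapshot still valid */
            dentry->d_count++;          /* safe: we hold d_lock */
        }
        return ret;
    }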
The cases where rcu-walk cannot continue are:
* NULL dentry (ie. any uncached path element)
* parent with d_inode->i_op->permission or ACLs
* dentries with d_revalidate
* Following links
In future patches, permission checks and d_revalidate become rcu-walk aware. It
may be possible eventually to make following links rcu-walk aware.
Uncached path elements will always require dropping to ref-walk mode, at the
very least because i_mutex needs to be grabbed, and objects allocated.
Final note:
"store-free" path walking is not strictly store free. We take vfsmount lock
and refcounts (both of which can be made per-cpu), and we also store to the
stack (which is essentially CPU-local), and we also have to take locks and
refcount on final dentry.
The point is that shared data, where practically possible, is not locked
or stored into. The result is massive improvements in performance and
scalability of path resolution.
Papers and other documentation on dcache locking
================================================
1. Scaling dcache with RCU (http://linuxjournal.com/article.php?sid=7124).
2. http://lse.sourceforge.net/locking/dcache/dcache.html
...@@ -152,9 +152,23 @@ static void d_free(struct dentry *dentry) ...@@ -152,9 +152,23 @@ static void d_free(struct dentry *dentry)
call_rcu(&dentry->d_u.d_rcu, __d_free); call_rcu(&dentry->d_u.d_rcu, __d_free);
} }
/**
* dentry_rcuwalk_barrier - invalidate in-progress rcu-walk lookups
* After this call, in-progress rcu-walk path lookup will fail. This
* should be called after unhashing, and after changing d_inode (if
* the dentry has not already been unhashed).
*/
static inline void dentry_rcuwalk_barrier(struct dentry *dentry)
{
assert_spin_locked(&dentry->d_lock);
/* Go through a barrier */
write_seqcount_barrier(&dentry->d_seq);
}
/* /*
* Release the dentry's inode, using the filesystem * Release the dentry's inode, using the filesystem
* d_iput() operation if defined. * d_iput() operation if defined. Dentry has no refcount
* and is unhashed.
*/ */
static void dentry_iput(struct dentry * dentry) static void dentry_iput(struct dentry * dentry)
__releases(dentry->d_lock) __releases(dentry->d_lock)
...@@ -178,6 +192,28 @@ static void dentry_iput(struct dentry * dentry) ...@@ -178,6 +192,28 @@ static void dentry_iput(struct dentry * dentry)
} }
} }
/*
* Release the dentry's inode, using the filesystem
* d_iput() operation if defined. dentry remains in-use.
*/
static void dentry_unlink_inode(struct dentry * dentry)
__releases(dentry->d_lock)
__releases(dcache_inode_lock)
{
struct inode *inode = dentry->d_inode;
dentry->d_inode = NULL;
list_del_init(&dentry->d_alias);
dentry_rcuwalk_barrier(dentry);
spin_unlock(&dentry->d_lock);
spin_unlock(&dcache_inode_lock);
if (!inode->i_nlink)
fsnotify_inoderemove(inode);
if (dentry->d_op && dentry->d_op->d_iput)
dentry->d_op->d_iput(dentry, inode);
else
iput(inode);
}
/* /*
* dentry_lru_(add|del|move_tail) must be called with d_lock held. * dentry_lru_(add|del|move_tail) must be called with d_lock held.
*/ */
...@@ -272,6 +308,7 @@ void __d_drop(struct dentry *dentry) ...@@ -272,6 +308,7 @@ void __d_drop(struct dentry *dentry)
spin_lock(&dcache_hash_lock); spin_lock(&dcache_hash_lock);
hlist_del_rcu(&dentry->d_hash); hlist_del_rcu(&dentry->d_hash);
spin_unlock(&dcache_hash_lock); spin_unlock(&dcache_hash_lock);
dentry_rcuwalk_barrier(dentry);
} }
} }
EXPORT_SYMBOL(__d_drop); EXPORT_SYMBOL(__d_drop);
...@@ -309,6 +346,7 @@ static inline struct dentry *dentry_kill(struct dentry *dentry, int ref) ...@@ -309,6 +346,7 @@ static inline struct dentry *dentry_kill(struct dentry *dentry, int ref)
spin_unlock(&dcache_inode_lock); spin_unlock(&dcache_inode_lock);
goto relock; goto relock;
} }
if (ref) if (ref)
dentry->d_count--; dentry->d_count--;
/* if dentry was on the d_lru list delete it from there */ /* if dentry was on the d_lru list delete it from there */
...@@ -1221,6 +1259,7 @@ struct dentry *d_alloc(struct dentry * parent, const struct qstr *name) ...@@ -1221,6 +1259,7 @@ struct dentry *d_alloc(struct dentry * parent, const struct qstr *name)
dentry->d_count = 1; dentry->d_count = 1;
dentry->d_flags = DCACHE_UNHASHED; dentry->d_flags = DCACHE_UNHASHED;
spin_lock_init(&dentry->d_lock); spin_lock_init(&dentry->d_lock);
seqcount_init(&dentry->d_seq);
dentry->d_inode = NULL; dentry->d_inode = NULL;
dentry->d_parent = NULL; dentry->d_parent = NULL;
dentry->d_sb = NULL; dentry->d_sb = NULL;
...@@ -1269,6 +1308,7 @@ static void __d_instantiate(struct dentry *dentry, struct inode *inode) ...@@ -1269,6 +1308,7 @@ static void __d_instantiate(struct dentry *dentry, struct inode *inode)
if (inode) if (inode)
list_add(&dentry->d_alias, &inode->i_dentry); list_add(&dentry->d_alias, &inode->i_dentry);
dentry->d_inode = inode; dentry->d_inode = inode;
dentry_rcuwalk_barrier(dentry);
spin_unlock(&dentry->d_lock); spin_unlock(&dentry->d_lock);
fsnotify_d_instantiate(dentry, inode); fsnotify_d_instantiate(dentry, inode);
} }
...@@ -1610,6 +1650,111 @@ struct dentry *d_add_ci(struct dentry *dentry, struct inode *inode, ...@@ -1610,6 +1650,111 @@ struct dentry *d_add_ci(struct dentry *dentry, struct inode *inode,
} }
EXPORT_SYMBOL(d_add_ci); EXPORT_SYMBOL(d_add_ci);
/**
* __d_lookup_rcu - search for a dentry (racy, store-free)
* @parent: parent dentry
* @name: qstr of name we wish to find
* @seq: returns d_seq value at the point where the dentry was found
* @inode: returns dentry->d_inode when the inode was found valid.
* Returns: dentry, or NULL
*
* __d_lookup_rcu is the dcache lookup function for rcu-walk name
* resolution (store-free path walking) design described in
* Documentation/filesystems/path-lookup.txt.
*
* This is not to be used outside core vfs.
*
* __d_lookup_rcu must only be used in rcu-walk mode, ie. with vfsmount lock
* held, and rcu_read_lock held. The returned dentry must not be stored into
* without taking d_lock and checking d_seq sequence count against @seq
* returned here.
*
* A refcount may be taken on the found dentry with the __d_rcu_to_refcount
* function.
*
* Alternatively, __d_lookup_rcu may be called again to look up the child of
* the returned dentry, so long as its parent's seqlock is checked after the
* child is looked up. Thus, an interlocking stepping of sequence lock checks
* is formed, giving integrity down the path walk.
*/
struct dentry *__d_lookup_rcu(struct dentry *parent, struct qstr *name,
unsigned *seq, struct inode **inode)
{
unsigned int len = name->len;
unsigned int hash = name->hash;
const unsigned char *str = name->name;
struct hlist_head *head = d_hash(parent, hash);
struct hlist_node *node;
struct dentry *dentry;
/*
* Note: There is significant duplication with __d_lookup which is
* required to prevent single threaded performance regressions
* especially on architectures where smp_rmb (in seqcounts) are costly.
* Keep the two functions in sync.
*/
/*
* The hash list is protected using RCU.
*
* Carefully use d_seq when comparing a candidate dentry, to avoid
* races with d_move().
*
* It is possible that concurrent renames can mess up our list
* walk here and result in missing our dentry, resulting in the
* false-negative result. d_lookup() protects against concurrent
* renames using rename_lock seqlock.
*
* See Documentation/vfs/dcache-locking.txt for more details.
*/
hlist_for_each_entry_rcu(dentry, node, head, d_hash) {
struct inode *i;
const char *tname;
int tlen;
if (dentry->d_name.hash != hash)
continue;
seqretry:
*seq = read_seqcount_begin(&dentry->d_seq);
if (dentry->d_parent != parent)
continue;
if (d_unhashed(dentry))
continue;
tlen = dentry->d_name.len;
tname = dentry->d_name.name;
i = dentry->d_inode;
/*
* This seqcount check is required to ensure name and
* len are loaded atomically, so as not to walk off the
* edge of memory when walking. If we could load this
* atomically some other way, we could drop this check.
*/
if (read_seqcount_retry(&dentry->d_seq, *seq))
goto seqretry;
if (parent->d_op && parent->d_op->d_compare) {
if (parent->d_op->d_compare(parent, *inode,
dentry, i,
tlen, tname, name))
continue;
} else {
if (tlen != len)
continue;
if (memcmp(tname, str, tlen))
continue;
}
/*
* No extra seqcount check is required after the name
* compare. The caller must perform a seqcount check in
* order to do anything useful with the returned dentry
* anyway.
*/
*inode = i;
return dentry;
}
return NULL;
}
/** /**
* d_lookup - search for a dentry * d_lookup - search for a dentry
* @parent: parent dentry * @parent: parent dentry
...@@ -1621,9 +1766,9 @@ EXPORT_SYMBOL(d_add_ci); ...@@ -1621,9 +1766,9 @@ EXPORT_SYMBOL(d_add_ci);
* dentry is returned. The caller must use dput to free the entry when it has * dentry is returned. The caller must use dput to free the entry when it has
* finished using it. %NULL is returned if the dentry does not exist. * finished using it. %NULL is returned if the dentry does not exist.
*/ */
struct dentry * d_lookup(struct dentry * parent, struct qstr * name) struct dentry *d_lookup(struct dentry *parent, struct qstr *name)
{ {
struct dentry * dentry = NULL; struct dentry *dentry;
unsigned seq; unsigned seq;
do { do {
...@@ -1636,7 +1781,7 @@ struct dentry * d_lookup(struct dentry * parent, struct qstr * name) ...@@ -1636,7 +1781,7 @@ struct dentry * d_lookup(struct dentry * parent, struct qstr * name)
} }
EXPORT_SYMBOL(d_lookup); EXPORT_SYMBOL(d_lookup);
/* /**
* __d_lookup - search for a dentry (racy) * __d_lookup - search for a dentry (racy)
* @parent: parent dentry * @parent: parent dentry
* @name: qstr of name we wish to find * @name: qstr of name we wish to find
...@@ -1651,16 +1796,23 @@ EXPORT_SYMBOL(d_lookup); ...@@ -1651,16 +1796,23 @@ EXPORT_SYMBOL(d_lookup);
* *
* __d_lookup callers must be commented. * __d_lookup callers must be commented.
*/ */
struct dentry * __d_lookup(struct dentry * parent, struct qstr * name) struct dentry *__d_lookup(struct dentry *parent, struct qstr *name)
{ {
unsigned int len = name->len; unsigned int len = name->len;
unsigned int hash = name->hash; unsigned int hash = name->hash;
const unsigned char *str = name->name; const unsigned char *str = name->name;
struct hlist_head *head = d_hash(parent,hash); struct hlist_head *head = d_hash(parent,hash);
struct dentry *found = NULL;
struct hlist_node *node; struct hlist_node *node;
struct dentry *found = NULL;
struct dentry *dentry; struct dentry *dentry;
/*
* Note: There is significant duplication with __d_lookup_rcu which is
* required to prevent single threaded performance regressions
* especially on architectures where smp_rmb (in seqcounts) are costly.
* Keep the two functions in sync.
*/
/* /*
* The hash list is protected using RCU. * The hash list is protected using RCU.
* *
...@@ -1677,24 +1829,15 @@ struct dentry * __d_lookup(struct dentry * parent, struct qstr * name) ...@@ -1677,24 +1829,15 @@ struct dentry * __d_lookup(struct dentry * parent, struct qstr * name)
rcu_read_lock(); rcu_read_lock();
hlist_for_each_entry_rcu(dentry, node, head, d_hash) { hlist_for_each_entry_rcu(dentry, node, head, d_hash) {
struct qstr *qstr; const char *tname;
int tlen;
if (dentry->d_name.hash != hash) if (dentry->d_name.hash != hash)
continue; continue;
if (dentry->d_parent != parent)
continue;
spin_lock(&dentry->d_lock); spin_lock(&dentry->d_lock);
/*
* Recheck the dentry after taking the lock - d_move may have
* changed things. Don't bother checking the hash because
* we're about to compare the whole name anyway.
*/
if (dentry->d_parent != parent) if (dentry->d_parent != parent)
goto next; goto next;
/* non-existing due to RCU? */
if (d_unhashed(dentry)) if (d_unhashed(dentry))
goto next; goto next;
...@@ -1702,16 +1845,17 @@ struct dentry * __d_lookup(struct dentry * parent, struct qstr * name) ...@@ -1702,16 +1845,17 @@ struct dentry * __d_lookup(struct dentry * parent, struct qstr * name)
* It is safe to compare names since d_move() cannot * It is safe to compare names since d_move() cannot
* change the qstr (protected by d_lock). * change the qstr (protected by d_lock).
*/ */
qstr = &dentry->d_name; tlen = dentry->d_name.len;
tname = dentry->d_name.name;
if (parent->d_op && parent->d_op->d_compare) { if (parent->d_op && parent->d_op->d_compare) {
if (parent->d_op->d_compare(parent, parent->d_inode, if (parent->d_op->d_compare(parent, parent->d_inode,
dentry, dentry->d_inode, dentry, dentry->d_inode,
qstr->len, qstr->name, name)) tlen, tname, name))
goto next; goto next;
} else { } else {
if (qstr->len != len) if (tlen != len)
goto next; goto next;
if (memcmp(qstr->name, str, len)) if (memcmp(tname, str, tlen))
goto next; goto next;
} }
...@@ -1821,7 +1965,7 @@ void d_delete(struct dentry * dentry) ...@@ -1821,7 +1965,7 @@ void d_delete(struct dentry * dentry)
goto again; goto again;
} }
dentry->d_flags &= ~DCACHE_CANT_MOUNT; dentry->d_flags &= ~DCACHE_CANT_MOUNT;
dentry_iput(dentry); dentry_unlink_inode(dentry);
fsnotify_nameremove(dentry, isdir); fsnotify_nameremove(dentry, isdir);
return; return;
} }
...@@ -1884,7 +2028,9 @@ void dentry_update_name_case(struct dentry *dentry, struct qstr *name) ...@@ -1884,7 +2028,9 @@ void dentry_update_name_case(struct dentry *dentry, struct qstr *name)
BUG_ON(dentry->d_name.len != name->len); /* d_lookup gives this */ BUG_ON(dentry->d_name.len != name->len); /* d_lookup gives this */
spin_lock(&dentry->d_lock); spin_lock(&dentry->d_lock);
write_seqcount_begin(&dentry->d_seq);
memcpy((unsigned char *)dentry->d_name.name, name->name, name->len); memcpy((unsigned char *)dentry->d_name.name, name->name, name->len);
write_seqcount_end(&dentry->d_seq);
spin_unlock(&dentry->d_lock); spin_unlock(&dentry->d_lock);
} }
EXPORT_SYMBOL(dentry_update_name_case); EXPORT_SYMBOL(dentry_update_name_case);
...@@ -1997,6 +2143,9 @@ void d_move(struct dentry * dentry, struct dentry * target) ...@@ -1997,6 +2143,9 @@ void d_move(struct dentry * dentry, struct dentry * target)
dentry_lock_for_move(dentry, target); dentry_lock_for_move(dentry, target);
write_seqcount_begin(&dentry->d_seq);
write_seqcount_begin(&target->d_seq);
/* Move the dentry to the target hash queue, if on different bucket */ /* Move the dentry to the target hash queue, if on different bucket */
spin_lock(&dcache_hash_lock); spin_lock(&dcache_hash_lock);
if (!d_unhashed(dentry)) if (!d_unhashed(dentry))
...@@ -2005,6 +2154,7 @@ void d_move(struct dentry * dentry, struct dentry * target) ...@@ -2005,6 +2154,7 @@ void d_move(struct dentry * dentry, struct dentry * target)
spin_unlock(&dcache_hash_lock); spin_unlock(&dcache_hash_lock);
/* Unhash the target: dput() will then get rid of it */ /* Unhash the target: dput() will then get rid of it */
/* __d_drop does write_seqcount_barrier, but they're OK to nest. */
__d_drop(target); __d_drop(target);
list_del(&dentry->d_u.d_child); list_del(&dentry->d_u.d_child);
...@@ -2028,6 +2178,9 @@ void d_move(struct dentry * dentry, struct dentry * target) ...@@ -2028,6 +2178,9 @@ void d_move(struct dentry * dentry, struct dentry * target)
list_add(&dentry->d_u.d_child, &dentry->d_parent->d_subdirs); list_add(&dentry->d_u.d_child, &dentry->d_parent->d_subdirs);
write_seqcount_end(&target->d_seq);
write_seqcount_end(&dentry->d_seq);
dentry_unlock_parents_for_move(dentry, target); dentry_unlock_parents_for_move(dentry, target);
spin_unlock(&target->d_lock); spin_unlock(&target->d_lock);
fsnotify_d_move(dentry); fsnotify_d_move(dentry);
...@@ -2110,6 +2263,9 @@ static void __d_materialise_dentry(struct dentry *dentry, struct dentry *anon) ...@@ -2110,6 +2263,9 @@ static void __d_materialise_dentry(struct dentry *dentry, struct dentry *anon)
dentry_lock_for_move(anon, dentry); dentry_lock_for_move(anon, dentry);
write_seqcount_begin(&dentry->d_seq);
write_seqcount_begin(&anon->d_seq);
dparent = dentry->d_parent; dparent = dentry->d_parent;
aparent = anon->d_parent; aparent = anon->d_parent;
...@@ -2130,6 +2286,9 @@ static void __d_materialise_dentry(struct dentry *dentry, struct dentry *anon) ...@@ -2130,6 +2286,9 @@ static void __d_materialise_dentry(struct dentry *dentry, struct dentry *anon)
else else
INIT_LIST_HEAD(&anon->d_u.d_child); INIT_LIST_HEAD(&anon->d_u.d_child);
write_seqcount_end(&dentry->d_seq);
write_seqcount_end(&anon->d_seq);
dentry_unlock_parents_for_move(anon, dentry); dentry_unlock_parents_for_move(anon, dentry);
spin_unlock(&dentry->d_lock); spin_unlock(&dentry->d_lock);
......
...@@ -115,6 +115,9 @@ int unregister_filesystem(struct file_system_type * fs) ...@@ -115,6 +115,9 @@ int unregister_filesystem(struct file_system_type * fs)
tmp = &(*tmp)->next; tmp = &(*tmp)->next;
} }
write_unlock(&file_systems_lock); write_unlock(&file_systems_lock);
synchronize_rcu();
return -EINVAL; return -EINVAL;
} }
......
...@@ -169,8 +169,8 @@ EXPORT_SYMBOL(putname); ...@@ -169,8 +169,8 @@ EXPORT_SYMBOL(putname);
/* /*
* This does basic POSIX ACL permission checking * This does basic POSIX ACL permission checking
*/ */
static int acl_permission_check(struct inode *inode, int mask, static inline int __acl_permission_check(struct inode *inode, int mask,
int (*check_acl)(struct inode *inode, int mask)) int (*check_acl)(struct inode *inode, int mask), int rcu)
{ {
umode_t mode = inode->i_mode; umode_t mode = inode->i_mode;
...@@ -180,10 +180,14 @@ static int acl_permission_check(struct inode *inode, int mask, ...@@ -180,10 +180,14 @@ static int acl_permission_check(struct inode *inode, int mask,
mode >>= 6; mode >>= 6;
else { else {
if (IS_POSIXACL(inode) && (mode & S_IRWXG) && check_acl) { if (IS_POSIXACL(inode) && (mode & S_IRWXG) && check_acl) {
if (rcu) {
return -ECHILD;
} else {
int error = check_acl(inode, mask); int error = check_acl(inode, mask);
if (error != -EAGAIN) if (error != -EAGAIN)
return error; return error;
} }
}
if (in_group_p(inode->i_gid)) if (in_group_p(inode->i_gid))
mode >>= 3; mode >>= 3;
...@@ -197,6 +201,12 @@ static int acl_permission_check(struct inode *inode, int mask, ...@@ -197,6 +201,12 @@ static int acl_permission_check(struct inode *inode, int mask,
return -EACCES; return -EACCES;
} }
static inline int acl_permission_check(struct inode *inode, int mask,
int (*check_acl)(struct inode *inode, int mask))
{
return __acl_permission_check(inode, mask, check_acl, 0);
}
/** /**
* generic_permission - check for access rights on a Posix-like filesystem * generic_permission - check for access rights on a Posix-like filesystem
* @inode: inode to check access rights for * @inode: inode to check access rights for
...@@ -374,6 +384,173 @@ void path_put(struct path *path) ...@@ -374,6 +384,173 @@ void path_put(struct path *path)
} }
EXPORT_SYMBOL(path_put); EXPORT_SYMBOL(path_put);
/**
* nameidata_drop_rcu - drop this nameidata out of rcu-walk
* @nd: nameidata pathwalk data to drop
* @Returns: 0 on success, -ECHILD on failure
*
* Path walking has 2 modes, rcu-walk and ref-walk (see
* Documentation/filesystems/path-lookup.txt). __drop_rcu* functions attempt
* to drop out of rcu-walk mode and take normal reference counts on dentries
* and vfsmounts to transition to rcu-walk mode. __drop_rcu* functions take
* refcounts at the last known good point before rcu-walk got stuck, so
* ref-walk may continue from there. If this is not successful (eg. a seqcount
* has changed), then failure is returned and path walk restarts from the
* beginning in ref-walk mode.
*
* nameidata_drop_rcu attempts to drop the current nd->path and nd->root into
* ref-walk. Must be called from rcu-walk context.
*/
static int nameidata_drop_rcu(struct nameidata *nd)
{
struct fs_struct *fs = current->fs;
struct dentry *dentry = nd->path.dentry;
BUG_ON(!(nd->flags & LOOKUP_RCU));
if (nd->root.mnt) {
spin_lock(&fs->lock);
if (nd->root.mnt != fs->root.mnt ||
nd->root.dentry != fs->root.dentry)
goto err_root;
}
spin_lock(&dentry->d_lock);
if (!__d_rcu_to_refcount(dentry, nd->seq))
goto err;
BUG_ON(nd->inode != dentry->d_inode);
spin_unlock(&dentry->d_lock);
if (nd->root.mnt) {
path_get(&nd->root);
spin_unlock(&fs->lock);
}
mntget(nd->path.mnt);
rcu_read_unlock();
br_read_unlock(vfsmount_lock);
nd->flags &= ~LOOKUP_RCU;
return 0;
err:
spin_unlock(&dentry->d_lock);
err_root:
if (nd->root.mnt)
spin_unlock(&fs->lock);
return -ECHILD;
}
/* Try to drop out of rcu-walk mode if we were in it, otherwise do nothing. */
static inline int nameidata_drop_rcu_maybe(struct nameidata *nd)
{
if (nd->flags & LOOKUP_RCU)
return nameidata_drop_rcu(nd);
return 0;
}
/**
* nameidata_dentry_drop_rcu - drop nameidata and dentry out of rcu-walk
* @nd: nameidata pathwalk data to drop
* @dentry: dentry to drop
* @Returns: 0 on success, -ECHILD on failure
*
* nameidata_dentry_drop_rcu attempts to drop the current nd->path and nd->root,
* and dentry into ref-walk. @dentry must be a path found by a do_lookup call on
* @nd. Must be called from rcu-walk context.
*/
static int nameidata_dentry_drop_rcu(struct nameidata *nd, struct dentry *dentry)
{
struct fs_struct *fs = current->fs;
struct dentry *parent = nd->path.dentry;
BUG_ON(!(nd->flags & LOOKUP_RCU));
if (nd->root.mnt) {
spin_lock(&fs->lock);
if (nd->root.mnt != fs->root.mnt ||
nd->root.dentry != fs->root.dentry)
goto err_root;
}
spin_lock(&parent->d_lock);
spin_lock_nested(&dentry->d_lock, DENTRY_D_LOCK_NESTED);
if (!__d_rcu_to_refcount(dentry, nd->seq))
goto err;
/*
* If the sequence check on the child dentry passed, then the child has
* not been removed from its parent. This means the parent dentry must
* be valid and able to take a reference at this point.
*/
BUG_ON(!IS_ROOT(dentry) && dentry->d_parent != parent);
BUG_ON(!parent->d_count);
parent->d_count++;
spin_unlock(&dentry->d_lock);
spin_unlock(&parent->d_lock);
if (nd->root.mnt) {
path_get(&nd->root);
spin_unlock(&fs->lock);
}
mntget(nd->path.mnt);
rcu_read_unlock();
br_read_unlock(vfsmount_lock);
nd->flags &= ~LOOKUP_RCU;
return 0;
err:
spin_unlock(&dentry->d_lock);
spin_unlock(&parent->d_lock);
err_root:
if (nd->root.mnt)
spin_unlock(&fs->lock);
return -ECHILD;
}
/* Try to drop out of rcu-walk mode if we were in it, otherwise do nothing. */
static inline int nameidata_dentry_drop_rcu_maybe(struct nameidata *nd, struct dentry *dentry)
{
if (nd->flags & LOOKUP_RCU)
return nameidata_dentry_drop_rcu(nd, dentry);
return 0;
}
/**
* nameidata_drop_rcu_last - drop nameidata ending path walk out of rcu-walk
* @nd: nameidata pathwalk data to drop
* @Returns: 0 on success, -ECHILD on failure
*
* nameidata_drop_rcu_last attempts to drop the current nd->path into ref-walk.
* nd->path should be the final element of the lookup, so nd->root is discarded.
* Must be called from rcu-walk context.
*/
static int nameidata_drop_rcu_last(struct nameidata *nd)
{
struct dentry *dentry = nd->path.dentry;
BUG_ON(!(nd->flags & LOOKUP_RCU));
nd->flags &= ~LOOKUP_RCU;
nd->root.mnt = NULL;
spin_lock(&dentry->d_lock);
if (!__d_rcu_to_refcount(dentry, nd->seq))
goto err_unlock;
BUG_ON(nd->inode != dentry->d_inode);
spin_unlock(&dentry->d_lock);
mntget(nd->path.mnt);
rcu_read_unlock();
br_read_unlock(vfsmount_lock);
return 0;
err_unlock:
spin_unlock(&dentry->d_lock);
rcu_read_unlock();
br_read_unlock(vfsmount_lock);
return -ECHILD;
}
/* Try to drop out of rcu-walk mode if we were in it, otherwise do nothing. */
static inline int nameidata_drop_rcu_last_maybe(struct nameidata *nd)
{
if (likely(nd->flags & LOOKUP_RCU))
return nameidata_drop_rcu_last(nd);
return 0;
}
/** /**
* release_open_intent - free up open intent resources * release_open_intent - free up open intent resources
* @nd: pointer to nameidata * @nd: pointer to nameidata
...@@ -459,26 +636,40 @@ force_reval_path(struct path *path, struct nameidata *nd) ...@@ -459,26 +636,40 @@ force_reval_path(struct path *path, struct nameidata *nd)
* short-cut DAC fails, then call ->permission() to do more * short-cut DAC fails, then call ->permission() to do more
* complete permission check. * complete permission check.
*/ */
static int exec_permission(struct inode *inode) static inline int __exec_permission(struct inode *inode, int rcu)
{ {
int ret; int ret;
if (inode->i_op->permission) { if (inode->i_op->permission) {
if (rcu)
return -ECHILD;
ret = inode->i_op->permission(inode, MAY_EXEC); ret = inode->i_op->permission(inode, MAY_EXEC);
if (!ret) if (!ret)
goto ok; goto ok;
return ret; return ret;
} }
ret = acl_permission_check(inode, MAY_EXEC, inode->i_op->check_acl); ret = __acl_permission_check(inode, MAY_EXEC, inode->i_op->check_acl, rcu);
if (!ret) if (!ret)
goto ok; goto ok;
if (rcu && ret == -ECHILD)
return ret;
if (capable(CAP_DAC_OVERRIDE) || capable(CAP_DAC_READ_SEARCH)) if (capable(CAP_DAC_OVERRIDE) || capable(CAP_DAC_READ_SEARCH))
goto ok; goto ok;
return ret; return ret;
ok: ok:
return security_inode_permission(inode, MAY_EXEC); return security_inode_exec_permission(inode, rcu);
}
static int exec_permission(struct inode *inode)
{
return __exec_permission(inode, 0);
}
static int exec_permission_rcu(struct inode *inode)
{
return __exec_permission(inode, 1);
} }
static __always_inline void set_root(struct nameidata *nd) static __always_inline void set_root(struct nameidata *nd)
...@@ -489,8 +680,20 @@ static __always_inline void set_root(struct nameidata *nd) ...@@ -489,8 +680,20 @@ static __always_inline void set_root(struct nameidata *nd)
static int link_path_walk(const char *, struct nameidata *); static int link_path_walk(const char *, struct nameidata *);
static __always_inline void set_root_rcu(struct nameidata *nd)
{
if (!nd->root.mnt) {
struct fs_struct *fs = current->fs;
spin_lock(&fs->lock);
nd->root = fs->root;
spin_unlock(&fs->lock);
}
}
static __always_inline int __vfs_follow_link(struct nameidata *nd, const char *link) static __always_inline int __vfs_follow_link(struct nameidata *nd, const char *link)
{ {
int ret;
if (IS_ERR(link)) if (IS_ERR(link))
goto fail; goto fail;
...@@ -500,8 +703,10 @@ static __always_inline int __vfs_follow_link(struct nameidata *nd, const char *l ...@@ -500,8 +703,10 @@ static __always_inline int __vfs_follow_link(struct nameidata *nd, const char *l
nd->path = nd->root; nd->path = nd->root;
path_get(&nd->root); path_get(&nd->root);
} }
nd->inode = nd->path.dentry->d_inode;
return link_path_walk(link, nd); ret = link_path_walk(link, nd);
return ret;
fail: fail:
path_put(&nd->path); path_put(&nd->path);
return PTR_ERR(link); return PTR_ERR(link);
...@@ -516,11 +721,12 @@ static void path_put_conditional(struct path *path, struct nameidata *nd) ...@@ -516,11 +721,12 @@ static void path_put_conditional(struct path *path, struct nameidata *nd)
static inline void path_to_nameidata(struct path *path, struct nameidata *nd) static inline void path_to_nameidata(struct path *path, struct nameidata *nd)
{ {
if (!(nd->flags & LOOKUP_RCU)) {
dput(nd->path.dentry); dput(nd->path.dentry);
if (nd->path.mnt != path->mnt) { if (nd->path.mnt != path->mnt)
mntput(nd->path.mnt); mntput(nd->path.mnt);
nd->path.mnt = path->mnt;
} }
nd->path.mnt = path->mnt;
nd->path.dentry = path->dentry; nd->path.dentry = path->dentry;
} }
...@@ -535,9 +741,11 @@ __do_follow_link(struct path *path, struct nameidata *nd, void **p) ...@@ -535,9 +741,11 @@ __do_follow_link(struct path *path, struct nameidata *nd, void **p)
if (path->mnt != nd->path.mnt) { if (path->mnt != nd->path.mnt) {
path_to_nameidata(path, nd); path_to_nameidata(path, nd);
nd->inode = nd->path.dentry->d_inode;
dget(dentry); dget(dentry);
} }
mntget(path->mnt); mntget(path->mnt);
nd->last_type = LAST_BIND; nd->last_type = LAST_BIND;
*p = dentry->d_inode->i_op->follow_link(dentry, nd); *p = dentry->d_inode->i_op->follow_link(dentry, nd);
error = PTR_ERR(*p); error = PTR_ERR(*p);
...@@ -591,6 +799,20 @@ static inline int do_follow_link(struct path *path, struct nameidata *nd) ...@@ -591,6 +799,20 @@ static inline int do_follow_link(struct path *path, struct nameidata *nd)
return err; return err;
} }
static int follow_up_rcu(struct path *path)
{
struct vfsmount *parent;
struct dentry *mountpoint;
parent = path->mnt->mnt_parent;
if (parent == path->mnt)
return 0;
mountpoint = path->mnt->mnt_mountpoint;
path->dentry = mountpoint;
path->mnt = parent;
return 1;
}
int follow_up(struct path *path) int follow_up(struct path *path)
{ {
struct vfsmount *parent; struct vfsmount *parent;
...@@ -615,6 +837,21 @@ int follow_up(struct path *path) ...@@ -615,6 +837,21 @@ int follow_up(struct path *path)
/* /*
* serialization is taken care of in namespace.c * serialization is taken care of in namespace.c
*/ */
static void __follow_mount_rcu(struct nameidata *nd, struct path *path,
struct inode **inode)
{
while (d_mountpoint(path->dentry)) {
struct vfsmount *mounted;
mounted = __lookup_mnt(path->mnt, path->dentry, 1);
if (!mounted)
return;
path->mnt = mounted;
path->dentry = mounted->mnt_root;
nd->seq = read_seqcount_begin(&path->dentry->d_seq);
*inode = path->dentry->d_inode;
}
}
static int __follow_mount(struct path *path) static int __follow_mount(struct path *path)
{ {
int res = 0; int res = 0;
...@@ -660,7 +897,42 @@ int follow_down(struct path *path) ...@@ -660,7 +897,42 @@ int follow_down(struct path *path)
return 0; return 0;
} }
static __always_inline void follow_dotdot(struct nameidata *nd) static int follow_dotdot_rcu(struct nameidata *nd)
{
struct inode *inode = nd->inode;
set_root_rcu(nd);
while(1) {
if (nd->path.dentry == nd->root.dentry &&
nd->path.mnt == nd->root.mnt) {
break;
}
if (nd->path.dentry != nd->path.mnt->mnt_root) {
struct dentry *old = nd->path.dentry;
struct dentry *parent = old->d_parent;
unsigned seq;
seq = read_seqcount_begin(&parent->d_seq);
if (read_seqcount_retry(&old->d_seq, nd->seq))
return -ECHILD;
inode = parent->d_inode;
nd->path.dentry = parent;
nd->seq = seq;
break;
}
if (!follow_up_rcu(&nd->path))
break;
nd->seq = read_seqcount_begin(&nd->path.dentry->d_seq);
inode = nd->path.dentry->d_inode;
}
__follow_mount_rcu(nd, &nd->path, &inode);
nd->inode = inode;
return 0;
}
static void follow_dotdot(struct nameidata *nd)
{ {
set_root(nd); set_root(nd);
...@@ -681,6 +953,7 @@ static __always_inline void follow_dotdot(struct nameidata *nd) ...@@ -681,6 +953,7 @@ static __always_inline void follow_dotdot(struct nameidata *nd)
break; break;
} }
follow_mount(&nd->path); follow_mount(&nd->path);
nd->inode = nd->path.dentry->d_inode;
} }
/* /*
@@ -718,18 +991,17 @@ static struct dentry *d_alloc_and_lookup(struct dentry *parent,
* It _is_ time-critical.
*/
static int do_lookup(struct nameidata *nd, struct qstr *name,
-struct path *path)
+struct path *path, struct inode **inode)
{
struct vfsmount *mnt = nd->path.mnt;
-struct dentry *dentry, *parent;
+struct dentry *dentry, *parent = nd->path.dentry;
struct inode *dir;
/*
* See if the low-level filesystem might want
* to use its own hash..
*/
-if (nd->path.dentry->d_op && nd->path.dentry->d_op->d_hash) {
+if (parent->d_op && parent->d_op->d_hash) {
-int err = nd->path.dentry->d_op->d_hash(nd->path.dentry,
-nd->path.dentry->d_inode, name);
+int err = parent->d_op->d_hash(parent, nd->inode, name);
if (err < 0)
return err;
}
@@ -739,7 +1011,32 @@ static int do_lookup(struct nameidata *nd, struct qstr *name,
* of a false negative due to a concurrent rename, we're going to
* do the non-racy lookup, below.
*/
-dentry = __d_lookup(nd->path.dentry, name);
+if (nd->flags & LOOKUP_RCU) {
unsigned seq;
*inode = nd->inode;
dentry = __d_lookup_rcu(parent, name, &seq, inode);
if (!dentry) {
if (nameidata_drop_rcu(nd))
return -ECHILD;
goto need_lookup;
}
/* Memory barrier in read_seqcount_begin of child is enough */
if (__read_seqcount_retry(&parent->d_seq, nd->seq))
return -ECHILD;
nd->seq = seq;
if (dentry->d_op && dentry->d_op->d_revalidate) {
/* We commonly drop rcu-walk here */
if (nameidata_dentry_drop_rcu(nd, dentry))
return -ECHILD;
goto need_revalidate;
}
path->mnt = mnt;
path->dentry = dentry;
__follow_mount_rcu(nd, path, inode);
} else {
dentry = __d_lookup(parent, name);
if (!dentry)
goto need_lookup;
found:
@@ -749,11 +1046,13 @@ static int do_lookup(struct nameidata *nd, struct qstr *name,
path->mnt = mnt;
path->dentry = dentry;
__follow_mount(path);
*inode = path->dentry->d_inode;
}
return 0;
need_lookup:
parent = nd->path.dentry;
dir = parent->d_inode;
BUG_ON(nd->inode != dir);
mutex_lock(&dir->i_mutex);
/*
@@ -815,7 +1114,6 @@ static inline int follow_on_final(struct inode *inode, unsigned lookup_flags)
static int link_path_walk(const char *name, struct nameidata *nd)
{
struct path next;
struct inode *inode;
int err;
unsigned int lookup_flags = nd->flags;
@@ -824,18 +1122,28 @@ static int link_path_walk(const char *name, struct nameidata *nd)
if (!*name)
goto return_reval;
inode = nd->path.dentry->d_inode;
if (nd->depth)
lookup_flags = LOOKUP_FOLLOW | (nd->flags & LOOKUP_CONTINUE);
/* At this point we know we have a real path component. */
for(;;) {
struct inode *inode;
unsigned long hash;
struct qstr this;
unsigned int c;
nd->flags |= LOOKUP_CONTINUE;
-err = exec_permission(inode);
+if (nd->flags & LOOKUP_RCU) {
err = exec_permission_rcu(nd->inode);
if (err == -ECHILD) {
if (nameidata_drop_rcu(nd))
return -ECHILD;
goto exec_again;
}
} else {
exec_again:
err = exec_permission(nd->inode);
}
if (err)
break;
@@ -869,34 +1177,41 @@ static int link_path_walk(const char *name, struct nameidata *nd)
case 2:
if (this.name[1] != '.')
break;
if (nd->flags & LOOKUP_RCU) {
if (follow_dotdot_rcu(nd))
return -ECHILD;
} else
follow_dotdot(nd);
inode = nd->path.dentry->d_inode;
/* fallthrough */
case 1:
continue;
}
/* This does the actual lookups.. */
-err = do_lookup(nd, &this, &next);
+err = do_lookup(nd, &this, &next, &inode);
if (err)
break;
err = -ENOENT;
inode = next.dentry->d_inode;
if (!inode)
goto out_dput;
if (inode->i_op->follow_link) {
/* We commonly drop rcu-walk here */
if (nameidata_dentry_drop_rcu_maybe(nd, next.dentry))
return -ECHILD;
BUG_ON(inode != next.dentry->d_inode);
err = do_follow_link(&next, nd);
if (err)
goto return_err;
nd->inode = nd->path.dentry->d_inode;
err = -ENOENT;
-inode = nd->path.dentry->d_inode;
-if (!inode)
+if (!nd->inode)
break;
-} else
+} else {
path_to_nameidata(&next, nd);
nd->inode = inode;
}
err = -ENOTDIR;
-if (!inode->i_op->lookup)
+if (!nd->inode->i_op->lookup)
break;
continue;
/* here ends the main loop */
@@ -914,29 +1229,36 @@ static int link_path_walk(const char *name, struct nameidata *nd)
case 2:
if (this.name[1] != '.')
break;
if (nd->flags & LOOKUP_RCU) {
if (follow_dotdot_rcu(nd))
return -ECHILD;
} else
follow_dotdot(nd);
inode = nd->path.dentry->d_inode;
/* fallthrough */
case 1:
goto return_reval;
}
-err = do_lookup(nd, &this, &next);
+err = do_lookup(nd, &this, &next, &inode);
if (err)
break;
inode = next.dentry->d_inode;
if (follow_on_final(inode, lookup_flags)) {
if (nameidata_dentry_drop_rcu_maybe(nd, next.dentry))
return -ECHILD;
BUG_ON(inode != next.dentry->d_inode);
err = do_follow_link(&next, nd);
if (err)
goto return_err;
-inode = nd->path.dentry->d_inode;
+nd->inode = nd->path.dentry->d_inode;
-} else
+} else {
path_to_nameidata(&next, nd);
nd->inode = inode;
}
err = -ENOENT;
-if (!inode)
+if (!nd->inode)
break;
if (lookup_flags & LOOKUP_DIRECTORY) {
err = -ENOTDIR;
-if (!inode->i_op->lookup)
+if (!nd->inode->i_op->lookup)
break;
}
goto return_base;
@@ -958,6 +1280,8 @@ static int link_path_walk(const char *name, struct nameidata *nd)
*/
if (nd->path.dentry && nd->path.dentry->d_sb &&
(nd->path.dentry->d_sb->s_type->fs_flags & FS_REVAL_DOT)) {
if (nameidata_drop_rcu_maybe(nd))
return -ECHILD;
err = -ESTALE;
/* Note: we do not d_invalidate() */
if (!nd->path.dentry->d_op->d_revalidate(
@@ -965,16 +1289,34 @@ static int link_path_walk(const char *name, struct nameidata *nd)
break;
}
return_base:
if (nameidata_drop_rcu_last_maybe(nd))
return -ECHILD;
return 0;
out_dput:
if (!(nd->flags & LOOKUP_RCU))
path_put_conditional(&next, nd);
break;
}
if (!(nd->flags & LOOKUP_RCU))
path_put(&nd->path);
return_err:
return err;
}
static inline int path_walk_rcu(const char *name, struct nameidata *nd)
{
current->total_link_count = 0;
return link_path_walk(name, nd);
}
static inline int path_walk_simple(const char *name, struct nameidata *nd)
{
current->total_link_count = 0;
return link_path_walk(name, nd);
}
static int path_walk(const char *name, struct nameidata *nd)
{
struct path save = nd->path;
@@ -1000,6 +1342,88 @@ static int path_walk(const char *name, struct nameidata *nd)
return result;
}
static void path_finish_rcu(struct nameidata *nd)
{
if (nd->flags & LOOKUP_RCU) {
/* RCU dangling. Cancel it. */
nd->flags &= ~LOOKUP_RCU;
nd->root.mnt = NULL;
rcu_read_unlock();
br_read_unlock(vfsmount_lock);
}
if (nd->file)
fput(nd->file);
}
static int path_init_rcu(int dfd, const char *name, unsigned int flags, struct nameidata *nd)
{
int retval = 0;
int fput_needed;
struct file *file;
nd->last_type = LAST_ROOT; /* if there are only slashes... */
nd->flags = flags | LOOKUP_RCU;
nd->depth = 0;
nd->root.mnt = NULL;
nd->file = NULL;
if (*name=='/') {
struct fs_struct *fs = current->fs;
br_read_lock(vfsmount_lock);
rcu_read_lock();
spin_lock(&fs->lock);
nd->root = fs->root;
nd->path = nd->root;
nd->seq = read_seqcount_begin(&nd->path.dentry->d_seq);
spin_unlock(&fs->lock);
} else if (dfd == AT_FDCWD) {
struct fs_struct *fs = current->fs;
br_read_lock(vfsmount_lock);
rcu_read_lock();
spin_lock(&fs->lock);
nd->path = fs->pwd;
nd->seq = read_seqcount_begin(&nd->path.dentry->d_seq);
spin_unlock(&fs->lock);
} else {
struct dentry *dentry;
file = fget_light(dfd, &fput_needed);
retval = -EBADF;
if (!file)
goto out_fail;
dentry = file->f_path.dentry;
retval = -ENOTDIR;
if (!S_ISDIR(dentry->d_inode->i_mode))
goto fput_fail;
retval = file_permission(file, MAY_EXEC);
if (retval)
goto fput_fail;
nd->path = file->f_path;
if (fput_needed)
nd->file = file;
nd->seq = read_seqcount_begin(&nd->path.dentry->d_seq);
br_read_lock(vfsmount_lock);
rcu_read_lock();
}
nd->inode = nd->path.dentry->d_inode;
return 0;
fput_fail:
fput_light(file, fput_needed);
out_fail:
return retval;
}
static int path_init(int dfd, const char *name, unsigned int flags, struct nameidata *nd)
{
int retval = 0;
@@ -1040,6 +1464,7 @@ static int path_init(int dfd, const char *name, unsigned int flags, struct namei
fput_light(file, fput_needed);
}
nd->inode = nd->path.dentry->d_inode;
return 0;
fput_fail:
@@ -1052,16 +1477,53 @@ static int path_init(int dfd, const char *name, unsigned int flags, struct namei
static int do_path_lookup(int dfd, const char *name,
unsigned int flags, struct nameidata *nd)
{
-int retval = path_init(dfd, name, flags, nd);
+int retval;
if (!retval)
/*
* Path walking is largely split up into 2 different synchronisation
* schemes, rcu-walk and ref-walk (explained in
* Documentation/filesystems/path-lookup.txt). These share much of the
* path walk code, but some things particularly setup, cleanup, and
* following mounts are sufficiently divergent that functions are
* duplicated. Typically there is a function foo(), and its RCU
* analogue, foo_rcu().
*
* -ECHILD is the error number of choice (just to avoid clashes) that
* is returned if some aspect of an rcu-walk fails. Such an error must
* be handled by restarting a traditional ref-walk (which will always
* be able to complete).
*/
retval = path_init_rcu(dfd, name, flags, nd);
if (unlikely(retval))
return retval;
retval = path_walk_rcu(name, nd);
path_finish_rcu(nd);
if (nd->root.mnt) {
path_put(&nd->root);
nd->root.mnt = NULL;
}
if (unlikely(retval == -ECHILD || retval == -ESTALE)) {
/* slower, locked walk */
if (retval == -ESTALE)
flags |= LOOKUP_REVAL;
retval = path_init(dfd, name, flags, nd);
if (unlikely(retval))
return retval;
retval = path_walk(name, nd);
if (unlikely(!retval && !audit_dummy_context() && nd->path.dentry &&
nd->path.dentry->d_inode))
audit_inode(name, nd->path.dentry);
if (nd->root.mnt) {
path_put(&nd->root);
nd->root.mnt = NULL;
}
}
if (likely(!retval)) {
if (unlikely(!audit_dummy_context())) {
if (nd->path.dentry && nd->inode)
audit_inode(name, nd->path.dentry);
}
}
return retval;
}
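The comment at the top of do_path_lookup() above is the contract for the whole file: rcu-walk is an opportunistic fast path, and any -ECHILD (or -ESTALE) simply causes the lookup to be redone as a ref-walk, with LOOKUP_REVAL added in the stale case. A minimal sketch of that retry shape, using hypothetical stand-ins (fast_walk()/slow_walk() are not kernel functions) rather than the real path_init_rcu()/path_walk() pair:

/* --- illustrative sketch, not part of the commit --- */
#include <errno.h>
#include <stdio.h>

/* Hypothetical stand-in for the rcu-walk attempt (path_init_rcu + path_walk_rcu). */
static int fast_walk(const char *name)
{
        (void)name;
        return -ECHILD;         /* pretend we hit something rcu-walk cannot handle */
}

/* Hypothetical stand-in for the locked ref-walk (path_init + path_walk). */
static int slow_walk(const char *name, int reval)
{
        (void)name;
        (void)reval;
        return 0;               /* ref-walk can always complete */
}

static int lookup(const char *name)
{
        int err = fast_walk(name);

        if (err == -ECHILD || err == -ESTALE) {
                /* -ESTALE additionally forces revalidation, like LOOKUP_REVAL above */
                err = slow_walk(name, err == -ESTALE);
        }
        return err;
}

int main(void)
{
        printf("lookup: %d\n", lookup("/tmp"));
        return 0;
}

The point of the structure is that the fast path never has to handle the hard cases; it only has to detect them and report -ECHILD so the slow path can redo the work.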
@@ -1104,10 +1566,11 @@ int vfs_path_lookup(struct dentry *dentry, struct vfsmount *mnt,
path_get(&nd->path);
nd->root = nd->path;
path_get(&nd->root);
nd->inode = nd->path.dentry->d_inode;
retval = path_walk(name, nd);
if (unlikely(!retval && !audit_dummy_context() && nd->path.dentry &&
-nd->path.dentry->d_inode))
+nd->inode))
audit_inode(name, nd->path.dentry);
path_put(&nd->root);
@@ -1488,6 +1951,7 @@ static int __open_namei_create(struct nameidata *nd, struct path *path,
mutex_unlock(&dir->d_inode->i_mutex);
dput(nd->path.dentry);
nd->path.dentry = path->dentry;
if (error)
return error;
/* Don't check for write permission, don't truncate */
@@ -1582,6 +2046,9 @@ static struct file *finish_open(struct nameidata *nd,
return ERR_PTR(error);
}
/*
* Handle O_CREAT case for do_filp_open
*/
static struct file *do_last(struct nameidata *nd, struct path *path,
int open_flag, int acc_mode,
int mode, const char *pathname)
@@ -1603,42 +2070,16 @@ static struct file *do_last(struct nameidata *nd, struct path *path,
}
/* fallthrough */
case LAST_ROOT:
if (open_flag & O_CREAT)
goto exit;
/* fallthrough */
case LAST_BIND:
audit_inode(pathname, dir);
goto ok;
}
/* trailing slashes? */
-if (nd->last.name[nd->last.len]) {
+if (nd->last.name[nd->last.len])
if (open_flag & O_CREAT)
goto exit;
nd->flags |= LOOKUP_DIRECTORY | LOOKUP_FOLLOW;
}
/* just plain open? */
if (!(open_flag & O_CREAT)) {
error = do_lookup(nd, &nd->last, path);
if (error)
goto exit;
error = -ENOENT;
if (!path->dentry->d_inode)
goto exit_dput;
if (path->dentry->d_inode->i_op->follow_link)
return NULL;
error = -ENOTDIR;
if (nd->flags & LOOKUP_DIRECTORY) {
if (!path->dentry->d_inode->i_op->lookup)
goto exit_dput;
}
path_to_nameidata(path, nd);
audit_inode(pathname, nd->path.dentry);
goto ok;
}
/* OK, it's O_CREAT */
mutex_lock(&dir->d_inode->i_mutex);
path->dentry = lookup_hash(nd);
@@ -1709,8 +2150,9 @@ static struct file *do_last(struct nameidata *nd, struct path *path,
return NULL;
path_to_nameidata(path, nd);
nd->inode = path->dentry->d_inode;
error = -EISDIR;
-if (S_ISDIR(path->dentry->d_inode->i_mode))
+if (S_ISDIR(nd->inode->i_mode))
goto exit;
ok:
filp = finish_open(nd, open_flag, acc_mode);
@@ -1741,7 +2183,7 @@ struct file *do_filp_open(int dfd, const char *pathname,
struct path path;
int count = 0;
int flag = open_to_namei_flags(open_flag);
-int force_reval = 0;
+int flags;
if (!(open_flag & O_CREAT))
mode = 0;
@@ -1770,54 +2212,84 @@ struct file *do_filp_open(int dfd, const char *pathname,
if (open_flag & O_APPEND)
acc_mode |= MAY_APPEND;
-/* find the parent */
+flags = LOOKUP_OPEN;
if (open_flag & O_CREAT) {
flags |= LOOKUP_CREATE;
if (open_flag & O_EXCL)
flags |= LOOKUP_EXCL;
}
if (open_flag & O_DIRECTORY)
flags |= LOOKUP_DIRECTORY;
if (!(open_flag & O_NOFOLLOW))
flags |= LOOKUP_FOLLOW;
filp = get_empty_filp();
if (!filp)
return ERR_PTR(-ENFILE);
filp->f_flags = open_flag;
nd.intent.open.file = filp;
nd.intent.open.flags = flag;
nd.intent.open.create_mode = mode;
if (open_flag & O_CREAT)
goto creat;
/* !O_CREAT, simple open */
error = do_path_lookup(dfd, pathname, flags, &nd);
if (unlikely(error))
goto out_filp;
error = -ELOOP;
if (!(nd.flags & LOOKUP_FOLLOW)) {
if (nd.inode->i_op->follow_link)
goto out_path;
}
error = -ENOTDIR;
if (nd.flags & LOOKUP_DIRECTORY) {
if (!nd.inode->i_op->lookup)
goto out_path;
}
audit_inode(pathname, nd.path.dentry);
filp = finish_open(&nd, open_flag, acc_mode);
return filp;
creat:
/* OK, have to create the file. Find the parent. */
error = path_init_rcu(dfd, pathname,
LOOKUP_PARENT | (flags & LOOKUP_REVAL), &nd);
if (error)
goto out_filp;
error = path_walk_rcu(pathname, &nd);
path_finish_rcu(&nd);
if (unlikely(error == -ECHILD || error == -ESTALE)) {
/* slower, locked walk */
if (error == -ESTALE) {
reval:
-error = path_init(dfd, pathname, LOOKUP_PARENT, &nd);
+flags |= LOOKUP_REVAL;
}
error = path_init(dfd, pathname,
LOOKUP_PARENT | (flags & LOOKUP_REVAL), &nd);
if (error)
-return ERR_PTR(error);
+goto out_filp;
if (force_reval)
nd.flags |= LOOKUP_REVAL;
-current->total_link_count = 0;
+error = path_walk_simple(pathname, &nd);
error = link_path_walk(pathname, &nd);
if (error) {
filp = ERR_PTR(error);
goto out;
}
-if (unlikely(!audit_dummy_context()) && (open_flag & O_CREAT))
+if (unlikely(error))
goto out_filp;
if (unlikely(!audit_dummy_context()))
audit_inode(pathname, nd.path.dentry);
/*
* We have the parent and last component.
*/
nd.flags = flags;
error = -ENFILE;
filp = get_empty_filp();
if (filp == NULL)
goto exit_parent;
nd.intent.open.file = filp;
filp->f_flags = open_flag;
nd.intent.open.flags = flag;
nd.intent.open.create_mode = mode;
nd.flags &= ~LOOKUP_PARENT;
nd.flags |= LOOKUP_OPEN;
if (open_flag & O_CREAT) {
nd.flags |= LOOKUP_CREATE;
if (open_flag & O_EXCL)
nd.flags |= LOOKUP_EXCL;
}
if (open_flag & O_DIRECTORY)
nd.flags |= LOOKUP_DIRECTORY;
if (!(open_flag & O_NOFOLLOW))
nd.flags |= LOOKUP_FOLLOW;
filp = do_last(&nd, &path, open_flag, acc_mode, mode, pathname);
while (unlikely(!filp)) { /* trailing symlink */
struct path holder;
struct inode *inode = path.dentry->d_inode;
void *cookie;
error = -ELOOP;
/* S_ISDIR part is a temporary automount kludge */
-if (!(nd.flags & LOOKUP_FOLLOW) && !S_ISDIR(inode->i_mode))
+if (!(nd.flags & LOOKUP_FOLLOW) && !S_ISDIR(nd.inode->i_mode))
goto exit_dput;
if (count++ == 32)
goto exit_dput;
@@ -1838,36 +2310,33 @@ struct file *do_filp_open(int dfd, const char *pathname,
goto exit_dput;
error = __do_follow_link(&path, &nd, &cookie);
if (unlikely(error)) {
if (!IS_ERR(cookie) && nd.inode->i_op->put_link)
nd.inode->i_op->put_link(path.dentry, &nd, cookie);
/* nd.path had been dropped */
-if (!IS_ERR(cookie) && inode->i_op->put_link)
-inode->i_op->put_link(path.dentry, &nd, cookie);
+nd.path = path;
+goto out_path;
path_put(&path);
release_open_intent(&nd);
filp = ERR_PTR(error);
goto out;
}
holder = path;
nd.flags &= ~LOOKUP_PARENT;
filp = do_last(&nd, &path, open_flag, acc_mode, mode, pathname);
-if (inode->i_op->put_link)
-inode->i_op->put_link(holder.dentry, &nd, cookie);
+if (nd.inode->i_op->put_link)
+nd.inode->i_op->put_link(holder.dentry, &nd, cookie);
path_put(&holder);
}
out:
if (nd.root.mnt)
path_put(&nd.root);
-if (filp == ERR_PTR(-ESTALE) && !force_reval) {
+if (filp == ERR_PTR(-ESTALE) && !(flags & LOOKUP_REVAL))
force_reval = 1;
goto reval;
}
return filp;
exit_dput:
path_put_conditional(&path, &nd);
out_path:
path_put(&nd.path);
out_filp:
if (!IS_ERR(nd.intent.open.file))
release_open_intent(&nd);
exit_parent:
path_put(&nd.path);
filp = ERR_PTR(error);
goto out;
}
...
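Taken together, the do_filp_open() hunks above split the open path in two: a plain open (no O_CREAT) resolves the full pathname with do_path_lookup() and jumps straight to finish_open(), while O_CREAT walks only to the parent (rcu-walk first, then a locked walk on -ECHILD/-ESTALE) and lets do_last() handle the final component. A compressed, purely illustrative outline of that shape, with hypothetical stand-ins (lookup_full()/lookup_parent_rcu()/lookup_parent_ref() are not kernel functions) and all error handling and the trailing-symlink loop omitted:

/* --- illustrative sketch, not part of the commit --- */
#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-in for do_path_lookup(): resolve the whole name. */
static int lookup_full(const char *pathname)
{
        (void)pathname;
        return 0;
}

/* Hypothetical stand-in for path_init_rcu() + path_walk_rcu() to the parent. */
static int lookup_parent_rcu(const char *pathname)
{
        (void)pathname;
        return -ECHILD;         /* pretend rcu-walk could not finish */
}

/* Hypothetical stand-in for path_init() + path_walk_simple() to the parent. */
static int lookup_parent_ref(const char *pathname, bool reval)
{
        (void)pathname;
        (void)reval;
        return 0;
}

static int open_sketch(const char *pathname, bool o_creat)
{
        int err;

        if (!o_creat)           /* plain open: resolve everything, then finish_open() */
                return lookup_full(pathname);

        /* O_CREAT: find the parent, falling back from rcu-walk to a locked walk */
        err = lookup_parent_rcu(pathname);
        if (err == -ECHILD || err == -ESTALE)
                err = lookup_parent_ref(pathname, err == -ESTALE);
        /* ... do_last() would create/open the final component here ... */
        return err;
}

int main(void)
{
        printf("%d %d\n", open_sketch("/tmp/x", false), open_sketch("/tmp/x", true));
        return 0;
}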
@@ -402,6 +402,10 @@ static int proc_sys_compare(const struct dentry *parent,
const struct dentry *dentry, const struct inode *inode,
unsigned int len, const char *str, const struct qstr *name)
{
/* Although proc doesn't have negative dentries, rcu-walk means
* that inode here can be NULL */
if (!inode)
return 0;
if (name->len != len)
return 1;
if (memcmp(name->name, str, len))
...
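The new guard is the interesting part: under rcu-walk a ->d_compare() can run while the dentry's inode is concurrently going away, so @inode may be NULL even on a filesystem that never keeps negative dentries. A hedged sketch of the same guard in a hypothetical filesystem's d_compare() (example_compare() is not a real kernel function; like proc above, it treats the NULL-inode case as a plain name match and defers any inode-based checks):

/* --- illustrative sketch, not part of the commit --- */
/* d_compare() convention: return 0 for a match, non-zero for a mismatch. */
static int example_compare(const struct dentry *parent, const struct inode *pinode,
                const struct dentry *dentry, const struct inode *inode,
                unsigned int len, const char *str, const struct qstr *name)
{
        if (name->len != len)
                return 1;
        if (memcmp(name->name, str, len))
                return 1;
        /* rcu-walk: inode may be NULL, never dereference it before checking */
        if (!inode)
                return 0;       /* names match; any inode-based test is skipped */
        /* ... inode-based comparison (e.g. private per-object state) would go here ... */
        return 0;
}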
@@ -5,6 +5,7 @@
#include <linux/list.h>
#include <linux/rculist.h>
#include <linux/spinlock.h>
#include <linux/seqlock.h>
#include <linux/cache.h>
#include <linux/rcupdate.h>
@@ -90,6 +91,7 @@ struct dentry {
unsigned int d_count; /* protected by d_lock */
unsigned int d_flags; /* protected by d_lock */
spinlock_t d_lock; /* per dentry lock */
seqcount_t d_seq; /* per dentry seqlock */
int d_mounted;
struct inode *d_inode; /* Where the name belongs to - NULL is
* negative */
@@ -266,9 +268,33 @@ extern void d_move(struct dentry *, struct dentry *);
extern struct dentry *d_ancestor(struct dentry *, struct dentry *);
/* appendix may either be NULL or be used for transname suffixes */
-extern struct dentry * d_lookup(struct dentry *, struct qstr *);
-extern struct dentry * __d_lookup(struct dentry *, struct qstr *);
-extern struct dentry * d_hash_and_lookup(struct dentry *, struct qstr *);
+extern struct dentry *d_lookup(struct dentry *, struct qstr *);
+extern struct dentry *d_hash_and_lookup(struct dentry *, struct qstr *);
+extern struct dentry *__d_lookup(struct dentry *, struct qstr *);
extern struct dentry *__d_lookup_rcu(struct dentry *parent, struct qstr *name,
unsigned *seq, struct inode **inode);
/**
* __d_rcu_to_refcount - take a refcount on dentry if sequence check is ok
* @dentry: dentry to take a ref on
* @seq: seqcount to verify against
* @Returns: 0 on failure, else 1.
*
* __d_rcu_to_refcount operates on a dentry,seq pair that was returned
* by __d_lookup_rcu, to get a reference on an rcu-walk dentry.
*/
static inline int __d_rcu_to_refcount(struct dentry *dentry, unsigned seq)
{
int ret = 0;
assert_spin_locked(&dentry->d_lock);
if (!read_seqcount_retry(&dentry->d_seq, seq)) {
ret = 1;
dentry->d_count++;
}
return ret;
}
/* validate "insecure" dentry pointer */ /* validate "insecure" dentry pointer */
extern int d_validate(struct dentry *, struct dentry *); extern int d_validate(struct dentry *, struct dentry *);
......
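The kernel-doc above describes __d_rcu_to_refcount() as the pivot between the two walking modes: it turns the (dentry, seq) pair obtained under RCU into a real reference, but only if d_seq still matches what the lookup saw. A rough sketch of how a caller leaving rcu-walk might use it; helpers such as nameidata_drop_rcu(), referenced in the fs/namei.c hunks but not shown in this excerpt, are expected to look roughly like this, though the body below is illustrative rather than the exact kernel code (in particular it skips the nd->root revalidation the real helpers also perform):

/* --- illustrative sketch, not part of the commit --- */
/* Leave rcu-walk by pinning the current dentry, or ask for a ref-walk restart. */
static int drop_rcu_walk_sketch(struct nameidata *nd)
{
        struct dentry *dentry = nd->path.dentry;

        spin_lock(&dentry->d_lock);
        if (!__d_rcu_to_refcount(dentry, nd->seq)) {
                /* name/parent/inode changed under us: cannot pin this point */
                spin_unlock(&dentry->d_lock);
                return -ECHILD;
        }
        spin_unlock(&dentry->d_lock);

        mntget(nd->path.mnt);           /* vfsmount ref now that the dentry is pinned */
        nd->flags &= ~LOOKUP_RCU;
        rcu_read_unlock();
        br_read_unlock(vfsmount_lock);
        return 0;
}

The seqcount check is what makes the hand-off safe: if anything about the dentry changed since __d_lookup_rcu() sampled d_seq, the refcount is not taken and the caller restarts in ref-walk mode.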
@@ -19,7 +19,10 @@ struct nameidata {
struct path path;
struct qstr last;
struct path root;
struct file *file;
struct inode *inode; /* path.dentry.d_inode */
unsigned int flags;
unsigned seq;
int last_type;
unsigned depth;
char *saved_names[MAX_NESTED_LINKS + 1];
@@ -43,11 +46,13 @@ enum {LAST_NORM, LAST_ROOT, LAST_DOT, LAST_DOTDOT, LAST_BIND};
* - internal "there are more path components" flag
* - dentry cache is untrusted; force a real lookup
*/
-#define LOOKUP_FOLLOW 1
-#define LOOKUP_DIRECTORY 2
-#define LOOKUP_CONTINUE 4
-#define LOOKUP_PARENT 16
-#define LOOKUP_REVAL 64
+#define LOOKUP_FOLLOW 0x0001
+#define LOOKUP_DIRECTORY 0x0002
+#define LOOKUP_CONTINUE 0x0004
+#define LOOKUP_PARENT 0x0010
+#define LOOKUP_REVAL 0x0020
+#define LOOKUP_RCU 0x0040
/*
* Intent data
*/
...
@@ -457,7 +457,6 @@ static inline void security_free_mnt_opts(struct security_mnt_opts *opts)
* called when the actual read/write operations are performed.
* @inode contains the inode structure to check.
* @mask contains the permission mask.
* @nd contains the nameidata (may be NULL).
* Return 0 if permission is granted.
* @inode_setattr:
* Check permission before setting file attributes. Note that the kernel
@@ -1713,6 +1712,7 @@ int security_inode_rename(struct inode *old_dir, struct dentry *old_dentry,
int security_inode_readlink(struct dentry *dentry);
int security_inode_follow_link(struct dentry *dentry, struct nameidata *nd);
int security_inode_permission(struct inode *inode, int mask);
int security_inode_exec_permission(struct inode *inode, unsigned int flags);
int security_inode_setattr(struct dentry *dentry, struct iattr *attr);
int security_inode_getattr(struct vfsmount *mnt, struct dentry *dentry);
int security_inode_setxattr(struct dentry *dentry, const char *name,
@@ -2102,6 +2102,12 @@ static inline int security_inode_permission(struct inode *inode, int mask)
return 0;
}
static inline int security_inode_exec_permission(struct inode *inode,
unsigned int flags)
{
return 0;
}
static inline int security_inode_setattr(struct dentry *dentry,
struct iattr *attr)
{
...
@@ -513,6 +513,15 @@ int security_inode_permission(struct inode *inode, int mask)
return security_ops->inode_permission(inode, mask);
}
int security_inode_exec_permission(struct inode *inode, unsigned int flags)
{
if (unlikely(IS_PRIVATE(inode)))
return 0;
if (flags)
return -ECHILD;
return security_ops->inode_permission(inode, MAY_EXEC);
}
int security_inode_setattr(struct dentry *dentry, struct iattr *attr)
{
if (unlikely(IS_PRIVATE(dentry->d_inode)))
...
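security_inode_exec_permission() makes the LSM layer's position explicit at this stage: if the caller passes non-zero flags (meaning it is in rcu-walk and cannot sleep), the hook refuses with -ECHILD and the walk falls back to ref-walk, where the normal inode_permission path runs. The exec_permission_rcu() used by link_path_walk() earlier is not shown in this excerpt; what follows is a rough, assumption-laden sketch of how such a caller might sit on top of the new hook, not the literal fs/namei.c code:

/* --- illustrative sketch, not part of the commit --- */
/* Exec/search permission check that is safe to run under rcu-walk. */
static int exec_permission_rcu_sketch(struct inode *inode)
{
        /* Filesystems with their own ->permission() may sleep; can't call it here. */
        if (inode->i_op->permission)
                return -ECHILD;

        /* Cheap i_mode checks are fine in rcu-walk; be conservative otherwise. */
        if (!(inode->i_mode & S_IXUGO))
                return -ECHILD;         /* let ref-walk make the final decision */

        /* Non-zero flags == "rcu-walk": LSMs that cannot cope answer -ECHILD. */
        return security_inode_exec_permission(inode, 1);
}

The conservative -ECHILD returns are deliberate: anything the fast path is unsure about is punted to the ref-walk retry rather than answered incorrectly.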