Commit b5c84bf6 authored by Nick Piggin

fs: dcache remove dcache_lock

dcache_lock no longer protects anything. remove it.
Signed-off-by: Nick Piggin <npiggin@kernel.dk>
parent 949854d0
......@@ -21,14 +21,14 @@ prototypes:
char *(*d_dname)(struct dentry *dentry, char *buffer, int buflen);
locking rules:
                dcache_lock  rename_lock  ->d_lock  may block
d_revalidate:   no           no           no        yes
d_hash          no           no           no        no
d_compare:      no           yes          no        no
d_delete:       yes          no           yes       no
d_release:      no           no           no        yes
d_iput:         no           no           no        yes
d_dname:        no           no           no        no

                rename_lock  ->d_lock  may block
d_revalidate:   no           no        yes
d_hash          no           no        no
d_compare:      yes          no        no
d_delete:       no           yes       no
d_release:      no           no        yes
d_iput:         no           no        yes
d_dname:        no           no        no
--------------------------- inode_operations ---------------------------
prototypes:
......
......@@ -31,6 +31,7 @@ significant change is the way d_lookup traverses the hash chain, it
doesn't acquire the dcache_lock for this and relies on RCU to ensure
that the dentry has not been *freed*.
dcache_lock no longer exists; dentry locking is explained in fs/dcache.c
Dcache locking details
======================
......@@ -50,14 +51,12 @@ Safe lock-free look-up of dcache hash table
Dcache is a complex data structure with the hash table entries also
linked together in other lists. In the 2.4 kernel, dcache_lock protected
all the lists. We applied RCU only on hash chain walking. The rest of
the lists are still protected by dcache_lock. Some of the important
changes are :
all the lists. RCU dentry hash walking works like this:
1. The deletion from hash chain is done using hlist_del_rcu() macro
which doesn't initialize next pointer of the deleted dentry and
this allows us to walk safely lock-free while a deletion is
happening.
happening. This is a standard hlist_rcu iteration.
2. Insertion of a dentry into the hash table is done using
hlist_add_head_rcu() which takes care of ordering the writes - the
......@@ -66,19 +65,18 @@ changes are :
which has since been replaced by hlist_for_each_entry_rcu(), while
walking the hash chain. The only requirement is that all
initialization to the dentry must be done before
hlist_add_head_rcu() since we don't have dcache_lock protection
while traversing the hash chain. This isn't different from the
existing code.
3. The dentry looked up without holding dcache_lock by cannot be
returned for walking if it is unhashed. It then may have a NULL
d_inode or other bogosity since RCU doesn't protect the other
fields in the dentry. We therefore use a flag DCACHE_UNHASHED to
indicate unhashed dentries and use this in conjunction with a
per-dentry lock (d_lock). Once looked up without the dcache_lock,
we acquire the per-dentry lock (d_lock) and check if the dentry is
unhashed. If so, the look-up is failed. If not, the reference count
of the dentry is increased and the dentry is returned.
hlist_add_head_rcu() since we don't have lock protection
while traversing the hash chain.
3. The dentry looked up without holding locks cannot be returned for
walking if it is unhashed. It may then have a NULL d_inode or other
bogosity, since RCU doesn't protect the other fields in the dentry. We
therefore use the flag DCACHE_UNHASHED to mark unhashed dentries, in
conjunction with the per-dentry lock (d_lock). Once a dentry has been
looked up without locks, we acquire its d_lock and check whether it is
unhashed. If so, the look-up fails. If not, the dentry's reference
count is increased and the dentry is returned (a sketch of this check
follows below).
4. Once a dentry is looked up, it must be ensured during the path walk
for that component it doesn't go away. In pre-2.5.10 code, this was
......@@ -86,10 +84,10 @@ changes are :
In some sense, dcache_rcu path walking looks like the pre-2.5.10
version.
5. All dentry hash chain updates must take the dcache_lock as well as
the per-dentry lock in that order. dput() does this to ensure that
a dentry that has just been looked up in another CPU doesn't get
deleted before dget() can be done on it.
5. All dentry hash chain updates must take the per-dentry lock (see
fs/dcache.c). This excludes dput() to ensure that a dentry that has
been looked up concurrently does not get deleted before dget() can
take a ref.
6. There are several ways to do reference counting of RCU protected
objects. One such example is in ipv4 route cache where deferred
......
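To illustrate points 1-3 above, here is a minimal, schematic sketch of the
lock-free hash walk plus d_lock validation. It is not the actual fs/dcache.c
implementation (which, as of this series, uses bit-locked hash lists); the
function name, the 'head' and 'hash' parameters and the d_hash member usage
are illustrative assumptions.

	static struct dentry *example_lockfree_lookup(struct hlist_head *head,
						      unsigned int hash)
	{
		struct dentry *dentry;
		struct hlist_node *node;

		rcu_read_lock();
		/* the chain is only modified with hlist_*_rcu() helpers */
		hlist_for_each_entry_rcu(dentry, node, head, d_hash) {
			if (dentry->d_name.hash != hash)
				continue;
			spin_lock(&dentry->d_lock);
			if (d_unhashed(dentry)) {
				/* raced with __d_drop(): fail this candidate */
				spin_unlock(&dentry->d_lock);
				continue;
			}
			/* still hashed: take a reference while holding d_lock */
			dget_dlock(dentry);
			spin_unlock(&dentry->d_lock);
			rcu_read_unlock();
			return dentry;
		}
		rcu_read_unlock();
		return NULL;
	}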
......@@ -216,7 +216,6 @@ had ->revalidate()) add calls in ->follow_link()/->readlink().
->d_parent changes are not protected by BKL anymore. Read access is safe
if at least one of the following is true:
* filesystem has no cross-directory rename()
* dcache_lock is held
* we know that parent had been locked (e.g. we are looking at
->d_parent of ->lookup() argument).
* we are called from ->rename().
......@@ -340,3 +339,10 @@ look at examples of other filesystems) for guidance.
.d_hash() calling convention and locking rules are significantly
changed. Read updated documentation in Documentation/filesystems/vfs.txt (and
look at examples of other filesystems) for guidance.
---
[mandatory]
dcache_lock is gone, replaced by fine-grained locks. See fs/dcache.c
for details of what locks to replace dcache_lock with in order to protect
particular things. Most of the time, a filesystem only needs ->d_lock, which
protects *all* the dcache state of a given dentry.
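As a hedged illustration of that rule, the conversion applied by most of the
filesystem hunks below when walking a parent's ->d_subdirs list looks roughly
like this (do_something() stands in for hypothetical per-filesystem work; only
the locking change is the point):

	/* Before: nested under the global lock */
	spin_lock(&dcache_lock);
	spin_lock(&parent->d_lock);
	list_for_each_entry(child, &parent->d_subdirs, d_u.d_child)
		do_something(child);
	spin_unlock(&parent->d_lock);
	spin_unlock(&dcache_lock);

	/* After: the parent's d_lock alone protects its ->d_subdirs list */
	spin_lock(&parent->d_lock);
	list_for_each_entry(child, &parent->d_subdirs, d_u.d_child)
		do_something(child);
	spin_unlock(&parent->d_lock);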
......@@ -159,21 +159,18 @@ static void spufs_prune_dir(struct dentry *dir)
mutex_lock(&dir->d_inode->i_mutex);
list_for_each_entry_safe(dentry, tmp, &dir->d_subdirs, d_u.d_child) {
spin_lock(&dcache_lock);
spin_lock(&dentry->d_lock);
if (!(d_unhashed(dentry)) && dentry->d_inode) {
dget_locked_dlock(dentry);
__d_drop(dentry);
spin_unlock(&dentry->d_lock);
simple_unlink(dir->d_inode, dentry);
/* XXX: what is dcache_lock protecting here? Other
/* XXX: what was dcache_lock protecting here? Other
* filesystems (IB, configfs) release dcache_lock
* before unlink */
spin_unlock(&dcache_lock);
dput(dentry);
} else {
spin_unlock(&dentry->d_lock);
spin_unlock(&dcache_lock);
}
}
shrink_dcache_parent(dir);
......
......@@ -277,18 +277,14 @@ static int remove_file(struct dentry *parent, char *name)
goto bail;
}
spin_lock(&dcache_lock);
spin_lock(&tmp->d_lock);
if (!(d_unhashed(tmp) && tmp->d_inode)) {
dget_locked_dlock(tmp);
__d_drop(tmp);
spin_unlock(&tmp->d_lock);
spin_unlock(&dcache_lock);
simple_unlink(parent->d_inode, tmp);
} else {
} else
spin_unlock(&tmp->d_lock);
spin_unlock(&dcache_lock);
}
ret = 0;
bail:
......
......@@ -453,17 +453,14 @@ static int remove_file(struct dentry *parent, char *name)
goto bail;
}
spin_lock(&dcache_lock);
spin_lock(&tmp->d_lock);
if (!(d_unhashed(tmp) && tmp->d_inode)) {
dget_locked_dlock(tmp);
__d_drop(tmp);
spin_unlock(&tmp->d_lock);
spin_unlock(&dcache_lock);
simple_unlink(parent->d_inode, tmp);
} else {
spin_unlock(&tmp->d_lock);
spin_unlock(&dcache_lock);
}
ret = 0;
......
......@@ -101,7 +101,6 @@ int pohmelfs_path_length(struct pohmelfs_inode *pi)
d = first;
seq = read_seqbegin(&rename_lock);
rcu_read_lock();
spin_lock(&dcache_lock);
if (!IS_ROOT(d) && d_unhashed(d))
len += UNHASHED_OBSCURE_STRING_SIZE; /* Obscure " (deleted)" string */
......@@ -110,7 +109,6 @@ int pohmelfs_path_length(struct pohmelfs_inode *pi)
len += d->d_name.len + 1; /* Plus slash */
d = d->d_parent;
}
spin_unlock(&dcache_lock);
rcu_read_unlock();
if (read_seqretry(&rename_lock, seq))
goto rename_retry;
......
......@@ -62,7 +62,6 @@ smb_invalidate_dircache_entries(struct dentry *parent)
struct list_head *next;
struct dentry *dentry;
spin_lock(&dcache_lock);
spin_lock(&parent->d_lock);
next = parent->d_subdirs.next;
while (next != &parent->d_subdirs) {
......@@ -72,7 +71,6 @@ smb_invalidate_dircache_entries(struct dentry *parent)
next = next->next;
}
spin_unlock(&parent->d_lock);
spin_unlock(&dcache_lock);
}
/*
......@@ -98,7 +96,6 @@ smb_dget_fpos(struct dentry *dentry, struct dentry *parent, unsigned long fpos)
}
/* If a pointer is invalid, we search the dentry. */
spin_lock(&dcache_lock);
spin_lock(&parent->d_lock);
next = parent->d_subdirs.next;
while (next != &parent->d_subdirs) {
......@@ -115,7 +112,6 @@ smb_dget_fpos(struct dentry *dentry, struct dentry *parent, unsigned long fpos)
dent = NULL;
out_unlock:
spin_unlock(&parent->d_lock);
spin_unlock(&dcache_lock);
return dent;
}
......
......@@ -343,7 +343,6 @@ static int usbfs_empty (struct dentry *dentry)
{
struct list_head *list;
spin_lock(&dcache_lock);
spin_lock(&dentry->d_lock);
list_for_each(list, &dentry->d_subdirs) {
struct dentry *de = list_entry(list, struct dentry, d_u.d_child);
......@@ -352,13 +351,11 @@ static int usbfs_empty (struct dentry *dentry)
if (usbfs_positive(de)) {
spin_unlock(&de->d_lock);
spin_unlock(&dentry->d_lock);
spin_unlock(&dcache_lock);
return 0;
}
spin_unlock(&de->d_lock);
}
spin_unlock(&dentry->d_lock);
spin_unlock(&dcache_lock);
return 1;
}
......
......@@ -270,13 +270,11 @@ static struct dentry *v9fs_dentry_from_dir_inode(struct inode *inode)
{
struct dentry *dentry;
spin_lock(&dcache_lock);
spin_lock(&dcache_inode_lock);
/* Directory should have only one entry. */
BUG_ON(S_ISDIR(inode->i_mode) && !list_is_singular(&inode->i_dentry));
dentry = list_entry(inode->i_dentry.next, struct dentry, d_alias);
spin_unlock(&dcache_inode_lock);
spin_unlock(&dcache_lock);
return dentry;
}
......
......@@ -128,7 +128,6 @@ affs_fix_dcache(struct dentry *dentry, u32 entry_ino)
void *data = dentry->d_fsdata;
struct list_head *head, *next;
spin_lock(&dcache_lock);
spin_lock(&dcache_inode_lock);
head = &inode->i_dentry;
next = head->next;
......@@ -141,7 +140,6 @@ affs_fix_dcache(struct dentry *dentry, u32 entry_ino)
next = next->next;
}
spin_unlock(&dcache_inode_lock);
spin_unlock(&dcache_lock);
}
......
......@@ -16,6 +16,7 @@
#include <linux/auto_fs4.h>
#include <linux/auto_dev-ioctl.h>
#include <linux/mutex.h>
#include <linux/spinlock.h>
#include <linux/list.h>
/* This is the range of ioctl() numbers we claim as ours */
......@@ -60,6 +61,8 @@ do { \
current->pid, __func__, ##args); \
} while (0)
extern spinlock_t autofs4_lock;
/* Unified info structure. This is pointed to by both the dentry and
inode structures. Each file in the filesystem has an instance of this
structure. It holds a reference to the dentry, so dentries are never
......
......@@ -102,7 +102,7 @@ static struct dentry *get_next_positive_dentry(struct dentry *prev,
if (prev == NULL)
return dget(prev);
spin_lock(&dcache_lock);
spin_lock(&autofs4_lock);
relock:
p = prev;
spin_lock(&p->d_lock);
......@@ -114,7 +114,7 @@ static struct dentry *get_next_positive_dentry(struct dentry *prev,
if (p == root) {
spin_unlock(&p->d_lock);
spin_unlock(&dcache_lock);
spin_unlock(&autofs4_lock);
dput(prev);
return NULL;
}
......@@ -144,7 +144,7 @@ static struct dentry *get_next_positive_dentry(struct dentry *prev,
dget_dlock(ret);
spin_unlock(&ret->d_lock);
spin_unlock(&p->d_lock);
spin_unlock(&dcache_lock);
spin_unlock(&autofs4_lock);
dput(prev);
......@@ -408,13 +408,13 @@ struct dentry *autofs4_expire_indirect(struct super_block *sb,
ino->flags |= AUTOFS_INF_EXPIRING;
init_completion(&ino->expire_complete);
spin_unlock(&sbi->fs_lock);
spin_lock(&dcache_lock);
spin_lock(&autofs4_lock);
spin_lock(&expired->d_parent->d_lock);
spin_lock_nested(&expired->d_lock, DENTRY_D_LOCK_NESTED);
list_move(&expired->d_parent->d_subdirs, &expired->d_u.d_child);
spin_unlock(&expired->d_lock);
spin_unlock(&expired->d_parent->d_lock);
spin_unlock(&dcache_lock);
spin_unlock(&autofs4_lock);
return expired;
}
......
......@@ -23,6 +23,8 @@
#include "autofs_i.h"
DEFINE_SPINLOCK(autofs4_lock);
static int autofs4_dir_symlink(struct inode *,struct dentry *,const char *);
static int autofs4_dir_unlink(struct inode *,struct dentry *);
static int autofs4_dir_rmdir(struct inode *,struct dentry *);
......@@ -142,15 +144,15 @@ static int autofs4_dir_open(struct inode *inode, struct file *file)
* autofs file system so just let the libfs routines handle
* it.
*/
spin_lock(&dcache_lock);
spin_lock(&autofs4_lock);
spin_lock(&dentry->d_lock);
if (!d_mountpoint(dentry) && list_empty(&dentry->d_subdirs)) {
spin_unlock(&dentry->d_lock);
spin_unlock(&dcache_lock);
spin_unlock(&autofs4_lock);
return -ENOENT;
}
spin_unlock(&dentry->d_lock);
spin_unlock(&dcache_lock);
spin_unlock(&autofs4_lock);
out:
return dcache_dir_open(inode, file);
......@@ -255,11 +257,11 @@ static void *autofs4_follow_link(struct dentry *dentry, struct nameidata *nd)
/* We trigger a mount for almost all flags */
lookup_type = autofs4_need_mount(nd->flags);
spin_lock(&sbi->fs_lock);
spin_lock(&dcache_lock);
spin_lock(&autofs4_lock);
spin_lock(&dentry->d_lock);
if (!(lookup_type || ino->flags & AUTOFS_INF_PENDING)) {
spin_unlock(&dentry->d_lock);
spin_unlock(&dcache_lock);
spin_unlock(&autofs4_lock);
spin_unlock(&sbi->fs_lock);
goto follow;
}
......@@ -272,7 +274,7 @@ static void *autofs4_follow_link(struct dentry *dentry, struct nameidata *nd)
if (ino->flags & AUTOFS_INF_PENDING ||
(!d_mountpoint(dentry) && list_empty(&dentry->d_subdirs))) {
spin_unlock(&dentry->d_lock);
spin_unlock(&dcache_lock);
spin_unlock(&autofs4_lock);
spin_unlock(&sbi->fs_lock);
status = try_to_fill_dentry(dentry, nd->flags);
......@@ -282,7 +284,7 @@ static void *autofs4_follow_link(struct dentry *dentry, struct nameidata *nd)
goto follow;
}
spin_unlock(&dentry->d_lock);
spin_unlock(&dcache_lock);
spin_unlock(&autofs4_lock);
spin_unlock(&sbi->fs_lock);
follow:
/*
......@@ -353,14 +355,14 @@ static int autofs4_revalidate(struct dentry *dentry, struct nameidata *nd)
return 0;
/* Check for a non-mountpoint directory with no contents */
spin_lock(&dcache_lock);
spin_lock(&autofs4_lock);
spin_lock(&dentry->d_lock);
if (S_ISDIR(dentry->d_inode->i_mode) &&
!d_mountpoint(dentry) && list_empty(&dentry->d_subdirs)) {
DPRINTK("dentry=%p %.*s, emptydir",
dentry, dentry->d_name.len, dentry->d_name.name);
spin_unlock(&dentry->d_lock);
spin_unlock(&dcache_lock);
spin_unlock(&autofs4_lock);
/* The daemon never causes a mount to trigger */
if (oz_mode)
......@@ -377,7 +379,7 @@ static int autofs4_revalidate(struct dentry *dentry, struct nameidata *nd)
return status;
}
spin_unlock(&dentry->d_lock);
spin_unlock(&dcache_lock);
spin_unlock(&autofs4_lock);
return 1;
}
......@@ -432,7 +434,7 @@ static struct dentry *autofs4_lookup_active(struct dentry *dentry)
const unsigned char *str = name->name;
struct list_head *p, *head;
spin_lock(&dcache_lock);
spin_lock(&autofs4_lock);
spin_lock(&sbi->lookup_lock);
head = &sbi->active_list;
list_for_each(p, head) {
......@@ -465,14 +467,14 @@ static struct dentry *autofs4_lookup_active(struct dentry *dentry)
dget_dlock(active);
spin_unlock(&active->d_lock);
spin_unlock(&sbi->lookup_lock);
spin_unlock(&dcache_lock);
spin_unlock(&autofs4_lock);
return active;
}
next:
spin_unlock(&active->d_lock);
}
spin_unlock(&sbi->lookup_lock);
spin_unlock(&dcache_lock);
spin_unlock(&autofs4_lock);
return NULL;
}
......@@ -487,7 +489,7 @@ static struct dentry *autofs4_lookup_expiring(struct dentry *dentry)
const unsigned char *str = name->name;
struct list_head *p, *head;
spin_lock(&dcache_lock);
spin_lock(&autofs4_lock);
spin_lock(&sbi->lookup_lock);
head = &sbi->expiring_list;
list_for_each(p, head) {
......@@ -520,14 +522,14 @@ static struct dentry *autofs4_lookup_expiring(struct dentry *dentry)
dget_dlock(expiring);
spin_unlock(&expiring->d_lock);
spin_unlock(&sbi->lookup_lock);
spin_unlock(&dcache_lock);
spin_unlock(&autofs4_lock);
return expiring;
}
next:
spin_unlock(&expiring->d_lock);
}
spin_unlock(&sbi->lookup_lock);
spin_unlock(&dcache_lock);
spin_unlock(&autofs4_lock);
return NULL;
}
......@@ -763,12 +765,12 @@ static int autofs4_dir_unlink(struct inode *dir, struct dentry *dentry)
dir->i_mtime = CURRENT_TIME;
spin_lock(&dcache_lock);
spin_lock(&autofs4_lock);
autofs4_add_expiring(dentry);
spin_lock(&dentry->d_lock);
__d_drop(dentry);
spin_unlock(&dentry->d_lock);
spin_unlock(&dcache_lock);
spin_unlock(&autofs4_lock);
return 0;
}
......@@ -785,20 +787,20 @@ static int autofs4_dir_rmdir(struct inode *dir, struct dentry *dentry)
if (!autofs4_oz_mode(sbi))
return -EACCES;
spin_lock(&dcache_lock);
spin_lock(&autofs4_lock);
spin_lock(&sbi->lookup_lock);
spin_lock(&dentry->d_lock);
if (!list_empty(&dentry->d_subdirs)) {
spin_unlock(&dentry->d_lock);
spin_unlock(&sbi->lookup_lock);
spin_unlock(&dcache_lock);
spin_unlock(&autofs4_lock);
return -ENOTEMPTY;
}
__autofs4_add_expiring(dentry);
spin_unlock(&sbi->lookup_lock);
__d_drop(dentry);
spin_unlock(&dentry->d_lock);
spin_unlock(&dcache_lock);
spin_unlock(&autofs4_lock);
if (atomic_dec_and_test(&ino->count)) {
p_ino = autofs4_dentry_ino(dentry->d_parent);
......
......@@ -194,14 +194,15 @@ static int autofs4_getpath(struct autofs_sb_info *sbi,
rename_retry:
buf = *name;
len = 0;
seq = read_seqbegin(&rename_lock);
rcu_read_lock();
spin_lock(&dcache_lock);
spin_lock(&autofs4_lock);
for (tmp = dentry ; tmp != root ; tmp = tmp->d_parent)
len += tmp->d_name.len + 1;
if (!len || --len > NAME_MAX) {
spin_unlock(&dcache_lock);
spin_unlock(&autofs4_lock);
rcu_read_unlock();
if (read_seqretry(&rename_lock, seq))
goto rename_retry;
......@@ -217,7 +218,7 @@ static int autofs4_getpath(struct autofs_sb_info *sbi,
p -= tmp->d_name.len;
strncpy(p, tmp->d_name.name, tmp->d_name.len);
}
spin_unlock(&dcache_lock);
spin_unlock(&autofs4_lock);
rcu_read_unlock();
if (read_seqretry(&rename_lock, seq))
goto rename_retry;
......
......@@ -112,7 +112,6 @@ static int __dcache_readdir(struct file *filp,
dout("__dcache_readdir %p at %llu (last %p)\n", dir, filp->f_pos,
last);
spin_lock(&dcache_lock);
spin_lock(&parent->d_lock);
/* start at beginning? */
......@@ -156,7 +155,6 @@ static int __dcache_readdir(struct file *filp,
dget_dlock(dentry);
spin_unlock(&dentry->d_lock);
spin_unlock(&parent->d_lock);
spin_unlock(&dcache_lock);
dout(" %llu (%llu) dentry %p %.*s %p\n", di->offset, filp->f_pos,
dentry, dentry->d_name.len, dentry->d_name.name, dentry->d_inode);
......@@ -182,21 +180,19 @@ static int __dcache_readdir(struct file *filp,
filp->f_pos++;
/* make sure a dentry wasn't dropped while we didn't have dcache_lock */
/* make sure a dentry wasn't dropped while we didn't have parent lock */
if (!ceph_i_test(dir, CEPH_I_COMPLETE)) {
dout(" lost I_COMPLETE on %p; falling back to mds\n", dir);
err = -EAGAIN;
goto out;
}
spin_lock(&dcache_lock);
spin_lock(&parent->d_lock);
p = p->prev; /* advance to next dentry */
goto more;
out_unlock:
spin_unlock(&parent->d_lock);
spin_unlock(&dcache_lock);
out:
if (last)
dput(last);
......
......@@ -841,7 +841,6 @@ static void ceph_set_dentry_offset(struct dentry *dn)
di->offset = ceph_inode(inode)->i_max_offset++;
spin_unlock(&inode->i_lock);
spin_lock(&dcache_lock);
spin_lock(&dir->d_lock);
spin_lock_nested(&dn->d_lock, DENTRY_D_LOCK_NESTED);
list_move(&dn->d_u.d_child, &dir->d_subdirs);
......@@ -849,7 +848,6 @@ static void ceph_set_dentry_offset(struct dentry *dn)
dn->d_u.d_child.prev, dn->d_u.d_child.next);
spin_unlock(&dn->d_lock);
spin_unlock(&dir->d_lock);
spin_unlock(&dcache_lock);
}
/*
......@@ -1233,13 +1231,11 @@ int ceph_readdir_prepopulate(struct ceph_mds_request *req,
goto retry_lookup;
} else {
/* reorder parent's d_subdirs */
spin_lock(&dcache_lock);
spin_lock(&parent->d_lock);
spin_lock_nested(&dn->d_lock, DENTRY_D_LOCK_NESTED);
list_move(&dn->d_u.d_child, &parent->d_subdirs);
spin_unlock(&dn->d_lock);
spin_unlock(&parent->d_lock);
spin_unlock(&dcache_lock);
}
di = dn->d_fsdata;
......
......@@ -809,17 +809,14 @@ inode_has_hashed_dentries(struct inode *inode)
{
struct dentry *dentry;
spin_lock(&dcache_lock);
spin_lock(&dcache_inode_lock);
list_for_each_entry(dentry, &inode->i_dentry, d_alias) {
if (!d_unhashed(dentry) || IS_ROOT(dentry)) {
spin_unlock(&dcache_inode_lock);
spin_unlock(&dcache_lock);
return true;
}
}
spin_unlock(&dcache_inode_lock);
spin_unlock(&dcache_lock);
return false;
}
......
......@@ -93,7 +93,6 @@ static void coda_flag_children(struct dentry *parent, int flag)
struct list_head *child;
struct dentry *de;
spin_lock(&dcache_lock);
spin_lock(&parent->d_lock);
list_for_each(child, &parent->d_subdirs)
{
......@@ -104,7 +103,6 @@ static void coda_flag_children(struct dentry *parent, int flag)
coda_flag_inode(de->d_inode, flag);
}
spin_unlock(&parent->d_lock);
spin_unlock(&dcache_lock);
return;
}
......
......@@ -120,7 +120,6 @@ static inline struct config_item *configfs_get_config_item(struct dentry *dentry
{
struct config_item * item = NULL;
spin_lock(&dcache_lock);
spin_lock(&dentry->d_lock);
if (!d_unhashed(dentry)) {
struct configfs_dirent * sd = dentry->d_fsdata;
......@@ -131,7 +130,6 @@ static inline struct config_item *configfs_get_config_item(struct dentry *dentry
item = config_item_get(sd->s_element);
}
spin_unlock(&dentry->d_lock);
spin_unlock(&dcache_lock);
return item;
}
......
......@@ -250,18 +250,14 @@ void configfs_drop_dentry(struct configfs_dirent * sd, struct dentry * parent)
struct dentry * dentry = sd->s_dentry;
if (dentry) {
spin_lock(&dcache_lock);
spin_lock(&dentry->d_lock);
if (!(d_unhashed(dentry) && dentry->d_inode)) {
dget_locked_dlock(dentry);
__d_drop(dentry);
spin_unlock(&dentry->d_lock);
spin_unlock(&dcache_lock);
simple_unlink(parent->d_inode, dentry);
} else {
} else
spin_unlock(&dentry->d_lock);
spin_unlock(&dcache_lock);
}
}
}
......
......@@ -47,24 +47,20 @@ find_acceptable_alias(struct dentry *result,
if (acceptable(context, result))
return result;
spin_lock(&dcache_lock);
spin_lock(&dcache_inode_lock);
list_for_each_entry(dentry, &result->d_inode->i_dentry, d_alias) {
dget_locked(dentry);
spin_unlock(&dcache_inode_lock);
spin_unlock(&dcache_lock);
if (toput)
dput(toput);
if (dentry != result && acceptable(context, dentry)) {
dput(result);
return dentry;
}
spin_lock(&dcache_lock);
spin_lock(&dcache_inode_lock);
toput = dentry;
}
spin_unlock(&dcache_inode_lock);
spin_unlock(&dcache_lock);
if (toput)
dput(toput);
......
......@@ -100,7 +100,6 @@ loff_t dcache_dir_lseek(struct file *file, loff_t offset, int origin)
struct dentry *cursor = file->private_data;
loff_t n = file->f_pos - 2;
spin_lock(&dcache_lock);
spin_lock(&dentry->d_lock);
/* d_lock not required for cursor */
list_del(&cursor->d_u.d_child);
......@@ -116,7 +115,6 @@ loff_t dcache_dir_lseek(struct file *file, loff_t offset, int origin)
}
list_add_tail(&cursor->d_u.d_child, p);
spin_unlock(&dentry->d_lock);
spin_unlock(&dcache_lock);
}
}
mutex_unlock(&dentry->d_inode->i_mutex);
......@@ -159,7 +157,6 @@ int dcache_readdir(struct file * filp, void * dirent, filldir_t filldir)
i++;
/* fallthrough */
default:
spin_lock(&dcache_lock);
spin_lock(&dentry->d_lock);
if (filp->f_pos == 2)
list_move(q, &dentry->d_subdirs);
......@@ -175,13 +172,11 @@ int dcache_readdir(struct file * filp, void * dirent, filldir_t filldir)
spin_unlock(&next->d_lock);
spin_unlock(&dentry->d_lock);
spin_unlock(&dcache_lock);
if (filldir(dirent, next->d_name.name,
next->d_name.len, filp->f_pos,
next->d_inode->i_ino,
dt_type(next->d_inode)) < 0)
return 0;
spin_lock(&dcache_lock);
spin_lock(&dentry->d_lock);
spin_lock_nested(&next->d_lock, DENTRY_D_LOCK_NESTED);
/* next is still alive */
......@@ -191,7 +186,6 @@ int dcache_readdir(struct file * filp, void * dirent, filldir_t filldir)
filp->f_pos++;
}
spin_unlock(&dentry->d_lock);
spin_unlock(&dcache_lock);
}
return 0;
}
......@@ -285,7 +279,6 @@ int simple_empty(struct dentry *dentry)
struct dentry *child;
int ret = 0;
spin_lock(&dcache_lock);
spin_lock(&dentry->d_lock);
list_for_each_entry(child, &dentry->d_subdirs, d_u.d_child) {
spin_lock_nested(&child->d_lock, DENTRY_D_LOCK_NESTED);
......@@ -298,7 +291,6 @@ int simple_empty(struct dentry *dentry)
ret = 1;
out:
spin_unlock(&dentry->d_lock);
spin_unlock(&dcache_lock);
return ret;
}
......
......@@ -612,8 +612,8 @@ int follow_up(struct path *path)
return 1;
}
/* no need for dcache_lock, as serialization is taken care in
* namespace.c
/*
* serialization is taken care of in namespace.c
*/
static int __follow_mount(struct path *path)
{
......@@ -645,9 +645,6 @@ static void follow_mount(struct path *path)
}
}
/* no need for dcache_lock, as serialization is taken care in
* namespace.c
*/
int follow_down(struct path *path)
{
struct vfsmount *mounted;
......@@ -2131,12 +2128,10 @@ void dentry_unhash(struct dentry *dentry)
{
dget(dentry);
shrink_dcache_parent(dentry);
spin_lock(&dcache_lock);
spin_lock(&dentry->d_lock);
if (dentry->d_count == 2)
__d_drop(dentry);
spin_unlock(&dentry->d_lock);
spin_unlock(&dcache_lock);
}
int vfs_rmdir(struct inode *dir, struct dentry *dentry)
......
......@@ -391,7 +391,6 @@ ncp_dget_fpos(struct dentry *dentry, struct dentry *parent, unsigned long fpos)
}
/* If a pointer is invalid, we search the dentry. */
spin_lock(&dcache_lock);
spin_lock(&parent->d_lock);
next = parent->d_subdirs.next;
while (next != &parent->d_subdirs) {
......@@ -402,13 +401,11 @@ ncp_dget_fpos(struct dentry *dentry, struct dentry *parent, unsigned long fpos)
else
dent = NULL;
spin_unlock(&parent->d_lock);
spin_unlock(&dcache_lock);
goto out;
}
next = next->next;
}
spin_unlock(&parent->d_lock);
spin_unlock(&dcache_lock);
return NULL;
out:
......
......@@ -193,7 +193,6 @@ ncp_renew_dentries(struct dentry *parent)
struct list_head *next;
struct dentry *dentry;
spin_lock(&dcache_lock);
spin_lock(&parent->d_lock);
next = parent->d_subdirs.next;
while (next != &parent->d_subdirs) {
......@@ -207,7 +206,6 @@ ncp_renew_dentries(struct dentry *parent)
next = next->next;
}
spin_unlock(&parent->d_lock);
spin_unlock(&dcache_lock);
}
static inline void
......@@ -217,7 +215,6 @@ ncp_invalidate_dircache_entries(struct dentry *parent)
struct list_head *next;
struct dentry *dentry;
spin_lock(&dcache_lock);
spin_lock(&parent->d_lock);
next = parent->d_subdirs.next;
while (next != &parent->d_subdirs) {
......@@ -227,7 +224,6 @@ ncp_invalidate_dircache_entries(struct dentry *parent)
next = next->next;
}
spin_unlock(&parent->d_lock);
spin_unlock(&dcache_lock);
}
struct ncp_cache_head {
......
......@@ -1718,11 +1718,9 @@ static int nfs_unlink(struct inode *dir, struct dentry *dentry)
dfprintk(VFS, "NFS: unlink(%s/%ld, %s)\n", dir->i_sb->s_id,
dir->i_ino, dentry->d_name.name);
spin_lock(&dcache_lock);
spin_lock(&dentry->d_lock);
if (dentry->d_count > 1) {
spin_unlock(&dentry->d_lock);
spin_unlock(&dcache_lock);
/* Start asynchronous writeout of the inode */
write_inode_now(dentry->d_inode, 0);
error = nfs_sillyrename(dir, dentry);
......@@ -1733,7 +1731,6 @@ static int nfs_unlink(struct inode *dir, struct dentry *dentry)
need_rehash = 1;
}
spin_unlock(&dentry->d_lock);
spin_unlock(&dcache_lock);
error = nfs_safe_remove(dentry);
if (!error || error == -ENOENT) {
nfs_set_verifier(dentry, nfs_save_change_attribute(dir));
......
......@@ -63,13 +63,11 @@ static int nfs_superblock_set_dummy_root(struct super_block *sb, struct inode *i
* This again causes shrink_dcache_for_umount_subtree() to
* Oops, since the test for IS_ROOT() will fail.
*/
spin_lock(&dcache_lock);
spin_lock(&dcache_inode_lock);
spin_lock(&sb->s_root->d_lock);
list_del_init(&sb->s_root->d_alias);
spin_unlock(&sb->s_root->d_lock);
spin_unlock(&dcache_inode_lock);
spin_unlock(&dcache_lock);
}
return 0;
}
......
......@@ -60,7 +60,6 @@ char *nfs_path(const char *base,
seq = read_seqbegin(&rename_lock);
rcu_read_lock();
spin_lock(&dcache_lock);
while (!IS_ROOT(dentry) && dentry != droot) {
namelen = dentry->d_name.len;
buflen -= namelen + 1;
......@@ -71,7 +70,6 @@ char *nfs_path(const char *base,
*--end = '/';
dentry = dentry->d_parent;
}
spin_unlock(&dcache_lock);
rcu_read_unlock();
if (read_seqretry(&rename_lock, seq))
goto rename_retry;
......@@ -91,7 +89,6 @@ char *nfs_path(const char *base,
memcpy(end, base, namelen);
return end;
Elong_unlock:
spin_unlock(&dcache_lock);
rcu_read_unlock();
if (read_seqretry(&rename_lock, seq))
goto rename_retry;
......
......@@ -59,7 +59,6 @@ void __fsnotify_update_child_dentry_flags(struct inode *inode)
/* determine if the children should tell inode about their events */
watched = fsnotify_inode_watches_children(inode);
spin_lock(&dcache_lock);
spin_lock(&dcache_inode_lock);
/* run all of the dentries associated with this inode. Since this is a
* directory, there damn well better only be one item on this list */
......@@ -84,7 +83,6 @@ void __fsnotify_update_child_dentry_flags(struct inode *inode)
spin_unlock(&alias->d_lock);
}
spin_unlock(&dcache_inode_lock);
spin_unlock(&dcache_lock);
}
/* Notify this dentry's parent about a child's events. */
......
......@@ -169,7 +169,6 @@ struct dentry *ocfs2_find_local_alias(struct inode *inode,
struct list_head *p;
struct dentry *dentry = NULL;
spin_lock(&dcache_lock);
spin_lock(&dcache_inode_lock);
list_for_each(p, &inode->i_dentry) {
dentry = list_entry(p, struct dentry, d_alias);
......@@ -189,7 +188,6 @@ struct dentry *ocfs2_find_local_alias(struct inode *inode,
}
spin_unlock(&dcache_inode_lock);
spin_unlock(&dcache_lock);
return dentry;
}
......
......@@ -183,7 +183,6 @@ struct dentry_operations {
#define DCACHE_GENOCIDE 0x0200
extern spinlock_t dcache_inode_lock;
extern spinlock_t dcache_lock;
extern seqlock_t rename_lock;
static inline int dname_external(struct dentry *dentry)
......@@ -296,8 +295,8 @@ extern char *dentry_path(struct dentry *, char *, int);
* destroyed when it has references. dget() should never be
* called for dentries with zero reference counter. For these cases
* (preferably none, functions in dcache.c are sufficient for normal
* needs and they take necessary precautions) you should hold dcache_lock
* and call dget_locked() instead of dget().
* needs and they take necessary precautions) you should hold d_lock
* and call dget_dlock() instead of dget().
*/
static inline struct dentry *dget_dlock(struct dentry *dentry)
{
......
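A minimal usage sketch of the updated dget_dlock() rule above (assuming the
caller has already located 'dentry' and needs a reference while holding its
d_lock):

	spin_lock(&dentry->d_lock);
	if (!d_unhashed(dentry))
		dget_dlock(dentry);	/* ref taken with d_lock held */
	spin_unlock(&dentry->d_lock);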
......@@ -1378,7 +1378,7 @@ struct super_block {
#else
struct list_head s_files;
#endif
/* s_dentry_lru and s_nr_dentry_unused are protected by dcache_lock */
/* s_dentry_lru, s_nr_dentry_unused protected by dcache.c lru locks */
struct list_head s_dentry_lru; /* unused dentry lru */
int s_nr_dentry_unused; /* # of dentry on lru */
......@@ -2446,6 +2446,10 @@ static inline ino_t parent_ino(struct dentry *dentry)
{
ino_t res;
/*
* Don't strictly need d_lock here? If the parent ino could change
* then surely we'd have a deeper race in the caller?
*/
spin_lock(&dentry->d_lock);
res = dentry->d_parent->d_inode->i_ino;
spin_unlock(&dentry->d_lock);
......
......@@ -17,7 +17,6 @@
/*
* fsnotify_d_instantiate - instantiate a dentry for inode
* Called with dcache_lock held.
*/
static inline void fsnotify_d_instantiate(struct dentry *dentry,
struct inode *inode)
......@@ -62,7 +61,6 @@ static inline int fsnotify_perm(struct file *file, int mask)
/*
* fsnotify_d_move - dentry has been moved
* Called with dcache_lock and dentry->d_lock held.
*/
static inline void fsnotify_d_move(struct dentry *dentry)
{
......
......@@ -329,9 +329,15 @@ static inline void __fsnotify_update_dcache_flags(struct dentry *dentry)
{
struct dentry *parent;
assert_spin_locked(&dcache_lock);
assert_spin_locked(&dentry->d_lock);
/*
* Serialisation of setting PARENT_WATCHED on the dentries is provided
* by d_lock. If inotify_inode_watched changes after we have taken
* d_lock, the following __fsnotify_update_child_dentry_flags call will
* find our entry, so it will spin until we complete here, and update
* us with the new state.
*/
parent = dentry->d_parent;
if (parent->d_inode && fsnotify_inode_watches_children(parent->d_inode))
dentry->d_flags |= DCACHE_FSNOTIFY_PARENT_WATCHED;
......@@ -341,15 +347,12 @@ static inline void __fsnotify_update_dcache_flags(struct dentry *dentry)
/*
* fsnotify_d_instantiate - instantiate a dentry for inode
* Called with dcache_lock held.
*/
static inline void __fsnotify_d_instantiate(struct dentry *dentry, struct inode *inode)
{
if (!inode)
return;
assert_spin_locked(&dcache_lock);
spin_lock(&dentry->d_lock);
__fsnotify_update_dcache_flags(dentry);
spin_unlock(&dentry->d_lock);
......
......@@ -41,7 +41,6 @@ enum {LAST_NORM, LAST_ROOT, LAST_DOT, LAST_DOTDOT, LAST_BIND};
* - require a directory
* - ending slashes ok even for nonexistent files
* - internal "there are more path components" flag
* - locked when lookup done with dcache_lock held
* - dentry cache is untrusted; force a real lookup
*/
#define LOOKUP_FOLLOW 1
......
......@@ -876,7 +876,6 @@ static void cgroup_clear_directory(struct dentry *dentry)
struct list_head *node;
BUG_ON(!mutex_is_locked(&dentry->d_inode->i_mutex));
spin_lock(&dcache_lock);
spin_lock(&dentry->d_lock);
node = dentry->d_subdirs.next;
while (node != &dentry->d_subdirs) {
......@@ -891,18 +890,15 @@ static void cgroup_clear_directory(struct dentry *dentry)
dget_locked_dlock(d);
spin_unlock(&d->d_lock);
spin_unlock(&dentry->d_lock);
spin_unlock(&dcache_lock);
d_delete(d);
simple_unlink(dentry->d_inode, d);
dput(d);
spin_lock(&dcache_lock);
spin_lock(&dentry->d_lock);
} else
spin_unlock(&d->d_lock);
node = dentry->d_subdirs.next;
}
spin_unlock(&dentry->d_lock);
spin_unlock(&dcache_lock);
}
/*
......@@ -914,14 +910,12 @@ static void cgroup_d_remove_dir(struct dentry *dentry)
cgroup_clear_directory(dentry);
spin_lock(&dcache_lock);
parent = dentry->d_parent;
spin_lock(&parent->d_lock);
spin_lock(&dentry->d_lock);
list_del_init(&dentry->d_u.d_child);
spin_unlock(&dentry->d_lock);
spin_unlock(&parent->d_lock);
spin_unlock(&dcache_lock);
remove_dir(dentry);
}
......
......@@ -102,9 +102,6 @@
* ->inode_lock (zap_pte_range->set_page_dirty)
* ->private_lock (zap_pte_range->__set_page_dirty_buffers)
*
* ->task->proc_lock
* ->dcache_lock (proc_pid_lookup)
*
* (code doesn't rely on that order, so you could switch it around)
* ->tasklist_lock (memory_failure, collect_procs_ao)
* ->i_mmap_lock
......
......@@ -1145,7 +1145,6 @@ static void sel_remove_entries(struct dentry *de)
{
struct list_head *node;
spin_lock(&dcache_lock);
spin_lock(&de->d_lock);
node = de->d_subdirs.next;
while (node != &de->d_subdirs) {
......@@ -1158,11 +1157,9 @@ static void sel_remove_entries(struct dentry *de)
dget_locked_dlock(d);
spin_unlock(&de->d_lock);
spin_unlock(&d->d_lock);
spin_unlock(&dcache_lock);
d_delete(d);
simple_unlink(de->d_inode, d);
dput(d);
spin_lock(&dcache_lock);
spin_lock(&de->d_lock);
} else
spin_unlock(&d->d_lock);
......@@ -1170,7 +1167,6 @@ static void sel_remove_entries(struct dentry *de)
}
spin_unlock(&de->d_lock);
spin_unlock(&dcache_lock);
}
#define BOOL_DIR_NAME "booleans"
......