Commit 988e4532 authored by Paul Mackerras, committed by Greg Kroah-Hartman

powerpc: Make sure "cache" directory is removed when offlining cpu

commit 91b973f9 upstream.

The code in remove_cache_dir() is supposed to remove the "cache"
subdirectory from the sysfs directory for a CPU when that CPU is
being offlined.  It tries to do this by calling kobject_put() on
the kobject for the subdirectory.  However, the subdirectory only
gets removed once the last reference goes away, and the reference
being put here may well not be the last reference.  That means
that the "cache" subdirectory may still exist when the offlining
operation has finished.  If the same CPU subsequently gets onlined,
the code tries to add a new "cache" subdirectory.  If the old
subdirectory has not yet been removed, we get a WARN_ON in the
sysfs code, with stack trace, and an error message printed on the
console.  Further, we ultimately end up with an online CPU with no
"cache" subdirectory.

This fixes it by doing an explicit kobject_del() at the point where
we want the subdirectory to go away.  kobject_del() removes the sysfs
directory even though the object still exists in memory.  The object
will get freed at some point in the future.  A subsequent onlining
operation can create a new sysfs directory, even if the old object
still exists in memory, without causing any problems.
Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
parent 06c23d08
@@ -788,6 +788,9 @@ static void remove_cache_dir(struct cache_dir *cache_dir)
 {
        remove_index_dirs(cache_dir);

+       /* Remove cache dir from sysfs */
+       kobject_del(cache_dir->kobj);
+
        kobject_put(cache_dir->kobj);

        kfree(cache_dir);
...
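For reference, a sketch of the function as it stands after the patch,
reconstructed from the hunk above (the rest of the file is omitted).
kobject_del() unregisters the kobject and removes its sysfs directory
immediately, while kobject_put() merely drops this reference, so the
kobject is freed whenever the last reference disappears, without
keeping the directory alive:

static void remove_cache_dir(struct cache_dir *cache_dir)
{
        remove_index_dirs(cache_dir);

        /* Remove the "cache" directory from sysfs right away, even
         * though other references to the kobject may still exist. */
        kobject_del(cache_dir->kobj);

        /* Drop our reference; the kobject itself is freed once the
         * last reference is gone. */
        kobject_put(cache_dir->kobj);

        kfree(cache_dir);
}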