Commit 14208b0e authored by Linus Torvalds

Merge branch 'for-3.16' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/cgroup

Pull cgroup updates from Tejun Heo:
 "A lot of activities on cgroup side.  Heavy restructuring including
  locking simplification took place to improve the code base and enable
  implementation of the unified hierarchy, which currently exists behind
  a __DEVEL__ mount option.  The core support is mostly complete but
  individual controllers need further work.  To explain the design and
   rationales of the unified hierarchy

        Documentation/cgroups/unified-hierarchy.txt

  is added.

  Another notable change is css (cgroup_subsys_state - what each
  controller uses to identify and interact with a cgroup) iteration
  update.  This is part of continuing updates on css object lifetime and
  visibility.  cgroup started with reference count draining on removal
  way back and is now reaching a point where csses behave and are
  iterated like normal refcnted objects albeit with some complexities to
  allow distinguishing the state where they're being deleted.  The css
  iteration update isn't taken advantage of yet but is planned to be
  used to simplify memcg significantly"

* 'for-3.16' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/cgroup: (77 commits)
  cgroup: disallow disabled controllers on the default hierarchy
  cgroup: don't destroy the default root
  cgroup: disallow debug controller on the default hierarchy
  cgroup: clean up MAINTAINERS entries
  cgroup: implement css_tryget()
  device_cgroup: use css_has_online_children() instead of has_children()
  cgroup: convert cgroup_has_live_children() into css_has_online_children()
  cgroup: use CSS_ONLINE instead of CGRP_DEAD
  cgroup: iterate cgroup_subsys_states directly
  cgroup: introduce CSS_RELEASED and reduce css iteration fallback window
  cgroup: move cgroup->serial_nr into cgroup_subsys_state
  cgroup: link all cgroup_subsys_states in their sibling lists
  cgroup: move cgroup->sibling and ->children into cgroup_subsys_state
  cgroup: remove cgroup->parent
  device_cgroup: remove direct access to cgroup->children
  memcg: update memcg_has_children() to use css_next_child()
  memcg: remove tasks/children test from mem_cgroup_force_empty()
  cgroup: remove css_parent()
  cgroup: skip refcnting on normal root csses and cgrp_dfl_root self css
  cgroup: use cgroup->self.refcnt for cgroup refcnting
  ...
parents 6ea4fa70 c731ae1d
...@@ -458,15 +458,11 @@ About use_hierarchy, see Section 6. ...@@ -458,15 +458,11 @@ About use_hierarchy, see Section 6.
5.1 force_empty 5.1 force_empty
memory.force_empty interface is provided to make cgroup's memory usage empty. memory.force_empty interface is provided to make cgroup's memory usage empty.
You can use this interface only when the cgroup has no tasks.
When writing anything to this When writing anything to this
# echo 0 > memory.force_empty # echo 0 > memory.force_empty
Almost all pages tracked by this memory cgroup will be unmapped and freed. the cgroup will be reclaimed and as many pages reclaimed as possible.
Some pages cannot be freed because they are locked or in-use. Such pages are
moved to parent (if use_hierarchy==1) or root (if use_hierarchy==0) and this
cgroup will be empty.
The typical use case for this interface is before calling rmdir(). The typical use case for this interface is before calling rmdir().
Because rmdir() moves all pages to parent, some out-of-use page caches can be Because rmdir() moves all pages to parent, some out-of-use page caches can be
......
Cgroup unified hierarchy
April, 2014 Tejun Heo <tj@kernel.org>
This document describes the changes made by unified hierarchy and
their rationales. It will eventually be merged into the main cgroup
documentation.
CONTENTS
1. Background
2. Basic Operation
2-1. Mounting
2-2. cgroup.subtree_control
2-3. cgroup.controllers
3. Structural Constraints
3-1. Top-down
3-2. No internal tasks
4. Other Changes
4-1. [Un]populated Notification
4-2. Other Core Changes
4-3. Per-Controller Changes
4-3-1. blkio
4-3-2. cpuset
4-3-3. memory
5. Planned Changes
5-1. CAP for resource control
1. Background
cgroup allows an arbitrary number of hierarchies and each hierarchy
can host any number of controllers. While this seems to provide a
high level of flexibility, it isn't quite useful in practice.
For example, as there is only one instance of each controller, utility
type controllers such as freezer, which can be useful in all
hierarchies, can only be used in one. The issue is exacerbated by the
fact that controllers can't be moved around once hierarchies are
populated. Another issue is that all controllers bound to a hierarchy
are forced to have exactly the same view of the hierarchy. It isn't
possible to vary the granularity depending on the specific controller.
In practice, these issues heavily limit which controllers can be put
on the same hierarchy and most configurations resort to putting each
controller on its own hierarchy. Only closely related ones, such as
the cpu and cpuacct controllers, make sense to put on the same
hierarchy. This often means that userland ends up managing multiple
similar hierarchies repeating the same steps on each hierarchy
whenever a hierarchy management operation is necessary.
Unfortunately, support for multiple hierarchies comes at a steep cost.
Internal implementation in cgroup core proper is dazzlingly
complicated but more importantly the support for multiple hierarchies
restricts how cgroup is used in general and what controllers can do.
There's no limit on how many hierarchies there may be, which means
that a task's cgroup membership can't be described in finite length.
The key may contain any number of entries and is unlimited in
length, which makes it highly awkward to handle and leads to addition
of controllers which exist only to identify membership, which in turn
exacerbates the original problem.
Also, as a controller can't have any expectation regarding what shape
of hierarchies other controllers would be on, each controller has to
assume that all other controllers are operating on completely
orthogonal hierarchies. This makes it impossible, or at least very
cumbersome, for controllers to cooperate with each other.
In most use cases, putting controllers on hierarchies which are
completely orthogonal to each other isn't necessary. What usually is
called for is the ability to have differing levels of granularity
depending on the specific controller. In other words, hierarchy may
be collapsed from leaf towards root when viewed from specific
controllers. For example, a given configuration might not care about
how memory is distributed beyond a certain level while still wanting
to control how CPU cycles are distributed.
Unified hierarchy is the next version of cgroup interface. It aims to
address the aforementioned issues by having more structure while
retaining enough flexibility for most use cases. Various other
general and controller-specific interface issues are also addressed in
the process.
2. Basic Operation
2-1. Mounting
Currently, unified hierarchy can be mounted with the following mount
command. Note that this is still under development and scheduled to
change soon.
mount -t cgroup -o __DEVEL__sane_behavior cgroup $MOUNT_POINT
All controllers which are not bound to other hierarchies are
automatically bound to unified hierarchy and show up at the root of
it. Controllers which are enabled only in the root of unified
hierarchy can be bound to other hierarchies at any time. This allows
mixing unified hierarchy with the traditional multiple hierarchies in
a fully backward compatible way.
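As a rough sketch of such mixing (the mount points and the choice of
controller are illustrative, and this assumes the cpu controller isn't
enabled anywhere below the unified root):
  # mount -t cgroup -o __DEVEL__sane_behavior cgroup /unified
  # mount -t cgroup -o cpu cgroup /legacy-cpu
After the second mount, cpu no longer shows up in the unified
hierarchy's "cgroup.controllers" files and is managed through the
traditional hierarchy at /legacy-cpu.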
2-2. cgroup.subtree_control
All cgroups on unified hierarchy have a "cgroup.subtree_control" file
which governs which controllers are enabled on the children of the
cgroup. Let's assume a hierarchy like the following.
root - A - B - C
\ D
root's "cgroup.subtree_control" file determines which controllers are
enabled on A. A's on B. B's on C and D. This coincides with the
fact that controllers on the immediate sub-level are used to
distribute the resources of the parent. In fact, it's natural to
assume that resource control knobs of a child belong to its parent.
Enabling a controller in a "cgroup.subtree_control" file declares that
distribution of the respective resources of the cgroup will be
controlled. Note that this means that controller enable states are
shared among siblings.
When read, the file contains a space-separated list of currently
enabled controllers. A write to the file should contain a
space-separated list of controllers with '+' or '-' prefixed (without
the quotes). Controllers prefixed with '+' are enabled and '-'
disabled. If a controller is listed multiple times, the last entry
wins. The specific operations are executed atomically - either all
succeed or fail.
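For example, with the layout above and assuming blkio and cpu both
appear in A's "cgroup.controllers", a hypothetical session enabling
blkio and disabling cpu for A's children could look like this:
  # cat $MOUNT_POINT/A/cgroup.subtree_control
  cpu memory
  # echo "+blkio -cpu" > $MOUNT_POINT/A/cgroup.subtree_control
  # cat $MOUNT_POINT/A/cgroup.subtree_control
  memory blkio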
2-3. cgroup.controllers
Read-only "cgroup.controllers" file contains a space-separated list of
controllers which can be enabled in the cgroup's
"cgroup.subtree_control" file.
In the root cgroup, this lists controllers which are not bound to
other hierarchies and the content changes as controllers are bound to
and unbound from other hierarchies.
In non-root cgroups, the content of this file equals that of the
parent's "cgroup.subtree_control" file as only controllers enabled
from the parent can be used in its children.
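To illustrate (the exact list depends on the kernel configuration and
on which controllers are currently bound to other hierarchies):
  # cat $MOUNT_POINT/cgroup.controllers
  cpuset cpu cpuacct memory devices freezer blkio
  # echo "+memory" > $MOUNT_POINT/cgroup.subtree_control
  # cat $MOUNT_POINT/A/cgroup.controllers
  memory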
3. Structural Constraints
3-1. Top-down
As it doesn't make sense to nest control of an uncontrolled resource,
all non-root "cgroup.subtree_control" files can only contain
controllers which are enabled in the parent's "cgroup.subtree_control"
file. A controller can be enabled only if the parent has the
controller enabled and a controller can't be disabled if one or more
children have it enabled.
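For example, if A's "cgroup.subtree_control" enables only memory, a
hypothetical attempt to enable cpu one level further down is rejected
by the kernel (currently as a write error such as ENOENT, though the
exact error code shouldn't be relied upon):
  # cat $MOUNT_POINT/A/cgroup.subtree_control
  memory
  # echo "+cpu" > $MOUNT_POINT/A/B/cgroup.subtree_control    # fails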
3-2. No internal tasks
One long-standing issue that cgroup faces is the competition between
tasks belonging to the parent cgroup and its children cgroups. This
is inherently nasty as two different types of entities compete and
there is no agreed-upon obvious way to handle it. Different
controllers are doing different things.
The cpu controller considers tasks and cgroups as equivalents and maps
nice levels to cgroup weights. This works for some cases but falls
flat when children should be allocated specific ratios of CPU cycles
and the number of internal tasks fluctuates - the ratios constantly
change as the number of competing entities fluctuates. There also are
other issues. The mapping from nice level to weight isn't obvious or
universal, and there are various other knobs which simply aren't
available for tasks.
The blkio controller implicitly creates a hidden leaf node for each
cgroup to host the tasks. The hidden leaf has its own copies of all
the knobs with "leaf_" prefixed. While this allows equivalent control
over internal tasks, it comes with serious drawbacks. It always adds an
extra layer of nesting which may not be necessary, makes the interface
messy and significantly complicates the implementation.
The memory controller currently doesn't have a way to control what
happens between internal tasks and child cgroups and the behavior is
not clearly defined. There have been attempts to add ad-hoc behaviors
and knobs to tailor the behavior to specific workloads. Continuing
this direction will lead to problems which will be extremely difficult
to resolve in the long term.
Multiple controllers struggle with internal tasks and came up with
different ways to deal with it; unfortunately, all the approaches in
use now are severely flawed and, furthermore, the widely different
behaviors make cgroup as a whole highly inconsistent.
It is clear that this is something which needs to be addressed from
cgroup core proper in a uniform way so that controllers don't need to
worry about it and cgroup as a whole shows a consistent and logical
behavior. To achieve that, unified hierarchy enforces the following
structural constraint:
Except for the root, only cgroups which don't contain any task may
have controllers enabled in their "cgroup.subtree_control" files.
Combined with other properties, this guarantees that, when a
controller is looking at the part of the hierarchy which has it
enabled, tasks are always only on the leaves. This rules out
situations where child cgroups compete against internal tasks of the
parent.
There are two things to note. Firstly, the root cgroup is exempt from
the restriction. Root contains tasks and anonymous resource
consumption which can't be associated with any other cgroup and
requires special treatment from most controllers. How resource
consumption in the root cgroup is governed is up to each controller.
Secondly, the restriction doesn't take effect if there is no enabled
controller in the cgroup's "cgroup.subtree_control" file. This is
important as otherwise it wouldn't be possible to create children of a
populated cgroup. To control resource distribution of a cgroup, the
cgroup must create children and transfer all its tasks to the children
before enabling controllers in its "cgroup.subtree_control" file.
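A workflow sketch with hypothetical names - create a leaf child, move
every process out of A into it, then turn on control for A's children:
  # mkdir $MOUNT_POINT/A/leaf
  # for p in $(cat $MOUNT_POINT/A/cgroup.procs); do echo $p > $MOUNT_POINT/A/leaf/cgroup.procs; done
  # echo "+memory" > $MOUNT_POINT/A/cgroup.subtree_control
The loop is a simplification - if processes keep being created in A
while it runs, it may need to be repeated until A's "cgroup.procs" is
empty.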
4. Other Changes
4-1. [Un]populated Notification
cgroup users often need a way to determine when a cgroup's
subhierarchy becomes empty so that it can be cleaned up. cgroup
currently provides release_agent for it; unfortunately, this mechanism
is riddled with issues.
- It delivers events by forking and execing a userland binary
specified as the release_agent. This is a long deprecated method of
notification delivery. It's extremely heavy, slow and cumbersome to
integrate with larger infrastructure.
- There is a single monitoring point at the root. There's no way to
delegate management of a subtree.
- The event isn't recursive. It triggers when a cgroup doesn't have
any tasks or child cgroups. Events for internal nodes trigger only
after all children are removed. This again makes it impossible to
delegate management of a subtree.
- Events are filtered from the kernel side. A "notify_on_release"
file is used to subscribe to or suppress release events. This is
unnecessarily complicated and probably done this way because event
delivery itself was expensive.
Unified hierarchy implements an interface file "cgroup.populated"
which can be used to monitor whether the cgroup's subhierarchy has
tasks in it or not. Its value is 0 if there is no task in the cgroup
and its descendants; otherwise, 1. poll and [id]notify events are
triggered when the value changes.
This is significantly lighter and simpler and trivially allows
delegating management of a subhierarchy - a subhierarchy monitor can
block further propagation simply by putting itself or another process
in the subhierarchy and monitoring the events it's interested in from
there, without interfering with monitoring higher in the tree.
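A minimal monitoring sketch, assuming the inotifywait utility from
inotify-tools is available (any poll() or inotify based watcher works
the same way):
  # cat $MOUNT_POINT/A/cgroup.populated
  1
  # inotifywait -qq -e modify $MOUNT_POINT/A/cgroup.populated
  # cat $MOUNT_POINT/A/cgroup.populated
  0
The inotifywait command blocks until the file generates a
notification, i.e. until the populated state of A's subhierarchy
changes.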
In unified hierarchy, the release_agent mechanism is no longer
supported and the interface files "release_agent" and
"notify_on_release" do not exist.
4-2. Other Core Changes
- None of the mount options is allowed.
- remount is disallowed.
- rename(2) is disallowed.
- The "tasks" file is removed. Everything should at process
granularity. Use the "cgroup.procs" file instead.
- The "cgroup.procs" file is not sorted. pids will be unique unless
they got recycled in-between reads.
- The "cgroup.clone_children" file is removed.
4-3. Per-Controller Changes
4-3-1. blkio
- blk-throttle becomes properly hierarchical.
4-3-2. cpuset
- Tasks are kept in empty cpusets after hotplug and take on the masks
of the nearest non-empty ancestor, instead of being moved to it.
- A task can be moved into an empty cpuset, and again it takes on the
masks of the nearest non-empty ancestor.
4-3-3. memory
- use_hierarchy is on by default and the cgroup file for the flag is
not created.
5. Planned Changes
5-1. CAP for resource control
Unified hierarchy will require one of the capabilities(7), which is
yet to be decided, for all resource control related knobs. Process
organization operations - creation of sub-cgroups and migration of
processes in sub-hierarchies - may be delegated by changing the
ownership and/or permissions on the cgroup directory and
"cgroup.procs" interface file; however, all operations which affect
resource control - writes to a "cgroup.subtree_control" file or any
controller-specific knobs - will require an explicit CAP privilege.
This, in part, is to prevent the cgroup interface from being
inadvertently promoted to a programmable API used by non-privileged
binaries. cgroup exposes various aspects of the system in ways which
aren't properly abstracted for direct consumption by regular programs.
This is an administration interface much closer to sysctl knobs than
system calls. Even the basic access model, being filesystem path
based, isn't suitable for direct consumption. There's no way to
access "my cgroup" in a race-free way or make multiple operations
atomic against migration to another cgroup.
Another aspect is that, for better or for worse, the cgroup interface
goes through far less scrutiny than regular interfaces for
unprivileged userland. The upside is that cgroup is able to expose
useful features which may not be suitable for general consumption in a
reasonable time frame. It provides a relatively short path between
internal details and userland-visible interface. Of course, this
shortcut comes with high risk. We go through what we go through for
general kernel APIs for good reasons. It may end up leaking internal
details in a way which can exert significant pain by locking the
kernel into a contract that can't be maintained in a reasonable
manner.
Also, due to their specific nature, cgroup and its controllers don't
tend to attract attention from a wide scope of developers. cgroup's
short history is already fraught with severely mis-designed
interfaces, unnecessary commitments to and exposing of internal
details, and broken and dangerous implementations of various features.
Keeping cgroup as an administration interface is both advantageous for
its role and imperative given its nature. Some of the cgroup features
may make sense for unprivileged access. If deemed justified, those
must be further abstracted and implemented as a different interface,
be it a system call or process-private filesystem, and survive through
the scrutiny that any interface for general consumption is required to
go through.
Requiring CAP is not a complete solution but should serve as a
significant deterrent against spraying cgroup usages in non-privileged
programs.
...@@ -2384,16 +2384,35 @@ L: netdev@vger.kernel.org ...@@ -2384,16 +2384,35 @@ L: netdev@vger.kernel.org
S: Maintained S: Maintained
F: drivers/connector/ F: drivers/connector/
CONTROL GROUPS (CGROUPS) CONTROL GROUP (CGROUP)
M: Tejun Heo <tj@kernel.org> M: Tejun Heo <tj@kernel.org>
M: Li Zefan <lizefan@huawei.com> M: Li Zefan <lizefan@huawei.com>
L: containers@lists.linux-foundation.org
L: cgroups@vger.kernel.org L: cgroups@vger.kernel.org
T: git git://git.kernel.org/pub/scm/linux/kernel/git/tj/cgroup.git T: git git://git.kernel.org/pub/scm/linux/kernel/git/tj/cgroup.git
S: Maintained S: Maintained
F: Documentation/cgroups/
F: include/linux/cgroup* F: include/linux/cgroup*
F: kernel/cgroup* F: kernel/cgroup*
F: mm/*cgroup*
CONTROL GROUP - CPUSET
M: Li Zefan <lizefan@huawei.com>
L: cgroups@vger.kernel.org
W: http://www.bullopensource.org/cpuset/
W: http://oss.sgi.com/projects/cpusets/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/tj/cgroup.git
S: Maintained
F: Documentation/cgroups/cpusets.txt
F: include/linux/cpuset.h
F: kernel/cpuset.c
CONTROL GROUP - MEMORY RESOURCE CONTROLLER (MEMCG)
M: Johannes Weiner <hannes@cmpxchg.org>
M: Michal Hocko <mhocko@suse.cz>
L: cgroups@vger.kernel.org
L: linux-mm@kvack.org
S: Maintained
F: mm/memcontrol.c
F: mm/page_cgroup.c
CORETEMP HARDWARE MONITORING DRIVER CORETEMP HARDWARE MONITORING DRIVER
M: Fenghua Yu <fenghua.yu@intel.com> M: Fenghua Yu <fenghua.yu@intel.com>
...@@ -2464,17 +2483,6 @@ M: Thomas Renninger <trenn@suse.de> ...@@ -2464,17 +2483,6 @@ M: Thomas Renninger <trenn@suse.de>
S: Maintained S: Maintained
F: tools/power/cpupower/ F: tools/power/cpupower/
CPUSETS
M: Li Zefan <lizefan@huawei.com>
L: cgroups@vger.kernel.org
W: http://www.bullopensource.org/cpuset/
W: http://oss.sgi.com/projects/cpusets/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/tj/cgroup.git
S: Maintained
F: Documentation/cgroups/cpusets.txt
F: include/linux/cpuset.h
F: kernel/cpuset.c
CRAMFS FILESYSTEM CRAMFS FILESYSTEM
W: http://sourceforge.net/projects/cramfs/ W: http://sourceforge.net/projects/cramfs/
S: Orphan / Obsolete S: Orphan / Obsolete
...@@ -5757,17 +5765,6 @@ F: include/linux/memory_hotplug.h ...@@ -5757,17 +5765,6 @@ F: include/linux/memory_hotplug.h
F: include/linux/vmalloc.h F: include/linux/vmalloc.h
F: mm/ F: mm/
MEMORY RESOURCE CONTROLLER
M: Johannes Weiner <hannes@cmpxchg.org>
M: Michal Hocko <mhocko@suse.cz>
M: Balbir Singh <bsingharora@gmail.com>
M: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
L: cgroups@vger.kernel.org
L: linux-mm@kvack.org
S: Maintained
F: mm/memcontrol.c
F: mm/page_cgroup.c
MEMORY TECHNOLOGY DEVICES (MTD) MEMORY TECHNOLOGY DEVICES (MTD)
M: David Woodhouse <dwmw2@infradead.org> M: David Woodhouse <dwmw2@infradead.org>
M: Brian Norris <computersforpeace@gmail.com> M: Brian Norris <computersforpeace@gmail.com>
......
...@@ -1971,7 +1971,7 @@ int bio_associate_current(struct bio *bio) ...@@ -1971,7 +1971,7 @@ int bio_associate_current(struct bio *bio)
/* associate blkcg if exists */ /* associate blkcg if exists */
rcu_read_lock(); rcu_read_lock();
css = task_css(current, blkio_cgrp_id); css = task_css(current, blkio_cgrp_id);
if (css && css_tryget(css)) if (css && css_tryget_online(css))
bio->bi_css = css; bio->bi_css = css;
rcu_read_unlock(); rcu_read_unlock();
......
...@@ -185,7 +185,7 @@ static struct blkcg_gq *blkg_create(struct blkcg *blkcg, ...@@ -185,7 +185,7 @@ static struct blkcg_gq *blkg_create(struct blkcg *blkcg,
lockdep_assert_held(q->queue_lock); lockdep_assert_held(q->queue_lock);
/* blkg holds a reference to blkcg */ /* blkg holds a reference to blkcg */
if (!css_tryget(&blkcg->css)) { if (!css_tryget_online(&blkcg->css)) {
ret = -EINVAL; ret = -EINVAL;
goto err_free_blkg; goto err_free_blkg;
} }
......
...@@ -204,7 +204,7 @@ static inline struct blkcg *bio_blkcg(struct bio *bio) ...@@ -204,7 +204,7 @@ static inline struct blkcg *bio_blkcg(struct bio *bio)
*/ */
static inline struct blkcg *blkcg_parent(struct blkcg *blkcg) static inline struct blkcg *blkcg_parent(struct blkcg *blkcg)
{ {
return css_to_blkcg(css_parent(&blkcg->css)); return css_to_blkcg(blkcg->css.parent);
} }
/** /**
......
...@@ -1346,10 +1346,10 @@ static int tg_print_conf_uint(struct seq_file *sf, void *v) ...@@ -1346,10 +1346,10 @@ static int tg_print_conf_uint(struct seq_file *sf, void *v)
return 0; return 0;
} }
static int tg_set_conf(struct cgroup_subsys_state *css, struct cftype *cft, static ssize_t tg_set_conf(struct kernfs_open_file *of,
const char *buf, bool is_u64) char *buf, size_t nbytes, loff_t off, bool is_u64)
{ {
struct blkcg *blkcg = css_to_blkcg(css); struct blkcg *blkcg = css_to_blkcg(of_css(of));
struct blkg_conf_ctx ctx; struct blkg_conf_ctx ctx;
struct throtl_grp *tg; struct throtl_grp *tg;
struct throtl_service_queue *sq; struct throtl_service_queue *sq;
...@@ -1368,9 +1368,9 @@ static int tg_set_conf(struct cgroup_subsys_state *css, struct cftype *cft, ...@@ -1368,9 +1368,9 @@ static int tg_set_conf(struct cgroup_subsys_state *css, struct cftype *cft,
ctx.v = -1; ctx.v = -1;
if (is_u64) if (is_u64)
*(u64 *)((void *)tg + cft->private) = ctx.v; *(u64 *)((void *)tg + of_cft(of)->private) = ctx.v;
else else
*(unsigned int *)((void *)tg + cft->private) = ctx.v; *(unsigned int *)((void *)tg + of_cft(of)->private) = ctx.v;
throtl_log(&tg->service_queue, throtl_log(&tg->service_queue,
"limit change rbps=%llu wbps=%llu riops=%u wiops=%u", "limit change rbps=%llu wbps=%llu riops=%u wiops=%u",
...@@ -1404,19 +1404,19 @@ static int tg_set_conf(struct cgroup_subsys_state *css, struct cftype *cft, ...@@ -1404,19 +1404,19 @@ static int tg_set_conf(struct cgroup_subsys_state *css, struct cftype *cft,
} }
blkg_conf_finish(&ctx); blkg_conf_finish(&ctx);
return 0; return nbytes;
} }
static int tg_set_conf_u64(struct cgroup_subsys_state *css, struct cftype *cft, static ssize_t tg_set_conf_u64(struct kernfs_open_file *of,
char *buf) char *buf, size_t nbytes, loff_t off)
{ {
return tg_set_conf(css, cft, buf, true); return tg_set_conf(of, buf, nbytes, off, true);
} }
static int tg_set_conf_uint(struct cgroup_subsys_state *css, struct cftype *cft, static ssize_t tg_set_conf_uint(struct kernfs_open_file *of,
char *buf) char *buf, size_t nbytes, loff_t off)
{ {
return tg_set_conf(css, cft, buf, false); return tg_set_conf(of, buf, nbytes, off, false);
} }
static struct cftype throtl_files[] = { static struct cftype throtl_files[] = {
...@@ -1424,25 +1424,25 @@ static struct cftype throtl_files[] = { ...@@ -1424,25 +1424,25 @@ static struct cftype throtl_files[] = {
.name = "throttle.read_bps_device", .name = "throttle.read_bps_device",
.private = offsetof(struct throtl_grp, bps[READ]), .private = offsetof(struct throtl_grp, bps[READ]),
.seq_show = tg_print_conf_u64, .seq_show = tg_print_conf_u64,
.write_string = tg_set_conf_u64, .write = tg_set_conf_u64,
}, },
{ {
.name = "throttle.write_bps_device", .name = "throttle.write_bps_device",
.private = offsetof(struct throtl_grp, bps[WRITE]), .private = offsetof(struct throtl_grp, bps[WRITE]),
.seq_show = tg_print_conf_u64, .seq_show = tg_print_conf_u64,
.write_string = tg_set_conf_u64, .write = tg_set_conf_u64,
}, },
{ {
.name = "throttle.read_iops_device", .name = "throttle.read_iops_device",
.private = offsetof(struct throtl_grp, iops[READ]), .private = offsetof(struct throtl_grp, iops[READ]),
.seq_show = tg_print_conf_uint, .seq_show = tg_print_conf_uint,
.write_string = tg_set_conf_uint, .write = tg_set_conf_uint,
}, },
{ {
.name = "throttle.write_iops_device", .name = "throttle.write_iops_device",
.private = offsetof(struct throtl_grp, iops[WRITE]), .private = offsetof(struct throtl_grp, iops[WRITE]),
.seq_show = tg_print_conf_uint, .seq_show = tg_print_conf_uint,
.write_string = tg_set_conf_uint, .write = tg_set_conf_uint,
}, },
{ {
.name = "throttle.io_service_bytes", .name = "throttle.io_service_bytes",
......
...@@ -1670,11 +1670,11 @@ static int cfq_print_leaf_weight(struct seq_file *sf, void *v) ...@@ -1670,11 +1670,11 @@ static int cfq_print_leaf_weight(struct seq_file *sf, void *v)
return 0; return 0;
} }
static int __cfqg_set_weight_device(struct cgroup_subsys_state *css, static ssize_t __cfqg_set_weight_device(struct kernfs_open_file *of,
struct cftype *cft, const char *buf, char *buf, size_t nbytes, loff_t off,
bool is_leaf_weight) bool is_leaf_weight)
{ {
struct blkcg *blkcg = css_to_blkcg(css); struct blkcg *blkcg = css_to_blkcg(of_css(of));
struct blkg_conf_ctx ctx; struct blkg_conf_ctx ctx;
struct cfq_group *cfqg; struct cfq_group *cfqg;
int ret; int ret;
...@@ -1697,19 +1697,19 @@ static int __cfqg_set_weight_device(struct cgroup_subsys_state *css, ...@@ -1697,19 +1697,19 @@ static int __cfqg_set_weight_device(struct cgroup_subsys_state *css,
} }
blkg_conf_finish(&ctx); blkg_conf_finish(&ctx);
return ret; return ret ?: nbytes;
} }
static int cfqg_set_weight_device(struct cgroup_subsys_state *css, static ssize_t cfqg_set_weight_device(struct kernfs_open_file *of,
struct cftype *cft, char *buf) char *buf, size_t nbytes, loff_t off)
{ {
return __cfqg_set_weight_device(css, cft, buf, false); return __cfqg_set_weight_device(of, buf, nbytes, off, false);
} }
static int cfqg_set_leaf_weight_device(struct cgroup_subsys_state *css, static ssize_t cfqg_set_leaf_weight_device(struct kernfs_open_file *of,
struct cftype *cft, char *buf) char *buf, size_t nbytes, loff_t off)
{ {
return __cfqg_set_weight_device(css, cft, buf, true); return __cfqg_set_weight_device(of, buf, nbytes, off, true);
} }
static int __cfq_set_weight(struct cgroup_subsys_state *css, struct cftype *cft, static int __cfq_set_weight(struct cgroup_subsys_state *css, struct cftype *cft,
...@@ -1837,7 +1837,7 @@ static struct cftype cfq_blkcg_files[] = { ...@@ -1837,7 +1837,7 @@ static struct cftype cfq_blkcg_files[] = {
.name = "weight_device", .name = "weight_device",
.flags = CFTYPE_ONLY_ON_ROOT, .flags = CFTYPE_ONLY_ON_ROOT,
.seq_show = cfqg_print_leaf_weight_device, .seq_show = cfqg_print_leaf_weight_device,
.write_string = cfqg_set_leaf_weight_device, .write = cfqg_set_leaf_weight_device,
}, },
{ {
.name = "weight", .name = "weight",
...@@ -1851,7 +1851,7 @@ static struct cftype cfq_blkcg_files[] = { ...@@ -1851,7 +1851,7 @@ static struct cftype cfq_blkcg_files[] = {
.name = "weight_device", .name = "weight_device",
.flags = CFTYPE_NOT_ON_ROOT, .flags = CFTYPE_NOT_ON_ROOT,
.seq_show = cfqg_print_weight_device, .seq_show = cfqg_print_weight_device,
.write_string = cfqg_set_weight_device, .write = cfqg_set_weight_device,
}, },
{ {
.name = "weight", .name = "weight",
...@@ -1863,7 +1863,7 @@ static struct cftype cfq_blkcg_files[] = { ...@@ -1863,7 +1863,7 @@ static struct cftype cfq_blkcg_files[] = {
{ {
.name = "leaf_weight_device", .name = "leaf_weight_device",
.seq_show = cfqg_print_leaf_weight_device, .seq_show = cfqg_print_leaf_weight_device,
.write_string = cfqg_set_leaf_weight_device, .write = cfqg_set_leaf_weight_device,
}, },
{ {
.name = "leaf_weight", .name = "leaf_weight",
......
...@@ -21,6 +21,7 @@ ...@@ -21,6 +21,7 @@
#include <linux/percpu-refcount.h> #include <linux/percpu-refcount.h>
#include <linux/seq_file.h> #include <linux/seq_file.h>
#include <linux/kernfs.h> #include <linux/kernfs.h>
#include <linux/wait.h>
#ifdef CONFIG_CGROUPS #ifdef CONFIG_CGROUPS
...@@ -47,21 +48,45 @@ enum cgroup_subsys_id { ...@@ -47,21 +48,45 @@ enum cgroup_subsys_id {
}; };
#undef SUBSYS #undef SUBSYS
/* Per-subsystem/per-cgroup state maintained by the system. */ /*
* Per-subsystem/per-cgroup state maintained by the system. This is the
* fundamental structural building block that controllers deal with.
*
* Fields marked with "PI:" are public and immutable and may be accessed
* directly without synchronization.
*/
struct cgroup_subsys_state { struct cgroup_subsys_state {
/* the cgroup that this css is attached to */ /* PI: the cgroup that this css is attached to */
struct cgroup *cgroup; struct cgroup *cgroup;
/* the cgroup subsystem that this css is attached to */ /* PI: the cgroup subsystem that this css is attached to */
struct cgroup_subsys *ss; struct cgroup_subsys *ss;
/* reference count - access via css_[try]get() and css_put() */ /* reference count - access via css_[try]get() and css_put() */
struct percpu_ref refcnt; struct percpu_ref refcnt;
/* the parent css */ /* PI: the parent css */
struct cgroup_subsys_state *parent; struct cgroup_subsys_state *parent;
unsigned long flags; /* siblings list anchored at the parent's ->children */
struct list_head sibling;
struct list_head children;
/*
* PI: Subsys-unique ID. 0 is unused and root is always 1. The
* matching css can be looked up using css_from_id().
*/
int id;
unsigned int flags;
/*
* Monotonically increasing unique serial number which defines a
* uniform order among all csses. It's guaranteed that all
* ->children lists are in the ascending order of ->serial_nr and
* used to allow interrupting and resuming iterations.
*/
u64 serial_nr;
/* percpu_ref killing and RCU release */ /* percpu_ref killing and RCU release */
struct rcu_head rcu_head; struct rcu_head rcu_head;
...@@ -70,8 +95,9 @@ struct cgroup_subsys_state { ...@@ -70,8 +95,9 @@ struct cgroup_subsys_state {
/* bits in struct cgroup_subsys_state flags field */ /* bits in struct cgroup_subsys_state flags field */
enum { enum {
CSS_ROOT = (1 << 0), /* this CSS is the root of the subsystem */ CSS_NO_REF = (1 << 0), /* no reference counting for this css */
CSS_ONLINE = (1 << 1), /* between ->css_online() and ->css_offline() */ CSS_ONLINE = (1 << 1), /* between ->css_online() and ->css_offline() */
CSS_RELEASED = (1 << 2), /* refcnt reached zero, released */
}; };
/** /**
...@@ -82,8 +108,7 @@ enum { ...@@ -82,8 +108,7 @@ enum {
*/ */
static inline void css_get(struct cgroup_subsys_state *css) static inline void css_get(struct cgroup_subsys_state *css)
{ {
/* We don't need to reference count the root state */ if (!(css->flags & CSS_NO_REF))
if (!(css->flags & CSS_ROOT))
percpu_ref_get(&css->refcnt); percpu_ref_get(&css->refcnt);
} }
...@@ -91,35 +116,51 @@ static inline void css_get(struct cgroup_subsys_state *css) ...@@ -91,35 +116,51 @@ static inline void css_get(struct cgroup_subsys_state *css)
* css_tryget - try to obtain a reference on the specified css * css_tryget - try to obtain a reference on the specified css
* @css: target css * @css: target css
* *
* Obtain a reference on @css if it's alive. The caller naturally needs to * Obtain a reference on @css unless it already has reached zero and is
* ensure that @css is accessible but doesn't have to be holding a * being released. This function doesn't care whether @css is on or
* offline. The caller naturally needs to ensure that @css is accessible
* but doesn't have to be holding a reference on it - IOW, RCU protected
* access is good enough for this function. Returns %true if a reference
* count was successfully obtained; %false otherwise.
*/
static inline bool css_tryget(struct cgroup_subsys_state *css)
{
if (!(css->flags & CSS_NO_REF))
return percpu_ref_tryget(&css->refcnt);
return true;
}
/**
* css_tryget_online - try to obtain a reference on the specified css if online
* @css: target css
*
* Obtain a reference on @css if it's online. The caller naturally needs
* to ensure that @css is accessible but doesn't have to be holding a
* reference on it - IOW, RCU protected access is good enough for this * reference on it - IOW, RCU protected access is good enough for this
* function. Returns %true if a reference count was successfully obtained; * function. Returns %true if a reference count was successfully obtained;
* %false otherwise. * %false otherwise.
*/ */
static inline bool css_tryget(struct cgroup_subsys_state *css) static inline bool css_tryget_online(struct cgroup_subsys_state *css)
{ {
if (css->flags & CSS_ROOT) if (!(css->flags & CSS_NO_REF))
return true; return percpu_ref_tryget_live(&css->refcnt);
return percpu_ref_tryget_live(&css->refcnt); return true;
} }
/** /**
* css_put - put a css reference * css_put - put a css reference
* @css: target css * @css: target css
* *
* Put a reference obtained via css_get() and css_tryget(). * Put a reference obtained via css_get() and css_tryget_online().
*/ */
static inline void css_put(struct cgroup_subsys_state *css) static inline void css_put(struct cgroup_subsys_state *css)
{ {
if (!(css->flags & CSS_ROOT)) if (!(css->flags & CSS_NO_REF))
percpu_ref_put(&css->refcnt); percpu_ref_put(&css->refcnt);
} }
/* bits in struct cgroup flags field */ /* bits in struct cgroup flags field */
enum { enum {
/* Control Group is dead */
CGRP_DEAD,
/* /*
* Control Group has previously had a child cgroup or a task, * Control Group has previously had a child cgroup or a task,
* but no longer (only if CGRP_NOTIFY_ON_RELEASE is set) * but no longer (only if CGRP_NOTIFY_ON_RELEASE is set)
...@@ -133,48 +174,37 @@ enum { ...@@ -133,48 +174,37 @@ enum {
* specified at mount time and thus is implemented here. * specified at mount time and thus is implemented here.
*/ */
CGRP_CPUSET_CLONE_CHILDREN, CGRP_CPUSET_CLONE_CHILDREN,
/* see the comment above CGRP_ROOT_SANE_BEHAVIOR for details */
CGRP_SANE_BEHAVIOR,
}; };
struct cgroup { struct cgroup {
/* self css with NULL ->ss, points back to this cgroup */
struct cgroup_subsys_state self;
unsigned long flags; /* "unsigned long" so bitops work */ unsigned long flags; /* "unsigned long" so bitops work */
/* /*
* idr allocated in-hierarchy ID. * idr allocated in-hierarchy ID.
* *
* The ID of the root cgroup is always 0, and a new cgroup * ID 0 is not used, the ID of the root cgroup is always 1, and a
* will be assigned with a smallest available ID. * new cgroup will be assigned with a smallest available ID.
* *
* Allocating/Removing ID must be protected by cgroup_mutex. * Allocating/Removing ID must be protected by cgroup_mutex.
*/ */
int id; int id;
/* the number of attached css's */
int nr_css;
atomic_t refcnt;
/* /*
* We link our 'sibling' struct into our parent's 'children'. * If this cgroup contains any tasks, it contributes one to
* Our children link their 'sibling' into our 'children'. * populated_cnt. All children with non-zero populated_cnt of
* their own contribute one. The count is zero iff there's no task
* in this cgroup or its subtree.
*/ */
struct list_head sibling; /* my parent's children */ int populated_cnt;
struct list_head children; /* my children */
struct cgroup *parent; /* my parent */
struct kernfs_node *kn; /* cgroup kernfs entry */ struct kernfs_node *kn; /* cgroup kernfs entry */
struct kernfs_node *populated_kn; /* kn for "cgroup.subtree_populated" */
/* /* the bitmask of subsystems enabled on the child cgroups */
* Monotonically increasing unique serial number which defines a unsigned int child_subsys_mask;
* uniform order among all cgroups. It's guaranteed that all
* ->children lists are in the ascending order of ->serial_nr.
* It's used to allow interrupting and resuming iterations.
*/
u64 serial_nr;
/* The bitmask of subsystems attached to this cgroup */
unsigned long subsys_mask;
/* Private pointers for each registered subsystem */ /* Private pointers for each registered subsystem */
struct cgroup_subsys_state __rcu *subsys[CGROUP_SUBSYS_COUNT]; struct cgroup_subsys_state __rcu *subsys[CGROUP_SUBSYS_COUNT];
...@@ -187,6 +217,15 @@ struct cgroup { ...@@ -187,6 +217,15 @@ struct cgroup {
*/ */
struct list_head cset_links; struct list_head cset_links;
/*
* On the default hierarchy, a css_set for a cgroup with some
* subsys disabled will point to css's which are associated with
* the closest ancestor which has the subsys enabled. The
* following lists all css_sets which point to this cgroup's css
* for the given subsystem.
*/
struct list_head e_csets[CGROUP_SUBSYS_COUNT];
/* /*
* Linked list running through all cgroups that can * Linked list running through all cgroups that can
* potentially be reaped by the release agent. Protected by * potentially be reaped by the release agent. Protected by
...@@ -201,12 +240,8 @@ struct cgroup { ...@@ -201,12 +240,8 @@ struct cgroup {
struct list_head pidlists; struct list_head pidlists;
struct mutex pidlist_mutex; struct mutex pidlist_mutex;
/* dummy css with NULL ->ss, points back to this cgroup */ /* used to wait for offlining of csses */
struct cgroup_subsys_state dummy_css; wait_queue_head_t offline_waitq;
/* For css percpu_ref killing and RCU-protected deletion */
struct rcu_head rcu_head;
struct work_struct destroy_work;
}; };
#define MAX_CGROUP_ROOT_NAMELEN 64 #define MAX_CGROUP_ROOT_NAMELEN 64
...@@ -250,6 +285,12 @@ enum { ...@@ -250,6 +285,12 @@ enum {
* *
* - "cgroup.clone_children" is removed. * - "cgroup.clone_children" is removed.
* *
* - "cgroup.subtree_populated" is available. Its value is 0 if
* the cgroup and its descendants contain no task; otherwise, 1.
* The file also generates kernfs notification which can be
* monitored through poll and [di]notify when the value of the
* file changes.
*
* - If mount is requested with sane_behavior but without any * - If mount is requested with sane_behavior but without any
* subsystem, the default unified hierarchy is mounted. * subsystem, the default unified hierarchy is mounted.
* *
...@@ -264,6 +305,8 @@ enum { ...@@ -264,6 +305,8 @@ enum {
* the flag is not created. * the flag is not created.
* *
* - blkcg: blk-throttle becomes properly hierarchical. * - blkcg: blk-throttle becomes properly hierarchical.
*
* - debug: disallowed on the default hierarchy.
*/ */
CGRP_ROOT_SANE_BEHAVIOR = (1 << 0), CGRP_ROOT_SANE_BEHAVIOR = (1 << 0),
...@@ -282,6 +325,9 @@ enum { ...@@ -282,6 +325,9 @@ enum {
struct cgroup_root { struct cgroup_root {
struct kernfs_root *kf_root; struct kernfs_root *kf_root;
/* The bitmask of subsystems attached to this hierarchy */
unsigned int subsys_mask;
/* Unique id for this hierarchy. */ /* Unique id for this hierarchy. */
int hierarchy_id; int hierarchy_id;
...@@ -295,7 +341,7 @@ struct cgroup_root { ...@@ -295,7 +341,7 @@ struct cgroup_root {
struct list_head root_list; struct list_head root_list;
/* Hierarchy-specific flags */ /* Hierarchy-specific flags */
unsigned long flags; unsigned int flags;
/* IDs for cgroups in this hierarchy */ /* IDs for cgroups in this hierarchy */
struct idr cgroup_idr; struct idr cgroup_idr;
...@@ -342,6 +388,9 @@ struct css_set { ...@@ -342,6 +388,9 @@ struct css_set {
*/ */
struct list_head cgrp_links; struct list_head cgrp_links;
/* the default cgroup associated with this css_set */
struct cgroup *dfl_cgrp;
/* /*
* Set of subsystem states, one for each subsystem. This array is * Set of subsystem states, one for each subsystem. This array is
* immutable after creation apart from the init_css_set during * immutable after creation apart from the init_css_set during
...@@ -366,6 +415,15 @@ struct css_set { ...@@ -366,6 +415,15 @@ struct css_set {
struct cgroup *mg_src_cgrp; struct cgroup *mg_src_cgrp;
struct css_set *mg_dst_cset; struct css_set *mg_dst_cset;
/*
* On the default hierarchy, ->subsys[ssid] may point to a css
* attached to an ancestor instead of the cgroup this css_set is
* associated with. The following node is anchored at
* ->subsys[ssid]->cgroup->e_csets[ssid] and provides a way to
* iterate through all css's attached to a given cgroup.
*/
struct list_head e_cset_node[CGROUP_SUBSYS_COUNT];
/* For RCU-protected deletion */ /* For RCU-protected deletion */
struct rcu_head rcu_head; struct rcu_head rcu_head;
}; };
...@@ -405,8 +463,7 @@ struct cftype { ...@@ -405,8 +463,7 @@ struct cftype {
/* /*
* The maximum length of string, excluding trailing nul, that can * The maximum length of string, excluding trailing nul, that can
* be passed to write_string. If < PAGE_SIZE-1, PAGE_SIZE-1 is * be passed to write. If < PAGE_SIZE-1, PAGE_SIZE-1 is assumed.
* assumed.
*/ */
size_t max_write_len; size_t max_write_len;
...@@ -453,19 +510,13 @@ struct cftype { ...@@ -453,19 +510,13 @@ struct cftype {
s64 val); s64 val);
/* /*
* write_string() is passed a nul-terminated kernelspace * write() is the generic write callback which maps directly to
* buffer of maximum length determined by max_write_len. * kernfs write operation and overrides all other operations.
* Returns 0 or -ve error code. * Maximum write size is determined by ->max_write_len. Use
*/ * of_css/cft() to access the associated css and cft.
int (*write_string)(struct cgroup_subsys_state *css, struct cftype *cft,
char *buffer);
/*
* trigger() callback can be used to get some kick from the
* userspace, when the actual string written is not important
* at all. The private field can be used to determine the
* kick type for multiplexing.
*/ */
int (*trigger)(struct cgroup_subsys_state *css, unsigned int event); ssize_t (*write)(struct kernfs_open_file *of,
char *buf, size_t nbytes, loff_t off);
#ifdef CONFIG_DEBUG_LOCK_ALLOC #ifdef CONFIG_DEBUG_LOCK_ALLOC
struct lock_class_key lockdep_key; struct lock_class_key lockdep_key;
...@@ -504,14 +555,24 @@ static inline ino_t cgroup_ino(struct cgroup *cgrp) ...@@ -504,14 +555,24 @@ static inline ino_t cgroup_ino(struct cgroup *cgrp)
return 0; return 0;
} }
static inline struct cftype *seq_cft(struct seq_file *seq) /* cft/css accessors for cftype->write() operation */
static inline struct cftype *of_cft(struct kernfs_open_file *of)
{ {
struct kernfs_open_file *of = seq->private;
return of->kn->priv; return of->kn->priv;
} }
struct cgroup_subsys_state *seq_css(struct seq_file *seq); struct cgroup_subsys_state *of_css(struct kernfs_open_file *of);
/* cft/css accessors for cftype->seq_*() operations */
static inline struct cftype *seq_cft(struct seq_file *seq)
{
return of_cft(seq->private);
}
static inline struct cgroup_subsys_state *seq_css(struct seq_file *seq)
{
return of_css(seq->private);
}
/* /*
* Name / path handling functions. All are thin wrappers around the kernfs * Name / path handling functions. All are thin wrappers around the kernfs
...@@ -612,6 +673,9 @@ struct cgroup_subsys { ...@@ -612,6 +673,9 @@ struct cgroup_subsys {
/* link to parent, protected by cgroup_lock() */ /* link to parent, protected by cgroup_lock() */
struct cgroup_root *root; struct cgroup_root *root;
/* idr for css->id */
struct idr css_idr;
/* /*
* List of cftypes. Each entry is the first entry of an array * List of cftypes. Each entry is the first entry of an array
* terminated by zero length name. * terminated by zero length name.
...@@ -626,19 +690,6 @@ struct cgroup_subsys { ...@@ -626,19 +690,6 @@ struct cgroup_subsys {
#include <linux/cgroup_subsys.h> #include <linux/cgroup_subsys.h>
#undef SUBSYS #undef SUBSYS
/**
* css_parent - find the parent css
* @css: the target cgroup_subsys_state
*
* Return the parent css of @css. This function is guaranteed to return
* non-NULL parent as long as @css isn't the root.
*/
static inline
struct cgroup_subsys_state *css_parent(struct cgroup_subsys_state *css)
{
return css->parent;
}
/** /**
* task_css_set_check - obtain a task's css_set with extra access conditions * task_css_set_check - obtain a task's css_set with extra access conditions
* @task: the task to obtain css_set for * @task: the task to obtain css_set for
...@@ -731,14 +782,14 @@ struct cgroup_subsys_state *css_from_id(int id, struct cgroup_subsys *ss); ...@@ -731,14 +782,14 @@ struct cgroup_subsys_state *css_from_id(int id, struct cgroup_subsys *ss);
* @pos: the css * to use as the loop cursor * @pos: the css * to use as the loop cursor
* @parent: css whose children to walk * @parent: css whose children to walk
* *
* Walk @parent's children. Must be called under rcu_read_lock(). A child * Walk @parent's children. Must be called under rcu_read_lock().
* css which hasn't finished ->css_online() or already has finished
* ->css_offline() may show up during traversal and it's each subsystem's
* responsibility to verify that each @pos is alive.
* *
* If a subsystem synchronizes against the parent in its ->css_online() and * If a subsystem synchronizes ->css_online() and the start of iteration, a
* before starting iterating, a css which finished ->css_online() is * css which finished ->css_online() is guaranteed to be visible in the
* guaranteed to be visible in the future iterations. * future iterations and will stay visible until the last reference is put.
* A css which hasn't finished ->css_online() or already finished
* ->css_offline() may show up during traversal. It's each subsystem's
* responsibility to synchronize against on/offlining.
* *
* It is allowed to temporarily drop RCU read lock during iteration. The * It is allowed to temporarily drop RCU read lock during iteration. The
* caller is responsible for ensuring that @pos remains accessible until * caller is responsible for ensuring that @pos remains accessible until
...@@ -761,17 +812,16 @@ css_rightmost_descendant(struct cgroup_subsys_state *pos); ...@@ -761,17 +812,16 @@ css_rightmost_descendant(struct cgroup_subsys_state *pos);
* @root: css whose descendants to walk * @root: css whose descendants to walk
* *
* Walk @root's descendants. @root is included in the iteration and the * Walk @root's descendants. @root is included in the iteration and the
* first node to be visited. Must be called under rcu_read_lock(). A * first node to be visited. Must be called under rcu_read_lock().
* descendant css which hasn't finished ->css_online() or already has
* finished ->css_offline() may show up during traversal and it's each
* subsystem's responsibility to verify that each @pos is alive.
* *
* If a subsystem synchronizes against the parent in its ->css_online() and * If a subsystem synchronizes ->css_online() and the start of iteration, a
* before starting iterating, and synchronizes against @pos on each * css which finished ->css_online() is guaranteed to be visible in the
* iteration, any descendant css which finished ->css_online() is * future iterations and will stay visible until the last reference is put.
* guaranteed to be visible in the future iterations. * A css which hasn't finished ->css_online() or already finished
* ->css_offline() may show up during traversal. It's each subsystem's
* responsibility to synchronize against on/offlining.
* *
* In other words, the following guarantees that a descendant can't escape * For example, the following guarantees that a descendant can't escape
* state updates of its ancestors. * state updates of its ancestors.
* *
* my_online(@css) * my_online(@css)
...@@ -827,18 +877,34 @@ css_next_descendant_post(struct cgroup_subsys_state *pos, ...@@ -827,18 +877,34 @@ css_next_descendant_post(struct cgroup_subsys_state *pos,
* *
* Similar to css_for_each_descendant_pre() but performs post-order * Similar to css_for_each_descendant_pre() but performs post-order
* traversal instead. @root is included in the iteration and the last * traversal instead. @root is included in the iteration and the last
* node to be visited. Note that the walk visibility guarantee described * node to be visited.
* in pre-order walk doesn't apply the same to post-order walks. *
* If a subsystem synchronizes ->css_online() and the start of iteration, a
* css which finished ->css_online() is guaranteed to be visible in the
* future iterations and will stay visible until the last reference is put.
* A css which hasn't finished ->css_online() or already finished
* ->css_offline() may show up during traversal. It's each subsystem's
* responsibility to synchronize against on/offlining.
*
* Note that the walk visibility guarantee example described in pre-order
* walk doesn't apply the same to post-order walks.
*/ */
#define css_for_each_descendant_post(pos, css) \ #define css_for_each_descendant_post(pos, css) \
for ((pos) = css_next_descendant_post(NULL, (css)); (pos); \ for ((pos) = css_next_descendant_post(NULL, (css)); (pos); \
(pos) = css_next_descendant_post((pos), (css))) (pos) = css_next_descendant_post((pos), (css)))
bool css_has_online_children(struct cgroup_subsys_state *css);
/* A css_task_iter should be treated as an opaque object */ /* A css_task_iter should be treated as an opaque object */
struct css_task_iter { struct css_task_iter {
struct cgroup_subsys_state *origin_css; struct cgroup_subsys *ss;
struct list_head *cset_link;
struct list_head *task; struct list_head *cset_pos;
struct list_head *cset_head;
struct list_head *task_pos;
struct list_head *tasks_head;
struct list_head *mg_tasks_head;
}; };
void css_task_iter_start(struct cgroup_subsys_state *css, void css_task_iter_start(struct cgroup_subsys_state *css,
...@@ -849,8 +915,8 @@ void css_task_iter_end(struct css_task_iter *it); ...@@ -849,8 +915,8 @@ void css_task_iter_end(struct css_task_iter *it);
int cgroup_attach_task_all(struct task_struct *from, struct task_struct *); int cgroup_attach_task_all(struct task_struct *from, struct task_struct *);
int cgroup_transfer_tasks(struct cgroup *to, struct cgroup *from); int cgroup_transfer_tasks(struct cgroup *to, struct cgroup *from);
struct cgroup_subsys_state *css_tryget_from_dir(struct dentry *dentry, struct cgroup_subsys_state *css_tryget_online_from_dir(struct dentry *dentry,
struct cgroup_subsys *ss); struct cgroup_subsys *ss);
#else /* !CONFIG_CGROUPS */ #else /* !CONFIG_CGROUPS */
......
...@@ -7,10 +7,6 @@ ...@@ -7,10 +7,6 @@
SUBSYS(cpuset) SUBSYS(cpuset)
#endif #endif
#if IS_ENABLED(CONFIG_CGROUP_DEBUG)
SUBSYS(debug)
#endif
#if IS_ENABLED(CONFIG_CGROUP_SCHED) #if IS_ENABLED(CONFIG_CGROUP_SCHED)
SUBSYS(cpu) SUBSYS(cpu)
#endif #endif
...@@ -50,6 +46,13 @@ SUBSYS(net_prio) ...@@ -50,6 +46,13 @@ SUBSYS(net_prio)
#if IS_ENABLED(CONFIG_CGROUP_HUGETLB) #if IS_ENABLED(CONFIG_CGROUP_HUGETLB)
SUBSYS(hugetlb) SUBSYS(hugetlb)
#endif #endif
/*
* The following subsystems are not supported on the default hierarchy.
*/
#if IS_ENABLED(CONFIG_CGROUP_DEBUG)
SUBSYS(debug)
#endif
/* /*
* DO NOT ADD ANY SUBSYSTEM WITHOUT EXPLICIT ACKS FROM CGROUP MAINTAINERS. * DO NOT ADD ANY SUBSYSTEM WITHOUT EXPLICIT ACKS FROM CGROUP MAINTAINERS.
*/ */
...@@ -26,6 +26,8 @@ ...@@ -26,6 +26,8 @@
* distribution for more details. * distribution for more details.
*/ */
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/cgroup.h> #include <linux/cgroup.h>
#include <linux/cred.h> #include <linux/cred.h>
#include <linux/ctype.h> #include <linux/ctype.h>
...@@ -69,15 +71,6 @@ ...@@ -69,15 +71,6 @@
#define CGROUP_FILE_NAME_MAX (MAX_CGROUP_TYPE_NAMELEN + \ #define CGROUP_FILE_NAME_MAX (MAX_CGROUP_TYPE_NAMELEN + \
MAX_CFTYPE_NAME + 2) MAX_CFTYPE_NAME + 2)
/*
* cgroup_tree_mutex nests above cgroup_mutex and protects cftypes, file
* creation/removal and hierarchy changing operations including cgroup
* creation, removal, css association and controller rebinding. This outer
* lock is needed mainly to resolve the circular dependency between kernfs
* active ref and cgroup_mutex. cgroup_tree_mutex nests above both.
*/
static DEFINE_MUTEX(cgroup_tree_mutex);
/* /*
* cgroup_mutex is the master lock. Any modification to cgroup or its * cgroup_mutex is the master lock. Any modification to cgroup or its
* hierarchy must be performed while holding it. * hierarchy must be performed while holding it.
...@@ -98,17 +91,22 @@ static DEFINE_MUTEX(cgroup_mutex); ...@@ -98,17 +91,22 @@ static DEFINE_MUTEX(cgroup_mutex);
static DECLARE_RWSEM(css_set_rwsem); static DECLARE_RWSEM(css_set_rwsem);
#endif #endif
/*
* Protects cgroup_idr and css_idr so that IDs can be released without
* grabbing cgroup_mutex.
*/
static DEFINE_SPINLOCK(cgroup_idr_lock);
/* /*
* Protects cgroup_subsys->release_agent_path. Modifying it also requires * Protects cgroup_subsys->release_agent_path. Modifying it also requires
* cgroup_mutex. Reading requires either cgroup_mutex or this spinlock. * cgroup_mutex. Reading requires either cgroup_mutex or this spinlock.
*/ */
static DEFINE_SPINLOCK(release_agent_path_lock); static DEFINE_SPINLOCK(release_agent_path_lock);
#define cgroup_assert_mutexes_or_rcu_locked() \ #define cgroup_assert_mutex_or_rcu_locked() \
rcu_lockdep_assert(rcu_read_lock_held() || \ rcu_lockdep_assert(rcu_read_lock_held() || \
lockdep_is_held(&cgroup_tree_mutex) || \
lockdep_is_held(&cgroup_mutex), \ lockdep_is_held(&cgroup_mutex), \
"cgroup_[tree_]mutex or RCU read lock required"); "cgroup_mutex or RCU read lock required");
/* /*
* cgroup destruction makes heavy use of work items and there can be a lot * cgroup destruction makes heavy use of work items and there can be a lot
...@@ -151,6 +149,13 @@ struct cgroup_root cgrp_dfl_root; ...@@ -151,6 +149,13 @@ struct cgroup_root cgrp_dfl_root;
*/ */
static bool cgrp_dfl_root_visible; static bool cgrp_dfl_root_visible;
/* some controllers are not supported in the default hierarchy */
static const unsigned int cgrp_dfl_root_inhibit_ss_mask = 0
#ifdef CONFIG_CGROUP_DEBUG
| (1 << debug_cgrp_id)
#endif
;
/* The list of hierarchy roots */ /* The list of hierarchy roots */
static LIST_HEAD(cgroup_roots); static LIST_HEAD(cgroup_roots);
...@@ -160,14 +165,13 @@ static int cgroup_root_count; ...@@ -160,14 +165,13 @@ static int cgroup_root_count;
static DEFINE_IDR(cgroup_hierarchy_idr); static DEFINE_IDR(cgroup_hierarchy_idr);
/* /*
* Assign a monotonically increasing serial number to cgroups. It * Assign a monotonically increasing serial number to csses. It guarantees
* guarantees cgroups with bigger numbers are newer than those with smaller * cgroups with bigger numbers are newer than those with smaller numbers.
* numbers. Also, as cgroups are always appended to the parent's * Also, as csses are always appended to the parent's ->children list, it
* ->children list, it guarantees that sibling cgroups are always sorted in * guarantees that sibling csses are always sorted in the ascending serial
* the ascending serial number order on the list. Protected by * number order on the list. Protected by cgroup_mutex.
* cgroup_mutex.
*/ */
static u64 cgroup_serial_nr_next = 1; static u64 css_serial_nr_next = 1;
/* This flag indicates whether tasks in the fork and exit paths should /* This flag indicates whether tasks in the fork and exit paths should
* check for fork/exit handlers to call. This avoids us having to do * check for fork/exit handlers to call. This avoids us having to do
...@@ -180,17 +184,59 @@ static struct cftype cgroup_base_files[]; ...@@ -180,17 +184,59 @@ static struct cftype cgroup_base_files[];
static void cgroup_put(struct cgroup *cgrp); static void cgroup_put(struct cgroup *cgrp);
static int rebind_subsystems(struct cgroup_root *dst_root, static int rebind_subsystems(struct cgroup_root *dst_root,
unsigned long ss_mask); unsigned int ss_mask);
static void cgroup_destroy_css_killed(struct cgroup *cgrp);
static int cgroup_destroy_locked(struct cgroup *cgrp); static int cgroup_destroy_locked(struct cgroup *cgrp);
static int create_css(struct cgroup *cgrp, struct cgroup_subsys *ss);
static void css_release(struct percpu_ref *ref);
static void kill_css(struct cgroup_subsys_state *css);
static int cgroup_addrm_files(struct cgroup *cgrp, struct cftype cfts[], static int cgroup_addrm_files(struct cgroup *cgrp, struct cftype cfts[],
bool is_add); bool is_add);
static void cgroup_pidlist_destroy_all(struct cgroup *cgrp); static void cgroup_pidlist_destroy_all(struct cgroup *cgrp);
/* IDR wrappers which synchronize using cgroup_idr_lock */
static int cgroup_idr_alloc(struct idr *idr, void *ptr, int start, int end,
gfp_t gfp_mask)
{
int ret;
idr_preload(gfp_mask);
spin_lock_bh(&cgroup_idr_lock);
ret = idr_alloc(idr, ptr, start, end, gfp_mask);
spin_unlock_bh(&cgroup_idr_lock);
idr_preload_end();
return ret;
}
static void *cgroup_idr_replace(struct idr *idr, void *ptr, int id)
{
void *ret;
spin_lock_bh(&cgroup_idr_lock);
ret = idr_replace(idr, ptr, id);
spin_unlock_bh(&cgroup_idr_lock);
return ret;
}
static void cgroup_idr_remove(struct idr *idr, int id)
{
spin_lock_bh(&cgroup_idr_lock);
idr_remove(idr, id);
spin_unlock_bh(&cgroup_idr_lock);
}
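A brief usage sketch (the caller is hypothetical; real users in this series include cgroup_setup_root() below) showing the intended pattern for these wrappers:

static int example_assign_id(struct idr *idr, void *obj)
{
        int id;

        /*
         * @gfp_mask feeds both idr_preload() and the allocation done
         * under cgroup_idr_lock; callers in this series pass GFP_NOWAIT.
         */
        id = cgroup_idr_alloc(idr, obj, 1, 0, GFP_NOWAIT);
        if (id < 0)
                return id;

        /* ... and the id can later be dropped without cgroup_mutex ... */
        cgroup_idr_remove(idr, id);
        return 0;
}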
static struct cgroup *cgroup_parent(struct cgroup *cgrp)
{
struct cgroup_subsys_state *parent_css = cgrp->self.parent;
if (parent_css)
return container_of(parent_css, struct cgroup, self);
return NULL;
}
/** /**
* cgroup_css - obtain a cgroup's css for the specified subsystem * cgroup_css - obtain a cgroup's css for the specified subsystem
* @cgrp: the cgroup of interest * @cgrp: the cgroup of interest
* @ss: the subsystem of interest (%NULL returns the dummy_css) * @ss: the subsystem of interest (%NULL returns @cgrp->self)
* *
* Return @cgrp's css (cgroup_subsys_state) associated with @ss. This * Return @cgrp's css (cgroup_subsys_state) associated with @ss. This
* function must be called either under cgroup_mutex or rcu_read_lock() and * function must be called either under cgroup_mutex or rcu_read_lock() and
...@@ -203,23 +249,49 @@ static struct cgroup_subsys_state *cgroup_css(struct cgroup *cgrp, ...@@ -203,23 +249,49 @@ static struct cgroup_subsys_state *cgroup_css(struct cgroup *cgrp,
{ {
if (ss) if (ss)
return rcu_dereference_check(cgrp->subsys[ss->id], return rcu_dereference_check(cgrp->subsys[ss->id],
lockdep_is_held(&cgroup_tree_mutex) ||
lockdep_is_held(&cgroup_mutex)); lockdep_is_held(&cgroup_mutex));
else else
return &cgrp->dummy_css; return &cgrp->self;
}
/**
* cgroup_e_css - obtain a cgroup's effective css for the specified subsystem
* @cgrp: the cgroup of interest
* @ss: the subsystem of interest (%NULL returns @cgrp->self)
*
* Similar to cgroup_css() but returns the effective css, which is defined
* as the matching css of the nearest ancestor including self which has @ss
* enabled. If @ss is associated with the hierarchy @cgrp is on, this
* function is guaranteed to return a non-NULL css.
*/
static struct cgroup_subsys_state *cgroup_e_css(struct cgroup *cgrp,
struct cgroup_subsys *ss)
{
lockdep_assert_held(&cgroup_mutex);
if (!ss)
return &cgrp->self;
if (!(cgrp->root->subsys_mask & (1 << ss->id)))
return NULL;
while (cgroup_parent(cgrp) &&
!(cgroup_parent(cgrp)->child_subsys_mask & (1 << ss->id)))
cgrp = cgroup_parent(cgrp);
return cgroup_css(cgrp, ss);
} }
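A worked example may help; the layout below is hypothetical and assumes the controller in question (say memory, referred to as @ss) is bound to the default hierarchy:

/*
 * With cgroups A, A/B and A/B/C, where A has "+memory" in its
 * cgroup.subtree_control but B does not:
 *
 *   cgroup_e_css(A/B,   ss) == cgroup_css(A/B, ss)   (B's own css)
 *   cgroup_e_css(A/B/C, ss) == cgroup_css(A/B, ss)   (nearest ancestor,
 *                                                     including self, with
 *                                                     the controller enabled)
 *
 * i.e. resource usage of tasks in C is accounted against B's css.
 */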
/* convenient tests for these bits */ /* convenient tests for these bits */
static inline bool cgroup_is_dead(const struct cgroup *cgrp) static inline bool cgroup_is_dead(const struct cgroup *cgrp)
{ {
return test_bit(CGRP_DEAD, &cgrp->flags); return !(cgrp->self.flags & CSS_ONLINE);
} }
struct cgroup_subsys_state *seq_css(struct seq_file *seq) struct cgroup_subsys_state *of_css(struct kernfs_open_file *of)
{ {
struct kernfs_open_file *of = seq->private;
struct cgroup *cgrp = of->kn->parent->priv; struct cgroup *cgrp = of->kn->parent->priv;
struct cftype *cft = seq_cft(seq); struct cftype *cft = of_cft(of);
/* /*
* This is open and unprotected implementation of cgroup_css(). * This is open and unprotected implementation of cgroup_css().
...@@ -232,9 +304,9 @@ struct cgroup_subsys_state *seq_css(struct seq_file *seq) ...@@ -232,9 +304,9 @@ struct cgroup_subsys_state *seq_css(struct seq_file *seq)
if (cft->ss) if (cft->ss)
return rcu_dereference_raw(cgrp->subsys[cft->ss->id]); return rcu_dereference_raw(cgrp->subsys[cft->ss->id]);
else else
return &cgrp->dummy_css; return &cgrp->self;
} }
EXPORT_SYMBOL_GPL(seq_css); EXPORT_SYMBOL_GPL(of_css);
/** /**
* cgroup_is_descendant - test ancestry * cgroup_is_descendant - test ancestry
...@@ -250,7 +322,7 @@ bool cgroup_is_descendant(struct cgroup *cgrp, struct cgroup *ancestor) ...@@ -250,7 +322,7 @@ bool cgroup_is_descendant(struct cgroup *cgrp, struct cgroup *ancestor)
while (cgrp) { while (cgrp) {
if (cgrp == ancestor) if (cgrp == ancestor)
return true; return true;
cgrp = cgrp->parent; cgrp = cgroup_parent(cgrp);
} }
return false; return false;
} }
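For illustration, a hedged sketch (the helper name is made up) combining this test with task_cgroup_from_root(), assuming the caller satisfies that function's locking requirements:

/* is @task currently somewhere inside @ancestor's subtree? */
static bool task_under_cgroup(struct task_struct *task, struct cgroup *ancestor)
{
        struct cgroup *cgrp = task_cgroup_from_root(task, ancestor->root);

        return cgroup_is_descendant(cgrp, ancestor);
}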
...@@ -274,16 +346,29 @@ static int notify_on_release(const struct cgroup *cgrp) ...@@ -274,16 +346,29 @@ static int notify_on_release(const struct cgroup *cgrp)
* @ssid: the index of the subsystem, CGROUP_SUBSYS_COUNT after reaching the end * @ssid: the index of the subsystem, CGROUP_SUBSYS_COUNT after reaching the end
* @cgrp: the target cgroup to iterate css's of * @cgrp: the target cgroup to iterate css's of
* *
* Should be called under cgroup_mutex. * Should be called under cgroup_[tree_]mutex.
*/ */
#define for_each_css(css, ssid, cgrp) \ #define for_each_css(css, ssid, cgrp) \
for ((ssid) = 0; (ssid) < CGROUP_SUBSYS_COUNT; (ssid)++) \ for ((ssid) = 0; (ssid) < CGROUP_SUBSYS_COUNT; (ssid)++) \
if (!((css) = rcu_dereference_check( \ if (!((css) = rcu_dereference_check( \
(cgrp)->subsys[(ssid)], \ (cgrp)->subsys[(ssid)], \
lockdep_is_held(&cgroup_tree_mutex) || \
lockdep_is_held(&cgroup_mutex)))) { } \ lockdep_is_held(&cgroup_mutex)))) { } \
else else
/**
* for_each_e_css - iterate all effective css's of a cgroup
* @css: the iteration cursor
* @ssid: the index of the subsystem, CGROUP_SUBSYS_COUNT after reaching the end
* @cgrp: the target cgroup to iterate css's of
*
* Should be called under cgroup_[tree_]mutex.
*/
#define for_each_e_css(css, ssid, cgrp) \
for ((ssid) = 0; (ssid) < CGROUP_SUBSYS_COUNT; (ssid)++) \
if (!((css) = cgroup_e_css(cgrp, cgroup_subsys[(ssid)]))) \
; \
else
/** /**
* for_each_subsys - iterate all enabled cgroup subsystems * for_each_subsys - iterate all enabled cgroup subsystems
* @ss: the iteration cursor * @ss: the iteration cursor
...@@ -297,22 +382,13 @@ static int notify_on_release(const struct cgroup *cgrp) ...@@ -297,22 +382,13 @@ static int notify_on_release(const struct cgroup *cgrp)
#define for_each_root(root) \ #define for_each_root(root) \
list_for_each_entry((root), &cgroup_roots, root_list) list_for_each_entry((root), &cgroup_roots, root_list)
/**
 * cgroup_lock_live_group - take cgroup_mutex and check that cgrp is alive.
 * @cgrp: the cgroup to be checked for liveness
 *
 * On success, returns true; the mutex should be later unlocked. On
 * failure returns false with no lock held.
 */
static bool cgroup_lock_live_group(struct cgroup *cgrp)
{
        mutex_lock(&cgroup_mutex);
        if (cgroup_is_dead(cgrp)) {
                mutex_unlock(&cgroup_mutex);
                return false;
        }
        return true;
}

/* iterate over child cgrps, lock should be held throughout iteration */
#define cgroup_for_each_live_child(child, cgrp) \
        list_for_each_entry((child), &(cgrp)->self.children, self.sibling) \
                if (({ lockdep_assert_held(&cgroup_mutex); \
                       cgroup_is_dead(child); })) \
                        ; \
                else
/* the list of cgroups eligible for automatic release. Protected by /* the list of cgroups eligible for automatic release. Protected by
* release_list_lock */ * release_list_lock */
...@@ -360,6 +436,43 @@ struct css_set init_css_set = { ...@@ -360,6 +436,43 @@ struct css_set init_css_set = {
static int css_set_count = 1; /* 1 for init_css_set */ static int css_set_count = 1; /* 1 for init_css_set */
/**
* cgroup_update_populated - update the populated count of a cgroup
* @cgrp: the target cgroup
* @populated: inc or dec populated count
*
* @cgrp is either getting the first task (css_set) or losing the last.
* Update @cgrp->populated_cnt accordingly. The count is propagated
* towards root so that a given cgroup's populated_cnt is zero iff the
* cgroup and all its descendants are empty.
*
* @cgrp's interface file "cgroup.populated" is zero if
* @cgrp->populated_cnt is zero and 1 otherwise. When @cgrp->populated_cnt
* changes from or to zero, userland is notified that the content of the
* interface file has changed. This can be used to detect when @cgrp and
* its descendants become populated or empty.
*/
static void cgroup_update_populated(struct cgroup *cgrp, bool populated)
{
lockdep_assert_held(&css_set_rwsem);
do {
bool trigger;
if (populated)
trigger = !cgrp->populated_cnt++;
else
trigger = !--cgrp->populated_cnt;
if (!trigger)
break;
if (cgrp->populated_kn)
kernfs_notify(cgrp->populated_kn);
cgrp = cgroup_parent(cgrp);
} while (cgrp);
}
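To illustrate the notification side, a userspace sketch (not part of the patch; the path is only an example) that waits for a cgroup subtree to become empty, assuming the usual kernfs semantics where kernfs_notify() makes poll() report POLLERR/POLLPRI on the open file:

#include <fcntl.h>
#include <poll.h>
#include <unistd.h>

static int wait_until_empty(const char *path)  /* e.g. ".../mygrp/cgroup.populated" */
{
        char buf[2] = "1";
        struct pollfd pfd = { .events = POLLPRI };

        pfd.fd = open(path, O_RDONLY);
        if (pfd.fd < 0)
                return -1;

        /* the file reads "0" once the cgroup and all descendants are empty */
        while (pread(pfd.fd, buf, sizeof(buf), 0) > 0 && buf[0] != '0')
                poll(&pfd, 1, -1);      /* woken up by kernfs_notify() */

        close(pfd.fd);
        return buf[0] == '0' ? 0 : -1;
}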
/* /*
* hash table for cgroup groups. This improves the performance to find * hash table for cgroup groups. This improves the performance to find
* an existing css_set. This hash doesn't (currently) take into * an existing css_set. This hash doesn't (currently) take into
...@@ -384,6 +497,8 @@ static unsigned long css_set_hash(struct cgroup_subsys_state *css[]) ...@@ -384,6 +497,8 @@ static unsigned long css_set_hash(struct cgroup_subsys_state *css[])
static void put_css_set_locked(struct css_set *cset, bool taskexit) static void put_css_set_locked(struct css_set *cset, bool taskexit)
{ {
struct cgrp_cset_link *link, *tmp_link; struct cgrp_cset_link *link, *tmp_link;
struct cgroup_subsys *ss;
int ssid;
lockdep_assert_held(&css_set_rwsem); lockdep_assert_held(&css_set_rwsem);
...@@ -391,6 +506,8 @@ static void put_css_set_locked(struct css_set *cset, bool taskexit) ...@@ -391,6 +506,8 @@ static void put_css_set_locked(struct css_set *cset, bool taskexit)
return; return;
/* This css_set is dead. unlink it and release cgroup refcounts */ /* This css_set is dead. unlink it and release cgroup refcounts */
for_each_subsys(ss, ssid)
list_del(&cset->e_cset_node[ssid]);
hash_del(&cset->hlist); hash_del(&cset->hlist);
css_set_count--; css_set_count--;
...@@ -401,10 +518,13 @@ static void put_css_set_locked(struct css_set *cset, bool taskexit) ...@@ -401,10 +518,13 @@ static void put_css_set_locked(struct css_set *cset, bool taskexit)
list_del(&link->cgrp_link); list_del(&link->cgrp_link);
/* @cgrp can't go away while we're holding css_set_rwsem */ /* @cgrp can't go away while we're holding css_set_rwsem */
if (list_empty(&cgrp->cset_links) && notify_on_release(cgrp)) { if (list_empty(&cgrp->cset_links)) {
if (taskexit) cgroup_update_populated(cgrp, false);
set_bit(CGRP_RELEASABLE, &cgrp->flags); if (notify_on_release(cgrp)) {
check_for_release(cgrp); if (taskexit)
set_bit(CGRP_RELEASABLE, &cgrp->flags);
check_for_release(cgrp);
}
} }
kfree(link); kfree(link);
...@@ -453,20 +573,20 @@ static bool compare_css_sets(struct css_set *cset, ...@@ -453,20 +573,20 @@ static bool compare_css_sets(struct css_set *cset,
{ {
struct list_head *l1, *l2; struct list_head *l1, *l2;
if (memcmp(template, cset->subsys, sizeof(cset->subsys))) { /*
/* Not all subsystems matched */ * On the default hierarchy, there can be csets which are
* associated with the same set of cgroups but different csses.
* Let's first ensure that csses match.
*/
if (memcmp(template, cset->subsys, sizeof(cset->subsys)))
return false; return false;
}
/* /*
* Compare cgroup pointers in order to distinguish between * Compare cgroup pointers in order to distinguish between
* different cgroups in heirarchies with no subsystems. We * different cgroups in hierarchies. As different cgroups may
* could get by with just this check alone (and skip the * share the same effective css, this comparison is always
* memcmp above) but on most setups the memcmp check will * necessary.
* avoid the need for this more expensive check on almost all
* candidates.
*/ */
l1 = &cset->cgrp_links; l1 = &cset->cgrp_links;
l2 = &old_cset->cgrp_links; l2 = &old_cset->cgrp_links;
while (1) { while (1) {
...@@ -530,14 +650,17 @@ static struct css_set *find_existing_css_set(struct css_set *old_cset, ...@@ -530,14 +650,17 @@ static struct css_set *find_existing_css_set(struct css_set *old_cset,
* won't change, so no need for locking. * won't change, so no need for locking.
*/ */
for_each_subsys(ss, i) { for_each_subsys(ss, i) {
if (root->cgrp.subsys_mask & (1UL << i)) { if (root->subsys_mask & (1UL << i)) {
/* Subsystem is in this hierarchy. So we want /*
* the subsystem state from the new * @ss is in this hierarchy, so we want the
* cgroup */ * effective css from @cgrp.
template[i] = cgroup_css(cgrp, ss); */
template[i] = cgroup_e_css(cgrp, ss);
} else { } else {
/* Subsystem is not in this hierarchy, so we /*
* don't want to change the subsystem state */ * @ss is not in this hierarchy, so we don't want
* to change the css.
*/
template[i] = old_cset->subsys[i]; template[i] = old_cset->subsys[i];
} }
} }
...@@ -603,10 +726,18 @@ static void link_css_set(struct list_head *tmp_links, struct css_set *cset, ...@@ -603,10 +726,18 @@ static void link_css_set(struct list_head *tmp_links, struct css_set *cset,
struct cgrp_cset_link *link; struct cgrp_cset_link *link;
BUG_ON(list_empty(tmp_links)); BUG_ON(list_empty(tmp_links));
if (cgroup_on_dfl(cgrp))
cset->dfl_cgrp = cgrp;
link = list_first_entry(tmp_links, struct cgrp_cset_link, cset_link); link = list_first_entry(tmp_links, struct cgrp_cset_link, cset_link);
link->cset = cset; link->cset = cset;
link->cgrp = cgrp; link->cgrp = cgrp;
if (list_empty(&cgrp->cset_links))
cgroup_update_populated(cgrp, true);
list_move(&link->cset_link, &cgrp->cset_links); list_move(&link->cset_link, &cgrp->cset_links);
/* /*
* Always add links to the tail of the list so that the list * Always add links to the tail of the list so that the list
* is sorted by order of hierarchy creation * is sorted by order of hierarchy creation
...@@ -629,7 +760,9 @@ static struct css_set *find_css_set(struct css_set *old_cset, ...@@ -629,7 +760,9 @@ static struct css_set *find_css_set(struct css_set *old_cset,
struct css_set *cset; struct css_set *cset;
struct list_head tmp_links; struct list_head tmp_links;
struct cgrp_cset_link *link; struct cgrp_cset_link *link;
struct cgroup_subsys *ss;
unsigned long key; unsigned long key;
int ssid;
lockdep_assert_held(&cgroup_mutex); lockdep_assert_held(&cgroup_mutex);
...@@ -680,10 +813,14 @@ static struct css_set *find_css_set(struct css_set *old_cset, ...@@ -680,10 +813,14 @@ static struct css_set *find_css_set(struct css_set *old_cset,
css_set_count++; css_set_count++;
/* Add this cgroup group to the hash table */ /* Add @cset to the hash table */
key = css_set_hash(cset->subsys); key = css_set_hash(cset->subsys);
hash_add(css_set_table, &cset->hlist, key); hash_add(css_set_table, &cset->hlist, key);
for_each_subsys(ss, ssid)
list_add_tail(&cset->e_cset_node[ssid],
&cset->subsys[ssid]->cgroup->e_csets[ssid]);
up_write(&css_set_rwsem); up_write(&css_set_rwsem);
return cset; return cset;
...@@ -736,14 +873,13 @@ static void cgroup_destroy_root(struct cgroup_root *root) ...@@ -736,14 +873,13 @@ static void cgroup_destroy_root(struct cgroup_root *root)
struct cgroup *cgrp = &root->cgrp; struct cgroup *cgrp = &root->cgrp;
struct cgrp_cset_link *link, *tmp_link; struct cgrp_cset_link *link, *tmp_link;
mutex_lock(&cgroup_tree_mutex);
mutex_lock(&cgroup_mutex); mutex_lock(&cgroup_mutex);
BUG_ON(atomic_read(&root->nr_cgrps)); BUG_ON(atomic_read(&root->nr_cgrps));
BUG_ON(!list_empty(&cgrp->children)); BUG_ON(!list_empty(&cgrp->self.children));
/* Rebind all subsystems back to the default hierarchy */ /* Rebind all subsystems back to the default hierarchy */
rebind_subsystems(&cgrp_dfl_root, cgrp->subsys_mask); rebind_subsystems(&cgrp_dfl_root, root->subsys_mask);
/* /*
* Release all the links from cset_links to this hierarchy's * Release all the links from cset_links to this hierarchy's
...@@ -766,7 +902,6 @@ static void cgroup_destroy_root(struct cgroup_root *root) ...@@ -766,7 +902,6 @@ static void cgroup_destroy_root(struct cgroup_root *root)
cgroup_exit_root_id(root); cgroup_exit_root_id(root);
mutex_unlock(&cgroup_mutex); mutex_unlock(&cgroup_mutex);
mutex_unlock(&cgroup_tree_mutex);
kernfs_destroy_root(root->kf_root); kernfs_destroy_root(root->kf_root);
cgroup_free_root(root); cgroup_free_root(root);
...@@ -849,7 +984,7 @@ static struct cgroup *task_cgroup_from_root(struct task_struct *task, ...@@ -849,7 +984,7 @@ static struct cgroup *task_cgroup_from_root(struct task_struct *task,
* update of a tasks cgroup pointer by cgroup_attach_task() * update of a tasks cgroup pointer by cgroup_attach_task()
*/ */
static int cgroup_populate_dir(struct cgroup *cgrp, unsigned long subsys_mask); static int cgroup_populate_dir(struct cgroup *cgrp, unsigned int subsys_mask);
static struct kernfs_syscall_ops cgroup_kf_syscall_ops; static struct kernfs_syscall_ops cgroup_kf_syscall_ops;
static const struct file_operations proc_cgroupstats_operations; static const struct file_operations proc_cgroupstats_operations;
...@@ -884,79 +1019,95 @@ static umode_t cgroup_file_mode(const struct cftype *cft) ...@@ -884,79 +1019,95 @@ static umode_t cgroup_file_mode(const struct cftype *cft)
if (cft->read_u64 || cft->read_s64 || cft->seq_show) if (cft->read_u64 || cft->read_s64 || cft->seq_show)
mode |= S_IRUGO; mode |= S_IRUGO;
if (cft->write_u64 || cft->write_s64 || cft->write_string || if (cft->write_u64 || cft->write_s64 || cft->write)
cft->trigger)
mode |= S_IWUSR; mode |= S_IWUSR;
return mode; return mode;
} }
static void cgroup_free_fn(struct work_struct *work)
{
        struct cgroup *cgrp = container_of(work, struct cgroup, destroy_work);

        atomic_dec(&cgrp->root->nr_cgrps);
        cgroup_pidlist_destroy_all(cgrp);

        if (cgrp->parent) {
                /*
                 * We get a ref to the parent, and put the ref when this
                 * cgroup is being freed, so it's guaranteed that the
                 * parent won't be destroyed before its children.
                 */
                cgroup_put(cgrp->parent);
                kernfs_put(cgrp->kn);
                kfree(cgrp);
        } else {
                /*
                 * This is root cgroup's refcnt reaching zero, which
                 * indicates that the root should be released.
                 */
                cgroup_destroy_root(cgrp->root);
        }
}

static void cgroup_free_rcu(struct rcu_head *head)
{
        struct cgroup *cgrp = container_of(head, struct cgroup, rcu_head);

        INIT_WORK(&cgrp->destroy_work, cgroup_free_fn);
        queue_work(cgroup_destroy_wq, &cgrp->destroy_work);
}

static void cgroup_get(struct cgroup *cgrp)
{
        WARN_ON_ONCE(cgroup_is_dead(cgrp));
        WARN_ON_ONCE(atomic_read(&cgrp->refcnt) <= 0);
        atomic_inc(&cgrp->refcnt);
}

static void cgroup_put(struct cgroup *cgrp)
{
        if (!atomic_dec_and_test(&cgrp->refcnt))
                return;
        if (WARN_ON_ONCE(cgrp->parent && !cgroup_is_dead(cgrp)))
                return;

        /*
         * XXX: cgrp->id is only used to look up css's. As cgroup and
         * css's lifetimes will be decoupled, it should be made
         * per-subsystem and moved to css->id so that lookups are
         * successful until the target css is released.
         */
        mutex_lock(&cgroup_mutex);
        idr_remove(&cgrp->root->cgroup_idr, cgrp->id);
        mutex_unlock(&cgroup_mutex);
        cgrp->id = -1;

        call_rcu(&cgrp->rcu_head, cgroup_free_rcu);
}

static void cgroup_get(struct cgroup *cgrp)
{
        WARN_ON_ONCE(cgroup_is_dead(cgrp));
        css_get(&cgrp->self);
}

static void cgroup_put(struct cgroup *cgrp)
{
        css_put(&cgrp->self);
}

/**
 * cgroup_kn_unlock - unlocking helper for cgroup kernfs methods
 * @kn: the kernfs_node being serviced
 *
 * This helper undoes cgroup_kn_lock_live() and should be invoked before
 * the method finishes if locking succeeded. Note that once this function
 * returns the cgroup returned by cgroup_kn_lock_live() may become
 * inaccessible any time. If the caller intends to continue to access the
 * cgroup, it should pin it before invoking this function.
 */
static void cgroup_kn_unlock(struct kernfs_node *kn)
{
        struct cgroup *cgrp;

        if (kernfs_type(kn) == KERNFS_DIR)
                cgrp = kn->priv;
        else
                cgrp = kn->parent->priv;

        mutex_unlock(&cgroup_mutex);

        kernfs_unbreak_active_protection(kn);
        cgroup_put(cgrp);
}

/**
 * cgroup_kn_lock_live - locking helper for cgroup kernfs methods
 * @kn: the kernfs_node being serviced
 *
 * This helper is to be used by a cgroup kernfs method currently servicing
 * @kn. It breaks the active protection, performs cgroup locking and
 * verifies that the associated cgroup is alive. Returns the cgroup if
 * alive; otherwise, %NULL. A successful return should be undone by a
 * matching cgroup_kn_unlock() invocation.
 *
 * Any cgroup kernfs method implementation which requires locking the
 * associated cgroup should use this helper. It avoids nesting cgroup
 * locking under kernfs active protection and allows all kernfs operations
 * including self-removal.
 */
static struct cgroup *cgroup_kn_lock_live(struct kernfs_node *kn)
{
        struct cgroup *cgrp;

        if (kernfs_type(kn) == KERNFS_DIR)
                cgrp = kn->priv;
        else
                cgrp = kn->parent->priv;

        /*
         * We're gonna grab cgroup_mutex which nests outside kernfs
         * active_ref. cgroup liveliness check alone provides enough
         * protection against removal. Ensure @cgrp stays accessible and
         * break the active_ref protection.
         */
        cgroup_get(cgrp);
        kernfs_break_active_protection(kn);

        mutex_lock(&cgroup_mutex);

        if (!cgroup_is_dead(cgrp))
                return cgrp;

        cgroup_kn_unlock(kn);
        return NULL;
}
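For reference, a minimal sketch (the handler name and body are placeholders) of the pattern these two helpers impose on cgroup kernfs write methods; cgroup_release_agent_write() and cgroup_subtree_control_write() below follow it:

static ssize_t example_write(struct kernfs_open_file *of, char *buf,
                             size_t nbytes, loff_t off)
{
        struct cgroup *cgrp;
        int ret = 0;

        cgrp = cgroup_kn_lock_live(of->kn);     /* NULL if @cgrp already died */
        if (!cgrp)
                return -ENODEV;

        /* ... operate on @cgrp under cgroup_mutex ... */

        cgroup_kn_unlock(of->kn);               /* @cgrp may vanish after this */
        return ret ?: nbytes;
}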
static void cgroup_rm_file(struct cgroup *cgrp, const struct cftype *cft) static void cgroup_rm_file(struct cgroup *cgrp, const struct cftype *cft)
{ {
char name[CGROUP_FILE_NAME_MAX]; char name[CGROUP_FILE_NAME_MAX];
lockdep_assert_held(&cgroup_tree_mutex); lockdep_assert_held(&cgroup_mutex);
kernfs_remove_by_name(cgrp->kn, cgroup_file_name(cgrp, cft, name)); kernfs_remove_by_name(cgrp->kn, cgroup_file_name(cgrp, cft, name));
} }
...@@ -965,7 +1116,7 @@ static void cgroup_rm_file(struct cgroup *cgrp, const struct cftype *cft) ...@@ -965,7 +1116,7 @@ static void cgroup_rm_file(struct cgroup *cgrp, const struct cftype *cft)
* @cgrp: target cgroup * @cgrp: target cgroup
* @subsys_mask: mask of the subsystem ids whose files should be removed * @subsys_mask: mask of the subsystem ids whose files should be removed
*/ */
static void cgroup_clear_dir(struct cgroup *cgrp, unsigned long subsys_mask) static void cgroup_clear_dir(struct cgroup *cgrp, unsigned int subsys_mask)
{ {
struct cgroup_subsys *ss; struct cgroup_subsys *ss;
int i; int i;
...@@ -973,40 +1124,40 @@ static void cgroup_clear_dir(struct cgroup *cgrp, unsigned long subsys_mask) ...@@ -973,40 +1124,40 @@ static void cgroup_clear_dir(struct cgroup *cgrp, unsigned long subsys_mask)
for_each_subsys(ss, i) { for_each_subsys(ss, i) {
struct cftype *cfts; struct cftype *cfts;
if (!test_bit(i, &subsys_mask)) if (!(subsys_mask & (1 << i)))
continue; continue;
list_for_each_entry(cfts, &ss->cfts, node) list_for_each_entry(cfts, &ss->cfts, node)
cgroup_addrm_files(cgrp, cfts, false); cgroup_addrm_files(cgrp, cfts, false);
} }
} }
static int rebind_subsystems(struct cgroup_root *dst_root, static int rebind_subsystems(struct cgroup_root *dst_root, unsigned int ss_mask)
unsigned long ss_mask)
{ {
struct cgroup_subsys *ss; struct cgroup_subsys *ss;
int ssid, ret; unsigned int tmp_ss_mask;
int ssid, i, ret;
lockdep_assert_held(&cgroup_tree_mutex);
lockdep_assert_held(&cgroup_mutex); lockdep_assert_held(&cgroup_mutex);
for_each_subsys(ss, ssid) { for_each_subsys(ss, ssid) {
if (!(ss_mask & (1 << ssid))) if (!(ss_mask & (1 << ssid)))
continue; continue;
/* if @ss is on the dummy_root, we can always move it */ /* if @ss has non-root csses attached to it, can't move */
if (ss->root == &cgrp_dfl_root) if (css_next_child(NULL, cgroup_css(&ss->root->cgrp, ss)))
continue;
/* if @ss has non-root cgroups attached to it, can't move */
if (!list_empty(&ss->root->cgrp.children))
return -EBUSY; return -EBUSY;
/* can't move between two non-dummy roots either */ /* can't move between two non-dummy roots either */
if (dst_root != &cgrp_dfl_root) if (ss->root != &cgrp_dfl_root && dst_root != &cgrp_dfl_root)
return -EBUSY; return -EBUSY;
} }
ret = cgroup_populate_dir(&dst_root->cgrp, ss_mask); /* skip creating root files on dfl_root for inhibited subsystems */
tmp_ss_mask = ss_mask;
if (dst_root == &cgrp_dfl_root)
tmp_ss_mask &= ~cgrp_dfl_root_inhibit_ss_mask;
ret = cgroup_populate_dir(&dst_root->cgrp, tmp_ss_mask);
if (ret) { if (ret) {
if (dst_root != &cgrp_dfl_root) if (dst_root != &cgrp_dfl_root)
return ret; return ret;
...@@ -1018,9 +1169,9 @@ static int rebind_subsystems(struct cgroup_root *dst_root, ...@@ -1018,9 +1169,9 @@ static int rebind_subsystems(struct cgroup_root *dst_root,
* Just warn about it and continue. * Just warn about it and continue.
*/ */
if (cgrp_dfl_root_visible) { if (cgrp_dfl_root_visible) {
pr_warning("cgroup: failed to create files (%d) while rebinding 0x%lx to default root\n", pr_warn("failed to create files (%d) while rebinding 0x%x to default root\n",
ret, ss_mask); ret, ss_mask);
pr_warning("cgroup: you may retry by moving them to a different hierarchy and unbinding\n"); pr_warn("you may retry by moving them to a different hierarchy and unbinding\n");
} }
} }
...@@ -1028,15 +1179,14 @@ static int rebind_subsystems(struct cgroup_root *dst_root, ...@@ -1028,15 +1179,14 @@ static int rebind_subsystems(struct cgroup_root *dst_root,
* Nothing can fail from this point on. Remove files for the * Nothing can fail from this point on. Remove files for the
* removed subsystems and rebind each subsystem. * removed subsystems and rebind each subsystem.
*/ */
mutex_unlock(&cgroup_mutex);
for_each_subsys(ss, ssid) for_each_subsys(ss, ssid)
if (ss_mask & (1 << ssid)) if (ss_mask & (1 << ssid))
cgroup_clear_dir(&ss->root->cgrp, 1 << ssid); cgroup_clear_dir(&ss->root->cgrp, 1 << ssid);
mutex_lock(&cgroup_mutex);
for_each_subsys(ss, ssid) { for_each_subsys(ss, ssid) {
struct cgroup_root *src_root; struct cgroup_root *src_root;
struct cgroup_subsys_state *css; struct cgroup_subsys_state *css;
struct css_set *cset;
if (!(ss_mask & (1 << ssid))) if (!(ss_mask & (1 << ssid)))
continue; continue;
...@@ -1051,8 +1201,19 @@ static int rebind_subsystems(struct cgroup_root *dst_root, ...@@ -1051,8 +1201,19 @@ static int rebind_subsystems(struct cgroup_root *dst_root,
ss->root = dst_root; ss->root = dst_root;
css->cgroup = &dst_root->cgrp; css->cgroup = &dst_root->cgrp;
src_root->cgrp.subsys_mask &= ~(1 << ssid); down_write(&css_set_rwsem);
dst_root->cgrp.subsys_mask |= 1 << ssid; hash_for_each(css_set_table, i, cset, hlist)
list_move_tail(&cset->e_cset_node[ss->id],
&dst_root->cgrp.e_csets[ss->id]);
up_write(&css_set_rwsem);
src_root->subsys_mask &= ~(1 << ssid);
src_root->cgrp.child_subsys_mask &= ~(1 << ssid);
/* default hierarchy doesn't enable controllers by default */
dst_root->subsys_mask |= 1 << ssid;
if (dst_root != &cgrp_dfl_root)
dst_root->cgrp.child_subsys_mask |= 1 << ssid;
if (ss->bind) if (ss->bind)
ss->bind(css); ss->bind(css);
...@@ -1070,7 +1231,7 @@ static int cgroup_show_options(struct seq_file *seq, ...@@ -1070,7 +1231,7 @@ static int cgroup_show_options(struct seq_file *seq,
int ssid; int ssid;
for_each_subsys(ss, ssid) for_each_subsys(ss, ssid)
if (root->cgrp.subsys_mask & (1 << ssid)) if (root->subsys_mask & (1 << ssid))
seq_printf(seq, ",%s", ss->name); seq_printf(seq, ",%s", ss->name);
if (root->flags & CGRP_ROOT_SANE_BEHAVIOR) if (root->flags & CGRP_ROOT_SANE_BEHAVIOR)
seq_puts(seq, ",sane_behavior"); seq_puts(seq, ",sane_behavior");
...@@ -1092,8 +1253,8 @@ static int cgroup_show_options(struct seq_file *seq, ...@@ -1092,8 +1253,8 @@ static int cgroup_show_options(struct seq_file *seq,
} }
struct cgroup_sb_opts { struct cgroup_sb_opts {
unsigned long subsys_mask; unsigned int subsys_mask;
unsigned long flags; unsigned int flags;
char *release_agent; char *release_agent;
bool cpuset_clone_children; bool cpuset_clone_children;
char *name; char *name;
...@@ -1101,24 +1262,16 @@ struct cgroup_sb_opts { ...@@ -1101,24 +1262,16 @@ struct cgroup_sb_opts {
bool none; bool none;
}; };
/*
* Convert a hierarchy specifier into a bitmask of subsystems and
* flags. Call with cgroup_mutex held to protect the cgroup_subsys[]
* array. This function takes refcounts on subsystems to be used, unless it
* returns error, in which case no refcounts are taken.
*/
static int parse_cgroupfs_options(char *data, struct cgroup_sb_opts *opts) static int parse_cgroupfs_options(char *data, struct cgroup_sb_opts *opts)
{ {
char *token, *o = data; char *token, *o = data;
bool all_ss = false, one_ss = false; bool all_ss = false, one_ss = false;
unsigned long mask = (unsigned long)-1; unsigned int mask = -1U;
struct cgroup_subsys *ss; struct cgroup_subsys *ss;
int i; int i;
BUG_ON(!mutex_is_locked(&cgroup_mutex));
#ifdef CONFIG_CPUSETS #ifdef CONFIG_CPUSETS
mask = ~(1UL << cpuset_cgrp_id); mask = ~(1U << cpuset_cgrp_id);
#endif #endif
memset(opts, 0, sizeof(*opts)); memset(opts, 0, sizeof(*opts));
...@@ -1199,7 +1352,7 @@ static int parse_cgroupfs_options(char *data, struct cgroup_sb_opts *opts) ...@@ -1199,7 +1352,7 @@ static int parse_cgroupfs_options(char *data, struct cgroup_sb_opts *opts)
/* Mutually exclusive option 'all' + subsystem name */ /* Mutually exclusive option 'all' + subsystem name */
if (all_ss) if (all_ss)
return -EINVAL; return -EINVAL;
set_bit(i, &opts->subsys_mask); opts->subsys_mask |= (1 << i);
one_ss = true; one_ss = true;
break; break;
...@@ -1211,12 +1364,12 @@ static int parse_cgroupfs_options(char *data, struct cgroup_sb_opts *opts) ...@@ -1211,12 +1364,12 @@ static int parse_cgroupfs_options(char *data, struct cgroup_sb_opts *opts)
/* Consistency checks */ /* Consistency checks */
if (opts->flags & CGRP_ROOT_SANE_BEHAVIOR) { if (opts->flags & CGRP_ROOT_SANE_BEHAVIOR) {
pr_warning("cgroup: sane_behavior: this is still under development and its behaviors will change, proceed at your own risk\n"); pr_warn("sane_behavior: this is still under development and its behaviors will change, proceed at your own risk\n");
if ((opts->flags & (CGRP_ROOT_NOPREFIX | CGRP_ROOT_XATTR)) || if ((opts->flags & (CGRP_ROOT_NOPREFIX | CGRP_ROOT_XATTR)) ||
opts->cpuset_clone_children || opts->release_agent || opts->cpuset_clone_children || opts->release_agent ||
opts->name) { opts->name) {
pr_err("cgroup: sane_behavior: noprefix, xattr, clone_children, release_agent and name are not allowed\n"); pr_err("sane_behavior: noprefix, xattr, clone_children, release_agent and name are not allowed\n");
return -EINVAL; return -EINVAL;
} }
} else { } else {
...@@ -1228,7 +1381,7 @@ static int parse_cgroupfs_options(char *data, struct cgroup_sb_opts *opts) ...@@ -1228,7 +1381,7 @@ static int parse_cgroupfs_options(char *data, struct cgroup_sb_opts *opts)
if (all_ss || (!one_ss && !opts->none && !opts->name)) if (all_ss || (!one_ss && !opts->none && !opts->name))
for_each_subsys(ss, i) for_each_subsys(ss, i)
if (!ss->disabled) if (!ss->disabled)
set_bit(i, &opts->subsys_mask); opts->subsys_mask |= (1 << i);
/* /*
* We either have to specify by name or by subsystems. (So * We either have to specify by name or by subsystems. (So
...@@ -1259,14 +1412,13 @@ static int cgroup_remount(struct kernfs_root *kf_root, int *flags, char *data) ...@@ -1259,14 +1412,13 @@ static int cgroup_remount(struct kernfs_root *kf_root, int *flags, char *data)
int ret = 0; int ret = 0;
struct cgroup_root *root = cgroup_root_from_kf(kf_root); struct cgroup_root *root = cgroup_root_from_kf(kf_root);
struct cgroup_sb_opts opts; struct cgroup_sb_opts opts;
unsigned long added_mask, removed_mask; unsigned int added_mask, removed_mask;
if (root->flags & CGRP_ROOT_SANE_BEHAVIOR) { if (root->flags & CGRP_ROOT_SANE_BEHAVIOR) {
pr_err("cgroup: sane_behavior: remount is not allowed\n"); pr_err("sane_behavior: remount is not allowed\n");
return -EINVAL; return -EINVAL;
} }
mutex_lock(&cgroup_tree_mutex);
mutex_lock(&cgroup_mutex); mutex_lock(&cgroup_mutex);
/* See what subsystems are wanted */ /* See what subsystems are wanted */
...@@ -1274,17 +1426,17 @@ static int cgroup_remount(struct kernfs_root *kf_root, int *flags, char *data) ...@@ -1274,17 +1426,17 @@ static int cgroup_remount(struct kernfs_root *kf_root, int *flags, char *data)
if (ret) if (ret)
goto out_unlock; goto out_unlock;
if (opts.subsys_mask != root->cgrp.subsys_mask || opts.release_agent) if (opts.subsys_mask != root->subsys_mask || opts.release_agent)
pr_warning("cgroup: option changes via remount are deprecated (pid=%d comm=%s)\n", pr_warn("option changes via remount are deprecated (pid=%d comm=%s)\n",
task_tgid_nr(current), current->comm); task_tgid_nr(current), current->comm);
added_mask = opts.subsys_mask & ~root->cgrp.subsys_mask; added_mask = opts.subsys_mask & ~root->subsys_mask;
removed_mask = root->cgrp.subsys_mask & ~opts.subsys_mask; removed_mask = root->subsys_mask & ~opts.subsys_mask;
/* Don't allow flags or name to change at remount */ /* Don't allow flags or name to change at remount */
if (((opts.flags ^ root->flags) & CGRP_ROOT_OPTION_MASK) || if (((opts.flags ^ root->flags) & CGRP_ROOT_OPTION_MASK) ||
(opts.name && strcmp(opts.name, root->name))) { (opts.name && strcmp(opts.name, root->name))) {
pr_err("cgroup: option or name mismatch, new: 0x%lx \"%s\", old: 0x%lx \"%s\"\n", pr_err("option or name mismatch, new: 0x%x \"%s\", old: 0x%x \"%s\"\n",
opts.flags & CGRP_ROOT_OPTION_MASK, opts.name ?: "", opts.flags & CGRP_ROOT_OPTION_MASK, opts.name ?: "",
root->flags & CGRP_ROOT_OPTION_MASK, root->name); root->flags & CGRP_ROOT_OPTION_MASK, root->name);
ret = -EINVAL; ret = -EINVAL;
...@@ -1292,7 +1444,7 @@ static int cgroup_remount(struct kernfs_root *kf_root, int *flags, char *data) ...@@ -1292,7 +1444,7 @@ static int cgroup_remount(struct kernfs_root *kf_root, int *flags, char *data)
} }
/* remounting is not allowed for populated hierarchies */ /* remounting is not allowed for populated hierarchies */
if (!list_empty(&root->cgrp.children)) { if (!list_empty(&root->cgrp.self.children)) {
ret = -EBUSY; ret = -EBUSY;
goto out_unlock; goto out_unlock;
} }
...@@ -1312,7 +1464,6 @@ static int cgroup_remount(struct kernfs_root *kf_root, int *flags, char *data) ...@@ -1312,7 +1464,6 @@ static int cgroup_remount(struct kernfs_root *kf_root, int *flags, char *data)
kfree(opts.release_agent); kfree(opts.release_agent);
kfree(opts.name); kfree(opts.name);
mutex_unlock(&cgroup_mutex); mutex_unlock(&cgroup_mutex);
mutex_unlock(&cgroup_tree_mutex);
return ret; return ret;
} }
...@@ -1370,14 +1521,22 @@ static void cgroup_enable_task_cg_lists(void) ...@@ -1370,14 +1521,22 @@ static void cgroup_enable_task_cg_lists(void)
static void init_cgroup_housekeeping(struct cgroup *cgrp) static void init_cgroup_housekeeping(struct cgroup *cgrp)
{ {
atomic_set(&cgrp->refcnt, 1); struct cgroup_subsys *ss;
INIT_LIST_HEAD(&cgrp->sibling); int ssid;
INIT_LIST_HEAD(&cgrp->children);
INIT_LIST_HEAD(&cgrp->self.sibling);
INIT_LIST_HEAD(&cgrp->self.children);
INIT_LIST_HEAD(&cgrp->cset_links); INIT_LIST_HEAD(&cgrp->cset_links);
INIT_LIST_HEAD(&cgrp->release_list); INIT_LIST_HEAD(&cgrp->release_list);
INIT_LIST_HEAD(&cgrp->pidlists); INIT_LIST_HEAD(&cgrp->pidlists);
mutex_init(&cgrp->pidlist_mutex); mutex_init(&cgrp->pidlist_mutex);
cgrp->dummy_css.cgroup = cgrp; cgrp->self.cgroup = cgrp;
cgrp->self.flags |= CSS_ONLINE;
for_each_subsys(ss, ssid)
INIT_LIST_HEAD(&cgrp->e_csets[ssid]);
init_waitqueue_head(&cgrp->offline_waitq);
} }
static void init_cgroup_root(struct cgroup_root *root, static void init_cgroup_root(struct cgroup_root *root,
...@@ -1400,21 +1559,24 @@ static void init_cgroup_root(struct cgroup_root *root, ...@@ -1400,21 +1559,24 @@ static void init_cgroup_root(struct cgroup_root *root,
set_bit(CGRP_CPUSET_CLONE_CHILDREN, &root->cgrp.flags); set_bit(CGRP_CPUSET_CLONE_CHILDREN, &root->cgrp.flags);
} }
static int cgroup_setup_root(struct cgroup_root *root, unsigned long ss_mask) static int cgroup_setup_root(struct cgroup_root *root, unsigned int ss_mask)
{ {
LIST_HEAD(tmp_links); LIST_HEAD(tmp_links);
struct cgroup *root_cgrp = &root->cgrp; struct cgroup *root_cgrp = &root->cgrp;
struct css_set *cset; struct css_set *cset;
int i, ret; int i, ret;
lockdep_assert_held(&cgroup_tree_mutex);
lockdep_assert_held(&cgroup_mutex); lockdep_assert_held(&cgroup_mutex);
ret = idr_alloc(&root->cgroup_idr, root_cgrp, 0, 1, GFP_KERNEL); ret = cgroup_idr_alloc(&root->cgroup_idr, root_cgrp, 1, 2, GFP_NOWAIT);
if (ret < 0) if (ret < 0)
goto out; goto out;
root_cgrp->id = ret; root_cgrp->id = ret;
ret = percpu_ref_init(&root_cgrp->self.refcnt, css_release);
if (ret)
goto out;
/* /*
* We're accessing css_set_count without locking css_set_rwsem here, * We're accessing css_set_count without locking css_set_rwsem here,
* but that's OK - it can only be increased by someone holding * but that's OK - it can only be increased by someone holding
...@@ -1423,11 +1585,11 @@ static int cgroup_setup_root(struct cgroup_root *root, unsigned long ss_mask) ...@@ -1423,11 +1585,11 @@ static int cgroup_setup_root(struct cgroup_root *root, unsigned long ss_mask)
*/ */
ret = allocate_cgrp_cset_links(css_set_count, &tmp_links); ret = allocate_cgrp_cset_links(css_set_count, &tmp_links);
if (ret) if (ret)
goto out; goto cancel_ref;
ret = cgroup_init_root_id(root); ret = cgroup_init_root_id(root);
if (ret) if (ret)
goto out; goto cancel_ref;
root->kf_root = kernfs_create_root(&cgroup_kf_syscall_ops, root->kf_root = kernfs_create_root(&cgroup_kf_syscall_ops,
KERNFS_ROOT_CREATE_DEACTIVATED, KERNFS_ROOT_CREATE_DEACTIVATED,
...@@ -1463,7 +1625,7 @@ static int cgroup_setup_root(struct cgroup_root *root, unsigned long ss_mask) ...@@ -1463,7 +1625,7 @@ static int cgroup_setup_root(struct cgroup_root *root, unsigned long ss_mask)
link_css_set(&tmp_links, cset, root_cgrp); link_css_set(&tmp_links, cset, root_cgrp);
up_write(&css_set_rwsem); up_write(&css_set_rwsem);
BUG_ON(!list_empty(&root_cgrp->children)); BUG_ON(!list_empty(&root_cgrp->self.children));
BUG_ON(atomic_read(&root->nr_cgrps) != 1); BUG_ON(atomic_read(&root->nr_cgrps) != 1);
kernfs_activate(root_cgrp->kn); kernfs_activate(root_cgrp->kn);
...@@ -1475,6 +1637,8 @@ static int cgroup_setup_root(struct cgroup_root *root, unsigned long ss_mask) ...@@ -1475,6 +1637,8 @@ static int cgroup_setup_root(struct cgroup_root *root, unsigned long ss_mask)
root->kf_root = NULL; root->kf_root = NULL;
exit_root_id: exit_root_id:
cgroup_exit_root_id(root); cgroup_exit_root_id(root);
cancel_ref:
percpu_ref_cancel_init(&root_cgrp->self.refcnt);
out: out:
free_cgrp_cset_links(&tmp_links); free_cgrp_cset_links(&tmp_links);
return ret; return ret;
...@@ -1497,14 +1661,13 @@ static struct dentry *cgroup_mount(struct file_system_type *fs_type, ...@@ -1497,14 +1661,13 @@ static struct dentry *cgroup_mount(struct file_system_type *fs_type,
if (!use_task_css_set_links) if (!use_task_css_set_links)
cgroup_enable_task_cg_lists(); cgroup_enable_task_cg_lists();
mutex_lock(&cgroup_tree_mutex);
mutex_lock(&cgroup_mutex); mutex_lock(&cgroup_mutex);
/* First find the desired set of subsystems */ /* First find the desired set of subsystems */
ret = parse_cgroupfs_options(data, &opts); ret = parse_cgroupfs_options(data, &opts);
if (ret) if (ret)
goto out_unlock; goto out_unlock;
retry:
/* look for a matching existing root */ /* look for a matching existing root */
if (!opts.subsys_mask && !opts.none && !opts.name) { if (!opts.subsys_mask && !opts.none && !opts.name) {
cgrp_dfl_root_visible = true; cgrp_dfl_root_visible = true;
...@@ -1536,7 +1699,7 @@ static struct dentry *cgroup_mount(struct file_system_type *fs_type, ...@@ -1536,7 +1699,7 @@ static struct dentry *cgroup_mount(struct file_system_type *fs_type,
* subsystems) then they must match. * subsystems) then they must match.
*/ */
if ((opts.subsys_mask || opts.none) && if ((opts.subsys_mask || opts.none) &&
(opts.subsys_mask != root->cgrp.subsys_mask)) { (opts.subsys_mask != root->subsys_mask)) {
if (!name_match) if (!name_match)
continue; continue;
ret = -EBUSY; ret = -EBUSY;
...@@ -1545,28 +1708,27 @@ static struct dentry *cgroup_mount(struct file_system_type *fs_type, ...@@ -1545,28 +1708,27 @@ static struct dentry *cgroup_mount(struct file_system_type *fs_type,
if ((root->flags ^ opts.flags) & CGRP_ROOT_OPTION_MASK) { if ((root->flags ^ opts.flags) & CGRP_ROOT_OPTION_MASK) {
if ((root->flags | opts.flags) & CGRP_ROOT_SANE_BEHAVIOR) { if ((root->flags | opts.flags) & CGRP_ROOT_SANE_BEHAVIOR) {
pr_err("cgroup: sane_behavior: new mount options should match the existing superblock\n"); pr_err("sane_behavior: new mount options should match the existing superblock\n");
ret = -EINVAL; ret = -EINVAL;
goto out_unlock; goto out_unlock;
} else { } else {
pr_warning("cgroup: new mount options do not match the existing superblock, will be ignored\n"); pr_warn("new mount options do not match the existing superblock, will be ignored\n");
} }
} }
/* /*
* A root's lifetime is governed by its root cgroup. Zero * A root's lifetime is governed by its root cgroup.
* ref indicates that the root is being destroyed. Wait for * tryget_live failure indicates that the root is being
* destruction to complete so that the subsystems are free. * destroyed. Wait for destruction to complete so that the
* We can use wait_queue for the wait but this path is * subsystems are free. We can use wait_queue for the wait
* super cold. Let's just sleep for a bit and retry. * but this path is super cold. Let's just sleep for a bit
* and retry.
*/ */
if (!atomic_inc_not_zero(&root->cgrp.refcnt)) { if (!percpu_ref_tryget_live(&root->cgrp.self.refcnt)) {
mutex_unlock(&cgroup_mutex); mutex_unlock(&cgroup_mutex);
mutex_unlock(&cgroup_tree_mutex);
msleep(10); msleep(10);
mutex_lock(&cgroup_tree_mutex); ret = restart_syscall();
mutex_lock(&cgroup_mutex); goto out_free;
goto retry;
} }
ret = 0; ret = 0;
...@@ -1597,8 +1759,7 @@ static struct dentry *cgroup_mount(struct file_system_type *fs_type, ...@@ -1597,8 +1759,7 @@ static struct dentry *cgroup_mount(struct file_system_type *fs_type,
out_unlock: out_unlock:
mutex_unlock(&cgroup_mutex); mutex_unlock(&cgroup_mutex);
mutex_unlock(&cgroup_tree_mutex); out_free:
kfree(opts.release_agent); kfree(opts.release_agent);
kfree(opts.name); kfree(opts.name);
...@@ -1617,7 +1778,19 @@ static void cgroup_kill_sb(struct super_block *sb) ...@@ -1617,7 +1778,19 @@ static void cgroup_kill_sb(struct super_block *sb)
struct kernfs_root *kf_root = kernfs_root_from_sb(sb); struct kernfs_root *kf_root = kernfs_root_from_sb(sb);
struct cgroup_root *root = cgroup_root_from_kf(kf_root); struct cgroup_root *root = cgroup_root_from_kf(kf_root);
cgroup_put(&root->cgrp); /*
* If @root doesn't have any mounts or children, start killing it.
* This prevents new mounts by disabling percpu_ref_tryget_live().
* cgroup_mount() may wait for @root's release.
*
* And don't kill the default root.
*/
if (css_has_online_children(&root->cgrp.self) ||
root == &cgrp_dfl_root)
cgroup_put(&root->cgrp);
else
percpu_ref_kill(&root->cgrp.self.refcnt);
kernfs_kill_sb(sb); kernfs_kill_sb(sb);
} }
...@@ -1739,7 +1912,7 @@ struct task_struct *cgroup_taskset_next(struct cgroup_taskset *tset) ...@@ -1739,7 +1912,7 @@ struct task_struct *cgroup_taskset_next(struct cgroup_taskset *tset)
/** /**
* cgroup_task_migrate - move a task from one cgroup to another. * cgroup_task_migrate - move a task from one cgroup to another.
* @old_cgrp; the cgroup @tsk is being migrated from * @old_cgrp: the cgroup @tsk is being migrated from
* @tsk: the task being migrated * @tsk: the task being migrated
* @new_cset: the new css_set @tsk is being attached to * @new_cset: the new css_set @tsk is being attached to
* *
...@@ -1831,10 +2004,6 @@ static void cgroup_migrate_add_src(struct css_set *src_cset, ...@@ -1831,10 +2004,6 @@ static void cgroup_migrate_add_src(struct css_set *src_cset,
src_cgrp = cset_cgroup_from_root(src_cset, dst_cgrp->root); src_cgrp = cset_cgroup_from_root(src_cset, dst_cgrp->root);
/* nothing to do if this cset already belongs to the cgroup */
if (src_cgrp == dst_cgrp)
return;
if (!list_empty(&src_cset->mg_preload_node)) if (!list_empty(&src_cset->mg_preload_node))
return; return;
...@@ -1849,13 +2018,14 @@ static void cgroup_migrate_add_src(struct css_set *src_cset, ...@@ -1849,13 +2018,14 @@ static void cgroup_migrate_add_src(struct css_set *src_cset,
/** /**
* cgroup_migrate_prepare_dst - prepare destination css_sets for migration * cgroup_migrate_prepare_dst - prepare destination css_sets for migration
* @dst_cgrp: the destination cgroup * @dst_cgrp: the destination cgroup (may be %NULL)
* @preloaded_csets: list of preloaded source css_sets * @preloaded_csets: list of preloaded source css_sets
* *
* Tasks are about to be moved to @dst_cgrp and all the source css_sets * Tasks are about to be moved to @dst_cgrp and all the source css_sets
* have been preloaded to @preloaded_csets. This function looks up and * have been preloaded to @preloaded_csets. This function looks up and
* pins all destination css_sets, links each to its source, and puts them on * pins all destination css_sets, links each to its source, and appends them
* @preloaded_csets. * to @preloaded_csets. If @dst_cgrp is %NULL, the destination of each
* source css_set is assumed to be its cgroup on the default hierarchy.
* *
* This function must be called after cgroup_migrate_add_src() has been * This function must be called after cgroup_migrate_add_src() has been
* called on each migration source css_set. After migration is performed * called on each migration source css_set. After migration is performed
...@@ -1866,19 +2036,42 @@ static int cgroup_migrate_prepare_dst(struct cgroup *dst_cgrp, ...@@ -1866,19 +2036,42 @@ static int cgroup_migrate_prepare_dst(struct cgroup *dst_cgrp,
struct list_head *preloaded_csets) struct list_head *preloaded_csets)
{ {
LIST_HEAD(csets); LIST_HEAD(csets);
struct css_set *src_cset; struct css_set *src_cset, *tmp_cset;
lockdep_assert_held(&cgroup_mutex); lockdep_assert_held(&cgroup_mutex);
/*
* Except for the root, child_subsys_mask must be zero for a cgroup
* with tasks so that child cgroups don't compete against tasks.
*/
if (dst_cgrp && cgroup_on_dfl(dst_cgrp) && cgroup_parent(dst_cgrp) &&
dst_cgrp->child_subsys_mask)
return -EBUSY;
/* look up the dst cset for each src cset and link it to src */ /* look up the dst cset for each src cset and link it to src */
list_for_each_entry(src_cset, preloaded_csets, mg_preload_node) { list_for_each_entry_safe(src_cset, tmp_cset, preloaded_csets, mg_preload_node) {
struct css_set *dst_cset; struct css_set *dst_cset;
dst_cset = find_css_set(src_cset, dst_cgrp); dst_cset = find_css_set(src_cset,
dst_cgrp ?: src_cset->dfl_cgrp);
if (!dst_cset) if (!dst_cset)
goto err; goto err;
WARN_ON_ONCE(src_cset->mg_dst_cset || dst_cset->mg_dst_cset); WARN_ON_ONCE(src_cset->mg_dst_cset || dst_cset->mg_dst_cset);
/*
* If src cset equals dst, it's noop. Drop the src.
* cgroup_migrate() will skip the cset too. Note that we
* can't handle src == dst as some nodes are used by both.
*/
if (src_cset == dst_cset) {
src_cset->mg_src_cgrp = NULL;
list_del_init(&src_cset->mg_preload_node);
put_css_set(src_cset, false);
put_css_set(dst_cset, false);
continue;
}
src_cset->mg_dst_cset = dst_cset; src_cset->mg_dst_cset = dst_cset;
if (list_empty(&dst_cset->mg_preload_node)) if (list_empty(&dst_cset->mg_preload_node))
...@@ -1887,7 +2080,7 @@ static int cgroup_migrate_prepare_dst(struct cgroup *dst_cgrp, ...@@ -1887,7 +2080,7 @@ static int cgroup_migrate_prepare_dst(struct cgroup *dst_cgrp,
put_css_set(dst_cset, false); put_css_set(dst_cset, false);
} }
list_splice(&csets, preloaded_csets); list_splice_tail(&csets, preloaded_csets);
return 0; return 0;
err: err:
cgroup_migrate_finish(&csets); cgroup_migrate_finish(&csets);
...@@ -1968,7 +2161,7 @@ static int cgroup_migrate(struct cgroup *cgrp, struct task_struct *leader, ...@@ -1968,7 +2161,7 @@ static int cgroup_migrate(struct cgroup *cgrp, struct task_struct *leader,
return 0; return 0;
/* check that we can legitimately attach to the cgroup */ /* check that we can legitimately attach to the cgroup */
for_each_css(css, i, cgrp) { for_each_e_css(css, i, cgrp) {
if (css->ss->can_attach) { if (css->ss->can_attach) {
ret = css->ss->can_attach(css, &tset); ret = css->ss->can_attach(css, &tset);
if (ret) { if (ret) {
...@@ -1998,7 +2191,7 @@ static int cgroup_migrate(struct cgroup *cgrp, struct task_struct *leader, ...@@ -1998,7 +2191,7 @@ static int cgroup_migrate(struct cgroup *cgrp, struct task_struct *leader,
*/ */
tset.csets = &tset.dst_csets; tset.csets = &tset.dst_csets;
for_each_css(css, i, cgrp) for_each_e_css(css, i, cgrp)
if (css->ss->attach) if (css->ss->attach)
css->ss->attach(css, &tset); css->ss->attach(css, &tset);
...@@ -2006,7 +2199,7 @@ static int cgroup_migrate(struct cgroup *cgrp, struct task_struct *leader, ...@@ -2006,7 +2199,7 @@ static int cgroup_migrate(struct cgroup *cgrp, struct task_struct *leader,
goto out_release_tset; goto out_release_tset;
out_cancel_attach: out_cancel_attach:
for_each_css(css, i, cgrp) { for_each_e_css(css, i, cgrp) {
if (css == failed_css) if (css == failed_css)
break; break;
if (css->ss->cancel_attach) if (css->ss->cancel_attach)
...@@ -2065,13 +2258,20 @@ static int cgroup_attach_task(struct cgroup *dst_cgrp, ...@@ -2065,13 +2258,20 @@ static int cgroup_attach_task(struct cgroup *dst_cgrp,
* function to attach either it or all tasks in its threadgroup. Will lock * function to attach either it or all tasks in its threadgroup. Will lock
* cgroup_mutex and threadgroup. * cgroup_mutex and threadgroup.
*/ */
static int attach_task_by_pid(struct cgroup *cgrp, u64 pid, bool threadgroup) static ssize_t __cgroup_procs_write(struct kernfs_open_file *of, char *buf,
size_t nbytes, loff_t off, bool threadgroup)
{ {
struct task_struct *tsk; struct task_struct *tsk;
const struct cred *cred = current_cred(), *tcred; const struct cred *cred = current_cred(), *tcred;
struct cgroup *cgrp;
pid_t pid;
int ret; int ret;
if (!cgroup_lock_live_group(cgrp)) if (kstrtoint(strstrip(buf), 0, &pid) || pid < 0)
return -EINVAL;
cgrp = cgroup_kn_lock_live(of->kn);
if (!cgrp)
return -ENODEV; return -ENODEV;
retry_find_task: retry_find_task:
...@@ -2137,8 +2337,8 @@ static int attach_task_by_pid(struct cgroup *cgrp, u64 pid, bool threadgroup) ...@@ -2137,8 +2337,8 @@ static int attach_task_by_pid(struct cgroup *cgrp, u64 pid, bool threadgroup)
put_task_struct(tsk); put_task_struct(tsk);
out_unlock_cgroup: out_unlock_cgroup:
mutex_unlock(&cgroup_mutex); cgroup_kn_unlock(of->kn);
return ret; return ret ?: nbytes;
} }
/** /**
...@@ -2172,43 +2372,44 @@ int cgroup_attach_task_all(struct task_struct *from, struct task_struct *tsk) ...@@ -2172,43 +2372,44 @@ int cgroup_attach_task_all(struct task_struct *from, struct task_struct *tsk)
} }
EXPORT_SYMBOL_GPL(cgroup_attach_task_all); EXPORT_SYMBOL_GPL(cgroup_attach_task_all);
static int cgroup_tasks_write(struct cgroup_subsys_state *css, static ssize_t cgroup_tasks_write(struct kernfs_open_file *of,
struct cftype *cft, u64 pid) char *buf, size_t nbytes, loff_t off)
{ {
return attach_task_by_pid(css->cgroup, pid, false); return __cgroup_procs_write(of, buf, nbytes, off, false);
} }
static int cgroup_procs_write(struct cgroup_subsys_state *css, static ssize_t cgroup_procs_write(struct kernfs_open_file *of,
struct cftype *cft, u64 tgid) char *buf, size_t nbytes, loff_t off)
{ {
return attach_task_by_pid(css->cgroup, tgid, true); return __cgroup_procs_write(of, buf, nbytes, off, true);
} }
static int cgroup_release_agent_write(struct cgroup_subsys_state *css, static ssize_t cgroup_release_agent_write(struct kernfs_open_file *of,
struct cftype *cft, char *buffer) char *buf, size_t nbytes, loff_t off)
{ {
struct cgroup_root *root = css->cgroup->root; struct cgroup *cgrp;
BUILD_BUG_ON(sizeof(cgrp->root->release_agent_path) < PATH_MAX);
BUILD_BUG_ON(sizeof(root->release_agent_path) < PATH_MAX); cgrp = cgroup_kn_lock_live(of->kn);
if (!cgroup_lock_live_group(css->cgroup)) if (!cgrp)
return -ENODEV; return -ENODEV;
spin_lock(&release_agent_path_lock); spin_lock(&release_agent_path_lock);
strlcpy(root->release_agent_path, buffer, strlcpy(cgrp->root->release_agent_path, strstrip(buf),
sizeof(root->release_agent_path)); sizeof(cgrp->root->release_agent_path));
spin_unlock(&release_agent_path_lock); spin_unlock(&release_agent_path_lock);
mutex_unlock(&cgroup_mutex); cgroup_kn_unlock(of->kn);
return 0; return nbytes;
} }
static int cgroup_release_agent_show(struct seq_file *seq, void *v) static int cgroup_release_agent_show(struct seq_file *seq, void *v)
{ {
struct cgroup *cgrp = seq_css(seq)->cgroup; struct cgroup *cgrp = seq_css(seq)->cgroup;
if (!cgroup_lock_live_group(cgrp)) spin_lock(&release_agent_path_lock);
return -ENODEV;
seq_puts(seq, cgrp->root->release_agent_path); seq_puts(seq, cgrp->root->release_agent_path);
spin_unlock(&release_agent_path_lock);
seq_putc(seq, '\n'); seq_putc(seq, '\n');
mutex_unlock(&cgroup_mutex);
return 0; return 0;
} }
...@@ -2220,58 +2421,371 @@ static int cgroup_sane_behavior_show(struct seq_file *seq, void *v) ...@@ -2220,58 +2421,371 @@ static int cgroup_sane_behavior_show(struct seq_file *seq, void *v)
return 0; return 0;
} }
static ssize_t cgroup_file_write(struct kernfs_open_file *of, char *buf, static void cgroup_print_ss_mask(struct seq_file *seq, unsigned int ss_mask)
size_t nbytes, loff_t off)
{ {
struct cgroup *cgrp = of->kn->parent->priv; struct cgroup_subsys *ss;
struct cftype *cft = of->kn->priv; bool printed = false;
struct cgroup_subsys_state *css; int ssid;
int ret;
/*
* kernfs guarantees that a file isn't deleted with operations in
* flight, which means that the matching css is and stays alive and
* doesn't need to be pinned. The RCU locking is not necessary
* either. It's just for the convenience of using cgroup_css().
*/
rcu_read_lock();
css = cgroup_css(cgrp, cft->ss);
rcu_read_unlock();
if (cft->write_string) { for_each_subsys(ss, ssid) {
ret = cft->write_string(css, cft, strstrip(buf)); if (ss_mask & (1 << ssid)) {
} else if (cft->write_u64) { if (printed)
unsigned long long v; seq_putc(seq, ' ');
ret = kstrtoull(buf, 0, &v); seq_printf(seq, "%s", ss->name);
if (!ret) printed = true;
ret = cft->write_u64(css, cft, v); }
} else if (cft->write_s64) {
long long v;
ret = kstrtoll(buf, 0, &v);
if (!ret)
ret = cft->write_s64(css, cft, v);
} else if (cft->trigger) {
ret = cft->trigger(css, (unsigned int)cft->private);
} else {
ret = -EINVAL;
} }
if (printed)
return ret ?: nbytes; seq_putc(seq, '\n');
} }
static void *cgroup_seqfile_start(struct seq_file *seq, loff_t *ppos) /* show controllers which are currently attached to the default hierarchy */
static int cgroup_root_controllers_show(struct seq_file *seq, void *v)
{ {
return seq_cft(seq)->seq_start(seq, ppos); struct cgroup *cgrp = seq_css(seq)->cgroup;
cgroup_print_ss_mask(seq, cgrp->root->subsys_mask &
~cgrp_dfl_root_inhibit_ss_mask);
return 0;
} }
static void *cgroup_seqfile_next(struct seq_file *seq, void *v, loff_t *ppos) /* show controllers which are enabled from the parent */
static int cgroup_controllers_show(struct seq_file *seq, void *v)
{ {
return seq_cft(seq)->seq_next(seq, v, ppos); struct cgroup *cgrp = seq_css(seq)->cgroup;
cgroup_print_ss_mask(seq, cgroup_parent(cgrp)->child_subsys_mask);
return 0;
} }
static void cgroup_seqfile_stop(struct seq_file *seq, void *v) /* show controllers which are enabled for a given cgroup's children */
static int cgroup_subtree_control_show(struct seq_file *seq, void *v)
{ {
seq_cft(seq)->seq_stop(seq, v); struct cgroup *cgrp = seq_css(seq)->cgroup;
cgroup_print_ss_mask(seq, cgrp->child_subsys_mask);
return 0;
}
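As a usage illustration (the directory path and controller name are examples; in this series these seq_show handlers back the cgroup.controllers and cgroup.subtree_control files of the default hierarchy), enabling a controller for a cgroup's children boils down to:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static int enable_for_children(const char *cgrp_dir, const char *ctrl)
{
        char path[4096], req[64];
        int fd, ret;

        snprintf(path, sizeof(path), "%s/cgroup.subtree_control", cgrp_dir);
        snprintf(req, sizeof(req), "+%s", ctrl);   /* "-memory" would disable */

        fd = open(path, O_WRONLY);
        if (fd < 0)
                return -1;
        ret = write(fd, req, strlen(req)) < 0 ? -1 : 0;
        close(fd);
        return ret;
}

Writing several tokens at once, e.g. "+memory -cpu", also works, since the kernel side below parses a space-separated list of names prefixed with + or -.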
/**
* cgroup_update_dfl_csses - update css assoc of a subtree in default hierarchy
* @cgrp: root of the subtree to update csses for
*
* @cgrp's child_subsys_mask has changed and its subtree's (self excluded)
* css associations need to be updated accordingly. This function looks up
* all css_sets which are attached to the subtree, creates the matching
* updated css_sets and migrates the tasks to the new ones.
*/
static int cgroup_update_dfl_csses(struct cgroup *cgrp)
{
LIST_HEAD(preloaded_csets);
struct cgroup_subsys_state *css;
struct css_set *src_cset;
int ret;
lockdep_assert_held(&cgroup_mutex);
/* look up all csses currently attached to @cgrp's subtree */
down_read(&css_set_rwsem);
css_for_each_descendant_pre(css, cgroup_css(cgrp, NULL)) {
struct cgrp_cset_link *link;
/* self is not affected by child_subsys_mask change */
if (css->cgroup == cgrp)
continue;
list_for_each_entry(link, &css->cgroup->cset_links, cset_link)
cgroup_migrate_add_src(link->cset, cgrp,
&preloaded_csets);
}
up_read(&css_set_rwsem);
/* NULL dst indicates self on default hierarchy */
ret = cgroup_migrate_prepare_dst(NULL, &preloaded_csets);
if (ret)
goto out_finish;
list_for_each_entry(src_cset, &preloaded_csets, mg_preload_node) {
struct task_struct *last_task = NULL, *task;
/* src_csets precede dst_csets, break on the first dst_cset */
if (!src_cset->mg_src_cgrp)
break;
/*
* All tasks in src_cset need to be migrated to the
* matching dst_cset. Empty it process by process. We
* walk tasks but migrate processes. The leader might even
* belong to a different cset but such src_cset would also
* be among the target src_csets because the default
* hierarchy enforces per-process membership.
*/
while (true) {
down_read(&css_set_rwsem);
task = list_first_entry_or_null(&src_cset->tasks,
struct task_struct, cg_list);
if (task) {
task = task->group_leader;
WARN_ON_ONCE(!task_css_set(task)->mg_src_cgrp);
get_task_struct(task);
}
up_read(&css_set_rwsem);
if (!task)
break;
/* guard against possible infinite loop */
if (WARN(last_task == task,
"cgroup: update_dfl_csses failed to make progress, aborting in inconsistent state\n"))
goto out_finish;
last_task = task;
threadgroup_lock(task);
/* raced against de_thread() from another thread? */
if (!thread_group_leader(task)) {
threadgroup_unlock(task);
put_task_struct(task);
continue;
}
ret = cgroup_migrate(src_cset->dfl_cgrp, task, true);
threadgroup_unlock(task);
put_task_struct(task);
if (WARN(ret, "cgroup: failed to update controllers for the default hierarchy (%d), further operations may crash or hang\n", ret))
goto out_finish;
}
}
out_finish:
cgroup_migrate_finish(&preloaded_csets);
return ret;
}
/* change the enabled child controllers for a cgroup in the default hierarchy */
static ssize_t cgroup_subtree_control_write(struct kernfs_open_file *of,
char *buf, size_t nbytes,
loff_t off)
{
unsigned int enable = 0, disable = 0;
struct cgroup *cgrp, *child;
struct cgroup_subsys *ss;
char *tok;
int ssid, ret;
/*
* Parse input - space separated list of subsystem names prefixed
* with either + or -.
*/
buf = strstrip(buf);
while ((tok = strsep(&buf, " "))) {
if (tok[0] == '\0')
continue;
for_each_subsys(ss, ssid) {
if (ss->disabled || strcmp(tok + 1, ss->name) ||
((1 << ss->id) & cgrp_dfl_root_inhibit_ss_mask))
continue;
if (*tok == '+') {
enable |= 1 << ssid;
disable &= ~(1 << ssid);
} else if (*tok == '-') {
disable |= 1 << ssid;
enable &= ~(1 << ssid);
} else {
return -EINVAL;
}
break;
}
if (ssid == CGROUP_SUBSYS_COUNT)
return -EINVAL;
}
cgrp = cgroup_kn_lock_live(of->kn);
if (!cgrp)
return -ENODEV;
for_each_subsys(ss, ssid) {
if (enable & (1 << ssid)) {
if (cgrp->child_subsys_mask & (1 << ssid)) {
enable &= ~(1 << ssid);
continue;
}
/*
* Because css offlining is asynchronous, userland
* might try to re-enable the same controller while
* the previous instance is still around. In such
* cases, wait till it's gone using offline_waitq.
*/
cgroup_for_each_live_child(child, cgrp) {
DEFINE_WAIT(wait);
if (!cgroup_css(child, ss))
continue;
cgroup_get(child);
prepare_to_wait(&child->offline_waitq, &wait,
TASK_UNINTERRUPTIBLE);
cgroup_kn_unlock(of->kn);
schedule();
finish_wait(&child->offline_waitq, &wait);
cgroup_put(child);
return restart_syscall();
}
/* unavailable or not enabled on the parent? */
if (!(cgrp_dfl_root.subsys_mask & (1 << ssid)) ||
(cgroup_parent(cgrp) &&
!(cgroup_parent(cgrp)->child_subsys_mask & (1 << ssid)))) {
ret = -ENOENT;
goto out_unlock;
}
} else if (disable & (1 << ssid)) {
if (!(cgrp->child_subsys_mask & (1 << ssid))) {
disable &= ~(1 << ssid);
continue;
}
/* a child has it enabled? */
cgroup_for_each_live_child(child, cgrp) {
if (child->child_subsys_mask & (1 << ssid)) {
ret = -EBUSY;
goto out_unlock;
}
}
}
}
if (!enable && !disable) {
ret = 0;
goto out_unlock;
}
/*
* Except for the root, child_subsys_mask must be zero for a cgroup
* with tasks so that child cgroups don't compete against tasks.
*/
if (enable && cgroup_parent(cgrp) && !list_empty(&cgrp->cset_links)) {
ret = -EBUSY;
goto out_unlock;
}
/*
* Create csses for enables and update child_subsys_mask. This
* changes cgroup_e_css() results which in turn makes the
* subsequent cgroup_update_dfl_csses() associate all tasks in the
* subtree to the updated csses.
*/
for_each_subsys(ss, ssid) {
if (!(enable & (1 << ssid)))
continue;
cgroup_for_each_live_child(child, cgrp) {
ret = create_css(child, ss);
if (ret)
goto err_undo_css;
}
}
cgrp->child_subsys_mask |= enable;
cgrp->child_subsys_mask &= ~disable;
ret = cgroup_update_dfl_csses(cgrp);
if (ret)
goto err_undo_css;
/* all tasks are now migrated away from the old csses, kill them */
for_each_subsys(ss, ssid) {
if (!(disable & (1 << ssid)))
continue;
cgroup_for_each_live_child(child, cgrp)
kill_css(cgroup_css(child, ss));
}
kernfs_activate(cgrp->kn);
ret = 0;
out_unlock:
cgroup_kn_unlock(of->kn);
return ret ?: nbytes;
err_undo_css:
cgrp->child_subsys_mask &= ~enable;
cgrp->child_subsys_mask |= disable;
for_each_subsys(ss, ssid) {
if (!(enable & (1 << ssid)))
continue;
cgroup_for_each_live_child(child, cgrp) {
struct cgroup_subsys_state *css = cgroup_css(child, ss);
if (css)
kill_css(css);
}
}
goto out_unlock;
}
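Note: the handler above accepts a space-separated list of controller names, each prefixed with '+' (enable for the children) or '-' (disable), retries via restart_syscall() while a dying css of the same controller is still draining, and refuses with -EBUSY when the cgroup still has tasks of its own so that child cgroups never compete with internal tasks. A minimal userspace sketch of driving the file is shown below; the mount path and controller names are assumptions for illustration and not part of this patch.

        #include <fcntl.h>
        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>

        /* delegate "memory" to the children of a hypothetical cgroup directory */
        static int enable_memory_for_children(const char *cgrp_dir)
        {
                char path[256];
                const char *req = "+memory -cpu";     /* same "+name -name" syntax parsed above */
                int fd, ret = 0;

                snprintf(path, sizeof(path), "%s/cgroup.subtree_control", cgrp_dir);
                fd = open(path, O_WRONLY);
                if (fd < 0)
                        return -1;
                if (write(fd, req, strlen(req)) < 0) {
                        perror("subtree_control");    /* e.g. EBUSY if tasks are attached here */
                        ret = -1;
                }
                close(fd);
                return ret;
        }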
static int cgroup_populated_show(struct seq_file *seq, void *v)
{
seq_printf(seq, "%d\n", (bool)seq_css(seq)->cgroup->populated_cnt);
return 0;
}
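Note: "cgroup.populated" simply reports whether populated_cnt is non-zero, i.e. whether any css_set in the cgroup's sub-hierarchy still holds tasks; cgroup_add_file() below stashes the kernfs node in cgrp->populated_kn so later parts of the series can notify it when the state flips. A userspace read-side sketch follows; the directory argument is a hypothetical path used only for illustration.

        #include <fcntl.h>
        #include <stdio.h>
        #include <unistd.h>

        /* returns 1 if any descendant css_set still has tasks, 0 if empty, -1 on error */
        static int cgroup_is_populated(const char *cgrp_dir)
        {
                char path[256], buf[4] = "";
                int fd;

                snprintf(path, sizeof(path), "%s/cgroup.populated", cgrp_dir);
                fd = open(path, O_RDONLY);
                if (fd < 0)
                        return -1;
                if (read(fd, buf, sizeof(buf) - 1) < 0) {
                        close(fd);
                        return -1;
                }
                close(fd);
                return buf[0] == '1';
        }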
static ssize_t cgroup_file_write(struct kernfs_open_file *of, char *buf,
size_t nbytes, loff_t off)
{
struct cgroup *cgrp = of->kn->parent->priv;
struct cftype *cft = of->kn->priv;
struct cgroup_subsys_state *css;
int ret;
if (cft->write)
return cft->write(of, buf, nbytes, off);
/*
* kernfs guarantees that a file isn't deleted with operations in
* flight, which means that the matching css is and stays alive and
* doesn't need to be pinned. The RCU locking is not necessary
* either. It's just for the convenience of using cgroup_css().
*/
rcu_read_lock();
css = cgroup_css(cgrp, cft->ss);
rcu_read_unlock();
if (cft->write_u64) {
unsigned long long v;
ret = kstrtoull(buf, 0, &v);
if (!ret)
ret = cft->write_u64(css, cft, v);
} else if (cft->write_s64) {
long long v;
ret = kstrtoll(buf, 0, &v);
if (!ret)
ret = cft->write_s64(css, cft, v);
} else {
ret = -EINVAL;
}
return ret ?: nbytes;
}
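Note: with ->write_string and ->trigger gone, a cgroup file either supplies a full ->write method (as cgroup.subtree_control and the procs/tasks files now do) or one of the numeric handlers that cgroup_file_write() falls back to after kstrtoull()/kstrtoll() parsing. A hedged controller-side sketch of the ->write_u64 route; struct foo_cgroup and its helpers are hypothetical and only illustrate the dispatch.

        /* hypothetical per-cgroup state embedding the css */
        struct foo_cgroup {
                struct cgroup_subsys_state css;
                u64 limit;
        };

        static inline struct foo_cgroup *css_to_foo(struct cgroup_subsys_state *css)
        {
                return container_of(css, struct foo_cgroup, css);
        }

        static u64 foo_limit_read(struct cgroup_subsys_state *css, struct cftype *cft)
        {
                return css_to_foo(css)->limit;
        }

        static int foo_limit_write(struct cgroup_subsys_state *css, struct cftype *cft,
                                   u64 val)
        {
                /* "echo 4096 > foo.limit" is parsed by cgroup_file_write() and lands here */
                css_to_foo(css)->limit = val;
                return 0;
        }

        static struct cftype foo_limit_files[] = {
                {
                        .name = "limit",
                        .read_u64 = foo_limit_read,
                        .write_u64 = foo_limit_write,
                },
                { }     /* terminate */
        };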
static void *cgroup_seqfile_start(struct seq_file *seq, loff_t *ppos)
{
return seq_cft(seq)->seq_start(seq, ppos);
}
static void *cgroup_seqfile_next(struct seq_file *seq, void *v, loff_t *ppos)
{
return seq_cft(seq)->seq_next(seq, v, ppos);
}
static void cgroup_seqfile_stop(struct seq_file *seq, void *v)
{
seq_cft(seq)->seq_stop(seq, v);
} }
static int cgroup_seqfile_show(struct seq_file *m, void *arg) static int cgroup_seqfile_show(struct seq_file *m, void *arg)
...@@ -2328,20 +2842,18 @@ static int cgroup_rename(struct kernfs_node *kn, struct kernfs_node *new_parent,
                return -EPERM;

        /*
         * We're gonna grab cgroup_mutex which nests outside kernfs
         * active_ref.  kernfs_rename() doesn't require active_ref
         * protection.  Break them before grabbing cgroup_mutex.
         */
        kernfs_break_active_protection(new_parent);
        kernfs_break_active_protection(kn);

        mutex_lock(&cgroup_mutex);

        ret = kernfs_rename(kn, new_parent, new_name_str);

        mutex_unlock(&cgroup_mutex);

        kernfs_unbreak_active_protection(kn);
        kernfs_unbreak_active_protection(new_parent);
...@@ -2379,9 +2891,14 @@ static int cgroup_add_file(struct cgroup *cgrp, struct cftype *cft) ...@@ -2379,9 +2891,14 @@ static int cgroup_add_file(struct cgroup *cgrp, struct cftype *cft)
return PTR_ERR(kn); return PTR_ERR(kn);
ret = cgroup_kn_set_ugid(kn); ret = cgroup_kn_set_ugid(kn);
if (ret) if (ret) {
kernfs_remove(kn); kernfs_remove(kn);
return ret; return ret;
}
if (cft->seq_show == cgroup_populated_show)
cgrp->populated_kn = kn;
return 0;
} }
/** /**
...@@ -2401,7 +2918,7 @@ static int cgroup_addrm_files(struct cgroup *cgrp, struct cftype cfts[], ...@@ -2401,7 +2918,7 @@ static int cgroup_addrm_files(struct cgroup *cgrp, struct cftype cfts[],
struct cftype *cft; struct cftype *cft;
int ret; int ret;
lockdep_assert_held(&cgroup_tree_mutex); lockdep_assert_held(&cgroup_mutex);
for (cft = cfts; cft->name[0] != '\0'; cft++) { for (cft = cfts; cft->name[0] != '\0'; cft++) {
/* does cft->flags tell us to skip this file on @cgrp? */ /* does cft->flags tell us to skip this file on @cgrp? */
...@@ -2409,16 +2926,16 @@ static int cgroup_addrm_files(struct cgroup *cgrp, struct cftype cfts[], ...@@ -2409,16 +2926,16 @@ static int cgroup_addrm_files(struct cgroup *cgrp, struct cftype cfts[],
continue; continue;
if ((cft->flags & CFTYPE_INSANE) && cgroup_sane_behavior(cgrp)) if ((cft->flags & CFTYPE_INSANE) && cgroup_sane_behavior(cgrp))
continue; continue;
if ((cft->flags & CFTYPE_NOT_ON_ROOT) && !cgrp->parent) if ((cft->flags & CFTYPE_NOT_ON_ROOT) && !cgroup_parent(cgrp))
continue; continue;
if ((cft->flags & CFTYPE_ONLY_ON_ROOT) && cgrp->parent) if ((cft->flags & CFTYPE_ONLY_ON_ROOT) && cgroup_parent(cgrp))
continue; continue;
if (is_add) { if (is_add) {
ret = cgroup_add_file(cgrp, cft); ret = cgroup_add_file(cgrp, cft);
if (ret) { if (ret) {
pr_warn("cgroup_addrm_files: failed to add %s, err=%d\n", pr_warn("%s: failed to add %s, err=%d\n",
cft->name, ret); __func__, cft->name, ret);
return ret; return ret;
} }
} else { } else {
...@@ -2436,11 +2953,7 @@ static int cgroup_apply_cftypes(struct cftype *cfts, bool is_add) ...@@ -2436,11 +2953,7 @@ static int cgroup_apply_cftypes(struct cftype *cfts, bool is_add)
struct cgroup_subsys_state *css; struct cgroup_subsys_state *css;
int ret = 0; int ret = 0;
lockdep_assert_held(&cgroup_tree_mutex); lockdep_assert_held(&cgroup_mutex);
/* don't bother if @ss isn't attached */
if (ss->root == &cgrp_dfl_root)
return 0;
/* add/rm files for all cgroups created before */ /* add/rm files for all cgroups created before */
css_for_each_descendant_pre(css, cgroup_css(root, ss)) { css_for_each_descendant_pre(css, cgroup_css(root, ss)) {
...@@ -2508,7 +3021,7 @@ static int cgroup_init_cftypes(struct cgroup_subsys *ss, struct cftype *cfts) ...@@ -2508,7 +3021,7 @@ static int cgroup_init_cftypes(struct cgroup_subsys *ss, struct cftype *cfts)
static int cgroup_rm_cftypes_locked(struct cftype *cfts) static int cgroup_rm_cftypes_locked(struct cftype *cfts)
{ {
lockdep_assert_held(&cgroup_tree_mutex); lockdep_assert_held(&cgroup_mutex);
if (!cfts || !cfts[0].ss) if (!cfts || !cfts[0].ss)
return -ENOENT; return -ENOENT;
...@@ -2534,9 +3047,9 @@ int cgroup_rm_cftypes(struct cftype *cfts) ...@@ -2534,9 +3047,9 @@ int cgroup_rm_cftypes(struct cftype *cfts)
{ {
int ret; int ret;
mutex_lock(&cgroup_tree_mutex); mutex_lock(&cgroup_mutex);
ret = cgroup_rm_cftypes_locked(cfts); ret = cgroup_rm_cftypes_locked(cfts);
mutex_unlock(&cgroup_tree_mutex); mutex_unlock(&cgroup_mutex);
return ret; return ret;
} }
...@@ -2558,6 +3071,9 @@ int cgroup_add_cftypes(struct cgroup_subsys *ss, struct cftype *cfts) ...@@ -2558,6 +3071,9 @@ int cgroup_add_cftypes(struct cgroup_subsys *ss, struct cftype *cfts)
{ {
int ret; int ret;
if (ss->disabled)
return 0;
if (!cfts || cfts[0].name[0] == '\0') if (!cfts || cfts[0].name[0] == '\0')
return 0; return 0;
...@@ -2565,14 +3081,14 @@ int cgroup_add_cftypes(struct cgroup_subsys *ss, struct cftype *cfts) ...@@ -2565,14 +3081,14 @@ int cgroup_add_cftypes(struct cgroup_subsys *ss, struct cftype *cfts)
if (ret) if (ret)
return ret; return ret;
mutex_lock(&cgroup_tree_mutex); mutex_lock(&cgroup_mutex);
list_add_tail(&cfts->node, &ss->cfts); list_add_tail(&cfts->node, &ss->cfts);
ret = cgroup_apply_cftypes(cfts, true); ret = cgroup_apply_cftypes(cfts, true);
if (ret) if (ret)
cgroup_rm_cftypes_locked(cfts); cgroup_rm_cftypes_locked(cfts);
mutex_unlock(&cgroup_tree_mutex); mutex_unlock(&cgroup_mutex);
return ret; return ret;
} }
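Note: cgroup_add_cftypes() and cgroup_rm_cftypes() are now serialized by cgroup_mutex alone, and registering files for a disabled controller quietly succeeds without adding anything. A controller typically hands in a terminated cftype array once at init time; the sketch below uses hypothetical names (foo_cgrp_subsys and the "usage" file are not part of this patch).

        static int foo_usage_show(struct seq_file *m, void *v)
        {
                seq_printf(m, "%d\n", 0);       /* placeholder value for the sketch */
                return 0;
        }

        static struct cftype foo_files[] = {
                {
                        .name = "usage",
                        .seq_show = foo_usage_show,
                },
                { }     /* terminate */
        };

        static int __init foo_init_files(void)
        {
                /* foo_cgrp_subsys is assumed to be the controller's cgroup_subsys */
                return cgroup_add_cftypes(&foo_cgrp_subsys, foo_files);
        }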
...@@ -2596,57 +3112,65 @@ static int cgroup_task_count(const struct cgroup *cgrp)
/**
 * css_next_child - find the next child of a given css
 * @pos: the current position (%NULL to initiate traversal)
 * @parent: css whose children to walk
 *
 * This function returns the next child of @parent and should be called
 * under either cgroup_mutex or RCU read lock.  The only requirement is
 * that @parent and @pos are accessible.  The next sibling is guaranteed to
 * be returned regardless of their states.
 *
 * If a subsystem synchronizes ->css_online() and the start of iteration, a
 * css which finished ->css_online() is guaranteed to be visible in the
 * future iterations and will stay visible until the last reference is put.
 * A css which hasn't finished ->css_online() or already finished
 * ->css_offline() may show up during traversal.  It's each subsystem's
 * responsibility to synchronize against on/offlining.
 */
struct cgroup_subsys_state *css_next_child(struct cgroup_subsys_state *pos,
                                           struct cgroup_subsys_state *parent)
{
        struct cgroup_subsys_state *next;

        cgroup_assert_mutex_or_rcu_locked();

        /*
         * @pos could already have been unlinked from the sibling list.
         * Once a cgroup is removed, its ->sibling.next is no longer
         * updated when its next sibling changes.  CSS_RELEASED is set when
         * @pos is taken off list, at which time its next pointer is valid,
         * and, as releases are serialized, the one pointed to by the next
         * pointer is guaranteed to not have started release yet.  This
         * implies that if we observe !CSS_RELEASED on @pos in this RCU
         * critical section, the one pointed to by its next pointer is
         * guaranteed to not have finished its RCU grace period even if we
         * have dropped rcu_read_lock() inbetween iterations.
         *
         * If @pos has CSS_RELEASED set, its next pointer can't be
         * dereferenced; however, as each css is given a monotonically
         * increasing unique serial number and always appended to the
         * sibling list, the next one can be found by walking the parent's
         * children until the first css with higher serial number than
         * @pos's.  While this path can be slower, it happens iff iteration
         * races against release and the race window is very small.
         */
        if (!pos) {
                next = list_entry_rcu(parent->children.next,
                                      struct cgroup_subsys_state, sibling);
        } else if (likely(!(pos->flags & CSS_RELEASED))) {
                next = list_entry_rcu(pos->sibling.next,
                                      struct cgroup_subsys_state, sibling);
        } else {
                list_for_each_entry_rcu(next, &parent->children, sibling)
                        if (next->serial_nr > pos->serial_nr)
                                break;
        }

        /*
         * @next, if not pointing to the head, can be dereferenced and is
         * the next sibling.
         */
        if (&next->sibling != &parent->children)
                return next;
        return NULL;
}
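Note: because the fallback path keys off monotonically increasing serial numbers, iteration keeps making forward progress even when @pos is released while the caller slept between steps. Controllers normally consume this through the css_for_each_child() wrapper; a minimal, hedged usage sketch with a hypothetical helper name:

        /* count the children of @parent; callers filter on CSS_ONLINE as needed */
        static int foo_count_children(struct cgroup_subsys_state *parent)
        {
                struct cgroup_subsys_state *child;
                int n = 0;

                rcu_read_lock();
                css_for_each_child(child, parent)
                        n++;
                rcu_read_unlock();
                return n;
        }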
/** /**
...@@ -2662,6 +3186,13 @@ css_next_child(struct cgroup_subsys_state *pos_css, ...@@ -2662,6 +3186,13 @@ css_next_child(struct cgroup_subsys_state *pos_css,
* doesn't require the whole traversal to be contained in a single critical * doesn't require the whole traversal to be contained in a single critical
* section. This function will return the correct next descendant as long * section. This function will return the correct next descendant as long
* as both @pos and @root are accessible and @pos is a descendant of @root. * as both @pos and @root are accessible and @pos is a descendant of @root.
*
* If a subsystem synchronizes ->css_online() and the start of iteration, a
* css which finished ->css_online() is guaranteed to be visible in the
* future iterations and will stay visible until the last reference is put.
* A css which hasn't finished ->css_online() or already finished
* ->css_offline() may show up during traversal. It's each subsystem's
* responsibility to synchronize against on/offlining.
*/ */
struct cgroup_subsys_state * struct cgroup_subsys_state *
css_next_descendant_pre(struct cgroup_subsys_state *pos, css_next_descendant_pre(struct cgroup_subsys_state *pos,
...@@ -2669,7 +3200,7 @@ css_next_descendant_pre(struct cgroup_subsys_state *pos, ...@@ -2669,7 +3200,7 @@ css_next_descendant_pre(struct cgroup_subsys_state *pos,
{ {
struct cgroup_subsys_state *next; struct cgroup_subsys_state *next;
cgroup_assert_mutexes_or_rcu_locked(); cgroup_assert_mutex_or_rcu_locked();
/* if first iteration, visit @root */ /* if first iteration, visit @root */
if (!pos) if (!pos)
...@@ -2682,10 +3213,10 @@ css_next_descendant_pre(struct cgroup_subsys_state *pos, ...@@ -2682,10 +3213,10 @@ css_next_descendant_pre(struct cgroup_subsys_state *pos,
/* no child, visit my or the closest ancestor's next sibling */ /* no child, visit my or the closest ancestor's next sibling */
while (pos != root) { while (pos != root) {
next = css_next_child(pos, css_parent(pos)); next = css_next_child(pos, pos->parent);
if (next) if (next)
return next; return next;
pos = css_parent(pos); pos = pos->parent;
} }
return NULL; return NULL;
...@@ -2709,7 +3240,7 @@ css_rightmost_descendant(struct cgroup_subsys_state *pos) ...@@ -2709,7 +3240,7 @@ css_rightmost_descendant(struct cgroup_subsys_state *pos)
{ {
struct cgroup_subsys_state *last, *tmp; struct cgroup_subsys_state *last, *tmp;
cgroup_assert_mutexes_or_rcu_locked(); cgroup_assert_mutex_or_rcu_locked();
do { do {
last = pos; last = pos;
...@@ -2749,6 +3280,13 @@ css_leftmost_descendant(struct cgroup_subsys_state *pos) ...@@ -2749,6 +3280,13 @@ css_leftmost_descendant(struct cgroup_subsys_state *pos)
* section. This function will return the correct next descendant as long * section. This function will return the correct next descendant as long
* as both @pos and @cgroup are accessible and @pos is a descendant of * as both @pos and @cgroup are accessible and @pos is a descendant of
* @cgroup. * @cgroup.
*
* If a subsystem synchronizes ->css_online() and the start of iteration, a
* css which finished ->css_online() is guaranteed to be visible in the
* future iterations and will stay visible until the last reference is put.
* A css which hasn't finished ->css_online() or already finished
* ->css_offline() may show up during traversal. It's each subsystem's
* responsibility to synchronize against on/offlining.
*/ */
struct cgroup_subsys_state * struct cgroup_subsys_state *
css_next_descendant_post(struct cgroup_subsys_state *pos, css_next_descendant_post(struct cgroup_subsys_state *pos,
...@@ -2756,7 +3294,7 @@ css_next_descendant_post(struct cgroup_subsys_state *pos, ...@@ -2756,7 +3294,7 @@ css_next_descendant_post(struct cgroup_subsys_state *pos,
{ {
struct cgroup_subsys_state *next; struct cgroup_subsys_state *next;
cgroup_assert_mutexes_or_rcu_locked(); cgroup_assert_mutex_or_rcu_locked();
/* if first iteration, visit leftmost descendant which may be @root */ /* if first iteration, visit leftmost descendant which may be @root */
if (!pos) if (!pos)
...@@ -2767,12 +3305,36 @@ css_next_descendant_post(struct cgroup_subsys_state *pos, ...@@ -2767,12 +3305,36 @@ css_next_descendant_post(struct cgroup_subsys_state *pos,
return NULL; return NULL;
/* if there's an unvisited sibling, visit its leftmost descendant */ /* if there's an unvisited sibling, visit its leftmost descendant */
next = css_next_child(pos, css_parent(pos)); next = css_next_child(pos, pos->parent);
if (next) if (next)
return css_leftmost_descendant(next); return css_leftmost_descendant(next);
/* no sibling left, visit parent */ /* no sibling left, visit parent */
return css_parent(pos); return pos->parent;
}
/**
* css_has_online_children - does a css have online children
* @css: the target css
*
* Returns %true if @css has any online children; otherwise, %false. This
* function can be called from any context but the caller is responsible
* for synchronizing against on/offlining as necessary.
*/
bool css_has_online_children(struct cgroup_subsys_state *css)
{
struct cgroup_subsys_state *child;
bool ret = false;
rcu_read_lock();
css_for_each_child(child, css) {
if (css->flags & CSS_ONLINE) {
ret = true;
break;
}
}
rcu_read_unlock();
return ret;
} }
/** /**
...@@ -2783,27 +3345,36 @@ css_next_descendant_post(struct cgroup_subsys_state *pos, ...@@ -2783,27 +3345,36 @@ css_next_descendant_post(struct cgroup_subsys_state *pos,
*/ */
static void css_advance_task_iter(struct css_task_iter *it)
{
        struct list_head *l = it->cset_pos;
        struct cgrp_cset_link *link;
        struct css_set *cset;

        /* Advance to the next non-empty css_set */
        do {
                l = l->next;
                if (l == it->cset_head) {
                        it->cset_pos = NULL;
                        return;
                }

                if (it->ss) {
                        cset = container_of(l, struct css_set,
                                            e_cset_node[it->ss->id]);
                } else {
                        link = list_entry(l, struct cgrp_cset_link, cset_link);
                        cset = link->cset;
                }
        } while (list_empty(&cset->tasks) && list_empty(&cset->mg_tasks));

        it->cset_pos = l;

        if (!list_empty(&cset->tasks))
                it->task_pos = cset->tasks.next;
        else
                it->task_pos = cset->mg_tasks.next;

        it->tasks_head = &cset->tasks;
        it->mg_tasks_head = &cset->mg_tasks;
}
/** /**
...@@ -2829,8 +3400,14 @@ void css_task_iter_start(struct cgroup_subsys_state *css, ...@@ -2829,8 +3400,14 @@ void css_task_iter_start(struct cgroup_subsys_state *css,
down_read(&css_set_rwsem); down_read(&css_set_rwsem);
it->origin_css = css; it->ss = css->ss;
it->cset_link = &css->cgroup->cset_links;
if (it->ss)
it->cset_pos = &css->cgroup->e_csets[css->ss->id];
else
it->cset_pos = &css->cgroup->cset_links;
it->cset_head = it->cset_pos;
css_advance_task_iter(it); css_advance_task_iter(it);
} }
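Note: the iterator now walks the per-subsystem e_cset lists when it is bound to a specific controller css (it->ss set) and the plain cset_links list for a cgroup's self css, but the consumer pattern is unchanged. A minimal sketch of that pattern, with a hypothetical helper name, for illustration only:

        /* count the tasks attached to @css; illustrative sketch */
        static unsigned int foo_count_tasks(struct cgroup_subsys_state *css)
        {
                struct css_task_iter it;
                struct task_struct *task;
                unsigned int n = 0;

                css_task_iter_start(css, &it);
                while ((task = css_task_iter_next(&it)))
                        n++;
                css_task_iter_end(&it);
                return n;
        }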
...@@ -2846,12 +3423,10 @@ void css_task_iter_start(struct cgroup_subsys_state *css, ...@@ -2846,12 +3423,10 @@ void css_task_iter_start(struct cgroup_subsys_state *css,
struct task_struct *css_task_iter_next(struct css_task_iter *it) struct task_struct *css_task_iter_next(struct css_task_iter *it)
{ {
struct task_struct *res; struct task_struct *res;
struct list_head *l = it->task; struct list_head *l = it->task_pos;
struct cgrp_cset_link *link = list_entry(it->cset_link,
struct cgrp_cset_link, cset_link);
/* If the iterator cg is NULL, we have no tasks */ /* If the iterator cg is NULL, we have no tasks */
if (!it->cset_link) if (!it->cset_pos)
return NULL; return NULL;
res = list_entry(l, struct task_struct, cg_list); res = list_entry(l, struct task_struct, cg_list);
...@@ -2862,13 +3437,13 @@ struct task_struct *css_task_iter_next(struct css_task_iter *it) ...@@ -2862,13 +3437,13 @@ struct task_struct *css_task_iter_next(struct css_task_iter *it)
*/ */
l = l->next; l = l->next;
if (l == &link->cset->tasks) if (l == it->tasks_head)
l = link->cset->mg_tasks.next; l = it->mg_tasks_head->next;
if (l == &link->cset->mg_tasks) if (l == it->mg_tasks_head)
css_advance_task_iter(it); css_advance_task_iter(it);
else else
it->task = l; it->task_pos = l;
return res; return res;
} }
...@@ -2921,7 +3496,7 @@ int cgroup_transfer_tasks(struct cgroup *to, struct cgroup *from) ...@@ -2921,7 +3496,7 @@ int cgroup_transfer_tasks(struct cgroup *to, struct cgroup *from)
* ->can_attach() fails. * ->can_attach() fails.
*/ */
do { do {
css_task_iter_start(&from->dummy_css, &it); css_task_iter_start(&from->self, &it);
task = css_task_iter_next(&it); task = css_task_iter_next(&it);
if (task) if (task)
get_task_struct(task); get_task_struct(task);
...@@ -3186,7 +3761,7 @@ static int pidlist_array_load(struct cgroup *cgrp, enum cgroup_filetype type, ...@@ -3186,7 +3761,7 @@ static int pidlist_array_load(struct cgroup *cgrp, enum cgroup_filetype type,
if (!array) if (!array)
return -ENOMEM; return -ENOMEM;
/* now, populate the array */ /* now, populate the array */
css_task_iter_start(&cgrp->dummy_css, &it); css_task_iter_start(&cgrp->self, &it);
while ((tsk = css_task_iter_next(&it))) { while ((tsk = css_task_iter_next(&it))) {
if (unlikely(n == length)) if (unlikely(n == length))
break; break;
...@@ -3248,7 +3823,7 @@ int cgroupstats_build(struct cgroupstats *stats, struct dentry *dentry) ...@@ -3248,7 +3823,7 @@ int cgroupstats_build(struct cgroupstats *stats, struct dentry *dentry)
/* /*
* We aren't being called from kernfs and there's no guarantee on * We aren't being called from kernfs and there's no guarantee on
* @kn->priv's validity. For this and css_tryget_from_dir(), * @kn->priv's validity. For this and css_tryget_online_from_dir(),
* @kn->priv is RCU safe. Let's do the RCU dancing. * @kn->priv is RCU safe. Let's do the RCU dancing.
*/ */
rcu_read_lock(); rcu_read_lock();
...@@ -3260,7 +3835,7 @@ int cgroupstats_build(struct cgroupstats *stats, struct dentry *dentry) ...@@ -3260,7 +3835,7 @@ int cgroupstats_build(struct cgroupstats *stats, struct dentry *dentry)
} }
rcu_read_unlock(); rcu_read_unlock();
css_task_iter_start(&cgrp->dummy_css, &it); css_task_iter_start(&cgrp->self, &it);
while ((tsk = css_task_iter_next(&it))) { while ((tsk = css_task_iter_next(&it))) {
switch (tsk->state) { switch (tsk->state) {
case TASK_RUNNING: case TASK_RUNNING:
...@@ -3390,17 +3965,6 @@ static int cgroup_pidlist_show(struct seq_file *s, void *v) ...@@ -3390,17 +3965,6 @@ static int cgroup_pidlist_show(struct seq_file *s, void *v)
return seq_printf(s, "%d\n", *(int *)v); return seq_printf(s, "%d\n", *(int *)v);
} }
/*
* seq_operations functions for iterating on pidlists through seq_file -
* independent of whether it's tasks or procs
*/
static const struct seq_operations cgroup_pidlist_seq_operations = {
.start = cgroup_pidlist_start,
.stop = cgroup_pidlist_stop,
.next = cgroup_pidlist_next,
.show = cgroup_pidlist_show,
};
static u64 cgroup_read_notify_on_release(struct cgroup_subsys_state *css, static u64 cgroup_read_notify_on_release(struct cgroup_subsys_state *css,
struct cftype *cft) struct cftype *cft)
{ {
...@@ -3442,7 +4006,7 @@ static struct cftype cgroup_base_files[] = { ...@@ -3442,7 +4006,7 @@ static struct cftype cgroup_base_files[] = {
.seq_stop = cgroup_pidlist_stop, .seq_stop = cgroup_pidlist_stop,
.seq_show = cgroup_pidlist_show, .seq_show = cgroup_pidlist_show,
.private = CGROUP_FILE_PROCS, .private = CGROUP_FILE_PROCS,
.write_u64 = cgroup_procs_write, .write = cgroup_procs_write,
.mode = S_IRUGO | S_IWUSR, .mode = S_IRUGO | S_IWUSR,
}, },
{ {
...@@ -3456,6 +4020,27 @@ static struct cftype cgroup_base_files[] = { ...@@ -3456,6 +4020,27 @@ static struct cftype cgroup_base_files[] = {
.flags = CFTYPE_ONLY_ON_ROOT, .flags = CFTYPE_ONLY_ON_ROOT,
.seq_show = cgroup_sane_behavior_show, .seq_show = cgroup_sane_behavior_show,
}, },
{
.name = "cgroup.controllers",
.flags = CFTYPE_ONLY_ON_DFL | CFTYPE_ONLY_ON_ROOT,
.seq_show = cgroup_root_controllers_show,
},
{
.name = "cgroup.controllers",
.flags = CFTYPE_ONLY_ON_DFL | CFTYPE_NOT_ON_ROOT,
.seq_show = cgroup_controllers_show,
},
{
.name = "cgroup.subtree_control",
.flags = CFTYPE_ONLY_ON_DFL,
.seq_show = cgroup_subtree_control_show,
.write = cgroup_subtree_control_write,
},
{
.name = "cgroup.populated",
.flags = CFTYPE_ONLY_ON_DFL | CFTYPE_NOT_ON_ROOT,
.seq_show = cgroup_populated_show,
},
/* /*
* Historical crazy stuff. These don't have "cgroup." prefix and * Historical crazy stuff. These don't have "cgroup." prefix and
...@@ -3470,7 +4055,7 @@ static struct cftype cgroup_base_files[] = { ...@@ -3470,7 +4055,7 @@ static struct cftype cgroup_base_files[] = {
.seq_stop = cgroup_pidlist_stop, .seq_stop = cgroup_pidlist_stop,
.seq_show = cgroup_pidlist_show, .seq_show = cgroup_pidlist_show,
.private = CGROUP_FILE_TASKS, .private = CGROUP_FILE_TASKS,
.write_u64 = cgroup_tasks_write, .write = cgroup_tasks_write,
.mode = S_IRUGO | S_IWUSR, .mode = S_IRUGO | S_IWUSR,
}, },
{ {
...@@ -3483,7 +4068,7 @@ static struct cftype cgroup_base_files[] = { ...@@ -3483,7 +4068,7 @@ static struct cftype cgroup_base_files[] = {
.name = "release_agent", .name = "release_agent",
.flags = CFTYPE_INSANE | CFTYPE_ONLY_ON_ROOT, .flags = CFTYPE_INSANE | CFTYPE_ONLY_ON_ROOT,
.seq_show = cgroup_release_agent_show, .seq_show = cgroup_release_agent_show,
.write_string = cgroup_release_agent_write, .write = cgroup_release_agent_write,
.max_write_len = PATH_MAX - 1, .max_write_len = PATH_MAX - 1,
}, },
{ } /* terminate */ { } /* terminate */
...@@ -3496,7 +4081,7 @@ static struct cftype cgroup_base_files[] = { ...@@ -3496,7 +4081,7 @@ static struct cftype cgroup_base_files[] = {
* *
* On failure, no file is added. * On failure, no file is added.
*/ */
static int cgroup_populate_dir(struct cgroup *cgrp, unsigned long subsys_mask) static int cgroup_populate_dir(struct cgroup *cgrp, unsigned int subsys_mask)
{ {
struct cgroup_subsys *ss; struct cgroup_subsys *ss;
int i, ret = 0; int i, ret = 0;
...@@ -3505,7 +4090,7 @@ static int cgroup_populate_dir(struct cgroup *cgrp, unsigned long subsys_mask) ...@@ -3505,7 +4090,7 @@ static int cgroup_populate_dir(struct cgroup *cgrp, unsigned long subsys_mask)
for_each_subsys(ss, i) { for_each_subsys(ss, i) {
struct cftype *cfts; struct cftype *cfts;
if (!test_bit(i, &subsys_mask)) if (!(subsys_mask & (1 << i)))
continue; continue;
list_for_each_entry(cfts, &ss->cfts, node) { list_for_each_entry(cfts, &ss->cfts, node) {
...@@ -3527,9 +4112,9 @@ static int cgroup_populate_dir(struct cgroup *cgrp, unsigned long subsys_mask) ...@@ -3527,9 +4112,9 @@ static int cgroup_populate_dir(struct cgroup *cgrp, unsigned long subsys_mask)
* Implemented in kill_css(). * Implemented in kill_css().
* *
* 2. When the percpu_ref is confirmed to be visible as killed on all CPUs * 2. When the percpu_ref is confirmed to be visible as killed on all CPUs
* and thus css_tryget() is guaranteed to fail, the css can be offlined * and thus css_tryget_online() is guaranteed to fail, the css can be
* by invoking offline_css(). After offlining, the base ref is put. * offlined by invoking offline_css(). After offlining, the base ref is
* Implemented in css_killed_work_fn(). * put. Implemented in css_killed_work_fn().
* *
* 3. When the percpu_ref reaches zero, the only possible remaining * 3. When the percpu_ref reaches zero, the only possible remaining
* accessors are inside RCU read sections. css_release() schedules the * accessors are inside RCU read sections. css_release() schedules the
...@@ -3548,11 +4133,37 @@ static void css_free_work_fn(struct work_struct *work) ...@@ -3548,11 +4133,37 @@ static void css_free_work_fn(struct work_struct *work)
container_of(work, struct cgroup_subsys_state, destroy_work); container_of(work, struct cgroup_subsys_state, destroy_work);
struct cgroup *cgrp = css->cgroup; struct cgroup *cgrp = css->cgroup;
if (css->parent) if (css->ss) {
css_put(css->parent); /* css free path */
if (css->parent)
css_put(css->parent);
css->ss->css_free(css); css->ss->css_free(css);
cgroup_put(cgrp); cgroup_put(cgrp);
} else {
/* cgroup free path */
atomic_dec(&cgrp->root->nr_cgrps);
cgroup_pidlist_destroy_all(cgrp);
if (cgroup_parent(cgrp)) {
/*
* We get a ref to the parent, and put the ref when
* this cgroup is being freed, so it's guaranteed
* that the parent won't be destroyed before its
* children.
*/
cgroup_put(cgroup_parent(cgrp));
kernfs_put(cgrp->kn);
kfree(cgrp);
} else {
/*
* This is root cgroup's refcnt reaching zero,
* which indicates that the root should be
* released.
*/
cgroup_destroy_root(cgrp->root);
}
}
} }
static void css_free_rcu_fn(struct rcu_head *rcu_head) static void css_free_rcu_fn(struct rcu_head *rcu_head)
...@@ -3564,26 +4175,59 @@ static void css_free_rcu_fn(struct rcu_head *rcu_head) ...@@ -3564,26 +4175,59 @@ static void css_free_rcu_fn(struct rcu_head *rcu_head)
queue_work(cgroup_destroy_wq, &css->destroy_work); queue_work(cgroup_destroy_wq, &css->destroy_work);
} }
static void css_release_work_fn(struct work_struct *work)
{
struct cgroup_subsys_state *css =
container_of(work, struct cgroup_subsys_state, destroy_work);
struct cgroup_subsys *ss = css->ss;
struct cgroup *cgrp = css->cgroup;
mutex_lock(&cgroup_mutex);
css->flags |= CSS_RELEASED;
list_del_rcu(&css->sibling);
if (ss) {
/* css release path */
cgroup_idr_remove(&ss->css_idr, css->id);
} else {
/* cgroup release path */
cgroup_idr_remove(&cgrp->root->cgroup_idr, cgrp->id);
cgrp->id = -1;
}
mutex_unlock(&cgroup_mutex);
call_rcu(&css->rcu_head, css_free_rcu_fn);
}
static void css_release(struct percpu_ref *ref) static void css_release(struct percpu_ref *ref)
{ {
struct cgroup_subsys_state *css = struct cgroup_subsys_state *css =
container_of(ref, struct cgroup_subsys_state, refcnt); container_of(ref, struct cgroup_subsys_state, refcnt);
RCU_INIT_POINTER(css->cgroup->subsys[css->ss->id], NULL); INIT_WORK(&css->destroy_work, css_release_work_fn);
call_rcu(&css->rcu_head, css_free_rcu_fn); queue_work(cgroup_destroy_wq, &css->destroy_work);
} }
static void init_css(struct cgroup_subsys_state *css, struct cgroup_subsys *ss, static void init_and_link_css(struct cgroup_subsys_state *css,
struct cgroup *cgrp) struct cgroup_subsys *ss, struct cgroup *cgrp)
{ {
lockdep_assert_held(&cgroup_mutex);
cgroup_get(cgrp);
memset(css, 0, sizeof(*css));
css->cgroup = cgrp; css->cgroup = cgrp;
css->ss = ss; css->ss = ss;
css->flags = 0; INIT_LIST_HEAD(&css->sibling);
INIT_LIST_HEAD(&css->children);
css->serial_nr = css_serial_nr_next++;
if (cgrp->parent) if (cgroup_parent(cgrp)) {
css->parent = cgroup_css(cgrp->parent, ss); css->parent = cgroup_css(cgroup_parent(cgrp), ss);
else css_get(css->parent);
css->flags |= CSS_ROOT; }
BUG_ON(cgroup_css(cgrp, ss)); BUG_ON(cgroup_css(cgrp, ss));
} }
...@@ -3594,14 +4238,12 @@ static int online_css(struct cgroup_subsys_state *css) ...@@ -3594,14 +4238,12 @@ static int online_css(struct cgroup_subsys_state *css)
struct cgroup_subsys *ss = css->ss; struct cgroup_subsys *ss = css->ss;
int ret = 0; int ret = 0;
lockdep_assert_held(&cgroup_tree_mutex);
lockdep_assert_held(&cgroup_mutex); lockdep_assert_held(&cgroup_mutex);
if (ss->css_online) if (ss->css_online)
ret = ss->css_online(css); ret = ss->css_online(css);
if (!ret) { if (!ret) {
css->flags |= CSS_ONLINE; css->flags |= CSS_ONLINE;
css->cgroup->nr_css++;
rcu_assign_pointer(css->cgroup->subsys[ss->id], css); rcu_assign_pointer(css->cgroup->subsys[ss->id], css);
} }
return ret; return ret;
...@@ -3612,7 +4254,6 @@ static void offline_css(struct cgroup_subsys_state *css) ...@@ -3612,7 +4254,6 @@ static void offline_css(struct cgroup_subsys_state *css)
{ {
struct cgroup_subsys *ss = css->ss; struct cgroup_subsys *ss = css->ss;
lockdep_assert_held(&cgroup_tree_mutex);
lockdep_assert_held(&cgroup_mutex); lockdep_assert_held(&cgroup_mutex);
if (!(css->flags & CSS_ONLINE)) if (!(css->flags & CSS_ONLINE))
...@@ -3622,8 +4263,9 @@ static void offline_css(struct cgroup_subsys_state *css) ...@@ -3622,8 +4263,9 @@ static void offline_css(struct cgroup_subsys_state *css)
ss->css_offline(css); ss->css_offline(css);
css->flags &= ~CSS_ONLINE; css->flags &= ~CSS_ONLINE;
css->cgroup->nr_css--; RCU_INIT_POINTER(css->cgroup->subsys[ss->id], NULL);
RCU_INIT_POINTER(css->cgroup->subsys[ss->id], css);
wake_up_all(&css->cgroup->offline_waitq);
} }
/** /**
...@@ -3637,111 +4279,102 @@ static void offline_css(struct cgroup_subsys_state *css) ...@@ -3637,111 +4279,102 @@ static void offline_css(struct cgroup_subsys_state *css)
*/ */
static int create_css(struct cgroup *cgrp, struct cgroup_subsys *ss) static int create_css(struct cgroup *cgrp, struct cgroup_subsys *ss)
{ {
struct cgroup *parent = cgrp->parent; struct cgroup *parent = cgroup_parent(cgrp);
struct cgroup_subsys_state *parent_css = cgroup_css(parent, ss);
struct cgroup_subsys_state *css; struct cgroup_subsys_state *css;
int err; int err;
lockdep_assert_held(&cgroup_mutex); lockdep_assert_held(&cgroup_mutex);
css = ss->css_alloc(cgroup_css(parent, ss)); css = ss->css_alloc(parent_css);
if (IS_ERR(css)) if (IS_ERR(css))
return PTR_ERR(css); return PTR_ERR(css);
init_and_link_css(css, ss, cgrp);
err = percpu_ref_init(&css->refcnt, css_release); err = percpu_ref_init(&css->refcnt, css_release);
if (err) if (err)
goto err_free_css; goto err_free_css;
init_css(css, ss, cgrp); err = cgroup_idr_alloc(&ss->css_idr, NULL, 2, 0, GFP_NOWAIT);
if (err < 0)
goto err_free_percpu_ref;
css->id = err;
err = cgroup_populate_dir(cgrp, 1 << ss->id); err = cgroup_populate_dir(cgrp, 1 << ss->id);
if (err) if (err)
goto err_free_percpu_ref; goto err_free_id;
/* @css is ready to be brought online now, make it visible */
list_add_tail_rcu(&css->sibling, &parent_css->children);
cgroup_idr_replace(&ss->css_idr, css, css->id);
err = online_css(css); err = online_css(css);
if (err) if (err)
goto err_clear_dir; goto err_list_del;
cgroup_get(cgrp);
css_get(css->parent);
cgrp->subsys_mask |= 1 << ss->id;
if (ss->broken_hierarchy && !ss->warned_broken_hierarchy && if (ss->broken_hierarchy && !ss->warned_broken_hierarchy &&
parent->parent) { cgroup_parent(parent)) {
pr_warning("cgroup: %s (%d) created nested cgroup for controller \"%s\" which has incomplete hierarchy support. Nested cgroups may change behavior in the future.\n", pr_warn("%s (%d) created nested cgroup for controller \"%s\" which has incomplete hierarchy support. Nested cgroups may change behavior in the future.\n",
current->comm, current->pid, ss->name); current->comm, current->pid, ss->name);
if (!strcmp(ss->name, "memory")) if (!strcmp(ss->name, "memory"))
pr_warning("cgroup: \"memory\" requires setting use_hierarchy to 1 on the root.\n"); pr_warn("\"memory\" requires setting use_hierarchy to 1 on the root\n");
ss->warned_broken_hierarchy = true; ss->warned_broken_hierarchy = true;
} }
return 0; return 0;
err_clear_dir: err_list_del:
list_del_rcu(&css->sibling);
cgroup_clear_dir(css->cgroup, 1 << css->ss->id); cgroup_clear_dir(css->cgroup, 1 << css->ss->id);
err_free_id:
cgroup_idr_remove(&ss->css_idr, css->id);
err_free_percpu_ref: err_free_percpu_ref:
percpu_ref_cancel_init(&css->refcnt); percpu_ref_cancel_init(&css->refcnt);
err_free_css: err_free_css:
ss->css_free(css); call_rcu(&css->rcu_head, css_free_rcu_fn);
return err; return err;
} }
static int cgroup_mkdir(struct kernfs_node *parent_kn, const char *name,
                        umode_t mode)
{
        struct cgroup *parent, *cgrp;
        struct cgroup_root *root;
        struct cgroup_subsys *ss;
        struct kernfs_node *kn;
        int ssid, ret;

        parent = cgroup_kn_lock_live(parent_kn);
        if (!parent)
                return -ENODEV;
        root = parent->root;

        /* allocate the cgroup and its ID, 0 is reserved for the root */
        cgrp = kzalloc(sizeof(*cgrp), GFP_KERNEL);
        if (!cgrp) {
                ret = -ENOMEM;
                goto out_unlock;
        }

        ret = percpu_ref_init(&cgrp->self.refcnt, css_release);
        if (ret)
                goto out_free_cgrp;

        /*
         * Temporarily set the pointer to NULL, so idr_find() won't return
         * a half-baked cgroup.
         */
        cgrp->id = cgroup_idr_alloc(&root->cgroup_idr, NULL, 2, 0, GFP_NOWAIT);
        if (cgrp->id < 0) {
                ret = -ENOMEM;
                goto out_cancel_ref;
        }

        init_cgroup_housekeeping(cgrp);

        cgrp->self.parent = &parent->self;
        cgrp->root = root;

        if (notify_on_release(parent))
                set_bit(CGRP_NOTIFY_ON_RELEASE, &cgrp->flags);

...@@ -3752,8 +4385,8 @@ static long cgroup_create(struct cgroup *parent, const char *name,
        /* create the directory */
        kn = kernfs_create_dir(parent->kn, name, mode, cgrp);
        if (IS_ERR(kn)) {
                ret = PTR_ERR(kn);
                goto out_free_id;
        }
        cgrp->kn = kn;

...@@ -3763,10 +4396,10 @@ static long cgroup_create(struct cgroup *parent, const char *name,
         */
        kernfs_get(kn);

        cgrp->self.serial_nr = css_serial_nr_next++;

        /* allocation complete, commit to creation */
        list_add_tail_rcu(&cgrp->self.sibling, &cgroup_parent(cgrp)->self.children);
        atomic_inc(&root->nr_cgrps);
        cgroup_get(parent);

...@@ -3774,107 +4407,66 @@ static long cgroup_create(struct cgroup *parent, const char *name,
         * @cgrp is now fully operational.  If something fails after this
         * point, it'll be released via the normal destruction path.
         */
        cgroup_idr_replace(&root->cgroup_idr, cgrp, cgrp->id);

        ret = cgroup_kn_set_ugid(kn);
        if (ret)
                goto out_destroy;

        ret = cgroup_addrm_files(cgrp, cgroup_base_files, true);
        if (ret)
                goto out_destroy;

        /* let's create and online css's */
        for_each_subsys(ss, ssid) {
                if (parent->child_subsys_mask & (1 << ssid)) {
                        ret = create_css(cgrp, ss);
                        if (ret)
                                goto out_destroy;
                }
        }

        /*
         * On the default hierarchy, a child doesn't automatically inherit
         * child_subsys_mask from the parent.  Each is configured manually.
         */
        if (!cgroup_on_dfl(cgrp))
                cgrp->child_subsys_mask = parent->child_subsys_mask;

        kernfs_activate(kn);

        ret = 0;
        goto out_unlock;

out_free_id:
        cgroup_idr_remove(&root->cgroup_idr, cgrp->id);
out_cancel_ref:
        percpu_ref_cancel_init(&cgrp->self.refcnt);
out_free_cgrp:
        kfree(cgrp);
out_unlock:
        cgroup_kn_unlock(parent_kn);
        return ret;

out_destroy:
        cgroup_destroy_locked(cgrp);
        goto out_unlock;
}
static int cgroup_mkdir(struct kernfs_node *parent_kn, const char *name,
umode_t mode)
{
struct cgroup *parent = parent_kn->priv;
int ret;
/*
* cgroup_create() grabs cgroup_tree_mutex which nests outside
* kernfs active_ref and cgroup_create() already synchronizes
* properly against removal through cgroup_lock_live_group().
* Break it before calling cgroup_create().
*/
cgroup_get(parent);
kernfs_break_active_protection(parent_kn);
ret = cgroup_create(parent, name, mode);
kernfs_unbreak_active_protection(parent_kn);
cgroup_put(parent);
return ret;
} }
/* /*
* This is called when the refcnt of a css is confirmed to be killed. * This is called when the refcnt of a css is confirmed to be killed.
* css_tryget() is now guaranteed to fail. * css_tryget_online() is now guaranteed to fail. Tell the subsystem to
* initiate destruction and put the css ref from kill_css().
*/ */
static void css_killed_work_fn(struct work_struct *work) static void css_killed_work_fn(struct work_struct *work)
{ {
struct cgroup_subsys_state *css = struct cgroup_subsys_state *css =
container_of(work, struct cgroup_subsys_state, destroy_work); container_of(work, struct cgroup_subsys_state, destroy_work);
struct cgroup *cgrp = css->cgroup;
mutex_lock(&cgroup_tree_mutex);
mutex_lock(&cgroup_mutex); mutex_lock(&cgroup_mutex);
/*
* css_tryget() is guaranteed to fail now. Tell subsystems to
* initate destruction.
*/
offline_css(css); offline_css(css);
/*
* If @cgrp is marked dead, it's waiting for refs of all css's to
* be disabled before proceeding to the second phase of cgroup
* destruction. If we are the last one, kick it off.
*/
if (!cgrp->nr_css && cgroup_is_dead(cgrp))
cgroup_destroy_css_killed(cgrp);
mutex_unlock(&cgroup_mutex); mutex_unlock(&cgroup_mutex);
mutex_unlock(&cgroup_tree_mutex);
/*
* Put the css refs from kill_css(). Each css holds an extra
* reference to the cgroup's dentry and cgroup removal proceeds
* regardless of css refs. On the last put of each css, whenever
* that may be, the extra dentry ref is put so that dentry
* destruction happens only after all css's are released.
*/
css_put(css); css_put(css);
} }
...@@ -3888,9 +4480,18 @@ static void css_killed_ref_fn(struct percpu_ref *ref) ...@@ -3888,9 +4480,18 @@ static void css_killed_ref_fn(struct percpu_ref *ref)
queue_work(cgroup_destroy_wq, &css->destroy_work); queue_work(cgroup_destroy_wq, &css->destroy_work);
} }
static void __kill_css(struct cgroup_subsys_state *css) /**
* kill_css - destroy a css
* @css: css to destroy
*
* This function initiates destruction of @css by removing cgroup interface
* files and putting its base reference. ->css_offline() will be invoked
* asynchronously once css_tryget_online() is guaranteed to fail and when
* the reference count reaches zero, @css will be released.
*/
static void kill_css(struct cgroup_subsys_state *css)
{ {
lockdep_assert_held(&cgroup_tree_mutex); lockdep_assert_held(&cgroup_mutex);
/* /*
* This must happen before css is disassociated with its cgroup. * This must happen before css is disassociated with its cgroup.
...@@ -3907,7 +4508,7 @@ static void __kill_css(struct cgroup_subsys_state *css) ...@@ -3907,7 +4508,7 @@ static void __kill_css(struct cgroup_subsys_state *css)
/* /*
* cgroup core guarantees that, by the time ->css_offline() is * cgroup core guarantees that, by the time ->css_offline() is
* invoked, no new css reference will be given out via * invoked, no new css reference will be given out via
* css_tryget(). We can't simply call percpu_ref_kill() and * css_tryget_online(). We can't simply call percpu_ref_kill() and
* proceed to offlining css's because percpu_ref_kill() doesn't * proceed to offlining css's because percpu_ref_kill() doesn't
* guarantee that the ref is seen as killed on all CPUs on return. * guarantee that the ref is seen as killed on all CPUs on return.
* *
...@@ -3917,37 +4518,15 @@ static void __kill_css(struct cgroup_subsys_state *css) ...@@ -3917,37 +4518,15 @@ static void __kill_css(struct cgroup_subsys_state *css)
percpu_ref_kill_and_confirm(&css->refcnt, css_killed_ref_fn); percpu_ref_kill_and_confirm(&css->refcnt, css_killed_ref_fn);
} }
/**
* kill_css - destroy a css
* @css: css to destroy
*
* This function initiates destruction of @css by removing cgroup interface
* files and putting its base reference. ->css_offline() will be invoked
* asynchronously once css_tryget() is guaranteed to fail and when the
* reference count reaches zero, @css will be released.
*/
static void kill_css(struct cgroup_subsys_state *css)
{
struct cgroup *cgrp = css->cgroup;
lockdep_assert_held(&cgroup_tree_mutex);
/* if already killed, noop */
if (cgrp->subsys_mask & (1 << css->ss->id)) {
cgrp->subsys_mask &= ~(1 << css->ss->id);
__kill_css(css);
}
}
/** /**
* cgroup_destroy_locked - the first stage of cgroup destruction * cgroup_destroy_locked - the first stage of cgroup destruction
* @cgrp: cgroup to be destroyed * @cgrp: cgroup to be destroyed
* *
* css's make use of percpu refcnts whose killing latency shouldn't be * css's make use of percpu refcnts whose killing latency shouldn't be
* exposed to userland and are RCU protected. Also, cgroup core needs to * exposed to userland and are RCU protected. Also, cgroup core needs to
* guarantee that css_tryget() won't succeed by the time ->css_offline() is * guarantee that css_tryget_online() won't succeed by the time
* invoked. To satisfy all the requirements, destruction is implemented in * ->css_offline() is invoked. To satisfy all the requirements,
* the following two steps. * destruction is implemented in the following two steps.
* *
* s1. Verify @cgrp can be destroyed and mark it dying. Remove all * s1. Verify @cgrp can be destroyed and mark it dying. Remove all
* userland visible parts and start killing the percpu refcnts of * userland visible parts and start killing the percpu refcnts of
...@@ -3966,12 +4545,10 @@ static void kill_css(struct cgroup_subsys_state *css) ...@@ -3966,12 +4545,10 @@ static void kill_css(struct cgroup_subsys_state *css)
static int cgroup_destroy_locked(struct cgroup *cgrp)
        __releases(&cgroup_mutex) __acquires(&cgroup_mutex)
{
        struct cgroup_subsys_state *css;
        bool empty;
        int ssid;

        lockdep_assert_held(&cgroup_mutex);

        /*
...@@ -3985,127 +4562,68 @@ static int cgroup_destroy_locked(struct cgroup *cgrp)
                return -EBUSY;

        /*
         * Make sure there's no live children.  We can't test emptiness of
         * ->self.children as dead children linger on it while being
         * drained; otherwise, "rmdir parent/child parent" may fail.
         */
        if (css_has_online_children(&cgrp->self))
                return -EBUSY;

        /*
         * Mark @cgrp dead.  This prevents further task migration and child
         * creation by disabling cgroup_lock_live_group().
         */
        cgrp->self.flags &= ~CSS_ONLINE;

        /* initiate massacre of all css's */
        for_each_css(css, ssid, cgrp)
                kill_css(css);

        /* CSS_ONLINE is clear, remove from ->release_list for the last time */
        raw_spin_lock(&release_list_lock);
        if (!list_empty(&cgrp->release_list))
                list_del_init(&cgrp->release_list);
        raw_spin_unlock(&release_list_lock);

        /*
         * Remove @cgrp directory along with the base files.  @cgrp has an
         * extra ref on its kn.
         */
        kernfs_remove(cgrp->kn);

        set_bit(CGRP_RELEASABLE, &cgroup_parent(cgrp)->flags);
        check_for_release(cgroup_parent(cgrp));

        /* put the base reference */
        percpu_ref_kill(&cgrp->self.refcnt);

        return 0;
};
/**
* cgroup_destroy_css_killed - the second step of cgroup destruction
* @work: cgroup->destroy_free_work
*
* This function is invoked from a work item for a cgroup which is being
* destroyed after all css's are offlined and performs the rest of
* destruction. This is the second step of destruction described in the
* comment above cgroup_destroy_locked().
*/
static void cgroup_destroy_css_killed(struct cgroup *cgrp)
{
struct cgroup *parent = cgrp->parent;
lockdep_assert_held(&cgroup_tree_mutex);
lockdep_assert_held(&cgroup_mutex);
/* delete this cgroup from parent->children */
list_del_rcu(&cgrp->sibling);
cgroup_put(cgrp);
set_bit(CGRP_RELEASABLE, &parent->flags);
check_for_release(parent);
}
static int cgroup_rmdir(struct kernfs_node *kn) static int cgroup_rmdir(struct kernfs_node *kn)
{ {
struct cgroup *cgrp = kn->priv; struct cgroup *cgrp;
int ret = 0; int ret = 0;
/* cgrp = cgroup_kn_lock_live(kn);
* This is self-destruction but @kn can't be removed while this if (!cgrp)
* callback is in progress. Let's break active protection. Once return 0;
* the protection is broken, @cgrp can be destroyed at any point. cgroup_get(cgrp); /* for @kn->priv clearing */
* Pin it so that it stays accessible.
*/
cgroup_get(cgrp);
kernfs_break_active_protection(kn);
mutex_lock(&cgroup_tree_mutex); ret = cgroup_destroy_locked(cgrp);
mutex_lock(&cgroup_mutex);
cgroup_kn_unlock(kn);
/* /*
* @cgrp might already have been destroyed while we're trying to * There are two control paths which try to determine cgroup from
* grab the mutexes. * dentry without going through kernfs - cgroupstats_build() and
* css_tryget_online_from_dir(). Those are supported by RCU
* protecting clearing of cgrp->kn->priv backpointer, which should
* happen after all files under it have been removed.
*/ */
if (!cgroup_is_dead(cgrp)) if (!ret)
ret = cgroup_destroy_locked(cgrp); RCU_INIT_POINTER(*(void __rcu __force **)&kn->priv, NULL);
mutex_unlock(&cgroup_mutex);
mutex_unlock(&cgroup_tree_mutex);
kernfs_unbreak_active_protection(kn);
cgroup_put(cgrp); cgroup_put(cgrp);
return ret; return ret;
} }
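The comment above cgroup_rmdir()'s RCU_INIT_POINTER() describes the writer half of a handshake; the reader half is css_tryget_online_from_dir() further down. A minimal sketch of that reader side (local variable names illustrative), showing why clearing kn->priv only after kernfs_remove() is enough for lockless lookups:

        rcu_read_lock();
        /*
         * Either we see NULL (rmdir already cleared the backpointer) or a
         * cgroup whose csses can still be probed with css_tryget_online().
         */
        cgrp = rcu_dereference(kn->priv);
        if (cgrp)
                css = cgroup_css(cgrp, ss);
        if (!css || !css_tryget_online(css))
                css = ERR_PTR(-ENOENT);
        rcu_read_unlock();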
...@@ -4118,15 +4636,15 @@ static struct kernfs_syscall_ops cgroup_kf_syscall_ops = { ...@@ -4118,15 +4636,15 @@ static struct kernfs_syscall_ops cgroup_kf_syscall_ops = {
.rename = cgroup_rename, .rename = cgroup_rename,
}; };
static void __init cgroup_init_subsys(struct cgroup_subsys *ss) static void __init cgroup_init_subsys(struct cgroup_subsys *ss, bool early)
{ {
struct cgroup_subsys_state *css; struct cgroup_subsys_state *css;
printk(KERN_INFO "Initializing cgroup subsys %s\n", ss->name); printk(KERN_INFO "Initializing cgroup subsys %s\n", ss->name);
mutex_lock(&cgroup_tree_mutex);
mutex_lock(&cgroup_mutex); mutex_lock(&cgroup_mutex);
idr_init(&ss->css_idr);
INIT_LIST_HEAD(&ss->cfts); INIT_LIST_HEAD(&ss->cfts);
/* Create the root cgroup state for this subsystem */ /* Create the root cgroup state for this subsystem */
...@@ -4134,7 +4652,21 @@ static void __init cgroup_init_subsys(struct cgroup_subsys *ss) ...@@ -4134,7 +4652,21 @@ static void __init cgroup_init_subsys(struct cgroup_subsys *ss)
css = ss->css_alloc(cgroup_css(&cgrp_dfl_root.cgrp, ss)); css = ss->css_alloc(cgroup_css(&cgrp_dfl_root.cgrp, ss));
/* We don't handle early failures gracefully */ /* We don't handle early failures gracefully */
BUG_ON(IS_ERR(css)); BUG_ON(IS_ERR(css));
init_css(css, ss, &cgrp_dfl_root.cgrp); init_and_link_css(css, ss, &cgrp_dfl_root.cgrp);
/*
* Root csses are never destroyed and we can't initialize
* percpu_ref during early init. Disable refcnting.
*/
css->flags |= CSS_NO_REF;
if (early) {
/* allocation can't be done safely during early init */
css->id = 1;
} else {
css->id = cgroup_idr_alloc(&ss->css_idr, css, 1, 2, GFP_KERNEL);
BUG_ON(css->id < 0);
}
/* Update the init_css_set to contain a subsys /* Update the init_css_set to contain a subsys
* pointer to this state - since the subsystem is * pointer to this state - since the subsystem is
...@@ -4151,10 +4683,7 @@ static void __init cgroup_init_subsys(struct cgroup_subsys *ss) ...@@ -4151,10 +4683,7 @@ static void __init cgroup_init_subsys(struct cgroup_subsys *ss)
BUG_ON(online_css(css)); BUG_ON(online_css(css));
cgrp_dfl_root.cgrp.subsys_mask |= 1 << ss->id;
mutex_unlock(&cgroup_mutex); mutex_unlock(&cgroup_mutex);
mutex_unlock(&cgroup_tree_mutex);
} }
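The CSS_NO_REF flag set above (and on cgrp_dfl_root.cgrp.self in cgroup_init_early() below) is what lets root csses skip reference counting entirely: percpu_ref can't be initialized during early init and root csses are never destroyed anyway. A rough sketch of how css_get()/css_put() are expected to honor the flag — an assumption about the helpers, which are not part of this hunk:

        static inline void css_get(struct cgroup_subsys_state *css)
        {
                if (!(css->flags & CSS_NO_REF))
                        percpu_ref_get(&css->refcnt);
        }

        static inline void css_put(struct cgroup_subsys_state *css)
        {
                if (!(css->flags & CSS_NO_REF))
                        percpu_ref_put(&css->refcnt);
        }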
/** /**
...@@ -4171,6 +4700,8 @@ int __init cgroup_init_early(void) ...@@ -4171,6 +4700,8 @@ int __init cgroup_init_early(void)
int i; int i;
init_cgroup_root(&cgrp_dfl_root, &opts); init_cgroup_root(&cgrp_dfl_root, &opts);
cgrp_dfl_root.cgrp.self.flags |= CSS_NO_REF;
RCU_INIT_POINTER(init_task.cgroups, &init_css_set); RCU_INIT_POINTER(init_task.cgroups, &init_css_set);
for_each_subsys(ss, i) { for_each_subsys(ss, i) {
...@@ -4185,7 +4716,7 @@ int __init cgroup_init_early(void) ...@@ -4185,7 +4716,7 @@ int __init cgroup_init_early(void)
ss->name = cgroup_subsys_name[i]; ss->name = cgroup_subsys_name[i];
if (ss->early_init) if (ss->early_init)
cgroup_init_subsys(ss); cgroup_init_subsys(ss, true);
} }
return 0; return 0;
} }
...@@ -4204,7 +4735,6 @@ int __init cgroup_init(void) ...@@ -4204,7 +4735,6 @@ int __init cgroup_init(void)
BUG_ON(cgroup_init_cftypes(NULL, cgroup_base_files)); BUG_ON(cgroup_init_cftypes(NULL, cgroup_base_files));
mutex_lock(&cgroup_tree_mutex);
mutex_lock(&cgroup_mutex); mutex_lock(&cgroup_mutex);
/* Add init_css_set to the hash table */ /* Add init_css_set to the hash table */
...@@ -4214,18 +4744,31 @@ int __init cgroup_init(void) ...@@ -4214,18 +4744,31 @@ int __init cgroup_init(void)
BUG_ON(cgroup_setup_root(&cgrp_dfl_root, 0)); BUG_ON(cgroup_setup_root(&cgrp_dfl_root, 0));
mutex_unlock(&cgroup_mutex); mutex_unlock(&cgroup_mutex);
mutex_unlock(&cgroup_tree_mutex);
for_each_subsys(ss, ssid) { for_each_subsys(ss, ssid) {
if (!ss->early_init) if (ss->early_init) {
cgroup_init_subsys(ss); struct cgroup_subsys_state *css =
init_css_set.subsys[ss->id];
css->id = cgroup_idr_alloc(&ss->css_idr, css, 1, 2,
GFP_KERNEL);
BUG_ON(css->id < 0);
} else {
cgroup_init_subsys(ss, false);
}
list_add_tail(&init_css_set.e_cset_node[ssid],
&cgrp_dfl_root.cgrp.e_csets[ssid]);
/* /*
* cftype registration needs kmalloc and can't be done * Setting dfl_root subsys_mask needs to consider the
* during early_init. Register base cftypes separately. * disabled flag and cftype registration needs kmalloc,
* both of which aren't available during early_init.
*/ */
if (ss->base_cftypes) if (!ss->disabled) {
cgrp_dfl_root.subsys_mask |= 1 << ss->id;
WARN_ON(cgroup_add_cftypes(ss, ss->base_cftypes)); WARN_ON(cgroup_add_cftypes(ss, ss->base_cftypes));
}
} }
cgroup_kobj = kobject_create_and_add("cgroup", fs_kobj); cgroup_kobj = kobject_create_and_add("cgroup", fs_kobj);
...@@ -4308,7 +4851,7 @@ int proc_cgroup_show(struct seq_file *m, void *v) ...@@ -4308,7 +4851,7 @@ int proc_cgroup_show(struct seq_file *m, void *v)
seq_printf(m, "%d:", root->hierarchy_id); seq_printf(m, "%d:", root->hierarchy_id);
for_each_subsys(ss, ssid) for_each_subsys(ss, ssid)
if (root->cgrp.subsys_mask & (1 << ssid)) if (root->subsys_mask & (1 << ssid))
seq_printf(m, "%s%s", count++ ? "," : "", ss->name); seq_printf(m, "%s%s", count++ ? "," : "", ss->name);
if (strlen(root->name)) if (strlen(root->name))
seq_printf(m, "%sname=%s", count ? "," : "", seq_printf(m, "%sname=%s", count ? "," : "",
...@@ -4503,8 +5046,8 @@ void cgroup_exit(struct task_struct *tsk) ...@@ -4503,8 +5046,8 @@ void cgroup_exit(struct task_struct *tsk)
static void check_for_release(struct cgroup *cgrp) static void check_for_release(struct cgroup *cgrp)
{ {
if (cgroup_is_releasable(cgrp) && if (cgroup_is_releasable(cgrp) && list_empty(&cgrp->cset_links) &&
list_empty(&cgrp->cset_links) && list_empty(&cgrp->children)) { !css_has_online_children(&cgrp->self)) {
/* /*
* Control Group is currently removable. If it's not * Control Group is currently removable. If it's not
* already queued for a userspace notification, queue * already queued for a userspace notification, queue
...@@ -4621,7 +5164,7 @@ static int __init cgroup_disable(char *str) ...@@ -4621,7 +5164,7 @@ static int __init cgroup_disable(char *str)
__setup("cgroup_disable=", cgroup_disable); __setup("cgroup_disable=", cgroup_disable);
/** /**
* css_tryget_from_dir - get corresponding css from the dentry of a cgroup dir * css_tryget_online_from_dir - get corresponding css from a cgroup dentry
* @dentry: directory dentry of interest * @dentry: directory dentry of interest
* @ss: subsystem of interest * @ss: subsystem of interest
* *
...@@ -4629,8 +5172,8 @@ __setup("cgroup_disable=", cgroup_disable); ...@@ -4629,8 +5172,8 @@ __setup("cgroup_disable=", cgroup_disable);
* to get the corresponding css and return it. If such css doesn't exist * to get the corresponding css and return it. If such css doesn't exist
* or can't be pinned, an ERR_PTR value is returned. * or can't be pinned, an ERR_PTR value is returned.
*/ */
struct cgroup_subsys_state *css_tryget_from_dir(struct dentry *dentry, struct cgroup_subsys_state *css_tryget_online_from_dir(struct dentry *dentry,
struct cgroup_subsys *ss) struct cgroup_subsys *ss)
{ {
struct kernfs_node *kn = kernfs_node_from_dentry(dentry); struct kernfs_node *kn = kernfs_node_from_dentry(dentry);
struct cgroup_subsys_state *css = NULL; struct cgroup_subsys_state *css = NULL;
...@@ -4646,13 +5189,13 @@ struct cgroup_subsys_state *css_tryget_from_dir(struct dentry *dentry, ...@@ -4646,13 +5189,13 @@ struct cgroup_subsys_state *css_tryget_from_dir(struct dentry *dentry,
/* /*
* This path doesn't originate from kernfs and @kn could already * This path doesn't originate from kernfs and @kn could already
* have been or be removed at any point. @kn->priv is RCU * have been or be removed at any point. @kn->priv is RCU
* protected for this access. See destroy_locked() for details. * protected for this access. See cgroup_rmdir() for details.
*/ */
cgrp = rcu_dereference(kn->priv); cgrp = rcu_dereference(kn->priv);
if (cgrp) if (cgrp)
css = cgroup_css(cgrp, ss); css = cgroup_css(cgrp, ss);
if (!css || !css_tryget(css)) if (!css || !css_tryget_online(css))
css = ERR_PTR(-ENOENT); css = ERR_PTR(-ENOENT);
rcu_read_unlock(); rcu_read_unlock();
...@@ -4669,14 +5212,8 @@ struct cgroup_subsys_state *css_tryget_from_dir(struct dentry *dentry, ...@@ -4669,14 +5212,8 @@ struct cgroup_subsys_state *css_tryget_from_dir(struct dentry *dentry,
*/ */
struct cgroup_subsys_state *css_from_id(int id, struct cgroup_subsys *ss) struct cgroup_subsys_state *css_from_id(int id, struct cgroup_subsys *ss)
{ {
struct cgroup *cgrp; WARN_ON_ONCE(!rcu_read_lock_held());
return idr_find(&ss->css_idr, id);
cgroup_assert_mutexes_or_rcu_locked();
cgrp = idr_find(&ss->root->cgroup_idr, id);
if (cgrp)
return cgroup_css(cgrp, ss);
return NULL;
} }
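css_from_id() now resolves the ID through the per-subsystem IDR and only asks for the RCU read lock instead of the old mutex-or-RCU assertion. A minimal caller sketch (generic; the tryget is needed only if the css is used past the RCU section):

        struct cgroup_subsys_state *css;

        rcu_read_lock();
        css = css_from_id(id, ss);              /* NULL if the ID is unused */
        if (css && !css_tryget_online(css))
                css = NULL;                     /* found, but already offline */
        rcu_read_unlock();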
#ifdef CONFIG_CGROUP_DEBUG #ifdef CONFIG_CGROUP_DEBUG
......
...@@ -59,7 +59,7 @@ static inline struct freezer *task_freezer(struct task_struct *task) ...@@ -59,7 +59,7 @@ static inline struct freezer *task_freezer(struct task_struct *task)
static struct freezer *parent_freezer(struct freezer *freezer) static struct freezer *parent_freezer(struct freezer *freezer)
{ {
return css_freezer(css_parent(&freezer->css)); return css_freezer(freezer->css.parent);
} }
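With css_parent() gone, controllers read css->parent directly; it is NULL for a root css, which is why callers such as cpu_cgroup_css_online() below still test the result. Ancestor walks need no helper either — a sketch of the loop shape memcg_get_hierarchical_limit() uses further down, valid for any controller:

        /* walk from @css towards the root; ->parent is NULL at the root */
        for (; css; css = css->parent) {
                /* per-level work */
        }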
bool cgroup_freezing(struct task_struct *task) bool cgroup_freezing(struct task_struct *task)
...@@ -73,10 +73,6 @@ bool cgroup_freezing(struct task_struct *task) ...@@ -73,10 +73,6 @@ bool cgroup_freezing(struct task_struct *task)
return ret; return ret;
} }
/*
* cgroups_write_string() limits the size of freezer state strings to
* CGROUP_LOCAL_BUFFER_SIZE
*/
static const char *freezer_state_strs(unsigned int state) static const char *freezer_state_strs(unsigned int state)
{ {
if (state & CGROUP_FROZEN) if (state & CGROUP_FROZEN)
...@@ -304,7 +300,7 @@ static int freezer_read(struct seq_file *m, void *v) ...@@ -304,7 +300,7 @@ static int freezer_read(struct seq_file *m, void *v)
/* update states bottom-up */ /* update states bottom-up */
css_for_each_descendant_post(pos, css) { css_for_each_descendant_post(pos, css) {
if (!css_tryget(pos)) if (!css_tryget_online(pos))
continue; continue;
rcu_read_unlock(); rcu_read_unlock();
...@@ -404,7 +400,7 @@ static void freezer_change_state(struct freezer *freezer, bool freeze) ...@@ -404,7 +400,7 @@ static void freezer_change_state(struct freezer *freezer, bool freeze)
struct freezer *pos_f = css_freezer(pos); struct freezer *pos_f = css_freezer(pos);
struct freezer *parent = parent_freezer(pos_f); struct freezer *parent = parent_freezer(pos_f);
if (!css_tryget(pos)) if (!css_tryget_online(pos))
continue; continue;
rcu_read_unlock(); rcu_read_unlock();
...@@ -423,20 +419,22 @@ static void freezer_change_state(struct freezer *freezer, bool freeze) ...@@ -423,20 +419,22 @@ static void freezer_change_state(struct freezer *freezer, bool freeze)
mutex_unlock(&freezer_mutex); mutex_unlock(&freezer_mutex);
} }
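Both freezer loops above share one shape: iterate descendants under RCU, skip csses that are already going offline, and drop the lock around blocking per-css work. css_tryget_online() is the piece that filters out dying csses which the old css_tryget() would still have pinned. Roughly:

        rcu_read_lock();
        css_for_each_descendant_post(pos, css) {
                if (!css_tryget_online(pos))
                        continue;               /* already being destroyed */
                rcu_read_unlock();

                /* blocking per-descendant work goes here */

                rcu_read_lock();
                css_put(pos);
        }
        rcu_read_unlock();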
static int freezer_write(struct cgroup_subsys_state *css, struct cftype *cft, static ssize_t freezer_write(struct kernfs_open_file *of,
char *buffer) char *buf, size_t nbytes, loff_t off)
{ {
bool freeze; bool freeze;
if (strcmp(buffer, freezer_state_strs(0)) == 0) buf = strstrip(buf);
if (strcmp(buf, freezer_state_strs(0)) == 0)
freeze = false; freeze = false;
else if (strcmp(buffer, freezer_state_strs(CGROUP_FROZEN)) == 0) else if (strcmp(buf, freezer_state_strs(CGROUP_FROZEN)) == 0)
freeze = true; freeze = true;
else else
return -EINVAL; return -EINVAL;
freezer_change_state(css_freezer(css), freeze); freezer_change_state(css_freezer(of_css(of)), freeze);
return 0; return nbytes;
} }
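freezer_write() shows the whole kernfs write conversion that repeats in cpuset, hugetlb, memcg, net_prio and tcp_memcontrol below: the handler gets a kernfs_open_file instead of a css/cftype pair, recovers them with of_css()/of_cft(), strips the trailing newline itself, and returns the number of bytes consumed on success. A generic template (the foo_* names are hypothetical):

        static ssize_t foo_write(struct kernfs_open_file *of,
                                 char *buf, size_t nbytes, loff_t off)
        {
                struct cgroup_subsys_state *css = of_css(of);
                int ret;

                buf = strstrip(buf);            /* kernfs hands over a writable copy */
                ret = foo_apply(css, of_cft(of)->private, buf);
                return ret ?: nbytes;           /* bytes consumed on success */
        }

        static struct cftype foo_files[] = {
                {
                        .name   = "foo.control",
                        .write  = foo_write,
                },
                { }     /* terminate */
        };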
static u64 freezer_self_freezing_read(struct cgroup_subsys_state *css, static u64 freezer_self_freezing_read(struct cgroup_subsys_state *css,
...@@ -460,7 +458,7 @@ static struct cftype files[] = { ...@@ -460,7 +458,7 @@ static struct cftype files[] = {
.name = "state", .name = "state",
.flags = CFTYPE_NOT_ON_ROOT, .flags = CFTYPE_NOT_ON_ROOT,
.seq_show = freezer_read, .seq_show = freezer_read,
.write_string = freezer_write, .write = freezer_write,
}, },
{ {
.name = "self_freezing", .name = "self_freezing",
......
...@@ -119,7 +119,7 @@ static inline struct cpuset *task_cs(struct task_struct *task) ...@@ -119,7 +119,7 @@ static inline struct cpuset *task_cs(struct task_struct *task)
static inline struct cpuset *parent_cs(struct cpuset *cs) static inline struct cpuset *parent_cs(struct cpuset *cs)
{ {
return css_cs(css_parent(&cs->css)); return css_cs(cs->css.parent);
} }
#ifdef CONFIG_NUMA #ifdef CONFIG_NUMA
...@@ -691,11 +691,8 @@ static int generate_sched_domains(cpumask_var_t **domains, ...@@ -691,11 +691,8 @@ static int generate_sched_domains(cpumask_var_t **domains,
if (nslot == ndoms) { if (nslot == ndoms) {
static int warnings = 10; static int warnings = 10;
if (warnings) { if (warnings) {
printk(KERN_WARNING pr_warn("rebuild_sched_domains confused: nslot %d, ndoms %d, csn %d, i %d, apn %d\n",
"rebuild_sched_domains confused:" nslot, ndoms, csn, i, apn);
" nslot %d, ndoms %d, csn %d, i %d,"
" apn %d\n",
nslot, ndoms, csn, i, apn);
warnings--; warnings--;
} }
continue; continue;
...@@ -870,7 +867,7 @@ static void update_tasks_cpumask_hier(struct cpuset *root_cs, bool update_root) ...@@ -870,7 +867,7 @@ static void update_tasks_cpumask_hier(struct cpuset *root_cs, bool update_root)
continue; continue;
} }
} }
if (!css_tryget(&cp->css)) if (!css_tryget_online(&cp->css))
continue; continue;
rcu_read_unlock(); rcu_read_unlock();
...@@ -885,6 +882,7 @@ static void update_tasks_cpumask_hier(struct cpuset *root_cs, bool update_root) ...@@ -885,6 +882,7 @@ static void update_tasks_cpumask_hier(struct cpuset *root_cs, bool update_root)
/** /**
* update_cpumask - update the cpus_allowed mask of a cpuset and all tasks in it * update_cpumask - update the cpus_allowed mask of a cpuset and all tasks in it
* @cs: the cpuset to consider * @cs: the cpuset to consider
* @trialcs: trial cpuset
* @buf: buffer of cpu numbers written to this cpuset * @buf: buffer of cpu numbers written to this cpuset
*/ */
static int update_cpumask(struct cpuset *cs, struct cpuset *trialcs, static int update_cpumask(struct cpuset *cs, struct cpuset *trialcs,
...@@ -1105,7 +1103,7 @@ static void update_tasks_nodemask_hier(struct cpuset *root_cs, bool update_root) ...@@ -1105,7 +1103,7 @@ static void update_tasks_nodemask_hier(struct cpuset *root_cs, bool update_root)
continue; continue;
} }
} }
if (!css_tryget(&cp->css)) if (!css_tryget_online(&cp->css))
continue; continue;
rcu_read_unlock(); rcu_read_unlock();
...@@ -1600,13 +1598,15 @@ static int cpuset_write_s64(struct cgroup_subsys_state *css, struct cftype *cft, ...@@ -1600,13 +1598,15 @@ static int cpuset_write_s64(struct cgroup_subsys_state *css, struct cftype *cft,
/* /*
* Common handling for a write to a "cpus" or "mems" file. * Common handling for a write to a "cpus" or "mems" file.
*/ */
static int cpuset_write_resmask(struct cgroup_subsys_state *css, static ssize_t cpuset_write_resmask(struct kernfs_open_file *of,
struct cftype *cft, char *buf) char *buf, size_t nbytes, loff_t off)
{ {
struct cpuset *cs = css_cs(css); struct cpuset *cs = css_cs(of_css(of));
struct cpuset *trialcs; struct cpuset *trialcs;
int retval = -ENODEV; int retval = -ENODEV;
buf = strstrip(buf);
/* /*
* CPU or memory hotunplug may leave @cs w/o any execution * CPU or memory hotunplug may leave @cs w/o any execution
* resources, in which case the hotplug code asynchronously updates * resources, in which case the hotplug code asynchronously updates
...@@ -1630,7 +1630,7 @@ static int cpuset_write_resmask(struct cgroup_subsys_state *css, ...@@ -1630,7 +1630,7 @@ static int cpuset_write_resmask(struct cgroup_subsys_state *css,
goto out_unlock; goto out_unlock;
} }
switch (cft->private) { switch (of_cft(of)->private) {
case FILE_CPULIST: case FILE_CPULIST:
retval = update_cpumask(cs, trialcs, buf); retval = update_cpumask(cs, trialcs, buf);
break; break;
...@@ -1645,7 +1645,7 @@ static int cpuset_write_resmask(struct cgroup_subsys_state *css, ...@@ -1645,7 +1645,7 @@ static int cpuset_write_resmask(struct cgroup_subsys_state *css,
free_trial_cpuset(trialcs); free_trial_cpuset(trialcs);
out_unlock: out_unlock:
mutex_unlock(&cpuset_mutex); mutex_unlock(&cpuset_mutex);
return retval; return retval ?: nbytes;
} }
/* /*
...@@ -1747,7 +1747,7 @@ static struct cftype files[] = { ...@@ -1747,7 +1747,7 @@ static struct cftype files[] = {
{ {
.name = "cpus", .name = "cpus",
.seq_show = cpuset_common_seq_show, .seq_show = cpuset_common_seq_show,
.write_string = cpuset_write_resmask, .write = cpuset_write_resmask,
.max_write_len = (100U + 6 * NR_CPUS), .max_write_len = (100U + 6 * NR_CPUS),
.private = FILE_CPULIST, .private = FILE_CPULIST,
}, },
...@@ -1755,7 +1755,7 @@ static struct cftype files[] = { ...@@ -1755,7 +1755,7 @@ static struct cftype files[] = {
{ {
.name = "mems", .name = "mems",
.seq_show = cpuset_common_seq_show, .seq_show = cpuset_common_seq_show,
.write_string = cpuset_write_resmask, .write = cpuset_write_resmask,
.max_write_len = (100U + 6 * MAX_NUMNODES), .max_write_len = (100U + 6 * MAX_NUMNODES),
.private = FILE_MEMLIST, .private = FILE_MEMLIST,
}, },
...@@ -2011,7 +2011,7 @@ static void remove_tasks_in_empty_cpuset(struct cpuset *cs) ...@@ -2011,7 +2011,7 @@ static void remove_tasks_in_empty_cpuset(struct cpuset *cs)
parent = parent_cs(parent); parent = parent_cs(parent);
if (cgroup_transfer_tasks(parent->css.cgroup, cs->css.cgroup)) { if (cgroup_transfer_tasks(parent->css.cgroup, cs->css.cgroup)) {
printk(KERN_ERR "cpuset: failed to transfer tasks out of empty cpuset "); pr_err("cpuset: failed to transfer tasks out of empty cpuset ");
pr_cont_cgroup_name(cs->css.cgroup); pr_cont_cgroup_name(cs->css.cgroup);
pr_cont("\n"); pr_cont("\n");
} }
...@@ -2149,7 +2149,7 @@ static void cpuset_hotplug_workfn(struct work_struct *work) ...@@ -2149,7 +2149,7 @@ static void cpuset_hotplug_workfn(struct work_struct *work)
rcu_read_lock(); rcu_read_lock();
cpuset_for_each_descendant_pre(cs, pos_css, &top_cpuset) { cpuset_for_each_descendant_pre(cs, pos_css, &top_cpuset) {
if (cs == &top_cpuset || !css_tryget(&cs->css)) if (cs == &top_cpuset || !css_tryget_online(&cs->css))
continue; continue;
rcu_read_unlock(); rcu_read_unlock();
...@@ -2530,7 +2530,7 @@ int cpuset_mems_allowed_intersects(const struct task_struct *tsk1, ...@@ -2530,7 +2530,7 @@ int cpuset_mems_allowed_intersects(const struct task_struct *tsk1,
/** /**
* cpuset_print_task_mems_allowed - prints task's cpuset and mems_allowed * cpuset_print_task_mems_allowed - prints task's cpuset and mems_allowed
* @task: pointer to task_struct of some task. * @tsk: pointer to task_struct of some task.
* *
* Description: Prints @task's name, cpuset name, and cached copy of its * Description: Prints @task's name, cpuset name, and cached copy of its
* mems_allowed to the kernel log. * mems_allowed to the kernel log.
...@@ -2548,7 +2548,7 @@ void cpuset_print_task_mems_allowed(struct task_struct *tsk) ...@@ -2548,7 +2548,7 @@ void cpuset_print_task_mems_allowed(struct task_struct *tsk)
cgrp = task_cs(tsk)->css.cgroup; cgrp = task_cs(tsk)->css.cgroup;
nodelist_scnprintf(cpuset_nodelist, CPUSET_NODELIST_LEN, nodelist_scnprintf(cpuset_nodelist, CPUSET_NODELIST_LEN,
tsk->mems_allowed); tsk->mems_allowed);
printk(KERN_INFO "%s cpuset=", tsk->comm); pr_info("%s cpuset=", tsk->comm);
pr_cont_cgroup_name(cgrp); pr_cont_cgroup_name(cgrp);
pr_cont(" mems_allowed=%s\n", cpuset_nodelist); pr_cont(" mems_allowed=%s\n", cpuset_nodelist);
...@@ -2640,10 +2640,10 @@ int proc_cpuset_show(struct seq_file *m, void *unused_v) ...@@ -2640,10 +2640,10 @@ int proc_cpuset_show(struct seq_file *m, void *unused_v)
/* Display task mems_allowed in /proc/<pid>/status file. */ /* Display task mems_allowed in /proc/<pid>/status file. */
void cpuset_task_status_allowed(struct seq_file *m, struct task_struct *task) void cpuset_task_status_allowed(struct seq_file *m, struct task_struct *task)
{ {
seq_printf(m, "Mems_allowed:\t"); seq_puts(m, "Mems_allowed:\t");
seq_nodemask(m, &task->mems_allowed); seq_nodemask(m, &task->mems_allowed);
seq_printf(m, "\n"); seq_puts(m, "\n");
seq_printf(m, "Mems_allowed_list:\t"); seq_puts(m, "Mems_allowed_list:\t");
seq_nodemask_list(m, &task->mems_allowed); seq_nodemask_list(m, &task->mems_allowed);
seq_printf(m, "\n"); seq_puts(m, "\n");
} }
...@@ -608,7 +608,8 @@ static inline int perf_cgroup_connect(int fd, struct perf_event *event, ...@@ -608,7 +608,8 @@ static inline int perf_cgroup_connect(int fd, struct perf_event *event,
if (!f.file) if (!f.file)
return -EBADF; return -EBADF;
css = css_tryget_from_dir(f.file->f_dentry, &perf_event_cgrp_subsys); css = css_tryget_online_from_dir(f.file->f_dentry,
&perf_event_cgrp_subsys);
if (IS_ERR(css)) { if (IS_ERR(css)) {
ret = PTR_ERR(css); ret = PTR_ERR(css);
goto out; goto out;
......
...@@ -7669,7 +7669,7 @@ cpu_cgroup_css_alloc(struct cgroup_subsys_state *parent_css) ...@@ -7669,7 +7669,7 @@ cpu_cgroup_css_alloc(struct cgroup_subsys_state *parent_css)
static int cpu_cgroup_css_online(struct cgroup_subsys_state *css) static int cpu_cgroup_css_online(struct cgroup_subsys_state *css)
{ {
struct task_group *tg = css_tg(css); struct task_group *tg = css_tg(css);
struct task_group *parent = css_tg(css_parent(css)); struct task_group *parent = css_tg(css->parent);
if (parent) if (parent)
sched_online_group(tg, parent); sched_online_group(tg, parent);
......
...@@ -46,7 +46,7 @@ static inline struct cpuacct *task_ca(struct task_struct *tsk) ...@@ -46,7 +46,7 @@ static inline struct cpuacct *task_ca(struct task_struct *tsk)
static inline struct cpuacct *parent_ca(struct cpuacct *ca) static inline struct cpuacct *parent_ca(struct cpuacct *ca)
{ {
return css_ca(css_parent(&ca->css)); return css_ca(ca->css.parent);
} }
static DEFINE_PER_CPU(u64, root_cpuacct_cpuusage); static DEFINE_PER_CPU(u64, root_cpuacct_cpuusage);
......
...@@ -52,7 +52,7 @@ static inline bool hugetlb_cgroup_is_root(struct hugetlb_cgroup *h_cg) ...@@ -52,7 +52,7 @@ static inline bool hugetlb_cgroup_is_root(struct hugetlb_cgroup *h_cg)
static inline struct hugetlb_cgroup * static inline struct hugetlb_cgroup *
parent_hugetlb_cgroup(struct hugetlb_cgroup *h_cg) parent_hugetlb_cgroup(struct hugetlb_cgroup *h_cg)
{ {
return hugetlb_cgroup_from_css(css_parent(&h_cg->css)); return hugetlb_cgroup_from_css(h_cg->css.parent);
} }
static inline bool hugetlb_cgroup_have_usage(struct hugetlb_cgroup *h_cg) static inline bool hugetlb_cgroup_have_usage(struct hugetlb_cgroup *h_cg)
...@@ -181,7 +181,7 @@ int hugetlb_cgroup_charge_cgroup(int idx, unsigned long nr_pages, ...@@ -181,7 +181,7 @@ int hugetlb_cgroup_charge_cgroup(int idx, unsigned long nr_pages,
again: again:
rcu_read_lock(); rcu_read_lock();
h_cg = hugetlb_cgroup_from_task(current); h_cg = hugetlb_cgroup_from_task(current);
if (!css_tryget(&h_cg->css)) { if (!css_tryget_online(&h_cg->css)) {
rcu_read_unlock(); rcu_read_unlock();
goto again; goto again;
} }
...@@ -253,15 +253,16 @@ static u64 hugetlb_cgroup_read_u64(struct cgroup_subsys_state *css, ...@@ -253,15 +253,16 @@ static u64 hugetlb_cgroup_read_u64(struct cgroup_subsys_state *css,
return res_counter_read_u64(&h_cg->hugepage[idx], name); return res_counter_read_u64(&h_cg->hugepage[idx], name);
} }
static int hugetlb_cgroup_write(struct cgroup_subsys_state *css, static ssize_t hugetlb_cgroup_write(struct kernfs_open_file *of,
struct cftype *cft, char *buffer) char *buf, size_t nbytes, loff_t off)
{ {
int idx, name, ret; int idx, name, ret;
unsigned long long val; unsigned long long val;
struct hugetlb_cgroup *h_cg = hugetlb_cgroup_from_css(css); struct hugetlb_cgroup *h_cg = hugetlb_cgroup_from_css(of_css(of));
idx = MEMFILE_IDX(cft->private); buf = strstrip(buf);
name = MEMFILE_ATTR(cft->private); idx = MEMFILE_IDX(of_cft(of)->private);
name = MEMFILE_ATTR(of_cft(of)->private);
switch (name) { switch (name) {
case RES_LIMIT: case RES_LIMIT:
...@@ -271,7 +272,7 @@ static int hugetlb_cgroup_write(struct cgroup_subsys_state *css, ...@@ -271,7 +272,7 @@ static int hugetlb_cgroup_write(struct cgroup_subsys_state *css,
break; break;
} }
/* This function does all necessary parse...reuse it */ /* This function does all necessary parse...reuse it */
ret = res_counter_memparse_write_strategy(buffer, &val); ret = res_counter_memparse_write_strategy(buf, &val);
if (ret) if (ret)
break; break;
ret = res_counter_set_limit(&h_cg->hugepage[idx], val); ret = res_counter_set_limit(&h_cg->hugepage[idx], val);
...@@ -280,17 +281,17 @@ static int hugetlb_cgroup_write(struct cgroup_subsys_state *css, ...@@ -280,17 +281,17 @@ static int hugetlb_cgroup_write(struct cgroup_subsys_state *css,
ret = -EINVAL; ret = -EINVAL;
break; break;
} }
return ret; return ret ?: nbytes;
} }
static int hugetlb_cgroup_reset(struct cgroup_subsys_state *css, static ssize_t hugetlb_cgroup_reset(struct kernfs_open_file *of,
unsigned int event) char *buf, size_t nbytes, loff_t off)
{ {
int idx, name, ret = 0; int idx, name, ret = 0;
struct hugetlb_cgroup *h_cg = hugetlb_cgroup_from_css(css); struct hugetlb_cgroup *h_cg = hugetlb_cgroup_from_css(of_css(of));
idx = MEMFILE_IDX(event); idx = MEMFILE_IDX(of_cft(of)->private);
name = MEMFILE_ATTR(event); name = MEMFILE_ATTR(of_cft(of)->private);
switch (name) { switch (name) {
case RES_MAX_USAGE: case RES_MAX_USAGE:
...@@ -303,7 +304,7 @@ static int hugetlb_cgroup_reset(struct cgroup_subsys_state *css, ...@@ -303,7 +304,7 @@ static int hugetlb_cgroup_reset(struct cgroup_subsys_state *css,
ret = -EINVAL; ret = -EINVAL;
break; break;
} }
return ret; return ret ?: nbytes;
} }
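Both hugetlb handlers above recover which file was written from of_cft(of)->private. The MEMFILE_* macros referenced here pack a resource index and an attribute into that integer; a sketch of the usual encoding in these controllers — an assumption, since the macro bodies are outside this hunk:

        /* pack a resource index and an attribute into cft->private */
        #define MEMFILE_PRIVATE(idx, attr)      (((idx) << 16) | (attr))
        #define MEMFILE_IDX(val)                (((val) >> 16) & 0xffff)
        #define MEMFILE_ATTR(val)               ((val) & 0xffff)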
static char *mem_fmt(char *buf, int size, unsigned long hsize) static char *mem_fmt(char *buf, int size, unsigned long hsize)
...@@ -331,7 +332,7 @@ static void __init __hugetlb_cgroup_file_init(int idx) ...@@ -331,7 +332,7 @@ static void __init __hugetlb_cgroup_file_init(int idx)
snprintf(cft->name, MAX_CFTYPE_NAME, "%s.limit_in_bytes", buf); snprintf(cft->name, MAX_CFTYPE_NAME, "%s.limit_in_bytes", buf);
cft->private = MEMFILE_PRIVATE(idx, RES_LIMIT); cft->private = MEMFILE_PRIVATE(idx, RES_LIMIT);
cft->read_u64 = hugetlb_cgroup_read_u64; cft->read_u64 = hugetlb_cgroup_read_u64;
cft->write_string = hugetlb_cgroup_write; cft->write = hugetlb_cgroup_write;
/* Add the usage file */ /* Add the usage file */
cft = &h->cgroup_files[1]; cft = &h->cgroup_files[1];
...@@ -343,14 +344,14 @@ static void __init __hugetlb_cgroup_file_init(int idx) ...@@ -343,14 +344,14 @@ static void __init __hugetlb_cgroup_file_init(int idx)
cft = &h->cgroup_files[2]; cft = &h->cgroup_files[2];
snprintf(cft->name, MAX_CFTYPE_NAME, "%s.max_usage_in_bytes", buf); snprintf(cft->name, MAX_CFTYPE_NAME, "%s.max_usage_in_bytes", buf);
cft->private = MEMFILE_PRIVATE(idx, RES_MAX_USAGE); cft->private = MEMFILE_PRIVATE(idx, RES_MAX_USAGE);
cft->trigger = hugetlb_cgroup_reset; cft->write = hugetlb_cgroup_reset;
cft->read_u64 = hugetlb_cgroup_read_u64; cft->read_u64 = hugetlb_cgroup_read_u64;
/* Add the failcntfile */ /* Add the failcntfile */
cft = &h->cgroup_files[3]; cft = &h->cgroup_files[3];
snprintf(cft->name, MAX_CFTYPE_NAME, "%s.failcnt", buf); snprintf(cft->name, MAX_CFTYPE_NAME, "%s.failcnt", buf);
cft->private = MEMFILE_PRIVATE(idx, RES_FAILCNT); cft->private = MEMFILE_PRIVATE(idx, RES_FAILCNT);
cft->trigger = hugetlb_cgroup_reset; cft->write = hugetlb_cgroup_reset;
cft->read_u64 = hugetlb_cgroup_read_u64; cft->read_u64 = hugetlb_cgroup_read_u64;
/* NULL terminate the last cft */ /* NULL terminate the last cft */
......
...@@ -526,18 +526,14 @@ static inline bool mem_cgroup_is_root(struct mem_cgroup *memcg) ...@@ -526,18 +526,14 @@ static inline bool mem_cgroup_is_root(struct mem_cgroup *memcg)
static inline unsigned short mem_cgroup_id(struct mem_cgroup *memcg) static inline unsigned short mem_cgroup_id(struct mem_cgroup *memcg)
{ {
/* return memcg->css.id;
* The ID of the root cgroup is 0, but memcg treats 0 as an
* invalid ID, so we return (cgroup_id + 1).
*/
return memcg->css.cgroup->id + 1;
} }
static inline struct mem_cgroup *mem_cgroup_from_id(unsigned short id) static inline struct mem_cgroup *mem_cgroup_from_id(unsigned short id)
{ {
struct cgroup_subsys_state *css; struct cgroup_subsys_state *css;
css = css_from_id(id - 1, &memory_cgrp_subsys); css = css_from_id(id, &memory_cgrp_subsys);
return mem_cgroup_from_css(css); return mem_cgroup_from_css(css);
} }
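With memcg IDs now being css IDs, the +1/-1 translation against cgroup IDs disappears and the swap-cgroup path becomes a plain IDR lookup. Sketch of the round trip under RCU (pin with css_tryget_online() only if the memcg must outlive the read section):

        /* store side: remember only the 16-bit ID, e.g. in swap_cgroup */
        unsigned short id = mem_cgroup_id(memcg);       /* == memcg->css.id */

        /* lookup side: resolve and, if needed, re-pin the memcg */
        rcu_read_lock();
        memcg = mem_cgroup_from_id(id);
        if (memcg && !css_tryget_online(&memcg->css))
                memcg = NULL;
        rcu_read_unlock();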
...@@ -570,7 +566,8 @@ void sock_update_memcg(struct sock *sk) ...@@ -570,7 +566,8 @@ void sock_update_memcg(struct sock *sk)
memcg = mem_cgroup_from_task(current); memcg = mem_cgroup_from_task(current);
cg_proto = sk->sk_prot->proto_cgroup(memcg); cg_proto = sk->sk_prot->proto_cgroup(memcg);
if (!mem_cgroup_is_root(memcg) && if (!mem_cgroup_is_root(memcg) &&
memcg_proto_active(cg_proto) && css_tryget(&memcg->css)) { memcg_proto_active(cg_proto) &&
css_tryget_online(&memcg->css)) {
sk->sk_cgrp = cg_proto; sk->sk_cgrp = cg_proto;
} }
rcu_read_unlock(); rcu_read_unlock();
...@@ -831,7 +828,7 @@ __mem_cgroup_largest_soft_limit_node(struct mem_cgroup_tree_per_zone *mctz) ...@@ -831,7 +828,7 @@ __mem_cgroup_largest_soft_limit_node(struct mem_cgroup_tree_per_zone *mctz)
*/ */
__mem_cgroup_remove_exceeded(mz, mctz); __mem_cgroup_remove_exceeded(mz, mctz);
if (!res_counter_soft_limit_excess(&mz->memcg->res) || if (!res_counter_soft_limit_excess(&mz->memcg->res) ||
!css_tryget(&mz->memcg->css)) !css_tryget_online(&mz->memcg->css))
goto retry; goto retry;
done: done:
return mz; return mz;
...@@ -1073,7 +1070,7 @@ static struct mem_cgroup *get_mem_cgroup_from_mm(struct mm_struct *mm) ...@@ -1073,7 +1070,7 @@ static struct mem_cgroup *get_mem_cgroup_from_mm(struct mm_struct *mm)
if (unlikely(!memcg)) if (unlikely(!memcg))
memcg = root_mem_cgroup; memcg = root_mem_cgroup;
} }
} while (!css_tryget(&memcg->css)); } while (!css_tryget_online(&memcg->css));
rcu_read_unlock(); rcu_read_unlock();
return memcg; return memcg;
} }
...@@ -1110,7 +1107,8 @@ static struct mem_cgroup *__mem_cgroup_iter_next(struct mem_cgroup *root, ...@@ -1110,7 +1107,8 @@ static struct mem_cgroup *__mem_cgroup_iter_next(struct mem_cgroup *root,
*/ */
if (next_css) { if (next_css) {
if ((next_css == &root->css) || if ((next_css == &root->css) ||
((next_css->flags & CSS_ONLINE) && css_tryget(next_css))) ((next_css->flags & CSS_ONLINE) &&
css_tryget_online(next_css)))
return mem_cgroup_from_css(next_css); return mem_cgroup_from_css(next_css);
prev_css = next_css; prev_css = next_css;
...@@ -1156,7 +1154,7 @@ mem_cgroup_iter_load(struct mem_cgroup_reclaim_iter *iter, ...@@ -1156,7 +1154,7 @@ mem_cgroup_iter_load(struct mem_cgroup_reclaim_iter *iter,
* would be returned all the time. * would be returned all the time.
*/ */
if (position && position != root && if (position && position != root &&
!css_tryget(&position->css)) !css_tryget_online(&position->css))
position = NULL; position = NULL;
} }
return position; return position;
...@@ -1533,7 +1531,7 @@ static unsigned long mem_cgroup_margin(struct mem_cgroup *memcg) ...@@ -1533,7 +1531,7 @@ static unsigned long mem_cgroup_margin(struct mem_cgroup *memcg)
int mem_cgroup_swappiness(struct mem_cgroup *memcg) int mem_cgroup_swappiness(struct mem_cgroup *memcg)
{ {
/* root ? */ /* root ? */
if (mem_cgroup_disabled() || !css_parent(&memcg->css)) if (mem_cgroup_disabled() || !memcg->css.parent)
return vm_swappiness; return vm_swappiness;
return memcg->swappiness; return memcg->swappiness;
...@@ -2769,9 +2767,9 @@ static void __mem_cgroup_cancel_local_charge(struct mem_cgroup *memcg, ...@@ -2769,9 +2767,9 @@ static void __mem_cgroup_cancel_local_charge(struct mem_cgroup *memcg,
/* /*
* A helper function to get mem_cgroup from ID. must be called under * A helper function to get mem_cgroup from ID. must be called under
* rcu_read_lock(). The caller is responsible for calling css_tryget if * rcu_read_lock(). The caller is responsible for calling
* the mem_cgroup is used for charging. (dropping refcnt from swap can be * css_tryget_online() if the mem_cgroup is used for charging. (dropping
* called against removed memcg.) * refcnt from swap can be called against removed memcg.)
*/ */
static struct mem_cgroup *mem_cgroup_lookup(unsigned short id) static struct mem_cgroup *mem_cgroup_lookup(unsigned short id)
{ {
...@@ -2794,14 +2792,14 @@ struct mem_cgroup *try_get_mem_cgroup_from_page(struct page *page) ...@@ -2794,14 +2792,14 @@ struct mem_cgroup *try_get_mem_cgroup_from_page(struct page *page)
lock_page_cgroup(pc); lock_page_cgroup(pc);
if (PageCgroupUsed(pc)) { if (PageCgroupUsed(pc)) {
memcg = pc->mem_cgroup; memcg = pc->mem_cgroup;
if (memcg && !css_tryget(&memcg->css)) if (memcg && !css_tryget_online(&memcg->css))
memcg = NULL; memcg = NULL;
} else if (PageSwapCache(page)) { } else if (PageSwapCache(page)) {
ent.val = page_private(page); ent.val = page_private(page);
id = lookup_swap_cgroup_id(ent); id = lookup_swap_cgroup_id(ent);
rcu_read_lock(); rcu_read_lock();
memcg = mem_cgroup_lookup(id); memcg = mem_cgroup_lookup(id);
if (memcg && !css_tryget(&memcg->css)) if (memcg && !css_tryget_online(&memcg->css))
memcg = NULL; memcg = NULL;
rcu_read_unlock(); rcu_read_unlock();
} }
...@@ -3365,7 +3363,7 @@ struct kmem_cache *__memcg_kmem_get_cache(struct kmem_cache *cachep, ...@@ -3365,7 +3363,7 @@ struct kmem_cache *__memcg_kmem_get_cache(struct kmem_cache *cachep,
} }
/* The corresponding put will be done in the workqueue. */ /* The corresponding put will be done in the workqueue. */
if (!css_tryget(&memcg->css)) if (!css_tryget_online(&memcg->css))
goto out; goto out;
rcu_read_unlock(); rcu_read_unlock();
...@@ -4125,8 +4123,8 @@ void mem_cgroup_uncharge_swap(swp_entry_t ent) ...@@ -4125,8 +4123,8 @@ void mem_cgroup_uncharge_swap(swp_entry_t ent)
memcg = mem_cgroup_lookup(id); memcg = mem_cgroup_lookup(id);
if (memcg) { if (memcg) {
/* /*
* We uncharge this because swap is freed. * We uncharge this because swap is freed. This memcg can
* This memcg can be obsolete one. We avoid calling css_tryget * be obsolete one. We avoid calling css_tryget_online().
*/ */
if (!mem_cgroup_is_root(memcg)) if (!mem_cgroup_is_root(memcg))
res_counter_uncharge(&memcg->memsw, PAGE_SIZE); res_counter_uncharge(&memcg->memsw, PAGE_SIZE);
...@@ -4711,18 +4709,28 @@ static void mem_cgroup_reparent_charges(struct mem_cgroup *memcg) ...@@ -4711,18 +4709,28 @@ static void mem_cgroup_reparent_charges(struct mem_cgroup *memcg)
} while (usage > 0); } while (usage > 0);
} }
/*
* Test whether @memcg has children, dead or alive. Note that this
* function doesn't care whether @memcg has use_hierarchy enabled and
* returns %true if there are child csses according to the cgroup
* hierarchy. Testing use_hierarchy is the caller's responsibility.
*/
static inline bool memcg_has_children(struct mem_cgroup *memcg) static inline bool memcg_has_children(struct mem_cgroup *memcg)
{ {
lockdep_assert_held(&memcg_create_mutex); bool ret;
/* /*
* The lock does not prevent addition or deletion to the list * The lock does not prevent addition or deletion of children, but
* of children, but it prevents a new child from being * it prevents a new child from being initialized based on this
* initialized based on this parent in css_online(), so it's * parent in css_online(), so it's enough to decide whether
* enough to decide whether hierarchically inherited * hierarchically inherited attributes can still be changed or not.
* attributes can still be changed or not.
*/ */
return memcg->use_hierarchy && lockdep_assert_held(&memcg_create_mutex);
!list_empty(&memcg->css.cgroup->children);
rcu_read_lock();
ret = css_next_child(NULL, &memcg->css);
rcu_read_unlock();
return ret;
} }
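css_next_child(NULL, &memcg->css) returns the first child css or NULL, so treating it as a boolean is the RCU-safe replacement for peeking at the old ->children list:

        bool has_children;

        /* does @css have any child, dead or alive? */
        rcu_read_lock();
        has_children = !!css_next_child(NULL, css);
        rcu_read_unlock();

Callers that only care about hierarchical inheritance combine the result with use_hierarchy themselves, as __memcg_activate_kmem() does a few hunks below.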
/* /*
...@@ -4734,11 +4742,6 @@ static inline bool memcg_has_children(struct mem_cgroup *memcg) ...@@ -4734,11 +4742,6 @@ static inline bool memcg_has_children(struct mem_cgroup *memcg)
static int mem_cgroup_force_empty(struct mem_cgroup *memcg) static int mem_cgroup_force_empty(struct mem_cgroup *memcg)
{ {
int nr_retries = MEM_CGROUP_RECLAIM_RETRIES; int nr_retries = MEM_CGROUP_RECLAIM_RETRIES;
struct cgroup *cgrp = memcg->css.cgroup;
/* returns EBUSY if there is a task or if we come here twice. */
if (cgroup_has_tasks(cgrp) || !list_empty(&cgrp->children))
return -EBUSY;
/* we call try-to-free pages for make this cgroup empty */ /* we call try-to-free pages for make this cgroup empty */
lru_add_drain_all(); lru_add_drain_all();
...@@ -4758,20 +4761,19 @@ static int mem_cgroup_force_empty(struct mem_cgroup *memcg) ...@@ -4758,20 +4761,19 @@ static int mem_cgroup_force_empty(struct mem_cgroup *memcg)
} }
} }
lru_add_drain();
mem_cgroup_reparent_charges(memcg);
return 0; return 0;
} }
static int mem_cgroup_force_empty_write(struct cgroup_subsys_state *css, static ssize_t mem_cgroup_force_empty_write(struct kernfs_open_file *of,
unsigned int event) char *buf, size_t nbytes,
loff_t off)
{ {
struct mem_cgroup *memcg = mem_cgroup_from_css(css); struct mem_cgroup *memcg = mem_cgroup_from_css(of_css(of));
if (mem_cgroup_is_root(memcg)) if (mem_cgroup_is_root(memcg))
return -EINVAL; return -EINVAL;
return mem_cgroup_force_empty(memcg); return mem_cgroup_force_empty(memcg) ?: nbytes;
} }
static u64 mem_cgroup_hierarchy_read(struct cgroup_subsys_state *css, static u64 mem_cgroup_hierarchy_read(struct cgroup_subsys_state *css,
...@@ -4785,7 +4787,7 @@ static int mem_cgroup_hierarchy_write(struct cgroup_subsys_state *css, ...@@ -4785,7 +4787,7 @@ static int mem_cgroup_hierarchy_write(struct cgroup_subsys_state *css,
{ {
int retval = 0; int retval = 0;
struct mem_cgroup *memcg = mem_cgroup_from_css(css); struct mem_cgroup *memcg = mem_cgroup_from_css(css);
struct mem_cgroup *parent_memcg = mem_cgroup_from_css(css_parent(&memcg->css)); struct mem_cgroup *parent_memcg = mem_cgroup_from_css(memcg->css.parent);
mutex_lock(&memcg_create_mutex); mutex_lock(&memcg_create_mutex);
...@@ -4802,7 +4804,7 @@ static int mem_cgroup_hierarchy_write(struct cgroup_subsys_state *css, ...@@ -4802,7 +4804,7 @@ static int mem_cgroup_hierarchy_write(struct cgroup_subsys_state *css,
*/ */
if ((!parent_memcg || !parent_memcg->use_hierarchy) && if ((!parent_memcg || !parent_memcg->use_hierarchy) &&
(val == 1 || val == 0)) { (val == 1 || val == 0)) {
if (list_empty(&memcg->css.cgroup->children)) if (!memcg_has_children(memcg))
memcg->use_hierarchy = val; memcg->use_hierarchy = val;
else else
retval = -EBUSY; retval = -EBUSY;
...@@ -4919,7 +4921,8 @@ static int __memcg_activate_kmem(struct mem_cgroup *memcg, ...@@ -4919,7 +4921,8 @@ static int __memcg_activate_kmem(struct mem_cgroup *memcg,
* of course permitted. * of course permitted.
*/ */
mutex_lock(&memcg_create_mutex); mutex_lock(&memcg_create_mutex);
if (cgroup_has_tasks(memcg->css.cgroup) || memcg_has_children(memcg)) if (cgroup_has_tasks(memcg->css.cgroup) ||
(memcg->use_hierarchy && memcg_has_children(memcg)))
err = -EBUSY; err = -EBUSY;
mutex_unlock(&memcg_create_mutex); mutex_unlock(&memcg_create_mutex);
if (err) if (err)
...@@ -5021,17 +5024,18 @@ static int memcg_update_kmem_limit(struct mem_cgroup *memcg, ...@@ -5021,17 +5024,18 @@ static int memcg_update_kmem_limit(struct mem_cgroup *memcg,
* The user of this function is... * The user of this function is...
* RES_LIMIT. * RES_LIMIT.
*/ */
static int mem_cgroup_write(struct cgroup_subsys_state *css, struct cftype *cft, static ssize_t mem_cgroup_write(struct kernfs_open_file *of,
char *buffer) char *buf, size_t nbytes, loff_t off)
{ {
struct mem_cgroup *memcg = mem_cgroup_from_css(css); struct mem_cgroup *memcg = mem_cgroup_from_css(of_css(of));
enum res_type type; enum res_type type;
int name; int name;
unsigned long long val; unsigned long long val;
int ret; int ret;
type = MEMFILE_TYPE(cft->private); buf = strstrip(buf);
name = MEMFILE_ATTR(cft->private); type = MEMFILE_TYPE(of_cft(of)->private);
name = MEMFILE_ATTR(of_cft(of)->private);
switch (name) { switch (name) {
case RES_LIMIT: case RES_LIMIT:
...@@ -5040,7 +5044,7 @@ static int mem_cgroup_write(struct cgroup_subsys_state *css, struct cftype *cft, ...@@ -5040,7 +5044,7 @@ static int mem_cgroup_write(struct cgroup_subsys_state *css, struct cftype *cft,
break; break;
} }
/* This function does all necessary parse...reuse it */ /* This function does all necessary parse...reuse it */
ret = res_counter_memparse_write_strategy(buffer, &val); ret = res_counter_memparse_write_strategy(buf, &val);
if (ret) if (ret)
break; break;
if (type == _MEM) if (type == _MEM)
...@@ -5053,7 +5057,7 @@ static int mem_cgroup_write(struct cgroup_subsys_state *css, struct cftype *cft, ...@@ -5053,7 +5057,7 @@ static int mem_cgroup_write(struct cgroup_subsys_state *css, struct cftype *cft,
return -EINVAL; return -EINVAL;
break; break;
case RES_SOFT_LIMIT: case RES_SOFT_LIMIT:
ret = res_counter_memparse_write_strategy(buffer, &val); ret = res_counter_memparse_write_strategy(buf, &val);
if (ret) if (ret)
break; break;
/* /*
...@@ -5070,7 +5074,7 @@ static int mem_cgroup_write(struct cgroup_subsys_state *css, struct cftype *cft, ...@@ -5070,7 +5074,7 @@ static int mem_cgroup_write(struct cgroup_subsys_state *css, struct cftype *cft,
ret = -EINVAL; /* should be BUG() ? */ ret = -EINVAL; /* should be BUG() ? */
break; break;
} }
return ret; return ret ?: nbytes;
} }
static void memcg_get_hierarchical_limit(struct mem_cgroup *memcg, static void memcg_get_hierarchical_limit(struct mem_cgroup *memcg,
...@@ -5083,8 +5087,8 @@ static void memcg_get_hierarchical_limit(struct mem_cgroup *memcg, ...@@ -5083,8 +5087,8 @@ static void memcg_get_hierarchical_limit(struct mem_cgroup *memcg,
if (!memcg->use_hierarchy) if (!memcg->use_hierarchy)
goto out; goto out;
while (css_parent(&memcg->css)) { while (memcg->css.parent) {
memcg = mem_cgroup_from_css(css_parent(&memcg->css)); memcg = mem_cgroup_from_css(memcg->css.parent);
if (!memcg->use_hierarchy) if (!memcg->use_hierarchy)
break; break;
tmp = res_counter_read_u64(&memcg->res, RES_LIMIT); tmp = res_counter_read_u64(&memcg->res, RES_LIMIT);
...@@ -5097,14 +5101,15 @@ static void memcg_get_hierarchical_limit(struct mem_cgroup *memcg, ...@@ -5097,14 +5101,15 @@ static void memcg_get_hierarchical_limit(struct mem_cgroup *memcg,
*memsw_limit = min_memsw_limit; *memsw_limit = min_memsw_limit;
} }
static int mem_cgroup_reset(struct cgroup_subsys_state *css, unsigned int event) static ssize_t mem_cgroup_reset(struct kernfs_open_file *of, char *buf,
size_t nbytes, loff_t off)
{ {
struct mem_cgroup *memcg = mem_cgroup_from_css(css); struct mem_cgroup *memcg = mem_cgroup_from_css(of_css(of));
int name; int name;
enum res_type type; enum res_type type;
type = MEMFILE_TYPE(event); type = MEMFILE_TYPE(of_cft(of)->private);
name = MEMFILE_ATTR(event); name = MEMFILE_ATTR(of_cft(of)->private);
switch (name) { switch (name) {
case RES_MAX_USAGE: case RES_MAX_USAGE:
...@@ -5129,7 +5134,7 @@ static int mem_cgroup_reset(struct cgroup_subsys_state *css, unsigned int event) ...@@ -5129,7 +5134,7 @@ static int mem_cgroup_reset(struct cgroup_subsys_state *css, unsigned int event)
break; break;
} }
return 0; return nbytes;
} }
static u64 mem_cgroup_move_charge_read(struct cgroup_subsys_state *css, static u64 mem_cgroup_move_charge_read(struct cgroup_subsys_state *css,
...@@ -5322,7 +5327,7 @@ static int mem_cgroup_swappiness_write(struct cgroup_subsys_state *css, ...@@ -5322,7 +5327,7 @@ static int mem_cgroup_swappiness_write(struct cgroup_subsys_state *css,
if (val > 100) if (val > 100)
return -EINVAL; return -EINVAL;
if (css_parent(css)) if (css->parent)
memcg->swappiness = val; memcg->swappiness = val;
else else
vm_swappiness = val; vm_swappiness = val;
...@@ -5659,7 +5664,7 @@ static int mem_cgroup_oom_control_write(struct cgroup_subsys_state *css, ...@@ -5659,7 +5664,7 @@ static int mem_cgroup_oom_control_write(struct cgroup_subsys_state *css,
struct mem_cgroup *memcg = mem_cgroup_from_css(css); struct mem_cgroup *memcg = mem_cgroup_from_css(css);
/* cannot set to root cgroup and only 0 and 1 are allowed */ /* cannot set to root cgroup and only 0 and 1 are allowed */
if (!css_parent(css) || !((val == 0) || (val == 1))) if (!css->parent || !((val == 0) || (val == 1)))
return -EINVAL; return -EINVAL;
memcg->oom_kill_disable = val; memcg->oom_kill_disable = val;
...@@ -5705,10 +5710,10 @@ static void kmem_cgroup_css_offline(struct mem_cgroup *memcg) ...@@ -5705,10 +5710,10 @@ static void kmem_cgroup_css_offline(struct mem_cgroup *memcg)
* which is then paired with css_put during uncharge resp. here. * which is then paired with css_put during uncharge resp. here.
* *
* Although this might sound strange as this path is called from * Although this might sound strange as this path is called from
* css_offline() when the reference might have dropped down to 0 * css_offline() when the reference might have dropped down to 0 and
* and shouldn't be incremented anymore (css_tryget would fail) * shouldn't be incremented anymore (css_tryget_online() would
* we do not have other options because of the kmem allocations * fail) we do not have other options because of the kmem
* lifetime. * allocations lifetime.
*/ */
css_get(&memcg->css); css_get(&memcg->css);
...@@ -5827,9 +5832,10 @@ static void memcg_event_ptable_queue_proc(struct file *file, ...@@ -5827,9 +5832,10 @@ static void memcg_event_ptable_queue_proc(struct file *file,
* Input must be in format '<event_fd> <control_fd> <args>'. * Input must be in format '<event_fd> <control_fd> <args>'.
* Interpretation of args is defined by control file implementation. * Interpretation of args is defined by control file implementation.
*/ */
static int memcg_write_event_control(struct cgroup_subsys_state *css, static ssize_t memcg_write_event_control(struct kernfs_open_file *of,
struct cftype *cft, char *buffer) char *buf, size_t nbytes, loff_t off)
{ {
struct cgroup_subsys_state *css = of_css(of);
struct mem_cgroup *memcg = mem_cgroup_from_css(css); struct mem_cgroup *memcg = mem_cgroup_from_css(css);
struct mem_cgroup_event *event; struct mem_cgroup_event *event;
struct cgroup_subsys_state *cfile_css; struct cgroup_subsys_state *cfile_css;
...@@ -5840,15 +5846,17 @@ static int memcg_write_event_control(struct cgroup_subsys_state *css, ...@@ -5840,15 +5846,17 @@ static int memcg_write_event_control(struct cgroup_subsys_state *css,
char *endp; char *endp;
int ret; int ret;
efd = simple_strtoul(buffer, &endp, 10); buf = strstrip(buf);
efd = simple_strtoul(buf, &endp, 10);
if (*endp != ' ') if (*endp != ' ')
return -EINVAL; return -EINVAL;
buffer = endp + 1; buf = endp + 1;
cfd = simple_strtoul(buffer, &endp, 10); cfd = simple_strtoul(buf, &endp, 10);
if ((*endp != ' ') && (*endp != '\0')) if ((*endp != ' ') && (*endp != '\0'))
return -EINVAL; return -EINVAL;
buffer = endp + 1; buf = endp + 1;
event = kzalloc(sizeof(*event), GFP_KERNEL); event = kzalloc(sizeof(*event), GFP_KERNEL);
if (!event) if (!event)
...@@ -5916,8 +5924,8 @@ static int memcg_write_event_control(struct cgroup_subsys_state *css, ...@@ -5916,8 +5924,8 @@ static int memcg_write_event_control(struct cgroup_subsys_state *css,
* automatically removed on cgroup destruction but the removal is * automatically removed on cgroup destruction but the removal is
* asynchronous, so take an extra ref on @css. * asynchronous, so take an extra ref on @css.
*/ */
cfile_css = css_tryget_from_dir(cfile.file->f_dentry->d_parent, cfile_css = css_tryget_online_from_dir(cfile.file->f_dentry->d_parent,
&memory_cgrp_subsys); &memory_cgrp_subsys);
ret = -EINVAL; ret = -EINVAL;
if (IS_ERR(cfile_css)) if (IS_ERR(cfile_css))
goto out_put_cfile; goto out_put_cfile;
...@@ -5926,7 +5934,7 @@ static int memcg_write_event_control(struct cgroup_subsys_state *css, ...@@ -5926,7 +5934,7 @@ static int memcg_write_event_control(struct cgroup_subsys_state *css,
goto out_put_cfile; goto out_put_cfile;
} }
ret = event->register_event(memcg, event->eventfd, buffer); ret = event->register_event(memcg, event->eventfd, buf);
if (ret) if (ret)
goto out_put_css; goto out_put_css;
...@@ -5939,7 +5947,7 @@ static int memcg_write_event_control(struct cgroup_subsys_state *css, ...@@ -5939,7 +5947,7 @@ static int memcg_write_event_control(struct cgroup_subsys_state *css,
fdput(cfile); fdput(cfile);
fdput(efile); fdput(efile);
return 0; return nbytes;
out_put_css: out_put_css:
css_put(css); css_put(css);
...@@ -5964,25 +5972,25 @@ static struct cftype mem_cgroup_files[] = { ...@@ -5964,25 +5972,25 @@ static struct cftype mem_cgroup_files[] = {
{ {
.name = "max_usage_in_bytes", .name = "max_usage_in_bytes",
.private = MEMFILE_PRIVATE(_MEM, RES_MAX_USAGE), .private = MEMFILE_PRIVATE(_MEM, RES_MAX_USAGE),
.trigger = mem_cgroup_reset, .write = mem_cgroup_reset,
.read_u64 = mem_cgroup_read_u64, .read_u64 = mem_cgroup_read_u64,
}, },
{ {
.name = "limit_in_bytes", .name = "limit_in_bytes",
.private = MEMFILE_PRIVATE(_MEM, RES_LIMIT), .private = MEMFILE_PRIVATE(_MEM, RES_LIMIT),
.write_string = mem_cgroup_write, .write = mem_cgroup_write,
.read_u64 = mem_cgroup_read_u64, .read_u64 = mem_cgroup_read_u64,
}, },
{ {
.name = "soft_limit_in_bytes", .name = "soft_limit_in_bytes",
.private = MEMFILE_PRIVATE(_MEM, RES_SOFT_LIMIT), .private = MEMFILE_PRIVATE(_MEM, RES_SOFT_LIMIT),
.write_string = mem_cgroup_write, .write = mem_cgroup_write,
.read_u64 = mem_cgroup_read_u64, .read_u64 = mem_cgroup_read_u64,
}, },
{ {
.name = "failcnt", .name = "failcnt",
.private = MEMFILE_PRIVATE(_MEM, RES_FAILCNT), .private = MEMFILE_PRIVATE(_MEM, RES_FAILCNT),
.trigger = mem_cgroup_reset, .write = mem_cgroup_reset,
.read_u64 = mem_cgroup_read_u64, .read_u64 = mem_cgroup_read_u64,
}, },
{ {
...@@ -5991,7 +5999,7 @@ static struct cftype mem_cgroup_files[] = { ...@@ -5991,7 +5999,7 @@ static struct cftype mem_cgroup_files[] = {
}, },
{ {
.name = "force_empty", .name = "force_empty",
.trigger = mem_cgroup_force_empty_write, .write = mem_cgroup_force_empty_write,
}, },
{ {
.name = "use_hierarchy", .name = "use_hierarchy",
...@@ -6001,7 +6009,7 @@ static struct cftype mem_cgroup_files[] = { ...@@ -6001,7 +6009,7 @@ static struct cftype mem_cgroup_files[] = {
}, },
{ {
.name = "cgroup.event_control", /* XXX: for compat */ .name = "cgroup.event_control", /* XXX: for compat */
.write_string = memcg_write_event_control, .write = memcg_write_event_control,
.flags = CFTYPE_NO_PREFIX, .flags = CFTYPE_NO_PREFIX,
.mode = S_IWUGO, .mode = S_IWUGO,
}, },
...@@ -6034,7 +6042,7 @@ static struct cftype mem_cgroup_files[] = { ...@@ -6034,7 +6042,7 @@ static struct cftype mem_cgroup_files[] = {
{ {
.name = "kmem.limit_in_bytes", .name = "kmem.limit_in_bytes",
.private = MEMFILE_PRIVATE(_KMEM, RES_LIMIT), .private = MEMFILE_PRIVATE(_KMEM, RES_LIMIT),
.write_string = mem_cgroup_write, .write = mem_cgroup_write,
.read_u64 = mem_cgroup_read_u64, .read_u64 = mem_cgroup_read_u64,
}, },
{ {
...@@ -6045,13 +6053,13 @@ static struct cftype mem_cgroup_files[] = { ...@@ -6045,13 +6053,13 @@ static struct cftype mem_cgroup_files[] = {
{ {
.name = "kmem.failcnt", .name = "kmem.failcnt",
.private = MEMFILE_PRIVATE(_KMEM, RES_FAILCNT), .private = MEMFILE_PRIVATE(_KMEM, RES_FAILCNT),
.trigger = mem_cgroup_reset, .write = mem_cgroup_reset,
.read_u64 = mem_cgroup_read_u64, .read_u64 = mem_cgroup_read_u64,
}, },
{ {
.name = "kmem.max_usage_in_bytes", .name = "kmem.max_usage_in_bytes",
.private = MEMFILE_PRIVATE(_KMEM, RES_MAX_USAGE), .private = MEMFILE_PRIVATE(_KMEM, RES_MAX_USAGE),
.trigger = mem_cgroup_reset, .write = mem_cgroup_reset,
.read_u64 = mem_cgroup_read_u64, .read_u64 = mem_cgroup_read_u64,
}, },
#ifdef CONFIG_SLABINFO #ifdef CONFIG_SLABINFO
...@@ -6074,19 +6082,19 @@ static struct cftype memsw_cgroup_files[] = { ...@@ -6074,19 +6082,19 @@ static struct cftype memsw_cgroup_files[] = {
{ {
.name = "memsw.max_usage_in_bytes", .name = "memsw.max_usage_in_bytes",
.private = MEMFILE_PRIVATE(_MEMSWAP, RES_MAX_USAGE), .private = MEMFILE_PRIVATE(_MEMSWAP, RES_MAX_USAGE),
.trigger = mem_cgroup_reset, .write = mem_cgroup_reset,
.read_u64 = mem_cgroup_read_u64, .read_u64 = mem_cgroup_read_u64,
}, },
{ {
.name = "memsw.limit_in_bytes", .name = "memsw.limit_in_bytes",
.private = MEMFILE_PRIVATE(_MEMSWAP, RES_LIMIT), .private = MEMFILE_PRIVATE(_MEMSWAP, RES_LIMIT),
.write_string = mem_cgroup_write, .write = mem_cgroup_write,
.read_u64 = mem_cgroup_read_u64, .read_u64 = mem_cgroup_read_u64,
}, },
{ {
.name = "memsw.failcnt", .name = "memsw.failcnt",
.private = MEMFILE_PRIVATE(_MEMSWAP, RES_FAILCNT), .private = MEMFILE_PRIVATE(_MEMSWAP, RES_FAILCNT),
.trigger = mem_cgroup_reset, .write = mem_cgroup_reset,
.read_u64 = mem_cgroup_read_u64, .read_u64 = mem_cgroup_read_u64,
}, },
{ }, /* terminate */ { }, /* terminate */
...@@ -6264,9 +6272,9 @@ static int ...@@ -6264,9 +6272,9 @@ static int
mem_cgroup_css_online(struct cgroup_subsys_state *css) mem_cgroup_css_online(struct cgroup_subsys_state *css)
{ {
struct mem_cgroup *memcg = mem_cgroup_from_css(css); struct mem_cgroup *memcg = mem_cgroup_from_css(css);
struct mem_cgroup *parent = mem_cgroup_from_css(css_parent(css)); struct mem_cgroup *parent = mem_cgroup_from_css(css->parent);
if (css->cgroup->id > MEM_CGROUP_ID_MAX) if (css->id > MEM_CGROUP_ID_MAX)
return -ENOSPC; return -ENOSPC;
if (!parent) if (!parent)
...@@ -6361,7 +6369,7 @@ static void mem_cgroup_css_free(struct cgroup_subsys_state *css) ...@@ -6361,7 +6369,7 @@ static void mem_cgroup_css_free(struct cgroup_subsys_state *css)
/* /*
* XXX: css_offline() would be where we should reparent all * XXX: css_offline() would be where we should reparent all
* memory to prepare the cgroup for destruction. However, * memory to prepare the cgroup for destruction. However,
* memcg does not do css_tryget() and res_counter charging * memcg does not do css_tryget_online() and res_counter charging
* under the same RCU lock region, which means that charging * under the same RCU lock region, which means that charging
* could race with offlining. Offlining only happens to * could race with offlining. Offlining only happens to
* cgroups with no tasks in them but charges can show up * cgroups with no tasks in them but charges can show up
...@@ -6375,9 +6383,9 @@ static void mem_cgroup_css_free(struct cgroup_subsys_state *css) ...@@ -6375,9 +6383,9 @@ static void mem_cgroup_css_free(struct cgroup_subsys_state *css)
* lookup_swap_cgroup_id() * lookup_swap_cgroup_id()
* rcu_read_lock() * rcu_read_lock()
* mem_cgroup_lookup() * mem_cgroup_lookup()
* css_tryget() * css_tryget_online()
* rcu_read_unlock() * rcu_read_unlock()
* disable css_tryget() * disable css_tryget_online()
* call_rcu() * call_rcu()
* offline_css() * offline_css()
* reparent_charges() * reparent_charges()
......
...@@ -42,7 +42,7 @@ cgrp_css_alloc(struct cgroup_subsys_state *parent_css) ...@@ -42,7 +42,7 @@ cgrp_css_alloc(struct cgroup_subsys_state *parent_css)
static int cgrp_css_online(struct cgroup_subsys_state *css) static int cgrp_css_online(struct cgroup_subsys_state *css)
{ {
struct cgroup_cls_state *cs = css_cls_state(css); struct cgroup_cls_state *cs = css_cls_state(css);
struct cgroup_cls_state *parent = css_cls_state(css_parent(css)); struct cgroup_cls_state *parent = css_cls_state(css->parent);
if (parent) if (parent)
cs->classid = parent->classid; cs->classid = parent->classid;
......
@@ -140,7 +140,7 @@ cgrp_css_alloc(struct cgroup_subsys_state *parent_css)
 static int cgrp_css_online(struct cgroup_subsys_state *css)
 {
-	struct cgroup_subsys_state *parent_css = css_parent(css);
+	struct cgroup_subsys_state *parent_css = css->parent;
 	struct net_device *dev;
 	int ret = 0;
@@ -185,15 +185,15 @@ static int read_priomap(struct seq_file *sf, void *v)
 	return 0;
 }
 
-static int write_priomap(struct cgroup_subsys_state *css, struct cftype *cft,
-			 char *buffer)
+static ssize_t write_priomap(struct kernfs_open_file *of,
+			     char *buf, size_t nbytes, loff_t off)
 {
 	char devname[IFNAMSIZ + 1];
 	struct net_device *dev;
 	u32 prio;
 	int ret;
 
-	if (sscanf(buffer, "%"__stringify(IFNAMSIZ)"s %u", devname, &prio) != 2)
+	if (sscanf(buf, "%"__stringify(IFNAMSIZ)"s %u", devname, &prio) != 2)
 		return -EINVAL;
 
 	dev = dev_get_by_name(&init_net, devname);
@@ -202,11 +202,11 @@ static int write_priomap(struct cgroup_subsys_state *css, struct cftype *cft,
 	rtnl_lock();
 
-	ret = netprio_set_prio(css, dev, prio);
+	ret = netprio_set_prio(of_css(of), dev, prio);
 
 	rtnl_unlock();
 	dev_put(dev);
-	return ret;
+	return ret ?: nbytes;
 }
 
 static int update_netprio(const void *v, struct file *file, unsigned n)
@@ -239,7 +239,7 @@ static struct cftype ss_files[] = {
 	{
 		.name = "ifpriomap",
 		.seq_show = read_priomap,
-		.write_string = write_priomap,
+		.write = write_priomap,
 	},
 	{ }	/* terminate */
 };
...
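The write_priomap() conversion above follows the generic pattern of this series: the old ->write_string(css, cft, buffer) cftype method becomes a kernfs-style ->write(of, buf, nbytes, off) handler that recovers its css and cftype through of_css()/of_cft() and reports the number of bytes consumed on success. A hedged sketch of that pattern, not any particular controller's code (example_write, example_apply and the "example" file name are invented; the of_*() helpers and the ret ?: nbytes idiom are taken from the diff):

	static ssize_t example_write(struct kernfs_open_file *of,
				     char *buf, size_t nbytes, loff_t off)
	{
		struct cgroup_subsys_state *css = of_css(of);	/* css the file belongs to */
		int ret;

		ret = example_apply(css, of_cft(of)->private, strstrip(buf));
		return ret ?: nbytes;	/* 0 means success: claim the whole write */
	}

	static struct cftype example_files[] = {
		{
			.name = "example",
			.write = example_write,		/* was .write_string */
		},
		{ }	/* terminate */
	};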
@@ -102,17 +102,19 @@ static int tcp_update_limit(struct mem_cgroup *memcg, u64 val)
 	return 0;
 }
 
-static int tcp_cgroup_write(struct cgroup_subsys_state *css, struct cftype *cft,
-			    char *buffer)
+static ssize_t tcp_cgroup_write(struct kernfs_open_file *of,
+				char *buf, size_t nbytes, loff_t off)
 {
-	struct mem_cgroup *memcg = mem_cgroup_from_css(css);
+	struct mem_cgroup *memcg = mem_cgroup_from_css(of_css(of));
 	unsigned long long val;
 	int ret = 0;
 
-	switch (cft->private) {
+	buf = strstrip(buf);
+
+	switch (of_cft(of)->private) {
 	case RES_LIMIT:
 		/* see memcontrol.c */
-		ret = res_counter_memparse_write_strategy(buffer, &val);
+		ret = res_counter_memparse_write_strategy(buf, &val);
 		if (ret)
 			break;
 		ret = tcp_update_limit(memcg, val);
@@ -121,7 +123,7 @@ static int tcp_cgroup_write(struct cgroup_subsys_state *css, struct cftype *cft,
 		ret = -EINVAL;
 		break;
 	}
-	return ret;
+	return ret ?: nbytes;
 }
 
 static u64 tcp_read_stat(struct mem_cgroup *memcg, int type, u64 default_val)
@@ -168,17 +170,18 @@ static u64 tcp_cgroup_read(struct cgroup_subsys_state *css, struct cftype *cft)
 	return val;
 }
 
-static int tcp_cgroup_reset(struct cgroup_subsys_state *css, unsigned int event)
+static ssize_t tcp_cgroup_reset(struct kernfs_open_file *of,
+				char *buf, size_t nbytes, loff_t off)
 {
 	struct mem_cgroup *memcg;
 	struct cg_proto *cg_proto;
 
-	memcg = mem_cgroup_from_css(css);
+	memcg = mem_cgroup_from_css(of_css(of));
 	cg_proto = tcp_prot.proto_cgroup(memcg);
 	if (!cg_proto)
-		return 0;
+		return nbytes;
 
-	switch (event) {
+	switch (of_cft(of)->private) {
 	case RES_MAX_USAGE:
 		res_counter_reset_max(&cg_proto->memory_allocated);
 		break;
@@ -187,13 +190,13 @@ static int tcp_cgroup_reset(struct cgroup_subsys_state *css, unsigned int event)
 		break;
 	}
 
-	return 0;
+	return nbytes;
 }
 
 static struct cftype tcp_files[] = {
 	{
 		.name = "kmem.tcp.limit_in_bytes",
-		.write_string = tcp_cgroup_write,
+		.write = tcp_cgroup_write,
 		.read_u64 = tcp_cgroup_read,
 		.private = RES_LIMIT,
 	},
@@ -205,13 +208,13 @@ static struct cftype tcp_files[] = {
 	{
 		.name = "kmem.tcp.failcnt",
 		.private = RES_FAILCNT,
-		.trigger = tcp_cgroup_reset,
+		.write = tcp_cgroup_reset,
 		.read_u64 = tcp_cgroup_read,
 	},
 	{
 		.name = "kmem.tcp.max_usage_in_bytes",
 		.private = RES_MAX_USAGE,
-		.trigger = tcp_cgroup_reset,
+		.write = tcp_cgroup_reset,
 		.read_u64 = tcp_cgroup_read,
 	},
 	{ }	/* terminate */
...
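tcp_cgroup_reset() above also shows how the old .trigger methods fold into .write: the event code now comes from the cftype's ->private field and the handler returns nbytes so the write succeeds even though the buffer contents are ignored. A sketch under those assumptions (example_reset and example_counter are made-up names; the RES_* constants and res_counter helpers are the existing ones):

	static struct res_counter example_counter;

	static ssize_t example_reset(struct kernfs_open_file *of,
				     char *buf, size_t nbytes, loff_t off)
	{
		switch (of_cft(of)->private) {	/* which reset file was written */
		case RES_MAX_USAGE:
			res_counter_reset_max(&example_counter);
			break;
		case RES_FAILCNT:
			res_counter_reset_failcnt(&example_counter);
			break;
		}
		return nbytes;			/* writes to reset files always succeed */
	}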
@@ -182,7 +182,7 @@ static inline bool is_devcg_online(const struct dev_cgroup *devcg)
 static int devcgroup_online(struct cgroup_subsys_state *css)
 {
 	struct dev_cgroup *dev_cgroup = css_to_devcgroup(css);
-	struct dev_cgroup *parent_dev_cgroup = css_to_devcgroup(css_parent(css));
+	struct dev_cgroup *parent_dev_cgroup = css_to_devcgroup(css->parent);
 	int ret = 0;
 
 	mutex_lock(&devcgroup_mutex);
@@ -455,7 +455,7 @@ static bool verify_new_ex(struct dev_cgroup *dev_cgroup,
 static int parent_has_perm(struct dev_cgroup *childcg,
 			   struct dev_exception_item *ex)
 {
-	struct dev_cgroup *parent = css_to_devcgroup(css_parent(&childcg->css));
+	struct dev_cgroup *parent = css_to_devcgroup(childcg->css.parent);
 
 	if (!parent)
 		return 1;
@@ -476,7 +476,7 @@ static int parent_has_perm(struct dev_cgroup *childcg,
 static bool parent_allows_removal(struct dev_cgroup *childcg,
 				  struct dev_exception_item *ex)
 {
-	struct dev_cgroup *parent = css_to_devcgroup(css_parent(&childcg->css));
+	struct dev_cgroup *parent = css_to_devcgroup(childcg->css.parent);
 
 	if (!parent)
 		return true;
@@ -587,13 +587,6 @@ static int propagate_exception(struct dev_cgroup *devcg_root,
 	return rc;
 }
 
-static inline bool has_children(struct dev_cgroup *devcgroup)
-{
-	struct cgroup *cgrp = devcgroup->css.cgroup;
-
-	return !list_empty(&cgrp->children);
-}
-
 /*
  * Modify the exception list using allow/deny rules.
  * CAP_SYS_ADMIN is needed for this.  It's at least separate from CAP_MKNOD
@@ -614,7 +607,7 @@ static int devcgroup_update_access(struct dev_cgroup *devcgroup,
 	char temp[12];		/* 11 + 1 characters needed for a u32 */
 	int count, rc = 0;
 	struct dev_exception_item ex;
-	struct dev_cgroup *parent = css_to_devcgroup(css_parent(&devcgroup->css));
+	struct dev_cgroup *parent = css_to_devcgroup(devcgroup->css.parent);
 
 	if (!capable(CAP_SYS_ADMIN))
 		return -EPERM;
@@ -626,7 +619,7 @@ static int devcgroup_update_access(struct dev_cgroup *devcgroup,
 	case 'a':
 		switch (filetype) {
 		case DEVCG_ALLOW:
-			if (has_children(devcgroup))
+			if (css_has_online_children(&devcgroup->css))
 				return -EINVAL;
 
 			if (!may_allow_all(parent))
@@ -642,7 +635,7 @@ static int devcgroup_update_access(struct dev_cgroup *devcgroup,
 				return rc;
 			break;
 		case DEVCG_DENY:
-			if (has_children(devcgroup))
+			if (css_has_online_children(&devcgroup->css))
 				return -EINVAL;
 
 			dev_exception_clean(devcgroup);
@@ -767,27 +760,27 @@ static int devcgroup_update_access(struct dev_cgroup *devcgroup,
 	return rc;
 }
 
-static int devcgroup_access_write(struct cgroup_subsys_state *css,
-				  struct cftype *cft, char *buffer)
+static ssize_t devcgroup_access_write(struct kernfs_open_file *of,
+				      char *buf, size_t nbytes, loff_t off)
 {
 	int retval;
 
 	mutex_lock(&devcgroup_mutex);
-	retval = devcgroup_update_access(css_to_devcgroup(css),
-					 cft->private, buffer);
+	retval = devcgroup_update_access(css_to_devcgroup(of_css(of)),
+					 of_cft(of)->private, strstrip(buf));
 	mutex_unlock(&devcgroup_mutex);
-	return retval;
+	return retval ?: nbytes;
 }
 
 static struct cftype dev_cgroup_files[] = {
 	{
 		.name = "allow",
-		.write_string = devcgroup_access_write,
+		.write = devcgroup_access_write,
 		.private = DEVCG_ALLOW,
 	},
 	{
 		.name = "deny",
-		.write_string = devcgroup_access_write,
+		.write = devcgroup_access_write,
 		.private = DEVCG_DENY,
 	},
 	{
...
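With cgroup->children folded into cgroup_subsys_state, the device cgroup no longer walks the child list itself; it asks the cgroup core whether any child css is still online. A small sketch of the check used twice in devcgroup_update_access() above (devcg_behavior_change_allowed is an invented name; css_has_online_children() is the real helper):

	static bool devcg_behavior_change_allowed(struct dev_cgroup *devcgroup)
	{
		/* allow/deny-all switches are rejected while online children exist */
		return !css_has_online_children(&devcgroup->css);
	}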