Commit 26c4caea authored by Vasiliy Kulikov, committed by Linus Torvalds

taskstats: don't allow duplicate entries in listener mode

Currently a single process may register exit handlers an unlimited number
of times.  This can lead to a bloated listeners chain and very slow
process terminations.

E.g. after 10 million TASKSTATS_CMD_ATTR_REGISTER_CPUMASK requests, ~300 MB
of kernel memory is stolen for the handlers chain and "time id" shows 2-7
seconds instead of the normal 0.003.  This makes it possible to exhaust all
kernel memory and to eat much CPU time by triggering numerous exits on a
single CPU.

The patch limits the number of times a single process may register
itself on a single CPU to one.

One little issue is left unfixed: as taskstats_exit() is called before
exit_files() in do_exit(), an orphaned listener entry (if it was not
explicitly deregistered) is kept until the next process's exit() triggers
the implicit deregistration in send_cpu_listeners().  So, if a process
registered itself as a listener and exits, and the next spawned process
gets the same pid, the new process would inherit the taskstats attributes.
Signed-off-by: Vasiliy Kulikov <segooon@gmail.com>
Cc: Balbir Singh <bsingharora@gmail.com>
Cc: <stable@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 08142579
@@ -285,16 +285,18 @@ static void fill_tgid_exit(struct task_struct *tsk)
 static int add_del_listener(pid_t pid, const struct cpumask *mask, int isadd)
 {
 	struct listener_list *listeners;
-	struct listener *s, *tmp;
+	struct listener *s, *tmp, *s2;
 	unsigned int cpu;
 
 	if (!cpumask_subset(mask, cpu_possible_mask))
 		return -EINVAL;
 
+	s = NULL;
 	if (isadd == REGISTER) {
 		for_each_cpu(cpu, mask) {
-			s = kmalloc_node(sizeof(struct listener), GFP_KERNEL,
-					 cpu_to_node(cpu));
+			if (!s)
+				s = kmalloc_node(sizeof(struct listener),
+						 GFP_KERNEL, cpu_to_node(cpu));
 			if (!s)
 				goto cleanup;
 			s->pid = pid;
@@ -303,9 +305,16 @@ static int add_del_listener(pid_t pid, const struct cpumask *mask, int isadd)
 
 			listeners = &per_cpu(listener_array, cpu);
 			down_write(&listeners->sem);
+			list_for_each_entry_safe(s2, tmp, &listeners->list, list) {
+				if (s2->pid == pid)
+					goto next_cpu;
+			}
 			list_add(&s->list, &listeners->list);
+			s = NULL;
+next_cpu:
 			up_write(&listeners->sem);
 		}
+		kfree(s);
 		return 0;
 	}