Commit 38715258 authored by Ken Chen, committed by Linus Torvalds

latencytop: fix per task accumulator

The per-task latencytop accumulator prematurely terminates due to erroneous
placement of the latency_record_count increment.  The count should be
incremented whenever a new record is allocated, not on every latencytop event.

Also fix the search iterator to only walk known records instead of blindly
searching all pre-allocated space.
Signed-off-by: Ken Chen <kenchen@google.com>
Reviewed-by: Arjan van de Ven <arjan@infradead.org>
Cc: <stable@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 8d056cb9
kernel/latencytop.c
@@ -194,14 +194,7 @@ __account_scheduler_latency(struct task_struct *tsk, int usecs, int inter)
 
 	account_global_scheduler_latency(tsk, &lat);
 
-	/*
-	 * short term hack; if we're > 32 we stop; future we recycle:
-	 */
-	tsk->latency_record_count++;
-	if (tsk->latency_record_count >= LT_SAVECOUNT)
-		goto out_unlock;
-
-	for (i = 0; i < LT_SAVECOUNT; i++) {
+	for (i = 0; i < tsk->latency_record_count; i++) {
 		struct latency_record *mylat;
 		int same = 1;
 
@@ -227,8 +220,14 @@ __account_scheduler_latency(struct task_struct *tsk, int usecs, int inter)
 		}
 	}
 
+	/*
+	 * short term hack; if we're > 32 we stop; future we recycle:
+	 */
+	if (tsk->latency_record_count >= LT_SAVECOUNT)
+		goto out_unlock;
+
 	/* Allocated a new one: */
-	i = tsk->latency_record_count;
+	i = tsk->latency_record_count++;
 	memcpy(&tsk->latency_record[i], &lat, sizeof(struct latency_record));
 
 out_unlock:
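
For illustration, below is a minimal, self-contained userspace model of the fixed
accumulation path.  This is a hypothetical sketch, not kernel code: the "key"
field stands in for the real backtrace comparison, the struct layouts are
simplified, and locking is omitted.  It shows why the increment belongs at
allocation time: events that match an existing record no longer consume slots,
so the accumulator is not cut off after LT_SAVECOUNT events.

/*
 * Standalone model of the fixed per-task accumulator (hypothetical sketch;
 * field names mirror the kernel's, types and matching are simplified).
 */
#include <stdio.h>
#include <string.h>

#define LT_SAVECOUNT 32

struct latency_record {
	unsigned long key;	/* stand-in for the backtrace match */
	unsigned int count;
	unsigned long time;
	unsigned long max;
};

struct task {
	int latency_record_count;
	struct latency_record latency_record[LT_SAVECOUNT];
};

static void account_task_latency(struct task *tsk, struct latency_record *lat)
{
	int i;

	/* Only walk records that have actually been recorded. */
	for (i = 0; i < tsk->latency_record_count; i++) {
		struct latency_record *mylat = &tsk->latency_record[i];

		if (mylat->key == lat->key) {
			mylat->count++;
			mylat->time += lat->time;
			if (lat->time > mylat->max)
				mylat->max = lat->time;
			return;
		}
	}

	/* Fixed-size array is full; stop recording new records. */
	if (tsk->latency_record_count >= LT_SAVECOUNT)
		return;

	/* Allocate a new slot; the count grows only here. */
	i = tsk->latency_record_count++;
	memcpy(&tsk->latency_record[i], lat, sizeof(*lat));
}

int main(void)
{
	struct task tsk = { 0 };
	struct latency_record lat = { .key = 42, .count = 1, .time = 100, .max = 100 };
	int i;

	/* 100 events hitting the same record use a single slot. */
	for (i = 0; i < 100; i++)
		account_task_latency(&tsk, &lat);

	printf("records used: %d, count: %u\n",
	       tsk.latency_record_count, tsk.latency_record[0].count);
	return 0;
}

With the pre-fix placement, these 100 identical events would have pushed
latency_record_count past LT_SAVECOUNT and stopped accumulation after 32
events, even though only one record slot was actually in use.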