Commit 9f014e3a authored by Roy Ben Shlomo, committed by Arnaldo Carvalho de Melo

perf/core: Fix several typos in comments

Fix typos in a few functions' documentation comments.
Signed-off-by: Roy Ben Shlomo <royb@sentinelone.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: royb@sentinelone.com
Link: http://lore.kernel.org/lkml/20190920171254.31373-1-royb@sentinelone.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
parent 6ef81c55
@@ -2239,7 +2239,7 @@ static void __perf_event_disable(struct perf_event *event,
  *
  * If event->ctx is a cloned context, callers must make sure that
  * every task struct that event->ctx->task could possibly point to
- * remains valid. This condition is satisifed when called through
+ * remains valid. This condition is satisfied when called through
  * perf_event_for_each_child or perf_event_for_each because they
  * hold the top-level event's child_mutex, so any descendant that
  * goes to exit will block in perf_event_exit_event().
@@ -6054,7 +6054,7 @@ static void perf_sample_regs_intr(struct perf_regs *regs_intr,
  * Get remaining task size from user stack pointer.
  *
  * It'd be better to take stack vma map and limit this more
- * precisly, but there's no way to get it safely under interrupt,
+ * precisely, but there's no way to get it safely under interrupt,
  * so using TASK_SIZE as limit.
  */
 static u64 perf_ustack_task_size(struct pt_regs *regs)
@@ -6616,7 +6616,7 @@ void perf_prepare_sample(struct perf_event_header *header,
 	if (sample_type & PERF_SAMPLE_STACK_USER) {
 		/*
-		 * Either we need PERF_SAMPLE_STACK_USER bit to be allways
+		 * Either we need PERF_SAMPLE_STACK_USER bit to be always
 		 * processed as the last one or have additional check added
 		 * in case new sample type is added, because we could eat
 		 * up the rest of the sample size.