1. 06 Feb, 2019 32 commits
2. 04 Feb, 2019 8 commits
    • perf/ring_buffer: Convert ring_buffer.aux_refcount to refcount_t · ca3bb3d0
      Elena Reshetova authored
      atomic_t variables are currently used to implement reference
      counters with the following properties:
      
       - counter is initialized to 1 using atomic_set()
       - a resource is freed upon counter reaching zero
       - once counter reaches zero, its further
         increments aren't allowed
       - counter schema uses basic atomic operations
         (set, inc, inc_not_zero, dec_and_test, etc.)
      
      Such atomic variables should be converted to a newly provided
      refcount_t type and API that prevents accidental counter overflows
      and underflows. This is important since overflows and underflows
      can lead to use-after-free situations and be exploitable.
      
      The variable ring_buffer.aux_refcount is used as a pure reference counter.
      Convert it to refcount_t and fix up the operations.
      
      ** Important note for maintainers:
      
      Some functions from the refcount_t API defined in lib/refcount.c
      have different memory ordering guarantees than their atomic
      counterparts. Please check Documentation/core-api/refcount-vs-atomic.rst
      for more information.
      
      Normally the differences should not matter since refcount_t provides
      enough guarantees to satisfy the refcounting use cases, but in
      some rare cases it might matter.
      Please double check that you do not rely on any undocumented
      memory ordering guarantees for this variable's usage.
      
      For ring_buffer.aux_refcount it might make a difference
      in the following places:

       - perf_aux_output_begin(): the increment in refcount_inc_not_zero()
         only guarantees a control dependency on success, vs. the fully
         ordered atomic counterpart
       - rb_free_aux(): the decrement in refcount_dec_and_test() only
         provides RELEASE ordering and ACQUIRE ordering + a control
         dependency on success, vs. the fully ordered atomic counterpart
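
      A minimal sketch of the conversion pattern described above, in kernel-style
      C. The struct and helper names below are made up for illustration; only the
      aux_refcount field and the refcount_t/atomic_t calls mirror the text:

         /* Illustrative sketch only, not the upstream diff. */
         #include <linux/refcount.h>
         #include <linux/slab.h>

         struct rb_sketch {
                 refcount_t aux_refcount;        /* was: atomic_t aux_refcount; */
         };

         static void rb_sketch_init(struct rb_sketch *rb)
         {
                 refcount_set(&rb->aux_refcount, 1);  /* was: atomic_set(..., 1) */
         }

         /* Take a reference only while the AUX area is still live. */
         static bool rb_sketch_get_aux(struct rb_sketch *rb)
         {
                 return refcount_inc_not_zero(&rb->aux_refcount); /* was: atomic_inc_not_zero() */
         }

         /* Drop a reference; free on the final put. */
         static void rb_sketch_put_aux(struct rb_sketch *rb)
         {
                 if (refcount_dec_and_test(&rb->aux_refcount))    /* was: atomic_dec_and_test() */
                         kfree(rb);
         }

      Unlike the atomic_t ops, the refcount_t ops saturate and WARN on overflow or
      underflow instead of wrapping, and refcount_inc_not_zero() refuses to
      resurrect an object whose count has already dropped to zero, which is what
      closes the use-after-free window mentioned above.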
      Suggested-by: Kees Cook <keescook@chromium.org>
      Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Reviewed-by: David Windsor <dwindsor@gmail.com>
      Reviewed-by: Hans Liljestrand <ishkamiel@gmail.com>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: acme@kernel.org
      Cc: namhyung@kernel.org
      Link: https://lkml.kernel.org/r/1548678448-24458-4-git-send-email-elena.reshetova@intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • perf/ring_buffer: Convert ring_buffer.refcount to refcount_t · fecb8ed2
      Elena Reshetova authored
      atomic_t variables are currently used to implement reference
      counters with the following properties:
      
       - counter is initialized to 1 using atomic_set()
       - a resource is freed upon counter reaching zero
       - once counter reaches zero, its further
         increments aren't allowed
       - counter schema uses basic atomic operations
         (set, inc, inc_not_zero, dec_and_test, etc.)
      
      Such atomic variables should be converted to a newly provided
      refcount_t type and API that prevents accidental counter overflows
      and underflows. This is important since overflows and underflows
      can lead to use-after-free situations and be exploitable.
      
      The variable ring_buffer.refcount is used as a pure reference counter.
      Convert it to refcount_t and fix up the operations.
      
      ** Important note for maintainers:
      
      Some functions from the refcount_t API defined in lib/refcount.c
      have different memory ordering guarantees than their atomic
      counterparts. Please check Documentation/core-api/refcount-vs-atomic.rst
      for more information.
      
      Normally the differences should not matter since refcount_t provides
      enough guarantees to satisfy the refcounting use cases, but in
      some rare cases it might matter.
      Please double check that you do not rely on any undocumented
      memory ordering guarantees for this variable's usage.
      
      For ring_buffer.refcount it might make a difference
      in the following places:

       - ring_buffer_get(): the increment in refcount_inc_not_zero()
         only guarantees a control dependency on success, vs. the fully
         ordered atomic counterpart
       - ring_buffer_put(): the decrement in refcount_dec_and_test() only
         provides RELEASE ordering and ACQUIRE ordering + a control
         dependency on success, vs. the fully ordered atomic counterpart
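
      A simplified sketch of why the lookup path needs refcount_inc_not_zero()
      rather than a plain increment: the buffer may be dropping its last reference
      concurrently, so taking a new reference must fail once the count has reached
      zero. The RCU-protected lookup shown below is a common shape for such get()
      helpers; the struct layout and names are illustrative, not the real
      ring_buffer/perf_event definitions:

         #include <linux/rcupdate.h>
         #include <linux/refcount.h>

         struct ringbuf_sketch {
                 refcount_t refcount;
         };

         struct event_sketch {
                 struct ringbuf_sketch __rcu *rb;
         };

         static struct ringbuf_sketch *sketch_ring_buffer_get(struct event_sketch *event)
         {
                 struct ringbuf_sketch *rb;

                 rcu_read_lock();
                 rb = rcu_dereference(event->rb);
                 if (rb && !refcount_inc_not_zero(&rb->refcount))
                         rb = NULL;      /* lost the race with the final put */
                 rcu_read_unlock();

                 return rb;
         }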
      Suggested-by: Kees Cook <keescook@chromium.org>
      Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Reviewed-by: David Windsor <dwindsor@gmail.com>
      Reviewed-by: Hans Liljestrand <ishkamiel@gmail.com>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: acme@kernel.org
      Cc: namhyung@kernel.org
      Link: https://lkml.kernel.org/r/1548678448-24458-3-git-send-email-elena.reshetova@intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • perf: Convert perf_event_context.refcount to refcount_t · 8c94abbb
      Elena Reshetova authored
      atomic_t variables are currently used to implement reference
      counters with the following properties:
      
       - counter is initialized to 1 using atomic_set()
       - a resource is freed upon counter reaching zero
       - once counter reaches zero, its further
         increments aren't allowed
       - counter schema uses basic atomic operations
         (set, inc, inc_not_zero, dec_and_test, etc.)
      
      Such atomic variables should be converted to a newly provided
      refcount_t type and API that prevents accidental counter overflows
      and underflows. This is important since overflows and underflows
      can lead to use-after-free situations and be exploitable.
      
      The variable perf_event_context.refcount is used as a pure reference counter.
      Convert it to refcount_t and fix up the operations.
      
      ** Important note for maintainers:
      
      Some functions from the refcount_t API defined in lib/refcount.c
      have different memory ordering guarantees than their atomic
      counterparts. Please check Documentation/core-api/refcount-vs-atomic.rst
      for more information.
      
      Normally the differences should not matter since refcount_t provides
      enough guarantees to satisfy the refcounting use cases, but in
      some rare cases it might matter.
      Please double check that you do not rely on any undocumented
      memory ordering guarantees for this variable's usage.
      
      For perf_event_context.refcount it might make a difference
      in the following places:

       - get_ctx(), perf_event_ctx_lock_nested(), perf_lock_task_context()
         and __perf_event_ctx_lock_double(): the increment in
         refcount_inc_not_zero() only guarantees a control dependency
         on success, vs. the fully ordered atomic counterpart
       - put_ctx(): the decrement in refcount_dec_and_test() only provides
         RELEASE ordering and ACQUIRE ordering + a control dependency on
         success, vs. the fully ordered atomic counterpart
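
      A common idiom behind refcount_dec_and_test() users such as put_ctx() is to
      defer the actual free past an RCU grace period, which is what keeps
      rcu_dereference() + refcount_inc_not_zero() lookups on the reader side safe.
      A minimal sketch of that pairing (the names are illustrative and this is not
      the upstream put_ctx() body, which does additional bookkeeping):

         #include <linux/kernel.h>
         #include <linux/rcupdate.h>
         #include <linux/refcount.h>
         #include <linux/slab.h>

         struct ctx_sketch {
                 refcount_t refcount;
                 struct rcu_head rcu_head;
         };

         static void ctx_sketch_free(struct rcu_head *head)
         {
                 kfree(container_of(head, struct ctx_sketch, rcu_head));
         }

         static void ctx_sketch_put(struct ctx_sketch *ctx)
         {
                 /* Free only after the last reference, and only after a grace period. */
                 if (refcount_dec_and_test(&ctx->refcount))
                         call_rcu(&ctx->rcu_head, ctx_sketch_free);
         }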
      Suggested-by: Kees Cook <keescook@chromium.org>
      Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Reviewed-by: David Windsor <dwindsor@gmail.com>
      Reviewed-by: Hans Liljestrand <ishkamiel@gmail.com>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: acme@kernel.org
      Cc: namhyung@kernel.org
      Link: https://lkml.kernel.org/r/1548678448-24458-2-git-send-email-elena.reshetova@intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • perf/uprobes: Convert to SPDX license identifier · 720e596a
      Thomas Gleixner authored
      Replace the license boilerplate with an SPDX license identifier.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Acked-by: Paul McKenney <paulmck@linux.ibm.com>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Kate Stewart <kstewart@linuxfoundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Link: https://lkml.kernel.org/r/20190116111308.211981422@linutronix.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • perf/hw_breakpoints: Convert to SPDX license identifier · 469eb32e
      Thomas Gleixner authored
      Replace the license boilerplate with an SPDX license identifier.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Acked-by: Paul McKenney <paulmck@linux.ibm.com>
      Cc: Alan Stern <stern@rowland.harvard.edu>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Kate Stewart <kstewart@linuxfoundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Link: https://lkml.kernel.org/r/20190116111308.105855650@linutronix.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • perf/core: Convert to SPDX license identifiers · 8e86e015
      Thomas Gleixner authored
      Use proper SPDX license identifiers instead of the bogus reference to
      kernel-base/COPYING.
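
      For illustration, the shape of such a conversion in a .c file: the
      multi-line license boilerplate (or, here, the bogus reference to
      kernel-base/COPYING) at the top of the file is dropped in favor of a
      single machine-readable tag on the first line. The exact identifier
      depends on the file's license; GPL-2.0 is shown as an example:

         // SPDX-License-Identifier: GPL-2.0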
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Kate Stewart <kstewart@linuxfoundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Link: https://lkml.kernel.org/r/20190116111308.012666937@linutronix.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • Ingo Molnar authored · 98cb6210
    • perf/core: Don't WARN() for impossible ring-buffer sizes · 9dff0aa9
      Mark Rutland authored
      The perf tool uses /proc/sys/kernel/perf_event_mlock_kb to determine how
      large its ringbuffer mmap should be. This can be configured to arbitrary
      values, which can be larger than the maximum possible allocation from
      kmalloc.
      
      When this is configured to a suitably large value (e.g. thanks to the
      perf fuzzer), attempting to use perf record triggers a WARN_ON_ONCE() in
      __alloc_pages_nodemask():
      
         WARNING: CPU: 2 PID: 5666 at mm/page_alloc.c:4511 __alloc_pages_nodemask+0x3f8/0xbc8
      
      Let's avoid this by checking that the requested allocation is possible
      before calling kzalloc().
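
      An illustrative sketch of that kind of guard, not the literal upstream
      patch (the struct and function names below are made up; the real change is
      in the perf ring-buffer allocation paths):

         #include <linux/log2.h>
         #include <linux/mm.h>
         #include <linux/slab.h>

         struct rb_alloc_sketch {
                 long nr_pages;
                 void *data_pages[];
         };

         static struct rb_alloc_sketch *rb_sketch_alloc(long nr_pages)
         {
                 unsigned long size;

                 size = sizeof(struct rb_alloc_sketch);
                 size += nr_pages * sizeof(void *);

                 /*
                  * Allocations of order >= MAX_ORDER can never succeed, so bail
                  * out with a plain failure instead of letting the page
                  * allocator WARN.
                  */
                 if (order_base_2(size) >= PAGE_SHIFT + MAX_ORDER)
                         return NULL;

                 return kzalloc(size, GFP_KERNEL);
         }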
      Reported-by: Julien Thierry <julien.thierry@arm.com>
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Reviewed-by: Julien Thierry <julien.thierry@arm.com>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: <stable@vger.kernel.org>
      Link: https://lkml.kernel.org/r/20190110142745.25495-1-mark.rutland@arm.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>