1. 20 Oct, 2021 3 commits
  2. 13 Oct, 2021 1 commit
  3. 11 Oct, 2021 10 commits
  4. 08 Oct, 2021 3 commits
  5. 07 Oct, 2021 1 commit
  6. 05 Oct, 2021 2 commits
    • tracing: Create a sparse bitmask for pid filtering · 8d6e9098
      Steven Rostedt (VMware) authored
      When the trace_pid_list was created, the default pid_max was 32768.
      A bitmask holding one bit for each of those 32768 pids took up 4096
      bytes (one page). Having a one page bitmask was not much of a problem,
      and that was used for mapping pids. But today, systems are bigger and
      can run more tasks, and the default pid_max is now usually set to
      4194304, which means handling that many pids requires 524288 bytes.
      Worse yet, pid_max can be set to 2^30 (1073741824, or 1G), which would
      take 134217728 bytes (128M) of memory to store this array.
      
      Since the pid_list array is very sparsely populated, it is a huge waste of
      memory to store all possible bits for each pid when most will not be set.
      
      Instead, use a page table scheme to store the array, and allow this to
      handle up to 30 bit pids.
      
      The pid_mask will start out with 256 entries for the 8 MSB bits of the
      pid. This will cost 1K for 32 bit architectures and 2K for 64 bit. Each
      of these entries will point to a 256 entry array that stores the next 8
      bits of the pid (another 1 or 2K). Those in turn will hold a 2K byte
      bitmask (which covers the 14 LSB bits, or 16384 pids).
      
      When the trace_pid_list is allocated, it will have the 1K or 2K upper
      table allocated, and then it will allocate a cache for the next upper
      chunks and the lower chunks (default 6 of each). Then when a bit is
      "set", these chunks will be pulled from the free list and added to the
      array. If the free list gets down to a certain level (default 2), it
      will trigger an irq_work that will refill the cache.
      
      On clearing a bit, if the clear causes the bitmask to be zero, that chunk
      will then be placed back into the free cache for later use, keeping the
      need to allocate more down to a minimum.
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
      8d6e9098
    • tracing: Place trace_pid_list logic into abstract functions · 6954e415
      Steven Rostedt (VMware) authored
      Instead of having the trace_pid_list logic open coded, wrap it in
      abstract functions. This will allow a rewrite of the logic that
      implements the trace_pid_list without affecting its users.
      
      Note, this causes a change in behavior. Every time a pid is written into
      the set_*_pid file, a new list is created and RCU is used to update it.
      If pid_max is lowered while pids higher than pid_max are in the list,
      those pids will now be removed when the list is updated. The old
      behavior kept that from happening.
      
      The rewrite of the pid_list logic will no longer depend on pid_max,
      and will restore the old behavior.
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
      6954e415
  7. 01 Oct, 2021 20 commits