Commit bd30fe6a authored by Linus Torvalds

Merge tag 'wq-for-6.6' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/wq

Pull workqueue updates from Tejun Heo:

 - Unbound workqueues now support more flexible affinity scopes.

   The default behavior is to soft-affine according to last level cache
   boundaries. A work item queued from a given LLC is executed by a
   worker running on the same LLC but the worker may be moved across
   cache boundaries as the scheduler sees fit. On machines with
   multiple L3 caches, which are becoming more popular along with
   chiplet designs, this improves cache locality while not harming work
   conservation too much.

   Unbound workqueues are now also a lot more flexible in terms of
   execution affinity. Differing levels of affinity scopes are
   supported and both the default and per-workqueue affinity settings
   can be modified dynamically (see the sketch after this list). This
   should help work around many of the sub-optimal behaviors observed
   recently with asymmetric ARM CPUs.

   This involved significant restructuring of workqueue code. Nothing
   has been reported yet, but there is some risk of subtle regressions,
   so keep an eye out.

 - Rescuer workers now have more identifiable comms.

 - workqueue.unbound_cpus added so that the CPUs usable by unbound
   workqueues can be constrained early during boot.

 - Now that all the in-tree users have been flushed out, a warning is
   triggered when system-wide workqueues are flushed.
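
   As a quick illustration of the dynamic settings mentioned above, here
   is a minimal Python sketch. It assumes a kernel with these updates,
   root privileges, and the sysfs paths documented in this series;
   "my_wq" is a hypothetical workqueue created with WQ_SYSFS.

       # Change the default affinity scope; applies to all unbound
       # workqueues still using the "default" scope.
       def set_default_affinity_scope(scope):
           # one of: "cpu", "smt", "cache", "numa", "system"
           with open('/sys/module/workqueue/parameters/default_affinity_scope', 'w') as f:
               f.write(scope)

       # Per-workqueue override, exposed for WQ_SYSFS workqueues
       # (path is an assumption based on the series' documentation).
       def set_wq_affinity_scope(wq_name, scope):
           with open(f'/sys/devices/virtual/workqueue/{wq_name}/affinity_scope', 'w') as f:
               f.write(scope)

       set_default_affinity_scope('numa')
       set_wq_affinity_scope('my_wq', 'cache')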

* tag 'wq-for-6.6' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/wq: (31 commits)
  workqueue: fix data race with the pwq->stats[] increment
  workqueue: Rename rescuer kworker
  workqueue: Make default affinity_scope dynamically updatable
  workqueue: Add "Affinity Scopes and Performance" section to documentation
  workqueue: Implement non-strict affinity scope for unbound workqueues
  workqueue: Add workqueue_attrs->__pod_cpumask
  workqueue: Factor out need_more_worker() check and worker wake-up
  workqueue: Factor out work to worker assignment and collision handling
  workqueue: Add multiple affinity scopes and interface to select them
  workqueue: Modularize wq_pod_type initialization
  workqueue: Add tools/workqueue/wq_dump.py which prints out workqueue configuration
  workqueue: Generalize unbound CPU pods
  workqueue: Factor out clearing of workqueue-only attrs fields
  workqueue: Factor out actual cpumask calculation to reduce subtlety in wq_update_pod()
  workqueue: Initialize unbound CPU pods later in the boot
  workqueue: Move wq_pod_init() below workqueue_init()
  workqueue: Rename NUMA related names to use pod instead
  workqueue: Rename workqueue_attrs->no_numa to ->ordered
  workqueue: Make unbound workqueues to use per-cpu pool_workqueues
  workqueue: Call wq_update_unbound_numa() on all CPUs in NUMA node on CPU hotplug
  ...
parents 7716f383 fe48ba7d
@@ -7076,6 +7076,13 @@
 			disables both lockup detectors. Default is 10
 			seconds.
 
+	workqueue.unbound_cpus=
+			[KNL,SMP] Specify to constrain one or some CPUs
+			to use in unbound workqueues.
+			Format: <cpu-list>
+			By default, all online CPUs are available for
+			unbound workqueues.
+
 	workqueue.watchdog_thresh=
 			If CONFIG_WQ_WATCHDOG is configured, workqueue can
 			warn stall conditions and dump internal state to
@@ -7097,15 +7104,6 @@
 			threshold repeatedly. They are likely good
 			candidates for using WQ_UNBOUND workqueues instead.
 
-	workqueue.disable_numa
-			By default, all work items queued to unbound
-			workqueues are affine to the NUMA nodes they're
-			issued on, which results in better behavior in
-			general. If NUMA affinity needs to be disabled for
-			whatever reason, this option can be used. Note
-			that this also can be controlled per-workqueue for
-			workqueues visible under /sys/bus/workqueue/.
-
 	workqueue.power_efficient
 			Per-cpu workqueues are generally preferred because
 			they show better performance thanks to cache
@@ -7121,6 +7119,18 @@
 			The default value of this parameter is determined by
 			the config option CONFIG_WQ_POWER_EFFICIENT_DEFAULT.
 
+	workqueue.default_affinity_scope=
+			Select the default affinity scope to use for unbound
+			workqueues. Can be one of "cpu", "smt", "cache",
+			"numa" and "system". Default is "cache". For more
+			information, see the Affinity Scopes section in
+			Documentation/core-api/workqueue.rst.
+
+			This can be changed after boot by writing to the
+			matching /sys/module/workqueue/parameters file. All
+			workqueues with the "default" affinity scope will be
+			updated accordingly.
+
 	workqueue.debug_force_rr_cpu
 			Workqueue used to implicitly guarantee that work
 			items queued without explicit CPU specified are put
...
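For example, booting with "workqueue.unbound_cpus=0-7 workqueue.default_affinity_scope=numa" would confine unbound work items to CPUs 0-7 and give each NUMA node its own worker pools; this combination is illustrative, built only from the two parameters documented above.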
@@ -125,6 +125,17 @@ struct rcu_work {
 	struct workqueue_struct *wq;
 };
 
+enum wq_affn_scope {
+	WQ_AFFN_DFL,		/* use system default */
+	WQ_AFFN_CPU,		/* one pod per CPU */
+	WQ_AFFN_SMT,		/* one pod per SMT */
+	WQ_AFFN_CACHE,		/* one pod per LLC */
+	WQ_AFFN_NUMA,		/* one pod per NUMA node */
+	WQ_AFFN_SYSTEM,		/* one pod across the whole system */
+	WQ_AFFN_NR_TYPES,
+};
+
 /**
  * struct workqueue_attrs - A struct for workqueue attributes.
  *
@@ -138,17 +149,58 @@ struct workqueue_attrs {
 	/**
 	 * @cpumask: allowed CPUs
+	 *
+	 * Work items in this workqueue are affine to these CPUs and not allowed
+	 * to execute on other CPUs. A pool serving a workqueue must have the
+	 * same @cpumask.
 	 */
 	cpumask_var_t cpumask;
 
 	/**
-	 * @no_numa: disable NUMA affinity
+	 * @__pod_cpumask: internal attribute used to create per-pod pools
+	 *
+	 * Internal use only.
+	 *
+	 * Per-pod unbound worker pools are used to improve locality. Always a
+	 * subset of ->cpumask. A workqueue can be associated with multiple
+	 * worker pools with disjoint @__pod_cpumask's. Whether the enforcement
+	 * of a pool's @__pod_cpumask is strict depends on @affn_strict.
+	 */
+	cpumask_var_t __pod_cpumask;
+
+	/**
+	 * @affn_strict: affinity scope is strict
 	 *
-	 * Unlike other fields, ``no_numa`` isn't a property of a worker_pool. It
-	 * only modifies how :c:func:`apply_workqueue_attrs` select pools and thus
-	 * doesn't participate in pool hash calculations or equality comparisons.
+	 * If clear, workqueue will make a best-effort attempt at starting the
+	 * worker inside @__pod_cpumask but the scheduler is free to migrate it
+	 * outside.
+	 *
+	 * If set, workers are only allowed to run inside @__pod_cpumask.
+	 */
+	bool affn_strict;
+
+	/*
+	 * Below fields aren't properties of a worker_pool. They only modify how
+	 * :c:func:`apply_workqueue_attrs` select pools and thus don't
+	 * participate in pool hash calculations or equality comparisons.
+	 */
+
+	/**
+	 * @affn_scope: unbound CPU affinity scope
+	 *
+	 * CPU pods are used to improve execution locality of unbound work
+	 * items. There are multiple pod types, one for each wq_affn_scope, and
+	 * every CPU in the system belongs to one pod in every pod type. CPUs
+	 * that belong to the same pod share the worker pool. For example,
+	 * selecting %WQ_AFFN_NUMA makes the workqueue use a separate worker
+	 * pool for each NUMA node.
+	 */
+	enum wq_affn_scope affn_scope;
+
+	/**
+	 * @ordered: work items must be executed one by one in queueing order
 	 */
-	bool no_numa;
+	bool ordered;
 };
 
 static inline struct delayed_work *to_delayed_work(struct work_struct *work)
@@ -343,14 +395,10 @@ enum {
 	__WQ_ORDERED_EXPLICIT	= 1 << 19, /* internal: alloc_ordered_workqueue() */
 
 	WQ_MAX_ACTIVE		= 512,	  /* I like 512, better ideas? */
-	WQ_MAX_UNBOUND_PER_CPU	= 4,	  /* 4 * #cpus for unbound wq */
+	WQ_UNBOUND_MAX_ACTIVE	= WQ_MAX_ACTIVE,
 	WQ_DFL_ACTIVE		= WQ_MAX_ACTIVE / 2,
 };
 
-/* unbound wq's aren't per-cpu, scale max_active according to #cpus */
-#define WQ_UNBOUND_MAX_ACTIVE	\
-	max_t(int, WQ_MAX_ACTIVE, num_possible_cpus() * WQ_MAX_UNBOUND_PER_CPU)
-
 /*
  * System-wide workqueues which are always present.
  *
@@ -391,7 +439,7 @@ extern struct workqueue_struct *system_freezable_power_efficient_wq;
  * alloc_workqueue - allocate a workqueue
  * @fmt: printf format for the name of the workqueue
  * @flags: WQ_* flags
- * @max_active: max in-flight work items, 0 for default
+ * @max_active: max in-flight work items per CPU, 0 for default
  * remaining args: args for @fmt
  *
  * Allocate a workqueue with the specified parameters. For detailed
@@ -569,6 +617,7 @@ static inline bool schedule_work(struct work_struct *work)
 
 /*
  * Detect attempt to flush system-wide workqueues at compile time when possible.
+ * Warn attempt to flush system-wide workqueues at runtime.
  *
  * See https://lkml.kernel.org/r/49925af7-78a8-a3dd-bce6-cfc02e1a9236@I-love.SAKURA.ne.jp
  * for reasons and steps for converting system-wide workqueues into local workqueues.
@@ -576,52 +625,13 @@ static inline bool schedule_work(struct work_struct *work)
 extern void __warn_flushing_systemwide_wq(void)
 	__compiletime_warning("Please avoid flushing system-wide workqueues.");
 
-/**
- * flush_scheduled_work - ensure that any scheduled work has run to completion.
- *
- * Forces execution of the kernel-global workqueue and blocks until its
- * completion.
- *
- * It's very easy to get into trouble if you don't take great care.
- * Either of the following situations will lead to deadlock:
- *
- *	One of the work items currently on the workqueue needs to acquire
- *	a lock held by your code or its caller.
- *
- *	Your code is running in the context of a work routine.
- *
- * They will be detected by lockdep when they occur, but the first might not
- * occur very often. It depends on what work items are on the workqueue and
- * what locks they need, which you have no control over.
- *
- * In most situations flushing the entire workqueue is overkill; you merely
- * need to know that a particular work item isn't queued and isn't running.
- * In such cases you should use cancel_delayed_work_sync() or
- * cancel_work_sync() instead.
- *
- * Please stop calling this function! A conversion to stop flushing system-wide
- * workqueues is in progress. This function will be removed after all in-tree
- * users stopped calling this function.
- */
-/*
- * The background of commit 771c035372a036f8 ("deprecate the
- * '__deprecated' attribute warnings entirely and for good") is that,
- * since Linus builds all modules between every single pull he does,
- * the standard kernel build needs to be _clean_ in order to be able to
- * notice when new problems happen. Therefore, don't emit warning while
- * there are in-tree users.
- */
+/* Please stop using this function, for this function will be removed in near future. */
 #define flush_scheduled_work()					\
 ({								\
-	if (0)							\
-		__warn_flushing_systemwide_wq();		\
+	__warn_flushing_systemwide_wq();			\
 	__flush_workqueue(system_wq);				\
 })
 
-/*
- * Although there is no longer in-tree caller, for now just emit warning
- * in order to give out-of-tree callers time to update.
- */
 #define flush_workqueue(wq)					\
 ({								\
 	struct workqueue_struct *_wq = (wq);			\
@@ -714,5 +724,6 @@ int workqueue_offline_cpu(unsigned int cpu);
 
 void __init workqueue_init_early(void);
 void __init workqueue_init(void);
+void __init workqueue_init_topology(void);
 
 #endif
@@ -1540,6 +1540,7 @@ static noinline void __init kernel_init_freeable(void)
 
 	smp_init();
 	sched_init_smp();
+	workqueue_init_topology();
 
 	padata_init();
 	page_alloc_init_late();
...
@@ -48,7 +48,7 @@ struct worker {
 						/* A: runs through worker->node */
 
 	unsigned long		last_active;	/* K: last active timestamp */
-	unsigned int		flags;		/* X: flags */
+	unsigned int		flags;		/* L: flags */
 	int			id;		/* I: worker id */
 
 	/*
...
#!/usr/bin/env drgn
#
# Copyright (C) 2023 Tejun Heo <tj@kernel.org>
# Copyright (C) 2023 Meta Platforms, Inc. and affiliates.

desc = """
This is a drgn script to show the current workqueue configuration. For more
info on drgn, visit https://github.com/osandov/drgn.

Affinity Scopes
===============

Shows the CPUs that can be used for unbound workqueues and how they will be
grouped by each available affinity type. For each type:

  nr_pods   number of CPU pods in the affinity type
  pod_cpus  CPUs in each pod
  pod_node  NUMA node for memory allocation for each pod
  cpu_pod   pod that each CPU is associated to

Worker Pools
============

Lists all worker pools indexed by their ID. For each pool:

  ref       number of pool_workqueue's associated with this pool
  nice      nice value of the worker threads in the pool
  idle      number of idle workers
  workers   number of all workers
  cpu       CPU the pool is associated with (per-cpu pool)
  cpus      CPUs the workers in the pool can run on (unbound pool)

Workqueue CPU -> pool
=====================

Lists all workqueues along with their type and worker pool association. For
each workqueue:

  NAME TYPE[,FLAGS] POOL_ID...

  NAME      name of the workqueue
  TYPE      percpu, unbound or ordered
  FLAGS     S: strict affinity scope
  POOL_ID   worker pool ID associated with each possible CPU
"""

import sys
import argparse

import drgn
from drgn.helpers.linux.list import list_for_each_entry, list_empty
from drgn.helpers.linux.percpu import per_cpu_ptr
from drgn.helpers.linux.cpumask import for_each_cpu, for_each_possible_cpu
from drgn.helpers.linux.idr import idr_for_each

parser = argparse.ArgumentParser(description=desc,
                                 formatter_class=argparse.RawTextHelpFormatter)
args = parser.parse_args()

def err(s):
    print(s, file=sys.stderr, flush=True)
    sys.exit(1)

# Render a cpumask as hex words covering 32 CPUs each, lowest CPUs first.
def cpumask_str(cpumask):
    output = ""
    base = 0
    v = 0
    for cpu in for_each_cpu(cpumask[0]):
        while cpu - base >= 32:
            output += f'{hex(v)} '
            base += 32
            v = 0
        v |= 1 << (cpu - base)
    if v > 0:
        output += f'{v:08x}'
    return output.strip()

worker_pool_idr    = prog['worker_pool_idr']
workqueues         = prog['workqueues']
wq_unbound_cpumask = prog['wq_unbound_cpumask']
wq_pod_types       = prog['wq_pod_types']
wq_affn_dfl        = prog['wq_affn_dfl']
wq_affn_names      = prog['wq_affn_names']

WQ_UNBOUND         = prog['WQ_UNBOUND']
WQ_ORDERED         = prog['__WQ_ORDERED']
WQ_MEM_RECLAIM     = prog['WQ_MEM_RECLAIM']

WQ_AFFN_CPU        = prog['WQ_AFFN_CPU']
WQ_AFFN_SMT        = prog['WQ_AFFN_SMT']
WQ_AFFN_CACHE      = prog['WQ_AFFN_CACHE']
WQ_AFFN_NUMA       = prog['WQ_AFFN_NUMA']
WQ_AFFN_SYSTEM     = prog['WQ_AFFN_SYSTEM']

print('Affinity Scopes')
print('===============')

print(f'wq_unbound_cpumask={cpumask_str(wq_unbound_cpumask)}')

# Dump one wq_pod_type: pod count, per-pod cpumask/node and the cpu -> pod map.
def print_pod_type(pt):
    print(f'  nr_pods  {pt.nr_pods.value_()}')

    print('  pod_cpus', end='')
    for pod in range(pt.nr_pods):
        print(f' [{pod}]={cpumask_str(pt.pod_cpus[pod])}', end='')
    print('')

    print('  pod_node', end='')
    for pod in range(pt.nr_pods):
        print(f' [{pod}]={pt.pod_node[pod].value_()}', end='')
    print('')

    print(f'  cpu_pod ', end='')
    for cpu in for_each_possible_cpu(prog):
        print(f' [{cpu}]={pt.cpu_pod[cpu].value_()}', end='')
    print('')

for affn in [WQ_AFFN_CPU, WQ_AFFN_SMT, WQ_AFFN_CACHE, WQ_AFFN_NUMA, WQ_AFFN_SYSTEM]:
    print('')
    print(f'{wq_affn_names[affn].string_().decode().upper()}{" (default)" if affn == wq_affn_dfl else ""}')
    print_pod_type(wq_pod_types[affn])

print('')
print('Worker Pools')
print('============')

# First pass just sizes the pool-ID and refcnt columns for alignment.
max_pool_id_len = 0
max_ref_len = 0
for pi, pool in idr_for_each(worker_pool_idr):
    pool = drgn.Object(prog, 'struct worker_pool', address=pool)
    max_pool_id_len = max(max_pool_id_len, len(f'{pi}'))
    max_ref_len = max(max_ref_len, len(f'{pool.refcnt.value_()}'))

for pi, pool in idr_for_each(worker_pool_idr):
    pool = drgn.Object(prog, 'struct worker_pool', address=pool)
    print(f'pool[{pi:0{max_pool_id_len}}] ref={pool.refcnt.value_():{max_ref_len}} nice={pool.attrs.nice.value_():3} ', end='')
    print(f'idle/workers={pool.nr_idle.value_():3}/{pool.nr_workers.value_():3} ', end='')
    if pool.cpu >= 0:
        print(f'cpu={pool.cpu.value_():3}', end='')
    else:
        print(f'cpus={cpumask_str(pool.attrs.cpumask)}', end='')
        print(f' pod_cpus={cpumask_str(pool.attrs.__pod_cpumask)}', end='')
        if pool.attrs.affn_strict:
            print(' strict', end='')
    print('')

print('')
print('Workqueue CPU -> pool')
print('=====================')

print('[ workqueue \\ type CPU', end='')
for cpu in for_each_possible_cpu(prog):
    print(f' {cpu:{max_pool_id_len}}', end='')
print(' dfl]')

for wq in list_for_each_entry('struct workqueue_struct', workqueues.address_of_(), 'list'):
    print(f'{wq.name.string_().decode()[-24:]:24}', end='')
    if wq.flags & WQ_UNBOUND:
        if wq.flags & WQ_ORDERED:
            print(' ordered   ', end='')
        else:
            print(' unbound', end='')
            if wq.unbound_attrs.affn_strict:
                print(',S ', end='')
            else:
                print('   ', end='')
    else:
        print(' percpu    ', end='')

    for cpu in for_each_possible_cpu(prog):
        pool_id = per_cpu_ptr(wq.cpu_pwq, cpu)[0].pool.id.value_()
        field_len = max(len(str(cpu)), max_pool_id_len)
        print(f' {pool_id:{field_len}}', end='')

    if wq.flags & WQ_UNBOUND:
        print(f' {wq.dfl_pwq.pool.id.value_():{max_pool_id_len}}', end='')
    print('')
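
If drgn is installed, the script can be run directly against the running kernel; root is typically required to read kernel memory. A quick sanity check after changing scopes is to compare the "Affinity Scopes" output (nr_pods, pod_cpus, cpu_pod) against the machine's topology, then confirm in the "Workqueue CPU -> pool" table that each CPU maps to the expected pool.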
@@ -20,8 +20,11 @@ https://github.com/osandov/drgn.
             and got excluded from concurrency management to avoid stalling
             other work items.
 
-  CMwake    The number of concurrency-management wake-ups while executing a
-            work item of the workqueue.
+  CMW/RPR   For per-cpu workqueues, the number of concurrency-management
+            wake-ups while executing a work item of the workqueue. For
+            unbound workqueues, the number of times a worker was repatriated
+            to its affinity scope after being migrated to an off-scope CPU by
+            the scheduler.
 
   mayday    The number of times the rescuer was requested while waiting for
             new worker creation.
@@ -65,6 +68,7 @@ PWQ_STAT_COMPLETED = prog['PWQ_STAT_COMPLETED'] # work items completed exec
 PWQ_STAT_CPU_TIME      = prog['PWQ_STAT_CPU_TIME']      # total CPU time consumed
 PWQ_STAT_CPU_INTENSIVE = prog['PWQ_STAT_CPU_INTENSIVE'] # wq_cpu_intensive_thresh_us violations
 PWQ_STAT_CM_WAKEUP     = prog['PWQ_STAT_CM_WAKEUP']     # concurrency-management worker wakeups
+PWQ_STAT_REPATRIATED   = prog['PWQ_STAT_REPATRIATED']   # unbound workers brought back into scope
 PWQ_STAT_MAYDAY        = prog['PWQ_STAT_MAYDAY']        # maydays to rescuer
 PWQ_STAT_RESCUED       = prog['PWQ_STAT_RESCUED']       # linked work items executed by rescuer
 PWQ_NR_STATS           = prog['PWQ_NR_STATS']
@@ -89,22 +93,25 @@ class WqStats:
                 'cpu_time'      : self.stats[PWQ_STAT_CPU_TIME],
                 'cpu_intensive' : self.stats[PWQ_STAT_CPU_INTENSIVE],
                 'cm_wakeup'     : self.stats[PWQ_STAT_CM_WAKEUP],
+                'repatriated'   : self.stats[PWQ_STAT_REPATRIATED],
                 'mayday'        : self.stats[PWQ_STAT_MAYDAY],
                 'rescued'       : self.stats[PWQ_STAT_RESCUED], }
 
     def table_header_str():
         return f'{"":>24} {"total":>8} {"infl":>5} {"CPUtime":>8} '\
-               f'{"CPUitsv":>7} {"CMwake":>7} {"mayday":>7} {"rescued":>7}'
+               f'{"CPUitsv":>7} {"CMW/RPR":>7} {"mayday":>7} {"rescued":>7}'
 
     def table_row_str(self):
         cpu_intensive = '-'
-        cm_wakeup = '-'
+        cmw_rpr = '-'
         mayday = '-'
         rescued = '-'
 
-        if not self.unbound:
+        if self.unbound:
+            cmw_rpr = str(self.stats[PWQ_STAT_REPATRIATED])
+        else:
             cpu_intensive = str(self.stats[PWQ_STAT_CPU_INTENSIVE])
-            cm_wakeup = str(self.stats[PWQ_STAT_CM_WAKEUP])
+            cmw_rpr = str(self.stats[PWQ_STAT_CM_WAKEUP])
 
         if self.mem_reclaim:
             mayday = str(self.stats[PWQ_STAT_MAYDAY])
@@ -115,7 +122,7 @@ class WqStats:
                f'{max(self.stats[PWQ_STAT_STARTED] - self.stats[PWQ_STAT_COMPLETED], 0):5} ' \
                f'{self.stats[PWQ_STAT_CPU_TIME] / 1000000:8.1f} ' \
                f'{cpu_intensive:>7} ' \
-               f'{cm_wakeup:>7} ' \
+               f'{cmw_rpr:>7} ' \
                f'{mayday:>7} ' \
                f'{rescued:>7} '
         return out.rstrip(':')
...
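
For live monitoring alongside this, the workqueue documentation's example invocation is "tools/workqueue/wq_monitor.py events"; in its output, the renamed CMW/RPR column carries concurrency-management wake-ups for per-cpu workqueues and repatriation counts for unbound ones.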