Commit aa6fde93 authored by Tejun Heo

workqueue: Scale up wq_cpu_intensive_thresh_us if BogoMIPS is below 4000

wq_cpu_intensive_thresh_us is used to detect CPU-hogging per-cpu work items.
Once detected, they're excluded from concurrency management to prevent them
from blocking other per-cpu work items. If CONFIG_WQ_CPU_INTENSIVE_REPORT is
enabled, repeat offenders are also reported so that the code can be updated.
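
For reference (not part of this patch), the reporting described above is gated behind a Kconfig option and is enabled in the kernel config with:

    CONFIG_WQ_CPU_INTENSIVE_REPORT=y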

The default threshold is 10ms, which is long enough to do a fair bit of work on
modern CPUs while short enough to usually go unnoticed. Unfortunately, this
leads to a lot of arguably spurious detections on very slow CPUs. Using the same
threshold across CPUs whose performance levels may be orders of magnitude apart
doesn't make a whole lot of sense.

This patch scales wq_cpu_intensive_thresh_us up to as much as 1 second when
BogoMIPS is below 4000. This is obviously very inaccurate, but it doesn't have
to be accurate to be useful. The mechanism is still useful even when the
threshold is fully scaled up, and the benefits of the reports are usually shared
with everyone regardless of who's reporting, so as long as a sufficient number
of fast machines are reporting, we don't lose much.
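
As a worked example (not part of the patch, just the arithmetic implied by the code below), with the 10ms default:

    BogoMIPS 500          : min(10ms * 4000 / 500, 1s) = 80ms
    BogoMIPS 40           : min(10ms * 4000 / 40,  1s) = 1s (capped)
    BogoMIPS 4000 or above: threshold stays at 10ms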

Some (or is it all?) ARM CPUs systematically report significantly lower
BogoMIPS. While this doesn't break anything, given how widespread ARM CPUs are,
it's at least a missed opportunity, and it would probably be a good idea to
teach workqueue about it.
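
Note that the scaling only applies when the threshold hasn't been overridden by
the user. For example (illustrative; this relies on the existing
cpu_intensive_thresh_us module parameter visible in the diff below), booting with

    workqueue.cpu_intensive_thresh_us=50000

pins the threshold to 50ms and skips the BogoMIPS-based adjustment. Given the
0644 permissions, the value should also be adjustable at runtime through
/sys/module/workqueue/parameters/cpu_intensive_thresh_us.
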
Signed-off-by: Tejun Heo <tj@kernel.org>
Reported-and-Tested-by: Geert Uytterhoeven <geert@linux-m68k.org>
parent b2ec116a
@@ -52,6 +52,7 @@
 #include <linux/sched/debug.h>
 #include <linux/nmi.h>
 #include <linux/kvm_para.h>
+#include <linux/delay.h>

 #include "workqueue_internal.h"
@@ -338,8 +339,10 @@ static cpumask_var_t *wq_numa_possible_cpumask;
  * Per-cpu work items which run for longer than the following threshold are
  * automatically considered CPU intensive and excluded from concurrency
  * management to prevent them from noticeably delaying other per-cpu work items.
+ * ULONG_MAX indicates that the user hasn't overridden it with a boot parameter.
+ * The actual value is initialized in wq_cpu_intensive_thresh_init().
  */
-static unsigned long wq_cpu_intensive_thresh_us = 10000;
+static unsigned long wq_cpu_intensive_thresh_us = ULONG_MAX;
 module_param_named(cpu_intensive_thresh_us, wq_cpu_intensive_thresh_us, ulong, 0644);

 static bool wq_disable_numa;
@@ -6513,6 +6516,42 @@ void __init workqueue_init_early(void)
               !system_freezable_power_efficient_wq);
 }

+static void __init wq_cpu_intensive_thresh_init(void)
+{
+        unsigned long thresh;
+        unsigned long bogo;
+
+        /* if the user set it to a specific value, keep it */
+        if (wq_cpu_intensive_thresh_us != ULONG_MAX)
+                return;
+
+        /*
+         * The default of 10ms is derived from the fact that most modern (as of
+         * 2023) processors can do a lot in 10ms and that it's just below what
+         * most consider human-perceivable. However, the kernel also runs on a
+         * lot slower CPUs including microcontrollers where the threshold is way
+         * too low.
+         *
+         * Let's scale up the threshold upto 1 second if BogoMips is below 4000.
+         * This is by no means accurate but it doesn't have to be. The mechanism
+         * is still useful even when the threshold is fully scaled up. Also, as
+         * the reports would usually be applicable to everyone, some machines
+         * operating on longer thresholds won't significantly diminish their
+         * usefulness.
+         */
+        thresh = 10 * USEC_PER_MSEC;
+
+        /* see init/calibrate.c for lpj -> BogoMIPS calculation */
+        bogo = max_t(unsigned long, loops_per_jiffy / 500000 * HZ, 1);
+        if (bogo < 4000)
+                thresh = min_t(unsigned long, thresh * 4000 / bogo, USEC_PER_SEC);
+
+        pr_debug("wq_cpu_intensive_thresh: lpj=%lu BogoMIPS=%lu thresh_us=%lu\n",
+                 loops_per_jiffy, bogo, thresh);
+
+        wq_cpu_intensive_thresh_us = thresh;
+}
+
 /**
  * workqueue_init - bring workqueue subsystem fully online
  *
@@ -6528,6 +6567,8 @@ void __init workqueue_init(void)
         struct worker_pool *pool;
         int cpu, bkt;

+        wq_cpu_intensive_thresh_init();
+
         /*
          * It'd be simpler to initialize NUMA in workqueue_init_early() but
          * CPU to node mapping may not be available that early on some