  1. 11 Jul, 2024 3 commits
    • workqueue: Remove cpus_read_lock() from apply_wqattrs_lock() · 19af4575
      Lai Jiangshan authored
      1726a171 ("workqueue: Put PWQ allocation and WQ enlistment in the same
      lock C.S.") led to the following possible deadlock:
      
        WARNING: possible recursive locking detected
        6.10.0-rc5-00004-g1d4c6111406c #1 Not tainted
         --------------------------------------------
         swapper/0/1 is trying to acquire lock:
         c27760f4 (cpu_hotplug_lock){++++}-{0:0}, at: alloc_workqueue (kernel/workqueue.c:5152 kernel/workqueue.c:5730) 
        
         but task is already holding lock:
         c27760f4 (cpu_hotplug_lock){++++}-{0:0}, at: padata_alloc (kernel/padata.c:1007) 
         ...  
         stack backtrace:
         ...
         cpus_read_lock (include/linux/percpu-rwsem.h:53 kernel/cpu.c:488) 
         alloc_workqueue (kernel/workqueue.c:5152 kernel/workqueue.c:5730) 
         padata_alloc (kernel/padata.c:1007 (discriminator 1)) 
         pcrypt_init_padata (crypto/pcrypt.c:327 (discriminator 1)) 
         pcrypt_init (crypto/pcrypt.c:353) 
         do_one_initcall (init/main.c:1267) 
         do_initcalls (init/main.c:1328 (discriminator 1) init/main.c:1345 (discriminator 1)) 
         kernel_init_freeable (init/main.c:1364) 
         kernel_init (init/main.c:1469) 
         ret_from_fork (arch/x86/kernel/process.c:153) 
         ret_from_fork_asm (arch/x86/entry/entry_32.S:737) 
         entry_INT80_32 (arch/x86/entry/entry_32.S:944) 
      
      This is caused by pcrypt allocating a workqueue while holding
      cpus_read_lock(). The workqueue code must not acquire that lock
      again: recursively read-locking the same percpu rwsem can deadlock
      if a down_write() starts after the first down_read().
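
      For illustration, a minimal sketch of the deadlocking pattern,
      assuming the pre-fix behavior in which alloc_workqueue() re-acquired
      cpu_hotplug_lock internally (the workqueue name and flags here are
      hypothetical):

        struct workqueue_struct *wq;

        /* Caller, e.g. padata_alloc(), already holds the read lock... */
        cpus_read_lock();

        /*
         * ...so the nested cpus_read_lock() inside alloc_workqueue()
         * could deadlock: if a writer (cpus_write_lock()) queues between
         * the two down_read() calls, the second down_read() waits behind
         * the writer, which in turn waits on the first reader -- the
         * same task.
         */
        wq = alloc_workqueue("example_wq", 0, 0);

        cpus_read_unlock();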
      
      The pwq creation and installation paths have been reworked to use
      wq_online_cpumask rather than cpu_online_mask, making
      cpus_read_lock() unneeded during wqattrs changes. Fix the deadlock
      by removing cpus_read_lock() from apply_wqattrs_lock().
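
      A rough sketch of the shape of the fix (simplified; the exact helper
      bodies in kernel/workqueue.c differ):

        /* Before: recursed on cpu_hotplug_lock for callers like padata
         * that already hold it. */
        static void apply_wqattrs_lock(void)
        {
        	cpus_read_lock();	/* removed by this patch */
        	mutex_lock(&wq_pool_mutex);
        }

        /* After: wq_pool_mutex alone suffices, since pwq creation now
         * consults wq_online_cpumask, which is maintained under
         * wq_pool_mutex. */
        static void apply_wqattrs_lock(void)
        {
        	mutex_lock(&wq_pool_mutex);
        }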
      
      tj: Updated changelog.
      Signed-off-by: Lai Jiangshan <jiangshan.ljs@antgroup.com>
      Fixes: 1726a171 ("workqueue: Put PWQ allocation and WQ enlistment in the same lock C.S.")
      Link: http://lkml.kernel.org/r/202407081521.83b627c1-lkp@intel.com
      Signed-off-by: Tejun Heo <tj@kernel.org>
    • workqueue: Simplify wq_calc_pod_cpumask() with wq_online_cpumask · fbb3d4c1
      Lai Jiangshan authored
      Avoid relying on cpu_online_mask for wqattrs changes so that
      cpus_read_lock() can be removed from apply_wqattrs_lock().

      With wq_online_cpumask, attrs->__pod_cpumask no longer needs to be
      reused as temporary storage to determine whether the pod has any
      online CPUs that @attrs wants, since @cpu_going_down is already
      absent from wq_online_cpumask.
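
      A hedged sketch of the simplified check (names approximate the
      workqueue internals; the real function signature differs):

        /* With wq_online_cpumask, "does this pod have a usable online
         * CPU?" is a plain intersection test: a CPU on its way down has
         * already been cleared from the mask, so there is no
         * @cpu_going_down special case and no need to clobber
         * attrs->__pod_cpumask as scratch space. */
        cpumask_and(attrs->__pod_cpumask, pod_cpumask, attrs->cpumask);
        if (!cpumask_intersects(attrs->__pod_cpumask, wq_online_cpumask))
        	cpumask_copy(attrs->__pod_cpumask, attrs->cpumask);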
      Signed-off-by: Lai Jiangshan <jiangshan.ljs@antgroup.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
    • workqueue: Add wq_online_cpumask · 8d84baf7
      Lai Jiangshan authored
      The new wq_online_cpumask mirrors cpu_online_mask except during
      hotplugging; specifically, it differs during the hotplug stages of
      workqueue_offline_cpu() and workqueue_online_cpu(), in which the
      transitioning CPU is not represented in the mask.
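
      A minimal sketch of how such a mask can be maintained from the
      hotplug callbacks (simplified; the real update sites and locking
      live in kernel/workqueue.c):

        int workqueue_online_cpu(unsigned int cpu)
        {
        	mutex_lock(&wq_pool_mutex);
        	cpumask_set_cpu(cpu, wq_online_cpumask);
        	/* ...rebind workers, update pwqs... */
        	mutex_unlock(&wq_pool_mutex);
        	return 0;
        }

        int workqueue_offline_cpu(unsigned int cpu)
        {
        	mutex_lock(&wq_pool_mutex);
        	cpumask_clear_cpu(cpu, wq_online_cpumask);
        	/* ...unbind workers, update pwqs... */
        	mutex_unlock(&wq_pool_mutex);
        	return 0;
        }

      Because the mask is updated under wq_pool_mutex, readers that hold
      that mutex (such as apply_wqattrs_lock() after the fix above) see a
      stable view without taking cpus_read_lock().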
      Signed-off-by: Lai Jiangshan <jiangshan.ljs@antgroup.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
  2. 05 Jul, 2024 5 commits
  3. 02 Jul, 2024 2 commits
  4. 25 Jun, 2024 2 commits
  5. 21 Jun, 2024 4 commits
  6. 19 Jun, 2024 1 commit
    • workqueue: Avoid nr_active manipulation in grabbing inactive items · b56c7207
      Lai Jiangshan authored
      Currently, try_to_grab_pending() activates an inactive item and then
      treats it as a regular activated item.

      This avoids duplicating the handling logic for active and inactive
      items, but prematurely activating an inactive item fires
      trace_workqueue_activate_work(), an unintended side effect visible
      to user space.

      Moreover, the unnecessary increment of nr_active, which is no longer
      a simple counter, followed by a counteracting decrement, is
      inefficient and complicates the code.

      Just remove the nr_active manipulation code when grabbing inactive
      items.
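
      A hedged sketch of the resulting grab path (identifiers approximate
      the workqueue internals):

        /* An inactive item was never counted in nr_active, so it can be
         * unlinked directly: no nr_active++ followed by a counteracting
         * decrement, and no spurious trace_workqueue_activate_work()
         * event. */
        if (*work_data_bits(work) & WORK_STRUCT_INACTIVE)
        	list_del_init(&work->entry);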
      Signed-off-by: Lai Jiangshan <jiangshan.ljs@antgroup.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
  7. 10 Jun, 2024 1 commit
    • workqueue: replace call_rcu by kfree_rcu for simple kmem_cache_free callback · 37c2277f
      Julia Lawall authored
      Since SLOB was removed, it is no longer necessary to use call_rcu()
      when the callback only performs kmem_cache_free(). Use kfree_rcu()
      directly.
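
      For illustration, the kind of transformation this performs (a
      generic before/after sketch with a hypothetical struct obj, not the
      exact workqueue hunk):

        /* Before: a callback whose only job is freeing the object. */
        static void obj_free_rcu(struct rcu_head *rcu)
        {
        	struct obj *o = container_of(rcu, struct obj, rcu);

        	kmem_cache_free(obj_cache, o);
        }

        	/* call site */
        	call_rcu(&o->rcu, obj_free_rcu);

        /* After: with SLOB gone, kfree_rcu() handles slab-cache objects,
         * so the callback is deleted entirely. */
        	kfree_rcu(o, rcu);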
      
      The changes were done using the following Coccinelle semantic patch.
      This semantic patch is designed to ignore cases where the callback
      function is used in another way.
      
      // <smpl>
      @r@
      expression e;
      local idexpression e2;
      identifier cb,f;
      position p;
      @@
      
      (
      call_rcu(...,e2)
      |
      call_rcu(&e->f,cb@p)
      )
      
      @r1@
      type T;
      identifier x,r.cb;
      @@
      
       cb(...) {
      (
         kmem_cache_free(...);
      |
         T x = ...;
         kmem_cache_free(...,x);
      |
         T x;
         x = ...;
         kmem_cache_free(...,x);
      )
       }
      
      @s depends on r1@
      position p != r.p;
      identifier r.cb;
      @@
      
       cb@p
      
      @script:ocaml@
      cb << r.cb;
      p << s.p;
      @@
      
      Printf.eprintf "Other use of %s at %s:%d\n"
         cb (List.hd p).file (List.hd p).line
      
      @depends on r1 && !s@
      expression e;
      identifier r.cb,f;
      position r.p;
      @@
      
      - call_rcu(&e->f,cb@p)
      + kfree_rcu(e,f)
      
      @r1a depends on !s@
      type T;
      identifier x,r.cb;
      @@
      
      - cb(...) {
      (
      -  kmem_cache_free(...);
      |
      -  T x = ...;
      -  kmem_cache_free(...,x);
      |
      -  T x;
      -  x = ...;
      -  kmem_cache_free(...,x);
      )
      - }
      // </smpl>
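
      The semantic patch can be applied tree-wide with Coccinelle's spatch
      tool, e.g. something like "spatch --sp-file kfree_rcu.cocci --dir
      kernel/ --in-place" (the .cocci file name here is hypothetical).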
      Signed-off-by: Julia Lawall <Julia.Lawall@inria.fr>
      Reviewed-by: Paul E. McKenney <paulmck@kernel.org>
      Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Tejun Heo <tj@kernel.org>
  8. 07 Jun, 2024 1 commit
  9. 06 Jun, 2024 21 commits