29 Aug, 2017 37 commits
  28 Aug, 2017 3 commits
    • page waitqueue: always add new entries at the end · 9c3a815f
      Linus Torvalds authored
      Commit 3510ca20 ("Minor page waitqueue cleanups") made the page
      queue code always add new waiters to the back of the queue, which helps
      upcoming patches to batch the wakeups for some horrid loads where the
      wait queues grow to thousands of entries.
      
      However, I forgot about the nasty add_page_wait_queue() special case
      code that is only used by the cachefiles code.  That one still continued
      to add the new wait queue entries at the beginning of the list.
      
      Fix it, because any sane batched wakeup will require that we don't
      suddenly start getting new entries at the beginning of the list that we
      already handled in a previous batch.
      
      [ The current code always does the whole list while holding the lock, so
        wait queue ordering doesn't matter for correctness, but even then it's
        better to add later entries at the end from a fairness standpoint ]
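
      A minimal sketch of the fixed add_page_wait_queue(), assuming the
      kernel's generic wait-queue helpers (an illustration of the idea, not
      the verbatim upstream diff):

        /* mm/filemap.c -- sketch, not the verbatim diff */
        void add_page_wait_queue(struct page *page, wait_queue_entry_t *waiter)
        {
                wait_queue_head_t *q = page_waitqueue(page);
                unsigned long flags;

                spin_lock_irqsave(&q->lock, flags);
                /* was __add_wait_queue(q, waiter), i.e. head insertion */
                __add_wait_queue_entry_tail(q, waiter);
                SetPageWaiters(page);
                spin_unlock_irqrestore(&q->lock, flags);
        }

      With tail insertion, a batched wakeup that has walked the list up to
      some point can never be bypassed by an entry added in front of it.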
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      9c3a815f
    • cpumask: fix spurious cpumask_of_node() on non-NUMA multi-node configs · b339752d
      Tejun Heo authored
      When !NUMA, cpumask_of_node(@node) equals cpu_online_mask regardless of
      @node.  The assumption seems to be that if !NUMA, there shouldn't be more than
      one node and thus reporting cpu_online_mask regardless of @node is
      correct.  However, that assumption was broken years ago to support
      DISCONTIGMEM and whether a system has multiple nodes or not is
      separately controlled by NEED_MULTIPLE_NODES.
      
      This means that, on a system with !NUMA && NEED_MULTIPLE_NODES,
      cpumask_of_node() will report cpu_online_mask for all possible nodes,
      indicating that the CPUs are associated with multiple nodes, which is an
      impossible configuration.
      
      This bug has been around forever but doesn't look like it has caused any
      noticeable symptoms.  However, it triggers a WARN recently added to
      workqueue to verify NUMA affinity configuration.
      
      Fix it by reporting empty cpumask on non-zero nodes if !NUMA.
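
      A sketch of what this looks like in the generic fallback
      (<asm-generic/topology.h> style; treat the exact macro shape as an
      approximation, not the verbatim diff):

        #ifndef cpumask_of_node
          #ifdef CONFIG_NEED_MULTIPLE_NODES
            /* only node 0 has CPUs; every other node reports empty */
            #define cpumask_of_node(node) \
                    ((node) == 0 ? cpu_online_mask : cpu_none_mask)
          #else
            #define cpumask_of_node(node) ((void)(node), cpu_online_mask)
          #endif
        #endif

      That way a !NUMA && NEED_MULTIPLE_NODES kernel no longer claims the
      same CPUs belong to several nodes at once.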
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reported-and-tested-by: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: stable@vger.kernel.org
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b339752d
    • ARCv2: SMP: Mask only private-per-core IRQ lines on boot at core intc · e8206d2b
      Alexey Brodkin authored
      Recent commit a8ec3ee8 "arc: Mask individual IRQ lines during core
      INTC init" breaks interrupt handling on ARCv2 SMP systems.
      
      That commit masked all interrupts at onset, as some controllers on some
      boards (customer as well as internal) would assert interrupts early
      before any handlers were installed.  For SMP systems, the masking was
      done at each cpu's core-intc.  Later, when the IRQ was actually
      requested, it was unmasked, but only on the requesting cpu.
      
      For "common" interrupts, which were wired up from the 2nd level IDU
      intc, this was an issue, as they needed to be enabled on ALL the cpus
      (given that IDU IRQs are by default served Round Robin across cpus)
      
      So fix that by NOT masking "common" interrupts at core-intc, but instead
      at the 2nd level IDU intc (the latter already being done in idu_of_init()).
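
      A sketch of the reworked core-intc init loop (the helper name and the
      FIRST_EXT_IRQ cutoff for the first IDU-routed line are illustrative
      assumptions, not the verbatim diff):

        /* arch/arc/kernel/intc-arcv2.c -- illustrative sketch */
        static void arcv2_irq_mask_private_lines(void)
        {
                unsigned int irq;

                /*
                 * Mask only the private-per-core lines (timer, IPI, ...).
                 * "Common" lines >= FIRST_EXT_IRQ arrive via the 2nd level
                 * IDU intc and are masked there (idu_of_init()), so a later
                 * request_irq() on one cpu can't leave them masked on the
                 * other cpus.
                 */
                for (irq = NR_EXCEPTIONS; irq < FIRST_EXT_IRQ; irq++) {
                        write_aux_reg(AUX_IRQ_SELECT, irq);
                        write_aux_reg(AUX_IRQ_ENABLE, 0);
                }
        }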
      
      Fixes: a8ec3ee8 ("arc: Mask individual IRQ lines during core INTC init")
      Signed-off-by: Alexey Brodkin <abrodkin@synopsys.com>
      [vgupta: reworked changelog, removed the extraneous idu_irq_mask_raw()]
      Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e8206d2b