Commit b385d21f authored by Hugh Dickins, committed by Linus Torvalds

mm: delete unnecessary and unsafe init_tlb_ubc()

init_tlb_ubc() looked unnecessary to me: tlb_ubc is statically
initialized with zeroes in the init_task, and copied from parent to
child while it is quiescent in arch_dup_task_struct(); so I went to
delete it.
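
For context, tlb_ubc is the per-task batched-TLB-flush state embedded in
task_struct; abbreviated sketch of its shape from the <linux/sched.h> of
that era:

	struct tlbflush_unmap_batch {
		/* CPUs that may hold a stale TLB entry for the unmapped PFNs */
		struct cpumask cpumask;
		/* True if any bit in cpumask is set */
		bool flush_required;
		/* True if any unmapped PTE was dirty (writable) */
		bool writable;
	};

	struct task_struct {
		...
	#ifdef CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH
		struct tlbflush_unmap_batch tlb_ubc;
	#endif
		...
	};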

But I inserted temporary debug WARN_ONs in place of init_tlb_ubc() to
check that it was always empty at that point, and found them firing:
because memcg reclaim can recurse into global reclaim (when allocating
biosets for swapout in my case), and arrive back at the init_tlb_ubc()
in shrink_node_memcg().
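
The debug patch itself was never posted; it amounted to something like
this (a reconstruction, not the actual code):

	/* Temporary debug version of init_tlb_ubc(): verify that the batch
	 * state is already empty when shrink_node_memcg() begins.  This
	 * fires when memcg reclaim recurses into global reclaim on the
	 * same task. */
	static void init_tlb_ubc(void)
	{
		WARN_ON(current->tlb_ubc.flush_required);
		WARN_ON(!cpumask_empty(&current->tlb_ubc.cpumask));
		current->tlb_ubc.flush_required = false;
	}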

Resetting tlb_ubc.flush_required at that point is wrong: if the upper
level needs a deferred TLB flush, but the lower level turns out not to,
we miss a TLB flush.  But fortunately, that's the only part of the
protocol that does not nest: with the initialization removed, cpumask
collects bits from upper and lower levels, and flushes TLB when needed.
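
That nesting property can be seen on the unmap side in mm/rmap.c, where
the batch is only ever accumulated, never cleared (abbreviated from the
source of that era):

	static void set_tlb_ubc_flush_pending(struct mm_struct *mm,
			struct page *page, bool writable)
	{
		struct tlbflush_unmap_batch *tlb_ubc = &current->tlb_ubc;

		/* OR in this mm's CPUs: bits set by an outer (global) reclaim
		 * survive a nested (memcg) reclaim, and vice versa */
		cpumask_or(&tlb_ubc->cpumask, &tlb_ubc->cpumask, mm_cpumask(mm));
		tlb_ubc->flush_required = true;

		/* A dirty PTE must be flushed before pageout I/O starts */
		if (writable)
			tlb_ubc->writable = true;
	}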

Fixes: 72b252ae ("mm: send one IPI per CPU to TLB flush all entries after unmapping pages")
Signed-off-by: Hugh Dickins <hughd@google.com>
Acked-by: Mel Gorman <mgorman@techsingularity.net>
Cc: stable@vger.kernel.org # 4.3+
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 71664665
@@ -2303,23 +2303,6 @@ static void get_scan_count(struct lruvec *lruvec, struct mem_cgroup *memcg,
 	}
 }
 
-#ifdef CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH
-static void init_tlb_ubc(void)
-{
-	/*
-	 * This deliberately does not clear the cpumask as it's expensive
-	 * and unnecessary. If there happens to be data in there then the
-	 * first SWAP_CLUSTER_MAX pages will send an unnecessary IPI and
-	 * then will be cleared.
-	 */
-	current->tlb_ubc.flush_required = false;
-}
-#else
-static inline void init_tlb_ubc(void)
-{
-}
-#endif /* CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH */
-
 /*
  * This is a basic per-node page freer.  Used by both kswapd and direct reclaim.
  */
@@ -2355,8 +2338,6 @@ static void shrink_node_memcg(struct pglist_data *pgdat, struct mem_cgroup *memc
 	scan_adjusted = (global_reclaim(sc) && !current_is_kswapd() &&
 			 sc->priority == DEF_PRIORITY);
 
-	init_tlb_ubc();
-
 	blk_start_plug(&plug);
 	while (nr[LRU_INACTIVE_ANON] || nr[LRU_ACTIVE_FILE] ||
 				nr[LRU_INACTIVE_FILE]) {