    mm: migrate: prevent racy access to tlb_flush_pending · 16af97dc
    Nadav Amit authored
    Patch series "fixes of TLB batching races", v6.
    
    It turns out that the Linux TLB batching mechanism suffers from
    various races.  Races caused by batching during reclamation were
    recently handled by Mel, and this patch set deals with others.  The
    more fundamental issue is that concurrent updates of the page tables
    allow TLB flushes to be batched on one core while another core
    changes the page tables.  That other core may assume, based on the
    updated PTE value, that a PTE change does not require a flush, while
    it is unaware that TLB flushes are still pending.
    
    This behavior affects KSM (which may result in memory corruption) and
    MADV_FREE and MADV_DONTNEED (which may result in incorrect behavior).  A
    proof-of-concept can easily produce the wrong behavior of MADV_DONTNEED.
    Memory corruption in KSM is harder to produce in practice, but was
    observed by hacking the kernel and adding a delay before flushing and
    replacing the KSM page.
    
    Finally, one memory barrier is also missing, which may affect
    architectures with a weak memory model.
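    
    For illustration, the ordering at stake follows the classic
    store-buffering pattern.  Below is a minimal C11 sketch with made-up
    names (flush_pending, pte); it is not the kernel code, and the fence
    placement is an assumption about where the missing barrier belongs:
    
    #include <stdatomic.h>
    #include <stdio.h>
    
    static atomic_int flush_pending;  /* models mm->tlb_flush_pending */
    static atomic_int pte;            /* models one page-table entry  */
    
    /* Batcher: announce the pending flush, THEN read the PTE. */
    static int batcher_read_pte(void)
    {
            atomic_fetch_add_explicit(&flush_pending, 1,
                                      memory_order_relaxed);
            /* Full fence: without it, the PTE read below may be
               reordered before the increment, and the updater can
               miss us entirely. */
            atomic_thread_fence(memory_order_seq_cst);
            return atomic_load_explicit(&pte, memory_order_relaxed);
    }
    
    /* Updater: change the PTE, THEN check for pending flushes. */
    static int updater_check_pending(void)
    {
            atomic_store_explicit(&pte, 1, memory_order_relaxed);
            /* Matching fence: otherwise both sides may read stale
               values (the store-buffering outcome). */
            atomic_thread_fence(memory_order_seq_cst);
            return atomic_load_explicit(&flush_pending,
                                        memory_order_relaxed) > 0;
    }
    
    int main(void)
    {
            /* Single-threaded smoke test; the ordering matters when
               the two functions run concurrently on different CPUs. */
            printf("%d %d\n", batcher_read_pte(), updater_check_pending());
            return 0;
    }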
    
    This patch (of 7):
    
    Setting and clearing mm->tlb_flush_pending can be performed by
    multiple threads, since mmap_sem may only be acquired for read in
    task_numa_work().  If this happens, tlb_flush_pending might be
    cleared while one of the threads is still changing PTEs and batching
    TLB flushes.
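    
    To make the interleaving concrete, here is a minimal userspace
    sketch of the pre-patch scheme.  The helper names match the old
    set_tlb_flush_pending()/clear_tlb_flush_pending() kernel helpers,
    but the global flag and the single-threaded replay of the timeline
    are illustrative only:
    
    #include <stdbool.h>
    
    /* The pre-patch scheme, sketched: one boolean flag, set and
       cleared by any thread that batches TLB flushes. */
    static bool tlb_flush_pending;
    
    static void set_tlb_flush_pending(void)   { tlb_flush_pending = true;  }
    static void clear_tlb_flush_pending(void) { tlb_flush_pending = false; }
    
    /*
     * With mmap_sem held only for read, two threads may interleave:
     *
     *      thread A                       thread B
     *      set_tlb_flush_pending()
     *                                     set_tlb_flush_pending()
     *      ... change PTEs, batch ...     ... change PTEs, batch ...
     *      clear_tlb_flush_pending()
     *                                     (still batching, yet the flag
     *                                      now reads false to everyone)
     */
    int main(void)
    {
            set_tlb_flush_pending();   /* thread A starts batching       */
            set_tlb_flush_pending();   /* thread B starts batching       */
            clear_tlb_flush_pending(); /* thread A done: wipes B's state */
            return tlb_flush_pending;  /* 0 == "nothing pending": wrong  */
    }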
    
    This can lead to the same race between migration and
    change_protection_range() that led to the introduction of
    tlb_flush_pending.  The result of that earlier race was data
    corruption, which means that this patch also addresses a
    theoretically possible data corruption.
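    
    A sketch of the counter-based approach, using C11 atomics in place
    of the kernel's atomic_t (the inc/dec/mm_tlb_flush_pending helper
    names follow the patch; everything else is a simplified stand-in):
    "pending" now remains visible until the last batching thread has
    finished.
    
    #include <stdatomic.h>
    #include <stdbool.h>
    
    static atomic_int tlb_flush_pending;   /* was: bool */
    
    /* Each batching thread takes a reference before changing PTEs... */
    static void inc_tlb_flush_pending(void)
    {
            atomic_fetch_add(&tlb_flush_pending, 1);
    }
    
    /* ...and drops it only once its own TLB flush has completed. */
    static void dec_tlb_flush_pending(void)
    {
            atomic_fetch_sub(&tlb_flush_pending, 1);
    }
    
    /* Migration-side check: any batcher still in flight means the TLB
       may hold stale entries, so flush before trusting the PTE. */
    static bool mm_tlb_flush_pending(void)
    {
            return atomic_load(&tlb_flush_pending) > 0;
    }
    
    int main(void)
    {
            inc_tlb_flush_pending();                /* thread A */
            inc_tlb_flush_pending();                /* thread B */
            dec_tlb_flush_pending();                /* A done   */
            return mm_tlb_flush_pending() ? 0 : 1;  /* still pending */
    }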
    
    An actual data corruption was not observed, yet the race was
    confirmed by adding an assertion that checks tlb_flush_pending is
    not set by two threads, adding artificial latency in
    change_protection_range(), and using sysctl to reduce
    kernel.numa_balancing_scan_delay_ms.
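    
    That setup can be approximated in userspace.  A hedged sketch: the
    worker name, the latency value, and the assertion placement below
    are made up to mirror the description; the real test instrumented
    change_protection_range() inside the kernel (build with -pthread):
    
    #include <assert.h>
    #include <pthread.h>
    #include <stdbool.h>
    #include <unistd.h>
    
    static bool tlb_flush_pending;
    
    static void *change_protection_worker(void *arg)
    {
            (void)arg;
            /* The assertion described above: fires as soon as a
               second thread sets an already-set flag. */
            assert(!tlb_flush_pending);
            tlb_flush_pending = true;
            usleep(10000);        /* artificial latency, as in the test */
            tlb_flush_pending = false;
            return NULL;
    }
    
    int main(void)
    {
            pthread_t t[2];
    
            for (int i = 0; i < 2; i++)
                    pthread_create(&t[i], NULL, change_protection_worker,
                                   NULL);
            for (int i = 0; i < 2; i++)
                    pthread_join(t[i], NULL);
            return 0;
    }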
    
    Link: http://lkml.kernel.org/r/20170802000818.4760-2-namit@vmware.com
    Fixes: 20841405 ("mm: fix TLB flush race between migration, and change_protection_range")
    Signed-off-by: Nadav Amit <namit@vmware.com>
    Acked-by: Mel Gorman <mgorman@suse.de>
    Acked-by: Rik van Riel <riel@redhat.com>
    Acked-by: Minchan Kim <minchan@kernel.org>
    Cc: Andy Lutomirski <luto@kernel.org>
    Cc: Hugh Dickins <hughd@google.com>
    Cc: "David S. Miller" <davem@davemloft.net>
    Cc: Andrea Arcangeli <aarcange@redhat.com>
    Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
    Cc: Ingo Molnar <mingo@redhat.com>
    Cc: Jeff Dike <jdike@addtoit.com>
    Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
    Cc: Mel Gorman <mgorman@techsingularity.net>
    Cc: Russell King <linux@armlinux.org.uk>
    Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
    Cc: Tony Luck <tony.luck@intel.com>
    Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>