1. 13 Mar, 2021 14 commits
    • linux/compiler-clang.h: define HAVE_BUILTIN_BSWAP* · 97e49102
      Arnd Bergmann authored
      Separating compiler-clang.h from compiler-gcc.h inadvertently dropped the
      definitions of the three HAVE_BUILTIN_BSWAP macros, which forces a fallback
      to the open-coded version and relies on the compiler to detect the pattern.
      
      Since all versions of clang support the __builtin_bswap interfaces, add
      back the flags and have the headers pick these up automatically.
      
      This results in a 4% improvement of compilation speed for arm defconfig.
      
      Note: it might also be worth revisiting which architectures set
      CONFIG_ARCH_USE_BUILTIN_BSWAP for one compiler or the other, today this is
      set on six architectures (arm32, csky, mips, powerpc, s390, x86), while
      another ten architectures define custom helpers (alpha, arc, ia64, m68k,
      mips, nios2, parisc, sh, sparc, xtensa), and the rest (arm64, h8300,
      hexagon, microblaze, nds32, openrisc, riscv) just get the unoptimized
      version and rely on the compiler to detect it.
      
      A long time ago, the compiler builtins were architecture specific, but
      nowadays, all compilers that are able to build the kernel have correct
      implementations of them, though some may not be as optimized as the inline
      asm versions.
      
      The patch that dropped the optimization landed in v4.19, so as discussed
      it would be fairly safe to backport this revert to the 4.19/5.4/5.10
      stable kernels; there is a remaining risk of regressions, but it has no
      known side effects besides compile speed.
      
      Link: https://lkml.kernel.org/r/20210226161151.2629097-1-arnd@kernel.org
      Link: https://lore.kernel.org/lkml/20210225164513.3667778-1-arnd@kernel.org/
      Fixes: 815f0ddb ("include/linux/compiler*.h: make compiler-*.h mutually exclusive")
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Reviewed-by: Nathan Chancellor <nathan@kernel.org>
      Reviewed-by: Kees Cook <keescook@chromium.org>
      Acked-by: Miguel Ojeda <ojeda@kernel.org>
      Acked-by: Nick Desaulniers <ndesaulniers@google.com>
      Acked-by: Luc Van Oostenryck <luc.vanoostenryck@gmail.com>
      Cc: Masahiro Yamada <masahiroy@kernel.org>
      Cc: Nick Hu <nickhu@andestech.com>
      Cc: Greentime Hu <green.hu@gmail.com>
      Cc: Vincent Chen <deanbo422@gmail.com>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Palmer Dabbelt <palmer@dabbelt.com>
      Cc: Albert Ou <aou@eecs.berkeley.edu>
      Cc: Guo Ren <guoren@kernel.org>
      Cc: Randy Dunlap <rdunlap@infradead.org>
      Cc: Sami Tolvanen <samitolvanen@google.com>
      Cc: Marco Elver <elver@google.com>
      Cc: Arvind Sankar <nivedita@alum.mit.edu>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • MAINTAINERS: exclude uapi directories in API/ABI section · f0b15b60
      Vlastimil Babka authored
      Commit 7b4693e6 ("MAINTAINERS: add uapi directories to API/ABI
      section") added include/uapi/ and arch/*/include/uapi/ so that patches
      modifying them CC linux-api.  However, that had already been tried in the
      past; it resulted in too much noise and was later removed, as explained in
      b14fd334 ("MAINTAINERS: trim the file triggers for ABI/API").
      
      To prevent another round of addition and removal in the future, change the
      entries to X: (explicit exclusion) for documentation purposes, although
      they are not subdirectories of broader included directories, as there is
      apparently no defined way to add plain comments in subsystem sections.
      
      Link: https://lkml.kernel.org/r/20210301100255.25229-1-vbabka@suse.cz
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Reported-by: Michael Kerrisk (man-pages) <mtk.manpages@gmail.com>
      Acked-by: Michael Kerrisk (man-pages) <mtk.manpages@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • binfmt_misc: fix possible deadlock in bm_register_write · e7850f4d
      Lior Ribak authored
      There is a deadlock in bm_register_write:
      
      First, at the beginning of the function, a lock is taken on the binfmt_misc
      root inode with inode_lock(d_inode(root)).
      
      Then, if the user used the MISC_FMT_OPEN_FILE flag, the function will call
      open_exec on the user-provided interpreter.
      
      open_exec will perform a path lookup, and if the path lookup process
      includes the root of binfmt_misc, it will try to take a shared lock on its
      inode again; since the inode is already locked, the code deadlocks.
      
      To reproduce the bug:
      $ echo ":iiiii:E::ii::/proc/sys/fs/binfmt_misc/bla:F" > /proc/sys/fs/binfmt_misc/register
      
      backtrace of where the lock occurs (#5):
      0  schedule () at ./arch/x86/include/asm/current.h:15
      1  0xffffffff81b51237 in rwsem_down_read_slowpath (sem=0xffff888003b202e0, count=<optimized out>, state=state@entry=2) at kernel/locking/rwsem.c:992
      2  0xffffffff81b5150a in __down_read_common (state=2, sem=<optimized out>) at kernel/locking/rwsem.c:1213
      3  __down_read (sem=<optimized out>) at kernel/locking/rwsem.c:1222
      4  down_read (sem=<optimized out>) at kernel/locking/rwsem.c:1355
      5  0xffffffff811ee22a in inode_lock_shared (inode=<optimized out>) at ./include/linux/fs.h:783
      6  open_last_lookups (op=0xffffc9000022fe34, file=0xffff888004098600, nd=0xffffc9000022fd10) at fs/namei.c:3177
      7  path_openat (nd=nd@entry=0xffffc9000022fd10, op=op@entry=0xffffc9000022fe34, flags=flags@entry=65) at fs/namei.c:3366
      8  0xffffffff811efe1c in do_filp_open (dfd=<optimized out>, pathname=pathname@entry=0xffff8880031b9000, op=op@entry=0xffffc9000022fe34) at fs/namei.c:3396
      9  0xffffffff811e493f in do_open_execat (fd=fd@entry=-100, name=name@entry=0xffff8880031b9000, flags=<optimized out>, flags@entry=0) at fs/exec.c:913
      10 0xffffffff811e4a92 in open_exec (name=<optimized out>) at fs/exec.c:948
      11 0xffffffff8124aa84 in bm_register_write (file=<optimized out>, buffer=<optimized out>, count=19, ppos=<optimized out>) at fs/binfmt_misc.c:682
      12 0xffffffff811decd2 in vfs_write (file=file@entry=0xffff888004098500, buf=buf@entry=0xa758d0 ":iiiii:E::ii::i:CF
      ", count=count@entry=19, pos=pos@entry=0xffffc9000022ff10) at fs/read_write.c:603
      13 0xffffffff811defda in ksys_write (fd=<optimized out>, buf=0xa758d0 ":iiiii:E::ii::i:CF
      ", count=19) at fs/read_write.c:658
      14 0xffffffff81b49813 in do_syscall_64 (nr=<optimized out>, regs=0xffffc9000022ff58) at arch/x86/entry/common.c:46
      15 0xffffffff81c0007c in entry_SYSCALL_64 () at arch/x86/entry/entry_64.S:120
      
      To solve the issue, the open_exec() call is moved to before
      bm_register_write() takes the lock.
      
      Link: https://lkml.kernel.org/r/20210228224414.95962-1-liorribak@gmail.com
      Fixes: 948b701a ("binfmt_misc: add persistent opened binary handler for containers")
      Signed-off-by: Lior Ribak <liorribak@gmail.com>
      Acked-by: Helge Deller <deller@gmx.de>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/highmem.c: fix zero_user_segments() with start > end · 184cee51
      OGAWA Hirofumi authored
      zero_user_segments() is used from __block_write_begin_int(), for example
      like this:
      
      	zero_user_segments(page, 4096, 1024, 512, 918)
      
      But the new zero_user_segments() implementation for HIGHMEM +
      TRANSPARENT_HUGEPAGE doesn't handle the "start > end" case correctly, and
      hits BUG_ON().  (We could fix __block_write_begin_int() instead, but it is
      old code with multiple callers.)
      
      It also calls kmap_atomic() unnecessarily when start == end == 0.
      
      Link: https://lkml.kernel.org/r/87v9ab60r4.fsf@mail.parknet.co.jp
      Fixes: 0060ef3b ("mm: support THPs in zero_user_segments")
      Signed-off-by: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • hugetlb: do early cow when page pinned on src mm · 4eae4efa
      Peter Xu authored
      This is the last missing piece of the COW-during-fork effort for when
      pinned pages are found.  See 70e806e4 ("mm: Do early cow for pinned
      pages during fork() for ptes", 2020-09-27) for more information; we do a
      similar thing here, just for hugetlb rather than pte.
      
      Note that after Jason's recent work in 57efa1fe ("mm/gup: prevent
      gup_fast from racing with COW during fork", 2020-12-15), which is safer
      and easier to understand, the whole copy_page_range() is now safe against
      gup-fast, so we no longer need the wr-protect trick proposed in
      70e806e4.
      
      Link: https://lkml.kernel.org/r/20210217233547.93892-6-peterx@redhat.com
      Signed-off-by: Peter Xu <peterx@redhat.com>
      Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
      Reviewed-by: Jason Gunthorpe <jgg@ziepe.ca>
      Cc: Alexey Dobriyan <adobriyan@gmail.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Daniel Vetter <daniel@ffwll.ch>
      Cc: David Airlie <airlied@linux.ie>
      Cc: David Gibson <david@gibson.dropbear.id.au>
      Cc: Gal Pressman <galpress@amazon.com>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Jann Horn <jannh@google.com>
      Cc: Kirill Shutemov <kirill@shutemov.name>
      Cc: Kirill Tkhai <ktkhai@virtuozzo.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Miaohe Lin <linmiaohe@huawei.com>
      Cc: Mike Rapoport <rppt@linux.vnet.ibm.com>
      Cc: Roland Scheidegger <sroland@vmware.com>
      Cc: VMware Graphics <linux-graphics-maintainer@vmware.com>
      Cc: Wei Zhang <wzam@amazon.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: use is_cow_mapping() across tree where proper · ca6eb14d
      Peter Xu authored
      Now that is_cow_mapping() is exported in mm.h, replace the open-coded
      checks elsewhere throughout the tree with the new helper.
      
      Link: https://lkml.kernel.org/r/20210217233547.93892-5-peterx@redhat.com
      Signed-off-by: Peter Xu <peterx@redhat.com>
      Reviewed-by: Jason Gunthorpe <jgg@ziepe.ca>
      Cc: VMware Graphics <linux-graphics-maintainer@vmware.com>
      Cc: Roland Scheidegger <sroland@vmware.com>
      Cc: David Airlie <airlied@linux.ie>
      Cc: Daniel Vetter <daniel@ffwll.ch>
      Cc: Mike Kravetz <mike.kravetz@oracle.com>
      Cc: Alexey Dobriyan <adobriyan@gmail.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: David Gibson <david@gibson.dropbear.id.au>
      Cc: Gal Pressman <galpress@amazon.com>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Jann Horn <jannh@google.com>
      Cc: Kirill Shutemov <kirill@shutemov.name>
      Cc: Kirill Tkhai <ktkhai@virtuozzo.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Miaohe Lin <linmiaohe@huawei.com>
      Cc: Mike Rapoport <rppt@linux.vnet.ibm.com>
      Cc: Wei Zhang <wzam@amazon.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: introduce page_needs_cow_for_dma() for deciding whether cow · 97a7e473
      Peter Xu authored
      We've got quite a few places (pte, pmd, pud) that explicitly check
      whether we should break the cow right now during fork().  It's easier to
      provide a helper, especially before we do the same thing for hugetlbfs.
      
      Since we'll reference is_cow_mapping() in mm.h, move it there too.  It
      actually suits mm.h better, since internal.h is mm/-only while mm.h is
      exported to the whole kernel.  With that, expect a follow-up patch to use
      is_cow_mapping() wherever possible across the kernel, since it is used
      quite a lot but always as open-coded checks against the raw VM_* flags.
      
      Link: https://lkml.kernel.org/r/20210217233547.93892-4-peterx@redhat.com
      Signed-off-by: Peter Xu <peterx@redhat.com>
      Reviewed-by: Jason Gunthorpe <jgg@ziepe.ca>
      Cc: Alexey Dobriyan <adobriyan@gmail.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Daniel Vetter <daniel@ffwll.ch>
      Cc: David Airlie <airlied@linux.ie>
      Cc: David Gibson <david@gibson.dropbear.id.au>
      Cc: Gal Pressman <galpress@amazon.com>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Jann Horn <jannh@google.com>
      Cc: Kirill Shutemov <kirill@shutemov.name>
      Cc: Kirill Tkhai <ktkhai@virtuozzo.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Miaohe Lin <linmiaohe@huawei.com>
      Cc: Mike Kravetz <mike.kravetz@oracle.com>
      Cc: Mike Rapoport <rppt@linux.vnet.ibm.com>
      Cc: Roland Scheidegger <sroland@vmware.com>
      Cc: VMware Graphics <linux-graphics-maintainer@vmware.com>
      Cc: Wei Zhang <wzam@amazon.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • hugetlb: break earlier in add_reservation_in_range() when we can · ca7e0457
      Peter Xu authored
      All the regions maintained in the hugetlb reserve map are inclusive on
      "from" but exclusive on "to".  We can break out earlier even when
      rg->from == t, because that already means no possible intersection.
      
      This does not need a Fixes tag, because when it happens (rg->from == t)
      we fail to break out of the loop when we should, but the next iteration
      still adds the last file_region we need and then quits the loop.  So this
      change is not a bugfix (the old code still runs correctly), but it is
      worth touching up to keep the logic sane.
      
      Link: https://lkml.kernel.org/r/20210217233547.93892-3-peterx@redhat.com
      Signed-off-by: Peter Xu <peterx@redhat.com>
      Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
      Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>
      Cc: Alexey Dobriyan <adobriyan@gmail.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Daniel Vetter <daniel@ffwll.ch>
      Cc: David Airlie <airlied@linux.ie>
      Cc: David Gibson <david@gibson.dropbear.id.au>
      Cc: Gal Pressman <galpress@amazon.com>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Jann Horn <jannh@google.com>
      Cc: Jason Gunthorpe <jgg@ziepe.ca>
      Cc: Kirill Shutemov <kirill@shutemov.name>
      Cc: Kirill Tkhai <ktkhai@virtuozzo.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Mike Rapoport <rppt@linux.vnet.ibm.com>
      Cc: Roland Scheidegger <sroland@vmware.com>
      Cc: VMware Graphics <linux-graphics-maintainer@vmware.com>
      Cc: Wei Zhang <wzam@amazon.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • hugetlb: dedup the code to add a new file_region · 2103cf9c
      Peter Xu authored
      Patch series "mm/hugetlb: Early cow on fork, and a few cleanups", v5.
      
      As reported by Gal [1], we are still missing the code to handle early cow
      for the hugetlb case.  It still feels odd to fork() after using a few
      huge pages, especially if they're privately mapped.  However, I agree
      with Gal and Jason that we should have it, since it completes the
      early-cow-on-fork effort and fixes cases where buffers are not well
      under control and MADV_DONTFORK is not easy to apply.
      
      The first two patches (1-2) are some cleanups I noticed when reading into
      the hugetlb reserve map code.  I think it's good to have but they're not
      necessary for fixing the fork issue.
      
      The last two patches (3-4) are the real fix.
      
      I tested this with a fork() after some vfio-pci assignment, so I'm fairly
      sure the page copy path triggers (the page is accounted right after the
      fork()), but I did not do a data check since the card I assigned is some
      random NIC.
      
        https://github.com/xzpeter/linux/tree/fork-cow-pin-huge
      
      [1] https://lore.kernel.org/lkml/27564187-4a08-f187-5a84-3df50009f6ca@amazon.com/
      
      Introduce a hugetlb_resv_map_add() helper to add a new file_region rather
      than duplicating similar code twice in add_reservation_in_range().
      
      Link: https://lkml.kernel.org/r/20210217233547.93892-1-peterx@redhat.com
      Link: https://lkml.kernel.org/r/20210217233547.93892-2-peterx@redhat.com
      Signed-off-by: Peter Xu <peterx@redhat.com>
      Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
      Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>
      Cc: Gal Pressman <galpress@amazon.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Wei Zhang <wzam@amazon.com>
      Cc: Mike Rapoport <rppt@linux.vnet.ibm.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: David Gibson <david@gibson.dropbear.id.au>
      Cc: Jason Gunthorpe <jgg@ziepe.ca>
      Cc: Jann Horn <jannh@google.com>
      Cc: Kirill Tkhai <ktkhai@virtuozzo.com>
      Cc: Kirill Shutemov <kirill@shutemov.name>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Alexey Dobriyan <adobriyan@gmail.com>
      Cc: Daniel Vetter <daniel@ffwll.ch>
      Cc: David Airlie <airlied@linux.ie>
      Cc: Roland Scheidegger <sroland@vmware.com>
      Cc: VMware Graphics <linux-graphics-maintainer@vmware.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/fork: clear PASID for new mm · 82e69a12
      Fenghua Yu authored
      When a new mm is created, its PASID should be cleared, i.e.  the PASID is
      initialized to its init state 0 on both ARM and X86.
      
      This patch was part of the series introducing mm->pasid, but got lost
      along the way [1].  It still makes sense to have it, because each address
      space has a different PASID.  And the IOMMU code in
      iommu_sva_alloc_pasid() expects the pasid field of a new mm struct to be
      cleared.
      
      [1] https://lore.kernel.org/linux-iommu/YDgh53AcQHT+T3L0@otcwcpicx3.sc.intel.com/
      
      Link: https://lkml.kernel.org/r/20210302103837.2562625-1-jean-philippe@linaro.org
      Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
      Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
      Reviewed-by: Tony Luck <tony.luck@intel.com>
      Cc: Jacob Pan <jacob.jun.pan@intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/page_alloc.c: refactor initialization of struct page for holes in memory layout · 0740a50b
      Mike Rapoport authored
      There could be struct pages that are not backed by actual physical memory.
      This can happen when the actual memory bank is not a multiple of
      SECTION_SIZE or when an architecture does not register memory holes
      reserved by the firmware as memblock.memory.
      
      Such pages are currently initialized by the init_unavailable_mem()
      function, which iterates through the PFNs in holes in memblock.memory
      and, if there is a struct page corresponding to a PFN, sets the fields of
      this page to default values and marks the page Reserved.
      
      init_unavailable_mem() does not take into account the zone and node the
      page belongs to and sets both zone and node links in struct page to zero.
      
      Before commit 73a6e474 ("mm: memmap_init: iterate over memblock
      regions rather that check each PFN") the holes inside a zone were
      re-initialized during memmap_init() and got their zone/node links right.
      However, after that commit nothing updates the struct pages representing
      such holes.
      
      On a system that has firmware reserved holes in a zone above ZONE_DMA, for
      instance in a configuration below:
      
      	# grep -A1 E820 /proc/iomem
      	7a17b000-7a216fff : Unknown E820 type
      	7a217000-7bffffff : System RAM
      
      the unset zone link in struct page will trigger
      
      	VM_BUG_ON_PAGE(!zone_spans_pfn(page_zone(page), pfn), page);
      
      in set_pfnblock_flags_mask() when called with a struct page from a range
      other than E820_TYPE_RAM because there are pages in the range of
      ZONE_DMA32 but the unset zone link in struct page makes them appear as a
      part of ZONE_DMA.
      
      Interleave initialization of the unavailable pages with the normal
      initialization of memory map, so that zone and node information will be
      properly set on struct pages that are not backed by the actual memory.
      
      With this change the pages for holes inside a zone will get proper
      zone/node links, and the pages that are not spanned by any node will get
      links to the adjacent zone/node.  The holes between nodes will be
      prepended to the zone/node above the hole, and the trailing pages in the
      last section will be appended to the zone/node below.
      
      [akpm@linux-foundation.org: don't initialize static to zero, use %llu for u64]
      
      Link: https://lkml.kernel.org/r/20210225224351.7356-2-rppt@kernel.org
      Fixes: 73a6e474 ("mm: memmap_init: iterate over memblock regions rather that check each PFN")
      Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
      Reported-by: Qian Cai <cai@lca.pw>
      Reported-by: Andrea Arcangeli <aarcange@redhat.com>
      Reviewed-by: Baoquan He <bhe@redhat.com>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Reviewed-by: David Hildenbrand <david@redhat.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Łukasz Majczak <lma@semihalf.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: "Sarvela, Tomi P" <tomi.p.sarvela@intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • init/Kconfig: make COMPILE_TEST depend on HAS_IOMEM · ea29b20a
      Masahiro Yamada authored
      I read the commit log of the following two:
      
      - bc083a64 ("init/Kconfig: make COMPILE_TEST depend on !UML")
      - 334ef6ed ("init/Kconfig: make COMPILE_TEST depend on !S390")
      
      Both are talking about HAS_IOMEM dependency missing in many drivers.
      
      So, 'depends on HAS_IOMEM' seems the direct, sensible solution to me.
      
      This does not change the behavior of UML. UML still cannot enable
      COMPILE_TEST because it does not provide HAS_IOMEM.
      
      The current restriction for S390 is too strong: with CONFIG_PCI=y, S390
      provides HAS_IOMEM and hence can enable COMPILE_TEST.
      
      I also removed the meaningless 'default n'.
      
      Link: https://lkml.kernel.org/r/20210224140809.1067582-1-masahiroy@kernel.org
      Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
      Cc: Heiko Carstens <hca@linux.ibm.com>
      Cc: Guenter Roeck <linux@roeck-us.net>
      Cc: Arnd Bergmann <arnd@kernel.org>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Daniel Borkmann <daniel@iogearbox.net>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: KP Singh <kpsingh@google.com>
      Cc: Nathan Chancellor <nathan@kernel.org>
      Cc: Nick Terrell <terrelln@fb.com>
      Cc: Quentin Perret <qperret@google.com>
      Cc: Valentin Schneider <valentin.schneider@arm.com>
      Cc: "Enrico Weigelt, metux IT consult" <lkml@metux.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • stop_machine: mark helpers __always_inline · cbf78d85
      Arnd Bergmann authored
      With clang-13, some functions only get partially inlined, with a
      specialized version referring to a global variable.  This triggers a
      harmless build-time check for the intel-rng driver:
      
      WARNING: modpost: drivers/char/hw_random/intel-rng.o(.text+0xe): Section mismatch in reference from the function stop_machine() to the function .init.text:intel_rng_hw_init()
      The function stop_machine() references
      the function __init intel_rng_hw_init().
      This is often because stop_machine lacks a __init
      annotation or the annotation of intel_rng_hw_init is wrong.
      
      In this instance, an easy workaround is to force the stop_machine()
      function to be inlined, along with related interfaces that do not show
      the same behavior at the moment but theoretically could.
      
      The combination of the two patches listed below triggers the behavior in
      clang-13, but individually these commits are correct.
      
      Link: https://lkml.kernel.org/r/20210225130153.1956990-1-arnd@kernel.org
      Fixes: fe5595c0 ("stop_machine: Provide stop_machine_cpuslocked()")
      Fixes: ee527cd3 ("Use stop_machine_run in the Intel RNG driver")
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Cc: Nathan Chancellor <nathan@kernel.org>
      Cc: Nick Desaulniers <ndesaulniers@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Cc: "Paul E. McKenney" <paulmck@kernel.org>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Prarit Bhargava <prarit@redhat.com>
      Cc: Daniel Bristot de Oliveira <bristot@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Valentin Schneider <valentin.schneider@arm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • memblock: fix section mismatch warning · 34dc2efb
      Arnd Bergmann authored
      The inlining logic in clang-13 is rewritten to often not inline some
      functions that were inlined by all earlier compilers.
      
      In case of the memblock interfaces, this exposed a harmless bug of a
      missing __init annotation:
      
      WARNING: modpost: vmlinux.o(.text+0x507c0a): Section mismatch in reference from the function memblock_bottom_up() to the variable .meminit.data:memblock
      The function memblock_bottom_up() references
      the variable __meminitdata memblock.
      This is often because memblock_bottom_up lacks a __meminitdata
      annotation or the annotation of memblock is wrong.
      
      Interestingly, these annotations were present originally but got removed
      with the explanation that the __init annotation prevents the function
      from getting inlined.  I checked this again and found that while this is
      the case with clang, gcc (versions 7 through 10; I did not test others)
      does inline the functions regardless.
      
      As the previous change was apparently intended to help the clang builds,
      reverting it to help the newer clang versions seems appropriate as well.
      gcc builds don't seem to care either way.
      
      Link: https://lkml.kernel.org/r/20210225133808.2188581-1-arnd@kernel.org
      Fixes: 5bdba520 ("mm: memblock: drop __init from memblock functions to make it inline")
      Reference: 2cfb3665 ("include/linux/memblock.h: add __init to memblock_set_bottom_up()")
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Reviewed-by: David Hildenbrand <david@redhat.com>
      Reviewed-by: Mike Rapoport <rppt@linux.ibm.com>
      Cc: Nathan Chancellor <nathan@kernel.org>
      Cc: Nick Desaulniers <ndesaulniers@google.com>
      Cc: Faiyaz Mohammed <faiyazm@codeaurora.org>
      Cc: Baoquan He <bhe@redhat.com>
      Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
      Cc: Aslan Bakirov <aslan@fb.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  2. 12 Mar, 2021 5 commits
  3. 11 Mar, 2021 21 commits