12 Dec, 2022 32 commits
10 Dec, 2022 8 commits
    • memcg: fix possible use-after-free in memcg_write_event_control() · 4a7ba45b
      Tejun Heo authored
      memcg_write_event_control() accesses the dentry->d_name of the specified
      control fd to route the write call.  As a cgroup interface file can't be
      renamed, it's safe to access d_name as long as the specified file is a
      regular cgroup file.  Also, as these cgroup interface files can't be
      removed before the directory, it's safe to access the parent too.
      
      Prior to 347c4a87 ("memcg: remove cgroup_event->cft"), there was a
      call to __file_cft() which verified that the specified file is a regular
      cgroupfs file before further accesses.  The cftype pointer returned from
      __file_cft() was no longer necessary, and the commit inadvertently dropped
      the file type check with it, allowing any file to slip through.  With the
      invariants broken, the d_name and parent accesses can now race against
      renames and removals of arbitrary files and cause use-after-frees.
      
      Fix the bug by resurrecting the file type check that __file_cft() used
      to perform.  Now that cgroupfs is implemented through kernfs, checking
      the file operations needs to go through a layer of indirection.
      Instead, let's check the superblock and dentry type.
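      
      As an illustration, the restored check can be as small as the following
      sketch in memcg_write_event_control(), placed after the control fd has
      been resolved (variable and label names follow the surrounding function;
      a sketch, not the verbatim patch):
      
          struct dentry *cdentry;
      
          /*
           * The control file must be a regular file on cgroupfs; anything
           * else may be renamed or removed underneath us.
           */
          cdentry = cfile.file->f_path.dentry;
          if (cdentry->d_sb->s_type != &cgroup_fs_type || !d_is_reg(cdentry)) {
                  ret = -EINVAL;
                  goto out_put_cfile;
          }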
      
      Link: https://lkml.kernel.org/r/Y5FRm/cfcKPGzWwl@slm.duckdns.org
      Fixes: 347c4a87 ("memcg: remove cgroup_event->cft")
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reported-by: Jann Horn <jannh@google.com>
      Acked-by: Roman Gushchin <roman.gushchin@linux.dev>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Muchun Song <songmuchun@bytedance.com>
      Cc: Shakeel Butt <shakeelb@google.com>
      Cc: <stable@vger.kernel.org>	[3.14+]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    • MAINTAINERS: update Muchun Song's email · a501788a
      Muchun Song authored
      I'm moving to the @linux.dev account.  Map my old addresses to my new
      address and update MAINTAINERS accordingly.
      
      Link: https://lkml.kernel.org/r/20221208115548.85244-1-songmuchun@bytedance.com
      Signed-off-by: Muchun Song <songmuchun@bytedance.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    • mm/gup: fix gup_pud_range() for dax · fcd0ccd8
      John Starks authored
      For a dax PUD, pud_huge() returns true on x86, so gup_pud_range() works
      as long as hugetlb is configured.  However, dax doesn't depend on hugetlb.
      Commit 414fd080 ("mm/gup: fix gup_pmd_range() for dax") fixed
      devmap-backed huge PMDs, but missed devmap-backed huge PUDs. Fix this as
      well.
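      
      Concretely, the huge-PUD fast-path test in gup_pud_range() (mm/gup.c)
      gains a pud_devmap() check, mirroring the PMD-level fix.  A sketch of
      the resulting condition, as illustration rather than the verbatim diff:
      
          /*
           * Devmap-backed (dax) huge PUDs must take the huge-PUD path
           * too: dax does not depend on hugetlb.
           */
          if (unlikely(pud_huge(pud) || pud_devmap(pud))) {
                  if (!gup_huge_pud(pud, pudp, addr, next, flags,
                                    pages, nr))
                          return 0;
          }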
      
      This fixes the below kernel panic:
      
      general protection fault, probably for non-canonical address 0x69e7c000cc478: 0000 [#1] SMP
      	< snip >
      Call Trace:
      <TASK>
      get_user_pages_fast+0x1f/0x40
      iov_iter_get_pages+0xc6/0x3b0
      ? mempool_alloc+0x5d/0x170
      bio_iov_iter_get_pages+0x82/0x4e0
      ? bvec_alloc+0x91/0xc0
      ? bio_alloc_bioset+0x19a/0x2a0
      blkdev_direct_IO+0x282/0x480
      ? __io_complete_rw_common+0xc0/0xc0
      ? filemap_range_has_page+0x82/0xc0
      generic_file_direct_write+0x9d/0x1a0
      ? inode_update_time+0x24/0x30
      __generic_file_write_iter+0xbd/0x1e0
      blkdev_write_iter+0xb4/0x150
      ? io_import_iovec+0x8d/0x340
      io_write+0xf9/0x300
      io_issue_sqe+0x3c3/0x1d30
      ? sysvec_reschedule_ipi+0x6c/0x80
      __io_queue_sqe+0x33/0x240
      ? fget+0x76/0xa0
      io_submit_sqes+0xe6a/0x18d0
      ? __fget_light+0xd1/0x100
      __x64_sys_io_uring_enter+0x199/0x880
      ? __context_tracking_enter+0x1f/0x70
      ? irqentry_exit_to_user_mode+0x24/0x30
      ? irqentry_exit+0x1d/0x30
      ? __context_tracking_exit+0xe/0x70
      do_syscall_64+0x3b/0x90
      entry_SYSCALL_64_after_hwframe+0x61/0xcb
      RIP: 0033:0x7fc97c11a7be
      	< snip >
      </TASK>
      ---[ end trace 48b2e0e67debcaeb ]---
      RIP: 0010:internal_get_user_pages_fast+0x340/0x990
      	< snip >
      Kernel panic - not syncing: Fatal exception
      Kernel Offset: disabled
      
      Link: https://lkml.kernel.org/r/1670392853-28252-1-git-send-email-ssengar@linux.microsoft.com
      Fixes: 414fd080 ("mm/gup: fix gup_pmd_range() for dax")
      Signed-off-by: John Starks <jostarks@microsoft.com>
      Signed-off-by: Saurabh Sengar <ssengar@linux.microsoft.com>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Yu Zhao <yuzhao@google.com>
      Cc: Jason Gunthorpe <jgg@nvidia.com>
      Cc: John Hubbard <jhubbard@nvidia.com>
      Cc: David Hildenbrand <david@redhat.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Alistair Popple <apopple@nvidia.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    • mmap: fix do_brk_flags() modifying obviously incorrect VMAs · 6c28ca64
      Liam Howlett authored
      Add more sanity checks to the VMA that do_brk_flags() will expand.  Ensure
      the VMA matches basic merge requirements within the function before
      calling can_vma_merge_after().
      
      Drop the duplicate checks from vm_brk_flags() since they will be enforced
      later.
      
      The old code would expand file VMAs on brk(), which is functionally
      wrong and also dangerous in terms of locking because the brk() path
      isn't designed for file VMAs and therefore doesn't lock the file
      mapping.  Checking can_vma_merge_after() ensures that new anonymous
      VMAs can't be merged into file VMAs.
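      
      A sketch of the tightened check in do_brk_flags() (mm/mmap.c); the
      can_vma_merge_after() argument list below follows the 6.1-era signature
      and is an approximation, not the verbatim diff:
      
          /*
           * Expand vma only when doing so is also a valid anonymous
           * merge; this rejects file VMAs among other unfit candidates.
           */
          if (vma && vma->vm_end == addr && !vma_policy(vma) &&
              can_vma_merge_after(vma, flags, NULL, NULL,
                                  addr >> PAGE_SHIFT, NULL_VM_UFFD_CTX, NULL)) {
                  /* ... expand the existing anonymous VMA in place ... */
          }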
      
      See https://lore.kernel.org/linux-mm/CAG48ez1tJZTOjS_FjRZhvtDA-STFmdw8PEizPDwMGFd_ui0Nrw@mail.gmail.com/
      
      Link: https://lkml.kernel.org/r/20221205192304.1957418-1-Liam.Howlett@oracle.com
      Fixes: 2e7ce7d3 ("mm/mmap: change do_brk_flags() to expand existing VMA and add do_brk_munmap()")
      Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com>
      Suggested-by: Jann Horn <jannh@google.com>
      Cc: Jason A. Donenfeld <Jason@zx2c4.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: SeongJae Park <sj@kernel.org>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Yu Zhao <yuzhao@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    • mm/swap: fix SWP_PFN_BITS with CONFIG_PHYS_ADDR_T_64BIT on 32bit · 630dc25e
      David Hildenbrand authored
      We use "unsigned long" to store a PFN in the kernel and phys_addr_t to
      store a physical address.
      
      On a 64bit system, both are 64bit wide.  However, on a 32bit system, the
      latter might be 64bit wide.  This is, for example, the case on x86 with
      PAE: phys_addr_t and PTEs are 64bit wide, while "unsigned long" only spans
      32bit.
      
      The current definition of SWP_PFN_BITS without MAX_PHYSMEM_BITS misses
      that case, and assumes that the maximum PFN is limited by a 32bit
      phys_addr_t.  This implies that SWP_PFN_BITS will currently only be able
      to cover 4 GiB - 1 on any 32bit system with 4k page size, which is wrong.
      
      Let's rely on the number of bits in phys_addr_t instead, but make sure to
      not exceed the maximum swap offset, to not make the BUILD_BUG_ON() in
      is_pfn_swap_entry() unhappy.  Note that swp_entry_t is effectively an
      unsigned long and the maximum swap offset shares that value with the swap
      type.
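      
      With that, the definitions can take roughly the following shape for
      configs without MAX_PHYSMEM_BITS (include/linux/swapops.h; a sketch,
      not the verbatim patch):
      
          /*
           * Derive the PFN width from phys_addr_t rather than from
           * "unsigned long", but never exceed the maximum swap offset,
           * which shares the swp_entry_t value with the swap type.
           */
          #define SWP_PFN_BITS    min_t(int, \
                                        sizeof(phys_addr_t) * 8 - PAGE_SHIFT, \
                                        SWP_TYPE_SHIFT)
          #define SWP_PFN_MASK    (BIT(SWP_PFN_BITS) - 1)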
      
      For example, on an 8 GiB x86 PAE system with a kernel config based on
      Debian 11.5 (-> CONFIG_FLATMEM=y, CONFIG_X86_PAE=y), we will currently
      fail removing migration entries (remove_migration_ptes()), because
      mm/page_vma_mapped.c:check_pte() will fail to identify a PFN match as
      swp_offset_pfn() wrongly masks off PFN bits.  For example,
      split_huge_page_to_list()->...->remap_page() will leave migration entries
      in place and continue to unlock the page.
      
      Later, when we stumble over these migration entries (e.g., via
      /proc/self/pagemap), pfn_swap_entry_to_page() will BUG_ON() because these
      migration entries shouldn't exist anymore and the page was unlocked.
      
      [   33.067591] kernel BUG at include/linux/swapops.h:497!
      [   33.067597] invalid opcode: 0000 [#1] PREEMPT SMP NOPTI
      [   33.067602] CPU: 3 PID: 742 Comm: cow Tainted: G            E      6.1.0-rc8+ #16
      [   33.067605] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.0-1.fc36 04/01/2014
      [   33.067606] EIP: pagemap_pmd_range+0x644/0x650
      [   33.067612] Code: 00 00 00 00 66 90 89 ce b9 00 f0 ff ff e9 ff fb ff ff 89 d8 31 db e8 48 c6 52 00 e9 23 fb ff ff e8 61 83 56 00 e9 b6 fe ff ff <0f> 0b bf 00 f0 ff ff e9 38 fa ff ff 3e 8d 74 26 00 55 89 e5 57 31
      [   33.067615] EAX: ee394000 EBX: 00000002 ECX: ee394000 EDX: 00000000
      [   33.067617] ESI: c1b0ded4 EDI: 00024a00 EBP: c1b0ddb4 ESP: c1b0dd68
      [   33.067619] DS: 007b ES: 007b FS: 00d8 GS: 0033 SS: 0068 EFLAGS: 00010246
      [   33.067624] CR0: 80050033 CR2: b7a00000 CR3: 01bbbd20 CR4: 00350ef0
      [   33.067625] Call Trace:
      [   33.067628]  ? madvise_free_pte_range+0x720/0x720
      [   33.067632]  ? smaps_pte_range+0x4b0/0x4b0
      [   33.067634]  walk_pgd_range+0x325/0x720
      [   33.067637]  ? mt_find+0x1d6/0x3a0
      [   33.067641]  ? mt_find+0x1d6/0x3a0
      [   33.067643]  __walk_page_range+0x164/0x170
      [   33.067646]  walk_page_range+0xf9/0x170
      [   33.067648]  ? __kmem_cache_alloc_node+0x2a8/0x340
      [   33.067653]  pagemap_read+0x124/0x280
      [   33.067658]  ? default_llseek+0x101/0x160
      [   33.067662]  ? smaps_account+0x1d0/0x1d0
      [   33.067664]  vfs_read+0x90/0x290
      [   33.067667]  ? do_madvise.part.0+0x24b/0x390
      [   33.067669]  ? debug_smp_processor_id+0x12/0x20
      [   33.067673]  ksys_pread64+0x58/0x90
      [   33.067675]  __ia32_sys_ia32_pread64+0x1b/0x20
      [   33.067680]  __do_fast_syscall_32+0x4c/0xc0
      [   33.067683]  do_fast_syscall_32+0x29/0x60
      [   33.067686]  do_SYSENTER_32+0x15/0x20
      [   33.067689]  entry_SYSENTER_32+0x98/0xf1
      
      Decrease the indentation level of SWP_PFN_BITS and SWP_PFN_MASK to keep it
      readable and consistent.
      
      [david@redhat.com: rely on sizeof(phys_addr_t) and min_t() instead]
        Link: https://lkml.kernel.org/r/20221206105737.69478-1-david@redhat.com
      [david@redhat.com: use "int" for comparison, as we're only comparing numbers < 64]
        Link: https://lkml.kernel.org/r/1f157500-2676-7cef-a84e-9224ed64e540@redhat.com
      Link: https://lkml.kernel.org/r/20221205150857.167583-1-david@redhat.com
      Fixes: 0d206b5d ("mm/swap: add swp_offset_pfn() to fetch PFN from swap entry")
      Signed-off-by: David Hildenbrand <david@redhat.com>
      Acked-by: Peter Xu <peterx@redhat.com>
      Reviewed-by: Yang Shi <shy828301@gmail.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    • tmpfs: fix data loss from failed fallocate · 44bcabd7
      Hugh Dickins authored
      Fix tmpfs data loss when the fallocate system call is interrupted by a
      signal, or fails for some other reason.  The partial folio handling in
      shmem_undo_range() forgot to consider this unfalloc case, and was liable
      to erase or truncate out data which had already been committed earlier.
      
      It turns out that none of the partial folio handling there is appropriate
      for the unfalloc case, which just wants to proceed to removing whole
      folios; find_get_entries() provides those even when they are only
      partially covered by the range.
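      
      The shape of the fix in shmem_undo_range() (mm/shmem.c), sketched here
      with the label name illustrative rather than guaranteed verbatim:
      
          /*
           * When undoing a failed fallocate, we want none of the partial
           * folio handling below: just remove the whole folios that
           * find_get_entries() returns, leaving committed data intact.
           */
          if (unfalloc)
                  goto whole_folios;
      
          /* ... existing partial-folio truncation and splitting ... */
      
      whole_folios:
          /* ... existing removal of whole folios ... */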
      
      Original patch by Rui Wang.
      
      Link: https://lore.kernel.org/linux-mm/33b85d82.7764.1842e9ab207.Coremail.chenguoqic@163.com/
      Link: https://lkml.kernel.org/r/a5dac112-cf4b-7af-a33-f386e347fd38@google.com
      Fixes: b9a8a419 ("truncate,shmem: Handle truncates that split large folios")
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Reported-by: Guoqi Chen <chenguoqic@163.com>
        Link: https://lore.kernel.org/all/20221101032248.819360-1-kernel@hev.cc/
      Cc: Rui Wang <kernel@hev.cc>
      Cc: Huacai Chen <chenhuacai@loongson.cn>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Vishal Moola (Oracle) <vishal.moola@gmail.com>
      Cc: <stable@vger.kernel.org>	[5.17+]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    • kselftests: cgroup: update kmem test precision tolerance · de16d6e4
      Michal Hocko authored
      1813e51e ("memcg: increase MEMCG_CHARGE_BATCH to 64") changed the
      per-cpu charge batch size, but this test case was left behind.  That led
      to a test failure reported by the kernel test robot:
      
      not ok 2 selftests: cgroup: test_kmem # exit=1
      
      Update the tolerance for the pcp charges to reflect the
      MEMCG_CHARGE_BATCH change, fixing the failure.
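      
      The new margin follows directly from the batch size: each CPU's per-cpu
      stock can cache up to MEMCG_CHARGE_BATCH pages of charges that are not
      yet reflected in the stats.  A hypothetical illustration of the
      worst-case slack (the helper below is made up for this explanation, not
      the selftest's code):
      
          #include <sys/sysinfo.h>
      
          #define MEMCG_CHARGE_BATCH      64      /* 32 before 1813e51e */
      
          /*
           * Worst-case gap between memory.current and the accounted
           * stats: one full per-cpu stock of 4 KiB pages per CPU.
           */
          static long max_pcp_error(void)
          {
                  return (long)MEMCG_CHARGE_BATCH * 4096 * get_nprocs();
          }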
      
      [akpm@linux-foundation.org: update comments, per Roman]
      Link: https://lkml.kernel.org/r/Y4m8Unt6FhWKC6IH@dhcp22.suse.cz
      Fixes: 1813e51e ("memcg: increase MEMCG_CHARGE_BATCH to 64")
      Signed-off-by: Michal Hocko <mhocko@suse.com>
      Reported-by: kernel test robot <yujie.liu@intel.com>
        Link: https://lore.kernel.org/oe-lkp/202212010958.c1053bd3-yujie.liu@intel.com
      Acked-by: Shakeel Butt <shakeelb@google.com>
      Acked-by: Roman Gushchin <roman.gushchin@linux.dev>
      Tested-by: Yujie Liu <yujie.liu@intel.com>
      Cc: Eric Dumazet <edumazet@google.com>
      Cc: Feng Tang <feng.tang@intel.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: "Michal Koutný" <mkoutny@suse.com>
      Cc: Muchun Song <songmuchun@bytedance.com>
      Cc: Soheil Hassas Yeganeh <soheil@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>