1. 12 Sep, 2022 9 commits
    • checkpatch: add kmap and kmap_atomic to the deprecated list · defdaff1
      Ira Weiny authored
      kmap() and kmap_atomic() are being deprecated in favor of
      kmap_local_page().
      
      There are two main problems with kmap(): (1) it comes with an overhead,
      as the mapping space is restricted and protected by a global lock for
      synchronization, and (2) it requires a global TLB invalidation when the
      kmap pool wraps and, when the mapping space is fully utilized, it can
      block until a slot becomes available.
      
      kmap_local_page() is safe to call from any context and therefore makes
      kmap_atomic() redundant, except where code actually requires page faults
      or preemption to be disabled.  However, relying on kmap_atomic() for
      those side effects makes the code less clear, so any requirement to
      disable page faults or preemption should be made explicit.
      
      With kmap_local_page() the mappings are per-thread and CPU-local, can
      take page faults, and can be called from any context (including
      interrupts).  It is faster than kmap() in kernels with HIGHMEM enabled.
      Furthermore, tasks can be preempted and, when they are scheduled to run
      again, the kernel virtual addresses are restored.
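
      As a minimal sketch of the preferred conversion (the helper below is
      hypothetical, not taken from the patch), the replacement is usually
      mechanical; where the old code relied on kmap_atomic() to disable page
      faults or preemption, that side effect is now spelled out:

        /* Hypothetical example: copy out of a (possibly highmem) page. */
        static void copy_from_page_example(struct page *page, void *dst,
                                           size_t len)
        {
                char *vaddr;

                vaddr = kmap_local_page(page); /* was: kmap_atomic(page) */
                memcpy(dst, vaddr, len);
                kunmap_local(vaddr);           /* was: kunmap_atomic(vaddr) */
        }

      If the section really does depend on page faults or preemption being
      disabled, wrap it in pagefault_disable()/pagefault_enable() or
      preempt_disable()/preempt_enable() explicitly.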
      
      Link: https://lkml.kernel.org/r/20220813220034.806698-1-ira.weiny@intel.com
      Signed-off-by: Ira Weiny <ira.weiny@intel.com>
      Suggested-by: Thomas Gleixner <tglx@linutronix.de>
      Suggested-by: Fabio M. De Francesco <fmdefrancesco@gmail.com>
      Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
      Cc: Joe Perches <joe@perches.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    • fs/isofs: replace kmap() with kmap_local_page() · 5bb6ce3a
      Fabio M. De Francesco authored
      The use of kmap() is being deprecated in favor of kmap_local_page().
      
      There are two main problems with kmap(): (1) it comes with an overhead,
      as the mapping space is restricted and protected by a global lock for
      synchronization, and (2) it requires a global TLB invalidation when the
      kmap pool wraps and, when the mapping space is fully utilized, it can
      block until a slot becomes available.
      
      With kmap_local_page() the mappings are per-thread and CPU-local, can
      take page faults, and can be called from any context (including
      interrupts).  Tasks can be preempted and, when scheduled to run again,
      the kernel virtual addresses are restored and still valid.  It is
      faster than kmap() in kernels with HIGHMEM enabled.
      
      Since kmap_local_page() can be used safely everywhere in compress.c, it
      should be called there instead of kmap().

      Therefore, replace kmap() with kmap_local_page() in compress.c.  Where
      appropriate, use memzero_page() instead of open coding kmap_local_page()
      plus memset() to fill the pages with zeros, and delete the now-redundant
      flush_dcache_page() at the two call sites of memzero_page(), since
      memzero_page() already flushes the dcache; see the sketch below.
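
      An illustrative sketch of that conversion (variable names are
      hypothetical, not taken from compress.c):

        /* Before: open-coded zeroing of part of a highmem page. */
        char *vaddr = kmap_local_page(page);
        memset(vaddr + offset, 0, len);
        kunmap_local(vaddr);
        flush_dcache_page(page); /* becomes redundant with memzero_page() */

        /* After: memzero_page() maps the page, zeroes the range,
         * flushes the dcache and unmaps -- all in one call. */
        memzero_page(page, offset, len);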
      
      Tested with mkisofs on a QEMU/KVM x86_32 VM, 6GB RAM, booting a kernel
      with HIGHMEM64GB enabled.
      
      Link: https://lkml.kernel.org/r/20220801122709.8164-1-fmdefrancesco@gmail.com
      Signed-off-by: Fabio M. De Francesco <fmdefrancesco@gmail.com>
      Suggested-by: Ira Weiny <ira.weiny@intel.com>
      Reviewed-by: Jan Kara <jack@suse.cz>
      Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
      Cc: Roman Gushchin <roman.gushchin@linux.dev>
      Cc: Pali Rohár <pali@kernel.org>
      Cc: Muchun Song <songmuchun@bytedance.com>
      Cc: Theodore Ts'o <tytso@mit.edu>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    • treewide: defconfig: address renamed CONFIG_DEBUG_INFO=y · 64367f2e
      Arnd Bergmann authored
      CONFIG_DEBUG_INFO is now implicitly selected when one picks one of the
      explicit DWARF-version options: DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT,
      DEBUG_INFO_DWARF4 or DEBUG_INFO_DWARF5.
      
      This was actually not what I had in mind when I suggested making it a
      'choice' statement, but it's too late to change again now, and the Kconfig
      logic is more sensible in the new form.
      
      Change any defconfig file that had CONFIG_DEBUG_INFO enabled but did not
      pick DWARF4 or DWARF5 explicitly to now pick the toolchain default.
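
      The per-file change is of this form (illustrative; which option applies
      depends on what the defconfig previously selected):

        -CONFIG_DEBUG_INFO=y
        +CONFIG_DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT=y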
      
      Link: https://lkml.kernel.org/r/20220811114609.2097335-1-arnd@kernel.org
      Fixes: f9b3cd24 ("Kconfig.debug: make DEBUG_INFO selectable from a choice")
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Reviewed-by: Kees Cook <keescook@chromium.org>
      Cc: Richard Henderson <rth@twiddle.net>
      Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
      Cc: Matt Turner <mattst88@gmail.com>
      Cc: Vineet Gupta <vgupta@kernel.org>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
      Cc: Dinh Nguyen <dinguyen@kernel.org>
      Cc: Yoshinori Sato <ysato@users.osdn.me>
      Cc: Rich Felker <dalias@libc.org>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: Anton Ivanov <anton.ivanov@cambridgegreys.com>
      Cc: Johannes Berg <johannes@sipsolutions.net>
      Cc: Chris Zankel <chris@zankel.net>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    • ipc/util.c: cleanup and improve sysvipc_find_ipc() · 58b5c203
      Manfred Spraul authored
      sysvipc_find_ipc() can be simplified further:
      
      - It uses a for() loop to locate the next entry in the idr.  This can
        be replaced with idr_get_next(); see the sketch below.

      - It receives two parameters: pos, which is actually an idr index and
        not a position, and new_pos, which really is a position.  One
        parameter is sufficient.
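
      A sketch of the simplified lookup (locking and RCU details elided; the
      position<->index conversion follows the description above):

        static struct kern_ipc_perm *sysvipc_find_ipc(struct ipc_ids *ids,
                                                      loff_t *pos)
        {
                int tmpidx = *pos - 1;          /* position -> idr index */
                struct kern_ipc_perm *ipc;

                /* idr_get_next() returns the next allocated entry at or
                 * after tmpidx and updates tmpidx to its actual index. */
                ipc = idr_get_next(&ids->ipcs_idr, &tmpidx);
                if (ipc)
                        *pos = tmpidx + 1;      /* idr index -> position */
                return ipc;
        }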
      
      Link: https://lore.kernel.org/all/20210903052020.3265-3-manfred@colorfullife.com/
      Link: https://lkml.kernel.org/r/20220805115733.104763-1-manfred@colorfullife.com
      Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
      Acked-by: Davidlohr Bueso <dave@stgolabs.net>
      Acked-by: Waiman Long <longman@redhat.com>
      Cc: "Eric W . Biederman" <ebiederm@xmission.com>
      Cc: <1vier1@web.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    • scripts/decodecode: improve faulting line determination · 765f2bf0
      Borislav Petkov authored
      There are cases where the IP pointer in a Code: line in an oops doesn't
      point at the beginning of an instruction:
      
      Code: 0f bd c2 e9 a0 cd b5 e4 48 0f bd c2 e9 97 cd b5 e4 0f 1f 80 00 00 00 00 \
      	  e9 8b cd b5 e4 0f 1f 00 66 0f a3 d0 e9 7f cd b5 e4 0f 1f <80> 00 00 00 \
      	  00 0f a3 d0 e9 70 cd b5 e4 48 0f a3 d0 e9 67 cd b5
      
        e9 7f cd b5 e4          jmp    0xffffffffe4b5cda8
        0f 1f 80 00 00 00 00    nopl   0x0(%rax)
      	^^
      
      and the current way of determining the faulting instruction line doesn't
      work, because the disassembled instructions are counted from the IP byte
      to the end, and when the IP points into the middle of an instruction,
      the trailing bytes can be interpreted as different insns:
      
        Code starting with the faulting instruction
        ===========================================
           0:   80 00 00                addb   $0x0,(%rax)
           3:   00 00                   add    %al,(%rax)
      
      whereas this byte sequence is really part of
      
      0f 1f 80 00 00 00 00    nopl   0x0(%rax)
      
           5:   0f a3 d0                bt     %edx,%eax
           ...
      
      leading to:
      
        1d:   0f 1f 00                nopl   (%rax)
        20:   66 0f a3 d0             bt     %dx,%ax
        24:*  e9 7f cd b5 e4          jmp    0xffffffffe4b5cda8               <-- trapping instruction
        29:   0f 1f 80 00 00 00 00    nopl   0x0(%rax)
        30:   0f a3 d0                bt     %edx,%eax
      
      which is the wrong faulting instruction.
      
      Change the way the faulting line number is determined by matching the
      opcode bytes from the beginning, leading to correct output:
      
        1d:   0f 1f 00                nopl   (%rax)
        20:   66 0f a3 d0             bt     %dx,%ax
        24:   e9 7f cd b5 e4          jmp    0xffffffffe4b5cda8
        29:*  0f 1f 80 00 00 00 00    nopl   0x0(%rax)                <-- trapping instruction
        30:   0f a3 d0                bt     %edx,%eax
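
      The idea, as a simplified C sketch (decodecode itself is a shell
      script, so this is only an illustration of the matching logic):
      disassemble all opcode bytes from the start, accumulate instruction
      lengths into byte offsets, and mark the instruction whose starting
      offset equals the offset of the <..> marker:

        #include <stdio.h>

        /* insn_len[] holds the byte length of each disassembled insn,
         * counted from the first opcode byte; mark_off is the byte
         * offset of the <..> marker within the Code: bytes. */
        static int faulting_insn(const int *insn_len, int n, int mark_off)
        {
                int i, off = 0;

                for (i = 0; i < n; i++) {
                        if (off == mark_off)
                                return i;  /* starts exactly at the marker */
                        off += insn_len[i];
                }
                return -1;
        }

        int main(void)
        {
                /* lengths of the five insns in the listing above */
                int lens[] = { 3, 4, 5, 7, 3 };

                /* marker sits 12 bytes in -> insn index 3, the nopl */
                printf("%d\n", faulting_insn(lens, 5, 12));
                return 0;
        }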
      
      While at it, make decodecode use bash as the interpreter - bash should
      be present on everything by now, and it simplifies the script a lot.
      
      Link: https://lkml.kernel.org/r/20220808085928.29840-1-bp@alien8.de
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: Marc Zyngier <maz@kernel.org>
      Cc: Will Deacon <will@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    • hfsplus: convert kmap() to kmap_local_page() in btree.c · 9f25f357
      Fabio M. De Francesco authored
      kmap() is being deprecated in favor of kmap_local_page().
      
      There are two main problems with kmap(): (1) it comes with an overhead,
      as the mapping space is restricted and protected by a global lock for
      synchronization, and (2) it requires a global TLB invalidation when the
      kmap pool wraps and, when the mapping space is fully utilized, it can
      block until a slot becomes available.
      
      With kmap_local_page() the mappings are per-thread and CPU-local, can
      take page faults, and can be called from any context (including
      interrupts).  It is faster than kmap() in kernels with HIGHMEM enabled.
      Furthermore, tasks can be preempted and, when they are scheduled to run
      again, the kernel virtual addresses are restored and still valid.
      
      Since kmap_local_page() can be used safely everywhere in btree.c, it
      should be preferred there.
      
      Therefore, replace kmap() with kmap_local_page() in btree.c.
      
      Tested in a QEMU/KVM x86_32 VM, 6GB RAM, booting a kernel with
      HIGHMEM64GB enabled.
      
      Link: https://lkml.kernel.org/r/20220809203105.26183-5-fmdefrancesco@gmail.com
      Signed-off-by: Fabio M. De Francesco <fmdefrancesco@gmail.com>
      Suggested-by: Ira Weiny <ira.weiny@intel.com>
      Reviewed-by: Ira Weiny <ira.weiny@intel.com>
      Reviewed-by: Viacheslav Dubeyko <slava@dubeyko.com>
      Cc: Bart Van Assche <bvanassche@acm.org>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Muchun Song <songmuchun@bytedance.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    • hfsplus: convert kmap() to kmap_local_page() in bitmap.c · f9ef3b95
      Fabio M. De Francesco authored
      kmap() is being deprecated in favor of kmap_local_page().
      
      There are two main problems with kmap(): (1) it comes with an overhead,
      as the mapping space is restricted and protected by a global lock for
      synchronization, and (2) it requires a global TLB invalidation when the
      kmap pool wraps and, when the mapping space is fully utilized, it can
      block until a slot becomes available.
      
      With kmap_local_page() the mappings are per-thread and CPU-local, can
      take page faults, and can be called from any context (including
      interrupts).  It is faster than kmap() in kernels with HIGHMEM enabled.
      Furthermore, tasks can be preempted and, when they are scheduled to run
      again, the kernel virtual addresses are restored and still valid.
      
      Since kmap_local_page() can be used safely everywhere in bitmap.c, it
      should be preferred there.
      
      Therefore, replace kmap() with kmap_local_page() in bitmap.c.
      
      Tested in a QEMU/KVM x86_32 VM, 6GB RAM, booting a kernel with
      HIGHMEM64GB enabled.
      
      Link: https://lkml.kernel.org/r/20220809203105.26183-4-fmdefrancesco@gmail.com
      Signed-off-by: Fabio M. De Francesco <fmdefrancesco@gmail.com>
      Suggested-by: Ira Weiny <ira.weiny@intel.com>
      Reviewed-by: Ira Weiny <ira.weiny@intel.com>
      Reviewed-by: Viacheslav Dubeyko <slava@dubeyko.com>
      Cc: Bart Van Assche <bvanassche@acm.org>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Muchun Song <songmuchun@bytedance.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    • hfsplus: convert kmap() to kmap_local_page() in bnode.c · 6c3014a6
      Fabio M. De Francesco authored
      kmap() is being deprecated in favor of kmap_local_page().
      
      There are two main problems with kmap(): (1) it comes with an overhead,
      as the mapping space is restricted and protected by a global lock for
      synchronization, and (2) it requires a global TLB invalidation when the
      kmap pool wraps and, when the mapping space is fully utilized, it can
      block until a slot becomes available.
      
      With kmap_local_page() the mappings are per-thread and CPU-local, can
      take page faults, and can be called from any context (including
      interrupts).  It is faster than kmap() in kernels with HIGHMEM enabled.
      Furthermore, tasks can be preempted and, when they are scheduled to run
      again, the kernel virtual addresses are restored and still valid.
      
      Since kmap_local_page() can be used safely everywhere in bnode.c, it
      should be preferred there.
      
      Therefore, replace kmap() with kmap_local_page() in bnode.c.  Where
      possible, use the purpose-built helpers (memzero_page(), memcpy_page())
      instead of open coding kmap_local_page() plus memset() or memcpy(); see
      the sketch below.
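
      For instance (an illustrative sketch, not a hunk from bnode.c), a copy
      between two highmem pages collapses from two local mappings to one call:

        /* Before: two local mappings plus memcpy(). */
        char *src = kmap_local_page(src_page);
        char *dst = kmap_local_page(dst_page);
        memcpy(dst + dst_off, src + src_off, len);
        kunmap_local(dst);      /* unmap in reverse (LIFO) order */
        kunmap_local(src);

        /* After: memcpy_page() maps, copies and unmaps internally. */
        memcpy_page(dst_page, dst_off, src_page, src_off, len);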
      
      Tested in a QEMU/KVM x86_32 VM, 6GB RAM, booting a kernel with
      HIGHMEM64GB enabled.
      
      Link: https://lkml.kernel.org/r/20220809203105.26183-3-fmdefrancesco@gmail.com
      Signed-off-by: Fabio M. De Francesco <fmdefrancesco@gmail.com>
      Suggested-by: Ira Weiny <ira.weiny@intel.com>
      Reviewed-by: Ira Weiny <ira.weiny@intel.com>
      Reviewed-by: Viacheslav Dubeyko <slava@dubeyko.com>
      Cc: Bart Van Assche <bvanassche@acm.org>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Muchun Song <songmuchun@bytedance.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    • hfsplus: unmap the page in the "fail_page" label · f5b23d67
      Fabio M. De Francesco authored
      Patch series "hfsplus: Replace kmap() with kmap_local_page()".
      
      kmap() is being deprecated in favor of kmap_local_page().
      
      There are two main problems with kmap(): (1) it comes with an overhead,
      as the mapping space is restricted and protected by a global lock for
      synchronization, and (2) it requires a global TLB invalidation when the
      kmap pool wraps and, when the mapping space is fully utilized, it can
      block until a slot becomes available.
      
      With kmap_local_page() the mappings are per-thread and CPU-local, can
      take page faults, and can be called from any context (including
      interrupts).  It is faster than kmap() in kernels with HIGHMEM enabled.
      Furthermore, tasks can be preempted and, when they are scheduled to run
      again, the kernel virtual addresses are restored and still valid.
      
      Since kmap_local_page() can be used safely everywhere in fs/hfsplus, it
      should be preferred there.
      
      Therefore, replace kmap() with kmap_local_page() in fs/hfsplus.  Where
      possible, use the suited standard helpers (memzero_page(), memcpy_page())
      instead of open coding kmap_local_page() plus memset() or memcpy().
      
      Fix a bug where a page is left mapped if the code jumps to the
      "fail_page" label (patch 1/4).
      
      Tested in a QEMU/KVM x86_32 VM, 6GB RAM, booting a kernel with
      HIGHMEM64GB enabled.
      
      
      This patch (of 4):
      
      Several paths within hfs_btree_open() jump to the "fail_page" label,
      where put_page() is called while the page is still mapped.

      Call kunmap() to unmap the page just before put_page(), as in the
      sketch below.
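
      A sketch of the fixed error path (surrounding code abbreviated):

        fail_page:
                kunmap(page);   /* previously missing: page still mapped */
                put_page(page);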
      
      Link: https://lkml.kernel.org/r/20220809203105.26183-1-fmdefrancesco@gmail.com
      Link: https://lkml.kernel.org/r/20220809203105.26183-2-fmdefrancesco@gmail.com
      Signed-off-by: Fabio M. De Francesco <fmdefrancesco@gmail.com>
      Reviewed-by: Ira Weiny <ira.weiny@intel.com>
      Reviewed-by: Viacheslav Dubeyko <slava@dubeyko.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Fabio M. De Francesco <fmdefrancesco@gmail.com>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Bart Van Assche <bvanassche@acm.org>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Muchun Song <songmuchun@bytedance.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  2. 28 Aug, 2022 25 commits
  3. 27 Aug, 2022 6 commits