1. 14 Dec, 2021 5 commits
    • arm64: atomics: lse: define RETURN ops in terms of FETCH ops · 053f58ba
      Mark Rutland authored
       The FEAT_LSE atomic instructions include LD* instructions which return
       the original value of a memory location, and which can therefore be
       used to directly implement FETCH operations. Each RETURN op is
       implemented as a copy of the corresponding FETCH op with a trailing
       instruction to generate the new value of the memory location. We only
       directly implement *_fetch_add*(), for which we have a trailing `add`
       instruction.
      
      As the compiler has no visibility of the `add`, this leads to less than
      optimal code generation when consuming the result.
      
      For example, the compiler cannot constant-fold the addition into later
      operations, and currently GCC 11.1.0 will compile:
      
             return __lse_atomic_sub_return(1, v) == 0;
      
      As:
      
      	mov     w1, #0xffffffff
      	ldaddal w1, w2, [x0]
      	add     w1, w1, w2
      	cmp     w1, #0x0
      	cset    w0, eq  // eq = none
      	ret
      
      This patch improves this by replacing the `add` with C addition after
      the inline assembly block, e.g.
      
      	ret += i;
      
      This allows the compiler to manipulate `i`. This permits the compiler to
      merge the `add` and `cmp` for the above, e.g.
      
      	mov     w1, #0xffffffff
      	ldaddal w1, w1, [x0]
      	cmp     w1, #0x1
      	cset    w0, eq  // eq = none
      	ret
      
      With this change the assembly for each RETURN op is identical to the
      corresponding FETCH op (including barriers and clobbers) so I've removed
      the inline assembly and rewritten each RETURN op in terms of the
      corresponding FETCH op, e.g.
      
       | static inline int __lse_atomic_add_return(int i, atomic_t *v)
       | {
       |       return __lse_atomic_fetch_add(i, v) + i;
       | }
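       The construction above can be sketched portably with C11 atomics (a
       minimal stand-in for the kernel's LSE asm; the names here are
       hypothetical, not kernel APIs):

```c
#include <stdatomic.h>

/* Build a RETURN op from a FETCH op by doing the trailing addition in
 * C, where the compiler can see it and fold it into later operations
 * (e.g. merging the add into a subsequent compare, as described above). */
static inline int atomic_add_return_sketch(int i, _Atomic int *v)
{
	return atomic_fetch_add_explicit(v, i, memory_order_seq_cst) + i;
}
```

       Because the `+ i` is ordinary C, a caller such as
       `atomic_add_return_sketch(-1, v) == 0` leaves the compiler free to
       compare the fetched value against 1 directly.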
      
      The new construction does not adversely affect the common case, and
      before and after this patch GCC 11.1.0 can compile:
      
      	__lse_atomic_add_return(i, v)
      
      As:
      
      	ldaddal w0, w2, [x1]
      	add     w0, w0, w2
      
      ... while having the freedom to do better elsewhere.
      
      This is intended as an optimization and cleanup.
      There should be no functional change as a result of this patch.
       Signed-off-by: Mark Rutland <mark.rutland@arm.com>
       Cc: Boqun Feng <boqun.feng@gmail.com>
       Cc: Peter Zijlstra <peterz@infradead.org>
       Cc: Will Deacon <will@kernel.org>
       Acked-by: Will Deacon <will@kernel.org>
       Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
       Link: https://lore.kernel.org/r/20211210151410.2782645-6-mark.rutland@arm.com
       Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      053f58ba
    • arm64: atomics: lse: improve constraints for simple ops · 8a578a75
      Mark Rutland authored
      We have overly conservative assembly constraints for the basic FEAT_LSE
      atomic instructions, and using more accurate and permissive constraints
      will allow for better code generation.
      
       The FEAT_LSE basic atomic instructions come in two forms:
      
      	LD{op}{order}{size} <Rs>, <Rt>, [<Rn>]
      	ST{op}{order}{size} <Rs>, [<Rn>]
      
      The ST* forms are aliases of the LD* forms where:
      
      	ST{op}{order}{size} <Rs>, [<Rn>]
      Is:
      	LD{op}{order}{size} <Rs>, XZR, [<Rn>]
      
      For either form, both <Rs> and <Rn> are read but not written back to,
      and <Rt> is written with the original value of the memory location.
      Where (<Rt> == <Rs>) or (<Rt> == <Rn>), <Rt> is written *after* the
      other register value(s) are consumed. There are no UNPREDICTABLE or
      CONSTRAINED UNPREDICTABLE behaviours when any pair of <Rs>, <Rt>, or
      <Rn> are the same register.
      
      Our current inline assembly always uses <Rs> == <Rt>, treating this
      register as both an input and an output (using a '+r' constraint). This
      forces the compiler to do some unnecessary register shuffling and/or
      redundant value generation.
      
      For example, the compiler cannot reuse the <Rs> value, and currently GCC
      11.1.0 will compile:
      
      	__lse_atomic_add(1, a);
      	__lse_atomic_add(1, b);
      	__lse_atomic_add(1, c);
      
      As:
      
      	mov     w3, #0x1
      	mov     w4, w3
      	stadd   w4, [x0]
      	mov     w0, w3
      	stadd   w0, [x1]
      	stadd   w3, [x2]
      
      We can improve this with more accurate constraints, separating <Rs> and
      <Rt>, where <Rs> is an input-only register ('r'), and <Rt> is an
      output-only value ('=r'). As <Rt> is written back after <Rs> is
      consumed, it does not need to be earlyclobber ('=&r'), leaving the
      compiler free to use the same register for both <Rs> and <Rt> where this
      is desirable.
      
      At the same time, the redundant 'r' constraint for `v` is removed, as
      the `+Q` constraint is sufficient.
      
      With this change, the above example becomes:
      
      	mov     w3, #0x1
      	stadd   w3, [x0]
      	stadd   w3, [x1]
      	stadd   w3, [x2]
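       For illustration, the constraint change can be sketched as follows
       (assumed shape, arm64-only; not the exact kernel macros):

```c
/* Before: <Rs> and <Rt> share one register via "+r", and `v` carries a
 * redundant "r" input, forcing copies of `i`. */
asm volatile("stadd	%w[i], %[v]"
	: [i] "+r" (i), [v] "+Q" (v->counter)
	: "r" (v));

/* After: for ST forms, <Rs> is input-only ("r") and the "r" (v) input
 * is dropped, so `i` can stay live across several STADDs. */
asm volatile("stadd	%w[i], %[v]"
	: [v] "+Q" (v->counter)
	: [i] "r" (i));

/* After, LD form: <Rs> input-only, <Rt> output-only ("=r", not "=&r",
 * since <Rt> is written after <Rs> is consumed). */
int old;
asm volatile("ldaddal	%w[i], %w[old], %[v]"
	: [v] "+Q" (v->counter), [old] "=r" (old)
	: [i] "r" (i)
	: "memory");
```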
      
       I've made this change for the non-value-returning and FETCH ops. The
       RETURN ops have a multi-instruction sequence for which we cannot use the
       same constraints, and a subsequent patch will rewrite the RETURN ops in
       terms of the FETCH ops, relying on the compiler's ability to reuse the
       <Rs> value.
      
      This is intended as an optimization.
      There should be no functional change as a result of this patch.
       Signed-off-by: Mark Rutland <mark.rutland@arm.com>
       Cc: Boqun Feng <boqun.feng@gmail.com>
       Cc: Peter Zijlstra <peterz@infradead.org>
       Cc: Will Deacon <will@kernel.org>
       Acked-by: Will Deacon <will@kernel.org>
       Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
       Link: https://lore.kernel.org/r/20211210151410.2782645-5-mark.rutland@arm.com
       Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      8a578a75
    • arm64: atomics: lse: define ANDs in terms of ANDNOTs · 5e9e43c9
      Mark Rutland authored
      The FEAT_LSE atomic instructions include atomic bit-clear instructions
      (`ldclr*` and `stclr*`) which can be used to directly implement ANDNOT
      operations. Each AND op is implemented as a copy of the corresponding
      ANDNOT op with a leading `mvn` instruction to apply a bitwise NOT to the
      `i` argument.
      
      As the compiler has no visibility of the `mvn`, this leads to less than
      optimal code generation when generating `i` into a register. For
      example, __lse_atomic_fetch_and(0xf, v) can be compiled to:
      
      	mov     w1, #0xf
      	mvn     w1, w1
      	ldclral w1, w1, [x2]
      
      This patch improves this by replacing the `mvn` with NOT in C before the
      inline assembly block, e.g.
      
      	i = ~i;
      
      This allows the compiler to generate `i` into a register more optimally,
      e.g.
      
      	mov     w1, #0xfffffff0
      	ldclral w1, w1, [x2]
      
      With this change the assembly for each AND op is identical to the
      corresponding ANDNOT op (including barriers and clobbers), so I've
      removed the inline assembly and rewritten each AND op in terms of the
      corresponding ANDNOT op, e.g.
      
      | static inline void __lse_atomic_and(int i, atomic_t *v)
      | {
      | 	return __lse_atomic_andnot(~i, v);
      | }
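       A portable sketch of the same idea using C11 atomics (hypothetical
       names; the kernel code uses the LSE inline asm described above):

```c
#include <stdatomic.h>

/* ANDNOT clears the bits set in i; AND is then ANDNOT of the
 * complement, with the NOT done in C where the compiler can fold it
 * into the constant (e.g. 0xf becoming 0xfffffff0 above). */
static inline int atomic_fetch_andnot_sketch(int i, _Atomic int *v)
{
	return atomic_fetch_and_explicit(v, ~i, memory_order_seq_cst);
}

static inline int atomic_fetch_and_sketch(int i, _Atomic int *v)
{
	return atomic_fetch_andnot_sketch(~i, v);
}
```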
      
      This is intended as an optimization and cleanup.
      There should be no functional change as a result of this patch.
       Signed-off-by: Mark Rutland <mark.rutland@arm.com>
       Cc: Boqun Feng <boqun.feng@gmail.com>
       Cc: Peter Zijlstra <peterz@infradead.org>
       Cc: Will Deacon <will@kernel.org>
       Acked-by: Will Deacon <will@kernel.org>
       Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
       Link: https://lore.kernel.org/r/20211210151410.2782645-4-mark.rutland@arm.com
       Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      5e9e43c9
    • arm64: atomics: lse: define SUBs in terms of ADDs · ef532450
      Mark Rutland authored
      The FEAT_LSE atomic instructions include atomic ADD instructions
      (`stadd*` and `ldadd*`), but do not include atomic SUB instructions, so
      we must build all of the SUB operations using the ADD instructions. We
      open-code these today, with each SUB op implemented as a copy of the
      corresponding ADD op with a leading `neg` instruction in the inline
      assembly to negate the `i` argument.
      
       As the compiler has no visibility of the `neg`, this leads to less than
       optimal code generation when generating `i` into a register. For
       example, __lse_atomic_fetch_sub(1, v) can be compiled to:
      
      	mov     w1, #0x1
      	neg     w1, w1
      	ldaddal w1, w1, [x2]
      
      This patch improves this by replacing the `neg` with negation in C
      before the inline assembly block, e.g.
      
      	i = -i;
      
      This allows the compiler to generate `i` into a register more optimally,
      e.g.
      
      	mov     w1, #0xffffffff
      	ldaddal w1, w1, [x2]
      
      With this change the assembly for each SUB op is identical to the
      corresponding ADD op (including barriers and clobbers), so I've removed
      the inline assembly and rewritten each SUB op in terms of the
      corresponding ADD op, e.g.
      
      | static inline void __lse_atomic_sub(int i, atomic_t *v)
      | {
      | 	__lse_atomic_add(-i, v);
      | }
      
      For clarity I've moved the definition of each SUB op immediately after
      the corresponding ADD op, and used a single macro to create the RETURN
      forms of both ops.
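       The negation-in-C construction can be sketched portably with C11
       atomics (hypothetical name; the kernel uses the LSE asm shown above):

```c
#include <stdatomic.h>

/* With the negation in C rather than hidden in the asm, a constant
 * argument folds to a single negative immediate (e.g. -1 becoming
 * 0xffffffff above) instead of a mov plus neg. */
static inline int atomic_fetch_sub_sketch(int i, _Atomic int *v)
{
	return atomic_fetch_add_explicit(v, -i, memory_order_seq_cst);
}
```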
      
      This is intended as an optimization and cleanup.
      There should be no functional change as a result of this patch.
       Signed-off-by: Mark Rutland <mark.rutland@arm.com>
       Cc: Boqun Feng <boqun.feng@gmail.com>
       Cc: Peter Zijlstra <peterz@infradead.org>
       Cc: Will Deacon <will@kernel.org>
       Acked-by: Will Deacon <will@kernel.org>
       Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
       Link: https://lore.kernel.org/r/20211210151410.2782645-3-mark.rutland@arm.com
       Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      ef532450
    • arm64: atomics: format whitespace consistently · 8e6082e9
      Mark Rutland authored
      The code for the atomic ops is formatted inconsistently, and while this
      is not a functional problem it is rather distracting when working on
      them.
      
       Some ops have consistent indentation, e.g.
      
      | #define ATOMIC_OP_ADD_RETURN(name, mb, cl...)                           \
      | static inline int __lse_atomic_add_return##name(int i, atomic_t *v)     \
      | {                                                                       \
      |         u32 tmp;                                                        \
      |                                                                         \
      |         asm volatile(                                                   \
      |         __LSE_PREAMBLE                                                  \
      |         "       ldadd" #mb "    %w[i], %w[tmp], %[v]\n"                 \
      |         "       add     %w[i], %w[i], %w[tmp]"                          \
      |         : [i] "+r" (i), [v] "+Q" (v->counter), [tmp] "=&r" (tmp)        \
      |         : "r" (v)                                                       \
      |         : cl);                                                          \
      |                                                                         \
      |         return i;                                                       \
      | }
      
      While others have negative indentation for some lines, and/or have
      misaligned trailing backslashes, e.g.
      
      | static inline void __lse_atomic_##op(int i, atomic_t *v)                        \
      | {                                                                       \
      |         asm volatile(                                                   \
      |         __LSE_PREAMBLE                                                  \
      | "       " #asm_op "     %w[i], %[v]\n"                                  \
      |         : [i] "+r" (i), [v] "+Q" (v->counter)                           \
      |         : "r" (v));                                                     \
      | }
      
      This patch makes the indentation consistent and also aligns the trailing
      backslashes. This makes the code easier to read for those (like myself)
      who are easily distracted by these inconsistencies.
      
      This is intended as a cleanup.
      There should be no functional change as a result of this patch.
       Signed-off-by: Mark Rutland <mark.rutland@arm.com>
       Cc: Boqun Feng <boqun.feng@gmail.com>
       Cc: Peter Zijlstra <peterz@infradead.org>
       Cc: Will Deacon <will@kernel.org>
       Acked-by: Will Deacon <will@kernel.org>
       Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
       Link: https://lore.kernel.org/r/20211210151410.2782645-2-mark.rutland@arm.com
       Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      8e6082e9
  2. 28 Nov, 2021 8 commits
  3. 27 Nov, 2021 17 commits
    • Merge tag '5.16-rc2-ksmbd-fixes' of git://git.samba.org/ksmbd · 3498e7f2
      Linus Torvalds authored
      Pull ksmbd fixes from Steve French:
       "Five ksmbd server fixes, four of them for stable:
      
         - memleak fix
      
         - fix for default data stream on filesystems that don't support xattr
      
         - error logging fix
      
         - session setup fix
      
         - minor doc cleanup"
      
      * tag '5.16-rc2-ksmbd-fixes' of git://git.samba.org/ksmbd:
        ksmbd: fix memleak in get_file_stream_info()
        ksmbd: contain default data stream even if xattr is empty
        ksmbd: downgrade addition info error msg to debug in smb2_get_info_sec()
        docs: filesystem: cifs: ksmbd: Fix small layout issues
        ksmbd: Fix an error handling path in 'smb2_sess_setup()'
      3498e7f2
    • vmxnet3: Use generic Kconfig option for page size limit · 00169a92
      Guenter Roeck authored
      Use the architecture independent Kconfig option PAGE_SIZE_LESS_THAN_64KB
      to indicate that VMXNET3 requires a page size smaller than 64kB.
       Signed-off-by: Guenter Roeck <linux@roeck-us.net>
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      00169a92
    • fs: ntfs: Limit NTFS_RW to page sizes smaller than 64k · 4eec7faf
      Guenter Roeck authored
      NTFS_RW code allocates page size dependent arrays on the stack. This
      results in build failures if the page size is 64k or larger.
      
        fs/ntfs/aops.c: In function 'ntfs_write_mst_block':
        fs/ntfs/aops.c:1311:1: error:
      	the frame size of 2240 bytes is larger than 2048 bytes
      
      Since commit f22969a6 ("powerpc/64s: Default to 64K pages for 64 bit
      book3s") this affects ppc:allmodconfig builds, but other architectures
      supporting page sizes of 64k or larger are also affected.
      
      Increasing the maximum frame size for affected architectures just to
      silence this error does not really help.  The frame size would have to
      be set to a really large value for 256k pages.  Also, a large frame size
      could potentially result in stack overruns in this code and elsewhere
      and is therefore not desirable.  Make NTFS_RW dependent on page sizes
      smaller than 64k instead.
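       In Kconfig terms, the dependency change described amounts to a fragment
       like this (a sketch of the assumed shape, option text abbreviated):

```kconfig
config NTFS_RW
	bool "NTFS write support"
	depends on NTFS_FS
	depends on PAGE_SIZE_LESS_THAN_64KB
```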
       Signed-off-by: Guenter Roeck <linux@roeck-us.net>
       Cc: Anton Altaparmakov <anton@tuxera.com>
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      4eec7faf
    • arch: Add generic Kconfig option indicating page size smaller than 64k · 1f0e290c
      Guenter Roeck authored
      NTFS_RW and VMXNET3 require a page size smaller than 64kB.  Add generic
      Kconfig option for use outside architecture code to avoid architecture
      specific Kconfig options in that code.
       Suggested-by: Michael Ellerman <mpe@ellerman.id.au>
       Signed-off-by: Guenter Roeck <linux@roeck-us.net>
       Cc: Anton Altaparmakov <anton@tuxera.com>
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      1f0e290c
    • tracing: Test the 'Do not trace this pid' case in create event · 27ff768f
      Steven Rostedt (VMware) authored
      When creating a new event (via a module, kprobe, eprobe, etc), the
      descriptors that are created must add flags for pid filtering if an
      instance has pid filtering enabled, as the flags are used at the time the
      event is executed to know if pid filtering should be done or not.
      
      The "Only trace this pid" case was added, but a cut and paste error made
      that case checked twice, instead of checking the "Trace all but this pid"
      case.
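       The bug class described can be reduced to a small sketch (flag names
       hypothetical, not the actual trace_events code):

```c
enum {
	EVENT_PID_FILTER   = 1 << 0,  /* "Only trace this pid" */
	EVENT_NOPID_FILTER = 1 << 1,  /* "Trace all but this pid" */
};

/* Correct version: each condition sets its own flag. The cut-and-paste
 * bug described above tested `pids` a second time where `no_pids`
 * belongs, so the second flag was never set. */
static int event_filter_flags(int pids, int no_pids)
{
	int flags = 0;

	if (pids)
		flags |= EVENT_PID_FILTER;
	if (no_pids)	/* buggy version re-checked `pids` here */
		flags |= EVENT_NOPID_FILTER;
	return flags;
}
```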
      
      Link: https://lore.kernel.org/all/202111280401.qC0z99JB-lkp@intel.com/
      
      Fixes: 6cb20650 ("tracing: Check pid filtering when creating events")
       Reported-by: kernel test robot <lkp@intel.com>
       Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
      27ff768f
    • Merge tag 'xfs-5.16-fixes-1' of git://git.kernel.org/pub/scm/fs/xfs/xfs-linux · 4f0dda35
      Linus Torvalds authored
      Pull xfs fixes from Darrick Wong:
       "Fixes for a resource leak and a build robot complaint about totally
        dead code:
      
         - Fix buffer resource leak that could lead to livelock on corrupt fs.
      
         - Remove unused function xfs_inew_wait to shut up the build robots"
      
      * tag 'xfs-5.16-fixes-1' of git://git.kernel.org/pub/scm/fs/xfs/xfs-linux:
        xfs: remove xfs_inew_wait
        xfs: Fix the free logic of state in xfs_attr_node_hasname
      4f0dda35
    • Merge tag 'iomap-5.16-fixes-1' of git://git.kernel.org/pub/scm/fs/xfs/xfs-linux · adfb743a
      Linus Torvalds authored
      Pull iomap fixes from Darrick Wong:
       "A single iomap bug fix and a cleanup for 5.16-rc2.
      
        The bug fix changes how iomap deals with reading from an inline data
        region -- whereas the current code (incorrectly) lets the iomap read
        iter try for more bytes after reading the inline region (which zeroes
        the rest of the page!) and hopes the next iteration terminates, we
        surveyed the inlinedata implementations and realized that all
        inlinedata implementations also require that the inlinedata region end
        at EOF, so we can simply terminate the read.
      
        The second patch documents these assumptions in the code so that
        they're not subtle implications anymore, and cleans up some of the
        grosser parts of that function.
      
        Summary:
      
         - Fix an accounting problem where unaligned inline data reads can run
           off the end of the read iomap iterator. iomap has historically
           required that inline data mappings only exist at the end of a file,
           though this wasn't documented anywhere.
      
         - Document iomap_read_inline_data and change its return type to be
           appropriate for the information that it's actually returning"
      
      * tag 'iomap-5.16-fixes-1' of git://git.kernel.org/pub/scm/fs/xfs/xfs-linux:
        iomap: iomap_read_inline_data cleanup
        iomap: Fix inline extent handling in iomap_readpage
      adfb743a
    • Merge tag 'trace-v5.16-rc2-2' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace · 86155d6b
      Linus Torvalds authored
      Pull tracing fixes from Steven Rostedt:
       "Two fixes to event pid filtering:
      
         - Make sure newly created events reflect the current state of pid
           filtering
      
         - Take pid filtering into account when recording trigger events.
           (Also clean up the if statement to be cleaner)"
      
      * tag 'trace-v5.16-rc2-2' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace:
        tracing: Fix pid filtering when triggers are attached
        tracing: Check pid filtering when creating events
      86155d6b
    • Merge tag 'io_uring-5.16-2021-11-27' of git://git.kernel.dk/linux-block · 86799cdf
      Linus Torvalds authored
      Pull more io_uring fixes from Jens Axboe:
       "The locking fixup that was applied earlier this rc has both a deadlock
        and IRQ safety issue, let's get that ironed out before -rc3. This
        contains:
      
         - Link traversal locking fix (Pavel)
      
         - Cancelation fix (Pavel)
      
         - Relocate cond_resched() for huge buffer chain freeing, avoiding a
           softlockup warning (Ye)
      
         - Fix timespec validation (Ye)"
      
      * tag 'io_uring-5.16-2021-11-27' of git://git.kernel.dk/linux-block:
        io_uring: Fix undefined-behaviour in io_issue_sqe
        io_uring: fix soft lockup when call __io_remove_buffers
        io_uring: fix link traversal locking
        io_uring: fail cancellation for EXITING tasks
      86799cdf
    • Merge tag 'block-5.16-2021-11-27' of git://git.kernel.dk/linux-block · 650c8edf
      Linus Torvalds authored
      Pull more block fixes from Jens Axboe:
       "Turns out that the flushing out of pending fixes before the
        Thanksgiving break didn't quite work out in terms of timing, so here's
        a followup set of fixes:
      
         - rq_qos_done() should be called regardless of whether or not we're
           the final put of the request, it's not related to the freeing of
           the state. This fixes an IO stall with wbt that a few users have
           reported, a regression in this release.
      
         - Only define zram_wb_devops if it's used, fixing a compilation
           warning for some compilers"
      
      * tag 'block-5.16-2021-11-27' of git://git.kernel.dk/linux-block:
        zram: only make zram_wb_devops for CONFIG_ZRAM_WRITEBACK
        block: call rq_qos_done() before ref check in batch completions
      650c8edf
    • Merge tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi · 9e9fbe44
      Linus Torvalds authored
      Pull SCSI fixes from James Bottomley:
       "Twelve fixes, eleven in drivers (target, qla2xx, scsi_debug, mpt3sas,
        ufs). The core fix is a minor correction to the previous state update
        fix for the iscsi daemons"
      
      * tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi:
        scsi: scsi_debug: Zero clear zones at reset write pointer
        scsi: core: sysfs: Fix setting device state to SDEV_RUNNING
        scsi: scsi_debug: Sanity check block descriptor length in resp_mode_select()
        scsi: target: configfs: Delete unnecessary checks for NULL
        scsi: target: core: Use RCU helpers for INQUIRY t10_alua_tg_pt_gp
        scsi: mpt3sas: Fix incorrect system timestamp
        scsi: mpt3sas: Fix system going into read-only mode
        scsi: mpt3sas: Fix kernel panic during drive powercycle test
        scsi: ufs: ufs-mediatek: Add put_device() after of_find_device_by_node()
        scsi: scsi_debug: Fix type in min_t to avoid stack OOB
        scsi: qla2xxx: edif: Fix off by one bug in qla_edif_app_getfcinfo()
        scsi: ufs: ufshpb: Fix warning in ufshpb_set_hpb_read_to_upiu()
      9e9fbe44
    • Merge tag 'nfs-for-5.16-2' of git://git.linux-nfs.org/projects/trondmy/linux-nfs · 74139277
      Linus Torvalds authored
      Pull NFS client fixes from Trond Myklebust:
       "Highlights include:
      
        Stable fixes:
      
         - NFSv42: Fix pagecache invalidation after COPY/CLONE
      
        Bugfixes:
      
         - NFSv42: Don't fail clone() just because the server failed to return
           post-op attributes
      
         - SUNRPC: use different lockdep keys for INET6 and LOCAL
      
         - NFSv4.1: handle NFS4ERR_NOSPC from CREATE_SESSION
      
         - SUNRPC: fix header include guard in trace header"
      
      * tag 'nfs-for-5.16-2' of git://git.linux-nfs.org/projects/trondmy/linux-nfs:
        SUNRPC: use different lock keys for INET6 and LOCAL
        sunrpc: fix header include guard in trace header
        NFSv4.1: handle NFS4ERR_NOSPC by CREATE_SESSION
        NFSv42: Fix pagecache invalidation after COPY/CLONE
        NFS: Add a tracepoint to show the results of nfs_set_cache_invalid()
        NFSv42: Don't fail clone() unless the OP_CLONE operation failed
      74139277
    • Merge tag 'erofs-for-5.16-rc3-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/xiang/erofs · 52dc4c64
      Linus Torvalds authored
      Pull erofs fix from Gao Xiang:
       "Fix an ABBA deadlock introduced by XArray conversion"
      
      * tag 'erofs-for-5.16-rc3-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/xiang/erofs:
        erofs: fix deadlock when shrink erofs slab
      52dc4c64
    • Merge tag 'powerpc-5.16-3' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux · 7b65b798
      Linus Torvalds authored
      Pull powerpc fixes from Michael Ellerman:
       "Fix KVM using a Power9 instruction on earlier CPUs, which could lead
        to the host SLB being incorrectly invalidated and a subsequent host
        crash.
      
        Fix kernel hardlockup on vmap stack overflow on 32-bit.
      
        Thanks to Christophe Leroy, Nicholas Piggin, and Fabiano Rosas"
      
      * tag 'powerpc-5.16-3' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux:
        powerpc/32: Fix hardlockup on vmap stack overflow
        KVM: PPC: Book3S HV: Prevent POWER7/8 TLB flush flushing SLB
      7b65b798
    • Merge tag 'mips-fixes_5.16_2' of git://git.kernel.org/pub/scm/linux/kernel/git/mips/linux · 6be08803
      Linus Torvalds authored
      Pull MIPS fixes from Thomas Bogendoerfer:
      
       - build fix for ZSTD enabled configs
      
       - fix for preempt warning
      
       - fix for loongson FTLB detection
      
       - fix for page table level selection
      
      * tag 'mips-fixes_5.16_2' of git://git.kernel.org/pub/scm/linux/kernel/git/mips/linux:
        MIPS: use 3-level pgtable for 64KB page size on MIPS_VA_BITS_48
        MIPS: loongson64: fix FTLB configuration
        MIPS: Fix using smp_processor_id() in preemptible in show_cpuinfo()
        MIPS: boot/compressed/: add __ashldi3 to target for ZSTD compression
      6be08803
    • io_uring: Fix undefined-behaviour in io_issue_sqe · f6223ff7
      Ye Bin authored
      We got issue as follows:
      ================================================================================
      UBSAN: Undefined behaviour in ./include/linux/ktime.h:42:14
      signed integer overflow:
      -4966321760114568020 * 1000000000 cannot be represented in type 'long long int'
      CPU: 1 PID: 2186 Comm: syz-executor.2 Not tainted 4.19.90+ #12
      Hardware name: linux,dummy-virt (DT)
      Call trace:
       dump_backtrace+0x0/0x3f0 arch/arm64/kernel/time.c:78
       show_stack+0x28/0x38 arch/arm64/kernel/traps.c:158
       __dump_stack lib/dump_stack.c:77 [inline]
       dump_stack+0x170/0x1dc lib/dump_stack.c:118
       ubsan_epilogue+0x18/0xb4 lib/ubsan.c:161
       handle_overflow+0x188/0x1dc lib/ubsan.c:192
       __ubsan_handle_mul_overflow+0x34/0x44 lib/ubsan.c:213
       ktime_set include/linux/ktime.h:42 [inline]
       timespec64_to_ktime include/linux/ktime.h:78 [inline]
       io_timeout fs/io_uring.c:5153 [inline]
       io_issue_sqe+0x42c8/0x4550 fs/io_uring.c:5599
       __io_queue_sqe+0x1b0/0xbc0 fs/io_uring.c:5988
       io_queue_sqe+0x1ac/0x248 fs/io_uring.c:6067
       io_submit_sqe fs/io_uring.c:6137 [inline]
       io_submit_sqes+0xed8/0x1c88 fs/io_uring.c:6331
       __do_sys_io_uring_enter fs/io_uring.c:8170 [inline]
       __se_sys_io_uring_enter fs/io_uring.c:8129 [inline]
       __arm64_sys_io_uring_enter+0x490/0x980 fs/io_uring.c:8129
       invoke_syscall arch/arm64/kernel/syscall.c:53 [inline]
       el0_svc_common+0x374/0x570 arch/arm64/kernel/syscall.c:121
       el0_svc_handler+0x190/0x260 arch/arm64/kernel/syscall.c:190
       el0_svc+0x10/0x218 arch/arm64/kernel/entry.S:1017
      ================================================================================
      
       As ktime_set() only checks whether 'secs' is larger than KTIME_SEC_MAX,
       passing a negative value can lead to signed integer overflow in the
       multiplication. To address this issue, we must also check whether 'sec'
       is negative.
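       A userspace sketch of the added validation (names hypothetical; the
       actual fix checks the timespec in io_uring before the ktime
       conversion):

```c
#include <stdbool.h>

#define NSEC_PER_SEC 1000000000LL

/* Reject a timespec whose fields would overflow the signed 64-bit
 * nanosecond representation: a negative tv_sec multiplied by
 * NSEC_PER_SEC wraps, which is the UBSAN report shown above. */
static bool timeout_timespec_valid(long long tv_sec, long long tv_nsec)
{
	if (tv_sec < 0)
		return false;
	if (tv_nsec < 0 || tv_nsec >= NSEC_PER_SEC)
		return false;
	return true;
}
```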
       Signed-off-by: Ye Bin <yebin10@huawei.com>
       Link: https://lore.kernel.org/r/20211118015907.844807-1-yebin10@huawei.com
       Signed-off-by: Jens Axboe <axboe@kernel.dk>
      f6223ff7
    • io_uring: fix soft lockup when call __io_remove_buffers · 1d0254e6
      Ye Bin authored
      I got issue as follows:
      [ 567.094140] __io_remove_buffers: [1]start ctx=0xffff8881067bf000 bgid=65533 buf=0xffff8881fefe1680
      [  594.360799] watchdog: BUG: soft lockup - CPU#2 stuck for 26s! [kworker/u32:5:108]
      [  594.364987] Modules linked in:
      [  594.365405] irq event stamp: 604180238
      [  594.365906] hardirqs last  enabled at (604180237): [<ffffffff93fec9bd>] _raw_spin_unlock_irqrestore+0x2d/0x50
      [  594.367181] hardirqs last disabled at (604180238): [<ffffffff93fbbadb>] sysvec_apic_timer_interrupt+0xb/0xc0
      [  594.368420] softirqs last  enabled at (569080666): [<ffffffff94200654>] __do_softirq+0x654/0xa9e
      [  594.369551] softirqs last disabled at (569080575): [<ffffffff913e1d6a>] irq_exit_rcu+0x1ca/0x250
      [  594.370692] CPU: 2 PID: 108 Comm: kworker/u32:5 Tainted: G            L    5.15.0-next-20211112+ #88
      [  594.371891] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS ?-20190727_073836-buildvm-ppc64le-16.ppc.fedoraproject.org-3.fc31 04/01/2014
      [  594.373604] Workqueue: events_unbound io_ring_exit_work
      [  594.374303] RIP: 0010:_raw_spin_unlock_irqrestore+0x33/0x50
      [  594.375037] Code: 48 83 c7 18 53 48 89 f3 48 8b 74 24 10 e8 55 f5 55 fd 48 89 ef e8 ed a7 56 fd 80 e7 02 74 06 e8 43 13 7b fd fb bf 01 00 00 00 <e8> f8 78 474
      [  594.377433] RSP: 0018:ffff888101587a70 EFLAGS: 00000202
      [  594.378120] RAX: 0000000024030f0d RBX: 0000000000000246 RCX: 1ffffffff2f09106
      [  594.379053] RDX: 0000000000000000 RSI: ffffffff9449f0e0 RDI: 0000000000000001
      [  594.379991] RBP: ffffffff9586cdc0 R08: 0000000000000001 R09: fffffbfff2effcab
      [  594.380923] R10: ffffffff977fe557 R11: fffffbfff2effcaa R12: ffff8881b8f3def0
      [  594.381858] R13: 0000000000000246 R14: ffff888153a8b070 R15: 0000000000000000
      [  594.382787] FS:  0000000000000000(0000) GS:ffff888399c00000(0000) knlGS:0000000000000000
      [  594.383851] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      [  594.384602] CR2: 00007fcbe71d2000 CR3: 00000000b4216000 CR4: 00000000000006e0
      [  594.385540] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
      [  594.386474] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
      [  594.387403] Call Trace:
      [  594.387738]  <TASK>
      [  594.388042]  find_and_remove_object+0x118/0x160
      [  594.389321]  delete_object_full+0xc/0x20
      [  594.389852]  kfree+0x193/0x470
      [  594.390275]  __io_remove_buffers.part.0+0xed/0x147
      [  594.390931]  io_ring_ctx_free+0x342/0x6a2
      [  594.392159]  io_ring_exit_work+0x41e/0x486
      [  594.396419]  process_one_work+0x906/0x15a0
      [  594.399185]  worker_thread+0x8b/0xd80
      [  594.400259]  kthread+0x3bf/0x4a0
      [  594.401847]  ret_from_fork+0x22/0x30
      [  594.402343]  </TASK>
      
      Message from syslogd@localhost at Nov 13 09:09:54 ...
      kernel:watchdog: BUG: soft lockup - CPU#2 stuck for 26s! [kworker/u32:5:108]
      [  596.793660] __io_remove_buffers: [2099199]start ctx=0xffff8881067bf000 bgid=65533 buf=0xffff8881fefe1680
      
      We can reproduce this issue by follow syzkaller log:
      r0 = syz_io_uring_setup(0x401, &(0x7f0000000300), &(0x7f0000003000/0x2000)=nil, &(0x7f0000ff8000/0x4000)=nil, &(0x7f0000000280)=<r1=>0x0, &(0x7f0000000380)=<r2=>0x0)
      sendmsg$ETHTOOL_MSG_FEATURES_SET(0xffffffffffffffff, &(0x7f0000003080)={0x0, 0x0, &(0x7f0000003040)={&(0x7f0000000040)=ANY=[], 0x18}}, 0x0)
      syz_io_uring_submit(r1, r2, &(0x7f0000000240)=@IORING_OP_PROVIDE_BUFFERS={0x1f, 0x5, 0x0, 0x401, 0x1, 0x0, 0x100, 0x0, 0x1, {0xfffd}}, 0x0)
      io_uring_enter(r0, 0x3a2d, 0x0, 0x0, 0x0, 0x0)
      
       The cause of the above issue is that 'buf->list' has 2,100,000 nodes;
       freeing them all without yielding the CPU leads to a soft lockup.
       To solve this issue, add a schedule point to the while loop in
       '__io_remove_buffers'.
       After adding the schedule point, a regression run gives the following
       data:
      [  240.141864] __io_remove_buffers: [1]start ctx=0xffff888170603000 bgid=65533 buf=0xffff8881116fcb00
      [  268.408260] __io_remove_buffers: [1]start ctx=0xffff8881b92d2000 bgid=65533 buf=0xffff888130c83180
      [  275.899234] __io_remove_buffers: [2099199]start ctx=0xffff888170603000 bgid=65533 buf=0xffff8881116fcb00
      [  296.741404] __io_remove_buffers: [1]start ctx=0xffff8881b659c000 bgid=65533 buf=0xffff8881010fe380
      [  305.090059] __io_remove_buffers: [2099199]start ctx=0xffff8881b92d2000 bgid=65533 buf=0xffff888130c83180
      [  325.415746] __io_remove_buffers: [1]start ctx=0xffff8881b92d1000 bgid=65533 buf=0xffff8881a17d8f00
      [  333.160318] __io_remove_buffers: [2099199]start ctx=0xffff8881b659c000 bgid=65533 buf=0xffff8881010fe380
      ...
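       The shape of the fix can be sketched in userspace (hypothetical list
       type, with cond_resched() stubbed out since it is a kernel primitive):

```c
#include <stdlib.h>

struct buf {
	struct buf *next;
};

/* Stand-in for the kernel's cond_resched(): in the kernel this yields
 * the CPU when a reschedule is pending, which prevents the watchdog
 * soft-lockup report shown above on very long lists. */
static void cond_resched_stub(void) { }

/* Free every node, with a schedule point inside the loop. */
static unsigned long remove_buffers_sketch(struct buf **head)
{
	unsigned long freed = 0;

	while (*head) {
		struct buf *nxt = (*head)->next;

		free(*head);
		*head = nxt;
		freed++;
		cond_resched_stub();
	}
	return freed;
}
```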
      
       Fixes: 8bab4c09 ("io_uring: allow conditional reschedule for intensive iterators")
       Signed-off-by: Ye Bin <yebin10@huawei.com>
       Link: https://lore.kernel.org/r/20211122024737.2198530-1-yebin10@huawei.com
       Signed-off-by: Jens Axboe <axboe@kernel.dk>
      1d0254e6
  4. 26 Nov, 2021 10 commits