1. 14 Mar, 2015 40 commits
    • USB: serial: cp210x: Adding Seletek device id's · e7d3ac35
      Michiel vd Garde authored
      commit 675af708 upstream.
      
      These device IDs are not associated with the cp210x module currently,
      but should be. This patch allows the devices to operate as intended upon
      connecting them to the USB bus.
      Signed-off-by: Michiel van de Garde <mgparser@gmail.com>
      Signed-off-by: Johan Hovold <johan@kernel.org>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      e7d3ac35
    • Revert "USB: serial: make bulk_out_size a lower limit" · 64264c33
      Johan Hovold authored
      commit bc4b1f48 upstream.
      
      This reverts commit 5083fd7b.
      
      A bulk-out size smaller than the end-point size is indeed valid. The
      offending commit broke the usb-debug driver for EHCI debug devices,
      which use 8-byte buffers.
      
      Fixes: 5083fd7b ("USB: serial: make bulk_out_size a lower limit")
      Reported-by: "Li, Elvin" <elvin.li@intel.com>
      Signed-off-by: Johan Hovold <johan@kernel.org>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      64264c33
    • uas: Add US_FL_NO_REPORT_OPCODES for JMicron JMS539 · ddcc85e5
      Hans de Goede authored
      commit 59e980ef upstream.
      
      Like the JMicron JMS567, enclosures with the JMS539 choke on report-opcodes,
      so avoid it.
      Tested-and-reported-by: Tom Arild Naess <tanaess@gmail.com>
      Signed-off-by: Hans de Goede <hdegoede@redhat.com>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      ddcc85e5
    • KVM: MIPS: Fix trace event to save PC directly · a3ee1048
      James Hogan authored
      commit b3cffac0 upstream.
      
      Currently the guest exit trace event saves the VCPU pointer to the
      structure, and the guest PC is retrieved by dereferencing it when the
      event is printed rather than directly from the trace record. This isn't
      safe as the printing may occur long afterwards, after the PC has changed
      and potentially after the VCPU has been freed. Usually this results in
      the same (wrong) PC being printed for multiple trace events. It also
      isn't portable as userland has no way to access the VCPU data structure
      when interpreting the trace record itself.
      
      Let's save the actual PC in the structure so that the correct value is
      accessible later.
      
      Fixes: 669e846e ("KVM/MIPS32: MIPS arch specific APIs for KVM")
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Gleb Natapov <gleb@kernel.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      Acked-by: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      a3ee1048
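      For illustration only (hypothetical names, not the KVM code): the hazard the
      patch above removes is that a trace record holding a pointer gets dereferenced
      at print time, long after the pointed-to value may have changed or been freed,
      whereas a record holding the value itself stays correct. A minimal C sketch:

        #include <stdio.h>
        #include <stdlib.h>

        /* Hypothetical trace records kept around for later printing. */
        struct exit_event_bad  { const unsigned long *pc_ptr; }; /* stores a pointer */
        struct exit_event_good { unsigned long pc; };            /* stores the value */

        int main(void)
        {
            unsigned long *vcpu_pc = malloc(sizeof(*vcpu_pc));

            *vcpu_pc = 0x80001000;
            struct exit_event_bad  bad  = { .pc_ptr = vcpu_pc };
            struct exit_event_good good = { .pc = *vcpu_pc };
            (void)bad;

            /* Time passes: the guest keeps running and the VCPU may be freed. */
            *vcpu_pc = 0x80002000;
            free(vcpu_pc);

            /* Printing "bad" now would dereference stale or freed memory;
             * "good" still reports the PC at the time of the event. */
            printf("good record: pc=0x%lx\n", good.pc);
            return 0;
        }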
    • KVM: emulate: fix CMPXCHG8B on 32-bit hosts · c8b6504b
      Paolo Bonzini authored
      commit 4ff6f8e6 upstream.
      
      This has been broken for a long time: it broke first in 2.6.35, then was
      almost fixed in 2.6.36 but this one-liner slipped through the cracks.
      The bug shows up as an infinite loop in Windows 7 (and newer) boot on
      32-bit hosts without EPT.
      
      Windows uses CMPXCHG8B to write to page tables, which causes a
      page fault if running without EPT; the emulator is then called from
      kvm_mmu_page_fault.  The loop then happens if the higher 4 bytes are
      not 0; the common case for this is that the NX bit (bit 63) is 1.
      
      Fixes: 6550e1f1
      Fixes: 16518d5a
      Reported-by: Erik Rull <erik.rull@rdsoftware.de>
      Tested-by: Erik Rull <erik.rull@rdsoftware.de>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      c8b6504b
    • Btrfs:__add_inode_ref: out of bounds memory read when looking for extended ref. · 218c8863
      Quentin Casasnovas authored
      commit dd9ef135 upstream.
      
      Improper arithmetic when calculating the address of the extended ref could
      lead to an out-of-bounds memory read and kernel panic.
      Signed-off-by: Quentin Casasnovas <quentin.casasnovas@oracle.com>
      Reviewed-by: David Sterba <dsterba@suse.cz>
      Signed-off-by: Chris Mason <clm@fb.com>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      218c8863
    • Btrfs: fix data loss in the fast fsync path · 0ab92529
      Filipe Manana authored
      commit 3a8b36f3 upstream.
      
      When using the fast file fsync code path we can miss the fact that new
      writes happened since the last file fsync and therefore return without
      waiting for the IO to finish and write the new extents to the fsync log.
      
      Here's an example scenario where the fsync will miss the fact that new
      file data exists that wasn't yet durably persisted:
      
      1. fs_info->last_trans_committed == N - 1 and current transaction is
         transaction N (fs_info->generation == N);
      
      2. do a buffered write;
      
      3. fsync our inode, this clears our inode's full sync flag, starts
         an ordered extent and waits for it to complete - when it completes
         at btrfs_finish_ordered_io(), the inode's last_trans is set to the
         value N (via btrfs_update_inode_fallback -> btrfs_update_inode ->
         btrfs_set_inode_last_trans);
      
      4. transaction N is committed, so fs_info->last_trans_committed is now
         set to the value N and fs_info->generation remains with the value N;
      
      5. do another buffered write, when this happens btrfs_file_write_iter
         sets our inode's last_trans to the value N + 1 (that is
         fs_info->generation + 1 == N + 1);
      
      6. transaction N + 1 is started and fs_info->generation now has the
         value N + 1;
      
      7. transaction N + 1 is committed, so fs_info->last_trans_committed
         is set to the value N + 1;
      
      8. fsync our inode - because it doesn't have the full sync flag set,
         we only start the ordered extent, we don't wait for it to complete
         (only in a later phase) therefore its last_trans field has the
         value N + 1 set previously by btrfs_file_write_iter(), and so we
         have:
      
             inode->last_trans <= fs_info->last_trans_committed
                 (N + 1)              (N + 1)
      
         Which made us not log the last buffered write and exit the fsync
         handler immediately, returning success (0) to user space and resulting
         in data loss after a crash.
      
      This can actually be triggered deterministically and the following excerpt
      from a testcase I made for xfstests triggers the issue. It moves a dummy
      file across directories and then fsyncs the old parent directory - this
      is just to trigger a transaction commit, so moving files around isn't
      directly related to the issue but it was chosen because running 'sync' for
      example does more than just committing the current transaction, as it
      flushes/waits for all file data to be persisted. The issue can also happen
      at random periods, since the transaction kthread periodically commits the
      current transaction (about every 30 seconds by default).
      The body of the test is:
      
        _scratch_mkfs >> $seqres.full 2>&1
        _init_flakey
        _mount_flakey
      
        # Create our main test file 'foo', the one we check for data loss.
        # By doing an fsync against our file, it makes btrfs clear the 'needs_full_sync'
        # bit from its flags (btrfs inode specific flags).
        $XFS_IO_PROG -f -c "pwrite -S 0xaa 0 8K" \
                        -c "fsync" $SCRATCH_MNT/foo | _filter_xfs_io
      
        # Now create one other file and 2 directories. We will move this second file
        # from one directory to the other later because it forces btrfs to commit its
        # currently open transaction if we fsync the old parent directory. This is
        # necessary to trigger the data loss bug that affected btrfs.
        mkdir $SCRATCH_MNT/testdir_1
        touch $SCRATCH_MNT/testdir_1/bar
        mkdir $SCRATCH_MNT/testdir_2
      
        # Make sure everything is durably persisted.
        sync
      
        # Write more 8Kb of data to our file.
        $XFS_IO_PROG -c "pwrite -S 0xbb 8K 8K" $SCRATCH_MNT/foo | _filter_xfs_io
      
        # Move our 'bar' file into a new directory.
        mv $SCRATCH_MNT/testdir_1/bar $SCRATCH_MNT/testdir_2/bar
      
        # Fsync our first directory. Because it had a file moved into some other
        # directory, this made btrfs commit the currently open transaction. This is
        # a condition necessary to trigger the data loss bug.
        $XFS_IO_PROG -c "fsync" $SCRATCH_MNT/testdir_1
      
        # Now fsync our main test file. If the fsync succeeds, we expect the 8Kb of
        # data we wrote previously to be persisted and available if a crash happens.
        # This did not happen with btrfs, because of the transaction commit that
        # happened when we fsynced the parent directory.
        $XFS_IO_PROG -c "fsync" $SCRATCH_MNT/foo
      
        # Simulate a crash/power loss.
        _load_flakey_table $FLAKEY_DROP_WRITES
        _unmount_flakey
      
        _load_flakey_table $FLAKEY_ALLOW_WRITES
        _mount_flakey
      
        # Now check that all data we wrote before are available.
        echo "File content after log replay:"
        od -t x1 $SCRATCH_MNT/foo
      
        status=0
        exit
      
      The expected golden output for the test, which is what we get with this
      fix applied (or when running against ext3/4 and xfs), is:
      
        wrote 8192/8192 bytes at offset 0
        XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
        wrote 8192/8192 bytes at offset 8192
        XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
        File content after log replay:
        0000000 aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa
        *
        0020000 bb bb bb bb bb bb bb bb bb bb bb bb bb bb bb bb
        *
        0040000
      
      Without this fix applied, the output shows the test file does not have
      the second 8Kb extent that we successfully fsynced:
      
        wrote 8192/8192 bytes at offset 0
        XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
        wrote 8192/8192 bytes at offset 8192
        XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
        File content after log replay:
        0000000 aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa
        *
        0020000
      
      So fix this by skipping the fsync only if we're doing a full sync and
      if the inode's last_trans is <= fs_info->last_trans_committed, or if
      the inode is already in the log. Also remove setting the inode's
      last_trans in btrfs_file_write_iter since it's useless/unreliable.
      
      Also because btrfs_file_write_iter no longer sets inode->last_trans to
      fs_info->generation + 1, don't set last_trans to 0 if we bail out and don't
      bail out if last_trans is 0, otherwise something as simple as the following
      example wouldn't log the second write on the last fsync:
      
        1. write to file
      
        2. fsync file
      
        3. fsync file
             |--> btrfs_inode_in_log() returns true and it sets last_trans to 0
      
        4. write to file
             |--> btrfs_file_write_iter() no longer sets last_trans, so it
                  remained with a value of 0
        5. fsync
             |--> inode->last_trans == 0, so it bails out without logging the
                  second write
      
      A test case for xfstests will be sent soon.
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: Chris Mason <clm@fb.com>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      0ab92529
    • btrfs: fix lost return value due to variable shadowing · d771f1d4
      David Sterba authored
      commit 1932b7be upstream.
      
      A block-local variable stores the error code, but btrfs_get_blocks_direct may
      not return it in the end, as there is a ret defined in the function scope.
      
      Fixes: d187663e ("Btrfs: lock extents as we map them in DIO")
      Signed-off-by: David Sterba <dsterba@suse.cz>
      Signed-off-by: Chris Mason <clm@fb.com>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      d771f1d4
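      For illustration only (generic names, not the btrfs code): the shadowing
      pattern the fix above removes looks like the sketch below, where the inner
      ret hides the function-scope one, so the error code never reaches the caller.

        #include <stdio.h>

        static int do_work_shadowed(int fail)
        {
            int ret = 0;

            if (fail) {
                int ret = -5;   /* shadows the outer ret: the error is lost */
                (void)ret;
            }
            return ret;         /* always 0, even on failure */
        }

        static int do_work_fixed(int fail)
        {
            int ret = 0;

            if (fail)
                ret = -5;       /* assigns to the function-scope variable */
            return ret;
        }

        int main(void)
        {
            printf("shadowed: %d, fixed: %d\n",
                   do_work_shadowed(1), do_work_fixed(1));
            return 0;
        }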
    • Btrfs: fix fsync race leading to ordered extent memory leaks · 67b67dfa
      Filipe Manana authored
      commit 4d884fce upstream.
      
      We can have multiple fsync operations against the same file during the
      same transaction and they can collect the same ordered extents while they
      don't complete (still accessible from the inode's ordered tree). If this
      happens, those ordered extents will never get their reference counts
      decremented to 0, leading to memory leaks and inode leaks (an iput for an
      ordered extent's inode is scheduled only when the ordered extent's refcount
      drops to 0). The following sequence diagram explains this race:
      
               CPU 1                                         CPU 2
      
      btrfs_sync_file()
      
                                                       btrfs_sync_file()
      
        mutex_lock(inode->i_mutex)
        btrfs_log_inode()
          btrfs_get_logged_extents()
            --> collects ordered extent X
            --> increments ordered
                extent X's refcount
          btrfs_submit_logged_extents()
        mutex_unlock(inode->i_mutex)
      
                                                         mutex_lock(inode->i_mutex)
        btrfs_sync_log()
           btrfs_wait_logged_extents()
             --> list_del_init(&ordered->log_list)
                                                           btrfs_log_inode()
                                                             btrfs_get_logged_extents()
                                                               --> Adds ordered extent X
                                                                   to logged_list because
                                                                   at this point:
                                                                   list_empty(&ordered->log_list)
                                                                   && test_bit(BTRFS_ORDERED_LOGGED,
                                                                               &ordered->flags) == 0
                                                               --> Increments ordered extent
                                                                   X's refcount
             --> check if ordered extent's io is
                 finished or not, start it if
                 necessary and wait for it to finish
             --> sets bit BTRFS_ORDERED_LOGGED
                 on ordered extent X's flags
                 and adds it to trans->ordered
        btrfs_sync_log() finishes
      
                                                             btrfs_submit_logged_extents()
                                                           btrfs_log_inode() finishes
                                                         mutex_unlock(inode->i_mutex)
      
      btrfs_sync_file() finishes
      
                                                         btrfs_sync_log()
                                                            btrfs_wait_logged_extents()
                                                              --> Sees ordered extent X has the
                                                                  bit BTRFS_ORDERED_LOGGED set in
                                                                  its flags
                                                              --> X's refcount is untouched
                                                         btrfs_sync_log() finishes
      
                                                       btrfs_sync_file() finishes
      
      btrfs_commit_transaction()
        --> called by transaction kthread for e.g.
        btrfs_wait_pending_ordered()
          --> waits for ordered extent X to
              complete
          --> decrements ordered extent X's
              refcount by 1 only, corresponding
              to the increment done by the fsync
              task ran by CPU 1
      
      In the scenario of the above diagram, after the transaction commit,
      the ordered extent will remain with a refcount of 1 forever, leaking
      the ordered extent structure and preventing the i_count of its inode
      from ever decreasing to 0, since the delayed iput is scheduled only
      when the ordered extent's refcount drops to 0, preventing the inode
      from ever being evicted by the VFS.
      
      Fix this by using the flag BTRFS_ORDERED_LOGGED differently. Use it to
      mean that an ordered extent is already being processed by an fsync call,
      which will attach it to the current transaction, preventing it from being
      collected by subsequent fsync operations against the same inode.
      
      This race was introduced with the following change (added in 3.19 and
      backported to stable 3.18 and 3.17):
      
        Btrfs: make sure logged extents complete in the current transaction V3
        commit 50d9aa99
      
      I ran into this issue while running xfstests/generic/113 in a loop, which
      failed about 1 out of 10 runs with the following warning in dmesg:
      
      [ 2612.440038] WARNING: CPU: 4 PID: 22057 at fs/btrfs/disk-io.c:3558 free_fs_root+0x36/0x133 [btrfs]()
      [ 2612.442810] Modules linked in: btrfs crc32c_generic xor raid6_pq nfsd auth_rpcgss oid_registry nfs_acl nfs lockd grace fscache sunrpc loop processor parport_pc parport psmouse
      thermal_sys i2c_piix4 serio_raw pcspkr evdev microcode button i2c_core ext4 crc16 jbd2 mbcache sd_mod sg sr_mod cdrom virtio_scsi ata_generic virtio_pci ata_piix virtio_ring libata
      virtio floppy e1000 scsi_mod [last unloaded: btrfs]
      [ 2612.452711] CPU: 4 PID: 22057 Comm: umount Tainted: G        W      3.19.0-rc5-btrfs-next-4+ #1
      [ 2612.454921] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.7.5-0-ge51488c-20140602_164612-nilsson.home.kraxel.org 04/01/2014
      [ 2612.457709]  0000000000000009 ffff8801342c3c78 ffffffff8142425e ffff88023ec8f2d8
      [ 2612.459829]  0000000000000000 ffff8801342c3cb8 ffffffff81045308 ffff880046460000
      [ 2612.461564]  ffffffffa036da56 ffff88003d07b000 ffff880046460000 ffff880046460068
      [ 2612.463163] Call Trace:
      [ 2612.463719]  [<ffffffff8142425e>] dump_stack+0x4c/0x65
      [ 2612.464789]  [<ffffffff81045308>] warn_slowpath_common+0xa1/0xbb
      [ 2612.466026]  [<ffffffffa036da56>] ? free_fs_root+0x36/0x133 [btrfs]
      [ 2612.467247]  [<ffffffff810453c5>] warn_slowpath_null+0x1a/0x1c
      [ 2612.468416]  [<ffffffffa036da56>] free_fs_root+0x36/0x133 [btrfs]
      [ 2612.469625]  [<ffffffffa036f2a7>] btrfs_drop_and_free_fs_root+0x93/0x9b [btrfs]
      [ 2612.471251]  [<ffffffffa036f353>] btrfs_free_fs_roots+0xa4/0xd6 [btrfs]
      [ 2612.472536]  [<ffffffff8142612e>] ? wait_for_completion+0x24/0x26
      [ 2612.473742]  [<ffffffffa0370bbc>] close_ctree+0x1f3/0x33c [btrfs]
      [ 2612.475477]  [<ffffffff81059d1d>] ? destroy_workqueue+0x148/0x1ba
      [ 2612.476695]  [<ffffffffa034e3da>] btrfs_put_super+0x19/0x1b [btrfs]
      [ 2612.477911]  [<ffffffff81153e53>] generic_shutdown_super+0x73/0xef
      [ 2612.479106]  [<ffffffff811540e2>] kill_anon_super+0x13/0x1e
      [ 2612.480226]  [<ffffffffa034e1e3>] btrfs_kill_super+0x17/0x23 [btrfs]
      [ 2612.481471]  [<ffffffff81154307>] deactivate_locked_super+0x3b/0x50
      [ 2612.482686]  [<ffffffff811547a7>] deactivate_super+0x3f/0x43
      [ 2612.483791]  [<ffffffff8116b3ed>] cleanup_mnt+0x59/0x78
      [ 2612.484842]  [<ffffffff8116b44c>] __cleanup_mnt+0x12/0x14
      [ 2612.485900]  [<ffffffff8105d019>] task_work_run+0x8f/0xbc
      [ 2612.486960]  [<ffffffff810028d8>] do_notify_resume+0x5a/0x6b
      [ 2612.488083]  [<ffffffff81236e5b>] ? trace_hardirqs_on_thunk+0x3a/0x3f
      [ 2612.489333]  [<ffffffff8142a17f>] int_signal+0x12/0x17
      [ 2612.490353] ---[ end trace 54a960a6bdcb8d93 ]---
      [ 2612.557253] VFS: Busy inodes after unmount of sdb. Self-destruct in 5 seconds.  Have a nice day...
      
      Kmemleak confirmed the ordered extent leak (and btrfs inode specific
      structures such as delayed nodes):
      
      $ cat /sys/kernel/debug/kmemleak
      unreferenced object 0xffff880154290db0 (size 576):
        comm "btrfsck", pid 21980, jiffies 4295542503 (age 1273.412s)
        hex dump (first 32 bytes):
          01 40 00 00 01 00 00 00 b0 1d f1 4e 01 88 ff ff  .@.........N....
          00 00 00 00 00 00 00 00 c8 0d 29 54 01 88 ff ff  ..........)T....
        backtrace:
          [<ffffffff8141d74d>] kmemleak_update_trace+0x4c/0x6a
          [<ffffffff8122f2c0>] radix_tree_node_alloc+0x6d/0x83
          [<ffffffff8122fb26>] __radix_tree_create+0x109/0x190
          [<ffffffff8122fbdd>] radix_tree_insert+0x30/0xac
          [<ffffffffa03b9bde>] btrfs_get_or_create_delayed_node+0x130/0x187 [btrfs]
          [<ffffffffa03bb82d>] btrfs_delayed_delete_inode_ref+0x32/0xac [btrfs]
          [<ffffffffa0379dae>] __btrfs_unlink_inode+0xee/0x288 [btrfs]
          [<ffffffffa037c715>] btrfs_unlink_inode+0x1e/0x40 [btrfs]
          [<ffffffffa037c797>] btrfs_unlink+0x60/0x9b [btrfs]
          [<ffffffff8115d7f0>] vfs_unlink+0x9c/0xed
          [<ffffffff8115f5de>] do_unlinkat+0x12c/0x1fa
          [<ffffffff811601a7>] SyS_unlinkat+0x29/0x2b
          [<ffffffff81429e92>] system_call_fastpath+0x12/0x17
          [<ffffffffffffffff>] 0xffffffffffffffff
      unreferenced object 0xffff88014ef11db0 (size 576):
        comm "rm", pid 22009, jiffies 4295542593 (age 1273.052s)
        hex dump (first 32 bytes):
          02 00 00 00 01 00 00 00 00 00 00 00 00 00 00 00  ................
          00 00 00 00 00 00 00 00 c8 1d f1 4e 01 88 ff ff  ...........N....
        backtrace:
          [<ffffffff8141d74d>] kmemleak_update_trace+0x4c/0x6a
          [<ffffffff8122f2c0>] radix_tree_node_alloc+0x6d/0x83
          [<ffffffff8122fb26>] __radix_tree_create+0x109/0x190
          [<ffffffff8122fbdd>] radix_tree_insert+0x30/0xac
          [<ffffffffa03b9bde>] btrfs_get_or_create_delayed_node+0x130/0x187 [btrfs]
          [<ffffffffa03bb82d>] btrfs_delayed_delete_inode_ref+0x32/0xac [btrfs]
          [<ffffffffa0379dae>] __btrfs_unlink_inode+0xee/0x288 [btrfs]
          [<ffffffffa037c715>] btrfs_unlink_inode+0x1e/0x40 [btrfs]
          [<ffffffffa037c797>] btrfs_unlink+0x60/0x9b [btrfs]
          [<ffffffff8115d7f0>] vfs_unlink+0x9c/0xed
          [<ffffffff8115f5de>] do_unlinkat+0x12c/0x1fa
          [<ffffffff811601a7>] SyS_unlinkat+0x29/0x2b
          [<ffffffff81429e92>] system_call_fastpath+0x12/0x17
          [<ffffffffffffffff>] 0xffffffffffffffff
      unreferenced object 0xffff8800336feda8 (size 584):
        comm "aio-stress", pid 22031, jiffies 4295543006 (age 1271.400s)
        hex dump (first 32 bytes):
          00 40 3e 00 00 00 00 00 00 00 8f 42 00 00 00 00  .@>........B....
          00 00 01 00 00 00 00 00 00 00 01 00 00 00 00 00  ................
        backtrace:
          [<ffffffff8114eb34>] create_object+0x172/0x29a
          [<ffffffff8141d790>] kmemleak_alloc+0x25/0x41
          [<ffffffff81141ae6>] kmemleak_alloc_recursive.constprop.52+0x16/0x18
          [<ffffffff81145288>] kmem_cache_alloc+0xf7/0x198
          [<ffffffffa0389243>] __btrfs_add_ordered_extent+0x43/0x309 [btrfs]
          [<ffffffffa038968b>] btrfs_add_ordered_extent_dio+0x12/0x14 [btrfs]
          [<ffffffffa03810e2>] btrfs_get_blocks_direct+0x3ef/0x571 [btrfs]
          [<ffffffff81181349>] do_blockdev_direct_IO+0x62a/0xb47
          [<ffffffff8118189a>] __blockdev_direct_IO+0x34/0x36
          [<ffffffffa03776e5>] btrfs_direct_IO+0x16a/0x1e8 [btrfs]
          [<ffffffff81100373>] generic_file_direct_write+0xb8/0x12d
          [<ffffffffa038615c>] btrfs_file_write_iter+0x24b/0x42f [btrfs]
          [<ffffffff8118bb0d>] aio_run_iocb+0x2b7/0x32e
          [<ffffffff8118c99a>] do_io_submit+0x26e/0x2ff
          [<ffffffff8118ca3b>] SyS_io_submit+0x10/0x12
          [<ffffffff81429e92>] system_call_fastpath+0x12/0x17
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: Chris Mason <clm@fb.com>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      67b67dfa
    • mei: make device disabled on stop unconditionally · 85f1ab83
      Alexander Usyskin authored
      commit 6c15a851 upstream.
      
      Set the internal device state to disabled after the hardware reset in the stop
      flow. This covers cases where the driver was not brought to the disabled state
      because of an error, and in the stop flow we do not wish to retry the reset.
      Signed-off-by: Alexander Usyskin <alexander.usyskin@intel.com>
      Signed-off-by: Tomas Winkler <tomas.winkler@intel.com>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      85f1ab83
    • iio:adc:mcp3422 Fix incorrect scales table · 2c76bd8b
      Angelo Compagnucci authored
      commit 9e128ced upstream.
      
      This patch fixes the incorrect order of the mcp3422_scales table; the values
      were erroneously transposed.
      It also removes an unused array and a wrong comment.
      Signed-off-by: Angelo Compagnucci <angelo.compagnucci@gmail.com>
      Signed-off-by: Jonathan Cameron <jic23@kernel.org>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      2c76bd8b
    • iio: ad5686: fix optional reference voltage declaration · 328499bc
      Urs Fässler authored
      commit da019f59 upstream.
      
      When not using the "_optional" function, a dummy regulator is returned
      and the driver fails to initialize.
      Signed-off-by: Urs Fässler <urs.fassler@bytesatwork.ch>
      Acked-by: Lars-Peter Clausen <lars@metafoo.de>
      Signed-off-by: Jonathan Cameron <jic23@kernel.org>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      328499bc
    • iio: mxs-lradc: only update the buffer when its conversions have finished · 30fa0dbf
      Kristina Martšenko authored
      commit 89bb35e2 upstream.
      
      Using the touchscreen while running buffered capture results in the
      buffer reporting lots of wrong values, often just zeros. This is because
      we push readings to the buffer every time a touchscreen interrupt
      arrives, including when the buffer's own conversions have not yet
      finished. So let's only push to the buffer when its conversions are
      ready.
      Signed-off-by: Kristina Martšenko <kristina.martsenko@gmail.com>
      Reviewed-by: Marek Vasut <marex@denx.de>
      Signed-off-by: Jonathan Cameron <jic23@kernel.org>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      30fa0dbf
    • iio: mxs-lradc: make ADC reads not unschedule touchscreen conversions · 9f074c6a
      Kristina Martšenko authored
      commit 6abe0300 upstream.
      
      Reading a channel through sysfs, or starting a buffered capture, can
      occasionally turn off the touchscreen.
      
      This is because the read_raw() and buffer preenable()/postdisable()
      callbacks unschedule current conversions on all channels. If a delay
      channel happens to schedule a touchscreen conversion at the same time,
      the conversion gets cancelled and the touchscreen sequence stops.
      
      This is probably related to this note from the reference manual:
      
      	"If a delay group schedules channels to be sampled and a manual
      	write to the schedule field in CTRL0 occurs while the block is
      	discarding samples, the LRADC will switch to the new schedule
      	and will not sample the channels that were previously scheduled.
      	The time window for this to happen is very small and lasts only
      	while the LRADC is discarding samples."
      
      So make the callbacks only unschedule conversions for the channels they
      use. This means channel 0 for read_raw() and channels 0-5 for the buffer
      (if the touchscreen is enabled). Since the touchscreen uses different
      channels (6 and 7), it no longer gets turned off.
      
      This is tested and fixes the issue on i.MX28, but hasn't been tested on
      i.MX23.
      Signed-off-by: Kristina Martšenko <kristina.martsenko@gmail.com>
      Reviewed-by: Marek Vasut <marex@denx.de>
      Signed-off-by: Jonathan Cameron <jic23@kernel.org>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      9f074c6a
    • iio: mxs-lradc: make ADC reads not disable touchscreen interrupts · 1322dc3b
      Kristina Martšenko authored
      commit 86bf7f3e upstream.
      
      Reading a channel through sysfs, or starting a buffered capture, will
      currently turn off the touchscreen. This is because the read_raw() and
      buffer preenable()/postdisable() callbacks disable interrupts for all
      LRADC channels, including those the touchscreen uses.
      
      So make the callbacks only disable interrupts for the channels they use.
      This means channel 0 for read_raw() and channels 0-5 for the buffer (if
      the touchscreen is enabled). Since the touchscreen uses different
      channels (6 and 7), it no longer gets turned off.
      
      Note that only i.MX28 is affected by this issue, i.MX23 should be fine.
      Signed-off-by: Kristina Martšenko <kristina.martsenko@gmail.com>
      Reviewed-by: Marek Vasut <marex@denx.de>
      Signed-off-by: Jonathan Cameron <jic23@kernel.org>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      1322dc3b
    • iio: mxs-lradc: separate touchscreen and buffer virtual channels · 567a6d5e
      Kristina Martšenko authored
      commit f81197b8 upstream.
      
      The touchscreen was initially designed [1] to map all of its physical
      channels to one virtual channel, leaving buffered capture to use the
      remaining 7 virtual channels. When the touchscreen was reimplemented
      [2], it was made to use four virtual channels, which overlap and
      conflict with the channels the buffer uses.
      
      As a result, when the buffer is enabled, the touchscreen's virtual
      channels are remapped to whichever physical channels the buffer was
      configured with, causing the touchscreen to read those instead of the
      touch measurement channels. Effectively the touchscreen stops working.
      
      So here we separate the channels again, giving the touchscreen 2 virtual
      channels and the buffer 6. We can't give the touchscreen just 1 channel
      as before, as the current pressure calculation requires 2 channels to be
      read at the same time.
      
      This makes the touchscreen continue to work during buffered capture. It
      has been tested on i.MX28, but not on i.MX23.
      
      [1] 06ddd353 ("iio: mxs: Implement support for touchscreen")
      [2] dee05308 ("Staging/iio/adc/touchscreen/MXS: add interrupt driven
      touch detection")
      Signed-off-by: Kristina Martšenko <kristina.martsenko@gmail.com>
      Reviewed-by: Marek Vasut <marex@denx.de>
      Signed-off-by: Jonathan Cameron <jic23@kernel.org>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      567a6d5e
    • iio: imu: adis16400: Fix sign extension · c0d65db7
      Rasmus Villemoes authored
      commit 19e353f2 upstream.
      
      The intention is obviously to sign-extend a 12 bit quantity. But
      because of C's promotion rules, the assignment is equivalent to "val16
      &= 0xfff;". Use the proper API for this.
      Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
      Acked-by: Lars-Peter Clausen <lars@metafoo.de>
      Signed-off-by: Jonathan Cameron <jic23@kernel.org>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      c0d65db7
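      For illustration only (hypothetical values, not the driver code): masking a raw
      register value keeps the 12-bit pattern as an unsigned quantity, which is the
      behaviour the fix above corrects; shifting the field up to the top of the word
      and arithmetic-shifting it back sign-extends it, which is what the kernel's
      sign_extend32() helper does. A minimal userspace sketch:

        #include <stdio.h>
        #include <stdint.h>

        /* Sign-extend the low "bits" bits of v (same idea as sign_extend32()). */
        static int32_t sext32(uint32_t v, unsigned int bits)
        {
            unsigned int shift = 32 - bits;

            return (int32_t)(v << shift) >> shift;
        }

        int main(void)
        {
            uint16_t raw = 0x0F85;  /* 12-bit two's complement encoding of -123 */

            int32_t wrong = raw & 0xFFF;             /* stays +3973 after promotion */
            int32_t right = sext32(raw & 0xFFF, 12); /* properly sign-extended: -123 */

            printf("masked only: %d, sign-extended: %d\n", wrong, right);
            return 0;
        }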
    • iio: mxs-lradc: fix iio channel map regression · ec23c856
      Stefan Wahren authored
      commit 03305e53 upstream.
      
      Since commit c8231a9a ("iio: mxs-lradc: compute temperature
      from channel 8 and 9") with the removal of adc channel 9 there is
      no 1-1 mapping in the channel spec.
      
      All hwmon channel values above 9 are accessible via their index minus
      one. So add a hidden iio channel 9 to fix this issue.
      Signed-off-by: Stefan Wahren <stefan.wahren@i2se.com>
      Acked-by: Alexandre Belloni <alexandre.belloni@free-electrons.com>
      Reviewed-by: Marek Vasut <marex@denx.de>
      Signed-off-by: Jonathan Cameron <jic23@kernel.org>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      ec23c856
    • x86/fpu/xsaves: Fix improper uses of __ex_table · 6ddd115f
      Quentin Casasnovas authored
      commit 06c8173e upstream.
      
      Commit:
      
        f31a9f7c ("x86/xsaves: Use xsaves/xrstors to save and restore xsave area")
      
      introduced alternative instructions for XSAVES/XRSTORS and commit:
      
        adb9d526 ("x86/xsaves: Add xsaves and xrstors support for booting time")
      
      added support for the XSAVES/XRSTORS instructions at boot time.
      
      Unfortunately both failed to properly protect them against faulting:
      
      The 'xstate_fault' macro will use the closest label named '1'
      backward and that ends up in the .altinstr_replacement section
      rather than in .text. This means that the kernel will never find
      in the __ex_table the .text address where this instruction might
      fault, leading to serious problems if userspace manages to
      trigger the fault.
      Signed-off-by: Quentin Casasnovas <quentin.casasnovas@oracle.com>
      Signed-off-by: Jamie Iles <jamie.iles@oracle.com>
      [ Improved the changelog, fixed some whitespace noise. ]
      Acked-by: Borislav Petkov <bp@alien8.de>
      Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Allan Xavier <mr.a.xavier@gmail.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Fixes: adb9d526 ("x86/xsaves: Add xsaves and xrstors support for booting time")
      Fixes: f31a9f7c ("x86/xsaves: Use xsaves/xrstors to save and restore xsave area")
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      6ddd115f
    • x86/asm/entry/64: Remove a bogus 'ret_from_fork' optimization · ce5dd33c
      Andy Lutomirski authored
      commit 956421fb upstream.
      
      'ret_from_fork' checks TIF_IA32 to determine whether 'pt_regs' and
      the related state make sense for 'ret_from_sys_call'.  This is
      entirely the wrong check.  TS_COMPAT would make a little more
      sense, but there's really no point in keeping this optimization
      at all.
      
      This fixes a return to the wrong user CS if we came from int
      0x80 in a 64-bit task.
      Signed-off-by: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/4710be56d76ef994ddf59087aad98c000fbab9a4.1424989793.git.luto@amacapital.net
      [ Backported from tip:x86/asm. ]
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      ce5dd33c
    • target: Check for LBA + sectors wrap-around in sbc_parse_cdb · cdc937a5
      Nicholas Bellinger authored
      commit aa179935 upstream.
      
      This patch adds a check to sbc_parse_cdb() in order to detect when
      an LBA + sector vs. end-of-device calculation wraps when the LBA is
      sufficiently large (e.g. 0xFFFFFFFFFFFFFFFF).
      
      Cc: Martin Petersen <martin.petersen@oracle.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      cdc937a5
    • target: Add missing WRITE_SAME end-of-device sanity check · f38a130b
      Nicholas Bellinger authored
      commit 8e575c50 upstream.
      
      This patch adds a check to sbc_setup_write_same() to verify that
      the incoming WRITE_SAME LBA + number of blocks does not extend
      past the end-of-device.
      
      Also check for potential LBA wrap-around as well.
      Reported-by: Bart Van Assche <bart.vanassche@sandisk.com>
      Cc: Martin Petersen <martin.petersen@oracle.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      f38a130b
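      For illustration only (hypothetical names, not the target code): the kind of
      range check the two patches above describe has to reject both a range that
      runs past the end of the device and a 64-bit LBA + sectors sum that wraps
      around, since a wrapped sum would otherwise pass a plain end-of-device
      comparison. A minimal sketch:

        #include <stdio.h>
        #include <stdbool.h>
        #include <stdint.h>

        /* Reject a range that wraps around or runs past the device end. */
        static bool range_ok(uint64_t lba, uint32_t sectors, uint64_t dev_sectors)
        {
            uint64_t end = lba + sectors;

            if (end < lba)              /* 64-bit wrap-around */
                return false;
            return end <= dev_sectors;  /* must stay within the device */
        }

        int main(void)
        {
            printf("%d\n", range_ok(0xFFFFFFFFFFFFFFFFULL, 8, 1 << 20)); /* 0: wraps */
            printf("%d\n", range_ok(100, 8, 1 << 20));                   /* 1: fine  */
            return 0;
        }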
    • target: Fix PR_APTPL_BUF_LEN buffer size limitation · aff40baf
      Nicholas Bellinger authored
      commit f161d4b4 upstream.
      
      This patch addresses the original PR_APTPL_BUF_LEN = 8k limitation
      for write-out of PR APTPL metadata that Martin has recently been
      running into.
      
      It changes core_scsi3_update_and_write_aptpl() to use vzalloc'ed
      memory instead of kzalloc, and increases the default hardcoded
      length to 256k.
      
      It also adds logic in core_scsi3_update_and_write_aptpl() to double
      the original length upon core_scsi3_update_aptpl_buf() failure, and
      retries until the vzalloc'ed buffer is large enough to accommodate
      the outgoing APTPL metadata.
      Reported-by: Martin Svec <martin.svec@zoner.cz>
      Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      aff40baf
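      For illustration only (illustrative names; calloc stands in for vzalloc): the
      grow-and-retry pattern described above doubles the buffer whenever the
      serializer reports it is too small, then retries the write-out. A minimal
      userspace sketch:

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        /* Stand-in for the metadata serializer: fails if buf is too small. */
        static int format_metadata(char *buf, size_t len, const char *payload)
        {
            if (strlen(payload) + 1 > len)
                return -1;      /* caller must retry with a larger buffer */
            memcpy(buf, payload, strlen(payload) + 1);
            return 0;
        }

        int main(void)
        {
            const char *payload = "pretend this is a large APTPL metadata blob";
            size_t len = 8;     /* deliberately too small at first */
            char *buf;

            for (;;) {
                buf = calloc(1, len);
                if (!buf)
                    return 1;
                if (format_metadata(buf, len, payload) == 0)
                    break;
                free(buf);      /* too small: double the length and retry */
                len *= 2;
            }
            printf("fit in %zu bytes: %s\n", len, buf);
            free(buf);
            return 0;
        }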
    • drm/i915: Correct the IOSF Dev_FN field for IOSF transfers · e12d499e
      Shobhit Kumar authored
      commit d180d2bb upstream.
      
      As per the specification, the SB_DevFn is the PCI_DEVFN of the target
      device and not the source. So PCI_DEVFN(2,0) is not correct. Further the
      port ID should be enough to identify devices unless they are MFD. The
      SB_DevFn was intended to remove ambiguity in case of these MFD devices.
      
      For non MFD devices the recommendation for the target device IP was to
      ignore these fields, but not all of them followed the recommendation.
      Some like CCK ignore these fields and hence PCI_DEVFN(2, 0) works and so
      does PCI_DEVFN(0, 0) as it works for DPIO. The issue came to light because
      of GPIONC which was not getting programmed correctly with PCI_DEVFN(2, 0).
      It turned out that this did not follow the recommendation and expected 0
      in this field.
      
      In general the recommendation is to use SB_DevFn as PCI_DEVFN(0, 0) for
      all devices except target PCI devices.
      Signed-off-by: Shobhit Kumar <shobhit.kumar@intel.com>
      Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
      Signed-off-by: Jani Nikula <jani.nikula@intel.com>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      e12d499e
    • drm/i915: Prevent use-after-free in invalidate_range_start callback · 5c8bf2b8
      Michał Winiarski authored
      commit 460822b0 upstream.
      
      It's possible for the invalidate_range_start mmu notifier callback to race
      against userptr object release. If the gem object was released prior to
      obtaining the spinlock in invalidate_range_start, we hit a null
      pointer dereference.
      
      Testcase: igt/gem_userptr_blits/stress-mm-invalidate-close
      Testcase: igt/gem_userptr_blits/stress-mm-invalidate-close-overlap
      Cc: Chris Wilson <chris@chris-wilson.co.uk>
      Signed-off-by: Michał Winiarski <michal.winiarski@intel.com>
      Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
      [Jani: added code comment suggested by Chris]
      Signed-off-by: Jani Nikula <jani.nikula@intel.com>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      5c8bf2b8
    • drm/i915: Drop vblank wait from intel_dp_link_down · 4182c01b
      Daniel Vetter authored
      commit 0ca09685 upstream.
      
      Nothing in Bspec seems to indicate that we actually need this, and it
      looks like it can't work, since by this point the pipe is off and so
      vblanks won't really happen any more.
      
      Note that Bspec mentions that it takes a vblank for this bit to
      change, but _only_ when enabling.
      
      Dropping this code quenches an annoying backtrace introduced by the
      more anal checking since
      
      commit 51e31d49
      Author: Daniel Vetter <daniel.vetter@ffwll.ch>
      Date:   Mon Sep 15 12:36:02 2014 +0200
      
          drm/i915: Use generic vblank wait
      
      Note: This fixes the fallout from the above commit, but does not address
      the shortcomings of the IBX transcoder select workaround implementation
      discussed during review [1].
      
      [1] http://mid.gmane.org/87y4o7usxf.fsf@intel.com
      
      Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=86095
      Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
      Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
      Signed-off-by: Jani Nikula <jani.nikula@intel.com>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      4182c01b
    • drm/i915: Insert a command barrier on BLT/BSD cache flushes · 2de5e09a
      Chris Wilson authored
      commit f0a1fb10 upstream.
      
      This looked like an odd regression from
      
      commit ec5cc0f9
      Author: Chris Wilson <chris@chris-wilson.co.uk>
      Date:   Thu Jun 12 10:28:55 2014 +0100
      
          drm/i915: Restrict GPU boost to the RCS engine
      
      but in reality it uncovered a much older coherency bug. The issue that
      boosting the GPU frequency on the BCS ring was masking was that we could
      wake the CPU up after completion of a BCS batch and inspect memory prior
      to the write cache being fully evicted. In order to serialise the
      breadcrumb interrupt (and so ensure that the CPU's view of memory is
      coherent) we need to perform a post-sync operation in the MI_FLUSH_DW.
      
      v2: Fix all the MI_FLUSH_DW (bsd plus the duplication in execlists).
      
      Also fix the invalidate_domains mask in gen8_emit_flush() for ring !=
      VCS.
      
      Testcase: gpuX-rcs-gpu-read-after-write
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Acked-by: Daniel Vetter <daniel@ffwll.ch>
      Signed-off-by: Jani Nikula <jani.nikula@intel.com>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      2de5e09a
    • drm/radeon: fix voltage setup on hawaii · 93fd529d
      Alex Deucher authored
      commit 09b6e85f upstream.
      
      Missing parameter when fetching the real voltage values
      from atom.  Fixes problems with dynamic clocking on
      certain boards.
      
      bug:
      https://bugs.freedesktop.org/show_bug.cgi?id=87457
      Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      93fd529d
    • drm/radeon/dp: Set EDP_CONFIGURATION_SET for bridge chips if necessary · c112fd14
      Alex Deucher authored
      commit 66c2b84b upstream.
      
      Don't restrict it to just eDP panels.  Some LVDS bridge chips require
      this.  Fixes blank panels on resume on certain laptops.  Noticed
      by mrnuke on IRC.
      
      bug:
      https://bugs.freedesktop.org/show_bug.cgi?id=42960
      Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      c112fd14
    • drm/radeon: workaround for CP HW bug on CIK · 5b777674
      Christian König authored
      commit a9c73a0e upstream.
      
      Emit the EOP twice to avoid cache flushing problems.
      Signed-off-by: Christian König <christian.koenig@amd.com>
      Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      5b777674
    • drm/radeon: only enable kv/kb dpm interrupts once v3 · fee477d5
      Alex Deucher authored
      commit 410af8d7 upstream.
      
      Enable at init and disable on fini. Workaround for hardware problems.
      
      v2 (chk): extend commit message
      v3: add new function
      Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
      Signed-off-by: Christian König <christian.koenig@amd.com> (v2)
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      fee477d5
    • drm/radeon: Don't try to enable write-combining without PAT · 22d4d7e7
      Michel Dänzer authored
      commit a53fa438 upstream.
      
      Doing so can cause things to become slow.
      
      Print a warning at compile time and an informative message at runtime in
      that case.
      
      Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=88758
      Reviewed-by: Christian König <christian.koenig@amd.com>
      Signed-off-by: Michel Dänzer <michel.daenzer@amd.com>
      Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      22d4d7e7
    • drm/tegra: Use correct relocation target offsets · c5131a98
      David Ung authored
      commit 31f40f86 upstream.
      
      When copying a relocation from userspace, copy the correct target
      offset.
      Signed-off-by: David Ung <davidu@nvidia.com>
      Fixes: 961e3bea ("drm/tegra: Make job submission 64-bit safe")
      [treding@nvidia.com: provide a better commit message]
      Signed-off-by: Thierry Reding <treding@nvidia.com>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      c5131a98
    • mm: fix negative nr_isolated counts · b7c386cf
      Hugh Dickins authored
      commit ff59909a upstream.
      
      The vmstat interfaces are good at hiding negative counts (at least when
      CONFIG_SMP); but if you peer behind the curtain, you find that
      nr_isolated_anon and nr_isolated_file soon go negative, and grow ever
      more negative: so they can absorb larger and larger numbers of isolated
      pages, yet still appear to be zero.
      
      I'm happy to avoid a congestion_wait() when too_many_isolated() myself;
      but I guess it's there for a good reason, in which case we ought to get
      too_many_isolated() working again.
      
      The imbalance comes from isolate_migratepages()'s ISOLATE_ABORT case:
      putback_movable_pages() decrements the NR_ISOLATED counts, but we forgot
      to call acct_isolated() to increment them.
      
      It is possible that the bug which this patch fixes could cause OOM kills
      when the system still has a lot of reclaimable page cache.
      
      Fixes: edc2ca61 ("mm, compaction: move pageblock checks up from isolate_migratepages_range()")
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      b7c386cf
    • mm/memory.c: actually remap enough memory · 463fb5a2
      Grazvydas Ignotas authored
      commit 9cb12d7b upstream.
      
      For whatever reason, generic_access_phys() only remaps one page, but
      actually allows access of arbitrary size.  It's quite easy to trigger
      large reads, like printing out a large structure with gdb, which leads to a
      crash.  Fix it by remapping the correct size.
      
      Fixes: 28b2ee20 ("access_process_vm device memory infrastructure")
      Signed-off-by: Grazvydas Ignotas <notasas@gmail.com>
      Cc: Rik van Riel <riel@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      463fb5a2
    • mm/compaction: fix wrong order check in compact_finished() · 42af81da
      Joonsoo Kim authored
      commit 372549c2 upstream.
      
      What we want to check here is whether there is a high-order freepage in the
      buddy list of another migratetype, in order to steal it without fragmentation.
      But the current code just checks cc->order, which means the allocation request
      order.  So this is wrong.
      
      Without this fix, non-movable synchronous compaction below pageblock order
      would not stop until compaction is complete, because the migratetype of
      most pageblocks is movable and a high-order freepage made by compaction is
      usually on the movable buddy list.
      
      There is some report related to this bug. See below link.
      
        http://www.spinics.net/lists/linux-mm/msg81666.html
      
      Although the affected system still has load spikes coming from compaction,
      this makes that system completely stable and responsive according to his
      report.
      
      stress-highalloc test in mmtests with non movable order 7 allocation
      doesn't show any notable difference in allocation success rate, but, it
      shows more compaction success rate.
      
      Compaction success rate (Compaction success * 100 / Compaction stalls, %)
      18.47 : 28.94
      
      Fixes: 1fb3f8ca ("mm: compaction: capture a suitable high-order page immediately when it is made available")
      Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Reviewed-by: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Rik van Riel <riel@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      42af81da
    • mm/nommu.c: fix arithmetic overflow in __vm_enough_memory() · f0f7d8f6
      Roman Gushchin authored
      commit 8138a67a upstream.
      
      I noticed that "allowed" can easily overflow by falling below 0, because
      (total_vm / 32) can be larger than "allowed".  The problem occurs in
      OVERCOMMIT_NONE mode.
      
      In this case, a huge allocation can succeed and overcommit the system
      (despite OVERCOMMIT_NONE mode).  All subsequent allocations will fail
      (system-wide), so the system becomes unusable.
      
      The problem was masked out by commit c9b1d098
      ("mm: limit growth of 3% hardcoded other user reserve"),
      but it's easy to reproduce it on older kernels:
      1) set overcommit_memory sysctl to 2
      2) mmap() large file multiple times (with VM_SHARED flag)
      3) try to malloc() large amount of memory
      
      It can also be reproduced on newer kernels, but a misconfigured
      sysctl_user_reserve_kbytes is required.
      
      Fix this issue by switching to signed arithmetic here.
      Signed-off-by: Roman Gushchin <klamm@yandex-team.ru>
      Cc: Andrew Shewmaker <agshew@gmail.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      f0f7d8f6
    • mm/mmap.c: fix arithmetic overflow in __vm_enough_memory() · 64ac3205
      Roman Gushchin authored
      commit 5703b087 upstream.
      
      I noticed that "allowed" can easily overflow by falling below 0,
      because (total_vm / 32) can be larger than "allowed".  The problem
      occurs in OVERCOMMIT_NONE mode.
      
      In this case, a huge allocation can succeed and overcommit the system
      (despite OVERCOMMIT_NONE mode).  All subsequent allocations will fail
      (system-wide), so the system becomes unusable.
      
      The problem was masked out by commit c9b1d098
      ("mm: limit growth of 3% hardcoded other user reserve"),
      but it's easy to reproduce it on older kernels:
      1) set overcommit_memory sysctl to 2
      2) mmap() large file multiple times (with VM_SHARED flag)
      3) try to malloc() large amount of memory
      
      It can also be reproduced on newer kernels, but a misconfigured
      sysctl_user_reserve_kbytes is required.
      
      Fix this issue by switching to signed arithmetic here.
      
      [akpm@linux-foundation.org: use min_t]
      Signed-off-by: Roman Gushchin <klamm@yandex-team.ru>
      Cc: Andrew Shewmaker <agshew@gmail.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
      Reviewed-by: Michal Hocko <mhocko@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      64ac3205
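      For illustration only (illustrative numbers, not the mm code): the overflow
      both __vm_enough_memory() fixes above address comes from doing the
      subtraction in unsigned arithmetic, where a reserve larger than "allowed"
      wraps to a huge value instead of going negative, so the check always passes.
      Doing it in signed arithmetic (or clamping, as the min_t note suggests)
      keeps the result sane. A minimal sketch:

        #include <stdio.h>

        int main(void)
        {
            unsigned long allowed = 1000;   /* pages the commit limit permits    */
            unsigned long reserve = 5000;   /* e.g. total_vm / 32, can be larger */

            unsigned long broken = allowed - reserve;   /* wraps to a huge value  */
            long fixed = (long)allowed - (long)reserve; /* goes properly negative */

            printf("unsigned: %lu, signed: %ld\n", broken, fixed);
            return 0;
        }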
    • mm: when stealing freepages, also take pages created by splitting buddy page · 8473518c
      Vlastimil Babka authored
      commit 99592d59 upstream.
      
      When studying page stealing, I noticed some weird looking decisions in
      try_to_steal_freepages().  The first I assume is a bug (Patch 1), the
      following two patches were driven by evaluation.
      
      Testing was done with stress-highalloc of mmtests, using the
      mm_page_alloc_extfrag tracepoint and postprocessing to get counts of how
      often page stealing occurs for individual migratetypes, and what
      migratetypes are used for fallbacks.  Arguably, the worst case of page
      stealing is when an UNMOVABLE allocation steals from a MOVABLE pageblock.
      A RECLAIMABLE allocation stealing from a MOVABLE pageblock is also not
      ideal, so the goal is to minimize these two cases.
      
      The evaluation of v2 wasn't always a clear win and Joonsoo questioned the
      results.  Here I used a different baseline which includes the RFC
      compaction improvements from [1].  I found that the compaction
      improvements reduce the variability of stress-highalloc, so there's less
      noise in the data.
      
      First, let's look at stress-highalloc configured to do sync compaction,
      and how these patches reduce page stealing events during the test.  The
      first column is after a fresh reboot, the other two are re-iterations of
      the test without reboot.  Everything was accumulated over 5 re-iterations
      (so the benchmark was run 5x3 times with 5 fresh restarts).
      
      Baseline:
      
                                                         3.19-rc4        3.19-rc4        3.19-rc4
                                                        5-nothp-1       5-nothp-2       5-nothp-3
      Page alloc extfrag event                               10264225     8702233    10244125
      Extfrag fragmenting                                    10263271     8701552    10243473
      Extfrag fragmenting for unmovable                         13595       17616       15960
      Extfrag fragmenting unmovable placed with movable          7989       12193        8447
      Extfrag fragmenting for reclaimable                         658        1840        1817
      Extfrag fragmenting reclaimable placed with movable         558        1677        1679
      Extfrag fragmenting for movable                        10249018     8682096    10225696
      
      With Patch 1:
                                                         3.19-rc4        3.19-rc4        3.19-rc4
                                                        6-nothp-1       6-nothp-2       6-nothp-3
      Page alloc extfrag event                               11834954     9877523     9774860
      Extfrag fragmenting                                    11833993     9876880     9774245
      Extfrag fragmenting for unmovable                          7342       16129       11712
      Extfrag fragmenting unmovable placed with movable          4191       10547        6270
      Extfrag fragmenting for reclaimable                         373        1130         923
      Extfrag fragmenting reclaimable placed with movable         302         906         738
      Extfrag fragmenting for movable                        11826278     9859621     9761610
      
      With Patch 2:
                                                         3.19-rc4        3.19-rc4        3.19-rc4
                                                        7-nothp-1       7-nothp-2       7-nothp-3
      Page alloc extfrag event                                4725990     3668793     3807436
      Extfrag fragmenting                                     4725104     3668252     3806898
      Extfrag fragmenting for unmovable                          6678        7974        7281
      Extfrag fragmenting unmovable placed with movable          2051        3829        4017
      Extfrag fragmenting for reclaimable                         429        1208        1278
      Extfrag fragmenting reclaimable placed with movable         369         976        1034
      Extfrag fragmenting for movable                         4717997     3659070     3798339
      
      With Patch 3:
                                                         3.19-rc4        3.19-rc4        3.19-rc4
                                                        8-nothp-1       8-nothp-2       8-nothp-3
      Page alloc extfrag event                                5016183     4700142     3850633
      Extfrag fragmenting                                     5015325     4699613     3850072
      Extfrag fragmenting for unmovable                          1312        3154        3088
      Extfrag fragmenting unmovable placed with movable          1115        2777        2714
      Extfrag fragmenting for reclaimable                         437        1193        1097
      Extfrag fragmenting reclaimable placed with movable         330         969         879
      Extfrag fragmenting for movable                         5013576     4695266     3845887
      
      In v2 we saw an apparent regression with Patch 1 for unmovable events;
      this is now gone, suggesting it was indeed noise.  Here, each patch
      improves the situation for unmovable events.  Reclaimable is improved by
      Patch 1 and then either stays the same modulo noise, or is perhaps
      slightly worse - a small price for the unmovable improvements, IMHO.  The
      number of movable allocations falling back to other migratetypes is the
      noisiest, but it is nevertheless halved by Patch 2.  These are the least
      critical, as compaction can move them around.
      
      Looking at allocation success rates, the patches don't affect them.
      
      Baseline:
                                   3.19-rc4              3.19-rc4              3.19-rc4
                                  5-nothp-1             5-nothp-2             5-nothp-3
      Success 1 Min         49.00 (  0.00%)       42.00 ( 14.29%)       41.00 ( 16.33%)
      Success 1 Mean        51.00 (  0.00%)       45.00 ( 11.76%)       42.60 ( 16.47%)
      Success 1 Max         55.00 (  0.00%)       51.00 (  7.27%)       46.00 ( 16.36%)
      Success 2 Min         53.00 (  0.00%)       47.00 ( 11.32%)       44.00 ( 16.98%)
      Success 2 Mean        59.60 (  0.00%)       50.80 ( 14.77%)       48.20 ( 19.13%)
      Success 2 Max         64.00 (  0.00%)       56.00 ( 12.50%)       52.00 ( 18.75%)
      Success 3 Min         84.00 (  0.00%)       82.00 (  2.38%)       78.00 (  7.14%)
      Success 3 Mean        85.60 (  0.00%)       82.80 (  3.27%)       79.40 (  7.24%)
      Success 3 Max         86.00 (  0.00%)       83.00 (  3.49%)       80.00 (  6.98%)
      
      Patch 1:
                                   3.19-rc4              3.19-rc4              3.19-rc4
                                  6-nothp-1             6-nothp-2             6-nothp-3
      Success 1 Min         49.00 (  0.00%)       44.00 ( 10.20%)       44.00 ( 10.20%)
      Success 1 Mean        51.80 (  0.00%)       46.00 ( 11.20%)       45.80 ( 11.58%)
      Success 1 Max         54.00 (  0.00%)       49.00 (  9.26%)       49.00 (  9.26%)
      Success 2 Min         58.00 (  0.00%)       49.00 ( 15.52%)       48.00 ( 17.24%)
      Success 2 Mean        60.40 (  0.00%)       51.80 ( 14.24%)       50.80 ( 15.89%)
      Success 2 Max         63.00 (  0.00%)       54.00 ( 14.29%)       55.00 ( 12.70%)
      Success 3 Min         84.00 (  0.00%)       81.00 (  3.57%)       79.00 (  5.95%)
      Success 3 Mean        85.00 (  0.00%)       81.60 (  4.00%)       79.80 (  6.12%)
      Success 3 Max         86.00 (  0.00%)       82.00 (  4.65%)       82.00 (  4.65%)
      
      Patch 2:
      
                                   3.19-rc4              3.19-rc4              3.19-rc4
                                  7-nothp-1             7-nothp-2             7-nothp-3
      Success 1 Min         50.00 (  0.00%)       44.00 ( 12.00%)       39.00 ( 22.00%)
      Success 1 Mean        52.80 (  0.00%)       45.60 ( 13.64%)       42.40 ( 19.70%)
      Success 1 Max         55.00 (  0.00%)       46.00 ( 16.36%)       47.00 ( 14.55%)
      Success 2 Min         52.00 (  0.00%)       48.00 (  7.69%)       45.00 ( 13.46%)
      Success 2 Mean        53.40 (  0.00%)       49.80 (  6.74%)       48.80 (  8.61%)
      Success 2 Max         57.00 (  0.00%)       52.00 (  8.77%)       52.00 (  8.77%)
      Success 3 Min         84.00 (  0.00%)       81.00 (  3.57%)       79.00 (  5.95%)
      Success 3 Mean        85.00 (  0.00%)       82.40 (  3.06%)       79.60 (  6.35%)
      Success 3 Max         86.00 (  0.00%)       83.00 (  3.49%)       80.00 (  6.98%)
      
      Patch 3:
                                   3.19-rc4              3.19-rc4              3.19-rc4
                                  8-nothp-1             8-nothp-2             8-nothp-3
      Success 1 Min         46.00 (  0.00%)       44.00 (  4.35%)       42.00 (  8.70%)
      Success 1 Mean        50.20 (  0.00%)       45.60 (  9.16%)       44.00 ( 12.35%)
      Success 1 Max         52.00 (  0.00%)       47.00 (  9.62%)       47.00 (  9.62%)
      Success 2 Min         53.00 (  0.00%)       49.00 (  7.55%)       48.00 (  9.43%)
      Success 2 Mean        55.80 (  0.00%)       50.60 (  9.32%)       49.00 ( 12.19%)
      Success 2 Max         59.00 (  0.00%)       52.00 ( 11.86%)       51.00 ( 13.56%)
      Success 3 Min         84.00 (  0.00%)       80.00 (  4.76%)       79.00 (  5.95%)
      Success 3 Mean        85.40 (  0.00%)       81.60 (  4.45%)       80.40 (  5.85%)
      Success 3 Max         87.00 (  0.00%)       83.00 (  4.60%)       82.00 (  5.75%)
      
      While there's no improvement here, I consider the reduced fragmentation
      events to be worthwhile on their own.  Patch 2 also seems to reduce
      scanning for free pages and migrations in compaction, suggesting it has
      somewhat less work to do:
      
      Patch 1:
      
      Compaction stalls                 4153        3959        3978
      Compaction success                1523        1441        1446
      Compaction failures               2630        2517        2531
      Page migrate success           4600827     4943120     5104348
      Page migrate failure             19763       16656       17806
      Compaction pages isolated      9597640    10305617    10653541
      Compaction migrate scanned    77828948    86533283    87137064
      Compaction free scanned      517758295   521312840   521462251
      Compaction cost                   5503        5932        6110
      
      Patch 2:
      
      Compaction stalls                 3800        3450        3518
      Compaction success                1421        1316        1317
      Compaction failures               2379        2134        2201
      Page migrate success           4160421     4502708     4752148
      Page migrate failure             19705       14340       14911
      Compaction pages isolated      8731983     9382374     9910043
      Compaction migrate scanned    98362797    96349194    98609686
      Compaction free scanned      496512560   469502017   480442545
      Compaction cost                   5173        5526        5811
      
      As with v2, /proc/pagetypeinfo appears unaffected with respect to numbers
      of unmovable and reclaimable pageblocks.
      
      Configuring the benchmark to allocate like a THP page fault (i.e.  no sync
      compaction) gives much noisier results for iterations 2 and 3 after
      reboot.  This is not so surprising given that [1] offers lower
      improvements in this scenario due to fewer restarts after deferred
      compaction, which would change the compaction pivot.
      
      Baseline:
                                                         3.19-rc4        3.19-rc4        3.19-rc4
                                                          5-thp-1         5-thp-2         5-thp-3
      Page alloc extfrag event                                8148965     6227815     6646741
      Extfrag fragmenting                                     8147872     6227130     6646117
      Extfrag fragmenting for unmovable                         10324       12942       15975
      Extfrag fragmenting unmovable placed with movable          5972        8495       10907
      Extfrag fragmenting for reclaimable                         601        1707        2210
      Extfrag fragmenting reclaimable placed with movable         520        1570        2000
      Extfrag fragmenting for movable                         8136947     6212481     6627932
      
      Patch 1:
                                                         3.19-rc4        3.19-rc4        3.19-rc4
                                                          6-thp-1         6-thp-2         6-thp-3
      Page alloc extfrag event                                8345457     7574471     7020419
      Extfrag fragmenting                                     8343546     7573777     7019718
      Extfrag fragmenting for unmovable                         10256       18535       30716
      Extfrag fragmenting unmovable placed with movable          6893       11726       22181
      Extfrag fragmenting for reclaimable                         465        1208        1023
      Extfrag fragmenting reclaimable placed with movable         353         996         843
      Extfrag fragmenting for movable                         8332825     7554034     6987979
      
      Patch 2:
                                                         3.19-rc4        3.19-rc4        3.19-rc4
                                                          7-thp-1         7-thp-2         7-thp-3
      Page alloc extfrag event                                3512847     3020756     2891625
      Extfrag fragmenting                                     3511940     3020185     2891059
      Extfrag fragmenting for unmovable                          9017        6892        6191
      Extfrag fragmenting unmovable placed with movable          1524        3053        2435
      Extfrag fragmenting for reclaimable                         445        1081        1160
      Extfrag fragmenting reclaimable placed with movable         375         918         986
      Extfrag fragmenting for movable                         3502478     3012212     2883708
      
      Patch 3:
                                                         3.19-rc4        3.19-rc4        3.19-rc4
                                                          8-thp-1         8-thp-2         8-thp-3
      Page alloc extfrag event                                3181699     3082881     2674164
      Extfrag fragmenting                                     3180812     3082303     2673611
      Extfrag fragmenting for unmovable                          1201        4031        4040
      Extfrag fragmenting unmovable placed with movable           974        3611        3645
      Extfrag fragmenting for reclaimable                         478        1165        1294
      Extfrag fragmenting reclaimable placed with movable         387         985        1030
      Extfrag fragmenting for movable                         3179133     3077107     2668277
      
      The improvements for the first iteration are clear; the rest is much
      noisier and can look like a regression for Patch 1.  In any case, Patch 2
      rectifies it.
      
      Allocation success rates are again unaffected, so there's no point in
      making this e-mail any longer.
      
      [1] http://marc.info/?l=linux-mm&m=142166196321125&w=2
      
      This patch (of 3):
      
      When __rmqueue_fallback() is called to allocate a page of order X, it will
      find a page of order Y >= X of a fallback migratetype, which is different
      from the desired migratetype.  With the help of try_to_steal_freepages(),
      it may change the migratetype (to the desired one) also of:
      
      1) all currently free pages in the pageblock containing the fallback page
      2) the fallback pageblock itself
      3) buddy pages created by splitting the fallback page (when Y > X)
      
      These decisions take the order Y into account, as well as the desired
      migratetype, with the goal of preventing multiple fallback allocations
      that could e.g.  distribute UNMOVABLE allocations among multiple
      pageblocks.
      
      Originally, the decision for 1) implied the decision for 3).  Commit
      47118af0 ("mm: mmzone: MIGRATE_CMA migration type added") changed that
      (probably unintentionally) so that the buddy pages in case 3) are always
      changed to the desired migratetype, except for CMA pageblocks.
      
      Commit fef903ef ("mm/page_allo.c: restructure free-page stealing code
      and fix a bug") did some refactoring and added a comment that the case of
      3) is intended.  Commit 0cbef29a ("mm: __rmqueue_fallback() should
      respect pageblock type") removed the comment and tried to restore the
      original behavior where 1) implies 3), but due to the previous
      refactoring, the result is instead that only 2) implies 3) - and the
      conditions for 2) are less frequently met than conditions for 1).  This
      may increase fragmentation in situations where the code decides to steal
      all free pages from the pageblock (case 1)), but then gives back the buddy
      pages produced by splitting.
      
      This patch restores the original intended logic where 1) implies 3).
      During testing with stress-highalloc from mmtests, this has been shown to
      decrease the number of events where UNMOVABLE and RECLAIMABLE allocations
      steal from MOVABLE pageblocks, which can lead to permanent fragmentation.
      In some cases it has increased the number of events where MOVABLE
      allocations steal from UNMOVABLE or RECLAIMABLE pageblocks, but these are
      fixable by sync compaction and thus less harmful.
      
      Note that evaluation has shown that the behavior introduced by
      47118af0 for buddy pages in case 3) is actually even better than the
      original logic, so the following patch will introduce it properly once
      again.  For stable backports of this patch it thus makes sense to only fix
      versions containing 0cbef29a.
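      
      As a rough aid, here is a toy model (plain C, not kernel code; the
      struct, function and field names are invented for illustration) of how
      the three cases are intended to relate after this patch: case 1) drives
      case 3), independently of whether case 2) happens:
      
      #include <stdbool.h>
      #include <stdio.h>
      
      struct steal_decision {
              bool change_free_pages_in_block;   /* case 1) */
              bool change_pageblock_type;        /* case 2) */
              bool change_split_buddy_pages;     /* case 3) */
      };
      
      static struct steal_decision decide(bool steal_whole_block,
                                          bool enough_pages_moved)
      {
              struct steal_decision d = {
                      .change_free_pages_in_block = steal_whole_block,
                      .change_pageblock_type = steal_whole_block &&
                                               enough_pages_moved,
              };
              /* the fix: case 1) implies case 3); previously only 2) did */
              d.change_split_buddy_pages = d.change_free_pages_in_block;
              return d;
      }
      
      int main(void)
      {
              /* stealing all free pages, but not enough moved to relabel
               * the whole pageblock */
              struct steal_decision d = decide(true, false);
              printf("free pages: %d, pageblock: %d, split buddies: %d\n",
                     d.change_free_pages_in_block, d.change_pageblock_type,
                     d.change_split_buddy_pages);
              return 0;
      }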
      
      [iamjoonsoo.kim@lge.com: tracepoint fix]
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Mel Gorman <mgorman@suse.de>
      Cc: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
      Acked-by: Minchan Kim <minchan@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      8473518c
    • Andrey Ryabinin's avatar
      mm, hugetlb: remove unnecessary lower bound on sysctl handlers"? · 6f59e648
      Andrey Ryabinin authored
      commit 3cd7645d upstream.
      
      Commit ed4d4902 ("mm, hugetlb: remove hugetlb_zero and
      hugetlb_infinity") replaced 'unsigned long hugetlb_zero' with 'int zero',
      leading to an out-of-bounds access in proc_doulongvec_minmax().  Use
      '.extra1 = NULL' instead of '.extra1 = &zero'.  Passing NULL is
      equivalent to passing the minimal value, which is 0 for unsigned types.
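      
      A standalone illustration (userspace C, not the kernel code) of the
      underlying type mismatch: a handler expecting an unsigned long bound
      reads past an 'int zero':
      
      #include <stdio.h>
      
      int main(void)
      {
              int zero = 0;              /* what the table pointed extra1 at */
              void *extra1 = &zero;
      
              /* proc_doulongvec_minmax() treats extra1 as unsigned long *,
               * so on a 64-bit kernel this read runs past the 4-byte int
               */
              unsigned long min = *(unsigned long *)extra1;
              printf("lower bound read as %lu\n", min);
      
              return 0;
      }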
      
      Fixes: ed4d4902 ("mm, hugetlb: remove hugetlb_zero and hugetlb_infinity")
      Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
      Reported-by: Dmitry Vyukov <dvyukov@google.com>
      Suggested-by: Manfred Spraul <manfred@colorfullife.com>
      Acked-by: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      6f59e648