1. 29 May, 2014 15 commits
  2. 23 May, 2014 11 commits
  3. 15 May, 2014 14 commits
    • powerpc: Add vr save/restore functions · bcbac220
      Andreas Schwab authored
      commit 8fe9c93e upstream.
      
      GCC 4.8 now generates out-of-line vr save/restore functions when
      optimizing for size.  They are needed for the raid6 altivec support.
      Signed-off-by: Andreas Schwab <schwab@linux-m68k.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
    • Linux 3.12.20 · 52e218bb
      Jiri Slaby authored
    • drm: cirrus: add power management support · b4dac01c
      Gerd Hoffmann authored
      commit 2f1e8007 upstream.
      
      The cirrus KMS driver lacks power management support, so the
      VGA display no longer works after resuming from S3.
      
      Fix this by adding suspend and resume functions, and make the
      mode_set function unblank the screen.
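      
      Purely as an illustration of the shape of such a fix, here is a hedged
      userspace sketch with invented names (not the cirrus driver code): the
      suspend hook saves the state the hardware will lose across S3, and the
      resume hook reprograms it and unblanks the display.
      
      #include <stdio.h>
      
      /* Hypothetical display state that must survive a suspend/resume cycle. */
      struct display_state {
          int width, height;
          int blanked;
      };
      
      static struct display_state hw;     /* stands in for the device registers */
      static struct display_state saved;  /* driver's copy across S3 */
      
      /* Suspend: remember the current mode before the hardware loses it. */
      static void display_suspend(void)
      {
          saved = hw;
      }
      
      /* Resume: reprogram the saved mode and make sure the screen is unblanked. */
      static void display_resume(void)
      {
          hw = saved;
          hw.blanked = 0;                 /* setting the mode also unblanks */
      }
      
      int main(void)
      {
          hw = (struct display_state){ 1024, 768, 0 };
          display_suspend();
          hw = (struct display_state){ 0, 0, 1 };   /* power lost during S3 */
          display_resume();
          printf("%dx%d blanked=%d\n", hw.width, hw.height, hw.blanked);
          return 0;
      }
      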
      Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
    • 93cfde3b
      Hans de Goede authored
    • Input: synaptics - add min/max quirk for ThinkPad T431s, L440, L540, S1 Yoga and X1 · 7419385d
      Hans de Goede authored
      commit 46a2986e upstream.
      
      We expect that all the Haswell series will need such quirks, sigh.
      
      The T431s seems to be T430 hardware in a T440s case, using the T440s touchpad,
      with the same min/max issue.
      
      The X1 Carbon 3rd generation's product name says 2nd while it is actually a 3rd generation.
      
      The X1 and T431s share a PnPID with the T540p, but the reported ranges are
      closer to those of the T440s.
      
      HdG: Squashed 5 quirk patches into one. The T431s, L440 and L540 quirks were
      written by me; the S1 Yoga and X1 quirks were written by Benjamin Tissoires.
      
      HdG: Standardized the S1 Yoga and X1 values; the Yoga uses the same touchpad
      as the X240, and the X1 uses the same touchpad as the T440.
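      
      Roughly how such min/max quirks are typically expressed, as a hedged
      userspace sketch with made-up identifiers and values (the real table lives
      in the kernel's synaptics driver): a lookup keyed on the touchpad's PnP ID
      overrides the coordinate ranges the firmware reports.
      
      #include <stdio.h>
      #include <string.h>
      
      /* Hypothetical quirk entry: board id -> trusted coordinate ranges. */
      struct minmax_quirk {
          const char *pnp_id;
          int min_x, max_x, min_y, max_y;
      };
      
      /* Illustrative values only; not the actual driver table. */
      static const struct minmax_quirk quirks[] = {
          { "LEN_EXAMPLE1", 1024, 5052, 2258, 4832 },
          { "LEN_EXAMPLE2", 1024, 5112, 2024, 4832 },
          { NULL, 0, 0, 0, 0 }
      };
      
      /* Apply a quirk if the reported id matches, otherwise keep firmware values. */
      static void apply_quirk(const char *pnp_id, int *min_x, int *max_x,
                              int *min_y, int *max_y)
      {
          for (const struct minmax_quirk *q = quirks; q->pnp_id; q++) {
              if (strcmp(pnp_id, q->pnp_id) == 0) {
                  *min_x = q->min_x; *max_x = q->max_x;
                  *min_y = q->min_y; *max_y = q->max_y;
                  return;
              }
          }
      }
      
      int main(void)
      {
          int min_x = 0, max_x = 0, min_y = 0, max_y = 0;  /* bogus firmware values */
          apply_quirk("LEN_EXAMPLE1", &min_x, &max_x, &min_y, &max_y);
          printf("x: %d..%d  y: %d..%d\n", min_x, max_x, min_y, max_y);
          return 0;
      }
      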
      Signed-off-by: Benjamin Tissoires <benjamin.tissoires@redhat.com>
      Signed-off-by: Hans de Goede <hdegoede@redhat.com>
      Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
    • mmc: sdhci-bcm-kona: fix build errors when built-in · 440fc619
      Russell King authored
      commit 4025ce24 upstream.
      
      `sdhci_bcm_kona_remove' referenced in section `.data' of drivers/built-in.o: defined in discarded section `.exit.text' of drivers/built-in.o
      
      Fixes: 058feb53 ("mmc: sdhci-bcm-kona: make linker-section warning go away")
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      Tested-by: Markus Mayer <markus.mayer@linaro.org>
      Acked-by: Matt Porter <mporter@linaro.org>
      Signed-off-by: Chris Ball <chris@printf.net>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
    • lib/percpu_counter.c: fix bad percpu counter state during suspend · 743660fd
      Jens Axboe authored
      commit e39435ce upstream.
      
      I got a bug report yesterday from Laszlo Ersek in which he states that
      his kvm instance fails to suspend.  Laszlo bisected it down to this
      commit 1cf7e9c6 ("virtio_blk: blk-mq support") where virtio-blk is
      converted to use the blk-mq infrastructure.
      
      After digging a bit, it became clear that the issue was with the queue
      drain.  blk-mq tracks queue usage in a percpu counter, which is
      incremented on request alloc and decremented when the request is freed.
      The initial hunt was for an inconsistency in blk-mq, but everything
      seemed fine.  In fact, the counter only returned crazy values when
      suspend was in progress.
      
      When a CPU is unplugged, the percpu counter merges that CPU's state with
      the general state.  blk-mq takes care to register a hotcpu notifier with
      the appropriate priority, so we know it runs after the percpu counter
      notifier.  However, the percpu counter notifier only merges the state
      when the CPU is fully gone.  This leaves a state transition where the
      CPU going away is no longer in the online mask, yet it still holds
      private values.  This means that in this state, percpu_counter_sum()
      returns invalid results, and the suspend then hangs waiting for
      abs(dead-cpu-value) requests to complete which of course will never
      happen.
      
      Fix this by clearing the state earlier, so we never have a case where
      the CPU isn't in the online mask but still holds private state.  This bug
      has been there since forever; I guess we don't have a lot of users where
      percpu counters need to be reliable during the suspend cycle.
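      
      A toy userspace model of the window described above, with invented names
      (not the kernel's percpu_counter code): each CPU accumulates a private
      delta that is only folded into the global count when the CPU is torn down,
      so a sum over the online mask misses a delta belonging to a CPU that has
      already left the mask but has not been folded yet; merging the state
      earlier closes the window.
      
      #include <stdio.h>
      
      #define NCPUS 4
      
      /* Toy model of a percpu counter: a global part plus per-CPU deltas. */
      static long global_count;
      static long percpu_delta[NCPUS];
      static int  cpu_online[NCPUS] = { 1, 1, 1, 1 };
      
      static void counter_add(int cpu, long val)
      {
          percpu_delta[cpu] += val;       /* fast path: touch only this CPU's slot */
      }
      
      /* Sum the counter: global part plus the deltas of online CPUs. */
      static long counter_sum(void)
      {
          long sum = global_count;
          for (int cpu = 0; cpu < NCPUS; cpu++)
              if (cpu_online[cpu])
                  sum += percpu_delta[cpu];
          return sum;
      }
      
      /* Fold a CPU's delta into the global part (what the hotplug notifier does). */
      static void fold_cpu(int cpu)
      {
          global_count += percpu_delta[cpu];
          percpu_delta[cpu] = 0;
      }
      
      int main(void)
      {
          counter_add(2, 5);                   /* 5 requests in flight on CPU 2   */
      
          cpu_online[2] = 0;                   /* CPU 2 leaves the online mask... */
          printf("sum before fold: %ld\n", counter_sum());  /* 0 -- wrong        */
      
          fold_cpu(2);                         /* ...fix: merge the state earlier */
          printf("sum after fold:  %ld\n", counter_sum());  /* 5 -- correct      */
          return 0;
      }
      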
      Signed-off-by: Jens Axboe <axboe@fb.com>
      Reported-by: Laszlo Ersek <lersek@redhat.com>
      Tested-by: Laszlo Ersek <lersek@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
    • lockd: ensure we tear down any live sockets when socket creation fails during lockd_up · 22fa1a91
      Jeff Layton authored
      commit 679b033d upstream.
      
      We had a Fedora ABRT report with a stack trace like this:
      
      kernel BUG at net/sunrpc/svc.c:550!
      invalid opcode: 0000 [#1] SMP
      [...]
      CPU: 2 PID: 913 Comm: rpc.nfsd Not tainted 3.13.6-200.fc20.x86_64 #1
      Hardware name: Hewlett-Packard HP ProBook 4740s/1846, BIOS 68IRR Ver. F.40 01/29/2013
      task: ffff880146b00000 ti: ffff88003f9b8000 task.ti: ffff88003f9b8000
      RIP: 0010:[<ffffffffa0305fa8>]  [<ffffffffa0305fa8>] svc_destroy+0x128/0x130 [sunrpc]
      RSP: 0018:ffff88003f9b9de0  EFLAGS: 00010206
      RAX: ffff88003f829628 RBX: ffff88003f829600 RCX: 00000000000041ee
      RDX: 0000000000000000 RSI: 0000000000000286 RDI: 0000000000000286
      RBP: ffff88003f9b9de8 R08: 0000000000017360 R09: ffff88014fa97360
      R10: ffffffff8114ce57 R11: ffffea00051c9c00 R12: ffff88003f829600
      R13: 00000000ffffff9e R14: ffffffff81cc7cc0 R15: 0000000000000000
      FS:  00007f4fde284840(0000) GS:ffff88014fa80000(0000) knlGS:0000000000000000
      CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      CR2: 00007f4fdf5192f8 CR3: 00000000a569a000 CR4: 00000000001407e0
      Stack:
       ffff88003f792300 ffff88003f9b9e18 ffffffffa02de02a 0000000000000000
       ffffffff81cc7cc0 ffff88003f9cb000 0000000000000008 ffff88003f9b9e60
       ffffffffa033bb35 ffffffff8131c86c ffff88003f9cb000 ffff8800a5715008
      Call Trace:
       [<ffffffffa02de02a>] lockd_up+0xaa/0x330 [lockd]
       [<ffffffffa033bb35>] nfsd_svc+0x1b5/0x2f0 [nfsd]
       [<ffffffff8131c86c>] ? simple_strtoull+0x2c/0x50
       [<ffffffffa033c630>] ? write_pool_threads+0x280/0x280 [nfsd]
       [<ffffffffa033c6bb>] write_threads+0x8b/0xf0 [nfsd]
       [<ffffffff8114efa4>] ? __get_free_pages+0x14/0x50
       [<ffffffff8114eff6>] ? get_zeroed_page+0x16/0x20
       [<ffffffff811dec51>] ? simple_transaction_get+0xb1/0xd0
       [<ffffffffa033c098>] nfsctl_transaction_write+0x48/0x80 [nfsd]
       [<ffffffff811b8b34>] vfs_write+0xb4/0x1f0
       [<ffffffff811c3f99>] ? putname+0x29/0x40
       [<ffffffff811b9569>] SyS_write+0x49/0xa0
       [<ffffffff810fc2a6>] ? __audit_syscall_exit+0x1f6/0x2a0
       [<ffffffff816962e9>] system_call_fastpath+0x16/0x1b
      Code: 31 c0 e8 82 db 37 e1 e9 2a ff ff ff 48 8b 07 8b 57 14 48 c7 c7 d5 c6 31 a0 48 8b 70 20 31 c0 e8 65 db 37 e1 e9 f4 fe ff ff 0f 0b <0f> 0b 66 0f 1f 44 00 00 0f 1f 44 00 00 55 48 89 e5 41 56 41 55
      RIP  [<ffffffffa0305fa8>] svc_destroy+0x128/0x130 [sunrpc]
       RSP <ffff88003f9b9de0>
      
      Evidently, we created some lockd sockets and then failed to create
      others. make_socks then returned an error and we tried to tear down the
      svc, but svc->sv_permsocks was not empty so we ended up tripping over
      the BUG() in svc_destroy().
      
      Fix this by ensuring that we tear down any live sockets we created when
      socket creation is going to return an error.
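      
      The cleanup rule being applied, sketched as generic userspace C with
      hypothetical names (not the lockd code): if creating the Nth socket fails,
      the ones already created are torn down before the error is returned, so the
      caller never sees a half-populated socket list.
      
      #include <stdio.h>
      
      /* Hypothetical stand-ins for per-protocol socket creation and teardown. */
      static int create_socket(int i)
      {
          return (i == 2) ? -1 : i + 100;  /* pretend the third one fails */
      }
      
      static void close_socket(int fd)
      {
          printf("closing socket %d\n", fd);
      }
      
      /* Create all sockets or none: tear down partial state on failure. */
      static int make_socks(int *socks, int n)
      {
          int i, fd;
      
          for (i = 0; i < n; i++) {
              fd = create_socket(i);
              if (fd < 0)
                  goto err;
              socks[i] = fd;
          }
          return 0;
      
      err:
          while (--i >= 0)                 /* tear down any live sockets */
              close_socket(socks[i]);
          return -1;
      }
      
      int main(void)
      {
          int socks[3];
      
          if (make_socks(socks, 3) < 0)
              fprintf(stderr, "make_socks failed, nothing left behind\n");
          return 0;
      }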
      
      Fixes: 786185b5 (SUNRPC: move per-net operations from...)
      Reported-by: Raphos <raphoszap@laposte.net>
      Signed-off-by: Jeff Layton <jlayton@redhat.com>
      Reviewed-by: Stanislav Kinsbursky <skinsbursky@parallels.com>
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
    • aio: v4 ensure access to ctx->ring_pages is correctly serialised for migration · b74183e7
      Benjamin LaHaise authored
      commit fa8a53c3 upstream.
      
      As reported by Tang Chen, Gu Zheng and Yasuaki Ishimatsu, the following issues
      exist in the aio ring page migration support.
      
      As a result, for example, we have the following problem:
      
                  thread 1                      |              thread 2
                                                |
      aio_migratepage()                         |
       |-> take ctx->completion_lock            |
       |-> migrate_page_copy(new, old)          |
       |   *NOW*, ctx->ring_pages[idx] == old   |
                                                |
                                                |    *NOW*, ctx->ring_pages[idx] == old
                                                |    aio_read_events_ring()
                                                |     |-> ring = kmap_atomic(ctx->ring_pages[0])
                                                |     |-> ring->head = head;          *HERE, write to the old ring page*
                                                |     |-> kunmap_atomic(ring);
                                                |
       |-> ctx->ring_pages[idx] = new           |
       |   *BUT NOW*, the content of            |
       |    ring_pages[idx] is old.             |
       |-> release ctx->completion_lock         |
      
      As above, the new ring page will not be updated.
      
      Fix this issue, as well as prevent races in aio_ring_setup() by holding
      the ring_lock mutex during kioctx setup and page migration.  This avoids
      the overhead of taking another spinlock in aio_read_events_ring() as Tang's
      and Gu's original fix did, pushing the overhead into the migration code.
      
      Note that to handle the nesting of ring_lock inside of mmap_sem, the
      migratepage operation uses mutex_trylock().  Page migration is not a 100%
      critical operation in this case, so the occasional failure can be
      tolerated.  This issue was reported by Sasha Levin.
      
      Based on feedback from Linus, avoid the extra taking of ctx->completion_lock.
      Instead, make page migration fully serialised by mapping->private_lock, and
      have aio_free_ring() simply disconnect the kioctx from the mapping by calling
      put_aio_ring_file() before touching ctx->ring_pages[].  This simplifies the
      error handling logic in aio_migratepage(), and should improve robustness.
      
      v4: always do mutex_unlock() in cases when kioctx setup fails.
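      
      A small userspace sketch of the locking split described above, using
      pthreads and invented names rather than the kernel's mutexes: the setup
      path takes the ring lock unconditionally, while the best-effort migration
      path uses trylock and simply backs off when the lock is contended; in the
      kernel this is what allows ring_lock to nest inside mmap_sem without
      risking deadlock.
      
      #include <pthread.h>
      #include <stdio.h>
      
      static pthread_mutex_t ring_lock = PTHREAD_MUTEX_INITIALIZER;
      
      /* Critical path: must serialise against migration, so block if needed. */
      static void setup_ring(void)
      {
          pthread_mutex_lock(&ring_lock);
          /* ... install ring pages ... */
          pthread_mutex_unlock(&ring_lock);
      }
      
      /* Best-effort path: migration can be retried later, so never block here. */
      static int migrate_page(void)
      {
          if (pthread_mutex_trylock(&ring_lock) != 0)
              return -1;                  /* busy: fail this migration attempt */
          /* ... swap the old ring page for the new one ... */
          pthread_mutex_unlock(&ring_lock);
          return 0;
      }
      
      int main(void)
      {
          setup_ring();
          printf("migration %s\n", migrate_page() ? "deferred" : "done");
          return 0;
      }
      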
      Reported-by: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
      Reported-by: Sasha Levin <sasha.levin@oracle.com>
      Signed-off-by: Benjamin LaHaise <bcrl@kvack.org>
      Cc: Tang Chen <tangchen@cn.fujitsu.com>
      Cc: Gu Zheng <guz.fnst@cn.fujitsu.com>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
    • dma: edma: fix incorrect SG list handling · 4d22eb08
      Sekhar Nori authored
      commit 5fc68a6c upstream.
      
      The code that handles SG lists of any length calls edma_resume()
      even before edma_start() is called. This is incorrect
      because edma_resume() enables EDMA events on the channel,
      after which the CPU (in edma_start()) cannot clear posted
      events by writing to ECR (per the EDMA user's guide).
      
      Because of this, EDMA transfers fail to start if, for some
      reason, a pending EDMA event is already registered before
      the transfer is started. This can happen if
      an EDMA event is a byproduct of device initialization.
      
      Fix this by calling edma_resume() only if it is not the
      first batch of MAX_NR_SG elements.
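      
      A stubbed-out sketch of the resulting control flow (plain C with fake
      hardware calls, not the edma driver code): the SG list is programmed
      MAX_NR_SG entries at a time, the first batch goes through the full start
      path that can clear stale events, and only the later batches use resume.
      
      #include <stdio.h>
      
      #define MAX_NR_SG 4
      
      /* Stubs standing in for the real EDMA channel operations. */
      static void hw_start(void)  { printf("start: clear stale events, enable\n"); }
      static void hw_resume(void) { printf("resume: re-enable events\n"); }
      static void hw_program_batch(int first, int count)
      {
          printf("program entries %d..%d\n", first, first + count - 1);
      }
      
      /* Program a long SG list in batches of MAX_NR_SG entries. */
      static void issue_sg_list(int nr_sg)
      {
          for (int done = 0; done < nr_sg; done += MAX_NR_SG) {
              int count = (nr_sg - done < MAX_NR_SG) ? nr_sg - done : MAX_NR_SG;
      
              hw_program_batch(done, count);
      
              if (done == 0)
                  hw_start();             /* first batch: full start path */
              else
                  hw_resume();            /* later batches only: resume */
          }
      }
      
      int main(void)
      {
          issue_sg_list(10);
          return 0;
      }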
      
      Without this patch, MMC/SD fails to function on the DA850 EVM
      with DMA. The behaviour is triggered by specific IP, which may
      explain why the issue was not reported before (for example,
      with MMC/SD on AM335x).
      
      Tested on DA850 EVM and AM335x EVM-SK using MMC/SD card.
      
      Cc: Joel Fernandes <joelf@ti.com>
      Acked-by: Joel Fernandes <joelf@ti.com>
      Tested-by: Jon Ringle <jringle@gridpoint.com>
      Tested-by: Alexander Holler <holler@ahsoftware.de>
      Reported-by: Jon Ringle <jringle@gridpoint.com>
      Signed-off-by: Sekhar Nori <nsekhar@ti.com>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
    • dm thin: fix dangling bio in process_deferred_bios error path · 17001268
      Mike Snitzer authored
      commit fe76cd88 upstream.
      
      If ensure_next_mapping() fails, we must add the current bio, which
      was removed from the @bios list via bio_list_pop, back to the
      deferred_bios list ahead of all the remaining @bios.
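      
      A minimal list-manipulation sketch of the rule above, using a generic
      singly linked list with invented helpers (analogous to, but not, the
      kernel's bio_list API): the item that was popped but could not be
      processed goes back at the head of the deferred list, and only then is
      the rest of the unprocessed list appended, so ordering is preserved.
      
      #include <stdio.h>
      
      struct node { int id; struct node *next; };
      
      /* Push a node at the head of a list. */
      static void push_head(struct node **list, struct node *n)
      {
          n->next = *list;
          *list = n;
      }
      
      /* Pop the head of a list (like bio_list_pop). */
      static struct node *pop_head(struct node **list)
      {
          struct node *n = *list;
          if (n)
              *list = n->next;
          return n;
      }
      
      /* Append the whole of 'src' after the items already in 'dst'. */
      static void splice_tail(struct node **dst, struct node *src)
      {
          struct node **p = dst;
          while (*p)
              p = &(*p)->next;
          *p = src;
      }
      
      int main(void)
      {
          struct node a = {1, NULL}, b = {2, NULL}, c = {3, NULL};
          struct node *bios = &a, *deferred = NULL;
          a.next = &b; b.next = &c;
      
          struct node *cur = pop_head(&bios);   /* take the current bio */
      
          /* Processing failed: put 'cur' back first, then the remaining bios. */
          push_head(&deferred, cur);
          splice_tail(&deferred, bios);
      
          for (struct node *n = deferred; n; n = n->next)
              printf("deferred: %d\n", n->id);  /* prints 1, 2, 3 */
          return 0;
      }
      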
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Acked-by: Joe Thornber <ejt@redhat.com>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
    • dm: take care to copy the space map roots before locking the superblock · 8038ee15
      Joe Thornber authored
      commit 5a32083d upstream.
      
      In theory copying the space map root can fail, but in practice it never
      does because we're careful to check what size buffer is needed.
      
      But make certain we're able to copy the space map roots before
      locking the superblock.
      Signed-off-by: Joe Thornber <ejt@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
    • dm transaction manager: fix corruption due to non-atomic transaction commit · 44c13517
      Joe Thornber authored
      commit a9d45396 upstream.
      
      The persistent-data library used by dm-thin, dm-cache, etc is
      transactional.  If anything goes wrong, such as an io error when writing
      new metadata or a power failure, then we roll back to the last
      transaction.
      
      Atomicity when committing a transaction is achieved by:
      
      a) Never overwriting data from the previous transaction.
      b) Writing the superblock last, after all other metadata has hit the
         disk.
      
      This commit and the following commit ("dm: take care to copy the space
      map roots before locking the superblock") fix a bug associated with (b).
      When committing it was possible for the superblock to still be written
      in spite of an io error occurring during the preceding metadata flush.
      With these commits we're careful not to take the write lock out on the
      superblock until after the metadata flush has completed.
      
      Change the transaction manager's semantics for dm_tm_commit() to assume
      all data has been flushed _before_ the single superblock that is passed
      in.
      
      As a prerequisite, split the block manager's block unlocking and
      flushing by simplifying dm_bm_flush_and_unlock() to dm_bm_flush().  Now
      the unlocking must be done separately.
      
      This issue was discovered by forcing io errors at the crucial time
      using dm-flakey.
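      
      A toy userspace illustration of the ordering above, with invented names
      and an ordinary file standing in for the metadata device: new metadata
      goes to blocks the previous transaction does not reference and is flushed
      first; only then is the single superblock rewritten to point at the new
      root, so a crash at any moment leaves either the old or the new
      transaction intact, never a mixture.
      
      #include <stdio.h>
      #include <string.h>
      #include <unistd.h>
      #include <fcntl.h>
      
      #define BLOCK_SIZE 64
      
      /* Write one fixed-size block at the given block index. */
      static void write_block(int fd, long blkno, const char *data)
      {
          char buf[BLOCK_SIZE] = { 0 };
          strncpy(buf, data, BLOCK_SIZE - 1);
          if (pwrite(fd, buf, BLOCK_SIZE, blkno * BLOCK_SIZE) != BLOCK_SIZE)
              perror("pwrite");
      }
      
      /*
       * Commit: new metadata goes to a block the previous transaction does not
       * reference, is flushed, and only then does the superblock (block 0) flip
       * to the new root.  The superblock write is the commit point.
       */
      static void commit(int fd, long new_root_blkno, const char *new_metadata)
      {
          write_block(fd, new_root_blkno, new_metadata);  /* never overwrites old */
          fsync(fd);                                      /* flush before superblock */
      
          char sb[BLOCK_SIZE];
          snprintf(sb, sizeof(sb), "root=%ld", new_root_blkno);
          write_block(fd, 0, sb);                         /* superblock written last */
          fsync(fd);
      }
      
      int main(void)
      {
          int fd = open("toy-metadata.img", O_RDWR | O_CREAT, 0600);
          if (fd < 0) { perror("open"); return 1; }
      
          commit(fd, 1, "transaction A");   /* superblock now points at block 1 */
          commit(fd, 2, "transaction B");   /* ... and then at block 2 */
      
          close(fd);
          return 0;
      }
      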
      Signed-off-by: Joe Thornber <ejt@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
    • dm cache: prevent corruption caused by discard_block_size > cache_block_size · 34b397a6
      Mike Snitzer authored
      commit d132cc6d upstream.
      
      If the discard block size is larger than the cache block size we will
      not properly quiesce IO to a region that is about to be discarded.  This
      results in a race between a cache migration where no copy is needed, and
      a write to an adjacent cache block that's within the same large discard
      block.
      
      Work around this by limiting the discard_block_size to cache_block_size.
      Also limit the max_discard_sectors to cache_block_size.
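      
      A short arithmetic illustration of why the limit matters, using made-up
      sizes: a discard block larger than the cache block overlaps several cache
      blocks at once, so quiescing only one of them leaves its neighbours
      exposed to the race; with discard_block_size equal to cache_block_size,
      each discard maps onto exactly one cache block.
      
      #include <stdio.h>
      
      /* How many cache blocks does one discard block overlap? */
      static long cache_blocks_per_discard(long discard_block_size,
                                           long cache_block_size)
      {
          return (discard_block_size + cache_block_size - 1) / cache_block_size;
      }
      
      int main(void)
      {
          long cache_block = 64 * 1024;              /* 64 KiB, illustrative */
      
          /* Oversized discard block: one discard spans 16 cache blocks. */
          printf("%ld\n", cache_blocks_per_discard(1024 * 1024, cache_block));
      
          /* Workaround: discard_block_size limited to cache_block_size. */
          printf("%ld\n", cache_blocks_per_discard(cache_block, cache_block));
          return 0;
      }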
      
      A more comprehensive fix that introduces range locking support in the
      bio_prison and proper quiescing of a discard range that spans multiple
      cache blocks is already in development.
      Reported-by: Morgan Mears <Morgan.Mears@netapp.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Acked-by: Joe Thornber <ejt@redhat.com>
      Acked-by: Heinz Mauelshagen <heinzm@redhat.com>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>