1. 09 Aug, 2012 33 commits
  2. 29 Jul, 2012 7 commits
    • Linux 3.4.7 · 0d0eef55
      Greg Kroah-Hartman authored
    • cifs: when CONFIG_HIGHMEM is set, serialize the read/write kmaps · 5d0c7b47
      Jeff Layton authored
      commit 3cf003c0 upstream.
      
      [The async read code was broadened to include uncached reads in 3.5, so
      the mainline patch did not apply directly. This patch is just a backport
      to account for that change.]
      
      Jian found that when he ran fsx on a 32-bit arch with a large wsize, the
      process and one of the bdi writeback kthreads would sometimes deadlock
      with a stack trace like this:
      
      crash> bt
      PID: 2789   TASK: f02edaa0  CPU: 3   COMMAND: "fsx"
       #0 [eed63cbc] schedule at c083c5b3
       #1 [eed63d80] kmap_high at c0500ec8
       #2 [eed63db0] cifs_async_writev at f7fabcd7 [cifs]
       #3 [eed63df0] cifs_writepages at f7fb7f5c [cifs]
       #4 [eed63e50] do_writepages at c04f3e32
       #5 [eed63e54] __filemap_fdatawrite_range at c04e152a
       #6 [eed63ea4] filemap_fdatawrite at c04e1b3e
       #7 [eed63eb4] cifs_file_aio_write at f7fa111a [cifs]
       #8 [eed63ecc] do_sync_write at c052d202
       #9 [eed63f74] vfs_write at c052d4ee
      #10 [eed63f94] sys_write at c052df4c
      #11 [eed63fb0] ia32_sysenter_target at c0409a98
          EAX: 00000004  EBX: 00000003  ECX: abd73b73  EDX: 012a65c6
          DS:  007b      ESI: 012a65c6  ES:  007b      EDI: 00000000
          SS:  007b      ESP: bf8db178  EBP: bf8db1f8  GS:  0033
          CS:  0073      EIP: 40000424  ERR: 00000004  EFLAGS: 00000246
      
      Each task would kmap part of its address array before getting stuck, but
      not enough to actually issue the write.
      
      This patch fixes the problem by serializing the marshal_iov operations for
      async reads and writes. The idea here is to ensure that cifs
      aggressively tries to populate a request before attempting to fulfill
      another one. As soon as all of the pages are kmapped for a request, then
      we can unlock and allow another one to proceed.
      
      There's no need to do this serialization on non-CONFIG_HIGHMEM arches
      however, so optimize all of this out when CONFIG_HIGHMEM isn't set.
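
      A minimal sketch of the serialization approach (the macro and lock names
      here are illustrative assumptions, not necessarily the exact patch):

        #ifdef CONFIG_HIGHMEM
        /* one request kmaps all of its pages before the next may start */
        static DEFINE_MUTEX(cifs_kmap_mutex);
        #define cifs_kmap_lock()   mutex_lock(&cifs_kmap_mutex)
        #define cifs_kmap_unlock() mutex_unlock(&cifs_kmap_mutex)
        #else
        /* without highmem, kmap cannot block: compile the locking away */
        #define cifs_kmap_lock()   do { } while (0)
        #define cifs_kmap_unlock() do { } while (0)
        #endif

        /* in the async read/write marshalling code: */
        cifs_kmap_lock();
        for (i = 0; i < nr_pages; i++)
                iov[i + 1].iov_base = kmap(pages[i]);
        cifs_kmap_unlock();
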
      Reported-by: Jian Li <jiali@redhat.com>
      Signed-off-by: Jeff Layton <jlayton@redhat.com>
      Signed-off-by: Steve French <smfrench@gmail.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      
    • ARM: SAMSUNG: Update default rate for xusbxti clock · a22ad130
      Tushar Behera authored
      commit bdd3cc26 upstream.
      
      The rate of the xusbxti clock is set in individual machine files.
      The default value should be defined in the clock definition, and
      individual machine files should override it if required.
      
      Division by zero in kernel.
      [<c0011849>] (unwind_backtrace+0x1/0x9c) from [<c022c663>] (Ldiv0+0x9/0x12)
      [<c022c663>] (Ldiv0+0x9/0x12) from [<c001a3c3>] (s3c_setrate_clksrc+0x33/0x78)
      [<c001a3c3>] (s3c_setrate_clksrc+0x33/0x78) from [<c0019e67>] (clk_set_rate+0x2f/0x78)
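
      The remedy is to give the clock definition itself a sane default so
      s3c_setrate_clksrc never divides by a zero parent rate. A minimal
      sketch, assuming the common 24 MHz crystal (field layout follows the
      legacy Samsung clock framework; the board override is hypothetical):

        struct clk clk_xusbxti = {
                .name = "xusbxti",
                .id   = -1,
                .rate = 24000000,   /* default; boards may override */
        };

        /* a board with a different crystal still overrides it early on: */
        clk_xusbxti.rate = 19200000;
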
      Signed-off-by: Tushar Behera <tushar.behera@linaro.org>
      Signed-off-by: Kukjin Kim <kgene.kim@samsung.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • dm raid1: set discard_zeroes_data_unsupported · 5a4db9ee
      Mikulas Patocka authored
      commit 7c8d3a42 upstream.
      
      We can't guarantee that REQ_DISCARD on dm-mirror zeroes the data even if
      the underlying disks support zero on discard.  So this patch sets
      ti->discard_zeroes_data_unsupported.
      
      For example, if the mirror is in the process of resynchronizing, it may
      happen that kcopyd reads a piece of data, then discard is sent on the
      same area and then kcopyd writes the piece of data to another leg.
      Consequently, the data is not zeroed.
      
      The flag was made available by commit 983c7db3
      (dm crypt: always disable discard_zeroes_data).
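
      For illustration, the change amounts to setting one flag in the mirror
      target constructor (exact placement assumed here), alongside the
      existing discard support:

        /* discards on a mirror may race with resync copies, so discarded
         * data cannot be assumed to read back as zeroes afterwards */
        ti->num_discard_requests = 1;
        ti->discard_zeroes_data_unsupported = 1;
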
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Signed-off-by: Alasdair G Kergon <agk@redhat.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • dm raid1: fix crash with mirror recovery and discard · 91aafba4
      Mikulas Patocka authored
      commit 751f188d upstream.
      
      This patch fixes a crash when a discard request is sent during mirror
      recovery.
      
      Firstly, some background.  Generally, the following sequence happens during
      mirror synchronization:
      - function do_recovery is called
      - do_recovery calls dm_rh_recovery_prepare
      - dm_rh_recovery_prepare uses a semaphore to limit the number of
        simultaneously recovered regions (by default the semaphore value is 1,
        so only one region at a time is recovered)
      - dm_rh_recovery_prepare calls __rh_recovery_prepare, which asks the
        log driver for the next region to recover. Then, it sets the region
        state to DM_RH_RECOVERING. If there
        are no pending I/Os on this region, the region is added to
        quiesced_regions list. If there are pending I/Os, the region is not
        added to any list. It is added to the quiesced_regions list later (by
        dm_rh_dec function) when all I/Os finish.
      - when the region is on quiesced_regions list, there are no I/Os in
        flight on this region. The region is popped from the list in
        dm_rh_recovery_start function. Then, a kcopyd job is started in the
        recover function.
      - when the kcopyd job finishes, recovery_complete is called. It calls
        dm_rh_recovery_end. dm_rh_recovery_end adds the region to
        recovered_regions or failed_recovered_regions list (depending on
        whether the copy operation was successful or not).
      
      The above mechanism assumes that if the region is in DM_RH_RECOVERING
      state, no new I/Os are started on this region. When I/O is started,
      dm_rh_inc_pending is called, which increases reg->pending count. When
      I/O is finished, dm_rh_dec is called. It decreases reg->pending count.
      If the count is zero and the region was in DM_RH_RECOVERING state,
      dm_rh_dec adds it to the quiesced_regions list.
      
      Consequently, if we call dm_rh_inc_pending/dm_rh_dec while the region is
      in DM_RH_RECOVERING state, it could be added to quiesced_regions list
      multiple times or it could be added to this list when kcopyd is copying
      data (it is assumed that the region is not on any list while kcopyd does
      its jobs). This results in memory corruption and crash.
      
      There already exist bypasses for REQ_FLUSH requests: REQ_FLUSH requests
      do not belong to any region, so they are always added to the sync list
      in do_writes. dm_rh_inc_pending does not increase count for REQ_FLUSH
      requests. In mirror_end_io, dm_rh_dec is never called for REQ_FLUSH
      requests. These bypasses avoid the crash possibility described above.
      
      These bypasses were improperly implemented for REQ_DISCARD when
      the mirror target gained discard support in commit
      5fc2ffea (dm raid1: support discard).
      
      In do_writes, REQ_DISCARD requests are always added to the sync queue and
      immediately dispatched (even if the region is in DM_RH_RECOVERING).  However,
      dm_rh_inc and dm_rh_dec are still called for REQ_DISCARD requests.  This
      violates the rule that no I/Os are started on DM_RH_RECOVERING regions, and
      causes the list corruption described above.
      
      This patch changes it so that REQ_DISCARD requests follow the same path
      as REQ_FLUSH. This avoids the crash.
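
      Sketched, the fix extends the existing REQ_FLUSH special case to
      REQ_DISCARD wherever the region hash takes or drops pending counts
      (exact call sites assumed; shown here for dm_rh_inc_pending):

        void dm_rh_inc_pending(struct dm_region_hash *rh, struct bio_list *bios)
        {
                struct bio *bio;

                for (bio = bios->head; bio; bio = bio->bi_next) {
                        /* flushes and discards belong to no region, so never
                         * take a pending count that dm_rh_dec would later drop
                         * on a DM_RH_RECOVERING region */
                        if (bio->bi_rw & (REQ_FLUSH | REQ_DISCARD))
                                continue;
                        rh_inc(rh, dm_rh_bio_to_region(rh, bio));
                }
        }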
      
      Reference: https://bugzilla.redhat.com/837607
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Signed-off-by: Alasdair G Kergon <agk@redhat.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • dm thin: do not send discards to shared blocks · 5b8bbc39
      Mikulas Patocka authored
      commit 650d2a06 upstream.
      
      When process_discard receives a partial discard that doesn't cover a
      full block, it sends this discard down to that block. Unfortunately, the
      block can be shared and the discard would corrupt the other snapshots
      sharing this block.
      
      This patch detects block sharing and ends the discard with success when
      sending it to the shared block.
      
      The above change means that if the device supports discard it can't be
      guaranteed that a discard request zeroes data. Therefore, we set
      ti->discard_zeroes_data_unsupported.
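
      Condensed, the partial-block path in process_discard now checks for
      sharing before passing the discard down (the surrounding code and the
      passdown check are assumed from context):

        /* only discard a partially covered block if no snapshot shares it;
         * otherwise complete the bio successfully without touching data */
        if (!lookup_result.shared && pool->pf.discard_passdown)
                remap_and_issue(tc, bio, lookup_result.block);
        else
                bio_endio(bio, 0);

        /* and in the thin constructor, since a discard can no longer be
         * guaranteed to zero data: */
        ti->discard_zeroes_data_unsupported = 1;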
      
      Thin target discard support with this bug arrived in commit
      104655fd (dm thin: support discards).
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Signed-off-by: Alasdair G Kergon <agk@redhat.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • pnfs-obj: don't leak objio_state if ore_write/read fails · 08603bdd
      Boaz Harrosh authored
      commit 9909d45a upstream.
      
      [Bug since 3.2 Kernel]
      Signed-off-by: Boaz Harrosh <bharrosh@panasas.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>