1. 08 Apr, 2013 8 commits
  2. 04 Apr, 2013 3 commits
  3. 03 Apr, 2013 2 commits
  4. 02 Apr, 2013 3 commits
    • clk: fix clk_mux::flags kerneldoc · 3566d40c
      James Hogan authored
      The kerneldoc comment for struct clk_mux documented the non-existent
      num_clks instead of flags. Correct this.
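      A minimal sketch of the kind of correction involved (the surrounding
      kerneldoc lines are assumed for illustration, not quoted from the patch):

      	/**
      	 * struct clk_mux - multiplexer clock
      	 * ...
      	 * @flags:	hardware-specific flags
      	 */
      	/* The entry above was previously written as "@num_clks", which is
      	 * not a member of struct clk_mux. */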
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Signed-off-by: Mike Turquette <mturquette@linaro.org>
    • clk: allow reentrant calls into the clk framework · 533ddeb1
      Mike Turquette authored
      Reentrancy into the clock framework is necessary for clock operations
      that result in nested calls to the clk API.  A common example is a clock
      that is prepared via an i2c transaction, such as a clock inside a
      discrete audio chip or a power management IC.  The i2c subsystem itself
      will use the clk API, resulting in a deadlock:
      
      clk_prepare(audio_clk)
      	i2c_transfer(..)
      		clk_prepare(i2c_controller_clk)
      
      The ability to reenter the clock framework prevents this deadlock.
      
      Other use cases exist, such as allowing .set_rate callbacks to call
      clk_set_parent to achieve the best rate, or to save power in certain
      configurations.  Yet another example is performing pinctrl operations
      from a clk_ops callback.  Calls into the pinctrl subsystem may call
      clk_{un}prepare on an unrelated clock.  Allowing nested calls to
      reenter the clock framework enables both of these use cases.
      
      Reentrancy is implemented by two global pointers that track the owner
      currently holding a global lock.  One pointer tracks the owner during
      sleepable, mutex-protected operations and the other one tracks the owner
      during non-interruptible, spinlock-protected operations.
      
      When the clk framework is entered we try to acquire the global lock.  If
      it is already held we compare the current task against the current owner;
      a match implies a nested call and we reenter.  If the tasks do not match
      then we block on the lock until it is released.
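      A minimal sketch of the mutex side of this scheme, assuming helper names
      along the lines of clk_prepare_lock()/clk_prepare_unlock(); the spinlock
      side follows the same pattern with its own owner pointer and refcount:

      	static DEFINE_MUTEX(prepare_lock);
      	static struct task_struct *prepare_owner;
      	static int prepare_refcnt;

      	static void clk_prepare_lock(void)
      	{
      		if (!mutex_trylock(&prepare_lock)) {
      			/* lock is held: reenter only if we are the owner */
      			if (prepare_owner == current) {
      				prepare_refcnt++;
      				return;
      			}
      			/* held by someone else: block until released */
      			mutex_lock(&prepare_lock);
      		}
      		WARN_ON_ONCE(prepare_owner != NULL);
      		WARN_ON_ONCE(prepare_refcnt != 0);
      		prepare_owner = current;
      		prepare_refcnt = 1;
      	}

      	static void clk_prepare_unlock(void)
      	{
      		WARN_ON_ONCE(prepare_owner != current);
      		WARN_ON_ONCE(prepare_refcnt == 0);

      		/* drop the lock only when the outermost caller unlocks */
      		if (--prepare_refcnt)
      			return;
      		prepare_owner = NULL;
      		mutex_unlock(&prepare_lock);
      	}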
      Signed-off-by: Mike Turquette <mturquette@linaro.org>
      Cc: Rajagopal Venkat <rajagopal.venkat@linaro.org>
      Cc: David Brown <davidb@codeaurora.org>
      Tested-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org>
    • clk: abstract locking out into helper functions · eab89f69
      Mike Turquette authored
      Create locking helpers for the global mutex and the global spinlock.  The
      definitions of these helpers will be expanded upon in the next patch,
      which introduces reentrancy into the locking scheme.
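      At this stage the helpers can be thin wrappers; a rough sketch of what
      they look like before reentrancy is added (names assumed to match the
      clk core's two global locks):

      	static DEFINE_MUTEX(prepare_lock);
      	static DEFINE_SPINLOCK(enable_lock);

      	/* sleepable operations: prepare/unprepare, rate and parent changes */
      	static void clk_prepare_lock(void)
      	{
      		mutex_lock(&prepare_lock);
      	}

      	static void clk_prepare_unlock(void)
      	{
      		mutex_unlock(&prepare_lock);
      	}

      	/* non-sleepable operations: enable/disable run under the spinlock */
      	static unsigned long clk_enable_lock(void)
      	{
      		unsigned long flags;

      		spin_lock_irqsave(&enable_lock, flags);
      		return flags;
      	}

      	static void clk_enable_unlock(unsigned long flags)
      	{
      		spin_unlock_irqrestore(&enable_lock, flags);
      	}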
      Signed-off-by: Mike Turquette <mturquette@linaro.org>
      Cc: Rajagopal Venkat <rajagopal.venkat@linaro.org>
      Cc: David Brown <davidb@codeaurora.org>
      Tested-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org>
  5. 28 Mar, 2013 1 commit
  6. 27 Mar, 2013 6 commits
  7. 26 Mar, 2013 1 commit
    • clk: Add composite clock type · ece70094
      Prashant Gaikwad authored
      Not all clocks need to be decomposed into the basic clock types, but
      they may still want to reuse the functionality provided by those basic
      types instead of duplicating it.

      For example, the Tegra SoC has ~100 clocks which can each be decomposed
      into Mux -> Div -> Gate clock types, bringing the clock count to ~300.
      Also, a parent change operation cannot be performed on a gate clock,
      which forces a driver to use the mux clock whenever it wants to change
      the parent.

      Instead, aggregate the functionality of the basic clock types into one
      composite clock and use just that clock for all operations.  This clock
      type reuses the functionality of the basic clock types, and it is not
      limited to them: any hardware-specific implementation can be plugged in.
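      A rough usage sketch, assuming the clk_register_composite() helper this
      patch introduces together with the existing basic-type ops; register
      offsets, bit positions and clock names below are made up for
      illustration:

      	#include <linux/clk-provider.h>

      	static struct clk_mux periph_mux = { .mask = 0x3, .shift = 30 };
      	static struct clk_divider periph_div = { .shift = 0, .width = 8 };
      	static struct clk_gate periph_gate = { .bit_idx = 28 };

      	static const char *periph_parents[] = { "pll_p", "pll_c", "clk_m" };

      	static void __init periph_clk_register(void __iomem *base)
      	{
      		struct clk *clk;

      		/* all three basic clocks live in one hypothetical register */
      		periph_mux.reg = base;
      		periph_div.reg = base;
      		periph_gate.reg = base;

      		/* one composite exposes mux + divider + gate as a single clk */
      		clk = clk_register_composite(NULL, "periph",
      				periph_parents, ARRAY_SIZE(periph_parents),
      				&periph_mux.hw, &clk_mux_ops,
      				&periph_div.hw, &clk_divider_ops,
      				&periph_gate.hw, &clk_gate_ops,
      				0);
      		if (IS_ERR(clk))
      			pr_err("failed to register periph composite clock\n");
      	}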
      Signed-off-by: Prashant Gaikwad <pgaikwad@nvidia.com>
      Signed-off-by: Mike Turquette <mturquette@linaro.org>
  8. 22 Mar, 2013 3 commits
  9. 21 Mar, 2013 2 commits
  10. 20 Mar, 2013 1 commit
  11. 19 Mar, 2013 5 commits
  12. 17 Mar, 2013 4 commits
    • Linux 3.9-rc3 · a937536b
      Linus Torvalds authored
    • perf,x86: fix link failure for non-Intel configs · 6c4d3bc9
      David Rientjes authored
      Commit 1d9d8639 ("perf,x86: fix kernel crash with PEBS/BTS after
      suspend/resume") introduces a link failure since
      perf_restore_debug_store() is only defined for CONFIG_CPU_SUP_INTEL:
      
      	arch/x86/power/built-in.o: In function `restore_processor_state':
      	(.text+0x45c): undefined reference to `perf_restore_debug_store'
      
      Fix it by defining the dummy function appropriately.
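      The usual pattern for such a fix, sketched here (placement and exact
      wording assumed, not quoted from the commit), is a no-op inline stub for
      configurations that do not build the Intel-specific code:

      	#ifdef CONFIG_CPU_SUP_INTEL
      	extern void perf_restore_debug_store(void);
      	#else
      	static inline void perf_restore_debug_store(void)	{ }
      	#endif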
      Signed-off-by: David Rientjes <rientjes@google.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • perf,x86: fix wrmsr_on_cpu() warning on suspend/resume · 2a6e06b2
      Linus Torvalds authored
      Commit 1d9d8639 ("perf,x86: fix kernel crash with PEBS/BTS after
      suspend/resume") fixed a crash when doing PEBS performance profiling
      after resuming, but by using init_debug_store_on_cpu() to restore the
      DS_AREA MSR it also caused a new WARN_ON() to trigger.

      init_debug_store_on_cpu() uses "wrmsr_on_cpu()", which in turn uses CPU
      cross-calls to do the MSR update.  That is not really valid at the
      early resume stage, and the warning is quite reasonable.  Now, it all
      happens to _work_, for the simple reason that smp_call_function_single()
      ends up just doing the call directly on the CPU when the CPU number
      matches, but we really should just do the wrmsr() directly instead.
      
      This duplicates the wrmsr() logic, but hopefully we can just remove the
      wrmsr_on_cpu() version eventually.
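      Roughly, the restore path then writes the DS area MSR directly on the
      local CPU instead of going through wrmsr_on_cpu(); a sketch of the idea
      (struct and variable names follow the perf/x86 code, details assumed):

      	void perf_restore_debug_store(void)
      	{
      		struct debug_store *ds = __this_cpu_read(cpu_hw_events.ds);

      		if (!x86_pmu.bts && !x86_pmu.pebs)
      			return;

      		/* plain wrmsrl() on the current CPU; no cross-call needed */
      		wrmsrl(MSR_IA32_DS_AREA, (unsigned long)ds);
      	}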
      Reported-and-tested-by: Parag Warudkar <parag.lkml@gmail.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mason/linux-btrfs · 08637024
      Linus Torvalds authored
      Pull btrfs fixes from Chris Mason:
       "Eric's rcu barrier patch fixes a long standing problem with our
        unmount code hanging on to devices in workqueue helpers.  Liu Bo
        nailed down a difficult assertion for in-memory extent mappings."
      
      * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mason/linux-btrfs:
        Btrfs: fix warning of free_extent_map
        Btrfs: fix warning when creating snapshots
        Btrfs: return as soon as possible when edquot happens
        Btrfs: return EIO if we have extent tree corruption
        btrfs: use rcu_barrier() to wait for bdev puts at unmount
        Btrfs: remove btrfs_try_spin_lock
        Btrfs: get better concurrency for snapshot-aware defrag work
  13. 16 Mar, 2013 1 commit
    • Btrfs: fix warning of free_extent_map · 3b277594
      Liu Bo authored
      Users report that an extent map's list head is still linked when the
      map is actually about to be freed from the cache.

      The story is that:

      a) when we are going to drop an extent map we may split this large one
      into smaller ems, and if the large one is flagged as EXTENT_FLAG_LOGGING,
      which means that it is on the list of maps to be logged, then the smaller
      ems split from it will also be flagged as EXTENT_FLAG_LOGGING, which is
      _not_ expected;

      b) we keep ems from being unlinked from the list and freed while they are
      flagged with EXTENT_FLAG_LOGGING, because the log code holds one
      reference.

      The end result is the warning, but the truth is that we only set
      EXTENT_FLAG_LOGGING during fsync.

      So clear EXTENT_FLAG_LOGGING on extent maps split from a large one.
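      A minimal sketch of the idea (the helper name btrfs_split_em_flags() is
      hypothetical and only used to frame it; the flags can just as well be
      adjusted inline where the split maps are set up):

      	static void btrfs_split_em_flags(const struct extent_map *em,
      					 struct extent_map *split)
      	{
      		unsigned long flags = em->flags;

      		/* Drop EXTENT_FLAG_LOGGING: only fsync sets it, and a split
      		 * map was never placed on the list of maps to be logged. */
      		clear_bit(EXTENT_FLAG_LOGGING, &flags);
      		split->flags = flags;
      	}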
      Reported-by: Johannes Hirte <johannes.hirte@fem.tu-ilmenau.de>
      Reported-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Liu Bo <bo.li.liu@oracle.com>
      Signed-off-by: Chris Mason <chris.mason@fusionio.com>