1. 29 Aug, 2013 4 commits
    • regmap: rbtree: Make cache_present bitmap per node · 3f4ff561
      Lars-Peter Clausen authored
      With devices that have a small, dense register map placed at a large
      offset, the global cache_present bitmap imposes a huge memory overhead.
      Making cache_present per rbtree node avoids the issue and easily reduces
      the memory footprint by a factor of ten. For devices with a sparser map
      or without a large base register offset the memory usage might increase
      slightly by a few bytes, but not significantly. E.g. for a device with
      ~50 registers at offset 0x4000 the memory footprint of the register
      cache goes down from 2496 bytes to 175 bytes.
      
      Moving the bitmap to a per-node basis means that the handling of the
      bitmap is now cache implementation specific and can no longer be managed
      by the core. The regcache_sync_block() function is extended by an
      additional parameter so that the cache implementation can tell the core
      which registers in the block are set and which are not. The parameter is
      optional; if it is NULL the core assumes that all registers are set. The
      rbtree cache also needs to implement its own drop callback instead of
      relying on the core to handle this. A sketch of the per-node bitmap idea
      follows this entry.
      Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
      Signed-off-by: Mark Brown <broonie@linaro.org>
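      The per-node bitmap can be pictured with a minimal C sketch. This is an
      illustration only, not the actual regcache-rbtree code: the struct and
      function names below are hypothetical, and only the idea of sizing the
      bitmap to the node's block rather than to the whole register map is
      taken from the commit.

          #include <limits.h>
          #include <stdbool.h>
          #include <stdlib.h>

          #define BITS_PER_WORD (sizeof(unsigned long) * CHAR_BIT)

          /* hypothetical node layout; the real rbnode has more fields */
          struct rbnode_sketch {
              unsigned int base_reg;        /* first register in the block */
              unsigned int blklen;          /* registers in the block */
              unsigned int *block;          /* cached register values */
              unsigned long *cache_present; /* one bit per register in block */
          };

          /* The bitmap covers only the node's block, not the full map. */
          static bool node_alloc_present(struct rbnode_sketch *node)
          {
              size_t words = (node->blklen + BITS_PER_WORD - 1) / BITS_PER_WORD;

              node->cache_present = calloc(words, sizeof(unsigned long));
              return node->cache_present != NULL;
          }

          /* Mark a register inside the node's block as cached. */
          static void node_set_present(struct rbnode_sketch *node,
                                       unsigned int reg)
          {
              unsigned int i = reg - node->base_reg;

              node->cache_present[i / BITS_PER_WORD] |=
                  1UL << (i % BITS_PER_WORD);
          }

      For ~50 registers at offset 0x4000 the bitmap now needs about 50 bits
      instead of roughly 0x4000 + 50 bits, which accounts for most of the
      quoted drop from 2496 bytes to 175 bytes.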
    • regmap: rbtree: Reduce number of nodes, take 2 · 472fdec7
      Lars-Peter Clausen authored
      Support for reducing the number of nodes and the memory consumption of
      the rbtree cache by allowing small unused holes in a node's register
      cache block was initially added in commit 0c7ed856 ("regmap: Cut down on
      the average # of nodes in the rbtree cache"). But that commit had
      problems, so its effect was reverted in commit 4e67fb5f ("regmap:
      rbtree: Fix overlapping rbnodes."). This patch brings back the feature
      of reducing the average number of nodes, which speeds up node look-up
      while also reducing the memory usage of the rbtree cache. It takes a
      slightly different approach than the original patch, though: it modifies
      the adjacent node look-up to consider not only the nodes directly to the
      left or right of the register, but any node that falls within a certain
      range around the register. The range is calculated from how much memory
      it would take to allocate a new node compared to how much memory it
      takes to add a set of unused registers to an existing node. E.g. if a
      node takes up 24 bytes and each register in a block uses 1 byte, the
      range will be from the register address - 24 to the register address
      + 24. If we find a node within this range, adding the register to it and
      keeping a couple of unused registers in the node's cache is cheaper
      than, or as expensive as, allocating a new node (see the sketch after
      this entry).
      Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
      Signed-off-by: Mark Brown <broonie@linaro.org>
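      The cost window can be expressed directly. A minimal sketch, assuming a
      per-node overhead of 24 bytes and a caller-supplied number of bytes per
      cached register; NODE_OVERHEAD and within_window() are made-up names for
      illustration and do not appear in the regmap code:

          #include <stdbool.h>

          #define NODE_OVERHEAD 24U /* assumed bookkeeping bytes per node */

          /*
           * Padding an existing block by up to NODE_OVERHEAD / word_size
           * registers costs no more memory than allocating a fresh node, so
           * any node whose block starts or ends within that distance of reg
           * is a candidate for extension.
           */
          static bool within_window(unsigned int reg, unsigned int base_reg,
                                    unsigned int top_reg,
                                    unsigned int word_size)
          {
              unsigned int dist = NODE_OVERHEAD / word_size;

              /* written to avoid unsigned underflow of base_reg - dist */
              return reg + dist >= base_reg && reg <= top_reg + dist;
          }

      With word_size == 1 this gives exactly the +/- 24 window from the
      example above: a write to register r reuses any node whose block touches
      [r - 24, r + 24] instead of allocating a new node.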
    • regmap: rbtree: Simplify adjacent node look-up · 194c753a
      Lars-Peter Clausen authored
      A register that is adjacent to a node will sit either immediately to the
      left of the node's first register or immediately to the right of its
      last register. It cannot lie within the node's range, so there is no
      point in checking, for each register cached by the node, whether the new
      register is next to it. It is sufficient to check whether the register
      comes directly before the node's first register or directly after its
      last register, as the sketch after this entry shows.
      Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
      Signed-off-by: Mark Brown <broonie@linaro.org>
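      The simplified check boils down to two comparisons. A sketch, where the
      stride parameter stands for the map's register stride; the enum and
      function names are illustrative only, not regmap identifiers:

          enum adjacency { NOT_ADJACENT, PAD_LEFT, APPEND_RIGHT };

          /*
           * A register adjacent to the block [base_reg, top_reg] can only sit
           * one stride below base_reg or one stride above top_reg, so no
           * per-register scan of the block is needed.
           */
          static enum adjacency node_adjacency(unsigned int reg,
                                               unsigned int base_reg,
                                               unsigned int blklen,
                                               unsigned int stride)
          {
              unsigned int top_reg = base_reg + (blklen - 1) * stride;

              if (base_reg >= stride && reg == base_reg - stride)
                  return PAD_LEFT;     /* grow the block downwards */
              if (reg == top_reg + stride)
                  return APPEND_RIGHT; /* grow the block upwards */
              return NOT_ADJACENT;
          }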