Commit b460a522 authored by Mark Brown

regcache: Push async I/O request down into the rbtree cache

Currently the regcache core unconditionally enables async I/O for all cache
types. This causes problems for the maple tree cache, which dynamically
allocates the buffers used to write registers to the device, since async I/O
requires those buffers to be kept around until the I/O has completed.
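
(Illustrative only, not part of this change: a minimal sketch of the
buffer-lifetime rule that async I/O imposes, using the public regmap API.
The register address 0x10, the fw_data payload and the helper name are
made-up examples.)

#include <linux/regmap.h>

/* The payload must outlive the queued transfer, so it cannot be a
 * short-lived temporary that disappears before the I/O completes. */
static const u8 fw_data[4] = { 0x01, 0x02, 0x03, 0x04 };

static int example_async_write(struct regmap *map)
{
	int ret;

	/* Queue the raw write; the hardware may still be reading
	 * fw_data after this call returns. */
	ret = regmap_raw_write_async(map, 0x10, fw_data, sizeof(fw_data));
	if (ret)
		return ret;

	/* Only once this returns is it safe to free or reuse fw_data. */
	return regmap_async_complete(map);
}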

This use of async I/O is mainly for the rbtree cache, which stores data in
a format directly usable by regmap_raw_write(), though a special case for
single-register writes would also have allowed it to be used with the flat
cache. It is a bit of a landmine for other caches since it implicitly
converts sync operations to async, and with modern hardware it is not clear
that async I/O is actually a performance win, as shown by the performance
work David Jander did with SPI. On multi-core systems the cost of managing
concurrency ends up swamping the performance benefit, and almost all modern
systems are multi-core.

Address this by pushing the enablement of async I/O down into the rbtree
cache where it is actively used, avoiding surprises for other cache
implementations.
Reported-by: Charles Keepax <ckeepax@opensource.cirrus.com>
Fixes: bfa0b38c ("regmap: maple: Implement block sync for the maple tree cache")
Reviewed-by: Charles Keepax <ckeepax@opensource.cirrus.com>
Tested-by: Charles Keepax <ckeepax@opensource.cirrus.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20230719-regcache-async-rbtree-v1-1-b03d30cf1daf@kernel.org
Signed-off-by: Mark Brown <broonie@kernel.org>
parent 0c9d2eb5
@@ -471,6 +471,8 @@ static int regcache_rbtree_sync(struct regmap *map, unsigned int min,
 	unsigned int start, end;
 	int ret;
 
+	map->async = true;
+
 	rbtree_ctx = map->cache;
 	for (node = rb_first(&rbtree_ctx->root); node; node = rb_next(node)) {
 		rbnode = rb_entry(node, struct regcache_rbtree_node, node);
@@ -499,6 +501,8 @@ static int regcache_rbtree_sync(struct regmap *map, unsigned int min,
 			return ret;
 	}
 
+	map->async = false;
+
 	return regmap_async_complete(map);
 }
@@ -368,8 +368,6 @@ int regcache_sync(struct regmap *map)
 	if (!map->cache_dirty)
 		goto out;
 
-	map->async = true;
-
 	/* Apply any patch first */
 	map->cache_bypass = true;
 	for (i = 0; i < map->patch_regs; i++) {
@@ -392,7 +390,6 @@ int regcache_sync(struct regmap *map)
 
 out:
 	/* Restore the bypass state */
-	map->async = false;
 	map->cache_bypass = bypass;
 	map->no_sync_defaults = false;
 	map->unlock(map->lock_arg);