Commit 92ca922f authored by Hong Zhiguo, committed by Linus Torvalds

vmalloc: walk vmap_areas by sorted list instead of rb_next()

There is a walk that repeats rb_next() to find a suitable hole.  It can
simply be replaced by a walk along the sorted vmap_area_list, which is
both simpler and more efficient.
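
The core of the change, pulled out of the diff below (a sketch only; the
surrounding hole-scanning loop in alloc_vmap_area() is omitted):

	/* Old: advance to the next vmap_area through the rbtree. */
	n = rb_next(&first->rb_node);
	if (n)
		first = rb_entry(n, struct vmap_area, rb_node);
	else
		goto found;

	/* New: vmap_area_list is kept sorted by address, so the next
	 * area is just the next list entry -- no tree traversal needed. */
	if (list_is_last(&first->list, &vmap_area_list))
		goto found;
	first = list_entry(first->list.next, struct vmap_area, list);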

The list and the tree are only ever mutated together, within
__insert_vmap_area() and __free_vmap_area(), under the protection of
vmap_area_lock.  The patched code also runs under vmap_area_lock, so the
list walk is safe and consistent with the tree walk.
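
A rough sketch of that invariant (illustrative only: insert_area() and
remove_area() are made-up names standing in for __insert_vmap_area() and
__free_vmap_area(), with simplified arguments):

	static void insert_area(struct vmap_area *va, struct rb_node *parent,
				struct rb_node **link, struct list_head *prev)
	{
		lockdep_assert_held(&vmap_area_lock);
		/* Tree and list are updated together, never separately. */
		rb_link_node(&va->rb_node, parent, link);
		rb_insert_color(&va->rb_node, &vmap_area_root);
		list_add(&va->list, prev);	/* keeps vmap_area_list address-sorted */
	}

	static void remove_area(struct vmap_area *va)
	{
		lockdep_assert_held(&vmap_area_lock);
		rb_erase(&va->rb_node, &vmap_area_root);
		list_del(&va->list);
	}

Because both structures always describe the same set of vmap_areas while
vmap_area_lock is held, a locked walk of the sorted list is
interchangeable with a locked walk of the tree.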

Tested on SMP by repeating batches of vmalloc() and vfree() with random
sizes and rounds for hours.
Signed-off-by: Hong Zhiguo <honkiko@gmail.com>
Cc: Nick Piggin <npiggin@kernel.dk>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent c2cddf99
@@ -413,11 +413,11 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
 		if (addr + size - 1 < addr)
 			goto overflow;
-		n = rb_next(&first->rb_node);
-		if (n)
-			first = rb_entry(n, struct vmap_area, rb_node);
-		else
+		if (list_is_last(&first->list, &vmap_area_list))
 			goto found;
+		first = list_entry(first->list.next,
+				struct vmap_area, list);
 	}
 found: