Commit 5bcd234d authored by Christoph Lameter, committed by Linus Torvalds

[PATCH] slab: fix two issues in kmalloc_node / __cache_alloc_node

This addresses two issues:

1. kmalloc_node() may intermittently return NULL if we are allocating
   from the current node and are unable to obtain memory for the current
   node from the page allocator.  This is because we call ____cache_alloc()
   if nodeid == numa_node_id(), and ____cache_alloc() is not able to fall
   back to other nodes.

   This was introduced in the 2.6.19 development cycle.  Kernels <= 2.6.18
   do not do a restricted allocation in that case: they blindly trust the
   page allocator to have given us memory from the indicated node, and
   insert the page into the queues for the current node regardless of
   which node it actually came from.  (See the sketch after this list for
   a simplified model of this failure mode.)

2. If kmalloc_node() is used on a node that has not been bootstrapped
   yet, then we may pass an invalid node number to ____cache_alloc_node(),
   triggering a BUG().

   Change the function to call fallback_alloc() instead, and only call
   fallback_alloc() if we are allowed to fall back at all.  The need to
   handle a not-yet-bootstrapped node also first surfaced in the 2.6.19
   cycle.
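
To make issue 1 concrete, here is a minimal userspace sketch of the
pre-fix dispatch logic.  It is a model only: the model_* helpers, the
old_cache_alloc_node() wrapper, and the fake pointer values are
illustrative stand-ins, not the kernel code itself.

	#include <stdio.h>

	static int local_node_has_pages;	/* models page allocator state */

	/* Models ____cache_alloc(): local queues only, no cross-node fallback. */
	static void *model_local_alloc(void)
	{
		return local_node_has_pages ? (void *)0x1000 : NULL;
	}

	/* Models ____cache_alloc_node(): may take memory from other nodes. */
	static void *model_remote_alloc(int nodeid)
	{
		(void)nodeid;
		return (void *)0x2000;
	}

	/* The pre-fix dispatch, reduced to its control flow. */
	static void *old_cache_alloc_node(int nodeid, int this_node)
	{
		if (nodeid == -1 || nodeid == this_node)
			return model_local_alloc();	/* issue 1: no fallback */
		return model_remote_alloc(nodeid);
	}

	int main(void)
	{
		local_node_has_pages = 0;	/* current node momentarily exhausted */

		/* Asking for the current node fails outright... */
		printf("current node: %p\n", old_cache_alloc_node(0, 0));
		/* ...while the same request aimed at a remote node succeeds. */
		printf("remote node:  %p\n", old_cache_alloc_node(1, 0));
		return 0;
	}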

Also update the comments, since they were still describing the old
kmalloc_node() from 2.6.12.
Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
parent 1b1cec4b
@@ -3459,29 +3459,45 @@ int fastcall kmem_ptr_validate(struct kmem_cache *cachep, void *ptr)
  * @flags: See kmalloc().
  * @nodeid: node number of the target node.
  *
- * Identical to kmem_cache_alloc, except that this function is slow
- * and can sleep. And it will allocate memory on the given node, which
- * can improve the performance for cpu bound structures.
- * New and improved: it will now make sure that the object gets
- * put on the correct node list so that there is no false sharing.
+ * Identical to kmem_cache_alloc but it will allocate memory on the given
+ * node, which can improve the performance for cpu bound structures.
+ *
+ * Fallback to other node is possible if __GFP_THISNODE is not set.
  */
 static __always_inline void *
 __cache_alloc_node(struct kmem_cache *cachep, gfp_t flags,
 		int nodeid, void *caller)
 {
 	unsigned long save_flags;
-	void *ptr;
+	void *ptr = NULL;
 
 	cache_alloc_debugcheck_before(cachep, flags);
 	local_irq_save(save_flags);
 
-	if (nodeid == -1 || nodeid == numa_node_id() ||
-			!cachep->nodelists[nodeid])
-		ptr = ____cache_alloc(cachep, flags);
-	else
-		ptr = ____cache_alloc_node(cachep, flags, nodeid);
-	local_irq_restore(save_flags);
+	if (unlikely(nodeid == -1))
+		nodeid = numa_node_id();
 
+	if (likely(cachep->nodelists[nodeid])) {
+		if (nodeid == numa_node_id()) {
+			/*
+			 * Use the locally cached objects if possible.
+			 * However ____cache_alloc does not allow fallback
+			 * to other nodes. It may fail while we still have
+			 * objects on other nodes available.
+			 */
+			ptr = ____cache_alloc(cachep, flags);
+		}
+		if (!ptr) {
+			/* ___cache_alloc_node can fall back to other nodes */
+			ptr = ____cache_alloc_node(cachep, flags, nodeid);
+		}
+	} else {
+		/* Node not bootstrapped yet */
+		if (!(flags & __GFP_THISNODE))
+			ptr = fallback_alloc(cachep, flags);
+	}
+
+	local_irq_restore(save_flags);
 	ptr = cache_alloc_debugcheck_after(cachep, flags, ptr, caller);
 
 	return ptr;
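
After this change, the caller-visible contract is: kmalloc_node() and
kmem_cache_alloc_node() may satisfy the request from another node unless
__GFP_THISNODE is set, in which case they return NULL rather than falling
back.  A hypothetical caller (struct foo and the two helpers below are
invented for illustration):

	#include <linux/slab.h>

	struct foo {
		int data;
	};

	/*
	 * May fall back to another node if @nid has no memory (or is not
	 * bootstrapped yet), so a NULL return means a genuine allocation
	 * failure, not just local exhaustion.
	 */
	static struct foo *foo_alloc_near(int nid)
	{
		return kmalloc_node(sizeof(struct foo), GFP_KERNEL, nid);
	}

	/*
	 * Insists on @nid: returns NULL rather than falling back, e.g. when
	 * building strictly per-node data structures.
	 */
	static struct foo *foo_alloc_exact(int nid)
	{
		return kmalloc_node(sizeof(struct foo),
				    GFP_KERNEL | __GFP_THISNODE, nid);
	}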