Commit ec62d04e authored by David Hildenbrand, committed by Linus Torvalds

kernel/resource: make release_mem_region_adjustable() never fail

Patch series "selective merging of system ram resources", v4.

Some add_memory*() users add memory in small, contiguous memory blocks.
Examples include virtio-mem, hyper-v balloon, and the XEN balloon.

This can quickly result in a lot of memory resources whose exact
boundaries are not of interest (in contrast to, e.g., DIMMs, whose
boundaries are exposed via /proc/iomem and can be relevant to user
space).  We really want to merge added resources in this scenario where
possible.

Resources are effectively stored in a list-based tree.  Having a lot of
resources not only wastes memory, it also makes traversing that tree more
expensive and makes /proc/iomem explode in size (e.g., kexec-tools has to
manually merge resources when creating a kdump header, and its current
resource count limit does not allow for more than ~100GB of memory with a
memory block size of 128MB on x86-64).
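For reference, the tree is built from struct resource entries
(include/linux/ioport.h, abbreviated here); children and siblings are
plain linked lists, which is why both traversal cost and /proc/iomem
output grow linearly with the number of entries:

	struct resource {
		resource_size_t start;
		resource_size_t end;
		const char *name;
		unsigned long flags;
		unsigned long desc;
		/* list-based tree: linear scans over siblings/children */
		struct resource *parent, *sibling, *child;
	};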

Let's allow callers to selectively merge System RAM resources by
specifying a new flag for add_memory*().  Patch #5 contains a /proc/iomem
example.  Only tested with virtio-mem.
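As a rough sketch of where the series is heading (the flag and the
mhp_flags parameter are only introduced by later patches in this series,
as MEMHP_MERGE_RESOURCE; treat the exact call below as illustrative
rather than the final interface):

	/*
	 * Illustrative only: a driver such as virtio-mem adding one memory
	 * block and asking for the new resource to be merged into an
	 * adjacent System RAM resource where possible.
	 */
	rc = add_memory_driver_managed(nid, addr, memory_block_size_bytes(),
				       "System RAM (virtio_mem)",
				       MEMHP_MERGE_RESOURCE);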

This patch (of 8):

Let's make sure splitting a resource on memory hotunplug will never fail.
This will become more relevant once we merge selected System RAM resources
- then, we'll trigger that case more often on memory hotunplug.

In general, this function is already unlikely to fail.  When we remove
memory, we free up quite a lot of metadata (memmap, page tables, memory
block device, etc.).  The only reason it could really fail would be when
injecting allocation errors.

All other error cases inside release_mem_region_adjustable() are sanity
checks for the function being abused in a different context - let's add
WARN_ON_ONCE() in these cases so we can catch any such misuse.
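The resulting allocation strategy in the patch below follows a common
never-fail pattern: allocate optimistically with GFP_KERNEL before taking
the lock, fall back to GFP_ATOMIC if the need for a split only becomes
apparent while holding the lock, and as a last resort drop the lock and
retry with __GFP_NOFAIL.  Condensed from the kernel/resource.c hunks
below (surrounding logic elided):

	retry:
		new_res = alloc_resource(GFP_KERNEL |
					 (alloc_nofail ? __GFP_NOFAIL : 0));
		write_lock(&resource_lock);
		...
		/* a split needs a second resource, which we may not have yet */
		if (!new_res) {
			/* opportunistic attempt; must not sleep under the lock */
			new_res = alloc_resource(GFP_ATOMIC);
			if (!new_res) {
				/* drop the lock and retry with __GFP_NOFAIL */
				alloc_nofail = true;
				write_unlock(&resource_lock);
				goto retry;
			}
		}
		...
		write_unlock(&resource_lock);
		free_resource(new_res);	/* frees the spare allocation, if unused */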

[natechancellor@gmail.com: fix use of ternary condition in release_mem_region_adjustable]
  Link: https://lkml.kernel.org/r/20200922060748.2452056-1-natechancellor@gmail.com
  Link: https://github.com/ClangBuiltLinux/linux/issues/1159
Signed-off-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Nathan Chancellor <natechancellor@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Jason Gunthorpe <jgg@ziepe.ca>
Cc: Kees Cook <keescook@chromium.org>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Pankaj Gupta <pankaj.gupta.linux@gmail.com>
Cc: Baoquan He <bhe@redhat.com>
Cc: Wei Yang <richardw.yang@linux.intel.com>
Cc: Anton Blanchard <anton@ozlabs.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Dave Jiang <dave.jiang@intel.com>
Cc: Eric Biederman <ebiederm@xmission.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Haiyang Zhang <haiyangz@microsoft.com>
Cc: Heiko Carstens <hca@linux.ibm.com>
Cc: Jason Wang <jasowang@redhat.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Julien Grall <julien@xen.org>
Cc: "K. Y. Srinivasan" <kys@microsoft.com>
Cc: Len Brown <lenb@kernel.org>
Cc: Leonardo Bras <leobras.c@gmail.com>
Cc: Libor Pechacek <lpechacek@suse.cz>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Nathan Lynch <nathanl@linux.ibm.com>
Cc: "Oliver O'Halloran" <oohall@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Pingfan Liu <kernelfans@gmail.com>
Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
Cc: Roger Pau Monné <roger.pau@citrix.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Stephen Hemminger <sthemmin@microsoft.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vasily Gorbik <gor@linux.ibm.com>
Cc: Vishal Verma <vishal.l.verma@intel.com>
Cc: Wei Liu <wei.liu@kernel.org>
Link: https://lkml.kernel.org/r/20200911103459.10306-2-david@redhat.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent b30c5927
--- a/include/linux/ioport.h
+++ b/include/linux/ioport.h
@@ -248,8 +248,8 @@ extern struct resource * __request_region(struct resource *,
 extern void __release_region(struct resource *, resource_size_t,
 				resource_size_t);
 #ifdef CONFIG_MEMORY_HOTREMOVE
-extern int release_mem_region_adjustable(struct resource *, resource_size_t,
-				resource_size_t);
+extern void release_mem_region_adjustable(struct resource *, resource_size_t,
+				resource_size_t);
 #endif
 
 /* Wrappers for managed devices */
--- a/kernel/resource.c
+++ b/kernel/resource.c
@@ -1258,21 +1258,28 @@ EXPORT_SYMBOL(__release_region);
  * assumes that all children remain in the lower address entry for
  * simplicity.  Enhance this logic when necessary.
  */
-int release_mem_region_adjustable(struct resource *parent,
-				  resource_size_t start, resource_size_t size)
+void release_mem_region_adjustable(struct resource *parent,
+				   resource_size_t start, resource_size_t size)
 {
+	struct resource *new_res = NULL;
+	bool alloc_nofail = false;
 	struct resource **p;
 	struct resource *res;
-	struct resource *new_res;
 	resource_size_t end;
-	int ret = -EINVAL;
 
 	end = start + size - 1;
-	if ((start < parent->start) || (end > parent->end))
-		return ret;
+	if (WARN_ON_ONCE((start < parent->start) || (end > parent->end)))
+		return;
 
-	/* The alloc_resource() result gets checked later */
-	new_res = alloc_resource(GFP_KERNEL);
+	/*
+	 * We free up quite a lot of memory on memory hotunplug (esp., memmap),
+	 * just before releasing the region. This is highly unlikely to
+	 * fail - let's play safe and make it never fail as the caller cannot
+	 * perform any error handling (e.g., trying to re-add memory will fail
+	 * similarly).
+	 */
+retry:
+	new_res = alloc_resource(GFP_KERNEL | (alloc_nofail ? __GFP_NOFAIL : 0));
 
 	p = &parent->child;
 	write_lock(&resource_lock);
@@ -1298,7 +1305,6 @@ int release_mem_region_adjustable(struct resource *parent,
 		 * so if we are dealing with them, let us just back off here.
 		 */
 		if (!(res->flags & IORESOURCE_SYSRAM)) {
-			ret = 0;
 			break;
 		}
 
@@ -1315,20 +1321,23 @@ int release_mem_region_adjustable(struct resource *parent,
 			/* free the whole entry */
 			*p = res->sibling;
 			free_resource(res);
-			ret = 0;
 		} else if (res->start == start && res->end != end) {
 			/* adjust the start */
-			ret = __adjust_resource(res, end + 1,
-						res->end - end);
+			WARN_ON_ONCE(__adjust_resource(res, end + 1,
+						       res->end - end));
 		} else if (res->start != start && res->end == end) {
 			/* adjust the end */
-			ret = __adjust_resource(res, res->start,
-						start - res->start);
+			WARN_ON_ONCE(__adjust_resource(res, res->start,
+						       start - res->start));
 		} else {
-			/* split into two entries */
+			/* split into two entries - we need a new resource */
 			if (!new_res) {
-				ret = -ENOMEM;
-				break;
+				new_res = alloc_resource(GFP_ATOMIC);
+				if (!new_res) {
+					alloc_nofail = true;
+					write_unlock(&resource_lock);
+					goto retry;
+				}
 			}
 			new_res->name = res->name;
 			new_res->start = end + 1;
@@ -1339,9 +1348,8 @@ int release_mem_region_adjustable(struct resource *parent,
 			new_res->sibling = res->sibling;
 			new_res->child = NULL;
 
-			ret = __adjust_resource(res, res->start,
-						start - res->start);
-			if (ret)
+			if (WARN_ON_ONCE(__adjust_resource(res, res->start,
+							   start - res->start)))
 				break;
 			res->sibling = new_res;
 			new_res = NULL;
@@ -1352,7 +1360,6 @@ int release_mem_region_adjustable(struct resource *parent,
 
 	write_unlock(&resource_lock);
 	free_resource(new_res);
-	return ret;
 }
 #endif	/* CONFIG_MEMORY_HOTREMOVE */
 
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1727,26 +1727,6 @@ void try_offline_node(int nid)
 }
 EXPORT_SYMBOL(try_offline_node);
 
-static void __release_memory_resource(resource_size_t start,
-				      resource_size_t size)
-{
-	int ret;
-
-	/*
-	 * When removing memory in the same granularity as it was added,
-	 * this function never fails. It might only fail if resources
-	 * have to be adjusted or split. We'll ignore the error, as
-	 * removing of memory cannot fail.
-	 */
-	ret = release_mem_region_adjustable(&iomem_resource, start, size);
-	if (ret) {
-		resource_size_t endres = start + size - 1;
-
-		pr_warn("Unable to release resource <%pa-%pa> (%d)\n",
-			&start, &endres, ret);
-	}
-}
-
 static int __ref try_remove_memory(int nid, u64 start, u64 size)
 {
 	int rc = 0;
@@ -1780,7 +1760,7 @@ static int __ref try_remove_memory(int nid, u64 start, u64 size)
 		memblock_remove(start, size);
 	}
 
-	__release_memory_resource(start, size);
+	release_mem_region_adjustable(&iomem_resource, start, size);
 
 	try_offline_node(nid);