Commit 512626a0 authored by Linus Torvalds

Merge branch 'for-linus' of git://linux-arm.org/linux-2.6

* 'for-linus' of git://linux-arm.org/linux-2.6:
  kmemleak: Add the corresponding MAINTAINERS entry
  kmemleak: Simple testing module for kmemleak
  kmemleak: Enable the building of the memory leak detector
  kmemleak: Remove some of the kmemleak false positives
  kmemleak: Add modules support
  kmemleak: Add kmemleak_alloc callback from alloc_large_system_hash
  kmemleak: Add the vmalloc memory allocation/freeing hooks
  kmemleak: Add the slub memory allocation/freeing hooks
  kmemleak: Add the slob memory allocation/freeing hooks
  kmemleak: Add the slab memory allocation/freeing hooks
  kmemleak: Add documentation on the memory leak detector
  kmemleak: Add the base support

Manual conflict resolution (with the slab/earlyboot changes) in:
	drivers/char/vt.c
	init/main.c
	mm/slab.c
parents 8a1ca8ce 3aa27bbe
@@ -1083,6 +1083,10 @@ and is between 256 and 4096 characters. It is defined in the file
			Configure the RouterBoard 532 series on-chip
			Ethernet adapter MAC address.

	kmemleak=	[KNL] Boot-time kmemleak enable/disable
			Valid arguments: on, off
			Default: on

	kstack=N	[X86] Print N words from the kernel stack
			in oops dumps.

Kernel Memory Leak Detector
===========================

Introduction
------------

Kmemleak provides a way of detecting possible kernel memory leaks in a
way similar to a tracing garbage collector
(http://en.wikipedia.org/wiki/Garbage_collection_%28computer_science%29#Tracing_garbage_collectors),
with the difference that the orphan objects are not freed but only
reported via /sys/kernel/debug/kmemleak. A similar method is used by the
Valgrind tool (memcheck --leak-check) to detect memory leaks in
user-space applications.

Usage
-----

CONFIG_DEBUG_KMEMLEAK in "Kernel hacking" has to be enabled. A kernel
thread scans the memory every 10 minutes (by default) and prints any new
unreferenced objects found. To trigger an intermediate scan and display
all the possible memory leaks:
# mount -t debugfs nodev /sys/kernel/debug/
# cat /sys/kernel/debug/kmemleak
Note that the orphan objects are listed in the order they were allocated
and one object at the beginning of the list may cause other subsequent
objects to be reported as orphan.
Memory scanning parameters can be modified at run-time by writing to the
/sys/kernel/debug/kmemleak file. The following parameters are supported:
off - disable kmemleak (irreversible)
stack=on - enable the task stacks scanning
stack=off - disable the task stacks scanning
scan=on - start the automatic memory scanning thread
scan=off - stop the automatic memory scanning thread
scan=<secs> - set the automatic memory scanning period in seconds (0
to disable it)
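
For example, to stop the automatic scanning thread and enable the task
stacks scanning at run-time:

  # echo scan=off > /sys/kernel/debug/kmemleak
  # echo stack=on > /sys/kernel/debug/kmemleak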
Kmemleak can also be disabled at boot-time by passing "kmemleak=off" on
the kernel command line.

Basic Algorithm
---------------

The memory allocations via kmalloc, vmalloc, kmem_cache_alloc and
friends are traced and the pointers, together with additional
information like size and stack trace, are stored in a prio search tree.
The corresponding freeing function calls are tracked and the pointers
removed from the kmemleak data structures.
An allocated block of memory is considered orphan if no pointer to its
start address or to any location inside the block can be found by
scanning the memory (including saved registers). This means that there
might be no way for the kernel to pass the address of the allocated
block to a freeing function and therefore the block is considered a
memory leak.
The scanning algorithm steps (a simplified user-space sketch follows the
list):
1. mark all objects as white (remaining white objects will later be
considered orphan)
2. scan the memory starting with the data section and stacks, checking
the values against the addresses stored in the prio search tree. If
a pointer to a white object is found, the object is added to the
gray list
3. scan the gray objects for matching addresses (some white objects
can become gray and added at the end of the gray list) until the
gray set is finished
4. the remaining white objects are considered orphan and reported via
/sys/kernel/debug/kmemleak
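
The steps above can be illustrated with a small user-space sketch. This
is purely illustrative: all names here are invented and the kernel
implementation differs (for instance, it keeps the objects in a prio
search tree rather than a fixed array, and maintains an explicit gray
list):

  /* Toy model of the white/gray scan; not kernel code. */
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  enum color { WHITE, GRAY, BLACK };

  struct object {
  	unsigned long start;	/* first byte of the tracked block */
  	size_t size;		/* block size in bytes */
  	enum color color;
  };

  #define NR_OBJECTS 3
  static struct object objects[NR_OBJECTS];

  /* Does 'value' point anywhere inside a still-white object? */
  static struct object *lookup_white(unsigned long value)
  {
  	for (int i = 0; i < NR_OBJECTS; i++)
  		if (objects[i].color == WHITE &&
  		    value >= objects[i].start &&
  		    value < objects[i].start + objects[i].size)
  			return &objects[i];
  	return NULL;
  }

  /* Scan a word-aligned range, greying any white object it references. */
  static void scan_range(const unsigned long *p, size_t words)
  {
  	for (size_t i = 0; i < words; i++) {
  		struct object *obj = lookup_white(p[i]);
  		if (obj)
  			obj->color = GRAY;
  	}
  }

  int main(void)
  {
  	void *a = calloc(1, 32), *b = calloc(1, 32), *c = calloc(1, 32);
  	unsigned long roots[1] = { (unsigned long)a };

  	/* Step 1: all objects start white. */
  	objects[0] = (struct object){ (unsigned long)a, 32, WHITE };
  	objects[1] = (struct object){ (unsigned long)b, 32, WHITE };
  	objects[2] = (struct object){ (unsigned long)c, 32, WHITE };

  	memcpy(a, &b, sizeof(b));	/* a references b; nothing references c */

  	/* Step 2: scan the roots (standing in for data sections/stacks). */
  	scan_range(roots, 1);

  	/* Step 3: rescan until no more objects turn gray (BLACK marks a
  	 * gray object whose contents have already been scanned). */
  	for (int progress = 1; progress; ) {
  		progress = 0;
  		for (int i = 0; i < NR_OBJECTS; i++) {
  			if (objects[i].color != GRAY)
  				continue;
  			objects[i].color = BLACK;
  			scan_range((const unsigned long *)objects[i].start,
  				   objects[i].size / sizeof(unsigned long));
  			progress = 1;
  		}
  	}

  	/* Step 4: remaining white objects are the orphans (here, c). */
  	for (int i = 0; i < NR_OBJECTS; i++)
  		if (objects[i].color == WHITE)
  			printf("orphan object at 0x%lx\n", objects[i].start);

  	free(a); free(b); free(c);
  	return 0;
  }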
Some allocated memory blocks have pointers stored in the kernel's
internal data structures and they cannot be detected as orphans. To
avoid this, kmemleak can also store the number of values pointing to an
address inside the block address range that need to be found so that the
block is not considered a leak. One example is __vmalloc().

Kmemleak API
------------

See the include/linux/kmemleak.h header for the function prototypes.
kmemleak_init - initialize kmemleak
kmemleak_alloc - notify of a memory block allocation
kmemleak_free - notify of a memory block freeing
kmemleak_not_leak - mark an object as not a leak
kmemleak_ignore - do not scan or report an object as leak
kmemleak_scan_area - add scan areas inside a memory block
kmemleak_no_scan - do not scan a memory block
kmemleak_erase - erase an old value in a pointer variable
kmemleak_alloc_recursive - as kmemleak_alloc but skips objects from
SLAB_NOLEAKTRACE caches (avoids recursive tracing)
kmemleak_free_recursive - as kmemleak_free but skips objects from
SLAB_NOLEAKTRACE caches
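
As an illustration of these annotations, here is a hedged sketch of
hypothetical kernel code (the function and scenario are invented; only
the kmemleak call is from the API above):

  #include <linux/kmemleak.h>
  #include <linux/slab.h>

  /* Hypothetical example: the only reference to this descriptor is handed
   * to hardware as a physical address, which the scanner cannot recognise
   * as a pointer, so the block would otherwise be reported as an orphan. */
  static void *hypothetical_desc_alloc(void)
  {
  	void *desc = kmalloc(64, GFP_KERNEL);

  	if (!desc)
  		return NULL;

  	kmemleak_not_leak(desc);	/* referenced, but invisibly to kmemleak */
  	return desc;
  }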

Dealing with false positives/negatives
--------------------------------------

The false negatives are real memory leaks (orphan objects) that are not
reported by kmemleak because values found during the memory scanning
happen to point to such objects. To reduce the number of false
negatives, kmemleak provides the kmemleak_ignore, kmemleak_scan_area,
kmemleak_no_scan and kmemleak_erase functions (see above). The task
stacks also increase the number of false negatives and their scanning
is not enabled by default.
The false positives are objects wrongly reported as being memory leaks
(orphan). For objects known not to be leaks, kmemleak provides the
kmemleak_not_leak function. kmemleak_ignore can also be used if the
memory block is known not to contain other pointers; it will no longer
be scanned.
Some of the reported leaks are only transient, especially on SMP
systems, because of pointers temporarily stored in CPU registers or
stacks. Kmemleak defines MSECS_MIN_AGE (defaulting to 1000) representing
the minimum age of an object to be reported as a memory leak.
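
One concrete pattern for reducing the stale-pointer false negatives
mentioned above appears in the mm/slab.c hunk later in this commit:
when a per-CPU array cache hands out an object, the leftover copy of
its pointer in the array is erased. A simplified sketch (the helper
name is hypothetical):

  #include <linux/kmemleak.h>

  /* Pop an object from a pointer array and clear the stale slot so a
   * later scan does not count it as a reference to the object. */
  static void *cache_pop(void **entries, int *avail)
  {
  	void *objp = entries[--*avail];

  	kmemleak_erase(&entries[*avail]);
  	return objp;
  }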

Limitations and Drawbacks
-------------------------

The main drawback is the reduced performance of memory allocation and
freeing. To avoid other penalties, the memory scanning is only performed
when the /sys/kernel/debug/kmemleak file is read. In any case, this tool
is intended for debugging, where performance is unlikely to be the most
important requirement.
To keep the algorithm simple, kmemleak scans for values pointing to any
address inside a block's address range. This may lead to an increased
number of false negatives. However, it is likely that a real memory leak
will eventually become visible.
Another source of false negatives is the data stored in non-pointer
values. In a future version, kmemleak could only scan the pointer
members in the allocated structures. This feature would solve many of
the false negative cases described above.
The tool can report false positives. These are cases where an allocated
block doesn't need to be freed (some cases in the init_call functions),
where the pointer is calculated by methods other than the usual
container_of macro, or where the pointer is stored in a location not
scanned by kmemleak.
Page allocations and ioremap are not tracked. Only the ARM and x86
architectures are currently supported.

@@ -3370,6 +3370,12 @@ F: Documentation/trace/kmemtrace.txt
F: include/trace/kmemtrace.h
F: kernel/trace/kmemtrace.c
KMEMLEAK
P: Catalin Marinas
M: catalin.marinas@arm.com
L: linux-kernel@vger.kernel.org
S: Maintained
KPROBES
P: Ananth N Mavinakayanahalli
M: ananth@in.ibm.com

@@ -103,6 +103,7 @@
#include <linux/io.h>
#include <asm/system.h>
#include <linux/uaccess.h>
#include <linux/kmemleak.h>
#define MAX_NR_CON_DRIVER 16

@@ -25,6 +25,7 @@
#include <linux/uio.h>
#include <linux/namei.h>
#include <linux/log2.h>
#include <linux/kmemleak.h>
#include <asm/uaccess.h>
#include "internal.h"
@@ -492,6 +493,11 @@ void __init bdev_cache_init(void)
bd_mnt = kern_mount(&bd_type);
if (IS_ERR(bd_mnt))
panic("Cannot create bdev pseudo-fs");
/*
* This vfsmount structure is only used to obtain the
* blockdev_superblock, so tell kmemleak not to report it.
*/
kmemleak_not_leak(bd_mnt);
blockdev_superblock = bd_mnt->mnt_sb; /* For writeback */
}

/*
* include/linux/kmemleak.h
*
* Copyright (C) 2008 ARM Limited
* Written by Catalin Marinas <catalin.marinas@arm.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*/
#ifndef __KMEMLEAK_H
#define __KMEMLEAK_H

#ifdef CONFIG_DEBUG_KMEMLEAK

extern void kmemleak_init(void);
extern void kmemleak_alloc(const void *ptr, size_t size, int min_count,
			   gfp_t gfp);
extern void kmemleak_free(const void *ptr);
extern void kmemleak_padding(const void *ptr, unsigned long offset,
			     size_t size);
extern void kmemleak_not_leak(const void *ptr);
extern void kmemleak_ignore(const void *ptr);
extern void kmemleak_scan_area(const void *ptr, unsigned long offset,
			       size_t length, gfp_t gfp);
extern void kmemleak_no_scan(const void *ptr);

static inline void kmemleak_alloc_recursive(const void *ptr, size_t size,
					    int min_count, unsigned long flags,
					    gfp_t gfp)
{
	if (!(flags & SLAB_NOLEAKTRACE))
		kmemleak_alloc(ptr, size, min_count, gfp);
}

static inline void kmemleak_free_recursive(const void *ptr,
					   unsigned long flags)
{
	if (!(flags & SLAB_NOLEAKTRACE))
		kmemleak_free(ptr);
}

static inline void kmemleak_erase(void **ptr)
{
	*ptr = NULL;
}

#else

static inline void kmemleak_init(void)
{
}
static inline void kmemleak_alloc(const void *ptr, size_t size, int min_count,
				  gfp_t gfp)
{
}
static inline void kmemleak_alloc_recursive(const void *ptr, size_t size,
					    int min_count, unsigned long flags,
					    gfp_t gfp)
{
}
static inline void kmemleak_free(const void *ptr)
{
}
static inline void kmemleak_free_recursive(const void *ptr,
					   unsigned long flags)
{
}
static inline void kmemleak_not_leak(const void *ptr)
{
}
static inline void kmemleak_ignore(const void *ptr)
{
}
static inline void kmemleak_scan_area(const void *ptr, unsigned long offset,
				      size_t length, gfp_t gfp)
{
}
static inline void kmemleak_erase(void **ptr)
{
}
static inline void kmemleak_no_scan(const void *ptr)
{
}

#endif	/* CONFIG_DEBUG_KMEMLEAK */

#endif	/* __KMEMLEAK_H */

@@ -86,7 +86,12 @@ struct percpu_data {
void *ptrs[1];
};
/* pointer disguising messes up the kmemleak objects tracking */
#ifndef CONFIG_DEBUG_KMEMLEAK
#define __percpu_disguise(pdata) (struct percpu_data *)~(unsigned long)(pdata)
#else
#define __percpu_disguise(pdata) (struct percpu_data *)(pdata)
#endif
#define per_cpu_ptr(ptr, cpu) \
({ \

@@ -62,6 +62,8 @@
# define SLAB_DEBUG_OBJECTS 0x00000000UL
#endif
#define SLAB_NOLEAKTRACE 0x00800000UL /* Avoid kmemleak tracing */
/* The following flags affect the page allocator grouping pages by mobility */
#define SLAB_RECLAIM_ACCOUNT 0x00020000UL /* Objects are reclaimable */
#define SLAB_TEMPORARY SLAB_RECLAIM_ACCOUNT /* Objects are short-lived */

@@ -56,6 +56,7 @@
#include <linux/debug_locks.h>
#include <linux/debugobjects.h>
#include <linux/lockdep.h>
#include <linux/kmemleak.h>
#include <linux/pid_namespace.h>
#include <linux/device.h>
#include <linux/kthread.h>
@@ -621,6 +622,7 @@ asmlinkage void __init start_kernel(void)
/* init some links before init_ISA_irqs() */
early_irq_init();
init_IRQ();
prio_tree_init();
init_timers();
hrtimers_init();
softirq_init();
@@ -667,6 +669,7 @@
enable_debug_pagealloc();
cpu_hotplug_init();
kmemtrace_init();
kmemleak_init();
debug_objects_mem_init();
idr_init_cache();
setup_per_cpu_pageset();
@@ -676,7 +679,6 @@
calibrate_delay();
pidmap_init();
pgtable_cache_init();
prio_tree_init();
anon_vma_init();
#ifdef CONFIG_X86
if (efi_enabled)

@@ -53,6 +53,7 @@
#include <linux/ftrace.h>
#include <linux/async.h>
#include <linux/percpu.h>
#include <linux/kmemleak.h>
#if 0
#define DEBUGP printk
@@ -433,6 +434,7 @@ static void *percpu_modalloc(unsigned long size, unsigned long align,
unsigned long extra;
unsigned int i;
void *ptr;
int cpu;
if (align > PAGE_SIZE) {
printk(KERN_WARNING "%s: per-cpu alignment %li > %li\n",
@@ -462,6 +464,11 @@ static void *percpu_modalloc(unsigned long size, unsigned long align,
if (!split_block(i, size))
return NULL;
/* add the per-cpu scanning areas */
for_each_possible_cpu(cpu)
kmemleak_alloc(ptr + per_cpu_offset(cpu), size, 0,
GFP_KERNEL);
/* Mark allocated */
pcpu_size[i] = -pcpu_size[i];
return ptr;
@@ -476,6 +483,7 @@ static void percpu_modfree(void *freeme)
{
unsigned int i;
void *ptr = __per_cpu_start + block_size(pcpu_size[0]);
int cpu;
/* First entry is core kernel percpu data. */
for (i = 1; i < pcpu_num_used; ptr += block_size(pcpu_size[i]), i++) {
@@ -487,6 +495,10 @@ static void percpu_modfree(void *freeme)
BUG();
free:
/* remove the per-cpu scanning areas */
for_each_possible_cpu(cpu)
kmemleak_free(freeme + per_cpu_offset(cpu));
/* Merge with previous? */
if (pcpu_size[i-1] >= 0) {
pcpu_size[i-1] += pcpu_size[i];
@@ -1879,6 +1891,36 @@ static void *module_alloc_update_bounds(unsigned long size)
return ret;
}
#ifdef CONFIG_DEBUG_KMEMLEAK
static void kmemleak_load_module(struct module *mod, Elf_Ehdr *hdr,
Elf_Shdr *sechdrs, char *secstrings)
{
unsigned int i;
/* only scan the sections containing data */
kmemleak_scan_area(mod->module_core, (unsigned long)mod -
(unsigned long)mod->module_core,
sizeof(struct module), GFP_KERNEL);
for (i = 1; i < hdr->e_shnum; i++) {
if (!(sechdrs[i].sh_flags & SHF_ALLOC))
continue;
if (strncmp(secstrings + sechdrs[i].sh_name, ".data", 5) != 0
&& strncmp(secstrings + sechdrs[i].sh_name, ".bss", 4) != 0)
continue;
kmemleak_scan_area(mod->module_core, sechdrs[i].sh_addr -
(unsigned long)mod->module_core,
sechdrs[i].sh_size, GFP_KERNEL);
}
}
#else
static inline void kmemleak_load_module(struct module *mod, Elf_Ehdr *hdr,
Elf_Shdr *sechdrs, char *secstrings)
{
}
#endif
/* Allocate and load the module: note that size of section 0 is always
zero, and we rely on this for optional sections. */
static noinline struct module *load_module(void __user *umod,
@@ -2049,6 +2091,12 @@ static noinline struct module *load_module(void __user *umod,
/* Do the allocs. */
ptr = module_alloc_update_bounds(mod->core_size);
/*
* The pointer to this block is stored in the module structure
* which is inside the block. Just mark it as not being a
* leak.
*/
kmemleak_not_leak(ptr);
if (!ptr) {
err = -ENOMEM;
goto free_percpu;
@@ -2057,6 +2105,13 @@ static noinline struct module *load_module(void __user *umod,
mod->module_core = ptr;
ptr = module_alloc_update_bounds(mod->init_size);
/*
* The pointer to this block is stored in the module structure
* which is inside the block. This block doesn't need to be
* scanned as it contains data and code that will be freed
* after the module is initialized.
*/
kmemleak_ignore(ptr);
if (!ptr && mod->init_size) {
err = -ENOMEM;
goto free_core;
@@ -2087,6 +2142,7 @@ static noinline struct module *load_module(void __user *umod,
}
/* Module has been moved. */
mod = (void *)sechdrs[modindex].sh_addr;
kmemleak_load_module(mod, hdr, sechdrs, secstrings);
#if defined(CONFIG_MODULE_UNLOAD) && defined(CONFIG_SMP)
mod->refptr = percpu_modalloc(sizeof(local_t), __alignof__(local_t),

@@ -336,6 +336,38 @@ config SLUB_STATS
	  out which slabs are relevant to a particular load.
	  Try running: slabinfo -DA

config DEBUG_KMEMLEAK
	bool "Kernel memory leak detector"
	depends on DEBUG_KERNEL && EXPERIMENTAL && (X86 || ARM) && \
		!MEMORY_HOTPLUG
	select DEBUG_SLAB if SLAB
	select SLUB_DEBUG if SLUB
	select DEBUG_FS if SYSFS
	select STACKTRACE if STACKTRACE_SUPPORT
	select KALLSYMS
	help
	  Say Y here if you want to enable the memory leak
	  detector. The memory allocation/freeing is traced in a way
	  similar to Boehm's conservative garbage collector, the
	  difference being that the orphan objects are not freed but
	  only shown in /sys/kernel/debug/kmemleak. Enabling this
	  feature will introduce an overhead to memory
	  allocations. See Documentation/kmemleak.txt for more
	  details.

	  In order to access the kmemleak file, debugfs needs to be
	  mounted (usually at /sys/kernel/debug).

config DEBUG_KMEMLEAK_TEST
	tristate "Simple test for the kernel memory leak detector"
	depends on DEBUG_KMEMLEAK
	help
	  Say Y or M here to build a test for the kernel memory leak
	  detector. This option enables a module that explicitly leaks
	  memory.

	  If unsure, say N.

config DEBUG_PREEMPT
	bool "Debug preemptible kernel"
	depends on DEBUG_KERNEL && PREEMPT && (TRACE_IRQFLAGS_SUPPORT || PPC64)

@@ -38,3 +38,5 @@ obj-$(CONFIG_SMP) += allocpercpu.o
endif
obj-$(CONFIG_QUICKLIST) += quicklist.o
obj-$(CONFIG_CGROUP_MEM_RES_CTLR) += memcontrol.o page_cgroup.o
obj-$(CONFIG_DEBUG_KMEMLEAK) += kmemleak.o
obj-$(CONFIG_DEBUG_KMEMLEAK_TEST) += kmemleak-test.o
/*
* mm/kmemleak-test.c
*
* Copyright (C) 2008 ARM Limited
* Written by Catalin Marinas <catalin.marinas@arm.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*/
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>
#include <linux/list.h>
#include <linux/percpu.h>
#include <linux/fdtable.h>
#include <linux/kmemleak.h>
struct test_node {
	long header[25];
	struct list_head list;
	long footer[25];
};

static LIST_HEAD(test_list);
static DEFINE_PER_CPU(void *, test_pointer);

/*
 * Some very simple testing. This function needs to be extended for
 * proper testing.
 */
static int __init kmemleak_test_init(void)
{
	struct test_node *elem;
	int i;

	printk(KERN_INFO "Kmemleak testing\n");

	/* make some orphan objects */
	pr_info("kmemleak: kmalloc(32) = %p\n", kmalloc(32, GFP_KERNEL));
	pr_info("kmemleak: kmalloc(32) = %p\n", kmalloc(32, GFP_KERNEL));
	pr_info("kmemleak: kmalloc(1024) = %p\n", kmalloc(1024, GFP_KERNEL));
	pr_info("kmemleak: kmalloc(1024) = %p\n", kmalloc(1024, GFP_KERNEL));
	pr_info("kmemleak: kmalloc(2048) = %p\n", kmalloc(2048, GFP_KERNEL));
	pr_info("kmemleak: kmalloc(2048) = %p\n", kmalloc(2048, GFP_KERNEL));
	pr_info("kmemleak: kmalloc(4096) = %p\n", kmalloc(4096, GFP_KERNEL));
	pr_info("kmemleak: kmalloc(4096) = %p\n", kmalloc(4096, GFP_KERNEL));
#ifndef CONFIG_MODULES
	pr_info("kmemleak: kmem_cache_alloc(files_cachep) = %p\n",
		kmem_cache_alloc(files_cachep, GFP_KERNEL));
	pr_info("kmemleak: kmem_cache_alloc(files_cachep) = %p\n",
		kmem_cache_alloc(files_cachep, GFP_KERNEL));
#endif
	pr_info("kmemleak: vmalloc(64) = %p\n", vmalloc(64));
	pr_info("kmemleak: vmalloc(64) = %p\n", vmalloc(64));
	pr_info("kmemleak: vmalloc(64) = %p\n", vmalloc(64));
	pr_info("kmemleak: vmalloc(64) = %p\n", vmalloc(64));
	pr_info("kmemleak: vmalloc(64) = %p\n", vmalloc(64));

	/*
	 * Add elements to a list. They should only appear as orphan
	 * after the module is removed.
	 */
	for (i = 0; i < 10; i++) {
		elem = kmalloc(sizeof(*elem), GFP_KERNEL);
		pr_info("kmemleak: kmalloc(sizeof(*elem)) = %p\n", elem);
		if (!elem)
			return -ENOMEM;
		memset(elem, 0, sizeof(*elem));
		INIT_LIST_HEAD(&elem->list);
		list_add_tail(&elem->list, &test_list);
	}

	for_each_possible_cpu(i) {
		per_cpu(test_pointer, i) = kmalloc(129, GFP_KERNEL);
		pr_info("kmemleak: kmalloc(129) = %p\n",
			per_cpu(test_pointer, i));
	}

	return 0;
}
module_init(kmemleak_test_init);

static void __exit kmemleak_test_exit(void)
{
	struct test_node *elem, *tmp;

	/*
	 * Remove the list elements without actually freeing the
	 * memory.
	 */
	list_for_each_entry_safe(elem, tmp, &test_list, list)
		list_del(&elem->list);
}
module_exit(kmemleak_test_exit);

MODULE_LICENSE("GPL");
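
A plausible way to exercise this module, assuming it builds as
kmemleak-test per the mm/Makefile hunk above (reading the debugfs file
triggers an intermediate scan, as described in Documentation/kmemleak.txt):

  # modprobe kmemleak-test
  # cat /sys/kernel/debug/kmemleak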

@@ -46,6 +46,7 @@
#include <linux/page-isolation.h>
#include <linux/page_cgroup.h>
#include <linux/debugobjects.h>
#include <linux/kmemleak.h>
#include <asm/tlbflush.h>
#include <asm/div64.h>
@@ -4546,6 +4547,16 @@ void *__init alloc_large_system_hash(const char *tablename,
if (_hash_mask)
*_hash_mask = (1 << log2qty) - 1;
/*
* If hashdist is set, the table allocation is done with __vmalloc()
* which invokes the kmemleak_alloc() callback. This function may also
* be called before the slab and kmemleak are initialised when
* kmemleak simply buffers the request to be executed later
* (GFP_ATOMIC flag ignored in this case).
*/
if (!hashdist)
kmemleak_alloc(table, size, 1, GFP_ATOMIC);
return table;
}

@@ -107,6 +107,7 @@
#include <linux/string.h>
#include <linux/uaccess.h>
#include <linux/nodemask.h>
#include <linux/kmemleak.h>
#include <linux/mempolicy.h>
#include <linux/mutex.h>
#include <linux/fault-inject.h>
@@ -178,13 +179,13 @@
SLAB_STORE_USER | \
SLAB_RECLAIM_ACCOUNT | SLAB_PANIC | \
SLAB_DESTROY_BY_RCU | SLAB_MEM_SPREAD | \
SLAB_DEBUG_OBJECTS | SLAB_NOLEAKTRACE)
#else
# define CREATE_MASK (SLAB_HWCACHE_ALIGN | \
SLAB_CACHE_DMA | \
SLAB_RECLAIM_ACCOUNT | SLAB_PANIC | \
SLAB_DESTROY_BY_RCU | SLAB_MEM_SPREAD | \
SLAB_DEBUG_OBJECTS | SLAB_NOLEAKTRACE)
#endif
/*
@@ -964,6 +965,14 @@ static struct array_cache *alloc_arraycache(int node, int entries,
struct array_cache *nc = NULL;
nc = kmalloc_node(memsize, gfp, node);
/*
* The array_cache structures contain pointers to free objects.
* However, when such objects are allocated or transferred to another
* cache the pointers are not cleared and they could be counted as
* valid references during a kmemleak scan. Therefore, kmemleak must
* not scan such objects.
*/
kmemleak_no_scan(nc);
if (nc) {
nc->avail = 0;
nc->limit = entries;
@@ -2625,6 +2634,14 @@ static struct slab *alloc_slabmgmt(struct kmem_cache *cachep, void *objp,
/* Slab management obj is off-slab. */
slabp = kmem_cache_alloc_node(cachep->slabp_cache,
local_flags, nodeid);
/*
* If the first object in the slab is leaked (it's allocated
* but no one has a reference to it), we want to make sure
* kmemleak does not treat the ->s_mem pointer as a reference
* to the object. Otherwise we will not report the leak.
*/
kmemleak_scan_area(slabp, offsetof(struct slab, list),
sizeof(struct list_head), local_flags);
if (!slabp)
return NULL;
} else {
@@ -3145,6 +3162,12 @@ static inline void *____cache_alloc(struct kmem_cache *cachep, gfp_t flags)
STATS_INC_ALLOCMISS(cachep);
objp = cache_alloc_refill(cachep, flags);
}
/*
* To avoid a false negative, if an object that is in one of the
* per-CPU caches is leaked, we need to make sure kmemleak doesn't
* treat the array pointers as a reference to the object.
*/
kmemleak_erase(&ac->entry[ac->avail]);
return objp;
}
@@ -3364,6 +3387,8 @@ __cache_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid,
out:
local_irq_restore(save_flags);
ptr = cache_alloc_debugcheck_after(cachep, flags, ptr, caller);
kmemleak_alloc_recursive(ptr, obj_size(cachep), 1, cachep->flags,
flags);
if (unlikely((flags & __GFP_ZERO) && ptr))
memset(ptr, 0, obj_size(cachep));
@@ -3419,6 +3444,8 @@ __cache_alloc(struct kmem_cache *cachep, gfp_t flags, void *caller)
objp = __do_cache_alloc(cachep, flags);
local_irq_restore(save_flags);
objp = cache_alloc_debugcheck_after(cachep, flags, objp, caller);
kmemleak_alloc_recursive(objp, obj_size(cachep), 1, cachep->flags,
flags);
prefetchw(objp);
if (unlikely((flags & __GFP_ZERO) && objp))
@@ -3534,6 +3561,7 @@ static inline void __cache_free(struct kmem_cache *cachep, void *objp)
struct array_cache *ac = cpu_cache_get(cachep);
check_irq_off();
kmemleak_free_recursive(objp, cachep->flags);
objp = cache_free_debugcheck(cachep, objp, __builtin_return_address(0));
/*

@@ -67,6 +67,7 @@
#include <linux/rcupdate.h>
#include <linux/list.h>
#include <linux/kmemtrace.h>
#include <linux/kmemleak.h>
#include <asm/atomic.h>
/*
@@ -509,6 +510,7 @@ void *__kmalloc_node(size_t size, gfp_t gfp, int node)
size, PAGE_SIZE << order, gfp, node);
}
kmemleak_alloc(ret, size, 1, gfp);
return ret;
}
EXPORT_SYMBOL(__kmalloc_node);
@@ -521,6 +523,7 @@ void kfree(const void *block)
if (unlikely(ZERO_OR_NULL_PTR(block)))
return;
kmemleak_free(block);
sp = slob_page(block);
if (is_slob_page(sp)) {
@@ -584,12 +587,14 @@ struct kmem_cache *kmem_cache_create(const char *name, size_t size,
} else if (flags & SLAB_PANIC)
panic("Cannot create slab cache %s\n", name);
kmemleak_alloc(c, sizeof(struct kmem_cache), 1, GFP_KERNEL);
return c;
}
EXPORT_SYMBOL(kmem_cache_create);
void kmem_cache_destroy(struct kmem_cache *c)
{
kmemleak_free(c);
slob_free(c, sizeof(struct kmem_cache));
}
EXPORT_SYMBOL(kmem_cache_destroy);
@@ -613,6 +618,7 @@ void *kmem_cache_alloc_node(struct kmem_cache *c, gfp_t flags, int node)
if (c->ctor)
c->ctor(b);
kmemleak_alloc_recursive(b, c->size, 1, c->flags, flags);
return b;
}
EXPORT_SYMBOL(kmem_cache_alloc_node);
@@ -635,6 +641,7 @@ static void kmem_rcu_free(struct rcu_head *head)
void kmem_cache_free(struct kmem_cache *c, void *b)
{
kmemleak_free_recursive(b, c->flags);
if (unlikely(c->flags & SLAB_DESTROY_BY_RCU)) {
struct slob_rcu *slob_rcu;
slob_rcu = b + (c->size - sizeof(struct slob_rcu));

@@ -20,6 +20,7 @@
#include <linux/kmemtrace.h>
#include <linux/cpu.h>
#include <linux/cpuset.h>
#include <linux/kmemleak.h>
#include <linux/mempolicy.h>
#include <linux/ctype.h>
#include <linux/debugobjects.h>
@@ -143,7 +144,7 @@
* Set of flags that will prevent slab merging
*/
#define SLUB_NEVER_MERGE (SLAB_RED_ZONE | SLAB_POISON | SLAB_STORE_USER | \
SLAB_TRACE | SLAB_DESTROY_BY_RCU | SLAB_NOLEAKTRACE)
#define SLUB_MERGE_SAME (SLAB_DEBUG_FREE | SLAB_RECLAIM_ACCOUNT | \
SLAB_CACHE_DMA)
@@ -1617,6 +1618,7 @@ static __always_inline void *slab_alloc(struct kmem_cache *s,
if (unlikely((gfpflags & __GFP_ZERO) && object))
memset(object, 0, objsize);
kmemleak_alloc_recursive(object, objsize, 1, s->flags, gfpflags);
return object;
}
@@ -1746,6 +1748,7 @@ static __always_inline void slab_free(struct kmem_cache *s,
struct kmem_cache_cpu *c;
unsigned long flags;
kmemleak_free_recursive(x, s->flags);
local_irq_save(flags);
c = get_cpu_slab(s, smp_processor_id());
debug_check_no_locks_freed(object, c->objsize);

@@ -24,6 +24,7 @@
#include <linux/radix-tree.h>
#include <linux/rcupdate.h>
#include <linux/pfn.h>
#include <linux/kmemleak.h>
#include <asm/atomic.h>
#include <asm/uaccess.h>
@@ -1326,6 +1327,9 @@ static void __vunmap(const void *addr, int deallocate_pages)
void vfree(const void *addr)
{
BUG_ON(in_interrupt());
kmemleak_free(addr);
__vunmap(addr, 1);
}
EXPORT_SYMBOL(vfree);
@@ -1438,8 +1442,17 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
void *__vmalloc_area(struct vm_struct *area, gfp_t gfp_mask, pgprot_t prot)
{
void *addr = __vmalloc_area_node(area, gfp_mask, prot, -1,
__builtin_return_address(0));
/*
* A ref_count = 3 is needed because the vm_struct and vmap_area
* structures allocated in the __get_vm_area_node() function contain
* references to the virtual address of the vmalloc'ed block.
*/
kmemleak_alloc(addr, area->size - PAGE_SIZE, 3, gfp_mask);
return addr;
}
/**
@@ -1458,6 +1471,8 @@ static void *__vmalloc_node(unsigned long size, gfp_t gfp_mask, pgprot_t prot,
int node, void *caller)
{
struct vm_struct *area;
void *addr;
unsigned long real_size = size;
size = PAGE_ALIGN(size);
if (!size || (size >> PAGE_SHIFT) > num_physpages)
@@ -1469,7 +1484,16 @@ static void *__vmalloc_node(unsigned long size, gfp_t gfp_mask, pgprot_t prot,
if (!area)
return NULL;
addr = __vmalloc_area_node(area, gfp_mask, prot, node, caller);
/*
* A ref_count = 3 is needed because the vm_struct and vmap_area
* structures allocated in the __get_vm_area_node() function contain
* references to the virtual address of the vmalloc'ed block.
*/
kmemleak_alloc(addr, real_size, 3, gfp_mask);
return addr;
}
void *__vmalloc(unsigned long size, gfp_t gfp_mask, pgprot_t prot)