Commit 3b612c8f authored by David Hildenbrand, committed by Andrew Morton

mm: optimize CONFIG_PER_VMA_LOCK member placement in vm_area_struct

Currently, we end up wasting some memory in each vm_area_struct. Pahole
states that:
	[...]
	int                        vm_lock_seq;          /*    40     4 */

	/* XXX 4 bytes hole, try to pack */

	struct vma_lock *          vm_lock;              /*    48     8 */
	bool                       detached;             /*    56     1 */

	/* XXX 7 bytes hole, try to pack */
	[...]

Let's reduce the holes and memory wastage by moving the bool:
	[...]
	bool                       detached;             /*    40     1 */

	/* XXX 3 bytes hole, try to pack */

	int                        vm_lock_seq;          /*    44     4 */
	struct vma_lock *          vm_lock;              /*    48     8 */
	[...]

Effectively shrinking the vm_area_struct with CONFIG_PER_VMA_LOCK by
8 bytes.
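
For illustration only (not part of the patch): a minimal standalone C
program, with simplified member types, that reproduces the same effect
of moving the bool into the existing hole. On common 64-bit ABIs it
prints 24 and 16, i.e. the same 8-byte saving:

	#include <stdio.h>
	#include <stdbool.h>

	struct before {			/* old member order */
		int vm_lock_seq;	/* offset 0, then a 4-byte hole */
		void *vm_lock;		/* offset 8 */
		bool detached;		/* offset 16, then 7 bytes tail padding */
	};

	struct after {			/* new member order */
		bool detached;		/* offset 0, then a 3-byte hole */
		int vm_lock_seq;	/* offset 4 */
		void *vm_lock;		/* offset 8 */
	};

	int main(void)
	{
		printf("before: %zu\n", sizeof(struct before));
		printf("after:  %zu\n", sizeof(struct after));
		return 0;
	}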

Likely, we could place "detached" in the lowest bit of vm_lock, but at
least on 64-bit that won't shrink the structure any further (the
pointer's alignment would reintroduce the hole), so keep it simple.
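
Purely as a hedged sketch of that alternative (hypothetical helpers,
not something this patch introduces): since struct vma_lock pointers
are word-aligned, bit 0 is always zero and could encode the flag, e.g.:

	#include <stdint.h>
	#include <stdbool.h>

	/* Hypothetical helpers, illustrative only. */
	static inline void *vm_lock_encode(void *vm_lock, bool detached)
	{
		return (void *)((uintptr_t)vm_lock | (detached ? 1 : 0));
	}

	static inline bool vm_lock_detached(void *tagged)
	{
		return (uintptr_t)tagged & 1;
	}

	static inline void *vm_lock_untag(void *tagged)
	{
		return (void *)((uintptr_t)tagged & ~(uintptr_t)1);
	}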

Link: https://lkml.kernel.org/r/20240327143548.744070-1-david@redhat.com
Signed-off-by: David Hildenbrand <david@redhat.com>
Cc: Suren Baghdasaryan <surenb@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
parent 07db63a2
@@ -671,6 +671,9 @@ struct vm_area_struct {
 	};
 
 #ifdef CONFIG_PER_VMA_LOCK
+	/* Flag to indicate areas detached from the mm->mm_mt tree */
+	bool detached;
+
 	/*
 	 * Can only be written (using WRITE_ONCE()) while holding both:
 	 * - mmap_lock (in write mode)
@@ -687,9 +690,6 @@ struct vm_area_struct {
 	 */
 	int vm_lock_seq;
 	struct vma_lock *vm_lock;
-
-	/* Flag to indicate areas detached from the mm->mm_mt tree */
-	bool detached;
 #endif
 	/*