- 15 May, 2019 40 commits
-
Alexey Dobriyan authored
struct linux_binprm::buf is the first field and it is exactly 128 bytes in size. This means that on x86_64 all accesses to the other fields go through the [r64 + disp32] addressing mode, which is 3 bytes bloatier than the [r64 + disp8] addressing mode. Given that accesses to the other fields outnumber accesses to ->buf, move it down.

Space savings (x86_64 defconfig; more on distro configs, because LSMs actively dereference "bprm" but do not care about the first 128 bytes of the executable itself):

    add/remove: 0/0 grow/shrink: 0/24 up/down: 0/-492 (-492)
    Function                                    old    new  delta
    selinux_bprm_committing_creds               552    549     -3
    finalize_exec                                94     91     -3
    __audit_log_bprm_fcaps                      283    280     -3
    __audit_bprm                                 39     36     -3
    perf_trace_sched_process_exec               347    341     -6
    install_exec_creds                          105     99     -6
    cap_bprm_set_creds.cold                      60     54     -6
    would_dump                                  137    128     -9
    load_script                                 637    628     -9
    bprm_change_interp                           61     52     -9
    trace_event_raw_event_sched_process_exec    260    250    -10
    search_binary_handler                       255    240    -15
    remove_arg_zero                             295    277    -18
    free_bprm                                   119    101    -18
    prepare_binprm                              379    360    -19
    setup_new_exec                              336    315    -21
    flush_old_exec                             1638   1617    -21
    copy_strings.isra                           746    724    -22
    setup_arg_pages                             559    530    -29
    load_misc_binary                           1151   1118    -33
    selinux_bprm_set_creds                      792    753    -39
    load_elf_binary                           11111  11072    -39
    cap_bprm_set_creds                         1496   1454    -42
    __do_execve_file.isra                      2395   2286   -109

Link: http://lkml.kernel.org/r/20190421165025.GA26843@avx2
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
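To make the layout change concrete, here is a minimal sketch (a hand-written illustration, not the full struct; BINPRM_BUF_SIZE and the field names shown do exist in the real linux_binprm):

    struct linux_binprm {
            /* hot fields first: offsets < 128 reachable via [r64 + disp8] */
            struct vm_area_struct *vma;
            unsigned long vma_pages;
            struct mm_struct *mm;
            unsigned long p;                /* current top of mem */
            /* ... remaining fields ... */
            /* rarely-accessed first 128 bytes of the file go last */
            char buf[BINPRM_BUF_SIZE];
    };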
-
Alexey Dobriyan authored
->recursion_depth is changed only by current, therefore decrementing can be done without taking any locks.

Link: http://lkml.kernel.org/r/20190417213150.GA26474@avx2
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
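A hedged sketch of the resulting shape in search_binary_handler() (reconstructed from memory, not the verbatim diff):

    retval = fmt->load_binary(bprm);
    bprm->recursion_depth--;        /* only current touches this field,
                                       so no lock is needed */
    read_lock(&binfmt_lock);        /* re-taken only to keep walking the
                                       list of registered formats */
    put_binfmt(fmt);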
-
Mathieu Malaterre authored
There is a plan to build the kernel with -Wimplicit-fallthrough, and this place in the code produced a warning (W=1). This commit removes the following warning:

    kernel/signal.c:795:13: warning: this statement may fall through [-Wimplicit-fallthrough=]

Link: http://lkml.kernel.org/r/20190114203505.17875-1-malat@debian.org
Signed-off-by: Mathieu Malaterre <malat@debian.org>
Acked-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
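The usual fix pattern looks like this (a generic illustration with hypothetical names, not the exact kernel/signal.c hunk):

    switch (state) {
    case STATE_PREPARE:                     /* hypothetical */
            do_prepare();                   /* hypothetical */
            /* fall through */              /* explicit annotation satisfies
                                               -Wimplicit-fallthrough */
    case STATE_RUN:
            do_run();                       /* hypothetical */
            break;
    }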
-
Hou Tao authored
fsync() needs to make sure the data & metadata of a file are persistent after fsync() returns, even when a power failure occurs later. In the case of fat-fs, the FAT belongs to the metadata of the file, so we need to issue a flush after the writeback of the FAT, not before it. Also bail out early when any stage of fsync fails.

Link: http://lkml.kernel.org/r/20190409030158.136316-1-houtao1@huawei.com
Signed-off-by: Hou Tao <houtao1@huawei.com>
Acked-by: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Jan Kara <jack@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
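A sketch of what the fixed fat_file_fsync() plausibly looks like (modeled on the mainline change; blkdev_issue_flush() is shown with its three-argument signature of that era):

    static int fat_file_fsync(struct file *filp, loff_t start, loff_t end,
                              int datasync)
    {
            struct inode *inode = filp->f_mapping->host;
            int err;

            /* write back the file's data and its own metadata first */
            err = __generic_file_fsync(filp, start, end, datasync);
            if (err)
                    return err;

            /* then the FAT itself, bailing out early on failure */
            err = sync_mapping_buffers(MSDOS_SB(inode->i_sb)->fat_inode->i_mapping);
            if (err)
                    return err;

            /* flush last, so the FAT writeback is durable across power loss */
            return blkdev_issue_flush(inode->i_sb->s_bdev, GFP_KERNEL, NULL);
    }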
-
Bharath Vedartham authored
csum_partial() gives different results for little-endian and big-endian hosts, so images created on little-endian hosts and mounted on big-endian hosts see checksum mismatches: an endianness bug. Sparse warns because csum_partial() returns the restricted integer type __wsum while xattr_hash() expects __u32. This warning acts as a reminder of the bug and should not be suppressed; the added comment aims to convey these endianness issues.

[akpm@linux-foundation.org: coding-style fixes]
Link: http://lkml.kernel.org/r/20190423161831.GA15387@bharath12345-Inspiron-5559
Signed-off-by: Bharath Vedartham <linux.bhar@gmail.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Jann Horn <jannh@google.com>
Cc: Jeff Mahoney <jeffm@suse.com>
Cc: Jan Kara <jack@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
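A sketch of the annotated helper (the comment paraphrases the commit's reasoning; the one-line body is the long-standing reiserfs hash):

    /*
     * csum_partial() is byte-order dependent: the same bytes produce
     * different sums on little- and big-endian hosts, so on-disk hashes
     * only match when written and read with the same endianness.
     * Sparse warns because the restricted type __wsum is returned where
     * __u32 is expected; the warning is left unsuppressed on purpose,
     * as a reminder of the bug.
     */
    static __u32 xattr_hash(const char *msg, int len)
    {
            return csum_partial(msg, len, 0);
    }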
-
Ian Kent authored
Add a description of the "ignore" pseudo mount option that can be used to provide a generic indicator to applications that the mount entry should be ignored when displaying mount information.

Link: http://lkml.kernel.org/r/155287084617.12593.812733161112154904.stgit@pluto.themaw.net
Signed-off-by: Ian Kent <raven@themaw.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Ian Kent authored
Describe AUTOFS_EXP_FORCED in addition to AUTOFS_EXP_IMMEDIATE in the description of the AUTOFS_DEV_IOCTL_EXPIRE_CMD ioctl.

Link: http://lkml.kernel.org/r/155287084078.12593.15000931045413195778.stgit@pluto.themaw.net
Signed-off-by: Ian Kent <raven@themaw.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Ian Kent authored
Update the description of AUTOFS_EXP_LEAVES to cover its possible future use with amd format mount maps.

Link: http://lkml.kernel.org/r/155287083538.12593.18163159677020718048.stgit@pluto.themaw.net
Signed-off-by: Ian Kent <raven@themaw.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Ian Kent authored
A "strictexpire" mount option has been added to the autofs file system. It is meant to be used in cases where a GUI continually accesses or an application frquently scans an automount directory tree causing an accumulation of otherwise unused mounts. Link: http://lkml.kernel.org/r/155287083000.12593.2722713092537666885.stgit@pluto.themaw.netSigned-off-by: Ian Kent <raven@themaw.net> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Ian Kent authored
Alter a few word usages in Documentation/filesystems/autofs.txt and correct some spelling mistakes.

Link: http://lkml.kernel.org/r/155287082394.12593.6506084453911662450.stgit@pluto.themaw.net
Signed-off-by: Ian Kent <raven@themaw.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Sinan Kaya authored
CONFIG_DEBUG_KERNEL should not impact code generation. Use the newly defined CONFIG_DEBUG_MISC instead to keep the current code.

Link: http://lkml.kernel.org/r/20190413224438.10802-6-okaya@kernel.org
Signed-off-by: Sinan Kaya <okaya@kernel.org>
Acked-by: Florian Westphal <fw@strlen.de>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Cc: Florian Westphal <fw@strlen.de>
Cc: Pablo Neira Ayuso <pablo@netfilter.org>
Cc: Jozsef Kadlecsik <kadlec@blackhole.kfki.hu>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Anders Roxell <anders.roxell@linaro.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Christophe Leroy <christophe.leroy@c-s.fr>
Cc: Chris Zankel <chris@zankel.net>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: James Hogan <jhogan@kernel.org>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Mike Rapoport <rppt@linux.ibm.com>
Cc: Paul Burton <paul.burton@mips.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Thomas Bogendoerfer <tbogendoerfer@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Sinan Kaya authored
CONFIG_DEBUG_KERNEL should not impact code generation. Use the newly defined CONFIG_DEBUG_MISC instead to keep the current code.

Link: http://lkml.kernel.org/r/20190413224438.10802-5-okaya@kernel.org
Signed-off-by: Sinan Kaya <okaya@kernel.org>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Acked-by: Max Filippov <jcmvbkbc@gmail.com>
Cc: Chris Zankel <chris@zankel.net>
Cc: Anders Roxell <anders.roxell@linaro.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Christophe Leroy <christophe.leroy@c-s.fr>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Florian Westphal <fw@strlen.de>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: James Hogan <jhogan@kernel.org>
Cc: Jozsef Kadlecsik <kadlec@blackhole.kfki.hu>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Mike Rapoport <rppt@linux.ibm.com>
Cc: Pablo Neira Ayuso <pablo@netfilter.org>
Cc: Paul Burton <paul.burton@mips.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Thomas Bogendoerfer <tbogendoerfer@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Sinan Kaya authored
CONFIG_DEBUG_KERNEL should not impact code generation. Use the newly defined CONFIG_DEBUG_MISC instead to keep the current code.

Link: http://lkml.kernel.org/r/20190413224438.10802-3-okaya@kernel.org
Signed-off-by: Sinan Kaya <okaya@kernel.org>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Christophe Leroy <christophe.leroy@c-s.fr>
Cc: Anders Roxell <anders.roxell@linaro.org>
Cc: Chris Zankel <chris@zankel.net>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Florian Westphal <fw@strlen.de>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: James Hogan <jhogan@kernel.org>
Cc: Jozsef Kadlecsik <kadlec@blackhole.kfki.hu>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Mike Rapoport <rppt@linux.ibm.com>
Cc: Pablo Neira Ayuso <pablo@netfilter.org>
Cc: Paul Burton <paul.burton@mips.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Thomas Bogendoerfer <tbogendoerfer@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Sinan Kaya authored
Patch series "init: Do not select DEBUG_KERNEL by default", v5. CONFIG_DEBUG_KERNEL has been designed to just enable Kconfig options. Kernel code generatoin should not depend on CONFIG_DEBUG_KERNEL. Proposed alternative plan: let's add a new symbol, something like DEBUG_MISC ("Miscellaneous debug code that should be under a more specific debug option but isn't"), make it depend on DEBUG_KERNEL and be "default DEBUG_KERNEL" but allow itself to be turned off, and then mechanically change the small handful of "#ifdef CONFIG_DEBUG_KERNEL" to "#ifdef CONFIG_DEBUG_MISC". This patch (of 5): Introduce DEBUG_MISC ("Miscellaneous debug code that should be under a more specific debug option but isn't"), make it depend on DEBUG_KERNEL and be "default DEBUG_KERNEL" but allow itself to be turned off, and then mechanically change the small handful of "#ifdef CONFIG_DEBUG_KERNEL" to "#ifdef CONFIG_DEBUG_MISC". Link: http://lkml.kernel.org/r/20190413224438.10802-2-okaya@kernel.orgSigned-off-by: Sinan Kaya <okaya@kernel.org> Reviewed-by: Josh Triplett <josh@joshtriplett.org> Reviewed-by: Kees Cook <keescook@chromium.org> Cc: Anders Roxell <anders.roxell@linaro.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Christophe Leroy <christophe.leroy@c-s.fr> Cc: Chris Zankel <chris@zankel.net> Cc: "David S. Miller" <davem@davemloft.net> Cc: Florian Westphal <fw@strlen.de> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: James Hogan <jhogan@kernel.org> Cc: Jozsef Kadlecsik <kadlec@blackhole.kfki.hu> Cc: Max Filippov <jcmvbkbc@gmail.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Michal Hocko <mhocko@suse.com> Cc: Mike Rapoport <rppt@linux.ibm.com> Cc: Pablo Neira Ayuso <pablo@netfilter.org> Cc: Paul Burton <paul.burton@mips.com> Cc: Paul Mackerras <paulus@samba.org> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Thomas Bogendoerfer <tbogendoerfer@suse.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Kees Cook authored
Commit eab09532 ("binfmt_elf: use ELF_ET_DYN_BASE only for PIE") handled the rare case when the ELF loader is directly invoked (e.g. to set a non-inheritable LD_LIBRARY_PATH, or to test new versions of the loader) by moving the load location into the mmap region, to avoid both ET_EXEC and PIE binaries. This had the effect of also moving the brk region into mmap, which could lead to the stack and brk being arbitrarily close to each other. An unlucky process wouldn't get its requested stack size, and stack allocations could end up scribbling on the heap.

In the case of using the loader directly, brk (so helpfully identified as "[heap]") is allocated with the _loader_, not the binary. For example, with ASLR entirely disabled, you can see this more clearly:

    $ /bin/cat /proc/self/maps
    555555554000-55555555c000 r-xp 00000000 ... /bin/cat
    55555575b000-55555575c000 r--p 00007000 ... /bin/cat
    55555575c000-55555575d000 rw-p 00008000 ... /bin/cat
    55555575d000-55555577e000 rw-p 00000000 ... [heap]
    ...
    7ffff7ff7000-7ffff7ffa000 r--p 00000000 ... [vvar]
    7ffff7ffa000-7ffff7ffc000 r-xp 00000000 ... [vdso]
    7ffff7ffc000-7ffff7ffd000 r--p 00027000 ... /lib/x86_64-linux-gnu/ld-2.27.so
    7ffff7ffd000-7ffff7ffe000 rw-p 00028000 ... /lib/x86_64-linux-gnu/ld-2.27.so
    7ffff7ffe000-7ffff7fff000 rw-p 00000000 ...
    7ffffffde000-7ffffffff000 rw-p 00000000 ... [stack]

    $ /lib/x86_64-linux-gnu/ld-2.27.so /bin/cat /proc/self/maps
    ...
    7ffff7bcc000-7ffff7bd4000 r-xp 00000000 ... /bin/cat
    7ffff7bd4000-7ffff7dd3000 ---p 00008000 ... /bin/cat
    7ffff7dd3000-7ffff7dd4000 r--p 00007000 ... /bin/cat
    7ffff7dd4000-7ffff7dd5000 rw-p 00008000 ... /bin/cat
    7ffff7dd5000-7ffff7dfc000 r-xp 00000000 ... /lib/x86_64-linux-gnu/ld-2.27.so
    7ffff7fb2000-7ffff7fd6000 rw-p 00000000 ...
    7ffff7ff7000-7ffff7ffa000 r--p 00000000 ... [vvar]
    7ffff7ffa000-7ffff7ffc000 r-xp 00000000 ... [vdso]
    7ffff7ffc000-7ffff7ffd000 r--p 00027000 ... /lib/x86_64-linux-gnu/ld-2.27.so
    7ffff7ffd000-7ffff7ffe000 rw-p 00028000 ... /lib/x86_64-linux-gnu/ld-2.27.so
    7ffff7ffe000-7ffff8020000 rw-p 00000000 ... [heap]
    7ffffffde000-7ffffffff000 rw-p 00000000 ... [stack]

The solution is to move brk out of mmap and into ELF_ET_DYN_BASE, since nothing is there in the direct-loader case (and ET_EXEC is still far away at 0x400000). Anything that ran before should still work (i.e. the ultimately launched binary already had its brk very far from its text, so this should be no different from a COMPAT_BRK standpoint). The only risk I see here is if someone started to suddenly depend on the entire memory space lower than the mmap region being available when launching binaries via a direct loader exec, which seems highly unlikely, I'd hope: this would mean a binary would _not_ work when exec()ed normally.

(Note that this is only done under CONFIG_ARCH_HAS_ELF_RANDOMIZE when randomization is turned on.)

Link: http://lkml.kernel.org/r/20190422225727.GA21011@beast
Link: https://lkml.kernel.org/r/CAGXu5jJ5sj3emOT2QPxQkNQk0qbU6zEfu9=Omfhx_p0nCKPSjA@mail.gmail.com
Fixes: eab09532 ("binfmt_elf: use ELF_ET_DYN_BASE only for PIE")
Signed-off-by: Kees Cook <keescook@chromium.org>
Reported-by: Ali Saidi <alisaidi@amazon.com>
Cc: Ali Saidi <alisaidi@amazon.com>
Cc: Guenter Roeck <linux@roeck-us.net>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Jann Horn <jannh@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
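A hedged sketch of the fix in load_elf_binary() (reconstructed from my reading of the mainline hunk, not a verbatim quote): when a loader is exec()ed directly, i.e. ET_DYN with no PT_INTERP, nothing occupies ELF_ET_DYN_BASE, so the brk can be parked there instead of in the mmap region:

    if ((current->flags & PF_RANDOMIZE) && (randomize_va_space > 1)) {
            /* direct loader exec: move brk out of mmap, where it could
             * otherwise collide with the downward-growing stack */
            if (IS_ENABLED(CONFIG_ARCH_HAS_ELF_RANDOMIZE) &&
                loc->elf_ex.e_type == ET_DYN && !interpreter)
                    current->mm->brk = current->mm->start_brk =
                            ELF_ET_DYN_BASE;

            current->mm->brk = current->mm->start_brk =
                    arch_randomize_brk(current->mm);
    }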
-
Alexey Dobriyan authored
Get "current_pt_regs" pointer right before usage. Space savings on x86_64: add/remove: 0/0 grow/shrink: 0/1 up/down: 0/-180 (-180) Function old new delta load_elf_binary 5806 5626 -180 !!! Looks like the compiler doesn't know that "current_pt_regs" is stable pointer (because it doesn't know ->stack isn't) even though it knows that "current" is stable pointer. So it saves it in the very beginning and then tries to carry it through a lot of code. Here is what happens here: load_elf_binary() ... mov rax,QWORD PTR gs:0x14c00 mov r13,QWORD PTR [rax+0x18] r13 = current->stack call kmem_cache_alloc # first kmalloc [980 bytes later!] # let's spill that sucker because we need a register # for "load_bias" calculations at # # if (interpreter) { # load_bias = ELF_ET_DYN_BASE; # if (current->flags & PF_RANDOMIZE) # load_bias += arch_mmap_rnd(); # elf_flags |= elf_fixed; # } mov QWORD PTR [rsp+0x68],r13 If this is not _the_ root cause it is still eeeeh. After the patch things become much simpler: mov rax, QWORD PTR gs:0x14c00 # current mov rdx, QWORD PTR [rax+0x18] # current->stack movq [rdx+0x3fb8], 0 # fill pt_regs ... call finalize_exec Link: http://lkml.kernel.org/r/20190419200343.GA19788@avx2Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com> Tested-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Alexey Dobriyan authored
There are two places where mapping protections are calculated: one for the executable, another for the interpreter -- factor them out. ELF read and execute permissions are interchanged with Linux PROT_READ and PROT_EXEC; microoptimizations are welcome!

Link: http://lkml.kernel.org/r/20190417213413.GB26474@avx2
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
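The extracted helper presumably looks like this straightforward sketch of the shared calculation (the name make_prot is a plausible choice, treat it as an assumption):

    static int make_prot(u32 p_flags)
    {
            int prot = 0;

            if (p_flags & PF_R)
                    prot |= PROT_READ;
            if (p_flags & PF_W)
                    prot |= PROT_WRITE;
            if (p_flags & PF_X)
                    prot |= PROT_EXEC;
            return prot;
    }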
-
Alexey Dobriyan authored
Link: http://lkml.kernel.org/r/20190416202002.GB24304@avx2
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Alexey Dobriyan authored
Rewrite the

    for (...) {
            if (->p_type == PT_INTERP) {
                    ...
                    break;
            }
    }

loop into

    for (...) {
            if (->p_type != PT_INTERP)
                    continue;
            ...
            break;
    }

Link: http://lkml.kernel.org/r/20190416201906.GA24304@avx2
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Alexey Dobriyan authored
Link: http://lkml.kernel.org/r/20190314205042.GE18143@avx2
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Alexey Dobriyan authored
There is no reason for the PT_INTERP filename to linger till the end of the whole loading process.

Link: http://lkml.kernel.org/r/20190314204953.GD18143@avx2
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: Nikitas Angelinas <nikitas.angelinas@gmail.com>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Mukesh Ojha <mojha@codeaurora.org>
[nikitas.angelinas@gmail.com: fix GPF when dereferencing invalid interpreter]
Link: http://lkml.kernel.org/r/20190330140032.GA1527@vostro
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Alexey Dobriyan authored
Link: http://lkml.kernel.org/r/20190314204707.GC18143@avx2
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Andrew Morton authored
As pointed out by zoujc@lenovo.com, setup_arg_pages() already initializes current->mm->start_stack.

Link: https://bugzilla.kernel.org/show_bug.cgi?id=202881
Reported-by: <zoujc@lenovo.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Andrew Morton authored
The local variable 'ret' is unneeded and was poorly named: `ret' conventionally means "the value which this function will return".

Cc: Roman Gushchin <guro@fb.com>
Cc: Uladzislau Rezki <urezki@gmail.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Thomas Garnier <thgarnie@google.com>
Cc: Oleksiy Avramchenko <oleksiy.avramchenko@sonymobile.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Joel Fernandes <joelaf@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Tejun Heo <tj@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Rasmus Villemoes authored
The ror32 implementation (word >> shift) | (word << (32 - shift)) has undefined behaviour if shift is outside the [1, 31] range, and similarly for the 64-bit variants. Most callers pass a compile-time constant (naturally in that range), but there is a UBSAN report that these may actually be called with a shift count of 0.

Instead of special-casing that, we can make them DTRT for all values of shift while also avoiding UB. For some reason, this was already partly done for rol32 (which was well-defined for [0, 31]). gcc 8 recognizes these patterns as rotates, so for example

    __u32 rol32(__u32 word, unsigned int shift)
    {
            return (word << (shift & 31)) | (word >> ((-shift) & 31));
    }

compiles to

    0000000000000020 <rol32>:
      20:   89 f8   mov  %edi,%eax
      22:   89 f1   mov  %esi,%ecx
      24:   d3 c0   rol  %cl,%eax
      26:   c3      retq

Older compilers unfortunately do not do as well, but this only affects the small minority of users that don't pass constants.

Due to integer promotions, ro[lr]8 were already well-defined for shifts in [0, 8], and ro[lr]16 were mostly well-defined for shifts in [0, 16] (only mostly - u16 gets promoted to _signed_ int, so if bit 15 is set, word << 16 is undefined). For consistency, update those as well.

Link: http://lkml.kernel.org/r/20190410211906.2190-1-linux@rasmusvillemoes.dk
Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Reported-by: Ido Schimmel <idosch@mellanox.com>
Tested-by: Ido Schimmel <idosch@mellanox.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Cc: Vadim Pasternak <vadimp@mellanox.com>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Jacek Anaszewski <jacek.anaszewski@gmail.com>
Cc: Pavel Machek <pavel@ucw.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
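For symmetry, the ror32() counterpart follows the same pattern (a sketch matching the shape the patch gives all the variants):

    static inline __u32 ror32(__u32 word, unsigned int shift)
    {
            /* well-defined for every shift value, including 0 */
            return (word >> (shift & 31)) | (word << ((-shift) & 31));
    }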
-
Yury Norov authored
Propagate existing bitmap_parselist() tests to bitmap_parselist_user().

Link: http://lkml.kernel.org/r/20190405173211.11373-6-ynorov@marvell.com
Signed-off-by: Yury Norov <ynorov@marvell.com>
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Kees Cook <keescook@chromium.org>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Mike Travis <travis@sgi.com>
Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Cc: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Cc: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Yury Norov authored
Add tests for a non-number character, empty regions, and integer overflow.

[ynorov@marvell.com: v5]
Link: http://lkml.kernel.org/r/20190416063801.20134-5-ynorov@marvell.com
Link: http://lkml.kernel.org/r/20190405173211.11373-5-ynorov@marvell.com
Signed-off-by: Yury Norov <ynorov@marvell.com>
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Kees Cook <keescook@chromium.org>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Mike Travis <travis@sgi.com>
Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Cc: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Cc: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Yury Norov authored
test_bitmap_parselist currently uses get_cycles(), which is not implemented on some platforms, so use ktime_get() instead.

Link: http://lkml.kernel.org/r/20190405173211.11373-4-ynorov@marvell.com
Signed-off-by: Yury Norov <ynorov@marvell.com>
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Kees Cook <keescook@chromium.org>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Mike Travis <travis@sgi.com>
Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Cc: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Cc: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
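Roughly what the portable measurement looks like (a sketch; in, bmap and len stand for the test's inputs, and ktime_t is a plain signed 64-bit nanosecond count, so it can be subtracted directly):

    ktime_t time = ktime_get();

    err = bitmap_parselist(in, bmap, len);
    time = ktime_get() - time;
    pr_info("parselist: %d: input is %s, time is %lld ns\n",
            err, in, (long long)time);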
-
Yury Norov authored
Remove the __bitmap_parselist() helper and split the function into logical parts.

[ynorov@marvell.com: v5]
Link: http://lkml.kernel.org/r/20190416063801.20134-3-ynorov@marvell.com
Link: http://lkml.kernel.org/r/20190405173211.11373-3-ynorov@marvell.com
Signed-off-by: Yury Norov <ynorov@marvell.com>
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Kees Cook <keescook@chromium.org>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Mike Travis <travis@sgi.com>
Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Cc: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Cc: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Yury Norov authored
Patch series "lib: rework bitmap_parselist and tests", v5. bitmap_parselist has been evolved from a pretty simple idea for long and now lacks for refactoring. It is not structured, has nested loops and a set of opaque-named variables. Things are more complicated because bitmap_parselist() is a part of user interface, and its behavior should not change. In this patchset - bitmap_parselist_user() made a wrapper on bitmap_parselist(); - bitmap_parselist() reworked (patch 2); - time measurement in test_bitmap_parselist switched to ktime_get (patch 3); - new tests introduced (patch 4), and - bitmap_parselist_user() testing enabled with the same testset as bitmap_parselist() (patch 5). This patch (of 5): Currently we parse user data byte after byte which leads to overcomplification of parsing algorithm. The only user of bitmap_parselist_user() is not performance-critical, and so we can duplicate user data to kernel buffer and simply call bitmap_parselist(). This rework lets us unify and simplify bitmap_parselist() and bitmap_parselist_user(), which is done in the following patch. Link: http://lkml.kernel.org/r/20190405173211.11373-2-ynorov@marvell.comSigned-off-by: Yury Norov <ynorov@marvell.com> Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Kees Cook <keescook@chromium.org> Cc: Matthew Wilcox <willy@infradead.org> Cc: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp> Cc: Mike Travis <travis@sgi.com> Cc: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Andy Shevchenko authored
Integer exponentiation is used in a few places and might be needed by other call sites in the future. Move it to a common location for wider use.

Link: http://lkml.kernel.org/r/20190323172531.80025-2-andriy.shevchenko@linux.intel.com
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: Daniel Thompson <daniel.thompson@linaro.org>
Cc: Lee Jones <lee.jones@linaro.org>
Cc: Ray Jui <rjui@broadcom.com>
Cc: Thierry Reding <thierry.reding@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
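Such a helper is the classic exponentiation-by-squaring loop; a sketch of the shared implementation:

    u64 int_pow(u64 base, unsigned int exp)
    {
            u64 result = 1;

            while (exp) {
                    if (exp & 1)            /* odd exponent: fold in base */
                            result *= base;
                    exp >>= 1;
                    base *= base;           /* square for the next bit */
            }
            return result;
    }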
-
Andy Shevchenko authored
For better maintenance and expansion, move the math helpers to a separate directory. No functional change intended. Note, int_sqrt() is not used as part of lib, so it is moved to a regular obj.

Link: http://lkml.kernel.org/r/20190323172531.80025-1-andriy.shevchenko@linux.intel.com
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Mauro Carvalho Chehab <mchehab+samsung@kernel.org>
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: Thierry Reding <thierry.reding@gmail.com>
Cc: Lee Jones <lee.jones@linaro.org>
Cc: Daniel Thompson <daniel.thompson@linaro.org>
Cc: Ray Jui <rjui@broadcom.com>
[mchehab+samsung@kernel.org: fix broken doc references for div64.c and gcd.c]
Link: http://lkml.kernel.org/r/734f49bae5d4052b3c25691dfefad59bea2e5843.1555580999.git.mchehab+samsung@kernel.org
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
George Spelvin authored
CONFIG_RETPOLINE has severely degraded indirect function call performance, so it's worth putting some effort into reducing the number of times cmp() is called.

This patch avoids badly unbalanced merges on unlucky input sizes. It slightly increases the code size, but saves an average of 0.2*n calls to cmp().

x86-64 code size 739 -> 803 bytes (+64)

Unfortunately, there's not a lot of low-hanging fruit in a merge sort; it already performs only n*log2(n) - K*n + O(1) compares. The leading coefficient is already at the theoretical limit (log2(n!) corresponds to K=1.4427), so we're fighting over the linear term, and the best mergesort can do is K=1.2645, achieved when n is a power of 2.

The differences between mergesort variants appear when n is *not* a power of 2; K is a function of the fractional part of log2(n). Top-down mergesort does best of all, achieving a minimum K=1.2408, and an average (over all sizes) K=1.248. However, that requires knowing the number of entries to be sorted ahead of time, and making a full pass over the input to count it conflicts with a second performance goal, which is cache blocking.

Obviously, we have to read the entire list into L1 cache at some point, and performance is best if it fits. But if it doesn't fit, each full pass over the input causes a cache miss per element, which is undesirable.

While textbooks explain bottom-up mergesort as a succession of merging passes, practical implementations do merging in depth-first order: as soon as two lists of the same size are available, they are merged. This allows as many merge passes as possible to fit into L1; only the final few merges force cache misses. This cache-friendly depth-first merge order depends on us merging the beginning of the input as much as possible before we've even seen the end of the input (and thus know its size).

The simple eager merge pattern causes bad performance when n is just over a power of 2. If n=1028, the final merge is between 1024- and 4-element lists, which is wasteful of comparisons. (This is actually worse on average than n=1025, because a 1024:1 merge will, on average, end after 512 compares, while 1024:4 will walk 4/5 of the list.) Because of this, bottom-up mergesort achieves K < 0.5 for such sizes, and has an average (over all sizes) K of around 1. (My experiments show K=1.01, while theory predicts K=0.965.)

There are "worst-case optimal" variants of bottom-up mergesort which avoid this bad performance, but the algorithms given in the literature, such as queue-mergesort and boustrophedonic mergesort, depend on the breadth-first multi-pass structure that we are trying to avoid.

This implementation is as eager as possible while ensuring that all merge passes are at worst 1:2 unbalanced. This achieves the same average K=1.207 as queue-mergesort, which is 0.2*n better than bottom-up, and only 0.04*n behind top-down mergesort. Specifically, it defers merging two lists of size 2^k until it is known that there are 2^k additional inputs following. This ensures that the final uneven merges triggered by reaching the end of the input will be at worst 2:1. This will avoid cache misses as long as 3*2^k elements fit into the cache.

(I confess to being more than a little bit proud of how clean this code turned out. It took a lot of thinking, but the resultant inner loop is very simple and efficient.)
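A hedged sketch of the bookkeeping, modeled on my reading of the mainline lib/list_sort.c loop: pending runs are stacked via ->prev, and the lowest clear bit of the element count says when two equal-sized runs are due to merge:

    do {
            size_t bits;
            struct list_head **tail = &pending;

            /* find the lowest clear bit of count */
            for (bits = count; bits & 1; bits >>= 1)
                    tail = &(*tail)->prev;
            /* a clear bit at 2^k means two 2^k-sized runs can merge now */
            if (likely(bits)) {
                    struct list_head *a = *tail, *b = a->prev;

                    a = merge(priv, cmp, b, a);
                    a->prev = b->prev;      /* splice the merged run back */
                    *tail = a;
            }

            /* move one element from the input onto the pending stack */
            list->prev = pending;
            pending = list;
            list = list->next;
            pending->next = NULL;
            count++;
    } while (list);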
Refs:
  Bottom-up Mergesort: A Detailed Analysis
  Wolfgang Panny, Helmut Prodinger
  Algorithmica 14(4):340--354, October 1995
  https://doi.org/10.1007/BF01294131
  https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.6.5260

  The cost distribution of queue-mergesort, optimal mergesorts, and power-of-two rules
  Wei-Mei Chen, Hsien-Kuei Hwang, Gen-Huey Chen
  Journal of Algorithms 30(2):423--448, February 1999
  https://doi.org/10.1006/jagm.1998.0986
  https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.4.5380

  Queue-Mergesort
  Mordecai J. Golin, Robert Sedgewick
  Information Processing Letters 48(5):253--259, 10 December 1993
  https://doi.org/10.1016/0020-0190(93)90088-q
  https://sci-hub.tw/10.1016/0020-0190(93)90088-Q

Feedback from Rasmus Villemoes <linux@rasmusvillemoes.dk>.

Link: http://lkml.kernel.org/r/fd560853cc4dca0d0f02184ffa888b4c1be89abc.1552704200.git.lkml@sdf.org
Signed-off-by: George Spelvin <lkml@sdf.org>
Acked-by: Andrey Abramov <st5pub@yandex.ru>
Acked-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: Daniel Wagner <daniel.wagner@siemens.com>
Cc: Dave Chinner <dchinner@redhat.com>
Cc: Don Mullis <don.mullis@gmail.com>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
George Spelvin authored
Rather than a fixed-size array of pending sorted runs, use the ->prev links to keep track of things. This reduces stack usage, eliminates some ugly overflow handling, and reduces the code size.

Also:
 * merge() no longer needs to handle NULL inputs, so simplify.
 * The same applies to merge_and_restore_back_links(), which is renamed to the less ponderous merge_final(). (It's a static helper function, so we don't need a super-descriptive name; comments will do.)
 * Document the actual return value requirements on the (*cmp)() function; some callers are already using this feature.

x86-64 code size 1086 -> 739 bytes (-347)

(Yes, I see checkpatch complaining about no space after comma in "__attribute__((nonnull(2,3,4,5)))". Checkpatch is wrong.)

Feedback from Rasmus Villemoes, Andy Shevchenko and Geert Uytterhoeven.

[akpm@linux-foundation.org: remove __pure usage due to mysterious warning]
Link: http://lkml.kernel.org/r/f63c410e0ff76009c9b58e01027e751ff7fdb749.1552704200.git.lkml@sdf.org
Signed-off-by: George Spelvin <lkml@sdf.org>
Acked-by: Andrey Abramov <st5pub@yandex.ru>
Acked-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Daniel Wagner <daniel.wagner@siemens.com>
Cc: Dave Chinner <dchinner@redhat.com>
Cc: Don Mullis <don.mullis@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
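With both inputs known to be non-empty, NULL-terminated and linked only through ->next, merge() reduces to a sketch like this (ties take 'a' first, which is what keeps the sort stable; ->prev is rebuilt later by merge_final()):

    static struct list_head *merge(void *priv,
                                   int (*cmp)(void *, struct list_head *,
                                              struct list_head *),
                                   struct list_head *a, struct list_head *b)
    {
            struct list_head *head, **tail = &head;

            for (;;) {
                    if (cmp(priv, a, b) <= 0) {
                            *tail = a;
                            tail = &a->next;
                            a = a->next;
                            if (!a) {
                                    *tail = b;      /* append rest of b */
                                    break;
                            }
                    } else {
                            *tail = b;
                            tail = &b->next;
                            b = b->next;
                            if (!b) {
                                    *tail = a;      /* append rest of a */
                                    break;
                            }
                    }
            }
            return head;
    }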
-
George Spelvin authored
Similar to what's being done in the net code, this takes advantage of the fact that most invocations use only a few common swap functions, and replaces indirect calls to them with (highly predictable) conditional branches. (The downside, of course, is that if you *do* use a custom swap function, there are a few extra predicted branches on the code path.)

This actually *shrinks* the x86-64 code, because it inlines the various swap functions inside do_swap, eliding function prologues & epilogues.

x86-64 code size 767 -> 703 bytes (-64)

Link: http://lkml.kernel.org/r/d10c5d4b393a1847f32f5b26f4bbaa2857140e1e.1552704200.git.lkml@sdf.org
Signed-off-by: George Spelvin <lkml@sdf.org>
Acked-by: Andrey Abramov <st5pub@yandex.ru>
Acked-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: Daniel Wagner <daniel.wagner@siemens.com>
Cc: Dave Chinner <dchinner@redhat.com>
Cc: Don Mullis <don.mullis@gmail.com>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
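A sketch of the dispatch; the sentinel values and helper names follow my reading of lib/sort.c, so treat them as assumptions:

    #define SWAP_WORDS_64 ((swap_func_t)0)  /* sentinels, never real ptrs */
    #define SWAP_WORDS_32 ((swap_func_t)1)
    #define SWAP_BYTES    ((swap_func_t)2)

    static void do_swap(void *a, void *b, size_t size, swap_func_t swap_func)
    {
            if (swap_func == SWAP_WORDS_64)
                    swap_words_64(a, b, size);      /* inlined: no call */
            else if (swap_func == SWAP_WORDS_32)
                    swap_words_32(a, b, size);
            else if (swap_func == SWAP_BYTES)
                    swap_bytes(a, b, size);
            else
                    swap_func(a, b, (int)size);     /* custom: indirect */
    }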
-
George Spelvin authored
This uses fewer comparisons than the previous code (approaching half as many for large random inputs), but produces identical results; it actually performs the exact same series of swap operations. Specifically, it reduces the average number of compares from 2*n*log2(n) - 3*n + o(n) to n*log2(n) + 0.37*n + o(n). This is still 1.63*n worse than glibc qsort(), which manages n*log2(n) - 1.26*n, but at least the leading coefficient is correct.

Standard heapsort, when sifting down, performs two comparisons per level: one to find the greater child, and a second to see if the current node should be exchanged with that child. Bottom-up heapsort observes that it's better to postpone the second comparison and search for the leaf where -infinity would be sent to, then search back *up* for the current node's destination. Since sifting down usually proceeds to the leaf level (that's where half the nodes are), this does O(1) second comparisons rather than log2(n). That saves a lot of (expensive since Spectre) indirect function calls.

The one time it's worse than the previous code is if there are large numbers of duplicate keys, when the top-down algorithm is O(n) and bottom-up is O(n log n). For distinct keys, it's provably always better, doing 1.5*n*log2(n) + O(n) in the worst case.

(The code is not significantly more complex. This patch also merges the heap-building and -extracting sift-down loops, resulting in a net code size savings.)

x86-64 code size 885 -> 767 bytes (-118)

(I see the checkpatch complaint about "else if (n -= size)". The alternative is significantly uglier.)

Link: http://lkml.kernel.org/r/2de8348635a1a421a72620677898c7fd5bd4b19d.1552704200.git.lkml@sdf.org
Signed-off-by: George Spelvin <lkml@sdf.org>
Acked-by: Andrey Abramov <st5pub@yandex.ru>
Acked-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: Daniel Wagner <daniel.wagner@siemens.com>
Cc: Dave Chinner <dchinner@redhat.com>
Cc: Don Mullis <don.mullis@gmail.com>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
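The technique in miniature, on a plain int array (a sketch of bottom-up sift-down under those assumptions, not the kernel's generic implementation):

    /* Sift a[root] down in a max-heap of n elements, bottom-up style. */
    static void sift_down(int *a, size_t n, size_t root)
    {
            int v = a[root];
            size_t i = root, child;

            /* 1: descend to a leaf along greater children: one compare
             *    per level instead of two */
            while ((child = 2 * i + 1) < n) {
                    if (child + 1 < n && a[child + 1] > a[child])
                            child++;
                    i = child;
            }
            /* 2: climb back up to where v actually belongs; usually a
             *    step or two, so O(1) compares on average */
            while (a[i] < v)
                    i = (i - 1) / 2;
            /* 3: rotate v into place, shifting the path up one level */
            while (i > root) {
                    int t = a[i];

                    a[i] = v;
                    v = t;
                    i = (i - 1) / 2;
            }
            a[root] = v;
    }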
-
George Spelvin authored
Patch series "lib/sort & lib/list_sort: faster and smaller", v2. Because CONFIG_RETPOLINE has made indirect calls much more expensive, I thought I'd try to reduce the number made by the library sort functions. The first three patches apply to lib/sort.c. Patch #1 is a simple optimization. The built-in swap has special cases for aligned 4- and 8-byte objects. But those are almost never used; most calls to sort() work on larger structures, which fall back to the byte-at-a-time loop. This generalizes them to aligned *multiples* of 4 and 8 bytes. (If nothing else, it saves an awful lot of energy by not thrashing the store buffers as much.) Patch #2 grabs a juicy piece of low-hanging fruit. I agree that nice simple solid heapsort is preferable to more complex algorithms (sorry, Andrey), but it's possible to implement heapsort with far fewer comparisons (50% asymptotically, 25-40% reduction for realistic sizes) than the way it's been done up to now. And with some care, the code ends up smaller, as well. This is the "big win" patch. Patch #3 adds the same sort of indirect call bypass that has been added to the net code of late. The great majority of the callers use the builtin swap functions, so replace the indirect call to sort_func with a (highly preditable) series of if() statements. Rather surprisingly, this decreased code size, as the swap functions were inlined and their prologue & epilogue code eliminated. lib/list_sort.c is a bit trickier, as merge sort is already close to optimal, and we don't want to introduce triumphs of theory over practicality like the Ford-Johnson merge-insertion sort. Patch #4, without changing the algorithm, chops 32% off the code size and removes the part[MAX_LIST_LENGTH+1] pointer array (and the corresponding upper limit on efficiently sortable input size). Patch #5 improves the algorithm. The previous code is already optimal for power-of-two (or slightly smaller) size inputs, but when the input size is just over a power of 2, there's a very unbalanced final merge. There are, in the literature, several algorithms which solve this, but they all depend on the "breadth-first" merge order which was replaced by commit 835cc0c8 with a more cache-friendly "depth-first" order. Some hard thinking came up with a depth-first algorithm which defers merges as little as possible while avoiding bad merges. This saves 0.2*n compares, averaged over all sizes. The code size increase is minimal (64 bytes on x86-64, reducing the net savings to 26%), but the comments expanded significantly to document the clever algorithm. TESTING NOTES: I have some ugly user-space benchmarking code which I used for testing before moving this code into the kernel. Shout if you want a copy. I'm running this code right now, with CONFIG_TEST_SORT and CONFIG_TEST_LIST_SORT, but I confess I haven't rebooted since the last round of minor edits to quell checkpatch. I figure there will be at least one round of comments and final testing. This patch (of 5): Rather than having special-case swap functions for 4- and 8-byte objects, special-case aligned multiples of 4 or 8 bytes. This speeds up most users of sort() by avoiding fallback to the byte copy loop. Despite what ca96ab85 ("lib/sort: Add 64 bit swap function") claims, very few users of sort() sort pointers (or pointer-sized objects); most sort structures containing at least two words. (E.g. drivers/acpi/fan.c:acpi_fan_get_fps() sorts an array of 40-byte struct acpi_fan_fps.) 
The functions also got renamed to reflect the fact that they support multiple words. In the great tradition of bikeshedding, the names were by far the most contentious issue during review of this patch series. x86-64 code size 872 -> 886 bytes (+14) With feedback from Andy Shevchenko, Rasmus Villemoes and Geert Uytterhoeven. Link: http://lkml.kernel.org/r/f24f932df3a7fa1973c1084154f1cea596bcf341.1552704200.git.lkml@sdf.orgSigned-off-by: George Spelvin <lkml@sdf.org> Acked-by: Andrey Abramov <st5pub@yandex.ru> Acked-by: Rasmus Villemoes <linux@rasmusvillemoes.dk> Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Daniel Wagner <daniel.wagner@siemens.com> Cc: Don Mullis <don.mullis@gmail.com> Cc: Dave Chinner <dchinner@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
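A sketch of the generalization, close to the shape the patch gives lib/sort.c (exact names are my assumption):

    /* true if base and size are both multiples of the power-of-2 align */
    static bool is_aligned(const void *base, size_t size, unsigned char align)
    {
            unsigned char lsbits = (unsigned char)size;

            (void)base;
    #ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
            lsbits |= (unsigned char)(uintptr_t)base;
    #endif
            return (lsbits & (align - 1)) == 0;
    }

    /* swap two objects that are any multiple of 4 bytes, not just 4 */
    static void swap_words_32(void *a, void *b, size_t n)
    {
            do {
                    u32 t = *(u32 *)(a + (n -= 4));
                    *(u32 *)(a + n) = *(u32 *)(b + n);
                    *(u32 *)(b + n) = t;
            } while (n);
    }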
-
Davidlohr Bueso authored
This is a lot more appropriate than PI_LIST, which in the kernel one would assume has to do with priority inheritance -- it does not. Furthermore, futexes make use of plists, so that name could be even more confusing, albeit this is a debug config option.

Link: http://lkml.kernel.org/r/20190317185434.1626-1-dave@stgolabs.net
Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Rasmus Villemoes authored
The bitmap_remap, _bitremap, _onto and _fold functions are only used, via their node_ wrappers, in mm/mempolicy.c, which is only built for CONFIG_NUMA. The helper bitmap_ord_to_pos used by these functions is global, but its only external caller is node_random() in lib/nodemask.c, which is also guarded by CONFIG_NUMA.

For !CONFIG_NUMA:

    add/remove: 0/6 grow/shrink: 0/0 up/down: 0/-621 (-621)
    Function            old   new  delta
    bitmap_pos_to_ord    20     -    -20
    bitmap_ord_to_pos    70     -    -70
    bitmap_bitremap      81     -    -81
    bitmap_fold         113     -   -113
    bitmap_onto         123     -   -123
    bitmap_remap        214     -   -214
    Total: Before=4776, After=4155, chg -13.00%

Link: http://lkml.kernel.org/r/20190329205353.6010-2-linux@rasmusvillemoes.dk
Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Cc: Andy Shevchenko <andy.shevchenko@gmail.com>
Cc: Yury Norov <yury.norov@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Rasmus Villemoes authored
AFAICT, there have never been any callers of these functions outside mm/mempolicy.c (via their nodemask.h wrappers). In particular, no modular code has ever used them, and given their somewhat exotic semantics, I highly doubt they will ever find such a use. In any case, there is no need to export them currently.

Link: http://lkml.kernel.org/r/20190329205353.6010-1-linux@rasmusvillemoes.dk
Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Cc: Andy Shevchenko <andy.shevchenko@gmail.com>
Cc: Yury Norov <yury.norov@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-