- 07 Apr, 2020 40 commits
-
-
Gustavo A. R. Silva authored
The current codebase makes use of the zero-length array language extension to the C90 standard, but the preferred mechanism to declare variable-length types such as these is a flexible array member[1][2], introduced in C99: struct foo { int stuff; struct boo array[]; }; By making use of the mechanism above, we will get a compiler warning in case the flexible array does not occur last in the structure, which will help us prevent some kinds of undefined behavior bugs from being inadvertently introduced[3] to the codebase from now on. This issue was found with the help of Coccinelle. [1] https://gcc.gnu.org/onlinedocs/gcc/Zero-Length.html [2] https://github.com/KSPP/linux/issues/21 [3] commit 76497732 ("cxgb3/l2t: Fix undefined behaviour") Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Link: http://lkml.kernel.org/r/20200211205813.GA25602@embeddedor Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
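A minimal standalone sketch of the conversion these patches perform (struct names here are illustrative, not taken from the patch); the allocation pattern is unchanged, but only the flexible-array form lets the compiler warn if the array is not the last member:

    #include <stdlib.h>

    struct boo { int x; };

    /* Old style: GCC zero-length array extension. */
    struct foo_old {
            int stuff;
            struct boo array[0];
    };

    /* Preferred style: C99 flexible array member (must be last). */
    struct foo_new {
            int stuff;
            struct boo array[];
    };

    static struct foo_new *foo_alloc(size_t n)
    {
            /* The trailing array contributes no size of its own,
             * so the allocation is identical for both forms. */
            return malloc(sizeof(struct foo_new) + n * sizeof(struct boo));
    }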
-
Gustavo A. R. Silva authored
The current codebase makes use of the zero-length array language extension to the C90 standard, but the preferred mechanism to declare variable-length types such as these is a flexible array member[1][2], introduced in C99: struct foo { int stuff; struct boo array[]; }; By making use of the mechanism above, we will get a compiler warning in case the flexible array does not occur last in the structure, which will help us prevent some kinds of undefined behavior bugs from being inadvertently introduced[3] to the codebase from now on. This issue was found with the help of Coccinelle. [1] https://gcc.gnu.org/onlinedocs/gcc/Zero-Length.html [2] https://github.com/KSPP/linux/issues/21 [3] commit 76497732 ("cxgb3/l2t: Fix undefined behaviour") Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Link: http://lkml.kernel.org/r/20200211205620.GA24694@embeddedor Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Gustavo A. R. Silva authored
The current codebase makes use of the zero-length array language extension to the C90 standard, but the preferred mechanism to declare variable-length types such as these is a flexible array member[1][2], introduced in C99: struct foo { int stuff; struct boo array[]; }; By making use of the mechanism above, we will get a compiler warning in case the flexible array does not occur last in the structure, which will help us prevent some kinds of undefined behavior bugs from being inadvertently introduced[3] to the codebase from now on. This issue was found with the help of Coccinelle. [1] https://gcc.gnu.org/onlinedocs/gcc/Zero-Length.html [2] https://github.com/KSPP/linux/issues/21 [3] commit 76497732 ("cxgb3/l2t: Fix undefined behaviour") Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Link: http://lkml.kernel.org/r/20200211205119.GA21234@embeddedor Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Konstantin Khlebnikov authored
file_path=<path>    defines the file or directory to open
lock_inode=Y        set lock_rwsem_ptr to inode->i_rwsem
lock_mapping=Y      set lock_rwsem_ptr to mapping->i_mmap_rwsem
lock_sb_umount=Y    set lock_rwsem_ptr to sb->s_umount

This gives a safe and simple way to see how the system reacts to contention on common VFS locks, and how syscalls depend on them directly or indirectly. For example, to block s_umount for 60 seconds:

  # modprobe test_lockup file_path=. lock_sb_umount time_secs=60 state=S

This is useful for checking/testing scalability issues like this one: https://lore.kernel.org/lkml/158497590858.7371.9311902565121473436.stgit@buzz/ Signed-off-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Cc: Colin Ian King <colin.king@canonical.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Guenter Roeck <linux@roeck-us.net> Cc: Kees Cook <keescook@chromium.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Petr Mladek <pmladek@suse.com> Cc: Sasha Levin <sashal@kernel.org> Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com> Cc: Steven Rostedt <rostedt@goodmis.org> Link: http://lkml.kernel.org/r/158498153964.5621.83061779039255681.stgit@buzz Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Colin Ian King authored
There is a spelling mistake in a pr_notice message. Fix it. Signed-off-by: Colin Ian King <colin.king@canonical.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Guenter Roeck <linux@roeck-us.net> Cc: Kees Cook <keescook@chromium.org> Cc: Konstantin Khlebnikov <khlebnikov@yandex-team.ru> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Petr Mladek <pmladek@suse.com> Cc: Sasha Levin <sashal@kernel.org> Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com> Cc: Steven Rostedt <rostedt@goodmis.org> Link: http://lkml.kernel.org/r/20200221155145.79522-1-colin.king@canonical.comSigned-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Konstantin Khlebnikov authored
CONFIG_TEST_LOCKUP=m adds a module "test_lockup" that helps to make sure that watchdogs and lockup detectors are working properly. Depending on module parameters, test_lockup can emulate a soft or hard lockup or a "hung task", hold an arbitrary lock, or allocate a bunch of pages. It can also generate a series of lockups with cooling-down periods, so it can be used as a "ping" for locks or the page allocator. The loop checks for signals between iterations, so it can be stopped with ^C.

  # modinfo test_lockup
  ...
  parm:  time_secs:lockup time in seconds, default 0 (uint)
  parm:  time_nsecs:nanoseconds part of lockup time, default 0 (uint)
  parm:  cooldown_secs:cooldown time between iterations in seconds, default 0 (uint)
  parm:  cooldown_nsecs:nanoseconds part of cooldown, default 0 (uint)
  parm:  iterations:lockup iterations, default 1 (uint)
  parm:  all_cpus:trigger lockup at all cpus at once (bool)
  parm:  state:wait in 'R' running (default), 'D' uninterruptible, 'K' killable, 'S' interruptible state (charp)
  parm:  use_hrtimer:use high-resolution timer for sleeping (bool)
  parm:  iowait:account sleep time as iowait (bool)
  parm:  lock_read:lock read-write locks for read (bool)
  parm:  lock_single:acquire locks only at one cpu (bool)
  parm:  reacquire_locks:release and reacquire locks/irq/preempt between iterations (bool)
  parm:  touch_softlockup:touch soft-lockup watchdog between iterations (bool)
  parm:  touch_hardlockup:touch hard-lockup watchdog between iterations (bool)
  parm:  call_cond_resched:call cond_resched() between iterations (bool)
  parm:  measure_lock_wait:measure lock wait time (bool)
  parm:  lock_wait_threshold:print lock wait time longer than this in nanoseconds, default off (ulong)
  parm:  disable_irq:disable interrupts: generate hard-lockups (bool)
  parm:  disable_softirq:disable bottom-half irq handlers (bool)
  parm:  disable_preempt:disable preemption: generate soft-lockups (bool)
  parm:  lock_rcu:grab rcu_read_lock: generate rcu stalls (bool)
  parm:  lock_mmap_sem:lock mm->mmap_sem: block procfs interfaces (bool)
  parm:  lock_rwsem_ptr:lock rw_semaphore at address (ulong)
  parm:  lock_mutex_ptr:lock mutex at address (ulong)
  parm:  lock_spinlock_ptr:lock spinlock at address (ulong)
  parm:  lock_rwlock_ptr:lock rwlock at address (ulong)
  parm:  alloc_pages_nr:allocate and free pages under locks (uint)
  parm:  alloc_pages_order:page order to allocate (uint)
  parm:  alloc_pages_gfp:allocate pages with this gfp_mask, default GFP_KERNEL (uint)
  parm:  alloc_pages_atomic:allocate pages with GFP_ATOMIC (bool)
  parm:  reallocate_pages:free and allocate pages between iterations (bool)

The parameters for locking by address are unsafe and taint the kernel. With CONFIG_DEBUG_SPINLOCK=y they at least check the magic values of embedded spinlocks.
Examples:

task hang in D-state:
  modprobe test_lockup time_secs=1 iterations=60 state=D

task hang in io-wait D-state:
  modprobe test_lockup time_secs=1 iterations=60 state=D iowait

softlockup:
  modprobe test_lockup time_secs=1 iterations=60 state=R

hardlockup:
  modprobe test_lockup time_secs=1 iterations=60 state=R disable_irq

system-wide hardlockup:
  modprobe test_lockup time_secs=1 iterations=60 state=R \
    disable_irq all_cpus

rcu stall:
  modprobe test_lockup time_secs=1 iterations=60 state=R \
    lock_rcu touch_softlockup

lock mmap_sem / block procfs interfaces:
  modprobe test_lockup time_secs=1 iterations=60 state=S lock_mmap_sem

lock tasklist_lock for read / block forks:
  TASKLIST_LOCK=$(awk '$3 == "tasklist_lock" {print "0x"$1}' /proc/kallsyms)
  modprobe test_lockup time_secs=1 iterations=60 state=R \
    disable_irq lock_read lock_rwlock_ptr=$TASKLIST_LOCK

lock namespace_sem / block vfs mount operations:
  NAMESPACE_SEM=$(awk '$3 == "namespace_sem" {print "0x"$1}' /proc/kallsyms)
  modprobe test_lockup time_secs=1 iterations=60 state=S \
    lock_rwsem_ptr=$NAMESPACE_SEM

lock cgroup mutex / block cgroup operations:
  CGROUP_MUTEX=$(awk '$3 == "cgroup_mutex" {print "0x"$1}' /proc/kallsyms)
  modprobe test_lockup time_secs=1 iterations=60 state=S \
    lock_mutex_ptr=$CGROUP_MUTEX

ping cgroup_mutex every second and measure maximum lock wait time:
  modprobe test_lockup cooldown_secs=1 iterations=60 state=S \
    lock_mutex_ptr=$CGROUP_MUTEX reacquire_locks measure_lock_wait

[linux@roeck-us.net: rename disable_irq to fix build error] Link: http://lkml.kernel.org/r/20200317133614.23152-1-linux@roeck-us.net Signed-off-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru> Signed-off-by: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Cc: Sasha Levin <sashal@kernel.org> Cc: Petr Mladek <pmladek@suse.com> Cc: Kees Cook <keescook@chromium.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com> Cc: Dmitry Monakhov <dmtrmonakhov@yandex-team.ru> Cc: Colin Ian King <colin.king@canonical.com> Cc: Guenter Roeck <linux@roeck-us.net> Link: http://lkml.kernel.org/r/158132859146.2797.525923171323227836.stgit@buzz Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Josh Poimboeuf authored
With CONFIG_CC_OPTIMIZE_FOR_SIZE, objtool reports: drivers/gpu/drm/i915/gem/i915_gem_execbuffer.o: warning: objtool: i915_gem_execbuffer2_ioctl()+0x5b7: call to gen8_canonical_addr() with UACCESS enabled This means i915_gem_execbuffer2_ioctl() is calling gen8_canonical_addr() from the user_access_begin/end critical region (i.e., with SMAP disabled). While it's probably harmless in this case, in general we like to avoid extra function calls in SMAP-disabled regions because they can open up inadvertent security holes. Fix the warning by changing the sign extension helpers to __always_inline. This convinces GCC to inline gen8_canonical_addr(). The sign extension functions are trivial anyway, so it makes sense to always inline them. With my test optimize-for-size-based config, this actually shrinks the text size of i915_gem_execbuffer.o by 45 bytes -- and no change for vmlinux. Reported-by: Randy Dunlap <rdunlap@infradead.org> Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Chris Wilson <chris@chris-wilson.co.uk> Link: http://lkml.kernel.org/r/740179324b2b18b750b16295c48357f00b5fa9ed.1582982020.git.jpoimboe@redhat.com Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
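A sketch of the kind of helper involved, modelled on sign_extend64() from include/linux/bitops.h; standard integer types and a local __always_inline definition are used here so the snippet stands alone, and the exact kernel source may differ slightly:

    #include <stdint.h>

    #define __always_inline inline __attribute__((__always_inline__))

    /*
     * Marking the helper __always_inline keeps GCC from emitting an
     * out-of-line call to it from inside a user_access_begin()/
     * user_access_end() (SMAP-disabled) region.
     */
    static __always_inline int64_t sign_extend64(uint64_t value, int index)
    {
            uint8_t shift = 63 - index;     /* index is the sign-bit position */

            return (int64_t)(value << shift) >> shift;
    }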
-
Joe Perches authored
The MAINTAINERS file header has never shown a preferred order for the section entries but scripts/parse-maintainers.pl added a preferred order with commit 61f74164 ("parse-maintainers: Add section pattern sorting") Commit 5cdbec10 ("parse-maintainers: Do not sort section content by default") changed the preferred order to be a bit more sensible. Update the MAINTAINERS section description block to use this preferred section entry ordering. Add a slightly better description for the N: entry too. Signed-off-by: Joe Perches <joe@perches.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Reviewed-by: Andy Shevchenko <andy.shevchenko@gmail.com> Link: http://lkml.kernel.org/r/5aa5aad6fb1678230c260337dc066cd449a2bf32.camel@perches.comSigned-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Vegard Nossum authored
compiletime_assert() uses __LINE__ to create a unique function name. This means that if you have more than one BUILD_BUG_ON() in the same source line (which can happen if they appear e.g. in a macro), then the error message from the compiler might output the wrong condition. For this source file:

  #include <linux/build_bug.h>

  #define macro() \
      BUILD_BUG_ON(1); \
      BUILD_BUG_ON(0);

  void foo()
  {
      macro();
  }

gcc would output:

  ./include/linux/compiler.h:350:38: error: call to `__compiletime_assert_9' declared with attribute error: BUILD_BUG_ON failed: 0
    _compiletime_assert(condition, msg, __compiletime_assert_, __LINE__)

However, it was not the BUILD_BUG_ON(0) that failed, so it should say 1 instead of 0. With this patch, we use __COUNTER__ instead of __LINE__, so each BUILD_BUG_ON() gets a different function name and the correct condition is printed:

  ./include/linux/compiler.h:350:38: error: call to `__compiletime_assert_0' declared with attribute error: BUILD_BUG_ON failed: 1
    _compiletime_assert(condition, msg, __compiletime_assert_, __COUNTER__)

Signed-off-by: Vegard Nossum <vegard.nossum@oracle.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Reviewed-by: Masahiro Yamada <yamada.masahiro@socionext.com> Reviewed-by: Daniel Santos <daniel.santos@pobox.com> Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk> Cc: Ian Abbott <abbotti@mev.co.uk> Cc: Joe Perches <joe@perches.com> Link: http://lkml.kernel.org/r/20200331112637.25047-1-vegard.nossum@oracle.com Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
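A simplified, standalone sketch of the mechanism (the real macros live in include/linux/compiler.h and include/linux/build_bug.h; names and details are abbreviated here): each expansion now gets a unique __compiletime_assert_<N> symbol from __COUNTER__, so two assertions on one source line no longer share one error function.

    /* Sketch only -- simplified from the kernel headers. */
    #define __compiletime_assert(condition, msg, prefix, suffix)           \
            do {                                                           \
                    extern void prefix##suffix(void)                       \
                            __attribute__((__error__(msg)));               \
                    if (!(condition))                                      \
                            prefix##suffix();                              \
            } while (0)

    #define _compiletime_assert(condition, msg, prefix, suffix) \
            __compiletime_assert(condition, msg, prefix, suffix)

    /* __COUNTER__ increments on every expansion; __LINE__ does not. */
    #define compiletime_assert(condition, msg) \
            _compiletime_assert(condition, msg, __compiletime_assert_, __COUNTER__)

    #define BUILD_BUG_ON(condition) \
            compiletime_assert(!(condition), "BUILD_BUG_ON failed: " #condition)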
-
Masahiro Yamada authored
Commit ac7c3e4f ("compiler: enable CONFIG_OPTIMIZE_INLINING forcibly") made this option always-on. We released v5.4 and v5.5 including that commit. Remove the CONFIG option and clean up the code now. Signed-off-by: Masahiro Yamada <masahiroy@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Reviewed-by: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com> Reviewed-by: Nathan Chancellor <natechancellor@gmail.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Borislav Petkov <bp@alien8.de> Cc: David Miller <davem@davemloft.net> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/20200220110807.32534-2-masahiroy@kernel.org Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Masahiro Yamada authored
The code, #undef CONFIG_OPTIMIZE_INLINING, is not working as expected because <linux/compiler_types.h> is parsed before vclock_gettime.c since 28128c61 ("kconfig.h: Include compiler types to avoid missed struct attributes"). Since then, <linux/compiler_types.h> is included really early by using the '-include' option. So, you cannot negate the decision of <linux/compiler_types.h> in this way. You can confirm it by checking the pre-processed code, like this: $ make arch/x86/entry/vdso/vdso32/vclock_gettime.i There is no difference with/without CONFIG_CC_OPTIMIZE_FOR_SIZE. It is about two years since 28128c61. Nobody has reported a problem (or, nobody has even noticed the fact that this code is not working). It is ugly and unreliable to attempt to undefine a CONFIG option from C files, and anyway the inlining heuristic is up to the compiler. Just remove the broken code. Signed-off-by: Masahiro Yamada <masahiroy@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Reviewed-by: Nathan Chancellor <natechancellor@gmail.com> Acked-by: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Masahiro Yamada <masahiroy@kernel.org> Cc: Borislav Petkov <bp@alien8.de> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: David Miller <davem@davemloft.net> Link: http://lkml.kernel.org/r/20200220110807.32534-1-masahiroy@kernel.orgSigned-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Nathan Chancellor authored
Clang warns:

  ../kernel/extable.c:37:52: warning: array comparison always evaluates to a constant [-Wtautological-compare]
          if (main_extable_sort_needed && __stop___ex_table > __start___ex_table) {
                                                            ^
  1 warning generated.

These are not true arrays, they are linker-defined symbols, which are just addresses. Using the address-of operator silences the warning and does not change the resulting assembly with either clang/ld.lld or gcc/ld (tested with diff + objdump -Dr). Suggested-by: Nick Desaulniers <ndesaulniers@google.com> Signed-off-by: Nathan Chancellor <natechancellor@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Reviewed-by: Andrew Morton <akpm@linux-foundation.org> Link: https://github.com/ClangBuiltLinux/linux/issues/892 Link: http://lkml.kernel.org/r/20200219202036.45702-1-natechancellor@gmail.com Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
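One possible shape of the change, sketched with simplified declarations (the real exception_table_entry layout and the exact hunk in kernel/extable.c may differ):

    struct exception_table_entry { unsigned long insn, fixup; };

    /* Linker-defined section boundary symbols, declared as arrays. */
    extern struct exception_table_entry __start___ex_table[];
    extern struct exception_table_entry __stop___ex_table[];
    extern int main_extable_sort_needed;
    void sort_extable(struct exception_table_entry *start,
                      struct exception_table_entry *finish);

    void sort_main_extable(void)
    {
            /*
             * Comparing the array names directly is what trips clang's
             * -Wtautological-compare; comparing their addresses compiles
             * to the same code and keeps clang quiet.
             */
            if (main_extable_sort_needed &&
                &__stop___ex_table[0] > &__start___ex_table[0])
                    sort_extable(__start___ex_table, __stop___ex_table);
    }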
-
Michal Simek authored
Generated files are also checked by sparse, so add a newline to remove the sparse (C=1) warning. The issue was found on Microblaze and reported like this: ./arch/microblaze/include/generated/uapi/asm/unistd_32.h:438:45: warning: no newline at end of file MIPS and PowerPC already have the newline, but let's align with the style used by m68k. Signed-off-by: Michal Simek <michal.simek@xilinx.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Reviewed-by: Stefan Asserhall <stefan.asserhall@xilinx.com> Acked-by: Max Filippov <jcmvbkbc@gmail.com> (xtensa) Cc: Arnd Bergmann <arnd@arndb.de> Cc: Max Filippov <jcmvbkbc@gmail.com> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Chris Zankel <chris@zankel.net> Cc: David S. Miller <davem@davemloft.net> Cc: Fenghua Yu <fenghua.yu@intel.com> Cc: Helge Deller <deller@gmx.de> Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru> Cc: James Bottomley <James.Bottomley@HansenPartnership.com> Cc: Matt Turner <mattst88@gmail.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Paul Burton <paulburton@kernel.org> Cc: Paul Mackerras <paulus@samba.org> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Rich Felker <dalias@libc.org> Cc: Richard Henderson <rth@twiddle.net> Cc: Tony Luck <tony.luck@intel.com> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Link: http://lkml.kernel.org/r/4d32ab4e1fb2edb691d2e1687e8fb303c09fd023.1581504803.git.michal.simek@xilinx.com Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Matthew Wilcox (Oracle) authored
It's clearer to just put this inline. Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org> Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Link: http://lkml.kernel.org/r/20200317193201.9924-5-adobriyan@gmail.comSigned-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Matthew Wilcox (Oracle) authored
The process maps file was the only user of version (introduced back in 2005). Now that it uses ppos instead, we can remove it. Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org> Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Link: http://lkml.kernel.org/r/20200317193201.9924-4-adobriyan@gmail.comSigned-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Matthew Wilcox (Oracle) authored
The ppos is a private cursor, just like m->version. Use the canonical cursor, not a special one. Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org> Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Link: http://lkml.kernel.org/r/20200317193201.9924-3-adobriyan@gmail.comSigned-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Matthew Wilcox (Oracle) authored
Instead of setting m->version in the show method, set it in m_next(), where it should be. Also remove the fallback code for failing to find a vma, or version being zero. Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org> Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Link: http://lkml.kernel.org/r/20200317193201.9924-2-adobriyan@gmail.comSigned-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Matthew Wilcox (Oracle) authored
Instead of calling vma_stop() from m_start() and m_next(), do its work in m_stop(). Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org> Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Link: http://lkml.kernel.org/r/20200317193201.9924-1-adobriyan@gmail.comSigned-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Alexey Dobriyan authored
top(1) reads all /proc/*/statm files, but kernel threads will always have zeros there. Print those zeroes directly without going through seq_put_decimal_ull(). This speeds up reading /proc/2/statm (which is kthreadd) by about 3%. My system has more kernel threads than normal processes after booting KDE. Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Link: http://lkml.kernel.org/r/20200307154435.GA2788@avx2 Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Alexey Dobriyan authored
Now that "struct proc_ops" exists, we can start putting there stuff which could not fly with VFS "struct file_operations"... Most of the fs/proc/inode.c file is dedicated to making open/read/.../close reliable in the event of disappearing /proc entries, which usually happens when a module is being removed. Files like /proc/cpuinfo which never disappear simply do not need such protection. Save 2 atomic ops, 1 allocation and 1 free per open/read/close sequence for such "permanent" files. Enable the "permanent" flag for:

  /proc/cpuinfo
  /proc/kmsg
  /proc/modules
  /proc/slabinfo
  /proc/stat
  /proc/sysvipc/*
  /proc/swaps

More will come once I figure out a foolproof way to prevent module authors from marking their stuff "permanent" for performance reasons when it is not. This should help with scalability: the benchmark is "read /proc/cpuinfo R times by N threads scattered over the system".

  N       R       t, s (before)   t, s (after)
  -----------------------------------------------------
  64      4096    1.582458        1.530502        -3.2%
  256     4096    6.371926        6.125168        -3.9%
  1024    4096    25.64888        24.47528        -4.6%

Benchmark source:

  #include <chrono>
  #include <iostream>
  #include <thread>
  #include <vector>

  #include <sys/types.h>
  #include <sys/stat.h>
  #include <fcntl.h>
  #include <unistd.h>

  const int NR_CPUS = sysconf(_SC_NPROCESSORS_ONLN);
  int N;
  const char *filename;
  int R;
  int xxx = 0;

  int glue(int n)
  {
      cpu_set_t m;
      CPU_ZERO(&m);
      CPU_SET(n, &m);
      return sched_setaffinity(0, sizeof(cpu_set_t), &m);
  }

  void f(int n)
  {
      glue(n % NR_CPUS);
      while (*(volatile int *)&xxx == 0) {
      }
      for (int i = 0; i < R; i++) {
          int fd = open(filename, O_RDONLY);
          char buf[4096];
          ssize_t rv = read(fd, buf, sizeof(buf));
          asm volatile ("" :: "g" (rv));
          close(fd);
      }
  }

  int main(int argc, char *argv[])
  {
      if (argc < 4) {
          std::cerr << "usage: " << argv[0] << ' ' << "N /proc/filename R ";
          return 1;
      }
      N = atoi(argv[1]);
      filename = argv[2];
      R = atoi(argv[3]);
      for (int i = 0; i < NR_CPUS; i++) {
          if (glue(i) == 0)
              break;
      }
      std::vector<std::thread> T;
      T.reserve(N);
      for (int i = 0; i < N; i++) {
          T.emplace_back(f, i);
      }
      auto t0 = std::chrono::system_clock::now();
      {
          *(volatile int *)&xxx = 1;
          for (auto& t: T) {
              t.join();
          }
      }
      auto t1 = std::chrono::system_clock::now();
      std::chrono::duration<double> dt = t1 - t0;
      std::cout << dt.count() << ' ';
      return 0;
  }

P.S.: An explicit randomization marker is added because adding a non-function pointer will silently disable structure layout randomization. [akpm@linux-foundation.org: coding style fixes] Reported-by: kbuild test robot <lkp@intel.com> Reported-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Joe Perches <joe@perches.com> Link: http://lkml.kernel.org/r/20200222201539.GA22576@avx2 Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Jules Irenge authored
Fix sparse locking imbalance warning: warning: context imbalance in close_pdeo() - unexpected unlock Signed-off-by: Jules Irenge <jbi.octave@gmail.com> Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Link: http://lkml.kernel.org/r/20200227201538.GA30462@avx2Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Waiman Long authored
Both bootmem_data and bootmem_data_t structures are no longer defined. Remove the dummy forward declarations. Signed-off-by: Waiman Long <longman@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Reviewed-by: Baoquan He <bhe@redhat.com> Acked-by: Mike Rapoport <rppt@linux.ibm.com> Link: http://lkml.kernel.org/r/20200326022617.26208-1-longman@redhat.comSigned-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Mateusz Nosek authored
Previously there was a check whether 'size' is aligned to 'align', and only then was it aligned. This check was expensive, as both branches and division are costly instructions on most architectures. ALIGN() on an already-aligned value does not change it, and since it is cheaper than a branch plus a division it can simply be executed unconditionally and the branch removed. Signed-off-by: Mateusz Nosek <mateusznosek0@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Reviewed-by: Andrew Morton <akpm@linux-foundation.org> Link: http://lkml.kernel.org/r/20200320173317.26408-1-mateusznosek0@gmail.com Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
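A minimal standalone sketch of the pattern being removed (variable names and the exact check are illustrative, not the hunk from the patch):

    #include <stdio.h>

    /* Round x up to the next multiple of the power-of-two a. */
    #define ALIGN(x, a)     (((x) + ((a) - 1)) & ~((unsigned long)(a) - 1))

    int main(void)
    {
            unsigned long size = 4096, align = 64;

            /* before (sketch): if (size % align) size = ALIGN(size, align); */
            size = ALIGN(size, align);      /* after: align unconditionally */

            /* An already-aligned size is left unchanged, so the guard
             * bought nothing but a branch and a division. */
            printf("%lu\n", size);
            return 0;
    }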
-
Ira Weiny authored
Fixes: 80a72d0a ("memremap: remove the data field in struct dev_pagemap") Fixes: fdc029b1 ("memremap: remove the dev field in struct dev_pagemap") Signed-off-by: Ira Weiny <ira.weiny@intel.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Reviewed-by: Christoph Hellwig <hch@lst.de> Cc: Jason Gunthorpe <jgg@ziepe.ca> Cc: Dan Williams <dan.j.williams@intel.com> Link: http://lkml.kernel.org/r/20200316213205.145333-1-ira.weiny@intel.comSigned-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Steven Price authored
If CONFIG_DEVICE_PRIVATE is defined, but neither CONFIG_MEMORY_FAILURE nor CONFIG_MIGRATION, then non_swap_entry() will return 0, meaning that the condition (non_swap_entry(entry) && is_device_private_entry(entry)) in zap_pte_range() will never be true even if the entry is a device private one. Equally, any other code depending on non_swap_entry() will not function as expected. I originally spotted this just by looking at the code; I haven't actually observed any problems. Looking a bit more closely, it appears that this situation (currently at least) cannot occur:

  DEVICE_PRIVATE depends on ZONE_DEVICE
  ZONE_DEVICE depends on MEMORY_HOTREMOVE
  MEMORY_HOTREMOVE depends on MIGRATION

Fixes: 5042db43 ("mm/ZONE_DEVICE: new type of ZONE_DEVICE for unaddressable memory") Signed-off-by: Steven Price <steven.price@arm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Cc: Jérôme Glisse <jglisse@redhat.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Dan Williams <dan.j.williams@intel.com> Cc: John Hubbard <jhubbard@nvidia.com> Link: http://lkml.kernel.org/r/20200305130550.22693-1-steven.price@arm.com Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Joe Perches authored
Convert the various /* fallthrough */ comments to the pseudo-keyword fallthrough; Done via script: https://lore.kernel.org/lkml/b56602fcf79f849e733e7b521bb0e17895d390fa.1582230379.git.joe@perches.com/ Signed-off-by: Joe Perches <joe@perches.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Reviewed-by: Gustavo A. R. Silva <gustavo@embeddedor.com> Link: http://lkml.kernel.org/r/f62fea5d10eb0ccfc05d87c242a620c261219b66.camel@perches.com Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
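An illustrative before/after (hypothetical function, not taken from the converted files); the fallthrough pseudo-keyword comes from the kernel's compiler attribute headers and is approximated here so the snippet builds on its own:

    #include <stdio.h>

    /* Approximation of the kernel's fallthrough pseudo-keyword. */
    #ifdef __has_attribute
    # if __has_attribute(__fallthrough__)
    #  define fallthrough   __attribute__((__fallthrough__))
    # endif
    #endif
    #ifndef fallthrough
    # define fallthrough    do {} while (0)
    #endif

    static void handle(int cmd)
    {
            switch (cmd) {
            case 0:
                    printf("reset\n");
                    fallthrough;    /* previously a "fall through" comment */
            case 1:
                    printf("start\n");
                    break;
            default:
                    break;
            }
    }

    int main(void)
    {
            handle(0);      /* prints "reset" then "start" */
            return 0;
    }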
-
Mateusz Nosek authored
MAX_ZONELISTS is a compile-time constant, so it should be checked with BUILD_BUG_ON(), not BUG_ON(). Signed-off-by: Mateusz Nosek <mateusznosek0@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Reviewed-by: Andrew Morton <akpm@linux-foundation.org> Reviewed-by: Wei Yang <richard.weiyang@gmail.com> Link: http://lkml.kernel.org/r/20200228224617.11343-1-mateusznosek0@gmail.com Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
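A standalone sketch of the idea; the value and condition are illustrative, not the exact hunk in mm/page_alloc.c, and BUILD_BUG_ON is approximated for an out-of-kernel build:

    /* Compile-time assertion, roughly what <linux/build_bug.h> provides. */
    #define BUILD_BUG_ON(cond)      ((void)sizeof(char[1 - 2 * !!(cond)]))

    enum { MAX_ZONELISTS = 2 };     /* illustrative value */

    static inline void check_zonelists(void)
    {
            /* before: BUG_ON(MAX_ZONELISTS > 2) -- a runtime check and panic path */
            BUILD_BUG_ON(MAX_ZONELISTS > 2);        /* after: fails the build instead */
    }

    int main(void) { check_zonelists(); return 0; }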
-
chenqiwu authored
The parameter @pfn passed to remap_pfn_range() by the caller is actually a page-frame number converted from the corresponding physical address of kernel memory; the original comment is ambiguous and may mislead users. Meanwhile, there is an ambiguous typo, "VMM", in the comment of vm_area_struct. Fixing both makes the code more readable. Signed-off-by: chenqiwu <chenqiwu@xiaomi.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Reviewed-by: Andrew Morton <akpm@linux-foundation.org> Link: http://lkml.kernel.org/r/1583026921-15279-1-git-send-email-qiwuchen55@gmail.com Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Jules Irenge authored
Sparse reports a warning at unpin_tag():

  warning: context imbalance in unpin_tag() - unexpected unlock

The root cause is the missing annotation at unpin_tag(). Add the missing __releases(bitlock) annotation. Signed-off-by: Jules Irenge <jbi.octave@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Acked-by: Minchan Kim <minchan@kernel.org> Link: http://lkml.kernel.org/r/20200214204741.94112-14-jbi.octave@gmail.com Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Jules Irenge authored
Sparse reports a warning at pin_tag():

  warning: context imbalance in pin_tag() - wrong count at exit

The root cause is the missing annotation at pin_tag(). Add the missing __acquires(bitlock) annotation. Signed-off-by: Jules Irenge <jbi.octave@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Acked-by: Minchan Kim <minchan@kernel.org> Link: http://lkml.kernel.org/r/20200214204741.94112-13-jbi.octave@gmail.com Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
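A standalone illustration of the annotation pattern used in this patch and its neighbours (hypothetical lock and helpers; in the kernel the macros come from <linux/compiler_types.h> and expand to sparse context attributes only under __CHECKER__):

    #include <pthread.h>

    /* Expand to sparse context annotations only when sparse is running. */
    #ifdef __CHECKER__
    # define __acquires(x)  __attribute__((context(x, 0, 1)))
    # define __releases(x)  __attribute__((context(x, 1, 0)))
    #else
    # define __acquires(x)
    # define __releases(x)
    #endif

    static pthread_mutex_t bitlock = PTHREAD_MUTEX_INITIALIZER;

    /* Returns with the lock held: tell sparse the context goes 0 -> 1. */
    static void pin_tag(void) __acquires(bitlock)
    {
            pthread_mutex_lock(&bitlock);
    }

    /* Releases the lock taken above: context goes 1 -> 0. */
    static void unpin_tag(void) __releases(bitlock)
    {
            pthread_mutex_unlock(&bitlock);
    }

    int main(void)
    {
            pin_tag();
            unpin_tag();
            return 0;
    }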
-
Jules Irenge authored
Sparse reports a warning at migrate_read_unlock():

  warning: context imbalance in migrate_read_unlock() - unexpected unlock

The root cause is the missing annotation at migrate_read_unlock(). Add the missing __releases(&zspage->lock) annotation. Signed-off-by: Jules Irenge <jbi.octave@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Acked-by: Minchan Kim <minchan@kernel.org> Link: http://lkml.kernel.org/r/20200214204741.94112-12-jbi.octave@gmail.com Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Jules Irenge authored
Sparse reports a warning at migrate_read_lock():

  warning: context imbalance in migrate_read_lock() - wrong count at exit

The root cause is the missing annotation at migrate_read_lock(). Add the missing __acquires(&zspage->lock) annotation. Signed-off-by: Jules Irenge <jbi.octave@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Acked-by: Minchan Kim <minchan@kernel.org> Link: http://lkml.kernel.org/r/20200214204741.94112-11-jbi.octave@gmail.com Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Jules Irenge authored
Sparse reports a warning at put_map():

  warning: context imbalance in put_map() - unexpected unlock

The root cause is the missing annotation at put_map(). Add the missing __releases(&object_map_lock) annotation. Signed-off-by: Jules Irenge <jbi.octave@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Link: http://lkml.kernel.org/r/20200214204741.94112-10-jbi.octave@gmail.com Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Jules Irenge authored
Sparse reports a warning at get_map():

  warning: context imbalance in get_map() - wrong count at exit

The root cause is the missing annotation at get_map(). Add the missing __acquires(&object_map_lock) annotation. Signed-off-by: Jules Irenge <jbi.octave@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Link: http://lkml.kernel.org/r/20200214204741.94112-9-jbi.octave@gmail.com Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Jules Irenge authored
Sparse reports a warning at queue_pages_pmd():

  context imbalance in queue_pages_pmd() - unexpected unlock

The root cause is the missing annotation at queue_pages_pmd(). Add the missing __releases(ptl) annotation. Signed-off-by: Jules Irenge <jbi.octave@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Link: http://lkml.kernel.org/r/20200214204741.94112-8-jbi.octave@gmail.com Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Jules Irenge authored
Sparse reports a warning at gather_surplus_pages():

  warning: context imbalance in hugetlb_cow() - unexpected unlock

The root cause is the missing annotation at gather_surplus_pages(). Add the missing __must_hold(&hugetlb_lock) annotation. Signed-off-by: Jules Irenge <jbi.octave@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com> Link: http://lkml.kernel.org/r/20200214204741.94112-7-jbi.octave@gmail.com Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Jules Irenge authored
Sparse reports a warning at compact_lock_irqsave():

  warning: context imbalance in compact_lock_irqsave() - wrong count at exit

The root cause is the missing annotation at compact_lock_irqsave(). Add the missing __acquires(lock) annotation. Signed-off-by: Jules Irenge <jbi.octave@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Link: http://lkml.kernel.org/r/20200214204741.94112-6-jbi.octave@gmail.com Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Maciej S. Szmigiero authored
The compressed cache for swap pages (zswap) currently needs from 1 to 3 extra kernel command line parameters in order to make it work: it has to be enabled by adding a "zswap.enabled=1" command line parameter, and if one wants a different compressor or pool allocator than the default lzo / zbud combination then these choices also need to be specified on the kernel command line in additional parameters. Using a different compressor and allocator for zswap is actually pretty common, as guides often recommend using the lz4 / z3fold pair instead of the default one. In such a case it is also necessary to remember to enable the appropriate compression algorithm and pool allocator in the kernel config manually. Let's avoid the need for adding these kernel command line parameters and automatically pull in the dependencies for the selected compressor algorithm and pool allocator by adding appropriate default switches to Kconfig. The default values for these options match what the code was using previously as its defaults. Signed-off-by: Maciej S. Szmigiero <mail@maciej.szmigiero.name> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Reviewed-by: Vitaly Wool <vitaly.wool@konsulko.com> Link: http://lkml.kernel.org/r/20200202000112.456103-1-mail@maciej.szmigiero.name Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Palmer Dabbelt authored
I recently built the RISC-V port with LLVM trunk, which has introduced a new warning when casting from a pointer to an enum of a smaller size. This patch simply casts to a long in the middle to stop the warning. I'd be surprised if this is the only one in the kernel, but it's the only one I saw. Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Reviewed-by: Andrew Morton <akpm@linux-foundation.org> Link: http://lkml.kernel.org/r/20200227211741.83165-1-palmer@dabbelt.com Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
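A minimal sketch of the kind of cast involved (the enum, pointer and function names here are made up for illustration; the actual call site is not shown in this log):

    #include <stdio.h>

    enum small_enum { STATE_A, STATE_B };   /* narrower than a pointer */

    static enum small_enum decode(const void *ptr)
    {
            /*
             * before: return (enum small_enum)ptr;
             * clang warns about casting a pointer to a smaller enum;
             * bouncing through long first keeps it quiet.
             */
            return (enum small_enum)(long)ptr;
    }

    int main(void)
    {
            printf("%d\n", decode((void *)1L));
            return 0;
    }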
-
Hugh Dickins authored
Yang Shi writes:

Currently, when truncating a shmem file, if the range is partly in a THP (start or end is in the middle of THP), the pages actually will just get cleared rather than being freed, unless the range covers the whole THP. Even though all the subpages are truncated (randomly or sequentially), the THP may still be kept in page cache. This might be fine for some usecases which prefer preserving THP, but balloon inflation is handled in base page size. So when using shmem THP as memory backend, QEMU inflation actually doesn't work as expected since it doesn't free memory. But the inflation usecase really needs to get the memory freed. (Anonymous THP will also not get freed right away, but will be freed eventually when all subpages are unmapped: whereas shmem THP still stays in page cache.) Split THP right away when doing partial hole punch, and if split fails just clear the page so that read of the punched area will return zeroes.

Hugh Dickins adds:

Our earlier "team of pages" huge tmpfs implementation worked in the way that Yang Shi proposes; and we have been using this patch to continue to split the huge page when hole-punched or truncated, since converting over to the compound page implementation. Although huge tmpfs gives out huge pages when available, if the user specifically asks to truncate or punch a hole (perhaps to free memory, perhaps to reduce the memcg charge), then the filesystem should do so as best it can, splitting the huge page. That is not always possible: any additional reference to the huge page prevents split_huge_page() from succeeding, so the result can be flaky. But in practice it works successfully enough that we've not seen any problem from that.

Add shmem_punch_compound() to encapsulate the decision of when a split is needed, and doing the split if so. Using this simplifies the flow in shmem_undo_range(); and the first (trylock) pass does not need to do any page clearing on failure, because the second pass will either succeed or do that clearing. Following the example of zero_user_segment() when clearing a partial page, add flush_dcache_page() and set_page_dirty() when clearing a hole - though I'm not certain that either is needed.

But: split_huge_page() would be sure to fail if shmem_undo_range()'s pagevec holds further references to the huge page. The easiest way to fix that is for find_get_entries() to return early, as soon as it has put one compound head or tail into the pagevec. At first this felt like a hack; but on examination, this convention better suits all its callers - or will do, if the slight one-page-per-pagevec slowdown in shmem_unlock_mapping() and shmem_seek_hole_data() is transformed into a 512-page-per-pagevec speedup by checking for compound pages there.

Signed-off-by: Hugh Dickins <hughd@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Cc: Yang Shi <yang.shi@linux.alibaba.com> Cc: Alexander Duyck <alexander.duyck@gmail.com> Cc: "Michael S. Tsirkin" <mst@redhat.com> Cc: David Hildenbrand <david@redhat.com> Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com> Cc: Matthew Wilcox <willy@infradead.org> Cc: Andrea Arcangeli <aarcange@redhat.com> Link: http://lkml.kernel.org/r/alpine.LSU.2.11.2002261959020.10801@eggly.anvils Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-