- 05 Jun, 2020 40 commits
-
-
Desnes A. Nunes do Rosario authored
The number of reserved pkeys in a PowerNV environment is different from that on PowerVM or KVM. Tested on PowerVM and PowerNV environments. Signed-off-by: "Desnes A. Nunes do Rosario" <desnesn@linux.vnet.ibm.com> Signed-off-by: Ram Pai <linuxram@us.ibm.com> Signed-off-by: Sandipan Das <sandipan@linux.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Acked-by: Dave Hansen <dave.hansen@intel.com> Cc: Florian Weimer <fweimer@redhat.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Thiago Jung Bauermann <bauerman@linux.ibm.com> Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Michal Hocko <mhocko@kernel.org> Cc: Michal Suchanek <msuchanek@suse.de> Cc: Shuah Khan <shuah@kernel.org> Link: http://lkml.kernel.org/r/0341a0ca961166814b44c9e724774672c18d54ca.1585646528.git.sandipan@linux.ibm.comSigned-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Ram Pai authored
This makes use of the abstractions added earlier and introduces support for powerpc. For powerpc, after receiving the SIGSEGV, the signal handler must explicitly restore access permissions for the faulting pkey to allow the test to continue. As this makes use of pkey_access_allow(), all of its dependencies and other similar functions have been moved ahead of the signal handler. [sandipan@linux.ibm.com: fix powerpc access right updates] Link: http://lkml.kernel.org/r/5f65cf37be993760de8112a88da194e3ccbb2bf8.1588959697.git.sandipan@linux.ibm.comSigned-off-by: Ram Pai <linuxram@us.ibm.com> Signed-off-by: Sandipan Das <sandipan@linux.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Acked-by: Dave Hansen <dave.hansen@intel.com> Cc: Dave Hansen <dave.hansen@intel.com> Cc: Florian Weimer <fweimer@redhat.com> Cc: "Desnes A. Nunes do Rosario" <desnesn@linux.vnet.ibm.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Thiago Jung Bauermann <bauerman@linux.ibm.com> Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Michal Hocko <mhocko@kernel.org> Cc: Michal Suchanek <msuchanek@suse.de> Cc: Shuah Khan <shuah@kernel.org> Link: http://lkml.kernel.org/r/b121e9fd33789ed9195276e32fe4e80bb6b88a31.1585646528.git.sandipan@linux.ibm.comSigned-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Ram Pai authored
This introduces some generic abstractions and provides the corresponding architecture-specific implementations for these abstractions. Signed-off-by: Ram Pai <linuxram@us.ibm.com> Signed-off-by: Thiago Jung Bauermann <bauerman@linux.ibm.com> Signed-off-by: Sandipan Das <sandipan@linux.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Acked-by: Dave Hansen <dave.hansen@intel.com> Cc: Dave Hansen <dave.hansen@intel.com> Cc: Florian Weimer <fweimer@redhat.com> Cc: "Desnes A. Nunes do Rosario" <desnesn@linux.vnet.ibm.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Michal Hocko <mhocko@kernel.org> Cc: Michal Suchanek <msuchanek@suse.de> Cc: Shuah Khan <shuah@kernel.org> Link: http://lkml.kernel.org/r/1c977915e69fb7767fb0dbd55ac7656554b15b93.1585646528.git.sandipan@linux.ibm.com Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Sandipan Das authored
The huge page size can vary across architectures. This will ensure that the correct huge page size is used when accessing the hugetlb controls under sysfs. Instead of using a hardcoded page size (i.e. 2MB), this now uses the HPAGE_SIZE macro which is arch-specific. Signed-off-by: Sandipan Das <sandipan@linux.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Acked-by: Dave Hansen <dave.hansen@intel.com> Cc: "Desnes A. Nunes do Rosario" <desnesn@linux.vnet.ibm.com> Cc: Florian Weimer <fweimer@redhat.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Ram Pai <linuxram@us.ibm.com> Cc: Thiago Jung Bauermann <bauerman@linux.ibm.com> Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Michal Hocko <mhocko@kernel.org> Cc: Michal Suchanek <msuchanek@suse.de> Cc: Shuah Khan <shuah@kernel.org> Link: http://lkml.kernel.org/r/66882a5d6e45c73c3a52bc4aef9754e48afa4f88.1585646528.git.sandipan@linux.ibm.comSigned-off-by: Linus Torvalds <torvalds@linux-foundation.org>
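A minimal userspace sketch of why a hardcoded 2MB value breaks on other architectures; the fallback definition of HPAGE_SIZE and the exact sysfs path layout below are illustrative assumptions, not the selftest's own code:

  #include <stdio.h>

  /* Normally supplied by the test's arch header; the 2MB fallback here is
   * only for illustration (powerpc radix uses 2MB, hash uses 16MB). */
  #ifndef HPAGE_SIZE
  #define HPAGE_SIZE (2UL << 20)
  #endif

  int main(void)
  {
      char path[128];

      /* Derive the hugetlb sysfs directory from the arch's huge page size
       * instead of hardcoding "hugepages-2048kB". */
      snprintf(path, sizeof(path),
               "/sys/kernel/mm/hugepages/hugepages-%lukB/nr_hugepages",
               HPAGE_SIZE / 1024);
      printf("%s\n", path);
      return 0;
  }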
-
Ram Pai authored
alloc_random_pkey() was allocating the same pkey every time. Not all pkeys were getting tested. This fixes it. Signed-off-by: Ram Pai <linuxram@us.ibm.com> Signed-off-by: Sandipan Das <sandipan@linux.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Acked-by: Dave Hansen <dave.hansen@intel.com> Cc: Dave Hansen <dave.hansen@intel.com> Cc: Florian Weimer <fweimer@redhat.com> Cc: "Desnes A. Nunes do Rosario" <desnesn@linux.vnet.ibm.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Thiago Jung Bauermann <bauerman@linux.ibm.com> Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Michal Hocko <mhocko@kernel.org> Cc: Michal Suchanek <msuchanek@suse.de> Cc: Shuah Khan <shuah@kernel.org> Link: http://lkml.kernel.org/r/0162f55816d4e783a0d6e49e554d0ab9a3c9a23b.1585646528.git.sandipan@linux.ibm.com Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
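A hedged sketch of the intended behaviour (NR_PKEYS and the sys_pkey_alloc()/sys_pkey_free() wrappers are assumed from the selftest; this is not its exact code): allocate whatever the kernel will hand out, keep one randomly chosen key, and free the rest so repeated calls exercise different keys.

  static int alloc_random_pkey(void)
  {
      int keys[NR_PKEYS];
      int nr = 0, i, picked = -1, ret = -1;

      /* Grab every pkey that is currently available. */
      while (nr < NR_PKEYS) {
          int k = sys_pkey_alloc(0, 0);

          if (k < 0)
              break;
          keys[nr++] = k;
      }
      if (nr) {
          picked = rand() % nr;          /* a different key each call */
          ret = keys[picked];
      }
      /* Return the others so later allocations can still succeed. */
      for (i = 0; i < nr; i++)
          if (i != picked)
              sys_pkey_free(keys[i]);
      return ret;
  }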
-
Ram Pai authored
In some cases, a pkey's bits need not necessarily change in a way that the value of the pkey register increases when performing a pkey_disable_set() or decreases when performing a pkey_disable_clear(). For example, on powerpc, if a pkey's current state is PKEY_DISABLE_ACCESS and we perform a pkey_write_disable() on it, the bits still remain the same. We will observe something similar when the pkey's current state is 0 and a pkey_access_enable() is performed on it. Either case would cause some assertions to fail. This fixes the problem. Signed-off-by: Ram Pai <linuxram@us.ibm.com> Signed-off-by: Sandipan Das <sandipan@linux.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Acked-by: Dave Hansen <dave.hansen@intel.com> Cc: Dave Hansen <dave.hansen@intel.com> Cc: Florian Weimer <fweimer@redhat.com> Cc: "Desnes A. Nunes do Rosario" <desnesn@linux.vnet.ibm.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Thiago Jung Bauermann <bauerman@linux.ibm.com> Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Michal Hocko <mhocko@kernel.org> Cc: Michal Suchanek <msuchanek@suse.de> Cc: Shuah Khan <shuah@kernel.org> Link: http://lkml.kernel.org/r/8240665131e43fc93eed4eea8194676c1ea39a7f.1585646528.git.sandipan@linux.ibm.comSigned-off-by: Linus Torvalds <torvalds@linux-foundation.org>
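A hedged sketch of the relaxed check this implies (read_pkey_reg(), pkey_disable_set() and the flag name follow the selftest's vocabulary and are not quoted from the patch): the register value may legitimately stay the same, so the assertion has to allow equality.

  u64 orig = read_pkey_reg();

  pkey_disable_set(pkey, PKEY_DISABLE_WRITE);

  /* The requested bits may already have been set (e.g. the key was
   * already access-disabled), so a strict ">" would be too strong. */
  assert(read_pkey_reg() >= orig);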
-
Ram Pai authored
Currently, pkey_disable_clear() sets the specified bits instead of clearing them. This has been dead code up to now because its only callers, i.e. pkey_access/write_allow(), are also unused. Signed-off-by: Ram Pai <linuxram@us.ibm.com> Signed-off-by: Sandipan Das <sandipan@linux.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Acked-by: Dave Hansen <dave.hansen@intel.com> Cc: Dave Hansen <dave.hansen@intel.com> Cc: Florian Weimer <fweimer@redhat.com> Cc: "Desnes A. Nunes do Rosario" <desnesn@linux.vnet.ibm.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Thiago Jung Bauermann <bauerman@linux.ibm.com> Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Michal Hocko <mhocko@kernel.org> Cc: Michal Suchanek <msuchanek@suse.de> Cc: Shuah Khan <shuah@kernel.org> Link: http://lkml.kernel.org/r/1f70bca60330a85dca42c3cd98212bb1cdf5a076.1585646528.git.sandipan@linux.ibm.com Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
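The shape of the bug, as a hedged sketch rather than the literal diff (read_pkey_reg()/write_pkey_reg() are the selftest's accessors, and "flags" is assumed to already be shifted to the given pkey's position): the clear path must drop the requested rights bits from the pkey register value, not OR them in.

  u64 pkey_reg = read_pkey_reg();

  /* buggy variant effectively did: pkey_reg |= flags;  (sets the bits) */
  pkey_reg &= ~flags;                 /* intended: clear them */

  write_pkey_reg(pkey_reg);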
-
Sandipan Das authored
This introduces some functions that help with setting or clearing bits of a particular pkey. This also adds an abstraction for getting a pkey's bit position in the pkey register as this may vary across architectures. Signed-off-by: Sandipan Das <sandipan@linux.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Acked-by: Dave Hansen <dave.hansen@intel.com> Cc: "Desnes A. Nunes do Rosario" <desnesn@linux.vnet.ibm.com> Cc: Florian Weimer <fweimer@redhat.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Ram Pai <linuxram@us.ibm.com> Cc: Thiago Jung Bauermann <bauerman@linux.ibm.com> Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Michal Hocko <mhocko@kernel.org> Cc: Michal Suchanek <msuchanek@suse.de> Cc: Shuah Khan <shuah@kernel.org> Link: http://lkml.kernel.org/r/2ad9705f4f68ca7e72155cc583415e5a979546f1.1585646528.git.sandipan@linux.ibm.comSigned-off-by: Linus Torvalds <torvalds@linux-foundation.org>
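A rough illustration of the abstraction (the constant and helper names are assumptions for this sketch, not necessarily the selftest's): the shift needed to reach a given pkey's permission bits is architecture-specific, so it sits behind a helper that both the set and clear paths use.

  /* x86 flavour: pkey 0 occupies the lowest two PKRU bits. */
  static inline u32 pkey_bit_position(int pkey)
  {
      return pkey * PKEY_BITS_PER_PKEY;
  }

  static inline u64 set_pkey_bits(u64 reg, int pkey, u64 flags)
  {
      u32 shift = pkey_bit_position(pkey);

      reg &= ~((u64)PKEY_MASK << shift);          /* drop the old bits */
      return reg | ((flags & PKEY_MASK) << shift); /* install the new ones */
  }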
-
Sandipan Das authored
The size of the pkey register can vary across architectures. This converts the data type of all its references to u64 in preparation for multi-arch support. To keep the definition of the u64 type consistent and remove format specifier related warnings, __SANE_USERSPACE_TYPES__ is defined as suggested by Michael Ellerman. Signed-off-by: Sandipan Das <sandipan@linux.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Acked-by: Dave Hansen <dave.hansen@intel.com> Cc: "Desnes A. Nunes do Rosario" <desnesn@linux.vnet.ibm.com> Cc: Florian Weimer <fweimer@redhat.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Ram Pai <linuxram@us.ibm.com> Cc: Thiago Jung Bauermann <bauerman@linux.ibm.com> Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Michal Hocko <mhocko@kernel.org> Cc: Michal Suchanek <msuchanek@suse.de> Cc: Shuah Khan <shuah@kernel.org> Link: http://lkml.kernel.org/r/d3e271798455d940e395e56e1ff1e82a31bcb7aa.1585646528.git.sandipan@linux.ibm.comSigned-off-by: Linus Torvalds <torvalds@linux-foundation.org>
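For example (a small sketch, not the selftest source), defining __SANE_USERSPACE_TYPES__ before the uapi headers makes __u64 an unsigned long long on every architecture, so a single %llx format specifier works without warnings:

  #define __SANE_USERSPACE_TYPES__    /* must precede the headers */
  #include <linux/types.h>
  #include <stdio.h>

  typedef __u64 pkey_reg_t;           /* register width differs per arch */

  static void dump_pkey_reg(pkey_reg_t reg)
  {
      printf("pkey_reg: 0x%016llx\n", reg);
  }

  int main(void)
  {
      dump_pkey_reg(0x5555555555555554ULL);
      return 0;
  }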
-
Thiago Jung Bauermann authored
This will help us ensure we print pkey_reg_t values correctly in different architectures. Signed-off-by: Thiago Jung Bauermann <bauerman@linux.ibm.com> Signed-off-by: Sandipan Das <sandipan@linux.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Acked-by: Dave Hansen <dave.hansen@intel.com> Cc: "Desnes A. Nunes do Rosario" <desnesn@linux.vnet.ibm.com> Cc: Florian Weimer <fweimer@redhat.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Ram Pai <linuxram@us.ibm.com> Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Michal Hocko <mhocko@kernel.org> Cc: Michal Suchanek <msuchanek@suse.de> Cc: Shuah Khan <shuah@kernel.org> Link: http://lkml.kernel.org/r/b40b7a95fdd4045d62530a2a34452934caf3b0bc.1585646528.git.sandipan@linux.ibm.comSigned-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Thiago Jung Bauermann authored
In preparation for multi-arch support, move definitions which have arch-specific values to an x86-specific header. Signed-off-by: Ram Pai <linuxram@us.ibm.com> Signed-off-by: Thiago Jung Bauermann <bauerman@linux.ibm.com> Signed-off-by: Sandipan Das <sandipan@linux.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Acked-by: Dave Hansen <dave.hansen@intel.com> Cc: Dave Hansen <dave.hansen@intel.com> Cc: Florian Weimer <fweimer@redhat.com> Cc: "Desnes A. Nunes do Rosario" <desnesn@linux.vnet.ibm.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Michal Hocko <mhocko@kernel.org> Cc: Michal Suchanek <msuchanek@suse.de> Cc: Shuah Khan <shuah@kernel.org> Link: http://lkml.kernel.org/r/d58eba2930059c8b209eefd6d5b48fe922a5b010.1585646528.git.sandipan@linux.ibm.com Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Ram Pai authored
Moved all the generic definitions and helper functions to the header file. Signed-off-by: Ram Pai <linuxram@us.ibm.com> Signed-off-by: Thiago Jung Bauermann <bauerman@linux.ibm.com> Signed-off-by: Sandipan Das <sandipan@linux.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Acked-by: Dave Hansen <dave.hansen@intel.com> Cc: Florian Weimer <fweimer@redhat.com> Cc: "Desnes A. Nunes do Rosario" <desnesn@linux.vnet.ibm.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Michal Hocko <mhocko@kernel.org> Cc: Michal Suchanek <msuchanek@suse.de> Cc: Shuah Khan <shuah@kernel.org> Link: http://lkml.kernel.org/r/57177f99e92a51295956715d5f2d5688a4d13927.1585646528.git.sandipan@linux.ibm.com Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Ram Pai authored
This renames PKRU references to "pkey_reg" or "pkey" based on the usage. Signed-off-by: Ram Pai <linuxram@us.ibm.com> Signed-off-by: Thiago Jung Bauermann <bauerman@linux.ibm.com> Signed-off-by: Sandipan Das <sandipan@linux.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Reviewed-by: Dave Hansen <dave.hansen@intel.com> Cc: Dave Hansen <dave.hansen@intel.com> Cc: Florian Weimer <fweimer@redhat.com> Cc: "Desnes A. Nunes do Rosario" <desnesn@linux.vnet.ibm.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Michal Hocko <mhocko@kernel.org> Cc: Michal Suchanek <msuchanek@suse.de> Cc: Shuah Khan <shuah@kernel.org> Link: http://lkml.kernel.org/r/2c6970bc6d2e99796cd5cc1101bd2ecf7eccb937.1585646528.git.sandipan@linux.ibm.comSigned-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Ram Pai authored
Patch series "selftests, powerpc, x86: Memory Protection Keys", v19. Memory protection keys enables an application to protect its address space from inadvertent access by its own code. This feature is now enabled on powerpc and has been available since 4.16-rc1. The patches move the selftests to arch neutral directory and enhance their test coverage. Tested on powerpc64 and x86_64 (Skylake-SP). This patch (of 24): Move selftest files from tools/testing/selftests/x86/ to tools/testing/selftests/vm/. Signed-off-by: Ram Pai <linuxram@us.ibm.com> Signed-off-by: Thiago Jung Bauermann <bauerman@linux.ibm.com> Signed-off-by: Sandipan Das <sandipan@linux.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Acked-by: Ingo Molnar <mingo@kernel.org> Acked-by: Dave Hansen <dave.hansen@intel.com> Cc: Dave Hansen <dave.hansen@intel.com> Cc: Florian Weimer <fweimer@redhat.com> Cc: "Desnes A. Nunes do Rosario" <desnesn@linux.vnet.ibm.com> Cc: Michal Hocko <mhocko@kernel.org> Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com> Cc: Michal Suchanek <msuchanek@suse.de> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Shuah Khan <shuah@kernel.org> Link: http://lkml.kernel.org/r/14d25194c3e2e652e0047feec4487e269e76e8c9.1585646528.git.sandipan@linux.ibm.comSigned-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Pengcheng Yang authored
When reading, read_pos should start from bytes_consumed, not file->f_pos, because when there is more than one reader, the position corresponding to file->f_pos may already have been consumed; starting there causes already-consumed data to be read again and bytes_consumed to be updated incorrectly. Signed-off-by: Pengcheng Yang <yangpc@wangsu.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Reviewed-by: Jens Axboe <axboe@kernel.dk> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Jann Horn <jannh@google.com> Cc: Al Viro <viro@zeniv.linux.org.uk> Link: http://lkml.kernel.org/r/1579691175-28949-1-git-send-email-yangpc@wangsu.com Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Daniel Axtens authored
alloc_percpu() may return NULL, which means chan->buf may be set to NULL. In that case, when we do *per_cpu_ptr(chan->buf, ...), we dereference an invalid pointer: BUG: Unable to handle kernel data access at 0x7dae0000 Faulting instruction address: 0xc0000000003f3fec ... NIP relay_open+0x29c/0x600 LR relay_open+0x270/0x600 Call Trace: relay_open+0x264/0x600 (unreliable) __blk_trace_setup+0x254/0x600 blk_trace_setup+0x68/0xa0 sg_ioctl+0x7bc/0x2e80 do_vfs_ioctl+0x13c/0x1300 ksys_ioctl+0x94/0x130 sys_ioctl+0x48/0xb0 system_call+0x5c/0x68 Check if alloc_percpu returns NULL. This was found by syzkaller both on x86 and powerpc, and the reproducer it found on powerpc is capable of hitting the issue as an unprivileged user. Fixes: 017c59c0 ("relay: Use per CPU constructs for the relay channel buffer pointers") Reported-by: syzbot+1e925b4b836afe85a1c6@syzkaller-ppc64.appspotmail.com Reported-by: syzbot+587b2421926808309d21@syzkaller-ppc64.appspotmail.com Reported-by: syzbot+58320b7171734bf79d26@syzkaller.appspotmail.com Reported-by: syzbot+d6074fb08bdb2e010520@syzkaller.appspotmail.com Signed-off-by: Daniel Axtens <dja@axtens.net> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Reviewed-by: Michael Ellerman <mpe@ellerman.id.au> Reviewed-by: Andrew Donnellan <ajd@linux.ibm.com> Acked-by: David Rientjes <rientjes@google.com> Cc: Akash Goel <akash.goel@intel.com> Cc: Andrew Donnellan <ajd@linux.ibm.com> Cc: Guenter Roeck <linux@roeck-us.net> Cc: Salvatore Bonaccorso <carnil@debian.org> Cc: <stable@vger.kernel.org> [4.10+] Link: http://lkml.kernel.org/r/20191219121256.26480-1-dja@axtens.netSigned-off-by: Linus Torvalds <torvalds@linux-foundation.org>
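A minimal sketch of the class of fix described above (not the exact kernel/relay.c hunk; the cleanup shown is illustrative): check the per-cpu allocation instead of dereferencing a NULL per-cpu pointer later.

  chan->buf = alloc_percpu(struct rchan_buf *);
  if (!chan->buf) {
      kfree(chan);
      return NULL;        /* relay_open() reports the failure to its caller */
  }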
-
John Hubbard authored
This code was using get_user_pages_fast(), in a "Case 2" scenario (DMA/RDMA), using the categorization from [1]. That means that it's time to convert the get_user_pages_fast() + put_page() calls to pin_user_pages_fast() + unpin_user_pages() calls. There is some helpful background in [2]: basically, this is a small part of fixing a long-standing disconnect between pinning pages, and file systems' use of those pages. [1] Documentation/core-api/pin_user_pages.rst [2] "Explicit pinning of user-space pages": https://lwn.net/Articles/807108/Signed-off-by: John Hubbard <jhubbard@nvidia.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Cc: Matt Porter <mporter@kernel.crashing.org> Cc: Alexandre Bounine <alex.bou9@gmail.com> Cc: Sumit Semwal <sumit.semwal@linaro.org> Cc: Dan Carpenter <dan.carpenter@oracle.com> Link: http://lkml.kernel.org/r/20200517235620.205225-3-jhubbard@nvidia.comSigned-off-by: Linus Torvalds <torvalds@linux-foundation.org>
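The general shape of such a conversion, sketched with illustrative variable names (the rio_mport code itself is not quoted here):

  /* Before: */
  pinned = get_user_pages_fast(addr, nr_pages, FOLL_WRITE, pages);
  /* ... DMA ... */
  for (i = 0; i < pinned; i++)
      put_page(pages[i]);

  /* After: the pages are tracked as pinned for DMA/file-system accounting. */
  pinned = pin_user_pages_fast(addr, nr_pages, FOLL_WRITE, pages);
  /* ... DMA ... */
  unpin_user_pages(pages, pinned);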
-
Madhuparna Bhowmik authored
Fields of md(mport_dev) are set after cdev_device_add(). However, the file operation callbacks can be called after cdev_device_add() and therefore accesses to fields of md in the callbacks can race with the rest of the mport_cdev_add() function. One such example is INIT_LIST_HEAD(&md->portwrites) in mport_cdev_add(), the list is initialised after cdev_device_add(). This can race with list_add_tail(&pw_filter->md_node,&md->portwrites) in rio_mport_add_pw_filter() which is called by unlocked_ioctl. To avoid such data races use cdev_device_add() after initializing md. Found by Linux Driver Verification project (linuxtesting.org). Signed-off-by: Madhuparna Bhowmik <madhuparnabhowmik10@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Acked-by: Alexandre Bounine <alex.bou9@gmail.com> Cc: Matt Porter <mporter@kernel.crashing.org> Cc: Dan Carpenter <dan.carpenter@oracle.com> Cc: Mike Marshall <hubcap@omnibond.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ira Weiny <ira.weiny@intel.com> Cc: Allison Randal <allison@lohutok.net> Cc: Pavel Andrianov <andrianov@ispras.ru> Link: http://lkml.kernel.org/r/20200426112950.1803-1-madhuparnabhowmik10@gmail.comSigned-off-by: Linus Torvalds <torvalds@linux-foundation.org>
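Sketched shape of the ordering fix (field names follow the description above rather than being a verbatim excerpt): finish initializing md before the device node becomes visible, because the fops callbacks can run as soon as cdev_device_add() returns.

  INIT_LIST_HEAD(&md->portwrites);    /* previously done after the add */
  mutex_init(&md->buf_mutex);
  /* ... any other md fields the ioctl paths rely on ... */

  ret = cdev_device_add(&md->cdev, &md->dev);
  if (ret)
      goto err_cdev;                  /* illustrative error label */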
-
Christoph Hellwig authored
Currently copy_string_kernel is just a wrapper around copy_strings that simplifies the calling conventions and uses set_fs to allow passing a kernel pointer. But because we only need to handle a single kernel argument pointer, the logic can be significantly simplified while getting rid of the set_fs. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Cc: Alexander Viro <viro@zeniv.linux.org.uk> Link: http://lkml.kernel.org/r/20200501104105.2621149-3-hch@lst.de Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Christoph Hellwig authored
copy_strings_kernel is always used with a single argument, adjust the calling convention to that. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Cc: Alexander Viro <viro@zeniv.linux.org.uk> Link: http://lkml.kernel.org/r/20200501104105.2621149-2-hch@lst.deSigned-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Kefeng Wang authored
Use DEFINE_SEQ_ATTRIBUTE macro to simplify the code. Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Greg KH <gregkh@linuxfoundation.org> Cc: Ingo Molnar <mingo@kernel.org> Cc: Al Viro <viro@zeniv.linux.org.uk> Link: http://lkml.kernel.org/r/20200509064031.181091-4-wangkefeng.wang@huawei.comSigned-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Kefeng Wang authored
Use DEFINE_SEQ_ATTRIBUTE macro to simplify the code. Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Greg KH <gregkh@linuxfoundation.org> Cc: Ingo Molnar <mingo@kernel.org> Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Al Viro <viro@zeniv.linux.org.uk> Link: http://lkml.kernel.org/r/20200509064031.181091-3-wangkefeng.wang@huawei.comSigned-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Kefeng Wang authored
Patch series "seq_file: Introduce DEFINE_SEQ_ATTRIBUTE() helper macro". As discussed in https://lore.kernel.org/lkml/20191129222310.GA3712618@kroah.com/, we could introduce a new helper macro to reduce losts of boilerplate code, vmstat and kprobes is the example which covert to use it, if this is accepted, I will send out more cleanups. This patch (of 3): Introduce DEFINE_SEQ_ATTRIBUTE() helper macro to decrease code duplication. [akpm@linux-foundation.org: coding style fixes] Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Cc: Greg KH <gregkh@linuxfoundation.org> Cc: Ingo Molnar <mingo@kernel.org> Cc: Kefeng Wang <wangkefeng.wang@huawei.com> Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Al Viro <viro@zeniv.linux.org.uk> Link: http://lkml.kernel.org/r/20200509064031.181091-1-wangkefeng.wang@huawei.com Link: http://lkml.kernel.org/r/20200509064031.181091-2-wangkefeng.wang@huawei.comSigned-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Joe Perches authored
Use a more common logging style. Add and use pr_fmt, coalesce the format string, align arguments, use better grammar. Signed-off-by: Joe Perches <joe@perches.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Cc: Vasily Averin <vvs@virtuozzo.com> Link: http://lkml.kernel.org/r/96ff603230ca1bd60034c36519be3930c3a3a226.camel@perches.comSigned-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
OGAWA Hirofumi authored
The current readahead for FAT entries is very simple but has some flaws, so it does not work well in some environments. This patch improves the readahead more or less. The key points of the modification are: - make the readahead size tunable by using bdi->ra_pages - take bdi->io_pages into account to avoid small I/O requests - update the readahead window before it is fully exhausted With this patch, on a slow USB-connected 2TB hdd: [before] 383.18sec [after] 51.03sec Signed-off-by: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Tested-by: hyeongseok.kim <hyeongseok.kim@lge.com> Reviewed-by: hyeongseok.kim <hyeongseok.kim@lge.com> Link: http://lkml.kernel.org/r/87d08e1dlh.fsf@mail.parknet.co.jp Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
OGAWA Hirofumi authored
If the FAT length == 0, the image doesn't have any data, and it can cause the root dir and FAT entries to overlap. Windows also treats it as an invalid format. Reported-by: syzbot+6f1624f937d9d6911e2d@syzkaller.appspotmail.com Signed-off-by: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Cc: Marco Elver <elver@google.com> Cc: Dmitry Vyukov <dvyukov@google.com> Link: http://lkml.kernel.org/r/87r1wz8mrd.fsf@mail.parknet.co.jp Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Chris Down authored
Some init systems (eg. systemd) have init at their own paths, for example, /usr/lib/systemd/systemd. A compatibility symlink to one of the hardcoded init paths is provided by another package, usually named something like systemd-sysvcompat or similar. Currently distro maintainers who are hands-off on the bootloader are more or less required to include those compatibility links as part of their base distribution, because it's hard to migrate away from them since there's a risk some users will not get the message to set init= on the kernel command line appropriately. Moreover, for distributions where the init system is something the distribution itself is opinionated about (eg. Arch, which has systemd in the required `base` package), we could usually reasonably configure this ahead of time when building the distribution kernel. However, we currently simply don't have any way to configure the kernel to do this. Here's an example discussion where removing sysvcompat was discussed by distro maintainers[0]. This patch adds a new Kconfig tunable, CONFIG_DEFAULT_INIT, which if set is tried before the hardcoded fallback list. So the order of precedence is now thus: 1. init= on command line (on failure: panic) 2. CONFIG_DEFAULT_INIT (on failure: try #3) 3. Hardcoded fallback list (on failure: panic) This new config parameter will allow distribution maintainers to move away from these compatibility links safely, without having to worry that their users might not have the right init=. There are also two other benefits of this over having the distribution maintain a symlink: 1. One of the value propositions over simply having distributions maintain a /sbin/init symlink via a package is that it also frees distributions which have a preferred default, but not mandatory, init system from having their package manager fight with their users for control of /{s,}bin/init. Instead, the distribution simply makes their preference known in CONFIG_DEFAULT_INIT, and if the user installs another init system and uninstalls the default one they can still make use of /{s,}bin/init and friends for their own uses. This makes more cases Just Work(tm) without the user having to perform extra configuration via init=. 2. Since before this we don't know which path the distribution actually _intends_ to serve init from, we don't pr_err if it is simply missing, and usually will just silently put the user in a /bin/sh shell. Now that the distribution can make a declaration of intent, we can be more vocal when this init system fails to launch for any reason, even if it's simply because no file exists at that location, speeding up the palaver of init/mount dependency/etc debugging a bit. [0]: https://lists.archlinux.org/pipermail/arch-dev-public/2019-January/029435.htmlSigned-off-by: Chris Down <chris@chrisdown.name> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Masami Hiramatsu <mhiramat@kernel.org> Link: http://lkml.kernel.org/r/20200522160234.GA1487022@chrisdown.nameSigned-off-by: Linus Torvalds <torvalds@linux-foundation.org>
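A hedged sketch of the resulting precedence in init/main.c style (not the literal patch; CONFIG_DEFAULT_INIT is a string option, assumed here to default to an empty string, and the helper names follow the existing kernel ones):

  if (execute_command) {                        /* 1. init= on the command line */
      ret = run_init_process(execute_command);
      if (!ret)
          return 0;
      panic("Requested init %s failed (error %d).", execute_command, ret);
  }

  if (CONFIG_DEFAULT_INIT[0] != '\0' &&         /* 2. build-time default */
      !try_to_run_init_process(CONFIG_DEFAULT_INIT))
      return 0;

  if (!try_to_run_init_process("/sbin/init") || /* 3. hardcoded fallbacks */
      !try_to_run_init_process("/etc/init") ||
      !try_to_run_init_process("/bin/init") ||
      !try_to_run_init_process("/bin/sh"))
      return 0;

  panic("No working init found.");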
-
Nick Desaulniers authored
ELFNOTE_START allows callers to specify flags for .pushsection assembler directives. All callsites but ELF_NOTE use "a" for SHF_ALLOC. For vdso's that explicitly use ELF_NOTE_START and BUILD_SALT, the same section is specified twice after preprocessing, once with "a" flag, once without. Example: .pushsection .note.Linux, "a", @note ; .pushsection .note.Linux, "", @note ; While GNU as allows this ordering, it warns for the opposite ordering, making these directives position dependent. We'd prefer not to precisely match this behavior in Clang's integrated assembler. Instead, the non __ASSEMBLY__ definition of ELF_NOTE uses __attribute__((section(".note.Linux"))) which is created with SHF_ALLOC, so let's make the __ASSEMBLY__ definition of ELF_NOTE consistent with C and just always use "a" flag. This allows Clang to assemble a working mainline (5.6) kernel via: $ make CC=clang AS=clang Signed-off-by: Nick Desaulniers <ndesaulniers@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Reviewed-by: Nathan Chancellor <natechancellor@gmail.com> Reviewed-by: Fangrui Song <maskray@google.com> Cc: Jeremy Fitzhardinge <jeremy@goop.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vincenzo Frascino <vincenzo.frascino@arm.com> Link: https://github.com/ClangBuiltLinux/linux/issues/913 Link: http://lkml.kernel.org/r/20200325231250.99205-1-ndesaulniers@google.comDebugged-by: Ilie Halip <ilie.halip@gmail.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Anthony Iliopoulos authored
The ifndef was added a long time ago to support archs that would define their own mapping function. The last user was the metag arch which was removed from the tree, and as such there are no users left. Let's kill it. Signed-off-by: Anthony Iliopoulos <ailiop@suse.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Link: http://lkml.kernel.org/r/20200402161543.4119-1-ailiop@suse.comSigned-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Geert Uytterhoeven authored
While "git am" can apply an mbox file containing multiple patches (e.g. as created by b4[1], or a patch bundle downloaded from patchwork), checkpatch does not have proper support for that. When operating on an mbox, checkpatch will merge all detected tags, and complain falsely about duplicates: WARNING: Duplicate signature As modifying checkpatch to reset state in between each patch is a lot of work, a simple solution is splitting the mbox into individual patches, and invoking checkpatch for each of them. Fortunately checkpatch can read a patch from stdin, so the classic "formail" tool can be used to split the mbox, and pipe all individual patches to checkpatch: formail -s scripts/checkpatch.pl < my-mbox However, when reading a patch file from standard input, checkpatch calls it "Your patch", and reports its state as: Your patch has style problems, please review. or: Your patch has no obvious style problems and is ready for submission. Hence it can be difficult to identify which patches need to be reviewed and improved. Fix this by replacing "Your patch" by (the first line of) the email subject, if present. Note that "git mailsplit" can also be used to split an mbox, but it will create individual files for each patch, thus requiring cleanup afterwards. Formail does not have this disadvantage. [1] https://git.kernel.org/pub/scm/utils/b4/b4.git [joe@perches.com: reduce cpu usage] Link: http://lkml.kernel.org/r/c9d89bb24c7414142414c60371e210fdcf4617d2.camel@perches.comSigned-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Cc: Joe Perches <joe@perches.com> Cc: Konstantin Ryabitsev <konstantin@linuxfoundation.org> Link: http://lkml.kernel.org/r/20200505132613.17452-1-geert+renesas@glider.beSigned-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Joe Perches authored
Don't allow these options to be combined. Miscellanea: o Add missing $P: to some die("reason message") output Signed-off-by: Joe Perches <joe@perches.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Link: http://lkml.kernel.org/r/3dc7bdaa58490f5906efc11a4d6113e42a087723.camel@perches.comSigned-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Joe Perches authored
Some checks look for comments around a specific function like read_barrier_depends. Extend the check to support both the c89 (/* comment */) and c99 (// comment) styles. For c99 comments, only look at 3 individual lines (the line being scanned, the line above it, and the line below it) rather than the patch diff context. Signed-off-by: Joe Perches <joe@perches.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Tested-by: Paul E. McKenney <paulmck@kernel.org> Cc: Marco Elver <elver@google.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Alexander Potapenko <glider@google.com> Cc: Andrey Konovalov <andreyknvl@google.com> Cc: Andy Whitcroft <apw@canonical.com> Cc: Will Deacon <will@kernel.org> Link: http://lkml.kernel.org/r/65cb075435d2f385a53c77571b491b2b09faaf8e.camel@perches.com Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Joe Perches authored
There is a preferred order for the entries in MAINTAINERS sections. See commits 3b50142d ("MAINTAINERS: sort field names for all entries") and 6680125e ("MAINTAINERS: list the section entries in the preferred order") Add checkpatch tests to try to keep that ordering. Signed-off-by: Joe Perches <joe@perches.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Acked-by: Andy Shevchenko <andy.shevchenko@gmail.com> Link: http://lkml.kernel.org/r/17677130b3ca62d79817e6a22546bad39d7e81b4.camel@perches.comSigned-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Arnd Bergmann authored
Clang normally does not warn about certain issues in inline functions when they only happen in an eliminated code path. However if something else goes wrong, it does tend to complain about the definition of hweight_long() on 32-bit targets: include/linux/bitops.h:75:41: error: shift count >= width of type [-Werror,-Wshift-count-overflow] return sizeof(w) == 4 ? hweight32(w) : hweight64(w); ^~~~~~~~~~~~ include/asm-generic/bitops/const_hweight.h:29:49: note: expanded from macro 'hweight64' #define hweight64(w) (__builtin_constant_p(w) ? __const_hweight64(w) : __arch_hweight64(w)) ^~~~~~~~~~~~~~~~~~~~ include/asm-generic/bitops/const_hweight.h:21:76: note: expanded from macro '__const_hweight64' #define __const_hweight64(w) (__const_hweight32(w) + __const_hweight32((w) >> 32)) ^ ~~ include/asm-generic/bitops/const_hweight.h:20:49: note: expanded from macro '__const_hweight32' #define __const_hweight32(w) (__const_hweight16(w) + __const_hweight16((w) >> 16)) ^ include/asm-generic/bitops/const_hweight.h:19:72: note: expanded from macro '__const_hweight16' #define __const_hweight16(w) (__const_hweight8(w) + __const_hweight8((w) >> 8 )) ^ include/asm-generic/bitops/const_hweight.h:12:9: note: expanded from macro '__const_hweight8' (!!((w) & (1ULL << 2))) + \ Adding an explicit cast to __u64 avoids that warning and makes it easier to read other output. Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Acked-by: Christian Brauner <christian.brauner@ubuntu.com> Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Nick Desaulniers <ndesaulniers@google.com> Link: http://lkml.kernel.org/r/20200505135513.65265-1-arnd@arndb.de Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
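The fix amounts to widening the operand before the 64-bit popcount, roughly as follows (a sketch of the include/linux/bitops.h change, not a verbatim quote):

  static __always_inline unsigned long hweight_long(unsigned long w)
  {
      /* Cast so __const_hweight64()'s ">> 32" never shifts a 32-bit type. */
      return sizeof(w) == 4 ? hweight32(w) : hweight64((__u64)w);
  }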
-
Jesse Brandeburg authored
Test some bit clears/sets to make sure assembly doesn't change, and that the set_bit and clear_bit functions work and don't cause sparse warnings. Instruct Kbuild to build this file with extra warning level -Wextra, to catch new issues, and it also doesn't hurt to build with C=1. This was used to test changes to arch/x86/include/asm/bitops.h. In particular, sparse (C=1) was very concerned when the last bit before a natural boundary, like 7, or 31, was being tested, as this causes sign extension (0xffffff7f) for instance when clearing bit 7. Recommended usage: make defconfig scripts/config -m CONFIG_TEST_BITOPS make modules_prepare make C=1 W=1 lib/test_bitops.ko objdump -S -d lib/test_bitops.ko insmod lib/test_bitops.ko rmmod lib/test_bitops.ko <check dmesg>, there should be no compiler/sparse warnings and no error messages in log. Link: http://lkml.kernel.org/r/20200310221747.2848474-2-jesse.brandeburg@intel.com Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com> Reviewed-by: Andy Shevchenko <andriy.shevchenko@intel.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk> Cc: Dan Williams <dan.j.williams@intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Wei Yang <richard.weiyang@gmail.com> Cc: Christian Brauner <christian.brauner@ubuntu.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Tan Hu authored
If the given type has a fraction smaller than max_frac/FPROP_FRAC_BASE, the code can be modified to call __fprop_inc_percpu() directly, which is easier to understand. After this patch, fprop_reflect_period_percpu() will be called twice, but it quickly returns on the pl->period == p->period test, so this does not result in a significant performance downside. Thanks to Jan for the guidance. Signed-off-by: Tan Hu <tan.hu@zte.com.cn> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Reviewed-by: Jan Kara <jack@suse.cz> Cc: <xue.zhihong@zte.com.cn> Cc: Yi Wang <wang.yi59@zte.com.cn> Cc: <wang.liang82@zte.com.cn> Link: http://lkml.kernel.org/r/1589004753-27554-1-git-send-email-tan.hu@zte.com.cn Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Joe Perches authored
Remove the trailing newline from the used-once pr_fmt and add it to the single use of pr_<level> in this code to use a more common logging style. Miscellanea: o Use %lu in the pr_debug format and remove the unnecessary cast Signed-off-by: Joe Perches <joe@perches.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Cc: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Link: http://lkml.kernel.org/r/47372467902a047c03b0fd29aab56e0c38d3f848.camel@perches.comSigned-off-by: Linus Torvalds <torvalds@linux-foundation.org>
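An illustration of the style being applied ("foo" and the message text are made up, not taken from the patch): the newline belongs on the individual pr_<level>() call, not in pr_fmt().

  #define pr_fmt(fmt) "foo: " fmt             /* no trailing "\n" here */

  pr_info("initialised %lu entries\n", nr_entries);   /* "\n" on the call */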
-
Jann Horn authored
The zlib inflate code has an old micro-optimization based on the assumption that for pre-increment memory accesses, the compiler will generate code that fits better into the processor's pipeline than what would be generated for post-increment memory accesses. This optimization was already removed in upstream zlib in 2016: https://github.com/madler/zlib/commit/9aaec95e8211 This optimization causes UB according to C99, which says in section 6.5.6 "Additive operators": "If both the pointer operand and the result point to elements of the same array object, or one past the last element of the array object, the evaluation shall not produce an overflow; otherwise, the behavior is undefined". This UB is not only a theoretical concern, but can also cause trouble for future work on compiler-based sanitizers. According to the zlib commit, this optimization also is not optimal anymore with modern compilers. Replace uses of OFF, PUP and UP_UNALIGNED with their definitions in the POSTINC case, and remove the macro definitions, just like in the upstream patch. Signed-off-by: Jann Horn <jannh@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Cc: Mikhail Zaslonko <zaslonko@linux.ibm.com> Link: http://lkml.kernel.org/r/20200507123112.252723-1-jannh@google.comSigned-off-by: Linus Torvalds <torvalds@linux-foundation.org>
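In outline (simplified from inffast.c, not the full hunk), the change replaces the pre-increment macros with plain post-increment accesses:

  /* Before: pointers sat one element before the data (UB per C99 6.5.6). */
  #define OFF 1
  #define PUP(a) *++(a)
      PUP(out) = PUP(from);

  /* After: ordinary post-increment, no off-by-one pointer arithmetic. */
      *out++ = *from++;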
-
Jason Yan authored
Fix the following sparse warning: lib/test_lockup.c:145:14: warning: symbol 'test_inode' was not declared. Should it be static? Reported-by: Hulk Robot <hulkci@huawei.com> Signed-off-by: Jason Yan <yanaijie@huawei.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Link: http://lkml.kernel.org/r/20200417074021.46411-1-yanaijie@huawei.comSigned-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
KP Singh authored
When updating a piece of broken logic from using get_user to strncpy_from_user, we noticed that a warning which is expected when calling a function that might fault from an atomic context with pagefaults enabled disappeared. Not having this warning in place can lead to calling strncpy_from_user from an atomic context and eventually kernel crashes/stack corruption. Signed-off-by: KP Singh <kpsingh@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Reviewed-by: Andrew Morton <akpm@linux-foundation.org> Cc: Jann Horn <jannh@google.com> Cc: Christophe Leroy <christophe.leroy@c-s.fr> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/20200414225705.255711-1-kpsingh@chromium.orgSigned-off-by: Linus Torvalds <torvalds@linux-foundation.org>
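A hedged sketch of the kind of annotation being restored (not the literal lib/strncpy_from_user.c diff): a faultable user copy should complain loudly when invoked from a context that cannot take page faults, instead of corrupting state later.

  long strncpy_from_user(char *dst, const char __user *src, long count)
  {
      might_fault();    /* warns if called from atomic context with
                           page faults enabled, instead of crashing later */
      /* ... existing copy loop unchanged ... */
  }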
-