Commit af709adf authored by Linus Torvalds


Merge tag 'mm-hotfixes-stable-2024-04-05-11-30' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Pull misc fixes from Andrew Morton:
 "8 hotfixes, 3 are cc:stable

  There are a couple of fixups for this cycle's vmalloc changes and one
  for the stackdepot changes. And a fix for a very old x86 PAT issue
  which can cause a warning splat"

* tag 'mm-hotfixes-stable-2024-04-05-11-30' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm:
  stackdepot: rename pool_index to pool_index_plus_1
  x86/mm/pat: fix VM_PAT handling in COW mappings
  MAINTAINERS: change vmware.com addresses to broadcom.com
  selftests/mm: include strings.h for ffsl
  mm: vmalloc: fix lockdep warning
  mm: vmalloc: bail out early in find_vmap_area() if vmap is not init
  init: open output files from cpio unpacking with O_LARGEFILE
  mm/secretmem: fix GUP-fast succeeding on secretmem folios
parents c7830236 a6c1d9cb
@@ -20,6 +20,7 @@ Adam Oldham <oldhamca@gmail.com>
 Adam Radford <aradford@gmail.com>
 Adriana Reus <adi.reus@gmail.com> <adriana.reus@intel.com>
 Adrian Bunk <bunk@stusta.de>
+Ajay Kaher <ajay.kaher@broadcom.com> <akaher@vmware.com>
 Akhil P Oommen <quic_akhilpo@quicinc.com> <akhilpo@codeaurora.org>
 Alan Cox <alan@lxorguk.ukuu.org.uk>
 Alan Cox <root@hraefn.swansea.linux.org.uk>
@@ -36,6 +37,7 @@ Alexei Avshalom Lazar <quic_ailizaro@quicinc.com> <ailizaro@codeaurora.org>
 Alexei Starovoitov <ast@kernel.org> <alexei.starovoitov@gmail.com>
 Alexei Starovoitov <ast@kernel.org> <ast@fb.com>
 Alexei Starovoitov <ast@kernel.org> <ast@plumgrid.com>
+Alexey Makhalov <alexey.amakhalov@broadcom.com> <amakhalov@vmware.com>
 Alex Hung <alexhung@gmail.com> <alex.hung@canonical.com>
 Alex Shi <alexs@kernel.org> <alex.shi@intel.com>
 Alex Shi <alexs@kernel.org> <alex.shi@linaro.org>
@@ -110,6 +112,7 @@ Brendan Higgins <brendan.higgins@linux.dev> <brendanhiggins@google.com>
 Brian Avery <b.avery@hp.com>
 Brian King <brking@us.ibm.com>
 Brian Silverman <bsilver16384@gmail.com> <brian.silverman@bluerivertech.com>
+Bryan Tan <bryan-bt.tan@broadcom.com> <bryantan@vmware.com>
 Cai Huoqing <cai.huoqing@linux.dev> <caihuoqing@baidu.com>
 Can Guo <quic_cang@quicinc.com> <cang@codeaurora.org>
 Carl Huang <quic_cjhuang@quicinc.com> <cjhuang@codeaurora.org>
@@ -529,6 +532,7 @@ Rocky Liao <quic_rjliao@quicinc.com> <rjliao@codeaurora.org>
 Roman Gushchin <roman.gushchin@linux.dev> <guro@fb.com>
 Roman Gushchin <roman.gushchin@linux.dev> <guroan@gmail.com>
 Roman Gushchin <roman.gushchin@linux.dev> <klamm@yandex-team.ru>
+Ronak Doshi <ronak.doshi@broadcom.com> <doshir@vmware.com>
 Muchun Song <muchun.song@linux.dev> <songmuchun@bytedance.com>
 Muchun Song <muchun.song@linux.dev> <smuchun@gmail.com>
 Ross Zwisler <zwisler@kernel.org> <ross.zwisler@linux.intel.com>
@@ -651,6 +655,7 @@ Viresh Kumar <vireshk@kernel.org> <viresh.kumar@st.com>
 Viresh Kumar <vireshk@kernel.org> <viresh.linux@gmail.com>
 Viresh Kumar <viresh.kumar@linaro.org> <viresh.kumar@linaro.org>
 Viresh Kumar <viresh.kumar@linaro.org> <viresh.kumar@linaro.com>
+Vishnu Dasa <vishnu.dasa@broadcom.com> <vdasa@vmware.com>
 Vivek Aknurwar <quic_viveka@quicinc.com> <viveka@codeaurora.org>
 Vivien Didelot <vivien.didelot@gmail.com> <vivien.didelot@savoirfairelinux.com>
 Vlad Dogaru <ddvlad@gmail.com> <vlad.dogaru@intel.com>
@@ -16733,9 +16733,9 @@ F: include/uapi/linux/ppdev.h
 
 PARAVIRT_OPS INTERFACE
 M:	Juergen Gross <jgross@suse.com>
-R:	Ajay Kaher <akaher@vmware.com>
-R:	Alexey Makhalov <amakhalov@vmware.com>
-R:	VMware PV-Drivers Reviewers <pv-drivers@vmware.com>
+R:	Ajay Kaher <ajay.kaher@broadcom.com>
+R:	Alexey Makhalov <alexey.amakhalov@broadcom.com>
+R:	Broadcom internal kernel review list <bcm-kernel-feedback-list@broadcom.com>
 L:	virtualization@lists.linux.dev
 L:	x86@kernel.org
 S:	Supported
@@ -23654,9 +23654,9 @@ S: Supported
 F:	drivers/misc/vmw_balloon.c
 
 VMWARE HYPERVISOR INTERFACE
-M:	Ajay Kaher <akaher@vmware.com>
-M:	Alexey Makhalov <amakhalov@vmware.com>
-R:	VMware PV-Drivers Reviewers <pv-drivers@vmware.com>
+M:	Ajay Kaher <ajay.kaher@broadcom.com>
+M:	Alexey Makhalov <alexey.amakhalov@broadcom.com>
+R:	Broadcom internal kernel review list <bcm-kernel-feedback-list@broadcom.com>
 L:	virtualization@lists.linux.dev
 L:	x86@kernel.org
 S:	Supported
@@ -23665,33 +23665,34 @@ F: arch/x86/include/asm/vmware.h
 F:	arch/x86/kernel/cpu/vmware.c
 
 VMWARE PVRDMA DRIVER
-M:	Bryan Tan <bryantan@vmware.com>
-M:	Vishnu Dasa <vdasa@vmware.com>
-R:	VMware PV-Drivers Reviewers <pv-drivers@vmware.com>
+M:	Bryan Tan <bryan-bt.tan@broadcom.com>
+M:	Vishnu Dasa <vishnu.dasa@broadcom.com>
+R:	Broadcom internal kernel review list <bcm-kernel-feedback-list@broadcom.com>
 L:	linux-rdma@vger.kernel.org
 S:	Supported
 F:	drivers/infiniband/hw/vmw_pvrdma/
 
 VMWARE PVSCSI DRIVER
-M:	Vishal Bhakta <vbhakta@vmware.com>
-R:	VMware PV-Drivers Reviewers <pv-drivers@vmware.com>
+M:	Vishal Bhakta <vishal.bhakta@broadcom.com>
+R:	Broadcom internal kernel review list <bcm-kernel-feedback-list@broadcom.com>
 L:	linux-scsi@vger.kernel.org
 S:	Supported
 F:	drivers/scsi/vmw_pvscsi.c
 F:	drivers/scsi/vmw_pvscsi.h
 
 VMWARE VIRTUAL PTP CLOCK DRIVER
-R:	Ajay Kaher <akaher@vmware.com>
-R:	Alexey Makhalov <amakhalov@vmware.com>
-R:	VMware PV-Drivers Reviewers <pv-drivers@vmware.com>
+M:	Nick Shi <nick.shi@broadcom.com>
+R:	Ajay Kaher <ajay.kaher@broadcom.com>
+R:	Alexey Makhalov <alexey.amakhalov@broadcom.com>
+R:	Broadcom internal kernel review list <bcm-kernel-feedback-list@broadcom.com>
 L:	netdev@vger.kernel.org
 S:	Supported
 F:	drivers/ptp/ptp_vmw.c
 
 VMWARE VMCI DRIVER
-M:	Bryan Tan <bryantan@vmware.com>
-M:	Vishnu Dasa <vdasa@vmware.com>
-R:	VMware PV-Drivers Reviewers <pv-drivers@vmware.com>
+M:	Bryan Tan <bryan-bt.tan@broadcom.com>
+M:	Vishnu Dasa <vishnu.dasa@broadcom.com>
+R:	Broadcom internal kernel review list <bcm-kernel-feedback-list@broadcom.com>
 L:	linux-kernel@vger.kernel.org
 S:	Supported
 F:	drivers/misc/vmw_vmci/
@@ -23706,16 +23707,16 @@ F: drivers/input/mouse/vmmouse.c
 F:	drivers/input/mouse/vmmouse.h
 
 VMWARE VMXNET3 ETHERNET DRIVER
-M:	Ronak Doshi <doshir@vmware.com>
-R:	VMware PV-Drivers Reviewers <pv-drivers@vmware.com>
+M:	Ronak Doshi <ronak.doshi@broadcom.com>
+R:	Broadcom internal kernel review list <bcm-kernel-feedback-list@broadcom.com>
 L:	netdev@vger.kernel.org
 S:	Supported
 F:	drivers/net/vmxnet3/
 
 VMWARE VSOCK VMCI TRANSPORT DRIVER
-M:	Bryan Tan <bryantan@vmware.com>
-M:	Vishnu Dasa <vdasa@vmware.com>
-R:	VMware PV-Drivers Reviewers <pv-drivers@vmware.com>
+M:	Bryan Tan <bryan-bt.tan@broadcom.com>
+M:	Vishnu Dasa <vishnu.dasa@broadcom.com>
+R:	Broadcom internal kernel review list <bcm-kernel-feedback-list@broadcom.com>
 L:	linux-kernel@vger.kernel.org
 S:	Supported
 F:	net/vmw_vsock/vmci_transport*
@@ -947,6 +947,38 @@ static void free_pfn_range(u64 paddr, unsigned long size)
 	memtype_free(paddr, paddr + size);
 }
 
+static int get_pat_info(struct vm_area_struct *vma, resource_size_t *paddr,
+		pgprot_t *pgprot)
+{
+	unsigned long prot;
+
+	VM_WARN_ON_ONCE(!(vma->vm_flags & VM_PAT));
+
+	/*
+	 * We need the starting PFN and cachemode used for track_pfn_remap()
+	 * that covered the whole VMA. For most mappings, we can obtain that
+	 * information from the page tables. For COW mappings, we might now
+	 * suddenly have anon folios mapped and follow_phys() will fail.
+	 *
+	 * Fallback to using vma->vm_pgoff, see remap_pfn_range_notrack(), to
+	 * detect the PFN. If we need the cachemode as well, we're out of luck
+	 * for now and have to fail fork().
+	 */
+	if (!follow_phys(vma, vma->vm_start, 0, &prot, paddr)) {
+		if (pgprot)
+			*pgprot = __pgprot(prot);
+		return 0;
+	}
+	if (is_cow_mapping(vma->vm_flags)) {
+		if (pgprot)
+			return -EINVAL;
+		*paddr = (resource_size_t)vma->vm_pgoff << PAGE_SHIFT;
+		return 0;
+	}
+
+	WARN_ON_ONCE(1);
+	return -EINVAL;
+}
+
 /*
  * track_pfn_copy is called when vma that is covering the pfnmap gets
  * copied through copy_page_range().
@@ -957,20 +989,13 @@ static void free_pfn_range(u64 paddr, unsigned long size)
 int track_pfn_copy(struct vm_area_struct *vma)
 {
 	resource_size_t paddr;
-	unsigned long prot;
 	unsigned long vma_size = vma->vm_end - vma->vm_start;
 	pgprot_t pgprot;
 
 	if (vma->vm_flags & VM_PAT) {
-		/*
-		 * reserve the whole chunk covered by vma. We need the
-		 * starting address and protection from pte.
-		 */
-		if (follow_phys(vma, vma->vm_start, 0, &prot, &paddr)) {
-			WARN_ON_ONCE(1);
+		if (get_pat_info(vma, &paddr, &pgprot))
 			return -EINVAL;
-		}
-		pgprot = __pgprot(prot);
+		/* reserve the whole chunk covered by vma. */
 		return reserve_pfn_range(paddr, vma_size, &pgprot, 1);
 	}
 
@@ -1045,7 +1070,6 @@ void untrack_pfn(struct vm_area_struct *vma, unsigned long pfn,
 		 unsigned long size, bool mm_wr_locked)
 {
 	resource_size_t paddr;
-	unsigned long prot;
 
 	if (vma && !(vma->vm_flags & VM_PAT))
 		return;
@@ -1053,11 +1077,8 @@ void untrack_pfn(struct vm_area_struct *vma, unsigned long pfn,
 	/* free the chunk starting from pfn or the whole chunk */
 	paddr = (resource_size_t)pfn << PAGE_SHIFT;
 	if (!paddr && !size) {
-		if (follow_phys(vma, vma->vm_start, 0, &prot, &paddr)) {
-			WARN_ON_ONCE(1);
+		if (get_pat_info(vma, &paddr, NULL))
 			return;
-		}
 		size = vma->vm_end - vma->vm_start;
 	}
 
 	free_pfn_range(paddr, size);
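The decision tree in the new `get_pat_info()` helper is small enough to exercise in isolation. Below is a userspace sketch of that logic; `struct toy_vma`, the `pte_maps_pfn` flag standing in for a successful `follow_phys()`, and `TOY_EINVAL` are all simplified, hypothetical stand-ins for illustration, not the kernel's real types or API:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define PAGE_SHIFT 12
#define TOY_EINVAL 22

/* Simplified stand-in for the VMA state get_pat_info() consults. */
struct toy_vma {
	unsigned long vm_pgoff;  /* first PFN, as set up by remap_pfn_range() */
	bool is_cow;             /* result of is_cow_mapping(vma->vm_flags) */
	bool pte_maps_pfn;       /* would follow_phys() still succeed? */
	uint64_t pte_paddr;      /* physical address from the page table */
	unsigned long pte_prot;  /* cachemode bits from the page table */
};

/* Sketch of the fix's decision tree:
 * 1) the page table still maps the PFN: return PFN and cachemode;
 * 2) COW mapping where anon folios replaced the PFN: recover the start
 *    address from vm_pgoff, but the cachemode is lost, so callers that
 *    need it (fork() -> track_pfn_copy()) get -EINVAL;
 * 3) anything else is a bug (WARN_ON_ONCE in the real code). */
static int toy_get_pat_info(const struct toy_vma *vma, uint64_t *paddr,
			    unsigned long *prot)
{
	if (vma->pte_maps_pfn) {
		if (prot)
			*prot = vma->pte_prot;
		*paddr = vma->pte_paddr;
		return 0;
	}
	if (vma->is_cow) {
		if (prot)
			return -TOY_EINVAL;
		*paddr = (uint64_t)vma->vm_pgoff << PAGE_SHIFT;
		return 0;
	}
	return -TOY_EINVAL;
}
```

The key asymmetry the sketch shows: for a COW mapping the starting physical address is recoverable from `vm_pgoff`, but the cachemode only ever lived in the PTEs, so once anon folios replace them it is gone for good.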
@@ -13,10 +13,10 @@ static inline bool folio_is_secretmem(struct folio *folio)
 	/*
 	 * Using folio_mapping() is quite slow because of the actual call
 	 * instruction.
-	 * We know that secretmem pages are not compound and LRU so we can
+	 * We know that secretmem pages are not compound, so we can
 	 * save a couple of cycles here.
 	 */
-	if (folio_test_large(folio) || !folio_test_lru(folio))
+	if (folio_test_large(folio))
 		return false;
 
 	mapping = (struct address_space *)
@@ -44,10 +44,9 @@ typedef u32 depot_stack_handle_t;
 union handle_parts {
 	depot_stack_handle_t handle;
 	struct {
-		/* pool_index is offset by 1 */
-		u32 pool_index	: DEPOT_POOL_INDEX_BITS;
-		u32 offset	: DEPOT_OFFSET_BITS;
-		u32 extra	: STACK_DEPOT_EXTRA_BITS;
+		u32 pool_index_plus_1	: DEPOT_POOL_INDEX_BITS;
+		u32 offset		: DEPOT_OFFSET_BITS;
+		u32 extra		: STACK_DEPOT_EXTRA_BITS;
 	};
 };
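The rename makes an invariant visible in the field name itself: the stored value is the pool index plus one, so an all-zero handle can never alias a valid record in pool 0 at offset 0. A standalone sketch of the encoding; the bit widths below are illustrative assumptions (the kernel's `DEPOT_POOL_INDEX_BITS`, `DEPOT_OFFSET_BITS` and `STACK_DEPOT_EXTRA_BITS` are configuration-dependent):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative widths only; they merely need to sum to 32 bits. */
#define POOL_INDEX_BITS 17
#define OFFSET_BITS     10
#define EXTRA_BITS      5

union toy_handle_parts {
	uint32_t handle;
	struct {
		uint32_t pool_index_plus_1 : POOL_INDEX_BITS;
		uint32_t offset            : OFFSET_BITS;
		uint32_t extra             : EXTRA_BITS;
	};
};

/* Store pool_index + 1, as depot_pop_free_pool() does, so that a
 * zero-initialized handle is always distinguishable from any record. */
static uint32_t toy_encode(uint32_t pool_index, uint32_t offset)
{
	union toy_handle_parts parts = { .handle = 0 };

	parts.pool_index_plus_1 = pool_index + 1;
	parts.offset = offset;
	return parts.handle;
}

/* Undo the offset on lookup, as depot_fetch_stack() does. */
static uint32_t toy_pool_index(uint32_t handle)
{
	union toy_handle_parts parts = { .handle = handle };

	return parts.pool_index_plus_1 - 1;
}
```

With the old name `pool_index`, code that forgot the `+/- 1` compiled silently; after the rename, `parts.pool_index_plus_1` used directly as an index reads as obviously wrong.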
@@ -367,7 +367,7 @@ static int __init do_name(void)
 	if (S_ISREG(mode)) {
 		int ml = maybe_link();
 		if (ml >= 0) {
-			int openflags = O_WRONLY|O_CREAT;
+			int openflags = O_WRONLY|O_CREAT|O_LARGEFILE;
 			if (ml != 1)
 				openflags |= O_TRUNC;
 			wfile = filp_open(collected, openflags, mode);
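The flag selection above can be mirrored in userspace. `O_LARGEFILE` is what allows files larger than 2 GiB to be created where the legacy 32-bit file API would otherwise fail with `EFBIG`; the truncate decision depends on whether this cpio entry is a hardlink to one seen earlier (`ml == 1`). The helper name below is hypothetical (the kernel computes the flags inline in `do_name()`):

```c
#define _GNU_SOURCE  /* exposes O_LARGEFILE in glibc's <fcntl.h> */
#include <assert.h>
#include <fcntl.h>

/* Mirror of the flag selection in do_name(): regular files unpacked
 * from the initramfs cpio are opened write-only, created if missing,
 * with O_LARGEFILE, and truncated unless this entry hardlinks an
 * earlier one (ml == 1). */
static int cpio_openflags(int ml)
{
	int openflags = O_WRONLY | O_CREAT | O_LARGEFILE;

	if (ml != 1)
		openflags |= O_TRUNC;
	return openflags;
}
```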
@@ -330,7 +330,7 @@ static struct stack_record *depot_pop_free_pool(void **prealloc, size_t size)
 	stack = current_pool + pool_offset;
 
 	/* Pre-initialize handle once. */
-	stack->handle.pool_index = pool_index + 1;
+	stack->handle.pool_index_plus_1 = pool_index + 1;
 	stack->handle.offset = pool_offset >> DEPOT_STACK_ALIGN;
 	stack->handle.extra = 0;
 	INIT_LIST_HEAD(&stack->hash_list);
@@ -441,7 +441,7 @@ static struct stack_record *depot_fetch_stack(depot_stack_handle_t handle)
 	const int pools_num_cached = READ_ONCE(pools_num);
 	union handle_parts parts = { .handle = handle };
 	void *pool;
-	u32 pool_index = parts.pool_index - 1;
+	u32 pool_index = parts.pool_index_plus_1 - 1;
 	size_t offset = parts.offset << DEPOT_STACK_ALIGN;
 	struct stack_record *stack;
@@ -5973,6 +5973,10 @@ int follow_phys(struct vm_area_struct *vma,
 		goto out;
 	pte = ptep_get(ptep);
 
+	/* Never return PFNs of anon folios in COW mappings. */
+	if (vm_normal_folio(vma, address, pte))
+		goto unlock;
+
 	if ((flags & FOLL_WRITE) && !pte_write(pte))
 		goto unlock;
@@ -989,6 +989,27 @@ unsigned long vmalloc_nr_pages(void)
 	return atomic_long_read(&nr_vmalloc_pages);
 }
 
+static struct vmap_area *__find_vmap_area(unsigned long addr, struct rb_root *root)
+{
+	struct rb_node *n = root->rb_node;
+
+	addr = (unsigned long)kasan_reset_tag((void *)addr);
+
+	while (n) {
+		struct vmap_area *va;
+
+		va = rb_entry(n, struct vmap_area, rb_node);
+		if (addr < va->va_start)
+			n = n->rb_left;
+		else if (addr >= va->va_end)
+			n = n->rb_right;
+		else
+			return va;
+	}
+
+	return NULL;
+}
+
 /* Look up the first VA which satisfies addr < va_end, NULL if none. */
 static struct vmap_area *
 __find_vmap_area_exceed_addr(unsigned long addr, struct rb_root *root)
@@ -1025,47 +1046,39 @@ __find_vmap_area_exceed_addr(unsigned long addr, struct rb_root *root)
 static struct vmap_node *
 find_vmap_area_exceed_addr_lock(unsigned long addr, struct vmap_area **va)
 {
-	struct vmap_node *vn, *va_node = NULL;
-	struct vmap_area *va_lowest;
+	unsigned long va_start_lowest;
+	struct vmap_node *vn;
 	int i;
 
-	for (i = 0; i < nr_vmap_nodes; i++) {
+repeat:
+	for (i = 0, va_start_lowest = 0; i < nr_vmap_nodes; i++) {
 		vn = &vmap_nodes[i];
 
 		spin_lock(&vn->busy.lock);
-		va_lowest = __find_vmap_area_exceed_addr(addr, &vn->busy.root);
-		if (va_lowest) {
-			if (!va_node || va_lowest->va_start < (*va)->va_start) {
-				if (va_node)
-					spin_unlock(&va_node->busy.lock);
-
-				*va = va_lowest;
-				va_node = vn;
-
-				continue;
-			}
-		}
+		*va = __find_vmap_area_exceed_addr(addr, &vn->busy.root);
+
+		if (*va)
+			if (!va_start_lowest || (*va)->va_start < va_start_lowest)
+				va_start_lowest = (*va)->va_start;
+
 		spin_unlock(&vn->busy.lock);
 	}
 
-	return va_node;
-}
-
-static struct vmap_area *__find_vmap_area(unsigned long addr, struct rb_root *root)
-{
-	struct rb_node *n = root->rb_node;
-
-	addr = (unsigned long)kasan_reset_tag((void *)addr);
-
-	while (n) {
-		struct vmap_area *va;
-
-		va = rb_entry(n, struct vmap_area, rb_node);
-		if (addr < va->va_start)
-			n = n->rb_left;
-		else if (addr >= va->va_end)
-			n = n->rb_right;
-		else
-			return va;
+	/*
+	 * Check if found VA exists, it might have gone away. In this case we
+	 * repeat the search because a VA has been removed concurrently and we
+	 * need to proceed to the next one, which is a rare case.
+	 */
+	if (va_start_lowest) {
+		vn = addr_to_node(va_start_lowest);
+
+		spin_lock(&vn->busy.lock);
+		*va = __find_vmap_area(va_start_lowest, &vn->busy.root);
+
+		if (*va)
+			return vn;
+
+		spin_unlock(&vn->busy.lock);
+		goto repeat;
 	}
 
 	return NULL;
@@ -2343,6 +2356,9 @@ struct vmap_area *find_vmap_area(unsigned long addr)
 	struct vmap_area *va;
 	int i, j;
 
+	if (unlikely(!vmap_initialized))
+		return NULL;
+
 	/*
 	 * An addr_to_node_id(addr) converts an address to a node index
 	 * where a VA is located. If VA spans several zones and passed
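The lockdep fix works by never holding two node locks at once: the scan records only the lowest `va_start` under one lock at a time, then re-locks the single owning node, re-looks the area up with `__find_vmap_area()`, and retries if it vanished in between. That lookup itself is a plain interval search: descend left when the address is below `va_start`, right when it is at or past `va_end`, and stop when the area contains it. The same comparisons over a sorted array behave identically for lookups; the userspace sketch below uses hypothetical names:

```c
#include <assert.h>
#include <stddef.h>

/* Half-open [start, end) address range, mimicking a vmap_area. */
struct area {
	unsigned long start, end;
};

/* Userspace sketch of __find_vmap_area()'s rb-tree walk, done as a
 * binary search over areas sorted by start address. */
static const struct area *find_area(unsigned long addr,
				    const struct area *areas, int n)
{
	int lo = 0, hi = n - 1;

	while (lo <= hi) {
		int mid = lo + (hi - lo) / 2;

		if (addr < areas[mid].start)
			hi = mid - 1;           /* descend left */
		else if (addr >= areas[mid].end)
			lo = mid + 1;           /* descend right */
		else
			return &areas[mid];     /* start <= addr < end */
	}
	return NULL;
}
```

Addresses in the gaps between areas, or exactly at an area's `end`, correctly miss, which is why the re-lookup after dropping the lock can fail and trigger the `goto repeat` path in the kernel code.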
@@ -3,7 +3,7 @@
 #include <stdbool.h>
 #include <sys/mman.h>
 #include <err.h>
-#include <string.h> /* ffsl() */
+#include <strings.h> /* ffsl() */
 #include <unistd.h> /* _SC_PAGESIZE */
 
 #define BIT_ULL(nr) (1ULL << (nr))