- 21 Oct, 2018 40 commits
-
Matthew Wilcox authored
Since the XArray is embedded in the struct address_space, its address contains exactly as much entropy as the address of the mapping. This patch is purely preparatory for later patches which will simplify the wait/wake interfaces.
Signed-off-by: Matthew Wilcox <willy@infradead.org>
-
Matthew Wilcox authored
Remove mentions of 'radix' and 'radix tree'. Simplify some names by dropping the word 'mapping'.
Signed-off-by: Matthew Wilcox <willy@infradead.org>
-
Matthew Wilcox authored
This is a straightforward conversion.
Signed-off-by: Matthew Wilcox <willy@infradead.org>
-
Matthew Wilcox authored
This is close to a 1:1 replacement of radix tree APIs with their XArray equivalents. It would be possible to optimise nilfs_copy_back_pages(), but that doesn't seem to be in the performance path. Also, I think it has a pre-existing bug, and I've added a note to that effect in the source code.
Signed-off-by: Matthew Wilcox <willy@infradead.org>
-
Matthew Wilcox authored
A couple of short loops.
Signed-off-by: Matthew Wilcox <willy@infradead.org>
-
Matthew Wilcox authored
Mostly comment fixes, but one use of __xa_set_mark().
Signed-off-by: Matthew Wilcox <willy@infradead.org>
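For context, the double-underscore variant can be sketched as below. This is an illustrative fragment, not code from the patch; the helper name is hypothetical, and it assumes the post-conversion world where `mapping->i_pages` is an XArray and `PAGECACHE_TAG_DIRTY` is an `xa_mark_t`. The point of `__xa_set_mark()` is that it does not take the xa_lock itself, so it can be called from a context already holding it:

```c
#include <linux/xarray.h>
#include <linux/fs.h>
#include <linux/pagemap.h>

/* Hypothetical helper: set a mark while already holding the lock for
 * other updates.  __xa_set_mark() skips taking xa_lock itself. */
static void mark_index_dirty(struct address_space *mapping, pgoff_t index)
{
	xa_lock_irq(&mapping->i_pages);
	__xa_set_mark(&mapping->i_pages, index, PAGECACHE_TAG_DIRTY);
	xa_unlock_irq(&mapping->i_pages);
}
```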
-
Matthew Wilcox authored
Signed-off-by: Matthew Wilcox <willy@infradead.org>
Acked-by: David Sterba <dsterba@suse.com>
-
Matthew Wilcox authored
Remove the last mentions of radix tree from various comments.
Signed-off-by: Matthew Wilcox <willy@infradead.org>
-
Matthew Wilcox authored
Switch to a batch-processing model like memfd_wait_for_pins() and reuse the xa_state it has already set up.
Signed-off-by: Matthew Wilcox <willy@infradead.org>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
-
Matthew Wilcox authored
Simplify the locking by taking the spinlock while we walk the tree on the assumption that many acquires and releases of the lock will be worse than holding the lock while we process an entire batch of pages.
Signed-off-by: Matthew Wilcox <willy@infradead.org>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
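The batch-under-one-lock pattern can be sketched as follows. This is an illustrative sketch, not the patch itself; the function name and the `XAS_BATCH_SCHED` interval are hypothetical stand-ins. The idea is to hold the lock across many entries and only drop it periodically, using `xas_pause()` to make the iterator safe to resume after the lock is released:

```c
#include <linux/xarray.h>
#include <linux/sched.h>

#define XAS_BATCH_SCHED 4096	/* hypothetical reschedule interval */

/* Sketch: walk entries with the lock held, dropping it only every few
 * thousand entries so we can reschedule, instead of acquiring and
 * releasing the lock around every single entry. */
static void process_batch(struct xarray *xa)
{
	XA_STATE(xas, xa, 0);
	void *entry;
	unsigned int n = 0;

	xas_lock_irq(&xas);
	xas_for_each(&xas, entry, ULONG_MAX) {
		/* ... handle entry ... */
		if (++n % XAS_BATCH_SCHED)
			continue;
		xas_pause(&xas);	/* safe point to drop the lock */
		xas_unlock_irq(&xas);
		cond_resched();
		xas_lock_irq(&xas);
	}
	xas_unlock_irq(&xas);
}
```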
-
Matthew Wilcox authored
Simpler code because the XArray takes care of things like the limit and dereferencing the slot.
Signed-off-by: Matthew Wilcox <willy@infradead.org>
-
Matthew Wilcox authored
Since we are conditionally storing NULL in the XArray, we do not need to allocate memory and the GFP flags will be unused.
Signed-off-by: Matthew Wilcox <willy@infradead.org>
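A minimal sketch of why that is safe (the helper name is hypothetical): storing NULL only removes an entry and prunes nodes that become empty, so it can never need a node allocation, and the gfp_t argument is never consulted:

```c
#include <linux/xarray.h>

/* Sketch: storing NULL never allocates, so the GFP flags are unused. */
static void remove_entry(struct xarray *xa, unsigned long index)
{
	xa_store(xa, index, NULL, 0);	/* equivalent: xa_erase(xa, index) */
}
```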
-
Matthew Wilcox authored
xa_find() is a slightly easier API to use than radix_tree_gang_lookup_slot() because it contains its own RCU locking. This commit removes the last user of radix_tree_gang_lookup_slot() so remove the function too.
Signed-off-by: Matthew Wilcox <willy@infradead.org>
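The xa_find() pattern can be sketched like this; an illustrative fragment (the counting helper is hypothetical), not code from the commit. xa_find() takes the RCU read lock internally and updates the index to the position of the entry it found, so there is no slot to dereference:

```c
#include <linux/xarray.h>

/* Sketch: walk all present entries with xa_find()/xa_find_after();
 * no explicit rcu_read_lock() is needed in the caller. */
static unsigned int count_entries(struct xarray *xa)
{
	unsigned long index = 0;
	unsigned int count = 0;
	void *entry;

	for (entry = xa_find(xa, &index, ULONG_MAX, XA_PRESENT); entry;
	     entry = xa_find_after(xa, &index, ULONG_MAX, XA_PRESENT))
		count++;
	return count;
}
```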
-
Matthew Wilcox authored
We can use xas_find_conflict() instead of radix_tree_gang_lookup_slot() to find any conflicting entry and combine the three paths through this function into one.
Signed-off-by: Matthew Wilcox <willy@infradead.org>
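The conflict check can be sketched as below; an illustrative fragment with a hypothetical helper name, not the patch itself. With the xa_state pointed at the range we want to store into, xas_find_conflict() returns the first entry already present, or NULL if a store would not collide with anything:

```c
#include <linux/xarray.h>

/* Sketch: report -EEXIST if anything already occupies the range the
 * xa_state covers, without dereferencing slots by hand. */
static int check_range_free(struct xarray *xa, unsigned long index)
{
	XA_STATE(xas, xa, index);
	int err = 0;

	xas_lock(&xas);
	if (xas_find_conflict(&xas))
		err = -EEXIST;	/* something is already there */
	xas_unlock(&xas);
	return err;
}
```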
-
Matthew Wilcox authored
This is a 1:1 conversion. The major part of this patch is converting the test framework from userspace to kernel space and mirroring the algorithm now used in find_swap_entry().
Signed-off-by: Matthew Wilcox <willy@infradead.org>
-
Matthew Wilcox authored
xa_load() has its own RCU locking, so we can eliminate the explicit RCU locking here.
Signed-off-by: Matthew Wilcox <willy@infradead.org>
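The before/after shape of such a conversion can be sketched as below (illustrative fragments, not the actual diff; `entry`, `mapping` and `index` are placeholders):

```c
#include <linux/xarray.h>
#include <linux/rcupdate.h>

/* Before: the radix tree lookup relied on the caller's RCU protection. */
rcu_read_lock();
entry = radix_tree_lookup(&mapping->i_pages, index);
rcu_read_unlock();

/* After: xa_load() takes rcu_read_lock() around its walk itself. */
entry = xa_load(&mapping->i_pages, index);
```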
-
Matthew Wilcox authored
Rename shmem_radix_tree_replace() to shmem_replace_entry() and convert it to use the XArray API.
Signed-off-by: Matthew Wilcox <willy@infradead.org>
-
Matthew Wilcox authored
Removes sparse warnings.
Signed-off-by: Matthew Wilcox <willy@infradead.org>
-
Matthew Wilcox authored
This is just a variable rename and comment change.
Signed-off-by: Matthew Wilcox <willy@infradead.org>
-
Matthew Wilcox authored
Slightly shorter and easier to read code.
Signed-off-by: Matthew Wilcox <willy@infradead.org>
-
Matthew Wilcox authored
I found another victim of the radix tree being hard to use. Because there was no call to radix_tree_preload(), khugepaged was allocating radix_tree_nodes using GFP_ATOMIC. I also converted a local_irq_save()/restore() pair to disable()/enable().
Signed-off-by: Matthew Wilcox <willy@infradead.org>
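The XArray idiom that avoids GFP_ATOMIC here can be sketched as follows; an illustrative fragment with a hypothetical helper name, not the khugepaged code itself. If a store fails for lack of memory, xas_nomem() performs the allocation with friendlier flags outside the lock and the loop retries:

```c
#include <linux/xarray.h>

/* Sketch of the xas_nomem() retry pattern: allocate with GFP_KERNEL
 * outside the lock instead of GFP_ATOMIC under it. */
static int store_page(struct xarray *xa, unsigned long index, void *page)
{
	XA_STATE(xas, xa, index);

	do {
		xas_lock_irq(&xas);
		xas_store(&xas, page);
		xas_unlock_irq(&xas);
	} while (xas_nomem(&xas, GFP_KERNEL));

	return xas_error(&xas);
}
```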
-
Matthew Wilcox authored
Quite a straightforward conversion.
Signed-off-by: Matthew Wilcox <willy@infradead.org>
-
Matthew Wilcox authored
Signed-off-by: Matthew Wilcox <willy@infradead.org>
-
Matthew Wilcox authored
This one is trivial.
Signed-off-by: Matthew Wilcox <willy@infradead.org>
-
Matthew Wilcox authored
Both callers of __delete_from_swap_cache have the swp_entry_t already, so pass that in to make constructing the XA_STATE easier.
Signed-off-by: Matthew Wilcox <willy@infradead.org>
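Why the swp_entry_t helps can be sketched like this (an illustrative fragment with a hypothetical helper name, not the actual function): the swap cache is indexed by swap offset, so a caller that already holds the entry can seed the xa_state directly instead of recomputing the index:

```c
#include <linux/xarray.h>
#include <linux/swap.h>

/* Sketch: build the XA_STATE straight from the swap entry's offset. */
static void delete_swap_entry(struct address_space *mapping, swp_entry_t entry)
{
	XA_STATE(xas, &mapping->i_pages, swp_offset(entry));

	xa_lock_irq(&mapping->i_pages);
	xas_store(&xas, NULL);	/* drop the swap cache entry */
	xa_unlock_irq(&mapping->i_pages);
}
```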
-
Matthew Wilcox authored
Combine __add_to_swap_cache and add_to_swap_cache into one function since there is no more need to preload.
Signed-off-by: Matthew Wilcox <willy@infradead.org>
-
Matthew Wilcox authored
This is essentially xa_cmpxchg() with the locking handled above us, and it doesn't have to handle replacing a NULL entry.
Signed-off-by: Matthew Wilcox <willy@infradead.org>
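For reference, the plain xa_cmpxchg() this resembles can be sketched as below (illustrative only; the helper name is hypothetical, and unlike the function in the patch it takes its own lock): replace @old with @new at @index only if @old is still present, returning the entry that was actually there:

```c
#include <linux/xarray.h>

/* Sketch: compare-and-exchange an entry, reporting a race as -EAGAIN. */
static int replace_entry(struct xarray *xa, unsigned long index,
			 void *old, void *new)
{
	void *prev = xa_cmpxchg(xa, index, old, new, GFP_KERNEL);

	if (xa_is_err(prev))
		return xa_err(prev);
	return prev == old ? 0 : -EAGAIN;	/* raced with another update */
}
```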
-
Matthew Wilcox authored
We construct an XA_STATE and use it to delete the node with xas_store() rather than adding a special function for this unique use case. Includes a test that simulates this usage for the test suite.
Signed-off-by: Matthew Wilcox <willy@infradead.org>
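The general shape of deleting through an xa_state can be sketched like this; note this is the generic single-index pattern with a hypothetical helper name, not the node-targeted setup the commit actually performs. Storing NULL removes the entry and lets the XArray free any nodes that become empty, so no special-purpose deletion helper is needed:

```c
#include <linux/xarray.h>

/* Sketch: delete by storing NULL through an xa_state. */
static void delete_index(struct xarray *xa, unsigned long index)
{
	XA_STATE(xas, xa, index);

	xas_lock(&xas);
	xas_store(&xas, NULL);	/* empty nodes are pruned automatically */
	xas_unlock(&xas);
}
```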
-
Matthew Wilcox authored
Includes moving mapping_tagged() to fs.h as a static inline, and changing it to return bool.
Signed-off-by: Matthew Wilcox <willy@infradead.org>
-
Matthew Wilcox authored
Instead of calling find_get_pages_range() and putting any reference, use xas_find() to iterate over any entries in the range, skipping the shadow/swap entries.
Signed-off-by: Matthew Wilcox <willy@infradead.org>
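Skipping shadow/swap entries in such an iteration can be sketched as below (an illustrative fragment with a hypothetical helper name, not the patch itself): xa_is_value() distinguishes value entries from page pointers, and xas_retry() restarts after a raced node reshape:

```c
#include <linux/xarray.h>
#include <linux/rcupdate.h>

/* Sketch: is there a real page (not a shadow/swap entry) in the range? */
static bool range_has_page(struct xarray *xa, unsigned long start,
			   unsigned long end)
{
	XA_STATE(xas, xa, start);
	void *entry;
	bool found = false;

	rcu_read_lock();
	xas_for_each(&xas, entry, end) {
		if (xas_retry(&xas, entry))
			continue;	/* raced with a node reshape */
		if (xa_is_value(entry))
			continue;	/* shadow/swap entry, not a page */
		found = true;
		break;
	}
	rcu_read_unlock();
	return found;
}
```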
-
Matthew Wilcox authored
Signed-off-by: Matthew Wilcox <willy@infradead.org>
-
Matthew Wilcox authored
Rename the function from page_cache_tree_delete_batch to just page_cache_delete_batch.
Signed-off-by: Matthew Wilcox <willy@infradead.org>
-
Matthew Wilcox authored
Now the page cache lookup is using the XArray, let's convert this regression test from the radix tree API to the XArray so it's testing roughly the same thing it was testing before.
Signed-off-by: Matthew Wilcox <willy@infradead.org>
-
Matthew Wilcox authored
Slight change of strategy here; if we have trouble getting hold of a page for whatever reason (e.g. a compound page is split underneath us), don't spin to stabilise the page, just continue the iteration, like we would if we failed to trylock the page. Since this is a speculative optimisation, it feels like we should allow the process to take an extra fault if it turns out to need this page instead of spending time to pin down a page it may not need.
Signed-off-by: Matthew Wilcox <willy@infradead.org>
-
Matthew Wilcox authored
Slightly shorter and simpler code.
Signed-off-by: Matthew Wilcox <willy@infradead.org>
-
Matthew Wilcox authored
The 'end' parameter of the xas_for_each iterator avoids a useless iteration at the end of the range.
Signed-off-by: Matthew Wilcox <willy@infradead.org>
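The bounded iteration can be sketched as below (an illustrative fragment with a hypothetical helper name, not the patch itself): the third argument of xas_for_each() is the last index to visit, so the walk stops exactly at @end rather than finding one entry too many and testing it against the limit by hand:

```c
#include <linux/xarray.h>
#include <linux/rcupdate.h>

/* Sketch: iterate entries in [start, end] with no extra lookup past end. */
static void walk_range(struct xarray *xa, unsigned long start,
		       unsigned long end)
{
	XA_STATE(xas, xa, start);
	void *entry;

	rcu_read_lock();
	xas_for_each(&xas, entry, end) {
		/* ... process entry; no manual index check against end ... */
	}
	rcu_read_unlock();
}
```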
-
Matthew Wilcox authored
There's no direct replacement for radix_tree_for_each_contig() in the XArray API as it's an unusual thing to do. Instead, open-code a loop using xas_next(). This removes the only user of radix_tree_for_each_contig() so delete the iterator from the API and the test suite code for it.
Signed-off-by: Matthew Wilcox <willy@infradead.org>
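The open-coded contiguous walk can be sketched as below; a simplified, illustrative fragment (hypothetical helper name, and it ignores retry entries for brevity), not the code from the patch. xas_next() moves to the very next index, so the loop naturally stops at the first gap, which is the contiguous-run semantic:

```c
#include <linux/xarray.h>
#include <linux/rcupdate.h>

/* Sketch: count how many consecutive indices from @start are occupied. */
static unsigned long contig_run_length(struct xarray *xa, unsigned long start)
{
	XA_STATE(xas, xa, start);
	unsigned long n = 0;
	void *entry;

	rcu_read_lock();
	for (entry = xas_load(&xas); entry; entry = xas_next(&xas))
		n++;	/* stops at the first empty index */
	rcu_read_unlock();
	return n;
}
```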
-
Matthew Wilcox authored
The 'end' parameter of the xas_for_each iterator avoids a useless iteration at the end of the range.
Signed-off-by: Matthew Wilcox <willy@infradead.org>
-
Matthew Wilcox authored
Slightly shorter and simpler code.
Signed-off-by: Matthew Wilcox <willy@infradead.org>
-
Matthew Wilcox authored
Slightly shorter and simpler code.
Signed-off-by: Matthew Wilcox <willy@infradead.org>
-