Commit e7392b4e authored by Fabio M. De Francesco, committed by Andrew Morton

mm/highmem: fix kernel-doc warnings in highmem*.h

Patch series "Extend and reorganize Highmem's documentation", v4.

The purpose of this series is to extend and reorganize Highmem's
documentation.

This is a work in progress because some information should still be moved
from highmem.rst to highmem.h and highmem-internal.h.  Specifically, I'm
talking about moving the "how to" information to the relevant headers, as
has been suggested by Ira Weiny (Intel).

Also, this is a work in progress because some kdocs in highmem.h and
highmem-internal.h should be improved.


This patch (of 4):

`scripts/kernel-doc -v -none include/linux/highmem*` reports the following
warnings:

include/linux/highmem.h:160: warning: expecting prototype for kunmap_atomic(). Prototype was for nr_free_highpages() instead
include/linux/highmem.h:204: warning: No description found for return value of 'alloc_zeroed_user_highpage_movable'
include/linux/highmem-internal.h:256: warning: Function parameter or member '__addr' not described in 'kunmap_atomic'
include/linux/highmem-internal.h:256: warning: Excess function parameter 'addr' description in 'kunmap_atomic'

Fix these warnings by (1) moving the kernel-doc comments from highmem.h to
highmem-internal.h (which is the file where the kunmap_atomic() macro is
actually defined), (2) extending and merging them with the comment which was
already in highmem-internal.h, (3) using correct parameter names, (4)
correcting a few technical inaccuracies in comments, and (5) adding a
deprecation notice in kunmap_atomic() for consistency with kmap_atomic().
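
As an illustration of point (3), kernel-doc matches every "@name" in a
comment against the parameter list of the definition that follows; a
mismatch produces exactly the "Excess function parameter" plus "not
described" pair of warnings quoted above. A minimal, hypothetical sketch
(the helper below is made up for this example and is not part of the
patch):

	/**
	 * page_round_down - Hypothetical helper, for illustration only
	 * @__addr: Virtual address to be rounded down to a page boundary
	 *
	 * The name after '@' must be "__addr" here because that is the
	 * parameter name in the definition below; documenting "@addr"
	 * instead would trigger the same two warnings that kernel-doc
	 * reports for kunmap_atomic() above.
	 */
	static inline unsigned long page_round_down(unsigned long __addr)
	{
		return __addr & PAGE_MASK;
	}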

Link: https://lkml.kernel.org/r/20220428212455.892-1-fmdefrancesco@gmail.com
Link: https://lkml.kernel.org/r/20220428212455.892-2-fmdefrancesco@gmail.com
Signed-off-by: Fabio M. De Francesco <fmdefrancesco@gmail.com>
Reviewed-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Reviewed-by: Ira Weiny <ira.weiny@intel.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Mike Rapoport <rppt@linux.ibm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Peter Collingbourne <pcc@google.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
parent e240ac52
include/linux/highmem-internal.h

@@ -236,9 +236,21 @@ static inline unsigned long totalhigh_pages(void) { return 0UL; }
 #endif /* CONFIG_HIGHMEM */
 
-/*
- * Prevent people trying to call kunmap_atomic() as if it were kunmap()
- * kunmap_atomic() should get the return value of kmap_atomic, not the page.
+/**
+ * kunmap_atomic - Unmap the virtual address mapped by kmap_atomic() - deprecated!
+ * @__addr: Virtual address to be unmapped
+ *
+ * Unmaps an address previously mapped by kmap_atomic() and re-enables
+ * pagefaults. Depending on PREEMP_RT configuration, re-enables also
+ * migration and preemption. Users should not count on these side effects.
+ *
+ * Mappings should be unmapped in the reverse order that they were mapped.
+ * See kmap_local_page() for details on nesting.
+ *
+ * @__addr can be any address within the mapped page, so there is no need
+ * to subtract any offset that has been added. In contrast to kunmap(),
+ * this function takes the address returned from kmap_atomic(), not the
+ * page passed to it. The compiler will warn you if you pass the page.
  */
 #define kunmap_atomic(__addr)				\
 	do {						\
include/linux/highmem.h

@@ -37,7 +37,7 @@ static inline void *kmap(struct page *page);
 /**
  * kunmap - Unmap the virtual address mapped by kmap()
- * @addr: Virtual address to be unmapped
+ * @page: Pointer to the page which was mapped by kmap()
  *
  * Counterpart to kmap(). A NOOP for CONFIG_HIGHMEM=n and for mappings of
  * pages in the low memory area.
@@ -138,24 +138,16 @@ static inline void *kmap_local_folio(struct folio *folio, size_t offset);
  *
  * Returns: The virtual address of the mapping
  *
- * Effectively a wrapper around kmap_local_page() which disables pagefaults
- * and preemption.
+ * In fact a wrapper around kmap_local_page() which also disables pagefaults
+ * and, depending on PREEMPT_RT configuration, also CPU migration and
+ * preemption. Therefore users should not count on the latter two side effects.
+ *
+ * Mappings should always be released by kunmap_atomic().
  *
  * Do not use in new code. Use kmap_local_page() instead.
  */
 static inline void *kmap_atomic(struct page *page);
 
-/**
- * kunmap_atomic - Unmap the virtual address mapped by kmap_atomic()
- * @addr: Virtual address to be unmapped
- *
- * Counterpart to kmap_atomic().
- *
- * Effectively a wrapper around kunmap_local() which additionally undoes
- * the side effects of kmap_atomic(), i.e. reenabling pagefaults and
- * preemption.
- */
-
 /* Highmem related interfaces for management code */
 static inline unsigned int nr_free_highpages(void);
 static inline unsigned long totalhigh_pages(void);
@@ -191,6 +183,8 @@ static inline void clear_user_highpage(struct page *page, unsigned long vaddr)
  * @vma: The VMA the page is to be allocated for
  * @vaddr: The virtual address the page will be inserted into
  *
+ * Returns: The allocated and zeroed HIGHMEM page
+ *
  * This function will allocate a page for a VMA that the caller knows will
  * be able to migrate in the future using move_pages() or reclaimed
  *
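
For context, a usage sketch of the calling convention described by the new
kunmap_atomic() kernel-doc (illustrative only, not part of this patch; the
helper name below is made up and simply mirrors what such page-copy helpers
do):

	#include <linux/highmem.h>
	#include <linux/string.h>

	static void copy_highpage_sketch(struct page *dst, struct page *src)
	{
		void *vfrom = kmap_atomic(src);	/* disables pagefaults */
		void *vto = kmap_atomic(dst);	/* nested mapping */

		memcpy(vto, vfrom, PAGE_SIZE);

		kunmap_atomic(vto);	/* unmap in reverse order of mapping */
		kunmap_atomic(vfrom);	/* pass the address returned by
					 * kmap_atomic(), not the page */
	}

As the comment itself notes, kmap_atomic()/kunmap_atomic() are deprecated;
new code should use kmap_local_page()/kunmap_local() instead.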