Commit ba004e39 authored by Chris Wilson, committed by Daniel Vetter

drm: Fix kerneldoc for drm_mm_scan_remove_block()

The nodes must be removed in the *reverse* order. This is correct in the
overview, but backwards in the function description. Whilst here add
Intel's copyright statement and tweak some formatting.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: http://patchwork.freedesktop.org/patch/msgid/20161222083641.2691-23-chris@chris-wilson.co.uk
parent 71733207
 /**************************************************************************
  *
  * Copyright 2006 Tungsten Graphics, Inc., Bismarck, ND., USA.
+ * Copyright 2016 Intel Corporation
  * All Rights Reserved.
  *
  * Permission is hereby granted, free of charge, to any person obtaining a
@@ -31,9 +32,9 @@
  * class implementation for more advanced memory managers.
  *
  * Note that the algorithm used is quite simple and there might be substantial
- * performance gains if a smarter free list is implemented. Currently it is just an
- * unordered stack of free regions. This could easily be improved if an RB-tree
- * is used instead. At least if we expect heavy fragmentation.
+ * performance gains if a smarter free list is implemented. Currently it is
+ * just an unordered stack of free regions. This could easily be improved if
+ * an RB-tree is used instead. At least if we expect heavy fragmentation.
  *
  * Aligned allocations can also see improvement.
  *
@@ -67,7 +68,7 @@
  * where an object needs to be created which exactly matches the firmware's
  * scanout target. As long as the range is still free it can be inserted anytime
  * after the allocator is initialized, which helps with avoiding looped
- * depencies in the driver load sequence.
+ * dependencies in the driver load sequence.
  *
  * drm_mm maintains a stack of most recently freed holes, which of all
  * simplistic datastructures seems to be a fairly decent approach to clustering
@@ -78,14 +79,14 @@
  *
  * drm_mm supports a few features: Alignment and range restrictions can be
  * supplied. Further more every &drm_mm_node has a color value (which is just an
- * opaqua unsigned long) which in conjunction with a driver callback can be used
+ * opaque unsigned long) which in conjunction with a driver callback can be used
  * to implement sophisticated placement restrictions. The i915 DRM driver uses
  * this to implement guard pages between incompatible caching domains in the
  * graphics TT.
  *
- * Two behaviors are supported for searching and allocating: bottom-up and top-down.
- * The default is bottom-up. Top-down allocation can be used if the memory area
- * has different restrictions, or just to reduce fragmentation.
+ * Two behaviors are supported for searching and allocating: bottom-up and
+ * top-down. The default is bottom-up. Top-down allocation can be used if the
+ * memory area has different restrictions, or just to reduce fragmentation.
  *
  * Finally iteration helpers to walk all nodes and all holes are provided as are
  * some basic allocator dumpers for debugging.
@@ -510,7 +511,7 @@ EXPORT_SYMBOL(drm_mm_insert_node_in_range_generic);
  *
  * This just removes a node from its drm_mm allocator. The node does not need to
  * be cleared again before it can be re-inserted into this or any other drm_mm
- * allocator. It is a bug to call this function on a un-allocated node.
+ * allocator. It is a bug to call this function on an unallocated node.
  */
 void drm_mm_remove_node(struct drm_mm_node *node)
 {
@@ -689,16 +690,16 @@ EXPORT_SYMBOL(drm_mm_replace_node);
  * efficient when we simply start to select all objects from the tail of an LRU
  * until there's a suitable hole: Especially for big objects or nodes that
  * otherwise have special allocation constraints there's a good chance we evict
- * lots of (smaller) objects unecessarily.
+ * lots of (smaller) objects unnecessarily.
  *
  * The DRM range allocator supports this use-case through the scanning
  * interfaces. First a scan operation needs to be initialized with
- * drm_mm_init_scan() or drm_mm_init_scan_with_range(). The the driver adds
+ * drm_mm_init_scan() or drm_mm_init_scan_with_range(). The driver adds
  * objects to the roaster (probably by walking an LRU list, but this can be
  * freely implemented) until a suitable hole is found or there's no further
- * evitable object.
+ * evictable object.
  *
- * The the driver must walk through all objects again in exactly the reverse
+ * The driver must walk through all objects again in exactly the reverse
  * order to restore the allocator state. Note that while the allocator is used
  * in the scan mode no other operation is allowed.
  *
@@ -838,9 +839,10 @@ EXPORT_SYMBOL(drm_mm_scan_add_block);
  * drm_mm_scan_remove_block - remove a node from the scan list
  * @node: drm_mm_node to remove
  *
- * Nodes _must_ be removed in the exact same order from the scan list as they
- * have been added, otherwise the internal state of the memory manager will be
- * corrupted.
+ * Nodes _must_ be removed in exactly the reverse order from the scan list as
+ * they have been added (e.g. using list_add as they are added and then
+ * list_for_each over that eviction list to remove), otherwise the internal
+ * state of the memory manager will be corrupted.
  *
  * When the scan list is empty, the selected memory nodes can be freed. An
  * immediately following drm_mm_search_free with !DRM_MM_SEARCH_BEST will then
 /**************************************************************************
  *
  * Copyright 2006-2008 Tungsten Graphics, Inc., Cedar Park, TX. USA.
+ * Copyright 2016 Intel Corporation
  * All Rights Reserved.
  *
  * Permission is hereby granted, free of charge, to any person obtaining a
@@ -117,7 +118,10 @@ struct drm_mm {
  * drm_mm_node_allocated - checks whether a node is allocated
  * @node: drm_mm_node to check
  *
- * Drivers should use this helpers for proper encapusulation of drm_mm
+ * Drivers are required to clear a node prior to using it with the
+ * drm_mm range manager.
+ *
+ * Drivers should use this helper for proper encapsulation of drm_mm
  * internals.
  *
  * Returns:
@@ -132,7 +136,10 @@ static inline bool drm_mm_node_allocated(const struct drm_mm_node *node)
  * drm_mm_initialized - checks whether an allocator is initialized
  * @mm: drm_mm to check
  *
- * Drivers should use this helpers for proper encapusulation of drm_mm
+ * Drivers should clear the struct drm_mm prior to initialisation if they
+ * want to use this function.
+ *
+ * Drivers should use this helper for proper encapsulation of drm_mm
  * internals.
  *
  * Returns:
@@ -152,8 +159,8 @@ static inline u64 __drm_mm_hole_node_start(const struct drm_mm_node *hole_node)
  * drm_mm_hole_node_start - computes the start of the hole following @node
  * @hole_node: drm_mm_node which implicitly tracks the following hole
  *
- * This is useful for driver-sepific debug dumpers. Otherwise drivers should not
- * inspect holes themselves. Drivers must check first whether a hole indeed
+ * This is useful for driver-specific debug dumpers. Otherwise drivers should
+ * not inspect holes themselves. Drivers must check first whether a hole indeed
  * follows by looking at node->hole_follows.
  *
  * Returns:
@@ -174,8 +181,8 @@ static inline u64 __drm_mm_hole_node_end(const struct drm_mm_node *hole_node)
  * drm_mm_hole_node_end - computes the end of the hole following @node
  * @hole_node: drm_mm_node which implicitly tracks the following hole
  *
- * This is useful for driver-sepific debug dumpers. Otherwise drivers should not
- * inspect holes themselves. Drivers must check first whether a hole indeed
+ * This is useful for driver-specific debug dumpers. Otherwise drivers should
+ * not inspect holes themselves. Drivers must check first whether a hole indeed
  * follows by looking at node->hole_follows.