Commit bc47e7d9 authored by Tristan Lelong, committed by Greg Kroah-Hartman

Staging: android: ion: fix typos in comments

s/comming/coming/ in drivers/staging/android/ion/ion.c
s/specfic/specific/ in drivers/staging/android/ion/ion.h
s/peformance/performance/ in drivers/staging/android/ion/ion_priv.h
Signed-off-by: Tristan Lelong <tristan@lelong.xyz>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
parent dafe2716
drivers/staging/android/ion/ion.c
@@ -250,7 +250,7 @@ static struct ion_buffer *ion_buffer_create(struct ion_heap *heap,
 	   our systems the only dma_address space is physical addresses.
 	   Additionally, we can't afford the overhead of invalidating every
 	   allocation via dma_map_sg. The implicit contract here is that
-	   memory comming from the heaps is ready for dma, ie if it has a
+	   memory coming from the heaps is ready for dma, ie if it has a
 	   cached mapping that mapping has been invalidated */
 	for_each_sg(buffer->sg_table->sgl, sg, buffer->sg_table->nents, i)
 		sg_dma_address(sg) = sg_phys(sg);
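(Not part of this commit, added only as context for the comment above: the heaps are expected to hand out buffers that are already dma-ready, i.e. with no stale cached lines covering them. A minimal, hypothetical sketch of a heap honoring that contract with the generic DMA API follows; the in-tree heaps used their own cache-maintenance helpers, so treat the function below as an illustration, not ion code.)

#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

/* Hypothetical heap-side helper: before handing a freshly allocated
 * buffer to ion, push/invalidate any cached lines covering it so the
 * "memory coming from the heaps is ready for dma" contract holds. */
static void example_make_buffer_dma_ready(struct device *dev,
					   struct sg_table *table)
{
	dma_sync_sg_for_device(dev, table->sgl, table->nents,
			       DMA_BIDIRECTIONAL);
}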
drivers/staging/android/ion/ion.h
@@ -76,7 +76,7 @@ struct ion_platform_data {
  * size
  *
  * Calls memblock reserve to set aside memory for heaps that are
- * located at specific memory addresses or of specfic sizes not
+ * located at specific memory addresses or of specific sizes not
  * managed by the kernel
  */
 void ion_reserve(struct ion_platform_data *data);
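(Also not part of this commit, for context: ion_reserve() is meant to run from a board file's early reserve hook, while memblock_reserve() is still valid. A minimal sketch follows, assuming the ion_platform_heap/ion_platform_data layout from the staging headers of this era; the heap id, name, base and size are made-up illustrative values.)

#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/sizes.h>
#include "ion.h"

/* Hypothetical board setup: describe one carveout heap at a fixed
 * physical address and ask ion to memblock-reserve it. */
static struct ion_platform_heap example_heaps[] = {
	{
		.type = ION_HEAP_TYPE_CARVEOUT,
		.id   = 1,
		.name = "example-carveout",
		.base = 0x80000000,	/* fixed physical address (example) */
		.size = SZ_16M,		/* fixed size not managed by the kernel */
	},
};

static struct ion_platform_data example_ion_pdata = {
	.nr    = ARRAY_SIZE(example_heaps),
	.heaps = example_heaps,
};

/* Called before the page allocator claims the memory. */
static void __init example_ion_reserve(void)
{
	ion_reserve(&example_ion_pdata);
}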
drivers/staging/android/ion/ion_priv.h
@@ -345,7 +345,7 @@ void ion_carveout_free(struct ion_heap *heap, ion_phys_addr_t addr,
  * functions for creating and destroying a heap pool -- allows you
  * to keep a pool of pre allocated memory to use from your heap. Keeping
  * a pool of memory that is ready for dma, ie any cached mapping have been
- * invalidated from the cache, provides a significant peformance benefit on
+ * invalidated from the cache, provides a significant performance benefit on
  * many systems */
 /**
@@ -362,7 +362,7 @@ void ion_carveout_free(struct ion_heap *heap, ion_phys_addr_t addr,
  *
  * Allows you to keep a pool of pre allocated pages to use from your heap.
  * Keeping a pool of pages that is ready for dma, ie any cached mapping have
- * been invalidated from the cache, provides a significant peformance benefit
+ * been invalidated from the cache, provides a significant performance benefit
  * on many systems
  */
 struct ion_page_pool {
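(Again not part of this commit, just context for the comment above: a rough sketch of how a heap's allocation path might use the pool helpers declared alongside struct ion_page_pool in ion_priv.h. The exact return type of ion_page_pool_alloc() varied slightly across kernel versions, so take this as an assumption-laden illustration rather than the system heap's actual code.)

#include <linux/gfp.h>
#include <linux/mm.h>
#include "ion_priv.h"

/* Hypothetical heap built on a single order-0 page pool. */
static struct ion_page_pool *example_pool;

static int example_pool_init(void)
{
	example_pool = ion_page_pool_create(GFP_KERNEL | __GFP_ZERO, 0);
	return example_pool ? 0 : -ENOMEM;
}

static struct page *example_alloc_page(void)
{
	/* Pages from the pool come back dma-ready, so the fast path
	 * can skip per-allocation cache maintenance. */
	return ion_page_pool_alloc(example_pool);
}

static void example_free_page(struct page *page)
{
	/* Return the page to the pool instead of the page allocator. */
	ion_page_pool_free(example_pool, page);
}

static void example_pool_exit(void)
{
	ion_page_pool_destroy(example_pool);
}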