Commit 66d485b4 authored Nov 27, 2007 by Paul Mundt

sh: Bump up ARCH_KMALLOC_MINALIGN for DMA cases.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>

parent eddeeb32
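
For context, here is a minimal, hypothetical sketch (not part of this commit) of the driver pattern the change targets: DMA performed straight into a kmalloc'ed buffer. On non-coherent SH parts the cache lines covering such a buffer are written back and invalidated around the transfer, so if kmalloc could hand out storage that shares a cache line with an unrelated object, that neighbour could be corrupted. Raising ARCH_KMALLOC_MINALIGN to L1_CACHE_BYTES keeps every kmalloc allocation on its own cache line. The function and names below are invented for illustration.

#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/errno.h>
#include <linux/slab.h>

/* Hypothetical receive path: device DMAs into a kmalloc'ed buffer. */
static int example_rx(struct device *dev, size_t len)
{
	/* With ARCH_KMALLOC_MINALIGN == L1_CACHE_BYTES this buffer starts on
	 * a cache-line boundary and shares no line with other allocations. */
	void *buf = kmalloc(len, GFP_KERNEL);
	dma_addr_t handle;

	if (!buf)
		return -ENOMEM;

	handle = dma_map_single(dev, buf, len, DMA_FROM_DEVICE);

	/* ... program the device, wait for the transfer to complete ... */

	dma_unmap_single(dev, handle, len, DMA_FROM_DEVICE);
	kfree(buf);
	return 0;
}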
Showing 1 changed file with 9 additions and 13 deletions.

include/asm-sh/page.h  (+9, -13)
@@ -138,22 +138,18 @@ typedef struct { unsigned long pgd; } pgd_t;
 #endif
 
 /*
- * Slub defaults to 8-byte alignment, we're only interested in 4.
- * Slab defaults to BYTES_PER_WORD, which ends up being the same anyways.
+ * Some drivers need to perform DMA into kmalloc'ed buffers
+ * and so we have to increase the kmalloc minalign for this.
  */
-#ifdef CONFIG_SUPERH32
-#define ARCH_KMALLOC_MINALIGN 4
-#define ARCH_SLAB_MINALIGN 4
-#else
-/* If gcc inlines memset, it will use st.q instructions. Therefore, we need
-   kmalloc allocations to be 8-byte aligned. Without this, the alignment
-   becomes BYTE_PER_WORD i.e. only 4 (since sizeof(long)==sizeof(void*)==4 on
-   sh64 at the moment). */
-#define ARCH_KMALLOC_MINALIGN 8
+#define ARCH_KMALLOC_MINALIGN L1_CACHE_BYTES
 
+#ifdef CONFIG_SUPERH64
 /*
- * We want 8-byte alignment for the slab caches as well, otherwise we have
- * the same BYTES_PER_WORD (sizeof(void *)) min align in kmem_cache_create().
+ * While BYTES_PER_WORD == 4 on the current sh64 ABI, GCC will still
+ * happily generate {ld/st}.q pairs, requiring us to have 8-byte
+ * alignment to avoid traps. The kmalloc alignment is gauranteed by
+ * virtue of L1_CACHE_BYTES, requiring this to only be special cased
+ * for slab caches.
  */
 #define ARCH_SLAB_MINALIGN 8
 #endif
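
The new CONFIG_SUPERH64 comment concerns kmem_cache_create() rather than kmalloc(): GCC can inline memset() or structure copies as quadword ld.q/st.q pairs on sh64, and those instructions trap on addresses that are not 8-byte aligned, so slab objects need at least 8-byte alignment even though BYTES_PER_WORD is only 4. A hypothetical sketch of a cache that relies on that guarantee (the struct and cache names are invented for illustration):

#include <linux/errno.h>
#include <linux/slab.h>
#include <linux/types.h>

/* Contains a 64-bit field GCC may initialise with an inlined st.q on sh64. */
struct example_obj {
	u64 counter;
	char tag[16];
};

static struct kmem_cache *example_cache;

static int example_cache_init(void)
{
	/* No explicit alignment is requested (align == 0); the allocator
	 * still guarantees at least ARCH_SLAB_MINALIGN, which this commit
	 * sets to 8 on sh64, so the u64 member is safe for st.q/ld.q. */
	example_cache = kmem_cache_create("example_obj",
					  sizeof(struct example_obj),
					  0, 0, NULL);
	return example_cache ? 0 : -ENOMEM;
}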