- 15 Aug, 2009 19 commits
-
Paul Mundt authored
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Paul Mundt authored
The caches-enabled case needs more work, but it is presently broken regardless, so this can be done incrementally. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Paul Mundt authored
This is superfluous, as the default CPU type and family are already established by the initial cpuinfo definition. Given that we are still able to probe for the CPU family even if we are not able to detect the subtype, it's preferable to let the probing code fill out what it can and leave the rest. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Paul Mundt authored
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Paul Mundt authored
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Paul Mundt authored
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Paul Mundt authored
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Paul Mundt authored
This paves the way for allowing individual CPUs to overload the individual flushing routines that they care about without having to depend on weak aliases. SH-4 is converted over initially, as it wires up pretty much everything. The majority of the other CPUs will simply use the default no-op implementation with their own region flushers wired up. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
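A minimal sketch of the shape this takes (the init-hook and flusher names here are illustrative, not necessarily the exact ones used):

    /* Default everything to a no-op; CPUs overload only what they care about. */
    static void cache_noop(void *args)
    {
    }

    void (*local_flush_cache_all)(void *args) = cache_noop;
    void (*local_flush_cache_page)(void *args) = cache_noop;
    void (*local_flush_cache_range)(void *args) = cache_noop;

    /* SH-4 wires up its own implementations at init time. */
    extern void sh4_flush_cache_all(void *args);
    extern void sh4_flush_cache_page(void *args);
    extern void sh4_flush_cache_range(void *args);

    void __init sh4_cache_init(void)
    {
            local_flush_cache_all   = sh4_flush_cache_all;
            local_flush_cache_page  = sh4_flush_cache_page;
            local_flush_cache_range = sh4_flush_cache_range;
    }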
-
Paul Mundt authored
We use flush_cache_page() outright in copy_to_user_page(), and nothing else needs it, so just kill it off. SH-5 still defines its own version, but that too will go away in the same fashion once it converts over. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Paul Mundt authored
All of the flush_dcache_mmap_lock()/flush_dcache_mmap_unlock() definitions are identical across all CPUs, so just provide them generically in asm/cacheflush.h. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
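Since they are no-ops everywhere, the generic definitions reduce to the standard kernel idiom:

    /* identical for all SH CPUs, so defined once in asm/cacheflush.h */
    #define flush_dcache_mmap_lock(mapping)         do { } while (0)
    #define flush_dcache_mmap_unlock(mapping)       do { } while (0)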
-
Paul Mundt authored
flush_dcache_all() is used internally by the SH-4 cache code, it is not part of the exported cache API, so make it static and don't export it. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Paul Mundt authored
This migrates the alias computation and printing of probed cache parameters from the SH-4 code to the shared cpu_cache_init(). This permits other platforms with aliases to make use of the same probe logic without having to roll their own, and also produces consistent output regardless of platform. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
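The computation itself is small; a sketch of the idea (the field names follow struct cache_info, though the exact expression may differ):

    /*
     * An alias exists when a single cache way spans more than one page:
     * the same physical page can then sit at several virtual offsets
     * within the cache.
     */
    static void compute_alias(struct cache_info *c)
    {
            unsigned long way_size = c->sets * c->linesz;

            c->alias_mask = (way_size - 1) & ~(PAGE_SIZE - 1);
            c->n_aliases  = c->alias_mask ? (c->alias_mask >> PAGE_SHIFT) + 1 : 0;
    }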
-
Paul Mundt authored
This provides a central point for CPU cache initialization routines. This replaces the antiquated p3_cache_init() method, which the vast majority of CPUs never cared about. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Paul Mundt authored
This adds a family member to struct sh_cpuinfo, which allows us to fall back more on the probe routines to work out what sort of subtype we are running on. This will be used by the CPU cache initialization code in order to first do family-level initialization, followed by subtype-level optimizations. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
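In rough terms, this lets cpu_cache_init() dispatch on the probed family before any subtype-specific setup (a sketch, assuming the family constants):

    void __init cpu_cache_init(void)
    {
            /* family-level initialization first ... */
            switch (boot_cpu_data.family) {
            case CPU_FAMILY_SH4:
                    sh4_cache_init();
                    break;
            /* ... other families wire up their own routines ... */
            default:
                    break;
            }

            /* ... subtype-level optimizations can then refine the setup. */
    }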
-
Paul Mundt authored
These were previously littered around tlb-nommu.c and pg-nommu.c, though at this point there are more stubs than are strictly TLB or page-op related, so just consolidate them in a single nommu.c. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Paul Mundt authored
This does a bit of reorganizing for allowing nommu to use the new and generic cache.c, no functional changes. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Paul Mundt authored
This builds in the newly created cache.c (renamed from pg-mmu.c) for both MMU and NOMMU configurations. The kmap_coherent() stubs and alias information recorded by each CPU family take care of doing the right thing while enabling the code to be commonly shared. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Paul Mundt authored
This plugs in kmap_coherent() for the non-SH4 cases to permit the pg-mmu.c bits to be used generically across all CPUs. SH-5 is still in the TODO state, but will move over to fixmap and the generic interface gradually. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Paul Mundt authored
This kills off the ifdef from kmap_coherent_init() and just bails if there are no cache aliases. This permits the kmap coherent code to be used on other CPUs. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
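The init path then reduces to an early return rather than an ifdef (a sketch; the fixmap details here are assumptions):

    static pte_t *kmap_coherent_pte;

    void __init kmap_coherent_init(void)
    {
            unsigned long vaddr;

            /* CPUs without cache aliases have nothing to set up */
            if (!boot_cpu_data.dcache.n_aliases)
                    return;

            vaddr = __fix_to_virt(FIX_CMAP_BEGIN);
            kmap_coherent_pte = kmap_get_fixmap_pte(vaddr);
    }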
-
- 14 Aug, 2009 10 commits
-
Paul Mundt authored
-
Paul Mundt authored
This only bothers with the TLB entry flush in the case of the initial page write exception, as it is unnecessary in the case of the load/store exceptions. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Paul Mundt authored
This adds a bit of rework to have the TLB protection violations skip the TLB miss fastpath and go directly into do_page_fault(), as these require slow path handling. Based on an earlier patch by SUGIOKA Toshinobu. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Paul Mundt authored
This optimizes for the cases when a CPU does not yet have a valid ASID context associated with it, as in this case there is no work for any of flush_cache_mm()/flush_cache_page()/flush_cache_range() to do. Based on the MIPS implementation. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
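The early-return pattern borrowed from MIPS looks roughly like this (the cpu_context()/NO_CONTEXT check is the per-CPU ASID test; the function name is illustrative):

    static void sh4_flush_cache_mm(void *arg)
    {
            struct mm_struct *mm = arg;

            /* no ASID on this CPU means the mm never ran here: no work */
            if (cpu_context(smp_processor_id(), mm) == NO_CONTEXT)
                    return;

            /* ... fall through to the real flush ... */
    }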
-
Paul Mundt authored
Now with all of the prep work out of the way, kill off the SH-5 variants and use the SH-4 version directly. This also takes advantage of the unrolling that was previously done for the new version. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Paul Mundt authored
This plugs in some register alignment helpers for the shared flushers, allowing them to also be used on SH-5. The main rationale here is that in the SH-5 case we have a variable ABI, where the pointer size may not equal the register width. This register extension is taken care of by the SH-5 code already today, and is otherwise unused in the SH-4 code. This combines the two and allows us to kill off the SH-5 implementation. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Paul Mundt authored
This inserts a ULONG_MAX entry at the end of the valid entries in the stack trace buffer so the default code doesn't need to scan to the end of available slots. This also makes the trace buffer termination behaviour consistent with the other architectures. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
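The termination convention is the standard pattern shared with other architectures:

    void save_stack_trace(struct stack_trace *trace)
    {
            /* ... walk the stack, filling trace->entries ... */

            /* mark the end of the valid entries so readers can stop early */
            if (trace->nr_entries < trace->max_entries)
                    trace->entries[trace->nr_entries++] = ULONG_MAX;
    }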
-
Paul Mundt authored
This flags the default unwinder as reliable, as it tends to be reliable enough for the purposes of the stacktrace buffer. We leave the unreliable cases for the unwind methods that we know to be completely broken. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Paul Mundt authored
This adopts the reliability checks from the x86 stacktrace code so known bad addresses are not recorded in the stack trace buffer. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
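Adapted from x86, the recording callback drops anything the unwinder flagged as unreliable or that is not a valid kernel text address (a close approximation, not a verbatim copy):

    static void save_stack_address(void *data, unsigned long addr, int reliable)
    {
            struct stack_trace *trace = data;

            /* skip addresses the unwinder could not vouch for */
            if (!reliable)
                    return;

            /* known-bad addresses never make it into the buffer */
            if (!__kernel_text_address(addr))
                    return;

            if (trace->skip > 0) {
                    trace->skip--;
                    return;
            }

            if (trace->nr_entries < trace->max_entries)
                    trace->entries[trace->nr_entries++] = addr;
    }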
-
Paul Mundt authored
save_stack_trace_tsk() and friends can be called from atomic context (as triggered by latencytop), and subsequently hit two problematic allocation points, dwarf_unwind_stack() and dwarf_frame_alloc_regs(), that were using GFP_KERNEL. Convert these over to GFP_ATOMIC and get latencytop working with the DWARF unwinder. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
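The fix is mechanical; for example, for the register allocation (a sketch, with struct dwarf_reg standing in for the actual element type):

    #include <linux/slab.h>

    static struct dwarf_reg *dwarf_frame_alloc_regs(unsigned int num)
    {
            /*
             * GFP_ATOMIC rather than GFP_KERNEL: latencytop can reach
             * this from atomic context, where a sleeping allocation
             * is illegal.
             */
            return kzalloc(sizeof(struct dwarf_reg) * num, GFP_ATOMIC);
    }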
-
- 13 Aug, 2009 11 commits
-
Paul Mundt authored
-
Matt Fleming authored
Trying to figure out the best value for DWARF_ARCH_UNWIND_OFFSET is tricky at best. Various things can change the size (and offset from the beginning of the function) of the prologue. Notably, turning on ftrace adds calls to mcount at the beginning of functions, thereby pushing the prologue further into the function. So replace DWARF_ARCH_UNWIND_OFFSET with some code that continues to execute CFA instructions until the value of the return address register is defined. This is safe to do because we know that the return address must have been pushed onto the frame before our first function call; we just can't figure out where at compile-time. Signed-off-by: Matt Fleming <matt@console-pimps.org> Signed-off-by: Paul Mundt <lethal@linux-sh.org>
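Conceptually, the replacement loop runs the CFA program until a rule for the return-address register shows up (a sketch; the helper names here are hypothetical):

    static void dwarf_unwind_prologue(struct dwarf_frame *frame,
                                      unsigned char *insn, unsigned char *end)
    {
            /*
             * Execute CFA instructions one at a time until the return
             * address register has a defined rule, instead of stopping
             * at a fixed compile-time prologue offset.
             * (dwarf_reg_defined() and dwarf_cfa_execute_insn() are
             * hypothetical helpers for illustration.)
             */
            while (insn < end && !dwarf_reg_defined(frame, DWARF_ARCH_RA_REG))
                    insn = dwarf_cfa_execute_insn(insn, frame);
    }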
-
Paul Mundt authored
This is no longer used, kill it off. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Paul Mundt authored
The destination address might be unaligned, so set it with put_unaligned() for safety. This restores the previous behaviour, albeit through the proper API. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
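In API terms, put_unaligned() takes the value first and the destination pointer second; the wrapper name below is illustrative:

    #include <asm/unaligned.h>

    /* dst may sit at any byte offset; put_unaligned() emits a safe store */
    static void store_u32(void *dst, u32 val)
    {
            put_unaligned(val, (u32 *)dst);
    }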
-
Paul Mundt authored
This was using internal symbols for unaligned accesses, bypassing the exposed interface for variable sized safe accesses. This converts all of the __get_unaligned_cpuXX() users over to get_unaligned() directly, relying on the cast to select the proper internal routine. Additionally, the __put_unaligned_cpuXX() case is superfluous given that the destination address is aligned in all of the current cases, so just drop that outright. Furthermore, this switches to the asm/unaligned.h header instead of the asm-generic version, which was silently bypassing the SH-4A optimized unaligned ops. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
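The width selection by cast works like this (illustrative helpers):

    #include <asm/unaligned.h>

    /* the pointer cast selects the internal routine's access width */
    static u16 load_u16(const void *p)
    {
            return get_unaligned((const u16 *)p);
    }

    static u32 load_u32(const void *p)
    {
            return get_unaligned((const u32 *)p);
    }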
-
Matt Fleming authored
Annotate various assembly code paths with CFI assembler directives so that DWARF unwind info is available for the unwinder. Signed-off-by: Matt Fleming <matt@console-pimps.org> Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Matt Fleming authored
In order to use DWARF unwinder info the frame register has to contain a valid value. Whilst GCC takes care of this for C code, we have to do it ourselves for assembly. Signed-off-by: Matt Fleming <matt@console-pimps.org> Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Matt Fleming authored
This is a first cut at a generic DWARF unwinder for the kernel. It's still lacking DWARF64 support, and the DWARF expression support hasn't been tested very well, but it is generating proper stacktraces on SH for WARN_ON() and NULL dereferences. Signed-off-by: Matt Fleming <matt@console-pimps.org> Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Matt Fleming authored
Instead of implementing our own stack unwinder via dump_trace() we should use the new stack unwinder API because it is more modular. This change allows us to decouple the interface for generating stacktraces from the implementation of a stack unwinder. Signed-off-by: Matt Fleming <matt@console-pimps.org> Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Matt Fleming authored
Provide an interface for registering stack unwinders, where each unwinder is given a rating that describes its accuracy and complexity. The more accurate an unwinder is, the more complex it is. If the current stack unwinder faults, then the stack unwinder with the next highest accuracy will be used in its place (provided one is available). For example, this allows unwinders, such as the DWARF unwinder, to liberally sprinkle BUG()s to catch badly formed DWARF debug info. Signed-off-by: Matt Fleming <matt@console-pimps.org> Signed-off-by: Paul Mundt <lethal@linux-sh.org>
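The registration interface is roughly of this shape (a sketch, not the exact declaration):

    struct unwinder {
            const char *name;
            struct list_head list;
            int rating;     /* higher = more accurate, more complex */
            void (*dump)(struct task_struct *task, struct pt_regs *regs,
                         unsigned long *sp, const struct stacktrace_ops *ops,
                         void *data);
    };

    int unwinder_register(struct unwinder *u);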
-
Matt Fleming authored
Copy the stacktrace ops code from x86 and provide a central function for use by functions that need to dump a callstack. Signed-off-by: Matt Fleming <matt@console-pimps.org> Signed-off-by: Paul Mundt <lethal@linux-sh.org>
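Modelled on x86's, the ops structure decouples walking the stack from reporting what is found; roughly:

    struct stacktrace_ops {
            void (*warning)(void *data, char *msg);
            void (*warning_symbol)(void *data, char *msg, unsigned long symbol);
            void (*stack)(void *data, char *name);
            void (*address)(void *data, unsigned long addr, int reliable);
    };

    void dump_trace(struct task_struct *task, struct pt_regs *regs,
                    unsigned long *sp, const struct stacktrace_ops *ops,
                    void *data);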
-