26 Mar, 2014 (40 commits)
-
Markos Chandras authored
Use the EVA instruction wrappers from asm.h to perform read/write operations to and from userland. Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
-
Leonid Yegoshin authored
The ulb, ulh and ulw macros emulate unaligned accesses on MIPS. However, no such macros exist for EVA mode, so the only way to perform unaligned EVA accesses is through the ADE exception handler. As a result, these macros are disabled for EVA. Signed-off-by: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com> Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
-
Markos Chandras authored
Similarly to the __get_user_* functions, move the common code into __put_user_*_common so it can be shared by similar callers. Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
-
Markos Chandras authored
In preparation for EVA support, an instruction argument is needed for the __get_user_asm{,_ll32} functions to allow instruction overrides in EVA mode. Even though EVA is only available on 32-bit MIPS, both code paths (32-bit and 64-bit) are changed for consistency. Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
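As a rough illustration (not the kernel's exact macro, which also emits a .fixup stub and an __ex_table entry so that faults return -EFAULT), an instruction argument can be threaded through like this; the macro name is invented for the sketch:

    /* An EVA build would pass "lwe"/"lhe"/"lbe", a normal build "lw"/"lh"/"lb". */
    #define __get_user_asm_sketch(val, insn, addr)             \
    do {                                                       \
            long __gu_tmp;                                     \
            __asm__ __volatile__(                              \
            "       " insn "        %0, 0(%1)\n"               \
            : "=r" (__gu_tmp)                                  \
            : "r" (addr));                                     \
            (val) = (__typeof__(val)) __gu_tmp;                \
    } while (0)

    /* e.g. __get_user_asm_sketch(v, "lwe", uaddr) under CONFIG_EVA */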
-
Markos Chandras authored
Build the __bzero function using the EVA load/store instructions when operating in EVA mode. This function is only used when accessing user memory, so there is no need to build two distinct symbols for user and kernel operations respectively. Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
-
Markos Chandras authored
Build the __bzero symbol using a macro. In EVA mode we will need to use similar code to do the userspace load operations, so it is better to use a macro to avoid code duplication. Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
-
Markos Chandras authored
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
-
Markos Chandras authored
Add copy_{to,from,in}_user for when the CPU operates in EVA mode. This is necessary so that the EVA-specific instructions can be used to perform the virtual to physical translation for user space addresses. The non-EVA functions are still used to read from kernel memory when needed. Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
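A hedged sketch of the resulting dispatch, written against the mm_segment API of the era; the helper names stand in for the generated copy routines and are not the kernel's:

    /* stand-ins for the generated memcpy variants, declared only for the sketch */
    unsigned long sketch_copy_kernel(void *to, const void *from, unsigned long n);
    unsigned long sketch_copy_from_user_eva(void *to, const void __user *from,
                                            unsigned long n);
    unsigned long sketch_copy_generic(void *to, const void *from, unsigned long n);

    static unsigned long sketch_copy_from_user(void *to, const void __user *from,
                                               unsigned long n)
    {
    #ifdef CONFIG_EVA
            if (segment_eq(get_fs(), KERNEL_DS))
                    /* caller is actually reading kernel memory */
                    return sketch_copy_kernel(to, (const void *)from, n);
            return sketch_copy_from_user_eva(to, from, n);  /* EVA loads/stores */
    #else
            return sketch_copy_generic(to, (const void *)from, n);
    #endif
    }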
-
Markos Chandras authored
The code can be shared between EVA and non-EVA configurations, so build it with a macro to avoid code duplication. Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
-
Markos Chandras authored
In preparation for EVA support, the PREF macro is split into two separate macros, PREFS and PREFD, for source and destination data prefetching respectively. Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
-
Markos Chandras authored
Each load/store macro always adds an entry to the __ex_table using the EXC macro. Therefore, these load/store macros are now merged with the EXC one. The argument list is also expanded to make the macro more flexible. The extra 'type' argument is not used by this commit, but it will be used when EVA support is added to memcpy. Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
-
Markos Chandras authored
In non-EVA mode, strncpy_from_user* aliases are used for the strncpy_from_kernel* symbols since the code is identical. In EVA mode, new strncpy_from_user* symbols are used which use the EVA-specific instructions to load values from userspace. Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
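Conceptually (the kernel expresses this with assembler aliases rather than C, and the exact symbol names may differ), the non-EVA build collapses the two names onto one implementation:

    #ifdef CONFIG_EVA
    /* separate user variant, built from the EVA load instructions */
    long __strncpy_from_user_asm(char *to, const char __user *from, long len);
    long __strncpy_from_kernel_asm(char *to, const char *from, long len);
    #else
    /* identical code: let the user name resolve to the kernel routine */
    #define __strncpy_from_user_asm __strncpy_from_kernel_asm
    #endif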
-
Markos Chandras authored
Build the __strncpy_from_user symbol using a macro. In EVA mode we will need to use similar code to do the userspace load operations, so it is better to use a macro to avoid code duplication. Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
-
Markos Chandras authored
In non-EVA mode, strlen_user* aliases are used for the strlen_kernel* symbols since the code is identical. In EVA mode, new strlen_user* symbols are used which use the EVA specific instructions to load values from userspace. Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
-
Markos Chandras authored
Build the __strlen_user symbol using a macro. In EVA mode we will need to use similar code to do the userspace load operations, so it is better to use a macro to avoid code duplication. Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
-
Markos Chandras authored
In non-EVA mode, a strnlen_user* alias is used for the strnlen_kernel* symbols since the code is identical. In EVA mode, a new strnlen_user* symbol is used which uses the EVA-specific instructions to load values from userspace. Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
-
Markos Chandras authored
Build the __strnlen_user symbol using a macro. In EVA mode we will need to use similar code to do the userspace load operations, so it is better to use a macro to avoid code duplication. Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
-
Leonid Yegoshin authored
When a breakpoint or trap exception happens while operating in kernel mode but on a user's behalf (e.g. during a syscall), it is necessary to change the address limit to KERNEL_DS so that address checking is bypassed and the correct stack trace is printed. Signed-off-by: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com> Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
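The pattern, sketched against the get_fs()/set_fs() API of the era; show_stacktrace stands for whichever existing dump helper is being wrapped:

    void show_stacktrace(struct task_struct *task, const struct pt_regs *regs); /* existing helper */

    void show_trace_for_user_context(struct task_struct *task, struct pt_regs *regs)
    {
            mm_segment_t old_fs = get_fs();

            if (!user_mode(regs))
                    set_fs(KERNEL_DS);      /* exception taken in kernel mode */

            show_stacktrace(task, regs);    /* may __get_user() kernel stack addresses */

            set_fs(old_fs);
    }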
-
Markos Chandras authored
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
-
Markos Chandras authored
Arguments 4-8 are stored on the user's stack, so use the EVA instructions to fetch them when EVA is enabled. Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
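An illustrative C rendering of the idea (the real code is assembly in the syscall entry path): with EVA enabled, get_user() resolves to the EVA load instructions, so the same fetch covers both modes. Function and parameter names are invented for the sketch.

    static int fetch_stack_args(unsigned long *args, struct pt_regs *regs,
                                unsigned int nargs)
    {
            unsigned long __user *usp = (unsigned long __user *)regs->regs[29];
            unsigned int i;

            for (i = 4; i < nargs; i++)     /* the first four args arrive in registers */
                    if (get_user(args[i], usp + i))
                            return -EFAULT;

            return 0;
    }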
-
Leonid Yegoshin authored
Signed-off-by: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com> Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
-
Leonid Yegoshin authored
Signed-off-by: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com> Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
-
Markos Chandras authored
Use the LLE/SCE instructions to perform the address translation for userspace accesses when EVA is enabled. Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
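A minimal sketch of the instruction selection only; fault handling and the actual futex operation logic are omitted (the real LL/SC sequences also carry __ex_table entries):

    /* Atomically set *uaddr to newval and return the old value. */
    static inline u32 user_atomic_set_sketch(u32 __user *uaddr, u32 newval)
    {
            u32 old, tmp;

            __asm__ __volatile__(
            "       .set    push            \n"
            "       .set    noreorder       \n"
    #ifdef CONFIG_EVA
            "1:     lle     %[old], 0(%[a]) \n"     /* load-linked via user mapping */
    #else
            "1:     ll      %[old], 0(%[a]) \n"
    #endif
            "       move    %[tmp], %[new]  \n"
    #ifdef CONFIG_EVA
            "       sce     %[tmp], 0(%[a]) \n"     /* store-conditional, user mapping */
    #else
            "       sc      %[tmp], 0(%[a]) \n"
    #endif
            "       beqz    %[tmp], 1b      \n"
            "        nop                    \n"
            "       .set    pop             \n"
            : [old] "=&r" (old), [tmp] "=&r" (tmp)
            : [a] "r" (uaddr), [new] "r" (newval)
            : "memory");

            return old;
    }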
-
Markos Chandras authored
EVA uses specific instructions for accessing user memory. Instead of polluting the kernel with numerous #ifdef CONFIG_EVA blocks, add wrappers for all the instructions that need special handling when EVA is enabled. Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
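A sketch of the wrapper shape (an illustrative subset; the macros are meant to be expanded inside assembly files, and the real header covers many more instructions plus kernel_* counterparts):

    #ifdef CONFIG_EVA
    # define user_lw(reg, addr)     lwe  reg, addr
    # define user_sw(reg, addr)     swe  reg, addr
    # define user_lb(reg, addr)     lbe  reg, addr
    # define user_sb(reg, addr)     sbe  reg, addr
    #else
    # define user_lw(reg, addr)     lw   reg, addr
    # define user_sw(reg, addr)     sw   reg, addr
    # define user_lb(reg, addr)     lb   reg, addr
    # define user_sb(reg, addr)     sb   reg, addr
    #endif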
-
Leonid Yegoshin authored
EVA can use the PREFE instruction to perform the virtual address translation using the user mapping of the address rather than the kernel mapping. Signed-off-by: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com> Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
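As a small illustration (prefetch hint value 0 assumed, and the cpu_has_prefetch and alignment details the kernel worries about are ignored):

    static inline void prefetch_user_sketch(const void __user *addr)
    {
    #ifdef CONFIG_EVA
            __asm__ __volatile__("prefe 0, 0(%0)" : : "r" (addr));
    #else
            __asm__ __volatile__("pref  0, 0(%0)" : : "r" (addr));
    #endif
    }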
-
Leonid Yegoshin authored
Add basic Kconfig support for EVA. Not selectable by any platform at this point. Signed-off-by: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com> Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
-
James Hogan authored
Add a CPU_P5600 cpu type case in oprofile_arch_init() to use the MIPS model, and in mipsxx_init() to set the cpu_type string to "mips/P5600". Signed-off-by: James Hogan <james.hogan@imgtec.com> Reviewed-by: Markos Chandras <markos.chandras@imgtec.com> Cc: linux-mips@linux-mips.org Cc: Robert Richter <rric@kernel.org> Cc: oprofile-list@lists.sf.net Patchwork: https://patchwork.linux-mips.org/patch/6410/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
James Hogan authored
Allow FTLB to be turned on or off for CPU_P5600 as well as CPU_PROAPTIV. The existing if statement is converted into a switch to allow for future expansion. Signed-off-by: James Hogan <james.hogan@imgtec.com> Reviewed-by: Markos Chandras <markos.chandras@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/6411/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
James Hogan authored
Add a case in cpu_probe_mips for the MIPS P5600 processor ID, which sets the CPU type to the new CPU_P5600. Signed-off-by: James Hogan <james.hogan@imgtec.com> Reviewed-by: Markos Chandras <markos.chandras@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/6409/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
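Roughly (the cpu name string and the surrounding probe structure are simplified here):

    static void probe_p5600_sketch(struct cpuinfo_mips *c, unsigned int cpu)
    {
            switch (c->processor_id & PRID_IMP_MASK) {
            case PRID_IMP_P5600:                    /* new processor ID  */
                    c->cputype = CPU_P5600;         /* new CPU type      */
                    __cpu_name[cpu] = "MIPS P5600"; /* illustrative name */
                    break;
            }
    }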
-
James Hogan authored
Add a CPU_P5600 case to various switch statements, doing the same thing as for CPU_PROAPTIV. Signed-off-by: James Hogan <james.hogan@imgtec.com> Reviewed-by: Markos Chandras <markos.chandras@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/6408/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
James Hogan authored
Add a Processor ID and CPU type for the MIPS P5600 core. Signed-off-by: James Hogan <james.hogan@imgtec.com> Reviewed-by: Markos Chandras <markos.chandras@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/6407/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Paul Burton authored
This patch extends sigcontext in order to hold the most significant 64 bits of each vector register in addition to the MSA control & status register. The least significant 64 bits are already saved as the scalar FP context. This makes things a little awkward since the least & most significant 64 bits of each vector register are not contiguous in memory. Thus the copy_u & insert instructions are used to transfer the values of the most significant 64 bits via GP registers. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/6533/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
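An illustrative layout (field names invented for the sketch) showing why the two halves of a vector register end up non-contiguous:

    struct msa_sigcontext_sketch {
            unsigned long long      fp_low[32];     /* existing scalar FP save area:
                                                       low 64 bits of $w0-$w31        */
            unsigned long long      msa_high[32];   /* new: upper 64 bits of $w0-$w31 */
            unsigned int            msa_csr;        /* new: MSA control & status      */
    };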
-
Paul Burton authored
No current systems implementing MSA include support for vector register partitioning which makes it somewhat difficult to implement support for it in the kernel. Thus for the moment the kernel includes no such support. However if the kernel were to be run on a system which implemented register partitioning then it would not function correctly, mishandling MSA disabled exceptions. Print a warning if run on a system with vector register partitioning implemented to indicate this problem should it occur. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/6494/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
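The check amounts to something like the following (the MSAIR bit mask name is an assumption here):

    static void warn_if_wrp_sketch(void)
    {
            /* MSA_IR_WRPF: "register partitioning implemented" (assumed name) */
            if (read_msa_ir() & MSA_IR_WRPF)
                    pr_warn("Vector register partitioning unsupported by this kernel\n");
    }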
-
Paul Burton authored
This patch adds a simple handler for MSA FP exceptions which delivers a SIGFPE to the running task. In the future it should probably be extended to re-execute the instruction with the MSACSR.NX bit set in order to generate results for any elements which did not cause an exception before delivering the SIGFPE signal. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/6432/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
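A condensed sketch of such a handler; the fuller version the commit anticipates would first retry the instruction with MSACSR.NX set before signalling:

    asmlinkage void do_msa_fpe_sketch(struct pt_regs *regs)
    {
            die_if_kernel("MSA FP exception in kernel code", regs);
            force_sig(SIGFPE, current);     /* two-argument form, as of this era */
    }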
-
Paul Burton authored
This patch adds support for context switching the MSA vector registers. These 128 bit vector registers are aliased with the FP registers - an FP register accesses the least significant bits of the vector register with which it is aliased (ie. the register with the same index). Due to both this & the requirement that the scalar FPU must be 64-bit (FR=1) if enabled at the same time as MSA, the kernel will enable MSA & scalar FP at the same time for tasks which use MSA. If we restore the MSA vector context then we might as well enable the scalar FPU since the reason it was left disabled was to allow for lazy FP context restoring - but we just restored the FP context as it's a subset of the vector context. If we restore the FP context and have previously used MSA then we have to restore the whole vector context anyway (see comment in enable_restore_fp_context for details), so similarly we might as well enable MSA. Thus if a task does not use MSA then it will continue to behave as without this patch - the scalar FP context will be saved & restored as usual. But if a task executes an MSA instruction then it will save & restore the vector context forever more. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/6431/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
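The policy boils down to the following shape (all helper names are placeholders for the logic described above, not kernel APIs):

    /* placeholder helpers */
    bool task_has_used_msa(struct task_struct *tsk);
    void restore_scalar_fp_lazily(struct task_struct *tsk);
    void restore_msa_vector_context(struct task_struct *tsk);
    void enable_fpu_fr1(void);
    void enable_msa_unit(void);

    static void restore_fp_or_msa_sketch(struct task_struct *tsk)
    {
            if (!task_has_used_msa(tsk)) {
                    restore_scalar_fp_lazily(tsk);  /* behaviour as before      */
                    return;
            }

            restore_msa_vector_context(tsk);        /* includes the FP subset   */
            enable_fpu_fr1();                       /* 64-bit (FR=1) scalar FPU */
            enable_msa_unit();                      /* keep both on together    */
    }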
-
Paul Burton authored
This patch adds support for probing the MSAP bit within the Config3 register in order to detect the presence of the MSA ASE. Presence of the ASE will be indicated in /proc/cpuinfo. The value of the MSA implementation register will be displayed at boot to aid debugging and verification of a correct setup, as is done for the FPU. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/6430/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
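Sketched against the Config3.MSAP description above; constant names follow the MIPS headers, but treat the details as illustrative:

    static void probe_msa_sketch(struct cpuinfo_mips *c)
    {
            if (read_c0_config3() & MIPS_CONF3_MSA) {
                    c->ases |= MIPS_ASE_MSA;        /* shows up in /proc/cpuinfo */
                    pr_info("MSA implementation register: %08x\n", read_msa_ir());
            }
    }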
-
Paul Burton authored
This patch introduces definitions for the MSA control registers and functions which allow access to both the control & vector registers. If the toolchain being used to build the kernel includes support for MSA then this patch will make use of that support & use MSA instructions directly. However toolchain support for MSA is very new & far from a point where it can be reasonably expected that everyone building the kernel uses a toolchain with support. Thus fallbacks using .word assembler directives are also provided for now as a temporary measure. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/6429/ Patchwork: https://patchwork.linux-mips.org/patch/6607/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
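The accessor shape, sketched with an assumed gating symbol and with the hand-encoded fallback elided:

    static inline unsigned int read_msa_csr_sketch(void)
    {
            unsigned int val;

    #ifdef TOOLCHAIN_SUPPORTS_MSA   /* assumed build-time symbol */
            __asm__ __volatile__(
            "       .set    push\n"
            "       .set    msa\n"
            "       cfcmsa  %0, $1\n"       /* MSA control register 1 = MSACSR */
            "       .set    pop\n"
            : "=r" (val));
    #else
            /* real fallback: a .word directive carrying the hand-assembled
               cfcmsa encoding; omitted in this sketch */
            val = 0;
    #endif
            return val;
    }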
-
Paul Burton authored
When saving or restoring scalar FP context we want to access the least significant 64 bits of each FP register. When the FP registers are 64 bits wide, that is trivially the start of the register's value in memory. However, when the FP registers are wider, this equivalence will no longer be true for big endian systems. Define a new set of offset macros for the least significant 64 bits of each saved FP register within thread context, and make use of them when saving and restoring scalar FP context. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/6428/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Paul Burton authored
When we want to access 64-bit FP register values we can only treat consecutive registers as being consecutive in memory when the width of an FP register equals 64 bits. This assumption will not remain true once MSA support is introduced, so provide a code path which copies each 64 bit FP register value in turn when the width of an FP register differs from 64 bits. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/6427/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
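The per-register copy amounts to something like this; the accessor is a placeholder for however the low 64 bits of a saved register are fetched:

    u64 fpr_get_ls64(struct task_struct *tsk, unsigned int idx);   /* placeholder */

    static int copy_fp_to_user_sketch(u64 __user *dst, struct task_struct *tsk)
    {
            int err = 0;
            unsigned int i;

            for (i = 0; i < NUM_FPU_REGS; i++)
                    err |= __put_user(fpr_get_ls64(tsk, i), &dst[i]);

            return err;
    }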
-
Paul Burton authored
This code assumed that saved FP registers are 64 bits wide, an assumption which will no longer be true once MSA is introduced. This patch modifies the code to copy the lower 64 bits of each register in turn, which is safe for any FP register width >= 64 bits. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/6425/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-