- 25 Jun, 2015 4 commits
-
-
Vineet Gupta authored
ARCv2 based HS38 cores are weakly ordered and thus need explicit barriers in the kernel proper.

The SMP barrier is provided by the DMB instruction, which also guarantees a local barrier, hence it is used as the backend of the smp_*mb() as well as the *mb() APIs.

Also hook the barriers into the MMIO accessors to avoid ordering issues in IO.

Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
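A rough sketch of the scheme (not the literal patch): mb() maps to DMB, and the MMIO accessors gain ordering around the relaxed variants. The DMB operand encodings and the __iormb()/__iowmb() helper names below are assumptions for illustration.

| /* arch barrier backends, all built on DMB (operands assumed:
|  * 3 = full, 1 = read, 2 = write) */
| #define mb()	asm volatile("dmb 3\n" : : : "memory")
| #define rmb()	asm volatile("dmb 1\n" : : : "memory")
| #define wmb()	asm volatile("dmb 2\n" : : : "memory")
|
| /* MMIO accessors: order the access against surrounding memory ops */
| #define readl(c)	({ u32 __v = readl_relaxed(c); __iormb(); __v; })
| #define writel(v, c)	({ __iowmb(); writel_relaxed(v, c); })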
-
Vineet Gupta authored
That way arches can define only the minimal versions they need and still #include asm-generic for the defaults (vs. defining the defaults in arch code).

See the new barrier.h in arc for usage!

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
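The pattern is that asm-generic only supplies a default when the arch hasn't already defined the macro; a minimal sketch of that arrangement, mirroring the #ifndef guards in include/asm-generic/barrier.h:

| /* include/asm-generic/barrier.h (sketch) */
| #ifndef mb
| #define mb()		barrier()
| #endif
|
| #ifndef smp_mb
| #ifdef CONFIG_SMP
| #define smp_mb()	mb()
| #else
| #define smp_mb()	barrier()
| #endif
| #endif
|
| /* arch/arc/include/asm/barrier.h then defines just what it needs
|  * (e.g. mb() via DMB) and ends with:
|  *	#include <asm-generic/barrier.h>
|  */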
-
Vineet Gupta authored
- arch_spin_lock/unlock were lacking the ACQUIRE/RELEASE barriers.
  Since ARCv2 only provides load/load, store/store and all/all barriers, we need the full barrier.

- LLOCK/SCOND based atomics, bitops and cmpxchg, which return modified values, were lacking the explicit smp barriers (see the sketch after this entry).

- Non LLOCK/SCOND variants don't need the explicit barriers since those are implicitly provided by the spin locks used to implement the critical section (the spin lock barriers in turn are also fixed in this commit, as explained above).

Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: stable@vger.kernel.org
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
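For the second item, a value-returning LLSC atomic now brackets the LLOCK/SCOND sequence with full barriers; a sketch along the lines of the ARC atomics (treat the exact asm constraints as illustrative):

| static inline int atomic_add_return(int i, atomic_t *v)
| {
| 	unsigned int val;
|
| 	/* LLOCK/SCOND by themselves give no ordering, so the API's
| 	 * full-barrier semantics must be added explicitly */
| 	smp_mb();
|
| 	__asm__ __volatile__(
| 	"1:	llock   %[val], [%[ctr]]	\n"
| 	"	add     %[val], %[val], %[i]	\n"
| 	"	scond   %[val], [%[ctr]]	\n"
| 	"	bnz     1b			\n"
| 	: [val] "=&r" (val)
| 	: [ctr] "r" (&v->counter), [i] "ir" (i)
| 	: "cc");
|
| 	smp_mb();
|
| 	return val;
| }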
-
Vineet Gupta authored
When auditing cmpxchg call sites, Chuck noted that gcc was optimizing away some of the desired LDs.

| do {
| 	new = old = *ipi_data_ptr;
| 	new |= 1U << msg;
| } while (cmpxchg(ipi_data_ptr, old, new) != old);

was generating the code below:

| 8015cef8:	ld	r2,[r4,0]	<-- First LD
| 8015cefc:	bset	r1,r2,r1
|
| 8015cf00:	llock	r3,[r4]		<-- atomic op
| 8015cf04:	brne	r3,r2,8015cf10
| 8015cf08:	scond	r1,[r4]
| 8015cf0c:	bnz	8015cf00
|
| 8015cf10:	brne	r3,r2,8015cf00	<-- Branch doesn't go to orig LD

Although this was fixed by adding an ACCESS_ONCE at this call site, it seems safer (for now at least) to add a compiler barrier to the LLSC based cmpxchg.

Reported-by: Chuck Jordan <cjordan@synopsys.com>
Cc: <stable@vger.kernel.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
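Concretely, the compiler barrier comes from a "memory" clobber on the LLSC asm, which stops gcc from caching the earlier plain load across the loop; a sketch (not the verbatim patch):

| static inline unsigned long
| __cmpxchg(volatile void *ptr, unsigned long expected, unsigned long new)
| {
| 	unsigned long prev;
|
| 	smp_mb();
|
| 	__asm__ __volatile__(
| 	"1:	llock   %0, [%1]	\n"
| 	"	brne    %0, %2, 2f	\n"
| 	"	scond   %3, [%1]	\n"
| 	"	bnz     1b		\n"
| 	"2:				\n"
| 	: "=&r"(prev)			/* early clobber, no reg reuse */
| 	: "r"(ptr), "ir"(expected), "r"(new)
| 	: "cc", "memory");		/* "memory" = compiler barrier */
|
| 	smp_mb();
|
| 	return prev;
| }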
-
- 22 Jun, 2015 18 commits
-
-
Vineet Gupta authored
Cc: Jason Cooper <jason@lakedaemon.net> Cc: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Vineet Gupta authored
Cc: Daniel Lezcano <daniel.lezcano@linaro.org> Cc: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Vineet Gupta authored
- Handle possible interrupt coalescing from MCIP
- Check whether the previous IPI was acked before sending a new one

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
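The shape of the second point, as a sketch only: mcip_ipi_is_acked() and mcip_ipi_raise() are hypothetical helpers standing in for the actual MCIP command sequence.

| static void mcip_ipi_send(int cpu)
| {
| 	unsigned long flags;
|
| 	local_irq_save(flags);
|
| 	/* Don't queue a new IPI while the target hasn't acked the
| 	 * previous one, otherwise the per-cpu ipi data gets trashed */
| 	while (!mcip_ipi_is_acked(cpu))
| 		cpu_relax();
|
| 	mcip_ipi_raise(cpu);
|
| 	local_irq_restore(flags);
| }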
-
Vineet Gupta authored
Cc: Jason Cooper <jason@lakedaemon.net> Cc: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Vineet Gupta authored
This allows platforms to provide their own cpu wakeup routines as well as IPI send / clear backends, while allowing an SMP kernel w/o any such backend to build/boot.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
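Roughly, the platform fills in an ops table and the core SMP code only calls a hook if it is present. The field names below are assumptions about the ARC plat_smp_ops layout, and the arc_ipi_send() wrapper is purely illustrative.

| struct plat_smp_ops {
| 	const char	*info;			/* backend name for the boot log */
| 	void		(*cpu_kick)(int cpu, unsigned long pc);	/* wake a secondary */
| 	void		(*ipi_send)(int cpu);	/* raise a cross-core interrupt */
| 	void		(*ipi_clear)(int irq);	/* ack it on the receiver */
| };
|
| extern struct plat_smp_ops plat_smp_ops;
|
| /* core code guards each call, so an SMP kernel without a backend
|  * still builds and boots (it just can't bring up secondaries) */
| static inline void arc_ipi_send(int cpu)
| {
| 	if (plat_smp_ops.ipi_send)
| 		plat_smp_ops.ipi_send(cpu);
| }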
-
Vineet Gupta authored
Cc: Daniel Lezcano <daniel.lezcano@linaro.org> Cc: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Vineet Gupta authored
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Vineet Gupta authored
Branch insn can't be scheduled as last insn of Zero Overhead loop Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Claudiu Zissulescu authored
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Vineet Gupta authored
This is also the default for the AXS103 release.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Vineet Gupta authored
Caveats about cache flush on ARCv2 based cores:

- dcache is PIPT, so paddr is sufficient for cache maintenance ops (no need to set up the PTAG reg)
- icache is still VIPT, but only aliasing configs need the PTAG setup

So basically this is a departure from MMU-v3, which always needs vaddr in the line ops registers (DC_IVDL, DC_FLDL, IC_IVIL) but paddr in DC_PTAG, IC_PTAG respectively.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
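A sketch of the resulting MMUv4 line-op loop, where only paddr is written to the op register; the register/macro names (write_aux_reg, ARC_REG_DC_IVDL, etc.) follow the ARC cache code but should be treated as assumptions here.

| static void __cache_line_loop_v4(unsigned long paddr, unsigned long sz,
| 				 const int op)
| {
| 	unsigned int aux_cmd;
| 	int num_lines;
|
| 	if (op == OP_INV_IC)
| 		aux_cmd = ARC_REG_IC_IVIL;
| 	else	/* d-cache invalidate or flush */
| 		aux_cmd = op & OP_INV ? ARC_REG_DC_IVDL : ARC_REG_DC_FLDL;
|
| 	paddr &= CACHE_LINE_MASK;
| 	num_lines = DIV_ROUND_UP(sz, L1_CACHE_BYTES);
|
| 	while (num_lines-- > 0) {
| 		write_aux_reg(aux_cmd, paddr);	/* paddr alone, no PTAG */
| 		paddr += L1_CACHE_BYTES;
| 	}
| }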
-
Vineet Gupta authored
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Vineet Gupta authored
The issue was that on HS, when an interrupt is taken, IRQ_ACT is set and is NOT cleared unless we do an RTIE (or clear it manually).

Linux interrupt handling has top and bottom halves. The latter leads to softirqs (which can reschedule) AND expects interrupts to be REALLY re-enabled, which was NOT happening for us since we only did SETI and didn't clear IRQ_ACT.

So we could end up in a state where both cores have taken an interrupt (IRQ_ACT set), get rescheduled, both send an IPI, and then wait on the CSD lock, which will never be cleared as the cores can't take the pending IPI IRQ due to the already-set IRQ_ACT.

So local_irq_enable() now drops the IRQ_ACT.act bit to re-enable IRQs.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
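A sketch of what "drops the IRQ_ACT.act bit" amounts to; AUX_IRQ_ACT matches the ARCv2 aux register name, but the mask/layout used here is an assumption.

| static inline void arch_local_irq_enable(void)
| {
| 	unsigned int act = read_aux_reg(AUX_IRQ_ACT);
|
| 	/* Clear the active-interrupt bits so the core no longer thinks
| 	 * it is servicing an ISR, otherwise a pending (same priority)
| 	 * IPI can never be taken */
| 	if (act & 0xffff)
| 		write_aux_reg(AUX_IRQ_ACT, act & ~0xffff);
|
| 	/* then actually re-enable interrupts */
| 	__asm__ __volatile__("	seti	\n" : : : "memory");
| }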
-
Vineet Gupta authored
Reported by Anton as LTP:munmap01 failing with an Illegal Instruction Exception.

--------------------->8--------------------------------------
mmap2(NULL, 24576, PROT_READ|PROT_WRITE, MAP_SHARED, 3, 0) = 0x200d2000
munmap(0x200d2000, 24576) = 0
--- SIGSEGV {si_signo=SIGSEGV, si_code=SEGV_MAPERR, si_addr=0x200d2000} ---
potentially unexpected fatal signal 4.
Path: /munmap01
CPU: 0 PID: 61 Comm: munmap01 Not tainted 3.13.0-g5d5c46d9a556 #8
task: 9f1a8000 ti: 9f154000 task.ti: 9f154000
[ECR   ]: 0x00020100 => Illegal Insn
[EFA   ]: 0x0001354c
[BLINK ]: 0x200515d4
[ERET  ]: 0x1354c
    @off 0x1354c in [/munmap01]  VMA: 0x00010000 to 0x00018000
[STAT32]: 0x800802c0
...
--------------------->8--------------------------------------

The issue was:

1. munmap01 accessed unmapped memory (on purpose) with a signal handler installed for SIGSEGV.

2. The faulting instruction happened to be in a Delay Slot
   00011864 <main>:
   11908:	bl.d	13284 <tst_resm>
   1190c:	stb	r16,[r2]

3. The kernel sets up the reg file for the signal handler and correctly clears the DE bit in the pt_regs->status32 placeholder (see the sketch after this entry).

4. However the RESTORE_CALLEE_SAVED_USER macro was not adjusted for ARCv2, and it over-writes the above with the orig/stale value of status32.

5. After RTIE, the userspace signal handler executes a non-branch instruction with the DE bit set, triggering an Illegal Instruction Exception.

Reported-by: Anton Kolesov <akolesov@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
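Item 3 corresponds to roughly the following in the signal-frame setup (ksig being the usual struct ksignal argument); regs->status32 and STATUS_DE_MASK follow the ARC port's naming but are assumptions here.

| /* point the return address at the user handler ... */
| regs->ret = (unsigned long)ksig->ka.sa.sa_handler;
|
| /* ... and make sure it doesn't "start" in a delay slot: with DE set,
|  * the first (non-branch) insn of the handler would be illegal */
| regs->status32 &= ~STATUS_DE_MASK;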
-
Vineet Gupta authored
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Vineet Gupta authored
The notable features are:

- SMP configurations of up to 4 cores with coherency
- Optional L2 Cache and IO-Coherency
- Revised Interrupt Architecture (multiple priorities, reg banks, auto stack switch, auto regfile save/restore)
- MMUv4 (PIPT dcache, Huge Pages)
- Instructions for
  * 64bit load/store: LDD, STD
  * Hardware assisted divide/remainder: DIV, REM
  * Function prologue/epilogue: ENTER_S, LEAVE_S
  * IRQ enable/disable: CLRI, SETI
  * pop count: FFS, FLS
  * SETcc, BMSKN, XBFU ...

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Vineet Gupta authored
Cc: Jason Cooper <jason@lakedaemon.net> Cc: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Vineet Gupta authored
ioremap already uses the hard define, just make sure BCR value matches that Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
- 19 Jun, 2015 18 commits
-
-
Vineet Gupta authored
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Vineet Gupta authored
These have been register compatible so far. However ARCv2 mandates a different pt_regs layout (due to the h/w auto save).

To keep pt_regs the same for both, we start by removing the assumption, which was used mainly for block copies between the 2 structs in signal handling and ptrace.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Vineet Gupta authored
Previously this macro was overloaded with stack switching, saving SP at the right slot in pt_regs, saving/setup of r25, and setting the SP baseline to where pt_regs->sp is saved (vs. the bottom of pt_regs).

Now it only does the SP switch, and leaves SP pointing to the bottom of pt_regs.

r25 saving is no longer done here, to allow for future reordering of the regfile in pt_regs w/o touching this macro.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Vineet Gupta authored
Returning from pure kernel mode and from exception mode uses the same code anyway. Remove one of the duplicate blocks.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Vineet Gupta authored
Elide the need to re-read ECR in the Trap handler by ensuring that EXCEPTION_PROLOGUE does that at the very end, just before returning to the Trap handler.

The ARCv2 EXCEPTION_PROLOGUE already did that, so do the same for ARCompact, and adjust the common trap handler to use the cached ECR.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Vineet Gupta authored
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Vineet Gupta authored
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Vineet Gupta authored
This fixes possible link/relocation errors, since restore_regs will be provided by ISA code but called from ARC common code. The .L prefix reassures binutils that it will be in the same compilation unit.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Vineet Gupta authored
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Vineet Gupta authored
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Vineet Gupta authored
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Vineet Gupta authored
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Vineet Gupta authored
- EXCEPTION_EPILOGUE introduced
- EXCEPTION_PROLOGUE now also includes reg file saving

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Vineet Gupta authored
- common'ize macros for level 1 and level 2 interrupts

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Vineet Gupta authored
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Vineet Gupta authored
- Remove the ifdef'ery and write distinct versions for each mmu ver even if there is some code duplication Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Vineet Gupta authored
That is because __after_dc_op() already reads it for status check, so it is better anyways to use that "newer" value. Also reduces the clutter in callers for passing from/to these routines. Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-
Vineet Gupta authored
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
-