Commit 46c3e6b8 authored by Tal Zilcer, committed by Vineet Gupta

ARC: [plat-eznps] Use dedicated cpu_relax()

Since the CTOP is SMT hardware multi-threaded, we need to hint
the HW that now would be a very good time to do a hardware
thread context switch. This is done by issuing the schd.rw
instruction (binary coded here so as not to require a specific
revision of GCC to build the kernel).
schd.rw means that the thread becomes eligible for execution by
the thread scheduler only after all of its pending read/write
transactions have completed.
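For illustration, a minimal sketch of the binary-coding trick the patch relies on: the opcode is emitted as a raw `.word` through an "i" (immediate) inline-asm constraint, so the assembler never needs to know the mnemonic. The macro name and opcode value below are placeholders invented for this sketch; the real constant, CTOP_INST_SCHD_RW, comes from the eznps platform headers.

```c
/*
 * Sketch only -- placeholder opcode, not the real schd.rw encoding.
 * Emitting the instruction as a raw word lets the kernel build even
 * with an assembler/GCC revision that lacks the mnemonic.
 */
#define INST_SCHD_RW_PLACEHOLDER	0x12345678	/* hypothetical value */

#define hw_thread_yield()	\
	__asm__ __volatile__ (".word %0" : : "i"(INST_SCHD_RW_PLACEHOLDER) : "memory")
```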

cpu_relax_lowlatency() is implemented with barrier(), since with
the new semantics of cpu_relax() it may take a while until the
yielded CPU gets back.
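As a hedged illustration of why the two hints differ (the flag and the loops below are invented for this example, not part of the patch), a caller that can tolerate the yield keeps using cpu_relax(), while a latency-sensitive spin uses cpu_relax_lowlatency(), which remains a plain compiler barrier:

```c
#include <asm/processor.h>	/* cpu_relax(), cpu_relax_lowlatency() as patched */

static volatile int flag;	/* illustrative flag, not from the patch */

/* May hint the HW thread scheduler on CTOP; fine when wakeup latency is relaxed. */
static void wait_flag(void)
{
	while (!flag)
		cpu_relax();
}

/* Compiler barrier only, so the hardware thread keeps polling promptly. */
static void wait_flag_lowlatency(void)
{
	while (!flag)
		cpu_relax_lowlatency();
}
```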
Signed-off-by: Noam Camus <noamc@ezchip.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Acked-by: Vineet Gupta <vgupta@synopsys.com>
parent 86c25466
@@ -57,10 +57,20 @@ struct task_struct;
  * A lot of busy-wait loops in SMP are based off of non-volatile data otherwise
  * get optimised away by gcc
  */
-#define cpu_relax()	__asm__ __volatile__ ("" : : : "memory")
+#ifndef CONFIG_EZNPS_MTM_EXT
 
+#define cpu_relax()	barrier()
 #define cpu_relax_lowlatency() cpu_relax()
 
+#else
+
+#define cpu_relax()	\
+	__asm__ __volatile__ (".word %0" : : "i"(CTOP_INST_SCHD_RW) : "memory")
+#define cpu_relax_lowlatency()	barrier()
+
+#endif
+
 #define copy_segments(tsk, mm)      do { } while (0)
 #define release_segments(mm)        do { } while (0)
...