Commit 40eb0cb4 authored by Ingo Molnar

x86/cpu: Fix typos and improve the comments in sync_core()

- Fix typos.

- Move the compiler barrier comment to the top, because it's valid for the
  whole function, not just the legacy branch.
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Ricardo Neri <ricardo.neri-calderon@linux.intel.com>
Link: https://lore.kernel.org/r/20200818053130.GA3161093@gmail.com
parent 86109813
@@ -47,16 +47,19 @@ static inline void iret_to_self(void)
  *
  * b) Text was modified on a different CPU, may subsequently be
  *    executed on this CPU, and you want to make sure the new version
- *    gets executed. This generally means you're calling this in a IPI.
+ *    gets executed. This generally means you're calling this in an IPI.
  *
  * If you're calling this for a different reason, you're probably doing
  * it wrong.
+ *
+ * Like all of Linux's memory ordering operations, this is a
+ * compiler barrier as well.
  */
 static inline void sync_core(void)
 {
 	/*
 	 * The SERIALIZE instruction is the most straightforward way to
-	 * do this but it not universally available.
+	 * do this, but it is not universally available.
 	 */
 	if (static_cpu_has(X86_FEATURE_SERIALIZE)) {
 		serialize();
@@ -67,10 +70,10 @@ static inline void sync_core(void)
 	 * For all other processors, there are quite a few ways to do this.
 	 * IRET-to-self is nice because it works on every CPU, at any CPL
 	 * (so it's compatible with paravirtualization), and it never exits
-	 * to a hypervisor. The only down sides are that it's a bit slow
+	 * to a hypervisor. The only downsides are that it's a bit slow
 	 * (it seems to be a bit more than 2x slower than the fastest
-	 * options) and that it unmasks NMIs. The "push %cs" is needed
-	 * because, in paravirtual environments, __KERNEL_CS may not be a
+	 * options) and that it unmasks NMIs. The "push %cs" is needed,
+	 * because in paravirtual environments __KERNEL_CS may not be a
 	 * valid CS value when we do IRET directly.
 	 *
 	 * In case NMI unmasking or performance ever becomes a problem,
@@ -81,9 +84,6 @@ static inline void sync_core(void)
 	 * CPUID is the conventional way, but it's nasty: it doesn't
 	 * exist on some 486-like CPUs, and it usually exits to a
 	 * hypervisor.
-	 *
-	 * Like all of Linux's memory ordering operations, this is a
-	 * compiler barrier as well.
 	 */
 	iret_to_self();
 }