1. 28 May, 2014 (1 commit)
2. 23 May, 2014 (7 commits)
3. 22 May, 2014 (1 commit)
4. 21 May, 2014 (1 commit)
5. 16 May, 2014 (8 commits)
6. 14 May, 2014 (7 commits)
7. 12 May, 2014 (5 commits)
8. 09 May, 2014 (10 commits)
    •
      Linux 3.15-rc5 · d6d211db
      Linus Torvalds authored
    •
      Merge branch 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip · 181da3c3
      Linus Torvalds authored
      Pull x86 fixes from Peter Anvin:
       "A somewhat unpleasantly large collection of small fixes.  The big ones
        are the __visible tree sweep and a fix for 'earlyprintk=efi,keep'.  It
        was using __init functions with predictably suboptimal results.
      
        Another key fix is a build fix which would produce output that simply
        would not decompress correctly in some configuration, due to the
        existing Makefiles picking up an unfortunate local label and mistaking
        it for the global symbol _end.
      
        Additional fixes include the handling of 64-bit numbers when setting
        the vdso data page (a latent bug which became manifest when i386
        started exporting a vdso with time functions), a fix to the new MSR
        manipulation accessors which would cause features to not get properly
        unblocked, a build fix for 32-bit userland, and a few new platform
        quirks"
      
      * 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
        x86, vdso, time: Cast tv_nsec to u64 for proper shifting in update_vsyscall()
        x86: Fix typo in MSR_IA32_MISC_ENABLE_LIMIT_CPUID macro
        x86: Fix typo preventing msr_set/clear_bit from having an effect
        x86/intel: Add quirk to disable HPET for the Baytrail platform
        x86/hpet: Make boot_hpet_disable extern
        x86-64, build: Fix stack protector Makefile breakage with 32-bit userland
        x86/reboot: Add reboot quirk for Certec BPC600
        asmlinkage: Add explicit __visible to drivers/*, lib/*, kernel/*
        asmlinkage, x86: Add explicit __visible to arch/x86/*
        asmlinkage: Revert "lto: Make asmlinkage __visible"
        x86, build: Don't get confused by local symbols
        x86/efi: earlyprintk=efi,keep fix
    •
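The tv_nsec cast in the first fix above is an instance of a general C pitfall: a shift on a 32-bit operand is computed in 32 bits, so high bits are lost before the result is widened. A minimal host-runnable sketch (the helper names are illustrative, not the kernel's update_vsyscall() code):

```c
#include <assert.h>
#include <stdint.h>

/* With a 32-bit operand the shift is performed in 32 bits and the
 * high bits are discarded before the result widens to 64 bits;
 * casting to a 64-bit type first preserves them. */
static uint64_t shift_narrow(uint32_t nsec, int shift)
{
    return nsec << shift;            /* 32-bit shift: high bits lost */
}

static uint64_t shift_wide(uint32_t nsec, int shift)
{
    return (uint64_t)nsec << shift;  /* widen first: full value kept */
}
```

For example, shifting 0x01234567 left by 8 yields 0x23456700 in the narrow version (top byte gone) but 0x123456700 once the operand is widened first.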
      arm64: mm: use inner-shareable barriers for inner-shareable maintenance · dc60b777
      Will Deacon authored
      In order to ensure ordering and completion of inner-shareable maintenance
      instructions (cache and TLB) on AArch64, we can use the -ish suffix to
      the dmb and dsb instructions respectively.
      
      This patch updates our low-level cache and tlb maintenance routines to
      use the inner-shareable barrier variants where appropriate.
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    •
      arm64: kvm: use inner-shareable barriers for inner-shareable maintenance · ee9e101c
      Will Deacon authored
      In order to ensure completion of inner-shareable maintenance instructions
      (cache and TLB) on AArch64, we can use the -ish suffix to the dsb
      instruction.
      
      This patch relaxes our dsb sy instructions to dsb ish where possible.
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Acked-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    •
      arm64: head: fix cache flushing and barriers in set_cpu_boot_mode_flag · d0488597
      Will Deacon authored
      set_cpu_boot_mode_flag is used to identify which exception levels are
      encountered across the system by CPUs trying to enter the kernel. The
      basic algorithm is: if a CPU is booting at EL2, it will set a flag at
      an offset of #4 from __boot_cpu_mode, a cacheline-aligned variable.
      Otherwise, a flag is set at an offset of zero into the same cacheline.
      This enables us to check that all CPUs booted at the same exception
      level.
      
      This cacheline is written with the stage-1 MMU off (that is, via a
      strongly-ordered mapping) and will bypass any clean lines in the cache,
      leading to potential coherence problems when the variable is later
      checked via the normal, cacheable mapping of the kernel image.
      
      This patch reworks the broken flushing code so that we:
      
        (1) Use a DMB to order the strongly-ordered write of the cacheline
            against the subsequent cache-maintenance operation (by-VA
            operations only hazard against normal, cacheable accesses).
      
        (2) Use a single dc ivac instruction to invalidate any clean lines
            containing a stale copy of the line after it has been updated.
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    •
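The two-slot flag layout described above can be sketched in plain C (names and values here follow the kernel's BOOT_CPU_MODE constants, but the functions are illustrative stand-ins for the real assembly, and the cache maintenance is reduced to a comment):

```c
#include <stdint.h>

/* Illustrative sketch: each booting CPU writes one of two adjacent
 * 32-bit slots depending on the exception level it entered at.
 * Slot [0] (offset #0) is set by CPUs booting below EL2; slot [1]
 * (offset #4) by CPUs booting at EL2.  A mixed-EL boot is then just
 * "both slots written". */
#define BOOT_CPU_MODE_EL1 0x0e11
#define BOOT_CPU_MODE_EL2 0x0e12

static uint32_t boot_cpu_mode[2];

static void set_cpu_boot_mode_flag(int booted_at_el2)
{
    if (booted_at_el2)
        boot_cpu_mode[1] = BOOT_CPU_MODE_EL2;
    else
        boot_cpu_mode[0] = BOOT_CPU_MODE_EL1;
    /* The real code runs with the MMU off, so it must then order the
     * write with a DMB and invalidate the line with dc ivac so that
     * later cacheable reads observe the update. */
}

static int cpus_booted_at_mixed_el(void)
{
    return boot_cpu_mode[0] != 0 && boot_cpu_mode[1] != 0;
}
```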
      arm64: barriers: use barrier() instead of smp_mb() when !SMP · be6209a6
      Will Deacon authored
      The recently introduced acquire/release accessors refer to smp_mb()
      in the !CONFIG_SMP case. This is confusing when reading the code, so use
      barrier() directly when we know we're UP.
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    •
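On a UP kernel there is no other observer to order against, so the hardware barrier can degrade to a compiler-only fence. A sketch of the distinction, using the common kernel definition of barrier() (the function below is a hypothetical example, not kernel code):

```c
/* barrier(): compiler-only fence -- forbids the compiler from
 * reordering memory accesses across it, but emits no instruction.
 * The SMP variant would instead expand to a hardware barrier such as
 *   asm volatile("dsb ish" ::: "memory")   (AArch64 only)
 */
#define barrier() __asm__ __volatile__("" ::: "memory")

static int flag, data;

static void publish(int value)
{
    data = value;
    barrier();   /* compiler may not sink the data store below this point */
    flag = 1;
}
```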
      arm64: barriers: wire up new barrier options · 493e6874
      Will Deacon authored
      Now that all callers of the barrier macros are updated to pass the
      mandatory options, update the macros so the option is actually used.
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    •
      arm64: barriers: make use of barrier options with explicit barriers · 98f7685e
      Will Deacon authored
      When calling our low-level barrier macros directly, we can often make
      do with more relaxed behaviour than the default "all accesses, full
      system" option.
      
      This patch updates the users of dsb() to specify the option which they
      actually require.
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    •
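The macro plumbing behind these barrier options is preprocessor stringification: the option token is pasted into the instruction mnemonic, so dsb(ish) emits "dsb ish". A host-runnable sketch that builds the mnemonic as a C string instead of inline assembly, so the expansion is visible without an AArch64 toolchain:

```c
/* The arm64 macros are roughly:
 *   #define dsb(opt)  asm volatile("dsb " #opt : : : "memory")
 * Here #opt is stringified and the adjacent string literals
 * concatenate; we return the string rather than emit the
 * instruction, purely for illustration. */
#define dsb(opt) ("dsb " #opt)
#define dmb(opt) ("dmb " #opt)
```

So dsb(ishst), for example, expands to the string "dsb ishst", exactly the mnemonic the real macro would feed to the assembler.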
      arm64: mm: Optimise tlb flush logic where we have >4K granule · fa48e6f7
      Steve Capper authored
      The TLB maintenance functions __cpu_flush_user_tlb_range and
      __cpu_flush_kern_tlb_range do not take the page granule into account
      when looping through the address range, and repeatedly flush TLB
      entries for the same page when operating with 64K pages.

      This patch re-works the logic so that we instead advance the loop by
      1 << (PAGE_SHIFT - 12), avoiding repeated flushes of the same page.
      
      Also the routines have been converted from assembler to static inline
      functions to aid with legibility and potential compiler optimisations.
      
      The isb() has been removed from flush_tlb_kernel_range(.) as it is
      only needed when changing the execute permission of a mapping. If one
      needs to set an area of the kernel as execute/non-execute an isb()
      must be inserted after the call to flush_tlb_kernel_range.
      
      Cc: Laura Abbott <lauraa@codeaurora.org>
      Signed-off-by: Steve Capper <steve.capper@linaro.org>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    •
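The stride change above can be sketched with a small counting loop (illustrative only: addresses here are in 4K units as in the kernel's loop, but the real routines issue tlbi instructions rather than counting):

```c
/* Count TLB-flush operations for [start, end) given the page granule.
 * The loop variable is in units of 4K (addr >> 12), so with a larger
 * granule we must advance by PAGE_SIZE/4K per iteration -- otherwise
 * a 64K-page kernel would flush the same page sixteen times. */
static unsigned long tlb_flush_ops(unsigned long start, unsigned long end,
                                   unsigned int page_shift)
{
    unsigned long stride = 1UL << (page_shift - 12);  /* 4K units per page */
    unsigned long ops = 0;
    unsigned long addr;

    for (addr = start >> 12; addr < (end >> 12); addr += stride)
        ops++;
    return ops;
}
```

For a 128K range this gives 32 operations with 4K pages (PAGE_SHIFT = 12) but only 2 with 64K pages (PAGE_SHIFT = 16), matching the optimisation the commit describes.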
      arm64: xchg: prevent warning if return value is unused · e1dfda9c
      Will Deacon authored
      Some users of xchg() don't bother using the return value, which results
      in a compiler warning like the following (from kgdb):
      
      In file included from linux/arch/arm64/include/asm/atomic.h:27:0,
                       from include/linux/atomic.h:4,
                       from include/linux/spinlock.h:402,
                       from include/linux/seqlock.h:35,
                       from include/linux/time.h:5,
                       from include/uapi/linux/timex.h:56,
                       from include/linux/timex.h:56,
                       from include/linux/sched.h:19,
                       from include/linux/pid_namespace.h:4,
                       from kernel/debug/debug_core.c:30:
      kernel/debug/debug_core.c: In function ‘kgdb_cpu_enter’:
      linux/arch/arm64/include/asm/cmpxchg.h:75:3: warning: value computed is not used [-Wunused-value]
        ((__typeof__(*(ptr)))__xchg((unsigned long)(x),(ptr),sizeof(*(ptr))))
         ^
      linux/arch/arm64/include/asm/atomic.h:132:30: note: in expansion of macro ‘xchg’
       #define atomic_xchg(v, new) (xchg(&((v)->counter), new))
      
      kernel/debug/debug_core.c:504:4: note: in expansion of macro ‘atomic_xchg’
          atomic_xchg(&kgdb_active, cpu);
          ^
      
      This patch makes use of the same trick as we do for cmpxchg, by assigning
      the return value to a dummy variable in the xchg() macro itself.
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    •
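The dummy-variable trick can be sketched with a GNU statement expression, as in the cmpxchg macros the commit refers to. This version substitutes the GCC/Clang __atomic builtin for the arm64 __xchg helper, so it runs on any host; the shape of the macro, not the exchange primitive, is the point:

```c
/* Wrapping the exchange in a statement expression and assigning the
 * result to a temporary means a caller that ignores the value no
 * longer triggers -Wunused-value, while a caller that uses it still
 * gets the old value back.  (GNU C extension; mirrors the fix's
 * technique, not the kernel's exact macro.) */
#define xchg(ptr, x)                                                    \
({                                                                      \
        __typeof__(*(ptr)) __ret;                                       \
        __ret = __atomic_exchange_n((ptr), (x), __ATOMIC_SEQ_CST);      \
        __ret;                                                          \
})
```

A caller such as `xchg(&kgdb_active, cpu);` can now discard the result silently, while `old = xchg(&v, new);` behaves as before.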