1. 30 May, 2013 31 commits
  2. 13 May, 2013 9 commits
    • Linux 3.2.45 · 88fd5f3b
      Ben Hutchings authored
    • x86/mm: account for PGDIR_SIZE alignment · 6af66ec5
      jerry.hoemann@hp.com authored
      Patch for 3.0-stable.  Function find_early_table_space removed upstream.
      
      Fixes panic in alloc_low_page due to pgt_buf overflow during
      init_memory_mapping.
      
      find_early_table_space sizes pgt_buf based upon the size of the
      memory being mapped, but it does not take into account the alignment
      of the memory.  When the region being mapped crosses a 512GB (PGDIR_SIZE)
      alignment boundary, a panic from alloc_low_pages occurs.
      
      kernel_physical_mapping_init takes into account PGDIR_SIZE alignment.
      This causes an extra call to alloc_low_page to be made.  This extra call
      isn't accounted for by find_early_table_space and causes a kernel panic.
      
      The fix is to take PGDIR_SIZE alignment into account in find_early_table_space.
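      
      A minimal sketch of the sizing idea, assuming the standard x86 PGDIR_*
      macros; the function name is hypothetical and this is not the actual
      stable patch:
      
      	/* Each PGD entry spans PGDIR_SIZE and needs its own PUD page, so
      	 * round the range out to PGDIR_SIZE boundaries before counting.
      	 * A range that straddles a 512GB boundary is then charged for
      	 * both top-level entries it touches. */
      	static unsigned long pud_pages_needed(unsigned long start, unsigned long end)
      	{
      		unsigned long aligned_start = start & PGDIR_MASK;
      		unsigned long aligned_end = (end + PGDIR_SIZE - 1) & PGDIR_MASK;
      
      		return (aligned_end - aligned_start) >> PGDIR_SHIFT;
      	}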
      Signed-off-by: Jerry Hoemann <jerry.hoemann@hp.com>
      Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
    • r8169: fix vlan tag read ordering. · 88933df6
      françois romieu authored
      commit ce11ff5e upstream.
      
      Control of the receive descriptor must not be returned to the ethernet
      chipset before VLAN tag processing is done.
      
      The VLAN tag receive word is now reset in both the normal and the error path.
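      
      A sketch of the required ordering, with field and flag names that follow
      r8169 conventions (opts1/opts2, DescOwn, RxVlanTag) but are used here
      purely for illustration, not as the driver code:
      
      	static void rx_vlan_then_give_back(struct RxDesc *desc, struct sk_buff *skb)
      	{
      		u32 opts2 = le32_to_cpu(desc->opts2);
      
      		/* Read the VLAN tag out of the descriptor first ... */
      		if (opts2 & RxVlanTag)
      			__vlan_hwaccel_put_tag(skb, swab16(opts2 & 0xffff));
      
      		/* ... and clear the tag word (done in the error path too). */
      		desc->opts2 = 0;
      
      		/* Only now may ownership go back to the chip; doing this any
      		 * earlier would let the hardware overwrite opts2 before the
      		 * tag has been read. */
      		wmb();
      		desc->opts1 |= cpu_to_le32(DescOwn);
      	}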
      Signed-off-by: Francois Romieu <romieu@fr.zoreil.com>
      Spotted-by: Timo Teras <timo.teras@iki.fi>
      Cc: Hayes Wang <hayeswang@realtek.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
    • powerpc: fix numa distance for form0 device tree · 3abcaf2c
      Vaidyanathan Srinivasan authored
      commit 7122beee upstream.
      
      The following commit breaks numa distance setup for old powerpc
      systems that use form0 encoding in the device tree.
      
      commit 41eab6f8
      powerpc/numa: Use form 1 affinity to setup node distance
      
      The device tree node /rtas/ibm,associativity-reference-points indexes
      into /cpus/PowerPCxxxx/ibm,associativity based on the form0 or form1
      encoding detected via the ibm,architecture-vec-5 property.
      
      All modern systems use form1 and the current kernel code is correct.
      However, on older systems with form0 encoding, the numa distance
      will get hard coded as LOCAL_DISTANCE for all nodes.  This causes a
      task scheduling anomaly, since the scheduler will skip building the
      numa level domain (the topmost domain with all cpus) if all numa
      distances are the same.  (The value of 'level' in sched_init_numa()
      will remain 0.)
      
      Prior to the above commit:
      ((from) == (to) ? LOCAL_DISTANCE : REMOTE_DISTANCE)
      
      This patch restores the compatible behavior for old powerpc systems
      whose device tree encodes numa distances as form0.
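      
      A sketch of the restored fallback; 'form1_affinity' mirrors the flag
      used in arch/powerpc/mm/numa.c, while form1_distance() is a hypothetical
      stand-in for the form1 lookup:
      
      	static int __node_distance_sketch(int from, int to)
      	{
      		/* Old form0 device trees carry no usable distance data, so
      		 * fall back to the pre-41eab6f8 two-level scheme. */
      		if (!form1_affinity)
      			return (from == to) ? LOCAL_DISTANCE : REMOTE_DISTANCE;
      
      		/* form1: distance derived from the associativity arrays. */
      		return form1_distance(from, to);
      	}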
      Signed-off-by: Vaidyanathan Srinivasan <svaidy@linux.vnet.ibm.com>
      Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
      Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
    • kernel/audit_tree.c: tree will leak memory when failure occurs in audit_trim_trees() · 0e6f42bb
      Chen Gang authored
      commit 12b2f117 upstream.
      
      audit_trim_trees() calls get_tree().  If a failure occurs we must call
      put_tree().
      
      [akpm@linux-foundation.org: run put_tree() before mutex_lock() for small scalability improvement]
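      
      A schematic of the reference pairing being fixed, not the audit code
      itself; do_trim_work() is a hypothetical stand-in for the per-tree walk:
      
      	static int trim_one_tree(struct audit_tree *tree)
      	{
      		int err;
      
      		get_tree(tree);			/* pin the tree for the walk */
      		err = do_trim_work(tree);	/* may fail partway through  */
      		put_tree(tree);			/* drop the ref on *every* path,
      						 * and before re-taking the mutex */
      
      		mutex_lock(&audit_filter_mutex);
      		/* ... move on to the next tree ... */
      		mutex_unlock(&audit_filter_mutex);
      		return err;
      	}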
      Signed-off-by: Chen Gang <gang.chen@asianux.com>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Eric Paris <eparis@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
    • ixgbe: add missing rtnl_lock in PM resume path · 449132d0
      Benjamin Poirier authored
      commit 34948a94 upstream.
      
      Upon resume from standby, ixgbe may trigger the ASSERT_RTNL() in
      netif_set_real_num_tx_queues(). The call stack is:
      	netif_set_real_num_tx_queues
      	ixgbe_set_num_queues
      	ixgbe_init_interrupt_scheme
      	ixgbe_resume
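      
      A sketch of the locking the fix adds around the resume path (not the
      exact driver diff); holding the RTNL here is what satisfies the
      ASSERT_RTNL() in netif_set_real_num_tx_queues():
      
      	static int ixgbe_resume_sketch(struct ixgbe_adapter *adapter)
      	{
      		int err;
      
      		/* ixgbe_init_interrupt_scheme() ends up in
      		 * netif_set_real_num_tx_queues(), which asserts RTNL. */
      		rtnl_lock();
      		err = ixgbe_init_interrupt_scheme(adapter);
      		rtnl_unlock();
      
      		return err;
      	}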
      Signed-off-by: Benjamin Poirier <bpoirier@suse.de>
      Tested-by: Stephen Ko <stephen.s.ko@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
    • drm/i915: Fix detection of base of stolen memory · 53e587aa
      Chris Wilson authored
      commit e12a2d53 upstream.
      
      The routine to query the base of stolen memory was using the wrong
      registers and the wrong encodings on virtually every platform.
      
      It was not until the G33 refresh that a PCI config register was
      introduced that explicitly said where the stolen memory was. Prior to
      865G there was not even a register that said where the end of usable
      low memory was and where the stolen memory began (or ended depending
      upon chipset). Before then, one has to look at the BIOS memory maps to
      find the Top of Memory. Alas that is not exported by arch/x86 and so we
      have to resort to disabling stolen memory on gen2 for the time being.
      
      Then SandyBridge enlarged the PCI register to a full 32 bits and changed
      the encoding of the address, so even though we happened to be querying
      the right register, we read the wrong bits and ended up using address 0
      the right register, we read the wrong bits and ended up using address 0
      for our stolen data, i.e. notably FBC.
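      
      Roughly, the detection reads a config register on the host bridge and
      masks it according to the generation being probed. The sketch below only
      illustrates that shape; STOLEN_BASE_REG and STOLEN_BASE_MASK are
      placeholders, not the real offsets or encodings:
      
      	static unsigned long stolen_base_sketch(void)
      	{
      		struct pci_dev *bridge = pci_get_bus_and_slot(0, PCI_DEVFN(0, 0));
      		u32 val = 0;
      
      		if (!bridge)
      			return 0;
      
      		/* The base lives in host-bridge config space, not on the GPU
      		 * function itself. */
      		pci_read_config_dword(bridge, STOLEN_BASE_REG, &val);
      		pci_dev_put(bridge);
      
      		/* SandyBridge widened the field to 32 bits and changed the
      		 * encoding, so the mask is generation-dependent. */
      		return val & STOLEN_BASE_MASK;
      	}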
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      [bwh: Backported to 3.2: adjust filename, context]
      Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
    • r8169: fix 8168evl frame padding. · 03000102
      Stefan Bader authored
      commit e5195c1f upstream.
      Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
      Acked-by: Francois Romieu <romieu@fr.zoreil.com>
      Cc: hayeswang <hayeswang@realtek.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
    • sparc64: Fix race in TLB batch processing. · 8431bc6f
      David S. Miller authored
      [ Commits f36391d2 and
        f0af9707 upstream. ]
      
      As reported by Dave Kleikamp, when we emit cross calls to do batched
      TLB flush processing we have a race because we do not synchronize on
      the sibling cpus completing the cross call.
      
      So meanwhile the TLB batch can be reset (tb->tlb_nr set to zero, etc.)
      and either flushes are missed or flushes will flush the wrong
      addresses.
      
      Fix this by using generic infrastructure to synchronize on the
      completion of the cross call.
      
      This first required getting the flush_tlb_pending() call out from
      switch_to() which operates with locks held and interrupts disabled.
      The problem is that smp_call_function_many() cannot be invoked with
      IRQs disabled and this is explicitly checked for with WARN_ON_ONCE().
      
      We get the batch processing outside of locked IRQ disabled sections by
      using some ideas from the powerpc port. Namely, we only batch inside
      of arch_{enter,leave}_lazy_mmu_mode() calls.  If we're not in such a
      region, we flush TLBs synchronously.
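      
      A sketch of the lazy-mmu gating described here (and in item 3 of the
      list below); structure and names follow the sparc64 code but the snippet
      is only illustrative:
      
      	void arch_enter_lazy_mmu_mode(void)
      	{
      		struct tlb_batch *tb = &__get_cpu_var(tlb_batch);
      
      		tb->active = 1;			/* batching allowed from here on */
      	}
      
      	void arch_leave_lazy_mmu_mode(void)
      	{
      		struct tlb_batch *tb = &__get_cpu_var(tlb_batch);
      
      		if (tb->tlb_nr)			/* run whatever got batched */
      			flush_tlb_pending();
      		tb->active = 0;
      	}
      
      	static void tlb_batch_add_one(struct mm_struct *mm, unsigned long vaddr)
      	{
      		struct tlb_batch *tb = &__get_cpu_var(tlb_batch);
      
      		if (!tb->active) {
      			/* Not inside a lazy-mmu region: flush this one page
      			 * synchronously instead of deferring it. */
      			global_flush_tlb_page(mm, vaddr);
      			return;
      		}
      
      		/* ... otherwise append vaddr to the batch as before ... */
      	}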
      
      1) Get rid of xcall_flush_tlb_pending and per-cpu type
         implementations.
      
      2) Do TLB batch cross calls instead via:
      
      	smp_call_function_many()
      		tlb_pending_func()
      			__flush_tlb_pending()
      
      3) Batch only in lazy mmu sequences:
      
      	a) Add 'active' member to struct tlb_batch
      	b) Define __HAVE_ARCH_ENTER_LAZY_MMU_MODE
      	c) Set 'active' in arch_enter_lazy_mmu_mode()
      	d) Run batch and clear 'active' in arch_leave_lazy_mmu_mode()
      	e) Check 'active' in tlb_batch_add_one() and do a synchronous
                 flush if it's clear.
      
      4) Add infrastructure for synchronous TLB page flushes.
      
      	a) Implement __flush_tlb_page and per-cpu variants, patch
      	   as needed.
      	b) Likewise for xcall_flush_tlb_page.
      	c) Implement smp_flush_tlb_page() to invoke the cross-call.
      	d) Wire up global_flush_tlb_page() to the right routine based
                 upon CONFIG_SMP
      
      5) It turns out that singleton batches are very common: 2 out of every
         3 batch flushes have only a single entry in them.
      
         Waiting for the batch flush is very expensive, both because of the
         poll on sibling cpu completion and because passing the tlb batch
         pointer to the sibling cpus involves a shared memory dereference.
      
         Therefore, in flush_tlb_pending(), if there is only one entry in
         the batch, perform a completely asynchronous global_flush_tlb_page()
         instead (sketched below).
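      
      A sketch of the singleton shortcut from item 5, again illustrative
      rather than the exact upstream code:
      
      	void flush_tlb_pending(void)
      	{
      		struct tlb_batch *tb = &get_cpu_var(tlb_batch);
      
      		if (tb->tlb_nr) {
      			if (tb->tlb_nr == 1) {
      				/* Most batches carry a single page: fire an
      				 * asynchronous per-page cross call and skip the
      				 * wait on sibling cpus and the shared-memory
      				 * handoff of the batch pointer. */
      				global_flush_tlb_page(tb->mm, tb->vaddrs[0]);
      			} else {
      				/* Multi-entry batch: cross call with the batch
      				 * and wait for the siblings to finish. */
      				smp_flush_tlb_pending(tb->mm, tb->tlb_nr,
      						      &tb->vaddrs[0]);
      			}
      			tb->tlb_nr = 0;
      		}
      
      		put_cpu_var(tlb_batch);
      	}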
      Reported-by: Dave Kleikamp <dave.kleikamp@oracle.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Acked-by: Dave Kleikamp <dave.kleikamp@oracle.com>
      Signed-off-by: Ben Hutchings <ben@decadent.org.uk>