1. 31 Oct, 2014 40 commits
    • kvm: don't take vcpu mutex for obviously invalid vcpu ioctls · 007a4d98
      David Matlack authored
      commit 2ea75be3 upstream.
      
      vcpu ioctls can hang the calling thread if issued while a vcpu is running.
      However, invalid ioctls can happen when userspace tries to probe the kind
      of file descriptors (e.g. isatty() calls ioctl(TCGETS)); in that case,
      we know the ioctl is going to be rejected as invalid anyway and we can
      fail before trying to take the vcpu mutex.
      
      This patch does not change functionality, it just makes invalid ioctls
      fail faster.
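      
      A minimal sketch of the idea, assuming the check sits at the top of
      kvm_vcpu_ioctl() before vcpu_load() (which is what takes the vcpu mutex);
      the exact placement and form upstream may differ:
      
              /* Probing ioctls such as TCGETS are not KVM ioctls at all;
               * reject them here instead of sleeping on the vcpu mutex. */
              if (unlikely(_IOC_TYPE(ioctl) != KVMIO))
                      return -EINVAL;
      
              r = vcpu_load(vcpu);    /* only now do we take vcpu->mutex */
              if (r)
                      return r;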
      Signed-off-by: David Matlack <dmatlack@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      007a4d98
    • KVM: s390: unintended fallthrough for external call · dc17be89
      Christian Borntraeger authored
      commit f346026e upstream.
      
      We must not fall through if the conditions for external call are not met.
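      
      Purely illustrative sketch (the helper names below are hypothetical, not the
      actual arch/s390/kvm identifiers) of what such an unintended fallthrough
      looks like and how the fix reads:
      
              case KVM_S390_INT_EXTERNAL_CALL:
                      if (!extcall_conditions_met(vcpu))      /* hypothetical helper */
                              break;  /* previously fell through to the next case */
                      rc = deliver_external_call(vcpu);       /* hypothetical helper */
                      break;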
      Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
      Reviewed-by: Thomas Huth <thuth@linux.vnet.ibm.com>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      dc17be89
    • kvm: fix potentially corrupt mmio cache · 8d09d4af
      David Matlack authored
      commit ee3d1570 upstream.
      
      vcpu exits and memslot mutations can run concurrently as long as the
      vcpu does not acquire the slots mutex. Thus it is theoretically possible
      for memslots to change underneath a vcpu that is handling an exit.
      
      If we increment the memslot generation number again after
      synchronize_srcu_expedited(), vcpus can safely cache memslot generation
      without maintaining a single rcu_dereference through an entire vm exit.
      And much of the x86/kvm code does not maintain a single rcu_dereference
      of the current memslots during each exit.
      
      We can prevent the following case:
      
         vcpu (CPU 0)                             | thread (CPU 1)
      --------------------------------------------+--------------------------
      1  vm exit                                  |
      2  srcu_read_unlock(&kvm->srcu)             |
      3  decide to cache something based on       |
           old memslots                           |
      4                                           | change memslots
                                                  | (increments generation)
      5                                           | synchronize_srcu(&kvm->srcu);
      6  retrieve generation # from new memslots  |
      7  tag cache with new memslot generation    |
      8  srcu_read_unlock(&kvm->srcu)             |
      ...                                         |
         <action based on cache occurs even       |
          though the caching decision was based   |
          on the old memslots>                    |
      ...                                         |
         <action *continues* to occur until next  |
          memslot generation change, which may    |
          be never>                               |
                                                  |
      
      By incrementing the generation after synchronizing with kvm->srcu readers,
      we ensure that the generation retrieved in (6) will become invalid soon
      after (8).
      
      Keeping the existing increment is not strictly necessary, but we
      do keep it and just move it for consistency from update_memslots to
      install_new_memslots.  It invalidates old cached MMIOs immediately,
      instead of having to wait for the end of synchronize_srcu_expedited,
      which makes the code more clearly correct in case CPU 1 is preempted
      right after synchronize_srcu() returns.
      
      To avoid halving the generation space in SPTEs, always presume that the
      low bit of the generation is zero when reconstructing a generation number
      out of an SPTE.  This effectively disables MMIO caching in SPTEs during
      the call to synchronize_srcu_expedited.  Using the low bit this way is
      somewhat like a seqcount---where the protected thing is a cache, and
      instead of retrying we can simply punt if we observe the low bit to be 1.
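      
      A rough, self-contained sketch of that seqcount-like use of the low
      generation bit (the constant and helper names here are illustrative, not the
      actual kvm/x86 MMIO-SPTE macros):
      
              #define MMIO_GEN_LOW_BIT   1ULL   /* "memslot update in flight" flag */
      
              /* When packing a generation into an SPTE, presume the low bit is zero. */
              static u64 spte_pack_gen(u64 memslots_gen)
              {
                      return memslots_gen & ~MMIO_GEN_LOW_BIT;
              }
      
              /* A cached MMIO SPTE is only usable if no update is in flight and the
               * stored generation still matches; otherwise punt instead of retrying. */
              static bool spte_gen_valid(u64 spte_gen, u64 memslots_gen)
              {
                      if (memslots_gen & MMIO_GEN_LOW_BIT)
                              return false;
                      return spte_gen == (memslots_gen & ~MMIO_GEN_LOW_BIT);
              }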
      Signed-off-by: David Matlack <dmatlack@google.com>
      Reviewed-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
      Reviewed-by: David Matlack <dmatlack@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      8d09d4af
    • kvm: x86: fix stale mmio cache bug · 3fe0bc33
      David Matlack authored
      commit 56f17dd3 upstream.
      
      The following events can lead to an incorrect KVM_EXIT_MMIO bubbling
      up to userspace:
      
      (1) Guest accesses gpa X without a memory slot. The gfn is cached in
      struct kvm_vcpu_arch (mmio_gfn). On Intel EPT-enabled hosts, KVM sets
      the SPTE write-execute-noread so that future accesses cause
      EPT_MISCONFIGs.
      
      (2) Host userspace creates a memory slot via KVM_SET_USER_MEMORY_REGION
      covering the page just accessed.
      
      (3) Guest attempts to read or write to gpa X again. On Intel, this
      generates an EPT_MISCONFIG. The memory slot generation number that
      was incremented in (2) would normally take care of this but we fast
      path mmio faults through quickly_check_mmio_pf(), which only checks
      the per-vcpu mmio cache. Since we hit the cache, KVM passes a
      KVM_EXIT_MMIO up to userspace.
      
      This patch fixes the issue by using the memslot generation number
      to validate the mmio cache.
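      
      A hedged sketch of the approach; the field and helper names below are
      illustrative stand-ins for what the patch adds around vcpu_cache_mmio_info()
      and the quickly_check_mmio_pf() fast path:
      
              /* When caching an MMIO gfn, also record the memslot generation. */
              vcpu->arch.mmio_gfn = gfn;
              vcpu->arch.mmio_gen = kvm_memslots(vcpu->kvm)->generation;
      
              /* On the fast path, a cache hit only counts if the generation still
               * matches; the slot creation in (2) bumps it and so invalidates us. */
              static bool mmio_info_in_cache(struct kvm_vcpu *vcpu, gpa_t addr)
              {
                      if (vcpu->arch.mmio_gen != kvm_memslots(vcpu->kvm)->generation)
                              return false;
                      return vcpu->arch.mmio_gfn == (addr >> PAGE_SHIFT);
              }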
      Signed-off-by: David Matlack <dmatlack@google.com>
      [xiaoguangrong: adjust the code to make it simpler for stable-tree fix.]
      Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
      Reviewed-by: David Matlack <dmatlack@google.com>
      Reviewed-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
      Tested-by: David Matlack <dmatlack@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      3fe0bc33
    • pci_ids: Add support for Intel Quark ILB · e7fd6c7a
      Josef Ahmad authored
      commit bb048713 upstream.
      
      This patch adds the PCI id for the Intel Quark ILB.
      It will be used by the GPIO and multifunction device drivers.
      Signed-off-by: Josef Ahmad <josef.ahmad@intel.com>
      Acked-by: Bjorn Helgaas <bhelgaas@google.com>
      Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
      Signed-off-by: Lee Jones <lee.jones@linaro.org>
      Signed-off-by: Chang Rebecca Swee Fun <rebecca.swee.fun.chang@intel.com>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      e7fd6c7a
    • usb: pch_udc: usb gadget device support for Intel Quark X1000 · ceac347d
      Bryan O'Donoghue authored
      commit a68df706 upstream.
      
      This patch enables the USB gadget device on the Intel Quark X1000.
      Signed-off-by: Bryan O'Donoghue <bryan.odonoghue@intel.com>
      Signed-off-by: Bing Niu <bing.niu@intel.com>
      Signed-off-by: Alvin (Weike) Chen <alvin.chen@intel.com>
      Signed-off-by: Felipe Balbi <balbi@ti.com>
      Signed-off-by: Chang Rebecca Swee Fun <rebecca.swee.fun.chang@intel.com>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      ceac347d
    • Btrfs: fix race in WAIT_SYNC ioctl · 8ef9958b
      Sage Weil authored
      commit 42383020 upstream.
      
      We check whether transid is already committed via last_trans_committed and
      then search through trans_list for pending transactions.  If
      last_trans_committed is updated by btrfs_commit_transaction after we check
      it (there is no locking), we will fail to find the committed transaction
      and return EINVAL to the caller.  This has been observed occasionally by
      ceph-osd (which uses this ioctl heavily).
      
      Fix by rechecking whether the provided transid <= last_trans_committed
      after the search fails, and if so return 0.
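      
      A simplified sketch of the recheck in btrfs_wait_for_commit() (locking and
      the list walk are elided, and the exact error handling upstream may differ):
      
              /* Searched trans_list and did not find the transaction. */
              if (!found) {
                      /* It may have committed between our first check and the
                       * search; in that case report success, not -EINVAL. */
                      if (transid <= root->fs_info->last_trans_committed)
                              ret = 0;
                      else
                              ret = -EINVAL;
                      goto out;
              }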
      Signed-off-by: Sage Weil <sage@redhat.com>
      Signed-off-by: Chris Mason <clm@fb.com>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      8ef9958b
    • Btrfs: fix build_backref_tree issue with multiple shared blocks · b346e6df
      Josef Bacik authored
      commit bbe90514 upstream.
      
      Marc Merlin sent me a broken fs image months ago where it would blow up in the
      upper->checked BUG_ON() in build_backref_tree.  This is because we had a
      scenario like this
      
      block a -- level 4 (not shared)
         |
      block b -- level 3 (reloc block, shared)
         |
      block c -- level 2 (not shared)
         |
      block d -- level 1 (shared)
         |
      block e -- level 0 (shared)
      
      We go to build a backref tree for block e; we notice block d is shared and add
      it to the list of blocks to look up backrefs for.  Now when we loop around
      we will check edges for the block, so we will see we looked up block c last
      time.  So we look up block d and then see that the block that points to it is
      block c, and we can just skip that edge since we've already been up this path.
      The problem is that because we clear need_check when we see block d (as it is
      shared), we never add block b as needing to be checked.  And because block c is
      already in our path we bail out before we walk up to block b and add it to the
      backref check list.
      
      To fix this we need to reset need_check if we trip over a block that doesn't
      need to be checked.  This will make sure that any subsequent blocks in the path
      as we're walking up afterwards are added to the list to be processed.  With this
      patch I can now mount Marc's fs image and it'll complete the balance without
      panicking.  Thanks,
      Reported-by: Marc MERLIN <marc@merlins.org>
      Signed-off-by: Josef Bacik <jbacik@fb.com>
      Signed-off-by: Chris Mason <clm@fb.com>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      b346e6df
    • Btrfs: cleanup error handling in build_backref_tree · 3ffb5771
      Josef Bacik authored
      commit 75bfb9af upstream.
      
      When balance panics it tends to panic in the
      
      BUG_ON(!upper->checked);
      
      test, because it means it couldn't build the backref tree properly.  This is
      annoying to users and frankly a recoverable error, nothing in this function is
      actually fatal since it is just an in-memory building of the backrefs for a
      given bytenr.  So go through and change all the BUG_ON()'s to ASSERT()'s, and
      fix the BUG_ON(!upper->checked) thing to just return an error.
      
      This patch also fixes the error handling so it tears down the work we've done
      properly.  This code was horribly broken since we always just panicked instead
      of actually erroring out, so it needed to be completely re-worked.  With this
      patch my broken image no longer panics when I mount it.  Thanks,
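      
      In sketch form, the other BUG_ON()s become ASSERT()s, and the problematic
      check turns into an error return along these lines (the error value and label
      are illustrative):
      
              /* Before: a recoverable in-memory inconsistency took the box down. */
              BUG_ON(!upper->checked);
      
              /* After: unwind the partially built backref tree and report failure. */
              if (!upper->checked) {
                      err = -EINVAL;
                      goto out;
              }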
      Signed-off-by: Josef Bacik <jbacik@fb.com>
      Signed-off-by: Chris Mason <clm@fb.com>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      3ffb5771
    • Btrfs: try not to ENOSPC on log replay · 1125942d
      Josef Bacik authored
      commit 1d52c78a upstream.
      
      When doing log replay we may have to update inodes, which traditionally goes
      through our delayed inode stuff.  This will try to move space over from the
      trans handle, but we don't reserve space in our trans handle on replay since we
      don't know how much we will need, so instead we try to flush.  But because we
      have a trans handle open we won't flush anything, so if we are out of reserve
      space we will simply return ENOSPC.  Since we know that if an operation made it
      into the log then we definitely had space before the box bought the farm, we
      don't need to worry about doing this space reservation.  Use the
      fs_info->log_root_recovering flag to skip the delayed inode stuff and update the
      item directly.  Thanks,
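      
      A hedged sketch of the shape of the change in btrfs_update_inode() (the real
      condition has additional clauses, e.g. for free-space inodes):
      
              /* During log replay, skip the delayed-inode path (it would try to
               * reserve space we never set aside) and update the item directly. */
              if (!root->fs_info->log_root_recovering)
                      ret = btrfs_delayed_update_inode(trans, root, inode);
              else
                      ret = btrfs_update_inode_item(trans, root, inode);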
      Signed-off-by: Josef Bacik <jbacik@fb.com>
      Signed-off-by: Chris Mason <clm@fb.com>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      1125942d
    • btrfs: wake up transaction thread from SYNC_FS ioctl · 40b69f03
      David Sterba authored
      commit 2fad4e83 upstream.
      
      The transaction thread may want to do more work, namely it pokes the
      cleaner kthread that will start processing uncleaned subvols.
      
      This can be triggered by the user via the 'btrfs fi sync' command; otherwise
      there was a delay of up to 30 seconds before the cleaner started to clean
      old snapshots.
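      
      The change itself is tiny; in the SYNC_FS ioctl path it amounts to roughly:
      
              /* btrfs_ioctl_sync(): after kicking off the commit, poke the
               * transaction kthread so the cleaner runs promptly instead of
               * waiting for its periodic wakeup. */
              wake_up_process(fs_info->transaction_kthread);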
      Signed-off-by: David Sterba <dsterba@suse.cz>
      Signed-off-by: Chris Mason <clm@fb.com>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      40b69f03
    • sparc64: Kill unnecessary tables and increase MAX_BANKS. · f0a4aeb4
      David S. Miller authored
      [ Upstream commit d195b71b ]
      
      swapper_low_pmd_dir and swapper_pud_dir are actually completely
      useless and unnecessary.
      
      We just need swapper_pg_dir[].  Naturally the other page table chunks
      will be allocated on an as-needed basis.  Since the kernel actually
      accesses these tables in the PAGE_OFFSET view, there is not even a TLB
      locality advantage of placing them in the kernel image.
      
      Use the hard coded vmlinux.ld.S slot for swapper_pg_dir which is
      naturally page aligned.
      
      Increase MAX_BANKS to 1024 in order to handle heavily fragmented
      virtual guests.
      
      Even with this MAX_BANKS increase, the kernel is 20K+ smaller.
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Acked-by: Bob Picco <bob.picco@oracle.com>
      f0a4aeb4
    • sparc64: sparse irq · b11148d7
      bob picco authored
      [ Upstream commit ee6a9333 ]
      
      This patch attempts to do a few things. The highlights are: 1) enable
      SPARSE_IRQ unconditionally, 2) kill off the !SPARSE_IRQ code, 3) allocate
      ivector_table at boot time, and 4) default to the cookie-only VIRQ mechanism
      for supported firmware. The first firmware with cookie-only support, for
      me, appears on T5. You can optionally force the HV firmware out of
      cookie-only mode, which falls back to the sysino support.
      
      The sysino is a deprecated HV mechanism according to the most recent
      SPARC Virtual Machine Specification. HV_GRP_INTR is what controls the
      cookie/sysino firmware versioning.
      
      The history of this interface is:
      
      1) Major version 1.0 only supported sysino based interrupt interfaces.
      
      2) Major version 2.0 added cookie based VIRQs, however due to the fact
         that OSs were using the VIRQs without negotiating major version
         2.0 (Linux and Solaris are both guilty), the VIRQ calls were
         allowed even with major version 1.0
      
         To complicate things even further, the VIRQ interfaces were only
         actually hooked up in the hypervisor for LDC interrupt sources.
         VIRQ calls on other device types would result in HV_EINVAL errors.
      
         So effectively, major version 2.0 is unusable.
      
      3) Major version 3.0 was created to signal use of VIRQs and the fact
         that the hypervisor has these calls hooked up for all interrupt
         sources, not just those for LDC devices.
      
      A new boot option, hvirq, is provided in case cookie-only HV support has
      issues. It selects the major version requested for HV_GRP_INTR and is thus
      related to HV API versioning.  The code attempts major=3 first by default;
      the option can be used to override this default (see the example below).
      
      I've tested with SPARSE_IRQ on T5-8, M7-4 and T4-X and Jalapeño.
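      
      For example, if cookie-only VIRQ support misbehaves on a particular machine,
      the older sysino interface can be requested from the kernel command line
      along these lines (illustrative; hvirq takes the HV_GRP_INTR major version
      to attempt):
      
              ... hvirq=1 ...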
      Signed-off-by: Bob Picco <bob.picco@oracle.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      b11148d7
    • sparc64: Adjust vmalloc region size based upon available virtual address bits. · a31c967b
      David S. Miller authored
      [ Upstream commit bb4e6e85 ]
      
      In order to accommodate embedded per-cpu allocation with large numbers
      of cpus and numa nodes, we have to use as much virtual address space
      as possible for the vmalloc region.  Otherwise we can get things like:
      
      PERCPU: max_distance=0x380001c10000 too large for vmalloc space 0xff00000000
      
      So, once we select a value for PAGE_OFFSET, derive the size of the
      vmalloc region based upon that.
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Acked-by: Bob Picco <bob.picco@oracle.com>
      a31c967b
    • sparc64: Increase MAX_PHYS_ADDRESS_BITS to 53. · 7f3fde55
      David S. Miller authored
      commit 7c0fa0f2 upstream.
      
      Make sure, at compile time, that the kernel can properly support
      whatever MAX_PHYS_ADDRESS_BITS is defined to.
      
      On M7 chips, use a max_phys_bits value of 49.
      
      Based upon a patch by Bob Picco.
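      
      A purely illustrative sketch of the kind of compile-time guard being referred
      to (the real check compares against what the sparc64 page-table and PTE
      layout can actually express; the names and bound shown are placeholders):
      
              #define MAX_PHYS_ADDRESS_BITS   53
              #define MAX_REPRESENTABLE_BITS  53   /* derived from the PTE layout */
      
              #if MAX_PHYS_ADDRESS_BITS > MAX_REPRESENTABLE_BITS
              #error MAX_PHYS_ADDRESS_BITS cannot be represented by the page tables
              #endif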
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Acked-by: Bob Picco <bob.picco@oracle.com>
      7f3fde55
    • sparc64: Use kernel page tables for vmemmap. · b7be1597
      David S. Miller authored
      [ Upstream commit c06240c7 ]
      
      For sparse memory configurations, the vmemmap array behaves terribly
      and it takes up an inordinate amount of space in the BSS section of
      the kernel image unconditionally.
      
      Just build huge PMDs and look them up just like we do for TLB misses
      in the vmalloc area.
      
      Kernel BSS shrinks by about 2MB.
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Acked-by: Bob Picco <bob.picco@oracle.com>
      b7be1597
    • sparc64: Fix physical memory management regressions with large max_phys_bits. · cac611bd
      David S. Miller authored
      [ Upstream commit 0dd5b7b0 ]
      
      If max_phys_bits needs to be > 43 (e.g. for T4 chips), things like
      DEBUG_PAGEALLOC stop working because the 3-level page tables only
      can cover up to 43 bits.
      
      Another problem is that when we increased MAX_PHYS_ADDRESS_BITS up to
      47, several statically allocated tables became enormous.
      
      Compounding this is that we will need to support up to 49 bits of
      physical addressing for M7 chips.
      
      The two tables in question are sparc64_valid_addr_bitmap and
      kpte_linear_bitmap.
      
      The first holds a bitmap, with 1 bit for each 4MB chunk of physical
      memory, indicating whether that chunk actually exists in the machine
      and is valid.
      
      The second table is a set of 2-bit values which tell how large of a
      mapping (4MB, 256MB, 2GB, 16GB, respectively) we can use at each 256MB
      chunk of ram in the system.
      
      These tables are huge and take up an enormous amount of the BSS
      section of the sparc64 kernel image.  Specifically, the
      sparc64_valid_addr_bitmap is 4MB, and the kpte_linear_bitmap is 128K.
      
      So let's solve the space wastage and the DEBUG_PAGEALLOC problem
      at the same time, by using the kernel page tables (as designed) to
      manage this information.
      
      We have to keep using large mappings when DEBUG_PAGEALLOC is disabled,
      and we do this by encoding huge PMDs and PUDs.
      
      On a T4-2 with 256GB of ram the kernel page table takes up 16K with
      DEBUG_PAGEALLOC disabled and 256MB with it enabled.  Furthermore, this
      memory is dynamically allocated at run time rather than coded
      statically into the kernel image.
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Acked-by: Bob Picco <bob.picco@oracle.com>
      cac611bd
    • sparc64: Adjust KTSB assembler to support larger physical addresses. · 4d00d3e2
      David S. Miller authored
      [ Upstream commit 8c82dc0e ]
      
      As currently coded the KTSB accesses in the kernel only support up to
      47 bits of physical addressing.
      
      Adjust the instruction and patching sequence in order to support
      arbitrary 64-bit addresses.
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Acked-by: Bob Picco <bob.picco@oracle.com>
      4d00d3e2
    • sparc64: Define VA hole at run time, rather than at compile time. · f3106616
      David S. Miller authored
      [ Upstream commit 4397bed0 ]
      
      Now that we use 4-level page tables, we can provide up to 53-bits of
      virtual address space to the user.
      
      Adjust the VA hole based upon the capabilities of the cpu type probed.
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Acked-by: Bob Picco <bob.picco@oracle.com>
      f3106616
    • sparc64: Switch to 4-level page tables. · 0e44ea2e
      David S. Miller authored
      [ Upstream commit ac55c768 ]
      
      This has become necessary with chips that support more than 43-bits
      of physical addressing.
      
      Based almost entirely upon a patch by Bob Picco.
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Acked-by: Bob Picco <bob.picco@oracle.com>
      0e44ea2e
    • sparc64: T5 PMU · 666892de
      bob picco authored
      commit 05aa1651 upstream.
      
      The T5 (niagara5) has different PCR-related HV fast trap values and a new
      HV API group. This patch utilizes these and shares code with niagara4 where
      possible.
      
      We use the same sparc_pmu, niagara4_pmu. Should there be a new effort to
      obtain the MCU perf statistics, this would have to be changed.
      
      Cc: sparclinux@vger.kernel.org
      Signed-off-by: Bob Picco <bob.picco@oracle.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      666892de
    • sparc64: cpu hardware caps support for sparc M6 and M7 · 45a61adc
      Allen Pais authored
      commit 40831625 upstream.
      Signed-off-by: Allen Pais <allen.pais@oracle.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      45a61adc
    • sparc64: support M6 and M7 for building CPU distribution map · 4891b830
      Allen Pais authored
      commit 9bd3ee33 upstream.
      
      Add M6 and M7 chip types in cpumap.c to correctly build a CPU distribution map that spans all online CPUs.
      Signed-off-by: Allen Pais <allen.pais@oracle.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      4891b830
    • sparc64: correctly recognise M6 and M7 cpu type · 6e7ffc94
      Allen Pais authored
      commit cadbb580 upstream.
      
      The following patch adds support for correctly
      recognising the M6 and M7 cpu types.
      Signed-off-by: Allen Pais <allen.pais@oracle.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      6e7ffc94
    • sparc64: Fix hibernation code refrence to PAGE_OFFSET. · dca89ea9
      David S. Miller authored
      commit 9d0713ed upstream.
      
      We changed PAGE_OFFSET to be a variable rather than a constant,
      but this reference here in the hibernate assembler got missed.
      Signed-off-by: David S. Miller <davem@davemloft.net>
      dca89ea9
    • sparc64: Add basic validations to {pud,pmd}_bad(). · a553c52f
      David S. Miller authored
      [ Upstream commit 26cf4325 ]
      
      Instead of returning false we should at least check the most basic
      things, otherwise page table corruptions will be very difficult to
      debug.
      
      PMD and PTE tables are of size PAGE_SIZE, so none of the sub-PAGE_SIZE
      bits should be set.
      
      We also complement this with a check that the physical address the
      pud/pmd points to is valid memory.
      
      PowerPC was used as a guide while implementing this.
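      
      A sketch of the spirit of the checks; the helper names below are hypothetical
      stand-ins, and the real macros also have to account for huge PMDs:
      
              /* A pud/pmd entry points at a PAGE_SIZE-aligned next-level table,
               * so sub-PAGE_SIZE bits must be clear, and the target physical
               * address must be valid memory. */
              #define pud_bad(pud)    ((pud_val(pud) & ~PAGE_MASK) || \
                                       !pud_points_to_valid_ram(pud))
              #define pmd_bad(pmd)    ((pmd_val(pmd) & ~PAGE_MASK) || \
                                       !pmd_points_to_valid_ram(pmd))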
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      a553c52f
    • David S. Miller · f215d509
    • sparc64: Fix range check in kern_addr_valid(). · 08a33b83
      David S. Miller authored
      [ Upstream commit ee73887e ]
      
      In commit b2d43834 ("sparc64: Make
      PAGE_OFFSET variable."), the MAX_PHYS_ADDRESS_BITS value was increased
      (to 47).
      
      This constant reference to '41UL' was missed.
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      08a33b83
    • David S. Miller · a55db39c
    • sparc64: Fix hex values in comment above pte_modify(). · bfd0f4ba
      David S. Miller authored
      [ Upstream commit c2e4e676 ]
      
      When _PAGE_SPECIAL and _PAGE_PMD_HUGE were added to the mask, the
      comment was not updated.
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      bfd0f4ba
    • sparc64: Fix bugs in get_user_pages_fast() wrt. THP. · 69a46d90
      David S. Miller authored
      [ Upstream commit 04df419d ]
      
      The large PMD path needs to check _PAGE_VALID not _PAGE_PRESENT, to
      decide if it needs to bail and return 0.
      
      pmd_large() should therefore just check _PAGE_PMD_HUGE.
      
      Calls to gup_huge_pmd() are guarded with a check of pmd_large(), so we
      just need to add a valid bit check.
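      
      In sketch form (simplified from the sparc64 pgtable/gup code, so details may
      differ):
      
              /* pmd_large() need only test the huge bit... */
              static inline unsigned long pmd_large(pmd_t pmd)
              {
                      return pmd_val(pmd) & _PAGE_PMD_HUGE;
              }
      
              /* ...and gup_huge_pmd(), which is only reached when pmd_large() is
               * true, bails on PMDs that are present but not valid (PROT_NONE). */
              if (!(pmd_val(pmd) & _PAGE_VALID))
                      return 0;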
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      69a46d90
    • sparc64: Fix huge PMD invalidation. · 62af2523
      David S. Miller authored
      [ Upstream commit 51e5ef1b ]
      
      On sparc64 "present" and "valid" are seperate PTE bits, this allows us to
      naturally distinguish between the user explicitly asking for PROT_NONE
      with mprotect() and other situations.
      
      However we weren't handling this properly in the huge PMD paths.
      
      First of all, the page table walker in the TSB miss path only checks
      for _PAGE_PMD_HUGE.  So the generic pmdp_invalidate() would clear
      _PAGE_PRESENT but the TLB miss paths would still load it into the TLB
      as a valid huge PMD.
      
      Fix this by clearing the valid bit in pmdp_invalidate(), and also
      checking the valid bit in USER_PGTABLE_CHECK_PMD_HUGE using "brgez"
      since _PAGE_VALID is bit 63 in both the sun4u and sun4v pte layouts.
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      62af2523
    • sparc64: Fix executable bit testing in set_pmd_at() paths. · 0ed11e0d
      David S. Miller authored
      [ Upstream commit 5b1e94fa ]
      
      This code was mistakenly using the exec bit from the PMD in all
      cases, even when the PMD isn't a huge PMD.
      
      If it's not a huge PMD, test the exec bit in the individual ptes down
      in tlb_batch_pmd_scan().
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      0ed11e0d
    • Revert "sparc64: Fix __copy_{to,from}_user_inatomic defines." · f68d72bc
      Dave Kleikamp authored
      This reverts commit 145e1c00.
      
      This commit broke the behavior of __copy_from_user_inatomic when
      it is only partially successful. Instead of returning the number
      of bytes not copied, it now returns 1. This translates to the
      wrong value being returned by iov_iter_copy_from_user_atomic.
      
      xfstests generic/246 and LTP writev01 both fail on btrfs and nfs
      because of this.
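      
      The semantics the callers depend on, in sketch form (variable names are
      illustrative):
      
              /* Callers such as iov_iter_copy_from_user_atomic() expect the
               * return value to be the number of bytes NOT copied, so a partial
               * copy must report how much is left -- never a bare 1. */
              left = __copy_from_user_inatomic(dst, src, bytes);
              copied = bytes - left;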
      Signed-off-by: Dave Kleikamp <dave.kleikamp@oracle.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: sparclinux@vger.kernel.org
      Signed-off-by: David S. Miller <davem@davemloft.net>
      f68d72bc
    • sparc: PCI: Fix incorrect address calculation of PCI Bridge windows on Simba-bridges · 1d9e79d6
      oftedal authored
      commit 557fc587 upstream.
      
      The SIMBA APB bridges lack the 'ranges' of-property describing the
      PCI I/O and memory areas located beneath the bridge. This information
      has been faked by reading range registers in the APB bridge and
      calculating the corresponding areas.
      
      In commit 01f94c4a
      ("Fix sabre pci controllers with new probing scheme.") a bug was
      introduced into this calculation, causing the PCI memory areas
      to be calculated incorrectly: The shift size was set to be
      identical for I/O and MEM ranges, which is incorrect.
      
      This patch sets the shift size of the MEM range back to the
      value used before 01f94c4a.
      Signed-off-by: Kjetil Oftedal <oftedal@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      1d9e79d6
    • sparc64: Encode huge PMDs using PTE encoding. · eaf01993
      David S. Miller authored
      commit a7b9403f upstream.
      
      Now that we have 64-bits for PMDs we can stop using special encodings
      for the huge PMD values, and just put real PTEs in there.
      
      We allocate a _PAGE_PMD_HUGE bit to distinguish between plain PMDs and
      huge ones.  It is the same for both 4U and 4V PTE layouts.
      
      We also use _PAGE_SPECIAL to indicate the splitting state, since a
      huge PMD cannot also be special.
      
      All of the PMD --> PTE translation code disappears, and most of the
      huge PMD bit modifications and tests just degenerate into the PTE
      operations.  In particular USER_PGTABLE_CHECK_PMD_HUGE becomes
      trivial.
      
      As a side effect, normal PMDs don't shift the physical address around.
      This also speeds up the page table walks in the TLB miss paths since
      they don't have to do the shifts any more.
      
      Another non-trivial aspect is that pte_modify() has to be changed
      to preserve the _PAGE_PMD_HUGE bits as well as the page size field
      of the pte.
      Signed-off-by: David S. Miller <davem@davemloft.net>
      eaf01993
    • sparc64: Move to 64-bit PGDs and PMDs. · f82fe1d2
      David S. Miller authored
      commit 2b77933c upstream.
      
      To make the page tables compact, we were using 32-bit PGDs and PMDs.
      We only had to support <= 43 bits of physical addresses so this was
      quite feasible.
      
      In order to support larger physical addresses we have to move to
      64-bit PGDs and PMDs.
      
      Most of the changes are straight-forward:
      
      1) {pgd,pmd}_t --> unsigned long
      
      2) Anything that tries to use plain "unsigned int" types with pgd/pmd
         values needs to be adjusted.  In particular things like "0U" become
         "0UL".
      
      3) {PGDIR,PMD}_BITS decrease by one.
      
      4) In the assembler page table walkers, use "ldxa" instead of "lduwa"
         and adjust the low bit masks to clear out the low 3 bits instead of
         just the low 2 bits during pgd/pmd address formation.
      
      Also, use PTRS_PER_PGD and PTRS_PER_PMD in the sizing of the
      swapper_{pg_dir,low_pmd_dir} arrays.
      
      This patch does not try to take advantage of having 64-bits in the
      PMDs to simplify the hugepage code, that will come in a subsequent
      change.
      Signed-off-by: David S. Miller <davem@davemloft.net>
      f82fe1d2
    • sparc64: Move from 4MB to 8MB huge pages. · 86a68ad0
      David S. Miller authored
      commit 37b3a8ff upstream.
      
      The impetus for this is that we would like to move to 64-bit PMDs and
      PGDs, but that would result in only supporting a 42-bit address space
      with the current page table layout.  It'd be nice to support at least
      43-bits.
      
      The reason we'd end up with only 42-bits after making PMDs and PGDs
      64-bit is that we only use half-page sized PTE tables in order to make
      PMDs line up to 4MB, the hardware huge page size we use.
      
      So what we do here is we make huge pages 8MB, and fabricate them using
      4MB hw TLB entries.
      
      Facilitate this by providing a "REAL_HPAGE_SHIFT" which is used in
      places that really need to operate on hardware 4MB pages.
      
      Use full pages (512 entries) for PTE tables, and adjust PMD_SHIFT,
      PGD_SHIFT, and the build time CPP test as needed.  Use a CPP test to
      make sure REAL_HPAGE_SHIFT and the _PAGE_SZHUGE_* we use match up.
      
      This makes the pgtable cache completely unused, so remove the code
      managing it and the state used in mm_context_t.  Now we have less
      spinlocks taken in the page table allocation path.
      
      The technique we use to fabricate the 8MB pages is to transfer bit 22
      from the missing virtual address into the PTEs physical address field.
      That takes care of the transparent huge pages case.
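      
      In sketch form (illustrative C; the real work happens in the assembler
      TSB/TLB miss handlers, and the paddr mask shown is a placeholder):
      
              /* An 8MB huge page is backed by two 4MB hardware TLB entries.
               * Which half we are in is given by bit 22 of the faulting virtual
               * address, so fold that bit into the physical address from the PTE. */
              paddr = (pte_val(pte) & PTE_PADDR_MASK) | (vaddr & (1UL << 22));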
      
      For hugetlb, we fill things in at the PTE level and that code already
      puts the sub huge page physical bits into the PTEs, based upon the
      offset, so there is nothing special we need to do.  It all just works
      out.
      
      So, a small amount of complexity in the THP case, but this code is
      about to get much simpler when we move the 64-bit PMDs as we can move
      away from the fancy 32-bit huge PMD encoding and just put a real PTE
      value in there.
      
      With bug fixes and help from Bob Picco.
      Signed-off-by: David S. Miller <davem@davemloft.net>
      86a68ad0
    • sparc64: Make PAGE_OFFSET variable. · 05ce20e5
      David S. Miller authored
      commit b2d43834 upstream.
      
      Choose PAGE_OFFSET dynamically based upon cpu type.
      
      Original UltraSPARC-I (spitfire) chips only supported a 44-bit
      virtual address space.
      
      Newer chips (T4 and later) support 52-bit virtual addresses
      and up to 47-bits of physical memory space.
      
      Therefore we have to adjust PAGE_OFFSET dynamically based upon
      the capabilities of the chip.
      
      Note that this change alone does not allow us to support > 43-bit
      physical memory, to do that we need to re-arrange our page table
      support.  The current encodings of the pmd_t and pgd_t pointers
      restricts us to "32 + 11" == 43 bits.
      
      This change can waste quite a bit of memory for the various tables.
      In particular, a future change should work to size and allocate
      kern_linear_bitmap[] and sparc64_valid_addr_bitmap[] dynamically.
      This isn't easy as we really cannot take a TLB miss when accessing
      kern_linear_bitmap[].  We'd have to lock it into the TLB or similar.
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Acked-by: Bob Picco <bob.picco@oracle.com>
      05ce20e5
    • sparc64: Fix inconsistent max-physical-address defines. · 7ae3f120
      David S. Miller authored
      commit f998c9c0 upstream.
      
      Some parts of the code use '41' and others use '42'; make them
      all use the same value.
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Acked-by: Bob Picco <bob.picco@oracle.com>
      7ae3f120