18 Jul, 2013 (6 commits)
• Merge branch 'drm-fixes' of git://people.freedesktop.org/~airlied/linux · 0a693ab6
      Linus Torvalds authored
      Pull drm fixes from Dave Airlie:
       "You'll be terribly disappointed in this, I'm not trying to sneak any
        features in or anything, its mostly radeon and intel fixes, a couple
        of ARM driver fixes"
      
      * 'drm-fixes' of git://people.freedesktop.org/~airlied/linux: (34 commits)
        drm/radeon/dpm: add debugfs support for RS780/RS880 (v3)
        drm/radeon/dpm/atom: fix broken gcc harder
        drm/radeon/dpm/atom: restructure logic to work around a compiler bug
        drm/radeon/dpm: fix atom vram table parsing
        drm/radeon: fix an endian bug in atom table parsing
        drm/radeon: add a module parameter to disable aspm
        drm/rcar-du: Use the GEM PRIME helpers
        drm/shmobile: Use the GEM PRIME helpers
        uvesafb: Really allow mtrr being 0, as documented and warn()ed
        radeon kms: do not flush uninitialized hotplug work
        drm/radeon/dpm/sumo: handle boost states properly when forcing a perf level
        drm/radeon: align VM PTBs (Page Table Blocks) to 32K
        drm/radeon: allow selection of alignment in the sub-allocator
        drm/radeon: never unpin UVD bo v3
        drm/radeon: fix UVD fence emit
        drm/radeon: add fault decode function for CIK
        drm/radeon: add fault decode function for SI (v2)
        drm/radeon: add fault decode function for cayman/TN (v2)
        drm/radeon: use radeon device for request firmware
        drm/radeon: add missing ttm_eu_backoff_reservation to radeon_bo_list_validate
        ...
• vlan: fix a race in egress prio management · 3e3aac49
      Eric Dumazet authored
egress_priority_map[] hash table updates are protected by rtnl,
and we never remove elements until the device is dismantled.

We have to make sure that before inserting a new element into the
hash table, all its fields are committed to memory, or else another
cpu could find corrupt values and crash.
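
The ordering requirement is the classic publish pattern. A minimal
sketch of what the fix enforces (struct fields as in the real
vlan_priority_tci_mapping; the surrounding function is assumed):

    struct vlan_priority_tci_mapping {
            u32 priority;
            u16 vlan_qos;
            struct vlan_priority_tci_mapping *next;
    };

    /* Publish a fully initialized node into a chain that is read
     * locklessly: write every field first, then issue a write
     * barrier, and only then link the node into the table. */
    static void egress_prio_publish(struct vlan_priority_tci_mapping **head,
                                    struct vlan_priority_tci_mapping *np,
                                    u32 prio, u16 qos)
    {
            np->priority = prio;
            np->vlan_qos = qos;
            np->next = *head;
            smp_wmb();      /* commit fields before making np visible */
            *head = np;
    }
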
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
• vlan: mask vlan prio bits · d4b812de
      Eric Dumazet authored
In commit 48cc32d3
("vlan: don't deliver frames for unknown vlans to protocols")
Florian made sure we set pkt_type to PACKET_OTHERHOST
if the vlan id is set and we could not find a vlan device for this
particular id.
      
      But we also have a problem if prio bits are set.
      
Steinar reported an issue on a router receiving IPv6 frames with a
vlan tag of 4000 (id 0, prio 2) and tunneling them into a sit device;
the breakage happens because skb->vlan_tci is still set.

The forwarded frame is completely corrupted: we can see (8100:4000)
being inserted in the middle of the IPv6 source address:
      
      16:48:00.780413 IP6 2001:16d8:8100:4000:ee1c:0:9d9:bc87 >
      9f94:4d95:2001:67c:29f4::: ICMP6, unknown icmp6 type (0), length 64
             0x0000:  0000 0029 8000 c7c3 7103 0001 a0ae e651
             0x0010:  0000 0000 ccce 0b00 0000 0000 1011 1213
             0x0020:  1415 1617 1819 1a1b 1c1d 1e1f 2021 2223
             0x0030:  2425 2627 2829 2a2b 2c2d 2e2f 3031 3233
      
It seems we are not really ready to properly cope with this right now.

We can probably do better in future kernels:
vlan_get_ingress_priority() should be a netdev property instead of
a per-vlan_dev one.

For stable kernels, let's clear vlan_tci to fix the bugs.
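
A minimal sketch of the stable fix on the receive path (simplified
from the __netif_receive_skb() change; VLAN_VID_MASK is the real
12-bit VID mask):

    if (unlikely(skb->vlan_tci)) {
            /* Only the VID (low 12 bits, VLAN_VID_MASK) decides
             * whether a vlan device was expected; a prio-only tag
             * (id 0) must not mark the frame as PACKET_OTHERHOST. */
            if (skb->vlan_tci & VLAN_VID_MASK)
                    skb->pkt_type = PACKET_OTHERHOST;
            /* Either way, clear the tag so the prio bits cannot
             * leak into a forwarded or tunneled frame. */
            skb->vlan_tci = 0;
    }
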
Reported-by: Steinar H. Gunderson <sesse@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
• macvtap: do not zerocopy if iov needs more pages than MAX_SKB_FRAGS · ece793fc
      Jason Wang authored
We try to linearize part of the skb when the number of iov entries is
greater than MAX_SKB_FRAGS. This is not enough, since each single
vector may occupy more than one page, so zerocopy_sg_fromiovec() may
still fail and break the guest network.
      
Solve this problem by calculating the pages needed for the iov before
trying to do zerocopy, and switching to copying instead of zerocopy if
more than MAX_SKB_FRAGS pages are needed.
      
This is done by introducing a new helper to count the pages for the
iov, and by calling uarg->callback() manually when switching from
zerocopy to copy, to notify vhost.
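
A minimal sketch of the page-counting idea (the helper name here is
assumed): a vector can span extra pages depending on where it starts
within its first page.

    /* Count the pages an iovec would pin: each segment covers its
     * offset within the first page plus its length, rounded up. */
    static int iov_num_pages(const struct iovec *iov, int count)
    {
            int seg, pages = 0;

            for (seg = 0; seg < count; seg++) {
                    unsigned long base =
                            (unsigned long)iov[seg].iov_base;

                    pages += (offset_in_page(base) + iov[seg].iov_len +
                              PAGE_SIZE - 1) >> PAGE_SHIFT;
            }
            return pages;   /* zerocopy only if <= MAX_SKB_FRAGS */
    }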
      
      We can do further optimization on top.
      
This bug was introduced by commit b92946e2
("macvtap: zerocopy: validate vectors before building skb").
      
      Cc: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
• tuntap: do not zerocopy if iov needs more pages than MAX_SKB_FRAGS · 88529176
      Jason Wang authored
We try to linearize part of the skb when the number of iov entries is
greater than MAX_SKB_FRAGS. This is not enough, since each single
vector may occupy more than one page, so zerocopy_sg_fromiovec() may
still fail and break the guest network.
      
Solve this problem by calculating the pages needed for the iov before
trying to do zerocopy, and switching to copying instead of zerocopy if
more than MAX_SKB_FRAGS pages are needed.
      
This is done by introducing a new helper to count the pages for the
iov, and by calling uarg->callback() manually when switching from
zerocopy to copy, to notify vhost.
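
When the copy path is taken, the pending ubuf_info must still be
completed, or vhost would wait forever for a completion that never
comes. A minimal sketch of that fallback (shape assumed, simplified
from the tun.c change):

    if (zerocopy) {
            err = zerocopy_sg_from_iovec(skb, iv, offset, count);
    } else {
            err = skb_copy_datagram_from_iovec(skb, 0, iv, offset, len);
            if (!err && msg_control) {
                    struct ubuf_info *uarg = msg_control;

                    /* No zerocopy happened: release the userspace
                     * buffers now so vhost can complete the request. */
                    uarg->callback(uarg, false);
            }
    }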
      
      We can do further optimization on top.
      
The bug was introduced by commit 0690899b
("tun: experimental zero copy tx support").
      
      Cc: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
• pkt_sched: sch_qfq: remove a source of high packet delay/jitter · 87f40dd6
      Paolo Valente authored
      QFQ+ inherits from QFQ a design choice that may cause a high packet
      delay/jitter and a severe short-term unfairness. As QFQ, QFQ+ uses a
      special quantity, the system virtual time, to track the service
      provided by the ideal system it approximates. When a packet is
      dequeued, this quantity must be incremented by the size of the packet,
      divided by the sum of the weights of the aggregates waiting to be
      served. Tracking this sum correctly is a non-trivial task, because, to
      preserve tight service guarantees, the decrement of this sum must be
delayed in a special way [1]: this sum can be decremented only after
its value would have decreased in the ideal system approximated by
QFQ+ as well. For efficiency, QFQ+ keeps track only of the
'instantaneous' weight sum, increased and decreased immediately as the
weight of an aggregate changes, and as an aggregate is created or
destroyed (which, in turn, happens as a consequence of some class
being created/destroyed/changed). However, to avoid the problems caused to
      service guarantees by these immediate decreases, QFQ+ increments the
      system virtual time using the maximum value allowed for the weight
      sum, 2^10, in place of the dynamic, instantaneous value. The
      instantaneous value of the weight sum is used only to check whether a
      request of weight increase or a class creation can be satisfied.
      
      Unfortunately, the problems caused by this choice are worse than the
      temporary degradation of the service guarantees that may occur, when a
      class is changed or destroyed, if the instantaneous value of the
      weight sum was used to update the system virtual time. In fact, the
      fraction of the link bandwidth guaranteed by QFQ+ to each aggregate is
      equal to the ratio between the weight of the aggregate and the sum of
      the weights of the competing aggregates. The packet delay guaranteed
      to the aggregate is instead inversely proportional to the guaranteed
      bandwidth. By using the maximum possible value, and not the actual
      value of the weight sum, QFQ+ provides each aggregate with the worst
      possible service guarantees, and not with service guarantees related
      to the actual set of competing aggregates. To see the consequences of
      this fact, consider the following simple example.
      
      Suppose that only the following aggregates are backlogged, i.e., that
      only the classes in the following aggregates have packets to transmit:
      one aggregate with weight 10, say A, and ten aggregates with weight 1,
      say B1, B2, ..., B10. In particular, suppose that these aggregates are
      always backlogged. Given the weight distribution, the smoothest and
      fairest service order would be:
      A B1 A B2 A B3 A B4 A B5 A B6 A B7 A B8 A B9 A B10 A B1 A B2 ...
      
      QFQ+ would provide exactly this optimal service if it used the actual
      value for the weight sum instead of the maximum possible value, i.e.,
      11 instead of 2^10. In contrast, since QFQ+ uses the latter value, it
      serves aggregates as follows (easy to prove and to reproduce
      experimentally):
      A B1 B2 B3 B4 B5 B6 B7 B8 B9 B10 A A A A A A A A A A B1 B2 ... B10 A A ...
      
      By replacing 10 with N in the above example, and by increasing N, one
      can increase at will the maximum packet delay and the jitter
      experienced by the classes in aggregate A.
      
      This patch addresses this issue by just using the above
      'instantaneous' value of the weight sum, instead of the maximum
      possible value, when updating the system virtual time.  After the
      instantaneous weight sum is decreased, QFQ+ may deviate from the ideal
      service for a time interval in the order of the time to serve one
maximum-size packet for each backlogged class. The worst-case extent
of the deviation exhibited by QFQ+ during this time interval [1] is
basically the same as that of the deviation described above (but,
without this patch, QFQ+ suffers from such a deviation all the time).
Finally, this patch modifies the comment on the function
qfq_slot_insert, to make it consistent with the fact that the weight
sum used by QFQ+ can now be lower than the maximum possible value.
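
A minimal sketch of the changed update (simplified; the real
sch_qfq.c code uses a precomputed fixed-point inverse of the weight
sum instead of a division per packet):

    #define QFQ_MAX_WSUM    (1 << 10)   /* worst-case weight sum */

    /* Advance the system virtual time V by len / weight_sum.
     * Before the patch the divisor was pinned to QFQ_MAX_WSUM;
     * after it, the instantaneous sum is used, e.g. 11 for the
     * A + B1..B10 mix above, restoring the interleaved order. */
    static u64 qfq_advance_V(u64 V, unsigned int len, unsigned int wsum)
    {
            return V + (((u64)len << 16) / wsum);  /* 16-bit fixed point */
    }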
      
      [1] P. Valente, "Extending WF2Q+ to support a dynamic traffic mix",
      Proceedings of AAA-IDEA'05, June 2005.
Signed-off-by: Paolo Valente <paolo.valente@unimore.it>
Signed-off-by: David S. Miller <davem@davemloft.net>