17 Jun, 2016 (28 commits)
  16 Jun, 2016 (12 commits)
    • Merge branch 'stmmac-wol' · b4eccef8
      David S. Miller authored
      Vincent Palatin says:
      
      ====================
      net: stmmac: dwmac-rk: fixes for Wake-on-Lan on RK3288
      
      In order to support Wake-On-Lan when using the RK3288 integrated MAC
      (with an external RGMII PHY), we need to avoid shutting down the regulator
      of the external PHY when the MAC is suspended as it's currently done in the MAC
      platform code.
      As a first step, create independent callbacks for suspend/resume rather than
      re-using the exit/init callbacks, so the dwmac platform driver can behave
      differently on suspend (where it might skip shutting down the PHY) than at
      module unloading.
      Then update the dwmac-rk driver to switch off the PHY regulator only if we are
      not planning to wake up from the LAN.
      Finally add the PMT interrupt to the MAC device tree configuration, so we can
      wake up the core from it when the PHY has received the magic packet.
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • ARM: dts: rockchip: add interrupt for Wake-on-Lan on RK3288 · d5bfbeb8
      Vincent Palatin authored
      In order to use Wake-on-Lan on the RK3288 integrated MAC, we need to wake up
      the CPU on the PMT interrupt when the MAC and the PHY are in low power mode.
      Add the interrupt declaration.
      Signed-off-by: Vincent Palatin <vpalatin@chromium.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
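The resulting device tree change can be pictured as a fragment like the one below. The GIC SPI numbers are illustrative placeholders, not values copied from rk3288.dtsi; "macirq" and "eth_wake_irq" are the interrupt names the stmmac binding defines.

```dts
/* Sketch only: SPI numbers are placeholders, not the real rk3288 values. */
&gmac {
	interrupts = <GIC_SPI 27 IRQ_TYPE_LEVEL_HIGH>,
		     <GIC_SPI 28 IRQ_TYPE_LEVEL_HIGH>;
	interrupt-names = "macirq", "eth_wake_irq";
};
```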
    • net: stmmac: dwmac-rk: keep the PHY up for WoL · 229666c1
      Vincent Palatin authored
      When suspending the machine, do not shut down the external PHY by cutting
      its regulator in the MAC platform driver suspend code if Wake-on-Lan is
      enabled; otherwise it cannot wake us up.
      In order to do this, split the suspend/resume callbacks from the
      init/exit callbacks, so we can make the power-down conditional on not
      needing to wake up from the LAN, but do it unconditionally when unloading
      the module.
      Signed-off-by: Vincent Palatin <vpalatin@chromium.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
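The rule above can be sketched as a toy model: the external PHY regulator is cut only when Wake-on-Lan is disabled. The struct and function names below are illustrative stand-ins, not the driver's actual symbols.

```c
#include <stdbool.h>

/* Toy model of the dwmac-rk suspend rule: the PHY regulator stays on
 * when Wake-on-Lan is enabled, so the PHY can see the magic packet. */
struct toy_rk_priv {
	bool wol_enabled;      /* Wake-on-Lan requested via ethtool */
	bool phy_regulator_on; /* state of the external PHY regulator */
};

static void toy_rk_suspend(struct toy_rk_priv *p)
{
	if (!p->wol_enabled)
		p->phy_regulator_on = false; /* full power-down */
	/* else: leave the PHY powered for wake-up */
}
```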
    • net: stmmac: allow to split suspend/resume from init/exit callbacks · cecbc556
      Vincent Palatin authored
      Let the stmmac platform drivers provide dedicated suspend and resume
      callbacks rather than always re-using the init and exit callbacks.
      If the driver does not provide the suspend or resume callback, we fall
      back to the old behavior and try to use exit or init.
      
      This allows a specific platform to perform only a partial power-down on
      suspend if Wake-on-Lan is enabled but always perform the full shutdown
      sequence if the module is unloaded.
      Signed-off-by: Vincent Palatin <vpalatin@chromium.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
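The fallback described above can be sketched with made-up types: prefer the dedicated suspend callback when the platform driver provides one, otherwise fall back to the old exit path. The return value here just records which path ran, for illustration.

```c
#include <stddef.h>

/* Toy dispatch: returns 1 when the dedicated suspend callback ran,
 * 0 when we fell back to exit, -1 when neither exists. */
struct toy_plat_ops {
	void (*exit)(void *priv);
	void (*suspend)(void *priv); /* optional, may be NULL */
};

static void toy_noop(void *priv) { (void)priv; }

static int toy_plat_suspend(const struct toy_plat_ops *ops, void *priv)
{
	if (ops->suspend) {
		ops->suspend(priv);
		return 1;
	}
	if (ops->exit) {
		ops->exit(priv); /* old behavior */
		return 0;
	}
	return -1;
}
```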
    • sctp: change sk state to CLOSED instead of CLOSING in sctp_sock_migrate · 141ddefc
      Xin Long authored
      Commit d46e416c ("sctp: sctp should change socket state when
      shutdown is received") may set sk_state to CLOSING in sctp_sock_migrate,
      but inet_accept doesn't allow any sk_state other than ESTABLISHED or
      CLOSED for sctp. So change sk_state to CLOSED instead of CLOSING, as
      the sk is actually already closed there.
      
      Fixes: d46e416c ("sctp: sctp should change socket state when shutdown is received")
      Reported-by: Ye Xiaolong <xiaolong.ye@intel.com>
      Signed-off-by: Xin Long <lucien.xin@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
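The accept-time check this works around can be modeled in miniature: only ESTABLISHED and CLOSED sockets pass, so a socket left in CLOSING would be rejected. The enum below is illustrative, not the kernel's TCP_* state set.

```c
#include <stdbool.h>

/* Toy model of the inet_accept() state check for sctp. */
enum toy_sk_state { TOY_ESTABLISHED, TOY_CLOSING, TOY_CLOSED };

static bool toy_accept_allows(enum toy_sk_state s)
{
	return s == TOY_ESTABLISHED || s == TOY_CLOSED;
}
```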
    • Merge branch 'bpf-fd-array-release' · f0362eab
      David S. Miller authored
      Daniel Borkmann says:
      
      ====================
      bpf: improve fd array release
      
      This set improves the BPF perf fd array map release with respect to
      purging entries; the first two patches extend the API as needed. Please
      see the individual patches for more details.
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • bpf, maps: flush own entries on perf map release · 3b1efb19
      Daniel Borkmann authored
      The behavior of perf event arrays is quite different from all
      others, as they are tightly coupled to perf event fds, f.e. shown
      recently by commit e03e7ee3 ("perf/bpf: Convert perf_event_array
      to use struct file") to make refcounting on perf events more robust.
      A remaining issue in the current code is that additions to the perf
      event array take a reference on the struct file via perf_event_get(),
      and that reference is only released via fput() (which eventually
      cleans up the perf event via perf_event_release_kernel()) when the
      element is either manually removed from the map from user space or
      automatically when the last reference on the perf event map is
      dropped. This leads to dangling struct files when the map gets pinned
      after the application owning the perf event descriptor exits: since
      the struct file reference will in that case only be dropped manually
      or via pinned file removal, the perf event lives longer than
      necessary, needlessly consuming resources for that time.
      
      Relations between perf event fds and bpf perf event map fds can be
      rather complex. F.e. maps can act as demuxers among different perf
      event fds that can possibly be owned by different threads and based
      on the index selection from the program, events get dispatched to
      one of the per-cpu fd endpoints. One perf event fd (or rather, a
      per-cpu set of them) can also live in multiple perf event maps at
      the same time, listening for events. Also, another requirement is
      that perf event fds can get closed from the application side after
      they have been attached to the perf event map, so that on exit the
      perf event map will take care of dropping their references eventually.
      Likewise, when such maps are pinned, the intended behavior is that a
      user application does bpf_obj_get(), puts its fds in there, and on
      exit, when the fd is released, they are dropped from the map again,
      so the map rather acts as a connector endpoint. This also makes perf
      event maps
      inherently different from program arrays as described in more detail
      in commit c9da161c ("bpf: fix clearing on persistent program
      array maps").
      
      To tackle this, map entries are marked with the map struct file that
      added the element to the map, and when the last reference to that map
      struct file is released from user space, the tracked entries are
      purged from the map. This is okay, because new map struct file
      instances, i.e. frontends to the anon inode, are provided via
      bpf_map_new_fd(), which is called when we invoke bpf_obj_get_user()
      for retrieving a pinned map, but also when an initial instance is
      created via map_create(). The rest is resolved automatically for us
      by the vfs layer keeping a reference count on the map's struct file.
      Any concurrent updates on the map slot are fine as well; it just
      means that perf_event_fd_array_release() needs to delete fewer of
      its own entries.
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
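The purging scheme described above can be sketched as a toy model: each occupied slot remembers which map struct file inserted it, and releasing a file clears only that file's own entries. All names and types below are illustrative, not the kernel's.

```c
#include <stddef.h>

#define TOY_SLOTS 4

/* Toy perf event array: owner[i] records the map struct file that
 * inserted slot i, or NULL when the slot is empty. */
struct toy_perf_map {
	const void *owner[TOY_SLOTS];
};

static void toy_map_update(struct toy_perf_map *m, int idx, const void *file)
{
	m->owner[idx] = file; /* in the kernel this also grabs a perf ref */
}

/* Called when the last reference to @file is gone; returns how many
 * entries were purged. Other files' entries are left untouched. */
static int toy_map_release(struct toy_perf_map *m, const void *file)
{
	int purged = 0;

	for (int i = 0; i < TOY_SLOTS; i++) {
		if (m->owner[i] == file) {
			m->owner[i] = NULL; /* drop the perf event ref */
			purged++;
		}
	}
	return purged;
}
```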
    • bpf, maps: extend map_fd_get_ptr arguments · d056a788
      Daniel Borkmann authored
      This patch extends the map_fd_get_ptr() callback that is used by fd
      array maps, so that the struct file pointer of the related map can be
      passed in. It's safe to remove the map_update_elem() callback for the
      two maps, since updates are only allowed from the syscall side, not
      from eBPF programs, for these two map types. As in the per-cpu map
      case, bpf_fd_array_map_update_elem() needs to be called directly here
      due to the extra argument.
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
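The signature change can be sketched as follows. The struct bodies are stand-ins and the toy implementation just echoes the new argument back; only the extra map_file parameter is the point.

```c
#include <stddef.h>

/* Stand-in types; the real struct bpf_map and struct file are elided. */
struct toy_map  { int dummy; };
struct toy_file { int dummy; };

struct toy_map_ops {
	/* old: void *(*map_fd_get_ptr)(struct toy_map *map, int fd); */
	void *(*map_fd_get_ptr)(struct toy_map *map,
				struct toy_file *map_file, int fd);
};

/* Toy implementation: simply return the newly threaded-through file,
 * standing in for "record which file inserted this entry". */
static void *toy_fd_get_ptr(struct toy_map *map, struct toy_file *map_file,
			    int fd)
{
	(void)map;
	(void)fd;
	return map_file;
}
```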
    • bpf, maps: add release callback · 61d1b6a4
      Daniel Borkmann authored
      Add a release callback for maps that is invoked when the last
      reference to its struct file is gone and the struct file is about
      to be released by the vfs. The handler will be used by fd array maps.
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
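The hook can be modeled in miniature: when the last reference to the map's file is dropped, an optional per-map-type release callback runs. The names and the explicit refcount below are simplified stand-ins for what the vfs layer does.

```c
#include <stddef.h>

struct toy_map;

struct toy_ops {
	void (*map_release)(struct toy_map *map); /* optional, may be NULL */
};

struct toy_map {
	const struct toy_ops *ops;
	int refcnt;
	int released; /* set once the callback has fired */
};

static void toy_release_cb(struct toy_map *m) { m->released = 1; }

/* Drop one reference; fire the release callback only on the last put. */
static void toy_map_put(struct toy_map *m)
{
	if (--m->refcnt == 0 && m->ops && m->ops->map_release)
		m->ops->map_release(m);
}
```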
    • Merge branch 'sfc-rx-vlan-filtering' · b478af0c
      David S. Miller authored
      Edward Cree says:
      
      ====================
      sfc: RX VLAN filtering
      
      Adds support for VLAN-qualified receive filters on EF10 hardware.
      This is needed when running as a guest if the hypervisor has enabled
      vfs-vlan-restrict, in which case the firmware rejects filters not qualified
      with VLAN 0.
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • sfc: Fix VLAN filtering feature if vPort has VLAN_RESTRICT flag · 38d27f38
      Andrew Rybchenko authored
      If the vPort has the VLAN_RESTRICT flag, VLAN-tagged traffic will not
      be delivered without corresponding Rx filters, which may be proxied
      to and moderated by the hypervisor.
      Signed-off-by: Edward Cree <ecree@solarflare.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>