1. 22 Aug, 2019 6 commits
    • net/mlx5: Add HV VHCA control agent · 29ddad43
      Eran Ben Elisha authored
      The control agent is responsible for the control block (ID 0). It should
      update the PF via this block about every capability change. In addition,
      upon a block-0 invalidate, it should activate all other supported agents
      with data requests from the PF.
      
      Upon agent create/destroy, the invalidate callback of the control agent
      is called in order to update the PF driver about this change.
      
      The control agent is an integral part of HV VHCA and will be created
      and destroyed as part of the HV VHCA init/cleanup flow.
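
      A minimal sketch of the notification path described above; the structure
      layout and helper name are assumptions drawn from this commit message,
      not the actual mlx5 code.

      #include <linux/bits.h>
      #include <linux/types.h>

      /* Illustrative only: all names below are assumptions based on the text. */
      struct mlx5_hv_vhca_agent {
              void (*invalidate)(struct mlx5_hv_vhca_agent *agent, u64 block_mask);
      };

      struct mlx5_hv_vhca {
              struct mlx5_hv_vhca_agent *agents[16];  /* indexed by block ID */
      };

      /* On agent create/destroy, poke the control agent (block 0) so it can
       * report the changed capability set to the PF via the control block.
       */
      static void hv_vhca_agents_changed(struct mlx5_hv_vhca *hv_vhca)
      {
              struct mlx5_hv_vhca_agent *control = hv_vhca->agents[0];

              if (control && control->invalidate)
                      control->invalidate(control, BIT(0));
      }
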
      Signed-off-by: Eran Ben Elisha <eranbe@mellanox.com>
      Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
      Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net/mlx5: Add HV VHCA infrastructure · 87175120
      Eran Ben Elisha authored
      HV VHCA is a layer that provides a PF to VF communication channel based
      on the HyperV PCI config channel. It implements Mellanox's Inter VHCA
      control communication protocol. The protocol contains a control block for
      passing messages between the PF and VF drivers, and data blocks for
      passing the actual data.
      
      The infrastructure is agent based. Each agent is responsible for
      contiguous buffer blocks in the VHCA config space. The infrastructure
      binds agents to their blocks, and an agent may read/write only the buffer
      blocks assigned to it. Each agent provides three callbacks (control,
      invalidate, cleanup). Control is invoked when block-0 is invalidated with
      a command that concerns this agent. Invalidate is invoked when one of the
      blocks assigned to this agent was invalidated. Cleanup is invoked before
      the agent is freed, in order to release all of its open resources and
      cancel its deferred work.
      
      Block-0 serves as the control block. All execution commands from the PF
      are written by the PF over this block, and the VF acks them by writing on
      block-0 as well. Its format is described by the struct
      mlx5_hv_vhca_control_block layout.
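
      A sketch of the agent model described above; every structure and function
      name below is an assumption based on this text, not the real mlx5
      definitions.

      #include <linux/types.h>

      struct hv_vhca_agent_ops {
              /* Block-0 carried a command aimed at this agent. */
              void (*control)(void *priv, u32 command);
              /* One of the agent's own blocks was invalidated. */
              void (*invalidate)(void *priv, u64 block_mask);
              /* Agent is about to be freed: release resources, cancel work. */
              void (*cleanup)(void *priv);
      };

      struct hv_vhca_agent {
              u8 start_block;                 /* first block owned by the agent */
              u8 num_blocks;                  /* contiguous blocks bound to it  */
              const struct hv_vhca_agent_ops *ops;
              void *priv;
      };

      /* Enforce the "an agent touches only its own blocks" rule on every
       * read/write request.
       */
      static bool hv_vhca_block_owned(const struct hv_vhca_agent *agent, u8 block)
      {
              return block >= agent->start_block &&
                     block < agent->start_block + agent->num_blocks;
      }
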
      Signed-off-by: Eran Ben Elisha <eranbe@mellanox.com>
      Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
      Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net/mlx5: Add wrappers for HyperV PCIe operations · 913d14e8
      Eran Ben Elisha authored
      Add wrapper functions for the HyperV PCIe read / write /
      block_invalidate_register operations. These will be used as
      infrastructure in a downstream patch for software communication.
      
      The wrappers are enabled by default if CONFIG_PCI_HYPERV_INTERFACE is set.
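
      A sketch of the wrapper pattern under CONFIG_PCI_HYPERV_INTERFACE; the
      mlx5-side function names are hypothetical and the hyperv_*_cfg_blk()
      signatures are a best reading of the interface patches in this series, so
      treat both as assumptions.

      #include <linux/errno.h>
      #include <linux/pci.h>
      #include <linux/hyperv.h>

      #if IS_ENABLED(CONFIG_PCI_HYPERV_INTERFACE)

      /* Thin pass-through to the Hyper-V PCI backchannel interface
       * (signatures assumed from this series).
       */
      static int mlx5_hv_read_config(struct pci_dev *pdev, void *buf,
                                     unsigned int len, unsigned int block_id)
      {
              unsigned int bytes_returned;

              return hyperv_read_cfg_blk(pdev, buf, len, block_id,
                                         &bytes_returned);
      }

      static int mlx5_hv_write_config(struct pci_dev *pdev, void *buf,
                                      unsigned int len, unsigned int block_id)
      {
              return hyperv_write_cfg_blk(pdev, buf, len, block_id);
      }

      #else /* !CONFIG_PCI_HYPERV_INTERFACE */

      /* Without the interface driver the wrappers report "unsupported". */
      static inline int mlx5_hv_read_config(struct pci_dev *pdev, void *buf,
                                            unsigned int len,
                                            unsigned int block_id)
      {
              return -EOPNOTSUPP;
      }

      static inline int mlx5_hv_write_config(struct pci_dev *pdev, void *buf,
                                             unsigned int len,
                                             unsigned int block_id)
      {
              return -EOPNOTSUPP;
      }

      #endif
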
      Signed-off-by: Eran Ben Elisha <eranbe@mellanox.com>
      Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
      Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • PCI: hv: Add a Hyper-V PCI interface driver for software backchannel interface · 348dd93e
      Haiyang Zhang authored
      This interface driver is a helper driver that allows other drivers to
      have a common interface with the Hyper-V PCI frontend driver.
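
      A sketch of the helper-driver pattern being described: a small,
      always-buildable module exports an ops table that the Hyper-V PCI
      frontend fills in, and consumers call through thin wrappers. Names are
      illustrative assumptions, not the exact upstream API.

      #include <linux/errno.h>
      #include <linux/export.h>
      #include <linux/pci.h>

      struct hv_pci_backchannel_ops {
              int (*read_block)(struct pci_dev *dev, void *buf, unsigned int len,
                                unsigned int block_id, unsigned int *bytes_ret);
      };

      static struct hv_pci_backchannel_ops hv_pci_ops;  /* filled by pci-hyperv */

      int hv_pci_read_block(struct pci_dev *dev, void *buf, unsigned int len,
                            unsigned int block_id, unsigned int *bytes_ret)
      {
              if (!hv_pci_ops.read_block)
                      return -EOPNOTSUPP;     /* frontend not present */

              return hv_pci_ops.read_block(dev, buf, len, block_id, bytes_ret);
      }
      EXPORT_SYMBOL_GPL(hv_pci_read_block);
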
      Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com>
      Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • PCI: hv: Add a paravirtual backchannel in software · e5d2f910
      Dexuan Cui authored
      Windows SR-IOV provides a backchannel mechanism in software for communication
      between a VF driver and a PF driver.  These "configuration blocks" are
      similar in concept to PCI configuration space, but instead of doing reads and
      writes in 32-bit chunks through a very slow path, packets of up to 128 bytes
      can be sent or received asynchronously.
      
      Nearly every SR-IOV device contains just such a communications channel in
      hardware, so using this one in software is usually optional.  Using the
      software channel, however, allows driver implementers to leverage software
      tools that fuzz the communications channel looking for vulnerabilities.
      
      The usage model for these packets puts the responsibility for reading or
      writing on the VF driver.  The VF driver sends a read or a write packet,
      indicating which "block" is being referred to by number.
      
      If the PF driver wishes to initiate communication, it can "invalidate" one
      or more of the first 64 blocks. This invalidation is delivered to the VF
      driver via a callback that the VF driver supplies to this driver.
      
      No protocol is implied, except that supplied by the PF and VF drivers.
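
      A sketch of the usage model described above, reusing the illustrative
      hv_pci_read_block() helper from the previous sketch; the 128-byte block
      size and the 64-bit invalidation mask follow the text, everything else is
      an assumption.

      #include <linux/bits.h>
      #include <linux/pci.h>
      #include <linux/workqueue.h>

      #define HV_CFG_BLOCK_MAX_LEN    128     /* packets of up to 128 bytes */

      /* PF-initiated path: one or more of the first 64 blocks was invalidated,
       * so the mask fits in a u64. Defer the actual re-read to process context.
       */
      static void vf_block_invalidate(void *context, u64 block_mask)
      {
              struct work_struct *work = context;

              if (block_mask & BIT_ULL(0))    /* block 0 changed */
                      schedule_work(work);
      }

      /* VF-initiated path: read configuration block 0 by number. */
      static int vf_read_block0(struct pci_dev *pdev, void *buf)
      {
              unsigned int bytes_returned;

              return hv_pci_read_block(pdev, buf, HV_CFG_BLOCK_MAX_LEN, 0,
                                       &bytes_returned);
      }
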
      Signed-off-by: Jake Oshins <jakeo@microsoft.com>
      Signed-off-by: Dexuan Cui <decui@microsoft.com>
      Cc: Haiyang Zhang <haiyangz@microsoft.com>
      Cc: K. Y. Srinivasan <kys@microsoft.com>
      Cc: Stephen Hemminger <sthemmin@microsoft.com>
      Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
      Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • Merge tag 'mlx5-updates-2019-08-21' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux · fed07ef3
      David S. Miller authored
      Saeed Mahameed says:
      
      ====================
      mlx5 tc flow handling for concurrent execution (Part 3)
      
      This series includes updates to mlx5 ethernet and core driver:
      
      Vlad submits part 3 of a 3-part series to allow TC flow handling
      for concurrent execution.
      
      Vlad says:
      ==========
      
      The mlx5e_neigh_hash_entry structure and the code that uses it are
      refactored in the following ways:
      
      - Extend neigh_hash_entry with rcu and modify its users to always take a
        reference to the structure when using it (neigh_hash_entry already had
        an atomic reference counter, which was previously only used when
        scheduling a neigh update on the workqueue from the atomic context of a
        neigh update netevent).
      
      - Always take mlx5e_neigh_update_table->encap_lock when modifying the
        neigh update hash table and list. Originally, this lock was only used
        to synchronize with the netevent handler function, which is called from
        bh context and cannot use the rtnl lock for synchronization. Use the
        rcu read lock instead of encap_lock to look up the nhe in the atomic
        context of the netevent event handler function (see the lookup sketch
        after this list). Convert encap_lock to a mutex to allow creating new
        neigh hash entries while holding it, which is safe to do because the
        lock is no longer used in atomic context.
      
      - Rcu-ify mlx5e_neigh_hash_entry->encap_list by changing operations on
        the encap list to their rcu counterparts and extending the encap
        structure with an rcu_head, so that encap instances are freed after an
        rcu grace period. This allows fast traversal of the list of encaps
        attached to an nhe under rcu read lock protection.
      
      - Take encap_table_lock when accessing encap entries in neigh update and
        neigh stats update code to protect from concurrent encap entry
        insertion or removal.
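
      A sketch of the RCU lookup pattern described in the list above (safe from
      the atomic netevent handler, reference taken only if the entry is still
      live); structure and field names are assumptions based on the commit
      text.

      #include <linux/rcupdate.h>
      #include <linux/rculist.h>
      #include <linux/refcount.h>

      struct neigh_hash_entry {
              struct list_head list;          /* on the neigh update table list */
              refcount_t refcnt;
              struct rcu_head rcu;
      };

      static struct neigh_hash_entry *
      neigh_entry_lookup_rcu(struct list_head *neigh_list,
                             bool (*match)(struct neigh_hash_entry *nhe))
      {
              struct neigh_hash_entry *nhe;

              rcu_read_lock();
              list_for_each_entry_rcu(nhe, neigh_list, list) {
                      /* Take a reference only if the entry is not being freed. */
                      if (match(nhe) && refcount_inc_not_zero(&nhe->refcnt)) {
                              rcu_read_unlock();
                              return nhe;
                      }
              }
              rcu_read_unlock();

              return NULL;
      }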
      
      This approach leads to potential race conditions where the neigh update
      and neigh stats update code can access encap and flow entries that are
      not fully initialized or are being destroyed, or a neigh can change state
      without updating encaps that are created concurrently. Prevent these
      issues with the following changes to flow and encap initialization:
      
      - Extend mlx5e_tc_flow with an 'init_done' completion. Modify neigh
        update to wait for both the encap and flow completions to prevent
        concurrent access to a structure that is still being initialized by tc
        (see the sketch after this list).
      
      - Skip structures that failed during initialization: encaps with
        encap_id<0 and flows that don't have the OFFLOADED flag set.
      
      - To ensure that no new flows are added to encap when it is being
        accessed by neigh update or neigh stats update, take encap_table_lock
        mutex.
      
      - To prevent concurrent deletion by tc, ensure that neigh update and
        neigh stats update hold references to encap and flow instances while
        using them.
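
      A sketch of the init_done handshake mentioned in the list above; field,
      flag, and function names are assumptions based on the commit text.

      #include <linux/bitops.h>
      #include <linux/completion.h>
      #include <linux/types.h>

      struct tc_flow {
              struct completion init_done;    /* completed when tc init finishes */
              unsigned long flags;
      #define FLOW_FLAG_OFFLOADED     0       /* set only on successful offload  */
      };

      /* Called from neigh update: wait until tc finished initializing the
       * flow, then skip it if the offload never succeeded.
       */
      static bool flow_ready_for_neigh_update(struct tc_flow *flow)
      {
              wait_for_completion(&flow->init_done);

              return test_bit(FLOW_FLAG_OFFLOADED, &flow->flags);
      }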
      
      With the changes presented in this patch set it is now safe to execute tc
      concurrently with neigh update and neigh stats update. However, these two
      workqueue tasks both use the same flow "tmp_list" field to store flows
      (with a reference taken) on a temporary list, so that the references can
      be released after the update operation finishes; therefore they should
      not be executed concurrently with each other.
      
      The last 3 patches of this series add 3 new mlx5 trace points to track
      mlx5 tc requests and mlx5 neigh updates.
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
  2. 21 Aug, 2019 34 commits