1. 29 Oct, 2022 4 commits
    • Merge tag 'mlx5-updates-2022-10-24' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux · 02a97e02
      Jakub Kicinski authored
      Saeed Mahameed says:
      
      ====================
      mlx5-updates-2022-10-24
      
      SW steering updates from Yevgeny Kliteynik:
      
      1) First four patches: small fixes / optimizations for SW steering:
      
       - Patch 1: Don't abort the destroy flow if destroying a table failed - continue
         and free everything else.
       - Patches 2 and 3 deal with fast teardown:
          + Skip sync during fast teardown, as PCI device is not there any more.
          + Check device state when polling CQ - otherwise SW steering keeps polling
            the CQ forever, because nobody is there to flush it (see the sketch
            after this list).
       - Patch 4: Removing unneeded function argument.
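
      To make the CQ-polling fix in Patch 3 concrete, here is a minimal sketch of
      the idea. All names (dr_cq, dr_poll_one_cqe, dr_device_in_error) are invented
      for illustration and do not match the driver's actual code; the sketch only
      shows the shape of the check:

        #include <linux/errno.h>
        #include <linux/types.h>

        struct dr_cq;                               /* opaque here - illustrative only */
        int dr_poll_one_cqe(struct dr_cq *cq);      /* 1 = got CQE, 0 = CQ empty, <0 = error */
        bool dr_device_in_error(struct dr_cq *cq);  /* true once the PCI device is gone */

        static int dr_poll_cq(struct dr_cq *cq, int nwait)
        {
                int npolled = 0;

                while (npolled < nwait) {
                        int got = dr_poll_one_cqe(cq);

                        if (got < 0)
                                return got;     /* hard CQ error */
                        if (got) {
                                npolled++;
                                continue;
                        }
                        /* CQ is empty: if the device hit an internal error
                         * (e.g. fast teardown / hot unplug), nobody will ever
                         * flush this CQ - stop polling instead of spinning
                         * forever.
                         */
                        if (dr_device_in_error(cq))
                                return -EIO;
                }
                return npolled;
        }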
      
      2) Deal with the hiccups that we get during rule insertion/deletion,
      which sometimes reach 1/4 of a second. While improving the insertion/deletion
      rate was not the focus here, it is still a by-product of removing these
      hiccups.
      
      Another by-product is the reduced standard deviation in measuring the duration
      of rules insertion/deletion bursts.
      
      In the test we add K rules (warm-up phase) and then continuously run
      insertion/deletion bursts of N rules.
      During test execution, the driver measures hiccups (their number and duration)
      and the total time for insertion/deletion of a batch of rules.
      
      Here are some numbers, before and after these patches:
      
      +--------------------------------------------+-----------------+----------------+
      |                                            |   Create rules  |  Delete rules  |
      |                                            +--------+--------+--------+-------+
      |                                            | Before |  After | Before | After |
      +--------------------------------------------+--------+--------+--------+-------+
      | Max hiccup [msec]                          |    253 |     42 |    254 |    68 |
      +--------------------------------------------+--------+--------+--------+-------+
      | Avg duration of 10K rules add/remove [msec]| 140.07 | 124.32 | 106.99 | 99.51 |
      +--------------------------------------------+--------+--------+--------+-------+
      | Num of hiccups per 100K rules add/remove   |   7.77 |   7.97 |  12.60 | 11.57 |
      +--------------------------------------------+--------+--------+--------+-------+
      | Avg hiccup duration [msec]                 |  36.92 |  33.25 |  36.15 | 33.74 |
      +--------------------------------------------+--------+--------+--------+-------+
      
       - Patch 5: Allocate a short array on the stack instead of dynamically - it is
         destroyed at the end of the function.
       - Patch 6: Rather than cleaning the corresponding chunk's section of
         ste_arrays on chunk deletion, initialize these areas upon chunk creation.
         Chunk destruction tends to come in large batches (during pool syncing),
         so instead of doing huge memory initialization during pool sync,
         we amortize this by doing small initializations on chunk creation.
       - Patch 7: In order to simplify error flow and allow cleaner addition
         of new pools, handle creation/destruction of all the domain's memory pools
         and other memory-related fields in separate init/uninit functions.
       - Patch 8: During rehash, write each table row immediately instead of waiting
         for the whole table to be ready and writing it all - saves allocations
         of ste_send_info structures and improves performance.
       - Patch 9: Instead of allocating/freeing send info objects dynamically,
         manage them in pool. The number of send info objects doesn't depend on
         number of rules, so after pre-populating the pool with an initial batch of
         send info objects, the pool is not expected to grow.
         This way we save alloc/free during writing STEs to ICM, which by itself can
         sometimes take up to 40msec.
       - Patch 10: Allocate icm_chunks from their own slab allocator, which lowered
         the frequency of alloc/free "hiccups" (see the kmem_cache sketch after
         this list).
       - Patch 11: Similar to patch 9, allocate htbl from its own slab allocator.
       - Patch 12: Lower the sync threshold for ICM hot memory - set the threshold
         for sync to 1/4 of the pool instead of 1/2 of the pool. Although we will
         have more syncs, each sync will be shorter and will help with insertion
         rate stability. Also, note that the overall number of hiccups didn't
         increase, thanks to all the other patches.
       - Patch 13: Keep track of hot ICM chunks in an array instead of a list
         (see the array sketch after this list).
         After steering sync, we traverse the hot list and finally free all the
         chunks. It appears that traversing a long list takes an unusually long
         time due to cache misses on many entries, which causes a big "hiccup"
         during rule insertion. This patch replaces the list with a pre-allocated
         array that stores only the bookkeeping information needed to later free
         the chunks in their buddy allocator.
       - Patch 14: Remove the unneeded buddy used_list - we don't need to have the
         list of used chunks, we only need the total amount of used memory.
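
      As a rough illustration of the dedicated slab allocators in Patches 10 and 11,
      the sketch below uses the kernel's kmem_cache_* API. The structure and its
      fields are invented stand-ins (the real mlx5dr_icm_chunk layout differs), but
      the slab calls themselves are the standard interface from <linux/slab.h>:

        #include <linux/errno.h>
        #include <linux/slab.h>
        #include <linux/types.h>

        /* Illustrative only - not the driver's real chunk layout. */
        struct dr_icm_chunk_example {
                u32 num_of_entries;
                u32 byte_size;
                u64 icm_addr;
        };

        static struct kmem_cache *chunk_cache;  /* the driver keeps one per domain */

        static int dr_chunk_cache_init(void)
        {
                chunk_cache = kmem_cache_create("dr_icm_chunk_example",
                                                sizeof(struct dr_icm_chunk_example),
                                                0, 0, NULL);
                return chunk_cache ? 0 : -ENOMEM;
        }

        static struct dr_icm_chunk_example *dr_chunk_alloc(void)
        {
                /* Served from the dedicated slab rather than the generic
                 * kmalloc buckets, which smooths out alloc/free bursts. */
                return kmem_cache_zalloc(chunk_cache, GFP_KERNEL);
        }

        static void dr_chunk_free(struct dr_icm_chunk_example *chunk)
        {
                kmem_cache_free(chunk_cache, chunk);
        }

        static void dr_chunk_cache_fini(void)
        {
                kmem_cache_destroy(chunk_cache);
        }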
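
      A toy version of the array-instead-of-list idea in Patch 13: record only the
      bookkeeping needed to later return each hot chunk to its buddy allocator, in
      a flat, pre-allocated array that is cheap to traverse at sync time. Every
      name here is invented for illustration:

        #include <linux/errno.h>
        #include <linux/slab.h>
        #include <linux/types.h>

        /* Just enough information to free one chunk in its buddy allocator. */
        struct dr_hot_chunk_example {
                unsigned int seg;       /* segment/offset within the buddy */
                unsigned int order;     /* allocation order */
        };

        struct dr_hot_array_example {
                struct dr_hot_chunk_example *entries;   /* allocated once, up front */
                unsigned int count;
                unsigned int capacity;
        };

        static int dr_hot_array_init(struct dr_hot_array_example *arr,
                                     unsigned int capacity)
        {
                arr->entries = kcalloc(capacity, sizeof(*arr->entries), GFP_KERNEL);
                if (!arr->entries)
                        return -ENOMEM;
                arr->capacity = capacity;
                arr->count = 0;
                return 0;
        }

        /* O(1) append when a chunk goes "hot" (freed, but not yet synced).
         * Returns false when the array is full, i.e. it is time to sync. */
        static bool dr_hot_array_add(struct dr_hot_array_example *arr,
                                     unsigned int seg, unsigned int order)
        {
                if (arr->count == arr->capacity)
                        return false;
                arr->entries[arr->count].seg = seg;
                arr->entries[arr->count].order = order;
                arr->count++;
                return true;
        }

        static void dr_hot_array_fini(struct dr_hot_array_example *arr)
        {
                kfree(arr->entries);
        }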
      
      * tag 'mlx5-updates-2022-10-24' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux:
        net/mlx5: DR, Remove the buddy used_list
        net/mlx5: DR, Keep track of hot ICM chunks in an array instead of list
        net/mlx5: DR, Lower sync threshold for ICM hot memory
        net/mlx5: DR, Allocate htbl from its own slab allocator
        net/mlx5: DR, Allocate icm_chunks from their own slab allocator
        net/mlx5: DR, Manage STE send info objects in pool
        net/mlx5: DR, In rehash write the line in the entry immediately
        net/mlx5: DR, Handle domain memory resources init/uninit separately
        net/mlx5: DR, Initialize chunk's ste_arrays at chunk creation
        net/mlx5: DR, For short chains of STEs, avoid allocating ste_arr dynamically
        net/mlx5: DR, Remove unneeded argument from dr_icm_chunk_destroy
        net/mlx5: DR, Check device state when polling CQ
        net/mlx5: DR, Fix the SMFS sync_steering for fast teardown
        net/mlx5: DR, In destroy flow, free resources even if FW command failed
      ====================
      
      Link: https://lore.kernel.org/r/20221027145643.6618-1-saeed@kernel.org
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
    • Merge branch 'net-ipa-start-adding-ipa-v5-0-functionality' · eb288cbd
      Jakub Kicinski authored
      Alex Elder says:
      
      ====================
      net: ipa: start adding IPA v5.0 functionality
      
      The biggest change for IPA v5.0 is that it supports more than 32
      endpoints.  However, there are two other unrelated changes:
        - The STATS_TETHERING memory region is not required
        - Filter tables no longer support a "global" filter
      
      Beyond this, refactoring some code makes supporting more than 32
      endpoints (in an upcoming series) easier.  So this series includes
      a few other changes (not in this order):
        - The maximum endpoint ID in use is determined during config
        - Loops over all endpoints only involve those in use
        - Endpoint IDs and their directions are checked for validity
          differently, to simplify comparison against the maximum
      ====================
      
      Link: https://lore.kernel.org/r/20221027122632.488694-1-elder@linaro.org
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
    • net: ipa: record and use the number of defined endpoint IDs · b7aaff0b
      Alex Elder authored
      Define a new field in the IPA structure that records the maximum
      number of entries that will be used in the IPA endpoint array.  Use
      that value rather than IPA_ENDPOINT_MAX to determine the end
      condition for two loops that iterate over all endpoints.
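
      A minimal sketch of that pattern, with invented names (ipa_example,
      endpoint_count and ipa_endpoint_example stand in for the driver's real
      structures and for IPA_ENDPOINT_MAX):

        #include <linux/types.h>

        struct ipa_endpoint_example {
                bool in_use;
        };

        struct ipa_example {
                /* Highest defined endpoint ID + 1, determined from config data. */
                unsigned int endpoint_count;
                /* Fixed-size array; 32 stands in for the compile-time maximum. */
                struct ipa_endpoint_example endpoint[32];
        };

        static void ipa_for_each_endpoint(struct ipa_example *ipa,
                                          void (*fn)(struct ipa_endpoint_example *ep))
        {
                unsigned int id;

                /* Loop bound is the config-derived count, not the array size. */
                for (id = 0; id < ipa->endpoint_count; id++) {
                        struct ipa_endpoint_example *ep = &ipa->endpoint[id];

                        if (ep->in_use)
                                fn(ep);
                }
        }

      Determining the count once at config time is what lets these loops shrink on
      platforms that define far fewer endpoints than the compile-time maximum.
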
      Signed-off-by: Alex Elder <elder@linaro.org>
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
    • net: ipa: determine the maximum endpoint ID · 5274c715
      Alex Elder authored
      Each endpoint ID has an entry in the IPA endpoint array.  But the
      size of that array is defined at compile time.  Instead, rename
      ipa_endpoint_data_valid() to be ipa_endpoint_max() and have it
      return the maximum endpoint ID defined in configuration data.
      That function will still validate configuration data.
      
      Zero is returned on error; it's a valid endpoint ID, but we need
      more than one, so it can't be the maximum.  The next patch makes use
      of the returned maximum value.
      
      Finally, rename the "initialized" mask of endpoints defined by
      configuration data to be "defined".
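
      A toy version of that contract (the real ipa_endpoint_max() takes the
      platform's endpoint configuration data and validates far more; the structure
      and names below are simplified and invented):

        #include <linux/types.h>

        struct ipa_gsi_endpoint_data_example {
                u8 endpoint_id;
        };

        /* Returns the maximum defined endpoint ID, or 0 if the data is invalid.
         * 0 is itself a valid endpoint ID, but a usable configuration needs more
         * than one endpoint, so 0 can never be the true maximum.
         */
        static u32 ipa_endpoint_max_example(const struct ipa_gsi_endpoint_data_example *data,
                                            u32 count)
        {
                u32 max = 0;
                u32 i;

                if (count < 2)
                        return 0;       /* need at least two endpoints */

                for (i = 0; i < count; i++)
                        if (data[i].endpoint_id > max)
                                max = data[i].endpoint_id;

                return max;
        }
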
      Signed-off-by: Alex Elder <elder@linaro.org>
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>