- 27 Mar, 2019 19 commits
-
Maxime Chevallier authored
The C2 classification engine has a 256-entry TCAM, used for ternary matches on an 8-byte Header Extracted Key. For now, we compute the various indices for classification and RSS that use this engine with a set of macros. This commit mainly renames those macros to make it clear that they should be used with the C2 engine, but also makes use of the full 256 entries in the engine. For now, the C2 entries are only used for RSS. These entries are put at the end of the TCAM range, in case we want to add higher-priority matches later on. Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com> Signed-off-by: David S. Miller <davem@davemloft.net>
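A minimal sketch of the end-of-range placement described above; the macro names follow the mvpp2 convention but are assumptions, as is the limit of 8 ports per PPv2 instance:

```c
/* Hypothetical C2 index layout (names modeled on mvpp2's conventions) */
#define MVPP22_CLS_C2_N_ENTRIES		256

/* One RSS entry per port, placed at the very end of the TCAM range so
 * that lower (higher-priority) indices stay free for future match
 * rules; assumes at most 8 ports per PPv2 instance.
 */
#define MVPP22_CLS_C2_RSS_ENTRY(port)	(MVPP22_CLS_C2_N_ENTRIES - 8 + (port))
```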
-
Maxime Chevallier authored
When classifying a packet pertaining to a given flow, the classifier will issue multiple lookup commands until it finds one with the 'last' bit set. It expects all priorities to be assigned contiguously (although not necessarily in an ordered fashion) from 0 to the number of lookups. We can initialize this once, and make sure unused lookups are given an empty port map. This avoids having to maintain priorities and the information about which lookup is the last. Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com> Signed-off-by: David S. Miller <davem@davemloft.net>
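A sketch of what this one-time setup could look like; the helper names and table-size constants are assumptions modeled on the driver's style, not the exact patch:

```c
/* Give every flow-table entry a contiguous priority within its flow's
 * sequence, and an empty port map so unused lookups never match.
 */
for (i = 0; i < MVPP2_CLS_FLOWS_TBL_SIZE; i++) {
	struct mvpp2_cls_flow_entry fe;

	memset(&fe, 0, sizeof(fe));	/* port map left empty */
	fe.index = i;

	mvpp2_cls_flow_pri_set(&fe, i % MVPP2_CLS_FLT_ENTRIES_PER_FLOW);
	mvpp2_cls_flow_write(priv, &fe);
}
```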
-
Maxime Chevallier authored
C2 TCAM entries can be invalidated to avoid unwanted matches. Make sure all entries are invalidated at init, then validate only the ones we use. Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com> Signed-off-by: David S. Miller <davem@davemloft.net>
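A minimal sketch of the init-time invalidation, assuming an mvpp2-style helper that sets the invalid bit on one entry:

```c
/* Walk the whole C2 TCAM and invalidate every entry at init time;
 * only the entries actually written later are re-validated.
 */
static void mvpp2_cls_c2_init(struct mvpp2 *priv)
{
	int index;

	for (index = 0; index < MVPP22_CLS_C2_N_ENTRIES; index++)
		mvpp2_cls_c2_inv_set(priv, index);
}
```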
-
Maxime Chevallier authored
The Flow Table dictates what lookups will be issued for each flow type. The lookup sequence for each flow is similar, and the index of each lookup is computed by some macros. There are similar mechanisms for the C2 TCAM lookups, so in order to avoid confusion, rename the flow-table index-computing macros with a common prefix. The only difference in behaviour is that we now use the very first entry in the flow for the RSS lookup (the first entry was previously unused). Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Maxime Chevallier authored
The classifier allows combining multiple lookups into one "sequence" that is counted as a single lookup to an engine, with a single result. We don't actually use that feature, so remove the places where we set this field, so that the classifier doesn't try to interpret these fields. Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Maxime Chevallier authored
This commit renames some of the classifier functions to follow the 'mvpp2_port_*' naming used for functions that act on a given port. This commit is purely cosmetic. Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Maxime Chevallier authored
Move the C2 read/write helpers higher in the file to ease future work that relies on these helpers. Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Maxime Chevallier authored
When writing a C2 entry to hardware, some register writes will only take effect when the TCAM_DATA4 register is written. This includes all C2 TCAM registers, and the C2 invalidate register. To make sure we always write C2 entries correctly, document that behaviour with a comment, and move the TCAM writes to the end of the mvpp2_cls_c2_write helper. Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Maxime Chevallier authored
The cls_table is a global read-only table containing the different parameters used by the various tables in the classifier. It describes the links between the Header Parser, the decoding table and the flow_table. There are several ways we may want to iterate over that table, depending on which classifier engine we want to configure. For the Header Parser, we want to iterate over each entry. For the decoding table, we want to iterate over each entry with a unique flow_id. Finally, when configuring an ethtool flow, we want to iterate over each entry with a unique flow_id that also has a given flow_type. This commit introduces some iterators, both as syntactic sugar and to clarify the way we iterate over the table. Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com> Signed-off-by: David S. Miller <davem@davemloft.net>
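An illustrative sketch of two of these iterators (the names, the cls_flows array and the bound MVPP2_N_PRS_FLOWS are assumptions); the unique-flow_id walks follow the same filtered-loop pattern:

```c
/* Walk every entry of the classification table */
#define for_each_cls_flow_id(i)						\
	for ((i) = 0; (i) < MVPP2_N_PRS_FLOWS; (i)++)

/* Restrict the walk to entries of a given flow_type; the "{} else"
 * idiom keeps the macro safe inside unbraced if/else statements.
 */
#define for_each_cls_flow_id_with_type(i, type)				\
	for_each_cls_flow_id(i)						\
		if (cls_flows[(i)].flow_type != (type)) {} else
```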
-
Maxime Chevallier authored
PPv2's Classifier uses multiple engines to perform classification. So far, only the C2 engine is used, which has a 256-entry TCAM, and we only accessed the relevant entries from the C2 engine, which are the ones implementing RSS. To implement and debug ntuple classification offload, being able to see the hit count for each C2 entry is helpful, so this commit moves the logic to a dedicated directory allowing access to each entry. Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Maxime Chevallier authored
The Classifier flow table is the central part of the PPv2 Classifier, since it describes all classification steps performed for each flow. It has 512 entries, shared between all ports, which are divided into sequences that are pointed to by the decoding table. Being able to see which entries in the flow table were hit is a key point when implementing and debugging classification offload. This commit allows reading each flow table entry's hit count independently, with a clear-on-read behaviour. Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com> Signed-off-by: David S. Miller <davem@davemloft.net>
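A sketch of a clear-on-read accessor, assuming mvpp2-style register names: writing the index register selects the entry, and reading the counter register returns the hit count while clearing it in hardware.

```c
static u32 mvpp2_cls_flow_hits(struct mvpp2 *priv, int index)
{
	/* select the flow-table entry, then read (and clear) its counter */
	mvpp2_write(priv, MVPP2_CTRS_IDX, index);
	return mvpp2_read(priv, MVPP2_CLS_FLOW_TBL_HIT_CTR);
}
```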
-
Maxime Chevallier authored
The current way to store the private data needed to access the various debugfs entries is to allocate it on the fly, share it between the entries that need it, and finally have one entry free that data upon closing. This leads to hard-to-maintain code and is very error-prone. This commit stores all debugfs-related data in the same place, making sure it is allocated only when the debugfs directory is successfully created, so that we don't waste memory when we don't use this feature. Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Maxime Chevallier authored
The cls_flow table represents the overall configuration of the classifier, used to match the different traffic classes in the Parsing and Classification engines. This configuration is static and applies to all PPv2 instances; we must therefore keep it const so that no modifications of this table are performed at runtime. Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Maxime Chevallier authored
The macro definition MVPP2_N_FLOWS is ambiguous because it really represents the number of entries in the Header Parser that are used to identify the classification flows. Rename the macro to clearly state that it represents the number of flows in the Header Parser. Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Maxime Chevallier authored
The PPv2 classifier allows performing multiple lookups on the same engine when classifying a packet. These lookups can match similar parts of a packet header, but perform different actions upon matching. To differentiate these types of lookups, it's possible to specify a Lookup Type in the flow table entries, which becomes part of the key for the lookup engines. This commit introduces the use of Lookup Types for C2 matches. Since for now we only perform C2 lookups to enable RSS, we only need one Lookup Type. Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Maxime Chevallier authored
The Classifier flow table has 512 entries, containing lookup commands executed consecutively for every flow. Since we have 21 different flows, we have to carefully manage the flow table's use. As of today, the start index of a lookup sequence is computed directly from the flow->id. There are 8 reserved flow ids, from 0 to 7, which don't have any corresponding sequence in the flow table. We can therefore ignore them when computing the index, so that the first non-reserved flow points to the very beginning of the flow table. Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com> Suggested-by: Alan Winkowski <walan@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
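An illustrative shape of that index computation; the macro names and the per-flow sequence length are assumptions, only the "subtract the 8 reserved ids" idea comes from the text:

```c
/* Flow ids 0-7 are reserved, so subtracting the first non-reserved id
 * maps that flow onto entry 0 of the 512-entry flow table.
 */
#define MVPP2_FL_START			8
#define MVPP2_CLS_FLT_FIRST(id)		(((id) - MVPP2_FL_START) * \
					 MVPP2_CLS_FLT_ENTRIES_PER_FLOW)
```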
-
Maxime Chevallier authored
PPv2's classifier supports extracting the MAC Destination Address from the L2 header to perform RSS and flow steering. Add the missing case when setting the Header Extracted Key fields in the flow table. Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Maxime Chevallier authored
int is not long enough to store all the netdev feature bits; use the dedicated netdev_features_t type to store them when building the list of dev->features. Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com> Signed-off-by: David S. Miller <davem@davemloft.net>
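The gist of the fix, as an illustrative snippet rather than the exact patch hunk: netdev_features_t is a 64-bit bitmask, so accumulating NETIF_F_* flags in a plain 32-bit int can silently drop the high bits.

```c
/* netdev_features_t (u64) keeps the high NETIF_F_* bits intact,
 * where an int would truncate them.
 */
netdev_features_t features = 0;

features |= NETIF_F_SG | NETIF_F_IP_CSUM | NETIF_F_RXCSUM | NETIF_F_HW_TC;
dev->features = features;
```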
-
David S. Miller authored
Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next
Alexei Starovoitov says: ==================== pull-request: bpf-next 2019-03-26 The following pull-request contains BPF updates for your *net-next* tree. The main changes are: 1) introduce bpf_tcp_check_syncookie() helper for XDP and tc, from Lorenz. 2) allow bpf_skb_ecn_set_ce() in tc, from Peter. 3) numerous bpf tc tunneling improvements, from Willem. 4) and other miscellaneous improvements from Adrian, Alan, Daniel, Ivan, Stanislav. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 26 Mar, 2019 11 commits
-
Flavio Leitner authored
When the conntrack is initialized, there is no helper attached yet, so the NAT info initialization (nf_nat_setup_info) skips adding the seqadj ext. A helper is attached later, when the conntrack is not confirmed but is going to be committed. In this case, if NAT is needed, add the seqadj ext as well. Fixes: 16ec3d4f ("openvswitch: Fix cached ct with helper.") Signed-off-by: Flavio Leitner <fbl@sysclose.org> Acked-by: Pravin B Shelar <pshelar@ovn.org> Signed-off-by: David S. Miller <davem@davemloft.net>
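A simplified sketch of where such a check lands (modeled on net/openvswitch/conntrack.c; the `nat` flag is a stand-in for the action's NAT info):

```c
/* When attaching a helper to a not-yet-confirmed conntrack that needs
 * NAT, make sure the seqadj extension exists before the commit.
 */
if (!nf_ct_is_confirmed(ct) && nat && !nf_ct_seqadj_ext_add(ct))
	return -EINVAL;
```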
-
Stanislav Fomichev authored
When running stacktrace_build_id_nmi, try to query the kernel.perf_event_max_sample_rate sysctl and use it as the sample_freq. If there was an error reading the sysctl, fall back to 5000. The kernel.perf_event_max_sample_rate sysctl can drift and/or be adjusted by the perf tool, so assuming a fixed number might be problematic on a long-running machine. Signed-off-by: Stanislav Fomichev <sdf@google.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
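A sketch of the described fallback in selftest-style userspace C; the sysctl path and the 5000 default are from the text, the helper name is illustrative:

```c
#include <stdio.h>

static int read_perf_max_sample_freq(void)
{
	int freq = 5000;	/* fallback when the sysctl can't be read */
	FILE *f = fopen("/proc/sys/kernel/perf_event_max_sample_rate", "r");

	if (f) {
		if (fscanf(f, "%d", &freq) != 1)
			freq = 5000;
		fclose(f);
	}
	return freq;
}
```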
-
Ioana Ciornei authored
Take advantage of the software Rx batching by using netif_receive_skb_list instead of napi_gro_receive. Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
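A sketch of the batching pattern in a simplified NAPI poll loop; next_rx_skb() is a stand-in for the driver's frame-to-skb path:

```c
LIST_HEAD(rx_list);
struct sk_buff *skb;

/* collect the skbs built during this poll on a local list */
while ((skb = next_rx_skb(ch)))
	list_add_tail(&skb->list, &rx_list);

/* one batched hand-off instead of one napi_gro_receive() per skb */
netif_receive_skb_list(&rx_list);
```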
-
Wei Yongjun authored
Fix the return value check in tipc_mcast_send_sync(), which was testing the wrong variable. Fixes: c55c8eda ("tipc: smooth change between replicast and broadcast") Reported-by: Hulk Robot <hulkci@huawei.com> Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com> Acked-by: Jon Maloy <jon.maloy@ericsson.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Heiner Kallweit says: ==================== net: phy: aquantia: report Aquantia-specific settings and features This series detects and reports quite a few Aquantia-specific settings and features. v2: - propagate timeout in patch 2 ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Heiner Kallweit authored
The AQCS109 supports a proprietary 2-pair 1Gbps mode. The standard registers don't allow telling 1000BaseT and 1000BaseT2 apart. Add reporting of this proprietary mode, based on a vendor register. Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Heiner Kallweit authored
Add reporting of firmware details. These details are available only once the firmware has finished initializing the chip. This can take some time, so we need to poll for init completion. v2: - Propagate the timeout in aqr107_wait_reset_complete(). Don't bail out completely on timeout, because the chip may be functional even without a firmware image. Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Heiner Kallweit authored
If both link partners are Aquantia PHYs, additional information is exchanged as part of the auto-negotiation. Report the remote capabilities if the link partner is an Aquantia PHY. Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vladimir Oltean authored
When phylink_of_phy_connect fails, dsa_slave_phy_setup tries to save the day by connecting to an alternative PHY, none other than a PHY on the switch's internal MDIO bus, at an address equal to the port's index. However, this does not take into consideration the scenario where the switch that failed to probe an external PHY does not have an internal MDIO bus at all. Fixes: aab9c406 ("net: dsa: Plug in PHYLINK support") Signed-off-by: Vladimir Oltean <olteanv@gmail.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Heiner Kallweit authored
Simplify aqr_config_aneg(). Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Merge git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/next-queue
Jeff Kirsher says: ==================== 100GbE Intel Wired LAN Driver Updates 2019-03-25 This series contains updates to the ice driver only. Victor updates the ice driver to be able to update the VSI queue configuration dynamically, by providing the ability to increase or decrease the VSI's number of queues. Michal fixes an issue where, when the VM starts or the VF driver is reloaded, the VLAN switch rule was lost (i.e. not added), so ensure it gets added in these cases. Brett updates the driver to support link events over the admin receive queue, instead of polling for link events. Maciej refactors the code a bit to introduce a new function to fetch the receive buffer and do the DMA synchronization, to reduce code duplication. He also adds ice_can_reuse_rx_page() to verify whether a page can be reused, so that in the future we can use this check elsewhere in the driver, along with additional driver optimizations that let us drop ice_pull_tail() altogether. He adds support for bulk updates of the refcount instead of doing them one by one, and refactors the page counting and buffer recycling so that this code can be used to clean up receive buffers when there is no skb allocated, as with XDP. He also adds the DMA_ATTR_WEAK_ORDERING and DMA_ATTR_SKIP_CPU_SYNC attributes to the DMA API during mapping operations on the receive side, so that non-x86 platforms will be able to sync only what is being used (2k buffers) instead of the entire page. Dave fixes the driver to perform the most intrusive of the requested resets and clear the other request bits, so that we do not end up with repeated resets. Bruce adds an iterator macro to clean up several for() loops. Chinh modifies the packet flags to be more generic so that they can be used for both receive and transmit. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 25 Mar, 2019 10 commits
-
Chinh T Cao authored
This structure is used to define the packet flags. These flags are applicable to both TX and RX packets. Thus, this patch changes its name from ice_rx_flag64_bits to ice_flg64_bits, and updates its member definitions. Signed-off-by: Chinh T Cao <chinh.t.cao@intel.com> Reviewed-by: Bruce Allan <bruce.w.allan@intel.com> Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Bruce Allan authored
There are numerous for() loops iterating over each of the max traffic classes. Use a simple iterator macro instead to make the code cleaner. Signed-off-by: Bruce Allan <bruce.w.allan@intel.com> Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
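A plausible shape of such an iterator; the exact macro and bound in ice may differ:

```c
/* Iterate over all max traffic classes */
#define ice_for_each_traffic_class(_i)	\
	for ((_i) = 0; (_i) < ICE_MAX_TRAFFIC_CLASS; (_i)++)
```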
-
Preethi Banala authored
Update VF VSI tc info along with vsi->num_txq/num_rxq when VF requests to configure queues. Signed-off-by: Preethi Banala <preethi.banala@intel.com> Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Dave Ertman authored
In the current implementation of ice_reset_subtask, if multiple reset types are set in pf->state, only the most intrusive one is meant to be performed, but the bits requesting the other types are not being cleared. This would lead to another reset being performed the next time the service task is scheduled. Change the flow of ice_reset_subtask so that all reset request bits in pf->state are cleared, while we still perform the most intrusive of the resets requested. Signed-off-by: Dave Ertman <david.m.ertman@intel.com> Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
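A sketch of the corrected flow, with the state-bit names assumed: every request bit is tested and cleared, and the most intrusive requested reset type wins.

```c
reset_type = ICE_RESET_INVAL;
if (test_and_clear_bit(__ICE_PFR_REQ, pf->state))
	reset_type = ICE_RESET_PFR;
if (test_and_clear_bit(__ICE_CORER_REQ, pf->state))
	reset_type = ICE_RESET_CORER;
if (test_and_clear_bit(__ICE_GLOBR_REQ, pf->state))
	reset_type = ICE_RESET_GLOBR;	/* most intrusive */
```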
-
Maciej Fijalkowski authored
Provide the DMA_ATTR_WEAK_ORDERING and DMA_ATTR_SKIP_CPU_SYNC attributes to the DMA API during mapping operations on the Rx side. With this change, non-x86 platforms will be able to sync only what is being used (a 2k buffer) instead of the entire page, which should yield a slight performance improvement. Furthermore, a DMA unmap may destroy changes made to the buffer by the CPU when the platform is not an x86 one; using the DMA_ATTR_SKIP_CPU_SYNC attribute fixes this issue. Also add a sync_single_for_device call during Rx buffer assignment, to make sure that the cache lines are cleared before the device attempts to write to the buffer. Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
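A simplified sketch of the mapping with the new attributes; the ICE_RX_DMA_ATTR name and the surrounding variables are assumptions:

```c
#define ICE_RX_DMA_ATTR	(DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING)

dma = dma_map_page_attrs(rx_ring->dev, page, 0, PAGE_SIZE,
			 DMA_FROM_DEVICE, ICE_RX_DMA_ATTR);

/* on buffer assignment: sync just the 2k region the HW will use, so
 * the cache lines are clean before the device writes to it
 */
dma_sync_single_range_for_device(rx_ring->dev, dma, bi->page_offset,
				 ICE_RXBUF_2048, DMA_FROM_DEVICE);
```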
-
Maciej Fijalkowski authored
Refactor ice_fetch_rx_buf and ice_add_rx_frag so that we have standalone functions that do either the skb construction or the frag addition to a previously constructed skb. The skb handling between rx_bufs is spread among various functions: ice_get_rx_buf will retrieve the skb pointer from the rx_buf, and if it is a NULL pointer we do ice_construct_skb, otherwise we add a frag to the current skb via ice_add_rx_frag. Then, in ice_put_rx_buf, the skb pointer that belongs to the rx_buf is cleared. Moving further, if the current frame is not an EOP frame, we assign the current skb to the rx_buf pointed to by the updated next_to_clean indicator. What is more, during buffer reuse, assign each member of ice_rx_buf individually to avoid an unnecessary copy of the skb. Last but not least, this logic split will allow for better code reuse when adding support for build_skb. Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Maciej Fijalkowski authored
Pull out the code responsible for page counting and buffer recycling so that it will be possible to clean up the Rx buffers in cases where we won't allocate an skb (e.g. XDP). Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Maciej Fijalkowski authored
{get,put}_page are atomic operations that we use for page count handling. The current refcount-handling logic is that we increment the count when passing an skb with the data from the first half of the page up to the network stack, and recycle the second half of the page. This operation protects us from losing the page, since the network stack can decrement the page's refcount from the skb. The performance can be slightly improved by doing bulk updates of the refcount instead of doing them one by one: during buffer initialization, maximize the page's refcount and don't allow the refcount to become less than two. Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
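A sketch of this bias scheme (the pagecnt_bias field name is an assumption): pay the atomic cost of one large refcount bump up front, then track the driver's share in a cheap local counter during recycling.

```c
/* at allocation time: take a large refcount once */
page_ref_add(page, USHRT_MAX - 1);
rx_buf->pagecnt_bias = USHRT_MAX;

/* per recycle: a local decrement instead of get_page() */
rx_buf->pagecnt_bias--;

/* top both counters back up before the refcount can drop below two */
if (unlikely(rx_buf->pagecnt_bias == 1)) {
	page_ref_add(page, USHRT_MAX - 1);
	rx_buf->pagecnt_bias = USHRT_MAX;
}
```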
-
Maciej Fijalkowski authored
Instead of adding a frag and then, when dealing with the EOP frame, accessing that frag to copy the headers onto the linear part of the skb, we can do this in ice_add_rx_frag when data_len is still 0 and the frame won't fit onto the linear part as a whole. The function comment of ice_pull_tail was a bit misleading because of the optimizations it mentioned (dropping a frag, maintaining an accurate truesize of the skb); it seems that this part of the logic was dropped and the comment was not updated to reflect the change. Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Maciej Fijalkowski authored
Introduce ice_can_reuse_rx_page, which verifies whether the page can be reused and returns the boolean result to the caller. Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
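A plausible shape of the check, with the conditions reconstructed from the surrounding commits (the exact implementation may differ):

```c
static bool ice_can_reuse_rx_page(struct ice_rx_buf *rx_buf)
{
	struct page *page = rx_buf->page;

	/* don't recycle pages that were allocated on a remote NUMA node */
	if (page_to_nid(page) != numa_mem_id())
		return false;

	/* the driver (via its bias) must hold the only reference */
	if (page_count(page) - rx_buf->pagecnt_bias > 1)
		return false;

	return true;
}
```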
-