- 17 Apr, 2022 7 commits
-
-
Ansuel Smith authored
Add support for multiple switches with OF mdio bus declaration. Unify the bus id naming and use the same logic for both the legacy and OF mdio bus. Signed-off-by: Ansuel Smith <ansuelsmth@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ansuel Smith authored
Restore the original way of handling mdio read errors by returning 0xffff. This was wrongly changed when internal_mdio_read was introduced; now that both legacy and internal paths use the same function, make sure that they behave the same way. Fixes: ce062a0a ("net: dsa: qca8k: fix kernel panic with legacy mdio mapping") Signed-off-by: Ansuel Smith <ansuelsmth@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ansuel Smith authored
Now that dsa_switch_ops is not switch specific anymore, we can drop it from qca8k_priv and use the static ops directly for the dsa_switch pointer. Signed-off-by: Ansuel Smith <ansuelsmth@gmail.com> Reviewed-by: Vladimir Oltean <olteanv@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ansuel Smith authored
In an attempt to reduce qca8k_priv space, rework and simplify the mdiobus logic. We now declare an mdiobus instead of relying on DSA phy_read/write, even if an mdio node is not present. This is all to make the qca8k ops static and not switch specific. With a legacy implementation, where ports don't have a phy mapping declared in the dts with an mdio node, we declare a 'qca8k-legacy' mdiobus. The conversion logic is used, as the legacy read and write ops are used instead of the internal ones. Also drop legacy_phy_port_mapping, as we now declare an mdiobus with ops that already handle the workaround. Signed-off-by: Ansuel Smith <ansuelsmth@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
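A rough kernel-context sketch of the pattern described above (hypothetical names, not the actual qca8k code): register a dedicated MDIO bus with driver-provided accessors, using the OF registration path when an mdio node exists and a plain ("legacy") registration otherwise.

```c
#include <linux/phy.h>
#include <linux/of_mdio.h>

/* Hypothetical sketch: register a dedicated mdiobus with custom
 * read/write ops; with an mdio node use the OF registration path,
 * otherwise fall back to a plain ("legacy") registration. */
static int example_mdio_register(struct device *dev, void *priv,
				 struct device_node *mdio_np,
				 int (*read)(struct mii_bus *bus, int addr, int regnum),
				 int (*write)(struct mii_bus *bus, int addr, int regnum, u16 val))
{
	struct mii_bus *bus = devm_mdiobus_alloc(dev);

	if (!bus)
		return -ENOMEM;

	bus->priv = priv;
	bus->name = mdio_np ? "example switch mdio" : "example-legacy";
	snprintf(bus->id, MII_BUS_ID_SIZE, "%s-0", dev_name(dev));
	bus->parent = dev;
	bus->read = read;
	bus->write = write;

	return mdio_np ? devm_of_mdiobus_register(dev, bus, mdio_np)
		       : devm_mdiobus_register(dev, bus);
}
```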
-
Ansuel Smith authored
port_sts is a thing of the past for this driver. It was present in the initial implementation of this driver, and parts of the original struct were dropped over time. Using an array of int to store whether a port is enabled or not for PM operations seems overkill. Switch to a simple u8 to store the port status, where each bit corresponds to a port (bit set: port enabled; bit clear: port disabled). Also add some comments to better describe why we need to track port status. Signed-off-by: Ansuel Smith <ansuelsmth@gmail.com> Reviewed-by: Vladimir Oltean <olteanv@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
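A minimal, self-contained sketch of the bitmask idea (hypothetical helper names, not the driver's own code):

```c
#include <stdint.h>
#include <stdio.h>

/* One u8 where bit n set means port n is enabled, replacing an array of ints. */
static uint8_t port_enabled_map;

static void port_set_enabled(int port, int enable)
{
	if (enable)
		port_enabled_map |= 1U << port;
	else
		port_enabled_map &= ~(1U << port);
}

static int port_is_enabled(int port)
{
	return !!(port_enabled_map & (1U << port));
}

int main(void)
{
	port_set_enabled(3, 1);
	printf("port 3: %d, port 4: %d\n", port_is_enabled(3), port_is_enabled(4));
	return 0;
}
```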
-
Ansuel Smith authored
DSA sets the CPU port MTU based on the largest MTU of all the slave ports. Based on this, we can drop the MTU array from qca8k_priv and apply the port_change_mtu logic when DSA changes the MTU of the CPU port, as the switch has a global MTU setting for each port. Signed-off-by: Ansuel Smith <ansuelsmth@gmail.com> Reviewed-by: Vladimir Oltean <olteanv@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Arun Ajith S authored
Add a new neighbour cache entry in STALE state for routers on receiving an unsolicited (gratuitous) neighbour advertisement with target link-layer-address option specified. This is similar to the arp_accept configuration for IPv4. A new sysctl endpoint is created to turn on this behaviour: /proc/sys/net/ipv6/conf/<interface>/accept_unsolicited_na. Signed-off-by: Arun Ajith S <aajith@arista.com> Reviewed-by: David Ahern <dsahern@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 15 Apr, 2022 33 commits
-
-
Eric Dumazet authored
idev can be NULL, as the surrounding code suggests. Fixes: 4daf841a ("net: ipv6: add skb drop reasons to ip6_rcv_core()") Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Menglong Dong <imagedong@tencent.com> Cc: Jiang Biao <benbjiang@tencent.com> Cc: Hao Peng <flyingpeng@tencent.com> Link: https://lore.kernel.org/r/20220413205653.1178458-1-eric.dumazet@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Eric Dumazet authored
For some unknown reason qdisc_reset() is using a convoluted way of freeing two lists of skbs. Use __skb_queue_purge() instead. Signed-off-by: Eric Dumazet <edumazet@google.com> Acked-by: Jamal Hadi Salim <jhs@mojatatu.com> Link: https://lore.kernel.org/r/20220414011004.2378350-1-eric.dumazet@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
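As a rough illustration (generic sk_buff_head handling, not the actual qdisc_reset() code), __skb_queue_purge() replaces a manual drain loop:

```c
#include <linux/skbuff.h>

/* Illustrative only: drain a queue the long way vs. with the helper. */
static void drain_by_hand(struct sk_buff_head *list)
{
	struct sk_buff *skb;

	while ((skb = __skb_dequeue(list)) != NULL)
		kfree_skb(skb);
}

static void drain_with_helper(struct sk_buff_head *list)
{
	__skb_queue_purge(list);	/* frees every skb still queued */
}
```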
-
Leon Romanovsky authored
In review comment [1] it was pointed out that new code is not supposed to set a driver version and should rely on the kernel version instead. As an outcome of that comment, all the dance around writing such a driver version to FW should be removed too, because in the upstream kernel the whole driver will have the same version, so reads/writes from/to FW will give the same result. [1] https://lore.kernel.org/all/YladGTmon1x3dfxI@unreal Fixes: 862cd659 ("octeon_ep: Add driver framework and device initialization") Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Link: https://lore.kernel.org/r/5d76f3116ee795071ec044eabb815d6c2bdc7dbd.1649922731.git.leonro@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Jakub Kicinski authored
Sukadev Bhattiprolu says: ==================== ibmvnic: Use a set of LTBs per pool ibmvnic uses a single large long term buffer (LTB) per rx or tx pool (queue). This has two limitations. First, if we need to free/allocate an LTB (e.g. during a reset) under low memory conditions, the allocation can fail. Second, the kernel limits the size of a single LTB (DMA buffer) to 16MB (based on MAX_ORDER). With jumbo frames (mtu = 9000) we can only have about 1763 buffers per LTB (16MB / 9588 bytes per frame) even though the max supported buffers is 4096. (The 9588 instead of 9088 comes from IBMVNIC_BUFFER_HLEN.) To overcome these limitations, allow creating a set of LTBs per queue. ==================== Link: https://lore.kernel.org/r/20220413171026.1264294-1-drt@linux.ibm.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Sukadev Bhattiprolu authored
Allow multiple LTBs in the txpool's ltb_set, i.e. rather than using a single large LTB, use several smaller LTBs. The first n-1 LTBs will all be of the same size. The size of the last LTB in the set depends on the number of buffers and the buffer (mtu) size. This strategy hopefully allows more reuse of the initial LTBs and also reduces the chances of an allocation failure (of the large LTB) when the system is low on memory. Suggested-by: Brian King <brking@linux.ibm.com> Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.ibm.com> Signed-off-by: Dany Madden <drt@linux.ibm.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Sukadev Bhattiprolu authored
Allow multiple LTBs in the rxpool's ltb_set. The first n-1 LTBs will all be of the same size. The size of the last LTB in the set depends on the number of buffers and the buffer (mtu) size. Having a set of LTBs per pool provides a couple of benefits. First, with the current value of IBMVNIC_MAX_LTB_SIZE of 16MB and an MTU of 9000, we need an LTB (DMA buffer) of that size, but the allocation can fail in low memory conditions. With a set of LTBs per pool, we can use several smaller (8MB) LTBs and hopefully have fewer allocation failures. (See also comments in ibmvnic.h on the trade-off with smaller LTBs.) Second, since the kernel limits the size of a DMA buffer to 16MB (based on MAX_ORDER), with a single DMA buffer per pool, the pool is also limited to 16MB. This in turn limits the number of buffers per pool to 1763 when the MTU is 9000. With a set of LTBs per pool, we can have up to the max of 4096 buffers per pool even when the MTU is 9000. Suggested-by: Brian King <brking@linux.ibm.com> Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.ibm.com> Signed-off-by: Dany Madden <drt@linux.ibm.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
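A standalone sketch of the sizing rule described above (hypothetical cap and names; the 9588-byte buffer size and 4096-buffer count are the jumbo-frame figures from the cover letter): the first n-1 LTBs are as large as allowed, and the last one only as large as the remaining buffers need.

```c
#include <stdio.h>

#define MAX_LTB_BYTES	(16u << 20)	/* 16MB cap from MAX_ORDER, per the text */

static void size_ltb_set(unsigned int num_bufs, unsigned int buf_size)
{
	unsigned int bufs_per_ltb = MAX_LTB_BYTES / buf_size;
	unsigned int nltbs = (num_bufs + bufs_per_ltb - 1) / bufs_per_ltb;
	unsigned int last = num_bufs - (nltbs - 1) * bufs_per_ltb;

	printf("%u buffers of %u bytes -> %u LTBs (%u full, last holds %u buffers)\n",
	       num_bufs, buf_size, nltbs, nltbs - 1, last);
}

int main(void)
{
	size_ltb_set(4096, 9588);	/* max buffers with MTU 9000 */
	return 0;
}
```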
-
Sukadev Bhattiprolu authored
Define and use interfaces that treat the long term buffer (LTB) of an rxpool as a set of LTBs rather than a single LTB. The set only has one LTB for now. Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.ibm.com> Signed-off-by: Dany Madden <drt@linux.ibm.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Sukadev Bhattiprolu authored
Define a helper to map a given txpool buffer into its corresponding long term buffer (LTB) and offset. Currently there is just one LTB per txpool so the mapping is trivial. When we add support for multiple LTBs per txpool, this helper will be more useful. Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.ibm.com> Signed-off-by: Dany Madden <drt@linux.ibm.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
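A minimal standalone sketch of such a mapping helper (hypothetical names, not the ibmvnic helper itself): with every LTB in the set except possibly the last holding the same number of buffers, the buffer index splits into an LTB index and a byte offset.

```c
/* Hypothetical sketch of mapping a pool-wide buffer index to an LTB
 * in the set plus an offset inside that LTB. */
static void map_pool_buf(unsigned int bufidx, unsigned int bufs_per_ltb,
			 unsigned int buf_size,
			 unsigned int *ltb_idx, unsigned int *offset)
{
	*ltb_idx = bufidx / bufs_per_ltb;		/* which LTB in the set */
	*offset  = (bufidx % bufs_per_ltb) * buf_size;	/* byte offset inside it */
}
```

With a single LTB per pool, as is the case today, bufs_per_ltb equals the pool size, so ltb_idx is always 0 and the mapping is indeed trivial.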
-
Sukadev Bhattiprolu authored
Define a helper to map a given rx pool buffer into its corresponding long term buffer (LTB) and offset. Currently there is just one LTB per pool so the mapping is trivial. When we add support for multiple LTBs per pool, this helper will be more useful. Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.ibm.com> Signed-off-by: Dany Madden <drt@linux.ibm.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Sukadev Bhattiprolu authored
The local variable 'index' is heavily used in some functions and is confusing with the presence of other "index" fields like pool->index, ->consumer_index, etc. Rename it to bufidx to better reflect that it's the index of a buffer in the pool. Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.ibm.com> Signed-off-by: Dany Madden <drt@linux.ibm.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Jakub Kicinski authored
Jie Wang says: ==================== net: ethtool: add support to get/set tx push by ethtool -G/g These three patches add tx push to the ring params and adapt the set and get APIs of ring params. ==================== Link: https://lore.kernel.org/r/20220412020121.14140-1-huangguangbin2@huawei.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Jie Wang authored
This patch adds a tx push param to the hns3 ring params and adapts the set and get APIs of ring params, so users can set it with ethtool -G and get it with ethtool -g. Signed-off-by: Jie Wang <wangjie125@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Jie Wang authored
Currently these two checks in ethnl_set_rings() are performed after rtnl_lock(), which does useless work if the request is invalid. So this patch moves these checks before rtnl_lock() to avoid that cost. Signed-off-by: Jie Wang <wangjie125@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
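A generic kernel-context sketch of the resulting shape (the example_* names are hypothetical): cheap validation runs before rtnl_lock(), so invalid requests never pay for the lock.

```c
#include <linux/rtnetlink.h>

struct example_req;
int example_validate(const struct example_req *req);
int example_apply(const struct example_req *req);

static int example_set_rings(const struct example_req *req)
{
	int ret;

	ret = example_validate(req);	/* moved out from under the lock */
	if (ret)
		return ret;

	rtnl_lock();
	ret = example_apply(req);
	rtnl_unlock();

	return ret;
}
```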
-
Jie Wang authored
Currently tx push is a standard driver feature which controls the use of a fast-path descriptor push. So this patch extends the ringparam APIs and data structures to support setting/getting tx push via ethtool -G/g. Signed-off-by: Jie Wang <wangjie125@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
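A hedged driver-side sketch, assuming the extended ethtool ringparam callback that takes a struct kernel_ethtool_ringparam as this series uses; example_priv and its fields are made up:

```c
#include <linux/ethtool.h>
#include <linux/netdevice.h>

struct example_priv {
	u32 tx_desc_num;
	u8 tx_push_enabled;
};

static void example_get_ringparam(struct net_device *ndev,
				  struct ethtool_ringparam *ring,
				  struct kernel_ethtool_ringparam *kernel_ring,
				  struct netlink_ext_ack *extack)
{
	struct example_priv *priv = netdev_priv(ndev);

	ring->tx_pending = priv->tx_desc_num;
	/* Report whether the fast-path descriptor push is in use. */
	kernel_ring->tx_push = priv->tx_push_enabled;
}
```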
-
Yang Yingliang authored
If register_netdev() fails, octep_probe() should return the error code. Fixes: 862cd659 ("octeon_ep: Add driver framework and device initialization") Reported-by: Hulk Robot <hulkci@huawei.com> Signed-off-by: Yang Yingliang <yangyingliang@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Radhey Shyam Pandey says: ==================== net: emaclite: Trivial code cleanup This patchset fixes coding style issues, removes the BUFFER_ALIGN macro and also updates the copyright text. I had to resend as the earlier series didn't reach the mailing list due to a configuration issue. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Shravya Kumbham authored
The BUFFER_ALIGN macro is used to calculate the number of bytes required for the next alignment. Instead of this, we can directly use skb_reserve(skb, NET_IP_ALIGN) to make the protocol header buffer aligned on at least a 4-byte boundary, where NET_IP_ALIGN is by default defined as 2. So remove BUFFER_ALIGN and its related defines, since this can be done by skb_reserve() itself. Signed-off-by: Shravya Kumbham <shravya.kumbham@xilinx.com> Signed-off-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com> Signed-off-by: David S. Miller <davem@davemloft.net>
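A kernel-context sketch of the pattern relied on here (not the exact emaclite code): allocate the receive skb with room for NET_IP_ALIGN and reserve it up front, so the IP header lands on a 4-byte boundary after the 14-byte Ethernet header.

```c
#include <linux/netdevice.h>
#include <linux/skbuff.h>

static struct sk_buff *example_rx_alloc(struct net_device *ndev, unsigned int len)
{
	struct sk_buff *skb = netdev_alloc_skb(ndev, len + NET_IP_ALIGN);

	if (skb)
		skb_reserve(skb, NET_IP_ALIGN);	/* typically 2 bytes */
	return skb;
}
```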
-
Michal Simek authored
Based on recommended guidance, the Copyright term should also be present in front of (c). That's why the driver was aligned to match this pattern. It helps automated tools with source code scanning. Signed-off-by: Michal Simek <michal.simek@xilinx.com> Signed-off-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Radhey Shyam Pandey authored
Make coding style changes to fix checkpatch script warnings. There is no functional change. It fixes the below checks and warnings: CHECK: Blank lines aren't necessary after an open brace '{' CHECK: spinlock_t definition without comment CHECK: Please don't use multiple blank lines WARNING: Prefer 'unsigned int' to bare use of 'unsigned' CHECK: braces {} should be used on all arms of this statement CHECK: Unbalanced braces around else statement CHECK: Alignment should match open parenthesis WARNING: Missing a blank line after declarations Signed-off-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Minghao Chi authored
Use pm_runtime_resume_and_get() to replace pm_runtime_get_sync() and pm_runtime_put_noidle(). This change just simplifies the code; there are no functional changes. Reported-by: Zeal Robot <zealci@zte.com.cn> Signed-off-by: Minghao Chi <chi.minghao@zte.com.cn> Signed-off-by: David S. Miller <davem@davemloft.net>
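The transformation, sketched in kernel context (example_resume_* are made-up callers): pm_runtime_resume_and_get() already drops the usage count on failure, so the explicit pm_runtime_put_noidle() goes away.

```c
#include <linux/pm_runtime.h>

/* Before the cleanup (illustrative): */
static int example_resume_old(struct device *dev)
{
	int ret = pm_runtime_get_sync(dev);

	if (ret < 0) {
		pm_runtime_put_noidle(dev);
		return ret;
	}
	return 0;
}

/* After: the helper handles the error-path put internally. */
static int example_resume_new(struct device *dev)
{
	int ret = pm_runtime_resume_and_get(dev);

	if (ret < 0)
		return ret;
	return 0;
}
```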
-
Colin Ian King authored
There is a spelling mistake in a dev_info message. Fix it. Signed-off-by: Colin Ian King <colin.i.king@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Ido Schimmel says: ==================== mlxsw: Preparations for line cards support Currently, mlxsw registers thermal zones as well as hwmon entries for objects such as transceiver modules and gearboxes. In upcoming modular systems, these objects are no longer found on the main board (i.e., slot 0), but on plug-able line cards. This patchset prepares mlxsw for such systems in terms of hwmon, thermal and cable access support. Patches #1-#3 gradually prepare mlxsw for transceiver modules access support for line cards by splitting some of the internal structures and some APIs. Patches #4-#5 gradually prepare mlxsw for hwmon support for line cards by splitting some of the internal structures and augmenting them with a slot index. Patches #6-#7 do the same for thermal zones. Patch #8 selects cooling device for binding to a thermal zone by exact name match to prevent binding to non-relevant devices. Patch #9 replaces internal define for thermal zone name length with a common define. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vadim Pasternak authored
Replace internal define 'MLXSW_THERMAL_ZONE_MAX_NAME' by common 'THERMAL_NAME_LENGTH'. Signed-off-by: Vadim Pasternak <vadimp@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vadim Pasternak authored
Modular systems support additional cooling devices "mlxreg_fan1", "mlxreg_fan2", etcetera. Thermal zones in the "mlxsw" driver should be bound to the same device as before, called "mlxreg_fan". Use an exact match for the cooling device name to avoid binding to the new additional cooling devices. Signed-off-by: Vadim Pasternak <vadimp@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
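A standalone illustration of why the exact match matters (device names taken from the commit message): a prefix match on "mlxreg_fan" would also bind the new per-line-card fans.

```c
#include <stdio.h>
#include <string.h>

static int binds_exact(const char *name)
{
	return strcmp(name, "mlxreg_fan") == 0;
}

static int binds_prefix(const char *name)
{
	return strncmp(name, "mlxreg_fan", strlen("mlxreg_fan")) == 0;
}

int main(void)
{
	const char *names[] = { "mlxreg_fan", "mlxreg_fan1", "mlxreg_fan2" };

	for (unsigned int i = 0; i < 3; i++)
		printf("%-12s exact=%d prefix=%d\n",
		       names[i], binds_exact(names[i]), binds_prefix(names[i]));
	return 0;
}
```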
-
Vadim Pasternak authored
Add prefix "lc#n" to thermal zones associated with the thermal objects found on line cards. For example thermal zone for module #9 located at line card #7 will have type: mlxsw-lc7-module9. And thermal zone for gearbox #3 located at line card #5 will have type: mlxsw-lc5-gearbox3. Signed-off-by: Vadim Pasternak <vadimp@nvidia.com> Reviewed-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vadim Pasternak authored
Introduce an intermediate level for thermal zone areas. Currently all thermal zones are associated with thermal objects located on the main board. Such objects are created during driver initialization and removed during driver de-initialization. For line cards in modular systems, the thermal zones are to be associated with a specific line card. They should be created whenever a new line card becomes available (inserted, validated, powered and enabled) and removed when the line card becomes unavailable. The thermal objects found on line card #n are accessed by setting the slot index to #n, while for access to objects found on the main board the slot index should be set to the default value of zero. Each thermal area contains the set of thermal zones associated with a particular slot index. Thus, the introduction of thermal zone areas allows the same APIs to be used for the main board and line cards by adding a slot index argument. Signed-off-by: Vadim Pasternak <vadimp@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vadim Pasternak authored
Add a 'slot' parameter to the 'mlxsw_hwmon_dev' structure. Use this parameter in the mlxsw_reg_mtmp_pack(), mlxsw_reg_mtbr_pack(), mlxsw_reg_mgpir_pack() and mlxsw_reg_mtmp_slot_index_set() routines. For the main board it'll always be zero; for line cards it'll be set to the physical slot number in modular systems. Signed-off-by: Vadim Pasternak <vadimp@nvidia.com> Reviewed-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vadim Pasternak authored
Currently, mlxsw supports a single hwmon device and registers it with attributes corresponding to the various objects found on the main board such as fans and gearboxes. Line cards can have the same objects, but unlike the main board they can be added and removed while the system is running. The various hwmon objects found on these line cards should be created when the line card becomes available and destroyed when the line card becomes unavailable. The above can be achieved by representing each line card as a different hwmon device and registering / unregistering it when the line card becomes available / unavailable. Prepare for multi hwmon device support by splitting 'struct mlxsw_hwmon' into 'struct mlxsw_hwmon' and 'struct mlxsw_hwmon_dev'. The first will hold information relevant to all hwmon devices, whereas the second will hold per-hwmon device information. Signed-off-by: Vadim Pasternak <vadimp@nvidia.com> Reviewed-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vadim Pasternak authored
Use a separate function for the enablement of port module events such as plug/unplug and temperature threshold crossing. The motivation is to reuse the function for line cards. Signed-off-by: Vadim Pasternak <vadimp@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vadim Pasternak authored
The port module core is tasked with module operations such as setting power mode policy and reset. The per-module information is currently stored in one large array suited for non-modular systems where only the main board is present (i.e., slot index 0). As a preparation for line cards support, allocate a per line card array according to the queried number of slots in the system. For each line card, allocate a module array according to the queried maximum number of modules per-slot. Signed-off-by: Vadim Pasternak <vadimp@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
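A standalone, hypothetical sketch of the layout change: one entry per slot (main board plus line cards), each holding its own per-module array sized by the queried maximum number of modules for that slot.

```c
#include <stdlib.h>

struct module_info {
	int power_mode;		/* placeholder per-module state */
};

struct slot_modules {
	struct module_info *modules;
	unsigned int max_modules;
};

/* Allocate one per-slot entry per slot, each with its own module array. */
static struct slot_modules *alloc_slots(unsigned int num_slots,
					unsigned int max_modules_per_slot)
{
	struct slot_modules *slots = calloc(num_slots, sizeof(*slots));

	if (!slots)
		return NULL;

	for (unsigned int i = 0; i < num_slots; i++) {
		slots[i].max_modules = max_modules_per_slot;
		slots[i].modules = calloc(max_modules_per_slot,
					  sizeof(*slots[i].modules));
		if (!slots[i].modules) {
			while (i--)
				free(slots[i].modules);
			free(slots);
			return NULL;
		}
	}
	return slots;
}

int main(void)
{
	/* e.g. slot 0 = main board, slots 1..8 = line cards */
	struct slot_modules *slots = alloc_slots(9, 16);

	if (slots) {
		for (unsigned int i = 0; i < 9; i++)
			free(slots[i].modules);
		free(slots);
	}
	return 0;
}
```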
-
Vadim Pasternak authored
Extend all cable info APIs with a 'slot_index' argument. For the main board, the slot will always be set to zero and these APIs will work as before. If cable information needs to be read from cages located on line cards, the slot should be set to the physical slot number where the line card is located in the modular system. Signed-off-by: Vadim Pasternak <vadimp@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Minghao Chi authored
Using pm_runtime_resume_and_get() is more appropriate for simplifying the code. Reported-by: Zeal Robot <zealci@zte.com.cn> Signed-off-by: Minghao Chi <chi.minghao@zte.com.cn> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Minghao Chi authored
Using pm_runtime_resume_and_get() is more appropriate for simplifying the code. Reported-by: Zeal Robot <zealci@zte.com.cn> Signed-off-by: Minghao Chi <chi.minghao@zte.com.cn> Signed-off-by: David S. Miller <davem@davemloft.net>
-