- 28 Jun, 2019 40 commits
-
Harshitha Ramamurthy authored
This patch enables macvlan offloads for i40e. The idea is to use channels as macvlan interfaces. The channels are VSIs of type VMDQ. When the first macvlan is created, the maximum number of channels possible is created. From then on, as a macvlan interface is created, a macvlan filter is added to these already created channels (VSIs). This patch utilizes subordinate device traffic classes to make queue groups (channels) available for an upper device like a macvlan.

Steps to configure macvlan offloads:
1. ethtool -K ethx l2-fwd-offload on
2. ip link add link ethx name macvlan1 type macvlan
3. ip addr add <address> dev macvlan1
4. ip link set macvlan1 up

Signed-off-by: Harshitha Ramamurthy <harshitha.ramamurthy@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Alexander Duyck authored
Change the ethtool link settings call to just read the cached state out of the adapter structure instead of trying to recheck the value from the PF. Doing this should prevent excessive reading of the mailbox. Signed-off-by: Alexander Duyck <alexander.h.duyck@linux.intel.com> Reviewed-by: "Guilherme G. Piccoli" <gpiccoli@canonical.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Colin Ian King authored
A recent commit efa14c39 ("iavf: allow null RX descriptors") added a null pointer sanity check on rx_buffer; however, rx_buffer is being dereferenced before that check, which implies a null pointer dereference bug can potentially occur. Fix this by only dereferencing rx_buffer after the null pointer check. Addresses-Coverity: ("Dereference before null check") Signed-off-by: Colin Ian King <colin.king@canonical.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
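The pattern involved is the classic dereference-before-check; a minimal illustrative sketch of the before/after, not a literal copy of the iavf code (the prefetch of the buffer page is used as a representative dereference):

    /* Before (problematic): the dereference happens ahead of the check. */
    prefetchw(rx_buffer->page);
    if (!rx_buffer)
            return NULL;

    /* After: check for NULL first, dereference only once it is known valid. */
    if (!rx_buffer)
            return NULL;
    prefetchw(rx_buffer->page);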
-
Artem Bityutskiy authored
This patch adds the RR2DCDELAY register to the ethtool registers dump. RR2DCDELAY exists on I210 and I211 Intel Gigabit Ethernet chips and stands for "Read Request To Data Completion Delay". Here is how this register is described in the I210 datasheet: "This field captures the maximum PCIe split time in 16 ns units, which is the maximum delay between the read request to the first data completion. This is giving an estimation of the PCIe round trip time." In other words, whenever the I210 reads from host memory (e.g., fetches a descriptor from the ring), the chip measures every PCI DMA read transaction and captures the maximum value, so the register ends up containing the longest DMA transaction time. This register is very useful for troubleshooting and research purposes. If you are dealing with time-sensitive networks, it can help you get an idea of your "I210-to-ring" latency, which helps answer questions like "should I have PCIe ASPM enabled?" or "should I enable deep C-states on my system?" It is safe to read this register at any point; reading it has no effect on the I210 chip functionality. Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
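In practice the register simply becomes one more word in the buffer that igb_get_regs() fills for ethtool; a hedged sketch of the idea, where the register offset and buffer index are assumptions for illustration rather than values taken from the actual patch:

    /* Offset is an assumption here; the real define lives in the igb
     * register headers and should be checked against the I210 datasheet. */
    #define RR2DCDELAY_SKETCH       0x05BF4

    /* Somewhere in igb_get_regs(): append the value to the register dump. */
    regs_buff[n++] = rd32(RR2DCDELAY_SKETCH);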
-
Artem Bityutskiy authored
This patch has no functional impact and it is just a preparation for the following patch. It removes an early return from the 'igb_get_regs()' function by moving the 82576-only registers dump into an "if" block. With this preparation, we can dump more non-82576 registers at the end of this function. Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Jeff Kirsher authored
This aligns the iavf_debug() macro with the other Intel drivers. Add the bus number (the bus_id field) to i40e_bus_info so the output shows each physical port (i.e. function) in the following format: [[[[<domain>]:]<bus>]:][<slot>][.[<func>]]. Domains are numbered from 0 to ffff, buses from 0 to ff, slots from 0 to 1f and functions from 0 to 7. Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
-
Arjan van de Ven authored
The e1000e driver is a great user of the usleep_range() API, and has nice ranges that in principle help power management. However, the ranges that are used only during system startup are very long (and can easily add 100 msec to the boot time), while the power savings of such long ranges are irrelevant due to the one-off, boot-only nature of these functions. This patch shrinks some of the longest ranges (while still using a power-friendly 1 msec range); this saves 100 msec+ of boot time on my BDW NUCs. Signed-off-by: Arjan van de Ven <arjan@linux.intel.com> Signed-off-by: Paul Menzel <pmenzel@molgen.mpg.de> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
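The change is essentially tightening the sleep window while keeping usleep_range() so the scheduler still gets slack; a representative sketch with made-up values rather than the exact ones from the patch:

    /* Before: up to ~20 ms per call in one-off init paths (illustrative). */
    usleep_range(10000, 20000);

    /* After: still a range, so power management keeps its slack, but ~1 ms wide. */
    usleep_range(1000, 2000);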
-
Gustavo A. R. Silva authored
Make use of the struct_size() helper instead of an open-coded version in order to avoid any potential type mistakes, in particular in the context in which this code is being used. So, replace code of the following form:

    sizeof(struct virtchnl_ether_addr_list) + (count * sizeof(struct virtchnl_ether_addr))

with:

    struct_size(veal, list, count)

and so on... This code was detected with the help of Coccinelle. Signed-off-by: "Gustavo A. R. Silva" <gustavo@embeddedor.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Venkatesh Srinivas authored
e1000 writes to doorbells to post transmit descriptors and fill the receive ring. After writing descriptors to memory but before writing to doorbells, use dma_wmb() rather than wmb(). wmb() is more heavyweight than necessary for a device to see descriptor writes. On x86, this avoids SFENCEs before doorbell writes in both the Tx and Rx paths. On ARM, this converts DSB ST -> DMB OSHST. Tested: 82576EB / x86; QEMU (qemu emulates an 8257x) Signed-off-by: Venkatesh Srinivas <venkateshs@google.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
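The ordering requirement is only between the descriptor writes and the subsequent doorbell (tail) write, which is exactly what dma_wmb() guarantees; a minimal sketch of the pattern, where the descriptor fields and register names are placeholders rather than the literal e1000 code:

    /* Publish the descriptor contents to memory the device will read... */
    tx_desc->buffer_addr = cpu_to_le64(dma);
    tx_desc->lower.data  = cpu_to_le32(cmd_type_len);

    /* ...make sure the device sees them before the tail bump... */
    dma_wmb();

    /* ...then ring the doorbell. */
    writel(next_to_use, hw_addr + tdt_offset);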
-
Colin Ian King authored
The u32 variable rem is being shifted using u32 arithmetic; however, it is being passed to div_u64, which expects the expression to be a u64. The 32-bit shift may potentially overflow, so cast rem to a u64 before shifting to avoid this. Also remove the comment about overflow. Addresses-Coverity: ("Unintentional integer overflow") Fixes: cd458320 ("ixgbe: implement support for SDP/PPS output on X550 hardware") Fixes: 68d9676f ("ixgbe: fix PTP SDP pin setup on X540 hardware") Signed-off-by: Colin Ian King <colin.king@canonical.com> Acked-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
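The fix amounts to promoting the operand before the shift so the intermediate value is computed in 64 bits; a small before/after sketch, where rem is a u32 and shift/divisor are stand-ins, not the actual ixgbe values:

    /* Before: the shift is evaluated in 32-bit arithmetic and can overflow. */
    result = div_u64(rem << shift, divisor);

    /* After: widen rem first so the full shifted value reaches div_u64(). */
    result = div_u64((u64)rem << shift, divisor);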
-
Dann Frazier authored
An ipsec structure will not be allocated if the hardware does not support offload. Fixes the following Oops: [ 191.045452] Unable to handle kernel NULL pointer dereference at virtual address 0000000000000000 [ 191.054232] Mem abort info: [ 191.057014] ESR = 0x96000004 [ 191.060057] Exception class = DABT (current EL), IL = 32 bits [ 191.065963] SET = 0, FnV = 0 [ 191.069004] EA = 0, S1PTW = 0 [ 191.072132] Data abort info: [ 191.074999] ISV = 0, ISS = 0x00000004 [ 191.078822] CM = 0, WnR = 0 [ 191.081780] user pgtable: 4k pages, 48-bit VAs, pgdp = 0000000043d9e467 [ 191.088382] [0000000000000000] pgd=0000000000000000 [ 191.093252] Internal error: Oops: 96000004 [#1] SMP [ 191.098119] Modules linked in: vhost_net vhost tap vfio_pci vfio_virqfd vfio_iommu_type1 vfio xt_CHECKSUM iptable_mangle ipt_MASQUERADE iptable_nat nf_nat_ipv4 nf_nat xt_conntrack nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 ipt_REJECT nf_reject_ipv4 xt_tcpudp bridge stp llc ebtable_filter devlink ebtables ip6table_filter ip6_tables iptable_filter bpfilter ipmi_ssif nls_iso8859_1 input_leds joydev ipmi_si hns_roce_hw_v2 ipmi_devintf hns_roce ipmi_msghandler cppc_cpufreq sch_fq_codel ib_iser rdma_cm iw_cm ib_cm ib_core iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi ip_tables x_tables autofs4 ses enclosure btrfs zstd_compress raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx xor hid_generic usbhid hid raid6_pq libcrc32c raid1 raid0 multipath linear ixgbevf hibmc_drm ttm [ 191.168607] drm_kms_helper aes_ce_blk aes_ce_cipher syscopyarea crct10dif_ce sysfillrect ghash_ce qla2xxx sysimgblt sha2_ce sha256_arm64 hisi_sas_v3_hw fb_sys_fops sha1_ce uas nvme_fc mpt3sas ixgbe drm hisi_sas_main nvme_fabrics usb_storage hclge scsi_transport_fc ahci libsas hnae3 raid_class libahci xfrm_algo scsi_transport_sas mdio aes_neon_bs aes_neon_blk crypto_simd cryptd aes_arm64 [ 191.202952] CPU: 94 PID: 0 Comm: swapper/94 Not tainted 4.19.0-rc1+ #11 [ 191.209553] Hardware name: Huawei D06 /D06, BIOS Hisilicon D06 UEFI RC0 - V1.20.01 04/26/2019 [ 191.218064] pstate: 20400089 (nzCv daIf +PAN -UAO) [ 191.222873] pc : ixgbe_ipsec_vf_clear+0x60/0xd0 [ixgbe] [ 191.228093] lr : ixgbe_msg_task+0x2d0/0x1088 [ixgbe] [ 191.233044] sp : ffff000009b3bcd0 [ 191.236346] x29: ffff000009b3bcd0 x28: 0000000000000000 [ 191.241647] x27: ffff000009628000 x26: 0000000000000000 [ 191.246946] x25: ffff803f652d7600 x24: 0000000000000004 [ 191.252246] x23: ffff803f6a718900 x22: 0000000000000000 [ 191.257546] x21: 0000000000000000 x20: 0000000000000000 [ 191.262845] x19: 0000000000000000 x18: 0000000000000000 [ 191.268144] x17: 0000000000000000 x16: 0000000000000000 [ 191.273443] x15: 0000000000000000 x14: 0000000100000026 [ 191.278742] x13: 0000000100000025 x12: ffff8a5f7fbe0df0 [ 191.284042] x11: 000000010000000b x10: 0000000000000040 [ 191.289341] x9 : 0000000000001100 x8 : ffff803f6a824fd8 [ 191.294640] x7 : ffff803f6a825098 x6 : 0000000000000001 [ 191.299939] x5 : ffff000000f0ffc0 x4 : 0000000000000000 [ 191.305238] x3 : ffff000028c00000 x2 : ffff803f652d7600 [ 191.310538] x1 : 0000000000000000 x0 : ffff000000f205f0 [ 191.315838] Process swapper/94 (pid: 0, stack limit = 0x00000000addfed5a) [ 191.322613] Call trace: [ 191.325055] ixgbe_ipsec_vf_clear+0x60/0xd0 [ixgbe] [ 191.329927] ixgbe_msg_task+0x2d0/0x1088 [ixgbe] [ 191.334536] ixgbe_msix_other+0x274/0x330 [ixgbe] [ 191.339233] __handle_irq_event_percpu+0x78/0x270 [ 191.343924] handle_irq_event_percpu+0x40/0x98 [ 191.348355] handle_irq_event+0x50/0xa8 [ 191.352180] 
handle_fasteoi_irq+0xbc/0x148 [ 191.356263] generic_handle_irq+0x34/0x50 [ 191.360259] __handle_domain_irq+0x68/0xc0 [ 191.364343] gic_handle_irq+0x84/0x180 [ 191.368079] el1_irq+0xe8/0x180 [ 191.371208] arch_cpu_idle+0x30/0x1a8 [ 191.374860] do_idle+0x1dc/0x2a0 [ 191.378077] cpu_startup_entry+0x2c/0x30 [ 191.381988] secondary_start_kernel+0x150/0x1e0 [ 191.386506] Code: 6b15003f 54000320 f1404a9f 54000060 (79400260) Fixes: eda0333a ("ixgbe: add VF IPsec management") Signed-off-by: Dann Frazier <dann.frazier@canonical.com> Acked-by: Shannon Nelson <snelson@pensando.io> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
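The fix boils down to bailing out early when the adapter has no ipsec state; a hedged sketch of the shape of the check, using the function name from the trace above with the body elided (the exact placement in the real function is not reproduced here):

    void ixgbe_ipsec_vf_clear(struct ixgbe_adapter *adapter, u32 vf)
    {
            struct ixgbe_ipsec *ipsec = adapter->ipsec;

            /* No ipsec structure is allocated when the hardware does not
             * support offload, so there is nothing to clear for this VF. */
            if (!ipsec)
                    return;

            /* ... remove the SA table entries owned by this VF ... */
    }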
-
Miguel Bernal Marin authored
Suggested-by: Tim Pepper <timothy.c.pepper@linux.intel.com> Signed-off-by: Miguel Bernal Marin <miguel.bernal.marin@linux.intel.com> Signed-off-by: Paul Menzel <pmenzel@molgen.mpg.de> Acked-by: Sasha Neftin <sasha.neftin@intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Gustavo A. R. Silva authored
One of the more common cases of allocation size calculations is finding the size of a structure that has a zero-sized array at the end, along with memory for some number of elements for that array. For example:

    struct foo {
        int stuff;
        struct boo entry[];
    };

    size = sizeof(struct foo) + count * sizeof(struct boo);
    instance = alloc(size, GFP_KERNEL);

Instead of leaving these open-coded and prone to type mistakes, we can now use the new struct_size() helper:

    size = struct_size(instance, entry, count);

This code was detected with the help of Coccinelle. Signed-off-by: "Gustavo A. R. Silva" <gustavo@embeddedor.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Christian Brauner authored
Tools such as vpnc try to flush routes when run inside network namespaces by writing 1 into /proc/sys/net/ipv4/route/flush. This currently does not work because flush is not enabled in non-initial network namespaces. Since routes are per network namespace it is safe to enable /proc/sys/net/ipv4/route/flush in there. Link: https://github.com/lxc/lxd/issues/4257 Signed-off-by: Christian Brauner <christian.brauner@ubuntu.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored (merge from git://git.open-mesh.org/linux-merge)
Simon Wunderlich says: ==================== This feature/cleanup patchset includes the following patches:
- bump version strings, by Simon Wunderlich
- fix includes for _MAX constants, atomic functions and fwdecls, by Sven Eckelmann (3 patches)
- shorten multicast tt/tvlv worker spinlock section, by Linus Luessing
- routable multicast preparations: implement MAC multicast filtering, by Linus Luessing (2 patches, David Miller's comments integrated)
- remove return value checks for debugfs_create, by Greg Kroah-Hartman
- add routable multicast optimizations, by Linus Luessing (2 patches)
==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Huazhong Tan says: ==================== net: hns3: some code optimizations & cleanups & bugfixes
[patch 01/12] fixes a TX timeout issue.
[patch 02/12 - 04/12] adds some patches related to the TM module.
[patch 05/12] fixes a compile warning.
[patch 06/12] adds Asym Pause support for autoneg.
[patch 07/12] optimizes the error handler for VF reset.
[patch 08/12] deals with the empty interrupt case.
[patch 09/12 - 12/12] adds some cleanups & optimizations.
==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Peng Li authored
If the CMDQ ring is full, hclge_cmd_send may return directly, but the IMP is still working and the HW pointer has changed, so the SW ring pointer no longer matches the HW pointer. This patch updates the SW pointer every time the ring is found full, so the next send can work normally while the IMP and HW keep working. Signed-off-by: Peng Li <lipeng321@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Yunsheng Lin authored
The HNS3_RXD_VLD_B bit has already been checked in hns3_add_frag or hns3_handle_rx_bd before calling hns3_handle_bdinfo, so when hns3_handle_bdinfo is called, the HNS3_RXD_VLD_B bit is always set, which makes the checking in hns3_handle_bdinfo unnecessary. This patch removes the RXD_VLD_B checking in hns3_handle_bdinfo. Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by: Peng Li <lipeng321@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jian Shen authored
This patch removes unused linkmode definition. Signed-off-by: Jian Shen <shenjian15@huawei.com> Signed-off-by: Peng Li <lipeng321@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Yufeng Mo authored
The frame column is based on rx_crc_errors and rx_frame_errors, so l3/l4 checksum errors should not be counted in rx_crc_errors. Instead, they should be counted in the ifconfig error column. Fixes: d3ec4ef6 ("net: hns3: refactor the statistics updating for netdev") Signed-off-by: Yufeng Mo <moyufeng@huawei.com> Signed-off-by: Peng Li <lipeng321@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Huazhong Tan authored
Since some MSI-X interrupts' status may be cleared by hardware, when the driver receives the interrupt, reading the HCLGE_VECTOR0_PF_OTHER_INT_STS_REG register will return an empty unknown interrupt. For this case, the irq handler should enable the vector0 interrupt. This patch also uses dev_info() instead of dev_dbg() in hclge_check_event_cause(), since this information is useful for normal usage. Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: Peng Li <lipeng321@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Huazhong Tan authored
The VF reset may fail for probabilistic reasons, such as a hardware reset timeout or a mailbox response timeout, so this patch re-schedules the reset task as long as the number of reset failures is under HCLGEVF_RESET_MAX_FAIL_CNT. This patch also adds a function, hclgevf_reset_err_handle(), to handle reset failures. Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: Peng Li <lipeng321@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Yonglong Liu authored
Configure auto-negotiation on both the local device and the link partner, with the local device's pause parameters set to rx on/tx off and the link partner's set to rx off/tx on. We expect the result to be:

Local device:
  Autonegotiate: on
  RX: on
  TX: off
  RX negotiated: on
  TX negotiated: off

Link partner:
  Autonegotiate: on
  RX: off
  TX: on
  RX negotiated: off
  TX negotiated: on

But actually, the result on both the local device and the link partner is:

  Autonegotiate: on
  RX: off
  TX: off
  RX negotiated: off
  TX negotiated: off

The root cause is that the supported flag has only Pause set; see genphy_config_advert():

    static int genphy_config_advert(struct phy_device *phydev)
    {
        ...
        linkmode_and(phydev->advertising, phydev->advertising, phydev->supported);
        ...
    }

The link partner's pause usage is rx off/tx on, so its advertising only sets the Asym_Pause bit, while supported only sets the Pause bit, so the result of linkmode_and() is rx off/tx off. This patch adds Asym_Pause to the supported flag to fix it. Signed-off-by: Yonglong Liu <liuyonglong@huawei.com> Signed-off-by: Peng Li <lipeng321@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
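Conceptually, the fix is making sure the asymmetric pause capability is present in the supported mask so linkmode_and() does not strip the partner's Asym_Pause bit; a minimal sketch using the generic linkmode helpers, where phydev->supported stands in for whichever supported mask the driver actually populates:

    #include <linux/linkmode.h>

    /* Let both pause bits survive "advertising &= supported". */
    linkmode_set_bit(ETHTOOL_LINK_MODE_Pause_BIT, phydev->supported);
    linkmode_set_bit(ETHTOOL_LINK_MODE_Asym_Pause_BIT, phydev->supported);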
-
Yonglong Liu authored
When building with -Wformat=2, there is a compiler warning like this:

    hclge_main.c:xxx:x: warning: format not a string literal and no format arguments [-Wformat-nonliteral]
       strs[i].desc);
       ^~~~

This patch adds the missing format parameter "%s" to snprintf() to fix it. Fixes: 46a3df9f ("Add HNS3 Acceleration Engine & Compatibility Layer Support") Signed-off-by: Yonglong Liu <liuyonglong@huawei.com> Signed-off-by: Peng Li <lipeng321@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Yunsheng Lin authored
When hdev->tx_sch_mode is HCLGE_FLAG_VNET_BASE_SCH_MODE, hclge_tm_schd_mode_vnet_base_cfg calls hclge_tm_pri_schd_mode_cfg with vport->vport_id as pri_id, which is used as an index into hdev->tm_info.tc_info; this causes an out-of-bounds access if vport_id is equal to or larger than HNAE3_MAX_TC. Also, the hardware only supports a maximum speed of HCLGE_ETHER_MAX_RATE. So this patch adds two checks for the above cases. Fixes: 84844054 ("net: hns3: Add support of TX Scheduler & Shaper to HNS3 driver") Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by: Peng Li <lipeng321@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
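Conceptually the added checks look something like the following sketch; the identifiers are taken from the description above and the placement in the hclge TM setup path is not reproduced here:

    /* Bound-check the index before touching hdev->tm_info.tc_info[]. */
    if (vport->vport_id >= HNAE3_MAX_TC)
            return -EINVAL;

    /* Refuse shaper speeds beyond what the hardware supports. */
    if (speed > HCLGE_ETHER_MAX_RATE)
            return -EINVAL;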
-
Yunsheng Lin authored
Currently, when there is shared buffer in the SSU (storage switching unit), the low waterline for the RX private buffer is too low to keep the hardware running: the hardware may have processed all the packets stored below the private buffer's low waterline before a new packet arrives, because the hardware only tells the peer to send packets again when the private buffer is under the low waterline. So this patch only allocates the RX private buffer if there is enough buffer according to the hardware user manual. This patch also reserves some buffer for reuse when the TC num is less than or equal to 2, and changes PAUSE_TRANS_GAP & HCLGE_NON_DCB_ADDITIONAL_BUF according to the hardware user manual. Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by: Peng Li <lipeng321@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Yunsheng Lin authored
Currently, when the TC num is one, DCB is disabled regardless of whether pfc_en is non-zero. This patch enables DCB if pfc_en is non-zero, even when the TC num is one. Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by: Peng Li <lipeng321@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Huazhong Tan authored
When changing the MTU or doing other operations that just call .reset_notify to do HNAE3_DOWN_CLIENT and HNAE3_UP_CLIENT, the netdev_tx_reset_queue() in hns3_clear_all_ring() is skipped, so dev_watchdog() may misdiagnose a TX timeout. This patch separates netdev_tx_reset_queue() from hns3_clear_all_ring(), and unifies hns3_clear_all_ring() and hns3_force_clear_all_ring() into one function, since they do similar things. Fixes: 3a30964a ("net: hns3: delay ring buffer clearing during reset") Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Vladimir Oltean says: ==================== Better PHYLINK compliance for SJA1105 DSA After discussing with Russell King, it appears this driver conflates a few things and is missing some checks needed for consistent operation. Changes in v2: - Removed redundant print in the phylink_validate callback (in 2/3). ==================== Acked-by: Russell King <rmk+kernel@armlinux.org.uk> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vladimir Oltean authored
We need a better way to signal this, perhaps in phylink_validate, but for now just print this error message as guidance for other people looking at this driver's code while trying to rework PHYLINK. Cc: Russell King <rmk+kernel@armlinux.org.uk> Signed-off-by: Vladimir Oltean <olteanv@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vladimir Oltean authored
Because PHYLINK is designed with PHYs in mind that can change their MII protocol, for correct operation it is necessary to ensure that the PHY interface mode stays the same (and otherwise to clear the supported bit mask, as required). Because this is just a hypothetical situation for now, we don't bother to check whether we could actually support the new PHY interface mode. Actually we could modify the xMII table, reset the switch and send an updated static configuration, but adding that would just be dead code. Cc: Russell King <rmk+kernel@armlinux.org.uk> Signed-off-by: Vladimir Oltean <olteanv@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vladimir Oltean authored
It has been pointed out that PHYLINK can call mac_config only to update the phy_interface_type and without knowing what the AN results are. Experimentally, when this was observed to happen, state->link was also unset, and therefore was used as a proxy to ignore this call. However it is also suggested that state->link is undefined for this callback and should not be relied upon. So let the previously-dead codepath for SPEED_UNKNOWN be called, and update the comment to make sure the MAC's behavior is sane. Cc: Russell King <rmk+kernel@armlinux.org.uk> Signed-off-by: Vladimir Oltean <olteanv@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Arnd Bergmann authored
On 32-bit architectures, putting an array of 256 u32 values on the stack uses more space than the warning limit: drivers/net/ethernet/huawei/hinic/hinic_main.c: In function 'hinic_rss_init': drivers/net/ethernet/huawei/hinic/hinic_main.c:286:1: error: the frame size of 1068 bytes is larger than 1024 bytes [-Werror=frame-larger-than=] I considered changing the code to use u8 values here, since that's all the hardware supports, but dynamically allocating the array is a more isolated fix here. Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: David S. Miller <davem@davemloft.net>
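The shape of the fix is moving the 256-entry indirection table off the stack and allocating it dynamically; a hedged sketch of that pattern, with the variable name and error handling assumed rather than taken from the hinic patch:

    u32 *indir_tbl;

    /* Was effectively "u32 indir_tbl[256];" on the stack. */
    indir_tbl = kcalloc(256, sizeof(*indir_tbl), GFP_KERNEL);
    if (!indir_tbl)
            return -ENOMEM;

    /* ... fill the RSS indirection table and program it into the NIC ... */

    kfree(indir_tbl);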
-
David S. Miller authored
Jose Abreu says: ==================== net: stmmac: 10GbE using XGMAC
Support for 10Gb Link using XGMAC core plus some performance tweaks. Tested in a PCI based setup.

iperf3 TCP results: TSO ON, MTU=1500, TX Queues = 1, RX Queues = 1, Flow Control ON, Pinned CPU (-A), Zero-Copy (-Z)

[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-600.00 sec   643 GBytes  9.21 Gbits/sec    1   sender
[  5]   0.00-600.00 sec   643 GBytes  9.21 Gbits/sec        receiver

==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jose Abreu authored
We support more speeds now. Update the Kconfig entry. Signed-off-by: Jose Abreu <joabreu@synopsys.com> Cc: Joao Pinto <jpinto@synopsys.com> Cc: David S. Miller <davem@davemloft.net> Cc: Giuseppe Cavallaro <peppe.cavallaro@st.com> Cc: Alexandre Torgue <alexandre.torgue@st.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jose Abreu authored
Only disable the interrupts if RX NAPI gets to be scheduled. Also, schedule the TX NAPI only when the interrupts are disabled. Signed-off-by: Jose Abreu <joabreu@synopsys.com> Cc: Joao Pinto <jpinto@synopsys.com> Cc: David S. Miller <davem@davemloft.net> Cc: Giuseppe Cavallaro <peppe.cavallaro@st.com> Cc: Alexandre Torgue <alexandre.torgue@st.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jose Abreu authored
Update the RX Tail Pointer to the last available SKB entry. Signed-off-by: Jose Abreu <joabreu@synopsys.com> Cc: Joao Pinto <jpinto@synopsys.com> Cc: David S. Miller <davem@davemloft.net> Cc: Giuseppe Cavallaro <peppe.cavallaro@st.com> Cc: Alexandre Torgue <alexandre.torgue@st.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jose Abreu authored
Currently, stmmac only supports 32-bit addressing for SKBs. Enable support for up to 48-bit addressing in the XGMAC core. This avoids the use of bounce buffers and increases performance. Changes from v1: - Fallback to 32 bits on failure (Andrew) Signed-off-by: Jose Abreu <joabreu@synopsys.com> Cc: Joao Pinto <jpinto@synopsys.com> Cc: David S. Miller <davem@davemloft.net> Cc: Giuseppe Cavallaro <peppe.cavallaro@st.com> Cc: Alexandre Torgue <alexandre.torgue@st.com> Cc: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
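The usual way to express this is via the DMA mask, trying the wider mask first and falling back to 32 bits if the platform rejects it; a minimal sketch of that pattern, not the literal stmmac code:

    /* Try 48-bit DMA addressing first; fall back to 32 bits if unsupported. */
    if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(48))) {
            dev_info(dev, "48-bit DMA not available, using 32-bit\n");
            if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32)))
                    return -EIO;
    }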
-
Jose Abreu authored
This is a performance killer, and anyway the interrupts are already being disabled by RX NAPI, so there is no need to disable them again. Signed-off-by: Jose Abreu <joabreu@synopsys.com> Cc: Joao Pinto <jpinto@synopsys.com> Cc: David S. Miller <davem@davemloft.net> Cc: Giuseppe Cavallaro <peppe.cavallaro@st.com> Cc: Alexandre Torgue <alexandre.torgue@st.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jose Abreu authored
XGMAC supports the following speeds:
- 10G XGMII
- 5G XGMII
- 2.5G XGMII
- 2.5G GMII
- 1G GMII
- 100M MII
- 10M MII
Add them to the stmmac driver. Signed-off-by: Jose Abreu <joabreu@synopsys.com> Cc: Joao Pinto <jpinto@synopsys.com> Cc: David S. Miller <davem@davemloft.net> Cc: Giuseppe Cavallaro <peppe.cavallaro@st.com> Cc: Alexandre Torgue <alexandre.torgue@st.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-