- 27 Aug, 2019 8 commits
-
Michal Swiatkowski authored
Check num_queue_pairs to avoid access to unallocated fields of vsi->tx_rings/vsi->rx_rings. Without this validation we can set vsi->alloc_txq/vsi->alloc_rxq to a value smaller than ICE_MAX_BASE_QS_PER_VF and then send this command with num_queue_pairs greater than vsi->alloc_txq/vsi->alloc_rxq, which leads to access to unallocated memory. In a VF VSI, alloc_txq and alloc_rxq should be the same; take the minimum of the two because it reads better. Also add validation for the ring_len param: it should be greater than 32 and a multiple of 32, as an incorrect value hangs traffic on the PF. Signed-off-by: Michal Swiatkowski <michal.swiatkowski@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
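A rough sketch of the checks described (qci/qpi stand in for the virtchnl request structs; surrounding context is assumed):

	/* Sketch only: reject requests that would index past the allocated
	 * ring arrays. In a VF VSI alloc_txq == alloc_rxq; the min() is
	 * just for readability.
	 */
	if (qci->num_queue_pairs > ICE_MAX_BASE_QS_PER_VF ||
	    qci->num_queue_pairs > min_t(u16, vsi->alloc_txq, vsi->alloc_rxq))
		goto error_param;

	/* ring_len must be a non-zero multiple of 32 */
	if (!qpi->txq.ring_len || qpi->txq.ring_len % 32)
		goto error_param;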
-
Akeem G Abodunrin authored
In case of MDD events on a VF, don't clog the kernel log with unlimited "VF 0 has had 1018 MDD events since last boot" messages; limit logging to 30 events, a threshold based on experimentation: sending a malicious packet once and observing how many events were reported before the device stopped seeing MDD events. Also remove the defunct macro "ICE_DFLT_NUM_MDD_EVENTS_ALLOWED", which tracked the number of MDD events allowed before disabling the interface. Signed-off-by: Akeem G Abodunrin <akeem.g.abodunrin@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
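Illustratively, the cap might look like this (the threshold of 30 comes from the commit message; field and variable names are assumptions):

	#define ICE_MDD_EVENTS_THRESHOLD	30

	/* stop logging once the per-VF event count exceeds the threshold */
	if (vf->num_mdd_events <= ICE_MDD_EVENTS_THRESHOLD)
		dev_info(dev, "VF %d has had %u MDD events since last boot\n",
			 vf->vf_id, vf->num_mdd_events);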
-
Krzysztof Kazimierczak authored
When a VSI is accessed inside the ice_for_each_vsi macro in the rebuild path (ice_vsi_rebuild_all() and ice_vsi_replay_all()), it is referred to as pf->vsi[i]. Introduce local variables to improve readability. Signed-off-by: Krzysztof Kazimierczak <krzysztof.kazimierczak@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Jesse Brandeburg authored
Add some verbose debugging for dyndbg to help us when we are having issues with link and/or PHY. While there, shorten some strings used by locals that were causing long line wrapping. Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Anirudh Venkataramanan authored
1. ndo_open and ndo_stop are implemented by ice_open and ice_stop respectively. When enabling/disabling VSIs, just call ice_open/ice_stop instead of ndo_open/ndo_stop.
2. Rework the logic around rtnl_lock/rtnl_unlock.
3. In ice_ena_vsi, remove an unnecessary stack variable and return 0 instead of err when __ICE_NEEDS_RESTART is not set.
Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Victor Raj authored
There was a bug in the previous code, which never traversed all the children to get the first node of the requested layer. Add a sibling head pointer to point to the first node of each layer, per TC. This makes traversal easier and quicker, and also removes the recursion. Signed-off-by: Victor Raj <victor.raj@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
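With a per-TC, per-layer sibling head, finding the first node of a layer becomes a direct lookup plus a flat sibling walk, roughly as below (field names are assumptions):

	/* sketch: no recursion needed to reach the requested layer */
	struct ice_sched_node *node = pi->sib_head[tc][layer];

	while (node) {
		/* visit node */
		node = node->sibling;
	}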
-
Usha Ketineni authored
This patch fixes the issue where port and PFC statistics counters increment at the wrong port with 4x25G cards. Read the GLPRT port registers using the lport parameter instead of pf_id when updating the statistics; otherwise the pf_ids are flipped for ports 2 and 3 when read from the HW register PF_FUNC_RID, which is expected per the hardware specification. Signed-off-by: Usha Ketineni <usha.k.ketineni@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Jakub Kicinski authored
Add MODULE_FIRMWARE entries for AMDA0058 boards. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 26 Aug, 2019 15 commits
-
Heiner Kallweit authored
Move the call to dma_sync_single_for_cpu to after the call to napi_alloc_skb. This avoids having called dma_sync_single_for_cpu without a matching hand-back of the buffer to the device in case the memory allocation fails. Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
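The reordered RX path looks roughly like this (device and buffer variables are placeholders):

	skb = napi_alloc_skb(napi, pkt_size);
	if (!skb) {
		/* allocation failed: the buffer was never synced for the
		 * CPU, so it still belongs to the device
		 */
		dev->stats.rx_dropped++;
		goto release_descriptor;
	}
	dma_sync_single_for_cpu(d, addr, pkt_size, DMA_FROM_DEVICE);
	skb_put_data(skb, rx_buf, pkt_size);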
-
David S. Miller authored
Vlad Buslov says: ==================== Refactor cls hardware offload API to support rtnl-independent drivers

Currently, all cls API hardware offload driver callbacks require the caller to hold the rtnl lock when calling them. This patch set introduces a new API that allows drivers to register callbacks that do not depend on the rtnl lock, and allows unlocked classifiers to offload filters without obtaining the rtnl lock first, which is intended to allow offloading tc rules in parallel.

Recently, the new rtnl registration flag RTNL_FLAG_DOIT_UNLOCKED was added. TC rule update handlers (RTM_NEWTFILTER, RTM_DELTFILTER, etc.) are already registered with this flag and only take the rtnl lock when a qdisc or classifier requires it. Classifiers can indicate that their ops callbacks don't require the caller to hold the rtnl lock by setting the TCF_PROTO_OPS_DOIT_UNLOCKED flag. An unlocked implementation of the flower classifier is now upstreamed; however, it still obtains the rtnl lock before calling the hardware offloads API.

Implement the following cls API changes:

- Introduce a new "unlocked_driver_cb" flag in struct flow_block_offload to allow registering and unregistering block hardware offload callbacks that do not require the caller to hold the rtnl lock. Drivers that don't require users of their tc offload callbacks to hold the rtnl lock set the flag to true on block bind/unbind. Internally, tcf_block is extended with an additional lockeddevcnt counter that counts the number of rtnl-requiring devices the block is bound to. When this counter is zero, the tc_setup_cb_*() functions execute callbacks without obtaining the rtnl lock.

- Extend the cls API single hardware rule update function tc_setup_cb_call() with tc_setup_cb_add(), tc_setup_cb_replace(), tc_setup_cb_destroy() and tc_setup_cb_reoffload(). These new APIs are needed to move management of the block offload counter, the filter in-hardware counter and the flag from classifier implementations into the cls API, which is now responsible for managing them in a concurrency-safe manner. Access to cb_list from callback execution code is synchronized by taking the new 'cb_lock' rw_semaphore in read mode, which allows executing callbacks in parallel but excludes any modification of the data by register/unregister code. The tcf_block offloads counter is changed to an atomic integer so it can be updated concurrently.

- Extend classifier ops with new ops->hw_add() and ops->hw_del() callbacks, which notify unlocked classifiers when a filter is successfully added to or deleted from hardware, without releasing cb_lock. This is necessary to update classifier state atomically with callback list traversal and the updating of all relevant counters, and allows unlocked classifiers to synchronize with concurrent reoffload without requiring any changes to driver callback API implementations.

The new tc flow_action infrastructure is also modified to allow its users to execute without rtnl lock protection. tc_setup_flow_action() is modified to conditionally obtain the rtnl lock before accessing action state. Action data that is accessed by reference is either copied or reference-counted to prevent a concurrent action overwrite from deallocating it. A new function, tc_cleanup_flow_action(), is introduced to clean up/release all such data obtained by tc_setup_flow_action().

The flower classifier (the only unlocked classifier at the moment) is modified to use the new cls hardware offloads API and no longer obtains the rtnl lock before calling it.
==================== Acked-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vlad Buslov authored
Don't manually take rtnl lock in flower classifier before calling cls hardware offloads API. Instead, pass rtnl lock status via 'rtnl_held' parameter. Signed-off-by: Vlad Buslov <vladbu@mellanox.com> Acked-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vlad Buslov authored
In order to remove the dependency on the rtnl lock, modify tc_setup_flow_action() to copy the tunnel info, instead of just saving a pointer to the tunnel_key action's tunnel info. This is necessary to prevent a concurrent action overwrite from releasing the tunnel info while it is being used by an rtnl-unlocked driver. Implement a helper, tcf_tunnel_info_copy(), that copies the tunnel info with all its options to a dynamically allocated memory block. Modify tc_cleanup_flow_action() to free the dynamically allocated tunnel info. Signed-off-by: Vlad Buslov <vladbu@mellanox.com> Acked-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
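A sketch of the helper as described (the shape follows directly from the commit message; one allocation covers the struct and its trailing options):

	static struct ip_tunnel_info *
	tcf_tunnel_info_copy(const struct tc_action *act)
	{
		struct ip_tunnel_info *tun = tcf_tunnel_info(act);

		if (tun) {
			size_t tun_size = sizeof(*tun) + tun->options_len;

			/* duplicate tunnel info and its options so a
			 * concurrent action overwrite can't free it from
			 * under an rtnl-unlocked driver
			 */
			return kmemdup(tun, tun_size, GFP_KERNEL);
		}
		return NULL;
	}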
-
Vlad Buslov authored
In order to remove the dependency on the rtnl lock when calling the hardware offload API, take a reference to the action mirred dev when initializing the flow_action structure in tc_setup_flow_action(). Implement tc_cleanup_flow_action() and use it to release the device after the hardware offload API is done using it. Signed-off-by: Vlad Buslov <vladbu@mellanox.com> Acked-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vlad Buslov authored
In order to allow using the new flow_action infrastructure from unlocked classifiers, modify tc_setup_flow_action() to accept a new 'rtnl_held' argument. Take the rtnl lock before accessing tc_action data. This is necessary to protect against concurrent action replace. Signed-off-by: Vlad Buslov <vladbu@mellanox.com> Acked-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vlad Buslov authored
In order to remove the dependency on the rtnl lock from the offloads code of classifiers, take the rtnl lock conditionally before executing driver callbacks: only obtain it if the block is bound to devices that require it. Block bind/unbind code is rtnl-locked and obtains block->cb_lock while holding the rtnl lock. Obtain the locks in the same order in the tc_setup_cb_*() functions to prevent deadlock. Signed-off-by: Vlad Buslov <vladbu@mellanox.com> Acked-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
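The lock-ordering logic, sketched (field names per this series; the retry handles a rtnl-requiring driver binding while we slept):

	bool take_rtnl = READ_ONCE(block->lockeddevcnt) && !rtnl_held;

retry:
	if (take_rtnl)
		rtnl_lock();
	down_read(&block->cb_lock);
	/* bind code takes cb_lock while holding rtnl, so callbacks must
	 * obtain the locks in the same order here to avoid deadlock
	 */
	if (!rtnl_held && !take_rtnl && block->lockeddevcnt) {
		up_read(&block->cb_lock);
		take_rtnl = true;
		goto retry;
	}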
-
Vlad Buslov authored
Extend struct flow_block_offload with an "unlocked_driver_cb" flag to allow registering and unregistering block hardware offload callbacks that do not require the caller to hold the rtnl lock. Extend tcf_block with an additional lockeddevcnt counter that is incremented for each non-unlocked driver callback attached to a device. This counter is necessary to conditionally obtain the rtnl lock before calling hardware callbacks in the following patches. Register the mlx5 tc block offload callbacks as "unlocked". Signed-off-by: Vlad Buslov <vladbu@mellanox.com> Acked-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
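On the driver side, opting out of rtnl might look like this at block bind time (a sketch modeled on mlx5; the helper and list names are illustrative):

	case TC_SETUP_BLOCK: {
		struct flow_block_offload *f = type_data;

		f->unlocked_driver_cb = true;	/* our cb doesn't need rtnl */
		return flow_block_cb_setup_simple(f, &block_cb_list,
						  mlx5e_setup_tc_block_cb,
						  priv, priv, true);
	}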
-
Vlad Buslov authored
To remove the dependency on the rtnl lock, extend classifier ops with new ops->hw_add() and ops->hw_del() callbacks. Call them from the cls API, while holding cb_lock, every time a filter is successfully added to or deleted from hardware. Implement the new API in the flower classifier. Use it to manage the hw_filters list under cb_lock protection, instead of relying on the rtnl lock to synchronize with concurrent fl_reoffload() calls. Signed-off-by: Vlad Buslov <vladbu@mellanox.com> Acked-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
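The new ops, as described, are plain notification hooks:

	struct tcf_proto_ops {
		/* ... existing callbacks ... */
		void	(*hw_add)(struct tcf_proto *tp, void *type_data);
		void	(*hw_del)(struct tcf_proto *tp, void *type_data);
	};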
-
Vlad Buslov authored
Without rtnl lock protection, filters can no longer safely manage the block offloads counter themselves. Refactor the cls API to protect block offloadcnt with tcf_block->cb_lock, which is already used to protect the driver callback list and the nooffloaddevcnt counter. The counter can be modified concurrently by the new functions that execute block callbacks (which is safe with the previous patch that changed its type to atomic_t); however, the block bind/unbind code that checks the counter value takes cb_lock in write mode to exclude any concurrent modifications. This approach prevents race conditions between bind/unbind and callback execution code, but allows concurrency on the tc rule update path. Move block offload counter, filter in-hardware counter and filter flags management from classifiers into the cls hardware offloads API. Make tcf_block_offload_{inc|dec}() and tc_cls_offload_cnt_update() private to the cls API. Implement the following new cls API to be used instead:

tc_setup_cb_add() - non-destructive filter add. If a filter that wasn't already in hardware is successfully offloaded, increment the block offloads counter and set the filter's in-hardware counter and flag. On failure, the previously offloaded filter is considered intact and the offloads counter is not decremented.

tc_setup_cb_replace() - destructive filter replace. Release the existing filter's block offload counter and reset its in-hardware counter and flag. Set the new filter's in-hardware counter and flag. On failure, the previously offloaded filter is considered destroyed and the offloads counter is decremented.

tc_setup_cb_destroy() - filter destroy. Unconditionally decrement the block offloads counter.

tc_setup_cb_reoffload() - reoffload a filter to a single cb. Execute cb() and call tc_cls_offload_cnt_update() if cb() didn't return an error.

Refactor all offload-capable classifiers to atomically offload filters to hardware, change the block offload counter, and set the filter's in-hardware counter and flag by means of the new cls API functions. Signed-off-by: Vlad Buslov <vladbu@mellanox.com> Acked-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
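From a classifier's perspective, the add path then reduces to roughly the following (a sketch; the exact parameter order may differ from the final API):

	err = tc_setup_cb_add(block, tp, TC_SETUP_CLSFLOWER, &cls_flower,
			      skip_sw, &f->flags, &f->in_hw_count, rtnl_held);
	if (err < 0)
		goto errout;	/* prior offload state is left intact */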
-
Vlad Buslov authored
As a preparation for running proto ops functions without rtnl lock, change offload counter type to atomic. This is necessary to allow updating the counter by multiple concurrent users when offloading filters to hardware from unlocked classifiers. Signed-off-by: Vlad Buslov <vladbu@mellanox.com> Acked-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vlad Buslov authored
In order to remove the dependency on the rtnl lock, extend tcf_block with a 'cb_lock' rwsem and use it to protect flow_block->cb_list and related counters from concurrent modification. The lock is taken in read mode for read-only traversal of cb_list in tc_setup_cb_call(), and in write mode in all other cases. This approach ensures that:

- cb_list is not changed concurrently while filters are being offloaded on the block.

- block->nooffloaddevcnt is checked while holding the lock in read mode, but is only changed by bind/unbind code holding cb_lock in write mode, preventing concurrent modification.

Signed-off-by: Vlad Buslov <vladbu@mellanox.com> Acked-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
YueHaibing authored
Fixes gcc '-Wunused-but-set-variable' warning: drivers/net/ethernet/cirrus/cs89x0.c: In function 'cs89x0_platform_probe': drivers/net/ethernet/cirrus/cs89x0.c:1847:20: warning: variable 'lp' set but not used [-Wunused-but-set-variable] Reported-by: Hulk Robot <hulkci@huawei.com> Fixes: 6751edeb ("cirrus: cs89x0: Use managed interfaces") Signed-off-by: YueHaibing <yuehaibing@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
This reverts commit ee641b0c. It is actually not clear whether this register read is needed for its HW side effects or not. Signed-off-by: David S. Miller <davem@davemloft.net>
-
Mao Wenan authored
Fixes gcc '-Wunused-but-set-variable' warning: drivers/net/ethernet/mediatek/mtk_eth_soc.c: In function mtk_handle_irq: drivers/net/ethernet/mediatek/mtk_eth_soc.c:1951:6: warning: variable status set but not used [-Wunused-but-set-variable] Fixes: 296c9120 ("net: ethernet: mediatek: Add MT7628/88 SoC support") Signed-off-by: Mao Wenan <maowenan@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 25 Aug, 2019 1 commit
-
Andrew Lunn authored
SFPs can report two different power values, the transmit power and the receive power. Add labels to make it clear which is which. Also add labels to the other sensors: VCC power supply, bias, and module temperature. sensors(1) now shows:

sff2-isa-0000
Adapter: ISA adapter
VCC:          +3.23 V
temperature:  +33.4 C
TX_power:    276.00 uW
RX_power:     20.00 uW
bias:         +0.01 A

Signed-off-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 24 Aug, 2019 13 commits
-
git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/next-queue
David S. Miller authored
Jeff Kirsher says: ==================== 100GbE Intel Wired LAN Driver Updates 2019-08-23

This series contains updates to the ice driver only.

Dave adds logic for the necessary bits to be set in the VSI context for the PF_VSI, and in the TX descriptors for control packets egressing the PF_VSI. He also updates the logic to detect both DCBx and LLDP states in the firmware engine, to account for situations where DCBx is enabled and LLDP is disabled, and fixes the driver to treat the DCBx state "NOT_STARTED" as a valid state rather than automatically assuming "is_fw_lldp" is true. Since the "enable-fw-lldp" flag was confusing and cumbersome, change it to "fw-lldp-agent" with a value of on or off, to help clarify whether the LLDP agent is running or not.

Brett fixes an issue where synchronize_irq() was being called from the host on behalf of VFs, which should not be done.

Michal fixes an issue when rebuilding the DCBx configuration while in IEEE mode versus CEE mode; add a check before copying the configuration value to ensure we are only in CEE mode.

Jake fixes the PF to reject any VF request to set up head writeback, since that support has been deprecated.

Mitch adds a check to ensure the VF is active before logging an error that a message could not be sent to a particular VF.

Chinh updates the driver to use "topology" mode when checking the PHY for status, since this mode provides the current module type available, and stops the driver from clearing the auto_fec_enable bit, which was blocking a user from forcing non-spec-compliant FEC configurations.

Amruth refactors the code to first check, then assign, in the virtual channel space.

Bruce updates the driver to actually refresh the stats when a user runs 'ethtool -S <iface>', instead of providing a snapshot of stats that may be a second old.

Akeem fixes up the adding/removing of VSI MAC filters for VFs, so that VFs cannot add/remove a filter from another VSI. We now track the number of filters added right from when VF resources get allocated, and won't get into a MAC filter mismatch issue in the switch.
==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Julian Wiedmann says: ==================== s390/qeth: updates 2019-08-23 please apply one more round of qeth patches. These implement support for a bunch of TX-related features - namely TX NAPI, BQL and xmit_more. Note that this includes two qdio patches which lay the necessary groundwork, and have been acked by Vasily. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Julian Wiedmann authored
IQD devices offer limited support for bulking: all frames in a TX buffer need to have the same target. qeth_iqd_may_bulk() implements this constraint, and allows us to defer the TX doorbell until (a) the buffer is full (since each buffer needs its own doorbell), (b) the entire TX queue is full, or (c) we reached the BQL limit. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Julian Wiedmann authored
Each TX buffer may contain multiple skbs. So just accumulate the sent byte count in the buffer struct, and later use the same count when completing the buffer. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Julian Wiedmann authored
This allows the stack to bulk-free our TX-completed skbs. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
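BQL boils down to the standard pairing of the two accounting calls (a sketch; queue and counter variables are placeholders):

	/* xmit path: report bytes handed to the device */
	netdev_tx_sent_queue(txq, skb->len);

	/* TX completion path: report what actually finished */
	netdev_tx_completed_queue(txq, pkts_compl, bytes_compl);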
-
Julian Wiedmann authored
Due to their large MTU and potentially low utilization of TX buffers, IQD devices in particular require fast TX recycling. This makes them a prime candidate for a TX NAPI path in qeth. qeth_tx_poll() uses the recently introduced qdio_inspect_queue() helper to poll the TX queue for completed buffers. To avoid hogging the CPU for too long, we yield to the stack after completing an entire queue's worth of buffers. While IQD is expected to transfer its buffers synchronously (and thus doesn't support TX interrupts), a timer covers for the odd case where a TX buffer doesn't complete synchronously. Currently this timer should only ever fire for (1) the mcast queue, (2) the occasional race, where the NAPI poll code observes an update to queue->used_buffers while the TX doorbell hasn't been issued yet. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
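The poll routine's shape, heavily sketched (only qdio_inspect_queue() is named by the commit; more_tx_completions() is a stand-in for the real buffer scan, and everything else is an assumption):

	static int qeth_tx_poll(struct napi_struct *napi, int budget)
	{
		unsigned int work_done = 0;

		/* complete finished TX buffers, found via qdio_inspect_queue() */
		while (more_tx_completions(napi, &work_done)) {
			if (work_done >= QDIO_MAX_BUFFERS_PER_Q)
				return budget;	/* done a queue's worth: yield */
		}
		napi_complete(napi);
		return 0;
	}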
-
Julian Wiedmann authored
This consolidates the SW statistics code, and improves it to (1) account for the header overhead of each segment on a TSO skb, (2) count dangling packets as in-error (during eg. shutdown), and (3) only count offloads when the skb was successfully transmitted. We also count each segment of a TSO skb as one packet - except for tx_dropped, to be consistent with dev->tx_dropped. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Julian Wiedmann authored
If a driver wants to use the new Output Queue poll code, then the qdio layer must disable its internal Queue scanning. Let the driver select this mode by passing a special scan_threshold of 0. As the scan_threshold is the same for all Output Queues, also move it into the main qdio_irq struct. This allows for fast opt-out checking, a driver is expected to operate either _all_ or none of its Output Queues in polling mode. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Acked-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Julian Wiedmann authored
While commit d36deae7 ("qdio: extend API to allow polling") enhanced the qdio layer so that drivers can poll their Input Queues, we don't have the corresponding infrastructure for Output Queues yet. Factor out a helper that scans a single QDIO Queue, so that qeth can implement TX NAPI on top of it. While doing so, remove the duplicated tracking of the next-to-scan index (q->first_to_check vs q->first_to_kick) in this code path. qdio_handle_aobs() needs to move slightly upwards in the code hierarchy, so that it's still called from the polling path. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Acked-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Michael Chan authored
A recent commit added logic to determine the appropriate statistics block size to allocate, storing the size in bp->hw_ring_stats_size. But if the firmware spec is older than 1.6.0, the field is never initialized and stays 0. This causes the allocation to fail with size 0 and bnxt_open() to abort. Fix it by always initializing bp->hw_ring_stats_size to the legacy default size value. Fixes: 4e748506 ("bnxt_en: Allocate the larger per-ring statistics block for 57500 chips.") Reported-by: Jonathan Lemon <jonathan.lemon@gmail.com> Signed-off-by: Michael Chan <michael.chan@broadcom.com> Tested-by: Jonathan Lemon <jonathan.lemon@gmail.com> Acked-by: Jonathan Lemon <jonathan.lemon@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
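The fix amounts to seeding the field with the legacy size before any firmware query (a sketch; the struct name is an assumption):

	/* always valid, even on firmware older than 1.6.0, which never
	 * reports a stats block size
	 */
	bp->hw_ring_stats_size = sizeof(struct ctx_hw_stats);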
-
Markus Elfring authored
The consume_skb() function also performs input parameter validation, so the test around the call is not needed. This issue was detected by using the Coccinelle software. Signed-off-by: Markus Elfring <elfring@users.sourceforge.net> Acked-by: Song Liu <songliubraving@fb.com> Signed-off-by: David S. Miller <davem@davemloft.net>
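The simplification in question, as a diff-style sketch:

	-	if (skb)
	-		consume_skb(skb);
	+	consume_skb(skb);	/* consume_skb() tolerates NULL itself */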
-
YueHaibing authored
drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h:542:30: warning: meta_data_key_info defined but not used [-Wunused-const-variable=] drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h:553:30: warning: tuple_key_info defined but not used [-Wunused-const-variable=] The two variables are only used in hclge_main.c, so just move their definitions over there. Reported-by: Hulk Robot <hulkci@huawei.com> Signed-off-by: YueHaibing <yuehaibing@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
xiaolinkui authored
This is an unlikely case, so using unlikely() on it seems logical. Signed-off-by: xiaolinkui <xiaolinkui@kylinos.cn> Signed-off-by: David S. Miller <davem@davemloft.net>
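The pattern being applied:

	/* annotate the rarely-taken branch so the compiler favors the hot path */
	if (unlikely(err))
		goto err_out;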
-
- 23 Aug, 2019 3 commits
-
Heiner Kallweit authored
As reported by Aaro, this patch causes network problems on the MIPS Loongson platform. Therefore revert it. Fixes: f072218c ("r8169: remove not needed call to dma_sync_single_for_device") Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Reported-by: Aaro Koskinen <aaro.koskinen@iki.fi> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ido Schimmel authored
Timestamps are currently communicated to user space as 'struct timespec', which is not considered y2038 safe since it uses a 32-bit signed value for seconds. Fix this while the API is still not part of any official kernel release by using 64-bit nanoseconds timestamps instead. Fixes: ca30707d ("drop_monitor: Add packet alert mode") Fixes: 5e58109b ("drop_monitor: Add support for packet alert mode for hardware drops") Signed-off-by: Ido Schimmel <idosch@mellanox.com> Acked-by: Neil Horman <nhorman@tuxdriver.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Dag Moxnes authored
Add the RDMA cookie and RX timestamp to the usercopy whitelist. After the introduction of hardened usercopy whitelisting (https://lwn.net/Articles/727322/), a warning is displayed when the RDMA cookie or RX timestamp is copied to userspace:

kernel: WARNING: CPU: 3 PID: 5750 at mm/usercopy.c:81 usercopy_warn+0x8e/0xa6
[...]
kernel: Call Trace:
kernel:  __check_heap_object+0xb8/0x11b
kernel:  __check_object_size+0xe3/0x1bc
kernel:  put_cmsg+0x95/0x115
kernel:  rds_recvmsg+0x43d/0x620 [rds]
kernel:  sock_recvmsg+0x43/0x4a
kernel:  ___sys_recvmsg+0xda/0x1e6
kernel:  ? __handle_mm_fault+0xcae/0xf79
kernel:  __sys_recvmsg+0x51/0x8a
kernel:  SyS_recvmsg+0x12/0x1c
kernel:  do_syscall_64+0x79/0x1ae

When the whitelisting feature was introduced, the memory for the RDMA cookie and RX timestamp in RDS was not added to the whitelist, causing the warning above. Signed-off-by: Dag Moxnes <dag.moxnes@oracle.com> Tested-by: Jenny <jenny.x.xu@oracle.com> Acked-by: Santosh Shilimkar <santosh.shilimkar@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
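The usual remedy for this class of warning is to declare the user-copied region when creating the slab cache, roughly as below (struct and field names are illustrative, and the sketch assumes the whitelisted fields are grouped contiguously):

	cache = kmem_cache_create_usercopy("rds_inc",
					   sizeof(struct rds_incoming), 0,
					   SLAB_HWCACHE_ALIGN,
					   offsetof(struct rds_incoming, i_usercopy),
					   sizeof(struct rds_inc_usercopy),
					   NULL);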
-