- 26 Nov, 2021 29 commits
-
-
wengjianfeng authored
Combine two judgments that return the same value. Signed-off-by: wengjianfeng <wengjianfeng@yulong.com> Reviewed-by: Krzysztof Kozlowski <krzysztof.kozlowski@canonical.com> Link: https://lore.kernel.org/r/20211126013130.27112-1-samirweng1979@163.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Jakub Kicinski authored
Eric Dumazet says: ==================== net: small csum optimizations After the recent x86 csum_partial() optimizations, kernel profiles make it easier to see the cost of add/adc operations that could be avoided by feeding a non-zero third argument to csum_partial(). ==================== Link: https://lore.kernel.org/r/20211124202446.2917972-1-eric.dumazet@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Eric Dumazet authored
Remove one pair of add/adc instructions and their dependency on the carry flag. We can leverage the third argument to csum_partial(): X = csum_block_sub(X, csum_partial(start, len, 0), 0); --> X = csum_block_add(X, ~csum_partial(start, len, 0), 0); --> X = ~csum_partial(start, len, ~X); Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
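A minimal user-space sketch of the ones' complement identity behind this change (and the next csum patch below). The helpers are simplified stand-ins for the kernel's csum_add()/csum_sub()/csum_partial(), written only to show that folding ~X into the third argument gives the same result as a separate subtraction:

#include <stdint.h>
#include <stdio.h>

/* ones' complement add with end-around carry, like the kernel's csum_add() */
static uint32_t csum_add(uint32_t a, uint32_t b)
{
	uint32_t res = a + b;

	return res + (res < b);
}

static uint32_t csum_sub(uint32_t a, uint32_t b)
{
	return csum_add(a, ~b);
}

/* toy csum_partial(): sums 16-bit words into a seeded 32-bit accumulator */
static uint32_t csum_partial(const uint16_t *buf, int words, uint32_t seed)
{
	uint32_t sum = seed;

	while (words--)
		sum = csum_add(sum, *buf++);
	return sum;
}

int main(void)
{
	uint16_t data[] = { 0x4500, 0x0054, 0xbeef, 0x4000, 0x4001 };
	uint32_t X = 0x12345678;
	/* old form: the separate subtraction costs an extra add/adc pair */
	uint32_t old_way = csum_sub(X, csum_partial(data, 5, 0));
	/* new form: fold ~X in as the third argument, then complement */
	uint32_t new_way = ~csum_partial(data, 5, ~X);

	printf("old=%08x new=%08x\n", old_way, new_way);	/* both print 1232d234 */
	return 0;
}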
-
Eric Dumazet authored
We can leverage the third argument to csum_partial(): X = csum_sub(X, csum_partial(start, len, 0)); --> X = csum_add(X, ~csum_partial(start, len, 0)); --> X = ~csum_partial(start, len, ~X); This removes one add/adc pair and its dependency on the carry flag. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Xin Long authored
Currently, the probe timer is reused as the raise timer when PLPMTUD is in the Search Complete state. raise_count was introduced to count how many times the probe timer has timed out; when raise_count reaches 30, the raise timer handler is triggered. Throughout this, the timer keeps timing out every probe_interval, which is wasteful in the Search Complete state, as the raise timer only needs to fire after 30 * probe_interval. Since the raise timer and probe timer are never used at the same time, there is no need to keep the probe timer 'alive' in the Search Complete state. This patch introduces sctp_transport_reset_raise_timer() to start the timer as the raise timer when entering the Search Complete state. When entering the other states, sctp_transport_reset_probe_timer() is still called to reset the timer to the probe timer. raise_count can be removed from sctp_transport, since probe timer timeouts no longer need to be counted to derive the raise timer timeout. last_rtx_chunks can be removed as well, because sctp_transport_reset_probe_timer() can be called wherever the asoc's rtx_data_chunks changes. Signed-off-by: Xin Long <lucien.xin@gmail.com> Acked-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com> Link: https://lore.kernel.org/r/edb0e48988ea85997488478b705b11ddc1ba724a.1637781974.git.lucien.xin@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
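A hedged sketch of the new helper's shape, assuming the probe_timer/probe_interval field names and millisecond units shown here (the real SCTP code may differ): arm the shared timer once for the full raise interval instead of letting it expire 30 times.

static void sctp_transport_reset_raise_timer(struct sctp_transport *t)
{
	/* one expiry after 30 * probe_interval replaces 30 probe expiries */
	unsigned long expires = jiffies + msecs_to_jiffies(t->probe_interval) * 30;

	if (!mod_timer(&t->probe_timer, expires))
		sctp_transport_hold(t);	/* a newly pending timer holds a reference */
}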
-
Xin Long authored
When an skb reaches tipc_aead_encrypt(), it is always linear. The unlikely check 'skb_cloned(skb) && tailen <= skb_tailroom(skb)' is already handled entirely by the "if (!skb_has_frag_list())" branch in skb_cow_data(). Also, remove the 'TODO:' annotation, as the pages in skbs are not writable; see commit 3cf4375a ("tipc: do not write skb_shinfo frags when doing decrytion") for more detail. Signed-off-by: Xin Long <lucien.xin@gmail.com> Acked-by: Jon Maloy <jmaloy@redhat.com> Link: https://lore.kernel.org/r/47a478da0b6095b76e3cbe7a75cbd25d9da1df9a.1637773872.git.lucien.xin@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Jakub Kicinski authored
Alex Elder says: ==================== net: ipa: GSI channel flow control Starting with IPA v4.2, endpoint DELAY mode (which prevents data transfer on TX endpoints) does not work properly. To address this, changes were made to allow the underlying GSI channels to be put into a "flow controlled" state, which achieves a similar objective. The first patch in this series implements the flow controlled channel state and the commands used to control it, and arranges to use the new mechanism, instead of DELAY mode, for IPA v4.2+. In IPA v4.11, the notion of GSI channel flow control was enhanced and implemented in a slightly different way. For the most part this doesn't affect the way the IPA driver uses flow control, but the second patch adds support for the newer mechanism. ==================== Link: https://lore.kernel.org/r/20211124194416.707007-1-elder@linaro.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Alex Elder authored
IPA v4.2 introduced GSI channel flow control, used instead of IPA endpoint DELAY mode to prevent a TX channel from injecting packets into the IPA core. It used a new FLOW_CONTROLLED channel state, which could be entered using GSI generic commands. IPA v4.11 extended the channel flow control model. Rather than having a distinct FLOW_CONTROLLED channel state, each channel has a "flow control" property that can be enabled or disabled, independent of the channel state. The AP (or modem) can modify this property using the same GSI generic commands as before. The AP only uses channel flow control on modem TX channels, and only when recovering from a modem crash. The AP has no way to discover the state of a modem channel, so the fact that (starting with IPA v4.11) flow control no longer uses a distinct channel state is invisible to the AP. Enhanced flow control therefore generally does not change the way the AP uses flow control. There are a few small differences, however: - There is a notion of "primary" or "secondary" flow control, which must be specified in a new field of the GSI generic command register when enabling or disabling flow control. For now, we always specify 0 (meaning "primary"). - When disabling flow control, a request may need to be retried; we retry up to 5 times in that case. - Another new generic command allows the current flow control state to be queried; we do not use it. Other than the need for retries, the code essentially works the same way as before. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Alex Elder authored
One quirk for certain versions of IPA is that endpoint DELAY mode does not work properly. IPA DELAY mode prevents any packets from being delivered to the IPA core for processing on a TX endpoint. The AP uses DELAY mode when the modem crashes, to prevent modem TX endpoints from generating traffic during crash recovery. Without this, there is a chance the hardware will stall during recovery from a modem crash. To achieve a similar effect, a GSI FLOW_CONTROLLED channel state was created. A STARTED TX channel can be placed in FLOW_CONTROLLED state, which prevents the transfer of any more packets. A channel in FLOW_CONTROLLED state can be either returned to STARTED state, or can be transitioned to STOPPED state. Because this operates on GSI channels, two generic commands were added to allow the AP to control this state for modem channels (similar to the ALLOCATE and HALT channel commands). Previously the code assumed this quirk only applied to IPA v4.2. In fact, channel flow control (rather than endpoint DELAY mode) should be used for all versions *starting* with IPA v4.2. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Jakub Kicinski authored
Jeremy Kerr says: ==================== mctp serial minor fixes We had a few minor fixes queued for a v4 of the original series, so they're sent here as separate changes. v2: - fix ordering of cancel_work vs. unregister_netdev. ==================== Link: https://lore.kernel.org/r/20211125060739.3023442-1-jk@codeconstruct.com.au Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Jeremy Kerr authored
Jiri assures me that a ldisc->open with tty->disc_data set should never happen, so this check doesn't do anything. Reported-by: Jiri Slaby <jirislaby@kernel.org> Signed-off-by: Jeremy Kerr <jk@codeconstruct.com.au> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Jeremy Kerr authored
The current serial driver requires a maximum MTU of 68, and it doesn't make sense to set an MTU below the MCTP-required baseline (also 68). This change sets the min_mtu and max_mtu of the mctp netdev, essentially disallowing changes. By using these instead of an ndo_change_mtu op, we also get netlink extack errors reported. Signed-off-by: Jeremy Kerr <jk@codeconstruct.com.au> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
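A hedged sketch of the netdev-limits approach; the setup function name and the single MCTP_SERIAL_MTU constant are assumptions for illustration:

#define MCTP_SERIAL_MTU	68	/* MCTP-required baseline and driver maximum */

static void mctp_serial_setup_sketch(struct net_device *ndev)
{
	ndev->mtu     = MCTP_SERIAL_MTU;
	/* with min_mtu == max_mtu the core rejects any MTU change itself,
	 * reporting a netlink extack, so no ndo_change_mtu op is needed */
	ndev->min_mtu = MCTP_SERIAL_MTU;
	ndev->max_mtu = MCTP_SERIAL_MTU;
}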
-
Jeremy Kerr authored
We want to ensure that the tx work has finished before returning from the ldisc close op, so do a synchronous cancel. Reported-by: Jiri Slaby <jirislaby@kernel.org> Signed-off-by: Jeremy Kerr <jk@codeconstruct.com.au> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Jakub Kicinski authored
Alex Elder says: ==================== net: ipa: small collected improvements This series contains a set of somewhat unrelated changes, some inspired by recent work posted for back-port. For the most part they're meant to improve the code without changing its functionality. Each basically stands on its own. ==================== Link: https://lore.kernel.org/r/20211124202511.862588-1-elder@linaro.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Alex Elder authored
The dummy net_device is a large field in the GSI structure, but it is not at all interesting from the perspective of debugging. Move it to the end of the GSI structure so the other fields are easier to find in memory. The channel and event ring arrays are also very large, so move them near the end of the structure as well. Swap the position of the result and completion fields to improve structure packing. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Alex Elder authored
A mutex ensures we never submit more than one GSI command of any kind at once. This means the per-channel and per-event ring completion structures provide no benefit. Instead, just use the single (existing) GSI completion to signal the completion of GSI commands of all types. This makes gsi_evt_ring_init() a trivial function with no inverse, so open-code it in its sole caller and get rid of the function. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Alex Elder authored
In ipa_endpoint_skb_copy(), a new socket buffer is allocated so that some data can be copied into it. However, if the endpoint has a null netdev pointer, we then just free the socket buffer, dropping the data. Instead, check the endpoint->netdev pointer first and return early if it is null. Also return early if the SKB allocation fails, to avoid the deeper indentation in the normal path. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
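A hedged sketch of the reshaped function; the signature and the ipa_modem_skb_rx() hand-off are taken loosely from the driver and may not match exactly:

static void ipa_endpoint_skb_copy(struct ipa_endpoint *endpoint,
				  void *data, u32 len, u32 extra)
{
	struct sk_buff *skb;

	if (!endpoint->netdev)
		return;			/* nothing to deliver to */

	skb = __dev_alloc_skb(len, GFP_ATOMIC);
	if (!skb)
		return;			/* allocation failed: drop the data */

	/* normal path, now at a single indentation level */
	skb_put_data(skb, data, len);
	skb->truesize += extra;
	ipa_modem_skb_rx(endpoint->netdev, skb);
}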
-
Alex Elder authored
During setup, ipa_endpoint_program() programs each endpoint with various configuration parameters. One of those registers defines whether to drop packets when a head-of-line blocking condition is detected on an RX endpoint. We currently assume this is disabled; instead, explicitly set it to be disabled. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Alex Elder authored
The head-of-line block (HOLB) drop timer is only meaningful when dropping packets due to blocking is enabled. Given that, redefine the interface so the timer is specified when enabling HOLB drop, and use a different function when disabling. To enable and disable HOLB drop, these functions will now be used: ipa_endpoint_init_hol_block_enable(endpoint, milliseconds) ipa_endpoint_init_hol_block_disable(endpoint) The existing ipa_endpoint_init_hol_block_enable() becomes a helper function, renamed ipa_endpoint_init_hol_block_en(), and used with ipa_endpoint_init_hol_block_timer() to enable HOLB block on an endpoint. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
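A sketch of the reworked interface described above; the wrapper bodies are assumptions based on the description, not the driver's exact code:

static void ipa_endpoint_init_hol_block_enable(struct ipa_endpoint *endpoint,
					       u32 milliseconds)
{
	/* program the drop timer first, then turn dropping on */
	ipa_endpoint_init_hol_block_timer(endpoint, milliseconds);
	ipa_endpoint_init_hol_block_en(endpoint, true);
}

static void ipa_endpoint_init_hol_block_disable(struct ipa_endpoint *endpoint)
{
	ipa_endpoint_init_hol_block_en(endpoint, false);
}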
-
Alex Elder authored
Not all filter table entries are used. Only certain endpoints support filtering, and the table begins with a bitmap indicating which endpoints use the "slots" that follow for filter rules. Currently, unused filter table entries are not initialized. Instead, zero-fill the entire unused portion of the filter table memory regions, to make it more obvious that memory is unused (and not subsequently modified). This is not strictly necessary, but the result is reassuring when looking at filter table memory. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Alex Elder authored
A recent commit made disabling the SMP2P "setup ready" interrupt unrelated to ipa_modem_stop(). Given that, it seems fitting to get rid of ipa_modem_init() and ipa_modem_exit() (which are trivial wrapper functions), and call ipa_smp2p_init() and ipa_smp2p_exit() directly instead. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Vladimir Oltean authored
The VSC9959 switch embedded within NXP LS1028A (and that version of Ocelot switches only) supports cut-through forwarding - meaning it can start the process of looking up the destination ports for a packet, and forward towards those ports, before the entire packet has been received (as opposed to the store-and-forward mode). The up side is having lower forwarding latency for large packets. The down side is that frames with FCS errors are forwarded instead of being dropped. However, erroneous frames do not result in incorrect updates of the FDB or incorrect policer updates, since these processes are deferred inside the switch to the end of frame. Since the switch starts the cut-through forwarding process after all packet headers (including IP, if any) have been processed, packets with large headers and small payload do not see the benefit of lower forwarding latency. There are two cases that need special attention. The first is when a packet is multicast (or flooded) to multiple destinations, one of which doesn't have cut-through forwarding enabled. The switch deals with this automatically by disabling cut-through forwarding for the frame towards all destination ports. The second is when a packet is forwarded from a port of lower link speed towards a port of higher link speed. This is not handled by the hardware and needs software intervention. Since we practically need to update the cut-through forwarding domain from paths that aren't serialized by the rtnl_mutex (phylink mac_link_down/mac_link_up ops), this means we need to serialize physical link events with user space updates of bonding/bridging domains. Enabling cut-through forwarding is done per {egress port, traffic class}. I don't see any reason why this would be a configurable option as long as it works without issues, and there doesn't appear to be any user space configuration tool to toggle this on/off, so this patch enables cut-through forwarding on all eligible ports and traffic classes. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Link: https://lore.kernel.org/r/20211125125808.2383984-2-vladimir.oltean@nxp.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Vladimir Oltean authored
The only caller takes ocelot_port->bridge and passes it as the "bridge" argument to this function, which then compares it with ocelot_port->bridge. This is not useful. Instead, we would like this function to return 0 if ocelot_port->bridge is not present, which is what this patch does. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Link: https://lore.kernel.org/r/20211125125808.2383984-1-vladimir.oltean@nxp.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Colin Ian King authored
There is a spelling mistake in a netdev_err error message. Fix it. Signed-off-by: Colin Ian King <colin.i.king@gmail.com> Link: https://lore.kernel.org/r/20211125002932.49217-1-colin.i.king@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Ong Boon Leong authored
When an XDP program is loaded, it is desirable that the previous TX and RX coalesce values are not reset to their defaults. This prevents unnecessarily reconfiguring coalesce values that were working fine before. Fixes: ac746c85 ("net: stmmac: enhance XDP ZC driver level switching performance") Signed-off-by: Ong Boon Leong <boon.leong.ong@intel.com> Tested-by: Kurt Kanzenbach <kurt@linutronix.de> Link: https://lore.kernel.org/r/20211124114019.3949125-1-boon.leong.ong@intel.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Yang Yingliang authored
The node pointer is returned by of_get_child_by_name() with its refcount incremented in tsnep_mdio_init(). Call of_node_put() to avoid the refcount leak in tsnep_mdio_init(). Reported-by: Hulk Robot <hulkci@huawei.com> Signed-off-by: Yang Yingliang <yangyingliang@huawei.com> Link: https://lore.kernel.org/r/20211124084048.175456-1-yangyingliang@huawei.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
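A hedged sketch of the balanced pattern (the real tsnep_mdio_init() signature and body differ; only the of_get_child_by_name()/of_node_put() pairing is the point):

static int tsnep_mdio_init_sketch(struct device *dev)
{
	struct device_node *np;
	int ret = 0;

	/* of_get_child_by_name() returns the node with its refcount raised */
	np = of_get_child_by_name(dev->of_node, "mdio");
	if (!np)
		return 0;

	/* ... register the MDIO bus and scan for PHYs here ... */

	of_node_put(np);	/* drop the reference on every exit path */
	return ret;
}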
-
Tonghao Zhang authored
Use the ethtool API helper ethtool_sprintf() instead of snprintf(). Signed-off-by: Tonghao Zhang <xiangxia.m.yue@gmail.com> Link: https://lore.kernel.org/r/20211125025444.13115-1-xiangxia.m.yue@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
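The before/after shape, sketched generically (NUM_QUEUES and the string format are illustrative, not taken from the driver). ethtool_sprintf() advances the string pointer by ETH_GSTRING_LEN itself, so the manual bookkeeping disappears:

#define NUM_QUEUES	8	/* illustrative */

static void example_get_strings(struct net_device *dev, u32 stringset, u8 *data)
{
	int i;

	if (stringset != ETH_SS_STATS)
		return;

	for (i = 0; i < NUM_QUEUES; i++) {
		/* old style:
		 *	snprintf(data, ETH_GSTRING_LEN, "rxq%d_packets", i);
		 *	data += ETH_GSTRING_LEN;
		 */
		ethtool_sprintf(&data, "rxq%d_packets", i);
	}
}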
-
Russell King (Oracle) authored
Populate the supported interfaces bitmap and MAC capabilities mask for the macb driver and remove the old validate implementation. Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Link: https://lore.kernel.org/r/E1mpuRv-00D4rb-Lz@rmk-PC.armlinux.org.uk Signed-off-by: Jakub Kicinski <kuba@kernel.org>
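A hedged sketch of the general phylink pattern; the interface modes and capability bits below are illustrative, not macb's exact set:

static void example_phylink_caps_init(struct phylink_config *config)
{
	/* declare which PHY interface modes the MAC can drive ... */
	__set_bit(PHY_INTERFACE_MODE_MII, config->supported_interfaces);
	__set_bit(PHY_INTERFACE_MODE_RMII, config->supported_interfaces);
	__set_bit(PHY_INTERFACE_MODE_RGMII, config->supported_interfaces);

	/* ... and which speed/duplex/pause combinations it supports, so
	 * phylink can derive what the old validate() callback computed */
	config->mac_capabilities = MAC_SYM_PAUSE | MAC_ASYM_PAUSE |
				   MAC_10 | MAC_100 | MAC_1000FD;
}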
-
Heiner Kallweit authored
It seems only XID 609 made it to the mass market. Therefore, let's disable detection of the other RTL8125a XIDs. If nobody complains, we can remove support for RTL_GIGA_MAC_VER_60 later. Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Link: https://lore.kernel.org/r/2cd3df01-5f8b-08dd-6def-3f31a3014bde@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
- 25 Nov, 2021 11 commits
-
-
Maciej Żenczykowski authored
This is to match ipv4 behaviour; see the __ip_sock_set_tos() implementation. Technically this might not be required for ipv6, because normally we do not allow tclass to influence routing, yet the cli tooling does support it:
lpk11:~# ip -6 rule add pref 5 tos 45 lookup 5
lpk11:~# ip -6 rule
5:      from all tos 0x45 lookup 5
and in general dscp/tclass based routing does make sense. We already have cases where dscp can affect vlan priority and/or transmit queue (especially on wifi). So let's just make things match: easier to reason about, and no harm. Cc: Eric Dumazet <edumazet@google.com> Cc: Neal Cardwell <ncardwell@google.com> Signed-off-by: Maciej Żenczykowski <maze@google.com> Link: https://lore.kernel.org/r/20211123223208.1117871-1-zenczykowski@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Maciej Żenczykowski authored
This is to match ipv4 behaviour; see the __ip_sock_set_tos() implementation at ipv4/ip_sockglue.c:579:

void __ip_sock_set_tos(struct sock *sk, int val)
{
	if (sk->sk_type == SOCK_STREAM) {
		val &= ~INET_ECN_MASK;
		val |= inet_sk(sk)->tos & INET_ECN_MASK;
	}
	if (inet_sk(sk)->tos != val) {
		inet_sk(sk)->tos = val;
		sk->sk_priority = rt_tos2priority(val);
		sk_dst_reset(sk);
	}
}

Cc: Neal Cardwell <ncardwell@google.com> Signed-off-by: Maciej Żenczykowski <maze@google.com> Reviewed-by: Eric Dumazet <edumazet@google.com> Link: https://lore.kernel.org/r/20211123223154.1117794-1-zenczykowski@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
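A hedged sketch combining the two IPv6 tclass changes above (the surrounding function and variable names are assumptions, not the exact net/ipv6 code):

/* preserve ECN bits for TCP sockets and reset the cached route when the
 * traffic class actually changes, mirroring __ip_sock_set_tos() */
static void example_ip6_set_tclass(struct sock *sk, struct ipv6_pinfo *np, int val)
{
	if (sk->sk_type == SOCK_STREAM) {
		val &= ~INET_ECN_MASK;
		val |= np->tclass & INET_ECN_MASK;
	}
	if (np->tclass != val) {
		np->tclass = val;
		sk_dst_reset(sk);
	}
}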
-
Maciej Żenczykowski authored
A CAP_NET_RAW capable process can already spoof (on transmit) anything it desires via raw packet sockets, so there is no good reason not to also let it play routing tricks on packets from its own normal sockets. There is a desire to be able to use SO_MARK for routing table selection (via ip rule fwmark) from within a user process without having to run it as root. Granting it CAP_NET_RAW is much less dangerous than CAP_NET_ADMIN: CAP_NET_RAW doesn't permit persistent state changes, while CAP_NET_ADMIN does, for example by allowing reconfiguration of the routing tables and/or bringing devices up/down. Let's keep CAP_NET_ADMIN for persistent state changes, while using CAP_NET_RAW for non-configuration related stuff. Signed-off-by: Maciej Żenczykowski <maze@google.com> Link: https://lore.kernel.org/r/20211123203715.193413-1-zenczykowski@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Maciej Żenczykowski authored
CAP_NET_ADMIN is, and should continue to be, about configuring the system as a whole, not about configuring per-socket or per-packet parameters. Sending and receiving raw packets is what CAP_NET_RAW is all about: it can already send packets with any VLAN tag, any IPv4 TOS mark, and any IPv6 TCLASS mark, simply by building such a raw packet, not to mention using any protocol and any source/destination IP address/port tuple. These are the fields that networking gear uses to prioritize packets, so a CAP_NET_RAW process is already capable of affecting traffic prioritization once a packet hits the wire. This change makes it capable of affecting traffic prioritization in the host as well, at the NIC and before that in the queueing disciplines (provided skb->priority is actually being used for prioritization, rather than the TOS/TCLASS field). Hence it makes sense to allow a CAP_NET_RAW process to set the priority of the sockets, and thus the packets, it sends. Signed-off-by: Maciej Żenczykowski <maze@google.com> Link: https://lore.kernel.org/r/20211123203702.193221-1-zenczykowski@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
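A hedged sketch of the relaxed check applied to both SO_MARK and SO_PRIORITY; the helper name is hypothetical, and the real test lives inline in sock_setsockopt():

/* hypothetical helper: either capability is now sufficient */
static bool example_can_set_mark_or_priority(const struct sock *sk)
{
	struct user_namespace *uns = sock_net(sk)->user_ns;

	return ns_capable(uns, CAP_NET_ADMIN) || ns_capable(uns, CAP_NET_RAW);
}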
-
Ansuel Smith authored
Fix a warning reported by the kernel test robot: make sure hash is initialized to 0, and fix the wrong hash_type logic in qca8k_lag_can_offload(). Reported-by: kernel test robot <lkp@intel.com> Fixes: def97530 ("net: dsa: qca8k: add LAG support") Signed-off-by: Ansuel Smith <ansuelsmth@gmail.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Link: https://lore.kernel.org/r/20211123154446.31019-1-ansuelsmth@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Rahul Lakkireddy authored
Even if firmware fails to recognize the plugged-in port module type, allow reading port module EEPROM anyway. This helps in obtaining necessary diagnostics information for debugging and analysis. Signed-off-by: Manoj Malviya <manojmalviya@chelsio.com> Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com> Link: https://lore.kernel.org/r/1637682437-31407-1-git-send-email-rahul.lakkireddy@chelsio.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Ido Schimmel authored
Cited commit converted simple_strtoul() to kstrtoul() as suggested by the former's documentation. However, it also forced all inputs to be decimal, resulting in user space breakage. Fix by setting the base to '0' so that the base is automatically detected. Before:
# ip link add name br0 type bridge vlan_filtering 1
# echo "0x88a8" > /sys/class/net/br0/bridge/vlan_protocol
bash: echo: write error: Invalid argument
After:
# ip link add name br0 type bridge vlan_filtering 1
# echo "0x88a8" > /sys/class/net/br0/bridge/vlan_protocol
# echo $?
0
Fixes: 520fbdf7 ("net/bridge: replace simple_strtoul to kstrtol") Signed-off-by: Ido Schimmel <idosch@nvidia.com> Acked-by: Nikolay Aleksandrov <nikolay@nvidia.com> Link: https://lore.kernel.org/r/20211124101122.3321496-1-idosch@idosch.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
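The fix in miniature; the helper below is a simplified stand-in for the sysfs store callback:

static int example_parse_vlan_protocol(const char *buf, unsigned long *val)
{
	/* base 0 auto-detects "0x..." hex, "0..." octal and plain decimal,
	 * matching what simple_strtoul() accepted before the conversion */
	return kstrtoul(buf, 0, val);	/* was: kstrtoul(buf, 10, val) */
}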
-
Jakub Kicinski authored
Eric Dumazet says: ==================== gro: remove redundant rcu_read_lock Recent trees have seen an increase in rcu_read_{lock|unlock} costs, so it is time to get rid of the unneeded pairs. ==================== Link: https://lore.kernel.org/r/20211123225608.2155163-1-eric.dumazet@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Eric Dumazet authored
All gro_complete() handlers are called from napi_gro_complete(), which already runs under rcu_read_lock(). There is no point in stacking another rcu_read_lock() on top. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Eric Dumazet authored
All gro_receive() handlers are called from dev_gro_receive(), which already runs under rcu_read_lock(). There is no point in stacking another rcu_read_lock() on top. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Gerhard Engleder authored
Fix the following warning by removing the unused resource size: drivers/net/ethernet/engleder/tsnep_main.c:1155:21-24: WARNING: Suspicious code. resource_size is maybe missing with io Reported-by: kernel test robot <lkp@intel.com> Reported-by: Abaci Robot <abaci@linux.alibaba.com> Signed-off-by: Gerhard Engleder <gerhard@engleder-embedded.com> Link: https://lore.kernel.org/r/20211124205225.13985-1-gerhard@engleder-embedded.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-