- 06 Jul, 2016 18 commits
-
-
David Howells authored
Move the peer lookup done in input.c by data_ready into rxrpc_find_connection(). Signed-off-by: David Howells <dhowells@redhat.com>
-
David Howells authored
Prune the contents of the rxrpc_conn_proto struct. Most of the fields aren't used anymore. Signed-off-by: David Howells <dhowells@redhat.com>
-
David Howells authored
Overhaul the usage count accounting for the rxrpc_connection struct to make it easier to implement RCU access from the data_ready handler. The problem is that currently we're using a lock to prevent the garbage collector from trying to clean up a connection that we're contemplating unidling. We could just stick incoming packets on the connection we find, but we've then got a problem that we may race when dispatching a work item to process it as we need to give that a ref to prevent the rxrpc_connection struct from disappearing in the meantime. Further, incoming packets may get discarded if attached to an rxrpc_connection struct that is going away. Whilst this is not a total disaster - the client will presumably resend - it would delay processing of the call. This would affect the AFS client filesystem's service manager operation. To this end:

(1) We now maintain an extra count on the connection usage count whilst it is on the connection list. This means it is not in use when its refcount is 1.

(2) When trying to reuse an old connection, we only increment the refcount if it is greater than 0. If it is 0, we replace it in the tree with a new candidate connection.

(3) Two connection flags are added to indicate whether or not a connection is in the local's client connection tree (used by sendmsg) or the peer's service connection tree (used by data_ready). This makes sure that we don't try and remove a connection if it got replaced. The flags are tested under lock with the removal operation to prevent the reaper from killing the rxrpc_connection struct whilst someone else is trying to effect a replacement. This could probably be alleviated by using memory barriers between the flag set/test and the rb_tree ops. The rb_tree op would still need to be under the lock, however.

(4) When trying to reap an old connection, we try to flip the usage count from 1 to 0. If it's not 1 at that point, then it must've come back to life temporarily and we ignore it.

Signed-off-by: David Howells <dhowells@redhat.com>
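A minimal sketch of the resulting refcount discipline (function names here are illustrative, not the actual rxrpc symbols):

#include <linux/atomic.h>

/* Reuse path: only take a ref if the connection is still live.
 * atomic_inc_not_zero() fails once the count has hit zero, in
 * which case the caller installs a fresh candidate connection
 * in the tree instead.
 */
static bool conn_try_reuse(atomic_t *usage)
{
	return atomic_inc_not_zero(usage);
}

/* Reap path: an idle connection holds exactly one ref (the one
 * that comes with being on the connection list).  Flip 1 -> 0;
 * if the count isn't 1 any more, the connection came back to
 * life and is left alone.
 */
static bool conn_try_reap(atomic_t *usage)
{
	return atomic_cmpxchg(usage, 1, 0) == 1;
}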
-
David Howells authored
Move the lookup of a peer from a call that's being accepted into the function that creates a new incoming connection. This will allow us to avoid incrementing the peer's usage count in some cases in future. Note that I haven't bothered to integrate rxrpc_get_addr_from_skb() with rxrpc_extract_addr_from_skb() as I'm going to delete the former in the very near future. Signed-off-by: David Howells <dhowells@redhat.com>
-
David Howells authored
Split the service-specific connection code out into its own file. The client-specific code has already been split out. This will leave just the common code in the original file. Signed-off-by: David Howells <dhowells@redhat.com>
-
David Howells authored
Split the client-specific connection code out into its own file. It will behave somewhat differently from the service-specific connection code, so it makes sense to separate them. Signed-off-by: David Howells <dhowells@redhat.com>
-
David Howells authored
Each channel on a connection has a separate, independent number space from which to allocate callNumber values. It is entirely possible, for example, to have a connection with four active calls, each with call number 1. Note that the callNumber values for any particular channel don't have to start at 1, but they are supposed to increment monotonically for that channel from a client's perspective and may not be reused once the call number is transmitted (until the epoch cycles all the way back round).

Currently, however, call numbers are allocated on a per-connection basis and, further, are held in an rb-tree. The rb-tree is redundant as the four channel pointers in the rxrpc_connection struct are entirely capable of pointing to all the calls currently in progress on a connection.

To this end, make the following changes:

(1) Handle call number allocation independently per channel.

(2) Get rid of the conn->calls rb-tree. This is overkill as a connection may have a maximum of four calls in progress at any one time. Use the pointers in the channels[] array instead, indexed by the channel number from the packet.

(3) For each channel, save the result of the last call that was in progress on that channel in conn->channels[] so that the final ACK or ABORT packet can be replayed if necessary. Any call earlier than that is just ignored. If we've seen the next call number in a packet, the last one is most definitely defunct.

(4) When generating a RESPONSE packet for a connection, the call number counter for each channel must be included in it.

(5) When parsing a RESPONSE packet for a connection, the call number counters contained therein should be used to set the minimum expected call numbers on each channel.

To do in future commits:

(1) Replay terminal packets based on the last call stored in conn->channels[].

(2) Connections should be retired before the callNumber space on any channel runs out.

(3) A server is expected to disregard or reject any new incoming call that has a call number less than the current call number counter. The call number counter for that channel must be advanced to the new call number.

Note that the server cannot just require that the next call that it sees on a channel be exactly the call number counter + 1 because then there's a scenario that could cause a problem: The client transmits a packet to initiate a connection, the network goes out, the server sends an ACK (which gets lost), the client sends an ABORT (which also gets lost); the network then reconnects, the client then reuses the call number for the next call (it doesn't know the server already saw the call number), but the server thinks it already has the first packet of this call (it doesn't know that the client doesn't know that it saw the call number the first time).

Signed-off-by: David Howells <dhowells@redhat.com>
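A hedged sketch of the per-channel bookkeeping this implies (the struct and field names below are illustrative, not the actual rxrpc definitions):

#include <linux/types.h>

struct rxrpc_call;

/* One record per channel: its own call number counter, the call
 * currently occupying it (if any), and the outcome of the last
 * completed call so the final ACK/ABORT can be replayed.
 */
struct sample_channel {
	struct rxrpc_call	*call;		/* active call on this channel */
	u32			call_counter;	/* last call number allocated/seen */
	u32			last_result;	/* terminal state of previous call */
};

struct sample_connection {
	struct sample_channel	channels[4];	/* replaces the conn->calls rb-tree */
};

/* Client side: call numbers are allocated per channel, so four
 * concurrent calls may all legitimately be call number 1.
 */
static u32 sample_next_call_id(struct sample_channel *chan)
{
	return ++chan->call_counter;
}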
-
David Howells authored
The socket's accept queue (socket->acceptq) should be accessed under socket->call_lock, not under the connection lock. Signed-off-by: David Howells <dhowells@redhat.com>
-
David Howells authored
Add RCU destruction for connections and calls as the RCU lookup from the transport socket data_ready handler is going to come along shortly. Whilst we're at it, move the cleanup workqueue flushing and RCU barrierage into the destruction code for the objects that need it (locals and connections) and add the extra RCU barrier required for connection cleanup. Signed-off-by: David Howells <dhowells@redhat.com>
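A minimal sketch of the RCU-deferred destruction pattern being introduced (names are illustrative stand-ins, not the rxrpc functions themselves):

#include <linux/kernel.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>
#include <linux/workqueue.h>

struct sample_conn {
	struct rcu_head	rcu;
	/* ... */
};

static struct workqueue_struct *sample_wq;	/* stand-in for the rxrpc workqueue */

/* Readers that found the object via RCU from data_ready may still
 * be using it, so the actual kfree() is deferred past a grace period.
 */
static void sample_conn_rcu_free(struct rcu_head *rcu)
{
	kfree(container_of(rcu, struct sample_conn, rcu));
}

static void sample_conn_put_final(struct sample_conn *conn)
{
	call_rcu(&conn->rcu, sample_conn_rcu_free);
}

/* On module unload, wait for queued cleanup work and for all
 * pending RCU callbacks before the code goes away.
 */
static void sample_conn_cleanup_all(void)
{
	flush_workqueue(sample_wq);
	rcu_barrier();
}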
-
David Howells authored
When a call is disconnected, clear the call's pointer to the connection and release the associated ref on that connection. This means that the call no longer pins the connection and the connection can be discarded even before the call is.

As the code currently stands, the call struct is effectively pinned by userspace until userspace has enacted a recvmsg() to retrieve the final call state, as sk_buffs on the receive queue pin the call to which they're related because:

(1) The rxrpc_call struct contains the userspace ID that recvmsg() has to include in the control message buffer to indicate which call is being referred to. This ID must remain valid until the terminal packet is completely read and must be invalidated immediately at that point as userspace is entitled to immediately reuse it.

(2) The final ACK to the reply to a client call isn't sent until the last data packet is entirely read (it's probably worth altering this in future to send the ACK as soon as all the data has been received).

This change requires a bit of rearrangement to make sure that the call isn't going to try and access the connection again after protocol completion:

(1) Delete the error link earlier when we're releasing the call. Possibly network errors should be distributed via connections at the cost of adding in an access to the rxrpc_connection struct.

(2) Remove the call from the connection's call tree before disconnecting the call. The call tree needs to be removed anyway and incoming packets delivered by channel pointer instead.

(3) The release call event should be considered last after all other events have been processed so that we don't need access to the connection again.

(4) Move the channel_lock taking from rxrpc_release_call() to rxrpc_disconnect_call() where it will be required in future.

Signed-off-by: David Howells <dhowells@redhat.com>
-
David Howells authored
If rxrpc_connect_call() fails during the creation of a client connection, there are two bugs that we can hit that need fixing: (1) The call state should be moved to RXRPC_CALL_DEAD before the call cleanup phase is invoked. If not, this can cause an assertion failure later. (2) call->link should be reinitialised after being deleted in rxrpc_new_client_call() - which otherwise leads to a failure later when the call cleanup attempts to delete the link again. Signed-off-by: David Howells <dhowells@redhat.com>
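The second fix is the classic re-deletion hazard with list links; a minimal sketch of the safe pattern, assuming call->link is an ordinary list_head:

#include <linux/list.h>

/* After deleting an entry that a later cleanup path may try to
 * delete again, leave it self-pointing rather than poisoned:
 * list_del() on a poisoned link oopses, on a reinitialised one
 * it is a harmless no-op.
 */
static void sample_unlink(struct list_head *link)
{
	list_del(link);
	INIT_LIST_HEAD(link);	/* or simply list_del_init(link) */
}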
-
David Howells authored
Rather than calling rxrpc_get_connection() manually before calling rxrpc_queue_conn(), do it inside the queue wrapper. This allows us to do some important fixes:

(1) If the usage count is 0, do nothing. This prevents connections from being reanimated once they're dead.

(2) If rxrpc_queue_work() fails because the work item is already queued, retract the usage count increment which would otherwise be lost.

(3) Don't take a ref on the connection in the work function. By passing the ref through the work item, this is unnecessary. Doing it in the work function is too late anyway. Previously, connection-directed packets held a ref on the connection, but that's not really the best idea.

And another useful change:

(*) We don't need to take a refcount on the connection in the data_ready handler unless we invoke the connection's work item. We're using RCU there so it's otherwise redundant.

Signed-off-by: David Howells <dhowells@redhat.com>
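A hedged sketch of such a queue wrapper (illustrative names, and a generic workqueue standing in for the rxrpc one):

#include <linux/atomic.h>
#include <linux/workqueue.h>

struct sample_conn {
	atomic_t		usage;
	struct work_struct	processor;
};

/* The ref taken here travels with the work item; the work function
 * drops it when it finishes instead of taking its own.
 */
static bool sample_queue_conn(struct sample_conn *conn)
{
	if (!atomic_inc_not_zero(&conn->usage))
		return false;			/* already dead - don't reanimate */

	if (!queue_work(system_wq, &conn->processor)) {
		atomic_dec(&conn->usage);	/* already queued - retract the ref */
		return false;
	}
	return true;
}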
-
David Howells authored
Check that the client conns cache is empty before module removal and bug if not, listing any offending connections that are still present. Unfortunately, if there are connections still around, then the transport socket is still unexpectedly open and active, so we can't just unallocate the connections. Signed-off-by: David Howells <dhowells@redhat.com>
-
David Howells authored
Turn the connection event and state #define lists into enums and move them outside of the struct definition. Whilst we're at it, change _SERVER to _SERVICE in those identifiers and add EV_ into the event names to distinguish them from flags and states. Also add a symbol indicating the number of states and use that in the state text array. Signed-off-by: David Howells <dhowells@redhat.com>
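An illustrative before/after of that shape of conversion (identifier names below are made up, not the real rxrpc ones):

/* Before: bare #defines next to the struct definition, named _SERVER:
 *
 *	#define SAMPLE_CONN_SERVER_CHALLENGING	1
 *
 * After: proper enums outside the struct, _SERVICE naming, EV_ in
 * event names, and a count symbol to size the state-name table.
 */
enum sample_conn_event {
	SAMPLE_CONN_EV_CHALLENGE,
	SAMPLE_CONN_EV_ABORT,
};

enum sample_conn_state {
	SAMPLE_CONN_SERVICE_UNSECURED,
	SAMPLE_CONN_SERVICE_CHALLENGING,
	SAMPLE_CONN_SERVICE_SECURED,
	SAMPLE_CONN_ABORTED,
	SAMPLE_CONN__NR_STATES
};

static const char *const sample_conn_states[SAMPLE_CONN__NR_STATES] = {
	[SAMPLE_CONN_SERVICE_UNSECURED]   = "Unsec",
	[SAMPLE_CONN_SERVICE_CHALLENGING] = "Chall",
	[SAMPLE_CONN_SERVICE_SECURED]     = "SvSec",
	[SAMPLE_CONN_ABORTED]             = "Abort",
};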
-
David Howells authored
Provide queueing helper functions so that the queueing of local and connection objects can be fixed later. The issue is that a ref on the object needs to be passed to the work queue, but the act of queueing the object may fail because the object is already queued. Testing the queuedness of an object beforehand doesn't work because there can be a race with someone else trying to queue it. What will have to be done is to adjust the refcount depending on the result of the queue operation. Signed-off-by: David Howells <dhowells@redhat.com>
-
Herbert Xu authored
rxkad uses stack memory in SG lists which would not work if stacks were allocated from vmalloc memory. In fact, in most cases this isn't even necessary as the stack memory ends up getting copied over to kmalloc memory. This patch eliminates all the unnecessary stack memory uses by supplying the final destination directly to the crypto API. In two instances where a temporary buffer is actually needed, we also switch to using a scratch area in the rxrpc_call struct (only one DATA packet will be secured or verified at a time). Finally, there is no need to split a split-page buffer into two SG entries, so the code dealing with that has been removed. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Andy Lutomirski <luto@kernel.org> Signed-off-by: David Howells <dhowells@redhat.com>
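A minimal before/after sketch of the SG usage change (buffer names are illustrative):

#include <linux/scatterlist.h>
#include <linux/string.h>

/* Before (breaks with vmalloc'd stacks - SG lists need page-backed memory):
 *
 *	u8 tmp[16];				// stack memory
 *	sg_init_one(&sg, tmp, sizeof(tmp));
 *	... run the cipher into &sg ...
 *	memcpy(dest, tmp, sizeof(tmp));
 *
 * After: hand the crypto API the final destination (kmalloc'd or
 * skb memory), or a scratch area in the call struct when a real
 * temporary is unavoidable.
 */
static void sample_prep_sg(struct scatterlist *sg, void *dest, unsigned int len)
{
	sg_init_one(sg, dest, len);
}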
-
David Howells authored
When looking up a client connection to which to route a packet, we need to check that the packet came from the correct source so that a peer can't try to muck around with another peer's connection. Signed-off-by: David Howells <dhowells@redhat.com>
-
David Howells authored
Fix the following sparse errors:

../net/rxrpc/conn_object.c:77:17: warning: incorrect type in assignment (different base types)
../net/rxrpc/conn_object.c:77:17:    expected restricted __be32 [usertype] call_id
../net/rxrpc/conn_object.c:77:17:    got unsigned int [unsigned] [usertype] call_id
../net/rxrpc/conn_object.c:84:21: warning: restricted __be32 degrades to integer
../net/rxrpc/conn_object.c:86:26: warning: restricted __be32 degrades to integer
../net/rxrpc/conn_object.c:357:15: warning: incorrect type in assignment (different base types)
../net/rxrpc/conn_object.c:357:15:    expected restricted __be32 [usertype] epoch
../net/rxrpc/conn_object.c:357:15:    got unsigned int [unsigned] [usertype] epoch
../net/rxrpc/conn_object.c:369:21: warning: restricted __be32 degrades to integer
../net/rxrpc/conn_object.c:371:26: warning: restricted __be32 degrades to integer
../net/rxrpc/conn_object.c:411:21: warning: restricted __be32 degrades to integer
../net/rxrpc/conn_object.c:413:26: warning: restricted __be32 degrades to integer

Signed-off-by: David Howells <dhowells@redhat.com>
-
- 01 Jul, 2016 1 commit
-
-
David Howells authored
When a jumbo packet is being split up and processed, the crypto checksum for each split-out packet is in the jumbo header and needs placing in the reconstructed packet header. When the code was changed to keep the stored copy of the packet header in host byte order, this reconstruction was missed.

Found with sparse with CF=-D__CHECK_ENDIAN__:

../net/rxrpc/input.c:479:33: warning: incorrect type in assignment (different base types)
../net/rxrpc/input.c:479:33:    expected unsigned short [unsigned] [usertype] _rsvd
../net/rxrpc/input.c:479:33:    got restricted __be16 [addressable] [usertype] _rsvd

Fixes: 0d12f8a4 ("rxrpc: Keep the skb private record of the Rx header in host byte order")
Signed-off-by: David Howells <dhowells@redhat.com>
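The fix boils down to converting the wire-order field before storing it in the host-order copy; a hedged sketch (struct names are illustrative):

#include <linux/types.h>
#include <asm/byteorder.h>

struct sample_jumbo_hdr {
	__be16	_rsvd;		/* carries the cksum for the split-out packet */
};

struct sample_host_hdr {
	u16	_rsvd;		/* skb's stored copy, kept in host byte order */
};

static void sample_reconstruct(struct sample_host_hdr *hdr,
			       const struct sample_jumbo_hdr *jhdr)
{
	hdr->_rsvd = ntohs(jhdr->_rsvd);	/* convert rather than assign a raw __be16 */
}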
-
- 27 Jun, 2016 21 commits
-
-
David S. Miller authored
Russell King says:

====================
Initial SFP support patches

Please review and merge this initial patch set, which is part of a larger set previously posted adding SFP support to phy and mvneta. This initial set is focused on cleaning up and reorganising the fixed-phy code to allow the core software-phy code to be re-used. These are based on net-next.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Russell King authored
There is no prevention of a concurrent call to both fixed_mdio_read() and fixed_phy_update_state(), which can result in the state being modified while it's being inspected. Fix this by using a seqcount to detect modifications, and memcpy()ing the state. We remain slightly naughty here, calling link_update() and updating the link status within the read-side loop - which would need rework of the design to change. Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: David S. Miller <davem@davemloft.net>
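A minimal sketch of the seqcount pattern described above (the status struct and names are simplified stand-ins for the fixed_phy ones):

#include <linux/seqlock.h>
#include <linux/string.h>

struct sample_status {
	bool	link;
	int	speed;
	int	duplex;
};

static seqcount_t sample_seq = SEQCNT_ZERO(sample_seq);

/* Writer (fixed_phy_update_state analogue): callers serialise
 * against each other; the seqcount only protects readers.
 */
static void sample_update_state(struct sample_status *state,
				const struct sample_status *new)
{
	write_seqcount_begin(&sample_seq);
	*state = *new;
	write_seqcount_end(&sample_seq);
}

/* Reader (fixed_mdio_read analogue): retry until a consistent
 * snapshot has been copied out, then work on the private copy.
 */
static void sample_read_state(struct sample_status *snapshot,
			      const struct sample_status *state)
{
	unsigned int seq;

	do {
		seq = read_seqcount_begin(&sample_seq);
		memcpy(snapshot, state, sizeof(*snapshot));
	} while (read_seqcount_retry(&sample_seq, seq));
}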
-
Russell King authored
Generate software phy registers as and when requested, rather than duplicating the state in fixed_phy. This allows us to eliminate the duplicate storage of the same data, which is only different in format. As fixed_phy_update_regs() no longer updates register state, rename it to fixed_phy_update(). Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Russell King authored
Separate out the generation of MII registers from the state validation. This allows us to simplify the error handling in fixed_phy() by allowing earlier error detection. Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Russell King authored
Convert the swphy register generation to tabular form which allows us to eliminate multiple switch() statements. This results in smaller, more efficient object code and makes it easier to add support for faster speeds.

Before:
Idx Name          Size      VMA       LMA       File off  Algn
  0 .text         00000164  00000000  00000000  00000034  2**2
   text    data     bss     dec     hex filename
    388       0       0     388     184 swphy.o

After:
Idx Name          Size      VMA       LMA       File off  Algn
  0 .text         000000fc  00000000  00000000  00000034  2**2
  5 .rodata       00000028  00000000  00000000  00000138  2**2
   text    data     bss     dec     hex filename
    324       0       0     324     144 swphy.o

Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: David S. Miller <davem@davemloft.net>
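An illustrative sliver of what "tabular form" means here (field and constant choices are simplified, not the real swphy table):

#include <linux/mii.h>

struct sample_swphy_speed {
	u16	bmcr;		/* speed/duplex bits for the BMCR register */
	u16	lpa;		/* corresponding link-partner ability bits */
};

/* One row per speed replaces a switch() in every register accessor. */
static const struct sample_swphy_speed sample_speed[] = {
	{ BMCR_SPEED1000 | BMCR_FULLDPLX, 0 },
	{ BMCR_SPEED100  | BMCR_FULLDPLX, LPA_100FULL },
	{ BMCR_FULLDPLX,                  LPA_10FULL },
};

static u16 sample_swphy_bmcr(unsigned int idx)
{
	return BMCR_ANENABLE | sample_speed[idx].bmcr;
}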
-
Russell King authored
Move the fixed_phy MII register generation to a library to allow other software phy implementations to use this code. Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Merge tag 'linux-can-next-for-4.8-20160623' of git://git.kernel.org/pub/scm/linux/kernel/git/mkl/linux-can-next

Marc Kleine-Budde says:

====================
pull-request: can-next 2016-06-17

this is a pull request of 4 patches for net-next/master.

Arnd Bergmann's patch fixes a regression in af_can introduced in linux-can-next-for-4.8-20160617. There are two patches by Ramesh Shanmugasundaram, which add CAN-2.0 support to the rcar_canfd driver. And a patch by Ed Spiridonov that adds better error diagnostic messages to his driver.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Amitoj Kaur Chawla authored
Replace calls to kmalloc followed by a memcpy with a direct call to kmemdup. The Coccinelle semantic patch used to make this change is as follows:

@@
expression from,to,size,flag;
statement S;
@@

-  to = \(kmalloc\|kzalloc\)(size,flag);
+  to = kmemdup(from,size,flag);
   if (to==NULL || ...) S
-  memcpy(to, from, size);

Signed-off-by: Amitoj Kaur Chawla <amitoj1606@gmail.com>
Acked-by: Ying Xue <ying.xue@windriver.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
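In C terms the transformation looks like this (a generic sketch, not the specific call site touched by the patch):

#include <linux/slab.h>
#include <linux/string.h>

/* Before:
 *	to = kmalloc(size, GFP_KERNEL);
 *	if (!to)
 *		return NULL;
 *	memcpy(to, from, size);
 *
 * After: allocate and copy in one call.
 */
static void *sample_dup(const void *from, size_t size)
{
	return kmemdup(from, size, GFP_KERNEL);
}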
-
Colin Ian King authored
trivial fixes to spelling mistakes of the words "excessive collisions" Signed-off-by: Colin Ian King <colin.king@canonical.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Colin Ian King authored
trivial fixes to spelling mistakes of the word "descriptors" Signed-off-by: Colin Ian King <colin.king@canonical.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Saeed Mahameed says:

====================
Mellanox 100G mlx5e Ethernet extensions

This series includes multiple feature extensions for the mlx5 Ethernet netdevice driver, namely TX rate limiting, RX interrupt moderation and ethtool settings.

TX rate limiting:
- ConnectX-4 rate limiting infrastructure
- Set max rate NDO support

RX interrupt moderation:
- CQE based coalescing option (controlled via priv flags)
- Adaptive RX coalescing

ethtool settings:
- priv flags callbacks
- Support new ksettings API
- Add 50G missing link mode
- Support auto negotiation on/off

Applied on top: 0e9390eb ("Merge branch 'mlxsw-next'")
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Gal Pressman authored
Prior to this patch, auto negotiation was reported as off although it was on by default in hardware. This patch reports the correct information to ethtool and allows the user to toggle it on/off. Another parameter was added to the set port proto function in order to pass the auto negotiation field to the hardware. Signed-off-by: Gal Pressman <galp@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Gal Pressman authored
Use the new get/set link ksettings callbacks and remove the legacy get/set settings callbacks. This allows us to use bitmasks longer than 32 bits for supported and advertised link modes and to use modes that were previously not supported. Signed-off-by: Gal Pressman <galp@mellanox.com> CC: Ben Hutchings <bwh@kernel.org> CC: David Decotigny <decot@googlers.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Gal Pressman authored
Add MLX5E_50GBASE_SR2 as ETHTOOL_LINK_MODE_50000baseSR2_Full_BIT. Signed-off-by: Gal Pressman <galp@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Cc: Ben Hutchings <bwh@kernel.org> Cc: David Decotigny <decot@googlers.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Gal Pressman authored
Add ETHTOOL_LINK_MODE_50000baseSR2_Full_BIT bit. Signed-off-by: Gal Pressman <galp@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Cc: Ben Hutchings <bwh@kernel.org> Cc: David Decotigny <decot@googlers.com> Acked-By: David Decotigny <decot@googlers.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Gal Pressman authored
Add a dedicated function to toggle port link. It should be called only after setting a port register. Toggle will set the port link down and bring it back up if its admin status was up. Signed-off-by: Gal Pressman <galp@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Gil Rockah authored
Striving for high message rate and low interrupt rate. Usage: ethtool -C <interface> adaptive-rx on/off Signed-off-by: Gil Rockah <gilr@mellanox.com> Signed-off-by: Achiad Shochat <achiad@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> CC: Arnd Bergmann <arnd@arndb.de> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Tariq Toukan authored
In this mode the moderation timer will restart upon new completion (CQE) generation rather than upon interrupt generation. The outcome is that for bursty traffic the period timer will never expire and thus only the moderation frames counter will dictate interrupt generation, so the interrupt rate will be relative to the incoming packet size. If the burst ceases for "moderation period" time then an interrupt will be issued immediately. CQE based moderation is off by default and can be controlled via ethtool set_priv_flags. Performance tested on ConnectX4-Lx 50G. Less packet loss in netperf UDP and TCP tests, with no bw degradation, for both single and multi streams, with message sizes of 64, 1024, 1472 and 32768 bytes. Signed-off-by: Tariq Toukan <tariqt@mellanox.com> Signed-off-by: Achiad Shochat <achiad@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: Gal Pressman <galp@mellanox.com> Signed-off-by: Gil Rockah <gilr@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Gal Pressman authored
Introduce an infrastructure for getting/setting private net device flags. Currently a 'nop' priv flag is added; following patches will override the flag with actual feature-specific flags. Signed-off-by: Gal Pressman <galp@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
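A hedged sketch of the ethtool plumbing such an infrastructure hooks into (the "nop" flag name comes from the patch; everything else here is illustrative):

#include <linux/ethtool.h>
#include <linux/netdevice.h>

static const char sample_priv_flag_names[][ETH_GSTRING_LEN] = {
	"nop",				/* placeholder until real flags land */
};

static u32 sample_priv_flags;		/* would live in the driver's priv struct */

static u32 sample_get_priv_flags(struct net_device *dev)
{
	return sample_priv_flags;
}

static int sample_set_priv_flags(struct net_device *dev, u32 flags)
{
	sample_priv_flags = flags;	/* a real driver validates and applies each bit */
	return 0;
}

static const struct ethtool_ops sample_ethtool_ops = {
	.get_priv_flags	= sample_get_priv_flags,
	.set_priv_flags	= sample_set_priv_flags,
	/* .get_strings/.get_sset_count expose sample_priv_flag_names
	 * for ETH_SS_PRIV_FLAGS */
};

From userspace the flags are then inspected and toggled with ethtool --show-priv-flags and ethtool --set-priv-flags <interface> <flag> on|off.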
-
Yevgeny Petrilin authored
Implement the set_maxrate ndo. Use the rate index from the hardware table to attach to the channel SQ/TXQ. In case of failure to configure a new rate, the queue remains with an unlimited rate. We save the configuration in the priv structure and apply it each time the Send Queues are reinitialized (after open/close operations). Signed-off-by: Yevgeny Petrilin <yevgenyp@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
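A sketch of the NDO side, assuming hypothetical helpers for the driver/firmware plumbing (the ndo_set_tx_maxrate hook itself is the standard netdev op; maxrate is in Mbps):

#include <linux/netdevice.h>

/* Hypothetical helpers standing in for the driver-specific work. */
static int sample_rl_get_index(struct net_device *dev, u32 mbps, u16 *index);
static int sample_sq_set_rate_index(struct net_device *dev, int txq, u16 index);

static int sample_set_tx_maxrate(struct net_device *dev, int queue_index,
				 u32 maxrate)
{
	u16 index = 0;		/* 0 = unlimited */
	int err;

	if (maxrate) {
		err = sample_rl_get_index(dev, maxrate, &index);
		if (err)
			return err;	/* queue keeps its previous rate setting */
	}
	return sample_sq_set_rate_index(dev, queue_index, index);
}

static const struct net_device_ops sample_netdev_ops = {
	.ndo_set_tx_maxrate	= sample_set_tx_maxrate,
};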
-
Yevgeny Petrilin authored
Configuring and managing HW rate limit tables. The HW holds a table of rate limits; each rate is associated with an index in that table. Later a Send Queue uses this index to set the rate limit. Multiple Send Queues can have the same rate limit, which is represented by a single entry in this table. Even though a rate can be shared, each queue is rate limited independently of the others. The SW shadow of this table holds the rate itself, the index in the HW table and the refcount (number of queues) working with this rate. The exported functions are mlx5_rl_add_rate and mlx5_rl_remove_rate. The number of different rates and their values are derived from HW capabilities. Signed-off-by: Yevgeny Petrilin <yevgenyp@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
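An illustrative shape for the SW shadow table (simplified; mlx5_rl_add_rate()/mlx5_rl_remove_rate() are the exported entry points named above, but the structures below are stand-ins, not the actual mlx5 ones):

#include <linux/mutex.h>
#include <linux/types.h>

struct sample_rl_entry {
	u32	rate;		/* in Mbps; 0 means the slot is free */
	u16	index;		/* index of this rate in the HW table */
	int	refcount;	/* number of send queues sharing the rate */
};

struct sample_rl_table {
	struct mutex		lock;		/* protects add/remove */
	struct sample_rl_entry	*entries;
	int			max_size;	/* from HW capabilities */
	u32			min_rate;
	u32			max_rate;
};

/* add: find an existing entry and bump its refcount, or program a
 * free slot; remove: drop the refcount and release the slot at 0.
 */
static struct sample_rl_entry *sample_rl_find(struct sample_rl_table *t, u32 rate)
{
	int i;

	for (i = 0; i < t->max_size; i++)
		if (t->entries[i].rate == rate)
			return &t->entries[i];
	return NULL;
}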
-