- 09 Sep, 2010 40 commits
-
Zach Brown authored
The RDS IB client .remove callback used to free the rds_ibdev for the given device unconditionally. This could race with other users of the struct. This patch adds refcounting so that we only free the rds_ibdev once all of its users are done.

Many rds_ibdev users are tied to connections. We give the connection a reference and change these users to reference the device in the connection instead of looking it up in the IB client data. The only remaining user of the IB client data is the first lookup of the device as connections are built up. Incrementing the reference count of a device found in the IB client data could race with final freeing, so we use an RCU grace period to make sure that freeing won't happen until those lookups are done.

MRs need the rds_ibdev to get at the pool that they're freed into. They exist outside a connection, and many MRs on one socket can reference different devices, so it was natural to have each MR hold a reference. MR refs can be dropped from interrupt handlers, and final device teardown can block, so we push it off to a work struct. Pool teardown had to be fixed to cancel its pending work instead of deadlocking while waiting for all queued work, including itself, to finish.

MRs get their reference from the global device list, which itself gets a reference. It is left unprotected by locks and remains racy. A simple global lock would be a significant bottleneck. More scalable (and more complicated) locking should be done carefully in a later patch.

Signed-off-by: Zach Brown <zach.brown@oracle.com>
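A minimal sketch of the refcount-plus-grace-period pattern described above, in kernel C. The field and helper names (refcount, free_work, rds_ibdev_get_from_client_data) are illustrative assumptions, not the actual RDS code; free_work is assumed to be INIT_WORK'd at device-creation time.

    #include <linux/atomic.h>
    #include <linux/rcupdate.h>
    #include <linux/slab.h>
    #include <linux/workqueue.h>
    #include <rdma/ib_verbs.h>

    extern struct ib_client rds_ib_client;

    struct rds_ibdev {
        atomic_t refcount;
        struct work_struct free_work;   /* INIT_WORK'd to rds_ibdev_free */
        /* ... device state, MR pools ... */
    };

    static void rds_ibdev_free(struct work_struct *work)
    {
        struct rds_ibdev *dev = container_of(work, struct rds_ibdev, free_work);

        synchronize_rcu();   /* wait out lookups still in their read section */
        kfree(dev);
    }

    /* May be called from interrupt handlers: the blocking teardown is
     * pushed off to a work struct rather than done here. */
    void rds_ibdev_put(struct rds_ibdev *dev)
    {
        if (atomic_dec_and_test(&dev->refcount))
            schedule_work(&dev->free_work);
    }

    /* The one remaining IB client data user: the RCU read section pins
     * the pointer long enough to try to take a reference, and
     * atomic_inc_not_zero() fails if final freeing has already begun. */
    struct rds_ibdev *rds_ibdev_get_from_client_data(struct ib_device *ibdev)
    {
        struct rds_ibdev *dev;

        rcu_read_lock();
        dev = ib_get_client_data(ibdev, &rds_ib_client);
        if (dev && !atomic_inc_not_zero(&dev->refcount))
            dev = NULL;
        rcu_read_unlock();
        return dev;
    }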
-
Zach Brown authored
rds_ib_xmit_rdma() was calling ib_get_client_data() to get at the rds_ibdev just to read max_sge for the transmit. This patch instead has it read max_sge directly off the rds_ibdev stored on the connection.

The current code won't free the rds_ibdev until all the IB connections that use it are freed, so it's safe to reference the rds_ibdev this way. In the future it also makes it easier to support proper reference counting of the rds_ibdev struct.

As an additional bonus, this gets rid of the performance hit of calling into the IB stack to look up the rds_ibdev. The current implementation in the IB stack acquires an interrupt-blocking spinlock to protect the registration of client callback data.

Signed-off-by: Zach Brown <zach.brown@oracle.com>
-
Zach Brown authored
rds_ib_cm_handle_connect() could return without unlocking the c_conn_lock if rds_setup_qp() failed. Rather than adding another imbalanced mutex_unlock() to this error path, we only unlock the mutex once as we exit the function, reducing the likelihood of making this same mistake in the future. We remove the previous multiple return sites, leaving one unambiguous return path. Signed-off-by: Zach Brown <zach.brown@oracle.com>
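A hedged sketch of the single-exit pattern the commit moves to; the function body and the error-path details are illustrative, not the actual handler:

    #include <linux/mutex.h>

    struct rds_connection {
        struct mutex c_conn_lock;
    };

    static int rds_setup_qp(struct rds_connection *conn);   /* assumed */

    int handle_connect(struct rds_connection *conn)
    {
        int ret;

        mutex_lock(&conn->c_conn_lock);

        ret = rds_setup_qp(conn);
        if (ret)
            goto out;   /* no per-path mutex_unlock() to forget */

        /* ... remaining connection setup, still under the lock ... */

    out:
        mutex_unlock(&conn->c_conn_lock);   /* the one unambiguous unlock */
        return ret;
    }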
-
Chris Mason authored
This makes sure we have the proper number of references in rds_ib_xmit_atomic and rds_ib_xmit_rdma. We also consistently drop references the same way for all message types as the IOs end. Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
Chris Mason authored
The connection hash was almost entirely RCU-ready; this commit just makes the final couple of changes to use RCU instead of spinlocks for everything. Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
Chris Mason authored
rds_conn_destroy really needs locking while it changes the connection hash. Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
Chris Mason authored
The RDS send_xmit code was trying to get fancy with message counting and was dropping the final reference on the RDMA messages too early. This resulted in memory corruption and oopsen. The fix here is to always add a ref as parts of the message pass through rds_send_xmit, and to always drop a ref as parts of the message go through completion handling. Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
Chris Mason authored
This is the first in a long line of patches that try to fix races between RDS connection shutdown and RDS traffic. Here we maintain a count of active senders to make sure the connection doesn't go away while they are using it. Signed-off-by: Chris Mason <chris.mason@oracle.com>
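A sketch of the sender-count idea, assuming illustrative field names (c_senders, c_waitq); the real patch wires this into the send and shutdown paths:

    #include <linux/atomic.h>
    #include <linux/wait.h>

    struct rds_connection {
        atomic_t c_senders;
        wait_queue_head_t c_waitq;
    };

    static void rds_send_enter(struct rds_connection *conn)
    {
        atomic_inc(&conn->c_senders);
    }

    static void rds_send_exit(struct rds_connection *conn)
    {
        if (atomic_dec_and_test(&conn->c_senders))
            wake_up(&conn->c_waitq);
    }

    /* Shutdown blocks until every in-flight sender has dropped its
     * count, so the connection cannot be torn down underneath them. */
    static void rds_conn_wait_senders(struct rds_connection *conn)
    {
        wait_event(conn->c_waitq, atomic_read(&conn->c_senders) == 0);
    }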
-
Chris Mason authored
The RDS bind lookups are somewhat expensive in terms of CPU time and locking overhead. This commit changes them into a faster RCU-based hash table instead of the rbtrees they were using before. On large NUMA systems this is a significant improvement. Signed-off-by: Chris Mason <chris.mason@oracle.com>
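The lookup side of such an RCU hash might look like the sketch below. The hash size, field names, and hash function are assumptions; a real lookup would also take a socket reference before leaving the read-side critical section, and writers would still serialize inserts and removals with a lock.

    #include <linux/jhash.h>
    #include <linux/rculist.h>

    #define BIND_HASH_BITS 10
    #define BIND_HASH_SIZE (1 << BIND_HASH_BITS)

    struct rds_sock {
        struct list_head rs_bound_node;
        __be32 rs_bound_addr;
        __be16 rs_bound_port;
    };

    static struct list_head bind_hash_table[BIND_HASH_SIZE];

    static struct rds_sock *rds_bind_lookup(__be32 addr, __be16 port)
    {
        unsigned int h = jhash_2words((__force u32)addr, (__force u32)port, 0)
                         & (BIND_HASH_SIZE - 1);
        struct rds_sock *rs, *ret = NULL;

        rcu_read_lock();
        list_for_each_entry_rcu(rs, &bind_hash_table[h], rs_bound_node) {
            if (rs->rs_bound_addr == addr && rs->rs_bound_port == port) {
                ret = rs;   /* take a sock reference here in real code */
                break;
            }
        }
        rcu_read_unlock();
        return ret;
    }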
-
Andy Grover authored
Allocate send/recv rings in memory that is node-local to the HCA. This significantly helps performance. Signed-off-by: Andy Grover <andy.grover@oracle.com>
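A minimal sketch of what node-local ring allocation amounts to; how the HCA's NUMA node is derived from the ib_device is elided here, and the helper name is made up:

    #include <linux/vmalloc.h>

    /* Put the work-request ring on the HCA's node so the device's DMA
     * and the CPU-side completion processing both touch local memory. */
    static void *rds_ib_alloc_ring(size_t nr_entries, size_t entry_size,
                                   int hca_node)
    {
        return vmalloc_node(nr_entries * entry_size, hca_node);
    }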
-
Andy Grover authored
Signed-off-by: Andy Grover <andy.grover@oracle.com>
-
Chris Mason authored
rds_ib_get_device is called very often as we turn an IP address into the corresponding device structure. It currently takes a global spinlock as it walks different lists to find active devices. This commit changes the lists over to RCU, which isn't very complex because they are not updated very often at all. Signed-off-by: Chris Mason <chris.mason@oracle.com>
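The update side of that conversion could look like this sketch (list and lock names are illustrative): rare writers keep a spinlock, the hot lookup path walks the list under rcu_read_lock(), and a grace period separates unlinking from freeing.

    #include <linux/rculist.h>
    #include <linux/slab.h>
    #include <linux/spinlock.h>

    struct rds_ib_device {
        struct list_head list;
        /* ... */
    };

    static LIST_HEAD(rds_ib_devices);
    static DEFINE_SPINLOCK(devices_lock);   /* serializes writers only */

    static void rds_ib_device_add(struct rds_ib_device *dev)
    {
        spin_lock(&devices_lock);
        list_add_tail_rcu(&dev->list, &rds_ib_devices);
        spin_unlock(&devices_lock);
    }

    static void rds_ib_device_remove(struct rds_ib_device *dev)
    {
        spin_lock(&devices_lock);
        list_del_rcu(&dev->list);
        spin_unlock(&devices_lock);
        synchronize_rcu();      /* let lockless readers drain */
        kfree(dev);
    }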
-
Chris Mason authored
This removes a global waitqueue used to wait for rds messages and replaces it with a waitqueue inside the rds_message struct. The global waitqueue effectively turns into a global lock and significantly bottlenecks operations on large machines. Signed-off-by: Chris Mason <chris.mason@oracle.com>
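Roughly, the per-message waitqueue works as sketched below; the flag and field names are assumptions:

    #include <linux/bitops.h>
    #include <linux/wait.h>

    #define RDS_MSG_MAPPED 1   /* illustrative bit number */

    struct rds_message {
        unsigned long m_flags;
        wait_queue_head_t m_wait;   /* one queue per message */
    };

    static void rds_message_wait(struct rds_message *rm)
    {
        wait_event(rm->m_wait, !test_bit(RDS_MSG_MAPPED, &rm->m_flags));
    }

    static void rds_message_unmapped(struct rds_message *rm)
    {
        clear_bit(RDS_MSG_MAPPED, &rm->m_flags);
        /* wakes only waiters on *this* message, not every waiter in
         * the system the way the old global queue's lock did */
        wake_up(&rm->m_wait);
    }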
-
Chris Mason authored
The bind_lock is almost entirely read-only, but it gets hammered during normal operations and is a major bottleneck. This commit changes it to an rwlock, which takes it from 80% of the system time on a big NUMA machine down to much lower numbers. A better fix would involve RCU, which is done in a later commit. Signed-off-by: Chris Mason <chris.mason@oracle.com>
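The shape of the change, as a sketch: lookups on the packet path take the lock shared, and only bind/unbind takes it exclusively.

    #include <linux/spinlock.h>

    static DEFINE_RWLOCK(bind_lock);

    static void lookup_path(void)
    {
        read_lock(&bind_lock);      /* many readers in parallel */
        /* ... walk the bind rbtree ... */
        read_unlock(&bind_lock);
    }

    static void bind_update_path(void)
    {
        write_lock(&bind_lock);     /* rare, exclusive */
        /* ... insert or remove a binding ... */
        write_unlock(&bind_lock);
    }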
-
Andy Grover authored
Update comments to reflect changes in previous commit. Keeping as separate commits due to different authorship. Signed-off-by: Andy Grover <andy.grover@oracle.com>
-
Chris Mason authored
rds_send_xmit is required to loop around after it releases the lock because someone else could have done a trylock, found someone working on the list, and backed off. But once we drop our lock, it is possible that someone else does come in and make progress on the list. We should detect this and not loop around if another process is actually working on the list.

This patch adds a generation counter that is bumped every time we get the lock and do some send work. If the retry notices someone else has bumped the generation counter, it does not need to loop around and continue working.

Signed-off-by: Chris Mason <chris.mason@oracle.com> Signed-off-by: Andy Grover <andy.grover@oracle.com>
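A sketch of the generation-counter retry logic; queue_has_work() and the exact loop structure are illustrative:

    #include <linux/spinlock.h>

    struct rds_connection {
        spinlock_t c_send_lock;
        unsigned int c_send_gen;    /* only bumped under c_send_lock */
    };

    static int queue_has_work(struct rds_connection *conn);   /* assumed */

    static void rds_send_xmit(struct rds_connection *conn)
    {
        unsigned int gen;

    restart:
        if (!spin_trylock(&conn->c_send_lock))
            return;                  /* someone else owns the send path */
        gen = ++conn->c_send_gen;    /* record this pass */

        /* ... transmit queued messages ... */

        spin_unlock(&conn->c_send_lock);

        /* Work queued after we dropped the lock may have hit our
         * trylock and backed off.  Loop only if nobody else has bumped
         * the generation, i.e. no other thread has since made progress.
         * (The racy unlocked read is fine: the counter is only a hint.) */
        if (queue_has_work(conn) && gen == conn->c_send_gen)
            goto restart;
    }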
-
Andy Grover authored
Call send_xmit() directly from pong(). Set pongs as op_active. Signed-off-by: Andy Grover <andy.grover@oracle.com>
-
Andy Grover authored
Can't see a reason not to allow signals to interrupt the wait. Signed-off-by: Andy Grover <andy.grover@oracle.com>
-
Andy Grover authored
The purpose of the send quota was really to give fairness when different connections were all using the same workq thread to send backlogged msgs -- they could only send so many before another connection could make progress. Now that each connection is pushing the backlog from its completion handler, they are all guaranteed to make progress and the quota isn't needed any longer.

A thread *will* have to send all previously queued data, as well as any further msgs placed on the queue while c_send_lock was held. In a pathological case a single process can get roped into doing this for long periods while other threads get off free. But since it can only do this until the transport reports full, this is a bounded scenario.

Signed-off-by: Andy Grover <andy.grover@oracle.com>
-
Andy Grover authored
Signed-off-by: Andy Grover <andy.grover@oracle.com>
-
Andy Grover authored
If the queue has nobody on it, then wake_up does nothing. Signed-off-by: Andy Grover <andy.grover@oracle.com>
-
Andy Grover authored
Do not nest m_rs_lock under c_lock. Disable interrupts in {rdma,atomic}_send_complete. Signed-off-by: Andy Grover <andy.grover@oracle.com>
-
Andy Grover authored
Can no longer block, so use NOWAIT. Signed-off-by: Andy Grover <andy.grover@oracle.com>
-
Andy Grover authored
Now that rds_send_xmit() does not block, we can call it directly instead of going through the helper thread. Signed-off-by: Andy Grover <andy.grover@oracle.com>
-
Andy Grover authored
rds_sendmsg() is calling the send worker function to send the just-queued datagrams, presumably because it wants the behavior where anything not sent will re-call the send worker. We now ensure all queued datagrams are sent by retrying from the send completion handler, so this isn't needed any more. Signed-off-by: Andy Grover <andy.grover@oracle.com>
-
Andy Grover authored
rds_message_put() cannot be called with irqs off, so move it to after irqs are re-enabled. Spinlocks throughout the function do not need to use _irqsave because the acquisition of c_send_lock at the top already disabled irqs. Signed-off-by: Andy Grover <andy.grover@oracle.com>
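In sketch form (names and structure are illustrative, not the exact function):

    #include <linux/spinlock.h>

    struct rds_message { spinlock_t m_rs_lock; /* ... */ };
    struct rds_connection { spinlock_t c_send_lock; /* ... */ };

    static void rds_message_put(struct rds_message *rm);   /* assumed */

    static void send_path(struct rds_connection *conn, struct rds_message *rm)
    {
        unsigned long flags;

        spin_lock_irqsave(&conn->c_send_lock, flags);

        spin_lock(&rm->m_rs_lock);   /* irqs already off: plain variant */
        /* ... */
        spin_unlock(&rm->m_rs_lock);

        spin_unlock_irqrestore(&conn->c_send_lock, flags);

        /* the final put may trigger work that cannot run with irqs
         * off, so it happens only after irqs are restored */
        rds_message_put(rm);
    }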
-
Andy Grover authored
This change allows us to call rds_send_xmit() from a tasklet, which is crucial to our new operating model.
* Change c_send_lock to a spinlock
* Update stats fields "sem_" to "_lock"
* Remove unneeded rds_conn_is_sending()
About locking between shutdown and send -- send checks if the connection is up. Shutdown puts the connection into DISCONNECTING. After this, all threads entering send will exit immediately. However, a thread could be *in* send_xmit(), so shutdown acquires the c_send_lock to ensure everyone is out before proceeding with connection shutdown. Signed-off-by: Andy Grover <andy.grover@oracle.com>
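The shutdown-versus-send handshake, sketched with assumed names (c_state as an atomic, conn_shutdown as the teardown entry point):

    #include <linux/atomic.h>
    #include <linux/spinlock.h>

    enum { RDS_CONN_UP, RDS_CONN_DISCONNECTING };

    struct rds_connection {
        atomic_t c_state;
        spinlock_t c_send_lock;
    };

    /* Send side: bail immediately unless the connection is up. */
    static int rds_conn_up(struct rds_connection *conn)
    {
        return atomic_read(&conn->c_state) == RDS_CONN_UP;
    }

    static void conn_shutdown(struct rds_connection *conn)
    {
        unsigned long flags;

        /* no new sender gets past the entry check after this */
        atomic_set(&conn->c_state, RDS_CONN_DISCONNECTING);

        /* flush out any thread already inside send_xmit(): once we
         * have taken and dropped c_send_lock, everyone is out */
        spin_lock_irqsave(&conn->c_send_lock, flags);
        spin_unlock_irqrestore(&conn->c_send_lock, flags);

        /* ... proceed with teardown ... */
    }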
-
Andy Grover authored
Performance is better if we use allocations that don't block to refill the receive ring. Since the whole reason we were kicking out to the worker thread was so we could do blocking allocs, we no longer need to do this. Remove gfp params from rds_ib_recv_refill(); we always use GFP_NOWAIT. Signed-off-by: Andy Grover <andy.grover@oracle.com>
-
Andy Grover authored
We now ask the transport to give us a rm for the congestion map, and then we handle it normally. Previously, the transport defined a function that we would call to send a congestion map. Convert TCP and loop transports to new cong map method. Signed-off-by: Andy Grover <andy.grover@oracle.com>
-
Andy Grover authored
Now that we are signaling send completions much less, we are likely to have dirty entries in the send queue when the connection is shut down (on rmmod, for example). These are cleaned up a little further down in conn_shutdown, but if we wait on the ring_empty_wait for them, it'll never happen, and we hang on unload. Signed-off-by: Andy Grover <andy.grover@oracle.com>
-
Andy Grover authored
Previously, RDS would wait until the final send WR had completed and then handle cleanup. With silent ops, we do not know if an atomic, rdma, or data op will be last. This patch handles any of these cases by keeping a pointer to the last op in the message in m_last_op. When the TX completion event fires, rds dispatches to per-op-type cleanup functions, and then does whole-message cleanup if the completed op was m_last_op.

This patch also moves towards having op-specific functions take the op struct instead of the overall rm struct.

rds_ib_connection has a pointer to keep track of a partially-completed data send operation. This patch changes it from an rds_message pointer to the narrower rm_data_op pointer, and modifies the places that use this pointer as needed.

Signed-off-by: Andy Grover <andy.grover@oracle.com>
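A sketch of the m_last_op dispatch in the completion handler; the struct layout and cleanup helpers are assumptions for illustration:

    struct rm_data_op;
    struct rm_rdma_op;
    struct rm_atomic_op;

    struct rds_message {
        struct rm_data_op   *data_op;
        struct rm_rdma_op   *rdma_op;
        struct rm_atomic_op *atomic_op;
        void                *m_last_op;   /* whichever op went out last */
    };

    static void data_op_cleanup(struct rm_data_op *op);
    static void rdma_op_cleanup(struct rm_rdma_op *op);
    static void atomic_op_cleanup(struct rm_atomic_op *op);
    static void rds_message_put(struct rds_message *rm);

    static void send_complete(struct rds_message *rm, void *op)
    {
        /* per-op-type cleanup first */
        if (op == rm->data_op)
            data_op_cleanup(rm->data_op);
        else if (op == rm->rdma_op)
            rdma_op_cleanup(rm->rdma_op);
        else if (op == rm->atomic_op)
            atomic_op_cleanup(rm->atomic_op);

        /* whole-message cleanup only once the recorded last op is done */
        if (op == rm->m_last_op)
            rds_message_put(rm);
    }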
-
Andy Grover authored
It hasn't cropped up in the field, but this code ensures it is impossible to issue operations that pass an rdma cookie (DEST, MAP) in the same sendmsg call that's actually initiating rdma or atomic ops. Disallowing this perverse-but-technically-allowed usage makes silent RDMA heuristics slightly easier. Signed-off-by: Andy Grover <andy.grover@oracle.com>
-
Andy Grover authored
Add a flag to the API so users can indicate they want silent operations. This is needed because silent ops cannot be used with USE_ONCE MRs, so we can't just assume silent. Also, change send_xmit to do atomic op before rdma op if both are present, and centralize the hairy logic to determine if we want to attempt silent, or not. Signed-off-by: Andy Grover <andy.grover@oracle.com>
-
Andy Grover authored
Also, add a comment. Signed-off-by: Andy Grover <andy.grover@oracle.com>
-
Andy Grover authored
When dropping ops in the send queue, we notify the client of failed rdma ops they asked for notifications on, but not atomic ops. It should be for both. Signed-off-by: Andy Grover <andy.grover@oracle.com>
-
Andy Grover authored
rds_message_alloc_sgs() only works when nents is nonzero. Signed-off-by: Andy Grover <andy.grover@oracle.com>
-
Andy Grover authored
Do not allocate sgs for data for 0-length datagrams. Set data.op_active in rds_sendmsg() instead of rds_message_copy_from_user(). Signed-off-by: Andy Grover <andy.grover@oracle.com>
-
Andy Grover authored
Simplify rds_send_xmit(). Send a congestion map (via xmit_cong_map) without decrementing send_quota. Move resetting of conn xmit variables to end of loop. Update comments. Implement a special case to turn off sending an rds header when there is an atomic op and no other data. Signed-off-by: Andy Grover <andy.grover@oracle.com>
-
Andy Grover authored
For consistency. Signed-off-by: Andy Grover <andy.grover@oracle.com>
-
Andy Grover authored
A big changeset, but it's all pretty dumb. struct rds_rdma_op was already embedded in struct rm_rdma_op. Remove rds_rdma_op and put its members in rm_rdma_op. Rename members with an "op_" prefix instead of "r_", for consistency. Of course this breaks a lot, so fix up the code accordingly. Signed-off-by: Andy Grover <andy.grover@oracle.com>
-