- 09 Sep, 2010 40 commits
-
Andy Grover authored
We now ask the transport to give us an rm for the congestion map, and then we handle it normally. Previously, the transport defined a function that we would call to send a congestion map. Convert the TCP and loop transports to the new cong map method. Signed-off-by: Andy Grover <andy.grover@oracle.com>
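Roughly, the new scheme looks like this (a sketch; rds_message_map_pages() and the RDS_FLAG_CONG_BITMAP header flag follow the RDS tree, details simplified): the core asks for an rm wrapping the local cong map's pages and queues it on the normal send path.

    struct rds_message *rds_cong_update_alloc(struct rds_connection *conn)
    {
            struct rds_cong_map *map = conn->c_lcong;
            struct rds_message *rm;

            /* wrap the local cong map's pages in an ordinary rm */
            rm = rds_message_map_pages(map->m_page_addrs, RDS_CONG_MAP_BYTES);
            if (!IS_ERR(rm))
                    rm->m_inc.i_hdr.h_flags = RDS_FLAG_CONG_BITMAP;

            return rm;
    }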
-
Andy Grover authored
Now that we are signaling send completions much less often, we are likely to have dirty entries in the send queue when the connection is shut down (on rmmod, for example). These are cleaned up a little further down in conn_shutdown, but if we wait on ring_empty_wait for them, it'll never happen, and we hang on unload. Signed-off-by: Andy Grover <andy.grover@oracle.com>
-
Andy Grover authored
Previously, RDS would wait until the final send WR had completed and then handle cleanup. With silent ops, we do not know if an atomic, rdma, or data op will be last. This patch handles any of these cases by keeping a pointer to the last op in the message in m_last_op. When the TX completion event fires, RDS dispatches to per-op-type cleanup functions, and then does whole-message cleanup if the last op equaled m_last_op. This patch also moves towards having op-specific functions take the op struct, instead of the overall rm struct. rds_ib_connection has a pointer to keep track of a partially completed data send operation. This patch changes it from an rds_message pointer to the narrower rm_data_op pointer, and modifies places that use this pointer as needed. Signed-off-by: Andy Grover <andy.grover@oracle.com>
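A sketch of the dispatch; the s_rm back-pointer on the send work struct is an assumption here, and the unmap helpers are the per-op-type cleanup functions named above:

    static void rds_ib_send_complete_op(struct rds_ib_connection *ic,
                                        struct rds_ib_send_work *send)
    {
            struct rds_message *rm = send->s_rm;    /* assumed back-pointer */
            void *op = NULL;

            /* per-op-type cleanup first, keyed off the completed WR */
            switch (send->s_wr.opcode) {
            case IB_WR_SEND:
                    op = &rm->data;
                    rds_ib_send_unmap_data(ic, &rm->data);
                    break;
            case IB_WR_RDMA_WRITE:
            case IB_WR_RDMA_READ:
                    op = &rm->rdma;
                    rds_ib_send_unmap_rdma(ic, &rm->rdma);
                    break;
            case IB_WR_ATOMIC_CMP_AND_SWP:
            case IB_WR_ATOMIC_FETCH_AND_ADD:
                    op = &rm->atomic;
                    rds_ib_send_unmap_atomic(ic, &rm->atomic);
                    break;
            }

            /* whole-message cleanup only after the final op completes */
            if (op && op == rm->m_last_op)
                    rds_message_put(rm);
    }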
-
Andy Grover authored
It hasn't cropped up in the field, but this code ensures it is impossible to issue operations that pass an rdma cookie (DEST, MAP) in the same sendmsg call that's actually initiating rdma or atomic ops. Disallowing this perverse-but-technically-allowed usage makes silent RDMA heuristics slightly easier. Signed-off-by: Andy Grover <andy.grover@oracle.com>
-
Andy Grover authored
Add a flag to the API so users can indicate they want silent operations. This is needed because silent ops cannot be used with USE_ONCE MRs, so we can't just assume silent. Also, change send_xmit to do atomic op before rdma op if both are present, and centralize the hairy logic to determine if we want to attempt silent, or not. Signed-off-by: Andy Grover <andy.grover@oracle.com>
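The centralized decision boils down to something like this sketch (op_active/op_silent field names follow this series' conventions, but treat them as assumptions): a send may go silent only when every active rdma/atomic op asked for it, and only when such an op exists at all.

    static int rds_send_wants_silent(struct rds_message *rm)
    {
            /* every active rdma/atomic op must have asked for silent */
            if (rm->rdma.op_active && !rm->rdma.op_silent)
                    return 0;
            if (rm->atomic.op_active && !rm->atomic.op_silent)
                    return 0;

            /* and there must be such an op to carry the payload at all */
            return rm->rdma.op_active || rm->atomic.op_active;
    }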
-
Andy Grover authored
Also, add a comment. Signed-off-by: Andy Grover <andy.grover@oracle.com>
-
Andy Grover authored
When dropping ops in the send queue, we notify the client of failed rdma ops they asked for notifications on, but not atomic ops. It should be done for both. Signed-off-by: Andy Grover <andy.grover@oracle.com>
-
Andy Grover authored
rds_message_alloc_sgs() only works when nents is nonzero. Signed-off-by: Andy Grover <andy.grover@oracle.com>
-
Andy Grover authored
Do not allocate sgs for data for 0-length datagrams. Set data.op_active in rds_sendmsg() instead of rds_message_copy_from_user(). Signed-off-by: Andy Grover <andy.grover@oracle.com>
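A sketch of the resulting guard in rds_sendmsg(), using the RDS ceil() helper; error paths and surrounding context are simplified:

    if (payload_len) {
            rm->data.op_sg = rds_message_alloc_sgs(rm, ceil(payload_len, PAGE_SIZE));
            if (!rm->data.op_sg) {
                    ret = -ENOMEM;
                    goto out;
            }
            ret = rds_message_copy_from_user(rm, msg->msg_iov, payload_len);
            if (ret)
                    goto out;
            /* op_active is set here now, not in rds_message_copy_from_user() */
            rm->data.op_active = 1;
    }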
-
Andy Grover authored
Simplify rds_send_xmit(). Send a congestion map (via xmit_cong_map) without decrementing send_quota. Move resetting of conn xmit variables to end of loop. Update comments. Implement a special case to turn off sending an rds header when there is an atomic op and no other data. Signed-off-by: Andy Grover <andy.grover@oracle.com>
-
Andy Grover authored
For consistency. Signed-off-by: Andy Grover <andy.grover@oracle.com>
-
Andy Grover authored
A big changeset, but it's all pretty dumb. struct rds_rdma_op was already embedded in struct rm_rdma_op. Remove rds_rdma_op and put its members in rm_rdma_op. Rename members with an "op_" prefix instead of "r_", for consistency. Of course this breaks a lot, so fix up the code accordingly. Signed-off-by: Andy Grover <andy.grover@oracle.com>
-
Andy Grover authored
Add atomic_free_op function, analogous to rdma_free_op, and call it in rds_message_purge(). Signed-off-by: Andy Grover <andy.grover@oracle.com>
-
Andy Grover authored
cmsg_rdma_args just calls rdma_prepare and does a little arg checking -- not quite enough to justify its existence. Plus, it is the only caller of rdma_prepare(). Signed-off-by: Andy Grover <andy.grover@oracle.com>
-
Andy Grover authored
Signed-off-by: Andy Grover <andy.grover@oracle.com>
-
Andy Grover authored
Also, try to better document the locking around the rm and its m_inc in loop.c. Signed-off-by: Andy Grover <andy.grover@oracle.com>
-
Andy Grover authored
Maybe things worked fine with the flow control code running even in the non-flow-control case, but making it explicitly conditional makes the non-fc case easier to read. Signed-off-by: Andy Grover <andy.grover@oracle.com>
-
Andy Grover authored
Remove the unsignaled_bytes sysctl and the code to signal based on it. I believe unsignaled_wrs is more than sufficient for our purposes. Signed-off-by: Andy Grover <andy.grover@oracle.com>
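The surviving WR-count policy looks roughly like this, with i_unsignaled_wrs counting down per posted send:

    /* Delay signaling completions just enough to get the batching
     * benefits, but not so much that we create dead time on the wire. */
    if (ic->i_unsignaled_wrs-- == 0) {
            ic->i_unsignaled_wrs = rds_ib_sysctl_max_unsig_wrs;
            send->s_wr.send_flags |= IB_SEND_SIGNALED;
    }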
-
Andy Grover authored
Now that the header always goes first, it is possible to simplify rds_ib_xmit. Instead of having one path to handle 0-byte dgrams and another to handle >0, these can both be handled in one path. This lets us eliminate xmit_populate_wr(). Rename sent to bytes_sent, to differentiate it better from other variables named "send". Signed-off-by: Andy Grover <andy.grover@oracle.com>
-
Andy Grover authored
These functions were to cope with differently ordered sg entries depending on RDS 3.0 or 3.1+. Now that we've dropped 3.0 compatibility we no longer need them. Also, modify usage sites for these to refer to sge[0] or [1] directly. Reorder code to initialize header sgs first. Signed-off-by: Andy Grover <andy.grover@oracle.com>
-
Andy Grover authored
Signed-off-by: Andy Grover <andy.grover@oracle.com>
-
Andy Grover authored
RDS 3.0 connections (in OFED 1.3 and earlier) put the header at the end. 3.1 connections put it at the head. The code has significant added complexity in order to handle both configurations. In OFED 1.6 we can drop this and simplify the code by only supporting "header-first" configuration. This patch checks the protocol version, and if prior to 3.1, does not complete the connection. Signed-off-by: Andy Grover <andy.grover@oracle.com>
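The gate added to the connect-complete path looks essentially like this:

    if (conn->c_version < RDS_PROTOCOL(3, 1)) {
            printk(KERN_NOTICE "RDS/IB: Connection to %pI4 version %u.%u failed,"
                   " no longer supported\n",
                   &conn->c_faddr,
                   RDS_PROTOCOL_MAJOR(conn->c_version),
                   RDS_PROTOCOL_MINOR(conn->c_version));
            rds_conn_destroy(conn);
            return;
    }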
-
Andy Grover authored
Both atomics and rdmas need to convert IB-specific completion codes into RDS status codes. Rename rds_ib_rdma_send_complete to rds_ib_send_complete, and have it take a pointer to the function to call with the new error code. Signed-off-by: Andy Grover <andy.grover@oracle.com>
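A sketch of the shared helper: it maps the IB wc status onto an RDS status code and forwards it through the supplied per-op notifier (rds_rdma_send_complete or rds_atomic_send_complete).

    static void rds_ib_send_complete(struct rds_message *rm,
                                     int wc_status,
                                     void (*complete)(struct rds_message *rm, int status))
    {
            int notify_status;

            switch (wc_status) {
            case IB_WC_WR_FLUSH_ERR:
                    return;         /* flushed on teardown: nothing to report */
            case IB_WC_SUCCESS:
                    notify_status = RDS_RDMA_SUCCESS;
                    break;
            case IB_WC_REM_ACCESS_ERR:
                    notify_status = RDS_RDMA_REMOTE_ERROR;
                    break;
            default:
                    notify_status = RDS_RDMA_OTHER_ERROR;
                    break;
            }
            complete(rm, notify_status);
    }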
-
Andy Grover authored
Signed-off-by: Andy Grover <andy.grover@oracle.com>
-
Andy Grover authored
Tidy up some whitespace issues. Signed-off-by: Andy Grover <andy.grover@oracle.com>
-
Andy Grover authored
This does not appear to be necessary. Signed-off-by: Andy Grover <andy.grover@oracle.com>
-
Andy Grover authored
Instead of using a constant for initiator_depth and responder_resources, read the per-QP values when the device is enumerated, and then use these values when creating the connection. Signed-off-by: Andy Grover <andy.grover@oracle.com>
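A sketch of the enumeration-time caching; the max_initiator_depth/max_responder_resources fields on the per-device struct are assumptions here:

    static void rds_ib_dev_read_limits(struct rds_ib_device *rds_ibdev,
                                       struct ib_device *device)
    {
            struct ib_device_attr attr;

            if (ib_query_device(device, &attr))
                    return;

            /* per-QP RDMA read/atomic limits, cached for connect time */
            rds_ibdev->max_initiator_depth = attr.max_qp_init_rd_atom;
            rds_ibdev->max_responder_resources = attr.max_qp_rd_atom;
    }

    /* later, when filling struct rdma_conn_param for connect/accept: */
    conn_param->initiator_depth = rds_ibdev->max_initiator_depth;
    conn_param->responder_resources = rds_ibdev->max_responder_resources;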
-
Andy Grover authored
Implement a CMSG-based interface to do FADD and CSWP ops. Alter send routines to handle atomic ops. Add atomic counters to stats. Add xmit_atomic() to struct rds_transport. Inline rds_ib_send_unmap_rdma into unmap_rm. Signed-off-by: Andy Grover <andy.grover@oracle.com>
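A sketch of the cmsg plumbing; the RDS_CMSG_ATOMIC_* types and the rds_cmsg_atomic() handler follow the description above, and the exact uapi values are not shown here:

    for (cmsg = CMSG_FIRSTHDR(msg); cmsg; cmsg = CMSG_NXTHDR(msg, cmsg)) {
            if (cmsg->cmsg_level != SOL_RDS)
                    continue;

            switch (cmsg->cmsg_type) {
            case RDS_CMSG_ATOMIC_CSWP:      /* compare-and-swap */
            case RDS_CMSG_ATOMIC_FADD:      /* fetch-and-add */
                    ret = rds_cmsg_atomic(rs, rm, cmsg);
                    break;
            default:
                    ret = -EINVAL;
            }
            if (ret)
                    break;
    }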
-
Andy Grover authored
The previous code was correct, but made the assumption that if r_notifier was non-NULL then either r_recverr or r_notify was true. Valid, but fragile. Changed to explicitly check r_recverr (shows up in greps for recverr now, too.) Signed-off-by: Andy Grover <andy.grover@oracle.com>
-
Andy Grover authored
rds_message_alloc_sgs() now returns correctly-initialized sg lists, so callers need not do this themselves. Signed-off-by: Andy Grover <andy.grover@oracle.com>
-
Andy Grover authored
This eliminates a separate memory alloc, although it is now necessary to add an "r_active" flag, since it is no longer possible to use the m_rdma_op pointer as an indicator of whether an rdma op is present. rdma SGs are now allocated from the rm's sg pool. rds_rm_size also gets bigger. It's a little inefficient to run through CMSGs twice, but it makes later steps a lot smoother. Signed-off-by: Andy Grover <andy.grover@oracle.com>
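The shape of the change, in miniature (member naming approximate; a later patch in this list renames the r_* fields to op_*):

    /* before: separately allocated op, presence tested via pointer */
    if (rm->m_rdma_op != NULL)
            rds_rdma_send_complete(rm, status);

    /* after: op embedded in the rm, presence tested via the new flag;
     * its scatterlist now comes from the rm's sg pool */
    if (rm->m_rdma_op.r_active)
            rds_rdma_send_complete(rm, status);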
-
Andy Grover authored
RDMA is now an intrinsic part of RDS, so it's easier to just have a single header. Signed-off-by: Andy Grover <andy.grover@oracle.com>
-
Andy Grover authored
r_m_copy_from_user used to allocate the rm as well as kernel buffers for the data, and then copy the data in. Now, sendmsg() allocates the rm, although the data buffer alloc still happens in r_m_copy_from_user. SGs are still allocated with the rm, but now r_m_alloc_sgs() is used to reserve them. This allows multiple SG lists to be allocated from a single rm -- this is important once we also want to alloc our rdma sgl from this pool. Signed-off-by: Andy Grover <andy.grover@oracle.com>
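A sketch of the reservation helper, close to the description; the m_used_sgs/m_total_sgs bookkeeping on the rm and the sgs-follow-the-rm layout are assumptions here:

    struct scatterlist *rds_message_alloc_sgs(struct rds_message *rm, int nents)
    {
            /* all sgs for the rm live in one array allocated right after it */
            struct scatterlist *sg_first = (struct scatterlist *)&rm[1];
            struct scatterlist *sg_ret;

            WARN_ON(rm->m_used_sgs + nents > rm->m_total_sgs);

            /* hand out the next nents entries of the rm's pool */
            sg_ret = &sg_first[rm->m_used_sgs];
            rm->m_used_sgs += nents;

            return sg_ret;
    }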
-
Andy Grover authored
First, it looks to me like the atomic_inc is wrong. We should be decrementing refcount only once here, no? It's already being done by the mr_put() at the end. Second, simplify the logic a bit by bailing early (with a warning) if !mr. Signed-off-by: Andy Grover <andy.grover@oracle.com>
-
Andy Grover authored
Clearly separate rdma-related variables in rm from data-related ones. This is in anticipation of adding atomic support. Signed-off-by: Andy Grover <andy.grover@oracle.com>
-
Andy Grover authored
Favor "if (foo)" style over "if (foo != NULL)". Signed-off-by: Andy Grover <andy.grover@oracle.com>
-
Andy Grover authored
This fits better in connection.c, rather than threads.c. Signed-off-by: Andy Grover <andy.grover@oracle.com>
-
Andy Grover authored
Do not nest m_rs_lock under c_lock. Disable interrupts in {rdma,atomic}_send_complete. Signed-off-by: Andy Grover <andy.grover@oracle.com>
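A sketch of the resulting notifier path, simplified: m_rs_lock is taken irqsave (these run from the send completion handler) and is never nested inside c_lock.

    void rds_rdma_send_complete(struct rds_message *rm, int status)
    {
            struct rds_sock *rs = NULL;
            struct rds_notifier *notifier;
            unsigned long flags;

            spin_lock_irqsave(&rm->m_rs_lock, flags);

            if (rm->rdma.op_active && rm->rdma.op_notifier && rm->m_rs) {
                    notifier = rm->rdma.op_notifier;
                    rs = rm->m_rs;
                    sock_hold(rds_rs_to_sk(rs));

                    /* queue the notification for the socket's owner */
                    notifier->n_status = status;
                    spin_lock(&rs->rs_lock);
                    list_add_tail(&notifier->n_list, &rs->rs_notify_queue);
                    spin_unlock(&rs->rs_lock);

                    rm->rdma.op_notifier = NULL;
            }

            spin_unlock_irqrestore(&rm->m_rs_lock, flags);

            if (rs) {
                    rds_wake_sk_sleep(rs);
                    sock_put(rds_rs_to_sk(rs));
            }
    }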
-
Andy Grover authored
This function has been the source of numerous bugs; it's just too complicated. Simplified to nest spinlocks cleanly within the second loop body, and kick out early if there are no rms to drop. This will be a little slower because conn lock is grabbed for each entry instead of "caching" the lock across rms, but this should be entirely irrelevant to fastpath performance. Signed-off-by: Andy Grover <andy.grover@oracle.com>
-
Tina Yang authored
On second look at this bug (OFED #2002), it seems that the collision is not with the retransmission queue (packet acked by the peer), but with the local send completion. A theoretical sequence of events (from time t0 to t3) is thought to be as follows:

    Thread #1                                   Thread #2
    t0: sock_release
        rds_release
        rds_send_drop_to /* wait on send completion */
                                                t1: rds_ib_send_cq_comp_handler
                                                    rds_ib_send_unmap_rm
                                                    rds_message_unmapped /* wake up #1 @ t0 */
    t2: rds_rdma_drop_keys() /* destroy & free all mrs */
                                                t3: rds_message_put
                                                    rds_message_purge
                                                    rds_mr_put /* memory corruption detected */

The problem with rds_rdma_drop_keys() is that it can drop an mr's refcount more times than it should: it keeps putting the mr for as long as the mr remains in the tree (mr->r_refcount > 0), when it should drop only the single reference held by the tree:

    /* Release any MRs associated with this socket */
    while ((node = rb_first(&rs->rs_rdma_keys))) {
            mr = container_of(node, struct rds_mr, r_rb_node);
            if (mr->r_trans == rs->rs_transport)
                    mr->r_invalidate = 0;
            rds_mr_put(mr);
    }

The correct way of doing it is to remove the mr from the tree and rds_destroy_mr() it first, then call rds_mr_put() to decrement its reference count by one. Whichever thread holds the last reference will free the mr via rds_mr_put(). Signed-off-by: Tina Yang <tina.yang@oracle.com> Signed-off-by: Andy Grover <andy.grover@oracle.com>
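A sketch of the fixed loop; rs_rdma_lock is assumed here as the lock guarding the tree, dropped around the teardown calls:

    /* Release any MRs associated with this socket: drop exactly the
     * tree's reference, then let the last holder free the mr. */
    while ((node = rb_first(&rs->rs_rdma_keys))) {
            mr = container_of(node, struct rds_mr, r_rb_node);
            if (mr->r_trans == rs->rs_transport)
                    mr->r_invalidate = 0;
            rb_erase(&mr->r_rb_node, &rs->rs_rdma_keys);
            RB_CLEAR_NODE(&mr->r_rb_node);
            spin_unlock_irqrestore(&rs->rs_rdma_lock, flags);
            rds_destroy_mr(mr);
            rds_mr_put(mr);
            spin_lock_irqsave(&rs->rs_rdma_lock, flags);
    }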
-