fs: dlm: add unbound flag to dlm_io workqueue
This patch adds the WQ_UNBOUND flag to the lowcomms dlm_io workqueue, which handles the socket io used to send and receive dlm messages.

In a 3 node cluster each node holds 2 sockets, one per peer node. Each socket has two workers, one for send and one for receive, which do their work by calling the socket API. Each worker runs its task in order, because dlm messages are sent over an ordered, stream-based socket. On the receive side, the receive buffer is queued up for the ordered dlm_process workqueue to parse the received dlm messages. The parsing currently needs to be done in an ordered, synchronized way, because dlm message processing is not designed to run in parallel.

Given all those workqueue behaviours in lowcomms, the dlm_io workqueue is used only for socket handling. With 2 workers per socket (send and receive), a node in a 3 node cluster ends up with 4 workers. Without the WQ_UNBOUND flag the workers are tied to a CPU and can never switch; this can be an advantage because of local CPU execution. However, with the dlm_locktorture testcase I experienced that not all workers were always in use, and my assumption is that some workers were bound to the same CPU. We should always send or receive when we are ready to do so, which is one reason why we disable the Nagle algorithm on the sockets. It is safe to do the socket io handling on any CPU, and to let it switch CPUs at runtime: there is no assumption that a worker stays on the same CPU, and no need for the workqueue concurrency model in which each worker can only run on one CPU. The lowcomms queue_work() mechanism has a higher-level flag to make sure no new work is scheduled while the previous worker has not yet signalled completion, which keeps the socket handling ordered.

Therefore this patch sets the WQ_UNBOUND flag to allow the workers to be executed on any available CPU.

Signed-off-by: Alexander Aring <aahringo@redhat.com>
Signed-off-by: David Teigland <teigland@redhat.com>
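A minimal sketch of the change described above, assuming the dlm_io workqueue is created with alloc_workqueue() and that WQ_HIGHPRI and WQ_MEM_RECLAIM were the flags already in use (the exact flag set in lowcomms.c may differ):

        /* before: each io worker is bound to the CPU it was queued on */
        io_workqueue = alloc_workqueue("dlm_io",
                                       WQ_HIGHPRI | WQ_MEM_RECLAIM, 0);

        /* after: WQ_UNBOUND lets the scheduler place the io workers on
         * any available CPU instead of pinning them per CPU
         */
        io_workqueue = alloc_workqueue("dlm_io",
                                       WQ_HIGHPRI | WQ_MEM_RECLAIM |
                                       WQ_UNBOUND, 0);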
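For the Nagle note above: in recent kernels, disabling the Nagle algorithm on a kernel TCP socket is done with tcp_sock_set_nodelay(); the surrounding context here is a sketch, not the exact lowcomms connect path:

        /* disable the Nagle algorithm so queued dlm messages are sent
         * immediately instead of being coalesced into larger segments
         */
        tcp_sock_set_nodelay(sock->sk);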
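The higher-level ordering flag mentioned above could look like the following sketch; the CF_SEND_PENDING bit and the helper name are illustrative assumptions, not necessarily the exact lowcomms identifiers:

        /* queue send work only if none is pending; the running worker
         * clears the bit once it has signalled completion, so at most
         * one worker per socket direction is active and the socket
         * handling stays ordered even on an unbound workqueue
         */
        static void queue_swork(struct connection *con)
        {
                if (!test_and_set_bit(CF_SEND_PENDING, &con->flags))
                        queue_work(io_workqueue, &con->swork);
        }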