- 25 Apr, 2022 36 commits
-
-
Jens Axboe authored
We currently check REQ_F_POLLED before arming async poll for a notification to retry. If it's set, then we don't allow poll and will punt to io-wq instead. This is done to prevent a situation where a buggy driver will repeatedly return that there's space/data available yet we get -EAGAIN. However, if we already transferred data, then it should be safe to rely on poll again. Gate the check on whether or not REQ_F_PARTIAL_IO is also set. Signed-off-by: Jens Axboe <axboe@kernel.dk>
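For illustration, a minimal self-contained sketch of the gating this entry describes; the flag values and function name below are stand-ins, not the upstream diff (the real REQ_F_POLLED/REQ_F_PARTIAL_IO bits are internal to fs/io_uring.c):

```c
#include <stdbool.h>
#include <stdio.h>

/* Stand-in request flags; the real bits are internal to fs/io_uring.c. */
#define REQ_F_POLLED     (1U << 0)	/* async poll was already armed once */
#define REQ_F_PARTIAL_IO (1U << 1)	/* some data has already been transferred */

/* May async poll be (re-)armed for a request with these flags? */
static bool may_arm_poll(unsigned int flags)
{
	/*
	 * Old rule: any previously polled request is punted to io-wq.
	 * New rule: only refuse re-poll when no progress was made, i.e.
	 * REQ_F_POLLED is set but REQ_F_PARTIAL_IO is not.
	 */
	return (flags & (REQ_F_POLLED | REQ_F_PARTIAL_IO)) != REQ_F_POLLED;
}

int main(void)
{
	printf("fresh request:     %d\n", may_arm_poll(0));
	printf("polled, no data:   %d\n", may_arm_poll(REQ_F_POLLED));
	printf("polled, short I/O: %d\n",
	       may_arm_poll(REQ_F_POLLED | REQ_F_PARTIAL_IO));
	return 0;
}
```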
-
Jens Axboe authored
Like commit 7ba89d2a for recv/recvmsg, support MSG_WAITALL for the send side. If this flag is set and we do a short send, retry for a stream or seqpacket socket. Signed-off-by: Jens Axboe <axboe@kernel.dk>
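For context, a hedged userspace sketch of what this enables through liburing (the helper name and user_data tag are illustrative; assumes an initialized ring and a connected stream or seqpacket socket):

```c
#include <liburing.h>
#include <sys/socket.h>

/* Queue a send that should complete with the full length or an error. */
int queue_full_send(struct io_uring *ring, int sockfd,
		    const void *buf, size_t len)
{
	struct io_uring_sqe *sqe = io_uring_get_sqe(ring);

	if (!sqe)
		return -1;
	/* MSG_WAITALL: the kernel retries short sends instead of completing short */
	io_uring_prep_send(sqe, sockfd, buf, len, MSG_WAITALL);
	sqe->user_data = 0x1234;	/* arbitrary tag for the completion */
	return io_uring_submit(ring);
}
```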
-
Jens Axboe authored
Rather than match on a specific key, be it user_data or file, allow canceling any request that we can lookup. Works like IORING_ASYNC_CANCEL_ALL in that it cancels multiple requests, but it doesn't key off user_data or the file. Can't be set with IORING_ASYNC_CANCEL_FD, as that's a key selector. Only one may be used at a time. Signed-off-by: Jens Axboe <axboe@kernel.dk> Link: https://lore.kernel.org/r/20220418164402.75259-6-axboe@kernel.dk Signed-off-by: Jens Axboe <axboe@kernel.dk>
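A hedged liburing-flavoured sketch of using the new flag (requires headers and a kernel new enough to define IORING_ASYNC_CANCEL_ANY; the SQE is filled by hand to stay independent of any particular io_uring_prep_cancel() helper version, and the helper name and tag are illustrative):

```c
#include <liburing.h>
#include <string.h>

/* Queue a cancel that matches every cancelable request on the ring. */
int queue_cancel_any(struct io_uring *ring)
{
	struct io_uring_sqe *sqe = io_uring_get_sqe(ring);

	if (!sqe)
		return -1;
	memset(sqe, 0, sizeof(*sqe));
	sqe->opcode = IORING_OP_ASYNC_CANCEL;
	sqe->cancel_flags = IORING_ASYNC_CANCEL_ANY;	/* no key, match anything */
	sqe->user_data = 0xcafe;	/* tag so the cancel's own CQE can be spotted */
	/* the cancel's CQE res reports how many requests were canceled, or -errno */
	return io_uring_submit(ring);
}
```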
-
Jens Axboe authored
Currently sqe->addr must contain the user_data of the request being canceled. Introduce the IORING_ASYNC_CANCEL_FD flag, which tells the kernel that we're keying off the file fd instead for cancelation. This allows canceling any request that a) uses a file, and b) was assigned the file based on the value being passed in. Signed-off-by: Jens Axboe <axboe@kernel.dk> Link: https://lore.kernel.org/r/20220418164402.75259-5-axboe@kernel.dk
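Similarly, a hedged sketch of fd-keyed cancelation combined with IORING_ASYNC_CANCEL_ALL from the entry below (helper name illustrative; real code would match the cancel's CQE by user_data rather than assuming it is the next completion):

```c
#include <liburing.h>
#include <string.h>

/* Cancel all pending requests that were submitted against 'fd'. */
int cancel_requests_for_fd(struct io_uring *ring, int fd)
{
	struct io_uring_sqe *sqe = io_uring_get_sqe(ring);
	struct io_uring_cqe *cqe;
	int ret;

	if (!sqe)
		return -1;
	memset(sqe, 0, sizeof(*sqe));
	sqe->opcode = IORING_OP_ASYNC_CANCEL;
	sqe->fd = fd;					/* the cancelation key */
	sqe->cancel_flags = IORING_ASYNC_CANCEL_FD |	/* key off the fd... */
			    IORING_ASYNC_CANCEL_ALL;	/* ...and cancel every match */
	io_uring_submit(ring);

	ret = io_uring_wait_cqe(ring, &cqe);	/* simplification, see above */
	if (ret < 0)
		return ret;
	ret = cqe->res;			/* number of requests canceled, or -errno */
	io_uring_cqe_seen(ring, cqe);
	return ret;
}
```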
-
Jens Axboe authored
The current cancelation will lookup and cancel the first request it finds based on the key passed in. Add a flag that allows canceling any request that matches the key. It completes with the number of requests found and canceled, or res < 0 if an error occurred. Signed-off-by: Jens Axboe <axboe@kernel.dk> Link: https://lore.kernel.org/r/20220418164402.75259-4-axboe@kernel.dk
-
Jens Axboe authored
In preparation for being able to not only key cancel off the user_data, pass in the io_cancel_data struct for the various functions that deal with request cancelation. Signed-off-by: Jens Axboe <axboe@kernel.dk> Link: https://lore.kernel.org/r/20220418164402.75259-3-axboe@kernel.dk
-
Jens Axboe authored
It's only called from one location, and it always passes in 'false'. Kill the argument, and just pass in 'false' to io_poll_find(). Signed-off-by: Jens Axboe <axboe@kernel.dk> Link: https://lore.kernel.org/r/20220418164402.75259-2-axboe@kernel.dk
-
Pavel Begunkov authored
Split timeout handling into removal + failing, so we can reduce spinlocking time and remove another instance of triple nested locking. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/0f00d115f9d4c5749028f19623708ad3695512d6.1650458197.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Pavel Begunkov authored
Move ->timeout_lock grabbing inside of io_timeout_cancel(), so we can do io_req_task_queue_fail() outside of the lock. It's much nicer than relying on triple nested locking. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/cde758c2897930d31e205ed8f476d4ec879a8849.1650458197.git.asml.silence@gmail.com [axboe: drop now wrong timeout_lock annotation] Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Jens Axboe authored
A previous commit removed SCM accounting for non-unix sockets, as those are the only ones that can cause a fixed file reference. While that is true, it also means we're now dereferencing the file as part of the workqueue driven __io_sqe_files_unregister() after the process has exited. This isn't safe for SCM files, as unix gc may have already reaped them when the process exited. KASAN complains about this:

[ 12.307040] Freed by task 0:
[ 12.307592] kasan_save_stack+0x28/0x4c
[ 12.308318] kasan_set_track+0x28/0x38
[ 12.309049] kasan_set_free_info+0x24/0x44
[ 12.309890] ____kasan_slab_free+0x108/0x11c
[ 12.310739] __kasan_slab_free+0x14/0x1c
[ 12.311482] slab_free_freelist_hook+0xd4/0x164
[ 12.312382] kmem_cache_free+0x100/0x1dc
[ 12.313178] file_free_rcu+0x58/0x74
[ 12.313864] rcu_core+0x59c/0x7c0
[ 12.314675] rcu_core_si+0xc/0x14
[ 12.315496] _stext+0x30c/0x414
[ 12.316287]
[ 12.316687] Last potentially related work creation:
[ 12.317885] kasan_save_stack+0x28/0x4c
[ 12.318845] __kasan_record_aux_stack+0x9c/0xb0
[ 12.319976] kasan_record_aux_stack_noalloc+0x10/0x18
[ 12.321268] call_rcu+0x50/0x35c
[ 12.322082] __fput+0x2fc/0x324
[ 12.322873] ____fput+0xc/0x14
[ 12.323644] task_work_run+0xac/0x10c
[ 12.324561] do_notify_resume+0x37c/0xe74
[ 12.325420] el0_svc+0x5c/0x68
[ 12.326050] el0t_64_sync_handler+0xb0/0x12c
[ 12.326918] el0t_64_sync+0x164/0x168
[ 12.327657]
[ 12.327976] Second to last potentially related work creation:
[ 12.329134] kasan_save_stack+0x28/0x4c
[ 12.329864] __kasan_record_aux_stack+0x9c/0xb0
[ 12.330735] kasan_record_aux_stack+0x10/0x18
[ 12.331576] task_work_add+0x34/0xf0
[ 12.332284] fput_many+0x11c/0x134
[ 12.332960] fput+0x10/0x94
[ 12.333524] __scm_destroy+0x80/0x84
[ 12.334213] unix_destruct_scm+0xc4/0x144
[ 12.334948] skb_release_head_state+0x5c/0x6c
[ 12.335696] skb_release_all+0x14/0x38
[ 12.336339] __kfree_skb+0x14/0x28
[ 12.336928] kfree_skb_reason+0xf4/0x108
[ 12.337604] unix_gc+0x1e8/0x42c
[ 12.338154] unix_release_sock+0x25c/0x2dc
[ 12.338895] unix_release+0x58/0x78
[ 12.339531] __sock_release+0x68/0xec
[ 12.340170] sock_close+0x14/0x20
[ 12.340729] __fput+0x18c/0x324
[ 12.341254] ____fput+0xc/0x14
[ 12.341763] task_work_run+0xac/0x10c
[ 12.342367] do_notify_resume+0x37c/0xe74
[ 12.343086] el0_svc+0x5c/0x68
[ 12.343510] el0t_64_sync_handler+0xb0/0x12c
[ 12.344086] el0t_64_sync+0x164/0x168

We have an extra bit we can use in file_ptr on 64-bit, use that to store whether this file is SCM'ed or not, avoiding the need to look at the file contents itself. This does mean that 32-bit will be stuck with SCM for all registered files, just like 64-bit did before the referenced commit.

Fixes: 1f59bc0f ("io_uring: don't scm-account for non af_unix sockets")
Signed-off-by: Jens Axboe <axboe@kernel.dk>
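A self-contained sketch of the pointer-tagging idea this fix relies on; the tag bit, names and helpers are illustrative, not the kernel's FFS_* layout (which differs between 32- and 64-bit):

```c
#include <stdalign.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical spare bit; the real FFS_* bits are internal to fs/io_uring.c. */
#define TAG_SCM	((uintptr_t)0x4)

/* Stash the SCM flag in the table slot so the file never needs dereferencing. */
static uintptr_t pack_file_ptr(void *file, int is_scm)
{
	return (uintptr_t)file | (is_scm ? TAG_SCM : 0);
}

static void *unpack_file(uintptr_t file_ptr)
{
	return (void *)(file_ptr & ~TAG_SCM);
}

static int file_is_scm(uintptr_t file_ptr)
{
	return !!(file_ptr & TAG_SCM);
}

int main(void)
{
	static alignas(8) char dummy_file[8];	/* stands in for a struct file */
	uintptr_t slot = pack_file_ptr(dummy_file, 1);

	printf("file=%p scm=%d\n", unpack_file(slot), file_is_scm(slot));
	return 0;
}
```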
-
Pavel Begunkov authored
The ctx argument of io_req_put_rsrc() is not used, kill it. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/bb51bf3ff02775b03e6ea21bc79c25d7870d1644.1650311386.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Pavel Begunkov authored
Add a simple helper to encapsulate dropping rsrc node references; it's cleaner and will help if we'd change rsrc refcounting or play with percpu_ref_put() [no]inlining. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/63fdd953ac75898734cd50e8f69e95e6664f46fe.1650311386.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Pavel Begunkov authored
req->fixed_rsrc_refs keeps a pointer to rsrc node pcpu references, but it's more natural just to store rsrc node directly. There were some reasons for that in the past but not anymore. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/cee1c86ec9023f3e4f6ce8940d58c017ef8782f4.1650311386.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Pavel Begunkov authored
All io_assign_file() callers do error handling themselves, so req_set_fail() in io_assign_file()'s fail path needlessly bloats the kernel and is not the best abstraction to have. Simplify the error path. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/eff77fb1eac2b6a90cca5223813e6a396ffedec0.1650311386.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Pavel Begunkov authored
We have io_ring_submit_[un]lock() functions helping us with conditional ->uring_lock locking, use them in io_file_get_fixed() instead of hand coding. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/c9c9ff1e046f6eb68da0a251962a697f8a2275fa.1650311386.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Pavel Begunkov authored
We have several racy reads, mark them with data_race() to demonstrate this fact. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/7e56e750d294c70b2a56938bd733386f19f0eb53.1650056133.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
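For readers unfamiliar with the annotation: data_race() comes from <linux/compiler.h> and tells KCSAN that a racy access is intentional. A tiny, self-contained illustration with a userspace stand-in for the macro and a hypothetical field:

```c
#include <stdio.h>

/*
 * Userspace stand-in for the kernel's data_race(): in the kernel it marks
 * the access as an intentional race for KCSAN; the access itself is unchanged.
 */
#define data_race(expr) (expr)

static unsigned int shared_counter;	/* hypothetical field read without a lock */

int main(void)
{
	/* A lockless peek, documented as an intentional race. */
	unsigned int snapshot = data_race(shared_counter);

	printf("snapshot=%u\n", snapshot);
	return 0;
}
```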
-
Pavel Begunkov authored
Inline io_req_complete_fail_submit(), there is only one caller and the name doesn't tell us much. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/fe5851af01dcd39fc84b71b8539c7cbe4658fb6d.1650056133.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Pavel Begunkov authored
Remove one extra if in the non-linked path of io_submit_sqe(). Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/03183199d1bf494b4a72eca16d792c8a5945acb4.1650056133.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Pavel Begunkov authored
Remove the lazy link fail logic from io_submit_sqe() and hide it in a helper. It simplifies the code and will be needed in the next patches. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/6a68aca9cf4492132da1d7c8a09068b74aba3c65.1650056133.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Pavel Begunkov authored
Add a macro for all link request flags to avoid duplication. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/df38b883e31e7e0ca4e364d25a0743862961b180.1650056133.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Pavel Begunkov authored
io_queue_sqe() is a part of the submission path and we try hard to keep it inlined, so shed some extra bytes from it by moving the error-checking part into io_queue_sqe_arm_apoll() and renaming it accordingly. Note: io_queue_sqe_arm_apoll() is not inlined, thus the patch doesn't change the number of function calls for the apoll path. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/9b79edd246336decfaca79b949a15ac69123490d.1650056133.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Pavel Begunkov authored
Rename io_queue_async_work(). The name is pretty old but now doesn't reflect well what the function is doing. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/5d4b25c54cccf084f9f2fd63bd4e4fa4515e998e.1650056133.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Pavel Begunkov authored
Inline io_queue_sqe() as there is only one caller left, and rename __io_queue_sqe(). Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/d5742683b7a7caceb1c054e91e5b9135b0f3b858.1650056133.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Pavel Begunkov authored
We try to aggressively inline the submission path, so it's a good idea to not pollute it with colder code. One such piece is linked timeout preparation + queue, which can be extracted into a function. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/ecf74df7ac77389b6d9211211ec4954e91de98ba.1650056133.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Pavel Begunkov authored
Inline io_free_req() into its only user and remove an underscore prefix from __io_free_req(). Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/ed114edef5c256a644f4839bb372df70d8df8e3f.1650056133.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Pavel Begunkov authored
We have several spots where a call to io_fill_cqe_req() is immediately followed by io_put_req_deferred(). Replace them with __io_req_complete_post() and get rid of io_put_req_deferred() and io_fill_cqe_req().

> size ./fs/io_uring.o
   text    data     bss     dec     hex filename
  86942   13734       8  100684   1894c ./fs/io_uring.o
> size ./fs/io_uring.o
   text    data     bss     dec     hex filename
  86438   13654       8  100100   18704 ./fs/io_uring.o

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/10672a538774ac8986bee6468d960527af59169d.1650056133.git.asml.silence@gmail.com [axboe: fold in followup fix] Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Pavel Begunkov authored
Get rid of some useless local variables. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/7798327b684b7015f7e4300420142ddfcd317297.1650056133.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Pavel Begunkov authored
When we meet PF_EXITING in io_poll_check_events(), don't overcomplicate the code with io_poll_mark_cancelled() but just return -ECANCELED and the callers will deal with the rest. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/f0cc981af82a5b193658f8f44397eeb3bf838b7b.1650056133.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Pavel Begunkov authored
io_get_cqe() is expensive because of a bunch of loads, masking, etc. However, most of the time we should have enough entries in the CQ, so we can cache two pointers representing a range of contiguous CQE memory we can use. When the range is exhausted we'll go through a slower path to set up a new range. When there are no CQEs available, pointers will naturally point to the same address. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/487eeef00f3146537b3d9c1a9cef2fc0b9a86f81.1649771823.git.asml.silence@gmail.com [axboe: santinel -> sentinel] Signed-off-by: Jens Axboe <axboe@kernel.dk>
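A simplified, self-contained sketch of the two-pointer caching idea (names mirror the cached/sentinel description above but the code is illustrative; the real implementation also handles consumer progress, overflow and big CQEs):

```c
#include <stdio.h>

struct cqe { unsigned long long user_data; int res; unsigned int flags; };

struct ctx {
	struct cqe   *cq_ring;		/* base of the CQE array */
	unsigned int  cq_entries;	/* power-of-two ring size */
	unsigned int  cached_cq_tail;	/* next index to produce */
	struct cqe   *cqe_cached;	/* next free CQE in the cached range */
	struct cqe   *cqe_sentinel;	/* end of the cached range */
};

/* Slow path: set up a new contiguous range of CQEs up to the ring end. */
static struct cqe *refill_cqe_cache(struct ctx *ctx)
{
	unsigned int off = ctx->cached_cq_tail & (ctx->cq_entries - 1);

	ctx->cqe_cached = &ctx->cq_ring[off];
	ctx->cqe_sentinel = ctx->cqe_cached + (ctx->cq_entries - off);
	/* (the real code also caps the range by how far the consumer got) */
	return ctx->cqe_cached;
}

/* Fast path: hand out the next cached CQE without masking or extra loads. */
static struct cqe *get_cqe(struct ctx *ctx)
{
	if (ctx->cqe_cached == ctx->cqe_sentinel)
		refill_cqe_cache(ctx);
	ctx->cached_cq_tail++;
	return ctx->cqe_cached++;
}

int main(void)
{
	static struct cqe ring[8];
	struct ctx ctx = { .cq_ring = ring, .cq_entries = 8 };

	for (int i = 0; i < 10; i++)
		get_cqe(&ctx)->res = i;		/* wraps after 8 entries */
	printf("produced %u CQEs\n", ctx.cached_cq_tail);
	return 0;
}
```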
-
Pavel Begunkov authored
With all the inlining, io_submit_sqe() is huge and usually ends up calling some other functions. We decrement @left in io_submit_sqes() just before calling io_submit_sqe() and use it later after the call. Considering how huge io_submit_sqe() is, there is not much hope @left will be treated gracefully by compilers. Decrement it after the call: not only is that easier on register spilling and probably saves a stack write/read, but on x64 it also lets the CPU flags set by the dec be used instead of a separate read/write and test. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/807f9a276b54ee8ff4e42e2b78721484f1c71743.1649771823.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Pavel Begunkov authored
Instead of keeping @submitted in io_submit_sqes(), which for each iteration requires comparison with the initial number of SQEs, store the number of SQEs left to submit. We'll need the total only when we're done with SQE handling. Note: if we can't allocate a req for the first SQE, we have always returned -EAGAIN to userspace; preserve this behaviour by looking into the cache in a slow path. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/c3b3df9aeae4c2f7a53fd8386385742e4e261e77.1649771823.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
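A minimal sketch of the loop shape described in this entry and the previous one: track SQEs left rather than SQEs submitted, and decrement only after the per-SQE call (names and the failure rule are illustrative, not the kernel loop):

```c
#include <stdbool.h>
#include <stdio.h>

/* Stand-in for handling one SQE; pretend the 4th one fails (illustrative). */
static bool submit_one(unsigned int idx)
{
	return idx != 3;
}

/* Count 'left' down instead of counting 'submitted' up towards nr. */
static unsigned int submit_sqes(unsigned int nr)
{
	unsigned int left = nr;

	do {
		if (!submit_one(nr - left))
			break;
	} while (--left);	/* decrement after the call, branch on the result */

	return nr - left;	/* how many SQEs were actually handled */
}

int main(void)
{
	printf("submitted %u of 8\n", submit_sqes(8));
	return 0;
}
```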
-
Pavel Begunkov authored
Don't hand-code wq_stack_add_head() onto ->free_list, which serves for recycling io_kiocb; add a helper doing it for us. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/f206f575486a8dd3d52f074ab37ed146b2d215b7.1649771823.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Pavel Begunkov authored
Add io_req_cache_empty(), which checks if there are requests in the inline req cache or not. It'll be needed in the future, but also nicely cleans up a few spots poking into ->free_list directly. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/b18662389f3fb483d0bd07906647f65f6037475a.1649771823.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Pavel Begunkov authored
io_flush_cached_reqs() isn't descriptive and has only one caller, inline it into __io_alloc_req_refill(). Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/ec38abe65a883d9fe6b169793119ce86806655a4.1649771823.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Pavel Begunkov authored
All good users should not set IOSQE_IO_*LINK flags for the last request of a link. io_uring flushes collected links at the end of submission, but it's not the optimal way and so we don't care too much about it. Replace the io_queue_sqe() call with io_queue_sqe_fallback(), as the former one is inlined and will generate a bunch of extra code. This will also help compilers with the submission path inlining.

> size ./fs/io_uring.o
   text    data     bss     dec     hex filename
  87265   13734       8  101007   18a8f ./fs/io_uring.o
> size ./fs/io_uring.o
   text    data     bss     dec     hex filename
  87073   13734       8  100815   189cf ./fs/io_uring.o

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/01fb5e417ef49925d544a0b0bae30409845ed2b4.1649771823.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Pavel Begunkov authored
We can do CQE filling a bit more efficiently when req->cqe is fully filled, by memcpy()'ing it into the userspace CQE instead of doing it field by field. It's easier on register spilling, removes a couple of extra loads/stores and write-combines the two u32 memory writes. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/ee3f514ff28b1fe3347a8eca93a9d91647f2eaad.1649771823.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
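A self-contained illustration of field-by-field stores versus a single memcpy(); the struct mirrors the uapi io_uring_cqe layout and the function names are illustrative:

```c
#include <stdio.h>
#include <string.h>

/* Same layout as the uapi struct io_uring_cqe. */
struct cqe {
	unsigned long long user_data;
	int                res;
	unsigned int       flags;
};

/* Before: three separate stores into the shared CQ ring slot. */
static void fill_cqe_fields(struct cqe *dst, const struct cqe *src)
{
	dst->user_data = src->user_data;
	dst->res = src->res;
	dst->flags = src->flags;
}

/* After: a single memcpy(); the two 32-bit stores can be write-combined. */
static void fill_cqe_memcpy(struct cqe *dst, const struct cqe *src)
{
	memcpy(dst, src, sizeof(*dst));
}

int main(void)
{
	struct cqe req_cqe = { .user_data = 42, .res = 11, .flags = 0 };
	struct cqe slot;

	fill_cqe_fields(&slot, &req_cqe);
	fill_cqe_memcpy(&slot, &req_cqe);
	printf("user_data=%llu res=%d\n", slot.user_data, slot.res);
	return 0;
}
```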
-
- 24 Apr, 2022 4 commits
-
-
Pavel Begunkov authored
We already have req->{result,user_data,cflags}, which mimic struct io_uring_cqe and are intended to store CQE data. Combine them into a struct io_uring_cqe field. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/e1efe65d5005cd6a9ec3440767eb15a9fa9351cf.1649771823.git.asml.silence@gmail.com [axboe: add mirror cqe to cater to fd union] Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Pavel Begunkov authored
Rename io_sqe_file_register(), so the name better reflects what the function is doing. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/d5091518883786969e244d2f0854a47bbdaa5061.1649334991.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Pavel Begunkov authored
Merge __io_sqe_files_scm() and io_sqe_file_register(). The only real difference left between them is from where we get an skb. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/dddda3039c71fcbec24b3465cbe8c7e7ae7bb0e8.1649334991.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Pavel Begunkov authored
There is an old API nuisance where io_uring's SCM accounting functions traverse fixed file tables and so require them to be set in advance, which leads to some implicit rules of how io_sqe_file_register() should be used. __io_sqe_files_scm() now works with only one file at a time, so pass a file directly and get rid of all fixed table dereferencing inside. Clean up io_sqe_file_register() callers. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/fb32031d892e61a7748c70da7999725d5e798671.1649334991.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
-