- 13 Apr, 2023 6 commits
-
-
Damien Le Moal authored
For an identify command with cns set to NVME_ID_CNS_CS_CTRL, the NVMe 2.0 specification states that: If the I/O Command Set specified by the CSI field does not have an Identify Controller data structure, then the controller shall return a zero filled data structure. If the host requests a data structure for an I/O Command Set that the controller does not support, the controller shall abort the command with a status code of Invalid Field in Command. However, the current implementation of this identify command in nvmet_execute_identify() only handles the ZNS command set, returning an error for the NVM command set, which is not compliant with the specification, as we do support this command set. Fix this by: 1) Renaming nvmet_execute_identify_cns_cs_ctrl() to nvmet_execute_identify_ctrl_zns() to continue handling the ZNS command set as is. 2) Introducing a nvmet_execute_identify_ctrl_ns() helper to handle the NVM command set, returning a zero filled nvme_id_ctrl_nvm data structure. 3) Modifying nvmet_execute_identify() to call these helpers based on the csi specified, returning an error for unsupported command sets. Fixes: aaf2e048 ("nvmet: add ZBD over ZNS backend support") Signed-off-by: Damien Le Moal <damien.lemoal@opensource.wdc.com> Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com> Tested-by: Chaitanya Kulkarni <kch@nvidia.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
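A minimal sketch of the dispatch described above, using the helper names from this patch description; the wrapper name is illustrative and this is not the verbatim upstream code:

    /* Illustrative only: dispatch NVME_ID_CNS_CS_CTRL on the CSI field. */
    static void nvmet_id_cs_ctrl_sketch(struct nvmet_req *req)
    {
        switch (req->cmd->identify.csi) {
        case NVME_CSI_NVM:
            /* NVM has no CS-specific Identify Controller data, so a
             * zero-filled nvme_id_ctrl_nvm structure is returned. */
            nvmet_execute_identify_ctrl_ns(req);
            break;
        case NVME_CSI_ZNS:
            if (IS_ENABLED(CONFIG_BLK_DEV_ZONED)) {
                nvmet_execute_identify_ctrl_zns(req);
                break;
            }
            fallthrough;
        default:
            /* unsupported command set: Invalid Field in Command */
            nvmet_req_complete(req, NVME_SC_INVALID_FIELD | NVME_SC_DNR);
            break;
        }
    }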
-
Damien Le Moal authored
The identify command with cns set to NVME_ID_CNS_NS_ACTIVE_LIST does not depend on the command set. The execution of this command should thus not look at the csi field specified in the command. Simplify nvmet_execute_identify() to directly call nvmet_execute_identify_nslist() without the csi switch-case. Fixes: ab5d0b38 ("nvmet: add Command Set Identifier support") Signed-off-by: Damien Le Moal <damien.lemoal@opensource.wdc.com> Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com> Tested-by: Chaitanya Kulkarni <kch@nvidia.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Damien Le Moal authored
The identify command with cns set to NVME_ID_CNS_CTRL does not depend on the command set. The execution of this command should thus not look at the csi specified in the command. Simplify nvmet_execute_identify() to directly call nvmet_execute_identify_ctrl() without the csi switch-case. Fixes: ab5d0b38 ("nvmet: add Command Set Identifier support") Signed-off-by: Damien Le Moal <damien.lemoal@opensource.wdc.com> Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com> Tested-by: Chaitanya Kulkarni <kch@nvidia.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Damien Le Moal authored
The identify command with cns set to NVME_ID_CNS_NS does not directly depend on the command set. The NVMe specification is rather confusing here as it appears that this command only applies to the NVM command set. However, footnote 8 of Figure 273 in the NVMe 2.0 base specification clearly states that this command applies to NVM command sets that support logical blocks, that is, NVM and ZNS. Both the NVM and ZNS command set specifications also list this identify as mandatory. The command handling should thus not look at the csi field since it is defined as unused for this command. Given that we do not support the KV command set, simply remove the csi switch-case for that command handling and directly call nvmet_execute_identify_ns() in nvmet_execute_identify(). Fixes: ab5d0b38 ("nvmet: add Command Set Identifier support") Signed-off-by: Damien Le Moal <damien.lemoal@opensource.wdc.com> Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com> Tested-by: Chaitanya Kulkarni <kch@nvidia.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
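Taken together with the previous two commits, the simplified dispatch ends up looking roughly like this; a sketch of the command-set independent cases only, not the complete upstream function:

    /* Sketch: CNS values that are command-set independent no longer
     * look at the CSI field at all. */
    static void nvmet_identify_dispatch_sketch(struct nvmet_req *req)
    {
        switch (req->cmd->identify.cns) {
        case NVME_ID_CNS_NS:
            nvmet_execute_identify_ns(req);        /* CSI unused */
            return;
        case NVME_ID_CNS_CTRL:
            nvmet_execute_identify_ctrl(req);      /* CSI unused */
            return;
        case NVME_ID_CNS_NS_ACTIVE_LIST:
            nvmet_execute_identify_nslist(req);    /* CSI unused */
            return;
        default:
            nvmet_req_complete(req, NVME_SC_INVALID_FIELD | NVME_SC_DNR);
        }
    }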
-
Damien Le Moal authored
The NVMe specification states that: If the I/O Command Set associated with the namespace identified by the NSID field does not support the Identify Namespace data structure specified by the CSI field, the controller shall abort the command with a status code of Invalid Field in Command. In other words, if nvmet_execute_identify_cns_cs_ns() is called for a target with a block device that is not zoned, we should not return any data but instead set the status to NVME_SC_INVALID_FIELD. While at it, it is also better to revalidate the ns block device *before* checking if the block device is zoned, to ensure that nvmet_execute_identify_cns_cs_ns() operates against updated device characteristics. Fixes: aaf2e048 ("nvmet: add ZBD over ZNS backend support") Signed-off-by: Damien Le Moal <damien.lemoal@opensource.wdc.com> Reviewed-by: Keith Busch <kbusch@kernel.org> Reviewed-by: Sagi Grimberg <sagi@grimberg.me> Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
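A sketch of the reordered checks; nvmet_ns_revalidate() and bdev_is_zoned() follow the existing code, the wrapper is illustrative and the rest of the identify handling is omitted:

    /* Illustrative: revalidate first, then reject non-zoned namespaces. */
    static u16 zns_identify_ns_checks_sketch(struct nvmet_req *req)
    {
        /* Pick up current block device characteristics before deciding. */
        nvmet_ns_revalidate(req->ns);

        /* A non-zoned namespace has no ZNS identify data: per the spec,
         * abort with Invalid Field in Command instead of returning data. */
        if (!bdev_is_zoned(req->ns->bdev))
            return NVME_SC_INVALID_FIELD | NVME_SC_DNR;

        return NVME_SC_SUCCESS;
    }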
-
Bjorn Helgaas authored
pci_enable_pcie_error_reporting() enables the device to send ERR_* Messages. Since f26e58bf ("PCI/AER: Enable error reporting when AER is native"), the PCI core does this for all devices during enumeration, so the driver doesn't need to do it itself. Remove the redundant pci_enable_pcie_error_reporting() call from the driver. Also remove the corresponding pci_disable_pcie_error_reporting() from the driver .remove() path. Note that this only controls ERR_* Messages from the device. An ERR_* Message may cause the Root Port to generate an interrupt, depending on the AER Root Error Command register managed by the AER service driver. Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
-
- 12 Apr, 2023 7 commits
-
-
Stefan Haberland authored
The DASD driver does not kick the requeue list when requeuing IO requests to the block layer. This might lead to a hanging block device when there is no other trigger for this. Fix it by automatically kicking the requeue list when requeuing DASD requests to the block layer. Fixes: e443343e ("s390/dasd: blk-mq conversion") CC: stable@vger.kernel.org # 4.14+ Signed-off-by: Stefan Haberland <sth@linux.ibm.com> Reviewed-by: Jan Hoeppner <hoeppner@linux.ibm.com> Reviewed-by: Halil Pasic <pasic@linux.ibm.com> Link: https://lore.kernel.org/r/20230405142017.2446986-8-sth@linux.ibm.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
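The block-layer side of this is the kick flag on requeue; a hedged illustration of the idea (the wrapper function is made up, the blk-mq APIs are real):

    /* Requeue a request and make sure the requeue list is actually run,
     * so the request is not left pending until something else happens
     * to kick the queue. */
    static void dasd_requeue_and_kick_sketch(struct request *req)
    {
        blk_mq_requeue_request(req, true);  /* true == kick requeue list */

        /* Equivalent alternative: requeue without kicking, then kick the
         * queue explicitly with blk_mq_kick_requeue_list(req->q). */
    }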
-
Stefan Haberland authored
Add a check in the start_io function for errors that signal a non-working device, and trigger an autoquiesce event in that case. Signed-off-by: Stefan Haberland <sth@linux.ibm.com> Reviewed-by: Jan Hoeppner <hoeppner@linux.ibm.com> Reviewed-by: Halil Pasic <pasic@linux.ibm.com> Link: https://lore.kernel.org/r/20230405142017.2446986-7-sth@linux.ibm.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Stefan Haberland authored
Add a sysfs attribute aq_timeouts that controls after how many timeouts an autoquiesce event might be triggered. The default value is 32768, which is the maximum number of retries for the DASD device driver (DASD_RETRIES_MAX). This means that the timeout trigger will never happen by default. The default value for DASD retries is 255. Setting the value below 255 will trigger the timeout autoquiesce event before an IO error is generated. Also add the check for the configured number of timeouts and trigger an autoquiesce event if it is exceeded. Signed-off-by: Stefan Haberland <sth@linux.ibm.com> Reviewed-by: Jan Hoeppner <hoeppner@linux.ibm.com> Reviewed-by: Halil Pasic <pasic@linux.ibm.com> Link: https://lore.kernel.org/r/20230405142017.2446986-6-sth@linux.ibm.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Stefan Haberland authored
Add a sysfs attribute to control whether all IO requests are requeued to the block layer in case of an autoquiesce event. A value of 1 means that in case of an autoquiesce event all IO requests will be requeued to the block layer. A value of 0 means that the device will only be stopped. Signed-off-by: Stefan Haberland <sth@linux.ibm.com> Reviewed-by: Jan Hoeppner <hoeppner@linux.ibm.com> Reviewed-by: Halil Pasic <pasic@linux.ibm.com> Link: https://lore.kernel.org/r/20230405142017.2446986-5-sth@linux.ibm.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Stefan Haberland authored
Add a sysfs attribute that controls the DASD autoquiesce feature. Autoquiesce is disabled when 0 is written to the attribute; a value greater than 0 enables the feature. The aq_mask attribute accepts an unsigned integer, and the value is interpreted as a bitmask defining the trigger events that lead to an automatic quiesce. The following autoquiesce triggers are currently available:
- DASD_EER_FATALERROR 1 - any final I/O error
- DASD_EER_NOPATH 2 - no remaining paths for the device
- DASD_EER_STATECHANGE 3 - a state change interrupt occurred
- DASD_EER_PPRCSUSPEND 4 - the device is PPRC suspended
- DASD_EER_NOSPC 5 - there is no space remaining on an ESE device
- DASD_EER_TIMEOUT 6 - a certain number of timeouts occurred
- DASD_EER_STARTIO 7 - the IO start function encountered an error
The currently supported maximum value is 255. Bit 31 is reserved for internal usage. Bit 0 is not used. Example:
- deactivate autoquiesce: $ echo 0 > /sys/bus/ccw/0.0.1234/aq_mask
- enable autoquiesce for FATALERROR, NOPATH and TIMEOUT (0000 0000 0000 0000 0000 0000 0100 0110 => 70): $ echo 70 > /sys/bus/ccw/0.0.1234/aq_mask
Signed-off-by: Stefan Haberland <sth@linux.ibm.com> Reviewed-by: Jan Hoeppner <hoeppner@linux.ibm.com> Reviewed-by: Halil Pasic <pasic@linux.ibm.com> Link: https://lore.kernel.org/r/20230405142017.2446986-4-sth@linux.ibm.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
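A hedged illustration of how the echo example above is derived from the trigger numbers: the DASD_EER_* values are bit positions, so the mask is built by shifting. This is a standalone userspace sketch, not driver code:

    #include <stdio.h>

    /* Trigger bit positions as listed in the commit message above. */
    #define DASD_EER_FATALERROR 1
    #define DASD_EER_NOPATH     2
    #define DASD_EER_TIMEOUT    6

    int main(void)
    {
        unsigned int aq_mask = (1U << DASD_EER_FATALERROR) |
                               (1U << DASD_EER_NOPATH) |
                               (1U << DASD_EER_TIMEOUT);

        /* 2 + 4 + 64 = 70, the value echoed into aq_mask above */
        printf("%u\n", aq_mask);
        return 0;
    }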
-
Stefan Haberland authored
Add the internal logic to check for autoquiesce triggers and handle them. Quiesce and resume are functions that tell Linux to stop/resume issuing I/Os to a specific DASD. The DASD driver allows a manual quiesce/resume via ioctl. Autoquiesce defines a set of triggers that lead to an automatic quiesce when a certain event occurs. There is no automatic resume. All events are reported via DASD Extended Error Reporting (EER) if it is configured. Signed-off-by: Stefan Haberland <sth@linux.ibm.com> Reviewed-by: Jan Hoeppner <hoeppner@linux.ibm.com> Reviewed-by: Halil Pasic <pasic@linux.ibm.com> Link: https://lore.kernel.org/r/20230405142017.2446986-3-sth@linux.ibm.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Stefan Haberland authored
Remove definitions that have never been used. Signed-off-by: Stefan Haberland <sth@linux.ibm.com> Reviewed-by: Jan Hoeppner <hoeppner@linux.ibm.com> Reviewed-by: Halil Pasic <pasic@linux.ibm.com> Link: https://lore.kernel.org/r/20230405142017.2446986-2-sth@linux.ibm.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
- 06 Apr, 2023 3 commits
-
-
Chengming Zhou authored
The blkcg_policy cpd_init_fn() callback is only used to initialize some default fields of the policy data, which can just as well be done in cpd_alloc_fn(). This patch deletes the only user, bfq_cpd_init(), and removes cpd_init_fn from blkcg_policy. Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com> Acked-by: Tejun Heo <tj@kernel.org> Link: https://lore.kernel.org/r/20230406145050.49914-4-zhouchengming@bytedance.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
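The shape of the change, sketched with BFQ-like names rather than the exact upstream diff: the default initialization moves into the alloc callback, so a separate cpd_init_fn() is no longer needed.

    /* Sketch: set defaults directly in the cpd_alloc_fn() implementation. */
    static struct blkcg_policy_data *bfq_cpd_alloc_sketch(gfp_t gfp)
    {
        struct bfq_group_data *bgd;

        bgd = kzalloc(sizeof(*bgd), gfp);
        if (!bgd)
            return NULL;

        /* Default weight, previously assigned in a cpd_init_fn(). */
        bgd->weight = CGROUP_WEIGHT_DFL;
        return &bgd->pd;
    }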
-
Chengming Zhou authored
cpd_bind_fn() is only used to update the default weight when the block subsystem is attached to a hierarchy. No policy needs it anymore. Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com> Acked-by: Tejun Heo <tj@kernel.org> Link: https://lore.kernel.org/r/20230406145050.49914-3-zhouchengming@bytedance.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Chengming Zhou authored
BFQ_WEIGHT_LEGACY_DFL is the same as CGROUP_WEIGHT_DFL, which means we don't need the cpd_bind_fn() callback to update the default weight when attached to a hierarchy. This patch removes BFQ_WEIGHT_LEGACY_DFL and cpd_bind_fn(). Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com> Acked-by: Tejun Heo <tj@kernel.org> Link: https://lore.kernel.org/r/20230406145050.49914-2-zhouchengming@bytedance.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
- 05 Apr, 2023 5 commits
-
-
Ondrej Kozina authored
It returns the following attributes: locking range start, locking range length, read lock enabled, write lock enabled, and lock state (RW, RO or LK). These can be retrieved by a user authority, provided the authority was added to the locking range via a prior IOC_OPAL_ADD_USR_TO_LR ioctl command. That command was extended to add the user to an ACE that allows reading the attributes listed above. Signed-off-by: Ondrej Kozina <okozina@redhat.com> Tested-by: Luca Boccassi <bluca@debian.org> Tested-by: Milan Broz <gmazyland@gmail.com> Link: https://lore.kernel.org/r/20230405111223.272816-6-okozina@redhat.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Ondrej Kozina authored
Refactor the current code querying a single column to use the new helper. Real multi-column usage will be added later. Signed-off-by: Ondrej Kozina <okozina@redhat.com> Tested-by: Luca Boccassi <bluca@debian.org> Tested-by: Milan Broz <gmazyland@gmail.com> Acked-by: Christian Brauner <brauner@kernel.org> Reviewed-by: Christoph Hellwig <hch@lst.de> Link: https://lore.kernel.org/r/20230405111223.272816-5-okozina@redhat.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Ondrej Kozina authored
Extend the ACE set of locking range attributes accessible to a user authority. This patch allows a user authority to get the following locking range attributes when the user is added to a locking range via IOC_OPAL_ADD_USR_TO_LR: locking range start, locking range end, read lock enabled, write lock enabled, read locked, write locked, lock on reset, active key. Note: the Admin1 authority always remains in the ACE. Otherwise it breaks current userspace expecting Admin1 in the ACE (sedutils). See TCG OPAL2 s.4.3.1.7 "ACE_Locking_RangeNNNN_Get_RangeStartToActiveKey". Signed-off-by: Ondrej Kozina <okozina@redhat.com> Tested-by: Luca Boccassi <bluca@debian.org> Tested-by: Milan Broz <gmazyland@gmail.com> Acked-by: Christian Brauner <brauner@kernel.org> Reviewed-by: Christoph Hellwig <hch@lst.de> Link: https://lore.kernel.org/r/20230405111223.272816-4-okozina@redhat.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Ondrej Kozina authored
Move the ACE construction out of the add_user_to_lr routine and refactor it so it can also be used by later code. Also add defines for the boolean operators from the TCG Core specification. Signed-off-by: Ondrej Kozina <okozina@redhat.com> Tested-by: Luca Boccassi <bluca@debian.org> Tested-by: Milan Broz <gmazyland@gmail.com> Link: https://lore.kernel.org/r/20230405111223.272816-3-okozina@redhat.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Ondrej Kozina authored
While adding a user authority to the boolean ACE value of the OPAL_LOCKINGRANGE_ACE_WRLOCKED or OPAL_LOCKINGRANGE_ACE_RDLOCKED uid, it was added twice. This is redundant when only a single authority is added in the set method, i.e. { authority1, authority1, OR }. TCG Storage Architecture Core Specification, 5.1.3.3 ACE_expression: "This is an alternative type where the options are either a uidref to an Authority object or one of the boolean_ACE (AND = 0 and OR = 1) options. This type is used within the AC_element list to form a postfix Boolean expression of Authorities." Signed-off-by: Ondrej Kozina <okozina@redhat.com> Tested-by: Luca Boccassi <bluca@debian.org> Tested-by: Milan Broz <gmazyland@gmail.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Acked-by: Christian Brauner <brauner@kernel.org> Link: https://lore.kernel.org/r/20230405111223.272816-2-okozina@redhat.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
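A purely illustrative sketch of the postfix expression construction the quote describes; with it, a single authority yields { authority1 } and two authorities yield { authority1, authority2, OR }, so duplicating a lone authority is unnecessary. The types and function below are made up, not sed-opal code:

    enum ace_elem_type { ACE_AUTHORITY, ACE_BOOL_OR };

    struct ace_elem {
        enum ace_elem_type type;
        unsigned int authority;     /* valid for ACE_AUTHORITY only */
    };

    /* Build a postfix boolean expression OR-ing the given authorities.
     * The caller provides room for 2 * nr_auth - 1 elements. */
    static size_t build_ace_expr(struct ace_elem *out,
                                 const unsigned int *auth, size_t nr_auth)
    {
        size_t n = 0;

        for (size_t i = 0; i < nr_auth; i++) {
            out[n].type = ACE_AUTHORITY;
            out[n++].authority = auth[i];
            if (i > 0)              /* postfix: OR with the previous result */
                out[n++].type = ACE_BOOL_OR;
        }
        return n;
    }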
-
- 03 Apr, 2023 5 commits
-
-
Ming Lei authored
'struct ublk_map_data' is passed to ublk_copy_user_pages() for copying data between the userspace buffer and the request pages. What matters here is the userspace buffer address/length and the 'struct request', so replace the ->io field with the user buffer address and rename max_bytes to len. Also remove the 'ubq' field from ublk_map_data, since it isn't used any more. The code becomes more readable. Reviewed-by: Ziyang Zhang <ZiyangZhang@linux.alibaba.com> Signed-off-by: Ming Lei <ming.lei@redhat.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Ming Lei authored
Convert the following pattern in several helpers: "if (Z) return true; return false;" into: "return Z;". Reviewed-by: Ziyang Zhang <ZiyangZhang@linux.alibaba.com> Signed-off-by: Ming Lei <ming.lei@redhat.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
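For instance (the helper name and flag below are made up; the pattern is what matters):

    /* before */
    static inline bool ublk_need_map_sketch_old(const struct ublk_queue *ubq)
    {
        if (ubq->flags & UBLK_F_EXAMPLE_FLAG)
            return true;
        return false;
    }

    /* after */
    static inline bool ublk_need_map_sketch(const struct ublk_queue *ubq)
    {
        return ubq->flags & UBLK_F_EXAMPLE_FLAG;
    }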
-
Ming Lei authored
Add two helpers for checking if map/unmap is needed, since we may have passthrough requests which need map or unmap in the future, such as for supporting report zones. Also don't mark ublk_copy_user_pages() as inline, since this function is a bit fat now. Reviewed-by: Ziyang Zhang <ZiyangZhang@linux.alibaba.com> Signed-off-by: Ming Lei <ming.lei@redhat.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Ming Lei authored
A REQ_OP_FLUSH request never carries data, so don't consider it in ublk_map_io() and ublk_unmap_io(). Reviewed-by: Ziyang Zhang <ZiyangZhang@linux.alibaba.com> Signed-off-by: Ming Lei <ming.lei@redhat.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
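A hedged sketch of the idea: a flush request has no data payload, so the map/unmap paths can treat it like any other no-data request. The helper shape below is illustrative:

    /* Illustrative: true only for requests that actually carry data;
     * REQ_OP_FLUSH bios have no payload, so this returns false for them. */
    static inline bool ublk_rq_has_data_sketch(const struct request *req)
    {
        return bio_has_data(req->bio);
    }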
-
Ming Lei authored
Simplify exit handling a bit, and prepare for supporting fused command. Reviewed-by: Ziyang Zhang <ZiyangZhang@linux.alibaba.com> Signed-off-by: Ming Lei <ming.lei@redhat.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
- 02 Apr, 2023 9 commits
-
-
Christoph Böhmwalder authored
Originally-from: Andreas Grünbacher <agruen@linbit.com> Signed-off-by: Christoph Böhmwalder <christoph.boehmwalder@linbit.com> Link: https://lore.kernel.org/r/20230330102744.2128122-2-christoph.boehmwalder@linbit.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Christoph Böhmwalder authored
In preparation for supporting multiple connections, we need to know which connection to modify the request state for. Originally-from: Lars Ellenberg <lars.ellenberg@linbit.com> Signed-off-by: Christoph Böhmwalder <christoph.boehmwalder@linbit.com> Link: https://lore.kernel.org/r/20230330102744.2128122-2-christoph.boehmwalder@linbit.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Christoph Böhmwalder authored
Signed-off-by: Christoph Böhmwalder <christoph.boehmwalder@linbit.com> Link: https://lore.kernel.org/r/20230330102744.2128122-2-christoph.boehmwalder@linbit.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Andreas Gruenbacher authored
Signed-off-by: Andreas Gruenbacher <agruen@linbit.com> Signed-off-by: Christoph Böhmwalder <christoph.boehmwalder@linbit.com> Link: https://lore.kernel.org/r/20230330102744.2128122-2-christoph.boehmwalder@linbit.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Andreas Gruenbacher authored
Pass a peer device parameter through the bitmap I/O functions to the I/O handlers. In after_state_ch(), set that parameter when queuing the drbd_send_bitmap operation so that this operation knows where to send the bitmap. Signed-off-by: Andreas Gruenbacher <agruen@kernel.org> Signed-off-by: Christoph Böhmwalder <christoph.boehmwalder@linbit.com> Link: https://lore.kernel.org/r/20230330102744.2128122-2-christoph.boehmwalder@linbit.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Andreas Gruenbacher authored
Signed-off-by: Andreas Gruenbacher <agruen@linbit.com> Signed-off-by: Christoph Böhmwalder <christoph.boehmwalder@linbit.com> Link: https://lore.kernel.org/r/20230330102744.2128122-2-christoph.boehmwalder@linbit.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Christoph Böhmwalder authored
Primarily to silence warnings like: warning: no previous prototype for 'xxx_genl_cmd_to_str' [-Wmissing-prototypes] Signed-off-by: Christoph Böhmwalder <christoph.boehmwalder@linbit.com> Link: https://lore.kernel.org/r/20230330102744.2128122-2-christoph.boehmwalder@linbit.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Chaitanya Kulkarni authored
Replace the deprecated APIs kmap_atomic() and kunmap_atomic() with kmap_local_page() and kunmap_local() in null_flush_cache_page(). Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com> Link: https://lore.kernel.org/r/20230330184926.64209-2-kch@nvidia.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
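The replacement pattern, sketched generically rather than as the exact null_blk hunk:

    #include <linux/highmem.h>
    #include <linux/string.h>

    /* kmap_atomic()/kunmap_atomic() are deprecated; kmap_local_page()
     * gives a CPU-local, nestable temporary mapping instead. */
    static void zero_page_sketch(struct page *page, size_t len)
    {
        void *dst = kmap_local_page(page);

        memset(dst, 0, len);
        kunmap_local(dst);
    }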
-
Chaitanya Kulkarni authored
Use the library helper memcpy_page() to copy the source page into the destination instead of having duplicate code in copy_to_nullb() & copy_from_nullb(). In copy_from_nullb(), also replace the memset call with zero_user(). Use the library helper memset_page() to set the buffer to 0xFF instead of having duplicate code. This also removes the deprecated API kmap_atomic() from copy_to_nullb(), copy_from_nullb() and nullb_fill_pattern(). From include/linux/highmem.h: "kmap_atomic - Atomically map a page for temporary usage - Deprecated!" Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com> Link: https://lore.kernel.org/r/20230330184926.64209-2-kch@nvidia.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
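Illustrative use of the highmem helpers named above; each maps the page(s) locally, performs the operation, and unmaps again internally:

    #include <linux/highmem.h>

    static void nullb_page_helpers_sketch(struct page *dst, struct page *src,
                                          size_t off, size_t len)
    {
        memcpy_page(dst, off, src, off, len);   /* copy between two pages */
        memset_page(dst, off, 0xFF, len);       /* fill a range with 0xFF */
        zero_user(dst, off, len);               /* zero a range in the page */
    }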
-
- 27 Mar, 2023 2 commits
-
-
Chaitanya Kulkarni authored
There is only one caller of __blk_account_io_done(), and the function is small enough to fit in its caller blk_account_io_done(). Remove the function and open-code it in its caller. Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com> Link: https://lore.kernel.org/r/20230327073427.4403-2-kch@nvidia.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Chaitanya Kulkarni authored
There is only one caller of __blk_account_io_start(), and the function is small enough to fit in its caller blk_account_io_start(). Remove the function and open-code it in its caller. Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com> Link: https://lore.kernel.org/r/20230327073427.4403-2-kch@nvidia.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
- 20 Mar, 2023 1 commit
-
-
Keith Busch authored
io_uring provides the only way user space can poll completions, and that always sets BLK_POLL_NOSLEEP. This effectively makes hybrid polling dead code, so remove it and everything supporting it. Hybrid polling was effectively killed off with 9650b453, "block: ignore RWF_HIPRI hint for sync dio", but still potentially reachable through io_uring until d729cf9a, "io_uring: don't sleep when polling for I/O", but hybrid polling probably should not have been reachable through that async interface from the beginning. Fixes: 9650b453 ("block: ignore RWF_HIPRI hint for sync dio") Fixes: d729cf9a ("io_uring: don't sleep when polling for I/O") Signed-off-by: Keith Busch <kbusch@kernel.org> Link: https://lore.kernel.org/r/20230320194926.3353144-1-kbusch@meta.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
- 16 Mar, 2023 2 commits
-
-
Eric Biggers authored
Now that all callers of blk_crypto_put_keyslot() check for NULL before calling it, there is no need for blk_crypto_put_keyslot() to do the NULL check itself. Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Eric Biggers <ebiggers@google.com> Link: https://lore.kernel.org/r/20230315183907.53675-2-ebiggers@kernel.org Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Eric Biggers authored
To avoid hiding information, pass on the error code from blk_crypto_rq_get_keyslot() instead of always using BLK_STS_IOERR. Signed-off-by: Eric Biggers <ebiggers@google.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Link: https://lore.kernel.org/r/20230315183907.53675-2-ebiggers@kernel.org Signed-off-by: Jens Axboe <axboe@kernel.dk>
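A hedged sketch of the call-site change, assuming the helper reports a blk_status_t that the caller can simply propagate; the wrapper function is illustrative:

    /* Propagate the specific status from the keyslot lookup instead of
     * collapsing every failure into BLK_STS_IOERR. */
    static blk_status_t crypto_prep_sketch(struct request *rq)
    {
        blk_status_t status = blk_crypto_rq_get_keyslot(rq);

        if (status != BLK_STS_OK)
            return status;  /* e.g. BLK_STS_RESOURCE vs. BLK_STS_IOERR */

        return BLK_STS_OK;
    }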
-