- 06 May, 2016 16 commits
-
-
Matias Bjørling authored
The set_bb_tbl function takes a struct nvm_rq but only uses its ppa_list and nr_pages internally. Instead, make these two variables explicit parameters. This allows a user to call it without initializing a struct nvm_rq first. Signed-off-by: Matias Bjørling <m@bjorling.me> Signed-off-by: Jens Axboe <axboe@fb.com>
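A rough sketch of the shape of this change; the prototypes and parameter names below are assumptions based on the description, not taken from the patch:

    /* before: callers had to populate a struct nvm_rq just for two fields */
    int nvm_set_bb_tbl(struct nvm_dev *dev, struct nvm_rq *rqd, int type);

    /* after: the ppa list and its length are passed explicitly */
    int nvm_set_bb_tbl(struct nvm_dev *dev, struct ppa_addr *ppas,
                       int nr_pages, int type);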
-
Matias Bjørling authored
We move the responsibility for managing the persistent bad block table to the target. The target may choose to mark a block bad or to retry writing to it. Nevertheless, it should be the target that makes the decision, not the media manager. Signed-off-by: Matias Bjørling <m@bjorling.me> Signed-off-by: Jens Axboe <axboe@fb.com>
-
Matias Bjørling authored
A virtual block (vblk) enables one block to identify multiple physical blocks. This is useful for metadata when the device media supports multiple planes: a block spanning multiple planes can be managed as a single vblk, reducing the required metadata by a factor of the number of planes. nvm_set_rqd_ppalist() takes care of expanding a ppa_list with vblks automatically. However, for some use-cases where only a single physical block is required, the ppa_list should not be expanded. Therefore, add a vblk parameter to nvm_set_rqd_ppalist(), and only expand the ppa_list if vblk is set. Signed-off-by: Matias Bjørling <m@bjorling.me> Signed-off-by: Jens Axboe <axboe@fb.com>
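A sketch of the extended helper; prototype details are assumptions based on the description above:

    int nvm_set_rqd_ppalist(struct nvm_dev *dev, struct nvm_rq *rqd,
                            struct ppa_addr *ppas, int nr_ppas, int vblk);

    /* vblk != 0: expand each entry across all planes of its block
     * vblk == 0: use the ppa list as-is, one physical block per entry */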
-
Matias Bjørling authored
Now that device ops->get_bb_tbl no longer uses a callback, the struct factory_blks can be removed. Signed-off-by: Matias Bjørling <m@bjorling.me> Signed-off-by: Jens Axboe <axboe@fb.com>
-
Matias Bjørling authored
The device ops->get_bb_tbl() takes a callback, which lets the caller update its own data structures from within the returning function. This makes it difficult to pass parameters to the callback and is usually circumvented with small private structures that carry both the caller's state and any flags needed for the update. Refactor ops->get_bb_tbl() to fill a data buffer with the status of the returned blocks, and let the user call its callback function manually. That provides the necessary flags and data structures and simplifies the logic around ops->get_bb_tbl(). Signed-off-by: Matias Bjørling <m@bjorling.me> Signed-off-by: Jens Axboe <axboe@fb.com>
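The shape of the refactoring, sketched with assumed prototypes:

    /* before: the driver invoked a caller-supplied callback to update state */
    int (*get_bb_tbl)(struct nvm_dev *dev, struct ppa_addr ppa, int nr_blocks,
                      nvm_bb_update_fn *update_bbtbl, void *priv);

    /* after: the driver fills a buffer of per-block states, and the caller
     * runs its own update function on the buffer when the call returns */
    int (*get_bb_tbl)(struct nvm_dev *dev, struct ppa_addr ppa, u8 *blks);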
-
Matias Bjørling authored
Users that wish to iterate all luns on a device. Must create a struct ppa_addr and separate iterators for channels and luns. To set the iterators, two loops are required, one to iterate channels, and another to iterate luns. This leads to decrease in readability. Introduce nvm_for_each_lun_ppa, which implements the nested loop and sets ppa, channel, and lun variable for each loop body, eliminating the boilerplate code. Signed-off-by: Matias Bjørling <m@bjorling.me> Signed-off-by: Jens Axboe <axboe@fb.com>
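A sketch of what such an iterator can look like; the actual macro body and field names may differ:

    #define nvm_for_each_lun_ppa(dev, ppa, chid, lunid)                   \
            for ((chid) = 0, (ppa).ppa = 0; (chid) < (dev)->nr_chnls;     \
                            (chid)++, (ppa).g.ch = (chid))                \
                    for ((lunid) = 0; (lunid) < (dev)->luns_per_chnl;     \
                            (lunid)++, (ppa).g.lun = (lunid))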
-
Simon A. F. Lund authored
A target name must be unique. However, targets are registered per device, on a dev->online_targets list, and the search for an existing name at registration time is also per device. Two targets with the same name can therefore be created on two different devices, since the per-device lists are not shared, which leads to a name collision. Signed-off-by: Simon A. F. Lund <slund@cnexlabs.com> Signed-off-by: Matias Bjørling <m@bjorling.me> Signed-off-by: Jens Axboe <axboe@fb.com>
-
Simon A. F. Lund authored
The functions nvm_register_target(), nvm_unregister_target() and the associated list refer to a target type that is being registered by a target type module. Rename nvm_*_targets() to nvm_*_tgt_type(), so that the intention is clear. This enables target instances to use the _nvm_*_targets() naming. Signed-off-by: Simon A. F. Lund <slund@cnexlabs.com> Signed-off-by: Matias Bjørling <m@bjorling.me> Signed-off-by: Jens Axboe <axboe@fb.com>
-
Wenwei Tao authored
Since we mainly use soffset in device-sector units, store that value in rrpc->soffset rather than the offset in 512-byte sectors. This eliminates the "(ilog2(dev->sec_size) - 9)" calculation on each I/O. Signed-off-by: Wenwei Tao <ww.tao0320@gmail.com> Updated patch description. Signed-off-by: Matias Bjørling <m@bjorling.me> Signed-off-by: Jens Axboe <axboe@fb.com>
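For reference, the conversion that used to run on every I/O looks roughly like this (a 512-byte-sector offset shifted down to device sectors; the variable names are illustrative):

    /* with a 4096-byte device sector, ilog2(4096) - 9 == 3 */
    sector_t dev_soffset = soffset_512 >> (ilog2(dev->sec_size) - 9);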
-
Wenwei Tao authored
Calculate rrpc's total blocks and sectors up front, so that it actually makes sense to use them. For example, rrpc->nr_sects is used to calculate the rrpc area size, but that only works if it is initialized up front; otherwise it is zero until the rrpc luns init finishes. Signed-off-by: Wenwei Tao <ww.tao0320@gmail.com> Signed-off-by: Matias Bjørling <m@bjorling.me> Signed-off-by: Jens Axboe <axboe@fb.com>
-
Matias Bjørling authored
A memory leak occurs if the lower page table is initialized and the following dev->lun_map allocation fails. Rearrange the initialization of the lower page table so that a dev->lun_map allocation failure is handled gracefully without leaking memory. Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de> Move kfree of dev->lun_map to nvm_free(). Signed-off-by: Matias Bjørling <m@bjorling.me> Signed-off-by: Jens Axboe <axboe@fb.com>
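The usual shape of such a fix, as a fragment of the init path with hypothetical helper and field names:

    dev->lun_map = kcalloc(BITS_TO_LONGS(dev->nr_luns),
                           sizeof(unsigned long), GFP_KERNEL);
    if (!dev->lun_map)
            return -ENOMEM;                /* nothing else allocated yet, no leak */

    ret = nvm_init_lower_page_tbl(dev);    /* hypothetical helper name */
    if (ret)
            goto err_free_lun_map;
    return 0;

    err_free_lun_map:
            kfree(dev->lun_map);
            return ret;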
-
Matias Bjørling authored
The get block table command returns a list of blocks and planes with their associated state. Users such as gennvm and sysblk manage all planes as a single virtual block, so it was natural to fold the bad block list before returning it. However, to also let users that manage blocks on a per-plane level use the interface, change get_bb_tbl to not fold by default and instead let the caller fold if necessary. Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de> Signed-off-by: Matias Bjørling <m@bjorling.me> Signed-off-by: Jens Axboe <axboe@fb.com>
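Caller-side folding can then look roughly like this; the buffer layout and names are assumptions, not taken from the patch:

    /* a virtual block inherits the state of its worst plane block */
    for (blk = 0; blk < blks_per_lun; blk++) {
            u8 state = blks[blk * nr_planes];

            for (pl = 1; pl < nr_planes; pl++)
                    if (blks[blk * nr_planes + pl])   /* non-zero: not free */
                            state = blks[blk * nr_planes + pl];
            blks[blk] = state;
    }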
-
Matias Bjørling authored
The flash page size (fpg) and its size across planes (pfpg) are convenient to know when allocating buffers. These values have previously been calculated in various places; replace those calculations with the pre-calculated values. Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de> Signed-off-by: Matias Bjørling <m@bjorling.me> Signed-off-by: Jens Axboe <axboe@fb.com>
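The relationship between the two cached values, with field names assumed:

    /* fpg_size:  bytes in one flash page
     * pfpg_size: the same page replicated across all planes */
    dev->fpg_size  = dev->sec_per_pg * dev->sec_size;
    dev->pfpg_size = dev->fpg_size * dev->nr_planes;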
-
Matias Bjørling authored
The nvm_submit_ppa function assumes that users manage all plane blocks as a single block. Extend the API with nvm_submit_ppa_list to allow the user to submit its own ppa list. If the user submits more than a single PPA, the user must take care to allocate and free the corresponding ppa list. Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de> Signed-off-by: Matias Bjørling <m@bjorling.me> Signed-off-by: Jens Axboe <axboe@fb.com>
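A sketch of the extended call; the exact prototype is an assumption:

    int nvm_submit_ppa_list(struct nvm_dev *dev, struct ppa_addr *ppa_list,
                            int nr_ppas, int opcode, int flags,
                            void *buf, int len);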
-
Matias Bjørling authored
The device ->submit_io() callback might fail to submit I/O to the device. In that case, nvm_submit_ppa should not wait for a completion. Instead, return the ->submit_io() error. Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de> Signed-off-by: Matias Bjørling <m@bjorling.me> Signed-off-by: Jens Axboe <axboe@fb.com>
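The resulting submission path, roughly sketched:

    ret = dev->ops->submit_io(dev, &rqd);
    if (ret)
            return ret;     /* don't wait for a completion that won't arrive */

    wait_for_completion_io(&wait);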
-
Jeff Mahoney authored
This fixes the following warnings: drivers/lightnvm/sysblk.c:125:9: warning: ‘ret’ may be used uninitialized in this function drivers/lightnvm/sysblk.c:275:15: warning: ‘ret’ may be used uninitialized in this function In both cases, ret is only set from within a loop that may not be entered. Signed-off-by: Jeff Mahoney <jeffm@suse.com> Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de> Signed-off-by: Matias Bjørling <m@bjorling.me> Signed-off-by: Jens Axboe <axboe@fb.com>
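The usual shape of such a fix, illustrated with a hypothetical loop and helper:

    int ret = 0;    /* defined even if the loop body never runs */
    int i;

    for (i = 0; i < nr_rows; i++) {
            ret = process_row(i);   /* hypothetical helper */
            if (ret)
                    break;
    }
    return ret;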
-
- 03 May, 2016 1 commit
-
-
Keith Busch authored
This fixes a scenario where a device is present and being reset, but a request to unbind the driver occurs. A previous patch series addressing a device failure removal scenario flushed reset_work after controller disable to unblock reset_work waiting on a completion that wouldn't occur. This isn't safe as-is. The broken scenario can potentially be induced with: modprobe nvme && modprobe -r nvme To fix, the reset work is flushed immediately after setting the controller removing flag, and any subsequent reset will not proceed with controller initialization if the flag is set. The controller status must be polled while active, so the watchdog timer is also left active until the controller is disabled, to clean up requests that may be stuck during namespace removal. [Fixes: ff23a2a1] Signed-off-by: Keith Busch <keith.busch@intel.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de> Reviewed-by: Sagi Grimberg <sagi@grimberg.me> Signed-off-by: Jens Axboe <axboe@fb.com>
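The guard in the reset path amounts to something like this; the flag name is an assumption:

    if (test_bit(NVME_CTRL_REMOVING, &dev->flags))
            return;         /* unbind in progress; skip re-initialization */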
-
- 02 May, 2016 11 commits
-
-
Ming Lin authored
On receipt of a namespace attribute changed AER, we acquire the namespace mutex lock before proceeding to scan and validate the namespace list. In the case of a namespace detach/delete command, the nvme_ns_remove function deadlocks trying to acquire the already-held lock. All callers of nvme_ns_remove(), except nvme_remove_namespaces(), already hold namespaces_mutex. So we can simply fix the deadlock by not acquiring the mutex in nvme_ns_remove() and acquiring it in nvme_remove_namespaces(). Reported-by: Sunad Bhandary S <sunad.s@samsung.com> Signed-off-by: Ming Lin <ming.l@ssi.samsung.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Keith Busch <keith.busch@intel.com> Reviewed-by: Sagi Grimberg <sagi@grimberg.me> Signed-off-by: Jens Axboe <axboe@fb.com>
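A sketch of the resulting locking arrangement; the teardown details are elided and the structure/field names are assumed:

    static void nvme_ns_remove(struct nvme_ns *ns)
    {
            lockdep_assert_held(&ns->ctrl->namespaces_mutex);
            /* ... tear down gendisk, request queue, and list entry ... */
    }

    void nvme_remove_namespaces(struct nvme_ctrl *ctrl)
    {
            struct nvme_ns *ns, *next;

            mutex_lock(&ctrl->namespaces_mutex);
            list_for_each_entry_safe(ns, next, &ctrl->namespaces, list)
                    nvme_ns_remove(ns);
            mutex_unlock(&ctrl->namespaces_mutex);
    }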
-
Ming Lin authored
Switch to RCU freeing of the namespace structure so that nvme_start_queues, nvme_stop_queues and nvme_kill_queues can get away with only an RCU read-side critical section. Suggested-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Ming Lin <ming.l@ssi.samsung.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Keith Busch <keith.busch@intel.com> Reviewed-by: Sagi Grimberg <sagi@grimberg.me> Signed-off-by: Jens Axboe <axboe@fb.com>
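For example, the queue-start path can then walk the list under an RCU read lock instead of the mutex; a sketch, not the exact patch:

    rcu_read_lock();
    list_for_each_entry_rcu(ns, &ctrl->namespaces, list)
            blk_mq_start_stopped_hw_queues(ns->queue, true);
    rcu_read_unlock();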
-
Wang Sheng-Hui authored
Section 3.1 of the 1.2a specification gives the offsets of the controller registers. Some of them are mis-copied in the comments in the header file nvme.h. Correct them. Signed-off-by: Wang Sheng-Hui <shhuiw@foxmail.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Sagi Grimberg <sagi@grimberg.me> Signed-off-by: Jens Axboe <axboe@fb.com>
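For reference, the register offsets listed in section 3.1 of the specification are:

    /* CAP 0x00, VS 0x08, INTMS 0x0c, INTMC 0x10, CC 0x14,
     * CSTS 0x1c, NSSR 0x20, AQA 0x24, ASQ 0x28, ACQ 0x30 */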
-
Ming Lin authored
This moves command cleanup into nvme.h so that fabrics drivers can also use it. Signed-off-by: Ming Lin <ming.l@ssi.samsung.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Sagi Grimberg <sagi@grimberg.me> Signed-off-by: Jens Axboe <axboe@fb.com>
-
Christoph Hellwig authored
The transport driver still needs to do the actual submission, but all the higher level code can be shared. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Sagi Grimberg <sagi@grimberg.me> Signed-off-by: Jens Axboe <axboe@fb.com>
-
Christoph Hellwig authored
Move the scan work item and surrounding code to the common code. For now we need a new finish_scan method to allow the PCI driver to set the irq affinity hints, but I have plans in the works to obsolete this as well. Note that this moves the namespace scanning from nvme_wq to the system workqueue, but as we don't rely on namespace scanning to finish from reset or I/O this should be fine. Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: Jon Derrick <jonathan.derrick@intel.com> Signed-off-by: Jens Axboe <axboe@fb.com>
-
Christoph Hellwig authored
We should only be scanning namespaces if the controller is live. Currently we call the function just before setting it live, so fix the code up to move the call to nvme_queue_scan to just below the state change. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Sagi Grimberg <sagi@grimberg.me> Acked-by: Jon Derrick <jonathan.derrick@intel.com> Signed-off-by: Jens Axboe <axboe@fb.com>
-
Christoph Hellwig authored
Replace the ad-hoc flags in the PCI driver with a state machine in the core code. Based on code from Sagi Grimberg for the Fabrics driver. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Sagi Grimberg <sagi@grimberg.me> Acked-by: Jon Derrick <jonathan.derrick@intel.com> Signed-off-by: Jens Axboe <axboe@fb.com>
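The set of controller states, sketched from the description; the exact enumeration is an assumption:

    enum nvme_ctrl_state {
            NVME_CTRL_NEW,
            NVME_CTRL_LIVE,
            NVME_CTRL_RESETTING,
            NVME_CTRL_DELETING,
            NVME_CTRL_DEAD,
    };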
-
Christoph Hellwig authored
It's unused since "NVMe: Move error handling to failed reset handler". Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: Jon Derrick <jonathan.derrick@intel.com> Reviewed-by: Sagi Grimberg <sagi@grimberg.me> Signed-off-by: Jens Axboe <axboe@fb.com>
-
Wang Sheng-Hui authored
nvme_core_init does: 1) register_blkdev, 2) __register_chrdev, 3) class_create. nvme_core_exit should do the cleanup in the reverse order. Signed-off-by: Wang Sheng-Hui <shhuiw@foxmail.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@fb.com>
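The resulting teardown, roughly; the identifier names are assumed:

    void nvme_core_exit(void)
    {
            class_destroy(nvme_class);
            __unregister_chrdev(nvme_char_major, 0, NVME_MINORS, "nvme");
            unregister_blkdev(nvme_major, "nvme");
    }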
-
Keith Busch authored
If the controller fails and is degraded after a reset, we need to kill off all request queues before removing the inaccessible namespaces. This will prevent del_gendisk from syncing dirty data, which we can't do from a WQ_MEM_RECLAIM work queue. Signed-off-by: Keith Busch <keith.busch@intel.com> Reviewed-by: Sagi Grimberg <sagi@grimberg.me> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@fb.com>
-
- 26 Apr, 2016 3 commits
-
-
Wang Sheng-Hui authored
"as well as " is miss typed "as well a " in section "config BLK_DEV_NVME_SCSI" Signed-off-by: Wang Sheng-Hui <shhuiw@foxmail.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@fb.com>
-
Christoph Hellwig authored
Controller IDs in NVMe are unsigned 16-bit types. In the Fabrics driver we actually pass ctrl->id by reference, so we need it to have the correct type. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@fb.com>
-
Jeff Moyer authored
Simply creating a file system on an skd device, followed by mount and fstrim, will result in errors in the logs and then a BUG(). Let's remove discard support from that driver. As far as I can tell, it hasn't worked right since it was merged. This patch also has the side-effect of cleaning up an unintentional shadowed declaration inside of skd_end_request. I tested to ensure that I can still do I/O to the device using xfstests ./check -g quick. I didn't do anything more extensive than that, though. Signed-off-by: Jeff Moyer <jmoyer@redhat.com> Signed-off-by: Jens Axboe <axboe@fb.com>
-
- 13 Apr, 2016 2 commits
-
-
Jens Axboe authored
Now that we converted everything to the newer block write cache interface, kill off the queue flush_flags and queueable flush entries. Signed-off-by: Jens Axboe <axboe@fb.com>
-
Guilherme G. Piccoli authored
This patch adds a check in the nvme_watchdog_timer() function to avoid calling reset_work() while an error recovery process is ongoing on the controller. The check is made by looking at the pci_channel_offline() result. Without this check in nvme_watchdog_timer(), the error recovery mechanism can't recover well, because reset_work() won't be able to do its job (since we're in the middle of an error) and so the controller is removed from the system before the error recovery mechanism can perform a slot reset (which would allow the adapter to recover). This patch also splits the huge condition expression in nvme_watchdog_timer() by introducing an auxiliary function, to make the code more readable. Reviewed-by: Keith Busch <keith.busch@intel.com> Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de> Signed-off-by: Guilherme G. Piccoli <gpiccoli@linux.vnet.ibm.com> Signed-off-by: Jens Axboe <axboe@fb.com>
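The added check amounts to something like this near the top of the watchdog path; a sketch, not the exact patch:

    if (pci_channel_offline(to_pci_dev(dev->dev)))
            return;         /* PCI error recovery owns the device right now */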
-
- 12 Apr, 2016 7 commits
-
-
Jens Axboe authored
Depending on options, we might not be using dev in nvme_cancel_io(): drivers/nvme/host/pci.c: In function ‘nvme_cancel_io’: drivers/nvme/host/pci.c:970:19: warning: unused variable ‘dev’ [-Wunused-variable] struct nvme_dev *dev = data; ^ So get rid of it, and just cast for the dev_dbg_ratelimited() call. Fixes: 82b4552b ("nvme: Use blk-mq helper for IO termination") Signed-off-by: Jens Axboe <axboe@fb.com>
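The fix roughly boils down to dropping the local and casting at the single use site:

    dev_dbg_ratelimited(((struct nvme_dev *) data)->ctrl.device,
                        "Cancelling I/O %d", req->tag);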
-
Jens Axboe authored
We don't have any drivers left using it, so kill it off. Update documentation to use the newer blk_queue_write_cache(). Signed-off-by: Jens Axboe <axboe@fb.com> Reviewed-by: Christoph Hellwig <hch@lst.de>
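For reference, a driver with a volatile write cache and FUA support calls the newer interface like this:

    blk_queue_write_cache(q, true, true);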
-
Jens Axboe authored
Signed-off-by: Jens Axboe <axboe@fb.com> Reviewed-by: Christoph Hellwig <hch@lst.de>
-
Jens Axboe authored
Signed-off-by: Jens Axboe <axboe@fb.com> Reviewed-by: Christoph Hellwig <hch@lst.de>
-
Jens Axboe authored
Signed-off-by: Jens Axboe <axboe@fb.com> Reviewed-by: Christoph Hellwig <hch@lst.de>
-
Jens Axboe authored
Signed-off-by: Jens Axboe <axboe@fb.com> Reviewed-by: Christoph Hellwig <hch@lst.de>
-
Jens Axboe authored
Signed-off-by: Jens Axboe <axboe@fb.com> Reviewed-by: Christoph Hellwig <hch@lst.de>
-