- 18 Jun, 2019 40 commits
-
-
Tyrel Datwyler authored
After a successful SRP login response we call scsi_unblock_requests() to kick any pending IOs. The callback that processes this SRP response runs in a tasklet and is therefore in softirq context. As a result, when blk-mq is enabled it is no longer safe to call scsi_unblock_requests() from this context, and doing so triggers the following WARN_ON splat in dmesg after a host reset or CRQ reenablement:

WARNING: CPU: 0 PID: 0 at block/blk-mq.c:1375 __blk_mq_run_hw_queue+0x120/0x180
Modules linked in:
CPU: 0 PID: 0 Comm: swapper/0 Not tainted 5.0.0-rc8 #4
NIP [c0000000009771e0] __blk_mq_run_hw_queue+0x120/0x180
LR [c000000000977484] __blk_mq_delay_run_hw_queue+0x244/0x250
Call Trace:
__blk_mq_delay_run_hw_queue+0x244/0x250
blk_mq_run_hw_queue+0x8c/0x1c0
blk_mq_run_hw_queues+0x60/0x90
scsi_run_queue+0x1e4/0x3b0
scsi_run_host_queues+0x48/0x80
login_rsp+0xb0/0x100
ibmvscsi_handle_crq+0x30c/0x3e0
ibmvscsi_task+0x54/0xe0
tasklet_action_common.isra.3+0xc4/0x1a0
__do_softirq+0x174/0x3f4
irq_exit+0xf0/0x120
__do_irq+0xb0/0x210
call_do_irq+0x14/0x24
do_IRQ+0x9c/0x130
hardware_interrupt_common+0x14c/0x150

This patch fixes the issue by introducing a new host action for unblocking the scsi requests in our separate work thread. Signed-off-by: Tyrel Datwyler <tyreld@linux.ibm.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
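The pattern, roughly, is to do nothing but schedule work from the softirq path and perform the unblock from process context. A minimal sketch (the example_host structure and function names are hypothetical, not the actual ibmvscsi code, which reuses its existing host-action work thread):

```c
#include <linux/workqueue.h>
#include <scsi/scsi_host.h>

struct example_host {
	struct Scsi_Host *shost;
	struct work_struct unblock_work;
};

/* Runs in process context: safe to kick the blk-mq queues. */
static void example_unblock_fn(struct work_struct *work)
{
	struct example_host *h = container_of(work, struct example_host,
					      unblock_work);

	scsi_unblock_requests(h->shost);
}

/* Called from the CRQ tasklet (softirq): only schedule, never unblock here. */
static void example_login_rsp(struct example_host *h)
{
	schedule_work(&h->unblock_work);
}
```
(The work item would be initialized once with INIT_WORK(&h->unblock_work, example_unblock_fn) during host setup.)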
-
Tyrel Datwyler authored
The current implementation relies on two flags in the driver's private host structure to signal the need for a host reset or to reenable the CRQ after an LPAR migration. This patch does away with those flags and introduces a single action flag and defined enums for the supported kthread work actions. Lastly, the if/else logic is replaced with a switch statement. Signed-off-by: Tyrel Datwyler <tyreld@linux.ibm.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
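As a hedged sketch of the refactor (the enum values and struct below are illustrative placeholders, not the driver's actual identifiers), the two booleans collapse into one action handled by a single switch in the work thread:

```c
enum example_action {
	EXAMPLE_ACTION_NONE = 0,
	EXAMPLE_ACTION_RESET,		/* was: a "reset" bool flag */
	EXAMPLE_ACTION_REENABLE,	/* was: a "reenable CRQ" bool flag */
	EXAMPLE_ACTION_UNBLOCK,
};

struct example_host {
	enum example_action action;
};

static void example_handle_action(struct example_host *h)
{
	switch (h->action) {
	case EXAMPLE_ACTION_RESET:
		/* tear down and reinitialize the adapter CRQ */
		break;
	case EXAMPLE_ACTION_REENABLE:
		/* reenable the CRQ after an LPAR migration */
		break;
	case EXAMPLE_ACTION_UNBLOCK:
		/* kick pending scsi requests */
		break;
	case EXAMPLE_ACTION_NONE:
	default:
		break;
	}
	h->action = EXAMPLE_ACTION_NONE;
}
```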
-
Tyrel Datwyler authored
Wire up the host_reset function in our driver_template to allow a user-requested adapter reset via the host_reset sysfs attribute. Example: echo "adapter" > /sys/class/scsi_host/host0/host_reset Signed-off-by: Tyrel Datwyler <tyreld@linux.ibm.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
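For reference, the midlayer plumbing looks roughly like the sketch below (the handler body is a hypothetical placeholder; only the .host_reset hook and the SCSI_ADAPTER_RESET constant come from the SCSI midlayer):

```c
#include <linux/errno.h>
#include <scsi/scsi_host.h>

static int example_host_reset(struct Scsi_Host *shost, int reset_type)
{
	/* "echo adapter > host_reset" arrives as SCSI_ADAPTER_RESET */
	if (reset_type != SCSI_ADAPTER_RESET)
		return -EOPNOTSUPP;

	/* trigger the driver's existing adapter reset machinery here */
	return 0;
}

static struct scsi_host_template example_template = {
	.name		= "example",
	.host_reset	= example_host_reset,
};
```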
-
James Smart authored
Update lpfc version to 12.2.0.3 Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com> Signed-off-by: James Smart <jsmart2021@gmail.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
James Smart authored
Kernel warnings may be seen with preempt debugging enabled. Replace smp_processor_id calls with raw_smp_processor_id or cpu information stored in hdwq structures. Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com> Signed-off-by: James Smart <jsmart2021@gmail.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
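The distinction, in sketch form (a hypothetical helper, not lpfc code): in preemptible sections where the CPU number is only a placement hint, raw_smp_processor_id() avoids the CONFIG_DEBUG_PREEMPT warning that smp_processor_id() would emit:

```c
#include <linux/smp.h>

/* Pick a hardware queue using the current CPU purely as a hint; a stale
 * value after migration is harmless, so the raw variant is acceptable.
 */
static unsigned int example_pick_hwq(unsigned int nr_hwq)
{
	return raw_smp_processor_id() % nr_hwq;
}
```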
-
James Smart authored
Crashes were seen in scsi_queue_rq or in dma_unmap_direct_sg during BFS when lpfc has lpfc_enable_bg=1. lpfc was setting DIX and prot sg after scsi_add_host_with_dma() had been called, but the scsi_host_set_prot() and scsi_host_set_guard() routines need to be called before scsi_add_host_with_dma(). Revise the calling sequence to set the protection/guard data before calling scsi_add_host_with_dma(). Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com> Signed-off-by: James Smart <jsmart2021@gmail.com> Reviewed-by: Ewan D. Milne <emilne@redhat.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
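The required ordering, sketched with an illustrative capability mask (the actual lpfc masks depend on its DIF/DIX configuration):

```c
#include <scsi/scsi_host.h>

static int example_attach_host(struct Scsi_Host *shost, struct device *dev,
			       struct device *dma_dev)
{
	/* Protection and guard setup must precede host registration,
	 * otherwise requests can arrive before DIX is configured.
	 */
	scsi_host_set_prot(shost, SHOST_DIF_TYPE1_PROTECTION |
				  SHOST_DIX_TYPE1_PROTECTION);
	scsi_host_set_guard(shost, SHOST_DIX_GUARD_CRC);

	return scsi_add_host_with_dma(shost, dev, dma_dev);
}
```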
-
James Smart authored
FDMI protocol support registration was not accurately showing nvme support. The fcponly-path clears the parameter object. Move the code out of the fcponly code path. Fix the FDMI registration data to properly check for nvme support. Commonize the manner in which the fdmi routines set protocol support. Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com> Signed-off-by: James Smart <jsmart2021@gmail.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
James Smart authored
Issuing a LUN reset was resulting in a command failure which then escalated to a host reset. The FCP-4 spec allows fcp_rsp_len field to specify the number of valid bytes of FCP_RSP_INFO, and the value could be 4 or 8. The driver is allowing only a value of 8, thus it failed the command. Revise the driver to allow 4 or 8. Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com> Signed-off-by: James Smart <jsmart2021@gmail.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
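The relaxed validation amounts to something like the following check (a sketch, not the lpfc function itself):

```c
#include <linux/types.h>

/* FCP-4 allows FCP_RSP_INFO lengths of 4 or 8 bytes; previously only 8
 * was accepted, causing valid LUN reset responses to be failed.
 */
static bool example_fcp_rsp_len_valid(u32 fcp_rsp_len)
{
	return fcp_rsp_len == 4 || fcp_rsp_len == 8;
}
```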
-
James Smart authored
While fixing the resources per socket, it was realized the driver was not using all of its hardware queues (up to 1 per cpu) when there were fewer interrupt vectors than cpus. The driver was only using the hardware queue assigned to the cpu with the vector. Rework the affinity map check to use the additional hardware queue elements that had been allocated. If the cpu count exceeds the hardware queue count, share queues, choosing the sharing partner in this order: hyperthread peer, core peer, socket peer, or finally a similar cpu in a different socket. Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com> Signed-off-by: James Smart <jsmart2021@gmail.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
James Smart authored
The driver was coded expecting enough hardware queues and interrupt vectors that there would be at least one per socket. In cases where there were fewer of them than sockets, cpus were left unassigned, resulting in null pointers. Rework the affinity mappings. Map settings for the cpus that are in the irq cpu mask. For each cpu not in the mask, map it to another cpu that is covered by a mask. The choice of the "other" cpu will attempt to map to the same cpu but on a differing hyperthread, or a cpu within the same core, or a cpu within the same socket, or finally a cpu in the base socket. Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com> Signed-off-by: James Smart <jsmart2021@gmail.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
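A rough sketch of the fallback search (the cpu_to_hwq[] map and NO_HWQ sentinel are hypothetical; the real lpfc code walks its own hdwq structures): unmapped CPUs borrow the assignment of an SMT sibling first, then a CPU in the same package, then any CPU that already has a mapping.

```c
#include <linux/cpumask.h>
#include <linux/kernel.h>
#include <linux/topology.h>

#define NO_HWQ	U16_MAX

static u16 example_borrow_hwq(const u16 *cpu_to_hwq, unsigned int cpu)
{
	unsigned int peer;

	for_each_cpu(peer, topology_sibling_cpumask(cpu))
		if (cpu_to_hwq[peer] != NO_HWQ)
			return cpu_to_hwq[peer];	/* hyperthread peer */

	for_each_cpu(peer, topology_core_cpumask(cpu))
		if (cpu_to_hwq[peer] != NO_HWQ)
			return cpu_to_hwq[peer];	/* same package/socket */

	for_each_online_cpu(peer)
		if (cpu_to_hwq[peer] != NO_HWQ)
			return cpu_to_hwq[peer];	/* any mapped cpu */

	return 0;				/* fall back to queue 0 */
}
```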
-
James Smart authored
Invalid logical speed is displayed for trunk-enabled ports when all ports are down. Also noted that link speed is incorrectly reported for the units when links are up. Current code is returning the logical link speed from the last event from the adapter. In cases where the last link went down, the link speed in the event was not valid - meaning that although the links were down the field had a bogus value. Rework the event handling to qualify the trunk link state before using the event speed data. Also correct units in other areas where the logical link speed was taken from a link event. Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com> Signed-off-by: James Smart <jsmart2021@gmail.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
James Smart authored
EQ create is leaking mailbox memory if it encounters an error. Rework the error path to free the memory. Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com> Signed-off-by: James Smart <jsmart2021@gmail.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
James Smart authored
The driver unconditionally says fw doesn't support nvme when in truth it was a driver parameter setting that disabled nvme support. Rework the code validating nvme support to accurately report what condition is disabling nvme support. Save state on whether the fw supports nvme in case sysfs attributes change dynamically. Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com> Signed-off-by: James Smart <jsmart2021@gmail.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
James Smart authored
There is a race condition with the abort handler declaring a waitq item on its stack, followed by a timeout in the abort handler that has it give up on the abort and return to its caller. When the io is finally aborted and its completion handler called, it references the waitq element that the abort handler set up, which is no longer valid, resulting in a deadlock. Fix by clearing the waitq reference, under lock, when the abort handler timeout gives up. Have the completion handler validate the waitq before referencing it. Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com> Signed-off-by: James Smart <jsmart2021@gmail.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
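The fix follows a common kernel pattern, sketched below with hypothetical structure and field names (the real code lives in lpfc's abort and wqe completion paths): the aborter publishes a pointer to its on-stack waitqueue, clears it under the lock on timeout, and the completion side only wakes a still-valid pointer.

```c
#include <linux/jiffies.h>
#include <linux/spinlock.h>
#include <linux/wait.h>

struct example_io {
	spinlock_t lock;
	wait_queue_head_t *waitq;	/* points into the aborter's stack */
	bool done;
};

static void example_abort_handler(struct example_io *io)
{
	DECLARE_WAIT_QUEUE_HEAD_ONSTACK(waitq);
	unsigned long flags;

	spin_lock_irqsave(&io->lock, flags);
	io->waitq = &waitq;
	spin_unlock_irqrestore(&io->lock, flags);

	if (!wait_event_timeout(waitq, io->done, msecs_to_jiffies(30000))) {
		/* Giving up: the on-stack waitq is about to go away. */
		spin_lock_irqsave(&io->lock, flags);
		io->waitq = NULL;
		spin_unlock_irqrestore(&io->lock, flags);
	}
}

static void example_io_complete(struct example_io *io)
{
	unsigned long flags;

	spin_lock_irqsave(&io->lock, flags);
	io->done = true;
	if (io->waitq)			/* validate before referencing */
		wake_up(io->waitq);
	spin_unlock_irqrestore(&io->lock, flags);
}
```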
-
James Smart authored
When queued work is executed to post a new command to the transport, the driver is reporting a null buffer. The driver had received an ABTS which matched a command that had been scheduled for delivery to the transport. The driver proceeded to cancel the command, but the work item was never cancelled. Fix by cancelling the queued work item. It also turns out the ABTS response was not properly sending a BA_ACC, so set the flag to send the ACC. Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com> Signed-off-by: James Smart <jsmart2021@gmail.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
James Smart authored
Use-after-free memory overwrite detected. Problem reported by Ewan Milne at Red Hat after running lpfc target with additional memory checking enabled. Race condition when lpfc_nvmet_xmt_ls_rsp_cmp frees the ctxp memory in interrupt context before lpfc_nvmet_xmt_ls_rsp clears a field in the ctxp after successfully issuing the wqe. Remove the unnecessary ctxp write after reposting the rq buffer. The ctxp->rqb_buffer field is not checked in LS handling after the wqe is submitted. Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com> Signed-off-by: James Smart <jsmart2021@gmail.com> Reported-by: Ewan Milne <emilne@redhat.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
James Smart authored
Under heavy load the target stops responding, the driver's aborts time out and we start recovery by logging out of the target, but the target is never logged into again. In a point-to-point scenario, there were battling PLOGIs. When we received a PLOGI request after having sent one, the driver cancels the processing of the original PLOGI. However, the completion path of the remaining PLOGI was coded to skip the reg_rpi that should be happening on the 2nd PLOGI. Correct by adding a simple pt2pt check such that the 2nd PLOGI isn't skipped and the reg_login occurs. Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com> Signed-off-by: James Smart <jsmart2021@gmail.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
James Smart authored
Turns out the message change in 12.2.0.1 for unsupported topology makes the linux driver out of sync with other products. Revert the message back to the prior content for product consistency. Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com> Signed-off-by: James Smart <jsmart2021@gmail.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
James Smart authored
The driver currently is relying on firmware to match ABTSs to existing exchanges. This works fine as long as an exchange has been assigned to the io and work posted to it. However, for unmapped frames (rxid=0xFFFF), the driver has yet to assign an xri. The driver was blindly saying it couldn't match the ABTS and sending the BA_xxx. However, the command frame may have been in queues waiting on xris before posting to the nvmet_fc layer. When xris became available, the command frame would still be pushed to the transport and that io would execute, even though the io had been killed by the ABTS. The initiator, seeing the io ABTS'd, would reuse the exchange for a different io which would be received on the target and pushed up. If the "zombie" io then came back down and started transmitting, the initiator would match the oxid and accept erroneous data. Bad things happened. Add tracking of active exchanges in the target to allow matching of a received ABTS against active or pending IO requests. If the ABTS is matched to a pending or active IO, the driver initiates cleanup and conditionally notifies the transport. Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com> Signed-off-by: James Smart <jsmart2021@gmail.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
James Smart authored
Currently the driver is notified of new command frame receipt by CQEs. As part of the CQE processing, the driver upcalls the nvmet_fc transport to deliver the command. nvmet_fc, as part of receiving the command, builds out a context for it, where one of the first steps is to allocate memory for the io. When running tests that do large ios (1MB), it was found on some systems that the total number of outstanding I/Os, at 1MB each, completely consumed the system's memory. Thus additional ios were getting blocked in the memory allocator. Given that this blocked the lpfc thread processing CQEs, lots of other commands that were received were then held up, and since CQEs are serially processed, the aggregate delays for an IO waiting behind the others became cumulative - enough so that the initiator hit timeouts for the ios. The basic fix is to avoid the direct upcall and instead schedule a work item for each io as it is received. This allows the cq processing to complete very quickly, and each io can then run or block on its own. However, this general solution hurts latency when there are few ios. As such, the fix is implemented so that the driver watches how many CQEs it has processed sequentially in one run. As long as the count is below a threshold, the direct nvmet_fc upcall will be made. Only when the count is exceeded will it revert to work scheduling. Given that debug of this showed a surprisingly long delay in cq processing, the io timer stats were updated to better reflect the processing of the different points. Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com> Signed-off-by: James Smart <jsmart2021@gmail.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
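A minimal sketch of the threshold-based handoff (names, the threshold value, and the stub delivery function are illustrative; the real driver defers into its own work queues and upcalls nvmet_fc):

```c
#include <linux/workqueue.h>

#define EXAMPLE_DIRECT_UPCALL_LIMIT	64	/* illustrative threshold */

struct example_cmd {
	struct work_struct defer_work;
	/* ... received command context ... */
};

/* Direct upcall into the transport; placeholder body for the sketch. */
static void example_deliver_now(struct example_cmd *cmd)
{
}

static void example_deliver_deferred(struct work_struct *work)
{
	example_deliver_now(container_of(work, struct example_cmd,
					 defer_work));
}

static void example_handle_cqe(struct example_cmd *cmd,
			       unsigned int cqes_in_this_run)
{
	if (cqes_in_this_run <= EXAMPLE_DIRECT_UPCALL_LIMIT) {
		/* Light load: keep the low-latency direct path. */
		example_deliver_now(cmd);
	} else {
		/* Long CQE run: defer so CQ processing isn't blocked. */
		INIT_WORK(&cmd->defer_work, example_deliver_deferred);
		schedule_work(&cmd->defer_work);
	}
}
```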
-
James Smart authored
Revise a stalled adapter message to also include the number of jobs that are stalling the thread. Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com> Signed-off-by: James Smart <jsmart2021@gmail.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
James Smart authored
A race condition resulted in receive buffers being placed in the free list twice. Change the locking and handling to check whether the "other" path will be freeing the entry in a later thread and skip it if it is. Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com> Signed-off-by: James Smart <jsmart2021@gmail.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
James Smart authored
After receiving an unsolicited ABTS (meaning rxid is 0xFFFF), the driver used the oxid from the initiator to match against a local xri which may have been allocated for the io. But the local xri would be the rxid, so this is an invalid check, resulting in the command not being matched or being erroneously matched. Change the lookup to use the oxid and the SID to match against the received IO's original values. Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com> Signed-off-by: James Smart <jsmart2021@gmail.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
James Smart authored
Softlockups are seen in low memory situations. They are due to doing oas_lun allocation with GFP_KERNEL in atomic contexts. Change the calls to oas_lun to indicate atomic context so that GFP_ATOMIC is used. Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com> Signed-off-by: James Smart <jsmart2021@gmail.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
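In sketch form (a generic helper, not the lpfc oas_lun routine), the change is to thread the caller's context down so the allocation flag can be chosen correctly:

```c
#include <linux/slab.h>

/* GFP_KERNEL may sleep, which is illegal in atomic context and can soft
 * lock the CPU under memory pressure; callers in atomic context must get
 * GFP_ATOMIC instead.
 */
static void *example_alloc_lun_entry(size_t size, bool atomic_context)
{
	return kzalloc(size, atomic_context ? GFP_ATOMIC : GFP_KERNEL);
}
```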
-
Shivasharan S authored
Signed-off-by: Shivasharan S <shivasharan.srikanteshwara@broadcom.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Shivasharan S authored
Create a debugfs interface for megaraid_sas driver. Provide interface to dump driver RAID map in debugfs. Signed-off-by: Sumit Saxena <sumit.saxena@broadcom.com> Signed-off-by: Shivasharan S <shivasharan.srikanteshwara@broadcom.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
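A minimal sketch of a debugfs dump file (directory/file names and the dumped content are placeholders, not the actual megaraid_sas layout):

```c
#include <linux/debugfs.h>
#include <linux/seq_file.h>

static int example_raidmap_show(struct seq_file *m, void *unused)
{
	/* Walk the driver's RAID map here and seq_printf() each entry. */
	seq_puts(m, "raid map dump placeholder\n");
	return 0;
}
DEFINE_SHOW_ATTRIBUTE(example_raidmap);

static struct dentry *example_debugfs_root;

static void example_debugfs_init(void *instance_data)
{
	example_debugfs_root = debugfs_create_dir("example_megaraid", NULL);
	debugfs_create_file("raidmap_dump", 0444, example_debugfs_root,
			    instance_data, &example_raidmap_fops);
}
```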
-
Shivasharan S authored
Print FW supported MSI-X vector count only if FW supports MSI-X. Signed-off-by: Sumit Saxena <sumit.saxena@broadcom.com> Signed-off-by: Shivasharan S <shivasharan.srikanteshwara@broadcom.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Shivasharan S authored
Add debug prints related to the device list being returned by firmware. Add a debug flag to activate these prints. Signed-off-by: Sumit Saxena <sumit.saxena@broadcom.com> Signed-off-by: Shivasharan S <shivasharan.srikanteshwara@broadcom.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Shivasharan S authored
Add prints in resume/suspend path to help in debugging hibernation issues. The print gives an indication when the driver entry points are called. Signed-off-by: Sumit Saxena <sumit.saxena@broadcom.com> Signed-off-by: Shivasharan S <shivasharan.srikanteshwara@broadcom.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Shivasharan S authored
Add a print to dump the interrupt status in system log for debugging. Signed-off-by: Sumit Saxena <sumit.saxena@broadcom.com> Signed-off-by: Shivasharan S <shivasharan.srikanteshwara@broadcom.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Shivasharan S authored
When driver detects a firmware fault during load, dump additional information on fault code and subcode that will help in debugging. Signed-off-by: Sumit Saxena <sumit.saxena@broadcom.com> Signed-off-by: Shivasharan S <shivasharan.srikanteshwara@broadcom.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Shivasharan S authored
Add a sysfs interface to get the raid map index that is being used by driver. Signed-off-by: Sumit Saxena <sumit.saxena@broadcom.com> Signed-off-by: Shivasharan S <shivasharan.srikanteshwara@broadcom.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
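A read-only Scsi_Host sysfs attribute of this kind looks roughly like the sketch below (the raid_map_id field and instance structure are hypothetical stand-ins for the driver's private data):

```c
#include <linux/device.h>
#include <scsi/scsi_host.h>

struct example_instance {
	unsigned int raid_map_id;
};

static ssize_t raid_map_id_show(struct device *dev,
				struct device_attribute *attr, char *buf)
{
	struct Scsi_Host *shost = class_to_shost(dev);
	struct example_instance *inst = shost_priv(shost);

	return snprintf(buf, PAGE_SIZE, "%u\n", inst->raid_map_id);
}
static DEVICE_ATTR_RO(raid_map_id);
/* dev_attr_raid_map_id is then listed in the host template's attributes. */
```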
-
Shivasharan S authored
Add prints for BAR address information during driver load. This helps in debugging issues with BAR address changing during OS boot. Signed-off-by: Sumit Saxena <sumit.saxena@broadcom.com> Signed-off-by: Shivasharan S <shivasharan.srikanteshwara@broadcom.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Shivasharan S authored
When controller fails to transition to READY state during driver probe, dump the system interface register set. This will give snapshot of the firmware status for debugging driver load issues. Signed-off-by: Sumit Saxena <sumit.saxena@broadcom.com> Signed-off-by: Shivasharan S <shivasharan.srikanteshwara@broadcom.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Shivasharan S authored
Add a sysfs interface to dump the controller's system interface registers. Signed-off-by: Sumit Saxena <sumit.saxena@broadcom.com> Signed-off-by: Shivasharan S <shivasharan.srikanteshwara@broadcom.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Shivasharan S authored
Add an option to format the buffer that is being dumped. Currently, the IO frame and chain frame dumped in the syslog are getting split across multiple lines based on the formatting. Fix this by using KERN_CONT in printk. Signed-off-by: Sumit Saxena <sumit.saxena@broadcom.com> Signed-off-by: Shivasharan S <shivasharan.srikanteshwara@broadcom.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
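The idea, in sketch form (a hypothetical helper): emit the log prefix once and continue the same record with KERN_CONT rather than issuing one printk per word.

```c
#include <linux/printk.h>
#include <linux/types.h>

static void example_dump_frame(const u32 *frame, unsigned int dwords)
{
	unsigned int i;

	printk(KERN_INFO "frame:");
	for (i = 0; i < dwords; i++)
		printk(KERN_CONT " %08x", frame[i]);	/* same log record */
	printk(KERN_CONT "\n");
}
```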
-
Shivasharan S authored
Add prints to identify the internal DCMD opcode that has timed out. Signed-off-by: Sumit Saxena <sumit.saxena@broadcom.com> Signed-off-by: Shivasharan S <shivasharan.srikanteshwara@broadcom.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Shivasharan S authored
This patch enhances the existing debug prints in reset and task management path. These debug prints in adapter reset path helps with debugging issues related to IO timeouts that are seen frequently in the field. Add additional debug prints to dump the pending command frames before initiating an adapter reset. Also, print FastPath IOs that are outstanding. Signed-off-by: Sumit Saxena <sumit.saxena@broadcom.com> Signed-off-by: Shivasharan S <shivasharan.srikanteshwara@broadcom.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Shivasharan S authored
The driver will use "reply descriptor post queues" in round robin fashion when the combined MSI-X mode is not enabled. With this, IO completions are distributed and load-balanced equally across all the available reply descriptor post queues. This is enabled only if combined MSI-X mode is not enabled in firmware. This improves performance and also fixes soft lockups. When load balancing is enabled, IRQ affinity from the driver needs to be disabled. Signed-off-by: Kashyap Desai <kashyap.desai@broadcom.com> Signed-off-by: Shivasharan S <shivasharan.srikanteshwara@broadcom.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
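A sketch of the selection logic (field names are illustrative): a per-adapter counter spreads completions across all reply queues when balanced mode is active, otherwise the submitting CPU's queue is kept.

```c
#include <linux/atomic.h>

struct example_adapter {
	atomic64_t total_io_count;
	unsigned int reply_queue_count;
	bool balanced_mode;	/* combined MSI-X mode disabled in firmware */
};

static unsigned int example_pick_reply_queue(struct example_adapter *a,
					     unsigned int submitter_queue)
{
	if (!a->balanced_mode)
		return submitter_queue;

	/* Round robin across every reply descriptor post queue. */
	return (unsigned int)(atomic64_inc_return(&a->total_io_count) %
			      a->reply_queue_count);
}
```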
-
Shivasharan S authored
Issue Description: We have seen cpu lockup issues from the field if the system has a large (more than 96) logical cpu count. SAS3.0 controllers (Invader series) support a max of 96 MSI-X vectors and SAS3.5 products (Ventura) support a max of 128 MSI-X vectors. This may be a generic issue (if a PCI device supports completion on multiple reply queues). Let me explain it w.r.t. megaraid_sas supported h/w just to simplify the problem and the possible changes to handle such issues. MegaRAID controllers support multiple reply queues in the completion path. The driver creates MSI-X vectors for the controller as "minimum of (FW supported reply queues, logical CPUs)". If the submitter is not interrupted via a completion on the same CPU, there is a loop in the IO path. This behavior can cause hard/soft CPU lockups, IO timeouts, system sluggishness, etc. Example - one CPU (e.g. CPU A) is busy submitting IOs and another CPU (e.g. CPU B) is busy processing the corresponding IOs' reply descriptors from the reply descriptor queue upon receiving the interrupts from the HBA. If CPU A is continuously pumping IOs then CPU B (which is executing the ISR) will always see valid reply descriptors in the reply descriptor queue and will continuously process those reply descriptors in a loop without quitting the ISR handler. The megaraid_sas driver only exits the ISR handler if it finds an unused reply descriptor in the reply descriptor queue. Since CPU A will be continuously sending IOs, CPU B may always see a valid reply descriptor (posted by the HBA firmware after processing the IO) in the reply descriptor queue. In the worst case, the driver will not quit this loop in the ISR handler. Eventually, the CPU lockup will be detected by the watchdog. The above behavior is not common if "rq_affinity" is set to 2 or the affinity_hint is honored by irqbalance as "exact". If rq_affinity is set to 2, the submitter will always be interrupted via a completion on the same CPU. If irqbalance is using the "exact" policy, the interrupt will be delivered to the submitter CPU.

Problem statement: If the CPU count to MSI-X vector (reply descriptor queue) count ratio is not 1:1, we still have exposure to the issue explained above and for that we don't have any solution. Exposure to soft/hard lockups is seen if the CPU count is greater than the number of MSI-X vectors supported by the device. If the CPU count to MSI-X vector count ratio is not 1:1 (put another way, if the ratio is X:1, where X > 1), then the 'exact' irqbalance policy or rq_affinity = 2 won't help to avoid CPU hard/soft lockups. There won't be a one-to-one mapping between CPU and MSI-X vector; instead, one MSI-X interrupt (or reply descriptor queue) is shared by a group/set of CPUs, and there is a possibility of a loop forming in the IO path within that CPU group, and lockups may be observed. For example: Consider a system having two NUMA nodes, each node having four logical CPUs, and also consider that the number of MSI-X vectors enabled on the HBA is two; the CPU count to MSI-X vector count ratio is then 4:1. e.g. MSI-X vector 0 has affinity to CPU 0, CPU 1, CPU 2 & CPU 3 of NUMA node 0 and MSI-X vector 1 has affinity to CPU 4, CPU 5, CPU 6 & CPU 7 of NUMA node 1.

numactl --hardware
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 --> MSI-X 0
node 0 size: 65536 MB
node 0 free: 63176 MB
node 1 cpus: 4 5 6 7 --> MSI-X 1
node 1 size: 65536 MB
node 1 free: 63176 MB

Assume that a user started an application which uses all the CPUs of NUMA node 0 for issuing IOs. Only one CPU from the affinity list, say CPU 0 (it can be any cpu since this behavior depends upon irqbalance), will receive the interrupts from MSI-X 0 for all the IOs. Eventually, CPU 0's IO submission percentage will decrease and its ISR processing percentage will increase as it becomes more busy processing the interrupts. Gradually the IO submission percentage on CPU 0 will reach zero and its ISR processing percentage will be 100% as the IO loop has already formed within NUMA node 0, i.e. CPU 1, CPU 2 & CPU 3 will be continuously busy submitting heavy IOs and only CPU 0 is busy in the ISR path as it always finds a valid reply descriptor in the reply descriptor queue. Eventually, we will observe a hard lockup here. The chance of hard/soft lockups occurring is directly proportional to the value of X; if the value of X is high, then the chance of observing CPU lockups is high.

Solution: Use the IRQ poll interface defined in "irq_poll.c". The megaraid_sas driver will execute the ISR routine in softirq context and will always quit the loop based on the budget provided by the IRQ poll interface. The driver will switch to IRQ poll only when more than a threshold number of reply descriptors are handled in one ISR. Currently the threshold is set to 1/4th of the HBA queue depth. In these scenarios (i.e. where the CPU count to MSI-X vector count ratio is X:1, where X > 1), the IRQ poll interface will avoid CPU hard lockups due to the voluntary exit from reply queue processing based on the budget. Note - only one MSI-X vector is busy doing processing. Select CONFIG_IRQ_POLL from the driver Kconfig for driver compilation. Signed-off-by: Kashyap Desai <kashyap.desai@broadcom.com> Signed-off-by: Shivasharan S <shivasharan.srikanteshwara@broadcom.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
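The handoff to irq_poll looks roughly like the sketch below (structure/field names, the threshold, and the reply-processing stub are illustrative; irq_poll_init/irq_poll_sched/irq_poll_complete are the real kernel interface):

```c
#include <linux/interrupt.h>
#include <linux/irq_poll.h>

struct example_reply_queue {
	struct irq_poll iopoll;
	unsigned int threshold;		/* e.g. HBA queue depth / 4 */
};

/* Drains up to 'limit' reply descriptors and returns how many were done. */
static int example_process_replies(struct example_reply_queue *rq, int limit);

static int example_irqpoll_fn(struct irq_poll *iop, int budget)
{
	struct example_reply_queue *rq =
		container_of(iop, struct example_reply_queue, iopoll);
	int done = example_process_replies(rq, budget);

	if (done < budget)
		irq_poll_complete(iop);	/* queue drained: back to IRQ mode */
	return done;
}

static irqreturn_t example_isr(int irq, void *data)
{
	struct example_reply_queue *rq = data;

	/* Heavy completion stream: continue in softirq, bounded by budget. */
	if (example_process_replies(rq, rq->threshold) >= rq->threshold)
		irq_poll_sched(&rq->iopoll);

	return IRQ_HANDLED;
}

static void example_reply_queue_setup(struct example_reply_queue *rq,
				      int budget)
{
	irq_poll_init(&rq->iopoll, budget, example_irqpoll_fn);
}
```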
-