Commit 97a9ed3b authored by James Smart, committed by Martin K. Petersen

scsi: lpfc: fix lpfc_nvmet_mrq to be bound by hdw queue count

Currently, lpfc_nvmet_mrq is always scaled back to min(lpfc_nvmet_mrq,
lpfc_irq_chann). There is no reason to reduce it to the number of interrupt
vectors. Rather, it should be scaled down based on the number of hardware
queues on the system (if that is lower than the maximum of 16).

Change scaling to use hardware queue count rather than interrupt vector
count.
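To make the resulting behavior concrete, the following is a minimal standalone
sketch (fake_hba, clamp_nvmet_mrq and the main() harness are invented for
illustration, not lpfc driver code) of the clamping the hunks below converge
on: lpfc_nvmet_mrq defaults to the hardware queue count, is bounded by it, and
is further capped at LPFC_NVMET_MRQ_MAX (16).

    /*
     * Hypothetical standalone sketch of the post-patch clamping;
     * fake_hba is an illustrative stand-in for struct lpfc_hba.
     */
    #include <stdio.h>

    #define LPFC_NVMET_MRQ_MAX 16   /* driver-wide MRQ ceiling */

    struct fake_hba {
            int cfg_nvmet_mrq;      /* requested NVMET MRQ count (0 = auto) */
            int cfg_hdw_queue;      /* number of hardware queues */
    };

    static void clamp_nvmet_mrq(struct fake_hba *phba)
    {
            /* Default to the hardware queue count when unset */
            if (!phba->cfg_nvmet_mrq)
                    phba->cfg_nvmet_mrq = phba->cfg_hdw_queue;

            /* Scale down to the hardware queue count, not the IRQ vector count */
            if (phba->cfg_nvmet_mrq > phba->cfg_hdw_queue)
                    phba->cfg_nvmet_mrq = phba->cfg_hdw_queue;

            /* Never exceed the MRQ maximum */
            if (phba->cfg_nvmet_mrq > LPFC_NVMET_MRQ_MAX)
                    phba->cfg_nvmet_mrq = LPFC_NVMET_MRQ_MAX;
    }

    int main(void)
    {
            struct fake_hba hba = { .cfg_nvmet_mrq = 0, .cfg_hdw_queue = 24 };

            clamp_nvmet_mrq(&hba);
            printf("nvmet_mrq = %d\n", hba.cfg_nvmet_mrq); /* 24 hdw queues -> 16 */
            return 0;
    }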

Link: https://lore.kernel.org/r/20191018211832.7917-2-jsmart2021@gmail.com
Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
parent e519a34c
@@ -7264,11 +7264,11 @@ lpfc_nvme_mod_param_dep(struct lpfc_hba *phba)
         }
         if (!phba->cfg_nvmet_mrq)
-                phba->cfg_nvmet_mrq = phba->cfg_irq_chann;
+                phba->cfg_nvmet_mrq = phba->cfg_hdw_queue;
         /* Adjust lpfc_nvmet_mrq to avoid running out of WQE slots */
-        if (phba->cfg_nvmet_mrq > phba->cfg_irq_chann) {
-                phba->cfg_nvmet_mrq = phba->cfg_irq_chann;
+        if (phba->cfg_nvmet_mrq > phba->cfg_hdw_queue) {
+                phba->cfg_nvmet_mrq = phba->cfg_hdw_queue;
                 lpfc_printf_log(phba, KERN_ERR, LOG_NVME_DISC,
                                 "6018 Adjust lpfc_nvmet_mrq to %d\n",
                                 phba->cfg_nvmet_mrq);
@@ -8630,8 +8630,8 @@ lpfc_sli4_queue_verify(struct lpfc_hba *phba)
          */
         if (phba->nvmet_support) {
-                if (phba->cfg_irq_chann < phba->cfg_nvmet_mrq)
-                        phba->cfg_nvmet_mrq = phba->cfg_irq_chann;
+                if (phba->cfg_hdw_queue < phba->cfg_nvmet_mrq)
+                        phba->cfg_nvmet_mrq = phba->cfg_hdw_queue;
                 if (phba->cfg_nvmet_mrq > LPFC_NVMET_MRQ_MAX)
                         phba->cfg_nvmet_mrq = LPFC_NVMET_MRQ_MAX;
         }
@@ -11033,8 +11033,6 @@ lpfc_sli4_enable_msix(struct lpfc_hba *phba)
                         phba->cfg_irq_chann, vectors);
                 if (phba->cfg_irq_chann > vectors)
                         phba->cfg_irq_chann = vectors;
-                if (phba->nvmet_support && (phba->cfg_nvmet_mrq > vectors))
-                        phba->cfg_nvmet_mrq = vectors;
         }

         return rc;