Commit 31f06d2e authored by James Smart, committed by Martin K. Petersen

scsi: lpfc: Limit xri count for kdump environment

scsi-mq operation inherently performs pre-allocation of resources for
blk-mq request queues. Even though the kdump environment reduces the
configuration to a single CPU, and thus one hardware queue, which helps
significantly, the resources are still rather large due to the
per-request allocations. blk-mq pre-allocations can be over 4 KB per
request. With adapter can_queue values in the 4k or 8k range, this can
easily reach 32 MB before any other driver memory is factored in.
Driver SGL DMA buffer allocation can be up to 8 KB per request as well,
adding another 64 MB. Totals are well over 100 MB for a single shost.
Given that kdump memory auto-sizing utilities don't accommodate this
amount of memory well, it's very possible for kdump to fail due to
lack of memory.
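
As a rough illustration of where those figures come from, here is a
back-of-the-envelope calculation. The per-request sizes and the
8192-entry can_queue used below are the approximate numbers quoted
above, not values read from the driver or from blk-mq.

```c
#include <stdio.h>

/* Illustrative arithmetic only: ~4 KiB blk-mq pre-allocation and
 * ~8 KiB SGL DMA buffer per request, with an 8k-deep can_queue
 * (the upper end of the range mentioned above).
 */
int main(void)
{
	unsigned long can_queue     = 8192;
	unsigned long blkmq_per_req = 4UL * 1024;  /* blk-mq per-request pre-allocation */
	unsigned long sgl_per_req   = 8UL * 1024;  /* driver SGL DMA buffer per request */

	unsigned long bytes = can_queue * (blkmq_per_req + sgl_per_req);

	/* 8192 * 12 KiB = 96 MiB, i.e. well over 100 MB once the
	 * remaining driver allocations are added on top.
	 */
	printf("estimated per-shost footprint: %lu MiB\n",
	       bytes / (1024 * 1024));
	return 0;
}
```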

Fix by having the driver recognize that it is booting within a kdump
context and reduce the number of requests it will support to a more
reasonable value.
Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
parent a9677833
@@ -39,6 +39,7 @@
 #include <linux/msi.h>
 #include <linux/irq.h>
 #include <linux/bitops.h>
+#include <linux/crash_dump.h>
 #include <scsi/scsi.h>
 #include <scsi/scsi_device.h>
@@ -8307,6 +8308,10 @@ lpfc_sli4_read_config(struct lpfc_hba *phba)
 			bf_get(lpfc_mbx_rd_conf_extnts_inuse, rd_config);
 		phba->sli4_hba.max_cfg_param.max_xri =
 			bf_get(lpfc_mbx_rd_conf_xri_count, rd_config);
+		/* Reduce resource usage in kdump environment */
+		if (is_kdump_kernel() &&
+		    phba->sli4_hba.max_cfg_param.max_xri > 512)
+			phba->sli4_hba.max_cfg_param.max_xri = 512;
 		phba->sli4_hba.max_cfg_param.xri_base =
			bf_get(lpfc_mbx_rd_conf_xri_base, rd_config);
 		phba->sli4_hba.max_cfg_param.max_vpi =
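
The second hunk is shown truncated by the diff viewer. As a
self-contained sketch of the pattern it applies: the helper and the
macro below are illustrative and do not exist in the lpfc driver; only
is_kdump_kernel() (from <linux/crash_dump.h>) and the cap of 512 XRIs
come from the patch itself.

```c
#include <linux/types.h>
#include <linux/crash_dump.h>

/* Cap chosen by the patch for kdump boots; the macro name is made up. */
#define KDUMP_XRI_LIMIT	512

/* Hypothetical helper: clamp the adapter-reported XRI count when the
 * system is running the crash (kdump) kernel, where memory is scarce.
 * is_kdump_kernel() returns false on a normal boot, so the full count
 * is kept there.
 */
static u32 limit_xri_for_kdump(u32 max_xri)
{
	if (is_kdump_kernel() && max_xri > KDUMP_XRI_LIMIT)
		return KDUMP_XRI_LIMIT;
	return max_xri;
}
```

Clamping max_xri at the point where the adapter configuration is first
read (lpfc_sli4_read_config) presumably lets the driver's later queue
and buffer sizing inherit the reduced budget without further
special-casing.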