- 22 Apr, 2024 2 commits
-
-
Bob Pearson authored
A previous commit incorrectly added an 'if (!err)' check before scheduling the requester task in rxe_post_send_kernel(). But if some send WRs were successfully added to the send queue before a bad WR was encountered, they might never get executed. Fix this by scheduling the requester task if any WQEs were successfully posted in rxe_post_send_kernel() in rxe_verbs.c. Link: https://lore.kernel.org/r/20240329145513.35381-5-rpearsonhpe@gmail.com Signed-off-by:
Bob Pearson <rpearsonhpe@gmail.com> Fixes: 5bf944f2 ("RDMA/rxe: Add error messages") Signed-off-by:
Jason Gunthorpe <jgg@nvidia.com>
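A minimal sketch of the corrected flow, assuming rxe naming conventions (the post_one_send() helper is illustrative, not the verbatim upstream diff):

    static int rxe_post_send_kernel(struct rxe_qp *qp,
                                    const struct ib_send_wr *wr,
                                    const struct ib_send_wr **bad_wr)
    {
            int err = 0;
            int posted = 0;

            while (wr) {
                    err = post_one_send(qp, wr);    /* assumed helper */
                    if (err) {
                            *bad_wr = wr;
                            break;
                    }
                    posted++;
                    wr = wr->next;
            }

            /* Schedule the requester if *any* WQE was queued, not only
             * when the whole chain succeeded.
             */
            if (posted)
                    rxe_sched_task(&qp->req.task);

            return err;
    }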
-
Bob Pearson authored
In rxe_comp_queue_pkt() an incoming response packet skb is enqueued to the resp_pkts queue, and then a decision is made whether to run the completer task inline or schedule it. Finally the skb is dereferenced to bump a 'hw' performance counter. This is wrong because if the completer task is already running in a separate thread, it may have already processed the skb and freed it, which can cause a seg fault. This has been observed infrequently in testing at high scale. Fix this by delaying enqueuing the packet until after the counter is accessed. Link: https://lore.kernel.org/r/20240329145513.35381-4-rpearsonhpe@gmail.com Signed-off-by:
Bob Pearson <rpearsonhpe@gmail.com> Fixes: 0b1e5b99 ("IB/rxe: Add port protocol stats") Signed-off-by:
Jason Gunthorpe <jgg@nvidia.com>
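A sketch of the reordering, assuming the rxe helper names; the key point is that the skb must not be dereferenced after it is enqueued, because the completer may already be freeing it on another CPU:

    static void rxe_comp_queue_pkt(struct rxe_qp *qp, struct sk_buff *skb)
    {
            int must_sched;

            /* Access the skb for the counter while we still own it. */
            must_sched = skb_queue_len(&qp->resp_pkts) > 0;
            if (must_sched)
                    rxe_counter_inc(SKB_TO_PKT(skb)->rxe,
                                    RXE_CNT_COMPLETER_SCHED);

            /* Once enqueued, the completer may free the skb at any time. */
            skb_queue_tail(&qp->resp_pkts, skb);

            if (must_sched)
                    rxe_sched_task(&qp->comp.task);
            else
                    rxe_run_task(&qp->comp.task);
    }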
-
- 21 Apr, 2024 1 commit
-
-
Michael Guralnik authored
The existing behavior of ib_umad, which maintains received MAD packets in an unbounded list, poses a risk of uncontrolled growth. As user-space applications extract packets from this list, the rate of extraction may not match the rate of incoming packets, leading to potential list overflow. To address this, we introduce a limit to the size of the list. After considering typical scenarios, such as OpenSM processing, which can handle approximately 100k packets per second, and the 1-second retry timeout for most packets, we set the list size limit to 200k. Packets received beyond this limit are dropped, assuming they are likely timed out by the time they are handled by user-space. Notably, packets queued on the receive list due to reasons like timed-out sends are preserved even when the list is full. Signed-off-by:
Michael Guralnik <michaelgur@nvidia.com> Reviewed-by:
Mark Zhang <markzhang@nvidia.com> Link: https://lore.kernel.org/r/7197cb58a7d9e78399008f25036205ceab07fbd5.1713268818.git.leon@kernel.org Signed-off-by:
Leon Romanovsky <leon@kernel.org>
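A sketch of the bound, using the 200k constant from the description; the field and parameter names are assumptions, not the exact upstream patch:

    #define MAX_UMAD_RECV_LIST_SIZE 200000  /* ~100k pkts/s x 1 s retry, doubled */

    static int queue_packet(struct ib_umad_file *file,
                            struct ib_umad_packet *packet, bool is_recv_mad)
    {
            /* Drop newly received MADs once the list is full; they would
             * likely have timed out before user space reads them anyway.
             * Requeued packets (e.g. timed-out sends) bypass the check.
             */
            if (is_recv_mad &&
                atomic_read(&file->recv_list_size) > MAX_UMAD_RECV_LIST_SIZE)
                    return -ENOMEM;

            /* ... add packet to file->recv_list and bump recv_list_size ... */
            return 0;
    }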
-
- 16 Apr, 2024 18 commits
-
-
Chengchang Tang authored
Excessive printing may lead to a kernel panic. Change ibdev_err() to ibdev_err_ratelimited(), and change the printing level of the cqe dump to debug level. Fixes: 7c044adc ("RDMA/hns: Simplify the cqe code of poll cq") Signed-off-by:
Chengchang Tang <tangchengchang@huawei.com> Signed-off-by:
Junxian Huang <huangjunxian6@hisilicon.com> Link: https://lore.kernel.org/r/20240412091616.370789-11-huangjunxian6@hisilicon.com Signed-off-by:
Leon Romanovsky <leon@kernel.org>
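Illustratively (the format strings and status variable are placeholders):

    /* Rate-limit the hot-path error print ... */
    ibdev_err_ratelimited(&hr_dev->ib_dev,
                          "error cqe status 0x%x\n", cqe_status);

    /* ... and demote the raw CQE dump to debug level. */
    print_hex_dump_debug("cqe: ", DUMP_PREFIX_NONE, 16, 4, cqe,
                         sizeof(struct hns_roce_v2_cqe), false);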
-
Chengchang Tang authored
Use complete parentheses to ensure that macro expansion does not produce unexpected results. Fixes: a25d13cb ("RDMA/hns: Add the interfaces to support multi hop addressing for the contexts in hip08") Signed-off-by:
Chengchang Tang <tangchengchang@huawei.com> Signed-off-by:
Junxian Huang <huangjunxian6@hisilicon.com> Link: https://lore.kernel.org/r/20240412091616.370789-10-huangjunxian6@hisilicon.com Signed-off-by:
Leon Romanovsky <leon@kernel.org>
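A hypothetical example of the hazard being fixed:

    #define HEM_IDX_BAD(obj, num)   obj / num        /* unparenthesized */
    #define HEM_IDX_OK(obj, num)    ((obj) / (num))

    /* HEM_IDX_BAD(a + b, n) expands to a + b / n, which is a + (b / n),
     * not the intended (a + b) / n; HEM_IDX_OK() expands correctly.
     */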
-
wenglianfa authored
Add mutex_destroy(). Signed-off-by:
wenglianfa <wenglianfa@huawei.com> Signed-off-by:
Junxian Huang <huangjunxian6@hisilicon.com> Link: https://lore.kernel.org/r/20240412091616.370789-9-huangjunxian6@hisilicon.com Signed-off-by:
Leon Romanovsky <leon@kernel.org>
-
Chengchang Tang authored
GMV's BA table only supports 4K pages. Currently, PAGE_SIZE is used to calculate gmv_bt_num, which results in an incorrect gmv_bt_num on a system with 64K pages. Fixes: d6d91e46 ("RDMA/hns: Add support for configuring GMV table") Signed-off-by:
Chengchang Tang <tangchengchang@huawei.com> Signed-off-by:
Junxian Huang <huangjunxian6@hisilicon.com> Link: https://lore.kernel.org/r/20240412091616.370789-8-huangjunxian6@hisilicon.com Signed-off-by:
Leon Romanovsky <leon@kernel.org>
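The arithmetic at issue, sketched with assumed field names: the per-table entry count must be derived from the fixed 4 KB unit the hardware supports, not the kernel page size.

    /* before: entries per BA page varies with the host page size */
    gmv_bt_num = DIV_ROUND_UP(caps->gmv_entry_num,
                              PAGE_SIZE / caps->gmv_entry_sz);

    /* after: the GMV BA table always uses 4K pages */
    gmv_bt_num = DIV_ROUND_UP(caps->gmv_entry_num,
                              SZ_4K / caps->gmv_entry_sz);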
-
wenglianfa authored
When dma_alloc_coherent() fails in hns_roce_alloc_hem(), just call kfree() to release hem instead of hns_roce_free_hem(). Fixes: c00743cb ("RDMA/hns: Simplify 'struct hns_roce_hem' allocation") Signed-off-by:
wenglianfa <wenglianfa@huawei.com> Signed-off-by:
Junxian Huang <huangjunxian6@hisilicon.com> Link: https://lore.kernel.org/r/20240412091616.370789-7-huangjunxian6@hisilicon.com Signed-off-by:
Leon Romanovsky <leon@kernel.org>
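A sketch of the corrected unwind; at this point only the hem struct itself has been allocated, so there is nothing else for the full teardown helper to undo:

    hem = kmalloc(sizeof(*hem), gfp_mask);
    if (!hem)
            return NULL;

    buf = dma_alloc_coherent(hr_dev->dev, size, &dma_handle, gfp_mask);
    if (!buf) {
            kfree(hem);             /* not hns_roce_free_hem() */
            return NULL;
    }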
-
Chengchang Tang authored
The refcount of CQ is not protected by locks. When CQ asynchronous events and CQ destruction run concurrently, the CQ may have already been released, which will cause a use-after-free (UAF). Use xa_lock() to protect the CQ refcount. Fixes: 9a443537 ("IB/hns: Add driver files for hns RoCE driver") Signed-off-by:
Chengchang Tang <tangchengchang@huawei.com> Signed-off-by:
Junxian Huang <huangjunxian6@hisilicon.com> Link: https://lore.kernel.org/r/20240412091616.370789-6-huangjunxian6@hisilicon.com Signed-off-by:
Leon Romanovsky <leon@kernel.org>
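A sketch of the protected lookup in the async event path (field names assumed):

    xa_lock(&hr_dev->cq_table.array);
    hr_cq = xa_load(&hr_dev->cq_table.array, cqn);
    if (hr_cq)
            refcount_inc(&hr_cq->refcount);
    xa_unlock(&hr_dev->cq_table.array);

    /* The destroy side drops its reference under the same xa_lock, so
     * the CQ cannot be freed between the lookup and the refcount_inc().
     */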
-
Chengchang Tang authored
The xa_lock for the SRQ table may be required in the AEQ. Use xa_store_irq()/xa_erase_irq() to avoid deadlock. Fixes: 81fce629 ("RDMA/hns: Add SRQ asynchronous event support") Signed-off-by:
Chengchang Tang <tangchengchang@huawei.com> Signed-off-by:
Junxian Huang <huangjunxian6@hisilicon.com> Link: https://lore.kernel.org/r/20240412091616.370789-5-huangjunxian6@hisilicon.com Signed-off-by:
Leon Romanovsky <leon@kernel.org>
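Sketch of the idiom: because the AEQ handler may take the same xa_lock from interrupt context, process-context users must disable interrupts while holding it, which the _irq variants do (field names assumed):

    /* process context: store/erase with interrupts disabled */
    ret = xa_err(xa_store_irq(&srq_table->xa, srq->srqn, srq, GFP_KERNEL));
    if (ret)
            return ret;
    /* ... */
    xa_erase_irq(&srq_table->xa, srq->srqn);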
-
Chengchang Tang authored
Add max_ah and cq moderation capacities to hns_roce_query_device(). Fixes: 9a443537 ("IB/hns: Add driver files for hns RoCE driver") Signed-off-by:
Chengchang Tang <tangchengchang@huawei.com> Signed-off-by:
Junxian Huang <huangjunxian6@hisilicon.com> Link: https://lore.kernel.org/r/20240412091616.370789-4-huangjunxian6@hisilicon.com Signed-off-by:
Leon Romanovsky <leon@kernel.org>
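Illustratively, the new reporting in hns_roce_query_device() might look like this; the limit macros and values are assumptions, only the props fields are standard ib_device_attr members:

    props->max_ah = INT_MAX;        /* assumed: AHs are software objects */
    props->cq_caps.max_cq_moderation_period = HNS_ROCE_MAX_CQ_PERIOD; /* assumed macro */
    props->cq_caps.max_cq_moderation_count = HNS_ROCE_MAX_CQ_COUNT;   /* assumed macro */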
-
Chengchang Tang authored
Remove unused parameters and variables. Signed-off-by:
Chengchang Tang <tangchengchang@huawei.com> Signed-off-by:
Junxian Huang <huangjunxian6@hisilicon.com> Link: https://lore.kernel.org/r/20240412091616.370789-3-huangjunxian6@hisilicon.com Signed-off-by:
Leon Romanovsky <leon@kernel.org>
-
Yangyang Li authored
Use macro instead of magic number. Signed-off-by:
Yangyang Li <liyangyang20@huawei.com> Signed-off-by:
Chengchang Tang <tangchengchang@huawei.com> Signed-off-by:
Junxian Huang <huangjunxian6@hisilicon.com> Link: https://lore.kernel.org/r/20240412091616.370789-2-huangjunxian6@hisilicon.com Signed-off-by:
Leon Romanovsky <leon@kernel.org>
-
Zhengchao Shao authored
As described in the ib_map_mr_sg function comment, it returns the number of sg elements that were mapped to the memory region. However, hns_roce_map_mr_sg returns the number of pages required for mapping the DMA area. Fix it. Fixes: 9b2cf76c ("RDMA/hns: Optimize PBL buffer allocation process") Signed-off-by:
Zhengchao Shao <shaozhengchao@huawei.com> Link: https://lore.kernel.org/r/20240411033851.2884771-1-shaozhengchao@huawei.com Reviewed-by:
Junxian Huang <huangjunxian6@hisilicon.com> Signed-off-by:
Leon Romanovsky <leon@kernel.org>
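The contract at issue, sketched: ib_map_mr_sg() callers interpret the return value as the number of mapped SG entries, so the driver must not return its internal page count (body abbreviated, variable names assumed):

    static int hns_roce_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg,
                                  int sg_nents, unsigned int *sg_offset)
    {
            int mapped_sges;

            /* ... build the PBL from the scatterlist ... */

            return mapped_sges;     /* SG entries mapped, not PBL pages */
    }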
-
Konstantin Taranov authored
Set local mac address in RNIC, which is required by the HW. Signed-off-by:
Konstantin Taranov <kotaranov@microsoft.com> Link: https://lore.kernel.org/r/1712738551-22075-7-git-send-email-kotaranov@linux.microsoft.com Signed-off-by:
Leon Romanovsky <leon@kernel.org>
-
Konstantin Taranov authored
Implement add_gid and del_gid for RNIC. IPv4 and IPv6 addresses are supported. Signed-off-by:
Konstantin Taranov <kotaranov@microsoft.com> Link: https://lore.kernel.org/r/1712738551-22075-6-git-send-email-kotaranov@linux.microsoft.com Signed-off-by:
Leon Romanovsky <leon@kernel.org>
-
Konstantin Taranov authored
Set netdev and RoCEv2 flag to enable GID population on port 1. Use GIDs of the master netdev. As mc->ports[] stores slave devices, use a helper to get the master netdev. Signed-off-by:
Konstantin Taranov <kotaranov@microsoft.com> Link: https://lore.kernel.org/r/1712738551-22075-5-git-send-email-kotaranov@linux.microsoft.com Signed-off-by:
Leon Romanovsky <leon@kernel.org>
-
Konstantin Taranov authored
Implement port parameters for the RNIC: 1) extend the query_port() method; 2) implement get_link_layer(); 3) implement query_pkey(). Only port 1 can store GIDs. Signed-off-by:
Konstantin Taranov <kotaranov@microsoft.com> Link: https://lore.kernel.org/r/1712738551-22075-4-git-send-email-kotaranov@linux.microsoft.com Signed-off-by:
Leon Romanovsky <leon@kernel.org>
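Plausible shapes for the two simpler ops, since RoCEv2 implies an Ethernet link layer and the default pkey; these are sketches, not the exact upstream bodies:

    static enum rdma_link_layer mana_ib_get_link_layer(struct ib_device *ibdev,
                                                       u32 port)
    {
            return IB_LINK_LAYER_ETHERNET;
    }

    static int mana_ib_query_pkey(struct ib_device *ibdev, u32 port,
                                  u16 index, u16 *pkey)
    {
            if (index != 0)
                    return -EINVAL;
            *pkey = 0xffff;         /* IB_DEFAULT_PKEY_FULL */
            return 0;
    }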
-
Konstantin Taranov authored
Add functions for RNIC creation and destruction. If creation fails, the ib_probe fails as well. Signed-off-by:
Konstantin Taranov <kotaranov@microsoft.com> Link: https://lore.kernel.org/r/1712738551-22075-3-git-send-email-kotaranov@linux.microsoft.com Signed-off-by:
Leon Romanovsky <leon@kernel.org>
-
Konstantin Taranov authored
Create an error EQ for the RNIC adapter. Signed-off-by:
Konstantin Taranov <kotaranov@microsoft.com> Link: https://lore.kernel.org/r/1712738551-22075-2-git-send-email-kotaranov@linux.microsoft.com Signed-off-by:
Leon Romanovsky <leon@kernel.org>
-
Konstantin Taranov authored
Use num_comp_vectors of struct ib_device instead of max_num_queues from gdma_context. Signed-off-by:
Konstantin Taranov <kotaranov@microsoft.com> Link: https://lore.kernel.org/r/1712911656-17352-1-git-send-email-kotaranov@linux.microsoft.com Reviewed-by:
Long Li <longli@microsoft.com> Signed-off-by:
Leon Romanovsky <leon@kernel.org>
-
- 11 Apr, 2024 6 commits
-
-
Zhu Yanjun authored
In the function __rxe_add_to_pool(), xa_alloc_cyclic() is called. Its documented return values are: "0 if the allocation succeeded without wrapping, 1 if the allocation succeeded after wrapping, -ENOMEM if memory could not be allocated, or -EBUSY if there are no free entries in @limit." But currently __rxe_add_to_pool() only returns -EINVAL. Any returned error value should be propagated to the caller. Signed-off-by:
Zhu Yanjun <yanjun.zhu@linux.dev> Link: https://lore.kernel.org/r/20240408142142.792413-1-yanjun.zhu@linux.dev Signed-off-by:
Leon Romanovsky <leon@kernel.org>
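Sketch of the propagation fix (label name assumed); note the `err < 0` test, since a return of 1 (success after wrapping) is not an error:

    err = xa_alloc_cyclic(&pool->xa, &elem->index, NULL, pool->limit,
                          &pool->next, GFP_KERNEL);
    if (err < 0)
            goto err_cnt;   /* was: return -EINVAL; now the real value */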
-
Konstantin Taranov authored
Remove printing the ret value on success, as it was always 0. Signed-off-by:
Konstantin Taranov <kotaranov@microsoft.com> Link: https://lore.kernel.org/r/1712672465-29960-1-git-send-email-kotaranov@linux.microsoft.com Reviewed-by:
Long Li <longli@microsoft.com> Signed-off-by:
Leon Romanovsky <leon@kernel.org>
-
Leon Romanovsky authored
The "struct mana_cfg_rx_steer_req_v2" uses a dynamically sized set of trailing elements. Specifically, it uses a "mana_handle_t" array. So, use the preferred way in the kernel declaring a flexible array [1]. At the same time, prepare for the coming implementation by GCC and Clang of the __counted_by attribute. Flexible array members annotated with __counted_by can have their accesses bounds-checked at run-time via CONFIG_UBSAN_BOUNDS (for array indexing) and CONFIG_FORTIFY_SOURCE (for strcpy/memcpy-family functions). Also, avoid the open-coded arithmetic in the memory allocator functions [2] using the "struct_size" macro. Moreover, use the "offsetof" helper to get the indirect table offset instead of the "sizeof" operator and avoid the open-coded arithmetic in pointers using the new flex member. This new structure member also allow us to remove the "req_indir_tab" variable since it is no longer needed. Now, it is also possible to use the "flex_array_size" helper to compute the size of these trailing elements in the "memcpy" function. Specifically, the first commit adds the flex member and the patches 2 and 3 refactor the consumers of the "struct mana_cfg_rx_steer_req_v2". This code was detected with the help of Coccinelle, and audited and modified manually. The Coccinelle script used to detect this code pattern is the following: virtual report @rule1@ type t1; type t2; identifier i0; identifier i1; identifier i2; identifier ALLOC =~ "kmalloc|kzalloc|kmalloc_node|kzalloc_node|vmalloc|vzalloc|kvmalloc|kvzalloc"; position p1; @@ i0 = sizeof(t1) + sizeof(t2) * i1; ... i2 = ALLOC@p1(..., i0, ...); @script:python depends on report@ p1 << rule1.p1; @@ msg = "WARNING: verify allocation on line %s" % (p1[0].line) coccilib.report.print_report(p1[0],msg) [1] https://www.kernel.org/doc/html/next/process/deprecated.html#zero-length-and-one-element-arrays [2] https://www.kernel.org/doc/html/next/process/deprecated.html#open-coded-arithmetic-in-allocator-arguments Link: https://lore.kernel.org/all/AS8PR02MB72374BD1B23728F2E3C3B1A18B022@AS8PR02MB7237.eurprd02.prod.outlook.comSigned-off-by:
Erick Archer <erick.archer@outlook.com> Signed-off-by:
Leon Romanovsky <leon@kernel.org>

* mana-ib-flex:
  net: mana: Avoid open coded arithmetic
  RDMA/mana_ib: Prefer struct_size over open coded arithmetic
  net: mana: Add flex array to struct mana_cfg_rx_steer_req_v2
-
Erick Archer authored
This is an effort to get rid of all multiplications from allocation functions in order to prevent integer overflows [1][2]. As the "req" variable is a pointer to "struct mana_cfg_rx_steer_req_v2" and this structure ends in a flexible array: struct mana_cfg_rx_steer_req_v2 { [...] mana_handle_t indir_tab[] __counted_by(num_indir_entries); }; the preferred way in the kernel is to use the struct_size() helper to do the arithmetic instead of the calculation "size + size * count" in the kzalloc() function. Moreover, use the "offsetof" helper to get the indirect table offset instead of the "sizeof" operator, and avoid the open-coded arithmetic in pointers by using the new flex member. This new structure member also allows us to remove the "req_indir_tab" variable since it is no longer needed. Now, it is also possible to use the "flex_array_size" helper to compute the size of these trailing elements in the "memcpy" function. This way, the code is more readable and safer. This code was detected with the help of Coccinelle, and audited and modified manually. Link: https://www.kernel.org/doc/html/latest/process/deprecated.html#open-coded-arithmetic-in-allocator-arguments [1] Link: https://github.com/KSPP/linux/issues/160 [2] Signed-off-by:
Erick Archer <erick.archer@outlook.com> Link: https://lore.kernel.org/r/AS8PR02MB7237A21355C86EC0DCC0D83B8B022@AS8PR02MB7237.eurprd02.prod.outlook.com Reviewed-by:
Justin Stitt <justinstitt@google.com> Signed-off-by:
Leon Romanovsky <leon@kernel.org>
-
Erick Archer authored
This is an effort to get rid of all multiplications from allocation functions in order to prevent integer overflows [1][2]. As the "req" variable is a pointer to "struct mana_cfg_rx_steer_req_v2" and this structure ends in a flexible array: struct mana_cfg_rx_steer_req_v2 { [...] mana_handle_t indir_tab[] __counted_by(num_indir_entries); }; the preferred way in the kernel is to use the struct_size() helper to do the arithmetic instead of the calculation "size + size * count" in the kzalloc() function. Moreover, use the "offsetof" helper to get the indirect table offset instead of the "sizeof" operator, and avoid the open-coded arithmetic in pointers by using the new flex member. This new structure member also allows us to remove the "req_indir_tab" variable since it is no longer needed. This way, the code is more readable and safer. This code was detected with the help of Coccinelle, and audited and modified manually. Link: https://www.kernel.org/doc/html/latest/process/deprecated.html#open-coded-arithmetic-in-allocator-arguments [1] Link: https://github.com/KSPP/linux/issues/160 [2] Signed-off-by:
Erick Archer <erick.archer@outlook.com> Link: https://lore.kernel.org/r/AS8PR02MB72375EB06EE1A84A67BE722E8B022@AS8PR02MB7237.eurprd02.prod.outlook.com Reviewed-by:
Long Li <longli@microsoft.com> Reviewed-by:
Justin Stitt <justinstitt@google.com> Signed-off-by:
Leon Romanovsky <leon@kernel.org>
-
Erick Archer authored
The "struct mana_cfg_rx_steer_req_v2" uses a dynamically sized set of trailing elements. Specifically, it uses a "mana_handle_t" array. So, use the preferred way in the kernel declaring a flexible array [1]. At the same time, prepare for the coming implementation by GCC and Clang of the __counted_by attribute. Flexible array members annotated with __counted_by can have their accesses bounds-checked at run-time via CONFIG_UBSAN_BOUNDS (for array indexing) and CONFIG_FORTIFY_SOURCE (for strcpy/memcpy-family functions). This is a previous step to refactor the two consumers of this structure. drivers/infiniband/hw/mana/qp.c drivers/net/ethernet/microsoft/mana/mana_en.c The ultimate goal is to avoid the open-coded arithmetic in the memory allocator functions [2] using the "struct_size" macro. Link: https://www.kernel.org/doc/html/next/process/deprecated.html#zero-length-and-one-element-arrays [1] Link: https://www.kernel.org/doc/html/next/process/deprecated.html#open-coded-arithmetic-in-allocator-arguments [2] Signed-off-by:
Erick Archer <erick.archer@outlook.com> Link: https://lore.kernel.org/r/AS8PR02MB7237E2900247571C9CB84C678B022@AS8PR02MB7237.eurprd02.prod.outlook.com Reviewed-by:
Long Li <longli@microsoft.com> Reviewed-by:
Justin Stitt <justinstitt@google.com> Signed-off-by:
Leon Romanovsky <leon@kernel.org>
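Taken together, the three patches move from open-coded arithmetic to the kernel's sized helpers; a condensed sketch (struct layout abbreviated, "n" and "tab" illustrative):

    struct mana_cfg_rx_steer_req_v2 {
            struct gdma_req_hdr hdr;
            /* ... */
            u32 num_indir_entries;
            /* ... */
            mana_handle_t indir_tab[] __counted_by(num_indir_entries);
    };

    /* was: kzalloc(sizeof(*req) + sizeof(mana_handle_t) * n, GFP_KERNEL) */
    req = kzalloc(struct_size(req, indir_tab, n), GFP_KERNEL);

    /* was: memcpy(req_indir_tab, tab, sizeof(mana_handle_t) * n) */
    memcpy(req->indir_tab, tab, flex_array_size(req, indir_tab, n));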
-
- 09 Apr, 2024 1 commit
-
-
Junxian Huang authored
Add support for DSCP configuration. For DSCP, get dscp-prio mapping via hns3 nic driver api .get_dscp_prio() and fill the SL (in WQE for UD or in QPC for RC) with the priority value. The prio-tc mapping is configured to HW by hns3 nic driver. HW will select a corresponding TC according to SL and the prio-tc mapping. Signed-off-by:
Junxian Huang <huangjunxian6@hisilicon.com> Link: https://lore.kernel.org/r/20240315093551.1650088-1-huangjunxian6@hisilicon.com Signed-off-by:
Leon Romanovsky <leon@kernel.org>
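An illustrative flow based on the description; only the .get_dscp_prio() op name comes from the text, the signature and surrounding variables are assumptions:

    u8 dscp = grh->traffic_class >> 2;      /* DSCP is the top 6 bits */
    u8 prio;
    int ret;

    ret = get_dscp_prio(netdev, dscp, &prio);  /* assumed hns3 nic op shape */
    if (!ret)
            hr_qp->sl = prio;       /* into the WQE for UD, the QPC for RC */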
-
- 08 Apr, 2024 3 commits
-
-
Or Har-Toov authored
Currently IB_ACCESS_REMOTE_ATOMIC is blocked from being updated via UMR although in some cases it should be possible. These cases are checked in the mlx5r_umr_can_reconfig() function. Fixes: ef3642c4 ("RDMA/mlx5: Fix error unwinds for rereg_mr") Signed-off-by:
Or Har-Toov <ohartoov@nvidia.com> Link: https://lore.kernel.org/r/24dac73e2fa48cb806f33a932d97f3e402a5ea2c.1712140377.git.leon@kernel.org Signed-off-by:
Leon Romanovsky <leon@kernel.org>
-
Or Har-Toov authored
umem can be NULL for user application mkeys in some cases. Therefore, umem can't be used to check whether the mkey is cacheable; check a flag that indicates it instead. Also make sure that all mkeys which are not returned to the cache will be destroyed. Fixes: dd1b913f ("RDMA/mlx5: Cache all user cacheable mkeys on dereg MR flow") Signed-off-by:
Or Har-Toov <ohartoov@nvidia.com> Link: https://lore.kernel.org/r/2690bc5c6896bcb937f89af16a1ff0343a7ab3d0.1712140377.git.leon@kernel.org Signed-off-by:
Leon Romanovsky <leon@kernel.org>
-
Or Har-Toov authored
As some mkeys can't be modified with UMR due to some UMR limitations, like the size of translation that can be updated, not all user mkeys can be cached. Fixes: dd1b913f ("RDMA/mlx5: Cache all user cacheable mkeys on dereg MR flow") Signed-off-by:
Or Har-Toov <ohartoov@nvidia.com> Link: https://lore.kernel.org/r/f2742dd934ed73b2d32c66afb8e91b823063880c.1712140377.git.leon@kernel.org Signed-off-by:
Leon Romanovsky <leon@kernel.org>
-
- 02 Apr, 2024 4 commits
-
-
Konstantin Taranov authored
Use struct mana_ib_queue and its helpers for RAW QPs Signed-off-by:
Konstantin Taranov <kotaranov@microsoft.com> Link: https://lore.kernel.org/r/1711483688-24358-5-git-send-email-kotaranov@linux.microsoft.com Reviewed-by:
Long Li <longli@microsoft.com> Signed-off-by:
Leon Romanovsky <leon@kernel.org>
-
Konstantin Taranov authored
Use struct mana_ib_queue and its helpers for WQs Signed-off-by:
Konstantin Taranov <kotaranov@microsoft.com> Link: https://lore.kernel.org/r/1711483688-24358-4-git-send-email-kotaranov@linux.microsoft.com Reviewed-by:
Long Li <longli@microsoft.com> Signed-off-by:
Leon Romanovsky <leon@kernel.org>
-
Konstantin Taranov authored
Use struct mana_ib_queue and its helpers for CQs Signed-off-by:
Konstantin Taranov <kotaranov@microsoft.com> Link: https://lore.kernel.org/r/1711483688-24358-3-git-send-email-kotaranov@linux.microsoft.com Reviewed-by:
Long Li <longli@microsoft.com> Signed-off-by:
Leon Romanovsky <leon@kernel.org>
-
Konstantin Taranov authored
Introduce helpers to work with mana ib queues (struct mana_ib_queue). A queue always consists of umem, gdma_region, and id. A queue can become a WQ or a CQ. Signed-off-by:
Konstantin Taranov <kotaranov@microsoft.com> Link: https://lore.kernel.org/r/1711483688-24358-2-git-send-email-kotaranov@linux.microsoft.com Reviewed-by:
Long Li <longli@microsoft.com> Signed-off-by:
Leon Romanovsky <leon@kernel.org>
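Per the description, the abstraction looks roughly like this (a sketch; the helper signatures are assumptions):

    struct mana_ib_queue {
            struct ib_umem  *umem;          /* pinned user memory */
            u64             gdma_region;    /* HW-registered DMA region */
            u64             id;             /* queue id, WQ or CQ */
    };

    int mana_ib_create_queue(struct mana_ib_dev *mdev, u64 addr, u32 size,
                             struct mana_ib_queue *queue);
    void mana_ib_destroy_queue(struct mana_ib_dev *mdev,
                               struct mana_ib_queue *queue);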
-
- 01 Apr, 2024 4 commits
-
-
Wenchao Hao authored
struct rdma_restrack_entry's kern_name was set to KBUILD_MODNAME in ib_create_cq(), but if the module exited without deleting this rdma_restrack_entry, an invalid address access would occur in rdma_restrack_clean() when printing the owner of the rdma_restrack_entry. This code was used to help find a forgotten PD release in one of the ULPs, but it is not needed anymore, so delete it. Signed-off-by:
Wenchao Hao <haowenchao2@huawei.com> Link: https://lore.kernel.org/r/20240318092320.1215235-1-haowenchao2@huawei.com Signed-off-by:
Leon Romanovsky <leon@kernel.org>
-
Boshi Yu authored
The dma_alloc_coherent() interface automatically zeroes the memory it returns. Thus, we do not need to specify the __GFP_ZERO flag explicitly when calling dma_alloc_coherent(). Reviewed-by:
Cheng Xu <chengyou@linux.alibaba.com> Signed-off-by:
Boshi Yu <boshiyu@linux.alibaba.com> Link: https://lore.kernel.org/r/20240311113821.22482-4-boshiyu@alibaba-inc.com Signed-off-by:
Leon Romanovsky <leon@kernel.org>
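For reference, dma_alloc_coherent() has zeroed the buffers it returns since kernel v5.0, so the flag is redundant:

    /* was: dma_alloc_coherent(dev, size, &dma_addr, GFP_KERNEL | __GFP_ZERO); */
    buf = dma_alloc_coherent(dev, size, &dma_addr, GFP_KERNEL);
    /* buf is already zeroed on success */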
-
Boshi Yu authored
There exist two different names for the doorbell records: db_info and db_record. Uniformly use dbrec for the CPU address of the doorbell record and dbrec_dma for its DMA address. Reviewed-by:
Cheng Xu <chengyou@linux.alibaba.com> Signed-off-by:
Boshi Yu <boshiyu@linux.alibaba.com> Link: https://lore.kernel.org/r/20240311113821.22482-3-boshiyu@alibaba-inc.com Signed-off-by:
Leon Romanovsky <leon@kernel.org>
-
Boshi Yu authored
Currently, the 8-byte doorbell record is allocated along with the queue buffer, which may result in wasted DMA space when the queue buffer is page aligned. To address this issue, we introduce a dma pool named db_pool and allocate doorbell records from it. Reviewed-by:
Cheng Xu <chengyou@linux.alibaba.com> Signed-off-by:
Boshi Yu <boshiyu@linux.alibaba.com> Link: https://lore.kernel.org/r/20240311113821.22482-2-boshiyu@alibaba-inc.com Signed-off-by:
Leon Romanovsky <leon@kernel.org>
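A sketch of the pool (the name, size macro, and alignment are assumptions): a dma_pool packs many small allocations into shared pages instead of burning a page-aligned coherent buffer per 8-byte record.

    dev->db_pool = dma_pool_create("erdma_dbrec", &pdev->dev,
                                   ERDMA_DBREC_SIZE /* 8 */,
                                   ERDMA_DBREC_SIZE, 0);

    dbrec = dma_pool_zalloc(dev->db_pool, GFP_KERNEL, &dbrec_dma);
    /* ... use dbrec / dbrec_dma ... */
    dma_pool_free(dev->db_pool, dbrec, dbrec_dma);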
-
- 31 Mar, 2024 1 commit
-
-
Linus Torvalds authored
-