Since kmalloc() is nowadays [1] guaranteed to return naturally aligned (i.e., aligned to the size itself) memory for power-of-2 sizes, we don't need to over-allocate by the align amount, compute an aligned address within the allocation, and (for later freeing) also store the original pointer [2].

Instead, just round up the length we want to allocate to the alignment requirement, then round that up to the next power of 2.

In theory, this could allocate up to about twice as much memory as needed. In practice, (a) kmalloc() would in most cases anyway return a power-of-2-sized allocation, and (b) with the default values of the bdRingLen[RT]x fields, the length is already itself a power of 2 greater than the alignment.

So we actually end up saving memory compared to the current situation (e.g. for tx, we currently allocate 128+32 bytes, which kmalloc() likely rounds up to 192 or 256; with this patch, we just allocate 128 bytes).

Also, struct ucc_geth_private becomes a little smaller.

[1] 59bb4798 ("mm, sl[aou]b: guarantee natural alignment for kmalloc(power-of-two)")

[2] That storing was anyway done in a u32, which works on 32-bit machines, but is not very elegant and certainly makes a reader of the code pause for a while.

Signed-off-by: Rasmus Villemoes <rasmus.villemoes@prevas.dk>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
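---

A minimal sketch of the rounding scheme described above, assuming align is a power of 2; the function name alloc_bd_ring and its parameters are illustrative, not taken from the ucc_geth driver itself:

	#include <linux/kernel.h>
	#include <linux/log2.h>
	#include <linux/slab.h>

	/*
	 * Round the wanted length up to the alignment requirement, then up
	 * to the next power of 2. Per commit 59bb4798, kmalloc() returns
	 * naturally aligned memory for power-of-2 sizes, so the returned
	 * pointer is automatically aligned to at least 'align' (assumed to
	 * be a power of 2), with no need to over-allocate and realign.
	 */
	static void *alloc_bd_ring(size_t len, size_t align)
	{
		return kmalloc(roundup_pow_of_two(roundup(len, align)), GFP_KERNEL);
	}

Freeing then needs no separately stored original pointer: kfree() on the returned address suffices, which is what lets the extra u32 field be dropped from struct ucc_geth_private.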