    mm: avoid overflows in dirty throttling logic · 385d838d
    Jan Kara authored
    The dirty throttling logic is interspersed with assumptions that dirty
    limits in PAGE_SIZE units fit into 32 bits (so that various
    multiplications fit into 64 bits).  If limits end up being larger, we
    will hit overflows, possible divisions by 0, etc.  Fix these problems
    by never allowing such large dirty limits, as they have dubious
    practical value anyway.  For the dirty_bytes / dirty_background_bytes
    interfaces we can simply refuse to set such large limits.  For
    dirty_ratio / dirty_background_ratio it isn't as simple, since the
    dirty limit is computed from the amount of available memory, which can
    change due to memory hotplug etc.  So when converting dirty limits
    from ratios to numbers of pages, we just don't allow the result to
    exceed UINT_MAX.
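
    As an illustration, here is a minimal user-space sketch of the two
    guards described above; it is not the actual page-writeback.c change,
    and the names pages_from_ratio, validate_dirty_bytes and
    available_memory_pages are hypothetical, chosen only for this example:

    #include <errno.h>
    #include <limits.h>
    #include <stdio.h>

    #define PAGE_SIZE 4096UL

    /*
     * Ratio-based limits: convert a percentage of available memory into a
     * number of pages, but never let the result exceed UINT_MAX so that
     * later multiplications still fit into 64 bits.
     */
    static unsigned long pages_from_ratio(unsigned long available_memory_pages,
                                          unsigned int ratio)
    {
        unsigned long thresh = available_memory_pages * ratio / 100;

        return thresh > UINT_MAX ? UINT_MAX : thresh;
    }

    /*
     * Byte-based limits (dirty_bytes / dirty_background_bytes): simply
     * refuse values whose page count would not fit into 32 bits.
     */
    static int validate_dirty_bytes(unsigned long long bytes)
    {
        if (bytes / PAGE_SIZE > UINT_MAX)
            return -ERANGE;
        return 0;
    }

    int main(void)
    {
        /* Roughly 64 TB of memory in 4 KB pages (64-bit build assumed). */
        unsigned long pages = 64UL << (40 - 12);

        printf("40%% of 64 TB, clamped: %lu pages\n",
               pages_from_ratio(pages, 40));
        printf("setting dirty_bytes to 32 TB: %d\n",
               validate_dirty_bytes(32ULL << 40));
        return 0;
    }

    The 32 TB example is rejected because UINT_MAX pages of 4 KB correspond
    to roughly 16 TB, matching the threshold mentioned below.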
    
    This is a root-only triggerable problem which occurs when the operator
    sets dirty limits to >16 TB.
    
    Link: https://lkml.kernel.org/r/20240621144246.11148-2-jack@suse.cz
    Signed-off-by: Jan Kara <jack@suse.cz>
    Reported-by: Zach O'Keefe <zokeefe@google.com>
    Reviewed-By: Zach O'Keefe <zokeefe@google.com>
    Cc: <stable@vger.kernel.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>