    mm, page_alloc: more extensive free page checking with debug_pagealloc · 4462b32c
    Vlastimil Babka authored
    The page allocator checks struct pages for expected state (mapcount,
    flags etc) as pages are being allocated (check_new_page()) and freed
    (free_pages_check()) to provide some defense against errors in page
    allocator users.
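
    A minimal sketch of the kind of state such a check verifies (the real
    check_new_page()/free_pages_check() inspect the actual struct page;
    the struct, field names and flag mask below are simplified stand-ins,
    not the kernel's layout):

    ```c
    #include <assert.h>
    #include <stdbool.h>

    /* Illustrative model only: a free page is expected to be unmapped and
     * to carry no leftover state flags from its previous user. */
    struct page_model {
            int mapcount;           /* -1 means "not mapped" in this model */
            unsigned long flags;    /* page state flags */
    };

    #define BAD_FLAGS_MASK 0x3UL    /* hypothetical flags that must be clear */

    static bool page_expected_state(const struct page_model *page)
    {
            if (page->mapcount != -1)
                    return false;   /* still mapped: use-after-free suspect */
            if (page->flags & BAD_FLAGS_MASK)
                    return false;   /* stale state from the previous user */
            return true;
    }
    ```

    A page failing such a check typically points at a bug in the allocator's
    caller (double free, write after free) rather than in the allocator itself.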
    
    Prior to commits 479f854a ("mm, page_alloc: defer debugging checks of
    pages allocated from the PCP") and 4db7548c ("mm, page_alloc: defer
    debugging checks of freed pages until a PCP drain"), these checks were
    performed also for order-0 pages as they were allocated from or freed
    to the per-cpu caches (pcplists).  Since those are fast paths, the
    checks are now performed only when pages are moved between pcplists
    and the global free lists.  This however lowers the chances of
    catching errors soon enough.
    
    To increase the chances of the checks catching errors, the kernel has
    to be rebuilt with CONFIG_DEBUG_VM, which also enables multiple other
    internal debug checks (VM_BUG_ON() etc.).  That is suboptimal when the
    goal is to catch errors in mm users, not in mm code itself.
    
    To catch some wrong users of the page allocator we have
    CONFIG_DEBUG_PAGEALLOC, which is designed to have virtually no
    overhead unless enabled at boot time.  Memory corruptions from writing
    to freed pages often have the same underlying causes (use-after-free,
    double free) as corruptions of the corresponding struct pages, so this
    existing debugging functionality is a good fit to extend by also
    performing struct page checks, at least as often as if CONFIG_DEBUG_VM
    were enabled.
    
    Specifically, after this patch, when debug_pagealloc is enabled on boot,
    and CONFIG_DEBUG_VM disabled, pages are checked when allocated from or
    freed to the pcplists *in addition* to being moved between pcplists and
    free lists.  When both debug_pagealloc and CONFIG_DEBUG_VM are enabled,
    pages are checked when being moved between pcplists and free lists *in
    addition* to when allocated from or freed to the pcplists.
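
    The resulting placement of the checks can be sketched as two predicates
    over the two configuration knobs (the booleans below stand in for
    IS_ENABLED(CONFIG_DEBUG_VM) and the runtime debug_pagealloc_enabled()
    test; this is a model of the decision described above, not the kernel's
    actual code):

    ```c
    #include <assert.h>
    #include <stdbool.h>

    /* Check pages as they are allocated from / freed to the pcplists? */
    static bool check_on_pcp_alloc_free(bool debug_vm, bool debug_pagealloc)
    {
            /* CONFIG_DEBUG_VM always checks here; debug_pagealloc now
             * adds these checks even without CONFIG_DEBUG_VM. */
            return debug_vm || debug_pagealloc;
    }

    /* Check pages as they move between pcplists and global free lists? */
    static bool check_on_freelist_move(bool debug_vm, bool debug_pagealloc)
    {
            /* Without CONFIG_DEBUG_VM this is the only place checks ran
             * before; with CONFIG_DEBUG_VM, debug_pagealloc adds them. */
            return !debug_vm || debug_pagealloc;
    }
    ```

    With both knobs off, only the freelist-move checks remain, matching the
    pre-patch non-debug behaviour.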
    
    When debug_pagealloc is not enabled on boot, the overhead in the fast
    paths should be virtually none, thanks to the use of a static key.
    
    Link: http://lkml.kernel.org/r/20190603143451.27353-3-vbabka@suse.cz
    
    
    Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
    Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
    Cc: Mel Gorman <mgorman@techsingularity.net>
    Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
    Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
    Cc: Matthew Wilcox <willy@infradead.org>
    Cc: Michal Hocko <mhocko@kernel.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>