Commit 81a6fcae authored by Joonsoo Kim, committed by Linus Torvalds

mm, hugetlb: clean-up alloc_huge_page()

Unify successful allocation paths to make the code more readable.  There
are no functional changes.
Signed-off-by: Joonsoo Kim <>
Acked-by: Michal Hocko <>
Reviewed-by: Wanpeng Li <>
Reviewed-by: Aneesh Kumar K.V <>
Cc: Hillf Danton <>
Cc: Naoya Horiguchi <>
Cc: Wanpeng Li <>
Cc: Rik van Riel <>
Cc: Mel Gorman <>
Cc: "Aneesh Kumar K.V" <>
Cc: KAMEZAWA Hiroyuki <>
Cc: Hugh Dickins <>
Cc: Davidlohr Bueso <>
Cc: David Gibson <>
Signed-off-by: Andrew Morton <>
Signed-off-by: Linus Torvalds <>
parent c748c262
@@ -1166,12 +1166,7 @@ static struct page *alloc_huge_page(struct vm_area_struct *vma,
 	page = dequeue_huge_page_vma(h, vma, addr, avoid_reserve);
-	if (page) {
-		/* update page cgroup details */
-		hugetlb_cgroup_commit_charge(idx, pages_per_huge_page(h),
-					     h_cg, page);
-	} else {
+	if (!page) {
 		page = alloc_buddy_huge_page(h, NUMA_NO_NODE);
 		if (!page) {
@@ -1182,11 +1177,11 @@ static struct page *alloc_huge_page(struct vm_area_struct *vma,
 			return ERR_PTR(-ENOSPC);
-		hugetlb_cgroup_commit_charge(idx, pages_per_huge_page(h),
-					     h_cg, page);
 		list_move(&page->lru, &h->hugepage_activelist);
-		/* Fall through */
+	hugetlb_cgroup_commit_charge(idx, pages_per_huge_page(h), h_cg, page);
 	set_page_private(page, (unsigned long)spool);