Commit 2a1e274a authored by Mel Gorman, committed by Linus Torvalds

Create the ZONE_MOVABLE zone

The following 8 patches against 2.6.20-mm2 create a zone called ZONE_MOVABLE
that is only usable by allocations that specify both __GFP_HIGHMEM and
__GFP_MOVABLE.  This has the effect of keeping all non-movable pages within a
single memory partition while allowing movable allocations to be satisfied
from either partition.  The patches may be applied with the list-based
anti-fragmentation patches that group pages together based on mobility.
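
As a rough illustration (not part of this patch; the helper name below is made
up), a caller that can tolerate its pages being migrated later opts in to the
new zone by setting both flags, typically via the GFP_HIGHUSER_MOVABLE mask
added earlier in this series:

#include <linux/gfp.h>
#include <linux/mm.h>

/* Hypothetical caller: request a page that may be placed in ZONE_MOVABLE.
 * GFP_HIGHUSER_MOVABLE includes both __GFP_HIGHMEM and __GFP_MOVABLE, so
 * the allocation is allowed to fall into the movable partition; without
 * __GFP_MOVABLE it would never be satisfied from ZONE_MOVABLE.
 */
static struct page *grab_movable_page(void)
{
	return alloc_page(GFP_HIGHUSER_MOVABLE);
}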

The size of the zone is determined by a kernelcore= parameter specified at
boot-time.  This specifies how much memory is usable by non-movable
allocations and the remainder is used for ZONE_MOVABLE.  Any range of pages
within ZONE_MOVABLE can be released by migrating the pages or by reclaiming them.
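
As a purely illustrative example, booting a 4GB machine with "kernelcore=1G"
would leave roughly 1GB usable by all allocation types and place the remainder
of the highest zone in ZONE_MOVABLE; the parameter takes the usual K/M/G
memory-size suffixes.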

When selecting a zone to take pages from for ZONE_MOVABLE, there are two
things to consider.  First, only memory from the highest populated zone is
used for ZONE_MOVABLE.  On x86, this is probably going to be ZONE_HIGHMEM, but
it would be ZONE_DMA on ppc64 or possibly ZONE_DMA32 on x86_64.  Second,
the amount of memory usable by the kernel will be spread evenly throughout
NUMA nodes where possible.  If the nodes are not of equal size, the amount of
memory usable by the kernel on some nodes may be greater than others.
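
A minimal sketch of the even split (illustration only; the function name is
made up, and the real code additionally redistributes the shortfall from nodes
too small to hold their share):

/* Illustration only: split the requested kernelcore amount evenly across the
 * nodes that have usable memory, rounding up so the kernel is not left short.
 * Redistribution of the shortfall from undersized nodes is omitted here.
 */
static unsigned long kernelcore_share_per_node(unsigned long required_kernelcore,
					       int usable_nodes)
{
	return (required_kernelcore + usable_nodes - 1) / usable_nodes;
}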

By default, the zone is not as useful for hugetlb allocations because they are
pinned and non-migratable (currently at least).  A sysctl is provided that
allows huge pages to be allocated from that zone.  This means that the huge
page pool can be resized to the size of ZONE_MOVABLE during the lifetime of
the system assuming that pages are not mlocked.  Despite huge pages being
non-movable, we do not introduce additional external fragmentation of note as
huge pages are always the largest contiguous block we care about.
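
For illustration (the sysctl name is taken from a follow-up patch in this
series, not this one): setting vm.hugepages_treat_as_movable to 1 and then
raising /proc/sys/vm/nr_hugepages grows the huge page pool into ZONE_MOVABLE;
leaving it at 0 keeps huge page allocations in the kernel partition.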

Credit goes to Andy Whitcroft for catching a large variety of problems during
review of the patches.

This patch creates an additional zone, ZONE_MOVABLE.  This zone is only usable
by allocations which specify both __GFP_HIGHMEM and __GFP_MOVABLE.  Hot-added
memory continues to be placed in its existing destination as there is no
mechanism to redirect it to a specific zone.
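
For reference, a condensed sketch (names and structure simplified, not the
literal hunk below) of how gfp_zone() selects a zone once this patch is
applied, assuming every config-dependent zone is built in:

/* Condensed sketch only: the real gfp_zone() in include/linux/gfp.h wraps
 * the DMA, DMA32 and HIGHMEM cases in the corresponding CONFIG_* ifdefs.
 */
static inline enum zone_type gfp_zone_sketch(gfp_t flags)
{
	if (flags & __GFP_DMA)
		return ZONE_DMA;
	if (flags & __GFP_DMA32)
		return ZONE_DMA32;
	if ((flags & (__GFP_HIGHMEM | __GFP_MOVABLE)) ==
			(__GFP_HIGHMEM | __GFP_MOVABLE))
		return ZONE_MOVABLE;	/* only when both flags are set */
	if (flags & __GFP_HIGHMEM)
		return ZONE_HIGHMEM;
	return ZONE_NORMAL;
}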

[ Fix section mismatch of memory hotplug related code]
[ various fixes]
Signed-off-by: Mel Gorman <>
Cc: Andy Whitcroft <>
Signed-off-by: Yasunori Goto <>
Cc: William Lee Irwin III <>
Signed-off-by: Andrew Morton <>
Signed-off-by: Linus Torvalds <>
parent 769848c0
@@ -106,6 +106,9 @@ static inline enum zone_type gfp_zone(gfp_t flags)
 	if (flags & __GFP_DMA32)
 		return ZONE_DMA32;
+	if ((flags & (__GFP_HIGHMEM | __GFP_MOVABLE)) ==
+			(__GFP_HIGHMEM | __GFP_MOVABLE))
+		return ZONE_MOVABLE;
 	if (flags & __GFP_HIGHMEM)
@@ -1005,6 +1005,7 @@ extern unsigned long find_max_pfn_with_active_regions(void);
 extern void free_bootmem_with_active_regions(int nid,
 						unsigned long max_low_pfn);
 extern void sparse_memory_present_with_active_regions(int nid);
+extern int cmdline_parse_kernelcore(char *p);
 extern int early_pfn_to_nid(unsigned long pfn);
@@ -146,6 +146,7 @@ enum zone_type {
@@ -167,6 +168,7 @@ enum zone_type {
 	+ defined(CONFIG_ZONE_DMA32)	\
 	+ 1				\
 	+ defined(CONFIG_HIGHMEM)	\
+	+ 1				\
 #if __ZONE_COUNT < 2
 #define ZONES_SHIFT 0
@@ -499,10 +501,22 @@ static inline int populated_zone(struct zone *zone)
 	return (!!zone->present_pages);
+extern int movable_zone;
+static inline int zone_movable_is_highmem(void)
+	return movable_zone == ZONE_HIGHMEM;
+	return 0;
 static inline int is_highmem_idx(enum zone_type idx)
-	return (idx == ZONE_HIGHMEM);
+	return (idx == ZONE_HIGHMEM ||
+		(idx == ZONE_MOVABLE && zone_movable_is_highmem()));
+	return 0;
@@ -522,7 +536,9 @@ static inline int is_normal_idx(enum zone_type idx)
 static inline int is_highmem(struct zone *zone)
-	return zone == zone->zone_pgdat->node_zones + ZONE_HIGHMEM;
+	int zone_idx = zone - zone->zone_pgdat->node_zones;
+	return zone_idx == ZONE_HIGHMEM ||
+		(zone_idx == ZONE_MOVABLE && zone_movable_is_highmem());
+	return 0;
@@ -25,7 +25,7 @@
#define HIGHMEM_ZONE(xx)
enum vm_event_item { PGPGIN, PGPGOUT, PSWPIN, PSWPOUT,
@@ -170,7 +170,8 @@ static inline unsigned long node_page_state(int node,
 		zone_page_state(&zones[ZONE_HIGHMEM], item) +
-		zone_page_state(&zones[ZONE_NORMAL], item);
+		zone_page_state(&zones[ZONE_NORMAL], item) +
+		zone_page_state(&zones[ZONE_MOVABLE], item);
 extern void zone_statistics(struct zonelist *, struct zone *);
@@ -46,9 +46,14 @@ unsigned int nr_free_highpages (void)
 	pg_data_t *pgdat;
 	unsigned int pages = 0;
+	for_each_online_pgdat(pgdat) {
 		pages += zone_page_state(&pgdat->node_zones[ZONE_HIGHMEM],
+		if (zone_movable_is_highmem())
+			pages += zone_page_state(
+				&pgdat->node_zones[ZONE_MOVABLE],
+				NR_FREE_PAGES);
 	return pages;
@@ -472,7 +472,7 @@ const struct seq_operations fragmentation_op = {
 #define TEXTS_FOR_ZONES(xx) TEXT_FOR_DMA(xx) TEXT_FOR_DMA32(xx) xx "_normal", \
-					TEXT_FOR_HIGHMEM(xx)
+					TEXT_FOR_HIGHMEM(xx) xx "_movable",
 static const char * const vmstat_text[] = {
 	/* Zoned VM counters */