Commit 381eab4a authored by David Hildenbrand, committed by Linus Torvalds

mm/memory_hotplug: fix online/offline_pages called w.o. mem_hotplug_lock

There seem to be some problems as a result of 30467e0b ("mm, hotplug:
fix concurrent memory hot-add deadlock"), which tried to fix a possible
lock inversion reported and discussed in [1] due to the two locks
	a) device_lock()
	b) mem_hotplug_lock

While add_memory() first takes b), followed by a) during
bus_probe_device(), onlining of memory from user space first took a),
followed by b), exposing a possible deadlock.

In [1], it was decided to not make use of device_hotplug_lock, but
rather to enforce a locking order.

The problems I spotted related to this:

1. Memory block device attributes: While .state first calls
   mem_hotplug_begin() and then calls device_online() - which takes
   device_lock() - .online no longer calls mem_hotplug_begin(), so it
   effectively calls online_pages() without mem_hotplug_lock.

2. device_online() should be called under device_hotplug_lock, however
   onlining memory during add_memory() does not take care of that.

In addition, I think there is also something wrong about the locking in

3. arch/powerpc/platforms/powernv/memtrace.c calls offline_pages()
   without locks. This was introduced after 30467e0b. And skimming over
   the code, I assume it could need some more care in regards to locking
   (e.g. device_online() called without device_hotplug_lock). This will
   be addressed in the following patches.

Now that we hold the device_hotplug_lock when
- adding memory (e.g. via add_memory()/add_memory_resource())
- removing memory (e.g. via remove_memory())
- device_online()/device_offline()

we can move mem_hotplug_lock usage back into
online_pages()/offline_pages().

Why is mem_hotplug_lock still needed? Essentially to keep
get_online_mems()/put_online_mems() very fast (relying on
device_hotplug_lock would be very slow), and to serialize against
addition of memory that does not create memory block devices (hmm).

[1] driverdev-devel/

This patch is partly based on a patch by Vitaly Kuznetsov.


Signed-off-by: David Hildenbrand <>
Reviewed-by: Pavel Tatashin <>
Reviewed-by: Rashmica Gupta <>
Reviewed-by: Oscar Salvador <>
Cc: Benjamin Herrenschmidt <>
Cc: Paul Mackerras <>
Cc: Michael Ellerman <>
Cc: "Rafael J. Wysocki" <>
Cc: Len Brown <>
Cc: Greg Kroah-Hartman <>
Cc: "K. Y. Srinivasan" <>
Cc: Haiyang Zhang <>
Cc: Stephen Hemminger <>
Cc: Martin Schwidefsky <>
Cc: Heiko Carstens <>
Cc: Boris Ostrovsky <>
Cc: Juergen Gross <>
Cc: Rashmica Gupta <>
Cc: Michael Neuling <>
Cc: Balbir Singh <>
Cc: Kate Stewart <>
Cc: Thomas Gleixner <>
Cc: Philippe Ombredanne <>
Cc: Pavel Tatashin <>
Cc: Vlastimil Babka <>
Cc: Dan Williams <>
Cc: Oscar Salvador <>
Cc: Mathieu Malaterre <>
Cc: John Allen <>
Cc: Jonathan Corbet <>
Cc: Joonsoo Kim <>
Cc: Nathan Fontenot <>
Cc: Rafael J. Wysocki <>
Signed-off-by: Andrew Morton <>
Signed-off-by: Linus Torvalds <>
parent 8df1d0e4
drivers/base/memory.c
@@ -228,7 +228,6 @@ static bool pages_correctly_probed(unsigned long start_pfn)
 /*
  * MEMORY_HOTPLUG depends on SPARSEMEM in mm/Kconfig, so it is
  * OK to have direct references to sparsemem variables in here.
- * Must already be protected by mem_hotplug_begin().
  */
 static int
 memory_block_action(unsigned long phys_index, unsigned long action, int online_type)
@@ -294,7 +293,6 @@ static int memory_subsys_online(struct device *dev)
 	if (mem->online_type < 0)
 		mem->online_type = MMOP_ONLINE_KEEP;
 
-	/* Already under protection of mem_hotplug_begin() */
 	ret = memory_block_change_state(mem, MEM_ONLINE, MEM_OFFLINE);
 
 	/* clear online_type */
@@ -341,19 +339,11 @@ store_mem_state(struct device *dev,
 		goto err;
 
-	/*
-	 * Memory hotplug needs to hold mem_hotplug_begin() for probe to find
-	 * the correct memory block to online before doing device_online(dev),
-	 * which will take dev->mutex. Take the lock early to prevent an
-	 * inversion, memory_subsys_online() callbacks will be implemented by
-	 * assuming it's already protected.
-	 */
-	mem_hotplug_begin();
-
 	switch (online_type) {
 	case MMOP_ONLINE_KERNEL:
 	case MMOP_ONLINE_MOVABLE:
 	case MMOP_ONLINE_KEEP:
+		/* mem->online_type is protected by device_hotplug_lock */
 		mem->online_type = online_type;
 		ret = device_online(&mem->dev);
@@ -364,7 +354,6 @@ store_mem_state(struct device *dev,
 		ret = -EINVAL; /* should never happen */
 	}
 
-	mem_hotplug_done();
mm/memory_hotplug.c
@@ -838,7 +838,6 @@ static struct zone * __meminit move_pfn_range(int online_type, int nid,
 	return zone;
 }
 
-/* Must be protected by mem_hotplug_begin() or a device_lock */
 int __ref online_pages(unsigned long pfn, unsigned long nr_pages, int online_type)
 {
 	unsigned long flags;
@@ -850,6 +849,8 @@ int __ref online_pages(unsigned long pfn, unsigned long nr_pages, int online_typ
 	struct memory_notify arg;
 	struct memory_block *mem;
 
+	mem_hotplug_begin();
+
 	/*
 	 * We can't use pfn_to_nid() because nid might be stored in struct page
 	 * which is not yet initialized. Instead, we find nid from memory block.
 	 */
@@ -914,6 +915,7 @@ int __ref online_pages(unsigned long pfn, unsigned long nr_pages, int online_typ
 	if (onlined_pages)
 		memory_notify(MEM_ONLINE, &arg);
+	mem_hotplug_done();
 	return 0;
@@ -921,6 +923,7 @@ int __ref online_pages(unsigned long pfn, unsigned long nr_pages, int online_typ
 		 (unsigned long long) pfn << PAGE_SHIFT,
 		 (((unsigned long long) pfn + nr_pages) << PAGE_SHIFT) - 1);
 	memory_notify(MEM_CANCEL_ONLINE, &arg);
+	mem_hotplug_done();
 	return ret;
@@ -1125,20 +1128,20 @@ int __ref add_memory_resource(int nid, struct resource *res, bool online)
 	/* create new memmap entry */
 	firmware_map_add_hotplug(start, start + size, "System RAM");
 
+	/* device_online() will take the lock when calling online_pages() */
+	mem_hotplug_done();
+
 	/* online pages if requested */
 	if (online)
 		walk_memory_range(PFN_DOWN(start), PFN_UP(start + size - 1),
 				  NULL, online_memory_block);
 
-	goto out;
+	return ret;
 
 error:
 	/* rollback pgdat allocation and others */
 	if (new_node)
 		rollback_node_hotadd(nid, pgdat);
 	memblock_remove(start, size);
+	mem_hotplug_done();
 	return ret;
@@ -1555,10 +1558,16 @@ static int __ref __offline_pages(unsigned long start_pfn,
 		return -EINVAL;
 	if (!IS_ALIGNED(end_pfn, pageblock_nr_pages))
 		return -EINVAL;
+
+	mem_hotplug_begin();
+
 	/* This makes hotplug much easier...and readable.
 	   we assume this for now. .*/
-	if (!test_pages_in_a_zone(start_pfn, end_pfn, &valid_start, &valid_end))
+	if (!test_pages_in_a_zone(start_pfn, end_pfn, &valid_start,
+				  &valid_end)) {
+		mem_hotplug_done();
 		return -EINVAL;
+	}
 
 	zone = page_zone(pfn_to_page(valid_start));
 	node = zone_to_nid(zone);
@@ -1567,8 +1576,10 @@ static int __ref __offline_pages(unsigned long start_pfn,
 	/* set above range as isolated */
 	ret = start_isolate_page_range(start_pfn, end_pfn,
 				       MIGRATE_MOVABLE, true);
-	if (ret)
+	if (ret) {
+		mem_hotplug_done();
 		return ret;
+	}
 
 	arg.start_pfn = start_pfn;
 	arg.nr_pages = nr_pages;
@@ -1639,6 +1650,7 @@ static int __ref __offline_pages(unsigned long start_pfn,
 	memory_notify(MEM_OFFLINE, &arg);
+	mem_hotplug_done();
 	return 0;
@@ -1648,10 +1660,10 @@ static int __ref __offline_pages(unsigned long start_pfn,
 	memory_notify(MEM_CANCEL_OFFLINE, &arg);
 	/* pushback to free area */
 	undo_isolate_page_range(start_pfn, end_pfn, MIGRATE_MOVABLE);
+	mem_hotplug_done();
 	return ret;
 }
 
-/* Must be protected by mem_hotplug_begin() or a device_lock */
 int offline_pages(unsigned long start_pfn, unsigned long nr_pages)
 {
 	return __offline_pages(start_pfn, start_pfn + nr_pages);
 }