    mm/mmu_notifier: use hlist_add_head_rcu() · 543bdb2d
    Jean-Philippe Brucker authored
    Make mmu_notifier_register() safer by issuing a memory barrier before
    registering a new notifier.  This fixes a theoretical bug on weakly
    ordered CPUs.  For example, take this simplified use of notifiers by a
    driver:
    	my_struct->mn.ops = &my_ops; /* (1) */
    	mmu_notifier_register(&my_struct->mn, mm)
    		hlist_add_head(&mn->hlist, &mm->mmu_notifiers); /* (2) */
    Once mmu_notifier_register() releases the mm locks, another thread can
    invalidate a range:
    		hlist_for_each_entry_rcu(mn, &mm->mmu_notifiers, hlist) {
    			if (mn->ops->invalidate_range)
    				mn->ops->invalidate_range(mn, mm, start, end);
    The read side relies on the data dependency between mn and ops to ensure
    that the pointer is properly initialized.  But the write side doesn't have
    any dependency between (1) and (2), so they could be reordered and the
    readers could dereference an invalid mn->ops.  mmu_notifier_register()
    does take all the mm locks before adding to the hlist, but those have
    acquire semantics, which isn't sufficient.
    By calling hlist_add_head_rcu() instead of hlist_add_head() we update the
    hlist using a store-release, ensuring that readers see prior
    initialization of my_struct.  This situation is better illustrated by
    the litmus test MP+onceassign+derefonce.
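    The write-release/read-acquire pairing described above can be sketched
    with C11 atomics in user space.  This is an illustrative analogue only,
    not kernel code: the struct and function names are hypothetical, and the
    atomic "published" pointer stands in for the mm->mmu_notifiers hlist
    head that hlist_add_head_rcu() updates with a store-release.

    ```c
    /* Sketch of the MP (message-passing) pattern: a writer initializes a
     * structure, then publishes it with a store-release; a reader that
     * observes the publication with an acquire load is guaranteed to see
     * the prior initialization.  All names are hypothetical. */
    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    struct my_ops { int (*invalidate_range)(void); };

    static int my_invalidate(void) { return 42; }
    static struct my_ops my_ops_impl = { .invalidate_range = my_invalidate };

    struct my_notifier { struct my_ops *ops; };
    static struct my_notifier my_mn;

    /* Stands in for the mm->mmu_notifiers hlist head. */
    static _Atomic(struct my_notifier *) published;

    static void *writer(void *arg)
    {
    	(void)arg;
    	my_mn.ops = &my_ops_impl;		/* (1) plain store */
    	/* (2) store-release, like hlist_add_head_rcu(): orders (1) before it */
    	atomic_store_explicit(&published, &my_mn, memory_order_release);
    	return NULL;
    }

    static void *reader(void *arg)
    {
    	struct my_notifier *mn;

    	(void)arg;
    	/* Acquire load, like the traversal in hlist_for_each_entry_rcu() */
    	while (!(mn = atomic_load_explicit(&published, memory_order_acquire)))
    		;
    	/* The release/acquire pair guarantees mn->ops is initialized here. */
    	if (mn->ops->invalidate_range)
    		printf("%d\n", mn->ops->invalidate_range());
    	return NULL;
    }

    int main(void)
    {
    	pthread_t r, w;

    	pthread_create(&r, NULL, reader, NULL);
    	pthread_create(&w, NULL, writer, NULL);
    	pthread_join(r, NULL);
    	pthread_join(w, NULL);
    	return 0;
    }
    ```

    With a plain store in place of the release at (2), a weakly ordered CPU
    could reorder (1) after (2), which is exactly the window the patch closes.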
    Link: http://lkml.kernel.org/r/20190502133532.24981-1-jean-philippe.brucker@arm.com
    Fixes: cddb8a5c ("mmu-notifiers: core")
    Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
    Cc: Jérôme Glisse <jglisse@redhat.com>
    Cc: Michal Hocko <mhocko@suse.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>