[PATCH] mm: unmap_vmas with inner ptlock
Remove the page_table_lock from around the calls to unmap_vmas, and replace
the pte_offset_map in zap_pte_range by pte_offset_map_lock: all callers are
now safe to descend without page_table_lock.

Don't attempt fancy locking for hugepages, just take page_table_lock in
unmap_hugepage_range.  Which makes zap_hugepage_range, and the hugetlb test
in zap_page_range, redundant: unmap_vmas calls unmap_hugepage_range anyway.
Nor does unmap_vmas have much use for its mm arg now.

The tlb_start_vma and tlb_end_vma in unmap_page_range are now called without
page_table_lock: if they're implemented at all, they typically come down to
flush_cache_range (usually done outside page_table_lock) and flush_tlb_range
(which we already audited for the mprotect case).

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
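
To illustrate the core change in mm/memory.c, here is a rough sketch of the
new shape of zap_pte_range.  This is not the verbatim diff: the zap details,
rss accounting and page gathering are elided, and the function name is marked
as a sketch.  pte_offset_map_lock maps the pte page and takes the pte lock,
so callers above no longer need to hold mm->page_table_lock across the walk:

    static void zap_pte_range_sketch(struct mmu_gather *tlb, pmd_t *pmd,
                                     unsigned long addr, unsigned long end)
    {
            spinlock_t *ptl;
            pte_t *pte;

            /* was: pte = pte_offset_map(pmd, addr);
             * with the caller holding mm->page_table_lock */
            pte = pte_offset_map_lock(tlb->mm, pmd, addr, &ptl);
            do {
                    pte_t ptent = *pte;

                    if (pte_none(ptent))
                            continue;
                    /* ... clear the pte, update rss, gather the page ... */
            } while (pte++, addr += PAGE_SIZE, addr != end);
            pte_unmap_unlock(pte - 1, ptl);
    }

pte_unmap_unlock drops the lock that pte_offset_map_lock took, so the
locking is entirely contained within the innermost level of the walk.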
Showing 6 changed files, 21 insertions(+) and 54 deletions(-):
- fs/hugetlbfs/inode.c: 3 additions, 7 deletions
- include/linux/hugetlb.h: 0 additions, 2 deletions
- include/linux/mm.h: 1 addition, 1 deletion
- mm/hugetlb.c: 3 additions, 9 deletions
- mm/memory.c: 12 additions, 29 deletions
- mm/mmap.c: 2 additions, 6 deletions
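
The mm/hugetlb.c change listed above is the "no fancy locking" half of the
patch.  A simplified sketch (not the verbatim code; the huge-pte lookup and
page freeing are elided) of unmap_hugepage_range taking page_table_lock
itself:

    void unmap_hugepage_range_sketch(struct vm_area_struct *vma,
                                     unsigned long start, unsigned long end)
    {
            struct mm_struct *mm = vma->vm_mm;
            unsigned long addr;

            spin_lock(&mm->page_table_lock);
            for (addr = start; addr < end; addr += HPAGE_SIZE) {
                    /* ... look up the huge pte, clear it, put the page ... */
            }
            spin_unlock(&mm->page_table_lock);
            flush_tlb_range(vma, start, end);
    }

With the lock taken here, zap_hugepage_range and the hugetlb test in
zap_page_range have nothing left to do, which is why both are removed.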