Commit 3a13c4d7 authored by Johannes Weiner's avatar Johannes Weiner Committed by Linus Torvalds

x86: finish user fault error path with fatal signal

The x86 fault handler bails in the middle of error handling when the
task has a fatal signal pending.  For a subsequent patch this is a
problem in OOM situations because it relies on pagefault_out_of_memory()
being called even when the task has been killed, to perform proper
per-task OOM state unwinding.

Shortcutting the fault like this is a rather minor optimization that
saves a few instructions in rare cases.  Just remove it for
user-triggered faults.

Use the opportunity to split the fault retry handling from actual fault
errors and add locking documentation that reads surprisingly similar to
ARM's.

Signed-off-by: Johannes Weiner <>
Reviewed-by: Michal Hocko <>
Acked-by: KOSAKI Motohiro <>
Cc: David Rientjes <>
Cc: KAMEZAWA Hiroyuki <>
Cc: azurIt <>
Signed-off-by: Andrew Morton <>
Signed-off-by: Linus Torvalds <>
parent 759496ba
@@ -842,23 +842,15 @@ do_sigbus(struct pt_regs *regs, unsigned long error_code, unsigned long address,
 	force_sig_info_fault(SIGBUS, code, address, tsk, fault);
 }
 
-static noinline int
+static noinline void
 mm_fault_error(struct pt_regs *regs, unsigned long error_code,
 	       unsigned long address, unsigned int fault)
 {
-	/*
-	 * Pagefault was interrupted by SIGKILL. We have no reason to
-	 * continue pagefault.
-	 */
-	if (fatal_signal_pending(current)) {
-		if (!(fault & VM_FAULT_RETRY))
-			up_read(&current->mm->mmap_sem);
-		if (!(error_code & PF_USER))
-			no_context(regs, error_code, address, 0, 0);
-		return 1;
+	if (fatal_signal_pending(current) && !(error_code & PF_USER)) {
+		up_read(&current->mm->mmap_sem);
+		no_context(regs, error_code, address, 0, 0);
+		return;
 	}
 
-	if (!(fault & VM_FAULT_ERROR))
-		return 0;
 
 	if (fault & VM_FAULT_OOM) {
 		/* Kernel mode? Handle exceptions or die: */
@@ -866,7 +858,7 @@ mm_fault_error(struct pt_regs *regs, unsigned long error_code,
 			up_read(&current->mm->mmap_sem);
 			no_context(regs, error_code, address,
 				   SIGSEGV, SEGV_MAPERR);
-			return 1;
+			return;
 		}
 
 		up_read(&current->mm->mmap_sem);
@@ -884,7 +876,6 @@ mm_fault_error(struct pt_regs *regs, unsigned long error_code,
 		else
 			BUG();
 	}
-	return 1;
 }
 
 static int spurious_fault_check(unsigned long error_code, pte_t *pte)
@@ -1189,9 +1180,17 @@ __do_page_fault(struct pt_regs *regs, unsigned long error_code)
 	 */
 	fault = handle_mm_fault(mm, vma, address, flags);
 
-	if (unlikely(fault & (VM_FAULT_RETRY|VM_FAULT_ERROR))) {
-		if (mm_fault_error(regs, error_code, address, fault))
-			return;
-	}
+	/*
+	 * If we need to retry but a fatal signal is pending, handle the
+	 * signal first. We do not need to release the mmap_sem because it
+	 * would already be released in __lock_page_or_retry in mm/filemap.c.
+	 */
+	if (unlikely((fault & VM_FAULT_RETRY) && fatal_signal_pending(current)))
+		return;
+
+	if (unlikely(fault & VM_FAULT_ERROR)) {
+		mm_fault_error(regs, error_code, address, fault);
+		return;
+	}
 
 	/*