- May 15, 2021
-
-
Peter Zijlstra authored
This came up in the discussion of the requirements of qspinlock on an architecture. OpenRISC uses qspinlock, but it was noticed that the memory barrier was not defined. Peter defined it in the mail thread, writing: "As near as I can tell this should do. The arch spec only lists this one instruction and the text makes it sound like a completion barrier." This is correct, so applying this patch. Signed-off-by:
Peter Zijlstra <peterz@infradead.org> [shorne@gmail.com:Turned the mail into a patch] Signed-off-by:
Stafford Horne <shorne@gmail.com>
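A minimal sketch of the kind of definition this adds, assuming it lands in a new arch/openrisc/include/asm/barrier.h (placement and exact form are assumptions, not a quote of the patch):

    /* sketch: OpenRISC memory barrier built on l.msync */
    #ifndef __ASM_BARRIER_H
    #define __ASM_BARRIER_H

    /* The architecture spec describes l.msync as a completion barrier. */
    #define mb() asm volatile ("l.msync" ::: "memory")

    #include <asm-generic/barrier.h>

    #endif /* __ASM_BARRIER_H */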
-
- May 09, 2021
-
-
Mike Rapoport authored
A build with W=1 enabled produces the following warning: CC arch/openrisc/mm/init.o arch/openrisc/mm/init.c: In function 'paging_init': arch/openrisc/mm/init.c:131:16: warning: variable 'end' set but not used [-Wunused-but-set-variable] 131 | unsigned long end; | ^~~ Remove the unused variable 'end'. Signed-off-by:
Mike Rapoport <rppt@linux.ibm.com> Signed-off-by:
Stafford Horne <shorne@gmail.com>
-
Mike Rapoport authored
Kernel test robot reports: cppcheck possible warnings: (new ones prefixed by >>, may not real problems) >> arch/openrisc/mm/init.c:125:10: warning: Uninitialized variable: region [uninitvar] region->base, region->base + region->size); ^ Replace usage of memblock_region fields with 'start' and 'end' variables that are initialized in for_each_mem_range() and remove the declaration of region. Fixes: b10d6bca ("arch, drivers: replace for_each_membock() with for_each_mem_range()") Reported-by:
kernel test robot <lkp@intel.com> Signed-off-by:
Mike Rapoport <rppt@linux.ibm.com> Signed-off-by:
Stafford Horne <shorne@gmail.com>
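A rough sketch of the resulting pattern (printout and variable names illustrative): the iterator fills in the range bounds itself, so no memblock_region pointer needs to be declared at all.

    phys_addr_t start, end;
    u64 i;

    /* 'start' and 'end' are initialized by for_each_mem_range() on every
     * iteration, so there is nothing uninitialized left to warn about. */
    for_each_mem_range(i, &start, &end)
            printk(KERN_INFO "%s: Memory: 0x%llx-0x%llx\n", __func__,
                   (unsigned long long)start, (unsigned long long)end);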
-
- Apr 28, 2021
-
-
Christophe JAILLET authored
'setup_find_cpu_node()' takes a reference on the node it returns. This reference must be decremented when it is not needed anymore, or there will be a leak. Add the missing 'of_node_put(cpu)'. Note that 'setup_cpuinfo()', which also calls this function, already has a correct 'of_node_put(cpu)' at its end. Fixes: 9d02a428 ("OpenRISC: Boot code") Signed-off-by:
Christophe JAILLET <christophe.jaillet@wanadoo.fr> Signed-off-by:
Stafford Horne <shorne@gmail.com>
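A sketch of the reference-counting pattern involved; the caller shown and the argument passed to setup_find_cpu_node() are illustrative, not the exact code the patch touches:

    struct device_node *cpu;

    cpu = setup_find_cpu_node(smp_processor_id()); /* returns a node with a reference held */
    if (!cpu)
            return;

    /* ... read properties from 'cpu' ... */

    of_node_put(cpu);       /* the put that was missing in the fixed caller */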
-
- Apr 23, 2021
-
-
Wanpeng Li authored
kvm_memslots() will be called by kvm_write_guest_offset_cached() so we should take the srcu lock. Let's pull the srcu lock operation from kvm_steal_time_set_preempted() again to fix xen part. Fixes: 30b5c851 ("KVM: x86/xen: Add support for vCPU runstate information") Signed-off-by:
Wanpeng Li <wanpengli@tencent.com> Message-Id: <1619166200-9215-1-git-send-email-wanpengli@tencent.com> Reviewed-by:
Sean Christopherson <seanjc@google.com> Signed-off-by:
Paolo Bonzini <pbonzini@redhat.com>
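A simplified sketch of the locking pattern described, with the SRCU read lock pulled up into the vCPU-put path (surrounding code omitted):

    int idx;

    /* kvm_steal_time_set_preempted() may end up in
     * kvm_write_guest_offset_cached(), which calls kvm_memslots() and
     * therefore requires the SRCU read lock to be held. */
    idx = srcu_read_lock(&vcpu->kvm->srcu);
    kvm_steal_time_set_preempted(vcpu);
    srcu_read_unlock(&vcpu->kvm->srcu, idx);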
-
- Apr 22, 2021
-
-
Jim Mattson authored
The only stepping of Broadwell Xeon parts is stepping 1. Fix the relevant isolation_ucodes[] entry, which previously enumerated stepping 2. Although the original commit was characterized as an optimization, it is also a workaround for a correctness issue. If a PMI arrives between kvm's call to perf_guest_get_msrs() and the subsequent VM-entry, a stale value for the IA32_PEBS_ENABLE MSR may be restored at the next VM-exit. This is because, unbeknownst to kvm, PMI throttling may clear bits in the IA32_PEBS_ENABLE MSR. CPUs with "PEBS isolation" don't suffer from this issue, because perf_guest_get_msrs() doesn't report the IA32_PEBS_ENABLE value. Fixes: 9b545c04 ("perf/x86/kvm: Avoid unnecessary work in guest filtering") Signed-off-by:
Jim Mattson <jmattson@google.com> Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by:
Peter Shier <pshier@google.com> Acked-by:
Andi Kleen <ak@linux.intel.com> Link: https://lkml.kernel.org/r/20210422001834.1748319-1-jmattson@google.com
-
Andre Przywara authored
Commit 941432d0 ("arm64: dts: allwinner: Drop non-removable from SoPine/LTS SD card") enabled the card detect GPIO for the SOPine module, along the way with the Pine64-LTS, which share the same base .dtsi. This was based on the observation that the Pine64-LTS has as "push-push" SD card socket, and that the schematic mentions the card detect GPIO. After having received two reports about failing SD card access with that patch, some more research and polls on that subject revealed that there are at least two different versions of the Pine64-LTS out there: - On some boards (including mine) the card detect pin is "stuck" at high, regardless of an microSD card being inserted or not. - On other boards the card-detect is working, but is active-high, by virtue of an explicit inverter circuit, as shown in the schematic. To cover all versions of the board out there, and don't take any chances, let's revert the introduction of the active-low CD GPIO, but let's use the broken-cd property for the Pine64-LTS this time. That should avoid regressions and should work for everyone, even allowing SD card changes now. The SOPine card detect has proven to be working, so let's keep that GPIO in place. Fixes: 941432d0 ("arm64: dts: allwinner: Drop non-removable from SoPine/LTS SD card") Reported-by:
Michael Weiser <michael.weiser@gmx.de> Reported-by:
Daniel Kulesz <kuleszdl@posteo.org> Suggested-by:
Chen-Yu Tsai <wens@csie.org> Signed-off-by:
Andre Przywara <andre.przywara@arm.com> Tested-by:
Michael Weiser <michael.weiser@gmx.de> Signed-off-by:
Maxime Ripard <maxime@cerno.tech> Link: https://lore.kernel.org/r/20210414104740.31497-1-andre.przywara@arm.com
-
- Apr 21, 2021
-
-
Kan Liang authored
There may be a kernel panic on the Haswell server and the Broadwell server if snbep_pci2phy_map_init() returns an error. The uncore_extra_pci_dev[HSWEP_PCI_PCU_3] is used in the cpu_init() to detect the existence of the SBOX, which is an MSR type of PMON unit. The uncore_extra_pci_dev is allocated in the uncore_pci_init(). If snbep_pci2phy_map_init() returns an error, perf doesn't initialize the PCI type of PMON units, so the uncore_extra_pci_dev will not be allocated. But perf may continue initializing the MSR type of PMON units, and a NULL pointer dereference kernel panic will be triggered. The sockets in a Haswell server or a Broadwell server are identical, so the existence of the SBOX only needs to be detected once. Currently, perf probes all available PCU devices and stores them in uncore_extra_pci_dev, which is unnecessary. Use pci_get_device() instead of uncore_extra_pci_dev, and detect the existence of the SBOX only once, on the first available PCU device. Factor out hswep_has_limit_sbox(), since the Haswell server and the Broadwell server use the same method to detect the existence of the SBOX. Add some macros to replace the magic numbers. Fixes: 5306c31c ("perf/x86/uncore/hsw-ep: Handle systems with only two SBOXes") Reported-by:
Steve Wahl <steve.wahl@hpe.com> Signed-off-by:
Kan Liang <kan.liang@linux.intel.com> Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Tested-by:
Steve Wahl <steve.wahl@hpe.com> Link: https://lkml.kernel.org/r/1618521764-100923-1-git-send-email-kan.liang@linux.intel.com
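A rough sketch of the probe-once approach named above; the config-space offset and bit interpretation below are assumptions for illustration only, not the real CAPID layout:

    static bool hswep_has_limit_sbox(unsigned int device)
    {
            struct pci_dev *dev = pci_get_device(PCI_VENDOR_ID_INTEL, device, NULL);
            u32 capid;
            bool limited;

            if (!dev)
                    return false;

            pci_read_config_dword(dev, 0x94, &capid);  /* assumed capability offset */
            limited = !(capid & BIT(6));               /* assumed "only two SBOXes" indication */
            pci_dev_put(dev);

            return limited;
    }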
-
- Apr 20, 2021
-
-
Mike Galbraith authored
The commit in Fixes: added support for kexec-ing a kernel on panic using a new system call. As part of that, it prepares a memory map for the new kernel. However, while doing so, it wrongly accesses memory it has not allocated: it accesses the first element of the cmem->ranges[] array in memmap_exclude_ranges() but it has not allocated the memory for it in crash_setup_memmap_entries(). As KASAN reports: BUG: KASAN: vmalloc-out-of-bounds in crash_setup_memmap_entries+0x17e/0x3a0 Write of size 8 at addr ffffc90000426008 by task kexec/1187 (gdb) list *crash_setup_memmap_entries+0x17e 0xffffffff8107cafe is in crash_setup_memmap_entries (arch/x86/kernel/crash.c:322). 317 unsigned long long mend) 318 { 319 unsigned long start, end; 320 321 cmem->ranges[0].start = mstart; 322 cmem->ranges[0].end = mend; 323 cmem->nr_ranges = 1; 324 325 /* Exclude elf header region */ 326 start = image->arch.elf_load_addr; (gdb) Make sure the ranges array has a single element allocated. [ bp: Write a proper commit message. ] Fixes: dd5f7260 ("kexec: support for kexec on panic using new system call") Signed-off-by:
Mike Galbraith <efault@gmx.de> Signed-off-by:
Borislav Petkov <bp@suse.de> Reviewed-by:
Dave Young <dyoung@redhat.com> Cc: <stable@vger.kernel.org> Link: https://lkml.kernel.org/r/725fa3dc1da2737f0f6188a1a9701bead257ea9d.camel@gmx.de
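A sketch of the allocation fix in crash_setup_memmap_entries(); the struct_size() form assumes the flexible-array layout of struct crash_mem and is illustrative of the described change:

    struct crash_mem *cmem;

    /* Allocate room for one range so that cmem->ranges[0] below is backed
     * by real memory. */
    cmem = vzalloc(struct_size(cmem, ranges, 1));
    if (!cmem)
            return -ENOMEM;

    cmem->ranges[0].start = mstart;
    cmem->ranges[0].end = mend;
    cmem->nr_ranges = 1;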
-
- Apr 18, 2021
-
-
Fredrik Strupe authored
Since uprobes is not supported for thumb, check that the thumb bit is not set when matching the uprobes instruction hooks. The Arm UDF instructions used for uprobes triggering (UPROBE_SWBP_ARM_INSN and UPROBE_SS_ARM_INSN) coincidentally share the same encoding as a pair of unallocated 32-bit thumb instructions (not UDF) when the condition code is 0b1111 (0xf). This in effect makes it possible to trigger the uprobes functionality from thumb, and at that using two unallocated instructions which are not permanently undefined. Signed-off-by:
Fredrik Strupe <fredrik@strupe.net> Cc: stable@vger.kernel.org Fixes: c7edc9e3 ("ARM: add uprobes support") Signed-off-by:
Russell King <rmk+kernel@armlinux.org.uk>
-
- Apr 16, 2021
-
-
Randy Dunlap authored
Fix IA64 discontig.c section mismatch warnings. When CONFIG_SPARSEMEM=y and CONFIG_MEMORY_HOTPLUG=y, the functions compute_pernodesize() and scatter_node_data() should not be marked as __meminit because they are needed after init, on any memory hotplug event. Also, early_nr_cpus_node() is called by compute_pernodesize(), so early_nr_cpus_node() cannot be __meminit either. WARNING: modpost: vmlinux.o(.text.unlikely+0x1612): Section mismatch in reference from the function arch_alloc_nodedata() to the function .meminit.text:compute_pernodesize() The function arch_alloc_nodedata() references the function __meminit compute_pernodesize(). This is often because arch_alloc_nodedata lacks a __meminit annotation or the annotation of compute_pernodesize is wrong. WARNING: modpost: vmlinux.o(.text.unlikely+0x1692): Section mismatch in reference from the function arch_refresh_nodedata() to the function .meminit.text:scatter_node_data() The function arch_refresh_nodedata() references the function __meminit scatter_node_data(). This is often because arch_refresh_nodedata lacks a __meminit annotation or the annotation of scatter_node_data is wrong. WARNING: modpost: vmlinux.o(.text.unlikely+0x1502): Section mismatch in reference from the function compute_pernodesize() to the function .meminit.text:early_nr_cpus_node() The function compute_pernodesize() references the function __meminit early_nr_cpus_node(). This is often because compute_pernodesize lacks a __meminit annotation or the annotation of early_nr_cpus_node is wrong. Link: https://lkml.kernel.org/r/20210411001201.3069-1-rdunlap@infradead.org Signed-off-by:
Randy Dunlap <rdunlap@infradead.org> Cc: Mike Rapoport <rppt@kernel.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
Randy Dunlap authored
Fix ia64 generic_defconfig duplicate entries, as warned by: arch/ia64/configs/generic_defconfig: warning: override: reassigning to symbol ATA: => 58 arch/ia64/configs/generic_defconfig: warning: override: reassigning to symbol ATA_PIIX: => 59 These 2 symbols still have the same value as in the removed lines. Link: https://lkml.kernel.org/r/20210411020255.18052-1-rdunlap@infradead.org Fixes: c331649e ("ia64: Use libata instead of the legacy ide driver in defconfigs") Signed-off-by:
Randy Dunlap <rdunlap@infradead.org> Reported-by:
Geert Uytterhoeven <geert@linux-m68k.org> Reviewed-by:
Christoph Hellwig <hch@lst.de> Cc: Tony Luck <tony.luck@intel.com> Cc: Fenghua Yu <fenghua.yu@intel.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
Randy Dunlap authored
e1000's #define of CONFIG_RAM_BASE conflicts with a Kconfig symbol in arch/csky/Kconfig. The symbol in e1000 has been around longer, so change arch/csky/ to use DRAM_BASE instead of RAM_BASE to remove the conflict. (although e1000 is also a 2-line change) Link: https://lkml.kernel.org/r/20210411055335.7111-1-rdunlap@infradead.org Signed-off-by:
Randy Dunlap <rdunlap@infradead.org> Reported-by:
kernel test robot <lkp@intel.com> Acked-by:
Guo Ren <guoren@kernel.org> Cc: Jesse Brandeburg <jesse.brandeburg@intel.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
Walter Wu authored
CONFIG_KASAN_STACK and CONFIG_KASAN_STACK_ENABLE both enable KASAN stack instrumentation, but we should only need one config, so remove CONFIG_KASAN_STACK_ENABLE and make CONFIG_KASAN_STACK usable on its own; see [1]. When KASAN stack instrumentation is enabled, gcc gets no prompt and a default value of y, while clang gets a prompt and a default value of n. This patch fixes the following compilation warning: include/linux/kasan.h:333:30: warning: 'CONFIG_KASAN_STACK' is not defined, evaluates to 0 [-Wundef] [akpm@linux-foundation.org: fix merge snafu] Link: https://bugzilla.kernel.org/show_bug.cgi?id=210221 [1] Link: https://lkml.kernel.org/r/20210226012531.29231-1-walter-zh.wu@mediatek.com Fixes: d9b571c8 ("kasan: fix KASAN_STACK dependency for HW_TAGS") Signed-off-by:
Walter Wu <walter-zh.wu@mediatek.com> Suggested-by:
Dmitry Vyukov <dvyukov@google.com> Reviewed-by:
Nathan Chancellor <natechancellor@gmail.com> Acked-by:
Arnd Bergmann <arnd@arndb.de> Reviewed-by:
Andrey Konovalov <andreyknvl@google.com> Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Alexander Potapenko <glider@google.com> Cc: <stable@vger.kernel.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
Jisheng Zhang authored
Current riscv's kprobe handlers are run with both preemption and interrupt enabled, this violates kprobe requirements. Fix this issue by keeping interrupts disabled for BREAKPOINT exception. Fixes: c22b0bcb ("riscv: Add kprobes supported") Cc: stable@vger.kernel.org Signed-off-by:
Jisheng Zhang <jszhang@kernel.org> Reviewed-by:
Masami Hiramatsu <mhiramat@kernel.org> [Palmer: add a comment] Signed-off-by:
Palmer Dabbelt <palmerdabbelt@google.com>
-
Jisheng Zhang authored
Currently, riscv's kprobes handler (powered by ftrace) is preemptible. Further checking indicates we are missing something similar to commit c536aa1c ("kprobes/ftrace: Add recursion protection to the ftrace callback"), so make modifications similar to the ones that commit made. Fixes: 829adda5 ("riscv: Add KPROBES_ON_FTRACE supported") Cc: stable@vger.kernel.org Signed-off-by:
Jisheng Zhang <jszhang@kernel.org> Reviewed-by:
Steven Rostedt (VMware) <rostedt@goodmis.org> Signed-off-by:
Palmer Dabbelt <palmerdabbelt@google.com>
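A simplified sketch of the protection pattern from that commit as it would look in a kprobes ftrace callback; the handler body is abbreviated and the exact signature may differ between kernel versions:

    void kprobe_ftrace_handler(unsigned long ip, unsigned long parent_ip,
                               struct ftrace_ops *ops, struct pt_regs *regs)
    {
            int bit;

            bit = ftrace_test_recursion_trylock(ip, parent_ip);
            if (bit < 0)
                    return;

            preempt_disable_notrace();
            /* ... existing kprobe pre/post handling ... */
            preempt_enable_notrace();

            ftrace_test_recursion_unlock(bit);
    }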
-
Jisheng Zhang authored
These two functions are used to implement the kprobes feature so they can't be kprobed. Fixes: c22b0bcb ("riscv: Add kprobes supported") Cc: stable@vger.kernel.org Signed-off-by:
Jisheng Zhang <jszhang@kernel.org> Reviewed-by:
Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by:
Palmer Dabbelt <palmerdabbelt@google.com>
-
Kefeng Wang authored
Fix a spelling mistake introduced when the SPARSEMEM Kconfig text was copied. Fixes: a5406a7f ("riscv: Correct SPARSEMEM configuration") Cc: stable@vger.kernel.org Signed-off-by:
Kefeng Wang <wangkefeng.wang@huawei.com> Signed-off-by:
Palmer Dabbelt <palmerdabbelt@google.com>
-
- Apr 15, 2021
-
-
Nathan Chancellor authored
After commit 2decad92 ("arm64: mte: Ensure TIF_MTE_ASYNC_FAULT is set atomically"), LLVM's integrated assembler fails to build entry.S: <instantiation>:5:7: error: expected assembly-time absolute expression .org . - (664b-663b) + (662b-661b) ^ <instantiation>:6:7: error: expected assembly-time absolute expression .org . - (662b-661b) + (664b-663b) ^ The root cause is LLVM's assembler has a one-pass design, meaning it cannot figure out these instruction lengths when the .org directive is outside of the subsection that they are in, which was changed by the .arch_extension directive added in the above commit. Apply the same fix from commit 966a0acc ("arm64/alternatives: move length validation inside the subsection") to the alternative_endif macro, shuffling the .org directives so that the length validation will always happen in the same subsections. alternative_insn has not shown any issue yet, but it appears that it could have the same issue in the future, so just preemptively change it. Fixes: f7b93d42 ("arm64/alternatives: use subsections for replacement sequences") Cc: <stable@vger.kernel.org> # 5.8.x Link: https://github.com/ClangBuiltLinux/linux/issues/1347 Signed-off-by:
Nathan Chancellor <nathan@kernel.org> Reviewed-by:
Sami Tolvanen <samitolvanen@google.com> Tested-by:
Sami Tolvanen <samitolvanen@google.com> Reviewed-by:
Nick Desaulniers <ndesaulniers@google.com> Tested-by:
Nick Desaulniers <ndesaulniers@google.com> Link: https://lore.kernel.org/r/20210414000803.662534-1-nathan@kernel.org Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com>
-
- Apr 13, 2021
-
-
Reiji Watanabe authored
__vmx_handle_exit() uses vcpu->run->internal.ndata as an index for an array access. Since vcpu->run is (can be) mapped to a user address space with write permission, 'ndata' could be updated by the user process at any time (the user process can set it to outside the bounds of the array). So it is not safe for __vmx_handle_exit() to use 'ndata' that way. Fixes: 1aa561b1 ("kvm: x86: Add "last CPU" to some KVM_EXIT information") Signed-off-by:
Reiji Watanabe <reijiw@google.com> Reviewed-by:
Jim Mattson <jmattson@google.com> Message-Id: <20210413154739.490299-1-reijiw@google.com> Cc: stable@vger.kernel.org Signed-off-by:
Paolo Bonzini <pbonzini@redhat.com>
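A sketch of the hardening pattern described, with field usage simplified: build the index in a local variable and publish it once, rather than re-reading it from the user-mappable vcpu->run:

    u32 ndata = 3;  /* local copy; never re-read from vcpu->run->internal.ndata */

    /* ... append any optional entries ... */
    vcpu->run->internal.data[ndata++] = vcpu->arch.last_vmentry_cpu;

    vcpu->run->internal.ndata = ndata;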
-
Rafael J. Wysocki authored
Commit 1a1c130a ("ACPI: tables: x86: Reserve memory occupied by ACPI tables") attempted to address an issue with reserving the memory occupied by ACPI tables, but it broke the initrd-based table override mechanism relied on by multiple users. To restore the initrd-based ACPI table override functionality, move the acpi_boot_table_init() invocation in setup_arch() on x86 after the acpi_table_upgrade() one. Fixes: 1a1c130a ("ACPI: tables: x86: Reserve memory occupied by ACPI tables") Reported-by:
Hans de Goede <hdegoede@redhat.com> Tested-by:
Hans de Goede <hdegoede@redhat.com> Signed-off-by:
Rafael J. Wysocki <rafael.j.wysocki@intel.com>
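The resulting ordering in setup_arch(), sketched with surrounding code omitted:

    /* Let the initrd-based table upgrade/override run first ... */
    acpi_table_upgrade();

    /* ... */

    /* ... so that table parsing and reservation sees the overridden tables. */
    acpi_boot_table_init();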
-
Jisheng Zhang authored
If the instruction being single stepped causes a page fault, the kprobe is cancelled to let the page fault handler continue as a normal page fault. But the local irqflags are disabled, so the CPU will restore pstate with DAIF masked. After the page fault is serviced, the kprobe is triggered again and we overwrite the saved_irqflag by calling kprobes_save_local_irqflag(). NOTE: DAIF is masked in this new saved irqflag. After the kprobe is serviced, the CPU pstate is restored with DAIF masked. Fix this by restoring the saved irqflag when the kprobe is cancelled by the page fault handler. This patch is inspired by one patch for riscv from Liao Chang. Signed-off-by:
Jisheng Zhang <Jisheng.Zhang@synaptics.com> Acked-by:
Masami Hiramatsu <mhiramat@kernel.org> Link: https://lore.kernel.org/r/20210412174101.6bfb0594@xhacker.debian Signed-off-by:
Will Deacon <will@kernel.org>
-
- Apr 12, 2021
-
-
Catalin Marinas authored
The entry from EL0 code checks the TFSRE0_EL1 register for any asynchronous tag check faults in user space and sets the TIF_MTE_ASYNC_FAULT flag. This is not done atomically, potentially racing with another CPU calling set_tsk_thread_flag(). Replace the non-atomic ORR+STR with an STSET instruction. While STSET requires ARMv8.1 and an assembler that understands LSE atomics, the MTE feature is part of ARMv8.5 and already requires an updated assembler. Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com> Fixes: 637ec831 ("arm64: mte: Handle synchronous and asynchronous tag check faults") Cc: <stable@vger.kernel.org> # 5.10.x Reported-by:
Will Deacon <will@kernel.org> Cc: Will Deacon <will@kernel.org> Cc: Vincenzo Frascino <vincenzo.frascino@arm.com> Cc: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20210409173710.18582-1-catalin.marinas@arm.com Signed-off-by:
Will Deacon <will@kernel.org>
-
Vasily Gorbik authored
Currently psw_idle does not allocate a stack frame and does not save its r14 and r15 into the save area. Even though this is valid from a call ABI point of view, because psw_idle does not make any calls explicitly, in reality psw_idle is an entry point for controlled transition into serving interrupts. So, in practice, the psw_idle stack frame is analyzed during stack unwinding. Depending on build options, the r14 slot in the save area of psw_idle might contain either a value saved by a previous sibling call or complete garbage. [task 0000038000003c28] do_ext_irq+0xd6/0x160 [task 0000038000003c78] ext_int_handler+0xba/0xe8 [task *0000038000003dd8] psw_idle_exit+0x0/0x8 <-- pt_regs ([task 0000038000003dd8] 0x0) [task 0000038000003e10] default_idle_call+0x42/0x148 [task 0000038000003e30] do_idle+0xce/0x160 [task 0000038000003e70] cpu_startup_entry+0x36/0x40 [task 0000038000003ea0] arch_call_rest_init+0x76/0x80 So, to make the stacktrace nicer and actually point to the real caller of psw_idle in this frequently occurring case, make psw_idle save its r14. [task 0000038000003c28] do_ext_irq+0xd6/0x160 [task 0000038000003c78] ext_int_handler+0xba/0xe8 [task *0000038000003dd8] psw_idle_exit+0x0/0x6 <-- pt_regs ([task 0000038000003dd8] arch_cpu_idle+0x3c/0xd0) [task 0000038000003e10] default_idle_call+0x42/0x148 [task 0000038000003e30] do_idle+0xce/0x160 [task 0000038000003e70] cpu_startup_entry+0x36/0x40 [task 0000038000003ea0] arch_call_rest_init+0x76/0x80 Reviewed-by:
Sven Schnelle <svens@linux.ibm.com> Signed-off-by:
Vasily Gorbik <gor@linux.ibm.com> Signed-off-by:
Heiko Carstens <hca@linux.ibm.com>
-
Vasily Gorbik authored
Currently, when an interrupt arrives while the CPU is in kernel context, the INT_HANDLER macro (used for ext_int_handler and io_int_handler) allocates a new stack frame and pt_regs on the kernel stack and sets up the backchain to jump over the pt_regs to the frame which has been interrupted. This is not ideal for two reasons: 1. It hides the fact that the kernel stack contains an interrupt frame in it and hence breaks arch_stack_walk_reliable(), which needs to know that to guarantee "reliability" and checks that there are no pt_regs on the way. 2. It breaks the backchain unwinder logic, which assumes that the next stack frame after an interrupt frame is reliable, while it is not. In some cases (when r14 contains garbage) this leads to early unwinding termination with an error, instead of marking the frame as unreliable and continuing. To address that, set the backchain to 0 instead. Fixes: 56e62a73 ("s390: convert to generic entry") Reviewed-by:
Sven Schnelle <svens@linux.ibm.com> Signed-off-by:
Vasily Gorbik <gor@linux.ibm.com> Signed-off-by:
Heiko Carstens <hca@linux.ibm.com>
-
- Apr 11, 2021
-
-
Angelo Dureghello authored
Detected a broken boot on mcf54415, likely introduced by commit 4bfc848e ("m68k/mm: enable use of generic memory_model.h for !DISCONTIGMEM"). Fix ARCH_PFN_OFFSET to be a pfn. Signed-off-by:
Angelo Dureghello <angelo@kernel-space.org> Acked-by:
Mike Rapoport <rppt@linux.ibm.com> Signed-off-by:
Greg Ungerer <gerg@linux-m68k.org>
-
- Apr 09, 2021
-
-
Marco Elver authored
On systems with KPTI enabled, we can currently observe the following warning: BUG: using smp_processor_id() in preemptible caller is invalidate_user_asid+0x13/0x50 CPU: 6 PID: 1075 Comm: dmesg Not tainted 5.12.0-rc4-gda4a2b1a5479-kfence_1+ #1 Hardware name: Hewlett-Packard HP Pro 3500 Series/2ABF, BIOS 8.11 10/24/2012 Call Trace: dump_stack+0x7f/0xad check_preemption_disabled+0xc8/0xd0 invalidate_user_asid+0x13/0x50 flush_tlb_one_kernel+0x5/0x20 kfence_protect+0x56/0x80 ... While it normally makes sense to require preemption to be off, so that the expected CPU's TLB is flushed and not another, in our case it really is best-effort (see comments in kfence_protect_page()). Avoid the warning by disabling preemption around flush_tlb_one_kernel(). Link: https://lore.kernel.org/lkml/YGIDBAboELGgMgXy@elver.google.com/ Link: https://lkml.kernel.org/r/20210330065737.652669-1-elver@google.com Signed-off-by:
Marco Elver <elver@google.com> Reported-by:
Tomi Sarvela <tomi.p.sarvela@intel.com> Cc: Alexander Potapenko <glider@google.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Andrey Konovalov <andreyknvl@google.com> Cc: Jann Horn <jannh@google.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
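A sketch of the workaround as it would sit around the flush in kfence_protect_page() on x86 (best-effort, as the surrounding comments explain):

    /* Avoid the smp_processor_id()-in-preemptible warning; flushing the
     * "wrong" CPU's TLB is acceptable here because the protection is
     * best-effort anyway. */
    preempt_disable();
    flush_tlb_one_kernel(addr);
    preempt_enable();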
-
Sergei Trofimovich authored
ia64 has two stacks: - memory stack (or just "stack"), pointed at by r12 - register backing store (register stack), pointed at by ar.bsp/ar.bspstore, with complications around the dirty register frame on the CPU. In [1] Dmitry noticed that PTRACE_GET_SYSCALL_INFO returns the register stack instead of the memory stack. The bug comes from the fact that user_stack_pointer() and current_user_stack_pointer() don't return the same register: ulong user_stack_pointer(struct pt_regs *regs) { return regs->ar_bspstore; } #define current_user_stack_pointer() (current_pt_regs()->r12) The change gets both back in sync. I think ptrace(PTRACE_GET_SYSCALL_INFO) is the only user affected by this bug on ia64. The change fixes the 'rt_sigreturn.gen.test' strace test, where the issue was initially observed. Link: https://bugs.gentoo.org/769614 [1] Link: https://lkml.kernel.org/r/20210331084447.2561532-1-slyfox@gentoo.org Signed-off-by:
Sergei Trofimovich <slyfox@gentoo.org> Reported-by:
Dmitry V. Levin <ldv@altlinux.org> Cc: Oleg Nesterov <oleg@redhat.com> Cc: <stable@vger.kernel.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
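A sketch of the resulting helper, in line with the definitions quoted above (assumed to be the shape of the fix):

    unsigned long user_stack_pointer(struct pt_regs *regs)
    {
            return regs->r12;  /* memory stack, matching current_user_stack_pointer() */
    }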
-
Mike Rapoport authored
Commit cb9f753a ("mm: fix races between swapoff and flush dcache") updated flush_dcache_page implementations on several architectures to use page_mapping_file() in order to avoid races between page_mapping() and swapoff(). This update missed arch/nds32 and there is a possibility of a race there. Replace page_mapping() with page_mapping_file() in nds32 implementation of flush_dcache_page(). Link: https://lkml.kernel.org/r/20210330175126.26500-1-rppt@kernel.org Fixes: cb9f753a ("mm: fix races between swapoff and flush dcache") Signed-off-by:
Mike Rapoport <rppt@linux.ibm.com> Reviewed-by:
Matthew Wilcox (Oracle) <willy@infradead.org> Acked-by:
Greentime Hu <green.hu@gmail.com> Cc: Huang Ying <ying.huang@intel.com> Cc: Nick Hu <nickhu@andestech.com> Cc: Vincent Chen <deanbo422@gmail.com> Cc: <stable@vger.kernel.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
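The change amounts to a one-line substitution in nds32's flush_dcache_page(), sketched here with the surrounding logic omitted:

    struct address_space *mapping;

    mapping = page_mapping_file(page);  /* was: page_mapping(page) */
    /* ... existing dirty/aliasing handling using 'mapping' ... */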
-
Marek Behún authored
Change my e-mail address to kabel@kernel.org, and fix my name in non-code parts (add diacritical mark). Link: https://lkml.kernel.org/r/20210325171123.28093-2-kabel@kernel.org Signed-off-by:
Marek Behún <kabel@kernel.org> Cc: Bartosz Golaszewski <bgolaszewski@baylibre.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Jassi Brar <jassisinghbrar@gmail.com> Cc: Linus Walleij <linus.walleij@linaro.org> Cc: Pavel Machek <pavel@ucw.cz> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
Thomas Tai authored
Commit 334872a0 ("x86/traps: Attempt to fixup exceptions in vDSO before signaling") added return statements which bypass calling cond_local_irq_disable(). According to ca4c6a98 ("x86/traps: Make interrupt enable/disable symmetric in C code"), cond_local_irq_disable() is needed because the asm return code no longer disables interrupts. Follow the existing code as an example and use "goto exit" instead of a "return" statement. [ bp: Massage commit message. ] Fixes: 334872a0 ("x86/traps: Attempt to fixup exceptions in vDSO before signaling") Signed-off-by:
Thomas Tai <thomas.tai@oracle.com> Signed-off-by:
Borislav Petkov <bp@suse.de> Reviewed-by:
Alexandre Chartre <alexandre.chartre@oracle.com> Link: https://lkml.kernel.org/r/1617902914-83245-1-git-send-email-thomas.tai@oracle.com
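A simplified sketch of the pattern, using the vDSO fixup path as the example; the handler shape is illustrative:

    cond_local_irq_enable(regs);

    if (fixup_vdso_exception(regs, trapnr, error_code, 0))
            goto exit;  /* was: return; -- which skipped the disable below */

    /* ... rest of the handler ... */
    exit:
            cond_local_irq_disable(regs);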
-
- Apr 08, 2021
-
-
Piotr Krysiuk authored
The branch displacement logic in the BPF JIT compilers for x86 assumes that, for any generated branch instruction, the distance cannot increase between optimization passes. But this assumption can be violated due to how the distances are computed. Specifically, whenever a backward branch is processed in do_jit(), the distance is computed by subtracting the positions in the machine code from different optimization passes. This is because part of addrs[] is already updated for the current optimization pass, before the branch instruction is visited. And so the optimizer can expand blocks of machine code in some cases. This can confuse the optimizer logic, where it assumes that a fixed point has been reached for all machine code blocks once the total program size stops changing. And then the JIT compiler can output abnormal machine code containing incorrect branch displacements. To mitigate this issue, we assert that a fixed point is reached while populating the output image. This rejects any problematic programs. The issue affects both x86-32 and x86-64. We mitigate separately to ease backporting. Signed-off-by:
Piotr Krysiuk <piotras@gmail.com> Reviewed-by:
Daniel Borkmann <daniel@iogearbox.net> Signed-off-by:
Daniel Borkmann <daniel@iogearbox.net>
-
Piotr Krysiuk authored
The branch displacement logic in the BPF JIT compilers for x86 assumes that, for any generated branch instruction, the distance cannot increase between optimization passes. But this assumption can be violated due to how the distances are computed. Specifically, whenever a backward branch is processed in do_jit(), the distance is computed by subtracting the positions in the machine code from different optimization passes. This is because part of addrs[] is already updated for the current optimization pass, before the branch instruction is visited. And so the optimizer can expand blocks of machine code in some cases. This can confuse the optimizer logic, where it assumes that a fixed point has been reached for all machine code blocks once the total program size stops changing. And then the JIT compiler can output abnormal machine code containing incorrect branch displacements. To mitigate this issue, we assert that a fixed point is reached while populating the output image. This rejects any problematic programs. The issue affects both x86-32 and x86-64. We mitigate separately to ease backporting. Signed-off-by:
Piotr Krysiuk <piotras@gmail.com> Reviewed-by:
Daniel Borkmann <daniel@iogearbox.net> Signed-off-by:
Daniel Borkmann <daniel@iogearbox.net>
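Both JIT entries above apply the same mitigation; a simplified sketch of the check inside the convergence loop (variable names follow the usual JIT code, surrounding loop omitted):

    if (image) {
            /* The final image is being populated: the size from this pass
             * must match the fixed point computed earlier, otherwise the
             * branch displacements cannot be trusted and the program is
             * rejected instead of being emitted. */
            if (proglen != oldproglen) {
                    pr_err("bpf_jit: proglen=%d != oldproglen=%d\n",
                           proglen, oldproglen);
                    goto out_image;
            }
            break;
    }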
-
Paolo Bonzini authored
Right now, if a call to kvm_tdp_mmu_zap_sp returns false, the caller will skip the TLB flush, which is wrong. There are two ways to fix it: - since kvm_tdp_mmu_zap_sp will not yield and therefore will not flush the TLB itself, we could change the call to kvm_tdp_mmu_zap_sp to use "flush |= ..." - or we can chain the flush argument through kvm_tdp_mmu_zap_sp down to __kvm_tdp_mmu_zap_gfn_range. Note that kvm_tdp_mmu_zap_sp will neither yield nor flush, so flush would never go from true to false. This patch does the former to simplify application to stable kernels, and to make it further clearer that kvm_tdp_mmu_zap_sp will not flush. Cc: seanjc@google.com Fixes: 048f4980 ("KVM: x86/mmu: Ensure TLBs are flushed for TDP MMU during NX zapping") Cc: <stable@vger.kernel.org> # 5.10.x: 048f4980: KVM: x86/mmu: Ensure TLBs are flushed for TDP MMU during NX zapping Cc: <stable@vger.kernel.org> # 5.10.x: 33a31641: KVM: x86/mmu: Don't allow TDP MMU to yield when recovering NX pages Cc: <stable@vger.kernel.org> Reviewed-by:
Sean Christopherson <seanjc@google.com> Signed-off-by:
Paolo Bonzini <pbonzini@redhat.com>
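A sketch of the first option, as described, in an NX-recovery style caller (caller shape illustrative):

    /* Accumulate rather than overwrite: kvm_tdp_mmu_zap_sp() itself never
     * yields or flushes, so a false return must not drop a pending flush. */
    flush |= kvm_tdp_mmu_zap_sp(kvm, sp);

    /* ... */

    if (flush)
            kvm_flush_remote_tlbs(kvm);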
-
- Apr 07, 2021
-
-
Vitaly Kuznetsov authored
Commit 8cdddd18 ("ACPI: processor: Fix CPU0 wakeup in acpi_idle_play_dead()") tried to fix CPU0 hotplug breakage by copying wakeup_cpu0() + start_cpu0() logic from hlt_play_dead()/mwait_play_dead() into acpi_idle_play_dead(). The problem is that these functions are not exported to modules, so the build fails when CONFIG_ACPI_PROCESSOR=m. The issue could've been fixed by exporting both wakeup_cpu0()/start_cpu0() (the latter from assembly), but it seems putting the whole pattern into a new function and exporting it instead is better. Reported-by:
kernel test robot <lkp@intel.com> Fixes: 8cdddd18 ("ACPI: processor: Fix CPU0 wakeup in acpi_idle_play_dead()") Cc: <stable@vger.kernel.org> # 5.10+ Signed-off-by:
Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by:
Rafael J. Wysocki <rafael.j.wysocki@intel.com>
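A sketch of the exported helper described above; the function name and the exact condition are assumptions based on the existing wakeup_cpu0()/start_cpu0() pattern:

    void cond_wakeup_cpu0(void)
    {
            if (smp_processor_id() == 0 && enable_start_cpu0)
                    start_cpu0();  /* wake CPU0 up via its boot-time entry point */
    }
    EXPORT_SYMBOL_GPL(cond_wakeup_cpu0);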
-
Heiko Carstens authored
Use memblock_free_late() to free the old machine check stack to the buddy allocator instead of leaking it. Fixes: b61b1595 ("s390: add stack for machine check handler") Cc: Vasily Gorbik <gor@linux.ibm.com> Acked-by:
Sven Schnelle <svens@linux.ibm.com> Signed-off-by:
Heiko Carstens <hca@linux.ibm.com>
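The mechanism, sketched; the stack variable and size below are illustrative, not the actual identifiers:

    /* Hand the early-allocated machine check stack back to the buddy
     * allocator once it has been replaced, instead of leaking it. */
    memblock_free_late(__pa(mcck_stack), THREAD_SIZE);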
-
Jernej Skrabec authored
Although every Beelink GS1 seems to have an external 32768 Hz oscillator, it works on only one of four boards tested. There are more reports of RTC issues elsewhere, like the Armbian forum. One Beelink GS1 owner read the RTC oscillator status register on the Android image which shipped with the box; the reported value indicated problems with the external oscillator. In order to fix the RTC and related issues (HDMI-CEC and suspend/resume with Crust) on all boards, switch to the internal oscillator. Fixes: 32507b86 ("arm64: dts: allwinner: h6: Move ext. oscillator to board DTs") Signed-off-by:
Jernej Skrabec <jernej.skrabec@siol.net> Tested-by:
Clément Péron <peron.clem@gmail.com> Signed-off-by:
Maxime Ripard <maxime@cerno.tech> Link: https://lore.kernel.org/r/20210330184218.279738-1-jernej.skrabec@siol.net
-
Andre Przywara authored
Commit 941432d0 ("arm64: dts: allwinner: Drop non-removable from SoPine/LTS SD card") enabled the card detect GPIO for the SOPine module, along the way with the Pine64-LTS, which share the same base .dtsi. However while both boards indeed have a working CD GPIO on PF6, the polarity is different: the SOPine modules uses a "push-pull" socket, which has an active-high switch, while the Pine64-LTS use the more traditional push-push socket and the common active-low switch. Fix the polarity in the sopine.dtsi, and overwrite it in the LTS board .dts, to make the SD card work again on systems using SOPine modules. Fixes: 941432d0 ("arm64: dts: allwinner: Drop non-removable from SoPine/LTS SD card") Reported-by:
Ashley <contact@victorianfox.com> Signed-off-by:
Andre Przywara <andre.przywara@arm.com> Signed-off-by:
Maxime Ripard <maxime@cerno.tech> Link: https://lore.kernel.org/r/20210316144219.5973-1-andre.przywara@arm.com
-
Chen-Yu Tsai authored
The macros for the clock and reset indices for the RSB hardware block were replaced with raw numbers when the RSB controller node was added. This was done to avoid cross-tree dependencies. Now that both the clk and DT changes have been merged, we can switch back to using the macros. Signed-off-by:
Chen-Yu Tsai <wens@csie.org> Signed-off-by:
Maxime Ripard <maxime@cerno.tech>
-
- Apr 06, 2021
-
-
Bhaskar Chowdhury authored
with some additional cleanups by Helge. Signed-off-by:
Bhaskar Chowdhury <unixbhaskar@gmail.com> Acked-by:
Randy Dunlap <rdunlap@infradead.org> Signed-off-by:
Helge Deller <deller@gmx.de>
-