  1. Mar 15, 2018
      sparc64: Fix regression in pmdp_invalidate(). · cfb61b5e
      David S. Miller authored
      
      pmdp_invalidate() was changed to update the pmd atomically
      (to not lose dirty/access bits) and return the original pmd
      value.
      
      However, in doing so, we lost much of the essential work that
      set_pmd_at() does, namely updating the hugepage mapping counts and
      queuing up the batched TLB flush entry.
      
      Thus we were not flushing entries out of the TLB when making
      such PMD changes.
      
      Fix this by abstracting the accounting work of set_pmd_at() out into a
      separate function, and call it from pmdp_establish().
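
      The shape of the fix, as a minimal sketch (arch/sparc/mm/tlb.c; the
      accounting body is elided, and the cmpxchg64() loop is an assumption
      about how the atomic update is done):

      static void __set_pmd_acct(struct mm_struct *mm, unsigned long addr,
                                 pmd_t orig, pmd_t pmd)
      {
              /* update hugepage mapping counts and queue the batched
               * TLB flush entry, exactly as set_pmd_at() used to do */
      }

      void set_pmd_at(struct mm_struct *mm, unsigned long addr,
                      pmd_t *pmdp, pmd_t pmd)
      {
              pmd_t orig = *pmdp;

              *pmdp = pmd;
              __set_pmd_acct(mm, addr, orig, pmd);
      }

      pmd_t pmdp_establish(struct vm_area_struct *vma, unsigned long address,
                           pmd_t *pmdp, pmd_t pmd)
      {
              pmd_t old;

              /* the atomic exchange keeps dirty/access bits intact... */
              do {
                      old = *pmdp;
              } while (cmpxchg64(&pmdp->pmd, pmd_val(old),
                                 pmd_val(pmd)) != pmd_val(old));

              /* ...and the factored-out helper restores the accounting
               * and TLB-flush batching that was being skipped */
              __set_pmd_acct(vma->vm_mm, address, old, pmd);

              return old;
      }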
      
      Fixes: a8e654f0 ("sparc64: update pmdp_invalidate() to return old pmd value")
      Signed-off-by: David S. Miller <davem@davemloft.net>
  2. Mar 09, 2018
      arm64: Relax ARM_SMCCC_ARCH_WORKAROUND_1 discovery · e21da1c9
      Marc Zyngier authored
      
      A recent update to the ARM SMCCC ARCH_WORKAROUND_1 specification
      allows firmware to return a non-zero, positive value to indicate
      that although the mitigation is implemented at the higher exception
      level, the CPU on which the call is made is not affected.
      
      Let's relax the check on the return value from ARCH_WORKAROUND_1
      so that we only error out if the returned value is negative.
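
      As a minimal sketch of the relaxed check (the call matches the SMCCC
      1.1 helper used in cpu_errata.c; the surrounding capability-detection
      code is elided):

      struct arm_smccc_res res;

      arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
                        ARM_SMCCC_ARCH_WORKAROUND_1, &res);

      /* Previously any non-zero a0 was rejected; now a positive value
       * means "implemented, but this CPU is unaffected", so only a
       * negative return is an error. */
      if ((int)res.a0 < 0)
              return;         /* workaround unavailable on this CPU */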
      
      Fixes: b092201e ("arm64: Add ARM_SMCCC_ARCH_WORKAROUND_1 BP hardening support")
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      x86/kprobes: Fix kernel crash when probing .entry_trampoline code · c07a8f8b
      Francis Deslauriers authored
      
      Disable the kprobe probing of the entry trampoline:
      
      .entry_trampoline is a code area that is used to ensure page table
      isolation between userspace and kernelspace.
      
      At the beginning of the trampoline's execution, we load the kernel's
      CR3 register, which enables the translation of kernel virtual
      addresses to physical addresses. Before this happens, most kernel
      addresses cannot be translated, because the running process's CR3 is
      still in use.
      
      If a kprobe is placed on the trampoline code before that change of
      the CR3 register happens, the kernel crashes because the int3-handling
      pages are not accessible.
      
      To fix this, add the .entry_trampoline section to the kprobe blacklist
      to prohibit the probing of code before all the kernel pages are
      accessible.
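
      A minimal sketch of the blacklist hook, assuming linker-provided
      __entry_trampoline_start/end symbols that bracket the
      .entry_trampoline section (arch/x86/kernel/kprobes/core.c):

      bool arch_within_kprobe_blacklist(unsigned long addr)
      {
              bool in_entry_trampoline = false;

      #ifdef CONFIG_X86_64
              /* reject probes anywhere inside .entry_trampoline */
              in_entry_trampoline =
                      addr >= (unsigned long)__entry_trampoline_start &&
                      addr <  (unsigned long)__entry_trampoline_end;
      #endif
              /* the regular __kprobes text section stays blacklisted too */
              return in_entry_trampoline ||
                     (addr >= (unsigned long)__kprobes_text_start &&
                      addr <  (unsigned long)__kprobes_text_end);
      }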
      
      Signed-off-by: Francis Deslauriers <francis.deslauriers@efficios.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: mathieu.desnoyers@efficios.com
      Cc: mhiramat@kernel.org
      Link: http://lkml.kernel.org/r/1520565492-4637-2-git-send-email-francis.deslauriers@efficios.com
      
      
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  3. Mar 02, 2018
      s390: Fix runtime warning about negative pgtables_bytes · 61e18270
      Guenter Roeck authored
      
      When running s390 images with 'compat' processes, the following
      BUG is seen repeatedly.
      
      BUG: non-zero pgtables_bytes on freeing mm: -16384
      
      Bisect points to commit b4e98d9a ("mm: account pud page tables").
      Analysis shows that init_new_context() is called with
      mm->context.asce_limit set to _REGION3_SIZE. In this situation,
      pgtables_bytes remains set to 0 and is not increased. The message is
      displayed when the affected process dies and mm_dec_nr_puds() is called.
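
      A minimal sketch of the fix in init_new_context()
      (arch/s390/include/asm/mmu_context.h), assuming the accounting is
      balanced with a mm_inc_nr_puds() call in the 3-level/compat path
      (fragment of the switch on asce_limit):

      case _REGION3_SIZE:
              /* forked 3-level task, fall through */
      case 0:
              /* context created by exec, set asce limit to 4TB */
              mm->context.asce_limit = STACK_TOP_MAX;
              mm->context.asce = __pa(mm->pgd) | _ASCE_TABLE_LENGTH |
                                 _ASCE_USER_BITS | _ASCE_TYPE_REGION3;
              /* pgd_alloc() did not account this pud, so do it here,
               * keeping mm_dec_nr_puds() at teardown from underflowing */
              mm_inc_nr_puds(mm);
              break;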
      
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Fixes: b4e98d9a ("mm: account pud page tables")
      Signed-off-by: Guenter Roeck <linux@roeck-us.net>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      parisc: Reduce irq overhead when run in qemu · 636a415b
      Helge Deller authored
      
      When running under QEMU, each call to mfctl(16) creates overhead
      because the QEMU timer has to be scaled and moved into the register.
      This patch reduces the number of mfctl(16) calls by moving them out
      of the loops.
      
      Additionally, increase the minimum time interval to 8000 cycles
      instead of 500 to compensate for possible QEMU delays in delivering
      interrupts.
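
      A simplified sketch of the reworked hot path (names follow
      arch/parisc/kernel/time.c; the surrounding timer_interrupt() code is
      elided and the exact loop structure is an assumption):

      unsigned long now = mfctl(16);  /* read cr16 once, outside the loops */

      /* skip clock ticks we would miss anyway, reusing the cached
       * value instead of re-reading the emulated counter each time */
      while (next_tick - now > cpt)
              next_tick += cpt;

      /* if the next tick is closer than 8000 cycles (was: 500), skip
       * it; QEMU may deliver the interrupt too late otherwise */
      if (next_tick - now <= 8000)
              next_tick += cpt;

      mtctl(next_tick, 16);           /* program the interval timer */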
      
      Signed-off-by: Helge Deller <deller@gmx.de>
      Cc: stable@vger.kernel.org # 4.14+
      parisc: Use cr16 interval timers unconditionally on qemu · 5ffa8518
      Helge Deller authored
      
      When running on qemu, we know that the (emulated) cr16 cpu-internal
      clocks are synchronized. So let's use them unconditionally on qemu.
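
      A minimal sketch of the relaxed guard, assuming the running_on_qemu
      flag that parisc sets at boot (the demotion body for real
      multi-socket hardware is elided):

      static int __init init_cr16_clocksource(void)
      {
              /* cr16 may drift between sockets on physical SMP machines,
               * but under QEMU every cr16 derives from one host clock */
              if (num_online_cpus() > 1 && !running_on_qemu) {
                      /* real hardware: compare per-CPU counter locations
                       * and lower the clocksource rating if they differ */
              }

              clocksource_register_hz(&clocksource_cr16,
                                      100 * PAGE0->mem_10msec);
              return 0;
      }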
      
      Signed-off-by: Helge Deller <deller@gmx.de>
      Cc: stable@vger.kernel.org # 4.14+
      parisc: Check if secondary CPUs want own PDC calls · 0ed1fe4a
      Helge Deller authored
      
      The architecture specification says (for 64-bit systems): PDC is a per
      processor resource, and operating system software must be prepared to
      manage separate pointers to PDCE_PROC for each processor.  The address
      of PDCE_PROC for the monarch processor is stored in the Page Zero
      location MEM_PDC. The address of PDCE_PROC for each non-monarch
      processor is passed in gr26 when PDCE_RESET invokes OS_RENDEZ.
      
      Currently we still use one PDC for all CPUs, but in case we
      encounter a machine that follows the specification, let's warn
      about it.
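
      A minimal sketch of the check, assuming smp_callin() is handed the
      PDCE_PROC address that PDCE_RESET passed in gr26 (wired through from
      the rendezvous entry code):

      void smp_callin(unsigned long pdce_proc)
      {
      #ifdef CONFIG_64BIT
              /* warn if this CPU's PDCE_PROC differs from the monarch's
               * Page Zero MEM_PDC pointer, which is all we use today */
              WARN_ON(((unsigned long)(PAGE0->mem_pdc_hi) << 32
                              | PAGE0->mem_pdc) != pdce_proc);
      #endif
              /* ... existing secondary-CPU bringup continues here ... */
      }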
      
      Signed-off-by: Helge Deller <deller@gmx.de>
      parisc: Hide virtual kernel memory layout · fd8d0ca2
      Helge Deller authored
      
      For security reasons, do not expose the virtual kernel memory layout
      to userspace.
      
      Signed-off-by: Helge Deller <deller@gmx.de>
      Suggested-by: Kees Cook <keescook@chromium.org>
      Cc: stable@vger.kernel.org # 4.15
      Reviewed-by: Kees Cook <keescook@chromium.org>
      parisc: Fix ordering of cache and TLB flushes · 0adb24e0
      John David Anglin authored
      
      The change to flush_kernel_vmap_range() wasn't sufficient to avoid the
      SMP stalls.  The problem is some drivers call these routines with
      interrupts disabled.  Interrupts need to be enabled for flush_tlb_all()
      and flush_cache_all() to work.  This version adds checks to ensure
      interrupts are not disabled before calling routines that need IPI
      interrupts.  When interrupts are disabled, we now drop into slower code.
      
      This change fixes the ordering of cache and TLB flushes in
      several cases.  When we flush the cache using the existing PTE/TLB
      entries, we need to flush the TLB after doing the cache flush.  We don't
      need to do this when we flush the entire instruction and data caches as
      these flushes don't use the existing TLB entries.  The same is true for
      tmpalias region flushes.
      
      The flush_kernel_vmap_range() and invalidate_kernel_vmap_range()
      routines have been updated.
      
      Additionally, we added a new purge_kernel_dcache_range_asm() routine to
      pacache.S and use it in invalidate_kernel_vmap_range().  Nominally,
      purges are faster than flushes as the cache lines don't have to be
      written back to memory.
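
      As a minimal sketch of how these rules combine in the updated
      invalidate_kernel_vmap_range() (names follow arch/parisc/kernel/
      cache.c; the exact threshold check is an assumption):

      void invalidate_kernel_vmap_range(void *vaddr, int size)
      {
              unsigned long start = (unsigned long)vaddr;
              unsigned long end = start + size;

              /* flush_data_cache() needs IPIs, so it is only an option
               * when interrupts are enabled (or on UP); it does not go
               * through the TLB, so no TLB flush is needed afterwards */
              if ((!IS_ENABLED(CONFIG_SMP) || !arch_irqs_disabled()) &&
                  (unsigned long)size >= parisc_cache_flush_threshold) {
                      flush_tlb_kernel_range(start, end);
                      flush_data_cache();
                      return;
              }

              /* purge (no writeback) through the existing PTE/TLB
               * entries, then flush the TLB -- in that order */
              purge_kernel_dcache_range_asm(start, end);
              flush_tlb_kernel_range(start, end);
      }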
      
      Hopefully, this is sufficient to resolve the remaining problems due to
      cache speculation.  So far, testing indicates that this is the case.  I
      did work up a patch using tmpalias flushes, but there is a performance
      hit because we need the physical address for each page, and we also need
      to sequence access to the tmpalias flush code.  This increases the
      probability of stalls.
      
      Signed-off-by: John David Anglin <dave.anglin@bell.net>
      Cc: stable@vger.kernel.org # 4.9+
      Signed-off-by: Helge Deller <deller@gmx.de>