1. 20 Apr, 2020 2 commits
    • KVM: x86: cleanup kvm_inject_emulated_page_fault · 0cd665bd
      Paolo Bonzini authored
      To reconstruct the kvm_mmu to be used for page fault injection, we
      can simply use fault->nested_page_fault.  This matches how
      fault->nested_page_fault is assigned in the first place by
      FNAME(walk_addr_generic).
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
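      A minimal sketch of the idea in the commit above (not the verbatim patch;
      fault_mmu() is an illustrative helper name):

        static struct kvm_mmu *fault_mmu(struct kvm_vcpu *vcpu,
                                         struct x86_exception *fault)
        {
                /*
                 * fault->nested_page_fault was set by FNAME(walk_addr_generic)
                 * when the fault was hit while translating a nested guest
                 * access, in which case it belongs to vcpu->arch.mmu;
                 * otherwise it belongs to the MMU that walked the guest page
                 * tables.
                 */
                return fault->nested_page_fault ? vcpu->arch.mmu
                                                : vcpu->arch.walk_mmu;
        }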
    • KVM: x86: introduce kvm_mmu_invalidate_gva · 5efac074
      Paolo Bonzini authored
      Wrap the combination of mmu->invlpg and kvm_x86_ops->tlb_flush_gva
      into a new function.  This function also lets us specify the host PGD to
      invalidate as well as the MMU, both of which will be useful in fixing and
      simplifying kvm_inject_emulated_page_fault.

      A nested guest's MMU, however, has g_context->invlpg == NULL.  Instead of
      setting it to nonpaging_invlpg, make kvm_mmu_invalidate_gva the only
      entry point to mmu->invlpg and treat a NULL invlpg pointer as equivalent
      to nonpaging_invlpg, saving a retpoline.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
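      A simplified sketch of the new wrapper's shape (not the verbatim patch;
      root handling and corner cases are elided, and the root_hpa parameter
      stands in for "the host PGD to invalidate" mentioned above):

        void kvm_mmu_invalidate_gva(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
                                    gva_t gva, hpa_t root_hpa)
        {
                /*
                 * A NULL ->invlpg (e.g. a nested guest's g_context) behaves
                 * like nonpaging_invlpg, i.e. there is nothing to sync;
                 * skipping the indirect call also avoids a retpoline.
                 */
                if (mmu->invlpg) {
                        if (root_hpa == INVALID_PAGE)
                                mmu->invlpg(vcpu, gva, mmu->root_hpa);
                        else
                                mmu->invlpg(vcpu, gva, root_hpa);
                }

                kvm_x86_ops->tlb_flush_gva(vcpu, gva);
        }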
2. 23 Jan, 2020 1 commit
    • KVM: x86: fix overlap between SPTE_MMIO_MASK and generation · 56871d44
      Paolo Bonzini authored
      The SPTE_MMIO_MASK overlaps with the bits used to track MMIO
      generation number.  A high enough generation number would overwrite the
      SPTE_SPECIAL_MASK region and cause the MMIO SPTE to be misinterpreted.
      
      Likewise, setting bits 52 and 53 would also cause an incorrect generation
      number to be read from the PTE, though this was partially mitigated by
      get_mmio_spte_generation masking SPTE_SPECIAL_MASK out of the spte, a step
      that would be useless if it weren't for the bug.  Drop that masking and
      replace it with a compile-time assertion.
      
      Fixes: 6eeb4ef0 ("KVM: x86: assign two bits to track SPTE kinds")
      Reported-by: Ben Gardon <bgardon@google.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
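      A standalone, compilable illustration of the compile-time check described
      above (the generation bit ranges below are assumptions chosen for the
      example, not the kernel's exact layout; SPTE_SPECIAL_MASK covering bits
      52 and 53 follows the message above):

        #include <stdint.h>

        #define SPTE_SPECIAL_MASK  (3ULL << 52)  /* bits 52 and 53, per the message above */

        /* The generation is split across a low and a high range of spare bits. */
        #define GEN_LOW_START      3
        #define GEN_LOW_END        11
        #define GEN_HIGH_START     54             /* kept clear of bits 52 and 53 */
        #define GEN_HIGH_END       62

        #define GEN_LOW_MASK   (((1ULL << (GEN_LOW_END - GEN_LOW_START + 1)) - 1) \
                                << GEN_LOW_START)
        #define GEN_HIGH_MASK  (((1ULL << (GEN_HIGH_END - GEN_HIGH_START + 1)) - 1) \
                                << GEN_HIGH_START)

        /*
         * The point of the fix: fail the build if a generation bit ever strays
         * into the region that marks an SPTE as special/MMIO, instead of letting
         * a large generation number silently change the SPTE's kind.
         */
        _Static_assert(((GEN_LOW_MASK | GEN_HIGH_MASK) & SPTE_SPECIAL_MASK) == 0,
                       "MMIO generation bits overlap SPTE_SPECIAL_MASK");

        /* With no overlap, unpacking no longer needs to strip SPTE_SPECIAL_MASK. */
        static inline uint64_t get_generation(uint64_t spte)
        {
                uint64_t gen = (spte & GEN_LOW_MASK) >> GEN_LOW_START;

                return gen | (((spte & GEN_HIGH_MASK) >> GEN_HIGH_START)
                              << (GEN_LOW_END - GEN_LOW_START + 1));
        }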
3. 21 Jan, 2020 2 commits
    • KVM: x86/mmu: Apply max PA check for MMIO sptes to 32-bit KVM · e30a7d62
      Sean Christopherson authored
      Remove the bogus 64-bit-only condition from the check that disables the
      MMIO spte optimization when the system supports the maximum PA width,
      i.e. doesn't have any reserved PA bits.  32-bit KVM always uses PAE
      paging for the shadow
      MMU, and per Intel's SDM:
      
        PAE paging translates 32-bit linear addresses to 52-bit physical
        addresses.
      
      The kernel's restrictions on max physical addresses are limits on how
      much memory the kernel can reasonably use, not what physical addresses
      are supported by hardware.
      
      Fixes: ce88decf ("KVM: MMU: mmio page fault support")
      Cc: stable@vger.kernel.org
      Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
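      A standalone model of the resulting logic (illustrative names and a
      simplified decision, not the kernel's function): whether the reserved-bit
      MMIO trick is usable depends only on the CPU's physical address width,
      since the shadow MMU's PAE/64-bit PTE formats carry 52-bit physical
      addresses even on a 32-bit host.

        #include <stdbool.h>
        #include <stdio.h>

        /* PTE formats used by the shadow MMU address up to 52 physical bits. */
        #define SHADOW_PTE_MAX_PHYS_BITS 52

        static bool mmio_spte_can_use_reserved_bit(unsigned int cpu_phys_bits)
        {
                /*
                 * With MAXPHYADDR == 52 there is no reserved PA bit left to
                 * abuse for marking MMIO sptes, so the optimization has to be
                 * disabled -- on 32-bit and 64-bit hosts alike.
                 */
                return cpu_phys_bits < SHADOW_PTE_MAX_PHYS_BITS;
        }

        int main(void)
        {
                printf("MAXPHYADDR 46: %s\n",
                       mmio_spte_can_use_reserved_bit(46) ? "usable" : "disabled");
                printf("MAXPHYADDR 52: %s\n",
                       mmio_spte_can_use_reserved_bit(52) ? "usable" : "disabled");
                return 0;
        }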
    • KVM: x86/mmu: Micro-optimize nEPT's bad memtype/XWR checks · b5c3c1b3
      Sean Christopherson authored
      Rework the handling of nEPT's bad memtype/XWR checks to micro-optimize
      the checks as much as possible.  Move the check to a separate helper,
      __is_bad_mt_xwr(), which allows the guest_rsvd_check usage in
      paging_tmpl.h to omit the check entirely for paging32/64 (bad_mt_xwr is
      always zero for non-nEPT) while retaining the bitwise-OR of the current
      code for the shadow_zero_check in walk_shadow_page_get_mmio_spte().
      
      Add a comment for the bitwise-OR usage in the mmio spte walk to avoid
      future attempts to "fix" the code, which is what prompted this
      optimization in the first place[*].
      
      Opportunistically remove the superfluous '!= 0' and parentheses, and
      use BIT_ULL() instead of open coding its equivalent.
      
      The net effect is that code generation is largely unchanged for
      walk_shadow_page_get_mmio_spte(), marginally better for
      ept_prefetch_invalid_gpte(), and significantly improved for
      paging32/64_prefetch_invalid_gpte().
      
      Note, walk_shadow_page_get_mmio_spte() can't use a templated version of
      the memtype/XWR check as it works on the host's shadow PTEs, e.g. it
      checks that KVM hasn't borked its EPT tables.  Even if it could be
      templated, the benefits of having a single implementation far outweigh
      the few uops that would be saved for NPT or non-TDP paging, e.g. most
      compilers inline it all the way up to kvm_mmu_page_fault().
      
      [*] https://lkml.kernel.org/r/20200108001859.25254-1-sean.j.christopherson@intel.com
      
      Cc: Jim Mattson <jmattson@google.com>
      Cc: David Laight <David.Laight@ACULAB.COM>
      Cc: Arvind Sankar <nivedita@alum.mit.edu>
      Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
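      A standalone model of the bitmap trick (simplified types; only
      __is_bad_mt_xwr() and BIT_ULL() are names taken from the message above,
      the rest is illustrative): the low 6 bits of an EPT entry hold its XWR
      permissions and memtype, so a precomputed 64-bit bitmap answers "is this
      combination illegal?" with a shift and an AND, and non-EPT MMUs simply
      leave the bitmap zero.

        #include <stdbool.h>
        #include <stdint.h>

        #define BIT_ULL(n)  (1ULL << (n))

        struct rsvd_bits_validate {
                uint64_t rsvd_bits_mask;   /* simplified: one mask, not per level */
                uint64_t bad_mt_xwr;       /* always zero for non-EPT MMUs */
        };

        static inline bool __is_bad_mt_xwr(const struct rsvd_bits_validate *rsvd_check,
                                           uint64_t pte)
        {
                /* Index the bitmap with the entry's memtype + XWR bits. */
                return rsvd_check->bad_mt_xwr & BIT_ULL(pte & 0x3f);
        }

        static inline bool __is_rsvd_bits_set(const struct rsvd_bits_validate *rsvd_check,
                                              uint64_t pte)
        {
                return pte & rsvd_check->rsvd_bits_mask;
        }

        /*
         * The shadow-walk caller keeps a bitwise-OR so the per-level aggregation
         * stays branchless; guest walks for paging32/64 can call only
         * __is_rsvd_bits_set() because their bad_mt_xwr bitmap is always zero.
         */
        static inline bool is_rsvd_spte(const struct rsvd_bits_validate *rsvd_check,
                                        uint64_t pte)
        {
                return __is_bad_mt_xwr(rsvd_check, pte) |
                       __is_rsvd_bits_set(rsvd_check, pte);
        }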