  20 Apr, 2020 (2 commits)
    • KVM: x86: Move "flush guest's TLB" logic to separate kvm_x86_ops hook · e64419d9
      Sean Christopherson authored
      
      
      Add a dedicated hook to handle flushing TLB entries on behalf of the
      guest, i.e. for a paravirtualized TLB flush, and use it directly instead
      of bouncing through kvm_vcpu_flush_tlb().
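
      A rough sketch of the shape of the change (only the ->tlb_flush_guest
      hook itself is named by this patch; the surrounding declarations and
      the call site shown here are simplified and illustrative):

        struct kvm_x86_ops {
                /* ... existing callbacks ... */
                void (*tlb_flush)(struct kvm_vcpu *vcpu, bool invalidate_gpa);
                /*
                 * Flush TLB entries on behalf of the guest, e.g. for a
                 * paravirtualized flush; only linear (guest virtual)
                 * mappings need to be invalidated.
                 */
                void (*tlb_flush_guest)(struct kvm_vcpu *vcpu);
        };

        /* e.g. a paravirt TLB flush request now dispatches directly: */
        kvm_x86_ops.tlb_flush_guest(vcpu);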
      
      For VMX, change the effective implementation to never do INVEPT and
      flush only the current context, i.e. to always flush via
      INVVPID(SINGLE_CONTEXT).  The INVEPT performed by __vmx_flush_tlb()
      when @invalidate_gpa=false and enable_vpid=0 is unnecessary, as it
      will only flush guest-physical mappings; linear and combined mappings
      are flushed by VM-Enter when VPID is disabled, and changes in the
      guest's page tables do not affect guest-physical mappings.
      
      When EPT and VPID are enabled, doing INVVPID is not required (by Intel's
      architecture) to invalidate guest-physical mappings, i.e. TLB entries
      that cache guest-physical mappings can live across INVVPID as the
      mappings are associated with an EPTP, not a VPID.  The intent of
      @invalidate_gpa is to inform vmx_flush_tlb() that it must "invalidate
      gpa mappings", i.e. do INVEPT and not simply INVVPID.  Other than nested
      VPID handling, which now calls vpid_sync_context() directly, the only
      scenario where KVM can safely do INVVPID instead of INVEPT (when EPT is
      enabled) is if KVM is flushing TLB entries from the guest's perspective,
      i.e. is only required to invalidate linear mappings.
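
      A minimal sketch of the resulting VMX guest-scoped flush, assuming the
      existing vpid_sync_context() and to_vmx() helpers (the function name
      here is illustrative):

        static void vmx_flush_tlb_guest(struct kvm_vcpu *vcpu)
        {
                /*
                 * Flush only the current VPID context.  vpid_sync_context()
                 * is a nop when no VPID is in use, in which case VM-Enter
                 * already flushes linear and combined mappings.
                 */
                vpid_sync_context(to_vmx(vcpu)->vpid);
        }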
      
      For SVM, flushing TLB entries from the guest's perspective can be done
      by flushing the current ASID, as changes to the guest's page tables are
      associated only with the current ASID.
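
      The SVM counterpart could simply mirror SVM's existing TLB-flush
      helper, e.g. (a sketch; the field and feature names are taken from
      svm.c of that era and should be treated as illustrative):

        static void svm_flush_tlb_guest(struct kvm_vcpu *vcpu)
        {
                struct vcpu_svm *svm = to_svm(vcpu);

                /* Flush only the current ASID, or force a new one. */
                if (static_cpu_has(X86_FEATURE_FLUSHBYASID))
                        svm->vmcb->control.tlb_ctl = TLB_CONTROL_FLUSH_ASID;
                else
                        svm->asid_generation--;
        }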
      
      Adding a dedicated ->tlb_flush_guest() paves the way toward removing
      @invalidate_gpa, which is a potentially dangerous control flag as its
      meaning is not exactly crystal clear, even for those who are familiar
      with the subtleties of what mappings Intel CPUs are/aren't allowed to
      keep across various invalidation scenarios.
      
      Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Message-Id: <20200320212833.3507-15-sean.j.christopherson@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: x86: introduce kvm_mmu_invalidate_gva · 5efac074
      Paolo Bonzini authored
      
      
      Wrap the combination of mmu->invlpg and kvm_x86_ops->tlb_flush_gva
      into a new function.  This function also lets us specify the host PGD
      to invalidate, as well as the MMU, both of which will be useful in
      fixing and simplifying kvm_inject_emulated_page_fault.
      
      A nested guest's MMU, however, has g_context->invlpg == NULL.  Instead
      of setting it to nonpaging_invlpg, make kvm_mmu_invalidate_gva the only
      entry point to mmu->invlpg and make a NULL invlpg pointer equivalent
      to nonpaging_invlpg, saving a retpoline.
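
      A simplified sketch of the new entry point described above (the real
      function also handles the guest_mmu/GPA case and syncs cached previous
      roots; argument names and other details here are illustrative):

        void kvm_mmu_invalidate_gva(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
                                    gva_t gva, hpa_t root_hpa)
        {
                /* Flush the GVA from the hardware TLB for this vCPU. */
                kvm_x86_ops.tlb_flush_gva(vcpu, gva);

                /*
                 * A NULL ->invlpg (e.g. a nested guest's g_context) is
                 * treated like nonpaging_invlpg: no software mappings to
                 * zap, and no retpoline taken for an indirect call.
                 */
                if (!mmu->invlpg)
                        return;

                mmu->invlpg(vcpu, gva, root_hpa);
        }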
      
      Signed-off-by: default avatarPaolo Bonzini <pbonzini@redhat.com>