1. 09 Mar, 2020 1 commit
  2. 05 Jul, 2019 3 commits
    • KVM: arm64: Consume pending SError as early as possible · 0e5b9c08
      James Morse authored
      
      
      On systems with v8.2 we replace the 'vaxorcism' of guest SError with
      an alternative sequence that uses the ESB-instruction, then reads
      DISR_EL1. This avoids the unmasking and remasking of asynchronous
      exceptions.
      
      We do this after we've saved the guest registers and restored the
      host's. Any SError that becomes pending due to this will be accounted
      to the guest, even though it actually occurred during host execution.
      
      Move the ESB-instruction as early as possible. Any guest SError
      will become pending due to this ESB-instruction and then be consumed
      into DISR_EL1 before the host touches anything.
      
      This lets us account for host/guest SError precisely on the guest
      exit exception boundary.
      
      Because the ESB-instruction now lands in the preamble section of
      the vectors, we need to add it to the unpatched indirect vectors
      too, and to any sequence that may be patched in over the top.
      
      The ESB-instruction always lives in the head of the vectors,
      before any memory write, whereas the register-store always
      lives in the tail.
      Signed-off-by: James Morse <james.morse@arm.com>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
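The ordering this commit establishes can be sketched as a tiny C model of the deferred-SError flow. Every name here is a hypothetical stand-in for illustration; the real ESB/DISR_EL1 behaviour is architectural, not software:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define DISR_A (1u << 31)    /* models DISR_EL1.A: an SError was deferred */

struct cpu_model {
    bool serror_pending;     /* an asynchronous SError waiting to be taken */
    uint64_t disr;           /* models DISR_EL1 */
};

/* Models the ESB instruction: with SError masked, a pending SError is
 * consumed into DISR_EL1 instead of being taken as an exception. */
static void esb(struct cpu_model *cpu)
{
    if (cpu->serror_pending) {
        cpu->serror_pending = false;
        cpu->disr = DISR_A;
    }
}

/* Early guest-exit path: ESB first, then read-and-clear DISR, so any
 * guest SError is accounted before the host touches anything. */
static uint64_t consume_guest_serror(struct cpu_model *cpu)
{
    esb(cpu);
    uint64_t disr = cpu->disr;
    cpu->disr = 0;
    return disr;
}
```

Running this at the head of the vectors, before any memory write, is what makes the host/guest accounting precise at the exception boundary.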
    • KVM: arm64: Make indirect vectors preamble behaviour symmetric · 5d994374
      James Morse authored
      
      
      The KVM indirect vectors support is a little complicated. Different CPUs
      may use different exception vectors for KVM that are generated at boot.
      Adding new instructions involves checking that all the possible
      combinations do the right thing.
      
      To make changes here easier to review, let's state what we expect of
      the preamble:
        1. The first vector run must always run the preamble.
        2. Patching the head or tail of the vector shouldn't remove
           preamble instructions.
      
      Today, this is easy as we only have one instruction in the preamble.
      Change the unpatched tail of the indirect vector so that it always
      runs the preamble, regardless of patching.
      Signed-off-by: James Morse <james.morse@arm.com>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
    • KVM: arm64: Abstract the size of the HYP vectors pre-amble · 3dbf100b
      James Morse authored
      
      
      The EL2 vector hardening feature causes KVM to generate vectors for
      each type of CPU present in the system. The generated sequences already
      do some of the early guest-exit work (i.e. saving registers). To avoid
      duplication, the generated vectors branch to the original vector just
      after the preamble, whose size is hard-coded.
      
      Adding new instructions to the HYP vector causes strange side effects,
      which are difficult to debug as the affected code is patched in at
      runtime.
      
      Add KVM_VECTOR_PREAMBLE to tell kvm_patch_vector_branch() how big
      the preamble is. The valid_vect macro can then validate this at
      build time.
      Reviewed-by: Julien Thierry <julien.thierry@arm.com>
      Signed-off-by: James Morse <james.morse@arm.com>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
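The shape of this change can be illustrated with a hedged C sketch; the instruction count and constants below are made up for illustration, not taken from the kernel sources:

```c
#include <assert.h>   /* static_assert (C11) */
#include <stdint.h>

#define AARCH64_INSN_SIZE   4
#define PREAMBLE_INSNS      2                         /* illustrative */
#define KVM_VECTOR_PREAMBLE (PREAMBLE_INSNS * AARCH64_INSN_SIZE)

/* kvm_patch_vector_branch() can then aim just past the preamble
 * instead of hard-coding the offset: */
static uint64_t branch_target(uint64_t vector_base)
{
    return vector_base + KVM_VECTOR_PREAMBLE;
}

/* The spirit of the valid_vect build-time validation: */
static_assert(KVM_VECTOR_PREAMBLE % AARCH64_INSN_SIZE == 0,
              "preamble must be a whole number of instructions");
```

Naming the size lets every user agree on it, and turns the "strange side effects" of a silently stale offset into a build failure.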
  3. 19 Jun, 2019 1 commit
  4. 19 Feb, 2019 1 commit
  5. 06 Dec, 2018 1 commit
    • arm64: entry: Place an SB sequence following an ERET instruction · 679db708
      Will Deacon authored
      
      
      Some CPUs can speculate past an ERET instruction and potentially perform
      speculative accesses to memory before processing the exception return.
      Since the register state is often controlled by a lower privilege level
      at the point of an ERET, this could potentially be used as part of a
      side-channel attack.
      
      This patch emits an SB sequence after each ERET so that speculation is
      held up on exception return.
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  6. 19 Oct, 2018 1 commit
  7. 31 May, 2018 1 commit
  8. 25 May, 2018 1 commit
  9. 11 Apr, 2018 1 commit
  10. 19 Mar, 2018 3 commits
    • arm64: KVM: Allow far branches from vector slots to the main vectors · 71dcb8be
      Marc Zyngier authored
      
      
      So far, the vector slots can be at most 4GB away from the main
      vectors (the reach of ADRP), and this distance is known at compile
      time. If we were to remap the slots to an unrelated VA, things
      would break badly.
      
      A way to achieve VA independence would be to load the absolute
      address of the vectors (__kvm_hyp_vector), either using a constant
      pool or a series of movs, followed by an indirect branch.
      
      This patch implements the latter solution, using another instance
      of a patching callback. Note that since we have to save a register
      pair on the stack, we branch to the *second* instruction in the
      vectors in order to compensate for it. This also results in having
      to adjust this balance in the invalid vector entry point.
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
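The "series of movs" option can be modelled in C: a 64-bit absolute address is materialised 16 bits at a time, the way a movz/movk sequence builds it. This models only the value being assembled; the real patching callback emits the instructions:

```c
#include <assert.h>
#include <stdint.h>

/* Assemble a 64-bit address from four 16-bit halfwords, mirroring the
 * movz/movk immediates a patching callback would encode. */
static uint64_t build_absolute_address(uint16_t hw0, uint16_t hw1,
                                       uint16_t hw2, uint16_t hw3)
{
    uint64_t addr = (uint64_t)hw0;        /* movz x0, #hw0          */
    addr |= (uint64_t)hw1 << 16;          /* movk x0, #hw1, lsl #16 */
    addr |= (uint64_t)hw2 << 32;          /* movk x0, #hw2, lsl #32 */
    addr |= (uint64_t)hw3 << 48;          /* movk x0, #hw3, lsl #48 */
    return addr;                          /* ...followed by br x0   */
}
```

Because the full 64-bit value is encoded in immediates, the branch works regardless of where the slots end up being mapped.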
    • arm64: KVM: Move stashing of x0/x1 into the vector code itself · 7e80f637
      Marc Zyngier authored
      
      
      All our useful entry points into the hypervisor start by saving
      x0 and x1 on the stack. Let's move those into the vectors
      by introducing macros that annotate whether a vector is valid or
      not, thus indicating whether we want to stash registers or not.
      
      The only drawback is that we now also stash registers for el2_error,
      but this should never happen, and we pop them back right at the
      start of the handling sequence.
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Reviewed-by: Andrew Jones <drjones@redhat.com>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
    • KVM: arm64: Avoid storing the vcpu pointer on the stack · 4464e210
      Christoffer Dall authored
      
      
      We already have the percpu area for the host cpu state, which points to
      the VCPU, so there's no need to store the VCPU pointer on the stack on
      every context switch.  We can be a little more clever and just use
      tpidr_el2 for the percpu offset and load the VCPU pointer from the host
      context.
      
      This has the benefit of being able to retrieve the host context even
      when our stack is corrupted, and it has a potential performance benefit
      because we trade a store plus a load for an mrs and a load on a round
      trip to the guest.
      
      This does require us to calculate the percpu offset without including
      the offset from the kernel mapping of the percpu array to the linear
      mapping of the array (which is what we store in tpidr_el1), because a
      PC-relative generated address in EL2 is already giving us the hyp alias
      of the linear mapping of a kernel address.  We do this in
      __cpu_init_hyp_mode() by using kvm_ksym_ref().
      
      The code that accesses ESR_EL2 was previously using an alternative to
      use the _EL1 accessor on VHE systems, but this was actually unnecessary
      as the _EL1 accessor aliases the ESR_EL2 register on VHE, and the _EL2
      accessor does the same thing on both systems.
      
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
      Reviewed-by: Andrew Jones <drjones@redhat.com>
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
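The offset arithmetic can be sketched with a toy per-cpu array in C (names are illustrative, not kernel API). Because the saved offset is relative to the array base, adding it to any alias of that base, kernel, linear or hyp mapping, yields the right element:

```c
#include <assert.h>
#include <stdint.h>

struct kvm_cpu_context { int regs; };

static struct kvm_cpu_context host_ctxt[4];    /* per-cpu host contexts */

/* What __cpu_init_hyp_mode() computes and stashes in tpidr_el2: an
 * offset relative to the array base, not to any particular mapping. */
static uintptr_t hyp_percpu_offset(int cpu)
{
    return (uintptr_t)&host_ctxt[cpu] - (uintptr_t)&host_ctxt[0];
}

/* What the hyp code does on guest exit: base alias + saved offset,
 * with no dependency on the (possibly corrupted) stack. */
static struct kvm_cpu_context *host_ctxt_for(uintptr_t tpidr_el2)
{
    return (struct kvm_cpu_context *)((uintptr_t)&host_ctxt[0] + tpidr_el2);
}
```

In the real code the base is a PC-relative address generated at EL2, which is why the kernel-to-linear mapping delta must be excluded from the stored offset.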
  11. 06 Feb, 2018 1 commit
  12. 13 Jan, 2018 2 commits
    • KVM: arm64: Change hyp_panic()s dependency on tpidr_el2 · c97e166e
      James Morse authored
      
      
      Make tpidr_el2 a cpu-offset for per-cpu variables in the same way the
      host uses tpidr_el1. This lets tpidr_el{1,2} have the same value, and
      on VHE they can be the same register.
      
      KVM calls hyp_panic() when anything unexpected happens. This may occur
      while a guest owns the EL1 registers. KVM stashes the vcpu pointer in
      tpidr_el2, which it uses to find the host context in order to restore
      the host EL1 registers before parachuting into the host's panic().
      
      The host context is a struct kvm_cpu_context allocated in the per-cpu
      area, and mapped to hyp. Given the per-cpu offset for this CPU, this is
      easy to find. Change hyp_panic() to take a pointer to the
      struct kvm_cpu_context. Wrap these calls with an asm function that
      retrieves the struct kvm_cpu_context from the host's per-cpu area.
      
      Copy the per-cpu offset from the host's tpidr_el1 into tpidr_el2 during
      kvm init. (Later patches will make this unnecessary for VHE hosts)
      
      We print out the vcpu pointer as part of the panic message. Add a back
      reference to the 'running vcpu' in the host cpu context to preserve this.
      Signed-off-by: James Morse <james.morse@arm.com>
      Reviewed-by: Christoffer Dall <cdall@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • KVM: arm64: Store vcpu on the stack during __guest_enter() · 32b03d10
      James Morse authored
      
      
      KVM uses tpidr_el2 as its private vcpu register, which makes sense for
      non-vhe world switch as only KVM can access this register. This means
      vhe Linux has to use tpidr_el1, which KVM has to save/restore as part
      of the host context.
      
      If the SDEI handler code runs behind KVM's back, it mustn't access any
      per-cpu variables. To allow this on systems with vhe we need to make
      the host use tpidr_el2, sparing KVM the save/restore.
      
      __guest_enter() already stores the host_ctxt on the stack; do the same
      with the vcpu.
      Signed-off-by: James Morse <james.morse@arm.com>
      Reviewed-by: Christoffer Dall <cdall@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  13. 09 Apr, 2017 3 commits
  14. 16 Nov, 2016 1 commit
    • arm64: Support systems without FP/ASIMD · 82e0191a
      Suzuki K Poulose authored
      
      
      The arm64 kernel assumes that FP/ASIMD units are always present
      and accesses the FP/ASIMD specific registers unconditionally. This
      could cause problems when they are absent. This patch adds support for
      handling systems without FP/ASIMD by skipping the register accesses
      within the kernel. For KVM, we trap the accesses
      to FP/ASIMD and inject an undefined instruction exception to the VM.
      
      The callers of the exported kernel_neon_begin_partial() should
      make sure that FP/ASIMD is supported.
      
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Christoffer Dall <christoffer.dall@linaro.org>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
      [catalin.marinas@arm.com: add comment on the ARM64_HAS_NO_FPSIMD conflict and the new location]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
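The KVM side of the two behaviours, switching FP/ASIMD state in as usual versus injecting an exception, can be sketched as a minimal C helper (names illustrative, not kernel API):

```c
#include <assert.h>
#include <stdbool.h>

enum fp_action { FP_SWITCH_STATE, FP_INJECT_UNDEF };

/* KVM's decision when a guest traps on an FP/ASIMD access: switch the
 * FP/ASIMD register state in as usual, or inject an undefined
 * instruction exception when the system has no FP/ASIMD at all. */
static enum fp_action handle_guest_fpsimd_trap(bool system_has_fpsimd)
{
    return system_has_fpsimd ? FP_SWITCH_STATE : FP_INJECT_UNDEF;
}
```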
  15. 08 Sep, 2016 3 commits
  16. 03 Jul, 2016 1 commit
  17. 28 Apr, 2016 2 commits
    • arm64: hyp/kvm: Make hyp-stub extensible · ad72e59f
      Geoff Levand authored
      
      
      The existing arm64 hcall implementations are limited in that they only
      allow for two distinct hcalls; with the x0 register either zero or not
      zero.  Also, the APIs of the hyp-stub exception vector routines and
      the KVM exception vector routines differ; the hyp-stub uses a non-zero
      value in
      x0 to implement __hyp_set_vectors, whereas KVM uses it to implement
      kvm_call_hyp.
      
      To allow for additional hcalls to be defined and to make the arm64 hcall
      API more consistent across exception vector routines, change the hcall
      implementations to reserve all x0 values below 0xfff for hcalls such
      as {s,g}et_vectors().
      
      Define two new preprocessor macros HVC_GET_VECTORS, and HVC_SET_VECTORS
      to be used as hcall type specifiers and convert the existing
      __hyp_get_vectors() and __hyp_set_vectors() routines to use these new
      macros when executing an HVC call.  Also, change the corresponding
      hyp-stub and KVM el1_sync exception vector routines to use these new
      macros.
      Signed-off-by: Geoff Levand <geoff@infradead.org>
      [Merged two hcall patches, moved immediate value from esr to x0, use lr
       as a scratch register, changed limit to 0xfff]
      Signed-off-by: James Morse <james.morse@arm.com>
      Acked-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
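The x0-based convention can be sketched as a small C dispatcher. The constant values and names are illustrative; only the shape of the below-0xfff check follows the commit:

```c
#include <assert.h>
#include <stdint.h>

#define HVC_GET_VECTORS 0x1
#define HVC_SET_VECTORS 0x2
#define HVC_NR_MAX      0xfff        /* x0 below this selects an hcall */

static uint64_t vectors;             /* stands in for VBAR_EL2 */

static uint64_t el1_sync_dispatch(uint64_t x0, uint64_t x1)
{
    if (x0 < HVC_NR_MAX) {
        switch (x0) {
        case HVC_GET_VECTORS:
            return vectors;
        case HVC_SET_VECTORS:
            vectors = x1;
            return 0;
        }
        return (uint64_t)-1;         /* unknown hcall number */
    }
    /* Above the reserved range: x0 is a function address to call,
     * as in kvm_call_hyp. */
    return ((uint64_t (*)(uint64_t))(uintptr_t)x0)(x1);
}
```

Reserving a whole range, rather than just zero/non-zero, is what leaves room for additional hcall types later.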
    • arm64: kvm: Move lr save/restore from do_el2_call into EL1 · 00a44cda
      James Morse authored
      
      
      Today an 'hvc' call into KVM or the hyp-stub is expected to preserve
      all registers. KVM saves/restores the registers it needs on the EL2
      stack using do_el2_call(). The hyp-stub has no stack; later patches
      need to be able to clobber the link register.

      Move the link register save/restore to the call sites.
      Signed-off-by: James Morse <james.morse@arm.com>
      Acked-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  18. 29 Feb, 2016 2 commits
  19. 14 Dec, 2015 4 commits