- Dec 19, 2018
-
-
Christoffer Dall authored
The use of a work queue in the hrtimer expire function for the bg_timer is a leftover from the time when we would inject interrupts when the bg_timer expired. Since we are no longer doing that, we can instead call kvm_vcpu_wake_up() directly from the hrtimer function and remove all workqueue functionality from the arch timer code. Signed-off-by:
Marc Zyngier <marc.zyngier@arm.com> Signed-off-by:
Christoffer Dall <christoffer.dall@arm.com> Signed-off-by:
Marc Zyngier <marc.zyngier@arm.com>
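As a rough sketch of the simplified expiry path described above (field and type names follow the arch timer code, but the body is illustrative rather than the verbatim patch):

    static enum hrtimer_restart kvm_bg_timer_expire(struct hrtimer *hrt)
    {
            struct arch_timer_cpu *timer;
            struct kvm_vcpu *vcpu;

            timer = container_of(hrt, struct arch_timer_cpu, bg_timer);
            vcpu = container_of(timer, struct kvm_vcpu, arch.timer_cpu);

            /* No schedule_work() indirection: the expired background
             * timer only has to get the blocked vcpu running again. */
            kvm_vcpu_wake_up(vcpu);

            return HRTIMER_NORESTART;
    }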
-
Christoffer Dall authored
The kvm_exit tracepoint strangely always reported exits as being IRQs. This seems to be because either the __print_symbolic or the tracepoint macros use a variable named idx. Take this chance to update the fields in the tracepoint to reflect the concepts in the arm64 architecture that we pass to the tracepoint and move the exception type table to the same location and header files as the exits code. We also clear out the exception code to 0 for IRQ exits (which translates to UNKNOWN in text) to make it slightly less confusing to parse the trace output. Signed-off-by:
Christoffer Dall <christoffer.dall@arm.com> Signed-off-by:
Marc Zyngier <marc.zyngier@arm.com>
-
Christoffer Dall authored
When checking if there are any pending IRQs for the VM, consider the active state and priority of the IRQs as well. Otherwise we could be continuously scheduling a guest hypervisor without it seeing an IRQ. Signed-off-by:
Christoffer Dall <christoffer.dall@arm.com> Signed-off-by:
Marc Zyngier <marc.zyngier@arm.com>
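A hedged sketch of the tightened check (the wrapper name is illustrative): an interrupt only counts as deliverable when it is pending, enabled, not already active, and its priority beats the priority mask.

    static bool irq_is_deliverable(struct vgic_irq *irq, struct vgic_vmcr *vmcr)
    {
            return irq_is_pending(irq) && irq->enabled &&
                   !irq->active && irq->priority < vmcr->pmr;
    }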
-
Gustavo A. R. Silva authored
When using the nospec API, it should be taken into account that "if the CPU speculates past the bounds check then array_index_nospec() will clamp the index within the range of [0, size)" (from the header comment for the array_index_nospec() macro in linux/nospec.h). Now, in this particular case, if intid evaluates to exactly VGIC_MAX_SPI or to exactly VGIC_MAX_PRIVATE, the array_index_nospec() macro ends up returning VGIC_MAX_SPI - 1 or VGIC_MAX_PRIVATE - 1 respectively, instead of VGIC_MAX_SPI or VGIC_MAX_PRIVATE, which, based on the original logic:

    /* SGIs and PPIs */
    if (intid <= VGIC_MAX_PRIVATE)
            return &vcpu->arch.vgic_cpu.private_irqs[intid];

    /* SPIs */
    if (intid <= VGIC_MAX_SPI)
            return &kvm->arch.vgic.spis[intid - VGIC_NR_PRIVATE_IRQS];

are valid values for intid. Fix this by calling the array_index_nospec() macro with VGIC_MAX_PRIVATE + 1 and VGIC_MAX_SPI + 1 as arguments for its size parameter. Fixes: 41b87599 ("KVM: arm/arm64: vgic: fix possible spectre-v1 in vgic_get_irq()") Cc: stable@vger.kernel.org Signed-off-by:
Gustavo A. R. Silva <gustavo@embeddedor.com> [dropped the SPI part which was fixed separately] Signed-off-by:
Marc Zyngier <marc.zyngier@arm.com>
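A minimal sketch of the corrected pattern (not the verbatim diff): because array_index_nospec() clamps the index to [0, size), the size argument has to be one past the largest valid intid.

    /* SGIs and PPIs: the boundary value itself must stay addressable */
    if (intid <= VGIC_MAX_PRIVATE) {
            intid = array_index_nospec(intid, VGIC_MAX_PRIVATE + 1);
            return &vcpu->arch.vgic_cpu.private_irqs[intid];
    }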
-
- Dec 18, 2018
-
-
Marc Zyngier authored
SPIs should be checked against the VM's specific configuration, and not the architectural maximum. Cc: stable@vger.kernel.org Signed-off-by:
Marc Zyngier <marc.zyngier@arm.com>
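An illustrative sketch of the intent (not necessarily the exact upstream lines): the SPI lookup is bounded by the number of SPIs this VM was configured with rather than by the architectural VGIC_MAX_SPI.

    /* SPIs: bound by this VM's configuration, not the architectural max */
    if (intid < (kvm->arch.vgic.nr_spis + VGIC_NR_PRIVATE_IRQS))
            return &kvm->arch.vgic.spis[intid - VGIC_NR_PRIVATE_IRQS];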
-
Christoffer Dall authored
In attempting to re-construct the logic for our stage 2 page table layout I found the reasoning in the comment explaining how we calculate the number of levels used for stage 2 page tables a bit backwards. This commit attempts to clarify the comment, to make it slightly easier to read without having the Arm ARM open on the right page. While we're at it, fixup a typo in a comment that was recently changed. Reviewed-by:
Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by:
Christoffer Dall <christoffer.dall@arm.com> Signed-off-by:
Marc Zyngier <marc.zyngier@arm.com>
-
Julien Thierry authored
To change the active state of an IRQ via MMIO, a halt is requested for all vcpus of the affected guest before modifying the IRQ state. This is done by calling cond_resched_lock() in vgic_mmio_change_active(). However, interrupts are disabled at this point and we cannot reschedule a vcpu. We actually don't need any of this, as kvm_arm_halt_guest() ensures that all the other vcpus are out of the guest. Let's just drop that useless code. Signed-off-by:
Julien Thierry <julien.thierry@arm.com> Suggested-by:
Christoffer Dall <christoffer.dall@arm.com> Cc: stable@vger.kernel.org Signed-off-by:
Marc Zyngier <marc.zyngier@arm.com>
-
Punit Agrawal authored
KVM only supports PMD hugepages at stage 2. Now that the various page handling routines are updated, extend the stage 2 fault handling to map in PUD hugepages. Addition of PUD hugepage support enables additional page sizes (e.g., 1G with 4K granule) which can be useful on cores that support mapping larger block sizes in the TLB entries. Signed-off-by:
Punit Agrawal <punit.agrawal@arm.com> Reviewed-by:
Christoffer Dall <christoffer.dall@arm.com> Cc: Russell King <linux@armlinux.org.uk> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> [ Replace BUG() => WARN_ON(1) for arm32 PUD helpers ] Signed-off-by:
Suzuki Poulose <suzuki.poulose@arm.com> Signed-off-by:
Marc Zyngier <marc.zyngier@arm.com>
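A hedged sketch of the shape of the stage 2 fault-path dispatch once PUD hugepages are supported; the helper names mirror the existing PMD ones and are illustrative, not the exact upstream code.

    if (vma_pagesize == PUD_SIZE) {
            pud_t new_pud = kvm_pfn_pud(pfn, mem_type);

            new_pud = kvm_pud_mkhuge(new_pud);
            if (writable)
                    new_pud = kvm_s2pud_mkwrite(new_pud);
            ret = stage2_set_pud_huge(kvm, memcache, fault_ipa, &new_pud);
    } else if (vma_pagesize == PMD_SIZE) {
            /* existing PMD hugepage path */
    } else {
            /* existing PTE path */
    }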
-
Punit Agrawal authored
In preparation for creating larger hugepages at Stage 2, add support to the age handling notifiers for PUD hugepages when encountered. Provide trivial helpers for arm32 to allow sharing code. Signed-off-by:
Punit Agrawal <punit.agrawal@arm.com> Reviewed-by:
Christoffer Dall <christoffer.dall@arm.com> Cc: Russell King <linux@armlinux.org.uk> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> [ Replaced BUG() => WARN_ON(1) for arm32 PUD helpers ] Signed-off-by:
Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by:
Marc Zyngier <marc.zyngier@arm.com>
-
Punit Agrawal authored
In preparation for creating larger hugepages at Stage 2, extend the access fault handling at Stage 2 to support PUD hugepages when encountered. Provide trivial helpers for arm32 to allow sharing of code. Signed-off-by:
Punit Agrawal <punit.agrawal@arm.com> Reviewed-by:
Christoffer Dall <christoffer.dall@arm.com> Cc: Russell King <linux@armlinux.org.uk> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> [ Replaced BUG() => WARN_ON(1) in PUD helpers ] Signed-off-by:
Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by:
Marc Zyngier <marc.zyngier@arm.com>
-
Punit Agrawal authored
In preparation for creating PUD hugepages at stage 2, add support for detecting execute permissions on PUD page table entries. Faults due to lack of execute permissions on page table entries are used to perform i-cache invalidation on first execute. Provide trivial implementations of arm32 helpers to allow sharing of code. Signed-off-by:
Punit Agrawal <punit.agrawal@arm.com> Reviewed-by:
Christoffer Dall <christoffer.dall@arm.com> Cc: Russell King <linux@armlinux.org.uk> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> [ Replaced BUG() => WARN_ON(1) in arm32 PUD helpers ] Signed-off-by:
Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by:
Marc Zyngier <marc.zyngier@arm.com>
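A sketch of the kind of arm32 stub referred to above (illustrative): 32-bit ARM has no stage 2 PUD hugepages, so the helper only needs to keep the shared code building.

    static inline bool kvm_s2pud_exec(pud_t *pudp)
    {
            WARN_ON(1);     /* PUD hugepages cannot exist at stage 2 on arm32 */
            return false;
    }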
-
Punit Agrawal authored
In preparation for creating PUD hugepages at stage 2, add support for write protecting PUD hugepages when they are encountered. Write protecting guest tables is used to track dirty pages when migrating VMs. Also, provide trivial implementations of required kvm_s2pud_* helpers to allow sharing of code with arm32. Signed-off-by:
Punit Agrawal <punit.agrawal@arm.com> Reviewed-by:
Christoffer Dall <christoffer.dall@arm.com> Cc: Russell King <linux@armlinux.org.uk> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> [ Replaced BUG() => WARN_ON() in arm32 pud helpers ] Signed-off-by:
Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by:
Marc Zyngier <marc.zyngier@arm.com>
-
Punit Agrawal authored
Introduce helpers to abstract architectural handling of the conversion of pfn to page table entries and marking a PMD page table entry as a block entry. The helpers are introduced in preparation for supporting PUD hugepages at stage 2 - which are supported on arm64 but do not exist on arm. Signed-off-by:
Punit Agrawal <punit.agrawal@arm.com> Reviewed-by:
Suzuki K Poulose <suzuki.poulose@arm.com> Acked-by:
Christoffer Dall <christoffer.dall@arm.com> Cc: Russell King <linux@armlinux.org.uk> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Reviewed-by:
Marc Zyngier <marc.zyngier@arm.com> Signed-off-by:
Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by:
Marc Zyngier <marc.zyngier@arm.com>
-
Punit Agrawal authored
The stage 2 fault handler marks a page as executable if it is handling an execution fault, or if it was a permission fault, in which case the executable bit needs to be preserved. The logic to decide if the page should be marked executable is duplicated for PMD and PTE entries. To avoid creating another copy when support for PUD hugepages is introduced, refactor the code to share the checks needed to mark a page table entry as executable. Signed-off-by:
Punit Agrawal <punit.agrawal@arm.com> Reviewed-by:
Suzuki K Poulose <suzuki.poulose@arm.com> Reviewed-by:
Christoffer Dall <christoffer.dall@arm.com> Signed-off-by:
Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by:
Marc Zyngier <marc.zyngier@arm.com>
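A hedged sketch of the shared predicate this refactoring extracts (function name illustrative): the same rule now applies whether the mapping ends up as a PTE, a PMD or, later, a PUD.

    static bool stage2_should_exec(struct kvm *kvm, phys_addr_t addr,
                                   bool exec_fault, unsigned long fault_status)
    {
            /* Mark executable on an execution fault, or preserve the
             * executable bit across a permission fault if the existing
             * entry already had it set. */
            return exec_fault ||
                   (fault_status == FSC_PERM && stage2_is_exec(kvm, addr));
    }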
-
Punit Agrawal authored
The code for operations such as marking the pfn as dirty, and dcache/icache maintenance during stage 2 fault handling is duplicated between normal pages and PMD hugepages. Instead of creating another copy of the operations when we introduce PUD hugepages, let's share them across the different pagesizes. Signed-off-by:
Punit Agrawal <punit.agrawal@arm.com> Reviewed-by:
Suzuki K Poulose <suzuki.poulose@arm.com> Reviewed-by:
Christoffer Dall <christoffer.dall@arm.com> Signed-off-by:
Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by:
Marc Zyngier <marc.zyngier@arm.com>
-
Christoffer Dall authored
When restoring the active state from userspace, we don't know which CPU was the source for the active state, and this is not architecturally exposed in any of the register state. Set the active_source to 0 in this case. In the future, we can expand on this and expose the information to userspace as additional information for GICv2 if anyone cares. Cc: stable@vger.kernel.org Signed-off-by:
Christoffer Dall <christoffer.dall@arm.com> Signed-off-by:
Marc Zyngier <marc.zyngier@arm.com>
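A minimal sketch of the idea (illustrative): when the write comes from userspace there is no requesting vcpu, so the source falls back to 0.

    /* userspace restore: no requesting vcpu, so default the source to 0 */
    irq->active_source = requester_vcpu ? requester_vcpu->vcpu_id : 0;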
-
Christoffer Dall authored
We recently addressed a VMID generation race by introducing a read/write lock around accesses and updates to the vmid generation values. However, kvm_arch_vcpu_ioctl_run() also calls need_new_vmid_gen() but does so without taking the read lock. As far as I can tell, this can lead to the same kind of race:

    VM 0, VCPU 0                            VM 0, VCPU 1
    ------------                            ------------
    update_vttbr (vmid 254)
                                            update_vttbr (vmid 1) // roll over
                                            read_lock(kvm_vmid_lock);
                                            force_vm_exit()
    local_irq_disable
    need_new_vmid_gen == false // because vmid gen matches
    enter_guest (vmid 254)
                                            kvm_arch.vttbr = <PGD>:<VMID 1>
                                            read_unlock(kvm_vmid_lock);
                                            enter_guest (vmid 1)

Which results in running two VCPUs in the same VM with different VMIDs and (even worse) other VCPUs from other VMs could now allocate clashing VMID 254 from the new generation as long as VCPU 0 is not exiting. Attempt to solve this by making sure vttbr is updated before another CPU can observe the updated VMID generation. Cc: stable@vger.kernel.org Fixes: f0cf47d9 ("KVM: arm/arm64: Close VMID generation race") Reviewed-by:
Julien Thierry <julien.thierry@arm.com> Signed-off-by:
Christoffer Dall <christoffer.dall@arm.com> Signed-off-by:
Marc Zyngier <marc.zyngier@arm.com>
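A hedged sketch of the ordering the fix establishes in update_vttbr() (illustrative, not the verbatim patch): the new VTTBR must be visible before any other CPU can observe the bumped generation.

    /* Publish the new VTTBR before exposing the new generation, so a
     * racing vcpu that sees the generation match also sees the VTTBR. */
    kvm->arch.vttbr = kvm_phys_to_vttbr(pgd_phys) | vmid;
    smp_wmb();
    WRITE_ONCE(kvm->arch.vmid_gen, atomic64_read(&kvm_vmid_gen));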
-
Mark Rutland authored
When we emulate a guest instruction, we don't advance the hardware singlestep state machine, and thus the guest will receive a software step exception after a next instruction which is not emulated by the host. We bodge around this in an ad-hoc fashion. Sometimes we explicitly check whether userspace requested a single step, and fake a debug exception from within the kernel. Other times, we advance the HW singlestep state and rely on the HW to generate the exception for us. Thus, the observed step behaviour differs for host and guest. Let's make this simpler and consistent by always advancing the HW singlestep state machine when we skip an instruction. Thus we can rely on the hardware to generate the singlestep exception for us, and never need to explicitly check for an active-pending step, nor do we need to fake a debug exception from the guest. Cc: Peter Maydell <peter.maydell@linaro.org> Reviewed-by:
Alex Bennée <alex.bennee@linaro.org> Reviewed-by:
Christoffer Dall <christoffer.dall@arm.com> Signed-off-by:
Mark Rutland <mark.rutland@arm.com> Signed-off-by:
Marc Zyngier <marc.zyngier@arm.com>
-
Mark Rutland authored
When we emulate an MMIO instruction, we advance the CPU state within decode_hsr(), before emulating the instruction effects. Having this logic in decode_hsr() is opaque, and advancing the state before emulation is problematic. It gets in the way of applying consistent single-step logic, and it prevents us from being able to fail an MMIO instruction with a synchronous exception. Clean this up by only advancing the CPU state *after* the effects of the instruction are emulated. Cc: Peter Maydell <peter.maydell@linaro.org> Reviewed-by:
Alex Bennée <alex.bennee@linaro.org> Reviewed-by:
Christoffer Dall <christoffer.dall@arm.com> Signed-off-by:
Mark Rutland <mark.rutland@arm.com> Signed-off-by:
Marc Zyngier <marc.zyngier@arm.com>
-
- Oct 26, 2018
-
-
Michal Hocko authored
Revert 5ff7091f ("mm, mmu_notifier: annotate mmu notifiers with blockable invalidate callbacks"). The MMU_INVALIDATE_DOES_NOT_BLOCK flag was the only one used and it is no longer needed since 93065ac7 ("mm, oom: distinguish blockable mode for mmu notifiers"). We now have full support for per-range !blocking behavior, so we can drop the stop-gap workaround which the per-notifier flag was used for. Link: http://lkml.kernel.org/r/20180827112623.8992-4-mhocko@kernel.org Signed-off-by:
Michal Hocko <mhocko@suse.com> Cc: David Rientjes <rientjes@google.com> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com> Cc: Jerome Glisse <jglisse@redhat.com> Cc: Juergen Gross <jgross@suse.com> Cc: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
- Oct 18, 2018
-
-
Dongjiu Geng authored
The commit 539aee0e ("KVM: arm64: Share the parts of get/set events useful to 32bit") shares the get/set events helper for arm64 and arm32, but forgot to share the cap extension code. User space will check whether KVM supports vcpu events by checking the KVM_CAP_VCPU_EVENTS extension. Acked-by:
James Morse <james.morse@arm.com> Reviewed-by:
Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by:
Dongjiu Geng <gengdongjiu@huawei.com> Signed-off-by:
Marc Zyngier <marc.zyngier@arm.com>
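An illustrative sketch of the shared capability advertisement, reduced to the relevant case (the surrounding cases are omitted):

    int kvm_arch_vm_ioctl_check_extension(struct kvm *kvm, long ext)
    {
            int r;

            switch (ext) {
            case KVM_CAP_VCPU_EVENTS:
                    r = 1;
                    break;
            default:
                    r = 0;
                    break;
            }
            return r;
    }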
-
Dongjiu Geng authored
Rename kvm_arch_dev_ioctl_check_extension() to kvm_arch_vm_ioctl_check_extension(), because it does not have any relationship with a device. Renaming this function makes the code more readable. Cc: James Morse <james.morse@arm.com> Reviewed-by:
Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by:
Dongjiu Geng <gengdongjiu@huawei.com> Signed-off-by:
Marc Zyngier <marc.zyngier@arm.com>
-
- Oct 17, 2018
-
-
Mark Rutland authored
At boot time, KVM stashes the host MDCR_EL2 value, but only does this when the kernel is not running in hyp mode (i.e. is non-VHE). In these cases, the stashed value of MDCR_EL2.HPMN happens to be zero, which can lead to CONSTRAINED UNPREDICTABLE behaviour. Since we use this value to derive the MDCR_EL2 value when switching to/from a guest, after a guest has been run, the performance counters do not behave as expected. This has been observed to result in accesses via PMXEVTYPER_EL0 and PMXEVCNTR_EL0 not affecting the relevant counters, resulting in events not being counted. In these cases, only the fixed-purpose cycle counter appears to work as expected. Fix this by always stashing the host MDCR_EL2 value, regardless of VHE. Cc: Christoffer Dall <christoffer.dall@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: stable@vger.kernel.org Fixes: 1e947bad ("arm64: KVM: Skip HYP setup when already running in HYP") Tested-by:
Robin Murphy <robin.murphy@arm.com> Signed-off-by:
Mark Rutland <mark.rutland@arm.com> Signed-off-by:
Marc Zyngier <marc.zyngier@arm.com>
-
- Oct 16, 2018
-
-
Wei Yang authored
The original comment is little hard to understand. No functional change, just amend the comment a little. Signed-off-by:
Wei Yang <richard.weiyang@gmail.com> Signed-off-by:
Paolo Bonzini <pbonzini@redhat.com>
-
Peng Hao authored
Coalesced pio is based on coalesced mmio and can be used for ports like the rtc port, the pci-host config port, and so on. Especially in the case of the rtc, some versions of Windows guests access the rtc frequently because the rtc is used as the system tick. The guest accesses the rtc like this: write the register index to port 0x70, then write or read data through port 0x71. Writing to port 0x70 only selects the index and does nothing else, so we can use coalesced pio to handle this pattern and reduce VM-exit time. When starting and shutting down a virtual machine, the guest also accesses the pci-host config port frequently, so marking these ports as coalesced pio can reduce startup and shutdown time.

Without this patch, the vm-exit time of accessing rtc 0x70 and piix 0xcf8, measured with perf tools (guest OS: Windows 7 64bit):

    IO Port Access  Samples  Samples%  Time%   Min Time  Max Time  Avg time
    0x70:POUT            86   30.99%   74.59%   9us       29us     10.75us (+- 3.41%)
    0xcf8:POUT         1119    2.60%    2.12%   2.79us    56.83us   3.41us (+- 2.23%)

With this patch:

    IO Port Access  Samples  Samples%  Time%   Min Time  Max Time  Avg time
    0x70:POUT           106   32.02%   29.47%   0us       10us      1.57us (+- 7.38%)
    0xcf8:POUT         1065    1.67%    0.28%   0.41us    65.44us   0.66us (+- 10.55%)

Signed-off-by:
Peng Hao <peng.hao2@zte.com.cn> Signed-off-by:
Paolo Bonzini <pbonzini@redhat.com>
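A hedged userspace sketch of how a VMM might register the RTC index port as a coalesced PIO zone, assuming the kernel advertises KVM_CAP_COALESCED_PIO (vm_fd is the fd returned by KVM_CREATE_VM; error handling omitted):

    #include <linux/kvm.h>
    #include <sys/ioctl.h>

    struct kvm_coalesced_mmio_zone zone = {
            .addr = 0x70,   /* RTC index port */
            .size = 1,
            .pio  = 1,      /* port I/O rather than MMIO */
    };

    if (ioctl(vm_fd, KVM_CHECK_EXTENSION, KVM_CAP_COALESCED_PIO) > 0)
            ioctl(vm_fd, KVM_REGISTER_COALESCED_MMIO, &zone);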
-
Wei Yang authored
update_memslots() is only called by __kvm_set_memory_region(), in which "change" is calculated and indicates how to adjust slots->used_slots:

    * increase by one if it is KVM_MR_CREATE
    * decrease by one if it is KVM_MR_DELETE
    * no change for others

This patch adjusts slots->used_slots in update_memslots() based on the "change" value instead of re-calculating it. Signed-off-by:
Wei Yang <richard.weiyang@gmail.com> Signed-off-by:
Paolo Bonzini <pbonzini@redhat.com>
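A minimal sketch of the adjustment described above (illustrative):

    switch (change) {
    case KVM_MR_CREATE:
            slots->used_slots++;
            break;
    case KVM_MR_DELETE:
            slots->used_slots--;
            break;
    default:
            break;  /* KVM_MR_MOVE / KVM_MR_FLAGS_ONLY: no change */
    }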
-
Vitaly Kuznetsov authored
We can use 'NULL' to represent 'all cpus' case in kvm_make_vcpus_request_mask() and avoid building vCPU mask with all vCPUs. Suggested-by:
Radim Krčmář <rkrcmar@redhat.com> Signed-off-by:
Vitaly Kuznetsov <vkuznets@redhat.com> Reviewed-by:
Roman Kagan <rkagan@virtuozzo.com> Signed-off-by:
Paolo Bonzini <pbonzini@redhat.com>
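A hedged sketch of the convention inside kvm_make_vcpus_request_mask() (illustrative): a NULL bitmap now stands for "every vcpu", so callers no longer have to build an all-ones mask.

    kvm_for_each_vcpu(i, vcpu, kvm) {
            /* NULL vcpu_bitmap means the request targets all vcpus */
            if (vcpu_bitmap && !test_bit(i, vcpu_bitmap))
                    continue;
            kvm_make_request(req, vcpu);
    }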
-
- Oct 03, 2018
-
-
Punit Agrawal authored
PageTransCompoundMap() returns true for both hugetlbfs and THP hugepages. This behaviour incorrectly causes stage 2 faults for unsupported hugepage sizes (e.g., a 64K hugepage with 4K pages) to be treated as THP faults. Tighten the check to filter out hugetlbfs pages. This also leads to consistently mapping all unsupported hugepage sizes as PTE level entries at stage 2. Signed-off-by:
Punit Agrawal <punit.agrawal@arm.com> Reviewed-by:
Suzuki Poulose <suzuki.poulose@arm.com> Cc: Christoffer Dall <christoffer.dall@arm.com> Cc: Marc Zyngier <marc.zyngier@arm.com> Cc: stable@vger.kernel.org # v4.13+ Signed-off-by:
Marc Zyngier <marc.zyngier@arm.com>
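A hedged sketch of the tightened test (the predicate is illustrative): hugetlbfs pages are also compound, so they must be ruled out before the fault is treated as THP.

    static bool fault_page_is_thp(kvm_pfn_t pfn)
    {
            struct page *page = pfn_to_page(pfn);

            /* hugetlbfs pages are compound too; only anonymous THP qualifies */
            return !PageHuge(page) && PageTransCompoundMap(page);
    }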
-
Marc Zyngier authored
__cpu_init_stage2() doesn't do anything anymore on arm64, and is totally nonsensical when running VHE (as VHE is 64bit only). Reviewed-by:
Eric Auger <eric.auger@redhat.com> Reviewed-by:
Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by:
Marc Zyngier <marc.zyngier@arm.com>
-
Marc Zyngier authored
VM tends to be a very overloaded term in KVM, so let's keep it to describe the virtual machine. For the virtual memory setup, let's use the "stage2" suffix. Reviewed-by:
Eric Auger <eric.auger@redhat.com> Reviewed-by:
Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by:
Marc Zyngier <marc.zyngier@arm.com>
-
Suzuki K Poulose authored
So far we have restricted the IPA size of the VM to the default value (40bits). Now that we can manage the IPA size per VM and support dynamic stage2 page tables, we can allow VMs to have a larger IPA. This patch introduces the maximum IPA size supported on the host, which is decided by the following factors:

 1) Maximum PARange supported by the CPUs - this can be inferred from the system wide safe value.
 2) Maximum PA size supported by the host kernel (48 vs 52).
 3) Number of levels in the host page table (as we base our stage2 tables on the host table helpers).

Since the stage2 page table code is dependent on the stage1 page table, we always ensure that:

    Number of Levels at Stage1 >= Number of Levels at Stage2

So we limit the IPA to make sure that the above condition is satisfied. This will affect the following combinations of VA_BITS and IPA for different page sizes:

    Host configuration | Unsupported IPA ranges
    39bit VA, 4K       | [44, 48]
    36bit VA, 16K      | [41, 48]
    42bit VA, 64K      | [47, 52]

Supporting the above combinations would need independent stage2 page table manipulation code, which would require substantial changes. We could pursue that solution independently and switch the page table code once we have it ready. Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Marc Zyngier <marc.zyngier@arm.com> Cc: Christoffer Dall <cdall@kernel.org> Reviewed-by:
Eric Auger <eric.auger@redhat.com> Signed-off-by:
Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by:
Marc Zyngier <marc.zyngier@arm.com>
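A hedged sketch of how the ceiling could be formed from the three factors above; the helper names are illustrative placeholders, not the kernel's:

    /* IPA limit = smallest of: CPU PARange, kernel PA size, and what
     * the stage 1 (host VA_BITS) level count allows at stage 2. */
    kvm_ipa_limit = min3(cpu_parange_bits(),
                         host_pa_bits(),
                         stage1_levels_ipa_cap());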
-
- Oct 01, 2018
-
-
Kristina Martsenko authored
Add support for handling 52bit guest physical addresses to the VGIC layer. So far we have limited the guest physical address to 48bits, by explicitly masking the upper bits. This patch removes the restriction. We do not have to check if the host supports 52bit, as the gpa is always validated during an access (e.g., kvm_{read/write}_guest, kvm_is_visible_gfn()). The ITS table save/restore is also not affected by the enhancement. The DTE entries already store bits[51:8] of the ITT_addr (with a 256byte alignment). Cc: Marc Zyngier <marc.zyngier@arm.com> Cc: Christoffer Dall <cdall@kernel.org> Reviewed-by:
Eric Auger <eric.auger@redhat.com> Signed-off-by:
Kristina Martsenko <kristina.martsenko@arm.com> [ Macro clean ups, fix PROPBASER and PENDBASER accesses ] Signed-off-by:
Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by:
Marc Zyngier <marc.zyngier@arm.com>
-
Suzuki K Poulose authored
Right now the stage2 page table for a VM is hard coded, assuming an IPA of 40bits. As we are about to add support for per VM IPA, prepare the stage2 page table helpers to accept the kvm instance to make the right decision for the VM. No functional changes. Adds stage2_pgd_size(kvm) to replace S2_PGD_SIZE. Also, moves some of the definitions in arm32 to align with the arm64. Also drop the _AC() specifier constants wherever possible. Cc: Christoffer Dall <cdall@kernel.org> Acked-by:
Marc Zyngier <marc.zyngier@arm.com> Reviewed-by:
Eric Auger <eric.auger@redhat.com> Signed-off-by:
Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by:
Marc Zyngier <marc.zyngier@arm.com>
-
Suzuki K Poulose authored
Allow the arch backends to perform VM specific initialisation. This will be later used to handle IPA size configuration and per-VM VTCR configuration on arm64. Cc: Marc Zyngier <marc.zyngier@arm.com> Cc: Christoffer Dall <cdall@kernel.org> Reviewed-by:
Eric Auger <eric.auger@redhat.com> Signed-off-by:
Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by:
Marc Zyngier <marc.zyngier@arm.com>
-
Suzuki K Poulose authored
On a 4-level page table, a pgd entry can be empty, unlike a 3-level page table. Remove the spurious WARN_ON() in stage2_get_pud(). Acked-by:
Christoffer Dall <cdall@kernel.org> Acked-by:
Marc Zyngier <marc.zyngier@arm.com> Reviewed-by:
Eric Auger <eric.auger@redhat.com> Signed-off-by:
Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by:
Marc Zyngier <marc.zyngier@arm.com>
-
Suzuki K Poulose authored
So far we have only supported 3 level page table with fixed IPA of 40bits, where PUD is folded. With 4 level page tables, we need to check if the PUD entry is valid or not. Fix stage2_flush_memslot() to do this check, before walking down the table. Acked-by:
Christoffer Dall <cdall@kernel.org> Acked-by:
Marc Zyngier <marc.zyngier@arm.com> Reviewed-by:
Eric Auger <eric.auger@redhat.com> Signed-off-by:
Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by:
Marc Zyngier <marc.zyngier@arm.com>
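A hedged sketch of the added guard in the walk (close to, but not necessarily verbatim, the upstream code):

    do {
            next = stage2_pgd_addr_end(kvm, addr, end);
            /* With 4 levels the pgd entry may legitimately be empty */
            if (!stage2_pgd_none(kvm, *pgd))
                    stage2_flush_puds(kvm, pgd, addr, next);
    } while (pgd++, addr = next, addr != end);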
-
- Sep 27, 2018
-
-
Eric W. Biederman authored
This simplifies the code making it clearer what is going on, and making the siginfo generation easier to maintain. Signed-off-by:
"Eric W. Biederman" <ebiederm@xmission.com>
-
- Sep 18, 2018
-
-
Vladimir Murzin authored
We rely on the cpufeature framework to detect and enable CNP, so for KVM we need to patch hyp to set the CNP bit just before TTBR0_EL2 gets written. For the guest we encode the CNP bit while building the vttbr, so we don't need to bother with that in the world switch. Reviewed-by:
James Morse <james.morse@arm.com> Acked-by:
Catalin Marinas <catalin.marinas@arm.com> Acked-by:
Marc Zyngier <marc.zyngier@arm.com> Signed-off-by:
Vladimir Murzin <vladimir.murzin@arm.com> Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com>
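A hedged sketch of the guest-side encoding mentioned above (illustrative):

    u64 cnp = system_supports_cnp() ? VTTBR_CNP_BIT : 0;

    kvm->arch.vttbr = kvm_phys_to_vttbr(pgd_phys) | vmid | cnp;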
-
- Sep 07, 2018
-
-
Marc Zyngier authored
kvm_unmap_hva is long gone, and we only have kvm_unmap_hva_range to deal with. Drop the now obsolete code. Fixes: fb1522e0 ("KVM: update to new mmu_notifier semantic v2") Cc: James Hogan <jhogan@kernel.org> Reviewed-by:
Paolo Bonzini <pbonzini@redhat.com> Signed-off-by:
Marc Zyngier <marc.zyngier@arm.com> Signed-off-by:
Christoffer Dall <christoffer.dall@arm.com>
-
Marc Zyngier authored
When triggering a CoW, we unmap the RO page via an MMU notifier (invalidate_range_start), and then populate the new PTE using another one (change_pte). In the meantime, we'll have copied the old page into the new one. The problem is that the data for the new page is sitting in the cache, and should the guest have an uncached mapping to that page (or its MMU off), following accesses will bypass the cache. In a way, this is similar to what happens on a translation fault: We need to clean the page to the PoC before mapping it. So let's just do that. This fixes a KVM unit test regression observed on a HiSilicon platform, and subsequently reproduced on Seattle. Fixes: a9c0e12e ("KVM: arm/arm64: Only clean the dcache on translation fault") Cc: stable@vger.kernel.org # v4.16+ Reported-by:
Mike Galbraith <efault@gmx.de> Signed-off-by:
Marc Zyngier <marc.zyngier@arm.com> Signed-off-by:
Christoffer Dall <christoffer.dall@arm.com>
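A hedged sketch of the essence of the fix (placement is illustrative): the change_pte path now cleans the new page to the PoC before it is mapped, just like the translation fault path.

    /* The copied data may still be sitting dirty in the cache; clean it
     * to the PoC before the guest can see the page uncached. */
    clean_dcache_guest_page(pfn, PAGE_SIZE);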
-