1. 15 Jun, 2020 1 commit
  2. 09 Jun, 2020 2 commits
    • mm: reorder includes after introduction of linux/pgtable.h · 65fddcfc
      Mike Rapoport authored
      
      
      The replacement of <asm/pgtable.h> with <linux/pgtable.h> left the include
      of the latter in the middle of the asm includes.  Fix this up with the aid
      of the script below and manual adjustments here and there.
      
      	import sys
      	import re
      
      	if len(sys.argv) is not 3:
      	    print "USAGE: %s <file> <header>" % (sys.argv[0])
      	    sys.exit(1)
      
      	hdr_to_move="#include <linux/%s>" % sys.argv[2]
      	moved = False
      	in_hdrs = False
      
      	with open(sys.argv[1], "r") as f:
      	    lines = f.readlines()
      	    for _line in lines:
      		line = _line.rstrip('\n')
      		if line == hdr_to_move:
      		    continue
      		if line.startswith("#include <linux/"):
      		    in_hdrs = True
      		elif not moved and in_hdrs:
      		    moved = True
      		    print hdr_to_move
      		print line
      Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Cain <bcain@codeaurora.org>
      Cc: Catalin Marinas <catalin....
    • mm: introduce include/linux/pgtable.h · ca5999fd
      Mike Rapoport authored
      
      
      The include/linux/pgtable.h is going to be the home of generic page table
      manipulation functions.
      
      Start with moving asm-generic/pgtable.h to include/linux/pgtable.h and
      make the latter include asm/pgtable.h.
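      In outline, the new header starts life as a thin wrapper; a minimal
      sketch of the idea (the real file carries over the full contents of
      asm-generic/pgtable.h rather than a placeholder comment):

      	/* include/linux/pgtable.h -- sketch of the initial layout */
      	#ifndef _LINUX_PGTABLE_H
      	#define _LINUX_PGTABLE_H

      	#include <asm/pgtable.h>	/* per-architecture definitions come first */

      	/*
      	 * Generic page table manipulation helpers, formerly hosted in
      	 * asm-generic/pgtable.h, live below this point.
      	 */

      	#endif /* _LINUX_PGTABLE_H */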
      Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Cain <bcain@codeaurora.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Chris Zankel <chris@zankel.net>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Greentime Hu <green.hu@gmail.com>
      Cc: Greg Ungerer <gerg@linux-m68k.org>
      Cc: Guan Xuetao <gxt@pku.edu.cn>
      Cc: Guo Ren <guoren@kernel.org>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Helge Deller <deller@gmx.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Ley Foon Tan <ley.foon.tan@intel.com>
      Cc: Mark Salter <msalter@redhat.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Matt ...
  3. 04 Jun, 2020 3 commits
  4. 15 May, 2020 1 commit
    • kvm: add halt-polling cpu usage stats · cb953129
      David Matlack authored
      
      
      Two new stats for exposing halt-polling cpu usage:
      halt_poll_success_ns
      halt_poll_fail_ns
      
      The sum of these two stats is thus the total cpu time spent polling. "success"
      means the VCPU polled until a virtual interrupt was delivered. "fail"
      means the VCPU had to schedule out (either because the maximum poll time
      was reached or it needed to yield the CPU).
      
      To avoid touching every arch's kvm_vcpu_stat struct, only update and
      export halt-polling cpu usage stats if we're on x86.
      
      Exporting cpu usage as a u64 in nanoseconds means the counters will overflow
      after ~500 years, which seems reasonably large.
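      A minimal sketch of the accounting (the stat field names are the ones
      named above; the helper and its arguments are illustrative, not a
      quote of the patch):

      	/* Accumulate time spent polling into the new counters.  'waited'
      	 * means the vcpu ended up scheduling out, i.e. the poll failed;
      	 * otherwise a virtual interrupt arrived while polling.
      	 */
      	static void update_halt_poll_stats(struct kvm_vcpu *vcpu, u64 poll_ns,
      					   bool waited)
      	{
      		if (waited)
      			vcpu->stat.halt_poll_fail_ns += poll_ns;
      		else
      			vcpu->stat.halt_poll_success_ns += poll_ns;
      	}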
      Signed-off-by: David Matlack <dmatlack@google.com>
      Signed-off-by: Jon Cargille <jcargill@google.com>
      Reviewed-by: Jim Mattson <jmattson@google.com>
      
      Message-Id: <20200508182240.68440-1-jcargill@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  5. 13 May, 2020 2 commits
    • KVM: MIPS: use true,false for bool variable · 04146f22
      Jason Yan authored
      
      
      Fix the following coccicheck warning:
      
      arch/mips/kvm/mips.c:82:1-28: WARNING: Assignment of 0/1 to bool
      variable
      arch/mips/kvm/mips.c:88:1-28: WARNING: Assignment of 0/1 to bool
      variable
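      The fix itself is mechanical; a before/after sketch (the variable name
      below is illustrative, not the identifier flagged in mips.c):

      	/* before: integer literal assigned to a bool */
      	static bool example_flag = 1;

      	/* after: use the bool literal */
      	static bool example_flag = true;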
      Signed-off-by: Jason Yan <yanaijie@huawei.com>
      Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
    • kvm: Replace vcpu->swait with rcuwait · da4ad88c
      Davidlohr Bueso authored
      The use of any sort of waitqueue (simple or regular) for
      waiting/waking vcpus has always been overkill and semantically
      wrong. Because the wait is per-vcpu (and it is the vcpu itself
      that blocks), there is only ever a single waiter, so there is
      no need for any sort of queue.
      
      As such, make use of the rcuwait primitive, with the following
      considerations:
      
        - rcuwait already provides the proper barriers that serialize
        concurrent waiter and waker.
      
        - Task wakeup is done in rcu read critical region, with a
        stable task pointer.
      
        - Because there is no concurrency among waiters, we need
        not worry about rcuwait_wait_event() calls corrupting
        the wait->task. As a consequence, this saves the locking
        done in swait when modifying the queue. This also applies
        to per-vcore wait for powerpc kvm-hv.
      
      The x86 tscdeadline_latency test mentioned in 8577370f
      ("KVM: Use simple waitqueue for vcpu->wq") shows that, on average,
      latency is reduced by around 15-20% with this change.
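      A sketch of what the two sides reduce to with rcuwait (the wait
      condition and helper names are illustrative; the three-argument
      rcuwait_wait_event() with a task state is assumed here):

      	#include <linux/rcuwait.h>

      	/* Blocking side: a single waiter, so no queue and no queue lock. */
      	static void vcpu_block_sketch(struct kvm_vcpu *vcpu)
      	{
      		rcuwait_wait_event(&vcpu->wait,			/* was a swait queue head */
      				   vcpu_has_wakeup_event(vcpu),	/* illustrative condition */
      				   TASK_INTERRUPTIBLE);
      	}

      	/* Waking side: resolves the stable task pointer under RCU. */
      	static void vcpu_wake_sketch(struct kvm_vcpu *vcpu)
      	{
      		rcuwait_wake_up(&vcpu->wait);
      	}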
      
      Cc: Paul Mackerras <paulus@ozlabs.org>
      Cc: kvmarm@lists.cs.columbia.edu
      Cc: linux-mips@vger.kernel.org
      Reviewed-by: Marc Zyngier <maz@kernel.org>
      Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
      Message-Id: <20200424054837.5138-6-dave@stgolabs.net>
      [Avoid extra logic changes. - Paolo]
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  6. 21 Apr, 2020 2 commits
  7. 31 Mar, 2020 1 commit
  8. 16 Mar, 2020 3 commits
  9. 05 Feb, 2020 2 commits
  10. 27 Jan, 2020 5 commits
  11. 24 Jan, 2020 5 commits
  12. 05 Aug, 2019 1 commit
  13. 04 Jun, 2019 1 commit
    • KVM: Directly return result from kvm_arch_check_processor_compat() · f257d6dc
      Sean Christopherson authored
      
      
      Add a wrapper to invoke kvm_arch_check_processor_compat() so that the
      boilerplate ugliness of checking virtualization support on all CPUs is
      hidden from the arch specific code.  x86's implementation in particular
      is quite heinous, as it unnecessarily propagates the out-param pattern
      into kvm_x86_ops.
      
      While the x86 specific issue could be resolved solely by changing
      kvm_x86_ops, make the change for all architectures as returning a value
      directly is prettier and technically more robust, e.g. s390 doesn't set
      the out param, which could lead to subtle breakage in the (highly
      unlikely) scenario where the out-param was not pre-initialized by the
      caller.
      
      Opportunistically annotate svm_check_processor_compat() with __init.
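      The shape of the change, sketched (wrapper name and return values are
      illustrative):

      	/* The arch hook now returns its result directly... */
      	int kvm_arch_check_processor_compat(void)
      	{
      		return 0;	/* or a negative errno if virtualization is unusable */
      	}

      	/* ...and a small common-code wrapper adapts it to the void-pointer
      	 * out-param signature expected of an on_each_cpu() callback.
      	 */
      	static void check_processor_compat(void *rtn)
      	{
      		*(int *)rtn = kvm_arch_check_processor_compat();
      	}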
      Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Reviewed-by: Cornelia Huck <cohuck@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  14. 28 May, 2019 1 commit
    • KVM: s390: Do not report unusabled IDs via KVM_CAP_MAX_VCPU_ID · a86cb413
      Thomas Huth authored
      
      
      KVM_CAP_MAX_VCPU_ID currently always reports KVM_MAX_VCPU_ID on all
      architectures. However, on s390x, the number of usable CPUs is determined
      at runtime - it depends on the features of the machine the code is
      running on. Since we are using the vcpu_id as an index into the SCA
      structures that are defined by the hardware (see e.g. the sca_add_vcpu()
      function), it is not only the number of CPUs that is limited by the
      hardware, but also the range of IDs that we can use.
      Thus KVM_CAP_MAX_VCPU_ID must be determined at runtime on s390x, too.
      So the handling of KVM_CAP_MAX_VCPU_ID has to be moved from the common
      code into the architecture specific code, and on s390x we have to return
      the same value here as for KVM_CAP_MAX_VCPUS.
      This problem was discovered with the kvm_create_max_vcpus selftest.
      With this change applied, the selftest now passes on s390x, too.
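      Schematically, the s390x KVM_CHECK_EXTENSION handling then reports the
      same runtime-dependent limit for both capabilities (a sketch, with a
      hypothetical helper standing in for the SCA-dependent calculation):

      	static int check_extension_sketch(long ext)
      	{
      		int r = 0;

      		switch (ext) {
      		case KVM_CAP_MAX_VCPUS:
      		case KVM_CAP_MAX_VCPU_ID:	/* moved here from common code */
      			r = sca_max_vcpu_ids();	/* hypothetical helper */
      			break;
      		}
      		return r;
      	}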
      Reviewed-by: Andrew Jones <drjones@redhat.com>
      Reviewed-by: Cornelia Huck <cohuck@redhat.com>
      Reviewed-by: David Hildenbrand <david@redhat.com>
      Signed-off-by: Thomas Huth <thuth@redhat.com>
      Message-Id: <20190523164309.13345-9-thuth@redhat.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
  15. 04 Feb, 2019 1 commit
    • MIPS: MemoryMapID (MMID) Support · c8790d65
      Paul Burton authored
      
      
      Introduce support for using MemoryMapIDs (MMIDs) as an alternative to
      Address Space IDs (ASIDs). The major difference between the two is that
      MMIDs are global - ie. an MMID uniquely identifies an address space
      across all coherent CPUs. In contrast ASIDs are non-global per-CPU IDs,
      wherein each address space is allocated a separate ASID for each CPU
      upon which it is used. This global namespace allows a new GINVT
      instruction to be used to globally invalidate TLB entries associated with a
      particular MMID across all coherent CPUs in the system, removing the
      need for IPIs to invalidate entries with separate ASIDs on each CPU.
      
      The allocation scheme used here is largely borrowed from arm64 (see
      arch/arm64/mm/context.c). In essence we maintain a bitmap to track
      available MMIDs, and MMIDs in active use at the time of a rollover to a
      new MMID version are preserved in the new version. The allocation scheme
      requires efficient 64 bit atomics in order to perform reasonably, so
      this support depends upon CONFIG_GENERIC_ATOMIC64=n (ie. currently it
      will only be included in MIPS64 kernels).
      
      The first, and currently only, available CPU with support for MMIDs is
      the MIPS I6500. This CPU supports 16 bit MMIDs, and so for now we cap
      our MMIDs to 16 bits wide in order to prevent the bitmap from growing to
      an absurd size if a future CPU implements the 32 bit MMIDs that the
      architecture manuals recommend.
      
      When MMIDs are in use we also make use of the GINVT instruction, which is
      available due to the global nature of MMIDs. By executing a sequence of
      GINVT & SYNC 0x14 instructions we can avoid the overhead of an IPI to
      each remote CPU in many cases. One complication is that GINVT will
      invalidate wired entries (in all cases apart from type 0, which targets
      the entire TLB). In order to avoid GINVT invalidating any wired TLB
      entries we set up, we make sure to create those entries using a reserved
      MMID (0) that we never associate with any address space.
      
      Also of note is that KVM will require further work in order to support
      MMIDs & GINVT, since KVM is involved in allocating IDs for guests & in
      configuring the MMU. That work is not part of this patch, so for now
      when MMIDs are in use KVM is disabled.
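      As a rough illustration of the arm64-style scheme described above (not
      the actual arch/mips code; names, locking and the rollover bookkeeping
      are heavily simplified):

      	#define NUM_MMIDS	(1UL << 16)		/* I6500: 16-bit MMIDs */

      	static u64 mmid_version = NUM_MMIDS;		/* generation in the upper bits */
      	static DECLARE_BITMAP(mmid_map, NUM_MMIDS);	/* bit 0 reserved for wired entries */

      	static u64 sketch_new_mmid(void)
      	{
      		u64 id = find_first_zero_bit(mmid_map, NUM_MMIDS);

      		if (id >= NUM_MMIDS) {
      			/* Rollover: bump the version, rebuild the bitmap from
      			 * the MMIDs still live on other CPUs, flush the rest.
      			 */
      			mmid_version += NUM_MMIDS;
      			bitmap_zero(mmid_map, NUM_MMIDS);
      			__set_bit(0, mmid_map);		/* keep MMID 0 reserved */
      			id = find_first_zero_bit(mmid_map, NUM_MMIDS);
      		}
      		__set_bit(id, mmid_map);
      		return mmid_version | id;
      	}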
      Signed-off-by: Paul Burton <paul.burton@mips.com>
      Cc: linux-mips@vger.kernel.org
  16. 14 Dec, 2018 2 commits
    • kvm: introduce manual dirty log reprotect · 2a31b9db
      Paolo Bonzini authored
      
      
      There are two problems with KVM_GET_DIRTY_LOG.  First, and less important,
      it can take kvm->mmu_lock for an extended period of time.  Second, its user
      can actually see many false positives in some cases.  The latter is due
      to a benign race like this:
      
        1. KVM_GET_DIRTY_LOG returns a set of dirty pages and write protects
           them.
        2. The guest modifies the pages, causing them to be marked dirty.
        3. Userspace actually copies the pages.
        4. KVM_GET_DIRTY_LOG returns those pages as dirty again, even though
           they were not written to since (3).
      
      This is especially a problem for large guests, where the time between
      (1) and (3) can be substantial.  This patch introduces a new
      capability which, when enabled, makes KVM_GET_DIRTY_LOG not
      write-protect the pages it returns.  Instead, userspace has to
      explicitly clear the dirty log bits just before using the content
      of the page.  The new KVM_CLEAR_DIRTY_LOG ioctl can also operate at a
      64-page granularity rather than requiring a full memslot to be synced;
      this way, the mmu_lock is taken for small amounts of time, and
      only a small amount of time will pass between write protection
      of pages and the sending of their content.
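      From userspace the new flow looks roughly like this (a sketch; it
      assumes the new manual-reprotect capability has already been enabled
      on the VM and omits error handling):

      	#include <linux/kvm.h>
      	#include <sys/ioctl.h>

      	/* Fetch the dirty log (which no longer write-protects), copy the
      	 * dirty pages out, then re-protect only the range that was consumed.
      	 */
      	static void sync_dirty_pages(int vm_fd, __u32 slot, void *bitmap)
      	{
      		struct kvm_dirty_log log = {
      			.slot = slot,
      			.dirty_bitmap = bitmap,
      		};
      		struct kvm_clear_dirty_log clear = {
      			.slot = slot,
      			.first_page = 0,
      			.num_pages = 64,	/* 64-page granularity */
      			.dirty_bitmap = bitmap,
      		};

      		ioctl(vm_fd, KVM_GET_DIRTY_LOG, &log);
      		/* ... copy out the pages marked dirty in 'bitmap' ... */
      		ioctl(vm_fd, KVM_CLEAR_DIRTY_LOG, &clear);
      	}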
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • kvm: rename last argument to kvm_get_dirty_log_protect · 8fe65a82
      Paolo Bonzini authored
      
      
      Once manual dirty log reprotection is enabled, kvm_get_dirty_log_protect's
      pointer argument will always be false on exit, because no TLB flush is needed
      until the manual re-protection operation.  Rename it from "is_dirty" to "flush",
      which more accurately tells the caller what they have to do with it.
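      The resulting prototype, as described (signature sketch only):

      	/* The out-param now answers "does the caller need to flush the TLB?"
      	 * rather than "was anything dirty?".
      	 */
      	int kvm_get_dirty_log_protect(struct kvm *kvm, struct kvm_dirty_log *log,
      				      bool *flush);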
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  17. 31 Oct, 2018 1 commit
  18. 20 Jun, 2018 1 commit
  19. 01 Jun, 2018 1 commit
  20. 14 May, 2018 1 commit
  21. 14 Dec, 2017 3 commits