linux/arch/x86/kvm/svm
Peter Xu ff5a983cbb KVM: X86: Don't track dirty for KVM_SET_[TSS_ADDR|IDENTITY_MAP_ADDR]
Originally, there were three code paths that could dirty a page without
a vcpu context on x86:

  - init_rmode_identity_map
  - init_rmode_tss
  - kvmgt_rw_gpa

init_rmode_identity_map() and init_rmode_tss() will be set up on the
destination VM no matter what (and the guest cannot even see them), so
it does not make sense to track dirty pages for them at all.

To do this, allow __x86_set_memory_region() to return the userspace
address of the newly allocated region to the caller.  Then, in both
functions, write directly to that userspace address instead of going
through the kvm_write_*() APIs.
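
As an illustration, a minimal sketch of how the TSS path can consume the
returned address (the helper names, constants and signatures below follow
the VMX code from memory, so treat them as assumptions of this sketch
rather than verbatim from the patch; "addr" stands for the guest physical
address passed in via KVM_SET_TSS_ADDR):

  /* __x86_set_memory_region() now returns the HVA of the private slot,
   * or an ERR_PTR() value on failure, instead of a plain error code. */
  void __user *ua = __x86_set_memory_region(kvm, TSS_PRIVATE_MEMSLOT,
                                            addr, PAGE_SIZE * 3);
  if (IS_ERR(ua))
          return PTR_ERR(ua);

  /* Write the IOPB offset straight through the HVA: no vcpu context is
   * required and no dirty bit is set for the private memslot. */
  u16 data = TSS_BASE_SIZE + TSS_REDIRECTION_SIZE;
  if (__copy_to_user(ua + TSS_IOPB_BASE_OFFSET, &data, sizeof(data)))
          return -EFAULT;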

Another trivial change: we no longer need to explicitly clear the
identity page table root in init_rmode_identity_map(), because the
whole page is overwritten with 4M huge page entries anyway, as the
sketch below shows.
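
A rough sketch of the loop that makes the clear redundant (the loop
bound and PTE flag names are recalled from the VMX/MMU headers and
should be read as illustrative, not authoritative):

  int i;
  u32 tmp;

  /* Every PDE slot in the root page is overwritten with a 4M (PSE)
   * identity-mapped entry, so zeroing the page first buys nothing. */
  for (i = 0; i < PT32_ENT_PER_PAGE; i++) {
          tmp = (i << 22) + (_PAGE_PRESENT | _PAGE_RW | _PAGE_USER |
                             _PAGE_ACCESSED | _PAGE_DIRTY | _PAGE_PSE);
          if (__copy_to_user(uaddr + i * sizeof(tmp), &tmp, sizeof(tmp)))
                  return -EFAULT;
  }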

Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20201001012044.5151-4-peterx@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-11-15 09:49:12 -05:00
avic.c KVM: X86: Don't track dirty for KVM_SET_[TSS_ADDR|IDENTITY_MAP_ADDR] 2020-11-15 09:49:12 -05:00
nested.c KVM: x86: Return bool instead of int for CR4 and SREGS validity checks 2020-11-15 09:49:08 -05:00
pmu.c KVM: x86/pmu: Tweak kvm_pmu_get_msr to pass 'struct msr_data' in 2020-06-01 04:26:08 -04:00
sev.c ARM: 2020-10-23 11:17:56 -07:00
svm.c KVM: x86: Move vendor CR4 validity check to dedicated kvm_x86_ops hook 2020-11-15 09:49:07 -05:00
svm.h KVM: x86: Move vendor CR4 validity check to dedicated kvm_x86_ops hook 2020-11-15 09:49:07 -05:00
vmenter.S x86/kvm/svm: Move guest enter/exit into .noinstr.text 2020-07-09 07:08:41 -04:00