x86/VMX: sanitize rIP before re-entering guest

... to prevent guest user mode arranging for a guest crash (due to
failed VM entry). (On the AMD system I checked, hardware is doing
exactly the canonicalization being added here.)
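For illustration only (not part of the patch), here is a minimal sketch of
what "canonical" means here, assuming the usual 48-bit virtual address width
(i.e. VADDR_BITS == 48): bits 63..47 must all be copies of bit 47, and
canonicalization is simply a sign extension from that bit -- the same
expression the second hunk below applies to regs->rip in 64-bit mode.

    /* Illustrative sketch, not Xen code: canonical-address check and fixup,
     * assuming a 48-bit virtual address width (VADDR_BITS taken to be 48). */
    #include <stdbool.h>
    #include <stdint.h>

    #define VADDR_BITS 48

    static bool is_canonical(uint64_t addr)
    {
        /* Canonical iff sign-extending from bit (VADDR_BITS - 1) is a no-op. */
        return ((int64_t)(addr << (64 - VADDR_BITS)) >> (64 - VADDR_BITS)) ==
               (int64_t)addr;
    }

    static uint64_t canonicalize(uint64_t addr)
    {
        /* Shift bit (VADDR_BITS - 1) up to bit 63, then arithmetic-shift it
         * back, replicating it into bits 63..VADDR_BITS (this relies on an
         * arithmetic right shift for signed types, as the patch does too). */
        return (uint64_t)((int64_t)(addr << (64 - VADDR_BITS)) >>
                          (64 - VADDR_BITS));
    }

For example, a non-canonical rIP of 0x0000800000000000 would be folded to
0xffff800000000000 by this sign extension.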
Note that fixing this in an architecturally correct way would be quite
a bit more involved: Making the x86 instruction emulator check all
branch targets for validity, plus dealing with invalid rIP resulting
from update_guest_eip() or incoming directly during a VM exit. The only
way to get the latter right would be by not having hardware do the
injection.
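Purely as a hypothetical illustration of what such a branch-target check
might look like (branch_target_ok() is invented for this write-up and does
not exist in Xen):

    /* Hypothetical: the check the emulator would have to apply to every
     * branch destination before committing it to rIP -- canonical in
     * 64-bit mode, fitting the 32-bit eip otherwise -- raising #GP(0)
     * instead of branching when it fails. */
    static bool branch_target_ok(uint64_t dst, unsigned int mode)
    {
        return mode == 8 ? is_canonical(dst)   /* helper from the sketch above */
                         : dst == (uint32_t)dst;
    }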
Note further that there are two early returns from
vmx_vmexit_handler(): One (through vmx_failed_vmentry()) leads to
domain_crash() anyway, and the other covers real mode only and can
neither occur with a non-canonical rIP nor result in an altered rIP,
so we don't need to force those paths through the checking logic.

This is XSA-170.

Reported-by: 刘令 <liuling-it@360.cn>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
Tested-by: Andrew Cooper <andrew.cooper3@citrix.com>

--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -2968,7 +2968,7 @@ static int vmx_handle_apic_write(void)
 void vmx_vmexit_handler(struct cpu_user_regs *regs)
 {
     unsigned long exit_qualification, exit_reason, idtv_info, intr_info = 0;
-    unsigned int vector = 0;
+    unsigned int vector = 0, mode;
     struct vcpu *v = current;
 
     __vmread(GUEST_RIP, &regs->rip);
@@ -3566,6 +3566,41 @@ void vmx_vmexit_handler(struct cpu_user_
 out:
     if ( nestedhvm_vcpu_in_guestmode(v) )
         nvmx_idtv_handling();
+
+    /*
+     * VM entry will fail (causing the guest to get crashed) if rIP (and
+     * rFLAGS, but we don't have an issue there) doesn't meet certain
+     * criteria. As we must not allow less than fully privileged mode to have
+     * such an effect on the domain, we correct rIP in that case (accepting
+     * this not being architecturally correct behavior, as the injected #GP
+     * fault will then not see the correct [invalid] return address).
+     * And since we know the guest will crash, we crash it right away if it
+     * already is in most privileged mode.
+     */
+    mode = vmx_guest_x86_mode(v);
+    if ( mode == 8 ? !is_canonical_address(regs->rip)
+                   : regs->rip != regs->_eip )
+    {
+        struct segment_register ss;
+
+        gprintk(XENLOG_WARNING, "Bad rIP %lx for mode %u\n", regs->rip, mode);
+
+        vmx_get_segment_register(v, x86_seg_ss, &ss);
+        if ( ss.attr.fields.dpl )
+        {
+            __vmread(VM_ENTRY_INTR_INFO, &intr_info);
+            if ( !(intr_info & INTR_INFO_VALID_MASK) )
+                hvm_inject_hw_exception(TRAP_gp_fault, 0);
+            /* Need to fix rIP nevertheless. */
+            if ( mode == 8 )
+                regs->rip = (long)(regs->rip << (64 - VADDR_BITS)) >>
+                            (64 - VADDR_BITS);
+            else
+                regs->rip = regs->_eip;
+        }
+        else
+            domain_crash(v->domain);
+    }
 }
 
 void vmx_vmenter_helper(const struct cpu_user_regs *regs)
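For clarity, a small standalone condensation of the decision the added block
makes (again illustrative only, reusing the hypothetical is_canonical() and
canonicalize() helpers sketched above; the function and enum names are
invented for this write-up):

    /* mode is vmx_guest_x86_mode()'s result (8 = 64-bit mode) and ss_dpl is
     * SS.DPL, i.e. the guest's current privilege level. */
    enum rip_action { RIP_OK, RIP_FIXED_INJECT_GP, RIP_CRASH_DOMAIN };

    static enum rip_action sanitize_rip(uint64_t *rip, unsigned int mode,
                                        unsigned int ss_dpl)
    {
        bool bad = (mode == 8) ? !is_canonical(*rip) : *rip != (uint32_t)*rip;

        if ( !bad )
            return RIP_OK;
        if ( !ss_dpl )                    /* most privileged mode: just crash */
            return RIP_CRASH_DOMAIN;
        /* Less privileged: repair rIP; the caller injects #GP(0) unless an
         * event is already pending (VM_ENTRY_INTR_INFO already valid). */
        *rip = (mode == 8) ? canonicalize(*rip) : (uint32_t)*rip;
        return RIP_FIXED_INJECT_GP;
    }

The key design point, as the comment in the patch explains, is that only a
bad rIP arranged from a less than fully privileged context is silently
repaired; if the most privileged mode already has a bad rIP, the guest is
going to crash anyway, so domain_crash() is invoked right away.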