author     Natanael Copa <ncopa@alpinelinux.org>  2013-06-04 09:30:54 +0000
committer  Natanael Copa <ncopa@alpinelinux.org>  2013-06-04 09:30:54 +0000
commit     f6e99451d47fbe7cdb852f48dd11006808db52ae (patch)
tree       174b0e6a82ab19bb221109cadc326350e025a534 /main/xen/xsa52-4.2-unstable.patch
parent     0d259bc43cda35fc7d64c6de9bff0c679183657e (diff)
download   aports-f6e99451d47fbe7cdb852f48dd11006808db52ae.tar.bz2
           aports-f6e99451d47fbe7cdb852f48dd11006808db52ae.tar.xz
main/xen: security fixes (CVE-2013-2076,CVE-2013-2077,CVE-2013-2078)
ref #2044 ref #2049 ref #2054
Diffstat (limited to 'main/xen/xsa52-4.2-unstable.patch')
-rw-r--r--  main/xen/xsa52-4.2-unstable.patch  46
1 file changed, 46 insertions, 0 deletions
diff --git a/main/xen/xsa52-4.2-unstable.patch b/main/xen/xsa52-4.2-unstable.patch
new file mode 100644
index 000000000..14db8a8a7
--- /dev/null
+++ b/main/xen/xsa52-4.2-unstable.patch
@@ -0,0 +1,46 @@
+x86/xsave: fix information leak on AMD CPUs
+
+Just like for FXSAVE/FXRSTOR, XSAVE/XRSTOR also don't save/restore the
+last instruction and operand pointers as well as the last opcode if
+there's no pending unmasked exception (see CVE-2006-1056 and commit
+9747:4d667a139318).
+
+While the FXSR solution sits in the save path, I prefer to have this in
+the restore path because there the handling is simpler (namely in the
+context of the pending changes to properly save the selector values for
+32-bit guest code).
+
+Also this is using FFREE instead of EMMS, as it doesn't seem unlikely
+that in the future we may see CPUs with x87 and SSE/AVX but no MMX
+support. The goal here anyway is just to avoid an FPU stack overflow.
+I would have preferred to use FFREEP instead of FFREE (freeing two
+stack slots at once), but AMD doesn't document that instruction.
+
+This is CVE-2013-2076 / XSA-52.
+
+Signed-off-by: Jan Beulich <jbeulich@suse.com>
+
+--- a/xen/arch/x86/xstate.c
++++ b/xen/arch/x86/xstate.c
+@@ -78,6 +78,21 @@ void xrstor(struct vcpu *v, uint64_t mas
+
+     struct xsave_struct *ptr = v->arch.xsave_area;
+
++    /*
++     * AMD CPUs don't save/restore FDP/FIP/FOP unless an exception
++     * is pending. Clear the x87 state here by setting it to fixed
++     * values. The hypervisor data segment can be sometimes 0 and
++     * sometimes new user value. Both should be ok. Use the FPU saved
++     * data block as a safe address because it should be in L1.
++     */
++    if ( (mask & ptr->xsave_hdr.xstate_bv & XSTATE_FP) &&
++         !(ptr->fpu_sse.fsw & 0x0080) &&
++         boot_cpu_data.x86_vendor == X86_VENDOR_AMD )
++        asm volatile ( "fnclex\n\t"        /* clear exceptions */
++                       "ffree %%st(7)\n\t" /* clear stack tag */
++                       "fildl %0"          /* load to clear state */
++                       : : "m" (ptr->fpu_sse) );
++
+     asm volatile (
+         ".byte " REX_PREFIX "0x0f,0xae,0x2f"
+         :
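
Aside on the mechanism (not part of the commit above): the patch description says that on AMD CPUs the last x87 instruction/operand pointers (FIP/FDP) and opcode (FOP) are only saved/restored when an unmasked exception is pending, which is what the !(ptr->fpu_sse.fsw & 0x0080) test checks (bit 7 of the x87 status word is the Error Summary bit). The standalone user-space sketch below is a minimal, hypothetical illustration of the same fnclex/ffree/fildl scrub in isolation; it assumes x86-64 with GCC/Clang extended inline asm, and the fxsave_area struct, the dump_pointers() helper and the scratch operand are inventions for the demo, not Xen code. On AMD hardware FXSAVE itself may skip the FIP/FDP fields when no exception is pending (the very quirk being worked around), so the values printed can differ by vendor.

/*
 * Hypothetical sketch (not from the Xen patch): run an x87 load so the
 * last-instruction/operand pointers refer to local code/data, dump the
 * FXSAVE image, apply the fnclex/ffree/fildl scrub, and dump it again.
 */
#include <stdint.h>
#include <stdio.h>

/* Minimal view of the 512-byte FXSAVE image (layout per the Intel/AMD manuals). */
struct fxsave_area {
    uint16_t fcw, fsw;           /* control word, status word (bit 7 = ES)  */
    uint8_t  ftw, rsvd;          /* abridged tag word                       */
    uint16_t fop;                /* last x87 opcode                         */
    uint64_t fip;                /* last instruction pointer (offset 8)     */
    uint64_t fdp;                /* last data/operand pointer (offset 16)   */
    uint32_t mxcsr, mxcsr_mask;
    uint8_t  regs[512 - 32];     /* ST/MM and XMM register images           */
} __attribute__((aligned(16)));  /* FXSAVE requires 16-byte alignment       */

static void dump_pointers(const char *tag, const struct fxsave_area *fx)
{
    /* NB: on AMD, FXSAVE may leave fip/fdp untouched when no unmasked
     * exception is pending; that is exactly the quirk the patch addresses. */
    printf("%s: FIP=%#llx FDP=%#llx FOP=%#x\n", tag,
           (unsigned long long)fx->fip, (unsigned long long)fx->fdp,
           (unsigned int)fx->fop);
}

int main(void)
{
    static struct fxsave_area fx;
    static const int32_t scratch = 1;

    /* Touch the x87 so FIP/FDP refer to this code and to 'scratch'. */
    __asm__ volatile ("fildl %0\n\t"
                      "fstp %%st(0)"       /* pop to keep the stack balanced */
                      : : "m" (scratch));
    __asm__ volatile ("fxsave %0" : "=m" (fx));
    dump_pointers("before scrub", &fx);

    /* The sequence the patch issues before XRSTOR on AMD CPUs. */
    __asm__ volatile ("fnclex\n\t"         /* clear pending exceptions       */
                      "ffree %%st(7)\n\t"  /* free one tag so the load below */
                                           /* cannot overflow the x87 stack  */
                      "fildl %0\n\t"       /* reload from known-safe memory  */
                      "fstp %%st(0)"       /* pop again (demo only)          */
                      : : "m" (scratch));
    __asm__ volatile ("fxsave %0" : "=m" (fx));
    dump_pointers("after scrub", &fx);
    return 0;
}

The design choice mirrors the patch: FFREE empties one register tag so the following FILDL cannot overflow the x87 stack, and the load targets memory that should already be hot in cache (in Xen, the vcpu's own FPU save block), so the lingering pointers end up referring to hypervisor-controlled data rather than another guest's addresses.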