
Commit 510603f

pa1gupta authored and gregkh committed
x86/vmscape: Add conditional IBPB mitigation
Commit 2f8f173 upstream.

VMSCAPE is a vulnerability that exploits insufficient branch predictor
isolation between a guest and a userspace hypervisor (like QEMU). Existing
mitigations already protect kernel/KVM from a malicious guest. Userspace
can additionally be protected by flushing the branch predictors after a
VMexit.

Since it is the userspace that consumes the poisoned branch predictors,
conditionally issue an IBPB after a VMexit and before returning to
userspace. Workloads that frequently switch between hypervisor and
userspace will incur the most overhead from the new IBPB.

This new IBPB is not integrated with the existing IBPB sites. For
instance, a task can use the existing speculation control prctl() to
get an IBPB at context switch time. With this implementation, the IBPB
is doubled up: one at context switch and another before running
userspace.

The intent is to integrate and optimize these cases post-embargo.

[ dhansen: elaborate on suboptimal IBPB solution ]

Suggested-by: Dave Hansen <dave.hansen@linux.intel.com>
Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Reviewed-by: Dave Hansen <dave.hansen@linux.intel.com>
Reviewed-by: Borislav Petkov (AMD) <bp@alien8.de>
Acked-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
1 parent: d83e611 · commit: 510603f
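For readers skimming the diffs below, the mitigation reduces to a two-step handshake: the VMexit path arms a per-CPU flag, and the exit-to-userspace path consumes it by issuing one IBPB. The following minimal userspace C sketch shows that pattern; it is illustrative only, with ibpb_stub() and a plain bool standing in for the real MSR write and the kernel's per-CPU variable.

    #include <stdbool.h>
    #include <stdio.h>

    /* Stand-in for the real IBPB (in the kernel, a WRMSR to MSR_IA32_PRED_CMD). */
    static void ibpb_stub(void)
    {
            puts("IBPB: branch predictors flushed");
    }

    /* Models the per-CPU x86_ibpb_exit_to_user flag (a single CPU shown). */
    static bool ibpb_exit_to_user;

    /* VMexit path: arm the flag (in KVM this runs with preemption disabled). */
    static void after_vmexit(void)
    {
            ibpb_exit_to_user = true;
    }

    /* Exit-to-userspace path: consume the flag at most once per VMexit. */
    static void exit_to_user_prepare(void)
    {
            if (ibpb_exit_to_user) {
                    ibpb_stub();
                    ibpb_exit_to_user = false;
            }
    }

    int main(void)
    {
            after_vmexit();
            exit_to_user_prepare();   /* flushes */
            exit_to_user_prepare();   /* flag already cleared: no second flush */
            return 0;
    }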

File tree: 5 files changed (+27, −0 lines)

arch/x86/include/asm/cpufeatures.h
Lines changed: 1 addition & 0 deletions

@@ -492,6 +492,7 @@
 #define X86_FEATURE_TSA_SQ_NO         (21*32+11) /* AMD CPU not vulnerable to TSA-SQ */
 #define X86_FEATURE_TSA_L1_NO         (21*32+12) /* AMD CPU not vulnerable to TSA-L1 */
 #define X86_FEATURE_CLEAR_CPU_BUF_VM  (21*32+13) /* Clear CPU buffers using VERW before VMRUN */
+#define X86_FEATURE_IBPB_EXIT_TO_USER (21*32+14) /* Use IBPB on exit-to-userspace, see VMSCAPE bug */

 /*
  * BUG word(s)

arch/x86/include/asm/entry-common.h
Lines changed: 7 additions & 0 deletions

@@ -93,6 +93,13 @@ static inline void arch_exit_to_user_mode_prepare(struct pt_regs *regs,
          * 8 (ia32) bits.
          */
         choose_random_kstack_offset(rdtsc());
+
+        /* Avoid unnecessary reads of 'x86_ibpb_exit_to_user' */
+        if (cpu_feature_enabled(X86_FEATURE_IBPB_EXIT_TO_USER) &&
+            this_cpu_read(x86_ibpb_exit_to_user)) {
+                indirect_branch_prediction_barrier();
+                this_cpu_write(x86_ibpb_exit_to_user, false);
+        }
 }
 #define arch_exit_to_user_mode_prepare arch_exit_to_user_mode_prepare
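Note the order of the two conditions above: cpu_feature_enabled() compiles down to a statically patched branch, so on unaffected or unmitigated systems the per-CPU flag is never read at all, which is what the "Avoid unnecessary reads" comment is about. A rough userspace analogue, with a compile-time constant standing in for the patched feature check:

    #include <stdbool.h>
    #include <stdio.h>

    #define MITIGATION_ENABLED 0          /* stand-in for cpu_feature_enabled() */

    static bool need_flush = true;        /* stand-in for this_cpu_read(...) */

    int main(void)
    {
            /*
             * Short-circuit evaluation: with the left operand constant-false,
             * the compiler removes the flag read and the whole block, much as
             * the kernel's alternatives mechanism NOPs out disabled features.
             */
            if (MITIGATION_ENABLED && need_flush) {
                    puts("issue IBPB, clear flag");
                    need_flush = false;
            }
            return 0;
    }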

arch/x86/include/asm/nospec-branch.h
Lines changed: 2 additions & 0 deletions

@@ -530,6 +530,8 @@ void alternative_msr_write(unsigned int msr, u64 val, unsigned int feature)
                 : "memory");
 }

+DECLARE_PER_CPU(bool, x86_ibpb_exit_to_user);
+
 static inline void indirect_branch_prediction_barrier(void)
 {
         asm_inline volatile(ALTERNATIVE("", "call write_ibpb", X86_FEATURE_IBPB)
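The barrier itself is a patched call site: on CPUs with X86_FEATURE_IBPB the ALTERNATIVE inserts a call to write_ibpb, otherwise the site stays a NOP. Conceptually, the barrier boils down to writing PRED_CMD_IBPB (bit 0) to MSR_IA32_PRED_CMD (0x49). The sketch below is that conceptual MSR write only, not the kernel's write_ibpb (which is asm and also covers return-thunk details), and it can only execute at ring 0:

    /* Conceptual expansion only; assumes kernel (ring 0) context. */
    static inline void ibpb_sketch(void)
    {
            const unsigned int msr_pred_cmd = 0x49;        /* MSR_IA32_PRED_CMD */
            const unsigned long long pred_cmd_ibpb = 1ULL; /* IBPB is bit 0 */

            /* wrmsr takes the MSR index in ECX and the value in EDX:EAX. */
            asm volatile("wrmsr"
                         :
                         : "c" (msr_pred_cmd),
                           "a" ((unsigned int)pred_cmd_ibpb),
                           "d" ((unsigned int)(pred_cmd_ibpb >> 32))
                         : "memory");
    }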

arch/x86/kernel/cpu/bugs.c
Lines changed: 8 additions & 0 deletions

@@ -105,6 +105,14 @@ EXPORT_SYMBOL_GPL(x86_spec_ctrl_base);
 DEFINE_PER_CPU(u64, x86_spec_ctrl_current);
 EXPORT_PER_CPU_SYMBOL_GPL(x86_spec_ctrl_current);

+/*
+ * Set when the CPU has run a potentially malicious guest. An IBPB will
+ * be needed to before running userspace. That IBPB will flush the branch
+ * predictor content.
+ */
+DEFINE_PER_CPU(bool, x86_ibpb_exit_to_user);
+EXPORT_PER_CPU_SYMBOL_GPL(x86_ibpb_exit_to_user);
+
 u64 x86_pred_cmd __ro_after_init = PRED_CMD_IBPB;

 static u64 __ro_after_init x86_arch_cap_msr;
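The DEFINE_PER_CPU here provides the storage for the DECLARE_PER_CPU added to nospec-branch.h above, and the GPL export is what lets modular code such as kvm.ko write the flag. A minimal kernel-style sketch of that declare/define/use pattern, with a hypothetical flag name (needs <linux/percpu.h>):

    /* in a header, visible to every user of the flag */
    DECLARE_PER_CPU(bool, demo_flag);

    /* in exactly one .c file: provide the storage, export it for modules */
    DEFINE_PER_CPU(bool, demo_flag);
    EXPORT_PER_CPU_SYMBOL_GPL(demo_flag);

    static void demo_use(void)
    {
            this_cpu_write(demo_flag, true);        /* producer side */
            if (this_cpu_read(demo_flag))           /* consumer side */
                    this_cpu_write(demo_flag, false);
    }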

arch/x86/kvm/x86.c
Lines changed: 9 additions & 0 deletions

@@ -11145,6 +11145,15 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
         if (vcpu->arch.guest_fpu.xfd_err)
                 wrmsrq(MSR_IA32_XFD_ERR, 0);

+        /*
+         * Mark this CPU as needing a branch predictor flush before running
+         * userspace. Must be done before enabling preemption to ensure it gets
+         * set for the CPU that actually ran the guest, and not the CPU that it
+         * may migrate to.
+         */
+        if (cpu_feature_enabled(X86_FEATURE_IBPB_EXIT_TO_USER))
+                this_cpu_write(x86_ibpb_exit_to_user, true);
+
         /*
          * Consume any pending interrupts, including the possible source of
          * VM-Exit on SVM and any ticks that occur between VM-Exit and now.
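The comment's ordering constraint is worth spelling out: this_cpu_write() only tags the right CPU if the task cannot migrate between running the guest and setting the flag. An illustrative kernel-style sketch of the correct placement, where run_guest() is a hypothetical stand-in for the actual VM-entry/exit sequence:

    static void vmexit_path_sketch(void)
    {
            preempt_disable();
            run_guest();                                  /* hypothetical */
            this_cpu_write(x86_ibpb_exit_to_user, true);  /* same CPU as guest */
            preempt_enable();

            /*
             * Writing the flag here instead, after preempt_enable(), would
             * be wrong: the task may have migrated, tagging a CPU that never
             * ran the guest and leaving the guest CPU's predictors unflushed.
             */
    }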
